\section{Dataset details and preparations}
\label{sec:appendix-datasets}
\subsection{WikiBio}
The WikiBio corpus \citep{lebret2016neural} comprises $728,321$ biography-infobox pairs from English Wikipedia (from a 2015 dump). The task is to generate the first sentence of a biography, conditioned on the structured infobox. The average lengths of infoboxes and first sentences are $53.1$ and $26.1$ tokens, respectively. All tokens are lower-cased.
We associate a text segment with each key and each value, and a separate segment with the target sentence. Five relation types are defined: 1) from a key/value to its paired value/key; 2) from a key/value/target to every other key, unless the two belong to the same key-value pair; 3) from a key/value/target to every other value, unless the two belong to the same key-value pair; 4) from a key/value to the target (these are masked); 5) from any segment to itself.
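The five relation types above can be sketched as a pairwise relation-type matrix over segments. The following is a minimal illustration, not the paper's code; the segment indexing and relation ids are our own assumptions.

```python
# Hypothetical sketch: build the relation-type matrix for a key-value graph
# using the five relation types listed above. Ids and indexing are assumed.
KEY, VALUE, TARGET = "key", "value", "target"

# 1) paired key<->value, 2) to an unpaired key, 3) to an unpaired value,
# 4) key/value -> target (masked during training), 5) self.
PAIRED, TO_KEY, TO_VALUE, TO_TARGET_MASKED, SELF = 1, 2, 3, 4, 5

def relation_matrix(segments, pairs):
    """segments: list of segment types; pairs: set of (key_idx, value_idx)."""
    paired = {(i, j) for i, j in pairs} | {(j, i) for i, j in pairs}
    n = len(segments)
    rel = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                rel[i][j] = SELF
            elif (i, j) in paired:
                rel[i][j] = PAIRED
            elif segments[j] == KEY:
                rel[i][j] = TO_KEY
            elif segments[j] == VALUE:
                rel[i][j] = TO_VALUE
            elif segments[j] == TARGET and segments[i] in (KEY, VALUE):
                rel[i][j] = TO_TARGET_MASKED  # masked attention direction
    return rel
```

For a minimal infobox with one key-value pair and the target sentence, `relation_matrix([KEY, VALUE, TARGET], {(0, 1)})` marks the key-value pair as paired, the target-to-key/value entries as types 2 and 3, and the key/value-to-target entries as masked.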
\subsection{E2E}
The E2E challenge is based on a crowd-sourced dataset of 50k samples in the restaurant domain. The task is to generate a description of a restaurant from structured input in the form of key-value pairs. We used a graph structure identical to the one explained for the WikiBio dataset.
\subsection{AGENDA}
The AGENDA dataset \citep{GraphWriter} comprises $40,720$ scientific abstracts, each supplemented with a title and a matching knowledge graph. The knowledge graphs consist of entities and relations extracted from the corresponding abstracts. The dataset is split into $38,720$ training, $1,000$ validation, and $1,000$ test datapoints. The average lengths of titles and abstracts are $9.5$ and $141.2$ tokens, respectively. The task is to generate an abstract given the title and the associated knowledge graph as the source. The input graphs in this dataset can form more complex structures than those in WikiBio and E2E, which are limited to key-value pairs.
The knowledge graphs consist of a set of ``\textit{concepts}'' and a set of predicates connecting them. There are seven predicate categories: ``Used-for'', ``Feature-of'', ``Conjunction'', ``Part-of'', ``Evaluate-for'', ``Hyponym-for'', and ``Compare''. Each concept is also assigned a \textit{class}, out of five possible classes: ``Task'', ``Method'', ``Metric'', ``Material'', or ``Other Scientific Term''. We also incorporate the given title into the knowledge graph as a concept and assign it its own \textit{title-class}. This new concept has no predicates connecting it to the others.
We assign a text segment to each concept, each class, and the target (the abstract), for a total of three segment types. We also define twelve relation types, including the original seven predicate categories. The five supplementary relations are:
1) from a concept/class to its paired class/concept; 2) from a concept/class/target to every other concept, unless the two are already linked; 3) from a concept/class/target to every other class, unless the two are already linked; 4) from a concept/class to the target (these are masked); 5) from any segment to itself.
\subsection{WebNLG}
The WebNLG dataset, curated from DBpedia, consists of sets of (head, predicate, tail) tuples forming a graph structure, paired with a descriptive target text verbalising them. The dataset contains $25,298$ graph-text pairs, among which $9,674$ are distinct graphs (multiple targets per graph, produced by different crowdsourcing agents). Each graph contains between one and seven tuples. The test data spans fifteen domains, ten of which appear in the training data.
Both the nodes (heads and tails) and the predicates carry textual labels, and the predicates come from an open set. We therefore transform each graph into its Levi equivalent by allocating a new node per tuple, with the predicate as its text label.
This new graph has three types of text segments: nodes (original heads and tails), predicates (newly added nodes), and the target.
There are seven relation types in the new graph: 1) from a node segment to every predicate segment for which the node is a head; 2) from a node segment to every predicate segment for which the node is a tail; 3) from each node/predicate to all predicates, unless the two are linked by a tuple; 4) from each node to all other nodes (including those linked by a tuple), and from each predicate to all nodes unless the two are linked by a tuple; 5) from the target to all node segments; 6) from the target to all predicate segments; 7) from a segment to itself.
\section{Complete Experiment results}
Full results of our experiments are provided in Tables \ref{tab:fs-wikibio}, \ref{tab:fs-e2e}, \ref{tab:fs-webnlg}, and \ref{tab:fs-agenda} for the WikiBio, E2E, WebNLG, and AGENDA tasks, respectively. For WebNLG and E2E we used the official evaluation scripts to compute the scores. We used the standard ROUGE-1.5.5.pl script to calculate the ROUGE-4 scores.
\label{sec:appendix-fewshot}
\begin{table*}[h]
\centering
\begin{tabular}{||c | c c||}
\hline
Method & BLEU-4 & ROUGE-4\\
\hline\hline
StructureAware* & 44.89 \footnotesize{$\pm 0.33$} & 41.65 \footnotesize{$\pm 0.25$}\\
\hline
R2D2 & {\bf 46.23} \footnotesize{$\pm 0.15$} & 45.10 \footnotesize{$\pm 0.28$} \\
R2D2 (Flattened) & 45.62 \footnotesize{$\pm 0.73$} & {\bf 45.56} \footnotesize{$\pm 0.33$} \\
R2D2 (Scratch) & 44.33 \footnotesize{$\pm 0.53$} & 44.22 \footnotesize{$\pm 0.33$} \\
R2D2 (SF) & 44.01 \footnotesize{$\pm 0.65$} & 43.89 \footnotesize{$\pm 0.38$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (70k / 12\%) & 44.40 \footnotesize{$\pm 0.12$} & 44.25 \footnotesize{$\pm 0.07$} \\
R2D2 (5k / 1\%) & 37.19 \footnotesize{$\pm 0.15$} & 39.08 \footnotesize{$\pm 0.43$} \\
R2D2 (.5k / 0.1\%) & 33.00 \footnotesize{$\pm 0.82$} & 34.10 \footnotesize{$\pm 0.42$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Flattened) (70k / 12\%) & 44.44 \footnotesize{$\pm 0.18$} & 44.21 \footnotesize{$\pm 0.08$} \\
R2D2 (Flattened) (5k / 1\%) & 36.39 \footnotesize{$\pm 0.21$} & 39.31 \footnotesize{$\pm 0.56$} \\
R2D2 (Flattened) (.5k / 0.1\%) & 30.53 \footnotesize{$\pm 1.09$} & 31.60 \footnotesize{$\pm 0.98$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 - Scratch)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Scratch) (70k / 12\%) & 43.16 \footnotesize{$\pm 0.34$} & 42.67 \footnotesize{$\pm 0.44$}\\
R2D2 (Scratch) (5k / 1\%) & 34.48 \footnotesize{$\pm 0.51$} & 33.40 \footnotesize{$\pm 0.79$}\\
R2D2 (Scratch) (.5k / 0.1\%) & 7.94 \footnotesize{$\pm 0.71$} & 28.00 \footnotesize{$\pm 0.82$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 - Scratch and Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (SF) (70k / 12\%) & 42.96 \footnotesize{$\pm 0.25$} & 41.87 \footnotesize{$\pm 0.31$}\\
R2D2 (SF) (5k / 1\%) & 33.18 \footnotesize{$\pm 0.34$} & 29.81 \footnotesize{$\pm 0.50$}\\
R2D2 (SF) (.5k / 0.1\%) & 7.67 \footnotesize{$\pm 0.78$} & 31.00 \footnotesize{$\pm 0.80$}\\
\hline
\end{tabular}
\caption{\label{tab:fs-wikibio}Results on the WikiBio task. State-of-the-art~\citep{liu2018table} is notated by *.}
\end{table*}
\begin{table*}[h]
\centering
\renewcommand\tabcolsep{4pt}
\begin{tabular}{||c | c c c c c||}
\hline
Method & BLEU-4 & NIST & METEOR & ROUGE-L & CIDEr\\
\hline\hline
PragInfo* & \bf 68.60 & \bf 8.73 & 45.25 & \bf 70.82 & 2.37 \\
\hline
R2D2 & 67.70 \footnotesize{$\pm 0.64$} & 8.68 \footnotesize{$\pm 0.10$} & \bf 45.85 \footnotesize{$\pm 0.18$} & 70.44 \footnotesize{$\pm 0.32$} & \bf 2.38 \footnotesize{$\pm 0.04$}\\
R2D2 (Flattened) & 65.73 \footnotesize{$\pm 0.97$} & 8.48 \footnotesize{$\pm 0.13$} & 45.29 \footnotesize{$\pm 0.24$} & 70.21 \footnotesize{$\pm 0.73$} & 2.32 \footnotesize{$\pm 0.05$}\\
R2D2 (Scratch) & 52.25 \footnotesize{$\pm 2.45$} & 7.31 \footnotesize{$\pm 0.16$} & 37.40 \footnotesize{$\pm 1.05$} & 60.13 \footnotesize{$\pm 1.76$} & 1.34 \footnotesize{$\pm 0.08$}\\
R2D2 (SF) & 52.25 \footnotesize{$\pm 3.56$} & 7.24 \footnotesize{$\pm 0.21$} & 36.66 \footnotesize{$\pm 2.10$} & 58.97 \footnotesize{$\pm 1.91$} & 1.31 \footnotesize{$\pm 0.07$}\\
\hline
\multicolumn{6}{||c||}{Reduced training set (R2D2) (number of samples / percent of training data)} \\ \hline
R2D2 (4k / 10\%) & 67.01 \footnotesize{$\pm 0.65$} & 8.65 \footnotesize{$\pm 0.06$} & 45.26 \footnotesize{$\pm 0.30$} & 69.48 \footnotesize{$\pm 0.60$} & 2.29 \footnotesize{$\pm 0.06$}\\
R2D2 (1k / 2\%) & 63.93 \footnotesize{$\pm 1.13$} & 8.46 \footnotesize{$\pm 0.11$} & 43.90 \footnotesize{$\pm 1.18$} & 66.95 \footnotesize{$\pm 0.76$} & 2.15 \footnotesize{$\pm 0.07$}\\
R2D2 (.5k / 1\%) & 64.26 \footnotesize{$\pm 1.22$} & 8.48 \footnotesize{$\pm 0.10$} & 43.48 \footnotesize{$\pm 0.85$} & 66.74 \footnotesize{$\pm 1.14$} & 2.12 \footnotesize{$\pm 0.05$}\\
\hline
\multicolumn{6}{||c||}{Reduced training set (R2D2 Flattened) (number of samples / percent of training data)} \\ \hline
R2D2 (Flattened) (4k / 10\%) & 65.15 \footnotesize{$\pm 0.74$} & 8.64 \footnotesize{$\pm 0.05$} & 44.38 \footnotesize{$\pm 0.91$} & 68.71 \footnotesize{$\pm 1.18$} & 2.22 \footnotesize{$\pm 0.08$}\\
R2D2 (Flattened) (1k / 2\%) & 63.76 \footnotesize{$\pm 0.49$} & 8.50 \footnotesize{$\pm 0.08$} & 43.35 \footnotesize{$\pm 0.51$} & 66.92 \footnotesize{$\pm 0.98$} & 1.63 \footnotesize{$\pm 0.84$}\\
R2D2 (Flattened) (.5k / 1\%) & 63.02 \footnotesize{$\pm 0.42$} & 8.55 \footnotesize{$\pm 0.02$} & 43.44 \footnotesize{$\pm 0.15$} & 66.79 \footnotesize{$\pm 0.50$} & 2.17 \footnotesize{$\pm 0.04$}\\
\hline
\multicolumn{6}{||c||}{Reduced training set (R2D2 Scratch) (number of samples / percent of training data)} \\ \hline
R2D2 (Scratch) (4k / 10\%) & 52.86 \footnotesize{$\pm 1.05$} & 7.14 \footnotesize{$\pm 0.12$} & 37.69 \footnotesize{$\pm 1.41$} & 60.00 \footnotesize{$\pm 0.48$} & 1.15 \footnotesize{$\pm 0.02$}\\
R2D2 (Scratch) (1k / 2\%) & 52.72 \footnotesize{$\pm 0.21$} & 7.21 \footnotesize{$\pm 0.15$} & 34.78 \footnotesize{$\pm 0.93$} & 57.33 \footnotesize{$\pm 0.74$} & 1.26 \footnotesize{$\pm 0.03$}\\
R2D2 (Scratch) (.5k / 1\%) & 49.41 \footnotesize{$\pm 0.91$} & 7.09 \footnotesize{$\pm 0.20$} & 34.95 \footnotesize{$\pm 1.29$} & 56.40 \footnotesize{$\pm 0.90$} & 1.21 \footnotesize{$\pm 0.07$}\\
\hline
\multicolumn{6}{||c||}{Reduced training set (R2D2 Scratch and Flattened) (number of samples / percent of training data)} \\ \hline
R2D2 (SF) (4k / 10\%) & 51.93 \footnotesize{$\pm 1.25$} & 7.26 \footnotesize{$\pm 0.18$} & 37.19 \footnotesize{$\pm 1.04$} & 58.88 \footnotesize{$\pm 0.79$} & 1.20 \footnotesize{$\pm 0.05$}\\
R2D2 (SF) (1k / 2\%) & 49.29 \footnotesize{$\pm 0.99$} & 6.64 \footnotesize{$\pm 0.16$} & 31.85 \footnotesize{$\pm 2.41$} & 57.08 \footnotesize{$\pm 1.28$} & 1.03 \footnotesize{$\pm 0.09$}\\
R2D2 (SF) (.5k / 1\%) & 45.29 \footnotesize{$\pm 0.37$} & 6.72 \footnotesize{$\pm 0.27$} & 33.42 \footnotesize{$\pm 1.98$} & 54.34 \footnotesize{$\pm 2.03$} & 1.07 \footnotesize{$\pm 0.06$}\\
\hline
\end{tabular}
\caption{\label{tab:fs-e2e}Results on the E2E task. State-of-the-art~\citep{shen2019pragmatically} is notated by *.}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{||c | c c||}
\hline
Method & BLEU-4 & METEOR\\
\hline\hline
DataTuner* & 52.9 & {\bf 41.9}\\
\hline
R2D2 & {\bf 53.77} \footnotesize{$\pm 0.86$} & 41.30 \footnotesize{$\pm 0.36$}\\
R2D2 (Flattened) & 53.26 \footnotesize{$\pm 1.41$} & 40.04 \footnotesize{$\pm 0.47$}\\
R2D2 (Scratch) & 33.25 \footnotesize{$\pm 2.16$} & 25.42 \footnotesize{$\pm 0.17$}\\
R2D2 (SF) & 32.62 \footnotesize{$\pm 0.07$} & 24.40 \footnotesize{$\pm 0.01$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (1.8k / 10\%) & 52.98 \footnotesize{$\pm 0.40$} & 40.80 \footnotesize{$\pm 0.42$}\\
R2D2 (360 / 2\%) & 47.58 \footnotesize{$\pm 0.41$} & 38.12 \footnotesize{$\pm 0.20$}\\
R2D2 (180 / 1\%) & 42.86 \footnotesize{$\pm 0.74$} & 36.23 \footnotesize{$\pm 1.09$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Flattened) (1.8k / 10\%) & 50.20 \footnotesize{$\pm 0.34$} & 39.00 \footnotesize{$\pm 0.40$}\\
R2D2 (Flattened) (360 / 2\%) & 46.52 \footnotesize{$\pm 0.62$} & 36.97 \footnotesize{$\pm 1.21$}\\
R2D2 (Flattened) (180 / 1\%) & 43.10 \footnotesize{$\pm 0.64$} & 35.19 \footnotesize{$\pm 0.41$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Scratch)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Scratch) (1.8k / 10\%) & 30.09 \footnotesize{$\pm 1.43$} & 23.30 \footnotesize{$\pm 0.35$}\\
R2D2 (Scratch) (360 / 2\%) & 22.59 \footnotesize{$\pm 1.79$} & 18.00 \footnotesize{$\pm 1.29$}\\
R2D2 (Scratch) (180 / 1\%) & 19.93 \footnotesize{$\pm 2.25$} & 16.70 \footnotesize{$\pm 1.98$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Scratch and Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (SF) (1.8k / 10\%) & 29.20 \footnotesize{$\pm 1.02$} & 22.62 \footnotesize{$\pm 0.29$}\\
R2D2 (SF) (360 / 2\%) & 19.18 \footnotesize{$\pm 1.56$} & 16.71 \footnotesize{$\pm 0.95$}\\
R2D2 (SF) (180 / 1\%) & 18.64 \footnotesize{$\pm 2.03$} & 16.23 \footnotesize{$\pm 1.75$}\\
\hline
\end{tabular}
\caption{\label{tab:fs-webnlg}Results on the WebNLG task. State-of-the-art~\citep{harkous2020have} is notated by *.}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{||c | c c ||}
\hline
Method & BLEU-4 & METEOR\\
\hline\hline
{GraphWriter*} & 14.3 \footnotesize{$\pm 1.01$}& 18.8 \footnotesize{$\pm 0.28$}\\
\hline
R2D2 & {\bf 17.30} \footnotesize{$\pm 0.20$} & {\bf 21.82} \footnotesize{$\pm 0.15$} \\
R2D2 (Flattened) & 16.17 \footnotesize{$\pm 0.72$} & 20.36 \footnotesize{$\pm 0.38$} \\
R2D2 (Scratch) & 10.60 \footnotesize{$\pm 0.22$} & 15.71 \footnotesize{$\pm 0.09$} \\
R2D2 (SF) & 9.87 \footnotesize{$\pm 0.75$} & 14.63 \footnotesize{$\pm 0.18$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (5k / 13\%) & 16.83 \footnotesize{$\pm 0.18$} & 21.71 \footnotesize{$\pm 0.24$} \\
R2D2 (1k / 2.6\%) & 14.60 \footnotesize{$\pm 0.59$} & 19.03 \footnotesize{$\pm 0.80$} \\
R2D2 (.5k / 1\%) & 13.00 \footnotesize{$\pm 0.38$} & 17.59 \footnotesize{$\pm 0.19$} \\
R2D2 (.1k / 0.26\%) & 10.61 \footnotesize{$\pm 1.77$} & 15.56 \footnotesize{$\pm 1.20$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Flattened) (5k / 13\%) & 16.37 \footnotesize{$\pm 0.28$} & 20.68 \footnotesize{$\pm 0.33$} \\
R2D2 (Flattened) (1k / 2.6\%) & 14.31 \footnotesize{$\pm 0.82$} & 19.15 \footnotesize{$\pm 0.97$} \\
R2D2 (Flattened) (.5k / 1\%) & 12.58 \footnotesize{$\pm 0.14$} & 16.84 \footnotesize{$\pm 0.02$} \\
R2D2 (Flattened) (.1k / 0.26\%) & 9.38 \footnotesize{$\pm 0.21$} & 15.00 \footnotesize{$\pm 1.27$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Scratch)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (Scratch) (5k / 13\%) & 9.45 \footnotesize{$\pm 0.31$} & 14.69 \footnotesize{$\pm 0.22$}\\
R2D2 (Scratch) (1k / 2.6\%) & 3.21 \footnotesize{$\pm 1.21$} & 8.56 \footnotesize{$\pm 0.10$}\\
R2D2 (Scratch) (.5k / 1\%) & 2.96 \footnotesize{$\pm 1.09$} & 7.01 \footnotesize{$\pm 0.39$}\\
R2D2 (Scratch) (.1k / 0.26\%) & 2.70 \footnotesize{$\pm 0.79$} & 6.77 \footnotesize{$\pm 0.76$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2 Scratch and Flattened)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (SF) (5k / 13\%) & 7.98 \footnotesize{$\pm 0.12$} & 14.01 \footnotesize{$\pm 0.55$}\\
R2D2 (SF) (1k / 2.6\%) & 3.18 \footnotesize{$\pm 0.69$} & 8.81 \footnotesize{$\pm 0.72$}\\
R2D2 (SF) (.5k / 1\%) & 3.12 \footnotesize{$\pm 0.90$} & 8.43 \footnotesize{$\pm 0.48$}\\
R2D2 (SF) (.1k / 0.26\%) & 2.31 \footnotesize{$\pm 1.79$} & 5.70 \footnotesize{$\pm 1.25$}\\
\hline
\end{tabular}
\caption{\label{tab:fs-agenda}Results on the AGENDA task. State-of-the-art~\citep{GraphWriter} is notated by *.}
\end{table*}
\subsection{Pretraining and few shot learning}
Our ablation studies show that, without the pretrained model, experiments on relatively small datasets such as AGENDA, E2E, and WebNLG performed significantly worse, while the performance gap is much smaller for larger datasets such as WikiBio. Note that, for experimental efficiency, we used the same large architecture for all tasks; smaller models for the smaller tasks might have avoided overfitting and improved performance.
The baseline model for the WebNLG task, the DataTuner model~\citep{harkous2020have}, also uses a pretrained GPT-2~\citep{radford2019language} model. GPT-2 is fully autoregressive, so its information flow is unidirectional, even over the source input. In contrast, our XLNet-based model has bidirectional information flow over the source input.
The results of our few-shot experiments show that the pretrained model can achieve relatively good results with reduced training data. More specifically, training on about $10\%$--$13\%$ of the data from WikiBio (70k samples), E2E (4k samples), AGENDA (5k samples), and WebNLG (1.8k samples) led to BLEU-4 score reductions of only 3.9\%, 4\%, 2.7\%, and 0.3\%, respectively, validating that low-resource data-to-text tasks can utilize transfer learning from large corpora.
\subsection{Joint Encoding of Graph and Text}
To understand the impact of our model explicitly encoding the graphical structure, we conducted ablation experiments by flattening the structured input into a linear sequence, which can be viewed as a regular Seq2Seq task. The loss of structural information resulted in $0.2\%$, $2.9\%$, $6.5\%$, and $0.9\%$ BLEU-4 score decreases on the WikiBio, E2E, AGENDA, and WebNLG tasks, respectively. As expected, AGENDA and WebNLG, which have more complex graph structures, showed a larger drop in performance when flattened than WikiBio did. More importantly, our ablation study shows that utilizing the graph structure boosts performance independent of whether pretrained weights are used, and the two have a complementary effect on the performance gain.
One surprising observation was that the graphical structure benefited the relatively simple E2E task too, where perhaps the structural information helped offset the data scarcity (compared to WikiBio).
We further investigated this phenomenon by repeating the few-shot experiments on the flattened version of the model. The performance gain from utilizing the graphical structure was indeed more significant in several few-shot experiments; for example, on WebNLG, reducing the training data to $1,800$ samples led to 1.4\% and 5.7\% drops in BLEU score for the main variant and the flattened version, respectively.
\subsection{End-to-end Learning}
State-of-the-art models for certain tasks involve multiple steps such as delexicalization and second-pass rescoring, as mentioned earlier~\cite{shen2019pragmatically,moryossef2019step,e2e}. Our model, on the other hand, is purely end-to-end, without delexicalization, copy mechanisms, or other interventions. Consequently, all model components, including those devoted to textual description and structural information, can be trained simultaneously, affording an opportunity to achieve better overall performance.
\subsection{Computational complexity}
The computational cost of R2D2 is similar to that of regular transformers, requiring $O(n^2)$ dot-product computations per layer, where $n$ is the total length of all text segments concatenated.
Consequently, the framework accommodates graphs with fewer than $n_\mathrm{max}/n_\mathrm{seg}$ nodes, where $n_\mathrm{max}$ is the maximum permitted sequence length of the transformer and $n_\mathrm{seg}$ is the average segment length. Accordingly, R2D2 is better suited to tasks involving smaller graphs or local subgraphs.
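As a back-of-the-envelope illustration of this bound (the numbers below are hypothetical, not the paper's configuration):

```python
# Hypothetical sketch of the node-count bound n_max / n_seg described above.
def max_nodes(n_max: int, n_seg_avg: float) -> int:
    """Largest graph (in nodes) whose concatenated segments fit the context."""
    return int(n_max // n_seg_avg)

# e.g. a 1024-token context with ~8-token average segments fits ~128 nodes
print(max_nodes(1024, 8.0))
```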
\subsection{Tasks}
We evaluated our model on a range of data-to-text generation tasks: WikiBio~\citep{lebret2016neural}, with $728$k infobox-biography pairs from English Wikipedia; the E2E challenge~\citep{e2e}, with $50$k examples in the restaurant reservation domain; AGENDA~\citep{GraphWriter}, with $40$k scientific abstracts, each supplemented with a title and a matching knowledge graph; and WebNLG~\citep{webnlg}, containing $25$k graph-text pairs, whose graphs consist of tuples from DBpedia and whose text verbalizes the graph. On all datasets we repeated our experiments (including the training process) three times and summarized our results by reporting the mean and variance of each metric.
The AGENDA and WebNLG tasks involve more complex source graphs, presented as general sets of (head, predicate, tail) tuples. The two differ, however, in their sets of permitted predicates: AGENDA comes with a closed set of seven predicate types, while WebNLG predicates are arbitrary texts forming an open set. For both tasks we grant one text segment per node. For the predicates, we use the original seven AGENDA predicate types as distinct categories, while for WebNLG we perform the Levi graph transformation and allocate additional text segments for the predicates, connecting them to their associated head and tail segments as illustrated in Figure~\ref{fig:graph_transform}. See Appendix~\ref{sec:appendix-datasets} for task-specific data/graph preparation details.
\subsection{Baselines}
We compared our method with the current state-of-the-art models on each task (chosen based on the BLEU-4 score) and performed ablation studies to understand the impact of different components of our model.
We performed ablation studies with four variants of our model. In the ``scratch'' variant, the model architecture was kept the same as XLNet, but the parameters were initialized with random values and learned from scratch. In the ``flattened'' variant, the input structure was converted into a sequence by separating the different components of the relations -- heads, tails, predicates, keys, values -- with special tokens. In the ``SF'' variant, we both trained from scratch and flattened the input. Finally, regular R2D2 is our proposed variant, utilizing both the pretrained weights and the graphical structure.
The current state-of-the-art model on WikiBio is the Structure Aware model introduced by \citet{liu2018table}. Their method flattens the input into two parallel sequences of keys and values, which are then processed in a Seq2Seq framework where the LSTM decoder is augmented with a dual attention mechanism: two attention scores, one per sequence, are computed and summed.
The current state-of-the-art model on E2E is the Pragmatically Informative model of \citet{shen2019pragmatically}, a Seq2Seq model with delexicalized input. During inference, multiple scores are combined in a beam search.
The state-of-the-art model on AGENDA, GraphWriter~\citep{GraphWriter}, uses an RNN encoder to map each text label into a fixed-dimension vector; these vectors are then fed to a GNN that processes the graph. A decoder uses the GNN output as context and generates the target sequence; it can also access the input text via a copy mechanism.
The current state-of-the-art for WebNLG is the recent DataTuner model proposed by \citet{harkous2020have}. They fine-tune the pretrained GPT-2 model~\citep{radford2019language}, which is shared between the encoder and the decoder. The input to the model is a flattened version of the graph.
\subsection{Results}
\begin{table}[h]
\hspace{-0.3cm}
\centering
\begin{tabular}{||c | c c||}
\hline
Method & BLEU-4 & ROUGE-4\\
\hline\hline
StructureAware* & 44.89 \footnotesize{$\pm 0.33$} & 41.65 \footnotesize{$\pm 0.25$}\\
\hline
R2D2 & {\bf 46.23} \footnotesize{$\pm 0.15$} & 45.10 \footnotesize{$\pm 0.28$} \\
R2D2 (Flattened) & 45.62 \footnotesize{$\pm 0.73$} & {\bf 45.56} \footnotesize{$\pm 0.33$} \\
R2D2 (Scratch) & 44.33 \footnotesize{$\pm 0.53$} & 44.22 \footnotesize{$\pm 0.33$} \\
R2D2 (SF) & 44.01 \footnotesize{$\pm 0.65$} & 43.89 \footnotesize{$\pm 0.38$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (70k / 12\%) & 44.40 \footnotesize{$\pm 0.12$} & 44.25 \footnotesize{$\pm 0.07$} \\
R2D2 (5k / 1\%) & 37.19 \footnotesize{$\pm 0.15$} & 39.08 \footnotesize{$\pm 0.43$} \\
R2D2 (.5k / 0.1\%) & 33.00 \footnotesize{$\pm 0.82$} & 34.10 \footnotesize{$\pm 0.42$} \\
\hline
\end{tabular}
\caption{\label{tab:wikibio}Results on the WikiBio task. State-of-the-art~\citep{liu2018table} is notated by *.}
\end{table}
The results of our experiments on WikiBio are reported in Table~\ref{tab:wikibio}, showing that our model outperforms the state-of-the-art~\citep{liu2018table} by a considerable margin on both BLEU-4 and ROUGE-4 scores.
The results for the E2E experiments are reported in Table~\ref{tab:e2e}, showing that our model is comparable to the state-of-the-art~\cite{shen2019pragmatically} model; we observe small improvements in METEOR and CIDEr, with slight reductions in BLEU-4, NIST, and ROUGE-L.
On the WebNLG task, as reported in Table~\ref{tab:webnlg}, our model performs comparably to the state-of-the-art model~\citep{harkous2020have}, achieving a small improvement in BLEU-4 and a slight reduction in METEOR.
Our results on the AGENDA task, reported in Table~\ref{tab:agenda}, show that our model outperforms the baseline (GraphWriter)~\citep{GraphWriter} model by a considerable margin.
Full results of our few-shot experiments, including all variants of R2D2, are available in Appendix~\ref{sec:appendix-fewshot}.
\begin{table*}[h]
\centering
\renewcommand\tabcolsep{4pt}
\begin{tabular}{||c | c c c c c||}
\hline
Method & BLEU-4 & NIST & METEOR & ROUGE-L & CIDEr\\
\hline\hline
PragInfo* & \bf 68.60 & \bf 8.73 & 45.25 & \bf 70.82 & 2.37 \\
\hline
R2D2 & 67.70 \footnotesize{$\pm 0.64$} & 8.68 \footnotesize{$\pm 0.10$} & \bf 45.85 \footnotesize{$\pm 0.18$} & 70.44 \footnotesize{$\pm 0.32$} & \bf 2.38 \footnotesize{$\pm 0.04$}\\
R2D2 (Flattened) & 65.73 \footnotesize{$\pm 0.97$} & 8.48 \footnotesize{$\pm 0.13$} & 45.29 \footnotesize{$\pm 0.24$} & 70.21 \footnotesize{$\pm 0.73$} & 2.32 \footnotesize{$\pm 0.05$}\\
R2D2 (Scratch) & 52.25 \footnotesize{$\pm 2.45$} & 7.31 \footnotesize{$\pm 0.16$} & 37.40 \footnotesize{$\pm 1.05$} & 60.13 \footnotesize{$\pm 1.76$} & 1.34 \footnotesize{$\pm 0.08$}\\
R2D2 (SF) & 52.25 \footnotesize{$\pm 3.56$} & 7.24 \footnotesize{$\pm 0.21$} & 36.66 \footnotesize{$\pm 2.10$} & 58.97 \footnotesize{$\pm 1.91$} & 1.31 \footnotesize{$\pm 0.07$}\\
\hline
\multicolumn{6}{||c||}{Reduced training set (R2D2) (number of samples / percent of training data)} \\ \hline
R2D2 (4k / 10\%) & 67.01 \footnotesize{$\pm 0.65$} & 8.65 \footnotesize{$\pm 0.06$} & 45.26 \footnotesize{$\pm 0.30$} & 69.48 \footnotesize{$\pm 0.60$} & 2.29 \footnotesize{$\pm 0.06$}\\
R2D2 (1k / 2\%) & 63.93 \footnotesize{$\pm 1.13$} & 8.46 \footnotesize{$\pm 0.11$} & 43.90 \footnotesize{$\pm 1.18$} & 66.95 \footnotesize{$\pm 0.76$} & 2.15 \footnotesize{$\pm 0.07$}\\
R2D2 (.5k / 1\%) & 64.26 \footnotesize{$\pm 1.22$} & 8.48 \footnotesize{$\pm 0.10$} & 43.48 \footnotesize{$\pm 0.85$} & 66.74 \footnotesize{$\pm 1.14$} & 2.12 \footnotesize{$\pm 0.05$}\\
\hline
\end{tabular}
\caption{\label{tab:e2e}Results on the E2E task. State-of-the-art~\citep{shen2019pragmatically} is notated by *.}
\end{table*}
\begin{table}[h]
\centering
\begin{tabular}{||c | c c||}
\hline
Method & BLEU-4 & METEOR\\
\hline\hline
DataTuner* & 52.9 & {\bf 41.9}\\
\hline
R2D2 & {\bf 53.77} \footnotesize{$\pm 0.86$} & 41.30 \footnotesize{$\pm 0.36$}\\
R2D2 (Flattened) & 53.26 \footnotesize{$\pm 1.41$} & 40.04 \footnotesize{$\pm 0.47$}\\
R2D2 (Scratch) & 33.25 \footnotesize{$\pm 2.16$} & 25.42 \footnotesize{$\pm 0.17$}\\
R2D2 (SF) & 32.62 \footnotesize{$\pm 0.07$} & 24.40 \footnotesize{$\pm 0.01$}\\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (1.8k / 10\%) & 52.98 \footnotesize{$\pm 0.40$} & 40.80 \footnotesize{$\pm 0.42$}\\
R2D2 (360 / 2\%) & 47.58 \footnotesize{$\pm 0.41$} & 38.12 \footnotesize{$\pm 0.20$}\\
R2D2 (180 / 1\%) & 42.86 \footnotesize{$\pm 0.74$} & 36.23 \footnotesize{$\pm 1.09$}\\
\hline
\end{tabular}
\caption{\label{tab:webnlg}Results on the WebNLG task. State-of-the-art~\citep{harkous2020have} is notated by *.}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{||c | c c ||}
\hline
Method & BLEU-4 & METEOR\\
\hline\hline
{GraphWriter*} & 14.3 \footnotesize{$\pm 1.01$}& 18.8 \footnotesize{$\pm 0.28$}\\
\hline
R2D2 & {\bf 17.30} \footnotesize{$\pm 0.20$} & {\bf 21.82} \footnotesize{$\pm 0.15$} \\
R2D2 (Flattened) & 16.17 \footnotesize{$\pm 0.72$} & 20.36 \footnotesize{$\pm 0.38$} \\
R2D2 (Scratch) & 10.60 \footnotesize{$\pm 0.22$} & 15.71 \footnotesize{$\pm 0.09$} \\
R2D2 (SF) & 9.87 \footnotesize{$\pm 0.75$} & 14.63 \footnotesize{$\pm 0.18$} \\
\hline
\multicolumn{3}{||c||}{Reduced training set (R2D2)} \\
\multicolumn{3}{||c||}{(number of samples / percent of training data)} \\ \hline
R2D2 (5k / 13\%) & 16.83 \footnotesize{$\pm 0.18$} & 21.71 \footnotesize{$\pm 0.24$} \\
R2D2 (1k / 2.6\%) & 14.60 \footnotesize{$\pm 0.59$} & 19.03 \footnotesize{$\pm 0.80$} \\
R2D2 (.5k / 1\%) & 13.00 \footnotesize{$\pm 0.38$} & 17.59 \footnotesize{$\pm 0.19$} \\
R2D2 (.1k / 0.26\%) & 10.61 \footnotesize{$\pm 1.77$} & 15.56 \footnotesize{$\pm 1.20$} \\
\hline
\end{tabular}
\caption{\label{tab:agenda}Results on the AGENDA task. State-of-the-art~\citep{GraphWriter} is notated by *.}
\end{table}
\subsection{Overview and task definition}
\begin{figure*}[t!]
\centering
\begin{subfigure}[c]{0.25\textwidth}
\includegraphics[width=\textwidth]{fig2aa}
\caption{}
\label{fig:graph_original_format}
\end{subfigure}%
\hspace{5mm}
\begin{subfigure}[c]{0.53\textwidth}
\includegraphics[width=\textwidth]{fig2}
\caption{}
\label{fig:graph_transformed_format}
\end{subfigure}
\caption{The plot on the left (a) depicts an example graph in its original format, where the predicates can have text labels from an open set. The plot on the right (b) shows the transformed version, where we depict only the relations originating from one specific node: ``Clyde F.C.''. The new graph is a network of text segments, each segment being either of type ``node'' or ``predicate'', colored in blue and pink respectively.}
\label{fig:graph_transform}
\end{figure*}
A standard graph structure can be described as a collection of (head, predicate, tail) tuples, where the heads and tails collectively form the nodes of the graph and the predicates correspond to the edge labels.
The scope of our work involves graphs whose nodes, and possibly edges, are annotated with text labels.
Broadly, our goal is to infer the text associated with one or more target nodes, given the rest of the information in the graph. For example, in D2T tasks the goal is to infer the text that describes a given input graph. In this case, we regard the target text as the label for an extra dummy node. In general, the target nodes could be any subset of nodes in the graph.
The input graphs for different tasks may vary widely in format and characteristics. Often, we need to transform them into a structure that is compatible with our framework (or is more efficient).
If the predicates are text sequences from an open set (or a large closed set), then we convert the input graph into its equivalent Levi graph~\citep{levi1942finite}, similar to the preprocessing step also used by \citet{beck2018graph}. This is done by adding a new node for each tuple, with the predicate as the text associated to the new node, effectively breaking each original tuple (head, predicate, tail) into two new tuples: (head, ``\textit{head-to-predicate}'', predicate) and (predicate, ``\textit{predicate-to-tail}'', tail). We refer to Figure~\ref{fig:graph_transform} for an illustration of such a transformation.
After our transformation, every node, including the newly added ones, corresponds to a text snippet, which we call a \textit{text segment}, and the edges between them (the new predicates) form a closed categorical set. For the rest of the paper, unless specified, we are always referring to the transformed version of the graph.
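To make the transformation concrete, here is a minimal sketch of the Levi-graph conversion described above; function and variable names are ours, not from the paper, while the segment-type and relation labels follow the text:

```python
def to_levi_graph(tuples):
    """Convert (head, predicate, tail) triples whose predicates come from
    an open set of text labels into a Levi graph: every predicate becomes
    its own text-segment node, and the edge labels form a closed set."""
    segments = []   # list of (text, segment_type)
    edges = []      # list of (src_index, relation, dst_index)
    index = {}      # node text -> index into `segments`

    def node(text):
        if text not in index:
            index[text] = len(segments)
            segments.append((text, "node"))
        return index[text]

    for head, predicate, tail in tuples:
        h, t = node(head), node(tail)
        p = len(segments)                        # a fresh node per tuple
        segments.append((predicate, "predicate"))
        edges.append((h, "head-to-predicate", p))
        edges.append((p, "predicate-to-tail", t))
    return segments, edges
```

Note that head and tail nodes are shared across tuples, while each predicate occurrence becomes its own node, exactly as in Figure~\ref{fig:graph_transform}.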
More formally, given a set of $N_X$ text segments $X=\{\sS_i: 1 \le i \le N_X\}$ as source, our goal is to predict the set of $N_Y$ target text segments $Y=\{\sS_i: N_X < i \le N_X+N_Y\}$.
Each text segment is itself a sequence of tokens, where the $t$-th token for the $i$-th segment is notated as $\sS_{i,t}$.
Each segment also has a categorical segment type (out of $K$ possible types), notated as $B=(b_1, ..., b_{N_X+N_Y})$. For example, one can allocate two segment types ``\textit{node}'' and ``\textit{predicate}'', differentiating segments that correspond to the original nodes from the ones added to replace the original predicates in the Levi graph transformation.
We are also given a closed set of predicates $Q$ (either original predicates or new ones introduced after the graph transformation), and a set of tuples:
\begin{equation*}
\begin{split}
T = \{&(\sS_{\mathrm{head}_i}, q_i, \sS_{\mathrm{tail}_i}): 1 \le i \le N_T;\\
&\sS_{\mathrm{head}_i}, \sS_{\mathrm{tail}_i} \in X \cup Y; q_i \in Q\}.\\
\end{split}
\end{equation*}
We define an additional set of relations $L$, that supplements $Q$ to cover segment pairs that do not form any tuple. $L$ can have members such as ``\textit{same-segment}'' for relations between each segment and itself, or ``\textit{unrelated}'' for two unrelated ones.
The underlying relational structure between the segments can then be represented by an adjacency matrix $G\in (Q\cup L) ^{(N_X+N_Y)\times(N_X+N_Y)}$, where:
\begin{equation*}
G_{i,j}=
\begin{cases}
q, & \text{if}\ \exists q :(\sS_i, q, \sS_j) \in T \\
l_{i,j} \in L, & \text{otherwise}.
\end{cases}
\end{equation*}
We should note that we assume each pair of segments has at most one predicate in $T$, and that the relations can be directional (i.e., it is possible that $G_{i,j} \neq G_{j,i}$).
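The relation matrix $G$ defined above can be filled in directly from the tuple set; a minimal sketch, using the illustrative supplementary labels ``\textit{same-segment}'' and ``\textit{unrelated}'' from the text (the helper name is ours):

```python
def build_relation_matrix(num_segments, edges):
    """Fill the matrix G over Q union L: the predicate label where a tuple
    (S_i, q, S_j) exists, and supplementary relations from L elsewhere
    ("same-segment" on the diagonal, "unrelated" otherwise)."""
    G = [["unrelated"] * num_segments for _ in range(num_segments)]
    for i in range(num_segments):
        G[i][i] = "same-segment"
    # relations are directional, so G[i][j] may differ from G[j][i]
    for src, relation, dst in edges:
        G[src][dst] = relation
    return G
```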
Our goal is to estimate $P(Y \mid X, B, G)$, where we call the set of variables ($X$, $Y$, $G$, $B$) the data schema. Characteristics of our model and its architecture are provided in Section \ref{sec:model}. We should also note that the choice of the supplementary relations $L$ and the segment types ($b_i$s) are design decisions and can vary across tasks. We provide details of such design choices for our experiments in Appendix \ref{sec:appendix-datasets}.
While D2T tasks only require inferring one target segment, our approach is more general and can infer multiple target segments, which may have other applications such as completing the missing nodes in knowledge graphs. Note, however, that due to the autoregressive nature of the generation process, each target segment can observe its relations only to the previously generated segments (in addition to the source segments).
\subsection{Model architecture}
\label{sec:model}
\begin{figure*}[t!]
\centering
\begin{subfigure}[c]{0.66\textwidth}
\includegraphics[width=\textwidth]{fig1}
\caption{}
\end{subfigure}%
\hspace{5mm}
\begin{subfigure}[c]{0.25\textwidth}
\includegraphics[width=\textwidth]{attention}
\caption{}
\end{subfigure}
\caption{(a) R2D2 Architecture based on the XLNet model. The input is the concatenation of all text segments, where the blue, pink and orange segments correspond to node, predicate and target segment types. The model predicts the target tokens autoregressively, using a two-stream self-attention mechanism: the \textit{query-stream}, colored in lighter orange, passes a query ``???'' token, while the darker colored \textit{content-stream} passes the target tokens (ground-truth tokens during training, or the previously decoded tokens during inference) as context for generating future tokens.
(b) The self-attention between segment pairs. Each row corresponds to one segment as query, attending to other segments (columns) as keys. The colors show if the key is visible (green) or hidden (red) to the query, while the cell labels indicate the relation type, which influences the segment-wise attention scores.}
\label{fig:model_arch}
\end{figure*}
We model $P(Y \mid X, B, G)$ with a conditional autoregressive model, parameterized by a transformer based on the XLNet-large architecture~\citep{XLNET}.
XLNet is a masked language model used for unsupervised pretraining, a variant in the family of BERT~\citep{devlin2018bert} style models, with the primary distinction of having an autoregressive decoding mechanism (as opposed to the parallel decoding in other BERT variants).
To enable this, two parallel streams are passed for each target token: one is given a special query symbol as input and predicts the real token, while the other is given the real token and processes it as context for decoding future tokens.
See Figure \ref{fig:model_arch} for an overview.
The autoregressive nature of XLNet is desirable for our framework, as we are also interested in using our model to decode target segments autoregressively. On the other hand, compared to entirely autoregressive models such as GPT-2~\citep{radford2019language}, XLNet has the advantage of applying a bidirectional attention over the source segments.
Another important advantage of XLNet is its use of relative segment embedding, in contrast to the absolute segment embedding employed in other BERT variants. To avoid confusion, we note that segment embedding and positional embedding are two different concepts; the former helps attending to the entirety of a segment (node), whereas the latter operates at the token level. Relative segment embedding is a more suitable approach for modeling texts in complex graphical structures, as explained further in Section~\ref{sec:attention}.
The input to our model consists of the concatenation of all source and target segments.
Any concatenation order is acceptable, as the model is invariant to the order of the segments; however, when there are multiple target segments, an order must be specified for the autoregressive generation process.
The model still respects segment boundaries and inter-segment relations, as these aspects are modeled through the relative attention mechanisms implemented in XLNet, which is further explained in Section~\ref{sec:attention}.
While we use XLNet's embedding table to embed the sequence tokens, we also learn another embedding table for the segment types. The segment type ($b_i$) and the token ($\sS_{i,t}$) embeddings are then summed to provide the final representation of each input token fed to the transformer.
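The summed input representation can be sketched as a pair of table lookups; this is a minimal illustration with our own helper name, assuming the tables are plain arrays:

```python
import numpy as np

def input_representation(token_ids, segment_type_ids, token_table, type_table):
    """Final per-token input: token embedding plus segment-type embedding.
    The token table comes from the pretrained model, while the segment-type
    table is newly learned."""
    return token_table[np.asarray(token_ids)] + type_table[np.asarray(segment_type_ids)]
```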
\subsection{Multi Segment Self-Attention}
\label{sec:attention}
Pretrained transformer models such as BERT usually allow multi-segment inputs (typically limited to two segments), to help with tasks such as question answering where the context passage and the question can be marked as separate segments.
BERT and most of its variants employ a global segment embedding for each of the two segment types (e.g. context and question) and add it to the input embedding to let the model know which tokens belong to which segment.
In contrast, XLNet uses a relative segment embedding approach, where the attention score between two positions is affected by whether they both belong to the same segment or not, enabling each transformer cell to decide which segment to attend to.
In this work, we extend the relative segment embedding to more than two segments, where the inter-segment attention depends on the type of relation between pairs of segments.
The attention score between any two positions has three components: 1) content-wise, 2) position-wise and 3) segment-wise. The scores of all three are summed to compute the final attention score. Figure~\ref{fig:attentiontypes} shows an example for the three attention components.
\begin{figure}[h]
\centering
\includegraphics[width=0.90\linewidth]{attentiontypes}
\caption{Multi-segment attention. The first row shows the structure of the text segments. The next three rows illustrate attention scores for each of the content-wise, segment-wise and position-wise components respectively. The shade of the green reflects the magnitude of the score.
Note that every token of each segment shares the same segment-wise score, while for the position-wise attention the three external segments share the same pattern of attention from right to left.}
\label{fig:attentiontypes}
\end{figure}
Suppose we are calculating the attention score for the query token $i$ at position $t_i$ of the segment $\sS_i$ and the key token $j$ at position $t_j$ of segment $\sS_j$. Further assume that, at the current layer of the transformer, the query and key vectors for $i$ and $j$ are $h_i^q$ and $h_j^k$ respectively.
The content-wise attention score is then calculated as:
\begin{equation}
a_{i,j}^\textrm{content} = \left( h^q_{i} + \phi_c \right)^T h^k_{j} \; ,
\end{equation}
where $\phi_c$ is a learnable bias parameter.
To calculate the segment-wise attention, we learn a table of relation-type embedding $H \in \mathbb{R}^{(|Q|+|L|) \times d}$, where each row corresponds to one relation category and $d$ is the embedding dimension.
Assuming $\sigma_i$ and $\sigma_j$ are the segment indices for $\sS_i$ and $\sS_j$, the segment-wise attention score between $i$ and $j$ is derived as:
\begin{equation}
a_{i,j}^\textrm{segment} = \left( h^q_{i} + \phi_b \right)^T H_{G_{\sigma_i, \sigma_j}}.
\end{equation}
The positional attention score is computed using a table of relative positional embeddings $R$, where $\tau(i, j)$ denotes the relative position between $t_i$ and $t_j$ and $R_{\tau(i, j)}$ is the corresponding embedding.
If the two positions are located in the same segment, then deriving $\tau(i,j)$ is straightforward, as it is simply the positional difference. However, if the two belong to different segments, then given the absence of an underlying global positioning, the relative position between the two requires a new definition.
We calculate the relative position between the tokens $i$ and $j$ as if their respective segments were concatenated as $[\sS_j, \sS_i]$.
More formally:
\begin{equation}
\tau(i, j)=
\begin{cases}
t_i - t_j, & \text{if}\ \sigma_i = \sigma_j\\
t_i-t_j+|\sS_j|, & \text{otherwise}.
\end{cases}
\end{equation}
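This case analysis transcribes directly into code; a hypothetical helper (names are ours), with the length of segment $\sS_j$ passed in explicitly:

```python
def relative_position(t_i, t_j, seg_i, seg_j, len_seg_j):
    """tau(i, j) from the case analysis above: the plain offset inside a
    shared segment; across segments, positions are measured as if the two
    segments were concatenated as [S_j, S_i]."""
    if seg_i == seg_j:
        return t_i - t_j
    return t_i - t_j + len_seg_j
```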
The positional attention score can then be derived as:
\begin{equation}
a_{i,j}^\textrm{pos} = \left( h^q_{i} + \phi_p \right)^T R_{\tau(i,j)}.
\end{equation}
We should note that we use relative positional embedding in the first place simply for compatibility with the pretrained XLNet model. Otherwise, our model could also utilize absolute positional embedding, where the leftmost token of each segment is viewed as having the absolute position 1, and tokens from different segments sharing the same absolute position are differentiated based on the content-wise and segment-wise attentions.
The final attention score is simply the sum of all three components:
\begin{equation}
a_{i,j} = a_{i,j}^\textrm{content} + a_{i,j}^\textrm{segment} + a_{i,j}^\textrm{pos}.
\end{equation}
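Putting the three components together, a sketch of the per-pair score; the vector and table lookups are passed in pre-resolved, and the names are ours:

```python
import numpy as np

def attention_score(h_q, h_k, phi_c, phi_b, phi_p, H_rel, R_tau):
    """Sum of the three components defined above: content-wise,
    segment-wise (H_rel = H[G[sigma_i][sigma_j]], the relation-type
    embedding) and position-wise (R_tau = R[tau(i, j)])."""
    a_content = (h_q + phi_c) @ h_k
    a_segment = (h_q + phi_b) @ H_rel
    a_position = (h_q + phi_p) @ R_tau
    return a_content + a_segment + a_position
```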
\section{Introduction}
\input{introduction}
\section{Methods}
\input{methods}
\section{Experiments}
\input{experiments}
\section{Discussion}
\input{discussion}
\section{Applications in clinical note generation}
\input{clinical}
\section{Conclusion}
\input{conclusion}
\section{Introduction}
The homogeneous spaces $G/H$, where $G$ is a compact connected Lie
group and $H$ is its closed connected subgroup of maximal rank, are
classical manifolds which play a significant role in many areas of
mathematics. They can be characterized as homogeneous spaces with
positive Euler characteristic. Our interest in these manifolds is
related to the well-known problem in cobordism theory of finding
representatives of cobordism classes that have rich geometric
structure and carry many non-equivalent stable complex structures.
Let us mention~\cite{B} that all compact homogeneous spaces $G/H$
where $H$ is the centralizer of an element of odd order admit an
invariant almost complex structure. Furthermore, if $H$ is the
centralizer of a maximal torus in $G$, then $G/H$ admits an invariant complex
structure which gives rise to a unique invariant Kaehler-Einstein
metric; moreover, all homogeneous complex spaces are algebraic.
Besides that, by~\cite{WG}, the stationary subgroup $H$ of any
homogeneous complex space $G/H$ can be realized as the fixed point
set under the action of some finite group of inner automorphisms of
$G$, and vice versa. This shows that these spaces can be made into
generalized symmetric spaces, which results in the existence of
finite-order symmetries on them. Our interest in the study of the
finite order symmetries on them. Our interest in the study of the
homogeneous spaces $G/H$ with positive Euler characteristic is also
stimulated by the well known relations between the cohomology rings of
these spaces and the deep problems in the theory of representations
and combinatorics (see, for example~\cite{F}). These problems are
formulated in terms of different additive basis in the cohomology
rings for $G/H$ and multiplicative rules related to that basis. We
hope that the relations in the cohomology rings $H^{*}(G/H,\mathbb{ Z} )$
obtained from the calculation of the universal toric genus of the
manifold $G/H$ may lead to new results in that direction.
In the paper~\cite{Novikov-67}, which opened a new stage in the
development of cobordism theory, S.~P.~Novikov proposed a method
for describing the fixed points of group actions on
manifolds, based on the formal group law for geometric cobordisms.
That paper rapidly stimulated active research work which brought
significant results. These results in the case of $S^1$-actions are
mainly contained in the
papers~\cite{Gusein-Zade-71-Izv},~\cite{Gusein-Zade-71-MZ},~\cite{Krichever-74},~\cite{Krichever-76}
and also in~\cite{Krichever-90}. Our approach to this problem uses
the results on the universal toric genus, which was introduced
in~\cite{BR} and described in details in~\cite{IMRN}. Let us note
that the formula for the universal toric genus in terms of fixed
points is a generalization of Krichever's
formula~\cite{Krichever-74} to the case of stable complex manifolds.
For the description of the cobordism classes of manifolds in terms
of their Chern numbers we appeal to the Chern-Dold character theory,
which is developed in~\cite{Buchstaber}.
The universal toric genus can be constructed for any even-dimensional
manifold $M^{2n}$ with a given torus action and a stable
complex structure which is equivariant under the torus action.
Moreover, if the set of isolated fixed points for this action is
finite, then the universal toric genus can be localized, meaning
that it can be explicitly written through the weights and the signs
at the fixed points of the representations that arise from
the given torus action.
The construction of the toric genus reduces to the computation of
the Gysin homomorphism of $1$ in complex cobordisms for a fibration whose
fiber is $M^{2n}$ and whose base is the classifying space of the torus.
The problem of the localization of the Gysin homomorphism is well known
and has been studied by many authors, starting in the 1960s.
In~\cite{BR} and~\cite{IMRN} an explicit answer to this problem is
obtained in terms of the torus action on the tangent bundle
of $M^{2n}$. The history of this problem is also presented in these
papers.
If we consider a compact homogeneous space $G/H$ with $\operatorname{rk} G = \operatorname{rk} H = k$,
we have on it the canonical action $\theta$ of the maximal torus
$T^{k}$ of $H$ and $G$, and any $G$-invariant almost complex structure
on $G/H$ is compatible with this action. Besides that, all fixed
points of the action $\theta$ are isolated, so one can apply the
localization formula to compute the universal toric genus for this
action and any invariant almost complex structure. Since, in this
case, we consider almost complex structures, all fixed points in the
localization formula have sign $+1$. We prove that the
weights of the action $\theta$ at different fixed points can be
obtained by the action of the Weyl group $W_{G}$, up to the action of
the Weyl group $W_{H}$, on the weights of $\theta$ at the identity fixed
point. In this way we obtain an explicit formula for the cobordism
classes of such spaces in terms of the weights at the fixed point
$eH$. This formula also shows that the cobordism class of $G/H$
related to an invariant almost complex structure can be computed
without any information about the cohomology of $G/H$.
We also obtain explicit formulas, in terms of the weights at the
identity fixed point, for the cohomology characteristic numbers of
homogeneous spaces of positive Euler characteristic endowed with an
invariant almost complex structure. We further use the fact that the
cohomology characteristic numbers $s_\omega, \; \omega=(i_1, \ldots,
i_n)$, and the classical Chern numbers $c^\omega=c_1^{i_1}\cdots
c_n^{i_n}$ are related by standard relations from the theory of
symmetric polynomials. This fact, together with the obtained formulas
for the characteristic numbers $s_\omega(\tau(G/H))$, proves that the
classical Chern numbers $c^\omega(\tau(G/H))$ for almost complex homogeneous
spaces can be computed without information on
their cohomology. It also gives an explicit way to compute
the classical Chern numbers.
In studying invariant almost complex structures on compact
homogeneous spaces $G/H$ of positive Euler characteristic we appeal
to the theory developed in~\cite{BH}, which describes such
structures in terms of the complementary roots of $G$ related to $H$. In
that context, by an invariant almost complex structure on $G/H$ we
mean a structure that is invariant under the canonical action
of $G$ on $G/H$.
In this paper we go further with an application of the universal toric
genus and consider almost complex structures on $G/H$ which
we require to be invariant only under the canonical action $\theta$
of the maximal torus $T$ on $G/H$, as well as stable complex
structures equivariant under this torus action. We prove, in general, that for an arbitrary
$M^{2n}$ the weights for any two such structures differ
only by signs. We also prove that the sign difference in the
weights determines, up to a common factor $\pm 1$, the difference of
the signs at the fixed points related to any two such structures.
We provide an application of our results by obtaining explicit
formulas for the cobordism classes and top cohomology characteristic
numbers of the flag manifolds $U(n)/T^n$ and Grassmann manifolds
$G_{n,k}=U(n)/(U(k)\times U(n-k))$ related to the standard complex
structures. We want to emphasize that our method, when applied to
the flag and Grassmann manifolds, gives a description of
their cobordism classes and characteristic numbers using the
technique of {\it divided difference operators}. Our method also
makes it possible to compare cobordism classes that correspond to
different invariant almost complex structures on the same
homogeneous space. We illustrate this on the space
$U(4)/(U(1)\times U(1)\times U(2))$, which was first given
in~\cite{BH} as an example of a homogeneous space that admits two
different invariant complex structures.
We also compute the universal toric genus of the sphere
$S^{6}=G_{2}/SU(3)$ related to the unique $G_2$-invariant almost complex
structure, which is known not to be complex~\cite{BH}. We prove
more: this structure is also the unique almost complex structure
invariant under the canonical action $\theta$ of the maximal torus
$T^{2}$.
This paper comes out of the first part of our work, in which we mainly
considered invariant almost complex structures on homogeneous spaces
of positive Euler characteristic. It has a continuation which will
deal with the same questions for the stable complex
structures equivariant under a given torus action on homogeneous
spaces of positive Euler characteristic.
The authors are grateful to A.~Baker and A.~Gaifullin for useful
comments.
\section{Universal toric genus}
We will recall the results from~\cite{BR},~\cite{BPR} and~\cite{IMRN}.
\subsection{General setting.}In the general setting one considers a
$2n$-dimensional manifold $M^{2n}$ with a given smooth action
$\theta$ of the torus $T^{k}$. We say that $(M^{2n}, \theta,
c_{\tau})$ is {\it equivariant stable complex} if it admits a
$\theta$-equivariant stable complex structure $c_{\tau}$. This means
that there exist $l\in \mathbb{ N}$ and a complex vector bundle $\xi$ such that
\begin{equation}\label{scdp}
c_{\tau}\colon \tau (M^{2n})\oplus \mathbb{ R} ^{2(l-n)}\longrightarrow \xi
\end{equation}
is a real isomorphism and the composition
\begin{equation}\label{scd}
r(t)\colon\xi\stackrel{c_\tau^{-1}}{\longrightarrow}\tau(M^{2n})\oplus
\mathbb{R}^{2(l-n)}\stackrel{d\theta(t)\oplus I}{\longrightarrow}
\tau(M^{2n})\oplus\mathbb{R}^{2(l-n)}\stackrel{c_\tau}{\longrightarrow}\xi
\end{equation}
is a complex transformation for any $t\in T^{k}$.
If there exists $\xi$ such that $c_{\tau}\colon \tau
(M^{2n})\longrightarrow \xi$ is an isomorphism, i.~e.~ $l=n$, then
$(M^{2n},\theta ,c_{\tau})$ is called {\it almost complex}
$T^k$-manifold.
Denote by $\Omega _{U}^{*} [[u_1,\ldots, u_{k}]]$ the algebra of
formal power series over $\Omega _{U}^{*} = U^{*}(pt)$. It is well
known~\cite{Novikov} that $U^{*}(pt) = \Omega _{U}^{*} = \mathbb{ Z}
[y_{1},\ldots ,y_{n},\ldots ]$, where $\operatorname{dim} y_{n}=-2n$. Moreover, the family
of cobordism classes $[\mathbb{ C} P^{n}]$ of the complex projective spaces
can be taken as generators for $\Omega _{U}^{*}$ over the rationals, in
other words for $\Omega _{U}^{*}\otimes \mathbb{ Q}$.
Given a $\theta$-equivariant stable complex structure
$c_{\tau}$ on $M^{2n}$, we can always choose a $\theta$-equivariant
embedding $i\colon M^{2n}\to \mathbb{ R} ^{2(n+m)}$, where $m>n$, such that
$c_{\tau}$ determines, up to natural equivalence, a
$\theta$-equivariant complex structure $c_{\nu}$ on the normal
bundle $\nu(i)$ of $i$. Therefore, one can define the universal
toric genus for $(M^{2n},\theta, c_{\tau})$ in complex cobordisms,
see~\cite{BR},~\cite{IMRN}.
We note that, in the case when $c_{\tau}$ is an almost complex
structure, the universal toric genus for $(M^{2n},\theta ,c_{\tau})$
is completely defined in terms of the action $\theta$ on the tangent
bundle $\tau (M^{2n})$.
The universal toric genus for $(M^{2n}, \theta, c_{\tau})$ can be
viewed as an element of the algebra $\Omega _{U}^{*} [[u_1,\ldots,
u_{k}]]$. It is defined by
\begin{equation}\label{uto}
\Phi (M^{2n}, \theta, c_{\tau}) = [M^{2n}] + \sum _{|\omega|>0}
[G_{\omega}(M^{2n})]u^{\omega} \; ,
\end{equation}
where $\omega = (i_1,\ldots ,i_k)$ and $u^{\omega} =
u_{1}^{i_1}\cdots u_{k}^{i_k}$.\\
Here $[M^{2n}]$ denotes the complex cobordism class of the
manifold $M^{2n}$ with the stable complex structure $c_{\tau}$, and
$G_{\omega}(M^{2n})$ denotes the stable complex manifold obtained
as the total space of the fibration $G_{\omega} \to B_{\omega}$ with
fiber $M^{2n}$. The base $B_{\omega} = \prod _{j=1}^{k}B_{j}^{i_j}$,
where $B_{j}^{i_j}$ is a Bott tower, i.~e.~an $i_j$-fold iterated
two-sphere bundle over $B_{0}=pt$. The base $B_{\omega}$ satisfies
$[B_{\omega}]=0$ for $|\omega|>0$, where $|\omega| =
\sum\limits_{j=1}^{k} i_{j}$.
For the universal toric genus of a homogeneous space of positive Euler characteristic we prove the following.
\begin{lem}\label{L1}
Let $M^{2n}=G/H$, where $G$ is a compact connected Lie group and $H$
is its closed connected subgroup of maximal rank. Denote by $\theta$
the canonical action of the maximal torus $T^k$ on $M^{2n}$ and let
$c_{\tau}$ be a $G$-equivariant stable complex structure on $M^{2n}$.
Then the universal toric genus $\Phi (M^{2n},\theta, c_{\tau})$
belongs to the image of the homomorphism $Bj^{*}\colon U^{*}(BG) \to
U^{*}(BT^{k})$ induced by the embedding $T^{k}\subset G$.
\end{lem}
\begin{proof}
According to its construction, the universal toric genus $\Phi
(M^{2n},\theta ,c_{\tau})$ is equal to $p_!(1)$ and belongs to
$U^{-2n}(BT^{k})$, where
\[
p : ET^{k}\times _{T^k} M^{2n}\to BT^{k} \; .
\]
Using the fact that the action $\theta$ is induced by the left action of the
group $G$ and looking at the commutative diagram
\begin{eqnarray}
ET^{k} & \rightarrow & EG\nonumber\\
\downarrow & & \downarrow\nonumber\\
BT^{k} & \rightarrow & BG\nonumber
\end{eqnarray}
we obtain the proof, due to the fact that the Gysin homomorphism is
functorial for bundles that can be connected by a commutative
diagram.
\end{proof}
\subsection{The action with isolated fixed points.}
We first introduce, following~\cite{BR}, the general notion of the
{\it sign} at an isolated fixed point. Suppose we are given an
equivariant stable complex structure $c_{\tau}$ on $M^{2n}$.
We assume $\mathbb{ R} ^{2(l-n)}$ in~\eqref{scdp} to be endowed with
canonical orientation. Under this assumption the real
isomorphism~\eqref{scdp} defines an orientation on $\tau(M^{2n})$,
since $\xi$ is canonically oriented by an existing complex
structure.
On the other hand, if $p$ is an isolated fixed point, the
representation $r_{p}\colon T^k\to GL(l,\mathbb{ C})$ associated
to~\eqref{scd} produces the decomposition of the fiber $\xi
_{p}\cong \mathbb{ C} ^{l}$ as $\xi _{p}\cong \mathbb{ C} ^{l-n}\oplus \mathbb{ C} ^{n}$. In
this decomposition $r_{p}$ acts trivially on $\mathbb{ C} ^{l-n}$ and without
trivial summands on $\mathbb{ C} ^{n}$. Note that $\mathbb{ C}^{n}$ inherits the
complex structure from $\xi _{p}$, which defines an orientation of
$\mathbb{ C}^{n}$. Altogether, this leads to the following definition.
\begin{defn}
The $\mathrm{sign} (p)$ at an isolated fixed point $p$ is
$+1$ if the map
\[
\tau_p(M^{2n})\stackrel{I\oplus\hspace{.1ex}0}{\longrightarrow}
\tau_p(M^{2n})\oplus \mathbb{R}^{2(l-n)} \stackrel{c_{\tau,p}}
{\longrightarrow}\xi_p \cong\mathbb{C}^n \oplus\mathbb{C}^{l-n}
\stackrel{\pi}{\longrightarrow} \mathbb{C}^n \; ,
\]
preserves orientation. Otherwise, $\mathrm{sign} (p)$ is $-1$.
\end{defn}
\begin{rem}\label{cmx}
Note that for an almost complex $T^{k}$-manifold $M^{2n}$, it
directly follows from the definition that $\mathrm{sign} (p)= +1$ for any
isolated fixed point.
\end{rem}
If an action $\theta$ of $T^{k}$ on $M^{2n}$ has only isolated fixed
points, then the toric genus of $M^{2n}$ can be
completely described using only local data at the fixed
points~\cite{BR},~\cite{IMRN}.
Namely, let $p$ again be an isolated fixed point. Then the non-trivial
summand of $r_p$ from~\eqref{scd} gives rise to the
tangential representation of $T^{k}$ in $GL(n, \mathbb{ C})$. This
representation decomposes into a sum $r_{p,1}\oplus \ldots \oplus r_{p,n}$
of $n$ non-trivial one-dimensional representations of $T^{k}$.
Each of the representations $r_{p,j}$ can be written as
\[
r_{p,j}(e^{2\pi i x_{1}}, \ldots ,e^{2\pi i x_{k}})v= e^{2\pi i
\langle\Lambda_{j}(p), {\bf x}\rangle }v \; ,
\]
for some $\Lambda_{j}(p) = (\Lambda^{1}_{j}(p),\ldots
,\Lambda^{k}_{j} (p)) \in \mathbb{ Z} ^{k}$, where ${\bf x}=(x_{1},\ldots
,x_{k})\in \mathbb{ R} ^{k}$ and $\langle \Lambda_{j}(p), {\bf x}\rangle =
\sum\limits _{l=1}^{k}\Lambda^{l}_{j}(p)x_{l}$. The sequence $\{
\Lambda_{1}(p),\ldots ,\Lambda_{n}(p)\}$ is called {\it the weight
vector} of the representation $r_{p}$ at the fixed point $p$.
\begin{rem}
Since $p$ is an isolated fixed point, none of the pairs of weights $\Lambda_{i}(p)$, $1\leq i\leq n$, has a common integer factor.
\end{rem}
\begin{thm}\label{weights2}
Let $c_{\tau}^{1}$ and $c_{\tau}^{2}$ be stable complex
structures on $M^{2n}$ equivariant under the given action $\theta$
of the torus $T^{k}$ on the manifold $M^{2n}$ with isolated fixed
points.
\begin{itemize}
\item The weights $\Lambda_{i}^{1}(p)$ and $\Lambda_{i}^{2}(p)$,
$1\leqslant i\leqslant n$, of an action $\theta$ at the fixed point
$p$ corresponding to the structures $c_{\tau}^{1}$ and $c_{\tau}^{2}$
are related by
\begin{equation}\label{swgeneral}
\Lambda_{i}^{2}(p) = a_{i}(p)\Lambda_{i}^{1}(p), \; \mbox{where} \;
a_{i}(p)=\pm 1 \; .
\end{equation}
\item The signs $\mathrm{sign}(p)_{1}$ and $\mathrm{sign}(p)_{2}$ at the fixed point $p$
corresponding to the structures $c_{\tau}^{1}$ and $c_{\tau}^{2}$ are
related by
\begin{equation}\label{signsgeneral}
\mathrm{sign}(p)_{2} = \epsilon \cdot \prod_{i=1}^{n}a_{i}(p)\cdot \mathrm{sign}(p)_{1} \; ,
\end{equation}
where $\epsilon = \pm 1$ depending on whether $c_{\tau}^{1}$ and
$c_{\tau}^{2}$ define the same orientation of $\tau(M^{2n})$ or not.
Here $a_{i}(p)$ are such that
$\Lambda^{2}_{i}(p)=a_{i}(p)\Lambda^{1}_{i}(p)$, for the weights
$\Lambda^{1}_{i}(p)$ and $\Lambda^{2}_{i}(p)$ of an action $\theta$
related to the structures $c_{\tau}^{1}$ and $c_{\tau}^{2}$.
\end{itemize}
\end{thm}
\begin{proof}
Let $p$ be an isolated fixed point of an action $\theta$ of the
torus $T^{k}$ on the manifold $M^{2n}$. If $M^{2n}$ is given
a $\theta$-equivariant stable complex structure $c_{\tau_{1}}$,
then in a neighborhood of $p$ the tangential representation of
$T^{k}$ in $GL(n,\mathbb{ C} )$ assigned to the action $\theta$ and the structure
$c_{\tau_{1}}$ decomposes into a sum of non-trivial
one-dimensional representations $r_{p,1}\oplus\ldots \oplus
r_{p,n}$. Any other stable complex structure $c_{\tau_{2}}$ which is
equivariant under the given action $\theta$ commutes with each of
the one-dimensional representations $r_{p,i}$, $1\leqslant
i\leqslant n$. Therefore, the one-dimensional summands into which
the tangential representation of $T^{k}$, assigned to the
action $\theta$ and the structure $c_{\tau_{2}}$, decomposes are $r_{p,i}$ or its
conjugate $\overline{r_{p,i}}$, $1\leqslant i\leqslant n$. This
implies that the relations between the weights of the action
$\theta$ related to two different $\theta$-equivariant
stable complex structures are given by
formula~\eqref{swgeneral}.
To prove the second statement of the theorem, note that the
sign at the fixed point $p$ for a $\theta$-equivariant
stable complex structure $c_{\tau}$ is determined by the
orientations of the real two-dimensional subspaces into which the summand $\mathbb{ C}^{n}$ of
$\xi _{p}= \mathbb{ C}^{n}\oplus \mathbb{ C}^{l-n}$ decomposes. That decomposition is obtained using the decomposition of the
tangential representation of $T^{k}$ determined by the action $\theta$ and the structure
$c_{\tau}$. Therefore, by~\eqref{swgeneral} it follows that the
relation between the signs at the fixed point for the given torus
action related to the two equivariant stable complex structures is
given by~\eqref{signsgeneral}.
\end{proof}
\begin{rem}
We want to point out that Theorem~\ref{weights2} shows that, under the
assumption that the manifold $M^{2n}$ admits a $\theta$-equivariant
stable complex structure, the signs at the fixed points for any
other $\theta$-equivariant stable complex structure are completely determined by the orientation
that structure defines on $M^{2n}$ and by the weights at the fixed points. In
other words, when passing from the existing structure to some other
$\theta$-equivariant stable complex structure, the ``correction'' of
the sign at an arbitrary fixed point is completely determined by the
``correction'' of the weights at that fixed point, up to a common
factor $\epsilon=\pm 1$ which indicates whether the orientations
on $M^{2n}$ defined by these two structures differ.
\end{rem}
\subsection{Formal group law.} Let $F(u, v)= u + v + \sum
\alpha _{ij}u^{i}v^{j}$ be {\it the formal group law for complex
cobordism} \cite{Novikov-67}. The corresponding power system $\{
[w](u)\in \Omega ^{*}[[u]] : w\in \mathbb{ Z} \}$ is uniquely defined by
$[0](u)=0$ and $[w](u) = F(u, [w-1](u))$, for $w\in \mathbb{ Z}$. For ${\bf
w}=(w_1,\ldots ,w_{k})\in {\mathbb{ Z}}^{k}$ and ${\bf u}=(u_1,\ldots, u_k)$
one defines $\bf{[w](u)}$ inductively with ${\bf [w](u)} =[w](u)$
for $k=1$ and
\[
{\bf[w](u)} = F_{q=1}^{k}[w_{q}](u_{q})=F(F_{q=1}^{k-1}[w_{q}](u_{q}),
[w_{k}](u_{k})) \; ,
\]
for $k\geqslant 2$. Then for the toric genus of the action $\theta$
with isolated fixed points the following localization formula
holds; it was first formulated in~\cite{BR} and proved in detail
in~\cite{IMRN}.
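The recursion defining the power system can be checked directly on a concrete formal group law. A minimal sketch (Python with sympy, which is an assumption of this illustration; for concreteness only the coefficient $\alpha_{11}$ is kept, which gives the multiplicative law):

```python
import sympy as sp

u = sp.Symbol('u')
a11 = sp.Symbol('alpha_11')

# A concrete formal group law: F(u, v) = u + v + alpha_11*u*v
# (the multiplicative law, up to the normalization of alpha_11).
def F(x, y):
    return sp.expand(x + y + a11 * x * y)

# Power system: [0](u) = 0 and [w](u) = F(u, [w-1](u)) for w >= 1.
def power(w):
    s = sp.Integer(0)
    for _ in range(w):
        s = F(u, s)
    return sp.expand(s)

print(power(2))   # equals 2u + alpha_11 u^2
print(power(3))   # equals 3u + 3 alpha_11 u^2 + alpha_11^2 u^3
```

For instance, $[2](u)=F(u,[1](u))=F(u,u)=2u+\alpha_{11}u^{2}$, in accordance with the recursion above.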
\begin{thm}\label{EGFP}
If the action $\theta$ has a finite set $P$ of isolated fixed
points then
\begin{equation}\label{tw}
\Phi (M^{2n}, \theta, c_{\tau}) = \sum _{p\in P} \mathrm{sign} (p) \prod
_{j=1}^{n}\frac{1}{[\Lambda_{j}(p)]({\bf u})} \; .
\end{equation}
\end{thm}
\begin{rem}
Theorem~\ref{EGFP} together with formula~\eqref{uto} gives that
\begin{equation}\label{order}
\sum _{p\in P} \mathrm{sign} (p) \prod
_{j=1}^{n}\frac{1}{[\Lambda_{j}(p)]({\bf u})}= [M^{2n}] + \LL ({\bf u})\; ,
\end{equation}
where $\LL ({\bf u}) \in \Omega _{U}^{*} [[u_{1},\ldots ,u_{k}]]$
and $\LL ({\bf 0}) = 0$. In this way Theorem~\ref{EGFP} shows that
all summands on the left hand side of~\eqref{order} have order $n$
at $0$.
\end{rem}
\begin{rem}
As we will make explicit further on, the fact that all
singularities in formula~\eqref{tw} must cancel after summation
imposes constraints on the weights and signs at the fixed points.
Note also that formula~\eqref{tw} gives an expression for the
cobordism class $[M^{2n}]$ in terms of the weights and signs at
fixed points.
\end{rem}
\subsection{Chern-Dold character.}\label{CDS} We show further how
the notion of the Chern-Dold character in cobordisms can be used,
together with Theorem~\ref{EGFP}, to obtain an expression for the
cobordism class $[M^{2n}]$ in terms of the characteristic numbers
of $M^{2n}$, as well as relations on the weights and signs at the
fixed points. In our review of the basic definitions and results on
the Chern-Dold character we follow~\cite{Buchstaber}.
Let $U^{*}$ be the theory of unitary cobordisms.
\begin{defn}
The Chern-Dold character for a topological space $X$ in the theory
of unitary cobordisms $U^{*}$ is a ring homomorphism
\begin{equation}
ch _{U} : U^{*}(X) \to H^{*}(X, \Omega _{U}^{*}\otimes \mathbb{ Q}) \ .
\end{equation}
\end{defn}
Recall that the Chern-Dold character, as a multiplicative
transformation of cohomology theories, is uniquely defined by the
condition that for $X=(pt)$ it gives the canonical inclusion
$\Omega _{U}^{*}\to \Omega _{U}^{*}\otimes \mathbb{ Q}$.
The Chern-Dold character splits into the composition
\begin{equation}\label{CD}
ch _{U} : U^{*}(X)\to H^{*}(X, \Omega _{U}^{*}(\mathbb{ Z} ))\to H^{*}(X,
\Omega _{U}^{*}\otimes \mathbb{ Q} ) \; .
\end{equation}
The ring $\Omega _{U}^{*}(\mathbb{ Z} )$ in \eqref{CD} was first described
in~\cite{Buchstaber}. It is the subring of $\Omega _{U}^{*}\otimes \mathbb{ Q}$
generated by the elements of $\Omega _{U}^{-2n}\otimes \mathbb{ Q}$ having
integer Chern numbers. It is equal to
\[
\Omega _{U}^{*}(\mathbb{ Z} )=\mathbb{ Z} [b_1,\ldots , b_n,\ldots ] \; ,
\]
where $b_n = \frac{1}{n+1}[\mathbb{ C} P^{n}]$.
The Chern-Dold character leaves $[M^{2n}]$ invariant, i.~e.~
\[
ch_{U}([M^{2n}])= [M^{2n}]\; ,
\]
and $ch_{U}$ is a homomorphism of $\Omega _{U}^*$-modules.
It follows from its description in~\cite{Buchstaber} that the
Chern-Dold character $ch _{U} : U^{*}(X)\to H^{*}(X, \Omega
_{U}^{*}(\mathbb{ Z} ))$ as a multiplicative transformation of cohomology
theories is given by the series
\[
ch _{U}u = h(x)=\frac{x}{f(x)},\quad \mbox{where}\quad f(x)= 1+\sum
_{i=1}^{\infty} a_{i}x^{i} \quad \mbox{and}\quad a_{i}\in \Omega
_{U}^{-2i}(\mathbb{ Z}) \; .
\]
Here $u=c_1^U(\eta) \in U^2(\mathbb{C}P^\infty)$ and $x=c_1^H(\eta)
\in H^2(\mathbb{C}P^\infty,\mathbb{Z})$ denote the first Chern classes
of the universal complex line bundle $\eta \to \mathbb{C}P^\infty$.
The construction of the Chern-Dold character also gives the
equality
\begin{equation}\label{charact}
ch_{U}[M^{2n}] = [M^{2n}] = \sum _{\|\omega \|=n} s_{\omega}(\tau
(M^{2n}))a^{\omega}\; ,
\end{equation}
where $\omega = (i_{1},\ldots ,i_{n}), \; \| \omega \| = \sum
_{l=1}^{n} l\cdot i_{l}$ and $a ^{\omega} = a_{1}^{i_1}\cdots
a_{n}^{i_{n}}$. Here the numbers $s_{\omega}(\tau
(M^{2n}))$, $\|\omega \|=n$, are {\it the cohomology characteristic
numbers} of $M^{2n}$; they correspond to the cohomology tangent
characteristic classes of $M^{2n}$.
If on $M^{2n}$ we are given a torus action $\theta$ of $T^{k}$ and
a stable complex structure $c_{\tau}$ which is $\theta$-equivariant,
then the Chern-Dold character of its toric genus is
\begin{equation}\label{cheq}
ch_{U}\Phi (M^{2n},\theta, c_{\tau}) = [M^{2n}] + \sum _{| \xi |
>0}[G_{\xi}(M^{2n})](ch_{U}{\bf u})^{\xi}\, ,
\end{equation}
where $ch_{U}{\bf u} = (ch_{U}u_1, \ldots, ch_{U}u_k),\;\; ch_{U}u_i
= \frac{x_i}{f(x_i)}$\, and\, $\xi = (i_1, \ldots,i_k),\; | \xi | =
i_1+ \cdots+i_k$.
We have that $F(u, v)=g^{-1}\left( g(u)+g(v)\right)$, where $g(u)=u
+ \sum\limits_{n>0}\frac{1}{n+1}[\mathbb{ C} P^{n}]u^{n+1}$ (see
\cite{Novikov-67}) is {\it the logarithm of the formal group} $F(u,
v)$ and $g^{-1}(u)$ is the exponential of $F(u, v)$, that is, the
series inverse to $g(u)$. Using that $ch_{U}g(u)=
g(ch_{U}(u))= g\big( \frac{x}{f(x)} \big) =x$ (see
\cite{Buchstaber}), we obtain $g^{-1}(x)=\frac{x}{f(x)}$ and
$ch_{U}F(u_1, u_2)=\frac{x_1+x_2}{f(x_1+x_2)}$ and therefore
\[ ch_{U}[\Lambda _{j}(p)](u) = \frac{\langle \Lambda _{j}(p),
{\bf x}\rangle}{f(\langle \Lambda _{j}(p), {\bf
x}\rangle)}\; . \]
Applying these results to Theorem~\ref{EGFP} we get
\begin{equation}\label{chs}
ch_{U}\Phi(M^{2n},\theta, c_{\tau })= \sum _{p\in P}\mathrm{sign} (p)\prod
_{j=1}^{n}\frac{f(\langle \Lambda _{j}(p), {\bf x}\rangle)}{\langle
\Lambda _{j}(p),{\bf x}\rangle} \; .
\end{equation}
From~\eqref{cheq} and~\eqref{chs} it follows that
\begin{equation}\label{cc}
\sum _{p\in P}\mathrm{sign} (p)\prod _{j=1}^{n}\frac{f(\langle \Lambda _{j}(p),
{\bf x}\rangle)}{\langle \Lambda _{j}(p), {\bf x}\rangle}= [M^{2n}]
+ \sum _{| \xi | >0}[G_{\xi}(M^{2n})](ch_{U}{\bf u})^{\xi} \, .
\end{equation}
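The relation $g^{-1}(x)=x/f(x)$ expresses the coefficients $a_{i}$ of $f$ through the coefficients $b_{n}=\frac{1}{n+1}[\mathbb{ C} P^{n}]$ of the logarithm $g$ by series reversion. A sketch in low degrees (Python with sympy assumed; the truncation order is chosen only for illustration):

```python
import sympy as sp

x = sp.Symbol('x')
b1, b2 = sp.symbols('b1 b2')   # b_n = [CP^n]/(n+1), coefficients of g
a1, a2 = sp.symbols('a1 a2')   # unknown coefficients of f

f = 1 + a1 * x + a2 * x ** 2
g = lambda t: t + b1 * t ** 2 + b2 * t ** 3   # logarithm, truncated

# g(g^{-1}(x)) = x with g^{-1}(x) = x/f(x) forces relations on a1, a2.
expr = sp.expand(sp.series(g(x / f) - x, x, 0, 4).removeO())
sol = sp.solve([expr.coeff(x, 2), expr.coeff(x, 3)], [a1, a2], dict=True)[0]
print(sol)   # a1 = b1 and a2 = b2 - b1**2
```

The output $a_{1}=b_{1}$, $a_{2}=b_{2}-b_{1}^{2}$ is consistent with $b_{1}=\frac{1}{2}[\mathbb{ C} P^{1}]$ and the relation $[\mathbb{ C} P^{1}]=2a_{1}$ obtained in the example below.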
\begin{ex}\label{n=2}
Let us take $M^{2}=\mathbb{ C} P^{1}=U(2)/\big(U(1)\times U(1)\big)$. We have
the action $\theta$ of $T^2$ on $\mathbb{ C} P^{1}$ with two fixed points. The
weights related to the standard complex structure $c_\tau$ are
$(x_1-x_2)$ and $(x_2-x_1)$. By equation~\eqref{chs} we obtain
that the Chern character of the universal toric genus for
$(\mathbb{C}P^1,\theta,c_\tau)$ is given by the series
\[ ch_{U}\Phi(\mathbb{C}P^1,\theta,c_\tau) = \frac{f(x_1-x_2)}{x_1-x_2}
+ \frac{f(x_2-x_1)}{x_2-x_1} = 2\sum_{k=0}^{\infty} a_{2k+1}
(x_1-x_2)^{2k} \, . \] By equation~\eqref{cheq} we obtain
\[ [\mathbb{ C} P^1] + \sum_{i+j>0}[G_{i,j}(\mathbb{ C} P^1)]\frac{x_1^ix_2^j}{f(x_1)^if(x_2)^j}
= 2a_1 + 2\sum\limits_{k=1}^{\infty} a_{2k+1} (x_1-x_2)^{2k} \, . \]
Thus, $[\mathbb{ C} P^1]=2a_1$ and $\sum\limits_{i+j=n}[G_{i,j}(\mathbb{ C} P^1)]=0$
for any $n>0$. Moreover,
\begin{align*}
[G_{i,j}(\mathbb{ C} P^1)]\; & \sim \;0, \; \text{if }\;i+j=2k+1,\\
[G_{i,2k-j}(\mathbb{ C} P^1)]\; & \sim \;(-1)^i2\binom{2k}{i}a_{2k+1}, \;
k>0,
\end{align*}
where ``$\sim$'' denotes equality up to elements decomposable in
$\Omega_U(\mathbb{Z})$.
Note that the subgroup $S^{1}=\{(t_1,t_2)\in T^2 \mid t_2=1\}$ also acts on $\mathbb{ C} P^{1}$, with the same fixed points
as for the action of $T^2$, but with weights $x$ and $-x$. This action is given as an example in~\cite{BR}.
\end{ex}
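The cancellation of the poles and of the even part in this example can be verified symbolically. A minimal sketch (Python with sympy assumed; $y$ stands for $x_{1}-x_{2}$ and $f$ is truncated at degree $6$):

```python
import sympy as sp

y = sp.Symbol('y')                   # y stands for x1 - x2
N = 6
a = sp.symbols('a1:7')               # a_1, ..., a_6
f = lambda t: 1 + sum(a[i - 1] * t ** i for i in range(1, N + 1))

# ch_U Phi(CP^1) = f(y)/y + f(-y)/(-y): the poles and all even-indexed
# coefficients cancel, leaving 2 * sum_k a_{2k+1} y^{2k}.
lhs = sp.expand(f(y) / y - f(-y) / y)
rhs = 2 * sum(a[2 * k] * y ** (2 * k) for k in range(N // 2))
print(sp.simplify(lhs - rhs))        # 0
```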
If on the left hand side of equation~\eqref{cc} we substitute $t{\bf x}$
for ${\bf x}$ and then multiply by $t^{n}$, we obtain the
following result.
\begin{prop}\label{cobc}
The coefficient of $t^{n}$ in the series in $t$
\[
\sum _{p\in P}\mathrm{sign} (p)\prod _{j=1}^{n}\frac{f(t\langle \Lambda
_{j}(p), {\bf x}\rangle)}{\langle \Lambda _{j}(p), {\bf x}\rangle}
\]
represents the complex cobordism class $[M^{2n}]$.
\end{prop}
\begin{prop}\label{coeffzero}
The coefficient of $t^{l}$ in the series in $t$
\[
\sum _{p\in P}\mathrm{sign} (p)\prod _{j=1}^{n}\frac{f(t\langle \Lambda
_{j}(p), {\bf x}\rangle)}{\langle \Lambda _{j}(p), {\bf x}\rangle}
\]
is equal to zero for $0\leqslant l\leqslant n-1$.
\end{prop}
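For $(\mathbb{ C} P^{1},\theta, c_{\tau})$ both propositions can be checked directly: the coefficient of $t^{0}$ vanishes, and the coefficient of $t^{1}$ equals $[\mathbb{ C} P^{1}]=2a_{1}$. A sketch (Python with sympy assumed; $y$ stands for $x_{1}-x_{2}$ and $f$ is truncated at degree $3$):

```python
import sympy as sp

t, y = sp.symbols('t y')             # y stands for x1 - x2
a1, a2, a3 = sp.symbols('a1 a2 a3')
f = lambda s: 1 + a1 * s + a2 * s ** 2 + a3 * s ** 3

# The series of the two propositions for (CP^1, theta, c_tau):
# two fixed points with weights y and -y, both with sign +1.
series = sp.expand(f(t * y) / y + f(-t * y) / (-y))

print(series.coeff(t, 0))            # 0, the case l = 0 of the vanishing
print(sp.expand(series.coeff(t, 1))) # 2*a1, the cobordism class [CP^1]
```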
\section{Torus action on homogeneous spaces with positive Euler
characteristic.}
Let $G/H$ be a compact homogeneous space of positive Euler
characteristic. This means that $G$ is a compact connected Lie
group and $H$ its connected closed subgroup such that
$\operatorname{rk} G = \operatorname{rk} H$.
Let $T$ be the common maximal torus of $G$ and $H$. There is a
canonical action $\theta$ of $T$ on $G/H$ given by $t(gH)=(tg)H$,
where $t\in T$ and $gH\in G/H$. Denote by $N_{G}(T)$ the normalizer
of the torus $T$ in $G$. Then $W_{G} = N_{G}(T)/T$ is the Weyl group
of $G$. For the set of fixed points of the action $\theta$
we prove the following.
\begin{prop}\label{fixed}
The set of fixed points under the canonical action $\theta$ of $T$ on
$G/H$ is given by $(N_{G}(T))~\cdot~H$.
\end{prop}
\begin{proof}
It is easy to see that $gH$ is a fixed point of $\theta$ for any
$g\in N_{G}(T)$. Conversely, if $gH$ is a fixed point of the
canonical action of $T$ on $G/H$, then $t(gH) = gH$ for all $t\in
T$. It follows that $g^{-1}tg \in H$ for all $t\in T$, i.~e.~
$g^{-1}Tg \subset H$. This shows that $g^{-1}Tg$ is a maximal torus
in $H$ and, since any two maximal tori in $H$ are conjugate, it
follows that $g^{-1}Tg = h^{-1}Th$ for some $h\in H$. Thus,
$(gh^{-1})^{-1}T(gh^{-1})=T$, which means that $gh^{-1}\in N_{G}(T)$. But
$(gh^{-1})H=gH$, which proves the statement.
\end{proof}
Since $T \subset N_{G}(T)$ leaves $H$ fixed, the following Lemma is
a direct implication of Proposition~\ref{fixed}.
\begin{lem}\label{wfixed}
The set of fixed points under the canonical action $\theta$ of $T$ on
$G/H$ is given by $W_{G}\cdot H$.
\end{lem}
Regarding the number of fixed points, the following holds.
\begin{lem}\label{number}
The number of fixed points under the canonical action $\theta$ of $T$
on $G/H$ is equal to the Euler characteristic $\chi (G/H)$.
\end{lem}
\begin{proof}
Let $g, g^{'}\in N_{G}(T)$ be representatives of the same fixed
point. Then $g^{-1}g^{'}\in H$ and $g^{-1}Tg = T =
(g^{'})^{-1}Tg^{'}$, which gives that
$(g^{-1}g^{'})T(g^{-1}g^{'})^{-1} = T$
and, thus, $g^{-1}g^{'}\in N_{H}(T)$. This implies that the number
of fixed points is equal to
\[
\Big\| \frac{N_{G}(T)}{N_{H}(T)} \Big\| = \frac{\| \frac{N_{G}
(T)} {T}\|}{\| \frac{N_{H}(T)}{T} \|} = \frac{\| W_{G}\|}{\| W_{H}
\|} = \chi (G/H) \; .
\]
The last equality is a classical result on equal rank homogeneous
spaces, see~\cite{Onishchik}.
\end{proof}
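For $G=U(n)$ and $H$ a product of unitary blocks, $\|W_{G}\|=n!$ and $\|W_{H}\|$ is the product of the factorials of the block sizes, so the count of Lemma~\ref{number} becomes elementary. A sketch (Python; the helper name is ours):

```python
from math import factorial

def euler_char_unitary(n, blocks):
    """chi(U(n)/(U(n_1) x ... x U(n_s))) = |W_G|/|W_H| = n!/(n_1!...n_s!)."""
    assert sum(blocks) == n
    h = 1
    for b in blocks:
        h *= factorial(b)
    return factorial(n) // h

print(euler_char_unitary(4, [2, 2]))        # 6:  Grassmann manifold G_{4,2}
print(euler_char_unitary(4, [1, 1, 2]))     # 12: U(4)/(U(1)xU(1)xU(2))
print(euler_char_unitary(4, [1, 1, 1, 1]))  # 24: flag manifold U(4)/T^4
```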
\begin{rem}
The proof of Lemma~\ref{number} shows that the set of fixed
points of the canonical action $\theta$ of $T$ on $G/H$ can be
obtained as the orbit of $eH$ under the action of the Weyl group
$W_{G}$, up to the action of the Weyl group $W_{H}$.
\end{rem}
\section{The weights at the fixed points.}
Denote by $\gg$, $\hh$ and $\TT$ the Lie algebras of $G$, $H$ and
$T=T^k$ respectively, where $k=\operatorname{rk} G=\operatorname{rk} H$. Let $\alpha
_{1},\ldots, \alpha _{m}$ be the roots of $\gg$ related to $\TT$,
where $\operatorname{dim} G=2m+k$. Recall that the roots of $\gg$ related to
$\TT$ are the weights of the adjoint representation $\Ad _{T}$ of
$T$, which is given by $\Ad _{T}(t) = d_{e}\ad (t)$, where $\ad
(t)$ are the inner automorphisms of $G$ defined by the elements $t\in T$.
One can always choose the roots of $G$ so that $\alpha
_{n+1},\ldots, \alpha _{m}$ give the roots of $\hh$ related to
$\TT$, where $\operatorname{dim} H=2(m-n)+k$. The roots $\alpha _{1},\ldots,
\alpha _{n}$ are called the {\it complementary} roots of $\gg$
related to $\hh$. Using the root decompositions of $\gg$ and $\hh$ it
follows that $T_{e}(G/H) \cong \gg _{\alpha _{1}}^{\mathbb{ C}}\oplus \ldots
\oplus \gg _{\alpha _{n}}^{\mathbb{ C}}$, where $\gg _{\alpha _{i}}$
denotes the root subspace defined by the root $\alpha _{i}$ and
$T_{e}(G/H)$ is the tangent space of $G/H$ at $e\cdot H$.
It is obvious
that $\operatorname{dim} _{\mathbb{ R}} G/H=2n$.
\subsection{Description of the invariant almost complex structures.}
Assume we are given an invariant almost complex structure $J$ on
$G/H$, i.~e.~ $J$ is invariant under the canonical action of $G$ on
$G/H$. Then, according to~\cite{BH}, we can say the following.
\begin{itemize}\label{ac}
\item Since $J$ is invariant, it commutes with the adjoint
representation $Ad_{T}$ of the torus $T$. This implies that $J$
induces a complex structure on each complementary root subspace
$\gg _{\alpha _{1}},\ldots ,\gg _{\alpha _{n}}$. Therefore, $J$ can
be completely described by the root system $\varepsilon _{1} \alpha
_{1},\ldots ,\varepsilon _{n}\alpha _{n}$, where $\varepsilon _{i}=
\pm 1$ depending on whether $J$ and the adjoint representation
$Ad_{T}$ define the same orientation on $\gg _{\alpha _{i}}$ or
not, $1\leqslant i\leqslant n$. The roots $\varepsilon _{k}\alpha
_{k}$ are called {\it the roots of the almost complex structure} $J$.
\item If we assume $J$ to be integrable, then an ordering of the
canonical coordinates of $\TT$ can be chosen such that the roots
$\varepsilon _{1} \alpha _{1},\ldots ,\varepsilon _{n} \alpha _{n}$
which define $J$ form a closed system of positive roots.
\end{itemize}
Let us assume that $G/H$ admits an invariant almost complex
structure. Consider the isotropy representation $I_{e}$ of $H$ in
$T_{e}(G/H)$ and let it decompose into $s$ {\it real irreducible
representations} $I_e = I_e^1 + \ldots +I_e^s$. It is proved
in~\cite{BH} that $G/H$ then admits exactly $2^s$ invariant almost
complex structures. For completeness we briefly recall the proof of
this fact. Consider the decomposition of $T_{e}(G/H)$
\[
T_{e}(G/H) = \mathcal{ I} _{1}\oplus \ldots \oplus\mathcal{ I} _{s}
\]
such that the restriction of $I_e$ to $\mathcal{ I} _{i}$ is $I_{e}^i$. The
subspaces $\mathcal{ I} _1,\ldots ,\mathcal{ I}_{s}$ are invariant under $T$ and
therefore each of them is the sum of some root subspaces, i.e. $\mathcal{ I}
_{i} = \gg _{\alpha _{i_1}}\oplus\ldots \oplus \gg _{\alpha
_{i_j}}$, for some complementary roots $\alpha _{i_1},\ldots ,\alpha
_{i_j}$. Any linear transformation that commutes with $I_e$ leaves
each $\mathcal{ I} _i$ invariant. Since, by assumption, $G/H$ admits an
invariant almost complex structure, there is at least one linear
transformation without real eigenvalues that commutes with $I_e$.
This implies that the commuting field of each $I_{e}^i$ is the
field of complex numbers and, thus, on each $\mathcal{ I} _{i}$ we have
exactly two invariant complex structures.
\begin{rem}
Note that this consideration shows that the numbers $\varepsilon
_1,\ldots ,\varepsilon _n$ that define an invariant almost complex
structure may not vary independently.
\end{rem}
\begin{rem}
In this paper we consider almost complex structures on $G/H$ that
are invariant under the canonical action of the group $G$, which, as
we remarked, imposes some relations on $\varepsilon _1,\ldots
,\varepsilon _n$. If we require only $T$-invariance instead of
$G$-invariance, we have more degrees of freedom in $\varepsilon
_1,\ldots ,\varepsilon _n$. This paper will have a continuation in
which, among other things, the case of $T$-invariant structures will
be studied.
\end{rem}
\begin{ex}
Since the isotropy representation for $\mathbb{ C} P^{n}$ is irreducible over
the reals, it follows that on $\mathbb{ C} P^{n}$ we have only two invariant
almost complex structures, which are actually the standard complex
structure and its conjugate.
\end{ex}
\begin{ex}
The flag manifold $U(n)/T^{n}$ admits $2^{m}$ invariant almost
complex structures, where $m=\frac{n(n-1)}{2}$. By~\cite{BH} only two
of them, conjugate to each other, are integrable.
\end{ex}
\begin{ex}
As we already mentioned, the $10$-dimensional manifold
$M^{10}=U(4)/(U(1)\times U(1)\times U(2))$ is the first example of a
homogeneous space admitting two non-equivalent invariant complex
structures, see~\cite{BH}. In the last section of this paper we
also describe the cobordism classes of $M^{10}$ for these
structures.
\end{ex}
\subsection{The weights at the fixed points.}
We now fix an invariant almost complex structure $J$ on $G/H$ and
describe the weights of the canonical action $\theta$ of $T$ on
$G/H$ at the fixed points of this action. If $gH$ is a fixed point
of the action $\theta$, then we have a linear map $d_{g}\theta
(t)\colon T_{g}(G/H)\to T_{g}(G/H)$ for all $t\in T$. Therefore,
this action gives rise to the complex representation $d_{g}\theta$
of $T$ in $(T_{g}(G/H), J)$.
The weights of this representation at the identity fixed point are
described in~\cite{BH}.
\begin{lem}
The weights of the representation $d_{e}\theta$ of $T$ in
$(T_{e}(G/H),J)$ are given by the roots of the invariant almost
complex structure $J$.
\end{lem}
\begin{proof}
For clarity, let us recall the proof. The inner
automorphism $\ad (t)$, for $t\in T$, induces the map $\overline{\ad
}(t) : G/H\to G/H$ given by $\overline{\ad }(t)(gH) = t(gH)t^{-1} =
(tg)H$. Therefore, $\theta (t) = \overline{\ad} (t)$ and, thus,
$d_{e}\theta (t) = d_{e}\overline{\ad } (t)$ for any $t\in T$. This
directly gives that the weights of $d_{e}\theta$ in $(T_{e}(G/H),
J)$ are the roots that define $J$.
\end{proof}
For an arbitrary fixed point we prove the following.
\begin{thm}\label{weights}
Let $gH$ be a fixed point of the canonical action $\theta$ of $T$
on $G/H$. The weights of the induced representation $d_{g}\theta $
of $T$ in $(T_{g}(G/H), J)$ can be obtained from the weights of the
representation $d_{e}\theta$ of $T$ in $(T_{e}(G/H), J)$ by the
action of the Weyl group $W_{G}$ up to the action of the Weyl group
$W_{H}$.
\end{thm}
\begin{proof}
Note that Lemma~\ref{wfixed} gives that an arbitrary fixed point
can be written as $\mathrm{w} H$ for some $\mathrm{w} \in W_{G}/W_{H}$. Fix $\mathrm{w}
\in W_{G}/W_{H}$ and denote by $l(\mathrm{w} )$ the action of $\mathrm{w}$ on
$G/H$, given by $l(\mathrm{w})gH=(\mathrm{w} g)H$ and by $\ad (\mathrm{w} )$ the inner
automorphism of $G$ given by $\mathrm{w}$.
We observe that $\theta \circ \ad (\mathrm{w} ) = \ad (\mathrm{w} )\circ \theta
$ and therefore $d_{e}\theta \circ d_{e}\ad (\mathrm{w} ) = d_{e}\ad (\mathrm{w} )\circ
d_{e}\theta$. This implies that the weights for $d_{e}\theta \circ
d_{e}\ad (\mathrm{w} )$ are obtained by the action of $d_{e}\ad (\mathrm{w})$ on
the weights for $d_{e}\theta$. On the other hand, $\theta (\ad (\mathrm{w}
)t)gH = ({\mathrm{w}}^{-1}t\mathrm{w} g)H = (l({\mathrm{w}}^{-1})\circ \theta (t)\circ
l(\mathrm{w} ))gH$, which implies that $d_{e}(\theta \circ \ad (\mathrm{w} )) =
d_{\mathrm{w}}l({\mathrm{w}}^{-1})\circ d_{\mathrm{w}}\theta \circ d_{e}l(\mathrm{w} )$. This
gives that if, using the map $d_{\mathrm{w}}l({\mathrm{w}}^{-1})$, we lift the
weights for $d_{\mathrm{w}}\theta$ from $T_{\mathrm{w}}(G/H)$ to $T_{e}(G/H)$,
they coincide with the weights for $d_{e}\theta \circ
d_{e}\ad (\mathrm{w})$. Therefore, the weights for $d_{\mathrm{w}}\theta$ can be
obtained by the action of the element $\mathrm{w}$ on the weights for
$d_{e}\theta$.
\end{proof}
\section{The cobordism classes of homogeneous spaces with positive
Euler characteristic}
\begin{thm}
Let $G/H$ be a homogeneous space of a compact connected Lie group
such that $\operatorname{rk} G=\operatorname{rk} H = k$ and $\operatorname{dim} G/H=2n$, and consider the
canonical action $\theta$ of the common maximal torus $T=T^k$ of
$G$ and $H$ on $G/H$. Assume we are given an invariant almost
complex structure $J$ on $G/H$. Let $\Lambda_{j}=\varepsilon _{j}
\alpha _{j}$, $1\leqslant j\leqslant n$, where $\varepsilon _{1}
\alpha _{1},\ldots ,\varepsilon _{n}\alpha _{n}$ are the
complementary roots of $G$ related to $H$ which define the
invariant almost complex structure $J$. Then the toric genus of
$(G/H, J)$ is given by
\begin{equation}
\Phi (G/H, J) = \sum _{\mathrm{w} \in W_{G}/W_{H}} \prod _{j=1}^{n}\frac{1}
{ [\mathrm{w} (\Lambda_{j})]({\bf u})} \ .
\end{equation}
\end{thm}
\begin{proof}
Since all fixed points have sign $+1$, Theorem~\ref{EGFP} gives
that the toric genus of $(G/H, J)$ is
\begin{equation}\label{EQH}
\Phi (G/H, J) = \sum _{p\in P} \prod _{j=1}^{n}\frac{1}{[\Lambda
_{j}(p)]({\bf u})} \; ,
\end{equation}
where $P$ is the set of isolated fixed points and $\{\Lambda
_{1}(p),\ldots ,\Lambda _{n}(p)\}$ is the weight vector of the
representation of $T$ in $T_{p}(G/H)$ associated to the action
$\theta$. By Lemma~\ref{number}, the set of fixed points $P$
coincides with the orbit of the action of $W_{G}/W_{H}$ on $eH$,
and by Theorem~\ref{weights} the set of weight vectors at the fixed
points coincides with the orbit of the action of $W_{G}/W_{H}$ on
the weight vector $\Lambda$ at $eH$. The result follows by
substituting these data into formula~\eqref{EQH}.
\end{proof}
\begin{cor}\label{ch-tor-hom}
The Chern-Dold character of the toric genus of the homogeneous
space $(G/H, J)$ is given by
\begin{equation}\label{CC}
ch_{U}\Phi(G/H, J) = \sum _{\mathrm{w} \in W_{G}/W_{H}}\prod _{j=1}^{n}
\frac{f (\langle \mathrm{w} (\Lambda_{j}), {\bf x}\rangle )}{\langle \mathrm{w}
(\Lambda_{j}),{\bf x}\rangle} \; ,
\end{equation}
where $f(t) = 1+ \sum\limits_{i\geqslant 1}a_{i}t^{i}$ with $a_{i}
\in \Omega _{U}^{-2i}(\mathbb{ Z})$, ${\bf x}=(x_1,\ldots ,x_{k})$ and
$\langle \Lambda _{j}, {\bf x} \rangle=\sum\limits_{l=1}^k
\Lambda_j^lx_l$, where $\Lambda _{j}$ denotes the $j$-th weight
vector of the $T^k$-representation at $e\cdot H$.
\end{cor}
\begin{cor}\label{chom}
The cobordism class of $(G/H, J)$ is given as the coefficient of
$t^{n}$ in the series in $t$
\begin{equation}
\sum _{\mathrm{w} \in W_{G}/W_{H}}\prod _{j=1}^{n} \frac{f(t\langle \mathrm{w}
(\Lambda_{j}), {\bf x}\rangle )}{\langle \mathrm{w} (\Lambda_{j}) ,{\bf
x}\rangle} \; .
\end{equation}
\end{cor}
\begin{rem}
Since the weights of different invariant almost complex structures
on a fixed homogeneous space $G/H$ differ only by signs,
Corollary~\ref{chom} provides a way to compare the cobordism
classes of two such structures on $G/H$ without computing these
classes explicitly.
\end{rem}
\section{Characteristic numbers of homogeneous spaces
with positive Euler characteristic.}
\subsection{Generally about stable complex manifolds.} Let $M^{2n}$ be
an equivariant stable complex manifold such that the given action
$\theta$ of the torus $T^{k}$ on $M^{2n}$ has only isolated fixed
points. Denote by $P$ the set of fixed points of $\theta$ and set
$t_{j}(p) = \langle \Lambda _{j}(p), {\bf x}\rangle$, where $\{
\Lambda _{j}(p), \; j=1,\ldots,n\}$ are the weight vectors of the
representation of $T^{k}$ at a fixed point $p$ given by the action
$\theta$ and ${\bf x}=(x_{1},\ldots ,x_{k})$.
Set
\begin{equation}\label{fdecomp}
\prod_{i=1}^{n}f(t_i)=1+\sum f_\omega(t_1, \ldots, t_n)a^\omega \; .
\end{equation}
Using this notation, Proposition~\ref{coeffzero} can be
reformulated in the following way.
\begin{prop}\label{coeffzero1}
For any $\omega$ with $0\leqslant \|\omega\| \leqslant (n-1)$ we have that
\[
\sum _{p\in P}\mathrm{sign} (p)\cdot \frac{f_\omega(t_1(p), \ldots,
t_n(p))}{t_1(p) \cdots t_n(p)} = 0 \; .
\]
\end{prop}
Note that Proposition~\ref{coeffzero1} gives strong
constraints on the set of signs $\{\mathrm{sign} (p)\}$ and the set of
weights $\{\Lambda _{j}(p)\}$ at the fixed points of a manifold with
a given torus action and related equivariant stable complex
structure. For example, for $\omega = (i_1,\ldots ,i_n)$ such that
$i_{k}=1$ for exactly one $k$ with $1\leqslant k\leqslant n-1$ and
$i_{j}=0$ for $j\neq k$, it gives that the signs and the weights at
the fixed points have to satisfy the following relations.
\begin{cor}\label{second}
\[
\sum _{p\in P}\mathrm{sign} (p)\cdot \frac{\sum\limits _{j=1}^{n}t_{j}^{k}(p)}{t_1(p)
\cdots t_n(p)} = 0 \; ,
\]
where $0\leqslant k\leqslant n-1$.
\end{cor}
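As a numerical illustration of these relations (a sketch in Python; the fixed points of the canonical $T^{4}$-action on the Grassmann manifold $G_{4,2}$ are labelled by $2$-subsets $S$ with weights $x_{i}-x_{j}$, $i\notin S$, $j\in S$, and all signs equal to $+1$, as in the Grassmannian example of the next section):

```python
from fractions import Fraction
from itertools import combinations

# Weights of the canonical T^4-action on G_{4,2} at the fixed point
# labelled by a 2-subset S of {0,1,2,3}: x_i - x_j, i not in S, j in S.
x = [Fraction(v) for v in (1, 2, 3, 4)]   # sample point, pairwise distinct

def weights(S):
    return [x[i] - x[j] for i in range(4) if i not in S for j in S]

def power_sum_relation(k):
    """Left-hand side of the relation for the power sum of degree k."""
    total = Fraction(0)
    for S in combinations(range(4), 2):
        t = weights(S)
        den = t[0] * t[1] * t[2] * t[3]
        total += sum(w ** k for w in t) / den
    return total

# Here n = 4, so the relations must hold for k = 0, 1, 2, 3.
print([power_sum_relation(k) for k in range(4)])   # four zeros
```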
As we already mentioned in~\eqref{charact} the cobordism class for
$M^{2n}$ can be represented as
\[
[M^{2n}]= \sum_{\| \omega \|= n}s_{\omega}(\tau (M^{2n}))a^{\omega} \; ,
\]
where $\omega = (i_{1},\ldots ,i_{n})$, $\| \omega \| = \sum _{l=1}^{n}
l\cdot i_{l}$ and $a ^{\omega} = a_{1}^{i_1}\cdots a_{n}^{i_{n}}$.
If the given action $\theta$ of $T^{k}$ on $M^{2n}$ has only isolated
fixed points, the coefficients $s_{\omega}(\tau (M^{2n}))$ can be
explicitly described using Proposition~\ref{cobc} and
expression~\eqref{fdecomp}.
\begin{thm}\label{sthm}
Let $M^{2n}$ be an equivariant stable complex manifold such that the
given action $\theta$ of $T^{k}$ has only isolated fixed points.
Denote by $P$ the set of fixed points of $\theta$ and set
$t_{j}(p) = \langle \Lambda _{j}(p), {\bf x}\rangle$, where $\Lambda
_{j}(p)$ are the weight vectors of the representation of $T^{k}$ at
the fixed points given by the action $\theta$ and ${\bf
x}=(x_{1},\ldots ,x_{k})$. Then for $\|\omega\|=n$
\begin{equation}\label{somega}
s_{\omega}(\tau (M^{2n}))=\sum _{p\in P}\mathrm{sign} (p)\cdot
\frac{f_\omega(t_1(p), \ldots, t_n(p))}{t_1(p) \cdots t_n(p)} \; .
\end{equation}
\end{thm}
\begin{ex}\label{ex2}
\[ s_{(n,0,\ldots,0)}(\tau (M^{2n}))=
\sum _{p\in P}\mathrm{sign} (p)\; . \]
\end{ex}
\begin{ex}\label{ex3}
\[ s_{(0,\ldots,0,1)}(\tau (M^{2n}))=s_n(M^{2n})=
\sum _{p\in P}\mathrm{sign} (p) \frac{\sum\limits_{j=1}^n t_j^n(p)}{t_1(p)
\cdots t_n(p)} \; . \]
\end{ex}
\begin{rem}\label{int}
Note that the left hand side of~\eqref{somega} in
Theorem~\ref{sthm} is the integer $s_{\omega}(\tau
(M^{2n}))$, while the right hand side is a rational function in the
variables $x_1, \ldots, x_k$. So this theorem imposes strong
restrictions on the sets of signs $\{\mathrm{sign} (p)\}$ and weight vectors
$\{\Lambda_j (p)\}$ at the fixed points.
\end{rem}
\subsubsection{On the existence of further stable complex structures.}
Let us assume that a manifold $M^{2n}$ endowed with a torus action
$\theta$ with isolated fixed points admits a $\theta$-equivariant
stable complex structure $c_{\tau}$. Denote by $\Lambda_{1}(p),\ldots
,\Lambda _{n}(p)$ the weights of the action $\theta$ at the fixed
points $p\in P$ and by $\mathrm{sign}(p)$ the signs at the fixed
points related to $c_{\tau}$. Let further $t_{j}(p) = \langle
\Lambda_{j}(p), {\bf x}\rangle$, $1\leqslant j\leqslant n$. Then
Theorem~\ref{weights2}, Proposition~\ref{coeffzero1} and
Theorem~\ref{sthm} give the following necessary condition for the
existence of another $\theta$-equivariant stable complex structure
on $M^{2n}$.
\begin{prop}\label{neccond}
If $M^{2n}$ admits another $\theta$-equivariant stable complex
structure $(M^{2n},c^{'}_{\tau},\theta)$, then there exist
$a_{i}(p)=\pm 1$, where $p\in P$ and $1\leqslant i\leqslant n$, such
that the following conditions are satisfied:
\begin{itemize}
\item for any $\omega$ with $0\leqslant \|\omega\| \leqslant (n-1)$
\begin{equation}
\sum _{p\in P}\mathrm{sign} (p)\frac{f_\omega(a_{1}(p)t_1(p), \ldots,
a_{n}(p)t_n(p))}{t_1(p) \cdots t_n(p)} = 0 \; .
\end{equation}
\item for any $\|\omega\|=n$
\begin{equation}\label{somegaad}
\sum _{p\in P}\mathrm{sign} (p)
\frac{f_\omega(a_{1}(p)t_1(p), \ldots, a_{n}(p)t_n(p))}{t_1(p) \cdots t_n(p)}
\end{equation}
is an integer.
\end{itemize}
\end{prop}
As a special case we get an analogue of Corollary~\ref{second}.
\begin{cor}
If $M^{2n}$ admits another $\theta$-equivariant stable complex
structure $(M^{2n},c^{'}_{\tau},\theta)$, then there exist
$a_{j}(p)=\pm 1$, where $p\in P$ and $1\leqslant j\leqslant n$, such
that
\begin{equation}\label{secondstable}
\sum _{p\in P}\mathrm{sign} (p)\frac{\sum\limits _{j=1}^{n}(a_{j}(p))^{k}t_{j}^{k}(p)}{t_1(p)
\cdots t_n(p)} = 0 \; ,
\end{equation}
for $1\leqslant k\leqslant n-1$.
\end{cor}
\begin{rem}
Note that the relations~\eqref{secondstable} give constraints on
the existence of a second stable complex structure only for odd
$k$. In the same spirit, it follows from Example~\ref{ex3} that if
$n$ is even then the characteristic number $s_{n}(M^{2n})$ is the
same for all stable complex structures on $M^{2n}$ equivariant
under the fixed torus action $\theta$. For odd $n$, as
Example~\ref{cps} and Subsection~\ref{M10} will show, these numbers
may differ.
\end{rem}
\subsection{Homogeneous spaces of positive Euler characteristic
with an invariant almost complex structure.} Let us assume that
$M^{2n}$ is a homogeneous space $G/H$ of positive Euler
characteristic with the canonical action of a maximal torus and
endowed with an invariant almost complex structure $J$. All fixed
points have sign $+1$ and, taking into account
Theorem~\ref{weights}, Proposition~\ref{coeffzero1} gives that the
weights at the fixed points have to satisfy the following relations.
\begin{cor}
For any $\omega$ with $0\leqslant \|\omega\| \leqslant (n-1)$, where
$2n = \operatorname{dim} G/H$, we have that
\begin{equation}
\sum _{\mathrm{w} \in W_{G}/W_{H}} \mathrm{w} \Big( \frac{f_\omega(t_1, \ldots,
t_n)}{t_1 \cdots t_n}\Big ) = 0 \; ,
\end{equation}
where $t_j = \langle \Lambda _{j}, {\bf x}\rangle$ and $\Lambda
_{j}, \; 1\leqslant j\leqslant n$, are the weights at the fixed
point $e \cdot H$.
\end{cor}
In the same way, Theorem~\ref{sthm} implies the following.
\begin{thm}\label{s}
For $M^{2n}=G/H$ and $t_j=\langle \Lambda_j,{\bf x} \rangle$, where
$\langle \Lambda_j,{\bf x} \rangle=\sum\limits_{l=1}^k \Lambda_j^l
x_l, \; {\bf x}=(x_1,\ldots,x_k), \; k=\operatorname{rk} G=\operatorname{rk} H$, we have
\begin{equation}
s_{\omega}(\tau (M^{2n}))=\sum _{\mathrm{w} \in W_{G}/W_{H}}\mathrm{w} \Big(
\frac{f_\omega(t_1, \ldots, t_n)}{t_1 \cdots t_n}\Big)
\end{equation}
for any $\omega$ such that $\|\omega\|=n$.
\end{thm}
\begin{ex}\label{euler}
\[
s_{(n,0,\ldots,0)}(G/H, J)= \| W_{G}/W_{H}\| = \chi (G/H)
\]
and, therefore, $s_{(n,0,\ldots ,0)}(G/H, J)$ does not depend on
the invariant almost complex structure $J$.
\end{ex}
\begin{cor}\label{sn}
\[
s_{(0,\ldots,0,1)}(G/H, J)= s_n(G/H, J)= \sum _{\mathrm{w} \in
W_{G}/W_{H}} \mathrm{w} \Big( \frac{\sum\limits_{j=1}^n t_j^n}{t_1 \cdots
t_n}\Big) \; .
\]
\end{cor}
\begin{ex}In the case $\mathbb{C}P^n=G/H$, where $G=U(n+1)$, $H=U(1)\times
U(n)$, we have the action of $T^{n+1}$, and related to the standard
complex structure the weights are given by $\langle\Lambda_j,{\bf
x}\rangle = x_j-x_{n+1}$, $j=1,\ldots, n$, while
$W_{G}/W_{H}=\mathbb{Z}_{n+1}$ is the cyclic group. So
\begin{equation}
s_n(\mathbb{C}P^n) = \sum_{i=1}^{n+1}\frac{\sum\limits_{j\neq i}
(x_i-x_j)^n}{\prod\limits_{j\neq i}(x_i-x_j)}= n+1 \; .
\end{equation}
\end{ex}
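The evaluation $s_{n}(\mathbb{C}P^{n})=n+1$ can be confirmed numerically: the sum of rational functions collapses to the constant $n+1$ at any point with pairwise distinct coordinates. A sketch (Python; exact arithmetic via fractions, the helper name is ours):

```python
from fractions import Fraction

def s_n_cpn(n, x):
    """Right-hand side of the localization formula for s_n(CP^n):
    the weights at the i-th fixed point are x_i - x_j, j != i."""
    total = Fraction(0)
    for i in range(n + 1):
        diffs = [Fraction(x[i] - x[j]) for j in range(n + 1) if j != i]
        num = sum(d ** n for d in diffs)
        den = Fraction(1)
        for d in diffs:
            den *= d
        total += num / den
    return total

for n in (1, 2, 3, 4):
    print(n, s_n_cpn(n, list(range(1, n + 2))))   # prints n, then n + 1
```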
\begin{ex}
Let us consider the Grassmann manifold $G_{q+2,2}=G/H$, where
$G=U(q+2)$, $H=U(q)\times U(2)$. Here we have the canonical action of
the torus $T^{q+2}$. The weights of this action at the identity point
related to the standard complex structure are given by $\langle
\Lambda _{ij}, {\bf x}\rangle = x_i - x_j$, where $1\leqslant i\leqslant q$,
$j=q+1,q+2$. There are $\|W_{U(q+2)}/W_{U(2)\times
U(q)}\|=\frac{(q+2)(q+1)}{2}$ fixed points of this action.
Therefore
\begin{equation}\label{gn2}
s_{2q}(G_{q+2,2}) = \sum _{\mathrm{w} \in W_{U(q+2)}/W_{U(2)\times U(q)}}
\mathrm{w} \Big( \frac{\sum\limits_{i=1}^{q}\big(
(x_i-x_{q+1})^{2q}+(x_i-x_{q+2})^{2q}\big)
}{\prod\limits_{i=1}^{q}(x_i-x_{q+1})(x_i-x_{q+2})}\Big) \; .
\end{equation}
The action of the group $W_{U(q+2)}/W_{U(2)\times U(q)}$ on the
weights at the identity point in formula~\eqref{gn2} is given by
permutations between the coordinates $x_1,\ldots ,x_q$ and the
coordinates $x_{q+1},x_{q+2}$. The non-trivial such permutations
are described explicitly as follows:
\begin{align*}
\mathrm{w} _{k,q+1}(k) &= q+1, \; \mathrm{w} _{k,q+1}(q+1) =k, \; \mbox{where} \;1\leqslant
k\leqslant q \; ,\\
\mathrm{w} _{k,q+2}(k) &= q+2, \; \; \mathrm{w} _{k,q+2}(q+2)\; = k, \; \mbox{where} \;
1\leqslant k\leqslant q \; ,
\end{align*}
\[ \mathrm{w} _{k,l}(q+1)=k, \; \mathrm{w} _{k,l}(k) = q+1, \; \mathrm{w} _{k,l}(q+2)=l, \; \mathrm{w}
_{k,l}(l) =q+2 \;\; \mbox{for}\; 1\leqslant k\leqslant q-1, \;
k+1\leqslant l\leqslant q \; .
\]
As we remarked before (see Remark~\ref{int}), the expression on the
right hand side of~\eqref{gn2} is an integer, so we can get a value
for $s_{2q}$ by choosing appropriate values for the vector
$(x_1,\ldots ,x_{q+2})$. For example, if we take $q=2$ and
$(x_1,x_2,x_3,x_4) = (1,2,3,4)$, a straightforward application of
formula~\eqref{gn2} gives $s_{4}(G_{4,2}) = -20$.
\end{ex}
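The value $s_4(G_{4,2})=-20$ quoted above can be cross-checked by summing over the six cosets directly; they correspond to the choices of the two indices playing the roles of $q+1,q+2$. A plain-Python sketch with exact rationals (the function name is ours):

```python
from fractions import Fraction
from itertools import combinations

def s4_g42(xs):
    # Sum of the formula (gn2) for q = 2 over the 6 fixed points of G_{4,2}.
    total = Fraction(0)
    for A in combinations(range(4), 2):   # indices in the role of q+1, q+2
        B = [i for i in range(4) if i not in A]
        diffs = [Fraction(xs[i] - xs[j]) for i in B for j in A]
        num = sum(d**4 for d in diffs)
        den = Fraction(1)
        for d in diffs:
            den *= d
        total += num / den
    return total

assert s4_g42((1, 2, 3, 4)) == -20
```

The expression is again constant in $(x_1,\ldots,x_4)$, in agreement with the integrality remarked above.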
\begin{ex}In the case $G_{q+l,l}=G/H$, where $G=U(q+l),\; H=U(q)\times
U(l)$ we have
\begin{equation}
s_{lq}(G_{q+l,l}) = \sum_{\sigma \in S_{q+l}/(S_q \times S_l)}\sigma
\Big( \frac{\sum (x_i-x_j)^{lq}}{\prod(x_i-x_j)}\Big) \; ,
\end{equation}
where $1 \leqslant i \leqslant q, \; (q+1) \leqslant j \leqslant
(q+l)$ and $S_{q+l}$ is the symmetric group.
\end{ex}
We consider the case of this Grassmann manifold in more detail later,
in Section~\ref{app}.
\subsubsection{Chern numbers.}
We want to deduce explicit relations between the cohomology
characteristic numbers $s_{\omega}$ and the classical Chern numbers for
an invariant almost complex structure on $G/H$.
\begin{prop}\label{orbit}
The number $s_{\omega}(\tau (M^{2n}))$, where $\omega=(i_1, \ldots,
i_n), \; \|\omega\|=n$, is the characteristic number that
corresponds to the characteristic class given by the orbit of the
monomial
\[
(u_{1}\cdots u_{i_1})(u_{i_1+1}^2\cdots u_{i_1+i_2}^2)\cdots
(u_{i_1+\ldots +i_{n-1}+1}^{n}\cdots u_{i_1+\ldots +i_n}^{n}) \; .
\]
\end{prop}
\begin{rem}
Let $\xi=(j_1,\ldots, j_n)$ and ${\bf u}^\xi=u_{1}^{j_1}\cdots
u_{n}^{j_n}$. The orbit of the monomial ${\bf u}^\xi$ is defined
with
\[
O({\bf u}^\xi) = \sum {\bf u}^{\xi'} \; ,
\]
where the sum is over the orbit $\{\xi'=\sigma \xi, \; \sigma \in
S_n \}$ of the vector $\xi \in \mathbb{Z}^n$ under the symmetric
group $S_{n}$ acting by permutations of coordinates of $\xi$.
\end{rem}
\begin{ex}\label{ex1}
If we take $\omega = (n,0,\ldots ,0)$ we need to compute the
coefficient of $a_{1}^{n}$, and it is given by the orbit
$O(u_{1}\cdots u_{n})$, which is the elementary symmetric function
$\sigma _{n}$. If we take $\omega =(0,\ldots,0,1)$ then we should
compute the coefficient of $a_{n}$, and it is given by
$O(u_{1}^{n})=\sum _{j=1}^{n} u_{j}^{n}$, which is the Newton polynomial.
\end{ex}
It is a well-known fact from the algebra of symmetric functions that
the orbits of monomials give an additive basis for the algebra of
symmetric functions. Therefore, any orbit of a monomial can be
expressed through elementary symmetric functions and vice versa. This
gives the expressions for the characteristic numbers $s_{\omega}$ in
terms of the Chern characteristic numbers $c^\omega=c_1^{i_1}\cdots
c_n^{i_n}$ for an almost complex homogeneous space $(G/H, J)$.
\begin{thm}\label{sc}
Let $\omega =(i_1,\ldots, i_n), \; \|\omega\|=n$, and assume that
the orbit of the monomial
\[
(u_{1}\cdots u_{i_1})(u_{i_1+1}^2\cdots u_{i_1+i_2}^2)\cdots
(u_{i_1+\ldots +i_{n-1}+1}^{n}\cdots u_{i_1+\ldots +i_n}^{n})
\]
is expressed through the elementary symmetric functions as
\begin{equation}\label{change}
O((u_{1}\cdots u_{i_1})(u_{i_1+1}^2\cdots u_{i_1+i_2}^2)\cdots
(u_{i_1+\ldots +i_{n-1}+1}^{n}\cdots u_{i_1+\ldots +i_n}^{n}))=
\end{equation}
\[
= \sum _{\| \xi \| = n} \beta_{\omega \xi}\sigma _{1}^{l_1}\cdots \sigma
_{n}^{l_n}
\]
for some $\beta_{\omega \xi}\in \mathbb{ Z}$ and $\| \xi\| = \sum _{j=1}^{n}j\cdot
l_j$, where $\xi=(l_1, \ldots, l_n)$. Then it holds
\begin{equation}\label{nocohomchern}
s_{\omega}(G/H, J) = \sum _{\mathrm{w} \in W_{G}/W_{H}}\mathrm{w} \Big(
\frac{f_\omega(t_1, \ldots, t_n)}{t_1 \dots t_n}\Big)= \sum _{\| \xi
\| = n} \beta_{\omega \xi}c_{1}^{l_1}\cdots c_{n}^{l_n} \; ,
\end{equation}
where $c_{i}$ are the Chern classes for the tangent bundle of
$(G/H,J)$.
\end{thm}
\begin{rem}
Let $p(n)$ denote the number of partitions of the number $n$. By
varying $\omega$, equation~\eqref{nocohomchern} gives a system
of $p(n)$ linear equations in the Chern numbers whose determinant is,
by~\eqref{change}, non-zero. Therefore, it provides explicit
formulas for the computation of the Chern numbers.
\end{rem}
\begin{rem}
We want to point out that relation~\eqref{nocohomchern} in
Theorem~\ref{sc}, together with Theorem~\ref{s}, proves that the Chern
numbers for $(G/H, J)$ can be computed without having any
information on the cohomology of $G/H$.
\end{rem}
\begin{ex}\label{ex4}
We provide the direct application of Theorem~\ref{sc} following
Example~\ref{ex1}. It is straightforward to see that
$s_{(n,0,\ldots ,0)}(G/H) = c_{n}(G/H)$ for any invariant almost
complex structure. This together with Example~\ref{euler} gives that
$c_{n}(G/H) = \chi (G/H)$.
\end{ex}
We want to add that a description is given in~\cite{ms} of the
numbers $s_{I}$ that correspond to our characteristic numbers
$s_{\omega}$, but the numerations $I$ and $\omega$ are different.
To the partition $i\in I$ corresponds the $n$-tuple $\omega
=(i_1,\ldots ,i_n)$ such that $i_k$ is equal to the number of
appearances of the number $k$ in the partition $i$.
\section{On equivariant stable complex homogeneous spaces of
positive Euler characteristic.}\label{eqsthom}
Assume we are given on $G/H$ a stable complex structure $c_{\tau}$
which is equivariant under the canonical action $\theta$ of the
maximal torus $T$. If $p=gH$ is a fixed point of the action
$\theta$, then, by Sections 2 and 3, we see that $T_{gH}(G/H)=\gg
^{\mathbb{ C}}_{\mathrm{w} (\alpha _{1})}\oplus \ldots \oplus\gg ^{\mathbb{ C}}_{\mathrm{w} (\alpha
_{n})}$, where $\mathrm{w} \in W_{G}/W_{H}$ and $\alpha _{1},\ldots ,\alpha
_{n}$ are the complementary roots for $G$ related to $H$. The
following statement is a direct consequence of
Theorem~\ref{weights2} and Lemma~\ref{wfixed}.
\begin{cor}\label{stableweights}
Let $\alpha _{1},\ldots ,\alpha _{n}$ be the set of complementary
roots for $G$ related to $H$. The set of weights of the action
$\theta$ at the fixed points related to an arbitrary equivariant
stable complex structure $c_{\tau}$ is of the form
\begin{equation}\label{stablevectors}
\{ a_{1}(\mathrm{w} )\cdot \mathrm{w} (\alpha _{1}),\ldots ,a_{n}(\mathrm{w} )\cdot
\mathrm{w} (\alpha _{n}) \} \; ,
\end{equation}
where $\mathrm{w} \in W_{G}/W_{H}$ and $a_{i}(\mathrm{w} )=\pm 1$ for $1\leqslant
i\leqslant n$.
\end{cor}
For the signs at the fixed points (which we identify with $\mathrm{w} \in
W_{G}/W_{H}$) using Theorem~\ref{weights2} we obtain the following.
\begin{cor}\label{stablesigns}
Assume that $G/H$ admits an invariant almost complex structure $J$
defined by the complementary roots $\alpha _{1},\ldots,\alpha_{n}$.
Let $c_{\tau}$ be the $\theta$-equivariant stable complex structure
with the set of weights $\{ a_{1}(\mathrm{w} )\cdot \mathrm{w} (\alpha
_{1}),\ldots ,a_{n}(\mathrm{w} )\cdot \mathrm{w} (\alpha _{n}) \}$, $\mathrm{w} \in
W_{G}/W_{H}$ at the fixed points. The signs at the fixed points are
given by
\begin{equation}\label{csign}
\mathrm{sign} (\mathrm{w} ) = \epsilon \cdot \prod _{i=1}^{n}a_{i}(\mathrm{w} ), \;\; \mathrm{w}\in W_{G}/W_{H} \; ,
\end{equation}
where $\epsilon =\pm 1$ depending on whether $J$ and $c_{\tau}$ define the
same orientation on $M^{2n}$ or not.
\end{cor}
This implies the following consequence of Proposition~\ref{neccond}.
\begin{cor}\label{stablecoeffzero}
Assume that $G/H$ admits an invariant almost complex structure $J$
defined by the complementary roots $\alpha _{1},\ldots,\alpha_{n}$.
Let $c_{\tau}$ be the $\theta$-equivariant stable complex structure
with the set of weights $\{ a_{1}(\mathrm{w} )\cdot \mathrm{w} (\alpha
_{1}),\ldots ,a_{n}(\mathrm{w} )\cdot \mathrm{w} (\alpha _{n}) \}$, $\mathrm{w} \in
W_{G}/W_{H}$ at the fixed points. Then
\begin{itemize}
\item for any $\omega$ with $0\leqslant \|\omega\| \leqslant (n-1)$ we have that
\begin{equation}
\sum _{\mathrm{w} \in W_{G}/W_{H}}\frac{f_{\omega}(a_{1}(\mathrm{w} )\mathrm{w} (\alpha _{1}),\ldots ,
a_{n}(\mathrm{w} )\mathrm{w} (\alpha _{n}))}{\mathrm{w} (\alpha_1) \cdots \mathrm{w} (\alpha_n)} = 0 \; ;
\end{equation}
\item for any $\|\omega\|=n$
\begin{equation}
\sum _{\mathrm{w} \in W_{G}/W_{H}}
\frac{f_\omega(a_{1}(\mathrm{w} )\mathrm{w} (\alpha _{1}),\ldots, a_{n}(\mathrm{w} )\mathrm{w} (\alpha _{n}))}
{\mathrm{w} (\alpha _{1}) \cdots \mathrm{w} (\alpha_{n})}
\end{equation}
is an integer.
\end{itemize}
\end{cor}
\begin{rem}
Corollary~\ref{stablecoeffzero} gives strong constraints on
the numbers $a_{i}(\mathrm{w} )$ that appear as the ``coefficients'' (related
to $J$) of the weights of the $\theta$-equivariant stable complex
structure. In that way, it provides information on which vectors $\{
a_{1}(\mathrm{w} )\cdot \mathrm{w} (\alpha _{1}),\ldots ,a_{n}(\mathrm{w} )\cdot \mathrm{w}
(\alpha _{n}) \}$, $\mathrm{w} \in W_{G}/W_{H}$, cannot be realized as the
weight vectors of a $\theta$-equivariant stable complex
structure.
\end{rem}
\begin{ex}\label{cps}
Consider the complex projective space $\mathbb{ C} P^{3}$ and let $c_{\tau}$ be
a stable complex structure on $\mathbb{ C} P^{3}$ equivariant under the
canonical action of the torus $T^4$. By
Corollary~\ref{stableweights}, the weights for $c_{\tau}$ at the
fixed points are $a_{1}(\mathrm{w} )\cdot \mathrm{w} (x_1-x_4), a_{2}(\mathrm{w} )\cdot
\mathrm{w} (x_2-x_4), a_{3}(\mathrm{w} )\cdot \mathrm{w} (x_3-x_4)$, where $\mathrm{w} \in
W_{U(4)}/W_{U(3)} = \mathbb{ Z} _{4}$. Corollary~\ref{stablecoeffzero}
implies that the coefficients in $t$ and $t^2$ in the polynomial
\begin{equation}\label{pol}
\prod_{1\leq i<j\leq 4}(x_i-x_j)\cdot \sum_{\mathrm{w} \in \mathbb{ Z}_{4}}
\frac{f(ta_{1}(\mathrm{w} )\mathrm{w} (x_1-x_4))f(ta_{2}(\mathrm{w} )\mathrm{w}
(x_2-x_4))f(ta_{3}(\mathrm{w} )\mathrm{w} (x_3-x_4))}{ \mathrm{w} (x_1-x_4)\mathrm{w}
(x_2-x_4)\mathrm{w} (x_3-x_4)} \; ,
\end{equation}
where $f(t)=1+a_1t+a_{2}t^2+a_{3}t^3$, have to be zero. The
coefficient in $t$ for~\eqref{pol} is a polynomial
$P(x_1,x_2,x_3,x_4)$ of degree $4$ whose coefficients are some
linear combinations of the numbers $a_{i}(\mathrm{w} )$. Some of them are
as follows
\[
x_1x_4^3:a_{2}(2) - a_{3}(3), \;\; x_1^3x_4: a_{1}(3)-a_{1}(2), \;\;
x_1^3x_2: a_{1}(0)-a_{1}(3), \;\; x_2^3x_4: a_{2}(1)-a_{2}(3),
\]
\[
x_3x_4^3: a_{1}(1)-a_{2}(2),\;\; x_3^3x_4: a_{3}(2)-a_{3}(1),\;\;
x_1x_3^3: a_{3}(0)-a_{3}(2) \; ,
\]
which implies that
\begin{equation}\label{arelations}
a_{1}(0)=a_{1}(3)=a_{1}(2),\; a_{2}(0) = a_{2}(1) = a_{2}(3) \; ,
\end{equation}
\[ a_{3}(0) = a_{3}(1)=a_{3}(2), a_{1}(1)= a_{2}(2)=a_{3}(3) \; .
\]
The direct computation shows that the requirement
$P(x_1,x_2,x_3,x_4)\equiv 0$ is equivalent to the
relations~\eqref{arelations} and also that the same relations give
that the coefficient in $t^2$ in the polynomial~\eqref{pol} will
be zero.
Therefore, the weights for $c_{\tau}$ at the fixed points are
completely determined by the values of $a_{1}(0), a_{2}(0),
a_{3}(0), a_{1}(1)$. By Corollary~\ref{stablesigns} the signs at the
fixed points are also determined by these values, up to the factor
$\epsilon = \pm 1$ depending on whether $c_{\tau}$ and the standard canonical
structure on $\mathbb{ C} P^{3}$ give the same orientation or not.
It turns out that in this case, for all possible values $\pm 1$ of
$a_{1}(0), a_{2}(0), a_{3}(0), a_{1}(1)$, the corresponding
vectors~\eqref{stablevectors} can be realized as the weight vectors
of stable complex structures. In order to verify this we look at
the decomposition $T(\mathbb{ C} P^{3})\oplus \mathbb{ C} \cong \hat {\eta}\oplus
\hat{\eta}\oplus \hat{\eta}\oplus \hat{\eta}$, where $\hat{\eta}$ is
the conjugate to the Hopf bundle over $\mathbb{ C} P^{3}$. Using this
decomposition we can get the stable complex structures on $T(\mathbb{ C}
P^{3})$ by choosing the complex structures on each $\hat{\eta}$.
The weight vector for any such stable complex structure is
determined by the corresponding choices for the values of $a_{1}(0),
a_{2}(0), a_{3}(0), a_{1}(1)$. For example, if we take
$a_{1}(0)=a_{2}(0)=a_{3}(0)=a_{1}(1)=1$ we get the weights of the
standard complex structure, which arises from the stable complex
structure $\hat {\eta}\oplus \hat{\eta}\oplus \hat{\eta}\oplus
\hat{\eta}$.
If we take $a_{1}(0)=a_{2}(0)=a_{3}(0)=1$,
$a_{1}(1)=-1$ the corresponding weights come from the stable complex
structure $\hat{\eta}\oplus \hat{\eta}\oplus \hat{\eta}\oplus \eta$.
It follows that this stable complex structure and the standard complex structure
define opposite orientations on $\mathbb{ C} P^3$ and, therefore, $\epsilon=-1$.
Using~\eqref{csign} we obtain that, related to this stable complex structure, the signs at the fixed
points are $\mathrm{sign}(0)=-1$, $\mathrm{sign}(1)=\mathrm{sign}(2)=\mathrm{sign}(3)=1$. The weights at the fixed points are:
\[
(0) : x_1-x_4,\; x_2-x_4,\; x_3-x_4;\;\; (1) : x_1-x_4,\; x_2-x_1,\; x_3-x_1\;;
\]
\[
(2) : x_1-x_2,\; x_2-x_4,\; x_3-x_2;\;\; (3) : x_1-x_3,\; x_2-x_3,\; x_3-x_4\;.
\]
By Example~\ref{ex3} we obtain that the number $s_{3}$ for $\mathbb{ C} P^3$ related to this stable complex
structure is equal to $-2$. This shows that $\mathbb{ C} P^3$ with this non-standard stable complex structure realizes a multiplicative generator of dimension $6$ in the complex cobordism ring.
The relations~\eqref{arelations} also give examples of
vectors~\eqref{stablevectors} that cannot be realized as the weight
vectors of stable complex structures on $\mathbb{ C} P^{3}$ equivariant
under the canonical torus action.
\end{ex}
\begin{ex}
Using Proposition~\ref{stableweights}, we can also find examples
of vectors~\eqref{stablevectors} on the flag manifold $U(3)/T^3$
or the Grassmann manifold $G_{4,2}$ that can {\it not} be realized as
the weight vectors of stable complex structures.
In the case of $U(3)/T^3$ any stable complex structure has
weights of the form $a_1(\sigma)\sigma (x_1-x_2),a_2(\sigma)\sigma
(x_1-x_3),a_3(\sigma)\sigma (x_2-x_3)$, where $\sigma \in S_{3}$.
The relation~\eqref{secondstable} gives that for $k=1$ the numbers
$a_{i}(\sigma )$ have to satisfy the following relation
\[
\sum_{\sigma \in S_{3}}\frac{a_1(\sigma)\sigma
(x_1-x_2)+a_2(\sigma)\sigma (x_1-x_3)+a_3(\sigma)\sigma
(x_2-x_3)}{\sigma (x_1-x_2)\sigma (x_1-x_3)\sigma (x_2-x_3)} = 0 \;
.
\]
It is equivalent to
\[
a_{1}(123)+a_{2}(123)+a_1(213)-a_3(213)+a_2(321)+a_3(321) \]
\[-a_1(132)-a_2(132)-a_2(231)-a_3(231)-a_1(312)+a_3(312)= 0 \; ,
\]
\[
-a_{1}(123)+a_{3}(123)-a_1(213)-a_2(213)+a_1(321)-a_3(321) \]
\[+a_2(132)-a_3(132)+a_1(231)+a_2(231)-a_2(312)-a_3(312)= 0 \; .
\]
It follows from these relations that the
vector~\eqref{stablevectors} determined by $a_{1}(123)=-1$ and
$a_{i}(\sigma)=1$ for all other $1\leqslant i\leqslant 3$ and
$\sigma \in S_{3}$ cannot be obtained as the weight vector of a
stable complex structure on $U(3)/T^3$ equivariant under the
canonical action of the maximal torus.
Using the same argument we can also conclude that for the
Grassmannian $G_{4,2}$ the vector determined by $a_{1}(1234)=-1$
and $a_{i}(\sigma )=1$ for all other $1\leqslant i\leqslant 4$
and $\sigma \in S_{4}/S_{2}\times S_{2}$ cannot be realized as the
weight vector of a stable complex structure equivariant under the
canonical action of $T^4$.
\end{ex}
\section{Some applications.}\label{app}
\subsection{Flag manifolds $U(n)/T^n$.}
We consider an invariant complex structure on $U(n)/T^n$.
Recall~\cite{Adams} that the
Weyl group $W_{U(n)}$ is the symmetric group and it permutes the
coordinates $x_1,\ldots ,x_n$ on the Lie algebra $\TT ^{n}$ of $T^{n}$.
The canonical action of the torus $T^{n}$ on this manifold has $\|
W_{U(n)}\| = \chi (U(n)/T^n) = n!$ fixed points and its weights at
the identity point are given by the roots of $U(n)$.
We first consider the case $n=3$ and apply our results to explicitly
compute the cobordism class and Chern numbers of $U(3)/T^3$. The roots
of $U(3)$ are $x_1-x_2$, $x_1-x_3$ and $x_2-x_3$. Therefore the
cobordism class of $U(3)/T^3$ is given as the coefficient of $t^3$
in the polynomial
\[
[U(3)/T^3] = \sum _{\sigma \in S_3}\sigma
\Big(\frac{f(t(x_1-x_2))f(t(x_1-x_3))f(t(x_2-x_3))}{(x_1-x_2)
(x_1-x_3)(x_2-x_3)}\Big) \; ,
\]
where $f(t)=1+a_1t+a_2t^2+a_3t^3$, which implies
\[
[U(3)/T^3] = 6(a_1^3 +a_1a_2-a_3) \; .
\]
This gives that the characteristic numbers $s_\omega$ for $U(3)/T^3$
are
\[
s_{(3,0,0)}=6, \quad s_{(1,1,0)} = 6, \quad s_{(0,0,1)}=-6 \; .
\]
By Theorem~\ref{sc} we have the following relations between
characteristic numbers $s_\omega$ and Chern numbers $c^\omega$
\[
c_3 = 6, \; c_1c_2 - 3c_3=6, \; c_1^3-3c_1c_2+3c_3=-6, \; \;
\mbox{which gives}\; \; c_1c_2=24, \; c_1^3=48 \; .
\]
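The computation above can be repeated by machine: expanding $f(t(x_i-x_j))$ as exact polynomials in $t$ at a numeric point and summing over $S_3$ reproduces $[U(3)/T^3]=6(a_1^3+a_1a_2-a_3)$ in degree $3$, while the coefficients of $t^0,t^1,t^2$ vanish, as required. A plain-Python sketch (the function names are ours):

```python
from fractions import Fraction
from itertools import permutations

def polymul(p, q):
    # multiply two polynomials in t given as coefficient lists
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def flag3_genus(a, xs):
    # t-coefficients of the fixed-point sum over W_{U(3)} = S_3
    a1, a2, a3 = a
    total = [Fraction(0)] * 10
    for p in permutations(xs):
        roots = [p[0] - p[1], p[0] - p[2], p[1] - p[2]]
        poly = [Fraction(1)]
        for r in roots:
            poly = polymul(poly, [1, a1 * r, a2 * r**2, a3 * r**3])
        den = roots[0] * roots[1] * roots[2]
        for k, c in enumerate(poly):
            total[k] += c / den
    return total

g = flag3_genus((1, 1, 1), (0, 1, 3))
assert g[:3] == [0, 0, 0] and g[3] == 6 * (1 + 1 - 1)
```

Evaluating at $(a_1,a_2,a_3)=(0,0,1)$ isolates the coefficient of $a_3$ and returns $s_{(0,0,1)}=-6$.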
To simplify notation, from now on we set $\Delta_n =
\prod\limits_{1\leqslant i<j\leqslant n}(x_i-x_j)$.
\begin{thm}\label{thm}
The Chern-Dold character of the toric genus for the flag manifold
$U(n)/T^n$ is given by the formula:
\begin{equation}\label{40}
ch_{U}\Phi(U(n)/T^n) = \frac{1}{\Delta_n}\sum _{\sigma \in S_{n}}
\mathrm{sign} (\sigma) \sigma \Big(\prod\limits_{1\leqslant i<j\leqslant
n}f(x_i-x_j)\Big) \; ,
\end{equation}
where $f(t) = 1+\sum\limits_{i\geqslant 1}a_{i}t^{i}$ and $\mathrm{sign}
(\sigma )$ is the sign of the permutation $\sigma$.
\end{thm}
\subsubsection{Use of divided difference operators.}
Consider the ring of the symmetric polynomials $\operatorname{Sym}_n \subset
\mathbb{Z}[x_1, \ldots, x_n]$. There is a linear operator (see
\cite{Macdonald-95})
\[ L : \mathbb{Z}[x_1, \ldots, x_n] \longrightarrow \operatorname{Sym}_n : \;
L{\bf x}^\xi = \frac{1}{\Delta_n }
\sum _{\sigma \in S_{n}} \mathrm{sign} (\sigma) \sigma ({\bf x}^\xi) \; , \] where
$\xi=(j_1, \ldots, j_n)$ and ${\bf x}^\xi = x_1^{j_1} \cdots
x_n^{j_n}$.
It follows from the definition of Schur polynomials $\operatorname{Sh_\lambda}(x_1,
\ldots, x_n)$ where $\lambda=(\lambda_1 \geqslant \lambda_2
\geqslant \cdots \geqslant \lambda_n \geqslant 0)$ (see
\cite{Macdonald-95}), that
\[ L{\bf x}^{\lambda+\delta} = \operatorname{Sh_\lambda}(x_1,\ldots, x_n) \; , \]
where $\delta=(n-1,n-2,\ldots,1,0)$ and $L{\bf x}^\delta=1$.
Moreover, the operator $L$ has the following properties:
\begin{itemize}
\item $L{\bf x}^\xi=0$, if $j_1 \geqslant j_2 \geqslant \cdots
\geqslant j_n\geqslant 0$ and $\xi \neq \lambda+\delta$ for some
$\lambda=(\lambda_1 \geqslant \lambda_2 \geqslant \cdots \geqslant
\lambda_n \geqslant 0)$;
\item $L{\bf x}^\xi=\mathrm{sign} (\sigma)L\sigma({\bf x}^\xi)$, where $\xi=(j_1,
\ldots, j_n)$ and $\sigma\xi=\xi'$, where $\sigma \in S_n$ and
$\xi'=(j'_1 \geqslant j'_2 \geqslant \cdots \geqslant j'_n\geqslant
0)$;
\item $L$ is a homomorphism of $\operatorname{Sym}_n$-modules.
\end{itemize}
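The properties of $L$ are easy to probe numerically, since $L{\bf x}^\xi$ can be evaluated at any generic point by antisymmetrizing and dividing by $\Delta_n$. A plain-Python sketch for $n=3$ (exact rationals; the function names are ours):

```python
from fractions import Fraction
from itertools import permutations

def perm_sign(p):
    # sign of a permutation of 0..n-1, via transposition count
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def L_eval(exponents, xs):
    # evaluate L(x^xi) at the point xs:
    #   (1/Delta_n) * sum over sigma of sign(sigma) * sigma(x^xi)
    n = len(xs)
    delta = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            delta *= xs[i] - xs[j]
    total = Fraction(0)
    for p in permutations(range(n)):
        term = Fraction(1)
        for i, e in enumerate(exponents):
            term *= Fraction(xs[p[i]]) ** e
        total += perm_sign(p) * term
    return total / delta

pt = (2, 3, 5)
assert L_eval((2, 1, 0), pt) == 1          # L x^delta = 1, delta = (2,1,0)
assert L_eval((3, 1, 0), pt) == 2 + 3 + 5  # Sh_(1) = x1 + x2 + x3 at pt
assert L_eval((2, 2, 0), pt) == 0          # decreasing but not lambda + delta
assert L_eval((0, 1, 2), pt) == -1         # sign property: reversal of delta
```

The four assertions illustrate, in turn, $L{\bf x}^{\delta}=1$, the Schur case $L{\bf x}^{\lambda+\delta}=\operatorname{Sh_\lambda}$, and the vanishing and sign properties listed above.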
We have
\begin{equation}\label{P}
\prod\limits_{1\leqslant i<j\leqslant n}f(t(x_i-x_j))=
1+\sum_{|\xi|>0} P_\xi(a_1, \ldots, a_n, \ldots)t^{|\xi|}{\bf x}^\xi
\; ,
\end{equation}
where
$|\xi|=\sum\limits_{q=1}^n j_q$.
\begin{cor} \label{L}
The Chern-Dold character of the toric genus for the flag manifold
$U(n)/T^n$ is given by the formula:
\[ ch_{U}\Phi(U(n)/T^n) = \sum_{|\lambda|\geqslant 0}\Big(
\sum_{\sigma\in S_n}\mathrm{sign} (\sigma) P_{\sigma(\lambda+\delta)}(a_1,
\ldots, a_n, \ldots)\Big)\operatorname{Sh_\lambda}(x_1, \ldots, x_n) \,, \] where
$\delta=(n-1,n-2,\ldots,1,0),\; \lambda=(\lambda_1 \geqslant
\lambda_2 \geqslant \cdots \geqslant \lambda_n \geqslant 0)$. In
particular,
\begin{equation}\label{CL}
[U(n)/T^n] = \sum_{\sigma\in S_n}\mathrm{sign} (\sigma) P_{\sigma\delta}(a_1,
\ldots, a_n, \ldots)\,.
\end{equation}
\end{cor}
\begin{proof}\label{delta}
Set $m=\frac{n(n-1)}{2}$. From Theorem \ref{thm} and the formula
\eqref{P} we obtain:
\[ ch_{U}\Phi(U(n)/T^n) = \sum_{|\xi|\geqslant m} P_\xi L{\bf x}^\xi\,. \]
The first property of the operator $L$ gives that $L{\bf x}^{\xi}=0$
whenever ${\bf x}^{\xi}\neq \sigma ({\bf x}^{\lambda+\delta})$ for
every $\sigma \in S_n$ and every $\lambda=(\lambda_1 \geqslant
\lambda_2 \geqslant \cdots \geqslant \lambda_n \geqslant 0)$. The
second property gives that
$L\sigma({\bf x}^{\lambda+\delta})= \mathrm{sign} (\sigma)\operatorname{Sh_\lambda}(x_1, \ldots,
x_n)$.
\end{proof}
\begin{rem}
1. In the case $n=2$ this corollary gives the result of Example
\ref{n=2}.\\
2. As we will show in Corollary~\ref{cor8} below, the polynomials
$P_{\sigma\delta}$ in the formula~\eqref{CL} turn out to be
polynomials in the variables $a_1,\ldots, a_{2n-3}$ only.
\end{rem}
Set by definition
\[ 1+\sum_{|\xi|>0} \sigma^{-1}(P_\xi)t^{|\xi|}{\bf x}^\xi =
\sigma\Big( \prod\limits_{1\leqslant i<j\leqslant n}f(t(x_i-x_j))
\Big), \] where $\sigma \in S_n$ on the right acts by the permutation
of variables $x_1, \ldots, x_n$. Directly from the definition we
have
\[ 1+\sum_{|\xi|>0} \sigma^{-1} (P_\xi)t^{|\xi|}{\bf x}^\xi = 1+\sum_{|\xi|>0}P_\xi t^{|\xi|}\sigma({\bf
x}^\xi). \] Therefore
\begin{equation}\label{sigma}
\sigma (P_\xi) = P_{\sigma \xi}\;.
\end{equation}
Together with Corollary~\ref{L}, the formula~\eqref{sigma} implies the following theorem.
\begin{thm}\label{t-chi}
\begin{equation}\label{chi}
[U(n)/T^n] = \Big(\sum_{\sigma\in S_n}\mathrm{sign} (\sigma)\sigma\Big)
P_{\delta}(a_1, \ldots, a_n, \ldots)\;.
\end{equation}
\end{thm}
\begin{cor}
\[ \sigma[U(n)/T^n]=\mathrm{sign} (\sigma)[ U(n)/T^n]\;. \]
\end{cor}
\begin{proof}
\[
\sigma[U(n)/T^n]=\Big(\sum_{\tilde{\sigma}\in S_n}\mathrm{sign} (\tilde{\sigma})\sigma\tilde{\sigma}\Big)P_{\delta}(a_1,\ldots ,a_n,\ldots)=
\]
\[
=\Big(\sum_{\bar{\sigma}\in S_n}\mathrm{sign} (\sigma^{-1}\bar{\sigma})\bar{\sigma}\Big)P_{\delta}(a_1,\ldots ,a_n,\ldots)=
\mathrm{sign} (\sigma^{-1})\Big(\sum_{\bar{\sigma}\in S_n}\mathrm{sign} (\bar{\sigma})\bar{\sigma}\Big)P_{\delta}(a_1,\ldots ,a_n,\ldots) \; .
\]
Since $\mathrm{sign} (\sigma) = \mathrm{sign} (\sigma^{-1})$, the formula follows.
\end{proof}
\begin{ex}
The direct computation gives that for $n=3$ the polynomials $P_{\sigma\delta}(a_1,a_2,a_3)=\sigma(P_{\delta})$ from~\eqref{P}, where $\delta = (2,1,0)$ and $\sigma \in S_3$ are
\[
P_{\delta}=a_1^3-a_1a_2-3a_3\;,
\]
\[
(12)P_{\delta}=(13)P_{\delta}=-P_{\delta}\;,
\]
\[
(23)P_{\delta}=-(a_1^3+5a_1a_2+3a_3)\;.
\]
The action of the transpositions implies that the remaining two permutations act as
\[
(312)P_{\delta}=P_{\delta},\quad (231)P_{\delta}=-(23)P_{\delta} \;.
\]
Using Corollary~\ref{L} we obtain the cobordism class $[U(3)/T^3]=
6(a_1^3+a_1a_2-a_3)$. The above also gives that the symmetric group $S_{3}$ acts
non-trivially on $P_{\delta}(a_1,a_2,a_3)$ and
$\sum\limits_{\sigma\in S_3}\sigma P_\delta(a_1,a_2,a_3)=0$.
\end{ex}
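The signed sum in the example collapses to $4P_{\delta} - 2\,(23)P_{\delta}$, which must agree with $[U(3)/T^3]=6(a_1^3+a_1a_2-a_3)$ obtained before. A quick plain-Python check by evaluation at sample points (the function names are ours):

```python
def P_delta(a1, a2, a3):
    # P_delta from the example
    return a1**3 - a1*a2 - 3*a3

def P_23(a1, a2, a3):
    # (23) P_delta from the example
    return -(a1**3 + 5*a1*a2 + 3*a3)

def signed_sum(a1, a2, a3):
    # sum of sign(sigma) * P_{sigma delta} over S_3: the identity and
    # (312) contribute +P_delta, (12) and (13) contribute -(-P_delta),
    # while (23) and (231) each contribute -P_23
    return 4 * P_delta(a1, a2, a3) - 2 * P_23(a1, a2, a3)

for a in [(1, 1, 1), (2, 3, 5), (1, -2, 4)]:
    assert signed_sum(*a) == 6 * (a[0]**3 + a[0]*a[1] - a[2])
```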
We have
\[ \prod\limits_{1\leqslant i<j\leqslant n}f(t(x_i-x_j)) \equiv
\prod\limits_{1\leqslant i<j\leqslant n}f(t(x_i+x_j))\mod 2\;. \]
Using that $ \prod\limits_{1\leqslant i<j\leqslant n}f(t(x_i+x_j))$
is a symmetric series in the variables $x_1,\ldots,x_n$ we obtain
\[ \sigma(P_\xi) \equiv P_\xi \mod 2\;, \]
for any $\sigma\in S_n$. Thus Theorem \ref{t-chi} implies the
following:
\begin{cor}
All Chern numbers of the manifold $U(n)/T^n,\; n \geqslant 2$, are
even.
\end{cor}
\begin{rem}
The cobordism class $[U(3)/T^3]$ gives a nonzero element in
$\Omega_U^{-6}\otimes \mathbb{Z}/2$ because $s_{3}(U(3)/T^3)\equiv
2 \mod 4$.
\end{rem}
The characteristic number $s_{m}$ for $U(n)/T^{n}$ is given as
\begin{equation} \label{sm}
s_{m}(U(n)/T^n) = \sum _{1\leqslant i<j\leqslant n} L(x_i-x_j)^{m} \;
.
\end{equation}
Corollary \ref{L} implies the following:
\begin{cor}
$s_{1}(U(2)/T^2)=2; \; s_{3}(U(3)/T^3)= -6$\; and
\begin{equation}
s_{m}(U(n)/T^n) = 0 \; ,
\end{equation}
where $m=\frac{n(n-1)}{2}$ and $n>3$.
\end{cor}
We can push this further. Denote by $(u_1,\ldots,u_m) = \big(
(x_i-x_j),\; i<j \big)$, where $m=\frac{n(n-1)}{2}$. Then for
$\omega = (i_1,\ldots ,i_m),\, \|\omega\|=m$, we have that
\[ O\Big( (u_1 \cdots u_{i_1})(u_{i_1+1}^2
\cdots u_{i_1+i_2}^2)\cdots (u_{i_{1}+\ldots +i_{m-1}+1}^m\cdots u_{i_{1}+\ldots +i_{m}}^m)\Big) =
\sum_{|\xi|=m}\alpha_{\omega,\xi}{\bf x}^\xi\;. \] This implies that
\begin{equation}
s_{\omega}(U(n)/T^n) = \sum_{|\xi|=m}\alpha_{\omega,\xi}L{\bf x}^\xi = \sum_{\sigma\in S_n}\mathrm{sign} (\sigma)
\alpha_{\omega,\sigma\delta}\,.
\end{equation}
Note that if $\xi=(j_1, \ldots, j_n)=\sigma\delta$ for some
$\sigma\in S_n$, then
$\max\limits_{p_1,\ldots,p_s}(j_{p_1}+ \cdots+ j_{p_s}) = s\Big(
n-\frac{s+1}{2} \Big)$ for $1\leqslant s \leqslant n$. In particular,
it holds that $\max\limits_{p_1,p_2}(j_{p_1}+j_{p_2}) = 2n-3$.
\begin{cor} \label{cor8}
Let $\omega = (i_1,\ldots ,i_m)$ such that $i_k\neq 0$ for some
$k>2n-3$, then
\[ s_{\omega}(U(n)/T^n) = 0 \; .\]
\end{cor}
If $\omega = (i_1,\ldots ,i_k), \,\|\omega\|=m$, does not satisfy the
assumption of Corollary~\ref{cor8}, but $i_{k_1},\ldots ,i_{k_l} \neq 0$ for some
$k_1,\ldots ,k_l$, then we have that $k_{p}=2(n-1)-q_{p}$ for some
$q_p\geqslant 1$, $1\leqslant p\leqslant l$. In this case we can say the following.
\begin{cor}\label{cor9}
If $n\geqslant 2l$ and $\sum\limits_{p=1}^{l}q_{p} < l(2l-1)$ then
\[ s_{\omega}(U(n)/T^n) = 0\; .\]
\end{cor}
\begin{rem}\label{trans}
From the second property of the operator $L$ we obtain that $L
\mathcal{P}(x_1, \ldots, x_n)=0$ for any series $\mathcal{P}(x_1,
\ldots, x_n)$, whenever $\sigma (\mathcal{P}(x_1, \ldots, x_n)) =
\varepsilon \mathcal{P}(x_1, \ldots, x_n)$ for a permutation $\sigma
\in S_n$, where $\varepsilon= \pm 1$ and $\varepsilon \cdot
\mathrm{sign}(\sigma)=-1$. This, in particular, gives that
$L(\mathcal{P}(x_1,\ldots ,x_n)+\sigma _{ij}(\mathcal{P}(x_1,\ldots
,x_n)))=0$ for any transposition $\sigma _{ij}$ of $x_i$ and $x_j$,
where $1\leqslant i<j\leqslant n$.
\end{rem}
Using Remark~\ref{trans} we can compute some more characteristic
numbers of the flag manifolds.
\begin{cor}
Let $n=4q$ or $4q+1$ and $\omega=(i_1, \ldots, i_m), \; \| \omega \|
= m$, where $i_{2l-1}=0$ for $l=1, \ldots, \frac{m}{2}$. Then
$s_{\omega}(U(n)/T^n) = 0$.
\end{cor}
Since $\sigma
_{12}((x_1-x_2)^{2l}\prod\limits_{\underset{(i,j)\neq(1,2)}{1\leqslant
i<j\leqslant n}}f(t(x_i-x_j)))=(x_1-x_2)^{2l}
\prod\limits_{\underset{(i,j)\neq(1,2)}{1\leqslant i<j\leqslant
n}}f(t(x_i-x_j))$ we have, also because of Remark~\ref{trans}, that
\[ L \Big(\prod\limits_{1\leqslant i<j\leqslant
n}f(t(x_i-x_j))\Big) = L \Big(\widetilde f(t(x_1-x_2))
\prod\limits_{\underset{(i,j)\neq(1,2)}{1\leqslant i<j\leqslant
n}}f(t(x_i-x_j))\Big)\; ,
\]
where $\widetilde f(t)=\sum\limits_{l\geqslant 1}a_{2l-1}t^{2l-1}$.
Using this property of $L$ once more we obtain
\begin{thm} \label{thm8}
For $n\geqslant 4$ the cobordism class of the flag manifold
$U(n)/T^n$ is given as the coefficient of $t^{\frac{n(n-1)}{2}}$ in
the series in $t$
\begin{equation} \label{tilde}
L \Big(\widetilde f(t(x_1-x_2))\widetilde f(t(x_{n-1}-x_n))
\prod\limits_{\underset{(i,j)\neq(1,2),(n-1,n)}{1\leqslant
i<j\leqslant n}}f(t(x_i-x_j))\Big)\; .
\end{equation}
\end{thm}
\begin{rem}
Corollary~\ref{cor9} implies that if $s_{\omega}\neq 0$ for some
$\omega =(i_1,\ldots ,i_m)$, then $i_{2l-1}\neq 0$ for some
$1\leqslant l\leqslant \frac{m}{2}$. Theorem~\ref{thm8} gives the
stronger result that, for $n\geqslant 4$, each monomial of the
polynomials $P_{\sigma\delta}$ in~\eqref{CL} contains a factor of the
form $a_{2i_1-1}a_{2i_2-1}$.
\end{rem}
Theorem~\ref{thm8} provides a way for the direct computation of the
numbers $s_{\omega}$ for $\omega =(i_1,\ldots ,i_m)$ such that
$i_1+\ldots +i_m = 2$. For $n>5$ we have that $s_{\omega}(U(n)/T^n) =
0$ for such $\omega$. For $n=4$ and $n=5$ these numbers can be
computed very straightforwardly, as the next example shows.
\begin{ex}
We provide the computation of the characteristic number
$s_{(1,0,0,0,1,0)}$ for $U(4)/T^4$. From formula~(\ref{tilde}) we
obtain immediately:
\begin{align*}
s_{(1,0,0,0,1,0)}(U(4)/T^4) &= L\Big( (x_1-x_2)(x_3-x_4)^5 +
(x_1-x_2)^5(x_3-x_4) \Big)=\\
&=10L \Big( (x_1-x_2)(x_3-x_4)(x_1^2x_2^2+x_3^2x_4^2)\Big)=\\
&=20L \Big( x_1^3x_2^2(x_3-x_4)+(x_1-x_2)x_3^3x_4^2\Big)=\\
&=40L \Big( x_1^3x_2^2x_3+x_1x_3^3x_4^2\Big)=80\; .
\end{align*}
\end{ex}
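The value $80$ can be cross-checked independently: the argument of $L$ in the first line has degree $6 = \deg \Delta_4$, so the result of $L$ is a constant, and it suffices to antisymmetrize at one generic rational point and divide by $\Delta_4$. A plain-Python sketch (the function names are ours):

```python
from fractions import Fraction
from itertools import permutations

def perm_sign(p):
    # sign of a permutation of 0..n-1, via transposition count
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def L_of(P, xs):
    # (1/Delta_n) * sum over sigma of sign(sigma) * P(x_{sigma(1)}, ...)
    n = len(xs)
    delta = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            delta *= xs[i] - xs[j]
    total = Fraction(0)
    for p in permutations(range(n)):
        total += perm_sign(p) * Fraction(P(*[xs[k] for k in p]))
    return total / delta

P = lambda x1, x2, x3, x4: (x1 - x2) * (x3 - x4)**5 + (x1 - x2)**5 * (x3 - x4)
assert L_of(P, (1, 2, 4, 8)) == 80
```

Any point with pairwise distinct coordinates gives the same constant.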
\begin{rem}
We want to emphasize that the formula~\eqref{40} gives the
description of the cobordism classes of the flag manifolds in terms
of {\it divided difference operators}. The divided difference
operators are defined with (see \cite{BGG})
\[\partial_{ij}P(x_1,\ldots ,x_n)=\frac{1}{x_i-x_j}\Big(P(x_1,\ldots
,x_n)-\sigma _{ij}P(x_1,\ldots ,x_n)\Big),\] where $i<j$. Put
$\sigma _{i,i+1}=\sigma_i$, $\partial_{i,i+1}=\partial_i, \;
1\leqslant i \leqslant n-1$. We can write down operator $L$ as the
following composition (see \cite{F, Macdonald-91})
\[
L= (\partial_1 \partial_2 \cdots \partial_{n-1})(\partial_1 \partial_2
\cdots \partial_{n-2}) \cdots (\partial_1 \partial_2) \partial_1 \; .
\]
Denote by $\mathrm{w} _0$ the permutation $(n,n-1, \ldots, 1)$. Write down
a permutation $\mathrm{w} \in S_n$ in the form $\mathrm{w} = \mathrm{w} _0 \sigma _{i_1}
\cdots \sigma _{i_p}$ and set $\nabla_{\mathrm{w}} =
\partial_{i_p}\cdots
\partial_{i_1}$. It is natural to set $\nabla_{\mathrm{w} _0} = I$ --- identity
operator. The space of operators $\nabla_{\mathrm{w}}$ is dual to the space
of the Schubert polynomials
$\mathfrak{G}_{\mathrm{w}}=\mathfrak{G}_{\mathrm{w}}(x_1,\ldots ,x_n)$, since it
follows from their definition that $\mathfrak{G}_{\mathrm{w}} =
\nabla_{\mathrm{w}} {\bf x}^\delta$. Note that $\mathfrak{G}_{\mathrm{w} _0}= {\bf
x}^\delta$. For the identity permutation $e=(1,2,\ldots, n)$ we have
$e=\mathrm{w} _0 \cdot \mathrm{w} _0^{-1}$. So $\nabla_e = L$ and
$\mathfrak{G}_{e} = \nabla_{e} {\bf x}^\delta = 1$.
Schubert polynomials were introduced in \cite{BGG} and in \cite{D}
in the context of arbitrary root systems. The main reference on
algebras of operators $\nabla_{\mathrm{w}}$ and Schubert polynomials
$\mathfrak{G}_{\mathrm{w}}$ is \cite{Macdonald-91}.
The description of the cohomology rings of the flag manifolds
$U(n)/T^n$ and the Grassmann manifolds $G_{n,k}=U(n)/(U(k)\times U(n-k))$
in terms of Schubert polynomials is given in~\cite{F}.
The description of the {\it complex cobordism ring} of the flag manifolds
$G/T$, for $G$ a compact, connected Lie group and $T$ its maximal torus,
in terms of the Schubert polynomial calculus is given
in~\cite{Bressler-Evens-90, Bressler-Evens-92}.
\end{rem}
\subsection{Grassmann manifolds.} As a next application we compute the
cobordism class, the characteristic numbers $s_{\omega}$ and,
consequently, the Chern numbers for the invariant complex structure on the
Grassmannian $G_{4,2} = U(4)/(U(2)\times U(2)) = SU(4)/S(U(2)\times
U(2))$. Note that it follows from~\cite{BH} that, up to equivalence,
$G_{4,2}$ has one invariant complex structure $J$. The corresponding
Lie algebra description for $G_{4,2}$ is $A_{3}/(\TT ^{1}\oplus A_1
\oplus A_1)$.
The number of the fixed points under the canonical action of $T^{3}$
on $G_{4,2}$ is, by Theorem~\ref{number}, equal to $6$. Let
$x_1,x_2,x_3,x_4$ be the canonical coordinates on the maximal abelian
subalgebra for $A_3$. Then $x_1, x_2$ and $x_3, x_4$ represent the
canonical coordinates for $A_1\oplus A_1$. The weights of this
action at the identity point $(T_{e}(G_{4,2}), J)$ are given by the
positive complementary roots $x_1-x_3, x_1-x_4, x_2-x_3, x_2-x_4$
for $A_3$ related to $A_1\oplus A_1$ that define $J$.
The Weyl group $W_{U(4)}$ is the symmetric group of permutations of the
coordinates $x_1,\ldots ,x_4$ and the Weyl group $W_{U(2)\times
U(2)} = W_{U(2)}\times W_{U(2)}$ is the product of the symmetric groups
on the coordinates $x_1, x_2$ and $x_3, x_4$ respectively. Let
$\mathrm{w}_{j}\in W_{U(4)}/W_{U(2)\times U(2)}$. Corollary~\ref{chom}
gives that the cobordism class $[G_{4,2}]$ is the coefficient of
$t^{4}$ in the polynomial
\begin{equation}\label{gr}
\begin{split}
\sum _{j=1}^{6}\mathrm{w} _{j}\Big( \frac{f(t(x_1-x_3))f(t(x_1-x_4))
f(t(x_2-x_3))
f(t(x_2-x_4))}{(x_1-x_3)(x_1-x_4)(x_2-x_3)(x_2-x_4)}\Big) = \qquad \qquad\\
= \frac{1}{4} L\Big( (x_1-x_2)(x_3-x_4)f(t(x_1-x_3))
f(t(x_1-x_4)) f(t(x_2-x_3)) f(t(x_2-x_4)) \Big)\; ,
\end{split}
\end{equation}
where $f(t) = 1 + a_{1}t + a_{2}t^2 + a_{3}t^3 + a_{4}t^4$.
Expanding formula~\eqref{gr} we get that
\begin{equation}\label{CCG42}
[G_{4,2}] = 2(3a_{1}^4+ 12a_{1}^2a_{2} + 7a_{2}^2 + 2a_{1}a_{3}-10a_4) \; .
\end{equation}
The characteristic numbers $s_{\omega}$ can be read off from this formula:
\[
s_{(4,0,0,0)}=6, \quad s_{(2,1,0,0)}=24, \quad s_{(0,2,0,0)}=14,
\quad s_{(1,0,1,0)}=4, \quad s_{(0,0,0,1)} = -20 \; .
\]
The coefficients $\beta _{\omega \xi}$ from Theorem~\ref{sc} can be
explicitly computed and, for an $8$-dimensional manifold, give the
following relations between the characteristic numbers $s_{\omega}$ and
the Chern numbers:
\[
s_{(0,0,0,1)}= c_{1}^4 - 4c_{1}^2c_{2} + 2c_{2}^2 + 4c_{1}c_{3} - 4c_{4}
, \quad s_{(2,1,0,0)} = c_1c_3 - 4c_4 \; ,
\]
\[
s_{(0,2,0,0)} = c_{2}^2 - 2c_1c_3 + 2c_4, \quad
s_{(1,0,1,0)}=c_1^2c_2-c_1c_3+4c_4-2c_2^2,\quad s_{(4,0,0,0)} = c_4 \; .
\]
We deduce that the Chern numbers for $(G_{4,2}, J)$ are
\[
c_4 = 6, \quad c_1c_3 = 48, \quad c_2^2 = 98, \quad c_1^2c_2 = 224,
\quad c_1^4 = 512 \; .
\]
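For the reader's convenience, these values are obtained by solving the above relations successively:
\begin{align*}
c_4 &= s_{(4,0,0,0)} = 6\; ,\\
c_1c_3 &= s_{(2,1,0,0)} + 4c_4 = 24 + 24 = 48\; ,\\
c_2^2 &= s_{(0,2,0,0)} + 2c_1c_3 - 2c_4 = 14 + 96 - 12 = 98\; ,\\
c_1^2c_2 &= s_{(1,0,1,0)} + c_1c_3 - 4c_4 + 2c_2^2 = 4 + 48 - 24 + 196 = 224\; ,\\
c_1^4 &= s_{(0,0,0,1)} + 4c_1^2c_2 - 2c_2^2 - 4c_1c_3 + 4c_4 = -20 + 896 - 196 - 192 + 24 = 512\; .
\end{align*}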
The given example generalizes as follows. Denote by $\Delta_{p,q}
= \prod\limits_{p\leqslant i<j\leqslant q}(x_i-x_j)$, so that
$\Delta_n = \Delta_{1,n}$.
\begin{thm}\label{CGR}
The cobordism class of the Grassmann manifold $G_{q+l,l}$ is given as
the coefficient of $t^{lq}$ in the series in $t$
\begin{equation}
\sum _{\sigma\in S_{q+l}/S_{q}\times S_{l}}\sigma\Big(\prod
\frac{f(t(x_i-x_j))}{(x_i-x_j)}\Big) = \frac{1}{q!l!}L \Big(
\Delta_q\Delta_{q+1,q+l}\prod f(t(x_i-x_j)) \Big)\; ,
\end{equation}
where $1 \leqslant i \leqslant q, \; (q+1) \leqslant j \leqslant
(q+l)$ and $S_{q+l}$ is the symmetric group.
\end{thm}
Let us introduce the polynomials $Q_{(q+l,l)\xi}$ defined by
\[
\Delta_q\Delta_{q+1,q+l}\prod\limits_{\underset{q+1\leqslant j\leqslant q+l}{1\leqslant i\leqslant q}}f(t(x_i-x_j))=
\sum_{|\xi | \geqslant \frac{(q+l)^2-(q+l)}{2}}Q_{(q+l,l)\xi}(a_1,\ldots , a_n,\ldots)
t^{|\xi |-\frac{(q+l)^2-(q+l)}{2}}x^{\xi}\; ,
\]
where $\xi = (j_1,\ldots ,j_{q+l})$ and $|\xi |=\sum\limits_{k=1}^{q+l}j_k$.
Appealing to Theorem~\ref{CGR} we obtain the following.
\begin{cor}\label{CDEL}
The cobordism class of the Grassmann manifold $G_{q+l,l}$ is given by
\begin{equation}
[G_{q+l,l}] =\frac{1}{q!l!}\sum _{\sigma\in S_{q+l}}\mathrm{sign} (\sigma)Q_{(q+l,l)\sigma\delta}(a_1,\ldots,a_{ql})\; ,
\end{equation}
where $\delta = (q+l-1,q+l-2,\ldots,0)$.
\end{cor}
\begin{ex}
For $q=l=2$ the calculations give that the polynomials $Q_{\sigma\delta}=Q_{(4,2)\sigma\delta}(a_1,a_2,a_3,a_4)$ are as follows
\[
Q_{(3,2,1,0)}=-Q_{(2,3,1,0)}=-Q_{(3,2,0,1)}=Q_{(2,3,0,1)}=Q_{(1,0,3,2)}=-Q_{(1,0,2,3)}=-Q_{(0,1,3,2)}=Q_{(0,1,2,3)}\]
\[=a_1^4+4a_2^2-4a_1a_3\; ,
\]
\[
Q_{(2,1,3,0)}=-Q_{(1,2,3,0)}=-Q_{(2,1,0,3)}=Q_{(1,2,0,3)}=Q_{(3,0,2,1)}=-Q_{(3,0,1,2)}=-Q_{(0,3,2,1)}=Q_{(0,3,1,2)}\]\[=a_1^4+4a_1^2a_2+a_2^2+6a_1a_3-6a_4\; ,
\]
\[
Q_{(1,3,2,0)}=-Q_{(3,1,2,0)}=Q_{(3,1,0,2)}=-Q_{(1,3,0,2)}=Q_{(2,0,1,3)}=-Q_{(2,0,3,1)}=Q_{(0,2,3,1)}=-Q_{(0,2,1,3)}\]\[=a_1^4+8a_1^2a_2+2a_2^2-4a_4\; .
\]
Using Corollary~\ref{CDEL} we obtain the formula~\eqref{CCG42} for the cobordism class $[G_{4,2}]$.
\end{ex}
\subsection{Homogeneous space $SU(4)/S(U(1)\times U(1)\times U(2))$}\label{M10}
Following~\cite{BH} and~\cite{KT} we know that the 10-dimensional space
$M^{10} = SU(4)/S(U(1)\times U(1)\times U(2))$ admits, up to
equivalence, two invariant complex structures $J_1$ and $J_2$ and one
non-integrable invariant almost complex structure $J_{3}$. We
provide here the description of the cobordism classes for all three
invariant almost complex structures. The Chern numbers for all the
invariant almost complex structures are known; they have been
completely computed in~\cite{KT} through multiplication in
cohomology. We also provide their computation
using our method.
The corresponding Lie algebra description for $M^{10}$ is
$A_{3}/(\TT ^{2}\oplus A_1)$. Let $x_1, x_2, x_3, x_4$ be canonical
coordinates on the maximal abelian subalgebra of $A_3$. Then $x_1, x_2$
represent canonical coordinates for $A_1$. The number of fixed
points under the canonical action of $T^{3}$ on $M^{10}$ is, by
Theorem~\ref{number}, equal to 12.
\subsubsection{The invariant complex structure $J_1$.}\label{M101}
The weights of the action of $T^{3}$ on $M^{10}$ at the identity point
related to $J_1$ are given by the complementary roots $x_1-x_3,
x_1-x_4, x_2-x_3, x_2-x_4, x_3-x_4$ for $A_{3}$ related to $A_1$
(see~\cite{BH},~\cite{KT}). The cobordism class $[M^{10}, J_1]$ is,
by Corollary~\ref{chom}, given as the coefficient of $t^{5}$ in the
polynomial
\[
\sum _{j=1}^{12}\mathrm{w} _{j}\Big( \frac{f(t(x_1-x_3))f(t(x_1-x_4))
f(t(x_2-x_3))f(t(x_2-x_4))
f(t(x_3-x_4))}{(x_1-x_3)(x_1-x_4)(x_2-x_3)(x_2-x_4)(x_3-x_4)}\Big ) \; ,
\]
where $\mathrm{w}_{j}\in W_{U(4)}/W_{U(2)}$ and $f(t) = 1 + a_{1}t +
a_{2}t^2 + a_{3}t^3 + a_{4}t^4 + a_{5}t^5$.
Therefore we get that
\[
[M^{10}, J_{1}]= 4(3a_1^5 + 12a_1^3a_2 + 7a_1a_2^2 -
5a_1^2a_3 - 2a_2a_3 - 10a_1a_4 + 5a_5) \; .
\]
By Theorem~\ref{sc} we get the following relations between
characteristic numbers $s_\omega$ and Chern numbers for $(M^{10},
J_1)$.
\[
s_{(0,0,0,0,1)} = 20 = c_1^5 - 5c_1^3c_2 + 5c_1^2c_3 + 5c_1c_2^2 -
5c_1c_4 - 5c_2c_3 + 5c_5 \; ,
\]
\[
s_{(1,2,0,0,0)}= 28 = c_2c_3 - 3c_1c_4 + 5c_5, \quad
s_{(2,0,1,0,0)} = -20 = c_1^2c_3 - c_1c_4 - 2c_2c_3 + 5c_5 \; ,
\]
\[
s_{(0,1,1,0,0)} = -8 = -2c_1^2c_3 + c_1c_2^2 - c_2c_3 + 5c_1c_4 - 5
c_5, \quad s_{(3,1,0,0,0)} = 48 = c_1c_4 - 5c_5 \; ,
\]
\[
s_{(1,0,0,1,0)} = -40 = c_1^3c_2 - c_1^2c_3 - 3c_1c_2^2 + c_1c_4 +
5c_2c_3 - 5c_5, \quad s_{(5,0,0,0,0)} = 12 = c_5 \; .
\]
This implies that the Chern numbers for $(M^{10}, J_1)$ are as
follows:
\[
c_5 = 12, \quad c_1c_4 = 108, \quad c_2c_3 = 292, \quad c_1^2c_3 =
612, \quad c_1c_2^2 = 1028, \quad c_1^3c_2 = 2148, \quad c_1^5 =
4500 \; .
\]
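As in the case of $G_{4,2}$, these values follow by solving the above relations successively, for instance
\begin{align*}
c_5 &= s_{(5,0,0,0,0)} = 12\; ,\\
c_1c_4 &= s_{(3,1,0,0,0)} + 5c_5 = 48 + 60 = 108\; ,\\
c_2c_3 &= s_{(1,2,0,0,0)} + 3c_1c_4 - 5c_5 = 28 + 324 - 60 = 292\; ,\\
c_1^2c_3 &= s_{(2,0,1,0,0)} + c_1c_4 + 2c_2c_3 - 5c_5 = -20 + 108 + 584 - 60 = 612\; ,
\end{align*}
and similarly for $c_1c_2^2$, $c_1^3c_2$ and $c_1^5$.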
\subsubsection{The invariant complex structure $J_2$.} The weights of the
action of $T^{3}$ on $M^{10}$ at the identity point related to $J_2$ are
given by the positive complementary roots $x_4-x_1, x_4-x_2,
x_4-x_3, x_1-x_3, x_2-x_3$ for $A_{3}$ related to $A_1$
(see~\cite{BH},~\cite{KT}). The cobordism class $[M^{10}, J_2]$ is,
by Corollary~\ref{chom}, given as the coefficient of $t^{5}$ in the
polynomial
\[
\sum _{j=1}^{12}\mathrm{w} _{j}\Big( \frac{f(t(x_4-x_1))f(t(x_4-x_2))
f(t(x_4-x_3))f(t(x_1-x_3))
f(t(x_2-x_3))}{(x_4-x_1)(x_4-x_2)(x_4-x_3)(x_1-x_3)(x_2-x_3)}\Big) \; ,
\]
where $\mathrm{w}_{j}\in W_{U(4)}/W_{U(2)}$ and $f(t) = 1 + a_{1}t +
a_{2}t^2 + a_{3}t^3 + a_{4}t^4 + a_{5}t^5$.
Therefore we get that
\[
[M^{10}, J_{2}]= 4(3a_1^5 + 12a_1^3a_2 + 7a_1a_2^2 - 5a_1^2a_3 +
8a_2a_3 - 10a_1a_4 - 5a_5) \; .
\]
\begin{rem}
We could also obtain the description of the cobordism class $[M^{10},
J_{2}]$ by appealing to the description of $J_1$ from~\ref{M101}
and applying the results from Section~\ref{eqsthom}. The corresponding description of the weights for the
action of $T^3$ on $(M^{10}, J_2)$ is
\[
(+1)\cdot (x_1-x_3), (-1)\cdot (x_1-x_4), (+1)\cdot (x_2-x_3),
(-1)\cdot (x_2-x_4), (-1)\cdot (x_3-x_4) \; ,
\]
which means that
\[
a_{1}(e)=+1, a_{2}(e)=-1, a_{3}(e)=+1, a_{4}(e)=-1,a_{5}(e)=-1\; .
\]
Since $J_1$ and $J_2$ define opposite orientations on $M^{10}$, we
have $\epsilon =-1$, and it follows that the fixed points have
sign $+1$.
\end{rem}
Applying the same procedure as above we get that the Chern
numbers for $(M^{10}, J_{2})$ are:
\[
c_5 = 12, \quad c_1c_4 = 108, \quad c_2c_3 = 292, \quad c_1^2c_3 = 612,
\quad c_1c_2^2 = 1068, \quad c_1^3c_2 = 2268, \quad c_1^5 = 4860 \; .
\]
\subsubsection{The invariant almost complex structure $J_3$.} The weights of
the action of $T^{3}$ on $M^{10}$ at the identity point related to $J_3$
are given by the complementary roots $x_1-x_3, x_2-x_3, x_4-x_1,
x_4-x_2, x_3-x_4$ (see~\cite{KT}). Using Corollary~\ref{chom} we get that the
cobordism class for $(M^{10}, J_3)$ is
\[
[M^{10}, J_{3}] = 4(3a_1^5 - 12a_1^3a_2 + 7a_1a_2^2 + 15a_1^2a_3 -
12a_2a_3 -10a_1a_4 + 15a_5) \; .
\]
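Reading off the coefficients, as in the case of $J_1$, the characteristic numbers for $(M^{10}, J_{3})$ are
\[
s_{(5,0,0,0,0)}=12, \quad s_{(3,1,0,0,0)}=-48, \quad s_{(1,2,0,0,0)}=28, \quad s_{(2,0,1,0,0)}=60 \; ,
\]
\[
s_{(0,1,1,0,0)}=-48, \quad s_{(1,0,0,1,0)}=-40, \quad s_{(0,0,0,0,1)}=60 \; .
\]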
\begin{rem}
We could also, as in the previous case, proceed with the computation
of the cobordism class $[M^{10}, J_{3}]$ by appealing to the
description of $J_1$ from~\ref{M101}. The corresponding description
of the weights is
\[
(+1)\cdot (x_1-x_3), (-1)\cdot (x_1-x_4), (+1)\cdot (x_2-x_3),
(-1)\cdot (x_2-x_4), (+1)\cdot (x_3-x_4)\; .
\]
Since $J_1$ and $J_3$ define the same orientation, it follows that
all fixed points for the action of $T^3$ on $M^{10}$ have sign $+1$.
\end{rem}
The characteristic numbers for $(M^{10}, J_{3})$ are given as
coefficients in its cobordism class, which, as above, together with
Theorem~\ref{sc} gives that the Chern numbers for $(M^{10}, J_{3})$
are as follows:
\[
c_5 = 12, \quad c_1c_4 = 12, \quad c_2c_3 = 4, \quad c_1^2c_3 = 20,
\quad c_1c_2^2 = -4, \quad c_1^3c_2 = -4, \quad c_1^5 = -20 \; .
\]
\begin{rem}
Further work on the Chern numbers
and the geometry of the generalizations of this example is done
in~\cite{H05} and in~\cite{KT}.
\end{rem}
\subsection{Sphere $S^{6}$.} According to~\cite{BH} we know that the
sphere $S^{6}=G_2/SU(3)$ admits a $G_2$-invariant almost complex
structure, but does not admit a $G_2$-invariant complex structure.
The existence of an invariant almost complex structure follows from
the fact that $SU(3)$ is the connected centralizer of an element of
order $3$ in $G_2$ which generates its center, while the
non-existence of an invariant complex structure follows since the
second Betti number of $S^6$ is zero. Note that, being isotropy
irreducible, $S^6=G_2/SU(3)$ has a unique, up to conjugation,
invariant almost complex structure $J$.
The roots of the Lie algebra $\gg _{2}$ are given by (see~\cite{Onishchik})
\[
\pm x_1,\; \pm x_2,\; \pm x_3,\; \pm (x_1-x_2),\; \pm(-x_1+x_3),\; \pm(-x_2+x_3),
\]
where $x_1+x_2+x_3=0$. It follows that the system of complementary roots for $\gg _{2}$ related to $A_2$ is
$\pm x_1$, $\pm x_2$, $\pm x_3$. According to~\ref{ac}, since $S^6=G_2/SU(3)$ is isotropy irreducible, the roots of the invariant almost complex
structure $J$ on $S^{6}$ are
\[
\alpha _{1} = x_1,\; \alpha_{2}=x_2,\;
\alpha_{3}= x_3=-(x_1+x_2)\; .
\]
The canonical action of a common maximal torus $T^2$ on
$S^6=G_2/SU(3)$ has $\chi (S^6)=2$ fixed points.
By Theorem~\ref{weights} we get that the weights at
the fixed points for this action are given by the action of the Weyl group $W_{G_2}$, up to the action of the Weyl group $W_{SU(3)}$, on the roots for $J$:
\[
x_{1},\; x_{2},\; x_3\;\; \mbox{and}\;\; -x_{1},\;
-x_{2},\; -x_3\; .
\]
Since the weights at these two fixed points are of opposite signs, Corollary~\ref{ch-tor-hom} implies that, in the Chern character of the universal toric genus
for $(S^6, J)$, the coefficients of $a^{\omega}$ vanish when $\|\omega \|$ is even.
As the almost complex structure $J$ is invariant under the action of the group $G_2$, it follows by Lemma~\ref{L1} that the universal toric genus for $(S^{6},J)$ belongs to the image of the ring $U^{*}(BG_2)$ in $U^{*}(BT^2)$ under the map induced by the embedding $T^2\subset G_2$. Furthermore,
using Corollary~\ref{ch-tor-hom} we obtain that the universal toric genus for $(S^6,J)$ is a series in $\sigma_{2}$ and $\sigma_{3}$, where $\sigma_2$ and $\sigma_3$ are the elementary symmetric functions in three variables. A direct computation gives that the first terms in the series of the Chern character are
\begin{align*}
ch_{U}\Phi(S^{6},J) = & 2(a_1^3 - 3a_1a_2 + 3a_3)+\\
& + 2( a_1a_2^2-2a_1^2a_3 - a_2a_3+5a_1a_4-5a_5)\sigma _{2}+\\
& + 2(a_1a_3^2-2a_1a_2a_4-a_3a_4+2a_1^2a_5+3a_2a_5-7a_1a_6+7a_7)\sigma_{2}^{2} +\\
& + 2(3a_9-3a_1a_8-3a_2a_7+6a_3a_6-3a_4a_5+3a_1^2a_7-3a_1a_2a_6-3a_1a_3a_5+\\
& + 3a_1a_4^2+3a_2^2a_5-3a_2a_3a_4+a_3^3)\sigma_{3}^{2}+\\
& + 2(-9a_9+9a_1a_8-5a_2a_7+3a_3a_6-a_4a_5-2a_1^2a_7+\\
& + 2a_1a_2a_6-2a_1a_3a_5+a_1a_4^2)\sigma_{2}^{3}+ \ldots
\end{align*}
In particular, we obtain that the cobordism class for $(S^6,J)$ is
\begin{equation}\label{cobsph}
[S^{6},J]=2(a_1^3 - 3a_1a_2 + 3a_3) \; .
\end{equation}
\begin{rem}
We can also compute the cobordism class $[S^{6}, J]$ using the relations
between the Chern numbers and the characteristic numbers for an invariant
almost complex structure given by Theorem~\ref{sc}:
\[
c_3 = s_{(3,0,0)},\; c_1c_2 - 3c_3 = s_{(1,1,0)},\; c_1^3 - 3c_1c_2
+ 3c_3 = s_{(0,0,1)}\; .
\]
Since for $S^6$ we obviously have $c_1c_2=c_1^3=0$ and
$c_{3}=2$, it follows that $s_{(3,0,0)}=2$ and $s_{(1,1,0)}=-s_{(0,0,1)}=-6$,
which gives formula~\eqref{cobsph}.
\end{rem}
\subsubsection{On stable complex structures.}
If now $c_{\tau}$ is an arbitrary stable complex structure on $S^6$,
equivariant under the given action of $T^2$, then
Corollary~\ref{stableweights} implies that the weights at the fixed points of this
action related to $c_{\tau}$ are given by
\[
a_{1}(1)\alpha_{1},\; a_{2}(1)\alpha_{2},\;a_{3}(1)\alpha_{3},\;
\mbox{and}\;
a_{1}(2)(-\alpha_{1}),\;a_{2}(2)(-\alpha_{3}),\;a_{3}(2)(-\alpha_{2})\; .
\]
By Corollary~\ref{stablesigns} the signs at fixed points are
\[
\mathrm{sign}(i)=\epsilon \cdot \prod_{j=1}^{3}a_{j}(i),\;\; i=1,2\; .
\]
Corollary~\ref{stablecoeffzero} implies that the coefficients $a_{j}(i)$ should satisfy the following
equations:
\[
a_{1}(1)+a_{1}(2)=a_{2}(1)+a_{2}(2) = a_{3}(1)+a_{3}(2),\;
a_{1}(1)a_{2}(1)-a_{1}(2)(a_{1}(1)-a_{2}(1))=1 \; .
\]
These equations have ten solutions, which we write as couples of
triples corresponding to the fixed points. Two of them are
$(1,1,1), (1,1,1)$ and $(-1,-1,-1), (-1,-1,-1)$, and they correspond to the
invariant almost complex structure $J$ and its conjugate. The other
eight couples are of the form $(i,j,k), (-i,-j,-k)$ where $i,j,k=\pm
1$, and they describe the weights of any other stable complex
structure on $S^6$ equivariant under the given torus action. Note
that, since $\tau (S^{6})$ is a trivial bundle, $S^{6}$ carries
many stable complex structures different from $J$.
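That the couples $(i,j,k), (-i,-j,-k)$ solve the above equations can be checked directly: taking $a_{1}(1)=i$, $a_{2}(1)=j$, $a_{3}(1)=k$ and $a_{1}(2)=-i$, $a_{2}(2)=-j$, $a_{3}(2)=-k$, we obtain
\[
a_{1}(1)+a_{1}(2)=a_{2}(1)+a_{2}(2)=a_{3}(1)+a_{3}(2)=0
\]
and
\[
a_{1}(1)a_{2}(1)-a_{1}(2)(a_{1}(1)-a_{2}(1))=ij+i(i-j)=i^{2}=1\; ,
\]
since $i^2=1$, while for the first two solutions all three sums are equal to $\pm 2$ and the second equation reads $1-(\pm 1)\cdot 0=1$.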
The fact that, for any $T^2$-equivariant stable complex structure on $S^6$ different from $J$ or its conjugate, the weights at the two fixed points differ by sign, together with Proposition~\ref{cobc}, proves the following proposition.
\begin{prop}
The cobordism class of $S^6$
related to any $T^2$-equivariant stable complex structure that is not equivalent to the $G_2$-invariant almost complex structure is trivial. In particular, besides the described $G_2$-invariant almost
complex structure, $S^6$ does not admit any other almost complex
structure invariant under the canonical action of $T^2$.
\end{prop}
\bibliographystyle{amsplain}
\section{Introduction}
Spontaneous Symmetry Breaking (SSB) through the non-vanishing
expectation value $\langle \Phi \rangle \neq$ 0 of a
self-interacting scalar field $\Phi(x)$ is the essential ingredient
to generate the particle masses in the Standard Model. This old idea
\cite{Higgs:1964ia,Englert:1964et} of a fundamental scalar field, in the following
denoted for brevity as the Higgs field, has more recently found an
important experimental confirmation after the observation, at the
Large Hadron Collider of CERN \cite{Aad:2012tfa,Chatrchyan:2012xdj}, of a narrow scalar
resonance, of mass $m_h \sim 125 $ GeV whose phenomenology fits well
with the perturbative predictions of the theory. The discovery of
this resonance, identified as the long sought Higgs boson, has
produced the general conviction that modifications of this general
picture, if any, can only come from new physics, e.g. supersymmetry.
However, in spite of the present phenomenological consistency, this
conclusion may be premature. So far only the gauge and Yukawa
couplings of the 125 GeV Higgs particle have been tested. This is
the sector of the theory described by these interactions and by the
associated induced coupling, say $\lambda^{\rm ind}$, determined by
\begin{equation} \label{nonauto}\frac{d\lambda^{\rm ind}}{dt}=
\frac{1}{16\pi^2}\left[-12 y^4_t +\frac{3}{4}(g')^4 +
\frac{3}{2}(g')^2g^2 + \frac{9}{4}g^4\right] \end{equation} where $g$ and $g'$
are the SU(2)$\times$U(1) gauge couplings and we have restricted to
the top-quark Yukawa coupling $y_t$, evolving according to \begin{equation}
\label{top}\frac{dy_t}{dt}= \frac{1}{16\pi^2}\left[\frac{9}{2}y_t^3
- \left( \frac{17}{12}(g')^2 + \frac{9}{4}g^2 +8
g^2_3\right)y_t\right] \end{equation} where $g_3$ is the SU(3)$_c$ coupling.
Instead, the effects of a genuine scalar self-coupling $\lambda$, if
any, are below the accuracy of the measurements. For this reason, an
uncertainty about the mechanisms at the base of symmetry breaking
still persists.
We briefly mention that, at the beginning, SSB was explained in
terms of a classical scalar potential with a double-well shape. Only
later, after the work of Coleman and Weinberg \cite{Coleman:1973jx},
it became increasingly clear that the phenomenon should be described
at the quantum level and that the classical potential had to be
replaced by the effective potential $V_{\rm eff}(\varphi )$ which
includes the zero-point energy of all particles in the spectrum.
This has produced the present view, where the description of SSB is
entrusted to the combined study of all couplings and of their
evolution up to very large energy scales.
But, in principle, SSB could still be determined by the pure scalar
sector if the contribution of the other fields to the vacuum energy
is negligible. This may happen if, as in the original picture with
the classical potential, the primary mechanism producing SSB is
quite distinct from the remaining Higgs field self-interactions
induced through the gauge and Yukawa couplings. The type of scenario
we have in mind is sketched below:
~~ i) One could first take into account the indications of most
recent lattice simulations of pure $\lambda\Phi^4$ in 4D
\cite{lundow2009critical,Lundow:2010en,akiyama2019phase}.
These calculations, performed in the Ising
limit of the theory with different algorithms, indicate that on the
largest lattices available so far the SSB phase transition is
(weakly) first order.
~~ ii) With a first-order transition, SSB would emerge as a true
instability of the symmetric vacuum at $\varphi=0$. Its quanta have
a tiny and still positive mass squared $V''_{\rm
eff}(\varphi=0)=m^2_\Phi>0$ but, nevertheless, their interactions
can destabilize this symmetric vacuum \cite{Consoli:1999ni}
and produce the condensation process responsible for symmetry
breaking. This primary $\lambda\Phi^4$ sector should be considered
with its own degree of locality defined by some cutoff scale
$\Lambda_s$. We are thus led to identify $\Lambda_s$ as the Landau
pole for a bare coupling $\lambda_B=+\infty$. This corresponds
precisely to the Ising limit and provides the best possible
definition of a local $\lambda\Phi^4$ for any non-zero low-energy
coupling $\lambda\sim 1/ \ln \Lambda_s\ll 1$. This is the relevant
one for low-energy physics, as in the original Coleman-Weinberg
calculation of the effective potential at $\varphi^2 \ll
\Lambda^2_s$.
~~ iii) After this first step, the description of the basic
$\lambda\Phi^4$ sector can further be improved by going to a next
level. Since, for any non-zero $\lambda$, there is a finite Landau
pole, one can consider the whole set of theories
($\Lambda_s$,$\lambda$), ($\Lambda'_s$,$\lambda'$),
($\Lambda''_s$,$\lambda''$)...with larger and larger Landau poles,
smaller and smaller low-energy couplings but all having the same
depth of the potential, i.e. with the same vacuum energy ${\cal
E}=V_{\rm eff}(\langle \Phi \rangle)$. This requirement derives from
imposing the RG-invariance of the effective potential in the
three-dimensional space ($\varphi$, $\lambda$, $\Lambda_s$) and, in
principle, allows one to handle the $\Lambda_s \to \infty$ limit
\footnote{This limit should also be considered because the scalar
sector is assumed to induce SSB and thus to determine the vacuum
structure and its symmetries. In a quantum field theory, imposing
invariance under RG-transformations is then the standard method to
remove the ultraviolet cutoff or, alternatively, to minimize its
influence on observable quantities.}. In this formalism, besides a
first invariant mass scale ${ \cal I}_1$, defined by $|{\cal E}|\sim
{ \cal I}^4_1$, there is a second invariant ${\rm \cal I}_2$,
related to a particular normalization of the vacuum field, which is
the natural candidate to represent the weak scale ${ \cal
I}_2=\langle \Phi \rangle\sim$ 246 GeV. The minimization of the
effective potential can then be expressed as a relation ${ \cal
I}_1= K { \cal I}_2$ in terms of some proportionality constant $K$.
This RG-analysis of the effective potential, discussed in Sects.2
and 3, is the main point of this paper. It takes into account that,
in those approximation schemes that reproduce the type of weak
first-order phase transition favored by recent lattice simulations,
there are {\it two} vastly different mass scales, say $m_h$ and
$M_h$. These are defined respectively by the second derivative and
the depth of the effective potential at its minima and related by
$M^2_h\sim L m^2_h \gg m^2_h$ where $L=\ln (\Lambda_s/M_h)$.
Therefore, even though $(m_h/\langle \Phi \rangle)^2 \sim 1/L$, the
larger $M_h={ \cal I}_1$ remains finite in units of ${ \cal
I}_2=\langle \Phi \rangle$.
To appreciate the change of perspective, let us recall the usual
description of a second-order phase transition as summarized in the
scalar potential reported in the Review of Particle Properties
\cite{Tanabashi:2018oca}. In this review, which gives the present
interpretation of the theory in the light of most recent
experimental results, the scalar potential is expressed as
(PDG=Particle Data Group) \begin{equation} \label{VPDG} V_{\rm
PDG}(\varphi)=-\frac{ 1}{2} m^2_{\rm PDG} \varphi^2 + \frac{
1}{4}\lambda_{\rm PDG}\varphi^4 \end{equation} By fixing $m_{\rm PDG}\sim$ 88.8
GeV and $\lambda_{\rm PDG}\sim 0.13$, this potential has a minimum
at $|\varphi|=\langle \Phi \rangle\sim$ 246 GeV and quadratic shape
$V''_{\rm PDG}(\langle \Phi \rangle)=$ (125 GeV)$^2$. Note that, as
a built-in relation, the second derivative of the potential (125
GeV)$^2$ also determines its depth, i.e. the vacuum energy ${\cal
E}_{\rm PDG}$
\begin{equation} \label{EPDG} {\cal E}_{\rm PDG}=-\frac{ 1}{2} m^2_{\rm PDG}
\langle \Phi\rangle^2 + \frac{ 1}{4}\lambda_{\rm PDG} \langle
\Phi\rangle^4 =-\frac{ 1}{8} (125~{\rm GeV}\langle \Phi\rangle)^2
\sim -1.2\cdot10^8~ {\rm GeV}^4\end{equation} Instead in our case, by
identifying $m_h\sim$ 125 GeV, the vacuum energy ${\cal E}\sim
-\frac { 1}{8} M^2_h \langle \Phi \rangle^2$ would be deeper than
Eq.(\ref{EPDG}) by the potentially divergent factor $L$. Thus, it
would also be insensitive to the other sectors of the theory, e.g.
the gauge and Yukawa interactions, whose effect is just to replace
the scalar self coupling $\lambda$ with the total coupling
$\lambda^{\rm tot}=\lambda +\lambda^{\rm ind}$ in the definition of
the quadratic shape of the effective potential. All together, once
the picture sketched above works also in the $\Lambda_s \to \infty$
limit, where $\lambda$ becomes extremely small at any finite energy
scale, the phenomenology of the 125 GeV resonance would remain the
same and SSB would essentially be determined by the pure scalar
sector.
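As a direct numerical check of Eq.(\ref{VPDG}) and Eq.(\ref{EPDG}), with $m_{\rm PDG}\sim 88.8$ GeV, $\lambda_{\rm PDG}\sim 0.13$ and $\langle \Phi\rangle\sim 246$ GeV one finds
\[
V''_{\rm PDG}(\langle \Phi \rangle)=-m^2_{\rm PDG} + 3\lambda_{\rm PDG}\langle \Phi \rangle^2 \sim \left(-7885 + 23601\right)~{\rm GeV}^2\sim (125~{\rm GeV})^2
\]
and
\[
{\cal E}_{\rm PDG}\sim \left(-2.39 + 1.19\right)\cdot 10^{8}~{\rm GeV}^4 \sim -1.2\cdot10^8~{\rm GeV}^4 \; .
\]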
We emphasize that the relation $M_h= K\langle \Phi \rangle$ does not
introduce a new large coupling $K^2=O(1)$ into the picture of
symmetry breaking. This $K^2$ should not be viewed as a coupling
constant or, at least, as a coupling constant which produces {\it
observable} interactions in the broken symmetry phase. From this
point of view, it may be useful to compare SSB to the phenomenon of
superconductivity in non-relativistic solid state physics. There the
transition to the new, superconductive phase represents an essential
instability that occurs for any infinitesimal two-body attraction
$\epsilon$ between the two electrons forming a Cooper pair. At the
same time, however, the energy density of the superconductive phase
and all global quantities of the system (energy gap, critical
temperature, etc.) depend on the much larger collective coupling
$\epsilon N$ obtained after re-scaling the tiny 2-body strength by
the large number of states near the Fermi surface. This means that,
in principle, the same macroscopic description could be obtained
with smaller and smaller $\epsilon$ and Fermi systems of
correspondingly larger and larger $N$. In this comparison $\lambda$ is
the analog of $\epsilon$ and $K^2$ is the analog of $\epsilon N$.
Another aspect, implicit in the usual picture of SSB, is that
$V''_{\rm PDG}( \langle \Phi \rangle )$, which strictly speaking is
the self-energy function at zero momentum $|\Pi(p=0)|$, is assumed
to coincide with the pole of the Higgs propagator. As discussed in
Sect.4, $m_h$ and $M_h$ refer to different momentum regions in the
connected scalar propagator $G(p)=1/ (p^2 -\Pi(p))$, namely $m_h$
for $p \to 0$ and $M_h$ at larger $p$. Therefore, if $\Lambda_s$
were large but finite, so that both $m_h$ and $M_h$ are finite, the
transition between the two scales should become visible by
increasing the energy.
In Sect.5, we will show that this two-scale structure is supported
by lattice simulations in the 4D Ising limit of the theory. In fact,
once $m^2_h$ is directly computed from the zero-momentum connected
propagator $G(p=0)$ (the inverse susceptibility) and $M_h$ is
extracted from the behaviour of $G(p)$ at higher momentum, the
lattice data confirm the expected increasing logarithmic trend
$M^2_h\sim L m^2_h$.
From a phenomenological point of view, these simulations indicate
that a relatively low value, e.g. $m_h$=125 GeV, could in principle
coexist with a much larger $M_h$. By combining various lattice
determinations, our final estimate $M_h= 720 \pm 30 $ GeV will lead
us to re-consider, in Sect.6, the experimental situation at LHC. In
particular, an independent analysis \cite{Cea:2018tmm} of the ATLAS
+ CMS data indicates an excess in the 4-lepton channel, as if there
were a new scalar resonance around 700 GeV. This excess, if
confirmed, could indicate the second heavier mass scale discussed in
this paper. Then, differently from the low-mass state at 125 GeV,
the decay width of such a heavy state into longitudinal vector bosons
will be crucial to determine the strength of the {\it observable}
scalar self-coupling and the degree of locality of the theory.
Finally, the simultaneous presence of two mass scales would also
require an interpolating parametrization for the Higgs field
propagator in loop corrections. This could help to reduce the
3-sigma discrepancies with those precision measurements which still
favor rather large values of the Higgs particle mass.
\section{The one-loop effective potential}
To study SSB in $\lambda\Phi^4$ theory, the crucial quantity is the
physical, mass squared parameter $m^2_\Phi=V''_{\rm eff}(\varphi=0)$
introduced by first quantizing the theory in the symmetric phase at
$\varphi=0$. A first-order scenario corresponds to a phase
transition occurring at some small but still positive $m^2_\Phi$. In
this case, the symmetric vacuum, although {\it locally} stable
(because its excitations have a physical mass $m^2_\Phi> 0$), would
be {\it globally} unstable in some range of mass below a critical
value, say $0 \leq m^2_\Phi <m^2_c$. If $m^2_c$ is extremely small,
however, one speaks of a {\it weak} first-order transition to mean
that it would become indistinguishable from a second-order
transition if one does not look on a fine enough scale.
This first-order scenario is equivalent to say that the lowest
energy state of the massless theory at $m^2_\Phi=0$ corresponds to
the broken-symmetry phase, as suggested by Coleman and Weinberg
\cite{Coleman:1973jx} in their one-loop calculation. This represents the
simplest scheme which is consistent with this picture. We will first
reproduce below this well known computation and exploit its
implications. A discussion on the general validity of the one-loop
approximation is postponed to the following section.
The Coleman-Weinberg potential is \begin{equation} V_{\rm eff}(\varphi) =
\frac{\lambda}{4!} \varphi^4 +\frac{\lambda^2}{256 \pi^2} \varphi^4
\left[ \ln ({\scriptstyle{\frac{1}{2}}} \lambda \varphi^2 /\Lambda^2_s ) - \frac{1}{2}
\right] \end{equation} and its first few derivatives are \begin{equation} \label{vprime}
V'_{\rm eff}(\varphi) = \frac{\lambda}{6} \varphi^3
+\frac{\lambda^2}{64 \pi^2} \varphi^3 \ln ({\scriptstyle{\frac{1}{2}}} \lambda \varphi^2
/\Lambda^2_s ) \end{equation} and \begin{equation} \label{vsecond}
V''_{\rm eff}(\varphi) = \frac{\lambda}{2} \varphi^2
+\frac{3\lambda^2}{64 \pi^2} \varphi^2 \ln ({\scriptstyle{\frac{1}{2}}} \lambda \varphi^2
/\Lambda^2_s ) +\frac{\lambda^2\varphi^2}{32\pi^2} \end{equation} We observe
that, by introducing the mass squared parameter \begin{equation}
M^2(\varphi)\equiv {\scriptstyle{\frac{1}{2}}} \lambda\varphi^2 \end{equation} the one-loop potential
can be expressed as a classical background + zero-point energy of a
particle with mass $M(\varphi)$ (after subtraction of constant terms
and of quadratic divergences), i.e. \begin{equation} \label{zero} V_{\rm
eff}(\varphi) = \frac{\lambda\varphi^4}{4!} - \frac{M^4(\varphi)
}{64 \pi^2} \ln \frac{ \Lambda^2_s \sqrt{e} } {M^2(\varphi)} \end{equation}
Thus, non-trivial minima of $V_{\rm eff}(\varphi)$ occur at those
points $\varphi=\pm v$ where \footnote{In view of a possible
ambiguity in the normalization of the vacuum field, that may affect
the identification of the weak scale $\langle \Phi\rangle\sim$ 246
GeV, we will for the moment denote as $\varphi=\pm v$ the minima
entering the computation of the effective potential.}\begin{equation}
\label{basic} M^2_h \equiv M^2(\pm v)={{\lambda v^2}\over{2
}}=\Lambda^2_s \exp( -{{32 \pi^2 }\over{3\lambda }})\end{equation} so that \begin{equation}
\label{mh} m^2_h\equiv V''_{\rm eff}(\pm v) =
\frac{\lambda^2v^2}{32\pi^2}= \frac{\lambda}{16\pi^2}M^2_h\sim
\frac{M^2_h}{L} \ll M^2_h \end{equation} where $L\equiv \ln
\frac{\Lambda_s}{M_h}$. Notice that the energy density depends on
$M_h$ and {\it not} on $m_h$, because \begin{equation} \label{basicground} {\cal
E}= V_{\rm eff}(\pm v)= -\frac{M^4_h}{128 \pi^2 } \end{equation} therefore the
critical temperature at which symmetry is restored, $k_BT_c\sim
M_h$, and the stability conditions of the broken phase depend on
the larger $M_h$ and not on the smaller scale $m_h$.
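For the reader's convenience, we sketch how Eqs.(\ref{basic})--(\ref{basicground}) follow from Eqs.(\ref{vprime})--(\ref{zero}). The condition $V'_{\rm eff}(\pm v)=0$ gives
\[
\ln \frac{{\scriptstyle{\frac{1}{2}}}\lambda v^2}{\Lambda^2_s}=-\frac{32\pi^2}{3\lambda}\; ,
\]
which is Eq.(\ref{basic}). Substituting this logarithm into Eq.(\ref{vsecond}), the first two terms cancel and one is left with $m^2_h=\lambda^2 v^2/(32\pi^2)$ of Eq.(\ref{mh}). Finally, using $v^4=4M^4_h/\lambda^2$ in Eq.(\ref{zero}),
\[
{\cal E}=\frac{M^4_h}{6\lambda}-\frac{M^4_h}{64\pi^2}\left(\frac{32\pi^2}{3\lambda}+\frac{1}{2}\right)=-\frac{M^4_h}{128\pi^2}\; .
\]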
These are the results for the $m_\Phi=0$ case. To study the phase
transition for a small $m^2_\Phi >0$, we will just quote the results
of Ref.\cite{Consoli:1999ni}. In this case, the one-loop potential
has the form \begin{equation} \label{nonzero}V_{\rm eff}(\varphi)= {\scriptstyle{\frac{1}{2}}}
m^2_\Phi\varphi^2+ \frac{\lambda\varphi^4}{4!}+ \frac{M^4(\varphi)
}{64 \pi^2} \left[ \ln \frac{M^2(\varphi) } { \sqrt{e} \Lambda^2_s}
+ F\left(\frac{m^2_\Phi} { M^2(\varphi)}\right) \right] \end{equation} where
\begin{equation} F(y)= \ln (1+y) + \frac{y(4+3y)} { 2 (1+y)^2} \end{equation} Then, by
introducing the mass-squared parameter Eq.(\ref{basic}) of the $
m_\Phi=0$ case, the condition for non-trivial minima $\varphi= \pm
v$ for $ m_\Phi \neq 0$ can be expressed as \cite{Consoli:1999ni}
\begin{equation} m^2_\Phi \leq \frac{\lambda M^2_h} { 64\pi^2\sqrt{e}} \equiv
m^2_c \end{equation} Since the critical mass for the phase transition vanishes,
in units of $M_h$, in the $\Lambda_s \to \infty$ limit \begin{equation}
\frac{m^2_c} { M^2_h }\sim \frac{1} { L} \to 0\end{equation} SSB emerges as an
infinitesimally weak first-order transition.
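As a rough numerical illustration (the value of $\lambda$ here is only an assumption), for a low-energy coupling $\lambda\sim 0.1$ one finds
\[
\frac{m^2_c}{M^2_h}=\frac{\lambda}{64\pi^2\sqrt{e}}\sim 10^{-4}\; ,
\]
i.e. $m_c\sim 10^{-2}\,M_h$, so that the first-order character of the transition would be visible only on a very fine mass scale.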
Notice that this critical mass has the same typical magnitude as the
quadratic shape $m^2_h$ in Eq.(\ref{mh}). In this sense, by
requiring SSB, we are establishing a mass hierarchy
\cite{Consoli:1999ni}: on the one hand, the tiny mass of the
symmetric phase, $m^2_\Phi\leq m^2_c$, and the similarly
infinitesimal quadratic shape $m^2_h$ of the potential at its
minima; on the other hand, the much larger $M^2_h$ entering the
zero-point energy which
destabilizes the symmetric phase \footnote{The analysis for the
one-component scalar field can be easily extended to a continuous
symmetry O(N) theory. To this end, it is convenient to follow
ref.\cite{Dolan:1974gu} where it is shown that the one-loop
potential is only due to the zero-point energy associated with the
radial field $\rho (x)$, the contribution from the Goldstone bosons
being exactly canceled by the change in the quantum measure $(Det
\rho)$.}.
As anticipated in the Introduction, to improve our analysis of the
primary $\lambda\Phi^4$ sector, we will now consider the whole set
of pairs ($\Lambda_s$,$\lambda$),($\Lambda'_s$,$\lambda'$),
($\Lambda''_s$,$\lambda''$), \ldots, with different Landau poles and
corresponding low-energy couplings. The correspondence is such as to
obtain the same value for the vacuum energy Eq.(\ref{basicground}),
or equivalently for the mass scale Eq.(\ref{basic}), thus ensuring
the cutoff independence of the result by requiring \begin{equation}
\label{CSground} \left(\Lambda_s\frac{\partial}{\partial\Lambda_s} +
\Lambda_s\frac{\partial \lambda}{\partial\Lambda_s}\frac{\partial
}{\partial \lambda}\right){\cal E}(\lambda,\Lambda_s)=0 \end{equation} By
assuming Eq.(\ref{basicground}) and with the definition \begin{equation}
\Lambda_s\frac{\partial \lambda}{\partial\Lambda_s}\equiv
-\beta(\lambda)= -\frac{3\lambda^2}{16\pi^2} +O(\lambda^3) \end{equation} the
solution is thus $|{\cal E}| \sim {\cal I }^4_1$, where ${\cal I
}_1$ is the first RG-invariant \footnote{ Note the minus sign in the
definition of the $\beta-$ function. This is because we are
differentiating the coupling constant
$\lambda=\lambda(\mu,\Lambda_s)$, at a certain scale $\mu=M_h$ and
with cutoff $\Lambda_s$, with respect to the cutoff and not with
respect to $\mu$. Namely, at fixed $\mu$, we are considering
different integral curves so that $\lambda$ has to decrease by
increasing $\Lambda_s$. Also, to use consistently the 1-loop
$\beta-$function in Eq.(\ref{I1}), the integral at the exponent
should be considered a definite integral that only depends on
$\lambda$ because its other limit, say $\lambda_0> \lambda$, is kept
fixed and such that, for $x<\lambda_0$, one can safely neglect
$O(x^3)$ terms in $\beta(x)$. Therefore, since $\lambda_0$ cannot be
too large, there is a relative $\lambda-$independent factor
$\exp({{16 \pi^2 }\over{3\lambda_0 }})\gg 1$ between Eq.(\ref{basic})
and Eq.(\ref{I1}). Strictly speaking, this means that, to obtain the
same physical $M_h$ from Eq.(\ref{basic}) and Eq.(\ref{I1}), one
should use vastly different values of $\Lambda_s$. This is a typical
example of cutoff artifact.} \begin{equation} \label{I1} {\cal I }_1= M_h=
\Lambda_s \exp({\int^{\lambda}\frac{dx}{\beta(x)} })\sim \Lambda_s
\exp( -{{16 \pi^2 }\over{3\lambda }})\end{equation} The above relations derive
from the more general requirement of RG-invariance of the effective
potential in the three-dimensional space ($\varphi$, $\lambda$,
$\Lambda_s$) \begin{equation} \label{CSveff}
\left(\Lambda_s\frac{\partial}{\partial\Lambda_s} +
\Lambda_s\frac{\partial \lambda}{\partial\Lambda_s}\frac{\partial
}{\partial \lambda} + \Lambda_s\frac{\partial
\varphi}{\partial\Lambda_s}\frac{\partial }{\partial \varphi}
\right) V_{\rm eff}(\varphi,\lambda,\Lambda_s)=0 \end{equation} In fact, at
the minima $\varphi=\pm v$, where $(\partial V_{\rm eff}/\partial
\varphi)= 0$, Eq.(\ref{CSground}) is a direct consequence of
Eq.(\ref{CSveff}).
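The cutoff independence encoded in Eqs.(\ref{CSground}) and (\ref{I1}) can be checked numerically: evolving $\lambda$ with the cutoff through the one-loop relation $1/\lambda(\Lambda_s)=1/\lambda_0+(3/16\pi^2)\ln(\Lambda_s/\Lambda_0)$ leaves ${\cal I}_1$ unchanged. The starting values below are illustrative (arbitrary units):

```python
import math

def lam_running(Lambda, Lambda0, lam0):
    """One-loop running with Lambda_s d(lam)/d(Lambda_s) = -3 lam^2/(16 pi^2):
    lambda decreases as the cutoff is raised at fixed low-energy scale."""
    return 1.0 / (1.0 / lam0 + 3.0 / (16 * math.pi**2) * math.log(Lambda / Lambda0))

def I1(Lambda, lam):
    """First RG invariant, Eq.(I1): Lambda_s * exp(-16 pi^2 / (3 lam))."""
    return Lambda * math.exp(-16 * math.pi**2 / (3 * lam))

# Illustrative starting point
Lambda0, lam0 = 100.0, 20.0
ref = I1(Lambda0, lam0)

for Lambda in (1e3, 1e5, 1e8):
    lam = lam_running(Lambda, Lambda0, lam0)
    print(f"Lambda_s={Lambda:9.3g}  lambda={lam:7.4f}  I1={I1(Lambda, lam):.6g}")
```

As $\Lambda_s$ is raised by five orders of magnitude, $\lambda$ decreases along the flow while ${\cal I}_1$ stays fixed to machine precision, which is just the content of Eq.(\ref{CSground}).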
Another consequence of this RG-analysis is that, by introducing an
anomalous dimension for the vacuum field \begin{equation} \label{anomalous}
\Lambda_s\frac{\partial \varphi}{\partial\Lambda_s}\equiv
\gamma(\lambda) \varphi \end{equation} there is a second invariant associated
with the RG-flow in the ($\varphi$, $\lambda$, $\Lambda_s$) space,
namely \begin{equation} \label{I2} {\cal I }_2(\varphi) = \varphi
\exp({\int^{\lambda}dx \frac{\gamma(x)}{\beta(x)} })\end{equation} which
introduces a particular normalization of $\varphi$. This was to be
expected because, from Eq.(\ref{basic}), the cutoff-independent
combination is \begin{equation} \lambda v^2\sim M^2_h={\cal I }^2_1 \end{equation} and not
$v^2$ itself, thus implying $\gamma= \beta/(2\lambda)$ \footnote{We
emphasize that this is the anomalous dimension of the {\it vacuum}
field $\varphi$ which is the argument of the effective potential. As
such, it is quite unrelated to the more conventional anomalous
dimension of the {\it shifted} field as obtained from the residue of
the connected propagator $Z=Z_{\rm prop}= 1 + O(\lambda)$. By
``triviality'', the latter is constrained to approach unity in the
continuum limit. To better understand the difference, it is useful
to regard symmetry breaking as a true condensation phenomenon
\cite{Consoli:1999ni} associated with the macroscopic occupation of
the same quantum state ${\bf k}=0$. Then $\varphi$ is related to the
condensate while the shifted field is related to the modes at ${\bf
k} \neq 0$ which are not macroscopically populated. Numerical
evidence for these two different re-scalings will be provided in
Sect.5. In fact, the logarithmically increasing $L$ relating $v^2$
and $\langle \Phi \rangle^2$ is the counterpart
\cite{Consoli:1993jr,Consoli:1997ra} of the logarithmically
increasing $L$ between $M^2_h$ and $m^2_h$ which can be observed on the
lattice.}. Therefore, the condition for the minimum of the effective
potential can be expressed as a proportionality relation between the
two invariants in terms of some constant $K$, say \begin{equation}{\cal I }_1= K
{\cal I }_2(v) \end{equation} Then, with the aim of extending our description
of SSB to the Standard Model, a question naturally arises. Suppose
that, as in the first version of the theory, SSB is essentially
generated in the pure scalar sector and the other couplings are just
small perturbative corrections. When we couple scalar and gauge
fields, and we want to separate the field into a vacuum component
and a fluctuation, what is the correct definition of the weak scale
$\langle \Phi \rangle\sim $ 246 GeV? A first possibility would be to
identify $\langle \Phi \rangle$ with the same $v$ considered so far
which in general, i.e. beyond the Coleman-Weinberg limit, is related
to $M_h$ through a relation similar to Eq.(\ref{basic}), say \begin{equation}
\label{large} v^2\sim L M^2_h= L {\cal I }^2_1 \end{equation} But $\langle \Phi
\rangle \sim $ 246 GeV is a basic entry of the theory (as the
electron mass and fine structure constant in QED). For such a
fundamental quantity, once we are trying to describe SSB in a
cutoff-independent way, a relation with the second invariant would
be more appropriate, i.e. \begin{equation} \langle \Phi\rangle^2 = {\cal
I}^2_2(v) = \frac {{\cal I}^2_1}{K^2 }= \frac{ M^2_h}{K^2 } \end{equation} so
that both $\langle \Phi \rangle^2\sim (v^2/L)$ and $M^2_h\sim
(v^2/L)$ are cutoff-independent quantities. If we adopt this latter
choice, the proportionality can then be fixed through the
generalization of Eq.(\ref{mh}) in terms of some constant $c_2$ \begin{equation}
\label{basicnew} V''_{\rm eff}(\pm v) = m^2_h \sim \frac{c_2
M^2_h}{L}\end{equation} and the traditional definition of $\langle \Phi\rangle$
from the quadratic shape of the effective potential \begin{equation} V''_{\rm
eff}(\pm v) = m^2_h = \frac{ \lambda\langle \Phi\rangle^2}{3}\sim
\frac{16 \pi^2}{9L} \langle \Phi\rangle^2 \end{equation} This gives \begin{equation}
\label{kfinite} M_h \sim \frac{4\pi}{3 \sqrt{c_2}} \langle \Phi
\rangle\equiv K \langle \Phi \rangle\end{equation} in terms of the constant
$c_2$ that, in Sect.5, will be estimated from lattice simulations of
the theory.
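To get a feeling for the numbers, Eq.(\ref{kfinite}) can be evaluated with the lattice estimate $(c_2)^{-1/2}\simeq 0.67$ obtained in Sect.5; the short computation below is only an order-of-magnitude sketch under that single input:

```python
import math

# Input from the lattice analysis of Sect.5: (c_2)^(-1/2) ~ 0.67
inv_sqrt_c2 = 0.67

# Eq.(kfinite): K = 4 pi / (3 sqrt(c_2)) = (4 pi / 3) * c_2^(-1/2)
K = 4 * math.pi / 3 * inv_sqrt_c2

# With the weak scale <Phi> ~ 246 GeV, this fixes the larger mass M_h
vev = 246.0  # GeV
M_h = K * vev

print(f"K = {K:.3f}")
print(f"M_h = K * 246 GeV = {M_h:.0f} GeV")
```

With this input one finds $K\simeq 2.8$ and hence $M_h$ in the several-hundred-GeV range, well above the quadratic-shape mass $m_h$.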
\section{On the validity of the one-loop potential }
Following the lattice simulations of
refs.\cite{lundow2009critical,Lundow:2010en,akiyama2019phase}, which
support the picture of SSB in $\lambda\Phi^4$ as a weak first-order
transition, we have considered in Sect.2 the simplest approximation
scheme which is consistent with this scenario, namely the one-loop
effective potential. From its functional form and its minimization
conditions, we have also argued that this simplest scheme can become
the basis for an alternative approach to the ideal continuum limit
such that the vacuum energy ${\cal E}$ and the natural definition of
the Standard Model weak scale $\langle \Phi \rangle \sim $ 246 GeV
are both finite, cutoff independent quantities.
But one may object that, as remarked by Coleman and Weinberg already
in 1973, the straightforward minimization procedure followed in our
Sect.2, and used to derive ${\cal E}$ and $\langle \Phi \rangle$,
can be questioned. The point is that by performing the standard
Renormalization Group (RG) ``improvement'' of the one-loop
potential, all leading-logarithmic terms are reabsorbed into a
positive running coupling constant $\lambda(\varphi) $. Thus, by
preserving the positivity of $\lambda(\varphi) $, the one-loop
minimum disappears and one would now predict a second-order
transition at $m^2_\Phi = 0$, as in the classical potential. The
conventional view is that the latter result is trustworthy while the
former is not. The argument is that the one-loop potential's
non-trivial minimum occurs where the one-loop ``correction'' term is
as large as the tree-level term. However, this standard
RG-improved result can itself be questioned because, near the one-loop
minimum, the convergence of the resulting geometric series of
leading logs is not so obvious.
To gain insight, one can then compare with other approximation
schemes, for instance the Gaussian approximation
\cite{Barnes:1978cd,Stevenson:1985zy} which has a variational nature
and explores the Hamiltonian in the class of the Gaussian functional
states. It also represents a very natural alternative because, at
least in the continuum limit, a Gaussian structure of Green's
functions fits with the generally accepted ``triviality'' of the
theory in 3+1 dimensions. This other calculation produces a result
in agreement with the one-loop potential
\cite{Consoli:1993jr,Consoli:1997ra}. This agreement does not mean
that there are no non-vanishing corrections beyond the one-loop
level; there are, but those additional terms do not alter the
functional form of the result. The point is that, again, as in the
one-loop approximation, the Gaussian effective potential can be
expressed as a classical background + zero-point energy with a
$\varphi-$dependent mass as in Eq.(\ref{zero}) \footnote{As already
remarked for the one-loop potential, also for the Gaussian effective
potential the zero-point energy in a spontaneously broken O(N)
theory is just due to the shifted radial field. For the Gaussian
approximation this requires the diagonalization
\cite{Naus:1995xu,Okopinska:1995su} of the mass matrix to explicitly
display a spectrum with one massive field and (N-1) massless fields
as required by the Goldstone theorem.}, i.e. \begin{equation} \label{vgauss}
V^G_{\rm eff}(\varphi) = \frac{\hat\lambda\varphi^4}{4!}
-\frac{\Omega^4(\varphi) }{64 \pi^2} \ln \frac{ \Lambda^2_s
\sqrt{e} } {\Omega^2(\varphi)} \end{equation} with \begin{equation} \hat \lambda=
\frac{\lambda } {1 + \frac{\lambda}{16 \pi^2} \ln \frac {\Lambda_s}{
\Omega(\varphi)} } \end{equation} and \begin{equation} \Omega^2(\varphi) =
\frac{\hat\lambda\varphi ^2} {2 } \end{equation} This explains why the one-loop
potential can also admit a non-perturbative interpretation. It is
the prototype of the gaussian and post-gaussian calculations
\cite{Stancu:1989sk,Cea:1996pe} where higher-order contributions to
the energy density are effectively reabsorbed into the same basic
structure: a classical background + zero-point energy with a
$\varphi-$dependent mass.
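The pair $(\hat\lambda,\Omega)$ in Eq.(\ref{vgauss}) is defined self-consistently: $\hat\lambda$ depends on $\Omega(\varphi)$, which in turn depends on $\hat\lambda$. A simple fixed-point iteration, with illustrative values of $\lambda$, $\Lambda_s$ and $\varphi$ (arbitrary units), shows how the gap equation is solved in practice:

```python
import math

def solve_gap(lam, Lambda_s, phi, tol=1e-12, max_iter=200):
    """Iterate  hat_lam = lam / (1 + lam/(16 pi^2) ln(Lambda_s/Omega)),
    with  Omega^2 = hat_lam phi^2 / 2,  until self-consistent."""
    hat_lam = lam
    for _ in range(max_iter):
        Omega = math.sqrt(hat_lam * phi**2 / 2.0)
        new = lam / (1.0 + lam / (16 * math.pi**2) * math.log(Lambda_s / Omega))
        if abs(new - hat_lam) < tol:
            return new, math.sqrt(new * phi**2 / 2.0)
        hat_lam = new
    raise RuntimeError("gap equation did not converge")

# Illustrative inputs (arbitrary units)
hat_lam, Omega = solve_gap(lam=10.0, Lambda_s=1.0e3, phi=1.0)
print(f"hat_lambda = {hat_lam:.6f},  Omega = {Omega:.6f}")
```

The iteration converges rapidly because the logarithm varies slowly with $\Omega$; the resulting $\hat\lambda<\lambda$ is the effective coupling screened by the $\ln(\Lambda_s/\Omega)$ factor.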
\begin{figure}
\begin{center}
\includegraphics[bb=20 0 1100 500,
angle=0,scale=0.23]{figura_marzo_2020.eps} \caption{\it The
re-arrangement of the perturbative expansion considered by Stevenson
\cite{Stevenson:2008mw} in his alternative RG-analysis of the
effective potential. Besides the tree-level $+\lambda
\delta^{3}({\bf r})$ repulsion, the quanta of the symmetric phase, with mass
$m_\Phi$, feel a $-\lambda^2 \frac {e^{-2 m_\Phi r}}{r^3}$
attraction from the Fourier transform of the second diagram in
square brackets \cite{Consoli:1999ni} whose range becomes longer and
longer in the $m_\Phi \to 0$ limit. For $m_\Phi$ below a critical
mass $m_c$, this dominates and induces SSB in the one-loop
potential. Since the higher-order terms just renormalize these two
basic effects, the RG-improved effective potential, in this new
scheme, confirms the same scenario as the one-loop approximation. }
\end{center}
\label{pictorial}
\end{figure}
But, even by taking into account the indications of lattice
simulations
\cite{lundow2009critical,Lundow:2010en,akiyama2019phase}, and having
at hand the explicit one-loop and gaussian calculations
Eqs.(\ref{zero}) and (\ref{vgauss}), a skeptical reader may still be
reluctant to abandon the standard second-order scenario. Such a reader would
like a general argument explaining why the standard RG-analysis,
which predicts the correct $\Lambda_s-$dependence of the low-energy
coupling, fails instead to predict the order of the phase
transition.
Finding such a general argument was, indeed, the motivation of
ref.\cite{Consoli:1999ni} : understanding the physical mechanisms at
the base of SSB as a first-order transition. Here, the crucial
observation was that the quanta of the symmetric phase, the
``phions'' \cite{Consoli:1999ni}, besides the $+\lambda
\delta^{3}({\bf r})$ tree-level repulsion, also feel a $-\lambda^2
\frac {e^{-2 m_\Phi r}}{r^3}$ attraction which shows up at the
one-loop level and whose range becomes longer and longer in the
$m_\Phi \to 0$ limit \footnote{Starting from the scattering matrix
element $\cal M$, obtained from Feynman diagrams, one can construct
an interparticle potential that is basically the 3-dimensional
Fourier transform of $\cal M$, see the articles of Feinberg et al.
\cite{Feinberg:1968zz,Feinberg:1989ps}.}. By taking into account
both effects, a calculation of the energy density in the dilute-gas
approximation \cite{Consoli:1999ni}, which is equivalent to the
one-loop potential, indicates that for small $m_\Phi$ the
lowest-energy state is not the empty state with no phions but a
state with a non-zero density of phions Bose condensed in the
zero-momentum mode. The instability corresponds to spontaneous
symmetry breaking and happens when the phion's physical mass
squared $m^2_\Phi$ is still positive.
Then, if one thinks that SSB originates from these two qualitatively
different competing effects, one can now understand why the standard
RG-resummation fails to predict the order of the phase transition.
In fact, the one-loop attractive term originates from the {\it
ultraviolet finite} part of the one-loop diagrams. Therefore, the
correct way to include higher order terms in the effective potential
is to renormalize {\it both} the tree-level repulsion and the
long-range attraction, as in a theory with {\it two} coupling
constants \footnote{This is similar to what happens in scalar
electrodynamics \cite{Coleman:1973jx}. There, if the scalar self-coupling is not
too large, no conflict arises between one-loop potential and its
standard RG-improvement. }. This strategy, which is clearly
different from the usual one, has been implemented by Stevenson
\cite{Stevenson:2008mw}, see Fig.1. In this new scheme, one can obtain
SSB without violating the positivity of $\lambda(\varphi)$ so that
the one-loop effective potential and its RG improvement now agree
very well. Stevenson's analysis confirms the weak first-order
scenario and the same two-mass picture $M^2_h\sim m^2_h \ln
(\Lambda_s/M_h)$.
\section{$m_h$ and $M_h$: the quasi-particles of the broken phase}
After having described the various aspects and the general validity
of the one-loop calculation, let us now try to sharpen the meaning
of the two mass scales $m_h$ and $M_h$. To this end, we will first
express the inverse propagator in its general form in terms of the
2-point self-energy function $\Pi(p)$ \begin{equation} \label{inverse} G^{-1}(p)=
p^2 -\Pi(p) \end{equation} Then, since the derivatives of the effective
potential produce (minus) the n-point functions at zero external
momenta, our smaller mass can be expressed as \begin{equation} \label{Pi0}
m^2_h\equiv V''_{\rm eff}(\varphi=\pm v)=-\Pi(p=0)=|\Pi(p=0)| \end{equation} so
that $G^{-1}(p)\sim p^2 + m^2_h $ for $p\to 0$.
As far as $M_h$ is concerned, we can instead use the relation of the
zero-point-energy (``zpe'') in Eq.(\ref{zero}) to the trace of the
logarithm of the inverse propagator \begin{equation}\label{general} zpe=
\frac{1}{2} \int {{ d^4 p}\over{(2\pi)^4}} \ln (p^2-\Pi(p))\end{equation} Then,
after subtraction of constant terms and of quadratic divergences, to
match the one-loop form in Eq.(\ref{zero}), we can impose suitable
lower and upper limits to the $p$-integration in the logarithmic
divergent part (i.e. $p^2_{\rm max}\sim \sqrt{e}\Lambda^2_s$ and
$p^2_{\rm min}\sim M^2_h$)
\begin{equation} \label{connection2} zpe=-\frac{1}{4} \int^{p_{\rm
max}}_{p_{\rm min}} {{ d^4 p}\over{(2\pi)^4}} \frac{\Pi^2(p)}{p^4}
\sim-\frac{ \langle \Pi^2(p)\rangle }{64\pi^2} \ln\frac{p^2_{\rm
max}}{p^2_{\rm min}}\sim -\frac{M^4_h}{64\pi^2}
\ln\frac{\sqrt{e}\Lambda^2_s }{M^2_h} \end{equation} This shows that the
quartic term $M^4_h$ is associated with the typical, average value
$\langle \Pi^2(p)\rangle$ at non-zero momentum. Thus, if we trust
the one-loop relation $M^2_h\sim m^2_h\ln\frac{\Lambda_s}{M_h}$,
there should be substantial deviations when trying to extrapolate
the propagator to the higher-momentum region with the same
1-particle form $G^{-1}(p)\sim p^2 + m^2_h $ which controls the
$p\to 0$ limit.
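The logarithmic structure in Eq.(\ref{connection2}) follows from elementary 4D phase-space counting, $d^4p = 2\pi^2 p^3\,dp$ for the radial integration. As a sketch, taking a constant $\Pi^2(p)\sim M^4_h$ over the shell, a direct numerical quadrature reproduces the $-\frac{M^4_h}{64\pi^2}\ln(p^2_{\rm max}/p^2_{\rm min})$ behaviour:

```python
import math

def zpe_log_term(M_h, p_min, p_max, n=400_000):
    """Midpoint quadrature of  -(1/4) int d^4p/(2 pi)^4 * M_h^4 / p^4
    over p_min < |p| < p_max, using d^4p = 2 pi^2 p^3 dp."""
    h = (p_max - p_min) / n
    radial = sum(1.0 / (p_min + (i + 0.5) * h) for i in range(n)) * h  # int dp/p
    return -0.25 * (2 * math.pi**2) / (2 * math.pi)**4 * M_h**4 * radial

# Illustrative scales: p_min ~ M_h and p_max^2 ~ sqrt(e) Lambda_s^2
M_h = 1.0
Lambda_s = 100.0
p_min, p_max = M_h, math.e**0.25 * Lambda_s

numeric = zpe_log_term(M_h, p_min, p_max)
analytic = -M_h**4 / (64 * math.pi**2) * math.log(p_max**2 / p_min**2)
print(f"numeric = {numeric:.6g},  analytic = {analytic:.6g}")
```

The quadrature agrees with the closed form, confirming the coefficient $1/(64\pi^2)$ quoted in Eq.(\ref{connection2}).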
Before considering deviations of the propagator from the standard
1-particle form, one should first envisage what kind of constraints
are placed by ``triviality''. This dictates a continuum limit as a
generalized free-field theory, i.e. where all interaction effects
are reabsorbed into the first two moments of a Gaussian
distribution. Therefore, in this limit, the spectrum can just
contain free massive particles.
However Stevenson's alternative RG-analysis \cite{Stevenson:2008mw},
besides confirming the two-scale structure $M^2_h\sim m^2_h \ln
(\Lambda_s/M_h)$ found at one loop, also indicates how to recover
the massive free-field limit in an unconventional way. In fact, his
propagator interpolates between $G^{-1}(p=0)= m^2_h $ and
$G^{-1}(p)\sim (p^2 + M^2_h) $ at momenta $p^2 \gg m^2_h$, see his
Eqs.(16)$-$(22). This suggests the following general form of the
propagator \begin{equation} \label{interpolation} G^{-1}(p) = (p^2 + M^2_h) f (p)
\end{equation} with $f(p)\sim (m_h/M_h)^2$ in the $p \to 0$ limit and $f(p)
\to 1$ for momenta $p^2 \gg m^2_h$. Also, note that his Eq.(23)
should be read as $G^{-1}(p)$ and that he considers the continuum
limit $(m_h/M_h)^2 \to 0$. Then $f(p)$ becomes a step function which
is unity for any finite $p$ (i.e. for any $p$ finite in units of
$M_h$) except for a discontinuity at $p=0$ where $f=0$. Up to this
discontinuity in the zero-measure set $p=0$, one then re-discovers
the usual trivial continuum limit with just one massive free
particle \footnote{Note that $p=0$ represents a Lorentz-invariant
set being transformed into itself under any transformation of the
Poincar\'e Group. Thus, in principle, a continuum limit with a
discontinuity in the zero-measure set $p=0$ is not forbidden in
translational invariant vacua as with SSB.}.
We are thus led to consider the following picture of the cutoff
theory where both $m_h$ and $M_h$ are finite, albeit vastly
different scales. This picture introduces two types of
``quasi-particles'': quasi-particles of type I, with mass $m_h$,
and quasi-particles of type II, with mass $M_h$. The quasi-particles
of type I are the weakly coupled excitations of the broken-symmetry
phase in the low-momentum region. With increasing momentum, these
first quasi-particle states become more strongly coupled. However,
the constraint placed by ``triviality'' is that, by approaching the
continuum limit, all interaction effects have to be effectively
reabsorbed into the mass of other quasi-particles, those of type II,
i.e. into the parameter we have called $M_h$. The very large
difference between $M_h$ and $m_h$, expected from our analysis of
the effective potential, implies that at higher momentum the
self-coupling of quasi-particles of type I becomes substantial but,
nevertheless, will remain hidden in the transition from $m_h$ to
$M_h$. In an ideal continuum limit, the whole low-momentum region
for the quasi-particles of type I reduces to the zero-measure set
$p=0$ and one is just left \footnote{Here, an analogy can help
intuition. To this end, one can compare the continuum limit of SSB
to the incompressibility limit of a superfluid. In general, this has
two types of excitations: low-momentum compressional modes (phonons)
and higher momentum vortical modes (rotons). If the sound velocity
$c_s\to \infty$ the phase space of the phonon branch, the analog of
the quasi-particles of type I, with energy $E({\bf k})=c_s|{\bf
k}|$, would just reduce to the zero-measure set ${\bf k}=0$. Then,
in this limit, only rotons, the analog of the quasi-particles of
type II, would propagate in the system.} with the quasi-particles of
type II with mass $M_h$.
To show that this new interpretation of ``triviality'' is not just
speculation, in the following section, we will report the results of
lattice simulations of the broken-symmetry phase which support our
two-mass picture.
\section{Comparison with lattice simulations}
We will now compare the two-mass picture of Sects.2-4 with the
results of lattice simulations in the broken-symmetry phase of
$\lambda\Phi^4$ in 4D.
These simulations have been performed in the
Ising limit of the theory governed by the lattice action
\begin{equation}
\label{ising} S_{\rm{Ising}} = -\kappa \sum_x\sum_{\mu} \left[
\phi(x+\hat e_{\mu})\phi(x) + \phi(x-\hat e_{\mu})\phi(x) \right]
\end{equation}
with the lattice field $\phi(x)$ taking only the values $\pm 1$.
Also, the broken-symmetry phase corresponds to $\kappa> \kappa_c$,
this critical value being now precisely determined as
$\kappa_c=0.0748474(3)$ \cite{lundow2009critical,Lundow:2010en}.
Referring to \cite{Lang:1993sy,montvay1997quantum} for the various
aspects of the analysis, we recall that the Ising limit is
traditionally considered a convenient laboratory for a
non-perturbative study of the theory. As anticipated in the
Introduction, it corresponds to a $\lambda\Phi^4$ theory with an infinite
bare coupling, as if one were sitting precisely at the Landau pole.
In this sense, for any finite cutoff, it provides the best
definition of the local limit for a given value of the renormalized
parameters.
Using the Swendsen-Wang \cite{Swendsen:1987ce} and
Wolff~\cite{Wolff:1988uh} cluster algorithms, we computed the vacuum
expectation value
\begin{equation}
\label{baremagn}
v=\langle |\phi| \rangle \quad , \quad \phi \equiv \frac{1}{V_4}\sum_x
\phi(x)
\end{equation}
and the connected propagator
\begin{equation}
\label{connected} G(x)= \langle \phi(x)\phi(0)\rangle - v^2
\end{equation}
where $\langle ...\rangle$ denotes averaging over the lattice
configurations.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth,clip]{fig_k=0.0740.eps}
\caption{\it The lattice data for the re-scaled propagator in the
symmetric phase at $\kappa=0.074$ as a function of the square
lattice momentum ${\hat p}^2$ with $\hat{p}_\mu =2 \sin p_\mu/2$.
The fitted mass is $m_{\rm{latt}}=0.2141(28)$ and the dashed line
indicates the value of $Z_{\rm{prop}}=0.9682(23)$. The zero-momentum
full point is
$Z_\varphi=(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}}=0.9702(91)$.
Data are taken from Ref.~\cite{Cea:1999kn}. } \label{0.074}
\end{figure}
Our aim was to check the basic relation $M^2_h\sim m^2_h \ln
(\Lambda_s/M_h)$ where $M_h$ describes the higher momentum
propagator and $m_h$ is defined from the zero-momentum 2-point
function Eq.(\ref{Pi0}) \begin{equation} m^2_h\equiv V''_{\rm eff}(\pm
v)=-\Pi(p=0)= |\Pi(p=0)|\end{equation} By introducing the Fourier transform of
the propagator $G(p)$, its $p=0$ limit is the susceptibility $\chi$
whose conventional definition includes the normalization factor
$2\kappa$, i.e. $2\kappa\chi \equiv 2\kappa G(p=0)$. Therefore the
extraction of $m_h$ is straightforward \begin{equation} 2\kappa \chi=2\kappa
G(p=0) =\frac{1}{|\Pi(p=0)|} \equiv \frac{1}{m^2_h} \end{equation} Extraction
of $M_h$ requires more effort. To this end, let us denote by $m
_{\rm{latt}}$ the mass obtained directly from a fit to the
propagator data in some region of momentum. If our picture is
correct, the difference of the value $M_h\equiv m _{\rm{latt}}$, as
fitted in the higher-momentum region, from the corresponding
$m_h\equiv (2\kappa \chi_{\rm{latt}})^{-1/2}$, should become larger
and larger in the continuum limit. Namely, the quantity \begin{equation}
Z_\varphi=\frac{M^2_h}{m^2_h} \equiv m^2 _{\rm{latt}} (2\kappa
\chi_{\rm{latt}}) \end{equation} should exhibit a definite logarithmic increase
when approaching the critical point $\kappa \to \kappa_c$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth,clip]{fig_k=0.07512.eps}
\caption{\it The lattice data for the re-scaled propagator in the
broken phase at $\kappa=0.07512$ as a function of the square lattice
momentum ${\hat p}^2$ with $\hat{p}_\mu =2 \sin p_\mu/2$. The fitted
mass is $m_{\rm{latt}}=0.2062(41)$ and the dashed line indicates the
value of $Z_{\rm{prop}}=0.9551(21)$. The zero-momentum full point is
$Z_\varphi=(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}}=1.234(50)$. Data
are taken from Ref.~\cite{Cea:1999kn}. } \label{0.07512}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth,clip]{fig_k=0.07504.eps}
\caption{\it The lattice data for the re-scaled propagator in the
broken phase at $\kappa=0.07504$ as a function of the square lattice
momentum ${\hat p}^2$ with $\hat{p}_\mu =2 \sin p_\mu/2$. The fitted
mass is $m_{\rm{latt}}=0.1723(34)$ and the dashed line indicates the
value of $Z_{\rm{prop}}=0.9566(13)$. The zero-momentum full point is
$Z_\varphi=(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}}=1.307(52)$. Data
are taken from Ref.~\cite{Cea:1999kn}. } \label{0.07504}
\end{figure}
This analysis was first performed in Ref.\cite{Cea:1999kn} for both
symmetric and broken phase. The data for the connected propagator
$2\kappa G(p)$ were first fitted to the 2-parameter form \begin{equation}
\label{twoparameter} G_{\rm{fit}}(p)=\frac{Z_{\rm{prop}} }{{\hat
p}^2 +m^2_{\rm{latt}} }\end{equation} in terms of the squared lattice momentum
${\hat p}^2$ with $\hat{p}_\mu=2 \sin p_\mu/2$. The data were then
plotted after a re-scaling by the factor $({\hat
p}^2+m^2_{\rm{latt}})$. In this way, deviations from constancy
become clearly visible and indicate how well a given lattice mass
can describe the data down to $p\to 0$.
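The fit of Eq.(\ref{twoparameter}) amounts to a linear fit of the inverse propagator, since $1/G_{\rm fit}=({\hat p}^2+m^2_{\rm latt})/Z_{\rm prop}$ is linear in ${\hat p}^2$. A self-contained sketch of the procedure on synthetic, noise-free data (not the actual lattice configurations):

```python
import math

def fit_propagator(p2_vals, G_vals):
    """Least-squares line 1/G = a*p2 + b, then Z = 1/a and m^2 = b/a,
    i.e. the 2-parameter form G = Z/(p^2 + m^2) of Eq.(twoparameter)."""
    x, y = p2_vals, [1.0 / g for g in G_vals]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return 1.0 / a, b / a  # Z_prop, m_latt^2

# Synthetic data generated from known parameters (illustrative only)
Z_true, m2_true = 0.96, 0.21**2
p2 = [0.1 * (i + 1) for i in range(30)]
G = [Z_true / (x + m2_true) for x in p2]

Z_fit, m2_fit = fit_propagator(p2, G)
print(f"Z_prop = {Z_fit:.4f},  m_latt = {math.sqrt(m2_fit):.4f}")
```

On real data one would restrict `p2` to the higher-momentum window and then inspect, as in the figures, how the re-scaled propagator $({\hat p}^2+m^2_{\rm latt})\,G(p)/Z_{\rm prop}$ departs from unity as $p\to 0$.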
The results for the symmetric phase, in Fig.\ref{0.074} at
$\kappa=0.074$, show that, there, a single lattice mass works
remarkably well in the whole range of momentum down to $p=0$. Also
$Z_\varphi=(2\kappa)m^2_{\rm{latt}}\chi_{\rm{latt}}=0.9702(91)$
agrees very well with the fitted $Z_{\rm{prop}}=0.9682(23)$.
In Figs.\ref{0.07512} and \ref{0.07504} we then report the analogous
plots for the broken-symmetry phase at $\kappa=0.07512$ and
$\kappa=0.07504$ for $m_{\rm{latt}}=$ 0.2062(41) and
$m_{\rm{latt}}=$ 0.1723(34) respectively. As one can see, the fitted
lattice mass describes the data well for not too small values of
the momentum, but in the $p\to 0$ limit the deviation from constancy
becomes highly significant statistically. To make this completely
evident, we show in Fig.\ref{chisquare} the normalized chi-square
vs. the number of points included in the fit.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth,clip]{fig_k=0.07504_chisquare.eps}
\caption{\it For $\kappa=0.07504$ we show the value of the
normalized chi-square and the fitted lattice mass depending on the
number of points included in the high-energy region. Data are taken
from Ref.~\cite{Cea:1999kn}. } \label{chisquare}
\end{figure}
Notice that the two quantities
$Z_\varphi=(2\kappa)m^2_{\rm{latt}}\chi_{\rm{latt}}=1.234(50)$ and
$Z_\varphi=(2\kappa)m^2_{\rm{latt}}\chi_{\rm{latt}}=1.307(52)$
respectively are now very different from the corresponding
quantities $Z_{\rm{prop}}=0.9551(21)$ and $Z_{\rm{prop}}=0.9566(13)$
obtained from the higher-momentum fits. Also, the value of
$Z_\varphi$ increases by approaching the critical point as expected.
The whole issue was thoroughly re-analyzed by Stevenson
\cite{Stevenson:2005yn} in 2005. For an additional check, he also
extracted propagator data from the time-slices for the connected
correlator measured by Balog et al. \cite{Balog:2004zd} for
$\kappa=0.0751$. He found that their higher-momentum data
required a mass value $m_{\rm{latt}}\sim 0.2$ but, again, see his
Fig.6(d), this mass could not describe the very low momentum points,
exactly as in our Figs.\ref{0.07512} and \ref{0.07504}. In
connection with the susceptibility $\chi_{\rm{latt}}=206.4(1.2)$
measured by Balog et al. at $\kappa=0.0751$ (see their Table 3),
this gives $Z_\varphi=(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}}\sim
1.24$ in very good agreement with our determination
$Z_\varphi=(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}}=1.234(50)$ at
the very close point $\kappa=0.07512$.
Therefore, data collected by other groups confirmed that in
the broken-symmetry phase $M_h\equiv m_{\rm{latt}}$, obtained from a
fit to the higher-momentum propagator data, and $m_h=(2\kappa
\chi_{\rm{latt}})^{-1/2}$ become more and more different in the
continuum limit.
\begin{table}[htb]
\tbl{The values of the susceptibility at various $\kappa$. The
results for $\kappa=0.07512$ and $\kappa=0.07504$ are taken from
ref.\cite{Cea:1999kn}. The result for $\kappa=0.0751$ is taken from
ref.\cite{Balog:2004zd} while the other value at $\kappa=0.0749$
derives from our new simulations on a $76^4$ lattice.}
{\begin{tabular}{@{}ccc@{}} \toprule
$\kappa$ &lattice &$\chi_{\rm{latt}}$ \\
\hline
0.07512 &$32^4$ &193.1(1.7) \\
\hline
0.0751 &$48^4$ &206.4(1.2) \\
\hline 0.07504 &$32^4$ &293.38(2.86) \\
\hline 0.0749 &$76^4$ & 1129(24) \\
\botrule
\end{tabular}}
\end{table}
However, since this is still not generally appreciated, and to
emphasize the phenomenological implications, we will now display
more precisely the predicted logarithmic increase of $Z_\varphi$. To
this end, we will show that the lattice data give consistent values
of the proportionality constant $c_2$ in Eq.(\ref{basicnew}) \begin{equation}
\label{cc2} Z_\varphi=\frac{M^2_h}{m^2_h} \equiv
(2\kappa\chi_{\rm{latt}})m^2_{\rm{latt}} \sim \frac{ L }{c_2} \end{equation}
where $L\equiv \ln(\Lambda_s/m_{\rm{latt}})$. This requires
computing the combination \begin{equation} \label{c2} m_{\rm{latt}} \sqrt{
\frac{2\kappa \chi_{\rm{latt}} }{\ln(\pi/am_{\rm{latt}}) }} \equiv
\frac{1}{\sqrt {c_2}} \end{equation} where we have expressed the cutoff as
$\Lambda_s \sim \pi/a$ in terms of the lattice spacing $a$. In
this derivation, no additional theoretical inputs (such as
definitions of renormalized mass and coupling constant) are needed.
The only two ingredients are i) the direct measurement of the
susceptibility and ii) the direct measurements of the connected
propagator. The higher-momentum region reproduced by the
two-parameter form Eq.(\ref{twoparameter}) is determined by the data
themselves and used to extract $m_{\rm{latt}}$.
\begin{table}[htb]
\tbl{The values of $m_{\rm{latt}}$, as obtained from a direct fit to
the higher-momentum propagator data, are reported together with the
other quantities entering the determination of the coefficient $c_2$
in Eq.(\ref{c2}). The entries at $\kappa=0.07512$ and
$\kappa=0.07504$ are taken from ref.\cite{Cea:1999kn}. The
susceptibility at $\kappa=0.0751$ is directly reported in
ref.\cite{Balog:2004zd}. The corresponding mass at $\kappa=0.0751$
was extracted by Stevenson \cite{Stevenson:2005yn} (see his
Fig.6(d)) by fitting to the higher-momentum data of
ref.\cite{Balog:2004zd}. The two entries at $\kappa=0.0749$, from
our new simulations on a $76^4$ lattice, refer to higher-momentum
fits for ${\hat p}^2>0.1$ and ${\hat p}^2>0.2$ respectively.}
{\begin{tabular}{ccccc}\toprule $\kappa$ & $m_{\rm{latt}}$ & $
(2\kappa \chi_{\rm{latt}})^{1/2}$ &~~ $[\ln(\Lambda_s/m_{\rm{latt}}
)]^{-1/2}$ &
~~$(c_2)^{-1/2}$\\
\hline
0.07512 & 0.2062(41) & 5.386(23)& 0.606(2) & 0.673(14) \\
\hline 0.0751 & $\sim 0.200$ & 5.568(16) & $\sim 0.603$ & $\sim
0.671$\\
\hline 0.07504 & 0.1723(34) & 6.636(32)& 0.587(2) & 0.671(14) \\
\hline 0.0749 & 0.0933(28) & 13.00(14) & 0.533(2) & 0.647(20) \\
\hline 0.0749 & 0.100(6) & 13.00(14) & 0.538(4) & 0.699(42) \\
\botrule
\end{tabular}}
\end{table}
We first give in Table 1 the measured values of the lattice
susceptibility at various $\kappa$ (well within the scaling region).
We then report in Table 2 the fitted $m_{\rm{latt}}$ together with
the other quantities entering the determination of the coefficient
$c_2$ in Eq.(\ref{c2}). The spread of the central values at
$\kappa=0.0749$ reflects the theoretical uncertainty in the choice
of the higher-momentum range, ${\hat p}^2>0.1$ and ${\hat p}^2>0.2$
respectively. Only the region ${\hat p}^2<0.1$ cannot be
consistently considered with the rest of the data, see
Fig.\ref{0933}. In this low-momentum range the propagator data would
in fact require the same mass parameter
$m_h=(2\kappa\chi_{\rm{latt}})^{-1/2}=0.0769$ fixed by the inverse
susceptibility, see Fig.\ref{susce}.
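For reference, the low-momentum mass quoted above follows directly from the Table 1 susceptibility (a trivial check, shown here for convenience):

```python
kappa, chi_latt = 0.0749, 1129.0          # Table 1, 76^4 lattice
m_h = (2 * kappa * chi_latt) ** -0.5      # m_h = (2*kappa*chi_latt)^(-1/2)
print(f"m_h = {m_h:.4f}")                 # -> m_h = 0.0769
```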
The reason for this uncertainty is that, unlike in the
simulations at $\kappa=0.07512$ and $\kappa=0.07504$, this
higher-momentum range cannot be uniquely determined by simply
imposing a normalized chi-square of order unity as in
Fig.\ref{chisquare}. To this end, in fact, statistical errors should
be reduced by, at least, a factor of $2$ with a corresponding
increase of the CPU time by a factor $4$. Due to the large size
$76^4$ of the lattice needed to run a simulation at $\kappa=0.0749$,
this increase in statistics would take several additional months.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth,clip]{k=0.0749_rescaled_prop_m=0.093281.eps}
\caption{\it The propagator data, at
$\kappa=0.0749$, rescaled with the lattice mass
$m_{\rm{latt}}=0.0933(28)$ obtained from the fit to all data with
${\hat p}^2>0.1$. The square at $p=0$ is $Z_\varphi=
m^2_{\rm{latt}}(2\kappa\chi_{\rm{latt}})= 1.47(9)$.} \label{0933}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth,clip]{k=0.0749_rescaled_prop_m=0.0769231.eps}
\caption{\it The propagator data at
$\kappa=0.0749$ for ${\hat p}^2<0.1$. The lattice mass used here for
the rescaling was fixed at the value
$m_h\equiv(2\kappa\chi_{\rm{latt}})^{-1/2}=0.0769$. } \label{susce}
\end{figure}
Nevertheless, with our present statistics this type of uncertainty
can be translated into the average estimate $m_{\rm{latt}} \sim
0.096 (3)$ at $\kappa=0.0749$, or $1/\sqrt{c_2}\sim 0.67 \pm 0.02$.
In turn, besides the statistical errors, this is equivalent to a
systematic error $\pm 0.02 $ in the final average \begin{equation}
\label{c2final} \frac{1}{\sqrt {c_2}}=0.67 \pm 0.01({\rm stat}) \pm
0.02({\rm sys}) \end{equation} With this determination, we can then compare
with the L\"uscher-Weisz scheme \cite{Luscher:1987ek} where the mass $m_{\rm R}$, the
coupling constant \footnote{In the L\"uscher-Weisz paper the scalar
self-coupling is called $g$. However, here, to avoid possible
confusion with the gauge couplings we will maintain the traditional
notation $\lambda$.} $\lambda_{\rm R}$ and the weak scale $\langle
\Phi\rangle$ are related through the relation \begin{equation} \label{mlw}
\frac{m^2_{\rm R} }{\langle \Phi\rangle^2}= \frac{\lambda_{\rm R}
}{3} \end{equation} and the mass is expressed in terms of the zero-momentum
propagator as \begin{equation} \frac{Z_{\rm R} }{m^2_{\rm R}} = G(p=0) = 2\kappa
\chi= \frac{1 }{m^2_h} \end{equation} through a perturbative rescaling $Z_{\rm
R} \lesssim1$.
Traditionally, Eq.(\ref{mlw}) has been used to place upper bounds on
the Higgs boson mass depending on the value of $\lambda_{\rm R}\sim
(1/L)$ and thus on the magnitude of $\Lambda_s$. Instead, in our
case, where $M^2_h \sim L m^2_h \sim L m^2_{\rm R} $, it can be used
to express the value of $M_h$ in units of $\langle \Phi\rangle$
because the two quantities now scale uniformly, see
Eq.(\ref{kfinite}). Since our estimate of the $M_h-m_h$ relation
just takes into account the leading-order logarithmic effect, in a
first approach, we will neglect the non-leading quantity $Z_{\rm R}$
and, as sketched at the end of Sect.2, approximate $m_{\rm R}\sim
m_h$. Therefore, by using the leading-order relation $(m_h/\langle
\Phi\rangle)^2\sim 16 \pi^2/(9L)$, Eq.(\ref{cc2}) and the average
value Eq.(\ref{c2final}), the logarithmically divergent $L$ drops out
and we find \begin{equation} \frac{M_h}{\langle \Phi\rangle}=
\sqrt{\frac{m^2_h}{\langle \Phi\rangle^2 } \frac{M^2_h}{m^2_h }}\sim
\sqrt{ \frac{16\pi^2}{9L } \frac{L}{c_2 } }= 2.81 \pm 0.04({\rm
stat}) \pm 0.08({\rm sys}) \end{equation} or, for $ \langle \Phi\rangle \sim$
246 GeV, \begin{equation} \label{leading} M_h = 690 \pm 10 ({\rm stat}) \pm
20({\rm sys}) ~~\rm{GeV} \end{equation} We observe that the above value is
slightly smaller but consistent with our previous estimate
\cite{Cea:2003gp,Cea:2009zc} \begin{equation}\label{old} M_h = 754 \pm 20 ({\rm
stat}) \pm 20({\rm sys}) ~~\rm{GeV} \end{equation} This had been obtained,
within the same L\"uscher-Weisz scheme, but using instead the full
chain \begin{equation} \label{modified} \frac{M_h}{\langle \Phi\rangle} =
\sqrt{\frac{M^2_h }{m^2_h} \frac{m^2_h }{m^2_{\rm R}} \frac{m^2_{\rm
R} }{ \langle \Phi\rangle^2 }}=\sqrt{ \frac{Z_\varphi}{Z_{\rm R}}
\frac{\lambda_{\rm R} }{3}} \end{equation} and thus account for both the
logarithmically divergent $Z_\varphi$ and the non-leading correction
$Z_{\rm R}$.
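The arithmetic behind Eq.(\ref{leading}) is elementary and can be checked in a few lines (our sketch; the only inputs are Eq.(\ref{c2final}) and the leading-order relation, in which the divergent $L$ cancels):

```python
import math

inv_sqrt_c2 = 0.67                 # Eq. (c2final) central value
stat, syst = 0.01, 0.02            # its statistical and systematic errors
vev = 246.0                        # <Phi> in GeV

ratio = (4 * math.pi / 3) * inv_sqrt_c2   # M_h/<Phi> = sqrt(16 pi^2/(9 c2)); L drops out
M_h = ratio * vev
print(f"M_h/<Phi> = {ratio:.2f} +/- {(4 * math.pi / 3) * stat:.2f}(stat) "
      f"+/- {(4 * math.pi / 3) * syst:.2f}(sys)")
print(f"M_h ~ {M_h:.0f} GeV")             # -> M_h ~ 690 GeV
```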
This old estimate Eq.(\ref{old}) can now be compared with our new
determination of $Z_\varphi$ from the direct measurement of the
lattice propagator. To eliminate any explicit dependence on the
lattice mass it is convenient to introduce the traditional divergent
log used to describe the continuum limit of the Ising model
\cite{ZinnJustin:2002ru} \begin{equation} \label{lkappa}L(k)= \frac{1}{2}\ln
\frac{\kappa_c}{\kappa -\kappa_c} \end{equation} and define a set of values \begin{equation}
\label{zkappa} Z_\varphi\equiv \frac{ L(k)}{c_2} \end{equation} at the various
$\kappa$. By using our Eq.(\ref{c2final}) and
$\kappa_c=0.0748474(3)$, all entries needed in Eq.(\ref{modified})
are reported in Table 3. Then, by averaging over the various $\kappa$,
the new determination $M_h\sim 752 \pm 20$ GeV reproduces the value of
Eq.(\ref{old}) obtained in refs.\cite{Cea:2003gp,Cea:2009zc}.
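As a numerical check of Table 3 (ours; the $\lambda_{\rm R}$ and $Z_{\rm R}$ entries are taken as quoted there from ref.\cite{Luscher:1987ek}), the chain of Eqs.(\ref{lkappa}), (\ref{zkappa}) and (\ref{modified}) gives:

```python
import math

kappa_c = 0.0748474
inv_sqrt_c2 = 0.670       # Eq. (c2final)
vev = 246.0               # <Phi> in GeV
rows = [                  # (kappa, lambda_R, Z_R) as in Table 3
    (0.0759, 27.0, 0.929),
    (0.0754, 24.0, 0.932),
    (0.0751, 20.0, 0.938),
    (0.0749, 16.4, 0.944),
]
results = []
for kappa, lam_R, Z_R in rows:
    L_k = 0.5 * math.log(kappa_c / (kappa - kappa_c))          # Eq. (lkappa)
    sqrt_Zphi = math.sqrt(L_k) * inv_sqrt_c2                   # Eq. (zkappa)
    M_h = vev * math.sqrt((sqrt_Zphi**2 / Z_R) * lam_R / 3.0)  # Eq. (modified)
    results.append((round(sqrt_Zphi, 2), M_h))
    print(f"kappa={kappa}: sqrt(Z_phi)={sqrt_Zphi:.2f}  M_h={M_h:.0f} GeV")
```

The central values agree with Table 3 within the rounding of the inputs.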
\begin{table}[htb]
\tbl{We report the original L\"uscher-Weisz entries
\cite{Luscher:1987ek} $\lambda_{\rm R}$ and $Z_{\rm R}$, the
rescaling $\sqrt{Z_\varphi} \equiv \sqrt{\frac{L(k)}{c_2}}$, with
$L(k)$ as in Eq.(\ref{lkappa}) and $1/\sqrt{c_2}=0.670 \pm 0.023$ as
in Eq.(\ref{c2final}), together with the resulting $M_h$ from
Eq.(\ref{modified}). } {\begin{tabular}{ccccc}\toprule $\kappa$ &
$\lambda_{\rm R}$ & $Z_{\rm R}$ &
$\sqrt{Z_\varphi}$ & $M_h$ ({\rm GeV})\\
\hline
0.0759 & 27(2) & 0.929(14) &0.98(3) & 751 (37) \\
\hline
0.0754 &24(2) & 0.932(14) &1.05(4) & 757 (40) \\
\hline 0.0751 & 20(1) & 0.938(12)& 1.13(4) & 742 (33) \\
\hline 0.0749 & 16.4(9) & 0.944(11) &1.28(5) & 758 (34) \\
\botrule
\end{tabular}}
\end{table}
One may object that the new precise $\kappa_c$ is marginally
consistent with the old value $0.07475(7)$ used originally by
L\"uscher-Weisz \cite{Luscher:1987ek} to compute the $\lambda_{\rm R}$'s and
$Z_{\rm R}$'s reported in Table 3. However, $Z_{\rm R}$ is a very
slowly varying, non-leading quantity whose dependence on the
critical point is well within the uncertainties reported in Table 3.
Also, the dependence of $\lambda_{\rm R}$ on the various mass scales
is only logarithmic and possible differences are further flattened
because only $\sqrt{\lambda_{\rm R}}$ enters the determination of
$M_h$ \footnote{With a critical $\kappa_c=0.074848$ very close to
the present most precise determination $\kappa_c=0.0748474(3)$, the
$\lambda_{\rm R}$'s were re-computed by Stevenson
\cite{Stevenson:2005yn}, see his Fig.1 (f). His new central values are
about $\lambda_{\rm R}=$ 30, 25, 21, 16.7 for $\kappa=$ 0.0759,
0.0754, 0.0751 and 0.0749 respectively and thus within the
uncertainties reported in Table 3. In any case, the average $+2.7\%$
increase in the central value of $M_h$ remains within the $\pm 20$
GeV systematic error reported in Eq.(\ref{old}).}.
We thus conclude that, either with the original estimate of
refs.\cite{Cea:2003gp,Cea:2009zc} or with our new determination of
$Z_\varphi$ in Table 3, Eq.(\ref{modified}) remains an
alternative approach to $M_h$ which has its own motivations and
also takes into account the average $+3\%$ effect embodied in
$\sqrt{Z_{\rm R}}\sim 0.97$. In this perspective,
Eqs.(\ref{leading}) and (\ref{old}) could be combined in a final
estimate \begin{equation} \label{final} M_h= 720 \pm 30~{\rm GeV} \end{equation} which
incorporates the various statistical and theoretical uncertainties.
\section{Summary and outlook}
In the first version of the theory, with a classical scalar
potential, the sector inducing SSB was quite distinct from the
remaining self-interactions of the Higgs field induced through its
gauge and Yukawa couplings. In this paper, we have adopted a similar
perspective but, following most recent lattice simulations,
described SSB in $\lambda\Phi^4$ theory as a weak first-order phase
transition.
In the approximation schemes we have considered, there are two
different mass scales. On the one hand, a mass $m_h$ defined by the
quadratic shape of the effective potential at its minimum and
related to the zero-momentum self-energy $\Pi(p=0)$. On the other
hand, a second mass $M_h$, defined by the zero-point energy which is
relevant for vacuum stability and related to a typical average value
$ \langle \Pi(p)\rangle $ at larger $|p|$.
So far, these two scales have always been considered as a single
mass but our results indicate instead the order of magnitude
relation $M^2_h\sim m^2_h L\gg m^2_h$, where $L=\ln (\Lambda_s/M_h)$
and $\Lambda_s$ is the ultraviolet cutoff of the scalar sector which
induces SSB. We have checked this two-scale structure with lattice
simulations of the propagator and of the susceptibility in the 4D
Ising limit of the theory. These confirm that, by approaching the
critical point, $M^2_h$, as extracted from a fit to the
higher-momentum propagator data, increases logarithmically in units
of $m^2_h$, as defined from the inverse zero-momentum susceptibility
$|\Pi(p=0)|=(2\kappa \chi)^{-1}$. At the same time, see
Fig.\ref{susce}, $m_h=(2\kappa \chi)^{-1/2}$ is the right mass to
describe the propagator in the low-momentum region. Therefore, in a
cutoff theory where both $m_h$ and $M_h$ are finite, one should
think of the scalar propagator as a smooth interpolation between
these two masses.
With the aim of extending our description of SSB to more ambitious
frameworks, we have also developed in Sect.2 a RG-analysis which, in
principle, could also be extended to the $\Lambda_s \to \infty$
limit and introduces two invariants ${\cal I }_1$ and ${\cal I }_2$.
The former is related to the vacuum energy ${\cal E }\sim -M^4_h$,
through the relation ${\cal I }_1=M_h$. The latter is the natural
candidate to represent the weak scale $\langle \Phi \rangle\sim$ 246
GeV through the relation ${\cal I }_2= \langle \Phi \rangle$.
Therefore, since the larger mass $M_h$, unlike $m_h$,
remains finite in units of $\langle \Phi \rangle$ in the continuum
limit, one can write a proportionality relation, say $M_h=K \langle
\Phi \rangle$, and extract the constant $K$ from lattice
simulations. As discussed in Sect.5, this leads to our final
estimate $M_h \sim 720 \pm 30 $ GeV which incorporates various
statistical and theoretical uncertainties.
The existence of two masses in our picture of SSB naturally leads
to identifying our lower mass $m_h$ with the present
experimental value of 125 GeV. In this case, we obtain \begin{equation} \frac{
M_h}{125~ {\rm GeV} } \sim \sqrt{ \frac{ L}{c_2} }\end{equation} so that from
\begin{equation} M_h \sim \frac{ 4\pi\langle\Phi\rangle }{3 \sqrt{c_2} } \end{equation} we
find $\sqrt{ L} \sim 8.25 $. When taken at face value, this would
imply a scalar cutoff $\Lambda_s\sim 2.6 \cdot 10 ^{32}$ GeV which
is much larger than the Planck scale. But, as pointed out in the
footnote before Eq.(\ref{I1}), this may be just a cutoff artifact
because to obtain the same physical $M_h$ from Eq.(\ref{basic}) and
Eq.(\ref{I1}) one should use vastly different values of the
ultraviolet cutoff.
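The size of $\sqrt{L}$ and of the corresponding cutoff can be reproduced as follows (our sketch; the exponential makes $\Lambda_s$ extremely sensitive to rounding, so only the order of magnitude is meaningful):

```python
import math

M_h = 690.0          # GeV, Eq. (leading)
m_h = 125.0          # GeV, experimental low-mass state
inv_sqrt_c2 = 0.67   # Eq. (c2final)

sqrt_L = (M_h / m_h) / inv_sqrt_c2    # from M_h/m_h ~ sqrt(L/c2)
Lambda_s = M_h * math.exp(sqrt_L**2)  # L = ln(Lambda_s/M_h)
print(f"sqrt(L) ~ {sqrt_L:.2f}, Lambda_s ~ {Lambda_s:.1e} GeV")  # ~8.24, order 1e32 GeV
```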
Instead, as emphasized in the Introduction, our aim was to give a
cutoff-independent description of symmetry breaking in
$\lambda\Phi^4$ theory, i.e. a description that could also remain
valid in the $\Lambda_s \to \infty$ limit. In this perspective, for
an experimental check of our picture, we should first look at the
{\it cutoff-independent} $M_h-\langle \Phi \rangle$ relation. Since
this would imply the existence of a new scalar resonance around 700
GeV, we will now briefly recall some experimental signals from LHC
that may support this prediction. The $M_h-m_h$ relative magnitude
will be re-discussed afterwards by making use of a physical,
measurable quantity.
Let us start with the 2-photon channel. In 2016, both
ATLAS \cite{Aaboud:2016tru} and CMS \cite{Khachatryan:2016hje} experiments
reported an excess of events in the 2-photon channel that could
indicate a new narrow resonance around 750 GeV. The collisions were
recorded at center-of-mass energies of 8 and 13 TeV and the local
statistical significance of the signal was estimated to be 3.8 sigma
by ATLAS and 3.4 sigma by CMS. Later on, with more statistics, the
two Collaborations reported a considerable reduction in the observed
effect. For ATLAS \cite{Aaboud:2017yyg} the local deviation from the
background-only hypothesis was reduced to 1.8 sigma while for CMS
\cite{Khachatryan:2016yec}, the original 3.4 sigma effect was now lowered
to about 1.9 sigma. Yet, in spite of the reported modest statistical
significance, if one looks at the 2-photon invariant mass
distribution in figure 2a of ATLAS \cite{Aaboud:2017yyg}, an excess of
events at about 730 GeV is clearly visible. Interestingly, this
excess is immediately followed by a strong decrease in the number of
events. This may indicate the characteristic $(M^2-s)$ effect due to
the (negative) interference of a resonance of mass $M$ with a
non-resonating background. These last papers were published in 2017
and the total integrated luminosity was 36 fb$^{-1}$ (12.9 + 19.7 +
3.3) for CMS and 36.7 fb$^{-1}$ for ATLAS. This is just a small
fraction of the full present statistics of about 140 fb$^{-1}$ per
experiment.
Let us now consider the ``golden'' 4-lepton channel at large values
of the invariant mass $m_{4l}>600$ GeV. For the latest paper by
ATLAS \cite{Aaboud:2017rel}, with an integrated luminosity of 36.1 fb$^{-1}$, one
can look at their figure 4a. Again, as in their corresponding
2-photon channel (the mentioned figure 2a of \cite{Aaboud:2017yyg}),
there is a clean excess of events for $m_{4l}=$ 700 GeV where the
signal exceeds the background by about a factor of three. At the
closest points, 680 and 720 GeV, the signal becomes consistent with
the background within 1 sigma but the central values are still
larger than the background by a factor of two. The other paper by
CMS \cite{CMS:2018mmw} refers to an integrated luminosity of 77.4 fb$^{-1}$ but the
results in the region $m_{4l}\sim$ 700 GeV, illustrated in their
Fig.9, cannot be easily interpreted.
However, here, an independent analysis of these data by Cea
\cite{Cea:2018tmm} can greatly help. The extraction of the CMS data
and their combination with the ATLAS data presented in Figures 1 and
2 of ref. \cite{Cea:2018tmm} indicates an evident excess in the
4-lepton final state with a statistical significance of about 5
sigma. The natural interpretation of this excess would be in terms
of a scalar resonance, with a mass of about $700$ GeV, which decays
into two $Z$ bosons and then into leptons. We emphasize that one
does not need to agree with Cea's theoretical model to appreciate
his analysis of the data. Therefore, if this excess is
confirmed, it could represent the second heavier mass scale
discussed in our paper. We emphasize that the statistical sample
used in \cite{Cea:2018tmm} is the whole official set of data
available at present, namely 113.5 fb$^{-1}$ (36.1 for ATLAS + 77.4
for CMS). Again, as for the 2-photon case, this is still far from
the nominal collected luminosity of about 140 fb$^{-1}$ per
experiment.
In this situation, where only a small fraction of the full
statistics has been made available, further speculations on the
characteristics of a hypothetical heavy mass state at 700 GeV may be
premature. Nevertheless, even though this scale is not far from the
usual triviality bounds, the actual situation we expect is very
different. In fact these bounds have been obtained for $M_h\lesssim
\Lambda_s$ while we are now considering a corner of the parameter
space, i.e. large $M_h$ with $M_h \ll \Lambda_s$, that does not
exist in the conventional treatment. For this reason the
phenomenology of such heavy resonance (i.e. its production cross
sections and decay rates) may differ sizeably from the perturbative
expectations. In particular, differently from the low-mass state at
125 GeV, the decay width of the heavy state into longitudinal vector
bosons will be crucial to determine the strength of the scalar
self-interaction. We thus return to the previous issue concerning
the relative magnitude of $M_h$ and $m_h$.
From the experimental ATLAS + CMS papers that we have considered,
the total width of this hypothetical heavy resonance can hardly
exceed 40 GeV. For a mass of 720 GeV, about 30 GeV of this width
(the partial widths into heavy and light fermions, gluons, photons, etc.)
would certainly be there. Thus, the decay width into W's and Z's should be
of the order of 10 GeV, or less. The observation of such a heavy but
narrow resonance would then confirm the scenario of
ref.~\cite{Castorina:2007ng} where, with a heavy Higgs particle,
re-scattering of longitudinal vector bosons was effectively reducing
their large tree-level coupling and thus the decay width in that
channel. In the language of the present paper, this could be
expressed by saying that the tree-level estimate $\Gamma_0(h \to V_L
V_L) \sim M^3_h G_{\rm Fermi}\sim$ 175 GeV becomes the much smaller
value $\Gamma(h \to V_L V_L) \sim M_h (m^2_h G_{\rm Fermi})$ where
$M_h$ is from phase space and $m^2_h G_{\rm Fermi}$ is the reduced strength of
the interaction. If $M_h$ is close to 720 GeV and the mass $m_h$
needed to reduce the width into vector bosons to the order
of 5 GeV is close to 125 GeV, this would then
close the circle and lead to the identification $m_h \sim 125$ GeV.
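The two width estimates above can be put on a common footing with a rough numerical sketch (ours; we assume the standard tree-level $H \to W_L W_L + Z_L Z_L$ prefactor $3G_{\rm Fermi}/(16\sqrt{2}\pi)$, which the order-of-magnitude estimates in the text drop):

```python
import math

G_F = 1.1664e-5     # GeV^-2, Fermi constant
M_h, m_h = 720.0, 125.0
prefactor = 3 / (16 * math.sqrt(2) * math.pi)   # assumed tree-level prefactor

Gamma_tree = prefactor * G_F * M_h**3           # ~ M_h^3 G_Fermi: large tree-level width
Gamma_red = prefactor * G_F * M_h * m_h**2      # ~ M_h (m_h^2 G_Fermi): reduced coupling
print(f"tree-level ~ {Gamma_tree:.0f} GeV, reduced ~ {Gamma_red:.1f} GeV")
```

The first number is of the same order as the quoted 175 GeV, and the second is consistent with a width into vector bosons of the order of 5 GeV.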
Finally, the simultaneous presence of two different mass scales in
the Higgs field propagator would also require some interpolating
form, of the type Eq.(\ref{interpolation}), in the loop corrections.
Since some precision measurements (e.g. the b-quark forward-backward
asymmetry or the value of $\sin^2\theta_w$ from neutral current
experiments \footnote{For a general discussion of the various
quantities and of systematic errors see ref.\cite{Chanowitz:2009dz}.}) still
point to a rather large Higgs particle mass, with about 3-sigma
discrepancies, this could provide an alternative way to improve the
overall quality of a Standard Model fit.
\section{Introduction}
The advent of ubiquitous large-scale distributed systems advocates that tolerance to various kinds of faults and hazards must be included from the very early design of such systems. \emph{Self-stabilization}~\cite{D74j,D00b,T09bc} is a versatile technique that permits forward recovery from any kind of \emph{transient} faults, while \emph{Byzantine Fault-tolerance}~\cite{LSP82j} is traditionally used to mask the effect of a limited number of \emph{malicious} faults. Making distributed systems tolerant to both transient and malicious faults is appealing yet proved difficult~\cite{DW04j,DD05c,NA02c} as impossibility results are expected in many cases.
Two main paths have been followed to study the impact of Byzantine faults in the context of self-stabilization:
\begin{enumerate}
\item \emph{Byzantine fault masking.} In completely connected synchronous systems, one of the most studied problems in the context of self-stabilization with Byzantine faults is that of \emph{clock synchronization}. In~\cite{BDH08c,DW04j}, probabilistic self-stabilizing protocols were proposed for up to one third of Byzantine processors, while in \cite{DH07cb,HDD06c} deterministic solutions tolerate up to one fourth and one third of Byzantine processors, respectively.
\item \emph{Byzantine containment.} For \emph{local} tasks (\emph{i.e.} tasks whose correctness can be checked locally, such as vertex coloring, link coloring, or dining philosophers), the notion of \emph{strict} stabilization was proposed~\cite{NA02c,SOM05c,MT07j}. Strict stabilization guarantees that there exists a \emph{containment radius} outside which the effect of permanent faults is masked. In \cite{NA02c}, the authors show that this Byzantine containment scheme is possible only for \emph{local} tasks. As many problems are not local, it turns out that it is impossible to provide strict stabilization for those.
\end{enumerate}
\paragraph{Our Contribution}
In this paper, we investigate the possibility of Byzantine containment in a self-stabilizing setting for tasks that are global (\emph{i.e.} for which there exists a causality chain of size $r$, where $r$ depends on the size $n$ of the network), and focus on two global problems, namely tree orientation and tree construction. As strict stabilization is impossible with such global tasks, we weaken the containment constraint by limiting the number of times that correct processes can be disturbed by Byzantine ones. Recall that strict stabilization requires that processes beyond the containment radius eventually achieve their desired behavior and are never disturbed by Byzantine processes afterwards. We relax this requirement in the following sense: we allow these correct processes beyond the containment radius to be disturbed by Byzantine processes, but only a limited number of times, even if Byzantine nodes take an infinite number of actions.
The main contribution of this paper is to present new possibility results for containing the influence of unbounded Byzantine behaviors. In more detail, we define the notion of \emph{strong stabilization} as a novel form of containment and introduce \emph{disruption times} to quantify the quality of the containment. The notion of strong stabilization is weaker than strict stabilization but is stronger than the classical notion of self-stabilization (\emph{i.e.} every strongly stabilizing protocol is self-stabilizing, but not necessarily strictly stabilizing). While strict stabilization aims at tolerating an unbounded number of Byzantine processes, we explicitly refer to the number of Byzantine processes to be tolerated. A self-stabilizing protocol is $(t,c,f)$-strongly stabilizing if the subsystem consisting of processes more than $c$ hops away from any Byzantine process is disturbed at most $t$ times in a distributed system with at most $f$ Byzantine processes. Here $c$ denotes the containment radius and $t$ denotes the disruption time.
To demonstrate the possibility and effectiveness of our notion of strong stabilization, we consider \emph{tree construction} and \emph{tree orientation}. It is shown in \cite{NA02c} that there exists no strictly stabilizing protocol with a constant containment radius for these problems. The impossibility result can be extended even when the number of Byzantine processes is upper bounded (by one). In this paper, we provide a $(f\Delta^d, 0, f)$-strongly stabilizing protocol for rooted tree construction, provided that correct processes remain connected, where $n$ (respectively $f$) is the number of processes (respectively Byzantine processes) and $d$ is the diameter of the subsystem consisting of all correct processes. The containment radius of $0$ is obviously optimal. We show that the problem of tree orientation has no constant bound for the containment radius in a tree with two Byzantine processes even when we allow processes beyond the containment radius to be disturbed a finite number of times. Then we consider the case of a single Byzantine process and present a $(\Delta,0,1)$-strongly stabilizing protocol for tree orientation, where $\Delta$ is the maximum degree of processes. The containment radius of $0$ is also optimal. Notice that each process does not need to know the number $f$ of Byzantine processes and that $f$ can be $n-1$ at the worst case. In other words, the algorithm is adaptive in the sense that the disruption times depend on the actual number of Byzantine processes. Both algorithms are also optimal with respect to the number of tolerated Byzantine nodes.
\section{Preliminaries}\label{preliminaries}
\subsection{Distributed System}
A \emph{distributed system} $S=(P,L)$ consists of a set $P=\{v_1,v_2,\ldots,v_n\}$ of processes and a set $L$ of bidirectional communication links (simply called links). A link is an unordered pair of distinct processes. A distributed system $S$ can be regarded as a graph whose vertex set is $P$ and whose link set is $L$, so we use graph terminology to describe a distributed system $S$.
Processes $u$ and $v$ are called \emph{neighbors} if $(u,v)\in L$. The set of neighbors of a process $v$ is denoted by $N_v$, and its cardinality (the \emph{degree} of $v$) is denoted by $\Delta_v (=|N_v|)$. The degree $\Delta$ of a distributed system $S=(P,L)$ is defined as $\Delta = \max \{\Delta_v\ |\ v \in P\}$. We do not assume existence of a unique identifier for each process (that is, the system is anonymous). Instead we assume each process can distinguish its neighbors from each other by locally arranging them in some arbitrary order: the $k$-th neighbor of a process $v$ is denoted by $N_v(k)\ (1 \le k \le \Delta_v)$.
Processes can communicate with their neighbors through \emph{link registers}. For each pair of neighboring processes $u$ and $v$, there are two link registers $r_{u,v}$ and $r_{v,u}$. Message transmission from $u$ to $v$ is realized as follows: $u$ writes a message to link register $r_{u,v}$ and then $v$ reads it from $r_{u,v}$. The link register $r_{u,v}$ is called an \emph{output register} of $u$ and is called an \emph{input register} of $v$. The set of all output (respectively input) registers of $u$ is denoted by $Out_u$ (respectively $In_u$), \emph{i.e.} $Out_u=\{r_{u,v}\ |\ v \in N_u\}$ and $In_u=\{r_{v,u}\ | v \in N_u\}$.
The variables that are maintained by processes denote process states. Similarly, the values of the variables stored in each link register denote the state of the registers. A process may take actions during the execution of the system. An action is simply a function that is executed in an atomic manner by the process. The actions executed by each process are described by a finite set of guarded actions of the form $\langle$guard$\rangle\longrightarrow\langle$statement$\rangle$. Each guard of process $u$ is a boolean expression involving the variables of $u$ and its input registers. Each statement of process $u$ is an update of its state and its output/input registers.
A global state of a distributed system is called a \emph{configuration} and is specified by a product of states of all processes and all link registers. We define $C$ to be the set of all possible configurations of a distributed system $S$. For a process set $R \subseteq P$ and two configurations $\rho$ and $\rho'$, we denote $\rho \stackrel{R}{\mapsto} \rho'$ when $\rho$ changes to $\rho'$ by executing an action of each process in $R$ simultaneously. Notice that $\rho$ and $\rho'$ can be different only in the states of processes in $R$ and the states of their output registers. For completeness of execution semantics, we should clarify the configuration resulting from simultaneous actions of neighboring processes. The action of a process depends only on its state at $\rho$ and the states of its input registers at $\rho$, and the result of the action reflects on the states of the process and its output registers at $\rho '$.
A \emph{schedule} of a distributed system is an infinite sequence of process sets. Let $Q=R^1, R^2, \ldots$ be a schedule, where $R^i \subseteq P$ holds for each $i\ (i \ge 1)$. An infinite sequence of configurations $e=\rho_0,\rho_1,\ldots$ is called an \emph{execution} from an initial configuration $\rho_0$ by a schedule $Q$, if $e$ satisfies $\rho_{i-1} \stackrel{R^i}{\mapsto} \rho_i$ for each $i\ (i \ge 1)$. Process actions are executed atomically, and we also assume that a \emph{distributed daemon} schedules the actions of processes, \emph{i.e.} any subset of processes can simultaneously execute their actions.
The set of all possible executions from $\rho_0\in C$ is denoted by $E_{\rho_0}$. The set of all possible executions is denoted by $E$, that is, $E=\bigcup_{\rho\in C}E_{\rho}$. We consider \emph{asynchronous} distributed systems where we can make no assumption on schedules except that any schedule is \emph{weakly fair}: every process is contained in an infinite number of subsets appearing in any schedule.
In this paper, we consider (permanent) \emph{Byzantine faults}: a Byzantine process (\emph{i.e.} a Byzantine-faulty process) can exhibit arbitrary behavior independently of its protocol. If $v$ is a Byzantine process, $v$ can repeatedly change its variables and its output registers arbitrarily.
In asynchronous distributed systems, time is usually measured by \emph{asynchronous rounds} (simply called \emph{rounds}). Let $e=\rho_0,\rho_1,\ldots$ be an execution by a schedule $Q=R^1,R^2,\ldots$. The first round of $e$ is defined to be the minimum prefix of $e$, $e'=\rho_0,\rho_1,\ldots,\rho_k$, such that $\bigcup_{i=1}^k R^i =P'$ where $P'$ is the set of correct processes of $P$. Round $t\ (t\ge 2)$ is defined recursively, by applying the above definition of the first round to $e''=\rho_k,\rho_{k+1},\ldots$. Intuitively, every correct process has a chance to update its state in every round.
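The round definition can be illustrated with a small sketch (ours; the function name and the example schedule are hypothetical):

```python
def first_round_length(schedule, correct):
    """Length k of the minimal prefix R^1,...,R^k whose union covers all
    correct processes: this prefix is the first asynchronous round."""
    covered = set()
    for k, subset in enumerate(schedule, start=1):
        covered |= subset & correct
        if covered == correct:
            return k
    raise ValueError("no round completed in this (finite) prefix")

# Correct processes {1,2,3}; process 4 is Byzantine and irrelevant to rounds.
schedule = [{1, 4}, {2}, {2, 3}, {1, 2, 3}]
print(first_round_length(schedule, {1, 2, 3}))  # -> 3
```

Round $t$ for $t \ge 2$ would be obtained by applying the same function to the remaining suffix of the schedule.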
\subsection{Self-Stabilizing Protocol Resilient to Byzantine Faults}
Problems considered in this paper are so-called \emph{static problems}, \emph{i.e.} they require the system to find static solutions. For example, the spanning-tree construction problem is a static problem, while the mutual exclusion problem is not. Some static problems can be defined by a \emph{specification predicate} (shortly, specification), $spec(v)$, for each process $v$: a configuration is a desired one (with a solution) if every process satisfies $spec(v)$. A specification $spec(v)$ is a boolean expression on variables of $P_v~(\subseteq P)$ where $P_v$ is the set of processes whose variables appear in $spec(v)$. The variables appearing in the specification are called \emph{output variables} (shortly, \emph{O-variables}). In what follows, we consider a static problem defined by specification $spec(v)$.
A \emph{self-stabilizing protocol} is a protocol that eventually reaches a \emph{legitimate configuration}, where $spec(v)$ holds at every process $v$, regardless of the initial configuration. Once it reaches a legitimate configuration, every process $v$ never changes its O-variables and always satisfies $spec(v)$. From this definition, a self-stabilizing protocol is expected to tolerate any number and any type of transient faults since it can eventually recover from any configuration affected by the transient faults. However, the recovery from any configuration is guaranteed only when every process correctly executes its action from the configuration, \emph{i.e.}, we do not consider existence of permanently faulty processes.
When (permanent) Byzantine processes exist, Byzantine processes may not satisfy $spec(v)$. In addition, correct processes near the Byzantine processes can be influenced and may be unable to satisfy $spec(v)$. Nesterenko and Arora~\cite{NA02c} define a \emph{strictly stabilizing protocol} as a self-stabilizing protocol resilient to an unbounded number of Byzantine processes.
Given an integer $c$, a \emph{$c$-correct process} is a process defined as follows.
\begin{definition}[$c$-correct process]
A process is $c$-correct if it is correct (\emph{i.e.} not Byzantine) and located at distance more than $c$ from any Byzantine process.
\end{definition}
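As an illustration, the set of $c$-correct processes can be computed by a multi-source BFS from the Byzantine processes. Below is a minimal Python sketch; the adjacency-list encoding and the function name are ours, not part of the model:

```python
from collections import deque

def c_correct(adj, byzantine, c):
    """Return the set of c-correct processes: correct processes located
    at distance greater than c from every Byzantine process."""
    dist = {v: None for v in adj}   # distance to the nearest Byzantine process
    queue = deque()
    for b in byzantine:
        dist[b] = 0
        queue.append(b)
    while queue:                    # standard multi-source BFS
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {v for v in adj
            if v not in byzantine and (dist[v] is None or dist[v] > c)}
```

On the chain $a$--$b$--$c$--$d$ with $a$ Byzantine, the $0$-correct processes are $\{b,c,d\}$ and the $1$-correct processes are $\{c,d\}$.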
\begin{definition}[$(c,f)$-containment]\label{def:cfcontained}
A configuration $\rho$ is \emph{$(c,f)$-contained} for specification $spec$ if, given at most $f$ Byzantine processes, in any execution starting from $\rho$, every $c$-correct process $v$ always satisfies $spec(v)$ and never changes its O-variables.
\end{definition}
The parameter $c$ of Definition~\ref{def:cfcontained} refers to the \emph{containment radius} defined in \cite{NA02c}. The parameter $f$ refers explicitly to the number of Byzantine processes, while \cite{NA02c} dealt with an unbounded number of Byzantine faults (that is $f\in\{0\ldots n\}$).
\begin{definition}[$(c,f)$-strict stabilization]\label{def:cfstabilizing}
A protocol is \emph{$(c,f)$-strictly stabilizing} for specification $spec$ if, given at most $f$ Byzantine processes, any execution $e=\rho_0,\rho_1,\ldots$ contains a configuration $\rho_i$ that is $(c,f)$-contained for $spec$.
\end{definition}
An important limitation of the model of \cite{NA02c} is the notion of $r$-\emph{restrictive} specifications. Intuitively, a specification is $r$-restrictive if it prevents combinations of states that belong to two processes $u$ and $v$ that are at least $r$ hops away. An important consequence related to Byzantine tolerance is that the containment radius of protocols solving those specifications is at least $r$. For some problems, such as the spanning tree construction we consider in this paper, $r$ cannot be bounded by a constant. We can show that there exists no $(o(n),1)$-strictly stabilizing protocol for the spanning tree construction.
To circumvent this impossibility result, we define a weaker notion than strict stabilization. Here, the requirement on the containment radius is relaxed, \emph{i.e.} there may exist processes outside the containment radius that invalidate the specification predicate, due to Byzantine actions. However, the impact of Byzantine-triggered actions is bounded: the set of Byzantine processes may only impact the subsystem consisting of processes outside the containment radius a bounded number of times, even if Byzantine processes execute an infinite number of actions.
From the states of $c$-correct processes, \emph{$c$-legitimate configurations} and \emph{$c$-stable configurations} are defined as follows.
\begin{definition}[$c$-legitimate configuration]
A configuration $\rho$ is $c$-legitimate for \emph{spec} if every $c$-correct process $v$ satisfies $spec(v)$.
\end{definition}
\begin{definition}[$c$-stable configuration]
A configuration $\rho$ is $c$-stable if every $c$-correct process never changes the values of its O-variables as long as Byzantine processes make no action.
\end{definition}
Roughly speaking, the aim of self-stabilization is to guarantee that a distributed system eventually reaches a $c$-legitimate and $c$-stable configuration. However, a self-stabilizing system can be disturbed by Byzantine processes after reaching a $c$-legitimate and $c$-stable configuration. The \emph{$c$-disruption} represents the period where $c$-correct processes are disturbed by Byzantine processes and is defined as follows.
\begin{definition}[$c$-disruption]
A portion of execution $e=\rho_0,\rho_1,\ldots,\rho_t$ ($t>1$) is a $c$-disruption if and only if the following holds:
\begin{enumerate}
\item $e$ is finite,
\item $e$ contains at least one action of a $c$-correct process for changing the value of an O-variable,
\item $\rho_0$ is $c$-legitimate for \emph{spec} and $c$-stable, and
\item $\rho_t$ is the first configuration after $\rho_0$ such that $\rho_t$ is $c$-legitimate for \emph{spec} and $c$-stable.
\end{enumerate}
\end{definition}
Now we can define a self-stabilizing protocol such that Byzantine processes may only impact the subsystem consisting of processes outside the containment radius a bounded number of times, even if Byzantine processes execute an infinite number of actions.
\begin{definition}[$(t,k,c,f)$-time contained configuration]
A configuration $\rho_0$ is $(t,k,c,f)$-time contained for \emph{spec} if given at most $f$ Byzantine processes, the following properties are satisfied:
\begin{enumerate}
\item $\rho_0$ is $c$-legitimate for \emph{spec} and $c$-stable,
\item every execution starting from $\rho_0$ contains a $c$-legitimate configuration for \emph{spec} after which the values of all the O-variables of $c$-correct processes remain unchanged (even when Byzantine processes make actions repeatedly and forever),
\item every execution starting from $\rho_0$ contains at most $t$ $c$-disruptions, and
\item every execution starting from $\rho_0$ contains at most $k$ actions of changing the values of O-variables for each $c$-correct process.
\end{enumerate}
\end{definition}
\begin{definition}[$(t,c,f)$-strongly stabilizing protocol]
A protocol $A$ is $(t,c,f)$-strongly stabilizing if and only if starting from any arbitrary configuration, every execution involving at most $f$ Byzantine processes contains a $(t,k,c,f)$-time contained configuration that is reached after at most $l$ rounds. Parameters $l$ and $k$ are respectively the $(t,c,f)$-stabilization time and the $(t,c,f)$-process-disruption time of $A$.
\end{definition}
Note that a $(t,k,c,f)$-time contained configuration is a $(c,f)$-contained configuration when $t=k=0$, and thus, a $(t,k,c,f)$-time contained configuration is a generalization (relaxation) of a $(c,f)$-contained configuration. Thus, a strongly stabilizing protocol is weaker than a strictly stabilizing one (as processes outside the containment radius may take incorrect actions due to Byzantine influence). However, a strongly stabilizing protocol is stronger than a classical self-stabilizing one (that may never meet its specification in the presence of Byzantine processes).
The parameters $t$, $k$ and $c$ are introduced to quantify the strength of fault containment; we do not require each process to know the values of these parameters. Indeed, the protocols proposed in this paper assume no knowledge of these parameters.
There exists some relationship between these parameters as the following proposition states:
\begin{proposition}
If a configuration is $(t,k,c,f)$-time contained for \emph{spec}, then $t\leq nk$.
\end{proposition}
\begin{proof}
Let $\rho_0$ be a $(t,k,c,f)$-time contained configuration for \emph{spec}. Assume that $t>nk$.
If there exists no execution $e=\rho_0,\rho_1,\ldots$ such that $e$ contains at least $nk+1$ $c$-disruptions, then $\rho_0$ is in fact an $(nk,k,c,f)$-time contained configuration for \emph{spec} (and hence, we have $t\leq nk$), which is a contradiction. So, there exists an execution $e=\rho_0,\rho_1,\ldots$ such that $e$ contains at least $nk+1$ $c$-disruptions.
As any $c$-disruption contains at least one action of a $c$-correct process for changing the value of an O-variable by definition, we obtain that $e$ contains at least $nk+1$ actions of $c$-correct processes for changing the values of O-variables. There are at most $n$ $c$-correct processes. So, there exists at least one $c$-correct process which takes at least $k+1$ actions for changing the value of O-variables in $e$. This contradicts the fact that $\rho_0$ is a $(t,k,c,f)$-time contained configuration for \emph{spec}.
\end{proof}
\subsection{Discussion}
There exists an analogy between the respective powers of $(c,f)$-strict stabilization and $(t,c,f)$-strong stabilization on the one hand, and self-stabilization and pseudo-stabilization on the other hand.
A \emph{pseudo-stabilizing} protocol (defined in~\cite{BGM93j}) guarantees that every execution has a suffix that matches the specification, but it could never reach a legitimate configuration from which any possible execution matches the specification. In other words, a pseudo-stabilizing protocol can continue to behave according to the specification, while retaining the possibility of invalidating the specification in the future. A particular schedule can prevent a pseudo-stabilizing protocol from reaching a legitimate configuration for an arbitrarily long time, but cannot prevent it from executing its desired behavior (that is, a behavior satisfying the specification) for an arbitrarily long time. Thus, a pseudo-stabilizing protocol is useful since the desired behavior is eventually reached.
Similarly, every execution of a $(t,c,f)$-strongly stabilizing protocol has a suffix such that every $c$-correct process executes its desired behavior. But, for a $(t,c,f)$-strongly stabilizing protocol, there may exist executions such that the system never reaches a configuration after which Byzantine processes no longer have the ability to disturb the $c$-correct processes: all the $c$-correct processes can continue to execute their desired behavior, but the system (resp. each process) may still be disturbed at most $t$ (resp. $k$) times by Byzantine processes in the future. A notable but subtle difference is that the invalidation of the specification is caused only by the effect of Byzantine processes in a $(t,c,f)$-strongly stabilizing protocol, while the invalidation can be caused by the scheduler in a pseudo-stabilizing protocol.
\section{Strongly-Stabilizing Spanning Tree Construction}
\subsection{Problem Definition}
In this section, we consider only distributed systems in which a given process $r$ is distinguished as the root of the tree.
For \emph{spanning tree construction}, each process $v$ has an O-variable $prnt_v$ to designate a neighbor as its parent. Since processes have no identifiers, $prnt_v$ actually stores $k~(\in \{1,~2,\ldots,~\Delta_v\})$ to designate its $k$-th neighbor as its parent. No neighbor is designated as the parent of $v$ when $prnt_v = 0$ holds. For simplicity, we use $prnt_v=k\ (\in \{1,~2,\ldots,~\Delta_v\})$ and $prnt_v=u$ (where $u$ is the $k$-th neighbor of $v$) interchangeably, and $prnt_v=0$ and $prnt_v=\bot$ interchangeably.
The goal of spanning tree construction is to set $prnt_v$ of every process $v$ to form a rooted spanning tree, where $prnt_r = 0$ should hold for the root process $r$.
We consider Byzantine processes that can behave arbitrarily. The faulty processes can behave as if they were any internal processes of the spanning tree, or even as if they were root processes. The first restriction we make on Byzantine processes is that the root process $r$ is not Byzantine: it can start from an arbitrary state, but behaves correctly according to the protocol. Another restriction on Byzantine processes is that all the correct processes form a connected subsystem; Byzantine processes never partition the system.
Since it is impossible, for example, to distinguish the (real) root $r$ from a faulty process behaving as a root, we have to allow a spanning forest (consisting of multiple trees) to be constructed, where each tree is rooted at a root process, either correct or faulty.
We define the specification predicate $spec(v)$ of the tree construction as follows.
\[spec(v) : \begin{cases}
(prnt_v = 0) \wedge (level_v = 0) \text{ if } v \text{ is the root } r \\
(prnt_v \in \{1,\ldots,~\Delta_v\}) \wedge ((level_v = level_{prnt_v}+1)\vee(prnt_v \text{ is Byzantine})) \text{ otherwise}
\end{cases}\]
Notice that $spec(v)$ requires that a spanning tree is constructed at any $0$-legitimate configuration, when no Byzantine process exists.
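For concreteness, $spec(v)$ can be evaluated mechanically on a configuration. Below is a hypothetical checker; the dictionary encoding of configurations and the function name are ours, not part of the protocol:

```python
def spec_holds(v, root, prnt, level, nbrs, byz):
    """Evaluate spec(v) for the spanning-tree specification above.
    prnt[v] is an index into nbrs[v] (0 meaning 'no parent'), level[v]
    is the level variable, and byz is the set of Byzantine processes."""
    if v == root:
        return prnt[v] == 0 and level[v] == 0
    if not (1 <= prnt[v] <= len(nbrs[v])):
        return False
    p = nbrs[v][prnt[v] - 1]               # the neighbor designated as parent
    # a process pointing to a Byzantine parent satisfies spec regardless of level
    return p in byz or level[v] == level[p] + 1
```

For instance, a correct process pointing to a Byzantine neighbor satisfies its specification whatever level the Byzantine process advertises, as in Figure~\ref{fig:ST}.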
Figure~\ref{fig:ST} shows an example of a $0$-legitimate configuration with Byzantine processes. The arrow attached to each process points to the neighbor designated as its parent.
\begin{figure}[t]
\noindent \begin{centering} \include{TC}
\par\end{centering}
\caption{A legitimate configuration for spanning tree construction (numbers denote the level of processes). $r$ is the (real) root and $b$ is a Byzantine process which acts as a (fake) root.}
\label{fig:ST}
\end{figure}
\subsection{Protocol $ss$-$ST$}
In many self-stabilizing tree construction protocols (see the survey of \cite{G03r}), each process locally checks the consistency of its $level$ variable with respect to those of its neighbors. When it detects an inconsistency, it changes its $prnt$ variable in order to choose a ``better'' neighbor. The notion of ``better'' neighbor is based on the global desired property of the tree (\emph{e.g.} shortest path tree, minimum spanning tree, etc.).
When the system may contain Byzantine processes, they may disturb their neighbors by alternately providing ``better'' and ``worse'' states. The key idea of protocol $ss$-$ST$ to circumvent this kind of perturbation is the following: when a correct process detects a local inconsistency, it does not choose a ``better'' neighbor but chooses another neighbor according to a round-robin order (over the set of its neighbors).
Figure \ref{fig:ssST} presents our strongly-stabilizing spanning tree construction protocol $ss$-$ST$ that can tolerate any number of Byzantine processes other than the root process (provided that the subset of correct processes remains connected). These assumptions are necessary since a Byzantine root or a set of Byzantine processes that disconnects the set of correct processes may disturb the whole tree infinitely often. Then, it is impossible to provide a $(t,c,f)$-strongly stabilizing protocol for any finite integer $t$.
The protocol is composed of three rules. Only the root can execute the first one ({\tt GA0}). This rule sets the root in a legitimate state if it is not the case. Non-root processes may execute the two other rules ({\tt GA1} and {\tt GA2}). The rule {\tt GA1} is executed when the state of a process is not legitimate. Its execution leads the process to choose a new parent and to compute its local state as a function of this new parent. The last rule ({\tt GA2}) is enabled when a process is in a legitimate state but there exists an inconsistency between its variables and its shared registers. The execution of this rule leads the process to compute the consistent values for all its shared registers.
\begin{figure}[t]
\begin{center}
\begin{tabbing}
xxx \= xxx \= xxx \= xxx \= xxx \= xxx \= xxx \= xxx \= xx \=\kill
{\tt constants of process $v$} \\
\> $\Delta_v =$ the degree of $v$; \\
\> $N_v = $ the set of neighbors of $v$; \\
{\tt variables of process $v$} \\
\> $prnt_v\in \{0,1,2,\ldots,\Delta_v\}$: integer; // $prnt_v=0$ if $v$ has no parent,\\
\> \> \> \> \> \> \> \> \> // $prnt_v=k\in \{1,2,\ldots,\Delta_v\}$ if $N_v[k]$ is the parent of $v$. \\
\> $level_v$: integer; // distance from the root.\\
{\tt variables in shared register $r_{v,u}$} \\
\> $r$-$prnt_{v,u}$: boolean; // $r$-$prnt_{v,u}=$\textit{true} iff $u$ is a parent of $v$. \\
\> $r$-$level_{v,u}$: integer; // the value of $level_v$ \\
{\tt predicates} \\
\> $pred_0: prnt_v \neq 0$ {\tt or} $level_v \neq 0$ {\tt or} $\exists w\in N_v,[(r$-$prnt_{v,w}, r$-$level_{v,w})\neq($\textit{false}$, 0)]$\\
\> $pred_1: prnt_v \notin \{1,~2,\ldots,~\Delta_v\}$ {\tt or} $level_v \ne r$-$level_{prnt_v,v}+1$ \\
\> $pred_2: (r$-$prnt_{v,prnt_v}, r$-$level_{v,prnt_v})\neq($\textit{true}$, level_v)$\\
\> \> \> {\tt or} $\exists w\in N_v-\{prnt_v\},[(r$-$prnt_{v,w}, r$-$level_{v,w})\neq($\textit{false}$, level_v)]$ \\
{\tt atomic action of the root $v=r$} // represented in form of guarded action \\
\> {\tt GA0:}$pred_0$ $\longrightarrow$ \\
\> \> \> $prnt_v :=0 ;$\\
\> \> \> $level_v := 0;$\\
\> \> \> {\tt for each} $w \in N_v$ {\tt do} $(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, 0)$;\\
{\tt atomic actions of $v\neq r$} // represented in form of guarded actions \\
\> {\tt GA1:}$pred_1 \longrightarrow$ \\
\> \> \> $prnt_v := next_v(prnt_v)$ {\tt where} $next_v(k)=(k$ {\tt mod} $\Delta_v)+1$;\\
\> \> \> $level_v := r$-$level_{prnt_v,v}+1;$\\
\> \> \> $(r$-$prnt_{v,prnt_v}, r$-$level_{v,prnt_v}):=($\textit{true}$, level_v)$;\\
\> \> \> {\tt for each} $w \in N_v-\{prnt_v\}$ {\tt do} $(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, level_v)$;\\
\> {\tt GA2:}$\neg pred_1$ {\tt and} $pred_2 \longrightarrow$ \\
\> \> \> $(r$-$prnt_{v,prnt_v}, r$-$level_{v,prnt_v}):=($\textit{true}$, level_v)$;\\
\> \> \> {\tt for each} $w \in N_v-\{prnt_v\}$ {\tt do} $(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, level_v)$;\\
\end{tabbing}
\end{center}
\caption{Protocol $ss$-$ST$ (actions of process $v$)}
\label{fig:ssST}
\end{figure}
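To illustrate how the round-robin rule drives convergence, here is a minimal fault-free simulation of the core of $ss$-$ST$ under a fixed-scan (hence weakly fair) scheduler. This is a simplification for illustration only: shared registers are elided (processes read neighbor variables directly), so rule {\tt GA2} is omitted, and the encoding is ours:

```python
def simulate_ss_st(nbrs, root, prnt, level, max_scans=1000):
    """Run the guarded actions GA0/GA1 of ss-ST (no Byzantine process,
    registers elided) until no process is enabled; return (prnt, level).
    prnt[v] is an index into nbrs[v]; 0 means 'no parent'."""
    def parent(v):
        k = prnt[v]
        return nbrs[v][k - 1] if 1 <= k <= len(nbrs[v]) else None

    for _ in range(max_scans):
        progress = False
        for v in nbrs:                      # fixed scan order: weakly fair
            if v == root:
                if (prnt[v], level[v]) != (0, 0):           # rule GA0
                    prnt[v], level[v] = 0, 0
                    progress = True
            else:
                p = parent(v)
                if p is None or level[v] != level[p] + 1:   # rule GA1
                    prnt[v] = (prnt[v] % len(nbrs[v])) + 1  # round-robin next
                    level[v] = level[parent(v)] + 1
                    progress = True
        if not progress:
            return prnt, level              # all processes disabled
    raise RuntimeError("did not converge within max_scans")
```

Starting from an arbitrary state on a small cyclic graph, the simulation reaches a configuration of $\cal{LC}$: the root holds $(0,0)$ and every other process points to a neighbor with a consistent level.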
\subsection{Proof of Strong Stabilization of $ss$-$ST$}
We cannot make any assumption on the initial values of register variables. But we can observe that if an output register of a correct process has values inconsistent with the process variables, then this process is enabled by a rule of $ss$-$ST$. By the fairness assumption, any such process takes a step within finite time.
Once a correct process $v$ executes one of its actions, the variables of its output registers have values consistent with the process variables: $r$-$prnt_{v,prnt_v}=true$, $r$-$prnt_{v,w}=false\ (w \in N_v-\{prnt_v\})$, and $r$-$level_{v,w}=level_v\ (w \in N_v)$ hold.
Consequently, we can assume in the following that all the variables of output registers of every correct process have consistent values with the process variables.
We denote by $\cal{LC}$ the following set of configurations:
\[\begin{array}{rcl}
\cal{LC}&=&\Big\{\rho\in C \Big| (prnt_r=0) \wedge (level_r=0) \wedge\\
&& ~~~~~~~~~~\big(\forall v\in V-(B\cup\{r\}),(prnt_v \in \{1,\ldots,~\Delta_v\}) \wedge (level_v = level_{prnt_v}+1)\big)\Big\}
\end{array}\]
We now focus on the properties of configurations of $\cal{LC}$.
\begin{lemma}\label{lem:closure}
Any configuration of $\cal{LC}$ is $0$-legitimate and $0$-stable.
\end{lemma}
\begin{proof}
Let $\rho$ be a configuration of $\cal{LC}$. By definition of $spec$, it is obvious that $\rho$ is $0$-legitimate.
Note that no correct process is enabled by $ss$-$ST$ in $\rho$. Consequently, no actions of $ss$-$ST$ can be executed and we can deduce that $\rho$ is $0$-stable.
\end{proof}
We can observe that there exist some $0$-legitimate configurations which do not belong to $\cal{LC}$ (for example, the one of Figure \ref{fig:ST}).
\begin{lemma}\label{lem:convergence}
Given at most $n-1$ Byzantine processes, for any initial configuration $\rho_0$ and any execution $e=\rho_0,\rho_1,\ldots$ starting from $\rho_0$, there exists a configuration $\rho_i$ such that $\rho_i\in\cal{LC}$.
\end{lemma}
\begin{proof}
First, note that if all the correct processes are disabled in a configuration $\rho$, then $\rho$ belongs to $\cal{LC}$. Thus, it is sufficient to show that $ss$-$ST$ eventually reaches a configuration $\rho_i$ in any execution (starting from any configuration) such that all the correct processes are disabled in $\rho_i$.
By contradiction, assume that there exists a correct process that is enabled infinitely often. Notice that once the root process $r$ is activated, $r$ becomes and remains disabled forever. From the assumption that all the correct processes form a connected subsystem, there exist two neighboring correct processes $u$ and $v$ such that $u$ becomes and remains disabled and $v$ is enabled infinitely often. Consider the execution after $u$ becomes and remains disabled. Since the daemon is weakly fair, $v$ executes its action infinitely often. Then, eventually $v$ designates $u$ as its parent. It follows that $v$ never becomes enabled again unless $u$ changes $level_u$. Since $u$ never becomes enabled, this leads to a contradiction.
\end{proof}
\begin{lemma}\label{lem:tcf}
Any configuration in $\cal{LC}$ is a $(f\Delta^d,\Delta^d,0,f)$-time contained configuration of the spanning tree construction, where $f$ is the number of Byzantine processes and $d$ is the diameter of the subsystem consisting of all the correct processes.
\end{lemma}
\begin{proof}
Let $\rho_0$ be a configuration of $\cal{LC}$ and $e=\rho_0,\rho_1,\ldots$ be an execution starting from $\rho_0$. First, we show that any $0$-correct process takes at most $\Delta^d$ actions in $e$, where $d$ is the diameter of the subsystem consisting of all the correct processes.
Let $F$ be the set of Byzantine processes in $e$. Consider a subsystem $S'$ consisting of all the correct processes: $S'=(P-F, L')$ where $L'=\{l \in L~|~l \in (P-F) \times (P-F)\}$. We prove by induction on the distance $\delta$ from the root in $S'$ that a correct process $v$ $\delta$ hops away from $r$ in $S'$ executes its action at most $\Delta^\delta$ times in $e$.
\begin{itemize}
\item Induction basis ($\delta=1$):\\
Let $v$ be any correct process neighboring the root $r$. Since $\rho_0$ is a legitimate configuration, $prnt_r=0$ and $level_r=0$ hold at $\rho_0$ and remain unchanged in $e$. Thus, if $prnt_v=r$ and $level_v=1$ hold in a configuration $\sigma$, then $v$ never changes $prnt_v$ or $level_v$ in any execution starting from $\sigma$. Since $prnt_v=r$ and $level_v=1$ hold within the first $\Delta_v -1\leq \Delta$ actions of $v$, $v$ can execute its action at most $\Delta$ times.
\item Induction step (with induction assumption):\\
Let $v$ be any correct process $\delta$ hops away from the root $r$ in $S'$, and $u$ be a correct neighbor of $v$ that is $\delta-1$ hops away from $r$ in $S'$ (this process exists by the assumption that the subgraph of correct processes of $S$ is connected). From the induction assumption, $u$ can execute its action at most $\Delta^{\delta-1}$ times.
Assume that $prnt_v=u$ and $level_v=level_u+1$ hold in a given configuration $\sigma$. We can observe that $v$ is not enabled as long as $u$ does not modify its state. Then, the round-robin order used for parent-pointer modification allows us to deduce that $v$ executes at most $\Delta_v\leq \Delta$ actions between two actions of $u$ (or before the first action of $u$). By the induction assumption, $u$ executes its action at most $\Delta^{\delta-1}$ times. Thus, $v$ can execute its action at most $\Delta + \Delta \times (\Delta^{\delta-1}) = \Delta^\delta$ times.
\end{itemize}
Consequently, any $0$-correct process takes at most $\Delta^d$ actions in $e$.
We say that a Byzantine process $b$ \emph{deceives} a correct neighbor $v$ in the step $\rho\mapsto\rho'$ if the state of $b$ makes the guard of an action of $v$ true in $\rho$ and $v$ executes this action in this step.
As a $0$-disruption can be caused only by an action of a Byzantine process from a legitimate configuration, we can bound the number of $0$-disruptions by counting the total number of times that correct processes are deceived by neighboring Byzantine processes.
If a $0$-correct process $v$ is deceived by a Byzantine neighbor $b$, it necessarily takes $\Delta_v$ actions before being deceived again by $b$ (recall that we use a round-robin policy for $prnt_v$). As any $0$-correct process $v$ takes at most $\Delta^d$ actions in $e$, $v$ can be deceived by a given Byzantine neighbor at most $\Delta^{d-1}$ times. A Byzantine process can have at most $\Delta$ neighboring correct processes and thus can deceive correct processes at most $\Delta \times \Delta^{d-1} = \Delta^d$ times. We have at most $f$ Byzantine processes, so the total number of times that correct processes are deceived by neighboring Byzantine processes is at most $f\Delta^d$.
Hence, the number of $0$-disruptions in $e$ is bounded by $f\Delta^d$. It remains to show that any $0$-disruption has a finite length to prove the result.
By contradiction, assume that there exists an infinite $0$-disruption $e'=\rho_i,\ldots$ in $e$. This implies that, for all $j\geq i$, $\rho_j$ is not in $\cal{LC}$, which contradicts Lemma \ref{lem:convergence}. Then, the result is proved.
\end{proof}
\begin{theorem}[Strong-stabilization]\label{theorem:ss}
Protocol $ss$-$ST$ is a $(f\Delta^d, 0, f)$-strongly stabilizing protocol for the spanning tree construction, where $f$ is the number of Byzantine processes and $d$ is the diameter of the subsystem consisting of all the correct processes.
\end{theorem}
\begin{proof}
From Lemmas \ref{lem:closure} and \ref{lem:tcf}, it is sufficient to show that $ss$-$ST$ eventually reaches a configuration in $\cal{LC}$. Lemma \ref{lem:convergence} allows us to conclude.
\end{proof}
\subsection{Time Complexities}
\begin{proposition}
The $(f\Delta^d,0,f)$-process-disruption time of $ss$-$ST$ is $\Delta^d$ where $d$ is the diameter of the subsystem consisting of all the correct processes.
\end{proposition}
\begin{proof}
This result directly follows from Theorem \ref{theorem:ss} and Lemma \ref{lem:tcf}.
\end{proof}
\begin{proposition}
The $(f\Delta^d,0,f)$-stabilization time of $ss$-$ST$ is $O((n-f)\Delta^d)$ rounds where $f$ is the number of Byzantine processes and $d$ is the diameter of the subsystem consisting of all the correct processes.
\end{proposition}
\begin{proof}
By the construction of the algorithm, any correct process $v$ which has a correct neighbor $u$ takes at most $\Delta$ steps between two actions of $u$.
Given two processes $u$ and $v$, we denote by $d'(u,v)$ the distance between $u$ and $v$ in the subgraph of correct processes of $S$. We are going to prove the following property by induction on $i>0$:
$(P_i)$: any correct process $v$ such that $d'(v,r)=i$ takes at most $2\cdot\overset{i}{\underset{j=1}{\sum}}\Delta^j$ steps in any execution starting from any configuration.
\begin{itemize}
\item Induction basis ($i=1$):\\
Let $v$ be a correct neighbor of the root $r$. By the algorithm, we know that the root $r$ takes at most one step (because $r$ is correct). By the previous remark, we know that $v$ takes at most $\Delta$ steps before and after the action of $r$. Consequently, $v$ takes at most $2\Delta$ steps in any execution starting from any configuration.
\item Induction step ($i>1$ with induction assumption):\\
Let $v$ be a correct process such that $d'(v,r)=i$. Denote by $u$ one neighbor of $v$ such that $d'(u,r)=i-1$ (this process exists by the assumption that the subgraph of correct processes of $S$ is connected).
By the previous remark, we know that $v$ takes at most $\Delta$ steps before the first action of $u$, between two actions of $u$ and after the last action of $u$. By induction assumption, we know that $u$ takes at most $2\cdot\overset{i-1}{\underset{j=1}{\sum}}\Delta^j$ steps. Consequently, $v$ takes at most $A$ actions where:
\[A=\Delta+\left(2\cdot\overset{i-1}{\underset{j=1}{\sum}}\Delta^j\right)\cdot\Delta+\Delta=2\cdot\overset{i}{\underset{j=1}{\sum}}\Delta^j\]
\end{itemize}
Since there are $(n-f)$ correct processes and any correct process $v$ satisfies $d'(v,r)\le d$, we can deduce that the system reaches a legitimate configuration in at most $O((n-f)\Delta^d)$ steps of correct processes.
As any round contains at least one step of a correct process, we obtain the result.
\end{proof}
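The closing arithmetic of the induction step, $A=\Delta+\bigl(2\sum_{j=1}^{i-1}\Delta^j\bigr)\Delta+\Delta=2\sum_{j=1}^{i}\Delta^j$, can be checked numerically; the following is a small illustrative sketch:

```python
def geom_sum(delta, i):
    """Sum of delta**j for j = 1..i, i.e. the bound appearing in (P_i)."""
    return sum(delta ** j for j in range(1, i + 1))

# Check A = Delta + (2 * sum_{j=1}^{i-1} Delta^j) * Delta + Delta
#         = 2 * sum_{j=1}^{i} Delta^j  over a range of parameters.
for delta in range(2, 7):
    for i in range(2, 8):
        A = delta + 2 * geom_sum(delta, i - 1) * delta + delta
        assert A == 2 * geom_sum(delta, i)
```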
\section{Strongly-Stabilizing Tree Orientation}
\subsection{Problem Definition}
In this section, we consider only \emph{tree systems}, \emph{i.e.} distributed systems containing no cycles. We assume that all processes in a tree system are identical and thus no process is distinguished as a root.
Informally, \emph{tree orientation} consists in transforming a tree system (with no root) into a rooted tree system. Each process $v$ has an O-variable $prnt_v$ to designate a neighbor as its parent. Since processes have no identifiers, $prnt_v$ actually stores $k~(\in \{1,~2,\ldots,~\Delta_v\})$ to designate its $k$-th neighbor as its parent. But for simplicity, we use $prnt_v=k$ and $prnt_v=u$ (where $u$ is the $k$-th neighbor of $v$) interchangeably.
The goal of tree orientation is to set $prnt_v$ of every process $v$ to form a rooted tree. However, it is impossible to choose a single process as the root because of impossibility of symmetry breaking. Thus, instead of a single root process, a single \emph{root link} is determined as the root: link $(u, v)$ is the root link when processes $u$ and $v$ designate each other as their parents (Fig.~\ref{fig:TO}(a)). From any process $w$, the root link can be reached by following the neighbors designated by the variables $prnt$.
When a tree system $S$ has a Byzantine process (say $w$), $w$ can prevent communication between subtrees of $S-\{w\}$\footnote{For a process subset $P'~(\subseteq P)$, $S-P'$ denotes a distributed system obtained by removing processes in $P'$ and their incident links.}. Thus, we have to allow each of the subtrees to form a rooted tree independently. We define the specification predicate $spec(v)$ of the tree orientation as follows.
\begin{center}
$spec(v):\forall u~(\in N_v) [(prnt_v=u)\vee (prnt_u=v)\vee (u$ is Byzantine faulty)].
\end{center}
Note that the tree topology, the specification and the uniqueness of $prnt_v$ (for any process $v$) imply that, for any $0$-legitimate configuration, there is at most one root link in any connected component of correct processes. Hence, in a fault-free system, there exists exactly one root link in any $0$-legitimate configuration.
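As an illustration, the orientation specification and the root-link count can be checked mechanically. A minimal sketch follows, in which, for readability, $prnt_v$ stores the parent process itself rather than its index (a simplification of the model; the encoding is ours):

```python
def orientation_spec(v, prnt, nbrs, byz):
    """spec(v) for tree orientation: for every neighbor u of v, either
    v designates u, or u designates v, or u is Byzantine."""
    return all(prnt[v] == u or prnt[u] == v or u in byz for u in nbrs[v])

def root_links(prnt, nbrs, byz):
    """Links (u, v) between correct processes designating each other."""
    return {frozenset((u, v)) for u in nbrs for v in nbrs[u]
            if u not in byz and v not in byz
            and prnt[u] == v and prnt[v] == u}
```

On a fault-free chain $a$--$b$--$c$--$d$ oriented toward the link $(a,b)$, every process satisfies its specification and exactly one root link exists, as stated above.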
Figure~\ref{fig:TO} shows examples of $0$-legitimate configurations (a) with no Byzantine process and (b) with a single Byzantine process $w$. The arrow attached to each process points to the neighbor designated as its parent. Notice that, from Fig.~\ref{fig:TO}(b), subtrees consisting of correct processes are classified into two categories: one is the case of forming a rooted tree with a root link in the subtree ($T_1$ in Fig.~\ref{fig:TO}(b)), and the other is the case of forming a rooted tree with a root process, where the root process is a neighbor of a Byzantine process and designates the Byzantine process as its parent ($T_2$ in Fig.~\ref{fig:TO}(b)).
\begin{figure}[t]
\begin{center}
\includegraphics{TO.ps}
\end{center}
\caption{Tree orientation}
\label{fig:TO}
\end{figure}
\subsection{Impossibility for Two Byzantine Processes}
Tree orientation seems to be a very simple task. Actually, for tree orientation in fault-free systems, we can design a self-stabilizing protocol that chooses a link incident to a center process\footnote{A process $v$ is a center when $v$ has the minimum eccentricity, where the eccentricity of a process is its largest distance to a leaf. It is known that a tree has either a single center or two neighboring centers.} as the root link: in the case that the system has a single center, the center can choose a link incident to it, and in the case that the system has two neighboring centers, the link between the centers becomes the root link. However, tree orientation becomes impossible if we have Byzantine processes. By the impossibility results of \cite{NA02c}, we can show that tree orientation has no $(o(n), 1)$-strictly stabilizing protocol; \emph{i.e.} the Byzantine influence cannot be contained in the sense of ``strict stabilization'', even if only a single Byzantine process is allowed.
An interesting question is whether the Byzantine influence can be contained in the weaker sense of ``strong stabilization''. The following theorem gives a negative answer to the question: if we have two Byzantine processes, bounding the number of disruptions is impossible. We prove the impossibility under a more restricted scheduler, called the \emph{central daemon}, which disallows two or more processes from making actions at the same time. Notice that impossibility results under the central daemon are stronger than those under the distributed daemon in the sense that impossibility results under the central daemon also hold for the distributed daemon.
\begin{theorem}\label{th:imposs}
Even under the central daemon, there exists no deterministic $(t, o(n), 2)$-strongly stabilizing protocol for tree orientation where $t$ is any (finite) integer and $n$ is the number of processes.
\end{theorem}
\begin{proof}
Let $S=(P,L)$ be a chain (which is a special case of a tree system) of $n$ processes: $P=\{v_1,~v_2,\ldots,v_n\}$ and $L=\{(v_i,~v_{i+1})\ |\ 1 \le i \le n-1\}$.
For purpose of contradiction, assume that there exists a $(t, o(n), 2)$-strongly stabilizing protocol $A$ for some integer $t$. In the following, we show, for $S$ with Byzantine processes $v_1$ and $v_n$, that $A$ has an execution $e$ containing an infinite number of $o(n)$-disruptions. This contradicts the assumption that $A$ is a $(t, o(n), 2)$-strongly stabilizing protocol.
In $S$ with Byzantine processes $v_1$ and $v_n$, $A$ eventually reaches a configuration $\rho_1$ that is $o(n)$-legitimate for \emph{spec} and $o(n)$-stable by definition of a $(t, o(n), 2)$-strongly stabilizing protocol. This execution to $\rho_1$ constitutes the prefix of $e$.
To construct $e$ after $\rho_1$, consider another chain $S'=(P',L')$ of $3n$ processes and an execution of $A$ on $S'$, where $P'=\{u_1,~u_2,\ldots,u_{3n}\}$ and $L'=\{(u_i,~u_{i+1})\ |\ 1 \le i \le 3n-1\}$. We consider the initial configuration $\rho'_1$ of $S'$ that is obtained by concatenating three copies (say $S'_1, S'_2$ and $S'_3$) of $S$ in $\rho_1$, where only the central copy $S'_2$ is reversed right-and-left (Fig.~\ref{fig:imp_proof}). More formally, the states of $u_i$ and of $u_{2n+i}$ in $\rho'_1$ are the same as the one of $v_i$ in $\rho_1$ for any $i\in\{1,\ldots,n\}$. Moreover, for any $i\in\{1,\ldots,n\}$, the state of $u_{n+i}$ in $\rho'_1$ is the same as the one of $v_i$ in $\rho_1$ with the following modification: if $prnt_{v_i}=v_{i-1}$ (respectively $prnt_{v_i}=v_{i+1}$) in $\rho_1$, then $prnt_{u_{n+i}}=u_{n+i+1}$ (respectively $prnt_{u_{n+i}}=u_{n+i-1}$) in $\rho'_1$. For example, if $w$ denotes a center process of $S$ (\emph{i.e.} $w=v_{\lceil n/2 \rceil}$), then $w$ is copied to $w'_1=u_{\lceil n/2 \rceil}$, $w'_2=u_{2n+1-\lceil n/2 \rceil}$ and $w'_3=u_{2n+\lceil n/2 \rceil}$, but only $prnt_{w'_2}$ designates the neighbor in the direction opposite to that of $prnt_{w'_1}$ and $prnt_{w'_3}$. From the configuration $\rho'_1$, protocol $A$ eventually reaches a legitimate configuration $\rho''_1$ of $S'$ when $S'$ has no Byzantine process (since a strongly stabilizing protocol is self-stabilizing in a fault-free system). In the execution from $\rho'_1$ to $\rho''_1$, at least one of the $prnt$ variables of $w'_1, w'_2$ and $w'_3$ has to change (otherwise, it is impossible to guarantee the uniqueness of the root link in $\rho''_1$). Assume that $w'_i$ changes $prnt_{w'_i}$.
Now, we construct the execution $e$ on $S$ after $\rho_1$. The main idea of this proof is to construct an execution on $S$ that is indistinguishable (for correct processes) from one of $S'$, because the Byzantine processes of $S$ behave as correct processes of $S'$. Since $v_1$ and $v_n$ are Byzantine processes in $S$, $v_1$ and $v_n$ can simulate the behavior of the end processes of $S'_i$ (\emph{i.e.} $u_{(i-1)n+1}$ and $u_{in}$). Thus, $S$ can behave in the same way as $S'_i$ does from $\rho'_1$ to $\rho''_1$. Recall that process $w'_i$ modifies its pointer in the execution of $S'_i$ from $\rho'_1$ to $\rho''_1$. Consequently, we can construct the execution that constitutes the second part of $e$, where $prnt_w$ changes at least once. Letting the resulting configuration be $\rho_2$ (which coincides with the projection of $\rho''_1$ on $S'_i$), $\rho_2$ is clearly $o(n)$-legitimate for \emph{spec} and $o(n)$-stable. Thus, the second part of $e$ contains at least one $o(n)$-disruption.
By repeating the argument, we can construct the execution $e$ of $A$ on $S$ that contains an infinite number of $o(n)$-disruptions.
\end{proof}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{imp_proof.ps}
\end{center}
\caption{Construction of execution where $w$ of $S$ changes its parent infinitely often.}
\label{fig:imp_proof}
\end{figure}
\subsection{A Strongly Stabilizing Protocol for a Single Byzantine Process}
\subsubsection{Protocol $ss$-$TO$}
In the previous subsection, we proved that there is no strongly stabilizing protocol for tree orientation if two Byzantine processes exist. In this subsection, we consider the case with at most a single Byzantine process, and present a $(\Delta,0,1)$-strongly stabilizing tree orientation protocol $ss$-$TO$. Note that we consider the distributed daemon for this possibility result.
In a fault-free tree system, tree orientation can be easily achieved by finding a center process. A simple strategy for finding the center process is that each process $v$ informs each neighbor $u$ of the maximum distance to a leaf from $u$ through $v$. The distances are computed and become fixed from smaller ones up. When a tree system contains a single Byzantine process, however, this strategy cannot prevent the perturbation caused by the wrong distances the Byzantine process provides: by alternately reporting longer and shorter distances than the correct one, the Byzantine process can repeatedly pull the chosen center closer and push it farther.
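To see how lying about distances moves the chosen center, consider the following toy computation (an illustration we add; the parameter `lie_left` is hypothetical and models the Byzantine overstatement, it is not part of the protocol). On a chain, the process minimizing the reported eccentricity is selected, and an overstated depth behind the left end pulls the selection toward that end:

```python
def chosen_center(n, lie_left=0):
    """Chain 0..n-1; the left end may (Byzantine-ly) overstate the depth behind it.

    Each process i computes its eccentricity from the reported leaf distances:
    the honest distance to the right leaf is (n-1)-i, while the distance toward
    the left leaf is inflated by lie_left. The process with minimum reported
    eccentricity is chosen as center.
    """
    ecc = [max(i + lie_left, (n - 1) - i) for i in range(n)]
    return min(range(n), key=lambda i: ecc[i])

print(chosen_center(7))              # honest reports: the true center, process 3
print(chosen_center(7, lie_left=2))  # an overstatement of 2 pulls the center to process 2
```

Reporting alternately longer and shorter distances thus drags the chosen center back and forth; this is exactly the perturbation that $ss$-$TO$ is designed to rule out.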
The key idea of protocol $ss$-$TO$ to circumvent the perturbation is to restrict the Byzantine influence to a one-sided effect: the Byzantine process can pull the chosen root link closer but cannot push it farther. This is achieved using a non-decreasing variable $level_v$ as follows: when a process $v$ finds a neighbor $u$ with a higher level, $v$ chooses $u$ as its parent and copies the level value from $u$. This allows the Byzantine process (say $z$) to make its neighbors choose $z$ as their parents by increasing its own level. However, $z$ cannot make its neighbors change their parents to other processes by decreasing its own level. Thus, the effect the Byzantine process can produce is one-sided.
Protocol $ss$-$TO$ is presented in Fig.~\ref{fig:ssTO}. For simplicity, we regard constant $N_v$ as denoting the neighbors of $v$ and regard variable $prnt_v$ as storing a parent of $v$. Notice that they should actually be implemented using the ordinal numbers of neighbors that $v$ locally assigns.
The protocol is composed of three rules. The first one ({\tt GA1}) is enabled when a process has a neighbor with a strictly greater level. When the rule is executed, the process chooses this neighbor as its parent and computes its new state as a function of this neighbor's state. Rule {\tt GA2} is enabled when a process $v$ has a neighbor $u$ (different from its current parent) with the same level such that $v$ is not the parent of $u$ in the current oriented tree. Then, $v$ chooses $u$ as its parent, increments its level by one, and refreshes its shared registers. The last rule ({\tt GA3}) is enabled for a process when there is an inconsistency between its variables and its shared registers. The execution of this rule leads the process to write consistent values into all its shared registers.
\begin{figure}[h!]
\begin{center}
\begin{tabbing}
xxx \= xxx \= xxx \= xxx \= xxx \= \kill
{\tt constants of process $v$} \\
\> $\Delta_v =$ the degree of $v$; \\
\> $N_v = $ the set of neighbors of $v$; \\
{\tt variables of process $v$} \\
\> $prnt_v$: a neighbor of $v$; // $prnt_v=u$ if $u$ is a parent of $v$. \\
\> $level_v$: integer; \\
{\tt variables in shared register $r_{v,u}$} \\
\> $r$-$prnt_{v,u}$: boolean;
// $r$-$prnt_{v,u}=$\textit{true} iff $u$ is a parent of $v$. \\
\> $r$-$level_{v,u}$: integer; // the value of $level_v$ \\
{\tt predicates} \\
\> $pred_1: \exists u \in N_v [r$-$level_{u,v} > level_v]$ \\
\> $pred_2: \exists u \in N_v-\{prnt_v\} [(r$-$level_{u,v} = level_v)\wedge
(r$-$prnt_{u,v}=$\textit{false}$)]$ \\
\> $pred_3: ((r$-$prnt_{v,prnt_v},r$-$level_{v,prnt_v})\neq ($\textit{true}$,level_v))\vee$\\
\> \> \> $(\exists u\in N_v-\{prnt_v\},(r$-$prnt_{v,u},r$-$level_{v,u})\neq ($\textit{false}$,level_v))$\\
{\tt atomic actions} // represented in form of guarded actions \\
\> {\tt GA1:}$pred_1\ \longrightarrow$ \\
\> \> \> Let $u$ be a neighbor of $v$ s.t.
$r$-$level_{u,v}=\max_{w \in N_v}r$-$level_{w,v}$;\\
\> \> \> $prnt_v :=u;\ level_v := r$-$level_{u,v};$\\
\> \> \> $(r$-$prnt_{v,u}, r$-$level_{v,u}) := ($\textit{true}$, level_v)$;\\
\> \> \> {\tt for each} $w \in N_v-\{u\}$ {\tt do}
$(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, level_v)$;\\
\> {\tt GA2:}$\neg pred_1\wedge pred_2\ \longrightarrow$ \\
\> \> \> Let $u$ be a neighbor of $v$ s.t.
$(r$-$level_{u,v} = level_v)\wedge (r$-$prnt_{u,v}=$\textit{false}$)$;\\
\> \> \> $prnt_v :=u;\ level_v := level_v+1;$\\
\> \> \> $(r$-$prnt_{v,u}, r$-$level_{v,u}) := ($\textit{true}$, level_v)$;\\
\> \> \> {\tt for each} $w \in N_v-\{u\}$ {\tt do}
$(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, level_v)$;\\
\> {\tt GA3:}$\neg pred_1\wedge \neg pred_2\wedge pred_3 \longrightarrow$ \\
\> \> \> $(r$-$prnt_{v,prnt_v}, r$-$level_{v,prnt_v}) := ($\textit{true}$, level_v)$;\\
\> \> \> {\tt for each} $w \in N_v-\{prnt_v\}$ {\tt do}
$(r$-$prnt_{v,w}, r$-$level_{v,w}):=($\textit{false}$, level_v)$;
\end{tabbing}
\end{center}
\caption{Protocol $ss$-$TO$ (actions of process $v$)}
\label{fig:ssTO}
\end{figure}
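As a sanity check on the fault-free behaviour of the rules, the following Python simulation (our own sketch: shared registers are collapsed into direct reads of neighbor states, so {\tt GA3} is implicit, and the central daemon picks a uniformly random enabled process) runs {\tt GA1} and {\tt GA2} on a chain until no rule is enabled:

```python
import random

def simulate_ss_to(n, seed=0):
    """Fault-free toy run of ss-TO's GA1/GA2 on a chain of n processes."""
    rng = random.Random(seed)
    neigh = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
    prnt = {v: rng.choice(neigh[v]) for v in range(n)}   # arbitrary initial parents
    level = {v: rng.randrange(3) for v in range(n)}      # arbitrary initial levels

    def enabled(v):
        if any(level[u] > level[v] for u in neigh[v]):
            return "GA1"
        if any(u != prnt[v] and level[u] == level[v] and prnt[u] != v
               for u in neigh[v]):
            return "GA2"
        return None

    for _ in range(100_000):  # safety bound; convergence takes O(d) rounds
        cand = [v for v in range(n) if enabled(v)]
        if not cand:
            return prnt, level
        v = rng.choice(cand)  # central daemon: one process acts per step
        if enabled(v) == "GA1":
            u = max(neigh[v], key=lambda w: level[w])
            prnt[v], level[v] = u, level[u]
        else:  # GA2
            u = next(w for w in neigh[v]
                     if w != prnt[v] and level[w] == level[v] and prnt[w] != v)
            prnt[v], level[v] = u, level[v] + 1
    raise RuntimeError("did not converge")

prnt, level = simulate_ss_to(7)
root_links = [(v, prnt[v]) for v in prnt if prnt[prnt[v]] == v]
assert len(root_links) == 2           # a unique root link, seen from both endpoints
assert len(set(level.values())) == 1  # all levels equal: a configuration of LC_0
```

In every terminal configuration the two checks above hold: {\tt GA1} being disabled everywhere forces all levels to be equal, and {\tt GA2} being disabled then forbids any two neighbors from pointing away from each other, leaving exactly one root link.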
\subsubsection{Closure of Legitimate Configurations of $ss$-$TO$}
We refine legitimate configurations of protocol $ss$-$TO$ into several sets of configurations and show their properties. We cannot make any assumption on the initial values of register variables, but once a correct process $v$ executes its action, the variables of its output registers have values consistent with the process variables: $r$-$prnt_{v,prnt_v}=true$, $r$-$prnt_{v,w}=false\ (w \in N_v-\{prnt_v\})$, and $r$-$level_{v,w}=level_v\ (w \in N_v)$ hold. In the following, we assume that all the variables of the output registers of every correct process have consistent values.
First we consider the fault-free case.
\begin{definition}[$\mathcal{LC}_0$]\label{def:legit0}
In a fault-free tree, we define the set of configurations $\mathcal{LC}_0$ as the set of configurations such that: (a) $spec(v)$ holds for every process $v$ and (b) $level_u=level_v$ holds for any processes $u$ and $v$.
\end{definition}
In any configuration of $\mathcal{LC}_0$, variables $prnt_v$ of all processes form a rooted tree with a root link as Fig.~\ref{fig:TO}(a), and all variables $level_v$ have the same value.
\begin{lemma}\label{lem:legit0}
In a fault-free tree, once protocol $ss$-$TO$ reaches a configuration $\rho$ in $\mathcal{LC}_0$, it remains at $\rho$.
\end{lemma}
\begin{proof}
Consider any configuration $\rho$ in $\mathcal{LC}_0$. Since all variables $level_v$ have the same value, the guard of {\tt GA1} cannot be true in $\rho$. Since $spec(v)$ holds at every process in $\rho$, there exist no neighboring processes $u$ and $v$ such that $prnt_u \ne v$ and $prnt_v \ne u$ hold. It follows that the guard of {\tt GA2} cannot be true in $\rho$. Once each process executes an action, all the variables of its output registers are consistent with its local variables, and thus, the guard of {\tt GA3} cannot be true.
\end{proof}
For the case with a single Byzantine process, we define the following sets of configurations.
\begin{definition}[$\mathcal{LC}_1$]\label{def:legit1}
Let $z$ be the single Byzantine process in a tree system. A configuration is in the set $\mathcal{LC}_1$ if every subtree (or a connected component) of $S-\{z\}$ satisfies either the following (C1) or (C2).
\begin{enumerate}
\renewcommand{\labelenumi}{(C\arabic{enumi})}
\item
(a) $spec(u)$ holds for every correct process $u$,
(b) $prnt_v=z$ holds for the neighbor $v$ of $z$, and
(c) $level_w \ge level_x$ holds for any neighboring correct processes $w$ and $x$ where $w$ is nearer than $x$ to $z$.
\item
(d) $spec(u)$ holds for every correct process $u$, and
(e) $level_v=level_w$ holds for any correct processes $v$ and $w$.
\end{enumerate}
\end{definition}
\begin{definition}[$\mathcal{LC}_2$]\label{def:legit2}
Let $z$ be the single Byzantine process in a tree system. A configuration is in the set $\mathcal{LC}_2$ if every subtree (or a connected component) of $S-\{z\}$ satisfies the condition (C1) of Definition \ref{def:legit1}.
\end{definition}
In any configuration of $\mathcal{LC}_2$, every subtree forms a rooted tree whose root process neighbors the Byzantine process $z$. For configurations of $\mathcal{LC}_2$, the following lemma holds.
\begin{lemma}\label{lem:st-legit}
Once protocol $ss$-$TO$ reaches a configuration $\rho$ of $\mathcal{LC}_2$, it remains in configurations of $\mathcal{LC}_2$ and, thus, no correct process $u$ changes $prnt_u$ afterward. That is, any configuration of $\mathcal{LC}_2$ is $(0,1)$-contained.
\end{lemma}
\begin{proof}
Consider any execution $e$ starting from a configuration $\rho$ of $\mathcal{LC}_2$. In $\rho$, every subtree of $S-\{z\}$ forms a rooted tree whose root process neighbors the Byzantine process $z$. Note that, as long as no correct process $u$ changes $prnt_u$ in $e$, action {\tt GA2} cannot be executed at any correct process. On the other hand, if a process $u$ executes action {\tt GA1} in $e$, then $level_{prnt_u} \ge level_u$ necessarily holds immediately after this action. Consequently, if we assume that no correct process $u$ changes $prnt_u$ in $e$ (by execution of {\tt GA1}), then every configuration of $e$ is in $\mathcal{LC}_2$. To prove the lemma, it remains to show that $e$ contains no activation of {\tt GA1} by a correct process that changes its parent. In the following, we show that no correct process $u$ ever changes $prnt_u$ in $e$.
For the purpose of contradiction, assume that a correct process $u$ is the first among all correct processes to change its parent variable. Notice that every correct process $v$ may execute {\tt GA1} or {\tt GA3} but cannot change the value of $prnt_v$ before $u$ changes $prnt_u$. Also notice that $u$ changes $prnt_u$ to its neighbor (say $w$) by execution of {\tt GA1}, and $w$ is a correct process. From the guard of {\tt GA1}, $level_w > level_u$ holds immediately before $u$ changes $prnt_u$. On the other hand, since $w$ is a correct process, $w$ never changes $prnt_w$ before $u$ does. This implies that $prnt_w = u$ holds immediately before $u$ changes $prnt_u$, and thus $level_u \ge level_w$ holds. This is a contradiction.
\end{proof}
Notice that a correct process $u$ may change $level_u$ by execution of {\tt GA1} even after reaching a configuration of $\mathcal{LC}_2$. For example, when the Byzantine process $z$ increments $level_z$ infinitely often, every correct process $u$ may also increment $level_u$ infinitely often.
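The remark above can be observed concretely. In the following toy round-based simulation (our own sketch: only {\tt GA1} is modelled, with direct neighbor reads; process $0$ plays the Byzantine process $z$ and inflates its level once per round), levels grow without bound while every correct parent pointer stays fixed, matching Lemma~\ref{lem:st-legit}:

```python
import random

def byz_level_inflation(n=5, rounds=30, seed=1):
    """Chain 0..n-1 starting in a (C1)-like configuration oriented toward z = 0."""
    rng = random.Random(seed)
    neigh = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
    prnt = {v: v - 1 for v in range(1, n)}  # every correct process points toward 0
    level = {v: 0 for v in range(n)}
    parents_before = dict(prnt)
    for _ in range(rounds):
        level[0] += 1                       # Byzantine action: inflate the level
        order = list(range(1, n))
        rng.shuffle(order)
        for v in order:                     # each correct process applies GA1 if enabled
            u = max(neigh[v], key=lambda w: level[w])
            if level[u] > level[v]:
                prnt[v], level[v] = u, level[u]
    return parents_before == prnt, level

stable, level = byz_level_inflation()
assert stable            # no correct process ever changed its parent
assert level[0] == 30    # yet the Byzantine level grew by one per round
```

The Byzantine process can thus force arbitrarily many {\tt GA1} activations, but since $level$ values only flow away from $z$ and parents already point toward $z$, none of these activations is a $prnt$ change; this is the one-sided effect exploited by $ss$-$TO$.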
\begin{lemma}\label{lem:legit1}
Any configuration $\rho$ in $\mathcal{LC}_1$ is $(\Delta_z,1,0,1)$-time contained where $z$ is the Byzantine process.
\end{lemma}
\begin{proof}
Let $\rho$ be a configuration of $\mathcal{LC}_1$. Consider any execution $e$ starting from $\rho$. By the same discussion as the proof of Lemma~\ref{lem:st-legit}, we can show that any subtree satisfying (C1) at $\rho$ always keeps satisfying the condition and no correct process $u$ in the subtree changes $prnt_u$ afterward.
Consider a subtree satisfying (C2) at $\rho$ and let $y$ be the neighbor of the Byzantine process $z$ in the subtree. From the fact that the variables $prnt_u$ form a rooted tree with a root link and all variables $level_u$ have the same value in the subtree at $\rho$, no process $u$ in the subtree changes $prnt_u$ or $level_u$ unless $y$ executes $prnt_y:=z$ in $e$. When $prnt_y:=z$ is executed, $level_y$ becomes larger than $level_u$ of any other process $u$ in the subtree. Since the value of variable $level_u$ of each correct process $u$ is non-decreasing, every correct neighbor (say $v$) of $y$ eventually executes $prnt_v:=y$ and $level_v:=level_y$ (by {\tt GA1}). By repeating the argument, we can show that the subtree eventually reaches a configuration satisfying (C1) in $O(d')$ rounds, where $d'$ is the diameter of the subtree. It is clear that any configuration between the execution of $prnt_y:=z$ and the first configuration satisfying (C1) is not in $\mathcal{LC}_1$, and that each process $u$ changes $prnt_u$ at most once during this transition.
Therefore, any execution $e$ starting from $\rho$ contains at most $\Delta_z$ $0$-disruptions, in each of which every correct process $u$ changes $prnt_u$ at most once.
\end{proof}
\subsubsection{Convergence of $ss$-$TO$}
We first show convergence of protocol $ss$-$TO$ to configurations of $\mathcal{LC}_0$ in a \emph{fault-free} case.
\begin{lemma}\label{lem:conv0}
In a fault-free tree system, protocol $ss$-$TO$ eventually reaches a configuration of $\mathcal{LC}_0$ from any initial configuration.
\end{lemma}
\begin{proof}
We prove the convergence to a configuration of $\mathcal{LC}_0$ by induction on the number of processes $n$. It is clear that protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_0$ from any initial configuration in case of $n=2$.
Now assume that protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_0$ from any initial configuration in case that the number of processes is $n-1$ (inductive hypothesis), and consider the case that the number of processes is $n$.
Let $u$ be any leaf process and $v$ be its only neighbor and $\rho$ be an arbitrary configuration.
First, we show that any execution $e$ starting from $\rho$ reaches, in a finite time, a configuration in which $level_v \ge level_u$ holds. If this condition holds in $\rho$, we have the result. Otherwise ($level_v<level_u$), $v$ is continuously enabled by {\tt GA1} until the condition becomes true. Hence, within at most one round, either the condition has become true or $v$ has executed {\tt GA1}, which makes it true. In both cases, we obtain that $level_v \ge level_u$ holds after at most one round.
After that, process $u$ can execute only guarded action {\tt GA1} or {\tt GA3}, since $prnt_u=v$ always holds. Thus, after the first round completes, $prnt_u=v$ and $level_v \ge level_u$ always hold (indeed, $v$ can only increase its $level$ variable, and the $level$ variable of $u$ can only be set to values taken by $v$'s). It follows that $v$ never executes $prnt_v:=u$ in the second round and later. This implies that $e$ reaches in a finite time a configuration $\rho'$ such that (a) $prnt_v \ne u$ always holds after $\rho'$, or (b) $prnt_v= u$ always holds after $\rho'$ (since $v$ cannot execute $prnt_v:=u$ after $\rho'$ if $prnt_v \ne u$).
In case (a), the behavior of $v$ after $\rho'$ is never influenced by $u$: $v$ behaves exactly the same even when $u$ does not exist. From the inductive hypothesis, protocol $ss$-$TO$ eventually reaches a configuration $\rho''$ such that $S-\{u\}$ satisfies the condition of $\mathcal{LC}_0$ and remains in $\rho''$ afterward (from Lemma \ref{lem:legit0}). After $u$ executes its action at $\rho''$, $level_u = level_v$ holds and thus the configuration of $S$ is in $\mathcal{LC}_0$.
Now consider case (b), where we do not use the inductive hypothesis. The fact that $prnt_v=u$ (and $prnt_u=v$) always holds after $\rho'$ implies that $level_v$ (and also $level_u$) remains unchanged after $\rho'$. Assume now that a neighbor $w~(\ne u)$ of $v$ continuously satisfies $level_w\neq level_v$ or $prnt_w\neq v$ from a configuration $\rho''$ of $e$ after $\rho'$. If $w$ continuously satisfies $level_w> level_v$ from $\rho''$, then $v$ executes {\tt GA1} in a finite time, which is a contradiction. If $w$ continuously satisfies $level_w< level_v$ from $\rho''$, then $w$ executes {\tt GA1} in a finite time and takes a $level$ value such that $level_w\geq level_v$, which contradicts the fact that $w$ continuously satisfies $level_w< level_v$ from $\rho''$. This implies that $level_w= level_v$ and $prnt_w= v$ hold in a finite time in any execution starting from $\rho'$. As $v$ does not modify its state after $\rho'$, $w$ is never enabled after $\rho'$. This implies that the fragment of $S$ consisting of processes within distance two from $u$ reaches a configuration satisfying the condition of $\mathcal{LC}_0$ and remains unchanged. We can now apply the same reasoning by induction on the distance of any process to $u$ and show that $ss$-$TO$ eventually reaches a configuration in $\mathcal{LC}_0$ where link $(u, v)$ is the root link.
Consequently, protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_0$ from any initial configuration.
\end{proof}
Now, we consider the case with a single Byzantine process.
\begin{lemma}\label{lem:conv1}
In a tree system with a single Byzantine process, protocol $ss$-$TO$ eventually reaches a configuration of $\mathcal{LC}_1$ from any initial configuration.
\end{lemma}
\begin{proof}
Let $z$ be the Byzantine process, $S'$ be any subtree (or a connected component) of $S-\{z\}$ and $y$ be the process in $S'$ neighboring $z$ (in $S$).
We prove, by induction on the number of processes $n'$ of $S'$, that $S'$ eventually reaches a configuration satisfying the condition (C1) or (C2) of Definition \ref{def:legit1}.
It is clear that $S'$ reaches a configuration satisfying (C1) from any initial configuration in case of $n'=1$.
Now assume that $S'$ reaches a configuration satisfying (C1) or (C2) from any initial configuration in case of $n'=k-1$ (inductive hypothesis), and consider the case of $n'=k\ (\ge 2)$.
From $n' \ge 2$, there exists a leaf process $u$ in $S'$ that is not neighboring the Byzantine process $z$. Let $v$ be the neighbor of $u$. Since processes $u$ and $v$ are correct processes, we can show the following by the same argument as the fault-free case (Lemma \ref{lem:conv0}): after some configuration $\rho$, (a) $prnt_v \ne u$ always holds, or (b) $prnt_v= u$ always holds. In case (a), we can show from the inductive hypothesis that $S'$ eventually reaches a configuration satisfying (C1) or (C2). In case (b), we can show that $S'$ eventually reaches a configuration satisfying (C2) where link $(u, v)$ is the root link.
Consequently, protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_1$ from any initial configuration.
\end{proof}
The following main theorem is obtained from Lemmas \ref{lem:legit0}, \ref{lem:st-legit}, \ref{lem:legit1}, \ref{lem:conv0} and \ref{lem:conv1}.
\begin{theorem}\label{th:strong}
Protocol $ss$-$TO$ is a $(\Delta,0,1)$-strongly stabilizing tree-orientation protocol.
\end{theorem}
\subsubsection{Round Complexity of $ss$-$TO$}
In this subsection, we focus on the round complexity of $ss$-$TO$. First, we show the following lemma.
\begin{lemma}\label{lem:roundp}
Let $v$ and $u$ be any neighbors of $S$. Let $S'$ be the subtree of $S-\{v\}$ containing $u$ and $h(v,u)$ be the largest distance from $v$ to a leaf process of $S'$. If $S' \cup \{v\}$ contains no Byzantine process, $prnt_v := u$ of {\tt GA1} or {\tt GA2} can be executed only in the first $2 h(v,u)$ rounds. Moreover, in round $2 h(v,u)+1$ or later, $level_v$ remains unchanged as long as $prnt_v = u$ holds.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $h(v,u)$.
First consider the case of $h(v,u)=1$, where $u$ is a leaf process. When the first round completes, all the output registers of every process become consistent with the process variables. Since $u$ is a leaf process, $prnt_u = v$ always holds. It follows that process $v$ can execute $prnt_v := u$ only in {\tt GA1}. Once $v$ executes its action in the second round, $level_v \ge level_u$ holds and $prnt_v := u$ of {\tt GA1} cannot be executed afterward (see the proof of Lemma \ref{lem:conv0}). Thus, $prnt_v := u$ of {\tt GA1} can be executed only in the first and second rounds. It is clear that in round $3$ or later, $level_v$ remains unchanged as long as $prnt_v = u$ holds.
We assume that the lemma holds when $h(v,u) \le k-1$ (inductive hypothesis) and consider the case of $h(v,u) = k$. We assume that $prnt_v := u$ of {\tt GA1} or {\tt GA2} is executed in round $r$, and show that $r \le 2k$ holds in the following. Variable $level_v$ is also incremented in the action, and let $\ell$ be the resultant value of $level_v$. In the following, we consider two cases.
\begin{itemize}
\item Case that $prnt_v := u$ of {\tt GA1} is executed in round $r$: when $prnt_v := u$ is executed, $level_u=\ell$ holds. But $level_u<\ell$ holds when $v$ executes its action in round $r-1$; otherwise, $v$ reaches a state with $level_v \ge \ell$ in round $r-1$ and cannot execute $prnt_v := u$ (with $level_v := \ell$) in round $r$. This implies that $u$ incremented $level_u$ to $\ell$ in round $r-1$ or $r$.
In the case that $u$ makes the increment of $level_u$ by {\tt GA1}, $u$ executes $prnt_u :=w$ for $w\ (\ne v)$ in the same action. Since $h(u,w)<h(v,u)$ holds, the action is executed in the first $2 h(u,w)$ rounds from the inductive hypothesis. Consequently, $prnt_v := u$ of {\tt GA1} is executed in round $2 h(u,w)+1\ (< 2 h(v,u))$ at latest.
In the case that $u$ makes the increment of $level_u$ by {\tt GA2}, $u$ executes $prnt_u :=w$ for some $w\ (\in N_u)$ in the same action, where $w=v$ may hold. For the case of $w \ne v$, we can show, by a similar argument to the above, that $prnt_v := u$ is executed in round $2 h(u,w)+1\ (< 2 h(v,u))$ at latest. Now consider the case of $w=v$. Then $level_v = level_u = \ell -1$, $prnt_v \ne u$ and $prnt_u \ne v$ hold immediately before $u$ executes $prnt_u:=v$ and $level_u:=\ell$. Between the actions of $level_u := \ell -1$ (with $prnt_u := w\ (w \ne v)$) and $level_u := \ell$ (with $prnt_u := v$), $v$ can execute its action at most once; otherwise, $level_v \ge \ell-1$ holds after the first action, and $level_v \ge \ell$ or $prnt_v = u$ holds after the second action. This implies that $level_u := \ell -1$ with $prnt_u := w\ (w \ne v)$ is executed in the previous or the same round as the action of $level_u := \ell$, and thus, in round $r-2$ or later. Since $h(u,w)<h(v,u)$ holds, the action is executed in the first $2 h(u,w)$ rounds from the inductive hypothesis. Consequently, $prnt_v := u$ of {\tt GA1} is executed in round $2 h(u,w)+2\ (\le 2 h(v,u))$ at latest.
\item Case that $prnt_v := u$ is executed in {\tt GA2}: then $level_v = level_u = \ell -1$, $prnt_v \ne u$ and $prnt_u \ne v$ hold immediately before $v$ executes $prnt_v:=u$ and $level_v:=\ell$. Between the executions of $level_v := \ell -1$ and $level_v := \ell$, $u$ can execute its action at most once, and $u$ executes $prnt_u := w$ for some $w\ (\ne v)$ in the action. Since $h(u,w)<h(v,u)$ holds, this action is executed in the first $2 h(u,w)$ rounds from the inductive hypothesis. Consequently, $prnt_v := u$ is executed in round $2 h(u,w)+1\ (< 2 h(v,u))$.
\end{itemize}
It remains to show that $level_v$ remains unchanged in round $2 h(v,u)+1$ or later, as long as $prnt_v = u$ holds. Now assume that $prnt_v=u$ holds at the end of round $2 h(v,u)$.
\begin{itemize}
\item Case that $prnt_u=v$ holds at the end of round $2 h(v,u)$: since $h(u,w) < h(v,u)$ for any $w \in N_u-\{v\}$, $prnt_u := w$ cannot be executed in round $2 h(v,u)+1$ or later from the inductive hypothesis, and so $prnt_u=v$ holds afterward. Thus, it is clear that $level_v$ remains unchanged as long as $prnt_v=u$ (and $prnt_u=v$) holds.
\item Case that $prnt_u \ne v$ holds at the end of round $2 h(v,u)$: let $prnt_u = w$ hold for some $w \in N_u-\{v\}$ at the end of round $2 h(v,u)$. Since $h(u,w) < h(v,u)$, $level_u$ remains unchanged as long as $prnt_u = w$ holds from the inductive hypothesis. It follows that $level_v$ remains unchanged as long as $prnt_v=u$ and $prnt_u = w$ hold. Since $h(u,x) < h(v,u)$ for any $x \in N_u-\{v\}$, $prnt_u := x$ cannot be executed in round $2 h(v,u)+1$ or later, but $prnt_u := v$ can be executed. Immediately after execution of $prnt_u := v$, $level_v=level_u$ holds if $prnt_v$ remains unchanged. Thus, it is clear that $level_v$ remains unchanged as long as $prnt_v=u$ (and $prnt_u=v$) holds.
\end{itemize}
\end{proof}
The following lemma holds for the fault-free case.
\begin{lemma}\label{lem:round0}
In a fault-free tree system, protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_0$ from any initial configuration in $O(d)$ rounds where $d$ is the diameter of the tree system $S$.
\end{lemma}
\begin{proof}
Lemma \ref{lem:roundp} implies that, in round $2d+1$ or later, no process $v$ changes $prnt_v$ or $level_v$, and thus the configuration remains unchanged. Lemma \ref{lem:conv0} guarantees that this final configuration is in $\mathcal{LC}_0$.
\end{proof}
For the single-Byzantine case, the following lemma holds.
\begin{lemma}\label{lem:round1}
In a tree system with a single Byzantine process, protocol $ss$-$TO$ reaches a configuration of $\mathcal{LC}_1$ from any initial configuration in $O(n)$ rounds.
\end{lemma}
\begin{proof}
Let $z$ be the Byzantine process and $S'$ be any subtree of $S -\{z\}$. Let $v$ be the neighbor of $z$ in $S'$. From Lemma \ref{lem:roundp}, $v$ cannot execute $prnt_v := w$ for any $w \in N_v-\{z\}$ in round $2 d'+1$ or later, where $d'$ is the diameter of $S'$. We consider the following two cases depending on $prnt_v$.
\begin{itemize}
\item Case 1: there exists $w \in N_v-\{z\}$ such that $prnt_v =w$ at the end of round $2d'$ and $prnt_v$ remains unchanged during the following $d'$ rounds (from round $2d'+1$ to round $3d'$).
From Lemma \ref{lem:roundp}, $level_v$ also remains unchanged during these $d'$ rounds. By a discussion similar to that in the proof of Lemma \ref{lem:roundp}, we can show that $S'$ reaches a configuration satisfying the condition (C2) of Definition \ref{def:legit1} by the end of round $3d'$.
\item Case 2: $prnt_v=z$ at the end of round $2d'$ or there exists at least one configuration during the following $d'$ rounds (from round $2d'+1$ to round $3d'$) such that $prnt_v=z$ holds.
Let $c$ be the first configuration in which $prnt_v=z$ holds. From Lemma \ref{lem:roundp}, $prnt_v=z$ always holds after $c$. We can show, by induction on $k$, that the fragment of $S'$ consisting of processes within distance $k$ from $v$ satisfies the condition (C1) at the end of $k$ rounds after $c$. Thus, $S'$ reaches a configuration satisfying the condition (C1) of Definition \ref{def:legit1} by the end of round $4d'$.
\end{itemize}
After a subtree reaches a configuration satisfying the condition (C2), its configuration may change into one satisfying the condition (C1), and the configuration may satisfy neither (C1) nor (C2) during the transition. However, Lemma \ref{lem:legit1} guarantees that the length of the period during which the subtree satisfies neither (C1) nor (C2) is $O(d')$ rounds, where $d'$ is the diameter of the subtree. Since the total of the diameters of all the subtrees in $S-\{z\}$ is $O(n)$, the convergence to a configuration of $\mathcal{LC}_1$ can be delayed by at most $O(n)$ rounds.
\end{proof}
Finally, we can show the following theorem.
\begin{theorem}\label{th:round}
Protocol $ss$-$TO$ is a $(\Delta,0,1)$-strongly stabilizing tree-orientation protocol. The protocol reaches a configuration of $\mathcal{LC}_0 \cup \mathcal{LC}_1$ from any initial configuration. The protocol may move from a legitimate configuration to an illegitimate one because of the influence of the Byzantine process, but it stays in illegitimate configurations for a total of $O(n)$ rounds (not necessarily consecutive) over the whole execution.
\end{theorem}
\begin{proof}
Theorem \ref{th:strong} shows that $ss$-$TO$ is a $(\Delta,0,1)$-strongly stabilizing tree-orientation protocol. Lemmas \ref{lem:round0} and \ref{lem:round1} guarantee that $ss$-$TO$ reaches a configuration of $\mathcal{LC}_0 \cup \mathcal{LC}_1$ from any initial configuration within $O(n)$ rounds. For the case with a single Byzantine process (say $z$), each subtree of $S-\{z\}$ may experience an illegitimate period (satisfying neither (C1) nor (C2)) after such a configuration. However, Lemma \ref{lem:legit1} guarantees that the length of each illegitimate period is $O(d')$, where $d'$ is the diameter of the subtree. Since the total of the diameters of all the subtrees in $S-\{z\}$ is $O(n)$, the total length of the periods during which (C1) or (C2) does not hold is $O(n)$ rounds.
\end{proof}
\section{Concluding Remarks}
We introduced the notion of strong stabilization, a property that permits self-stabilizing protocols to contain Byzantine behaviors for tasks where strict stabilization is impossible. In strong stabilization, only the first Byzantine actions performed by a Byzantine process may disturb the system. If the Byzantine node executes no Byzantine actions, but only correct actions, its existence remains unnoticed by the correct processes; so, by behaving properly for a while, the Byzantine node may disturb the system arbitrarily far into the execution. By contrast, if the Byzantine node executes many Byzantine actions at the beginning of the execution, there exists a time after which further Byzantine actions have no impact on the system. As a result, the faster an attacker spends its Byzantine actions, the faster the system becomes resilient to subsequent Byzantine actions. An interesting trade-off appears: the more Byzantine actions are actually performed, the faster our protocols stabilize (since the number of steps performed by correct processes in response to Byzantine disruption is independent of the number of Byzantine actions). Our work raises several important open questions:
\begin{enumerate}
\item Is there a trade-off between the number of perturbations Byzantine nodes can cause and the containment radius? In this paper, we strove to obtain an optimal containment radius in strong stabilization, but it is likely that some problems do not allow strong stabilization with containment radius 0. It is then important to characterize the difference in containment radius when the task to be solved is ``harder'' than tree orientation or tree construction.
\item Is there a trade-off between the total number of perturbations Byzantine nodes can cause and the number of Byzantine nodes? That is, is a single Byzantine node more effective at harming the system than a team of Byzantine nodes, given the same total number of Byzantine actions? A first step in this direction was recently taken by~\cite{YMB10c}, where the number of Byzantine actions is assumed to be upper bounded, for the (global) problem of leader election. Their result hints that only the Byzantine actions themselves are relevant, independently of the number of processes that perform them. It is thus interesting to see whether the result still holds in the case of a potentially infinite number of Byzantine actions.
\end{enumerate}
\singlespacing
\subsection{Compute-Data Co-location Algorithm}
\label{subsec:algorithm}
In traditional GPUs, thread-blocks can be scheduled in any order, as they are supposed to run concurrently.
The number of thread-blocks that can be run together in one SM is determined by thread-block resource constraints.
Normally, the thread-blocks are scheduled in order and as soon as one thread-block retires, the next thread-block is scheduled to any available SM.
However, to benefit from careful data placement, as is enabled by our dual-mode address mapping mechanism, thread-blocks and the data they access must be co-located in the same memory stack.
To steer thread-blocks and the data they access to the same memory stack, we set an affinity between thread-blocks and memory stacks.
\subsubsection{Affinity-based Work Scheduling Algorithm}
We compute which memory stack each thread-block has \emph{affinity} to using the following equation.
\begin{dmath}
affinity = \left \lfloor \frac{block\_id}{N_{blocks\_per\_stack}} \right \rfloor \bmod N_{stacks}
\label{eq:bsch}
\end{dmath}
$block\_id$ is flattened for multi-dimensional grids based on row-major ordering, i.e., {\tt blockIdx.y} $\times$ {\tt gridDim.x} + {\tt blockIdx.x}.
$N_{blocks\_per\_stack}$ is the number of thread-blocks that can run concurrently in one memory stack.
For example, if one memory stack has four SMs and each of which can run six thread-blocks, $N_{blocks\_per\_stack}$ is 24.
When \emph{N} is the number of memory stacks and \emph{T} is the total number of thread-blocks, \nicefrac[]{T}{N} thread-blocks have the same affinity.
With the affinity information, whenever an SM is available, instead of assigning any unscheduled thread-block to it, the scheduler picks one that has affinity to that memory stack.\footnote{This scheduling algorithm is conceptually similar to the guided scheduling policy in OpenMP, where programmer specifies chunk size (the number of loop iterations that one thread executes).}
This may potentially lead to load imbalance compared to the baseline of assigning any available thread-block to any SM in the system.
However, the number of thread-blocks is typically much greater than the number of memory stacks, which reduces the likelihood of load imbalance.
The hardware and runtime system must be extended to support this modified scheduling scheme.
The scheduling algorithm could be further optimized to select thread-blocks with affinity to other stacks when a memory stack has no work left, similar to a work-stealing algorithm, to handle the potential load imbalance that can occur when the amount of work differs greatly across thread-blocks.
However, in our 20 evaluated benchmarks, only one suffered performance degradation due to the affinity-based scheduling algorithm.
Therefore, we did not implement the work-stealing optimization.
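As a concrete illustration, the affinity computation of Eq.~\eqref{eq:bsch} and the affinity-aware pick can be sketched as follows. This is a minimal sketch, not our hardware scheduler: the stack/SM/thread-block counts are the illustrative values from the example above, and \texttt{pick\_block} is a hypothetical helper.

```python
# Minimal sketch of Eq. (affinity) and an affinity-aware pick.
# Assumed illustrative configuration: 4 memory stacks, 4 SMs per
# stack, 6 thread-blocks per SM, so N_blocks_per_stack = 24.
N_STACKS = 4
N_BLOCKS_PER_STACK = 4 * 6  # SMs per stack x thread-blocks per SM

def affinity(block_id):
    """Memory stack that a (flattened) thread-block has affinity to."""
    return (block_id // N_BLOCKS_PER_STACK) % N_STACKS

def pick_block(unscheduled, stack_id):
    """When an SM in stack_id frees up, prefer an unscheduled
    thread-block whose affinity matches that stack."""
    for b in unscheduled:
        if affinity(b) == stack_id:
            return b
    return None  # no matching thread-block left

# Blocks 0..23 map to stack 0, blocks 24..47 to stack 1, and so on,
# wrapping around every 96 blocks.
```

With these values, consecutive groups of 24 thread-blocks share a stack, matching the example of four SMs each running six thread-blocks.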
\subsubsection{Data Placement Algorithm}
\label{subsec:data_placement_algorithm}
While dual-mode address mapping enables the ability to localize an entire page in a single memory stack, the question of how to identify the exclusively accessed or shared pages remains.
This identification is particularly difficult for GPU systems because data structures are allocated by the host processor before the kernel invocation and used by all threads in the kernel later.\footnote{We only target global data structures, which are used by all the threads in the system; local data structures are easily identifiable with specific keywords.}
For example, Figure~\ref{fig:kmeans-code} (a) shows the host code that allocates data structures (lines 3-4) and initializes them (line 5).
In the kernel, shown in Figure~\ref{fig:kmeans-code} (b), each thread accesses \texttt{nfeatures} elements (line 4) starting from the \texttt{(pid $\times$ nfeatures)-th} element of the \texttt{feature\_flipped\_d} array (line 5).
Since the amount of data each thread (and thread-block) accesses is \textit{unknown} at the time the data structure is allocated, it is not trivial to partition (or place) the data appropriately.
\vspace{-0.5em}
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{figs/kmeans_combined}
\caption{Code snippet from K-means Clustering. (a) shows the host code. (b) shows the kernel code.}
\label{fig:kmeans-code}
\end{figure}
To realize an NDP-aware data placement, we propose a compiler- and/or profiler-based technique that identifies the amount of data used by one thread-block for each data structure and decides which address mapping is desirable for the data structure (technically, for the pages in which the data structure is allocated).
It is based on the following four observations.
First, the amount of data used by one thread-block is often determined by the number of threads in a thread-block and the amount of data that each thread accesses.
Second, compile-time (symbolic) analysis can be used to detect if there exists a regular access pattern for a data structure.
Third, profiler-assisted techniques can be used to estimate input-dependent accesses (more on this is explained later).
Fourth, although the number of threads in a thread-block is often input-dependent, it is determined before a kernel invocation (generally, even before data structures are allocated).
Based on these observations, we implemented the compile-time analysis on LLVM infrastructure.
We extended the FunctionPass, which enables traversing all the kernel functions at compile time, and performed the symbolic analysis.
For all the memory accesses inside the kernel function, we analyzed the ``GetElementPtrInst'' LLVM instruction, which performs the index computation.
Based on the index expression and the types of variables it uses, we examine whether there exists a runtime-constant stride between two consecutive thread-blocks.\footnote{In the examination, we check whether the expression uses only 1) kernel-invocation constants, such as parameters, block/grid dimensions, or global constants, which are determined before kernel invocation and remain constant throughout the kernel execution, and 2) the thread index, thread-block index, and/or loop indices (for local loops in the kernel).}
If such a stride is found, we insert instructions in the host code to compute the stride distance between two consecutive thread-blocks at runtime.
We use profiler-assisted techniques for the case where the access pattern is input-dependent \emph{and} only when the input is not changed frequently (e.g., graph computing workloads).
Note that the profiler performs a similar examination as the compile-time analysis.
Also, our mechanism uses FGP for irregularly accessed data, shared data, and parameter objects, as they are accessed by many thread-blocks.
Where the data should be located can also be computed, as the affinity-based work scheduling algorithm already determines where the computation will be performed.
For example, if one thread-block accesses the first $B$ bytes of a memory object and $N$ consecutive thread-blocks will be scheduled to the SMs in a memory stack, the mapping algorithm allocates contiguous chunks of $B \times N$ bytes on each memory stack.
The equations to compute $chunk\_size$ and $stack\_id$ are as follows:
\begin{dmath}
chunk\_size = \min \left( 4\,\mathrm{KB},\; B \times N_{blocks\_per\_stack} \right)
\label{eq:cs}
\end{dmath}
\begin{dmath}
stack\_id = \left \lfloor \frac{virtual\_addr -
obj\_start\_addr}{chunk\_size} \right \rfloor \bmod N_{stacks}
\label{eq:si}
\end{dmath}
Note that the $chunk\_size$ is upper-bounded by 4KB: with hardware support to map an entire page to a single memory stack with CGP, an arbitrary number of pages of any large object can be allocated in a single memory stack.
$obj\_start\_addr$ is the starting virtual address of an object.
When the $chunk\_size$ is not a multiple of the physical page size, we round it up to the next page multiple.
The resulting misaligned pages will be shared by SMs from two consecutive memory stacks, but this is still better than an unaligned distribution of the data across all memory stacks.
Commonly, $N_{blocks\_per\_stack}$ is moderately large, since multiple thread-blocks can run concurrently on an SM; this often results in a large $chunk\_size$ (greater than or close to 4KB).
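A minimal sketch of Eqs.~\eqref{eq:cs} and~\eqref{eq:si}, assuming a 4KB page and byte addresses; the round-up of a misaligned chunk to the next page multiple, described above, is omitted here for clarity:

```python
# Sketch of the chunk-size and stack-selection computations.
# Values are illustrative; the page-boundary round-up is omitted.
PAGE = 4096  # 4KB upper bound on chunk_size

def chunk_size(B, n_blocks_per_stack):
    """Bytes of the object placed contiguously on each stack."""
    return min(PAGE, B * n_blocks_per_stack)

def stack_id(virtual_addr, obj_start_addr, chunk, n_stacks):
    """Stack that holds the chunk containing virtual_addr."""
    return ((virtual_addr - obj_start_addr) // chunk) % n_stacks
```

For example, with $B = 240$ bytes and 24 thread-blocks per stack, $B \times N_{blocks\_per\_stack} = 5760$ bytes exceeds the cap, so one 4KB page is placed per stack in round-robin order.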
Note that programs often use more than one data structure.
Our proposed mechanism supports multiple data structures, since we compute the $chunk\_size$ for each data structure using its own $B$ value based on that structure's access pattern.
We demonstrate how our data placement algorithm works with Figure~\ref{fig:kmeans-code}, a code snippet from K-means Clustering.
The size of each data element can be identified (and computed) at compile-time, and the first element and the number of consecutive elements that each thread accesses can also be analyzed with our compile-time analysis routine.
In this example, each thread accesses \texttt{nfeatures} consecutive elements from \texttt{(pid $\times$ nfeatures)-th} element, as shown in lines 4 and 5 of Figure~\ref{fig:kmeans-code} (b).
Since each thread-block has \texttt{blockDim.x} threads, \texttt{blockDim.x $\times$ nfeatures $\times$ sizeof(float)} is the $B$ value.
This means that the first thread-block accesses $B$ bytes from the starting address of the array (\texttt{in}) and the second thread-block accesses next $B$ bytes.
Note that the number of thread-blocks and threads per thread-block are determined before the kernel invocation.
When a {\tt cudaMalloc} function is called, our extended runtime system uses this information and the $B$ information to compute the $chunk\_size$ using Eq~\eqref{eq:cs} for the corresponding data structure and decides whether it should be allocated with the FGP or CGP.
If a data structure is accessed by multiple kernels, the information of the first kernel that accesses it is used to compute the number of thread-blocks per memory stack.
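To make the K-means example concrete, the following sketch plugs in hypothetical values ({\tt blockDim.x} $= 256$ and {\tt nfeatures} $= 32$; both are input-dependent in practice) and computes the byte range each thread-block covers:

```python
# Hypothetical instantiation of the K-means example: each thread
# reads `nfeatures` consecutive floats starting at pid * nfeatures,
# so a thread-block covers one contiguous B-byte span of the array.
BLOCK_DIM_X = 256   # threads per thread-block (assumed)
NFEATURES = 32      # features per point (assumed)
SIZEOF_FLOAT = 4

B = BLOCK_DIM_X * NFEATURES * SIZEOF_FLOAT  # bytes per thread-block

def block_span(block_id):
    """Byte range [start, end) of the array touched by block_id."""
    return (block_id * B, (block_id + 1) * B)
```

Here $B = 32768$ bytes, so thread-block 0 covers the first 32KB of the array and thread-block 1 the next 32KB; this is exactly the information the runtime needs when {\tt cudaMalloc} is called.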
Accesses to 3-D data structures, for which the index is typically computed with both {\tt blockDim.x} and {\tt blockDim.y}, are often more complicated than those to 1-D or 2-D data structures.
In this paper, we focus on data structures up to 2-D and leave the extension to support 3-D and more complex data structures for future work.
\section{Background}
\label{sec:background}
\subsection{Baseline Architecture}
\label{subsec:baseline}
\begin{figure}[htb]
\includegraphics[width=\columnwidth,keepaspectratio]{figs/baseline_memory_stack}
\caption{Overview of an NDP memory stack}
\label{fig:baseline_memory_stack}
\end{figure}
Figure~\ref{fig:baseline_memory_stack} shows a conceptual diagram of an example GPU-based NDP memory stack.
Although our mechanism does not rely on any particular memory organization, we choose high bandwidth memory (HBM) as our baseline~\cite{hbm2}.
HBM is composed of multiple memory channels and uses a wide-lane bus interface to achieve high memory bandwidth and low power dissipation.
Each memory stack has one or more streaming multiprocessors (SMs) on its logic layer, high-speed off-chip links for remote data accesses (to/from other memory stacks and the host processor), and a crossbar network that connects the SMs and the HBM.
We assume SMs in the memory stack are equipped with a hardware TLB and memory management units (MMUs) that access page tables and are capable of performing virtual address translation.
\subsection{Programming Model}
We use the widely adopted GPU programming model as our programming model.
The host processor launches GPU kernels, and the runtime system partitions and distributes thread-blocks across all the SMs in the system.
Each HBM stack has multiple SMs (four in our evaluation), so up to the number of SMs $\times$ the number of thread-blocks per SM thread-blocks are concurrently executed in each memory stack.
\subsection{Networks}
\label{subsec:network}
There are three kinds of networks in our system: (1) a network among the host processor and memory stacks (denoted as \textbf{Host} in Figure~\ref{fig:baseline_architecture_motivation}), (2) a network among memory stacks (denoted as \textbf{Remote} in Figure~\ref{fig:baseline_architecture_motivation}), and (3) a network that connects SMs in a memory stack to their local memory (denoted as \textbf{Local} in Figure~\ref{fig:baseline_architecture_motivation}).
To provide efficient execution for legacy (non-NDP-aware) applications, it is logical to dedicate most of the network resources available in a system to the connections between the host processor and the memory stacks.
For this reason, we assume that the Remote network has much less bandwidth than the Host network.
The three types of networks are ordered by bandwidth as follows: Local $>$ Host $>$ Remote.
\subsection{Address Interleaving}
To increase memory-level parallelism, or to reduce channel/rank/bank conflicts, fine-grain interleaving is typically used in modern memory systems by striping small chunks of the physical address space (often the size of a few cache lines) across different banks, ranks and channels.
In a system with multiple memory stacks, a page can be striped across multiple stacks with fine-grain interleaving, or the entire page can be allocated in a single memory stack with coarse-grain interleaving.
Complex address decoding schemes have been studied before~\cite{zha:zhu00,pet:dan16}; however, for brevity, we assume a simple address mapping scheme.
We discuss the applicability of our mechanism in systems with complex address mapping schemes in Section~\ref{subsec:complex_address_mapping}.
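The difference between the two interleaving schemes can be sketched as follows, assuming four memory stacks, 128-byte fine-grain stripes, and 4KB pages (illustrative values, not a specification of our hardware):

```python
# Sketch of the two simple interleaving schemes discussed above.
N_STACKS = 4
FINE_CHUNK = 128   # stripe unit for fine-grain interleaving
PAGE = 4096        # a whole page stays in one stack (coarse-grain)

def fine_grain_stack(paddr):
    """Stack holding a 128-byte stripe under fine-grain interleaving."""
    return (paddr // FINE_CHUNK) % N_STACKS

def coarse_grain_stack(paddr):
    """Stack holding the whole page under coarse-grain interleaving."""
    return (paddr // PAGE) % N_STACKS
```

With fine-grain interleaving, any 4KB page touches all four stacks; with coarse-grain interleaving, every address within one page resolves to the same stack.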
\section{Conclusion}
We introduce {CODA} that realizes co-location of computation and data in a system with multiple near-data processing (NDP) memory stacks.
We observe that the key to using NDP efficiently to improve performance and energy efficiency is to reduce remote data accesses.
To this end, we first propose a lightweight hardware mechanism that supports dual-mode address mapping at a page granularity.
With this, a page can either be spread across memory stacks (for data shared by SMs in more than one memory stack or data primarily used by the host processor) or localized to a single memory stack (for data exclusively used by SMs in one memory stack).
Second, we propose a software/hardware cooperative mechanism that (1) identifies exclusively accessed pages based on the anticipated access pattern for each data structure, and (2) steers computations to the memory where data they access is located.
To anticipate the access pattern for each memory object, we utilize a combination of compile-time analysis and profiler-assisted techniques.
We use the LLVM infrastructure and perform the symbolic analysis for pattern detection.
To co-locate computations and data, we propose an affinity-based work scheduling algorithm.
Our extensive evaluations across a wide range of workloads show that {CODA} improves performance by 31\% and reduces remote data accesses by 38\% over a baseline system that cannot exploit compute-data affinity characteristics.
\section{Discussions}
\label{sec:dis}
\subsection{Complex Address Mapping}
\label{subsec:complex_address_mapping}
So far we have assumed a simple address mapping scheme for ease of explanation.
Modern processors, however, use more complex address mapping schemes such as XORing multiple bits (not necessarily consecutive) for channel selection~\cite{pet:dan16}.
In this section, we discuss the applicability of our dual-mode address mapping mechanism in such systems.
Note that the computation-data co-location algorithm presented in Section~\ref{subsec:algorithm} is orthogonal to the address mapping scheme used in the underlying system.
Although the detailed address mapping scheme differs for different architectures, the mappings can be classified into those that use the channel-selection bits exclusively (i.e., they are not used as part of the row- or column-selection), and those that do not (i.e., at least one bit from the channel-selection bits is used as part of the row- or column-selection).
Our dual-mode address mapping mechanism can be easily extended to support a system with the former class of mappings, where channel-selection bits are used exclusively, by swapping the channel-selection bits with other higher order bits after XOR operation.
However, it is not trivial to support a system with the latter class of mappings, where channel-selection bits are not exclusively used.
One way might be to identify which bits are used exclusively for the channel-selection and which bits are not, and then carefully swapping the channel-selection bits with those that are not used for channel-selection.
This requires further investigation and is a part of our future work.
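To picture the bit-swap extension described above for the former class of mappings, the following sketch swaps a 2-bit channel-selection field with an equally sized higher-order field; the bit positions are hypothetical and the post-XOR hashing itself is not modeled:

```python
# Illustrative field-swap: exchange the channel-selection bits with
# an equally sized higher-order field so that page-sized regions
# resolve to a single stack. Bit positions are assumptions.
CH_LO, CH_BITS = 7, 2   # channel-selection field (hypothetical)
HI_LO = 21              # higher-order field to swap with (hypothetical)

def swap_fields(addr):
    mask = (1 << CH_BITS) - 1
    ch = (addr >> CH_LO) & mask
    hi = (addr >> HI_LO) & mask
    addr &= ~((mask << CH_LO) | (mask << HI_LO))
    return addr | (hi << CH_LO) | (ch << HI_LO)
```

Because the operation is a pure field swap, applying it twice restores the original address, which keeps the overall mapping a bijection.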
\subsection{Large Page and Memory Management}
\label{subsec:large_page}
Large pages have been used to mitigate address translation overheads by reducing the number of PTEs to maintain and increasing TLB hit rates.
However, it comes at a cost, such as an increase in fragmentation and memory footprint.
In this section, we discuss the applicability of our dual-mode address mapping mechanism for the large pages.
Again, the computation-data co-location algorithm presented in Section~\ref{subsec:algorithm} is orthogonal to the page size.
First, our dual-mode address mapping can be easily extended for the large pages.
For 2MB pages, for example, address bits [22:21] can be used (instead of address bits [13:12] in the case of 4KB page) to index memory stacks to allocate the entire page in a single memory stack.
However, the key challenge in supporting large pages is not choosing which bits to use for stack selection but dealing with fragmentation.
Although our mechanism may complicate page management and potentially increase fragmentation, we believe that if page-groups are small (e.g., 4 or 8 pages), management is unlikely to be significantly more complicated than normal page management.
Also, the memory manager can be modified to deal with page-groups for most operations (e.g., flushing out to disk) for better memory management.
This requires further exploration and is a part of our future work.
\subsection{PTE Extension}
Our proposed mechanism requires a modification to the PTE format.
X86 ISA reserves 3 bits [11:9] for future usage~\cite{intel_sw}, so we can use one of the bits to indicate the granularity information.
When a system employs large pages, extra bits are available in the PTE, which gives more freedom to modify PTE contents.
\subsection{NUMA or NUCA Systems}
In this section, we discuss the difference and uniqueness of our system from the conventional NUMA (Non-Uniform Memory Architectures)~\cite{cha:dev94} or NUCA (Non-Uniform Cache Access)~\cite{har:fer09} systems.
First, in NUMA systems, memory policies such as node-local or interleave can be specified and (relatively) easily controlled.
For example, the first-touch based page allocation has already been used in NUMA systems.
On the contrary, the first-touch based page allocation \textit{cannot} be used in our system due to the lack of first-touch information (recall that data structures are generally allocated and initialized by the host processor before the kernel invocation).
Even if the first-touch information \textit{was} available, a memory page could not be allocated in a single memory stack without hardware support for the localization.
Moreover, since multiple memory stacks behave like NUMA for GPUs in memory stacks but behave like UMA (Uniform Memory Architecture) for the host processor, we need a mechanism to accommodate the needs of two different computing units.
Second, NUCA systems (e.g., R-NUCA~\cite{har:fer09}) rely on data migration after an access pattern is identified.
The migration overhead is much smaller in NUCA systems than in our system because the former migrates data within a single device (i.e., a tiled L2 cache architecture), whereas the latter migrates data across multiple devices connected via low-bandwidth, high-latency interconnect links.
\subsection{Other NDP Systems}
The dual-mode address mapping mechanism works irrespective of the type of processing units: CPU, GPU, etc.
Only the chunk size detection algorithm (in Section~\ref{subsec:algorithm}) needs to be adjusted depending on the programming model.
Our proposed mechanism will also work well on NDP with conventional DRAM devices~\cite{far:ahn15, bat:sum16}.
Processing cores in DDRx DIMMs benefit from accessing data in the same DIMM, but the host processor should utilize all DIMMs concurrently.
Therefore, our proposed mechanism can be very effective to provide such an access localization solution without jeopardizing the host processor's performance.
\section{Evaluation Results}
\label{sec:eval}
\subsection{Performance}
\label{subsec:eval:perf}
\begin{figure}[!htb]
\includegraphics[width=3.3in,keepaspectratio,angle=0]{figs/eval_perf}
\caption{Speedup of {CODA} over FGP-Only, CGP-Only, and an idealized first-touch-based allocation scheme (CGP-Only + FTA)}
\label{fig:eval_perf}
\end{figure}
Figure~\ref{fig:eval_perf} shows the performance improvement of {CODA} for the benchmarks described in Table~\ref{table:benchmark}.
FGP-Only represents the baseline where every page is distributed across memory stacks at 128-byte fine-grain interleaving granularity, and CGP-Only represents the case where consecutive 4KB pages are allocated in consecutive memory stacks in a circular order; this represents affinity-unaware data placement even when coarse-grain data allocation is available.
CGP-Only+FTA (First-Touch-based Allocation)\footnote{Here, the first-touch-based allocation scheme places each physical page on the memory stack that first accessed it. We ignore the accesses by the host processor for the purpose of determining the first access since all the pages are initially allocated by the host processor before the kernel invocation.} represents the case where each page is allocated to the memory stack that first touches the page.
Even though this is not a practical implementation due to the lack of first-touch information at the time data is allocated (and often initialized) by the host processor, this can be a good indicator of the potential effectiveness of coarse-grain allocation for each benchmark.\footnote{One simple way to implement first-touch-based allocation is to migrate pages upon first access. We observed that this migration-based first-touch allocation is not very effective (not shown, 7\% speedup, as opposed to 19\% speedup of CGP-Only+FTA) mainly due to small number of reuses of memory pages after migrations (due to burst and clustered access patterns); that is, the migration overhead is not mitigated. This makes a case for better data allocation rather than reactive data movement.}
Our evaluation results show that {CODA}~outperforms both FGP-Only and CGP-Only by 31\%.
{CODA}~even outperforms CGP-Only+FTA for most benchmarks.
Allocating an entire page in the memory stack that exclusively accesses it brings a substantial reduction in remote data accesses and an increase in local data accesses.
This shift from remote to local data accesses directly translates into performance improvement, as remote data accesses are limited by the low bandwidth of the off-chip links, whereas local data accesses exploit the large internal memory bandwidth.
Perhaps more importantly, such bandwidth discrepancy becomes even more pronounced as the interconnection network becomes overwhelmed with more remote data accesses.
Though the lower bandwidth of the off-chip links does not necessarily mean longer memory access latency, when coupled with off-chip communication overheads such as queuing delays and external transfer time, the average memory access latency can also be significantly affected by the number of remote data accesses.
Our mechanism achieves 1.56x and 1.13x average performance improvements over the baseline for \textbf{block-exclusive} and \textbf{core-exclusive} benchmarks, respectively.
This is particularly effective in graph algorithms with large numbers of neighbor accesses (e.g., \texttt{BFS}, \texttt{DC}, \texttt{PR}, and \texttt{SSSP}), which are difficult to handle efficiently without an NDP-aware data allocation.
Even for the \textbf{sharing} benchmarks in which most pages are accessed by many SMs, our mechanism can localize accesses whenever possible and achieves 1.29x average performance improvements over the baseline.
\subsection{Local vs Remote Access}
\label{subsec:eval:remote}
\begin{figure}[!htb]
\includegraphics[width=3.3in,keepaspectratio,angle=0]{figs/eval_acc_var}
\caption{Comparison of local and remote data accesses between FGP-Only and our mechanism (CODA)}
\label{fig:eval_acc_var}
\end{figure}
Figure~\ref{fig:eval_acc_var} shows distribution of memory accesses, local versus remote, for the baseline and how it varies with our mechanism.
Our mechanism significantly reduces remote data accesses for all the evaluated benchmarks but one, \texttt{GE}.
A substantial reduction in remote data accesses and an increase in local data accesses contribute to the performance improvement for the following reasons.
First, local data accesses can utilize the large internal memory bandwidth, while remote data accesses are limited by the lower memory bandwidth of the off-chip links.
Second, for remote data accesses, a great amount of time can be spent waiting for the network due to off-chip communication.
This can be caused by limited network bandwidth, and can be exacerbated further by artifacts of off-chip communication such as queuing and routing delays.
Our mechanism significantly reduces remote data accesses, enabling the utilization of large internal memory bandwidth and also mitigating the effect of interconnection network congestion by placing data objects in the same memory stack in which the computation is to be performed.
Our mechanism is especially effective for the block-exclu\-sive and core-exclusive benchmarks.
On average, remote data accesses are reduced by 47\% and 34\%, respectively.
Even for the sharing benchmarks, by identifying the pages that are accessed by only a few thread-blocks or SMs and allocating them where the computation is to be performed, our mechanism reduces remote data accesses by 32\%.
\subsection{Sensitivity to Bandwidth}
\label{subsec:eval:bw}
\begin{figure}[!ht]
\includegraphics[width=3.3in,keepaspectratio,angle=0]{figs/eval_sensitivity_bw}
\caption{Speedup with different remote bandwidth among memory stacks.}
\label{fig:eval_sensitivity_bw}
\end{figure}
Even for highly provisioned systems with unrealistically large Remote bandwidth and low remote memory access latency, co-location of thread-blocks and the data they access improves performance, as shown in Figure~\ref{fig:eval_sensitivity_bw}.
This is because even in such systems, remote memory accesses cannot be completely free from all resource conflicts.
Careful data placement, as is enabled by our mechanism, can significantly reduce the possibility of such conflicts and therefore can contribute to the performance improvement.
Even when a system has 256 GB/s of aggregated Remote bandwidth, our mechanism improves performance by 8\% (up to 23\%).
Note that as the gap between Local bandwidth and Remote bandwidth increases (Remote bandwidth decreases while Local bandwidth remains the same), our mechanism provides more benefit by reducing remote data accesses and opening up more opportunity to exploit the large internal memory bandwidth, thereby mitigating the performance penalty of off-chip communication (the performance improvement goes up to 15.2\% and 37.4\%, respectively).
\subsection{Sensitivity to Graph Properties}
In graph computing, the number of vertices and their neighbors that each thread-block accesses depends highly on graph properties.
To examine the impact of the graph properties on our proposed mechanism, we differentiate the properties that can be estimated at the time the graph is preprocessed\footnote{The term preprocessing generally implies a heavy-weight operation such as a clever partitioning to reduce communication. In this study, however, we only extract basic properties of a graph without scanning through the entire graph.} from those that cannot be estimated.
Basic graph properties such as the number of vertices and edges can be obtained at the time the graph is preprocessed.
These, combined with the number of threads per thread-block, which is determined based on the resource constraints of the underlying hardware, can be used to estimate the average number of edges that each thread-block accesses ($\mu$) before a kernel invocation and the standard deviation ($\sigma$) of it.
The coefficient of variation of a graph, estimated as \nicefrac[]{$\sigma$}{$\mu$}, is a good indicator of how regular the graph is: graphs with a smaller coefficient of variation are more regular.
Therefore, the granularity at which the graph should be distributed, or the block stride distance, can be determined.
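The regularity estimate described above can be sketched as follows. For illustration we compute $\mu$ and $\sigma$ directly from a per-vertex degree list, whereas in practice only basic properties are extracted without scanning the entire graph; the work split assumes one vertex per thread.

```python
# Sketch: estimate mean (mu) and standard deviation (sigma) of
# edges per thread-block, and use sigma/mu as a regularity score.
import statistics

def edges_per_block(degrees, threads_per_block):
    """Sum vertex degrees over each thread-block's slice of vertices
    (one vertex per thread, as assumed above)."""
    return [sum(degrees[i:i + threads_per_block])
            for i in range(0, len(degrees), threads_per_block)]

def coefficient_of_variation(degrees, threads_per_block):
    work = edges_per_block(degrees, threads_per_block)
    mu = statistics.mean(work)
    sigma = statistics.pstdev(work)
    return sigma / mu if mu else float("inf")
```

A coefficient of variation near zero indicates a regular graph that benefits from coarse-grain placement; a large value signals an irregular work distribution.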
\begin{figure}[!ht]
\includegraphics[width=3.3in,keepaspectratio]{figs/eval_graph_inputs}
\caption{PageRank performance with different graphs}
\label{fig:eval_input_sensitivity}
\end{figure}
Figure~\ref{fig:eval_input_sensitivity} compares the performance of FGP-Only and {CODA}, using the PageRank workload.
The evaluation is based on four real-world graphs, which have 59K to 9M vertices.
Graphs are sorted based on their regularity: graphs with a smaller coefficient of variation appear toward the left side of the figure.
The coefficient of variation of each graph is also depicted.
We confirm that the effectiveness of our mechanism depends highly on graph properties.
Regular graphs benefit more from our mechanism (55\%) than irregular graphs (5\%), since the estimation accuracy depends only on the properties that can be estimated at the time the graph is preprocessed.
Notably, {CODA} does \emph{not} degrade performance in any case since it detects the data objects that are exclusively accessed by one memory stack and localizes them with CGP, while distributing other data objects with FGP, as in the case of FGP-Only.
\subsection{Multiprogrammed Workloads}
\label{subsec:multi}
\begin{figure}[!ht]
\includegraphics[width=3.3in,keepaspectratio,angle =0]{figs/eval_multi}
\caption{Performance of multiple applications}
\label{fig:eval_multi}
\end{figure}
To further analyze the impact of having hardware that provides the ability to map an entire page to a single memory stack using CGP, we evaluate our CGP-Only configuration with four mixes of multiprogrammed workloads.
Each benchmark is chosen randomly from each category to construct a multiprogrammed workload.
Figure~\ref{fig:eval_multi} compares the performance of CGP-Only with that of FGP-Only, showing that CGP-Only outperforms FGP-Only for all the workloads.
With FGP-Only hardware, every memory page is distributed across all memory stacks, which results in a significant number of remote data accesses from all applications.
With hardware that can map an entire page to a single memory stack, as enabled by our mechanism, however, memory pages that an application accesses can be allocated to the memory stack where the application is executed, and hence, all the accesses can exploit the large internal memory bandwidth within the memory stack.
This is an important contribution since it is infeasible or difficult to reduce remote data accesses in the presence of multiple workloads running in a system.
\subsection{Impact of Interleaving Granularity}
\label{subsec:eval:host}
\begin{figure}[!ht]
\includegraphics[width=3.3in,keepaspectratio,angle =0]{figs/eval_host}
\caption{Performance impact of interleaving granularity on the host processor}
\label{fig:eval_host}
\end{figure}
So far we have demonstrated the necessity of the coarse-grain interleaving (technically, selective use of CGP and FGP) for the efficient use of NDP.
One might consider using \textit{just} coarse-grain interleaving in a system with multiple NDP memory stacks.
However, in this section we present the performance of FGP-Only and CGP-Only for host-side execution (assuming the same overall computational capability as all the NDP stacks combined) to demonstrate the necessity of fine-grain interleaving as well.
When an application is run on the host processor, it is desirable that the memory objects it accesses are distributed across multiple memory stacks to achieve maximum memory bandwidth utilization by distributing concurrent accesses across all available memory interfaces.
Figure~\ref{fig:eval_host} shows the performance of the host processor with memories interleaved at different granularities.
FGP-Only and CGP-Only indicate the use of fine-grained interleaved memory and coarse-grained interleaved memory, respectively.
Our evaluation results show that FGP-Only outperforms CGP-Only by 1.48x due to better memory bandwidth utilization.
\subsection{Impact of Affinity-based Scheduling}
\label{subsec:eval:b2cfix}
\begin{figure}[!htb]
\includegraphics[width=3.3in,keepaspectratio]{figs/eval_b2cfix}
\caption{Performance impact of an affinity-based work scheduling mechanism}
\label{fig:eval_b2cfix}
\end{figure}
With our affinity-based work scheduling mechanism, thread-blocks can no longer be scheduled to an arbitrary SM.
In this section, we evaluate the performance impact of the affinity-based work scheduling mechanism.
Figure~\ref{fig:eval_b2cfix} compares the performance of the affinity-based work scheduling mechanism (FGP-Only + Affinity-based Work Schedule) and that of the baseline (FGP-Only).
All our evaluated benchmarks are virtually unaffected by the restricted scheduling mechanism, as expected, except for one benchmark, \texttt{SAD}.
The reason why the performance of \texttt{SAD} is degraded by the affinity-based work scheduling is that the number of thread-blocks is small (61) compared to the number of memory stacks and available SMs (16).
When compute resources bound the overall performance, maintaining load balance across all of them can be more crucial than carefully co-locating thread-blocks and the data they access.
This problem can be alleviated with resource-monitoring-based schemes.
\section{Introduction}
\label{sec:introduction}
Recent studies have demonstrated that near-data processing (NDP) is an effective technique to improve performance and energy efficiency of data-intensive workloads~\cite{ahn:yoo15,ahn:hon15,aki:fra15,zha:jay13,zha:jay14,chu:jay13,hsi:ebr16,nai:had17}.
However, leveraging NDP in realistic systems with multiple memory modules (e.g., DIMMs or 3D-stacked memories) introduces a new challenge.
In today's systems, where no computation occurs in memory modules, the physical address space is interleaved at a fine granularity among all memory modules to help improve the utilization of processor-memory interfaces by distributing the memory traffic.
However, this is at odds with efficient use of NDP, which requires careful placement of data in memory modules such that near-data computations and the data they exclusively use can be localized in individual memory modules, while distributing shared data among memory modules to reduce memory bandwidth contention.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,keepaspectratio]{figs/baseline_architecture}
\caption{(a) Overview of a near-data processing system with multiple memory stacks. (b) A localization of data in one memory stack that limits memory bandwidth utilization for host computation. (c) A distribution of data across memory stacks that increases memory bandwidth utilization for host computation. (d) An NDP-agnostic placement of computations and their exclusively used data that increases remote data accesses. (e) Localization of shared data in one memory stack that increases memory bandwidth contention. (f) An NDP-aware placement of computations and their exclusively used data that eliminates remote data accesses. (g) Distribution of shared data across memory stacks that reduces memory bandwidth contention.}
\label{fig:baseline_architecture_motivation}
\end{figure*}
Figure~\ref{fig:baseline_architecture_motivation} (a) shows a high-level diagram of an NDP system with multiple memory stacks.
The system consists of the host processor and multiple 3D memory stacks, each of which has one or more processing units on its logic layer.
Memory stacks are connected with the processor-centric topology proposed by Kim et al.~\cite{kim:kim13}, constituting the entire memory address space; they are used as main memory for all processing units in the system, including the host processor and processing units in memory stacks.
While processing units in memory stacks can transparently access data in any memory stack, accesses to data in other memory stacks use the low bandwidth off-chip links and traverse the interconnect, incurring higher latency and leading to lower system performance and energy efficiency.
On the other hand, a local data access, which occurs when a processing unit accesses data in its local memory stack, utilizes high memory bandwidth within the memory stack, incurring lower latency and leading to higher system performance and energy efficiency.
Therefore, it is critical to minimize remote data accesses for the efficient use of NDP.
Figure~\ref{fig:baseline_architecture_motivation} (b) and (c) demonstrate the need for the distribution of data across memory stacks to increase the bandwidth utilization of processor-memory interfaces when computation is performed in the host processor.
Figure~\ref{fig:baseline_architecture_motivation} (d) - (g) demonstrate the need for more careful placement of computations and data when computation is performed in the processing units near memory.
Figure~\ref{fig:baseline_architecture_motivation} (d) and (e) present two cases of NDP-agnostic computations and data placements.
In Figure~\ref{fig:baseline_architecture_motivation} (d), computations and the data they exclusively use are placed in different memory stacks, so all of their memory traffic becomes remote traffic among memory stacks.
In Figure~\ref{fig:baseline_architecture_motivation} (e), shared data is localized in one memory stack, increasing memory bandwidth contention in that memory stack.
Figure~\ref{fig:baseline_architecture_motivation} (f) and (g) present the ideal cases of NDP-aware computations and data placements.
In Figure~\ref{fig:baseline_architecture_motivation} (f), computations and their private data are placed in the same memory stacks, eliminating all the remote data accesses and leading to better use of NDP.
In Figure~\ref{fig:baseline_architecture_motivation}(g), shared data is spread across memory stacks, reducing memory bandwidth contention in the first memory stack and leading to performance improvement.
Our \textbf{goal} in this paper is to realize such an NDP-aware placement of computation and data with very low overhead, while not sacrificing the host processor performance:
near-data computations and the data they exclusively use should be localized in individual memory stacks for efficient use of NDP, whereas shared data and data accessed by the host processor should be spread across memory stacks to reduce memory bandwidth hotspots and to maximize bandwidth utilization.
Unfortunately, there are two key \textbf{challenges} that need to be solved to achieve this goal: (1) how to \textit{selectively} localize data in a system with multiple memory stacks where address space is finely interleaved (data is spread across multiple memory stacks by default), and (2) how to \textit{identify} the data that favors localization and how to \textit{co-locate} computations with the data they exclusively use?
(1) To solve the first challenge, we propose a lightweight hardware mechanism that supports dual-mode address mapping at a page granularity, so that a page can be spread across memory stacks or localized to a single memory stack.
The key idea is to use different sets of address mapping bits for each memory page depending on its anticipated access pattern, allowing the two sets of mappings to co-exist; low order bits are used to distribute a page across memory stacks, whereas high order bits are used to place (or localize) an entire page in a single memory stack.
The granularity information for each memory page is stored in the page table entry (PTE) and translation lookaside buffer (TLB) entry.
At the time a virtual address is translated into a physical address and the memory request is sent, our mechanism uses the appropriate address mapping depending on the granularity information.
Admittedly, the concept of changing address mapping to change data layout or to increase memory-level parallelism is not new~\cite{zha:zhu00,gha:jal16}.
However, our proposed mechanism is different from previous proposals in that it enables coexistence of pages with different address mappings while not requiring large-scale page migrations.
(2) Identifying exclusively accessed data and co-locating computations that use that data is particularly difficult for GPU systems for two reasons.
First, data structures are usually allocated (and often initialized) by the host processor (CPU) before kernel invocation and used by all threads in the kernel later.
Which thread accesses which (and which part of) data structures is not determined at the time data structures are allocated.
Second, and more importantly, thread-blocks\footnote{We use the term thread-block to refer to work-group in OpenCL and block in CUDA.} can be scheduled to any core in GPU systems.
Considering that the efficacy of an NDP system depends on the co-location of thread-blocks and the data they exclusively use, this \textit{nondeterministic} aspect of GPU execution models hinders the efficient use of NDP.
For these reasons, we target a GPU-based NDP system.
The applicability of our mechanism to other core types is discussed later.
To solve the second challenge in such a GPU-based NDP system, we make two key observations.
First, the amount of data used by one thread-block is often determined by the number of threads in a thread-block and the amount of data each thread accesses.
The latter can be estimated by either compile-time analyses (for input-independent access patterns) or profiler-assisted techniques (for input-dependent access patterns).
Although the number of threads in a thread-block is often input-dependent, it is determined before kernel invocation (specifically, even before data structures are allocated).
Both combined, we come to a conclusion that the amount of data used by one thread-block \textit{can} be estimated.
Second, we observe that a slight restriction on the thread-block scheduling policy based on the affinity between thread-blocks and compute resources (memory stacks) in a heterogeneous system in terms of memory access latencies enables the efficient use of NDP despite the potential compute resource sub-utilization or load imbalance.
We base our observation on the fact that the number of threads and thread-blocks is typically much greater than the number of cores in GPU systems.
Based on these observations, we propose a software/hardware cooperative solution that (1) utilizes a compiler- and profiler-based technique to analyze the access pattern of each memory object and determine how each memory object should be laid out across memory stacks, and (2) uses an affinity-based scheduling algorithm to steer thread-blocks to the memory stack where the data they access is located.
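A minimal sketch of such an affinity-based scheduler follows; the round-robin placement within a stack and the function name are assumptions, since the paper's exact policy is not given at this point.

```python
def affinity_schedule(home_stacks, sms_per_stack):
    # Steer each thread-block to an SM inside its home memory stack
    # (the stack holding the data it exclusively accesses), cycling
    # round-robin over that stack's SMs. home_stacks[i] is the home
    # stack of thread-block i; returns (stack, sm_slot) per block.
    next_sm = {}
    placement = []
    for stack in home_stacks:
        slot = next_sm.get(stack, 0)
        placement.append((stack, slot))
        next_sm[stack] = (slot + 1) % sms_per_stack
    return placement
```

Restricting each thread-block to its home stack is what trades a little load balance for locality, as the \texttt{SAD} result discussed earlier illustrates.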
Our paper makes the following \textbf{contributions}.
First, we propose a lightweight hardware mechanism that supports dual-mode address mapping at a page granularity, such that a page can be spread across memory stacks or localized to a single memory stack.
This enables pages with different address mappings to coexist in the same memory space.
Second, we propose a software/hardware cooperative solution that utilizes a compiler-based and profiler-assisted technique to decide whether to localize or distribute each memory object based on its anticipated access pattern.
This mechanism steers computations to the memory where data they exclusively access is located, thereby achieving efficient use of NDP.
Third, we evaluate our proposed mechanism with a wide range of data-intensive workloads and show that it improves performance by 31\% and reduces remote data accesses by 38\% over a baseline system that has neither dual-mode address mapping nor an affinity-based computation and data co-placement mechanism.
\section{Mechanism}
\label{sec:mechanism}
In this section, we describe our mechanisms to enable co-location of computations and data in a system with multiple NDP memory stacks.
Section~\ref{subsec:mechanism:overview} provides an overview of a non-NDP-aware distribution of computations and data, and demonstrates how an NDP-aware mechanism can improve it.
Section~\ref{subsec:address_interleaving} describes a hardware mechanism that supports dual-mode address mapping at a page granularity that either distributes a page across memory stacks or localizes the page to a single memory stack.
Section~\ref{subsec:algorithm} describes a software/hardware cooperative solution that utilizes a compiler-based and profiler-assisted technique to decide whether to localize or distribute each memory object based on its anticipated access pattern, together with an affinity-based scheduling algorithm that steers thread-blocks to the memory stack where the data they access is located.
\subsection{Overview}
\label{subsec:mechanism:overview}
Figure~\ref{fig:mechanism_overview} (a) shows how a non-NDP-aware mechanism places thread-blocks and memory pages across multiple memory stacks.
In this example, there are four memory stacks in the system, each with two SMs in the logic layer.
Five 4KB memory pages (A, B, C, D, E) are allocated and they are distributed across all memory stacks at 256B granularity.
Each 256B chunk is color-coded depending on which thread-block accesses it.
For instance, A0, A1, A2 and A3 are accessed by TB0 and TB4, and A4, A5, A6 and A7 are accessed by TB1 and TB5.
Note that page B is accessed only by TB0 and TB4, and page C is accessed only by TB1 and TB5.
Accesses from TB0 to A0, B0, B4, B8 and B12 are efficient since thread-block and data are located in the same memory stack, while those to the rest (A1, A2, A3, B1, B2, B3, B5, \ldots, B15) are not.
\begin{figure}[htb]
\includegraphics[width=\columnwidth,keepaspectratio]{figs/mechanism_overview}
\caption{(a) shows an example non-NDP-aware distribution of thread-blocks (denoted as \textbf{TB}) and pages (denoted as \textbf{A}, \textbf{B}, \textbf{C}, \textbf{D} and \textbf{E}) across memory stacks. (b) demonstrates how an NDP-aware mechanism can do better.}
\label{fig:mechanism_overview}
\end{figure}
Figure~\ref{fig:mechanism_overview} (b) demonstrates how an NDP-aware mechanism can place the thread-blocks and memory pages for efficient use of NDP.
Pages B, C, D and E are allocated in different memory stacks, and the thread-blocks that exclusively access each page are placed in the corresponding memory stacks.
With this co-location of thread-blocks (computations) and the data they exclusively use, all the accesses to page B from TB0 and TB4, to page C from TB1 and TB5, and so on, are efficient, exploiting the large internal memory bandwidth.
Note that page A is still distributed across memory stacks since it is accessed by all thread-blocks (shared).
\subsection{Dual-mode Address Mapping}
\label{subsec:address_interleaving}
\textbf{Hardware Support.}
We propose to use different sets of bits for address mapping for each memory page depending on the anticipated access patterns, allowing the two sets of mappings to co-exist.
The default (fine-grain) address mapping distributes a page across memory stacks (as is done today), and the alternative (coarse-grain) address mapping allocates (or localizes) an entire page in a single memory stack (as is desirable for NDP exclusive data).
We refer to the distributed page as \textbf{FGP} and the localized page as \textbf{CGP}.
FGP is better suited for the data that is shared among SMs in multiple memory stacks or accessed primarily by the host processor.
On the other hand, CGP is better suited for the data that is exclusively accessed by the SMs in one memory stack.
Note that once hardware provides the ability to map an entire page to one memory stack (as is enabled by our selective use of coarse-grain address mapping), an NDP-aware operating system (OS) could allocate arbitrarily large objects within one memory stack by mapping all the virtual pages of that object to the physical pages (CGPs) in the memory stack.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{figs/addr_trans_new}
\caption{Hardware for a dual-mode address mapping}
\label{fig:mechanism}
\end{figure}
PTEs, TLB entries and cache lines are extended to indicate the granularity information, fine-grain or coarse-grain, for each page, as shown in Figure~\ref{fig:mechanism}.
The granularity bit in a PTE is set by the OS when the CGP is allocated, and the granularity bit in a cache line is set when the cache line is allocated.
When the granularity bit is set, indicating a CGP, the lowest bits of the PPN (Physical Page Number) are used to index the memory stack, whereas for an FGP the highest bits of the page offset are used.
For example, in a system with four memory stacks, when a cache line is evicted from the last level cache, a write-back request is sent to the memory stack indexed by either the bits [13:12] when the granularity bit is set (for the CGP) or the bits [11:10] when the granularity bit is not set (for the FGP).
Note that we only change the mapping of physical addresses to memory stacks, not the physical addresses themselves.
Thus, the cache is accessed with the original physical address, irrespective of the granularity information, and our mechanism does not have any impact on the cache coherence protocol or virtual address translation.
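The bit selection just described can be sketched directly from the four-stack, 4KB-page example (bits [13:12] for a CGP, bits [11:10] for an FGP); this is a software model of the routing logic, not the hardware itself.

```python
def stack_index(phys_addr, granularity_bit):
    # Pick the memory-stack index from a physical address in a
    # four-stack system with 4KB pages: low PPN bits [13:12] when
    # the granularity bit marks a CGP, high page-offset bits [11:10]
    # for an FGP. Only the routing to a stack changes; the physical
    # address itself is untouched.
    if granularity_bit:                  # CGP: whole page in one stack
        return (phys_addr >> 12) & 0x3
    return (phys_addr >> 10) & 0x3       # FGP: page spread over stacks
```

For any address within a given page, the CGP mapping returns the same stack, while the FGP mapping spreads the page's offsets over all four stacks.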
\textbf{System Software Support.}
The OS should be aware of the dual-mode address mapping (1) to indicate the granularity information in the PTEs and TLB entries, and (2) for page management, such as free page management or page replacement.
It is important to note that allocating a CGP requires a set of adjacent FGPs (technically, a set of CGPs is allocated together).
Consider a system where an FGP spans N consecutive memory stacks, occupying a contiguous block of M bytes in each memory stack.
In that system, a CGP occupies N$\times$M contiguous bytes within a single memory stack.
Therefore, a single CGP occupies the space that would have been utilized by N different FGPs within one memory stack (but does not utilize any of the space those N FGPs would have occupied in other memory stacks).
As a result, each block of N contiguous pages must uniformly be configured as FGP or CGP to avoid data layout conflicts.
However, different blocks of N pages may be independently configured as FGP or CGP based on application or OS requirements.
For example, when FGP 0 in Figure~\ref{fig:fgr_cgr_example} (a), consisting of blocks 0, 1, 2 and 3, is converted to a CGP, it conflicts with blocks 4, 8 and 12 from the three subsequent FGPs (FGP 1, FGP 2 and FGP 3, respectively).
Therefore, those four FGPs must be converted to CGPs together, as shown in Figure~\ref{fig:fgr_cgr_example} (b).
We use the term \textit{page-group} to refer to such a set of pages that must be converted together.
Hence, the OS should decide between FGP and CGP at a page-group granularity and can switch between FGP and CGP only when all the pages in the page-group are free.
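Concretely, a page-group reduces to an aligned run of N consecutive pages, where N is the number of memory stacks; a sketch (function name assumed):

```python
def page_group(page_number, num_stacks):
    # The aligned group of num_stacks consecutive pages that must be
    # converted between FGP and CGP together, since their fine-grain
    # stripes overlap the space any one CGP in the group occupies.
    start = (page_number // num_stacks) * num_stacks
    return list(range(start, start + num_stacks))
```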
\begin{figure}[htb]
\includegraphics[width=\columnwidth,keepaspectratio]{figs/fgr_cgr_example}
\caption{Conceptual diagram of page-group. The number indicates a memory block address and the blocks of the same color belong to the same OS page.}
\label{fig:fgr_cgr_example}
\end{figure}
\section{Evaluation Methodology}
\label{sec:method}
\subsection{Hardware Configurations}
\label{subsec:hardware}
\begin{table}[htb]
\caption{Evaluated system}
\label{table:system_configuration}
\includegraphics[width=3.3in,keepaspectratio]{figs/system_configuration}
\centering
\end{table}
\begin{table*}[htb]
\caption{Benchmark categories}
\label{table:benchmark}
\includegraphics[width=\textwidth,keepaspectratio]{figs/benchmarks}
\end{table*}
We evaluate our mechanism using SST~\cite{rod:hem11} with MacSim~\cite{ros:cop11}, a cycle-level microarchitecture simulator.
Low-level DRAM timing constraints are faithfully simulated using DRAMSim2~\cite{ros:cop11}, which was modified to model the HBM 2.0 specification~\cite{hbm2}.
Our default system configuration comprises the host processor and four HBM-based memory stacks, where each memory stack consists of four SMs and 8GB HBM memory.
More details on the simulated system configuration are provided in Table~\ref{table:system_configuration}.
We use 128-byte interleaving and 4KB interleaving for the FGP and CGP mappings, respectively.
Each channel is modeled to provide 32 GB/s of peak memory bandwidth; therefore 256 GB/s of total internal memory bandwidth is exploitable by the SMs in the logic layer.
We assume 128 GB/s of aggregate memory bandwidth is available for the Host network.
We model a Remote network to provide 16 GB/s of memory bandwidth.
We also perform detailed sensitivity studies, where we vary the bandwidth of Local, Host and Remote network.
\subsection{Benchmarks}
\label{subsec:bench}
We use 20 benchmarks from GraphBIG~\cite{lif:yin15}, Rodinia~\cite{che:boy09}, and Parboil~\cite{parboil}.
We classify a benchmark as being \textbf{block-exclusive} if almost all pages (> 90\%) are accessed by only one thread-block, \textbf{core-exclusive} if almost all pages (> 90\%) are accessed by one memory stack (i.e., multiple SMs in the same memory stack), \textbf{block-majority} if the majority of pages (> 60\%) are accessed by only one thread-block, \textbf{core-majority} if the majority of pages (> 60\%) are accessed by one memory stack, and \textbf{sharing} if most of the pages are accessed by more than one memory stack.
Table~\ref{table:benchmark} summarizes the benchmarks and the category they belong to.
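The classification rule can be written down directly from the thresholds above; the precedence order among overlapping categories is an assumption, since the text does not spell it out.

```python
def classify(block_exclusive_frac, stack_exclusive_frac):
    # Arguments: fraction of pages accessed by only one thread-block,
    # and fraction accessed by only one memory stack, tested against
    # the paper's 90% ("exclusive") and 60% ("majority") thresholds.
    if block_exclusive_frac > 0.9:
        return "block-exclusive"
    if stack_exclusive_frac > 0.9:
        return "core-exclusive"
    if block_exclusive_frac > 0.6:
        return "block-majority"
    if stack_exclusive_frac > 0.6:
        return "core-majority"
    return "sharing"
```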
\section{Motivation}
\label{sec:motivation}
\begin{figure}[htb]
\includegraphics[width=\columnwidth,keepaspectratio]{figs/static_page_distribution}
\caption{Distribution of memory pages according to the number of thread-blocks that access each page}
\label{fig:intro_motivation}
\end{figure}
Figure~\ref{fig:intro_motivation} shows the distribution of memory pages according to the number of thread-blocks that access each memory page, for various data-intensive workloads from publicly available GPU benchmark suites~\cite{lif:yin15,che:boy09,parboil}.
It is observed that for some workloads, such as \texttt{BFS}, \texttt{DC}, \texttt{PR}, \texttt{SSSP}, \texttt{BC}, \texttt{GC}, and \texttt{NW}, most pages are accessed by only one or two thread-blocks.
In traditional systems, where no computation occurs in memory, distributing pages irrespective of which and how many thread-blocks access them helps improve the utilization of processor-memory interfaces by distributing the memory traffic.
However, when computation is performed \textit{near} memory (as is enabled by NDP), distributing such pages across memory stacks incurs a large amount of remote traffic.
Therefore, it is imperative to place such pages (exclusively used data) and the thread-blocks (computations) that access them together in individual memory stacks for efficient use of NDP.
In contrast, in the case of \texttt{HS3D} and \texttt{HS}, most pages are accessed by almost all thread-blocks.
Even in the presence of NDP, it is better to distribute such pages (shared data) across memory stacks to reduce memory bandwidth contention.
From this, we make two observations.
First, some pages are accessed exclusively by a few thread-blocks, while other pages are accessed, or shared, by many thread-blocks.
The exclusively used pages should be placed in individual memory stacks with the thread-blocks that access them to eliminate remote data accesses, and the shared pages should be distributed across memory stacks to reduce memory bandwidth contention.
Second, each application has a different distribution of exclusive and shared pages.
For example, most pages in \texttt{BFS} are exclusively used, so the memory system should be capable of localizing all of them.
On the other hand, most pages in \texttt{HS} are shared, so the memory system should also be capable of distributing all of them.
These observations motivate the need for a mechanism that can allocate localized pages versus distributed pages \textit{flexibly} based on an application's needs.
\section{Related Work}
\label{sec:related}
\noindent\textbf{Processing in memory.}
Processing in memory was proposed decades ago~\cite{gok:hol95, hal:kog99, osk:cho98, pat:and97, mur:kog00, pat:and97-2, kan:hua99}.
Recent advances in 3D stacking technology have given a boost to NDP research~\cite{ahn:yoo15, ahn:hon15, aki:fra15, zha:jay13, zha:jay14, chu:jay13, hsi:ebr16, nai:had17, nai:ant15} to accelerate workloads in various domains (e.g., large-scale graph processing workloads~\cite{ahn:hon15, nai:had17}, Map-Reduce workloads~\cite{pug:jes14}, and HPC applications~\cite{zha:jay14}).
Among these works, Hsieh et al.~\cite{hsi:ebr16} (TOM) addressed the issue of local and remote memory accesses in a system with multiple NDP memory stacks.
TOM performs runtime profiling to learn the best address mapping for data accessed by offloading candidates, and distributes that data with the discovered mapping.
In contrast to our proposal, this work (1) essentially delays and decelerates regular kernel execution, because it tests ten different address mappings (sweeping from bit position 7 to bit position 16) for all the data accessed by offloading candidates during the runtime learning phase, and (2) implicitly assumes a hardware mechanism to distribute data with different mappings.
\noindent\textbf{Increasing Memory-Level Parallelism.}
Zurawski et al.~\cite{jur:mur95} presented an address bit swapping scheme to increase memory-level parallelism by reducing the row buffer conflicts in traditional DRAM systems, which is used in AlphaStation 600 5-series workstations.
Zhang et al.~\cite{zha:zhu00} proposed a permutation-based page interleaving scheme in order to reduce row-buffer conflicts and to exploit data access locality in the row-buffer.
Ghasempour et al.~\cite{gha:jal16} proposed a hardware mechanism to dynamically change the address mapping to increase bank-level parallelism at the cost of a significant amount of page migration overhead.
While our proposed mechanism also uses an address bit swapping scheme, it is different from these works in two ways.
First, our mechanism applies the address mapping at a page granularity, such that pages with different address mappings co-exist in the same memory space.
Our mechanism is lightweight in the sense that it incurs negligible performance overhead and does not have any impact on the cache coherence protocol or virtual address translation.
Second, our mechanism does not require large-scale page migrations; only a few (e.g., four or eight, depending on the number of memory stacks) pages are affected, since we selectively use CGP at the page-group granularity.
\noindent\textbf{Static-time Data Alignment.}
Static-time data allocation has a long history of research.
For example, HPF (High Performance Fortran) provides compiler directives to specify data alignment among processors~\cite{hpf20}.
Although our mechanism shares the same philosophy as HPF directives such as {\tt block} or {\tt cyclic}, they differ in that the HPF directives are applied in the virtual address space, whereas our mechanism operates in the physical address space, since the non-uniformity of memory accesses arises when a virtual page is mapped to the physical memory domain.
\noindent\textbf{Multiple GPUs.}
Static-time data allocation has also been researched in the context of multiple GPUs.
A system with multiple GPUs is closer to an MPI-based system, since each GPU has its own memory and the physical address space is not interleaved across multiple GPU memories.
In this sense, several algorithms were proposed to automatically partition data among multiple devices, e.g., multiple GPUs or CPUs and GPUs~\cite{cab:vil14, lee:sam13, kim:kim11, luk:hon09, lee:sam15, gre:obo11, ram:bon13}.
In contrast, the focus of our work is to enable data partitioning among memory stacks via selective use of coarse-grain interleaving (a hardware mechanism) and to enable co-location of computations with the data they access (a software mechanism).
2,877,628,089,559 | arxiv | \section{Introduction}
In molecular communication, signals are conveyed from transmitter to receiver in patterns of molecules, which propagate via Brownian motion; see \cite{Nakano2013c}. For example, information can be conveyed in the quantity of molecules. A single bit $\{0,1\}$ may be transmitted by releasing zero molecules for $0$, or a large number of molecules for $1$. In this example, the receiver's task is to observe the number of arriving molecules and determine which bit was sent.
Presently, molecular communication design follows the paradigm of conventional communication systems; signal detection techniques often assume synchronization and channel state knowledge. Algorithms to accomplish these tasks are an active area of research. For example, work has been done to address synchronization (see \cite{Abadal2011a,Moore2013,Shahmohammadian2013,Lin2016}) and parameter estimation (see \cite{Moore2012,Noel2014c}) in molecular communication.
Synchronization in particular is an important issue; in the opening example above, the transmitter and receiver must agree on when to transmit and when to detect. Many contemporary papers assume that perfect synchronization can be achieved, particularly when timing is used to convey information; see \cite{Rose2015,Srinivas2012}. On the other hand, asynchronous symbol detection is also an active research area in molecular communication. For example, in \cite{Lee2015a}, the propagation time is estimated from the variance of arrival times of a large number of molecules. In \cite{Lin2015b}, molecules arrive over time and a counter increments until a threshold number is observed.
Complex synchronization algorithms are not practical for nanonetworking applications, which envision low-complexity nanoscale devices (or biological ``devices'') whose computational capabilities are limited; see \cite{Akyildiz2008}. This paper focuses on \emph{asynchronous peak detection} for demodulation, which we also supplement with decision feedback as a variant. To motivate our design, consider the scenario in Fig.~\ref{fig_cir}. At time $t = 0$, a number $N$ of molecules is released by the transmitter, representing a bit $1$. The transmitter and receiver agree on the communication strategy, but are not synchronized; the receiver distinguishes between a $0$ and a $1$ by observing the {\em peak} of the response. If $N \rightarrow \infty$, we would observe something close to the dashed line in the figure, i.e., a relatively smooth response with a clear peak. However, in nanonetworking applications, $N$ is relatively small. Thus, the figure shows a simulated curve with many local peaks, both before and after the peak of the expected curve. Our goal is to detect the peak asynchronously and in the presence of inter-symbol interference (ISI).
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{fig_0031_cir.pdf}
\caption{Channel impulse response for a passive receiver. The environment considered here is the same as that in Section~\ref{sec_results}, except the sampling period is decreased to $1\,\textnormal{m}\textnormal{s}$. One sample realization of the received signal is compared with that expected from the analytical expression in (\ref{eqn_cir}), which has been scaled by the number of molecules $N$ released.}
\label{fig_cir}
\end{figure}
Related work in this direction includes \cite{Li2016a,Damrath2016}, which presented non-coherent detection algorithms that require no knowledge of the underlying channel impulse response. In \cite{Li2016a}, the local convexity of the diffusive signal is exploited. In \cite{Damrath2016}, an adaptive threshold detector subtracts the previous observation from the current sample. In this work, we simply find the largest sample, which results in a simple yet effective asynchronous detector. However, we do apply the expected impulse response (asynchronously) in the variant with decision feedback.
The main contribution of this paper is the design and analysis of the asynchronous peak detection scheme for diffusive signaling, where the symbol is modulated by the quantity of molecules and detected by observing the size of the peak. Unlike other approaches to this problem, we explicitly model the timing offset between the transmitter and receiver, and the variant with (asynchronous) decision feedback mitigates the ISI that is common in molecular communication systems. Our proposed design significantly outperforms the common single-sample detector, and, in the presence of a large timing offset, the variant with decision feedback can outperform an energy detector with decision feedback.
The rest of this paper is organized as follows. In Section~\ref{sec_model}, we present our asynchronous system model and the distribution of the largest observation in a given sampling interval. In Section~\ref{sec_rx}, we propose and analyze the asynchronous peak detector. We verify the analysis by comparing expected detector performance with simulations in Section~\ref{sec_results}, and conclude in Section~\ref{sec_concl}.
\section{System Model and Preliminaries}
\label{sec_model}
\subsection{System Model}
We consider a diffusion-based molecular communication system with one transmitter and one receiver. The transmitter releases molecules according to a random binary sequence $\mathbf{b} = [\bit{0}, \bit{1},\ldots,\bit{L-1}]$. Time is divided into slots of size $\Delta t$ and is referenced by index $k \in \{0,1,\ldots\}$. For simplicity, the transmitter modulates with impulsive ON/OFF keying and begins transmitting at time index $k = 0$. Each bit lasts $M$ slots: every $M$ slots, the transmitter releases $N$ molecules for a bit 1 and no molecules for a bit 0.
The receiver makes discrete observations $\obsSignal{k-\delta} = \obsSignal{j}$, where $\delta$ is some constant but unknown offset from the transmitter's clock. From the observations $\obsSignal{j}$, the receiver detects the sequence $\hat{\mathbf{b}} = [\bitObs{0}, \bitObs{1},\ldots,\bitObs{L-1}]$. Although this results in blocks of length $L$, we do not consider blockwise detection in this paper, as we focus on low-complexity symbol-by-symbol algorithms.
Generally, we make no assumptions about the geometry of the environment, including the shape of the transmitter and receiver, nor about the reception mechanism at the receiver. We do assume that the receiver's observations are \emph{independent}, such that $\obsSignal{j}$ is only conditioned on the expected time-varying signal $\obsSignalExp{j}$ and not on $\obsSignal{i}$, where $i \ne j$. It is also possible to extend our analysis to a different modulation scheme or to release the $N$ molecules over a finite interval.
Two examples of the received signal $\obsSignal{j}$ are as follows. If the receiver is a passive observer, then $\obsSignal{j}$ is the number of molecules at the receiver at time $j\Delta t$. If the receiver has an absorbing surface, then $\obsSignal{j}$ is the number of molecules that have been absorbed within the interval $\big((j-1)\Delta t,j\Delta t\big]$. Strictly speaking, neither of these signals have independent observations, but we leave a detailed consideration of sample dependence for future work.
\subsection{Receiver Signal}
When the transmitter releases $N$ molecules, we assume that the diffusive behavior of each individual molecule is independent. Using the independence of molecule behavior and receiver observations, and given that molecules are only released at transmitter time indices $k$ that are integer multiples of $M$, the observed number of molecules $\obsSignal{k}$ is a discrete-time random process that follows a Binomial distribution with $N$ trials and success probability $\probSingleObs{k}$.
Let $z_l[k]$ represent the number of molecules observed at time $k$ due to a \emph{single} release of $N$ molecules for the $l$th bit (i.e., at time $k=lM$), assuming that no molecules were released at any other time. We note that, like $\obsSignal{k}$, $z_l[k]$ is a discrete-time random process.
When the transmitter is modulating the binary sequence $\mathbf{b}$, then an observation at arbitrary time $k$ can include molecules from any current or previous symbol, i.e.,
\begin{equation}
\obsSignal{k} = \sum_{l=0}^{\lfloor k/M\rfloor} \bit{l}z_l[k],
\label{eqn_rx_obs_sync}
\end{equation}
where the $z_l[k]$ term is associated with $N$ trials and success probability $\probSingleObs{k-lM}$ when $\bit{l} = 1$, and $\lfloor\cdot\rfloor$ is the floor function. Thus, $\obsSignal{k}$ is a summation of Binomial random variables with \emph{different} success probabilities, i.e., a Poisson Binomial random variable. Instead of evaluating the Poisson Binomial distribution (which has combinatorial complexity; see \cite{Hong2013}), it is computationally simpler to approximate each Binomial random variable as a Poisson random variable with mean $E[z_l[k]] = \bar{z}_l[k] = N\probSingleObs{k-lM}$. Thus, by the properties of Poisson random variables, $\obsSignal{k}$ is also a Poisson random variable with mean
\begin{equation}
\obsSignalExp{k} = \sum_{l=0}^{\lfloor k/M\rfloor} \bit{l}\bar{z}_l[k],
\label{eqn_rx_obs_sync_poisson}
\end{equation}
which is accurate for $N$ sufficiently large and each $\probSingleObs{k-lM}$ sufficiently small; see \cite[Ch.~5]{Ross2009}. The cumulative distribution function (CDF) of Poisson random variable $X$ can be written as \cite[Ch.~24]{Abramowitz1964}
\begin{equation}
\Pr\{X \le x\} = \frac{\Gamma(\lfloor x+1\rfloor,\overline{x})}{\Gamma(\lfloor x+1\rfloor)},
\label{eqn_poisson_cdf}
\end{equation}
where $\Gamma(\cdot)$ and $\Gamma(\cdot,\cdot)$ are the Gamma and incomplete Gamma functions, respectively, and $\overline{x}$ is the mean of random variable $X$. Generally, for discrete $x$, we have $\Pr\{X \le x\} = \Pr\{X < x+1\}$. The form in (\ref{eqn_poisson_cdf}) also enables us to consider non-integer $x$, in which case $\Pr\{X \le x\} = \Pr\{X < x\}$. This property will be helpful when we consider the performance of detectors with decision feedback.
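For an integer shape $s = \lfloor x+1\rfloor$, the ratio of Gamma functions in (\ref{eqn_poisson_cdf}) reduces to the familiar finite Poisson sum $\sum_{n=0}^{s-1} \overline{x}^n e^{-\overline{x}}/n!$, which gives a simple way to evaluate the CDF numerically. A minimal sketch using that identity:

```python
import math

def poisson_cdf(x, mean):
    """P{X <= x} for Poisson X, i.e. Gamma(floor(x+1), mean) / Gamma(floor(x+1)).

    For the integer shape s = floor(x+1), the regularized upper incomplete
    Gamma function equals the finite sum of Poisson probabilities below s.
    Non-integer x is allowed, in which case P{X <= x} = P{X < x}.
    """
    s = math.floor(x + 1)
    return math.fsum(mean ** n * math.exp(-mean) / math.factorial(n)
                     for n in range(s))
```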
Given the distribution of $\obsSignal{k}$, we can describe the distribution of the maximum observation over some interval. Specifically, for the $l$th bit, the receiver makes observations $\obsSignal{j}, j \in \{lM-\delta,\ldots,(l+1)M-\delta\}$. It can be shown that the maximum value of these $M$ observations, $\maxSample{l}$, has CDF
\begin{equation}
\Pr\{\maxSample{l} \le a\} = \prod_{j = lM-\delta}^{(l+1)M-\delta} \Pr\{\obsSignal{j} \le a\}.
\label{eqn_max_cdf}
\end{equation}
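Since the observations are assumed independent, the CDF of the maximum in (\ref{eqn_max_cdf}) is simply the product of the per-sample CDFs. A self-contained numerical sketch (the Poisson CDF helper evaluates the finite sum equivalent to (\ref{eqn_poisson_cdf})):

```python
import math

def poisson_cdf(x, mean):
    """Poisson CDF via the finite-sum form of (eqn_poisson_cdf)."""
    s = math.floor(x + 1)
    return math.fsum(mean ** n * math.exp(-mean) / math.factorial(n)
                     for n in range(s))

def max_cdf(a, means):
    """P{max of independent Poisson samples <= a}, one mean per sample."""
    prod = 1.0
    for m in means:
        prod *= poisson_cdf(a, m)
    return prod
```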
In \cite{Noel2014c}, we used the maximum observation $\maxSample{l}$ to estimate channel parameters, but we did not describe its CDF nor use it for symbol detection, as we do in this paper.
\subsection{Existing Detectors}
To assess the performance of the asynchronous detector, we compare it with two existing detectors that are both variants of the weighted sum detectors that we proposed in \cite{Noel2014d}. The \emph{single sample detector} makes one observation at the time when the largest observation is \emph{expected}, i.e., at $\argmax \obsSignalExp{k}$ when $\bit{l}=1$, and compares that observation with the threshold $\tau$. The \emph{energy detector} takes \emph{all} observations in the receiver interval, adds them together, and compares the sum with the threshold $\tau$. The performance of these detectors is discussed in \cite{Noel2014d}, and we discuss the bit error probability of the energy detector with decision feedback in Section~\ref{sec_rx_ed}.
\section{Receiver Design and Performance}
\label{sec_rx}
In this section, we propose the asynchronous peak detector. We consider two variants. The first variant (\emph{simple asynchronous detector}) is non-adaptive and compares each observation with the same constant threshold for every bit. The second variant (\emph{asynchronous detector with decision feedback}) is an adaptive detector that adjusts the threshold for \emph{each} observation in a bit interval, based on the inter-symbol interference (ISI) \emph{expected} in each observation. We derive the bit error probability of each variant, and also discuss the bit error probability of the energy detector with decision feedback.
\subsection{Simple Asynchronous Detector}
The simple asynchronous detector decodes the $l$th bit by comparing the maximum observation $\maxSample{l}$ with the constant threshold $\tau$. Thus, the decision rule is
\begin{equation}
\bitObs{l} = \left\{
\begin{array}{rl}
1 & \text{if} \quad
\maxSample{l} \ge \tau,\\
0 & \text{otherwise}.
\end{array} \right.
\end{equation}
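The decision rule amounts to a handful of comparisons per bit. A minimal sketch, assuming the receiver simply buffers the $M$ samples of each symbol interval:

```python
def detect_bit(samples, tau):
    """Simple asynchronous peak detector: decide 1 iff the largest
    sample in the interval reaches the threshold tau."""
    return 1 if max(samples) >= tau else 0

def detect_sequence(observations, M, tau):
    """Decode a stream of samples: one decision per window of M samples."""
    return [detect_bit(observations[i:i + M], tau)
            for i in range(0, len(observations) - M + 1, M)]
```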
This detector is simple to implement because it only requires $M$ comparisons (including one with the threshold). We claim that this simple detector has significantly improved performance over a single sample detector for a wide range of offset $\delta$, since this detector does not require knowing precisely when the maximum value will be observed. However, when $\delta$ is sufficiently large (either positive or negative), the simple asynchronous detector has a higher risk of observing molecules transmitted in a different bit interval, due to the sampling window of length $M$. We leave the study of sampling window length for future work.
For a given transmitter sequence $\mathbf{b}$, the average probability of error for the $l$th bit can be calculated as
\begin{align}
\pErrorCur{l} = & P_1\Pr\{\maxSample{l} < \tau | \bit{l}=1,\bit{n},n \ne l\}\, + \nonumber \\
&P_0\Pr\{\maxSample{l} \ge \tau | \bit{l}=0,\bit{n},n \ne l\}.
\label{eqn_simple_async_error}
\end{align}
To evaluate (\ref{eqn_simple_async_error}), we need $\Pr\{\maxSample{l} < \tau\}$, and since $\tau$ is discrete we use $\Pr\{\maxSample{l} < \tau\} = \Pr\{\maxSample{l} \le \tau-1\}$. We also find $\Pr\{\maxSample{l} \ge \tau\}$ via $1-\Pr\{\maxSample{l} < \tau\}$. From (\ref{eqn_max_cdf}), we need the distributions of the individual observations $\obsSignal{j}$, which we find via (\ref{eqn_poisson_cdf}) as
\begin{equation}
\Pr\{\obsSignal{j} \le \tau-1\} = \frac{\Gamma(\lfloor\tau\rfloor,\obsSignalExp{j})}{\Gamma(\lfloor\tau\rfloor)},
\label{eqn_obs_cdf}
\end{equation}
where the mean signal $\obsSignalExp{j}$ is evaluated via (\ref{eqn_rx_obs_sync_poisson}). In general, we need to account for ``future'' bits, i.e., $n > l$, when the offset $\delta$ is negative. To evaluate the overall average probability of error $\overline{P}_\textnormal{e}$ for a given threshold $\tau$, we average (\ref{eqn_simple_async_error}) over all $L$ bits and all realizations of transmitter sequence $\mathbf{b}$.
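For a synchronized receiver ($\delta = 0$), the evaluation of (\ref{eqn_simple_async_error}) for one bit of a given sequence can be sketched as follows. Here `zbar` is the tabulated single-release mean response $\bar{z}$, the $l$th window is taken as samples $lM,\ldots,(l+1)M-1$, $P_1 = P_0 = 0.5$, and the contribution of ``future'' bits (which only matters for $\delta < 0$) is omitted:

```python
import math

def poisson_cdf(x, mean):
    """Poisson CDF via the finite-sum form of (eqn_poisson_cdf)."""
    s = math.floor(x + 1)
    return math.fsum(mean ** n * math.exp(-mean) / math.factorial(n)
                     for n in range(s))

def mean_signal(bits, zbar, k, M):
    """Expected observation at time k for bit sequence `bits` (delta = 0)."""
    return sum(bits[l] * zbar[k - l * M]
               for l in range(min(len(bits), k // M + 1))
               if 0 <= k - l * M < len(zbar))

def bit_error_prob(bits, l, zbar, M, tau, p1=0.5):
    """Average error probability of the l-th bit for the simple detector."""
    def p_max_below_tau(b):
        prod = 1.0
        for j in range(l * M, (l + 1) * M):
            prod *= poisson_cdf(tau - 1, mean_signal(b, zbar, j, M))
        return prod
    b1 = list(bits); b1[l] = 1   # hypothesis: bit l is 1
    b0 = list(bits); b0[l] = 0   # hypothesis: bit l is 0
    return p1 * p_max_below_tau(b1) + (1 - p1) * (1 - p_max_below_tau(b0))
```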
\subsection{Asynchronous Detector with Decision Feedback}
\label{sec_rx_async_df}
The asynchronous detector with decision feedback is an adaptive detector. It decodes the $l$th bit by first subtracting the expected ISI from every observation, or analogously adding the expected ISI to the constant threshold $\tau$. Due to the time-varying nature of the channel impulse response, the expected ISI is \emph{also time-varying}. The decision rule is
\begin{equation}
\bitObs{l} = \left\{
\begin{array}{rl}
1 & \text{if} \quad
\maxSampleAdaptive{l} \ge \tau,\\
0 & \text{otherwise},
\end{array} \right.
\end{equation}
where $\maxSampleAdaptive{l}$ is the maximum adaptive observation, found as
\begin{equation}
\maxSampleAdaptive{l} = \max_{j \in \{lM-\delta,\ldots,(l+1)M-\delta\}} \left(\obsSignal{j} - \obsSignalExpISI{j}\right),
\end{equation}
and
\begin{equation}
\obsSignalExpISI{j} = \sum_{n=0}^{l-1} \bitObs{n}\obsSignalOneExp{j+\delta}
\label{eqn_max_isi}
\end{equation}
is the average ISI expected by the receiver in the $j$th observation, conditioned on the receiver's previously-decided bits $\bitObs{n}, n \in \{0,1,\ldots,l-1\}$. We emphasize that the receiver believes that its offset with the transmitter is $\delta=0$, such that $j+\delta=k$ in (\ref{eqn_max_isi}). This detector is more complex to implement than the simple asynchronous detector, since the receiver must have knowledge of $\obsSignalOneExp{\cdot}$ and be able to subtract terms of $\obsSignalOneExp{\cdot}$ from individual observations according to previous bit decisions.
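A sketch of the adaptive rule: the receiver subtracts the ISI it expects (from its own past decisions, under its assumption $\delta = 0$) from each sample before taking the maximum. As before, `zbar` denotes the tabulated single-release mean response, and the $l$th window is taken as the $M$ samples starting at $j_{\mathrm{start}} = lM$:

```python
def expected_isi(decided_bits, zbar, j_start, M):
    """Expected ISI in the M samples starting at j_start, based on the
    receiver's previous decisions and its assumption delta = 0."""
    return [sum(b * zbar[j - n * M]
                for n, b in enumerate(decided_bits)
                if 0 <= j - n * M < len(zbar))
            for j in range(j_start, j_start + M)]

def detect_bit_df(samples, isi, tau):
    """Decide 1 iff the largest ISI-corrected sample reaches tau."""
    return 1 if max(s - e for s, e in zip(samples, isi)) >= tau else 0
```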
The corresponding CDF of the $l$th maximum adaptive observation $\maxSampleAdaptive{l}$ is
\begin{equation}
\Pr\{\maxSampleAdaptive{l} \le a\} = \prod_{j = lM-\delta}^{(l+1)M-\delta} \Pr\{\obsSignal{j} - \obsSignalExpISI{j} \le a\}.
\label{eqn_max_adaptive_cdf}
\end{equation}
The average bit error probability of the asynchronous detector with decision feedback can be calculated using (\ref{eqn_simple_async_error}), where $\maxSample{l}$ is replaced with $\maxSampleAdaptive{l}$. Thus, we need the distribution of $\maxSampleAdaptive{l}$. Since $\obsSignalExpISI{\cdot}$ is generally non-integer, we should consider non-integer values of $a$. However, the actual observations $\obsSignal{\cdot}$ are always discrete. So, we let $a + \obsSignalExpISI{j} = a[j]$ and consider the distribution of $\Pr\{\obsSignal{j} \le a[j]\}$.
We distinguish between the conditioning of $\obsSignal{j}$ and $a[j]$. The observation $\obsSignal{j}$, whose mean $\obsSignalExp{j}$ is found via (\ref{eqn_rx_obs_sync_poisson}) using $j=k-\delta$, is conditioned on the \emph{true} transmitter sequence $\mathbf{b}$ as well as the receiver's offset $\delta$ from the transmitter's clock. The value of $a[j]$, found using (\ref{eqn_max_isi}), is conditioned on the receiver's previous decisions $\bitObs{n}, n < l$, and the assumption that $\delta=0$. From (\ref{eqn_poisson_cdf}), the CDF of $\obsSignal{j}$ is
\begin{equation}
\Pr\{\obsSignal{j} \le a[j]\} = \frac{\Gamma(\lfloor a[j]+1\rfloor,\obsSignalExp{j})}{\Gamma(\lfloor a[j]+1\rfloor)},
\label{eqn_obs_adaptive_cdf}
\end{equation}
and $\Pr\{\obsSignal{j} < a[j]\} = \Pr\{\obsSignal{j} \le a[j]\}$ when $a[j]$ is non-integer. Thus, to use (\ref{eqn_max_adaptive_cdf}) to evaluate $\Pr\{\maxSampleAdaptive{l} < \tau\}$, the $j$th observation CDF might be found using $\Pr\{\obsSignal{j} \le \tau+\obsSignalExpISI{j}-1\}$ or $\Pr\{\obsSignal{j} \le \tau+\obsSignalExpISI{j}\}$, depending on whether $\tau+\obsSignalExpISI{j}$ is discrete or non-integer, respectively.
\subsection{Energy Detector with Decision Feedback}
\label{sec_rx_ed}
We considered an energy detector with decision feedback in \cite{Noel2014e}, but we did not derive its expected bit error probability. The derivation is similar to that in Section~\ref{sec_rx_async_df}, where we observe the receiver's $l$th adaptive sum $\sumAdaptive{l}$ instead of its $l$th maximum adaptive observation $\maxSampleAdaptive{l}$, i.e.,
\begin{equation}
\sumAdaptive{l} = \sum_{j=lM-\delta}^{(l+1)M-\delta} (\obsSignal{j} - \obsSignalExpISI{j}),
\end{equation}
where $\obsSignalExpISI{j}$ has the same form as in (\ref{eqn_max_isi}), and the error probability is calculated using (\ref{eqn_simple_async_error}), where $\maxSample{l}$ is replaced with $\sumAdaptive{l}$. The CDF of $\sumAdaptive{l}$ has the same form as (\ref{eqn_obs_adaptive_cdf}), i.e.,
\begin{equation}
\Pr\{\sumAdaptive{l} \le a\} = \frac{\Gamma(\lfloor a[l]+1\rfloor,\sum_j\obsSignalExp{j})}{\Gamma(\lfloor a[l]+1\rfloor)},
\label{eqn_ed_adaptive_cdf}
\end{equation}
where $a[l] = a + \sum_j \obsSignalExpISI{j}$ and this term could be discrete or non-integer.
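The energy-detector counterpart replaces the maximum with a sum over the interval; a minimal sketch of the decision rule, with `samples` and `isi` as in the peak-detector sketches:

```python
def detect_bit_ed_df(samples, isi, tau):
    """Energy detector with decision feedback: compare the ISI-corrected
    sum of all samples in the interval against tau."""
    return 1 if sum(samples) - sum(isi) >= tau else 0
```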
\section{Numerical and Simulation Results}
\label{sec_results}
In this section, we evaluate the bit error performance of the proposed asynchronous detector in comparison with existing detectors. We consider both the simple variants and the adaptive variants with decision feedback. The analytical expressions are verified by comparing with particle-based simulations executed in the AcCoRD simulator; see \cite{Noel2016}.
We consider an unbounded 3D environment with a point transmitter and a passive spherical receiver of radius $r_\textnormal{RX}$ that is centered at a distance $d$ from the transmitter. When the transmitter releases a molecule at time $t=0$, the probability that the molecule is inside the receiver for the $k$th observation is accurately approximated as \cite[Eq.~(3.5)]{Crank1979}
\begin{equation}
\probSingleObs{k} = \frac{V_\textnormal{RX}}{(4\pi D k\Delta t)^{\frac{3}{2}}}\EXP{-\frac{d^2}{4D k\Delta t}},
\label{eqn_cir}
\end{equation}
where $D$ is the constant diffusion coefficient and $V_\textnormal{RX}$ is the receiver volume.
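As a numerical check of (\ref{eqn_cir}) with the parameters of Table~\ref{table_param}, the following sketch locates the discrete peak of the expected response; the $1\,\textnormal{m}\textnormal{s}$ sampling matches Fig.~\ref{fig_cir}. The peak lands near the analytical $d^2/(6D)\approx 41.7\,\textnormal{m}\textnormal{s}$, and the expected peak count is about 6 molecules, consistent with the figure.

```python
import math

def channel_prob(k, dt, d, D, r_rx):
    """Probability that one molecule is inside the passive spherical
    receiver at observation k (uniform-concentration approximation,
    valid for d much larger than r_rx)."""
    if k == 0:
        return 0.0
    t = k * dt
    v_rx = 4.0 / 3.0 * math.pi * r_rx ** 3
    return (v_rx / (4.0 * math.pi * D * t) ** 1.5
            * math.exp(-d ** 2 / (4.0 * D * t)))

# Table I parameters: d = 5 um, D = 1e-10 m^2/s, r_rx = 0.5 um, N = 2e4.
dt, d, D, r_rx, N = 1e-3, 5e-6, 1e-10, 0.5e-6, 2e4
peak_k = max(range(1, 201), key=lambda k: channel_prob(k, dt, d, D, r_rx))
peak_molecules = N * channel_prob(peak_k, dt, d, D, r_rx)
```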
For evaluation, we consider the system parameters that are summarized in Table~\ref{table_param}. The transmitter modulates a sequence of $L=20$ binary symbols that are generated randomly and with equal probability. $N=2\times10^4$ molecules are released for each bit-1. The corresponding average channel response from a single bit-1 is plotted in Fig.~\ref{fig_cir} on page~\pageref{fig_cir}. The simulation is repeated $10^3$ times, so there are a total of $2\times10^4$ transmitted bits. The expected average error probability $\overline{P}_\textnormal{e}$ is calculated from the bit sequences that were simulated; for each sequence, we calculate the expected error probability of every bit, and then we take the average over all bits and all sequences.
\begin{table}[!tb]
\centering
\caption{Simulation system parameters.}
{\renewcommand{\arraystretch}{1.4}
\begin{tabular}{l|c|c||c}
\hline
\bfseries Parameter & \bfseries Symbol & \bfseries Units& \bfseries Value \\ \hline \hline
RX Radius & $r_\textnormal{RX}$ & $\mu\textnormal{m}$
& 0.5 \\ \hline
Molecules Released & $N{}$ & $\textnormal{mol}$
& $2\times10^4$ \\ \hline
Sequence Length & $L{}$ & -
& 20 \\ \hline
Distance to RX & $d{}$ & $\mu\textnormal{m}$
& 5 \\ \hline
Diffusion Coeff.
& $D{}$ & $\textnormal{m}^2/\textnormal{s}$
& $10^{-10}$ \\ \hline
Sampling Period & $\Delta t$ & $\textnormal{m}\textnormal{s}$
& $\{8,40\}$ \\ \hline
Symbol Period & $M\Delta t$ & $\textnormal{m}\textnormal{s}$
& 200 \\ \hline
\# of Realizations & - & -
& $10^3$ \\ \hline
\end{tabular}
}
\label{table_param}
\end{table}
Overall, the simulation and analytical curves in this section agree very well, and slight deviations (particularly at lower error probabilities) are primarily because we calculated the \emph{average} error probability of every sequence but only simulated each sequence \emph{once}. We also note that the bit error probabilities observed throughout this section are relatively high (i.e., generally above $10^{-3}$). This was deliberately imposed so that we could accurately observe meaningful bit error probabilities from $2\times10^4$ bits for all detectors considered. Specifically, the symbol period of $200\,\textnormal{m}\textnormal{s}$ is not significantly larger than the expected peak time; from Fig.~\ref{fig_cir}, the average number of molecules expected after $200\,\textnormal{m}\textnormal{s}$ is still about one third the number expected at the expected peak time of $d^2/(6D)\approx40\,\textnormal{m}\textnormal{s}$. Furthermore, the individual observations are relatively small; the peak of the expected channel response is only 6 molecules. Methods to achieve lower bit error probabilities include using a larger symbol period and (as we will see in Section~\ref{sec_results_samples}) sampling more often.
\subsection{Error Versus Threshold}
First, we consider the case where the receiver sampling is synchronized with the symbol intervals of the transmitter, i.e., $\delta = 0$. We vary the constant (baseline) decision threshold $\tau$ and measure the corresponding average bit error probability $\overline{P}_\textnormal{e}$ for the detectors considered. We compare the detectors in Fig.~\ref{fig_error_vs_thresh_40ms}. As expected, every curve shows an increasing error probability with sufficiently high and sufficiently low $\tau$, such that there exists an optimal threshold that balances the correct detection of bit-1 versus the correct detection of bit-0. Analytical derivations for approximating the optimal threshold of the existing detectors were presented in \cite{Tepekule2015a,Ahmadzadeh2015a}.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{fig_0031_error_vs_thresh_40ms.pdf}
\caption{Average probability of error for different threshold $\tau$. The sampling period is $\Delta t = 40\,$ms. The asynchronous (Async.) detector is compared with the single sample detector and the energy detector (ED). Asynchronous and energy detectors are also considered with decision feedback (DF).}
\label{fig_error_vs_thresh_40ms}
\end{figure}
In Fig.~\ref{fig_error_vs_thresh_40ms}, we consider a sampling period of $\Delta t = 40\,\textnormal{m}\textnormal{s}$ (i.e., there are $M=5$ samples per bit). The single sample detector uses the 1st sample in every interval for detection. The best error probability observed for the single sample detector, which is above $0.09$, is higher than that for the asynchronous detector, which is less than $0.07$. This improvement comes at the cost of taking the maximum of all 5 samples, but the key advantage is that the asynchronous detector did not need to know \emph{when} the maximum sample was expected. Additional performance improvement is observed by adding decision feedback (DF) to the asynchronous detector, which decreases the minimum error probability to below $0.05$. This adaptive asynchronous detector is actually better than the (non-adaptive) energy detector. However, adding decision feedback to the energy detector leads to a significant improvement, such that its minimum error probability drops to about $0.008$. Overall, in the synchronized communication case, the performance of the asynchronous detector ranks between that of the single sample detector and that of the energy detector.
\subsection{Error Versus Receiver Clock Offset}
\label{sec_results_offset}
Second, we vary the timing offset $\delta$ and measure the corresponding minimum bit error probability. For a given offset, we use the analytical expression to numerically find the threshold $\tau$ that gives the minimum expected error probability, and then we measure the average error probability from the simulations using that threshold. For simplicity, we assume that the signal $\obsSignal{j}$ has value zero beyond the intended sampling interval, i.e., $\obsSignal{j} = 0, j \notin \{0,1,\ldots,ML-1\}$. For ease of comparison, and to demonstrate lower error probabilities for the asynchronous detectors, the results are presented in Figs.~\ref{fig_error_vs_offset_8ms} and \ref{fig_error_vs_offset_40ms} for sampling intervals of $\Delta t=8\,\textnormal{m}\textnormal{s}$ and $\Delta t=40\,\textnormal{m}\textnormal{s}$, respectively. Furthermore, we only consider integer values of $\delta$.
In Fig.~\ref{fig_error_vs_offset_8ms}, we consider sample offset $\delta \in [-6,15]$, i.e., the offset between the receiver's and transmitter's clocks is between $-48\,\textnormal{m}\textnormal{s}$ and $120\,\textnormal{m}\textnormal{s}$. The performance of the single sample detector is highly sensitive to positive $\delta$, because the expected peak time is relatively early in the symbol interval. When the offset is $\ge 40\,\textnormal{m}\textnormal{s}$, this detector is no longer sampling in the intended symbol interval and its performance is very poor. The asynchronous detectors (both with and without decision feedback) are much more resilient to positive offset $\delta$ and demonstrate a slow performance decay as $\delta$ increases. Interestingly, these detectors actually \emph{improve} with a small negative $\delta$, because in the first few observations of a symbol interval when $\delta=0$ we are more likely to see molecules due to the previous symbol than due to the current symbol (see Fig.~\ref{fig_cir}). However, performance rapidly degrades as $\delta$ decreases further and the receiver's largest observation is more likely to include molecules that were released for the following symbol.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{fig_0031_error_vs_offset_8ms.pdf}
\caption{Average probability of error at optimal threshold for different sampling offsets $\delta$. The sampling period is $\Delta t = 8\,$ms. The asynchronous detectors are less sensitive to positive $\delta$ than the single sample detector.}
\label{fig_error_vs_offset_8ms}
\end{figure}
In Fig.~\ref{fig_error_vs_offset_40ms}, we consider sample offset $\delta \in [-2,5]$, i.e., the offset between the receiver's and transmitter's clocks is between $-40\,\textnormal{m}\textnormal{s}$ and $200\,\textnormal{m}\textnormal{s}$. For negative $\delta$, the performance of all detectors degrades significantly. However, the asynchronous detector with decision feedback is the \emph{least sensitive} to positive $\delta$, and it even performs \emph{better} than the energy detector with decision feedback when the offset is greater than $100\,\textnormal{m}\textnormal{s}$ (albeit with an error probability above $0.1$). This is because the energy detector is including a lot of energy from the ``tail'' of the previous symbol in its bit decision, whereas the asynchronous detector is only comparing the value of the largest (adaptive) sample and this maximum is more likely to come from the intended symbol interval (e.g., consider the left half vs the right half of Fig.~\ref{fig_cir}).
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{fig_0031_error_vs_offset_40ms.pdf}
\caption{Average probability of error at the optimal threshold for different sampling offsets $\delta$. The sampling period is $\Delta t = 40\,$ms. The asynchronous detector with decision feedback is less sensitive to positive $\delta$ than the energy detectors.}
\label{fig_error_vs_offset_40ms}
\end{figure}
\subsection{Error Versus Sampling Period}
\label{sec_results_samples}
Finally, we explicitly study the impact of the number of samples $M$ on the error probability, where the minimum probability for a given number of samples is determined using the same method as that described for a given offset in Section~\ref{sec_results_offset}. Here, we set the offset to $\delta = 0$ and consider $M\in\{2,5,10,25,50\}$. The results are shown in Fig.~\ref{fig_error_vs_samples}. For $M>5$, the performance of the single sample detector does not improve since the sampling time stays the same. All other detectors, including the asynchronous detectors, improve with increasing $M$ over the range considered. The energy detector with decision feedback actually improves by many orders of magnitude, although in practice detector performance would eventually be limited by sample dependence.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{fig_0031_error_vs_samples.pdf}
\caption{Average probability of error at the optimal threshold for different numbers of samples per interval $M$. The sampling period $\Delta t$ is $200\,\textnormal{m}\textnormal{s}/M$. All detectors except the single sample detector improve with increasing $M$. Error probabilities for the energy detector with decision feedback are not shown for $M > 10$ because only $2\times10^4$ bits were simulated.}
\label{fig_error_vs_samples}
\end{figure}
\section{Conclusions}
\label{sec_concl}
In this paper, we proposed and analyzed the asynchronous peak detector, both with and without decision feedback. Both variants demonstrate resilience to timing offsets between the transmitter and receiver, and can outperform existing detectors. Future work in this area includes: 1) modeling sample dependence in the distribution of the largest observation; and 2) optimizing the symbol sampling window.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
Automatic evaluation of natural language generation (NLG) is a complex task due to multiple acceptable outcomes.
Apart from manual human evaluation, most recent works in NLG are evaluated using word-overlap-based metrics such as BLEU \cite{gkatzia_snapshot_2015}, which compute similarity against gold-standard human references. However, high quality human references are costly to obtain, and for most word-overlap metrics, a minimum of 4 references are needed in order to achieve reliable results \cite{finch2004does}.
Furthermore, these metrics tend to perform poorly at segment level \cite{lavie_meteor:_2007,chen_systematic_2014,novikova:2017}.
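To illustrate why reference-based word-overlap scores are unstable with few references, consider the clipped unigram precision, i.e., the 1-gram component of BLEU without the brevity penalty. The sketch below and its example sentences are fabricated for illustration:

```python
from collections import Counter

def unigram_precision(candidate, references):
    """Clipped 1-gram precision of `candidate` against `references`."""
    cand = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand.items())
    return clipped / max(1, sum(cand.values()))
```

A perfectly acceptable paraphrase can score 0.0 against a single reference and 1.0 once a matching reference is added, which is why several references are needed for stable scores.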
We present a novel approach to assessing NLG output quality without human references, focusing on segment-level (utterance-level) quality assessments.\footnote{%
In our data, a ``segment'' refers to an utterance generated by an NLG system in the context of a human-computer dialogue, typically 1 or 2 sentences in length (see Section~\ref{sec:dataset}).
We estimate the utterance quality without taking the dialogue context into account. Assessing the appropriateness of responses in context is beyond the scope of this paper, see e.g.\ \cite{Liu:EMNLP2016,lowe_towards_2017,curry:2017}.}
We train a recurrent neural network (RNN) to estimate the quality of an NLG output based on comparison with the source meaning representation (MR) only.
This allows NLG quality to be assessed efficiently not only during system development, but also at runtime, e.g.\ for optimisation, reranking, or compensating low-quality output by rule-based fallback strategies.
To evaluate our method, we use crowdsourced human quality assessments of real system outputs from three different NLG systems on three datasets in two domains.
We also show that adding fabricated data with synthesised errors to the training set increases relative performance by 21\% (as measured by Pearson correlation).
In contrast to recent advances in referenceless quality estimation (QE) in other fields such as machine translation (MT) \cite{bojar_findings_2016} or grammatical error correction
\cite{napoles_theres_2016}, NLG QE is more challenging because (1) diverse realisations of a single MR are often acceptable (as the MR is typically a limited formal language); (2) human perception of NLG quality is highly variable, e.g. \cite{dethlefs2014cluster};
(3) NLG datasets are costly to obtain and thus small in size.
Despite these difficulties, we achieve promising results -- correlations with human judgements achieved by our system stay in a somewhat lower range than those achieved e.g.\ by state-of-the-art MT QE systems \cite{bojar_findings_2016}, but they significantly outperform word-overlap metrics.
To our knowledge, this is the first work in trainable NLG QE without references.
\begin{figure}[tb]
\centering{
\includegraphics[width=\columnwidth]{figs/model.pdf}
}
\vspace{-0.3cm}
\caption{The architecture of our referenceless NLG QE model.}\label{fig:model}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.2cm}
\section{Our Model}\label{sec:model}
\vspace{-0.1cm}
We use a simple RNN model based on Gated Recurrent Units (GRUs) \cite{cho_learning_2014}, composed of one GRU encoder each for the source MR and the NLG system output to be rated, followed by fully connected layers operating over the last hidden states of both encoders. The final classification layer is linear and produces the quality assessment as a single floating point number (see Figure~\ref{fig:model}).\footnote{The meaning of this number depends entirely on the training data. In our case, we use a 1--6 Likert scale assessment (see Section~\ref{sec:dataset}), but we could, for instance, use the same network to predict the required number of post-edits, as commonly done in MT (see Section~\ref{sec:related}).}
The model assumes both the source MR and the system output to be sequences of tokens $x^M = \{x_1^M,\dots,x_m^M\}$, $x^S = \{x_1^S,\dots,x_n^S\}$, where each token is represented by its embedding \cite{bengio_neural_2003}.
The GRU encoders encode these sequences left-to-right into sequences of hidden states $\{h^M_t\}_{t=1}^m$, $\{h^S_t\}_{t=1}^n$:
\begin{align}
h^M_t &= \mbox{gru}(x^M_t,h^M_{t-1}) \label{eq:gruM}\\
h^S_t &= \mbox{gru}(x^S_t,h^S_{t-1}) \label{eq:gruS}
\end{align}
The final hidden states $h^M_m$ and $h^S_n$ are then fed to a set of fully connected $\tanh$ layers $z_i, i\in \{0\dots k\}$:
\begin{align}
z_0 &= \tanh (W_0(h^M_m \circ h^S_n)) \label{eq:ff0}\\
z_i &= \tanh (W_i z_{i-1}) \label{eq:ffi}
\end{align}
The final prediction is given by a linear layer:
\begin{equation}
\hat y = W_{k+1} z_k \label{eq:final}
\end{equation}
In (\ref{eq:ff0}--\ref{eq:final}), $W_0\dots W_k$ stand for square weight matrices and $W_{k+1}$ is a weight vector.
The network is trained in a supervised setting by minimising mean square error against human-assigned quality scores on the training data (see Section \ref{sec:experiments}).
Embedding vectors are initialised randomly and learned during training; each token found in the training data is given an embedding dictionary entry.
Dropout \cite{hinton_improving_2012} is applied on the inputs to the GRU encoders for regularisation.
The floating point values predicted by the network are rounded to the precision and clipped to the range found in the training data.\footnote{We use a precision of 0.5 and the 1--6 range (see Section~\ref{sec:dataset}).}
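The forward pass in Eqs.~(\ref{eq:gruM})--(\ref{eq:final}), together with the final clipping and rounding, can be sketched in a few lines of plain Python. This is an illustrative scalar version (in the actual model, tokens are 300-dimensional embeddings and the $W_i$ are matrices); the function names and all weight values below are our placeholders:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    # One GRU update (Eqs. 1-2), shown for scalar input and hidden state.
    z = sigmoid(w["wz"] * x + w["uz"] * h)          # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)          # reset gate
    c = math.tanh(w["wc"] * x + w["uc"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * c

def encode(seq, w):
    # Left-to-right encoding; only the final hidden state is kept.
    h = 0.0
    for x in seq:
        h = gru_step(x, h, w)
    return h

def predict(mr, out, w_mr, w_out, w0, w1):
    # z_0 = tanh(W_0 applied to the concatenated final hidden states),
    # followed by a linear output layer (Eqs. 3-5, with k = 1 here).
    h_m, h_n = encode(mr, w_mr), encode(out, w_out)
    z0 = math.tanh(w0[0] * h_m + w0[1] * h_n)
    return w1[0] * z0 + w1[1]

def quantise(y, lo=1.0, hi=6.0, step=0.5):
    # Clip to the 1-6 range and round to the 0.5 precision of the training data.
    y = min(max(y, lo), hi)
    return round(y / step) * step
```

The same structure carries over to the vector case by replacing the scalar products with matrix multiplications and the concatenation $h^M_m \circ h^S_n$ with an actual vector concatenation.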
\subsection{Model Variants}
We also experimented with several variants of the model which performed similarly or worse than those presented in Section~\ref{sec:results}.
We list them here for completeness:
\begin{itemize}[itemsep=0pt,topsep=0pt]
\item replacing GRU cells with LSTM \cite{hochreiter_long_1997},
\item using word embeddings pretrained by the word2vec tool \cite{mikolov_efficient_2013} on Google News data,\footnote{We used the model available at \url{https://code.google.com/archive/p/word2vec/}.}
\item using a set of independent binary classifiers, each predicting one of the individual target quality levels (see Section~\ref{sec:dataset}),
\item using an ordered set of binary classifiers trained to predict 1 for NLG outputs above a specified quality level, 0 below it,\footnote{The predicted value was interpolated from classifiers' predictions of the positive class probability.}
\item pretraining the network using a different task (classifying MRs or predicting next word in the sentence).
\end{itemize}
\section{Experimental Setup}\label{sec:experiments}
In the following, we describe the data we use to evaluate our system, our method for data augmentation, detailed parameters of our model, and evaluation metrics.
\subsection{Dataset}\label{sec:dataset}
\begin{table}[tb]
\begin{center}
\begin{tabular}{crrrr}\hline
\bf System$\downarrow$ Data$\to$ & BAGEL & SFRest & SFHot & Total\Tstrut\Bstrut \\\hline
LOLS & 202 & 581 & 398 & 1,181 \\
RNNLG & - & 600 & 477 & 1,077 \\
TGen & 202 & - & - & 202 \\\hdashline[0.5pt/2pt]
Total & 404 & 1,181 & 875 & 2,460\Tstrut\Bstrut \\\hline
\end{tabular}
\end{center}
\caption{Number of ratings from different source datasets and NLG systems in our data.}\label{tab:data-stats}
\end{table}
\newcommand{\rawquant}[2]{\multirow{3}{*}{\shortstack[c]{#1 \\ \it #2}}}
\begin{figure*}
\begin{center}\small
\begin{tabular}{llcccccc}\hline
\multicolumn{2}{c}{\bf Instance} & \bf H & \bf B & \bf M & \bf R & \bf C & \bf \ref{li:Ftonly}\Tstrut\Bstrut \\\hline
\Tstrut\bf MR & inform(name=`la ciccia',area=`bernal heights',price\_range=moderate)\hspace{-0.5cm}
& \rawquant{5.5}{\phantom{0}} & \rawquant{1}{0.000} & \rawquant{3}{0.371} & \rawquant{3.5}{0.542} & \rawquant{2}{2.117} & \rawquant{4.5}{\phantom{0}} \\
\bf Ref & la ciccia is a moderate price restaurant in bernal heights \\
\bf\Bstrut Out & la ciccia, is in the bernal heights area with a moderate price range. \\\hdashline[0.5pt/2pt]
\Tstrut\bf MR & inform(name=`intercontinental san francisco',price\_range=`pricey')
& \rawquant{2}{\phantom{0}} & \rawquant{4.5}{0.707} & \rawquant{3}{0.433} & \rawquant{5.5}{0.875} & \rawquant{2}{2.318} & \rawquant{5}{\phantom{0}} \\
\bf Ref & sure, the intercontinental san francisco is in the pricey range. \\
\bf Out & the intercontinental san francisco is in the pricey price range. \\\hline
\end{tabular}
\end{center}
\caption{Examples from our dataset. \emph{Ref} = human reference (one selected), \emph{Out} = system output to be rated. \emph{H} = median human rating, \emph{B} = BLEU, \emph{M} = METEOR, \emph{R} = ROUGE, \emph{C} = CIDEr, \emph{\ref{li:Ftonly}} = rating given by our system in the \ref{li:Ftonly} configuration. The metrics are shown with normalised and rounded values (see Section~\ref{sec:eval-measures}) on top and the original, raw values underneath (in italics). The top example is rated low by all metrics, our system is more accurate. The bottom one is rated low by humans but high by some metrics and our system.}\label{fig:data-examples}
\end{figure*}
Using the CrowdFlower crowdsourcing platform,\footnote{\url{http://www.crowdflower.com}} we collected a dataset of human rankings for outputs of three recent data-driven NLG systems as provided to us by the systems' authors; see \cite{novikova:2017} for more details.
The following systems are included in our set:
\begin{itemize}[itemsep=0pt,topsep=0pt]
\item LOLS \cite{lampouras_imitation_2016}, which is based on imitation learning,
\item RNNLG \cite{wen_semantically_2015}, a RNN-based system,
\item TGen \cite{dusek_training_2015}, a system using perceptron-guided incremental tree generation.
\end{itemize}
Their outputs are on the test parts of the following datasets (see Table~\ref{tab:data-stats}):
\begin{itemize}[itemsep=0pt,topsep=0pt]
\item BAGEL \cite{mairesse_phrase-based_2010} -- 404 short text segments (1--2 sentences) informing about restaurants,
\item SFRest \cite{wen_semantically_2015} -- ca.~5,000 segments from the restaurant information domain (including questions, confirmations, greetings, etc.),
\item SFHot \cite{wen_semantically_2015} -- a set from the hotel domain similar in size and contents to SFRest.
\end{itemize}
During the crowdsourcing evaluation of the outputs, crowd workers were given two random system outputs along with the source MRs and were asked to evaluate the absolute overall quality of both outputs on a 1--6 Likert scale (see Figure~\ref{fig:data-examples}).
We collected at least three ratings for each
system output; this resulted in more ratings for the same sentence if two systems' outputs were identical.
The ratings show a moderate inter-rater agreement of 0.45 ($p<0.001$) intra-class correlation coefficient \cite{landis1977measurement} across all three datasets.
To obtain more consistent scores, we computed the medians of the three (or more) ratings in our experiments, which resulted in .5 ratings for some examples. We keep this granularity throughout our experiments.
We use our data in a 5-fold cross-validation setting (three training, one development, and one test part in each fold).
We also test our model on a subset of ratings for a particular NLG system or dataset in order to assess its cross-system and cross-domain performance (see Section~\ref{sec:results}).
\subsection{Data Preprocessing}\label{sec:preprocessing}
The source MRs in our data are variants of the dialogue acts (DA) formalism \cite{young_hidden_2010} -- a shallow domain-specific MR, consisting of the main DA type (\emph{hello}, \emph{inform}, \emph{request}) and an optional set of slots (attributes, such as \emph{food} or \emph{location}) and values (e.g.\ \emph{Chinese} for food or \emph{city centre} for location).
DAs are converted into sequences for our system as a list of triplets ``DA type -- slot -- value'', where DA type may be repeated multiple times and/or special null tokens are used if slots or values are not present (see Figure~\ref{fig:model}).
The system outputs are tokenised and lowercased for our purposes.
We use delexicalisation to prevent data sparsity, following \cite{mairesse_phrase-based_2010,henderson_robust_2014,wen_semantically_2015},
where values of most DA slots (except unknown and binary \emph{yes/no} values) are replaced in both the source MRs and the system outputs by slot placeholders -- e.g.\ \emph{Chinese} is replaced by \emph{X-food} (cf.\ also Figure~\ref{fig:model}).\footnote{Note that only values matching the source MR are delexicalised in the system outputs -- if the outputs contain values not present in the source MR, they are kept intact in the model input.}
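A minimal sketch of this DA linearisation and delexicalisation (the function names, the \texttt{<null>} token spelling, and the naive comma-splitting parser are our illustrative choices; slot values containing commas would need a real parser):

```python
import re

def parse_da(da):
    """Parse "inform(food=Chinese,area='city centre')" into (act, [(slot, value), ...]).

    A deliberately naive parser, for illustration only."""
    m = re.match(r"(\w+)(?:\((.*)\))?$", da)
    act, body = m.group(1), m.group(2)
    slots = []
    if body:
        for pair in body.split(","):
            slot, _, value = pair.partition("=")
            slots.append((slot.strip(), value.strip().strip("'")))
    return act, slots

def delex_and_sequence(da, output):
    """Build the "DA type -- slot -- value" triplets and delexicalise both sides."""
    act, slots = parse_da(da)
    out = output.lower()
    if not slots:
        return [(act, "<null>", "<null>")], out
    seq = []
    for slot, value in slots:
        placeholder = "X-" + slot
        if value:
            # only delexicalise values that actually match the source MR;
            # unmatched material in the output is kept intact
            if value.lower() in out:
                out = out.replace(value.lower(), placeholder)
            seq.append((act, slot, placeholder))
        else:
            seq.append((act, slot, "<null>"))
    return seq, out
```

For example, `inform(food=Chinese,area='city centre')` with the output "This is a Chinese place in the city centre." yields the triplets `(inform, food, X-food)` and `(inform, area, X-area)` and a delexicalised output with both values replaced by their placeholders.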
\subsection{Synthesising Additional Training Data}\label{sec:synth-data}
\begin{table}
\begin{center}
\begin{tabular}{lc}\hline
\bf Setup & \bf Instances\Tstrut\Bstrut\\\hline
\ref{li:base} & \phantom{0}1,476 \\
\ref{li:fs} & \phantom{0}3,937 \\
\ref{li:fts} & 13,442 \\
\ref{li:Ftonly} & 45,137 \\
\ref{li:Ftrain} & 57,372 \\
\ref{li:Fall} & 80,522 \\\hline
\end{tabular}
\end{center}
\caption{Training data size comparison for different data augmentation procedures (varies slightly in different cross-validation folds due to rounding).}\label{tab:data-size}
\vspace{-0.3cm}
\end{table}
Following prior works in grammatical error correction \cite{rozovskaya_ui_2012,felice_generating_2014,xie_neural_2016}, we synthesise additional training instances by introducing artificial errors:
Given a training instance (source MR, system output, and human rating), we generate a number of errors in the system output and lower the human rating accordingly.
We use a set of basic heuristics mimicking some of the observed system behaviour to introduce errors into the system outputs:\footnote{To ensure that the errors truly are detrimental to the system output quality, our rules prioritise content words, i.e.\ they do not change articles or punctuation if other words are present. The rules never remove the last word left in the system output.}
\begin{enumerate}[itemsep=0pt,topsep=0pt]
\item removing a word,
\item duplicating a word at its current position,
\item duplicating a word at a random position,
\item adding a random word from a dictionary learned from all training system outputs,
\item replacing a word with a random word from the dictionary.
\end{enumerate}
We lower the original Likert scale rating of the instance by 1 for each generated error. If the original rating was 5.5 or 6, the rating is lowered to 4 with the first introduced error and by 1 for each additional error.
We also experiment with using additional natural language sentences where human ratings are not available -- we use human-authored references from the original training datasets and assume that these would receive the maximum rating of 6.
We introduce artificial errors into the human references in exactly the same way as with training system outputs.
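The synthesis procedure can be sketched as follows. This is a simplified version: it omits the content-word prioritisation described in the footnote, silently skips a removal when only one word is left, and floors ratings at 1 (our assumption); the operation names are ours:

```python
import random

def corrupt(tokens, vocab, n_errors, rng):
    """Apply n_errors random edits mimicking heuristics 1-5 above."""
    ops = ["remove", "dup_here", "dup_random", "add_random", "replace"]
    tokens = list(tokens)
    for _ in range(n_errors):
        op = rng.choice(ops)
        i = rng.randrange(len(tokens))
        if op == "remove":
            if len(tokens) > 1:          # never remove the last remaining word
                tokens.pop(i)
        elif op == "dup_here":
            tokens.insert(i, tokens[i])
        elif op == "dup_random":
            tokens.insert(rng.randrange(len(tokens) + 1), tokens[i])
        elif op == "add_random":
            tokens.insert(rng.randrange(len(tokens) + 1), rng.choice(vocab))
        else:                            # replace with a random dictionary word
            tokens[i] = rng.choice(vocab)
    return tokens

def lowered_rating(rating, n_errors):
    """Lower a 1-6 Likert rating by 1 per error; 5.5/6 drops to 4 with the first error."""
    if n_errors == 0:
        return rating
    if rating >= 5.5:
        rating, n_errors = 4.0, n_errors - 1
    return max(1.0, rating - n_errors)
```

Each synthetic instance thus keeps its source MR, receives a corrupted output, and is paired with the correspondingly lowered rating.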
\subsection{Model Training Parameters}\label{sec:training-params}
We set the network parameters based on several experiments performed on the development set of one of the cross-validation folds (see Section~\ref{sec:dataset}).\footnote{We use embedding size 300, learning rate 0.0001, dropout probability 0.5, and two fully connected $\tanh$ layers ($k=2$).}
We train the network for 500 passes over the training data, checking Pearson and Spearman correlations on the validation set after each pass (with equal importance). We keep the configuration that yielded the best correlations overall.
For setups using synthetic training data (see Section~\ref{sec:synth-data}), we first perform 20 passes over all data including synthetic, keeping the best parameters, and then proceed with 500 passes over the original data.
To compensate for the effects of random network initialisation, all our results are averaged over five runs with different initial random seeds following \citet{wen_semantically_2015}.
\subsection{Evaluation Measures}\label{sec:eval-measures}
Following practices from MT quality estimation \cite{bojar_findings_2016},\footnote{See also the currently ongoing WMT`17 Quality Estimation shared task at \url{http://www.statmt.org/wmt17/quality-estimation-task.html}.} we use Pearson's correlation of the predicted output quality with median human ratings as our primary evaluation metric. Mean absolute error (MAE), root mean squared error (RMSE), and Spearman's rank correlation are used as additional metrics.
We compare our results to some of the common word-overlap metrics -- BLEU \cite{papineni_bleu_2002}, METEOR \cite{lavie_meteor:_2007}, ROUGE-L \cite{lin_rouge:_2004}, and CIDEr \cite{vedantam_cider:_2015} -- normalised into the 1--6 range of the predicted human ratings and further rounded to 0.5 steps.\footnote{We used the Microsoft COCO Captions evaluation script to obtain the metrics scores \cite{chen_microsoft_2015}. Trials with non-quantised metrics yielded very similar correlations.}
In addition, we also show the MAE/RMSE values for a trivial constant baseline that always predicts the overall average human rating (4.5).
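The error measures and the mapping of raw metric scores onto the rating scale can be sketched in plain Python. Spearman's rank correlation is omitted for brevity, and the min-max normalisation shown here is an assumed form for illustration, not necessarily the exact one used:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mae(pred, gold):
    """Mean absolute error."""
    return sum(abs(p - g) for p, g in zip(pred, gold)) / len(pred)

def rmse(pred, gold):
    """Root mean squared error."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(pred))

def normalise(raw, lo, hi):
    # Map a raw metric score from [lo, hi] onto the 1-6 rating scale
    # (a min-max mapping, assumed here; rounding to 0.5 steps follows).
    return 1.0 + 5.0 * (raw - lo) / (hi - lo)
```

The constant baseline corresponds to calling `mae`/`rmse` with a prediction list that repeats the average human rating.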
\section{Results}\label{sec:results}
\begin{table*}
\begin{center}
\begin{tabular}{lcccc}\hline
\bf Setup & \bf Pearson & \bf Spearman & \bf MAE & \bf RMSE \\\hline
Constant & - & - & 1.013 & 1.233 \\
BLEU* & 0.074 & 0.061 & 2.264 & 2.731 \\
METEOR* & 0.095 & 0.099 & 1.820 & 2.129 \\
ROUGE-L* & 0.079 & 0.072 & 1.312 & 1.674 \\
CIDEr* & 0.061 & 0.058 & 2.606 & 2.935 \\\hdashline[0.5pt/2pt]
\ref{li:base}: Base system & 0.273 & 0.260 & 0.948 & 1.258 \\
\ref{li:fs}: + errors generated in training system outputs & 0.283 & 0.268 & 0.948 & 1.273 \\
\ref{li:fts}: + training references, with generated errors & 0.278 & 0.261 & 0.930 & 1.257 \\
\ref{li:Ftonly}: + systems training data, with generated errors & \bf 0.330 & 0.274 & 0.914 & 1.226 \\\hdashline[0.5pt/2pt]
\ref{li:Ftrain}: + test references, with generated errors* & \bf 0.331 & 0.265 & 0.937 & 1.245 \\
\ref{li:Fall}: + complete datasets, with generated errors* & \bf 0.354 & \bf 0.287 & 0.909 & 1.208 \\\hline
\end{tabular}
\end{center}
\caption{Results using cross-validation over the whole dataset. Setups marked with ``*'' use human references for the test instances. All setups \ref{li:base}--\ref{li:Fall} produce significantly better correlations than all metrics ($p<0.01$). Significant improvements in correlation ($p<0.05$) over \ref{li:base} are marked in bold.}\label{tab:results-cv}
\end{table*}
We test the following configurations that only differ in the amount and nature of synthetic data used (see Section~\ref{sec:synth-data} and Table~\ref{tab:data-size}):
\begin{enumerate}[label=S\arabic*,itemsep=0pt,topsep=0pt]
\item \label{li:base} Base system variant, with no synthetic data.
\item \label{li:fs} Adding synthetic data -- introducing artificial errors into system outputs from the training portion of our dataset (no additional human references are used).
\item \label{li:fts} Same as previous, but with additional human references from the training portion of our dataset (including artificial errors; see Section~\ref{sec:synth-data}).\footnote{As mentioned in Section~\ref{sec:dataset}, our dataset only comprises the test sets of the source NLG datasets, i.e.\ the additional human references in \ref{li:fts} represent a portion of the source test sets. The difference to \ref{li:Ftonly} is the amount of the additional data (see Table~\ref{tab:data-size}).\label{fn:fts-vs-ftonly}}
\item \label{li:Ftonly} As previous, but with additional human references from the training parts of the respective source NLG datasets (including artificial errors), i.e.\ references on which the original NLG systems were trained.\textsuperscript{\ref{fn:fts-vs-ftonly}}
\item \label{li:Ftrain} As previous, but also includes additional human references from the test portion of our dataset (including artificial errors).\footnote{Note that the model still does not have any access at training time to test NLG system outputs or their true ratings.}
\item \label{li:Fall} As previous, but also includes development parts of the source NLG datasets (including artificial errors).
\end{enumerate}
Synthetic data are never created from system outputs in the test part of our dataset.
Note that~\ref{li:base} and~\ref{li:fs} only use the original system outputs and their ratings, with no additional human references. \ref{li:fts}~and~\ref{li:Ftonly} use additional human references (i.e.\ more in-domain data), but do not use human references for the instances on which the system is tested. \ref{li:Ftrain}~and~\ref{li:Fall} also use human references for test MRs, even if not directly, and are thus not strictly referenceless.
\subsection{Results using the whole dataset}\label{sec:cv}
The correlations and error values we obtained over the whole data in a cross-validation setup are shown in Table~\ref{tab:results-cv}.
The correlations remain only moderate for all system variants.
On the other hand, we can see that even the base setup (\ref{li:base}) trained using less than 2,000 examples performs better than all the word-overlap metrics in terms of all evaluation measures. Improvements in both Pearson and Spearman correlations are significant according to the Williams test \cite{williams_regression_1959,kilickaya_re-evaluating_2017} ($p<0.01$). When comparing the base setup against the constant baseline, MAE is lower but RMSE is slightly higher, which suggests that our system does better on average but is prone to occasional large errors.
The results also show that the performance can be improved considerably by adding synthetic data, especially after more than tripling the training data in \ref{li:Ftonly} (Pearson correlation improvements are statistically significant in terms of the Williams test, $p<0.01$). Using additional human references for the test data seems to be helping further in \ref{li:Fall} (the difference in Pearson correlation is statistically significant, $p<0.01$):
The additional references apparently provide more information even though the SFHot and SFRest datasets have similar MRs (identical when delexicalised) in training and test data \cite{lampouras_imitation_2016}.\footnote{Note that unlike in NLG systems training on SFHot and SFRest, the identical MRs in training and test data do not allow our system to merely memorise the training data, as the NLG outputs to be rated are distinct. However, the situation is not 100\% referenceless as the system may have been exposed to other NLG outputs for the same MR. Our preliminary experiments suggest that our systems can also handle lexicalised data well, without any modification (Pearson correlation 0.264--0.359 for \ref{li:base}--\ref{li:Fall}).}
The setups using larger synthetic data further improve MAE and RMSE: \ref{li:Ftonly} and \ref{li:Fall} increase the margin against the constant baseline up to ca.~0.1 in terms of MAE, and both are able to surpass the constant baseline in terms of RMSE.
\subsection{Cross-domain and Cross-System Training}\label{sec:cross-domain}
\begin{table*}[tb]
\begin{center}\small
\begin{tabular}{l cccc cccc cccc}\hline
& \multicolumn{4}{c}{\ref{it:xO}: small in-domain data only} & \multicolumn{4}{c}{\ref{it:xT}: out-of-domain data only} & \multicolumn{4}{c}{\ref{it:xA}: out-of-dom.\ + small in-dom.} \\
& \bf Pear & \bf Spea & \bf MAE & \bf RMSE & \bf Pear & \bf Spea & \bf MAE & \bf RMSE & \bf Pear & \bf Spea & \bf MAE & \bf RMSE \\\hline
Constant & - & - & 0.994 & 1.224 & - & - & 0.994 & 1.224 & - & - & 0.994 & 1.224 \\
BLEU* & 0.033 & 0.016 & 2.235 & 2.710 & 0.033 & 0.016 & 2.235 & 2.710 & 0.033 & 0.016 & 2.235 & 2.710 \\
METEOR* & 0.076 & 0.074 & 1.719 & 2.034 & 0.076 & 0.074 & 1.719 & 2.034 & 0.076 & 0.074 & 1.719 & 2.034 \\
ROUGE-L* & 0.064 & 0.049 & 1.255 & 1.620 & 0.064 & 0.049 & 1.255 & 1.620 & 0.064 & 0.049 & 1.255 & 1.620 \\
CIDEr* & 0.048 & 0.043 & 2.590 & 2.921 & 0.048 & 0.043 & 2.590 & 2.921 & 0.048 & 0.043 & 2.590 & 2.921 \\\hdashline[0.5pt/2pt]
\ref{li:base} & 0.147 & 0.136 & 1.086 & 1.416 & 0.162 & 0.152 & 0.985 & 1.281 & 0.170 & \uline{0.166} & 1.003 & 1.315 \\
\ref{li:fs} & \bf 0.196 & \bf 0.176 & 1.059 & 1.364 & \bf 0.197 & \bf 0.189 & 1.003 & 1.311 & \bf 0.219 & \bf \uline{0.218} & 0.988 & 1.296 \\
\ref{li:fts} & \bf 0.176 & \bf 0.163 & 1.093 & 1.420 & 0.147 & 0.134 & 1.037 & 1.366 & \bf \uline{0.219} & \bf \uline{0.216} & 0.979 & 1.302 \\
\ref{li:Ftonly} & \bf 0.264 & \bf 0.218 & 0.983 & 1.307 & 0.162 & 0.138 & 1.084 & 1.448 & \bf 0.247 & \bf 0.211 & 0.983 & 1.306 \\\hdashline[0.5pt/2pt]
\ref{li:Ftrain}* & \bf 0.280 & \bf 0.221 & 1.009 & 1.341 & 0.173 & 0.145 & 1.077 & 1.438 & \bf 0.210 & 0.162 & 1.095 & 1.442 \\
\ref{li:Fall}* & \bf 0.271 & \bf 0.202 & 0.991 & 1.331 & 0.188 & 0.178 & 1.037 & 1.392 & \bf 0.224 & \bf 0.210 & 1.002 & 1.339 \\\hline
\end{tabular}
\end{center}
\caption{Cross-domain evaluation results. Setups marked with ``*'' use human references of test instances. All setups produce significantly better correlations than all metrics ($p<0.01$). Significant improvements in correlation ($p<0.05$) over the corresponding \ref{li:base} are marked in bold, significant improvements over the corresponding \ref{it:xO} are underlined.}\label{tab:cross-domain}
\end{table*}
\begin{table*}[tb]
\begin{center}\small
\begin{tabular}{l cccc cccc cccc}\hline
& \multicolumn{4}{c}{\ref{it:xO}: small in-system data only} & \multicolumn{4}{c}{\ref{it:xT}: out-of-system data only} & \multicolumn{4}{c}{\ref{it:xA}: out-of-sys.\ + small in-sys.} \\
& \bf Pear & \bf Spea & \bf MAE & \bf RMSE & \bf Pear & \bf Spea & \bf MAE & \bf RMSE & \bf Pear & \bf Spea & \bf MAE & \bf RMSE \\\hline
Constant & - & - & 1.060 & 1.301 & - & - & 1.060 & 1.301 & - & - & 1.060 & 1.301 \\
BLEU* & 0.079 & 0.043 & 2.514 & 2.971 & 0.079 & 0.043 & 2.514 & 2.971 & 0.079 & 0.043 & 2.514 & 2.971 \\
METEOR* & 0.141 & 0.122 & 1.929 & 2.238 & 0.141 & 0.122 & 1.929 & 2.238 & 0.141 & 0.122 & 1.929 & 2.238 \\
ROUGE-L* & 0.064 & 0.048 & 1.449 & 1.802 & 0.064 & 0.048 & 1.449 & 1.802 & 0.064 & 0.048 & 1.449 & 1.802 \\
CIDEr* & 0.127 & 0.106 & 2.801 & 3.112 & 0.127 & 0.106 & 2.801 & 3.112 & 0.127 & 0.106 & 2.801 & 3.112 \\\hdashline[0.5pt/2pt]
\ref{li:base} & \it 0.341 & 0.334 & 1.054 & 1.405 & \it 0.097 & \it 0.117 & 1.052 & 1.336 & 0.174 & 0.179 & 1.114 & 1.455 \\
\ref{li:fs} & \bf 0.358 & 0.345 & 1.007 & 1.342 & \it 0.115 & \it 0.119 & 1.057 & 1.355 & \bf 0.203 & \bf 0.222 & 1.253 & 1.613 \\
\ref{li:fts} & \bf 0.378 & \bf 0.365 & 0.971 & 1.326 & \it 0.112 & \it 0.094 & 1.059 & 1.387 & \bf \uline{0.404} & \bf 0.377 & 0.968 & 1.277 \\
\ref{li:Ftonly} & \bf 0.390 & \bf 0.360 & 0.981 & 1.311 & \bf 0.247 & \bf 0.189 & 1.011 & 1.338 & \bf 0.370 & \bf 0.346 & 0.997 & 1.312 \\\hdashline[0.5pt/2pt]
\ref{li:Ftrain}* & \bf 0.398 & \bf 0.364 & 1.043 & 1.393 & \bf 0.229 & \bf 0.174 & 1.025 & 1.328 & \bf 0.386 & \bf 0.356 & 0.975 & 1.301 \\
\ref{li:Fall}* & \bf 0.390 & 0.353 & 1.036 & 1.389 & \bf 0.332 & \bf 0.262 & 0.969 & 1.280 & \bf 0.374 & \bf 0.330 & 0.979 & 1.298 \\\hline
\end{tabular}
\end{center}
\caption{Cross-system evaluation results. Setups marked with ``*'' use human references of test instances. Setups that do not produce significantly better correlations than all metrics ($p<0.05$) are marked in italics. Significant improvements in correlation ($p<0.05$) over the corresponding \ref{li:base} are marked in bold, significant improvements over the corresponding \ref{it:xO} are underlined.}\label{tab:cross-system}
\end{table*}
Next, we test how well our approach generalises to new systems and datasets and how much in-set data (same domain/system) is needed to obtain reasonable results.
We use the SFHot data as our test domain and LOLS as our test system, and we treat the rest as out-of-set.
We test three different configurations:
\begin{enumerate}[label=C\arabic*,itemsep=0pt,topsep=0pt]
\item \label{it:xO} Training exclusively using a small amount of in-set data (200 instances, 100 reserved for validation), testing on the rest of the in-set.
\item \label{it:xT} Training and validating exclusively on out-of-set data, testing on the same part of the in-set as in \ref{it:xO} and \ref{it:xA}.
\item \label{it:xA} Training on the out-of-set data with a small amount of in-set data (200 instances, 100 reserved for validation), testing on the rest of the in-set.
\end{enumerate}
The results for the cross-domain and cross-system settings are shown in Tables~\ref{tab:cross-domain} and~\ref{tab:cross-system}, respectively.
The correlations of \ref{it:xT} suggest that while our network can generalise across systems to some extent (if data fabrication is used), it does not generalise well across domains without using in-domain training data.
\ref{it:xO}~and \ref{it:xA}~configuration results demonstrate that even small amounts of in-set data help noticeably.
However, if in-set data is used, additional out-of-set data does not improve the results in most cases (\ref{it:xA} is mostly not significantly better than the corresponding \ref{it:xO}).
Except for a few cross-system \ref{it:xT} configurations with low amounts of synthetic data, all systems perform better than word-overlap metrics. However, most setups are not able to improve over the constant baseline in terms of MAE and RMSE.
\section{Related Work}\label{sec:related}
This work is the first NLG QE system to our knowledge; the most related work in NLG is probably the system by \citet{dethlefs2014cluster}, which reranks NLG outputs by estimating their properties (such as colloquialism or politeness) using various regression models.
However, our work is also related to QE research in other areas, such as MT \cite{specia_machine_2010}, dialogue systems \cite{lowe_towards_2017} or grammatical error correction \cite{napoles_theres_2016}.
QE is especially well researched for MT, where regular QE shared tasks are organised \cite{callison-burch_findings_2012,bojar_findings_2013,bojar_findings_2014,bojar_findings_2015,bojar_findings_2016}.
Many of the past MT QE systems participating in the shared tasks are based on Support Vector Regression \cite{specia_multi-level_2015,bojar_findings_2014,bojar_findings_2015}. Only in the past year have NN-based solutions started to emerge.
\citet{patel_translation_2016} present a system based on RNN language models, which focuses on predicting MT quality on the word level.
\citet{kim_recurrent_2016} estimate segment-level MT output quality using a bidirectional RNN over both source and output sentences combined with a logistic prediction layer.
They pretrain their RNN on large MT training data.
Last year's MT QE shared task systems achieve Pearson correlations between 0.4 and 0.5, which is slightly higher than our best results. However, the results are not directly comparable: First, we predict a Likert-scale assessment instead of the number of required post-edits. Second, NLG datasets are considerably smaller than corpora available for MT. Third, we believe that QE for NLG is harder due to the reasons outlined in Section~\ref{sec:intro}.
\section{Conclusions and Future Work}\label{sec:concl}
We presented the first system for referenceless quality estimation of natural language generation outputs.
All code and data used here are available online at:
\begin{center}
\url{https://github.com/tuetschek/ratpred}
\end{center}
In an evaluation spanning outputs of three different NLG systems and three datasets,
our system significantly outperformed four commonly used reference-based metrics.
It also improved over a constant baseline, which always predicts the mean human rating, in terms of MAE and RMSE. The smaller RMSE improvement suggests that our system is prone to occasional large errors.
We have shown that generating additional training data, e.g.\ by using NLG training datasets and synthetic errors, significantly improves the system performance.
While our system can generalise to unseen NLG systems in the same domain to some extent, its cross-domain generalisation capability is poor.
However, very small amounts of in-domain/in-system data improve performance notably.
In future work, we will explore improvements to our error synthesising methods as well as changes to our network architecture (bidirectional RNNs or convolutional NNs).
We also plan to focus on relative ranking of different NLG outputs for the same source MR or predicting the number of post-edits required.
We intend to use data collected within the ongoing E2E NLG Challenge \cite{novikova_e2e_2017}, which promises greater diversity than current datasets.
\section*{Acknowledgements}
The authors would like to thank Lucia Specia and the two anonymous reviewers for their helpful comments.
This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrIgAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corporation.
\section{Introduction}
\label{sec:introduction}
New fundamental, massive, neutral, spin-1 gauge bosons ($Z'$) appear ubiquitously in theories Beyond the Standard Model (BSM). The strongest limits for such a state generally exist for the $e^+e^-$ and $\mu^+\mu^-$ signatures, known collectively as Drell-Yan (DY). However, in addition to their importance in extracting the couplings to top quarks, resonance searches in the $t\bar{t}$ channel can offer additional handles on the properties of a $Z'$ due to uniquely available asymmetry observables, owing to the fact that (anti)tops decay prior to hadronisation and spin information is effectively transmitted to decay products. Their definition in $t\bar{t}$, however, requires the reconstruction of the top quark pair. In these proceedings we summarise our study of the sensitivity to the presence of a single $Z'$ boson at the Large Hadron Collider (LHC) arising from a number of generationally universal benchmark models (section~\ref{sec:models}), as presented in our recently submitted paper~\cite{cerrito2016}. We simulate top pair production and six-fermion decay mediated by a $Z'$ with full tree-level SM interference and all intermediate particles allowed off-shell, with analysis focused on the lepton-plus-jets final state, and imitating some experimental conditions at the parton level (section~\ref{sec:method}). We assess the prospect for an LHC analysis to profile a $Z'$ boson mediating $t\bar{t}$ production, using the cross section in combination with asymmetry observables, with results and conclusions in section~\ref{sec:results} and~\ref{sec:conclusions}, respectively.
\section{Models}
\label{sec:models}
There are several candidates for a Grand Unified Theory (GUT), a hypothetical enlarged gauge symmetry, motivated by gauge coupling unification at approximately the $10^{16}$ GeV energy scale. $Z'$ bosons often arise from the residual U$(1)$ gauge symmetries left over after the spontaneous breaking of such a symmetry down to the familiar SM gauge structure. We study a number of benchmark examples of such models. These may be classified into three types: $E_6$ inspired models, generalised Left-Right (GLR) symmetric models and General Sequential Models (GSMs)~\cite{accomando2011}.
One may propose that the gauge symmetry group at the GUT scale is E$_6$. When recovering the SM, two residual symmetries U$(1)_\psi$ and U$(1)_\chi$ emerge, which may survive down to the TeV scale. LR symmetric models introduce a new isospin group, SU$(2)_R$, perfectly analogous to the SU$(2)_L$ group of the SM, but which acts on right-handed fields. This symmetry may arise naturally when breaking an SO$(10)$ gauge symmetry. We are particularly interested in the residual U$(1)_R$ and U$(1)_{B-L}$ symmetries, where the former is related to $T^3_R$, and $B$ and $L$ refer to Baryon and Lepton number, respectively. A Sequential SM (SSM) $Z'$ has fermionic couplings identical to those of the SM $Z$ boson, but is generically heavier. In the SM the $Z$ couplings to fermions are uniquely determined by well-defined eigenvalues of the $T^3_L$ and $Q$ generators, the third isospin component and the electromagnetic (EM) charge.
For each class we may take a general linear combination of the appropriate operators and fix $g'$, varying the angular parameter dictating the relative strengths of the component generators, until we recover interesting limits. These models are all universal, with the same coupling strength to each generation of fermions. Therefore, as with an SSM $Z'$, the strongest experimental limits come from the DY channel. The limits for these models have been extracted by Accomando et al.~\cite{accomando2016} from the CMS DY results~\cite{thecmscollaboration2015}\nocite{theatlascollaboration2014c} at $\sqrt{s}=7$ and $8$~TeV with an integrated luminosity of $L=20$~fb$^{-1}$, with general consensus that such a state is excluded below $3$~TeV.
\section{Method}
\label{sec:method}
Taking $\theta$ as the angle between the top and the incoming quark direction in the parton centre-of-mass frame, and $y_{tt}$ as the rapidity of the $t\bar{t}$ system, we define the forward-backward asymmetry:
\begin{equation}
A_{FB}=\frac{N_{t}(\cos\theta>0)-N_t(\cos\theta<0)}{N_t(\cos\theta>0)+N_t(\cos\theta<0)}, \quad \cos\theta^* =\frac{y_{tt}}{|y_{tt}|}\cos\theta
\end{equation}
With hadrons in the initial state, the quark direction is indeterminate. However, the $q$ is likely to carry a larger partonic momentum fraction $x$ than the $\bar{q}$'s $\bar{x}$. Therefore, to define $A^{*}_{FB}$ we choose the $z^*$ axis to lie along the boost direction. The top polarisation asymmetry ($A_{L}$) measures the net polarisation of the (anti)top quark by subtracting events with positive and negative helicities:
\begin{equation}
A_{L}=\frac{N(+,+)+N(+,-)-N(-,-)-N(-,+)}{N(+,+)+N(+,-)+N(-,-)+N(-,+)}, \quad \frac{1}{\Gamma_l}\frac{d\Gamma_l}{d\cos\theta_l}=\frac{1}{2}(1 + A_L \cos\theta_l),
\end{equation}
where $N(\lambda_t,\lambda_{\bar{t}})$ counts events with helicity eigenvalues $\lambda_{t}$ and $\lambda_{\bar{t}}$ of the $t$ and $\bar{t}$, respectively. Information about the top spin is preserved in the distribution of $\cos\theta_l$. We construct two-dimensional histograms in $m_{tt}$ and $\cos\theta_{l}$, and extract $A_{L}$ from the gradient of a straight line fitted to the normalised distribution.
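As an illustration of these definitions, the following toy sketch (not the analysis code used for this study; `costheta`, `ytt` and `costheta_l` are hypothetical per-event arrays) counts events to form $A_{FB}^{*}$ and extracts $A_L$ from the gradient of a straight-line fit to the normalised $\cos\theta_l$ distribution, whose slope equals $A_L/2$:

```python
import numpy as np

def afb_star(costheta, ytt):
    """Reconstructed forward-backward asymmetry A_FB*: the z* axis is
    chosen along the boost direction, cos(theta*) = sgn(y_tt) cos(theta)."""
    cts = np.sign(ytt) * np.asarray(costheta)
    n_f = np.count_nonzero(cts > 0)
    n_b = np.count_nonzero(cts < 0)
    return (n_f - n_b) / (n_f + n_b)

def al_from_slope(costheta_l, nbins=20):
    """Extract A_L from the normalised lepton angular distribution
    (1/Gamma_l) dGamma_l/dcos(theta_l) = (1/2)(1 + A_L cos(theta_l)):
    the fitted gradient of the density histogram equals A_L/2."""
    counts, edges = np.histogram(costheta_l, bins=nbins,
                                 range=(-1.0, 1.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    slope, _intercept = np.polyfit(centres, counts, 1)
    return 2.0 * slope
```

In the analysis this fit is repeated in each $m_{tt}$ slice of the two-dimensional histogram.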
In each of the models, the residual U$(1)'$ gauge symmetry is broken around the TeV scale, resulting in a massive $Z'$ boson. This leads to an additional term in the low-energy Lagrangian, from which we may calculate the unique $Z'$ coupling structure for each observable:
\begin{align}
\mathcal{L} &\supset g^\prime Z^\prime_\mu \bar{\psi}_f\gamma^\mu(f_V - f_A\gamma_5)\psi_f,
\label{eq:zprime_lagrangian}\\
\hat{\sigma} &\propto \left(q_V^2 + q_A^2\right)\left((4 - \beta^2)t_V^2 + t_A^2\right),\\
A_{FB} &\propto q_V q_A t_V t_A,\\
A_{L} &\propto \left(q_V^2 + q_A^2\right)t_V t_A,
\end{align}
where $f_V$ and $f_A$ are the vector and axial-vector couplings of a fermion $f$, with $f=q$ for the initial-state quark and $f=t$ for the top quark, and $\beta$ is the top velocity in the partonic centre-of-mass frame.\nocite{hagiwara1992,stelzer1994,lepage1978}
While this is a parton-level analysis, we incorporate constraints encountered with reconstructed data to assess, in a preliminary way, whether these observables survive reconstruction. The collider signature for our process is a single $e$ or $\mu$ produced with at least four jets, in addition to missing transverse energy ($E^{\rm miss}_{T}$). Experimentally, the $b$-tagged jet charge is indeterminate and there is ambiguity in assigning $b$-jets to the (anti)top. We identify $E^{\rm miss}_{T}$ solely with the transverse neutrino momentum. Assuming an on-shell $W^\pm$ we may find approximate solutions for the longitudinal component of the neutrino momentum, $p_z^\nu$, as the roots of a quadratic equation. In order to reconstruct the event, we account for bottom-top assignment and $p_z^\nu$ solution selection simultaneously, using a chi-square-like test, minimising the variable $\chi^2$:
\begin{equation}
\chi^2 = \left(\frac{m_{bl\nu}-m_{t}}{\Gamma_t}\right)^2 + \left(\frac{m_{bqq}-m_{t}}{\Gamma_t}\right)^2,
\label{eq:chi2}
\end{equation}
where $m_{bl\nu}$ and $m_{bqq}$ are the invariant mass of the leptonic and hadronic (anti)top, respectively.
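The reconstruction step above can be sketched as follows (a minimal illustration, not the generation tools of the paper; the values $m_W=80.4$~GeV, $m_t=172.5$~GeV and $\Gamma_t=1.3$~GeV are assumed here for definiteness). For massless leptons, squaring the on-shell condition $(p_l+p_\nu)^2 = m_W^2$ gives the quadratic $p_{T,l}^2\,(p_z^\nu)^2 - 2\mu\, p_{z,l}\, p_z^\nu + E_l^2\, p_{T,\nu}^2 - \mu^2 = 0$, with $\mu = m_W^2/2 + \vec p_{T,l}\cdot\vec p_{T,\nu}$:

```python
import numpy as np

MW = 80.4   # GeV, assumed on-shell W mass
MT = 172.5  # GeV, assumed top mass
GT = 1.3    # GeV, assumed top width

def neutrino_pz(lep, met):
    """Solve the on-shell-W quadratic for the longitudinal neutrino
    momentum.  lep = (E, px, py, pz) of the charged lepton, met =
    (px, py) of the missing transverse momentum.  Returns both roots;
    a negative discriminant is clipped to zero (real part only)."""
    E, px, py, pz = lep
    mx, my = met
    mu = 0.5 * MW**2 + px * mx + py * my
    a = px**2 + py**2                       # p_{T,l}^2
    b = -2.0 * mu * pz
    c = E**2 * (mx**2 + my**2) - mu**2
    disc = max(b**2 - 4.0 * a * c, 0.0)
    return ((-b + np.sqrt(disc)) / (2.0 * a),
            (-b - np.sqrt(disc)) / (2.0 * a))

def chi2(m_blnu, m_bqq):
    """Chi-square-like variable ranking the reconstruction hypotheses
    (b-jet assignment and choice of p_z^nu root)."""
    return ((m_blnu - MT) / GT) ** 2 + ((m_bqq - MT) / GT) ** 2
```

In practice `chi2` is evaluated for every combination of $b$-jet assignment and $p_z^\nu$ root, and the minimum is taken.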
In order to characterise the sensitivity to each of these $Z'$ models, we test the null hypothesis, which includes only the known $t\bar{t}$ processes of the SM, assuming the alternative hypothesis ($H$), which includes the SM processes with the addition of a single $Z'$, using the profile likelihood ratio as a test statistic, approximated in the large-sample limit, as described in~\cite{cowan2011}. This method is fully general for any $n$D histogram, and we test both $1$D histograms in $m_{tt}$ and $2$D histograms in $m_{tt}$ and the defining variable of each asymmetry, to assess their combined significance.
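A simplified sketch of this test (assuming Poisson-distributed bin counts, no nuisance parameters, and the signal-plus-background expectation taken as the Asimov data set) uses the asymptotic median discovery significance of~\cite{cowan2011}, which applies equally to $1$D histograms or flattened $2$D histograms:

```python
import numpy as np

def asimov_significance(s_plus_b, b):
    """Median discovery significance Z for binned Poisson counts in the
    asymptotic (large-sample) limit, with Asimov data n_i = s_i + b_i:
    Z = sqrt(2 * sum_i [ n_i * ln(n_i / b_i) - (n_i - b_i) ])."""
    n = np.asarray(s_plus_b, dtype=float).ravel()  # 1D or flattened 2D
    b = np.asarray(b, dtype=float).ravel()
    return np.sqrt(2.0 * np.sum(n * np.log(n / b) - (n - b)))
```

For $s_i \ll b_i$ this reduces to the familiar $s/\sqrt{b}$ estimate.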
\section{Results}
\label{sec:results}
\begin{figure}[H]
\centering
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{mtt-r-gsm-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-l100}.pdf}
\caption{Events expected - GSM models}
\end{subfigure}
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{mtt-r-glr-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-l100}.pdf}
\caption{Events expected - GLR models}
\end{subfigure}
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{afb-r-gsm-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-y0.5-l100}.pdf}
\caption{$A_{FB}^{*}$ - GSM models}
\end{subfigure}
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{afb-r-glr-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-y0.5-l100}.pdf}
\caption{$A_{FB}^{*}$ - GLR models}
\end{subfigure}
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{al-r-gsm-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-l100}.pdf}
\caption{$A_L$ - GSM models}
\end{subfigure}
\begin{subfigure}{0.494\textwidth}
\includegraphics[width=\textwidth]{{al-r-glr-ggqq-gazx-tt-bbllvv-2-4-5x10m-a-l100}.pdf}
\caption{$A_L$ - GLR models}
\end{subfigure}
\caption{Expected distributions for each of our observables of interest, with an integrated luminosity of $100$~fb$^{-1}$, at $\sqrt{s}=13$~TeV. The shaded bands indicate the projected statistical uncertainty.}
\label{fig:distinguishing}
\end{figure}
Figure~\ref{fig:distinguishing} shows plots for the differential cross section, $A_{FB}^{*}$ and $A_{L}$. The statistical error is quantified for this luminosity assuming Poisson errors. The models absent from the asymmetry plots, including all of the $E_6$ class, produce an asymmetry only via the interference term, which generally gives an undetectable enhancement with respect to the SM yield. The absence of a corresponding peak in either asymmetry offers an additional handle on diagnosing a discovered $Z'$. The cross section, profiled in $m_{tt}$, shows a very visible peak for all models. The GSM models feature a larger peak and width, consistent with their stronger couplings, but the impact on the cross section is otherwise similar for both classes. Mirroring the cross section, the $A_{FB}^{*}$ distribution clearly distinguishes between the models and the SM, with the difference in width even more readily apparent. The best distinguishing power over all the models investigated comes from the $A_{L}$ distribution, which features an oppositely signed peak for the GLR and GSM classes.
To evaluate the significance of each asymmetry as a combined discovery observable we bin in both $m_{tt}$ and its defining variable. For $A_{FB}^*$, the asymmetry is calculated directly. Therefore, we divide the domain of $\cos\theta^*$ into just two equal regions. $A_L$ is extracted from the gradient of the fit to $\cos\theta_l$ for each mass slice, and we calculate the significance directly from this histogram. The final results of the likelihood-based test, as applied to each model, and tested against the SM, are presented in table~\ref{tab:significance}. The models with non-trivial asymmetries consistently show an increased significance for the 2D histograms compared with using $m_{tt}$ alone, illustrating their potential application in gathering evidence to herald the discovery of new physics.
\begin{table}
\footnotesize
\centering
\begin{tabular}{|llccc|}
\hline
\bigstrut
Class & U$(1)'$ & \multicolumn{3}{c|}{Significance ($Z$)} \\
\cline{3-5}
\bigstrut
& & $m_{tt}$ & $m_{tt}$ \& $\cos\theta^{*}$ & $m_{tt}$ \& $\cos\theta_l$ \\
\hline
\bigstrut[t]
\multirow{6}{*}{E$_6$} & U$(1)_\chi$ & $ 3.7$ & - & - \\
& U$(1)_\psi$ & $ 5.0$ & - & - \\
& U$(1)_\eta$ & $ 6.1$ & - & - \\
& U$(1)_S$ & $ 3.4$ & - & - \\
& U$(1)_I$ & $ 3.4$ & - & - \\
& U$(1)_N$ & $ 3.5$ & - & - \\
\hline
\bigstrut[t]
\multirow{4}{*}{GLR} & U$(1)_{R }$ & $ 7.7$ & $ 8.5$ & $ 8.6$ \\
& U$(1)_{B-L}$ & $ 3.6$ & - & - \\
& U$(1)_{LR}$ & $ 5.1$ & $ 5.6$ & $ 5.8$ \\
& U$(1)_{Y }$ & $ 6.3$ & $ 6.8$ & $ 7.0$ \\
\hline
\bigstrut[t]
\multirow{3}{*}{GSM} & U$(1)_{T^3_L}$ & $12.1$ & $13.0$ & $14.0$ \\
& U$(1)_{SM}$ & $ 7.1$ & $ 7.3$ & $ 7.6$ \\
& U$(1)_{Q}$ & $24.8$ & - & - \\
\hline
\end{tabular}
\caption{Expected significance, expressed as the Gaussian equivalent of the $p$-value.}
\label{tab:significance}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
We have investigated the scope of the LHC in accessing semileptonic final states produced by $t\bar t$ pairs emerging from the decay of a heavy $Z'$ state. We tested a variety of BSM scenarios embedding one such state, and show that asymmetry observables can be used not only to aid the diagnostic capabilities provided by the cross section, in identifying the nature of a possible $Z'$ signal, but also to increase the combined significance for first discovery. While the analysis was performed at the parton level, we have implemented a reconstruction procedure of the (anti)tops that closely mimics experimental conditions. We have, therefore, set the stage for a fully-fledged analysis, eventually also including parton shower, hadronisation, and detector reconstruction, which will constitute the subject of a forthcoming publication. In short, we believe that our results represent a significant phenomenological advancement in proving that charge and spin asymmetry observables can have a strong impact in accessing and profiling $Z'\to t\bar t$ signals during Run 2 of the LHC. This is all the more important in view of the fact that several BSM scenarios, chiefly those assigning a composite nature to the recently discovered Higgs boson, embed one or more $Z'$ states that are strongly coupled to top (anti)quarks~\cite{barducci2012}.
\section*{Acknowledgements}
We acknowledge the support of ERC-CoG Horizon 2020, NPTEV-TQP2020 grant no. 648723, European Union. DM is supported by the NExT Institute and an ATLAS PhD Grant, awarded in February 2015. SM is supported in part by the NExT Institute and the STFC Consolidated Grant ST/L000296/1. FS is supported by the STFC Consolidated Grant ST/K001264/1. We would like to thank Ken Mimasu for all his prior work on $Z'$ phenomenology in $t\bar{t}$, as well as his input when creating the generation tools used for this analysis. Thanks also go to Juri Fiaschi for helping us to validate our tools in the case of Drell-Yan $Z'$ production. Additionally, we are very grateful to Glen Cowan for discussions on the statistical procedure, and Lorenzo Moneta for aiding with the implementation.
\section{Introduction}
The theory of pseudodifferential operators has proven to be a powerful tool in many disciplines of mathematics.
The space of conormal distributions was designed to contain the Schwartz kernels of pseudodifferential operators with H\"ormander symbols, see \cite[Chapter~18.2]{Hormander0}.
Conormal distributions are the starting point for the theory of Lagrangian distributions and Fourier integral operators \cite[Chapter~25]{Hormander0}, but they have also been studied extensively in their own right, and they are essential in several theories, see e.g. \cite{Bony,MelroseAPS}. A distribution $u$ defined on a smooth manifold is conormal with respect to a closed smooth submanifold if $Lu$ belongs to a certain Besov space locally, for certain differential operators $L$ that depend on the submanifold.
For the well-studied pseudodifferential operators on $\rr d$ with Shubin symbols \cite{Shubin1}, we are not aware of a concept corresponding to conormal distributions.
In this paper we fill this gap by introducing a theory of conormal distributions with respect to linear subspaces of $\rr d$, adapted to Shubin operators.
Recall that a Shubin symbol $a \in \Gamma_\rho^m$ of order $m \in \ro$ satisfies the estimates
\begin{equation*}
|\partial_x^\alpha\partial_\xi^\beta a(x,\xi)| \lesssim (1+|x|+|\xi|)^{m-\rho|\alpha+\beta|}, \quad (x,\xi) \in \rr d \times \rr d, \ \alpha, \beta \in \nn d,
\end{equation*}
where $0 \leqs \rho \leqs 1$.
The key feature of the Shubin symbols that is difficult to describe by the standard techniques is the inherent isotropy, in particular the fact that taking derivatives with respect to $x$ also increases the decay in $\xi$. The tool that we employ to circumvent this issue is a version of the short-time Fourier transform, which is more suitable for isotropic symbols than the standard Fourier transform on which the classical theory is based.
Our work may be seen as phase space analysis of Shubin conormality.
We extend Tataru's characterization \cite{Tataru} of the Schwartz kernels of pseudodifferential operators with $m=\rho=0$ to $0 \leqs \rho \leqs 1$ and order $m \in \ro$. The behavior of the symbols with respect to derivatives and the order is reflected in phase space.
Based on the characterization of the Schwartz kernels of Shubin operators, we define conormal tempered distributions on $\rr d$ with respect to a linear subspace and an order $m \in \ro$. To distinguish them from H\"ormander's notion of conormal distribution, we use the prefix $\Gamma$-conormal. The Schwartz kernels of Shubin operators are thus identical to the $\Gamma$-conormal distributions on $\rr {2d}$ with respect to the diagonal in $\rr {2d}$.
We prove functional properties of $\Gamma$-conormal distributions and check that they transform well under the Fourier transform and linear coordinate transformations. We equip them with a topology such that these operators become continuous.
The present paper can be seen as a first step in the direction of a phase space analysis for Lagrangian distributions in the Shubin calculus which, as far as we know, does not yet exist. This will be the subject of a forthcoming paper.
The paper is organized as follows:
In Section \ref{sec:prelim} we introduce the FBI-type integral transform on which our analysis is based and state its basic properties.
Section \ref{sec:shubchar} contains a phase space characterization of Shubin symbols in terms of the integral transform.
In Section \ref{sec:pseudochar} we transfer the characterization to the Schwartz kernels of the associated class of global pseudodifferential operators.
Along the way we give a simple proof of the continuity of these operators on the associated scale of Shubin--Sobolev modulation spaces.
Finally in Section \ref{sec:gconorm} we define $\Gamma$-conormal distributions and discuss their functional and microlocal properties.
\section{An integral transform of FBI type}\label{sec:prelim}
In this section we introduce the tool for the definition of Shubin conormal distributions, namely a variant of the FBI transform, and discuss its main properties. First we fix some notation.
\subsection*{Basic notation}
We use $\cS(\rr d)$ and $\cS'(\rr d)$ for the Schwartz space of rapidly decaying smooth functions and its dual the tempered distributions. We write $\langle u,v\rangle$ for the bilinear pairing between a test function $v$ and a distribution $u$ and $(u,v)=\langle u,\overline{v}\rangle$ for the sesquilinear pairing as well as the $L^2$ scalar product if $u, v \in L^2(\rr d)$.
We use $T_{y}u(x) = u(x-y)$
and $M_\xi u(x) = e^{i \la x,\xi \ra}u(x)$, where $\la \cdot,\cdot \ra$ denotes the inner product on $\rr d$, for the operation of translation by $y\in \rr d$
and modulation by $\xi\in \rr d$, respectively, applied to functions or distributions. For $x \in \rr d $ we write $\eabs{x}=\sqrt{1+|x|^2}$. Peetre's inequality is
\begin{equation}
\label{eq:Peetre}
\eabs{x+y}^s \leqs C_s \eabs{x}^s\eabs{y}^{|s|}, \qquad x,y \in \rr d, \quad s \in \ro, \quad C_s>0.
\end{equation}
We write $\dbar x$ for the dual Lebesgue measure $(2\pi)^{-d}\dd x$.
The notation $f (x) \lesssim g(x)$ means that $f(x) \leqs C g(x)$ for some $C>0$, for all $x$ in the domain of $f$ and $g$.
If $f (x) \lesssim g (x) \lesssim f (x)$ then we write $f (x) \asymp g (x)$.
The Fourier transform is normalized for $f \in \cS(\rr d)$ as
\begin{equation*}
\cF f (\xi) = \widehat f (\xi) = (2\pi)^{-d/2} \int_{\rr d} f(x) e^{-i \la x,\xi \ra} \, \dd x
\end{equation*}
which makes it unitary on $L^2(\rr d)$.
The partial Fourier transform with respect to a vector variable indexed by $j$ is denoted $\cF_j$.
For $1 \leqs j \leqs d$ we use $D_j = -i \partial_j$ and extend to multi-indices.
The orthogonal projection on a linear subspace $Y \subseteq \rr d$ is $\pi_Y$.
We denote by $\M_{d_1 \times d_2}( \ro )$ the space of $d_1\times d_2$ matrices with real entries, by $\GL(d,\ro)$ the group of invertible elements of $\M_{d \times d}( \ro )$, and by $\On(d)$ the subgroup of orthogonal matrices in $\GL(d,\ro)$.
The real symplectic group \cite{Folland1} is denoted $\Sp(d,\ro)$ and is defined as the matrices in $\GL(2d,\ro)$ that leaves invariant the
canonical symplectic form on $T^* \rr d$
\begin{equation*}
\sigma((x,\xi), (x',\xi')) = \la x' , \xi \ra - \la x, \xi' \ra, \quad (x,\xi), (x',\xi') \in T^* \rr d.
\end{equation*}
For a function $f$ on $\rr d$ and $A \in \GL(d,\ro)$ we denote the pullback by $A^* f = f \circ A$.
The determinant of $A \in \M_{d \times d}( \ro )$ is $|A|$, the transpose is $A^t$, and the inverse of the transpose is $A^{-t}$.
\subsection*{An integral transform of FBI type}
\begin{defn}\label{def:FBItransform}
Let $u\in \cS^\prime(\rr d)$ and let $g\in \cS(\rr d)\setminus\{0\}$ be a \textit{window function}. Then the transform $\cT_g u: \rr {2d} \rightarrow \co$ is
\begin{equation}
\label{eq:cTdef}
\cT_g u(x,\xi)=(2\pi)^{-d/2}(u,T_x M_{\xi}g), \quad x, \xi \in \rr d.
\end{equation}
\end{defn}
If $u \in \cS(\rr d)$ then $\cT_g u \in \cS(\rr {2d})$ \cite[Theorem~11.2.5]{Grochenig1}.
The adjoint $\cT_g^*$ is defined by $(\cT_g^* U, f) = (U, \cT_g f)$ for $U \in \cS'(\rr {2d})$ and $f \in \cS(\rr d)$.
When $U$ is a polynomially bounded measurable function we write
\begin{equation*}
\cT_g^* U(y) = (2\pi)^{-d/2} \int_{\rr {2d}} U(x,\xi) \, T_{x} M_{\xi} g(y) \, \dd x \, \dd \xi
\end{equation*}
where the integral is defined weakly so that $(\cT_g^* U, f) = (U, \cT_g f)_{L^2}$ for $f \in \cS(\rr d)$.
\begin{rem}
For $u\in\cS(\rr d)$ we have
\begin{equation*}
\cT_g u(x,\xi)=(2\pi)^{-d/2} \int_{\rr d} u(y) \, e^{-i \la y-x,\xi \ra} \, \overline{g (y-x)} \ \dd y
= e^{i\langle x, \xi \rangle} \cF(u \, T_x \overline g) (\xi).
\end{equation*}
\end{rem}
The standard, $L^2$-normalized Gaussian window function on $\rr d$ is denoted $\psi_0(x) = \pi^{-d/4}e^{-|x|^2/2}$.
\begin{prop}
\label{prop:Swdchar}
\rm{\cite[Theorem~11.2.3]{Grochenig1}}
Let $u\in\cS'(\rr d)$ and $g \in \cS(\rr d) \setminus 0$. Then $\cT_g u\in C^\infty(\rr {2d})$ and there exists $N \in \no$ that does not depend on $g$ such that
\begin{equation}
\label{eq:Swdineq}
|\cT_g u(x,\xi)|\lesssim \eabs{(x,\xi)}^{N}, \quad (x,\xi) \in \rr {2d}.
\end{equation}
We have $u\in \cS(\rr d)$ if and only if for any $N \in \no$
\begin{equation}
\label{eq:Swineq}
|\cT_g u(x,\xi)|\lesssim \eabs{(x,\xi)}^{-N}, \quad (x,\xi) \in \rr {2d}.
\end{equation}
\end{prop}
\begin{rem}
(Relation to other integral transforms.)
The transform $\cT_g$ is related to the short-time Fourier transform (cf. \cite{Grochenig1})
\begin{equation*}
\cV_g u(x,\xi) = (2\pi)^{-d/2}(u,M_{\xi} T_x g), \quad x, \xi \in \rr d,
\end{equation*}
(for the Gaussian window $g=\psi_0$ also known as the Gabor transform) via
\begin{equation*}
\cT_g u(x,\xi) = e^{i \la x, \xi \ra} \cV_{g}u(x,\xi).
\end{equation*}
For the standard Gaussian window \eqref{eq:cTdef} may be expressed as
\begin{equation}
\label{eq:cTpconvdef}
\cTp u(x,\xi)=(2\pi)^{-d/2}e^{-\frac{|\xi|^2}{2}} ( u * \psi_0 )(x-i\xi)=\mathcal{B} u(x-i\xi) \, e^{-(|x|^2 + |\xi|^2)/2},
\end{equation}
where $\mathcal{B}$ stands for the Bargmann transform \cite{Grochenig1}.
\end{rem}
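As a concrete illustration of Definition \ref{def:FBItransform} (a standard Gaussian integral, included here for orientation), the transform of the standard window against itself is
\begin{equation*}
\cTp \psi_0(x,\xi) = (2\pi)^{-d/2} \, e^{\frac{i}{2} \la x,\xi \ra} \, e^{-(|x|^2 + |\xi|^2)/4}, \qquad (x,\xi) \in \rr {2d},
\end{equation*}
in agreement with \eqref{eq:cTpconvdef}. In particular $|\cTp \psi_0|$ is a Gaussian on phase space, which makes the decay \eqref{eq:Swineq} evident in this case.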
We have for two different windows $g,h\in \cS(\rr d)$
\begin{equation}
\label{eq:reproducing}
\cT_h^*\cT_g u=(h,g) u, \qquad u \in \cS'(\rr d),
\end{equation}
and consequently, $\|g\|_{L^2}^{-2}\cT_g^*\cT_g u=u$ for $g \in \cS(\rr d) \setminus 0$ \cite{Grochenig1}.
If $(h,g) \neq 0$ the inversion formula \eqref{eq:reproducing} can be written as
\begin{equation*}
( u,f) = (h,g)^{-1} (\cT_g u, \cT_h f), \qquad u \in \cS'(\rr d), \quad f \in \cS(\rr d).
\end{equation*}
Two important features of $\cT_g$ which distinguish it from the short-time Fourier transform are the following differential identities:
\begin{align}
\label{eq:diffident}
\partial_x^\alpha \cT_g u (x,\xi) & = \cT_g (\partial^\alpha u) (x,\xi), \qquad \alpha \in \nn d, \\
\label{eq:diffidentstar}
D_{\xi}^\beta \cT_g u (x,\xi) & = \cT_{g_\beta} u (x,\xi), \qquad \beta \in \nn d, \qquad g_\beta (x) = (-x)^\beta g(x).
\end{align}
As described in \cite{Grochenig1} for the short-time Fourier transform, \eqref{eq:reproducing} may be used to estimate the behavior of $\cT_g$ under a change of window.
The following version of this result takes derivatives into account:
\begin{lem}
\label{lem:windchange}
Let $u \in \cS'(\rr d)$ and let $f,g,h \in\cS(\rr d) \setminus 0$ satisfy $(h,g) \neq 0$.
Then for all $\alpha,\beta \in \nn d$ and $(x,\xi) \in \rr {2d}$
\begin{equation}\label{eq:Tderivative}
|\partial_x^\alpha \partial_{\xi}^\beta \cT_f u(x,\xi)| \leqs (2\pi)^{-d/2} |(h,g)|^{-1} |\partial_x^\alpha \cT_g u|*|\cT_{f_\beta} h| (x,\xi).
\end{equation}
\end{lem}
\begin{proof}
We obtain from \eqref{eq:reproducing}
\begin{equation}
\label{eq:reprointer}
\cT_f u = (h,g)^{-1} \cT_f \cT_h^*\cT_g u.
\end{equation}
We may express $\cT_f \cT_h^*\cT_g u$ as
\begin{align*}
\cT_f \cT_h^*\cT_g u(x,\xi)
& = (2\pi)^{-d/2} ( \cT_g u, \cT_h (T_x M_\xi f) ) \\
& = (2\pi)^{-d} \int_{\rr {2d}} \cT_g u(y,\eta) \, (T_y M_\eta h, T_x M_\xi f) \, \dd y \, \dd \eta \\
& = (2\pi)^{-d/2} \int_{\rr {2d}} e^{i \la x-y,\eta \ra} \cT_g u(y,\eta) \, \cT_f h(x-y,\xi-\eta) \, \dd y \, \dd \eta \\
& = (2\pi)^{-d/2} \int_{\rr {2d}} e^{i \la y,\eta \ra} \cT_g u(x-y,\eta) \, \cT_f h(y,\xi-\eta) \, \dd y \, \dd \eta.
\end{align*}
Combining \eqref{eq:diffident}, \eqref{eq:diffidentstar} and \eqref{eq:reprointer} yields
\begin{align*}
& \partial_x^\alpha D_{\xi}^\beta \cT_f u (x,\xi) \\
& = (2\pi)^{-d/2} (h,g)^{-1} \int_{\rr {2d}}
e^{i \la x-y,\eta \ra} \partial_y^\alpha\cT_g u(y,\eta) \, \cT_{f_\beta} h(x-y,\xi-\eta) \, \dd y\, \dd \eta.
\end{align*}
Taking absolute value gives \eqref{eq:Tderivative}.
\end{proof}
\subsection{Transformation under shifts and symplectic matrices}
A pseudodifferential operator in the Weyl quantization is for $f \in \cS(\rr d)$ defined as
\begin{equation} \label{shubop}
a^w(x,D) f(x) = \int_{\rr {2d}} e^{i \la x-y, \xi \ra} a\left((x+y)/2,\xi \right) \, f(y) \, \dbar \xi \, \dd y
\end{equation}
where $a$ is the Weyl symbol.
We will later use Shubin symbols, but for now it suffices to note that the Weyl quantization extends
by the Schwartz kernel theorem to $a \in \cS'(\rr {2d})$, and then gives
rise to a continuous linear operator from $\cS(\rr d)$ to $\cS'(\rr d)$.
The Schwartz kernel of the operator $a^w(x,D)$ is
\begin{equation}\label{eq:schwartzkernelpseudo}
K_{a}(x,y)=\int_{\rr d} e^{i \la x-y, \xi \ra} a\left((x+y)/2,\xi \right) \dbar \xi
\end{equation}
interpreted as a partial inverse Fourier transform in $\cS'(\rr {2d})$ when $a \in \cS'(\rr {2d})$.
The \emph{metaplectic representation} \cite{Folland1,Taylor1} works as follows.
To each symplectic matrix $\chi \in \Sp(d,\ro)$ is associated an operator $\mu(\chi)$ that is unitary on $L^2(\rr d)$, and determined up to a complex factor of modulus one, such that
\begin{equation}\label{metaplecticoperator}
\mu(\chi)^{-1} a^w(x,D) \, \mu(\chi) = (a \circ \chi)^w(x,D), \quad a \in \cS'(\rr {2d})
\end{equation}
(cf. \cite{Folland1,Hormander0}).
The operator $\mu(\chi)$ is a homeomorphism on $\mathscr S$ and on $\mathscr S'$.
The metaplectic representation is the mapping $\Sp(d,\ro) \ni \chi \rightarrow \mu(\chi)$.
It is in fact a representation of the so called $2$-fold covering group of $\Sp(d,\ro)$, which is called the metaplectic group.
In Table \ref{tab:meta} we list the generators $\chi$ of the symplectic group, the corresponding unitary operators $\mu(\chi)$ on $u \in L^2$,
and the corresponding transformation on $\cT_g u$, cf. \cite{DeGosson}.
We also list the correspondence for phase shift operators.
Here $x_0, \xi_0 \in \rr d$, $A \in \GL(d,\ro)$, $B \in M_{d\times d}(\mathbb{R})$ with $B=B^t$.
\renewcommand{\arraystretch}{1.2}
\begin{table}[htb!]
\caption{The metaplectic representation}
\label{tab:meta}
\begin{tabular}{|c||c|}
\hline
\textbf{Transformation} & \textbf{Action on:} \\
& $(x,\xi)\in T^* \rr d$ \\
& $u\in L^2(\rr{d})$ \\
& $\cT_g u(x,\xi) \in L^2(\rr {2d})$ \\
\hline \hline
& $(A^{-1}x,A^t\xi)$ \\
Coordinate change & $|A|^{1/2} A^* u$ \\
& $|A|^{-1/2} \cT_{A^{-*}g} u (A x,A^{-t} \xi)$\\
\hline
& $(\xi,-x)$ \\
Rotation $\pi/2$ & $\cF u$ \\
& $e^{i \la x,\xi \ra} \cT_{\cF^{-1} g} u (-\xi,x)$ \\
\hline
& $(x,\xi+Bx)$ \\
Shearing & $e^{\frac{i}{2} \la x,Bx \ra}u(x)$ \\
& $e^{\frac{i}{2} \la x,B x \ra}\cT_{g_{B}} u(x,\xi-Bx)$ \\
\hline
& $(x+x_0,\xi+\xi_0)$ \\
Shift & $T_{x_0} M_{\xi_0} u$ \\
& $ e^{i \la \xi_0,x-x_0 \ra} \cT_g u (x-x_0,\xi-\xi_0)$ \\
\hline
\end{tabular}
\end{table}
\renewcommand{\arraystretch}{1}
The proofs of the claims in Table \ref{tab:meta} are collected in the following lemmas.
\begin{lem}
\label{lem:shift}
Let $u \in \cS '(\rr d)$ and $g \in \cS(\rr d) \setminus 0$.
If $(x_0,\xi_0) \in T^* \rr d$, $A\in\GL(d,\ro)$, $B \in \M_{d \times d}(\ro)$ is symmetric, $v(x)=e^{\frac{i}{2} \la x,Bx \ra} u(x)$
and $g_B(y)= e^{-\frac{i}{2} \la y, B y \ra}g(y)$,
then for $(x,\xi) \in T^* \rr d$
\begin{align*}
\cT_g(T_{x_0} M_{\xi_0} u)(x,\xi) & = e^{i \la \xi_0, x-x_0 \ra} \cT_g u(x-x_0,\xi-\xi_0), \\
\cT_g( |A|^{1/2} A^*u)(x,\xi) & = |A|^{-1/2}\cT_{A^{-*}g} u (A x,A^{-t} \xi), \\
\cT_g v(x,\xi) & = e^{\frac{i}{2} \la x,Bx \ra }\cT_{g_B} u(x,\xi - Bx).
\end{align*}
\end{lem}
\begin{proof}
The first and the fourth entry of Table \ref{tab:meta} are immediate consequences of Definition \ref{def:FBItransform}.
For the third identity, assume first $u\in\cS(\rr d)$. Then
\begin{align*}
\cT_g v (x,\xi) & = (2\pi)^{-d/2} \int_{\rr d} \overline{g(y-x)} \, e^{\frac{i}{2} \la y, B y \ra} u(y) e^{i \la x-y,\xi \ra}\ \dd y \\
&=(2\pi)^{-d/2} e^{\frac{i}{2} \la x,B x \ra} \int_{\rr d} \overline{g(y-x) \, e^{-\frac{i}{2} \la y-x, B (y-x) \ra }} u(y) e^{i \la x-y, \xi-Bx \ra }\ \dd y \\
&=e^{\frac{i}{2} \la x, B x \ra } \cT_{g_B} u(x,\xi-Bx).
\end{align*}
The formula extends to $u \in \cS'(\rr d)$.
\end{proof}
Finally we prove the claim for ``Rotation $\pi/2$'' in Table \ref{tab:meta}.
For later use, we prefer to show a more general result for a possibly partial Fourier transform.
\begin{lem}
\label{lem:Fourier}
If $u \in \cS'(\rr d)$, $g \in \cS(\rr d) \setminus 0$, $0 \leqs n \leqs d$, and $x=(x_1,x_2) \in \rr d$ with $x_1 \in \rr n$, $x_2 \in \rr {d-n}$, and $\xi = (\xi_1,\xi_2)$ split accordingly,
then
\begin{equation*}
\cT_g u(x_1,x_2,\xi_1,\xi_2) = e^{i \la x_2,\xi_2 \ra} \cT_{\cF_2 g} \cF_2 u (x_1,\xi_2,\xi_1, -x_2).
\end{equation*}
\end{lem}
\begin{proof}
\begin{multline*}
e^{i \la x_2,\xi_2 \ra} \cT_{\cF_2 g} \cF_2 u (x_1,\xi_2,\xi_1, -x_2) \\
= e^{i \la x_2,\xi_2 \ra} (2\pi)^{-d/2} ( \cF_2 u, T_{x_1,\xi_2} M_{\xi_1,-x_2} \cF_2 g) \\
= e^{i \la x_2,\xi_2 \ra} (2\pi)^{-d/2} ( u, \cF_2^{-1} T_{x_1,\xi_2} M_{\xi_1,-x_2} \cF_2 g) \\ = (2\pi)^{-d/2} ( u, T_{x_1,x_2} M_{\xi_1,\xi_2} g).
\end{multline*}
\end{proof}
\begin{rem}
The extreme cases $n=0$ and $n=d$ represent $\cF_2 = \cF$ (the full Fourier transform) and the trivial case $\cF_2 = I$ (the identity), respectively.
\end{rem}
We observe that up to certain phase factors, changes of windows and sign conventions, the ``Action on $\cT_g u(x,\xi)$'' reflects the inversion of ``Action on $T^* \rr d$'' in Table \ref{tab:meta}.
\section{Characterization of Shubin symbols}
\label{sec:shubchar}
We first recall the definition of Shubin's class of global symbols for pseudodifferential operators \cite{Shubin1}.
\begin{defn}
We say that $a \in C^\infty(\rr d)$ is a Shubin symbol of order $m \in \ro$ and parameter $0 \leqs \rho \leqs 1$, denoted $a \in \Gamma_{\rho}^m(\rr d)$, if there exist $C_\alpha>0$ such that
\begin{equation}
\label{eq:shubinineq}
|\partial^\alpha a(z)| \leqs C_\alpha \eabs{z}^{m-\rho|\alpha|}, \qquad \alpha \in \nn d, \quad z \in \rr d.
\end{equation}
$\Gamma_\rho^m(\rr d)$ is a Fr\'echet space equipped with the seminorms $\rho^m_M(a)$, $M \in \no$, defined as the best possible constants $C_\alpha$ in \eqref{eq:shubinineq} maximized over $|\alpha| \leqs M$.
We denote $\Gamma^m(\rr d) = \Gamma_1^m(\rr d)$.
\end{defn}
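For orientation, a basic example: the harmonic oscillator symbol $a(z) = \eabs{z}^2 = 1 + |z|^2$ belongs to $\Gamma_1^2(\rr d)$, since $\partial_j a(z) = 2 z_j = O(\eabs{z})$, all second-order derivatives are constant, and all higher-order derivatives vanish. More generally, any polynomial of degree $m$ on $\rr d$ belongs to $\Gamma_1^m(\rr d)$.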
Obviously $\Gamma_\rho^m(\rr d) \subseteq \cS'(\rr d)$ so Proposition \ref{prop:Swdchar} already gives some information on $\cT_g a$ when $a\in \Gamma_\rho^m(\rr d)$.
The following result, which is a chief tool in the paper, gives characterizations of $\cT_g a$ for $a \in \Gamma_\rho^m(\rr d)$.
\begin{prop}
\label{prop:symbchar}
Suppose $a\in \cS'(\rr d)$.
Then $a\in \Gamma_\rho^m(\rr d)$ if and only if for one (and equivalently all) $g\in\cS(\rr d)\setminus 0$
\begin{equation}
\label{eq:Gineq1}
|\partial_x^\alpha \partial_\xi^\beta \cT_g a(x,\xi)|\lesssim \eabs{x}^{m- \rho |\alpha|}\eabs{\xi}^{-N}, \quad N \geqs 0, \quad \alpha, \beta \in \nn d, \quad x, \xi \in \rr d,
\end{equation}
or equivalently
\begin{equation}
\label{eq:Gineq}
|\partial_x^\alpha \cT_g a(x,\xi)|\lesssim \eabs{x}^{m- \rho |\alpha|}\eabs{\xi}^{-N}, \quad N \geqs 0, \quad \alpha \in \nn d, \quad x, \xi \in \rr d.
\end{equation}
\end{prop}
\begin{proof}
Let $a\in \Gamma_\rho^m(\rr d)$, let $g\in\cS(\rr d)\setminus 0$ and let $\alpha,\beta, \gamma \in\nn d$ be arbitrary. We seek to show
$$
|\xi^\gamma \partial_x^\alpha \partial_\xi^\beta \cT_g a(x,\xi)| \lesssim \eabs{x}^{m-\rho|\alpha|}.
$$
To that end we use \eqref{eq:diffident} and \eqref{eq:diffidentstar}, integrate by parts and estimate using \eqref{eq:Peetre} and the fact that $g \in \cS$
\begin{align*}
|\xi^\gamma \partial_x^\alpha \partial_\xi^\beta \cT_g a(x,\xi)|
& = \left| \xi^\gamma \cT_{g_\beta}(\partial^\alpha a)(x,\xi) \right| \\
& = (2 \pi)^{-d/2} \left| \int_{\rr d} \left((i\partial_{y})^\gamma e^{- i \la \xi,y \ra }\right) \overline{g_\beta(y)} \, \partial^\alpha a(x+y)\ \dd y \right| \\
&\lesssim \int_{\rr{d}} \left| \partial_{y}^\gamma \left[\overline{g_\beta(y)} \, \partial^\alpha a(x+y)\right] \right|\ \dd y \\
& = \int_{\rr{d}} \left| \sum_{\kappa \leqs \gamma} \binom{\gamma}{\kappa} \partial^{\gamma-\kappa}\overline{g_\beta(y)} \, \partial^{\alpha+\kappa} a(x+y)\right|\ \dd y \\
& \lesssim \sum_{\kappa \leqs \gamma} \binom{\gamma}{\kappa} \int_{\rr d} \left| \partial^{\gamma-\kappa} g_\beta(y) \right| \, \eabs{x+y}^{m-\rho|\alpha+\kappa|} \dd y \\
& \lesssim \eabs{x}^{m-\rho|\alpha|} \sum_{\kappa \leqs \gamma} \binom{\gamma}{\kappa} \int_{\rr d} \left| \partial^{\gamma-\kappa} g_\beta(y) \right| \, \eabs{y}^{|m|+\rho |\alpha+\kappa|} \dd y \\
&\lesssim \eabs{x}^{m- \rho |\alpha|}.
\end{align*}
This implies \eqref{eq:Gineq1} and as a special case \eqref{eq:Gineq}.
Conversely, suppose that \eqref{eq:Gineq} holds for $a \in \cS'(\rr d)$ for some $g \in \cS(\rr d) \setminus 0$, which is a weaker assumption than \eqref{eq:Gineq1}.
We obtain from \eqref{eq:reproducing} that $a$ is given by
\begin{align*}
a(y)
& = \|g\|_{L^2}^{-2} \, \cT_g^* \cT_g a(y) \\
& = \|g\|_{L^2}^{-2} \, (2 \pi)^{-d/2} \int_{\rr {2d}} \cT_g a(x,\xi) \, e^{i \la \xi,y-x \ra} \, g(y-x) \, \dd x \, \dd \xi
\end{align*}
which is an absolutely convergent integral due to \eqref{eq:Gineq} and the fact that $g\in\cS(\rr d)$.
We may differentiate under the integral, so integration by parts, \eqref{eq:Gineq} and \eqref{eq:Peetre} give for any $\alpha \in \nn d$ and any $y \in \rr d$
\begin{align*}
\left|\partial^\alpha a(y)\right|
& = \|g\|_{L^2}^{-2} \, (2 \pi)^{-d/2} \left| \int_{\rr {2d}} \cT_ga(x,\xi) \, \partial_y^\alpha \left( e^{i \la \xi,y-x \ra} \, g(y-x) \right) \, \dd x \, \dd \xi \right| \\
& = \|g\|_{L^2}^{-2} \, (2 \pi)^{-d/2} \left| \int_{\rr {2d}} \cT_ga(x,\xi) \, (-\partial_x)^\alpha \left( e^{i \la \xi,y-x \ra} \, g(y-x) \right) \, \dd x \, \dd \xi \right| \\
& = \|g\|_{L^2}^{-2} \, (2 \pi)^{-d/2} \left| \int_{\rr {2d}} \partial_x^\alpha \cT_ga(x,\xi) \, e^{i \la \xi,y-x \ra} \, g(y-x) \, \dd x \, \dd \xi \right| \\
& = \|g\|_{L^2}^{-2} \, (2 \pi)^{-d/2} \left| \int_{\rr {2d}} \partial_x^\alpha \cT_ga(y-x,\xi) \, e^{i \la \xi,x \ra} \, g(x) \, \dd x \, \dd \xi \right| \\
& \lesssim \int_{\rr {2d}} \eabs{y-x}^{m- \rho |\alpha|} \, \eabs{\xi}^{-d-1} \, |g(x)| \, \dd x \, \dd \xi \\
& \lesssim \eabs{y}^{m- \rho |\alpha|} \int_{\rr {2d}} \eabs{\xi}^{-d-1} \, \eabs{x}^{|m|+ \rho |\alpha|} \, |g(x)| \, \dd x \, \dd \xi \\
& \lesssim \eabs{y}^{m- \rho |\alpha|}.
\end{align*}
Thus $a \in \Gamma_\rho^m(\rr d)$.
\end{proof}
\begin{rem}
It follows from the proof that the best possible constants in \eqref{eq:Gineq} maximized over $|\alpha| \leqs M$ yield seminorms $\rho^m_{g,M,N}$, $M,N \in \no$, on $\Gamma_\rho^m(\rr{d})$ equivalent to $\rho^m_{M}$, $M \in \no$.
\end{rem}
We will next reformulate the characterization of $\Gamma^m(\rr d)$ in a more geometric form.
\begin{prop}
\label{prop:symbchargeom}
Let $a\in \cS'(\rr d)$.
Then $a \in \Gamma^m(\rr d)$ if and only if for one (and equivalently all) $g \in \cS(\rr d) \setminus 0$ and all $N,k \in \no$
\begin{equation}
\label{eq:Gineqgeom}
|L_1 \cdots L_k \cT_g a(x,\xi)|\lesssim \eabs{x}^m\eabs{\xi}^{-N}, \quad (x,\xi) \in T^* \rr d,
\end{equation}
for any vector fields of the form $L_i = x_j\partial_{x_n}$ where $1 \leqs j,n \leqs d$, $i=1,\dots,k$.
\end{prop}
\begin{proof}
We may write
\begin{equation*}
L_1 \cdots L_k = \sum_{|\alpha|=|\beta| \leqs k} c_{\alpha\beta} x^\alpha \partial^\beta, \quad c_{\alpha\beta} \in \ro,
\end{equation*}
and conversely every differential operator of this form is a linear combination of products of such vector fields $L_i$.
If $a \in \Gamma^m(\rr d)$ then the estimates \eqref{eq:Gineq} hold for any $g \in \cS(\rr d) \setminus 0$.
For $N,k \in \no$ we have
\begin{align*}
|L_1 \cdots L_k \cT_g a(x,\xi)| & \lesssim \sum_{|\alpha|=|\beta| \leqs k} \eabs{x}^{|\alpha|} |\partial_x^\beta \cT_g a(x,\xi)| \\
&\lesssim \eabs{x}^m \eabs{\xi}^{-N}
\end{align*}
which confirms \eqref{eq:Gineqgeom}.
Suppose on the other hand that the estimates \eqref{eq:Gineqgeom} hold for some $g \in \cS(\rr d) \setminus 0$ and all $N,k \in \no$.
Then for any $\alpha,\beta \in \nn d$ such that $|\alpha|=|\beta|$ and $N \in \no$
\begin{equation*}
|x^\alpha \partial_x^\beta \cT_g a(x,\xi)| \lesssim \eabs{x}^m \eabs{\xi}^{-N}.
\end{equation*}
This gives using $|x|^{|\beta|}\leqs d^{|\beta|/2} \max_{|\alpha|=|\beta|} |x^\alpha|$
\begin{equation*}
|\partial_x^\beta \cT_g a(x,\xi)| \lesssim \eabs{x}^{m-|\beta|} \eabs{\xi}^{-N}, \quad |x| > 1, \quad \xi \in \rr d.
\end{equation*}
In order to prove \eqref{eq:Gineq}, which is equivalent to $a \in \Gamma^m(\rr d)$,
it thus remains to show that $\eabs{\xi}^N | \partial_{x}^\beta \cT_g a (x,\xi)|$ is uniformly bounded for $|x| \leqs 1$ and $\xi \in \rr d$, for any $N \in \no$. For that we estimate
\begin{align*}
\eabs{\xi}^N | \partial_{x}^\beta \cT_g a(x,\xi)|
& = \eabs{\xi}^N \left| \sum_{\alpha \leqs \beta} c_{\alpha\beta} (i\xi)^{\alpha} \cT_{\partial^{\beta-\alpha} g} a (x,\xi) \right|\\
& \lesssim \eabs{\xi}^{|\beta|+N} \sum_{\alpha \leqs \beta} \left| \cT_{\partial^{\alpha} g} a (x,\xi) \right|.
\end{align*}
By Lemma \ref{lem:windchange} we have
\begin{equation*}
\left| \cT_{\partial^{\alpha} g} a (x,\xi) \right|
\lesssim \big( \underbrace{|\cT_g a|}_{\lesssim \eabs{x}^{m}\eabs{\xi}^{-M}} * \ |\underbrace{\cT_{\partial^{\alpha} g} g}_{\in \cS}| \big) (x,\xi)\lesssim \eabs{x}^{m}\eabs{\xi}^{-M}
\end{equation*}
where the last inequality follows by Peetre's inequality \eqref{eq:Peetre} applied to the convolution. Choosing $M \geqs |\beta|+N$, we obtain
\begin{equation*}
\eabs{\xi}^N \left|\partial_{x}^\beta \cT_g a(x,\xi)\right|\lesssim 1\qquad \text{ for } |x| \leqs 1, \quad \xi \in \rr d,
\end{equation*}
which proves the claim.
\end{proof}
\begin{rem}
The vector fields $x_j\partial_{x_n}$ span the smooth vector fields tangential to $\{0 \} \times \rr d \subseteq T^* \rr d$, see \cite[Lemma 18.2.5]{Hormander0}.
\end{rem}
\subsection{Classical symbols}
An important subclass of the Shubin symbols consists of those that admit a polyhomogeneous expansion, the so-called classical symbols.
A symbol $a\in \Gamma^m(\rr {d})$ is called classical, denoted $a \in \Gamma^m_\cl(\rr {d})$, if there are functions $a_{m-j}$, homogeneous of degree $m-j$ and smooth outside $z=0$, $j=0,1,\dots$, such that for any zero-excision function\footnote{This means a function of the form $1-\phi$ where $\phi\in C_c^\infty(\rr {d})$ and $\phi\equiv 1$ near zero.} $\chi$ we have for any $N \in \no$
\begin{equation*}
a-\chi\sum_{j=0}^{N-1} a_{m-j}\in \Gamma^{m-N}(\rr {d}).
\end{equation*}
By Euler's relation for homogeneous functions, $u$ is homogeneous of degree $m$ if and only if $R u = m u$, where $R$ is the radial vector field $Ru(x)=\la x, \nabla u(x) \ra$. Adapting the method of Joshi \cite{Joshi} gives the following characterization of classical Shubin symbols.
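For the reader's convenience we recall the short verification of Euler's relation: if $u(\lambda x) = \lambda^m u(x)$ for all $\lambda > 0$ then differentiation with respect to $\lambda$ at $\lambda = 1$ gives
\begin{equation*}
R u(x) = \la x, \nabla u(x) \ra = \frac{\dd}{\dd \lambda} \Big|_{\lambda = 1} u(\lambda x) = \frac{\dd}{\dd \lambda} \Big|_{\lambda = 1} \lambda^m u(x) = m u(x),
\end{equation*}
and conversely $Ru = mu$ implies that $\lambda \mapsto \lambda^{-m} u(\lambda x)$ has vanishing $\lambda$-derivative, hence $u(\lambda x) = \lambda^m u(x)$.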
\begin{prop}
\label{lem:class}
A symbol $a\in \Gamma^m(\rr {d})$ is classical if and only if for all $N \in \no$
\begin{equation*}
(R-m+N-1) (R-m+N-2) \cdots (R-m) \, a \in \Gamma^{m-N}(\rr {d}).
\end{equation*}
\end{prop}
The transformation $a \mapsto \cT_g a$ does not preserve homogeneity. Nevertheless
\eqref{eq:diffident} and \eqref{eq:diffidentstar} give the relation
\begin{equation*}
\cT_g \left(R a \right) (x,\xi) = \la x + i \nabla_\xi, \nabla_x \ra \cT_g a(x,\xi) =: \wt{R} \cT_g a(x,\xi).
\end{equation*}
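Indeed, with the integral representation of $\cT_g$ used in the proof of Proposition \ref{prop:symbchar} we have
\begin{equation*}
(x_j + i \partial_{\xi_j}) \cT_g a (x,\xi) = (2 \pi)^{-d/2} \int_{\rr d} e^{- i \la \xi,y \ra } \overline{g(y)} \, (x_j + y_j) \, a(x+y) \, \dd y = \cT_g \left( y_j a \right) (x,\xi),
\end{equation*}
which combined with $\partial_{x_j} \cT_g a = \cT_g (\partial_j a)$ yields
\begin{equation*}
\cT_g \left( R a \right) = \sum_{j=1}^d (x_j + i \partial_{\xi_j}) \partial_{x_j} \cT_g a = \la x + i \nabla_\xi, \nabla_x \ra \cT_g a.
\end{equation*}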
\begin{cor}
\label{cor:classsymbchar}
Let $a \in \cS'(\rr {d})$ and $g \in \cS(\rr {d}) \setminus 0$. Then $a \in \Gamma_\cl^m(\rr {d})$ if and only if
\begin{multline}
\label{eq:classtransf}
\left| \partial_x^\alpha \left( (\wt{R}-m+N-1) (\wt{R}-m+N-2) \cdots (\wt{R}-m) \cT_g a(x,\xi) \right) \right| \\
\lesssim \eabs{x}^{m- N-|\alpha|}\eabs{\xi}^{-M}
\end{multline}
for any $M \geqs 0$, $N\in \no$, $\alpha \in \nn d$ and $(x,\xi) \in T^* \rr d$.
\end{cor}
\begin{proof}
By Proposition \ref{lem:class}, $a \in \Gamma_\cl^m(\rr {d})$ if and only if
\begin{equation*}
(R-m+N-1) (R-m+N-2) \cdots (R-m) \, a \in \Gamma^{m-N}(\rr {d}).
\end{equation*}
By Proposition \ref{prop:symbchar} this holds if and only if for all $\alpha \in \nn d$, $(x, \xi) \in \rr {2d}$, and $M \geqs 0$
\begin{align*}
& |\partial_x^\alpha \cT_g \left( (R-m+N-1) (R-m+N-2) \cdots (R-m) \, a\right)(x,\xi)| \\
& \lesssim \eabs{x}^{m- N-|\alpha|}\eabs{\xi}^{-M}.
\end{align*}
This is equivalent to \eqref{eq:classtransf}.
\end{proof}
\section{Characterization of pseudodifferential operators}
\label{sec:pseudochar}
When $a \in \Gamma_\rho^m(\rr {2d})$ the pseudodifferential operator $a^w(x,D)$ is continuous on $\cS(\rr d)$, and extends to a continuous operator on $\cS'(\rr d)$ \cite{Shubin1}.
The formulas \eqref{shubop} and \eqref{eq:schwartzkernelpseudo} can be interpreted as oscillatory integrals if $0 < \rho \leqs 1$.
\begin{lem}
\label{lem:tTpsdo}
Let $a\in \Gamma_\rho^m(\rr {2d})$ and $g \in \cS(\rr {2d}) \setminus 0$. Then, for $(z,\zeta)=( z_1,z_2; \zeta_1,\zeta_2) \in T^* \rr {2d}$,
\begin{multline}
\label{eq:tTpsdo}
\cT_g K_a(z,\zeta)
\\ = (2 \pi)^{-d/2} \cT_h a \left(\frac{z_1+z_2}{2},\frac{\zeta_1-\zeta_2}{2},\zeta_1+\zeta_2,z_2-z_1 \right) e^{\frac{i}{2} \la \zeta_1-\zeta_2,z_1-z_2 \ra}
\end{multline}
where $h = \cF_2 (g \circ \kappa)$, $\kappa(x,y) = (x+y/2,x-y/2)$ and $x,y \in \rr d$.
\end{lem}
\begin{proof}
The statement \eqref{eq:tTpsdo} can be rephrased as
\begin{multline*}
\cT_g K_a \left( z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}; \zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2} \right) \\
= (2 \pi)^{-d/2} \cT_h a \left( z_1, \zeta_1; \zeta_2, z_2 \right) e^{- i \la \zeta_1, z_2 \ra},
\end{multline*}
for all $( z_1,z_2; \zeta_1,\zeta_2) \in T^* \rr {2d}$.
We have $K_a = (2 \pi)^{-d/2} (\cF_2^{-1} a) \circ \kappa^{-1}$
which gives
\begin{equation}\label{eq:STFTSchwartz1}
\begin{aligned}
& \cT_g K_a \left( z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}; \zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2} \right) \\
& = (2 \pi)^{-3d/2} ( (\cF_2^{-1} a) \circ \kappa^{-1}, T_{z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}} M_{\zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2}} g) \\
& = (2 \pi)^{-3d/2} ( a, \cF_2( T_{z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}} M_{\zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2}} g \circ \kappa)).
\end{aligned}
\end{equation}
We calculate
\begin{align*}
& (2 \pi)^{d/2} \cF_2( T_{z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}} M_{\zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2}} g \circ \kappa) (y,\eta) \\
& = \int_{\rr d} T_{z_1 - \frac{z_2}{2} , z_1 + \frac{z_2}{2}} M_{\zeta_1 + \frac{\zeta_2}{2},- \zeta_1 + \frac{\zeta_2}{2}} g \circ \kappa(y,u) e^{-i \la u, \eta \ra} \, \dd u \\
& = \int_{\rr d} e^{i \left( \la \zeta_1 + \frac{\zeta_2}{2}, y + \frac{u}{2} - z_1 + \frac{z_2}{2} \ra + \la - \zeta_1 + \frac{\zeta_2}{2}, y - \frac{u}{2} - z_1 - \frac{z_2}{2} \ra - \la u, \eta \ra \right) } \\
& \qquad \qquad \qquad \qquad \qquad \times g\left( y + \frac{u}{2} - z_1 + \frac{z_2}{2}, y - \frac{u}{2} - z_1 - \frac{z_2}{2} \right) \dd u \\
& = e^{i \left( \la \zeta_1, z_2 \ra + \la \zeta_2, y - z_1 \ra \right) } \int_{\rr d} e^{-i \la u, \eta-\zeta_1 \ra } g\left( y + \frac{u}{2} - z_1 + \frac{z_2}{2}, y - \frac{u}{2} - z_1 - \frac{z_2}{2} \right) \dd u \\
& = e^{i \la \zeta_1, z_2 \ra} e^{i \la \zeta_2, y - z_1 \ra } \int_{\rr d} e^{-i \la u-z_2, \eta-\zeta_1 \ra } g\left( y - z_1 + \frac{u}{2}, y - z_1 - \frac{u}{2} \right) \dd u \\
& = (2 \pi)^{d/2} e^{i \la \zeta_1, z_2 \ra} e^{i \left( \la \zeta_2, y - z_1 \ra + \la z_2, \eta-\zeta_1 \ra \right)} \cF_2 (g \circ \kappa)(y-z_1,\eta-\zeta_1) \\
& = (2 \pi)^{d/2} e^{i \la \zeta_1, z_2 \ra} T_{z_1,\zeta_1} M_{\zeta_2,z_2} \cF_2 (g \circ \kappa)(y,\eta).
\end{align*}
Insertion into \eqref{eq:STFTSchwartz1} gives the claimed conclusion.
\end{proof}
\begin{defn}
For $u\in\cS'(\rr {2d})$ and $g \in \cS(\rr {2d}) \setminus 0$ we denote
\begin{equation*}
\cT_g^\Delta u(z_1,z_2,\zeta_1,\zeta_2) = e^{-\frac{i}{2} \la \zeta_1-\zeta_2, z_1-z_2 \ra }\cT_g u (z_1,z_2,\zeta_1,\zeta_2)
\end{equation*}
for $( z_1,z_2; \zeta_1,\zeta_2) \in T^* \rr {2d}$.
\end{defn}
As a consequence of Proposition \ref{prop:symbchar} we obtain the following characterization of the Schwartz kernels of Weyl quantized Shubin operators.
\begin{prop}
\label{prop:LGchar}
Let $K\in\cS'(\rr {2d})$. Then $K$ is the Schwartz kernel of an operator of the form \eqref{shubop} with $a \in \Gamma_\rho^m(\rr {2d})$ if and only if for all $\alpha,\beta \in \nn d$ and $N \in \no$ and any $g \in \cS(\rr {2d}) \setminus 0$ we have
\begin{equation}\label{eq:kernelchar1}
\begin{aligned}
& |(\partial_{z_1} + \partial_{z_2})^\alpha (\partial_{\zeta_1} - \partial_{\zeta_2})^\beta \cT_g^\Delta K (z_1,z_2, \zeta_1, \zeta_2)| \\
& \qquad \lesssim \eabs{(z_1+z_2,\zeta_1-\zeta_2)}^{m- \rho |\alpha+\beta|}\eabs{(z_1-z_2,\zeta_1+\zeta_2)}^{-N}, \\
& \qquad \qquad ( z_1,z_2; \zeta_1,\zeta_2) \in T^* \rr {2d}.
\end{aligned}
\end{equation}
\end{prop}
\begin{rem}\label{rem:kernelchargeom}
Corresponding to Proposition \ref{prop:symbchargeom}, we may rephrase the estimates \eqref{eq:kernelchar1} for $\Gamma^m(\rr {2d})$ as
\begin{equation*}
|L_1 \cdots L_k \cT_g^\Delta K (z_1,z_2, \zeta_1, \zeta_2)|
\lesssim \eabs{(z_1+z_2,\zeta_1-\zeta_2)}^{m}\eabs{(z_1-z_2,\zeta_1+\zeta_2)}^{-N},
\end{equation*}
where $L_i$ are differential operators of the form
\begin{align*}
L_i & = (z_{1,j} + z_{2,j}) (\partial_{z_{1,n}} + \partial_{z_{2,n}}),
\quad
& L_i = (z_{1,j} + z_{2,j}) (\partial_{\zeta_{1,n}} - \partial_{\zeta_{2,n}}), \\
L_i & = (\zeta_{1,j} - \zeta_{2,j}) (\partial_{z_{1,n}} + \partial_{z_{2,n}}),
\quad \mbox{or} \quad
& L_i = (\zeta_{1,j} - \zeta_{2,j}) (\partial_{\zeta_{1,n}} - \partial_{\zeta_{2,n}})
\end{align*}
for $1 \leqs j,n \leqs d$ and $1 \leqs i \leqs k$.
\end{rem}
Proposition \ref{prop:LGchar} may be phrased in terms of the Schwartz kernel $K_{\cT_g a^w(x,D) \cT_h ^*}$ of the operator $\cT_g a^w(x,D) \cT_h^*$
for $a \in \Gamma_\rho^m(\rr {2d})$.
Let $u,v \in \cS(\rr d)$ and $g,h \in \cS(\rr d) \setminus 0$.
On the one hand
\begin{align*}
( a^w(x,D) u,v) & = \| g \|^{-2}_{L^2} \| h \|^{-2}_{L^2} (\cT_g a^w(x,D) \cT_h^* (\cT_h u), \cT_g v ) \\
& = \| g \|^{-2}_{L^2} \| h \|^{-2}_{L^2} ( K_{\cT_g a^w(x,D) \cT_h^*}, \cT_g v \otimes \overline{\cT_h u} )
\end{align*}
and on the other hand
\begin{equation*}
( a^w(x,D) u,v) = ( K_a, v \otimes \overline u ) = \| g \|^{-2}_{L^2} \| h \|^{-2}_{L^2} ( \cT_{g\otimes \overline{h}} K_a,\cT_{g\otimes \overline{h}}(v \otimes \overline u)).
\end{equation*}
Since
\begin{equation*}
(\cT_g v \otimes \overline{\cT_h u}) (z_1,\zeta_1, z_2, \zeta_2) = \cT_{g\otimes\overline{h}} (v \otimes \overline u) (z_1,z_2,\zeta_1,-\zeta_2)
\end{equation*}
this proves the formula
\begin{equation}
\label{eq:cTpkernelident}
K_{\cT_g a^w(x,D) \cT_h^*} (z_1,\zeta_1,z_2,-\zeta_2 )=\cT_{g\otimes\overline{h}} K_a (z_1,z_2,\zeta_1,\zeta_2).
\end{equation}
In view of the last identity and Proposition \ref{prop:LGchar} we have the following result.
Tataru \cite[Theorem 1]{Tataru} obtained a version of this characterization in the special case $\Gamma_0^0$ with $\alpha =\beta =0$.
\begin{cor}
We have $a \in \Gamma_\rho^m(\rr {2d})$ if and only if for all $\alpha,\beta \in \nn d$ and $N \in \no$ and any $g,h \in \cS(\rr d) \setminus 0$
\begin{multline}
\left| (\partial_{z_1} + \partial_{z_2})^\alpha (\partial_{\zeta_1} - \partial_{\zeta_2})^\beta \left( e^{- \frac{i}{2} \la z_1-z_2, \zeta_1-\zeta_2 \ra} K_{\cT_g a^w(x,D) \cT_h^*} (z_1,\zeta_1,z_2,-\zeta_2 )\right) \right| \\ \lesssim \eabs{(z_1+z_2,\zeta_1-\zeta_2)}^{m-\rho |\alpha+\beta|}\eabs{(z_1-z_2,\zeta_1+\zeta_2)}^{-N}, \quad ( z_1,z_2; \zeta_1,\zeta_2) \in T^* \rr {2d}.
\end{multline}
\end{cor}
\subsection{Continuity in Shubin--Sobolev spaces}
As an application of the previous characterization we give a simple proof of continuity of Shubin pseudodifferential operators in isotropic Sobolev spaces. The Shubin--Sobolev spaces $Q^s(\rr d)$, $s \in \ro$, introduced by Shubin \cite{Shubin1} (cf. \cite{Grochenig1,Nicola1}) can be defined as the modulation space $M^{2}_s(\rr d)$, that is
\begin{equation*}
Q^s (\rr d) = \{u \in \cS'(\rr d): \, \eabs{ \cdot }^s \cT_g u \in L^2(\rr {2d}) \}
\end{equation*}
where $g \in \cS(\rr d) \setminus 0$ is fixed and arbitrary, with norm
\begin{equation*}
\| u \|_{Q^s} = \left\| \eabs{ \cdot }^s \cT_g u \right\|_{L^2(\rr {2d})}.
\end{equation*}
The characterization of Shubin pseudodifferential operators given in Proposition \ref{prop:LGchar} yields a simple proof of their $Q^s$-continuity, cf. \cite{Tataru}.
\begin{prop}
If $a \in \Gamma_0^m(\rr {2d})$ then $a^w(x,D) :Q^{s+m}(\rr d)\rightarrow Q^s(\rr d)$ is continuous for all $s \in \ro$.
\end{prop}
\begin{proof}
Set $A=a^w(x,D)$.
We have for $u\in Q^{s+m}(\rr{d})$
\begin{align*}
\|A u\|_{Q^s}
& \asymp \sup_{\|v\|_{Q^{-s}} \leqs 1} |( A u,v)|
= \sup_{\|v\|_{Q^{-s}} \leqs 1} |( K_{\cTp A \cTp^*}, \cTp v \otimes \overline{\cTp u} )| \\
& = \sup_{\|v\|_{Q^{-s}} \leqs 1} |( \eabs{\cdot}^{s} \otimes \eabs{\cdot}^{-s-m} K_{\cTp A \cTp^*}, \underbrace{\eabs{\cdot}^{-s} \, \cTp v}_{\in L^2(\rr {2d})} \otimes \underbrace{\eabs{\cdot}^{s+m} \, \overline{ \cTp u}}_{\in L^2(\rr {2d})} )|.
\end{align*}
It remains to show that
\begin{equation}\label{eq:schwartzkernel1}
\eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} K_{\cTp A\cTp^*} (z_1,\zeta_1, z_2, \zeta_2)
\end{equation}
is the Schwartz kernel of a continuous operator on $L^2(\rr {2d})$.
First we deduce from \eqref{eq:cTpkernelident}, Proposition \ref{prop:LGchar} and \eqref{eq:Peetre} the estimate for any $N \in \no$
\begin{align*}
& \eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} |K_{\cTp A\cTp^*} (z_1,\zeta_1, z_2, \zeta_2)| \\
& = \eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} |\cTp K_a (z_1,z_2, \zeta_1, -\zeta_2)| \\
& \lesssim \eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} \eabs{(z_1+z_2, \zeta_1+\zeta_2)}^m \eabs{(z_1-z_2,\zeta_1-\zeta_2)}^{-N} \\
& \lesssim \eabs{(z_2,\zeta_2)}^{-m} \eabs{(z_1, \zeta_1)+(z_2,\zeta_2)}^m \eabs{(z_1,\zeta_1)-(z_2,\zeta_2)}^{|s| -N} \\
& \lesssim \eabs{(z_1,\zeta_1)-(z_2,\zeta_2)}^{|s|+|m| - N}.
\end{align*}
Then we apply Schur's test which gives, for $N>0$ sufficiently large,
\begin{align*}
\int_{\rr {2d}} \left|\eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} K_{\cTp A\cTp^*} (z_1,\zeta_1, z_2, \zeta_2) \right| \, \dd z_1 \, \dd \zeta_1 & \lesssim 1,\\
\int_{\rr {2d}} \left|\eabs{(z_1,\zeta_1)}^{s}\eabs{(z_2,\zeta_2)}^{-s-m} K_{\cTp A\cTp^*} (z_1,\zeta_1, z_2, \zeta_2) \right| \, \dd z_2 \, \dd \zeta_2 & \lesssim 1.
\end{align*}
This implies that \eqref{eq:schwartzkernel1} is the Schwartz kernel of an operator that is continuous on $L^2(\rr {2d})$.
\end{proof}
\section{$\Gamma$-conormal distributions}
\label{sec:gconorm}
The kernels of pseudodifferential operators with H\"ormander symbols are prototypes of conormal distributions, see \cite[Chapter~18.2]{Hormander0}. We introduce an analogous notion in the Shubin calculus. Before giving a precise definition we make some observations to clarify our idea.
Proposition \ref{prop:LGchar} may be rephrased using the diagonal and the antidiagonal
\begin{equation*}
\Delta = \{(x,x) \in \rr {2d}: \ x \in \rr d \}, \qquad \Delta^\perp = \{(x,-x) \in \rr {2d}: \ x \in \rr d \}
\end{equation*}
considered as linear subspaces of $\rr{2d}$.
Denoting Euclidean distance to a subset $V$ by $\dist(\cdot,V)$ we have
\begin{equation*}
\dist((x,y),\Delta) = \inf_{z \in \rr d} \left| (x,y) - (z,z) \right| = \frac{|x-y|}{\sqrt{2}}, \quad (x,y) \in \rr {2d},
\end{equation*}
and $\dist((x,y),\Delta^\perp) = |x+y|/\sqrt{2}$ for $(x,y) \in \rr {2d}$.
The inequalities \eqref{eq:kernelchar1} can thus be expressed, for $(x,\xi) \in T^* \rr {2d}$, as
\begin{equation}\label{eq:kernelchar2}
\begin{aligned}
\left| L_1 \cdots L_k \cT_g^\Delta K_a (x,\xi) \right |
& \lesssim \left( 1 + \dist((x,\xi),N(\Delta^\perp)) \right)^{m- \rho k} \\
& \qquad \times \left( 1 + \dist((x,\xi),N(\Delta)) \right)^{-N},
\end{aligned}
\end{equation}
where $N(\Delta) =\Delta \times \Delta^\perp \subseteq T^* \rr {2d}$ and $N(\Delta^\perp) = \Delta^\perp \times \Delta \subseteq T^* \rr {2d}$ denote the conormal spaces of $\Delta$ and $\Delta^\perp$ respectively, and
\begin{equation}\label{eq:Ljdef}
L_j = \langle b_j, \nabla_{x,\xi} \rangle
\end{equation}
is a first order differential operator with constant coefficients such that $b_j \in N(\Delta)$ for $j=1,2,\dots,k$, where $k,N \in \no$.
Observe that in \eqref{eq:kernelchar2} we may substitute $N(\Delta^\perp)$ by any linear subspace transversal to $N(\Delta)$, that is any vector subspace $V \subseteq T^* \rr{2d}$ such that $T^* \rr{2d} = N(\Delta) \oplus V$.
Note also that
\begin{equation*}
\frac{1}{2} \la x_1-x_2, \xi_1-\xi_2 \ra = \langle \pi_{\Delta^\perp} x ,\xi \rangle.
\end{equation*}
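In fact, writing $x = (x_1,x_2)$ and $\xi = (\xi_1,\xi_2)$, the orthogonal projection onto $\Delta^\perp$ is
\begin{equation*}
\pi_{\Delta^\perp} (x_1,x_2) = \frac{1}{2} \left( x_1 - x_2, \, x_2 - x_1 \right),
\end{equation*}
so that
\begin{equation*}
\la \pi_{\Delta^\perp} x, \xi \ra = \frac{1}{2} \la x_1 - x_2, \xi_1 \ra + \frac{1}{2} \la x_2 - x_1, \xi_2 \ra = \frac{1}{2} \la x_1 - x_2, \xi_1 - \xi_2 \ra.
\end{equation*}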
In the following we generalize \eqref{eq:kernelchar2} by replacing the diagonal $\Delta$ by a general linear subspace, and the dimension $2d$ is replaced by $d$.
For simplicity of notation we work with $\rho=1$ but this can be generalized to $0 \leqs \rho \leqs 1$.
\begin{defn}\label{def:Gconormal}
Suppose $Y \subseteq \rr d$ is an $n$-dimensional linear subspace, $0 \leqs n \leqs d$, let $N(Y) = Y \times Y^\perp$,
and let $V \subseteq T^* \rr d$ be a $d$-dimensional linear subspace such that $N(Y) \oplus V = T^* \rr d$.
Then $u \in \cS'(\rr d)$ is $\Gamma$-conormal to $Y$ of degree $m\in \ro$, denoted $u \in I^m_\Gamma(\rr d,Y)$, if for some $g \in \cS(\rr d) \setminus 0$ and for any $k,N \in \no$ we have
\begin{equation}
\label{eq:conormchar}
\begin{aligned}
\left| L_1 \cdots L_k \cT^Y_g u (x,\xi) \right |
& \lesssim \left( 1 + \dist((x,\xi),V) \right)^{m-k} \left( 1 + \dist((x,\xi),N(Y)) \right)^{-N}, \\
& \qquad (x,\xi) \in T^* \rr d,
\end{aligned}
\end{equation}
where
\begin{equation*}
\cT_g^Y u(x,\xi) = e^{-i \la \pi_{Y^\perp} x, \xi\ra} \cT_g u (x, \xi), \quad (x,\xi) \in T^* \rr d,
\end{equation*}
and
$L_j$, $j=1,\dots,k$, are first order differential operators defined by \eqref{eq:Ljdef} with $b_j \in N(Y)$.
\end{defn}
For a fixed $g \in \cS \setminus 0$ we equip $I^m_\Gamma(\rr d,Y)$ with a topology using seminorms defined as the best possible constants in \eqref{eq:conormchar} for $N,M \in \no$ fixed, maximized over $k\leqs M$ and all combinations of $b_j \in N(Y)$ belonging to a fixed and arbitrary basis.
As observed, the definition is independent of the linear subspace $V$ as long as $N(Y) \oplus V = T^* \rr d$, and often it is convenient to use $V = N(Y)^\perp = N(Y^\perp)$.
We will also see that the definition and the topology do not depend on $g \in \cS(\rr d) \setminus 0$ (see Corollary \ref{cor:windowindep2}).
If we pick coordinates such that $Y = \rr n \times \{0\} \subseteq \rr d$ then
\begin{align*}
N(Y) & = \{(x_1, 0, 0, \xi_2): \ x_1 \in \rr n, \, \xi_2 \in \rr {d-n}\} \subseteq T^* \rr d, \\
N(Y^\perp) & = \{(0, x_2, \xi_1, 0): \ x_2 \in \rr {d-n}, \, \xi_1 \in \rr {n}\} \subseteq T^* \rr d.
\end{align*}
We split variables as $x=(x_1,x_2) \in \rr d$, $x_1 \in \rr n$, $x_2 \in \rr {d-n}$.
The inequalities \eqref{eq:conormchar} reduce to
\begin{equation}
\label{eq:Gconormdefineq}
|\partial^\alpha_{x_1} \partial^\beta_{\xi_2} \left( e^{-i \la x_2, \xi_2 \ra }\cT_g u (x,\xi) \right)| \lesssim \eabs{(x_1,\xi_2)}^{m-|\alpha+\beta|} \eabs{(x_2,\xi_1)}^{-N}
\end{equation}
for $\alpha \in \nn n$, $\beta \in \nn {d-n}$ and $N \in \no$.
\begin{example}
By Proposition \ref{prop:LGchar} and \eqref{eq:kernelchar2} we have
\begin{equation*}
I^m_\Gamma(\rr {2d},\Delta) = \{ K_a \in \cS'(\rr {2d}): a \in \Gamma^m(\rr {2d}) \}.
\end{equation*}
\end{example}
\begin{example}
Write $x=(x_1,x_2)$, $x_1 \in \rr n$, $x_2 \in \rr {d-n}$, and consider $u = 1 \otimes \delta_0 \in \cS'(\rr d)$ with $1\in\cS'(\rr{n})$ and $\delta_0 \in\cS'(\rr {d-n})$.
The distribution $u$ is a prototypical example of a distribution $\Gamma$-conormal (and also conormal in the standard sense of \cite[Chapter~18.2]{Hormander0}) to the subspace $\rr n \times \{0\}$.
It is a Gaussian distribution in the sense of H\"ormander \cite{Hormander2} (cf. \cite{PRW1}). A computation yields
\begin{equation*}
\cTp u(x,\xi) = (2\pi)^{- \frac{d-n}{2}} \pi^{-\frac{d}{4}}e^{i \la x_2, \xi_2 \ra} e^{-\frac{1}{2} (|x_2|^2+|\xi_1|^2)}
\end{equation*}
so the inequalities \eqref{eq:Gconormdefineq} are satisfied for $m=0$. In particular $\delta_0 \in \cS'(\rr d)$ satisfies $\delta_0 \in I_{\Gamma}^0(\rr d, \{ 0 \})$.
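Assuming $\cTp$ is defined with the normalized Gaussian window $\varphi(y) = \pi^{-d/4} e^{-|y|^2/2}$, which matches the constants above, the computation can be sketched as follows. Writing $\varphi = \varphi_1 \otimes \varphi_2$ with standard Gaussians $\varphi_1$ on $\rr n$ and $\varphi_2$ on $\rr {d-n}$, the transform factorizes over the tensor product, and the integral representation used in the proof of Proposition \ref{prop:symbchar} gives
\begin{align*}
\cT_{\varphi_1} 1 (x_1,\xi_1)
& = (2\pi)^{-n/2} \int_{\rr n} e^{-i \la \xi_1, y \ra} \varphi_1(y) \, \dd y
= \pi^{-\frac{n}{4}} e^{-\frac{1}{2} |\xi_1|^2}, \\
\cT_{\varphi_2} \delta_0 (x_2,\xi_2)
& = (2\pi)^{-\frac{d-n}{2}} e^{i \la x_2, \xi_2 \ra} \varphi_2(-x_2)
= (2\pi)^{-\frac{d-n}{2}} \pi^{-\frac{d-n}{4}} e^{i \la x_2, \xi_2 \ra} e^{-\frac{1}{2} |x_2|^2},
\end{align*}
and the product of the two factors is the stated formula.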
\end{example}
Next we characterize the conormal distributions of which the latter example is a particular case.
Again we denote $x=(x_1,x_2)\in \rr d$, $x_1 \in \rr n$, $x_2 \in \rr {d-n}$.
\begin{lem}
\label{lem:IGchar}
If $u \in \cS'(\rr d)$ and $0 \leqs n \leqs d$ then $u\in I^m_\Gamma(\rr d, \rr n \times \{0\})$ if and only if
\begin{equation*}
u(x) = (2\pi)^{-(d-n)/2} \int_{\rr {d-n}} e^{i \la x_2,\theta \ra} a(x_1,\theta)\ \dd \theta
\end{equation*}
for some $a\in \Gamma^m(\rr d)$, that is $u=\cF_2^{-1}a$.
\end{lem}
\begin{proof}
Let $g\in\cS(\rr d)\setminus 0$. By Lemma \ref{lem:Fourier} we have
\begin{equation*}
\cT_g u(x_1,x_2,\xi_1,\xi_2) = e^{i \la x_2,\xi_2 \ra} \cT_{\cF_2 g} \cF_2 u (x_1,\xi_2,\xi_1, -x_2).
\end{equation*}
Set $a=\cF_2 u\in\cS^\prime(\rr d)$. Proposition \ref{prop:symbchar} implies that $a\in \Gamma^m(\rr d)$ if and only if the estimates \eqref{eq:Gconormdefineq} hold for all $\alpha \in \nn n$, $\beta \in \nn {d-n}$ and $N \in \no$.
By Definition \ref{def:Gconormal} this happens exactly when $u\in I^m_\Gamma(\rr d, \rr n \times \{0\})$.
\end{proof}
The extreme cases $n=0$ and $n=d$ yield
\begin{cor}
\label{cor:extremeconormcases}
$I^m_\Gamma(\rr d, \{0\}) = \cF \Gamma^m(\rr d)$ and $I^m_\Gamma(\rr d, \rr d) = \Gamma^m(\rr d)$.
\end{cor}
The proof of Lemma \ref{lem:IGchar} gives the following byproduct.
\begin{cor}\label{cor:windowindep1}
The topology on $I^m_\Gamma(\rr d, \rr n \times \{0\})$ does not depend on $g$.
\end{cor}
The next result treats how $\Gamma$-conormal distributions behave under orthogonal coordinate transformations.
\begin{lem}\label{lem:conormalcoord}
If $Y \subseteq \rr d$ is an $n$-dimensional linear subspace, $0 \leqs n \leqs d$,
and $B \in \On(d)$ then $B^*: I_\Gamma^m(\rr{d},Y) \rightarrow I_\Gamma^m(\rr d, B^t Y)$ is a homeomorphism.
\end{lem}
\begin{proof}
Let $g \in \cS(\rr d) \setminus 0$. We have
\begin{equation*}
\cT_g (B^*u) (x,\xi) = \cT_h u (B x, B \xi)
\end{equation*}
where $h = (B^t)^*g \in \cS(\rr d)$.
From this and $\pi_{(B^t Y)^\perp} = B^t \pi_{Y^\perp} B$ we obtain
\begin{equation*}
\cT_g^{B^t Y} (B^*u) (x,\xi) = \cT_h^Y u (B x, B \xi)
\end{equation*}
so $B^*u \in I_\Gamma^m(\rr d, B^t Y)$ follows from Definition \ref{def:Gconormal}, $N(B^t Y) = B^t Y \times B^t Y^\perp$ and
\begin{equation*}
\dist( (Bx,B\xi),N(Y)) = \dist((x,\xi),N(B^t Y)), \qquad (x,\xi) \in T^* \rr d.
\end{equation*}
It also follows that the map $u \rightarrow B^*u$ is continuous from $I_\Gamma^m(\rr d,Y)$ to $I_\Gamma^m(\rr d,B^tY)$ when the topologies for $I_\Gamma^m(\rr d,Y)$ and $I_\Gamma^m(\rr d,B^tY)$
are defined by means of $h \in \cS$ and $g \in \cS$, respectively.
\end{proof}
If we combine Lemma \ref{lem:conormalcoord} with Corollary \ref{cor:windowindep1} then we obtain the following generalization of the latter result.
\begin{cor}\label{cor:windowindep2}
If $Y \subseteq \rr d$ is an $n$-dimensional linear subspace, $0 \leqs n \leqs d$,
then the topology on $I^m_\Gamma(\rr d, Y)$ does not depend on $g$.
\end{cor}
We can also extract the following generalization of Lemma \ref{lem:IGchar} from Lemma \ref{lem:conormalcoord}.
\begin{prop}
\label{prop:IGchar}
Let $0 \leqs n \leqs d$ and let $Y \subseteq \rr d$ be an $n$-dimensional linear subspace.
Then $u \in \cS'(\rr d)$ satisfies $u \in I^m_\Gamma(\rr d,Y)$ if and only if
\begin{equation}\label{uoscint}
u(x) = \int_{\rr {d-n}} e^{i \la M_2^t x, \theta \ra} a(M_1^t x, \theta) \, \dd \theta
\end{equation}
for some $a \in \Gamma^m(\rr d)$, where $M_2 \in \M_{d \times (d- n)}( \ro)$ and $M_1 \in \M_{d \times n}( \ro)$ are matrices such that
$Y = \Ker M_2^t$ and
$U = [M_1 \ M_2] \in \GL(d,\ro)$.
\end{prop}
\begin{proof}
If $u \in I^m_\Gamma(\rr d,Y)$ then we can pick $U = [M_1 \ M_2] \in \On(d)$
where $M_1 \in \M_{d \times n}(\ro)$ and $M_2 \in \M_{d \times (d-n)}(\ro)$ such that $Y = \Ker M_2^t$,
which implies that $U^t Y = \rr n \times \{0\}$.
By Lemma \ref{lem:conormalcoord} we have $U^* u \in I^m_\Gamma(\rr d,\rr n \times \{0\})$,
and \eqref{uoscint} with $a \in \Gamma^m(\rr d)$ is then a consequence of Lemma \ref{lem:IGchar}.
Suppose on the other hand that \eqref{uoscint} holds for $a \in \Gamma^m(\rr d)$ and $U = [M_1 \ M_2] \in \GL(d,\ro)$. Set $Y = \Ker M_2^t$.
We may assume that $U = [M_1 \ M_2] \in \On(d)$, after modifying $a \in \Gamma^m(\rr d)$ by means of a linear invertible coordinate transformation, which is permitted since $\Gamma^m$ is invariant under such transformations.
By Lemma \ref{lem:IGchar} we have $U^* u \in I^m_\Gamma(\rr d,\rr n \times \{0\})$,
and Lemma \ref{lem:conormalcoord} then gives $u \in I^m_\Gamma(\rr d,Y)$.
\end{proof}
Since
\begin{equation*}
\bigcap_{m \in \ro} \Gamma^m(\rr d) = \cS(\rr d)
\end{equation*}
we have the following consequence.
\begin{cor}
If $0 \leqs n \leqs d$ and $Y \subseteq \rr d$ is an $n$-dimensional linear subspace then
\begin{equation*}
\cS(\rr d) \subseteq I^m_\Gamma(\rr d,Y).
\end{equation*}
\end{cor}
We also obtain a generalization of Lemma \ref{lem:conormalcoord}.
\begin{cor}
\label{cor:coordchange}
If $Y \subseteq \rr d$ is an $n$-dimensional linear subspace, $0 \leqs n \leqs d$,
and $B \in \GL(d,\ro)$ then $B^*: I_\Gamma^m(\rr{d},Y) \rightarrow I_\Gamma^m(\rr d, B^{-1} Y)$ is a homeomorphism.
\end{cor}
\begin{proof}
By Proposition \ref{prop:IGchar} we have $u \in I^m_\Gamma(\rr d,Y)$ if and only if $B^*u \in I^m_\Gamma(\rr d,B^{-1}Y)$.
It remains to show that $B^*$ is continuous.
By Lemma \ref{lem:conormalcoord} we may replace $Y$ with any $n$-dimensional linear subspace.
Using the singular value decomposition $B = U \Sigma V^t$, where $U,V \in \On(d)$ and $\Sigma$ is diagonal with positive entries,
the proof of the continuity of $B^*$ reduces, again using Lemma \ref{lem:conormalcoord}, to a proof of the continuity of
\begin{equation*}
\Sigma^*: I^m_\Gamma(\rr d,\rr n \times \{0\}) \rightarrow I^m_\Gamma(\rr d,\rr n \times \{0\}).
\end{equation*}
The latter continuity follows straightforwardly using the estimates
\eqref{eq:Gconormdefineq}.
\end{proof}
By Lemma \ref{lem:Fourier}
\begin{equation*}
\cT_{\wh g} \wh u (x,\xi)
= e^{i \la x,\xi \ra} \cT_g u(-\xi,x)
\end{equation*}
which gives
\begin{align*}
\cT_{\wh g}^{Y^\perp} \wh u (x,\xi)
= e^{i (\la x,\xi \ra - \la \pi_Y x,\xi \ra)} \cT_g u(-\xi,x)
= \cT_g^{Y} u(-\xi,x).
\end{align*}
Thus it follows from Definition \ref{def:Gconormal} that
$\cF$ maps $I^m_\Gamma(\rr{d},Y)$ into $I^m_\Gamma(\rr{d},Y^\perp)$ continuously.
\begin{prop}
\label{prop:FourierImg}
If $Y \subseteq \rr d$ is an $n$-dimensional linear subspace, $0 \leqs n \leqs d$,
then the Fourier transform is a homeomorphism from $I^m_\Gamma(\rr{d},Y)$ to $I^m_\Gamma(\rr{d},Y^\perp)$.
\end{prop}
\begin{example}
If $u \in I_\Gamma^m(\rr d, \rr n \times \{0\})$ then by Lemma \ref{lem:IGchar} there exists $a \in \Gamma^m(\rr d)$ such that
\begin{equation*}
u(x) = (2\pi)^{-(d-n)/2} \int_{\rr {d-n}} e^{i \la x_2,\theta \ra} a(x_1,\theta)\ \dd \theta.
\end{equation*}
If $B \in \GL(d,\ro)$ and
\begin{equation*}
B=\begin{pmatrix}
B_1 & 0 \\
0 & B_2
\end{pmatrix}
\end{equation*}
then the action of $B$ can be understood as an action on the symbol of $u$,
\begin{equation*}
B^*u(x) = (2\pi)^{-(d-n)/2} \int_{\rr {d-n}} e^{i \la x_2,\theta \ra} a(B_1 x_1,B_2^{-t}\theta) |B_2 |^{-1} \, \dd \theta.
\end{equation*}
\end{example}
\begin{rem}\label{rem:conormalgeom}
The estimates \eqref{eq:conormchar} in Definition \ref{def:Gconormal} can be translated to a geometric form, as in Remark \ref{rem:kernelchargeom} for Schwartz kernels of Shubin operators.
The result is
\begin{align*}
& \left| (\Pi_{N(Y)} (x,\xi))^\alpha (\Pi_{N(Y)} \partial_{x,\xi})^\beta \cT^Y_g u (x,\xi) \right | \\
& \qquad \lesssim \left( 1 + \dist((x,\xi),V) \right)^{m} \left( 1 + \dist((x,\xi),N(Y)) \right)^{-N},
\end{align*}
for $\alpha, \beta \in \nn {2d}$ such that $|\alpha|=|\beta|$, and $N \in \no$ arbitrary.
\end{rem}
\begin{rem}
Let $X$ be a smooth manifold of dimension $d$ and let $Y \subseteq X$ be a closed submanifold.
H\"ormander's space of conormal distributions $I^m(X,Y)$ with respect to $Y$ of order $m\in \ro$ consists, by \cite[Definition~18.2.6]{Hormander0}, of all $u\in \mathcal{D}'(X)$ such that
\begin{equation*}
L_1\dots L_k u \in B^{-m-d/4}_{2,\infty, \, \rm loc}(X), \quad k \in \no,
\end{equation*}
where $L_j$ are first order differential operators with coefficients tangential to $Y$,
and where $B^{-m-d/4}_{2,\infty, \, \rm loc}(X)$ is a Besov space.
Comparing this definition with the estimates defining $I_\Gamma^m(\rr d,Y)$ in Remark \ref{rem:conormalgeom} we see that the fact that we are working with isotropic symbol classes made it necessary to replace the local, Fourier-based Besov spaces with a global, isotropic version based on the transform $\cTp$, resembling a modulation space.
We note that the submanifold $Y$ is allowed to be nonlinear in $I^m(X,Y)$, as opposed to the linear
submanifold $Y \subseteq \rr d$ we use in $\Gamma$-conormal distributions $I_\Gamma^m(\rr d,Y)$.
\end{rem}
\subsection{Microlocal properties of $\Gamma$-conormal distributions}
The wave front set of a conormal distribution in $I^m(X,Y)$ is contained in the conormal bundle of the submanifold $Y$ \cite[Lemma~25.1.2]{Hormander0}.
The wave front set adapted to the Shubin calculus is the Gabor wave front set studied e.g. in \cite{Hormander1, Nakamura1, Rodino1,SW,SW2}, see also \cite{CS}.
It can be introduced using either pseudodifferential operators or the short-time Fourier transform. In the latter definition one may replace $\mathcal{V}_g u$ by $\cT_g u$ since they are identical up to a factor of modulus one.
\begin{defn}
\label{def:WFG}
If $u \in \cS'(\rr d)$ and $g\in\cS(\rr d)\setminus0$ then $(x_0,\xi_0)\in T^*\rr{d} \setminus{0}$ satisfies $(x_0,\xi_0) \notin \WF_G(u)$ if
there exists an open cone $V \subseteq T^* \rr d \setminus 0$ containing $(x_0,\xi_0)$, such that for any $N\in\no$ there exists $C_{V,g,N}>0$ such that $|\cT_g u(x,\xi)|\leqs C_{V,g,N} \eabs{(x,\xi)}^{-N}$ when $(x,\xi) \in V$.
\end{defn}
The definition does not depend on $g\in\cS(\rr d)\setminus0$.
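As a standard illustration of Definition \ref{def:WFG}, take $u = \delta_0$. Up to a positive constant depending on the normalization of $\cT_g$, and since $\cT_g$ and $\mathcal{V}_g$ agree in modulus,
\begin{equation*}
|\cT_g \delta_0 (x,\xi)| = |g(-x)|,
\end{equation*}
which decays rapidly in $x$ and is constant in $\xi$. In any open cone $V$ around a point $(x_0,\xi_0)$ with $x_0 \neq 0$ we have $|x| \gtrsim |(x,\xi)|$, so the required decay holds there, whereas no decay is available along $\{0\} \times (\rr d \setminus 0)$. Hence
\begin{equation*}
\WF_G(\delta_0) = \{0\} \times (\rr d \setminus 0) = N(\{0\}) \setminus 0,
\end{equation*}
in agreement with the conormal picture of Proposition \ref{prop:WFconormal} below.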
The Gabor wave front set transforms well under the metaplectic operators discussed in Section \ref{sec:prelim}, cf. \cite{Hormander1}, that is
\begin{equation*}
\WF_G(\mu(\chi) u) = \chi \left(\WF_G(u)\right), \quad u \in \cS'(\rr d), \quad \chi \in \Sp(d,\ro).
\end{equation*}
\begin{prop}\label{prop:WFconormal}
Let $Y \subseteq \rr d$ be an $n$-dimensional linear subspace, $0 \leqs n \leqs d$.
If $u \in I^m_\Gamma(\rr d,Y)$ then
\begin{equation*}
\WF_G(u) \subseteq N(Y).
\end{equation*}
\end{prop}
\begin{proof}
Suppose $(x,\xi) \notin N(Y)$. This means $(\pi_{Y^\perp} x,\pi_Y\xi) \neq 0$, so $(x,\xi) \in V$ where
the open conic set $V \subseteq T^* \rr d$ is defined by
\begin{equation*}
V = \{ (x,\xi) \in T^* \rr d : \ |(\pi_Y x, \pi_{Y^\perp} \xi)| < C |( \pi_{Y^\perp} x,\pi_Y \xi )| \}
\end{equation*}
for some $C>0$.
Using
\begin{equation*}
|(x,\xi)|^2 = |(\pi_Yx, \pi_{Y^\perp} \xi)|^2 + |( \pi_{Y^\perp} x,\pi_Y\xi)|^2,
\end{equation*}
$\dist(x,Y) = | \pi_{Y^\perp} x|$, $\dist(x,Y^\perp) = |\pi_Yx|$ and
\begin{equation*}
\dist^2( (x,\xi), N(Y) ) = \dist^2( x ,Y) + \dist^2(\xi,Y^\perp),
\end{equation*}
the result follows from Definition \ref{def:Gconormal} (with trivial operators $L_j$).
\end{proof}
\begin{cor}\label{cor:WFpsdokernel}
If $a \in \Gamma^m(\rr {2d})$ and $a^w(x,D)$ has Schwartz kernel $K_a$ then
\begin{equation*}
\WF_G(K_a) \subseteq N(\Delta) \subseteq T^* \rr {2d}.
\end{equation*}
\end{cor}
It is well known that Shubin pseudodifferential operators are microlocal with respect to $\WF_G$, that is if $a \in \Gamma^m( \rr {2d})$ and $u \in \cS' (\rr d)$ then
\begin{equation*}
\WF_G(a^w(x,D) u)\subseteq \WF_G(u),
\end{equation*}
see e.g. \cite{Hormander1,SW2}. We show that they also preserve $\Gamma$-conormality.
\begin{prop}
\label{prop:pseudocomp}
Let $Y \subseteq \rr d$ be an $n$-dimensional linear subspace, $0 \leqs n \leqs d$.
If $a \in \Gamma^{m'}(\rr {2d})$ then $a^w(x,D)$ is continuous from $I^m_\Gamma(\rr d,Y)$ to $I_\Gamma^{m+m'}(\rr d,Y)$.
\end{prop}
\begin{proof}
If $a \in \Gamma^{m'} (\rr {2d})$ and $U \in \On(d)$ then we have by symplectic invariance of the Weyl calculus
\eqref{metaplecticoperator}
\begin{equation*}
(U^t)^* a^w(x,D) U^* = b^w(x,D)
\end{equation*}
where $b(x,\xi) = a(U^t x, U^t \xi) \in \Gamma^{m'} (\rr {2d})$. By Lemma \ref{lem:conormalcoord} we may therefore assume that $Y = \rr n \times \{0\}$. The symplectic invariance also guarantees that
\begin{equation*}
\mathscr{F}_2^{-1} b^w(x,D) \mathscr{F}_2 = c^w(x,D)
\end{equation*}
with $c(x,\xi) = b(x_1,\xi_2,\xi_1,-x_2)\in \Gamma^{m'} (\rr {2d})$
where $x=(x_1,x_2)\in \rr d$, $x_1 \in \rr n$, $x_2 \in \rr {d-n}$.
To prove $a^w(x,D) u \in I_\Gamma^{m+m'}(\rr d,\rr n \times \{0\})$ for $a \in \Gamma^{m'} (\rr {2d})$ and $u\in I^m_\Gamma(\rr d,\rr n \times \{0\})$
is therefore by Lemma \ref{lem:IGchar} equivalent to proving that $a^w(x,D) u \in \Gamma^{m+m'}(\rr d)$ for $a \in \Gamma^{m'} (\rr {2d})$ and $u \in \Gamma^m(\rr d)$.
Let $a \in \Gamma^{m'} (\rr {2d})$, $u \in \Gamma^m(\rr d)$ and set $A= a^w(x,D)$. By Proposition \ref{prop:symbchar} it suffices to verify
\begin{equation*}
|\partial^\alpha_{x} \cTp Au (x,\xi)| \lesssim \eabs{x}^{m+m'-|\alpha|}\eabs{\xi}^{-N}, \quad (x,\xi) \in T^* \rr d,
\end{equation*}
for any $N \geqs 0$ and $\alpha \in \nn d$.
Let $N \geqs 0$ and $\alpha \in \nn d$.
Writing $\cTp Au=(\cTp A\cTp^*)\cTp u$ and using \eqref{eq:cTpkernelident} we are thus tasked with estimating $\partial^\alpha_{x}$ acting on
\begin{equation}\label{eq:kernelreformulation}
\begin{aligned}
\cTp Au (x,\xi) & = \int_{\rr {2d}} \cTp K_a(x,y,\xi,-\eta) \cTp u(y,\eta)\ \dd y \, \dd \eta \\
& = \int_{\rr {2d}} e^{\frac{i}{2} \la x-y,\xi+\eta \ra} \, \cTp^\Delta K_a(x,y,\xi,-\eta) \, \cTp u(y,\eta)\ \dd y \, \dd \eta.
\end{aligned}
\end{equation}
The integral \eqref{eq:kernelreformulation} converges due to the estimates
\begin{equation*}
\label{eq:Gineqker}
|\partial_y^\alpha \cTp u(y,\eta)|\lesssim \eabs{y}^{m-|\alpha|}\eabs{\eta}^{-N}, \quad y, \eta \in \rr d, \quad \alpha \in \nn d, \quad N \geqs 0,
\end{equation*}
which follows from Proposition \ref{prop:symbchar}, and the estimates
\begin{equation*}
\begin{aligned}
| (\partial_x + \partial_y)^\alpha \cTp ^\Delta K_a (x,y, \xi, -\eta)|
& \lesssim \eabs{(x+y,\xi + \eta)}^{m'-|\alpha|} \eabs{(x-y,\xi-\eta)}^{-N}, \\
& \qquad x,y,\xi,\eta \in \rr d, \quad \alpha \in \nn d, \quad N \geqs 0,
\end{aligned}
\end{equation*}
that are guaranteed by Proposition \ref{prop:LGchar}.
Writing $\partial_{x_j} = \partial_{x_j} + \partial_{y_j} - \partial_{y_j}$ for $1 \leqs j \leqs d$ and differentiating under the integral in \eqref{eq:kernelreformulation} we obtain by integration by parts for any $N_1,N_2 \geqs 0$
\begin{equation*}
\begin{aligned}
& \left|\partial^\alpha_{x} \cTp Au (x,\xi)\right| \\
& = \sum_{\beta \leqs \alpha} C_{\beta} \left| \int_{\rr {2d}} (\partial_x + \partial_y)^\beta \left(e^{\frac{i}{2} \la x-y,\xi+\eta \ra}\,\cTp^\Delta K_a(x,y,\xi,-\eta)\right) \, \partial^{\alpha-\beta}_y \cTp u(y,\eta)\ \dd y \, \dd \eta\right|\\
& = \sum_{\beta \leqs \alpha} C_{\beta} \left| \int_{\rr {2d}} e^{\frac{i}{2} \la x-y,\xi+\eta \ra} \, (\partial_x + \partial_y)^\beta \, \cTp^\Delta K_a(x,y,\xi,-\eta) \, \partial^{\alpha-\beta}_y \cTp u(y,\eta)\ \dd y \, \dd \eta\right|\\
& \lesssim \sum_{\beta \leqs \alpha} \int_{\rr {2d}} \left|(\partial_x + \partial_y)^\beta \cTp^\Delta K_a(x,y,\xi,-\eta) \, \partial^{\alpha-\beta}_y \cTp u(y,\eta)\right|\ \dd y \, \dd \eta,\\
& \lesssim \sum_{\beta \leqs \alpha} \int_{\rr {2d}} \eabs{(x+y,\xi + \eta)}^{m'-|\beta|} \eabs{(x-y,\xi-\eta)}^{-N_1} \, \eabs{y}^{m-|{\alpha-\beta}|}\eabs{\eta}^{-N_2} \, \dd y \, \dd \eta.
\end{aligned}
\end{equation*}
Finally we estimate
\begin{align*}
\int_{\rr {2d}} &\eabs{(x+y,\xi+\eta)}^{m'-|\beta|} \eabs{(x-y,\xi-\eta)}^{-N_1}\eabs{y}^{m-|\alpha-\beta|} \eabs{\eta}^{-N_2} \dd y \, \dd \eta\\
& = \int_{\rr {2d}} \eabs{(2x+y,2\xi+\eta)}^{m'-|\beta|}\eabs{(y,\eta)}^{-N_1}\eabs{y+x}^{m-|\alpha-\beta|} \eabs{\eta+\xi}^{-N_2} \dd y \, \dd \eta\\
&\lesssim \int_{\rr {2d}} \eabs{x}^{m'-|\beta|} \eabs{y}^{|m'|+|\beta|}\eabs{\xi}^{|m'|+|\beta|}\eabs{\eta}^{|m'|+|\beta|}\eabs{(y,\eta)}^{-N_1}\eabs{x}^{m-|\alpha-\beta|} \\
& \qquad \qquad \qquad \qquad \qquad \times \eabs{y}^{|m|+|\alpha|} \eabs{\xi}^{-N_2} \eabs{\eta}^{N_2}\dd y \, \dd \eta\\
&\lesssim \eabs{x}^{m'+m-|\alpha|} \eabs{\xi}^{|m'|+|\alpha|-N_2} \int_{\rr {2d}} \eabs{y}^{|m'|+|m| + 2 |\alpha|} \eabs{\eta}^{|m'|+|\alpha|+N_2}\eabs{(y,\eta)}^{-N_1} \dd y \, \dd \eta\\
&\lesssim \eabs{x}^{m'+m-|\alpha|} \eabs{\xi}^{-N},
\end{align*}
provided
$N_1 > N_2+2|m'|+|m|+3|\alpha| +2d$ and $N_2 \geqs N+|m'| + |\alpha|$.
This proves
\begin{equation*}
\left|\partial^\alpha_{x} \cTp Au (x,\xi)\right|\lesssim \eabs{x}^{m^\prime+m-|\alpha|} \eabs{\xi}^{-N}, \quad (x,\xi) \in T^* \rr d
\end{equation*}
and as a by-product of these estimates we obtain the claimed continuity.
\end{proof}
\begin{rem}
The proof shows that the result can be generalized. If $a \in \Gamma_\rho^{m'}(\rr {2d})$ and $u \in I_{\Gamma,\rho}^m(\rr d, Y)$
then $a^w(x,D) u \in I_{\Gamma,\rho}^{m+m'}(\rr d, Y)$, for $0 \leqs \rho \leqs 1$.
Here $I_{\Gamma,\rho}^m(\rr d, Y)$ is defined as in Definition \ref{def:Gconormal} with the modified estimate
\begin{equation*}
\left( 1 + \dist((x,\xi),V) \right)^{m- \rho k} \left( 1 + \dist((x,\xi),N(Y)) \right)^{-N}
\end{equation*}
in \eqref{eq:conormchar}.
\end{rem}
Since Proposition \ref{prop:pseudocomp} shows how $\Gamma$-conormality is preserved under the action of a pseudodifferential operator, we obtain the following result on conormal elliptic regularity:
\begin{cor}[Conormal elliptic regularity]
Suppose $u \in \cS'(\rr d)$ solves the pseudodifferential equation $a^w(x,D) u = f$ with $f \in I_\Gamma^m(\rr d, Y)$, where $a \in \Gamma^{m'}(\rr {2d})$ is globally elliptic, that is, satisfies
\begin{equation}\label{glell}
|a(x,\xi)| \geqs C \langle (x,\xi) \rangle^{m'}, \qquad |(x,\xi)| \geqs R
\end{equation}
for some $C,R>0$. Then $u \in I_\Gamma^{m-m'}(\rr d, Y)$.
\end{cor}
\begin{proof}
Under condition \eqref{glell}, $a^w(x,D)$ admits a parametrix $p^w(x,D)$ with $p \in \Gamma^{-m'}(\rr {2d})$ and $p^w(x,D)a^w(x,D) = I+R$, where $R$ is continuous $\cS' \rightarrow \cS$ \cite{Shubin1}. Then $u=p^w(x,D)f-Ru$ and hence $u \in I_\Gamma^{m-m'}(\rr {d}, Y)$.
\end{proof}
\section*{Acknowledgements}
The authors would like to express their gratitude to Luigi Rodino, Joachim Toft and Moritz Doll for helpful discussions on the subject.
\section{Introduction \label{section intro}}
Feature selection is indispensable for predicting clinical or biological outcomes from microbiome data as researchers are often interested in identifying the most relevant microbial features associated with a given outcome. This task can be particularly challenging in microbiome analyses, as the datasets are typically high-dimensional, underdetermined (the number of features far exceeds the number of samples), sparse (a large number of zeros are present), and compositional (the relative abundance of taxa in a sample sum to one). Current methodological research has been focusing on developing and identifying the best methods for feature selection that handle the above characteristics of microbiome data, however, methods are typically evaluated based on overall performance of model prediction, such as Mean Squared Error (MSE), R-squared or Area Under the Curve (AUC). While prediction accuracy is important, another possibly more biologically relevant criterion for choosing an optimal feature selection method is reproducibility, i.e. how reproducible are all discovered features in unseen (independent) samples? If a feature selection method is identifying true signals in a microbiome dataset, then we would expect those discovered features to be found in other similar datasets using the same method, indicating high reproducibility of the method. If a feature selection method yields a good model fit yet poor reproducibility, then its discovered features will mislead related biological interpretation. The notion of reproducibility for evaluating feature selection method seems intuitive and sensible, yet in reality we neither have access to multiple similar datasets to estimate reproducibility, nor have a well-defined mathematical formula to define reproducibility. 
The many available resampling techniques~\citep{efron1994introduction} enable us to utilize well-studied methods, for example bootstrapping, to create replicates of real microbiome datasets for estimating reproducibility. Moreover, given the burgeoning research in reproducibility estimation in the field of computer science~\citep{kalousis2005stability, kalousis2007stability, nogueira2018quantifying}, we can borrow their concept of Stability to approximate the reproducibility of feature selection methods in microbiome data analysis.
In this paper, we investigate the performance of a popular model prediction metric, MSE, and the proposed feature selection criterion, Stability, in evaluating four widely used feature selection methods in microbiome analysis (lasso, elastic net, random forests and compositional lasso)~\citep{tibshirani1996regression, zou2005regularization, breiman2001random, lin2014variable}. We evaluate them on both extensive simulations and experimental microbiome applications, with a focus on feature selection in the context of continuous outcomes. We find that Stability is a superior feature selection criterion to MSE as it is more reliable in discovering true and biologically meaningful signals. We thus suggest that microbiome researchers use a reproducibility criterion such as Stability, instead of a model prediction performance metric such as MSE, for feature selection in microbiome data analysis.
\section{Methods\label{method}}
\subsection{Estimation of stability}
The Stability of a feature selection method was defined as the robustness of the feature preferences it produces to differences in training sets drawn from the same generating distribution~\citep{kalousis2005stability}. If the subsets of chosen features are nearly static with respect to data changes, then this feature selection method is a \textit{stable} procedure. Conversely, if small changes to the data result in significantly different feature subsets, then this method is considered \textit{unstable}, and we should not trust the output as reflective of the true underlying structure influencing the outcome being predicted. In biomedical fields, this is a proxy for reproducible research, in the latter case indicating that the biological features the method has found are likely to be a data artifact, not a real clinical signal worth pursuing with further resources~\citep{lee2013robustness}. \citet{goh2016evaluating} recommend augmenting statistical feature selection methods with concurrent analysis on stability and reproducibility to improve the quality of selected features prior to experimental validation~\citep{sze2016looking, duvallet2017meta}.
While the intuition behind the concept of stability is simple, there is to date no single agreed-upon measure for precisely quantifying stability. Up to now, there have been at least~16 different measures proposed to quantify the stability of feature selection algorithms in the field of computer science~\citep{nogueira2017stability}. Given the variety of stability measures published, it is sensible to ask: which stability measure is most valid in the context of microbiome research? A multiplicity of methods for stability assessment may lead to publication bias in that researchers may be drawn toward the metric that extracts their hypothesized features or that reports their feature selection algorithm as more stable~\citep{boulesteix2009stability}. Under the perspective that a useful measure should obey certain properties that are desirable in the domain of application, and provide capabilities that other measures do not, Nogueira and Brown aggregated and generalized the requirements of the literature into a set of five properties~\citep{nogueira2017stability}. The first property requires the stability estimator to be fully defined for any collection of feature subsets, thus allowing a feature selection algorithm to return a varying number of features. The second property requires the stability estimator to be a strictly decreasing function of the average variance of the selection of each feature. The third property requires the stability estimator to be bounded by constants not dependent on the overall number of features or the number of features selected. The fourth property states that a stability estimator should achieve its maximum if and only if all chosen feature sets are identical. The fifth property requires that under the null model of feature selection, where we independently draw feature subsets at random, the expected value of a stability estimator should be constant. 
These five properties are desirable in any reasonable feature selection scenario, and are critical for useful comparison and interpretation of stability values. Among all the existing measures, only Nogueira’s stability measure (defined below) satisfies all five properties, thus we adopted this measure in the current work.
We assume a data set of $n$ samples $\{x_i,y_i\}_{i=1}^n$ where each $x_i$ is a $p$-dimensional feature vector and $y_i$ is the associated biological outcome. The task of feature selection is to identify a feature subset, of size $k<p$, that conveys the maximum information about the outcome $y$. An ideal approach to measure stability is to first take $M$ data sets drawn randomly from the same underlying population, to apply feature selection to each data set, and then to measure the variability in the $M$ feature sets obtained. The collection of the $M$ feature sets can be represented as a binary matrix $Z$ of size $M \times p$, where a row represents a feature set (for a particular data set) and a column represents the selection of a given feature over the $M$ data sets as follows
\begin{equation*}
Z =
\begin{pmatrix}
Z_{1,1} & \cdots & Z_{1,p} \\
\vdots & \ddots & \vdots \\
Z_{M,1} & \cdots & Z_{M,p}
\end{pmatrix}
\end{equation*}
Let $Z_{.f}$ denote the $f^{th}$ column of the binary matrix $Z$, indicating the selection of the $f^{th}$ feature among the $M$ data sets. Then $Z_{.f} \sim \mathrm{Bernoulli}(p_f)$, and $\hat p_f = \frac{1}{M} \sum_{i=1}^M Z_{i,f}$ is the observed selection probability of the $f^{th}$ feature. Nogueira defined the stability estimator as
\begin{equation}
\hat \Phi(Z) = 1- \frac{\frac{1}{p} \sum_{f=1}^p \sigma_f^2}{E [\frac{1}{p} \sum_{f=1}^p \sigma_f^2 |H_0 ]}
= 1-\frac{\frac{1}{p} \sum_{f=1}^p \sigma_f^2 }{\frac{\bar k}{p} (1- \frac{\bar k}{p})}
\end{equation}
where $\sigma_f^2= \frac{M}{M-1} \hat p_f (1-\hat p_f)$ is the unbiased sample variance of the selection of the $f^{th}$ feature, $H_0$ denotes the null model of feature selection (i.e. feature subsets are drawn independently at random), and $\bar k = \frac{1}{M} \sum_{i=1}^M \sum_{f=1}^p Z_{i,f}$ is the average number of selected features over the $M$ data sets.
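The estimator $\hat \Phi(Z)$ is straightforward to compute from the binary selection matrix $Z$. A minimal sketch (function and variable names are ours), which assumes $0 < \bar k < p$ so that the null-model denominator is nonzero:

```python
import numpy as np

def nogueira_stability(Z):
    """Nogueira's stability estimator for a binary selection matrix Z (M x p).

    Z[i, f] = 1 if feature f was selected on data set i.
    Returns a value <= 1; equals 1 iff all M feature sets are identical.
    Assumes 0 < k_bar < p so the null-model denominator is nonzero.
    """
    Z = np.asarray(Z, dtype=float)
    M, p = Z.shape
    p_hat = Z.mean(axis=0)                    # observed selection probability per feature
    # unbiased sample variance of each column: M/(M-1) * p_hat * (1 - p_hat)
    sigma2 = M / (M - 1) * p_hat * (1 - p_hat)
    k_bar = Z.sum(axis=1).mean()              # average number of selected features
    denom = (k_bar / p) * (1 - k_bar / p)     # expected variance under the null model
    return 1 - sigma2.mean() / denom

# identical selections across all runs -> stability 1
Z_identical = np.tile([1, 1, 0, 0, 0], (10, 1))
print(nogueira_stability(Z_identical))        # 1.0
```

When the selected subsets vary across runs, the column variances are positive and the estimator drops below 1.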
In practice, we usually only have one data set (not $M$), so a typical approach to measuring stability is to first take $M$ bootstrap samples of the provided data set, and then apply the procedure described in the previous paragraph. Other data sampling techniques can be used as well, but owing to the well-understood properties of the bootstrap and its familiarity to the community, we adopt the bootstrap approach.
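The bootstrap construction of the selection matrix $Z$ can be sketched as follows; the selector below is a toy correlation-threshold rule standing in for any of the feature selection methods considered (all names and the threshold are ours):

```python
import numpy as np

def bootstrap_selection_matrix(X, y, select_fn, M=100, seed=0):
    """Build the M x p binary selection matrix from M bootstrap resamples.

    select_fn(X_boot, y_boot) must return a boolean mask of length p;
    it stands in for any feature selection method (lasso, random forests, ...).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Z = np.zeros((M, p), dtype=int)
    for i in range(M):
        idx = rng.integers(0, n, size=n)      # sample n rows with replacement
        Z[i] = select_fn(X[idx], y[idx])
    return Z

# toy selector: keep features whose |correlation with y| exceeds 0.5
def corr_selector(X, y):
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.abs(r) > 0.5

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=80)
Z = bootstrap_selection_matrix(X, y, corr_selector, M=50, seed=1)
print(Z.shape)                                # (50, 10)
```

The resulting matrix $Z$ is then passed to the stability estimator of the previous subsection.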
\subsection{Four selected feature selection methods }
Lasso, elastic net, compositional lasso and random forests were chosen as benchmarked feature selection methods in this paper due to their wide application in microbiome community~\citep{knights2011supervised}. Lasso is a penalized least squares method imposing an $L_1$-penalty on the regression coefficients~\citep{tibshirani1996regression}. Owing to the nature of the $L_1$-penalty, lasso does both continuous shrinkage and automatic variable selection simultaneously. One limitation of lasso is that if there is a group of variables among which the pairwise correlations are very high, then lasso tends to select one variable from the group and ignore the others. Elastic net is a generalization of lasso, imposing a convex combination of the $L_1$ and $L_2$ penalties, thus allowing elastic net to select groups of correlated variables when predictors are highly
correlated~\citep{zou2005regularization}. Compositional lasso is an extension of lasso to compositional data analysis~\citep{lin2014variable}, and it is one of the most highly cited compositional feature selection methods in microbiome analysis~\citep{kurtz2015sparse, li2015microbiome, shi2016regression, silverman2017phylogenetic}. Compositional lasso, or the sparse linear log-contrast model, considers variable selection via $L_1$ regularization. The log-contrast regression model expresses the continuous outcome of interest as a linear combination of the log-transformed compositions subject to a zero-sum constraint on the regression vector, which leads to the intuitive interpretation of the response as a linear combination of log-ratios of the original composition. Suppose an $n \times p$ matrix $X$ consists of $n$ samples of the composition of a mixture with $p$ components, and suppose $Y$ is a response variable depending on $X$. The nature of composition makes each row of $X$ lie in a $(p-1)$-dimensional positive simplex $S^{p-1}=\{(x_1,\dots,x_p): x_j>0, j=1,\dots,p \text{ and } \sum_{j=1}^p x_j =1 \}$. This compositional lasso model is then expressed as
\begin{equation}
y=Z \beta + \epsilon, \sum_{j=1}^p \beta_j =0
\end{equation}
where $Z=(z_1,\dots,z_p)=(\log x_{ij})$ is the $n \times p$ design matrix, and $\beta= (\beta_1,\dots,\beta_p)^T$ is the $p$-vector of regression coefficients. Applying the $L_1$ regularization approach to this model then gives
\begin{equation}
\hat \beta = \operatorname{argmin}_{\beta} \Big(\frac{1}{2n} \|y - Z\beta\|_2^2 + \lambda \|\beta\|_1\Big), \text{ subject to } \sum_{j=1}^p \beta_j = 0
\end{equation}
where $\lambda>0$ is a regularization parameter, and $\|\cdot\|_2$ and $\|\cdot\|_1$ denote the $L_2$ and $L_1$ norms, respectively.
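A heuristic numerical sketch of the constrained estimator above can be obtained by alternating a proximal-gradient (soft-thresholding) step with a projection onto the zero-sum hyperplane. This is not the algorithm of \citet{lin2014variable}, and the projection destroys exact sparsity of the iterates, but it illustrates how the zero-sum constraint interacts with the $L_1$ penalty (all function names, the step-size rule and the value of $\lambda$ are our choices):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def compositional_lasso_sketch(Z, y, lam, n_iter=2000):
    """Alternate an ISTA step for (1/2n)||y - Z beta||^2 + lam*||beta||_1
    with projection onto the zero-sum hyperplane sum(beta) = 0."""
    n, p = Z.shape
    step = n / np.linalg.norm(Z, 2) ** 2      # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = Z.T @ (Z @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
        beta -= beta.mean()                   # enforce the zero-sum constraint
    return beta

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 8))
X = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)   # compositions on the simplex
Zlog = np.log(X)
beta_true = np.array([1.0, -1.0, 0, 0, 0, 0, 0, 0])
y = Zlog @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = compositional_lasso_sketch(Zlog, y, lam=0.05)
print(abs(beta_hat.sum()) < 1e-10)            # True: the zero-sum constraint holds
```

In practice a dedicated constrained-lasso solver should be used rather than this illustrative iteration.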
Random forests is regarded as one of the most effective machine learning techniques for feature selection in microbiome analysis \citep{belk2018microbiome, liu2017experimental, namkung2020machine, santo2019clustering, statnikov2013comprehensive}. Random forests is a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest \citep{breiman2001random}. Since random forests does not select features but only assigns importance scores to them, we choose features from random forests using Altmann's permutation test \citep{altmann2010permutation}, in which the response variable is randomly permuted $S$ times, and each time a new random forest is constructed and new importance scores are computed. The $S$ importance scores are then used to compute a p-value for each feature, defined as the fraction of the $S$ importance scores that are greater than the original importance score.
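Altmann's test can be sketched, for a continuous outcome, with scikit-learn's \texttt{RandomForestRegressor}; we add a standard add-one correction so that p-values are never exactly zero (the function name, forest size and value of $S$ are our choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_permutation_pvalues(X, y, S=100, seed=0):
    """Permute y S times, refit the forest, and compute for each feature the
    fraction of permuted importance scores exceeding the original score
    (with an add-one correction so p-values are never exactly zero)."""
    rng = np.random.default_rng(seed)
    observed = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y).feature_importances_
    exceed = np.zeros(X.shape[1])
    for _ in range(S):
        y_perm = rng.permutation(y)
        perm_imp = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_perm).feature_importances_
        exceed += perm_imp >= observed
    return (exceed + 1) / (S + 1)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
y = 2 * X[:, 0] + 0.1 * rng.normal(size=60)   # only feature 0 carries real signal
pvals = rf_permutation_pvalues(X, y, S=30)
print(pvals[0] < 0.05)                        # True
```

For classification outcomes \texttt{RandomForestClassifier} can be substituted for the regressor.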
\subsection{Simulation settings}
We compared the performance of the popular model prediction metric MSE and the proposed criterion Stability in evaluating four widely used feature selection methods for different data scenarios. We simulated features with Independent, Toeplitz and Block correlation structures for datasets with the number of samples and features in all possible combinations of $(50, 100, 500, 1000)$, resulting in the ratio of $p$ (number of features) over $n$ (number of samples) ranging from 0.05 to 20. Our simulated compositional microbiome data are an extension of the simulation settings from \citet{lin2014variable} as follows:
\begin{enumerate}
\item Generate an $n \times p$ data matrix $W=(w_{ij})$ from a multivariate normal distribution $N_p(\theta,\Sigma)$. To reflect the fact that the components of a composition in metagenomic data often differ by orders of magnitude, let $\theta = (\theta_j)$ with $\theta_j =\log(0.5p)$ for $j=1,\dots,5$ and $\theta_j=0$ otherwise. To describe different types of correlations among the components, we generated three general correlation structures: the Independent design, where covariates are independent from each other; the Toeplitz design, where $\Sigma =(\rho^ {|i-j|})$ with $\rho=0.1,0.3,0.5,0.7,0.9$; and the Block design with 5 blocks, where the intra-block correlations are 0.1, 0.3, 0.5, 0.7, 0.9, and the inter-block correlation is 0.09.
\item Obtain the covariate matrix $X=(x_{ij})$ by the transformation $x_{ij} = \frac{\exp(w_{ij})}{\sum_{k=1}^p \exp(w_{ik})}$, and the $n \times p$ log-ratio matrix $Z=\log(X)$, which follows a logistic normal distribution \citep{aitchison1982statistical}.
\item Generate the responses $y$ according to the model $y=Z \beta^*+ \epsilon$, $\sum_{j=1}^p \beta_j^* =0$, where $\epsilon \sim N(0, 0.5^2)$ and $\beta^*=(1,-0.8,0.6,0,0,-1.5,-0.5,1.2,0,\dots,0)^T$, indicating that only 6 features are real signals.
\item Repeat steps 1--3 100 times to obtain 100 simulated datasets for each simulation setting, and apply the desired feature selection algorithm with 10-fold cross-validation to each of the 100 simulated datasets. Specifically, each simulated dataset is split into training and test sets in the ratio of 8:2; 10-fold cross-validation is applied to the training set ($80\%$ of the data) for parameter tuning and variable selection, and model prediction (i.e. MSE) is then evaluated on the test set ($20\%$ of the data). Stability is measured according to Nogueira's definition based on the 100 subsets of selected features. Average MSE is calculated as the mean of the MSEs across the 100 simulated datasets, and the average false positive or false negative rate denotes the mean of the false positive or false negative rates across the 100 simulated datasets.
\end{enumerate}
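Steps 1--3 above can be sketched as follows for the Independent and Toeplitz designs (the function name and the small $n$ and $p$ used for illustration are ours):

```python
import numpy as np

def simulate_compositional(n, p, rho=0.5, design="toeplitz", seed=0):
    """One simulated data set following steps 1-3; returns (Z, y, beta_star)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(p)
    theta[:5] = np.log(0.5 * p)               # components differing by orders of magnitude
    if design == "toeplitz":
        Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    else:                                     # independent design
        Sigma = np.eye(p)
    W = rng.multivariate_normal(theta, Sigma, size=n)         # step 1
    X = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)      # step 2: compositions
    Z = np.log(X)
    beta_star = np.zeros(p)
    beta_star[:8] = [1, -0.8, 0.6, 0, 0, -1.5, -0.5, 1.2]     # step 3: sums to zero
    y = Z @ beta_star + rng.normal(0, 0.5, size=n)
    return Z, y, beta_star

Zlog, y, b = simulate_compositional(n=50, p=30)
print(Zlog.shape, y.shape)                    # (50, 30) (50,)
```

Repeating this over 100 seeds, then running a feature selection method with cross-validation on each replicate, implements step 4.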
In summary, a total of 176 simulation scenarios were generated, with 16 for Independent design, 80 for Toeplitz or Block design, and 100 replicated datasets were simulated for each simulation setting, resulting in 17,600 simulated datasets in total.
\section{Simulation results}
Given that the true numbers of false positive and false negative features are known in simulations, we can utilize their relationships with MSE and Stability to compare the reliability of MSE and Stability in evaluating feature selection methods. In theory, we would expect a positive correlation between MSE and the false positive or false negative rate, and a negative correlation between Stability and the false positive or false negative rate. This is because when the real signals are harder to select (i.e. the false positive or false negative rate increases), a feature selection method performs worse (i.e. MSE increases or Stability decreases). The first column in Figure~\ref{f:fig1} shows the relationship between MSE and false positive rate in the three correlation designs, and the second column in Figure~\ref{f:fig1} shows the relationship between Stability and false positive rate. In contrast to the random pattern in MSE vs. false positive rate (Figure~\ref{f:fig1} A-C-E), where a drastic increase in false positive rate can lead to little change in MSE (e.g. random forests), or a big drop in MSE can correspond to little change in false positive rate (e.g. elastic net), we see a clear negative correlation between Stability and false positive rate (Figure~\ref{f:fig1} B-D-F). Regarding the false negative rate, we also observe a random pattern for MSE and a meaningful negative correlation for Stability (Supplementary Figure 1). These results suggest that Stability is a more reliable evaluation criterion than MSE due to its closer reflection of the ground truth in the simulations (i.e. false positive \& false negative rates), and this holds irrespective of the feature selection method used, the features-to-sample-size ratio ($p/n$) or the correlation structure among the features.
\begin{figure}
\centerline{\includegraphics[width=5in]{fig1_mse_stab_fpr.png}}
\caption{Comparing the relationship between MSE and False Positive Rate vs. Stability and False Positive Rate in three correlation structures. Colored dots represent values from different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). Dot size indicates the features-to-sample-size ratio $p/n$.}
\label{f:fig1}
\end{figure}
Using the more reliable criterion Stability, we now investigate the best feature selection method in different simulation scenarios. Based on Stability, compositional lasso has the highest stability in ``easier'' correlation settings (Toeplitz 0.1--0.7 in Supplementary Figure 2, represented by Toeplitz 0.5 in Figure \ref{f:fig2} A due to their similar results; Block 0.9--0.3 in Supplementary Figure 3, represented by Block 0.5 in Figure \ref{f:fig2} C) for all combinations of $n$ (number of samples) and $p$ (number of features). Across all ``easier'' correlation scenarios, compositional lasso has an average stability of 0.76, with its minimum at 0.21 and its maximum close to 1 (0.97), while the second-best method lasso has an average stability of only 0.44 with a range from 0.09 to 0.89, and the average stabilities of random forests and elastic net fall as low as 0.24 and 0.17 respectively. In ``extreme'' correlation settings (Toeplitz 0.9 in Figure \ref{f:fig2} B or Block 0.1 in Figure \ref{f:fig2} D), compositional lasso no longer maintains the highest stability across all scenarios, but it still has the highest average stability of 0.42 in Toeplitz 0.9 (surpassing the second-best lasso by 0.09), and the second highest average stability in Block 0.1 (only 0.03 lower than the winner lasso). Regarding specific scenarios in ``extreme'' correlation settings, compositional lasso, lasso or random forests can be the best in different combinations of $p$ and $n$. For example, in both Toeplitz 0.9 and Block 0.1, with small $p$ ($p = 50$ or $100$), random forests has the highest stability ($\geq 0.8$) when $n$ is largest ($n=1000$), but lasso or compositional lasso surpasses random forests when $n$ is smaller than 1000, although all methods have poor stability ($\leq 0.4$) when $n \leq 100$.
This indicates that the best feature selection method based on Stability depends on the correlation structure among features, the number of samples and the number of features in each particular dataset; thus there is no single omnibus best, i.e. most stable, feature selection method.
\begin{figure}
\centerline{\includegraphics[width=5in]{fig2_methods_stab.png}}
\caption{Method comparisons based on Stability in representative correlation structures. Colored bars represent Stability values corresponding to a specific number of samples (x-axis) and number of features ($p$) for different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). Note that Toeplitz 0.1--0.7 has similar results to Toeplitz 0.5 (see Supplementary Figure 2), and Block 0.9--0.3 has similar results to Block 0.5 (see Supplementary Figure 3). Moreover, Stability equals zero when a method selects no features (e.g. random forests selects nothing when the number of samples equals 50).}
\label{f:fig2}
\end{figure}
How will results differ if we use MSE as the evaluation criterion? Using the extreme correlation settings (Toeplitz 0.9 and Block 0.1) as examples, random forests has the lowest MSEs for all combinations of $p$ and $n$ (Figure \ref{f:fig3} A-B). However, Figure \ref{f:fig3} C-D reveals that random forests has the highest false negative rates in all scenarios of Toeplitz 0.9 and Block 0.1, and its false negative rates can reach as high as the maximum of 1, indicating that random forests fails to pick up any real signal despite its low prediction error. Moreover, Figure \ref{f:fig3} E-F shows that random forests can have the highest false positive rates when $p$ is as large as 500 or 1000. All these highlight the danger of choosing an inappropriate feature selection method based on MSE, where the merit of high predictive power masks high false positive and false negative rates. On the other hand, the method with the lowest false positive rates (compositional lasso) (Figure~\ref{f:fig3} E-F) was found to have the worst performance by MSE (Figure~\ref{f:fig3} A-B), suggesting another pitfall of missing the optimal method when using MSE as the evaluation criterion.
\begin{figure}
\centerline{\includegraphics[width=5in]{fig3_methods_mse.png}}
\caption{Method comparisons based on MSE in extreme correlation structures (Toeplitz 0.9 for A,C,E and Block 0.1 for B,D,F). Colored bars represent MSE (A-B), False Negative Rates (C-D), and False Positive Rates (E-F) corresponding to a specific number of samples (x-axis) and features ($p$) for different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). Note that false positive rates are not available for random forests when number of samples equals 50 because it chooses zero features. }
\label{f:fig3}
\end{figure}
The use of point estimates alone to compare feature selection methods, without incorporating the variability in these estimates, could be misleading. Hence, as a next step, we evaluate the reliability of MSE and Stability across methods using a hypothesis testing framework. This is demonstrated with the cases of $n = 100$ and $p = 1000$ for Toeplitz 0.5 and Block 0.5, where compositional lasso is found to be the best feature selection method based on Stability, while random forests is the best based on MSE. We use the bootstrap to construct $95\%$ confidence intervals comparing compositional lasso vs. random forests based on Stability or MSE. For each simulated dataset (100 in total for Toeplitz 0.5 or Block 0.5), we generate 100 bootstrapped datasets and apply the feature selection methods to each bootstrapped dataset. Then for each simulated dataset, Stability is calculated based on the 100 subsets of selected features from the bootstrapped replicates, and the variance of Stability is measured as its variability across the 100 simulated datasets. Since MSE can be obtained for each simulated dataset without bootstrapping, we use the variability of MSE across the 100 simulated datasets as its variance. Based on the 95\% CI for the difference in Stability between compositional lasso and random forests (Table 1), we see that compositional lasso is better than random forests in terms of the Stability index, and not statistically inferior to random forests in terms of MSE despite its inferior raw value. This suggests that Stability has higher precision (i.e. lower variance). Conversely, MSE has higher variance, which results in wider confidence intervals and its failure to differentiate methods.
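The percentile-bootstrap comparison just described can be sketched compactly. Below is an illustrative helper (our own, not the paper's code; `values_a`/`values_b` stand for per-simulated-dataset criterion values, e.g. Stability or MSE, for the two methods):

```python
# Minimal sketch of a percentile-bootstrap CI for the difference in a
# criterion between two methods evaluated on the same simulated datasets.
import random

def bootstrap_ci_diff(values_a, values_b, n_boot=1000, alpha=0.05, seed=0):
    """Percentile CI for mean(values_a) - mean(values_b), resampling
    dataset indices with replacement (paired across the two methods)."""
    rng = random.Random(seed)
    n = len(values_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample dataset indices
        da = sum(values_a[i] for i in idx) / n
        db = sum(values_b[i] for i in idx) / n
        diffs.append(da - db)
    diffs.sort()
    return (diffs[int(alpha / 2 * n_boot)],
            diffs[int((1 - alpha / 2) * n_boot) - 1])
```

A confidence interval excluding 0 flags a statistically significant difference between the two methods under the chosen criterion.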
\begin{table}[h]
\centering
\caption{Hypothesis testing using the bootstrap to compare compositional lasso (CL) with random forests (RF) based on Stability or MSE in two simulation scenarios (* indicates statistical significance).}
\label{t:table1}
\begin{tabularx}{\textwidth}{ |X|X|X| }
\hline
Example ($n = 100$ \& $p = 1000$) & Estimated mean difference (CL $-$ RF) in Stability index with 95\% CI & Estimated mean difference (CL $-$ RF) in MSE with 95\% CI \\
\hline
Toeplitz 0.5 & 0.22 (0.19, 0.28)* & 0.23 (-0.62, 1.36) \\
\hline
Block 0.5 & 0.23 (0.17, 0.29)* & 0.44 (-0.27, 1.57) \\
\hline
\end{tabularx}
\end{table}
\section{Experimental microbiome data applications\label{data}}
To compare the reliability of MSE and Stability in choosing feature selection methods for microbiome data applications, two experimental microbiome datasets were chosen to cover common sample types (human gut and environmental soil samples) and the scenarios of $p \approx n$ and $p \gg n$ (where $p$ is the number of features and $n$ is the number of samples).
The human gut dataset represents a cross-sectional study of 98 healthy volunteers to investigate the connections between long-term dietary patterns and gut microbiome composition \citep{wu2011linking}, and we are interested in identifying a subset of important features associated with BMI, which is a widely-used gauge of human body fat and associated with the risk of diseases. The soil dataset contains 88 samples collected from a wide array of ecosystem types in North and South~America~\citep{lauber2009pyrosequencing}, and we are interested in discovering microbial features associated with the pH gradient, as pH was reported to be a strong driver behind fluctuations in the soil microbial communities~\citep{morton2017balance}. Prior to our feature selection analysis, the same filtering procedures were applied to the microbiome count data from these two datasets, where only the microbes with a taxonomy assignment at least to genus level or lower were retained for interpretation, and microbes present in fewer than 1\% of the total samples were removed. Moreover, the count data were transformed into compositional data after replacing any zeroes by the maximum rounding error~0.5~\citep{lin2014variable}.
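The filtering and zero-replacement steps above are mechanical enough to sketch in code. The following is a minimal illustration (our own, not the authors' released pipeline; the genus-level taxonomy filter is omitted, and `preprocess` is a hypothetical name) for a samples-by-taxa count table:

```python
# Sketch of the preprocessing described above: drop rare taxa, replace
# zeros by the maximum rounding error 0.5, and close each sample to a
# composition. Taxonomy-based filtering is omitted for brevity.
import pandas as pd

def preprocess(counts: pd.DataFrame, min_prevalence=0.01, pseudo=0.5):
    prevalence = (counts > 0).mean(axis=0)            # fraction of samples with each taxon
    kept = counts.loc[:, prevalence >= min_prevalence]
    filled = kept.where(kept > 0, pseudo)             # zero replacement by 0.5
    return filled.div(filled.sum(axis=1), axis=0)     # closure: rows sum to 1
```

Each row of the returned table is a composition summing to 1, as required by compositional methods such as compositional lasso.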
Comparisons of feature selection methods in these two microbiome datasets are shown in Table 2. They are consistent with the simulation results, in that the best method chosen by MSE or Stability in each dataset can be drastically different. Based on MSE, random forests is the best in the BMI Gut dataset, while being the worst based on Stability. Similarly, in the pH Soil dataset, random forests is the second best method according to MSE, yet the worst in terms of Stability. If we use Stability as the evaluation criterion, then Elastic Net is the best in the BMI Gut and compositional lasso is the best in the pH Soil, yet both methods would be the worst if MSE were used as the evaluation criterion. One important note is that the Stability values in these two experimental microbiome datasets are low: none of the feature selection methods exceeds a Stability of 0.4, indicating the challenging task of feature selection in real microbiome applications. However, this possibility of low Stability values was already reflected in our simulated ``extreme'' correlation scenarios. Another important note, which might be counter-intuitive, is that the dataset with a high $p/n$ ratio (pH Soil) has higher Stability than the dataset with a $p/n$ ratio close to 1 (i.e. similar $p$ \& $n$ values) (BMI Gut). This might be explained by clearer microbial signals in environmental samples than in human gut samples, but it also highlights the impact of the dataset itself, whose characteristics cannot be easily summarized by $p$ and $n$ alone, on feature selection results. Correlation structures between features, as considered in our simulations, could play an important role, and there may be many other unmeasured factors involved as well.
\begin{table}[h]
\centering
\caption{Method comparisons based on Stability index and MSE in experimental microbiome datasets (methods ordered from best to worst MSE/Stability performance, followed by raw MSE/Stability values in parentheses).}
\label{t:table2}
\begin{tabularx}{\textwidth}{ |c|c|X|X| }
\hline
Dataset & $n \times p$ ($p/n$) & MSE \newline (lower is better) & Stability \newline (higher is better) \\
\hline
BMI Gut & 98 $\times$ 87 (0.9) &
Random forests (4.99) \newline
Compositional lasso (21.59) \newline
Lasso (24.07) \newline
Elastic Net (25.33) &
Elastic Net (0.23) \newline
Compositional lasso (0.22) \newline
Lasso (0.14) \newline
Random forests (0.02)\\
\hline
pH Soil & 89 $\times$ 2183 (24.5) & Elastic Net (0.23) \newline
Random forests (0.26) \newline
Lasso (0.34) \newline
Compositional lasso (0.46) &
Compositional lasso (0.39) \newline
Lasso (0.31) \newline
Elastic Net (0.16) \newline
Random forests (0.04)\\
\hline
\end{tabularx}
\end{table}
Apart from the comparisons based on point estimates, we can further compare MSE and Stability with hypothesis testing using a nested bootstrap~\citep{wainer2018nested}. The outer bootstrap generates 100 bootstrapped replicates of the experimental microbiome datasets, and the inner bootstrap generates 100 bootstrapped datasets for each replicate from the outer bootstrap. Feature selection is performed on each inner bootstrapped dataset with 10-fold cross-validation after an 80:20 split into training and test sets. The variance of Stability is calculated based on the Stability values across the outer bootstrap replicates, and the variance of MSE is calculated across both inner and outer bootstrap replicates, since MSE is available for each bootstrap replicate while Stability has to be estimated from feature selection results across multiple bootstrap replicates. Using the BMI Gut and pH Soil datasets, Table 3 confirms the simulation results: raw value differences in MSE do not indicate statistical differences, yet differences in Stability do help to differentiate methods owing to its higher precision. A comparison between the observed differences in Table 2 and the estimated mean differences from the bootstrap in Table 3 further confirms this discovery. Compared to the estimated mean differences between compositional lasso and random forests based on Stability (0.27 in the BMI Gut and 0.36 in the pH Soil), the observed differences (0.2 in the BMI Gut and 0.35 in the pH Soil) differ by 26\% in the BMI Gut and 3\% in the pH Soil. However, the discrepancy is much more drastic for MSE: compared to the estimated mean differences between compositional lasso and random forests based on MSE (11.8 in the BMI Gut and 0.08 in the pH Soil), the observed differences (16.6 in the BMI Gut and 0.2 in the pH Soil) differ by 41\% and 160\% in each dataset respectively.
Hence, Stability is consistently shown to be more appropriate than MSE in experimental data applications, just as in the simulations.
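The nested bootstrap above reduces to a simple double loop. Below is a schematic sketch (ours, not the authors' code; `select_features` and `stability` are hypothetical callables standing in for the cross-validated selection step and the Stability computation):

```python
# Schematic of the nested bootstrap: one Stability value per outer
# replicate, one MSE value per inner replicate.
import random

def nested_bootstrap(data, select_features, stability,
                     n_outer=100, n_inner=100, seed=0):
    rng = random.Random(seed)
    resample = lambda d: [d[rng.randrange(len(d))] for _ in range(len(d))]
    stab_values, mse_values = [], []
    for _ in range(n_outer):
        outer = resample(data)                     # outer bootstrap replicate
        selections, errors = [], []
        for _ in range(n_inner):
            inner = resample(outer)                # inner bootstrap replicate
            feats, err = select_features(inner)    # selection + held-out MSE
            selections.append(feats)
            errors.append(err)
        stab_values.append(stability(selections))  # Stability per outer replicate
        mse_values.extend(errors)                  # MSE per inner replicate
    return stab_values, mse_values
```

The variances reported in the text are then taken across `stab_values` and `mse_values` respectively.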
\begin{table}[h]
\centering
\caption{Hypothesis testing using the bootstrap to compare compositional lasso (CL) with random forests (RF) based on Stability or MSE in two experimental microbiome datasets (* indicates statistical significance).}
\label{t:table3}
\begin{tabularx}{\textwidth}{ |X|X|X| }
\hline
Dataset & Estimated mean difference (CL $-$ RF) in Stability index with 95\% CI & Estimated mean difference (CL $-$ RF) in MSE with 95\% CI \\
\hline
BMI Gut & 0.27 (0.17, 0.34)* & 11.8 (-2.1, 41.2) \\
\hline
pH Soil & 0.36 (0.28, 0.44)* & 0.08 (-0.28, 0.95) \\
\hline
\end{tabularx}
\end{table}
\section{Discussion}
Reproducibility is imperative for any scientific discovery, but there is growing alarm about irreproducible research results. According to a 2016 survey of 1,576 researchers by the Nature Publishing Group, more than 70\% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments \citep{baker20161}. This ``reproducibility crisis'' in science affects microbiome research as much as any other area, and microbiome researchers have long struggled to make their research reproducible \citep{schloss2018identifying}. Great efforts have been made towards setting protocols and standards for microbiome data collection and processing \citep{thompson2017communal}, but more could be achieved using statistical techniques for reproducible data analysis. Microbiome research findings rely on statistical analysis of high-dimensional data, and feature selection is an indispensable component for discovering biologically relevant microbes. In this article, we focus on discovering a reproducible criterion for evaluating feature selection methods rather than developing a better feature selection method. We question the common practice of evaluating feature selection methods based on the overall performance of model prediction~\citep{knights2011human}, such as Mean Squared Error (MSE), as we detect a stark contrast between prediction accuracy and reproducible feature selection. Instead, we propose to use a reproducibility criterion such as Nogueira's Stability measurement~\citep{nogueira2017stability} for identifying the optimal feature selection method.
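Nogueira's Stability measurement has a compact closed form for a binary selection matrix $Z$ with one row per repeated selection run and one column per feature. Below is a minimal transcription of the published estimator (our sketch; `nogueira_stability` is our own name):

```python
# Nogueira et al. (2017) stability estimator for a binary selection matrix Z.
def nogueira_stability(Z):
    M, d = len(Z), len(Z[0])
    p = [sum(row[f] for row in Z) / M for f in range(d)]  # selection frequency p_f
    s2 = [M / (M - 1) * pf * (1 - pf) for pf in p]        # unbiased variance per feature
    kbar = sum(sum(row) for row in Z) / M                 # mean number of selected features
    denom = (kbar / d) * (1 - kbar / d)                   # undefined if all/none selected
    return 1 - (sum(s2) / d) / denom

# Identical selections in every run give the maximum value:
# nogueira_stability([[1, 1, 0, 0]] * 5) == 1.0
```

Values near 1 indicate that the same feature subset is recovered across repeated runs; the estimator can be negative when selections are more variable than chance.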
In both our simulations and experimental microbiome data applications, we have shown that Stability is a preferred evaluation criterion over MSE for feature selection, because of its closer reflection of the ground truth (false positive and false negative rates) in simulations, and its better capacity to differentiate methods due to its higher precision. Hence, if the goal is to identify the underlying true biological signal, we propose to use a reproducibility criterion like Stability instead of a prediction criterion like MSE to choose feature selection algorithms for microbiome data applications. MSE is better suited for problems where prediction accuracy alone is the focus.
The strength of our work lies in the comparisons of widely used microbiome feature selection methods using extensive simulations, and experimental microbiome datasets covering various sample types and data characteristics. The comparisons are further confirmed with non-parametric hypothesis testing using the bootstrap. Although Nogueira et al. were able to derive the asymptotic normal distribution of Stability~\citep{nogueira2017stability}, their independence assumption for the two-sample test might not be realistic, because the two feature selection methods are applied to the same dataset. Hence our non-parametric hypothesis testing is an extension of their two-sample test for Stability. However, our current usage of the bootstrap, especially the nested bootstrap approach for the experimental microbiome data applications, is computationally expensive; further theoretical development on hypothesis testing for reproducibility would facilitate more efficient method comparisons based on Stability. Last but not least, although our paper is focused on microbiome data, we expect the superiority of reproducibility criteria over prediction accuracy criteria in feature selection to apply to other types of datasets as well. We thus recommend that researchers use Stability as an evaluation criterion when performing feature selection in order to yield reproducible results.
\section*{Acknowledgements}
We gratefully acknowledge support from IBM Research through the AI Horizons Network, and the UC San Diego AI for Healthy Living program in partnership with the UC San Diego Center for Microbiome Innovation. This work was also supported in part by CRISP, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. LN was partially supported by NIDDK 1R01DK110541-01A1.
\section*{Supporting Information}
The code that implements the methodology, simulations and experimental microbiome data applications is available in the GitHub repository https://github.com/knightlab-analyses/stability-analyses.
\bibliographystyle{biom}
\section*{Supplementary Figures}
\begin{suppfigure}[h]
\centerline{\includegraphics[width=5in]{s1_mse_stab_fnr.png}}
\caption{Compare the relationship between MSE and False Negative Rate vs. Stability and False Negative Rate in three correlation structures. Colored dots represent values from different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple). Size of dots indicate features-to-sample size ratio $p/n$.}
\label{f:figS1}
\end{suppfigure}
\begin{suppfigure}
\centerline{\includegraphics[width=5in]{s2_stab_easy_toe.png}}
\caption{Method comparisons based on Stability in easier Toeplitz correlation structures from 0.1 to 0.7. Colored bars represent Stability values corresponding to a specific number of samples (x-axis) and number of features ($p$) for different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple). Compositional Lasso has the highest Stability in all cases across all correlation strengths. Note that Stability equals zero when no features are selected (e.g. random forests selects nothing when the number of samples equals 50).}
\label{f:figS2}
\end{suppfigure}
\begin{suppfigure}
\centerline{\includegraphics[width=5in]{s3_stab_easy_block.png}}
\caption{Method comparisons based on Stability in easier Block correlation structures from 0.9 to 0.3. Colored bars represent Stability values corresponding to a specific number of samples (x-axis) and number of features ($p$) for different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple). Compositional Lasso has the highest Stability in all cases across all correlation strengths. Note that Stability equals zero when no features are selected (e.g. random forests selects nothing when the number of samples equals 50).}
\label{f:figS3}
\end{suppfigure}
\label{lastpage}
\end{document}
\section{Introduction}
Theoretical models of the UV line-driven winds of hot stars \citep{1988ApJ...335..914O}
predict that internal instabilities should produce
high-density clumps and low-density voids, as the hypersonic acceleration cannot be maintained smoothly.
Also, in CAK theory \citep*{1975ApJ...195..157C},
the acceleration of the wind depends nonlinearly on the mass flux, so variations
in the mass flux launched from the stellar photosphere also produce significant velocity and density variations as the wind accelerates.
Both of these types of wind perturbations will steepen into shocks,
compressing and driving dense clumps \citep{2018A&A...611A..17S},
and producing observable hard X-ray emission (e.g., Oskinova 2016 and references therein).
Direct observational evidence that this is occurring even in single stars already exists in optical emission
lines of dense winds, in moving emission bumps
\citep[e.g.,][]{1988ApJ...334.1038M, 1999ApJ...514..909L} and reduced levels of free-electron scattering in the line wings \citep{1991A&A...247..455H, 1998A&A...335.1003H}.
For all these reasons, the concept of a constant mass-flux, smoothly accelerating hypersonic flow is likely not realized in actual winds.
Nevertheless, most current empirical analyses of stellar atmospheres and winds rely on the use of spherical non-LTE
model atmospheres. In state-of-art analyses the stellar and wind parameters, such as
$\dot{M}$, are derived by fitting resonance and recombination lines simultaneously.
P\,Cygni lines (often high opacity lines such as C\,{\sc iv}) are predominately used to determine terminal wind velocities $v_{\infty}$
(e.g., \citealt{1990ApJ...361..607P}), whereas \ensuremath{\dot{M}}\ in the stronger winds of O-stars
(mass-loss rates above $10^{-7}$ \ensuremath{\msun\,\mathrm{yr}^{-1}}\ \citep[e.g.][]{2007A&A...465.1003M})
is often inferred using \ensuremath{\mathrm{H}\alpha} emission.
The latter has the advantage of arising from what is incontrovertibly the dominant wind
species in hot main-sequence stars, ionized H \citep[e.g.][]{1996A&A...305..171P}.
Yet there is a notable difference between UV resonance lines
and (optical) recombination lines such as H$\alpha$ (see left-hand side vs right-hand side of Fig.\,\ref{f_oski}).
The resonance lines involve scattering of pre-existing UV stellar continuum, and their optical
depth is $\propto \rho$, while recombination involves photon creation by the collision of
two particles, such that line emissivities are $\propto \rho^2$.
The latter also holds for the free-free radio continuum, so crucial indicators of global wind mass-loss rates
are sensitive to the local density variations induced by clumping.
Thus, the use of these ``density squared'' diagnostics allows
inhomogeneities to mimic a higher mass-loss rate.
Wind inhomogeneities can also induce strong variations in the local velocity gradients (an effect
termed ``vorosity'' in Sundqvist et al. 2014), which allow more of the stellar continuum to penetrate through the
wind opacity of otherwise saturated UV resonance lines, which can mimic a \textit{lower} mass-loss rate
in absorption-line diagnostics.
Hence, the dual analysis of UV plus optical lines is affected by clumping
in subtle and important ways that must be informed by new types of observations.
This has significance not only for our understanding of the wind dynamics themselves, but also for establishing the evolutionary
consequences of mass loss during phases of continuous wind outflow, especially the main sequence.
The latter is the longest-lived phase of stellar evolution, and happens first, establishing the initial conditions for
all subsequent evolutionary phases---including the ultimate supernova of the massive star.
Furthermore, the high-mass end of the main sequence has the most profound impact on galactic evolution because it reprocesses the stellar
material on such short timescales, all while lower-mass stars are still in their formative stages.
Thus, to understand the evolution of massive stars and their impact on the host galaxy in population synthesis studies,
we must first accurately determine the mass lost in these decisive initial phases (Langer 2012),
and track how it evolves in the blue supergiant phase.
This is especially true for the highest-mass stars, whose wind mass-loss rates are highest and have the greatest
evolutionary significance for all subsequent phases of the star, but such stars are also the rarest.
Somewhat less massive stars
afford more numerous examples to study, and the lessons learned from them place our understanding into a
fuller context, so we wish to see the impact of clumping over the fullest possible range of massive stars.
Yet despite this need for clumping corrections to
observationally inferred mass-loss rates,
no systematic ultra-high SNR study of clumping and CIR-type features in dynamical spectra of massive star winds has yet
been undertaken.
The IUE MEGA campaign
was only able to obtain SNR $\sim$ 20--25, and only on a few targets.
The efforts by HST were also limited
to a few targets, and were hampered in some cases by pointing difficulties, and an orbit that only allowed continuous observation for
about an hour at a time.
Hence these past experiments were only able to hint at the presence of clumping and dynamics on scales smaller than the
gross ``discrete absorption component'' (DAC) structures that are often seen in massive-star winds,
yet these structures
are believed to be responsible for extreme density variations necessary to account for the filling-in of P V absorption
and the absence of free-electron scattering wings
in dense winds.
Enter \textit{Polstar}, whose orbit allows continuous coverage for days, at wavelength resolution $R = 33,000$ in Ch1.
One objective (termed \textit{S2}) of the \textit{Polstar} mission is deep, continuous exposures of the $\sim$40 brightest
massive stars over a range of masses and OB spectral types, to watch clumping and other structure develop in real time in their winds.
The fundamental physics of the wind driving is informed by the nature of this clumping, and larger structures such as co-rotating
interaction regions (CIRs, cf. Cranmer \& Owocki 1996),
as well as prolate/oblate asphericities.
But the primary goal of \textit{S2} is to make the necessary corrections to clumping-sensitive mass-loss rate determinations, such as free-free radio
(Lamers \& Leitherer 1993) and \ensuremath{\mathrm{H}\alpha}\ emission (Puls et al. 1996), because of its importance to stellar evolution, as described next.
\section{The importance of accurate mass-loss rate determinations}
Models of the most massive stars confirm the observational implication that mass loss can play
a crucial role in stellar evolution, and accurate knowledge of the
mass-loss rate is required for understanding the impact on the host galaxy.
Figure~\ref{fig:compaMdot} presents the evolution of 60\,\ensuremath{M_\odot}\ models computed until the end of central carbon burning, and shows how this evolution is altered when the mass-loss rates \citep[chosen from][]{2001A&A...369..574V} are reduced by a factor of 2 or 3 (here the factors have been applied only during the main sequence, MS). The difference is mainly due to larger winds during the MS (in the standard case) reducing the size of the core, and hence reducing the overall luminosity: at the middle of the MS, the \ensuremath{\dot{M}}/2 model has a core that is 3\% larger than the standard model, and the \ensuremath{\dot{M}}/3 has a core that is 5\% larger. In the middle of central helium burning, the difference increases, amounting to 50\% and 45\% respectively. At the end of central C-burning, the CO-core mass ($M_\mathrm{CO}$) is 57\% (35\%) larger in the \ensuremath{\dot{M}}/2 (\ensuremath{\dot{M}}/3) models respectively, leading to very different endpoint locations.
\begin{figure}[h!]
\centering
\includegraphics[width=7cm]{./figs/HRD_M60Zsol_compaMdot.pdf}
\caption{Hertzsprung-Russell diagram of 60\,\ensuremath{M_\odot}\ models computed with mass-loss recipe of \citet[black]{2001A&A...369..574V},
and the same recipe divided by a factor of 2 and 3 during the MS (teal blue and cyan, respectively).}
\label{fig:compaMdot}
\end{figure}
A direct effect of changing the mass-loss rates is the duration of the Wolf-Rayet (WR) phase expected for the models. The standard model becomes a WR early in the He-burning phase ($X(^4\mathrm{He})=0.97$), while the lower \ensuremath{\dot{M}}-rates models arrive later in this regime ($X(^4\mathrm{He})=0.29$ and $0.44$ respectively). This affects directly their statistical observability.
Surprisingly, the models show that the behaviour is not simply monotonic with the reduction factor on the mass-loss rates,
revealing the sensitivity of the advanced phases to what precedes them.
In stars, gravity and fusion interact in complex ways, and changing one parameter in the modeling feeds back on all other aspects. Because of its high luminosity at the end of the MS, the \ensuremath{\dot{M}}/3 model loses more mass during the crossing of the Hertzsprung-Russell diagram, spends less time with a $\log(T_\mathrm{eff})<4.0$, and ends its evolution in an intermediate situation between the standard and the \ensuremath{\dot{M}}/2 models.
Hence, changes to mass-loss rates at the factor 2 level can profoundly alter the initial conditions for all subsequent evolution, ultimately
changing the final state where supernova occurs.
Since internal structure on all scales can induce such changes in the mass-loss rates inferred from \ensuremath{\mathrm{H}\alpha}\ and radio emission,
understanding high-mass stellar evolution requires resolving and analyzing the dynamical nature of wind clumping.
\section{The \textit{Polstar} experiment for objective {S2}}
The effective area ($A$ = 50 cm$^2$ at 150 nm) and spectral resolution ($R = 33,000$) of channel 1 (hereafter Ch1) of the
\textit{Polstar} instrument are ideally suited for continuous observing of bright targets that can track features
accelerating through the line profile in real time.
The spectral resolution ($\cong$ 10 km s$^{-1}$) penetrates to the Sobolev scale of thermal Doppler shifts in the metal-ion
UV resonance lines observed, and the area $A$ provides sufficient signal-to-noise (SNR) for exposures of duration
\begin{equation}
t \ = \ \frac{c}{aR} \ ,
\end{equation}
where $a$ is the characteristic acceleration rate of the features.
Such exposure times are synchronized with the rate that features move across resolution elements,
and since the typical feature acceleration is $a \sim 0.04$ km s$^{-2}$
(inferred from Massa \& Prinja 2015),
this implies $t \cong 250$ s.
Since structure formation below the sound speed will be inhibited by gas pressure, we need only resolve $\cong$ 20 km s$^{-1}$,
so exposure times of $t \cong 500$ s can be used to maximize S/N and limit the daily data rate.
Given \textit{Polstar's} effective area, this implies the SNR is
\begin{equation}
\frac{S}{N} \ \cong \ 100 \sqrt{f} \ ,
\end{equation}
where $f$ is the target flux in units of $10^{-9}$ erg cm$^{-2}$ s$^{-1}$ nm$^{-1}$, a characteristic
flux for \textit{Polstar} targets.
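As a quick numerical cross-check of the two relations above, the following illustrative sketch (the helper names are ours; the inputs are the values quoted in the text) reproduces the exposure time and SNR estimates:

```python
# Numerical check of t = c/(aR) and SNR = 100*sqrt(f), using the values
# quoted in the text (a = 0.04 km/s^2, R = 33,000, f in 1e-9 erg/cm^2/s/nm).
C_KMS = 2.998e5  # speed of light [km/s]

def exposure_time(a_kms2, R):
    """Time [s] for a feature accelerating at a_kms2 [km/s^2] to cross one
    resolution element of velocity width c/R [km/s]."""
    return C_KMS / (a_kms2 * R)

def snr_per_exposure(f):
    """Approximate SNR per exposure for target flux f, per the scaling above."""
    return 100.0 * f ** 0.5

t = exposure_time(0.04, 33_000)  # about 227 s, consistent with t = 250 s
s = snr_per_exposure(20)         # about 447, consistent with SNR of 400 for f > 20
```

Doubling the exposure to resolve only the $\cong$ 20 km s$^{-1}$ sound-speed scale raises the SNR by a further factor of $\sqrt{2}$.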
Thus dynamical spectra can be produced by continuous monitoring for days, as was done with the IUE MEGA campaign
(Massa \& Prinja 2015),
with greatly improved SNR for bright targets ($f > 20$).
There are 36 targets that satisfy this brightness limit (shown in Table 1), plus we added, because of their
special importance, 2 additional sources
that are within a factor of 2 of the brightness limit, for a total of 38 targets for the {S2} objective.
The star $\zeta$ Oph was observed by HST for under 5 hours, but it provides a useful comparison with previous results,
and $\phi$ Per is a variable system involving spinup by binary interaction, allowing the effects of
rapid rotation on wind dynamics to be studied.
Both added stars will produce SNR $>$ 250, or an order of magnitude above the IUE MEGA campaign, while all the rest
exceed SNR $\cong$ 400.
This sensitivity is deemed necessary to penetrate the full range of the dynamical structures down to the Sobolev scale $\cong R/100$,
since at that scale we expect $\cong 100^2$ cross sections of Sobolev size across the face of the star, with a statistical
variance of some 1\%, requiring SNR at the several-hundred level to confidently resolve.
And once the features are detected, the time ($\cong$ 500 s) and velocity ($\cong$ 20 km s$^{-1}$) bins will allow them to be tracked
as they accelerate through the wind, so we can understand how they initiate and evolve, and what will be their impact
on clumping-sensitive mass-loss determinations.
All these targets can be monitored for a day or longer (in special cases) within
the observing time of the 3 year science mission set aside for this objective \textit{S2}.
This is an opportune number of targets, because it allows trends with both mass and spectral type to be considered.
The other objectives can also benefit from this wealth of observational data on the three dozen brightest massive
stars in the sky, as the data can be stacked and binned to achieve unprecedented UV polarization precision.
For example, a median star from the list like $\gamma$ Cas, observed with a 2-hour
cadence to allow temporal variations over the observation to be studied, and binned to $R = 30$ to focus on continuum polarization, would yield
an SNR of some $1 \times 10^5$ per resolution element.
Even with the lower effective area of Ch1, that would support a polarization precision of better than
$3 \times 10^{-4}$, in the same dataset that would already be mined for its dynamical spectroscopic potential.
The polarization information would reveal processes that break the spherical symmetry in a globally steady way,
while the dynamical spectra reveal evolving process on much smaller spatial and temporal scales within the wind.
The reason polarimetry is automatically obtained, simultaneously with the spectral information, is that \textit{Polstar}
determines all four Stokes parameters, $I$, $Q$, $U$, and $V$, in all its observations.
Many experiments [REF? other white papers]
will use the higher effective area of the low-resolution channel 2 (Ch2) when doing sensitive polarimetry,
but in this experiment, the continuous monitoring is in the high-resolution Ch1, so the polarimetric information will come
from binning down to low resolution and simply accepting the roughly factor of $\sim$3 loss of effective area in Ch1.
This loss is mitigated by the use of inherently bright targets, so when the SNR $\cong$ 500-1000 at $R = 33,000$, it will
be $\sqrt{1000}$ times better, or $\cong$ 15,000-30,000, at $R = 33$ (a sufficient
resolution for using the spectral shape to remove
background polarization sources such as the ISM). Separation from background
polarization is also assisted by the time-dependent character of the
intrinsic polarization from the wind dynamics.
Such high SNR is conducive to polarimetric precision at the desired $\sim 10^{-4}$ magnitude level, which is capable of
detecting the temporal variations from some 10,000 clumps with individual free-electron scattering optical depths of $\sim$0.1.
Hence, the \textit{Polstar} mission objective {S2} will combine dynamical spectra and time-dependent linear polarization
variations to track clump and CIR features down to the Sobolev scale, as they accelerate through the winds of $\sim$40 bright
massive stars, to understand how clumping and structure form, and what is their impact on wind mass-loss rate determinations.
It can also detect global structures with very small latitudinal differences in optical depth, down below $\cong$ 0.01,
due to rotation or other temporally persistent aspherical features, as well as rotationally modulated polarization
from large-scale azimuthal structures such as CIRs.
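The binning arithmetic above can be sketched numerically. The following Python snippet uses the illustrative SNR and resolving powers quoted in the text (not a formal \textit{Polstar} specification) and assumes purely photon-limited noise:

```python
import math

# Illustrative numbers from the text (assumed, not a Polstar specification):
# SNR of 500-1000 per resolution element at R = 33,000 in Ch1.
snr_highres = 500.0
R_high, R_low = 33_000, 33

# Photon-limited SNR grows as the square root of the number of
# high-resolution elements averaged into each low-resolution bin.
bin_factor = R_high / R_low              # = 1000
snr_binned = snr_highres * math.sqrt(bin_factor)

# Polarimetric precision is roughly the inverse of the binned SNR.
pol_precision = 1.0 / snr_binned

print(f"binned SNR ~ {snr_binned:.0f}")
print(f"polarization precision ~ {pol_precision:.1e}")
```

For the lower end of the quoted SNR range this gives a binned SNR near 15,000 and a precision of several $\times 10^{-5}$, consistent with the desired $10^{-4}$ level.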
\begin{table}
\caption{{\em Polstar} Targets for Structured Wind Studies}
\begin{tabular}{llcc}
\hline\hline TARGET & NAME & FLUX (1500) & comments \\
& & (10$^{-10}$ erg/s/cm$^2$/\AA) & \\ \hline
HD 122451 & $\beta$ Cen & 600 & \\
HD 108248 & $\alpha$ Cru & 600 & 75 d binary \\
HD 116058 & $\alpha$ Vir & 530 & \\
HD 11123 & $\beta$ Cru & 400 & \\
HD 68273 & $\gamma$ Vel & 260 & \\
HD 66811 & $\zeta$ Pup & 230 & \\
HD 37742 & $\zeta$ Ori & 200 & \\
HD 116658 & Spica & 200 & \\
HD 35468 & $\gamma$ Ori & 190 & \\
HD 52089 & $\epsilon$ CMa & 180 & \\
HD 44743 & $\beta$ CMa & 160 & \\
HD 36486 & $\delta$ Ori & 150 & \\
HD 24912 & & 130 & \\
HD 149438 & & 130 & \\
HD 5394 & $\gamma$ Cas & 100 & \\
HD 205021 & $\beta$ Cep & 100 & \\
HD 37043 & $\iota$ Ori & 100 & \\
HD 34085 & Rigel & 90 & \\
HD 127972 & $\eta$ Cen & 90 & \\
HD 143018 & $\pi$ Sco & 75 & \\
HD 151890 & $\mu^1$ Sco & 70 & \\
HD 105435 & $\delta$ Cen & 55 & \\
HD 87901 & Regulus & 50 & \\
HD 120307 & $\nu$ Cen & 40 & \\
HD 122451 & & 40 & \\
HD 157246 & $\gamma$ Ara & 35 & \\
HD 120324 & $\mu$ Cen & 34 & \\
HD 147165 & $\sigma$ Sco & 32 & \\
HD 52089 & $\xi$ CMa & 30 & \\
HD 121743 & $\phi$ Cen & 30 & \\
HD 37202 & $\zeta$ Tau & 30 & \\
HD 50013 & $\kappa$ CMa & 29 & \\
HD 19356 & Algol & 27 & \\
HD 3360 & $\zeta$ Cas & 27 & \\
HD 65816 & & 20 & \\
HD 10516 & $\phi$ Per & 15 & \\
HD 149757 & $\zeta$ Oph & 10 & \\
\hline
\end{tabular}
\end{table}
\section{Small-scale clumpy wind structure}
\label{sec_clumping}
Due to the variability of spectral lines, as well as the presence of linear polarisation, astronomers have known for decades that stellar winds are not stationary but time-dependent, and that this leads to inhomogeneous, clumpy media \citep[see][]{2008A&ARv..16..209P, 2008cihw.conf.....H}.
Motivated by
hydrodynamic simulations \citep[e.g.,][]{1997A&A...322..878F, 2013MNRAS.428.1837S} including the line-deshadowing
instability (LDI) \citep{1988ApJ...335..914O}, extreme density variations are expected to lead to large
clumping factors, $D = \langle\rho^2\rangle/\langle\rho\rangle^2$ (the square of the root-mean-square density contrast), sometimes called a ``filling factor'' (though it is not necessary,
nor physically realistic,
to regard the interclump medium as void).
For optical depths that depend linearly on density, such as free-electron optical depth, the mean
opacity of a clumped medium is the same as that for a smooth wind,
whereas for opacities that scale with the square of density (such as for recombination lines like \ensuremath{\mathrm{H}\alpha}),
the optical depths are enhanced by the clumping factor $D$.
The presence of large $D$ values due to extensive micro-clumping (clumps on size scales smaller
than a continuum mean-free-path, typical of most hot-star winds) yields an enhancement to mass-loss rates derived from
diagnostics that scale with the density squared, such as \ensuremath{\mathrm{H}\alpha} emission.
That enhancement is a factor of $\sqrt{D}$, requiring a corresponding correction to
older mass-loss rates derived with the assumption of smooth winds \citep{2007A&A...465.1003M}.
However, determining the correct value of $D$ is difficult, and is the purpose of objective \textit{S2}.
The goal is to use the resonance line opacity that is liberally sprinkled throughout the UV, because it covers a wide
range from strong to weak lines, allowing access to wind regions both close to and far from the star, and over a range
of wind mass-loss rates.
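To make the $\sqrt{D}$ correction concrete, here is a minimal Python sketch for a toy two-zone medium with the clumping factor $D = \langle\rho^2\rangle/\langle\rho\rangle^2$; the volume fraction and density contrast are assumed illustrative values, not fitted wind parameters:

```python
# Toy two-zone medium: dense clumps embedded in a rarefied interclump gas.
f_vol = 0.1        # volume fraction occupied by clumps (assumed)
contrast = 10.0    # clump/interclump density contrast (assumed)

rho_clump, rho_inter = contrast, 1.0
mean_rho = f_vol * rho_clump + (1 - f_vol) * rho_inter
mean_rho2 = f_vol * rho_clump**2 + (1 - f_vol) * rho_inter**2

D = mean_rho2 / mean_rho**2     # clumping factor <rho^2>/<rho>^2
mdot_correction = D ** 0.5      # smooth-wind Mdot overestimated by this factor

print(f"D ~ {D:.2f}, Mdot correction ~ {mdot_correction:.2f}")
```

For these assumed values the clumping factor is about 3, so a smooth-wind \ensuremath{\mathrm{H}\alpha}\ mass-loss rate would be overestimated by a factor of roughly 1.7.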
One challenge with metal resonance lines is that the abundances and degree of ionization can be unknowns.
In the FUV, the P\,{\sc v}\,\ensuremath{\lambda \lambda 1118, 1128}\ line has the advantage that, in certain O stars, it is expected to be the dominant ionization stage,
so it should in principle provide an accurate
estimate of the mass-loss rate alone.
However, \citet{2006ApJ...637.1025F}
selected a large sample of O-stars, which also had
$\rho^2$ (from \ensuremath{\mathrm{H}\alpha}/radio) estimates available, and found that treating resonance lines as though
they were linear in $\rho$ required
extreme clumping factors of up to $D \sim$ 400 (e.g. \citealt{2003ApJ...595.1182B}).
But UV resonance lines are more complicated than that, because the Sobolev optical
depth in such lines depends not on density, but on the ratio of density to velocity gradient, $\rho (dv/dr)^{-1}$.
Since the local acceleration, and hence $dv/dr$, also depends on density,
the Sobolev optical depth scales much more steeply than linear in density.
For example, if we neglect time dependent terms in the force equation (a rough approximation), and also neglect any
feedback between density and line opacities (also imperfect), then at given velocity $v$ and radius $r$, the standard
treatment using the CAK $\alpha$ parameter gives
\begin{equation}
\frac{dv}{dr} \ \propto \ \rho^{-\alpha} \left ( \frac{dv}{dr} \right )^{\alpha} \ .
\end{equation}
With $\alpha \cong 2/3$, we then have $dv/dr \propto \rho^{-2}$, which implies the Sobolev optical depth
in the UV resonance line in question obeys $\tau \propto \rho \, (dv/dr)^{-1} \propto \rho^3$.
This is quite a steep dependence, owing to the dependence on velocity gradient, wherein spatial regions
of low density yield open holes in velocity space where there is low line optical depth, an effect termed
``vorosity'' (Sundqvist et al. 2018) in analogy with the ``porosity'' concept for spatial holes in
continuum opacity.
Hence clumping alters the degree of absorption in a P Cygni trough, because the vorosity allows a
window through the normally optically thick line \citep{2018A&A...611A..17S},
to a degree that we wish to observe and understand
in order to understand the necessary clumping corrections.
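The exponent bookkeeping behind this $\rho^3$ scaling can be checked mechanically. The sketch below assumes the CAK-type relation $dv/dr \propto \rho^{-\alpha}\,(dv/dr)^{\alpha}$ and solves for the exponents with exact rational arithmetic:

```python
from fractions import Fraction

# Write dv/dr ∝ rho^p. Substituting into dv/dr ∝ rho^{-alpha} (dv/dr)^{alpha}
# gives p = -alpha + alpha * p, i.e. p = -alpha / (1 - alpha).
alpha = Fraction(2, 3)
p = -alpha / (1 - alpha)      # exponent of dv/dr in density

# Sobolev optical depth: tau ∝ rho * (dv/dr)^{-1} ∝ rho^{1 - p}
tau_exponent = 1 - p

print(p, tau_exponent)        # -2 and 3
```

With $\alpha = 2/3$ this reproduces $dv/dr \propto \rho^{-2}$ and $\tau \propto \rho^3$, as quoted above.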
\subsection{Optically thick clumping (``macro''-clumping)}
\label{sec_macro}
Some analysis of optically thick clumping effects in resonance lines has already been undertaken, taking into account
not only the ``filling factor'' $D$ but also the distribution, size, and geometry of the clumps.
The conventional description of macro-clumping is based on a clump size, $l$, and an average spacing of a statistical clump distribution, $L$, with porosity length $h=L^3/l^2$ \citep{2018MNRAS.475..814O}.
This porosity length $h$ represents the key parameter defining a clumped medium, as it corresponds to the photon mean free path in a medium consisting of optically thick clumps.
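As a numerical illustration of these definitions (the clump size and spacing below are assumed values, not measurements):

```python
# Porosity-length bookkeeping: clumps of size l with mean spacing L give
# h = L^3 / l^2, the photon mean free path through optically thick clumps.
l = 0.05   # clump size, in stellar radii (assumed)
L = 0.25   # mean clump spacing, in stellar radii (assumed)

h = L**3 / l**2          # porosity length
f_vol = (l / L) ** 3     # volume filling fraction of the clumps

print(f"h = {h} R*, volume filling fraction = {f_vol:.3f}")
```

Note how a modest filling fraction (here below one percent) can still yield a porosity length of several stellar radii, strongly reducing the effective opacity.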
\begin{figure*}
\begin{center}
\hspace{-0.2cm}
\includegraphics[width=\textwidth]{figs/oskinova_clump_figure.pdf}
\vspace{-0.2cm}
\caption{Porosity as the likely solution for the PV problem. The \ensuremath{\mathrm{H}\alpha}\ line on the left-hand side is hardly affected by macro-clumping, while the P\,{\sc v}\,\ensuremath{\lambda \lambda 1118, 1128}\ UV line on the right is strongly affected. Adapted from \citet{2007A&A...476.1331O}.}
\label{f_oski}
\end{center}
\end{figure*}
\citet{2007A&A...476.1331O} employed an “effective opacity” concept in the formal integral for the line profile modelling of the famous O supergiant $\zeta$~Pup.
Figure~\ref{f_oski} shows that the most pronounced effect involves strong resonance lines, such as the resonance doublet P\,{\sc v}\,\ensuremath{\lambda \lambda 1118, 1128}\ which can be reproduced by this macro-clumping approach -- without the need for extremely low \ensuremath{\dot{M}}\ -- resulting from an effective opacity reduction when clumps become optically thick.
Given that \ensuremath{\mathrm{H}\alpha}\ remains optically thin for O-type stars, it is not affected by porosity, and it can be reproduced simultaneously with P\,{\sc v}\,\ensuremath{\lambda \lambda 1118, 1128}. This enabled a solution to the P\,{\sc v}\,\ensuremath{\lambda \lambda 1118, 1128}\ problem (see e.g.\ \citealt{2013A&A...559A.130S, 2018A&A...619A..59S}).
\subsection{The origin of wind clumping}
\label{sec_origin}
In the canonical view of structure formation via the LDI, clumping would be expected to develop when velocities are sufficiently large to produce shocked structures. For typical O-star winds, this occurs at half the terminal wind velocity, corresponding to roughly 1.5 stellar radii.
Various observational indications, including the existence of linear polarisation \citep[e.g.][]{2005A&A...439.1107D}
and radius-dependent Stokes I diagnostics \citep{2006A&A...454..625P}
indicate that clumping already exists at very low velocities, and likely arises inside the stellar photosphere.
\citet{2009A&A...499..279C}
suggested that waves produced by the subsurface convection zone could lead to velocity fluctuations, and possibly density fluctuations, and could thus be the root cause for the wind clumping seen close to the stellar surface.
Assuming the horizontal extent of the clumps to be comparable to the vertical extent in terms of the sub-photospheric pressure scale height $H_{\rm p}$, one may estimate the number of convective cells by dividing the stellar surface area by the surface area of a convective cell, finding that it scales as $(R/H_{\rm p})^2$. For main-sequence O stars in the canonical mass range 20-60\,$M_{\odot}$, pressure scale heights are within the range 0.04-0.24\,$R_{\odot}$, corresponding to total clump numbers of $6 \times 10^3$-$6 \times 10^4$. These estimates could be tested through linear polarisation monitoring, probing wind clumping close to the wind base.
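The cell-count estimate can be reproduced as follows; the pressure scale heights are those given above, while the stellar radii are assumed illustrative values for the quoted mass range:

```python
# Order-of-magnitude sketch of the convective-cell count N ~ (R / H_p)^2.
# Radii are assumed illustrative values for ~20 and ~60 Msun stars.
cell_counts = []
for R_star, H_p in [(10.0, 0.04), (18.0, 0.24)]:   # both in units of R_sun
    N = (R_star / H_p) ** 2
    cell_counts.append(N)
    print(f"R = {R_star} R_sun, H_p = {H_p} R_sun -> N ~ {N:.1e}")
```

The two cases bracket roughly $6 \times 10^3$ to $6 \times 10^4$ cells, matching the range quoted in the text.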
In an investigation of linear polarisation variability in WR stars, \citet{1989ApJ...347.1034R}
revealed an anti-correlation between the terminal velocity and the observed scatter in linear polarisation. They interpreted this in terms of blobs growing or surviving more effectively in slower rather than faster winds.
\citet{2005A&A...439.1107D}
found this trend to continue into the lower temperature regime of the LBVs, whose winds are even slower. Therefore, LBVs are ideal test-beds for constraining wind clump properties -- due to their very long wind-flow times.
\citet{2005A&A...439.1107D}
found the polarisation angles of LBVs to vary irregularly with time, and attributed the optical line polarisation effects to wind inhomogeneity.
Given the short timescale of the observed polarisation variability, Davies et al. (2007) argued that LBV winds consist of thousands of clumps near the surface.
For main-sequence O stars the derivation of the numbers of wind clumps and their sizes from polarimetry has not yet been feasible as very high S/N data is needed. This becomes feasible with the proposed \textit{Polstar} mission, which can reach polarization precision at the $1 \times 10^{-4}$
level due to the very high SNR when the data are binned to $R \cong 30$, as mentioned above.
\section{Large-scale structures}
\subsection{Magnetically confined winds}
{\bf refer somewhere in the next parag to the S1 white paper?}
As an additional complication, magnetic fields, too, can significantly influence these hot-star winds, often leading to large-scale structures. Typically, their overall influence on the wind dynamics can be characterized by a single magnetic confinement parameter,
\begin {equation}
\eta_\ast \equiv \frac {B_{eq}^2 R_\ast^2}{\dot{M} v_\infty}
\end{equation}
which characterizes the ratio between magnetic field energy density and kinetic energy density of the wind, as defined in \cite{2002ApJ...576..413U}.
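As a worked example of this definition, the sketch below evaluates $\eta_\ast$ in CGS units for assumed, illustrative O-star parameters (not any specific target):

```python
# Magnetic confinement parameter eta_* = B_eq^2 R*^2 / (Mdot v_inf), in CGS.
M_SUN_PER_YR = 1.989e33 / 3.156e7    # g/s per (Msun/yr)
R_SUN = 6.957e10                     # cm

B_eq = 100.0                         # equatorial field strength, G (assumed)
R_star = 10.0 * R_SUN                # stellar radius, cm (assumed)
mdot = 1e-6 * M_SUN_PER_YR           # mass-loss rate, g/s (assumed)
v_inf = 2.0e8                        # terminal speed, cm/s = 2000 km/s (assumed)

eta_star = B_eq**2 * R_star**2 / (mdot * v_inf)
print(f"eta_* ~ {eta_star:.2f}")     # below 1: weak-to-moderate confinement
```

Even this sub-unity value sits above the $\eta_\ast \sim 1/10$ threshold at which the field begins to divert wind material toward the magnetic equator.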
Extensive magnetohydrodynamic (MHD) simulations show that, in general, for stellar models with weak magnetic confinement, $\eta_\ast < 1$, field lines are stretched into a radial configuration by the strong outflow very quickly, on a dynamical timescale. However, even magnetic confinement as weak as $\eta_\ast \sim 1/10$ can have enough influence to enhance the density by diverting wind material from higher latitudes towards the magnetic equator.
On the other hand, for stronger confinement, $\eta_\ast > 1$, the magnetic field remains closed over a limited range of latitude and height about the equatorial surface, but eventually is opened into a nearly radial configuration at large radii. Within closed loops, the flow is channeled toward loop tops into shock collisions where the material cools and becomes dense, with the stagnated material then pulled by gravity back onto the star in quite complex and variable inflow patterns. Within the open-field flow, the equatorial channeling leads to oblique shocks that eventually form a thin, dense, slowly outflowing ``disk'' at the magnetic equator. This is in concert with the ``magnetically confined wind shock'' model first forwarded by \cite{1997A&A...323..121B}.
Such large scale wind structures are inferred most directly from time variability in the blueshifted absorption troughs of UV P Cygni profiles.
More recent MHD modelling shows that the wind structure and wind clumping properties change strongly with increasing wind-magnetic confinement. In particular, in strongly magnetically confined flows, the LDI leads to large-scale, shellular sheets ('pancakes') that are quite distinct from the spatially separate, small-scale clumps in non-magnetic hot-star winds \citep{2021arXiv211005302D}.
\subsection{Discrete Absorption Components}
\begin{figure*}
\begin{center}
\hspace{-0.2cm}
\includegraphics[width=\textwidth]{figs/raman_spliced_egs.pdf}
\vspace{-0.2cm}
\caption{\bf Dynamical spectra as deviations from the mean profiles shown at bottom,
taken from the IUE MEGA project for $\zeta$ Pup and $\xi$ Per. Strong (left) and weak (center) lines are spliced together (right) to show
the continuous acceleration of the DAC features (Massa \& Prinja 2015).}
\end{center}
\end{figure*}
Early high-resolution ultraviolet spectroscopy of massive stars obtained with the Copernicus satellite revealed the presence of sharp absorption features superimposed onto the extended blueshifted absorption troughs of far ultraviolet (FUV) resonance lines \citep{1975ApJ...199..691U,1976ApJ...203..386M,1976ApJS...32..429S}, often located near the terminal wind velocity. Initially dubbed ``narrow absorption components", or NACs, they were later found to be snapshots of a dynamic feature now known as ``discrete absorption components" (or DACs). The discovery of DACs was made possible by the advent of time-resolved high-resolution FUV spectroscopic observations obtained with the International Ultraviolet Explorer (IUE). They were found to start out as broad, shallow absorption features at low velocity, narrowing and deepening (in some cases saturating) as they migrate to higher velocities over characteristic timescales on the order of days (e.g. \citealt{1987ApJ...317..389P}). Furthermore, they were found to occur nearly ubiquitously among O stars \citep{1989ApJS...69..527H}. Since they cover a large range of velocities in the absorption trough, they must be caused by features covering a large radial extent within the stellar wind, and be indicative of a physical process universally present in massive stars.
Determining whether the variability seen in the UV time-series spectra is
due to material originating near or at the surface of the star and then
propagating outward, or to low-speed material at large distances from the
star that simply appears at low velocity, is a goal that we seek to
address with Polstar. Should the origin of all of the features be tied to
the stellar surface, this would have implications about the structure of
the winds (how does the component of the wind which shapes saturated wind
lines co-exist with large scale structure), the structure of the
photosphere (what causes the photospheric irregularities that give
rise to the different wind flows), X-ray production (can some X-rays originate
along the interfaces of
the CIRs) and the theoretical derivations and simulations based on a
smooth wind.
Generally, there is no way to directly determine the location in the wind where an absorption feature at a specific velocity originates. For example, a low velocity feature could be near the stellar surface, accelerating outward, or due to a slowly moving parcel of wind material far out in the wind, which is re-accelerating after being shocked. This ambiguity is always the case for resonance lines, but not for excited state lines. An excited state wind line arises from an allowed transition whose lower level is the upper level of a resonance transition (typically below 900\,\AA). They frequently appear as wind lines in O stars with strong mass fluxes. One of the most commonly observed excited state lines is N\,{\sc iv}\,$\lambda$1718\,\AA, whose lower level is the upper level of the N\,{\sc iv}\,$\lambda$765\,\AA\ resonance line. Because a strong EUV radiation field is required to populate an excited state line, these lines can only exist close to the star (this is what gives excited state lines their distinctive shapes). As a result, a feature that appears at low velocity in an excited state line must originate close to the stellar surface.
Unfortunately, this same property means that excited state lines weaken quickly at large distances from the star and, as a result, rarely extend to very high velocities. This challenge can be met, however, by the superior S/N spectra offered by Polstar. We see, therefore, how resonance and excited state lines complement one another. If a feature (indicated by excess or reduced absorption) appears at low velocity in an excited state line and then joins a high velocity feature in a resonance line (whose low velocity portion may be saturated), this provides evidence that the excess or deficiency of absorbing material which caused the feature originated close to the stellar surface (where the radiation field is intense) and then propagated outward, into the wind. {\it NS: As an example Fig. 4 from de Jong 2001 could be mentioned here. There are dynamical spectra of Si IV, N IV, and Ha, where correlation can be seen between absorption features in the lines.}
A powerful diagnostic is to compare and 'splice' together the temporal
behaviour seen in the Si\,{\sc iv}$\lambda\lambda$1400\AA\ resonance line doublet and N\,{\sc iv}$\lambda$1718\AA\
excited state singlet (cite $\zeta$\,Pup and $\xi$\,Per figures). Our analysis is
relatively simple, and based primarily on inspection of dynamic spectra.
Dynamic spectra are images of time ordered spectra normalized by the mean
spectrum of the series. The properties of coherent structure seen in the
full comparison between these lines will allow us to (i) test whether
every absorption feature observed at low/near-zero velocity connects to a
feature at high velocity; (ii) reveal ``banana''-shaped patterns, which are
the accepted signatures of CIRs tied to the stellar surface; (iii)
establish connections between the wind features and the stellar surface;
(iv) constrain any instances of evidence for features formed (at intermediate
velocity) in the wind: if all features can be traced back to the surface
and they move through the profile monotonically with velocity, this would
imply that we only see evidence for porosity in the dynamic spectra, and
not ``vorosity''; and (v) test the notion that features in dynamic spectra must
be huge, with large lateral extent, in order to remain in the line of sight
for days, persisting to very large velocity.
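The dynamic-spectrum construction described above (time-ordered spectra normalized by the mean spectrum of the series) can be sketched as:

```python
import numpy as np

# Toy stand-in for a time series of exposures: n_times spectra sampled on
# a common velocity grid of n_vel bins, here just continuum plus noise.
rng = np.random.default_rng(0)
n_times, n_vel = 50, 200
spectra = 1.0 + 0.01 * rng.standard_normal((n_times, n_vel))

# Dynamic spectrum: each exposure divided by the mean spectrum of the series.
mean_spectrum = spectra.mean(axis=0)
dynamic = spectra / mean_spectrum
# Plotting `dynamic` as an image (time vs. velocity) would reveal coherent
# migrating patterns, such as the CIR "banana" shapes, in real data.
```

In practice the input rows would be the observed exposures interpolated onto a common velocity grid before normalization.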
\subsection{Corotating Interaction Regions}
As mentioned above, one of the leading hypotheses to explain DACs involves large-scale structures extending through most of the radial extent of the wind, assuming that it accelerates more or less monotonically, as expected from theory (e.g. \citealt{1975ApJ...195..157C}). \citet{1986A&A...165..157M} suggested that ``corotating interaction regions" (or CIRs), as seen in the Solar wind, could be responsible for the observed phenomenology. This model necessitates regions of either stronger or weaker wind, interacting with the bulk of the wind as the star rotates. This can be accomplished by adopting \textit{ad hoc} brightness spots on the stellar surface. \citet{1996ApJ...462..469C} tested both dark and bright spots using 2D hydrodynamical simulations, finding that only synthetic line profiles computed using the CIRs that emanate from bright spots reproduce the observed DACs. However, their model does not probe what the physical origin of the bright spots might be. That said, one important finding from their study was that while there was a slight overdensity within the CIRs, the real source of the enhanced absorption in DACs stems from the velocity plateau (or ``kink") that arises when sparser, faster material from the bulk of the wind plows into the denser, slower material driven by the bright spots.
High spectral resolution can provide good information about the spatial distribution of material in the wind (see Fig.~\ref{fig:cir}).
\begin{figure}
\centering
\includegraphics[trim=10 0 50 400, clip, width=9cm]{./figs/new_vel_colcont_r50_rho.pdf}
\caption{Adapted from \citet{2017MNRAS.470.3672D}: 2D equatorial slice of a wind with CIRs created by four equally spaced bright spots. The greyscale background represents density, and color-coded line-of-sight isovelocity contours are overlaid for an observer along the positive x-axis (blue corresponds to high-velocity blue-shifted material, and red to high-velocity red-shifted material). A 50 km s$^{-1}$ resolution is shown, illustrating that the $\sim$10 km s$^{-1}$ resolution achievable with Polstar will spatially sample the complex structure of the wind material very effectively. This will continue to be true when the data is binned to 20 km s$^{-1}$ resolution to obtain the high
SNR desired in this experiment. The figure demonstrates that when large-scale structure is present, spatial resolution cannot be simply
interpreted from the Doppler shifts in the line profile; additional geometric information from continuum polarization will be necessary
to resolve ambiguities in the global structure.}
\label{fig:cir}
\end{figure}
While DACs repeat in the dynamic spectra of OB stars (e.g. \citealt{1996A&AS..116..257K}), they are not strictly periodic. Nevertheless, they are characterized by a timescale that is understood to be related to the rotation period of a star \citep{1988MNRAS.231P..21P}. This rotational modulation prompted the hypothesis that CIRs might be a consequence of large-scale magnetic fields on the surfaces of massive stars, as have since been detected on a subset of them \citep{2017MNRAS.465.2432G} -- interacting with the stellar wind to form ``magnetospheres", as discussed above. However, using high-quality optical spectropolarimetric data, \citet{2014MNRAS.444..429D} have placed stringent upper limits on the magnetic field strengths of a sample of 13 well-studied OB stars with extensive FUV time-resolved spectra. They conclusively demonstrated that large-scale fields could not exert sufficient dynamic influence on the wind to form CIRs. An alternative explanation involves non-radial pulsations (NRPs), but reconciling their typical periods with the DAC recurrence timescales requires complex mode superpositions \citep{1999A&A...345..172D}.
Therefore, the prevailing hypothesis regarding the origin of putative CIRs involves the presence of spots on the stellar surface.
\subsection{Surface spots}
Even if surface spots seem a promising cause of CIR structures, the reasons behind their presence remain unclear.
Unlike lower-mass Ap/Bp stars and chemically peculiar intermediate mass stars (such as Am stars and HgMn stars), early-type OB stars are not expected to have chemical abundance spots, as they would be continuously stripped by the wind (e.g. \citealt{1987ApJ...322..302M}). One possible mechanism by which such brightness spots can be formed involves small-scale magnetic fields. Possibly arising as a consequence of convective motions in the subsurface convection zone due to the iron opacity peak (FeCZ; \citealt{2009A&A...499..279C}), magnetic spots can lead to brightness enhancements by locally reducing the gas pressure, and hence the optical depth. This creates a sort of ``well'', allowing us to see hotter plasma, deeper inside the envelope \citep{2011A&A...534A.140C}. While such spots have yet to be detected in OB stars, weak structured fields have now been detected on several A-type stars (e.g. \citealt{2017MNRAS.472L..30P}). There have also been further theoretical advances suggesting that stars with radiative envelopes can either have large-scale fields strong enough to inhibit convection in the FeCZ, or weak, disorganized fields generated by that convection \citep{2020ApJ...900..113J}, leading to a bimodal distribution of field strengths and thus explaining the so-called `magnetic desert' \citep{2007A&A...475.1053A}.
Regardless of their physical origin (localized magnetic fields or NRPs), bright spots have been detected on at least two archetypal massive stars: $\xi$ Per \citep{2014MNRAS.441..910R} and $\zeta$ Pup \citep{2018MNRAS.473.5532R}. These initial studies suggest that the spots are large (angular radius greater than $\sim$10\textdegree) and relatively persistent (lasting roughly tens of rotational cycles), and lead to fairly faint variability (about 10 mmag in the optical -- only detectable with space-based facilities). Assuming that these are hot spots (which means that their brightness enhancement in the ultraviolet compared to the surrounding photosphere is greater than in the optical), further hydrodynamical simulations carried out by \citet{2017MNRAS.470.3672D} showed that the spot properties inferred from the light curve of $\xi$ Per could also lead to CIRs that quantitatively reproduce the DACs observed for this star.
Clearly the UV band has much to say about the importance of these spots, and \textit{Polstar}'s observations will have a lot
to contribute to this issue, while in the process of establishing corrections to the mass-loss rates.
\section{Hydrodynamic Modeling}
Recent models of the line-driven instability (LDI) using 2D hydrodynamics driven by 1D radiative transfer, simplified to allow
dynamical resolution of the resulting clumping, give the density structure shown in Fig. (5).
These models do not use any variations at the lower boundary, so the extreme degree of clumping is entirely self-excited due
to the rapid growth rate of the intrinsic instability.
The lateral scale is limited only by the tiny scale ($\cong R/100$) of the Sobolev length, producing thousands of clumps in all.
As the radial scale is stretched out by the radial instability in the velocity gradient, the clumps appear radially elongated,
which causes them to produce absorption fluctuations that are wide in velocity ($> 100$ km s$^{-1}$), but only a few percent in depth,
as can be seen in the time varying P Cygni profile in Fig. (6).
\begin{figure*}
\label{figuremodelFlorian.png}
\begin{center}
\hspace{-0.2cm}
\includegraphics[width=\textwidth]{figs/modelFlorian.png}
\vspace{-0.2cm}
\caption{\bf Shown is the density structure in the 2D LDI simulation. To limit computing time, the radiative driving
is entirely radial.}
\end{center}
\end{figure*}
Since the model is only 2D, there is an artificial azimuthal symmetry around the line to the observer, which
produces circularly coherent structures that overestimate the magnitude of the variances shown.
Assuming the azimuthal coherence length should actually be on the Sobolev scale, $\cong R/100$, an order of
magnitude of cancellation in the temporal variations could be expected.
This will require further modeling to explore, but given that the largest features have a variance approaching
10 \% in this azimuthally symmetric treatment, the largest temporal variations might appear at the 1 \% level or less.
Nevertheless, the Polstar experiment described above achieves SNR in the range 400 - 1000, depending on the target
brightness, so temporal variations over a day-long observation could easily characterize variances at the 1 \% level,
or even less.
The intended binning to 20 km s$^{-1}$ resolution will also easily resolve the model features, which typically show
a velocity width $> 100$ km s$^{-1}$.
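The expected cancellation can be illustrated with a quick Monte Carlo; treating the 2D model's coherent ring as $N$ independent azimuthal patches, with the patch count for a coherence length of $\cong R/100$ being an assumption:

```python
import numpy as np

# sqrt(N) cancellation sketch: break one fully coherent ~10% fluctuation
# into N independent azimuthal patches and average them, many trials.
rng = np.random.default_rng(1)

sigma_single = 0.10     # ~10% variance of one fully coherent feature
n_patches = 100         # independent patches for coherence length R/100 (assumed)

trials = rng.normal(0.0, sigma_single, size=(100_000, n_patches)).mean(axis=1)
print(f"net rms ~ {trials.std():.3f}")   # ~ sigma_single / sqrt(n_patches)
```

The net rms drops to roughly the 1\% level, consistent with the order-of-magnitude cancellation argued above.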
\begin{figure*}
\label{figurePCygniFlorian.png}
\begin{center}
\hspace{-0.2cm}
\includegraphics[width=\textwidth]{figs/PCygniFlorian.png}
\vspace{-0.2cm}
\caption{\bf Temporal snapshots of the P Cygni profile variations from the LDI simulation, showing variations of width
> 100 km s$^{-1}$. The depth of the features is enhanced artificially in the 2D model by azimuthal coherence around the observer
line of sight, and could be an order of magnitude less. The mean shape of the P Cygni absorption is less deep than a smooth model,
owing to the effects of ``vorosity".}
\end{center}
\end{figure*}
The dynamical spectrum produced by this model is shown in Fig. (7).
Again the artificial azimuthal coherence in the model overestimates the scale of the features, but their general
attributes are seen to be highly reminiscent of observed dynamical spectra.
The model features have an acceleration that is seen to be nearly constant, and somewhat faster than
what has been so far observed (the scale of the acceleration in the simulation is $\cong$ 0.2 km s$^{-2}$,
whereas the few observed accelerations are perhaps about 1/4 that rapid).
This may be because observed features are primarily limited to large DACs that are likely linked to CIRs induced by surface
features, not the LDI-induced features simulated above.
The higher SNR accessible by the {S2} Polstar experiment will allow us to ascertain whether there are different acceleration
rates for features linked to surface inhomogeneities, as opposed to self-excited clumps in the wind.
If such potentially more rapid accelerations exist in the data,
the wide features that appear in the simulation shown can still be temporally
tracked using the exposure times in this experiment by binning the data to 100 km s$^{-1}$,
approximately doubling the already high SNR in the process.
Alternatively, if the accelerations are more at the scale of the surface gravity of the star, as may tend to be true
of the few DACs that have been well resolved, then the finer binning at the sound speed level ($\cong$ 20 km s$^{-1}$)
will match well the exposure times and produce the experiment described in section 3 above.
\begin{figure*}
\label{figuredynspecFlorian.png}
\begin{center}
\hspace{-0.2cm}
\includegraphics[width=\textwidth]{figs/dynspecFlorian.png}
\vspace{-0.2cm}
\caption{\bf Dynamical spectra from the LDI model are shown relative to the mean profile on the left, and for the full
profile on the right. The velocity and temporal widths of the features are true to the simulation, but their magnitude
is overestimated by perhaps an order of magnitude because of the model 2D azimuthal symmetry around the observer line of sight.}
\end{center}
\end{figure*}
\section{Conclusions}
The \textit{Polstar} {S2} experiment is built to resolve and track small-scale clumping in line-driven winds at SNR > 300, and also to connect
larger scale structure with UV polarization signatures stemming from free-electron scattering.
The purpose is to quantify corrections of mass-loss rate determinations that are sensitive to density inhomogeneity, such as \ensuremath{\mathrm{H}\alpha}
and free-free radio emission.
This will lead to more accurate determinations of how much mass high-mass main-sequence stars and blue supergiants lose, setting the
stage for, and significantly altering, later evolutionary phases for single stars and binaries alike.
In the process, the physics of the driving of these winds, and their profound importance for galactic ecology, will be better understood.
\begin{acknowledgements}
RI acknowledges funding support from a grant by the National Science Foundation, AST-2009412.
Scowen acknowledges his financial support by the NASA Goddard Space Flight Center to formulate the mission proposal for Polstar.
Y.N. acknowledges support from the Fonds National de la Recherche Scientifique (Belgium), the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Programme (contracts linked to XMM-Newton and Gaia).
SE acknowledges the STAREX grant from the ERC Horizon 2020 research and innovation programme (grant agreement No. 833925), and the COST Action ChETEC (CA 16117) supported by COST (European Cooperation in Science and Technology). A.D.-U. is supported by NASA under award number 80GSFC21M0002.
AuD acknowledges support by NASA through Chandra Award number TM1-22001B issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. NS acknowledges support provided by NAWA through grant number PPN/SZN/2020/1/00016/U/DRAFT/00001/U/00001.
LH, FAD, and JOS acknowledge support from the Odysseus program of the Belgian Research Foundation Flanders (FWO) under grant G0H9218N.
\end{acknowledgements}
\section{Introduction}
\paragraph{}
The surprising discovery of the Bartnik-McKinnon (BM) non-trivial particle-like
structure~\cite{bartnik} in the Einstein-Yang-Mills system
opened many possibilities for the existence of non-trivial
solutions to Einstein-non-Abelian-gauge systems.
Indeed, soon after its discovery, many other
self-gravitating structures with non-Abelian gauge fields
have been discovered~\cite{selfgrav}. These
include black holes
with non-trivial hair, thereby leading to the possibility of
evading the no-hair conjecture~\cite{nohair}.
The physical reason for the existence of these classical
solutions is the `balance' between
the non-Abelian gauge-field repulsion
and the gravitational attraction. Such a balance
allows for dressing black hole solutions
by non-trivial configurations (outside the horizon)
of fields that are not associated with a Gauss-law,
thereby leading to an `apparent' evasion of the no-hair conjecture.
\paragraph{}
Among such black-hole solutions, a physically interesting
case is that of a spontaneously broken Yang-Mills theory
in a non-trivial black-hole space-time (EYMH)~\cite{greene}.
This system has been recently examined from a stability point of view,
and found to possess an instability~\cite{winst},
thereby making the physical importance
of the solution rather marginal, but also indicating another
dimension of the no-hair conjecture, not considered in the original
analysis, that of stability.
\paragraph{}
In this article, we shall give more details of these stability considerations
by extending the analysis to incorporate counting of the
unstable modes, and going beyond the linear-stability case
by employing catastrophe theory~\cite{torii}
in order to analyse
instabilities
in the gravitational sector of the solution.
Catastrophe theory is a powerful mathematical tool
to study or explain a variety of change of states in
nature, and in particular a {\it discontinuous }
change of states that occurs eventually despite a
gradual (smooth) change of certain parameters of the
system. In the case at hand, the catastrophe
functional,
which exhibits a discontinuous change in its
behaviour, will be the mass of the black hole space time,
whilst the control parameter, whose smooth change
turns on the catastrophe at a given critical value, will
be the vacuum expectation value (v.e.v.) of the Higgs field.
The advantage of using the v.e.v. of the Higgs field as the control
parameter, rather than the horizon radius as was done in \cite{torii},
is that it will allow us to relate the stability of the
EYMH black holes to that of the Schwarzschild solution, which
is well known to be stable.
The type of catastrophe encountered will be that of a {\it fold}
catastrophe.
The catastrophe-theoretic approach allows for a
universal stability study of non-abelian black hole solutions
that goes beyond linearised perturbations; the particular
use of the Higgs v.e.v. as a control parameter
in the case of the EYMH systems allows an
{\it exact} counting of the unstable modes.
\paragraph{}
As part of our analysis, we shall make an attempt to associate
the above catastrophe-theoretic considerations with
some `thermodynamic/information-theoretic' aspects of black hole physics,
and in particular with the entropy of the black hole.
By computing explicitly the entropy of
quantum fluctuations of (scalar) matter fields
near the horizon we shall show that `high-entropy' branches
of the solution possess less unstable modes (in the gravitational sector)
than the `low-entropy' ones. As a partial, but not less important, result
of this part of our analysis, we shall also show that
the entropy of the black hole possesses linear {\it and} logarithmic
divergencies. The linear divergencies do not violate the Bekenstein-Hawking
formula relating entropy to the classical horizon area. The only
difference is the divergent proportionality factors in front, which, however,
can be absorbed in a conjectured renormalization of Newton's constant in the
model~\cite{thooft,susskind}. This is not the case with the logarithmic
divergencies
though. The latter persist even in `extreme black-hole' cases,
where the linear divergencies disappear.
They clearly violate the Bekenstein-Hawking
formula. In our case they owe
their presence to the non-Abelian gauge and Higgs fields.
The presence of logarithmic divergencies in black hole physics
has been noted in ref. \cite{susskind}, but only in examples
involving truncation from (3+1)-dimensional space-times to (1 + 1) dimensions,
and
in that reference their
presence
had been attributed to this bad truncation of the four-dimensional
black hole spectrum. Later on, however, such logarithmic divergencies
have been confirmed to exist in
string-inspired dilatonic black holes in (3+1)
dimensions~\cite{dilatonbh}. Their presence in our EYMH system,
and in general in non-Abelian black holes as we shall show, indicates
that such
logarithmic divergencies are {\it generic}
in black hole space-times with non-conventional hair, and probably indicates
information loss, even in extreme cases, associated with the
presence of space time boundaries.
This probably implies
that the entropy of the black hole is not only associated with
classical geometric factors, but
is a much more complicated phenomenon related
to information carried by the various (internal) black hole states.
The latter
phenomenon could be associated with, and may offer ways out of,
the usual difficulties of reconciling quantum mechanics
with canonical quantum gravity.
\paragraph{}
The structure of the article
is as follows. In section 2 we shall discuss the no hair conjecture
for black holes space-times with non-trivial scalar field
configurations by following
a modern approach due to Bekenstein~\cite{bekmod}.
We shall show that the proof of the no-hair theorem fails for the
case of the EYMH system, in accordance with the explicit
solution found in ref. \cite{greene}. In section 3 we shall present
a stability analysis of the system based on linear perturbations.
We shall demonstrate the existence of instabilities
in the sphaleron sector, following a variational
approach which is an extension of the approach of
Volkov and Gal'tsov~\cite{volkov}
to study particle-like solutions. We shall also
present arguments for counting the unstable modes in the sphaleron
sector of the theory.
In section 4 we shall present a method for counting the unstable
modes in the gravitational sector by
going beyond the linearised-perturbation analysis using catastrophe
theory, with the mass functional of the black hole as the
catastrophe functional and the Higgs v.e.v. as the control parameter.
In section 5, in
connection with the latter approach, we shall estimate
the entropy of the various branches of the solution using a WKB approximation.
We shall show that the high-entropy branch of solutions
is relatively more stable (in the sense of possessing fewer unstable
modes) than the low-entropy branch.
As we have already mentioned, we shall also discuss the existence
of logarithmic divergencies
in the entropy, associated with the presence of
non-trivial hair in the black hole, in certain extreme cases, and we shall
argue about an explicit violation of the classical Bekenstein-Hawking
entropy formula, indicating a different
(information oriented) r\^ole of the black hole entropy.
Conclusions and outlook will be presented in section 6.
Some technical aspects of our approach will be discussed in two appendices.
\section{Bypassing Bekenstein's no-hair theorem in
\newline EYMH systems}
\paragraph{}
Recently, Bekenstein presented a modern elegant proof
of the no-hair theorem for black holes, which covers
a variety of cases with scalar fields~\cite{bekmod}. The theorem
is formulated in such a way so as to rule out a multicomponent
scalar field dressing an asymptotically flat, static, spherically-symmetric
black hole. The basic assumption of the theorem is that the scalar
field is minimally coupled to gravity and bears a non-negative energy density
as seen by any observer, and the proof relies on very general principles,
such as energy-momentum conservation and the Einstein equations.
{}From the positivity assumption and the conservation equations
for the energy momentum tensor $T_{MN}$ of the theory, $\nabla ^M T_{MN}= 0$,
one obtains for a spherically-symmetric space-time background
the condition that near the horizon the radial component
of the energy-momentum tensor and its first derivative are negative
\begin{equation}
T _r^r < 0, \qquad (T_r^r)' < 0
\label{zero}
\end{equation}
with the prime denoting differentiation with respect to $r$.
This
implies that in such systems there must be regions in space, outside
the horizon where both quantities in (\ref{zero}) change sign.
This contradicts the results following from Einstein's equations
though~\cite{bekmod}, and this {\it contradiction} constitutes
the proof of the no-hair theorem, since the only allowed non-trivial
configurations are Schwarzschild black holes.
We note, in passing,
that there are known exceptions to the original version of the
no-hair theorem~\cite{nohair}, such as conformal
scalar fields coupled to gravity, which
come from the fact that in such theories the scalar fields
diverge at the horizon of the black hole~\cite{confdiv}.
\paragraph{}
The interest for our case
is that the theorem rules out the existence
of non-trivial hair due to a Higgs field with a double (or multiple) well
potential, as is the case for spontaneous symmetry breaking.
Given that stability issues are not involved in the proof,
it is of interest to reconcile the results of the theorem with the situation
in our case of EYMH systems,
where at least
we know that an explicit solution with non-trivial
hair exists~\cite{greene}, albeit unstable~\cite{winst}.
As we shall
show below, the formal reason
for bypassing
the modern version
of the no-hair theorem~\cite{bekmod}
lies in the violation
of the key relation among the
components of the stress tensor, $T_t^t =T_\theta^\theta$,
shown to hold in the case of ref. \cite{bekmod}.
The physical reason
for the `elusion' of the above no-hair conjecture lies in the fact that
the presence of the repulsive non-Abelian gauge interactions
balance the gravitational attraction, by producing terms that
make the result (\ref{zero}) inapplicable in the present case.
Below we shall demonstrate this in a mathematically rigorous way.
\paragraph{}
To this end, consider the EYMH theory with Lagrangian
\begin{equation}
{\cal L}_{EYMH} = -\frac{1}{4\pi} \left\{ \frac{1}{4} |F_{MN}|^2
+ \frac{1}{8} \phi ^2 |A_M|^2 + \frac{1}{2} |\partial _M \phi |^2
+ V(\phi ) \right\}
\label{onea}
\end{equation}
where $A_M$ denotes the Yang-Mills field, $F_{MN}$ its field strength,
$\phi $ is the Higgs field and $V (\phi )$ its potential.
All the indices are contracted with the help of the
background gravitational tensor $g_{MN}$.
In the spirit of Bekenstein's
modern version of the no-hair theorem, we now examine
the energy-momentum tensor of the model (\ref{onea}). It can be written in the
form
\begin{equation}
8 \pi T_{MN} = -{\cal E } g_{MN} + \frac{1}{4\pi} \left\{ F_{MP}F_{N}{}^{P}
+ \frac{\phi ^2}{4} A_M A_N + \partial _M \phi \partial _N \phi \right\}
\label{threea}
\end{equation}
with ${\cal E} \equiv -{\cal L}_{EYMH} $.
\paragraph{}
Consider, now, an observer moving with a four-velocity $u^M $ . The observer
sees a local energy density
\begin{equation}
\rho = {\cal E} + \frac{1}{4\pi} \left\{ u^M F_{MP} F_N{}^{P} u^N +
\frac{\phi ^2}{4} (u^M A_M )^2 + (u^M \partial _M \phi )^2 \right\},
\qquad u^M u_M = -1.
\label{foura}
\end{equation}
To simplify the situation let us
consider a space-time with a time-like killing vector, and suppose that the
observer moves along this killing vector. Then
$ u^M \partial _M \phi = 0 $ and by an appropriate gauge choice
$u^M A_M =0 = u^M F_{MN} $. This gauge choice is compatible with the
spherically-symmetric ansatz
for $A_M$ of ref. \cite{greene}. Under these assumptions,
\begin{equation}
\rho = {\cal E}
\label{fivea}
\end{equation}
and the requirement that the local energy density as seen by any observer
is non-negative
implies
\begin{equation}
{\cal E } \ge 0.
\label{sixa}
\end{equation}
\paragraph{}
We are now in a position to proceed with the announced proof of the bypassing
of the no-hair theorem of ref. \cite{bekmod} for the EYMH black hole of
ref. \cite{greene}.
To this end we consider a spherically-symmetric ansatz for the
space-time metric $g_{MN}$, with an invariant line element of the form
\begin{equation}
ds^2 = - e^{\Gamma } dt^2 + e^\Lambda dr^2
+ r^2 (d\theta ^2 + \sin^2 \theta d\varphi ^2),
\qquad \Gamma = \Gamma (r),~\Lambda = \Lambda (r).
\label{sevena}
\end{equation}
To make the connection with the black hole case we further assume that the
space-time is asymptotically flat.
\paragraph{}
{}From the conservation of the energy-momentum tensor, following from the
invariance of the effective action under general co-ordinate transformations,
one has for the $r$-component of the conservation equation
\begin{equation}
[(-g)^{\frac{1}{2}} T_r^r ]' - \frac{1}{2} (-g)^{\frac{1}{2}}
\left( \frac{\partial }{\partial r} g_{MN}\right) T^{MN} = 0
\label{eighta}
\end{equation}
with the prime denoting differentiation with respect to $r$. The spherical
symmetry of the space time implies that $T_\theta ^\theta =
T_\varphi ^\varphi $.
Hence, (\ref{eighta}) can be written as
\begin{equation}
(e^{\frac{\Gamma+\Lambda}{2}} r^2 T_r^r )' - \frac{1}{2}
e^{\frac{\Gamma + \Lambda}{2}} r^2 \left[ \Gamma' T_t^t + \Lambda ' T_r^r
+ \frac{4}{r}
T_\theta ^\theta \right] = 0.
\label{ninea}
\end{equation}
Observing that the terms containing $\Lambda $ cancel,
and integrating over the radial coordinate $r$ from the horizon
$r_h$ to a generic distance $r$, one obtains
\begin{equation}
T_r^r (r) = \frac{e^{-\frac{\Gamma}{2}}}{2 r^2}
\int _{r_h}^r dr e^{\frac{\Gamma}{2}} r^2 \left[ \Gamma' T_t^t +
\frac{4}{r} T_\theta ^\theta \right]
\label{tena}
\end{equation}
Note that
the assumption that scalar invariants, such as
$T_{MN}T^{MN}$, are finite on the horizon (in order that the latter
is regular),
implies that the boundary terms on the horizon vanish in
(\ref{tena}).
\paragraph{}
It is then straightforward to obtain
\begin{equation}
(T_r^r)' =\frac{1}{2} \left[ \Gamma' T_t^t
+ \frac{4}{r} T_\theta ^\theta \right]
- \frac{e^{-\frac{\Gamma}{2}}}{r^2} (e^{\frac{\Gamma}{2}} r^2)' T_r^r.
\label{elevena}
\end{equation}
\paragraph{}
Next, we consider Yang-Mills fields of the form~\cite{greene}
\begin{equation}
A = (1 + \omega (r) ) [-{\hat \tau} _\phi d\theta + {\hat \tau}_\theta
\sin \theta d\varphi ]
\label{twelvea}
\end{equation}
where $\tau _i $, $i =r,\theta,\varphi $
are the generators of the $SU(2)$ group
in spherical-polar coordinates.
Ansatz (\ref{twelvea}) yields
\begin{eqnarray}
T_t^t &=& - {\cal E} \nonumber \\
T_r^r &=& - {\cal E} + {\cal F} \nonumber \\
T_\theta ^\theta & = & -{\cal E} + {\cal J}
\label{thirteena}
\end{eqnarray}
with (see Appendix A for details of the relevant quantities),
\begin{eqnarray}
{\cal F} &\equiv & \frac{e^{-\Lambda}}{4\pi} \left[ \frac{2\omega '^2}{r^2}
+ \phi '^2 \right]
\nonumber \\
{\cal J} &\equiv & \frac{1}{4\pi} \left[ \frac{\omega '^2}{r^2}e^{-\Lambda}
+ \frac{(1 - \omega ^2)^2}{r^4} + \frac{\phi ^2}{4r^2} (1 + \omega )^2
\right].
\label{fourteena}
\end{eqnarray}
Substituting (\ref{thirteena}) and (\ref{fourteena}) into (\ref{tena}) and (\ref{elevena}) yields
\begin{equation}
T_r^r (r) = \frac{e^{-\frac{\Gamma}{2}}}{r^2}\int _{r_h}^r
\left\{ -
(e^{\frac{\Gamma}{2}}r^2)' {\cal E} + 2 r e^{\frac{\Gamma}{2}} {\cal J}
\right\} dr
\label{fifteena}
\end{equation}
\begin{equation}
(T_r^r)' (r) = -\frac{e^{-\frac{\Gamma}{2}}}{r^2}
(e^{\frac{\Gamma}{2}}r^2)'
{\cal F} + \frac{2}{r}{\cal J}
\label{fifteenb}
\end{equation}
where ${\cal E}$ is expressed as
\begin{equation}
{\cal E} =\frac{1}{4\pi} \left[ \frac{(\omega ')^2}{r^2}
e^{-\Lambda } + \frac{(1-\omega ^2)^2}{2r^4} +
\frac{\phi ^2(1 + \omega )^2}{4r^2} + \frac{1}{2} (\phi ')^2e^{-\Lambda }
+ \frac{{\lambda}}{4} (\phi ^2 - { v}^2)^2 \right].
\label{sixteena}
\end{equation}
We now turn to the Einstein equations for the first time, following
the analysis of ref. \cite{bekmod}. Our aim is to examine whether there is a
contradiction with the requirement
of the non-negative energy density. These equations read for our system
\begin{eqnarray}
e^{-\Lambda} (r^{-2} - r^{-1}\Lambda ' ) - r^{-2} &=&
8\pi T_t^t = -8\pi {\cal E} \nonumber \\
e^{-\Lambda} (r^{-1} \Gamma' + r^{-2} ) - r^{-2} &=& 8\pi T_r^r.
\label{seventeena}
\end{eqnarray}
Integrating out the first of these yields
\begin{equation}
e^{-\Lambda } = 1 -\frac{8\pi}{r} \int _{r_h}^r
{\cal E} r^2 dr - \frac{2 {\cal M_{0}}}{r}
\label{eighteena}
\end{equation}
where ${\cal M_{0}}$ is a constant of integration.
\paragraph{}
The requirement for
asymptotic flatness of space-time implies the following
asymptotic behaviour for the energy-density functional ${\cal E} \sim
O(r^{-3}) $ as $r \rightarrow \infty $, so that $\Lambda \sim O(r^{-1}) $.
In order that $e^{\Lambda} \rightarrow \infty $ at the horizon,
$r \rightarrow r_h $, ${\cal M_{0}}$ is fixed by
\begin{equation}
{\cal M_{0}} = \frac{r_h}{2}.
\label{nineteen}
\end{equation}
The second of the equations (\ref{seventeena}) can be rewritten in the form
\begin{equation}
e^{-\frac{\Gamma}{2}} r^{-2}
(r^2 e^{\frac{\Gamma}{2}})' = \left[ 4\pi r T_r^r
+ \frac{1}{2r} \right] e^{\Lambda} + \frac{3}{2r}.
\label{twenty}
\end{equation}
\paragraph{}
Consider, first, the behaviour of $T_r^r$ as $ r \rightarrow \infty $.
Asymptotically, $e^{\frac{\Gamma}{2}} \rightarrow 1 $, and so the leading
behaviour of $(T_r^r)'$ is
\begin{equation}
(T_r^r) ' = \frac{2}{r} [{\cal J} - {\cal F} ].
\label{21}
\end{equation}
We now note that the fields $\omega $ and $\phi $ have masses
$\frac{{ v}}{2}$ and $\sqrt{2}\mu $, where $\mu = \sqrt{{ \lambda}}{ v}$,
respectively. From the field equations and the requirement of finite energy
density their behaviour at infinity must then be
\begin{eqnarray}
\omega (r) &\sim & -1 + ce^{-\frac{{ v}}{2} r} \nonumber \\
\phi (r) &\sim & { v} + a e^{-\sqrt{2} \mu r}
\label{22}
\end{eqnarray}
for some constants $c$ and $a$. Hence, the leading asymptotic behaviour
of ${\cal J}$ and ${\cal F}$ is
\begin{eqnarray}
{\cal J} &\sim & \frac{1}{4\pi} \left[ \frac{c^2 { v}^2}{4r^2}e^{-
{ v}r} + \frac{2c^2}{r^4} e^{-{ v}r} +
\frac{{ v}^2 c^2 }{4r^2} e^{-{ v}r} \right] \nonumber \\
{\cal F} &\sim & \frac{1}{4\pi} \left[ \frac{c^2 { v}^2}
{2r^2} e^{-{ v} r} + 2 a^2 \mu ^2 e^{-2\sqrt{2} \mu r} \right]
\label{23}
\end{eqnarray}
since $e^{-\Lambda } \rightarrow 1$ asymptotically.
\paragraph{}
The leading behaviour of $(T_r^r)'$, therefore, is
\begin{equation}
(T_r^r)' \sim \frac{1}{4\pi} \left[ \frac{2c^2}{r^4} e^{-{ v}r}
- 2a^2 \mu ^2 e^{-2\sqrt{2} \mu r} \right].
\label{24}
\end{equation}
There are two possible cases: (i) $2\sqrt{2}\mu > { v} $ (corresponding
to ${ \lambda } > 1/8$); in this case $(T_r^r)' > 0$
asymptotically, (ii) $2\sqrt{2} \mu \le { v}$ (corresponding to
${ \lambda } \le 1/8$) ; then, $(T_r^r)' < 0$ asymptotically.
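The correspondence between the two cases and the value of $\lambda$ follows directly from $\mu = \sqrt{\lambda}\,v$: the Higgs contribution decays faster than the gauge contribution precisely when $2\sqrt{2}\mu > v$, i.e. $8\lambda > 1$. A minimal numerical illustration of this threshold (not part of the original analysis) is:

```python
import math

def decay_ratio(lam):
    """Ratio of the Higgs decay rate 2*sqrt(2)*mu to the gauge decay rate v,
    with mu = sqrt(lam)*v; the v-dependence cancels."""
    return 2.0 * math.sqrt(2.0) * math.sqrt(lam)

# Case (i): lambda > 1/8, Higgs term decays faster, (T_r^r)' > 0 at infinity
assert decay_ratio(0.2) > 1.0
# Case (ii): lambda <= 1/8, Higgs term decays slower, (T_r^r)' < 0 at infinity
assert decay_ratio(0.1) < 1.0
# Threshold: lambda = 1/8 gives equal decay rates
assert abs(decay_ratio(0.125) - 1.0) < 1e-12
```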
\paragraph{}
Since ${\cal J}$ vanishes exponentially at infinity, and ${\cal E} \sim
O[r^{-3}]$ as $r \rightarrow \infty$, the integral
defining $T_r^r (r)$ converges as $r \rightarrow \infty $
and $|T_r^r|$ decreases as $r^{-2}$.
\paragraph{}
Thus, in case (i) above, $T_r^r$ is negative and increasing as $r \rightarrow
\infty$, and in case (ii) $T_r^r$ is positive and decreasing.
\paragraph{}
Now turn to the behaviour of $T_r^r$ at the horizon.
When $r \simeq r_h$ , ${\cal E}$ and ${\cal J}$ are both finite, and
$\Gamma'$ diverges as $1/(r- r_h)$.
Thus the main contribution to $T_r^r$ as $r \simeq r_h$ is
\begin{equation}
T_r^r (r) \simeq \frac{e^{-\Gamma/2}}{r^2} \int _{r_h} ^r
\left(-e^{\Gamma/2}
r^2\right) \frac{\Gamma'}{2} {\cal E} \, dr
\label{25}
\end{equation}
which is finite.
\paragraph{}
At the horizon, $e^\Gamma = 0$; outside the horizon, $e^\Gamma > 0$ .
Hence $\Gamma' >0$ sufficiently close to the horizon,
and, since ${\cal E} \ge 0$,
$T_r^r < 0$ for $r$ sufficiently close to the horizon.
\paragraph{}
Since ${\cal F} \sim O[r-r_h]$ at $r \simeq r_h$, $(T_r^r)'$ is finite
at the horizon and the leading contribution is
\begin{equation}
(T_r^r)' (r_h) \simeq -\frac{\Gamma'}{2} {\cal F} + \frac{2}{r}{\cal J}.
\label{26}
\end{equation}
{}From ref. \cite{greene} we record the relation
\begin{equation}
re^{-\Lambda} \frac{\Gamma'}{2} = e^{-\Lambda}\left[ \omega '^{2} +
\frac{1}{2} r^2 (\phi ')^2 \right] - \frac{1}{2} \frac{(1-\omega ^2)^2}{r^2}
- \frac{1}{4}\phi ^2 (1 + \omega )^2 + \frac{m}{r} -
\frac{{ \lambda}}{4} (\phi ^2 - { v}^2)^2 r^2
\label{27}
\end{equation}
where $e^{-\Lambda} = 1 - \frac{2m(r)}{r}$. Hence,
\begin{eqnarray}
(T_r^r)' &=& -\frac{e^{-\Lambda}}{4\pi r}\left[ \frac{2(\omega ')^2}{r^2}
+ (\phi ')^2 \right] \left\{ (\omega ')^2 + \frac{1}{2} r^2 (\phi ')^2
- \frac{1}{2} e^\Lambda \frac{(1-\omega ^2)^2}{r^2} \right. \nonumber \\
& & \left.
-\frac{\phi ^2}{4} e^\Lambda (1 + \omega )^2 +
\frac{m}{r}e^\Lambda - \frac{{\lambda}}{4} (\phi ^2 -
{ v}^2 )^2 r^2 e^\Lambda \right\} \nonumber \\
& &
+ \frac{1}{2\pi r} \left[ \frac{(\omega ')^2}{r^2} e^{-\Lambda} +
\frac{(1 - \omega ^2)^2}{r^4} + \frac{\phi ^2}{4r^2}
(1 + \omega)^2 \right].
\label{28}
\end{eqnarray}
For $r \simeq r_h$,
this expression simplifies to
\begin{eqnarray}
(T_r^r)'(r_h) & \simeq & {\cal J}(r_h) \left[ \frac{2}{r_h} + \frac{4\pi}{r_h}
{\tilde {\cal F}}(r_h) \right] \nonumber \\
& & - {\tilde {\cal F}}(r_h) \left[ \frac{1}{2}
+ \frac{1}{2}\frac{(1 - \omega ^2)^2}{r_h^2} -
\frac{{ \lambda}}{4}(\phi ^2 - { v}^2)^2 r_h^2 \right] \nonumber \\
& = &
{\tilde {\cal F}}(r_{h}) \left[
\frac {(1- \omega ^{2})^{2}}{2r_{h}^{3}} +
\frac {\phi ^{2}}{4r_{h}}(1+\omega )^{2} +
\frac {\lambda }{4}r_{h}(\phi ^{2} -v^{2})^{2} -
\frac {1}{2r_{h}} \right] \nonumber \\ & &
+ \frac {2}{r_{h}} {\cal J}
\label{29}
\end{eqnarray}
where ${\tilde {\cal F}} = e^\Lambda {\cal F}(r) = \frac{1}{4\pi}
[\frac{2(\omega ')^2}{r^2} + (\phi')^2]$.
\paragraph{}
Consider for simplicity the case $r_{h}=1$.
Then, from the field equations \cite{greene} (see also Appendix A)
\begin{eqnarray}
\omega _{h}' & = &
\frac {1}{\cal D} \left[
\frac {1}{4} \phi _{h}^{2} (1+\omega _{h}) -
\omega _{h} (1-\omega _{h}^{2}) \right] =
\frac {\cal A}{\cal D} \\
\phi _{h}' & = &
\frac {1}{\cal D} \left[
\frac {1}{2} \phi _{h} (1+\omega _{h})^{2}
+\lambda \phi _{h} (\phi _{h}^{2}- v^{2}) \right] =
\frac {\cal B}{\cal D}
\end{eqnarray}
where
\begin{equation}
{\cal D}=1- (1-\omega _{h}^{2})^{2} -
\frac {1}{2} \phi _{h}^{2} (1+\omega _{h})^{2}
- \frac {1}{2} \lambda (\phi _{h}^{2} -v^{2})^{2}.
\end{equation}
Then the expression (\ref{29}) becomes
\begin{equation}
(T^{r}_{r})'(r_{h})=
\frac {1}{4\pi {\cal {D}}} \left[
8\pi {\cal D} {\cal J} - {\cal A}^{2} -\frac {1}{2} {\cal B}^{2}
\right]
=\frac {{\cal {C}}}{ 4 \pi {\cal {D}}}.
\end{equation}
{}From the field equations \cite{greene} (see Appendix A),
\begin{equation}
{\cal D}=1-2m_{h}'
\end{equation}
which is always positive because the black holes are non-extremal.
(See Appendix A for further discussion of this point.)
Thus the sign of $(T^{r}_{r})'(r_{h})$ is the same as that of
${\cal C}$.
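As a spot check (not in the original text), one can evaluate ${\cal D}$ numerically at the bifurcation point of the $k=1$ branch, using the representative values quoted later in this section ($\omega_h = 0.869$, $\phi_h = 0.0669$, $v = v_{max} = 0.352$, $\lambda = 0.15$, $r_h = 1$); its positivity is then manifest:

```python
# Illustrative evaluation of D = 1 - (1-w^2)^2 - (1/2) phi^2 (1+w)^2
# - (1/2) lam (phi^2 - v^2)^2 at the k=1 bifurcation point, with the
# representative values w_h = 0.869, phi_h = 0.0669, v = 0.352, lam = 0.15.
w, phi, v, lam = 0.869, 0.0669, 0.352, 0.15

D = 1 - (1 - w**2)**2 - 0.5 * phi**2 * (1 + w)**2 \
      - 0.5 * lam * (phi**2 - v**2)**2

assert 0 < D < 1          # D = 1 - 2 m_h' lies in (0,1) for non-extremal holes
assert abs(D - 0.9312) < 1e-3
```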
Simplifying, we have
\begin{equation}
{\cal C}=c_{1}+c_{2}+c_{3}+c_{4}+c_{5}
\end{equation}
where
\begin{eqnarray}
c_{1} & = &
(1-\omega _{h}^{2})^{2} \omega _{h}^{2} (3-2\omega _{h}^{2}) \\
c_{2} & = &
\frac {1}{8} \phi _{h}^{2} (1+\omega _{h}) (1-\omega _{h}^{2})
(12\omega _{h}^{3} +12\omega _{h}^{2} -7\omega _{h}-9) \\
c_{3} & = &\frac {1}{16} \phi _{h}^{4} (1+\omega _{h})^{2}
(4\omega _{h}^{2} +8\omega _{h}+5) \\
c_{4} & = &
-\lambda (\phi _{h}^{2}-v^{2})^{2} \left[
(1-\omega _{h}^{2})^{2} +\frac {1}{2} \lambda \phi _{h}^{2} \right] \\
c_{5} & = &
\frac {1}{4} \lambda \phi _{h}^{2} (v^{2}-\phi _{h}^{2})
(1+\omega _{h})^{2} (2-v^{2}+\phi _{h}^{2}).
\end{eqnarray}
The first term is always positive since $|\omega _{h}|\le 1$.
The cubic in $c_{2}$ possesses a local maximum at $\omega _{h}=-0.886$
where it has the value $-1.724$ and a local minimum at
$\omega _{h}=0.219$ where it equals $-9.831$.
The cubic has a single root at $\omega _{h}=0.8215$, and thus is
positive for $\omega _{h}>0.8215 $ and negative for
$\omega _{h}<0.8215$.
The quadratic in $c_{3}$ is always positive and possesses no real
roots.
It has a minimum value of 1 when $\omega _{h}=-1$.
The term $c_{4}$ is always negative and $c_{5}$ is always positive
since $|\phi _{h}|\le v$.
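The quoted extrema and root of the cubic in $c_2$, and the positivity of the quadratic in $c_3$, are easily verified numerically; the following sketch (an illustrative check, not part of the original analysis) uses elementary root-finding:

```python
import math

cubic = lambda w: 12*w**3 + 12*w**2 - 7*w - 9   # cubic factor in c_2
quad  = lambda w: 4*w**2 + 8*w + 5              # quadratic factor in c_3

# Critical points of the cubic from 36 w^2 + 24 w - 7 = 0
disc = math.sqrt(24**2 + 4*36*7)
w_min = (-24 + disc) / 72      # local minimum, ~ 0.219
w_max = (-24 - disc) / 72      # local maximum, ~ -0.886
assert abs(w_max + 0.886) < 1e-3 and abs(cubic(w_max) + 1.724) < 1e-3
assert abs(w_min - 0.219) < 1e-3 and abs(cubic(w_min) + 9.831) < 1e-3

# Single real root of the cubic near 0.8215, found by bisection on [0.8, 0.9]
a, b = 0.8, 0.9
for _ in range(60):
    c = 0.5 * (a + b)
    if cubic(a) * cubic(c) <= 0:
        b = c
    else:
        a = c
assert abs(0.5 * (a + b) - 0.8215) < 1e-3

# Quadratic: negative discriminant (no real roots), minimum value 1 at w = -1
assert 8**2 - 4*4*5 < 0 and quad(-1) == 1
```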
\paragraph{}
In order to assess whether or not $\cal C$ as a whole is positive or
negative, we shall consider each branch of black hole solutions in
turn.
\paragraph{}
Firstly, consider the $k=1$ branch of solutions.
As $v$ increases from 0 up to $v_{max}=0.352$, $\omega _{h}$
increases monotonically from 0.632 to 0.869.
The derivative of the first term is
\begin{equation}
\frac {dc_{1}}{d\omega _{h}}=
2\omega _{h}(1-\omega _{h}^{2})(3-13\omega _{h}^{2}+8\omega _{h}^{4})
\end{equation}
where the quartic has roots given by $\omega _{h}^{2}=1.347 {\mbox { or }}
0.278$.
This derivative is negative for $\omega _{h}\in (0.632,0.869) $
and hence the first term decreases as $v$ increases, and is bounded below
by its value at the bifurcation point $v=v_{max}$, namely
\begin{equation}
c_{1}\ge 0.0674.
\end{equation}
The cubic in $c_{2}$ increases as $\omega _{h}$ increases from 0.632
to 0.869, and is bounded below by its value when $\omega _{h}=0.632$
\begin{equation}
12\omega _{h}^{3} +12\omega _{h}^{2} -7 \omega _{h} -9
\ge -5.602.
\end{equation}
Along this branch of solutions, it is also true that
\begin{eqnarray}
(1-\omega _{h}^{2}) & \le & 1-(0.632)^{2}=0.601 \nonumber \\
1+\omega _{h} & \le & 2 \nonumber \\
\phi _{h} & \le & 0.19v \le 0.0669.
\end{eqnarray}
Altogether this gives
\begin{equation}
c_{2}\ge -5.602 \times \frac {1}{8} \times (0.0669)^{2}
\times 2\times 0.601 =
-3.767 \times 10^{-3}.
\end{equation}
The quadratic in $c_{3}$ increases as $\omega _{h}$ increases
along the branch and so is bounded above by its value when
$\omega _{h}=0.869$;
\begin{equation}
4\omega _{h}^{2} +8\omega _{h} +5 \le 14.973.
\end{equation}
Thus,
\begin{equation}
c_{3}\ge -\frac {1}{16} \times (0.0669 )^{4} \times 2^{2}
\times 14.973 =
-7.498 \times 10^{-5}.
\end{equation}
For the fourth term, since
\begin{equation}
(\phi ^{2}_{h} -v^{2})^{2}\le v^{4}
\end{equation}
we have
\begin{equation}
c_{4} \ge -0.15 \times (0.352)^{4} \times \left(
(0.601)^{2} +\frac {1}{2} \times 0.15\times (0.0669)^{2}
\right) =-8.326 \times 10^{-4}
\end{equation}
and finally,
\begin{equation}
c_{5}\ge 0.
\end{equation}
Thus, adding these expressions up, one obtains
\begin{equation}
{\cal C}\ge 0.0627 \ge 0
\end{equation}
so that $(T^{r}_{r})'(r_{h})>0$ along the whole of the
$k=1$ branch of black hole solutions.
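The arithmetic leading to this bound can be reproduced directly from the branch data quoted above ($\omega_h \in (0.632, 0.869)$, $\phi_h \le 0.0669$, $v \le v_{max} = 0.352$, $\lambda = 0.15$); the following sketch (illustrative only) checks each lower bound and the total:

```python
# Lower bounds on c_1,...,c_4 along the k=1 branch, from the data quoted in
# the text: w_h in (0.632, 0.869), phi_h <= 0.0669, v <= 0.352, lam = 0.15.
w = 0.869                                   # bifurcation value of omega_h
c1_min = (1 - w**2)**2 * w**2 * (3 - 2*w**2)
assert abs(c1_min - 0.0674) < 1e-3

cubic_632 = 12*0.632**3 + 12*0.632**2 - 7*0.632 - 9
assert abs(cubic_632 + 5.602) < 1e-3
c2_min = cubic_632 * (1/8) * 0.0669**2 * 2 * 0.601
assert abs(c2_min + 3.767e-3) < 1e-5

quad_869 = 4*w**2 + 8*w + 5
assert abs(quad_869 - 14.973) < 1e-3
c3_min = -(1/16) * 0.0669**4 * 2**2 * quad_869
assert abs(c3_min + 7.498e-5) < 1e-7

c4_min = -0.15 * 0.352**4 * (0.601**2 + 0.5 * 0.15 * 0.0669**2)
assert abs(c4_min + 8.326e-4) < 1e-6

C_min = c1_min + c2_min + c3_min + c4_min    # c_5 >= 0 is dropped
assert C_min >= 0.0627                       # so (T_r^r)'(r_h) > 0 on this branch
```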
\paragraph{}
For the quasi-$k=0$ branch, as $v$ decreases from $v_{max}$ down
to 0, $\omega _{h}$ increases monotonically from 0.869, subject to
the inequality
\begin{equation}
0.869<\omega _{h} < 1-0.1v^{2}.
\end{equation}
Hence, along this branch,
\begin{equation}
(1-\omega _{h}^{2})^{2} =(1-\omega _{h})^{2} (1+\omega _{h})^{2}
\ge (0.1v^{2})^{2} \times (1.869)^{2}
=0.0349 v^{4}.
\end{equation}
Thus the first term is bounded below as follows:
\begin{equation}
c_{1} \ge 0.0349 v^{4} \times (0.869)^{2} \times 1
= 0.0264v^{4}.
\end{equation}
Since $\omega _{h}>0.869$, the cubic in $c_{2}$ is positive all
along this branch, so that $c_{2}$ and $c_{5}$ are positive.
The quadratic in $c_{3}$ is bounded above by its value when
$\omega _{h}=2$, i.e.
\begin{equation}
4\omega _{h}^{2} +8\omega _{h} +5 \le 37.
\end{equation}
All along the quasi-$k=0$ branch,
\begin{equation}
\phi _{h} \le 0.5 v^{2}.
\end{equation}
This gives
\begin{equation}
c_{3} \ge -\frac {1}{16} \times (0.5v^{2})^{4} \times 2^{2}
\times 37 = -0.578 v^{8}.
\end{equation}
Since $v\le 0.352$,
\begin{equation}
c_{3} \ge -0.578 \times (0.352)^{4} \times v^{4}
=-8.874\times 10^{-3} v^{4}.
\end{equation}
Finally, for $c_{4}$ we have
\begin{eqnarray}
c_{4} & \ge &
-0.15 v^{4} \left[ \frac {1}{2} \times 0.15 \times (0.5)^{2} v^{4}
+(1-(0.869)^{2})^{2} \right] \nonumber \\
& = &
-8.992 \times 10^{-3}v^{4} -2.813 \times 10^{-3} v^{8} \nonumber \\
& \ge &
-8.992 \times 10^{-3} v^{4} -2.813 \times 10^{-3} \times
(0.352)^{4} v^{4} \nonumber \\
& = &
-9.035 \times 10^{-3} v^{4}.
\end{eqnarray}
In total this gives
\begin{equation}
{\cal C} \ge 0.0264v^{4} -8.874\times 10^{-3} v^{4}
-9.035 \times 10^{-3} v^{4} = 8.491 \times 10^{-3}v^{4}
\ge 0.
\end{equation}
In conclusion, $(T^{r}_{r})'(r_{h})$ is positive for all the black
hole solutions having one node in $\omega $, regardless of the
value of the Higgs mass $v$.
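For the quasi-$k=0$ branch, the coefficient of $v^4$ in each bound can likewise be recomputed from the quoted inequalities ($\omega_h \ge 0.869$, $(1-\omega_h^2)^2 \ge 0.0349 v^4$, $\phi_h \le 0.5 v^2$, $v \le 0.352$, $\lambda = 0.15$); an illustrative numerical check:

```python
# Coefficients of v^4 in the lower bounds along the quasi-k=0 branch, from
# the data quoted in the text: w_h >= 0.869, (1-w_h^2)^2 >= 0.0349 v^4,
# phi_h <= 0.5 v^2, v <= 0.352, lam = 0.15.
w, vmax, lam = 0.869, 0.352, 0.15

c1_coeff = 0.0349 * w**2                            # ~ 0.0264
c3_coeff = -(1/16) * 0.5**4 * 2**2 * 37 * vmax**4   # -0.578 v^8 bounded by v^4
c4_coeff = -(lam * (1 - w**2)**2
             + lam * 0.5 * lam * 0.5**2 * vmax**4)

assert abs(c1_coeff - 0.0264) < 1e-3
assert abs(c3_coeff + 8.874e-3) < 1e-4
assert abs(c4_coeff + 9.035e-3) < 1e-4

total = c1_coeff + c3_coeff + c4_coeff              # c_2, c_5 >= 0 dropped
assert total > 0                                    # so C >= total * v^4 > 0
assert abs(total - 8.491e-3) < 1e-4
```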
\paragraph{}
Let us now check on possible contradictions
with Einstein's equations.
\paragraph{}
Consider first the case
${ \lambda} > 1/8$. Then, as $r \rightarrow
\infty$, $T_r^r < 0 $ and $(T_r^r)' > 0$.
As $r \rightarrow r_h $, $T_r^r < 0$ and $(T_r^r)' > 0$.
Hence there is no contradiction with
Einstein's equations in this case.
\paragraph{}
Consider now the case $\lambda \le 1/8$.
In this case, as $r \rightarrow \infty$, $T_r^r > 0$ and
$(T_r^r)' < 0$, whilst as $r \rightarrow r_h$, $T_r^r
< 0 $ and $(T_r^r)' > 0$.
Hence, there is an interval $[ r_a, r_b ]$ in which $(T_r^r)' $ is positive
and there exists a `critical' distance $r_c \in (r_a, r_b)$ at which
$T_r^r$ changes sign.
\paragraph{}
However, unlike the case when the gauge fields are absent~\cite{bekmod},
here
there is {\it no contradiction} with the result following from Einstein
equations, because $(T_r^r)' > 0$ in some open interval close to the
horizon, as we have seen above.
\paragraph{}
In conclusion the method of ref. \cite{bekmod} cannot be used to prove
a `no-scalar-hair' theorem for the EYMH system, as expected from the
existence of the
explicit solution of ref. \cite{greene}. The {\it key} difference is
the presence of the positive term $\frac{2}{r}{\cal J}$
in the expression (\ref{fifteenb}) for $(T_r^r)'$. This term is
dependent on the Yang-Mills field and vanishes if this field is absent.
Thus, there is a sort of `balancing' between the gravitational
attraction and the non-Abelian gauge field repulsion, which
is responsible for the existence of the classical non-trivial
black-hole solution of ref. \cite{greene}.
However, as we shall discuss below, this solution is not stable against
(linear) perturbations of the various field configurations~\cite{winst}.
Thus, although the `letter' of the `no-scalar-hair' theorem of ref.
\cite{bekmod}, based on non-negative scalar-field-energy density,
is violated, its `spirit' is maintained in the sense that there exist
instabilities which imply that the solution cannot be formed
as a result of collapse of stable matter.
\section{Instability analysis of
sphaleron sector of the EYMH black hole}
\paragraph{}
The black hole solutions of ref. \cite{greene} in the EYMH system
resemble the sphaleron solutions
in $SU(2)$ gauge theory and one would expect them to be
unstable for topological reasons. Below we shall confirm this
expectation by proving~\cite{winst} the existence of unstable
modes in the sphaleron sector
of the EYMH black hole system (for notation and definitions
see Appendix A).
\paragraph{}
Recently, an instability
proof of sphaleron solutions for arbitrary gauge
groups in the EYM system has been given \cite{bs,brodbeck}.
The method consists of studying linearised radial perturbations
around an equilibrium solution, whose detailed knowledge
is not necessary to establish stability.
The stability is examined by mapping the system
of linearised equations for the perturbations
into a coupled system of differential equations
of Schr\"odinger type \cite{bs,brodbeck}.
As in the particle case of ref. \cite{bartnik}, the
instability of the solution is established once
a bound state in the respective
Schr\"odinger equations is found.
The latter shows up as an
imaginary frequency mode in the spectrum, leading to an
exponentially growing mode.
There is an elegant physical interpretation behind this
analysis, which is similar to the
Cooper pair instability of super-conductivity.
The gravitational attraction balances the
non-Abelian gauge field repulsion in the classical
solution \cite{bartnik}, but the existence of bound states
implies imaginary parts in the quantum ground state
which lead to instabilities of the solution, in much
the same way as the classical ground state
in super-conductivity is not the absolute minimum
of the free energy.
\paragraph{}
However, this method cannot be applied directly to the black
hole case, due to divergences occurring in some of the
expressions involved. This is
a result of the singular behaviour
of the metric function at the physical space-time boundaries
(horizon) of the black hole.
\subsection{Linearized perturbations and instabilities}
\paragraph{}
It is the purpose of this section to generalise the method
of ref. \cite{bs} to incorporate the black hole solution
of the EYMH system of ref. \cite{greene}. By
constructing appropriate
trial linear radial perturbations,
following ref. \cite{brodbeck,volkov},
we show the existence of bound states in the
spectrum of the coupled Schr\"odinger
equations, and thus the instability of the black hole.
Detailed knowledge of the black hole solutions is not
actually required, apart from the fact that
the existence of an horizon leads to modifications
of the trial perturbations as compared to those of
ref. \cite{bs,brodbeck}, in order to avoid divergences
in the respective expressions \cite{volkov}.
\paragraph{}
We start by sketching the basic steps \cite{bs,volkov}
that will lead to a study of the stability of
a classical solution $\phi _s (x, t)$
with finite energy
in a (generic) classical field theory.
One considers
small perturbations $\delta \phi (x,t)$
around $\phi _s (x, t)$, and specifies \cite{bs}
the time-dependence as
\begin{equation}
\delta \phi (x ,t ) = \exp (-i \Omega t ) \Psi (x ).
\label{linear}
\end{equation}
The
linearised
system (with respect to such perturbations),
obtained from the equations of motion,
can be
cast into a Schr\"odinger eigenvalue problem
\begin{equation}
{\cal H} \Psi = \Omega ^2 A \Psi
\label{schr}
\end{equation}
where the operators ${\cal H}$, $A$ are assumed independent
of the `frequency' $\Omega$. As we shall show later on, this is
indeed the case for our black hole solution of the EYMH system.
In that case it will also be shown that
${\cal H}$ is a self-adjoint operator with respect
to a properly defined inner (scalar) product in the space
of functions $\{\Psi \}$ \cite{bs},
and the $A$ matrix is positive definite,
$ <\Psi | A | \Psi > > 0 $.
A criterion for instability is the existence of an imaginary
frequency mode in (\ref{schr})
\begin{equation}
\Omega ^2 < 0.
\label{inst}
\end{equation}
This is usually difficult to solve analytically
in realistic models,
and numerical calculations are then required \cite{stab}.
A less informative method which admits analytic treatment
has been proposed recently in ref. \cite{bs,volkov},
and we shall follow this
for the purposes of the present work.
The method consists of a variational approach
which makes use of the following functional
defined through (\ref{schr}):
\begin{equation}
\Omega ^2 (\Psi ) = \frac{ <\Psi | {\cal H } | \Psi >}
{<\Psi | A | \Psi >}
\label{funct}
\end{equation}
with $\Psi $ a {\it trial} function.
The lowest eigenvalue is known to provide a
{\it lower} bound for this functional.
Thus,
the criterion of instability, which is equivalent to
(\ref{inst}), in this approach
reads
\begin{eqnarray}
\Omega ^2 ( \Psi ) &<& 0
\nonumber \\
<\Psi | A | \Psi > &<& \infty .
\label{trueinst}
\end{eqnarray}
The first of the above
conditions implies that the operator
${\cal H }$ is not positive definite, and therefore
negative eigenvalues do exist.
The second condition, on the {\it finiteness}
of the expectation value of the operator $A$,
is required to ensure that $\Psi$ lies in the
Hilbert space containing the domain of ${\cal H}$.
In certain cases, and in particular
in the black hole case,
divergences due to the singular behaviour of modes at the horizon
could spoil the conditions
(\ref{trueinst}).
The advantage of the above variational method
lies in the fact that it is an easier task
to choose appropriate trial functions $\Psi $
that satisfy (\ref{trueinst}) than solving the
original eigenvalue problem (\ref{schr}).
In what follows we shall apply this second method
to the black hole solution of ref. \cite{greene}.
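Before doing so, we note that the logic of (\ref{funct}) and (\ref{trueinst}) can be checked on a finite-dimensional toy problem, with invented random matrices standing in for ${\cal H}$ and $A$: a single trial vector with negative Rayleigh quotient certifies a negative generalised eigenvalue.

```python
import numpy as np
from scipy.linalg import eigh

# Toy discretisation: a symmetric, generally indefinite H and a
# positive-definite A, as in the text (<Psi|A|Psi> > 0).
rng = np.random.default_rng(0)
n = 40
M = rng.standard_normal((n, n))
H = 0.5 * (M + M.T)                 # symmetric, indefinite in general
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)         # symmetric positive definite

# Generalised eigenvalue problem  H Psi = Omega^2 A Psi
omega2 = eigh(H, A, eigvals_only=True)   # ascending order

def functional(psi):
    # Omega^2(Psi) = <Psi|H|Psi> / <Psi|A|Psi>
    return (psi @ H @ psi) / (psi @ A @ psi)

# The lowest eigenvalue is a lower bound for the functional, so any
# trial Psi with a negative value certifies a mode with Omega^2 < 0.
psi_trial = rng.standard_normal(n)
assert omega2[0] <= functional(psi_trial) + 1e-9
if functional(psi_trial) < 0:
    assert omega2[0] < 0
```

This is only a sketch of the variational criterion; the explicit operators for the EYMH system are given below.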
\paragraph{}
For completeness,
we first review basic formulas
of the spherically symmetric black hole solutions
of the EYMH system \cite{greene}.
The space-time metric
takes the form \cite{greene}
\begin{equation}
ds^2 =-N(t,r) S^{2}(t,r) dt^2 + N^{-1} dr^2 +
r^2 (d\theta ^2 + \sin^2 \theta d\varphi ^2)
\label{one}
\end{equation}
and we assume the following ansatz for
the non-Abelian gauge potential \cite{greene,bs}
\begin{equation}
A = a_0 \tau _r dt + a_1 \tau _r dr + (\omega +1 )
[- \tau _\varphi d\theta + \tau _\theta \sin \theta d\varphi ]
+ {\tilde \omega} [ \tau _\theta d\theta + \tau _\varphi \sin \theta d\varphi ]
\label{two}
\end{equation}
where $\omega, {\tilde \omega}$ and $a_i, i = 0,1 $
are functions of $t, r$. The $\tau _i$ are appropriately normalised
spherical
generators of the SU(2) group in the notation
of ref. \cite{bs}.
\paragraph{}
The Higgs doublet assumes the form
\begin{equation}
{\tilde \Phi} \equiv \frac{1}{\sqrt{2}} \left( \begin{array}{c}
\nonumber \psi _2 + i \psi _1 \\
\phi - i \psi _3 \end{array}\right)
\qquad ; \qquad {\mbox {\boldmath $ \psi $}}
= \psi {\mbox {\boldmath ${\hat r}$}}
\label{three}
\end{equation}
with the Higgs potential
\begin{equation}
V({\tilde \Phi } )=\frac{\lambda }{4} ( {\tilde \Phi} ^{\dag}
{\tilde \Phi } - { v}^2)^2
\label{four}
\end{equation}
where ${ v}$ denotes the v.e.v. of ${\tilde \Phi } $ in the non-trivial
vacuum.
\paragraph{}
The quantities $\omega, \phi$ satisfy the static field
equations
\begin{eqnarray}
N \omega '' + \frac{(N S)'}{S} \omega ' & =& \frac{1}{r^2}
(\omega ^2 - 1) \omega + \frac{\phi ^2}{4}
(\omega + 1) \nonumber \\
N \phi '' + \frac{(N S)'}{ S} \phi ' + \frac{2N}{r} \phi '
&=& \frac{1}{2r^2} \phi (\omega +1 )^2 + \lambda \phi
(\phi ^2 - {v}^2 )
\label{five}
\end{eqnarray}
where the prime denotes differentiation with respect
to $r$. For later use, we also mention that
a dot
will denote
differentiation
with respect to $t$.
\paragraph{}
If we choose a gauge in which $\delta a_{0} =0$,
the linearised perturbation equations decouple into two sectors
\cite{bs} . The first consists of the gravitational modes
$\delta N$, $\delta S$, $\delta \omega$ and
$\delta \phi$ and the second of the matter perturbations
$\delta a_{1}$, $\delta {\tilde {\omega }}$
and $\delta \psi $.
For our analysis in this section it will be sufficient
to concentrate on the matter perturbations, setting the
gravitational perturbations $\delta N$ and $\delta S$
to zero, because an instability will show up in
this sector of the theory. An instability study in the gravitational
sector will be discussed in the following section 4.
The equations for
the linearised matter perturbations
take the form \cite{bs}
\begin{equation}
{\cal H} \Psi + A {\ddot \Psi } = 0
\label{six}
\end{equation}
with,
\begin{equation}
\Psi = \left( \begin{array}{c}
\nonumber \delta a_1 \\
\nonumber \delta {\tilde \omega} \\
\nonumber \delta \psi \end{array}\right)
\label{seven}
\end{equation}
and,
\begin{equation}
A = \left( \begin{array}{ccc}
Nr^2 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & r^2 \\
\end{array}\right)
\label{eight}
\end{equation}
and the components of ${\cal H}$ are
\begin{eqnarray}
{\cal H}_{a_1a_1} &=& 2 (N S)^2 \left( \omega ^2
+ \frac{r^2}{8} \phi ^2 \right) \nonumber \\
{\cal H}_{{\tilde \omega} {\tilde \omega}} &=&
2 p_* ^2 + 2NS^2 \left( \frac{\omega ^2 -1}{r^2} + \frac{\phi ^2}{4}
\right)
\nonumber \\
{\cal H}_{\psi\psi} &=& 2 p_*\frac{r^2}{2} p_* +
2 NS^2 \left( \frac{(-\omega + 1)^2}{4} + \frac{r^2 }{2}\lambda
(\phi ^2 - {v}^2) \right) \nonumber \\
{\cal H}_{a_1{\tilde \omega}} &=& - 2i N S [ (p_* \omega) - \omega p_* ]
\nonumber \\
{\cal H}_{{\tilde \omega} a_1} &=&
-2i [ p_* N S \omega + N S (p_* \omega) ]
\\
{\cal H}_{a_1 \psi } &=& \frac{i r^2}{2} N S [(p_* \phi) - \phi p_* ]
\nonumber \\
{\cal H}_{\psi a_1} &=& i p_* \frac{r^2}{2}
NS \phi + i\frac{r^2}{2} NS (p_* \phi )
\nonumber \\
{\cal H}_{{\tilde \omega}\psi} &=& {\cal H}_{\psi {\tilde
\omega}} = -\phi N S^2 \nonumber
\label{nine}
\end{eqnarray}
where the operator $p_*$ is
\begin{equation}
p_* \equiv - i NS \frac{d}{dr}.
\label{ten}
\end{equation}
Upon specifying the time-dependence
(\ref{linear})
\begin{equation}
\Psi (r, t) = \Psi (r) e^{i \sigma t} \qquad ;
\qquad
\Psi (r) = \left( \begin{array}{c}
\nonumber \delta a_1 (r) \\
\nonumber \delta {\tilde \omega} (r) \\
\nonumber \delta \psi (r) \end{array}\right)
\label{linar2}
\end{equation}
one arrives easily at an eigenvalue
problem of the form (\ref{schr}), which can then be
extended to the variational approach (\ref{trueinst}).
\paragraph{}
To this end, we choose
as trial perturbations the following expressions
(c.f. \cite{bs})
\begin{eqnarray}
\nonumber \delta a_1 &=&- \omega ' Z \\
\nonumber \delta {\tilde \omega} &=& (\omega ^2 - 1) Z \\
\nonumber \delta \psi &=& -\frac{1}{2} \phi (\omega + 1) Z
\label{eleven}
\end{eqnarray}
where $Z$ is a function of $r$ to be determined.
\paragraph{}
One may define the inner product
\begin{equation}
<\Psi | X > \equiv \int _{r_h} ^{\infty} {\overline \Psi } X
\frac{1}{NS} dr
\label{twelve}
\end{equation}
where $r_h$ is the position of the horizon of the black hole.
The operator ${\cal H}$ is then symmetric with respect to
this scalar product.
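A quick finite-difference check, with an invented profile standing in for $NS$, shows why the weight $1/NS$ in (\ref{twelve}) is the natural one: the $NS$ factor of $p_*$ cancels against it, leaving (up to boundary terms) a Hermitian matrix.

```python
import numpy as np

# Finite-difference check that p_* = -i N S d/dr is symmetric under the
# weighted product <psi|chi> = int conj(psi) chi dr / (N S) of eq. (twelve).
n = 400
r = np.linspace(1.2, 8.0, n)          # toy region outside a horizon r_h = 1
h = r[1] - r[0]
NS = 1.0 - 1.0 / r                    # invented stand-in for N(r) S(r) > 0

D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
P = -1j * np.diag(NS) @ D             # discretised p_*
Wgt = np.diag(1.0 / NS)               # weight of the scalar product

# The NS factor of p_* cancels against the 1/NS weight, leaving -i D with
# D antisymmetric, hence a Hermitian matrix (boundary terms excepted).
G = Wgt @ P
assert np.max(np.abs(G - G.conj().T)) < 1e-10
```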
Following ref. \cite{bs}, consider the expectation value
\begin{equation}
<\Psi | A | \Psi > = \int _{r_h}^{\infty}
dr \frac{1}{NS} Z^2 \left[ Nr^2 (\omega ')^2 +
2 (\omega ^2 - 1)^2 +
\frac{r^2}{4} \phi ^2 (\omega + 1)^2 \right]
\label{thirteen}
\end{equation}
which is clearly positive definite for real $Z$.
Its finiteness will be examined later, and depends
on the choice of the function $Z$.
\paragraph{}
Next, we proceed to the evaluation of
the expectation value of the
Hamiltonian ${\cal H}$ (\ref{nine}); after a tedious calculation
one obtains
\begin{eqnarray}
<\Psi | {\cal H} | \Psi > &=& \int _{r_h}^{\infty}
dr S Z^2 \{ - 2 N (\omega ')^2 + 2 P^2 N (\omega ^2 - 1)^2
\nonumber \\ & &
+ \frac{1}{4} P^2 N r^2 \phi ^2 (\omega + 1)^2
- \frac{2}{r^2} (\omega ^2 - 1)^2 - \frac{1}{2}
\phi ^2 (\omega + 1)^2 \} \\
\nonumber & + & {\mbox {boundary terms}}
\label{fourteen}
\end{eqnarray}
where $P \equiv \frac{1}{Z} \frac{d Z}{d r} $.
The boundary terms will be shown to vanish so we omit them in
the expression (\ref{fourteen}). The final result is
\begin{eqnarray}
<\Psi | {\cal H} | \Psi > &= & \int _{r_h}^{\infty}
dr S \left\{ - 2 N (\omega ')^2 - \frac{2}{r^2}
(\omega ^2 - 1)^2 - \frac{1}{2} \phi ^2 ( \omega + 1)^2 \right\}
\nonumber \\
& + & \int _{r_h}^{\infty} dr \left\{
\frac{2}{r^2} (\omega ^2 - 1) ^2 + \phi ^2 (\omega + 1)^2
+ 2 N (\omega ')^2 \right\} S ( 1 - Z^2) \nonumber \\
& + & \int _{r_h}^{\infty} dr S N \left( \frac{d Z}{dr} \right) ^2
\left[ 2 (\omega ^2 -1)^2 + \frac{1}{4} r^2 \phi ^2 (\omega + 1)^2
\right] .
\label{fifteen}
\end{eqnarray}
\paragraph{}
The first of these terms is manifestly negative.
To examine the remaining two, we introduce the
`tortoise' co-ordinate $r^*$ defined by \cite{volkov}
\begin{equation}
\frac{d r^*}{dr} = \frac{1}{N S}
\label{tortoise}
\end{equation}
and define a sequence of functions $Z _k ( r^* )$ by
\cite{volkov}
\begin{equation}
Z_k ( r^* ) = Z\left( \frac{r^*}{k}
\right) \qquad ; \qquad k =1,2, \ldots
\label{seventeen}
\end{equation}
where
\begin{eqnarray}
Z ( r^* ) &=& Z ( -r^* ), \nonumber \\
Z ( r^* ) &=& 1 \qquad
{\mbox {for $ r^* \in [ 0, a] $}} \nonumber \\
- D \le& \frac{d Z}{d r^*} & < 0, \qquad
{\mbox { for $r^* \in [a, a + 1 ]$}}
\nonumber \\
Z( r^* ) &=& 0 \qquad
{\mbox {for $ r^* > a + 1 $}}
\label{eighteen}
\end{eqnarray}
where $a$, $D$ are arbitrary positive constants.
Then, for each value of $k$ the expectation values
of ${\cal H }$ and $A$ are finite,
$ <\Psi | {\cal H } | \Psi > < \infty $,
and $<\Psi |A| \Psi > < \infty $,
with $Z = Z_k $, and all boundary terms vanish. This
justifies {\it a posteriori} their being dropped in
eq. (\ref{fourteen}). The integrands in the second and third terms
of eq. (\ref{fifteen}) are uniformly convergent
and tend to zero as $ k \rightarrow \infty $. Hence, choosing
$k $ sufficiently large the dominant contribution
in (\ref{fifteen}) comes from the first term which is negative.
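The way the last two integrals in (\ref{fifteen}) die away as $k \rightarrow \infty$ can be illustrated numerically. In the following sketch the true integrands are replaced by an invented integrable weight, and the fixed negative first term by a constant; only the cutoff family $Z_k$ of (\ref{seventeen}), (\ref{eighteen}) is taken over literally (with $a = D = 1$).

```python
import numpy as np

# Cutoff family of eqs. (seventeen)/(eighteen) with a = D = 1:
# Z even, Z = 1 on [0, a], linear ramp to 0 on [a, a+1], Z = 0 beyond.
a = 1.0

def Z(x):
    return np.clip(a + 1.0 - np.abs(x), 0.0, 1.0)

def Zk(x, k):
    return Z(x / k)

rs = np.linspace(-300.0, 300.0, 300001)   # tortoise-coordinate grid
dr = rs[1] - rs[0]
W = np.exp(-np.abs(rs))    # invented integrable weight standing in for
                           # the (positive) integrands of the last two terms
first = -2.0               # stand-in for the fixed negative first term

def remainder(k):
    dZ = np.gradient(Zk(rs, k), dr)
    third = np.sum(dZ**2) * dr                     # scales like 2 D^2 / k
    second = np.sum(W * (1.0 - Zk(rs, k)**2)) * dr # support pushed to |r*|>ka
    return second + third

# The k-dependent terms shrink as k grows ...
assert remainder(64) < remainder(4) < remainder(1)
# ... so for large k the total is dominated by the negative first term.
assert first + remainder(64) < 0
```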
\paragraph{}
This confirms the existence of bound states in the
Schr\"odinger equation (\ref{six}), (\ref{schr}),
and thereby the
instability (\ref{trueinst})
of the associated black hole solution of ref. \cite{greene}
in the coupled EYMH system.
The above analysis reveals the existence of at least one
negative {\it odd-parity} eigenmode
in the spectrum
of the
EYMH black hole.
\paragraph{}
\subsection{Counting sphaleron-like unstable modes in the EYMH
system}
\paragraph{}
The exact number of such negative modes is an interesting
question and we next proceed to investigate it.
Recently, a method for determining the number of the
sphaleron-like unstable modes
has been applied
by Volkov et al. \cite{eigenmodes}
to the gravitating sphaleron case. We have been able
to extend it to the present EYMH black hole.
The method consists of mapping the system
of linearized perturbations to a system of coupled
Schr\"odinger-like equations. Counting of unstable
modes is then equivalent to counting bound states
of the quantum-mechanical analogue system.
Since the EYMH black hole solution
is not known analytically, but only
numerically, it will be necessary to make
certain physically plausible assumptions
concerning
analyticity requirements~\cite{simon} for the
solutions of the analogue system. This is equivalent
to requiring that
the conditions for the
validity of perturbation theory
in ordinary quantum mechanics
apply to this problem.
Details are described below.
\paragraph{}
Working in the gauge $\delta a_0 = 0$, and denoting the
derivative with respect to the tortoise coordinate (\ref{tortoise})
by a prime,
we can write
the linearised perturbations in the sphaleron sector of the EYMH system
as~\cite{bs}:
\begin{eqnarray}
2N^2 S^2 \left( \omega ^2 + \frac{r^2}{8}\phi ^2 \right) \delta a_1
+ 2 NS (\omega
\delta {\tilde \omega }' - \omega ' \delta {\tilde \omega } ) & & \nonumber \\
+ \frac{1}{2} r^2NS (\phi ' \delta \psi - \phi \delta \psi ' ) &=&
Nr^2 \sigma ^2 \delta a_1 \nonumber \\
2(N S \omega \delta a_1 )' + 2 N S \omega ' \delta a_1 +
\delta {\tilde \omega } '' + N S^2 \phi \delta \psi & & \nonumber \\
-\frac{2}{r^2} S^2 \left( (\omega ^2 - 1)
+ \frac{\phi ^2}{4}\right) \delta {\tilde \omega}
&=& -2\sigma ^2 \delta {\tilde \omega } \nonumber \\
\frac{1}{2} (NSr^2 \phi \delta a_1 )' + \frac{1}{2}r^2 N S \phi ' \delta a_1
- N S^2 \phi \delta {\tilde \omega } - (r^2 \delta \psi ')' & & \nonumber \\
+ 2 N S^2 \left( \frac{(1-\omega)^2}{4} + \frac{1}{2}r^2 \lambda
(\phi ^2 - { v}^2 ) \right) \delta \psi &=& r^2 \sigma^2 \delta \psi
\label{3-1}
\end{eqnarray}
together with the Gauss constraint
\begin{equation}
\sigma ^2 \left\{ \left( \frac{r^2}{S} \delta a_1 \right)'
+ 2 \omega \delta {\tilde \omega}
-r^2 \frac{\phi}{2} \delta \psi \right\} = 0.
\label{3-2}
\end{equation}
Define $\delta \xi = r \delta \psi $, $\delta \alpha = \frac{r^2}{2S}
\delta a_1 $. Then
\begin{equation}
\sigma ^2 \delta \alpha = f(r^*)
\label{3-3}
\end{equation}
where
\begin{equation}
f(r^*) = NS\omega ^2 \delta a_1 + \frac{N}{8}Sr^2\phi ^2 \delta a_1
- \omega ' \delta {\tilde \omega} + \omega \delta {\tilde \omega }'
+ \frac{N}{4} S \phi \delta \xi + \frac{r}{4}
(\phi ' \delta \xi - \phi \delta \xi ')
\label{3-4}
\end{equation}
and
\begin{equation}
f'(r^*) = \sigma ^2 \left( -\omega \delta {\tilde \omega } + \frac{r}{4}
\phi \delta \xi \right)
\label{3-5}
\end{equation}
and the Gauss constraint becomes
\begin{equation}
\sigma \left( \delta \alpha ' + \omega \delta {\tilde \omega }
- r \frac{\phi}{4} \delta \xi \right) = 0.
\label{3-6}
\end{equation}
Next, we define a `strong' Gauss constraint by
\begin{equation}
\delta \alpha ' = -\omega \delta {\tilde \omega }
+ \frac{r}{4}\phi \delta \xi
\label{3-8}
\end{equation}
even when $\sigma = 0$.
\paragraph{}
Using (\ref{3-4}) and (\ref{3-8}) we may write
\begin{eqnarray}
\delta {\tilde \omega } &=& \frac{r^2}{P}\phi ^2
\left( \frac{\delta \alpha '}{r^2\phi ^2 }\right)'
- \frac{Q}{P} \delta \alpha \nonumber \\
\delta \xi &=& \frac{4 \omega ^2}{Pr\phi}
\left( \frac{\delta \alpha '}{\omega ^2} \right)'
- \frac{4Q\omega}{Pr\phi} \delta \alpha
\label{3-9}
\end{eqnarray}
where
\begin{eqnarray}
P (r^*) &=& - 2\omega ' + 2 \omega \frac{\phi '}{\phi} + 2 \omega
\frac {N S}{r} \nonumber \\
Q(r^*) &=& 2\frac{NS^2}{r^2} \omega ^2 + \frac{1}{4} NS^2 \phi ^2
- \sigma ^2 .
\label{definPQ}
\end{eqnarray}
If we substitute these expressions into the equation for $\delta {\tilde
\omega}$ we obtain the following equation for $\delta \alpha $:
\begin{eqnarray}
-\delta \alpha ^{{\rm (iv)}} + \left( \frac{2P'}{P} + HP \right)
\delta \alpha ''' & & \nonumber \\
+ \left\{ \frac{P''}{P} - \frac{2P'^2}{P^2} + 2H'P + Q -\sigma ^2 +
NS^2J
\right\} \delta
\alpha '' + & & \nonumber \\
\left\{ H''P + 2P \left( \frac{Q}{P} \right) ' + \sigma ^2 HP +
\right. & & \nonumber \\ \left.
NS^2
\left( -\frac{2\omega P}{r^2}
- \frac{H}{r^2} (\omega^2 - 1) P - \frac{H}{4}\phi ^2 P + \frac{4}{r^2}
\omega ' \right) \right\} \delta \alpha ' + & & \nonumber \\
\left\{ P \left( \frac{Q}{P} \right)'' - 2\omega P
\left( \frac{NS^2}{r^2} \right) ' - \frac{4PNS^2}{r^2}
\omega '+ \sigma ^2 Q - NS^2QJ \right\} \delta \alpha & =& 0
\label{3-10}
\end{eqnarray}
where
\begin{eqnarray}
H &\equiv & \frac{2NS}{r P} + \frac{2 \phi '}{\phi P} \nonumber \\
J &\equiv & -\frac{2\omega}{r^2} + \frac{\omega ^2 - 1}{r^2}
+ \frac{\phi^2}{4}.
\label{3-11}
\end{eqnarray}
Alternatively, we can eliminate $\delta \xi $ to obtain
the following pair of coupled Schr\"odinger
equations~\cite{brodbeck,eigenmodes}:
\begin{eqnarray}
\sigma ^2 \delta \alpha &=& - \delta \alpha '' + \left( \frac{2\phi '}{\phi }
+ \frac{2NS}{r} \right) \delta \alpha ' + \left( \frac{2NS^2}{r^2} \omega ^2
+ \frac{N}{4}S^2\phi ^2 \right) \delta \alpha
+ P\delta {\tilde \omega} \nonumber \\
\sigma ^2 \delta {\tilde \omega } &=& -\delta {\tilde \omega}''
-\frac{2N}{r^2}S^2 (1 + \omega ) \delta \alpha '
- \left\{ \frac{4\omega ' NS^2}{r^2} + 2\omega (\frac{NS^2}{r^2})' \right\}
\delta \alpha \nonumber \\
& &
+ \frac{NS^2}{r^2} \left\{ (\omega - 1)^2
+ \frac{r^2 \phi ^2}{4} \right\}
\delta {\tilde \omega}.
\label{3-12}
\end{eqnarray}
To proceed, it appears necessary to make the following assumptions:
\begin{itemize}
\item
We assume that the equilibrium solutions are continuous functions of the
Higgs mass ${ v}$.
\item
We also assume that given a Schr\"odinger-like equation
$ - \Psi '' + V \Psi = E \Psi $, where the potential $V$
depends continuously on some parameter ${ v}$,
then the bound state energies also depend continuously on ${ v}$.
This can be proven rigorously if
we make the physically plausible assumption of analyticity
of the operators involved in the above system. The proof
then relies on
the powerful
Kato-Rellich theorem of analytic operators~\cite{simon}.
This analyticity requirement is
the case if perturbation theory is used to solve
the Schr\"odinger system with potential $V({ v} + \delta {
v})$ in terms of the spectrum of $V({ v})$, implying that
the changes in the bound state energies $\delta E$ due to
the infinitesimal shift in the parameter ${ v}$
are also infinitesimal. It is also the case where
{\it variational} methods are applicable.
This assumption
implies that the eigenvalues of the discrete spectrum
of the above equations (\ref{3-12}), i.e.
the bound-state energies for $\sigma ^2 < 0 $, are also continuous
as ${ v}$ varies continuously.
\end{itemize}
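The second assumption can at least be illustrated in a toy model: discretising $-\Psi'' + V\Psi = E\Psi$ with a potential depending smoothly on a parameter (an invented stand-in for ${v}$), the bound-state energy is found to shift only by $O(\delta { v})$.

```python
import numpy as np

# Finite-difference  -psi'' + V psi = E psi ; V depends smoothly on a
# parameter v (an invented stand-in for the Higgs v.e.v. of the text).
x = np.linspace(-15.0, 15.0, 1200)
h = x[1] - x[0]
off = -np.ones(len(x) - 1) / h**2

def ground_state_energy(v):
    V = -2.0 * (1.0 + v) / np.cosh(x) ** 2     # toy well, depth varies with v
    H = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

# The bound-state energy responds continuously to the parameter:
E0 = ground_state_energy(0.3)
for dv in [1e-2, 1e-3, 1e-4]:
    assert abs(ground_state_energy(0.3 + dv) - E0) < 10.0 * dv
```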
\paragraph{}
The above assumptions had to be made because in the case of
the EYMH black hole system, the solution is not known
analytically but only numerically, and therefore the issue of analyticity
of the various operators involved with respect to
the Higgs field v.e.v., $v$, cannot be rigorously established.
\paragraph{}
We now notice that
the continuous spectrum of the equations (\ref{3-12})
is given by $\sigma ^2 > 0$.
Hence, the number of negative modes will change by one
whenever a mode is either absorbed into the continuum or emerges from it.
\paragraph{}
For $\sigma ^2 = 0$ the equations possess pure ``gauge mode'' solutions of the
form
\begin{equation}
\delta \alpha = \frac{r^2 \Omega '}{2NS^2}, \qquad
\delta {\tilde \omega } = -\omega \Omega, \qquad \delta \xi
= \frac{r \phi }{2} \Omega
\label{3-13}
\end{equation}
where
\begin{equation}
\left( \frac{r^2 \Omega '}{2NS^2}\right) '= \left( \omega ^2
+ \frac{r^2}{8}\phi ^2 \right) \Omega.
\label{3-14}
\end{equation}
\paragraph{}
Thus, near the event horizon, $\Omega \sim O[(r-r_h)^k] $, where
$k=0$ or $1$, and at infinity,
$\Omega \sim e^{\pm \frac{{ v}}{2} r}$
upon choosing $S(\infty ) =1$.
Hence, there is a single non-degenerate, non-normalisable
eigenmode with $\sigma ^2 = 0$.
\paragraph{}
For the fourth-order equation with $\sigma ^2 =0$, $\delta \alpha
\sim O[(r-r_h)^k]$ near the horizon, where $k =0$ (twice, corresponding
to the pure ``gauge mode'' solutions) or $k=\frac{-1 \pm \sqrt{5}}{2}$.
\paragraph{}
In the latter case,
$\delta {\tilde \omega}, \delta \xi \sim O[(r-r_h)^{k-1}]$, and so
will not remain bounded near the horizon. Hence there is a single
non-degenerate zero mode for non-zero ${ v}$.
\paragraph{}
{}From the above it follows that the number of negative modes
of the system (\ref{3-12}) cannot change at any non-zero
value of ${ v}$,
including the bifurcation point ${ v}_{max}$,
by continuity. The negative eigenvalues of this system
are non-degenerate and hence cannot themselves bifurcate at some value
of ${ v}$.
\paragraph{}
The only possible place where the number
of negative modes may change is at ${ v} = 0$.
Let $\phi = { v} {\tilde \phi } , \delta \xi = { v} \delta {\tilde
\xi }$. Then the system of equations (\ref{3-12}) becomes
\begin{eqnarray}
-\delta \alpha '' + \left( \frac{2{\tilde \phi } '}{{\tilde \phi }} +
\frac{2NS}{r} \right) \delta \alpha '
+ \left( \frac{2NS^2}{r^2}\omega ^2
+ \frac{{ v}^2}{4}NS^2{\tilde \phi}^2 \right) \delta
\alpha & & \nonumber \\
+ \left( -2\omega ' + 2\omega \frac{{\tilde \phi}'}{{\tilde \phi}}
+ 2\omega \frac{NS}{r} \right) \delta {\tilde \omega}
& = & \sigma ^2 \delta \alpha
\nonumber \\
-\delta {\tilde \omega} '' - 2\frac{NS^2}{r^2}
(1 + \omega) \delta \alpha ' + \frac{NS^2}{r^2}
\left[ (\omega -1)^2 + { v}^2
\frac{r^2{\tilde \phi}^2}{4}\right]
\delta {\tilde \omega} & & \nonumber \\
- \left[ 4 \omega ' \frac{NS^2}{r^2}
+ 2\omega \left( \frac{NS^2}{r^2}\right) ' \right] \delta \alpha
&= & \sigma ^{2} \delta {\tilde \omega}
\label{3-15}
\end{eqnarray}
together with the Gauss constraint
$\delta \alpha ' = -\omega \delta {\tilde \omega} + \frac{1}{4}
{ v}^2 r {\tilde \phi} \delta {\tilde \xi }$.
\paragraph{}
Consider the $k=n$ branch of the EYMH solutions. In this branch,
$\frac{{\tilde \phi }'}{{\tilde \phi}}$ has a well-defined limit as
${ v} \rightarrow 0$, and the system (\ref{3-12})
is continuous at ${ v} =0$. In this case,
the Gauss constraint reduces to
$\delta \alpha ' = -\omega \delta {\tilde \omega } $ , and substituting
in (\ref{3-15}) yields the equation
\begin{equation}
-\delta \alpha '' + \frac{2}{\omega} \omega ' \delta \alpha '
+ \frac{2NS^2}{r^2}\omega ^2 \delta \alpha =
\sigma ^2 \delta \alpha .
\label{3-16}
\end{equation}
This is the equation studied in ref. \cite{eigenmodes}
where it was shown that there are exactly $n$ negative eigenvalues.
Furthermore, there is a single non-degenerate zero mode given by
\begin{equation}
\delta \alpha = \frac{r^2 \Omega '} {2NS^2}
\label{3-17}
\end{equation}
where
\begin{equation}
\left( \frac{r^2 \Omega '}{2NS^2}\right) '=\omega ^2 \Omega .
\end{equation}
\paragraph{}
As before, near the horizon
$\Omega \sim O[(r-r_h)] $ or $\Omega \sim O[1]$
whilst at infinity $\Omega \sim r$ or $r^{\frac{1}{2}}$, giving a single
eigenmode with zero eigenvalue.
Thus, the number of negative eigenvalues
does not change at ${ v}=0$
for this branch of solutions.
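The counting of unstable modes by direct diagonalisation can be mimicked in a solvable toy model: the P\"oschl--Teller wells $V = -l(l+1)\,{\rm sech}^2 x$ are known to bind exactly $l$ states, in the same spirit as the $n$ unstable modes of the $k=n$ branch; the grid and wells below are of course invented stand-ins for the EYMH operators.

```python
import numpy as np

# Count negative modes of -psi'' + V psi by direct diagonalisation.
# Poschl-Teller wells V = -l(l+1) sech^2(x) bind exactly l states.
x = np.linspace(-15.0, 15.0, 1500)
h = x[1] - x[0]
off = -np.ones(len(x) - 1) / h**2

def negative_modes(l):
    V = -l * (l + 1) / np.cosh(x) ** 2
    H = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    return int(np.sum(np.linalg.eigvalsh(H) < -1e-6))

# One unstable mode per unit of 'winding' l:
assert [negative_modes(l) for l in (1, 2, 3)] == [1, 2, 3]
```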
\paragraph{}
For the quasi $k=n-1$ branch of solutions
$\frac{{\tilde \phi}'}{{\tilde \phi}}$ does not
have a well-defined limit as ${ v} \rightarrow 0$, and the
system (\ref{3-12}) is not continuous at ${ v}=0$.
Hence, by continuity, we can conclude that both the $k=n$ and the
quasi $k=n-1$ black holes have {\it exactly} $n$ unstable modes
in the {\it sphaleron} sector.
\section{Instabilities in the Gravitational Sector -
\newline Catastrophe theory approach}
\paragraph{}
It is the aim of this section to prove the
existence and
count the
exact number of unstable modes in the gravitational
sector of the solutions.
In the first part we shall study the conditions
for the existence of unstable modes in a linearized
framework, and we shall study the possibility of a change
in the stability of the system as one varies the Higgs v.e.v.
$v$ continuously from $0$ up to the bifurcation point (c.f. figure 2).
In the second part, which will deal with the change of the stability
of the system at the bifurcation point, we shall go beyond linearized
perturbations by applying catastrophe theory.
It should be stressed that although catastrophe theory was
first employed
by the authors of \cite{torii}, our approach in this section is
somewhat different, and has certain advantages, not least of which
is that we are able to exploit the known stability of the
Schwarzschild black hole to draw conclusions about the non-trivial
EYMH black holes.
\subsection{Linearized perturbations}
\paragraph{}
The linearised perturbation equations for the gravitational
sector are:
\begin{eqnarray}
-\delta \omega '' + U_{\omega\omega} \delta \omega
+ U_{\omega\phi} \delta \phi &=& \sigma ^2 \delta \omega \nonumber \\
-\delta \phi '' - \frac{2NS}{r} \delta \phi ' +
U_{\phi\omega} \delta \omega + U_{\phi\phi} \delta \phi &=& \sigma ^2 \delta
\phi
\label{4-1}
\end{eqnarray}
where the prime denotes differentiation with respect to the
tortoise coordinate (\ref{tortoise}) as before, and
\begin{eqnarray}
U_{\omega\omega} &=& \frac{NS^2}{r^2} \left[
3\omega ^2 - 1 + \frac{1}{4}r^2\phi^2
- 4r^2 \omega '^2 \left( \frac{N}{r} + \frac{(NS)'}{S}\right) +
\frac{\omega(\omega ^2 -1)}{r} \omega '
+ 2 r \omega ' \phi ^2 \right]
\nonumber \\
U_{\omega\phi} &=& \frac{NS^2}{r^2}
\left[ \frac{1}{2}(1 + \omega )
\phi r^2 - 2\phi ' \omega ' r^3 \left( \frac{N}{r} + \frac{(NS)'}{S}\right)
+ 2r \phi ' \omega (\omega ^2 - 1) \right. \nonumber \\
& & \left.
+ \phi \omega '
(1 + \omega )^2 r + \frac{1}{2} \phi ' \phi ^2 (1 + \omega )
+ 2\lambda r^3 \phi \omega ' (\phi ^2 - { v}^2) \right]
\nonumber \\
U_{\phi\omega} &=& \frac{2}{r^2}U_{\omega\phi} \nonumber \\
U_{\phi\phi} &=& \frac{NS^2}{r^2} \left[ \frac{1}{2} (1 + \omega )^2 +
\lambda r^2 (3\phi ^2 - { v}^2)
-2r^3 (\phi ')^2 \left( \frac{N}{r} + \frac{(NS)'}{S} \right)
\right. \nonumber \\ & & \left.
+ 2\phi ' \phi (1 + \omega )^2 r + 4 \lambda \phi \phi ' r^3
(\phi ^2 - { v}^2) \right].
\label{4-2}
\end{eqnarray}
The continuity argument described previously, when applied here, implies
that the number of negative eigenvalues of the system (\ref{4-1},\ref{4-2})
can change only when there is a zero mode.
\paragraph{}
Suppose that for some ${ v} \ne 0$ there is such a zero mode of
(\ref{4-1}). Given the background solution ($\omega, \phi $)
of the field equations, then $(\omega + \delta \omega, \phi + \delta \phi )$,
together with the corresponding metric functions, will also be a solution
of the field equations.
\paragraph{}
As $r \rightarrow \infty $ , then
\begin{equation}
\left(
\begin{array}{c}
\delta \omega \\
\delta \phi \end{array} \right)
\sim O[e^{-{ v} r^*}] \qquad r^* \rightarrow +\infty
\label{arr1}
\end{equation}
and
\begin{equation}
\left(
\begin{array}{c}
\delta \omega \\
\delta \phi \end{array} \right) \sim {\rm const} \qquad
r^* \rightarrow -\infty .
\label{ar1}
\end{equation}
Also, since in this sector the change
in the mass function ${m(r)}$ is given by (see Appendix A)
\begin{equation}
\delta {m(r)} =2N \frac {d\omega }{dr}
\delta \omega (r) + r^2 N \frac {d\phi }{dr} \delta \phi
(r)
\label{4-3}
\end{equation}
we have that for this zero mode $\delta {m} \rightarrow 0$
as $r^* \rightarrow \pm \infty $.
Hence, for {\it fixed} parameters $r_h, \lambda, { v}, g$ there
are {\it two solutions} of the original field equations
$\left( \begin{array}{c} \omega \\ \phi \end{array} \right) $,
and
$\left( \begin{array}{c} \omega + \delta \omega \\ \phi
+ \delta \phi \end{array} \right)$,
satisfying the required boundary
conditions. For our purposes we shall {\it assume} that the solution
of ref. \cite{greene} is {\it unique}. Compatibility with the above analysis
then requires that such a zero mode can occur
only at the {\it bifurcation point} (c.f. figure 2).
Hence, there is a single zero mode at ${ v}={ v}_{max}$. For any
other ${ v}$ the zero mode is absent.
\paragraph{}
Let $\delta \phi \equiv { v}\delta {\tilde \phi }$,
$\phi \equiv { v} {\tilde \phi }$. The equations (\ref{4-1}) then
become
\begin{eqnarray}
-\delta \omega '' + U_{\omega\omega} \delta \omega + { v}^2 {\tilde
U}_{\omega\phi} \delta {\tilde \phi} &=& \sigma ^2 \delta \omega \nonumber \\
-\delta {\tilde \phi }'' - \frac{2NS}{r} \delta {\tilde \phi }' +
{\tilde U}_{\phi\omega} \delta \omega + U_{\phi\phi} \delta
{\tilde \phi} &=& \sigma ^2 \delta {\tilde \phi }
\label{4-4}
\end{eqnarray}
where $U_{\omega\phi} \equiv { v} {\tilde U}_{\omega\phi},
{\tilde U}_{\phi\omega} \equiv { v} {\tilde U}_{\phi\omega} $.
\paragraph{}
These equations have a well-defined limit as ${ v} \rightarrow 0$,
and are continuous at ${ v} =0$. At ${ v} =0$
the equations reduce to
\begin{equation}
-\delta \omega '' + U_{\omega\omega} \delta \omega =
\sigma ^2 \delta \omega
\label{4-5}
\end{equation}
where
\begin{equation}
U_{\omega\omega} = \frac{NS^2}{r^2} \left[ 3\omega ^2 - 1
-4r(\omega ')^2 \left( \frac{N}{r} + \frac{(NS)'}{S} \right) +
\frac{\omega (\omega ^2 - 1)}{r}
\omega ' \right] .
\label{4-6}
\end{equation}
If this equation possesses a zero mode, then $\delta \omega \rightarrow
{\mbox {const}} $, as $r \rightarrow \infty $ for this mode. As $r \rightarrow
\infty $, $N \rightarrow 1 - \frac{2{m}}{r} + O[e^{-r}]$,
$\omega \rightarrow -1 + O[e^{-r}]$, $S \rightarrow 1 + O[e^{-r}]$, and the
equation takes the form
\begin{equation}
\frac{d^2}{d r^2} (\delta \omega ) = -\frac{2{m}}{r^2}
\left( 1 - \frac{2 {m}}{r}\right) ^{-1} \frac{d}{dr} (\delta \omega)
+ \frac{2}{r^2} \left( 1 - \frac{2{m}}{r}\right) ^{-1} \delta \omega.
\label{4-7}
\end{equation}
Suppose that
$\delta \omega = \sum _{n=0}^{\infty} a_n r^{\rho - n} $, $a_0 \ne 0$
as $r \rightarrow \infty $. Then $\rho = 2$~or~$-1$.
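These indices follow from balancing the leading terms of (\ref{4-7}) at large $r$, where the first-derivative term is subleading: a power ansatz $\delta \omega \sim r^{s}$ (the symbol $s$ is ours) gives the indicial equation $s(s-1)=2$, whose root $s=2$ corresponds to the growing branch used below.

```python
import numpy as np

# Indicial equation from  delta omega'' ~ (2/r^2) delta omega  with
# delta omega ~ r^s :  s(s-1) = 2, i.e.  s^2 - s - 2 = 0.
s = np.roots([1.0, -1.0, -2.0])
assert np.allclose(sorted(s.real), [-1.0, 2.0])
assert np.allclose(s.imag, 0.0)
```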
Let $\delta \omega = r^2 f(r) $. Then, $f(r)$ satisfies the equation
\begin{equation}
f'' + f' \left( 1 - \frac{2{m}}{r} \right) ^{-1} \left( \frac{4}{r}
- \frac{6 {m}}{r^2} \right) = 0.
\label{4-8}
\end{equation}
The solution to this equation as $r \rightarrow \infty$ assumes the form
\begin{equation}
f = \frac{A}{8{\cal M}^3} { \log}\left( \frac{r - 2{\cal M}}{r}
\right)
+ \frac{A}{4{\cal M}^2 r} + \frac{A}{4{\cal M} r^2} + B
\qquad r \rightarrow \infty
\label{4-9}
\end{equation}
with $A$, $B$ arbitrary constants and ${\cal M}=\lim _{r \rightarrow
\infty }m(r)$, so that
\begin{equation}
\delta \omega \rightarrow Br^2 + \frac{A r^2}{8{\cal M}^3}
{\log} \left( \frac{r -
2{\cal M}}{r} \right)
+ \frac{A r}{4{\cal M}^2} + \frac{A}{4 {\cal M}}.
\label{4-10}
\end{equation}
Hence $\delta \omega $ can remain bounded as $r \rightarrow \infty$ only if
$A=B=0$, i.e. only the trivial solution exists.
Thus, there are no non-trivial zero modes for ${ v} =0$.
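As a cross-check of this asymptotic analysis: one integration of (\ref{4-8}) gives $f' = A\,r^{-3}(r-2{\cal M})^{-1}$, and differentiating the asymptotic form of $f=\delta \omega /r^{2}$ read off from (\ref{4-10}) must reproduce it. A symbolic sketch (illustrative only, not part of the original derivation):

```python
import sympy as sp

r, M, A = sp.symbols('r M A', positive=True)

# One integration of (4-8) gives f' = A / (r^3 (r - 2M)).
fprime = A / (r**3 * (r - 2*M))

# Asymptotic f = delta_omega / r^2 read off from (4-10), dropping the
# constant mode B:
f = (A/(8*M**3))*sp.log((r - 2*M)/r) + A/(4*M**2*r) + A/(4*M*r**2)

# Differentiating the claimed form must reproduce f'.
assert sp.simplify(sp.diff(f, r) - fprime) == 0
print("asymptotic form verified")
```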
\paragraph{}
We, therefore, conclude that along each branch of solutions
the number of negative modes remains constant from ${ v}=0$ to
${ v}={ v}_{max}$.
\subsection{Bifurcation points and catastrophe theory}
\paragraph{}
In order to determine what happens at ${ v} = { v}_{max}$
we appeal to catastrophe theory~\cite{torii}.
Our aim is to study the possibility of a change of the stability
of the system at $v_{max}$. To this end, we have to determine
a certain function
({\it catastrophe functional}) in the black hole solution which changes
{\it discontinuously} despite the smooth change of certain
(control) parameters of the system.
As we shall show below, in the case at hand the r\^ole
of the catastrophe functional is played by the mass function
of the black hole, whilst the control parameter is the Higgs v.e.v.
$v$. At the bifurcation point $v_{max}$ we shall find
a {\it fold} catastrophe which affects the relevant
stability of the branches of the solution. In addition, the
catastrophe-theoretic approach allows for an {\it exact}
counting of the unstable modes in the various branches.
For notations and mathematical definitions on catastrophe
theory we refer the interested reader to Appendix B.
\paragraph{}
We should note at this point that although catastrophe theory
seems powerful enough to yield a universal stability study
of all kinds of non-Abelian black holes~\cite{torii},
one should exercise some caution in drawing conclusions
about absolute stability. Indeed, catastrophe theory
gives information about instabilities of certain modes of the
system. If catastrophe theory gives a stable branch of solution,
this does not mean that the
system is completely stable, given that there
may be other instabilities in sectors where catastrophe theory does not
apply. In our EYMH system this is precisely the case with the sphaleron
sector. However, safe conclusions can be reached,
within the framework of catastrophe theory,
regarding {\it relative}
stability of branches of solutions, and it is in this sense that we shall
use it here in order to count the number of unstable modes of the
various branches of the EYMH system.
Having expressed these cautionary remarks we are now ready to proceed
with our catastrophe-theoretic analysis.
\paragraph{}
The mass functional ${\cal M}$ (c.f. appendix A)
can be re-written as a functional of the matter fields only as follows:
\paragraph{}
\noindent First note that ${\cal M}=m(\infty)$. Let
$\mu (r) \equiv m(r) - m(r_h) = m(r) - \frac{r_h}{2}$.
Then, using a prime to denote $d/dr $
\begin{eqnarray}
\mu '(r) = m'(r) &=& \frac{1}{2}
\left[ \left( 1-\frac{2{m}}{r}\right) (2(\omega ')^2
+ r^2 (\phi ')^2 ) \right] \nonumber \\ & &
+ \frac {r^2}{2} \left[ \frac{(1-\omega ^2)^2}{r^4}
+ \frac{\phi ^2}{2r^2} (1 + \omega )^2 +
\frac{\lambda }{2} (\phi ^2 - { v}^2 )^2 \right] \nonumber \\
& = & \frac{1}{2}
\left[ \left(1 - \frac{r_h}{r} \right) (2 (\omega ')^2 + r^2 (\phi ')^2 )
\right] \nonumber \\ & &
+ \frac {r^2 }{2} \left[ \frac{(1-\omega ^2)^2}{r^4} +
\frac{\phi ^2 (1 + \omega )^2 }{2r^2} +
\frac{\lambda}{2}(\phi ^2 - {\tilde v}^2)^2 \right] \nonumber \\
& &
- \frac{\mu}{r} (2 (\omega ')^2 + r^2 (\phi ')^2 ).
\label{4-11}
\end{eqnarray}
The last term on the right-hand side can be written in terms of the
metric function $\delta $ (c.f. appendix A)
\begin{equation}
-\frac{\mu}{r} (2(\omega ')^2 + r^2 (\phi ')^2 ) = \mu \delta '.
\label{4-12}
\end{equation}
Solving for $\mu$ gives
\begin{equation}
\mu (r) =e^{\delta (r)} \int _{r_h} ^r {\cal K} [\omega, \phi ]
e^{-\delta (r')} dr'
\label{4-13}
\end{equation}
where
\begin{eqnarray}
{\cal K}[\omega,\phi] & \equiv & \frac{1}{2} \left[ \left(1
- \frac{r_h}{r}\right)
(2(\omega ')^2 + r^2 (\phi ')^2 ) \right] \nonumber \\
& &
+ \frac {r^2}{2} \left\{ \frac{(1- \omega ^2)^2}{r^4} + \frac{\phi ^2
(1 + \omega )^2 }{2r^2}
+ \frac{\lambda}{2} (\phi ^2 - { v}^2 )^2 \right\}.
\label{4-14}
\end{eqnarray}
Hence, setting $\delta (\infty ) = 0$ we obtain
\begin{equation}
{\cal M} = \frac{r_h}{2} +
\int _{r_h}^\infty {\cal K}[\omega, \phi] e^{-\delta
(r)} dr.
\label{4-15}
\end{equation}
Varying this functional with respect to the matter fields
yields the correct equations of motion~\cite{heuslerstr}.
Thus, the equilibrium solutions of the field equations
will be stationary points of the functional ${\cal M}$.
\paragraph{}
If we plot the solution curve in $({ v}, \delta _0, {\cal M} )$
space, then the resulting curve is smooth (c.f. figure 1).
For the {\it catastrophe theory} analysis (c.f. Appendix B) we consider
$\delta _0$ as a variable, and ${ v}$
as a {\it control } parameter. The Whitney surface
could be defined in our case as follows.
For each ${ v}$, consider a smoothly varying set of functions
$\omega _\delta, \phi _\delta $ indexed by the value of $\delta _0$ they give:
\begin{equation}
\delta _0 = \int _{r_h}^\infty \frac{1}{r} (2 (\omega _\delta ')^2
+ r^2 (\phi _\delta ')^2 ) dr
\label{4-16}
\end{equation}
such that $\omega _\delta , \phi _\delta $ are the appropriate solutions
to the field equations when $\delta _0$ lies on the solution curve.
Then, the solution curve represents the curve of extremal points of
this Whitney surface, $\delta {\cal M} =0$.
\paragraph{}
The projections of this curve onto the $({ v}, \delta _0 )$
and $(\delta _0, {\cal M} )$ planes are also smooth curves.
The catastrophe map $\chi $ projects the solution curve onto the
$({ v}, {\cal M})$ plane
\begin{equation}
\chi~:~({ v},\delta _0, {\cal M} ) \rightarrow
({ v}, {\cal M}).
\label{4-17}
\end{equation}
This yields the curve shown in figure 2. This map is regular except at the
point $({ v}={ v}_{max}, {\cal M}={\cal M}_{max} )$,
where it is singular. This point is the {\it bifurcation set} B.
\paragraph{}
Since the Whitney surface describes a one-parameter (${ v}$)
family of functions of a single variable ($\delta _0$), and the bifurcation
point is a single point, we have a fold catastrophe, as found in ref.
\cite{torii} from a different point of view. A more detailed comparison,
of the results of that reference with ours will be made at the end of the
section.
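As a toy illustration of the fold (not the EYMH functional itself), consider the one-parameter family of potentials $V(x;c) = x^{3}/3 - cx$: two equilibria $x=\pm\sqrt{c}$ exist for $c>0$, differ by exactly one negative mode, merge at $c=0$ and disappear beyond it, mirroring the two branches meeting at ${ v}={ v}_{max}$. A hypothetical numerical sketch:

```python
import numpy as np

# Toy fold catastrophe: V(x; c) = x**3/3 - c*x, with c the control parameter.
# Equilibria solve V'(x) = x**2 - c = 0; the sign of V''(x) = 2x decides
# the stability of each branch.
def equilibria(c):
    if c < 0:
        return []              # past the fold: no equilibria at all
    s = float(np.sqrt(c))
    return [s, -s]             # the two branches; they merge at c = 0

for c in (1.0, 0.25, 0.0, -0.5):
    pts = equilibria(c)
    info = ", ".join(f"x={x:+.2f} (V''={2*x:+.2f})" for x in pts) or "none"
    print(f"c = {c:+.2f}: {info}")
# The x = +sqrt(c) branch has V'' > 0 (stable); the x = -sqrt(c) branch
# carries exactly one negative mode, like the higher-mass branch here.
```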
\paragraph{}
Catastrophe theory tells us that the stability of the system
will change at the point B on the solution curve, and, furthermore,
that the branch of solutions (including the point B) having the higher
mass (for the same value of ${ v}$) will be
{\it unstable}, relative to the other branch. Hence,
from our previous continuity considerations, the $k=n$ branch
of solutions will have {\it exactly one more} negative mode than the
quasi- $k=n-1$ branch.
\paragraph{}
The catastrophe theory analysis applies to the gravitational sector
rather than the sphaleronic sector, since gravitational perturbations
correspond to {\it small} changes in the functions
$\omega $ and $\phi $, whilst keeping the functional form of
${\cal M}$ {\it fixed}. On the contrary, sphaleronic perturbations
keep the functions $\omega$ and $\phi $ fixed, affecting the functional
form of ${\cal M}$. As we discussed in the previous section, the number
of unstable sphaleron modes is the {\it same} for the $k=n$ and
quasi-$k=n-1$ branches.
\paragraph{}
All that remains is to determine the number of
negative modes of the quasi-$k=0$ branch of solutions.
{}From the above considerations, this will be equal to the number
of negative modes of the ${ v}=0$ limiting case of this branch
of solutions, which is nothing other than the Schwarzschild black hole.
The gravitational perturbation equation is in this case,
where a prime now denotes $d/dr^{*}$,
\begin{equation}
-\delta \omega '' + U_{\omega\omega} \delta \omega = \sigma ^2 \delta \omega
\label{4-18}
\end{equation}
where
\begin{equation}
U_{\omega\omega} =\frac{NS^2}{r^2}
\left[ 3\omega ^2 - 1 - 4r(\omega ')^2
\left( \frac{N}{r} + \frac{(NS)'}{S} \right)
+ \frac{\omega (\omega ^2 - 1)}{r}
\delta
\omega ' \right].
\label{4-19}
\end{equation}
For the Schwarzschild solution $N=1 -\frac{r_h}{r}, S=1, \omega = 1$.
Equation (\ref{4-18}), then, reduces to
\begin{equation}
-\delta \omega '' + \frac{2}{r^2} \left( 1 -\frac{r_h}{r}
\right) \delta \omega =
\sigma ^2 \delta \omega
\label{4-20}
\end{equation}
which has the form of a standard one-dimensional Schr\"odinger
equation with potential
\begin{equation}
V(r^*) =\frac{2}{r^2} \left( 1 -\frac{r_h}{r} \right) \qquad
\frac{d}{dr ^* } \equiv \left( 1 - \frac{r_h}{r} \right) \frac {d}{dr}.
\label{4-21}
\end{equation}
As $r^* \rightarrow \pm \infty $, $V(r^*) \rightarrow 0$.
On the other hand, for finite $r^*$, $-\infty < r ^* < \infty $,
the potential is positive definite, $V(r^*) > 0$.
\paragraph{}
Then, by a standard theorem of quantum mechanics~\cite{messiah},
the Schr\"odinger equation (\ref{4-20},\ref{4-21})
has no bound states. Thus, the Schwarzschild solution (and, hence, the
quasi-$k=0$ branch of solutions) has no negative gravitational
modes, a known result in agreement with the no-hair conjecture.
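Indeed, since $V(r^{*})>0$ for all finite $r^{*}$, the operator $-d^{2}/dr^{*2}+V$ is positive definite and admits no bound states. A crude finite-difference check of this statement (an illustrative sketch with $r_{h}=1$; the grid sizes are arbitrary):

```python
import numpy as np

# Discretize -d^2/dr*^2 + V(r*), V = (2/r^2)(1 - r_h/r), on the
# Schwarzschild background with r_h = 1, using the tortoise coordinate
# r* = r + r_h log(r/r_h - 1).
rh = 1.0
r = np.linspace(rh + 1e-3, 60.0, 20000)
rstar = r + rh*np.log(r/rh - 1.0)

# Resample on a uniform grid in r*; Dirichlet ends play the role of a box.
rs = np.linspace(rstar[0], rstar[-1], 1000)
r_of_rs = np.interp(rs, rstar, r)
V = (2.0/r_of_rs**2)*(1.0 - rh/r_of_rs)

h = rs[1] - rs[0]
n = len(rs)
H = (np.diag(2.0/h**2 + V)
     + np.diag(-np.ones(n-1)/h**2, 1)
     + np.diag(-np.ones(n-1)/h**2, -1))
e0 = np.linalg.eigvalsh(H)[0]
print(f"lowest eigenvalue: {e0:.3e}")   # positive: no bound state
assert e0 > 0.0
```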
\paragraph{}
Working inductively through the various branches of solutions
(the ${ v} = 0$ limit of the
$k=n$ branch is the same as that for the quasi-$k=n-1$
branch after replacing $\omega $ by $-\omega$) we find that
the $k=n$ branch possesses exactly $n$ unstable gravitational
modes, and the quasi-$k=n-1$ branch exactly $n-1$
negative modes. This result has been conjectured but not proven
in ref. \cite{lavmaison}.
\paragraph{}
Before closing the section we would like to compare our
results with those of ref. \cite{torii}.
In ref. \cite{torii} the authors also used catastrophe theory to draw
conclusions about the stability of the EYMH black holes, but their
approach was somewhat different.
There they fixed the parameter $\lambda =0.125$ and also
fixed the Higgs mass $v$.
They then varied the horizon radius $r_{h}$ and for each solution
calculated the value of the mass functional $\cal M$, the field
strength at the horizon $B_{h}$ given by
\begin{equation}
B_{h}=\left. |F^{2}|^{\frac {1}{2}} \right| _{\mbox {horizon}}
\end{equation}
and the Hawking-Bekenstein entropy $S=\pi r_{h}^{2}$, to give
a smooth solution curve in $({\cal M},B_{h},S)$ space.
The projection of this curve on to the $({\cal M},S)$ plane
has the same qualitative features as our figure 2.
Here we have also fixed $\lambda =0.15$, and in addition we have
fixed $r_{h}=1$ and varied the Higgs mass $v$ from 0 up to the
bifurcation point.
Torii et al concluded that the $k=1$ branch of solutions was more
unstable than the quasi-$k=0$ branch of solutions.
The advantage of our approach is that, by interpolating between
the various coloured black hole solutions \cite{bizon}, beginning
with the Schwarzschild solution, we will be able to calculate the
exact number of unstable modes of each branch of solutions and not
just give qualitative information concerning their relative
stability.
\section{Entropy considerations}
It remains now to associate the above catastrophic considerations
with some elementary `thermodynamic' properties of the black hole
solutions, and in particular their entropy.
Our aim in this section is to give elementary
estimates of the entropy of the various branches, assuming
thermodynamic equilibrium of the black hole with a surrounding
heat bath. Such estimates will allow an association of the
stability issues with the amount of entropy carried by the various branches
of the solution. In particular we shall argue that the `high-entropy'
branch has relatively fewer unstable modes than the `low-entropy' ones
and, thus, is relatively more stable.
We shall employ approximate WKB semi-classical methods
for the evaluation of the entropy. We shall also study the conditions
under which such estimates are valid.
\paragraph{}
The calculation of the entropy
of the black hole will be made on the basis
of calculating the entropy
of quantum matter fields
in the black hole space time.
This will constitute only a partial
contribution to the total black hole entropy.
A complete calculation requires a proper quantization
of the gravitational field, which at present is not possible,
given the non-renormalizability of (local) quantum gravity.
Ignoring such back reaction effects of the matter fields
to the (quantum) geometry of space time results in ultra-violet
divergences in the calculated entropy of the matter
fields~\cite{thooft,susskind}.
Such divergences can be absorbed
in a renormalization of the gravitational (Newton's) constant.
This is so because the entropy is proportional
to the area of the black hole horizon,
with the divergent contributions appearing as
multiplicative factors.
\paragraph{}
In what follows we shall estimate the entropy of a scalar field
propagating in the EYMH black hole background.
Anticipating a path integral formalism
for quantum gravity, we shall compute only the entropy
which is due to quantum fluctuations of the scalar field
in the black hole background.
The part that contains the classical solutions to the equations
of motion contributes to the `classical' entropy, associated
with the classical geometry of space time. This part is known
to be proportional to $1/4$ of the horizon area~\cite{thooft,susskind}.
The quantum-scalar-field entropy part will also turn out to be
proportional to the horizon area, but the
proportionality coefficient is linearly
divergent as the ultraviolet cut-off is removed, exactly as it happens
in the corresponding computation for the Schwarzschild
black hole~\cite{thooft}.
Absorbing the divergence into a conjectured renormalization
of the gravitational constant will enable us to estimate the entropy
of the various branches of the EYMH black hole solution, and relate this
to the above-mentioned catastrophe-theoretic arguments.
\paragraph{}
As we shall show, this will be possible only in the {\it non-extremal} case,
which is the case of the numerical solutions studied in ref. \cite{greene}
and in the present work. Among the solutions, however,
there exist some {\it extremal} cases, for which the Hawking temperature -
which is defined by assuming thermal equilibrium
of the black hole system with a surrounding heat bath - vanishes.
In such a case, the linearly divergent entropy of the scalar field
vanishes. However, there are non-trivial {\it logarithmically}
divergent contributions to the black hole entropy
which cannot be absorbed in a renormalization of the gravitational
constant. Moreover, the classical Bekenstein-Hawking entropy formula
seems to be violated by such contributions to the
black-hole entropy.
The situation is similar to the case of a scalar field
in an extreme (3+1)-dimensional
dilatonic black hole background~\cite{dilatonbh}, and seems to be
generic to black holes with non-conventional hair.
We shall briefly comment on this issue at the end of the section.
\paragraph{}
We shall be brief in our discussion and concentrate only on basic new results
relevant to our discussion above. For details of the formalism we refer
the interested reader to the existing literature~\cite{thooft,susskind}.
To start with, we note that
the metric for the EYMH black holes is given by:
\begin{equation}
ds^{2} =
-\left( 1- \frac{2m(r)}{r} \right) e^{-2\delta (r)} dt^{2}
+ \left( 1- \frac{2m(r)}{r} \right) ^{-1} dr^{2}
+r^{2}( d \theta ^{2} + \sin ^{2} \theta d\varphi ^{2}).
\end{equation}
Consider a scalar field of mass $\mu $ propagating in this spacetime
\cite{thooft},
satisfying the Klein-Gordon equation:
\begin{equation}
\frac {1}{\sqrt{-g}} \partial _{\mu } ( \sqrt {-g}
g^{\mu \nu } \partial _{\nu} \Phi )
- \mu ^{2} \Phi =0.
\end{equation}
Since the metric is spherically symmetric, consider solutions of the
wave equation of the form
\begin{equation}
\Phi (t,r,\theta, \varphi)= e^{-iEt} f_{El}(r) Y_{lm_{l}}(\theta,\varphi)
\end{equation}
where $Y_{lm_{l}}(\theta ,\varphi )$ is a spherical harmonic
and $E$ is the energy of the wave.
The wave equation separates to give the following radial equation for
$f_{El}(r)$
\begin{eqnarray}
\left( 1-\frac{2m(r)}{r} \right) ^{-1} E^{2} f_{El} (r) & & \nonumber \\
+ \frac {e^{\delta (r)}}{r^{2}} \frac {d}{dr}
\left[ e^{-\delta (r)} r^{2} \left( 1- \frac {2m(r)}{r} \right)
\frac {df_{El} (r)}{dr} \right] & & \nonumber \\
-\left[ \frac {l(l+1)}{r^{2}} +\mu ^{2} \right] f_{El} (r) & = & 0.
\end{eqnarray}
The ``brick wall'' boundary condition is assumed \cite{thooft},
namely, the wave
function is cut off just outside the horizon,
\begin{equation}
\Phi =0 \mbox { at } r=r_{h} + \epsilon
\end{equation}
where $r_{h}$ is the black hole horizon radius, and $\epsilon$ is
a small, positive, fixed distance which will play the r\^ole
of an ultraviolet cut-off. We
also impose an infra-red cut-off at a very large distance $L$ from
the horizon:
\begin{equation}
\Phi =0 \mbox { at } r=L, \mbox { where } L \gg r_{h}.
\end{equation}
Hence $f$ satisfies
\begin{equation}
f_{El} (r) =0 \mbox { when } r=r_{h} + \epsilon
\mbox { or } r=L.
\end{equation}
In anticipation of being able to use a WKB approximation, define
functions $ K(r)$ and $h(r)$ by
\begin{eqnarray}
K^{2} (r,l,E) & = & \left( 1- \frac {2m(r)}{r} \right) ^{-1}
\left[ E^{2} \left( 1- \frac {2m(r)}{r} \right) ^{-1}
- \frac {l(l+1)}{r^{2}} - \mu^{2} \right] \label{waveno} \\
h(r) & = & e^{-\delta (r)} r^{2} \left(
1-\frac {2m(r)}{r} \right) .
\end{eqnarray}
Then the equation for $f_{El}(r)$ becomes
\begin{equation}
\frac {1}{h(r)} \frac {d}{dr} \left[
h(r) \frac {d}{dr} f_{El} (r) \right] +
K^{2} (r,l,E) f_{El} (r) =0.
\end{equation}
Now define a function $u(r)$ by
\begin{equation}
f_{El} (r)= \frac {u(r)}{\sqrt {h(r)}}.
\end{equation}
Then $u(r)$ satisfies
\begin{equation}
\frac {d^{2}u}{dr^{2}} +\left[
K^{2} +\frac {1}{4h^{2}} \left( \frac {dh}{dr} \right) ^{2}
-\frac {1}{2h} \frac {d^{2}h}{dr^{2}} \right] u =0.
\end{equation}
The WKB approximation for $u$ will be valid if
\begin{equation}
\left| \frac {1}{4h^{2}} \left( \frac {dh}{dr} \right) ^{2}
-\frac {1}{2h} \frac {d^{2}h}{dr^{2}} \right|
\ll
\left| K^{2} \right|
\end{equation}
and
\begin{equation}
\left| \frac {dK}{dr} \right| \ll \left| K^{2} \right| .
\end{equation}
The first inequality is required so that $u$ can be taken to satisfy
the equation
\begin{equation}
\frac {d^{2}u}{dr^{2}} +K^{2}u =0
\end{equation}
where $K$ is now the radial wave number, and the second inequality is
required so that the approximation to the wave function
\begin{equation}
u(r) \sim \frac {1}{\sqrt {K(r)}} \exp
\left[ \pm i \int K(r) dr \right]
\end{equation}
is valid. Assuming, for the present, that the WKB approximation
is valid, define the radial wave-number $K$ as above whenever the
right-hand-side of (\ref{waveno}) is non-negative.
Define $K^{2}=0$ otherwise.
Then the number of radial modes $n_{K}$ is given by
\begin{equation}
\pi n_{K} = \int _{r_{h}+\epsilon } ^{L}
dr K(r,l,E) \label{nk}
\end{equation}
where the fact that $n_{K}$ must be an integer restricts the possible
values of $E$.
For fixed energy $E$, the total number $N$ of solutions with
energy less than or equal to $E$ is
\begin{eqnarray}
\pi N & = & \int (2l+1) \pi n_{K} dl \nonumber \\
& = &
\int _{r_{h}+\epsilon }^{L}
\left( 1-\frac {2m(r)}{r} \right) ^{-1} dr
\int (2l+1) dl \nonumber \\ & & \times
\left[ E^{2} -\left( \frac {l(l+1)}{r^{2}} + \mu ^{2}
\right) \left( 1-\frac {2m(r)}{r} \right) \right]
^{\frac {1}{2}}
\end{eqnarray}
where the integration is performed over all values of $l$ such that
the argument of the square root is positive.
\paragraph{}
The Hawking temperature of the black hole is given by
\begin{equation}
T^{-1}=\beta =
\frac {4\pi r_{h} e^{\delta _{h}}}{1-2m'_{h}} \label{temp}
\end{equation}
where $\delta _{h}$ is fixed by the requirement that
$\delta (\infty ) =0$, and $'=d/dr $.
Assume further that $\beta^{-1} \ll 1 $.
It should be noted that $T=0$ in the extreme case $m'_{h}=0.5$.
Further comments on the entropy in this situation will be
made at the end of the section.
However, as discussed in Appendix A, this situation does not arise
for the black holes we are concerned with.
The free energy $F$ of the system is given by
\begin{eqnarray}
e^{-\beta F} &=&
\sum e^{-\beta E} \nonumber \\
& = &
\prod _{n_{K},l,m_{l}} \frac {1}{1-\exp (-\beta E)}.
\end{eqnarray}
Hence
\begin{eqnarray}
\beta F & = &
\sum _{n_{K},l,m_{l}} \log (1- e^{-\beta E}) \nonumber \\
& \simeq &
\int dl (2l+1) \int dn_{K} \log (1-e^{-\beta E})
\end{eqnarray}
for large $\beta $, integrating over appropriate $l$, $E$.
Integrating by parts,
\begin{eqnarray}
F & = &
-\frac {1}{\beta } \int dl (2l+1) \int d(\beta E)
\frac {n_{K}}{\exp (\beta E) -1} \nonumber \\
& = & -\frac {1}{\pi} \int dl (2l+1) \int dE
\frac {1}{\exp (\beta E) -1} \int _{r_{h}+\epsilon }^{L}
dr \nonumber \\ & & \times
\left(1-\frac {2m(r)}{r} \right) ^{-1} \left[
E^{2} -\left( 1-\frac {2m(r)}{r} \right) \left(
\frac {l(l+1)}{r^{2}} +\mu ^{2} \right) \right] ^{\frac {1}{2}}
\end{eqnarray}
where we have substituted for $n_{K}$ from (\ref{nk}).
The $l$ integration can be performed explicitly,
\begin{eqnarray} & &
\int dl (2l+1) \left[ E^{2} - \left( 1-\frac {2m(r)}{r} \right)
\left( \frac {l(l+1)}{r^{2}} + \mu ^{2} \right)
\right] ^{\frac{1}{2}} \nonumber \\
&= & \frac {2}{3} r^{2} \left( 1-\frac {2m(r)}{r} \right) ^{-1}
\left( E^{2} -\left( 1-\frac {2m(r)}{r} \right) \mu^{2}
\right) ^{\frac {3}{2}}
\end{eqnarray}
to give
\begin{equation}
F= -\frac {2}{3\pi} \int dE \frac {1}{\exp (\beta E)-1}
\int _{r_{h}+\epsilon }^{L} dr r^{2}
\left( 1-\frac {2m(r)}{r} \right) ^{-2}
\left[ E^{2} -\left( 1-\frac {2m(r)}{r} \right) \mu^{2}
\right] ^{\frac {3}{2}}.
\end{equation}
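The $l$ integration leading to this expression is elementary after the substitution $u=l(l+1)$, $du=(2l+1)\,dl$: with $s\equiv E^{2}-(1-2m/r)\mu^{2}$ and $b\equiv (1-2m/r)/r^{2}$ it reads $\int_{0}^{s/b}\sqrt{s-bu}\,du = \frac{2}{3}s^{3/2}/b$. A symbolic check (illustrative only):

```python
import sympy as sp

u, s, b = sp.symbols('u s b', positive=True)

# u = l(l+1);  s = E^2 - (1-2m/r) mu^2;  b = (1-2m/r)/r^2.
res = sp.integrate(sp.sqrt(s - b*u), (u, 0, s/b))

# Expected: (2/3) s^(3/2) / b, i.e. (2/3) r^2 (1-2m/r)^{-1} [E^2 - ...]^{3/2}.
assert sp.simplify(res - sp.Rational(2, 3)*s**sp.Rational(3, 2)/b) == 0
print("l-integration verified")
```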
Introduce a dimensionless radial co-ordinate $x$ by
\begin{equation}
x=\frac {r}{r_{h}}.
\end{equation}
Then
\begin{equation}
F= -\frac {2r_{h}^{3}}{3\pi} \int dE
\frac {1}{\exp (\beta E) -1} \int _{1+{\hat {\epsilon }}}^{\hat {L}}
dx x^{2} \left( 1-\frac {2{\hat{m}}(x)}{x} \right) ^{-2}
\left[ E^{2} - \left (1-\frac {2{\hat {m}}(x)}{x} \right)
\mu ^{2} \right] ^{\frac {3}{2}} \label{integrand}
\end{equation}
where ${\hat {\epsilon }} = \frac {\epsilon }{r_{h}}$,
${\hat {L}} =\frac {L}{r_{h}} $,
and
${\hat {m}}(x)=\frac {m(xr_{h})}{r_{h}} $.
\paragraph{}
The contribution to $F$ for large values of $x$ is
\begin{equation}
F_{0}=-\frac {2}{9\pi} L^{3}
\int _{\mu}^{\infty} dE \frac {(E^{2}-\mu^{2})^{\frac {3}{2}}}
{\exp (\beta E) -1}
\end{equation}
which is the expression for the free energy in flat space.
The contribution for $x$ near $1$ diverges as $\epsilon \rightarrow
0$.
For $x$ near $1$, the leading order term in the integrand
in (\ref{integrand}) is
\begin{equation}
E^{3}(x-1)^{-2}(1-2{\hat {m}}'_{h})^{-2}
\end{equation}
where
${\hat {m}}'_{h} = {\hat {m}}' (1)= m'(r_{h})$.
This gives the leading order divergence in $F$
\begin{eqnarray}
F_{div} & = &
-\frac {2r_{h}^{3}}{3\pi}
\frac {(1-2{\hat {m}}'_{h})^{-2}}{\hat {\epsilon }}
\int dE \frac {E^{3}}{\exp (\beta E) -1} \nonumber \\
& = &
-\frac {2\pi ^{3}}{45 {\hat {\epsilon }}}
\frac {r_{h}^{3} (1-2{\hat {m}}'_{h} )^{-2}}{\beta ^{4}}
\nonumber \\
& = &
-\frac {2\pi ^{3}}{45 \epsilon }
\frac {r_{h}^{4} (1-2{\hat {m}}'_{h}) ^{-2}}{\beta ^{4}}.
\end{eqnarray}
The total energy $U$ and entropy $S$ are given by
\begin{equation}
U= \frac {\partial }{\partial \beta }(\beta F)
= \frac {2\pi ^{3}}{15 \epsilon }
\frac {r_{h}^{4} (1-2 {\hat {m}}'_{h})^{-2}}{\beta ^{4}}
\end{equation}
\begin{equation}
S=\beta ^{2} \frac {\partial F}{\partial \beta }
= \frac {8\pi ^{3}}{45 \epsilon }
\frac {r_{h}^{4} (1-2{\hat {m}}'_{h} )^{-2}}{\beta ^{3}}.
\label{entrop2}
\end{equation}
Substituting for $\beta $ from (\ref{temp}), obtain
\begin{equation}
S=\frac {r_{h}}{360 \epsilon} (1-2m_{h}' ) e^{-3\delta _{h}}.
\label{entropy}
\end{equation}
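The algebra of substituting (\ref{temp}) into (\ref{entrop2}) can be checked symbolically (a sketch; the symbol names are ad hoc):

```python
import sympy as sp

rh, eps, mhp, dh = sp.symbols('r_h epsilon mhp delta_h', positive=True)

beta = 4*sp.pi*rh*sp.exp(dh)/(1 - 2*mhp)        # eq. (temp); mhp = m'_h
S = (sp.Rational(8, 45)*sp.pi**3/eps            # eq. (entrop2)
     * rh**4 / (1 - 2*mhp)**2 / beta**3)

# Must reduce to (r_h / 360 eps)(1 - 2 m'_h) exp(-3 delta_h), eq. (entropy).
assert sp.simplify(S - rh/(360*eps)*(1 - 2*mhp)*sp.exp(-3*dh)) == 0
print("eq. (entropy) verified")
```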
\paragraph{}
Before discussing the implications of this formula, it is
necessary to ascertain when the approximations used are valid.
Firstly, $ \beta \gg 1$ if $r_{h} \gg 1 $ or $1-2m_{h}' \ll 1$.
In the first case, non-trivial (viz. non-Schwarzschild) solutions
exist only for very small values of $v $, the Higgs mass \cite{torii}
and these solutions will be very close to the Schwarzschild
solution having mass $M=r_{h}/2$.
In the second case, the black hole is very nearly extremal. For
$2m_{h}'=1$ exactly, the above analysis does not apply.
However, $1-2m_{h}' \ll 1$ for large $n$ \cite{bizon} and for
any value of $v $ for which a non-trivial solution exists.
We shall discuss the physical implications of this (nearly)
extremal case at the end of the section.
\paragraph{}
Secondly, consider the validity of the WKB approximation.
The principal contribution to the free energy $F$ comes from
the region where $K$ is large; in particular, above we have
concentrated on $x$ close to 1.
It is expected that the WKB approximation will be valid
when $K$ is large.
For $K$ large, it may be approximated by
\begin{equation}
K=E\left( 1-\frac {2m(r)}{r} \right) ^{-1}.
\end{equation}
For $r$ near $r_{h}$, then
\begin{equation}
K=E(1-2m_{h}')^{-1}(r-r_{h})^{-1} + O(1)
\end{equation}
whence
\begin{equation}
\frac {dK}{dr} =-E(1-2m_{h}')^{-1}(r-r_{h})^{-2}.
\end{equation}
Hence
\begin{equation}
\left| \frac {dK}{dr} \right| \ll |K^{2}|
{\mbox { if }}
\left| \frac {E}{1-2m'_{h}} \right| \gg 1.
\end{equation}
Similarly, for $r$ near $r_{h}$ we may approximate $h$ by
\begin{equation}
h=e^{-\delta }r^{2}(1-2m_{h}')(r-r_{h}) + O(r-r_{h})^{2}.
\end{equation}
Then
\begin{equation}
\frac {1}{4h^{2}} \left( \frac {dh}{dr} \right) ^{2}
-\frac {1}{2h} \left( \frac {d^{2}h}{dr^{2}} \right)
= \frac {1}{4(r-r_{h})^{2}} + O(r-r_{h})^{-1},
\end{equation}
so that
\begin{equation}
\left| \frac {1}{4h^{2}} \left( \frac {dh}{dr} \right) ^{2}
-\frac {1}{2h} \frac {d^{2}h}{dr^{2}} \right|
\ll |K^{2}|
\mbox { if } \left| \frac {4E}{1-2m_{h}'} \right| \gg 1.
\end{equation}
Thus, for black hole solutions with large $n$, the WKB approximation
is valid except for small values of $E$.
Now return to the expression for the entropy (\ref{entrop2}),
\begin{equation}
S \equiv S_{linear}
= \frac {r_{h}}{360 \epsilon } (1-2m_{h}') e^{-3\delta _{h}}.
\label{linearen}
\end{equation}
We notice first that the entropy is positive, due to the fact that
the solutions have $m_h' < 1/2$ in order to avoid naked singularities.
Having said that, we now
fix $n$ and consider the two branches of black hole solutions, the
$k=n$ and quasi-$k=n-1$ solutions.
The linear divergence $r_h/\epsilon $ is a common multiplicative factor
in all branches, and thus can be absorbed in a renormalization of the
gravitational constant~\cite{susskind}.
This can be done as follows: Re-write $r_h/\epsilon = 4\pi r_h^2
\frac{1}{\epsilon 4\pi r_h}$,
where $A=4\pi r_h^2$ is the horizon area,
$G_0$ is the bare gravitational coupling constant
(which,
by convention, had been set to one in the
previous formulae),
and
$r_{h}\epsilon=2m_{h} \epsilon$
may be considered as the invariant distance (cut-off)
of the brick wall from the horizon.
The classical Bekenstein-Hawking entropy formula
is then still valid, but with the renormalized gravitational constant $G_R$
replacing the bare (classical) one $G_0$
\begin{equation}
S_{classical} + S_{linear} = \left( \frac{1}{4G_0} + O\left[ \frac{1}{\epsilon}\right] \right) A
=\frac{1}{4G_R}A
\label{beh}
\end{equation}
\paragraph{}
Such a renormalization
may be thought of as expressing quantum matter back reaction effects
to the space-time geometry.
Doing this in our case, we observe from (\ref{linearen}) that
for each $v$, the $k=n$ solution has larger $m_{h}'$ and
$\delta _{h}$ than the quasi-$k=n-1$ solution.
Hence the $k=n$ solution has a lower entropy than the
quasi-$k=n-1$ solution, in agreement with Torii et al \cite{torii}.
\paragraph{}
Before closing the section we would like to make some
important comments
concerning the extreme case $m'(r_h)=1/2$, for which the Hawking
temperature (\ref{temp}) {\it vanishes}.
In this case the linearly-divergent part of the entropy
(\ref{entropy}) also vanishes, but this is not the case
for the next-to-leading order {\it logarithmically} divergent
part\footnote{It should be noted that the
logarithmic divergent parts exist also in the non-extreme case, but
there they are suppressed by the dominant linearly divergent terms.
It can be easily checked that for the solutions of ref. \cite{greene},
their presence does not affect the entropy considerations above, based
on the linearly divergent term.}.
\paragraph{}
The logarithmic divergent part of the free energy can be found
from (\ref{integrand}) by requiring the following
expansion
\begin{equation}
{\hat m}(x) = {\hat m}_h + {\hat m}'_h (x-1) +
\frac{1}{2} {\hat m}_h'' (x-1)^2 + \dots \qquad {\hat m}_h = \frac{1}{2}.
\label{expn}
\end{equation}
Using the trick $x = 1 + (x-1)$ we can write down the identities:
\begin{eqnarray}
x^{-1} &=& 1 - (x-1) + (x-1)^2 + \dots \nonumber \\
x^{2} &=& 1 + 2 (x-1) + (x-1)^2.
\label{ident}
\end{eqnarray}
Hence
\begin{eqnarray}
1-\frac{2{\hat m}(x)}{x} &=& (1 - 2{\hat m}'_h)(x-1) +
(2{\hat m}'_h - 1 - {\hat m}''_h ) (x-1)^2 + \dots \nonumber \\
\left( 1-\frac{2{\hat m}(x)}{x} \right) ^{-2} &=& (1-2{\hat m}'_h)^{-2}
(x-1)^{-2} \nonumber \\ & & \times
\left\{ 1 + \frac{2(x-1)
(2{\hat m}'_h - 1- {\hat m}''_h )}{2{\hat m}'_h -1} +\dots \right\}.
\label{comput}
\end{eqnarray}
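The horizon expansion above can be reproduced order by order with a symbolic series in $t=x-1$ (an illustrative check):

```python
import sympy as sp

t, p, q = sp.symbols('t mhp mhpp')   # t = x-1, p = hat m'_h, q = hat m''_h

# hat m(x) = 1/2 + p(x-1) + (q/2)(x-1)^2 near the extremal horizon, eq. (expn).
mhat = sp.Rational(1, 2) + p*t + q*t**2/2
expr = sp.expand((1 - 2*mhat/(1 + t)).series(t, 0, 3).removeO())

# Coefficients of (x-1) and (x-1)^2:
assert sp.simplify(expr.coeff(t, 1) - (1 - 2*p)) == 0
assert sp.simplify(expr.coeff(t, 2) - (2*p - 1 - q)) == 0
print("horizon expansion verified")
```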
Substituting in (\ref{integrand}) we obtain for the next-to-leading
order divergence of the free energy
\begin{eqnarray}
F_{nlo} &=& \frac{2r^3_h}{3\pi} \frac{2 (2-4{\hat m}'_h + {\hat m}''_h )}
{(1 - 2{\hat m}'_h)^3} \int dE \frac{E^3}{e^{\beta E} - 1} \int _{1 + {\hat
\epsilon}} dx \frac{1}{x-1} \nonumber \\ & & +
\frac{2r_h^3}{3\pi} \frac{3}{2}\mu^2 \frac{1}{1-2{\hat m}'_h}
\int dE \frac{E}{e^{\beta E}- 1} \int _{1 + {\hat \epsilon}} \frac{dx}{x-1}.
\label{intgr}
\end{eqnarray}
Using the formulae
\begin{eqnarray}
\int _0^\infty dE \frac{E^3}{e^{\beta E} - 1} &=& \frac{\pi ^4}
{15\beta ^4}
\nonumber \\
\int _0^\infty dE \frac{E}{e^{\beta E} -1 } &=& \frac{\pi ^2}{6 \beta ^2}
\label{resintgr}
\end{eqnarray}
the expression (\ref{intgr})
reduces to
\begin{equation}
F_{nlo} = \frac{4}{45}r_h^3 \frac{\pi ^3}{\beta ^4}
\frac{2 - 4 {\hat m}'_h + {\hat m}''_h }{(1 - 2{\hat m}'_h )^3 }
\log {\hat \epsilon} - \frac{1}{6} r_h^3 \frac{\pi}{\beta ^2}
\mu ^2 \frac{1}{1- 2{\hat m}'_h }\log {\hat \epsilon}.
\label{nloF}
\end{equation}
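The thermal integrals (\ref{resintgr}) and the resulting numerical prefactors can be checked directly (an illustrative numerical sketch with $\beta =1$; general $\beta$ follows by rescaling $E$):

```python
import numpy as np

# Thermal (Bose) integrals of (resintgr) at beta = 1.
E = np.linspace(1e-8, 80.0, 500_000)

def trap(y, x):                       # simple trapezoid rule
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

i3 = trap(E**3/np.expm1(E), E)        # expect pi^4/15
i1 = trap(E/np.expm1(E), E)           # expect pi^2/6
assert abs(i3 - np.pi**4/15) < 1e-4
assert abs(i1 - np.pi**2/6) < 1e-4

# Prefactors entering F_nlo:
#   (2/(3 pi)) * 2 * (pi^4/15)    = 4 pi^3/45  (E^3 term)
#   (2/(3 pi)) * (3/2) * (pi^2/6) = pi/6       (mu^2 E term)
assert np.isclose((2/(3*np.pi))*2*(np.pi**4/15), 4*np.pi**3/45)
assert np.isclose((2/(3*np.pi))*1.5*(np.pi**2/6), np.pi/6)
print("eq. (resintgr) and prefactors verified")
```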
{}From (\ref{entrop2}) the corresponding
next-to-leading contribution to the entropy in the extremal case
${\hat m}'_h =1/2$
(where the linear divergence vanishes) is given by
the following expression:
\begin{equation}
S_{nlo} = \left[ \frac{1}{3}r_h^2 \mu ^2 e^{-\delta _h}
- \frac{1}{180} e^{- 3\delta _h} {\hat m}''_h \right]
\log \left( \frac{\epsilon}{r_h} \right)
\label{entrop3}
\end{equation}
Thus, we observe that in the extremal case the entropy
diverges logarithmically with the ultraviolet cut-off,
in a similar spirit to the case of the dilatonic black hole
background~\cite{dilatonbh}. In our case, however, the
horizon area does not vanish, because there is no dilaton
field exponentially coupled to the graviton.
Thus, one could hope that the divergent contribution
(\ref{entrop3}) could be absorbed in the renormalization
of the gravitational constant, so that a formal
Bekenstein-Hawking expression for the
entropy is still valid. However,
as we see from (\ref{entrop3}), for generic
scalar fields this cannot be the case,
due to terms
that spoil the proportionality
of $S_{nlo}$ to the black hole
horizon area $A=4\pi r_h^2$.
Indeed, let us analyze the various contributions
in (\ref{entrop3}).
\paragraph{}
The first term can be absorbed into a renormalization
of the gravitational constant, and respects the
classical formula (\ref{beh}). This is not the case with the
second term however. From
equation (\ref{double}) of Appendix A, we observe that
there are contributions that depend on the (boundary) horizon values
of the fields $\phi _h$ and $\omega _h$
which are not proportional to the horizon area $A$,
\begin{equation}
m_h'' =\frac{1}{r_h} \left( 1- \frac{\phi _h ^2}{2\omega _{h}}
(1+\omega _{h})^2 \right)
\label{curious}
\end{equation}
Thus, the associated contribution to the black hole entropy
seems not to be related to geometric aspects of the black hole
background.
\paragraph{}
One is tempted to interpret
such contributions
as being associated with information loss across the horizon.
This is supported by the fact that the logarithmic divergencies
disappear for black holes whose horizon is vanishing in the sense of
$r_h \rightarrow \epsilon$.
For consistency with
the interpretation as loss of information,
the {\it positivity} requirement of the relevant contribution
to the black hole entropy has to be imposed.
Returning to formula (\ref{entrop3}) we observe that
the above requirement implies $m_h'' > 0$.
In the present case we do not know whether
extremal solutions exist. {\it A priori} there is no
reason why such solutions should not exist in the EYMH system.
If such a solution exists, the above-mentioned positivity
requirement will impose restrictions on the
boundary (horizon) values of the hair fields of the black hole
background.
{}From the case at hand, it seems that the ambiguities
in sign are associated with the presence of the non-abelian
gauge field component $\omega$. Indeed, from (\ref{curious}),
it is immediately seen that the contributions of the scalar Higgs field
$\phi $ alone
to $m_h''$ are manifestly positive. The terms that
could lead to negative logarithmic contributions to the entropy
are associated with the field $\omega$ and vanish for $\omega = -1$.
\paragraph{}
This phenomenon is somewhat similar to what is happening
in the case
of a (spin one) gauge field in the presence of an
ordinary black hole background.
If one integrates quantum fluctuations
of a spin one field in a gravity background, there is an induced
coefficient in front of the Einstein curvature term in the
effective action whose sign is negative for space-time dimensions less than
$8$~\cite{kabat}.
Notice that such sign ambiguities do not occur for scalar fields
in conventional black hole backgrounds.
In our case,
there are gauge fields present in the black hole
background associated with non-conventional hair.
The sign ambiguities found above in the logarithmically-divergent
contributions to the entropy (\ref{entrop3})
occur already when one considers
quantum fluctuations of
scalar fields. This is associated with negative signatures of terms
that involve the gauge field hair background in the effective action.
\paragraph{}
Some comments are now in order concerning
the so-called {\it entanglement} entropy~\cite{bombelli}
of fields in background space times with event horizons or
other space-time boundaries.
The entanglement entropy is obtained from the density matrix of the
field upon tracing over degrees of freedom that cross the event horizon
or lie in the interior of the black hole, and therefore is
closely associated with loss of information. The entanglement entropy
is always positive.
This immediately implies a difference
from the ordinary black hole entropy, computed above,
for the case of spin one fields~\cite{kabat}.
On the other hand, for scalar fields
in ordinary black hole
backgrounds both entropies are identical,
since in that case sign ambiguities in the entropy
do not arise. By contrast, our computation for the
extreme EYMH case, provided the latter exists, has shown that, in general,
one should expect a difference between the two entropies
even in the case of scalar fields propagating
in such (extreme) non-Abelian black hole backgrounds.
\paragraph{}
There exists, of course,
the interesting possibility that the entanglement entropy of scalar
fields in this extreme black hole background
can be identified with the logarithmic entropy terms (\ref{entrop3}),
in which case the latter must be positive definite.
This, as we discussed above, would imply restrictions
on the boundary (horizon) values of the gauge hair for the extreme black hole
to exist. The restrictions seem to be relatively mild though.
As an example of the kind of situation one
encounters in such cases, consider the case
where
extreme EYMH black hole solutions exist.
{}From (\ref{curious}), we observe that positivity
of $m _h ''$ implies restrictions on the size of $\omega _h$,
$2\omega _h > \phi _h^2 ( 1 + \omega _h)^2 > 0$, which
is a mild restriction.
\paragraph{}
However, all these are mere speculations at this stage.
One has to await
a complete analytic solution of the EYMH black hole problem
before reaching any conclusions regarding entropy production
and information loss in extreme cases.
Therefore, we leave any further considerations
on such issues for future work.
\section{Conclusions and Outlook}
\paragraph{}
In this work,
we have analyzed in detail black holes
in (3+1)-dimensional Einstein-Yang-Mills-Higgs
systems. We have argued that the conditions for
the no-hair theorem are violated, which allows
for the existence of Higgs and non-Abelian hair.
This analytic work supports the numerical evidence for the
existence of hair found in \cite{greene};
the hair owes its existence to a balance between the
gauge field repulsion and the gravitational attraction.
However we have shown that the above black holes
are unstable, and therefore cannot
be formed by gravitational collapse of stable matter.
Although the instability of the black hole
sphaleron sector
was expected for topological reasons,
our analysis in this work, which includes an {\it exact}
counting of the unstable modes in this sector,
is valuable in the
sense that we have managed to
describe rigorously the sphaleron black holes
from a mathematical point of view.
In the gravitational sector we have
used catastrophe theory to classify
and count the unstable modes.
Our method of using as a catastrophe
functional the black hole mass and as
a control parameter the Higgs field v.e.v.
proved advantageous over existing methods
of similar origin~\cite{torii} in that
we managed to understand the
connection with the Schwarzschild black holes
from a stability/catastrophe-theoretic
point of view.
The above analysis, although applied
to a specific class of systems,
is quite general, and the various steps
can be applied to other self-gravitating structures
in order to reach conclusions related to
the existence of non-trivial hair
and its stability. For instance, we can tackle
the problem of moduli hair
of black holes in string-inspired dilaton-coupled
higher derivative gravity~\cite{kanti}.
The presence of Gauss-Bonnet combinations in such
systems shares many similarities with the case of
the non-Abelian black holes, and it would be
interesting to study in detail the possibility
of having non-trivial hair (to all orders in
the Regge slope $\alpha '$) and its stability,
following the methods advocated in the present work.
\paragraph{}
In addition to the question of the stability
of non-conventional hairy solutions,
the above analysis has revealed
another important aspect
concerning the information theoretic content
of these (3+1)-dimensional hairy black holes, namely the existence of
logarithmic divergent contributions to the entropy of
matter (quantum (scalar) fields) near the horizon.
Such contributions owe their existence
to the non-trivial hair of the black hole, and they modify
the Bekenstein-Hawking entropy formula, by yielding contributions
that do not depend on the horizon area.
Our findings can be compared to a similar
situation characterizing
extreme (string-inspired) black holes~\cite{dilatonbh}. There,
the deviation from the Bekenstein-Hawking entropy
arose from the fact that in the extreme case, due to the
presence of the dilaton, the effective horizon area vanishes
whilst the entropy does not. In our case, despite the non-vanishing
entropy in the extreme case, the logarithmically-divergent
entropy contributions violate explicitly the classical entropy-area
formula by yielding contributions that are independent of the horizon area.
This kind of entropy is clearly associated with loss of information
across the horizon but it is not described in terms
of classical geometric characteristics of the black hole.
If true in a full quantum theory of gravity,
this phenomenon might explain the information paradox.
The question of associating this entropy with the
entanglement entropy of fields in the EYMH background
is left open in the present work. We hope to come
back to this issue in the near future.
\paragraph{}
Whether a full quantum theory of gravity could
make sense of such divergencies or not remains to be seen.
There are conjectures/indications that string theory,
which is believed to be a mathematically
consistent, finite theory of quantum gravity, yields
finite extensive quantities at the horizon~\cite{susskind},
if string states, which in a generalized sense are gauged states,
are properly taken into account~\cite{emn}.
However, our understanding of these issues,
which are associated with the incompatibility - at present at least -
of canonical quantum gravity with quantum mechanics, is so incomplete
that any claim or attempt to relate the above issues to
realistic computations
involving quantum black hole physics would be inappropriate.
We think, however, that
it is interesting to point out yet another contradiction
between quantum mechanics and general relativity associated
with the proper quantization of extended objects possessing space-time
boundaries.
\paragraph{}
\noindent {\Large {\bf Acknowledgements}}
\paragraph{}
N.E.M. would like to thank the organizers of the {\it 5th Hellenic School
and Workshop
on Particle Physics and Quantum Gravity}, Corfu (Greece), 3-25 September
1995, for the opportunity they gave him to present
results of the present work. E.W. gratefully
acknowledges E.P.S.R.C. for a research studentship.
We also thank J. Bekenstein for a useful correspondence.
\newpage
\section*{Appendix A}
\subsection*{Notation and conventions}
Throughout this paper we use the sign conventions of Misner, Thorne
and Wheeler \cite{misner}
for the metric and curvature tensors. In particular, the signature of
the metric is $(-+++)$. For the EYMH system, we write the most general
spherically symmetric metric in the form
\begin{equation}
ds^{2}= -NS^{2} dt^{2}+N^{-1} dr^{2} +r^{2}(d\theta ^{2}
+ \sin ^{2} \theta d\varphi ^{2})
\end{equation}
where $N$ and $S$ are functions of $t$ and $r$ only and can be written
in terms of the mass function $m$ and the function $\delta $ as
\begin{equation}
N(t,r)=1-\frac {2m(t,r)}{r}, \qquad
S(t,r)=e^{-\delta (t,r) }.
\end{equation}
This latter form of the metric is particularly useful for black hole
space-times.
Following ref. \cite{greene}, we take the most general spherically symmetric
SU(2) gauge potential in the form
\begin{equation}
A=a_{0} \tau_{r} dt + a_{1} \tau_{r} dr
+(1+\omega )[ \tau_{\theta } \sin \theta d\varphi -\tau _{\varphi}
d\theta ]
+{\tilde {\omega }} [\tau _{\theta } d\theta + \tau _{\varphi}
\sin \theta d\varphi ]
\end{equation}
where $a_{0}$, $a_{1}$, $\omega $ and ${\tilde {\omega }}$ are
functions of $t$ and $r$ alone and the $\tau _{i}$ are given by
\begin{eqnarray}
\tau _{r} & = &
\tau _{1} \sin \theta \cos \varphi +
\tau _{2} \sin \theta \sin \varphi +
\tau _{3} \cos \theta \\
\tau _{\theta } & = &
\tau _{1} \cos \theta \cos \varphi +
\tau _{2} \cos \theta \sin \varphi -
\tau _{3} \sin \theta \\
\tau _{\varphi } & = &
-\tau _{1} \sin \varphi +
\tau _{2} \cos \varphi
\end{eqnarray}
with $\tau _{i}$, $i=1,2,3$ the usual Pauli spin matrices.
The complex Higgs doublet assumes the form
\begin{equation}
\Phi =\frac {1}{\sqrt {2}} \left(
\begin{array}{c}
\psi _{2}+i\psi _{1} \\
\phi -i\psi _{3}
\end{array}
\right)
\end{equation}
where a suitable spherically symmetric ansatz is
\begin{equation}
{\mbox {\boldmath $\psi $}}
=\psi (t,r) {\mbox {\boldmath ${\hat r}$}},
\qquad
\phi = \phi (t,r).
\end{equation}
Then the EYMH Lagrangian is \cite{greene}
\begin{eqnarray}
{\cal L}_{EYMH} &=&
-\frac {1}{4\pi} \left[
\frac {1}{4} |F|^{2} +
\frac {1}{8} (\phi ^{2} +|\psi |^{2} )|A|^{2}
+\frac {1}{2} g^{MN}\left[ \partial _{M} \phi \partial _{N}\phi
+(\partial _{M} \psi ) \cdot (\partial _{N} \psi ) \right]
\right. \nonumber \\ & & \left.
+V(\phi ^{2}+ |\psi |^{2})
+\frac {1}{2} g^{MN} A_{M} \cdot [
\psi \times \partial _{N} \psi +
\psi \partial _{N} \phi -
\phi \partial _{N} \psi ] \right]
\end{eqnarray}
and the Higgs potential is
\begin{equation}
V(\phi ^{2})=\frac {\lambda }{4} (\phi ^{2} -v^{2})^{2}.
\end{equation}
\paragraph{}
For the equilibrium static solutions,
$a_{0}$, $a_{1}$, $\tilde \omega $ and $\psi$ all vanish and the
remaining functions depend on $r$ only.
The metric functions $m(r)$ and $\delta (r)$ are required by the
Einstein equations to satisfy the following, where $'=d/dr $:
\begin{eqnarray}
m'(r) & = &
\frac {1}{2} \left[
\left( 1-\frac {2m(r)}{r} \right) (2\omega '^{2} +r^{2} \phi '^{2})
\right] \nonumber \\ & &
+ \frac {r^{2}}{2} \left[
\frac {(1-\omega ^{2})^{2}}{r^{4}} +
\frac {\phi ^{2}}{2r^{2}} (1+\omega )^{2}
+ \frac {\lambda }{2} (\phi ^{2} -v^{2})^{2}
\right] \label{first} \\
\delta '(r) & = &
-\frac {1}{r} (2\omega '^{2} +r^{2}\phi '^{2})
\end{eqnarray}
subject to the boundary conditions $m(r_{h})=\frac {r_{h}}{2}$,
in order to have a regular event horizon at $r=r_{h}$, and, in order for
the spacetime to be asymptotically flat, $\delta (\infty )=0$.
For an asymptotically flat spacetime, it is also the case that
$m(r) \rightarrow M$ as $r\rightarrow \infty $, where $M$ is a
constant equal to the ADM mass of the black hole.
Integrating (\ref{first}) from $r_{h}$ to $\infty $ we obtain:
\begin{eqnarray}
{\cal M} -\frac {r_{h}}{2} & = &
m(\infty )-m(r_{h}) = \int _{r_{h}}^{\infty }
m'(r) dr \nonumber \\
& = &
\int _{r_{h}}^{\infty } \frac {1}{2} \left[
\left( 1-\frac {2m}{r} \right) (2\omega '^{2} +r^{2} \phi '^{2})
\right] \nonumber \\ & &
+\frac {r^{2}}{2} \left[ \frac {(1-\omega ^{2})^{2}}{r^{4}}
+ \frac {\phi ^{2}}{2r^{2}} (1+\omega )^{2}
+\frac {\lambda }{2} (\phi ^{2}-v^{2})^{2} \right] dr
\end{eqnarray}
This equation defines the mass functional $\cal M$ as an integral of
the fields over the spacetime.
\paragraph{}
Finally we define the `tortoise' co-ordinate $r^{*}$ by
\begin{equation}
\frac {dr^{*}}{dr} = \frac {1}{NS}.
\end{equation}
\paragraph{}
\subsection*{Numerical solution of equilibrium equations}
The static field equations for the metric functions and matter fields
are:
\begin{eqnarray}
m'(r) & = & \frac {1}{2} \left[ \left( 1-\frac {2m}{r}\right)
(2\omega '^{2} +r^{2} \phi '^{2}) \right]
\nonumber
\\ & &
+\frac {r^{2}}{2} \left[ \frac {(1-\omega ^{2})^{2}}{r^{4}} +
\frac {\phi ^{2}}{2r^{2}} (1+\omega )^{2} +
\frac {\lambda }{2} (\phi ^{2}-v^{2})^{2} \right] \label{third} \\
\delta '(r) & = &
-\frac {1}{r} (2\omega '^{2} +r^{2}\phi '^{2})\label{second} \\
N\omega '' & = &
-\frac {(NS)'}{S} \omega' + \frac {1}{r^{2}} (\omega ^{2}-1)\omega
+\frac {\phi ^{2}}{4} (1+\omega ) \\
N\phi '' & = &
-\frac {(NS)'}{S} \phi' -
\frac {2N}{r} \phi' +
\frac {\phi }{2r^{2}} (1+\omega )^{2} +
\lambda \phi (\phi ^{2}-v^{2}) \label{fourth}
\end{eqnarray}
For finite energy solutions, we require that $\omega (\infty )=-1$,
$\phi (\infty )=v$ and $\delta (\infty )=0$ in order for the spacetime
to be asymptotically flat.
These equations trivially possess the Reissner-Nordstr\"{o}m solution
given by
\begin{equation}
m\equiv \frac {r_{h}}{2}, \qquad
\omega \equiv -1, \qquad
\phi \equiv v, \qquad
\delta \equiv 0.
\end{equation}
Non-trivial solutions do not occur in closed form, so a numerical
method of solution is necessary as in \cite{greene}.
We set the horizon radius $r_{h}=1$ and $\lambda =0.15$
(cf. $\lambda =0.125$ in ref. \cite{greene}).
\paragraph{}
{}From the above equations, if the function $\delta (r)$ satisfies
(\ref{second}) then $\delta (r) + \mbox { constant }$ will also be a valid
solution.
To make the numerical solution easier, we set $\delta (r_{h})=0$
(so that $\delta (\infty )=0$ will not be satisfied) when
integrating outwards from $r_{h}$. An appropriate constant can then
be added to $\delta (r)$, after the field equations have been solved,
to ensure that the boundary condition at infinity holds.
\paragraph{}
With this transformation, there are two unknowns at the event horizon,
$\omega (r_{h})$ and $\phi (r_{h})$, since the field equations yield
\begin{eqnarray}
\omega '(1) & = &
\frac {\frac {1}{4} \phi ^{2}_{h} (1+\omega _{h})
-\omega _{h}(1-\omega _{h}^{2}) }
{1-(1-\omega _{h}^{2})^{2}-\frac {1}{2}\phi ^{2}_{h}(1+\omega _{h})^{2}
-\frac {\lambda }{2}(\phi ^{2}_{h}-v^{2})^{2}} \nonumber \\
\phi '(1) & = &
\frac {\frac {1}{2} \phi _{h} (1+\omega _{h})^{2}
+\lambda \phi _{h} (\phi _{h}^{2}-v^{2})}
{1-(1-\omega _{h}^{2})^{2}-\frac {1}{2}\phi ^{2}_{h}(1+\omega
_{h})^{2}
-\frac {\lambda }{2}(\phi ^{2}_{h}-v^{2})^{2}}
\label{syst}
\end{eqnarray}
where
\begin{equation}
\omega _{h}=\omega (r_{h})=\omega (1) \qquad
\phi _{h}=\phi (r_{h})=\phi (1).
\end{equation}
Solving the field equations (\ref{third})--(\ref{fourth})
is therefore a two-parameter
shooting problem.
The procedure is to take initial `guesses' for the unknowns
$\omega _{h}$ and $\phi _{h}$ and then integrate the differential
equations out from $r_{h}$ using a standard ordinary differential
equation solver, attempting to satisfy the boundary conditions for
large $r$.
The initial starting values for $\omega _{h}$ and $\phi _{h}$
are then adjusted until these boundary conditions are satisfied
(see \cite{press} for further details of the algorithm used).
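The structure of this two-parameter shooting loop can be illustrated on a toy boundary-value problem. The system below ($u''=-v$, $v''=u$ on $[0,1]$, with $u(0)=1$, $v(0)=0$ fixed and $u(1)=v(1)=0$ to be satisfied) is hypothetical and chosen only for brevity; the equations actually integrated are (\ref{third})--(\ref{fourth}). The skeleton is the same: guess the two free values at the inner boundary, integrate outwards with a Runge-Kutta stepper, and Newton-iterate on the mismatch at the outer boundary.

```python
# Toy two-parameter shooting sketch: the unknowns p = (u'(0), v'(0)) play
# the role of (omega_h, phi_h); the mismatch at x = 1 plays the role of the
# boundary conditions at large r.

def rk4(f, y0, x0, x1, n=200):
    """Integrate y' = f(x, y) from x0 to x1 with classical RK4."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(x + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def rhs(x, y):                       # y = (u, u', v, v')
    u, up, v, vp = y
    return [up, -v, vp, u]

def mismatch(p):                     # residual of the outer boundary conditions
    u1, _, v1, _ = rk4(rhs, [1.0, p[0], 0.0, p[1]], 0.0, 1.0)
    return [u1, v1]

# Newton iteration on the two guesses, with a finite-difference Jacobian.
p, eps = [0.0, 0.0], 1e-6
for _ in range(5):
    r = mismatch(p)
    J = [[(mismatch([p[0] + eps*(j == 0), p[1] + eps*(j == 1)])[i] - r[i]) / eps
          for j in range(2)] for i in range(2)]
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    dp0 = (-r[0]*J[1][1] + r[1]*J[0][1]) / det   # solve J dp = -r (2x2 Cramer)
    dp1 = (-r[1]*J[0][0] + r[0]*J[1][0]) / det
    p = [p[0] + dp0, p[1] + dp1]

residual = max(abs(c) for c in mismatch(p))
```

For the real problem the update is usually done by a library shooting routine as in \cite{press}, but the logic is the one sketched here.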
\paragraph{}
For each fixed value of the Higgs mass $v$, there are many solutions
which can be indexed by the number of nodes $k$ of the potential
function $\omega (r)$.
Here we concentrate on the case $k=1$.
Then, for each $v$, there are two solutions which can be ascribed
to one of two families of solutions: the $k=1$ branch or the
quasi-$k=0$ branch, depending on the behaviour of the families as
$v\rightarrow 0$.
The quasi-$k=0$ branch of solutions approaches the Schwarzschild
solution $\omega \equiv 1$, $\phi \equiv 0$ as $v\rightarrow 0$,
whereas the $k=1$ branch of solutions approaches the first coloured
black hole of \cite{bizon} as $v\rightarrow 0$, with
$\phi \equiv 0$.
As $v$ increases, the two branches of solutions join up at
$v=v_{max}=0.352$.
This phenomenon does not occur for $\lambda =0.125$, as found
by Greene, Mathur and O'Neill \cite{greene}.
However, they conjectured that the two branches of solutions would
converge for some value of $\lambda $.
We stress here that our approach is somewhat different from that
of ref. \cite{torii}, where the field equations were solved for
fixed Higgs mass $v$ and varying $r_{h}$, whereas we have fixed
$r_{h}$ and varied $v$.
\paragraph{}
For each value of the Higgs mass $v$, we calculated the quantities
\begin{eqnarray}
{\cal M} & = &
\frac {r_{h}}{2} + \int _{r_{h}}^{\infty } \left\{
\frac {1}{2} \left[ \left(
1-\frac {2m}{r} \right) (2\omega '^{2} +r^{2} \phi '^{2})
\right] \right. \nonumber \\ & &
+r^{2}\left. \left[
\frac {(1-\omega ^{2})^{2}}{r^{4}} +
\frac {\phi ^{2}}{2r^{2}}(1+\omega )^{2} +
\frac {\lambda }{2} (\phi ^{2}-v^{2})^{2} \right] \right\} dr
\\
\delta _{0} & = &
\int _{r_{h}}^{\infty} \frac {1}{r} (2\omega '^{2}+
r^{2}\phi '^{2}) dr
\end{eqnarray}
for each of the two solutions.
The resulting solution curve plotted in $(v,\delta _{0},{\cal M})$
space is shown in figure 1.
The projection of this curve on to the $(v,{\cal M})$
plane is shown in
figure 2.
\paragraph{}
One issue that is important, especially when we come to consider
the thermodynamics and entropy of the black holes, is whether or
not they are extremal.
An extremal black hole occurs when $N$ has a double zero at the
event horizon, and is caused physically by an inner horizon moving
outwards until it coincides with the outermost event horizon.
Mathematically, the condition for extremality is that
\begin{equation}
m'(1) = \frac {1}{2}.
\label{extrem2}
\end{equation}
{}From the field equations (\ref{fourth}), we have
\begin{equation}
m'(1) =\frac {1}{2} \left[
(1-\omega _{h}^{2})^{2} + \frac {1}{2} \phi _{h}^{2}
(1+\omega _{h})^{2} +\frac {\lambda }{2}
(\phi _{h} ^{2}-v^{2})^{2} \right]
\label{extrem}
\end{equation}
\begin{equation}
m''(r_{h})=\frac {1}{r_{h}} \left( 1-
\frac{\phi _{h}^{2}}{2\omega _{h}} (1+\omega _{h})^{2} \right)
\label{double}
\end{equation}
where in the last relation we have kept an explicit $r_{h}$
dependence for calculational convenience.
There is thus no {\it a priori} reason why this quantity should not be
equal to one half for some equilibrium solution.
For the solutions on the $k=1$ and quasi-$k=0$ branches, we can
however place the following bounds on $m'(1)$.
The first term is a decreasing function of $\omega _{h}$
(which is positive and increasing along these branches),
and hence is bounded above by its value at the smallest value of
$\omega _{h}$ along these branches, which is $\omega _{h}=0.632$, whence
\begin{equation}
(1-\omega _{h}^{2})^{2} \le 0.360.
\end{equation}
Along both these branches, $\phi _{h} \le 0.19 v$ which gives the
following bound on the second term,
\begin{equation}
\frac {1}{2} \phi _{h}^{2}(1+\omega _{h})^{2} \le
2 \times (0.19v)^{2} \le 2 \times 0.19^{2} \times 0.352^{2}
= 8.95 \times 10^{-3}.
\end{equation}
Finally, for the last term we have
\begin{equation}
\frac {\lambda }{2} (\phi _{h}^{2}-v^{2})^{2}
\le \frac {\lambda }{2}v^{4} \le 0.15 \times 0.5 \times 0.352^{4}
= 1.15 \times 10^{-3}.
\end{equation}
Adding together all the contributions, we find that
\begin{equation}
m'(1)\le 0.5 \times ( 0.360+8.95\times 10^{-3} + 1.15\times 10^{-3} )
=0.185
\le 0.5
\end{equation}
and hence all the equilibrium black holes considered here are
non-extremal.
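The arithmetic in the three bounds above can be checked directly; the following Python sketch uses only the constants quoted in the text ($\lambda =0.15$, $v_{max}=0.352$, $\omega _{h}\ge 0.632$, $\phi _{h}\le 0.19v$).

```python
# Reassemble the bound on m'(1) from the three terms estimated in the text.
lam, v_max, omega_min, phi_frac = 0.15, 0.352, 0.632, 0.19

t1 = (1 - omega_min**2)**2            # bound on (1 - omega_h^2)^2
t2 = 2 * (phi_frac * v_max)**2        # bound on (1/2) phi_h^2 (1 + omega_h)^2
t3 = (lam / 2) * v_max**4             # bound on (lam/2)(phi_h^2 - v^2)^2
bound = 0.5 * (t1 + t2 + t3)          # bound on m'(1); extremality needs 1/2
```

Since the bound stays well below $1/2$, the non-extremality conclusion is insensitive to the rounding of the quoted constants.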
\paragraph{}
\subsection*{Linear perturbation equations}
Consider small, time-dependent perturbations about the equilibrium
solutions discussed above, within the initial ansatz for the
metric and matter field functions.
We use a $\delta $ to denote one of these small perturbation
quantities; all other quantities are assumed to be static equilibrium
functions.
Following ref. \cite{bs}, we set $\delta a_{0}=0$ so that the
field configurations remain purely magnetic.
With this choice, the perturbation equations decouple into two
independent systems of coupled equations.
The first concerns $\delta a_{1} $, $\delta {\tilde {\omega }}$
and $\delta \psi $ only.
The equations take the form, with a prime denoting $d/dr^{*}$
where $r^{*}$ is the tortoise co-ordinate:
\begin{eqnarray}
-Nr^{2}{\ddot {\delta a_{1}}} & = &
2N^{2}S^{2} \left( \omega ^{2} +\frac {r^{2}}{8} \phi ^{2}\right)
\delta a_{1} + 2NS (\omega \delta {\tilde \omega }'
-\omega '\delta {\tilde {\omega }}) \nonumber \\
& &
+\frac {1}{2} r^{2} NS (\phi ' \delta \psi -\phi \delta \psi ') \\
2{\delta {\ddot {\tilde {\omega }}}} & = &
2(NS\omega \delta a_{1})' +
2NS\omega ' \delta a_{1} +\delta {\tilde {\omega }}'' +
NS^{2}\phi \delta \psi \nonumber \\
& &
-\frac {2}{r^{2}}S^{2} \left( \omega ^{2}-1 +\frac {\phi ^{2}}{4}
\right) \delta {\tilde {\omega }} \\
-r^{2} \delta {\ddot {\psi }} & = &
\frac {1}{2} (NSr^{2} \phi \delta a_{1})'
+ \frac {1}{2} r^{2} NS \phi ' \delta a_{1}
-NS^{2} \phi \delta {\tilde {\omega }} -(r^{2}\delta \psi ')' \nonumber \\
& &
+2NS^{2} \left( \frac {(1-\omega )^{2}}{4}+\frac {1}{2} r^{2}
\lambda (\phi ^{2} -v^{2}) \right) \delta \psi \\
0 & = & \partial _{t}
\left\{ \left( \frac {r^{2}}{S} \delta a_{1} \right) '
+ 2\omega \delta {\tilde {\omega }} -\frac {r^{2}}{2} \phi \delta \psi
\right\}.
\end{eqnarray}
This final equation is known as the {\it Gauss constraint} equation,
since it represents an additional constraint on the field
perturbations rather than an equation of motion.
This system of coupled equations is referred to as the
{\it sphaleronic sector} because it does not involve any perturbations
of the metric functions.
\paragraph{}
The remaining perturbation equations form the {\it gravitational
sector} and concern the perturbations of the metric functions and also
$\delta \omega $ and $\delta \phi $:
\begin{eqnarray}
-\delta {\ddot {\omega }} & = &
-\delta \omega '' +U_{\omega \omega } \delta \omega
+U_{\omega \phi }\delta \phi \\
-\delta {\ddot {\phi }} & = &
-\delta \phi '' +U_{\phi \omega } \delta \omega
+U_{\phi \phi }\delta \phi
\end{eqnarray}
where the $U$'s are complicated functions of $N$, $S$, $\omega $ and
$\phi $ and are given explicitly in section 4, equation (\ref{4-2}).
The equations governing the behaviour of $\delta m$ and $\delta S$
are derived from the linearised Einstein equations and are:
\begin{eqnarray}
\frac {d}{dr}(S\delta m ) & = & \frac {d}{dr} \left(
2NS \frac {d\omega }{dr} \delta \omega +r^{2}NS \frac {d\phi }{dr}
\delta \phi \right)
\label{mreqn} \\
\delta {\dot {m}} & = &
2N \frac {d\omega }{dr} \delta {\dot {\omega }} +
r^{2}N \frac {d\phi }{dr} \delta {\dot {\phi }}
\label{mteqn} \\
\delta \left( \frac {1}{S} \frac {dS}{dr} \right) & = &
\frac {4}{r} \frac {d\omega }{dr} \frac {d\delta \omega }{dr}
+2r \frac {d\phi }{dr} \frac {d\delta \phi }{dr}.
\label{seqn}
\end{eqnarray}
{}From (\ref{mreqn}) $ \delta m$ has the form
\begin{equation}
\delta m =2N\frac {d\omega }{dr} \delta \omega +
Nr^{2} \frac {d\phi }{dr} \delta \phi + \frac {f(t)}{S}
\label{compare}
\end{equation}
where $f(t)$ is an arbitrary function of $t$.
Compare this with the following, which results from integrating
(\ref{mteqn}):
\begin{equation}
\delta m = 2N\frac {d\omega }{dr} \delta \omega +
Nr^{2} \frac {d\phi }{dr} \delta \phi + g(r)
\label{compare1}
\end{equation}
where $g(r)$ is an arbitrary function of $r$.
Comparing (\ref{compare}) with (\ref{compare1}), we see that
$f(t) \equiv 0 \equiv g(r) $ and
\begin{equation}
\delta m = 2N\frac {d\omega }{dr} \delta \omega +
Nr^{2} \frac {d\phi }{dr} \delta \phi .
\end{equation}
\paragraph{}
We consider periodic perturbations of the form
\begin{equation}
\delta \omega (r,t) =\delta \omega (r) e^{i\sigma t}
\end{equation}
and similarly for the other perturbation quantities.
When substituted into the perturbation equations for each of the two
sectors, the equations studied in detail in sections 3 and 4 are
derived.
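Explicitly, substituting this ansatz into the gravitational sector equations above (no further assumption is involved) turns them into the coupled eigenvalue problem
\begin{eqnarray}
\sigma ^{2} \delta \omega & = & -\delta \omega '' + U_{\omega \omega }
\delta \omega + U_{\omega \phi } \delta \phi \nonumber \\
\sigma ^{2} \delta \phi & = & -\delta \phi '' + U_{\phi \omega }
\delta \omega + U_{\phi \phi } \delta \phi
\end{eqnarray}
so that an instability corresponds to a negative eigenvalue $\sigma ^{2}<0$, i.e.\ an exponentially growing mode.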
\paragraph{}
\section*{Appendix B}
\subsection*{Definitions and results of catastrophe theory}
Consider a family of functions
\begin{equation}
f:X \times C \rightarrow I \! \! R
\qquad f(x,c)=f_{c}(x)
\end{equation}
Here $X$ and $C$ are both manifolds known as the state space and
control space respectively.
In other words, we have a family of functions of the variable $x$,
the members of the family being indexed by $c$.
{}From now on we take both $X$ and $C$ to be intervals of the real line.
Then $f$ maps out a surface $z=f(x,c)$ in $I \! \! R ^{3}$
which is known as the {\it Whitney surface }\cite{poston}.
\paragraph{}
The catastrophe manifold is defined as the subset of $X\times C$
at which
\begin{equation}
\frac {d}{dx} f_{c}(x) =0,
\end{equation}
namely it is the set of all critical points of the family of
functions. In section 4, critical points of the functional
$\cal M$ correspond to solutions of the field equations, and
hence the catastrophe manifold corresponds to the projection of
the solution curve onto the $(x,c)=(\delta _{0},v)$ plane.
\paragraph{}
The catastrophe map $\chi $ is the restriction to the catastrophe
manifold of the natural projection
\begin{equation}
\pi : X \times C \rightarrow C,
\qquad
\pi (x,c)=c.
\end{equation}
This can easily be extended to a projection of the solution curve
on to the $(c,z)$ plane:
\begin{equation}
\chi (x,c,z=f(x,c)) =(c,z=f(x,c)).
\end{equation}
The singularity set is the set of singular points of $\chi $
in the catastrophe manifold, and the image of the singularity
set in $C$ is called the {\it bifurcation set }$B$.
Here both manifolds $X$ and $C$ are of dimension 1, and hence $\chi $
will be singular whenever its derivative vanishes.
\paragraph{}
The first result we require is that the singularity set is the
set of points $(x,c)$ at which $f_{c}(x)$ has a degenerate
critical point, in other words, both
\begin{equation}
\frac {d}{dx} f_{c}(x) =0 {\mbox { and }}
\frac {d^{2}}{dx^{2}} f_{c}(x) =0.
\end{equation}
This implies that the set $B$ is the place where the number and
nature of the critical points of the family of functions $f_{c}(x)$
change (see \cite{poston} for more details of these results).
\paragraph{}
In our case, where both $X$ and $C$ are one-dimensional,
the only possibility is that the bifurcation set $B$ either is
empty (in which case there is no catastrophe) or $B$ contains
a single point (when a {\it fold catastrophe } occurs).
We observe in section 4 that the latter situation arises.
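As a standard illustration (the polynomial family below is the textbook normal form, not something derived from the EYMH system), the canonical fold is
\begin{equation}
f_{c}(x)=\frac {1}{3}x^{3}+cx .
\end{equation}
The critical points satisfy $x^{2}=-c$, so they exist only for $c\le 0$: for $c<0$ there is a minimum at $x=\sqrt {-c}$ and a maximum at $x=-\sqrt {-c}$, which merge into the degenerate critical point $(x,c)=(0,0)$, where $f_{c}''(x)=2x$ also vanishes. The bifurcation set is therefore the single point $B=\{ 0\} $, across which a maximum-minimum pair of critical points is created or destroyed.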
\paragraph{}
The catastrophe manifold is a curve $\cal C$ in the $(x,c)$ plane,
with the point $B$ lying on it.
On one side of the point $B$, points lying on $\cal C$ correspond
to maxima of the functions $f_{c}(x)$ whilst on the other side of $B$
they represent minima.
In section 4 the value of $f$ corresponds to energy, so that minima of
$f$ will represent (relatively) stable objects, whilst maxima of $f$
will represent (relatively) unstable configurations.
\section{Introduction}
\label{sec.introduction}
\IEEEPARstart{T}{he} morphological examination of bone marrow cells plays a vital role in the diagnosis of blood diseases. It can identify the morphological characteristics of blood diseases such as acute leukemia, hemolymphoid tumors, and dysplasia\cite{MedicalKnowledge2009, MedicalKnowledge2014}. However, the analysis of cell morphology is currently performed manually by experienced hematologists. This process is not only cumbersome and labor-intensive but also subject to subjective factors that can negatively affect the results\cite{Wu2020}. Therefore, a computer-assisted quantitative automatic analysis system is essential, and the cell detection algorithm at its core has aroused the interest of many researchers.
In recent years, research on cell detection algorithms can be divided into traditional methods based on image processing and shallow machine learning, and methods based on deep learning. The general steps of traditional methods include cell segmentation, feature extraction, and cell classification. In some previous studies, image processing algorithms such as intensity clustering, watershed transformation, and adaptive thresholding were used for segmentation\cite{Deep10class}, and traditional classifiers such as the Support Vector Machine (SVM) and the Naive Bayes Classifier were used for cell classification\cite{TraditionBayers, TraditionANN}. Although these approaches have achieved excellent results, they are still not sufficient for the automatic analysis of bone marrow smears. Traditional methods are limited by low learning ability and the need for handcrafted features\cite{Deep10class}. Algorithms based on deep learning do not need manually designed features; they learn features from a large amount of data and have powerful feature extraction capabilities. Deep learning methods can also be combined with manually designed components while retaining this feature extraction power. Recently, several studies on cell classification and detection algorithms based on deep learning have achieved better results than traditional methods\cite{Deep10class, Deep5detect, Deep40class, Deep5class, Deep11detect}. Therefore, this paper focuses on a bone marrow cell detection algorithm based on deep learning.
\begin{figure*}[!t]
\begin{center}
\centering
\includegraphics[width=0.95\textwidth]{Structure.png}
\caption{The structure of the proposed bone marrow cell detection algorithm, its sub-modules, and the process of training the model with samples. CSP means Cross Stage Partial module, SPP means Spatial Pyramid Pooling module, FPN means Feature Pyramid Network module, PAN means Path Aggregation Network module, NMS means Non-maximum Suppression, and CIoU means Complete IoU\cite{YOLOv5}. Part (a) is an image of a local area of a bone marrow smear, Part (b) is the sample label corresponding to the image (including cell location and class), and Part (c) is the structure diagram of the proposed method. The sub-modules of the proposed algorithm fall into the following three categories: Normal core modules of YOLOv5, Effective core modules of YOLOv5, and the Proposed loss function, corresponding to the three color labels at the bottom of the figure. Both the Normal and Effective core modules are important core modules that support the excellent performance of the YOLOv5 network. Among them, the Effective core modules have outstanding advantages in detecting bone marrow cells. The training process is as follows: 1. the input image is forwarded through the network to the Prediction part to obtain the detection result, 2. the Loss part feeds the detection result and the sample label corresponding to the image into the loss function, 3. the loss is finally backpropagated to optimize the model parameters.}
\label{fig.Structure}
\end{center}
\end{figure*}
Currently, cell detection algorithms based on deep learning are developing steadily. Kutlu et al. used Faster-RCNN for end-to-end cell detection over 5 classes and achieved excellent results\cite{Deep5detect}. Wang et al. used YOLOv3 and the Single Shot MultiBox Detector (SSD) for end-to-end cell detection over 11 classes and achieved outstanding performance\cite{Deep11detect}. In existing research, the accuracy of cell detection algorithms is sufficient for commercial automatic analysis systems for peripheral blood smears. However, cell detection algorithms are still not effective enough to be used in an automatic bone marrow smear analysis system. (In this paper, the bone marrow cell detection algorithm refers to the cell detection algorithm used to detect bone marrow cells.) Deep-learning-based bone marrow cell detection faces the following three difficulties:
\begin{enumerate}
\item Bone marrow cell classes are divided according to the series and stages of the cells, and the number of classes exceeds 30. Classes of adjacent stages belonging to the same series have a high degree of similarity, making them difficult to distinguish. In contrast, classes belonging to different series have apparent differences, making them easier to distinguish. This inter-class relationship contains information that could help the algorithm classify cells, but it is still difficult for the algorithm to make full use of it.
\item In bone marrow smears, the cells are densely distributed, and there are many adherent cells. The algorithm should have sufficient cell segmentation capabilities.
\item The classification of some cell classes requires combining context information from the surrounding cells in the image. The algorithm should have the ability to extract global context information.
\end{enumerate}
In view of the above three problems, the main contributions of this paper include the following two points:
\begin{enumerate}
\item This paper proposes a novel loss function. Fine-grained classes are defined as classes finely divided according to series and stage. Coarse-grained classes are defined as classes roughly divided by merging fine-grained classes of the same series and adjacent stages. A coarse-grained class often contains more than one fine-grained class, and the algorithm directly predicts the fine-grained classes. During training, the proposed loss function increases the penalty when the model's coarse-grained prediction is wrong, using the similarity between fine-grained classes belonging to the same coarse-grained class to strengthen the supervision of model feature extraction.
\item This paper proposes a bone marrow cell detection algorithm based on the YOLOv5 network\cite{YOLOv5}, trained with the novel loss function. The proposed algorithm can segment cells effectively thanks to its powerful feature extraction capabilities, and can extract context information through a sub-module that expands the receptive field. Compared with existing cell detection algorithms, the proposed bone marrow cell detection algorithm has superior performance.
\end{enumerate}
\section{Related Works}
Cell detection algorithms based on deep learning can be divided into two types. One type performs segmentation and classification in two separate steps (the two-step algorithm); the other performs end-to-end detection (the end-to-end algorithm). The two types are analyzed separately below.
Two-step cell detection algorithms are relatively common. They first use a cell segmentation algorithm to segment single-cell images from the input image and then use a cell classification algorithm to classify each single-cell image. For example, the method in \cite{TraditionDeepMethod1} uses a threshold segmentation algorithm to segment single-cell images and then uses the Alexnet image classification model to classify them. The method in \cite{DeepDeepMethod1} uses the Faster-RCNN network to segment single-cell images and then uses Alexnet to perform fine-grained classification. For cell detection tasks, the segmentation and classification algorithms of a two-step method are relatively independent and easy to optimize separately. An excellent segmentation algorithm can be combined with an excellent classification algorithm \cite{Deep10class, Deep40class} for cell detection. However, the two-step approach has the following limitations when used to detect bone marrow cells: 1. the classification of single cells makes it challenging to incorporate context information, 2. the training process is cumbersome, 3. the detection speed is low.
With the rapid improvement of end-to-end object detection networks, many researchers have begun to use end-to-end cell detection algorithms built mainly on such networks. For example, the method in \cite{Deep5detect} uses the Faster-RCNN network to implement an end-to-end cell detection algorithm and trains it using transfer learning. The method in \cite{Deep11detect} implements end-to-end cell detection algorithms based on YOLOv3 and SSD. Compared with two-step cell detection algorithms, end-to-end algorithms have the advantages of a simple training process and fast running speed. Moreover, they make it relatively easy to incorporate contextual information. At present, the main limitation of end-to-end cell detection algorithms is that they cannot make full use of the characteristics of the bone marrow cell detection task, such as the similarities between bone marrow cell classes mentioned in Section \ref{sec.introduction}.
\section{Methods}
\label{sec. methods}
The YOLOv5 network is a classic one-stage detection network, and it is the latest work in the YOLO architecture\cite{YOLO} series. Compared with the previous YOLOv4 network, the YOLOv5 network achieves better accuracy and faster inference speed, and reduces the volume of model parameters by nearly 90\%.
This paper proposes a bone marrow cell detection algorithm based on the YOLOv5 network. Figure \ref{fig.Structure} shows the structure of the algorithm model, its essential sub-modules, and the training method. The remainder of this chapter introduces two types of essential sub-modules, corresponding to the Effective core modules and the Proposed loss function in Figure \ref{fig.Structure}.
\subsection{Effective Core Module}
The following four sub-modules of the proposed method provide advantages in the bone marrow cell detection task:
\begin{enumerate}
\item SPP module \cite{SPP}. The algorithm adds the SPP module to the Neck part, which helps the feature map merge local and global features. SPP can extract context-related information of bone marrow cells.
\item Mosaic data enhancement. Lack of data is a major challenge in the bone marrow cell detection task. The algorithm adds Mosaic data enhancement at the input end, splicing four images together with random scaling, random cropping, and random arrangement to enrich the data set.
\item Adaptive anchor boxes calculation. Bone marrow cells are mainly circular, and most of the cells are similar in shape and size. The algorithm adds an adaptive anchor boxes calculation at the input end, making it easy to find the best anchor boxes to improve the model’s ability to locate cells.
\item CIoU loss\cite{CIOU}. In the labeled samples of the bone marrow cell detection task, the rectangular boxes of the labeled cells are often unable to frame the cells accurately due to manual labeling. However, the center and aspect ratio of human-labeled boxes can be assumed to be approximately the same as the ground truth. The proposed algorithm uses the CIoU loss, which has the advantage of considering the overlap area, center point distance, and aspect ratio, making the penalty on the model more effective.
\end{enumerate}
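As a concrete illustration of the fourth point, the CIoU penalty can be sketched in a few lines: it combines the overlap area, the center-point distance, and the aspect-ratio consistency term. The box format (center $x$, center $y$, width, height) and all names below are our own illustrative choices, not the implementation used in this paper:

```python
import math

def ciou_loss(pred, target):
    """Sketch of the Complete-IoU loss between two boxes (cx, cy, w, h)."""
    px, py, pw, ph = pred
    tx, ty, tw, th = target

    # Intersection-over-union from the corner coordinates.
    ix1, iy1 = max(px - pw / 2, tx - tw / 2), max(py - ph / 2, ty - th / 2)
    ix2, iy2 = min(px + pw / 2, tx + tw / 2), min(py + ph / 2, ty + th / 2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = pw * ph + tw * th - inter
    iou = inter / union

    # Squared center distance over the squared diagonal of the
    # smallest enclosing box.
    cw = max(px + pw / 2, tx + tw / 2) - min(px - pw / 2, tx - tw / 2)
    ch = max(py + ph / 2, ty + th / 2) - min(py - ph / 2, ty - th / 2)
    rho2 = (px - tx) ** 2 + (py - ty) ** 2
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan(tw / th) - math.atan(pw / ph)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)

    return 1 - iou + rho2 / c2 + alpha * v
```

A perfectly matching box gives zero loss, and shifting the predicted center increases the loss through the distance term even at a fixed box size, which is exactly the behavior that makes the penalty effective for approximately centered manual labels.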
\subsection{Proposed Loss Function}
The YOLOv5 network originally calculates the loss of each labeled ground-truth bounding box using Eq.~\ref{eq.origin_loss}, which consists of three parts: positioning loss, confidence loss, and classification loss. The classification loss is calculated as shown in Eq.~\ref{eq.origin_loss_cls}, where $ S^2 $ is the number of grid cells, $B$ is the number of anchor boxes that each grid cell predicts, $ I_{ij}^{obj} $ indicates whether the labeled ground-truth bounding box currently being evaluated is assigned to the $j$-th anchor box of the $i$-th grid cell, $classes$ denotes all the classes of the sample, $ \delta $ denotes the sigmoid function, $ \widehat{p_i}\left(c\right) $ is the true probability of class $c$, and $ p_i\left(c\right) $ is the predicted probability of class $c$. The classification loss part sums the classification loss of the labeled ground-truth bounding box over all its corresponding anchor boxes.
\begin{equation}
Loss=Loss_{box}+Loss_{obj}+Loss_{cls}
\label{eq.origin_loss}
\end{equation}
\begin{footnotesize}
\begin{equation}
\begin{aligned}
Loss_{cls}&=-\sum_{i=0}^{S\times S}\sum_{j=0}^{B}{I_{ij}^{obj}\times} \\
&\sum_{c\in classes}\left[\widehat{p_i}\left(c\right)\log{\left(\delta\left(p_i\left(c\right)\right)\right)} + \left(1-\widehat{p_i}\left(c\right)\right)\log{\left(1-\delta\left(p_i\left(c\right)\right)\right)}\right]
\end{aligned}
\label{eq.origin_loss_cls}
\end{equation}
\end{footnotesize}
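To make the classification term concrete, the binary cross-entropy sum for one labeled ground-truth box can be sketched as follows. This is a didactic reimplementation with assumed names and shapes (a list of per-anchor logit vectors and a one-hot target), not the YOLOv5 source code:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def cls_loss(pred_logits, target_onehot, assigned):
    """Classification loss for one labeled ground-truth box.

    pred_logits[j][c] is the raw class-c score of anchor box j,
    target_onehot[c] is the true class indicator, and assigned[j]
    plays the role of the indicator I_ij^obj.
    """
    total = 0.0
    for j, logits in enumerate(pred_logits):
        if not assigned[j]:
            continue  # anchor not responsible for this box
        for c, p_hat in enumerate(target_onehot):
            p = sigmoid(logits[c])
            # Binary cross-entropy summed over all classes.
            total -= p_hat * math.log(p) + (1 - p_hat) * math.log(1 - p)
    return total
```

Confident predictions of the correct class drive the loss toward zero, while confident predictions of a wrong class are penalized heavily, and anchors not assigned to the box contribute nothing.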
Preliminary experiments showed that when we train only the ability of the proposed algorithm to detect the positions of bone marrow cells, it performs exceptionally well, with accuracy and recall both reaching over 99\%. Therefore, the limitation of the proposed bone marrow cell detection algorithm lies in the classification of bone marrow cells rather than their segmentation.
Based on the above judgment, the weights of each part of the YOLOv5 loss function were adjusted first. A basic weight parameter $ \alpha $ satisfying $ \alpha>1 $ is introduced, and the three parts of the YOLOv5 loss function are re-weighted according to Eq.~\ref{eq.optimized_loss}, making the loss more sensitive to classification errors so that the algorithm focuses more on learning the classification problem.
\begin{equation}
Loss=Loss_{box}+Loss_{obj}+{\alpha}Loss_{cls}
\label{eq.optimized_loss}
\end{equation}
Next, we optimize the loss function by exploiting the similarity between the fine-grained classes of bone marrow cells. Among the fine-grained classes, cells of the same series at different stages share common characteristics. However, when the classification loss is computed, prediction errors across different series and prediction errors within the same series penalize the model equally, which does not help supervise the model to learn more useful features. The proposed loss addresses this problem as follows: the classification part of the loss function Eq.~\ref{eq.optimized_loss} is replaced with the proposed loss function Eq.~\ref{eq.optimized_loss_cls}, where $ \gamma $ is defined in Eq.~\ref{eq.gammer_value}. When the coarse-grained class of the labeled ground-truth bounding box is inconsistent with the coarse-grained class predicted by the $j$-th anchor box of the $i$-th grid cell, the classification loss of that anchor box is multiplied by the weight $ \left(1+\beta\right) $, with $ \beta>0 $.
\begin{footnotesize}
\begin{equation}
\begin{aligned}
{Loss}_{cls}&=-\sum_{i=0}^{S\times S}\sum_{j=0}^{B}{I_{ij}^{obj}}\left(1+\gamma\right)\times \\
&\sum_{c\in classes}\left[\widehat{p_i}\left(c\right)\log{\left(\delta\left(p_i\left(c\right)\right)\right)}+\left(1-\widehat{p_i}\left(c\right)\right)\log{\left(1-\delta\left(p_i\left(c\right)\right)\right)}\right]
\end{aligned}
\label{eq.optimized_loss_cls}
\end{equation}
\end{footnotesize}
\begin{equation}
\gamma=\left\{
\begin{array}{rcl}
\beta & & predict_{coarse}\neq target_{coarse} \\
0 & & predict_{coarse}= target_{coarse}\\
\end{array} \right.
\label{eq.gammer_value}
\end{equation}
In Eq.~\ref{eq.optimized_loss} and Eq.~\ref{eq.gammer_value}, $ \alpha $ and $ \beta $ are two hyperparameters. Increasing $ \alpha $ increases the model's attention to classification, and increasing $ \beta $ increases the penalty weight for prediction errors at the coarse-grained level.
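The combined effect of $\alpha$, $\beta$, and $\gamma$ can be sketched for a single assigned anchor box. The 4-class fine-to-coarse mapping below is hypothetical (the real mapping in this paper is 36 fine-grained to 14 coarse-grained classes), as are all function and variable names:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical toy mapping: fine classes 0,1 share one coarse class,
# fine classes 2,3 share another.
FINE_TO_COARSE = {0: 0, 1: 0, 2: 1, 3: 1}

def proposed_cls_loss(logits, target_fine, alpha=2.0, beta=1.0):
    """Classification loss for one assigned anchor box.

    The BCE term is scaled by alpha, and additionally by (1 + beta)
    whenever the predicted coarse-grained class disagrees with the
    target's coarse-grained class (gamma = beta), following the
    structure of the proposed loss.
    """
    # Predicted fine-grained class = argmax of the logits.
    pred_fine = max(range(len(logits)), key=lambda c: logits[c])
    coarse_wrong = FINE_TO_COARSE[pred_fine] != FINE_TO_COARSE[target_fine]
    gamma = beta if coarse_wrong else 0.0

    bce = 0.0
    for c in range(len(logits)):
        p = sigmoid(logits[c])
        p_hat = 1.0 if c == target_fine else 0.0
        bce -= p_hat * math.log(p) + (1 - p_hat) * math.log(1 - p)
    return alpha * (1 + gamma) * bce
```

With the default $\alpha=2$, $\beta=1$ (the values used in the experiments), mistaking a cell for an adjacent stage of the same series costs half as much as mistaking it for a different series at the same confidence, which is the intended supervision signal.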
\section{Experimental Results and Discussion}
\label{sec.experimental_results_and_discussion}
The labeled bone marrow cell data set we used for training and testing came from the Southern Hospital of Southern Medical University and was collected with an optical microscope and camera. The data set includes 2,549 labeled images from 144 patients, with 15,642 individually labeled cells. The size of each image is $1024\times684$ pixels, and the total number of bone marrow cell classes is 36. (An example is shown in Figure \ref{fig.dataset}.) To ensure a low correlation between the training set and the test set, we split the data set by patient: the training set contains data from 117 patients, and the test set contains data from 27 patients.
\begin{figure}[H]
\begin{center}
\centering
\includegraphics[width=0.4\textwidth]{Dataset.png}
\caption{Example of data set}
\label{fig.dataset}
\end{center}
\end{figure}
\begin{table*}[!htbp]
\caption{ Experimental results}
\label{table_1}
\renewcommand\arraystretch{1.6}
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{ Method and Hyperparameters} & \makecell[c]{mAP@0.5 \\ fine-grained classification} & \makecell[c]{mAP@0.5 \\ coarse-grained classification} \\
\cline{1-4}
\hline
\multicolumn{2}{|c|}{ SSD\cite{Deep11detect} } & 21.18\% & 37.38\% \\
\hline
\multicolumn{2}{|c|}{ Faster-RCNN\cite{Deep5detect} } & 34.97\% & 56.83\% \\
\hline
\multicolumn{2}{|c|}{ YOLOv5 + Normal loss function\tnote{1} } & 44.65\% & 66.00\% \\
\hline
\multirow{5}*{ \makecell[c]{YOLOv5 + \\ Class weighted loss function\tnote{2}} } & $ \alpha = 2.50 $ & 48.93\% & 66.68\%\\
\cline{2-4}
& $ \alpha = 2.75 $ & 48.48\% & 66.77\%\\
\cline{2-4}
& $ \alpha = 3.00 $ & 48.76\% & 66.28\%\\
\cline{2-4}
& $ \alpha = 3.25 $ & 48.50\% & 67.00\%\\
\cline{2-4}
& $ \alpha = 3.50 $ & 49.68\% & 66.14\%\\
\hline
{\makecell[c]{YOLOv5 + \\ Proposed loss function\tnote{3}}} & {$\alpha = 2$, $\beta = 1$} & \textbf{51.14\%} & \textbf{69.71\%} \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] The normal loss is the original loss function of YOLOv5 network.
\item[2] The class weighted loss is the loss function that increases the weight of the classification part.
\item[3] The proposed loss is the loss function proposed in this paper.
\end{tablenotes}
\end{threeparttable}
\end{table*}
The data set includes 36 fine-grained classes, such as lymphocyte, promyelocyte, megaloblast, metamyelocyte, late megaloblast, etc. Based on the principles of same series, adjacent stages, existence of common features, and the cell-counting needs of blood disease diagnosis, we merged the 36 fine-grained classes into 14 coarse-grained classes (an example is shown in Figure \ref{fig.combine_class}).
\begin{figure}[H]
\begin{center}
\centering
\includegraphics[width=0.4\textwidth]{Combine_class.png}
\caption{Example of fine-grained classes merged into coarse-grained classes. The coarse-grained class formed from Polychromatic normoblast and Orthochromatic normoblast contains the corresponding two fine-grained classes, and the coarse-grained class formed from Myelocyte and Metamyelocyte likewise contains those two fine-grained classes.}
\label{fig.combine_class}
\end{center}
\end{figure}
We use the Pytorch deep learning framework and an NVIDIA RTX 2070 GPU for training and testing in the following experiments. When experimenting with the methods of other papers, we use the hyperparameters proposed in those papers. When experimenting with the bone marrow cell detection algorithm proposed in this paper, the following hyperparameters are set for training: the batch size is 18, the total number of training epochs is 80, and the image size after resizing is $640\times640$.
In order to fully demonstrate the advantages of the algorithm proposed in this paper, we have selected the following two indicators to analyze the performance of the cell detection algorithm quantitatively:
\begin{enumerate}
\item The mAP (mean average precision) \cite{mAP} of the algorithm's fine-grained class predictions, with the IoU threshold set to 0.5, hereafter referred to as mAP@0.5.
\item The mAP@0.5 of the algorithm's coarse-grained class predictions. It should be noted that both the algorithm's predictions and the sample annotations are fine-grained classes; the coarse-grained predictions are obtained through the mapping rule from fine-grained classes to coarse-grained classes.
\end{enumerate}
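The mapping rule in the second indicator amounts to a simple relabeling step applied at evaluation time; the model itself always predicts fine-grained classes. The toy 4-to-2 mapping and the detection tuple format below are our own illustrative assumptions:

```python
# Hypothetical toy mapping; the paper's rule maps the 36 fine-grained
# classes onto 14 coarse-grained ones.
FINE_TO_COARSE = {0: 0, 1: 0, 2: 1, 3: 1}

def to_coarse(detections):
    """Relabel fine-grained detections for coarse-grained mAP@0.5.

    Each detection is assumed to be (box, fine_class, score); only the
    class label changes, boxes and confidence scores are untouched.
    """
    return [(box, FINE_TO_COARSE[cls], score) for box, cls, score in detections]
```

After this relabeling, a prediction that confuses two fine-grained classes of the same series counts as correct at the coarse-grained level, which is why the coarse-grained mAP@0.5 in Table 1 is systematically higher than the fine-grained one.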
Using the data set of this paper, the cell detection algorithms with excellent performance in the current field\cite{Deep5detect, Deep11detect} and the bone marrow cell detection algorithm proposed in this paper were trained, measured, and evaluated. Each experiment was repeated three times and the results averaged to avoid contingency; the resulting indexes are shown in Table \ref{table_1}.
Since the effective weight of the classification part of the proposed loss function is unevenly distributed between $ \alpha $ and $ \alpha(1+\beta) $, we selected five values of $ \alpha $ equidistantly distributed in this interval for the class-weighted experiment, verifying that the gain of the proposed loss is not simply the result of adjusting the weight of the classification section.
From the experimental results in Table \ref{table_1}, the following conclusions can be analyzed:
\begin{enumerate}
\item The proposed bone marrow cell detection algorithm performs well. The mAP@0.5 for fine-grained classification reaches 51.14\%, and the mAP@0.5 for coarse-grained classification reaches 69.71\%, significantly better than the other two methods\cite{Deep5detect, Deep11detect}. Compared with the Faster-RCNN method combined with transfer learning\cite{Deep5detect}, the proposed algorithm improves the mAP@0.5 of fine-grained classification by 16.17\% and the mAP@0.5 of coarse-grained classification by 12.88\%.
\item Increasing the weight of the classification part in the loss function lets the model focus more on the classification of cells, which effectively improves the performance of the bone marrow cell detection algorithm. Compared with the model with the normal loss function, the model with the class-weighted loss function improves the mAP@0.5 of fine-grained classification by 5.03\%.
\item Using the loss proposed in this paper to adjust the penalties for coarse-grained and fine-grained prediction errors effectively improves the performance of the YOLOv5-based bone marrow cell detection algorithm. Compared to the model with the class-weighted loss function, the model with the proposed loss function improves the mAP@0.5 of fine-grained classification by 1.46\% and the mAP@0.5 of coarse-grained classification by 3.57\%. The results show that using the relationship between coarse-grained and fine-grained classes to strengthen the supervised training of the model is likely to help the algorithm learn the characteristics of the cell classes more effectively.
\end{enumerate}
It should be noted that when cell morphology is used to diagnose patients, some common blood diseases can be diagnosed from the coarse-grained classification and counting of cells. The proposed algorithm effectively improves the model's performance in predicting the coarse-grained classes, which improves the feasibility of using the bone marrow cell detection algorithm in an automatic analysis system.
Finally, we evaluated the running speed of the proposed bone marrow cell detection algorithm: the model deployed with OpenVINO on an Intel i7-10700 CPU takes 73 ms to detect the cells in a single image, which meets the efficiency requirements of most automatic analysis systems for cell detection algorithms.
\section{Conclusion and Future Work}
\label{sec.conclusion_future_work}
The bone marrow cell detection algorithm is the core of a meaningful automatic bone marrow cell morphology analysis system. This paper proposes a bone marrow cell detection algorithm based on a novel loss function and the YOLOv5 network. The experimental results show that the proposed bone marrow cell detection algorithm performs better than other cell detection algorithms: the final mAP@0.5 of fine-grained classification reaches 51.14\%, and the mAP@0.5 of coarse-grained classification reaches 69.71\%. In future research, we will continue to optimize the model and increase the sample size of the data set to improve the algorithm's accuracy in classifying cells.
\bibliographystyle{IEEEbib}
\section{Introduction}
Strong gravitational lenses are excellent tools for cosmological and
astrophysical studies \citep[see, e.g.,][]{cskreview}. Because of
their utility, an increase in the number of lenses is a highly
desirable goal, especially when the numbers become large enough to
marginalize over hidden parameters associated with any given lens
system. The majority of lenses discovered in the last decade were
found through dedicated surveys that used a variety of techniques to
find strong lenses, such as targeting the potential lensed objects
\citep[e.g.,][]{hstlens,class1,class2,winnsearch,wisotzki,pindorlens},
searching around the potential lensing galaxies
\citep[e.g.,][]{goodslens}, or looking for multiple redshifts
associated with a single object in large spectroscopic surveys
\citep[e.g.,][]{bolton1,bolton2,slacs1}. However, a
large number of lens systems were discovered serendipitously. In this
paper we report just such a discovery. We have found two additional
strong lens candidates in a single image centered on a known
gravitational lens, the data of which were obtained with the Advanced
Camera for Surveys
\citep[ACS;][]{acs1,acs2} on the {\em Hubble Space Telescope} (HST).
\begin{figure*}
\plottwo{fig1a.eps}{fig1b.eps}
\caption{Three-color images of ``Fred''\ (left) and ``Ginger''\
(right). The images were constructed from the ACS images, with the
F606W data in the blue channel, the sum of F606W and F814W in the
green channel, and F814W in the red channel, respectively. In each
image, north is up and east is to the left. The blue blobs and arc
stand out clearly from the red lensing galaxies.
\label{fig_color}}
\end{figure*}
The main goal of the ACS observations was to obtain a high
sensitivity image of the Einstein ring of a previously known
gravitational lens, CLASS B1608+656 \citep{stm1608}. The Einstein
ring provides further constraints to the lensing mass model and
reduces the uncertainties in the determination of $H_0$ from this
time-delay system \citep[e.g.,][]{Ko01, 1608H0}. The exquisite angular
resolution and surface brightness sensitivity of the ACS imaging also
allows the properties of the B1608+656 field to be studied. In
particular, the contribution to the B1608+656 image splitting by
a small group of galaxies associated with the main lensing galaxy has
been investigated \citep{1608groupdisc}, and a weak lensing analysis of
the mass distribution along the line of sight to B1608+656 is
being conducted. The deep ACS imaging has also allowed us to find
two additional galaxy-scale gravitational lens systems in the same
field of view.
In Section 2 we present the ACS and ground-based imaging of the two
lens candidates. The spectra of the lensing galaxies and one of the
lensed sources are presented in Section 3. We test lensing mass
models for the two systems using a non-parametric lensing code in
Section 4. Finally, in Section 5 we discuss the lensing hypothesis for
both systems and estimate the likelihood of finding two additional
lenses in this field. Throughout this paper we assume a flat Universe
with $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$, and, unless otherwise
stated, we will express the Hubble Constant as H$_0 = 100~h$~km\ s$^{-1}$\ Mpc$^{-1}$.
\section{Optical Imaging}
In this section, we present space- and ground-based multi-color optical
imaging of the two lens candidates in the field of B1608+656.
\subsection{Hubble Space Telescope}
High resolution optical imaging of the B1608+656 field was obtained
with the ACS (GO-10158; PI: Fassnacht). The data were acquired over
the course of five visits between 2004 August 24 and September 17. The
Wide Field Channel (WFC) of the ACS was used throughout, providing a
field of view (FOV) of $202\arcsec \times 202\arcsec$ and a scale of
0.05~arcsec~pixel$^{-1}$. Our observations consisted of nine orbits
that used the F606W filter and eleven orbits with the F814W filter,
corresponding to total exposure times of 22516~s and 28144~s,
respectively. The data were reduced in the standard manner using the
{\it stsdas} package within {\sc iraf}\footnote{IRAF (Image Reduction
and Analysis Facility) is distributed by the National Optical
Astronomy Observatories, which are operated by the Association of
Universities for Research in Astronomy under cooperative agreement
with the National Science Foundation.}. The final combined images were
produced using {\it multidrizzle} \citep{multidrizzle}, which also
corrected the data for the ACS geometric distortion. The area covered
by the final combined image in each filter, defined as the region in
which the weight file had pixel values greater than 2000, is 11.9
arcmin$^2$. Catalogs of objects in the ACS images were generated
by running SExtractor
\citep{sextractor} with the parameters suggested by
\citet{faintacsgals}. The count rates in the images were converted to
Vega-based magnitudes using the zero points on the ACS web site\footnote{See
\url{http://www.stsci.edu/hst/acs/analysis/zeropoints}}. Full details
of the acquisition and reduction of our ACS imaging will be presented
in a future paper. A summary of the imaging observations is given in Table
\ref{tab_obsdata}.
\begin{deluxetable}{ccccr}
\tabletypesize{\scriptsize}
\tablecolumns{10}
\tablewidth{0pc}
\tablecaption{Imaging Observations}
\tablehead{
\colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{$t_{exp}$}\\
\colhead{Date}
& \colhead{Telescope}
& \colhead{Instrument}
& \colhead{Filter}
& \colhead{(sec)}
}
\startdata
2000 Apr & P60 & CCD13 & $g$ & 7200 \\
2000 Apr & P60 & CCD13 & $r$ & 3600 \\
2000 Apr & P60 & CCD13 & $i$ & 3000 \\
2000 Jul & P60 & CCD13 & $g$ & 5400 \\
2004 Aug/Sep & HST & ACS/WFC & F606W & 22516 \\
2004 Aug/Sep & HST & ACS/WFC & F814W & 28144 \\
\enddata
\label{tab_obsdata}
\end{deluxetable}
A visual inspection of the ACS images, undertaken with the goal of
evaluating the properties of the galaxies surrounding B1608+656,
revealed two objects with lens-like morphologies. Each consists of a
reddish early-type galaxy with a nearby blue arc or multiple blue
blobs, similar to other lenses found in HST imaging
\citep[e.g.,][]{ratnatunga,goodslens,blakeslee,slacs1}. The two
lens candidates, ACS J160919+6532 and ACS J160910+6532 are shown in
Figure~\ref{fig_color}. For simplicity, the two lens candidates will
hereafter be referred to as ``Fred'' and ``Ginger'', respectively. In
Fig.~\ref{fig_fc} we show a larger field of view that includes the two
lensing candidates and B1608+656. Both lens candidates are in close
proximity to B1608+656 on the sky:
``Fred''\ lies $\sim 36$\arcsec\ to the northeast, whereas
``Ginger''\ is $\sim 37$\arcsec\ to the north-northwest. The
coordinates of the lens candidates are given in Table
\ref{tab_lenscoords}.
\begin{deluxetable}{lccc}
\tabletypesize{\scriptsize}
\tablecolumns{4}
\tablewidth{0pc}
\tablecaption{Lens System Coordinates}
\tablehead{
\colhead{Name}
& \colhead{RA (J2000)}
& \colhead{Dec (J2000)}
& \colhead{$z_{l}$}
}
\startdata
``Fred''\ & 16 09 18.760 & $+$65 32 49.72 & 0.6321 \\
``Ginger''\ & 16 09 10.292 & $+$65 32 57.38 & 0.4264 \\
\enddata
\label{tab_lenscoords}
\end{deluxetable}
``Fred''\ consists of a red spheroidal galaxy with two blue candidate
lensed images to the north and south. The blue image to the north
is extended in an east/west direction, while the southern
blue image is both fainter and covers a smaller angular
size on the sky. The sizes and surface brightnesses of
the two blue images are consistent with gravitational lensing. Their
separation is $\sim$1\farcs5.
The second lens candidate, ``Ginger'', is a bright red elliptical with an
extended blue gravitational arc to the north east. The narrow arc-like
feature appears to curve toward the red galaxy, which is consistent
with lensing. There is also evidence of substructure in the arc, which
is presumably due to clumps of star formation. No obvious counter-image
can be visually identified in the color map. However, the
galaxy-subtracted residuals (Section~4) do show a faint possible
counter-image.
Fitting de Vaucouleurs profiles to the lens galaxies -- after masking
the lensed features -- yields the parameters listed in
Table~\ref{tab_lensprops} \citep[for a description of the procedure
see, e.g.,][]{T06}. Observed quantities are transformed into rest
frame quantities as described in \citet{T01}, yielding effective radii
and surface brightnesses of R$_{\rm e}$=$4.76\pm0.48$ and
$3.60\pm0.36$ kpc and SB$_{\rm e}$=$20.42\pm0.05$ and $20.81\pm0.05$
mag arcsec$^{-2}$ for ``Fred'' and ``Ginger'', respectively. These values
were calculated assuming that $h = 0.7$. Correcting for evolution to
$z=0$ adopting $d\log (M/L_{\rm
B})/dz=-0.72\ \pm0.04$ \citep{T05}, and assuming that the lens
galaxies obey the Fundamental Plane \citep[FP;][]{D87,DD87}
relationship yields estimates for the central velocity dispersions of
$\sigma_{\rm FP}$=$180\pm51$ and $142\pm32$ km\ s$^{-1}$\ for ``Fred''
and ``Ginger'', respectively. The uncertainty on $\sigma_{\rm FP}$ is
dominated by the uncertainty in the evolutionary correction and in the
intrinsic thickness of the FP (assumed to be 0.08 in $\log {\rm
R}_{\rm e}$).
\begin{figure}[!t]
\plotone{fig2.eps}
\caption{Wider
field of view, showing B1608+656 and the two additional strong lens
candidates in relation to it. The image was obtained through the
F814W filter with the ACS.
\label{fig_fc}}
\end{figure}
\subsection{Palomar 60-inch Telescope}
Ground-based imaging of the B1608+656 field was obtained as part of a
program to investigate the environments of strong gravitational lenses
\citep[e.g.,][]{1608groupdisc}. The observations were conducted
using the Palomar 60-Inch Mayer telescope (P60), with the CCD13
detector. The CCD provides a field of view (FOV) of approximately
13\arcmin$\times$13\arcmin, with a pixel size of 0\farcs379. The data
were taken using the Gunn $g$, $r$, and $i$ filters in 2000 April.
The seeing was approximately 1\farcs2 through the $r$ and $i$ filters,
while it was 1\farcs5 through the $g$ filter. Exposure times are
given in Table~\ref{tab_obsdata}. A further observing session in 2000
July was used to increase the depth of the $g$ band data. In these
observations, the seeing was $\sim$1\farcs2. The conditions during
both observing sessions were photometric. The data were reduced using
standard {\sc iraf} tasks, and then the $g$-band data from the two observing
sessions were combined. Astrometric solutions were computed with the
aid of positions obtained from the USNO-A2.0 catalog \citep{usno} while
photometric solutions were derived from observations of several Gunn
standard stars \citep{gunnstd}. Given the seeing and the faintness of the
background objects, these sources are not resolved in the
P60 imaging. Therefore only total (lens + source) magnitudes are
given in Table~\ref{tab_lensprops}.
\ifsubmode
\begin{deluxetable}{lccccccccc}
\else
\begin{deluxetable*}{lccccccccc}
\fi
\tabletypesize{\scriptsize}
\tablecolumns{10}
\tablewidth{0pc}
\tablecaption{Lens System Photometric Properties}
\tablehead{
\colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{R$_{\rm e,F606W}$\tablenotemark{b}}
& \colhead{R$_{\rm e,F814W}$\tablenotemark{b}}
& \colhead{}
& \colhead{}\\
\colhead{Name}
& \colhead{$g_{tot}$\tablenotemark{a}}
& \colhead{$r_{tot}$\tablenotemark{a}}
& \colhead{$i_{tot}$\tablenotemark{a}}
& \colhead{F606W$_{\rm l}$\tablenotemark{b}}
& \colhead{F814W$_{\rm l}$\tablenotemark{b}}
& \colhead{(\arcsec)}
& \colhead{(\arcsec)}
& \colhead{F606W$_{\rm s}$\tablenotemark{c}}
& \colhead{F814W$_{\rm s}$\tablenotemark{c}}
}
\startdata
``Fred''\ & 22.5 & 21.6 & 21.0 & 21.9 & 20.3 & 0.62 & 0.69 & 23.9 & 23.1 \\
``Ginger''\ & 22.1 & 20.8 & 20.4 & 21.1 & 20.0 & 0.64 & 0.66 & 24.6 & 24.0 \\
\enddata
\label{tab_lensprops}
\tablenotetext{a}{Magnitudes are the ``MAG\_AUTO'' magnitudes returned by
SExtractor, with the default values for the Kron factor and minimum radius.}
\tablenotetext{b}{Magnitudes and effective radii of the lensing galaxy
from the best fit
de Vaucouleurs model; not corrected for Galactic extinction}
\tablenotetext{c}{Brighter of the two lensed images.}
\ifsubmode
\end{deluxetable}
\else
\end{deluxetable*}
\fi
\section{Spectroscopy \& Redshifts}
Several attempts to obtain the redshifts of the lenses and background
sources have been made. A summary of the spectroscopic observations
is given in Table~\ref{tab_specdata}. The first observation, with the
Echellette Spectrograph and Imager \citep[ESI;][]{esi} on the
W. M. Keck II Telescope, was obtained as a part of the general
investigation of the environment of the B1608+656 lens system. The
subsequent observations, with the Low Resolution Imaging Spectrograph
\citep[LRIS;][]{lris} at Keck and the Gemini Multi-object Spectrograph
\citep[GMOS;][]{gmos} on Gemini-North, were targeted specifically at the lens
candidates. The LRIS observations were obtained on subsequent nights
and consisted of one exposure on each lens candidate. Each
observation was taken at the very end of the night, ending after
18-degree twilight, so the blue-side observations were swamped by the sky
emission. In contrast, the GMOS observations of ``Fred''\ were obtained
in excellent conditions, with a dark sky and seeing of $\sim$0\farcs6.
The three GMOS exposures of the system were dithered in the spectral
direction in order to fill in the gaps between the chips.
The LRIS and ESI data were reduced using scripts that provided
interfaces to the standard {\sc iraf} tasks. In addition, a portion of the
ESI data reduction was performed using custom IDL scripts. The GMOS
data were reduced using the {\it gmos} package in {\sc iraf}. The
wavelength calibration for all of the exposures was based on arclamp
spectra obtained adjacent in time to the science observations.
\begin{figure}
\plotone{fig3.eps}
\caption{
Optical spectra of the ``Fred''\ lens candidate, obtained with Keck/ESI
(top) and Gemini/GMOS (bottom). The ESI spectrum has been smoothed
with a boxcar of width 8.3\AA\ (35~pixels), while the GMOS spectrum
has been smoothed with a boxcar of width 9.2\AA\ (5~pixels).
The observed absorption features, as well as the expected location
of the [\ion{O}{2}] emission, have been marked.
\label{fig_spec6}}
\end{figure}
The first lens candidate, ``Fred'', has a redshift of $z_\ell =
0.6321$, based on multiple absorption features (\ion{Ca}{2},
H$\delta$, and the G-band) seen in the ESI and GMOS spectra
(Figure~\ref{fig_spec6}). The ``Fred''\ redshift places it in a small
group that also includes the lensing galaxy in the B1608+656 system
\citep{1608groupdisc}. The ESI observations were obtained before
the system was identified as a lens candidate. Although the position
angle (PA) of the slit was approximately correct for the system
morphology, no trace of the lensed source is seen in the spectrum. On
the other hand, the GMOS spectrum of this object was obtained with a
slit PA chosen specifically to cover both the lensing galaxy and the
lensed source. The brighter of the two blue blobs is clearly seen in
the two-dimensional spectrum and is spatially separated from the lens
galaxy spectrum, allowing a separate spectrum to be extracted for the
lensed image (Figure~\ref{fig_spec6_src}). However, no clear emission
or absorption features, beyond those due to contamination from the
lens galaxy light and imperfect subtraction of the night sky lines,
are seen in the spectrum of the source. Given the blue color of the
lensed source, we expect to see emission lines in the spectrum; the
lack of emission features allows us to place tentative limits on the
redshift of the background source. Emission from [\ion{O}{2}]
$\lambda$3727 shifts out of the range covered by the GMOS spectrum at
a redshift of $z_s = 1.0$, while Ly$\alpha$ emission enters the
spectral range at $z_s = 2.8$. We therefore assume that the
background object falls within this redshift range. For completeness
we have to consider the
possibility that the blue knots are star formation associated with the
lensing galaxy. However, if this were the case, we would expect
strong [\ion{O}{2}] emission at a wavelength of 6083\AA. We do not
see any evidence for [\ion{O}{2}] emission at 6083\AA\ in any of the ``Fred''
spectra and therefore conclude that the arc is not due to star
formation in the lensing galaxy.
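The quoted redshift window follows from simple arithmetic on the rest-frame line wavelengths and the GMOS spectral coverage (4587--7478\,\AA; Table~\ref{tab_specdata}). A minimal check (the Ly$\alpha$ rest wavelength of 1215.67\,\AA\ is the standard value, not stated in the text):

```python
# Redshift window for the "Fred" background source, from the absence of
# emission lines in the GMOS spectrum (coverage from the observation log).
LAMBDA_OII = 3727.0      # [O II] rest wavelength, Angstrom
LAMBDA_LYA = 1215.67     # Ly-alpha rest wavelength, Angstrom (standard value)
GMOS_MIN, GMOS_MAX = 4587.0, 7478.0   # GMOS spectral coverage, Angstrom
Z_LENS = 0.6321

z_oii_max = GMOS_MAX / LAMBDA_OII - 1.0   # [O II] leaves coverage here
z_lya_min = GMOS_MIN / LAMBDA_LYA - 1.0   # Ly-alpha enters coverage here
print(f"[O II] leaves the GMOS range at z_s = {z_oii_max:.2f}")    # ~1.0
print(f"Ly-alpha enters the GMOS range at z_s = {z_lya_min:.2f}")  # ~2.8
# If the blue knots were star formation in the lens itself, [O II]
# would appear at:
print(f"[O II] at z_l = {Z_LENS}: {LAMBDA_OII * (1 + Z_LENS):.0f} A")  # 6083 A
```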
The other lens candidate, ``Ginger'', has a spectrum typical of an
early-type galaxy (Figure~\ref{fig_spec4}). A number of absorption
features, including lines due to \ion{Ca}{2} H and K, H$\delta$, and
the G-band give a redshift of $z_\ell = 0.4264$. Although the slit
PA was chosen to cover both the galaxy and the brightest portion of
the arc, no redshift for the background source was obtained. This may
just be due to the short integration time and bright night sky. The
redshift of the lens galaxy places it in another group detected along
the line of sight to B1608+656. This group has a mean redshift of $z
= 0.426$ \citep{1608groupdisc}.
\ifsubmode
\begin{deluxetable}{lcccccrrrc}
\else
\begin{deluxetable*}{lcccccrrrc}
\fi
\tabletypesize{\scriptsize}
\tablecolumns{10}
\tablewidth{0pc}
\tablecaption{Spectroscopic Observations}
\tablehead{
\colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{}
& \colhead{Wavelength}
& \colhead{Slit Width}
& \colhead{Slit}
& \colhead{$t_{exp}$}
& \colhead{}
\\
\colhead{Date}
& \colhead{Target}
& \colhead{Telescope}
& \colhead{Instrument}
& \colhead{Grating}
& \colhead{Coverage}
& \colhead{(\arcsec)}
& \colhead{PA}
& \colhead{(sec)}
& \colhead{SNR\tablenotemark{a}}
}
\startdata
2001 Jul 23 & ``Fred''\ & Keck II & ESI & 175\tablenotemark{b}
& 3887--10741 & 1.0 & +11.8 & 3600 & 2.5 \\
2005 Apr 12 & ``Ginger''\ & Keck I & LRIS/R & 600/5000
& 4899--7488 & 1.0 & +48.2 & 1800 & 8 \\
2005 Apr 13 & ``Fred''\ & Keck I & LRIS/R & 600/5000
& 4908--7452 & 1.0 & +8.2 & 1800 & 8 \\
2005 May 13 & ``Fred''\ & Gemini-N & GMOS & B600
& 4587--7478 & 1.0 & +8.2 & 4500 & 12 \\
\enddata
\tablenotetext{a}{Value given is the average SNR pix$^{-1}$ in the
region just redward of the \ion{Ca}{2} absorption lines.}
\tablenotetext{b}{Cross-dispersed.}
\label{tab_specdata}
\ifsubmode
\end{deluxetable}
\else
\end{deluxetable*}
\fi
\section{Gravitational Lens Models}
We reconstruct the lensed images and source of both systems to test the
lens hypothesis and assess whether a relatively simple mass model can
explain the observed lens-like features in Figure~1. We adopt a
Singular Isothermal Ellipsoid (SIE) lens mass model \citep{siemodel},
which describes the mass distribution (stellar plus dark
matter) in the inner regions of massive lens-galaxies extremely well
\citep[e.g.,][]{tk_galevol,slacs3}. In addition, we add where necessary
an external shear to account for the group environment
\citep{1608groupdisc}. The modeling is done via a
non-parametric reconstruction method, which is described in more
detail in \citet{tk_galevol} and \citet{lvek04}. We regularize the
solutions somewhat to attain a smoother solution; see
\citet{lvek04} for a proper discussion.
The modeling is conducted on the F606W ACS images, after subtracting
off the emission from the lensing galaxy. The galaxy-subtracted images
are shown in the upper left panels in Figure~\ref{fig_mod6}. The
``Fred''\ image shows clearly the two bright lensed images. In the
``Ginger''\ image, the arc is seen in the top right corner of the
galaxy-subtracted image, while there is a hint of a counter-image in
the lower left corner.
The resulting non-parametric source and lens models are shown in
Figure~\ref{fig_mod6} for both systems. The resulting SIE and
external shear parameters, for the mass models centered on the
brightness peaks of the observed galaxies, are given in
Table~\ref{tab_models}. Note that the PA values are given in the
frame of the image and are rotated $-$93.8$^\circ$ with respect to the
PA on the sky. The bracketed quantities are fixed at the observed
values.
\begin{deluxetable}{ccc}
\tabletypesize{\scriptsize}
\tablecolumns{3}
\tablewidth{0pc}
\tablecaption{Lens Model Parameters}
\tablehead{
\colhead{Parameter}
& \colhead{``Fred''}
& \colhead{``Ginger''}
}
\startdata
$b_{\rm SIE}$ & 0\farcs73 & 0\farcs61 \\
$\theta_{\rm SIE}$ & \nodata & [$-49^\circ$]\tablenotemark{a} \\
$q_{\rm SIE}$ & [1.0]\tablenotemark{a} & 0.84 \\
$\gamma_{\rm ext}$ & 0.056 & \nodata \\
$\theta_{\rm ext}$ & 131$^\circ$ & \nodata \\
\enddata
\tablenotetext{a}{Held fixed at observed value.}
\label{tab_models}
\end{deluxetable}
\section{Discussion\label{sec_disc}}
The first question to address is whether or not the candidates
presented in this paper are actually gravitational lenses. Of course,
a measurement of the redshifts of the background objects would make
the lens hypothesis more secure. The current set of spectra were
obtained with limited observing time or wavelength coverage, or were
observed with the slit at a non-optimal PA. Therefore, a
more dedicated observing campaign may yet yield the redshifts,
especially at shorter wavelengths where the contrast between the lens
and the source is improved \citep[as for HST1543, see][]{tk_galevol}.
The blue colors of the background objects suggest that their spectra
may contain emission lines and thus that it may be possible to obtain
redshifts in spite of the faintness of the objects. Even without
redshifts, however, the system morphologies and the surface
brightnesses of the background objects are consistent with
gravitational lensing, as can be seen from the lens modeling.
\subsection{The lens model for ``Fred''}
Based on several arguments -- besides the similar colors and typical
lens-geometry of the two galaxy-subtracted residual images -- we
strongly believe that ``Fred''\ is a {\em second} strong-lens system in
the field of the B1608+656 lens system. (1) The source brightness
distribution of ``Fred''\ shows structural features that correspond to
the features seen in the higher magnification image, as expected under
the lens hypothesis. However, when mapped onto the second, much
fainter, image, it also matches the somewhat triangular structure of
that image extremely well, with overall residuals between the observed
images and the lens model at an RMS level of $<10^{-2}$. There is no
{\em a priori} reason that this mapping onto the second system should
be successful if the system were not a lens system. (2) The Einstein
radius of $b_{\rm SIE}$=0\farcs73, determined from the best-fit model,
implies an approximate stellar velocity dispersion $\sigma_{\rm SIE}$
between 290 and 210 km\ s$^{-1}$\ for source redshifts between $z_{\rm s}$=1.0
and 2.0, respectively. These values are typical for most lens
galaxies (i.e., around L$_*$) and also agree well with the brightness
of the galaxy and absence of emission lines in the spectra. A direct
measurement of its velocity dispersion and source redshift, however,
can secure this agreement more accurately. (3) The SIE model for
``Fred''\ requires an external shear of $\gamma_{\rm ext}\approx 0.06$
with a PA of 225~degrees. This is the direction of the previously
known lens system, B1608+656, (see Figure~2) and could be caused by its
group environment \citep{1608groupdisc}, although the group parameters
are not well constrained by the current data. (4) The last piece of
(circumstantial in this case) evidence, is the good agreement between
the ellipticity of the stellar light and that of the SIE mass model.
This correlation is typically very strong \citep[RMS of $\sim$0.1 in
$q_*/q_{\rm SIE}$; see][]{slacs3} for elliptical-galaxy systems with a
significant stellar mass component inside the Einstein radius.
The morphology of the lensed source is unusual. However, given its
blue color and presumed (i.e., relatively high) redshift one might
expect the emission from such a galaxy to be dominated by knots of
star formation. In fact, the source shape is similar to several of
the reconstructed lensed sources in the SDSS Lens ACS Survey (SLACS)
sample of \citet{slacs1}. Generally speaking, the increased abundance
of peculiar and irregular galaxies at high redshifts and faint
magnitudes is well established from deep
surveys \citep[e.g.,][]{HDFmorph}. Once again, the facts that the
unusual morphology is seen in both components and that the lens model
so clearly maps the brighter image into the fainter one are strong
arguments in favor of the lensing hypothesis.
The projected mass of the lensing galaxy, within the Einstein ring
radius, is given by
$$
M_E = \frac{c^2}{4 G} \frac{D_\ell D_s}{D_{\ell s}} \theta_E^2
$$
where $\theta_E$ is the Einstein ring radius in angular units and
$D_\ell$, $D_s$, and $D_{\ell s}$ are the angular diameter distances
to the lens, to the background source, and between the lens and the
background source, respectively. The Einstein ring radius is given
above, and corresponds to half the separation between the two lensed
images. The derived mass of the lensing galaxy is $6.9 \times 10^{10}
(D_s / D_{\ell s}) h^{-1} M_\odot$. For source redshifts between 1.0
and 2.0, this corresponds to masses between 2.2 and $1.1 \times
10^{11} h^{-1} M_\odot$. Adopting the surface photometry derived in
\S~2.1, this corresponds to a $B$-band mass-to-light ratio inside the
cylinder of radius equal to the Einstein Radius of 13.6 $h$ (6.8 $h$) in
solar units for $z_{\rm s}=1.0$ (2.0). These values are consistent
with those found for other lens galaxies \citep[e.g.,][]{tk_galevol},
lending further support to the lensing hypothesis for ``Fred''.
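The chain from Einstein radius to projected mass and SIE velocity dispersion can be checked numerically in a few lines. The sketch below is ours: it assumes a flat cosmology with $\Omega_m=0.3$ (the paper only fixes $h=0.7$; all distances are in $h^{-1}$ units) and uses the standard SIE relation $b_{\rm SIE} = 4\pi(\sigma/c)^2 D_{\ell s}/D_s$ for the dispersion; the function and constant names are illustrative.

```python
import math

# Flat Lambda-CDM; Omega_m = 0.3 is an assumption (the paper fixes only h).
OMEGA_M = 0.3
D_H = 2997.9          # Hubble distance c/H0 in h^-1 Mpc
C = 2.998e8           # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22       # metres per Mpc
MSUN = 1.989e30       # solar mass, kg
ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def comoving_distance(z, steps=1000):
    """Line-of-sight comoving distance in h^-1 Mpc (composite Simpson)."""
    inv_E = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1 + zp)**3 + 1 - OMEGA_M)
    h = z / steps
    s = inv_E(0.0) + inv_E(z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * inv_E(i * h)
    return D_H * s * h / 3.0

def ang_diam(z1, z2=None):
    """Angular diameter distance to z1, or between z1 and z2 (flat space)."""
    if z2 is None:
        return comoving_distance(z1) / (1 + z1)
    return (comoving_distance(z2) - comoving_distance(z1)) / (1 + z2)

def lens_estimates(z_l, z_s, theta_arcsec):
    """Einstein mass (h^-1 M_sun) and SIE dispersion (km/s) from theta_E."""
    theta = theta_arcsec * ARCSEC
    D_l, D_s, D_ls = ang_diam(z_l), ang_diam(z_s), ang_diam(z_l, z_s)
    # Projected mass inside the Einstein radius (equation above)
    M_E = C**2 / (4 * G) * (D_l * D_s / D_ls) * MPC * theta**2 / MSUN
    # SIE dispersion from b_SIE = 4 pi (sigma/c)^2 D_ls / D_s
    sigma = C * math.sqrt(theta / (4 * math.pi) * D_s / D_ls) / 1e3
    return M_E, sigma

# "Fred": z_l = 0.6321, b_SIE = 0.73 arcsec
for z_s in (1.0, 2.0):
    M_E, sigma = lens_estimates(0.6321, z_s, 0.73)
    print(f"z_s = {z_s}: M_E ~ {M_E:.1e} h^-1 M_sun, "
          f"sigma_SIE ~ {sigma:.0f} km/s")
# z_s = 1.0 gives ~2.1e11 h^-1 M_sun and ~290 km/s; z_s = 2.0 gives
# ~1.2e11 and ~214 km/s, reproducing the quoted values to within rounding.
```

With $z_\ell=0.4264$ and $b=0\farcs61$ the same function returns $\sim 7\times 10^{10}\,h^{-1}\,M_\odot$ and $\sim$205 km\ s$^{-1}$\ for $z_s=1.0$, close to the values quoted below for ``Ginger''.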
The velocity dispersion estimated via the FP in \S~2.1 provides a
further consistency check on the lensing interpretation, or an
estimate of the redshift of the background source if the lensing
hypothesis is accepted \citep[e.g.,][]{Ko00}. In fact, empirical
evidence suggests that at scales comparable to the effective radius,
the ratio of stellar to SIE velocity dispersion, $f_{\rm
SIE}=\sigma_*/\sigma_{\rm SIE}$, is close to unity
\citep[e.g.,][]{Ko00,vdv03,tk_galevol,T06,slacs3}.
The estimated velocity dispersion from the FP ($\sigma_{\rm
FP}=180\pm51$ km\ s$^{-1}$) is consistent with that of the best fitting SIE
for plausible $z_s$, consistent with the lensing
hypothesis. Conversely, adopting the lens hypothesis, $\sigma_{\rm
FP}$ would imply $z_s>1.52$ for $f_{\rm SIE}=1.0\pm0.1$.
\begin{figure}
\plotone{fig4.eps}
\caption{
Optical spectrum of the ``Fred''\ background object, obtained with
GMOS. All of the marked features are associated with the lensing
galaxy. The emission lines correspond to regions in which bright
night-sky lines have been imperfectly subtracted. No clear emission
lines from the background source are seen.
\label{fig_spec6_src}}
\end{figure}
\begin{figure}
\plotone{fig5.eps}
\caption{
The optical spectrum of the ``Ginger''\ lens candidate taken with LRIS
on the Keck Telescope.
The data have been smoothed with a boxcar of width 5
pixels, corresponding to 6.25~\AA. The spectrum is dominated by the
emission from the lens, whose spectral shape is consistent with an
early-type galaxy. The redshift of the lens is $z=0.4264$. There
is no evidence of ongoing star formation (the expected position of
the [\ion{O}{2}] emission line is marked).
\label{fig_spec4}}
\end{figure}
\begin{figure*}
\plottwo{fig6a.eps}{fig6b.eps}
\caption{{\bf
(Left)} Model results for the ``Fred''\ lens candidate. Panels show
observed data with lensing galaxy subtracted (top left), the
reconstructed lensed image (top right), the reconstructed source
(bottom left), and the residuals between observed and model lensed
emission (bottom right). The panels are rotated by $-93.8^\circ$,
so that the approximate orientation is east up and north to the
right.
{\bf (Right)} Idem, but for the ``Ginger''\ lens candidate.
\label{fig_mod6}}
\end{figure*}
\subsection{The lens model for ``Ginger''}
This system is significantly less constrained because the presumed
counter-image is very faint and the brightness distribution of the
images lacks very distinct structure. Due to the lack of constraints,
only the lens strength ($b_{\rm SIE}$) and ellipticity ($q_{\rm SIE}$)
are varied in the modeling, and no external shear is used. The
resulting reconstructed source consists of two features that
correspond to those seen in the brighter of the two (lensed) images.
Even though both the geometry and the relative brightnesses of the
images can be fit by a model with several free parameters, the models
are very tentative. The Einstein radius of $b$=0\farcs61, determined
from the best-fit model, implies an approximate stellar velocity
dispersion between 205 and 180 km\ s$^{-1}$\ for source redshifts between
$z_{\rm s}$=1.0 and 2.0, respectively. Following the same arguments as
for ``Fred'' we can obtain consistency checks or an estimate for $z_{\rm
s}$, by comparing the results of the lens model with the surface
photometry. The Einstein radius implies a mass within the cylinder of
7.3 (5.4)$\times10^{10}$ $h^{-1}$ M$_{\odot}$ for $z_{\rm s}=1.0$
(2.0). This corresponds to a $B$-band mass-to-light ratio of 11.4 $h$
(8.4 $h$) in solar units for $z_{\rm s}$=1.0 (2.0), which is
consistent with typical values found for other lens galaxies at
similar redshift
\citep{slacs3}. The velocity dispersion implied by the FP,
$\sigma_{\rm FP}=142\pm32$km\ s$^{-1}$, is somewhat (but not significantly for the
higher source redshift range) smaller than that obtained from the lens
model for $z_s$ in the range 1--2. Thus, under the lens hypothesis
either $z_s>2.2$ is required ($z_s>3$ for $f_{\rm SIE}=1.0\pm0.1$) or
$f_{\rm SIE}<1$, possibly indicating extra convergence from the
environment of ``Ginger''. None of these arguments appear conclusive as to
the lensing nature of ``Ginger''. Whether ``Ginger''\ is a {\sl third} strong
lens in the field of B1608+656 will probably require a direct
measurement of the source redshift and a more definitive detection
of the counter-image.
\section{Summary \& Conclusions}
Our investigations have shown that the single ACS pointing centered on
B1608+656 contains two additional strong lens candidates. This result
implies that there are one to two additional lenses in a
$\sim$10~arcmin$^2$ area of the sky, giving {\em a posteriori} lensing
rates of 0.07--0.46 arcmin$^{-2}$ if only ``Fred'' is a real lens, and
0.14--0.59 arcmin$^{-2}$ if both of the candidates are real lenses
\citep[68\% limits assuming Poisson statistics;][]{gehrels86}.
These rates should be contrasted with the results of other HST
lens-search campaigns, which have found lower lensing rates. For
example, 10 lens candidates, two of which have been confirmed, were
found in the $\sim$600~arcmin$^2$ of the Medium Deep Survey
\citep{ratnatunga}, for a lensing rate of $\leq$0.02 arcmin$^{-2}$.
Also, the search in the Great Observatories Origins Deep Survey
(GOODS) ACS data by
\citet{goodslens} resulted in six lens candidates in $\sim$300
arcmin$^2$, once again giving a lensing rate of at most $\sim$0.02
arcmin$^{-2}$.
In hindsight, it might not be surprising that the lensing rate in the
B1608+656 field is an order of magnitude higher than those in the
larger surveys, although the effect of small number statistics should
not yet be discounted. Qualitatively this is easy to understand,
because the field that is being imaged in these observations is not a
random line of sight. First, the targeted field is already known to
contain a massive early-type lens galaxy. These galaxies often are
found in dense environments such as groups and clusters
\citep[e.g.,][]{morphdensity}. Spectroscopic investigations of
this particular field have revealed the presence of at least three
galaxy groups along the line of sight to the
B1608+656 lens system, including a group that is physically associated
with the lensing galaxy \citep{1608groupdisc}. Second, the lensed source
in the B1608+656 system is itself a massive early-type galaxy
exhibiting AGN activity in the radio domain
\citep{stm1608,zs1608}. Both as a massive galaxy and as a
radio source \citep[e.g.,][]{ASradiogroups}, the background source can
be expected to reside in an overdense environment. The lensed object
in the B1608+656 system is at a redshift of $z_s = 1.394$
\citep{zs1608}. If the background sources in ``Fred''\
and ``Ginger''\ were at the same redshift, their angular separations
from the B1608+656 system would correspond to $\sim 210
h^{-1}$ kpc, not unusual for a group of galaxies. Third, by obtaining
much deeper than normal space-based imaging of this field, we have
started to pick up the ubiquitous population of faint blue galaxies.
Depending on the steepness of the luminosity function of this
population of galaxies, magnification bias can lead to a higher
lensing rate than expected based on the density of galaxies at the
limiting magnitude of the imaging \citep{slacs1}.
Being quantitative about the expected lensing rate is rather more
difficult because it depends on many unknown factors such as the
redshift and mass distributions of the potential lensing galaxies and
the redshift distribution of the sources, and thus is beyond the scope
of this paper. However, to test the plausibility of the qualitative
argument, we take a simple path to estimate the expected lensing rate
in a deep ACS image such as this one. Our assumptions are that (1) only
early-type galaxies will act as lensing galaxies and (2)
all of the lenses will have lensing cross-sections of $\sim$1 arcsec$^2$.
The lensing cross-section is estimated by taking a circular
area with a diameter equal to the typical image
separation for the lenses discovered by the Cosmic Lens All-Sky Survey
\citep{class2}. We define the early-type galaxies as luminous
(F814W$<$21.5) red ((F606W - F814W)$ >$1.0) galaxies that also have
morphologies typical of early types (i.e., excluding galaxies with
clear disk-like structure). As a sanity check, all three of the
lensing galaxies in this field satisfy the above criteria. The
magnitude, color, and morphology cuts yield a conservative estimate of
14 potential lenses. To test the validity of our cuts, we extended
them to one magnitude fainter and one magnitude bluer. Of the 30
additional galaxies that were thus included, only two satisfied the
morphology criterion. Therefore, we assume that our cuts have located
the vast majority of the luminous ellipticals in the field. The
resulting total cross-section for lensing in this field is $\sim$16
arcsec$^2$. This cross-section must be compared to the density of
faint blue galaxies. The completeness limit for the F606W imaging that
is presented in this paper is F606W$\sim$26.5\footnote{Here the
magnitudes are in the AB system, in order to compare to the results
of \citet{faintacsgals}.}. Assuming a typical magnification factor
from strong lensing of a few, which is what we find from the lens
models, thus requires a knowledge of the integrated number density of
galaxies with F606W$<$28. We use the corrected number counts
of \citet{faintacsgals} to obtain an integrated number density of
$\sim 2 \times 10^6$ galaxies per square degree or $\sim$0.2 galaxies
per square arcsecond. The combination of this surface density with
the approximate lensing cross-section in the field yields an {\em a
posteriori} expectation value of $3.2 \pm 1.7$ lenses, given our
assumptions. Therefore, it appears quite likely that two additional
lenses would be found in these images.
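The arithmetic of this estimate is compact enough to spell out. We read the $\sim$16 arcsec$^2$ cross-section as the 14 galaxies from the nominal cuts plus the 2 recovered by the extended cuts, each at the assumed $\sim$1 arcsec$^2$; that reading, and the Poisson error bar, are our assumptions.

```python
import math

# A-posteriori expected number of lenses in the B1608+656 ACS field,
# using the counts and densities quoted in the text.
n_potential = 14 + 2    # nominal cuts + extended-cut additions (our reading)
sigma_per_lens = 1.0    # assumed lensing cross-section per lens, arcsec^2
sigma_total = n_potential * sigma_per_lens   # ~16 arcsec^2
density = 0.2           # faint blue galaxies per arcsec^2 down to F606W < 28
expected = sigma_total * density
print(f"expected lenses: {expected:.1f} +/- {math.sqrt(expected):.1f}")
# -> expected lenses: 3.2 +/- 1.8 (the text quotes 3.2 +/- 1.7)
```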
We conclude that an efficient method to search for new gravitational
lenses is to obtain deep images of the fields surrounding known strong
lenses. This method takes advantage of the enhancement of the
line-of-sight number densities of both the potential lenses and
potential sources. We note that this is more an effect of biased
galaxy formation than a bias due to the enhancement of the lensing
cross-section by the group environment, which is a secondary effect in
lensing statistics \citep[e.g.,][]{keeton_group, 1608groupdisc}.
We also note that the results presented here suggest that clustering
of sources and lenses should be taken into account when comparing
observed and predicted lensing statistics in order, e.g., to place
limits on cosmological parameters.
\acknowledgments
These observations would not have been possible without the expertise
and dedication of the staffs of the Palomar and Keck observatories.
We especially thank Karl Dunscombe, Grant Hill, Jean
Mueller, Gary Puniwai, Kevin Rykoski, Gabrelle Saurage, and
Skip Staples.
CDF and JPM acknowledge support under HST program \#GO-10158. Support
for program \#GO-10158 was provided by NASA through a grant from the
Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Inc., under
NASA contract NAS 5-26555.
This work is supported in part by the European Community's Sixth
Framework Marie Curie Research Training Network Programme, Contract
No. MRTN-CT-2004-505183 `ANGLES'.
Based in part on observations made with the NASA/ESA Hubble Space
Telescope, obtained at the Space Telescope Science Institute, which is
operated by the Association of Universities for Research in Astronomy,
Inc., under NASA contract NAS 5-26555. These observations are
associated with program
\#GO-10158.
Some of the data presented herein were obtained at the W. M. Keck
Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California, and
the National Aeronautics and Space Administration. The Observatory was
made possible by the generous financial support of the W.M. Keck
Foundation. The authors wish to recognize and acknowledge the very
significant cultural role and reverence that the summit of Mauna Kea
has always had within the indigenous Hawaiian community. We are most
fortunate to have the opportunity to conduct observations from this
mountain.
Based on observations obtained at the Gemini Observatory, which is
operated by the Association of Universities for Research in Astronomy,
Inc., under a cooperative agreement with the NSF on behalf of the Gemini
partnership: the National Science Foundation (United States), the Particle
Physics and Astronomy Research Council (United Kingdom), the
National Research Council (Canada), CONICYT (Chile), the Australian
Research Council (Australia), CNPq (Brazil) and CONICET (Argentina).
\ifsubmode
\newpage
\fi
\section{Introduction}
Light--cone (LC) dominated hard scattering processes, like deep
inelastic scattering
and deeply virtual Compton scattering, and various hadronic
wave functions, e.g.,~vector meson amplitudes, appearing in (semi) exclusive
processes are most effectively described with the help of the {\em nonlocal}
light--cone expansion \cite{AZ}. Thereby, the {\em same} nonlocal LC--operator,
and its anomalous dimension, is related to {\em different} phenomenological
distribution amplitudes, and their $Q^2$--evolution kernels \cite{MRGDH}.
Growing experimental precision requires the consideration of higher twist
contributions. Therefore, it is necessary to decompose the nonlocal
LC--operators according to their twist content.
Unfortunately, `geometric' twist $(\tau) =$ dimension $(d)-$ spin $(j)$,
introduced for the {\em local} LC--operators \cite{GT} cannot be
extended directly to nonlocal LC--operators. Therefore, motivated
by LC--quantization
and by kinematic phenomenology, the notion of `dynamic' twist $(t)$ was
introduced, counting powers $Q^{2-t}$ of the momentum transfer \cite{JJ}.
However, this notion is defined only for
{\em matrix elements} of operators, is {\em not} Lorentz invariant and
its relation to `geometric' twist is complicated, cf.~\cite{BBKT}.
Here, we carry out a systematic procedure to uniquely decompose nonlocal
LC--operators into harmonic operators of well defined geometric twist,
cf.~Ref.~\cite{GLR}. This will be demonstrated for tensor operators
of low rank, namely (pseudo)scalars, (axial) vectors, and (skew)symmetric tensors.
Thereby, various harmonic tensor operators (in $D= 2h$ space-time dimensions)
are introduced being related to specific infinite series of
symmetry classes which are defined by corresponding Young tableaux.
In the scalar case these LC--operators are series of the well-known harmonic
polynomials corresponding to (symmetric) tensor representations of the
orthogonal group $SO(D)$, cf.~\cite{BT}.
Symmetric tensor operators of rank 2 are considered as an example.
\section{General procedure}
\baselineskip=11pt
Let us generically denote by $O_\Gamma(\ka \xx, \kb \xx)$
with $\xx^2 = 0$ any of the following bilocal light-ray operators,
either the quark operators
$:\overline\psi(\ka \xx) \Gamma U(\ka \xx, \kb \xx) \psi(\kb \xx):$
with $\Gamma=\{1,\gamma_\alpha,\sigma_{\alpha\beta},
\gamma_5\gamma_\alpha,\gamma_5\}$ or the gluon operators
$:{F_\mu}^\rho(\ka \xx) U(\ka \xx, \kb \xx) F_{\nu\rho}(\kb \xx):$
and
${:{F_\mu}^\rho(\ka \xx) U(\ka \xx, \kb \xx)
{\widetilde F}_{\nu\rho}(\kb \xx):}$, where
$\psi(x), F_{\mu\nu}(x)$ and ${\widetilde F}_{\mu\nu}(x)$ are
the quark field, the gluon field strength and its dual, respectively,
and the path ordered phase factor is given by
$U(\ka x, \kb x)={\cal P} \exp\{-\mathrm{i} g
\int^{\ka}_{\kb} d\kappa \,x^\mu A_\mu (\kappa x)\}$
with the gauge potential $A_\mu(x)$.
The general procedure of decomposing these operators
into operators of definite twist consists of the following steps:\\
(1) {\em Expansion} of the nonlocal operators for {\em arbitrary}
values of $x$
into a Taylor series of {\em local} tensor operators
having definite rank $n$ and canonical dimension $d$:
\begin{eqnarray}
\label{GLCEOP}
O_\Gamma(\ka x, \kb x)=\hbox{$\sum_{n=0}^{\infty}$}
(n!)^{-1}{x}^{\mu_1}\ldots{x}^{\mu_n}
O_{\Gamma\,\mu_1\ldots\mu_n}(\ka,\kb);
\end{eqnarray}
the local operators are defined by
(the brackets $(\ldots)$ denote total symmetrization)
\begin{eqnarray}
\label{local}
O_{\Gamma\,\mu_1\ldots\mu_n}(\kappa_1,\kappa_2)
\equiv
\left[\Phi(x)\Gamma
{\sf D}_{(\mu_1}\!\ldots{\sf D}_{\mu_n)}
\Phi(x)\right]\big|_{x=0},
\end{eqnarray}
with generalized covariant derivatives
${\sf D}_\mu(\kappa_1,
\kappa_2)\equiv
\kappa_1 (\overleftarrow{\partial_\mu}-\mathrm{i} g A_\mu)+
\kappa_2 (\overrightarrow{\partial_\mu}+\mathrm{i} g A_\mu).
$
\noindent
(2) {\em Decomposition} of the local operators (\ref{local}) into
tensors being {\em irreducible} under the (ortho\-chronous)
Lorentz group, i.e.~having twist $\tau = d-j$. These are
{\em traceless} tensors being classified
according to their {\em symmetry class}. The latter
is determined by a (normalized) {\em Young-Operator}
${\cal Y}_{[m]}={f_{[m]}}{\cal QP}/{n!}$,
where $[m]= (m_1\geq m_2\geq\ldots\geq m_r)$ with
$\sum^r_{i=1}m_i = n$ defines a Young pattern, and
${\cal P}$ and ${\cal Q}$ denote symmetrizations and
antisymmetrizations w.r.t.~the horizontal and vertical
permutations of a standard tableau, respectively;
$f_{[m]}$ is the number of standard tableaux of $[m]$.
This decomposition can be done for any dimension $D = 2h$
of the complex orthogonal group $SO(2h, C)$.
Then, the allowed Young patterns are restricted by
$\ell_1+\ell_2 \leq 2h~ (\ell_i$: length of columns of $[m]$).
Since the operators (\ref{local}) are totally symmetric w.r.t.
$\mu_i$'s only the following symmetry types,
depending
on the additional tensor structure $\Gamma$,
are of relevance (lower spins $j$ are related to trace terms):\\
$
\begin{array}{lllll}
\hspace{.5cm}
\rm{(i)}&[m] = (n)&j = n, n-2, n-4,..., & f_{(n)} &= 1,\\
\hspace{.5cm}
\rm{(ii)}&[m] = (n,1)&j = n, n-1, n-2,..., & f_{(n,1)}&= n,\\
\hspace{.5cm}
\rm{(iii)}&[m] = (n,1,1)&j = n, n-1,n-2,..., & f_{(n,1,1)} &= n(n+1)/2,\\
\hspace{.5cm}
\rm{(iv)}&[m] = (n,2)&j = n, n-1,n-2,..., & f_{(n,2)} &= (n-1)(n+2)/2.\\
\end{array}
$
\vspace*{0cm}
\noindent
If multiplied by ${x}^{\mu_1}\ldots{x}^{\mu_n}$ according to
Eq.~(\ref{GLCEOP}) these representations may be characterized by
harmonic tensor polynomials of order $n$, cf.~Sect. 3 below.\\
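Since $f_{[m]}$ counts standard Young tableaux, the values listed in (i)--(iv) can be cross-checked mechanically with the hook length formula; for the shape $(n,2)$ the hook length formula gives $(n-1)(n+2)/2$. A short sketch in plain Python (not part of the paper; `hook_length_count` is our own helper name):

```python
from math import factorial

def hook_length_count(shape):
    """Number of standard Young tableaux of a partition `shape`
    (weakly decreasing row lengths), via the hook length formula."""
    rows = len(shape)
    boxes = sum(shape)
    prod = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1                                  # boxes to the right
            leg = sum(1 for k in range(i + 1, rows) if shape[k] > j)  # boxes below
            prod *= arm + leg + 1
    return factorial(boxes) // prod

for n in range(2, 8):
    assert hook_length_count((n,)) == 1                         # [m] = (n)
    assert hook_length_count((n, 1)) == n                       # [m] = (n,1)
    assert hook_length_count((n, 1, 1)) == n * (n + 1) // 2     # [m] = (n,1,1)
    assert hook_length_count((n, 2)) == (n - 1) * (n + 2) // 2  # [m] = (n,2)
```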
(3) {\em Resummation} of the infinite series (for any $n$)
of irreducible tensor harmonics of equal twist $\tau$ and symmetry type
${\cal Y}_{\rm [m]}$ to nonlocal {\em harmonic operators of definite twist}.
Thereby, the phase factors may be reconstructed at the expense of (two)
additional integrations over parameters multiplying the $\kappa$--variables.
As a result one obtains a decomposition of the original nonlocal operator
into a series of operators with growing twist, each being defined by subtracting
appropriate traces, cf. Sect. 4 below.\\
(4) {\em Projection} onto the light--cone, $x\rightarrow\xx = x + \eta
{(x\eta)}\big(\sqrt{1 - {x^2}/{(x\eta)^2}} - 1\big),~~ \eta^2=1$,
leads to the required twist decomposition:
\begin{eqnarray}
\label{LCdecomp}
O_\Gamma(\kappa_1 \tilde x,\kappa_2\tilde{x})
~=&
\sum_{\tau_i = \tau_{\rm min}}^{\tau_{\rm max}}
O_\Gamma^{\tau_i}(\kappa_1 \tilde x,\kappa_2\tilde{x}).
\end{eqnarray}
This sum terminates since most of the traces are proportional to
$x^2$ and, therefore, vanish on the light--cone. \\
Let us remark that steps (3) and (4) may be interchanged without
changing the result. In fact, this corresponds to another formalism,
acting on the complex light--cone \cite{BT}, which was used by
Dobrev and Ganchev \cite{DG} for the construction of conformal tensors.
\section{Harmonic tensor polynomials}
The construction of irreducible tensors may start with
traceless tensors $\tl T_{\Gamma(\mu_1 \ldots \mu_n)}$, with
$\Gamma$ indicating additional tensor indices, which afterwards
have to be subjected to the symmetry requirements determined
by the symmetry classes (i) -- (iv).
Let us start with a generic tensor, not necessarily traceless, whose
symmetrized indices are contracted by $x^{\mu_i}$'s:
$T_{\Gamma n}(x) =
x^{\mu_1}\ldots x^{\mu_n} T_{\Gamma (\mu_1 \ldots \mu_n)}$.
The conditions
for scalar, (axial) vector, and (skew)symmetric tensors
to be traceless read:
\begin{eqnarray}
\hspace{-.5cm}
\square \tl T_{\Gamma~n}(x)&=&0,
\\
\hspace{-.5cm}
\partial^\alpha \tl T_{\alpha~n}(x) = 0
\quad{\rm resp.}\quad
\partial^\alpha \tl T_{\alpha\beta~n}(x) =\!\!\! &0&\!\!\! =
\partial^\beta \tl T_{\alpha\beta~n}(x)
\quad{\rm and}\quad
g^{\alpha\beta} \tl T_{\alpha\beta~n}(x) = 0.
\end{eqnarray}
The general solutions of these equations in $D=2h$ dimensions are:\\
(1) {\em Scalar harmonic polynomials} (cf.~\cite{VK}):
\begin{eqnarray}
\label{T_harm4}
\tl T_n(x) =
\sum_{k=0}^{[\frac{n}{2}]}
\hbox{\large$\frac{(h+n-k-2)!}{k!(h+n-2)!}$}
\left(\hbox{\large$\frac{-x^2}{4}$}\right)^{\!\!k}
\square^{k}T_n(x)
\equiv
H^{(2h)}_n\!\left(x^2|\square\right)T_n(x)
\end{eqnarray}
(2) {\em Vector harmonic polynomials}:
\begin{eqnarray}
\label{T_vecharm}
\hspace{-.5cm}
\tl T_{\alpha n}(x)
\!\!\!\!&=&\!\!\!\!
\Big\{\!\delta_{\alpha}^{\beta}
\!-\!\hbox{\large$\frac{1}{(h+n-1)(2h+n-3)}$}
\Big(\!(\!h\!-\!2\!+\!x\partial)x_\alpha\partial^\beta
\!-\!\hbox{\large$\frac{1}{2}$}x^2
\partial_\alpha\partial^\beta\Big)\!\Big\}\!
H^{(2h)}_n\!\left(x^2|\square\right)\! T_{\beta n}(x)
\end{eqnarray}
(3) {\em Skew tensor harmonic polynomials}:
\begin{eqnarray}
\label{T_tenharm}
\hspace{-.5cm}
\lefteqn{
\tl T_{[\alpha\beta]n}(x)
=
\Big\{\!\delta_{[\alpha}^\mu\delta_{\beta]}^\nu
+\hbox{\large$\frac{2}{(h+n-1)(2h+n-4)}$}
\Big(\!(\!h\!-\!2\!+\!x\partial)
x_{[\alpha}\delta_{\beta]}^{[\mu}\partial^{\nu]}
-\hbox{\large$\frac{1}{2}$}x^2
\partial_{[\alpha}\delta_{\beta]}^{[\mu}\partial^{\nu]}\Big)}
\nonumber\\
\hspace{-.5cm}
&\qquad\qquad
-\hbox{\large$\frac{2}{(h+n-1)(2h+n-4)(2h+n-2)}$}
x_{[\alpha}\partial_{\beta]}x^{[\mu}\partial^{\nu]}
\Big\}
H^{(2h)}_n\!\left(x^2|\square\right)
T_{[\mu\nu] n}(x)
\end{eqnarray}
(4) {\em Symmetric tensor harmonic polynomials}:
\begin{eqnarray}
\label{T_symharm}
\hspace{-.5cm}
\lefteqn{
\tl T_{(\alpha\beta)n}(x)
=
\Big\{\delta_{(\alpha}^\mu\delta_{\beta)}^\nu
+a_n g_{\alpha\beta}x^{(\mu}\partial^{\nu)}
-2g_n x_{(\alpha}\delta_{\beta)}^{(\mu}\partial^{\nu)}
-2b_n x_{(\alpha}x^{(\mu}\partial_{\beta)}\partial^{\nu)}
+c_n x^2 \delta_{(\alpha}^{(\mu}\partial_{\beta)}\partial^{\nu)}
}
\nonumber\\
\hspace{-.5cm}
&\qquad\qquad
-d_n x^2 x_{(\alpha}\partial_{\beta)}\partial^\mu\partial^\nu
+e_n x_\alpha x_\beta \partial^\mu \partial^\nu
+f_n x^2 x^{(\mu}\partial^{\nu)}\partial_\alpha\partial_\beta
-k_n x^2 g_{\alpha\beta} \partial^\mu\partial^\nu
\nonumber\\
\hspace{-.5cm}
&\qquad\quad\;\;\;
+h_n (x^4/4) \partial_\alpha\partial_\beta\partial^\mu\partial^\nu \Big\}
\Big(\delta_\mu^\rho\delta_\nu^\sigma
- (2h)^{-1} g_{\mu\nu}g^{\rho\sigma}\Big)
H^{(2h)}_n\!\left(x^2|\square\right)
T_{(\rho\sigma) n}(x)
\end{eqnarray}
where, up to the common factor
$[(h-1)(h+n)(h+n-1)(2h+n-2)(2h+n-3)]^{-1}$,
the coefficients are given by\\
$
\begin{array}{lll}
a_n = (h+n-1)(h+n-2)(2h+n-3), &\qquad
b_n = (h+n-2)(2h+n-3),\\
c_n = [h(h+n-1) -n+2](2h+n-3), &\qquad
d_n = [h(h+n-4)-2(n-3)],\\
\end{array}
$
\\%vspace
$
\begin{array}{lll}
e_n = [h(h+n-2) - n+3](h+n-2), &\qquad
f_n = (2h+n-3), \\
g_n = [h(h+n-1)-n+1](h+n-2)(2h+n-3), &\qquad
h_n = (h-3), \\
k_n = [h(h+n-3) -2n+3]+(h+n)(h+n-1)/2.&\\
\end{array}
$\\
These coefficients look quite complicated. However, in $D=4$
dimensions they simplify considerably. The extension to tensors of
higher rank is obvious, but cumbersome. Up to now no simple
building principle
has been found, and a general theory, to the best of our knowledge,
does not exist.
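As an independent sanity check, the scalar and vector projectors of Eqs.~(\ref{T_harm4}) and (\ref{T_vecharm}) can be verified symbolically at low degree. A sketch in Python/SymPy for $D=4$ ($h=2$), written in Euclidean signature since the identities are purely algebraic in $x^2$ and $\square$; all helper names are ours, and this is a spot check, not part of the paper:

```python
import itertools
import sympy as sp

# D = 2h = 4 Euclidean coordinates; the projector identities are algebraic
# in x^2 and the Laplacian, so the signature does not matter for this check.
x = sp.symbols('x1:5')
h = 2
x2 = sum(xi**2 for xi in x)

def lap(e):
    return sum(sp.diff(e, xi, 2) for xi in x)

def harm(T, n):
    """H^{(2h)}_n(x^2|box) T for a homogeneous degree-n polynomial T."""
    out, boxk = 0, T
    for k in range(n // 2 + 1):
        c = sp.factorial(h + n - k - 2) / (sp.factorial(k) * sp.factorial(h + n - 2))
        out += c * (-x2 / 4)**k * boxk
        boxk = lap(boxk)          # next power of the Laplacian
    return sp.expand(out)

# (1) scalar case: the projection of a generic degree-n polynomial is harmonic
for n in (2, 3):
    T = 0
    for idx in itertools.combinations_with_replacement(range(4), n):
        term = sp.Symbol('t_' + ''.join(map(str, idx)))
        for i in idx:
            term *= x[i]
        T += term
    assert sp.expand(lap(harm(T, n))) == 0

# (2) vector case, n = 2: generic T_{alpha,(mu nu)}, symmetric in mu, nu
n = 2
T = []
for a in range(4):
    comp = 0
    for m, k in itertools.combinations_with_replacement(range(4), 2):
        comp += sp.Symbol('t_%d%d%d' % (a, m, k)) * x[m] * x[k]
    T.append(comp)
HT = [harm(comp, n) for comp in T]
v = sum(sp.diff(HT[b], x[b]) for b in range(4))        # partial^beta HT_beta
c = sp.Rational(1, (h + n - 1) * (2 * h + n - 3))
Ttl = []
for a in range(4):
    # operator (h - 2 + x.del) x_alpha del^beta, then the x^2 trace term
    t1 = (h - 2) * x[a] * v + sum(x[m] * sp.diff(x[a] * v, x[m]) for m in range(4))
    t2 = sp.Rational(1, 2) * x2 * sp.diff(v, x[a])
    Ttl.append(sp.expand(HT[a] - c * (t1 - t2)))

assert sp.expand(sum(sp.diff(Ttl[a], x[a]) for a in range(4))) == 0  # transverse
assert all(sp.expand(lap(comp)) == 0 for comp in Ttl)                # harmonic
```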
\section{Twist decomposition: An example}
Except for the scalar case, these harmonic tensor
polynomials are not irreducible with respect to $SO(2h, C)$. Irreducible
polynomials are obtained by (anti)symmetrization according to the
Young patterns (i) -- (iv); this may be achieved by acting on them with
corresponding differential operators, multiplied by the weights $f_{[m]}$.
Afterwards, in order to get the (local) operators on the light--cone,
one takes
the limit $x \rightarrow \xx$. The nonlocal LC--operators are obtained
by resummation according to Eq.~(\ref{GLCEOP}).
For the quark operators that procedure has been already considered in
\cite{GLR}. Here we shall consider the gluon operators being tensors
of rank 2. The relevant Young patterns are given by (i) -- (iv).
Of course, in physical applications they sometimes also appear
multiplied by $x_\alpha$ and/or $x_\beta$; then higher patterns
are irrelevant.
Let us consider totally symmetric tensors
leading to much simpler symmetric harmonic tensor
polynomials than those given by the general expression (\ref{T_symharm}):
\begin{align}
\label{sy_ha_te_po}
T^{\rm (i)}_{(\alpha\beta) n}(x)=
&\,\hbox{\large$\frac{1}{(n+2)(n+1)}$}\partial_\alpha\partial_\beta
\tl T_{n+2}(x)
=\,\hbox{\large$\frac{1}{(n+2)(n+1)}$}
\sum_{k=0}^{[\frac{n+2}{2}]}
\hbox{\large$\frac{(-1)^k(h+n-k)!}{4^kk!(h+n)!}$}
\Big\{
x^{2k}\partial_\alpha\partial_\beta
\nonumber\\
&+
2k x^{2(k-1)}\big(g_{\alpha\beta}+2 x_{(\alpha}\partial_{\beta)}\big)
+ 4k(k-1)x_\alpha x_\beta x^{2(k-2)}\Big\}\square^{k}
T_{n+2}(x).
\end{align}
According to the number of partial derivatives, this sum contains
symmetric tensor, vector and scalar polynomials of lower degree.
Of course, if projected onto the light-cone only those terms survive which
do not depend on $x^2$. They are given by
\begin{align}
\label{sy_te_po(z)}
T^{\rm (i)}_{(\alpha\beta) n}(\xx)
=&\hbox{\large$\frac{1}{(n+2)(n+1)(h+n)(h+n-1)}$}
\mbox{d}_\alpha\mbox{d}_\beta T_{n+2}(\xx),
\end{align}
with the `interior' derivative
$\mbox{d}_\mu=[(h-1+x\partial)\partial_\mu -\frac{1}{2}x_\mu\square]\big|_{x = \xx}$
on the light-cone (cf. Ref.~\cite{BT}); because of $\mbox{d}^2=0$
the conditions of tracelessness on the light-cone,
$\mbox{d}^\alpha T^{\rm (i)}_{(\alpha\beta) n}(\xx)=0=
\mbox{d}^\beta T^{\rm (i)}_{(\alpha\beta) n}(\xx)$ {and}
$g^{\alpha\beta} T^{\rm (i)}_{(\alpha\beta) n}(\xx)=0,
$
are trivially fulfilled.
The local gluon operator having twist $\tau = 2h-2$ is obtained
from the original operator $G_{\alpha\beta}(x) =
\frac{1}{(n+2)(n+1)}\partial_\alpha\partial_\beta G_{n+2}(x)$
by subtracting the traces being of twist $2h$ and $2h+2$. These traces,
corresponding to irreducible representations, are
\begin{align}
\label{yyy}
G^{\rm tw(2h)(i)a}_{(\alpha\beta) n}(\xx)
=& \hbox{\large$ \frac{1}{2(n+2)(n+1)(h+n)}$}g_{\alpha\beta}\square
G_{n+2}(x)\big|_{x=\xx},
\\
G^{\rm tw(2h)(i)b}_{(\alpha\beta) n}(\xx)
=&
\hbox{\large$ \frac{1}{(n+2)(n+1)(h+n)}$}
\Big(x_{(\alpha}\partial_{\beta)}\square
-\hbox{\large$ \frac{1}{2(h+n-2)}$}
x_{\alpha}x_{\beta}\square^2
\Big)G_{n+2}(x)\big|_{x=\xx},
\\
G^{\rm tw(2h+2)(i)a}_{(\alpha\beta) n}(\xx)
=&-\hbox{\large$ \frac{1}{4(n+2)(n+1)(h+n)(h+n-1)}$}
x_{\alpha}x_{\beta}\square^2
G_{n+2}(x)\big|_{x=\xx},
\\
\label{xxx}
G^{\rm tw(2h+2)(i)b}_{(\alpha\beta) n}(\xx)
=&\hbox{\large$ \frac{1}{2(n+2)(n+1)(h+n)(h+n-2)}$}
x_{\alpha}x_{\beta}\square^2 G_{n+2}(x)\big|_{x=\xx}.
\end{align}
For the nonlocal gluon operator
$G_{(\alpha\beta)}(\ka\xx,\kb\xx)=F_{(\alpha}^{\, \rho}(\ka\xx)
U(\ka\xx,\kb\xx) F_{\beta)\rho}(\kb\xx)$
the expressions (\ref{sy_te_po(z)}) may be resummed leading,
if restricted to $D=4$,
to the following one-parameter integral representation
($G(\ka\xx,\kb\xx)\equiv
x^\alpha x^\beta G_{(\alpha\beta)}(\ka\xx,\kb\xx)$):
\begin{align}
\label{Gtw2_gir}
G^{\mathrm{tw2}(\mathrm{i})}_{(\alpha\beta)} (\ka\xx,\kb\xx)
=&
\int_{0}^{1}\mbox{d}\lambda \Big\{(1-\lambda)\partial_\alpha\partial_\beta
-(1-\lambda+\lambda\ln\lambda)
\left(\hbox{\large$\frac{1}{2}$}g_{\alpha\beta}
+x_{(\alpha}\partial_{\beta)}\right)\square
\nonumber\\
&-\hbox{\large$\frac{1}{4}$}\big(2(1-\lambda)+(1+\lambda)\ln\lambda\big)
x_\alpha x_\beta\square^2\Big\}
G(\ka\lambda x,\kb\lambda x)\Big|_{x=\xx}.
\end{align}
Analogous expressions result for (\ref{yyy}) -- (\ref{xxx}).
This completes the twist decomposition of
$G_{(\alpha\beta)}^{\mathrm{(i)}} (\ka\xx,\kb\xx)$. The decomposition
of $G_{\alpha\beta}(\ka\xx,\kb\xx)$ w.r.t.~the other symmetry classes is
much more complicated. A complete presentation will be
given elsewhere.
The authors would like to thank D. Robaschik for
many stimulating discussions.
\vskip -1mm
\parskip=0pt
\baselineskip=10pt
\section{Introduction}\label{intro}
We begin with some notation. Let $[n] = \{ 1,2,3,\cdots, n\}$, which we view as our canonical set of
size $n$. Let $\binom{[n]}{r}$ be the collection of $r$-subsets of $[n]$,
and for $r\leq k$ let $\binom{[k]}{r}$ denote the collection of all $r$-subsets of an
arbitrary set $S\subset [n]$ of size $k$ (so $S$ is not necessarily $ \{ 1,2,3,\cdots, k\}$).
For integers $a < b$ we let $[a,b]$ denote the set of integers $x$ satisfying $a\le x\le b$.
Now let $\mathcal{A}$ and $\mathcal{B}$ be two families of subsets of $[n]$. We say $\mathcal{A}$ is
{\it intersecting} if $A_{1}\cap A_{2} \ne \emptyset$ for all pairs $A_{1}, A_{2}\in \mathcal{A}$. Further $\mathcal{A}$ is {\it nontrivial}
if $\cap_{A\in \mathcal{A}}A = \emptyset$, and is {\it trivial} otherwise.
The pair of families $\mathcal{A}$, $\mathcal{B}$
is {\it cross intersecting} if $A\cap B\ne \emptyset$ for all pairs of sets $A,B$, where $A\in \mathcal{A}$ and $B\in \mathcal{B}$. A
{\it matching} of $\mathcal{A}$ is a collection of sets in $\mathcal{A}$ that are pairwise disjoint.
For $S\subset [n]$
we let $V(S) = \{x\in [n]: x\in S \}$, and we let $V(\mathcal{A}) = \cup_{S\in \mathcal{A}}V(S)$ (the vertex
set of $\mathcal{A}$). We sometimes refer to the members of $\mathcal{A}$ as {\it edges} of $\mathcal{A}$.
We will use standard
graph theoretic or combinatorial notation, as may be found for example in \cite{W}. Additional notation will be defined where it is initially used in the text.
Now define the
{\it Kneser Graph} $K(n,r)$ to be the graph with vertex set $V = \binom{[n]}{r}$,
and edge set $E = \{ vw: v,w\in \binom{[n]}{r}, v\cap w = \emptyset \}$. We can suppose that $n\geq 2r$ since otherwise $K(n,r)$
has no edges. Clearly $K(n,r)$ has $\binom{n}{r}$ vertices, is regular of degree $\binom{n-r}{r}$, and it can be shown that it is both vertex and edge transitive.
The Kneser Graph arises in
several examples; $K(n,1)$ is just the complete graph $K_{n}$ on $n$ vertices, $K(n,2)$ is the complement of the line graph of $K_{n}$,
$K(2n-1,n-1)$ is also known as the odd graph $O_{n}$, and $K(5,2)$ is isomorphic to the Petersen graph. The diameter of $K(n,r)$ was shown to be
$\lceil \frac{r-1}{n-2r} \rceil + 1$ in \cite{Val}, and $K(n,r)$ was shown to be Hamiltonian for $n\geq \frac{1}{2}( 3r + 1 + \sqrt{5r^{2} -2r +1} )$ in \cite{Chen}.
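The definition of $K(n,r)$ and the counts just quoted are easy to verify computationally for small parameters; a minimal sketch in Python (helper names ours), using $K(5,2)\cong$ Petersen graph:

```python
from itertools import combinations
from math import comb

def kneser_graph(n, r):
    """Vertices: r-subsets of [n]; edges: disjoint pairs of vertices."""
    V = [frozenset(c) for c in combinations(range(1, n + 1), r)]
    E = [(v, w) for v, w in combinations(V, 2) if not v & w]
    return V, E

V, E = kneser_graph(5, 2)
assert len(V) == comb(5, 2) == 10
assert len(E) == 15                     # the Petersen graph has 15 edges

degree = {v: 0 for v in V}
for v, w in E:
    degree[v] += 1
    degree[w] += 1
assert set(degree.values()) == {comb(5 - 2, 2)}   # regular of degree C(n-r, r) = 3
```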
A longstanding problem on $K(n,r)$ was Kneser's conjecture; that the chromatic number satisfies
$\chi(K(n,r)) = n -2r + 2$ if $n\geq 2r$ and of course $\chi(K(n,r)) = 1$ otherwise. The upper bound is achieved by a simple
coloring; color an $r$-set by its largest element if this element is at least $2r$, and otherwise color it by $1$. The difficulty was in proving the
corresponding lower bound, and this result was first proved by Lov\'{a}sz \cite{L} using methods of algebraic topology. More elementary, but still topological, proofs were given
by B\'{a}r\'{a}ny \cite{Ba} soon after, and by Dol'nikov \cite{Do} and Greene \cite{Gr} later. A mostly combinatorial proof (still with topological elements) was given by Matou\v{s}ek \cite{Ma}.
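The simple coloring giving the upper bound $n-2r+2$ can likewise be checked mechanically for small cases; a sketch in Python (our helper `kneser_color` implements the rule ``color an $r$-set by its largest element if that is at least $2r$, else by $1$''):

```python
from itertools import combinations

def kneser_color(S, r):
    """Color an r-set by its largest element if that is >= 2r, else by 1."""
    m = max(S)
    return m if m >= 2 * r else 1

n, r = 8, 3
V = [frozenset(c) for c in combinations(range(1, n + 1), r)]
colors = {v: kneser_color(v, r) for v in V}

# proper coloring: disjoint r-sets (i.e. adjacent vertices) never share a color
for v, w in combinations(V, 2):
    if not v & w:
        assert colors[v] != colors[w]

assert len(set(colors.values())) == n - 2 * r + 2   # 4 colors for K(8,3)
```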
Recently some results on a labeling problem relating to $K(n,r)$ appeared in the literature \cite{JPP}. Let $G = (V,E)$ be a graph on $n$ vertices
and $f: V\rightarrow C_{n}$ a one to one map of the vertices of $G$ to the cycle $C_{n}$ on $n$ vertices. Let $|f| = $ min$\{ dist_{C_{n}}(f(u), f(v)): uv\in E \}$, where
$dist_{C_{n}}$ denotes the distance function on $C_{n}$; that is, $dist_{C_{n}}(x,y)$ is the mod $n$ distance between $x$ and $y$ when we view
the vertices of $C_{n}$ as the integers mod $n$. Now let $s(G) = $ max$\{ |f| \}$, where the maximum is taken over all such one to one maps $f$. It is shown
in \cite{JPP} that $s(K(n,2)) = 3$ when $n\geq 6$, that $s(K(n,3)) = 2n-7$ or $2n-8$ for $n$ sufficiently large, and that for fixed $r\geq 4$ and $n$ sufficiently large we have
$\frac{2n^{r-2}}{(r-2)!} - \frac{(\frac{7}{2}r - 2)n^{r-3}}{(r-3)!} - O(n^{r-4})\le s(K(n,r))\le \frac{2n^{r-2}}{(r-2)!} - \frac{(\frac{7}{2}r - 3.2)n^{r-3}}{(r-3)!} + o(n^{r-3}).$
This paper considers the following related well known labeling problem. Let $G = (V,E)$ be a graph on $n$ vertices. Now consider $f: V\rightarrow [1,n]$ a
one to one map, and let $dilation(f) =$ max$\{ |f(v) - f(w)|: vw\in E \}$. Define the {\it bandwidth} $B(G)$ of $G$ to be the minimum possible
value of $dilation(f)$ over all such one to one maps $f$. There is an extensive literature on the bandwidth of graphs and related layout problems (see \cite{DPS} for a survey),
originally motivated by the attempt
to find fast algorithms for matrix operations, and by problems in VLSI design. Our main result is the following.
\begin{theorem} \label{main theorem}
Let $r\geq 4$ be fixed integer. As $n\rightarrow \infty$ we have
$$B(K(n,r)) = \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1} - 2\frac{n^{r-2}}{(r-2)!} + (r + 2)\frac{n^{r-3}}{(r-3)!} + O(n^{r-4}).$$
\end{theorem}
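For intuition on the quantities $dilation(f)$ and $B(G)$ defined above, here is a brute-force evaluation on a tiny graph (our own helper; it is exhaustive over all labelings, so only feasible for very small graphs):

```python
from itertools import permutations

def bandwidth(n_vertices, edges):
    """B(G): minimize max |f(v)-f(w)| over bijections f: V -> {1,...,n}."""
    best = n_vertices
    for perm in permutations(range(1, n_vertices + 1)):
        f = dict(zip(range(n_vertices), perm))
        dilation = max(abs(f[v] - f[w]) for v, w in edges)
        best = min(best, dilation)
    return best

c5 = [(i, (i + 1) % 5) for i in range(5)]   # the 5-cycle
assert bandwidth(5, c5) == 2                # cycles have bandwidth 2
```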
We observe that there is the trivial upper bound $B(K(n,r))\le \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1}$, as follows. Let $\beta(G)$ be
the maximum possible size of an independent set of vertices in any graph $G$ on $N$ vertices.
Then $B(G)\le N - \lfloor \frac{1}{2}\beta(G) \rfloor$, achieved by a one to one map $f: V(G)\rightarrow [1,N]$ which sends any half of the vertices of a
maximum independent set $S$ to $[1, \lfloor \frac{1}{2}\beta(G) \rfloor]$, the other half of $S$ to
$[N- \lceil \frac{1}{2}\beta(G) \rceil+1, N]$, and the remainder of $V(G)$ arbitrarily to the rest of the interval $[1,N]$. Now an independent set in $K(n,r)$
is just an intersecting family in $\binom{[n]}{r}$, and by the Erd\H{o}s--Ko--Rado theorem \cite{EKR} the maximum size of such a family is $\binom{n-1}{r-1}$. It follows
that $B(K(n,r))\le \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1}$.
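The construction behind this trivial upper bound is easy to carry out explicitly for small parameters; a sketch in Python for $K(6,2)$, placing half of the star at element $1$ (a maximum independent set, by Erd\H{o}s--Ko--Rado) at each end of the interval (helper names ours):

```python
from itertools import combinations
from math import comb

n, r = 6, 2
V = [frozenset(c) for c in combinations(range(1, n + 1), r)]
E = [(v, w) for v, w in combinations(V, 2) if not v & w]

star = [v for v in V if 1 in v]          # maximum intersecting family (EKR)
beta = comb(n - 1, r - 1)
assert len(star) == beta

rest = [v for v in V if v not in star]
half = len(star) // 2
order = star[:half] + rest + star[half:]  # half the star first, half last
f = {v: i + 1 for i, v in enumerate(order)}

dilation = max(abs(f[v] - f[w]) for v, w in E)
assert dilation <= comb(n, r) - beta // 2   # 15 - 2 = 13 here
```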
Our contribution here is to precisely determine $B(K(n,r))$ for fixed $r$ and $n$ growing, up to an $O(n^{r-4})$ error term. With this in mind, we will occasionally state inequalities
involving $n$ and $r$ which are true when $n$ is large enough relative to the fixed $r$. In these cases we will often not state this requirement on $n$ and $r$ explicitly.
\section{The lower bound}
Our goal in this section is to prove the lower bound $B(K(n,r)) \geq \binom{n}{r} - A$, where $A = \frac{1}{2}\binom{n-1}{r-1} + 2\frac{n^{r-2}}{(r-2)!} - (r + 2)\frac{n^{r-3}}{(r-3)!} + O(n^{r-4})$;
that is, to show that $dilation(f)\geq \binom{n}{r} - A$ for any one to one map $f: V(K(n,r))\rightarrow [1, \binom{n}{r}]$.
As notation, for two families of $r$-sets ${\mathcal A},{\mathcal B}\subset \binom{[n]}{r}$, let us write ${\mathcal A}\sim {\mathcal B}$ to mean that there is
some $S\in {\mathcal A}$ and $T\in {\mathcal B}$ such that $S\cap T = \emptyset$. So ${\mathcal A}\sim {\mathcal B}$ says that ${\mathcal A}$ and ${\mathcal B}$ are not cross intersecting, or equivalently that there are vertices $S,T$ of $K(n,r)$, where
$S\in {\mathcal A}$ and $T\in {\mathcal B}$, such that $ST\in E(K(n,r))$. Roughly speaking we will be showing that for any one to one map $f: V(K(n,r))\rightarrow [1, \binom{n}{r}]$ there is
an initial (resp. final) subinterval $I$ (resp. $F$) of $[1, \binom{n}{r}]$, with $|I| + |F|$
reasonably small, such that $f^{-1}(I)\sim f^{-1}(F)$. This forces a ``long'' edge $ST$; that is, one satisfying $|f(S) - f(T)|\geq \binom{n}{r} - (|I| + |F|)$, and leads to our lower bound
on $B(K(n,r))$.
We discuss briefly the relation between our lower bound proof and existing results in the literature on cross intersecting families.
Now $dilation(f)\geq \binom{n}{r} - A$ is equivalent to at least one of the statements $f^{-1}(j)\sim f^{-1}([\binom{n}{r} - A + j, \binom{n}{r}])$, $1\le j\le A$, being true. This is in turn equivalent to
at least one of the statements $f^{-1}([1,j])\sim f^{-1}([\binom{n}{r} - A + j, \binom{n}{r}])$, $1\le j\le A$, being true. In these and also other cases
we will be interested in proving ${\mathcal A}\sim {\mathcal B}$ for certain pairs ${\mathcal A},{\mathcal B}$ of families of subsets of $[n]$. There are many results in the
literature which say that if ${\mathcal A},{\mathcal B}$ are cross intersecting families (possibly satisfying additional conditions), then $|{\mathcal A}| + |{\mathcal B}|\le U(n)$ or $|{\mathcal A}||{\mathcal B}|\le T(n)$ for suitable functions
U and T. We mention the papers \cite{HM}, \cite{FT}, and \cite{FranklTok} as examples of the large literature containing such bounds.
If we could show that the families ${\mathcal A},{\mathcal B}$ for which we try to prove ${\mathcal A}\sim {\mathcal B}$ violate these bounds; that is, $|{\mathcal A}| + |{\mathcal B}| > U(n)$ for example, then we could conclude that ${\mathcal A},{\mathcal B}$ could not be cross intersecting and hence
that ${\mathcal A}\sim {\mathcal B}$ as desired. In the examples just cited, the bounds were
too generous for our purposes, and to the best of our knowledge the same holds for the other published results of this type.
Indeed in our setting, we will be dealing with
cross-intersecting families ${\mathcal A},{\mathcal B}$ where one of them contains a large matching,
which substantially restricts $|{\mathcal A}|+|{\mathcal B}|$.
We now proceed to our lower bound result. We will use a few results from the literature. The following extension
of the Hilton-Milner theorem on intersecting families to cross-intersecting families was established by F\"uredi \cite{F-cross-int}.
\begin{theorem} {\rm \cite{F-cross-int}} \label{cross-intersecting}
Let $n,a,b$ be positive integers where $n\geq a+b$. Let ${\mathcal A}\subseteq \binom{[n]}{a}$
and ${\mathcal B}\subseteq \binom{[n]}{b}$, and suppose that ${\mathcal A}$ and ${\mathcal B}$ are cross-intersecting.
If $|{\mathcal A}|\geq \binom{n-1}{a-1}-\binom{n-b-1}{a-1}+1$ and $|{\mathcal B}|>\binom{n-1}{b-1}-\binom{n-a-1}{b-1}+1$, then there exists an element $x\in [n]$ that lies in all members of ${\mathcal A}$ and ${\mathcal B}$.
That is; ${\mathcal A} \cup {\mathcal B}$ is a trivial family.
\end{theorem}
Let $t$ be a positive integer. A family ${\mathcal F}$ of sets is said to be {\it $t$-intersecting}
if $|F\cap F'|\geq t$ for all $F, F'\in{\mathcal F}$. Erd\H{o}s, Ko, and Rado \cite{EKR} proved that for
fixed positive integers $r,t$, where $r\geq t+1$, there exists
$n_0(r,t)$ such that if $n\geq n_0(r,t)$ then the maximum size of a $t$-intersecting
family of $r$-subsets of $[n]$ is $\binom{n-t}{r-t}$. For $t\geq 15$, Frankl \cite{Frankl-t-intersecting} obtained the smallest possible $n_0(r,t)$ for which the statement holds.
Wilson \cite{Wilson} obtained the smallest possible $n_0(r,t)$ for all $t$.
\begin{theorem} {\rm (\cite{Frankl-t-intersecting} for $t\geq 15$, \cite{Wilson} for all $t$)} \label{t-intersecting}
For all $n\geq (r-t+1)(t+1)$, if ${\mathcal F}\subseteq \binom{[n]}{r}$ satisfies that
$|F\cap F'|\geq t$ for all $F,F'\in {\mathcal F}$ (i.e. ${\mathcal F}$ is $t$-intersecting) then $|{\mathcal F}|\leq \binom{n-t}{r-t}$.
\end{theorem}
Erd\H{o}s \cite{Erdos-matching} showed that there exists $n_1(r,p)$ such that
for all $n\geq n_1(r,p)$ the maximum size of a family of $r$-subsets of $[n]$
not containing a matching of $p+1$ edges is $\binom{n}{r}-\binom{n-p}{r}$.
There has subsequently been a lot of work on
determining the smallest $n_1(r,p)$ for which the statement holds (see \cite{BDE, FLM, FRR, HLS} for instance). The best result among these is due to Frankl \cite{Frankl-matching}.
\begin{theorem} {\rm \cite{Frankl-matching}} \label{F-matching}
Let ${\mathcal F}\subseteq \binom{[n]}{r}$ such that ${\mathcal F}$ contains no matching of size $p+1$,
where $n\geq (2p+1)r-p$. Then $|{\mathcal F}|\leq \binom{n}{r}-\binom{n-p}{r}$.
\end{theorem}
Note that $\binom{n}{r}-\binom{n-p}{r}\le p\binom{n-1}{r-1}$. For our purpose, we will just use the following weakening of Theorem \ref{F-matching} that applies to all $n$.
\begin{lemma} \label{matching} {\rm \cite{Frankl-matching}}
Suppose ${\mathcal F}\subseteq \binom{[n]}{r}$ satisfies $|{\mathcal F}|> p\binom{n-1}{r-1}$. Then ${\mathcal F}$ contains a matching of size $p+1$.
\end{lemma}
In fact, Frankl showed that if ${\mathcal F}\subseteq \binom{[n]}{r}$ contains no $(s+1)$-matching
then $|{\mathcal F}|\leq s\,\delta({\mathcal F})$, where $\delta({\mathcal F})$ denotes the number of distinct $(r-1)$-sets that are contained in edges of ${\mathcal F}$.
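The extremal family in Erd\H{o}s' matching theorem (all $r$-sets meeting $[p]$) can be inspected directly for small parameters; a sketch in Python (brute-force matching search, helper names ours):

```python
from itertools import combinations
from math import comb

n, r, p = 7, 2, 2
# all r-sets meeting [p]: the extremal family with no (p+1)-matching
F = [frozenset(c) for c in combinations(range(1, n + 1), r) if min(c) <= p]
assert len(F) == comb(n, r) - comb(n - p, r)   # 21 - 10 = 11 sets

def has_matching(fam, size):
    """Brute force: does `fam` contain `size` pairwise disjoint sets?"""
    return any(all(not (a & b) for a, b in combinations(m, 2))
               for m in combinations(fam, size))

assert has_matching(F, p)           # a p-matching exists...
assert not has_matching(F, p + 1)   # ...but no (p+1)-matching
```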
The following lemma is straightforward to verify.
\begin{lemma} \label{binomial-estimate}
Let $n,r,c$ be integers, where $n\geq r\geq 2$ and $0\le c\leq r$. We have
$$\frac{n^r}{r!}-\frac{c+\frac{r-1}{2}}{(r-1)!} n^{r-1}
\leq \binom{n-c}{r}\leq \frac{n^r}{r!}-\frac{c+\frac{r-1}{2}}{(r-1)!} n^{r-1}
+4r^4 n^{r-2}.$$
\end{lemma}
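A numerical spot check of Lemma \ref{binomial-estimate} for small parameters and nonnegative $c$ (a sketch, not a proof; helper names ours):

```python
from math import comb, factorial

def lower(n, r, c):
    """The lower bound of Lemma: n^r/r! - (c + (r-1)/2) n^(r-1)/(r-1)!."""
    return n**r / factorial(r) - (c + (r - 1) / 2) / factorial(r - 1) * n**(r - 1)

for r in (4, 5):
    for n in (20, 50, 100):
        for c in range(0, r + 1):
            lo = lower(n, r, c)
            hi = lo + 4 * r**4 * n**(r - 2)   # upper bound adds 4 r^4 n^(r-2)
            assert lo <= comb(n - c, r) <= hi
```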
We can now prove our lower bound result. Through the remainder of this section, for each
vertex $x\in V(K(n,r))$, let $D(x)$ denote the $r$-subset of $[n]$ to which $x$ corresponds.
\begin{theorem} \label{lower-bound}
Let $r\geq 4$ be a fixed positive integer.
Let $f$ be a bijection from $V(K(n,r))$ to $\{1,\ldots, \binom{n}{r}\}$. Then for $n$ sufficiently large relative to $r$ we have
$$dilation(f)\geq \binom{n}{r}-\frac{1}{2}\binom{n-1}{r-1}-2\frac{n^{r-2}}{(r-2)!}+\frac{r+2}{(r-3)!} n^{r-3}-9r^4 n^{r-4}.$$
\end{theorem}
\begin{proof}
Let $$N=\ce{\frac{1}{2}\binom{n-1}{r-1}}-r\binom{n-2}{r-2}$$
and
$${\mathcal A}=\left\{D(x):1\leq f(x) \le N\right \} \mbox { and }
{\mathcal B}=\left\{D(x):\binom{n}{r}-N\leq f(x)\leq
\binom{n}{r}\right\}.$$
Assume $n$ to be sufficiently large relative to $r$ so that ${\mathcal A}$ and ${\mathcal B}$ are disjoint.
If $dilation(f)\geq \binom{n}{r}-\frac{1}{2}\binom{n-1}{r-1} - K$ for some constant $K$ (independent of $n$), then we are already done once $n$ is sufficiently large. So we may assume that $dilation(f)<\binom{n}{r}-\frac{1}{2}\binom{n-1}{r-1} - K$
for every constant $K$.
\medskip
{\bf Claim 1.} There exists an element $x$ that lies in all members of ${\mathcal A} \cup {\mathcal B}$; that is ${\mathcal A} \cup {\mathcal B}$ is trivial.
\medskip
{\it Proof of Claim 1.} Let ${\mathcal A}'=\{D(x): f(x)\leq r\binom{n-2}{r-2}\}$ and ${\mathcal B}'=\{D(x):
f(x)\geq \binom{n}{r}-r\binom{n-2}{r-2}\}$. Then ${\mathcal A}'\subseteq {\mathcal A}$ and ${\mathcal B}'\subseteq
{\mathcal B}$. If ${\mathcal A}'$ and ${\mathcal B}$ are not cross-intersecting then there exist $x,y$ in $K(n,r)$
with $f(x)\leq r\binom{n-2}{r-2}$ and $f(y)\geq \binom{n}{r}-N$ such that
$xy\in E(K(n,r))$, which yields $dilation(f)\geq |f(x)-f(y)|\geq \binom{n}{r}-N-r\binom{n-2}{r-2}\geq \binom{n}{r}-\frac{1}{2}{\binom{n-1}{r-1}} - 1$, contradicting
our assumption. Hence ${\mathcal A}',{\mathcal B}$ are cross-intersecting. Similarly, ${\mathcal A}, {\mathcal B}'$
are cross-intersecting. For sufficiently large $n$, we have
$|{\mathcal A}'|=r\binom{n-2}{r-2}>\binom{n-1}{r-1}-\binom{n-r-1}{r-1}+1$
and $|{\mathcal B}|\geq\frac{1}{2}\binom{n-1}{r-1}-r\binom{n-2}{r-2} - 1 >\binom{n-1}{r-1}-\binom{n-r-1}{r-1}+1$.
By Theorem \ref{cross-intersecting}, there exists an element $x$ that lies in
all members of ${\mathcal A}'$ and ${\mathcal B}$. By a similar argument, there exists an element
$y$ that lies in all members of ${\mathcal A}$ and ${\mathcal B}'$. Suppose $x\neq y$. Then
$x,y$ both lie in all members of ${\mathcal A}'$, which is impossible since there are
only $\binom{n-2}{r-2}$ many $r$-subsets of $[n]$ containing both $x$ and $y$, while
$|{\mathcal A}'|=r\binom{n-2}{r-2}\geq 4\binom{n-2}{r-2}$. Hence $x=y$. So the element $x$ lies in all $r$-subsets of
${\mathcal A} \cup {\mathcal B}$. \hskip 1.3em\hfill\rule{6pt}{6pt}
\medskip
Without loss of generality, we may assume that element $1$ lies in all members of
${\mathcal A} \cup {\mathcal B}$. Let ${\mathcal S}_1=\{D\in \binom{[n]}{r}, 1\in D\}$ and $\overline{{\mathcal S}_1} = \{D\in \binom{[n]}{r}: 1\notin D\}$. Then
${\mathcal A}\cup {\mathcal B}\subseteq {\mathcal S}_1$, $|{\mathcal S}_1|=\binom{n-1}{r-1}$, and $|\overline{{\mathcal S}_1} |=\binom{n-1}{r}.$
For every pair of elements $x,y\in [n]$, let $a(x,y)$ denote the number of sets in
${\mathcal A}$ that contain either $x$ or $y$ or both, and let $b(x,y)$ denote the number
of sets in ${\mathcal B}$ that contain either $x$ or $y$ or both. Also let ${\mathcal A}_1(x,y) =\{D(z): 1\leq f(z)\leq \max\{\binom{n-2}{r-2}+3r^2\binom{n-3}{r-3}, a(x,y)\} + \binom{n-3}{r-3}\}$.
\medskip
{\bf Claim 2.}
Let $\mathcal{X} \subseteq \overline{{\mathcal S}_1}$ satisfy
$|\mathcal{X}|>\binom{n-3}{r-3}+r^2\binom{n-4}{r-4}$. Then there exist elements $u,v\in [n]$ such
that ${\mathcal A}_1(u,v)$ and $\mathcal{X}$ are not cross-intersecting.
\medskip
{\it Proof of Claim 2.} Let ${\mathcal A}_0=\{D(x): 1\leq f(x)\leq \binom{n-2}{r-2}+3r^2\binom{n-3}{r-3}\}$ and ${\mathcal A}'_0=\{D\setminus \{1\}: D\in {\mathcal A}_0\}$, so $|{\mathcal A}'_0| = |{\mathcal A}_0|$.
Since ${\mathcal A}_0\subseteq {\mathcal A}_{1}(u,v)$ for any $u,v\in [n]$,
we may assume that ${\mathcal A}_0$ and $\mathcal{X}$ are cross-intersecting, since otherwise the claim holds for any choice of $u,v$.
First suppose ${\mathcal A}'_0$ contains a matching $M$ of size $4$. Let $C\in \mathcal{X} $. Since
$1\notin C$, $C$ must intersect each of the edges of $M$. Since $M$ is a matching,
$C$ intersects each edge of $M$ in a different vertex. But the number of such $C$
is at most $(r-1)^4 \binom{n-4}{r-4}<|\mathcal{X}|$ for large $n$, a contradiction. Thus any maximum matching $M$ of ${\mathcal A}'_0$
satisfies $|M|\le 3$, so that $|V(M)|\le 3(r-1) < 3r$.
Note that $V(M)$ forms a vertex cover of ${\mathcal A}'_0$; or else
we can find a larger matching in ${\mathcal A}'_0$. Hence some vertex $u$ in $V(M)$ lies in
at least $|{\mathcal A}'_0|/3r>3r^{2}\binom{n-3}{r-3}/3r>r\binom{n-3}{r-3}$ edges of ${\mathcal A}'_0$. Let ${\mathcal A}'_{0}(u)$ be the set
of edges in ${\mathcal A}'_0$ that contain $u$, and let ${\mathcal A}''_{0}(u) = \{E-u: E\in {\mathcal A}'_{0}(u)\}$. Then ${\mathcal A}''_{0}(u)\subset \binom{[n-2]}{r-2}$ and we have seen that
$|{\mathcal A}''_{0}(u)| > r\binom{n-3}{r-3}$. Applying Lemma \ref{matching} we see that ${\mathcal A}''_{0}(u)$ contains a matching of size $r+1$, call it $D_{1}, D_{2}, \cdots, D_{r+1}$.
Let $E_{i} = D_{i}\cup \{u\} \in {\mathcal A}'_0$, $1\le i\le r+1$.
Note that $u$ lies in at most $\binom{n-2}{r-2}$ edges of
${\mathcal A}'_0$. So there are at least $|{\mathcal A}'_0| - \binom{n-2}{r-2} = 3r^2\binom{n-3}{r-3}$ edges
of ${\mathcal A}'_0$ not covered by $u$. These edges are covered by $V(M)\setminus \{u\}$. So some vertex $v\in V(M)\setminus \{u\}$ must lie in at least $3r^2\binom{n-3}{r-3}/(3r-1)\geq r\binom{n-3}{r-3}$ of these edges. Hence by Lemma \ref{matching} as above, there are $(r+1)$ edges $F_1,\ldots, F_{r+1}$ of ${\mathcal A}'_0$ containing $v$ such that $F_1\setminus \{v\},\ldots,
F_{r+1}\setminus \{v\}$ form a matching in $\binom{[n-2]}{r-2}$. Again let $C\in \mathcal{X} $. Since $1\notin C$ by assumption, $C$ must intersect all members of ${\mathcal A}'_0$; in particular, $C$ intersects
all the edges $E_1,\ldots, E_{r+1}$. Since $|C|\leq r$, it follows that $u\in C$. Similarly,
we have $v\in C$. So in order for $\mathcal{X}$ to cross-intersect ${\mathcal A}_0$, all edges of $\mathcal{X} $ contain both $u$ and $v$.
For this fixed choice of $u,v\in V(M)$ satisfying $u,v\in C$ for all edges $C\in \mathcal{X}$,
we show that
${\mathcal A}_1(u,v)$ and $\mathcal{X}$ are not cross-intersecting.
Suppose not. Let ${\mathcal A}'_1(u,v) =\{D\setminus \{1\}: D\in {\mathcal A}_1(u,v)\}$, so ${\mathcal A}'_1(u,v)\subset \binom{[n-1]}{r-1}$.
Since $|{\mathcal A}'_1(u,v)| = |{\mathcal A}_1(u,v)| > a(u,v)+\binom{n-3}{r-3}$, there are more than $\binom{n-3}{r-3}$ edges of ${\mathcal A}'_1(u,v)$ that contain neither $u$ nor $v$. Applying Theorem \ref{t-intersecting}
with $n-1$ and $r-1$ playing the roles of $n$ and $r$ respectively, and with $t = 2$, we see that among these edges there are two edges
$E,E'$ such that $|E\cap E'|\leq 1$. First suppose that $E\cap E'=\emptyset$. For any $C\in \mathcal{X}$, $C$ must contain both $u$ and $v$
and intersect each of $E$ and $E'$. This yields $|\mathcal{X}|\leq (r-1)^2\binom{n-4}{r-4}$,
contradicting our assumption about $|\mathcal{X} |$. Hence we may assume that $E\cap E'=\{w\}$ for some $w\notin \{1,u,v\}$. As usual, all members of $\mathcal{X}$ must contain
$u$ and $v$ and intersect $E$ and $E'$. Among them there are at most $\binom{n-3}{r-3}$
that contain $w$ and at most $(r-2)^2 \binom{n-4}{r-4}$ that do not contain $w$. Hence $|\mathcal{X}|\leq \binom{n-3}{r-3}+(r-2)^2\binom{n-4}{r-4}$, contradicting our
assumption about $|\mathcal{X}|$. \hskip 1.3em\hfill\rule{6pt}{6pt}
\medskip
Symmetrically to ${\mathcal A}_1(x,y)$ defined above, let ${\mathcal B}_1(x,y) =\{D(z): \binom{n}{r} - f(z)\leq \max\{\binom{n-2}{r-2}+3r^2\binom{n-3}{r-3}, b(x,y)\} + \binom{n-3}{r-3}\}$.
By a similar argument, which we omit, we have the following.
\medskip
{\bf Claim 3.}
Let $\mathcal{X} \subseteq \overline{{\mathcal S}_1}$ satisfy $|\mathcal{X}|>\binom{n-3}{r-3}+r^2\binom{n-4}{r-4}$. Then there exist elements $u',v'\in [n]$ such that ${\mathcal B}_1(u',v')$
and $\mathcal{X}$ are not cross-intersecting.
\medskip
Now, let ${\mathcal C}$ be the subcollection of $\binom{[n]}{r}$ of minimum size such that $f({\mathcal C})$ is an interval immediately following
$f({\mathcal A})$, and $|{\mathcal C} \cap \overline{{\mathcal S}_1}| = 1 + \binom{n-3}{r-3}+r^2\binom{n-4}{r-4}$.
Similarly, let ${\mathcal D}$ be the subcollection of $\binom{[n]}{r}$ of minimum size such that $f({\mathcal D})$ is an interval immediately preceding
$f({\mathcal B})$, and $|{\mathcal D} \cap \overline{{\mathcal S}_1}| = 1 + \binom{n-3}{r-3}+r^2\binom{n-4}{r-4}$.
When $n$ is sufficiently large, ${\mathcal C}$ and ${\mathcal D}$ are well defined and are disjoint.
By definition,
\begin{equation} \label{ABCD}
|{\mathcal A}\cup {\mathcal B}\cup {\mathcal C} \cup {\mathcal D}|\leq \binom{n-1}{r-1}+2\binom{n-3}{r-3}
+2r^2\binom{n-4}{r-4} + 2.
\end{equation}
By Claim 2 applied to ${\mathcal D} \cap \overline{{\mathcal S}_1}$ in place of $\mathcal{X}$, there exist elements $u,v\neq 1$ such that
some member $D\in {\mathcal D}$ is disjoint from an $r$-set $E$ satisfying $f(E)\le \max\{\binom{n-2}{r-2}+3r^2\binom{n-3}{r-3}, a(u,v)\}+\binom{n-3}{r-3}$.
Note also that $f(D)\geq \binom{n}{r}-|{\mathcal B}|-|{\mathcal D}|$.
Letting
$$\ell=\binom{n-2}{r-2}+3r^2\binom{n-3}{r-3},$$
we then have
\begin{equation} \label{lower1}
dilation(f)\geq |f(D) - f(E)|\geq \binom{n}{r}-|{\mathcal B}|-|{\mathcal D}|-\max\{\ell, a(u,v)\}-\binom{n-3}{r-3}.
\end{equation}
By a similar argument, for some elements $u',v'\neq 1$, we have
\begin{equation} \label{lower2}
dilation(f)\geq \binom{n}{r}-|{\mathcal A}|-|{\mathcal C}|-\max\{\ell, b(u',v')\}-\binom{n-3}{r-3}.
\end{equation}
Let
$$\lambda_1=\frac{1}{2}(|{\mathcal A}|+|{\mathcal B}|+|{\mathcal C}|+|{\mathcal D}|), \quad \mbox{ and } \quad
\lambda_2=\frac{1}{2}(\max\{\ell, a(u,v)\}+ \max\{\ell, b(u',v')\}).$$
By averaging \eqref{lower1} and \eqref{lower2}, we get
\begin{equation} \label{lower-average}
dilation(f)\geq \binom{n}{r}-\lambda_1-\lambda_2-\binom{n-3}{r-3}.
\end{equation}
By \eqref{ABCD},
\begin{equation} \label{bound1}
\lambda_1\leq \frac{1}{2} \binom{n-1}{r-1}+\binom{n-3}{r-3}
+r^2\binom{n-4}{r-4} + 1.
\end{equation}
Letting $m(r,n) = \frac{1}{2}[\binom{n-2}{r-2}+\binom{n-3}{r-2}+\binom{n-4}{r-2}+\binom{n-5}{r-2}]$, we show that $\lambda_{2}\le m(r,n)$. Observe first
that $\lambda_{2} = \max \{ \frac{1}{2}(\ell+a(u,v)), \frac{1}{2}(\ell+b(u',v')), \frac{1}{2}(a(u,v)+b(u',v')), \ell \}$, and we can bound
the expressions in the braces as follows.
Certainly $a(u,v)$ is no more than the total number of $r$-subsets of
$[n]$ that contain $1$ and at least one of $u,v$. Thus
$a(u,v)\le \binom{n-1}{r-1}-\binom{n-3}{r-1} = \binom{n-2}{r-2}+\binom{n-3}{r-2}$,
and similarly for $b(u',v')$. Thus
$\frac{1}{2}(\ell+a(u,v))\le \frac{1}{2}[2\binom{n-2}{r-2} + \binom{n-3}{r-2} + 3r^{2}\binom{n-3}{r-3}] < m(r,n)$ for large $n$. The same bound
holds for $\frac{1}{2}(\ell+b(u',v'))$ symmetrically. Now
$a(u,v)+b(u',v')$ is no more than the total number of $r$-subsets of
$[n]$ that contain $1$ and at least one of $u,v,u',v'$. Thus
$a(u,v) + b(u',v')\le \binom{n-1}{r-1}-\binom{n-5}{r-1}=\binom{n-2}{r-2}+\binom{n-3}{r-2}
+\binom{n-4}{r-2}+\binom{n-5}{r-2} = 2m(r,n)$, so $m(r,n)\geq \frac{1}{2}[a(u,v)+b(u',v')]$. Finally,
since $\ell=\binom{n-2}{r-2}+3r^2\binom{n-3}{r-3}$, for sufficiently large $n$ we have
$m(r,n) > \frac{1}{2}[\binom{n-2}{r-2}+\binom{n-3}{r-2}+\ell]\geq
\ell$. So we've shown that $\lambda_{2}\le m(r,n)$.
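Both counting identities used above are telescoping applications of Pascal's rule; they can be sanity-checked numerically (an illustration, not part of the proof):

```python
from math import comb

def check_identities(n, r):
    # C(n-1, r-1) - C(n-3, r-1) == C(n-2, r-2) + C(n-3, r-2)   (bound on a(u, v))
    lhs1 = comb(n - 1, r - 1) - comb(n - 3, r - 1)
    rhs1 = comb(n - 2, r - 2) + comb(n - 3, r - 2)
    # C(n-1, r-1) - C(n-5, r-1) == sum_{j=2}^{5} C(n-j, r-2) = 2 m(r, n)
    lhs2 = comb(n - 1, r - 1) - comb(n - 5, r - 1)
    rhs2 = sum(comb(n - j, r - 2) for j in range(2, 6))
    return lhs1 == rhs1 and lhs2 == rhs2

assert all(check_identities(n, r) for r in range(3, 9) for n in range(4 * r, 4 * r + 30))
```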
The required lower bound on $dilation(f)$ is now obtained as follows. Applying Lemma \ref{binomial-estimate} to the four binomial terms in $m(r,n)$, we get
\begin{equation} \label{bound2}
\lambda_2\leq m(r,n)\leq 2\frac{n^{r-2}}{(r-2)!}-\frac{r+4}{(r-3)!}n^{r-3} + 8r^4 n^{r-4}.
\end{equation}
Hence by \eqref{lower-average}, \eqref{bound1}, and \eqref{bound2}, we have
\begin{eqnarray*}
dilation(f) &\geq& \binom{n}{r}-\lambda_1-\lambda_2-\binom{n-3}{r-3}\\
&\geq& \binom{n}{r}-\frac{1}{2}\binom{n-1}{r-1}-2\binom{n-3}{r-3}-2\frac{n^{r-2}}{(r-2)!}+\frac{r+4}{(r-3)!} n^{r-3}-9r^4 n^{r-4}\\
&\geq&\binom{n}{r}-\frac{1}{2}\binom{n-1}{r-1}-2\frac{n^{r-2}}{(r-2)!}+\frac{r+2}{(r-3)!} n^{r-3}-9r^4 n^{r-4}.
\end{eqnarray*}
\end{proof}
\section{The upper bound}
In this section we give a construction which yields our upper bound for $B(K(n,r))$. We begin with some notation. For any
sequence $\{ i_{j} \}$ of $t$ integers in increasing order, $1\le i_{1} < i_{2} < \cdots < i_{t}\le n$, let $S_{i_{1}i_{2}\cdots i_{t}}$ be the collection
of all $r$-sets in $\binom{[n]}{r}$ whose smallest $t$ elements are $i_{1}, i_{2}, \cdots , i_{t}$, so that $|S_{i_{1}i_{2}\cdots i_{t}}| = \binom{n-i_{t}}{r-t}$. Occasionally we insert commas
between successive $i_{j}$ for clarity; e.g. $S_{1,8,10}$ is the collection of $r$-sets in $\binom{[n]}{r}$ whose smallest three elements are $1,8$, and $10$.
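As a concrete check of the count $|S_{i_{1}i_{2}\cdots i_{t}}| = \binom{n-i_{t}}{r-t}$, one can enumerate small cases directly (an illustration only):

```python
from itertools import combinations
from math import comb

def block_size(n, r, prefix):
    """Count the r-subsets of [n] whose smallest len(prefix) elements are exactly prefix."""
    t = len(prefix)
    return sum(1 for A in combinations(range(1, n + 1), r) if A[:t] == prefix)

# |S_{1,8,10}| inside binom([12],4): the remaining element ranges over {11, 12}
assert block_size(12, 4, (1, 8, 10)) == comb(12 - 10, 4 - 3) == 2
# |S_2| inside binom([10],3): choose the remaining 2 elements from {3, ..., 10}
assert block_size(10, 3, (2,)) == comb(10 - 2, 3 - 1) == 28
```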
For any
$S\subset \binom{[n]}{r}$, let $\mathcal{I}(S) = \{ w\in [n]: w\in \bigcap_{v\in S}v \}$, the intersection of all $r$-sets in $S$.
Let $f:V(K(n,r))\rightarrow [1,\binom{n}{r}]$ be a one to one map. As a convenience, in this section we identify any subset $A\subseteq V(K(n,r))$ with $f(A)\subseteq [1,\binom{n}{r}]$, and any
subset $B\subseteq [1,\binom{n}{r}]$ with $f^{-1}(B)$. For subsets $F,G\subset [1,\binom{n}{r}]$,
we say that $G$ is a {\it right blocker} of $F$ if
\newline(a) $G = [u,\binom{n}{r}]$ for some $\frac{1}{2}\binom{n}{r} < u < \binom{n}{r}$ (so $G$ is a terminal interval).
\newline(b) $F$ and $G$ are cross intersecting; that is, $v\cap w\ne \emptyset$ for any $v\in F$, $w\in G$, viewing $v,w$ as $r$-subsets under the identification above.
In particular, $vw\notin E(K(n,r))$ for all $v\in F$ and $w\in G$.
For any subset $F\subset V(K(n,r))$ and map $f$ as above, let $\partial(F) = \max\{ |f(v) - f(w)|: v\in F \mbox{ or } w\in F, vw\in E(K(n,r)) \}$.
\begin{lemma} \label{blockedintervals} Let $f:V(K(n,r))\rightarrow [1,\binom{n}{r}]$ be a one to one map.
Suppose $G$ is a right blocker for $F = [x,y]\subset [1,\binom{n}{r}]$. Then
$\partial(F)\le \max\{y-1,\binom{n}{r} - (|G| + x)\}$.
\end{lemma}
\begin{proof} Consider an edge $vw$, $v\in F$. If $f(w) < f(v)$, then since $f(v)\le y$ and $f(w)\geq 1$ we get
$|f(v) - f(w)|\le y-1$ in this case. If $f(w) > f(v)$, then since $w\notin G$ we have
$f(w) \le \binom{n}{r} - |G|$ while $f(v) \geq x.$ It follows that $|f(v) - f(w)|\le \binom{n}{r} - (|G| + x)$, as required. \end{proof}
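Being purely combinatorial, the lemma can be verified by brute force on a small instance. The following sketch checks it on $K(5,2)$ (the Petersen graph) over random labelings $f$; it is an illustration, not part of the proof:

```python
from itertools import combinations
import random

n, r = 5, 2
verts = list(combinations(range(1, n + 1), r))
N = len(verts)                                 # binom(5, 2) = 10

def lemma_holds(trials=20, seed=0):
    random.seed(seed)
    for _ in range(trials):
        perm = list(range(1, N + 1))
        random.shuffle(perm)                   # a one-to-one map f
        pos = dict(zip(verts, perm))           # vertex -> label in [1, N]
        inv = {p: v for v, p in pos.items()}
        for u in range(N // 2 + 1, N):         # G = [u, N], a terminal interval
            G = range(u, N + 1)
            for x in range(1, N + 1):
                for y in range(x, N + 1):
                    F = range(x, y + 1)
                    # keep only right blockers: G cross-intersecting with F
                    if any(not set(inv[p]) & set(inv[g]) for p in F for g in G):
                        continue
                    dF = max((abs(p - pos[w]) for p in F for w in verts
                              if not set(inv[p]) & set(w)), default=0)
                    if dF > max(y - 1, N - (len(G) + x)):
                        return False
    return True

assert lemma_holds()
```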
In our next theorem we obtain an upper bound for $B(K(n,r))$ by construction.
\begin{theorem} \label{betterupperbound} Let $r\geq 3$ be a fixed positive integer. Then for large $n$ we have
$$B(K(n,r))\le \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1} - 2\frac{n^{r-2}}{(r-2)!} + (r + 2)\frac{n^{r-3}}{(r-3)!} + O(n^{r-4}). $$
\end{theorem}
\begin{proof} Let $L(n,r) = \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1} - 2\frac{n^{r-2}}{(r-2)!} + (r + 2)\frac{n^{r-3}}{(r-3)!} + O(n^{r-4})$, the right side
of the inequality in the theorem. We will define a map $f:V(K(n,r))\rightarrow [1, \binom{n}{r}]$ and
a partition of $[1, \binom{n}{r}]$ into intervals such that $\partial(F)\le L(n,r)$ for each
interval $F$ in the partition. Since $dilation(f)$ is the maximum of $\partial(F)$ over all these $F$, we obtain a map $f$
satisfying $dilation(f)\le L(n,r)$, as required.
First we begin with a partition $S_{1} = S_{1}' \cup S_{1}''$ of $S_{1}$ in which $|S_{1}' | = \lfloor \frac{1}{2}|S_{1}| \rfloor $ and
$|S_{1}'' | = \lceil \frac{1}{2}|S_{1}| \rceil $, defined as follows.
We include $S_{12}\cup S_{15}\cup S_{189}\cup S_{1,8,10}$ in $S_{1}'$, and also include $S_{13}\cup S_{14}\cup S_{167}\cup S_{168}$ in $S_{1}''$, and then
fill out the rest of $S_{1}'$ and $S_{1}''$ arbitrarily with points from $S_{1}$, up to size $\lfloor \frac{1}{2}|S_{1}| \rfloor$ for $S_{1}'$, and up to size
$\lceil \frac{1}{2}|S_{1}| \rceil$ for $S_{1}''$. Formally, we let
$S_{1}' = S_{12}\cup S_{15}\cup S_{189}\cup S_{1,8,10}\cup X'$ and $S_{1}'' = S_{13}\cup S_{14}\cup S_{167}\cup S_{168}\cup X''$,
where $X'$ is any subset of $S_{1} - ( S_{12}\cup S_{15}\cup S_{189}\cup S_{1,8,10}\cup S_{13}\cup S_{14}\cup S_{167}\cup S_{168} )$ of size
$\lfloor \frac{1}{2}|S_{1}| \rfloor - | S_{12}\cup S_{15}\cup S_{189}\cup S_{1,8,10} |$, and where $X'' = S_{1} - S_{1}' - (S_{13}\cup S_{14}\cup S_{167}\cup S_{168})$.
Now define the map $f:V(K(n,r))\rightarrow [1, \binom{n}{r}]$ by Table \ref{thelayout}, with the following meaning. There are $29$ cells
in this table, counting $R = V(K(n,r))-(S_{1}\cup S_{2}\cup S_{3})$ as a single cell using wraparound. We call these cells {\it blocks} of $f$. Each
block labeled $S_{ij}$, $S_{t}$, $S_{ijk}$, or $R$ in this Table indicates that $f(S_{ij})$, $f(S_{t})$, $f(S_{ijk})$, or $R$ (respectively) is an interval of length
$|S_{ij}|$, $|S_{t}|$, $|S_{ijk}|$, or $|R|$ (respectively) in $[1, \binom{n}{r}]$.
The order in which points of $S_{ij}$ (or of $S_{t}$, $S_{ijk}$, or $R$) are mapped to this
interval is arbitrary. So we view the blocks of $f$ interchangeably either as subsets of $V(K(n,r))$ or as intervals in $[1, \binom{n}{r}]$ under the
identification explained at the beginning of this section.
The relative order in which these blocks are mapped to $[1, \binom{n}{r}]$ is indicated by the left to right order
of their appearance in Table \ref{thelayout}, where the second row of the table is understood to follow the first row in left to right order.
It remains to define the blocks $..S_{ij}$, $S_{ij}..$, $S_{1}''..$, and $..S_{1}'$ in Table \ref{thelayout}, which were not explained above.
A block $..S_{ij}$ (resp. $S_{ij}..$) indicates that for some subset $S\subset S_{ij}$ the image $f(S)$ occupies some set of consecutive blocks
immediately preceding (resp. following) the block $..S_{ij}$ (resp. $S_{ij}..$), and that $f(S_{ij} - S)=..S_{ij}$ (resp. $S_{ij}..$).
Furthermore $f(S_{ij} - S)$
is the interval of length
$|S_{ij} - S|$ in the position of block $..S_{ij}$ (resp. $S_{ij}..$). So altogether $f(S)$ together with $..S_{ij}$ (resp. $S_{ij}..$) is a consecutive set of blocks
of $f$ which constitute the image $f(S_{ij})$.
As an example, consider the block $..S_{15}$. Referring to Table \ref{thelayout}, we see that the role of $S$ here is played by $S = S_{156}\cup S_{157}$, so $f(S)$ occupies the two blocks
immediately preceding the block $..S_{15}$. Thus $..S_{15} = f(S_{15} - S)$ is the interval of length $|S_{15} - S|$ in the position of block $..S_{15}$; specifically,
$..S_{15} = f(S_{15} - S) = [|S_{12}\cup S_{156}\cup S_{157}| + 1, |S_{12}\cup S_{15}|]$. Note that
$f(S)$ together with $..S_{15}$ is a sequence of three consecutive blocks constituting the image $f(S_{15})$. Referring to Table \ref{thelayout}, we see further that
$f(S_{156}) = [ |S_{12}|+1, |S_{12}|+|S_{156}| ]$, and $f(S_{157}) = [ |S_{12}|+|S_{156}| +1, |S_{12}|+|S_{156}|+|S_{157}| ]$. As another example, for the block $S_{1}''..$, the role of $S$ is played
by $S = S_{168}\cup S_{167}\cup S_{14}..\cup S_{146}\cup S_{145}\cup S_{13}$, so $f(S)$ occupies the six consecutive blocks immediately following the block
$S_{1}''..$. We have $S_{1}''.. = f(S_{1}'' - S) = [\binom{n}{r} - |S_{1}''| + 1, \binom{n}{r} - |S|]$, so $S_{1}''..$ is the first of seven consecutive blocks which altogether constitute
$f(S_{1}'') = [\binom{n}{r} - |S_{1}''| + 1, \binom{n}{r}]$. Similarly
$..S_{1}'$ is the last of seven consecutive blocks constituting $f(S_{1}') = [1, |S_{1}'|]$.
\begin{table}
\begin{center}
\hskip-1.0cm\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$S_{12}$ & $S_{156}$ & $S_{157}$ & $..S_{15}$ & $S_{189}$ & $S_{1,8,10}$ & $..S_{1}'$& $S_{346}$ & $S_{347}$ & $..S_{34}$ & $S_{356}$ & $S_{357}$ & $..S_{35}$ & $..S_{3}$ & $R$ \\
\hline
$R$ & $S_{2}..$ & $S_{23}..$ & $S_{236}$ & $S_{235}$ & $S_{25}..$ & $S_{259}$ & $S_{258}$ & $S_{1}''..$ & $S_{168}$ & $S_{167}$ & $S_{14}..$ & $S_{146}$ & $S_{145}$ & $S_{13}$ \\
\hline
\end{tabular}
\caption{The mapping $f$} \label{thelayout}
\end{center}
\end{table}
As mentioned above it suffices to show that $\partial(F)\le L(n,r)$ for each block $F$ of $f$. Start with the special block $F = R$. Note first
that $|S_{3}| = \binom{n-3}{r-1} < \binom{n-2}{r-1} = |S_{2}|$, and $| |S_{1}'| - |S_{1}''| |\le 1$. Thus Table \ref{thelayout} shows that
the initial subinterval of $[1, \binom{n}{r}]$ consisting of the $14$ blocks immediately preceding $R$ (i.e., the interval $S_{1}'\cup S_{3}$) is shorter than the final subinterval
of $[1, \binom{n}{r}]$ consisting of the $14$ blocks immediately following $R$ (i.e., the interval $S_{1}''\cup S_{2}$). Thus
$\partial(R)\le \binom{n}{r} - |S_{1}'| - |S_{3}| = \binom{n}{r} - \frac{1}{2}\binom{n-1}{r-1} - \binom{n-3}{r-1} < L(n,r)$, as required.
Consider now any block $F\ne R$. Each such $F$ is contained in either
$[1, \frac{1}{2}\binom{n}{r}]$ or $[ \frac{1}{2}\binom{n}{r}, \binom{n}{r}]$. Since $\frac{1}{2}\binom{n}{r} < L(n,r)$, any edge $ab\in E(K(n,r))$, $a<b$, for which
$|f(a) - f(b)|\geq L(n,r)$ must satisfy $a\in [1,\frac{1}{2}\binom{n}{r}]$ and $b\in[ \frac{1}{2}\binom{n}{r}, \binom{n}{r}]$. We are thus reduced to showing
$\partial(F)\le L(n,r)$ for each block $F= [x,y]\subset [1,\frac{1}{2}\binom{n}{r}]$. There are $14$ such blocks, in fact the leftmost $14$ blocks in the first row of Table \ref{thelayout}.
Since $y < \frac{1}{2}\binom{n}{r} < L(n,r)$, by Lemma \ref{blockedintervals}
we are reduced to showing that for each of these blocks $F$ there is a right blocker $G$ for $F$
such that $|G| + x \geq \frac{1}{2}\binom{n-1}{r-1} + 2\frac{n^{r-2}}{(r-2)!} - (r + 2)\frac{n^{r-3}}{(r-3)!} + O(n^{r-4})$. Let $B(n,r)$
be the right side of the last inequality.
For each block $M$ of $f$ let $\widehat{M}$ denote the terminal subinterval of $[1,\binom{n}{r}]$ beginning with $M$.
As an example, we can use
Table \ref{thelayout} to see that $\widehat{S_{259}} = S_{259}\cup S_{258}\cup S_{1}''..\cup S_{168}\cup S_{167}\cup S_{14}..\cup S_{146}\cup S_{145}\cup S_{13}.$
With this notation, for each block $F \subset [1, \frac{1}{2}\binom{n}{r}]$, Table \ref{shortenedblockers} gives a right blocker $G$ of $F$ (directly below $F$ in the Table)
in the form $G = \widehat{M}$ for some block $M$. For example, the right blocker of $S_{157}$ given by Table \ref{shortenedblockers} is $\widehat{S_{235}}$.
To verify that these $G$ are indeed right blockers, it remains only to check that $\{ F\}$ and
the corresponding $G = \widehat{M}$ are cross intersecting families.
To do this, it suffices to show that for each block $B$
appearing in $\widehat{M}$ we have $\mathcal{I}(B)\cap \mathcal{I}(F)\ne \emptyset$. For example, when $F = S_{189}$, the corresponding right blocker
for $F$ in Table \ref{shortenedblockers} is $G = \widehat{S_{259}}$, and the blocks $B$ appearing in $G$ are given earlier in this paragraph. Examining each of these blocks $B$
(in left to right order) for the condition $\mathcal{I}(B)\cap \mathcal{I}(F)\ne \emptyset$
we get
$9\in \mathcal{I}(S_{259})\cap \mathcal{I}(S_{189}), 8\in \mathcal{I}(S_{258})\cap \mathcal{I}(S_{189}),\cdots , 1\in \mathcal{I}(S_{13})\cap \mathcal{I}(S_{189})$,
as required. We leave to the reader the similar verification that for blocks $F\ne R$, $F\subset [1, \frac{1}{2}\binom{n}{r}]$, we have that $\{ F\}$ and the
corresponding $G = \widehat{M}$ given by Table \ref{shortenedblockers} are cross intersecting families.
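This block-by-block verification can be automated. In the sketch below, each block of the second row of Table \ref{thelayout} is encoded by the set of elements guaranteed to lie in every $r$-set it contains (remainder blocks such as $S_{1}''..$ or $S_{14}..$ are only guaranteed their defining prefix); the transcription of the two tables into these prefix sets is ours:

```python
# Second row of the layout table, in order, each block with its guaranteed elements.
row2 = [("S_2..", {2}), ("S_23..", {2, 3}), ("S_236", {2, 3, 6}),
        ("S_235", {2, 3, 5}), ("S_25..", {2, 5}), ("S_259", {2, 5, 9}),
        ("S_258", {2, 5, 8}), ("S_1''..", {1}), ("S_168", {1, 6, 8}),
        ("S_167", {1, 6, 7}), ("S_14..", {1, 4}), ("S_146", {1, 4, 6}),
        ("S_145", {1, 4, 5}), ("S_13", {1, 3})]
names = [name for name, _ in row2]

# Blocker table: F (by its guaranteed elements) -> start block M of G = \widehat{M}.
blockers = [({1, 2}, "S_2.."), ({1, 5, 6}, "S_236"), ({1, 5, 7}, "S_235"),
            ({1, 5}, "S_25.."), ({1, 8, 9}, "S_259"), ({1, 8, 10}, "S_258"),
            ({1}, "S_1''.."), ({3, 4, 6}, "S_168"), ({3, 4, 7}, "S_167"),
            ({3, 4}, "S_14.."), ({3, 5, 6}, "S_146"), ({3, 5, 7}, "S_145"),
            ({3, 5}, "S_145"), ({3}, "S_13")]

def cross_intersects(F_prefix, M):
    """F shares a guaranteed element with every block of the terminal interval at M."""
    return all(F_prefix & prefix for _, prefix in row2[names.index(M):])

assert all(cross_intersects(F, M) for F, M in blockers)
```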
Let then $F$ be any block of $f$ satisfying $F = [x, y] \subset [1,\frac{1}{2}\binom{n}{r}]$. We now verify the property $|G| + x \geq B(n,r)$, where $G$
is the right blocker of $F$ given in Table \ref{shortenedblockers}.
First consider such blocks satisfying $F\ne S_{12}$. The crucial feature of $f$ which ensures this property for such blocks $F$ is that
the right blocker $G$ of $F$ given in Table \ref{shortenedblockers} satisfies
\begin{equation} \label{bockerproperty}
|G| + x = 1 + |S_{1}''\ (\mbox{or } S_{1}')\cup S_{ab}\cup S_{cd}\cup S_{rst}\cup S_{r's't'}|, \qquad b + d = 7,
\end{equation}
where the five sets on the right side have pairwise empty intersection. Suppose for a moment that this property holds for $F$ and its right blocker $G$. Then using
Lemma \ref{binomial-estimate} we obtain $|G|+x = 1 + \frac{1}{2}\binom{n-1}{r-1} + \binom{n-b}{r-2} + \binom{n-d}{r-2} + \binom{n-t}{r-3} + \binom{n-t'}{r-3} =
1 + \frac{1}{2}\binom{n-1}{r-1} + 2\frac{n^{r-2}}{(r-2)!} - (\frac{b+d+r-3-2}{(r-3)!})n^{r-3} + O(n^{r-4}) = B(n,r)$
since $b+d = 7$, as required.
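The expansion above can be sanity-checked numerically: for any split with $b+d=7$, the gap between the exact binomial sum and the two leading terms grows only like $n^{r-4}$. A sketch (the split $b=3$, $d=4$, $t=6$, $t'=7$ is one illustrative choice of ours):

```python
from math import comb, factorial

def exact_terms(n, r, b, d, t, tp):
    return (comb(n - b, r - 2) + comb(n - d, r - 2)
            + comb(n - t, r - 3) + comb(n - tp, r - 3))

def leading_terms(n, r):
    return 2 * n**(r - 2) / factorial(r - 2) - (r + 2) * n**(r - 3) / factorial(r - 3)

r, b, d, t, tp = 5, 3, 4, 6, 7          # one admissible split with b + d = 7
ratios = [abs(exact_terms(n, r, b, d, t, tp) - leading_terms(n, r)) / n**(r - 4)
          for n in (10**3, 10**4, 10**5)]
assert max(ratios) < 50                 # bounded ratio: the error is O(n^{r-4})
```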
It remains to verify that (\ref{bockerproperty}) holds for these $F$ (and their corresponding $G$). We do this for three cases, and leave the verification of the others to the reader.
Consider first $F = S_{157}$. From Table \ref{thelayout} we have $x = 1 + |S_{12}\cup S_{156}|$. Table \ref{shortenedblockers} gives the right blocker
$G = \widehat{S_{235}} = S_{235}\cup S_{25}\cup S_{1}''$ for $F$. Thus
$|G| + x = 1 + |S_{1}''\cup S_{25}\cup S_{12}\cup S_{156}\cup S_{235}|$, as required by (\ref{bockerproperty}). As a second example let
$F = S_{1,8,10}$. Table \ref{thelayout} gives $x = 1 + |S_{12}| + |S_{15}| + |S_{189}|$, and
Table \ref{shortenedblockers} gives the right blocker $G = \widehat{S_{258}} = S_{258}\cup S_{1}''$ of $F$. Thus we obtain
$|G|+x = 1 + |S_{1}''\cup S_{12}\cup S_{15}\cup S_{189}\cup S_{258}|$, thereby verifying (\ref{bockerproperty}) for $F = S_{1,8,10}$. Finally consider
$F = S_{347}$. Using Tables \ref{thelayout} and \ref{shortenedblockers} we have $x = 1 + |S_{1}'\cup S_{346}|$, while
the right blocker for $F$ is $G = \widehat{S_{167}} = S_{167}\cup S_{14}\cup S_{13}$. So finally $|G| + x = 1 + |S_{1}'\cup S_{14}\cup S_{13}\cup S_{167}\cup S_{346}|$,
as required by (\ref{bockerproperty}).
We leave to the reader the similar
verification that (\ref{bockerproperty}) holds for the remaining blocks in the first row of Table \ref{thelayout} satisfying $F\ne S_{12}$, $F\ne R$.
Finally in the case $F = S_{12}$, Table \ref{shortenedblockers} gives the right blocker $G = \widehat{S_{2}..} = S_{2}\cup S_{1}''$. Hence
$|G| + x > |G| = |S_{2}| + |S_{1}''| = \binom{n-2}{r-1} + \frac{1}{2}\binom{n-1}{r-1} > B(n,r)$. \end{proof}
\begin{table}
\begin{center}
\hskip-1.0cm\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$F$ & $S_{12}$ & $S_{156}$ & $S_{157}$ & $..S_{15}$ & $S_{189}$ & $S_{1,8,10}$ & $..S_{1}'$& $S_{346}$ & $S_{347}$ & $..S_{34}$ & $S_{356}$ & $S_{357}$ & $..S_{35}$ & $..S_{3}$ \\
\hline
$G$ & $\widehat{S_{2}..}$ & $\widehat{S_{236}}$ & $\widehat{S_{235}}$ & $\widehat{S_{25}..}$ & $\widehat{S_{259}}$ & $\widehat{S_{258}}$ & $\widehat{S_{1}''..}$ & $\widehat{S_{168}}$ & $\widehat{S_{167}}$ &
$\widehat{S_{14}..}$ & $\widehat{S_{146}}$ & $\widehat{S_{145}}$ & $\widehat{S_{145}}$ & $\widehat{S_{13}}$ \\
\hline
\end{tabular}
\caption{block $F$ of $f$, right blocker $G$ for $F$} \label{shortenedblockers}
\end{center}
\end{table}
\section{Introduction}
\subsection{Analog to analog compression}
Analog to analog (A2A) compression of signals has recently gathered interest in information theory \cite{verdu, WV2, montanari, HAT}. In A2A compression, a high dimensional analog signal $x^n \in \mR^n$ is encoded into a lower dimensional analog signal $y^m=f_n(x^n) \in \mR^m$. The goal is to design the encoding so as to preserve in $y^m$ all the information about $x^n$, and to obtain successful decoding for a given distortion measure like MSE or error probability. In particular, the encoding may be corrupted by noise. It is worth mentioning that when the alphabet of $x$ and $y$ is finite, this framework falls into traditional topics of information theory such as lossless and lossy data compression, or joint source-channel coding. The novelty of A2A compression is to consider $x$ and $y$ to be real valued and to impose regularity constraints on the encoder, in particular linearity, as motivated by compressed sensing \cite{CS1,CS2}.
The challenge and practicality of A2A compression is to obtain dimensionality reduction, i.e., $m/n \ll 1$, by exploiting a prior knowledge on the signal. This may be sparsity as in compressed sensing. For $k$-sparse signals, and without any stability or complexity considerations, it is not hard to see that the dimensionality reduction can be of order $k/n$. A measurement rate of order $k/n\log(n/k)$ has been shown to be sufficient to obtain stable recovery by solving tractable optimization algorithms like convex programming ($l_1$ minimization). This remarkable achievement has gathered tremendous amount of attention with a large variety of algorithmic solutions deployed over the past years. The vast majority of the research has however capitalized on a common sparsity model.
Several works have explored connections between information theory and compressed sensing\footnote{\cite{kud_pfi,pfister,dimakis} investigate LDPC coding techniques for compressed sensing}, in particular \cite{AT, RG, SBB, wainwright, WWR, guo}; however, it is only recently \cite{verdu} that a foundation of A2A compression has been developed, shifting the attention to probabilistic signal models beyond the sparsity structure. It is shown in \cite{verdu} that under linear encoding and Lipschitz-continuous decoding, the fundamental limit of A2A compression is the R{\'e}nyi information dimension (RID), a measure whose operational meaning had remained marginal in information theory until \cite{verdu}. In the case of a nonsingular mixture distribution, the RID is given by the mass on the continuous part, and for the specific case of sparse mixture distributions, this gives a dimensionality reduction of order $k/n$. It is natural to ask whether this improvement on compressed sensing is due to potentially complex or non-robust coding strategies. \cite{WV2} shows that robustness to noise is not a limitation of the framework in \cite{verdu}. Two other works \cite{montanari,HAT} have corroborated the fact that complexity may not be a limitation either. In \cite{montanari} spatially-coupled matrices are used for the encoding of the signal, leveraging the analytical ground of spatially-coupled codes and predictions of \cite{mezard}. In particular, \cite{montanari} shows that the RID is achieved using an approximate message passing algorithm with block-diagonal Gaussian measurement matrices.
However, the size of the blocks increases as the measurement rate approaches the RID.
In \cite{HAT}, using a new entropy power inequality (EPI) for integer-valued random variables that was further developed in \cite{epi_Z}, the polarization technique was used to deterministically construct partial Hadamard matrices for encoding discrete signals over the reals. This provides a way to achieve recovery with $o(n)$ measurements for signals with a zero RID, along with a stable low complexity recovery algorithm. The case of mixture distributions was however left open in \cite{HAT}.
This paper proposes a new approach to A2A compression by means of a polarization theory over the reals. The use of polarization techniques for sparse recovery was proposed in \cite{dublin} for discrete signals, relying on coding strategies over finite fields. In this paper, it is shown that using the RID, one obtains a natural counterpart over the reals of the entropy polarization phenomenon \cite{channel_polarcode,source_polarcode}.
Specifically, the entropy (or source) polarization phenomenon \cite{source_polarcode} shows that transforming an i.i.d.\ sequence of discrete random variables using an Hadamard matrix polarizes the conditional entropies to the extreme values of 0 and 1 (deterministic and maximally random distributions). We show in this paper that the RID of an i.i.d.\ sequence of mixture random variables also polarizes to the two extreme values $0$ and $1$ (discrete and continuous distributions). To get to this result, properties of the RID in vector settings and related information measures are first developed. It is then shown that the RID polarization, as opposed to the entropy polarization, is obtained with an analytical pattern. In other words, there is no need to rely on algorithms to compute the set of components which tend to 0 or 1, as this is given by a known pattern equivalent to the BEC channel polarization \cite{channel_polarcode}. This is then used to obtain universal A2A compression schemes based on explicit partial Hadamard matrices. The current paper focuses on the encoding strategies and on extracting the RID without specifying the decoding strategy. Numerical simulations provide evidence that efficient message passing algorithms may be used in conjunction with the obtained encoders.
Finally, the paper extends the realm of A2A compression to a multi-signal setting. Techniques of distributed compressed sensing were introduced in \cite{DCS} for specific classes of sparse signal models. We provide here an information theoretic framework for general multi-signal A2A compression, as a counterpart of the Slepian\,\&\,Wolf coding problem in source compression \cite{slepian}. A measurement rate region to extract the RID of correlated signals is obtained and is shown to be tight.
\subsection{Notations and preliminaries}
The set of reals, integers and nonnegative integers will be denoted by $\mR$, $\mathbb{Z}$ and $\mathbb{Z}_+$ respectively. $\mathbb{N}=\mathbb{Z}_+\backslash \{0\}$ will denote the set of strictly positive integers. For $n\in \mathbb{N}$, $[n]=\{1,2,\dots,n\}$ denotes the sequence of integers from $1$ to $n$. For a set $S$, the cardinality of the set will be denoted by $|S|$, thus $|[n]|=n$.
All random variables are denoted by capital letters and their realizations by lower-case letters ($x$ is a realization of the random variable $X$). The expected value and the variance of a random variable $X$ are denoted by $\mathbb{E}\{X\}$ and $\sigma_X^2$. For $i,j \in \mathbb{Z}$, $X_i^j$ is a column vector consisting of the random variables $\{X_i,X_{i+1},\dots,X_j\}$ and for $i>j$, we set $X_i^j$ equal to null.
For a discrete random variable $X$ with a distribution $p_X$, $H(X)=H(p_X)$ denotes the discrete entropy of $X$. For the continuous case, $h(X)=h(p_X)$ denotes the differential entropy of $X$. Throughout the paper, we assume that all discrete and continuous random variables have well-defined discrete entropy and differential entropy respectively. For random elements $X$, $Y$ and $Z$, $I(X;Y)$ and $I(X;Y|Z)$ denote the mutual information of $X$ and $Y$ and the conditional mutual information of $X$ and $Y$ given $Z$. $I(X;Y|z)$ denotes the mutual information of $X$ and $Y$ given a specific realization $Z=z$. Hence, $I(X;Y|Z)=\mathbb{E}_Z \{I(X;Y|z)\}$. For simplicity, we also assume that all of the random variables (discrete, continuous or mixture) have finite second order moments.
All probability distributions are assumed to be nonsingular. Hence, in the general case for a random variable $X$, the distribution of $X$ can be decomposed as $p_X=\delta p_c + (1-\delta) p_d$, where $p_c$ and $p_d$ are the continuous and the discrete part of the distribution and $0\leq \delta \leq 1$ is the weight of the continuous part. Thus, $\delta=0$ and $\delta=1$ correspond to the fully discrete and fully continuous cases respectively. For such a probability distribution, the R\'enyi information dimension is interchangeably denoted by $d(p_X)$ or $d(X)$ and is equal to the weight of the continuous part $\delta$.
There is another representation for a random variable $X$ that we will repeatedly use in the paper. Assume $U$ is a continuous random variable with probability distribution $p_c$ and $V$ is a discrete random variable with probability distribution $p_d$ and $U$ and $V$ are independent. Let $\Theta \in \{0,1\}$ be a binary-valued random variable, independent of $U$ and $V$ with $\mathbb{P}(\Theta=1)=\delta$. It is easy to see that we can represent $X$ as $X=\Theta U + \bar{\Theta} V$, where $\bar{\Theta}=1-\Theta$. In this case, the random variable $X$ will have the distribution $p_X=\delta p_c + (1-\delta) p_d$. Also, if $X_1^n$ is a sequence of such random variables with the corresponding binary random variables $\Theta_1^n$, $C_\Theta=\{i\in [n]: \Theta_i=1\}$ is a random set consisting of the positions of the continuous components of the signal. Similarly, $\bar{C}_\Theta =[n]\backslash C_\Theta$ is defined to be the set of positions of the discrete components.
For a matrix $\Phi$ of a given dimension $m\times n$ and a set $S\subset [n]$, $\Phi_S$ is a sub-matrix of dimension $m\times |S|$ consisting of those columns of $\Phi$ having index in $S$. Similarly, for a vector of random variables $X_1^n$, the vector $X_S=\{X_i : i\in S\}$ is a sub-vector of $X_1^n$ consisting of those random variables having index in $S$. For two matrices $A$ and $B$ of dimensions $m_1\times n$ and $m_2\times n$, $[A;B]$ denotes the $(m_1+m_2)\times n$ matrix obtained by vertically concatenating $A$ and $B$.
For an $x\in \mR$ and a $q\in \mathbb{N}$, $[x]_q=\frac{ \lfloor q x \rfloor }{q}$ denotes the uniform quantization of $x$ by interspacing $\frac{1}{q}$. Similarly, for a vector of random variables $X_1^n$, $[X_1^n]_q$ will denote the component-wise uniform quantization of $X_1^n$.
For $a(q)$ and $b(q)$ two functions of $q$, $a(q) \preceq b(q)$ or equivalently $b(q)\succeq a(q)$ will be used for $$\lim_{q\to\infty} \frac{b(q)-a(q)}{\log_2(q)}\geq 0.$$ Similarly, $a(q)\doteq b(q)$ is equivalent to $a(q) \preceq b(q)$ and $a(q) \succeq b(q)$.
An ensemble of single-terminal measurement matrices will be denoted by $\{\Phi _N\}$, where $N$ is the labeling sequence and can be any subsequence of $\mathbb{N}$. The dimension of the family will be denoted by $m_N\times N$, where $m_N$ is the number of measurements taken by $\Phi_N$. The asymptotic measurement rate of the ensemble is defined by $\limsup_{N \to \infty} \frac{m_N}{N}$. We will also work with an ensemble of multi-terminal measurement matrices. We will focus on the two-terminal case, and the extension to more than two terminals will be straightforward. We will denote these two terminals by $x,y$ and the corresponding ensemble by $\{\Phi_N^x,\Phi_N^y\}$ with the corresponding dimension $m_N^x \times N$ and $m_N^y\times N$. The measurement rate vector for this ensemble will be denoted by $(\rho_x,\rho_y)$, where $\rho_x=\limsup_{N\to\infty} \frac{m_N^x}{N}, \rho_y=\limsup_{N\to\infty} \frac{m_N^y}{N}$.
\section{R\'enyi information dimension}\label{section:RID}
Let $X$ be a random variable with a probability distribution $p_X$ over $\mR$. The upper and the lower RID of this random variable are defined as follows:
\begin{align*}
\bar{d}(X)&=\limsup _{q\to\infty} \frac{H([X]_q)}{\log_2(q)},\\
\underline{d}(X)&=\liminf_{q\to \infty} \frac{H([X]_q)}{\log_2(q)}.
\end{align*}
By the Lebesgue (or Jordan) decomposition theorem, any probability distribution $p_X$ over $\mR$ can be written as a convex combination of a discrete part, a continuous part and a singular part, namely,
\begin{align*}
p_X=\alpha_d p_d + \alpha_c p_c + \alpha_s p_s,
\end{align*}
where $p_d$, $p_c$ and $p_s$ denote the discrete, continuous and singular parts of the distribution, $\alpha_d,\alpha_c,\alpha_s\geq 0$ and $\alpha_d+\alpha_c+\alpha_s=1$. In \cite{renyi}, R\'enyi showed that if $\alpha_s=0$, namely, there is no singular part in the distribution and $p_X=(1-\delta)\,p_d + \delta\, p_c$ for some $\delta \in [0,1]$, then the RID is well-defined and $d(X)=\bar{d}(X)=\underline{d}(X)=\delta$. Moreover, he proved that if $X_1^n$ is a continuous random vector then $\lim_{q\to \infty} \frac{H([X_1^n]_q)}{\log_2(q)}=n$, implying an RID of $n$ for an $n$-dimensional continuous random vector.
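R\'enyi's result can be verified in closed form for a simple mixture. The Python sketch below (ours) takes $X$ equal to a point mass at $0$ with probability $1-\delta$ and uniform on $[0,1)$ with probability $\delta$; the exact quantized entropy then gives a ratio $H([X]_q)/\log_2(q)$ approaching $\delta$ as $q$ grows:

```python
import math

def quantized_entropy_ratio(delta, q):
    """H([X]_q)/log2(q) for X = (1-delta)*(point mass at 0) + delta*Uniform[0,1)."""
    # The quantizer cell containing 0 holds the atom plus a 1/q slice of the
    # uniform part; each of the remaining q-1 cells has probability delta/q.
    p0 = (1 - delta) + delta / q
    p = delta / q
    H = -p0 * math.log2(p0) - (q - 1) * p * math.log2(p)
    return H / math.log2(q)
```

The ratio approaches $\delta$ from above; convergence is slow (the gap is of order $1/\log_2 q$), which is why the RID is defined as a limit of normalized entropies rather than at any fixed $q$.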
Our objective is to extend the definition of the RID to arbitrary random vectors, which are not necessarily continuous. To do so, we first restrict ourselves to a rich space of random variables with well-defined RID. Over this space, it will be possible to give a full characterization of the RID, as we will see shortly.
\begin{definition}
Let $(\Omega,{\cal F},\mathbb{P})$ be a standard probability space. The space ${\cal L}(\Omega,{\cal F},\mathbb{P})$ is defined as ${\cal L} =\cup _{n=1}^\infty {\cal L} _n $, where ${\cal L}_1$ is the set of all nonsingular random variables and for $n\in \mathbb{N} \backslash \{1\}$, ${\cal L}_n$ is the space of $n$-dimensional random vectors defined as
\begin{align*}
{\cal L}_n=&\{X_1^n: \text{ there exist } k\in \mathbb{N}, A \in \mR^{n\times k} \text{ and } Z_1^k\\
&\text{ independent and nonsingular such that } X_1^n=A Z_1^k\}.
\end{align*}
\end{definition}
\begin{remark}
It is not difficult to see that every $n$-dimensional random vector, singular or nonsingular, can be well approximated by elements of the space ${\cal L}$, for example in the $\ell_2$-sense. However, this is not sufficient to fully characterize the RID. In particular, the RID is discontinuous in the $\ell _p$ topology, $p\geq 1$. For example, we can construct a sequence of fully discrete random variables in ${\cal L}$ converging in $\ell_p$ to a fully continuous random variable, whereas the RID along the sequence is $0$ and does not converge to $1$, the RID of the limit. Despite this mathematical difficulty in characterizing the RID, we believe that the space ${\cal L}$ is rich enough to model most of the cases encountered in applications.
\end{remark}
Over ${\cal L}$, we will generalize the definition of the RID to include joint RID, conditional RID and R\'enyi information defined as follows.
\begin{definition}
Let $X_1^n$ be a random vector in ${\cal L}$. The joint RID of $X_1^n$ provided that it exists, is defined as
$$d(X_1^n)=\lim_{q\to\infty} \frac{H([X_1^n]_q)}{\log_2(q)}.$$
\end{definition}
\begin{definition}
Let $(X_1^n,Y_1^m)$ be a random vector in ${\cal L}$. The conditional RID of $X_1^n$ given $Y_1^m$ and the R\'enyi information of $Y_1^m$ about $X_1^n$, provided they exist, are defined as follows:
\begin{align*}
&d(X_1^n|Y_1^m)=\lim_{q\to\infty} \frac{H([X_1^n]_q|Y_1^m)}{\log_2(q)}\\
&I_R(X_1^n;Y_1^m)=d(X_1^n)-d(X_1^n|Y_1^m).
\end{align*}
\end{definition}
Generally, it is difficult to characterize the RID of a general multi-dimensional distribution because it can place probability mass on complicated subsets or sub-manifolds of lower dimension. However, we will show that the vector R\'enyi information dimension is well-defined over the space ${\cal L}$. In order to characterize the RID over ${\cal L}$, we also need two concepts from linear algebra: for two matrices of appropriate dimensions, we propose the following definitions of the ``influence" of one matrix on another and the ``residual" of one matrix given another.
\begin{definition}
Let $A$ and $B$ be two arbitrary matrices of dimension $m_1\times n$ and $m_2\times n$. Also let $K \subset [n]$. The influence of the matrix $B$ on the matrix $A$ and the residual of the matrix $A$ given $B$ over the column set $K$ are defined to be
\begin{align*}
I(A;B)[K]&=\mathrm{rank}([A;B]_K)-\mathrm{rank}(A_K),\\
R(A;B)[K]&=\mathrm{rank}([A;B]_K)-\mathrm{rank}(B_K).
\end{align*}
\end{definition}
\begin{remark}
It is easy to check that $I(A;B)[K]$ is the amount of increase of the rank of the matrix $A_K$ by adding rows of the matrix $B_K$ and $R(A;B)[K]$ is the residual rank of the matrix $A_K$ knowing the rows of the matrix $B_K$. Moreover, one can easily check that $I(A;B)[K]=R(B;A)[K]$.
\end{remark}
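The influence and residual are easy to compute numerically. The self-contained Python sketch below (helper names ours) uses exact Gaussian elimination over the rationals, and illustrates the identity $I(A;B)[K]=R(B;A)[K]$ noted in the remark:

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def cols(rows, K):
    """Restrict a matrix (list of rows) to the column set K."""
    return [[row[j] for j in K] for row in rows]

def influence(A, B, K):
    """I(A;B)[K] = rank([A;B]_K) - rank(A_K)."""
    return rank(cols(A + B, K)) - rank(cols(A, K))

def residual(A, B, K):
    """R(A;B)[K] = rank([A;B]_K) - rank(B_K)."""
    return rank(cols(A + B, K)) - rank(cols(B, K))

A = [[1, 0, 1], [0, 1, 1]]
B = [[1, 1, 2]]   # the single row of B is the sum of the rows of A
K = [0, 1, 2]
```

Here `influence(A, B, K)` is $0$ (the row of $B$ adds no rank on top of $A$), while `residual(A, B, K)` is $1$ (one dimension of $A$ survives knowledge of $B$).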
\begin{thm}\label{RID_maintheorem}
Let $(X_1^n,Y_1^m)$ be a random vector in the space ${\cal L}$, namely, there are i.i.d. nonsingular random variables $Z_1^k$ and two matrices $A$ and $B$ of dimension $n\times k$ and $m\times k$ such that $X_1^n=A Z_1^k$ and $Y_1^m=B Z_1^k$. Let $Z_i=\Theta_i U_i + \bar{\Theta}_i V_i$ be the mixture representation of $Z_i$, $i \in [k]$, where $U_i$ is continuous, $V_i$ is discrete and $\Theta_i$ is the Bernoulli indicator of the continuous component. Then, we have
\begin{enumerate}
\item $d(X_1^n)=\mathbb{E} \{ \mathrm{rank}(A_{C_\Theta})\}$,
\item $d(X_1^n|Y_1^m)=\mathbb{E} \{ R(A;B)[C_\Theta]\}$,
\end{enumerate}
where $C_\Theta=\{ i\in[k]: \Theta_i=1\}$ is the random set consisting of the position of continuous components.
\end{thm}
\begin{remark}
Notice that the results make intuitive sense: for a specific realization $\theta_1^k$, if $\theta_i=0$ we can neglect $Z_i$ because it is fully discrete and does not affect the RID. Moreover, over the continuous components the resulting contribution to the RID is equal to the rank of the matrix $A_{C_\theta}$, which is the effective dimension of the space over which the continuous random variable $A_{C_\theta} U_{C_\theta}$ is distributed. Finally, all of these contributions are averaged over all possible realizations of $\Theta_1^k$.
\end{remark}
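For small $k$, the two expectations in Theorem \ref{RID_maintheorem} can be evaluated by brute force. The Python sketch below (ours) assumes each $\Theta_i$ is an independent Bernoulli indicator with $\mathbb{P}(\Theta_i=1)=d(Z_i)$, enumerates all realizations of $\Theta_1^k$, and averages the corresponding ranks:

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    """Matrix rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def expected_over_theta(A, B, d):
    """Return (E[rank(A_C)], E[rank([A;B]_C) - rank(B_C)]) over Theta,
    i.e. the rank characterizations of d(X) and d(X|Y)."""
    k = len(d)
    dx, dx_given_y = 0.0, 0.0
    for theta in product([0, 1], repeat=k):
        prob = 1.0
        for t, di in zip(theta, d):
            prob *= di if t else 1 - di
        C = [i for i in range(k) if theta[i]]
        sub = lambda rows: [[row[i] for i in C] for row in rows]
        dx += prob * rank(sub(A))
        dx_given_y += prob * (rank(sub(A) + sub(B)) - rank(sub(B)))
    return dx, dx_given_y
```

For instance, with $X=Z_1+Z_2$ and $Y=Z_2$ (so $A=[1\ 1]$, $B=[0\ 1]$), the routine returns $d(X)=1-(1-\delta_1)(1-\delta_2)$ and $d(X|Y)=\delta_1$, matching the intuition that, given $Y$, only the continuous part of $Z_1$ remains.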
Using Theorem \ref{RID_maintheorem}, it is possible to prove a list of properties of the RID.
\begin{thm}\label{RID_extensions}
Let $(X_1^n,Y_1^m)$ be a random vector in ${\cal L}$ as in Theorem \ref{RID_maintheorem}. Then, we have the following properties:
\begin{enumerate}
\item $d(X_1^n)=d(M X_1^n)$ for any arbitrary invertible matrix $M$ of dimension $n\times n$.
\item $d(X_1^n,Y_1^m)=d(X_1^n)+d(Y_1^m|X_1^n)$.
\item $I_R(X_1^n;Y_1^m)=I_R(Y_1^m;X_1^n)$.
\item $I_R(X_1^n;Y_1^m)\geq 0$ and $I_R(X_1^n;Y_1^m)=0$ if and only if $X_1^n$ and $Y_1^m$ are independent after removing discrete common parts, namely, those $Z_i, i\in[k]$ that are fully discrete.
\end{enumerate}
\end{thm}
Further investigation also shows a very nice duality between the discrete entropy and the RID, as depicted in Table \ref{tab:duality1}. As we will see in Subsections \ref{results:single} and \ref{results:multi}, this duality extends to some of the theorems of classical information theory, such as the single terminal and multi terminal (Slepian \& Wolf) source coding problems.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}
\hline
Discrete random variables & Random variables in ${\cal L}$ \\
Discrete entropy $H$ & RID $d$\\
Conditional entropy & Conditional RID \\
Mutual information & R\'enyi mutual information \\
Deterministic & Discrete\\
Chain rule & Chain rule\\
\hline
\hline
Single terminal source coding & Single terminal A2A compression\\
Multi terminal source coding & Multi terminal A2A compression\\
\hline
\end{tabular}
\caption{Duality between $H$ and $d$}
\label{tab:duality1}
\end{table}
\section{Main results}
In this section, we will give a brief overview of the results proved in the paper. Subsection \ref{results:RID_polarize} is devoted to the results obtained for the polarization of the R\'enyi information dimension. These results are used in Subsections \ref{results:single} and \ref{results:multi} to study {\it A2A compression} problem from an information theoretic point of view. Subsection \ref{results:single} considers the single terminal case whereas Subsection \ref{results:multi} is devoted to the multi terminal case.
\subsection{Polarization of the R\'enyi information dimension}\label{results:RID_polarize}
Before stating the polarization result for the RID, we define the $m$-dimensional erasure process as follows.
\begin{definition}
Let $\alpha \in [0,1]$. An ``erasure process'' with initial value $\alpha$ is defined as follows.
\begin{enumerate}
\item $e^\emptyset=\alpha$. $e^+=2\alpha-\alpha^2$ and $e^-=\alpha^2$.
\item Let $e_n=e^{b_1b_2\dots b_n}$, for some arbitrary $\{+,-\}$-valued sequence $b_1^n$. Define
\begin{align*}
e_n^+&=e^{b_1b_2\dots b_n+}=2e_n - e_n^2,\\
e_n^-&=e^{b_1b_2\dots b_n-}=e_n^2.
\end{align*}
\end{enumerate}
\end{definition}
\begin{remark}
Notice that using the $\{+,-\}$ labeling, we can construct a binary tree where each leaf of the tree is assigned a specific $\{+,-\}$-valued sequence.
\end{remark}
Let $\{B_n\}_{n=1}^\infty$ be a sequence of i.i.d. uniform $\{+,-\}$-valued random variables. By substituting $B_1^n$ for the $\{+,-\}$-labeling $b_1^n$ in the definition of the erasure process, we obtain a stochastic process $e_n=e^{B_1B_2 \dots B_n}$. Let ${\cal F}_n$ be the $\sigma$-field generated by $B_1^n$. Using the BEC polarization results \cite{channel_polarcode, rate_polarcode}, we have the following:
\begin{enumerate}
\item $(e_n,{\cal F}_n)$ is a positive bounded martingale.
\item $e_n$ converges to $e_\infty \in \{0,1\}$ with $\mathbb{P}(e_\infty=1)=\alpha$.
\item For any $0< \beta <\frac{1}{2}$, $\liminf _{n \to \infty} \mathbb{P}(e_n \leq 2^{-N^\beta})=1-\alpha$, where $N=2^n$ is the number of possible realizations of $e_n$.
\end{enumerate}
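These three facts are easy to check numerically. The Python sketch below (ours) enumerates all $2^n$ leaves of the erasure recursion: the average stays exactly at $\alpha$ (the martingale property), while an increasing fraction of the leaves is pushed toward $0$ or $1$:

```python
def erasure_leaves(alpha, n):
    """All 2^n leaf values e^{b_1...b_n} of the recursion e+ = 2e - e^2, e- = e^2."""
    level = [alpha]
    for _ in range(n):
        level = [v for e in level for v in (2 * e - e * e, e * e)]
    return level

def polarized_fraction(alpha, n, eps=0.01):
    """Fraction of leaves within eps of 0 or 1 after n polarization steps."""
    leaves = erasure_leaves(alpha, n)
    return sum(1 for e in leaves if e < eps or e > 1 - eps) / len(leaves)
```

Since $(e^+ + e^-)/2 = e$, the average over the leaves equals $\alpha$ at every level, while the polarized fraction grows with $n$ toward $1$.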
Let $n\in \mathbb{N}$ and $N=2^n$. Assume that $X_1^N$ is a sequence of i.i.d. nonsingular random variables with RID $d(X)$ and let $Z_1^N=H_N X_1^N$, where $H_N$ is the Hadamard matrix of order $N$. For $i\in[N]$, let us define $I_n(i)=d(Z_i|Z_1^{i-1})$. Assume that $b_1^n$ is the binary expansion of $i-1$. By replacing $0$ by $+$ and $1$ by $-$, we can equivalently represent $I_n(i)$ by a sequence of $\{+,-\}$ values, namely, $I_n(i)=I^{b_1b_2\dots b_n}$. Similar to the erasure process, we can convert $I_n$ to a stochastic process $I_n=I^{B_1B_2\dots B_n}$ by using i.i.d. uniform $\{+,-\}$-valued random variables $B_1^n$. We have the following theorem.
\begin{thm}[Single terminal RID polarization]\label{single_RID_polarization}
$(I_n,{\cal F}_n)$ is an erasure stochastic process with initial value $d(X)$ polarizing to $\{0,1\}$.
\end{thm}
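The transform $Z_1^N=H_N X_1^N$ uses the Hadamard matrix of order $N=2^n$; assuming the standard Sylvester construction, it can be generated recursively as in the sketch below (ours):

```python
def hadamard(n):
    """Sylvester construction: H_1 = [1] and H_{2N} = [[H_N, H_N], [H_N, -H_N]]."""
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H
```

The rows are mutually orthogonal, $H_N H_N^T = N I_N$, so $H_N$ is invertible and, by the invariance of the RID under invertible maps, $d(Z_1^N)=d(X_1^N)$.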
For $n\in \mathbb{N}$ and $N=2^n$, let $\{(X_i,Y_i)\}$ be a sequence of i.i.d. random vectors in the space ${\cal L}$, with joint and conditional RIDs $d(X,Y)$, $d(X|Y)$ and $d(Y|X)$. Let $Z_1^N=H_N X_1^N$ and $W_1^N=H_N Y_1^N$. Let us define two processes $I_n$ and $J_n$ as follows.
\begin{align*}
I_n(i)&=d(Z_i|Z_1^{i-1}), i\in [N],\\
J_n(i)&=d(W_i|W_1^{i-1},Z_1^N), i\in [N].
\end{align*}
Similarly, we can label $I_n$ and $J_n$ by $\{+,-\}$-valued sequences $b_1^n$ and convert them to stochastic processes $I_n=I^{B_1B_2\dots B_n}$ and $J_n=J^{B_1B_2\dots B_n}$. With this definition, we have the following theorem.
\begin{thm}[Multi terminal RID polarization]\label{multi_RID_polarization}
$(I_n,{\cal F}_n)$ and $(J_n, {\cal F}_n)$ are erasure stochastic processes with initial value $d(X)$ and $d(Y|X)$, both polarizing to $\{0,1\}$.
\end{thm}
\begin{remark}
In the $t$ terminal case, $t>2$, for a $t$ terminal source $(X_1,X_2,\dots, X_t)$, a similar method can be used to construct erasure processes with initial values $d(X_1),d(X_2|X_1), \dots, d(X_t |X_1^{t-1})$, polarizing to $\{0,1\}$.
\end{remark}
\subsection{Single terminal A2A compression}\label{results:single}
In this subsection, we will use the properties of the RID developed in Section \ref{section:RID} to study the A2A compression of memoryless sources. We assume that we have a memoryless source with some given probability distribution. The idea is to capture the information of the source, in a sense to be made precise in a moment, by taking linear measurements. As is usual in information theory, we are mostly interested in the asymptotic regime of large block lengths. To do so, we will use an ensemble of measurement matrices to analyze the asymptotic behavior. We will also define the notion of REP (restricted iso-entropy property) for an ensemble of measurement matrices. This subsection is devoted to the single terminal case. The results for the multi terminal case will be given in Subsection \ref{results:multi}. We are mostly interested in the measurement rate region needed to successfully capture the source.
\begin{definition}
Let $X_1^N$ be a sequence of i.i.d. random variables with a probability distribution $p_X$ (discrete, mixture or continuous) over $\mR$, and let $D_1^N=[X_1^N]_q$ for $q \in \mathbb{N}$. The family of measurement matrices $\{\Phi_N\}$, indexed with a subsequence of $\mathbb{N}$ and with dimension $m_N\times N$, is $\epsilon$-REP($p_X$) with the measurement rate $\rho$ if
\begin{align}\label{rep_definition}
&\limsup_{q\to\infty} \frac{H(D_1^N|\Phi_N X_1^N)}{H(D_1^N)} \leq \epsilon,\\
&\limsup _{N \to \infty} \frac{m_N}{N}=\rho. \nonumber
\end{align}
\end{definition}
To give some intuitive justification for the REP definition, let us assume that all of the measurements are captured with a device with finite precision $\frac{1}{q_0}$ for some $q_0 \in \mathbb{N}$. In that case, although the potential information of the signal, in terms of bits, can be very large, what we effectively observe through the finite precision device is only $H([X_1^N]_{q_0})$. In such a setting, the fraction of the information we lose after taking the measurements, assuming that some genie gives us the infinite precision measurements captured from the signal, is exactly the quantity appearing in the definition of REP, namely,
\begin{align}\label{information_ratio}
\frac{H(D_1^N|\Phi_N X_1^N)}{H(D_1^N)},
\end{align}
where we assume that $D_1^N=[X_1^N]_{q_0}$. This is a reasonable model for applications, since it closely reflects what happens in practice. The problem with this model is that it is not invariant under some obvious transformations such as scaling. For example, assume that we scale the signal by some real number. Through simple examples it is possible to show that the ratio in (\ref{information_ratio}) can then change considerably. There are two approaches to cope with this problem. One is to scale the signal by a suitable factor to match it to the finite precision quantizer, which is interesting in its own right but probably too complicated to analyze. The other, which we take, is to develop a theory for the case in which the resolution is high enough that the quality measure proposed in (\ref{information_ratio}) is not affected by the shape of the distribution of the signal.
\begin{remark}
Notice that in the fully discrete case, the REP definition is simplified to the equivalent form
\begin{align*}
&\frac{H(X_1^N|\Phi_N X_1^N)}{H(X_1^N)} \leq \epsilon,\\
&\limsup _{N \to \infty} \frac{m_N}{N}\leq\rho.
\end{align*}
\end{remark}
\begin{remark}
For a non-discrete source with strictly positive RID, $d(X)>0$, if we divide the numerator and the denominator in expression (\ref{rep_definition}) by $\log_2(q)$, take the limit as $q$ tends to infinity and use the definition of the RID, we get the equivalent form
$$\frac{d(X_1^N|\Phi_N X_1^N)}{d(X_1^N)} \leq \epsilon.$$
Interestingly, this implies that in the high resolution regime that we consider for analysis, information isometry (keeping more than a $1-\epsilon$ fraction of the information of the signal) is equivalent to R\'enyi isometry. Moreover, from the properties of the RID, it is easy to see that this REP measure meets some of the invariance requirements that we expect. For example, it is scale invariant, and any invertible linear transformation of the input signal $X_1^N$ leaves the $\epsilon$-REP measure unchanged.
\end{remark}
We can also extend the definition when the probability distribution of the source is not known exactly but it is known to belong to a given collection of distributions $\Pi$.
\begin{definition}
Assume $\Pi$ is a class of nonsingular probability distributions over $\mR$. The family of measurement matrices $\{\Phi_N\}$, indexed with a subsequence of $\mathbb{N}$ and with dimension $m_N\times N$, is $\epsilon$-REP \hspace{-1mm}($\Pi$) for measurement rate $\rho$ if it is $\epsilon$-REP \hspace{-1mm}($\pi$) for every $\pi \in \Pi$.
\end{definition}
Now that we have the required tools and definitions, we give a characterization of the required measurement rate in order to keep the information isometry. Similar to all theorems in information theory, we do this using the ``converse" and ``achievability" parts.
\begin{thm}[Converse result]\label{mixture_converse}
Let $X_1^N$ be a sequence of i.i.d. random variables in ${\cal L}$. If $\{\Phi_N\}$ is a family of $\epsilon$-REP($p_X$) measurement matrices of dimension $m_N\times N$, then $\rho \geq d(X_1)(1-\epsilon)$.
\end{thm}
\begin{remark}
This result implies that to capture the information of the signal, the asymptotic measurement rate must be approximately greater than the RID of the source. This is in some sense similar to the single terminal source coding problem, in which the encoding rate must be greater than the entropy of the source. This again emphasizes the analogy between $H$ and $d$. Moreover, in the discrete case, $d(X)=0$, the result is trivial.
\end{remark}
\begin{remark}
It was proved in \cite{verdu} that under linear encoding and the block error probability distortion criterion, the measurement rate must be higher than the RID of the source, $\rho\geq d(X)$. Theorem \ref{mixture_converse} strengthens this result, stating that $\rho \geq d(X)(1-\epsilon)$ must hold even under the milder $\epsilon$-REP restriction on the measurement ensemble.
\end{remark}
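The converse bound has a simple counting intuition. For an i.i.d. mixture source with continuous weight $\delta=d(X_1)$, the number of continuous coordinates in $X_1^N$ is $K\sim\mathrm{Binomial}(N,\delta)$, and $m$ linear measurements can account for at most $m$ continuous degrees of freedom. Under the idealized assumption that every submatrix of $\Phi_N$ has full rank, the expected lost dimension is $\mathbb{E}[(K-m)^+]$, computed exactly by the sketch below (ours):

```python
from math import comb

def lost_fraction(N, m, delta):
    """E[(K-m)^+] / (N*delta) for K ~ Binomial(N, delta): the fraction of the
    RID of X_1^N that m generic linear measurements cannot capture."""
    excess = sum(comb(N, k) * delta**k * (1 - delta)**(N - k) * (k - m)
                 for k in range(m + 1, N + 1))
    return excess / (N * delta)
```

For a measurement rate $m/N$ above $\delta$ the lost fraction vanishes as $N$ grows, while below $\delta$ it stays bounded away from zero, consistent with the bound $\rho \geq d(X_1)(1-\epsilon)$.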
Theorem \ref{mixture_converse} puts a lower bound on the measurement rate required for the $\epsilon$-REP property. However, it might happen that no measurement family achieves this bound. Fortunately, as we will see, it is possible to deterministically truncate the family of Hadamard matrices to obtain a measurement family with the $\epsilon$-REP property and measurement rate $d(X)$. This is summarized in the following two theorems. Notice that in the fully continuous case, as Theorem \ref{mixture_converse} implies, the feasible measurement rate is approximately $1$, which can be achieved, for example, with any complete orthonormal family; thus no explicit construction is necessary. For the noncontinuous case, we will distinguish between the fully discrete case and the mixture case because they need different proof techniques. Theorems \ref{achievability_discrete} and \ref{achievability_hadamard} summarize the results.
\begin{thm}[Achievability result]\label{achievability_discrete}
Let $X_1^N$ be a sequence of i.i.d. discrete integer\footnote{We proved this theorem using the EPI result we developed in \cite{epi_Z}, where we proved the result for lattice discrete random variables. However, we believe that such a result is also true for non-lattice discrete distributions.}-valued random variables. Then, for any $\epsilon >0$, there is a family of $\epsilon$-REP($p_X$) partial Hadamard matrices of dimension $m_N\times N$, for $N=2^n$ with $\rho=0$.
\end{thm}
\begin{thm}[Achievability result]\label{achievability_hadamard}
Let $X_1^N$ be a sequence of i.i.d. random variables in ${\cal L}$. Then, for any $\epsilon >0$, there is a family of $\epsilon$-REP($p_X$) partial Hadamard matrices of dimension $m_N\times N$, for $N=2^n$ with $\rho=d(X_1)$.
\end{thm}
We also have the general result in Theorem \ref{universal_mixture}, which implies that we can construct a family of truncated Hadamard matrices which is $\epsilon$-REP for a class of distributions.
\begin{thm}[Achievability result]\label{universal_mixture}
Let $\Pi$ be a family of probability distributions with strictly positive RID. Then, for any $\epsilon >0$, there is a family of $\epsilon$-REP \hspace{-1mm}($\Pi$) partial Hadamard matrices of dimension $m_N\times N$, for $N=2^n$, with $\rho=\sup _{\pi \in \Pi} d(\pi)$.
\end{thm}
\begin{remark}
Theorem \ref{universal_mixture} implies that there is a fixed ensemble of measurement matrices capable of capturing the information of all of the distributions in the family $\Pi$. This is very useful in applications because taking measurements is usually costly and most of the time we do not know the exact distribution of the signal. If each distribution needed its own specific measurement matrix, we would have to do several rounds of measurement, each time taking the measurements compatible with one specific distribution and running the recovery process for that distribution. The benefit of Theorem \ref{universal_mixture} is that one measurement ensemble works for all of the distributions. Note, however, that although the measurement ensemble is fixed, the recovery (decoding) process might need to know the exact distribution of the signal in order to succeed.
\end{remark}
\subsection{Multi terminal A2A compression}\label{results:multi}
In this section, our goal is to extend the A2A compression theory from the single terminal case to the multi terminal case. In the multi terminal setting, we have a memoryless source distributed over more than one terminal, and we take linear measurements at the different terminals in order to capture the information of the source. We are again interested in the asymptotic regime for large block lengths. To do so, we will use an ensemble of distributed measurement matrices that we will introduce in a moment. Similar to the single terminal case, we are interested in the measurement rate region of the problem, namely, the number of measurements that we need from the different terminals in order to capture the signal faithfully. We will analyze the problem for the two terminal case; the extension to more than two terminals is straightforward.
\begin{definition}
Let $\{(X_i,Y_i)\}_{i=1}^N$ be a two terminal memoryless source with $(X_1,Y_1)$ being in ${\cal L}$. The family of distributed measurement matrices $\{\Phi^x_N, \Phi^y_N\}$, indexed with a subsequence of $\mathbb{N}$, is $\epsilon$-REP \hspace{-1mm}$(p_{X,Y})$ for the measurement rate $(\rho_x,\rho_y)$ if
\begin{align}\label{multiterminal_formula}
&\limsup_{q\to\infty} \frac{H([X_1^N]_q,[Y_1^N]_q|\Phi^x_N X_1^N, \Phi^y_N Y_1^N)}{H([X_1^N]_q,[Y_1^N]_q)} \leq \epsilon,\\
&\limsup _{N \to \infty} \frac{m^x_N}{N}\leq\rho_x,\ \ \limsup_{N \to \infty} \frac{m^y_N}{N}\leq \rho_y.\nonumber
\end{align}
\end{definition}
\vspace{1mm}
\begin{remark}
If $(X,Y)$ is a random vector in ${\cal L}$ with $d(X,Y)>0$, similar to what we did in the single terminal case, dividing the numerator and the denominator in expression (\ref{multiterminal_formula}) by $\log_2(q)$ and taking the limit as $q$ tends to infinity, we get the equivalent definition
\begin{align*}
\frac{d(X_1^N,Y_1^N|\Phi_N^x X_1^N, \Phi_N^y Y_1^N)}{d(X_1^N,Y_1^N)}\leq \epsilon,
\end{align*}
which implies the equivalence of the information isometry and the R\'enyi isometry.
\end{remark}
\vspace{1mm}
\begin{remark}
Notice that in the fully discrete case, the definition above is simplified to the equivalent form
\begin{align*}
&\frac{H(X_1^N,Y_1^N|\Phi^x_N X_1^N, \Phi^y_N Y_1^N)}{H(X_1^N,Y_1^N)} \leq \epsilon,\\
&\limsup _{N \to \infty} \frac{m^x_N}{N}\leq\rho_x,\ \ \limsup_{N \to \infty} \frac{m^y_N}{N}\leq \rho_y.
\end{align*}
\end{remark}
We can also extend the definition to a class of probability distributions.
\begin{definition}
Assume that $\Pi$ is a class of nonsingular probability distributions in ${\cal L}$. The family of measurement matrices $\{\Phi^x_N,\Phi^y_N\}$ is $\epsilon$-REP \hspace{-1mm}($\Pi$) for measurement rate $(\rho_x,\rho_y)$ if it is $\epsilon$-REP \hspace{-1mm}($\pi$) for every $\pi \in \Pi$.
\end{definition}
\begin{definition}
Let $(X,Y)$ be a two dimensional random vector in ${\cal L}$ with a distribution $p_{X,Y}$. The R\'enyi information region of $p_{X,Y}$ is the set of all $(\rho_x,\rho_y) \in [0,1]^2$ satisfying
\begin{align*}
\rho_x \geq d(X|Y), \ \rho_y \geq d(Y|X), \ \rho_x+\rho_y \geq d(X,Y).
\end{align*}
\end{definition}
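The region has the same polymatroid shape as the Slepian \& Wolf rate region, with the discrete entropy replaced by the RID. A membership test is a one-liner (the numeric values in the example below are illustrative, not computed from a particular source):

```python
def in_renyi_region(rho_x, rho_y, d_x_given_y, d_y_given_x, d_xy):
    """Check (rho_x, rho_y) against the three constraints of the region."""
    return (rho_x >= d_x_given_y and
            rho_y >= d_y_given_x and
            rho_x + rho_y >= d_xy)
```

With hypothetical values $d(X|Y)=0.2$, $d(Y|X)=0.3$ and $d(X,Y)=0.8$, the point $(0.5,0.3)$ lies on the boundary $\rho_x+\rho_y=d(X,Y)$, whereas $(0.4,0.3)$ violates the sum constraint.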
\begin{definition}
Assume that $\Pi$ is a class of two dimensional random vectors from ${\cal L}$. The R\'enyi information region of the class $\Pi$ is the intersection of the R\'enyi information regions of the distributions in $\Pi$.
\end{definition}
Similar to the single terminal case, we are interested in the rate region of the problem. We have the following converse and achievability results.
\begin{thm}[Converse result] \label{converse_theorem_multi}
Let $\{(X_i,Y_i)\}_{i=1}^N$ be a two-terminal memoryless source with $(X_1,Y_1)$ being in ${\cal L}$. Assume that the distributed family of measurement matrices $\{\Phi^x_N,\Phi^y_N\}$ is $\epsilon$-REP with a measurement rate $(\rho_x,\rho_y)$. Then,
\begin{align*}
&\rho_x+\rho_y \geq d(X,Y)(1-\epsilon),\\
&\rho_x \geq d(X|Y) -\epsilon d(X,Y),\ \rho_y \geq d(Y|X) -\epsilon d(X,Y).
\end{align*}
\end{thm}
\begin{remark}
This rate region is very similar to the rate region of the distributed source coding (Slepian\,\&\,Wolf) problem, the only difference being that the discrete entropy has been replaced by the RID, which again emphasizes the analogy between the discrete entropy and the RID. Similar to the Slepian\,\&\,Wolf problem, we call $\rho_x+\rho_y=d(X,Y)$ the dominant face of the measurement rate region.
\end{remark}
\begin{thm}[Achievability result]\label{achievability_discrete_multi}
Let $\{(X_i,Y_i)\}_{i=1}^N$ be a discrete two-terminal memoryless source. Then there is a family of $\epsilon$-REP partial Hadamard matrices $\{\Phi^x_N,\Phi^y_N\}$ with $(\rho_x,\rho_y)=(0,0)$.
\end{thm}
\begin{thm}[Achievability result]\label{achievability_hadamard_multi}
Let $\{(X_i,Y_i)\}_{i=1}^N$ be a two-terminal memoryless source with $(X_1,Y_1)$ belonging to ${\cal L}$. Given any $(\rho_x,\rho_y)$ satisfying
\begin{align*}
\rho_x+\rho_y \geq d(X_1,Y_1), \rho_x \geq d(X_1|Y_1), \rho_y \geq d(Y_1|X_1),
\end{align*}
there is a family of $\epsilon$-REP partial Hadamard matrices with measurement rate $(\rho_x,\rho_y)$.
\end{thm}
We also have the general result in Theorem \ref{universal_mixture_multi}, which implies that we can construct a family of truncated Hadamard matrices which is $\epsilon$-REP for a class of distributions.
\begin{thm}[Achievability result]\label{universal_mixture_multi}
Let $\Pi$ be a family of two dimensional probability distributions in ${\cal L}$. Then, for any $(\rho_x,\rho_y)$ in the R\'enyi information region of $\Pi$, there is a family of partial Hadamard matrices which is $\epsilon$-REP \hspace{-1mm}($\Pi$) with a measurement rate $(\rho_x,\rho_y)$.
\end{thm}
\section{Proof techniques}
In this section, we will give a brief overview of the techniques used to prove the results. We will divide this section into three subsections. In Subsection \ref{prooftech:RID}, we will overview the proof techniques for the RID. Subsections \ref{prooftech:single} and \ref{prooftech:multi} will be devoted to proof ideas and intuitions about the A2A compression problem in the single and multi terminal cases.
\subsection{R\'enyi information dimension}\label{prooftech:RID}
In this subsection, we will prove Theorems \ref{RID_maintheorem} and \ref{RID_extensions} and give further intuition about the RID over the space ${\cal L}$.
{\bf Proof of Theorem \ref{RID_maintheorem}:}
To prove the first part of the theorem, notice that
$$H([X_1^n]_q ) \doteq H([X_1^n]_q , \Theta_1^k) \doteq H([X_1^n]_q|\Theta_1^k),$$
because $H(\Theta_1^k) \leq k \doteq 0$. As $\Theta_1^k \in \{0,1\}^k$ and takes finitely many values, it is sufficient to show that for any realization $\theta_1^k$,
\begin{align}\label{RID_maintheorem_formula1}
\lim_{q\to\infty} \frac{H([X_1^n]_q|\theta_1^k)}{\log_2(q)}=\mathrm{rank}(A_{C_\theta}).
\end{align}
Taking the expectation over $\Theta_1^k$, we will get the result. To prove (\ref{RID_maintheorem_formula1}), notice that
\begin{align}
H([X_1^n]_q|\theta_1^k)&= H([A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q)\nonumber \\
&\doteq H([A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q|V_{\bar{C}_\theta})\label{RID_maintheorem_formula2}\\
&\doteq H([A_{C_\theta} U_{C_\theta}]_q),\label{RID_maintheorem_formula3}
\end{align}
where we used $H(V_{\bar{C}_\theta}) \leq k H(V_1) \doteq 0$. We also used the fact that knowing $V_{\bar{C}_\theta}$, $[A_{C_\theta} U_{C_\theta}]_q$ and $[A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q$ are equal up to a finite uncertainty. Specifically, suppose $L$ is the minimum number of lattice cells of side $\frac{1}{q}$ required to cover $A_{\bar{C}_\theta}\, [0,\frac{2}{q}]^{|{\bar{C}_\theta}|}$, which is a finite number. Then
$$H([A_{C_\theta} U_{C_\theta}]_q | V_{\bar{C}_\theta}, [A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q) \leq \log_2(L),$$
which implies (\ref{RID_maintheorem_formula2}) and (\ref{RID_maintheorem_formula3}).
Generally, $A_{C_\theta}$ is not full rank. Assume that the rank of $A_{C_\theta}$ is $m$ and let $A_m$ consist of $m$ linearly independent rows of $A_{C_\theta}$. It is not difficult to see that knowing $[A_m U_{C_\theta}]_q$, there is only finite uncertainty in the remaining components of $[A_{C_\theta} U_{C_\theta}]_q$, which is negligible compared with $\log_2(q)$ as $q$ tends to infinity. Therefore, we obtain
\begin{align*}
H([X_1^n]_q|\theta_1^k)&\doteq H([A_{C_\theta} U_{C_\theta}]_q)\\
&\doteq H([A_m U_{C_\theta}]_q)\\
&\doteq m \log_2(q).
\end{align*}
Thus, taking the limit as $q$ tends to infinity, we obtain
$$\lim _{q\to\infty} \frac{H([X_1^n]_q|\theta_1^k)}{\log_2(q)}=\mathrm{rank}(A_{C_\theta}).$$
Also, taking the expectation with respect to $\Theta_1^k$, we obtain $d(X_1^n)=\mathbb{E}\{\mathrm{rank}(A_{C_\Theta})\}$, which is the desired result.
To prove the second part of the theorem, notice that $$H([X_1^n]_q | Y_1^m) \doteq H([X_1^n]_q | Y_1^m , \Theta_1^k).$$ For a specific realization $\theta_1^k$ we have
\begin{align*}
H&([X_1^n]_q | Y_1^m,\theta_1^k)\\
&=H([A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q | B_{C_\theta} U_{C_\theta} + B_{\bar{C}_\theta} V_{\bar{C}_\theta})\\
&\doteq H([A_{C_\theta} U_{C_\theta} + A_{\bar{C}_\theta} V_{\bar{C}_\theta}]_q | B_{C_\theta} U_{C_\theta} + B_{\bar{C}_\theta} V_{\bar{C}_\theta} , V_{\bar{C}_\theta})\\
&\doteq H([A_{C_\theta} U_{C_\theta}]_q | B_{C_\theta} U_{C_\theta}).
\end{align*}
Generally, $A_{C_\theta}$ is not full-rank. Let $A_m$ consist of $m$ linearly independent rows of $A_{C_\theta}$, where $m=\mathrm{rank}(A_{C_\theta})$. Then
$$H([A_{C_\theta} U_{C_\theta}]_q | B_{C_\theta} U_{C_\theta})\doteq H([A_m U_{C_\theta}]_q|B_{C_\theta} U_{C_\theta}).$$
It may happen that some of the rows of $A_m$ can be written as linear combinations of the rows of $B_{C_\theta}$. Let $A_r$ be the matrix remaining after dropping these $m-r$ predictable rows of $A_m$. Given $B_{C_\theta} U_{C_\theta}$, $A_r U_{C_\theta}$ has a continuous distribution, thus $$H([A_r U_{C_\theta}]_q|B_{C_\theta} U_{C_\theta}) \doteq r \log_2(q).$$ It is easy to check that $r$ is exactly $R(A;B)[{C_\theta}]$. Therefore, taking the expectation with respect to $\Theta_1^k$, we get
$$d(X_1^n | Y_1^m)=\mathbb{E} \{ R(A;B)[C_\Theta]\}.$$
We also get the following corollary, which shows the additivity of the RID for independent random variables from ${\cal L}$.
\begin{corol}
Let $X_1^N$ be independent random variables from ${\cal L}$. Then $d(X_1^N)=\sum_{i=1}^N d(X_i)$.
\end{corol}
\begin{proof}
Notice that we can simply write $X_1^N=I_N \times X_1^N$, where $I_N$ is the identity matrix of order $N$. Therefore, by the rank characterization for the RID, we have
\begin{align*}
d(X_1^N)=\mathbb{E} \{\mathrm{rank} ( I_N[C_\Theta])\}=\mathbb{E}\{\sum_{i=1}^N \Theta_i \}=\sum_{i=1}^N d(X_i),
\end{align*}
where we used the fact that the columns of $I_N$ are linearly independent, thus adding a column increases the rank by $1$. Therefore, the rank of $I_N[C_\Theta]$ is equal to the number of $1$'s in $\Theta_1^N$, namely, $\sum_{i=1}^N \Theta_i$.
\end{proof}
Using the results of Theorem \ref{RID_maintheorem}, we can prove Theorem \ref{RID_extensions}.
{\bf Proof of Theorem \ref{RID_extensions}:} For part $1$, the proof follows from the rank characterization. We know that $X_1^n=A Z_1^k$ and $d(X_1^n)=\mathbb{E} \{\mathrm{rank}(A_{C_\Theta}) \}$. Moreover, $M X_1^n=M A Z_1^k$, thus $d(M X_1^n)=\mathbb{E} \{\mathrm{rank}(M A_{C_\Theta}) \}$. As $M$ is invertible, $\mathrm{rank}(A_{C_\Theta})=\mathrm{rank}(M A_{C_\Theta})$, thus we get the result.
For part $2$, notice that for any realization $\theta_1^k$ and the corresponding set $C_\theta$,
\begin{align*}
\mathrm{rank}([A;B]_{C_\theta})&=\mathrm{rank}(A_{C_\theta}) + R(B;A)[{C_\theta}]\\
&=\mathrm{rank}(B_{C_\theta}) + R(A;B)[{C_\theta}].
\end{align*}
Taking the expectation over $\Theta_1^k$, we get the desired result $$d(X_1^n,Y_1^m)=d(X_1^n)+d(Y_1^m|X_1^n)=d(Y_1^m)+d(X_1^n|Y_1^m).$$
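As a sanity check of this chain rule, the following Python sketch (our illustration; the matrices and RID values below are arbitrary assumptions) enumerates all $\Theta$ patterns for a pair $X_1^2=AZ_1^4$, $Y_1^3=BZ_1^4$ and verifies $d(X,Y)=d(X)+d(Y|X)=d(Y)+d(X|Y)$ numerically, with $R(A;B)[C]$ computed as the rank increase caused by appending rows.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k = 4
A = rng.integers(-2, 3, size=(2, k)).astype(float)   # X_1^2 = A Z_1^k
B = rng.integers(-2, 3, size=(3, k)).astype(float)   # Y_1^3 = B Z_1^k
deltas = [0.2, 0.5, 0.7, 0.9]                         # RIDs of Z_1^k

def rk(M, cols):
    """rank(M_C): rank of the submatrix with the columns indexed by C."""
    return np.linalg.matrix_rank(M[:, cols]) if cols else 0

d_xy = d_x = d_y = d_y_given_x = d_x_given_y = 0.0
for theta in itertools.product([0, 1], repeat=k):
    p = np.prod([dl if t else 1 - dl for t, dl in zip(theta, deltas)])
    C = [i for i, t in enumerate(theta) if t]
    joint = rk(np.vstack([A, B]), C)
    d_xy += p * joint
    d_x += p * rk(A, C)
    d_y += p * rk(B, C)
    d_y_given_x += p * (joint - rk(A, C))   # R(B;A)[C]
    d_x_given_y += p * (joint - rk(B, C))   # R(A;B)[C]

# chain rule: d(X,Y) = d(X) + d(Y|X) = d(Y) + d(X|Y)
assert abs(d_xy - (d_x + d_y_given_x)) < 1e-9
assert abs(d_xy - (d_y + d_x_given_y)) < 1e-9
```

The final assertions also reflect part $4$ of the theorem: since $R(A;B)[C]\leq\mathrm{rank}(A_C)$ pointwise, we get $d(X|Y)\leq d(X)$ after averaging.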
For part $3$, using the chain rule result from part $2$ and applying the definition of $I_R(X_1^n;Y_1^m)$, we get
\begin{align*}
I_R(X_1^n;Y_1^m)=d(X_1^n) + d(Y_1^m)-d(X_1^n,Y_1^m),
\end{align*}
which shows the symmetry of $I_R$ with respect to $X_1^n$ and $Y_1^m$.
For part $4$, notice that for a specific realization $\theta_1^k$, a simple rank check shows that $R(A;B)[{C_\theta}] \leq \mathrm{rank}(A_{C_\theta})$.
Taking the expectation over $\Theta_1^k$, we get $d(X_1^n|Y_1^m) \leq d(X_1^n)$.
If $X_1^n$ and $Y_1^m$ are independent, the equality follows from the definition. For the converse part, notice that if $X_1^n$ is fully discrete then $d(X_1^n|Y_1^m)\leq d(X_1^n)=0$. Similarly, if $Y_1^m$ is fully discrete then $d(Y_1^m|X_1^n)\leq d(Y_1^m)=0$ and, using the identity $d(X_1^n)-d(X_1^n|Y_1^m)=d(Y_1^m)-d(Y_1^m|X_1^n)$, we get the equality. This case causes no difficulty because, after removing the discrete $Z_i, i\in[k]$, either $X_1^n$ or $Y_1^m$ is equal to $0$, namely, a deterministic value, and the independence holds.
Assume that none of $X_1^n$ or $Y_1^m$ is fully discrete. Without loss of generality, let $Z_1^r$ be the non-discrete random variables among $Z_1^k$ and let $\tilde{X}_1^n$ and $\tilde{Y}_1^m$ be the resulting random vectors after dropping the discrete constituents, namely, we have $\tilde{X}_1^n=A_r Z_1^r$ and $\tilde{Y}_1^m=B_r Z_1^r$, where $A_r$ and $B_r$ are the matrices consisting of the first $r$ columns of $A$ and $B$ respectively. It is easy to check that $d(X_1^n)=d(\tilde{X}_1^n)$ and $d(\tilde{X}_1^n|\tilde{Y}_1^m)=d(X_1^n|Y_1^m)$. Thus it remains to show that $\tilde{X}_1^n$ and $\tilde{Y}_1^m$ are independent. As we have dropped all of the discrete components, the resulting $\Theta_i , \ i \in [r]$ are $1$ with strictly positive probability. This implies that for any realization of $\theta_1^r$ and the corresponding ${C_\theta}$, $R(A_r;B_r)[{C_\theta}]=\mathrm{rank}(A_{r,{C_\theta}})$. In particular, this holds for any ${C_\theta}$ of size $1$, namely, for any column of $A_r$ and $B_r$, which implies that if $A_r$ has a non-zero column, the corresponding column in $B_r$ must be zero, and if $B_r$ has a non-zero column, then the corresponding column in $A_r$ must be zero. This implies that $\tilde{X}_1^n$ and $\tilde{Y}_1^m$ depend on disjoint subsets of the random variables $Z_1^r$. Therefore, they must be independent.
\subsection{Polarization of the RID}\label{prooftech:RID_polarize}
In this section, we will prove the polarization of the RID in the single and multi terminal case as stated in Theorem \ref{single_RID_polarization} and Theorem \ref{multi_RID_polarization}. The main idea is to use the recursive structure of the Hadamard matrices and the rank characterization of the RID in the space ${\cal L}$.
\begin{proof}[{\bf Proof of Theorem \ref{single_RID_polarization}}]
For the initial value, we have $I_0(1)=d(X_1)$. Let $n\in \mathbb{N}$ and $N=2^n$.
To simplify the proof, instead of the Hadamard matrices, $H$, we will use shuffled Hadamard matrices, $\tilde{H}$, constructed as follows: $\tilde{H}_1=H_1$ and $\tilde{H}_{2N}$ is constructed from $\tilde{H}_N$ as follows
$$
\footnotesize{\begin{array}{ccc}
\left (
\begin{matrix} &\tilde{h}_1 \\
&\vdots \\
&\tilde{h}_N \\
\end{matrix} \right )
& \to &
\left (
\begin{matrix} \tilde{h}_1 &,&\tilde{h}_1\\
\tilde{h}_1 &,& -\tilde{h}_1\\
\vdots &,&\vdots\\
\tilde{h}_i &,& \tilde{h}_i\\
\tilde{h}_i &,& -\tilde{h}_i\\
\vdots &,&\vdots\\
\end{matrix} \right )
\end{array}},$$
where $\tilde{h}_i$, $i\in[N]$, denotes the $i$-th row of $\tilde{H}_N$. Let $X_1^N$ be as in Theorem \ref{single_RID_polarization} and let $\tilde{Z}_1^N=\tilde{H}_N X_1^N$, namely, the measurements obtained when $H_N$ is replaced by $\tilde{H}_N$. Also, let $\tilde{I}_n(i)=d(\tilde{Z}_i|\tilde{Z}_1^{i-1})$, $i\in[N]$. We first prove that $\tilde{I}$ is also an erasure process with initial value $d(X_1)$ that evolves as follows
\begin{align*}
\tilde{I}_n(i)^+&=\tilde{I}_{n+1}(2i-1)=2\tilde{I}_n(i)-\tilde{I}_n(i)^2\\
\tilde{I}_n(i)^-&=\tilde{I}_{n+1}(2i)=\tilde{I}_n(i)^2,
\end{align*}
where $i\in [N]$ with the corresponding $\{+,-\}$-labeling $b_1^n$. Let $\tilde{H}^{i-1}$ and $\tilde{H}^i$ denote the submatrices consisting of the first $i-1$ and the first $i$ rows of $\tilde{H}_N$, and let $\tilde{h}_i$ denote the $i$-th row of $\tilde{H}_N$. Thus, we have $\tilde{Z}_1^i=\tilde{H}^i X_1^N$ and $\tilde{Z}_1^{i-1}=\tilde{H}^{i-1} X_1^N$. As $X_1^N$ are i.i.d. nonsingular random variables, it follows that $\tilde{Z}_1^i$ belong to the space ${\cal L}$ generated by the random variables $X_1^N$.
Notice that using the rank characterization for the RID over ${\cal L}$, we have
\begin{align*}
d(\tilde{Z}_i|\tilde{Z}_1^{i-1})=\mathbb{E} \{I(\tilde{H}^{i-1};\tilde{h}_i)[C_\Theta]\},
\end{align*}
where $I(\tilde{H}^{i-1};\tilde{h}_i)[C_\Theta] \in \{0,1\}$ is the amount of increase of rank of $\tilde{H}^{i-1}_{C_\Theta}$ by adding $\tilde{h}_i$. Now, consider the stage $n+1$, where we have the shuffled Hadamard matrix $\tilde{H}_{2N}$. Consider the row $i^+$ which corresponds to the row $2i-1$ of $\tilde{H}_{2N}$.
Now, if we look at the first block of the new matrix, we simply notice that adding $\tilde{h}_i$ has the same effect in increasing the rank of this block as it had in $\tilde{H}_N$. A similar argument holds for the second block. Moreover, adding $\tilde{h}_i$ increases the rank of the matrix if it increases the rank of either the first or the second block or both. Let ${\mathbf 1} _i (\Theta_1^N) \in \{0,1\}$ denote the random rank increase in $\tilde{H}^{i-1}$ by adding $\tilde{h}_i$, then we have
$${\mathbf 1} _{2i-1} (\Theta_1^{2N})={\mathbf 1} _i ({\Theta}_1^N) +{\mathbf 1} _i (\Theta_{N+1}^{2N}) - {\mathbf 1} _i ({\Theta}_1^N) {\mathbf 1} _i ({\Theta}_{N+1}^{2N}).$$
$\Theta_1^N$ and $\Theta _{N+1}^{2N}$ are independent and identically distributed, and a simple check shows that ${\mathbf 1} _i ({\Theta}_1^N)$ and ${\mathbf 1} _i (\Theta_{N+1}^{2N})$ are as well. Taking the expectation, we obtain
\begin{align}\label{plus_part}
\tilde{I}_n(i)^+=2\tilde{I}_n(i) - \tilde{I}_n(i)^2.
\end{align}
Moreover, if we denote $\tilde{W}_1^N=\tilde{H}_N X_{N+1}^{2N}$, then by the structure of $\tilde{H}_N$ it is easy to see that $\tilde{I}_n(i)^+$ and $\tilde{I}_n(i)^-$ can be written as follows:
\begin{align*}
\tilde{I}_n(i)^+&=d(\tilde{Z}_i+\tilde{W}_i| \tilde{Z}_1^{i-1}, \tilde{W}_1^{i-1}),\\
\tilde{I}_n(i)^-&=d(\tilde{Z}_i-\tilde{W}_i| \tilde{Z}_i+\tilde{W}_i, \tilde{Z}_1^{i-1}, \tilde{W}_1^{i-1}).
\end{align*}
Using the chain rule for the RID, we have
\begin{align*}
\frac{\tilde{I}_n(i)^+ + \tilde{I}_n(i)^-}{2}&=\frac{1}{2} d(\tilde{Z}_i-\tilde{W}_i, \tilde{Z}_i+\tilde{W}_i|\tilde{Z}_1^{i-1}, \tilde{W}_1^{i-1})\\
&=\frac{1}{2} d(\tilde{Z}_i, \tilde{W}_i|\tilde{Z}_1^{i-1}, \tilde{W}_1^{i-1})\\
&=d(\tilde{Z}_i|\tilde{Z}_1^{i-1})=\tilde{I}_n(i),
\end{align*}
which along with (\ref{plus_part}), implies that $\tilde{I}_n(i)^-=\tilde{I}_n(i)^2.$ Therefore, $\tilde{I}$ evolves like an erasure process with initial value $d(X)$.
Now, notice that the only difference between $H_N$ and $\tilde{H}_N$ is the permutation of the rows, namely, there is a row shuffling matrix $B_N$ such that $\tilde{H}_N=B_N H_N$. It was proved in \cite{source_polarcode} that $B_N$ and $H_N$ commute, which implies that $\tilde{H}_N X_1^N= H_N B_N X_1^N$. However, notice that $X_1^N$ is an i.i.d. sequence and $B_N X_1^N$ is again an i.i.d. sequence with the same distribution as $X_1^N$. In particular, adding or removing $B_N$ does not change the RID values, which implies that for $Z_1^N=H_N X_1^N$ and $I_n(i)=d(Z_i|Z_1^{i-1})$, $I_n(i)=\tilde{I}_n(i)$. Therefore, $I$ is also an erasure process with initial value $d(X)$, which polarizes to $\{0,1\}$.
\end{proof}
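The erasure recursion above is easy to simulate. The following Python sketch (an illustration we add; it tracks only the scalar recursion $\tilde{I}^+=2\tilde{I}-\tilde{I}^2$, $\tilde{I}^-=\tilde{I}^2$, not the underlying random variables) generates all $2^n$ values of the process and checks the martingale property and the polarization of the extreme indices.

```python
import numpy as np

def erasure_process(d0, n):
    """All 2^n values of the erasure process after n levels,
    evolving by I^+ = 2I - I^2 and I^- = I^2 from the initial value d0.
    (The index ordering inside the array is irrelevant for these checks.)"""
    vals = np.array([d0])
    for _ in range(n):
        vals = np.concatenate([2 * vals - vals ** 2, vals ** 2])
    return vals

vals = erasure_process(0.5, 10)          # N = 1024 indices
# Martingale property: (I^+ + I^-)/2 = I, so the average stays d(X) exactly.
assert abs(vals.mean() - 0.5) < 1e-9
# Polarization: the extreme indices are already essentially 0 or 1; most of
# the mass clusters near the two endpoints as n grows.
assert vals.min() < 1e-6 and vals.max() > 1 - 1e-6
```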
Using a similar technique, we can prove Theorem \ref{multi_RID_polarization}. The main idea is that $(X,Y)$ are correlated random variables in the space ${\cal L}$ and they can be written as a linear combination of i.i.d. nonsingular random variables.
\begin{proof}[{\bf Proof of Theorem \ref{multi_RID_polarization}}]
For the initial value, we have $I_0(1)=d(X_1)$ and $J_0(1)=d(Y_1|X_1)$. As $\{(X_i,Y_i)\}_{i=1}^N$ is a memoryless source, similar to the single terminal case, it is easy to see that $I$ is an erasure process with initial value $d(X_1)$ and it remains to show that $J$ is also an erasure process but with initial value $d(Y_1|X_1)$.
Let $\tilde{H}^{i-1}$, $\tilde{H}^i$ and $\tilde{h}_i$ denote the first $i-1$ rows, the first $i$ rows, and the $i$-th row of $\tilde{H}_N$. As $X_1,Y_1\in {\cal L}$, there is a sequence of i.i.d. random variables $E_1^k$ and two vectors $a_1^k$ and $b_1^k$ such that $X_1=\sum_{i=1}^k a_i E_i$ and $Y_1=\sum_{i=1}^k b_i E_i$. As $\{(X_i,Y_i)\}_{i=1}^N$ is memoryless, there is a concatenation of i.i.d. copies of $E_1^k$,
$
E=\{E_1^k(1), E_1^k(2),\dots, E_1^k(N)\}
$,
such that
\begin{align*}
Z_1^N&=\tilde{H}_N X_1^N=[(B_N H_N) \otimes (a_1^k)^t] E,\\
W_{1}^{N}&=\tilde{H}_N Y_1^N=[(B_N H_N) \otimes (b_1^k)^t] E,
\end{align*}
where $\otimes$ denotes the Kronecker product and $(a_1^k)^t, (b_1^k)^t$ are the transpose of the column vectors $a_1^k$ and $b_1^k$. Let
\begin{align}\label{gamma_notation}
\Gamma=\{\Theta_1,\Theta_2,\dots,\Theta_N\}
\end{align}
be the random element corresponding to the $\Theta$ pattern of $E_1^k(j), j \in[N]$, where $\Theta_j \in \{0,1\}^k, j\in[N]$. Using the rank result developed for the RID, it is easy to see that for every $j\in[N]$
\begin{align*}
J_n(j)&=d(W_{j}|W_1^{j-1},Z_1^N)\\
&=\mathbb{E} \{I([H^{j-1}\otimes (b_1^k)^t; H\otimes (a_1^k)^t];h_j\otimes (b_1^k)^t)[C_\Gamma]\}.
\end{align*}
For $i\in[N]$, let ${\mathbf 1} _i (\Theta_1^N) \in \{0,1\}$ denote the random increase of the rank of $[H^{i-1}\otimes (a_1^k)^t]_{C_{\gamma}}$ caused by adding $h_i\otimes (a_1^k)^t$. Now, consider the stage $n+1$, where we combine two copies of $\tilde{H}_N$ to construct the matrix $\tilde{H}_{2N}$. The row $i$ corresponding to $W_i$ is split into two new rows $i^+$ and $i^-$, which correspond to the rows $2i-1$ and $2i$ of $\tilde{H}_{2N}$.
$$\left (
\begin{matrix}
\tilde{H}_N \otimes(a_1^k)^t &,&\tilde{H}_N \otimes(a_1^k)^t\\
\tilde{H}_N \otimes(a_1^k)^t &,& -\tilde{H}_N \otimes(a_1^k)^t\\
\vdots &,&\vdots\\
\tilde{h}_{i-1}\otimes(b_1^k)^t &,&\tilde{h}_{i-1}\otimes(b_1^k)^t\\
\tilde{h}_{i-1}\otimes(b_1^k)^t &,&-\tilde{h}_{i-1}\otimes(b_1^k)^t\\
\tilde{h}_i \otimes(b_1^k)^t&,& \tilde{h}_i\otimes(b_1^k)^t\\
\end{matrix} \right )$$
Similar to the single terminal case, we see that adding $\tilde{h}_i\otimes(b_1^k)^t$ increases the rank of the matrix if it increases the rank of either the first or the second block. In other words,
\begin{align*}
{\mathbf 1} _{2i-1} (\Theta_1^{2N})={\mathbf 1} _i ({\Theta}_1^N) +{\mathbf 1} _i (\Theta_{N+1}^{2N}) - {\mathbf 1} _i ({\Theta}_1^N) {\mathbf 1} _i ({\Theta}_{N+1}^{2N}),
\end{align*}
where ${\mathbf 1} _{i} (\Theta_1^N), {\mathbf 1} _{i} (\Theta_{N+1}^{2N}) \in \{0,1\}$ are the corresponding increases of the rank of the first and the second block caused by adding the $i$-th row. In particular, $\Theta_1^N$ and $\Theta_{N+1}^{2N}$ are i.i.d., and so are ${\mathbf 1} _i ({\Theta}_1^N)$ and ${\mathbf 1} _i (\Theta_{N+1}^{2N})$. Taking the expectation, similar to what we did in the single terminal case, we obtain that
\begin{align}\label{plus_part_multi}
J_n(i)^+=2J_n(i) - J_n(i)^2.
\end{align}
Moreover, one can also show that for $i\in [N]$, $$\frac{J_n(i)^+ + J_n(i)^-}{2}=J_n(i),$$ which together with (\ref{plus_part_multi}), implies that
$J_n(i)^-=J_n(i)^2$. Therefore, $J$ is also an erasure process with initial value $d(Y|X)$. Similar to the single terminal case, one can also show that the permutation matrix $B_N$ is not necessary, thus the proof is complete.
\end{proof}
\subsection{Single terminal A2A compression}\label{prooftech:single}
In this part, we will overview the techniques used to prove the achievability part. The converse part, given in Theorem \ref{mixture_converse}, has been proved in Appendix \ref{converse_proof_single}. We will give separate constructions for the fully discrete case and the mixture case although the proof techniques used are very similar.
\vspace{2mm}
{\bf Achievability proof for the mixture case:}
We will give an explicit construction of the measurement ensemble as follows. Let $n\in \mathbb{N}$ and let $N=2^n$. Assume that $X_1^N$ is a sequence of i.i.d. nonsingular random variables with RID equal to $d(X)$. Let $Z_1^N=H_N X_1^N$, where $H_N$ is the Hadamard matrix of order $N$, and let $I_n(i)=d(Z_i|Z_1^{i-1})$, $i\in[N]$. As we proved in Theorem \ref{single_RID_polarization}, $I$ is an erasure process with initial value $d(X)$. We construct the measurement matrix $\Phi_N$ by selecting all of the rows of $H_N$ whose corresponding $I_n$ value is greater than $\epsilon\, d(X)$. In this way, we obtain a measurement ensemble $\{\Phi_N\}$ labeled with all $N$ that are a power of $2$. Assume that the dimension of $\Phi_N$ is $m_N\times N$. It remains to prove that the ensemble $\{\Phi_N\}$ is $\epsilon$-REP with measurement rate $d(X)$. This will complete the proof of Theorem \ref{achievability_hadamard}.
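A sketch of this row-selection rule (our illustration, reusing the scalar erasure recursion; the values $d=0.3$, $\epsilon=0.1$, $n=14$ are arbitrary choices): compute the $I_n(i)$, keep the rows above the threshold $\epsilon\, d(X)$, and inspect the resulting rate $m_N/N$. Since $I_n$ is a $[0,1]$-valued martingale with mean $d(X)$, Markov's inequality gives the deterministic lower bound $m_N/N\geq d(X)(1-\epsilon)$, and the rate converges to $d(X)$ as $n$ grows.

```python
import numpy as np

def erasure_values(d0, n):
    """All 2^n values of the erasure process I_n after n levels."""
    vals = np.array([d0])
    for _ in range(n):
        vals = np.concatenate([2 * vals - vals ** 2, vals ** 2])
    return vals

d, eps, n = 0.3, 0.1, 14
I = erasure_values(d, n)
kept = I >= eps * d               # rows of H_N selected for Phi_N
rate = kept.mean()                # m_N / N, converging to d as n grows
# Markov bound for a [0,1]-martingale with mean d: P(I_n >= eps*d) >= d(1-eps).
assert rate >= d * (1 - eps) - 1e-12
# Not all rows are kept: the all-minus index has I_n = d^(2^n), essentially 0.
assert rate < 1.0
```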
\vspace{2mm}
\begin{proof}[{\bf Proof of Theorem \ref{achievability_hadamard}}] We first show that the family $\{\Phi_N\}$ has measurement rate $d(X)$. Notice that the process $I_n$ converges almost surely. Thus, it also converges in probability. Specifically, considering the uniform probability assumption, this implies that
\begin{align*}
\limsup _{N \to \infty}\frac{m_N}{N}&= \limsup_{N \to \infty} \frac{\# \{ i\in[N] : I_n(i)\geq \epsilon\, d(X) \}}{N}\\
&=\limsup_{n \to \infty} \mathbb{P}(I_n \geq \epsilon\, d(X))\\
&=\mathbb{P}(I_\infty \geq \epsilon\, d(X))=d(X).
\end{align*}
It remains to prove that $\{\Phi_N\}$ is $\epsilon$-REP \hspace{-1mm}. Let $S=\{i \in [N]: I_n(i) \geq \epsilon \, d(X)\}$ denote the set of selected rows used to construct $\Phi_N$ and let $Z_1^N=H_N X_1^N$ be the full measurements. It is easy to check that $\Phi_N X_1^N=Z_S$. Also let $B_i = S^c \cap [i-1]$ denote all of the indices in $S^c$ before $i$. We have
\begin{align*}
d(X_1^N|Z_S)&=d(Z_1^N|Z_S)=d(Z_{S^c}|Z_S)\\
&=\sum _{i \in {S^c}} d(Z_i|Z_{B_i},Z_S)\\
&\leq \sum_{i \in {S^c}} d(Z_i|Z_1^{i-1})\\
&= \sum_{i \in {S^c}} I_n(i) \leq N \epsilon\, d(X) = \epsilon\, d(X_1^N),
\end{align*}
which shows the $\epsilon$-REP property for $\{\Phi_N\}$.
\end{proof}
\vspace{2mm}
{\bf Achievability proof for the discrete case:}
For the discrete case, the construction of the measurement family is very similar to the mixture case with the only difference that instead of using the erasure process corresponding to the RID, we use the discrete entropy function. More exactly, in the discrete case, assuming that $Z_1^N=H_N X_1^N$, we define the following process for $i \in [N]$, $I_n(i)=H(Z_i|Z_1^{i-1})$. In \cite{HAT}, using the conditional EPI result \cite{epi_Z}, the following was proved.
\begin{lemma}[``Absorption phenomenon'']
$(I_n,{\cal F}_n,\mathbb{P})$ is a positive martingale converging to $0$ almost surely.
\end{lemma}
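For a small block length, the conditional entropies $I_n(i)=H(Z_i|Z_1^{i-1})$ can be computed exactly. The following Python sketch (our illustration, using the unshuffled $H_4$ with $\pm1$ entries and $p=0.05$, both arbitrary choices) enumerates all $2^4$ source sequences; since $H_4$ is invertible, the chain rule forces $\sum_i I_n(i)=H(X_1^4)=4h_2(p)$, which the code checks.

```python
import itertools
import math
from collections import defaultdict

def h(dist):
    """Shannon entropy (bits) of a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Z_1^4 = H_4 X_1^4 for i.i.d. X_i ~ Bernoulli(p) over the reals,
# by exact enumeration of the 2^4 source sequences.
H4 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
p = 0.05
joint = [defaultdict(float) for _ in range(5)]   # joint[i] = law of Z_1^i
for x in itertools.product([0, 1], repeat=4):
    prob = math.prod(p if b else 1 - p for b in x)
    z = tuple(sum(r[j] * x[j] for j in range(4)) for r in H4)
    for i in range(5):
        joint[i][z[:i]] += prob

# I_n(i) = H(Z_i | Z_1^{i-1}) = H(Z_1^i) - H(Z_1^{i-1}).
cond = [h(joint[i]) - h(joint[i - 1]) for i in range(1, 5)]
# H_4 is invertible, so the chain rule gives sum_i I_n(i) = H(X_1^4) = 4 h2(p).
h2 = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
assert abs(sum(cond) - 4 * h2) < 1e-9
```

The conditional entropies average to $h_2(p)$ per coordinate, and the spread around this average (some indices well above it, others well below) is the beginning of the absorption phenomenon stated in the lemma.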
Similar to the mixture case, we again construct the family $\{\Phi_N\}$ by selecting those rows of the shuffled Hadamard matrix with $I$ value greater than $\epsilon\, H(X_1)$.
\vspace{1mm}
\begin{proof}[{\bf Proof of Theorem \ref{achievability_discrete}}]
By a similar procedure, it is easy to show that $\{\Phi_N\}$ has zero measurement rate:
\begin{align*}
\limsup _{N \to \infty}\frac{m_N}{N}&=\limsup_{n\to\infty} \mathbb{P}(I_n \geq \epsilon H(X_1))\\
& \leq \mathbb{P}(\limsup _{n\to\infty} I_n \geq \epsilon H(X_1))\\
&=\mathbb{P}(I_\infty \geq \epsilon\, H(X_1))=0.
\end{align*}
Moreover, assuming that $S=\{i \in [N]: I_n(i) \geq \epsilon\, H(X_1) \}$ and $B_i = S^c \cap [i-1]$, we have
\begin{align*}
H(X_1^N|Z_S)&=H(Z_1^N|Z_S)=H(Z_{S^c}|Z_S)\\
&=\sum _{i \in {S^c}} H(Z_i|Z_{B_i},Z_S)\\
&\leq \sum_{i \in {S^c}} H(Z_i|Z_1^{i-1})\\
&= \sum_{i \in {S^c}} I_n(i) \leq N \, \epsilon\, H(X_1)=\epsilon\,H(X_1^N),
\end{align*}
which shows the $\epsilon$-REP property for $\{\Phi_N\}$.
\end{proof}
\vspace{2mm}
The last step is to prove Theorem \ref{universal_mixture}, namely, to show that for a family of mixture distributions $\Pi$ with strictly positive RID, there is a fixed measurement family $\{\Phi_N\}$ which is $\epsilon$-REP for all of the distributions in $\Pi$, with a measurement rate lying in the R\'enyi information region of the family.
\begin{proof}[{\bf Proof of Theorem \ref{universal_mixture}}]
The proof is simple considering the fact that the construction of the family $\{\Phi_N\}$ in the proof of Theorem \ref{achievability_hadamard} depends only on the erasure pattern. Also, the erasure pattern is independent of the shape of the distribution and depends only on its RID. Moreover, it can be shown that the erasure patterns for different values of $\delta$ are embedded in one another, namely, for $\delta>\delta'$, $I^{\delta}_n(i) \geq I^{\delta'}_n(i)$, $i\in[N]$. Considering the method we use to construct the family $\{\Phi_N\}$, this implies that an $\epsilon$-REP measurement family designed for a specific RID $\delta$ is $\epsilon$-REP for any distribution with RID less than $\delta$. Thus, if we design $\{\Phi_N\}$ for $\sup_{\pi \in \Pi}d(\pi)$, it will be $\epsilon$-REP for any distribution in the family.
\end{proof}
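The embedding of the erasure patterns follows because both branch maps $x\mapsto 2x-x^2$ and $x\mapsto x^2$ are increasing on $[0,1]$, so their compositions preserve the ordering of the initial values index by index. A short Python sketch (our illustration; the pair $\delta=0.6>\delta'=0.4$ is an arbitrary choice) confirms this numerically:

```python
import numpy as np

def erasure_values(d0, n):
    """All 2^n values of the erasure process started at d0."""
    vals = np.array([d0])
    for _ in range(n):
        # both branch maps, x -> 2x - x^2 and x -> x^2, are increasing on [0,1]
        vals = np.concatenate([2 * vals - vals ** 2, vals ** 2])
    return vals

hi, lo, n = 0.6, 0.4, 8
# index-by-index embedding: I^{delta}_n(i) >= I^{delta'}_n(i) for delta > delta'
assert np.all(erasure_values(hi, n) >= erasure_values(lo, n))
```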
Figure \ref{absorption} shows the \textit{absorption phenomenon} for a binary random variable with $\mathbb{P}(1)=p=0.05$. Figure \ref{polarization} shows the polarization of the RID for a random variable with RID $0.5$.
\begin{figure}[h]
\centering
\includegraphics[width=2.6 in]{discrete_p05}
\caption{Absorption pattern for $N=512,\ p=0.05$}
\label{absorption}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=2.6 in]{Renyi_512_05.eps}
\caption{Polarization of the RID for $N=512,d(X)=0.5$}
\label{polarization}
\end{figure}
\subsection{Multi terminal A2A compression}\label{prooftech:multi}
In this section, we will give a brief overview of the techniques used to prove the achievability part. The proof of the converse part is given in Appendix \ref{converse_proof_multi}.
{\bf Achievability proof for the mixture case:}
The proof technique is very similar to the single terminal case. We will define the suitable erasure processes and use them to construct the desired $\epsilon$-REP measurement matrices for the multi terminal case. Let $\{(X_i,Y_i)\}_{i=1}^N$ be a two-terminal memoryless source, where $N$ is a power of two. Let $Z_1^N=H_N X_1^N$ and $W_1^N=H_N Y_1^N$. For $i\in [N]$, let us define $I_n(i)=d(Z_i|Z_1^{i-1})$ and $J_n(i)=d(W_i|W_1^{i-1},Z_1^N)$. Using Theorem \ref{multi_RID_polarization}, we can show that $I_n$ and $J_n$ are erasure processes with initial values $d(X)$ and $d(Y|X)$, polarizing to $\{0,1\}$.
The next step is to construct the two-terminal measurement ensemble. Let $n\in \mathbb{N}$ and $N=2^n$. We construct $\Phi^x_N$ by selecting those rows of the Hadamard matrix $H_N$ with $I_n(i) >\epsilon\, d(X)$. Similarly, $\Phi^y_N$ is constructed by selecting those rows of $H_N$ with $J_n(i)>\epsilon\, d(Y|X)$. It remains to prove that the family $\{\Phi^x_N,\Phi^y_N\}$, labeled with $N$ a power of $2$ and of dimensions $m^x_N \times N$ and $m^y_N\times N$, is $\epsilon$-REP with measurement rate $(d(X),d(Y|X))$. By this construction, we can achieve one of the corner points of the dominant face of the rate region. If we switch the roles of $X$ and $Y$, we get the other corner point $(d(X|Y),d(Y))$. One way to obtain any point on the dominant face is to use time sharing between the two families. However, it is also possible to use an explicit construction proposed in \cite{monotone_polarcode}, which directly gives any point on the dominant face of the measurement rate region without any need for time sharing. We will only prove the achievability of the corner point $(d(X),d(Y|X))$.
\begin{proof}[{\bf Proof of Theorem \ref{achievability_hadamard_multi}}] We first show that the family $\{\Phi^x_N,\Phi^y_N\}$ has measurement rate $(d(X),d(Y|X))$. Notice that the processes $I_n$ and $J_n$ converge almost surely, thus they converge in probability. Specifically, considering the uniform probability assumption and using a similar technique as in the single terminal case, we get the following:
\begin{align*}
\limsup _{N \to \infty}\frac{m^x_N}{N}&= \limsup_{N \to \infty} \frac{\# \{ i\in[N] : I_n(i)\geq \epsilon\, d(X) \}}{N}\\
&=\limsup_{n \to \infty} \mathbb{P}(I_n \geq \epsilon\, d(X))\\
&=\mathbb{P}(I_\infty \geq \epsilon\, d(X))=d(X).
\end{align*}
Similarly, we can show that $\limsup_{N \to \infty} \frac{m^y_N}{N}=d(Y|X)$.
It remains to prove that $\{\Phi^x_N,\Phi^y_N\}$ is $\epsilon$-REP \hspace{-1mm}. Let $S_X=\{i \in [N]: I_n(i) \geq \epsilon \, d(X)\}$ and $S_Y=\{i \in [N]: J_n(i) \geq \epsilon \, d(Y|X)\}$ denote the selected rows to construct $\{\Phi^x_N,\Phi^y_N\}$ and let $Z_1^N= H_N X_1^N$ and $W_1^N= H_N Y_1^N$ be the full measurements for the $x$ and the $y$ terminal. Let $B^X_i=S_X^c \cap [1:i-1]$ and $B^Y_i=S_Y^c \cap [1:i-1]$ be the set of all indices in $S_X^c$ and $S_Y^c$ less than $i$. We have
\begin{align*}
d(X_1^N,Y_1^N&|Z_{S_X}, W_{S_Y})=d(Z_1^N,W_1^N|Z_{S_X},W_{S_Y})\\
&\leq d(Z_1^N|Z_{S_X}) + d(W_1^N|Z_1^N,W_{S_Y})\\
&\leq\sum _{i \in S^c_X} d(Z_i|Z_{B^X_i},Z_{S_X}) \\
&+ \sum_{i\in S^c_Y}d(W_i|W_{B^Y_i},W_{S_Y},Z_1^N)\\
&\leq\sum _{i \in S^c_X} d(Z_i|Z_1^{i-1}) + \sum_{i\in S^c_Y}d(W_i|W_1^{i-1}, Z_1^N)\\
& \leq N \epsilon\, d(X) + N \epsilon\, d(Y|X)\\
&= \epsilon N d(X,Y) =\epsilon\, d(X_1^N,Y_1^N),
\end{align*}
which shows the $\epsilon$-REP property for the two terminal measurement family $\{\Phi^x_N,\Phi^y_N\}$.
\end{proof}
\vspace{1mm}
{\bf Achievability proof for the discrete case:}
In the fully discrete case, the construction is very similar to the mixture case with the only difference that instead of using the RID, we will use the entropy. Similar to the single terminal case, we can prove the following.
\begin{lemma}
$(I_n,{\cal F}_n)$ and $(J_n,{\cal F}_n)$ are positive martingales converging to $0$ almost surely.
\end{lemma}
We again construct the family $\{\Phi^x_N,\Phi^y_N\}$ by selecting those rows of $H_N$ with $I_n> \epsilon H(X)$ and $J_n>\epsilon H(Y|X)$.
\begin{proof}[{\bf Proof of Theorem \ref{achievability_discrete_multi}}]
Similar to the single terminal case, it is easy to show that $\{\Phi^x_N,\Phi^y_N\}$ has measurement rate $(0,0)$.
It remains to prove that $\{\Phi^x_N,\Phi^y_N\}$ is $\epsilon$-REP . Let $S_X=\{i \in [N]: I_n(i) \geq \epsilon \, H(X)\}$ and $S_Y=\{i \in [N]: J_n(i) \geq \epsilon \, H(Y|X)\}$ denote the selected rows to construct $\{\Phi^x_N,\Phi^y_N\}$ and let $Z_1^N= H_N X_1^N$ and $W_1^N= H_N Y_1^N$ be the full measurements for the $X$ and the $Y$ terminal. Let $B^X_i=S_X^c \cap [1:i-1]$ and $B^Y_i=S_Y^c \cap [1:i-1]$ be the set of all indices in $S_X^c$ and $S_Y^c$ less than $i$. We have the following:
\begin{align*}
H(X_1^N,Y_1^N&|Z_{S_X}, W_{S_Y})=H(Z_1^N,W_1^N|Z_{S_X},W_{S_Y})\\
&\leq H(Z_1^N|Z_{S_X}) + H(W_1^N|Z_1^N,W_{S_Y})\\
&\leq\sum _{i \in S^c_X} H(Z_i|Z_{B^X_i},Z_{S_X}) \\
&+ \sum_{i\in S^c_Y}H(W_i|W_{B^Y_i},W_{S_Y},Z_1^N)\\
&\leq\sum _{i \in S^c_X} H(Z_i|Z_1^{i-1}) + \sum_{i\in S^c_Y}H(W_i|W_1^{i-1}, Z_1^N)\\
& \leq N \epsilon\, H(X) + N \epsilon\, H(Y|X)\\
&= \epsilon N H(X,Y) =\epsilon\, H(X_1^N,Y_1^N),
\end{align*}
which shows the $\epsilon$-REP property for the two terminal measurement family $\{\Phi^x_N,\Phi^y_N\}$.
\end{proof}
The last step is to prove Theorem \ref{universal_mixture_multi}, namely, to show that for a family of mixture distributions $\Pi$, there is a fixed measurement family $\{\Phi^x_N,\Phi^y_N\}$ which is $\epsilon$-REP for all of the distributions in $\Pi$, with a measurement rate in the R\'enyi information region of the family.
\begin{proof}[{\bf Proof of Theorem \ref{universal_mixture_multi}}]
The proof is simple considering the fact that the construction of the family $\{\Phi^x_N, \Phi^y_N\}$ in the proof of Theorem \ref{achievability_hadamard_multi} depends only on the erasure pattern which is independent of the shape of the distribution and only depends on its RID. This implies that for any $(\rho_x,\rho_y)$ in the R\'enyi information region of $\Pi$, the designed measurement family $\{\Phi^x_N,\Phi^y_N\}$ is $\epsilon$-REP \hspace{-1mm}$(\Pi)$.
\end{proof}
\section{Numerical simulations}
Up to now, we defined the notion of $\epsilon$-REP for an ensemble of measurement matrices. This definition is what we call an ``informational'' characterization, in the sense that taking measurements by the ensemble potentially keeps more than a $1-\epsilon$ fraction of the information of the source. Now, we can ask the natural question of whether this has some ``operational'' implication, in the sense that, after taking the linear measurements, is it possible to recover the source up to an acceptable distortion? In particular, is there a computationally feasible algorithm to do that?
To explain the operational view further, let us give an example from polar codes for binary source compression, which has many similarities with what we have done. As shown in \cite{source_polarcode}, for a binary memoryless source with $\mathbb{P}(0)=p$ and a large block length $n$, there is a matrix $G_n$, of dimension approximately equal to $nh_2(p) \times n$, such that the linear measurement of the source by this matrix over $\mathbb{F}_2$ faithfully captures all of the randomness of the source. This on its own only solves the encoding part of the problem without directly addressing the decoding part, namely, it does not imply the existence of a decoder to recover the source from the measurements up to negligible distortion (error probability). Therefore, the operational picture is not complete yet. Fortunately, in the case of polar codes, the successive cancellation decoder (or the other proposed decoders) fills this gap and shows that the \textit{informational} characterization implies the \textit{operational} characterization.
For the simulations, we use a unit-variance sparse distribution $p_X(x)=(1-\delta) \delta_0(x) + \delta p_c(x)$, where $\delta_0(x)$ is the unit delta measure at point zero, $p_c$ is the distribution of the continuous part, and $\delta \in \{0.0,0.1,\dots,0.9,1.0\}$ is the RID of the signal. We use the MSE (mean square error) as the distortion measure. The simulations are done with the Hadamard matrix of order $N=512$. To build the measurement matrix $A$, we select the rows of $H_N$ with the highest conditional RID, as stated in Section \ref{prooftech:single}, until we get an acceptable recovery distortion. Figure \ref{ell_1_PT} shows the phase transition (PT) diagram for the $\ell_1$-minimization algorithm. The simulations are done with $3$ different distributions for $p_c$: Gaussian, Laplacian, and uniform. The acceptable recovery distortion is set to $0.01$. The recovery is successful for the measurement rates above the plotted curves. The results show the insensitivity of the PT region to the distribution of the continuous components.
\begin{figure}[h!]
\centering
\includegraphics[width=3.2 in]{ell1_PT}
\caption{PT diagram for $\ell_1$-minimization}
\label{ell_1_PT}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=3.2 in]{AMP_l1_PT}
\caption{PT diagram for AMP and $\ell_1$-minimization}
\label{AMP_PT}
\end{figure}
We also used the AMP algorithm to recover the signal, where, for simplicity, we only ran the simulations for the Gaussian choice of $p_c$. The AMP iteration is as follows:
\vspace{-2mm}
\begin{align*}
&z_t=y-A \hat{x}_{t} +\frac{1}{\gamma} z_{t-1} \langle \eta'_{t-1}(A^*z_{t-1} +\hat{x}_{t-1}) \rangle, \\
&\hat{x}_{t+1}=\eta_t(A^*z_t + \hat{x}_t),
\end{align*}
where $y=A\,x$ is the vector of linear measurements taken by $A$, $\gamma$ is the measurement rate, $\langle a_1^n \rangle=\sum_{i=1}^n a_i /n$, $\eta_t(u)=(\eta_{t,1}(u_1), \dots, \eta_{t,N}(u_N))$, and $\eta_{t,i}(u_i)=\mathbb{E} \{X|u_i=X+ \tau_t N \}$, with $N \sim {\cal N}(0,1)$ independent of the signal $X$ and $\tau_t$ given by the state evolution equation of AMP, is the thresholding function designed for the known distribution of $X$. For initialization, we use $\hat{x}_0=0$ and $z_0=0$. Figure \ref{AMP_PT} compares the PT diagrams of AMP and $\ell_1$-minimization. Although AMP, with the thresholding function $\eta_t$ designed for the known distribution of the signal, performs better than $\ell_1$-minimization, there is still a gap to the optimal line.
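For reference, a compact Python sketch of this iteration (our illustration, not the exact code used for the figures): it replaces the conditional-mean denoiser $\eta_t$ with plain soft-thresholding, uses a generic Gaussian sensing matrix rather than Hadamard rows, estimates $\tau_t$ empirically as $\|z_t\|/\sqrt{m}$, and treats the threshold multiplier $\alpha$ as a tuning knob; all of these are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, delta = 1000, 500, 0.1                  # signal length, measurements, RID
# unit-variance Bernoulli-Gaussian signal: nonzeros ~ N(0, 1/delta)
x = np.where(rng.random(N) < delta, rng.standard_normal(N) / np.sqrt(delta), 0.0)
A = rng.standard_normal((m, N)) / np.sqrt(m)  # generic sensing matrix
y = A @ x

def soft(u, t):
    """Soft-thresholding denoiser, a stand-in for the MMSE eta_t."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x_hat = np.zeros(N)
z = y.copy()
alpha = 2.0                                   # assumed threshold multiplier
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)      # empirical effective noise level
    u = A.T @ z + x_hat
    x_new = soft(u, alpha * tau)
    # Onsager correction: (1/gamma) * z * <eta'>, with <eta'> = ||x_new||_0 / N
    onsager = z * (np.count_nonzero(x_new) / m)
    z = y - A @ x_new + onsager
    x_hat = x_new

# even this simplified variant should reduce the MSE well below signal power
assert np.mean((x_hat - x) ** 2) < 0.5 * np.mean(x ** 2)
```

At measurement rate $\gamma=0.5$ and RID $\delta=0.1$, this operating point sits well inside the $\ell_1$ phase-transition region, so the simplified soft-thresholding AMP is expected to converge here.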
\section*{Acknowledgment}
S.\,Haghighatshoar acknowledges Mr.\, Adel Javanmard for his helpful comments about the AMP algorithm. E.\,Abbe would like to thank Sergio Verd\`u for stimulating discussions on the R\'enyi information dimension.
\label{sec:intro}
Line charts are widely used to represent data in a wide range of documents such as financial forecasts, research publications, and medical reports \cite{kobayashi1999toward,wang2003cost}. Automatic analysis of line charts can facilitate document mining and medical diagnosis, and help the visually impaired understand documents. However, when line charts are published as images, the raw data is lost. Recovering the underlying information of line charts will improve the performance of existing chart classification and question-answering systems such as \cite{kafle2018dvqa,Kahou2018FigureQAAA,kosemen2020multi}. It is trivial to identify the maximum value in a line chart given the raw data, but not so given only an image. Additionally, some analysis can only be performed when all the raw data is available. For example, when an audiologist fits a hearing aid for a patient given an audiogram (a line chart representing one's hearing loss level at different frequencies), he or she must read the exact values from the chart. Therefore, Line Chart Data Extraction is a task of great significance, as it improves the accuracy of qualitative analyses such as classification and question answering, and makes quantitative applications such as automatic hearing-aid fitting possible.
There have been numerous methods \cite{huang2007extraction,zhou2001chart} based on manually designed features and rules; however, they fail to generalize to a diverse range of line chart designs. Many recent works such as \cite{Kato2022ParsingLC,luo2021chartocr} achieved better performance using deep neural networks. But these systems are difficult to train in that the input image is typically passed through a sequence of independent networks performing different tasks such as OCR and object detection, each requiring a separate training pipeline.
One major limitation of these works is that they focus on extracting data from ``clean'' chart images such as images scraped from the Internet, Excel sheets, and PDF documents \cite{Kato2022ParsingLC,luo2021chartocr,savva2011revision}. We argue that extracting chart data from real-world camera photos is equally important, if not more so. There is an analogy between scene text recognition as an extension of optical character recognition (OCR) and chart data extraction from camera images as an extension of chart data extraction from ``clean'' images. Just as scene text recognition powers real-time document translation and text-to-speech services, chart data extraction from camera images can power applications such as helping the visually impaired read charts in printed documents and automatically tuning hearing aids from audiogram photos taken with cellphone cameras. Unfortunately, this natural extension of chart data extraction has not been fully explored.
We believe there are several reasons why this area is under-explored. First, deep neural networks can heavily overfit small datasets, which is why current deep methods such as \cite{luo2021chartocr,Kato2022ParsingLC} are trained on large-scale datasets of scraped or generated images. However, no large-scale datasets of camera photos of line charts are available, and collecting and annotating such photos is costly and time-consuming. Second, existing methods rely on pre-trained OCR models. Camera photos introduce additional skew, rotation, and color perturbations, which lead to noisier OCR results and make the task more challenging. Third, existing methods rely on bounding box estimation to find chart regions and assume that the x- and y-axes of the chart are perfectly aligned with those of the image. In real camera photos, however, the chart is projected onto the image plane through a homography transformation, so additional components must be added to the system to predict this transformation. Given that many of these systems are already very complicated, this is no trivial task.
In this paper, we tackle the problem of chart data extraction from camera images by directly addressing these challenges. Our strategy is to develop a system that pretrains on synthetic data only but can perform inference on real camera photos. \cref{fig:teaser} is an illustration of our proposed system. Our main contributions can be summarized as follows. 1) We propose Chart-RCNN, an end-to-end trainable network that extracts line chart data from camera photos. 2) We propose a synthetic data generation pipeline that can be used to train models capable of performing inference on real camera photos. 3) We collect datasets of camera photos of both real and synthetic charts with annotations of raw data. These datasets can be used to benchmark the performance of future work in this area.
\section{Related Works}
\label{sec:related_works}
\subsection{Line Chart Datasets }
\textbf{PMC} \cite{Davila2020ICPR2} contains real charts extracted from Open-Access publications found in PubMedCentral (PMC). It contains 7,401 and 3,155 line charts in the training and test sets respectively. However, only 1,486 images contain annotations of raw data.
\textbf{FigureQA} \cite{Kahou2018FigureQAAA} is a synthetic dataset of over 100,000 images originally purposed for question-answering tasks. It also includes the metadata used during generation, which makes it possible to use it as a chart data extraction dataset. However, it uses only five line styles and the charts do not contain any marks.
\textbf{ExcelChart400K} is published alongside ChartOCR \cite{luo2021chartocr}. It contains images generated from crawled Excel sheets together with keypoint annotations. However, it provides neither ground truth OCR annotations nor the raw data used to generate the line charts.
All these datasets contain only clean images; our datasets, on the contrary, contain real camera photos. We also provide annotations of raw data and a diverse range of chart styles.
\subsection{Line Chart Data Extraction}
In typical rule-based systems such as \cite{10.1145/1284420.1284427,Poco2017ReverseEngineeringVR}, histogram-based techniques and spatial filtering are used to locate coordinate axes and data points, while OCR is employed to detect text blocks. A set of hand-crafted algorithms is then applied to filter false detections and generate the final output. However, these algorithms fail to generalize to the diverse range of chart designs. To address this issue, some systems such as ChartSense \cite{Jung2017ChartSenseID} introduce an interactive process where a human agent can make corrections when necessary. However, this makes the data extraction process time-consuming and difficult to scale.
Deep-learning-based methods such as \cite{luo2021chartocr,Kato2022ParsingLC} outperform rule-based methods by training on large-scale datasets. ChartOCR \cite{luo2021chartocr} uses CornerNet to find keypoint proposals and the Microsoft OCR API to perform text recognition, plus a separate QUERY network to predict whether two points belong to the same line. \cite{Kato2022ParsingLC} uses a segmentation network that produces segmentation masks for different line styles and applies linear programming to perform line tracing; text recognition is performed using STAR-Net. However, these methods assume clean, aligned images as input and cannot handle camera photos. Additionally, unlike our method, which trains an OCR network alongside the detection network, they rely on generic OCR models that do not take the prior distribution of tick numbers into account.
\section{Dataset}
\begin{table*}[t]
\centering
\begin{tabular}{ccccccc}
\toprule
Dataset & Image Source & Raw Data & Style & Alignment & Background\\
\midrule
PMC \cite{Davila2020ICPR2} & Crawled & Partial* & Many & Aligned & White \\
FigureQA \cite{Kahou2018FigureQAAA}& Generated & \cmark & 5 styles & Aligned & White \\
ExcelChart400K\cite{luo2021chartocr} & Crawled & \xmark & Many & Aligned & White \\\hline
\textbf{SYN-Train }& Generated & \cmark & 30+ styles & Perspective & Synthetic \\
\textbf{SYN-Camera} & Camera Photo & \cmark & 30+ styles& Perspective & Print \\
\textbf{Audiogram} & Camera Photo & \cmark & 2 styles & Perspective & Print \\
\textbf{Audiogram (Scanned)} & Scanned & \cmark & 2 styles & Aligned & Print \
\\\bottomrule
\end{tabular}
\caption{A comparison between our datasets and existing ones. The most significant difference is that ours contain camera photos rather than only clean crawled or generated images.}
\label{tab:dataset_comparison}
\end{table*}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{cvpr2023-author_kit-v1_1-1/latex/dataset.pdf}
\caption{\textbf{Samples from Our Proposed Datasets.} SYN-Train is an online synthetic dataset with aggressive augmentation to mimic the look of camera photos. SYN-Test is a fixed set of clean generated images for evaluation. SYN-Camera contains real camera photos of charts in SYN-Test. The Audiogram datasets consist of data generated by Noah 4, a software system widely used by audiologists; there is a camera version and a scanned version. }
\label{fig:dataset}
\end{figure}
We propose two datasets with variants. The \textbf{SYN} dataset consists of synthetic data generated using matplotlib \cite{hunter2007matplotlib} with various mark and line styles. \textbf{SYN-Train} is an online dataset with infinitely many images. \textbf{SYN-Test} is a fixed subset of \textbf{SYN-Train} with 4,000 images. \textbf{SYN-Camera} contains 200 camera photos of charts in \textbf{SYN-Test}. The \textbf{Audiogram} dataset consists of 420 camera photos of audiograms created using Noah 4 \cite{himsa_noah}, a software system widely used in the hearing care industry for such purposes. 223 images have ground truth annotations of bounding boxes of labels and marks. This dataset is used to evaluate how our model performs in real-world scenarios and to compare models trained on synthetic datasets with those trained on manually labeled data. Samples from all these datasets are shown in \cref{fig:dataset}.
Our datasets differ from the existing ones mentioned in \cref{sec:related_works} in that they consist of camera photos, or images that are visually similar to camera photos, with annotations of raw data. \textbf{SYN} also has a larger variety of line styles compared with existing synthetic datasets such as \cite{Kahou2018FigureQAAA}. \cref{tab:dataset_comparison} illustrates a comparison between our datasets and existing ones.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{cvpr2023-author_kit-v1_1-1/latex/main2.pdf}
\caption{\textbf{Proposed Chart-RCNN architecture.} A standard Faster-RCNN with ResNet-FPN backbone is used to detect tick labels and marks; the output is then passed to the Text Decoder and Mark Grouping Head for text recognition and mark clustering. A downsampled feature map from the FPN is passed to the Homography Prediction head to predict the perspective transform of the image.}
\label{fig:architecture}
\end{figure*}
\subsection{Synthetic }
\label{sec:synthetic_data_generation}
We use matplotlib to randomly generate line charts for the \textbf{SYN} dataset. The images have a dimension of 720x720. Each axis has a random number of ticks between 5 and 10. The tick labels are randomly selected from a set of common intervals such as [0, 1], [0, 100], and [1000, 10000]. The stride between two consecutive labels is randomly selected from a set of common values such as 0.1, 0.2, 1, 10, and 100. Numbers less than 1 are randomly converted to percentages. Numbers larger than 1000 are randomly converted to human-readable notations such as 5k and 10k. Grid lines are added at random. Each line chart has a random number of lines between 1 and 3 with random combinations of mark types, mark colors, and line styles. \cref{tab:syn_generation} lists all the parameters used in data generation.
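To make the sampling procedure above concrete, the following sketch draws chart parameters at random. All function and variable names are hypothetical, and the value sets are the examples given in the text, not the authors' actual configuration:

```python
import random

RANGES = [(0, 1), (0, 100), (1000, 10000)]
STRIDES = [0.1, 0.2, 1, 10, 100]
MARKS = ["o", "^", "x", "D"]          # circle, triangle, cross, diamond
LINE_STYLES = ["-", ":", "--", "-."]  # solid, dot, dash, dash-dot

def sample_axis():
    """Pick a value range, a compatible tick stride, and 5-10 ticks."""
    lo, hi = random.choice(RANGES)
    stride = random.choice([s for s in STRIDES if s <= (hi - lo) / 4])
    n_ticks = random.randint(5, 10)
    return [lo + i * stride for i in range(n_ticks)]

def format_tick(v):
    """Randomly convert ticks to percentage / human-readable notation."""
    if v < 1 and random.random() < 0.5:
        return f"{v:.0%}"          # e.g. 0.2 -> "20%"
    if v >= 1000 and random.random() < 0.5:
        return f"{v / 1000:g}k"    # e.g. 5000 -> "5k"
    return f"{v:g}"

def sample_chart():
    """Draw one random chart specification (to be rendered by matplotlib)."""
    return {
        "x_ticks": sample_axis(),
        "y_ticks": sample_axis(),
        "grid": random.choice([True, False]),
        "lines": [
            {"mark": random.choice(MARKS), "style": random.choice(LINE_STYLES)}
            for _ in range(random.randint(1, 3))
        ],
    }
```

A specification sampled this way can then be handed to matplotlib for rendering; the rendering step itself is omitted here.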
After a clean image is generated, we apply a set of aggressive augmentations to make it visually similar to a camera photo. We first apply a random Thin-Plate-Spline (TPS) \cite{tps} transform on a random grid to create small-scale perturbations. We then apply a random perspective transform with a distortion scale of 0.5 and add random Gaussian noise at multiple scales. We randomly apply color jittering to the resulting image and, finally, randomly add a blur effect using Gaussian blur or motion blur. \cref{fig:transforms} illustrates this pipeline. As shown in the figure, the clean image looks visually similar to a camera photo after these transformations.
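Two of these augmentations can be sketched in numpy as follows. The paper does not specify the implementation, so this is an assumption: `random_perspective_corners` approximates a torchvision-style random perspective by jittering the image corners inward, and `add_noise_and_jitter` stands in for the multi-scale noise and color jitter steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_perspective_corners(h, w, distortion=0.5, rng=rng):
    """Jitter the four image corners inward by up to `distortion` of the
    half-extent, mimicking a random perspective with distortion scale 0.5."""
    dx, dy = distortion * w / 2, distortion * h / 2
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    jitter = rng.uniform(0, [dx, dy], size=(4, 2))
    inward = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
    return corners + inward * jitter

def add_noise_and_jitter(img, rng=rng):
    """Gaussian noise at two scales plus global brightness and per-channel
    colour shifts, standing in for the noise and color-jitter steps."""
    out = img.astype(float)
    for sigma in (2.0, 8.0):                      # noise at two scales
        out += rng.normal(0, sigma, img.shape)
    out *= rng.uniform(0.8, 1.2)                  # brightness jitter
    out += rng.uniform(-20, 20, size=(1, 1, 3))   # per-channel colour shift
    return np.clip(out, 0, 255).astype(np.uint8)
```

The jittered corners define a homography that warps the clean image; the TPS and blur steps are omitted for brevity.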
We generate 4,000 clean images without transforms as the \textbf{SYN-Test} dataset. We printed 200 of them on A4 paper and took camera photos of them to form the \textbf{SYN-Camera} dataset.
All these datasets have raw data available. All except for \textbf{SYN-Camera} also have annotations of bounding boxes of labels and marks, alongside their values.
\begin{table}
\centering
\begin{tabular}{cc}
\toprule
Parameter & Range\\
\midrule
Number of Ticks & 5-10 \\
Number of Lines & 1-3 \\
Grid & on, off \\
Aspect Ratio & 0.25-1.0\\
Mark Style & circle, triangle, cross, diamond, ... \\
Line Style & solid, dot, dash, dash-dot\\
Line Color & random \\
\bottomrule
\end{tabular}
\caption{Randomized Parameters used in data generation. }
\label{tab:syn_generation}
\end{table}
\subsection{Audiogram}
We first collected raw audiogram images generated by the Noah 4 system used by audiologists. These audiograms were printed on standard A4-sized paper. We took a total of 420
images of these printed audiograms under various angles and lighting conditions as the \textbf{Audiogram} dataset. In some photos, there are cast shadows of other objects on the paper. In some photos, obstructions such as pens are purposely placed on the audiogram region. In some photos, the paper is bent or folded. These intentional occlusions and distortions make the dataset particularly
challenging. To evaluate the impact of these distortions, we generated an additional set of 30 audiograms, printed them,
and scanned them using a laser scanner to form \textbf{Audiogram-Scanned} Dataset.
All audiograms have their raw data available. We picked 223 camera photos and manually added annotations of ground truth bounding boxes of labels and marks.
\section{Method}
\label{fig:method}
We propose Chart-RCNN, an extension of Faster-RCNN \cite{Ren2015FasterRT} with additional heads attached to the ResNet-FPN backbone that perform OCR, mark grouping, and homography prediction. The network is trained jointly end-to-end. As shown in \cref{fig:architecture}, an input image first goes through standard Faster-RCNN blocks, which generate bounding box predictions and classifications for labels and marks. We use RoIAlign to extract features in the regions corresponding to text labels and marks, which are then passed to the Text Decoder for OCR and the Mark Grouping Head for clustering. The whole feature map from the FPN backbone is downsampled and passed to a Homography Predictor to predict the perspective transform.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{cvpr2023-author_kit-v1_1-1/latex/transforms.pdf}
\caption{\textbf{Transforms of the SYN Dataset}. We visualize the transforms used in the data generation process. TPS and Perspective produce local and global geometric distortions respectively, while Color and Noise change the visual appearance of the image. }
\label{fig:transforms}
\end{figure}
\subsection{Chart Element Detection}
We employ the standard Faster-RCNN head following \cite{Ren2015FasterRT}. RoIAlign generates a 7x7 feature map, which is passed to a two-layer MLP with a hidden dimension of 1024. The output is then passed to a linear layer for classification. We employ cross-entropy loss on the classification output.
\subsection{Homography Estimation}
\label{sec:homography}
We estimate the perspective transformation of an image by detecting the four corners of the chart region. These four corners uniquely determine a projective transform that maps them respectively to (0,0), (0,1), (1,1), and (1,0) in the canonical coordinate system. The parameters of such a transformation can be determined by solving a linear system following \cite{mou2013robust}. We explore two approaches to predicting the corners: a mask-based one and a keypoint-based one.
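The corner-to-homography step can be made concrete with a small numpy sketch of the standard direct linear transform: stacking two equations per point correspondence and solving the resulting 8x8 system (the helper names `homography_from_corners` and `project` are hypothetical, not from the paper):

```python
import numpy as np

def homography_from_corners(corners):
    """Solve for the 3x3 projective transform H that maps the four detected
    chart corners to (0,0), (0,1), (1,1), (1,0). Each correspondence
    (x, y) -> (u, v) contributes two linear equations in the eight unknown
    entries of H (with h33 fixed to 1)."""
    dst = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=float)
    A, b = [], []
    for (x, y), (u, v) in zip(corners, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    """Apply H to an Nx2 array of points in homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With H in hand, every detected label and mark coordinate can be remapped into the canonical unit square.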
The mask-based approach passes the downsampled feature maps to an FCN head \cite{Shelhamer2015FullyCN} consisting of two convolution layers of kernel size 3 and stride 1 with ReLU activation, followed by a 1x1 convolution for pixel-wise classification. We extract polygon contours from the mask image following \cite{Suzuki1985TopologicalSA}. The polygon is subsequently reduced to a quadrilateral using a modified version of the Douglas-Peucker algorithm \cite{Visvalingam1990TheDA} (see appendix for implementation details).
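One simple way to reduce a contour polygon to a quadrilateral is to repeatedly delete the least important vertex. The sketch below uses the smallest-triangle-area criterion (Visvalingam-style); the paper's modified algorithm may use a different removal criterion, so this is an illustrative assumption:

```python
import numpy as np

def reduce_to_quadrilateral(poly):
    """Iteratively remove the vertex whose deletion changes the polygon the
    least (the smallest triangle area formed with its two neighbours) until
    exactly four vertices remain."""
    pts = [np.asarray(p, dtype=float) for p in poly]
    while len(pts) > 4:
        n = len(pts)
        areas = []
        for i in range(n):
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
            # twice the signed triangle area via the 2D cross product
            cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
            areas.append(0.5 * abs(cross))
        pts.pop(int(np.argmin(areas)))
    return np.array(pts)
```

Vertices lying on straight segments of the contour contribute near-zero area and are removed first, leaving the four dominant corners.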
The keypoint-based approach directly predicts the coordinates of four points following the design of Keypoint-RCNN \cite{He2020MaskR}. The architecture is similar to the mask-based approach. The only difference is that the prediction target is a 4-channel heatmap corresponding to 4 corners.
We employ cross-entropy loss in both options.
\subsection{Text Recognition}
\label{sec:text_recog}
We explore two architectures for OCR: Transformer and CRNN. For the Transformer, we follow the design of TrOCR \cite{Li2021TrOCRTO}. RoIAlign generates a 3x9 feature map for each text region, which is treated as a sequence of 3x9 patches and passed to the Transformer block. For the CRNN, we follow the design of \cite{Shi2017AnET}. The text regions are pooled to a 32x96 feature map, and the corresponding RGB information from the original image is added to this feature map through a residual connection. We employ cross-entropy loss for the Transformer and connectionist temporal classification (CTC) loss for the CRNN.
\subsection{Mark Clustering}
The Mark Grouping Head projects the features to a space where the embeddings of marks from the same line are close to each other. It consists of a series of convolution blocks followed by a 2-layer MLP (see appendix for implementation details). We then calculate the cosine similarity between all pairs of mark features. We model the likelihood of two points belonging to the same line as
\begin{equation}
\mathbb{P}(G(x,y)=1)=\exp\left(\frac{f(x)^Tf(y)}{\tau\lVert f(x) \rVert \lVert f(y) \rVert }\right)
\label{eq:likelyhood}
\end{equation}
where $G(x,y)$ is an indicator function that equals 1 if and only if marks $x$ and $y$ belong to the same line, $f$ is the Mark Grouping Head, and $\tau$ is a temperature hyperparameter.
This formulation transforms the clustering problem into a binary classification problem and naturally leads to the following loss:
\begin{multline}
\mathcal{L}_{Group}= - \sum_{i \ne j} [G(x_i,x_j) \log \mathbb{P}(G(x_i,x_j)=1) \\
+ (1-G(x_i,x_j)) \log(1- \mathbb{P}(G(x_i,x_j)=1))]
\label{eq:loss}
\end{multline}
The final loss of the entire Chart-RCNN model is simply the sum of the losses of all modules.
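The grouping loss above can be sketched in numpy as follows. One assumption is made explicit here: to keep the pairwise likelihood a valid probability we squash the temperature-scaled cosine similarity with a sigmoid, whereas the paper writes it as an exponential; in the actual model this would of course be a differentiable torch loss:

```python
import numpy as np

def grouping_loss(feats, line_ids, tau=0.1):
    """Binary cross-entropy over all ordered pairs (i != j) of mark
    embeddings, where the target says whether two marks share a line."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau                        # temperature-scaled cosine
    p = 1.0 / (1.0 + np.exp(-sim))             # P(G(x_i, x_j) = 1), sigmoid
    g = (line_ids[:, None] == line_ids[None, :]).astype(float)
    eps = 1e-9
    bce = -(g * np.log(p + eps) + (1 - g) * np.log(1 - p + eps))
    mask = ~np.eye(len(feats), dtype=bool)     # sum over i != j only
    return bce[mask].sum()
```

Embeddings that cluster by line yield a lower loss than embeddings whose similarity structure contradicts the line assignments.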
\subsection{Post Processing}
We gather the output from the different modules. We first remap the detected label and mark coordinates to the canonical coordinate system, with the four corners of the chart mapped to (0,0), (0,1), (1,1), and (1,0). We group the labels by their proximity to the chart borders: labels close to the top and bottom borders are treated as x-axis labels, whereas those close to the left and right borders are treated as y-axis labels. We retrieve the text output from the Text Decoder and convert it to floating-point values. We then fit two RANSAC linear models to obtain a transformation from the canonical coordinate system to the raw data values, and apply these transformations to the coordinates of the detected marks to recover the raw data points. When necessary, we perform threshold-based hierarchical clustering \cite{rai2010survey} on the mark features generated by the Mark Grouping Head. When evaluating on datasets with a known number of mark classes, we use a linear classification head to directly predict the classes of marks and group them into lines accordingly.
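The RANSAC axis-calibration step can be illustrated with a minimal self-contained sketch. This is a stand-in for a library implementation (e.g. sklearn's RANSACRegressor, which the paper does not name); `ransac_axis_fit` and its tolerance scheme are assumptions for illustration:

```python
import numpy as np

def ransac_axis_fit(coords, values, n_iter=200, tol=0.02, seed=0):
    """Fit value = a * coord + b from (canonical tick coordinate, parsed
    tick value) pairs while tolerating OCR outliers: repeatedly fit a line
    through two random samples, keep the largest inlier set, then refit."""
    rng = np.random.default_rng(seed)
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(coords), size=2, replace=False)
        if coords[i] == coords[j]:
            continue
        a = (values[j] - values[i]) / (coords[j] - coords[i])
        b = values[i] - a * coords[i]
        resid = np.abs(values - (a * coords + b))
        scale = max(np.abs(values).max(), 1.0)
        inliers = resid < tol * scale          # relative residual threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the inliers of the best hypothesis
    a, b = np.polyfit(coords[best_inliers], values[best_inliers], 1)
    return a, b
```

A single misread tick label (e.g. "100" decoded as "1000") is rejected as an outlier, and the axis mapping is recovered from the remaining ticks.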
\section{Experiment Results}
\subsection{Metric}
We evaluate data extraction performance using the F1 score, defined as
\begin{equation}
F1= 2 \times \frac{precision \times recall}{precision + recall}
\label{eq:f1}
\end{equation}
where precision and recall are calculated under a tolerance of 5\% relative error for the SYN datasets and 5 dB absolute error for the Audiogram datasets.
When multiple lines are present, precision and recall are calculated by iterating over all possible correspondences between detected lines and ground truth lines and taking the best match.
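A simplified sketch of this metric follows. The exact within-line point matching is not specified in the text, so the greedy one-to-one matching used here is an assumption, and `f1_with_tolerance` is a hypothetical helper name (it returns a fraction; the paper reports percentages):

```python
from itertools import permutations

def f1_with_tolerance(pred_lines, gt_lines, tol=0.05):
    """F1 over extracted data points: a predicted point is correct if a
    ground-truth point on the matched line lies within `tol` relative
    error; line correspondence is chosen by trying all assignments."""
    def point_ok(p, g):
        (px, py), (gx, gy) = p, g
        return (abs(px - gx) <= tol * max(abs(gx), 1e-9)
                and abs(py - gy) <= tol * max(abs(gy), 1e-9))

    def line_hits(pred, gt):
        used, hits = set(), 0
        for p in pred:                       # greedy one-to-one matching
            for k, g in enumerate(gt):
                if k not in used and point_ok(p, g):
                    used.add(k); hits += 1; break
        return hits

    n_pred = sum(len(l) for l in pred_lines)
    n_gt = sum(len(l) for l in gt_lines)
    best = 0
    for perm in permutations(range(len(gt_lines))):
        tp = sum(line_hits(pred_lines[i], gt_lines[perm[i]])
                 for i in range(min(len(pred_lines), len(gt_lines))))
        best = max(best, tp)
    precision = best / n_pred if n_pred else 0.0
    recall = best / n_gt if n_gt else 0.0
    return 0.0 if precision + recall == 0 else \
        2 * precision * recall / (precision + recall)
```

Enumerating permutations is only feasible because charts contain at most three lines; a larger line count would call for Hungarian matching instead.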
\subsection{Data Extraction on Camera Images}
We train our Chart-RCNN model using a global batch size of 16 across 4 GPUs for 100 epochs. We use the Adam \cite{kingma2014adam} optimizer with a learning rate of $10^{-4}$, a weight decay of $10^{-6}$, and a momentum of 0.9. The learning rate is linearly scaled up during the first 3 warmup epochs and then follows a cosine decay schedule.
We perform evaluations on 4 datasets: SYN-Test, SYN-Camera, Audiogram, and Audiogram-Scanned. All models are trained using the SYN dataset. For the SYN-Test and SYN-Camera evaluations, we train on the online version of the SYN dataset with infinitely many images. For Audiogram and Audiogram-Scanned, we train our model on a subset of the SYN dataset: we limit the range of randomly generated data and the styles to match audiograms. For example, the x-axis is generated on a log scale instead of the linear scale used in the general SYN dataset. Our models are never trained on real camera images.
To evaluate the effectiveness of our synthetic data generation pipeline, we compare our results with a model trained on labeled images from the Audiogram dataset. This model uses a standard Faster-RCNN to detect and classify marks and labels; OCR is performed through a classification head that treats a set of common audiogram tick labels as separate classes. We report the results in \cref{tab:main_result}. To the best of our knowledge, there are no comparable works capable of extracting line chart data from misaligned, non-clean images, so we were unable to compare against other works.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{cvpr2023-author_kit-v1_1-1/latex/test.pdf}
\caption{\textbf{SYN as a Universal Approximation.} We show several typical use cases of line charts in different domains: 1. material science paper, 2. financial forecast, 3. audiogram, 4. specs of electronic components. We observe that some images in our randomly generated SYN-Camera dataset share certain similarities with these images. Both images in the leftmost column contain solid lines with square and circle marks. Both images in the second column contain no top or right border and have a dashed line. In the third column, the style of the audiogram exactly matches the style of a line in the bottom image. In the rightmost column, both images have lines of similar trends presented in a grid. }
\label{fig:test}
\end{figure*}
\begin{table}[h]
\centering
\begin{tabular}{ccc}
\toprule
Dataset & Ours & Supervised \\
\midrule
SYN-Test & 90.2 & -\\
SYN-Camera & 85.3 & - \\
Audiogram & \textbf{87.3} & 84.3 \\
Audiogram-Scanned & 97.1 & \textbf{98.9}\\
\bottomrule
\end{tabular}
\caption{Data extraction on clean and camera images. Numbers are F1 scores. Our model outperforms the supervised baseline trained on camera photos of audiograms and is comparable on scanned images. }
\label{tab:main_result}
\end{table}
As shown in \cref{tab:main_result}, our model outperforms the supervised baseline (+3.0) trained on real camera photos of audiograms without using any manually annotated data. The only human labor required is specifying a prior distribution of data ranges and line styles. Hence, it is an effective method for such tasks. Our model also achieves comparably high performance (97.1 vs.\ 98.9) on scanned audiograms. For the SYN dataset, we did not compare against a supervised baseline because labeling camera photos is time-consuming. We believe that audiograms, which are generated by Noah 4 \cite{himsa_noah} and used by audiologists worldwide in clinical practice, better demonstrate the effectiveness of our method in practical application scenarios; hence we chose to label the audiograms.
\subsection{Generalizability and Limitations }
We argue that our experiments on the SYN dataset provide good intuition about how our model generalizes to real data because of its variation in styles. To show this, we demonstrate in \cref{fig:test} that charts from different domains are covered by SYN under suitable parametrizations: line charts from different domains have reasonably similar counterparts in the randomly generated SYN-Camera dataset. Hence, SYN-Camera is a good dataset for evaluating the performance of our model. This is also validated on a domain-specific dataset, Audiogram, where our method outperforms the supervised method trained on human-annotated labels. Moreover, since these real-world charts lie in the parameter space of our random generation process, the model is increasingly likely to encounter similar-looking charts as more samples are generated. To validate this, we test our model pretrained on the general SYN dataset for 200 epochs (2x schedule) without any prior knowledge of the style or label distribution of audiograms. The F1 score is \textbf{71.6} on Audiogram, suggesting a limited zero-shot transfer capability.
However, while the SYN dataset can approximate line charts in various domains, our method still has the following limitations. First, we use mark detection as the basis of line recognition, which is not applicable to line charts without any marks. Second, we assume the chart region has distinguishable corners that can be used for homography estimation. As demonstrated in \cref{fig:test}, a wide range of charts from different domains satisfy these assumptions. In domain-specific applications such as audiograms or spec sheets of electronic components, charts have a uniform style and the assumptions are easily satisfied.
\section{Ablation and Analysis}
\label{sec:ablation}
In this section, we evaluate the contribution of different components to the performance of our proposed system. For all ablation experiments, we chose a shorter schedule (20 epochs) than the full evaluation (100 epochs) due to limited computing resources. The experiments are evaluated on the Audiogram dataset.
\subsection{Transforms}
\begin{table}[h]
\centering
\begin{tabular}{@{}lc@{}}
\toprule
Transforms & F1 \\
\midrule
Baseline & 14.3 \\
+ Perspective & 17.7 \\
+ Color & 76.1\\
+ Blur and Noise & 82.1\\
+ TPS & \textbf{84.8} \\
\bottomrule
\end{tabular}
\caption{\textbf{Ablation Results of Different Transforms.} All proposed transforms lead to a positive increase in F1 scores. Color is the most important, as it simulates different white balance settings of a camera.}
\label{tab:ablation_transforms}
\end{table}
We employ a wide range of transforms in the data generation process, as described in \cref{sec:synthetic_data_generation} and illustrated in \cref{fig:transforms}. To evaluate the importance of each transform, we start with a baseline in which the model is trained on only clean images, just like most existing works, and then gradually add the different transforms to the data generation process. The results are shown in \cref{tab:ablation_transforms}. All proposed transforms lead to a positive increase in F1 scores, among which Color has the most significant impact. We believe this is because it emulates different white balance settings of a camera and different lighting conditions; without it, the model could easily overfit to data with a pure white background, which is rare in real camera photos. Blur and Noise are the second most important transforms because they emulate changes in focal length and the real-world noise captured by camera sensors. We believe they also mitigate overfitting caused by the model attending to specific local structures unique to certain fonts or to the way a line or circle is rendered by the computer; adding high-frequency noise perturbs these local structures.
\subsection{Number of Images}
\begin{table}[h]
\centering
\begin{tabular}{@{}lc@{}}
\toprule
Num of Images & F1 \\
\midrule
100 & 76.7 \\
1,000 & 81.4 \\
10,000 & 81.9 \\
\textbf{$\infty$} & \textbf{84.8}\\
\bottomrule
\end{tabular}
\caption{\textbf{Ablation Results on the Number of Training Images.} The performance improves with more images. }
\label{tab:num_of_images}
\end{table}
We employ an online dataset of infinitely many images, which is rare in related works. The stochastic generation also makes the training process nondeterministic. To evaluate whether this is necessary, we explored alternatives where the model is trained on a fixed number of pre-generated images. As shown in \cref{tab:num_of_images}, performance improves with more images. We believe the online dataset mitigates overfitting because it is hard to overfit a dataset with infinitely many images. Additionally, we observe no significant increase in data loading time. Hence our approach is preferable.
\subsection{OCR}
Unlike \cite{luo2021chartocr}, which uses an external OCR API, or \cite{Kato2022ParsingLC}, which uses a separately trained OCR model, we train an OCR model jointly with the rest of the network. We believe this is necessary because it helps the model learn a prior distribution of the text labels present in specific types of charts, which can compensate for the additional difficulty caused by noise and perspective projection in camera photos. To validate this hypothesis, we plug a CRNN model pretrained for number recognition into our trained model and compare the performance. We also benchmark the transformer-based Text Decoder proposed in \cref{sec:text_recog} against the CRNN.
\begin{table}[h]
\centering
\begin{tabular}{@{}lcc@{}}
\toprule
Text Decoder & Homography Predictor & F1 \\
\midrule
Online CRNN & Keypoint & \textbf{84.8} \\
Online Transformers & Keypoint & 81.9 \\
Pretrained CRNN & Keypoint & 79.4\\
Online CRNN & Mask & 84.3 \\
\bottomrule
\end{tabular}
\caption{\textbf{Ablation Results for Text Decoder and Homography Predictor.} Online CRNN has the best performance with a lead of +2.9. Both methods for homography estimation have comparable performance. Due to the complexity of the mask-based method, we picked the keypoint-based one as our final design. }
\label{tab:ocr_ablation}
\end{table}
As shown in \cref{tab:ocr_ablation}, Online CRNN has the best performance, with a lead of +2.9 over the Transformer and +5.4 over the pretrained CRNN. This confirms our hypothesis that jointly training the OCR is necessary. While several benchmarks \cite{wrobel2021ocr,Li2021TrOCRTO} show that Transformers tend to outperform CNN-based architectures, they are harder to train and require more delicate tuning, which may explain why we did not observe the expected lead for the Transformer. Due to limited computing resources, we were unable to explore more transformer-based OCR variants.
\subsection{Homography Estimation}
We estimate the homography transform by first predicting the four corners of the chart region. We proposed a mask-based approach and a keypoint-based approach in \cref{sec:homography} and benchmark their performance; the results are shown in \cref{tab:ocr_ablation}. While both methods achieve similar performance, the mask-based approach requires complicated postprocessing. Hence, we find the more intuitive and straightforward keypoint-based approach preferable.
\subsection{Error Analysis}
So far, all the analyses are based on the F1 score, which tests the overall performance of our method as an integrated system. To further isolate the sources of error, we replace part of the model output with ground truth annotations and evaluate the performance of the remaining system. These experiments are performed on images sampled from the manually annotated training set of the Audiogram dataset because they require ground truth bounding box annotations. This is acceptable because our models are never exposed to human labels; however, this set of images is not used in our other experiments because the supervised baseline is trained on it, so the numbers are not directly comparable with our other experiments. The results are shown in \cref{tab:error_analysis}.
\begin{table}[h]
\centering
\begin{tabular}{@{}lc@{}}
\toprule
Architecture & F1 \\
\midrule
Chart-RCNN & 86.3 \\
+ GT Detection & 87.3 \\
+ GT OCR & 87.3 \\\hline
All GT & 100.0 \\
\bottomrule
\end{tabular}
\caption{\textbf{Error Analysis}. Adding ground truth bounding boxes improves the result while further adding ground truth OCR annotations leads to no significant improvements.}
\label{tab:error_analysis}
\end{table}
We found that adding ground truth bounding boxes improves the result, while further adding ground truth OCR annotations yields no significant improvement. This suggests that our joint OCR training combined with a robust RANSAC regressor can effectively handle noise in the OCR output. The results also suggest that most errors come from homography estimation. This is plausible since we set a small tolerance of 5 dB, and small errors in keypoint estimation propagate to the estimated transformation matrix and, in turn, to the reprojected coordinates. This process involves inverting and multiplying several matrices, so the errors may be amplified super-linearly.
\section{Conclusion}
One of the most significant challenges of chart data extraction from camera photos is the cost of obtaining annotated training samples. We proposed a customizable synthetic dataset built on matplotlib that supports rich styles and can approximate chart styles in specific domains. We introduced a series of transforms that make a clean chart look visually similar to a camera photo. We proposed Chart-RCNN, a unified architecture to extract data from camera photos of line charts. We demonstrated through experiments a cost-effective way of training such a model using no annotated data that still achieves results comparable to models trained on human-annotated data. Evaluations suggest our model is feasible for real-world applications such as audiogram interpretation.
Ablation analysis suggests most errors come from the perspective estimation process. Our future work will attempt to incorporate state-of-the-art architectures beyond 4-point-based methods. We will also explore further applications, such as helping the visually impaired read charts in printed documents and books.
{\small
\bibliographystyle{ieee_fullname}
}
\section*{Appendix}
\section{Implementation Details}
\subsection{Keypoint-based Homography Estimation}
We use a modified version of the keypoint prediction head of \cite{He2020MaskR}. Since the local structures of the four corners of a chart are similar, and some charts do not have a border at the corners, we add an attention mechanism so that the model can access global positional information rather than merely local structures. The output of the ResNet-FPN backbone is downsampled to a 14x14 feature map, which is treated as a sequence of length 196. We add 2D positional encoding and pass the sequence through 3 transformer layers. The sequence is then reshaped back to a 14x14 feature map, which is passed through 4 additional convolution layers. To produce the final output, a 4-channel keypoint heatmap, we use a ConvTranspose layer of kernel size 4. The exact architecture is shown in \cref{fig:keypoint_head}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{cvpr2023-author_kit-v1_1-1/latex/keypoint_head.pdf}
\caption{\textbf{Architecture of Keypoint Head }. The output of ResNet-FPN is downsampled to a 14x14 feature map. 2D Positional Encoding is then added. We consider the feature map as a sequence of length 196 and pass it through three transformer layers whose Multi-Head Attention Block has 8 heads and a dimension of 256. The output of transformer layers is then reshaped back to a feature map. It is then passed through 4 convolution layers with kernel size 3 and ReLU activation. The final output is generated by a ConvTranspose layer of kernel size 4. }
\label{fig:keypoint_head}
\end{figure}
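The text does not pin down the exact positional-encoding construction; the sketch below shows one common 2D sinusoidal variant (an assumption of this illustration), in which half of the channels encode the column index and the other half the row index:

```python
import math

# A common 2D sinusoidal positional encoding for an h x w feature map.
# Assumption: the paper does not specify its encoding; this is the standard
# construction with half the channels for the column and half for the row.

def positional_encoding_2d(h, w, d):
    """Return an (h*w) x d encoding table; d must be divisible by 4."""
    assert d % 4 == 0
    half = d // 2
    pe = [[0.0] * d for _ in range(h * w)]
    for row in range(h):
        for col in range(w):
            idx = row * w + col
            for i in range(half // 2):
                freq = 1.0 / (10000 ** (2 * i / half))
                pe[idx][2 * i] = math.sin(col * freq)          # column part
                pe[idx][2 * i + 1] = math.cos(col * freq)
                pe[idx][half + 2 * i] = math.sin(row * freq)   # row part
                pe[idx][half + 2 * i + 1] = math.cos(row * freq)
    return pe
```

For the head above, `positional_encoding_2d(14, 14, 256)` produces one 256-dimensional vector per spatial location, added elementwise to the length-196 sequence before the transformer layers.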
\subsection{Mask-based Homography Estimation}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{cvpr2023-author_kit-v1_1-1/latex/mask_reg.pdf}
\caption{\textbf{Comparison Between Ours and Douglas-Peucker}. The Douglas-Peucker algorithm is typically used to simplify polygons. However, it only reduces a polygon to a subset of the original vertices. In this example, the white line is the polygon generated by the predicted masks, and the colored regions are the reduced quadrilaterals. The four corners of the chart lie outside the original polygon, hence Douglas-Peucker fails to recover them. }
\label{fig:dp_mask}
\end{figure}
We use the same mask prediction head as \cite{He2020MaskR}. However, it is non-trivial to recover a quadrilateral from predicted binary masks because of noise in the predictions. Typically, such tasks are performed by the Douglas-Peucker algorithm \cite{Visvalingam1990TheDA}, which reduces complicated polygons to simple ones. However, it has a limitation: the vertices of the simplified polygon are always a subset of the vertices of the original polygon. This can give sub-optimal approximations when the four corners of a chart are not vertices of the predicted mask. As shown in \cref{fig:dp_mask}, our method overcomes this problem.
The details of this procedure are described in \cref{alg:cap}. Given an input binary mask $M$, we first convert it to a set of contours $C$. Then we find $c\in C$, the contour with the largest enclosed area. Let $h$ be its convex hull; we set the threshold of Douglas-Peucker to $0.5\%$ of the circumference of $h$ and perform standard Douglas-Peucker. This gives us a reduced polygon $c_{approx}$. Let $E$ be the set of its edges. We want to find a subset $E_4$ containing the 4 edges of the final reduced quadrilateral. We initialize $E_4$ as an empty set, then repeatedly add the longest edge in $E$ that is not too close to an existing edge in $E_4$. In particular, we consider two edges $e_1,e_2$ to be too close if the angle between them is less than $20$ degrees and the minimum distance between three points (the two ends and the midpoint) of one line and an arbitrary point on the other line is less than $1/28$ of the image dimension. With four edges, we find all their intersections and filter out those outside the image region. We then sort the remaining points by the angle of the vector from the centroid of these points to each point. This recovers an ordered set of four points that can be used to solve a projective transformation matrix.
\begin{algorithm}
\caption{Homography Estimation on Binary Mask}\label{alg:cap}
\begin{algorithmic}
\Require $M \in \{0,1 \}^{H \times W} \,\, \, \, \text{Binary Mask}$
\Ensure $p_1,p_2,p_3,p_4 \in \mathbb{R}^2 \,\, \, \, \text{Keypoints}$
\State $C \gets \textsc{cv2.FindContours}(M) $
\State $c \gets \textsc{LargestContourByArea}(C) $
\State $h \gets \textsc{cv2.ConvexHull}(c) $
\State $\epsilon \gets 0.005 \cdot length(h)$
\State $ c_{approx} \gets \textsc{DouglasPeucker}(c,\epsilon)$
\State $ E \gets \textsc{GetEdges}(c_{approx})$
\State $ E_4 \gets \{\}$
\While {$|E_4| < 4$}
\State $ e \gets \textsc{GetLongestLine}(E) $
\State $ E_4.\text{add}(e) $
\State $ E.\text{pop}(e) $
\State $ E.\text{removeCloseLines}(e) $
\EndWhile
\State $ P\gets \textsc{GetIntersections}(E_4)$
\State $ p_1,p_2,p_3,p_4 \gets \textsc{SortByAngle}(P)$
\end{algorithmic}
\end{algorithm}
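A stdlib-only sketch of the geometric core of \cref{alg:cap} (edge selection, intersection, and angular ordering) is given below. Contour extraction and Douglas-Peucker reduction are assumed to have been done with OpenCV, and the helper names are ours, not library calls; the thresholds (20 degrees, $1/28$ of the image size) follow the text.

```python
import math

# Sketch of the edge-selection and corner-recovery steps of the algorithm.
# Edges are pairs of 2D endpoints; img_size is the image dimension in pixels.

def point_line_dist(p, edge):
    """Perpendicular distance from point p to the infinite line through edge."""
    (x1, y1), (x2, y2) = edge
    num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def too_close(e1, e2, min_dist, angle_deg=20.0):
    """Two edges are 'too close' if nearly parallel and nearby (see text)."""
    (a1, b1), (a2, b2) = e1, e2
    ang = lambda p, q: math.atan2(q[1] - p[1], q[0] - p[0])
    d = abs(ang(a1, b1) - ang(a2, b2)) % math.pi
    d = min(d, math.pi - d)                       # undirected angle difference
    if math.degrees(d) >= angle_deg:
        return False
    mid = ((a1[0] + b1[0]) / 2, (a1[1] + b1[1]) / 2)
    return min(point_line_dist(p, e2) for p in (a1, b1, mid)) < min_dist

def pick_four_edges(edges, img_size):
    """Greedily keep the four longest mutually non-close edges."""
    edges = sorted(edges, key=lambda e: -math.dist(*e))
    chosen = []
    for e in edges:
        if len(chosen) == 4:
            break
        if not any(too_close(e, c, img_size / 28) for c in chosen):
            chosen.append(e)
    return chosen

def line_intersection(e1, e2):
    """Intersection of the two infinite lines, or None if parallel."""
    (x1, y1), (x2, y2) = e1
    (x3, y3), (x4, y4) = e2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def recover_corners(edges, img_size):
    """Intersect the 4 edges, keep in-image points, sort by angle to centroid."""
    pts = [p for i in range(4) for j in range(i + 1, 4)
           if (p := line_intersection(edges[i], edges[j])) is not None
           and 0 <= p[0] <= img_size and 0 <= p[1] <= img_size]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```

On a noisy reduced polygon, `pick_four_edges` discards short spurious segments and `recover_corners` returns the four corners in a consistent angular order.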
\subsection{Mark Grouping Head}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{cvpr2023-author_kit-v1_1-1/latex/mark_grouping.pdf}
\caption{\textbf{Architecture of Mark Grouping Head}. The mark features are first projected to a different space by an MLP. We then compute the pair-wise cosine similarity of all mark features. We consider the resulting matrix as the log-likelihood of a random variable indicating whether two marks belong to the same line and apply a cross-entropy loss to it following \cref{eq:loss}. }
\label{fig:mark_grouping}
\end{figure}
We illustrate the architecture of our Mark Grouping Head in \cref{fig:mark_grouping}. The mark features are first projected to a different space by an MLP. We then compute the pair-wise cosine similarity of all mark features. We consider the resulting matrix as the log-likelihood of a random variable indicating whether two marks belong to the same line and apply a cross-entropy loss to it following \cref{eq:loss}. In \cref{eq:loss}, we introduce a temperature hyperparameter $\tau$ because the cosine similarity is bounded in the interval $[-1,1]$; $\tau$ scales this interval to a larger range. In batched training, we apply a mask so that the loss is only computed on similarities between marks in the same image. We also mask out the diagonal entries of the similarity matrix because they are trivial. To add flexibility to the output range, so that the similarity is not always centered at zero, we add a learnable bias term to the cosine similarity matrix.
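A minimal sketch of this loss is given below. We read \cref{eq:loss} as an element-wise binary cross-entropy over the temperature-scaled, biased cosine-similarity logits; this reading, and the function names, are assumptions of the illustration rather than our exact implementation.

```python
import math

# Sketch of the mark-grouping loss: pairwise cosine similarities scaled by a
# temperature tau plus a bias, trained with element-wise binary cross-entropy.
# Diagonal entries and cross-image pairs are masked out, as in the text.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def grouping_loss(feats, line_ids, image_ids, tau=10.0, bias=0.0):
    n, total, count = len(feats), 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j or image_ids[i] != image_ids[j]:
                continue  # mask diagonal and cross-image pairs
            logit = tau * cosine(feats[i], feats[j]) + bias
            y = 1.0 if line_ids[i] == line_ids[j] else 0.0
            # numerically stable binary cross-entropy with logits
            total += max(logit, 0) - logit * y + math.log1p(math.exp(-abs(logit)))
            count += 1
    return total / max(count, 1)
```

The loss is small when marks on the same line have similar (high-cosine) features and marks on different lines do not.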
\section{F1 Score under Different Error Tolerance}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{cvpr2023-author_kit-v1_1-1/latex/acc.png}
\caption{\textbf{F1 under different tolerances}. To further evaluate our method, we plot the F1 score of our model on different datasets across different error tolerances. In the Audiogram datasets, the tolerance is measured in dB. In the SYN datasets, the tolerance is measured as a percentage. }
\label{fig:acc}
\end{figure*}
To further evaluate our method, we plot the F1 score of our model on different datasets across different error tolerances in \cref{fig:acc}. We notice that on Audiogram-Camera we achieve an F1 score of $95.1$ with only a $5$ dB tolerance, which is half the gap between two consecutive ticks. This suggests most errors are within a margin of $5$ dB. On the SYN datasets, the F1 continues to increase as the error tolerance grows, suggesting a wider distribution of errors. This is likely caused by the varying chart styles in the dataset, which have different levels of difficulty.
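For reference, a tolerance-parameterised F1 can be sketched as follows; the greedy one-to-one matching is an assumption of this illustration, and the paper's exact matching procedure may differ:

```python
# Sketch of an F1 score under a point-wise error tolerance, as plotted in the
# figure. Assumption: predictions and ground truth are matched greedily, and a
# pair matches when both coordinate errors are within the tolerance.

def f1_at_tolerance(pred, gt, tol):
    """pred, gt: lists of (x, y) data points; tol: allowed per-axis error."""
    unmatched = list(gt)
    tp = 0
    for px, py in pred:
        for g in unmatched:
            if abs(px - g[0]) <= tol and abs(py - g[1]) <= tol:
                unmatched.remove(g)  # each ground truth point matches once
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Sweeping `tol` and plotting `f1_at_tolerance` reproduces curves of the kind shown in the figure.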
\section{More Visualization of Datasets}
In \cref{fig:more_data} we provide more visualizations of samples from the datasets we used. In particular, the columns SYN-Train, SYN-Camera, and Audiogram-Camera demonstrate the similarity between synthetic data and camera photos.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{cvpr2023-author_kit-v1_1-1/latex/more_vis.pdf}
\caption{\textbf{More Visualization of Samples in Datasets.} The leftmost three columns are SYN-Train, SYN-Test, and SYN-Camera. The rightmost two columns are Audiogram-Camera and Audiogram Scanned. }
\label{fig:more_data}
\end{figure*}
\section{The BA model with $m=1$}
\ \ \ \ Denote the first-passage probability of the Markov chain
by $f(k,i,t)=P\{k_i(t)=k,\ k_i(l)\neq k,\ l=1,2,\cdots,t-1\}$.
First, the relationship between the first-passage probability and
the vertex degrees is established.
\medskip
\textbf{Lemma 1} For $k>1$,
\begin{eqnarray}
f(k,i,s)&=&P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+N_{0}},\\
P(k,i,t)&=&\sum\limits_{s=i+k-1}^t
f(k,i,s)\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+N_{0}}\right).
\end{eqnarray}
\textbf{Proof} First, consider Eq. (3). By the construction rule of
the BA model, the degree of a vertex is nondecreasing and increases
by at most $1$ at each step. Thus, it follows from the Markovian
properties that
\begin{eqnarray*}
& &f(k,i,s)=P\{k_i(s)=k,\ k_i(l)\neq k,\ l=1,2,\cdots,s-1\}\\
& &=P\{k_i(s)=k,\ k_i(s-1)=k-1,\ k_i(l)\neq k,\ l=1,2,\cdots,s-2\}\\
& &=P\{k_i(s)=k,\ k_i(s-1)=k-1\}\\
& &=P\{k_i(s-1)=k-1\}\,P\{k_i(s)=k|k_i(s-1)=k-1\}\\
& &=P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+N_{0}}.
\end{eqnarray*}
Second, observe that the earliest time at which the degree of vertex
$i$ can reach $k$ is step $k+i-1$, and the latest is step $t$. For the
degree to equal $k$ at time $t$, it must not increase again after
reaching $k$; the product in Eq. (4) accounts for this. Thus, Eq. (4)
is proved.
\medskip
\textbf{Lemma 2} (Stolz-Ces\'aro Theorem) In sequence
$\{\frac{x_n}{y_n}\}$, assume that $\{y_n\}$ is a monotone
increasing sequence with $y_n\rightarrow\infty$. If the limit
$\lim\limits_{n\rightarrow\infty}\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=l$
exists, where $-\infty\leq l\leq+\infty$, then
$\lim\limits_{n\rightarrow\infty}\frac{x_n}{y_n}=l$.
\textbf{Proof} This is a classical result, see [14].
\medskip
\textbf{Lemma 3} For the probability $P(k,t)$ defined in Eq. (1),
the limit $\lim\limits_{t\rightarrow\infty}P(1,t)$ exists and is
independent of the initial network; moreover,
\begin{eqnarray}
P(1)\triangleq\lim\limits_{t\rightarrow\infty}P(1,t)=\frac{2}{3}>0.
\end{eqnarray}
\textbf{Proof} From the construction of the BA model or Eq. (2),
it follows that
\[
P(1,i,t+1)=\left(1-\frac{1}{2t+N_{0}}\right)P(1,i,t).
\]
Since $P(1,t+1,t+1)=1$, one has
\begin{eqnarray*}
P(1,t+1)&=&\frac{1}{t+1+m_{0}}\sum\limits_{i=-m_{0},i\neq0}^{t+1}P(1,i,t+1)\\
&=&\frac{t+m_0}{t+1+m_0}\left(1-\frac{1}{2t+N_{0}}\right)P(1,t)+\frac{1}{t+1+m_0}.
\end{eqnarray*}
Then, by iteration,
\begin{eqnarray*}
P(1,t)&=&\prod\limits_{i=1}^{t-1}\left(1-\frac{1}{2i+N_{0}}\right)
\frac{i+m_0}{i+1+m_0}\left[P(1,1)+\sum\limits_{l=1}^{t-1}
\frac{\frac{1}{l+1+m_0}}{\prod\limits_{j=1}^{l}
\left(1-\frac{1}{2j+N_{0}}\right)\frac{j+m_0}{j+1+m_0}}\right]\\
&=&\frac{1}{t+m_0}\prod\limits_{i=1}^{t-1}\left(1-\frac{1}{2i+N_{0}}\right)
\left[(1+m_0)P(1,1)+\sum\limits_{l=1}^{t-1}\prod\limits_{j=1}^{l}
\left(1-\frac{1}{2j+N_{0}}\right)^{-1}\right].
\end{eqnarray*}
Next, let
\begin{eqnarray*}
x_n\triangleq&(1+m_0)P(1,1)
+\sum\limits_{l=1}^{n-1}\prod\limits_{j=1}^{l}
\left(1-\frac{1}{2j+N_{0}}\right)^{-1}
\end{eqnarray*}
and
\[
y_n\triangleq
(n+m_0)\prod\limits_{i=1}^{n-1}\left(1-\frac{1}{2i+N_{0}}\right)^{-1}>0.
\]
Thus, it follows that
\[
x_{n+1}-x_n=\prod\limits_{j=1}^{n}\left(1-\frac{1}{2j+N_{0}}\right)^{-1}
\]
and
\[
y_{n+1}-y_n=\frac{3n+N_{0}+m_0}{2n+N_{0}}\prod\limits_{i=1}^{n}
\left(1-\frac{1}{2i+N_{0}}\right)^{-1}>0.
\]
Since every factor of the product exceeds $1$, as does the prefactor,
one has $y_{n+1}-y_n>1$; the increments of $\{y_n\}$ are thus bounded
below by a positive constant, so $\{y_n\}$ is strictly increasing and
$y_n\rightarrow\infty$.
Moreover,
\[
\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=\frac{2n+N_{0}}{3n+N_{0}+m_0}\
\rightarrow\ \frac{2}{3}~~ (n\rightarrow\infty).
\]
From Lemma 2, one has
\[
P(1)\triangleq\lim\limits_{t\rightarrow\infty}P(1,t)
=\lim\limits_{n\rightarrow\infty}\frac{x_n}{y_n}
=\lim\limits_{n\rightarrow\infty}\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=\frac{2}{3}>0.
\]
This completes the proof.
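The limit can be checked numerically by iterating the recursion for $P(1,t)$ obtained in the proof; the initial network below ($m_0=3$, $N_0=6$, $P(1,1)=1$) is an illustrative choice, since by the lemma the limit does not depend on it:

```python
# Numerical check of Lemma 3: iterate
#   P(1,t+1) = (t+m0)/(t+1+m0) * (1 - 1/(2t+N0)) * P(1,t) + 1/(t+1+m0)
# and confirm convergence to 2/3. The initial data (m0 = 3, N0 = 6,
# P(1,1) = 1) are illustrative; the limit is independent of them.

def p1_limit(m0=3, n0=6, p=1.0, steps=200_000):
    for t in range(1, steps):
        p = (t + m0) / (t + 1 + m0) * (1 - 1 / (2 * t + n0)) * p \
            + 1 / (t + 1 + m0)
    return p
```

With $2\times10^5$ iterations the iterate agrees with $2/3$ to well within one percent.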
\medskip
\textbf{Lemma 4} For $k>1$, if the limit
$\lim\limits_{t\rightarrow\infty}P(k-1,t)$ exists, then the limit
$\lim\limits_{t\rightarrow\infty}P(k,t)$ also exists and,
moreover,
\begin{eqnarray}
P(k)\triangleq\lim\limits_{t\rightarrow\infty}P(k,t)=\frac{k-1}{k+2}\,P(k-1)>0.
\end{eqnarray}
\textbf{Proof} First, observe that
\begin{eqnarray*}
P(k,t)&=&\frac{1}{t+m_0}\sum\limits_{i=-m_{0},i\neq0}^{t}P(k,i,t)\\
&=&\frac{1}{t+m_0}\sum\limits_{i=-m_{0}}^{-1}P(k,i,t)
+\frac{t}{t+m_0}\,\frac{1}{t}\,\sum\limits_{i=1}^{t}P(k,i,t).
\end{eqnarray*}
Next, denote
$\overline{P}(k,t)\triangleq\frac{1}{t}\sum\limits_{i=1}^{t}P(k,i,t)$.
One only needs to prove that the limit
$\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)$ exists, which
will imply that the limit $\lim\limits_{t\rightarrow\infty}P(k,t)
=\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)$ exists.
To show that the limit of $\overline{P}(k,t)$ exists as
$t\rightarrow\infty$, observe that $P(k,i,t)=0$ when $i>t+1-k$: in
this case, even if vertex $i$ increased its degree by $1$ at every
step, it could not reach degree $k$ by time $t$. Then, it follows from
Lemma 1 that
\begin{eqnarray*}
\overline{P}(k,t)
&=&\frac{1}{t}\sum\limits_{i=1}^{t+1-k}P(k,i,t)\\
&=&\frac{1}{t}\sum\limits_{i=1}^{t+1-k}\sum\limits_{s=i+k-1}^{t}f(k,i,s)\,
\prod\limits_{j=s}^{t-1}\left(1-\frac{k}{2j+N_{0}}\right)\\
&=&\frac{1}{t}\sum\limits_{i=1}^{t+1-k}\sum\limits_{s=i+k-1}^{t}
P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+N_{0}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+N_{0}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k}^{t}\sum\limits_{i=1}^{s+1-k}P(k-1,i,s-1)\,
\frac{k-1}{2(s-1)+N_{0}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+N_{0}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k}^{t}\sum\limits_{i=1}^{s-1}
P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+N_{0}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+N_{0}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k}^{t}\overline{P}(k-1,s-1)\,
\frac{(s-1)(k-1)}{2(s-1)+N_{0}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+N_{0}}\right)\\
&=&\frac{1}{t}\,\prod\limits_{i=k}^{t-1}\left(1-\frac{k}{2i+N_{0}}\right)
\Bigg[\,\overline{P}(k-1,k-1)\frac{(k-1)^2}{2(k-1)+N_{0}}\\
& & +\sum\limits_{l=k}^{t-1}\overline{P}(k-1,l)\,
\frac{l(k-1)}{2l+N_{0}}\,\prod\limits_{j=k}^{l}
\left(1-\frac{k}{2j+N_{0}}\right)^{-1}\Bigg].
\end{eqnarray*}
Next, let
\begin{eqnarray*}
x_n&\triangleq&\overline{P}(k-1,k-1)\,\frac{(k-1)^2}{2(k-1)+N_{0}}\\
&&+\sum\limits_{l=k}^{n-1}\overline{P}(k-1,l)\,\frac{l(k-1)}{2l+N_{0}}
\prod\limits_{j=k}^{l}\left(1-\frac{k}{2j+N_{0}}\right)^{-1}
\end{eqnarray*}
and
\[
y_n\triangleq
n\prod\limits_{i=k}^{n-1}\left(1-\frac{k}{2i+N_{0}}\right)^{-1}>0.
\]
Obviously,
\[
x_{n+1}-x_n=\overline{P}(k-1,n)\,\frac{n(k-1)}{2n+N_{0}}\,
\prod\limits_{j=k}^{n}\left(1-\frac{k}{2j+N_{0}}\right)^{-1},
\]
and since
\begin{eqnarray*}
y_{n+1}-y_n
&=&\left[(n+1)-n\left(1-\frac{k}{2n+N_{0}}\right)\right]
\prod\limits_{i=k}^{n}\left(1-\frac{k}{N_{0}+2i}\right)^{-1}\\
&=&\frac{(k+2)n+N_{0}}{2n+N_{0}}\,\prod\limits_{i=k}^{n}
\left(1-\frac{k}{2i+N_{0}}\right)^{-1}>0,
\end{eqnarray*}
one has that $y_{n+1}-y_n>1$, since every factor of the product
exceeds $1$, as does the prefactor for $k>1$. The increments of
$\{y_n\}$ are therefore bounded below by a positive constant, hence
$y_n\rightarrow\infty$. Also, by assumption,
\[
\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=\frac{(k-1)n}{(k+2)n+N_{0}}\,
\overline{P}(k-1,n)\ \rightarrow\ \frac{k-1}{k+2}\,P(k-1)~~
(n\rightarrow\infty).
\]
Thus, it follows from Lemma 2 that
\[
\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)
=\lim\limits_{n\rightarrow\infty}\frac{x_n}{y_n}
=\lim\limits_{n\rightarrow\infty}
\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=\frac{k-1}{k+2}\,P(k-1)>0,
\]
therefore, $\lim\limits_{t\rightarrow\infty}P(k,t)$ exists, and Eq.
(6) is thus proved.
\medskip
\textbf{Theorem 1} The steady-state degree distribution of the BA
model with $m=1$ exists, and is given by
\begin{eqnarray}
P(k)=\frac{4}{k(k+1)(k+2)}\ \thicksim\ 4k^{-3}>0.
\end{eqnarray}
\textbf{Proof} By mathematical induction, it follows from Lemmas 3
and 4 that the steady-state degree distribution of the BA model
with $m=1$ exists. Then, solving Eq. (6) iteratively, one obtains
\[
P(k)=\frac{k-1}{k+2}\,P(k-1)
=\frac{k-1}{k+2}\,\frac{k-2}{k+1}\,\frac{k-3}{k}\,P(k-3).
\]
Continuing this process down to $P(1)$ and substituting $P(1)=2/3$
from Lemma 3, one finally obtains
\[
P(k)=\frac{4}{k(k+1)(k+2)}\ \thicksim\ 4k^{-3}>0.
\]
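The closed form can also be checked against a direct simulation of the BA model with $m=1$; the sketch below uses the standard edge-list trick, in which sampling a uniform edge endpoint is equivalent to preferential attachment:

```python
import random

# Monte-Carlo check of Theorem 1: grow a BA network with m = 1 and compare
# the empirical degree distribution with P(k) = 4 / (k(k+1)(k+2)).
# Choosing a uniform entry of `ends` (the list of all edge endpoints) picks
# a vertex with probability proportional to its degree.

def simulate_ba_m1(n_vertices, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # illustrative initial network: one edge
    ends = [0, 1]
    for v in range(2, n_vertices):
        target = rng.choice(ends)
        degree[v] = 1
        degree[target] += 1
        ends += [v, target]
    return degree

def empirical_pk(degree, k):
    return sum(1 for d in degree.values() if d == k) / len(degree)
```

With $2\times10^4$ vertices the empirical $P(1)$ and $P(2)$ agree with $2/3$ and $1/6$ to within sampling error.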
\medskip
From Theorem 1, one can see that the degree distribution formula
of Krapivsky et al.$^{[5]}$ is exact, although the mathematical
proof there was not as rigorous as that given above.
\section{The BA model with $m\geq2$}
\ \ \ \ \textbf{Lemma 5} Under the $m\Pi$-hypothesis, the BA model
satisfies, for $k>m$,
\begin{eqnarray}
f(k,i,s)&=&P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+\frac{N_{0}}{m}},\\
P(k,i,t)&=&\sum\limits_{s=i+k-m}^t
f(k,i,s)\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right).
\end{eqnarray}
\textbf{Proof} First, consider Eq. (8). By the construction of the
BA model, the degree of a vertex is nondecreasing and increases by
at most $1$ at each step. Thus, it follows from the Markovian
properties that
\begin{eqnarray*}
f(k,i,s)&=&P\{k_i(s)=k,\ k_i(l)\neq k,\ l=1,2,\cdots,s-1\}\\
&=&P\{k_i(s)=k,\ k_i(s-1)=k-1,\ k_i(l)\neq k,\ l=1,2,\cdots,s-2\}\\
&=&P\{k_i(s)=k,\ k_i(s-1)=k-1\}\\
&=&P\{k_i(s-1)=k-1\}\,P\{k_i(s)=k|k_i(s-1)=k-1\}\\
&=&P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+\frac{N_{0}}{m}}.
\end{eqnarray*}
Second, observe that the earliest time at which the degree of vertex
$i$ can reach $k$ is step $k+i-m$, and the latest is step $t$. For
the degree to equal $k$ at time $t$, it must not increase again after
reaching $k$; the product in Eq. (9) accounts for this. Thus, Eq. (9)
is proved.
\medskip
\textbf{Lemma 6} Under the $m\Pi$-hypothesis, in the BA model the
limit $\lim\limits_{t\rightarrow\infty}P(m,t)$ exists and is
independent of the initial network; moreover,
\begin{eqnarray}
P(m)\triangleq\lim\limits_{t\rightarrow\infty}P(m,t)=\frac{2}{m+2}>0.
\end{eqnarray}
\textbf{Proof} From the construction of the BA model or Eq. (2), it
follows that
\[
P(m,i,t+1)=\left(1-\frac{m}{2t+\frac{N_{0}}{m}}\right)P(m,i,t).
\]
Since $P(m,t+1,t+1)=1$, one has
\begin{eqnarray*}
P(m,t+1)&=&\frac{1}{t+1+m_0}\,\sum\limits_{i=-m_{0},i\neq0}^{t+1}P(m,i,t+1)\\
&=&\frac{t+m_0}{t+1+m_0}\left(1-\frac{m}{2t+\frac{N_{0}}{m}}\right)P(m,t)
+\frac{1}{t+1+m_0}.
\end{eqnarray*}
Then, iterative calculation yields
\begin{eqnarray*}
P(m,t)&=&\prod\limits_{i=1}^{t-1}\left(1-\frac{m}{2i+\frac{N_{0}}{m}}\right)
\frac{i+m_0}{i+1+m_0}\left[\,P(m,1)+\sum\limits_{l=1}^{t-1}
\frac{\frac{1}{l+1+m_0}}{\prod\limits_{j=1}^{l}
\left(1-\frac{m}{2j+\frac{N_{0}}{m}}\right)\frac{j+m_0}{j+1+m_0}}\right]\\
&=&\frac{1}{t+m_0}\prod\limits_{i=1}^{t-1}
\left(1-\frac{m}{2i+\frac{N_{0}}{m}}\right)\left[(1+m_0)P(m,1)
+\sum\limits_{l=1}^{t-1}\prod\limits_{j=1}^{l}
\left(1-\frac{m}{2j+\frac{N_{0}}{m}}\right)^{-1}\right].
\end{eqnarray*}
Next, let
\[
x_n\triangleq(1+m_0)P(m,1)+\sum\limits_{l=1}^{n-1}
\prod\limits_{j=1}^{l}\left(1-\frac{m}{2j+\frac{N_{0}}{m}}\right)^{-1}
\]
and
\[
y_n\triangleq(n+m_0)\prod\limits_{i=1}^{n-1}
\left(1-\frac{m}{2i+\frac{N_{0}}{m}}\right)^{-1}>0.
\]
It follows that
\[
x_{n+1}-x_n
=\prod\limits_{j=1}^{n}\left(1-\frac{m}{2j+\frac{N_{0}}{m}}\right)^{-1}
\]
and
\[
y_{n+1}-y_n=\frac{(m+2)n+\frac{N_{0}}{m}+mm_0}{2n+\frac{N_{0}}{m}}\,
\prod\limits_{i=1}^{n}\left(1-\frac{m}{2i+\frac{N_{0}}{m}}\right)^{-1}>0.
\]
Since every factor of the product exceeds $1$, as does the prefactor,
the increments $y_{n+1}-y_n$ exceed $1$ and $y_n\rightarrow\infty$.
Consequently,
\[
\frac{x_{n+1}-x_n}{y_{n+1}-y_n}
=\frac{2n+\frac{N_{0}}{m}}{(m+2)n+\frac{N_{0}}{m}+mm_0}\
\rightarrow\ \frac{2}{m+2}~~ (n\rightarrow\infty).
\]
It follows from Lemma 2 that
\[
P(m)=\lim\limits_{t\rightarrow\infty}P(m,t)
=\lim\limits_{n\rightarrow\infty}\frac{x_n}{y_n}
=\lim\limits_{n\rightarrow\infty}\frac{x_{n+1}-x_n}{y_{n+1}-y_n}
=\frac{2}{m+2}>0.
\]
This completes the proof.
\medskip
\textbf{Lemma 7} Under the $m\Pi$-hypothesis, in the BA model with
$k>m$, if $\lim\limits_{t\rightarrow\infty}P(k-1,t)$ exists, then
$\lim\limits_{t\rightarrow\infty}P(k,t)$ exists and, moreover,
\begin{eqnarray}
P(k)\triangleq
\lim\limits_{t\rightarrow\infty}P(k,t)=\frac{k-1}{k+2}\,P(k-1)>0.
\end{eqnarray}
\textbf{Proof} First, observe that
\[
P(k,t)=\frac{1}{t+m_0}\sum\limits_{i=-m_{0},i\neq0}^{t}P(k,i,t)
=\frac{1}{t+m_0}\sum\limits_{i=-m_{0}}^{-1}P(k,i,t)
+\frac{t}{t+m_0}\frac{1}{t}\sum\limits_{i=1}^{t}P(k,i,t).
\]
Denote
$\overline{P}(k,t)\triangleq\frac{1}{t}\sum\limits_{i=1}^{t}P(k,i,t)$.
One only needs to prove that the limit
$\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)$ exists, which
will imply that the limit $\lim\limits_{t\rightarrow\infty}P(k,t)
=\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)$ exists.
To show that the limit of $\overline{P}(k,t)$ exists as
$t\rightarrow\infty$, observe that $P(k,i,t)=0$ when $i>t+m-k$: vertex
$i$ enters the network with degree $m$ and its degree increases by at
most $1$ at each step, so it cannot reach degree $k$ by time $t$.
Therefore, it follows from Lemma 5 that
\begin{eqnarray*}
\overline{P}(k,t)
&=&\frac{1}{t}\sum\limits_{i=1}^{t+m-k}P(k,i,t)\\
&=&\frac{1}{t}\sum\limits_{i=1}^{t+m-k}\sum\limits_{s=i+k-m}^{t}f(k,i,s)\,
\prod\limits_{j=s}^{t-1}\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)\\
&=&\frac{1}{t}\sum\limits_{i=1}^{t+m-k}\sum\limits_{s=i+k-m}^{t}
P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+\frac{N_{0}}{m}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k-m+1}^{t}\sum\limits_{i=1}^{s+m-k}
P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+\frac{N_{0}}{m}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k-m+1}^{t}\sum\limits_{i=1}^{s-1}
P(k-1,i,s-1)\,\frac{k-1}{2(s-1)+\frac{N_{0}}{m}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)\\
&=&\frac{1}{t}\sum\limits_{s=k-m+1}^{t}\overline{P}(k-1,s-1)\,
\frac{(s-1)(k-1)}{2(s-1)+\frac{N_{0}}{m}}\,\prod\limits_{j=s}^{t-1}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)\\
&=&\frac{1}{t}\,\prod\limits_{i=k-m+1}^{t-1}
\left(1-\frac{k}{2i+\frac{N_{0}}{m}}\right)
\Bigg[\,\overline{P}(k-1,k-m)\frac{(k-1)(k-m)}{2(k-m)
+\frac{N_{0}}{m}}\\
& &+\sum\limits_{l=k-m+1}^{t-1}\overline{P}(k-1,l)\,
\frac{l(k-1)}{2l+\frac{N_{0}}{m}}\,\prod\limits_{j=k-m+1}^{l}
\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)^{-1}\Bigg].
\end{eqnarray*}
Next, let
\begin{eqnarray*}
x_n&\triangleq&\overline{P}(k-1,k-m)\,\frac{(k-1)(k-m)}{2(k-m)+\frac{N_{0}}{m}}\\
&&+\sum\limits_{l=k-m+1}^{n-1}\overline{P}(k-1,l)\,\frac{l(k-1)}{2l+\frac{N_{0}}{m}}
\prod\limits_{j=k-m+1}^{l}\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)^{-1}
\end{eqnarray*}
and
\[
y_n\triangleq n\prod\limits_{i=k-m+1}^{n-1}
\left(1-\frac{k}{2i+\frac{N_{0}}{m}}\right)^{-1}>0.
\]
It follows that
\[
x_{n+1}-x_n=\overline{P}(k-1,n)\,\frac{n(k-1)}{2n+\frac{N_{0}}{m}}
\prod\limits_{j=k-m+1}^{n}\left(1-\frac{k}{2j+\frac{N_{0}}{m}}\right)^{-1}
\]
and
\begin{eqnarray*}
y_{n+1}-y_n
&=&\left[(n+1)-n\left(1-\frac{k}{2n+\frac{N_{0}}{m}}\right)\right]
\prod\limits_{i=k-m+1}^{n}\left(1-\frac{k}{2i+\frac{N_{0}}{m}}\right)^{-1}\\
&=&\frac{(k+2)n+\frac{N_{0}}{m}}{2n+\frac{N_{0}}{m}}\prod\limits_{i=k-m+1}^{n}
\left(1-\frac{k}{2i+\frac{N_{0}}{m}}\right)^{-1}>0.
\end{eqnarray*}
Hence $y_{n+1}-y_n>1$ (every factor of the product exceeds $1$, as
does the prefactor), so $y_n\rightarrow\infty$. By assumption,
\[
\frac{x_{n+1}-x_n}{y_{n+1}-y_n}=\frac{(k-1)n}{(k+2)n+\frac{N_{0}}{m}}\,
\overline{P}(k-1,n)\rightarrow\frac{k-1}{k+2}P(k-1)~~
(n\rightarrow\infty).
\]
It then follows from Lemma 2 that
\[
\lim\limits_{t\rightarrow\infty}\overline{P}(k,t)
=\lim\limits_{n\rightarrow\infty}\frac{x_n}{y_n}
=\lim\limits_{n\rightarrow\infty}\frac{x_{n+1}-x_n}{y_{n+1}-y_n}
=\frac{k-1}{k+2}\,P(k-1)>0.
\]
Thus, $\lim\limits_{t\rightarrow\infty}P(k,t)$ exists and Eq. (11)
is proved.
\medskip
\textbf{Theorem 2} Under the $m\Pi$-hypothesis, the steady-state
degree distribution of the BA model with $m\geq2$ exists, and is
given by
\begin{eqnarray}
P(k)=\frac{2m(m+1)}{k(k+1)(k+2)}\ \thicksim\ 2m^2k^{-3}>0.
\end{eqnarray}
\textbf{Proof} By induction, it follows from Lemmas 6 and 7 that
the steady-state degree distribution of the BA model with $m\geq2$
exists. Iterating Eq. (11) gives
\[
P(k)=\frac{k-1}{k+2}\,P(k-1)
=\frac{k-1}{k+2}\,\frac{k-2}{k+1}\,\frac{k-3}{k}\,P(k-3),
\]
and continuing this process down to $P(m)$, with $P(m)=2/(m+2)$ from
Lemma 6, one obtains
\[
P(k)=\frac{2m(m+1)}{k(k+1)(k+2)}\ \thicksim\ 2m^2k^{-3}>0.
\]
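The telescoping can be verified exactly in rational arithmetic: iterating $P(k)=\frac{k-1}{k+2}P(k-1)$ down from $P(m)=2/(m+2)$ reproduces the closed form for every $k\geq m$.

```python
from fractions import Fraction

# Exact check of Theorem 2: iterate P(k) = (k-1)/(k+2) * P(k-1) starting from
# P(m) = 2/(m+2) and compare with the closed form 2m(m+1)/(k(k+1)(k+2)).

def pk_by_recursion(m, k):
    p = Fraction(2, m + 2)                  # P(m), from Lemma 6
    for j in range(m + 1, k + 1):
        p *= Fraction(j - 1, j + 2)
    return p

def pk_closed_form(m, k):
    return Fraction(2 * m * (m + 1), k * (k + 1) * (k + 2))
```

For $m=1$ the same iteration recovers Theorem 1, since $2m(m+1)=4$.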
\medskip
One can see that this degree distribution formula is consistent
with the formulas obtained by Dorogovtsev et al.$^{[6]}$ and
Bollob\'as et al.$^{[9]}$, whose models allow multiple edges and loops.
\medskip
Finally, the authors thank Professor Yirong Liu for many helpful
discussions.
\vspace{.6cm}
Z. Hou$^1$:\ zthou@csu.edu.cn
X. Kong$^1$:\ kongxiangxing2008@163.com
D. Shi$^{1,2}$:\ shidh2001@263.net
G. Chen$^3$:\ gchen@ee.cityu.edu.hk
\end{document}
\section{Introduction}
It is increasingly clear from observations that
protoplanetary discs are highly structured --- exhibiting gaps,
asymmetries, and spirals --- and thus deviate
significantly from simple smooth disc models (Andrews et al.~2011, Muto et
al.~2012, Perez et al.~2014, Brogan et al.~2015). Besides
these observed features, theory predicts
an annular region between roughly 1 AU and 10 AU in which the
magnetorotational instability
fails to initiate turbulence,
but where hydrodynamical processes may hold sway (Gammie 1996,
Armitage 2011). Originally termed a `dead zone' (though not nearly so
dead as first thought), this region plays a critical role in theories
of outburst behaviour and vortex and planet formation
(e.g.\ Armitage et al.~2001, Varni\'ere \& Tagger 2006, Kretke et
al.~2009, Chatterjee \& Tan 2014).
One possible source of hydrodynamical activity
is convection. Irradiation from the protostar will certainly generate a
negative radial temperature gradient in the disc, though it is unclear
if this is sufficient to force the entropy $S$ to decrease outward as well (a
necessary precondition for convection).
In fact, observations indicate that at radii $\gtrsim 20$ AU most discs
exhibit $dS/dR>0$ and hence convection is impossible
(Andrews et al.~2009, 2010; Isella et al.~2009; Guilloteau et al.~2011).
The situation is uncertain, however, at smaller radii, near the disk
surface,
and around structures such as
dead zones, opacity transitions, and gaps; here the sign of the gradient may well be reversed.
This paper is relevant to specific disc locations for which this is
indeed the case.
The other problem that faces convection
is the disc's strong differential rotation, which
easily stabilises the flow against its
negative entropy gradient; according to
the Solberg-H\o{}iland criterion, protoplanetary discs are stable.
However, certain double-diffusive processes have found
ways around this constraint. Examples include:
a resistive instability
that employs diffusing magnetic fields (Latter et al.~2010a),
the subcritical baroclinic instability (SBI, Lesur \& Papaloizou 2010),
and the convective overstability (Klahr \& Hubbard 2014, Lyra 2014),
the latter two making use of thermal diffusion (or cooling). It is to the
convective overstability that this work is devoted.
Thermal diffusion introduces a crucial time
lag between an inertial wave's dynamical oscillation
(an epicycle, essentially)
and its associated thermodynamic
oscillation. After half an epicycle a fluid blob returns
to its starting radius at a different temperature than its
surroundings. As a consequence, it suffers a buoyancy acceleration that
amplifies the initial oscillation, leading to runaway growth.
Such overstable convection was first touched on by
Chandrasekhar (1953, 1961) but, as interest originally lay in
stellar interiors, researchers focussed on
cases in which the oscillations arose not from rotation but from
magnetic tension (Cowling 1957) or composition gradients
(`semi-convection', Kato 1966).
It is only recently, decades later, that oscillatory convection has
been raised
in the context of accretion discs (Klahr \& Hubbard 2014, Lyra 2014),
even though it could play an important part in dead zone dynamics.
Indeed, the local simulations of Lyra (2014)
suggest that the instability's nonlinear saturated state transports
a respectable amount of angular momentum and even generates
vortices, possibly in conjunction with the SBI.
This paper will revisit both the linear and nonlinear theory of the
convective overstability, remaining in the Boussinesq approximation
throughout. I reproduce analytic expressions for the growth rate,
and show that the fastest
growing mode possesses a local growth
rate of $|N^2|/\Omega\sim 10^{-3}\Omega$, where $N$ and $\Omega$ are the radial
buoyancy and rotation frequencies of the disc. Its
vertical wavelength is short, of order $\sqrt{\xi/\Omega}\sim 10^{-2}
H$ at 1 AU, where $\xi$ is the thermal diffusivity
and $H$ is the disc scale height,
while its radial wavelength is much longer and connects up
to the global structure. I also discuss the prevalence of convective
overstability in realistic discs, and conclude (in agreement with Lin
\& Youdin 2015) that it is not widespread, perhaps only appearing in inner
disc regions, dead zones, or near gaps.
I also show that the unstable modes are
nonlinear solutions to the governing equations of Boussinesq
hydrodynamics, and thus can grow to arbitrarily large amplitudes, at
least in principle.
Before a mode grows too powerful, however, it is attacked by parasitic
instabilities, the foremost of which involves the well-known parametric
resonance between inertial waves and an epicycle (e.g.\ Gammie et al.~2000).
Subsequently, an analytical estimate for the overstability's
saturation level can be
derived that predicts
a maximum amplitude of $|N^2|/\Omega^2$ over the
background.
I discuss the nature of the
ensuing turbulence, and consider connections with
semiconvection, as well as the SBI.
In particular,
I argue that if the characteristic amplitude of
the turbulent state is determined by the parasitic modes
and if $|N|\ll \Omega$,
then its motions will be too axisymmetric and too weak to
transport appreciable angular momentum or to
generate vortices. If, on the other hand, the saturation culminates in
layer formation, as can occur in semiconvection,
the turbulent velocities may be orders of magnitude
greater and more interesting dynamics may ensue.
Numerical simulations are needed to determine
which outcome is
realistic.
The overstability's turbulent stirring
could agitate dust grains, impeding settling
but also enhancing the collision frequency and collision speeds of
0.1-1 m particles.
Lastly, the convective overstability, being essentially global in
radius, may be connected to global dynamics and (more speculatively)
excite a small amount of eccentricity, though this
cannot be tested in the local model used here.
\section{Model equations}
Being interested in small scales and subsonic flow, I
approximate the protoplanetary disc with the Boussinesq shearing
sheet. This model describes a small `block' of disc centred at a
radius $R_0$ moving on the circular orbit prescribed by $R_0$ and at
an orbital frequency of $\Omega$. The block is represented in
Cartesian coordinates with the $x$ and $y$ directions corresponding to
the radial and azimuthal directions, respectively
(see Goldreich \& Lynden-Bell 1965).
The governing equations are
\begin{align} \label{GE1}
&\d_t \u + \u\cdot\nabla\u = -\frac{1}{\rho}\nabla P -2\Omega \ensuremath{\mathbf{e}_{z}}\times
\u \notag \\
& \hskip2cm + 2q\Omega\, x\,\ensuremath{\mathbf{e}_{x}} -N^2\theta\,\ensuremath{\mathbf{e}_{x}} + \nu\nabla^2\u, \\
& \d_t\theta + \u\cdot\nabla\theta = u_x + \xi\,\nabla^2\theta,\label{GE2} \\
& \nabla\cdot \u = 0, \label{GE3}
\end{align}
where $\u$ is the fluid velocity, $P$ is pressure, $\rho$ is the
(constant) background density, and $\theta$ is the buoyancy variable. The
shear parameter of the sheet is denoted by $q$, equal to $3/2$ in a
Keplerian disc, and the buoyancy frequency arising from the radial
stratification is denoted by $N$.
Being interested
in the optically thicker dead zone, we employ thermal diffusion rather
than the optically thin cooling laws adopted by Klahr \& Hubbard
(2014) and Lyra (2014). Viscosity, denoted by $\nu$, has been included for
completeness but will usually be set to zero.
Following Lesur \& Papaloizou (2010), the stratification length $\ell$ has
been absorbed into $\theta$. Thus
$\theta \propto \ell^{-1}(\rho'/\rho)$, where $\rho'$ is the perturbation to the
background density. The (squared) buoyancy frequency can be determined from
\begin{align} \label{Nsq}
N^2 = - \frac{1}{\gamma \rho}\frac{\d P}{\d R}\frac{\d \ln\left(P\rho^{-\gamma}\right)}{\d R},
\end{align}
evaluated at $R=R_0$. In the above $\gamma$ is the adiabatic index.
Another important quantity is the (squared) epicyclic frequency
\begin{equation}
\kappa^2 = 2(2-q)\Omega^2.
\end{equation}
In addition to $q$, the system can be specified by two other
dimensionless parameters.
The Richardson number
measures the relative strength of the radial stratification;
it is denoted by $n^2$ and defined via
\begin{equation}
n^2 = -\frac{N^2}{\kappa^2}.
\end{equation}
In thin astrophysical discs, $n^2$ is generally small (see Section 3.4).
The Prandtl number helps quantify the separation in scales between the
thermal lengthscale and the viscous lengthscale; it is denoted by Pr
and defined via
\begin{equation}
\text{Pr} = \frac{\nu}{\xi},
\end{equation}
and it, too, is generally small. Finally, though the outer scale does not
appear in the governing equations, it can be useful to define
the
Peclet number
\begin{equation}
\text{Pe}= \frac{H^2\kappa}{\xi},
\end{equation}
where $H$ is the vertical scale height. In our problem,
this parameter helps quantify
the separation in scales between the instability length and the
disc thickness.
\section{Linear theory}
In this section I revisit the analyses presented in Klahr \& Hubbard
(2014) and Lyra (2014), and provide an analytical expression for the growth rate in
the limit of small Richardson number. I explicate
the physical mechanism of instability and apply these results to the
conditions expected in protoplanetary discs.
\subsection{Inviscid eigenproblem}
Equations \eqref{GE1}-\eqref{GE3} yield the equilibrium,
$\u=- q\Omega x\,\ensuremath{\mathbf{e}_{y}}$, $\theta=0$, and $P$ a constant.
This is perturbed by disturbances
proportional to $\text{e}^{st + \ensuremath{\text{i}} Kz}$, where $s$ is a
(complex) growth rate and $K$ is a (real) vertical wavenumber. We assume that
any radial variation exhibited by our modes lies on scales larger than
the shearing sheet and does not interfere with its local
physics. Viscosity is also dropped, to ease the analysis.
Denoting perturbation with primes, the linearised equations are
\begin{align} \label{lin1}
& s\,u_x' = 2\Omega\, u_y' - N^2 \theta', \\
& s\,u_y' = (q-2)\Omega\, u_x', \label{lin2}\\
& s\,\theta'= u_x' - \xi K^2\,\theta'.\label{lin3}
\end{align}
Our ansatz ensures that $u_z'=0$, via the incompressibility condition,
and $P'=0$, via the $z$-component of the momentum equation. The
dispersion relation for these modes is easily obtained:
\begin{align} \label{disp}
s^3 + \beta s^2 + (N^2+\kappa^2)s + \beta \kappa^2=0,
\end{align}
where $\beta= \xi\,K^2$ is the (length-scale dependent) cooling rate.
Apart from differences in notation, Eq.~\eqref{disp} agrees with Eq.~(18)
in Klahr \& Hubbard (2014), Eq.~(21)
in Lyra (2014), and the inviscid version of Eq.~(B2) in Guilet \&
M\"uller (2015).
When there is no thermal diffusion whatsoever Eq.~\eqref{disp} is easy
to solve and one obtains buoyancy-assisted epicyclic
motion. Instability occurs when
\begin{equation}
N^2 + \kappa^2 < 0,
\end{equation}
i.e.\ the Solberg-H\o{}iland criterion.
When $\xi$ is nonzero the dispersion relation is a cubic and its
analytic solution is messy. A numerical solution, using
fiducial parameters, is plotted in
Fig.~1.
In the
natural limit of small Richardson number $n^2 \ll 1$,
a convenient analytical expression for $s$
is available, as Guilet \& M\"uller (2015) show. They set $s= s_0 + s_1
n^2+\dots$ and to leading order Eq.~\eqref{disp} becomes
$$ (s_0^2+\kappa^2)(s_0+\beta) =0,$$
which yields
a decaying energy mode and two epicycles. We take one of the epicycles,
$s_0=\ensuremath{\text{i}} \kappa$,
and at the next order obtain
$$ s_1 = \frac{1}{2}\frac{\kappa^2}{\kappa^2+\beta^2}\left(\beta-\ensuremath{\text{i}}\kappa\right) .$$
The real part of the growth rate is hence
\begin{equation}\label{sapprox}
\text{Re}(s) = -\frac{1}{2}\frac{\beta\,N^2}{\kappa^2+\beta^2} +\mathcal{O}(n^4\kappa),
\end{equation}
which matches the full solution to Eq.~\eqref{disp} for all $K$ (or
$\beta$) in the appropriate limit.
Maximum growth is achieved when $\beta=\kappa$, i.e.\ when
$K=\sqrt{\kappa/\xi}$, for which
\begin{equation}
\text{Re}(s_\text{max}) = -\frac{1}{4}\frac{N^2}{\kappa},
\end{equation}
which agrees with Eq.~(32) in Lyra (2014).
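As a numerical illustration (my own sketch, not part of the original analysis; the parameter choices mirror Fig.~1 but are otherwise arbitrary), the cubic Eq.~\eqref{disp} can be solved directly and compared with the asymptotic rate:

```python
import numpy as np

def growth_rate(beta, n2, kappa=1.0):
    """Largest Re(s) among roots of the dispersion relation
    s^3 + beta*s^2 + (N^2 + kappa^2)*s + beta*kappa^2 = 0,
    with N^2 = -n2*kappa^2."""
    N2 = -n2 * kappa**2
    return np.roots([1.0, beta, N2 + kappa**2, beta * kappa**2]).real.max()

def growth_rate_approx(beta, n2, kappa=1.0):
    """Small-n2 asymptotic rate, Re(s) = -beta*N^2 / [2(kappa^2+beta^2)]."""
    N2 = -n2 * kappa**2
    return -0.5 * beta * N2 / (kappa**2 + beta**2)

n2 = 0.01                              # Richardson number, as in Fig. 1
betas = np.linspace(0.05, 5.0, 200)    # dimensionless cooling rates
exact  = np.array([growth_rate(b, n2) for b in betas])
approx = np.array([growth_rate_approx(b, n2) for b in betas])
# At this n2 the two curves are indistinguishable, and both peak at
# beta = kappa with Re(s) = n2*kappa/4.
```

The agreement across all $\beta$ at $n^2=0.01$ is the numerical counterpart of the statement accompanying Fig.~1.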
Three things are noteworthy.
First, the Solberg-H\o{}iland criterion no longer determines the onset of
instability. Instead, growth occurs when
$N^2 <0$
(the Schwarzschild criterion); thus the convective overstability has used
thermal diffusion to negate the stabilising influence of rotation.
Second, the maximum growth rate
is independent of the magnitude of thermal diffusion, even though
diffusion is crucial to the existence of the instability. Thermal
diffusion acts as a catalyst: it permits the instability, but its
magnitude only fixes the characteristic lengthscale.
Third, because its radial wavenumber is zero the mode's group speed
$\d\text{Im}(s)/\d K$ is small, proportional to $n^2$ (and hence much
less than the phase speed). As a consequence,
there is no risk that the mode energy
propagates far away before it grows to appreciable amplitudes.
\begin{figure}
\begin{center}
\scalebox{0.6}{\includegraphics{COlin.eps}}
\caption{Real part of the convective overstability's growth rate as a
function of wavenumber $K$. The Richardson number is
$n^2=0.01$. The solid line represents the full solution to
Eq.~\eqref{disp}, but the analytic approximation of
Eq.~\eqref{sapprox}
is indistinguishable.} \label{DispReln}
\end{center}
\end{figure}
The eigenfunction of the unstable mode itself
consists of a vertical stack of planar fluid sheets
each undergoing slowly growing epicycles and each
communicating with its neighbours via thermal diffusion.
Because vertical motion is absent from the mode, it is relatively
impervious to the disc's vertical structure, in particular a stable
stratification.
Note that if a
scale-free cooling law is adopted, rather than thermal diffusion
(Klahr \& Hubbard 2014, Lyra 2014),
the unstable modes can possess an arbitrary dependence on $z$. In
fact, such a model predicts that
\emph{every} scale, from the viscous cut-off all the way to $H$, grows
at the same maximum rate, a situation that is somewhat artificial and
may pose problems when the nonlinear dynamics are simulated.
Putting to one side its vertical dependence, the convective
overstability
can also
be identified as
the local manifestation of a global eccentric mode, with azimuthal
wavenumber equal to 1 (Ogilvie 2001, Ogilvie \& Barker 2014).
The convective overstability may thus potentially excite
the disc's eccentricity, though this is an idea not pursued in this
paper.
Lastly, there also exist modes
with non-zero radial wavenumbers that grow slower and are not
treated here (see Lyra 2014). Non-axisymmetric modes, on the other
hand, offer
only a short period of transient growth before being sheared out, and
are also neglected.
\subsection{Influence of viscosity}
In the regime $n^2\gg \text{Pr}$, viscosity only adds a small
correction to the maximum growth rate. But it does damp
small scales that would be otherwise unstable. In Appendix A, I calculate
the critical wavenumber
upon which short modes are stabilised. For small Prandtl number it may be
estimated by
\begin{equation}
K_c \approx \left(\frac{n^2}{\text{Pr}}\right)^{1/4}K_\text{fast},
\end{equation}
where $K_\text{fast}$ is the wavenumber of fastest growth. Note that
the $1/4$ power means that the spatial separation between the fastest
growing and marginal modes is not as vast as one would first think.
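For concreteness, a short numerical sketch (with illustrative values drawn from the fiducial estimates of Section 3.4; the variable names are mine) evaluates this ratio:

```python
# Fiducial disc values at 1 AU, as quoted in Section 3.4:
# Richardson number n^2 ~ 1e-3 and Prandtl number Pr ~ 1e-7.
n2, Pr = 1e-3, 1e-7
Kc_over_Kfast = (n2 / Pr) ** 0.25   # K_c / K_fast = (n^2/Pr)^(1/4)
# The viscous cut-off sits only about one decade above the
# fastest-growing wavenumber, despite the seven-decade spread
# between n^2 and Pr.
```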
We may also derive a revised stability condition for
the convective overstability. When viscosity is present the Schwarzschild
criterion is replaced by
\begin{equation}\label{critter}
N^2 < -\frac{2\text{Pr}}{1+\text{Pr}}\Omega^2.
\end{equation}
The entropy gradient must be negative \emph{and} sufficiently
strong. However, owing to
the smallness of Pr in real discs, the criterion is
only slightly modified. On the other hand,
in numerical simulations where $\nu$ is
unrealistically large, Eq.~\eqref{critter} should be kept in mind.
\subsection{Instability mechanism}
Figure 2 illustrates the basic mechanism of the convective
overstability, drawing on arguments in Cowling (1958). The mechanism is
almost identical to that driving the SBI, as described in Lesur \&
Papaloizou (2010). The only difference is that the convective
overstability forces fluid blobs to execute
epicycles, while the SBI forces them to circulate around a vortex.
Consider a local
patch in the disc-plane exhibiting a weak entropy gradient so
that `colder' fluid is at larger radii, and `hotter' fluid is at
smaller radii. Suppose a fluid blob at a given radius, associated with
an entropy of $S_0$, is made to undergo epicyclic motion, panels (a)
and (b).
\begin{figure}
\begin{center}
\scalebox{0.4}{\includegraphics{cartoon.eps}}
\caption{Four panels indicating the convective overstability
mechanism. In panel (a) a fluid blob is embedded in a radial
entropy gradient. In panel (b) it
undergoes half an epicycle and returns to its original radius with
a smaller entropy than when it began, $S_1<S_0$. It hence feels a
buoyancy acceleration inwards and the epicycle is amplified. The
process occurs in reverse once the epicycle is complete, shown in
panel (c), where now $S_2>S_0$. The
oscillations hence grow larger and larger.}\label{Cartoon}
\end{center}
\end{figure}
Panel (b) shows the blob after half an epicycle, when it has returned to its
original radius. During its outward excursion it has come into contact
with colder fluid and has thus exchanged some of its initial heat via
thermal diffusion. As a consequence, when it returns to
its starting radius it is cooler and possesses a new entropy $S_1<
S_0$. Because of this entropy difference the blob suffers an inward
buoyancy force (represented by the magenta arrow) that boosts the
amplitude of the epicycle.
Panel (c) shows the blob after executing a full epicycle. Again it is
back at its starting radius but now it has greater entropy than its
surroundings, $S_2>S_0$, because it has attempted to equilibrate with
the hotter fluid at smaller radii. The blob now feels an outwardly
directed buoyancy force that further amplifies the epicycle. And panel
(d) shows the next phase, where the process runs away.
Instability would be quenched if thermal diffusion were too efficient
or too inefficient. In the first (isothermal) case, at every stage of the epicycle
the fluid blob would possess the same entropy as its surrounding. It
would hence never feel a buoyancy acceleration. In the second
(adiabatic) case, the fluid blob's entropy would never deviate
from $S_0$ and it would execute buoyancy-adjusted epicycles.
\subsection{Parameter values and physical scales}
\subsubsection{Buoyancy frequency
and characteristic timescales}
I first discuss the sign and magnitude of $N^2$. This quantity must be negative
for there to be instability, but what do recent observations have to
say about this? Let us examine the stability of the disc midplane first.
Assuming that $\gamma = \frac{7}{5}$, the inviscid
instability condition may be re-expressed as
$q_\rho > \frac{5}{2}q_T$, where $q_\rho=d\ln \rho/d \ln R$ and
$q_T=d\ln T/d \ln R$ with $T$ temperature.
Taking $\rho$ at the midplane,
this becomes
\begin{equation}\label{instcrit}
q_\Sigma > \tfrac{5}{2}q_T + q_H,
\end{equation}
where $q_\Sigma = d\ln \Sigma/d\ln R$ and $q_H=
d\ln H/d \ln R$, and in which $\Sigma$ is surface density.
Equation \eqref{instcrit}
basically states that instability favours disc radii with fairly flat density
profiles, and discs that are less flared. In addition, the stronger the
negative temperature gradient the more likely instability, as expected.
To date, the various $q$ parameters
have been estimated from (sub-)mm observations of
some two dozen pre-main sequence stars, mainly in the
Taurus and Ophiuchus star-forming regions (Andrews et al.~2009, Isella et
al.~2009, Guilloteau et al.~2011).
Generally $q_T$ lies between $-0.6$ and $-0.5$, and $q_H$ between $1.04$ and $1.26$.
There is greater variation in the density structure. Andrews et
al.~(2009) find that $q_\Sigma$ can fall between $-1$ and $-0.4$. As a
consequence, all but one of the discs in their sample fail to satisfy
Eq.~\eqref{instcrit}, with AS 209 perhaps marginally unstable.
Isella et al.~(2009) and Guilloteau et al.~(2011)
obtain a larger spread, with $q_\Sigma$ varying between
$-1.5$ and $0$ at intermediate to large radii, far from the disc inner
edge. Though
some inner regions may satisfy Eq.~\eqref{instcrit}, most of the discs
in this sample exhibit an insufficiently flat density structure and are hence
also generally stable\footnote{It should be remembered that, because
instabilities tend to erase the unstable conditions from which they
arise, the observations cannot show disc structures that are `about
to be attacked' by instability but rather
structures after the instability has had its way.}.
The conditions for convective overstability improve, however, the
further one moves from the disc midplane. A locally isothermal model with
realistic power
laws in density and temperature reveals that the magnitude of
$q_\rho$ has decreased by
$\sim 30\%$ at $z=(3/4)H$. This is sufficient to push some discs to
marginal stability or perhaps better,
but a more thorough study of disc structure
is necessary to settle the issue.
Note that locations higher up in the disc, $z>(3/4)H$,
are probably inappropriate venues for overstability on
account of the magnetorotational turbulence, wind launching, and/or
associated planar jets
(Fleming \& Stone 2003, Bai \& Stone 2013, Gressel et al.~2015).
Taking these results at face value, we conclude (as do Lin \& Youdin
2015) that most locations in most protoplanetary discs exhibit a positive
$N^2$ and are hence stable to the convective overstability (as well
as the SBI).
At smaller radii ($\sim 1$ AU) the picture is less clear
because there the disk structure is less well constrained by the
observations.
This is also true in and around
conspicuous disc features such as dead zones (which may be partly
shadowed by the disc's hotter inner radii), opacity transitions,
and edges, because the
fitting models assume smooth disc profiles. These regions could in principle
possess unstable entropy gradients, but more advanced numerical
modelling is needed to establish whether this is actually the case
(see Faure et al.~2014, Flock et al.~2015). Finally, because gas
off the midplane possesses weaker radial density gradients, instability
may also favour locations higher up in the disc.
The working hypothesis of
this paper is that there are indeed certain disc regions, in particular
dead-zones, that are convectively overstable.
Supposing that a disc region is overstable,
in order for the instability to have a measurable effect,
it must grow sufficiently quickly.
The maximum growth rate depends closely on
the magnitude of $N^2$, via Eq.~\eqref{sapprox}.
But, as discussed above, it is unclear what values this quantity should take.
The best we can do is assume that both the
pressure and entropy decrease with radius so that $\d_R \sim 1/\ell$.
Then
\begin{equation}
N^2 \sim \left(\frac{H}{\ell}\right)^2 \Omega^2,
\end{equation}
from Eq.~\eqref{Nsq},
where the stratification length $\ell\lesssim R$ for a smooth gradient (as might be the case deeper
in a dead zone),
and $\ell \gtrsim H$ near an abrupt structure (a dead zone edge or gap edge,
for example). Throughout the paper we take a conservative approach and
consider $\ell \sim R$. With the standard scaling $H/R \sim 0.05$ we obtain
\begin{equation}\label{est}
\text{Re}(s) \sim 10^{-3}\,\Omega.
\end{equation}
Thus the e-folding time at 1 AU is roughly 1000 years, plenty of scope
for the instability to develop within a protoplanetary disc's
lifespan. It should not be forgotten that Eq.~\eqref{est} may be an
underestimate near more abrupt disc structures, for which $\ell\gtrsim H$.
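The arithmetic behind this estimate can be laid out explicitly (an illustrative restatement, using only numbers already quoted in the text):

```python
# Restatement of Eq. (est), with the conservative choices made in
# the text: ell ~ R and H/R ~ 0.05.
H_over_ell = 0.05
n2 = H_over_ell ** 2          # |N^2|/Omega^2, with kappa = Omega (Keplerian)
s_max_over_Omega = n2 / 4.0   # Re(s_max) = |N^2|/(4*kappa), Section 3.1
# s_max_over_Omega = 6.25e-4, i.e. of order 1e-3 Omega
```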
\subsubsection{Prandtl number and characteristic lengthscales}
The previous subsection assumes that the fastest growing scales fit into the
disc and are not so small that viscosity stabilises them. This needs to
be checked. From
Eq.~\eqref{sapprox} the dominant vertical wavenumber is $K\sim \sqrt{\Omega/\xi}$. A
standard expression for the thermal diffusivity at the midplane at 1 AU is
\begin{align}\notag
&\xi = 2.39\times 10^{12}\left(\frac{\rho}{10^{-9}\,
\text{g}\,\text{cm}^{-3}}\right)^{-2}\left(\frac{T}{100\,\text{K}}\right)^3
\\ & \hskip3cm
\times \left(\frac{\kappa_\text{op}}
{\text{cm}^2\,\text{g}^{-1}}\right)^{-1}\,\text{cm}^2\,\text{s}^{-1},
\end{align}
where $\kappa_\text{op}$ is opacity. The
reference values are drawn from the minimum mass solar nebula at a
few AU (Hayashi et al.~1985) and calculated opacities at low
temperatures (Henning \& Stognienko 1996). Hence
the dominant vertical wavelength at 1 AU is
\begin{equation}
\lambda \sim K^{-1}\sim 10^{10}\,\text{cm} \sim 0.01\,H,
\end{equation}
which fits comfortably into the disc. It follows that the Peclet number is
$\sim 10^4$.
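These order-of-magnitude figures can be reproduced with a few lines (an illustrative sketch; the reference values for $\Omega$ and $R$ at 1 AU are my own standard choices):

```python
import math

xi    = 2.39e12          # thermal diffusivity [cm^2 s^-1], from above
Omega = 2.0e-7           # orbital frequency at 1 AU [s^-1]
H     = 0.05 * 1.5e13    # scale height [cm], H/R ~ 0.05 at R = 1 AU

lam = math.sqrt(xi / Omega)   # dominant vertical wavelength ~ 1/K
Pe  = H**2 * Omega / xi       # Peclet number, with kappa = Omega
# lam comes out at a few times 1e9 cm (~1e10 cm to order of magnitude,
# roughly 0.01 H), and Pe at a few times 1e4, matching the quoted ~1e4.
```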
The convective overstability exhibits
relatively small vertical scales, endorsing the
adoption of the local Boussinesq model, yet still far larger than the
viscous length. At 1 AU, $\nu \sim 10^5$ cm$^2$ s$^{-1}$ and the
Prandtl number is $\sim 10^{-7}$.
On the other hand, the radial scale is unconstrained by the analysis
(similarly to magnetorotational channel modes, Balbus \& Hawley 1991),
and presumably is of order $\ell$ or $R$.
\subsection{Unstable modes are nonlinear solutions}
The final important point is that the linear modes explored in this
section are also exact nonlinear solutions to the governing equations.
Both $\u'$ and $\theta'$ depend only on $z$ and $t$, while $u_z'=0$. Therefore
all the nonlinear terms in Eqs \eqref{GE1}-\eqref{GE2} vanish:
$$\u'\cdot\nabla\u'= \u'\cdot\nabla\theta'=0.$$
As a consequence, a convectively overstable mode
will grow exponentially even after it leaves the linear regime, and in
theory can achieve arbitrarily large amplitudes. A similar property is
shared by magnetorotational channel flows (Goodman \& Xu 1994).
Of course, the exponential growth cannot continue indefinitely. For a
start, the system will ultimately violate the Boussinesq assumptions, i.e.\
subsonic flow and small thermodynamic variation. It may also be that
the mode's global radial structure intervenes to halt the
runaway. Perhaps, before either comes into play, parasitic modes,
feeding on the mode's strong shear, destroy the mode and
initiate a period of hydrodynamical turbulence. It is this last
possibility that we explore next.
\section{Parasitic instabilities}
\subsection{Linearised equations}
In this section, we view the growing oscillations of
Section 3 as part of the basic state and subsequently
explore growing perturbations to this state.
It is assumed the
oscillations have reached an amplitude characterised by the shear
rate $S$, and we permit $S\sim\Omega$ or larger. To ease the
calculations, the buoyancy frequency is assumed small, so that $|N| \ll
\kappa,\,S$. Because the parasitic growth rates will be $\sigma\sim
S$, the buoyancy term may then be dropped
from the perturbation equations,
and the thermal equation decouples.
We may also omit the slow growth of the convectively overstable
mode itself, as it is negligible compared to $\kappa$ and $S$.
Finally, we set $q=3/2$, and thus $\kappa=\Omega$. The equilibrium
to leading order may now be written as
$$ u_x = S \cos \Omega t\,\cos K z, \qquad u_y= -\tfrac{3}{2}\Omega\,x
-\tfrac{1}{2}S \sin \Omega t \cos K z, $$
where we have used the eigenvector of Eqs \eqref{lin1}-\eqref{lin2}
to describe the oscillatory component of the equilibrium.
Units are chosen so that $K=1$, $\kappa=1$, and $\rho=1$. The background state is
disturbed by velocity and pressure perturbations taking the standard
Floquet form,
$$ \hat{\u}(t,z) \text{e}^{\sigma t + \ensuremath{\text{i}} m z+ \ensuremath{\text{i}} k x},
\qquad \hat{p}(t,z) \text{e}^{\sigma t + \ensuremath{\text{i}} m z+ \ensuremath{\text{i}} k x},$$
where the hatted variables are $2\pi$-periodic in both $t$ and $z$.
Here $k$ is a radial wavenumber,
and $\sigma$ and $m$ are Floquet exponents; the former serves as the
growth rate of the perturbation. Note that I neglect non-axisymmetric
disturbances; because such modes will be sheared out quickly, they are less
effective at killing their hosts. Only at very large $S$ will they be important.
The linearised equations governing the
evolution of these perturbations are
\begin{align}
\sigma\hat{u}_x &= -\d_t \hat{u}_x -\ensuremath{\text{i}} k \,\epsilon U\,\hat{u}_x -\epsilon \d_zU\,\hat{u}_z -
\ensuremath{\text{i}} k \hat{p} + 2 \hat{u}_y, \label{LIN1}\\
\sigma\hat{u}_y &= -\d_t \hat{u}_y -\ensuremath{\text{i}} k \,\epsilon U\,\hat{u}_y - \epsilon
\d_z V\,\hat{u}_z
-\tfrac{1}{2}\hat{u}_x, \\
\sigma\hat{u}_z &= -\d_t \hat{u}_z -\ensuremath{\text{i}} k \,\epsilon U\,\hat{u}_z - \d_z\hat{p}-\ensuremath{\text{i}} m
\hat{p}, \\
0 &= \ensuremath{\text{i}} k \hat{u}_x + \d_z \hat{u}_z+ \ensuremath{\text{i}} m \hat{u}_z , \label{LIN2}
\end{align}
where the background convective oscillation is represented by
\begin{align}\label{backg}
U = \cos t\,\cos z, \qquad V= -\tfrac{1}{2}\sin t \cos z,
\end{align}
and
the parameter $\epsilon= S/\Omega$ measures the amplitude of the
background convective oscillation.
We now have a two-dimensional eigenvalue problem in both $t$ and
$z$. The eigenvalue is $\sigma$, while the governing parameters are
simply $\epsilon$ and $k$. Generally Eqs \eqref{LIN1}-\eqref{LIN2}
must be solved numerically, but in the limits of small and large $S$
some analytical progress can be made. We treat these asymptotic limits
first and then give the numerical solutions.
\subsection{Asymptotic solutions}
\subsubsection{Large amplitude oscillations}
The first limit is perhaps the easiest to understand, though not the
most relevant. I assume that
the background oscillations are extremely strong, with shear rates
$S\gg \Omega$.
In this regime, parasites grow so fast
that the oscillation is destroyed before
completing even one cycle. As a consequence,
it should be
regarded as `frozen' and, furthermore,
the background differential rotation omitted.
The problem then reduces to determining the
stability of a spatially periodic shear called Kolmogorov
flow (Meshalkin \& Sinai 1961), an especially well-studied
model problem for which numerous results have been proven
(see, for example, Beaumont 1981 and Gotoh et al.~1983).
The inflexion
point theorem suggests the flow is unstable, and indeed Drazin \&
Howard (1962) show that instability occurs for $k<1$. A
reasonable approximation to the growth rates is derived by Green
(1974), who uses a truncated Fourier series to obtain
\begin{equation}\label{largeass}
\sigma^2 \approx \frac{k^2(1-k^2)}{2(1+k^2)}\,\epsilon^2,
\end{equation}
when $m=0$. Here $k$ should be understood as the wavenumber in the
direction of the shear at any given instant.
In agreement with our initial assumption, Eq.~\eqref{largeass} yields
a growth rate $ \sim S$.
\subsubsection{Small amplitude oscillations}
Before such large amplitudes are achieved the oscillation will be
destroyed by a different class of parasitic mode involving a
parametric
resonance between the growing epicycle and two inertial waves. The
instability is a relation of the famous elliptical instability, which
disrupts
vortices both in and outside of protoplanetary discs (Pierrehumbert
1986, Bayly 1986, Kerswell 2002,
Lesur \& Papaloizou 2009, Railton \& Papaloizou 2014). A variant of
the instability also attacks accretion discs themselves when their
streamlines deviate from circular orbits (Goodman 1993), as in the
case of eccentric (Papaloizou 2005, Barker \&
Ogilvie 2014) and warped discs (Gammie et al.~2000, Ogilvie \& Latter
2013). Locally this deviation appears as an oscillation with
frequency equal to $\kappa$ and similar in form to the convective
overstability mode.
The parametric instability can be understood as a special case of a
three-wave coupling (Gammie et al.~2000). The primary overstable oscillation,
with frequency $\kappa$,
provides a means by which two linear inertial waves, of frequencies
$\omega_1$ and $\omega_2$, can work together
to draw out energy from the primary. In order
for this to happen a
resonance condition $\omega_1+\omega_2 = \kappa$
must be met. Resonance only occurs for a discrete set of $k$,
each corresponding to a different vertical mode number $n$.
\begin{figure}
\begin{center}
\scalebox{0.55}{\includegraphics{Paralin1.eps}}
\scalebox{0.55}{\includegraphics{Paralin2.eps}}
\caption{Parasitic growth rates $\sigma$ as a function of radial
wavenumber $k$ for two different amplitudes. The top panel is for
$\epsilon=0.1$, the bottom for $\epsilon=1$.
In the top panel the predictions of the asymptotic theory are
represented by blue diamonds.}\label{Parasites1}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\scalebox{0.55}{\includegraphics{Eigf.eps}}
\caption{Four panels showing the eigenfunction (flows in the $x$-$z$ plane)
at four different moments during the oscillation. See Section 4.3.}
\label{Parasites2}
\end{center}
\end{figure*}
Appendix B outlines an asymptotic theory of this resonance in
the limit of small oscillation amplitude and for $m=0$. The appropriate regime is
$|N|\ll S \ll \Omega$: thus the thermal dynamics are omitted, but the
rotational dynamics are not.
In dimensionless variables the
resonance condition is
\begin{equation}\label{resonant}
\frac{n}{\sqrt{n^2+k^2}}+\frac{n+1}{\sqrt{(n+1)^2+k^2}} = 1,
\end{equation}
where $n$ is an integer, describing the vertical wavenumber of the
first inertial mode. The first few resonances occur at
$k\approx 2.49,\,4.26,\,6.02,\,7.76,\,9.50$. Note that parametric
instability favours much shorter vertical scales than the shear
instability of Section 4.2.1.
The growth rate to leading order in $\epsilon$ is
given by
\begin{equation}\label{smallass}
\sigma^2=
\frac{k^2(1+2\omega_1)^2(\omega_1^2+2\omega_1-n)(\omega_1^2+n)}
{64\,n(1+n)\,\omega_1(1+\omega_1)}\,\epsilon^2,
\end{equation}
where the frequency of the first inertial wave is taken to be
\begin{equation}
\omega_1= -\frac{n}{\sqrt{n^2+k^2}}.
\end{equation}
The growth rates corresponding to the first five $n$ are plotted in
Fig.~3a, alongside the full numerical solution.
Finally, as $n\to \infty$ the growth rates plateau to a constant maximum
value, given by
$\sigma \approx \frac{3\sqrt{3}}{32}\,\epsilon$.
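The resonant wavenumbers and growth rates quoted above are easy to reproduce numerically (an illustrative sketch of Eqs \eqref{resonant} and \eqref{smallass}; the bisection solver and variable names are mine):

```python
from math import sqrt

def resonant_k(n, lo=0.1, hi=None, tol=1e-12):
    """Solve the resonance condition of Eq. (resonant) for k by
    bisection; its left-hand side decreases monotonically with k."""
    if hi is None:
        hi = 4.0 * (n + 1)
    def lhs_minus_one(k):
        return (n / sqrt(n**2 + k**2)
                + (n + 1) / sqrt((n + 1)**2 + k**2) - 1.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs_minus_one(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_over_eps(n):
    """Parasitic growth rate sigma/epsilon from Eq. (smallass),
    evaluated at the resonant wavenumber for mode number n."""
    k = resonant_k(n)
    w1 = -n / sqrt(n**2 + k**2)
    num = k**2 * (1 + 2*w1)**2 * (w1**2 + 2*w1 - n) * (w1**2 + n)
    den = 64.0 * n * (1 + n) * w1 * (1 + w1)
    return sqrt(num / den)

first_ks = [resonant_k(n) for n in (1, 2, 3, 4, 5)]
plateau = 3.0 * sqrt(3.0) / 32.0   # limiting rate, ~0.162
# first_ks reproduces 2.49, 4.26, 6.02, 7.76, 9.50, and
# sigma_over_eps(n) climbs toward the plateau as n grows.
```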
\subsection{Numerical solutions}
In this section Eqs \eqref{LIN1}-\eqref{LIN2} are solved numerically
using a pseudo-spectral technique. I partition the $2\pi$-periodic $t$ and $z$
domains into $M_t$ and $M_z$ cells and represent each dependent variable
as a vector of length $M_t M_z$. Derivatives are described using
appropriate matrices (Boyd 2002), whereupon Eqs
\eqref{LIN1}-\eqref{LIN2} may be approximated by a $4M_tM_z\times 4M_t
M_z$
algebraic eigenvalue problem,
the eigenvalues of which are the growth rates $\sigma$. These are obtained
by the QZ algorithm or an Arnoldi method (Golub \& van Loan 1996).
For moderate $k$, $M_t=M_z=30$ yields converged growth rates.
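As a minimal sketch of the discretisation (not the code used for the actual computations), a Fourier differentiation matrix for one $2\pi$-periodic coordinate can be constructed and verified as follows; two such matrices, one in $t$ and one in $z$, combined through Kronecker products, would assemble the full algebraic eigenproblem:

```python
import numpy as np

def fourier_diff_matrix(M):
    """Dense differentiation matrix for M equispaced points on a
    2*pi-periodic grid (M even). Entries follow the standard formula
    D[i,j] = 0.5*(-1)^(i-j) / tan((i-j)*h/2), h = 2*pi/M, an
    equivalent of the constructions in Boyd (2002)."""
    h = 2.0 * np.pi / M
    D = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            if i != j:
                D[i, j] = 0.5 * (-1.0) ** (i - j) / np.tan(0.5 * (i - j) * h)
    return D

M = 30                              # resolution quoted in the text
z = 2.0 * np.pi * np.arange(M) / M
D = fourier_diff_matrix(M)
# Spectral accuracy on a smooth periodic function: D @ sin(z) ~ cos(z)
err = np.max(np.abs(D @ np.sin(z) - np.cos(z)))
```

The near machine-precision error on smooth periodic data is what makes $M_t=M_z=30$ sufficient for converged growth rates.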
The real part of the growth rate $\sigma$
is plotted in Fig.~3 as a function of
radial wavenumber $k$. Two illustrative values of the oscillation
amplitude are chosen, $\epsilon = 0.1$ and $\epsilon=1$. The vertical
wavenumber of the mode's envelope $m$ is set to zero.
The top panel lies in the small amplitude regime, and hence I also
plot the growth rates produced by the asymptotic theory of Section
4.2.2, as blue diamonds. Instability occurs in distinct
bands located at the resonant $k$ values predicted by
Eq.~\eqref{resonant}. The asymptotic growth rates are in good
agreement with the numerical results, which validates both the
analytic theory and the computations. As is typically the case, the
larger the $k$ the wider each resonant band, and at some large value the
bands overlap and the instability becomes more complicated in nature
than a simple three-wave resonance. This is also the case for
larger $\epsilon$, as is clear from the bottom panel of Fig.~3. Here
$\epsilon=1$ and instability occurs for all $k$ above a critical
value. As expected, the growth rates are an order of magnitude greater
than the $\epsilon=0.1$ case.
In Fig.~4 a representative eigenmode is shown, when $\epsilon=0.1$
and $k=2.486$. The figure comprises four snapshots in the $x$-$z$ plane
of the velocity taken at equally spaced moments during
its $2\pi$ cycle. The mode comes from the first band of parametric
instability in Fig.~3a, and consists of $n=1$ and $n=2$ inertial
waves coupled via the overstable oscillation. Note its characteristic
oblique motions: the radial and vertical mode speeds are most closely
correlated in panels 1 and 3 ($t=0,\,\pi$), when the background
radial shear takes its largest values (cf.\ Eq.~\eqref{backg}).
The associated Reynolds stresses of the mode are thus able to
extract the overstability's energy. Similar behaviour is observed in
the instability of a warp (Ogilvie \& Latter 2013).
\subsection{Maximum amplitude}
It is possible to estimate a maximum saturation amplitude of the
convective overstability by comparing
the growth of the overstability itself with that of its parasites.
Similar calculations have been carried out in the magnetorotational channel
context (Pessah \& Goodman 2009, Latter et al.~2010b), but the
overstability problem is made easier by the axisymmetry of the
parasitic modes. They cannot be sheared away by the differential
rotation.
In the previous subsections, the slow growth of the oscillation was
neglected and so $\epsilon$ was taken to be a constant.
In this section its time dependence is reinstated, so that
(in dimensionless variables)
$\epsilon = \epsilon_0 \text{e}^{st}$, where $\epsilon_0$ is the
oscillation's starting amplitude (the level of the background fluctuations),
and $s\sim n^2$ is its growth rate.
A crude estimate for the time it takes a parasitic mode to
overrun its host may be derived by equating the growth rates of parasite and
host, $s\sim \sigma$ (Pessah \& Goodman 2009).
The oscillation's amplitude at this point is then easy to calculate:
$\epsilon_\text{max} \sim n^2$. Returning to dimensional
units, the maximum shear is
\begin{equation} \label{saturated}
S_\text{max} \sim \frac{|N^2|}{\Omega}.
\end{equation}
Given the smallness of $N^2$ in protoplanetary discs, this is not a
large value at all, only some $10^{-3}$ of the background shear rate (cf.\
Section 3.4). In realistic discs, the convective
overstability grows so slowly that it is overrun by parasitic
modes before it extracts appreciable energy from the
thermal gradient.
This rough estimate may be improved upon by
setting the \emph{amplitudes} of
the oscillation and parasite to be equal, rather than the growth
rates. The parasitic mode's amplitude
we denote by $p$, and its growth is determined from the ODE
$dp/dt = \sigma(t)p$, where $\sigma\sim \epsilon$.
This equation yields
$ p \sim p_0 \exp[(\exp(st)-1)\epsilon_0/s]$, where $p_0$ is
the initial amplitude of the parasite. Setting $\epsilon=p$ produces
a nonlinear equation for the time of destruction, which may be solved
in terms of special functions. The maximum oscillation
amplitude may then be computed:
\begin{equation}
\epsilon_\text{max} \sim -n^2 W_{-1}\left(-\frac{p_0}{n^2}
\text{e}^{-\epsilon_0/n^2}\right),
\end{equation}
where $W_{-1}$ is the second real branch of the Lambert W-function
(Corless et al.~1996). Note that the final amplitude depends not only
on $n^2$ but also on the initial conditions. If next we assume that
both the overstability and the parasite grow
simultaneously from the same reservoir of small amplitude noise $(< n^2)$,
then we have the asymptotic estimate
\begin{equation}
\epsilon_\text{max} \sim n^2 \ln\left(\frac{n^2}{\epsilon_0}\right).
\end{equation}
This shows that the
dependence on the initial condition is weak and
that \eqref{saturated} is largely unaltered: convective
oscillations are destroyed at low amplitudes.
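The destruction-time estimate can be checked numerically. The following sketch assumes illustrative values $n^2=10^{-3}$ and $\epsilon_0=p_0=10^{-6}$ (not taken from any specific disc model) and evaluates the Lambert $W_{-1}$ branch with a hand-rolled Newton iteration rather than a library routine:

```python
import math

def lambert_w_minus1(a, tol=1e-12):
    """Solve x*exp(x) = a on the lower real branch (x <= -1),
    valid for -1/e < a < 0, by Newton iteration."""
    x = math.log(-a)  # good initial guess when |a| is small
    for _ in range(200):
        ex = math.exp(x)
        step = (x * ex - a) / ((x + 1.0) * ex)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative dimensionless parameters (assumed, not paper-derived)
n2 = 1.0e-3      # n^2, scaled buoyancy frequency squared
eps0 = 1.0e-6    # initial oscillation amplitude (background noise level)
p0 = eps0        # parasite grows from the same noise reservoir

arg = -(p0 / n2) * math.exp(-eps0 / n2)
eps_max_exact = -n2 * lambert_w_minus1(arg)   # full Lambert-W expression
eps_max_approx = n2 * math.log(n2 / eps0)     # asymptotic log estimate

print(eps_max_exact, eps_max_approx)  # approximately 9.1e-3 vs 6.9e-3
```

For these numbers the two expressions agree to within a factor of order unity, with only a weak (logarithmic) dependence on $\epsilon_0$, consistent with the claim that \eqref{saturated} is largely unaltered.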
\section{Nonlinear saturation}
\subsection{Parasitic theory}
Once an isolated overstable mode is overrun by a parasite the flow
breaks down into a disordered state. Let us suppose that the
nonlinear dynamics that follow are controlled by the emergence and
decline of these fastest growing overstable modes. How might this
dynamical situation work? One possibility is that
the system settles on a weakly nonlinear
state, with the three resonant waves
joined by a small number of shorter
wavelength modes. These shorter modes, ordinarily stabilised by viscosity, will
remove the energy input by the instability and let the system
reach a statistically steady state. The nonlinear standing waves
observed in some semiconvection simulations may be an example of this
(Mirouh et al.~2012).
Though likely in
simulations (with their relatively large Pr),
real discs may struggle to host such low-order nonlinear
dynamics. Another possibility is that the emergence and destruction of
overstable modes could control developed turbulence in which
many more modes participate. Pessah \& Goodman (2009) make such an argument to explain
the level of magnetorotational turbulence in shearing boxes,
but being axisymmetric the parasites
considered here are much more effective because they do not shear
out (see also Latter et al.~2010b).
Whatever its details, a parasitic theory of saturation
would then argue that
Eq.~\eqref{saturated} sets not only the maximum amplitude of a single
overstable oscillation but also the saturation amplitude of the ensuing
turbulence.
Equation \eqref{saturated} predicts that
the convective overstability generates only a very mild level of
turbulence in realistic discs.
Because the typical strain rate is usually much less than $\Omega$, the
flow will be essentially axisymmetric. And if one
estimates the typical turbulent lengthscale by $1/K$, then associated
velocities will be gentle:
\begin{equation}\label{turb}
v_\text{turb}\sim 10^{-2}\,n^2\,c_s
\end{equation}
at 1 AU, where $c_s$ is the local sound speed (cf.\ Section 3.2).
Because of its low amplitude and axisymmetry, the saturated state
should not drive
significant angular momentum transport or
generate vortices via SBI (or another mechanism). The appearance of
both phenomena in the simulations of Lyra (2014) can be attributed to
a strong stratification, $n^2 \sim 0.1$, which may occur only in
special regions of the disc, perhaps near very abrupt disc features.
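The numbers behind Eqs~\eqref{turb} and \eqref{saturated} are easy to reproduce. A minimal sketch, assuming the illustrative values $n^2\sim 10^{-3}$, $c_s\sim 1\ \mathrm{km\,s^{-1}}$ at 1 AU, and a Keplerian background shear rate of $\tfrac{3}{2}\Omega$ (representative numbers only, not derived here):

```python
# Order-of-magnitude evaluation of the saturated state.
# All input values are illustrative, not taken from a specific disc model.
n2 = 1.0e-3        # dimensionless squared buoyancy frequency, |N^2|/Omega^2
c_s = 1.0e5        # sound speed at 1 AU in cm/s (~1 km/s)
Omega = 1.0        # orbital frequency, arbitrary units

# Turbulent velocity, v_turb ~ 1e-2 * n^2 * c_s
v_turb = 1.0e-2 * n2 * c_s           # cm/s
print(v_turb)                        # -> 1.0 (cm/s): very gentle motions

# Maximum shear S_max ~ |N^2|/Omega, compared with the Keplerian
# background shear rate (3/2)*Omega
S_max = n2 * Omega**2 / Omega
shear_ratio = S_max / (1.5 * Omega)
print(shear_ratio)                   # ~ 7e-4, i.e. "some 1e-3" of the background
```

Both numbers bear out the text: the saturated motions are of order centimetres per second, and the strain is roughly $10^{-3}$ of the background shear.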
\subsection{Connections with semiconvection}
Of course, the parasitic theory of saturation may only be part of
the story. While it should reliably predict the initial amplitude of the turbulent flow,
over longer times the turbulence could evolve according to
other dynamics altogether. The saturation of
semi-convection
is illuminating in
this respect, as it
shares many of the features of convective overstability in discs.
(Indeed, in two dimensions the mathematical formalisms are almost identical.)
Instead of relying on angular momentum and entropy gradients,
semiconvection
emerges in the presence of composition and entropy gradients (Kato
1966, Rosenblum et al.~2011). Its nonlinear development takes one of two
courses: (a)
mild turbulent convection or (b)
large-scale `layering' of convective zones over strongly
stratified interfaces (Turner 1968, Merryfield 1995).
The turbulent transport in the second course is far greater than in
the first by at least an order of magnitude,
but it only arises when the background stratification is
sufficiently strong (Rosenblum
et al.~2011, Mirouh et al.~2012). In fact, there exists a critical buoyancy
frequency $|N^2_c|$,
below which the turbulence is weak and above which it suddenly becomes
much more intense. This critical value depends closely on Pr (Mirouh
et al.~2012).
Can the convective overstability in discs
exhibit similar bimodal behaviour? In the disc context the layers would
be combined zonal and elevator flows, and thus would correspond to
rings of concentrated vorticity. Indeed, the 2D
simulations of Lyra (2014) develop large-scale radial structure at
late times that could be identified as a `semi-convective layer'.
Key questions are: what is the critical value of $N^2$
below which radial layers fail to develop (if such a value exists)? On which side of
this value do realistic dead zones fall? What about other more abrupt
disc features? Obviously if layers dominate
the overstability's long term saturation then
the `parasitic theory' must be discarded,
and the convective
overstability may instigate more vigorous, and interesting, dynamics.
Simulations are currently underway to
test these competing ideas.
Zonal flows may be
susceptible to the Kelvin-Helmholtz instability, shedding
vortices as they degenerate. Lyra's 3D simulations also
manifest vortices, though it is unclear if they arise from the
breakdown of zonal layers or by the SBI, seeded
by vigorous non-axisymmetric turbulence. Again it is important to
note that these simulations are strongly stratified, with $n^2\approx
0.1$. More realistic values may yield a less vigorous and more
axisymmetric state, one that may struggle to seed vortices directly.
Again this needs to be checked in dedicated 3D simulations.
\section{Discussion}
Ordinarily the angular momentum gradient in a
protoplanetary disc is sufficiently strong to
stabilise a negative entropy gradient, if one exists.
But because thermal diffusion is far more efficient than
viscous diffusion, double diffusive instabilities arise that
can unleash the energy stored in the adverse gradient. These include
the subcritical
baroclinic instability and the convective overstability
(Lesur \& Papaloizou 2010, Klahr
\& Hubbard 2014, Lyra 2014). The resistive
instability, on the other hand, uses magnetic fields to diffuse angular
momentum faster than heat and is hence double-diffusive in the
opposite sense (Latter et al.~2010a). These various
mechanisms may liven up the dead zones of protoplanetary
discs, though it is unclear if any one of them is the answer to the
question of angular momentum transport.
In this paper I revisit the linear and nonlinear dynamics of the
convective overstability. Because the linear modes are also nonlinear
solutions, my focus has been on the parasitic modes that limit their
amplitude, the idea being that the parasites control the
saturation level of the ensuing turbulent dynamics. This approach predicts
that the overstability generates only very weak turbulence, with a
strain field
$\sim |N^2|/\Omega \sim 10^{-3}\Omega$. The conclusion is that the
flow remains axisymmetric and rather gentle, unable to transport much
angular momentum or generate vortices. However, near very abrupt disc
structures, such as edges, $|N^2|$ may be larger and greater activity
might be anticipated.
But I also explore alternative
ideas, drawing on recent work in semi-convection that shows when
$|N^2|$ crosses a critical value the flow splits into
thermo-compositional layers that greatly enhance transport (Rosenblum et
al.~2011, Mirouh et al.~2012). Something similar may occur in
simulations of the overstability, the layers taking the form of `zonal
flows' (Lyra 2014). Future work should establish the critical $|N^2|$
above which this takes place, and whether we expect this
behaviour in realistic discs.
If convective overstability is present, which is not always assured,
what is its role in the disc dynamics?
One possible application is
to the excitation of random motions in disc solids, thereby
influencing their collision speeds and frequencies. The greatest
effect will be on marginally coupled particles, whose stopping
time is similar to the typical turbulent turnover time ($\sim
1/\Omega$ for inertial wave turbulence). These particles should have
radii of roughly 10 cm to 1 m (Chiang \& Youdin 2010). The
velocity dispersion induced by the coupling will be of order
$v_\text{turb}$, a fairly mild enhancement in most cases.
The turbulent flow may also concentrate such particles, further
amplifying collision rates, though this can only be checked by
detailed numerical simulations (Hogan \& Cuzzi 2007, Pan \& Padoan
2013).
In conclusion, the significance of the overstability, vis-\`a-vis other hydrodynamical
processes, essentially comes down to the magnitude of $N^2$. It
sets the base level of turbulent motions (cf.\ Eq.~\eqref{turb}) and
whether the system selects a gentle or more vigorous state (cf.\
Section 5.2). Thus further constraints on protoplanetary disc structure and
further numerical simulations are needed to help properly assess the
instability's place in disc dynamics.
\section*{Acknowledgments}
I thank the reviewer, Steve Balbus, for a helpful set of comments that
improved the manuscript.
I also thank Gordon Ogilvie,
Sebastien Fromang, Geoffroy Lesur, Andrew Youdin, and John Papaloizou
for helpful tips and suggestions. I am also grateful to Hubert Klahr and Wlad Lyra for
clarifying some of their work and for inspiring me to have a look at
the problem. Finally I am indebted to Jerome Guilet who generously
read through an earlier version of the manuscript and who suggested
valuable improvements to Section 4.4 particularly.
This research is partially funded by STFC grant
ST/L000636/1.
\chapter*{Preface}
In this thesis, we study deformations of compact holomorphic Poisson manifolds\footnote{For general information on Poisson geometry, see Appendix \ref{appendixa}. A holomorphic Poisson manifold is a complex manifold such that its structure sheaf is a sheaf of Poisson algebras. For more details on deformations of compact holomorphic Poisson manifolds, see the part \ref{part1} of the thesis.} and algebraic Poisson schemes.\footnote{A Poisson algebraic scheme is an algebraic scheme over an algebraically closed field $k$ such that its structure sheaf is a sheaf of Poisson algebras. For more details on the definition of Poisson schemes and deformations of algebraic Poisson schemes, see the part \ref{part3} of the thesis.} The theory of deformations of compact holomorphic Poisson manifolds is based on Kodaira-Spencer's deformation theory of compact complex manifolds, and that of algebraic Poisson schemes is based on Grothendieck's deformation theory of algebraic schemes. The only difference is that we deform an additional structure, namely `Poisson structures', in a family of compact holomorphic Poisson manifolds or algebraic Poisson schemes. Hence when we ignore Poisson structures, the underlying deformation theory is the same as the ordinary deformation theory in the sense of Kodaira-Spencer and Grothendieck. The relationship between deformations of compact complex manifolds and deformations of algebraic schemes is well described in the Introduction of Sernesi's book \cite{Ser06}. I will briefly explain their relationship as described in \cite{Ser06}, and then extend it to the relationship between deformations of compact holomorphic Poisson manifolds and algebraic Poisson schemes in the following.
Given a compact complex manifold $X$, a family of deformations of $X$ is a commutative diagram of holomorphic maps between complex manifolds
\begin{center}
$\xi:$$\begin{CD}
X @>>> \mathcal{X}\\
@VVV @VV\pi V\\
\star @>>> B
\end{CD}$
\end{center}
with $\pi$ proper and smooth, B connected and where $\star$ denotes the singleton space. We denote by $\mathcal{X}_t$ the fibre $\pi^{-1}(t),t\in B$. We call $(\mathcal{X},B,\pi)$ a complex analytic family. Kodaira and Spencer started studying small deformations of $X$ in a complex analytic family by defining, for every tangent vector $\frac{\partial}{\partial t}\in T_{t_0}B$, the derivative of the family along $\frac{{\partial}}{\partial t}\in T_{t_0}B$ as an element
\begin{align*}
\frac{\partial \mathcal{X}_t}{\partial t}\in H^1(X,\Theta)
\end{align*}
which gives the Kodaira Spencer map $\kappa:T_{t_0}B\to H^1(X,\Theta)$. They investigated the problem of classifying all small deformations of $X$ by constructing a ``complete family'' of deformations of $X$, which roughly means that every small deformation of $X$ is induced from the complete family. More precisely, they established the following theorems.
\begin{thm}[Theorem of Existence]
Let $X$ be a compact complex manifold and suppose $H^2(X,\Theta)=0$. Then there exists a complex analytic family $(\mathcal{X},B,\pi)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\pi^{-1}(0)=X$
\item $\rho_0:\frac{\partial}{\partial t}\to \left(\frac{\partial X_t}{\partial t}\right)_{t=0}$ with $X_t=\pi^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $H^1(X,\Theta):T_0(B)\xrightarrow{\rho_0} H^1(X,\Theta)$.
\end{enumerate}
\end{thm}
\begin{thm}[Theorem of Completeness]
Let $(\mathcal{X},B,\pi)$ be a complex analytic family and $\pi^{-1}(0)=X$. If $\rho_0:T_0 B\to H^1(X,\Theta)$ is surjective, the complex analytic family $(\mathcal{X},B,\pi)$ is complete at $0\in B$.
\end{thm}
By combining these two theorems, we get
\begin{corollary}
If $H^2(X,\Theta)=0$, then there exists a complete family of deformations of $X$ whose Kodaira Spencer map is an isomorphism. If, moreover, $H^0(X,\Theta)=0$, then such a complete family is universal.
\end{corollary}
Later Kuranishi generalized this result without the assumption $H^2(X,\Theta)=0$, by relaxing the definition of a family of deformations of $X$ in such a way that $B$ is allowed to be an analytic space.
On the other hand, Grothendieck's algebraic deformation theory algebraically formalizes Kodaira-Spencer's analytic deformation theory. Let $X$ be an algebraic scheme over $k$, where $k$ is an algebraically closed field. A local deformation, or a local family of deformations, of $X$ is a commutative diagram
\begin{center}
$\xi:$$\begin{CD}
X @>>> \mathcal{X}\\
@VVV @VV\pi V\\
Spec(k) @>>> S
\end{CD}$
\end{center}
where $\pi$ is flat, $S=Spec(A)$ with $A$ a local $k$-algebra with residue field $k$, and $X$ is identified with the fibre over the closed point. We can define a deformation functor
\begin{align*}
Def_X:\mathcal{A}^*\to (Sets)
\end{align*}
defined by $Def_X(A)=\{\text{local deformations of $X$ over $Spec(A)$}\}/(\text{isomorphisms})$, where $\mathcal{A}^*$ is the category of noetherian local $k$-algebras with residue field $k$. To study the question of representability of the functor $Def_X$ by some noetherian local $k$-algebra $\mathcal{O}$, the approach of Grothendieck was to formalize the method of Kodaira and Spencer, which consists in a formal construction followed by a proof of convergence. One of the main problems is the prorepresentability of $Def_X:\bold{Art}\to (Sets)$, where $\bold{Art}$ is the category of local artinian $k$-algebras with residue field $k$.
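For orientation, recall the standard first-order statement (see Sernesi's book for the smooth case; it is only quoted here, not proved): evaluating the deformation functor on the ring of dual numbers recovers the same cohomology group that appears in the analytic theory,

```latex
\begin{align*}
Def_X\big(k[\epsilon]/(\epsilon^2)\big)\;\cong\;H^1(X,\Theta_X)
\qquad\text{for $X$ smooth,}
\end{align*}
```

which is the algebraic counterpart of the Kodaira Spencer map taking values in $H^1(X,\Theta)$.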
As I said before, deformation theories of holomorphic Poisson manifolds and algebraic Poisson schemes are based on deformation theories of compact complex manifolds and algebraic schemes. The main difference is that we simply put one more structure on complex analytic families or algebraic families, namely ``Poisson structures''. So deformations of compact holomorphic Poisson manifolds or algebraic Poisson schemes mean that we deform not only the underlying complex or algebraic structures, but also the Poisson structures. I will now explain small deformations of compact holomorphic Poisson manifolds. Given a holomorphic Poisson manifold $(X,\Lambda_0)$, a family of deformations of $(X,\Lambda_0)$ is a commutative diagram of holomorphic maps between a holomorphic Poisson manifold $(\mathcal{X},\Lambda)$ and a complex manifold $B$
\begin{center}
$\xi:$$\begin{CD}
(X,\Lambda_0) @>>> (\mathcal{X},\Lambda)\\
@VVV @VV\pi V\\
\star @>>> B
\end{CD}$
\end{center}
with $\pi$ proper and smooth, $B$ connected, and where $\star$ denotes the singleton space. We denote by $(\mathcal{X}_t,\Lambda_t)$ the fibre $\pi^{-1}(t)$, $t\in B$, which is a compact holomorphic Poisson submanifold of $(\mathcal{X},\Lambda)$. We call $(\mathcal{X},\Lambda, \pi, B)$ a Poisson analytic family. As in a complex analytic family, we can define, for every tangent vector $\frac{\partial}{\partial t}\in T_{t_0} B$, the derivative of the family along $\frac{\partial}{\partial t}$ as an element
\begin{equation*}
\frac{\partial (\mathcal{X}_t,\Lambda_t)}{\partial t}\in HP^2(X,\Lambda_0)
\end{equation*}
which gives a linear map
\begin{equation*}
\varphi:T_{t_0} B\to HP^2(X,\Lambda_0)
\end{equation*}
called the Poisson Kodaira Spencer map of the family $(\mathcal{X},\Lambda,\pi, B)$. We can also define the concept of a complete family as in deformations of compact complex manifolds. I was interested in the problem of classifying all small deformations of $(X,\Lambda_0)$ by constructing a ``complete family'' of deformations of $(X,\Lambda_0)$, but owing to some technical issues, I believe that I have only proved the theorem of existence for holomorphic Poisson structures.
\begin{thm}[Theorem of Existence for holomorphic Poisson structures]
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold satisfying some assumption and suppose that $HP^3(X,\Lambda_0)=0$. Then there exists a Poisson analytic family $(\mathcal{X},\Lambda,B,\pi)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\pi^{-1}(0)=(X,\Lambda_0)$
\item $\varphi_0:\frac{\partial}{\partial t}\to\left(\frac{\partial (\mathcal{X}_t,\Lambda_t)}{\partial t}\right)_{t=0}$ with $(\mathcal{X}_t,\Lambda_t)=\pi^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $HP^2(X,\Lambda_0):T_0 B\xrightarrow{\varphi_0} HP^2(X,\Lambda_0)$.
\end{enumerate}
\end{thm}
\begin{conjecture}[Theorem of Completeness for holomorphic Poisson structures]\
Let $(\mathcal{X},\Lambda, B,\pi)$ be a Poisson analytic family and $\pi^{-1}(0)=(X,\Lambda_0)$. If $\varphi_0:T_0 B\to HP^2(X,\Lambda_0)$ is surjective, then the Poisson analytic family $(\mathcal{X},\Lambda, B,\pi)$ is complete at $0\in B$.
\end{conjecture}
By combining the theorem and the conjecture, we get
\begin{corollary}
If $HP^3(X,\Lambda_0)=0$, then there exists a complete family of deformations of $(X,\Lambda_0)$ whose Poisson Kodaira Spencer map is an isomorphism. If, moreover, $HP^1(X,\Lambda_0)=0$, then such a complete family is universal.
\end{corollary}
The natural question is the existence of Kuranishi family for deformations of a holomorphic Poisson manifold.
\begin{conjecture}
A complete family of deformations of $(X,\Lambda_0)$ such that the Poisson Kodaira Spencer map is an isomorphism exists without assumptions on $HP^3(X,\Lambda_0)=0$ provided the base $B$ is allowed to be an analytic space.
\end{conjecture}
While I worked on deformations of holomorphic Poisson structures, the reason why I focused on the ``theorem of existence'', the ``theorem of completeness'' and the ``construction of a Kuranishi family'' for holomorphic Poisson structures is that I wanted to extend the relationship between Kodaira Spencer's analytic deformation theory and Grothendieck's algebraic deformation theory to the relationship between analytic Poisson deformation theory and algebraic deformation theory of Poisson schemes as presented in the book \cite{Ser06}. Now I will explain deformations of algebraic Poisson schemes. Deformations of Poisson schemes have already been studied by Namikawa (\cite{Nam09}), Ginzburg, and Kaledin (\cite{Gin04}). It seems that Ginzburg and Kaledin (\cite{Gin04}) first defined deformations of Poisson schemes in the context of Grothendieck's deformation theory. Let us fix an algebraically closed field $k$ and consider a Poisson algebraic $k$-scheme $(X,\Lambda_0)$. A local Poisson deformation, or a local family of Poisson deformations, of $(X,\Lambda_0)$ is a cartesian diagram
\begin{center}
$\xi:$$\begin{CD}
(X,\Lambda_0) @>>> (\mathcal{X},\Lambda)\\
@VVV @VV\pi V \\
Spec(k) @>>> S
\end{CD}$
\end{center}
where $\pi$ is a flat morphism, $S=Spec\,A$ and $(\mathcal{X},\Lambda)$ is a Poisson $S$-scheme via $\pi$ where $A$ is a local $k$-algebra with residue field $k$, and the Poisson $k$-scheme $(X,\Lambda_0)$ is identified with the fiber over the closed point. Similarly we can define a Poisson deformation functor
\begin{align*}
Def_X:\mathcal{A}^*\to (Sets)
\end{align*}
defined by $Def_X(A)=\{\text{local Poisson deformations of $X$ over $Spec(A)$}\}/(\text{isomorphisms})$, where $\mathcal{A}^*$ is the category of noetherian local $k$-algebras with residue field $k$. We can consider analogous problems coming from the classical deformation theory of algebraic schemes.
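Analogously, and consistent with the analytic picture described above (cf.\ Namikawa \cite{Nam09}; stated here for orientation, without proof), for a smooth Poisson variety the first-order Poisson deformations are classified by the second truncated Poisson cohomology group:

```latex
\begin{align*}
Def_X\big(k[\epsilon]/(\epsilon^2)\big)\;\cong\;HP^2(X,\Lambda_0).
\end{align*}
```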
I have been guided by the analytic and algebraic deformation theories originating from Kodaira-Spencer and Grothendieck in the context of the Poisson category, through books, articles and papers by senior mathematicians. My thesis on deformations of compact holomorphic Poisson manifolds and algebraic Poisson schemes is an elaboration of this general picture.
\part{Deformations of compact holomorphic Poisson manifolds}\label{part1}
In the first part of the thesis, we study deformations of holomorphic Poisson structures in the framework of Kodaira and Spencer's deformation theory of complex analytic structures (\cite{Kod58},\cite{Kod60}). The main difference from Kodaira and Spencer's deformation theory is that for deformations of a holomorphic Poisson manifold, we deform not only its complex structure, but also its holomorphic Poisson structure. We thoroughly apply Kodaira and Spencer's ideas to the holomorphic Poisson category.
Kodaira and Spencer's main idea of deformations of complex analytic structures is as follows \cite[p.182]{Kod05}. An $n$-dimensional compact complex manifold $M$\footnote{In this thesis, we assume that a complex manifold is connected} is obtained by glueing domains $U_1,...,U_n$ in $\mathbb{C}^n$: $M=\cup_{j=1}^n U_j$, where $\mathfrak{U}=\{U_j|j=1,...,n\}$ is a locally finite open covering of $M$ and each $U_j$ is a polydisk:
\begin{align*}
U_j=\{z_j\in \mathbb{C}^n||z_j^1|<1,...,|z_j^n|<1\}
\end{align*}
and for $p\in U_j\cap U_k$, the coordinate transformation
\begin{align*}
f_{jk}:z_k\to z_j=(z_j^1,...,z_j^n)=f_{jk}(z_k)
\end{align*}
transforming the local coordinates $z_k=(z_k^1,...,z_k^n)=z_k(p)$ into the local coordinates $z_j=(z_j^1,...,z_j^n)=z_j(p)$ is biholomorphic. According to Kodaira,
\begin{quote}
\textit{$``$A deformation of $M$ is considered to be the glueing of the same polydisks $U_j$ via different identifications. In other words, replacing $f_{jk}^{\alpha}(z_k) $ by the functions $f_{jk}^{\alpha}(z_k,t)=f^{\alpha}_{jk}(z_k,t_1,...,t_m),$ $ f_{jk}^{\alpha}(z_k,0)=f_{jk}^{\alpha}(z_k)$ of $z_k$, and the parameter $t=(t_1,...,t_m)$, we obtain deformations $M_t$ of $M=M_0$ by glueing the polydisks $U_1,...,U_n$ by identifying $z_k\in U_k$ with $z_j=f_{jk}(z_k,t)\in U_j$"}
\end{quote}
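In this description, the derivative of the family with respect to $t$ is computed from the $t$-dependence of the glueing functions. Sketching the standard formula (as in Kodaira's book, in the notation of the quote above), the infinitesimal deformation is represented by the \v{C}ech $1$-cocycle of holomorphic vector fields

```latex
\begin{align*}
\theta_{jk}(t)=\sum_{\alpha=1}^{n}
 \frac{\partial f^{\alpha}_{jk}(z_k,t)}{\partial t}\,
 \frac{\partial}{\partial z_j^{\alpha}},
\qquad
\frac{\partial M_t}{\partial t}
 =\big[\{\theta_{jk}(t)\}\big]\in H^1(M_t,\Theta_t),
\end{align*}
```

so that the Kodaira Spencer map records how the identifications vary with $t$ to first order.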
An $n$-dimensional compact holomorphic Poisson manifold $M$ is a compact complex manifold such that the structure sheaf $\mathcal{O}_M$ is a sheaf of Poisson algebras (see Appendix \ref{appendixa}). The holomorphic Poisson structure is encoded in a holomorphic section (a holomorphic bivector field) $\Lambda \in H^0(M,\wedge^2 \Theta_M)$ with $[\Lambda,\Lambda]=0$.\footnote{We denote by $T=T_M$ the holomorphic tangent bundle of $M$, by $\Theta_M$ the sheaf of holomorphic vector fields on $M$, by $T^*=T_{M}^*$ the dual bundle of $T_M$, by $\bar{T}=\bar{T}_M$ the anti-holomorphic tangent bundle, by $\bar{T}^*=\bar{T}^*_M$ the dual bundle of $\bar{T}_M$, by $T_{\mathbb{C}} M=T\oplus \bar{T}$ the complexified tangent bundle, and by $T_{\mathbb{C}}^* M=T^* \oplus \bar{T}^*$ the dual bundle of $T_{\mathbb{C}} M$; the bracket $[-,-]$ is the Schouten bracket. See Appendix \ref{appendixc}.} In the sequel a holomorphic Poisson manifold will be denoted by $(M,\Lambda)$. For deformations of a holomorphic Poisson manifold $(M,\Lambda)$, we use the ideas of Kodaira and Spencer. An $n$-dimensional holomorphic Poisson manifold is obtained by glueing the domains $U_1,...,U_n$ in $\mathbb{C}^n$: $M=\bigcup_{j=1}^n U_j$, where $\mathfrak{U}=\{U_j|j=1,...,n\}$ is a locally finite open covering of $M$ and each $U_j$ is a polydisk
\begin{align*}
U_j=\{z_j\in \mathbb{C}^n||z_j^1|<1,...,|z_j^n|<1\}
\end{align*}
equipped with a holomorphic bivector field $\Lambda_j=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$\footnote{In this thesis, we always assume that $g_{\alpha\beta}^j(z)=-g_{\beta\alpha}^j(z)$} with $[\Lambda_j,\Lambda_j]=0$ on $U_j$, and for $p\in U_j\cap U_k$, the coordinate transformation
\begin{align*}
f_{jk}:z_k\to z_j=(z_j^1,...,z_j^n)=f_{jk}(z_k)
\end{align*}
transforming the local coordinates $z_k=(z_k^1,...,z_k^n)=z_k(p)$ into the local coordinates $z_j=(z_j^1,...,z_j^n)=z_j(p)$ is a biholomorphic Poisson map.\footnote{For the definition of a Poisson map, see Appendix \ref{appendixa}.}
A deformation of a holomorphic Poisson manifold $(M,\Lambda)$ is the glueing of the Poisson polydisks $(U_j,\Lambda_j(t))$, parametrized by $t$, via different identifications. That is, replacing $f_{jk}^{\alpha}(z_k)$ by functions $f_{jk}^{\alpha}(z_k,t)$ (with $f_{jk}^{\alpha}(z_k,0)=f_{jk}^{\alpha}(z_k)$) of $z_k$, replacing $\Lambda_j=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ by $\Lambda_j(t)=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,t) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ with $[\Lambda_j(t),\Lambda_j(t)]=0$ and $\Lambda_j(0)=\Lambda_j$, and introducing the parameter $t=(t_1,...,t_m)$, we obtain deformations $(M_t,\Lambda_t)$ by glueing the Poisson polydisks $(U_1,\Lambda_1(t)),...,(U_n,\Lambda_n(t))$, identifying $z_k\in U_k$ with $z_j=f_{jk}(z_k,t)\in U_j$. The work on deformations of holomorphic Poisson structures is based on this fundamental idea.
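For the glueing just described to be consistent, the identifications must satisfy the usual cocycle condition on triple overlaps, together with the compatibility of the local Poisson structures on double overlaps (both conditions are implicit in the description above):

```latex
\begin{align*}
f_{ik}(z_k,t)&=f_{ij}\big(f_{jk}(z_k,t),t\big)
 &&\text{on } U_i\cap U_j\cap U_k,\\
(f_{jk})_*\,\Lambda_k(t)&=\Lambda_j(t)
 &&\text{on } U_j\cap U_k,
\end{align*}
```

the second condition saying precisely that each $f_{jk}(\cdot,t)$ is a Poisson map.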
In chapter \ref{chapter1}, we define a family of compact holomorphic Poisson manifolds, called a Poisson analytic family, in the framework of Kodaira-Spencer deformation theory. In other words, when we ignore Poisson structures, a family of compact holomorphic Poisson manifolds is just a family of compact complex manifolds in the sense of Kodaira and Spencer. So deformations of holomorphic Poisson manifolds mean that we deform complex structures as well as Poisson structures. We show that an infinitesimal deformation of a holomorphic Poisson manifold $(M,\Lambda)$ in a Poisson analytic family is encoded in the truncated holomorphic Poisson cohomology. More precisely, an infinitesimal deformation is realized as an element in the second hypercohomology group\footnote{We adopt the notation from \cite{Nam09} for the expression of the truncated holomorphic Poisson cohomology groups} $HP^2(M,\Lambda)$ of the complex of sheaves $0\to \Theta_M\to \wedge^2 \Theta_M\to \cdots\to \wedge^n \Theta_M\to 0$ induced by $[\Lambda,-]$. Analogously to deformations of complex structures, we define the so-called Poisson Kodaira Spencer map, of which the Kodaira Spencer map is realized as a component. We define the concepts of a trivial family, a locally trivial family, rigidity and a pullback family, and raise some questions that I cannot answer at this stage.
In chapter \ref{chapter2}, we study the integrability condition for a Poisson analytic family. Kodaira showed that, given a family of deformations of a compact complex manifold $M$, locally the family is represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t)$ with $\varphi(0)=0$ satisfying $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$. We show that, given a family of deformations of a holomorphic Poisson manifold $(M,\Lambda)$, locally the family is represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t)$ with $\varphi(0)=0$ and a $C^{\infty}$ bivector field $\Lambda(t)$ with $\Lambda(0)=\Lambda$ satisfying $[\Lambda(t),\Lambda(t)]=0$, $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$, and $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$. Replacing $\varphi(t)$ by $-\varphi(t)$, the integrability condition becomes $\bar{\partial}(\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=0$, which is a solution of the Maurer Cartan equation of the differential graded Lie algebra $(\mathfrak{g},\bar{\partial},[-,-])$ (see Appendix \ref{appendixc}). But we have another differential graded Lie algebra structure on $\mathfrak{g}$ (see Proposition \ref{d}). If we take $\Lambda'(t)=\Lambda(t)-\Lambda$, then $\Lambda'(0)=0$ and the integrability condition is equivalent to $L(\varphi(t)+\Lambda'(t))+\frac{1}{2}[\varphi(t)+\Lambda'(t),\varphi(t)+\Lambda'(t)]=0$, where $L=\bar{\partial}+[\Lambda,-]$.\footnote{We remark that the integrability condition was proved in a more general context in the language of generalized complex geometry (see \cite{Gua11}). As $H^1(M,\Theta)$ is realized as a subspace of the generalized second cohomology group of a complex manifold $M$, $HP^2(M,\Lambda)$ is realized as a subspace of the generalized second cohomology group of a holomorphic Poisson manifold $(M,\Lambda)$.
In this thesis, we deduce the integrability condition by following Kodaira's original approach, that is, by starting from the concept of a geometric family (a Poisson analytic family).} Then $\varphi(t)+\Lambda'(t)$ is a solution of the Maurer Cartan equation of the differential graded Lie algebra $(\mathfrak{g},L,[-,-])$. In the part II of the thesis, we show that the differential graded Lie algebra $(\mathfrak{g},L,[-,-])$ is a differential graded Lie algebra governing the holomorphic Poisson deformations of $(M,\Lambda)$ in the language of functors of Artin rings.
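That $L=\bar{\partial}+[\Lambda,-]$ squares to zero, so that $(\mathfrak{g},L,[-,-])$ is indeed a differential graded Lie algebra, follows from the holomorphy of $\Lambda$ (so that $\bar{\partial}\Lambda=0$) and the condition $[\Lambda,\Lambda]=0$. With the sign conventions of Appendix \ref{appendixc}, in which $\Lambda$ has odd total degree, one checks

```latex
\begin{align*}
L^2 a
&=\bar{\partial}^2 a
 +\bar{\partial}[\Lambda,a]+[\Lambda,\bar{\partial}a]
 +[\Lambda,[\Lambda,a]]\\
&=[\bar{\partial}\Lambda,a]
 +\tfrac{1}{2}\big[[\Lambda,\Lambda],a\big]=0,
\end{align*}
```

using the graded Leibniz rule for $\bar{\partial}$ and the graded Jacobi identity for the Schouten bracket.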
In chapter \ref{chapter3}, under some assumption, we establish a theorem analogous to the following theorem of Kodaira and Spencer (\cite{Kodaira58}, \cite{Kod05} p.270).
\begin{thm}[Theorem of Existence]
Let $M$ be a compact complex manifold and suppose $H^2(M,\Theta)=0$. Then there exists a complex analytic family $(\mathcal{M},B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=M$
\item $\rho_0:\frac{\partial}{\partial t}\to \left(\frac{\partial M_t}{\partial t}\right)_{t=0}$ with $M_t=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $H^1(M,\Theta):T_0(B)\xrightarrow{\rho_0} H^1(M,\Theta)$.
\end{enumerate}
\end{thm}
Similarly, under the assumption (\ref{assumption}), we prove the theorem of existence for deformations of holomorphic Poisson structures (see Theorem \ref{theorem of existence}).
\begin{thm}[Theorem of Existence for holomorphic Poisson structures]\label{theorem of existence}
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold satisfying $($\ref{assumption}$)$ and suppose that $HP^3(M,\Lambda_0)=0$. Then there exists a Poisson analytic family $(
\mathcal{M},\Lambda,B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=(M,\Lambda_0)$
\item $\varphi_0:\frac{\partial}{\partial t}\to\left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}$ with $(M_t,\Lambda_t)=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $HP^2(M,\Lambda_0):T_0 (B)\xrightarrow{\varphi_0} HP^2(M,\Lambda_0)$.
\end{enumerate}
\end{thm}
Our proof is rather formal. We thoroughly follow the Kuranishi methods presented in \cite{Mor71}. The reason for the assumption is to apply their methods in the holomorphic Poisson context, given my unfamiliarity with the analytic properties of the operator $\bar{\partial}+[\Lambda,-]$. I do not know whether the assumption (\ref{assumption}) can be relaxed. Lastly, based on Kuranishi's lecture notes \cite{Kur71}, we define a Poisson analytic family over a complex space, the concept of a pullback family, and that of a complete family. We pose a problem on the existence of a Kuranishi family in the holomorphic Poisson context. I could not approach this problem because of my unfamiliarity with the analysis behind the operator $\bar{\partial} +[\Lambda,-]$.
\chapter{Poisson analytic families}\label{chapter1}
\section{Families of holomorphic Poisson manifolds}
\begin{definition}$($compare $\cite{Kod05}$ p.59$)$ \label{test}
Suppose given a domain $B\subset \mathbb{C}^m$ and a set $\{(M_t,\Lambda_t)|t \in B\}$ of holomorphic Poisson manifolds $(M_t,\Lambda_t)$ depending on $t\in B$. We say that $\{(M_t,\Lambda_t)|t\in B\}$ is a family of compact holomorphic Poisson manifolds, or a Poisson analytic family of compact holomorphic Poisson manifolds, if there exist a holomorphic Poisson manifold $(\mathcal{M},\Lambda)$ and a holomorphic map $\pi:\mathcal{M}\to B$ satisfying the following properties:
\begin{enumerate}
\item $\pi^{-1}(t)$ is a compact holomorphic Poisson submanifold of $(\mathcal{M},\Lambda)$.
\item $(M_t,\Lambda_t)=\pi^{-1}(t)$ $($i.e. $M_t$ has the holomorphic Poisson structure $\Lambda_t$ induced from $\Lambda$$)$.
\item The rank of the Jacobian of $\pi$ is equal to $m$ at every point of $\mathcal{M}$.
\end{enumerate}
\end{definition}
Then we can choose a system of local complex coordinates $\{z_1,...,z_j,...\}$, $z_j:p\to z_j(p)$, and coordinate polydisks $\mathcal{U}_j$ with respect to $z_j$ satisfying the following conditions.
\begin{enumerate}
\item $z_j(p)=(z_j^1(p),...,z_j^n(p),t_1,...,t_m),(t_1,...,t_m)=\omega(p)$
\item $\mathcal{U}=\{\mathcal{U}_j|j=1,2,...\}$ is locally finite.
\end{enumerate}
Then
\begin{align*}
\{p\mapsto (z_j^1(p),...,z_j^n(p))| \mathcal{U}_j \cap M_t\ne \emptyset\}
\end{align*}
gives a system of local complex coordinates on $M_t$. In terms of these coordinates, $\omega$ is the projection given by
\begin{align*}
\omega:(z_j^1,...,z_j^n,t_1,...,t_m)\to (t_1,...,t_m).
\end{align*}
For $j,k$ with $\mathcal{U}_j\cap \mathcal{U}_k\ne \emptyset$, we denote the coordinate transformation from $z_k$ to $z_j$ by
\begin{align*}
f_{jk}:(z_k^1,...,z_k^n,t)\to (z_j^1,...,z_j^n,t)=f_{jk}(z_k^1,...,z_k^n,t)
\end{align*}
Note that $t_1,...,t_m$ as part of local coordinates on $\mathcal{M}$ do not change under these coordinate transformations. Thus $f_{jk}$ is given by
\begin{align*}
z_j^{\alpha}=f_{jk}^{\alpha}(z_k^1,...,z_k^n,t_1,...,t_m),\,\,\,\,\,\,\alpha=1,...,n.
\end{align*}
We now discuss the holomorphic Poisson structures. Since each $M_t\hookrightarrow \mathcal{M}$ is a holomorphic Poisson submanifold with structure induced from $\Lambda$ and $\mathcal{M}=\cup_t M_t$, the Poisson structure $\Lambda$ on $\mathcal{M}$ can be expressed in terms of local coordinates as $\Lambda=\sum_{\alpha,\beta}g_{\alpha \beta}(z_j^1,...,z_j^n,t)\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}}$ on $\mathcal{U}_j$, where $g_{\alpha\beta}(z_j,t)$ is holomorphic with respect to $z_j$. For fixed $t^0$, the holomorphic Poisson structure $\Lambda_{t^0}$ on $M_{t^0}$, obtained by restricting $\Lambda$ to $M_{t^0}$, is given by $\sum_{\alpha,\beta}g_{\alpha \beta}(z_j^1,...,z_j^n,t^0)\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}}$.
Of course, the definition can be extended to the case where $B$ is an arbitrary complex manifold. We could also define a family of compact holomorphic Poisson manifolds in the following way.
\begin{definition}$($compare $\cite{Uen99}$ p.2$)$
A holomorphic map $\pi$ of $\mathcal{M}$ onto $B$ is called a family of compact holomorphic Poisson manifolds, or a Poisson analytic family of compact holomorphic Poisson manifolds, if there is a holomorphic Poisson manifold $(\mathcal{M},\Lambda)$ satisfying the following conditions:
\begin{enumerate}
\item $\pi$ is proper. In other words, the inverse image of a compact set in $B$ is compact.
\item $\pi$ is a submersion. In other words, for each point $x\in \mathcal{M}$, $d\pi_x:T_x \mathcal{M} \to T_{\pi(x)} B$ is surjective.\\
$($The above two conditions imply that $\pi^{-1}(t)$ is a complex submanifold of $\mathcal{M}$ for each $t\in B$.$)$
\item $\pi^{-1}(t)$ is a holomorphic Poisson submanifold of $(\mathcal{M},\Lambda)$ for each $t\in B$.
\item $\pi^{-1}(t)$ is connected.
\end{enumerate}
\end{definition}
\begin{example}[complex tori]$($$\cite{Kod58}$ p.408$)$
Let $S$ be the space of $n\times n$ matrices $s=(s_{\beta}^{\alpha})$ with $|\mathfrak{J} s | >0$, where $\alpha$ denotes the row index and $\beta$ the column index. For each matrix $s\in S$ we define an $n\times 2n$ matrix $\omega(s)=(\omega_j^{\alpha}(s))$ by
\begin{equation} \label{chunghoon}
\omega_j^{\alpha}(s)=
\begin{cases}
\delta_j^{\alpha}, & \text{for } 1\leq j\leq n\\
s_{\beta}^{\alpha}, & \text{for } j=n+\beta,\ 1\leq \beta \leq n
\end{cases}
\end{equation}
Let $\mathbb{C}^n$ be the space of $n$ complex variables $z=(z^1,...,z^{\alpha},...,z^n)$ and let $G$ be the discontinuous abelian group of analytic automorphisms of $\mathbb{C}^n\times S$ generated by
\begin{align*}
g_j:(z,s)\to (z+\omega_j(s),s),\,\,\,\,\, j=1,...,2n,
\end{align*}
where $\omega_j(s)=(\omega_j^1(s),...,\omega_j^{\alpha}(s),...,\omega_j^n(s))$ is the $j$-th column vector of $\omega(s)$. The factor space
\begin{align*}
\mathscr{B}=\mathbb{C}^n\times S/G
\end{align*}
is obviously a complex manifold, and the canonical projection $\mathbb{C}^n\times S\to S$ induces a regular map $\omega:\mathscr{B}\to S$ such that $B_s=\omega^{-1}(s)$ is a complex torus of complex dimension $n$ with the periods $\omega_j(s)$ $(j=1,...,2n)$. Then $\mathscr{B}=\{B_s,s\in S\}$ forms a complex analytic family of complex tori.
We would like to describe a $G$-invariant holomorphic bivector field of the form $\sum_{i,j} f_{ij}(z,s)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ on $\mathbb{C}^n\times S$. Such a field induces a holomorphic bivector field on $\mathscr{B}$.
Any element $g\in G$ is of the form $g:(z,s)\to (z+m_1\omega_1(s)+\cdots +m_{2n}\omega_{2n}(s),s)$. Hence for $\sum_{i,j} f_{ij}(z,s)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ to be an invariant bivector field, we must have $f_{ij}(z,s)=f_{ij}(z+m_1\omega_1(s)+\cdots +m_{2n}\omega_{2n} (s),s)$ for any integers $m_1,...,m_{2n}$. Since $f_{ij}(\cdot,s)$ is holomorphic and periodic with respect to the lattice generated by $\omega_1(s),...,\omega_{2n}(s)$, it is bounded on $\mathbb{C}^n$ and hence constant by Liouville's theorem, so $f_{ij}(z,s)=f_{ij}(s)$. Hence an invariant bivector field is of the form $\Lambda=\sum_{i,j}f_{ij}(s)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$. Since the $f_{ij}(s)$ are independent of $z$, we have $[\Lambda,\Lambda]=0$. So $(\mathscr{B},\Lambda)$ is a Poisson analytic family.
\end{example}
\begin{example}[Hirzebruch-Nagata surface]$($$\cite{Uen99}$ p.13$)$
Take two copies of $\mathbb{C}\times \mathbb{P}^1$ and write the coordinates as $(u,(\xi_0:\xi_1))$ and $(v,(\eta_0:\eta_1))$, respectively, where $(\xi_0:\xi_1),(\eta_0:\eta_1)$ are the homogeneous coordinates of $\mathbb{P}^1$.
Patch the two copies of $\mathbb{C}\times \mathbb{P}^1$ together by the relation
\begin{equation}
\begin{cases}
u=1/v, \\
(\xi_0:\xi_1)=(\eta_0:v^m\eta_1)
\end{cases}
\end{equation}
Then we obtain a two dimensional compact complex manifold $F_m$. The complex manifold $F_m$ is called the Hirzebruch-Nagata surface.
Now we deform the patching by introducing a new patching relation with parameter $t\in \mathbb{C}$.
\begin{equation}
\begin{cases}
u=1/v, \\
(\xi_0:\xi_1)=(\eta_0:v^m\eta_1+tv^k\eta_0), m-2\leq 2k \leq m
\end{cases}
\end{equation}
and patching the two copies of $\mathbb{C}\times \mathbb{P}^1$ by this relation, we obtain a surface $S_t$ for each $t\in \mathbb{C}$. By the relation we have $S_0=F_m$, and $\omega:\mathcal{S}=\{S_t\}_{t\in\mathbb{C}} \to \mathbb{C}$ is a complex analytic family.
We put a holomorphic Poisson structure $\Lambda$ on $\mathcal{S}$ so that $(\mathcal{S},\Lambda)$ is a Poisson analytic family.
For one copy of $\mathbb{C}\times \mathbb{P}^1$, with coordinates $(u,(\xi_0:\xi_1))$, we have two affine patches, each of the form $\mathbb{C}\times \mathbb{C}$. They are glued along $\mathbb{C}\times (\mathbb{C}-\{0\})$ by $(u,x=\frac{\xi_1}{\xi_0})\mapsto (u,y=\frac{\xi_0}{\xi_1})=(u,\frac{1}{x})$. Similarly, the two affine patches of the other copy of $\mathbb{C}\times \mathbb{P}^1$ are glued along $\mathbb{C}\times (\mathbb{C}-\{0\})$ by $(v,w=\frac{\eta_1}{\eta_0})\mapsto (v,z=\frac{\eta_0}{\eta_1})=(v,\frac{1}{w})$. We put holomorphic Poisson structures with $[\Lambda,\Lambda]=0$ on each patch and show that they are glued via the above relations to give a global bivector field on $\mathcal{S}$. On the $(u,x)$ patch, we give $g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}$. On the $(u,y)$ patch, we give $-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}$. On the $(v,w)$ patch, we give $-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}$. And on the $(v,z)$ patch, we give $g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}$. In the following diagram, we have
{\tiny{
\begin{center}
$\begin{CD}
(u,x)=(\frac{1}{v},v^mw+tv^k)=(\frac{1}{v},\frac{v^m+tv^kz}{z}),g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}@<<< (v,w),-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}\\
@VVV @VVV \\
(u,y)=(u,\frac{1}{x})=(\frac{1}{v},\frac{z}{v^m+tv^kz})=(\frac{1}{v},\frac{1}{v^mw+tv^k}),-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}@<<< (v,z=\frac{1}{w}),g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}
\end{CD}$
\end{center}}}
Under these coordinate transformations we have $\frac{\partial}{\partial x}=-\frac{1}{x^2}\frac{\partial}{\partial y}$, $\frac{\partial}{\partial v}=-\frac{1}{v^2}\frac{\partial}{\partial u}+(mv^{m-1}w+ktv^{k-1})\frac{\partial}{\partial x}=-\frac{1}{v^2}\frac{\partial}{\partial u}+\frac{mv^{m-1}+ktv^{k-1}z}{z}\frac{\partial}{\partial x}=-\frac{1}{v^2}\frac{\partial}{\partial u}-\frac{z(mv^{m-1}+ktv^{k-1}z)}{(v^m+tv^kz)^2}\frac{\partial}{\partial y}$.
$\frac{\partial}{\partial w}=v^m\frac{\partial}{\partial x}=\frac{-v^m}{(v^mw+tv^k)^2}\frac{\partial}{\partial y}$, and $\frac{\partial}{\partial z}=\frac{-v^m}{z^2}\frac{\partial}{\partial x}=\frac{v^m}{(v^m+tv^kz)^2}\frac{\partial}{\partial y}$, $\frac{\partial}{\partial w}=-\frac{1}{w^2}\frac{\partial}{\partial z}$.
Now we show that they are glued.
\begin{enumerate}
\item $(u,x)$ and $(u,y)$. $g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}=g(t)x^2(-\frac{1}{x^2})\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}=-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}$
\item $(v,w)$ and $(v,z)$. $-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}=-g(t)v^{2k-m+2}(wv^{m-k}+t)^2(\frac{-1}{w^2})\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial z}=g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}$
\item $(u,x)$ and $(v,w)$. $-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}=-g(t)v^{2k-m+2}(wv^{m-k}+t)^2(-v^{m-2})\frac{\partial}{\partial u}\wedge\frac{\partial}{\partial x}=g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}$
\item $(u,y)$ and $(v,z)$. $g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}=g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{v^m}{(v^m+tv^kz)^2}(-\frac{1}{v^2})\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}=-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}$
\item $(u,x)$ and $(v,z)$. $g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}=g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{v^{m-2}}{z^2}\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}=g(t)(\frac{v^{m}}{z}+tv^k)^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}=g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}$
\item $(u,y)$ and $(v,w)$. $-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}=-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{v^{m-2}}{(v^mw+tv^k)^2}\frac{\partial}{\partial u}\wedge\frac{\partial}{\partial y}=-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}$
\end{enumerate}
So $(\mathcal{S},\Lambda)$ is a Poisson analytic family.
\end{example}
\begin{example}[Hopf surfaces]
By a Hopf surface we mean any complex manifold homeomorphic to $S^1\times S^3$. Let $W=\mathbb{C}^2-\{0\}$.
\begin{thm}\label{w}
For every Hopf surface $X$ there exist numbers $m\in \mathbb{N}$ and $a,b,t \in \mathbb{C}$ satisfying
\begin{equation*}
0<|a|\leq |b| <1\,\,\,\,\, \text{and}\,\,\,\,\, (b^m-a)t=0
\end{equation*}
such that $X$ is biholomorphic to $W/\langle\gamma\rangle$, where $\gamma$ is an automorphism of $W$ given by
\begin{equation*}
\gamma(z_1,z_2)=(az_1+tz_2^m,bz_2)
\end{equation*}
Conversely, for any $m,a,b,t$ as above, the corresponding group $\langle\gamma\rangle$ acts freely and properly discontinuously on $W$, and the complex manifold $W/\langle\gamma\rangle$ is a Hopf surface.
\end{thm}
\begin{proof}
See $\cite{Kod65}$.
\end{proof}
We construct a one-parameter Poisson analytic family of general Hopf surfaces.
An automorphism of $W\times \mathbb{C}$ given by
\begin{equation*}
g:(z_1,z_2,t)\to(az_1+tz_2^m,bz_2,t)
\end{equation*}
where $0<|a|\leq |b| <1$ and $b^m-a=0$ $($i.e. $a=b^m$$)$, generates an infinite cyclic group $G$, which is properly discontinuous and fixed-point free by Theorem \ref{w}. Hence $\mathcal{M}=(W\times \mathbb{C})/G$ is a complex manifold. Since the projection of $W\times \mathbb{C}$ to $\mathbb{C}$ commutes with $g$, it induces a holomorphic map $\omega$ of $\mathcal{M}$ to $\mathbb{C}$. Clearly the rank of the Jacobian matrix of $\omega$ is equal to $1$. Thus $(\mathcal{M},\mathbb{C},\omega)$ is a complex analytic family with $\omega^{-1}(t)=W/G_t=M_t$, where $G_t$ is the group generated by $(z_1,z_2)\mapsto(az_1+tz_2^m,bz_2)$.
We give a holomorphic Poisson structure on $\mathcal{M}$. A holomorphic bivector field on $\mathcal{M}$ is induced from a $G$-invariant holomorphic bivector field on $W\times \mathbb{C}$. In what follows we write $(z',t')=(z_1',z_2',t')$ for $(a^n z_1+na^{n-1}t z_2^m,b^n z_2,t)$. In this notation we have
\begin{equation*}
g^n:(z_1,z_2,t)\to(z_1',z_2',t'),
\end{equation*}
We consider a $G$-invariant holomorphic bivector field on $W\times \mathbb{C}$ of the form $f(z_1,z_2,t)\frac{\partial}{\partial z_1}\wedge\frac{\partial}{\partial z_2}$, where $f(z_1,z_2,t)$ is a holomorphic function on $W\times \mathbb{C}$. Since
\begin{equation*}
\frac{\partial}{\partial z_1}=a^n\frac{\partial}{\partial z_1'},\,\,\,\,\, \frac{\partial}{\partial z_2}=mna^{n-1}t z_2^{m-1}\frac{\partial}{\partial z_1'}+b^n\frac{\partial}{\partial z_2'}
\end{equation*}
the bivector field $f(z_1,z_2,t)\frac{\partial}{\partial z_1}\wedge\frac{\partial}{\partial z_2}$ is transformed by $g^n$ into the bivector field
\begin{equation*}
f(z_1,z_2,t)a^n b^n\frac{\partial}{\partial z_1'}\wedge\frac{\partial}{\partial z_2'}
\end{equation*}
Since $f(z_1,z_2,t)\frac{\partial}{\partial z_1}\wedge\frac{\partial}{\partial z_2}$ is $G$-invariant, we have
\begin{equation*}
f(z_1',z_2',t')=f(z_1,z_2,t)a^nb^n
\end{equation*}
By Hartogs's theorem, holomorphic functions on $W\times \mathbb{C}$ extend to holomorphic functions on $\mathbb{C}^2\times \mathbb{C}$. Therefore we may assume that $f(z_1,z_2,t)$ is holomorphic on all of $\mathbb{C}^2\times \mathbb{C}$. We have
\begin{equation*}
f(z_1,z_2,t)=\frac{1}{a^n b^n}f(a^n z_1+na^{n-1}t z_2^m,b^n z_2,t)=\frac{1}{b^{n(m+1)}}f(b^{nm} z_1+nb^{m(n-1)}t z_2^m,b^n z_2,t)
\end{equation*}
Consequently, since $0<|b|<1$, letting
\begin{equation*}
f(z_1,z_2,t)=\sum_{i,j,k=0}^{+\infty} c_{ijk} {z_1}^i {z_2}^j t^k
\end{equation*}
be the power series expansion of $f(z_1,z_2,t)$, we have
\begin{align*}
f(z_1,z_2,t)&=\lim_{n\to +\infty} \frac{1}{b^{n(m+1)}} \sum_{i,j,k} c_{ijk}(b^{nm} z_1+nb^{m(n-1)}t z_2^m)^i(b^n z_2)^j t^k\\
& =(c_{0(m+1)0}+c_{0(m+1)1}t+c_{0(m+1)2}t^2+\cdots)z_2^{m+1}
\end{align*}
Hence $(\mathcal{M},\Lambda,\mathbb{C},\omega)$ with $\Lambda=[(c_{0(m+1)0}+c_{0(m+1)1}t+c_{0(m+1)2}t^2+\cdots)z_2^{m+1}]\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}$ is a Poisson analytic family of Hopf surfaces. For each $t$, we have a holomorphic Poisson structure $[(c_{0(m+1)0}+c_{0(m+1)1}t+c_{0(m+1)2}t^2+\cdots)z_2^{m+1}]\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}$ on the Hopf surface $M_t$.
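As a consistency check, the coefficient $f(z_1,z_2,t)=z_2^{m+1}$ indeed satisfies the invariance condition $f(z_1',z_2',t')=f(z_1,z_2,t)a^n b^n$ obtained above: since $z_2'=b^n z_2$ and $a=b^m$,
\begin{align*}
f(z_1',z_2',t')=(b^n z_2)^{m+1}=b^{n(m+1)}z_2^{m+1}=a^n b^n\, z_2^{m+1}=f(z_1,z_2,t)\,a^n b^n.
\end{align*}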
\end{example}
\section{Infinitesimal deformation}
\subsection{Infinitesimal deformation and truncated holomorphic Poisson cohomology}\
In this section, we show that for a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$, the infinitesimal deformation of a holomorphic Poisson manifold $(M_t,\Lambda_t)$ of dimension $n$ is captured by the second hypercohomology group of the complex of sheaves $0\to \Theta_{M_t}\to \wedge^2 \Theta_{M_t}\to \cdots \to \wedge^n \Theta_{M_t}\to 0$ induced by $[\Lambda_t,-]$,\footnote{For the definition, see Appendix \ref{appendixb}} analogously to how the infinitesimal deformation of a complex manifold $M_t$ is captured by the first cohomology group $H^1(M_t,\Theta_t)$.
Let $(M,\Lambda)$ be a holomorphic Poisson manifold and consider the complex of sheaves
\begin{align*}
0\to \Theta_M\xrightarrow{[\Lambda,-]}\wedge^2 \Theta_M\xrightarrow{[\Lambda,-]}\cdots \xrightarrow{[\Lambda,-]} \wedge^n \Theta_M\to 0
\end{align*}
where $\Theta_M$ is the holomorphic tangent sheaf. Let $\mathcal{U}=\{U_j\}$ be a sufficiently fine open covering of $M$ such that $U_j=\{z_j\in \mathbb{C}^n||z_j^{\alpha}|<r_j^{\alpha},\alpha=1,...,n\}$. Then we can compute the hypercohomology group of the above complex of sheaves by the following \v{C}ech resolution $($see Appendix \ref{appendixb}$)$.
\begin{center}
$\begin{CD}
@A[\Lambda,-]AA\\
C^0(\mathcal{U},\wedge^3 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA\\
C^0(\mathcal{U},\wedge^2 \Theta_M)@>\delta>> C^1(\mathcal{U},\wedge^2 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA @A[\Lambda,-]AA\\
C^0(\mathcal{U},\Theta_M)@>-\delta>>C^1(\mathcal{U},\Theta_M)@>\delta>>C^2(\mathcal{U},\Theta_M)@>-\delta>>\cdots\\
@AAA @AAA @AAA @AAA \\
0@>>>0 @>>> 0 @>>> 0@>>> \cdots
\end{CD}$
\end{center}
\begin{definition}
We define the $i$-th truncated holomorphic Poisson cohomology group\footnote{In \cite{Wei99}, holomorphic Poisson cohomology for a holomorphic Poisson manifold $(M,\Lambda)$ is defined as the $i$-th hypercohomology group of the complex of sheaves $\mathcal{O}_M\to \Theta_M\to \wedge^2 \Theta_M \to \cdots \to\wedge^n \Theta_M\to 0$ induced by $[\Lambda,-]$. However, since the structure sheaf $\mathcal{O}_M$ plays no role in deformations of holomorphic Poisson manifolds, we truncate the complex of sheaves. See also \cite{Nam09}. } of a holomorphic Poisson manifold $(M,\Lambda)$ to be the $i$-th hypercohomology group associated with the complex of sheaves $0\to \Theta_M\xrightarrow{[\Lambda,-]} \wedge^2 \Theta_M\xrightarrow{[\Lambda,-]} \cdots \xrightarrow{[\Lambda,-]} \wedge^n \Theta_M\to 0$, where $\Theta_M$ is the holomorphic tangent sheaf; it is denoted by $HP^i(M,\Lambda)$.\footnote{We adopt the notation from \cite{Nam09}. By the general philosophy of deformation theory, it might be natural to shift the grading after truncation so that the $0$-th cohomology group corresponds to infinitesimal automorphisms, the first to infinitesimal deformations, and the second to obstructions. However, we follow \cite{Nam09}, so we put $0\to 0\to 0\to\cdots$ at the bottom of the complex.}
\end{definition}
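For orientation, consider the extreme case $\Lambda=0$: every map $[\Lambda,-]$ in the complex vanishes, so the hypercohomology splits as a direct sum of sheaf cohomology groups, $HP^i(M,0)=\bigoplus_{p+q=i,\ p\geq 1}H^q(M,\wedge^p\Theta_M)$. In particular,
\begin{align*}
HP^2(M,0)\cong H^1(M,\Theta_M)\oplus H^0(M,\wedge^2\Theta_M),
\end{align*}
which is consistent with the idea that a first-order deformation of $(M,0)$ consists of a first-order deformation of the complex structure together with a first-order deformation of the $($zero$)$ Poisson bivector.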
Now we relate the second truncated holomorphic Poisson cohomology group $HP^2(M_t,\Lambda_t)$ to the infinitesimal deformation of $(M_t,\Lambda_t)$ in a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ for each $t$. Let $t_0\in B$ and choose a sufficiently small polydisk $\Delta$ with $t_0\in \Delta \subset B$. Then $\omega^{-1}(\Delta)=\mathcal{M}_{\Delta}=\bigcup_{j=1}^l \mathcal{U}_j$, where $\mathcal{U}_j:=U_j\times \Delta$ with $U_j$ a polydisk, and $(z_j,t)\in U_j\times \Delta$ and $(z_k,t)\in U_k\times \Delta$ are the same point of $\mathcal{M}_{\Delta}$ if
\begin{align*}
z_j^{\alpha}=f_{jk}^{\alpha}(z_k,t),\,\,\, \alpha=1,...,n
\end{align*}
where $z_j^{\alpha}=f_{jk}^{\alpha}(z_k^1,...,z_k^n,t_1,...,t_m)$ is a holomorphic transition function in $z_k^1,...,z_k^n,t_1,...,t_m$. On each local complex coordinate system $U_j\times \Delta$, $\Lambda$ can be expressed as $\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}$, where $g^{j}_{\alpha \beta}(z,t)$ is a holomorphic function on $U_{j}\times \Delta$ and $g_{\alpha \beta}^j(f_{jk}^1(z_k,t),...,f_{jk}^n(z_k,t),t)=\sum_{r,s} g_{rs}^k(z_k,t)\frac{\partial f_{jk}^{\alpha}}{\partial z_k^r}\frac{\partial f_{jk}^{\beta}}{\partial z_k^s}$ on $(U_j\times \Delta)\cap (U_k\times \Delta)$. We denote $\mathcal{U}_j^t:=U_j\times \{t\}$, and for each $t\in \Delta$ let $\mathcal{U}^t=\{\mathcal{U}_j^t\}$ be the induced open covering of $M_t$. Let $\frac{\partial}{\partial t}=\sum_{\lambda=1}^m c_{\lambda}\frac{\partial}{\partial t_{\lambda}}$, $c_{\lambda}\in \mathbb{C}$, be a tangent vector of $B$. We show the following.
\begin{proposition}\label{gg}
\begin{align*}
( \{\theta_{jk}(t)=\sum_{\alpha=1}^n \frac{\partial f_{jk}^{\alpha}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{\alpha}}\},\{\Lambda_j(t)=\sum_{\alpha,\beta} \frac{\partial g_{\alpha \beta}^{j}(z,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}\})\in C^1(\mathcal{U}^t,\Theta_{M_t})\oplus C^0(\mathcal{U}^t,\wedge^2 \Theta_{M_t})
\end{align*}
defines a $2$-cocycle; we call its cohomology class in $HP^2(M_t,\Lambda_t)$ the infinitesimal $($Poisson$)$ deformation along $\frac{\partial}{\partial t}$. This class is independent of the choice of the system of local coordinates.
\end{proposition}
\begin{proof}
First, $\delta(\{\theta_{jk}(t)\})=0$ $($see $\cite{Kod05}$ p.201$)$. Second, since $[\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]=0$, by taking the derivative with respect to $t$ we obtain $[\sum_{\alpha,\beta} \frac{\partial g_{\alpha \beta}^{j}(z,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]+[\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta} \frac{\partial g_{\alpha \beta}^{j}(z,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]=2[\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta} \frac{\partial g_{\alpha \beta}^{j}(z,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]=0$. It remains to show that $\delta(\{\Lambda_j(t)\})+[\Lambda_t,\{\theta_{jk}\}]=0$. More precisely, on $\mathcal{U}_{jk}^t$ we show that $\Lambda_{k}(t)-\Lambda_{j}(t)+[\Lambda_t,\theta_{jk}(t)]=0$. In other words,
\begin{align*}
(*)\sum_{r,s=1}^n \frac{\partial g^k_{rs}}{\partial t}\frac{\partial}{\partial z^{r}_k}\wedge\frac{\partial}{\partial z_k^{s}}-\sum_{\alpha,\beta=1}^n \frac{\partial g^j_{\alpha \beta}}{\partial t}\frac{\partial}{\partial z^{\alpha}_j}\wedge\frac{\partial}{\partial z_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^{j}(z,t)\frac{\partial}{\partial z_{j}^{r}}\wedge \frac{\partial}{\partial z_{j}^{s}},\sum_{c=1}^n \frac{\partial f_{jk}^{c}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{c}}]=0
\end{align*}
We note that since $z_j^{\alpha}=f_{jk}^{\alpha}(z_k^1,...,z_k^n,t_1,...,t_m)$ for $\alpha=1,...,n$, $\frac{\partial}{\partial z_k^{r}}=\sum_{a=1}^{n}\frac{\partial f_{jk}^a}{\partial z_k^{r}}\frac{\partial}{\partial z_j^a}$ for $r=1,...,n$. Hence
\begin{align*}
\sum_{r,s=1}^n \frac{\partial g^k_{rs}}{\partial t}\frac{\partial}{\partial z^{r}_k}\wedge\frac{\partial}{\partial z_k^{s}}=\sum_{r,s,a,b=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}\frac{\partial}{\partial z_j^a}\wedge \frac{\partial}{\partial z_j^b}
\end{align*}
and
\begin{align*}
&[\sum_{r,s=1}^n g_{rs}^{j}(z,t)\frac{\partial}{\partial z_{j}^{r}}\wedge \frac{\partial}{\partial z_{j}^{s}},\sum_{c=1}^n \frac{\partial f_{jk}^{c}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{c}}]=\sum_{r,s,c=1}^n [g_{rs}^{j}(z,t)\frac{\partial}{\partial z_{j}^{r}}\wedge \frac{\partial}{\partial z_{j}^{s}},\frac{\partial f_{jk}^{c}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{c}}]\\
&=\sum_{r,s,c=1}^n [g_{rs}^j \frac{\partial}{\partial z_j^r},\frac{\partial f_{jk}^c}{\partial t} \frac{\partial}{\partial z_j^c}]\wedge \frac{\partial}{\partial z_j^s}-g_{rs}^j[\frac{\partial}{\partial z_j^s},\frac{\partial f_{jk}^c}{\partial t}\frac{\partial}{\partial z_j^c}]\wedge \frac{\partial}{\partial z_j^r}\\
&=\sum_{r,s,c=1}^n g_{rs}^j\frac{\partial}{\partial z_j^r}\left(\frac{\partial f_{jk}^c}{\partial t}\right) \frac{\partial}{\partial z_j^c}\wedge \frac{\partial}{\partial z_j^s}-\frac{\partial f_{jk}^c}{\partial t}\frac{\partial g_{rs}^j}{\partial z_j^c}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}+g_{rs}^j\frac{\partial}{\partial z_j^s}\left(\frac{\partial f_{jk}^c}{\partial t}\right)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^c}
\end{align*}
By considering the coefficients of $\frac{\partial}{\partial z_j^a}\wedge \frac{\partial}{\partial z_j^b}$, $(*)$ is equivalent to
\begin{align*}
(**)\sum_{r,s=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}-\frac{\partial g_{ab}^j}{\partial t}-\sum_{c=1}^n \frac{\partial g_{ab}^j}{\partial z_j^c}\frac{\partial f_{jk}^c}{\partial t}+\sum_{c=1}^n g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right)=0
\end{align*}
On the other hand, since $\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^j\frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}=\sum_{r,s=1}^n g_{rs}^k\frac{\partial}{\partial z_k^{r}}\wedge \frac{\partial}{\partial z_k^{s}}=\sum_{r,s,a,b=1}^n g_{rs}^k \frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s} \frac{\partial}{\partial z_j^a}\wedge\frac{\partial }{\partial z_j^b}$ on $\mathcal{U}_j^t \cap \mathcal{U}_k^t\ne \emptyset$, we have
\begin{align*}
g_{ab}^j(f_{jk}^1(z_k,t),...,f_{jk}^n(z_k,t),t_1,...,t_m)=\sum_{r,s=1}^n g_{rs}^k \frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}.
\end{align*}
By taking the derivative with respect to $t$, we have
\begin{align*}
\frac{\partial g_{ab}^j}{\partial z_j^1}\frac{\partial f_{jk}^1}{\partial t}+\cdots+\frac{\partial g_{ab}^j}{\partial z_j^n}\frac{\partial f_{jk}^n}{\partial t}+\frac{\partial g_{ab}^j}{\partial t}=\sum_{r,s=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}+g_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )
\end{align*}
Hence $(**)$ is equivalent to
\begin{align*}
\sum_{c=1}^n g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right)=\sum_{r,s=1}^n g_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )
\end{align*}
Indeed,
{\small{\begin{align*}
\sum_{c=1}^n g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right)=\sum_{r,s,c=1}^n g_{rs}^k\frac{\partial f_{jk}^c}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{rs}^k\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^c}{\partial z_k^s}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right)\\
\sum_{r,s=1}^n g_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )=\sum_{r,s,c=1}^n g_{rs}^k\frac{\partial f_{jk}^c}{\partial z_k^r}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+g_{rs}^k\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^c}{\partial z_k^s}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right)
\end{align*}}}
It remains to show that $(\theta(t),\Lambda(t))$ is independent of the choice of system of local coordinates. One can show that the infinitesimal deformation does not change under a refinement of the open covering $($see $\cite{Kod05}$ p.190$)$. Since we can choose a common refinement for two systems of local coordinates, it is sufficient to show that given two systems of local coordinates $x_j=(z_j,t)$ and $u_j=(w_j,t)$ on each $\mathcal{U}_j$, the infinitesimal deformation $(\eta(t),\Lambda'(t))$ with respect to $\{u_j\}$ coincides with $(\theta(t),\Lambda(t))$ with respect to $\{x_j\}$. Let
\begin{align*}
w_j^{\alpha}=g_j^{\alpha}(z_j^1,...,z_j^n,t)
\end{align*}
be the coordinate transformation from $(z_j,t)$ to $(w_j,t)$, which is holomorphic in $z_j^1,...,z_j^n$.
So we have $\frac{\partial}{\partial z_j^r}=\sum_{a} \frac{\partial g_j^a}{\partial z_j^r}\frac{\partial}{\partial w_j^a}$. And let
\begin{align*}
\theta_j(t)=\sum_{\alpha=1}^n \frac{\partial g_j^{\alpha}(z_j,t)}{\partial t} \frac{\partial }{\partial w_j^{\alpha}}, \,\,\,\,\,w_j^{\alpha}=g_j^{\alpha}(z_j,t).
\end{align*}
Then we claim that $(\theta_{jk}(t),\Lambda_j(t))-(\eta_{jk}(t),\Lambda'_j(t))$ is the coboundary of $-\theta(t)=\{-\theta_j(t)\}$, i.e. $\theta_{jk}(t)-\eta_{jk}(t)=\theta_k(t)-\theta_j(t)$ and $\Lambda_j(t)-\Lambda'_j(t)=-[\Lambda, \theta_j(t)]$. Since $\delta(\{\theta_j(t)\})=\{\theta_{jk}(t)\}-\{\eta_{jk}(t)\}$ $($see $\cite{Kod05}$ p.192$)$, we only need to check that $\Lambda_j(t)-\Lambda'_j(t)+[\Lambda,\theta_j(t)]=0$. Equivalently,
{\small{\begin{align*}
\sum_{r,s} \frac{\partial \Lambda^{rs}_{j}(z_j,t)}{\partial t}\frac{\partial }{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}-\sum_{\alpha,\beta} \frac{\partial \Lambda^{'\alpha\beta}_j(w_j,t)}{\partial t}\frac{\partial}{\partial w_j^{\alpha}}\wedge \frac{\partial}{\partial w_j^{\beta}}+[\sum_{\alpha,\beta} \Lambda^{'\alpha\beta}_j(w_j,t)\frac{\partial}{\partial w_j^{\alpha}}\wedge \frac{\partial}{\partial w_j^{\beta}},\sum_c \frac{\partial g_j^{c}(z_j,t)}{\partial t} \frac{\partial }{\partial w_j^{c}}]=0
\end{align*}}}
But the computation is essentially the same as the one above.
\end{proof}
\begin{definition}[(holomorphic) Poisson Kodaira-Spencer map]
Let $(\mathcal{M},\Lambda,B,\pi)$ be a family of compact holomorphic Poisson manifolds, where $B$ is a domain of $\mathbb{C}^m$, and $(z,t)$ its system of local coordinates. Then each $(z_j,t)$ on $\mathcal{U}_j$ is a local complex coordinate system of the complex manifold $\mathcal{M}$, and in this coordinate system $\Lambda$ can be expressed as $\sum_{\alpha,\beta} g_{\alpha \beta}^{j}(z,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}$, where $g^{j}_{\alpha \beta}(z,t)$ is a holomorphic function on $\mathcal{U}_{j}$. For a tangent vector $\frac{\partial}{\partial t}=\sum_{\lambda=1}^{m} c_{\lambda}\frac{\partial}{\partial t_{\lambda}}$, $c_{\lambda} \in \mathbb{C}$, of $B$, we put
\begin{align*}
\frac{\partial \Lambda_t}{\partial t}=\sum_{\alpha,\beta}\left[\sum_{\lambda=1}^{m}c_{\lambda}\frac{\partial g_{\alpha \beta}^{j}(z,t)}{\partial t_{\lambda}}\right] \frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}
\end{align*}
The $($holomorphic$)$ Poisson Kodaira-Spencer map is a $\mathbb{C}$-linear map of
\begin{align*}
\varphi_t:T_t(B) &\to HP^2(M_t,\Lambda_t)\\
\frac{\partial}{\partial t} &\mapsto \left[\rho_t\left(\frac{\partial}{\partial t}\right)\left(=\frac{\partial{M}_t}{\partial t}\right), \frac{\partial{\Lambda_t}}{\partial t}\right]=\frac{\partial (M_t,\Lambda_t)}{\partial t}
\end{align*}
where $\rho_t:T_t(B)\to H^1(M_t,\Theta_t)$ is the Kodaira-Spencer map. $($See $\cite{Kod05}$ p.201$)$
\end{definition}
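For instance (this is immediate from the definitions), for a product family $(M\times B,\Lambda_{t^0}\oplus 0,B,pr)$ we may choose the transition functions $f_{jk}$ and the local Poisson coefficients $g_{\alpha\beta}^j$ independent of $t$, so that for every tangent vector $\frac{\partial}{\partial t}$,
\begin{align*}
\theta_{jk}(t)=\sum_{\alpha=1}^n \frac{\partial f_{jk}^{\alpha}(z_k)}{\partial t}\frac{\partial}{\partial z_j^{\alpha}}=0,\,\,\,\,\, \Lambda_j(t)=\sum_{\alpha,\beta=1}^n \frac{\partial g_{\alpha\beta}^j(z_j)}{\partial t}\frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}=0.
\end{align*}
Hence the Poisson Kodaira-Spencer map of a trivial family vanishes identically.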
\subsection{Trivial, locally trivial family and rigidity}\
\begin{definition}
Two Poisson analytic families $(\mathcal{M},\Lambda, B,\pi)$ and $(\mathcal{N},\Lambda', B,\pi')$ are equivalent if there is a biholomorphic Poisson map $\Phi$ of $(\mathcal{M},\Lambda)$ onto $(\mathcal{N},\Lambda')$ such that $\pi=\pi'\circ \Phi$.
Then for each $t\in B$, $(M_t,\Lambda_t)$ and $(N_t,\Lambda'_t)$ are biholomorphic as holomorphic Poisson manifolds.
\end{definition}
\begin{definition}
A Poisson analytic family $(\mathcal{M},\Lambda,B,\pi)$ is called trivial if it is equivalent to $(M\times B,\Lambda_{t^0}\oplus 0,B,\pi')$\footnote{For the definition of the product of holomorphic Poisson manifolds, see Appendix \ref{appendixa}. Here we consider $B$ as a holomorphic Poisson manifold with the trivial Poisson structure $0$.} with $M=\pi^{-1}(t^0)$, where $t^0$ is some point of $B$. Similarly we define the local triviality of $(\mathcal{M},\Lambda,B,\pi)$: for each $t\in B$, there exists a neighborhood $\Delta$ of $t$ such that $(\mathcal{M}_{\Delta},\Lambda_{\Delta}, \Delta,\pi)$ is trivial.\footnote{Let $(\mathcal{M},\Lambda,B,\pi)$ be a Poisson analytic family and let $\Delta$ be an open set of $B$. Then $(\mathcal{M}_{\Delta}=\pi^{-1}(\Delta),\Lambda|_{\mathcal{M}_{\Delta}},\Delta,\pi|_{\mathcal{M}_{\Delta}})$ is a Poisson analytic family. We denote this family by $(\mathcal{M}_{\Delta},\Lambda_{\Delta},\Delta,\pi)$.}
\end{definition}
The following problem is an analogue of a question from deformations of complex structures.
\begin{problem}
If $\dim HP^2(M_t,\Lambda_t)$ is independent of $t\in B$, and $\varphi_t=0$ identically, then is the Poisson analytic family $(\mathcal{M},\Lambda,B,\pi)$ locally trivial$?$\footnote{For the corresponding question in deformations of complex structures, a proof can be found in $\cite{Kod05}$. But I could not settle this problem owing to my unfamiliarity with the analysis.}
\end{problem}
\begin{definition}
We say that a compact holomorphic Poisson manifold $(M,\Lambda_0)$ is rigid if, for any Poisson analytic family $(\mathcal{M},\Lambda,B,\pi)$ such that $(M_{t_0},\Lambda_{t_0})=(M,\Lambda_0)$, we can find a neighborhood $\Delta$ of $t_0$ such that $(M_t,\Lambda_t)=(M_{t_0},\Lambda_{t_0})$ for $t\in \Delta$. More precisely, $(\mathcal{M}_{\Delta},\Lambda_{\Delta},\Delta,\pi)$ is Poisson biholomorphic to $(M_{t_0}\times \Delta, \Lambda_{t_0}\oplus 0,\Delta, pr)$, where we consider $\Delta$ as the trivial holomorphic Poisson manifold $(\Delta,0)$ and $pr$ is the second projection.
\end{definition}
The following problem is an analogue of a question from deformations of complex structures.
\begin{problem}
If $HP^2(M,\Lambda_0)=0$, is $(M,\Lambda_0)$ rigid$?$\footnote{I verified that we can use Kodaira's methods presented in \cite{Mor71}. Actually the proof is a special case of the theorem of completeness (see $\cite{Kod05}$). But I could not prove the inductive step in Kodaira's methods. However, in part \ref{part3} of the thesis, we prove that for a nonsingular Poisson variety $(X,\Lambda_0)$ over an algebraically closed field $k$, if $HP^2(X,\Lambda_0)=0$, then $(X,\Lambda_0)$ is rigid in algebraic Poisson deformations (see Proposition \ref{3rigid}). I could not find any example with $HP^2(M,\Lambda)=0$. Even for the complex projective plane $\mathbb{P}_{\mathbb{C}}^2$ with any holomorphic Poisson structure $\Lambda$, $HP^2(\mathbb{P}_{\mathbb{C}}^2,\Lambda)\ne 0$. See \cite{Pin11}.}
\end{problem}
\subsection{Change of parameter}(compare $\cite{Kod05}$ p.205)
Suppose given a Poisson analytic family $\{(M_t,\Lambda_t)|(M_t,\Lambda_t)=\omega^{-1}(t),t\in B\}=(\mathcal{M},\Lambda,B,\omega)$ of compact holomorphic Poisson manifolds, where $B$ is a domain of $\mathbb{C}^m$. Let $D$ be a domain of $\mathbb{C}^r$ and $h:s\to t=h(s),s\in D$, a holomorphic map of $D$ into $B$. Then by changing the parameter from $t$ to $s$, we construct a Poisson analytic family $\{(M_{h(s)},\Lambda_{h(s)})|s\in D\}$ on the parameter space $D$ in the following way.
Let $\mathcal{M}\times_B D:=\{(p,s)\in \mathcal{M}\times D|\omega(p)=h(s)\}$. Then we have the following commutative diagram
\begin{center}
$\begin{CD}
\mathcal{M}\times_B D @>p>> \mathcal{M}\\
@V\pi VV @VV\omega V\\
D @>h>> B
\end{CD}$
\end{center}
Since $\omega$ is a submersion, $\mathcal{M}\times_B D$ is a complex submanifold of $\mathcal{M}\times D$ and $\pi$ is a submersion. So $(\mathcal{M}\times_B D,D,\pi)$ is a complex analytic family in the sense of Kodaira and Spencer and we have $\pi^{-1}(s)=M_{h(s)}$. We show that it is naturally a Poisson analytic family such that $\pi^{-1}(s)=(M_{h(s)},\Lambda_{h(s)})$. Note that $D$ can be considered as a holomorphic Poisson manifold with trivial Poisson structure. In other words, $(D,0)$ is a holomorphic Poisson manifold. Then $(\mathcal{M}\times D,\Lambda\oplus 0)$ is a holomorphic Poisson manifold.\footnote{For the definition of product of two holomorphic Poisson manifolds, see Appendix \ref{appendixa}} We show that $\mathcal{M}\times_B D$ is a holomorphic Poisson submanifold of $(\mathcal{M}\times D,\Lambda\oplus 0)$. We check locally by applying Proposition \ref{box} (3). Let $(p_0,s_0)\in \mathcal{M}\times_B D$. Taking a sufficiently small coordinate polydisk $\Delta$ with $h(s_0)\in \Delta$, we represent $(\mathcal{M}_{\Delta},\Lambda_{\Delta})=\omega^{-1}(\Delta)$ in the form of
\begin{align*}
(\mathcal{M}_{\Delta},\Lambda_{\Delta})=(\bigcup_{j=1}^l U_j\times \Delta, \sum_{\alpha,\beta} g_{\alpha\beta}^j(z_j,t)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where each $U_j$ is a polydisk independent of $t$, and $(z_j,t)\in U_j\times \Delta$ and $(z_k,t)\in U_k\times \Delta$ are the same point on $\mathcal{M}_{\Delta}$ if $z_j^{\alpha}=f_{jk}^{\alpha}(z_k,t), \alpha=1,...,n$. Let $E$ be a sufficiently small polydisk of $D$ with $s_0\in E$ and $h(E)\subset \Delta$. Then we can represent $\mathcal{M}\times D$ locally in the form of
\begin{align*}
(\mathcal{M}_{\Delta}\times E,\Lambda_{\Delta}\oplus 0)=(\bigcup_{j=1}^l U_j\times \Delta\times E, \sum_{\alpha,\beta} g_{\alpha\beta}^j(z_j,t)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where $(z_j,t,s)\in U_j\times \Delta\times E$ and $(z_k,t,s)\in U_k\times \Delta\times E$ are the same point on $\mathcal{M}_{\Delta}\times E$ if $z_j=f_{jk}(z_k,t)$.
And we can represent $\mathcal{M}\times_B D$ locally in the form of
\begin{align*}
\bigcup_{j=1}^l U_j\times G_E \subset \mathcal{M}_{\Delta}\times E
\end{align*}
where $G_E=\{(h(s),s)|s\in E\}\subset \Delta\times E$ and $(z_j,h(s),s)\in U_j\times G_E$ and $(z_k,h(s),s)\in U_k\times G_E$ are the same points if $z_j=f_{jk}(z_k,h(s))$. We note that at $(p_0,s_0)\in \mathcal{M}\times_B D$, we have $(\Lambda\oplus 0)_{(p_0,s_0)}=\sum_{\alpha,\beta} g_{\alpha\beta}^j(p_0,h(s_0))\frac{\partial}{\partial z_j^{\alpha}}|_{p_0}\wedge\frac{\partial}{\partial z_j^{\beta}}|_{p_0}\in \wedge^2 T_{\mathcal{M}\times_B D}$. Hence $\mathcal{M}\times_B D$ is a holomorphic Poisson submanifold of $(\mathcal{M}\times D,\Lambda \oplus 0)$. Since $i:\mathcal{M}\times_B D\hookrightarrow \mathcal{M}\times D$ is a Poisson map and $\mathcal{M}\times D \to\mathcal{M}$ is a Poisson map, $p:\mathcal{M}\times_B D\to \mathcal{M}$ is a Poisson map.
Since $G_E$ is biholomorphic to $E$, the holomorphic Poisson manifold $\mathcal{M}\times_B D$ is represented locally in the form
\begin{align*}
(\bigcup_{j=1}^l U_j\times E, \sum_{\alpha,\beta} g_{\alpha\beta}^j(z_j,h(s))\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where $(z_k,s)\in U_k\times E$ and $(z_j,s)\in U_j\times E$ are the same points if $z_j=f_{jk}(z_k,h(s))$.
\begin{definition}
The Poisson analytic family $(\mathcal{M}\times_B D,(\Lambda\oplus 0)|_{\mathcal{M}\times_B D},D,\pi)$ is called the Poisson analytic family induced from $(\mathcal{M},\Lambda,B,\omega)$ by the holomorphic map $h:D\to B$.
\end{definition}
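For instance, if $h:D\to B$ is the constant map $h(s)=t^0$, then the transition functions $f_{jk}(z_k,h(s))=f_{jk}(z_k,t^0)$ and the coefficients $g_{\alpha\beta}^j(z_j,h(s))=g_{\alpha\beta}^j(z_j,t^0)$ are independent of $s$, so the induced family is locally of the form
\begin{align*}
(\bigcup_{j=1}^l U_j\times D, \sum_{\alpha,\beta} g_{\alpha\beta}^j(z_j,t^0)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}}),
\end{align*}
which is the trivial family $(M_{t^0}\times D,\Lambda_{t^0}\oplus 0,D,pr)$.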
Now we consider the change of variable formula in the infinitesimal deformations.
\begin{thm}
For any tangent vector $\frac{\partial}{\partial s}=c_1\frac{\partial}{\partial s_1}+\cdots +c_r\frac{\partial}{\partial s_r}\in T_s(D)$, the infinitesimal holomorphic Poisson deformation of $(M_{h(s)},\Lambda_{h(s)})$ along $\frac{\partial}{\partial s}$ is given by
\begin{align*}
\frac{\partial(M_{h(s)},\Lambda_{h(s)})}{\partial{s}}=(\sum_{\lambda=1}^{m} \frac{\partial t_{\lambda}}{\partial s} \frac{\partial M_t}{\partial t_{\lambda}},\sum_{\lambda=1}^{m} \frac{\partial t_{\lambda}}{\partial s}\frac{\partial{\Lambda_t}}{\partial t_{\lambda}})
\end{align*}
\end{thm}
\begin{proof}
We put
\begin{align*}
\theta_{\lambda jk}(t)&=\sum_{\alpha=1}^{n}\frac{\partial{f_{jk}^{\alpha}(z_k,t_1,...,t_m)}}{\partial{t_{\lambda}}}\frac{\partial}{\partial{z_j^{\alpha}}},\\
\eta_{jk}(s)&=\sum_{\alpha=1}^{n}\frac{\partial{f_{jk}^{\alpha}(z_k, h(s))}}{\partial{s}}\frac{\partial}{\partial{z_j^{\alpha}}},\\
\Lambda_{\lambda j}(t)&=\sum_{\alpha,\beta =1}^n\frac{\partial{g^{j}_{\alpha \beta}(z_j,t_1,...,t_m)}}{\partial{t_{\lambda}}}\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}},\\
\Lambda_{j}(s)&=\sum_{\alpha, \beta=1}^n\frac{\partial{g^{j}_{\alpha \beta}(z_j,h(s))}}{\partial{s}}\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}}
\end{align*}
$\frac{\partial{(M_t,\Lambda_t)}}{\partial{t_{\lambda}}}$ is the cohomology class of the 1-cocycle $(\{\theta_{\lambda jk}(t)\},\{\Lambda_{\lambda j}(t)\})$, and $\frac{\partial{(M_{h(s)}, \Lambda_{h(s)})}}{\partial{s}}$ is that of $(\{\eta_{jk}(s)\},\{\Lambda_{j}(s)\})$. Since $h(s)=(t_1,...,t_m)$, we have
\begin{align*}
\frac{\partial{f_{jk}^{\alpha}(z_k, h(s))}}{\partial{s}}&=\sum_{\lambda=1}^{m}\frac{\partial{t_{\lambda}}}{\partial{s}}\frac{\partial{f_{jk}^{\alpha}(z_k,t_1,...,t_m)}}{\partial{t_{\lambda}}},\\
\frac{\partial{g^{j}_{\alpha \beta}(z_j,h(s))}}{\partial{s}}&=\sum_{l=1}^{r}c_l\frac{\partial{{g^{j}_{\alpha \beta}(z_j,h(s))}}}{\partial{s_l}}=\sum_{l=1}^r \sum_{\lambda=1}^m c_l\frac{\partial{t_{\lambda}}}{\partial{s_l}}\frac{\partial{{g^{j}_{\alpha \beta}(z_j,t_1,...,t_m)}}}{\partial{t_\lambda}}=\sum_{\lambda=1}^{m}\frac{\partial{t_{\lambda}}}{\partial{s}}\frac{\partial{g^j_{\alpha \beta}(z_j,t_1,...,t_m)}}{\partial{t_{\lambda}}}
\end{align*}
Hence we get the theorem.
\end{proof}
At this point, we discuss a concept of completeness in deformations of holomorphic Poisson manifolds. We define a complete family.
\begin{definition}
Let $(\mathcal{M},\Lambda,B, \omega)$ be a Poisson analytic family of compact holomorphic Poisson manifolds, and $t^0\in B$. Then $(\mathcal{M},\Lambda,B,\omega)$ is called complete at $t^0\in B$ if for any Poisson analytic family $(\mathcal{N},\Lambda',D,\pi)$ such that $D$ is a domain of $\mathbb{C}^l$ containing $0$ and that $\pi^{-1}(0)=\omega^{-1}(t^0)$, there are a sufficiently small domain $\Delta$ with $0\in \Delta\subset D$, and a holomorphic map $h:s\to t=h(s)$ with $h(0)=t^0$ such that $(\mathcal{N}_{\Delta},{\Lambda'}_{\Delta},\Delta,\pi)$ is the Poisson analytic family induced from $(\mathcal{M},\Lambda,B,\omega)$ by $h$ where $(\mathcal{N}_{\Delta},{\Lambda'}_{\Delta})=\pi^{-1}(\Delta)$.
\end{definition}
The following problem is an analogue of theorem of completeness from deformations of complex structures.
\begin{problem}[Theorem of Completeness for deformations of holomorphic Poisson manifolds]
If $\varphi_0:T_0 B\to HP^2(M,\Lambda_0)$ is surjective, is the Poisson analytic family $(\mathcal{M},\Lambda, B,\omega)$ complete at $0\in B?$\footnote{I verified that for this problem, we can use Kodaira's methods presented in $\cite{Kod05}$. But I could not prove the inductive step in the Poisson direction.}
\end{problem}
\chapter{Integrability condition}\label{chapter2}
In a complex analytic family $(\mathcal{M},B,\pi)$ of deformations of a complex manifold $M$, the deformations near $M$ are represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t) \in A^{0,1}(M,T_M)$ on $M$ with $\varphi(0)=0$, $\bar{\partial} \varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$, where $t \in \Delta$, a sufficiently small polydisk in $B$. In this chapter, we show that in a Poisson analytic family $(\mathcal{M},\Lambda,B,\pi)$ of deformations of a holomorphic Poisson manifold $(M,\Lambda_0)$, the deformations near $(M,\Lambda_0)$ are represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t)\in A^{0,1}(M,T_M)$ and a $C^{\infty}$ bivector field $\Lambda(t)\in A^{0,0}(M,\wedge^2 T_M)$ with $\varphi(0)=0$, $\Lambda(0)=\Lambda_0$ and $\bar{\partial}(\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=0$. To deduce the integrability condition, we follow Kodaira's approach ($\cite{Kod05}$ section \S 5.3 (b) page 259) in the context of holomorphic Poisson deformations.
\section{Preliminaries}
Let $(\mathcal{M}, \Lambda, B,\omega)$ be a Poisson analytic family of compact holomorphic Poisson manifolds, and put $(M_t,\Lambda_t)=\omega^{-1}(t)$, where $B$ is a domain of $\mathbb{C}^m$ containing the origin $0$. Define $|t|=\max_{\lambda}|t_{\lambda}|$ for $t=(t_1,...,t_m)\in \mathbb{C}^m$, and let $\Delta=\Delta_r =\{t\in \mathbb{C}^m||t|<r\}$ be the polydisk of radius $r>0$. If we take a sufficiently small $\Delta \subset B$, then $\mathcal{M}_{\Delta}=\omega^{-1}(\Delta)$ is represented in the form
\begin{align*}
\mathcal{M}_{\Delta}=\bigcup_j U_j\times \Delta
\end{align*}
We denote a point of $U_j$ by $\xi_j=(\xi_j^1,...,\xi_j^n)$ and its holomorphic Poisson structure by $\Lambda_j=\sum_{\alpha,\beta}g_{\alpha \beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ on $U_j\times \Delta$. For simplicity we assume that $U_j=\{\xi_j\in \mathbb{C}^n||\xi_j|<1\}$, where $|\xi_j|=\max_{\alpha}|\xi_j^{\alpha}|$. $(\xi_j,t)\in U_j\times \Delta$ and $(\xi_k,t)\in U_k\times \Delta$ are the same point on $\mathcal{M}_{\Delta}$ if $\xi_j^{\alpha}=f_{jk}^{\alpha}(\xi_k,t)$, $\alpha=1,...,n$, where $f_{jk}^{\alpha}(\xi_k,t)$ is a holomorphic function of $\xi_{k}^1,...,\xi_k^n,t_1,...,t_m$, defined on $U_k\times \Delta \cap U_j\times \Delta$, and we have $g_{\alpha \beta}^j(f_{jk}^1(\xi_k,t),...,f_{jk}^n(\xi_k,t),t)=\sum_{r,s} g_{rs}^k(\xi_k,t)\frac{\partial f_{jk}^{\alpha}}{\partial \xi_k^r}\frac{\partial f_{jk}^{\beta}}{\partial \xi_k^s}$. So $(M_t,\Lambda_t)=\cup_j (U_j,\Lambda_j(t))$ is a compact holomorphic Poisson manifold obtained by gluing a finite number of Poisson polydisks $(U_1,\Lambda_1(t)),...,(U_j,\Lambda_j(t)),...$, identifying $\xi_j\in U_j$ and $\xi_k\in U_k$ if $\xi_j=f_{jk}(\xi_k,t)$; the holomorphic Poisson structure of $(M_t,\Lambda_t)$ varies with $t$ since both the manner of gluing and the local holomorphic Poisson structures vary with $t$. We note that by $\cite{Kod05}$ Theorem 2.3, when we ignore the complex and Poisson structures, $M_t$ for any $t\in \Delta$ is diffeomorphic to $M_0$ as a differentiable manifold.
By $\cite{Kod05}$ Theorem 2.5, if we take a sufficiently small $\Delta$, there is a diffeomorphism $\Psi$ of $M\times \Delta$ onto $\mathcal{M}_{\Delta}$ as differentiable manifolds such that $\omega\circ \Psi$ is the projection $M\times \Delta \to \Delta$, where we put $M=M_0$. If we denote a point of $M$ by $z$, we have
\begin{align*}
\omega\circ \Psi(z,t)=t,\,\,\,\,\, t\in \Delta.
\end{align*}
$\Psi$ is the identity map of $M=M\times 0$ onto $M=M_0\subset \mathcal{M}_{\Delta}$, namely $\Psi(z,0)=z$. Put $\Psi(z,t)=(\xi,t)=(\xi_j,t)$ for $\Psi(z,t)\in U_j\times \Delta$. Then each component $\xi_j^{\alpha}=\xi_j^{\alpha}(z,t)$, $\alpha=1,...,n$, of $\xi_j=(\xi_j^1,...,\xi_j^n)$ is a $C^{\infty}$ function:
\begin{align*}
\Psi(z,t)=(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m).
\end{align*}
If we identify $\mathcal{M}_{\Delta}=\Psi(M\times \Delta)$ with $M_0\times \Delta$ via $\Psi$, $(\mathcal{M}_{\Delta},\Lambda)$ is considered as a holomorphic Poisson structure defined on the $C^{\infty}$ manifold $M\times \Delta$ by the system of local coordinates
\begin{align*}
\{(\xi_j,t)|j=1,2,3,...\},\,\,\,\,\, (\xi_j,t)=(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m).
\end{align*}
and local holomorphic Poisson structures on $U_j\times \Delta$
\begin{align*}
\{\sum_{\alpha,\beta} g_{\alpha \beta}^j(\xi_j(z,t),t)\frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}|j=1,2,3,...\}
\end{align*}
Let $(z^1,...,z^n)$ be arbitrary local complex coordinates of a point $z$ of $M_0$.
\begin{align*}
\xi_j^{\alpha}(z,t)=\xi_j^{\alpha}(z_1,...,z_n,t_1,...,t_m),\,\,\,\,\, \alpha=1,...,n,
\end{align*}
are $C^{\infty}$ functions of the complex variables $z^1,...,z^n,t_1,...,t_m$. Since for $t=0$, both $(\xi_j^1(z,0),...,\xi_j^n(z,0))$ and $(z_1,...,z_n)$ are local complex coordinates on the complex manifold $M_0=M$, the functions $\xi_j^{\alpha}(z,0)$ are holomorphic in $z_1,...,z_n$, and
\begin{align*}
det\left(\frac{\partial \xi_j^{\alpha}(z,0)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0
\end{align*}
Hence, if we take $\Delta$ sufficiently small, it follows that
\begin{align*}
det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0
\end{align*}
for any $t\in \Delta$.
With this preparation, we identify the holomorphic Poisson deformations near $(M,\Lambda_0)$, where $M=M_0$, in the Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ with $\varphi(t)+\Lambda(t)$, where $\varphi(t)$ is a $C^{\infty}$ vector $(0,1)$-form and $\Lambda(t)$ is a $C^{\infty}$ bivector field on $M$.
\subsection{Identification of the deformations of complex structures with $\varphi(t)$}\
We consider $\bar{\partial} \xi_j^{\alpha}(z,t)=\sum_{v=1}^n \frac{\partial \xi_j^{\alpha}(z,t)}{\partial \bar{z}^v}\,d\bar{z}^v$. The domain $\mathcal{U}_j=\Psi^{-1}(U_j\times \Delta)$ of $\xi_j^{\alpha}(z,t)$ is a domain of $M\times \Delta$.
Since $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$, we define a $(0,1)$-form $\varphi^{\lambda}_j(z,t)=\sum_{v=1}^n \varphi^{\lambda}_{jv}(z,t)d\bar{z_v}$ in the following way:
\begin{equation*}
\left(
\begin{matrix}
\varphi_j^1(z,t)\\
\vdots \\
\varphi_j^n(z,t)
\end{matrix}
\right)
:=
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\left(
\begin{matrix}
\bar{\partial} \xi_j^1\\
\vdots \\
\bar{\partial} \xi_j^n
\end{matrix}
\right)
\end{equation*}
Then we have
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
\varphi_j^1(z,t)\\
\vdots \\
\varphi_j^n(z,t)
\end{matrix}
\right)
=
\left(
\begin{matrix}
\bar{\partial} \xi_j^1\\
\vdots \\
\bar{\partial} \xi_j^n
\end{matrix}
\right)
\end{equation*}
which is equivalent to
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
\varphi_{j1}^1 & \dots & \varphi_{jn}^1\\
\vdots & \vdots\\
\varphi_{j1}^n & \dots & \varphi_{jn}^n
\end{matrix}
\right)
\left(
\begin{matrix}
d\bar{z_1}\\
\vdots \\
d\bar{z_n}
\end{matrix}
\right)
=
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial \bar{z_1}} & \dots & \frac{\partial \xi_j^1}{\partial \bar{z_n}}\\
\vdots & \vdots\\
\frac{\partial \xi_j^n}{\partial \bar{z_1}} & \dots & \frac{\partial \xi_j^n}{\partial \bar{z_n}}
\end{matrix}
\right)
\left(
\begin{matrix}
d\bar{z_1}\\
\vdots \\
d\bar{z_n}
\end{matrix}
\right)
\end{equation*}
In other words, we have $(0,1)$-forms
\begin{align*}
\varphi^{\lambda}_j(z,t)=\sum_{v=1}^n \varphi^{\lambda}_{jv}(z,t)d\bar{z_v}
\end{align*}
for each $\lambda=1,...,n$, such that
\begin{align*}
\bar{\partial}\xi_j^{\alpha}(z,t)=\sum_{\lambda=1}^{n} \varphi_j^{\lambda}(z,t)\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}},\,\,\,\,\, \alpha=1,...,n
\end{align*}
The coefficients $\varphi_{jv}^{\alpha}(z,t)$ are $C^{\infty}$ functions on $\mathcal{U}_j$.
\begin{lemma}\label{c}
On $\mathcal{U}_j \cap \mathcal{U}_k$, we have
\begin{align*}
\sum_{\lambda=1}^n \varphi_j^{\lambda}(z,t)\frac{\partial}{\partial z_{\lambda}}=\sum_{\lambda=1}^n \varphi_k^{\lambda}(z,t)\frac{\partial}{\partial z_{\lambda}}
\end{align*}
\end{lemma}
\begin{proof}
See $\cite{Kod05}$ p.262.
\end{proof}
For $(z,t)\in \mathcal{U}_j$, we define
\begin{align}\label{b}
\varphi(z,t):=\sum_{\lambda=1}^n \varphi_j^{\lambda}(z,t) \frac{\partial}{\partial z_{\lambda}}=\sum_{v,\lambda} \varphi_{jv}^{\lambda}(z,t) d\bar{z_v} \frac{\partial}{\partial z_{\lambda}}
\end{align}
By Lemma \ref{c}, $\varphi(t)=\varphi(z,t)$ is a $C^{\infty}$ vector $(0,1)$-form on $M$ for every $t\in\Delta$.
Then since $\bar{\partial} \xi_j^{\alpha}(z,0)=0$, and $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$, we have $\varphi(0)=0$. $\varphi(t)$ satisfies $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$\footnote{For the proof, see $\cite{Kod05}$ p.263,p.265} and we have the following theorem.
\begin{thm}\label{text}
If we take a sufficiently small polydisk $\Delta$, then for $t\in \Delta$, a local $C^{\infty}$ function $f$ on $M$ is holomorphic with respect to the complex structure of $M_t$ if and only if $f$ satisfies the equation
\begin{align*}
(\bar{\partial}-\varphi(t))f=0
\end{align*}
\end{thm}
\begin{proof}
See $\cite{Kod05}$ Theorem 5.3 p.263.
\end{proof}
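The ``only if'' direction of Theorem \ref{text} can be checked directly from the relation $\bar{\partial}\xi_j^{\alpha}=\sum_{\lambda}\varphi_j^{\lambda}\frac{\partial \xi_j^{\alpha}}{\partial z_{\lambda}}$: if $f$ is holomorphic as a function of $\xi_j^1,...,\xi_j^n$, then by the chain rule
\begin{align*}
\bar{\partial}f=\sum_{\alpha=1}^n \frac{\partial f}{\partial \xi_j^{\alpha}}\bar{\partial}\xi_j^{\alpha}=\sum_{\alpha,\lambda}\frac{\partial f}{\partial \xi_j^{\alpha}}\varphi_j^{\lambda}\frac{\partial \xi_j^{\alpha}}{\partial z_{\lambda}}=\sum_{\lambda=1}^n \varphi_j^{\lambda}\frac{\partial f}{\partial z_{\lambda}}=\varphi(t)f,
\end{align*}
so that $(\bar{\partial}-\varphi(t))f=0$. The converse is the substantial part of the theorem.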
\subsection{Identification of the deformations of Poisson structures with $\Lambda(t)$}\
On each $U_j\times \Delta$ with the holomorphic Poisson structure $\sum_{\alpha,\beta}g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$, there exists a unique bivector field $\Lambda_j'=\sum_{r,s} f_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ on $\mathcal{U}_j=\Psi^{-1}(U_j\times \Delta)$ such that $\sum_{r,s} f_{rs}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}=g_{\alpha\beta}^j(\xi_j(z,t),t)$.
Indeed, since $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$,
we set
\begin{equation*}
\left(
\begin{matrix}
f_{11}^j(z,t)& \dots & f_{1n}^j(z,t)\\
\vdots & \vdots &\vdots\\
f_{n1}^j(z,t)& \dots & f_{nn}^j(z,t)
\end{matrix}
\right)
:=
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\left(
\begin{matrix}
g_{11}^j(\xi_j(z,t))& \dots & g_{1n}^j(\xi_j(z,t))\\
\vdots & \vdots &\vdots\\
g_{n1}^j(\xi_j(z,t)) & \dots & g_{nn}^j(\xi_j(z,t))
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_1}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^1}{\partial z_n} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\end{equation*}
Then we have the unique $C^{\infty}$ bivector field $\Lambda_j':=\sum_{r,s} f_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ on $\mathcal{U}_j$.
\begin{lemma}\label{e}
On $\mathcal{U}_j\cap \mathcal{U}_k$, we have $f_{rs}^j(z,t)=f_{rs}^k(z,t)$.
\end{lemma}
\begin{proof}
We first note the following identities.
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
f_{11}^j(z,t)& \dots & f_{1n}^j(z,t)\\
\vdots & \vdots & \vdots\\
f_{n1}^j(z,t) & \dots & f_{nn}^j(z,t)
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_1}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^1}{\partial z_n} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
=\left(
\begin{matrix}
g_{11}^j(\xi_j(z,t))& \dots & g_{1n}^j(\xi_j(z,t))\\
\vdots & \vdots &\vdots\\
g_{n1}^j(\xi_j(z,t)) & \dots & g_{nn}^j(\xi_j(z,t))
\end{matrix}
\right)
\end{equation*}
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_k^1}{\partial z_1} & \dots & \frac{\partial \xi_k^1}{\partial z_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_k^n}{\partial z_1} & \dots & \frac{\partial \xi_k^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
f_{11}^k(z,t)& \dots & f_{1n}^k(z,t)\\
\vdots & \vdots &\vdots\\
f_{n1}^k(z,t) & \dots & f_{nn}^k(z,t)
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_k^1}{\partial z_1} & \dots & \frac{\partial \xi_k^n}{\partial z_1}\\
\vdots & \vdots &\vdots \\
\frac{\partial \xi_k^1}{\partial z_n} & \dots & \frac{\partial \xi_k^n}{\partial z_n}
\end{matrix}
\right)
=\left(
\begin{matrix}
g_{11}^k(\xi_k(z,t))& \dots & g_{1n}^k(\xi_k(z,t))\\
\vdots & \vdots &\vdots\\
g_{n1}^k(\xi_k(z,t)) & \dots & g_{nn}^k(\xi_k(z,t))
\end{matrix}
\right)
\end{equation*}
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^1}{\partial \xi_k^n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^n}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^n}
\end{matrix}
\right)
\left(
\begin{matrix}
g_{11}^k(\xi_k(z,t))& \dots & g_{1n}^k(\xi_k(z,t))\\
\vdots & \vdots &\vdots\\
g_{n1}^k(\xi_k(z,t)) & \dots & g_{nn}^k(\xi_k(z,t))
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^1}\\
\vdots & \vdots &\vdots \\
\frac{\partial \xi_j^1}{\partial \xi_k^n} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^n}
\end{matrix}
\right)
=\left(
\begin{matrix}
g_{11}^j(\xi_j(z,t))& \dots & g_{1n}^j(\xi_j(z,t))\\
\vdots & \vdots &\vdots\\
g_{n1}^j(\xi_j(z,t)) & \dots & g_{nn}^j(\xi_j(z,t))
\end{matrix}
\right)
\end{equation*}
Since $\frac{\partial \xi_j^q}{\partial z_p}=\sum_{r=1}^n \frac{\partial \xi_j^q}{\partial \xi_k^r}\frac{\partial \xi_k^r}{\partial z_p}$, we have
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^1}{\partial \xi_k^n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^n}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^n}
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_k^1}{\partial z_1} & \dots & \frac{\partial \xi_k^1}{\partial z_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_k^n}{\partial z_1} & \dots & \frac{\partial \xi_k^n}{\partial z_n}
\end{matrix}
\right)
=\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
\end{equation*}
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi_k^1}{\partial z_1} & \dots & \frac{\partial \xi_k^n}{\partial z_1}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_k^1}{\partial z_n} & \dots & \frac{\partial \xi_k^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial \xi_k^1} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^1}\\
\vdots & \vdots &\vdots \\
\frac{\partial \xi_j^1}{\partial \xi_k^n} & \dots & \frac{\partial \xi_j^n}{\partial \xi_k^n}
\end{matrix}
\right)
=\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_1}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^1}{\partial z_n} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)
\end{equation*}
Since $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$, we have $f_{rs}^j(z,t)=f_{rs}^k(z,t)$.
\end{proof}
For $(z,t)\in \mathcal{U}_j$, we define
\begin{align}\label{f}
\Lambda(z,t):=\sum_{r,s} f_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}.
\end{align}
By Lemma \ref{e}, $\Lambda(t)=\Lambda(z,t)$ is a $C^{\infty}$ bivector field on $M$ for every $t\in \Delta$.
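As a consistency check, at $t=0$ the functions $\xi_j^{\alpha}(z,0)$ are holomorphic in $z$, and the defining relation of the $f_{rs}^j$ becomes
\begin{align*}
\sum_{r,s} f_{rs}^j(z,0)\frac{\partial \xi_j^{\alpha}(z,0)}{\partial z_r}\frac{\partial \xi_j^{\beta}(z,0)}{\partial z_s}=g_{\alpha\beta}^j(\xi_j(z,0),0),
\end{align*}
which is the transformation law of the holomorphic bivector field $\Lambda_0$ between the coordinates $z$ and $\xi_j$. Hence the $f_{rs}^j(z,0)$ are the components of $\Lambda_0$ in the $z$-coordinates, and $\Lambda(0)=\Lambda_0$.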
\begin{thm}\label{1thm}
If we take a sufficiently small polydisk $\Delta$, then for the Poisson structure $\sum_{\alpha,\beta}g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ on $U_j\times \Delta$ for each $j$, there exists a unique bivector field $\Lambda_j'=\sum_{r,s} f_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ on $\mathcal{U}_j$ satisfying
\begin{enumerate}
\item $\sum_{r,s} f_{rs}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}=g_{\alpha\beta}^j(\xi_j(z,t),t)$
\item $\Lambda_j'$ are glued together to define a $C^{\infty}$ bivector field $\Lambda'$ on $M\times \Delta$
\item for each $j$, $[\Lambda_j',\Lambda_j']=0$. Hence we have $[\Lambda',\Lambda']=0$
\end{enumerate}
\end{thm}
We need the following lemma to prove the theorem.
\begin{lemma}\label{formula}
If $\sigma=\sum_{p,q} \sigma_{pq}\frac{\partial }{\partial z_p}\wedge \frac{\partial}{\partial z_q}$, then
$[\sigma,\sigma]=0$ is equivalent to
\begin{align*}
\sum_{l=1}^n \left(\sigma_{lk}\frac{\partial \sigma_{ij}}{\partial z_l}+\sigma_{li}\frac{\partial \sigma_{jk}}{\partial z_l}+\sigma_{lj}\frac{\partial \sigma_{ki}}{\partial z_l}\right)=0
\end{align*}
for each $1\leq i,j,k \leq n$.
\end{lemma}
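The formula in Lemma \ref{formula} is the Jacobi identity in disguise: assuming the antisymmetry $\sigma_{pq}=-\sigma_{qp}$ and the convention $\{f,g\}=\sigma(df,dg)$, we have $\{z_i,z_j\}=2\sigma_{ij}$ and $\{f,z_k\}=2\sum_{l}\sigma_{lk}\frac{\partial f}{\partial z_l}$, so
\begin{align*}
\{\{z_i,z_j\},z_k\}+\{\{z_j,z_k\},z_i\}+\{\{z_k,z_i\},z_j\}=4\sum_{l=1}^n\left(\sigma_{lk}\frac{\partial \sigma_{ij}}{\partial z_l}+\sigma_{li}\frac{\partial \sigma_{jk}}{\partial z_l}+\sigma_{lj}\frac{\partial \sigma_{ki}}{\partial z_l}\right),
\end{align*}
and $[\sigma,\sigma]=0$ is equivalent to the vanishing of this cyclic sum for all coordinate functions $z_i,z_j,z_k$.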
\begin{proof}[Proof of Theorem \ref{1thm}]
We have already shown $(1)$ and $(2)$. It remains to show $(3)$. We note that $[\sum_{\alpha,\beta} g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge\frac{\partial}{\partial \xi_j^{\beta}},\sum_{\alpha,\beta} g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge\frac{\partial}{\partial \xi_j^{\beta}}]=0$ and that $g_{\alpha\beta}^j(\xi_j(z,t),t)=\sum_{a,b} f_{ab}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_a}\frac{\partial \xi_j^{\beta}}{\partial z_b}$ is holomorphic with respect to $\xi_j=(\xi_j^{\alpha})$, $\alpha=1,...,n$. In the following, for simplicity, we denote $\xi_j^{\alpha}(z_j,t)$ by $\xi_{\alpha}$ and $f_{ab}^j(z,t)$ by $f_{ab}$. By Lemma \ref{formula}, we have
\begin{align*}
0=&\sum_{a,b,c,d,l} f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(f_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\right)+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(f_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}\right)+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(f_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l} f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(f_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\right)+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(f_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}\right)+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(f_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}\right)\\
=&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial f_{cd}}{\partial \xi_l}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_i}{\partial z_c}\right)\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_j}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial f_{cd}}{\partial \xi_l}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_j}{\partial z_c}\right)\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_k}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial f_{cd}}{\partial \xi_l}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_k}{\partial z_c}\right)\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_i}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial f_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_i}{\partial z_c}\right)\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_j}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial f_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_j}{\partial z_c}\right)\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_k}{\partial z_d}\right)\\
+&\sum_{a,b,c,d,l}f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial f_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_k}{\partial z_c}\right)\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_i}{\partial z_d}\right)\\
=&\sum_{a,b,c,d}f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial^2 \xi_i}{\partial z_a\partial z_c}\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial \xi_k}{\partial z_b}f_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial^2 \xi_j}{\partial z_a\partial z_d}\\
+&\sum_{a,b,c,d}f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial^2 \xi_j}{\partial z_a\partial z_c}\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial \xi_i}{\partial z_b}f_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial^2 \xi_k}{\partial z_a\partial z_d}\\
+&\sum_{a,b,c,d}f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial^2 \xi_k}{\partial z_a\partial z_c}\frac{\partial \xi_i}{\partial z_d}+f_{ab}\frac{\partial \xi_j}{\partial z_b}f_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial^2 \xi_i}{\partial z_a\partial z_d}\\
=&\sum_{a,b,c,d}f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+f_{ab}\frac{\partial f_{cd}}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}\\
=&\sum_{a,b,c,d}\left(f_{ab}\frac{\partial f_{cd}}{\partial z_a}+f_{ac}\frac{\partial f_{db}}{\partial z_a}+f_{ad}\frac{\partial f_{bc}}{\partial z_a}\right)\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\frac{\partial \xi_k}{\partial z_b}
\end{align*}
Since $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$, $\sum_a f_{ab}\frac{\partial f_{cd}}{\partial z_a}+f_{ac}\frac{\partial f_{db}}{\partial z_a}+f_{ad}\frac{\partial f_{bc}}{\partial z_a}=0$. So by Lemma \ref{formula}, $[\Lambda_j',\Lambda_j']=0$.
\end{proof}
\begin{remark}
In summary, for holomorphic Poisson manifold $(M_t,\Lambda_t)$ for each $t\in \Delta$ in the Poisson analytic family, there exists a bivector field $\Lambda'(t)$ on $M$ with $[\Lambda'(t),\Lambda'(t)]=0$ for $t\in \Delta$. Conversely, $\Lambda'(t)$ induces $\Lambda_t$ via diffeomorphism $\Psi$. More precisely, the $(2,0)$-part of $\Psi_{*} \Lambda'(t)$ is $\Lambda_t$ for $t\in \Delta$.\footnote{For the type of a bivector field, we mean the decomposition $\wedge^2 T_{\mathbb{C}} M=\wedge^2 T^{1,0} \oplus T^{1,0}\otimes T^{0,1} \oplus \wedge^2 T^{0,1}$ with respect to the almost complex structure induced from the complex structure. If $\Lambda\in C^{\infty}(\wedge T_{\mathbb{C}} M)$, then we denote by $\Lambda^{2,0}$ the component of $C^{\infty}(\wedge^2 T^{1,0})$, by $\Lambda^{1,1}$ the component of $C^{\infty}(T^{0,1}\otimes T^{0,1})$, and by $\Lambda^{0,2}$ the component of $C^{\infty}(\wedge^2 T^{0,1})$. So we have $\Lambda=\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{0,2}$.}
\end{remark}
Now we discuss the condition when a $C^{\infty}$ bivector field $\Lambda$ on $M$ with $[\Lambda,\Lambda]=0$ gives a holomorphic bivector field $\Lambda^{2,0}$ with respect to the complex structure $M_t$ when we restrict $\Lambda$ to $(2,0)$ part. Before proceeding our discussion, we recall a bracket structure $[-,-]$ on $A=\bigoplus_{p+q=i,p\geq 0,q\geq 1} A^{0,p}(M,\wedge^q T_M)$ (See Appendix \ref{appendixc}), which we need for the computation of the integrability condition.
The bracket structure on $A$ is defined in the following way.
\begin{align*}
[-,-]:A^{0,p}(M,\wedge^q T_M)\times A^{0,p'}(M,\wedge^{q'} T_M)\to A^{p+p'}(M,\wedge^{q+q'-1} T_M)
\end{align*}
In local coordinates it is given by
\begin{align*}
[fdz_I\frac{\partial}{\partial z_J},gdz_K\frac{\partial}{\partial z_L}]=(-1)^{|K|(|J|+1)} dz_I\wedge dz_K [f\frac{\partial}{\partial z_J},g\frac{\partial}{\partial z_L}]\end{align*}
Then $(A[1],\bar{\partial},[-,-])$ is a differntial graded Lie algebra.
So we have the following. For $a\in A^{0,p}(M,\wedge^q T_M), b\in A^{0,p'}(M,\wedge^{q'} T_M)$,
\begin{enumerate}
\item $[a,b]=-(-1)^{(p+q+1)(p'+q'+1)}[b,a]$
\item $[a,[b,c]]=[[a,b],c]+(-1)^{(p+q+1)(p'+q'+1)}[b,[a,c]]$
\item $\bar{\partial}[a,b]=[\bar{\partial} a,b]+(-1)^{p+q+1}[a,\bar{\partial} b]$
\end{enumerate}
\begin{thm}\label{m}
If we take a sufficiently small polydisk $\Delta$, then for $t\in \Delta$, a $(2,0)$-part $\Lambda^{2,0}$ of a $C^{\infty}$ bivector field $\Lambda=\sum_{\alpha,\beta=1}^n f_{\alpha \beta}(z)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}$ on $M$ is holomorphic with respect to the complex structure $M_t$, if and only if it satisfies the equation
\begin{align*}
\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0
\end{align*}
Moreover, if $[\Lambda,\Lambda]=0$, then $[\Lambda^{2,0},\Lambda^{2,0}]=0$.
\end{thm}
\begin{proof}
Since $(2,0)$ part of $\Lambda=\sum_{\alpha,\beta=1}^n f_{\alpha \beta}(z)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}$ is $\sum_{\alpha,\beta,i,j} f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}\frac{\partial}{\partial \xi^i}\wedge \frac{\partial}{\partial \xi^j}$, we have to show that for each $i,j$,$\sum_{\alpha,\beta} f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}$ is holomorphic with respect to the complex structure $M_t$, which is equivalent to $(\bar{\partial}-\varphi(t))(\sum_{\alpha,\beta} f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}})=0$ by Theorem \ref{text}, if and only if $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0$. First we compute $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=\sum_{\alpha,\beta, v} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}-[\sum_{\alpha,\beta} f_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}},\sum_{v,\lambda} \varphi_v^{\lambda}(z,t) d\bar{z_v} \frac{\partial}{\partial z_{\lambda}}]=\sum_{\alpha,\beta, v} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}-\sum_{\alpha,\beta, v,\lambda} [f_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}},\varphi_v^{\lambda} d\bar{z_v} \frac{\partial}{\partial z_{\lambda}}]=\sum_{\alpha,\beta, v} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}+\sum_{\alpha,\beta, v,\lambda} [f_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}},\varphi_v^{\lambda} \frac{\partial}{\partial 
z_{\lambda}}]d\bar{z}_v=\sum_{\alpha,\beta, v} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}+\sum_{\alpha,\beta, v,\lambda} [f_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}},\varphi_v^{\lambda} \frac{\partial}{\partial z_{\lambda}}]d\bar{z}_v=\sum_{\alpha,\beta, v} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}+\sum_{\alpha,\beta, v,\lambda}(f_{\alpha\beta}\frac{\partial \phi_v^{\lambda}}{\partial z_{\alpha}}\frac{\partial}{\partial z_{\lambda}}\wedge \frac{\partial}{\partial z_{\beta}}-\varphi_{v}^{\lambda} \frac{\partial f_{\alpha\beta}}{\partial z_{\lambda}}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}+f_{\alpha\beta} \frac{\partial \varphi_v^{\lambda}}{\partial z_{\beta}}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\lambda}})d\bar{z}_v$. By considering the coefficients of $d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}$, $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0$ is equivalent to $\sum_{\alpha,\beta,v} [\frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}+\sum_c (f_{c\beta}\frac{\partial \varphi^{\alpha}_v}{\partial z_c}-\varphi^c_v\frac{\partial f_{\alpha \beta}}{\partial z_c}+f_{\alpha c}\frac{\partial \varphi^{\beta}_v}{\partial z_c})] d\bar{z}_v\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}=0$ which is equivalent to
\begin{center}
$(*)\frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}+\sum_c (f_{c\beta}\frac{\partial \varphi^{\alpha}_v}{\partial z_c}-\varphi^c_v\frac{\partial f_{\alpha \beta}}{\partial z_c}+f_{\alpha c}\frac{\partial \varphi^{\beta}_v}{\partial z_c})]=0$ for each $\alpha,\beta,v$.
\end{center}
On the other hand, we compute $(\bar{\partial}-\varphi(t))(\sum_{\alpha,\beta} f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}})=\sum_{\alpha,\beta}(\bar{\partial}-\varphi(t))(f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}})$ for each $i,j$. $\sum_{\alpha,\beta}(\bar{\partial}-\varphi(t))(f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}})=\sum_{\alpha ,\beta,v} (\frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial}{\partial \bar{z}_v}(\frac{\partial \xi^i}{\partial z_{\alpha}})\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial}{\partial \bar{z}_v}(\frac{\partial \xi^j}{\partial z_{\beta}}))d\bar{z}_v-\sum_{\alpha,\beta,v,\lambda} \varphi^{\lambda}_v d\bar{z}_v(\frac{\partial f_{\alpha\beta}}{\partial z_{\lambda}}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi_j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial^2 \xi^i}{\partial z_{\alpha} \partial z_{\lambda}}\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}\partial z_{\lambda}})$. So $(\bar{\partial}-\varphi(t))(\sum_{\alpha,\beta} f_{\alpha \beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}})=0$ is equivalent to
\begin{center}
$(**)\sum_{\alpha ,\beta} (\frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}(\frac{\partial \xi^i}{\partial \bar{z}_v})\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial}{\partial z_{\beta}}(\frac{\partial \xi^j}{\partial \bar{z}_v}))-\sum_{\alpha,\beta,c} \varphi^{c}_v(\frac{\partial f_{\alpha\beta}}{\partial z_{c}}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi_j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial^2 \xi^i}{\partial z_{\alpha} \partial z_{c}}\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}\partial z_{c}})=0$
\end{center}
for each $i,j,v$.
Since
\begin{equation*}
\left(
\begin{matrix}
\frac{\partial \xi^1}{\partial z_1} & \dots & \frac{\partial \xi^1}{\partial z_n}\\
\vdots & \vdots\\
\frac{\partial \xi^n}{\partial z_1} & \dots & \frac{\partial \xi^n}{\partial z_n}
\end{matrix}
\right)
\left(
\begin{matrix}
\varphi_{1}^1 & \dots & \varphi_{n}^1\\
\vdots & \vdots\\
\varphi_{1}^n & \dots & \varphi_{n}^n
\end{matrix}
\right)
=
\left(
\begin{matrix}
\frac{\partial \xi^1}{\partial \bar{z_1}} & \dots & \frac{\partial \xi^1}{\partial \bar{z_n}}\\
\vdots & \vdots\\
\frac{\partial \xi^n}{\partial \bar{z_1}} & \dots & \frac{\partial \xi^n}{\partial \bar{z_n}}
\end{matrix}
\right),
\end{equation*}
we have $\frac{\partial \xi^i}{\partial \bar{z}_v}=\sum_c \frac{\partial \xi^i}{\partial z_c}\varphi^c_v$ and $\frac{\partial \xi^j}{\partial \bar{z}_v}=\sum_c \frac{\partial \xi^j}{\partial z_c}\varphi^c_v$.
So $(**)$ is equivalent to
\begin{center}
$\sum_{\alpha ,\beta} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}+\sum_{\alpha,\beta,c}(f_{\alpha\beta}(\frac{\partial^2 \xi^i}{\partial z_{\alpha} \partial z_c}\varphi^c_v+\frac{\partial \xi_i}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{\alpha}})\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}(\frac{\partial^2 \xi^j}{\partial z_{\beta} \partial z_c}\varphi^c_v+\frac{\partial \xi_j}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{\beta}}))-\sum_{\alpha,\beta,c} \varphi^{c}_v(\frac{\partial f_{\alpha\beta}}{\partial z_{c}}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi_j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial^2 \xi^i}{\partial z_{\alpha} \partial z_{c}}\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}\partial z_{c}})=0$.
\end{center}
which is equivalent to
\begin{center}
$\sum_{\alpha ,\beta} \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}+\sum_{\alpha,\beta,c}(f_{\alpha\beta}(\frac{\partial \xi_i}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{\alpha}})\frac{\partial \xi^j}{\partial z_{\beta}}+f_{\alpha\beta}\frac{\partial \xi^i}{\partial z_{\alpha}}(\frac{\partial \xi_j}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{\beta}}))-\sum_{\alpha,\beta,c} \varphi^{c}_v(\frac{\partial f_{\alpha\beta}}{\partial z_{c}}\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi_j}{\partial z_{\beta}})
=0$
\end{center}
which is equivalent to
\begin{center}
$\sum_{\alpha,\beta} [\frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}+\sum_c (f_{c\beta}\frac{\partial \varphi^{\alpha}_v}{\partial z_c}-\varphi^c_v\frac{\partial f_{\alpha \beta}}{\partial z_c}+f_{\alpha c}\frac{\partial \varphi^{\beta}_v}{\partial z_c})]\frac{\partial \xi^i}{\partial z_{\alpha}}\frac{\partial \xi^j}{\partial z_{\beta}}=0$ for each $i,j,v$.
\end{center}
Since $det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0$, this is equivalent to
\begin{center}
$(***) \frac{\partial f_{\alpha\beta}}{\partial \bar{z}_v}+\sum_c (f_{c\beta}\frac{\partial \varphi^{\alpha}_v}{\partial z_c}-\varphi^c_v\frac{\partial f_{\alpha \beta}}{\partial z_c}+f_{\alpha c}\frac{\partial \varphi^{\beta}_v}{\partial z_c})=0$ for each $\alpha,\beta,v$.
\end{center}
Note that $(*)$ is same to $(***)$.
For the second statement, we can write $\Lambda=\sum_{a,b} f_{ab}\frac{\partial}{\partial z_a}\wedge \frac{\partial}{\partial z_b}=\sum_{a,b,i,j} f_{ab}\frac{\partial \xi_i}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial}{\partial \xi_i}\wedge \frac{\partial}{\partial \xi_j}+ 2f_{ab}\frac{\partial \xi_i}{\partial z_a}\frac{\partial \bar{\xi}_j}{\partial z_b}\frac{\partial}{\partial \xi_i}\wedge \frac{\partial}{\partial \bar{\xi}_j}+ f_{ab}\frac{\partial \xi_i}{\partial z_a}\frac{\partial \bar{\xi}_j}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_i}\wedge \frac{\partial}{\partial \bar{\xi}_j}=\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{2,0}$. Since $[\Lambda,\Lambda]=0$, $(3,0)$ part of $[\Lambda,\Lambda]=0$. But $(3,0)$ part happens in $[\Lambda^{2,0},\Lambda^{2,0}]+[\Lambda^{2,0},\Lambda^{1,1}]^{3,0}$. Since $\Lambda^{2,0}$ is holomorphic with respect to the complex structure induced by $\varphi(t)$, $[\Lambda^{2,0},\Lambda^{1,1}]^{3,0}=0$.
\end{proof}
\begin{remark}
A $C^{\infty}$ complex bivector field $\Lambda$ on $M$ with $[\Lambda,\Lambda]=0$ gives a Poisson bracket on $C^{\infty}$ complex valued functions on $M$. We point out that when we restrict to holomorphic functions with respect to the complex structure $M_t$, this is exactly the Poisson bracket induced from $\Lambda^{2,0}$.
\end{remark}
\section{Expression of infinitesimal deformations in terms of $\varphi(t)$ and $\Lambda(t)$}\
In this section, we study how the infinitesimal deformation of $(M,\Lambda_0)$ in a family is represented in terms of $\varphi(t)$ and $\Lambda(t)$. Recall that $(\mathcal{M},\Lambda,B,\pi)$ is a Poisson analytic family of compact holomorphic Poisson manifolds with $(M,\Lambda_0)=\omega^{-1}(0)$ and for sufficiently small polydisk $\Delta\subset B$, $M_{\Delta}=\omega^{-1}(\Delta)$ is represented in the form $\mathcal{M}_{\Delta}=\bigcup_j U_j\times \Delta$ where $U_j=\{x_j\in \mathbb{C}^m||\xi_j|<1\}$ and the holomorphic Poisson structures on $U_j$ is $\sum_{\alpha\beta} g_{\alpha\beta}(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ and $\xi_j^{\alpha}=f_{jk}^{\alpha}(\xi_k,t),\alpha=1,...,m$ on $U_k\times \Delta \cap U_j\times \Delta$. We showed that the infinitesimal deformation at $(M,\Lambda_0)$ is captured by the element $(\frac{(M_t,\Lambda_t)}{\partial t})_{t=0}\in HP^2(M,\Lambda_0)$ of the complex of sheaves of $0\to \Theta_{M} \to \wedge^2 \Theta_{M}\to \cdots \to \wedge^n \Theta_M\to 0$ by using the following \u{C}ech hypercohomology resolution associated with the open covering $\mathcal{U}^0=\{U_j^0:=U_j\times 0\}$. (See Proposition \ref{gg})
\begin{center}
$\begin{CD}
@A[\Lambda,-]AA\\
C^0(\mathcal{U}^0,\wedge^3 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA\\
C^0(\mathcal{U}^0,\wedge^2 \Theta_M)@>\delta>> C^1(\mathcal{U}^0,\wedge^2 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA @A[\Lambda,-]AA\\
C^0(\mathcal{U}^0,\Theta_M)@>-\delta>>C^1(\mathcal{U}^0,\Theta_M)@>\delta>>C^2(\mathcal{U}^0,\Theta_M)@>-\delta>>\cdots\\
@AAA @AAA @AAA @AAA \\
0@>>>0 @>>> 0 @>>> 0@>>> \cdots
\end{CD}$
\end{center}
And we can also compute the hypercohomology group of $0\to \Theta_M \to \wedge^2 \Theta_M\to \cdots \to \wedge^n \Theta_M\to 0$ by using the following Dolbeault type resolution.(See example \ref{rr})
\begin{center}
$\begin{CD}
@A[\Lambda,-]AA\\
A^{0,0}(M,\wedge^3 T_M)@>\bar{\partial}>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA\\
A^{0,0}(M,\wedge^2 T_M)@>\bar{\partial}>> A^{0,1}(M,\wedge^2 T_M)@>\bar{\partial}>>\cdots\\
@A[\Lambda,-]AA @A[\Lambda,-]AA @A[\Lambda,-]AA\\
A^{0,0}(M,T_M)@>\bar{\partial}>>A^{0,1}(M,T_M)@>\bar{\partial}>>A^{0,2}(M, T_M)@>\bar{\partial}>>\cdots\\
@AAA @AAA @AAA @AAA \\
0@>>>0 @>>> 0 @>>> 0@>>> \cdots
\end{CD}$
\end{center}
We describe how the element in the Cech hypercohomology look like in the Dolbeault hypercohomology.
In the picture below, we connect two resolutions. We only depict a part of resolutions that we need in the next page.
{\tiny
\[
\xymatrixrowsep{0.2in}
\xymatrixcolsep{0.1in}
\xymatrix{
& & \wedge^3 \Theta_M \ar[ld] \ar[rr] & & C^0(\wedge^3 \Theta_M) \ar[ld]\\
&A^{0,0}(M,\wedge^3 T_M) \ar[rr] & & C^0(\mathscr{A}^{0,0}(\wedge^3 T_M))\\
&& \wedge^2 \Theta_M \ar@{.>}[uu] \ar@{.>}[ld] \ar@{.>}[rr] & & C^0(\wedge^2 \Theta_M) \ar[uu] \ar[ld] \ar[rr]^{\delta} && C^1(\wedge^2 \Theta_X) \ar[ld]\\
& A^{0,0}(M,\wedge^2 T_M) \ar[uu] \ar[ld] \ar[rr] && C^0(\mathscr{A}^{0,0}(\wedge^2 T_M)) \ar[uu] \ar[ld] \ar[rr]^{\delta} && C^1(\mathscr{A}^{0,0}(T_M)) \\
A^{0,1}(M,\wedge^2 T_M) \ar[rr] && C^0(A^{0,1}(\wedge^2 T_M)) && C^0(\Theta_M) \ar@{.>}[ld]\ar@{.>}[uu] \ar@{.>}[rr]^{-\delta} & & C^1(\Theta_X) \ar[uu] \ar[ld]\\
& A^{0,0}(M,T_M) \ar@{.>}[uu] \ar@{.>}[rr] \ar@{.>}[ld] && C^0(\mathscr{A}^{0,0}(T_M)) \ar[uu]^{[\Lambda,-]} \ar[rr]^{-\delta} \ar[ld]^{\bar{\partial}} && C^1(\mathscr{A}^{0,0}(T_M)) \ar[ld] \ar[uu]\\
A^{0,1}(M,T_M) \ar[uu] \ar[rr] & & C^0(\mathscr{A}^{0,1}(T_M)) \ar[uu] \ar[rr] & & C^1(\mathscr{A}^{0,1}(T_M))
}\]}
Now we explicitly construct the isomorphism of second hypercomology groups from \u{C}ech hyperresolution and Dolbeault hyperresolution, namely
\begin{align*}
HP^2(M,\Lambda_0)\cong \frac{ker(A^{0,0}(M,\wedge^2 T_M)\oplus A^{1,0}(M, T_M)\to A^{0,0}(M,\wedge^3 T_M)\oplus A^{1,0}(M,\wedge^2 T_M)\oplus A^{2,0}(M, T_M))}{im(A^{0,0}(M,T_M)\to A^{0,0}(M,\wedge^2 T_M)\oplus A^{1,0}(M,\wedge T_M))}
\end{align*}
Note that each horizontal complex is exact except for edges of the ``real wall".
We define the map in the following way:
let $(b,a) \in \mathcal{C}^0(\mathcal{U}, \wedge^2 \Theta_M)\oplus \mathcal{C}^1(\mathcal{U},\Theta_M)$ be a cohomology class of $HP^2(M,\Lambda)$. Since $\delta a=0$, there exists a $c\in C^0(\mathcal{U},\mathscr{A}^{0,0}(T_M))$ such that $-\delta c=a$. Since $a$ is holomorphic $(\bar{\partial}a=0)$, by the commutativity $\bar{\partial} c\in A^{0,1}(M, T_M)$. And we claim that $[\Lambda,c]-b\in A^{0,0}(M,\wedge^2 T_M)$. Indeed $\delta([\Lambda,c]-b)=\delta([\Lambda,c])-\delta b=-[\Lambda,-\delta c]-\delta b=-[\Lambda,a]-\delta b=0$. Now we show that $(\bar{\partial} c, [\Lambda,c]-b)$ is a cohomology class of Dolbeault type resolution. Clearly $\bar{\partial}(\bar{\partial}c)=0.$ $[\Lambda,[\Lambda,c]-b]=0$. And $\bar{\partial} ([\Lambda,c]-b)+[\Lambda, \bar{\partial} c]=-[\Lambda,\bar{\partial} c]+[\Lambda,\bar{\partial c}]=0$. We define the map by $(b,a)\mapsto ([\Lambda,c]-b,\bar{\partial} c)$.
Now we show that this map is well defined.
\begin{enumerate}
\item (independence of choice of $c$)
let $c'$ with $-\delta c'=a$. Then $-\delta(c-c')=0$. So $d=c-c'\in A^{0,0}(M,\Theta_M)$. Then $([\Lambda,c]-b,\bar{\partial c})-([\Lambda,c']-b,\bar{\partial} c')=([\Lambda,c-c'],\bar{\partial}(c-c'))=\bar{\partial}d+[\Lambda,d]$
\item (independence of choice of $(b,a)$)
Let $(b,a)$ and $(b',a')$ are in the same cohomology class. We show that $(b-b',a-a')$ is mapped to $0$. Indeed, there exists $e\in C^0(\mathcal{U},\Theta_M)$ such that $-\delta e+[\Lambda,e]=(a-a')-(b-b')$. We can use $e$ as $c$. Then $(b-b',a-a')$ is mapped to $([\Lambda, e]-(b-b'),\bar{\partial} e)=(0,0)$.
\end{enumerate}
For the inverse map, let $(\beta,\alpha) \in A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M, T_M) $ be the cohomology class of Dolbeault type resolution.
Then there exists $c\in C^0(\mathcal{U},\mathscr{A}^{0,0}(T_M))$ such that $\bar{\partial} c =\alpha$. We define the inverse map $(\beta,\alpha) \mapsto ([\Lambda,c]-\beta,-\delta c)$.
\begin{thm}\label{n}
$\left(\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0},- \left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0}\right)$ satisfies
$[\Lambda_0, -\left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0}]=0$, $\bar{\partial} \left( -(\frac{\partial \Lambda(t)}{\partial t})_{t=0}\right) +[\Lambda_0,\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}]=0$, $\bar{\partial} \left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}=0$, and under the isomorphism
\begin{align*}
HP^2(M)\cong \frac{ker(A^{0,0}(M,\wedge^2 T_M)\oplus A^{1,0}(M,\wedge T_M)\to A^{0,0}(M,\wedge^3 T_M)\oplus A^{1,0}(M,\wedge^2 T_M)\oplus A^{2,0}(M,T_M))}{im(A^{0,0}(M,T_M)\to A^{0,0}(M,\wedge^2 T_M)\oplus A^{1,0}(M,\wedge T_M))}
\end{align*}
$\left(\frac{(M_t,\Lambda_t)}{\partial t}\right)_{t=0}\in HP^2(M,\Lambda_0)$ corresponds to $\left(\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}, -\left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0}\right)$
\end{thm}
\begin{proof}
By taking the derivative of our integrability condition with respect to $t$ and plugging $0$, we get the first claim. Now we construct the isomorphism between two second cohomology groups and their correspondence. Put
\begin{align*}
\theta_{jk}&=\sum_{\alpha=1}^n \left(\frac{\partial f_{jk}^{\alpha}(\xi_k,t)}{\partial t}\right)_{t=0}\frac{\partial}{\partial \xi_j^{\alpha}}\\
\sigma_j&=\sum_{r,s=1}^n \left(\frac{\partial g_{rs}(\xi,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}
\end{align*}
The infinitesimal deformation $\left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}\in HP^2(M,\Lambda_0)$ is the cohomology class of the $(\{\theta_{jk}\},\{\sigma_j\})\in C^1(\mathcal{U}^0,\Theta)\oplus C^0(\mathcal{U}^0,\wedge^2 \Theta)$. We fix a tangent vector $\frac{\partial}{\partial t}\in T_0(\Delta)$, denote $\left(\frac{\partial f(t)}{\partial t}\right)_{t=0}$ by $\dot{f}$. By differentiating
\begin{align*}
\xi_j^{\alpha}(z,t)=f_{jk}^{\alpha}(\xi_k(z,t),t)=f_{jk}(\xi_k^1(z,t),...,\xi_k^n(z,t),t)
\end{align*}
with respect to $t$ and putting $t=0$, we get
\begin{align*}
\dot{\xi_j}^{\alpha}:=\sum_{\beta=1}^n \frac{\partial \xi_j^{\alpha}}{\partial \xi_k^{\beta}}\dot{\xi_k^{\beta}}+\left( \frac{\partial f_{jk}^{\alpha}(\xi_k,t)}{\partial t}\right)_{t=0}
\end{align*}
where $\xi_j^{\alpha}=\xi_j^{\alpha}(z,0)$ and $\xi_k^{\beta}=\xi_k^{\beta}(z,0)$. Therefore putting
\begin{align*}
\xi_j=\sum_{\alpha=1}^n \dot{\xi_j}^{\alpha}\frac{\partial}{\partial \xi_j^{\alpha}}
\end{align*}
for each $j$, we have
\begin{align*}
\theta_{jk}=\xi_j-\xi_k
\end{align*}
Since $\dot{\xi_j}^{\alpha}=\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial t}\right)_{t=0}$ is a $C^{\infty}$ function on $U_j$, $\xi_j$ is a $C^{\infty}$ vector field on $U_j$. So we have $\{\xi_j \} \in C^0(\mathcal{U}_0, \mathcal{A}^{0,0}(\Theta))$ and $\delta \{ \xi_j \}=-\{ \theta_{jk} \}$ where $\delta$ is the usual \u{C}ech map.
We need the following lemma.
\begin{lemma}\label{beta}
$\bar{\partial} \xi_j =\sum_{\lambda=1}^n \left( \frac{\partial \varphi^{\lambda}(z,t)}{\partial t}\right)_{t=0}\frac{\partial}{\partial z_{\lambda}}=\sum_{\lambda=1}^n \dot{\varphi}^{\lambda}\frac{\partial}{\partial z_{\lambda}}=\dot{\varphi}$ and $\dot{\Lambda}-{\sigma_j}+[\Lambda_0, \xi_j]=0$. More precisely,
\begin{align*}
\sum_{r,s=1}^n \left(\frac{\partial f_{rs} (z,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}-\sum_{\alpha,\beta=1}^n \left(\frac{\partial g_{\alpha\beta}^j(\xi_j,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial \xi_j^{\alpha}} \wedge \frac{\partial}{\partial \xi_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s},\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=0
\end{align*}
equivalently,
\begin{align*}
(*)\sum_{r,s} \dot{f_{rs}} \frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}-\sum_{\alpha,\beta=1}^n \dot{g}_{\alpha\beta}^j \frac{\partial}{\partial \xi_j^{\alpha}} \wedge \frac{\partial}{\partial \xi_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s},\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=0
\end{align*}
\end{lemma}
\begin{proof}
By differentiating
\begin{align*}
\bar{\partial} \xi_j^{\alpha}(z,t)=\sum_{\lambda=1}^n \varphi^{\lambda}(z,t)\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}
\end{align*}
with respect to $t$, and putting $t=0$, we obtain
\begin{align*}
\bar{\partial}\dot{\xi}_j^{\alpha}=\sum_{\lambda=1}^{n} \dot{\varphi}^{\lambda}\frac{\partial \xi_j^{\alpha}(z,0)}{\partial z_{\lambda}}
\end{align*}
since $\varphi^{\lambda}(z,0)=0$. Hence
\begin{align*}
\bar{\partial} \xi_j=\sum_{\alpha=1}^n \bar{\partial} \dot{\xi}_j^{\alpha}\frac{\partial}{\partial \xi_j^{\alpha}}=\sum_{\lambda=1}^n\sum_{\alpha=1}^n \dot{\varphi}^{\lambda} \frac{\partial \xi_j^{\alpha}(z,0)}{\partial z_{\lambda}}\frac{\partial}{\partial \xi_j^{\alpha}}=\sum_{\lambda=1}^n \dot{\varphi}^{\lambda} \frac{\partial}{\partial z_{\lambda}}=\dot{\varphi}
\end{align*}
For $(*)$, we note that
\begin{align*}
\sum_{r,s=1}^n \dot{f_{rs}}\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}=\sum_{r,s,a,b=1}^n \dot{f_{rs}} \frac{\partial \xi_j^a (z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}\frac{\partial}{\partial \xi_j^a}\wedge \frac{\partial}{\partial \xi_j^b}
\end{align*}
\begin{align*}
&\sum_{r,s,c=1}^n [g_{rs}^j(\xi_j,0) \frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}, \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=\sum_{r,s,c=1} [g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r},\dot{\xi}_j^c \frac{\partial}{\partial \xi_j^c}]\wedge \frac{\partial}{\partial \xi_j^s}-g_{rs}^j(\xi_j,0)[\frac{\partial}{\partial \xi_j^s},\dot{\xi}_j^c \frac{\partial}{\partial \xi_j^c}]\wedge\frac{\partial}{\partial \xi_j^r}\\
&=\sum_{r,s,c=1}^n g_{rs}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^c}{\partial \xi_j^r}\frac{\partial}{\partial \xi_j^c}\wedge \frac{\partial}{\partial \xi_j^s}-\dot{\xi}_j^c\frac{\partial g_{rs}(\xi_j,0)}{\partial \xi_j^c}\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}+g_{rs}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^c}{\partial \xi_j^s}\frac{\partial}{\partial \xi_j^r}\wedge\frac{\partial}{\partial \xi_j^c}
\end{align*}
By considering the coefficients of $\frac{\partial}{\partial \xi_j^a}\wedge \frac{\partial}{\partial \xi_j^b}$, $(*)$ is equivalent to
\begin{align*}
(**) \sum_{r,s=1}^n \dot{f_{rs}} \frac{\partial \xi_j^a (z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}-\dot{g}_{ab}^j-\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial g_{ab}(\xi_j,0)}{\partial \xi_j^c}+\sum_{c=1}^n g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}=0
\end{align*}
On the other hand, we have
\begin{align*}
g_{ab}^j(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m)=\sum_{r,s=1}^n f_{rs}(z,t)\frac{\partial \xi_j^a(z,t)}{\partial z_r}\frac{\partial \xi_j^b(z,t)}{\partial z_s}
\end{align*}
By taking the derivative with respect to $t$ and putting $t=0$, we have
\begin{align*}
\frac{\partial g_{ab}^j(\xi_j,0)}{\partial \xi_j^1}\dot{\xi}_j^1+\cdots + \frac{\partial g_{ab}^j(\xi_j,0)}{\partial \xi_j^n}\dot{\xi}_j^n+\dot{g}_{ab}^j=\sum_{r,s=1}^n\dot{f}_{rs}\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+f_{rs}(z,0)( \frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s})
\end{align*}
Hence $(**)$ is equivalent to
\begin{align*}
\sum_{c=1}^n g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}=\sum_{r,s=1}^n f_{rs}(z,0)\frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+ f_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s}
\end{align*}
Indeed,
\begin{align*}
\sum_{c=1}^n g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}&=\sum_{r,s,c=1}^n f_{rs}(z,0)\frac{\partial \xi_j^c(z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+ f_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \xi_j^c(z,0)}{\partial z_s}\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}\\
&=\sum_{r,s=1}^n f_{rs}(z,0)\frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+ f_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s}
\end{align*}
\end{proof}
Going back to our proof of Theorem \ref{n}, we defined the isomorphism between the two hypercohomology groups $(b,a)\mapsto ([\Lambda,c]-b)$ where $-\delta c=a$ in the discussion preceding Theorem \ref{n}. We take $(b,a)=(\{\sigma_j\},\{\theta_{jk}\})$ and $c=\{\xi_j\}$. Since $-\delta \{\xi_j\}=\{\theta_{jk}\}$, we have $-\delta c=a$. Then, under the isomorphism, $(\{\sigma_j\},\{\theta_{jk}\})$ is mapped to $([\Lambda,\{\xi_j\}]-\{\sigma_j\}, \bar{\partial}\{\xi_j\})$, which is $(-\dot{\Lambda}, \dot{\varphi})$ by Lemma \ref{beta}.
\end{proof}
\section{Integrability condition}\
We showed that, given a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$, a deformation $(M_t,\Lambda_t)$ of $M$ near $(M_0,\Lambda_0)$ is represented by a vector $(0,1)$-form $\varphi(t)$ and a bivector field $\Lambda(t)$ on $M$ with $\varphi(0)=0$ and $\Lambda(0)=\Lambda_0$ satisfying the conditions: $(1)\,[\Lambda(t),\Lambda(t)]=0$, $(2)\,\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$, and $(3)\,\bar{\partial} \varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$.
Conversely, we show that on a holomorphic Poisson manifold $(M,\Lambda_0)$, a vector $(0,1)$-form $\varphi$ and a bivector field $\Lambda$ on $M$ such that $\varphi$ and $\Lambda_0+\Lambda$ satisfy the integrability condition define another holomorphic Poisson structure on $M$.
Let $\varphi=\sum_{\lambda,v=1}^n \varphi^{\lambda}_{v}(z)d\bar{z}^v\frac{\partial}{\partial z_{\lambda}}$ be a $C^{\infty}$ vector $(0,1)$-form and $\Lambda$ be a $C^{\infty}$ bivector field on a holomorphic Poisson manifold $(M,\Lambda_0)$, and suppose $\det(\delta_v^{\lambda}-\sum_{\mu} \varphi_{v}^{\mu}\overline{\varphi_{\mu}^{\lambda}})\ne 0$. We assume that $\varphi$ and $\Lambda$ satisfy the integrability condition\footnote{If we replace $\varphi$ by $-\varphi$, then $(1),(2)$, and $(3)$ are equivalent to
\begin{align*}
L(\Lambda+\varphi)+\frac{1}{2}[\Lambda+\varphi,\Lambda+\varphi]=0 \,\,\,\,\text{where}\,\,L=\bar{\partial}+[\Lambda_0,-]
\end{align*}
which is a solution of the Maurer-Cartan equation of the differential graded Lie algebra $(\mathfrak{g}=\bigoplus_{i\geq 0} \mathfrak{g}^i=\bigoplus_{p+q-1=i, q \geq1} A^{0,p}(M,\wedge^q T_M),L,[-,-])$ (see Appendix \ref{appendixc}). In Part \ref{part2} of the thesis, we prove that this differential graded Lie algebra controls deformations of a holomorphic Poisson manifold $(M,\Lambda_0)$ in the language of functors of Artin rings.}
\begin{align*}
&(1)[\Lambda_0+\Lambda,\Lambda_0+\Lambda]=0\\
&(2)\bar{\partial} (\Lambda_0+\Lambda)-[\Lambda_0+\Lambda,\varphi]=0\\
&(3)\bar{\partial}\varphi-\frac{1}{2}[\varphi,\varphi]=0
\end{align*}
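For completeness, the equivalence asserted in the footnote can be checked by separating bidegrees; this check is not carried out in the text, and it uses only $\bar{\partial}\Lambda_0=0$ and $[\Lambda_0,\Lambda_0]=0$. Writing $\psi=-\varphi$, we have
\begin{align*}
L(\Lambda+\psi)+\frac{1}{2}[\Lambda+\psi,\Lambda+\psi]=&\underbrace{\frac{1}{2}[\Lambda_0+\Lambda,\Lambda_0+\Lambda]}_{\in A^{0,0}(M,\wedge^3 T)}+\underbrace{\bar{\partial} (\Lambda_0+\Lambda)-[\Lambda_0+\Lambda,\varphi]}_{\in A^{0,1}(M,\wedge^2 T)}\\
&-\underbrace{\left(\bar{\partial}\varphi-\frac{1}{2}[\varphi,\varphi]\right)}_{\in A^{0,2}(M,T)},
\end{align*}
so the Maurer-Cartan equation holds if and only if each bidegree component vanishes, that is, if and only if $(1)$, $(2)$, and $(3)$ hold.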
Then by the Newlander-Nirenberg theorem (\cite{New57}, \cite{Kod05}), condition $(3)$ gives a finite open covering $\{U_j\}$ of $M$ and $C^{\infty}$-functions $\xi_j^{\alpha}=\xi_j^{\alpha}(z),\alpha=1,...,n$, on each $U_j$ such that $\xi_j:z\to \xi_j(z)=(\xi_j^1(z),...,\xi_j^n(z))$ gives complex coordinates on $U_j$ and $\{\xi_1,...,\xi_j,...\}$ defines another complex structure on $M$, which we denote by $M_{\varphi}$. By Theorem \ref{m}, conditions $(1)$ and $(2)$ give another holomorphic Poisson structure $(\Lambda_0+\Lambda)^{2,0}$ on $M_{\varphi}$, where $(\Lambda_0+\Lambda)^{2,0}$ denotes the $(2,0)$-part of $\Lambda_0+\Lambda$ with respect to the complex structure induced by $\varphi$.
\begin{example}[Hitchin-Goto family]
Hitchin showed the following theorem.
\begin{thm}
Let $(M,\sigma)$ be a holomorphic Poisson manifold which satisfies the $\partial\bar{\partial}$-lemma. Then any class $\sigma([\omega])\in H^1(M,T)$ for $[\omega]\in H^1(M,T^*)$ is tangent to a deformation of complex structure induced by $\phi(t)=\sigma(\alpha)$ where $\alpha=t\omega+\partial (t^2\beta_2+t^3\beta_3+\cdots)$ for $(0,1)$-forms $\beta_i$ with respect to the original complex structure.
\end{thm}
\begin{proof}
See \cite{Hit12}, Theorem 1.
\end{proof}
Hitchin also showed that for each $\phi(t)$, there is a holomorphic Poisson structure $\sigma_t$ with respect to $M_{\phi(t)}$. We construct a Poisson analytic family $(\mathcal{M},\Lambda)$ such that $(M_t,\Lambda_t)=(M_{\phi(t)},\sigma_t)$ by showing that $(\phi(t),\sigma)$ satisfies the integrability condition.
\begin{lemma}\label{ab}
Let $\sigma \in C^{\infty}(\wedge^2 T_M)$ be a bivector field on a complex manifold $M$ such that $[\sigma,\sigma]=0$. Then we have $[\sigma,\sigma(\partial \beta)]=0$, where $\beta$ is a $(0,1)$-form.
\end{lemma}
\begin{proof}
Write $\sigma=\sum_{l,k=1}^n \sigma^{lk}\frac{\partial}{\partial z_l}\wedge \frac{\partial}{\partial z_k}$ and $\beta= \sum_{i=1}^n f_id\bar{z_i}$. Then $\partial \beta=\sum_{i,j=1}^n \frac{\partial f_i}{\partial z_j} dz_j\wedge d\bar{z_i}$ and $\sigma(\partial \beta)=\sum_{i,l,k} \sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}d\bar{z_i}-\sigma^{lk}\frac{\partial f_i}{\partial z_k}\frac{\partial}{\partial z_l}d\bar{z_i}=\sum_{i,l,k} 2\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}d\bar{z_i}$. So it is sufficient to show that
\begin{align*}
\sum_{p,q,l,k} [\sigma^{pq}\frac{\partial}{\partial z_p}\wedge \frac{\partial}{\partial z_q},\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}]=0
\end{align*}
Indeed,
\begin{align*}
&\sum_{p,q,l,k} [\sigma^{pq}\frac{\partial}{\partial z_p}\wedge \frac{\partial}{\partial z_q},\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}]=\sum_{p,q,l,k} [\sigma^{pq}\frac{\partial}{\partial z_p},\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}]\wedge \frac{\partial}{\partial z_q}-\sigma^{pq}[\frac{\partial}{\partial z_q},\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}]\wedge \frac{\partial}{\partial z_p}\\
&=\sum_{p,q,l,k} \sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_p}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_q}+\sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial }{\partial z_q}-\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial \sigma^{pq}}{\partial z_k}\frac{\partial}{\partial z_p}\wedge \frac{\partial}{\partial z_q}\\
&\quad-\sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_q}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_p}-\sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_q\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_p}.
\end{align*}
Let us consider the second-derivative terms:
\begin{align*}
\sum_{p,q,l,k} \sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial }{\partial z_q}-\sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_q\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_p}=2\sum_{p,q,l,k} \sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial }{\partial z_q}.
\end{align*}
The coefficient of $\frac{\partial}{\partial z_a}\wedge \frac{\partial}{\partial z_b}$ in $\sum_{p,q,l,k} \sigma^{pq}\sigma^{lk}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial }{\partial z_q}$ is
\begin{align*}
\sum_{p,l} \sigma^{pb}\sigma^{la}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}-\sum_{p,l} \sigma^{pa}\sigma^{lb}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}=\sum_{p,l} \sigma^{pb}\sigma^{la}\frac{\partial^2 f_{i}}{\partial z_p \partial z_l}-\sum_{p,l} \sigma^{la}\sigma^{pb}\frac{\partial^2 f_{i}}{\partial z_l \partial z_p}=0.
\end{align*}
So we have
\begin{align*}
&\sum_{p,q,l,k} [\sigma^{pq}\frac{\partial}{\partial z_p}\wedge \frac{\partial}{\partial z_q},\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}]\\
&=\sum_{p,q,l,k} \sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_p}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_q}-\sigma^{lk}\frac{\partial f_i}{\partial z_l}\frac{\partial \sigma^{pq}}{\partial z_k}\frac{\partial}{\partial z_p}\wedge \frac{\partial}{\partial z_q}-\sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_q}\frac{\partial f_i}{\partial z_l}\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_p}\\
&=\sum_{q,k,l}\left( \sum_p \sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_p}\frac{\partial f_i}{\partial z_l}-\sigma^{lp}\frac{\partial f_i}{\partial z_l}\frac{\partial \sigma^{kq}}{\partial z_p}-\sigma^{qp}\frac{\partial \sigma^{lk}}{\partial z_p}\frac{\partial f_i}{\partial z_l}\right)\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_q}\\
&=\sum_{q,k,l}\left( \sum_p \sigma^{pq}\frac{\partial \sigma^{lk}}{\partial z_p}\frac{\partial f_i}{\partial z_l}+\sigma^{pl}\frac{\partial f_i}{\partial z_l}\frac{\partial \sigma^{kq}}{\partial z_p}+\sigma^{kp}\frac{\partial \sigma^{lq}}{\partial z_p}\frac{\partial f_i}{\partial z_l}\right)\frac{\partial}{\partial z_k}\wedge \frac{\partial}{\partial z_q}=0
\end{align*}
by Lemma \ref{formula}.
\end{proof}
Now we assume that $\phi(t)=\sigma(\alpha)$ constructed by Hitchin converges for $t\in \Delta \subset \mathbb{C}$. We know that $\psi(t)=-\phi(t)$ satisfies $\bar{\partial}\psi(t)-\frac{1}{2}[\psi(t),\psi(t)]=0$. We can consider $\phi:=\phi(t)$ as a $C^{\infty}$ vector $(0,1)$-form on $M\times \Delta$. Then by the Newlander-Nirenberg theorem (\cite{New57}, \cite{Kod05} p.268), we can give a holomorphic coordinate system on $M\times \Delta$ induced by $\phi$ (more precisely, by $-\phi$). Let us denote the complex manifold induced by $\phi$ by $\mathcal{M}$. Then $\omega:\mathcal{M}\to \Delta$ is a family of compact complex manifolds. If we choose a sufficiently fine locally finite open covering $\{U_j\}$ of $M$, we have $n+1$ holomorphic coordinates $\xi_j^{\beta}(z,t),\beta=1,...,n$, and $t$ on each $U_j\times \Delta$, and the map
\begin{equation*}
\xi_j:(z,t)\to(\xi_j^1(z,t),...,\xi_j^{n}(z,t),t)
\end{equation*}
gives local complex coordinates of $\mathcal{M}$ on $U_j\times \Delta$.
We can think of $\sigma$ on $M$ as a $C^{\infty}$ bivector field on $M\times \Delta$ (more precisely, $\sigma\oplus 0$). We note that the $(2,0)$-part $\Lambda$ of $\sigma\in C^{\infty}(\wedge^2 T_M)$ on $M\times \Delta$ is holomorphic with respect to the complex structure $\mathcal{M}$, in other words with respect to the coordinate systems $\xi_j(z,t)$, if and only if it satisfies
\begin{align*}
\bar{\partial} \sigma +[\sigma,\phi(t)]=0
\end{align*}
Since $\sigma$ satisfies the equation by Lemma \ref{ab}, $\Lambda$ is a holomorphic bivector field on $\mathcal{M}$ and $\Lambda_t$ induces a holomorphic Poisson structure on $M_{\phi(t)}$ for each $t$. If we write $\sigma=\sum_{\alpha,\beta=1}^n \sigma^{\alpha\beta}(z_j)\frac{\partial}{\partial z_j^{\alpha}} \wedge \frac{\partial}{\partial z_j^{\beta}}$ on $U_j\times \Delta$, then in the new complex coordinate systems it becomes $\Lambda=\sum_{p,q,\alpha,\beta=1}^n \sigma^{\alpha\beta}(z_j)\frac{\partial \xi_j^p}{\partial z_j^{\alpha}}\frac{\partial \xi_j^q}{\partial z_j^{\beta}}\frac{\partial}{\partial \xi_j^p}\wedge \frac{\partial}{\partial \xi_j^q}$. So we have a Poisson analytic family $(\mathcal{M},\Lambda)$. For each $t$, we have
\begin{align*}
\Lambda_t=\sum_{p,q,\alpha,\beta=1}^n \sigma^{\alpha\beta}(z_j)\frac{\partial \xi_j^p(z_j,t)}{\partial z_j^{\alpha}}\frac{\partial \xi_j^q(z_j,t)}{\partial z_j^{\beta}}\frac{\partial}{\partial \xi_j^p}\wedge \frac{\partial}{\partial \xi_j^q}
\end{align*}
On the other hand, Hitchin defines a holomorphic Poisson structure on $M_{\phi(t)}$ in the following way:
\begin{align*}
\sigma_t(f,g)=\sigma(\partial f,\partial g)
\end{align*}
for $f,g$ local holomorphic functions with respect to the complex structure at $t$. So we have $\sigma_t(\xi_j^p(z,t),\xi_j^q(z,t))=\sum_{\alpha,\beta=1}^n \sigma^{\alpha\beta}(z_j)\frac{\partial \xi_j^p(z_j,t)}{\partial z_j^{\alpha}}\frac{\partial \xi_j^q(z_j,t)}{\partial z_j^{\beta}}$. This implies that $(\mathcal{M}_t,\Lambda_t)=(M_{\phi(t)},\sigma_t)$. Since $\Lambda$ does not depend on $t$, the component in the Poisson direction under the Poisson Kodaira-Spencer map $\varphi_0: T_0\Delta\to HP^2(M,\sigma)$ is $0$ by Theorem \ref{n}. More precisely, $\varphi_0(\frac{\partial}{\partial t})=(\sigma([\omega]),0)$.
\end{example}
\chapter{Theorem of Existence for holomorphic Poisson structures}\label{chapter3}
\section{Theorem of existence}
In this chapter, we prove the theorem of existence for holomorphic Poisson deformations under the assumption (\ref{assumption}), as an analogue of the theorem of existence for deformations of complex structures.
\subsection{Statement of the theorem}\
Before we state the theorem of existence, we discuss the assumption (\ref{assumption})\footnote{Due to my unfamiliarity with analysis, I do not know whether the assumption can be relaxed.} that we use in the proof of the theorem of existence. Let $(M,\Lambda)$ be a compact holomorphic Poisson manifold. First we note that the differential operator $L=\bar{\partial} +[\Lambda,-]$ is elliptic, and so we have operators $L^*, H, G$ and $\Box=LL^*+L^*L$ (see Appendix \ref{appendixd}).
We introduce the H\"{o}lder norms on the spaces $A^p=A^{0,p-1}(M,T)\oplus \cdots \oplus A^{0,0}(M, \wedge^p T)$. To do this, we fix a finite open covering $\{U_j\}$ of $M$ such that $(z_j)$ are coordinates on $U_j$. Let $\varphi\in A^p$,
\begin{align*}
\varphi=\sum_{r+s=p, s\geq 1} \varphi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)d\bar{z}_j^{\alpha_1}\wedge \cdots \wedge d\bar{z}_j^{\alpha_r}\wedge \frac{\partial}{\partial z_j^{\beta_1}}\wedge\cdots \wedge\frac{\partial}{\partial z_j^{\beta_s}}
\end{align*}
Let $k\in \mathbb{Z},k\geq 0,\alpha\in \mathbb{R},0<\alpha<1$. Let $h=(h_1,...,h_{2n}),h_i\geq 0,\sum_{i=1}^{2n} h_i=|h|$ where $n=\dim\, M$. Then denote
\begin{align*}
D_j^h=\left(\frac{\partial}{\partial x_j^1}\right)^{h_1}\cdots \left(\frac{\partial}{\partial x_j^{2n}}\right)^{h_{2n}},\,\,\,\,\,z_j^{\alpha}=x_j^{2\alpha-1}+ix_j^{2\alpha}
\end{align*}
Then the H\"{o}lder norm $||\varphi||_{k+\alpha}$ is defined as follows:
\begin{align*}
||\varphi||_{k+\alpha}=\max_j \{ \sum_{h, |h|\leq k}\left( \sup_{z\in U_j}|D_j^h \varphi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|\right)+\sup_{y,z\in U_j,|h|=k}
\frac{|D_j^h \varphi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(y)-D_j^h \varphi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|}{|y-z|^{\alpha}} \},
\end{align*}
where the supremum is taken over all $\alpha_1,...,\alpha_r,\beta_1,...,\beta_s$.
Our assumption is the following. For any $\varphi\in A^2$,
\begin{equation}\label{assumption}
||\varphi||_{k+\alpha}\leq C(|| \Box \varphi||_{k-2+\alpha}+||\varphi||_0)
\end{equation}
where $k\geq 2$, $C$ is a constant independent of $\varphi$, and $||\varphi||_0=\max_{j,\alpha_1,...,\beta_s} \sup_{z\in U_j} |\varphi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|$.
Now we state the theorem of existence of holomorphic Poisson deformations.
\begin{thm}[Theorem of Existence]\label{theorem of existence}
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold satisfying $($\ref{assumption}$)$ and suppose that $HP^3(M,\Lambda_0)=0$. Then there exists a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=(M,\Lambda_0)$
\item $\varphi_0:\frac{\partial}{\partial t}\to \left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}$ with $(M_t,\Lambda_t)=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $HP^2(M,\Lambda_0)$: $T_0 B\xrightarrow{\varphi_0} HP^2(M,\Lambda_0)$.
\end{enumerate}
\end{thm}
Assume that there is a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ satisfying $(1)$ and $(2)$. Take a sufficiently small $\Delta$ with $0\in \Delta \subset B$, and let $\varphi(t)=\sum_{\lambda=1}^{n} \varphi^{\lambda}(z,t)\frac{\partial}{\partial z_{\lambda}}$ and $\Lambda(t)$ be the $C^{\infty}$ vector $(0,1)$-form and bivector field on $M$ determined by (\ref{b}) and (\ref{f}) from $(\mathcal{M}_{\Delta},\Lambda_{\Delta},\Delta, \omega)$. Then we have $[\Lambda(t),\Lambda(t)]=0$, $\bar{\partial}\Lambda(t)-[\Lambda(t),\varphi(t)]=0$, $\bar{\partial}\varphi(t)=\frac{1}{2}[\varphi(t),\varphi(t)]$, and $\varphi(0)=0,\Lambda(0)=\Lambda_0$. If we put
\begin{align*}
(\dot{\varphi}_{\lambda},-\dot{\Lambda}_{\lambda})=(\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}, -\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0}),\,\,\,\,\, \lambda=1,...,m,
\end{align*}
then $\{(\dot{\varphi}_{1},-\dot{\Lambda}_{1}),...,(\dot{\varphi}_{m},-\dot{\Lambda}_m)\}$ forms a basis of $HP^2(M,\Lambda_0)$.
Conversely, given $(\eta_{\lambda},\pi_{\lambda})\in A^{0,1}(M,T)\oplus A^{0,0}(M,\wedge^2 T)$ for $\lambda=1,...,m$, such that $\{(\eta_{1},\pi_{1}),...,(\eta_{m},\pi_{m})\}$ forms a basis of $HP^2(M,\Lambda_0)$, assume that there is a family $\{(\varphi(t),\Lambda(t))|t\in \Delta \}$ of $C^{\infty}$ vector $(0,1)$-forms $\varphi(t)$ and $(2,0)$ bivectors $\Lambda(t)$ on $M$ with $0\in \Delta \subset \mathbb{C}^m$, which satisfy
\begin{enumerate}
\item $[\Lambda(t),\Lambda(t)]=0$\\
\item $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$\\
\item $\bar{\partial}\varphi(t)=\frac{1}{2}[\varphi(t),\varphi(t)]$
\end{enumerate}
and the initial conditions
\begin{align*}
\varphi(0)=0, \Lambda(0)=\Lambda_0,(\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}, -\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0})=(\eta_{\lambda},\pi_{\lambda}),\,\,\,\,\, \lambda=1,...,m,
\end{align*}
Since $\Delta$ is assumed to be sufficiently small, we may assume that $\varphi(t)=\sum_{\lambda}\sum_{v} \varphi^{\lambda}_v d\bar{z}^v\frac{\partial}{\partial z_{\lambda}}$ satisfies
\begin{align*}
\det(\delta^{\lambda}_v-\sum_{\mu}\varphi^{\mu}_v(t)\overline{\varphi^{\lambda}_{\mu}(t)})\ne 0.
\end{align*}
Therefore, by the Newlander-Nirenberg theorem (\cite{New57}, \cite{Kod05} p.268), each $\varphi(t)$ determines a complex structure $M_{\varphi(t)}$ on $M$. Conditions $(2)$ and $(3)$ imply that the $(2,0)$-part $\Lambda(t)^{2,0}$ of $\Lambda(t)$ is a holomorphic Poisson structure on $M_{\varphi(t)}$. If the family $\{M_{\varphi(t)},\Lambda(t)^{2,0}\}$ is a Poisson analytic family, it satisfies conditions (1) and (2) in Theorem \ref{theorem of existence} by our assumption and Theorem \ref{n}. We construct such a family $\{(\varphi(t),\Lambda(t))|t\in \Delta \}$ and then show that $\{(M_{\varphi(t)},\Lambda(t)^{2,0})\}$ is a Poisson analytic family (see \ref{subsection}).
\begin{remark}
Constructing $\{\varphi(t),\Lambda(t)\}$ is equivalent to constructing $\{-\varphi(t),\Lambda(t)\}$. By replacing $\varphi(t)$ by $-\varphi(t)$, we construct $\varphi(t)$ satisfying
\begin{enumerate}
\item $[\Lambda(t),\Lambda(t)]=0$\\
\item $\bar{\partial} \Lambda(t)+[\Lambda(t),\varphi(t)]=0$\\
\item $\bar{\partial}\varphi(t)+\frac{1}{2}[\varphi(t),\varphi(t)]=0$
\end{enumerate}
and the initial conditions
\begin{align*}
\varphi(0)=0, \Lambda(0)=\Lambda_0,(-\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}, -\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0})=(-\eta_{\lambda},\pi_{\lambda}),\,\,\,\,\, \lambda=1,...,m,
\end{align*}
And we note that $(1),(2),(3)$ are equivalent to
\begin{center}
$\bar{\partial} (\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=0$
\end{center}
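Indeed, the equivalence is seen by separating the single equation by bidegree:
\begin{align*}
\bar{\partial} (\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=&\underbrace{\frac{1}{2}[\Lambda(t),\Lambda(t)]}_{\in A^{0,0}(M,\wedge^3 T)}+\underbrace{\bar{\partial} \Lambda(t)+[\Lambda(t),\varphi(t)]}_{\in A^{0,1}(M,\wedge^2 T)}\\
&+\underbrace{\bar{\partial} \varphi(t)+\frac{1}{2}[\varphi(t),\varphi(t)]}_{\in A^{0,2}(M,T)},
\end{align*}
so the left-hand side vanishes if and only if each of $(1),(2),(3)$ holds.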
\end{remark}
We construct $\alpha(t)=\varphi(t)+\Lambda(t)$ in the following section.
\subsection{Construction of $\alpha(t)=\varphi(t)+\Lambda(t)$}\
We use the Kuranishi method. We need the following lemmas.
\begin{lemma}\label{lemma5.2.2}
For $\varphi,\psi\in A^2$, we have $||[\varphi,\psi]||_{k+\alpha}\leq C||\varphi||_{k+1+\alpha}||\psi||_{k+1+\alpha}$, where $C$ is independent of $\varphi$ and $\psi$.
\end{lemma}
\begin{lemma}\label{lemma5.2.3}
For $\varphi\in A^2$, we have $||G\varphi||_{k+\alpha}\leq C||\varphi||_{k-2+\alpha},k\geq 2$, where $C$ depends only on $k$ and $\alpha$, not on $\varphi$.
\end{lemma}
\begin{proof}
This follows from assumption (\ref{assumption}). See \cite{Mor71} p.160, Proposition 2.3.
\end{proof}
Let us now construct $\varphi(t)$ and $\Lambda(t)$. We want to construct $\alpha(t):=\varphi(t)+\Lambda(t)=\Lambda_0+\sum_{\mu=1}^{\infty} (\varphi_{\mu}(t)+\Lambda_{\mu}(t))$, where
\begin{align*}
\varphi_{\mu}(t)+\Lambda_{\mu}(t)=\sum_{v_1+\cdots+v_m=\mu} (\varphi_{v_1\cdots v_m}+\Lambda_{v_1\cdots v_m})t_1^{v_1}\cdots t_m^{v_m}
\end{align*}
where $\varphi_{v_1\cdots v_m}+\Lambda_{v_1\cdots v_m}\in A^{0,1}(M,T)\oplus A^{0,0}(M,\wedge^2 T)$ such that
\begin{align}
&\bar{\partial} \alpha(t)+\frac{1}{2}[\alpha(t),\alpha(t)]=0\\
&\alpha_1(t)=\varphi_1(t)+\Lambda_1(t)=\sum_{v=1}^m (\eta_v+\pi_v)t_v,
\end{align}
where $\{\eta_v+\pi_v\}$ is a basis for $\mathbb{H}^2\cong HP^2(M,\Lambda_0)$.
Let $\beta(t)=\alpha(t)-\Lambda_0$. Then, since $\bar{\partial}\Lambda_0=0$ and $[\Lambda_0,\Lambda_0]=0$, the equation $\bar{\partial} \alpha(t)+\frac{1}{2}[\alpha(t),\alpha(t)]=0$ is equivalent to
\begin{align*}
L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=0.
\end{align*}
Consider the equation
\begin{align*}
(*) \beta(t)=\beta_1(t)-\frac{1}{2}L^*G[\beta(t),\beta(t)],
\end{align*}
where $\beta_1(t)=\alpha_1(t)$.
$(*)$ has a unique formal power series solution $\beta(t)$. Indeed,
\begin{align*}
&\beta_2(t)=-\frac{1}{2}L^*G[\beta_1(t),\beta_1(t)]\\
&\beta_3(t)=-\frac{1}{2}L^*G([\beta_1(t),\beta_2(t)]+[\beta_2(t),\beta_1(t)])\\
&\beta_{\mu}(t)=-\frac{1}{2}L^* G\left( \sum_{\lambda=1}^{\mu-1}[\beta_{\lambda}(t),\beta_{\mu-\lambda}(t)]\right)
\end{align*}
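Summing the recursion over $\mu$ shows that $\beta(t)=\sum_{\mu=1}^{\infty}\beta_{\mu}(t)$ formally satisfies $(*)$:
\begin{align*}
\sum_{\mu=2}^{\infty}\beta_{\mu}(t)=-\frac{1}{2}L^*G\sum_{\mu=2}^{\infty}\sum_{\lambda=1}^{\mu-1}[\beta_{\lambda}(t),\beta_{\mu-\lambda}(t)]=-\frac{1}{2}L^*G[\beta(t),\beta(t)],
\end{align*}
and conversely, comparing the terms of homogeneous degree $\mu$ in $t$ on both sides of $(*)$ determines each $\beta_{\mu}(t)$ uniquely from $\beta_1(t),...,\beta_{\mu-1}(t)$.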
\begin{proposition}
For small $|t|$, $\beta(t)=\sum_{\mu=1}^{\infty} \beta_{\mu}(t)$ converges in the norm $||\cdot||_{k+\alpha}$.
\end{proposition}
\begin{proof}
See \cite{Mor71} p.162 Proposition 2.4.
\end{proof}
\begin{proposition}
The $\beta(t)$ satisfies $L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=0$ if and only if $H[\beta(t),\beta(t)]=0$, where $H:A^3=A^{0,2}(M,T)\oplus A^{0,1}(M,\wedge^2 T)\oplus A^{0,0}(M,\wedge^3 T)\to \mathbb{H}^3\cong HP^3(M,\Lambda_0)$ is the orthogonal projection to the harmonic subspace of $A^3$.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Suppose $L\beta(t)=-\frac{1}{2}[\beta(t),\beta(t)]$. If we take $H$ on both sides, we have
\begin{align*}
0=HL\beta(t)=-\frac{1}{2}H[\beta(t),\beta(t)]
\end{align*}
$(\Leftarrow)$ Suppose $H[\beta(t),\beta(t)]=0$. Set $\psi(t)=L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=\bar{\partial} \beta(t)+[\Lambda_0,\beta(t)]+\frac{1}{2}[\beta(t),\beta(t)]$. Then
\begin{align*}
2\psi(t)&=2L\beta(t)+[\beta(t),\beta(t)]\\
&=-LL^*G[\beta(t),\beta(t)]+[\beta(t),\beta(t)]\\
&=-LL^*G[\beta(t),\beta(t)]+\Box G[\beta(t),\beta(t)]\\
&=-LL^*G[\beta(t),\beta(t)]+(LL^*+L^*L)G[\beta(t),\beta(t)]\\
&=L^*LG[\beta(t),\beta(t)]=L^*GL[\beta(t),\beta(t)]\\
&=2L^*G[L\beta(t),\beta(t)]
\end{align*}
where the second equality uses $(*)$ and $L\beta_1(t)=0$ ($\beta_1(t)$ being harmonic), the third equality uses the Hodge decomposition $[\beta(t),\beta(t)]=H[\beta(t),\beta(t)]+\Box G[\beta(t),\beta(t)]$ together with the assumption $H[\beta(t),\beta(t)]=0$, and the last equality holds since $(A[1],L,[-,-])$ is a differential graded Lie algebra (see Proposition \ref{d}).
So we have $\psi(t)=L^*G[\psi(t),\beta(t)]$. And by Lemma \ref{lemma5.2.2} and Lemma \ref{lemma5.2.3}, we have
\begin{align*}
||\psi(t)||_{k+\alpha}&=||L^*G[\psi(t),\beta(t)]||_{k+\alpha}\\
&\leq C_1||G[\psi(t),\beta(t)]||_{k+1+\alpha}\\
& \leq C_1 C_{k,\alpha} ||[\psi(t),\beta(t)]||_{k-1+\alpha}\\
& \leq C_1C_{k,\alpha}C||\psi(t)||_{k+\alpha}||\beta(t)||_{k+\alpha}
\end{align*}
Choose $|t|$ so small that $||\beta(t)||_{k+\alpha}C_1C_{k,\alpha}C<1$; this is possible since $\beta(t)$ converges and $\beta(0)=0$. Then the inequality above gives the contradiction $||\psi(t)||_{k+\alpha}<||\psi(t)||_{k+\alpha}$ unless $\psi(t)=0$, so $\psi(t)=0$ for all small $t$.
\end{proof}
\begin{proposition}\label{pr}
$\alpha(t)=\varphi(t)+\Lambda(t)$ is $C^{\infty}$ in $(z,t)$ and holomorphic in $t$.
\end{proposition}
\begin{proof}
See \cite{Mor71} p.163 Proposition 2.6.
\end{proof}
With the assumption $HP^3(M,\Lambda_0)=0$, we have $H[\beta(t),\beta(t)]=0$, so $\beta(t)$ solves the integrability condition for small $|t|$, say for $t\in \Delta_{\epsilon}$.
\subsection{Construction of a Poisson analytic family}\label{subsection}\
We have constructed a family $\{(\varphi(t),\Lambda(t))|t\in \Delta_{\epsilon}\}$ of $C^{\infty}$ vector $(0,1)$-forms $\varphi(t)=\sum_{\lambda=1}^n\sum_{v=1}^n \varphi_v^{\lambda}(z,t)d\bar{z}^v\frac{\partial}{\partial z^{\lambda}}$ and $C^{\infty}$ $(2,0)$ bivectors $\Lambda(t)=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}(z,t)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}$ satisfying the integrability condition $[\Lambda(t),\Lambda(t)]=0,\bar{\partial}\Lambda(t)=[\Lambda(t),\varphi(t)],\bar{\partial}\varphi(t)=\frac{1}{2}[\varphi(t),\varphi(t)]$ and the initial conditions $\varphi(0)=0, \Lambda(0)=\Lambda_0, (\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}, -\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0})=(\eta_{\lambda},\pi_{\lambda}),\lambda=1,...,m$, where $\varphi_v^{\lambda}(z,t)$ and $g_{\alpha\beta}(z,t)$ are $C^{\infty}$ functions of $z^1,...,z^n,t_1,...,t_m$ and holomorphic in $t_1,...,t_m$.
Each $(\varphi(t),\Lambda(t))$ determines a holomorphic Poisson structure $(M_{\varphi(t)},\Lambda(t))$ on $M$. In order to show that $\{(M_{\varphi(t)},\Lambda(t))|t\in \Delta_{\epsilon}\}$ is a Poisson analytic family, we consider $\varphi=\varphi(t)$ as a vector $(0,1)$-form on the complex manifold $M\times \Delta_{\epsilon}$ and $\Lambda=\Lambda(t)$ as a $(2,0)$ bivector on $M\times \Delta_{\epsilon}$. Namely, we consider $\varphi(t)$ as
\begin{align*}
\varphi = \varphi(t)= \sum_{\lambda=1}^n \left(\sum_{v=1}^n \varphi_v^{\lambda} d\bar{z}^v+\sum_{\mu=1}^m \varphi_{n+\mu}^{\lambda} d\bar{t}_{\mu} \right)\frac{\partial}{\partial z^{\lambda}}+\sum_{\mu=1}^m\varphi^{n+\mu}\frac{\partial}{\partial t_{\mu}}
\end{align*}
with $\varphi^{n+\mu}=\varphi_{n+\mu}^{\lambda}=0,\mu=1,...,m$. Then, since $\varphi_v^{\lambda}=\varphi_v^{\lambda}(z,t)$ are holomorphic in $t_1,...,t_m$ (Proposition \ref{pr}), we have $\frac{\partial \varphi_v^{\lambda}}{\partial \bar{t}_{\mu}}=0$ in
\begin{align*}
\bar{\partial}\varphi=\sum_{\lambda,v=1}^n \left(\sum_{\beta=1}^n\frac{\partial \varphi_v^{\lambda}}{\partial \bar{z}^{\beta}}d\bar{z}^{\beta}+\sum_{\mu=1}^m \frac{\partial \varphi_v^{\lambda}}{\partial \bar{t}_{\mu}}d\bar{t}_{\mu}\right) \wedge d\bar{z}^v\frac{\partial}{\partial z^{\lambda}}
\end{align*}
Similarly, since $g_{\alpha\beta}(z,t)$ is holomorphic in $t_1,...,t_m$ (Proposition \ref{pr}), we have $\frac{\partial g_{\alpha\beta}}{\partial \bar{t}_{\mu}}=0$ in
\begin{align*}
\bar{\partial} \Lambda=\sum_{\alpha,\beta} \left(\sum_{v=1}^n \frac{\partial g_{\alpha\beta}}{\partial \bar{z}^v}d\bar{z}^v+\sum_{\mu=1}^m \frac{\partial g_{\alpha\beta}}{\partial \bar{t}_{\mu}}d\bar{t}_{\mu}\right)\frac{\partial}{\partial z^{\alpha}}\wedge \frac{\partial}{\partial z^{\beta}}
\end{align*}
By $\bar{\partial}\varphi(t)$ we denote the exterior differential of $\varphi(t)$ as a vector $(0,1)$-form on $M$ with fixed $t$. Then $\bar{\partial}\varphi$ coincides with $\bar{\partial}\varphi(t)$ and we obtain $[\varphi,\varphi]=[\varphi(t),\varphi(t)]$. Similarly, $\bar{\partial}\Lambda(t)$ coincides with $\bar{\partial} \Lambda$ and we obtain $[\Lambda,\varphi]=[\Lambda(t),\varphi(t)]$ and $[\Lambda,\Lambda]=[\Lambda(t),\Lambda(t)]$. Therefore, as a $C^{\infty}$ vector $(0,1)$-form and a $C^{\infty}$ $(2,0)$ bivector on $M\times \Delta_{\epsilon}$, $\varphi$ and $\Lambda$ satisfy $\bar{\partial}\varphi=\frac{1}{2}[\varphi,\varphi]$, $ \bar{\partial}\Lambda=[\Lambda,\varphi]$, and $[\Lambda,\Lambda]=0$.
Then by the Newlander-Nirenberg theorem ($\cite{New57}$, $\cite{Kod05}$ p.268), $\varphi$ defines a complex structure $\mathcal{M}$ on $M\times \Delta_{\epsilon}$, and $ \bar{\partial}\Lambda=[\Lambda,\varphi]$ and $[\Lambda,\Lambda]=0$ imply that the $(2,0)$-part $\Lambda^{2,0}$ of $\Lambda$ defines a holomorphic Poisson structure $(\mathcal{M},\Lambda^{2,0})$. If we choose a sufficiently fine locally finite open covering $\{U_j\}$ of $M$, and take a sufficiently small $\Delta_{\epsilon}$, we have $C^{\infty}$ functions $\xi_j^{\beta}(z,t),\beta=1,...,n+m$ on each $U_j\times \Delta_{\epsilon}$, and the map
\begin{align*}
\xi_j:(z,t)\to (\xi_j^1(z,t),...,\xi_j^{n+m}(z,t))
\end{align*}
gives local complex coordinates of $\mathcal{M}$ on $U_j\times \Delta_{\epsilon}$,
and $\xi_j^{n+\mu}(z,t)=t_{\mu}$ for $\mu=1,...,m$.
Then we have
\begin{align*}
\xi_j:(z,t)\to (\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m).
\end{align*}
Therefore
\begin{align*}
\omega:(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m)\to (t_1,...,t_m)
\end{align*}
is a holomorphic map of $\mathcal{M}$ onto $\Delta_{\epsilon}$. For each $t\in \Delta_{\epsilon}$, $\omega^{-1}(t)$ is a holomorphic Poisson manifold whose system of local complex coordinates is given by $\{\xi_j^1(z,t),...,\xi_j^n(z,t)\}$ and whose holomorphic Poisson structure is given by the $(2,0)$-part $\Lambda(t)^{2,0}$ of $\Lambda(t)$. So we have $\omega^{-1}(t)=(M_{\varphi(t)},\Lambda(t)^{2,0})$. Thus $\{(M_{\varphi(t)},\Lambda(t)^{2,0})|t\in \Delta_{\epsilon}\}$ forms a Poisson analytic family $(\mathcal{M},\Lambda^{2,0},\Delta_{\epsilon},\omega)$.
\begin{example}
Let $U_i=\{[z_0,z_1,z_2]|z_i\ne0\}$, $i=0,1,2$, be an open cover of the complex projective plane $\mathbb{P}_{\mathbb{C}}^2$. Let $x=\frac{z_1}{z_0}$ and $w=\frac{z_2}{z_0}$ be coordinates on $U_0$. Then the holomorphic Poisson structures on $U_0$ are parametrized by $t=(t_1,...,t_{10})\in \mathbb{C}^{10}$ as
\begin{align*}
(t_1+t_2x+t_3w+t_4x^2+t_5xw+t_6w^2+t_7x^3+t_8x^2w+t_9xw^2+t_{10}w^3)\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}
\end{align*}
This parametrizes all holomorphic Poisson structures on $\mathbb{P}_{\mathbb{C}}^2$ $($see $\cite{Pin11}$ Proposition 2.2$)$. Let $\Lambda_0=x\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}$ be the holomorphic Poisson structure on $\mathbb{P}_{\mathbb{C}}^2$. Then $\dim HP^2(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)=5$ and $\dim HP^3(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)=0$ $($see $\cite{Pin11}$ Example 3.5$)$, and $w^2\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $x^3\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $x^2w\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $xw^2\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$ and $w^3\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$ are representatives of the cohomology classes forming a basis of $HP^2(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)$. Let $t=(t_1,t_2,t_3,t_4,t_5)\in \mathbb{C}^5$ and let $\Lambda(t)=(x+t_1w^2+t_2x^3+t_3x^2w+t_4xw^2+t_5w^3)\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}$ be the holomorphic Poisson structure on $\mathbb{P}_{\mathbb{C}}^2\times \mathbb{C}^5$. Then $(\mathbb{P}_{\mathbb{C}}^2\times \mathbb{C}^5,\Lambda(t),\mathbb{C}^5, \omega)$, where $\omega$ is the natural projection, is a Poisson analytic family with $\omega^{-1}(0)=(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)$. Since the complex structure does not change in the family, the Poisson Kodaira-Spencer map is an isomorphism. Hence the family satisfies the theorem of existence for $(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)$. \end{example}
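As a sanity check of the example, one can verify symbolically that the bracket determined by $\Lambda_0=x\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$ on $U_0$, namely $\{f,h\}=x(f_xh_w-f_wh_x)$, satisfies the Jacobi identity; in complex dimension $2$ the condition $[\Lambda,\Lambda]=0$ is automatic since $\wedge^3 T=0$. The following sketch, with arbitrarily chosen test polynomials, is my own and not part of the thesis:

```python
# Sanity check: in complex dimension 2 any bivector field
# g(x,w) d/dx ^ d/dw has [Lambda,Lambda] = 0, so the induced bracket
# {f,h} = g*(f_x h_w - f_w h_x) obeys the Jacobi identity.  We verify
# this symbolically for Lambda_0 = x d/dx ^ d/dw from the example.
import sympy as sp

x, w = sp.symbols('x w')
g = x  # coefficient of Lambda_0 on U_0

def bracket(f, h):
    """Poisson bracket induced by g(x,w) d/dx ^ d/dw."""
    return g * (sp.diff(f, x) * sp.diff(h, w) - sp.diff(f, w) * sp.diff(h, x))

# arbitrary test polynomials
f, h, k = x**2 + w, x*w, w**3 - x
jacobiator = (bracket(f, bracket(h, k))
              + bracket(h, bracket(k, f))
              + bracket(k, bracket(f, h)))
print(sp.expand(jacobiator))  # 0
```

The same computation with any other coefficient $g(x,w)$ also returns $0$, reflecting that every bivector field on a surface is Poisson.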
\section{A concept of Kuranishi family in holomorphic Poisson category}\label{section6}
By following the lecture notes of Kuranishi \cite{Kur71}, we extend the definition of a complex analytic family over a complex space to a Poisson analytic family over a complex space and raise the question of the existence of a complete family without the assumption $HP^3(M,\Lambda)=0$, where $(M,\Lambda)$ is a holomorphic Poisson manifold.
In this section $\bold{M}$ is a compact real $C^{\infty}$ manifold. An analytic set $S$ is by definition a subset of a domain $D$ (called the ambient space of $S$) defined as the zero locus of finitely many holomorphic functions on $D$. A map $f:S\to S'$ between two analytic sets is called analytic if for each $s\in S$, $f$ can be extended to a complex analytic map from an open neighborhood of $s$ in $D$ into the ambient space of $S'$.
\begin{definition}
Let $S$ be an analytic set. By a $C^{\infty}$ family of holomorphic Poisson charts of $\bold{M}$ with a parameter in $S$ we mean
\begin{enumerate}
\item a $C^{\infty}$ map
\begin{align*}
\tilde{z}:U\times S'\to \mathbb{C}^n
\end{align*}
where $U$ (resp. $S'$) is an open subset of $\bold{M}$ (resp. of $S$), such that the map $z^t:U\to \mathbb{C}^n$ defined by $z^t(p)=\tilde{z}(p,t)$ is a holomorphic complex chart of $\bold{M}$ for each $t\in S'$.
\item A $C^{\infty}$ complex bivector field $\Lambda$ of the form $\sum_{i,j} g_{ij}(x,t)\frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial x_j}$ on $\bold{M}\times S$ \footnote{Here we mean by a $C^{\infty}$ bivector field $\Lambda$ on $\bold{M}\times S$ that for each $s\in S$, $g_{ij}(x,s)$ can be extended to $C^{\infty}$ functions on $\bold{M}\times U$ where $U$ is a neighborhood of $s$ in the ambient space of $S$.} such that $[\Lambda,\Lambda]=0$ for each $t\in S$ and the $(2,0)$-part $\Lambda_t^{2,0}$ of the restriction $\Lambda_t$ of $\Lambda$ to $\bold{M} \times t$ is a holomorphic bivector field with respect to the complex structure defined by $z^t$.
\end{enumerate}
\end{definition}
Let $\tilde{z}^{\sharp}:U\times S'\to \mathbb{C}^n\times S'$ be defined by $\tilde{z}^{\sharp}(p,t)=(\tilde{z}(p,t),t)$.
If $\tilde{w}$ is another such family with domain $V$ and over $S''$, we mean by change of charts from $\tilde{z}$ to $\tilde{w}$ the map $\tilde{w}^{\sharp}\circ \tilde{z}^{\sharp -1}$ of $\tilde{z}^{\sharp}((U\cap V)\times (S'\cap S''))$ to $\tilde{w}^{\sharp}((U\cap V)\times (S'\cap S''))$. The domain and the image of the change of charts are open subsets of $\mathbb{C}^n\times S$, and hence we may ask if the change is complex analytic.
Now we can define the notion of a Poisson analytic family of holomorphic Poisson structures on $\bold{M}$.
\begin{definition}
Let $S$ be an analytic set. By a Poisson analytic family of holomorphic Poisson structures on $\bold{M}$ with a parameter in $S$ we mean
\begin{enumerate}
\item A $C^{\infty}$ complex bivector field $\Lambda$ on $M\times S$ of the form $\sum_{i,j} g_{ij}(x,s)\frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial x_j}$ with $[\Lambda,\Lambda]=0$ for each $s\in S$.
\item a collection $(\tilde{\Phi},\Lambda)$ of $C^{\infty}$ families of holomorphic Poisson charts of $\bold{M}$ with a parameter in $S$ satisfying
\begin{enumerate}
\item If $\tilde{z}$,$\tilde{w}\in \tilde{\Phi}$, then the change of charts from $\tilde{z}$ to $\tilde{w}$ is complex analytic.
\item for any $p$ in $\bold{M}$ and any $t$ in $S$ there is a $\tilde{z}$ in $\tilde{\Phi}$ with domain $U$ and over $S'$ such that $(p,t)\in U\times S'$.
\item If $\tilde{u}$ is a $C^{\infty}$ family of holomorphic Poisson charts of $\bold{M}$ with a parameter in $S$ and if the change of charts from $\tilde{z}$ to $\tilde{u}$ is complex analytic for any $\tilde{z}$ in $\tilde{\Phi}$, then $\tilde{u}$ is in $\tilde{\Phi}$.
\end{enumerate}
\end{enumerate}
\end{definition}
If this is the case, for each fixed $t$, $\Phi_t=\{z^t:\tilde{z}\in \tilde{\Phi}\}$ is a chart covering of a holomorphic Poisson structure, say $(M_t,\Lambda_t^{2,0})$, on $\bold{M}$. $(M_t,\Lambda_t^{2,0})$ is called the holomorphic Poisson structure in $\tilde{\Phi}$ over $t$. Thus we have a collection $\{(M_t,\Lambda_t^{2,0}):t\in S\}$ of holomorphic Poisson structures on $\bold{M}$.
Let $B$ be an analytic set. Denote by $\tau$ an analytic map $B\to S$ and let $\Lambda \circ {\tau}=\sum_{i,j} g_{ij}(x,\tau(b))\frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial x_j}$. If $\tilde{z}$ is a $C^{\infty}$ family of holomorphic Poisson charts of $\bold{M}$ with domain $U$ and over $S'\subset S$, then $\tilde{z}\circ(id\times \tau):U\times B'\to \mathbb{C}^n$, where $B'=\tau^{-1}(S')$ and $id$ is the identity map of $U$, is a $C^{\infty}$ family of holomorphic Poisson charts of $\bold{M}$ with a parameter in $B$ with respect to $\Lambda \circ {\tau}$. It is clear that the collection $\{\tilde{z}\circ (id\times \tau):\tilde{z}\in \tilde{\Phi}\}$ can be enlarged to a unique Poisson analytic family of holomorphic Poisson structures $(\tilde{\Phi}\circ \tau,\Lambda\circ \tau)$ on $\bold{M}$.
\begin{definition}
The above family $(\tilde{\Phi}\circ \tau,\Lambda\circ{\tau})$ is called the Poisson analytic family induced from $(\tilde{\Phi},\Lambda)$ by $\tau$.
\end{definition}
Let $(\tilde{\Phi},\Lambda)$ and $(\tilde{\Psi},\Lambda')$ be Poisson analytic families over an analytic set $B$. Denote by $\bold{f}$ a family of diffeomorphisms of $\bold{M}$ parametrized by $B$, say $\{f^b:b\in B\}$.
\begin{definition}
We say that $\bold{f}$ induces an isomorphism from $(\tilde{\Phi},\Lambda)$ to $(\tilde{\Psi},\Lambda')$ over the identity map of $B$ if the following conditions are satisfied:
\begin{enumerate}
\item for $\tilde{w}:V\times B'\to \mathbb{C}^n$ in $\tilde{\Psi}$, let $U, B''$ be open subsets such that $f^b(U)\subset V$ for all $b\in B''$. Then $(p,b)\in U\times B''\mapsto \tilde{w}(f^b(p),b)\in \mathbb{C}^n$ is an element of $\tilde{\Phi}$.
\item the map $(p,b)\in \bold{M}\times B\mapsto f^b(p)\in \bold{M}$ is a $C^{\infty}$ map, i.e. on a neighborhood of each point it is $C^{\infty}$.
\item let $F:\bold{M}\times B\to \bold{M}\times B$ be defined by $(x,b)\to (f^b(x),b)$. Then $F_*\Lambda=\Lambda'$. In particular we have $f^b_* \Lambda_b^{2,0}=\Lambda_b^{'2,0}$ for each $b\in B$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $(M,\Lambda_0)$ be a holomorphic Poisson structure on $\bold{M}$. We say that $(\tilde{\Phi},\Lambda)$ is a Poisson analytic family of deformations of $(M,\Lambda_0)$ over $(S,s_0)$ if the holomorphic Poisson structure in $\tilde{\Phi}$ over $s_0$ is $(M,\Lambda_0)$.
\end{definition}
\begin{definition}
A Poisson analytic family $(\tilde{\Phi},\Lambda)$ of deformations of $(M,\Lambda_0)$ over $(S,s_0)$ is called complete at $s_0$ if for any pointed analytic set $(B,b_0)$ and any Poisson analytic family $(\tilde{\Psi},\Lambda')$ of deformations of $(M,\Lambda_0)$ over $(B,b_0)$, we can find an open neighborhood $B'$ of $b_0$ and a complex analytic map $\tau:(B',b_0)\to (S,s_0)$ such that $(\tilde{\Psi}|_{B'},\Lambda'|_{B'})$ is isomorphic to the family $(\tilde{\Phi}\circ \tau,\Lambda\circ \tau)$.
\end{definition}
The following problem is an analogue of Kuranishi's completeness theorem.
\begin{problem}
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold. Does there exist a complete Poisson analytic family of deformations of $(M,\Lambda_0)$?\footnote{Due to my unfamiliarity with analysis, I could not approach this problem.} \footnote{
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold. We fix a Hermitian metric on $M$ and define the operators $L^*,\Box,G$, and so forth. Let $\{\eta_v+\pi_v|v=1,...,m\}$ be a basis for $\mathbb{H}^2\cong HP^2(M,\Lambda_0)$. Assume that we have a unique convergent power series solution $\beta(t)$ of
\begin{align*}
\beta(t)=\beta_1(t) +\frac{1}{2}L^*G[\beta(t),\beta(t)],
\end{align*}
where $\beta_1(t)=\sum_{v=1}^m (\eta_v+\pi_v)t_v$. Then $\beta(t)=\alpha(t)-\Lambda_0$ satisfies
\begin{align*}
L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=0
\end{align*}
if and only if $H[\beta(t),\beta(t)]=0$. Let $\{\beta_{\lambda}|\lambda=1,...,r\}$ be an orthonormal basis of $\mathbb{H}^3$ and let $(-,-)$ be the inner product on $A^{0,0}(M,\wedge^3 T)\oplus A^{0,1}(M,\wedge^2 T)\oplus A^{0,2}(M,T)$. Then
\begin{align*}
H[\beta(t),\beta(t)]=\sum_{\lambda=1}^r ([\beta(t),\beta(t)],\beta_{\lambda})\beta_{\lambda}
\end{align*}
Hence $H[\beta(t),\beta(t)]=0$ if and only if $([\beta(t),\beta(t)],\beta_{\lambda})=0$ for $\lambda=1,...,r$. Since $\beta(t)$ is a power series in $t$ so is $([\beta(t),\beta(t)],\beta_{\lambda})=b_{\lambda}(t)$. Thus $b_{\lambda}(t)$ is holomorphic in $t$ for $\lambda=1,...,r$ and $|t|$ small $(|t|<\epsilon)$. Also $b_{\lambda}(0)=0$. Define an analytic set $S$ as follows:
\begin{align*}
S=\{t\,|\,|t|<\epsilon,\ b_{\lambda}(t)=0,\ \lambda=1,...,r\}
\end{align*}
Then $S$ is an analytic subset of $B_{\epsilon}$ containing the origin. I believe that this would be the base space of Kuranishi family for holomorphic Poisson deformations of $M$.}
\end{problem}
Lastly, we pose some natural questions which I cannot answer at this stage.
\begin{problem}
Can we establish the upper-semicontinuity theorem in a Poisson analytic family? $($See $\cite{Kod05}$ page 200$)$
\end{problem}
\begin{problem}
Let $(\mathcal{M},B,\omega)$ be a complex analytic family. Let $ b\in B$ and $\omega^{-1}(b)=M_b$ with a holomorphic Poisson structure $\Lambda_b$. What are the conditions or obstructions for the following?
\begin{center}
``There exists an open neighborhood $U$ of $b$ in $B$ such that $(\mathcal{M}|_U,U,\omega)$ can be extended to a Poisson analytic family $(\mathcal{M}|_U,\Lambda,U,\omega)$ such that $\omega^{-1}(b)=(M_b,\Lambda_b)$''.\footnote{This problem is related to the operator $L=\bar{\partial}+[\Lambda,-]$.}
\end{center}
\end{problem}
\begin{problem}
Can we establish the stability theorem for holomorphic Poisson submanifolds in the holomorphic Poisson category? $($$\cite{Kod63}$$)$
\end{problem}
\part{Infinitesimal Poisson deformations and Universal Poisson deformations of compact holomorphic Poisson manifolds}\label{part2}
In Part II of the thesis, we present infinitesimal deformations of a compact holomorphic Poisson manifold $(X,\Lambda_0)$ over an artinian local $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with residue field $\mathbb{C}$. We extend the method of \cite{Ran00} to show that given an infinitesimal Poisson deformation of $(X,\Lambda_0)$, as in Part I of the thesis, we can canonically associate an element $\phi+\Lambda\in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ satisfying the Maurer-Cartan equation $L(\phi+\Lambda)+\frac{1}{2}[\phi+\Lambda,\phi+\Lambda]=0$ of the differential graded Lie algebra $\mathfrak{g}=\bigoplus_i g_i=(\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(X,\wedge^q T),L=\bar{\partial}+[\Lambda_0,-],[-,-])$. By using the language of functors of Artin rings, we show that the differential graded Lie algebra $\mathfrak{g}$ controls infinitesimal Poisson deformations of $(X,\Lambda_0)$. In other words, we establish the following theorem.
\begin{theorem}
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold. Then the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ is controlled by the differential graded Lie algebra $\mathfrak{g}=(\bigoplus_{p+q-1=i,p\geq 0,q\geq 1}$
$A^{0,p}(X,\wedge^q T),L=\bar{\partial}+[\Lambda_0,-],[-,-])$. In other words, we have an isomorphism of two functors
\begin{align*}
Def_\mathfrak{g}\cong PDef_{(X,\Lambda_0)}
\end{align*}
\end{theorem}
We study universal Poisson deformations of a compact holomorphic Poisson manifold $(X,\Lambda_0)$ with $HP^1(X,\Lambda_0)=0$. Based on the method of \cite{Ran00}, we explicitly construct an $n$-th order universal Poisson deformation space $P_n^u$ over an artinian $\mathbb{C}$-algebra $(R_n^u,\mathfrak{m}^u_n)$ with exponent $n$ (i.e. $\mathfrak{m}_n^{u\,n+1}=0$) such that any infinitesimal Poisson deformation of $(X,\Lambda_0)$ over an artinian local $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with exponent $n$ (i.e. $\mathfrak{m}^{n+1}=0$) can be induced from the $n$-th order universal Poisson deformation space via base change by a canonical ring homomorphism $r:R_n^u\to R$, up to equivalence. By taking the limit, we obtain a universal formal Poisson deformation space. The main ingredient of universal Poisson deformations is the Jacobi complex, or Quillen standard complex, associated with the differential graded Lie algebra $\mathfrak{g}=\bigoplus_i g_i=(\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(X,\wedge^q T), L:=\bar{\partial}+[\Lambda_0,-], [-,-])$. The base ring of an $n$-th order universal Poisson deformation is given by $R_n^u:=\mathbb{C}\oplus \mathfrak{m}_n^u$, where $\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^*$ is the dual of the $0$-th Jacobi cohomology group associated with $\mathfrak{g}$. Given an infinitesimal Poisson deformation, the associated element $\alpha:=\phi+\Lambda\in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ gives an element $[\epsilon(\alpha)]$ in $\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, the $0$-th Jacobi cohomology group associated with the differential graded Lie algebra $\mathfrak{g}\otimes \mathfrak{m}$. The element $[\epsilon(\alpha)]$ gives a ring homomorphism $r:R_n^u\to R$.
We will present the full details of the construction of the Jacobi complex, since we need actual computations for our main theorem\footnote{A similar result on universal Poisson deformations of a Poisson algebra is proved in \cite{Gin04} (Theorem 1.10). More precisely, for a Poisson algebra $A$ with $HP^1(A)$ and $HP^2(A)$ finite-dimensional vector spaces over $\mathbb{C}$, there is a universal formal Poisson deformation of the algebra $A$. We also prove in Part \ref{part3} of the thesis that the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ is prorepresentable when $(X,\Lambda_0)$ is a smooth projective Poisson scheme with $HP^1(X,\Lambda_0)=0$.} in the following:
\begin{theorem}\label{2theorem}
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold with $HP^1(X,\Lambda_0)= 0$ and $J$ be the Jacobi complex associated with the differential graded Lie algebra
\begin{align*}
\mathfrak{g}=\bigoplus_i g_i=(\bigoplus_{p+q-1=i,p\geq 0,q\geq 1} A^{0,p}(X,\wedge^q T), L:=\bar{\partial}+[\Lambda_0,-], [-,-])
\end{align*} where $[-,-]$ is the Schouten bracket. Then
\begin{enumerate}
\item For each $n\geq 1$, $R_n^u=\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$ is a local artinian $\mathbb{C}$-algebra with residue field $\mathbb{C}$ in a canonical way. The maximal ideal of $R_n^u$ is given by $\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^*$ and has exponent $n$ (which means $\mathfrak{m}_n^{u\, n+1}=0$).
\item There is an $n$-th order universal Poisson deformation $P_n^u$ of $(X,\Lambda_0)$ over $R_n^u$ in the following sense:
for any artinian local $\mathbb{C}$-algebra $(R,\mathfrak{m})$ of exponent $n$ (which means $\mathfrak{m}^{n+1}=0$) and infinitesimal Poisson deformation $P$ of $(X,\Lambda_0)$ over $R$, there is a canonical homomorphism
$r:R_n^u\to R$
and an isomorphism of Poisson analytic spaces over $R$
\begin{align*}
P/R\xrightarrow{\sim} r^* P_n^u:=P_n^u\times_{Spec(R_n^u)} Spec(R);
\end{align*}
\item For each $n\geq 1$, the $P_n^u/R_n^u$ fit together to form a direct system with limit, which gives a universal formal Poisson deformation $\hat{P}^u/ \hat{R}^u:=\varinjlim_n P_n^u/R_n^u$ of $(X,\Lambda_0)$ over $\hat{R}^u$ in the following sense: if $\hat{R}= \varprojlim_n R_n$ is a complete local noetherian $\mathbb{C}$-algebra and $\hat{P}/\hat{R}= \varinjlim_n P_n/R_n$, then $\hat{r}=\varprojlim_n r_n:\hat{R}^u \to \hat{R}$ exists and $\hat{P}/\hat{R}\cong \hat{r}^*(\hat{P}^u/\hat{R}^u):=\hat{P}^u\times_{Spec(\hat{R}^u)} Spec(\hat{R})$.
\end{enumerate}
\end{theorem}
In chapter \ref{chapter4}, we present the construction of the $n$-th Jacobi complex, or Quillen standard complex, associated with a differential graded Lie algebra $\mathfrak{g}$. We show that we can canonically define a local artinian $\mathbb{C}$-algebra structure on $\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$ with residue field $\mathbb{C}$ and exponent $n$, where $\mathbb{H}^0(J_n(\mathfrak{g}))^*$ is the dual space of the $0$-th Jacobi cohomology group $\mathbb{H}^0(J_n(\mathfrak{g}))$ associated with the Jacobi complex $J_n(\mathfrak{g})$. We also describe a morphic element in $\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ which gives a $\mathbb{C}$-algebra homomorphism $\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*\to R$, where $(R,\mathfrak{m})$ is a local artinian $\mathbb{C}$-algebra with residue field $\mathbb{C}$ and exponent $n$.
In chapter \ref{chapter5}, we study infinitesimal Poisson deformations of compact holomorphic Poisson manifolds. We define infinitesimal deformations of compact holomorphic Poisson manifolds over local artinian $\mathbb{C}$-algebras with residue field $\mathbb{C}$, which are an infinitesimal version of Poisson analytic families. We deduce the same integrability condition as in Part \ref{part1} of the thesis. The idea is essentially the same as in Part \ref{part1} of the thesis; however, we use the infinitesimal language.
In chapter \ref{chapter6}, we show that the differential graded Lie algebra $\mathfrak{g}$ controls infinitesimal Poisson deformations in the language of functors of Artin rings. We complete the proof of Theorem \ref{2theorem} on universal Poisson deformations.
\chapter{Jacobi complex}\label{chapter4}
We present the full details of the construction of the Jacobi complex, or Quillen standard complex, associated with a differential graded Lie algebra, since we need actual computations for our infinitesimal Poisson deformations. See also \cite{Hin97} 2.2 Quillen standard complex.
\section{Preliminaries}\
Let $\mathfrak{g}=\bigoplus_{i\geq 0} g_i$ be a graded complex with a differential $d$, where each $g_i$ is a vector space over the field $\mathbb{C}$. In other words, we have the following complex $\mathfrak{g}:g_0\xrightarrow{d} g_1\xrightarrow{d} g_2\xrightarrow{d} \cdots$.
\begin{definition}
The symmetric algebra of a graded complex $(\mathfrak{g},d)$ is defined as the graded complex $S(\mathfrak{g})=T(\mathfrak{g})/I$, where $T(\mathfrak{g})=\sum_{n\geq 0} \mathfrak{g}^{\otimes n}$ is the tensor algebra of $\mathfrak{g}$ and $I$ is the two-sided ideal generated by elements of the form $a\otimes b-(-1)^{|a||b|}b\otimes a$ where $a,b$ are homogeneous elements of $\mathfrak{g}$. We denote $\overline{S(\mathfrak{g})}=\overline{T(\mathfrak{g})}/I$, where $\overline{T(\mathfrak{g})}=\sum_{n\geq 1} \mathfrak{g}^{\otimes n}$. We will denote by $x_1\odot \cdots \odot x_n$ the image of $x_1\otimes \cdots \otimes x_n$.
\end{definition}
\begin{definition}
The exterior algebra of a graded vector space $\mathfrak{g}$ is defined as $\bigwedge \mathfrak{g}=T(\mathfrak{g})/I$, where $T(\mathfrak{g})=\sum_{n\geq 0} \mathfrak{g}^{\otimes n}$ is the tensor algebra of $\mathfrak{g}$ and $I$ is the two-sided ideal generated by elements of the form $a\otimes b +(-1)^{|a||b|}b\otimes a$ where $a,b$ are homogeneous elements of $\mathfrak{g}$. We denote $\overline{\bigwedge \mathfrak{g}}=\overline{T(\mathfrak{g})}/I$, where $\overline{T(\mathfrak{g})}=\sum_{n\geq 1} \mathfrak{g}^{\otimes n}$. We will denote by $x_1\wedge \cdots \wedge x_n$ the image of $x_1\otimes \cdots \otimes x_n$.
\end{definition}
\begin{remark}
We have
\begin{align*}
\wedge^n \mathfrak{g} &=\wedge ^n (g_0\oplus g_1\oplus g_2\oplus \cdots)\\
&=\bigoplus_{r_0+r_1+\cdots=n, r_i\geq 0} (\wedge^{r_0} g_0)\otimes (sym^{r_1} g_1)\otimes \cdots \otimes (\wedge^{r_{2k}} g_{2k})\otimes (sym^{r_{2k+1}} g_{2k+1}) \otimes \cdots
\end{align*}
where $\wedge^k V$ is the usual $k$-th anti-symmetric product of a vector space $V$ and $sym^k V$ is the usual $k$-th symmetric product of a vector space $V$ when we ignore the grading.
\end{remark}
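The decomposition in the remark can be checked numerically in small cases: a basis of $\wedge^n\mathfrak{g}$ is given by monomials in homogeneous basis elements in which even-degree generators (which anticommute) occur at most once, while odd-degree generators (which commute) may repeat. The following sketch, my own and not part of the text, compares this direct count with the right-hand side of the remark:

```python
# Sanity check of
#   dim /\^n g = sum over r_0+r_1+...=n of
#       prod (dim /\^{r_even} g_even) * (dim sym^{r_odd} g_odd).
from itertools import combinations_with_replacement, product
from math import comb

def dim_wedge_bruteforce(gen_degrees, n):
    """Count basis monomials of /\^n g: multisets of n generators in which
    each even-degree generator appears at most once."""
    count = 0
    for ms in combinations_with_replacement(range(len(gen_degrees)), n):
        if all(ms.count(i) <= 1 for i in ms if gen_degrees[i] % 2 == 0):
            count += 1
    return count

def dim_wedge_formula(dims, n):
    """Right-hand side of the remark; dims[i] = dim g_i."""
    total = 0
    for rs in product(range(n + 1), repeat=len(dims)):
        if sum(rs) != n:
            continue
        term = 1
        for i, (d, r) in enumerate(zip(dims, rs)):
            if i % 2 == 0:
                term *= comb(d, r)                           # exterior power
            else:
                term *= 1 if r == 0 else comb(d + r - 1, r)  # symmetric power
        total += term
    return total

dims = [2, 1, 1]            # dim g_0 = 2, dim g_1 = 1, dim g_2 = 1
gen_degrees = [0, 0, 1, 2]  # degrees of a homogeneous basis
for n in range(1, 6):
    assert dim_wedge_bruteforce(gen_degrees, n) == dim_wedge_formula(dims, n)
print("decomposition dimensions agree")
```

For instance, with these dimensions both sides give $\dim\wedge^2\mathfrak{g}=7$, matching $\dim\wedge^2 g_0+\dim(g_0\otimes g_1)+\dim sym^2 g_1+\dim(g_0\otimes g_2)+\dim(g_1\otimes g_2)=1+2+1+2+1$.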
\begin{definition}
We can define coalgebra structures $\Delta'$ on $\overline{S(\mathfrak{g})}$ and $\Delta$ on $\overline{\bigwedge \mathfrak{g}}$ in the following way:
\begin{align*}
\Delta'(x_1\odot \cdots \odot x_n)=\sum_I (-1)^{s(I)} x_I\otimes x_{\bar{I}}\\
\Delta(v_1\wedge \cdots \wedge v_n)=\sum_I (-1)^{t(I)} v_I\otimes v_{\bar{I}}
\end{align*}
where the summation is over all subsets $I=\{r_1,...,r_p\}$, $r_1<\cdots <r_p$ and $\bar{I}=\{s_1,...,s_q\}$ such that $s_1<\cdots <s_q$ with $I\cup \bar{I}=\{1,...,n\}$, $x_I=x_{r_1}\odot \cdots \odot x_{r_p}$, $x_{\bar{I}}=x_{s_1}\odot \cdots\odot x_{s_q}$ and similarly $v_I=v_{r_1}\wedge \cdots \wedge v_{r_p}, v_{\bar{I}}=v_{s_1}\wedge \cdots \wedge v_{s_q}$. Here $s(I)$ and $t(I)$ are determined in the following way:
\begin{align*}
x_1\odot\cdots \odot x_n&=(-1)^{s(I)}x_I\odot x_{\bar{I}}\\
v_1\wedge \cdots \wedge v_n&=(-1)^{t(I)}v_I\wedge v_{\bar{I}}
\end{align*}
\end{definition}
Let $(\mathfrak{g},d)$ be a graded complex and denote by $(\mathfrak{g}[n],d)$ the graded complex obtained by shifting the degree by $n$, i.e. $\mathfrak{g}[n]^i=g_{n+i}$.
\begin{remark}[d\'ecalage isomorphism]
We have an isomorphism
\begin{align*}
dec:S^n(\mathfrak{g}[1])&\cong (\bigwedge^n \mathfrak{g})[n]\\
\bar{x}_1\odot\cdots\odot\bar{x}_n&\mapsto (-1)^{\sum_{i=1}^{n-1} (n-i)p_i} x_1\wedge \cdots \wedge x_n
\end{align*}
where $x_i$ is an element of $\mathfrak{g}$, $\bar{x}_i$ is an element of $\mathfrak{g}[1]$ via the natural map $\mathfrak{g}\to \mathfrak{g}[1]$, and $p_i$ is the degree of $x_i$.
\end{remark}
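For $n=2$ the isomorphism reads $dec(\bar{x}\odot\bar{y})=(-1)^{p}\,x\wedge y$ with $p=\deg x$, $q=\deg y$, and its well-definedness amounts to the mod-$2$ identity $p+q+(p+1)(q+1)\equiv pq+1$, which converts the graded-symmetric relation $\bar{x}\odot\bar{y}=(-1)^{(p+1)(q+1)}\bar{y}\odot\bar{x}$ in $S^2(\mathfrak{g}[1])$ into the graded-antisymmetric relation $x\wedge y=-(-1)^{pq}y\wedge x$ in $\bigwedge^2\mathfrak{g}$. A quick numerical check of this identity (my own sketch, not part of the text):

```python
# Sanity check: for n = 2 the decalage sign dec(xbar . ybar) = (-1)^p x ^ y
# must turn the graded-symmetric relation in S(g[1]) into the
# graded-antisymmetric relation in /\ g.  Comparing the signs on both
# sides yields the mod-2 identity verified below.
for p in range(6):          # degree of x in g
    for q in range(6):      # degree of y in g
        lhs = (p + (p + 1) * (q + 1) + q) % 2   # sign from the S(g[1]) side
        rhs = (p * q + 1) % 2                   # sign from the /\ g side
        assert lhs == rhs
print("decalage sign consistent for n = 2")
```

Expanding $(p+1)(q+1)=pq+p+q+1$ shows the identity holds for all integers, so the loop bound is only for illustration.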
\begin{notation}
Let $I=\{p_1,...,p_r\}$. $A_I$ is defined by the following relation: $dec(\bar{x}_{p_1}\odot \cdots \odot \bar{x}_{p_r})=(-1)^{A_I}x_{p_1}\wedge \cdots \wedge x_{p_r}$. We will denote $\bar{x}_I=\bar{x}_{p_1}\odot\cdots \odot \bar{x}_{p_r}$ and $x_I=x_{p_1}\wedge \cdots \wedge x_{p_r}$. We denote by $|I|=r$ the cardinality of $I$ and set $|x_I|:=deg(x_{p_1})+\cdots +deg(x_{p_r})$.
\end{notation}
Via this isomorphism $dec$, we have the following commutative diagram
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@>dec>> \overline{\bigwedge\mathfrak{g}}\\
@V\Delta' VV @VV\Delta V\\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])} @>\tilde{dec}>> \overline{\bigwedge\mathfrak{g}}\otimes \overline{\bigwedge\mathfrak{g}}
\end{CD}$
\end{center}
where $\tilde{dec}$ is defined in the following way: $\tilde{dec}(\bar{x}_I\otimes \bar{x}_J)=(-1)^{A_I+A_J+|x_I||J|}x_I\otimes x_J$.
\subsection{Induced differential}\
Let $\mathfrak{g}$ be a differential graded Lie algebra.
We define the map $Q_n:\wedge^n \mathfrak{g}\to \wedge^{n-1} \mathfrak{g}$ by
\begin{align*}
Q_n(x_1\wedge \cdots \wedge x_n)=\sum_{i<j} (-1)^a[x_i,x_j]\wedge x_1\wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge \hat{x}_j\wedge \cdots \wedge x_n
\end{align*}
where $a$ is defined in the following way
\begin{align*}
x_1\wedge \cdots \wedge x_n=(-1)^ax_i\wedge x_j\wedge x_1\wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge \hat{x}_j\wedge \cdots\wedge x_n
\end{align*}
More precisely,
\begin{align*}
a=\underbrace{i-1+p_i(p_1+\cdots +p_{i-1})}_{\text{first moving $x_i$}}+\underbrace{j-2+p_j(p_1+\cdots +\hat{p}_i+\cdots +p_{j-1})}_{\text{second moving $x_j$}}
\end{align*}
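The closed formula for $a$ can be checked by machine: in $\bigwedge\mathfrak{g}$ an adjacent transposition of homogeneous factors of degrees $r$ and $s$ contributes the sign $(-1)^{1+rs}$, so the total sign of bringing $x_i$ and $x_j$ to the front can be accumulated swap by swap and compared with the formula. A sketch of such a check (mine, not part of the text; indices are $0$-based in the code):

```python
# Verify: x_1 ^ ... ^ x_n = (-1)^a x_i ^ x_j ^ x_1 ^ ... (x_i, x_j omitted)
# with a = (i-1) + p_i(p_1+...+p_{i-1}) + (j-2) + p_j(p_1+..+p^_i+..+p_{j-1}).
import random

def koszul_reorder_sign(degs, perm):
    """Sign exponent (mod 2) of reordering x_0 ^ ... ^ x_{n-1} into
    x_{perm[0]} ^ x_{perm[1]} ^ ...; each adjacent swap of factors of
    degrees r, s contributes 1 + r*s."""
    seq = list(perm)
    s = 0
    for _ in range(len(seq)):           # bubble sort back to identity
        for b in range(len(seq) - 1):
            if seq[b] > seq[b + 1]:
                s += 1 + degs[seq[b]] * degs[seq[b + 1]]
                seq[b], seq[b + 1] = seq[b + 1], seq[b]
    return s % 2

def formula_a(degs, i, j):
    """The text's formula for a, translated to 0-based indices i < j."""
    p = degs
    return (i + p[i] * sum(p[:i])
            + (j - 1) + p[j] * (sum(p[:j]) - p[i])) % 2

random.seed(0)
for _ in range(500):
    n = random.randint(2, 8)
    degs = [random.randint(0, 3) for _ in range(n)]
    i, j = sorted(random.sample(range(n), 2))
    perm = [i, j] + [k for k in range(n) if k not in (i, j)]
    assert koszul_reorder_sign(degs, perm) == formula_a(degs, i, j)
print("sign formula for a confirmed on random inputs")
```

Bubble-sorting the permutation back to the identity accumulates the same total sign as the forward reordering, since each adjacent swap contributes symmetrically in the two factors.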
And we define $d_n:\wedge^n \mathfrak{g}\to \wedge^n \mathfrak{g}$ inductively on $n$ by
\begin{align*}
d_n(x_1\wedge \cdots \wedge x_n)=dx_1\wedge x_2\wedge \cdots \wedge x_n+\sum_{i=2}^n(-1)^{p_1+\cdots+p_{i-1}}x_1\wedge\cdots \wedge x_{i-1}\wedge dx_i\wedge x_{i+1}\wedge \cdots \wedge x_n
\end{align*}
First we show that $d^2=0$ and $Q^2=0$. We prove $d^2=0$ by induction on the number of factors $k$ in $x_1\wedge \cdots \wedge x_k$. For $k=1$, the statement is true by the definition of $d$. Assume that the statement is true for $k=n-1$. Then
\begin{align*}
d\circ d(x_1\wedge\cdots \wedge x_n)&=d(dx_1\wedge x_2\cdots\wedge x_n+(-1)^{p_1}x_1\wedge d(x_2\wedge \cdots \wedge x_n))\\
&=ddx_1\wedge x_2\cdots \wedge x_n +(-1)^{p_1+1}dx_1\wedge d(x_2\wedge \cdots \wedge x_n)\\
&+(-1)^{p_1}dx_1\wedge d(x_2\wedge \cdots \wedge x_n)+(-1)^{p_1+p_1}x_1\wedge dd(x_2\wedge \cdots \wedge x_n)=0
\end{align*}
by the induction hypothesis. For $Q^2=0$, we recall the definition of $Q$. Then $Q\circ Q(x_1\wedge \cdots \wedge x_n)$ is of the following form
\begin{align*}
\sum_{i<j<k} \left((-1)^a [[x_i,x_j],x_k]+(-1)^b [[x_j,x_k],x_i]+(-1)^c[[x_i,x_k],x_j]\right)\wedge x_1\wedge \cdots \wedge \hat{x}_i \wedge \cdots \wedge \hat{x}_j\wedge \cdots \wedge \hat{x}_k\wedge \cdots \wedge x_n
\end{align*}
where
\begin{align*}
a&=\underbrace{i-1+p_i(p_1+\cdots +p_{i-1})}_{\text{first moving $x_i$}}+\underbrace{j-2+p_j(p_1+\cdots +\hat{p}_i+\cdots +p_{j-1})}_{\text{second moving $x_j$}}+\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \underbrace{k-3+p_k(p_1+\cdots +\hat{p}_i+\cdots+\hat{p}_j+\cdots +p_{k-1})}_{\text{third moving $x_k$}}\\
b&=\underbrace{j-1+p_j(p_1+\cdots+p_{j-1})}_{\text{first moving $x_j$}}+\underbrace{k-2+p_k(p_1+\cdots+\hat{p}_j+\cdots+p_{k-1})}_{\text{second moving $x_k$}}+\underbrace{i-1+p_i(p_1+\cdots+p_{i-1})}_{\text{third moving $x_i$}}\\
c&=\underbrace{i-1+p_i(p_1+\cdots+p_{i-1})}_{\text{first moving $x_i$}}+\underbrace{k-2+p_k(p_1+\cdots+\hat{p}_i+\cdots +p_{k-1})}_{\text{second moving $x_k$}}+\underbrace{j-2+p_j(p_1+\cdots +\hat{p}_i+\cdots +p_{j-1})}_{\text{third moving $x_j$}}
\end{align*}
Set
\begin{align*}
d=i+j+k+p_i(p_1+\cdots +p_{i-1})+p_j(p_1+\cdots+p_{j-1})+p_k(p_1+\cdots+p_{k-1})
\end{align*}
Then
\begin{align*}
(-1)^d&=(-1)^{a+p_jp_i+p_kp_i+p_kp_j}&(-1)^a&=(-1)^{d+p_jp_i+p_kp_i+p_kp_j}\\
(-1)^d&=(-1)^{b+p_kp_j}&(-1)^b&=(-1)^{d+p_kp_j}\\
(-1)^d&=(-1)^{c+1+p_kp_i+p_jp_i}&(-1)^c&=(-1)^{d+1+p_kp_i+p_jp_i}
\end{align*}
So we have
\begin{align*}
&(-1)^a [[x_i,x_j],x_k]+(-1)^b [[x_j,x_k],x_i]+(-1)^c[[x_i,x_k],x_j]\\
&=(-1)^d\left((-1)^{p_jp_i+p_kp_i+p_kp_j} [[x_i,x_j],x_k]+(-1)^{p_kp_j} [[x_j,x_k],x_i]+(-1)^{1+p_kp_i+p_jp_i}[[x_i,x_k],x_j]\right)=0
\end{align*}
by the following relations and the graded Jacobi identity
\begin{align*}
(-1)^{p_jp_i+p_kp_i+p_kp_j} [[x_i,x_j],x_k]=(-1)^{p_jp_i+p_kp_j+p_kp_j} [x_k,[x_i,x_j]]\\
(-1)^{p_kp_j} [[x_j,x_k],x_i]=(-1)^{p_kp_j+p_ip_j+p_ip_k} [x_i,[x_j,x_k]]\\
(-1)^{1+p_kp_i+p_jp_i}[[x_i,x_k],x_j]=(-1)^{p_kp_j+p_jp_i+p_jp_i}[x_j,[x_k,x_i]]
\end{align*}
\begin{align*}
(-1)^{p_ip_k}[x_i,[x_j,x_k]]+(-1)^{p_jp_i}[x_j,[x_k,x_i]]+(-1)^{p_kp_j}[x_k,[x_i,x_j]]=0
\end{align*}
Hence we have $Q^2=0$.
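The relations between $(-1)^d$ and $(-1)^a,(-1)^b,(-1)^c$ used above are straightforward but easy to get wrong, so a brute-force verification is reassuring. The sketch below (my own, not part of the text) evaluates the four exponent formulas for random $i<j<k$ and random degrees and checks the three congruences mod $2$:

```python
# Check, mod 2:  d = a + p_j p_i + p_k p_i + p_k p_j,
#                d = b + p_k p_j,
#                d = c + 1 + p_k p_i + p_j p_i,
# where a, b, c, d are the exponent formulas above (1-based indices).
import random

def exponents(p, i, j, k):
    """p[1..n] are the degrees; i < j < k are 1-based positions."""
    S = lambda upper, skip=(): sum(p[l] for l in range(1, upper) if l not in skip)
    a = (i - 1 + p[i] * S(i)
         + j - 2 + p[j] * S(j, skip=(i,))
         + k - 3 + p[k] * S(k, skip=(i, j)))
    b = (j - 1 + p[j] * S(j)
         + k - 2 + p[k] * S(k, skip=(j,))
         + i - 1 + p[i] * S(i))
    c = (i - 1 + p[i] * S(i)
         + k - 2 + p[k] * S(k, skip=(i,))
         + j - 2 + p[j] * S(j, skip=(i,)))
    d = i + j + k + p[i] * S(i) + p[j] * S(j) + p[k] * S(k)
    return a, b, c, d

random.seed(1)
for _ in range(500):
    n = random.randint(3, 9)
    p = [None] + [random.randint(0, 3) for _ in range(n)]  # 1-based degrees
    i, j, k = sorted(random.sample(range(1, n + 1), 3))
    a, b, c, d = exponents(p, i, j, k)
    pi, pj, pk = p[i], p[j], p[k]
    assert (d - a) % 2 == (pj * pi + pk * pi + pk * pj) % 2
    assert (d - b) % 2 == (pk * pj) % 2
    assert (d - c) % 2 == (1 + pk * pi + pj * pi) % 2
print("sign relations between d and a, b, c confirmed")
```

The positional parts cancel in pairs ($d-a$ contributes the even offset $6$, $d-b$ the offset $4$, and $d-c$ the odd offset $5$, which is the source of the extra $+1$), so only the degree products survive mod $2$.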
Next we will show that $Qd=dQ$ by induction on the number of factors $k$ in $x_1\wedge \cdots\wedge x_k$. For $k=2$,
\begin{align*}
Q\circ d(x_1\wedge x_2)=Q(dx_1\wedge x_2+(-1)^{p_1}x_1\wedge dx_2)=[dx_1,x_2]+(-1)^{p_1}[x_1,dx_2]\\
d \circ Q(x_1\wedge x_2)=d[x_1,x_2]=[dx_1,x_2]+(-1)^{p_1}[x_1,dx_2]
\end{align*}
So the statement holds for $k=2$. Now we assume that it holds for $k=n-1$. We will prove that it holds for $k=n$.
\begin{align*}
d(x_1\wedge\cdots\wedge x_n)&=dx_1\wedge(x_2\wedge\cdots\wedge x_n)+(-1)^{p_1} x_1\wedge d(x_2\wedge\cdots \wedge x_n)\\
dx_1\wedge(x_2\wedge\cdots\wedge x_n)&= (-1)^{n-1+(p_1+1)(p_2+\cdots+p_n)}(x_2\wedge\cdots \wedge x_n)\wedge dx_1\\
(-1)^{p_1}x_1\wedge d(x_2\wedge \cdots\wedge x_n)
&=(-1)^{p_1+ i-2+p_i(p_2+\cdots p_{i-1})} x_1\wedge d(x_i\wedge x_2\wedge \cdots \wedge \hat{x}_i\wedge \cdots\wedge x_n)\\
&=(-1)^{p_1+i-2+p_i(p_2+\cdots+p_{i-1})} x_1\wedge dx_i\wedge x_2\wedge \cdots\wedge \hat{x}_i\wedge \cdots\wedge x_n\\
&\,\,\,\,\,\,\,\,\,\,+(-1)^{p_1+i-2+p_i(p_2+\cdots+p_{i-1})+p_i} x_1\wedge x_i\wedge d(x_2\wedge \cdots\wedge \hat{x}_i\wedge \cdots \wedge x_n)\\
&=(-1)^{p_1+n-2+p_1(p_2+\cdots+p_n+1)}d(x_2\wedge \cdots \wedge x_n)\wedge x_1\\
\end{align*}
Then
\begin{align*}
Qd(x_1\wedge \cdots \wedge x_n)&=\sum_{i=2}^n (-1)^{i-2+p_i(p_2+\cdots +p_{i-1})}[dx_1,x_i]\wedge x_2\wedge \cdots \hat{x}_i\wedge \cdots \wedge x_n\\
&\,\,\,\,\,\,\,\,\,\,+(-1)^{n-1+(p_1+1)(p_2+\cdots +p_n)}Q(x_2\wedge \cdots\wedge x_n)\wedge dx_1\\
&+\sum_{i=2}^n(-1)^{p_1+i-2+p_i(p_2+\cdots+p_{i-1})}[x_1,dx_i]\wedge x_2\wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n\\
&\,\,\,\,\,\,\,\,\,\,+\sum_{i=2}^n (-1)^{p_1+i-2+p_i(p_2+\cdots+p_{i-1})+p_i}[x_1,x_i]\wedge d(x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n)\\
&+(-1)^{p_1+n-2+p_1(p_2+\cdots+p_n+1)}Qd(x_2\wedge \cdots \wedge x_n)\wedge x_1
\end{align*}
Now we compute $dQ$. First we note the following relations
\begin{align*}
x_1\wedge(x_2\wedge \cdots \wedge x_n)&=(-1)^{i-2+p_i(p_1+\cdots +p_{i-1})} x_1\wedge x_i\wedge x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n\\
&=(-1)^{n-1+p_1(p_2+\cdots +p_n)}(x_2\wedge \cdots \wedge x_n)\wedge x_1
\end{align*}
Then
\begin{align*}
Q(x_1\wedge(x_2\wedge \cdots \wedge x_n))&=\sum_{i=2}^n (-1)^{i-2+p_i(p_1+\cdots +p_{i-1})} [x_1, x_i]\wedge x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n\\
&\,\,\,\,\,\,\,\,\,\,+(-1)^{n-1+p_1(p_2+\cdots +p_n)}Q(x_2\wedge \cdots \wedge x_n)\wedge x_1
\end{align*}
Hence we have
\begin{align*}
dQ(x_1\wedge \cdots \wedge x_n)&=\sum_{i=2}^n (-1)^{i-2+p_i(p_1+\cdots +p_{i-1})} [dx_1, x_i]\wedge x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n\\
&+\sum_{i=2}^n (-1)^{i-2+p_i(p_1+\cdots +p_{i-1})+p_1} [x_1, dx_i]\wedge x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n\\
&+\sum_{i=2}^n (-1)^{i-2+p_i(p_1+\cdots +p_{i-1})+p_1+p_i} [x_1, x_i]\wedge d(x_2 \wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge x_n)\\
&+(-1)^{n-1+p_1(p_2+\cdots +p_n)}dQ(x_2\wedge \cdots \wedge x_n)\wedge x_1\\
&\,\,\,\,\,+(-1)^{n-1+p_1(p_2+\cdots +p_n)+(p_2+\cdots +p_n)}Q(x_2\wedge \cdots \wedge x_n)\wedge dx_1
\end{align*}
By induction hypothesis for $k=n-1$, we have $Qd(x_2\wedge \cdots \wedge x_n)=dQ(x_2\wedge \cdots \wedge x_n)$. Hence we have
\begin{align*}
Qd(x_1\wedge \cdots \wedge x_n)=dQ(x_1\wedge \cdots \wedge x_n)
\end{align*}
So the statement holds for $k=n$. Hence $Qd=dQ$. So if we define $Q'$ by
\begin{align*}
Q'(x_1\wedge \cdots \wedge x_n)=((-1)^n d+Q)(x_1\wedge \cdots \wedge x_n)
\end{align*}
then $Q'\circ Q'=0$.
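For completeness, $Q'\circ Q'=0$ follows from $d\circ d=0$, $Q\circ Q=0$ and $Qd=dQ$: since $Q$ lowers the number of wedge factors by one, the sign in $Q'$ changes from $(-1)^n$ to $(-1)^{n-1}$ after applying $Q$, and for $x\in \bigwedge^n \mathfrak{g}$ we compute
\begin{align*}
Q'\circ Q'(x)&=Q'\big((-1)^n\,dx+Qx\big)\\
&=(-1)^n\big((-1)^n\,d(dx)+Q(dx)\big)+\big((-1)^{n-1}\,d(Qx)+Q(Qx)\big)\\
&=(-1)^n\,Qd(x)-(-1)^n\,dQ(x)\\
&=(-1)^n\,(Qd-dQ)(x)=0.
\end{align*}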
\section{Jacobi complex}\
\begin{definition}[un-degree shifted complex]\label{2un}
Let $\mathfrak{g}$ be a differential graded Lie algebra. Let's consider the total complex of the following bicomplex with the differential $Q'$ defined as above,
\begin{center}
\tiny{$\begin{CD}
g_0 @>-d>> g_1@>-d>>g_2 @>-d>> g_3 @>>>\cdots\\%@>d>> g_4 \\
@AQAA @AQAA @AQAA @AQAA \\
\bigwedge^2 g_0 @>d>> (g_0\otimes g_1) @>d>> (g_0\otimes g_2) \oplus sym^2g_1@>d>> (g_0\otimes g_3) \oplus (g_1\otimes g_2) @>>> \cdots \\%@>>> (g_0\otimes g_4)\oplus (g_1 \otimes g_3)\oplus \bigwedge^2 g_2 \\
@AQAA @AQAA @AQAA @AQAA \\
\bigwedge^3 g_0 @>-d>> (\bigwedge^2 g_0)\otimes g_1 @>-d>> (\bigwedge^2 g_0 \otimes g_2) \oplus(g_0\otimes sym^2 g_1) @>-d>> (\bigwedge^2 g_0\otimes g_3)\oplus (g_0\otimes g_1\otimes g_2) @>>> \cdots \\
@. @. @. \oplus sym^3 g_1 \\
@AQAA @AQAA @AQAA @AQAA \\%@AAA\\
\bigwedge^4 g_0 @>d>> (\bigwedge^3 g_0) \otimes g_1@>d>> (\bigwedge^3 g_0\otimes g_2) @>d>> (\bigwedge^3 g_0\otimes g_3) @>>>\cdots\\%@>>> (\bigwedge^3 g_0\otimes g_4)\oplus(\bigwedge^2 g_0\otimes g_1\otimes g_3)\\
@.@. \oplus (\bigwedge^2 g_0 \otimes sym^2 g_1)@. \oplus (\bigwedge^2 g_0\otimes g_1\otimes g_2) \\%@. \oplus(\bigwedge^2 g_0\otimes \bigwedge^2 g_2)\\
@. @. @. \oplus (g_0\otimes sym^3 g_1) \\
@AQAA @AQAA @AQAA @AQAA \\% @AAA\\
\cdots @>>> \cdots @>>> \cdots @>>> \cdots \\%@>>> \cdots\\
@AQAA @AQAA @AQAA @AQAA\\
\wedge^n g_0 @> (-1)^nd>>(\wedge^{n-1}g_0) \otimes g_1 @>(-1)^nd >>\cdots @>(-1)^nd>> \cdots@>>> \cdots
\end{CD}$}
\end{center}
\end{definition}
Now we will define the $n$-th order Jacobi complex $J_n(\mathfrak{g})$. First we note that we can regard the direct sum of the components in the above whole complex as a subset $S_n$ of $S=\overline{S(\mathfrak{g}[1])}$ via $dec$.
\begin{definition}
We define a differential $\bar{Q}$ on $\overline{S(\mathfrak{g}[1])}$ in such a way that the following diagram commutes.
\begin{center}
$\begin{CD}
\bar{x}_I@> dec >> (-1)^{A_I} x_I\\
@V \bar{Q} VV @V Q VV\\
\bar{Q}(\bar{x}_I) @>dec >> (-1)^{A_I} Q(x_I)
\end{CD}$
\end{center}
In other words, $\bar{Q}:=dec^{-1}\circ Q\circ dec$.
\end{definition}
\begin{definition}
We define a differential $\bar{d}$ in such a way that the following diagram commutes.
\begin{center}
$\begin{CD}
\bar{x}_I @> dec >> (-1)^{A_I} x_I\\
@V \bar{d} VV @V (-1)^{|I|}d VV\\
\bar{d}(\bar{x}_I) @>dec >> (-1)^{A_I+|I|} d(x_I)
\end{CD}$
\end{center}
\end{definition}
We translate the above complex $(\bigwedge \mathfrak{g},d,Q)$ (the un-degree shifted complex in Definition \ref{2un}) in terms of $(\overline{S(\mathfrak{g}[1])},\bar{d},\bar{Q})$ via $dec$ to define the Jacobi complex.
\begin{definition}[Jacobi complex]
Let $\mathfrak{g}$ be a differential graded Lie algebra. The $n$-th order Jacobi complex $J_n(\mathfrak{g})$ is the total complex of the following bicomplex $($here we have to be careful of the grading: we are working on $S_n\subset \overline{S(\mathfrak{g}[1])}$, hence here the grading of $g_i$ is actually $i-1)$
\begin{center}
\tiny{$\begin{CD}
g_0 @>\bar{d}>> g_1@>\bar{d}>>g_2 @>\bar{d}>> g_3 @>>>\cdots\\%@>d>> g_4 \\
@A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA \\
\bigwedge^2 g_0 @>\bar{d}>> (g_0\otimes g_1) @>\bar{d}>> (g_0\otimes g_2) \oplus sym^2g_1@>\bar{d}>> (g_0\otimes g_3) \oplus (g_1\otimes g_2) @>>> \cdots \\%@>>> (g_0\otimes g_4)\oplus (g_1 \otimes g_3)\oplus \bigwedge^2 g_2 \\
@A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA \\
\bigwedge^3 g_0 @>\bar{d}>> (\bigwedge^2 g_0)\otimes g_1 @>\bar{d}>> (\bigwedge^2 g_0 \otimes g_2) \oplus(g_0\otimes sym^2 g_1) @>\bar{d}>> (\bigwedge^2 g_0\otimes g_3)\oplus (g_0\otimes g_1\otimes g_2) @>>> \cdots \\
@. @. @. \oplus sym^3 g_1 \\
@A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA \\%@AAA\\
\bigwedge^4 g_0 @>\bar{d}>> (\bigwedge^3 g_0) \otimes g_1@>\bar{d}>> (\bigwedge^3 g_0\otimes g_2) @>\bar{d}>> (\bigwedge^3 g_0\otimes g_3) @>>>\cdots\\%@>>> (\bigwedge^3 g_0\otimes g_4)\oplus(\bigwedge^2 g_0\otimes g_1\otimes g_3)\\
@.@. \oplus (\bigwedge^2 g_0 \otimes sym^2 g_1)@. \oplus (\bigwedge^2 g_0\otimes g_1\otimes g_2) \\%@. \oplus(\bigwedge^2 g_0\otimes \bigwedge^2 g_2)\\
@. @. @. \oplus (g_0\otimes sym^3 g_1) \\
@A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA \\% @AAA\\
\cdots @>>> \cdots @>>> \cdots @>>> \cdots \\%@>>> \cdots\\
@A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA @A\bar{Q}AA\\
\wedge^n g_0 @> \bar{d}>>(\wedge^{n-1}g_0) \otimes g_1 @>\bar{d} >>\cdots @>\bar{d}>> \cdots@>>> \cdots
\end{CD}$}
\end{center}
We give a bigrading on $S_n$ by $S_n^{-r,s}\cong_{dec} (\wedge^r \mathfrak{g})^s$. In other words, if $x\in S_n^{-r,s}$, then $x$ is of the form $x=\sum_i \bar{x}_{i_1}\odot \cdots \odot \bar{x}_{i_r}$ where $deg(x_{i_1})+\cdots +deg(x_{i_r})=s$ in $\mathfrak{g}$. Then we set $J_n(\mathfrak{g})^i=\bigoplus_{-r+s=i} S_n^{-r,s}$. For example, $J_n(\mathfrak{g})^0=g_1\oplus(g_0\otimes g_2)\oplus sym^2 g_1\oplus(\wedge^2 g_0\otimes g_3)\oplus (g_0\otimes g_1\otimes g_2)\oplus sym^3 g_1\oplus \cdots$. Then we define the Jacobi complex to be the following complex
\begin{align*}
\cdots \xrightarrow{\bar{d}+\bar{Q}} J_n(\mathfrak{g})^{i-1}\xrightarrow{\bar{d}+\bar{Q}} J_n(\mathfrak{g})^i\xrightarrow{\bar{d}+\bar{Q}} J_n(\mathfrak{g})^{i+1} \xrightarrow{\bar{d}+\bar{Q}}\cdots
\end{align*}
\end{definition}
We also note the natural inclusions $J_1(\mathfrak{g})\hookrightarrow J_2(\mathfrak{g})\hookrightarrow \cdots \hookrightarrow J_n(\mathfrak{g})\hookrightarrow \cdots$, which induce $\mathbb{H}^i(J_1(\mathfrak{g}))\to \mathbb{H}^i(J_2(\mathfrak{g}))\to \cdots \to \mathbb{H}^i(J_n(\mathfrak{g}))\to \cdots$.
\begin{definition}
We denote the $i$-th cohomology group of the $n$-th order Jacobi complex associated with a differential graded Lie algebra $\mathfrak{g}$ by $\mathbb{H}^i(J_n(\mathfrak{g}))$.
\end{definition}
\begin{remark}
We can modify the Jacobi complex $J_n(\mathfrak{g})$ by tensoring with $\mathfrak{m}$ for some local artinian $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with residue $\mathbb{C}$ to get $J_n(\mathfrak{g})\otimes \mathfrak{m}$. Then its cohomology groups coincide with $\mathbb{H}^i(J_n(\mathfrak{g}))\otimes \mathfrak{m}$.
\end{remark}
\begin{remark}
In practice, when we compute the Jacobi cohomology for our infinitesimal Poisson deformations, we use the first complex $($un-degree shifted complex in Definition $\ref{2un})$. In other words, we will work on $\overline{\bigwedge \mathfrak{g}}$ for the actual computation of Jacobi cohomology groups. The reason why we pass from $\overline{\bigwedge\mathfrak{g}}$ to $\overline{S(\mathfrak{g}[1])}$ is that we want to give a commutative $\mathbb{C}$-algebra structure on $\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$. We will explain this below.
\end{remark}
Let $\mathfrak{g}$ be a differential graded Lie algebra. Then $\overline{S(\mathfrak{g}[1])}$ has a symmetric coalgebra structure. Let $Q$ be the differential induced from the bracket $[-,-]$ as above.
\begin{lemma}
We have $dec(\bar{Q}(\bar{x}_I)\odot \bar{x}_J)=(-1)^{A_I+A_J+|J||x_I|} Q(x_I)\wedge x_J$ and $dec(\bar{x}_I\odot \bar{Q}(\bar{x}_J))=(-1)^{A_I+A_J+|x_I|(|J|-1)}x_I\wedge Q(x_J).$
\end{lemma}
\begin{proof}
We simply note that $dec(\bar{x}_J)=(-1)^{A_J}x_J$, $dec\circ \bar{Q}(\bar{x}_I)=(-1)^{A_I}Q(x_I)$ and $dec\circ \bar{Q}(\bar{x}_J)=(-1)^{A_J}Q(x_J)$.
\end{proof}
By the definition of $\bar{Q}$ on $\overline{S(\mathfrak{g}[1])}$, we would like to show that the following diagram commutes (which means that $\bar{Q}$ is a coderivation):
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@> \bar{Q} >> \overline{S(\mathfrak{g}[1])}\\
@V \triangle' VV @V \triangle' VV\\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])} @>\bar{Q}\otimes 1+1\otimes \bar{Q}>>\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}
\end{CD}$
\end{center}
We have $(\bar{Q}\otimes 1+1\otimes \bar{Q})\circ \triangle'(\bar{x}_1\odot\cdots \odot\bar{x}_n)=\sum_{I,J} (-1)^{S(I,J)}(\bar{Q}(\bar{x}_I)\otimes \bar{x}_J+(-1)^{|x_I|+|I|}\bar{x}_I\otimes \bar{Q}(\bar{x}_J))$, where $ \bar{x}_1\odot\cdots\odot\bar{x}_n=(-1)^{S(I,J)}\bar{x}_I\odot\bar{x}_J$.
First we note the following commutative diagram
\begin{center}
$\begin{CD}
\overline{S(g[1])} @>\bar{Q}>> \overline{S(g[1])} @>\triangle' >> \overline{S(g[1])}\otimes \overline{S(g[1])}\\
@V dec VV @V dec VV @V \tilde{dec} VV\\
\overline{\bigwedge g} @> Q >> \overline{\bigwedge g} @> \triangle >> \overline{\bigwedge g} \otimes \overline{\bigwedge g}
\end{CD}$
\end{center}
By this commutativity, we may work on $\overline{\bigwedge \mathfrak{g}}$ instead of $\overline{S(\mathfrak{g}[1])}$ to show that $\bar{Q}$ is a coderivation on $\overline{S(\mathfrak{g}[1])}$. So our claim is equivalent to
\begin{equation}\label{2e}
\sum_{I,J} (-1)^{S(I,J)}\tilde{dec}(\bar{Q}(\bar{x}_I)\otimes \bar{x}_J+(-1)^{|x_I|+|I|}\bar{x}_I\otimes \bar{Q}(\bar{x}_J))=\Delta\circ Q\circ dec(\bar{x}_1\odot\cdots\odot \bar{x}_n)
\end{equation}
\begin{remark}
Via the above commutative diagram, for any $I$ and $J$, where $I \cup J=\{1,...,n\}$, $Q(x_I)\otimes x_J$ and $x_I\otimes Q(x_J)$ appear (up to sign) as terms in $\Delta\circ Q\circ dec(\bar{x}_1\odot\cdots\odot\bar{x}_n)$. Conversely, since
$Q(x_1\wedge \cdots \wedge x_n)=\sum (-1)^{P(i,j)}[x_i,x_j]\wedge x_1\wedge \cdots \wedge \hat{x}_i\wedge \cdots \wedge \hat{x}_j\wedge \cdots \wedge x_n$, $Q(x_I)\otimes x_J$ and $x_I\otimes Q(x_J)$ for all the pairs $I,J$ (up to sign) exhaust all the terms in $\Delta\circ Q\circ dec(\bar{x}_1\odot \cdots \odot \bar{x}_n)$. Hence in order to prove our claim $(\ref{2e})$, we only need to check that the signs for $Q(x_I)\otimes x_J$ and $x_I\otimes Q(x_J)$ in $\Delta\circ Q\circ dec(\bar{x}_1\odot\cdots \odot\bar{x}_n)$ equal the signs for $Q(x_I)\otimes x_J$ and $x_I\otimes Q(x_J)$ in $\tilde{dec}(\bar{Q}(\bar{x}_I)\otimes \bar{x}_J)$ and $(-1)^{|x_I|+|I|}\tilde{dec}(\bar{x}_I\otimes\bar{Q}(\bar{x}_J))$.
\end{remark}
We now prove the claim (\ref{2e}). We note that $\bar{x}_I\odot \bar{x}_J=(-1)^{(|I|+|x_I|)(|J|+|x_J|)} \bar{x}_J\odot \bar{x}_I$.
\begin{center}
$\begin{CD}
\bar{x}_I\odot \bar{x}_J@> dec >> (-1)^{A_I+A_J+|x_I||J|} x_I\wedge x_J\\
@V \bar{Q} VV @V Q VV\\
\bar{Q}(\bar{x}_I\odot \bar{x}_J) @>dec >> (-1)^{A_I+A_J+|x_I||J|} Q(x_I) \wedge x_J+\cdots
\end{CD}$
\end{center}
Hence $(-1)^{A_I+A_J+|x_I||J|} Q(x_I)\otimes x_J$ corresponds to $\bar{Q}(\bar{x}_I)\otimes \bar{x}_J$ via $\tilde{dec}$.
\begin{center}
$\begin{CD}
(-1)^{(|I|+|x_I|)(|J|+|x_J|)} \bar{x}_J\odot \bar{x}_I@> dec >> (-1)^{(|I|+|x_I|)(|J|+|x_J|)+A_I+A_J+|x_J||I|} x_J\wedge x_I\\
@V \bar{Q} VV @V Q VV\\
\bar{Q}(\bar{x}_I\odot \bar{x}_J)=(-1)^{(|I|+|x_I|)(|J|+|x_J|)}\bar{Q}(\bar{x}_J\odot \bar{x}_I) @>dec >> (-1)^{(|I|+|x_I|)(|J|+|x_J|)+A_I+A_J+|x_J||I|} Q(x_J) \wedge x_I +\cdots
\end{CD}$
\end{center}
We note that
\begin{align*}
(-1)^{(|I|+|x_I|)(|J|+|x_J|)+A_I+A_J+|x_J||I|} Q(x_J) \wedge x_I &=(-1)^{(|I|+|x_I|)(|J|+|x_J|)+A_I+A_J+|x_J||I|+|I|(|J|-1)+|x_I||x_J|} x_I\wedge Q(x_J)\\
&=(-1)^{|x_I||J|+A_I+A_J+|I|} x_I\wedge Q(x_J)
\end{align*}
Hence $(-1)^{|x_I||J|+A_I+A_J+|I|} x_I\otimes Q(x_J)$ corresponds via $\tilde{dec}$ to
\begin{align*}
(-1)^{|x_I||J|+A_I+A_J+|I|+ A_I+A_J+|x_I|(|J|-1)} \bar{x}_I\otimes \bar{Q}(\bar{x}_J)=(-1)^{|I|+|x_I|} \bar{x}_I\otimes \bar{Q}(\bar{x}_J)
\end{align*}
This proves the claim (\ref{2e}). Hence $\bar{Q}$ is a coderivation of $\overline{S(\mathfrak{g}[1])}$.
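The last sign simplification is a mod-2 identity in the four parities $|I|$, $|x_I|$, $|J|$, $|x_J|$ (the common summand $A_I+A_J$ cancels). It can be double-checked mechanically; the sketch below (plain Python, with variable names of our choosing: $a=|I|$, $b=|x_I|$, $c=|J|$, $e=|x_J|$) brute-forces the congruence over all parities:

```python
from itertools import product

# Only parities matter in an exponent of (-1).  We check
#   (a+b)(c+e) + e*a + a*(c-1) + b*e  ≡  b*c + a   (mod 2),
# which is the simplification used to pass from Q(x_J)∧x_I to x_I∧Q(x_J).
for a, b, c, e in product(range(2), repeat=4):
    lhs = (a + b) * (c + e) + e * a + a * (c - 1) + b * e
    rhs = b * c + a
    assert lhs % 2 == rhs % 2
print("sign identity verified for all parities")
```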
On the other hand, we would like to show that
\begin{align*}
\bar{d}(\bar{x}_I\odot \bar{x}_J)=\bar{d}(\bar{x}_I)\odot \bar{x}_J +(-1)^{|I|+|x_I|}\bar{x}_I\odot \bar{d}(\bar{x}_J)
\end{align*}
which implies that $\bar{d}$ defines a coderivation of $\overline{S(\mathfrak{g}[1])}$. In other words, the following diagram commutes
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@> \bar{d} >> \overline{S(\mathfrak{g}[1])}\\
@V \triangle' VV @V \triangle' VV\\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])} @>\bar{d}\otimes 1+1\otimes \bar{d}>> \overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}
\end{CD}$
\end{center}
First we note the following relations
\begin{center}
$\begin{CD}
\bar{x}_I\odot \bar{x}_J@> dec >> (-1)^{A_I+A_J+|x_I||J|}x_I\wedge x_J\\
@V \bar{d} VV @V (-1)^{|I|+|J|}d VV\\
\bar{d}(\bar{x}_I\odot\bar{x}_J) @>dec >> (-1)^{A_I+A_J+|x_I||J|+|I|+|J|} d(x_I\wedge x_J)
\end{CD}$
\end{center}
\begin{align*}
\bar{d}(\bar{x}_I)\odot \bar{x}_J\xrightarrow{dec} (-1)^{A_I+|I|+A_J+|J|(|x_I|+1)}dx_I\wedge x_J
\end{align*}
\begin{align*}
\bar{x}_I\odot \bar{d}(\bar{x}_J) \xrightarrow{dec} (-1)^{A_I+A_J+|J|+|J||x_I|}x_I\wedge dx_J
\end{align*}
\begin{align*}
(-1)^{|I|+|x_I|}\bar{x}_I\odot \bar{d}(\bar{x}_J) \xrightarrow{dec} (-1)^{A_I+A_J+|J|+|J||x_I|+|I|+|x_I|}x_I\wedge dx_J
\end{align*}
\begin{align*}
(-1)^{A_I+A_J+|x_I||J|+|I|+|J|} d(x_I\wedge x_J)&=(-1)^{A_I+A_J+|x_I||J|+|I|+|J|} dx_I\wedge x_J+(-1)^{A_I+A_J+|x_I||J|+|I|+|J|+|x_I|} x_I\wedge dx_J\\
\end{align*}
Hence $\bar{d}$ defines a coderivation of $\overline{S(\mathfrak{g}[1])}$.
In conclusion, $\bar{Q}'=\bar{d}+\bar{Q}$ is a coderivation of $\overline{S(\mathfrak{g}[1])}$. In other words, the following diagram commutes
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@> \bar{Q}' >> \overline{S(\mathfrak{g}[1])}\\
@V \triangle' VV @V \triangle' VV\\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])} @>\bar{Q}'\otimes 1+1\otimes \bar{Q}'>> \overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}
\end{CD}$
\end{center}
\section{Morphic elements}\
We have the comultiplication map $\Delta'$ and the coderivation $\bar{Q}'$ induced from the differential and the bracket of a differential graded Lie algebra $\mathfrak{g}$.
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@>\bar{Q}' >> \overline{S(\mathfrak{g}[1])}\\
@V\Delta' VV @VV\Delta' V \\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}@>\bar{Q}'\otimes id+id \otimes \bar{Q}'>>\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}
\end{CD}$
\end{center}
The following diagram $($coassociativity of $\Delta'$$)$ commutes, and is compatible with the coderivation $\bar{Q}'$
\begin{center}
$\begin{CD}
\overline{S(\mathfrak{g}[1])}@>\Delta'>> \overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}\\
@V\Delta' VV @VV\Delta'\otimes id V \\
\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}@>id\otimes \Delta'>>\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])} \otimes \overline{S(\mathfrak{g}[1])}
\end{CD}$
\end{center}
Hence we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,2)[a]{$\mathbb{H}^0$}
\obj(3,2)[b]{$\bigoplus_{i+j=0} \mathbb{H}^i\otimes \mathbb{H}^j$}
\obj(0,1)[c]{$\bigoplus_{i+j=0} \mathbb{H}^i\otimes \mathbb{H}^j$}
\obj(0,0)[d]{$\mathbb{H}^0\otimes \mathbb{H}^0$}
\obj(3,1)[e]{$\bigoplus_{a+b+c=0}\mathbb{H}^a\otimes \mathbb{H}^b\otimes \mathbb{H}^c$}
\obj(6,2)[f]{$\mathbb{H}^0\otimes \mathbb{H}^0$}
\obj(6,0)[g]{$\mathbb{H}^0\otimes \mathbb{H}^0\otimes \mathbb{H}^0$}
\mor{a}{b}{}
\mor{a}{c}{}
\mor{c}{d}{}
\mor{c}{e}{}
\mor{b}{e}{}
\mor{b}{f}{}
\mor{f}{e}{}
\mor{d}{e}{}
\mor{e}{g}{}
\mor{f}{g}{}
\mor{d}{g}{}
\enddc\]
\end{center}
This induces a comultiplication map $\mathbb{H}^0(J_n(\mathfrak{g}))\to \sum_{i+j=0} \mathbb{H}^i(J_n(\mathfrak{g}))\otimes \mathbb{H}^j(J_n(\mathfrak{g}))\to \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathbb{H}^0(J_n(\mathfrak{g}))$, where the last map is a projection.
We also have
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,1)[a]{$\overline{S(\mathfrak{g}[1])}$}
\obj(2,1)[b]{$\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}$}
\obj(1,0)[d]{$\overline{S(\mathfrak{g}[1])}\otimes \overline{S(\mathfrak{g}[1])}$}
\mor{a}{b}{$\Delta'$}
\mor{a}{d}{$\Delta'$}
\mor{b}{d}{$\tau$}
\enddc\]
\end{center}
where $\tau(a\otimes b)=(-1)^{|a||b|}b\otimes a$.
This induces
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,2)[a]{$\mathbb{H}^0$}
\obj(0,1)[b]{$\bigoplus_{i+j=0} \mathbb{H}^i\otimes \mathbb{H}^j$}
\obj(2,2)[c]{$\bigoplus_{i+j=0} \mathbb{H}^i \otimes \mathbb{H}^j$}
\obj(0,0)[d]{$\mathbb{H}^0\otimes \mathbb{H}^0$}
\obj(4,2)[e]{$\mathbb{H}^0\otimes \mathbb{H}^0$}
\mor{a}{b}{}
\mor{a}{c}{}
\mor{c}{b}{}
\mor{b}{d}{}
\mor{c}{e}{}
\mor{e}{d}{}
\enddc\]
\end{center}
So the comultiplication map induces $\mathbb{H}^0(J_n(\mathfrak{g}))\to sym^2(\mathbb{H}^0(J_n(\mathfrak{g})))\cong sym^2(\mathbb{H}^0(J_n(\mathfrak{g}))^*)^*$, where $V^*$ is the dual space of a vector space $V$ over $\mathbb{C}$.
\begin{remark}
When we assume that $H^0(\mathfrak{g})=0$, which is the main assumption for the universal Poisson deformation, we have $\mathbb{H}^i=0$ for $i<0$. In this case, we have simply $\bigoplus_{i+j=0} \mathbb{H}^i\otimes \mathbb{H}^j=\mathbb{H}^0\otimes \mathbb{H}^0$ and $\bigoplus_{a+b+c=0}\mathbb{H}^a\otimes \mathbb{H}^b\otimes \mathbb{H}^c=\mathbb{H}^0\otimes \mathbb{H}^0\otimes \mathbb{H}^0$.
\end{remark}
\begin{remark}
The comultiplication $\Delta'$ of $S_n\subset \overline{S(\mathfrak{g}[1])}$ induces a map
\begin{align*}
*:\mathbb{H}^0(J_n(\mathfrak{g}))^*\times \mathbb{H}^0(J_n(\mathfrak{g}))^*\to \mathbb{H}^0(J_n(\mathfrak{g}))^*
\end{align*}
satisfying the associative and commutative laws. Hence $\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$ is a commutative $\mathbb{C}$-algebra. Moreover, $\underbrace{\Delta'\circ \cdots \circ \Delta'}_{n+1}$ on $S_n$ is $0$, so $\underbrace{\mathbb{H}^0(J_n(\mathfrak{g}))^**\cdots *\mathbb{H}^0(J_n(\mathfrak{g}))^*}_{n+1}=0$.
In conclusion, set $\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^*$. Then $\mathbb{C}\oplus \mathfrak{m}_n^u$ is a local commutative $\mathbb{C}$-algebra with maximal ideal $\mathfrak{m}_n^u$ and residue $\mathbb{C}$ such that $(\mathfrak{m}_n^u)^{n+1}=0$. We note that for our infinitesimal Poisson deformations, $\mathfrak{m}_n^u$ is finite dimensional. Hence $\mathbb{C}\oplus \mathfrak{m}_n^u$ is artinian. This proves our main Theorem $\ref{2theorem}$ $(1)$ in the Introduction of Part II of the thesis.
\end{remark}
\subsection{Morphic elements and ring homomorphism}
\begin{lemma}\label{2le}
Let $A\xrightarrow{v} B\xrightarrow{w} C$ be a complex of vector spaces over $\mathbb{C}$ and let $C^*\xrightarrow{w^*} B^*\xrightarrow{v^*} A^*$ be the induced complex of dual vector spaces. Suppose that $[b_1]=[b_2]\in H(B)$ and $[f]=[g]\in H(B^*)$, where $[t]$ denotes the cohomology class of $t\in B$ or $t\in B^*$. Then $f(b_1)=g(b_2)$.
\end{lemma}
\begin{proof}
First we note that $w(b_1)=w(b_2)=0$ and $v^*(f)=f\circ v=0$, $v^*(g)= g\circ v=0$. Choose $a$ and $h$ with $v(a)=b_1-b_2$ and $w^*(h)=h\circ w=f-g$. Then $f(b_1)-g(b_2)=f(b_1)-g(b_1)+g(b_1)-g(b_2)=(f-g)(b_1)+g(b_1-b_2)=h\circ w(b_1)+g\circ v(a)=0$.
\end{proof}
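The lemma can be tested on a small numerical instance. The sketch below (plain Python; the specific maps $v$, $w$ and the particular cocycles are our illustrative choices, not taken from the text) builds a three-term complex $\mathbb{R}\xrightarrow{v}\mathbb{R}^3\xrightarrow{w}\mathbb{R}$ with $w\circ v=0$, picks cohomologous cocycles $b_1,b_2$ and cohomologous dual cocycles $f,g$, and checks $f(b_1)=g(b_2)$:

```python
# Complex A --v--> B --w--> C with w∘v = 0, where A = C = R and B = R^3.
def v(a):                    # v: A -> B
    return [a, a, 0]

def w(b):                    # w: B -> C
    return b[0] - b[1]

def dot(f, b):               # pairing of a covector f in B* with b in B
    return sum(x * y for x, y in zip(f, b))

assert w(v(1)) == 0          # w∘v = 0

# Cohomologous cocycles in B: b1 - b2 = v(2), and w(b1) = w(b2) = 0.
b1 = [1, 1, 5]
b2 = [x - y for x, y in zip(b1, v(2))]
assert w(b1) == 0 and w(b2) == 0

# Cohomologous cocycles in B*: f - g = h∘w with h = 4, and f∘v = g∘v = 0.
f = [1, -1, 2]
h = 4
g = [f[0] - h, f[1] + h, f[2]]   # subtract h·w as the covector (h, -h, 0)
assert dot(f, v(1)) == 0 and dot(g, v(1)) == 0

print(dot(f, b1), dot(g, b2))    # → 10 10: the two pairings agree
```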
Let $[\phi]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, where $\mathfrak{m}$ is the maximal ideal of some artinian local $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with residue $\mathbb{C}$ and $\mathfrak{m}^{n+1}=0$. Then $[\phi]$ defines a linear map $f:\mathbb{H}^0(J_n(\mathfrak{g}))^*\to \mathfrak{m}$ by $a\mapsto a(\phi)$. By Lemma \ref{2le}, the linear map $f$ is independent of the choices of representatives $a$ and $\phi$. Let's consider the bilinear map $\mathbb{H}^0(J_n(\mathfrak{g}))^*\times \mathbb{H}^0(J_n(\mathfrak{g}))^*\to \mathfrak{m}^2$ defined by $(a,b)\mapsto f(a)f(b)=a(\phi)b(\phi)$. Since $\mathfrak{m}$ is commutative, this induces a map
\begin{align*}
f\times f:sym^2(\mathbb{H}^0(J_n(\mathfrak{g}))^*)\to \mathfrak{m}^2
\end{align*}
\begin{definition}[morphic elements]
We call $[\phi]\in\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ a morphic element if $f\times f$ defines a ring homomorphism.
\end{definition}
In order for $f\times f$ to be a ring homomorphism (equivalently, for $[\phi]$ to be a morphic element), we must have $f(a) f(b)=f(a \cdot b)$, where $\cdot$ is induced from $\Delta':\mathbb{H}^0(J_n(\mathfrak{g}))\to sym^2 (\mathbb{H}^0(J_n(\mathfrak{g}))^*)^*$. More precisely, by taking the dual of $\Delta'$, we obtain the map $\cdot:sym^2(\mathbb{H}^0(J_n(\mathfrak{g}))^*) \cong sym^2(\mathbb{H}^0(J_n(\mathfrak{g})))^* \to \mathbb{H}^0(J_n(\mathfrak{g}))^*$ given by $a\cdot b=(a\otimes b)\circ \Delta'$. Hence $f(a)f(b)=f(a\cdot b)$ means that
\begin{align*}
(a\otimes b)(\phi\otimes \phi)=a(\phi)b(\phi)=(a \otimes b)(\Delta'(\phi))=(b\otimes a)(\Delta'(\phi))
\end{align*}
Hence $(a\otimes b)(\Delta'(\phi)-\phi\otimes \phi)=0$ for all $a,b\in \mathbb{H}^0(J_n(\mathfrak{g}))^*$. This implies that $\Delta'(\phi)=\phi \otimes \phi$. Hence for any morphic element $[\phi]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, we have $\Delta'(\phi)=\phi\otimes \phi$.
\subsection{Explicit description of a morphic element}\
We describe a morphic element $v$ in $\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ for some local artinian $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with residue $\mathbb{C}$ and $\mathfrak{m}^{n+1}=0$. When we consider the Jacobi bicomplex, a $0$-th cohomology class $v$ is of the form
\begin{align*}
v=v_1+\cdots +v_n
\end{align*}
where $v_i\in S^i(\mathfrak{g}[1])\otimes \mathfrak{m}$. In particular, $v_1\in g_1\otimes \mathfrak{m}$. For $v$ to be a morphic element, we must have $\Delta'(v)=v\otimes v$, where $\Delta'$ is the comultiplication map, so $\sum_i \Delta'(v_i)=\sum_{i,j} v_i\otimes v_j$. So we have the following relations
\begin{align*}
\Delta'(v_1)&=0\\
\Delta'(v_2)&=v_1\otimes v_1\\
\Delta'(v_3)&=v_1\otimes v_2+v_2\otimes v_1\\
\Delta'(v_4)&=v_1\otimes v_3+v_2\otimes v_2+v_3\otimes v_1\\
\cdots\\
\Delta'(v_n)&=v_1\otimes v_{n-1}+\cdots +v_{n-1}\otimes v_1
\end{align*}
So $v_i$ is determined by $v_1,...,v_{i-1}$ inductively, hence completely determined by $v_1$. Since $\Delta'(v_2)=v_1\otimes v_1$, we see that $v_2=\frac{1}{2}v_1\odot v_1$. Since $\Delta'(v_3)=\frac{1}{2}v_1\otimes (v_1\odot v_1)+\frac{1}{2} (v_1\odot v_1)\otimes v_1$, we have $v_3=\frac{1}{3!}v_1\odot v_1\odot v_1$. Inductively, since $\Delta'(v_n)=\frac{1}{(n-1)!}v_1\otimes (v_1\odot \cdots \odot v_1)+\frac{1}{2!(n-2)!} (v_1\odot v_1)\otimes(v_1\odot \cdots \odot v_1)+\frac{1}{3!(n-3)!}(v_1\odot v_1\odot v_1)\otimes (v_1\odot \cdots \odot v_1)+\cdots +\frac{1}{(n-2)!2!}(v_1\odot\cdots \odot v_1)\otimes (v_1\odot v_1)+\frac{1}{(n-1)!}(v_1\odot \cdots \odot v_1)\otimes v_1$, and $\binom{n}{i}=\frac{n!}{i!(n-i)!}$, we have $v_n=\frac{1}{n!}v_1\odot \cdots \odot v_1$. Hence a morphic element is of the form
\begin{align*}
v_1+\frac{1}{2!}v_1\odot v_1+\cdots+\frac{1}{n!}v_1\odot \cdots\odot v_1
\end{align*}
where $\frac{1}{i!}\underbrace{v_1\odot \cdots \odot v_1}_i\in sym^i g_1\otimes \mathfrak{m}^i$.
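The coefficient matching used in this induction is $\binom{n}{i}\frac{1}{n!}=\frac{1}{i!\,(n-i)!}$: with $\Delta'(v_1^{\odot n})=\sum_{i=1}^{n-1}\binom{n}{i}\,v_1^{\odot i}\otimes v_1^{\odot (n-i)}$, this is exactly what makes $v_n=\frac{1}{n!}v_1^{\odot n}$ satisfy $\Delta'(v_n)=\sum_{i+j=n,\,i,j\geq 1} v_i\otimes v_j$. A quick exact-arithmetic check (an illustrative Python sketch, not part of the text):

```python
from fractions import Fraction
from math import comb, factorial

# With v_k = v_1^{⊙k}/k! and Δ'(v_1^{⊙n}) = Σ_i C(n,i) v_1^{⊙i} ⊗ v_1^{⊙(n-i)},
# the relation Δ'(v_n) = Σ v_i ⊗ v_{n-i} requires C(n,i)/n! = 1/(i!(n-i)!).
for n in range(2, 10):
    for i in range(1, n):
        assert Fraction(comb(n, i), factorial(n)) == \
               Fraction(1, factorial(i) * factorial(n - i))
print("v_n = v_1^{⊙n}/n! satisfies the coproduct relations up to n = 9")
```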
\chapter{Infinitesimal Poisson deformations}\label{chapter5}
\section{Bracket calculus}\
Let $(\mathfrak{g}=\bigoplus_i g_i,\bar{\partial},[-,-])$ be a differential graded Lie algebra and let $R$ be a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$.
Then $\mathfrak{g}\otimes R$ is a differential graded Lie algebra with $(\mathfrak{g}\otimes R)_i=g_i\otimes R$, $\bar{\partial}(a\otimes r)=\bar{\partial}a\otimes r$ and $[a\otimes r_1,b\otimes r_2]=[a,b]\otimes r_1r_2$ for $a,b\in \mathfrak{g}$. Let $X$ be a complex manifold. Since $ (\bigoplus_{p+q-1=i, p\geq 0, q\geq 1} A^{0,p}(X, \wedge^q T),\bar{\partial}, [-,-])$ is a differential graded Lie algebra (see Appendix \ref{appendixc}), this induces a differential graded Lie algebra structure on $\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(X,\wedge^q T)\otimes_{\mathbb{C}} R$.
\begin{definition}
Let $X$ be a compact complex manifold. We regard any element of the differential graded Lie algebra $A=\bigoplus_{p+q-1=i,p\geq 0,q\geq 1} A^{0,p}(X,\wedge^q T)\otimes_{\mathbb{C}} R$ as an operator acting on $A$ itself in the following way. Let $\psi\in A$. We define
\begin{align*}
\psi=[\psi,-]:A&\to A\\
a &\mapsto [\psi,a]
\end{align*}
\end{definition}
\begin{definition}
Let $\bar{\partial}$ be the differential of $\mathfrak{g}$, regarded as an operator acting on $A$. We formally define $[\bar{\partial},a]:=\bar{\partial}a$, i.e.
\begin{align*}
\bar{\partial}:=[\bar{\partial},-]:A&\to A\\
a&\mapsto [\bar{\partial},a]:=\bar{\partial}a
\end{align*}
\end{definition}
\begin{proposition}\label{2pro}
As an operator acting on $A$, we have
$\bar{\partial}a=\bar{\partial}\circ a-(-1)^{|a|} a\circ \bar{\partial}$
\end{proposition}
\begin{proof}
$[\bar{\partial},[a,b]]=\bar{\partial}[a,b]=[\bar{\partial}a,b]+(-1)^{|a|}[a,\bar{\partial}b]$
\end{proof}
\begin{proposition}\label{2pk}
As an operator acting on $A$, we have $[a,b]=a\circ b-(-1)^{|a||b|} b\circ a$
\end{proposition}
\begin{proof}
$[[a,b],c]=[a,[b,c]]-(-1)^{|a||b|}[b,[a,c]]$.
\end{proof}
\begin{definition}
We would like to define a differential $($which we still denote by $\bar{\partial}$$)$ which generalizes Proposition $\ref{2pro}$ on the space $L_A$ of operators on $A$ generated under composition by $\{L_a=[a,-]\,|\,a\in A\}$, the identity, and $\bar{\partial}:=[\bar{\partial},-]$. So $L_A$ has an identity element, which we denote by $id$. We set $deg(\bar{\partial})=1$ and $deg(id)=0$. We define $\bar{\partial} X:= \bar{\partial}\circ X-(-1)^{|X|}X\circ \bar{\partial}$ for $X\in L_A$. Then $\bar{\partial}(id)=0$ and $\bar{\partial}(\bar{\partial})=\bar{\partial}([\bar{\partial},-])=0$ $($the first $\bar{\partial}$ is the differential on $L_A$ and the second $\bar{\partial}$ is an operator in $L_A$$)$. We define the grading on $L_A$ in the following way: $deg(a_1\circ\cdots \circ a_n)=|a_1|+\cdots +|a_n|$ for $a_i\in A$ as operators, where $|a|$ is the degree of $a$ in $A$. Then $|\bar{\partial}X|=|X|+1$ since $|\bar{\partial}|=1$, and $\bar{\partial}(\bar{\partial}X)=0$ for $X\in L_A$. We define the bracket on $L_A$ in the following way: $[X,Y]=X\circ Y-(-1)^{|X||Y|}Y\circ X$.
\end{definition}
So we have $[\bar{\partial},a]=\bar{\partial} a=\bar{\partial}\circ a-(-1)^{|a|}a\circ \bar{\partial}$ and $[a,b]=a\circ b-(-1)^{|a||b|}b\circ a$ for $a,b\in A$ as operators. This coincides with Proposition \ref{2pro} and Proposition \ref{2pk}.
\begin{remark}
We have an embedding
\begin{align*}
A&\hookrightarrow L_A\\
a&\mapsto a:=[a,-]
\end{align*}
which is bracket $([-,-])$ preserving, and differential $(\bar{\partial})$ preserving, in other words,
\begin{align*}
[a,b]&\mapsto [[a,b],-]=[a,[b,-]]-(-1)^{|a||b|}[b,[a,-]]=a\circ b-(-1)^{|a||b|}b\circ a\\
\bar{\partial}a &\mapsto [\bar{\partial}a,-]=\bar{\partial}\circ a-(-1)^{|a|}a\circ \bar{\partial}
\end{align*}
\end{remark}
\begin{proposition}[Product Rule]\label{2p}
For $X,Y\in L_A$, we have $\bar{\partial}(X\circ Y)=\bar{\partial}X\circ Y+(-1)^{|X|}X\circ \bar{\partial}Y.$
\end{proposition}
\begin{proof}
$\bar{\partial}(X\circ Y)=\bar{\partial}\circ X\circ Y-(-1)^{|X|+|Y|}X\circ Y\circ \bar{\partial}$. On the other hand $\bar{\partial}X\circ Y=\bar{\partial}\circ X\circ Y-(-1)^{|X|}X\circ \bar{\partial}\circ Y$, and $(-1)^{|X|}X\circ \bar{\partial}Y=(-1)^{|X|}X\circ \bar{\partial}\circ Y-(-1)^{|X|+|Y|}X\circ Y\circ \bar{\partial}$.
\end{proof}
\begin{example}
$exp(tu)\circ u=u\circ exp(tu)$ for $t\in \mathbb{R}$. Here $exp(x)=id+x+\frac{x\circ x}{2!}+\cdots$ for $x\in L_A$.
\end{example}
\begin{example}
$\frac{d}{dt} exp(tu)=exp(tu)u$ for $t\in \mathbb{R}$. Indeed, $\frac{d}{dt}exp(tu)=\frac{d}{dt}(id+tu+\frac{(tu)\circ (tu)}{2!}+\cdots)=\frac{d}{dt}(id+tu+\frac{1}{2}t^2 u\circ u+\cdots)=u+tu\circ u+\frac{1}{2}t^2u\circ u\circ u+\cdots=u\circ(id+tu+\frac{1}{2}t^2u\circ u+\cdots)=u\,exp(tu)$, which equals $exp(tu)u$ by the previous example.
\end{example}
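Both examples can be verified exactly for a nilpotent $u$, where $exp(tu)$ is a finite sum. The sketch below (plain Python with exact rational arithmetic; the particular $3\times 3$ nilpotent matrix is our illustrative choice) checks $exp(tu)\circ u=u\circ exp(tu)$ and $\frac{d}{dt}exp(tu)=exp(tu)\circ u$:

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(X, Y, c=1):          # X + c·Y entrywise
    return [[x + c * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
U = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]      # nilpotent: U^3 = 0
U2 = matmul(U, U)

def exp_tU(t):                              # exp(tU) = I + tU + t^2 U^2/2, exact
    return add(add(I, U, t), U2, t * t / 2)

t = Fraction(3, 7)
# exp(tu) ∘ u = u ∘ exp(tu)
assert matmul(exp_tU(t), U) == matmul(U, exp_tU(t))
# d/dt exp(tU) = U + t U^2 equals exp(tU) ∘ U
assert add(U, U2, t) == matmul(exp_tU(t), U)
print("both identities hold for this nilpotent u")
```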
\begin{example}
If $a\in A$ is holomorphic, then as an operator $\bar{\partial}(exp(a))=0$. Indeed, since $\bar{\partial}a=0$, we have $\bar{\partial}(exp(a))=\bar{\partial}(id+a+\frac{1}{2}a\circ a+\cdots)=0$ by applying the product rule.
\end{example}
\begin{example}
Let $deg(a)=0$. Then as an operator
\begin{align*}
exp(a)\circ \bar{\partial}(exp(a))=[D(ad(a))(\bar{\partial}a),-]=[\bar{\partial}a-\frac{[a,\bar{\partial}a]}{2!}+\frac{[a,[a,\bar{\partial}a]]}{3!}+\cdots,-]
\end{align*}
where
\begin{align*}
D(x)=\frac{exp(x)-1}{x}=\sum_{i=0}^{\infty} \frac{x^i}{(i+1)!}
\end{align*}
\end{example}
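The series expansion of $D(x)$ can be checked numerically against the closed form $(e^x-1)/x$; a small sketch (plain Python, sample points of our choosing):

```python
from math import exp, factorial

def D_closed(x):
    return (exp(x) - 1) / x

def D_series(x, terms=30):          # truncation of sum_{i>=0} x^i/(i+1)!
    return sum(x**i / factorial(i + 1) for i in range(terms))

for x in (0.1, 1.0, -2.5, 3.0):
    assert abs(D_closed(x) - D_series(x)) < 1e-12
print("D(x) = (exp(x)-1)/x agrees with sum x^i/(i+1)!")
```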
\section{Deformations of a compact holomorphic Poisson manifold}
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold. In other words, the structure sheaf $\mathcal{O}_X$ is a sheaf of Poisson algebras induced by $\Lambda_0$. We define an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $Spec\,R$, where $R$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$.
\begin{definition}
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold. An infinitesimal Poisson deformation of $(X,\Lambda_0)$ is a cartesian diagram
\begin{center}
$\begin{CD}
X@>i>> \mathcal{X}\\
@VVV @VV\pi V\\
Spec(\mathbb{C})@>>> Spec(R)
\end{CD}$
\end{center}
where $\pi$ is a proper flat morphism, $R$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$, $X\cong \mathcal{X}\times_{Spec(R)} Spec(\mathbb{C})$, and $\mathcal{O}_{\mathcal{X}}$ is a sheaf of Poisson $R$-algebras which induces the Poisson $\mathbb{C}$-algebra structure on $\mathcal{O}_X$ given by $\Lambda_0$.
\end{definition}
\begin{remark}
When we ignore Poisson structures, an infinitesimal Poisson deformation is simply an infinitesimal deformation of the underlying compact complex manifold. As complex analytic spaces, $X$ is a closed subspace of $\mathcal{X}$. Since $Spec(R)$ is a fat point $($a one point set with structure sheaf $R$ itself$)$ and $X\cong \mathcal{X}\times_{Spec(R)} Spec(\mathbb{C})$, we have $\mathcal{X}\cong X$ topologically.
\end{remark}
\begin{remark}
We can assume that $i:X\to \mathcal{X}$ is the identity map on topological spaces. In other words, $i$ simply amounts to the condition that $\mathcal{O}_{\mathcal{X}}(U)\to \mathcal{O}_X(U)$ is surjective for any open set $U$ of $X$. In the sequel, we assume that $i$ is the identity on topological spaces.
\end{remark}
\begin{remark}
The cartesian diagram can be seen in the sheaf-theoretic setting, which we mainly use in Part II of the thesis. The cartesian diagram is equivalent to the following cartesian diagram of sheaves
\begin{center}
$\begin{CD}
\mathcal{O}_{\mathcal{X}}@>>> \mathcal{O}_X\\
@AAA @AAA\\
R@>>> R/\mathfrak{m}\cong \mathbb{C}
\end{CD}$
\end{center}
i.e. a sheaf morphism $\mathcal{O}_{\mathcal{X}}\to \mathcal{O}_X$ over $\mathbb{C}$ on $X$ such that $\mathcal{O}_{\mathcal{X}}$ is a sheaf of flat Poisson $R$-algebras with $\mathcal{O}_{\mathcal{X}}\otimes_R R/\mathfrak{m}\cong \mathcal{O}_X$ as sheaves of Poisson $\mathbb{C}$-algebras in the following sense:
for any open set $U$ of $X$, the Poisson bracket $\{-,-\}_{\mathcal{X}}:\mathcal{O}_{\mathcal{X}}(U)\times \mathcal{O}_{\mathcal{X}}(U)\to \mathcal{O}_{\mathcal{X}}(U)$ induces the Poisson bracket on $\mathcal{O}_{X}(U)$ in the following way: $\{-,-\}_X:(\mathcal{O}_{\mathcal{X}}(U)\otimes_{R} R/\mathfrak{m}) \times (\mathcal{O}_{\mathcal{X}}(U)\otimes_{R} R/\mathfrak{m}) \to \mathcal{O}_{\mathcal{X}}(U)\otimes_{R} R/\mathfrak{m}$ is given by $\{f\otimes r_1,g\otimes r_2 \}_X=\{f,g\}_{\mathcal{X}}\otimes r_1r_2$.
\end{remark}
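For instance, in the simplest case $R=\mathbb{C}[\epsilon]/(\epsilon^2)$ $($a first order deformation$)$, flatness of $\mathcal{O}_{\mathcal{X}}$ over $R$ is equivalent to the exactness of the sequence
\begin{align*}
0\to \mathcal{O}_X\xrightarrow{\cdot\epsilon} \mathcal{O}_{\mathcal{X}}\to \mathcal{O}_X\to 0
\end{align*}
so that $\mathcal{O}_{\mathcal{X}}$ is an extension of $\mathcal{O}_X$ by $\epsilon\mathcal{O}_{\mathcal{X}}\cong \mathcal{O}_X$.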
\begin{proposition}\label{2proq}
Let $U$ be an open ball containing $0\in \mathbb{C}^n$ with coordinates $(z_1,...,z_n)$. Then $U$ is rigid: every infinitesimal deformation $\mathcal{X}$ of $U$ over $Spec(R)$ is trivial, that is, $\mathcal{X}\cong U \times Spec(R)$.
\end{proposition}
\begin{proof}
We refer to \cite{Ser06} Theorem 1.2.4.
\end{proof}
Let $\mathcal{X}$ be an infinitesimal Poisson deformation of a compact holomorphic Poisson manifold $(X,\Lambda_0)$ over a local artinian $\mathbb{C}$-algebra $(R,\mathfrak{m})$ with residue $\mathbb{C}$. Let $\{U_i\}$ be an open covering of $X$, where $U_i$ is isomorphic to an open ball. By Proposition \ref{2proq}, locally an infinitesimal deformation $\mathcal{X}$ of the underlying complex manifold $X$ is
\begin{center}
$\begin{CD}
U_i@>>> \mathcal{X}|_{U_i}\cong U_i\times Spec(R)\\
@VVV @VV\pi V\\
Spec(\mathbb{C})@>>> Spec(R)
\end{CD}$
\end{center}
We call such an open cover $\{U_i\}$ a locally trivial open covering of $\mathcal{X}$.
\begin{remark}
Let $\{U_i\}$ be a locally trivial open covering of $\mathcal{X}$. Since $\mathcal{O}_{\mathcal{X}}(U_i)\cong \mathcal{O}_X(U_i)\otimes_{\mathbb{C}} R$, the Poisson $R$-algebra structure on $\mathcal{O}_{\mathcal{X}}(U_i)$ is encoded in some holomorphic bivector field $\Lambda_i' \in \Gamma(U_i,\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes_{\mathbb{C}} R$ with $\bar{\partial} \Lambda_i'=0$ and $[\Lambda_i',\Lambda_i']=0$, where $\bar{\partial}$ on $\Gamma(U_i,\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes_{\mathbb{C}} R$ is induced from $\bar{\partial}$ on $\Gamma(U_i,\mathscr{A}^{0,0}(\wedge^2 T_X))$, and $[-,-]$ is induced from the bracket on $U_i$ by extending $R$-linearly. More precisely, let $U_i$ be an open ball with coordinates $z=(z_1,...,z_n)$. Then the Poisson structure on $\mathcal{O}_\mathcal{X}(U_i)$ over $R$, where $R$ is generated by $<1, m_1,...,m_r>$ over $\mathbb{C}$, is encoded in the bivector field
\begin{align*}
\Lambda_i'=\sum_{\alpha,\beta} \Lambda_{\alpha\beta} \frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}= \sum_{\alpha,\beta} (\Lambda_{\alpha\beta}^0+m_1\Lambda_{\alpha\beta}^1+\cdots +m_r\Lambda_{\alpha\beta}^r)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}
\end{align*}
with $[\Lambda_i',\Lambda_i']=0$, where $\Lambda_{\alpha\beta}^k(z)$ is holomorphic on $U_i$ for each $k$. Then
\begin{align*}
\{f,g\}=<\Lambda_i',df\wedge dg>=<\sum_{\alpha,\beta} \Lambda_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}} \wedge \frac{\partial}{\partial z_{\beta}}, df\wedge dg>=\sum_{\alpha,\beta} \Lambda_{\alpha\beta}\left(\frac{\partial f}{\partial z_{\alpha}}\frac{\partial g}{\partial z_{\beta}}-\frac{\partial g}{\partial z_{\alpha}}\frac{\partial f}{\partial z_{\beta}}\right)
\end{align*}
On the other hand,
\begin{align*}
[[\Lambda_i',f],g]&=\sum_{\alpha,\beta} [[\Lambda_{\alpha\beta}\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}},f],g]=\sum_{\alpha,\beta}[\Lambda_{\alpha\beta}\frac{\partial f}{\partial z_{\alpha}}\frac{\partial}{\partial z_{\beta}}-\Lambda_{\alpha\beta}\frac{\partial f}{\partial z_{\beta}}\frac{\partial}{\partial z_{\alpha}},g]\\
&=\sum_{\alpha,\beta}\Lambda_{\alpha\beta} \left(\frac{\partial f}{\partial z_{\alpha}}\frac{\partial g}{\partial z_{\beta}}-\frac{\partial g}{\partial z_{\alpha}}\frac{\partial f}{\partial z_{\beta}}\right)
\end{align*}
So we have the expression
\begin{align*}
\{f,g\}=[[\Lambda_i',f],g]
\end{align*}
Note that since $\Lambda_i'$ induces the Poisson structure of $(U_i,\Lambda_0)$ modulo $\mathfrak{m}$, the Poisson structure of $(X,\Lambda_0)$ restricted to $U_i$ is given by
\begin{align*}
\Lambda_0=\sum_{\alpha,\beta}\Lambda_{\alpha\beta}^0\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}
\end{align*}
\end{remark}
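For instance, when $R=\mathbb{C}[\epsilon]/(\epsilon^2)$, we may write $\Lambda_i'=\Lambda_0+\epsilon\Lambda_i^1$ with $\Lambda_i^1$ a holomorphic bivector field on $U_i$, and since $\epsilon^2=0$ and $[\Lambda_0,\Lambda_0]=0$,
\begin{align*}
[\Lambda_i',\Lambda_i']=[\Lambda_0,\Lambda_0]+2\epsilon[\Lambda_0,\Lambda_i^1]=2\epsilon[\Lambda_0,\Lambda_i^1]
\end{align*}
so the integrability condition $[\Lambda_i',\Lambda_i']=0$ says precisely that $[\Lambda_0,\Lambda_i^1]=0$: a first order Poisson deformation of $U_i$ is encoded in a holomorphic bivector field $\Lambda_i^1$ with $[\Lambda_0,\Lambda_i^1]=0$.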
\begin{definition}[equivalent Poisson deformations]
We say that two infinitesimal Poisson deformations $\mathcal{X}$ and $\mathcal{Y}$ of a holomorphic Poisson manifold $(X,\Lambda_0)$ over a local artinian $\mathbb{C}$-algebra $R$ are equivalent if the following diagram is commutative.
\begin{center}
\[\begindc{\commdiag}[50]
\obj(1,2)[aa]{$(X,\Lambda_0)$}
\obj(0,1)[bb]{$\mathcal{X}$}
\obj(2,1)[cc]{$\mathcal{Y}$}
\obj(1,0)[dd]{$Spec\,R$}
\mor{aa}{bb}{}
\mor{aa}{cc}{}
\mor{bb}{cc}{$f \cong $}
\mor{bb}{dd}{}
\mor{cc}{dd}{}
\enddc\]
\end{center}
where $f:\mathcal{X}\to \mathcal{Y}$ is a Poisson isomorphism. In other words, $f^\sharp:\mathcal{O}_{\mathcal{Y}}\to f_*\mathcal{O}_{\mathcal{X}}$ is an isomorphism of Poisson sheaves.
\end{definition}
\section{Integrability condition}\
Let $\mathcal{X}$ be an infinitesimal Poisson deformation of a compact holomorphic Poisson manifold $(X,\Lambda_0)$ over $R$, where $(R,\mathfrak{m})$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$.
Let $\mathcal{U}=\{U_{\alpha}\}$ be a locally trivial open covering of $\mathcal{X}$. Then we have a set of isomorphisms of Poisson $R$-algebras
\begin{align*}
\varphi_{\alpha}:\mathcal{O}_{\mathcal{X}}(U_{\alpha})\xrightarrow{\thicksim} \mathcal{O}_X(U_{\alpha})\otimes R
\end{align*}
where the Poisson $R$-algebra structure on $\mathcal{O}_X(U_{\alpha})\otimes R$, which we denote by $\Lambda_{\alpha}'$, is induced from the Poisson structure of $\mathcal{O}_{\mathcal{X}}(U_{\alpha})$ via $\varphi_{\alpha}$. The maps
\begin{align*}
\theta_{\alpha\beta}:=\varphi_{\alpha}\varphi_{\beta}^{-1}:(\mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R,\Lambda_{\beta}')\to (\mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R,\Lambda_{\alpha}')
\end{align*}
are Poisson isomorphisms satisfying the cocycle condition $\theta_{\gamma\alpha}\circ \theta_{\alpha\beta}=\theta_{\gamma\beta}$.
\begin{lemma}\label{2len}
In the above, if $(R,\mathfrak{m})$ has exponent $n$ $($i.e. $\mathfrak{m}^{n+1}=0)$, then $\theta_{\alpha\beta}=exp(t_{\alpha\beta})=I+t_{\alpha\beta}+\frac{1}{2}(t_{\alpha\beta})^2+\cdots +\frac{1}{n!}(t_{\alpha\beta})^n$ for some $\{t_{\alpha\beta}\}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}$.
\end{lemma}
\begin{proof}
We prove this by induction on the exponent $n$ of the maximal ideal $\mathfrak{m}$ of a local artinian $\mathbb{C}$-algebra $(R,\mathfrak{m})$.
First we prove that the proposition holds for $n=1$, so we assume that $\mathfrak{m}^2=0$. For $f\in \mathcal{O}_X(U_{\alpha}\cap U_{\beta})$ we write $f:=f\otimes 1\in \mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R$. Note that $\theta_{\alpha\beta}$ is completely determined by its values on $\mathcal{O}_X(U_{\alpha}\cap U_{\beta})$ since $\theta_{\alpha\beta}$ is an $R$-algebra map. Let $dim_{\mathbb{C}}\, \mathfrak{m}=r$ with $\mathfrak{m}=<e_1,...,e_r>$. Since $R=\mathbb{C}\oplus \mathfrak{m}$, we can write $\theta_{\alpha\beta}(f)=f+e_1 h_1(f)+\cdots +e_r h_r(f)$. Then $t_{\alpha\beta}:=\sum_i e_ih_i$ is a derivation. Indeed, for $f:=f\otimes 1, g:=g\otimes 1\in \mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R$, we have $\theta_{\alpha\beta}(fg)=\theta_{\alpha\beta}(f)\theta_{\alpha\beta}(g)=(f+e_1h_1(f)+\cdots +e_rh_r(f))(g+e_1h_1(g)+ \cdots + e_rh_r(g))=fg+e_1(fh_1(g)+gh_1(f))+\cdots + e_r(fh_r(g)+gh_r(f))$ since $\mathfrak{m}^2=0$, so that $h_i(fg)=fh_i(g)+gh_i(f)$ for each $i$. Hence $\{t_{\alpha\beta}=\sum_i e_ih_i\}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}$ and $\theta_{\alpha\beta}=I+t_{\alpha\beta}=exp(t_{\alpha\beta})$ since $\mathfrak{m}^2=0$.
Now we assume that the proposition holds for exponent up to $n-1$. Let $(R,\mathfrak{m})$ be a local artinian $\mathbb{C}$-algebra with $\mathfrak{m}^{n+1}=0$. Then $(R/\mathfrak{m}^n,\mathfrak{m}/\mathfrak{m}^{n})$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$ and exponent $n-1$.
We note that $\theta_{\alpha\beta}:\mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R \to \mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R$ induces $\bar{\theta}_{\alpha\beta}:\mathcal{O}(U_{\alpha}\cap U_{\beta})\otimes (R/\mathfrak{m}^n) \to \mathcal{O}(U_{\alpha}\cap U_{\beta})\otimes (R/\mathfrak{m}^n)$. Then by the induction hypothesis, $\bar{\theta}_{\alpha\beta}=exp(\bar{t}_{\alpha\beta})$ for some $\bar{t}_{\alpha\beta}\in C^1(\mathcal{U},\Theta_X)\otimes (\mathfrak{m}/\mathfrak{m}^n)$.
Let $t_{\alpha\beta}$ be an arbitrary lifting of $\bar{t}_{\alpha\beta}$ via $\mathfrak{m}\to \mathfrak{m}/\mathfrak{m}^n$, and set $\eta_{\alpha\beta}:=\theta_{\alpha\beta}-exp(t_{\alpha\beta})$. Since $t_{\alpha\beta}$ lifts $\bar{t}_{\alpha\beta}$, $\eta_{\alpha\beta}$ vanishes modulo $\mathfrak{m}^n$. We claim that $\{\eta_{\alpha\beta}\} \in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}^n$. Indeed, let $f,g$ be as above. Then $\eta_{\alpha\beta}(fg)=\theta_{\alpha\beta}(f)\theta_{\alpha\beta}(g)-(exp(t_{\alpha\beta})f)(exp(t_{\alpha\beta})g)=\theta_{\alpha\beta}(f)(\theta_{\alpha\beta}-exp(t_{\alpha\beta}))(g)+(exp(t_{\alpha\beta})g)(\theta_{\alpha\beta}-exp(t_{\alpha\beta}))(f)=f\eta_{\alpha\beta}(g)+g\eta_{\alpha\beta}(f)$, where the last equality holds since $\eta_{\alpha\beta}(g),\eta_{\alpha\beta}(f)\in \mathcal{O}(U_{\alpha}\cap U_{\beta})\otimes \mathfrak{m}^n$ and $\mathfrak{m}^{n+1}=0$. So $\eta_{\alpha\beta}$ is a derivation and $\{\eta_{\alpha\beta}\}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}^n$. Hence we have
\begin{align*}
\theta_{\alpha\beta}=exp(t_{\alpha\beta})+\eta_{\alpha\beta}=exp(t_{\alpha\beta}+\eta_{\alpha\beta})
\end{align*}
where the second equality holds since $\eta_{\alpha\beta}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}^n$ and $\mathfrak{m}^{n+1}=0$, so that all mixed terms in the expansion of $exp(t_{\alpha\beta}+\eta_{\alpha\beta})$ vanish.
\end{proof}
\begin{proposition}[Compare \cite{Kod05} Theorem 2.4 page 64]\label{2prot}
Let $\mathcal{X}\to Spec\,R$ be an infinitesimal deformation of a compact complex manifold $X$. Then we have an isomorphism of sheaves $\mathscr{A}^{0,0}_{\mathcal{X}}\cong \mathscr{A}^{0,0}_X\otimes_{\mathbb{C}} R$. In other words, a locally trivial infinitesimal $C^{\infty}$-deformation of a compact complex manifold $X$ over a local artinian $\mathbb{C}$-algebra $R$ with residue $\mathbb{C}$ is $($globally$)$ trivial. \footnote{The proposition is the infinitesimal version of \cite{Kod05} Theorem 2.4 (page 64). It took a lot of time for me to prove this proposition. Eventually I got the idea of the proof of the proposition from \cite{Kod05} Theorem 2.4. } We call a map defining $\mathscr{A}^{0,0}_{\mathcal{X}}\cong \mathscr{A}_X^{0,0}\otimes R$ a $C^{\infty}$-trivialization.
\end{proposition}
\begin{remark}\label{2re}
Before the proof of Proposition $\ref{2prot}$, we clarify the meaning of $\mathscr{A}_{\mathcal{X}}^{0,0}\cong \mathscr{A}_X^{0,0}\otimes_{\mathbb{C}} R$. We define a sheaf on $X$, denoted by $\mathscr{A}^{0,0}_{\mathcal{X}}$, in the following way. Let $\mathcal{U}=\{U_{\alpha}\}$ be a locally trivial open covering of $\mathcal{X}$. So we have $\varphi_{\alpha}:\mathcal{O}_{\mathcal{X}}(U_{\alpha})\cong \mathcal{O}_X(U_{\alpha})\otimes R$ and $\varphi_{\alpha}\varphi_{\beta}^{-1}=exp(t_{\alpha\beta})$ on $\mathcal{O}_X(U_\alpha\cap U_\beta)$ satisfying the cocycle condition, where $\{t_{\alpha\beta}\}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}$. Let $\mathscr{A}^{0,0}_X$ be the sheaf of complex valued $C^{\infty}$-functions on $X$. We will denote its sections on $U$ by $\mathscr{A}^{0,0}_X(U):=\Gamma(U,\mathscr{A}^{0,0}_X)$ for an open set $U$ of $X$. Then $exp(t_{\alpha\beta}): \mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R\to \mathcal{O}_X(U_{\alpha}\cap U_{\beta})\otimes R$ induces $exp(t_{\alpha\beta}):\mathscr{A}^{0,0}_X(U_{\alpha}\cap U_{\beta})\otimes R\to \mathscr{A}^{0,0}_X(U_{\alpha}\cap U_{\beta})\otimes R$. Since $\{exp(t_{\alpha\beta})\}$ satisfies the cocycle condition, this defines a sheaf locally isomorphic to $\mathscr{A}^{0,0}_X|_{U_{\alpha}}\otimes R$ on $U_{\alpha}$, which is the sheaf $\mathscr{A}^{0,0}_{\mathcal{X}}$. We will denote its sections on $U$ by $\mathscr{A}^{0,0}_\mathcal{X}(U):=\Gamma(U,\mathscr{A}^{0,0}_\mathcal{X})$. We will use the same $\varphi_{\alpha}$ for the local trivialization of $\mathscr{A}^{0,0}_{\mathcal{X}} ($i.e. $\varphi_{\alpha}:\mathscr{A}^{0,0}_\mathcal{X}(U_{\alpha})\to \mathscr{A}_X^{0,0}(U_{\alpha})\otimes R)$. We would like to construct an explicit isomorphism of sheaves $\mathscr{A}^{0,0}_{\mathcal{X}}\to \mathscr{A}^{0,0}_X\otimes R$.
\end{remark}
\begin{proof}[Proof of Proposition $\ref{2prot}$]
We will prove by induction on the exponent of the maximal ideal $\mathfrak{m}$ of $R$. First we show that the proposition holds for $(R,\mathfrak{m})$ with $\mathfrak{m}^2=0$. Let $\{U_{\alpha}\}$ be a locally trivial open covering of $\mathcal{X}$. Then locally we have $\mathscr{A}^{0,0}_{\mathcal{X}}(U_{\alpha})\cong \mathscr{A}^{0,0}_X(U_{\alpha})\otimes_\mathbb{C} R$. There exist $exp(t_{\alpha\beta}):\mathscr{A}_X^{0,0}(U_{\beta}\cap U_{\alpha})\otimes_\mathbb{C} R\to \mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$ and we have $exp(t_{\alpha\gamma})=exp(t_{\alpha\beta})\circ exp(t_{\beta\gamma})$, which is the same as $1+t_{\alpha\gamma}=(1+t_{\alpha\beta})\circ (1+t_{\beta\gamma})=1+t_{\alpha\beta}+t_{\beta\gamma}$ on $U_{\alpha}\cap U_{\beta}\cap U_{\gamma}$. We also note that $t_{\alpha\alpha}=0$ and $t_{\alpha\beta}=-t_{\beta\alpha}$. We will show that $ \mathscr{A}^{0,0}_X\otimes_\mathbb{C} R\cong \mathscr{A}^{0,0}_{\mathcal{X}}$ by defining a map which is compatible with $exp(t_{\alpha\beta})$. Let $\{\rho_\alpha\}$ be a partition of unity subordinate to the open covering $\{U_{\alpha}\}$. Then for each $\alpha$, we set $s_\alpha:=\sum_\gamma \rho_{\gamma}t_{\alpha\gamma}\in \Gamma(U_{\alpha},\mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}$ $($the sum is over $\gamma$ with $U_\gamma\cap U_\alpha\ne \emptyset)$. Then we have
\begin{align*}
exp(t_{\alpha\beta})\circ exp(s_{\beta})&=1+t_{\alpha\beta}+\sum_\gamma \rho_\gamma t_{\beta\gamma}=1+\sum_\gamma \rho_\gamma t_{\alpha\beta}+\sum_\gamma \rho_\gamma t_{\beta\gamma}\\
&=1+\sum_\gamma \rho_\gamma t_{\alpha\gamma}=exp(s_\alpha)
\end{align*}
So we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[100]
\obj(1,2)[aa]{$\mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$}
\obj(0,1)[bb]{$\mathscr{A}_X^{0,0}(U_{\beta}\cap U_{\alpha})\otimes_\mathbb{C} R$}
\obj(2,1)[cc]{$\mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$}
\mor{aa}{bb}{$exp(s_{\beta})$}[\atright,\solidarrow]
\mor{aa}{cc}{$exp(s_{\alpha})$}
\mor{bb}{cc}{$exp(t_{\alpha\beta})=1+t_{\alpha\beta} $}
\enddc\]
\end{center}
This shows the required compatibility, so $\mathscr{A}^{0,0}_{\mathcal{X}}\cong \mathscr{A}^{0,0}_X\otimes_\mathbb{C} R$. So we proved the proposition for $R$ with exponent $1$.
Now we assume that the proposition holds for $R$ with exponent up to $n-1$. Let $(R,\mathfrak{m})$ be a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$ and exponent $n$ $($i.e. $\mathfrak{m}^{n+1}=0)$, and let $\mathcal{X}$ be an infinitesimal deformation of $X$ over $R$.
Then $(R/\mathfrak{m}^n,\mathfrak{m}/\mathfrak{m}^n)$ is a local artinian $\mathbb{C}$-algebra with exponent $n-1$. Let $\{U_{\alpha}\}$ be a locally trivial open covering of $\mathcal{X}$. There exist $exp(t_{\alpha\beta}):\mathscr{A}_X^{0,0}(U_{\beta}\cap U_{\alpha})\otimes_\mathbb{C} R\to \mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$ and we have $exp(t_{\alpha\gamma})=exp(t_{\alpha\beta})\circ exp(t_{\beta\gamma})$. Then by the induction hypothesis, we have $\mathscr{A}^{0,0}_{\mathcal{X}}\otimes_{R} R/\mathfrak{m}^n \cong \mathscr{A}^{0,0}_X \otimes_{\mathbb{C}} R/\mathfrak{m}^n$ and so there exist $\bar{s}_{\alpha}\in \Gamma(U_{\alpha}, \mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}/\mathfrak{m}^n$ such that $exp(\bar{s}_\alpha)=exp(\bar{t}_{\alpha\beta})\circ exp(\bar{s}_{\beta})$, where $\{\bar{t}_{\alpha\beta}\}$ is the image of $\{t_{\alpha\beta}\}$ under the natural map $C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}\to C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m}/\mathfrak{m}^n$.
Let $s_\alpha$ be a lifting of $\bar{s}_\alpha$ by the natural map $\Gamma(U_{\alpha},\mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}\to \Gamma(U_{\alpha},\mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}/\mathfrak{m}^n$. Then $exp(t_{\alpha\beta})\circ exp(s_\beta)-exp(s_\alpha)$ is $0$ modulo $\mathfrak{m}^n$ and is a derivation. Indeed, set $A=exp(t_{\alpha\beta})\circ exp(s_\beta)$ and $B=exp(s_\alpha)$. Then $(A-B)(fg)=A(f)(A-B)(g)+B(g)(A-B)(f)=f(A-B)(g)+g(A-B)(f)$ since $A-B=0$ modulo $\mathfrak{m}^n$ and $\mathfrak{m}^{n+1}=0$. Set $ r_{\alpha\beta}:=exp(t_{\alpha\beta})\circ exp(s_\beta)-exp(s_\alpha)$. Since $exp(-t_{\alpha\beta})=exp(t_{\beta\alpha})$, we have $exp(t_{\beta\alpha})\circ r_{\alpha\beta}=exp(s_\beta)-exp(t_{\beta\alpha})\circ exp(s_\alpha)$. So we get $exp(t_{\beta\alpha})\circ r_{\alpha\beta}=-r_{\beta\alpha}$. Since $r_{\alpha\beta}$ is $0$ modulo $\mathfrak{m}^n$, we have $r_{\alpha\beta}=-r_{\beta\alpha}$. Next we claim that $r_{\alpha\beta}=r_{\gamma\beta}+r_{\alpha\gamma}$. Indeed, since $r_{\alpha\beta}=exp(t_{\alpha\beta})\circ exp(s_{\beta})-exp(s_{\alpha})$ and $r_{\alpha\beta}=0$ modulo $\mathfrak{m}^n$, by applying $exp(t_{\gamma\alpha})$ on both sides,
\begin{align*}
r_{\alpha\beta}&=exp(t_{\gamma\alpha})\circ r_{\alpha\beta}= exp(t_{\gamma\alpha})\circ exp(t_{\alpha\beta})\circ exp(s_{\beta})-exp(t_{\gamma\alpha})\circ exp(s_{\alpha})\\
&=exp(t_{\gamma\beta})\circ exp(s_{\beta})-r_{\gamma\alpha}-exp(s_{\gamma})=r_{\gamma\beta}-r_{\gamma\alpha}=r_{\gamma\beta}+r_{\alpha\gamma}
\end{align*}
Let $\{\rho_\alpha\}$ be a partition of unity subordinate to the open covering $\{U_{\alpha}\}$. Then for each $\alpha$, we set $e_\alpha:=\sum_\gamma \rho_{\gamma}r_{\alpha\gamma}\in \Gamma(U_{\alpha},\mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}^n$. Then we have
\begin{align*}
exp(t_{\alpha\beta})\circ exp(s_{\beta}+e_{\beta})&=exp(t_{\alpha\beta})\circ exp(s_\beta)+e_\beta=exp(s_\alpha)+r_{\alpha\beta}+e_\beta\\
&=exp(s_{\alpha}) +\sum_\gamma \rho_{\gamma} r_{\alpha\beta}+\sum_{\gamma} \rho_{\gamma} r_{\beta\gamma}=exp(s_{\alpha})+\sum_{\gamma}\rho_{\gamma}(r_{\alpha\beta}+r_{\beta\gamma})\\
&=exp(s_{\alpha})+\sum_{\gamma}\rho_{\gamma}r_{\alpha\gamma}=exp(s_{\alpha})+e_\alpha=exp(s_{\alpha}+e_\alpha)
\end{align*}
So we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[100]
\obj(1,2)[aa]{$\mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$}
\obj(0,1)[bb]{$\mathscr{A}_X^{0,0}(U_{\beta}\cap U_{\alpha})\otimes_\mathbb{C} R$}
\obj(2,1)[cc]{$\mathscr{A}_X^{0,0}(U_{\alpha}\cap U_{\beta})\otimes_\mathbb{C} R$}
\mor{aa}{bb}{$exp(s_{\beta}+e_\beta)$}[\atright,\solidarrow]
\mor{aa}{cc}{$exp(s_{\alpha}+e_\alpha)$}
\mor{bb}{cc}{$exp(t_{\alpha\beta})$}
\enddc\]
\end{center}
This shows the required compatibility, so $\mathscr{A}^{0,0}_{\mathcal{X}}\cong \mathscr{A}^{0,0}_X\otimes_\mathbb{C} R$. This completes the proof of Proposition $\ref{2prot}$.
\end{proof}
As in the proof of Proposition \ref{2prot}, there is a global $C^{\infty}$-trivialization $C:\mathscr{A}^{0,0}_X\otimes R\cong \mathscr{A}^{0,0}_{\mathcal{X}}$. This induces $C:\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R \to \mathscr{A}^{0,0}_{\mathcal{X}}(U_{\alpha})$ such that $\varphi_{\alpha} \circ C$ is of the form $exp(s_{\alpha})$, where $s_{\alpha}\in \Gamma(U_{\alpha},\mathscr{A}^{0,0}(T_X))\otimes \mathfrak{m}$. Here $\varphi_{\alpha}:\mathscr{A}^{0,0}_\mathcal{X}(U_{\alpha})\cong \mathscr{A}^{0,0}_X(U_{\alpha}) \otimes R $ is a local trivialization of $\mathscr{A}^{0,0}_{\mathcal{X}}(U_{\alpha})$ $($see Remark \ref{2re}$)$.
So we have
\begin{align*}
exp(s_{\alpha})&=\varphi_{\alpha}\circ C:\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R\to \mathscr{A}^{0,0}_X(U_{\alpha})\otimes R\\
exp(-s_{\beta})&=C^{-1}\circ \varphi_{\beta}^{-1}
\end{align*}
So we have $exp(s_{\alpha})exp(-s_{\beta})=\varphi_{\alpha} \circ C \circ C^{-1}\circ \varphi_{\beta}^{-1}=exp(t_{\alpha\beta})$.
So we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[140]
\obj(0,2)[aa]{$\mathscr{A}^{0,0}_X(U_{\beta}\cap U_{\alpha})\otimes R$}
\obj(2,2)[bb]{$\mathscr{A}^{0,0}_X(U_{\alpha}\cap U_{\beta})\otimes R$}
\obj(1,1)[cc]{$\mathscr{A}^{0,0}_\mathcal{X}(U_{\alpha}\cap U_{\beta})$}
\obj(1,0)[dd]{$\mathscr{A}^{0,0}_X(U_{\alpha}\cap U_{\beta})\otimes R$}
\mor{aa}{bb}{$exp(t_{\alpha\beta})$}
\mor{cc}{aa}{$\varphi_{\beta}$}
\mor{cc}{bb}{$\varphi_{\alpha}$}[\atright,\solidarrow]
\mor{dd}{cc}{$C$}
\mor{dd}{aa}{$exp(s_{\beta})$}
\mor{dd}{bb}{$exp(s_{\alpha})$}[\atright,\solidarrow]
\enddc\]
\end{center}
\subsection{Integrability Condition}\
We set
\begin{center}
$\phi_{\alpha}:=exp(-s_{\alpha})\bar{\partial}(exp(s_{\alpha}))=[D(ad(-s_{\alpha}))(\bar{\partial}s_{\alpha}),-]$ as an operator acting on $\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R$
\end{center}
where $D$ is the function
\begin{align*}
D(x)=\frac{exp(x)-1}{x}=\sum_{i=0}^{\infty} \frac{x^i}{(i+1)!}
\end{align*}
We have
\begin{align*}
D(ad(-s_{\alpha}))(\bar{\partial}s_{\alpha})=\bar{\partial}s_{\alpha}-\frac{[s_{\alpha},\bar{\partial} s_{\alpha}]}{2!}+\frac{[s_{\alpha},[s_{\alpha},\bar{\partial}s_{\alpha}]]}{3!}-\cdots \in \Gamma(U_{\alpha},\mathscr{A}^{0,1}(T))\otimes \mathfrak{m}
\end{align*}
Note that we have
\begin{align*}
0=\bar{\partial} exp(t_{\alpha\beta})=\bar{\partial}(exp(s_{\alpha})exp(-s_{\beta}))=\bar{\partial}(exp(s_{\alpha}))exp(-s_{\beta})+exp(s_{\alpha})\bar{\partial}(exp(-s_{\beta}))
\end{align*}
So we have
\begin{align*}
exp(-s_{\alpha})\bar{\partial}(exp(s_{\alpha}))=-\bar{\partial}(exp(-s_{\beta}))exp(s_{\beta})
\end{align*}
Since $0=\bar{\partial}(exp(-s_{\beta})exp(s_{\beta}))=\bar{\partial}(exp(-s_{\beta}))exp(s_{\beta})+exp(-s_{\beta})\bar{\partial}(exp(s_{\beta}))$, we have
\begin{align*}
-\bar{\partial}(exp(-s_{\beta}))exp(s_{\beta})=exp(-s_{\beta})\bar{\partial}(exp(s_{\beta}))
\end{align*}
Hence
\begin{align*}
exp(-s_{\alpha})\bar{\partial}(exp(s_{\alpha}))= exp(-s_{\beta})\bar{\partial}(exp(s_{\beta}))
\end{align*}
So the $D(ad(-s_{\alpha}))(\bar{\partial}s_{\alpha})$ glue together to give a global section
\begin{align*}
\phi \in A^{0,1}(X,T)\otimes \mathfrak{m}
\end{align*}
Since
\begin{align*}
\bar{\partial}\phi_{\alpha}=\bar{\partial}(exp(-s_{\alpha}))\bar{\partial}(exp(s_{\alpha}))=\bar{\partial}(exp(-s_{\alpha}))exp(s_{\alpha})exp(-s_{\alpha})\bar{\partial}(exp(s_{\alpha}))=-\phi_{\alpha}\circ \phi_{\alpha}=-\frac{1}{2}[\phi_{\alpha},\phi_{\alpha}]
\end{align*}
we have
\begin{align*}
\bar{\partial}\phi=-\frac{1}{2}[\phi,\phi]
\end{align*}
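For instance, when $\mathfrak{m}^2=0$, all the bracket terms take values in $\mathfrak{m}^2=0$, so $\phi_{\alpha}=D(ad(-s_{\alpha}))(\bar{\partial}s_{\alpha})=\bar{\partial}s_{\alpha}$, and $exp(s_{\alpha})exp(-s_{\beta})=exp(t_{\alpha\beta})$ reads $s_{\alpha}-s_{\beta}=t_{\alpha\beta}$. Since $t_{\alpha\beta}$ is holomorphic,
\begin{align*}
\phi=\bar{\partial}s_{\alpha}=\bar{\partial}s_{\beta}+\bar{\partial}t_{\alpha\beta}=\bar{\partial}s_{\beta}\in A^{0,1}(X,T)\otimes \mathfrak{m}
\end{align*}
and the equation $\bar{\partial}\phi=-\frac{1}{2}[\phi,\phi]$ reduces to $\bar{\partial}\phi=0$: this is the classical Dolbeault representative corresponding to the \v{C}ech cocycle $\{t_{\alpha\beta}\}$ in the Kodaira-Spencer description of first order deformations.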
Now we consider the Poisson structures. We need the following lemma.
\begin{lemma}\label{2lemp}
We have $[exp(a) f,exp(a) g]=exp(a)[f,g]$, where $deg(a)=0$.
\end{lemma}
\begin{proof}
We claim that
\begin{align*}
\sum_{l+m=n} \frac{1}{l!m!}[[\underbrace{a,[a,[\cdots[a}_l,f]\cdots],[\underbrace{a,[a,[\cdots[a}_m,g]\cdots]]=\frac{1}{n!}[\underbrace{a,[a,[\cdots[a}_n,[f,g]]\cdots]
\end{align*}
Then this implies the lemma: summing the claim over $n\geq 0$ gives $[exp(a)f,exp(a)g]=\sum_{n\geq 0}\sum_{l+m=n} \frac{1}{l!m!}[[\underbrace{a,[a,[\cdots[a}_l,f]\cdots],[\underbrace{a,[a,[\cdots[a}_m,g]\cdots]]=\sum_{n\geq 0}\frac{1}{n!}[\underbrace{a,[a,[\cdots[a}_n,[f,g]]\cdots]=exp(a)[f,g]$. We will prove the claim by induction.
First we note that $[[a,f],g]=[a,[f,g]]-[f,[a,g]]$. So we have
\begin{align*}
[a,[f,g]]=[[a,f],g]+[f,[a,g]]
\end{align*}
Hence the claim holds for $n=1$. Now let's assume that the claim holds for $n-1$. We will prove that it holds for $n$.
\begin{align*}
&\frac{1}{n!}[\underbrace{a,[a,[\cdots[a}_n,[f,g]]\cdots]=\frac{1}{n}[a,\frac{1}{(n-1)!}[\underbrace{a,[a,[\cdots[a}_{n-1},[f,g]]\cdots]]\\
&=\frac{1}{n}[a,\sum_{p+q=n-1} \frac{1}{p!q!}[[\underbrace{a,[a,[\cdots[a}_p,f]\cdots],[\underbrace{a,[a,[\cdots[a}_q,g]\cdots]](\text{by induction hypothesis})\\
&=\frac{1}{n}\sum_{p+q=n-1}\frac{1}{p!q!}[[\underbrace{a,[a,[\cdots[a}_{p+1},f]\cdots],[\underbrace{a,[a,[\cdots[a}_q,g]\cdots]]\\
\end{align*}
\begin{align*}
&+\frac{1}{n}\sum_{p+q=n-1}\frac{1}{p!q!}[[\underbrace{a,[a,[\cdots[a}_p,f]\cdots],[\underbrace{a,[a,[\cdots[a}_{q+1},g]\cdots]]\\
&=\frac{1}{n}\sum_{l+m=n,l\geq 1}\frac{l}{l!m!}[[\underbrace{a,[a,[\cdots[a}_{l},f]\cdots],[\underbrace{a,[a,[\cdots[a}_m,g]\cdots]](\text{by taking $p+1=l,q=m$})\\
&+\frac{1}{n}\sum_{l+m=n,m\geq 1}\frac{m}{l!m!}[[\underbrace{a,[a,[\cdots[a}_l,f]\cdots],[\underbrace{a,[a,[\cdots[a}_{m},g]\cdots]](\text{by taking $p=l,q+1=m$})\\
&=\frac{1}{n}\sum_{l+m=n,l,m\geq 1} \frac{l+m}{l!m!}[[\underbrace{a,[a,[\cdots[a}_l,f]\cdots],[\underbrace{a,[a,[\cdots[a}_{m},g]\cdots]]\\
&+\frac{1}{n}\frac{n}{n!}[[\underbrace{a,[a,[\cdots[a}_{n},f]\cdots],g]+\frac{1}{n}\frac{n}{n!}[f,[\underbrace{a,[a,[\cdots[a}_{n},g]\cdots]]\\
&=\sum_{l+m=n} \frac{1}{l!m!}[[\underbrace{a,[a,[\cdots[a}_l,f]\cdots],[\underbrace{a,[a,[\cdots[a}_m,g]\cdots]]
\end{align*}
\end{proof}
\begin{definition}
Let $\mathcal{X}\to Spec\,R$ be an infinitesimal deformation of $X$, where $R$ is generated by $<1,m_1,...,m_r>$. We define a sheaf $\mathscr{A}^{0,p}(\wedge^q T_{\mathcal{X}/R})$ which is locally isomorphic to $\mathscr{A}^{0,p}(\wedge^q T_X)|_{U_{\alpha}}\otimes R$ on $U_{\alpha}$ in the following way: we define this sheaf by gluing the sheaves $\mathscr{A}^{0,p}(\wedge^q T_X)|_{U_{\alpha}}\otimes R$ locally defined on each $U_{\alpha}$. On $U_{\alpha}\cap U_{\beta}$, we have isomorphisms $exp(t_{\alpha\beta}):(\mathscr{A}^{0,p}(\wedge^q T_X)|_{U_{\beta}})|_{U_{\alpha}\cap U_{\beta}}\to (\mathscr{A}^{0,p}(\wedge^q T_X)|_{U_{\alpha}})|_{U_{\alpha}\cap U_{\beta}}$. Since $exp(t_{\alpha\beta})=exp(t_{\alpha \gamma})exp(t_{\gamma\beta})$ on $U_{\alpha}\cap U_{\beta}\cap U_{\gamma}$, we can glue these sheaves. By Lemma \ref{2lemp}, we can define a bracket $[-,-]$ on $\oplus_{p\geq 1,q\geq 0}\mathscr{A}^{0,p}(\wedge^q T_{\mathcal{X}/R})$ locally induced from the bracket $[-,-]$ on $\oplus_{p\geq 1,q\geq 0}\Gamma(U_{\alpha},\mathscr{A}^{0,p}(\wedge^q T_X))\otimes R$.
Now we define the Dolbeault differential $\bar{\partial}_\mathcal{X}:\mathscr{A}^{0,p}(\wedge^q T_{\mathcal{X}/{R}})\to \mathscr{A}^{0,p+1}(\wedge^q T_{\mathcal{X}/R})$ as a morphism of sheaves. Locally, for $f \in \Gamma(U_{\alpha},\mathscr{A}^{0,p}(\wedge^q T_X))\otimes R$, which is of the form $f=f_0+m_1f_1+\cdots +m_rf_r$, where $f_i\in \Gamma(U_{\alpha},\mathscr{A}^{0,p}(\wedge^q T_X))$, we define $\bar{\partial}_\mathcal{X} f=\bar{\partial}f_0+m_1\bar{\partial}f_1+\cdots + m_r\bar{\partial}f_r$. We show that these local definitions glue together. It is equivalent to show that $exp(t_{\alpha\beta})\circ\bar{\partial}_\mathcal{X}f=\bar{\partial}_\mathcal{X} \circ exp(t_{\alpha\beta})f$. It is enough to show that $t_{\alpha\beta}\circ \bar{\partial}_\mathcal{X}=\bar{\partial}_\mathcal{X}\circ t_{\alpha\beta}$, which is equivalent to $\bar{\partial}_\mathcal{X} t_{\alpha\beta}=0$ $($more precisely, if $t_{\alpha\beta}=t_0+m_1t_1+\cdots +m_rt_r$, then the above condition is equivalent to $ \bar{\partial} t_i=0$ for each $i$, which is equivalent to $\{t_{\alpha\beta}\}\in C^1(\mathcal{U},\Theta_X)\otimes \mathfrak{m})$. So $\bar{\partial}_\mathcal{X}:\mathscr{A}^{0,p}(\wedge^q T_{\mathcal{X}/R})\to \mathscr{A}^{0,p+1}(\wedge^qT_{\mathcal{X}/R})$ is well defined with $\bar{\partial}_\mathcal{X}\circ \bar{\partial}_\mathcal{X}=0$. We will denote $\bar{\partial}_{\mathcal{X}}$ simply by $\bar{\partial}$.
\end{definition}
\begin{remark}\label{2remark}
Let $\mathcal{X}\to Spec\,R$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$. Let $\{U_{\alpha}\}$ be a locally trivial open covering of $\mathcal{X}$. Then $\mathcal{O}_{\mathcal{X}}(U_{\alpha})\cong \mathcal{O}_X(U_{\alpha})\otimes R$. The Poisson structure is encoded in $\Lambda_{\alpha}' \in \Gamma(U_{\alpha}, \mathscr{A}^{0,0}(\wedge^2 T))\otimes R$ with $[\Lambda_{\alpha}',\Lambda_{\alpha}']=0$ and $\bar{\partial}\Lambda_{\alpha}'=0$. On $U_{\alpha}\cap U_{\beta}$, $exp(t_{\alpha\beta})[[\Lambda_{\beta}',f],g]=[[\Lambda_{\alpha}',exp(t_{\alpha\beta})f],exp(t_{\alpha\beta})g]$. Since $exp(t_{\alpha\beta})[[\Lambda_{\beta}',f],g]=[[exp(t_{\alpha\beta})\Lambda_{\beta}',exp(t_{\alpha\beta}) f], exp(t_{\alpha\beta})g]$ by Lemma $\ref{2lemp}$, we have $exp(t_{\alpha\beta})\Lambda_{\beta}'=\Lambda_{\alpha}'$. Hence the Poisson $R$-algebra structure of $\mathcal{X}$ is equivalent to the existence of a global section $\Lambda'$ of the sheaf $\mathscr{A}^{0,0}(\wedge^2 T_{\mathcal{X}/R})$ with $[\Lambda',\Lambda']=0$ and $\bar{\partial} \Lambda'=0$ such that $\Lambda'$ induces $\Lambda_0$.
\end{remark}
Going back to our discussion, since $exp(-s_{\alpha})=C^{-1}\circ (\varphi_{\alpha})^{-1}$ induces an isomorphism
\begin{align*}
\Gamma(U_{\alpha},\mathscr{A}^{0,0}(\wedge^2T_X))\otimes R\cong \Gamma(U_{\alpha},\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes R
\end{align*}
and these are compatible on overlaps,
\begin{center}
\[\begindc{\commdiag}[130]
\obj(1,2)[aa]{$\Gamma(U_{\alpha}\cap U_{\beta},\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes_\mathbb{C} R$}
\obj(0,1)[bb]{$\Gamma(U_{\alpha}\cap U_{\beta},\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes_\mathbb{C} R$}
\obj(2,1)[cc]{$\Gamma(U_{\alpha}\cap U_{\beta},\mathscr{A}^{0,0}(\wedge^2 T_X))\otimes_\mathbb{C} R$}
\mor{bb}{aa}{$exp(-s_{\beta})$}[\atleft,\solidarrow]
\mor{cc}{aa}{$exp(-s_{\alpha})$}[\atright,\solidarrow]
\mor{bb}{cc}{$exp(t_{\alpha\beta})$}
\enddc\]
\end{center}
we have an isomorphism of sheaves $\mathscr{A}^{0,0}(\wedge^2 T_{\mathcal{X}/R})\cong \mathscr{A}^{0,0}(\wedge^2 T)\otimes R$. Since $\mathcal{O}_{\mathcal{X}}(U_{\alpha})\cong \mathcal{O}_X(U_{\alpha})\otimes R$ locally, the Poisson structure is locally encoded in $\Lambda'_{\alpha} \in A^{0,0}(\wedge^2 T)(U_{\alpha})\otimes R$ with $[\Lambda'_{\alpha},\Lambda'_{\alpha}]=0, \bar{\partial} \Lambda'_{\alpha}=0$. We define $\Lambda_{\alpha}'':=exp(-s_{\alpha})\Lambda'_{\alpha}$. Since $\Lambda_{\alpha}'=exp(t_{\alpha\beta})\Lambda_{\beta}'=exp(s_{\alpha})exp(-s_{\beta})\Lambda_{\beta}'$ by Remark \ref{2remark}, we have $\Lambda_{\alpha}''=exp(-s_{\alpha})\Lambda_{\alpha}'=exp(-s_{\beta})\Lambda_{\beta}'=\Lambda_{\beta}''$. So the $\Lambda_{\alpha}''$ glue together to give a global section
\begin{align*}
\Lambda''\in A^{0,0}(X,\wedge^2 T)\otimes R
\end{align*}
By Lemma $\ref{2lemp}$, $[\Lambda_{\alpha}'',\Lambda_{\alpha}'']=[exp(-s_{\alpha})\Lambda_{\alpha}',exp(-s_{\alpha})\Lambda_{\alpha}']=exp(-s_{\alpha})[\Lambda_{\alpha}',\Lambda_{\alpha}']=0$. Moreover,
\begin{align*}
&\bar{\partial}(\Lambda_{\alpha}'')+[D(ad(-s_{\alpha}))(\bar{\partial}s_{\alpha}),\Lambda_{\alpha}'']\\
&=\bar{\partial}(exp(-s_{\alpha})\Lambda_{\alpha}')+exp(-s_{\alpha})\circ \bar{\partial}(exp(s_{\alpha}))exp(-s_{\alpha})\Lambda_{\alpha}'\\
&=\bar{\partial}(exp(-s_{\alpha})\Lambda_{\alpha}')+(exp(-s_{\alpha})\circ \bar{\partial}\circ exp(s_{\alpha})-\bar{\partial})exp(-s_{\alpha})\Lambda_{\alpha}'\\
&=\bar{\partial}(exp(-s_{\alpha})\Lambda_{\alpha}')+exp(-s_{\alpha})\circ \bar{\partial}\Lambda_{\alpha}'-\bar{\partial}(exp(-s_{\alpha})\Lambda_{\alpha}')=0
\end{align*}
since $\bar{\partial}\Lambda_{\alpha}'=0$.
In conclusion, we have
\begin{enumerate}
\item $\bar{\partial}\phi+\frac{1}{2}[\phi,\phi]=0$
\item $[\Lambda'',\Lambda'']=0$
\item $\bar{\partial} \Lambda''+[\phi,\Lambda'']=0$
\end{enumerate}
By taking $\Lambda =\Lambda''-\Lambda_0\in A^{0,0}(X,\wedge^2 T)\otimes \mathfrak{m}$, the above equations are equivalent to
\begin{align*}
L(\phi+\Lambda)+\frac{1}{2}[\phi+\Lambda,\phi+\Lambda]=0
\end{align*}
where $L=\bar{\partial} +[\Lambda_0,-]$. In other words, $\phi+\Lambda$ is a solution of the Maurer-Cartan equation of the differential graded Lie algebra $\mathfrak{g}=(\bigoplus_{p+q-1=i,p\geq 0,q \geq 1} A^{0,p}(X,\wedge^q T)\otimes \mathfrak{m},L=\bar{\partial} +[\Lambda_0,-],[-,-])$.
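Explicitly, decomposing by type, $L(\phi+\Lambda)+\frac{1}{2}[\phi+\Lambda,\phi+\Lambda]$ has the components
\begin{align*}
&\bar{\partial}\phi+\frac{1}{2}[\phi,\phi] \in A^{0,2}(X,T)\otimes \mathfrak{m}\\
&\bar{\partial}\Lambda+[\Lambda_0,\phi]+[\phi,\Lambda]=\bar{\partial}\Lambda''+[\phi,\Lambda''] \in A^{0,1}(X,\wedge^2 T)\otimes \mathfrak{m}\\
&[\Lambda_0,\Lambda]+\frac{1}{2}[\Lambda,\Lambda]=\frac{1}{2}[\Lambda'',\Lambda''] \in A^{0,0}(X,\wedge^3 T)\otimes \mathfrak{m}
\end{align*}
where we used $\bar{\partial}\Lambda_0=0$, $[\Lambda_0,\Lambda_0]=0$ and the symmetry of the bracket on elements of degree $1$, so the Maurer-Cartan equation is indeed equivalent to the three equations $(1)$, $(2)$, $(3)$ above.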
Now set $\alpha=\phi+\Lambda$. Then we have $L \alpha =-\frac{1}{2}[\alpha,\alpha]$ and $\alpha \wedge L\alpha=-L\alpha\wedge \alpha$ $($note that $deg(\alpha)=1)$. We denote by $\bar{\alpha}$ the corresponding element in $\mathfrak{g}[1]$. Now we assume that $\mathfrak{m}^{n+1}=0$. We claim that
\begin{equation}\label{2eq}
\epsilon(\alpha):=(\bar{\alpha},\frac{1}{2}\bar{\alpha}\odot \bar{\alpha},\cdots, \frac{1}{n!}\underbrace{\bar{\alpha}\odot \cdots \odot \bar{\alpha}}_n)\in \bigoplus_{i=1}^n sym^i g_1\otimes \mathfrak{m}\subset J_n(\mathfrak{g})\otimes \mathfrak{m}
\end{equation}
is a hypercocycle in $S_n\subset \overline{S(\mathfrak{g}[1])}$, which corresponds via $dec$ to
\begin{align*}
(\alpha,\cdots, (-1)^{\frac{i(i-1)}{2}}\frac{1}{i!}\underbrace{\alpha \wedge \cdots \wedge \alpha}_i,\cdots, (-1)^{\frac{n(n-1)}{2}}\frac{1}{n!}\underbrace{\alpha\wedge \cdots \wedge \alpha}_n)
\end{align*}
in $\overline{\bigwedge \mathfrak{g}}$.
For the claim (\ref{2eq}), we have to show that $(-1)^i L((-1)^{\frac{(i-1)i}{2}}\frac{1}{i!}\underbrace{\alpha \wedge \cdots \wedge \alpha}_i)+(-1)^{\frac{i(i+1)}{2}}\frac{1}{(i+1)!}\binom{i+1}{2}[\alpha,\alpha]\wedge \underbrace{\alpha\wedge \cdots \wedge \alpha}_{i-1}=0$. In other words, $L(\frac{1}{i!}\underbrace{\alpha \wedge \cdots \wedge \alpha}_i)+\frac{1}{(i+1)!}\binom{i+1}{2}[\alpha,\alpha]\wedge \underbrace{\alpha\wedge \cdots \wedge \alpha}_{i-1}=0$. Indeed,
\begin{align*}
&L(\frac{1}{i!}\underbrace{\alpha \wedge \cdots \wedge \alpha}_i)
=\frac{1}{i!}(L \alpha\wedge \alpha \wedge \cdots \wedge \alpha-\alpha\wedge L\alpha\wedge \alpha\wedge \cdots \wedge \alpha+\cdots +(-1)^{i-1}\alpha\wedge\cdots \wedge \alpha\wedge L \alpha)\\
&=\frac{1}{(i-1)!} L \alpha\wedge \alpha \wedge \cdots \wedge \alpha\\
&\text{and}\quad \frac{1}{(i+1)!}\frac{(i+1)i}{2}[\alpha,\alpha] \wedge \alpha \wedge \cdots \wedge \alpha=\frac{1}{(i-1)!}\frac{1}{2}[\alpha,\alpha]\wedge \alpha \wedge \cdots \wedge \alpha
\end{align*}
So we get the claim $(\ref{2eq})$ from the condition $L\alpha+\frac{1}{2}[\alpha,\alpha]=0$. Hence $\epsilon(\alpha)$ is a hypercocycle in $S_n\subset \overline{S(\mathfrak{g}[1])}$, and $[\epsilon(\alpha)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}.$
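As a quick sanity check (our addition; the general case is the computation above), the claim for $n=2$ can be verified componentwise:

```latex
% i = 1: the cocycle condition is exactly the Maurer--Cartan equation
L\alpha+\tfrac{1}{2}[\alpha,\alpha]=0.
% i = 2: using \alpha\wedge L\alpha=-L\alpha\wedge\alpha and \binom{3}{2}/3!=1/2,
L\big(\tfrac{1}{2!}\,\alpha\wedge\alpha\big)+\tfrac{1}{3!}\tbinom{3}{2}[\alpha,\alpha]\wedge\alpha
 =L\alpha\wedge\alpha+\tfrac{1}{2}[\alpha,\alpha]\wedge\alpha
 =\big(L\alpha+\tfrac{1}{2}[\alpha,\alpha]\big)\wedge\alpha=0.
```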
\subsection{$[\epsilon(\alpha)]\in \mathbb{H}^0(J_n(\mathfrak{g}))$ as a canonical element associated with $\mathcal{X}$}\
Given an infinitesimal Poisson deformation $\mathcal{X}$ of $(X,\Lambda_0)$ over $(R,\mathfrak{m})$ with $\mathfrak{m}^{n+1}=0$, we found a cohomology class $[\epsilon(\alpha)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, where $\alpha=\phi+\Lambda\in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ for a given locally trivial open covering $\{U_{\alpha}\}$, local trivializations $\varphi_{\alpha}$ and a $C^{\infty}$-trivialization. In this subsection, we show that the cohomology class
\begin{align*}
[\epsilon(\alpha)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}
\end{align*}
is independent of the choices of $C^{\infty}$-trivialization and local trivializations for a fixed locally trivial open covering $\{U_{\alpha}\}$. Then, given two locally trivial open coverings, by passing to a common refinement we conclude that the cohomology class $[\epsilon(\alpha)]$ is canonically associated to the infinitesimal Poisson deformation $\mathcal{X}$ of $(X,\Lambda_0)$.
\subsubsection{Independence of choices of local trivialization}\
Given a locally trivial open covering $\mathcal{U}=\{U_{\alpha}\}$, assume that we have two local trivializations $\varphi_{\alpha}:\mathcal{O}_\mathcal{X}(U_{\alpha})\to \mathcal{O}_X(U_{\alpha})\otimes R$ and $\varphi_{\alpha}':\mathcal{O}_\mathcal{X}(U_{\alpha})\to \mathcal{O}_X(U_{\alpha})\otimes R $. We will show that the element $v=\phi+\Lambda \in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ associated with $\varphi_{\alpha}$ and a $C^{\infty}$-trivialization $C$ coincides with the element $w=\psi+\Pi \in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ associated with $\varphi_{\alpha}'$ and the same $C$. We have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[100]
\obj(0,1)[aa]{$\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R$}
\obj(1,1)[bb]{$\mathscr{A}^{0,0}_\mathcal{X}(U_{\alpha})$}
\obj(2,1)[cc]{$\mathscr{A}_X^{0,0}(U_{\alpha}) \otimes R$}
\obj(2,0)[dd]{$\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R$}
\mor{aa}{bb}{$C$}
\mor{bb}{cc}{$\varphi_{\alpha}$}
\mor{bb}{dd}{$\varphi_{\alpha}^{'}$}[\atright,\solidarrow]
\mor{cc}{dd}{$exp(t_{\alpha})$}
\enddc\]
\end{center}
for some $\{t_{\alpha}\}\in C^0(\mathcal{U},\Theta_X)\otimes \mathfrak{m}$, where $exp(s_{\alpha})=\varphi_{\alpha}\circ C$ and $exp(s_{\alpha}')=\varphi_{\alpha}'\circ C$. Then we have, by the construction above,
\begin{align*}
\psi_{\alpha}&=exp(-s_{\alpha}')\bar{\partial}(exp(s_{\alpha}'))=exp(-s_{\alpha})\circ exp(-t_{\alpha})\bar{\partial}(exp(t_{\alpha})\circ exp(s_{\alpha}))\\
&=exp(-s_{\alpha})\circ exp(-t_{\alpha})\circ \bar{\partial}(exp(t_{\alpha}))\circ exp(s_{\alpha})+exp(-s_{\alpha})\circ exp(-t_{\alpha})\circ exp(t_{\alpha})\circ \bar{\partial}exp(s_{\alpha})\\
&=exp(-s_{\alpha})\bar{\partial} (exp(s_{\alpha}))=\phi_{\alpha}
\end{align*}
since $\bar{\partial} t_{\alpha}=0$. On the other hand,
\begin{align*}
\Lambda_0+\Pi_{\alpha}=exp(-s_{\alpha}')\omega_{\alpha}'=exp(-s_{\alpha})\circ exp(-t_{\alpha})\omega'_{\alpha}=exp(-s_{\alpha})\Lambda_{\alpha}'=\Lambda_0+\Lambda_{\alpha}.
\end{align*}
Hence we have $\Pi=\Lambda$, and so $v=w$ and $[\epsilon(v)]=[\epsilon(w)]$.
\subsubsection{Independence of choices of $C^{\infty}$-trivialization}\
Now, we show that the class $[\epsilon(v)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ is independent of the choice of $C^{\infty}$-trivialization. Assume that we have two $C^{\infty}$-trivializations $C:\mathscr{A}_X^{0,0}\otimes R\to \mathscr{A}_{\mathcal{X}}^{0,0}$ and $\tilde{C}:\mathscr{A}_X^{0,0}\otimes R\to \mathscr{A}_{\mathcal{X}}^{0,0}$. Then we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[70]
\obj(0,1)[a]{$\mathscr{A}^{0,0}_X(X)\otimes R$}
\obj(2,1)[b]{$\mathscr{A}^{0,0}_\mathcal{X}(X)$}
\obj(1,0)[d]{$\mathscr{A}^{0,0}_X(X)\otimes R$}
\mor{a}{b}{$\tilde{C}$}
\mor{a}{d}{$exp(u)=C^{-1}\circ \tilde{C}$}[\atright,\solidarrow]
\mor{d}{b}{$C$}[\atright,\solidarrow]
\enddc\]
\end{center}
$C^{-1}\circ \tilde{C}$ is a homomorphism inducing the identity modulo $\mathfrak{m}$. As in the proof of Lemma \ref{2len}, we can show that there is an element $u\in A^{0,0}(T)\otimes \mathfrak{m}$ such that $C^{-1}\circ \tilde{C}=exp(u)$.
For a given locally trivial open covering $\{U_{\alpha}\}$ and local trivializations $\varphi_{\alpha}:\mathcal{O}_\mathcal{X}(U_{\alpha})\to \mathcal{O}_X(U_{\alpha})\otimes R$, let $\tilde{v}=\tilde{\phi}+\tilde{\Lambda}$ be the element of $(A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ induced from $\tilde{C}$ and $v=\phi+\Lambda$ the element of $(A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ induced from $C$. We want to show that $[\epsilon(\tilde{v})]=[\epsilon(v)]$.
Since $exp(s_{\alpha})=\varphi_{\alpha}\circ C$, we have $exp(s_{\alpha})\circ exp(u)=\varphi_{\alpha}\circ \tilde{C}$. Hence we have
\begin{align*}
\tilde{\phi}&=[exp(s_{\alpha})\circ exp(u)]^{-1}\bar{\partial}(exp(s_{\alpha})\circ exp(u))\\
&=exp(-u)\circ exp(-s_{\alpha})\bar{\partial}exp(s_{\alpha})exp(u)+exp(-u)\circ exp(-s_{\alpha})exp(s_{\alpha})\bar{\partial}exp(u)\\
&=exp(-u)\bar{\partial}exp(u)+exp(-u)\phi exp(u)\\
\Lambda_0+ \tilde{\Lambda}&=exp(-u)\circ exp(-s_{\alpha})(\Lambda_{\alpha}')\\
&=exp(-u)(\Lambda_0+\Lambda)
\end{align*}
We set for $t\in [0,1]\subset \mathbb{R}$,
\begin{align*}
\phi_t&=exp(-tu)\bar{\partial}exp(tu)+exp(-tu)\phi exp(tu)\\
\Lambda^t&=exp(-tu)(\Lambda_0+\Lambda)-\Lambda_0
\end{align*}
Note that $\phi_0=\phi, \Lambda^0=\Lambda$ and $\phi_1=\tilde{\phi},\Lambda^1=\tilde{\Lambda}$. We would like to show that $[\epsilon(v)]=[\epsilon(\phi+\Lambda)]=[\epsilon(\tilde{\phi}+\tilde{\Lambda})]=[\epsilon(\tilde{v})]$ by showing that $[\epsilon(v_t)]:=[\epsilon(\phi_t+\Lambda^t)]$ is constant, independent of $t$. To this end, we note that since the operator $L=\bar{\partial}+[\Lambda_0,-]$ is elliptic, $\mathbb{H}^0(J_n(\mathfrak{g}))$ is a finite-dimensional vector space over $\mathbb{C}$. Since $\mathfrak{g}$ consists of global sections of complex vector bundles over $X$, it is endowed with a suitable metric, which by Hodge theory induces a metric on $\mathbb{H}^0(J_n(\mathfrak{g}))$ whose topology coincides with the standard Euclidean topology, and likewise for $\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$. So differentiation of a function $\mathbb{R}\to \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ makes sense if the derivative exists with respect to this metric. Now we claim that $\frac{d}{dt}([\epsilon(v_t)])=\frac{d}{dt} [\epsilon(\phi_t+\Lambda^t)]=[\frac{d}{dt} (\epsilon(\phi_t+\Lambda^t))]=0$, which implies that $[\epsilon(v_t)]$ is constant, and so $[\epsilon(\tilde{v})]=[\epsilon(v)]$. We note that
\begin{align*}
\phi_t':=\frac{d}{dt}(\phi_t)&=-uexp(-tu)\bar{\partial}(exp(tu))+exp(-tu)\bar{\partial}(exp(tu)u)\\
&-uexp(-tu)\phi exp(tu)+exp(-tu)\phi exp(tu)u\\
&=-uexp(-tu)\bar{\partial}(exp(tu))+exp(-tu)\bar{\partial}(exp(tu))u+\bar{\partial}u\\
&-uexp(-tu)\phi exp(tu)+exp(-tu)\phi exp(tu)u\\
&=-u\phi_t+\phi_t u+\bar{\partial} u=\bar{\partial}u-[u,\phi_t]\\
(\Lambda^{t})':=\frac{d}{dt}(\Lambda^t)&=-uexp(-tu)(\Lambda_0+\Lambda)=-u (\Lambda^t+\Lambda_0)\\
&=-[u,\Lambda^t]+[\Lambda_0,u]
\end{align*}
So $v_t':=\frac{d}{dt} v_t=\frac{d}{dt}(\phi_t+\Lambda^t)=\phi_t'+(\Lambda^{t})'=L u-[u,\phi_t+\Lambda^t]=Lu-[u,v_t]$. And we have
\begin{align*}
\frac{d}{dt}(\epsilon(v_t))=(\bar{v}_t',\bar{v}_t'\odot \bar{v}_t,...,\frac{1}{(n-1)!}\bar{v}_t'\odot \underbrace{\bar{v}_t \odot \cdots \odot \bar{v}_t}_{n-1})\in \oplus_{i=1}^n sym^ig_1\otimes \mathfrak{m}
\end{align*}
which corresponds to
\begin{align*}
(v_t',\cdots, (-1)^{\frac{(i-1)i}{2}}\frac{1}{(i-1)!}v_t' \wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i-1}, \cdots, (-1)^{\frac{(n-1)n}{2}}\frac{1}{(n-1)!}v_t'\wedge\underbrace{v_t\wedge \cdots \wedge v_t}_{n-1})
\end{align*}
in $\overline{\bigwedge \mathfrak{g}}$.
Note that $L v_t=-\frac{1}{2}[v_t,v_t].$\footnote{For the given locally trivial open covering $\{U_{\alpha}\}$ and local trivializations $\varphi_{\alpha}:\mathcal{O}_\mathcal{X}(U_{\alpha})\to \mathcal{O}_X(U_{\alpha})\otimes R$, let $C_t:\mathscr{A}^{0,0}_X \otimes R \to \mathscr{A}^{0,0}_{\mathcal{X}}$ be the $C^{\infty}$-trivialization defined by $C_t=C\circ exp(tu)$. Then we have $\varphi_{\alpha}\circ C_t=exp(s_{\alpha})\circ exp(tu)$, and hence
\begin{align*}
\phi_t&=(exp(s_{\alpha})\circ exp(tu))^{-1}\bar{\partial}(exp(s_{\alpha})\circ exp(tu))\\
\Lambda_0+\Lambda^t&=(exp(s_{\alpha})\circ exp(tu))^{-1}\Lambda_{\alpha}'
\end{align*}
Hence by the construction, we have $Lv_t+\frac{1}{2}[v_t,v_t]=0$.} Let
\begin{align*}
(-\bar{u},-\bar{u}\odot \bar{v}_t,...,\frac{1}{(n-1)!} (-\bar{u})\odot \underbrace{\bar{v}_t\odot ...\odot\bar{v}_t}_{n-1}) \in \bigoplus_{i=0}^{n-1} g_0\otimes sym^i g_1\otimes \mathfrak{m}
\end{align*}
which corresponds to
\begin{align*}
(-u,\cdots, (-1)^{\frac{(i-1)i}{2}} \frac{1}{i!}(-u)\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i, \cdots, (-1)^{\frac{(n-2)(n-1)}{2}} \frac{1}{(n-1)!} (-u) \wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{n-1})
\end{align*}
in $\overline{\bigwedge \mathfrak{g}}$.
We claim that $\frac{d}{dt}(\epsilon(v_t))$ is the coboundary of this element. Indeed (note that $deg(u)=0$ and $deg(v_t)=1$),
\begin{align*}
&(-1)^{i+1}L((-1)^{\frac{(i-1)i}{2}}\frac{1}{i!}(-u)\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i)+(-1)^{\frac{i(i+1)}{2}}\frac{i+1}{(i+1)!}[-u,v_t]\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i
\\&+(-1)^{\frac{i(i+1)}{2}}\frac{1}{(i+1)!}\binom{i+1}{2}[v_t, v_t] \wedge (-u) \wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i-1}\\
&=(-1)^{\frac{i(i+1)}{2}}\frac{1}{i!}(Lu\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i +u\wedge Lv_t\wedge v_t\wedge \cdots\wedge v_t-u\wedge v_t\wedge Lv_t\wedge v_t\wedge \cdots \wedge v_t+\cdots\\
&+(-1)^{i-1}u\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i-1}\wedge Lv_t)
-(-1)^{\frac{i(i+1)}{2}}\frac{1}{i!}[u,v_t]\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i\\
&+(-1)^{\frac{i(i+1)}{2}}\frac{1}{(i-1)!}\frac{1}{2}u\wedge [v_t,v_t]\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i-1}\\
&=(-1)^{\frac{i(i+1)}{2}}(\frac{1}{i!}Lu\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i+\frac{1}{(i-1)!}u\wedge L v_t\wedge \underbrace{v_t \wedge \cdots \wedge v_t}_{i-1}-\frac{1}{i!}[u,v_t]\wedge \underbrace{v_t\wedge\cdots \wedge v_t}_i)\\
&+(-1)^{\frac{i(i+1)}{2}}\frac{1}{(i-1)!}\frac{1}{2}u\wedge [v_t,v_t]\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i-1}\\
&=(-1)^{\frac{i(i+1)}{2}}\frac{1}{i!}(L u-[u,v_t])\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_{i}\\
&=(-1)^{\frac{i(i+1)}{2}}\frac{1}{i!}v_t'\wedge \underbrace{v_t\wedge \cdots \wedge v_t}_i.
\end{align*}
So we have $[\frac{d}{dt}(\epsilon(v_t))]=0$.
In conclusion, for a given infinitesimal Poisson deformation $\mathcal{X}$ of $(X,\Lambda_0)$, we can canonically associate to the Poisson deformation $\mathcal{X}$, up to equivalence, the cohomology class $[\epsilon(v)]$. In the next chapter, we will show that under the assumption $HP^1(X,\Lambda_0)=0$, given two choices $v,w\in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ with $[\epsilon(v)]=[\epsilon(w)]$, the Poisson deformation associated with $v$ is equivalent to the Poisson deformation associated with $w$.
\chapter{Universal Poisson deformations}\label{chapter6}
\section{Isomorphism of two deformation functors $Def_\mathfrak{g}\cong PDef_{(X,\Lambda_0)}$}
In this section, we will show that two functors of Artin rings are isomorphic: namely, the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ is isomorphic to the deformation functor $Def_{\mathfrak{g}}$ associated with the differential graded Lie algebra $\mathfrak{g}=(\bigoplus_{p+q-1=i,q\geq 1} A^{0,p}(X,\wedge^q T),L=\bar{\partial}+[\Lambda_0,-],[-,-])$.
So this shows that deformations of a compact holomorphic Poisson manifold are controlled by the differential graded Lie algebra $\mathfrak{g}$. For deformation functors associated with a differential graded Lie algebra, we refer to \cite{Man04}.
\subsection{Deformation functors}
\begin{definition}
A functor of Artin rings is a covariant functor
\begin{align*}
F:\bold{Art}\to \bold{Sets}
\end{align*}
such that $F(\mathbb{C})$ has only one element, where $\bold{Art}$ is the category of local artinian $\mathbb{C}$-algebras with residue field $\mathbb{C}$, and $\bold{Sets}$ is the category of sets.
\end{definition}
\begin{definition}[functors associated with a DGLA $L$]
Let $L=(\bigoplus_{i\geq0}L_i,d,[-,-])$ be a differential graded Lie algebra and $(R,\mathfrak{m})\in \bold{Art}$. Let $MC_L(R)$ be the set of all Maurer--Cartan elements of $L\otimes \mathfrak{m}$, i.e.
\begin{align*}
MC_L(R)=\{x\in L_1\otimes \mathfrak{m} |dx+\frac{1}{2}[x,x]=0\}
\end{align*}
Then $MC_L:\bold{Art}\to \bold{Sets}$ is a functor.
\end{definition}
Let $g=L\otimes \mathfrak{m}$ be the differential graded Lie algebra induced from $L$. Then $g_0=L_0\otimes \mathfrak{m}$ is a nilpotent Lie algebra, and the set $exp(g_0)=\{e^x|x\in g_0\}$ forms a group by the Campbell--Hausdorff formula. We have a group action of $exp(g_0)$ on $g_1=L_1\otimes \mathfrak{m}$ given by
\begin{align*}
e^a\cdot x:=x+\sum_{n\geq 1} \frac{(ad\,a)^{n-1}}{n!}([a,x]-da)\,\,\,\,\,\,\,\,\,\text{where}\,\,\,ad\,a:g_1\to g_1\,\,\,\text{defined by} \,\,b\mapsto [a,b]
\end{align*}
This action is known as the gauge action of $exp(g_0)$ on $g_1$, and the set of Maurer--Cartan elements is stable under the gauge action.
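To see what the gauge action does in the simplest case (a standard computation, added here for illustration), take $R=\mathbb{C}[\varepsilon]/(\varepsilon^2)$, so $\mathfrak{m}^2=0$:

```latex
% Every bracket [a,x] lies in L\otimes\mathfrak{m}^2=0, so the gauge action truncates:
e^a\cdot x=x+[a,x]-da=x-da,
% and the Maurer--Cartan equation reduces to dx=0.  Hence
Def_L\big(\mathbb{C}[\varepsilon]/(\varepsilon^2)\big)
 =\frac{\ker\big(d:L_1\otimes\mathfrak{m}\to L_2\otimes\mathfrak{m}\big)}{d\big(L_0\otimes\mathfrak{m}\big)}
 \cong H^1(L)\otimes\mathfrak{m},
% the familiar identification of the tangent space of Def_L with H^1(L).
```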
\begin{definition}
Let $x,y\in MC_L(R)$. We say that $x$ is gauge equivalent to $y$ if there exists $e^a\in exp(g_0)$ such that $e^a\cdot x=y$. Let $Def_L(R)$ be the set of all gauge equivalence classes of elements of $MC_L(R)$. Then the functor $Def_L:\bold{Art}\to \bold{Sets}$ is called the deformation functor associated to the differential graded Lie algebra $L$.
\end{definition}
\begin{definition}
We say that a functor of Artin rings $F$ is controlled by a differential graded Lie algebra $L$ if $F\cong Def_L$.
\end{definition}
\begin{definition}[Poisson deformation functor]
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold. The Poisson deformation functor $PDef_{(X,\Lambda_0)}:\bold{Art}\to \bold{Sets}$ is defined by
\begin{align*}
PDef_{(X,\Lambda_0)}(R)=\{\text{equivalence classes of infinitesimal Poisson deformations of $(X,\Lambda_0)$ over $R$}\}
\end{align*}
\end{definition}
In the next subsection, we will prove that $Def_\mathfrak{g}\cong PDef_{(X,\Lambda_0)}$.
\subsection{The Poisson deformation functor $Def_{(X,\Lambda_0)}$ is controlled by the differential graded Lie algebra $\mathfrak{g}=(\bigoplus_{p+q-1=i,q\geq1} A^{0,p}(X,\wedge^q T),L=\bar{\partial}+[\Lambda_0,-],[-,-])$}\
Let $\mathcal{X}\to Spec\,R$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$, where $(R,\mathfrak{m})$ is generated by $\langle 1,m_1,...,m_r\rangle$ with exponent $n$. Let $\{U_{\alpha}\}$ be a locally trivial open cover of $\mathcal{X}$. We defined a sheaf $\mathscr{A}^{0,p}(\wedge^q T_{\mathcal{X}/R})$, a bracket $[-,-]$ and the Dolbeault differential $\bar{\partial}_\mathcal{X}$. For $q=0$, we will denote this sheaf by $\mathscr{A}^{0,p}_\mathcal{X}$, and similarly for $\mathscr{A}^{0,p}_X$.
Let $v:=\phi+\Lambda\in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ be an element with $L(\phi+\Lambda)=-\frac{1}{2}[\phi+\Lambda,\phi+\Lambda]$. In particular, we have $\bar{\partial} \phi=-\frac{1}{2}[\phi,\phi]$. We define an operator $\bar{\partial}+\phi:=\bar{\partial}\otimes 1+\phi$ on $\mathscr{A}^{0,p}_X\otimes R$ and a sequence
\begin{equation}\label{2se}
0\to \mathscr{A}_X^{0,0}\otimes R\xrightarrow{\bar{\partial}+\phi} \mathscr{A}^{0,1}_X\otimes R \xrightarrow{\bar{\partial}+\phi}\cdots \xrightarrow{\bar{\partial}+\phi} \mathscr{A}^{0,p}_X\otimes R\xrightarrow{\bar{\partial}+\phi}
\end{equation}
which is a complex by the condition $\bar{\partial}\phi=-\frac{1}{2}[\phi,\phi]$. By tensoring $\otimes_{R} R/\mathfrak{m}$, we have
\begin{align*}
0\to \mathscr{A}^{0,0}_X\otimes \mathbb{C}\xrightarrow{\bar{\partial}} \mathscr{A}^{0,1}_X\otimes \mathbb{C} \xrightarrow{\bar{\partial}}\cdots
\end{align*}
which is an acyclic resolution of $\mathcal{O}_X$. Hence the complex $(\ref{2se})$ is exact in positive degrees.
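The fact that the twisted operator squares to zero can be recorded schematically as follows (a sketch we add for the reader; here $(\bar{\partial}\phi)\,\cdot$ and $\frac{1}{2}[\phi,\phi]\,\cdot$ denote the operators given by the corresponding elements):

```latex
(\bar{\partial}+\phi)^2=\underbrace{\bar{\partial}^2}_{=0}
 +\underbrace{\bar{\partial}\circ\phi+\phi\circ\bar{\partial}}_{=(\bar{\partial}\phi)\,\cdot}
 +\underbrace{\phi\circ\phi}_{=\frac{1}{2}[\phi,\phi]\,\cdot}
 =\Big(\bar{\partial}\phi+\tfrac{1}{2}[\phi,\phi]\Big)\cdot\;=0
% by the condition \bar{\partial}\phi=-\frac{1}{2}[\phi,\phi].
```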
We define
\begin{align*}
\mathcal{O}(v):=ker(\bar{\partial}+\phi:\mathscr{A}_X^{0,0}\otimes R\to \mathscr{A}_X^{0,1}\otimes R)
\end{align*}
which is a flat $R$-sheaf on $X$.\footnote{In general, let $R$ be an artinian local ring with residue field $k$ and let $M$ be an $R$-module. Let
\begin{align*}
M\to N^0\to N^1\to \cdots
\end{align*} be a flat resolution such that this induces a resolution
\begin{align*}
M\otimes k\to N^0\otimes k\to N^1\otimes k\to \cdots
\end{align*} Then $M$ is $R$-flat.} $\mathcal{O}(v)$ has a Poisson bracket induced by $\Lambda_0+\Lambda$. We define $\{-,-\}$ on $\mathcal{O}(v)$ by
\begin{align*}
\{f,g\}:=[[\Lambda_0+\Lambda,f],g]
\end{align*}
for local sections $f,g\in \mathcal{O}(v)$. Then $\{-,-\}$ is a biderivation since $d(gh)=g\,dh+h\,dg$, and it is $R$-bilinear since $da=0$ for every $a=a_0+a_1m_1+\cdots +a_rm_r\in R$ with $a_i\in \mathbb{C}$. The identity $[\Lambda_0+\Lambda,\Lambda_0+\Lambda]=0$ shows that $\{-,-\}$ satisfies the Jacobi identity. So it remains to show that $\mathcal{O}(v)$ is closed under $\{-,-\}$. Note that for $f,g\in \mathcal{O}(v)$, we have $\bar{\partial}f+[\phi,f]=0$ and $\bar{\partial}g+[\phi,g]=0$. We set $\omega=\Lambda_0+\Lambda$. Then we have $\bar{\partial} \omega+ [\phi,\omega]=0$, and hence
\begin{align*}
\bar{\partial}[[\omega,f],g]+[\phi,[[\omega,f],g]]&=[\bar{\partial}[\omega,f],g]+[[\omega,f],\bar{\partial}g]+[[\phi,[\omega,f]],g]+[[\omega,f],[\phi,g]]\\
&=[[\bar{\partial}\omega,f],g]-[[\omega,\bar{\partial}f],g]+[[[\phi,\omega],f],g]-[[\omega,[\phi,f]],g]\\
&=0
\end{align*}
So we have $\{f,g\}\in \mathcal{O}(v)$. Hence $\mathcal{O}(v)$ is a sheaf of Poisson $R$-algebras. We also have $\mathcal{O}(v)\otimes_{R} R/\mathfrak{m}\cong \mathcal{O}_X$ as Poisson sheaves over $\mathbb{C}$. So $\mathcal{O}(v)$ defines an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $R$.
Now we define the map
\begin{align*}
\mathcal{O}:Def_{\mathfrak{g}}(R)&\to PDef_{(X,\Lambda_0)}(R)\\
v=\phi+\Lambda&\mapsto \mathcal{O}(v)
\end{align*}
We claim that $v$ is gauge equivalent to $w$ if and only if $\mathcal{O}(v)\cong\mathcal{O}(w)$ as Poisson $R$-sheaves. This claim shows that the map $\mathcal{O}$ is well-defined and injective. Let $v=\phi+\Lambda,w=\psi+\Pi\in (A^{0,1}(X, T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$, and suppose that $v$ is gauge equivalent to $w$. Then for some $a\in A^{0,0}(X,T)\otimes \mathfrak{m}$, we have
\begin{align*}
(1)\psi&=\phi+\sum_{n\geq 1} \frac{(ad\,a)^{n-1}}{n!}([a,\phi]-\bar{\partial}a)\\
(2)\Pi&=\Lambda+\sum_{n\geq 1} \frac{(ad\,a)^{n-1}}{n!}([a,\Lambda]-[\Lambda_0,a])=exp(a)(\Lambda_0+\Lambda)-\Lambda_0
\end{align*}
$(1)$ is equivalent to the commutativity of the following diagram:
\begin{center}
$\begin{CD}
\mathscr{A}^{0,0}_X\otimes R @>\bar{\partial}+\phi >> \mathscr{A}_X^{0,1}\otimes R\\
@V exp(a) VV @VV exp(a)V\\
\mathscr{A}^{0,0}_X\otimes R @>\bar{\partial}+\psi>> \mathscr{A}^{0,1}_X\otimes R
\end{CD}$
\end{center}
which implies $\mathcal{O}(v)\cong \mathcal{O}(w)$ as sheaves of $R$-algebras.
$(2)$ means $\Lambda_0+\Pi=exp(a)(\Lambda_0+\Lambda)$, which implies that $\mathcal{O}(v)\cong \mathcal{O}(w)$ as sheaves of Poisson $R$-algebras. So we get the claim.
Now we show that $\mathcal{O}:Def_{\mathfrak{g}}(R) \to PDef_{(X,\Lambda_0)}(R)$ is surjective. Given an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $(R,\mathfrak{m})$, we showed that there is a canonically associated element $v=\phi+\Lambda \in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}$ with $L(\phi+\Lambda)+\frac{1}{2}[\phi+\Lambda,\phi+\Lambda]=0$.
We claim that for each $\alpha$, the following diagram is commutative.
\begin{center}
$\begin{CD}
\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R @>\bar{\partial}+\phi >> \mathscr{A}^{0,1}_X(U_{\alpha})\otimes R\\
@V exp(s_{\alpha}) VV @VV exp(s_{\alpha})V\\
\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R@>\bar{\partial}_{\mathcal{X}}=\bar{\partial}>> \mathscr{A}^{0,1}_X(U_{\alpha}) \otimes R
\end{CD}$
\end{center}
Note that $exp(s_{\alpha})=\varphi_{\alpha}\circ C$. Indeed, the commutativity means that
\begin{align*}
\bar{\partial}f+\phi (f)&=\bar{\partial} f+exp(-s_{\alpha})\circ \bar{\partial}(exp(s_{\alpha}))f\\
&=\bar{\partial}f+exp(-s_{\alpha})\circ (\bar{\partial}\circ exp(s_{\alpha})-exp(s_{\alpha})\circ \bar{\partial})f\\
&=exp(-s_{\alpha})\circ \bar{\partial}\circ exp(s_{\alpha})f
\end{align*}
Since the diagrams are compatible for each $\alpha$, we have the following commutative diagram of sheaves
\begin{center}
$\begin{CD}
\mathscr{A}^{0,0}_X\otimes R@>\bar{\partial}+\phi>> \mathscr{A}^{0,1}_X\otimes R\\
@V\cong VV @VV\cong V\\
\mathscr{A}^{0,0}_\mathcal{X}@>\bar{\partial}_\mathcal{X}>> \mathscr{A}^{0,1}_\mathcal{X}
\end{CD}$
\end{center}
So we have isomorphism of sheaves
\begin{align*}
\mathcal{O}(v):=ker(\bar{\partial}+\phi:\mathscr{A}_X^{0,0}\otimes R\to \mathscr{A}_X^{0,1}\otimes R)\cong \mathcal{O}_{\mathcal{X}}=ker(\bar{\partial}_\mathcal{X}:\mathscr{A}^{0,0}_\mathcal{X}\to \mathscr{A}^{0,1}_\mathcal{X})
\end{align*}
As above, $\mathcal{O}(v)$ is a sheaf of Poisson $R$-algebras, with bracket defined by
\begin{align*}
\{f,g\}:=[[\Lambda_0+\Lambda,f],g]\,\,\,\,\,\,\,\,\,\text{for local sections $f,g\in \mathcal{O}(v)$}
\end{align*}
Now we claim that as Poisson $R$-sheaves, we have
\begin{align*}
\mathcal{O}(v)\cong \mathcal{O}_{\mathcal{X}}
\end{align*}
We check this locally on $U_{\alpha}$: for $f,g\in \Gamma(U_{\alpha},\mathcal{O}(v))=ker(\bar{\partial}+\phi:\mathscr{A}^{0,0}_X(U_{\alpha})\otimes R\to \mathscr{A}_X^{0,1}(U_{\alpha})\otimes R)$,
\begin{align*}
exp(s_{\alpha})\{f,g\}&=exp(s_{\alpha})[[\Lambda_0+\Lambda,f],g]=exp(s_{\alpha})[[\Lambda_{\alpha}'',f],g]=[[exp(s_{\alpha})\Lambda_{\alpha}'',exp(s_{\alpha})f],exp(s_{\alpha})g]\\
&=[[exp(s_{\alpha})exp(-s_{\alpha})\Lambda_{\alpha}',exp(s_{\alpha})f],exp(s_{\alpha})g]=[[\Lambda_{\alpha}',exp(s_{\alpha})f],exp(s_{\alpha})g]
\end{align*}
where $\Lambda_{\alpha}'\in \Gamma(U_{\alpha}, \mathscr{A}^{0,0}(\wedge^2 T_X))\otimes R$ is the Poisson structure on $\mathcal{O}_X(U_{\alpha})\otimes R\cong \mathcal{O}_{\mathcal{X}}(U_{\alpha})$ and $\Lambda_{\alpha}''= exp(-s_{\alpha})\Lambda_{\alpha}'$. (See Remark \ref{2remark} for notations)
Hence the infinitesimal Poisson deformation $\mathcal{X}$ of $(X,\Lambda_0)$ over $R$ is equivalent to $\mathcal{O}(v):=ker(\bar{\partial}+\phi:\mathscr{A}_X^{0,0}\otimes R\to \mathscr{A}_X^{0,1}\otimes R)$ equipped with the Poisson structure $\Lambda_0+\Lambda$. This shows that the map $\mathcal{O}:Def_{\mathfrak{g}}(R)\to PDef_{(X,\Lambda_0)}(R)$ is surjective.
So we proved that for an artinian local $\mathbb{C}$-algebra $R$ with residue field $\mathbb{C}$, we have an isomorphism $\mathcal{O}:Def_{\mathfrak{g}}(R)\to PDef_{(X,\Lambda_0)}(R)$. To show that $Def_{\mathfrak{g}}\cong PDef_{(X,\Lambda_0)}$, we have to show that $\mathcal{O}$ is a morphism of functors of Artin rings; in other words, $\mathcal{O}$ is compatible with any local homomorphism $R\to S$ in $\bold{Art}$.
\begin{definition}[Base change]
Given an infinitesimal Poisson deformation $\mathcal{X}$ of $(X,\Lambda_0)$ over $R$, and a local $\mathbb{C}$-algebra homomorphism $(R,\mathfrak{m}_R)\to (S,\mathfrak{m}_S)$, we can define an infinitesimal Poisson deformation $\mathcal{X}\times_{Spec\,R} Spec\,S$ of $(X,\Lambda_0)$ over $S$ by base change.
\begin{center}
$\begin{CD}
X@>>>\mathcal{X}\times_{Spec\,R} Spec\,S @>>> \mathcal{X}\\
@VVV@VVV @VVV\\
Spec\,\mathbb{C}@>>>Spec\,S@>>> Spec\,R
\end{CD}$
\end{center}
We only need to explain the induced Poisson structure of $\mathcal{X}_S:=\mathcal{X}\times_{Spec\,R} Spec\,S$ over $S$. For any open set $U$ of $X$, $\mathcal{O}_{\mathcal{X}_S}(U)=\mathcal{O}_{\mathcal{X}}(U)\otimes_R S$. We define the Poisson bracket $\{-,-\}_S$ on $\mathcal{O}_{\mathcal{X}_S}(U)$ by
\begin{align*}
\{f\otimes s_1,g\otimes s_2\}_S=\{f,g\}_R\otimes s_1s_2
\end{align*}
where $\{-,-\}_R$ is the Poisson bracket on $\mathcal{O}_{\mathcal{X}}(U)$.
\end{definition}
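As a simple instance of this definition (our illustration), for the quotient map $R\to R/\mathfrak{m}^n$ the base-changed bracket is just reduction modulo $\mathfrak{m}^n$:

```latex
% S=R/\mathfrak{m}^n, with \bar{r} the residue class of r\in R:
\{f\otimes\bar{r}_1,\,g\otimes\bar{r}_2\}_S=\{f,g\}_R\otimes\bar{r}_1\bar{r}_2
 =\overline{r_1r_2\{f,g\}_R},
% i.e. the bracket on \mathcal{O}_{\mathcal{X}_S} is the bracket on
% \mathcal{O}_{\mathcal{X}} reduced modulo \mathfrak{m}^n.
```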
We note that the infinitesimal Poisson deformation induced by the base change $(R,\mathfrak{m}_R)\to (S,\mathfrak{m}_S)$ can be interpreted in terms of a Maurer--Cartan element of $\mathcal{X}$. Let $\phi+\Lambda \in (A^{0,1}(X,T)\oplus A^{0,0}(X,\wedge^2 T))\otimes \mathfrak{m}_R$ be a Maurer--Cartan element associated with $\mathcal{X}$, so that $\mathcal{O}_{\mathcal{X}}$ is equivalent to $ker(\bar{\partial}+\phi:\mathscr{A}^{0,0}\otimes R\to \mathscr{A}^{0,1}\otimes R)$ with the Poisson structure $\Lambda_0+\Lambda$. The homomorphism $g:(R,\mathfrak{m}_R)\to (S, \mathfrak{m}_S)$ induces homomorphisms $A^{0,p}(X,\wedge^q T)\otimes \mathfrak{m}_R\to A^{0,p}(X,\wedge^q T)\otimes \mathfrak{m}_S$. Let $\phi_S+\Lambda_S$ be the image of $\phi+\Lambda$, which also satisfies $L(\phi_S+\Lambda_S)+\frac{1}{2}[\phi_S+\Lambda_S,\phi_S+\Lambda_S]=0$. We have the following commutative diagram
\begin{center}
$\begin{CD}
(\mathscr{A}_X^{0,0}\otimes R,\Lambda_0+\Lambda_R)@>\bar{\partial}+\phi>> \mathscr{A}_X^{0,1}\otimes R\\
@VVV @VVV\\
(\mathscr{A}_X^{0,0}\otimes S, \Lambda_0+\Lambda_S) @>\bar{\partial}+\phi_S>> \mathscr{A}_X^{0,1}\otimes S
\end{CD}$
\end{center}
We claim that
\begin{proposition}
$\mathcal{O}_{\mathcal{X}_S}$ is equivalent to $\mathcal{O}(\phi_S+\Lambda_S)$.
\end{proposition}
\begin{proof}
We recall that for a given locally trivial open covering $\{U_{\alpha}\}$, local trivializations $\varphi_{\alpha}:\mathcal{O}_{\mathcal{X}}(U_{\alpha})\to \mathcal{O}_X(U_{\alpha})\otimes R$ and a $C^{\infty}$-trivialization $C:\mathscr{A}^{0,0}_X\otimes R\to \mathscr{A}^{0,0}_{\mathcal{X}}$ for the family $\mathcal{X}$, we have $\phi=exp(-s_{\alpha})\bar{\partial}exp(s_{\alpha})\in A^{0,1}(X,T)\otimes \mathfrak{m}_R$ and $\Lambda=exp(-s_{\alpha})\Lambda_{\alpha}'-\Lambda_0\in A^{0,0}(X,\wedge^2 T)\otimes \mathfrak{m}_R$. Now we consider the family $\mathcal{X}_S$ over $S$. For the same open covering $\{U_{\alpha}\}$, the local trivializations are induced from $\varphi_{\alpha}$ by tensoring with $\otimes_R S$, and the $C^{\infty}$-trivialization is likewise induced from $C$ by tensoring with $\otimes_R S$. This observation gives the proposition.
\end{proof}
Hence we have the following commutative diagram
\begin{center}
$\begin{CD}
Def_{\mathfrak{g}}(R)@>\mathcal{O}>> PDef_{(X,\Lambda_0)}(R)\\
@VVV @VVV\\
Def_{\mathfrak{g}}(S)@>\mathcal{O}>> PDef_{(X,\Lambda_0)}(S)
\end{CD}$
\end{center}
Hence we proved the following theorem.
\begin{thm}
Let $(X,\Lambda_0)$ be a compact holomorphic Poisson manifold. Then the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ is controlled by the differential graded Lie algebra $\mathfrak{g}=(\bigoplus_{p+q-1=i,p\geq 0,q\geq 1}$
$A^{0,p}(X,\wedge^q T),L=\bar{\partial}+[\Lambda_0,-],[-,-])$. In other words, we have an isomorphism of two functors
\begin{align*}
Def_\mathfrak{g}\cong PDef_{(X,\Lambda_0)}
\end{align*}
\end{thm}
\section{Universal Poisson deformations}
Now we assume that for a holomorphic Poisson manifold $(X,\Lambda_0)$, $HP^1(X,\Lambda_0)=0$.
\subsection{Independence of the choice of Maurer--Cartan elements giving the same cohomology class}\
Let $\mathfrak{m}$ be the maximal ideal of a local artinian $\mathbb{C}$-algebra $R$ with residue field $\mathbb{C}$ such that $\mathfrak{m}^{n+1}=0$. Our goal is to show that, for given $v=\phi+\Lambda$, $w=\psi+\Pi \in g_1\otimes \mathfrak{m}$ satisfying $L v = -\frac{1}{2}[v,v]$ and $Lw=-\frac{1}{2}[w,w]$, with $[\epsilon(v)]=[\epsilon(w)]$ in $\mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, where $\epsilon(v)=(\bar{v},\frac{1}{2}\bar{v}\odot \bar{v},...,\frac{1}{n!}\underbrace{\bar{v}\odot \cdots \odot \bar{v}}_n)\cong_{dec}(v,\cdots, (-1)^{\frac{(n-1)n}{2}} \frac{1}{n!}\underbrace{v\wedge \cdots \wedge v}_n)\in \bigoplus_{i=1}^n sym^ig_1\otimes \mathfrak{m}^i$ and similarly for $\epsilon(w)$, the Poisson deformation $\mathcal{O} (v)$ is equivalent to the Poisson deformation $\mathcal{O}(w)$. In other words, we want to show that
\begin{enumerate}
\item there exists $u_0\in g_0\otimes \mathfrak{m}$ such that $exp(u_0)(\bar{\partial}+\phi)exp(-u_0)=\bar{\partial}+\psi$.
\begin{center}
$\begin{CD}
\mathscr{A}^{0,0}_X\otimes R @>\bar{\partial} +\phi >> \mathscr{A}^{0,1}_X\otimes R\\
@V exp(u_0) VV @VV exp(u_0) V \\
\mathscr{A}^{0,0}_X\otimes R @>\bar{\partial}+\psi >> \mathscr{A}^{0,1}_X\otimes R
\end{CD}$
\end{center}
\item $exp(u_0)(\Lambda_0+\Lambda)=\Lambda_0+\Pi$
\end{enumerate}
We will prove the statement by induction on the exponent $k$ of the maximal ideal of an artinian local $\mathbb{C}$-algebra with residue field $\mathbb{C}$. Let $k=1$, so that $\mathfrak{m}^2=0$, and let $[\epsilon(v)]=[\epsilon(w)]\in \mathbb{H}^0(J_1(\mathfrak{g}))\otimes \mathfrak{m}$. Then there exists $u_0\in g_0\otimes \mathfrak{m}$ such that $(-1)^1Lu_0=w-v$. So we have $Lu_0=\bar{\partial}u_0+[\Lambda_0,u_0]=v-w$. In other words,
\begin{align*}
\bar{\partial}u_0=\phi-\psi\\
[\Lambda_0,u_0]=\Lambda-\Pi
\end{align*}
We note that $exp(u_0)=1+u_0$ and $exp(-u_0)=1-u_0$. Then since $u_0\in g_0\otimes \mathfrak{m}$, $\phi,\psi, \Lambda,\Pi \in g_1\otimes \mathfrak{m}$ and $\mathfrak{m}^2=0$, we have
\begin{enumerate}
\item \begin{align*}
exp(u_0)(\bar{\partial}+\phi)exp(-u_0)&=(1+u_0)(\bar{\partial}+\phi)(1-u_0)=(\bar{\partial}+\phi+u_0\bar{\partial}+u_0\phi)(1-u_0)\\
&=\bar{\partial}+\phi+u_0\bar{\partial}+u_0\phi-\bar{\partial}\cdot u_0-\phi u_0-u_0\bar{\partial}\cdot u_0-u_0\phi u_0\\
&=\bar{\partial}+\phi+u_0\bar{\partial}-\bar{\partial}\cdot u_0=\bar{\partial}+\phi-\bar{\partial}u_0\\
&=\bar{\partial}+\psi
\end{align*}
\item \begin{align*}
exp(u_0)(\Lambda_0+\Lambda)&=(1+u_0)(\Lambda_0+\Lambda)=\Lambda_0+\Lambda+[u_0,\Lambda_0+\Lambda]\\
&=\Lambda_0+\Lambda-[\Lambda_0,u_0]=\Lambda_0+\Lambda-(\Lambda-\Pi)\\
&=\Lambda_0+\Pi
\end{align*}
\end{enumerate}
So the statement holds for $k=1$.
Now let's assume that the statement holds for $k=n-1$. Now let $(R,\mathfrak{m})$ be a local artinian $\mathbb{C}$-algebra with exponent $n$ and let $[\epsilon(v)]=[\epsilon(w)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ where $v,w \in g_1\otimes \mathfrak{m}$. We have the following exact sequence of finite dimensional vector spaces
\begin{align*}
0\to \mathfrak{m}^{n}\to R \to R/\mathfrak{m}^n \to 0
\end{align*}
So we have the following splitting as vector spaces
\begin{align*}
R\cong (R/\mathfrak{m}^n) \oplus\mathfrak{m}^n
\end{align*}
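For a concrete illustration of this splitting (not in the original), take $R=\mathbb{C}[t]/(t^3)$, so $n=2$:

```latex
% \mathfrak{m}=(t), \mathfrak{m}^3=0, \mathfrak{m}^n=(t^2):
0\to(t^2)\to\mathbb{C}[t]/(t^3)\to\mathbb{C}[t]/(t^2)\to 0,
\qquad
\mathbb{C}[t]/(t^3)\cong\mathbb{C}[t]/(t^2)\oplus(t^2)\ \text{as $\mathbb{C}$-vector spaces},
% so an element v\in g_1\otimes\mathfrak{m} splits as v_1+v_2 with
% v_1\in g_1\otimes\mathfrak{m}/\mathfrak{m}^2 and v_2\in g_1\otimes(t^2).
```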
Note that $(R/\mathfrak{m}^n, \mathfrak{m}/\mathfrak{m}^n)$ is a local artinian $\mathbb{C}$-algebra with exponent $n-1$. Now let $v=v_1+v_2=(\phi_1+\Lambda_1)+(\phi_2+\Lambda_2)$ and $w=w_1+w_2=(\psi_1+\Pi_1)+(\psi_2+\Pi_2)$, where $v_1,w_1\in g_1\otimes (\mathfrak{m}/\mathfrak{m}^n)$ and $v_2,w_2\in g_1\otimes\mathfrak{m}^n$. Then we have $[\epsilon(v_1)]=[\epsilon(w_1)]\in \mathbb{H}^0(J_{n-1}(\mathfrak{g}))\otimes \mathfrak{m}/\mathfrak{m}^n$. By the induction hypothesis,
\begin{enumerate}
\item we have the following commutative diagram.
\begin{center}
$\begin{CD}
\mathscr{A}_X^{0,0}\otimes R/\mathfrak{m}^n@>\bar{\partial}+\phi_1 >> \mathscr{A}_X^{0,1}\otimes R/\mathfrak{m}^n\\
@V \exp(u) VV @VV \exp(u) V \\
\mathscr{A}_X^{0,0}\otimes R/\mathfrak{m}^n @>\bar{\partial}+\psi_1>> \mathscr{A}_X^{0,1}\otimes R/\mathfrak{m}^n
\end{CD}$
\end{center}
for some $u \in g_0\otimes \mathfrak{m}/\mathfrak{m}^n$.
\item $\exp(u)(\Lambda_0+\Lambda_1)=\Lambda_0+\Pi_1$.
\end{enumerate}
Let's consider the natural projection $g_0\otimes R \cong g_0\otimes (R/\mathfrak{m}^n\oplus \mathfrak{m}^n)\to g_0\otimes R/\mathfrak{m}^n$.
Choose the lifting of $u$ to be $u+0\in g_0 \otimes (R/\mathfrak{m}^n\oplus \mathfrak{m}^n)\cong g_0\otimes R$. Then
\begin{enumerate}
\item we have the following commutative diagram.
\begin{center}
$\begin{CD}
\mathscr{A}_X^{0,0}\otimes R@>\bar{\partial}+\phi_1+\phi_2 >> \mathscr{A}_X^{0,1}\otimes R\\
@V \exp(u+0) VV @VV \exp(u+0) V \\
\mathscr{A}_X^{0,0}\otimes R @>\bar{\partial}+\psi_1+\phi_2>> \mathscr{A}_X^{0,1}\otimes R
\end{CD}$
\end{center}
since $\phi_2=\phi_2\circ \exp(u)=\exp(u)\circ \phi_2$ (note that $\phi_2\in A^{0,1}(X,T)\otimes \mathfrak{m}^n$, $u\in g_0\otimes \mathfrak{m}$ and $\mathfrak{m}^{n+1}=0$).
\item $\exp(u)(\Lambda_0+\Lambda_1+\Lambda_2)=\Lambda_0+\Pi_1+\Lambda_2$ since $\exp(u)(\Lambda_2)=\Lambda_2$ (note that $\Lambda_2\in A^{0,0}(X,\wedge^2 T)\otimes \mathfrak{m}^n$).
\end{enumerate}
Since $\mathcal{O}(v_1+v_2)$ and $\mathcal{O}(w_1+v_2)$ are equivalent Poisson deformations, we have $[\epsilon(v_1+v_2)]=[\epsilon(w_1+v_2)]$. If we show that $\mathcal{O}(w_1+v_2)$ is equivalent to $\mathcal{O}(w_1+w_2)$, then this means that $\mathcal{O}(v)$ is equivalent to $\mathcal{O}(w)$.
Since $[\epsilon(w_1+v_2)]=[\epsilon(w_1+w_2)]$, there exists $(u_0,...,u_{n-1})\in \bigoplus_{i=0}^{n-1}g_0\otimes sym^ig_1\otimes \mathfrak{m}^{i+1}$ such that
\begin{center}
\tiny{$\begin{CD}
u_0 @>(-1)^1L>> w_2-v_2 \\
@. @A\delta AA\\
@. u_1 @>(-1)^2 L>> -\frac{1}{2}((w_1+w_2)^2-(w_1+v_2)^2)=0 \\
@. @. @A\delta AA\\
@. @. \cdots @>>> \cdots\\
@. @. @. @A\delta AA\\
@. @. @. u_{n-1} @>(-1)^n L>> (-1)^{\frac{(n-1)n}{2}}\frac{1}{n!} ((w_1+ w_2)^n - (w_1+v_2)^n) =0
\end{CD}$}
\end{center}
Since $v_2, w_2\in g_1\otimes \mathfrak{m}^n$ and $w_1\in g_1\otimes \mathfrak{m}$, we have
\begin{align*}
(-1)^{\frac{(i-1)i}{2}}\frac{1}{i!}((w_1+ w_2)^i-(w_1+ v_2)^i)=0\,\,\,\text{ for } \,\,\, i>1.
\end{align*}
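To spell out the vanishing (here, as in the diagram above, powers are taken in the symmetric algebra): after cancelling the terms involving only $w_1$, every remaining term of the difference $(w_1+w_2)^i-(w_1+v_2)^i$ contains at least one factor $w_2$ or $v_2$ from $g_1\otimes \mathfrak{m}^n$ and, since $i>1$, at least one further factor from $g_1\otimes \mathfrak{m}$, hence lies in $sym^ig_1\otimes \mathfrak{m}^{n+1}=0$. For instance, for $i=2$,
\begin{align*}
(w_1+w_2)^2-(w_1+v_2)^2=2w_1\odot (w_2-v_2)+w_2\odot w_2-v_2\odot v_2\in sym^2g_1\otimes \mathfrak{m}^{n+1}=0.
\end{align*}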
Let's consider $u_{n-1}\in g_0\otimes sym^{n-1}g_1\otimes \mathfrak{m}^n$. Write $u_{n-1}=\sum_k a_k\otimes b_k$ where $a_k\in g_0$ and the $b_k$ are linearly independent in $sym^{n-1}g_1\otimes \mathfrak{m}^n$. Since $(-1)^nLu_{n-1}=0$, we have $\sum_k (La_k\otimes b_k+a_k\otimes L b_k)=0$. Since the $b_k$ are linearly independent and the terms $L a_k\otimes b_k$ and $a_k\otimes Lb_k$ live in different spaces, we get $La_k=0$ for every $k$. Since $H^0(\mathfrak{g})=HP^1(X,\Lambda_0)=0$, we have $a_k=0$, and so $u_{n-1}=0$. In this way we can show that $u_1=\cdots=u_{n-1}=0$. Hence we have $(-1)^1Lu_0=w_2-v_2\in g_1\otimes \mathfrak{m}^n$, i.e. $Lu_0=v_2-w_2$. In other words, we have
\begin{align*}
\bar{\partial}u_0=\phi_2-\psi_2\\
[\Lambda_0,u_0]=\Lambda_2-\Pi_2
\end{align*}
Write $u_0=x_1+x_2\in g_0\otimes R\cong g_0\otimes (R/\mathfrak{m}^n\oplus \mathfrak{m}^n)$. Since $Lx_1=0$ and $Lx_2=v_2-w_2$, we have $x_1=0$ by $H^0(\mathfrak{g})=0$. So $u_0\in g_0\otimes \mathfrak{m}^n$. Then we have the following commutative diagram.
\begin{center}
$\begin{CD}
\mathscr{A}_X^{0,0}\otimes R@>\bar{\partial}+\psi_1+\phi_2 >> \mathscr{A}_X^{0,1}\otimes R\\
@V \exp(u_0) VV @VV \exp(u_0) V \\
\mathscr{A}_X^{0,0}\otimes R@>\bar{\partial}+\psi_1+\psi_2>> \mathscr{A}_X^{0,1}\otimes R
\end{CD}$
\end{center}
Indeed,
\begin{align*}
\exp(u_0)(\bar{\partial}+\psi_1+\phi_2)\exp(-u_0)&=(1+u_0)(\bar{\partial}+\psi_1+\phi_2)(1-u_0)\\
&=\bar{\partial}+\psi_1+\phi_2+u_0\bar{\partial}-\bar{\partial}\cdot u_0\\
&=\bar{\partial}+\psi_1+\phi_2-\bar{\partial}u_0\\
&=\bar{\partial}+\psi_1+\psi_2
\end{align*}
And
\begin{align*}
\exp(u_0)(\Lambda_0+\Pi_1+\Lambda_2)&=(1+u_0)(\Lambda_0+\Pi_1+\Lambda_2)\\
&=\Lambda_0+\Pi_1+\Lambda_2+[u_0,\Lambda_0]=\Lambda_0+\Pi_1+\Lambda_2-[\Lambda_0,u_0]\\
&=\Lambda_0+\Pi_1+\Pi_2
\end{align*}
So the induction holds for $k=n$.
\subsection{$n$-th Universal Poisson deformations}\
Recall that $R_n^u=\mathbb{C}\oplus \mathfrak{m}_n^u:=\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$ and exponent $n$ (i.e. $\mathfrak{m}_n^{u\,n+1}=0$).
\begin{definition}[$n$-th universal Poisson deformation]
Since the identity map $\mathbb{H}^0(J_n(\mathfrak{g}))^*\to\mathbb{H}^0(J_n(\mathfrak{g}))^*$ is a homomorphism, it corresponds to a morphic element
\begin{align*}
[\epsilon(v_u)]=[(\bar{v}_u,\frac{1}{2}\bar{v}_u\odot \bar{v}_u,\ldots,\frac{1}{n!}\underbrace{\bar{v}_u\odot\cdots \odot \bar{v}_u}_n)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}_n^u
\end{align*}
where $v_u:=\phi_u+\Lambda_u\in g_1\otimes \mathbb{H}^0(J_n(\mathfrak{g}))^*$. Then $v_u$ defines an infinitesimal Poisson deformation $P_n^u:=\mathcal{O}(v_u)$ over the local artinian $\mathbb{C}$-algebra $R_n^u:=\mathbb{C}\oplus \mathbb{H}^0(J_n(\mathfrak{g}))^*$. We will call $P_n^u$ the $n$-th order universal Poisson deformation of $(X,\Lambda_0)$ over $R_n^u$.
\end{definition}
Let $P$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $(R,\mathfrak{m})$ with $\mathfrak{m}^{n+1}=0$. Assume that $HP^1(X,\Lambda_0)=0$.
Let $v=\phi+\Lambda$ be a Maurer--Cartan element corresponding to the infinitesimal Poisson deformation $P$ of $(X,\Lambda_0)$ over $R$. Then $[\epsilon(v)]=[(\bar{v},\frac{1}{2}\bar{v}\odot \bar{v},...,\frac{1}{n!}\bar{v}\odot \cdots \odot \bar{v})]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$, which induces a homomorphism $[\epsilon(v)]:\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^* \to \mathfrak{m}$. Via the morphism $r:=[\epsilon(v)]$, $v_u\in g_1\otimes \mathfrak{m}_n^{u}$ is sent to $\tilde{v}_u\in g_1 \otimes \mathfrak{m}$. Then $\tilde{v}_u$ satisfies the Maurer--Cartan equation since $v_u$ does. Hence $[\epsilon(\tilde{v}_u)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}$ defines a morphic element, and the corresponding homomorphism $[\epsilon(\tilde{v}_u)]:\mathbb{H}^0(J_n(\mathfrak{g}))^*\xrightarrow{id=[\epsilon(v_u)]} \mathbb{H}^0(J_n(\mathfrak{g}))^*\xrightarrow{r} \mathfrak{m}$ is exactly $[\epsilon(v)]$, so $[\epsilon(v)]=[\epsilon(\tilde{v}_u)]$. Hence, by the assumption $HP^1(X,\Lambda_0)=0$, the deformation $\tilde{v}_u=\tilde{\phi}_u+\tilde{\Lambda}_u$ induced from the deformation $v_u$ by the base change $[\epsilon(v)]:\mathbb{H}^0(J_n(\mathfrak{g}))^* \to \mathfrak{m}$
\begin{center}
$\begin{CD}
(\mathscr{A}_X^{0,0}\otimes R_n^u,\Lambda_0+\Lambda_u)@>\bar{\partial}+\phi_u>> \mathscr{A}_X^{0,1}\otimes R_n^u\\
@VVV @VVV\\
(\mathscr{A}_X^{0,0}\otimes R,\Lambda_0+\tilde{\Lambda}_u) @>\bar{\partial}+\tilde{\phi}_u>> \mathscr{A}_X^{0,1}\otimes R
\end{CD}$
\end{center}
is equivalent to $v=\phi+\Lambda$, which represents the infinitesimal Poisson deformation $P$ of $(X,\Lambda_0)$ over $R$.
Then we have $P/R\cong r^* P_n^u=P_n^u\times_{Spec(R_n^u)} Spec(R)$. This proves our main Theorem \ref{2theorem} (2) in the Introduction of Part II of the thesis.
\subsection{Formal Completion}\
The natural map $\mathbb{H}^0(J_{n-1}(\mathfrak{g}))\to \mathbb{H}^0(J_n (\mathfrak{g}))$ gives dually the homomorphism $\mathbb{H}^0(J_n(\mathfrak{g}))^*\to \mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*$. Set $\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^*$ and $R_n^u:=\mathbb{C}\oplus \mathfrak{m}_n^u$. Take the inverse limit $\mathfrak{m}^u:=\varprojlim_n \mathfrak{m}_n^u$. Then we have
\begin{align*}
\hat{R}^u:=\mathbb{C}\oplus \mathfrak{m}^u=\mathbb{C}\oplus \varprojlim_n \mathfrak{m}_n^u=\varprojlim_n (\mathbb{C}\oplus \mathfrak{m}_n^u)=\varprojlim_n R_n^u
\end{align*}
By our construction of $J_n(\mathfrak{g})$, we have $\mathbb{C}\oplus \mathfrak{m}^u/\mathfrak{m}^{u\,n+1}=\mathbb{C}\oplus \mathfrak{m}_n^u$. Hence $(\hat{R}^u,\mathfrak{m}^u)=\varprojlim(R_n^u,\mathfrak{m}_n^u)$ is a complete local noetherian $\mathbb{C}$-algebra with respect to the $\mathfrak{m}^u$-adic topology.
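For orientation, a standard instance of such a completion (independent of the Poisson setting) is the formal power series ring: for $R_n=\mathbb{C}[t]/(t^{n+1})$ with maximal ideal $\mathfrak{m}_n=(t)/(t^{n+1})$, we have
\begin{align*}
\varprojlim_n \mathbb{C}[t]/(t^{n+1})=\mathbb{C}[[t]],\,\,\,\,\, \mathbb{C}[[t]]/(t)^{n+1}=\mathbb{C}[t]/(t^{n+1}),
\end{align*}
and $\mathbb{C}[[t]]$ is a complete local noetherian $\mathbb{C}$-algebra with respect to the $(t)$-adic topology.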
From $\mathfrak{m}_n^u=\mathbb{H}^0(J_n(\mathfrak{g}))^*\to \mathfrak{m}_{n-1}^u=\mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*$, the morphic element $[\epsilon(v_u=\phi_u+\Lambda_u)]=[(\bar{v}_u,\frac{1}{2}\bar{v}_u\odot \bar{v}_u,\cdots ,\frac{1}{n!}\bar{v}_u\odot \cdots \odot \bar{v}_u)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}_n^u$ inducing the identity map on $\mathbb{H}^0(J_n(\mathfrak{g}))^*$, gives a morphic element $[\epsilon(\tilde{v}_u=\tilde{\phi}_u+\tilde{\Lambda}_u)]=[(\bar{\tilde{v}}_u,\cdots, \frac{1}{(n-1)!}\bar{\tilde{v}}_u\odot \cdots \odot \bar{\tilde{v}}_u,0)]\in \mathbb{H}^0(J_n(\mathfrak{g}))\otimes \mathfrak{m}_{n-1}^u$ (via $\mathfrak{m}_n^u\to \mathfrak{m}_{n-1}^u$) which can be considered as an element in $\mathbb{H}^0(J_{n-1}(\mathfrak{g}))\otimes \mathfrak{m}_{n-1}^u$ inducing the identity map on $\mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*$ and so we have the following commutative diagram:
\begin{center}
$\begin{CD}
\mathbb{H}^0(J_{n}(\mathfrak{g}))^* @>id=[\epsilon(v_u)] >> \mathbb{H}^0(J_{n}(\mathfrak{g}))^*\\
@VVV @VVV\\
\mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*@>id=[\epsilon(\tilde{v}_u)]>> \mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*
\end{CD}$
\end{center}
Hence we have the following commutative diagram
\begin{center}
$\begin{CD}
(\mathscr{A}_X^{0,0}\otimes R_n^u,\Lambda_0+\Lambda_u)@>\bar{\partial}+\phi_u>> \mathscr{A}_X^{0,1}\otimes R_n^u\\
@VVV @VVV\\
(\mathscr{A}_X^{0,0}\otimes R_{n-1}^u,\Lambda_0+\tilde{\Lambda}_u)@>\bar{\partial}+\tilde{\phi}_u>>\mathscr{A}_X^{0,1}\otimes R_{n-1}^u
\end{CD}$
\end{center}
So the $n$-th universal Poisson deformations $P_n^u/R_n^u$ fit together to form a direct system with limit
\begin{align*}
\hat{P}^u/\hat{R}^u=\varinjlim P_n^u/R_n^u
\end{align*}
Now let $\hat{R}=\varprojlim (R_n,\mathfrak{m}_n)$ be a complete local noetherian $\mathbb{C}$-algebra, where each $(R_n,\mathfrak{m}_n)$ is a local artinian $\mathbb{C}$-algebra with residue $\mathbb{C}$, and let $\hat{P}/\hat{R}= \varinjlim_n P_n/R_n$ be a formal Poisson analytic space over $\hat{R}$, where $P_n/R_n$ is an infinitesimal Poisson deformation of $(X,\Lambda_0)$. This can be interpreted as a sequence $\{r_n\}$ of morphic elements $r_n \in \mathbb{H}^0(J_n(\mathfrak{g})) \otimes \mathfrak{m}_n$ such that $r_n$ induces $r_{n-1}\in \mathbb{H}^0(J_{n-1}(\mathfrak{g}))\otimes \mathfrak{m}_{n-1}$ under the natural map $\mathfrak{m}_n\to \mathfrak{m}_{n-1}$. Hence we have the following commutative diagram:
\begin{center}
$\begin{CD}
\mathbb{H}^0(J_n(\mathfrak{g}))^* @>r_n >> \mathfrak{m}_n\\
@VVV @VVV\\
\mathbb{H}^0(J_{n-1}(\mathfrak{g}))^*@>r_{n-1}>> \mathfrak{m}_{n-1}
\end{CD}$
\end{center}
So we have the map $\hat{r}=\varprojlim_n r_n:\hat{R}^u\to \hat{R}$, which induces $\hat{P}/\hat{R}=\hat{r}^*(\hat{P}^u/\hat{R}^u)$. This proves our main Theorem \ref{2theorem} (3) in the Introduction of Part II of the thesis, and so completes the proof of Theorem \ref{2theorem}.
\part{Deformations of algebraic Poisson schemes}\label{part3}
In the third part of the thesis, we study deformations of algebraic Poisson schemes over an algebraically closed field $k$, which is an algebraic version of the first part of the thesis. In chapter \ref{chapter7}, we discuss the definition of Poisson schemes, morphisms and cohomology. A Poisson scheme $X$ is a scheme whose structure sheaf $\mathcal{O}_X$ is a sheaf of Poisson $k$-algebras. Equivalently, a Poisson structure on a scheme $X$ is characterized by an element $\Lambda_0\in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2\Omega_{\mathcal{O}_{X}/k}^1,\mathcal{O}_X))$ with $[\Lambda_0,\Lambda_0]=0$. By a deformation of a Poisson scheme $(X,\Lambda_0)$ we mean a commutative diagram
\begin{center}
$\xi:$
$\begin{CD}
(X,\Lambda_0) @>>> (\mathcal{X},\Lambda)\\
@VVV @VV{\pi}V\\
Spec(k) @>s>>S
\end{CD}$
\end{center}
where $\pi$ is flat and surjective, $S$ is connected, and $(\mathcal{X},\Lambda)$ is a Poisson scheme over $S$ defined by $\Lambda\in \Gamma(\mathcal{X}, \mathscr{H}om(\wedge^2 \Omega_{\mathcal{X}/S}^1,\mathcal{O}_{\mathcal{X}}))$, with $X \cong \mathcal{X} \times_S Spec(k)$ as Poisson schemes. Note that when we ignore Poisson structures, a Poisson deformation is a usual flat deformation of the algebraic scheme $X$. Following Sernesi's book \cite{Ser06}, we extend the formalism of ordinary flat deformations to Poisson deformations. We show that, given a Poisson scheme $(X,\Lambda_0)$, first order Poisson deformations (i.e. Poisson deformations over the ring of dual numbers $k[\epsilon]$) whose underlying flat deformations (when we ignore Poisson structures) are locally trivial are naturally in one to one correspondence with $HP^2(X,\Lambda_0)$, the second (truncated) Lichnerowicz-Poisson cohomology group, in other words the second hypercohomology of the following complex of sheaves induced by $[\Lambda_0,-]$.
\begin{align*}
0\to \mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X)\xrightarrow{[\Lambda_0,-]} \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega^1_{X/k},\mathcal{O}_X)\xrightarrow{[\Lambda_0,-]}\mathscr{H}om_{\mathcal{O}_X}(\wedge^3\Omega^1_{X/k},\mathcal{O}_X)\xrightarrow{[\Lambda_0,-]}\cdots
\end{align*}
We also show that for a smooth Poisson algebraic scheme over $k$, any small extension $e:0\to (t)\to \tilde{A}\to A\to 0$ (i.e. $(A,\mathfrak{m})$ and $(\tilde{A},\tilde{\mathfrak{m}})$ are local artinian $k$-algebras with residue $k$ and $t\cdot \tilde{\mathfrak{m}}=0$) and any infinitesimal Poisson deformation $\xi$ of $(X,\Lambda_0)$ over $Spec(A)$, we can associate
an element $o_{\xi}(e)\in HP^3(X,\Lambda_0)$ such that $o_{\xi}(e)=0$ if and only if a lifting of $\xi$ to $\tilde{A}$ exists. So $HP^3(X,\Lambda_0)$ is an obstruction space. We also show that if $HP^2(X,\Lambda_0)=0$, then $(X,\Lambda_0)$ is rigid, which means that any infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$ is trivial for every local artinian $k$-algebra $A$.
In chapter \ref{chapter8}, we discuss the Poisson deformation functor $PDef_{(X,\Lambda_0)}$, which is a functor of Artin rings. For a local artinian $k$-algebra $A$ with residue $k$, $PDef_{(X,\Lambda_0)}(A)$ is the set of Poisson deformations over $Spec(A)$ up to Poisson equivalence. We show that for a smooth projective Poisson scheme $(X,\Lambda_0)$, $PDef_{(X,\Lambda_0)}$ satisfies Schlessinger's criteria $(H_0),(H_1),(H_2),(H_3)$ and so $PDef_{(X,\Lambda_0)}$ has a miniversal family. We also show that if, in addition, $HP^1(X,\Lambda_0)=0$, then $PDef_{(X,\Lambda_0)}$ is pro-representable.
In chapter \ref{chapter9}, we extend the construction of the cotangent complex (\cite{Sch67}) to the Poisson case. Let $A\to B$ be a Poisson homomorphism of Poisson $k$-algebras, and let $M$ be a Poisson $B$-module. We construct $PT^i(B/A,M)$ in a way similar to the construction of $T^i(B/A,M)$ in \cite{Sch67}. As an application to Poisson deformations, we show that for a Poisson algebra $B_0$, $PDef_{Spec(B_0)}(k[\epsilon])$ is in natural one to one correspondence with $PT^1(B_0/k,B_0)$. We also show that given a Poisson algebra $B_0$ and a Poisson ideal $I$ of $B_0$, deformations of the Poisson subscheme $Spec(C)$ of $Spec(B_0)$ over $Spec(k[\epsilon])$ are in one to one correspondence with $PT^1(C/B_0,C)$, where $C=B_0/I$.
\chapter{Deformations of algebraic Poisson schemes}\label{chapter7}
\section{Definitions of Poisson schemes, morphisms and cohomology}
In this section, every algebra is a commutative $k$-algebra, where $k$ is a field. Our reference is \cite{Lau13} Chapter 3. For algebraic geometry, we refer to \cite{Har77}, \cite{Liu02}.
\subsection{Characterization of a Poisson bracket $\{-,-\}$ of a Poisson algebra $A$ over $R$}
In this subsection, we will characterize a Poisson structure on a commutative algebra $A$ over $R$ in terms of an element $\Lambda \in Hom_A(\wedge^2\Omega_{A/R}^1,A)$ with $[\Lambda,\Lambda]=0$, where $[-,-]$ is the Schouten bracket on $\bigoplus_{p\geq 1}Hom_A(\wedge^p \Omega_{A/R}^1,A)$.
\begin{definition}
Let $A$ be a commutative $R$-algebra and let $p\geq 1$. A skew symmetric $p$-linear map $P\in Hom_R(\wedge^p A,A)$ is called a skew symmetric $p$-derivation of $A$ over $R$, if $P$ is a derivation in each of its components.
\end{definition}
Let $\Omega_{A/R}^1$ be the $A$-module of relative K\"{a}hler differential forms of $A$ over $R$. Then the $R$-module of all skew symmetric $p$-derivations in $Hom_R(\wedge^p A,A)$ is identified with $Hom_A(\wedge^p \Omega_{A/R}^1,A)$. Let $P$ be a skew symmetric $p$-derivation in $Hom_R(\wedge^p A,A)$. Then the associated $\tilde{P} \in Hom_A(\wedge^p \Omega_{A/R}^1,A)$ is defined in the following way: $\tilde{P}(da_1\wedge\cdots \wedge da_p):=P(a_1,\cdots, a_p)$ where $d:A\to \Omega_{A/R}^1$ is the canonical map.
\subsubsection{The Schouten bracket on $\bigoplus_{p\geq 1} Hom_A(\wedge^p \Omega_{A/R}^1,A)$ and characterization of a Poisson bracket on $A$}
\begin{definition}
For $p,q \in \mathbb{N}$, a $(p,q)$-shuffle is a permutation $\sigma$ of the set $\{1,...,p+q\}$, such that $\sigma(1)< \cdots < \sigma(p)$ and $\sigma(p+1) < \cdots <\sigma(p+q)$. The set of all $(p,q)$-shuffles is denoted by $S_{p,q}$. For a shuffle $\sigma \in S_{p,q}$, we denote the signature of $\sigma$ by $sgn(\sigma)$. By convention, $S_{p,-1}:=\emptyset$ and $S_{-1,q}:= \emptyset $ for $p,q\in \mathbb{N}$.
\end{definition}
\begin{definition}
We define the Schouten bracket $[-,-]$ on $\bigoplus_{p\geq 1} Hom_A(\wedge^p \Omega_{A/R}^1,A)$, namely a family of maps
\begin{align*}
[-,-]:Hom_A(\wedge^p \Omega_{A/R}^1,A) \times Hom_A(\wedge^q \Omega_{A/R}^1,A) \to Hom_A(\wedge^{p+q-1}\Omega_{A/R}^1,A)
\end{align*}
for $p,q\in \mathbb{N}$ in the following way: for $P\in Hom_A(\wedge^p \Omega_{A/R}^1,A)$, $Q\in Hom_A(\wedge^q \Omega_{A/R}^1,A)$ and $F_1,...,F_{p+q-1}\in A$, set
\begin{align*}
[P,Q](dF_1\wedge \cdots\wedge dF_{p+q-1})=\sum_{\sigma\in S_{q,p-1}}sgn(\sigma)P(d(Q(dF_{\sigma(1)}\wedge...\wedge dF_{\sigma(q)}))\wedge dF_{\sigma(q+1)}\cdots\wedge dF_{\sigma(q+p-1)})\\
-(-1)^{(p-1)(q-1)}\sum_{\sigma\in S_{p,q-1}}sgn(\sigma)Q(d(P(dF_{\sigma(1)}\wedge...\wedge dF_{\sigma(p)}))\wedge dF_{\sigma(p+1)}\wedge\cdots \wedge dF_{\sigma(p+q-1)})
\end{align*}
\end{definition}
\begin{example}\label{3ex}
Let $P\in Hom_A(\wedge^2 \Omega_{A/R}^1,A)$ and $Q\in Hom_A(\Omega_{A/R}^1,A)$. Then
\begin{align*}
[P,Q](dF_1\wedge dF_2)=P(d(Q(dF_1))\wedge dF_2)-P(d(Q(dF_2))\wedge dF_1)-Q(d(P(dF_1\wedge dF_2)))
\end{align*}
\end{example}
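Specializing instead to $p=q=1$, for $P,Q\in Hom_A(\Omega_{A/R}^1,A)$ the definition gives
\begin{align*}
[P,Q](dF)=P(d(Q(dF)))-Q(d(P(dF))),
\end{align*}
i.e. the Schouten bracket of two derivations is their usual commutator.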
\begin{proposition}
Let $A$ be a commutative algebra over $R$. If $\Lambda$ is a skew symmetric biderivation of $A$ over $R$, i.e. $\Lambda\in Hom_A(\wedge^2 \Omega_{A/R}^1,A)$, then $\Lambda$ defines a Poisson bracket $($i.e. the Jacobi identity holds$)$ if and only if $[\Lambda,\Lambda]=0$.
\end{proposition}
\begin{proof}
See \cite{Lau13} Proposition 3.5 page 80.
\end{proof}
\begin{notation}
Let $A$ be a Poisson algebra over $R$ with a Poisson bracket $\{-,-\}$. Let $\Lambda$ be the associated biderivation with the Poisson bracket $\{-,-\}$. Then we will denote by $(A,\Lambda)$ the Poisson algebra $A$ over $R$ with the Poisson bracket $\{-,-\}$.
\end{notation}
\begin{remark}
Let $(A,\Lambda)$ be a Poisson algebra over $R$ with the Poisson structure $\Lambda \in Hom_A(\wedge^2 \Omega_{A/R}^1,A)$ satisfying $[\Lambda,\Lambda]=0$. Let $\mathfrak{g}=\oplus_{i\geq 0} g_i$, where $g_i=Hom_A(\wedge^{i+1}\Omega_{A/R}^1,A)$. Then $\mathfrak{g}=(\bigoplus_{i\geq 0} g_i,[-,-], [\Lambda,-])$ is a differential graded Lie algebra with the differential $[\Lambda,-]$. In other words, we have the following properties: for $P\in Hom_A(\wedge^p \Omega_{A/R}^1,A)$, $Q\in Hom_A(\wedge^q \Omega_{A/R}^1,A)$ and $S\in Hom_A(\wedge^r \Omega_{A/R}^1,A)$,
\begin{enumerate}
\item $[\Lambda,[\Lambda,P]]=0$ and $[\Lambda,P]\in Hom_A(\wedge^{p+1} \Omega_{A/R}^1,A)$
\item $[P,Q]=-(-1)^{(p-1)(q-1)}[Q,P]$
\item $[[P,Q],S]=[P,[Q,S]]-(-1)^{(p-1)(q-1)}[Q,[P,S]]$
\item $[\Lambda,[P,Q]]=[[\Lambda,P],Q]+(-1)^{p-1}[P,[\Lambda,Q]]$
\end{enumerate}
\begin{definition}
Let $(A,\Lambda)$ be a Poisson algebra over $R$. We define $i$-th truncated Lichnerowicz Poisson cohomology of $(A,\Lambda)$ to be the $i$-th cohomology group of the following complex
\begin{align*}
0\to Hom_A(\Omega_{A/R}^1,A)\xrightarrow{[\Lambda,-]} Hom_A (\wedge^2 \Omega_{A/R}^1,A)\xrightarrow{[\Lambda,-]} Hom_A(\wedge^3 \Omega_{A/R}^1,A)\xrightarrow{[\Lambda,-]} \cdots
\end{align*}
We will denote the $i$-th truncated Lichnerowicz Poisson cohomology group by $HP^i(A,\Lambda)$.
\end{definition}
\end{remark}
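For instance, for the trivial Poisson structure $\Lambda=0$ the differential $[\Lambda,-]$ vanishes, so the complex above has zero differentials and
\begin{align*}
HP^i(A,0)=Hom_A(\wedge^i \Omega_{A/R}^1,A)\,\,\,\text{ for }\,\,\, i\geq 1.
\end{align*}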
\subsubsection{Characterization of Poisson morphisms}\
Let $f:A\to B$ be a $R$-homomorphism. Then we have the following commutative diagram
\begin{center}
$\begin{CD}
\Omega_{A/R}\otimes_A B@>>> \Omega_{B/R}\\
@Ad_{A} AA @Ad_{B} AA\\
A@>f>>B
\end{CD}$
\end{center}
So we have a canonical homomorphism $\wedge^2 \Omega_{A/R}\otimes_A B\to \wedge^2\Omega_{B/R}$. This induces $f^*:Hom_B(\wedge^2 \Omega_{B/R},B)\to Hom_B(\wedge^2 \Omega_{A/R}\otimes_A B,B)\cong Hom_A(\wedge^2 \Omega_{A/R},B)$.
\begin{proposition}
Let $(A,P)$ and $(B,Q)$ be two Poisson $R$-algebras. Then a homomorphism $f:A\to B$ of $R$-algebras is a Poisson homomorphism if and only if $f^*Q=f\circ P$.
\end{proposition}
\begin{proof}
Let $f$ be a Poisson $R$-homomorphism, in other words, $f(\{a,b\})=\{f(a),f(b)\}$. Then $f(P(d_A a,d_A b))=Q(d_B f(a),d_B f(b))=f^* Q(d_A a,d_A b)$. Hence we get $f^*Q=f\circ P$. Conversely, if $f^*Q=f\circ P$, then reading the same computation backwards shows that $f$ preserves the Poisson brackets.
\end{proof}
\begin{example}[Poisson ideals]
Let $I$ be an ideal of a commutative $R$-algebra $A$ and set $B=A/I$. Let $\Lambda\in Hom_A(\wedge^2 \Omega_{A/R},A)$ be a Poisson structure on $A$ over $R$. The map $A\to B$ induces $Hom_{B}(\wedge^2 \Omega_{B/R},B)\to Hom_A(\wedge^2 \Omega_{A/R},B)$, which is injective since $\Omega^1_{A/R}\otimes_A B\to \Omega^1_{B/R}$ is surjective. Let $\bar{\Lambda}$ be the composition of $\Lambda$ followed by $A\to B$. If $\bar{\Lambda}$ has a preimage $P$, then $P$ defines a Poisson structure on $B$, which makes $I$ a Poisson ideal of $A$. Indeed, we show that $[P,P]=0$: the map $Hom_{B}(\wedge^3\Omega_{B/R},B)\to Hom_A(\wedge^3 \Omega_{A/R}, B)$ is injective and it sends $[P,P]$ to $\overline{[\Lambda,\Lambda]}=0$, where $\overline{[\Lambda,\Lambda]}$ is the composition of $[\Lambda,\Lambda]$ followed by $A\to B$. Hence $[P,P]=0$.
\end{example}
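For a concrete illustration: let $A=k[x,y]$ with the Poisson structure determined by $\{x,y\}=y$. Then the ideal $I=(y)$ satisfies $\{A,I\}\subset I$, since
\begin{align*}
\{f,yg\}=\{f,y\}g+y\{f,g\}\in (y)\,\,\,\text{ for all }\,\,\, f,g\in A.
\end{align*}
Hence $I$ is a Poisson ideal and $B=A/I\cong k[x]$ inherits the induced $($here trivial$)$ Poisson structure.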
\subsection{Affine Poisson Schemes}
\subsubsection{Poisson $(k)$-sheaves on a topological space $X$}
\begin{definition}
Let $X$ be a topological space and let $k$ be a field. A Poisson presheaf $\mathcal{F}$ on $X$ consists of the following data:
\begin{enumerate}
\item A Poisson $k$-algebra $\mathcal{F}(U)$ for every open subset $U$ of $X$, and
\item a Poisson $k$-algebra homomorphism $\rho_{UV}:\mathcal{F}(U)\to \mathcal{F}(V)$ for every inclusion of open subsets $V\subset U$,
\end{enumerate}
which satisfy the following conditions:
\begin{enumerate}
\item $\mathcal{F}(\emptyset)=0$ for the empty set $\emptyset$.
\item $\rho_{UU}$ is the identity map $\mathcal{F}(U)\to \mathcal{F}(U)$.
\item If we have three open subsets $W\subset V \subset U$, then $\rho_{UW}=\rho_{VW}\circ \rho_{UV}$.
\end{enumerate}
\end{definition}
We call $\rho_{UV}$ restriction maps, and we write $s|_V$ instead of $\rho_{UV}(s)$ for $s\in \mathcal{F}(U)$. We refer to $\mathcal{F}(U)$ as the sections of $\mathcal{F}$ over $U$.
\begin{definition}
We say that a Poisson presheaf $\mathcal{F}$ is a Poisson sheaf if we have the following properties:
\begin{enumerate}
\item $($Uniqueness$)$ Let $s\in \mathcal{F}(U)$ for an open subset $U$ of $X$, and let $\{ U_i \}$ be an open covering of $U$. If $s|_{U_i}=0$ for every $i$, then $s=0$.
\item $($Glueing local sections$)$ Let $U$ be an open subset of $X$ and $\{U_i\}$ be an open covering of $U$. Let $s_i\in \mathcal{F}(U_i)$ be such that $s_i|_{U_i\cap U_j} =s_j|_{U_i\cap U_j}$ for all $i,j$. Then there exists $s\in \mathcal{F}(U)$ such that $s|_{U_i}=s_i$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $\mathcal{F}$ and $\mathcal{G}$ be Poisson presheaves on $X$. A morphism $f:\mathcal{F}\to \mathcal{G}$ is called a Poisson morphism if $f(U):\mathcal{F}(U)\to \mathcal{G}(U)$ is a Poisson homomorphism for every open set $ U \subset X $.
\end{definition}
\begin{remark}\
\begin{enumerate}
\item Let $\mathcal{F}$ be a Poisson presheaf on $X$, and let $x\in X$. The stalk $\mathcal{F}_x$ at $x$ is a Poisson $k$-algebra.
\item Let $\mathcal{F}$ be a Poisson presheaf on $X$. There exists a Poisson sheaf $\mathcal{F}^{+}$ associated to $\mathcal{F}$ and a morphism of Poisson presheaves $\theta:\mathcal{F}\to \mathcal{F}^+$ verifying the following universal property: for every Poisson morphism $\alpha:\mathcal{F}\to \mathcal{G}$, where $\mathcal{G}$ is a Poisson sheaf, there exists a unique Poisson morphism $\tilde{\alpha}:\mathcal{F}^{+}\to \mathcal{G}$ such that $\alpha=\tilde{\alpha}\circ \theta$.
\end{enumerate}
\end{remark}
\begin{definition}[Poisson locally ringed spaces]
A Poisson ringed topological space consists of a topological space $X$ endowed with a Poisson sheaf $($a sheaf of Poisson $k$-algebras$)$ $\mathcal{O}_X$ on $X$ such that for every $x\in X$ the stalk $\mathcal{O}_{X,x}$ is a local ring which is a Poisson $k$-algebra. We denote it by $(X,\mathcal{O}_X)$.
\end{definition}
\begin{definition}
A Poisson morphism of Poisson ringed topological spaces
\begin{align*}
(f,f^{\sharp}):(X,\mathcal{O}_X)\to (Y,\mathcal{O}_Y)
\end{align*}
consists of a continuous map $f:X\to Y$ and a morphism of Poisson sheaves $f^\sharp:\mathcal{O}_Y\to f_* \mathcal{O}_X$ such that for every $x\in X$, the induced Poisson homomorphism $f^\sharp_x:\mathcal{O}_{Y,f(x)}\to \mathcal{O}_{X,x}$ is a local Poisson homomorphism. We define the compositions of two Poisson morphisms of Poisson ringed topological spaces in an obvious manner.
\end{definition}
\subsubsection{Affine Poisson schemes}\
We recall the following facts.
\begin{lemma}
Let $A$ be a commutative $R$-algebra, $S\subset A$ a multiplicatively closed system, and $A_S$ the corresponding localization of $A$. Then the module of relative differential forms $\Omega_{A_S/R}^1$ is given by the localization $(\Omega_{A/R}^1)_S$, and the map
\begin{align*}
d:A_S\to (\Omega_{A/R}^1)_S,\,\,\,\,\, \frac{f}{s}\mapsto \frac{sd_{A/R}(f)-fd_{A/R}(s)}{s^2}
\end{align*}
serves as the exterior differential of $A_S$ over $R$, where $d_{A/R}:A\to \Omega_{A/R}^1$ denotes the exterior differential of $A$. Moreover, we have $\Omega_{A/R}^1\otimes_A A_S\xrightarrow{\sim} \Omega_{A_S/R}^1$ and $\Omega_{A_S/A}^1=0$.
\end{lemma}
\begin{proof}
See \cite{Bos13} page 354 exercise 2.
\end{proof}
Let $A$ be a Poisson algebra over $k$ and let $\Lambda\in Hom_A(\wedge^2 \Omega_{A/k}^1, A)$ be the Poisson $k$-structure on $A$, denoted by $\{-,-\}$. Let $S$ be a multiplicative system of $A$. Then $\Lambda$ induces a Poisson structure on $A_S$, denoted by $\{-,-\}_S$, from the natural map $Hom_A(\wedge^2 \Omega_{A/k}^1,A)\to Hom_A(\wedge^2 \Omega_{A/k}^1,A_S)\cong Hom_{A_S}(\wedge^2 \Omega_{A_S/k}^1,A_S)$. More precisely, we have
\begin{align*}
\{\frac{a_1}{s_1},\frac{a_2}{s_2}\}_S=\Lambda(d(\frac{a_1}{s_1}),d(\frac{a_2}{s_2}))=\Lambda(\frac{s_1da_1-a_1ds_1}{s_1^2},\frac{s_2da_2-a_2ds_2}{s_2^2})\\
=\frac{\{a_1,a_2\}}{s_1s_2}-\frac{a_2\{a_1,s_2\}}{s_1s_2^2}-\frac{a_1\{s_1,a_2\}}{s_1^2s_2}+\frac{a_1a_2\{s_1,s_2\}}{s_1^2s_2^2}
\end{align*}
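In particular, taking $s_1=s_2=1$ in the formula above, we get
\begin{align*}
\{\frac{a_1}{1},\frac{a_2}{1}\}_S=\frac{\{a_1,a_2\}}{1},
\end{align*}
so the canonical map $A\to A_S$ is a Poisson homomorphism.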
\begin{proposition}
$X=Spec (A)$ for a Poisson $k$-algebra $(A,\Lambda)$ is a Poisson ringed topological space.
\end{proposition}
\begin{proof}
Let $\mathfrak{p}$ be a prime ideal of $A$. Then $A_{\mathfrak{p}}$ has a natural Poisson structure induced from $(A,\Lambda)$ with the Poisson bracket $\{-,-\}_{\mathfrak{p}}$. Let $U$ be an open set of $Spec(A)$ and let $a,b\in \mathcal{O}_X(U)$. Then $a,b$ can be identified with maps $a,b:U\to \bigcup_{\mathfrak{p}\in U} A_{\mathfrak{p}}$ locally defined by an element of $A_f$ for $D(f)\subset U$. We define $\{a,b\}:U\to\bigcup_{\mathfrak{p}\in U} A_{\mathfrak{p}}$ by $\mathfrak{p}\mapsto \{a_{\mathfrak{p}},b_{\mathfrak{p}}\}_{\mathfrak{p}}$. Since on each principal open set of the form $D(f)$ the Poisson structure is induced from $\Lambda$, we have $\{a,b\}\in \mathcal{O}_X(U)$. Hence the structure sheaf $\mathcal{O}_X$ is a Poisson sheaf and $X$ is a Poisson ringed topological space.
\end{proof}
\begin{definition}[affine Poisson schemes]
We define an affine Poisson scheme to be a Poisson ringed topological space isomorphic to some $(Spec\,A,\mathcal{O}_{Spec\,A})$ for a Poisson $k$-algebra $(A,\Lambda)$.
\end{definition}
\begin{definition}[Poisson schemes]
A Poisson scheme is a Poisson ringed topological space $(X,\mathcal{O}_X)$ admitting an open covering $\{U_i\}$ of $X$ such that $(U_i,\mathcal{O}_X|_{U_i})$ is an affine Poisson $k$-scheme for every $i$.
\end{definition}
Note that any $k$-scheme can be considered as a Poisson scheme, since any $k$-algebra $A$ has the trivial Poisson structure, i.e. $\{f,g\}=0$ for any $f,g \in A$. We will consider a scheme without a Poisson structure to be a scheme with the trivial Poisson structure.
\subsection{Poisson Schemes}
\begin{definition}
Let $X\to S$ be a morphism of schemes. There is an operation
\begin{align*}
[-,-]:\mathscr{H}om_{\mathcal{O}_X}(\wedge^p \Omega_{X/S},\mathcal{O}_X )\times \mathscr{H}om_{\mathcal{O}_X}(\wedge^q \Omega_{X/S},\mathcal{O}_X )\to\mathscr{H}om_{\mathcal{O}_X}(\wedge^{p+q-1} \Omega_{X/S},\mathcal{O}_X )
\end{align*}
which is called the Schouten bracket on a scheme $X$ over $S$.
\end{definition}
The bracket $[-,-]$ is defined in the following way: $\Gamma(U, \mathscr{H}om_{\mathcal{O}_X}(\wedge^p\Omega_{X/S},\mathcal{O}_X))$ is the set of maps $\beta:U\to \bigcup_{x\in U} Hom_{\mathcal{O}_{X,x}}(\wedge^p\Omega_{\mathcal{O}_{X,x}/\mathcal{O}_{S,s}},\mathcal{O}_{X,x})$ such that for any $x$, there exist an affine open neighborhood $V$ of $s=f(x)$, an affine open neighborhood $U_x\subset f^{-1}(V)$ of $x$ and $\alpha\in Hom_{\mathcal{O}_X(U_x)}(\wedge^p \Omega_{\mathcal{O}_X(U_x)/\mathcal{O}_S(V)}^1,\mathcal{O}_X(U_x))$ with $\beta(x)=\alpha_x$. So on $U$, we define $[\beta_1,\beta_2]:U\to \bigcup_{x\in U} Hom_{\mathcal{O}_{X,x}}(\wedge^{p+q-1}\Omega_{\mathcal{O}_{X,x}/\mathcal{O}_{S,s}},\mathcal{O}_{X,x})$, $x\mapsto [\beta_1(x),\beta_2(x)]_x$, where $[-,-]_x$ is the Schouten bracket on $\oplus_p Hom_{\mathcal{O}_{X,x}}(\wedge^p\Omega_{\mathcal{O}_{X,x}/\mathcal{O}_{S,s}},\mathcal{O}_{X,x})$. Hence to show the existence of the Schouten bracket on $X$ over $S$, we only need to check the following lemma.
\begin{lemma}
Let $A$ be a commutative $R$-algebra $(f:R\to A)$, and $\mathfrak{p}$ be a prime ideal of $A$. Let $\mathfrak{q}=f^{-1}(\mathfrak{p})$. The following diagram commutes
\begin{center}
$\begin{CD}
Hom_A(\wedge^p \Omega_{A/R}^1,A)\times Hom_A(\wedge^q \Omega_{A/R}^1,A)@>[-,-]>> Hom_A(\wedge^{p+q-1}\Omega_{A/R}^1,A)\\
@VVV @VVV\\
Hom_A(\wedge^p \Omega_{A/R}^1,A_{\mathfrak{p}})\times Hom_A(\wedge^q \Omega_{A/R}^1, A_{\mathfrak{p}})@>[-,-]>> Hom_A(\wedge^{p+q-1}\Omega_{A/R}^1,A_{\mathfrak{p}})\\
@V\cong VV @VV\cong V \\
Hom_{A_{\mathfrak{p}}}(\wedge^p \Omega_{A_{\mathfrak{p}}/R}^1,A_{\mathfrak{p}})\times Hom_{A_{\mathfrak{p}}}(\wedge^q \Omega_{A_{\mathfrak{p}}/R}^1,A_{\mathfrak{p}})@>[-,-]>> Hom_{A_{\mathfrak{p}}}(\wedge^{p+q-1}\Omega_{A_{\mathfrak{p}}/R}^1,A_{\mathfrak{p}})\\
@V\cong VV @VV\cong V\\
Hom_{A_{\mathfrak{p}}}(\wedge^p \Omega_{A_{\mathfrak{p}}/R_{\mathfrak{q}}}^1,A_{\mathfrak{p}})\times Hom_{A_{\mathfrak{p}}}(\wedge^q \Omega_{A_{\mathfrak{p}}/R_{\mathfrak{q}}}^1,A_{\mathfrak{p}})@>[-,-]>> Hom_{A_{\mathfrak{p}}}(\wedge^{p+q-1}\Omega_{A_{\mathfrak{p}}/R_{\mathfrak{q}}}^1,A_{\mathfrak{p}})
\end{CD}$
\end{center}
\end{lemma}
\begin{example}
Let $B\otimes_k A$ be an $A$-algebra where $k$ is a field and $B$ is a finitely generated $k$-algebra $($hence $\Omega_{B/k}^1$ is finitely presented$)$. Then $Hom_{B\otimes_kA}(\wedge^p \Omega_{B\otimes_k A/A}^1,B\otimes_k A)\cong Hom_{B\otimes_k A}(\wedge^p \Omega_{B/k}^1 \otimes A,B\otimes_k A)\cong Hom_B(\wedge^p \Omega_{B/k}^1,B\otimes_k A)\cong Hom_B(\wedge^p \Omega_{B/k}^1, B)\otimes_k A$. So the Schouten bracket $[-,-]_{B\otimes_k A}$ on $Hom_B(\wedge^p\Omega_{B/k}^1,B)\otimes A$ over $A$ can be seen as
\begin{align*}
[P\otimes a,Q\otimes b]_{B\otimes_k A}=[P,Q]_B\otimes ab
\end{align*}
\end{example}
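As a concrete illustration of this formula $($a standard special case, not needed later$)$, take $B=k[x,y]$ with the standard bivector $\partial_x\wedge\partial_y\in Hom_B(\wedge^2\Omega_{B/k}^1,B)$, whose Schouten self-bracket vanishes. Then for any $a,b\in A$,
\begin{align*}
[\partial_x\wedge\partial_y\otimes a,\ \partial_x\wedge\partial_y\otimes b]_{B\otimes_k A}=[\partial_x\wedge\partial_y,\partial_x\wedge\partial_y]_B\otimes ab=0.
\end{align*}
In particular, $\partial_x\wedge\partial_y\otimes a$ is a Poisson structure on $B\otimes_k A$ over $A$ for every $a\in A$.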
Let $X$ be a scheme over $k$. We would like to characterize a Poisson structure on $X$ by an element $\Lambda\in \Gamma(X, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{\mathcal{O}_X/k}^1,\mathcal{O}_X))$ with $[\Lambda,\Lambda]=0$.
\begin{proposition}
Let $X$ be a scheme over $k$. The following are equivalent
\begin{enumerate}
\item $X$ is a Poisson scheme over $k$.
\item There exists a global section $\Lambda\in \Gamma(X, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$ with $[\Lambda,\Lambda]=0$
\end{enumerate}
\end{proposition}
\begin{proof}
Let $X$ be a Poisson scheme over $k$. Then for each $x$, $\mathcal{O}_{X,x}$ is a Poisson $k$-algebra, so we have $\Lambda_x\in Hom_{\mathcal{O}_{X,x}}(\wedge^2 \Omega_{\mathcal{O}_{X,x}/k}^1,\mathcal{O}_{X,x})$ with $[\Lambda_x,\Lambda_x]=0$. We define $\Lambda:X\to \bigcup_{x\in X} Hom_{\mathcal{O}_{X,x}}(\wedge^2 \Omega_{\mathcal{O}_{X,x}/k}^1,\mathcal{O}_{X,x})$, $x\mapsto \Lambda_x$. Since $X$ is locally defined by affine Poisson schemes, for each $x\in X$ there exist an affine neighborhood $Spec(A)$ of $x$ and $\Lambda_A \in Hom_A(\wedge^2 \Omega_{A/k}^1,A)$ with $[\Lambda_A,\Lambda_A]=0$ which induces $\Lambda_y$ for every $y\in Spec(A)$. Hence $\Lambda\in \Gamma(X, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$ with $[\Lambda,\Lambda]=0$.
Conversely, we assume that we have a global section $\Lambda\in \Gamma(X, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$ with $[\Lambda,\Lambda]=0$. Then $\Lambda$ can be identified with $\Lambda:X\to \bigcup_{x\in X} Hom_{\mathcal{O}_{X,x}}(\wedge^2 \Omega_{\mathcal{O}_{X,x}/k}^1,\mathcal{O}_{X,x})$ as above. Hence each $\mathcal{O}_{X,x}$ is a Poisson $k$-algebra induced from $\Lambda_x$, with Poisson bracket $\{-,-\}_x$. We show that $\mathcal{O}_X$ is a sheaf of Poisson $k$-algebras. Let $U$ be an open set of $X$ and let $f,g\in \mathcal{O}_X(U)$. Then $f,g$ can be identified with maps $f,g:U\to \bigcup_{x\in U} \mathcal{O}_{X,x}$ which are locally defined by sections on affine open sets. We define $\{f,g\}:U\to \bigcup_{x\in U}\mathcal{O}_{X,x}$, $x\mapsto \{f_x,g_x\}_x$. This makes $\mathcal{O}_X$ a sheaf of Poisson $k$-algebras. For each $x$, there exists an affine open neighborhood $Spec(A)$ of $x$ such that $\Lambda$ on $Spec(A)$ is induced from a $\Lambda_A\in Hom_A(\wedge^2\Omega_{A/k}^1,A)$ with $[\Lambda_A,\Lambda_A]=0$. So $\Lambda_A$ defines a Poisson structure on $Spec(A)$, and $X$ is locally defined by affine Poisson schemes. Hence $X$ is a Poisson scheme over $k$.
\end{proof}
\begin{definition}
Let $X$ be a Poisson scheme over $k$ and let $f:X\to S$ be a morphism of schemes. We say that $X$ is Poisson over $S$, or a Poisson $S$-scheme, if for any open set $U$ of $S$, $\mathcal{O}_X(f^{-1}(U))$ is a Poisson $\mathcal{O}_S(U)$-algebra via $\mathcal{O}_S(U)\to \mathcal{O}_X(f^{-1}(U))$. In other words, $\{s,a\}=0$ for any $s\in \mathcal{O}_S(U)$ and $a\in \mathcal{O}_X(f^{-1}(U))$.
\end{definition}
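For example, let $S=Spec(A)$ and $X=Spec(A[x,y])$, with the bracket on $A[x,y]$ determined by
\begin{align*}
\{x,y\}=1,\qquad \{a,-\}=0\ \text{ for all } a\in A,
\end{align*}
extended by the Leibniz rule. Every element of $A$ is Poisson central, so $A[x,y]$ is a Poisson $A$-algebra and $X$ is a Poisson $S$-scheme; the associated bivector is $\partial_x\wedge\partial_y\in Hom_{A[x,y]}(\wedge^2\Omega_{A[x,y]/A}^1,A[x,y])$.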
\begin{proposition}
Let $X$ be a scheme over $k$ and $f:X\to S$ be a morphism of schemes. The following are equivalent.
\begin{enumerate}
\item $X$ is a Poisson scheme over $S$.
\item There exists a global section $P\in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S},\mathcal{O}_X))$ with $[P,P]=0$
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\Lambda \in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$ be the Poisson structure on $X$ over $k$. Now we assume that $X$ is a Poisson scheme over $S$ via $f:X\to S$. We note that we have an exact sequence
\begin{align*}
0\to \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S},\mathcal{O}_X))\to \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X)).
\end{align*}
We will show that $\Lambda$ actually lies in $\Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S},\mathcal{O}_X))$. For $x\in X$, via $f_*:\mathcal{O}_{S,f(x)}\to \mathcal{O}_{X,x}$, $\mathcal{O}_{X,x}$ is a Poisson $\mathcal{O}_{S,f(x)}$-algebra. Since $\Lambda_x$ is $\mathcal{O}_{S,f(x)}$-linear, we in fact have
\begin{align*}
\Lambda_x\in Hom_{\mathcal{O}_{X,x}}(\wedge^2\Omega_{\mathcal{O}_{X,x}/\mathcal{O}_{S,f(x)}}^1,\mathcal{O}_{X,x})
\end{align*}
with $[\Lambda_x,\Lambda_x]=0$. Hence $P:=\Lambda\in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S},\mathcal{O}_X))$ with $[P,P]=0$.
Conversely, assume there exists a global section $P\in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S},\mathcal{O}_X))$ with $[P,P]=0$. Then $P$ defines a Poisson structure on $X$ over $k$ by the above exactness. Since for each $x\in X$, $\mathcal{O}_{X,x}$ is a Poisson $\mathcal{O}_{S,f(x)}$-algebra, $X$ is a Poisson scheme over $S$.
\end{proof}
\begin{definition}
Let $(X,P)$ and $(Y,Q)$ be Poisson schemes over $S$ with $g:X\to S$ and $h:Y\to S$. Then a morphism $f:X\to Y$ of schemes over $S$ is called a morphism of Poisson schemes over $S$ if for any open set $U$ of $S$ and any open set $V$ of $h^{-1}(U)$, the map $f^\sharp:\mathcal{O}_Y(V)\to \mathcal{O}_X(f^{-1}(V))$ is a Poisson $\mathcal{O}_S(U)$-homomorphism.
\end{definition}
Let $f:X\to Y$ be a morphism of schemes over $S$. Then we have $f^*\Omega_{Y/S}^1\to \Omega_{X/S}^1$. Since $\Omega_{Y/S}^1$ is quasi-coherent, we have $f^*(\wedge^2 \Omega_{Y/S}^1)\cong \wedge^2 f^* \Omega_{Y/S}^1$. So we have $\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/S}^1,\mathcal{O}_X)\to \mathscr{H}om_{\mathcal{O}_X}(f^*(\wedge^2 \Omega_{Y/S}),f^*\mathcal{O}_Y)$. On the other hand, we have a natural sheaf morphism
\begin{align*}
\mathscr{H}om_{\mathcal{O}_Y}(\wedge^2 \Omega^1_{Y/S},\mathcal{O}_Y)\to f_*f^*\mathscr{H}om_{\mathcal{O}_Y}(\wedge^2 \Omega_{Y/S}^1,\mathcal{O}_Y)\to f_*\mathscr{H}om_{\mathcal{O}_X}(f^*(\wedge^2 \Omega^1_{Y/S}),f^* \mathcal{O}_Y)
\end{align*}
By taking global sections, we have two morphisms
\begin{align*}
\alpha:&\Gamma(X, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2\Omega^1_{X/S},\mathcal{O}_X))\to \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(f^*(\wedge^2 \Omega_{Y/S}),f^*\mathcal{O}_Y))\\
\beta:&\Gamma(Y,\mathscr{H}om_{\mathcal{O}_Y}(\wedge^2\Omega_{Y/S}^1,\mathcal{O}_Y))\to \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(f^*(\wedge^2 \Omega_{Y/S}),f^*\mathcal{O}_Y))
\end{align*}
If $f:(X,P)\to (Y,Q)$ is a morphism of Poisson schemes over $S$, we have $\alpha(P)=\beta(Q)$.
\begin{proposition}[Glueing Poisson schemes]
Let $S$ be a $k$-scheme. Let us consider a family $\{X_i\}$ of Poisson schemes over $S$. We suppose given open subschemes $X_{ij}$ of $X_i$ $($which are necessarily Poisson $S$-schemes$)$ and Poisson isomorphisms of $S$-schemes $f_{ij}:X_{ij}\to X_{ji}$ such that $f_{ii}=Id_{X_i}$, $f_{ij}(X_{ij}\cap X_{ik})=X_{ji}\cap X_{jk}$, and $f_{ik}=f_{jk}\circ f_{ij}$ on $X_{ij}\cap X_{ik}$. Then there exists a Poisson $S$-scheme $X$, unique up to isomorphism, with Poisson open immersions of $S$-schemes $g_i:X_i\to X$ such that $g_i=g_j\circ f_{ij}$ on $X_{ij}$, and $X=\cup_i g_i(X_i)$.
\end{proposition}
\subsection{(truncated) Lichnerowicz-Poisson cohomology}
\begin{definition}\label{3def}
Let $(X,\Lambda)$ be a Poisson scheme over $S$. Then we define the $i$-th (truncated) Lichnerowicz-Poisson cohomology to be the $i$-th hypercohomology group of the following complex of sheaves
\begin{align*}
0\to \mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/S}^1,\mathcal{O}_X)\xrightarrow{[\Lambda,-]} \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega^1_{X/S},\mathcal{O}_X)\xrightarrow{[\Lambda,-]}\mathscr{H}om_{\mathcal{O}_X}(\wedge^3\Omega^1_{X/S},\mathcal{O}_X)\xrightarrow{[\Lambda,-]}\cdots
\end{align*}
We denote the $i$-th cohomology group by $HP^i(X,\Lambda)$.
\end{definition}
\begin{remark}\label{3remark2}
Let $X=Spec(A)$ be an affine scheme with a Poisson structure $\Lambda\in Hom_A(\wedge^2 \Omega_{A/S},A)=\Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2\Omega^1_{X/S},\mathcal{O}_X))$. The sheaves $\mathscr{H}om_{\mathcal{O}_X}(\wedge^i\Omega_{X/S}^1,\mathcal{O}_X)$ are quasi-coherent, so their higher cohomology vanishes. Hence the (truncated) Lichnerowicz-Poisson cohomology of $(X,\Lambda)$ agrees with the (truncated) Lichnerowicz-Poisson cohomology of the Poisson algebra $(A,\Lambda)$.
\end{remark}
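In the extreme case $\Lambda=0$, the differential $[\Lambda,-]$ vanishes, so the truncated complex
\begin{align*}
0\to Hom_A(\Omega_{A/S}^1,A)\xrightarrow{0} Hom_A(\wedge^2\Omega_{A/S}^1,A)\xrightarrow{0}\cdots
\end{align*}
has zero differentials and
\begin{align*}
HP^i(X,0)\cong Hom_A(\wedge^i\Omega_{A/S}^1,A)\qquad (i\geq 1),
\end{align*}
the modules of polyvector fields themselves. A nonzero $\Lambda$ cuts these down: cocycles must satisfy $[\Lambda,-]=0$, and images $[\Lambda,Q]$ of lower-degree polyvector fields are divided out.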
\section{Deformations of algebraic Poisson schemes}
\subsection{Basic materials on deformations of algebraic Poisson schemes}\
In this section, we discuss deformations of algebraic Poisson schemes by following \cite{Ser06} (see Chapter 1) in the Poisson context.
We will always denote by $k$ a fixed algebraically closed field. All schemes will be assumed to be defined over $k$, locally noetherian and separated. If $S$ is a scheme and $s\in S$, we denote $k(s)=\mathcal{O}_{S,s}/\mathfrak{m}_s$ the residue field of $S$ at $s$. We denote by $\bold{Art}$ the category of local artinian $k$-algebras with residue field $k$.
\begin{definition}
Let $(X,\Lambda_0)$ be an algebraic Poisson scheme. A cartesian diagram of morphisms of schemes
\begin{center}
$\eta:$
$\begin{CD}
(X,\Lambda_0) @>i>> (\mathcal{X},\Lambda)\\
@VVV @VV{\pi}V\\
Spec(k) @>s>>S
\end{CD}$
\end{center}
is called a family of Poisson deformations, or a Poisson deformation of $X$ parametrized by $S$, where $\pi$ is flat and surjective, $S$ is connected, $(\mathcal{X},\Lambda)$ is a Poisson $S$-scheme with $\Lambda\in \Gamma(\mathcal{X}, \mathscr{H}om_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 \Omega_{\mathcal{X}/S}^1, \mathcal{O}_{\mathcal{X}}))$, and $X \cong \mathcal{X} \times_S Spec(k)$ as Poisson schemes: in other words, $\Lambda_0$ is induced from $\Lambda$. We call $S$ and $(\mathcal{X},\Lambda)$ respectively the parameter scheme and the total Poisson $S$-scheme of the Poisson deformation $\eta$. If $S$ is algebraic, for each $k$-rational point $t\in S$ the scheme-theoretic fiber $(\mathcal{X}(t),\Lambda(t))$, with the Poisson structure $\Lambda(t)$ induced from $\Lambda$, is also called a Poisson deformation of $(X,\Lambda_0)$.
A Poisson deformation $\eta$ over $Spec(A)$ is called infinitesimal $($resp. first-order$)$ if $A\in \bold{Art}$ $($resp. if $A=k[\epsilon]$$)$.
\end{definition}
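In the simplest case, take $S=Spec(k[\epsilon])$, $X=Spec(B_0)$ affine, and the trivial underlying flat deformation. A candidate Poisson structure has the form $\Lambda=\Lambda_0+\epsilon\Lambda_1$ with $\Lambda_1\in Hom_{B_0}(\wedge^2\Omega_{B_0/k}^1,B_0)$, and since $[\Lambda_0,\Lambda_0]=0$ and $\epsilon^2=0$,
\begin{align*}
[\Lambda_0+\epsilon\Lambda_1,\Lambda_0+\epsilon\Lambda_1]=[\Lambda_0,\Lambda_0]+2\epsilon[\Lambda_0,\Lambda_1]=2\epsilon[\Lambda_0,\Lambda_1].
\end{align*}
So $\Lambda$ is a Poisson structure over $k[\epsilon]$ if and only if $[\Lambda_0,\Lambda_1]=0$, i.e. $\Lambda_1$ is a cocycle for the differential $[\Lambda_0,-]$.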
\begin{remark}
We explain in more detail how $\Lambda_0$ is induced from $\Lambda$. Since $\Omega_{X/k}^1\cong i^*\Omega_{\mathcal{X}/S}^1$, the canonical map $\mathscr{H}om_{\mathcal{O}_\mathcal{X}}(\wedge^2 \Omega_{\mathcal{X}/S}^1,\mathcal{O}_{\mathcal{X}})\to i_*i^*\mathscr{H}om_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 \Omega_{\mathcal{X}/S}^1,\mathcal{O}_{\mathcal{X}})\to i_*\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X)$ induces $\Gamma(\mathcal{X},\mathscr{H}om_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 \Omega_{\mathcal{X}/S}^1,\mathcal{O}_{\mathcal{X}}))\to \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$. Via this map, $\Lambda$ is sent to $\Lambda_0$. So $(X,\Lambda_0)$ is a closed Poisson subscheme of $(\mathcal{X},\Lambda)$ since $i$ is a Poisson morphism.
\end{remark}
Given another Poisson deformation
\begin{center}
$\xi:$
$\begin{CD}
(X,\Lambda_0) @>>> (\mathcal{Y},\Lambda')\\
@VVV @VVV\\
Spec(k) @>>>S
\end{CD}$
\end{center}
of $(X,\Lambda_0)$ over $S$, an isomorphism of $\eta$ with $\xi$ is a Poisson $S$-isomorphism $\phi:(\mathcal{X},\Lambda)\to (\mathcal{Y},\Lambda')$ inducing the identity on $(X,\Lambda_0)$, i.e. such that the following diagram is commutative.
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,1)[a]{$(\mathcal{X},\Lambda)$}
\obj(1,2)[b]{$(X,\Lambda_0)$}
\obj(1,0)[c]{$S$}
\obj(2,1)[d]{$(\mathcal{Y},\Lambda')$}
\mor{b}{a}{}
\mor{b}{d}{}
\mor{a}{c}{}
\mor{d}{c}{}
\mor{a}{d}{$\phi$}
\enddc\]
\end{center}
By a pointed scheme, we will mean a pair $(S,s)$ where $S$ is a scheme and $s\in S$. If $K$ is a field, we call $(S,s)$ a $K$-pointed scheme if $K\cong k(s)$.
\begin{definition}[trivial Poisson deformation]
Let $(X,\Lambda_0)$ be an algebraic Poisson scheme, and let $(S,s)$ be a $k$-pointed scheme. We define the trivial Poisson family induced by $(X,\Lambda_0)$ and $(S,s)$ to be the following Poisson deformation of $(X,\Lambda_0)$,
\begin{center}
$\begin{CD}
(X,\Lambda_0) @>>> (X\times_{Spec(k)} S,\Lambda_0\oplus 0)\\
@VVV @VV{\pi}V\\
Spec(k) @>s>>S
\end{CD}$
\end{center}
Here $(\Lambda_0\oplus 0)$ is the Poisson structure on $X\times_{Spec(k)} S$ over $S$ induced from the Poisson structure $\Lambda_0$ on $X$ via the following diagram.
\begin{center}
$\begin{CD}
X\times_{Spec(k)} S @>>> X\\
@VVV @VVV\\
S@>>> Spec(k)
\end{CD}$
\end{center}
A Poisson deformation of $(X,\Lambda_0)$ over $S$ is called trivial if it is isomorphic to the trivial Poisson family as above.
\end{definition}
\begin{definition}[rigid Poisson deformations]
An algebraic Poisson scheme $(X,\Lambda_0)$ is called rigid if every infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$ is trivial for every $A\in \bold{Art}$.
\end{definition}
Given a Poisson deformation $\eta$ of $(X,\Lambda_0)$ over $S$ as above and a morphism $(S',s')\to (S,s)$ of $k$-pointed schemes, a commutative diagram is induced by base change
\begin{center}
$\eta':$
$\begin{CD}
(X,\Lambda_0) @>>> (\mathcal{X}\times_S S',\Lambda\oplus 0)\\
@VVV @VVV\\
Spec(k) @>>>S'
\end{CD}$
\end{center}
which is a Poisson deformation of $(X,\Lambda_0)$ over $S'$. This operation is functorial, in the sense that it commutes with composition of morphisms and the identity morphism does not change $\eta$. Moreover, it carries isomorphic Poisson deformations to isomorphic ones.
\begin{definition}[locally trivial Poisson deformations]
An infinitesimal Poisson deformation $\eta$ of $(X,\Lambda_0)$ is called locally trivial if every point $x\in X$ has an open neighborhood $U_x\subset X$ such that
\begin{center}
$\begin{CD}
(U_x,\Lambda_0|_{U_x}) @>>> (\mathcal{X}|_{U_x},\Lambda|_{U_x})\\
@VVV @VV{\pi}V\\
Spec(k) @>s>>S
\end{CD}$
\end{center}
is a trivial Poisson deformation of $U_x$. In other words, $(\mathcal{X}|_{U_x},\Lambda|_{U_x})\cong (U_x\times_{Spec(k)} S,\Lambda_0\oplus 0)$ as Poisson schemes.
\end{definition}
\subsection{Infinitesimal Poisson deformations}
\begin{definition}[small extension]
We say that, for $(\tilde{A},\tilde{\mathfrak{m}}), (A,\mathfrak{m})\in \bold{Art}$, an exact sequence of the form $0\to (t)\to \tilde{A}\to A\to 0$ is a small extension if $t\in \tilde{\mathfrak{m}}$ is annihilated by $\tilde{\mathfrak{m}}$ $($i.e. $t\cdot \tilde{\mathfrak{m}}=0)$, so that $(t)$ is a one-dimensional $k$-vector space.
\end{definition}
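For example,
\begin{align*}
0\to (\epsilon)\to k[\epsilon]/(\epsilon^2)\to k\to 0
\end{align*}
is a small extension since $\epsilon\cdot(\epsilon)=(\epsilon^2)=0$. More generally, for $n\geq 1$,
\begin{align*}
0\to (t^n)\to k[t]/(t^{n+1})\to k[t]/(t^n)\to 0
\end{align*}
is small: $t^n\cdot \tilde{\mathfrak{m}}=t^n\cdot(t)=(t^{n+1})=0$ in $k[t]/(t^{n+1})$, and $(t^n)$ is spanned by $t^n$ over $k$. Every surjection in $\bold{Art}$ factors as a finite composition of small extensions, which is why obstruction arguments reduce to this case.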
\begin{lemma}[compare \cite{Ser06} Lemma 1.2.6, page 26]\label{3l}
Let $B_0$ be a Poisson $k$-algebra with the Poisson structure $\Lambda\in Hom_{B_0}(\wedge^2 \Omega_{B_0/k},B_0)$, and
\begin{align*}
e:0\to(t)\to \tilde{A}\to A\to 0
\end{align*}
a small extension in $\bold{Art}$. Let $\Lambda_0\in Hom_{B_0}(\wedge^2 \Omega_{B_0/k},B_0)\otimes_k A$ be a Poisson structure on $B_0\otimes_k A$ over $A$ inducing $\Lambda$, and let $\Lambda_1,\Lambda_2\in Hom_{B_0}(\wedge^2 \Omega_{B_0/k},B_0)\otimes_k \tilde{A}$ be Poisson structures on $B_0\otimes_k \tilde{A}$ over $\tilde{A}$ which induce $\Lambda_0$. This implies that there exists $\Lambda'\in Hom_{B_0}(\wedge^2 \Omega_{B_0/k},B_0)$ such that $\Lambda_1-\Lambda_2=t\Lambda'$. Then there is a one-to-one correspondence
\begin{align*}
\{\text{Poisson isomorphisms between} \,\,\,(B_0\otimes_k \tilde{A},\Lambda_1)\,\,\,\text{and}\,\,\,(B_0\otimes_k\tilde{A},\Lambda_2)\\\,\,\,\text{inducing the identity on}\,\,\,(B_0\otimes_k A,\Lambda_0)\}\\\to \{P\in Der_k(B_0,B_0)=Hom_{B_0}(\Omega_{B_0/k},B_0)| \Lambda'-[\Lambda, P]=\Lambda'+[P,\Lambda]=0\}
\end{align*}
In particular, when $\Lambda_1=\Lambda_2$, there is a canonical isomorphism of groups
\begin{align*}
\{\text{Poisson automorphisms of} \,\,\,(B_0\otimes_k \tilde{A},\Lambda_1)\\\,\,\,\text{inducing the identity on}\,\,\,(B_0\otimes_k A,\Lambda_0)\}\to PDer_k(B_0,B_0)=HP^1(B_0,\Lambda)
\end{align*}
\end{lemma}
\begin{proof}
Let $\theta:(B_0\otimes_k \tilde{A},\Lambda_1)\to (B_0\otimes_k \tilde{A}, \Lambda_2)$ be a Poisson isomorphism. Then $\theta$ is $\tilde{A}$-linear and induces the identity modulo $t$. We have $\theta(x)=x+tPx$, where $P\in Der_{\tilde{A}}(B_0\otimes_k \tilde{A},B_0)=Der_k(B_0,B_0)=Hom_{B_0}(\Omega_{B_0/k}^1,B_0)$. When we think of $P$ as an element of $Hom_{B_0}(\Omega_{B_0/k}^1,B_0)$, we have $\theta(x)=x+tP(dx)$. We define the correspondence by $\theta \mapsto P$. Now we check that $\Lambda'-[\Lambda,P]=0$. Since $\theta$ is a Poisson map, for $x,y\in B_0$ we have, by Example $(\ref{3ex})$,
\begin{align*}
&\theta(\Lambda_1(dx\wedge dy))=\Lambda_2(d(\theta x)\wedge d (\theta y))\\
&\Lambda_1(dx\wedge dy)+t P(d(\Lambda_1(dx\wedge dy)))=\Lambda_2((dx+td(P(dx)))\wedge (dy+td(P(dy))))\\
&\Lambda_1(dx\wedge dy)+t P(d(\Lambda(dx\wedge dy)))=\Lambda_2(dx\wedge dy)+t\Lambda(dx\wedge d(P(dy)))+t\Lambda(d(P(dx))\wedge dy)\\
&t[\Lambda'(dx\wedge dy)+P(d(\Lambda(dx\wedge dy)))-\Lambda(dx\wedge d(P(dy)))-\Lambda(d(P(dx))\wedge dy)]=0\\
&\Lambda'-[\Lambda,P]=0
\end{align*}
Since $\theta$ is determined by $P$, the correspondence is one to one.
Now we assume that $\Lambda_1=\Lambda_2$, so $\theta$ corresponds to $P$ with $[\Lambda,P]=0$. First we note that $P\in Hom_{B_0}(\Omega_{B_0/k}^1,B_0)$ with $[\Lambda,P]=0$ is a Poisson derivation; in other words, $P(\{x,y\})=\{Px,y\}+\{x,Py\}$. Indeed, $0=[\Lambda,P](dx\wedge dy)=\Lambda(d(Px)\wedge dy)-\Lambda(d(Py)\wedge dx)-P(d(\Lambda(dx\wedge dy)))$.
Now we show that the correspondence is a group isomorphism. Indeed, let $\theta(x)=x+tPx$ and $\sigma(x)=x+tQx$ with $[\Lambda,P]=[\Lambda,Q]=0$. Then $\sigma(\theta(x))=\theta(x)+tQ(\theta(x))=x+tPx+tQ(x+tPx)=x+tPx+tQx=x+t(P+Q)x$. Hence $\sigma\circ\theta$ corresponds to $P+Q$. Since $[\Lambda,P+Q]=0$ and the identity map corresponds to $0$, the correspondence is a group isomorphism.
\end{proof}
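For example, every Hamiltonian derivation gives such an automorphism: for $h\in B_0$, let $P=\{h,-\}$, which up to sign equals $[\Lambda,h]$. By the graded Jacobi identity and $[\Lambda,\Lambda]=0$,
\begin{align*}
[\Lambda,P]=[\Lambda,[\Lambda,h]]=\tfrac{1}{2}[[\Lambda,\Lambda],h]=0,
\end{align*}
so $\theta(x)=x+t\{h,x\}$ is a Poisson automorphism of $(B_0\otimes_k\tilde{A},\Lambda_1)$ inducing the identity on $(B_0\otimes_k A,\Lambda_0)$.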
\begin{proposition}[compare \cite{Ser06} Proposition 1.2.9, page 29, and see also \cite{Nam09} Proposition 8]\label{3q}
Let $(X,\Lambda_0)$ be a Poisson algebraic variety with $\Lambda_0\in \Gamma(X,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k},\mathcal{O}_X))$. There is a $1-1$ correspondence:
\begin{align*}
&\kappa:\{\text{Poisson isomorphism classes of first order Poisson deformations of $(X,\Lambda_0)$}\\&\text {whose underlying flat deformation of $X$ is locally trivial}\}\to HP^2(X,\Lambda_0)
\end{align*}
such that $\kappa(\xi)=0$ if and only if $\xi$ is the trivial Poisson deformation class. In particular, if $X$ is nonsingular, then we have a $1-1$ correspondence
\begin{align*}
\kappa:\{\text{Poisson isomorphism classes of first order Poisson deformations of $(X,\Lambda_0)$}\} \to HP^2(X,\Lambda_0)
\end{align*}
\end{proposition}
\begin{proof}
Given a first-order Poisson deformation of $(X,\Lambda_0)$ whose underlying flat deformation is locally trivial,
\begin{center}
$\begin{CD}
(X,\Lambda_0)@>>> (\mathcal{X},\Lambda)\\
@VVV @VVV\\
Spec(k)@>>> Spec(k[\epsilon])
\end{CD}$
\end{center}
we choose an affine open cover $\mathcal{U}=\{U_i=Spec(B_i)\}_{i\in I}$ of $X$ such that $\mathcal{X}|_{U_i}\cong U_i\times Spec(k[\epsilon])=Spec(B_i)\times Spec(k[\epsilon])$ is trivial for all $i$, with the Poisson structure $\Lambda_0+\epsilon\Lambda_i\in Hom_{B_i}(\wedge^2 \Omega_{B_i/k}^1,B_i)\otimes_k k[\epsilon]$ on $U_i\times Spec(k[\epsilon])$ induced from $\Lambda$. For each $i$, we have a Poisson isomorphism
\begin{align*}
\theta_i:(U_i\times Spec(k[\epsilon]),\Lambda_0+\epsilon\Lambda_i)\to (\mathcal{X}|_{U_i},\Lambda|_{U_i})
\end{align*}
Then for each $i,j\in I$, $\theta_{ij}:=\theta_i^{-1}\theta_j:(U_{ij}\times Spec(k[\epsilon]),\Lambda_0+\epsilon\Lambda_j)\to (U_{ij}\times Spec(k[\epsilon]), \Lambda_0+\epsilon\Lambda_i)$ is a Poisson isomorphism inducing the identity on $(U_{ij},\Lambda_0)$ modulo $\epsilon$. Hence by Lemma \ref{3l}, $\theta_{ij}$ corresponds to a $p_{ij}\in \Gamma(U_{ij}, T_X)$, where $T_X=\mathscr{H}om(\Omega_X^1,\mathcal{O}_X)=Der_k(\mathcal{O}_X,\mathcal{O}_X)$, such that $\Lambda_i-\Lambda_j-[\Lambda_0,p_{ij}]=0$. We claim that $(\{p_{ij}\},\{\Lambda_i\})\in C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))\oplus C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k},\mathcal{O}_X))$ represents a cohomology class in the following diagram (see Appendix \ref{appendixb}):
\begin{center}
$\begin{CD}
@A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^3 \Omega_{X/k},\mathcal{O}_X))@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k},\mathcal{O}_X))@>\delta>> C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k},\mathcal{O}_X))@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))@>-\delta>>C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))@>\delta>>C^2(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))\\
@AAA @AAA @AAA \\
0@>>>0 @>>> 0
\end{CD}$
\end{center}
Since $[\Lambda_0+\epsilon \Lambda_i,\Lambda_0+\epsilon\Lambda_i]=0$, we have $[\Lambda_0,\Lambda_i]=0$. Since on each $U_{ijk}$ we have $\theta_{ij}\theta_{jk}\theta_{ik}^{-1}=1_{U_{ijk}\times Spec(k[\epsilon])}$, we have $p_{ij}+p_{jk}-p_{ik}=0$, and so $\delta(\{p_{ij}\})=0$. Since $\Lambda_i-\Lambda_j-[\Lambda_0,p_{ij}]=0$, we have $\delta(\{\Lambda_i\})+[\Lambda_0,\{p_{ij}\}]=0$. Hence $(\{p_{ij}\},\{\Lambda_i\})$ defines a cohomology class.
Now we show that two equivalent Poisson deformations of $(X,\Lambda_0)$ give the same cohomology class. If we have another Poisson deformation
\begin{center}
$\begin{CD}
(X,\Lambda_0)@>>> (\mathcal{X}',\Lambda')\\
@VVV @VVV\\
Spec(k)@>>> Spec(k[\epsilon])
\end{CD}$
\end{center}
and $\Phi:(\mathcal{X},\Lambda)\to (\mathcal{X'},\Lambda')$ is a Poisson isomorphism of deformations, then for each $i\in I$ there is an induced Poisson isomorphism:
\begin{align*}
\alpha_i:(U_i\times Spec(k[\epsilon]),\Lambda_0+\epsilon\Lambda_i) \xrightarrow{\theta_i} (\mathcal{X}|_{U_i},\Lambda|_{U_i} )\xrightarrow{\Phi|_{U_i}} (\mathcal{X}'|_{U_i},\Lambda'|_{U_i} )\xrightarrow{\theta_i^{'-1}} (U_i\times Spec(k[\epsilon]),\Lambda_0+\epsilon\Lambda_i')
\end{align*}
So $\alpha_i$ corresponds to $a_i\in \Gamma(U_i,T_X)$ such that $\Lambda_i'-\Lambda_i-[\Lambda_0,a_i]=0$ by Lemma \ref{3l}. Since $-\delta(\{a_i\})=a_i-a_j=p_{ij}'-p_{ij}$ and $\Lambda'_i - \Lambda_i=[\Lambda_0,a_i]$, $(\{p_{ij}\},\{\Lambda_i\})$ and $(\{p_{ij}'\},\{\Lambda_i'\})$ are cohomologous.
Now we define an inverse map. Given an element in $HP^2(X,\Lambda_0)$, we represent it by a hyper-\v{C}ech $1$-cocycle $(\{p_{ij}\}, \{\Lambda_i\})$ for an affine open cover $\mathcal{U}=\{U_i\}$ of $X$. So we have $[\Lambda_0,\Lambda_i]=0$, $p_{ij}+p_{jk}-p_{ik}=0$ and $\Lambda_i-\Lambda_j-[\Lambda_0,p_{ij}]=0$. By reversing the above process, the cohomology class gives a glueing condition that produces a Poisson deformation of $(X,\Lambda_0)$ whose underlying deformation is a locally trivial flat deformation.
\end{proof}
\begin{definition}[Poisson Kodaira-Spencer map in an algebraic Poisson family]
For every first-order Poisson deformation $\xi$ of a Poisson algebraic variety $(X,\Lambda_0)$ whose underlying flat deformation is locally trivial, the cohomology class $\kappa(\xi)\in HP^2(X,\Lambda_0)$ is called the Poisson Kodaira-Spencer class of $\xi$. Now we assume that $(X,\Lambda_0)$ is a nonsingular Poisson variety. Let's consider a Poisson deformation of $(X,\Lambda_0)$
\begin{center}
$\xi:$$\begin{CD}
(X,\Lambda_0)@>>> (\mathcal{X},\Lambda)\\
@VVV @VVfV\\
Spec(k)@>s>> S
\end{CD}$
\end{center}
where the base space $S$ is a connected algebraic $k$-scheme and $\mathcal{X}$ is a Poisson scheme over $S$ defined by $\Lambda\in \Gamma(\mathcal{X}, \mathscr{H}om_{\mathcal{O}_{\mathcal{X}}}(\wedge^2\Omega_{\mathcal{X}/S}^1,\mathcal{O}_{\mathcal{X}}))$. We define a $k$-linear map, called the Poisson Kodaira-Spencer map of the family $\xi$ at $s\in S$,
\begin{align*}
\kappa_{\xi,s}:T_{S,s}\to HP^2(X,\Lambda_0)
\end{align*}
in the following way: let $U$ be an affine open neighborhood of $s\in S$ and $d\in T_{S,s}=Der_k(\mathcal{O}_{S,s},\mathcal{O}_{S,s})$. Let $\bar{d}:\mathcal{O}_{S,s}\to \mathcal{O}_{S,s}/\mathfrak{m}_s$ be the map induced by $d$ and the canonical surjection $\mathcal{O}_{S,s}\to \mathcal{O}_{S,s}/\mathfrak{m}_s$, $a\mapsto \bar{a}$. Let us consider the following homomorphisms
\begin{align*}
\mathcal{O}_S(U)\to (\mathcal{O}_{S,s}/\mathfrak{m}_s)\oplus \epsilon(\mathcal{O}_{S,s}/\mathfrak{m}_s)\cong k[\epsilon]\to \mathcal{O}_{S,s}/\mathfrak{m}_s\cong k,(a\mapsto \bar{a}+\epsilon \bar{d}(a)\mapsto \bar{a})
\end{align*}
This defines a morphism $Spec(k)\to Spec(k[\epsilon])\to U\hookrightarrow S$. We pullback $(\mathcal{X},\Lambda)$ over $S$ to a first order Poisson deformation over $Spec(k[\epsilon])$ via the map $Spec(k[\epsilon])\to S$. Then by Proposition \ref{3q}, we can find a cohomology class in $HP^2(X,\Lambda_0)$.
\end{definition}
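For the trivial Poisson family $\xi$, the pullback along any $Spec(k[\epsilon])\to S$ constructed above is the trivial first-order Poisson deformation $(X\times_{Spec(k)}Spec(k[\epsilon]),\Lambda_0\oplus 0)$, whose class under $\kappa$ is $0$ by Proposition \ref{3q}. Hence
\begin{align*}
\kappa_{\xi,s}=0\qquad \text{for every } s\in S,
\end{align*}
so a nonzero Poisson Kodaira-Spencer map detects nontriviality of the family to first order in the direction $d\in T_{S,s}$.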
\subsection{Higher-order Poisson deformation-obstructions}\
Let $(X,\Lambda_0)$ be a nonsingular Poisson algebraic variety. Consider a small extension
\begin{align*}
e:0\to (t)\to \tilde{A}\to A\to 0
\end{align*}
in $\bold{Art}$. Let
\begin{center}
$\xi:
\begin{CD}
(X,\Lambda_0)@>>> (\mathcal{X},\Lambda)\\
@VVV @VVV\\
Spec(k)@>>> Spec(A)
\end{CD}$
\end{center}
be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$. A lifting of $\xi$ to $\tilde{A}$ is an infinitesimal Poisson deformation $\tilde{\xi}$ over $\tilde{A}$ inducing $\xi$. In other words,
\begin{center}
$\tilde{\xi}:
\begin{CD}
(X,\Lambda_0)@>>> (\tilde{\mathcal{X}},\tilde{\Lambda})\\
@VVV @VVV\\
Spec(k)@>>> Spec(\tilde{A})
\end{CD}$
\end{center}
and a Poisson isomorphism $\phi$ of Poisson deformations such that the following diagram commutes
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,1)[a]{$(\mathcal{X},\Lambda)$}
\obj(1,2)[b]{$(X,\Lambda_0)$}
\obj(1,0)[c]{$Spec(A)$}
\obj(2,1)[d]{$(\tilde{\mathcal{X}},\tilde{\Lambda})\times_{Spec(\tilde{A})} Spec(A)$}
\mor{b}{a}{}
\mor{b}{d}{}
\mor{a}{c}{}
\mor{d}{c}{}
\mor{a}{d}{$\phi$}
\enddc\]
\end{center}
\begin{proposition}[compare \cite{Ser06} Proposition 1.2.12]
Let $(X,\Lambda_0)$ be a nonsingular Poisson variety, let $A\in \bold{Art}$, and let $\xi$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$. To every small extension $e:0\to (t)\to \tilde{A}\to A\to 0$ there is associated an element $o_{\xi}(e)\in HP^3(X,\Lambda_0)$, called the obstruction to lifting $\xi$ to $\tilde{A}$, which is $0$ if and only if a lifting of $\xi$ to $\tilde{A}$ exists.
\end{proposition}
\begin{proof}
Let $\mathcal{U}=\{U_i=Spec(B_i)\}_{i\in I}$ be an affine open cover of $X$. We have Poisson isomorphisms $\theta_i:(U_i\times Spec(A),\Lambda_i)\to (\mathcal{X}|_{U_i},\Lambda|_{U_i})$, and $\theta_{ij}:=\theta_i^{-1}\theta_j$ is a Poisson isomorphism with $\theta_{ij}\theta_{jk}=\theta_{ik}$ on $U_{ijk}\times Spec(A)$. To give a lifting $\tilde{\xi}$ of $\xi$ to $\tilde{A}$ is equivalent to giving a collection $\{\tilde{\Lambda}_i\}$, where $\tilde{\Lambda}_i \in Hom_{B_i}(\wedge^2\Omega_{B_i /k}^1,B_i)\otimes_k \tilde{A}$ with $[\tilde{\Lambda}_i,\tilde{\Lambda}_i]=0$ is a Poisson structure on $U_i\times Spec(\tilde{A})$, and a collection of Poisson isomorphisms $\{\tilde{\theta}_{ij}\}$, where $\tilde{\theta}_{ij}:(U_{ij}\times Spec(\tilde{A}),\tilde{\Lambda}_j)\to (U_{ij}\times Spec(\tilde{A}),\tilde{\Lambda}_i)$, such that
\begin{enumerate}
\item $\tilde{\theta}_{ij}\tilde{\theta}_{jk}=\tilde{\theta}_{ik}$ as a Poisson isomorphism.
\item $\tilde{\theta}_{ij}$ restricts to $\theta_{ij}$ on $U_{ij}\times Spec(A)$.
\item $\tilde{\Lambda}_i$ restricts to $\Lambda_i$.
\end{enumerate}
From such data, we can glue together $(U_i\times Spec(\tilde{A}),\tilde{\Lambda}_i)$ to make a Poisson deformation $(\tilde{\mathcal{X}},\tilde{\Lambda})$. Now, given a Poisson deformation $\xi=(\mathcal{X},\Lambda)$ over $A$ and a small extension $e:0\to (t)\to \tilde{A}\to A\to0$, we associate an element $o_{\xi}(e)\in HP^3(X,\Lambda_0)$. Choose arbitrary isomorphisms $\{\tilde{\theta}_{ij}\}$ satisfying $(2)$ (for the existence of liftings, see \cite{Ser06} Lemma 1.2.8) and arbitrary $\tilde{\Lambda}_i\in Hom_{B_i}(\wedge^2\Omega_{B_i/k}^1,B_i)\otimes \tilde{A}$ satisfying $(3)$ (not necessarily $[\tilde{\Lambda}_i,\tilde{\Lambda}_i]=0$); such liftings exist since $Hom_{B_i}(\wedge^2\Omega_{B_i/k}^1,B_i)\otimes_k \tilde{A} \to Hom_{B_i}(\wedge^2\Omega_{B_i/k}^1,B_i)\otimes_k A$ is surjective. Let $\tilde{\theta}_{ijk}=\tilde{\theta}_{ij}\tilde{\theta}_{jk}\tilde{\theta}_{ik}^{-1}$. Since $\tilde{\theta}_{ijk}$ is an automorphism on $U_{ijk}\times Spec(\tilde{A})$ inducing the identity on $U_{ijk}\times Spec(A)$, $\tilde{\theta}_{ijk}$ corresponds to $\tilde{d}_{ijk}\in \Gamma(U_{ijk},T_X)$, and $\tilde{d}_{jkl}-\tilde{d}_{ikl}+\tilde{d}_{ijl}-\tilde{d}_{ijk}=0$, so $-\delta(\{\tilde{d}_{ijk}\})=0$. Since $[\tilde{\Lambda}_i,\tilde{\Lambda}_i]$ is zero modulo $(t)$ by $[\Lambda_i,\Lambda_i]=0$, there exists $\Pi_i\in Hom_{B_i}(\wedge^3 \Omega_{B_i/k}^1, B_i)$ such that $[\tilde{\Lambda}_i,\tilde{\Lambda}_i]=t\Pi_i$. Since $0=[\tilde{\Lambda}_i,[\tilde{\Lambda}_i,\tilde{\Lambda}_i]]=[\tilde{\Lambda}_i,t\Pi_i]=t[\Lambda_0,\Pi_i]$, we have $[\Lambda_0,\Pi_i]=0$.
Let $\tilde{f}_{ij}: \mathcal{O}_X(U_{ij})\otimes_k \tilde{A} \to \mathcal{O}_X(U_{ji})\otimes_k \tilde{A}$ be the ring homomorphism corresponding to $\tilde{\theta}_{ij}$. We will denote by $\tilde{f}_{ij}\tilde{\Lambda}_i$ the induced biderivation structure on $\mathcal{O}_X(U_{ji})\otimes_k \tilde{A}$ such that $\tilde{f}_{ij}:(\mathcal{O}_X(U_{ij})\otimes_k\tilde{A},\tilde{\Lambda}_i)\to (\mathcal{O}_X(U_{ji})\otimes_k\tilde{A}, \tilde{f}_{ij}\tilde{\Lambda}_i)$ is biderivation-preserving. Since $\tilde{f}_{ij}\tilde{\Lambda}_i$ and $\tilde{\Lambda}_j$ agree modulo $(t)$ by $(3)$, there exists $\Lambda_{ij}'\in \Gamma(U_{ij}, \mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1, \mathcal{O}_X))$ such that $t\Lambda_{ij}'=\tilde{f}_{ij}\tilde{\Lambda}_i-\tilde{\Lambda}_j$. Then $t\Lambda_{ji}'=\tilde{f}_{ji}\tilde{\Lambda}_j-\tilde{\Lambda}_i$. By applying $\tilde{f}_{ij}$ on both sides, we have $t\Lambda_{ji}'=\tilde{\Lambda}_j-\tilde{f}_{ij}\tilde{\Lambda}_i=-t\Lambda_{ij}'$. Hence $\Lambda_{ji}'=-\Lambda_{ij}'$. Then $t\Pi_i-t\Pi_j=\tilde{f}_{ij}(t\Pi_i)-t\Pi_j=\tilde{f}_{ij}[\tilde{\Lambda}_i,\tilde{\Lambda}_i]-[\tilde{\Lambda}_j,\tilde{\Lambda}_j]=[\tilde{f}_{ij}\tilde{\Lambda}_i,\tilde{f}_{ij}\tilde{\Lambda}_i]-[\tilde{\Lambda}_j,\tilde{\Lambda}_j]=[\tilde{\Lambda}_j+t\Lambda_{ij}',\tilde{\Lambda}_j+t\Lambda_{ij}']-[\tilde{\Lambda}_j,\tilde{\Lambda}_j]=t[\Lambda_0,2\Lambda_{ij}']$. Hence we have $-\Pi_i-(-\Pi_j)+[\Lambda_0, 2\Lambda_{ij}']=0$, so $-\delta(\{-\Pi_i\})+[\Lambda_0,\{2\Lambda_{ij}'\}]=0$. Consider the following isomorphism
\begin{align*}
\tilde{\alpha}_{ijk}:U_{ijk}\times Spec(\tilde{A}) \xrightarrow{\tilde{\theta}_{ki}}U_{ijk}\times Spec(\tilde{A}) \xrightarrow{\tilde{\theta}_{jk}}U_{ijk}\times Spec(\tilde{A})\xrightarrow{\tilde{\theta}_{ij}}U_{ijk}\times Spec(\tilde{A})
\end{align*}
which corresponds to a $\tilde{d}_{ijk} \in \Gamma(U_{ijk},T_X)$. Then we have
\begin{align*}
Id+t\tilde{d}_{ijk}:\mathcal{O}_X(U_{ijk})\otimes_k \tilde{A} \xrightarrow{\tilde{f}_{ij}}\mathcal{O}_X(U_{ijk})\otimes_k \tilde{A} \xrightarrow{\tilde{f}_{jk}}\mathcal{O}_X(U_{ijk})\otimes_k \tilde{A} \xrightarrow{\tilde{f}_{ki}} \mathcal{O}_X(U_{ijk})\otimes_k \tilde{A}
\end{align*}
$Id+t\tilde{d}_{ijk}: (\mathcal{O}_X(U_{ijk})\otimes \tilde{A} ,\tilde{\Lambda}_i) \to (\mathcal{O}_X(U_{ijk})\otimes_k \tilde{A} ,\tilde{f}_{ki}\tilde{f}_{jk}\tilde{f}_{ij}\tilde{\Lambda}_i)$ is an isomorphism compatible with biderivations. We note that $\tilde{\Lambda}_i-\tilde{f}_{ki}\tilde{f}_{jk}\tilde{f}_{ij}\tilde{\Lambda}_i=\tilde{\Lambda}_i-\tilde{f}_{ki}\tilde{f}_{jk}(\tilde{\Lambda}_j+t\Lambda_{ij}')=\tilde{\Lambda}_i-\tilde{f}_{ki}(\tilde{\Lambda}_k+t\Lambda'_{jk}+t\Lambda_{ij}')=\tilde{\Lambda}_i-(\tilde{\Lambda}_i+t\Lambda_{ki}'+t\Lambda_{jk}'+t\Lambda_{ij}')=-t(\Lambda_{ki}'+\Lambda_{jk}'+\Lambda_{ij}')$. Hence by Lemma \ref{3l}, we have $\Lambda_{ki}'+\Lambda_{jk}'+\Lambda_{ij}'-[\Lambda_0, \tilde{d}_{ijk}]=0$. So we have $-\delta(\{\Lambda_{ij}'\})+[\Lambda_0,\{\tilde{d}_{ijk}\}]=0$. Hence $\alpha=(\{-\Pi_i\}, \{2\Lambda_{ij}'\},\{2\tilde{d}_{ijk}\})\in C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^3 \Omega_{X/k},\mathcal{O}_X))\oplus C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k},\mathcal{O}_X))\oplus C^2(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))$ is a cocycle in the following diagram (see Appendix \ref{appendixb}).
\begin{center}
$\begin{CD}
@A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^3 \Omega_{X/k}^1,\mathcal{O}_X))@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))@>\delta>> C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))@>-\delta>>C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))@>\delta>>C^2(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))\\
@AAA @AAA @AAA \\
0@>>>0 @>>> 0
\end{CD}$
\end{center}
We claim that given a different choice $\{\tilde{\theta}_{ij}' \}$ and $\{\tilde{\Lambda}_i'\}$ satisfying $(1),(2),(3)$, the associated cocycle $\beta=(\{-\Pi_i'\}, \{2\Lambda_{ij}''\},\{2\tilde{d}'_{ijk}\})\in C^0(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^3 \Omega_{X/k}^1,\mathcal{O}_X))\oplus C^1(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))\oplus C^2(\mathcal{U},\mathscr{H}om_{\mathcal{O}_X}(\Omega_{X/k}^1,\mathcal{O}_X))$ is cohomologous to the cocycle associated with $\{\tilde{\theta}_{ij}\}$ and $\{\tilde{\Lambda}_i\}$. Let $\tilde{f}_{ij}:\mathcal{O}_X(U_{ij})\otimes_k\tilde{A}\to \mathcal{O}_X(U_{ij})\otimes_k \tilde{A}$ be the homomorphism corresponding to $\tilde{\theta}_{ij}$, and $\tilde{f}'_{ij}:\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}\to \mathcal{O}_X(U_{ij})\otimes_k \tilde{A}$ the homomorphism corresponding to $\tilde{\theta}_{ij}'$. Then $\tilde{f}_{ij}'=\tilde{f}_{ij}+tp_{ij}$ for some $p_{ij}\in \Gamma(U_{ij},T_X)$ \footnote{Since $\tilde{f}'_{ij}-\tilde{f}_{ij}$ is zero modulo $t$, we have $(\tilde{f}'_{ij}-\tilde{f}_{ij})(x)=tp_{ij}(x)$ for some map $p_{ij}$. We show that $p_{ij}$ is a derivation. Indeed $tp_{ij}(xy)=(\tilde{f}'_{ij}-\tilde{f}_{ij})(xy)=\tilde{f}'_{ij}(x)(\tilde{f}'_{ij}-\tilde{f}_{ij})(y)+(\tilde{f}'_{ij}-\tilde{f}_{ij})(x)\tilde{f}_{ij}(y)=\tilde{f}'_{ij}(x)tp_{ij}(y)+tp_{ij}(x)\tilde{f}_{ij}(y)=t(xp_{ij}(y)+yp_{ij}(x))$. So $p_{ij}$ is a derivation and hence an element of $\Gamma(U_{ij},T_X)$.} and $\tilde{\Lambda}'_i=\tilde{\Lambda}_i+t\Lambda_i'$ for some $\Lambda_i'\in Hom_{B_i}(\wedge^2 \Omega_{B_i/k}^1,B_i)$. For each $i,j,k$, $\tilde{\theta}_{ij}'\tilde{\theta}_{jk}'\tilde{\theta'}_{ik}^{-1}$ corresponds to the derivation $\tilde{d}'_{ijk}=\tilde{d}_{ijk}+(p_{ij}+p_{jk}-p_{ik})$. Hence $\delta(\{2p_{ij}\})=\{2\tilde{d}_{ijk}'\}-\{2\tilde{d}_{ijk}\}$. We also note that $t\Pi_i'=[\tilde{\Lambda}'_i,\tilde{\Lambda}'_i]=[\tilde{\Lambda}_i+t\Lambda_i',\tilde{\Lambda}_i+t\Lambda_i']=[\tilde{\Lambda}_i,\tilde{\Lambda}_i]+t[2\Lambda_i',\Lambda_0]=t\Pi_i+t[2\Lambda_i',\Lambda_0]$.
Hence we have $[\Lambda_0,2\Lambda_i']=-\Pi_i-(-\Pi_i')$. Since $t\Lambda_{ij}'=\tilde{f}_{ij}\tilde{\Lambda}_i-\tilde{\Lambda}_j$ and $t\Lambda_{ij}''=\tilde{f}_{ij}'\tilde{\Lambda}_i'-\tilde{\Lambda}_j'=\tilde{f}_{ij}\tilde{\Lambda}_i'+t[p_{ij},\tilde{\Lambda}_i']-\tilde{\Lambda}_j'=\tilde{f}_{ij}\tilde{\Lambda}_i+t\Lambda_i'+t[p_{ij},\Lambda_0]-\tilde{\Lambda}_j-t\Lambda_j'$, we have $\Lambda_{ij}'-\Lambda_{ij}''=-\Lambda_i'+[\Lambda_0,p_{ij}]+\Lambda_j'$. So $\delta(\{2\Lambda_i'\})+[\Lambda_0,\{2p_{ij}\}]=\{2\Lambda_{ij}'\}-\{2\Lambda_{ij}''\}.$ Hence $(\{2\Lambda_i'\}, \{-2p_{ij}\})$ is mapped to $\alpha-\beta$, so $\alpha$ and $\beta$ are cohomologous. So given a Poisson deformation $\xi$ and a small extension $e:0\to (t)\to \tilde{A}\to A\to 0$, we can associate an element $o_{\xi}(e):=$ the cohomology class of $\alpha \in HP^3(X,\Lambda_0)$. We also note that $o_{\xi}(e)=0$ if and only if there exists a collection of $\{\tilde{\theta}_{ij}\}$ and $\{\tilde{\Lambda}_i\}$ satisfying $(2),(3)$ with $[\tilde{\Lambda}_i,\tilde{\Lambda}_i]=0$ (which means $\tilde{\Lambda}_i$ defines a Poisson structure), $\Lambda_{ij}'=0$ (which implies $\tilde{f}_{ij}\tilde{\Lambda}_i=\tilde{\Lambda}_j$) and $\tilde{d}_{ijk}=0$ (which is condition $(1)$), if and only if there is a lifting $\tilde{\xi}$.
\end{proof}
\begin{definition}
The Poisson deformation $\xi$ is called unobstructed if $o_{\xi}$ is the zero map; otherwise $\xi$ is called obstructed. $(X,\Lambda_0)$ is unobstructed if every infinitesimal Poisson deformation of $(X,\Lambda_0)$ is unobstructed; otherwise $(X,\Lambda_0)$ is obstructed.
\end{definition}
\begin{corollary}
A nonsingular Poisson variety $(X,\Lambda_0)$ is unobstructed if $HP^3(X,\Lambda_0)=0$.
\end{corollary}
\begin{proposition}\label{3rigid}
A nonsingular Poisson variety $(X,\Lambda_0)$ is rigid if and only if $HP^2(X,\Lambda_0)=0$.
\end{proposition}
\begin{proof}
Assume that $(X,\Lambda_0)$ is rigid. Since any infinitesimal Poisson deformation (in particular, any first order Poisson deformation) is trivial, $HP^2(X,\Lambda_0)=0$ by Proposition \ref{3q}. Conversely, assume that $HP^2(X,\Lambda_0)=0$. First we claim that given an infinitesimal Poisson deformation $\eta$ of $(X,\Lambda_0)$ over $A\in \bold{Art}$ and a small extension $e:0\to (t)\to \tilde{A}\to A\to 0$, any two liftings $ \xi, \tilde{\xi}$ to $\tilde{A}$ are equivalent. Let $\{U_i\}$ be an affine open covering of $X$, and write $\xi=(\mathcal{X},\Lambda)$ and $\tilde{\xi}=(\tilde{\mathcal{X}},\tilde{\Lambda})$. Choose $\{\theta_i\}$ where $\theta_i :U_i\times Spec(\tilde{A})\to \mathcal{X}|_{U_i} $, let $\Lambda_i$ be the Poisson structure on $U_i\times Spec(\tilde{A})$ induced from $\Lambda|_{U_i}$, and let $\theta_{ij}=\theta_i^{-1}\theta_j$. Similarly choose $\{\tilde{\theta}_i\}$ where $\tilde{\theta}_i :U_i\times Spec(\tilde{A})\to \mathcal{\tilde{X}}|_{U_i} $, let $\tilde{\Lambda}_i$ be the Poisson structure on $U_i\times Spec(\tilde{A})$ induced from $\tilde{\Lambda}|_{U_i}$, and let $\tilde{\theta}_{ij}=\tilde{\theta}_i^{-1}\tilde{\theta}_j$. Let $f_{ij}:(\mathcal{O}_X({U_{ij}})\otimes_k\tilde{A},{\Lambda}_i )\to (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\Lambda_j)$ be the homomorphism corresponding to $\theta_{ij}$ and $\tilde{f}_{ij}:(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i)\to (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j)$ the homomorphism corresponding to $\tilde{\theta}_{ij}$. Since $\xi,\tilde{\xi}$ induce the same Poisson deformation $\eta$ over $A$, we have
\begin{center}
$\tilde{f}_{ij}=f_{ij}+tp_{ij},\qquad \tilde{\Lambda}_i=\Lambda_i+t\Lambda_i'$
\end{center}
for some $p_{ij}\in \Gamma(U_{ij},T_X)$ and some $\Lambda_i'\in \Gamma(U_i,\mathscr{H}om_{\mathcal{O}_X}(\wedge^2 \Omega_{X/k}^1,\mathcal{O}_X))$.
For all $i,j,k$ we have $0=\tilde{d}_{ijk}=p_{ij}+p_{jk}-p_{ik}$. Moreover $0=[\tilde{\Lambda}_i,\tilde{\Lambda}_i]=[\Lambda_i+t\Lambda_i',\Lambda_i+t\Lambda_i']=2t[\Lambda'_i,\Lambda_0]$, so $[\Lambda_i',\Lambda_0]=0$. Since $f_{ij}\Lambda_i=\Lambda_j$ and $\tilde{f}_{ij}\tilde{\Lambda}_i=\tilde{\Lambda}_j$, we have ${\Lambda}_j+t\Lambda_j'=\tilde{\Lambda}_j=\tilde{f}_{ij}\tilde{\Lambda}_i=(f_{ij}+tp_{ij})(\Lambda_i+t\Lambda_i')=\Lambda_j-t[\Lambda_0,p_{ij}]+t\Lambda_i'$. Hence we have $\Lambda_j'-\Lambda_i'+[\Lambda_0,p_{ij}]=0$. Hence $(\{\Lambda_i'\},\{ p_{ij}\})$ defines a cocycle. Since $HP^2(X,\Lambda_0)=0$, there exists $\{a_i\}\in C^0(\mathcal{U},T_X)$ such that $[\Lambda_0,a_i]=\Lambda_i'$ and $a_i-a_j=p_{ij}$. Now we explicitly construct a Poisson isomorphism $(\tilde{\mathcal{X}},\tilde{\Lambda}) \cong (\mathcal{X},\Lambda)$. We define a Poisson isomorphism locally on $U_i\times Spec(\tilde{A})$, and show that the local maps glue together to give a Poisson isomorphism $(\tilde{\mathcal{X}},\tilde{\Lambda}) \cong (\mathcal{X},\Lambda)$. We claim that the map $(U_i\times Spec(\tilde{A}),\Lambda_i)\to ( U_i\times Spec(\tilde{A}), \tilde{\Lambda}_i)$ induced from $Id+ta_i:(\mathcal{O}_X(U_i)\otimes_k \tilde{A},\tilde{\Lambda}_i)\to (\mathcal{O}_X(U_i)\otimes_k \tilde{A},\Lambda_i)$ is a Poisson isomorphism; its inverse is induced from $Id-ta_{i}$. Since $\tilde{\Lambda}_i+t[a_i,\tilde{\Lambda}_i]=\tilde{\Lambda}_i+t[a_i,\Lambda_0]=\Lambda_i+t\Lambda_i'+t[a_i,\Lambda_0]=\Lambda_i$, $Id+ta_i$ is Poisson. It remains to show that the Poisson isomorphisms $\{Id+ta_i\}$ glue together to give a Poisson isomorphism $(\tilde{\mathcal{X}},\tilde{\Lambda}) \cong (\mathcal{X},\Lambda)$. Indeed, it is sufficient to show that the following diagram commutes.
\begin{center}
$\begin{CD}
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}, \tilde{\Lambda}_j )@> Id+ta_j >> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\Lambda_j)\\
@A\tilde{f}_{ij}AA@AA f_{ij}A\\
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}, \tilde{\Lambda}_i)@> Id+ta_i >> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\Lambda_i)
\end{CD}$
\end{center}
Indeed, the diagram commutes if and only if $(Id+ta_j)\circ \tilde{f}_{ij}=f_{ij}\circ (Id+ta_i)$, if and only if $\tilde{f}_{ij}+ta_j=f_{ij}+ta_i$, if and only if $p_{ij}=a_i-a_j$. Hence any two liftings of $\eta$ are equivalent, i.e.\ there is at most one lifting of $\eta$ up to equivalence.
Now we prove that if $HP^2(X,\Lambda_0)=0$, then $(X,\Lambda_0)$ is rigid. We prove this by induction on the dimension of $(A,\mathfrak{m})\in \bold{Art}$. For $A$ with $dim_k A=2$, any first order Poisson deformation is trivial by Proposition \ref{3q} since $HP^2(X,\Lambda_0)=0$. Assume that any infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$ with $dim_k A\leq n-1$ is trivial. Let $\xi$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$ with $dim_k A=n$ such that $\mathfrak{m}^{p-1}\ne 0$ and $\mathfrak{m}^p=0$. Choose an element $t\ne 0\in \mathfrak{m}^{p-1}$. Then $0\to (t)\to A\to A/(t)\to 0$ is a small extension and $dim_k A/(t)\leq n-1$. Hence the Poisson deformation $\bar{\xi}$ over $A/(t)$ induced from $\xi$ is trivial by the induction hypothesis. Since $\xi$ is a lifting of $\bar{\xi}$, and the trivial Poisson deformation over $A$ is also a lifting of $\bar{\xi}$, $\xi$ is trivial because any two liftings of $\bar{\xi}$ are equivalent.
\end{proof}
\chapter{Poisson deformation functors}\label{chapter8}
\section{Schlessinger's criterion}\footnote{For details, see \cite{Har10}, pages 106--117.}
We discuss functors of Artin rings in more detail and Schlessinger's criteria. We recall that $\bold{Art}$ is the category of local artinian $k$-algebras with residue field $k$. Before the discussion, we note the following: let $f:(A',\mathfrak{m}')\to( A,\mathfrak{m})$ and $g:(A'',\mathfrak{m}'')\to (A,\mathfrak{m})$ be two local homomorphisms of local artinian $k$-algebras with residue field $k$, so that $f^{-1}(\mathfrak{m})=\mathfrak{m}'$ and $g^{-1}(\mathfrak{m})=\mathfrak{m}''$. Consider the fiber product $\bar{A}=A'\times_A A''=\{(a,b)\,|\,a\in A', b\in A'', f(a)=g(b)\}$ with operations $(a_1,b_1)\cdot (a_2,b_2)=(a_1a_2,b_1b_2)$ and $(a_1,b_1)+(a_2,b_2)=(a_1+a_2, b_1+b_2)$. We will show that $\bar{A}$ is a local artinian $k$-algebra with residue field $k$. The maximal ideal is given by $\bar{\mathfrak{m}}=\{(m,n)\,|\,m\in \mathfrak{m}', n\in \mathfrak{m}'', f(m)=g(n)\}$. To show that $\bar{\mathfrak{m}}$ is the unique maximal ideal, it is enough to show that every $(a,b)\in \bar{A}-\bar{\mathfrak{m}}$ is a unit. Indeed, we have $a\in A'-\mathfrak{m}'$ and $b\in A''-\mathfrak{m}''$, so $a,b$ are units, and $(a^{-1},b^{-1})$ is an inverse of $(a,b)$. Now we show that $\bar{A}/\bar{\mathfrak{m}}\cong k$. We define a map $\varphi:\bar{A}\to A'/\mathfrak{m}'\times_{A/\mathfrak{m}} A''/\mathfrak{m}''\cong k\times_k k\cong k$ by $(a,b)\mapsto (\bar{a},\bar{b})$. If $\varphi(a,b)=(\bar{a},\bar{b})=0$, then $a\in \mathfrak{m}'$ and $b\in \mathfrak{m}''$. Hence $\ker \varphi=\bar{\mathfrak{m}}$. So the natural maps $f':(\bar{A},\bar{\mathfrak{m}})\to (A',\mathfrak{m}')$ and $g':(\bar{A},\bar{\mathfrak{m}})\to (A'',\mathfrak{m}'')$ are local homomorphisms of local artinian $k$-algebras with $f'^{-1}(\mathfrak{m'})=\bar{\mathfrak{m}}$ and $g'^{-1}(\mathfrak{m}'')=\bar{\mathfrak{m}}$.
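As a concrete illustration (a standard computation, added here only for orientation), take $A'=A''=k[\epsilon]$ and $A=k$ with the residue maps. Then
\begin{align*}
k[\epsilon]\times_k k[\epsilon]=\{(a+b\epsilon,\, a+c\epsilon)\,|\,a,b,c\in k\}\cong k[x,y]/(x^2,xy,y^2)
\end{align*}
via $(a+b\epsilon,a+c\epsilon)\mapsto a+bx+cy$: both sides have product $(a+b\epsilon,a+c\epsilon)\cdot(a'+b'\epsilon,a'+c'\epsilon)=(aa'+(ab'+a'b)\epsilon,\, aa'+(ac'+a'c)\epsilon)$. In particular $\bar{\mathfrak{m}}$ corresponds to $(x,y)$ and satisfies $\bar{\mathfrak{m}}^2=0$.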
\begin{definition}
Let $R$ be a complete local $k$-algebra. For each $A\in \bold{Art}$, we define $h_R$ to be the following functor of Artin rings:
\begin{align*}
h_R:\bold{Art}&\to \bold{Sets}\\
A&\mapsto h_R(A):=Hom_k(R,A)
\end{align*}
A covariant functor $F:\bold{Art}\to \bold{Sets}$ that is isomorphic to a functor of the form $h_R$ for some complete local $k$-algebra $R$ is called pro-representable.
\end{definition}
Let $(R,\mathfrak{m})$ be a complete local $k$-algebra and let $\varphi:h_R\to F$ be a morphism of functors of Artin rings. Then for each $n$, the canonical map $R/\mathfrak{m}^{n+1}\to R/\mathfrak{m}^n$ gives the following commutative diagram.
\begin{center}
$\begin{CD}
Hom(R,R/\mathfrak{m}^{n+1})@>\varphi_{n+1}>> F(R/\mathfrak{m}^{n+1})\\
@VVV @VVV\\
Hom(R,R/\mathfrak{m}^n)@>\varphi_n>> F(R/\mathfrak{m}^n)
\end{CD}$
\end{center}
Let $\pi_n:R\to R/\mathfrak{m}^n$ be the canonical surjection. Then we have $\xi=\{\xi_n\}:=\{\varphi_n(\pi_n)\}\in \varprojlim F(R/\mathfrak{m}^n)$. We call $\xi=\{\xi_n\}$ a formal family of $F$ over $R$.
\begin{definition}
Let $\bold{C}$ be the category of complete local $k$-algebras with residue field $k$. Let $F$ be a functor of Artin rings. We define
\begin{align*}
\hat{F}:\bold{C}&\to \bold{Sets}\\
(R,\mathfrak{m}) &\mapsto \hat{F}(R)=\varprojlim F(R/\mathfrak{m}^n)
\end{align*}
\end{definition}
Let $\xi=\{\xi_n\}\in \hat{F}(R)$ be a formal family. From this we can define a morphism $h_R\to F$ of functors of Artin rings as follows. For $A\in \bold{Art}$ and $f\in h_R(A)=Hom_k(R,A)$, since $A$ is artinian, $f:R\to A$ factors through $R/\mathfrak{m}^n$ for some $n$, say via $g:R/\mathfrak{m}^n\to A$. We define
\begin{align*}
h_R(A)=Hom_k(R,A)&\to F(A)\\
f&\mapsto F(g)(\xi_n)
\end{align*}
\begin{remark}
If $F$ is a functor of Artin rings and $R$ is a complete local $k$-algebra with residue field $k$, then there is a natural bijection between the set $\hat{F}(R)$ of formal families $\{\xi_n\,|\,\xi_n\in F(R/\mathfrak{m}^n)\}$ and the set of morphisms of functors $h_R\to F$. So if $F$ is pro-representable, there is an isomorphism $\xi:h_R\to F$ for some $R$, and we can think of $\xi$ as an element of $\hat{F}(R)$. We say that the pair $(R,\xi)$ pro-represents the functor $F$.
\end{remark}
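As a basic example (standard, and stated here only for orientation), take $R=k[[t]]$. For $A\in\bold{Art}$ we have
\begin{align*}
h_{k[[t]]}(A)=Hom_k(k[[t]],A)\cong \mathfrak{m}_A,\qquad f\mapsto f(t),
\end{align*}
since a local $k$-algebra homomorphism must send $t$ into $\mathfrak{m}_A$, and conversely any $a\in \mathfrak{m}_A$ is nilpotent, so every power series in $t$ evaluates at $a$ to a finite sum. Thus $h_{k[[t]]}$ is pro-representable by definition.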
\begin{definition}
Let $F$ be a functor of Artin rings.
\begin{enumerate}
\item A pair $(R,\xi)$ with $R\in \bold{C}$ and $\xi\in \hat{F}(R)$ is called a versal family for $F$ if the associated map $h_R\to F$ is smooth. In other words, for every surjection $B\to A$ in $\bold{Art}$, the natural map $h_R(B)\to h_R(A)\times_{F(A)} F(B)$ is surjective. This means that given a map $R\to A$ inducing an element $\eta\in F(A)$ and given $\theta\in F(B)$ mapping to $\eta$, one can lift the map $R\to A$ to a map $R\to B$ inducing $\theta$.
\item A versal family $(R,\xi)$ with $R\in \bold{C}$ and $\xi\in \hat{F}(R)$ is called a miniversal family or $F$ has a pro-representable hull $(R,\xi)$ if $h_R(k[\epsilon])\to F(k[\epsilon])$ is bijective.
\item A pair $(R,\xi)$ with $R\in \bold{C}$ and $\xi\in \hat{F}(R)$ is called a universal family if it pro-represents the functor $F$.
\end{enumerate}
\end{definition}
\begin{thm}[Schlessinger's criterion]\label{sch1}
A functor of Artin rings has a miniversal family if and only if
\begin{itemize}
\item $(H_0)$ $F(k)$ has one element.
\item $(H_1)$ The natural map $F(A'\times_A A'')\to F(A')\times_{F(A)} F(A'')$ is surjective for every small extension $A''\to A$.
\item $(H_2)$ The natural map $F(A'\times_A A'')\to F(A')\times_{F(A)} F(A'')$ is bijective when $A''=k[\epsilon]$ and $A=k$.
\item $(H_3)$ $t_F:=F(k[\epsilon])$ is a finite-dimensional $k$-vector space.
\end{itemize}
\end{thm}
\begin{proof}
See \cite{Har10} Theorem 16.2.
\end{proof}
\begin{remark}
We explain why $t_F:=F(k[\epsilon])$ in $(H_3)$ is a $k$-vector space. Let $F$ be a functor of Artin rings satisfying $(H_0)$ and $(H_2)$. Then $F(k[\epsilon])$ can be given the structure of a $k$-vector space in the following way. Let's consider the following map
\begin{align*}
\alpha:k[\epsilon]\times_k k[\epsilon]&\to k[\epsilon]\\
(a+b\epsilon,a+b'\epsilon)&\mapsto a+(b+b')\epsilon
\end{align*}
Then $\alpha((a+b\epsilon,a+b'\epsilon)\cdot (c+d\epsilon,c+d'\epsilon))=\alpha(ac+(ad+bc)\epsilon, ac+(ad'+b'c)\epsilon)=ac+(ad+bc+ad'+b'c)\epsilon$. On the other hand, $\alpha((a+b\epsilon,a+b'\epsilon))\cdot \alpha((c+d\epsilon,c+d'\epsilon))=(a+(b+b')\epsilon)(c+(d+d')\epsilon)=ac+(ad+ad'+bc+b'c)\epsilon$. Hence $\alpha$ is a homomorphism, so it induces $F(\alpha):F(k[\epsilon]\times_k k[\epsilon])\to F(k[\epsilon])$. Since $F$ satisfies $(H_0)$ and $(H_2)$, we have $F(k[\epsilon])\times F(k[\epsilon])\cong F(k[\epsilon]\times_k k[\epsilon])$. So we have a map $F(k[\epsilon])\times F(k[\epsilon])\to F(k[\epsilon])$, which defines the addition. By the following commutative diagram of homomorphisms and the property $(H_2)$, the addition is associative$:$
\begin{center}
$\begin{CD}
k[\epsilon]\times_k k[\epsilon] \times_k k[\epsilon]\ni(a+b\epsilon,a+b'\epsilon,a+b''\epsilon)@>>> k[\epsilon]\times_k k[\epsilon]\ni(a+(b+b')\epsilon,a+b''\epsilon)\\
@VVV @VVV\\
k[\epsilon]\times_k k[\epsilon]\ni(a+b\epsilon,a+(b'+b'')\epsilon)@>>> k[\epsilon]\ni a+(b+b'+b'')\epsilon
\end{CD}$
\end{center}
The zero element is the image of $F(k)\to F(k[\epsilon])$ induced from $k\to k[\epsilon]$, $a\mapsto a+0\cdot\epsilon$. The scalar multiplication by $c\in k$ is defined by the map $F(k[\epsilon])\to F(k[\epsilon])$ induced from $k[\epsilon]\to k[\epsilon]$, $a+b\epsilon\mapsto a+(cb)\epsilon$. The additive inverse is defined by the map $F(k[\epsilon])\to F(k[\epsilon])$ induced from $k[\epsilon]\to k[\epsilon]$, $a+b\epsilon\mapsto a-b\epsilon$.
\end{remark}
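For a pro-representable functor this vector space structure recovers the Zariski tangent space (a standard fact, recorded here only for orientation): if $F=h_R$ for a complete local $k$-algebra $(R,\mathfrak{m})$ with residue field $k$, then
\begin{align*}
t_{h_R}=Hom_k(R,k[\epsilon])\cong Hom_k(\mathfrak{m}/\mathfrak{m}^2,k),
\end{align*}
since a local homomorphism $R\to k[\epsilon]$ kills $\mathfrak{m}^2$ (because $\epsilon^2=0$) and is determined by the induced $k$-linear map $\mathfrak{m}/\mathfrak{m}^2\to (\epsilon)\cong k$.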
Let's assume that the functor $F$ satisfies $(H_0)$, $(H_1)$ and $(H_2)$. We claim that for any small extension $0\to(t)\to A'\xrightarrow{p} A\to 0$ and any element $\eta\in F(A)$, there is a transitive group action of the vector space $t_F$ on the set $p^{-1}(\eta)$ whenever it is nonempty. Here we also write $p$ for the induced map $F(p):F(A')\to F(A)$. Indeed, we have an isomorphism
\begin{align*}
\gamma:k[\epsilon]\times_k A'&\to A'\times_A A'\\
(x+y\epsilon,a') &\mapsto (a'+yt,a')\\
\gamma^{-1}:A'\times_A A' &\to k[\epsilon]\times_k A'\\
(b',a') &\mapsto (\overline{a'}+\overline{(b'-a')}\epsilon,a')
\end{align*}
where $\overline{a'}$ and $\overline{b'-a'}\in k$ are the residues of $a'$ and $b'-a'$ modulo the maximal ideal of $A'$. Let $\beta:k[\epsilon]\times_k A'\to A'$, $(x+y\epsilon,a')\mapsto a'+yt$. Then we have the following commutative diagram
\begin{center}
\[\begindc{\commdiag}[50]
\obj(0,1)[aa]{$k[\epsilon]\times_k A'$}
\obj(1,0)[bb]{$A'$}
\obj(2,1)[cc]{$A'\times_A A'$}
\mor{aa}{cc}{$\gamma$}
\mor{aa}{bb}{$\beta$}[\atright,\solidarrow]
\mor{cc}{bb}{$pr_1$}
\enddc\]
\end{center}
Since $F$ satisfies $(H_2)$, we have a bijection $\alpha^{-1}:F(k[\epsilon])\times F(A')\to F(k[\epsilon]\times_k A')$, and hence a bijection $F(k[\epsilon]) \times F(A')\to F(A'\times_{A} A')$ induced from $F(\gamma)\circ \alpha^{-1}$.
Since we have the following commutative diagram
{\tiny{\begin{center}
\[\begindc{\commdiag}[70]
\obj(0,1)[aa]{$F(A')$}
\obj(1,0)[bb]{$F(A')\times_{F(A)} F(A')$}
\obj(2,1)[cc]{$F(A')$}
\obj(1,2)[dd]{$F(A'\times_A A')$}
\obj(1,3)[ee]{$F(k[\epsilon]\times_k A')$}
\obj(1,4)[ff]{$t_F\times F(A')$}
\mor{aa}{bb}{}
\mor{cc}{bb}{}
\mor{dd}{aa}{}
\mor{dd}{cc}{}
\mor{ee}{dd}{$F(\gamma)$}
\mor{ff}{ee}{$\alpha^{-1}$}
\mor{ff}{aa}{$F(\beta)\circ \alpha^{-1}$}[\atright,\solidarrow]
\mor{ff}{cc}{$pr$}
\mor{dd}{bb}{}
\enddc\]
\end{center}}}
The map $t_F\times F(A')\to F(A')\times_{F(A)} F(A')$ is surjective and an isomorphism on the second factor since $F$ satisfies $(H_1)$. If we take $\eta\in F(A)$ and fix $\eta'\in p^{-1}(\eta)$, then we get a surjective map $t_F\times \{\eta'\}\to p^{-1}(\eta) \times \{\eta'\}$, and hence a transitive group action of $t_F$ on $p^{-1}(\eta)$.
\begin{thm}[Schlessinger's criterion]\label{sch2}
Let $F$ be a functor of Artin rings. The functor $F$ is pro-representable if and only if $F$ satisfies $(H_0),(H_1),(H_2),(H_3)$ and
\begin{itemize}
\item $(H_4)$ For every small extension $p:A''\to A$ and every $\eta\in F(A)$ for which $p^{-1}(\eta)$ is nonempty, the group action of $t_F$ on $p^{-1}(\eta)$ is free and transitive.
\end{itemize}
\end{thm}
\begin{proof}
See \cite{Har10} Theorem 16.2.
\end{proof}
\section{Poisson Deformation functors}
\begin{definition}
Let $(X,\Lambda_0)$ be a Poisson algebraic scheme. For $A\in \bold{Art}$, we let
\begin{center}
$PDef_{(X,\Lambda_0)}(A)=\{\text{infinitesimal Poisson deformations of $(X,\Lambda_0)$ over $A$}\}/\text{Poisson isomorphisms}$
\end{center}
Then $PDef_{(X,\Lambda_0)}$ is a functor of Artin rings.
\end{definition}
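For orientation, the functoriality implicit in this definition is by base change: a morphism $A\to A'$ in $\bold{Art}$ induces
\begin{align*}
PDef_{(X,\Lambda_0)}(A)&\to PDef_{(X,\Lambda_0)}(A')\\
[(\mathcal{X},\Lambda)]&\mapsto [(\mathcal{X}\times_{Spec(A)} Spec(A'),\Lambda')],
\end{align*}
where $\Lambda'$ is the Poisson structure induced on the fiber product, as in the fiber product diagrams used throughout this chapter.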
We will prove that $PDef_{(X,\Lambda_0)}$ has a miniversal family when $(X,\Lambda_0)$ is a smooth projective Poisson scheme. Before the proof, we note the following: let $B,B',B''$ be Poisson algebras over local artinian $k$-algebras $A,A',A''$ with residue field $k$ as above. Given an $A'$-Poisson algebra homomorphism $p:B'\to B$ and an $A''$-Poisson algebra homomorphism $q:B''\to B$, the fiber product $\bar{B}=B'\times_B B''=\{(m,n)\,|\,m\in B',n\in B'', p(m)=q(n)\}$ is a Poisson $\bar{A}$-algebra with scalar multiplication $\bar{A}\times \bar{B}\to \bar{B}$, $((a,b),(m,n))\mapsto (am,bn)$, and bracket $\{(m,n),(r,s)\}=(\{m,r\},\{n,s\})$. Then $\bar{B}\to B'$ and $\bar{B}\to B''$ are Poisson homomorphisms over $\bar{A}$. The fiber product satisfies a universal mapping property in the following sense: let $C$ be a Poisson algebra over $\bar{A}$ and assume that we have Poisson $\bar{A}$-homomorphisms $f:C\to B'$ and $g:C\to B''$ such that $p\circ f=q\circ g$. Then there is a unique Poisson homomorphism $h:C\to B'\times_B B''$ defined by $h(c)=(f(c),g(c))$, which is a Poisson $\bar{A}$-homomorphism since $h(\{c_1,c_2\})=(f(\{c_1,c_2\}),g(\{c_1,c_2\}))=(\{f(c_1),f(c_2)\},\{g(c_1),g(c_2)\})=\{(f(c_1),g(c_1)),(f(c_2),g(c_2))\}=\{h(c_1),h(c_2)\}$ and, for $(a,b)\in \bar{A}$, $h((a,b)c)=(f((a,b)c),g((a,b)c))=(af(c),bg(c))=(a,b)(f(c),g(c))=(a,b)h(c)$.
\begin{lemma}\label{3ll}
Let $A,A',A''$ be local artinian $k$-algebras, and let $\bar{A}=A'\times_A A''$. Let $B,B',B''$ be algebras over $A,A',A''$ respectively, with compatible maps $B'\to B$ and $B''\to B$, and assume that $B'\otimes_{A'} A\to B$ and $B''\otimes_{A''} A\to B$ are isomorphisms. Let $\bar{B}=B'\times_B B''$.
\begin{enumerate}
\item Assume $A''\to A$ is surjective. Then $\bar{B}\otimes_{\bar{A}} A'\to B'$ is an isomorphism and so $\bar{B}\otimes_{\bar{A}} A\to B$ is an isomorphism.
\item Now assume furthermore that $J=\ker(A''\to A)$ is an ideal with $J^2=0$ and that $B',B''$ are flat over $A',A''$ respectively. Then $\bar{B}$ is flat over $\bar{A}$ and $\bar{B}\otimes_{\bar{A}} A'' \to B''$ is also an isomorphism.
\end{enumerate}
\end{lemma}
\begin{proof}
See \cite{Har10} Proposition 16.4.
\end{proof}
\begin{thm}\label{3thm1}
Let $(X,\Lambda_0)$ be a smooth projective Poisson scheme over $k$. Then the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ of Poisson deformations of $(X,\Lambda_0)$ over local artinian rings has a miniversal family.
\end{thm}
\begin{proof}
We check Schlessinger's criterion in Theorem \ref{sch1}. First, $PDef_{(X,\Lambda_0)}(k)$ is a one-point set, so $(H_0)$ is satisfied. Now we prove $(H_1)$. Let's consider the following commutative diagram of local artinian $k$-algebras with residue field $k$, where $A''\to A$ is a small extension.
\begin{center}
$\begin{CD}
\bar{A}=A'\times_A A''@>>> A'\\
@VVV @VVV\\
A''@>>> A
\end{CD}$
\end{center}
Let $(\mathcal{X},\Lambda)$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$. Let $(\mathcal{X}',\Lambda')$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A'$ and $(\mathcal{X}'',\Lambda'')$ an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A''$, both inducing $(\mathcal{X},\Lambda)$ via the above diagram. So we have the following fiber products of Poisson algebraic schemes.
\begin{center}
$\begin{CD}
(\mathcal{X},\Lambda)@>>> (\mathcal{X}',\Lambda')\\
@VVV @VVV\\
Spec(A)@>>> Spec(A')
\end{CD}$
\,\,\,\,\,\,\,\,\,
$\begin{CD}
(\mathcal{X},\Lambda)@>>> (\mathcal{X}'',\Lambda'')\\
@VVV @VVV\\
Spec(A)@>>> Spec(A'')
\end{CD}$
\end{center}
Then we will define an infinitesimal Poisson deformation $(\bar{\mathcal{X}},\bar{\Lambda})$ of $(X,\Lambda_0)$ over $\bar{A}$ inducing $(\mathcal{X}',\Lambda')$ and $(\mathcal{X}'',\Lambda'')$, which implies $(H_1)$. Since $Spec(A),Spec(A')$ and $Spec(A'')$ are one-point spaces, $\mathcal{X}$, $\mathcal{X}'$ and $\mathcal{X}''$ have the same underlying topological space as $X$. For any open set $U\subset X$, $\mathcal{O}_{\mathcal{X}'}(U)\to \mathcal{O}_{\mathcal{X}}(U)$ is a Poisson $A'$-algebra homomorphism and $\mathcal{O}_{\mathcal{X}''}(U)\to \mathcal{O}_{\mathcal{X}}(U)$ is a Poisson $A''$-algebra homomorphism. Choose an affine open cover $\{U_i=Spec(B_i)\}$ of $X$. Then the $U_i$ are also affine open sets of $\mathcal{X},\mathcal{X}',\mathcal{X}''$.\footnote{Such open sets exist:
let $Z_0$ be a closed subscheme of a scheme $Z$, defined by a sheaf of nilpotent ideals $N\subset \mathcal{O}_Z$. If $Z_0$ is affine, then $Z$ is affine as well (see \cite{Ser06} Lemma 1.2.3, page 23).} For an affine open set $U$ of $X$, let $\mathcal{O}_{\mathcal{X}}(U)=B, \mathcal{O}_{\mathcal{X}'}(U)=B'$ and $\mathcal{O}_{\mathcal{X}''}(U)=B''$. Then we have Poisson homomorphisms $p:B'\to B$ and $q:B''\to B$, and $B'\otimes_{A'} A\to B$ and $B''\otimes_{A''} A\to B$ are Poisson isomorphisms. Let $\bar{B}=B'\times_B B''$, which is a Poisson algebra over $\bar{A}$. Since $Spec(B),Spec(B')$ and $Spec(B'')$ have the same topological space, the maps $Spec(B)\to Spec(B')$ induced from $p:B'\to B$ and $Spec(B)\to Spec(B'')$ induced from $q:B''\to B$ are bijective. We would like to describe the prime ideals of $\bar{B}$. Since $A''\to A$ is surjective, by Lemma \ref{3ll}, the map $\varphi:\bar{B}\otimes_{\bar{A}} A'\to B'$ induced from $\bar{B}\to B'$ is a Poisson isomorphism. Then $Spec(B')$ is homeomorphic to $Spec(\bar{B})$. Hence every prime ideal of $\bar{B}$ is induced from $Spec(B)$ via the map $\bar{B}\to B$. Let $\mathfrak{p}$ be a prime ideal of $\bar{B}$. Then $\mathfrak{p}$ is the preimage of a unique prime ideal $\mathfrak{q}$ of $B$ under $\bar{B}\to B$, and we have a Poisson isomorphism $\bar{B}_{\mathfrak{p}}\cong B'_{p^{-1}(\mathfrak{q})}\times_{B_{\mathfrak{q}}} B''_{q^{-1}(\mathfrak{q})}$ induced from the natural fiber product of Poisson algebras
\begin{center}
$\begin{CD}
\bar{B}_{\mathfrak{p}}@>>> B'_{p^{-1}(\mathfrak{q})}\\
@VVV @VVV\\
B''_{q^{-1}(\mathfrak{q})}@>>> B_{\mathfrak{q}}
\end{CD}$
\end{center}
Now we globalize our arguments. We would like to construct a Poisson scheme $\bar{\mathcal{X}}$ which is an infinitesimal Poisson deformation over $\bar{A}$. For an affine open set $U$ of $X$, our local model of $\bar{\mathcal{X}}$ will be isomorphic to the affine Poisson scheme associated with $\mathcal{O}_{\mathcal{X}'}(U)\times_{\mathcal{O}_{\mathcal{X}}(U)} \mathcal{O}_{\mathcal{X}''}(U)$ with the naturally induced Poisson structure. We assume that $\mathcal{X}\to \mathcal{X}'$ and $\mathcal{X}\to \mathcal{X}''$ are identity maps on $X$ when we ignore sheaf structures. Now we will define a Poisson sheaf $\mathcal{O}_{\bar{\mathcal{X}}}$ on $X$ defining a Poisson scheme $\bar{\mathcal{X}}$ over $\bar{A}$ in the following way: for an open set $U$, $\mathcal{O}_{\bar{\mathcal{X}}}(U)$ is the set of maps $\phi:U\to \bigcup_{x\in U} \mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}} \mathcal{O}_{\mathcal{X}'',x}$ such that for each $x\in U$, there exists an affine neighborhood $U_x$ of $x$ and an element $(a,b)\in \mathcal{O}_{\mathcal{X}'}(U_x)\times_{\mathcal{O}_{\mathcal{X}}(U_x)}\mathcal{O}_{\mathcal{X}''}(U_x)$ such that $\phi$ is canonically induced from $(a,b)$ via the natural Poisson map $\mathcal{O}_{\mathcal{X}'}(U_x)\times_{\mathcal{O}_{\mathcal{X}}(U_x)}\mathcal{O}_{\mathcal{X}''}(U_x)\to \mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}} \mathcal{O}_{\mathcal{X}'',x}$. Then $\mathcal{O}_{\bar{\mathcal{X}}}$ is a Poisson sheaf, where the Poisson structure is induced from the Poisson structures on the stalks $\mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}} \mathcal{O}_{\mathcal{X}'',x}$, and $\mathcal{O}_{\bar{\mathcal{X}}}$ defines a Poisson scheme $\bar{\mathcal{X}}$. Indeed, for any affine open set $U$ of $X$, let $C=\mathcal{O}_{\mathcal{X}'}(U)\times_{\mathcal{O}_{\mathcal{X}}(U)}\mathcal{O}_{\mathcal{X}''}(U)$.
Then we have a natural sheaf homomorphism $\mathcal{O}_{Spec(C)}\to \mathcal{O}_{\bar{\mathcal{X}}}|_U$ which is a Poisson isomorphism since it is an isomorphism at each stalk by the compatibility with localization shown above.
We claim that $\bar{\mathcal{X}}\times_{Spec(\bar{A})} Spec(A')\cong \mathcal{X}'$ and $\bar{\mathcal{X}}\times_{Spec(\bar{A})}Spec(A'')\cong \mathcal{X}''$ as Poisson schemes. We simply note that $(\mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}}\mathcal{O}_{\mathcal{X}'',x})\otimes_{\bar{A}} A'\cong \mathcal{O}_{\mathcal{X}',x}$ and $(\mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}}\mathcal{O}_{\mathcal{X}'',x})\otimes_{\bar{A}} A''\cong \mathcal{O}_{\mathcal{X}'',x}$. $\bar{\mathcal{X}}$ is flat by Lemma \ref{3ll}.
Now we prove $(H_2)$. We assume that $A''=k[\epsilon]$ and $A=k$. Let $\mathcal{Y}$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $\bar{A}$ inducing $(\mathcal{X}',\Lambda')$ and $(\mathcal{X}'',\Lambda'')$, i.e.\ $\mathcal{Y}\times_{Spec(\bar{A})}Spec(A')\cong \mathcal{X}'$ and $\mathcal{Y}\times_{Spec(\bar{A})} Spec(A'')\cong \mathcal{X}''$. We will construct a Poisson isomorphism $\bar{\mathcal{X}}\to \mathcal{Y}$, or equivalently an isomorphism of Poisson sheaves $\mathcal{O}_{\mathcal{Y}}\to \mathcal{O}_{\bar{\mathcal{X}}}$. Since $A'\to k$ and $k[\epsilon]\to k$ are surjective, the projections $\bar{A}\to A''$ and $\bar{A}\to A'$ are surjective, so $\mathcal{X}'\to \mathcal{Y}$ and $\mathcal{X}''\to \mathcal{Y}$ are closed immersions. Since $\mathcal{Y}$ is also an infinitesimal Poisson deformation of $\mathcal{X}=(X,\Lambda_0)$, we have the following commutative diagram.
\begin{center}
$\begin{CD}
\mathcal{Y}@<<< \mathcal{X}'\\
@AAA @AAA\\
\mathcal{X}'' @<<< \mathcal{X}= (X,\Lambda_0)
\end{CD}$
\end{center}
For each affine open set of $X$, we have the following commutative diagram of Poisson homomorphisms
\begin{center}
$\begin{CD}
\mathcal{O}_{\mathcal{Y}}(U)@>>> \mathcal{O}_{\mathcal{X}'}(U)\\
@VVV @VVV\\
\mathcal{O}_{\mathcal{X}''}(U) @>>> \mathcal{O}_{X}(U)
\end{CD}$
\end{center}
which is compatible with localization at each $x\in U$. So we have the following commutative diagram
\begin{center}
$\begin{CD}
\mathcal{O}_{\mathcal{Y}}(U)@>>> \mathcal{O}_{\mathcal{X}'}(U)\times_{\mathcal{O}_X(U)}\mathcal{O}_{\mathcal{X}''}(U)\\
@VVV @VVV\\
\mathcal{O}_{\mathcal{Y},x} @>>> \mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}}\mathcal{O}_{\mathcal{X}'',x}
\end{CD}$
\end{center}
This induces a natural Poisson homomorphism $\mathcal{O}_{\mathcal{Y},x}\to\mathcal{O}_{\mathcal{X}',x} \times_{\mathcal{O}_{\mathcal{X},x}}\mathcal{O}_{\mathcal{X}'',x}$ which is necessarily an isomorphism. Now we define a sheaf map. For each open set $U$, we can identify $\mathcal{O}_{\mathcal{Y}}(U)$ with the set of maps $\alpha:U\to \bigcup_{x\in U} \mathcal{O}_{\mathcal{Y},x}$ locally coming from sections over affine open subsets of $U$. Then the map $\mathcal{O}_{\mathcal{Y}}(U)\to \mathcal{O}_{\bar{\mathcal{X}}}(U)$ induced from $\mathcal{O}_{\mathcal{Y},x}\to\mathcal{O}_{\mathcal{X}',x}\times_{\mathcal{O}_{\mathcal{X},x}}\mathcal{O}_{\mathcal{X}'',x}$ is well defined, and $\mathcal{O}_{\mathcal{Y}}\to \mathcal{O}_{\bar{\mathcal{X}}}$ is an isomorphism since it is an isomorphism at each stalk.
Lastly, since $(X,\Lambda_0)$ is a smooth projective Poisson scheme, $HP^2(X,\Lambda_0)$ is a finite dimensional $k$-vector space. We have the map $\kappa:t_{PDef_{(X,\Lambda_0)}}\to HP^2(X,\Lambda_0)$ from Proposition \ref{3q}, and we show that $\kappa$ is an isomorphism of $k$-vector spaces. We simply note that for $\xi,\eta\in PDef_{(X,\Lambda_0)}(k[\epsilon])$ represented by $(Id+\epsilon p_{ij},\Lambda_0+\epsilon\Lambda_i')$ and $(Id+ \epsilon p_{ij}',\Lambda_0+\epsilon\Lambda_i'')$, the sum $\xi+\eta$ is given by the data $(Id+\epsilon(p_{ij}+p_{ij}'), \Lambda_0+\epsilon(\Lambda_i'+\Lambda_i''))$. So $t_{PDef_{(X,\Lambda_0)}}=PDef_{(X,\Lambda_0)}(k[\epsilon])$ is a finite dimensional $k$-vector space and $(H_3)$ holds. Hence by Schlessinger's criterion (Theorem \ref{sch1}), $PDef_{(X,\Lambda_0)}$ has a miniversal family.
\end{proof}
\begin{lemma}\label{3lm}
Let $(X,\Lambda_0)$ be a smooth projective Poisson scheme with $HP^1(X,\Lambda_0)=0$. Then for any infinitesimal Poisson deformation $(\mathcal{X},\Lambda)$ of $(X,\Lambda_0)$ over any $A\in \bold{Art}$, we have $PAut((\mathcal{X},\Lambda)/(X,\Lambda_0))=\{Id\}$, where $PAut((\mathcal{X},\Lambda)/(X,\Lambda_0))$ denotes the set of Poisson automorphisms of $(\mathcal{X},\Lambda)$ restricting to the identity Poisson automorphism of $(X,\Lambda_0)$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on the dimension of $A$. If $dim_k\,A=1$, then $A=k$ and there is nothing to prove. Assume that the lemma holds for every $A$ with $dim_k\,A\leq n-1$. Let $dim_k\, A=n$ and let $(\mathcal{X},\Lambda)$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$. Assume that the maximal ideal $\mathfrak{m}$ of $A$ satisfies $\mathfrak{m}^{p-1}\ne 0$ and $\mathfrak{m}^p=0$. Choose $0\ne t\in \mathfrak{m}^{p-1}$. Then $A/(t) \in \bold{Art}$ with $dim_k\,A/(t)\leq n-1$, and $0\to (t)\to A\to A/(t)\to 0$ is a small extension. Now let $g:(\mathcal{X},\Lambda)\to (\mathcal{X},\Lambda)$ be a Poisson automorphism restricting to the identity Poisson automorphism of $(X,\Lambda_0)$. Let $\{U_i=Spec(B_i)\}$ be an affine open cover of $X$. Choose isomorphisms $\theta_i:U_i\times Spec({A})\to \mathcal{X}|_{U_i}$, let $\Lambda_i$ be the Poisson structure on $U_i\times Spec(A)$ induced from $\Lambda|_{U_i}$ via $\theta_i$, and let $\theta_{ij}={\theta}_i^{-1}{\theta}_j$, which corresponds to a Poisson homomorphism ${f}_{ij}:(\mathcal{O}_X(U_{ij})\otimes_k {A},{\Lambda}_i)\to (\mathcal{O}_X(U_{ij})\otimes_k {A}, {\Lambda}_j)$. Then $g$ can be described by the data of Poisson automorphisms $g_{i}: (\mathcal{O}_X(U_i)\otimes A,\Lambda_i) \to (\mathcal{O}_X(U_i)\otimes A,\Lambda_i)$ making the following diagram commute
\begin{center}
$\begin{CD}
(\mathcal{O}_X(U_{ij}) \otimes A ,\Lambda_j)@>g_j>> (\mathcal{O}_X(U_{ij})\otimes A,\Lambda_j)\\
@Af_{ij}AA @AAf_{ij}A\\
(\mathcal{O}_X(U_{ij})\otimes A,\Lambda_i)@>g_i>> (\mathcal{O}_X(U_{ij})\otimes A,\Lambda_i)
\end{CD}$
\end{center}
By the induction hypothesis, $g_i$ induces the identity on $\mathcal{O}_X(U_i)\otimes A/(t)$. Hence $g_i$ is of the form $g_i=Id+td_i$, where $d_i\in Der_k(\mathcal{O}_X(U_i),\mathcal{O}_X(U_i))$ with $[\Lambda_0,d_i]=0$ by Lemma \ref{3l}. Hence $\{d_i\}$ defines a class in $HP^1(X,\Lambda_0)$. Since $HP^1(X,\Lambda_0)=0$, we have $d_i=0$. Hence each $g_i$ is the identity, and so $g$ is the identity. This proves the lemma.
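The commutativity of the diagram above also forces the local derivations $d_i$ to agree on overlaps, so that $\{d_i\}$ indeed defines a cocycle; here is the short check, using $t\mathfrak{m}=0$ and $f_{ij}\equiv Id$ modulo $\mathfrak{m}$:
\begin{align*}
0=g_j\circ f_{ij}-f_{ij}\circ g_i=(Id+td_j)\circ f_{ij}-f_{ij}\circ (Id+td_i)=t(d_j\circ f_{ij}-f_{ij}\circ d_i)=t(d_j-d_i)
\end{align*}
on $U_{ij}$, whence $d_i=d_j$ there.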
\end{proof}
\begin{proposition}
Let $(X,\Lambda_0)$ be a projective smooth Poisson scheme with $HP^1(X,\Lambda_0)=0$. Then the functor $PDef_{(X,\Lambda_0)}$ is pro-representable.
\end{proposition}
\begin{proof}
Since $PDef_{(X,\Lambda_0)}$ satisfies $(H_0),(H_1),(H_2),(H_3)$ by Theorem \ref{3thm1}, we will check $(H_4)$ to show that the Poisson deformation functor $PDef_{(X,\Lambda_0)}$ is pro-representable by Theorem \ref{sch2}. Let $0\to (t)\to \tilde{A}\xrightarrow{\mu} A\to 0$ be a small extension in $\bold{Art}$. Let $\xi=(\mathcal{X},\Lambda)\in PDef_{(X,\Lambda_0)}(A)$ be an infinitesimal Poisson deformation of $(X,\Lambda_0)$ over $A$. Let $p:=PDef_{(X,\Lambda_0)}(\mu):PDef_{(X,\Lambda_0)}(\tilde{A})\to PDef_{(X,\Lambda_0)}(A)$ be the map induced from $\mu:\tilde{A}\to A$. We will define a map $G:HP^2(X,\Lambda_0)\times p^{-1}(\xi)\to p^{-1}(\xi)$ which is a group action of $HP^2(X,\Lambda_0)$ on $p^{-1}(\xi)$. Let $\tilde{\xi}=(\tilde{\mathcal{X}},\tilde{\Lambda})\in p^{-1}(\xi)$ be a lifting of $\xi$ over $\tilde{A}$. Let $\{U_i=Spec(B_i)\}$ be an affine open cover of $X$. Choose isomorphisms $\tilde{\theta}_i:U_i\times Spec(\tilde{A})\to \tilde{\mathcal{X}}|_{U_i}$, let $\tilde{\Lambda}_i$ be the Poisson structure on $U_i\times Spec(\tilde{A})$ induced from $\tilde{\Lambda}|_{U_i}$ via $\tilde{\theta}_i$, and let $\tilde{\theta}_{ij}=\tilde{\theta}_i^{-1}\tilde{\theta}_j$, which corresponds to a Poisson homomorphism $\tilde{f}_{ij}:(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i)\to (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j)$. We note that $t\tilde{f}_{ij}:\mathcal{O}_X(U_{ij})\otimes\tilde{A}\to \mathcal{O}_X(U_{ij})\otimes \tilde{A}$ equals $t\,Id$: indeed, $\tilde{f}_{ij}$ is the identity modulo the maximal ideal $\tilde{\mathfrak{m}}$ of $\tilde{A}$ (i.e.\ $\tilde{f}_{ij}(x)-x\in \tilde{\mathfrak{m}}\cdot(\mathcal{O}_X(U_{ij})\otimes\tilde{A})$), and $t\tilde{\mathfrak{m}}=0$ since the extension is small, so $t(\tilde{f}_{ij}-Id)(x)=0$.
Let $(\{\Lambda_i'\}, \{p_{ij}\})\in HP^2(X,\Lambda_0)$, where $\Lambda_i'\in Hom_{B_i}(\wedge^2 \Omega_{B_i/k},B_i)$ and $p_{ij}\in \Gamma(U_{ij},T_X)=Der_k(\mathcal{O}_X(U_{ij}),\mathcal{O}_X(U_{ij}))$. So we have $[\Lambda_0,\Lambda_i']=0$, $\Lambda_j'-\Lambda_i'+[\Lambda_0,p_{ij}]=0$ and $p_{ij}+p_{jk}-p_{ik}=0$. Then we define another lifting $\bar{\xi}$ of $\xi$ from $(\{\Lambda_i'\}, \{ p_{ij}\})$ and $\tilde{\xi}$ by gluing the $U_i\times Spec(\tilde{A})$ equipped with $\tilde{\Lambda}_i+t\Lambda_i'$ via the transition maps $\tilde{f}_{ij}+tp_{ij}:\mathcal{O}_X(U_{ij})\otimes_k \tilde{A} \to \mathcal{O}_X(U_{ij})\otimes_k \tilde{A}$, which are isomorphisms with inverses $\tilde{f}_{ji}-tp_{ij}$. Indeed, $(\tilde{f}_{ji}-tp_{ij})\circ (\tilde{f}_{ij}+tp_{ij}):\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}\to \mathcal{O}_X(U_{ij})\otimes_k\tilde{A}$ equals $Id+t(\tilde{f}_{ji}p_{ij}-p_{ij}\tilde{f}_{ij})=Id+t(p_{ij}-p_{ij})=Id$. The maps $\{\tilde{f}_{ij}+tp_{ij}\}$ satisfy the cocycle condition:
\begin{align*}
(\tilde{f}_{ki}+tp_{ki})\circ (\tilde{f}_{jk}+tp_{jk})\circ (\tilde{f}_{ij}+tp_{ij})=\tilde{f}_{ki}\circ\tilde{f}_{jk}\circ\tilde{f}_{ij}+t(p_{ij}+p_{jk}+p_{ki})=Id
\end{align*}
since $\tilde{f}_{ki}\circ\tilde{f}_{jk}\circ\tilde{f}_{ij}=Id$ and $p_{ij}+p_{jk}+p_{ki}=p_{ij}+p_{jk}-p_{ik}=0$.
Now let's consider the Poisson structures. Since $[\Lambda_0,\Lambda_i']=0$, $t\tilde{\mathfrak{m}}=0$ and $t^2=0$, we have $[\tilde{\Lambda}_i+t\Lambda_i',\tilde{\Lambda}_i+t\Lambda_i']=0$. Hence $\tilde{\Lambda}_i+t\Lambda_i'$ defines a Poisson structure on $\mathcal{O}_X(U_i)\otimes_k \tilde{A}$. We claim that $\tilde{f}_{ij}+tp_{ij}:(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i+t\Lambda_i')\to (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j+t\Lambda_j')$ is a Poisson isomorphism. First we note that
\begin{align*}
&(\Lambda_j'-\Lambda_i'+[\Lambda_0,p_{ij}])(dx\wedge dy)\\
=&\Lambda_{j}'(dx\wedge dy)-\Lambda_{i}'(dx\wedge dy)+\Lambda_0(d(p_{ij}(x))\wedge dy)-\Lambda_0(d(p_{ij}(y))\wedge dx)-p_{ij}(\Lambda_0(dx\wedge dy))=0
\end{align*}
\begin{align*}
&(\tilde{f}_{ij}+tp_{ij})((\tilde{\Lambda}_i+t\Lambda_i')(dx\wedge dy))=(\tilde{f}_{ij}+tp_{ij})(\tilde{\Lambda}_i(dx\wedge dy)+t\Lambda_i'(dx\wedge dy))\\
&=\tilde{f}_{ij}(\tilde{\Lambda}_i(dx\wedge dy))+t\Lambda_i'(dx\wedge dy) +tp_{ij}(\Lambda_0(dx\wedge dy))=\tilde{\Lambda}_j(d\tilde{f}_{ij}(x)\wedge d\tilde{f}_{ij}(y))
+t\Lambda_{j}'(dx\wedge dy)\\&+t\Lambda_0(d(p_{ij}(x))\wedge dy)-t\Lambda_0(d(p_{ij}(y))\wedge dx)-tp_{ij}(\Lambda_0(dx\wedge dy))+tp_{ij}(\Lambda_0(dx\wedge dy))\\
&=\tilde{\Lambda}_j(d\tilde{f}_{ij}(x)\wedge d\tilde{f}_{ij}(y))+t\Lambda_{j}'(dx\wedge dy)+t\Lambda_0(d(p_{ij}(x))\wedge dy)-t\Lambda_0(d(p_{ij}(y))\wedge dx)
\end{align*}
On the other hand,
\begin{align*}
&(\tilde{\Lambda}_j+t\Lambda_j')(d((\tilde{f}_{ij}+tp_{ij})(x))\wedge d((\tilde{f}_{ij}+tp_{ij})(y)))\\
=& \tilde{\Lambda}_j(d((\tilde{f}_{ij}+tp_{ij})(x))\wedge d((\tilde{f}_{ij}+tp_{ij})(y)))
+t\Lambda_j'(d(\tilde{f}_{ij}(x))\wedge d(\tilde{f}_{ij}(y)))\\=& \tilde{\Lambda}_j(d\tilde{f}_{ij}(x)\wedge d\tilde{f}_{ij}(y))+t\Lambda_0(d(p_{ij}(x))\wedge dy)+t\Lambda_0(dx\wedge d(p_{ij}(y)))+t\Lambda_j'(dx\wedge dy)
\end{align*}
Hence the two expressions agree by the antisymmetry of $\Lambda_0$, and so we can define another lifting $\bar{\xi}$ from $\tilde{\xi}$ and ($\{\Lambda_i'\},\{p_{ij}\}$)$\in HP^2(X,\Lambda_0)$. Now we show that the map
\begin{align*}
G:HP^2(X,\Lambda_0)\times p^{-1}(\xi)&\to p^{-1}(\xi)\\
((\{\Lambda_i'\},\{p_{ij}\}),\tilde{\xi})&\mapsto \bar{\xi}
\end{align*}
is well-defined. Let $(\{\Lambda_i''\},\{p_{ij}'\})$ represent the same cohomology class as $(\{\Lambda_i'\},\{p_{ij}\})$ in $HP^2(X,\Lambda_0)$. Then we have to show that the lifting $\bar{\xi}$ defined by $(\{\Lambda_i'\},\{p_{ij}\})$ is equivalent, as an infinitesimal Poisson deformation, to the lifting $\bar{\xi}'$ defined by $(\{\Lambda_i''\},\{p_{ij}'\})$. There exists $\{a_i\}$, where $a_i\in \Gamma(U_i, T_X)$, such that $[\Lambda_0,a_i]=\Lambda_i'-\Lambda_i''$ and $-\delta(\{a_i\})=a_i-a_j=p_{ij}-p_{ij}'$. To show $\bar{\xi}\cong \bar{\xi}'$, it is sufficient to show that the following diagram commutes and that the maps $Id+ta_i$ are Poisson isomorphisms.
\begin{center}
$\begin{CD}
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}, \tilde{\Lambda}_j+t\Lambda_j' )@> Id+ta_j >> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j+t\Lambda_j'')\\
@A \tilde{f}_{ij}+tp_{ij} AA@AA\tilde{f}_{ij}+tp_{ij}' A\\
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A}, \tilde{\Lambda}_i+t\Lambda_i')@> Id+ta_i >> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i+t\Lambda_i'')
\end{CD}$
\end{center}
Indeed, $(Id+ta_j)\circ (\tilde{f}_{ij}+tp_{ij})-(\tilde{f}_{ij}+tp_{ij}')\circ (Id+ta_i)=\tilde{f}_{ij}+tp_{ij}+ta_j-(\tilde{f}_{ij}+ta_i+tp_{ij}')=t(a_j-a_i+p_{ij}-p_{ij}')=0$. Moreover $\tilde{\Lambda}_i+t\Lambda_i'-(\tilde{\Lambda}_i+t\Lambda_i'')=t(\Lambda_i'-\Lambda_i'')$ and $\Lambda_i'-\Lambda_i''-[\Lambda_0,a_i]=0$. So by Lemma \ref{3l}, the maps $\{Id+ta_i\}$ are Poisson isomorphisms.
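For completeness, here is a short check, in the same pattern as the computation for $\tilde{f}_{ij}+tp_{ij}$ above, that $Id+ta_i$ intertwines the two Poisson structures; we use $t\tilde{\mathfrak{m}}=0$ and $t^2=0$:
\begin{align*}
(Id+ta_i)((\tilde{\Lambda}_i+t\Lambda_i')(dx\wedge dy))&=\tilde{\Lambda}_i(dx\wedge dy)+t\Lambda_i'(dx\wedge dy)+ta_i(\Lambda_0(dx\wedge dy)),\\
(\tilde{\Lambda}_i+t\Lambda_i'')(d((Id+ta_i)(x))\wedge d((Id+ta_i)(y)))&=\tilde{\Lambda}_i(dx\wedge dy)+t\Lambda_0(d(a_i(x))\wedge dy)\\
&\quad+t\Lambda_0(dx\wedge d(a_i(y)))+t\Lambda_i''(dx\wedge dy),
\end{align*}
and the difference of the two right-hand sides is $t(\Lambda_i'-\Lambda_i''-[\Lambda_0,a_i])(dx\wedge dy)=0$.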
Next, we show that the group action $HP^2(X,\Lambda_0)\times p^{-1}(\xi)\to p^{-1}(\xi)$ is transitive. Let $\tilde{\xi}\in p^{-1}(\xi)$ be a lifting of $\xi$ as above, and choose an arbitrary lifting $\eta\in p^{-1}(\xi)$ of $\xi$. We have to find $(\{\Lambda_i'\},\{p_{ij}\})\in HP^2(X,\Lambda_0)$ such that $((\{\Lambda_i'\},\{p_{ij}\}),\tilde{\xi})$ is mapped to $\eta$ under the action. Let $\eta$ consist of the data $f_{ij}':(\mathcal{O}_X(U_{ij})\otimes \tilde{A},\tilde{\Lambda}_i')\to (\mathcal{O}_X(U_{ij})\otimes \tilde{A},\tilde{\Lambda}_j')$. Since $\tilde{\xi}$ and $\eta$ both induce $\xi$, we have $f_{ij}'=\tilde{f}_{ij}+tp_{ij}$ and $\tilde{\Lambda}'_i=\tilde{\Lambda}_i+t\Lambda_i'$ for some $p_{ij}\in \Gamma(U_{ij}, T_X)$ and $\Lambda_i'\in Hom_{B_i}(\wedge^2 \Omega_{B_i/k}^1,B_i)$. Then $(\{\Lambda_i'\},\{p_{ij}\})$ defines a cohomology class in $HP^2(X,\Lambda_0)$. So the group action is transitive.
Next, we show that the group action is free. Assume that for a given $v=(\{\Lambda_i'\},\{p_{ij}\})\in HP^2(X,\Lambda_0)$, we have $G(v,\tilde{\xi})=\tilde{\xi}$. We have to show that $v=0 \in HP^2(X,\Lambda_0)$. Let $\tilde{\xi}$ and $G(v,\tilde{\xi})$ be described as above. Then $G(v,\tilde{\xi})=\tilde{\xi}$ implies that we have Poisson isomorphisms $g_i:(\mathcal{O}_X(U_i)\otimes_k \tilde{A},\tilde{\Lambda}_i) \to (\mathcal{O}_X(U_i)\otimes_k \tilde{A},\tilde{\Lambda}_i+t\Lambda_i')$ such that the following diagram is commutative.
\begin{center}
$\begin{CD}
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j)@>g_j>> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_j+t\Lambda_j')\\
@A\tilde{f}_{ij}AA @AA\tilde{f}_{ij}+tp_{ij}A\\
(\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i)@>g_i>> (\mathcal{O}_X(U_{ij})\otimes_k \tilde{A},\tilde{\Lambda}_i+t\Lambda_i')
\end{CD}$
\end{center}
Modulo $(t)$, $g_i$ induces a Poisson automorphism of $\xi=(\mathcal{X},\Lambda)$, which is necessarily the identity by Lemma \ref{3lm}. Hence $g_i$ is of the form $g_i=Id+tq_i$, where $q_i\in Der_k(\mathcal{O}_X(U_i),\mathcal{O}_X(U_i))$ with $-\Lambda_i'-[\Lambda_0,q_i]=0$ by Lemma \ref{3l}. Since the diagram commutes, we have $0=g_j\tilde{f}_{ij}-(\tilde{f}_{ij}+tp_{ij})g_i=(Id+tq_j)\tilde{f}_{ij}-(\tilde{f}_{ij}+tp_{ij})(Id+tq_i)=t(q_j-p_{ij}-q_i)$. Hence $p_{ij}=q_j-q_i$, so $(\{\Lambda_i'\},\{p_{ij}\})$ is a coboundary and $v=0 \in HP^2(X,\Lambda_0)$.
Lastly, we claim that $G$ is exactly the action $t_{PDef_{(X,\Lambda_0)}}\times p^{-1}(\xi)\to p^{-1}(\xi)$ that we defined in section 8.1. We set $F:=PDef_{(X,\Lambda_0)}$ and describe $t_F\times p^{-1}(\xi)\to p^{-1}(\xi)$ by following the definition in section 8.1. We note that here $A'=\tilde{A}$.
We now describe $\alpha^{-1}:t_F \times F(\tilde{A})\to F(k[\epsilon]\times_k \tilde{A})$. An element $v=(\{\Lambda_i'\},\{p_{ij}\})\in HP^2(X,\Lambda_0)$ gives a first order Poisson deformation $(\mathcal{X}_\epsilon,\Lambda_\epsilon)$ over $k[\epsilon]$ by Proposition \ref{3q}. Let $(\mathcal{X},\Lambda)\in F(\tilde{A})$ be a lifting of $\xi \in F(A)$. Let $\{U_i\}$ be an affine open cover of $X$. Then $(\mathcal{X}_{\epsilon},\Lambda_\epsilon)$ can be described by the data $(\{Id+\epsilon p_{ij}\},\{\Lambda_0+\epsilon\Lambda_i'\})$, where $\Lambda_0+\epsilon\Lambda_i'$ is a Poisson structure on $\mathcal{O}_X(U_i)\otimes_k k[\epsilon]$ and $Id+ \epsilon p_{ij}:\mathcal{O}_X(U_{ij})\otimes_k k[\epsilon]\to \mathcal{O}_X(U_{ij})\otimes_k k[\epsilon]$ is a Poisson isomorphism. Similarly $(\mathcal{X},\Lambda)$ can be described by the data $(\{f_{ij}\},\{\Lambda_i\})$, where $\Lambda_i$ is a Poisson structure on $\mathcal{O}_X(U_i)\otimes_k \tilde{A}$ and $f_{ij}:\mathcal{O}_{X}(U_{ij})\otimes \tilde{A} \to \mathcal{O}_{X}(U_{ij})\otimes \tilde{A}$ is a Poisson isomorphism. Then $\alpha^{-1}((\mathcal{X}_\epsilon,\Lambda_\epsilon), (\mathcal{X},\Lambda))$ is the fibered sum $\mathcal{X}_\epsilon\times_k \mathcal{X}$, which can be described as $(\{(Id+\epsilon p_{ij}, f_{ij})\},\{(\Lambda_0+\epsilon \Lambda_i',\Lambda_i)\})$.
We note that
\begin{center}
$\begin{CD}
k[\epsilon]\times_k \tilde{A},(x+y\epsilon,a') @>>> \tilde{A}\times_A \tilde{A}, (a'+yt,a')@>>> \tilde{A},(a'+yt)\\
@.@VVV \\
@. \tilde{A},(a')
\end{CD}$
\end{center}
In the map $t_F\times F(\tilde{A})\to F(\tilde{A})\times_{F(A)} F(\tilde{A}), (v,\xi)\mapsto(\tau(v,\xi),\xi)$, the deformation $\tau(v, (\mathcal{X},\Lambda))=(\mathcal{X}_{\epsilon}\times_k \mathcal{X})\otimes_{k[\epsilon]\times_k \tilde{A}} \tilde{A}$ is induced from $k[\epsilon]\times_k \tilde{A}\to \tilde{A}, (x+y\epsilon,a')\mapsto a'+yt$. Hence $(\mathcal{X}_{\epsilon}\times_k \mathcal{X})\otimes_{k[\epsilon]\times_k \tilde{A}} \tilde{A}$ can be described as $(\{f_{ij}+tp_{ij}\}, \{\Lambda_i+t\Lambda_i'\})$, which is exactly $G(v,(\mathcal{X},\Lambda))$.
\end{proof}
\chapter{Poisson cotangent complex}\label{chapter9}
In this chapter, we extend the construction of Schlessinger and Lichtenbaum's cotangent complex (see \cite{Sch67}) to Poisson algebras.\footnote{My original goal was to generalize Hartshorne's construction of `$T^i$ Functors' presented in \cite{Har10}, pages 18--25, and apply it to deformation problems for not necessarily smooth Poisson schemes. Hartshorne's book \cite{Har10} led me to Lichtenbaum and Schlessinger's original paper \cite{Sch67}. I followed Schlessinger and Lichtenbaum \cite{Sch67} in the context of Poisson algebras in this chapter. However, I could not succeed in globalization; see Remark \ref{3remarks}. There is also a general approach to the cotangent complex for algebras over an operad (see \cite{Lod12}). If our construction turns out to be correct, I expect that our construction $PT^i$ for $i=0,1,2$ is equivalent to the general construction in the language of operads.} We follow their arguments in the Poisson context. In this chapter, every algebra is a $k$-algebra for a field $k$.
\section{Poisson modules and Poisson enveloping algebras}(See \cite{Fre06})
\begin{definition}
Let $A$ be a Poisson algebra. A Poisson module over $A$ is an $A$-module $M$ equipped with a bracket $\{-,-\}:A\otimes_k M\to M$ such that
\begin{enumerate}
\item $\{a,bm\}=\{a,b\}m+b\{a,m\}$
\item $\{ab,m\}=a\{b,m\}+b\{a,m\}$
\item $\{a,\{b,m\}\}=\{\{a,b\},m\}+\{b,\{a,m\}\}$
\end{enumerate}
for $a,b\in A$ and $m\in M$.
\end{definition}
\begin{definition}
Let $M$ and $N$ be Poisson modules of a Poisson algebra $A$. A map $\phi:M\to N$ is a morphism of Poisson modules if
\begin{align*}
\phi(am)&=a\phi(m)\\
\phi(\{a,m\})&=\{a,\phi(m)\}
\end{align*}
for $a\in A$ and $m\in M$. We denote by $Hom_{\mathcal{U}_{Pois(A)}}(M,N)$ or $PHom_A(M,N)$ the $k$-module of morphisms of Poisson modules from $M$ to $N$.
\end{definition}
We will construct the Poisson enveloping algebra $\mathcal{U}_{Pois(A)}$ of a Poisson $k$-algebra $A$. This is an associative $k$-algebra characterized by the following property: ``the category of left $\mathcal{U}_{Pois(A)}$-modules is equivalent to the category of Poisson modules over $A$" (\cite{Fre06}).
\begin{definition}
The Poisson enveloping algebra $\mathcal{U}_{Pois(A)}$ is the associative $A$-algebra with unit generated by the symbols $X_a,a\in A$, subject to the following relations
\begin{enumerate}
\item $X_a\cdot b=\{a,b\}+b\cdot X_a$
\item $X_{ab}=a\cdot X_b+b\cdot X_a$
\item $X_a\cdot X_b=X_{\{a,b\}}+X_b\cdot X_a$
\end{enumerate}
for $a,b\in A$.
\end{definition}
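As a quick illustration of these relations (a sketch, not taken from \cite{Fre06}), note that relation (2) with $a=b=1$ gives
\begin{align*}
X_1=X_{1\cdot 1}=1\cdot X_1+1\cdot X_1=2X_1,
\end{align*}
so $X_1=0$, and hence $\{1,m\}=X_1\cdot m=0$ in any Poisson module $M$. For the toy example $A=k[x,y]$ with $\{x,y\}=1$, relation (1) reads $X_x\cdot y=\{x,y\}+y\cdot X_x=1+y\cdot X_x$, so $X_x$ plays the role of the Hamiltonian derivation $\{x,-\}$, and relation (3) gives $X_x\cdot X_y-X_y\cdot X_x=X_{\{x,y\}}=X_1=0$.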
Let $M$ be a Poisson module over $A$ (so $M$ is a left $\mathcal{U}_{Pois(A)}$-module), then $M$ is also a right $\mathcal{U}_{Pois(A)}$-module defined by
\begin{align*}
m\cdot a:=a\cdot m,\,\,\,\,\,\,\,\,\, m\cdot X_a=-\{a,m\}
\end{align*}
So given a Poisson module $M$ over $A$, by abuse of notation, we define $\{m,a\}:=m\cdot X_a$. Then in practice, we treat the bracket $\{-,-\}$ on $M$ like a bracket of a Poisson algebra.
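As a consistency check (a sketch under the conventions just introduced), the right action is compatible with relation (1) of the enveloping algebra: computing $m\cdot(X_a\cdot b)$ in the two ways offered by $X_a\cdot b=\{a,b\}+b\cdot X_a$,
\begin{align*}
(m\cdot X_a)\cdot b&=-\{a,m\}\cdot b=-b\{a,m\},\\
m\cdot(\{a,b\}+b\cdot X_a)&=\{a,b\}m+(b\cdot m)\cdot X_a=\{a,b\}m-\{a,bm\}=-b\{a,m\},
\end{align*}
where the last equality uses relation (1) in the definition of a Poisson module.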
\begin{definition}
Let $S\to A$ be a homomorphism of Poisson algebras. A Poisson $S$-derivation is a map $d:A\to M$ from a Poisson $k$-algebra $A$ to a Poisson $A$-module $M$ which is an $S$-linear derivation with respect to the multiplication and with respect to the bracket $\{-,-\}$ of $A$. Explicitly, a Poisson $S$-derivation $d$ satisfies the following identities
\begin{align*}
d(sf)&=sd(f)\\
d(fg)&=df\cdot g+f\cdot dg\\
d\{f,g\}&=\{df,g\}+\{f,dg\}=-\{g,df\}+\{f,dg\}
\end{align*}
for $s\in S$ and $f,g\in A$.
We denote by $PDer_S(A,M)$ the $A$-module of Poisson $S$-derivations from $A$ to $M$. The functor $PDer_S(A,-)$ is representable: there exists a representing Poisson module, denoted by $\Omega_{Pois(A)/S}^1$, such that
\begin{align*}
PDer_S(A,M)=Hom_{\mathcal{U}_{Pois(A)}}(\Omega_{\mathcal{U}_{Pois(A)/S}}^1,M)
\end{align*}
To construct $\Omega_{\mathcal{U}_{Pois(A)/S}}^1$, let $F$ be the free left $\mathcal{U}_{Pois(A)}$-module generated by the symbols $df,f\in A$. Let $E$ be the $\mathcal{U}_{Pois(A)}$-submodule of $F$ generated by the elements of the form $ds$ for $s\in S$, $d(f+g)-df-dg$, $d(fg)-df\cdot g-f\cdot dg$ and $d\{f,g\}+X_g\cdot df -X_f\cdot dg$. Then $\Omega_{\mathcal{U}_{Pois(A)/S}}^1$ is defined to be $F/E$. We have a natural map $d:A\to \Omega_{\mathcal{U}_{Pois(A)/S}}^1$. Via this map, the functor $PDer_S(A,-)$ is represented by $\Omega_{\mathcal{U}_{Pois(A)/S}}^1$.
\end{definition}
Let $\rho:B\to C$ be a Poisson homomorphism compatible with $A$; in other words, $A\to B\to C$ is a sequence of Poisson homomorphisms. Then there exists a canonical homomorphism of $\mathcal{U}_{Pois(C)}$-modules
\begin{align*}
\alpha&:\mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}}\Omega_{Pois(B)/A}^1 \to \Omega_{Pois(C)/A}^1
\end{align*}
\begin{proposition}
Let $A\to B$ be a homomorphism of Poisson $k$-algebras. Let $I$ be a Poisson ideal of $B$.
If $C=B/I$, we have an exact sequence of $\mathcal{U}_{Pois(C)}$-modules
\begin{equation}\label{3equ}
I/(I^2\oplus \{I,I\})\xrightarrow{\delta} \mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} \Omega_{Pois(B)/A}^1\to \Omega_{{Pois(C)}/A}^1\to 0
\end{equation}
where $\delta$ is defined as follows: for any $i\in I$, let $\bar{i}$ be the image of $i$ in $I/(I^2\oplus \{I,I\})$; then $\delta(\bar{i})=1\otimes di$.
\end{proposition}
\begin{proof}
We show that $I/(I^2\oplus \{I,I\})$ is a Poisson $B/I$-module, defined by $\{\bar{b},\bar{i}\}=\overline{\{b,i\}}$. Let $b_1,b_2\in B$ with $b_1-b_2\in I$ and $i_1,i_2\in I$ with $i_1-i_2\in I^2\oplus \{I,I\}$. Then $\{b_1,i_1\}-\{b_2,i_2\}=\{b_1,i_1-i_2\}+\{b_1-b_2,i_2\}\in I^2\oplus\{I,I\}$. Hence the bracket is well-defined. We claim that
\begin{align*}
I/(I^2\oplus \{I,I\}) \cong\mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} I
\end{align*}
as $\mathcal{U}_{Pois(C)}$-modules. We define a map $\varphi: I/(I^2\oplus \{I,I\})\to \mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} I$ by $\varphi(\bar{i})=1\otimes i$. First we show that this map is well-defined. Let $i,j\in I$ with $i-j\in I^2\oplus \{I,I\}$. Then $i-j=\sum_n f_n\cdot g_n$, where the products are taken via the $\mathcal{U}_{Pois(B)}$-module structure, $g_n\in I$, and either $f_n\in I$ or $f_n$ is of the form $b\cdot X_a$ with $a\in I$ and $b\in \mathcal{U}_{Pois(B)}$. Since $\bar{f_n}=0$ for $f_n\in I$ and $X_{\bar{a}}=0$ for $a\in I$, we have $1\otimes (i-j)=0$. Hence $\varphi$ is well-defined. Second, we show that $\varphi$ is a $\mathcal{U}_{Pois(C)}$-module homomorphism. Let $c\in \mathcal{U}_{Pois(C)}$. Then there exists $b\in \mathcal{U}_{Pois(B)}$ with $\bar{b}=c$, and $\varphi(c\cdot \bar{i})=\varphi(\bar{b}\cdot \bar{i})=\varphi(\overline{b\cdot i})=1\otimes b\cdot i=\bar{b}\otimes i=\bar{b}(1\otimes i)=\bar{b}\varphi(\bar{i})=c\varphi(\bar{i})$. Lastly we can define an inverse map $\varphi^{-1}:\mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} I\to I/(I^2\oplus \{I,I\})$ by $\varphi^{-1}(c\otimes i)=c\cdot \bar{i}$.
Now we prove the proposition. The sequence in (\ref{3equ}) is equivalent to
\begin{align*}
\mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} I\xrightarrow{\delta} \mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} \Omega_{Pois(B)/A}^1\xrightarrow{\alpha} \Omega_{{Pois(C)}/A}^1\to 0
\end{align*}
where $\delta(\bar{b}\otimes i):=\bar{b}\otimes di$. Since $B\to B/I$ is surjective, $\alpha$ is surjective. Exactness of the sequence in (\ref{3equ}) is equivalent to the exactness of the following sequence
\begin{align*}
0\to Hom_{\mathcal{U}_{Pois(C)}}(\Omega_{Pois(C)/A}^1,N)\to Hom_{\mathcal{U}_{Pois(C)}}(\mathcal{U}_{Pois(C)}\otimes_{\mathcal{U}_{Pois(B)}} \Omega_{Pois(B)/A}^1,N)\\
\to Hom_{\mathcal{U}_{Pois(C)}}(I/(I^2\oplus \{I,I\}),N)
\end{align*}
for any left $\mathcal{U}_{Pois(C)}$-module $N$. Equivalently we have to show that the following sequence is exact.
\begin{align*}
0\to PDer_A(C,N)\to PDer_A(B,N) \xrightarrow{\phi} Hom_{\mathcal{U}_{Pois(B)}}(I,N)
\end{align*}
where for $D\in PDer_A(B,N)$, $\phi(D)$ is defined by restricting $D$ to $I$. If $\phi(D)$ is zero, then $D(I)=0$, so $D$ factors through $B/I$ and hence $D\in PDer_A(C,N)$. This proves the proposition.
\end{proof}
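We also record a short verification, sketched in the notation of the proposition above, that $\delta$ is well-defined, i.e.\ that $d$ kills $I^2\oplus\{I,I\}$ after tensoring: for $i_1,i_2\in I$,
\begin{align*}
1\otimes d(i_1i_2)&=1\otimes(di_1\cdot i_2+i_1\cdot di_2)=\bar{i}_2\otimes di_1+\bar{i}_1\otimes di_2=0,\\
1\otimes d\{i_1,i_2\}&=1\otimes(X_{i_1}\cdot di_2-X_{i_2}\cdot di_1)=X_{\bar{i}_1}\otimes di_2-X_{\bar{i}_2}\otimes di_1=0,
\end{align*}
since $\bar{i}_1=\bar{i}_2=0$ in $C$ and hence $X_{\bar{i}_1}=X_{\bar{i}_2}=0$ in $\mathcal{U}_{Pois(C)}$; the first equality of each line uses the relations defining $\Omega_{Pois(B)/A}^1$.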
\section{Poisson cotangent complex}
\begin{definition}[See $\cite{Umi12}$]
We construct the free Poisson algebra generated by $\{x_i\}$ over $k$. Let $g$ be the free Lie algebra generated by $\{x_i\}$ with the Lie bracket $[-,-]$. The free Poisson algebra generated by $\{x_i\}$ with the Poisson bracket $\{-,-\}$ is the polynomial algebra $k[g]$ with the Poisson bracket determined by $\{x,y\}:=[x,y]$ for $x,y\in g$ and extended by the Leibniz rule. We will denote the free Poisson algebra by $k\{x_i\}$.
\end{definition}
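For instance (a standard example, consistent with the definition above), on two generators we have
\begin{align*}
k\{x_1,x_2\}=k[\,x_1,\;x_2,\;[x_1,x_2],\;[x_1,[x_1,x_2]],\;[x_2,[x_1,x_2]],\;\dots\,],
\end{align*}
the polynomial algebra on a basis of the free Lie algebra on $x_1,x_2$; here $\{x_1,x_2\}=[x_1,x_2]$ is a new polynomial generator rather than an element of $k[x_1,x_2]$, and the bracket of two arbitrary polynomials is then determined by the Leibniz rule.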
\begin{definition}
Let $A$ be a Poisson algebra with the bracket $\{-,-\}_A$ over $k$. Let's consider the free Poisson algebra $P$ generated by $A$ and $\{x_i\}$. We denote by $A\{x_i\}$ the quotient of $P$ by the Poisson ideal generated by $\{a,b\}-\{a,b\}_A$ and $a\cdot b- a*b$ for $a,b\in A$, where $\cdot$ is the multiplication in $P$ and $*$ is the multiplication in $A$. We call $A\{x_i\}$ the free Poisson algebra over $A$ generated by $\{x_i\}$. We also note that $\Omega_{\mathcal{U}_{Pois(A\{x_i\})/A}}^1$ is a free $\mathcal{U}_{Pois(A\{x_i\})}$-module generated by the $dx_i$.
\end{definition}
\begin{remark}
We have the following universal mapping property: let $X=\{x_i\}$, let $A\to B$ be a Poisson homomorphism, and let $j:X\to B$ be a map. Then there exists a unique Poisson homomorphism $u:A\{x_i\}\to B$ such that the following diagram commutes
\begin{center}
\[\begindc{\commdiag}[70]
\obj(0,1)[ee]{$A$}
\obj(1,1)[aa]{$A\{x_i\}$}
\obj(2,1)[bb]{$B$}
\obj(2,0)[cc]{$X$}
\mor{ee}{aa}{}
\mor{aa}{bb}{$u$}
\mor{cc}{aa}{}
\mor{cc}{bb}{$j$}
\enddc\]
\end{center}
\end{remark}
\begin{definition}
Let $A\to B$ be a Poisson homomorphism. By a Poisson extension of $B$ over $A$ we mean an exact sequence
\begin{align*}
(\mathscr{E}):0\to E_2\xrightarrow{e_2} E_1\xrightarrow{e_1}R\xrightarrow{e_0}B\to 0
\end{align*}
where $R$ is a Poisson algebra and $e_0:R\to B$ is a Poisson homomorphism such that $A\to B$ factors through $e_0:R\to B$, $E_1$ and $E_2$ are $\mathcal{U}_{Pois(R)}$-modules, $e_1$ and $e_2$ are $\mathcal{U}_{Pois(R)}$-module homomorphisms, and the following relations hold
\begin{align*}
e_1(x)y=e_1(y)x,\qquad X_{e_1(x)}y+X_{e_1(y)}x=0
\end{align*}
for $x,y\in E_1$. We claim that $E_2$ is a Poisson $B$-module. Indeed, let $I$ be the kernel of $e_0$, which is a Poisson ideal of $R$. We give a Poisson $R/I$-module structure on $E_2$ as follows. First we note that $E_2$ is a $\mathcal{U}_{Pois(R)}$-module, and so a Poisson $R$-module. Let $b\in B$ and $x\in E_2$. We define a Poisson $B$-module structure on $E_2$ by setting $b\cdot x:=r\cdot x$ and $\{b,x\}:=\{r,x\}$, where $r$ is a lifting of $b$ under $e_0:R\to R/I\cong B$. This is well-defined: given two liftings $r_1$ and $r_2$ of $b$ (i.e.\ $r_1-r_2\in I$), choose $y\in E_1$ with $e_1(y)=r_1-r_2$. Then $e_2(r_1x-r_2x)=(r_1-r_2)e_2(x)=e_1(y)e_2(x)=e_1(e_2(x))y=0$, and so $r_1x=r_2x$. On the other hand, $e_2(\{r_1,x\}-\{r_2,x\})=X_{r_1-r_2}e_2(x)=X_{e_1(y)} e_2(x)=-X_{e_1(e_2(x))}y=0$, so $\{r_1,x\}=\{r_2,x\}$. The other properties of a Poisson module follow trivially.
Let $A\to A'\to B'$ be homomorphisms of Poisson algebras, and let $\mathscr{E}'$ be a Poisson extension of $B'$ over $A'$. By a homomorphism $\alpha:\mathscr{E}\to \mathscr{E}'$ of Poisson extensions we mean a collection $(b,\alpha_0,\alpha_1,\alpha_2)$ fitting into the following commutative diagram
\begin{center}
$\begin{CD}
0@>>> E_2'@>e_2'>> E_1'@>e_1'>> R'@>e_0'>> B'@>>> 0\\
@. @AA\alpha_2A @AA\alpha_1A @AA\alpha_0A @AAbA @.\\
0@>>> E_2@>e_2>> E_1@>e_1>> R @>e_0>> B @>>>0
\end{CD}$
\end{center}
where $b$ and $\alpha_0$ are Poisson homomorphisms such that the following diagram of Poisson homomorphisms commutes
\begin{center}
$\begin{CD}
A'@>>> R'@>e_0'>> B'\\
@AAA @AA\alpha_0A@AAbA\\
A@>>> R@>>e_0> B
\end{CD}$
\end{center}
and $\alpha_1$ and $\alpha_2$ are homomorphisms of $\mathcal{U}_{Pois(R)}$-modules, where we regard $E_1'$ and $E_2'$ as $\mathcal{U}_{Pois(R)}$-modules via $\alpha_0$.
\end{definition}
\begin{definition}
Let $\mathscr{E}$ be a Poisson extension of $B$ over $A$. We define a complex $PL^\bullet(\mathscr{E})$ of $\mathcal{U}_{Pois(B)}$-modules, or equivalently Poisson $B$-modules:
\begin{align*}
PL^\bullet(\mathscr{E}): 0\to E_2\xrightarrow{d_2}\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} E_1\xrightarrow{d_1} \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} \Omega_{\mathcal{U}_{Pois(R)/A}}^1\to 0
\end{align*}
where $d_2$ is induced from $e_2$, and $d_1$ is defined in the following way. Let $I=Ker(e_0)$ (the Poisson ideal of $R$ defining $B$). We have the canonical map $d:R\to \Omega_{\mathcal{U}_{Pois(R)}/A}^1$; restricting $d$ to $I$ gives $d:I\to \Omega_{\mathcal{U}_{Pois(R)/A}}^1$, and tensoring with $\mathcal{U}_{Pois(B)}$ gives $d:I/(I^2\oplus \{I,I\})\cong \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} I\to \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} \Omega_{\mathcal{U}_{Pois(R)/A}}^1$. We define $d_1=d\circ(\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} e_1)$. This is well-defined since $im(e_1)=I$. Moreover $PL^\bullet(\mathscr{E})$ is a complex since $e_1\circ e_2=0$.
\end{definition}
\begin{remark}
Every Poisson extension of $B$ over $A$ is obtained in the following way. Choose a surjection $e_0:R\to B$ such that $A\to R\to B$ are Poisson homomorphisms. Let $I=Ker\,e_0$, a Poisson ideal of $R$. Then choose an exact sequence of $\mathcal{U}_{Pois(R)}$-modules $0\to U\xrightarrow{i} F\xrightarrow{j} I\to 0$. Let $U_0$ be the $\mathcal{U}_{Pois(R)}$-submodule of $F$ generated by the elements $j(x)y-j(y)x$ and $X_{j(x)}y+X_{j(y)}x$, where $x,y\in F$. Then $j(U_0)=0$ since $j(x)j(y)-j(y)j(x)=0$ and $X_{j(x)}j(y)+X_{j(y)}j(x)=\{j(x),j(y)\}+\{j(y),j(x)\}=0$. Hence $U_0$ is also a submodule of $U$. We take $e_2:U/U_0\to F/U_0$ and $e_1:F/U_0\to R$, the latter being well-defined since $j(U_0)=0$. Then $0\to U/U_0\to F/U_0\to R\to B\to 0$ is a Poisson extension of $B$ over $A$. Conversely, given a Poisson extension $(\mathscr{E}):0\to E_2\xrightarrow{e_2} E_1\xrightarrow{e_1}R\xrightarrow{e_0}B\to 0$, we have $U_0=0$.
\end{remark}
\begin{definition}[Free Poisson extension]
A Poisson extension of $B$ over $A$ of the form $0\to U/U_0\to F/U_0\to R\to B\to 0$, where $R$ is a free Poisson algebra over $A$ and $F$ is a free $\mathcal{U}_{Pois(R)}$-module, is called a free Poisson extension of $B$ over $A$.
\end{definition}
\begin{remark}
For a free Poisson extension of $B$ over $A$, $\mathscr{E}:0\to U/U_0\to F/U_0\xrightarrow{j} R=A\{x_i\}\xrightarrow{e_0} B\to 0$, we have $\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} (F/U_0) \cong \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}}F$. Hence $PL^1(\mathscr{E})$ is free. Indeed, let's consider the natural map $\alpha:\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} F\to \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} F/U_0$, defined by $\sum b_i\otimes f_i\mapsto \sum b_i\otimes \bar{f}_i$, which is surjective. Let $\sum b_i\otimes \bar{f}_i=0$. Since $\mathcal{U}(e_0):\mathcal{U}_{Pois(R)}\to \mathcal{U}_{Pois(B)}$ is surjective, we have $\sum b_i\otimes \bar{f}_i=\sum 1\otimes a_i\bar{f}_i=0$, where $\mathcal{U}(e_0)(a_i)=b_i$. Hence $\sum a_i\bar{f}_i=0$, so $\sum a_if_i\in U_0$. Hence $\sum_i a_if_i= \sum_j a_j' g_j$, where each $g_j$ is of the form $j(x)y-j(y)x$ or $X_{j(x)}y+X_{j(y)}x$ with $x,y \in F$. Let $t\in \mathcal{U}_{Pois(R)}$. Since $1\otimes t(j(x)y-j(y)x)=\mathcal{U}(e_0)(t)e_0(j(x))\otimes y -\mathcal{U}(e_0)(t)e_0(j(y))\otimes x=0$ and $1 \otimes t(X_{j(x)}y+X_{j(y)}x)= \mathcal{U}(e_0)(t)X_{e_0(j(x))}\otimes y+\mathcal{U}(e_0)(t)X_{e_0(j(y))} \otimes x=0$, we have $\sum 1\otimes a_if_i=\sum 1\otimes a_j'g_j=0$. Hence $\alpha$ is an isomorphism.
\end{remark}
Consider the following commutative diagram of Poisson homomorphisms,
\begin{center}
$\begin{CD}
B@>b>> B'\\
@AAA @AAA\\
A@>a>> A'
\end{CD}$
\end{center}
Let $\mathscr{E}$ be a free Poisson extension of $B$ over $A$ and let $\mathscr{E}'$ be an arbitrary Poisson extension of $B'$ over $A'$. Then there exists a homomorphism $\alpha: \mathscr{E}\to \mathscr{E}'$ extending $b$.
\begin{center}
$\begin{CD}
0@>>> E_2'@>e_2'>> E_1'@>e_1'>> R'@>e_0'>> B'@>>> 0\\
@. @AA\alpha_2A @AA\alpha_1A @AA\alpha_0A @AAbA @.\\
0@>>> U/U_0@>e_2>> F/U_0=(\oplus_k \mathcal{U}_{Pois(R)})/U_0@>e_1=j>> R=A\{x_i\} @>e_0>> B @>>>0
\end{CD}$
\end{center}
where $\alpha_0,\alpha_1$ and $\alpha_2$ are defined in the following way. For $\alpha_0$, we send $x_i$ to an arbitrary lifting of $b(e_0(x_i))$. Let $\{v_k\}$ be the canonical basis of $F$ (i.e.\ $v_k$ has $1$ in the $k$-th component and $0$ in the other components). For $\alpha_1$, first we define a map $\alpha_1':F\to E_1'$ by sending $v_k$ to an arbitrary lifting $w_k'$ of $\alpha_0(e_1(v_k))$, so that $e_1'(w_k')=\alpha_0(e_1(v_k))=\alpha_0(j(v_k)).$ We show that $\alpha_1'(U_0)=0$, so that $\alpha_1$ is well-defined. Indeed, we claim that $\alpha_1'(j(x)y-j(y)x)=\alpha_0(j(x))\alpha_1'(y)-\alpha_0(j(y))\alpha_1'(x)= 0$. Let $x=\sum_i a_iv_i$ and $y=\sum_kb_k v_k$. Then $j(x)y-j(y)x=\sum_{i,k}a_ij(v_i)b_kv_k-b_kj(v_k)a_iv_i$. Hence $\alpha_1'(j(x)y-j(y)x)=\sum_{i,k}\alpha_0(a_i)\alpha_0(j(v_i))\alpha_0(b_k)\alpha_1'(v_k)-\alpha_0(b_k)\alpha_0(j(v_k))\alpha_0(a_i)\alpha_1'(v_i)$. It is sufficient to show that
\begin{align*}
\alpha_0(a_i)\alpha_0(j(v_i))\alpha_0(b_k)\alpha_1'(v_k)-\alpha_0(b_k)\alpha_0(j(v_k))\alpha_0(a_i)\alpha_1'(v_i)=0.
\end{align*}
Since $e_1'(\alpha_0(b_k)\alpha_1'(v_k))=\alpha_0(b_k)\alpha_0(j(v_k))$ and $e_1'(\alpha_0(a_i)\alpha_1'(v_i))=\alpha_0(a_i)\alpha_0(j(v_i))$, the relation $e_1'(u)v=e_1'(v)u$ in $\mathscr{E}'$ gives the claim. On the other hand, we claim that $\alpha_1'(X_{j(x)}y+X_{j(y)}x)=0$. Indeed, $X_{j(x)}y+X_{j(y)}x=\sum_{i,k} X_{a_ij(v_i)}b_kv_k+X_{b_kj(v_k)}a_iv_i$. So we have $\alpha_1'(X_{j(x)}y+X_{j(y)}x)=\sum_{i,k} X_{\alpha_0(a_i)\alpha_0(j(v_i))}\alpha_0(b_k)\alpha_1'(v_k)+X_{\alpha_0(b_k)\alpha_0(j(v_k))}\alpha_0(a_i)\alpha_1'(v_i)$. So it is sufficient to show that
\begin{align*}
X_{\alpha_0(a_i)\alpha_0(j(v_i))}\alpha_0(b_k)\alpha_1'(v_k)+X_{\alpha_0(b_k)\alpha_0(j(v_k))}\alpha_0(a_i)\alpha_1'(v_i)=0.
\end{align*}
Since $e_1'(\alpha_0(b_k)\alpha_1'(v_k))=\alpha_0(b_k)\alpha_0(j(v_k))$ and $e_1'(\alpha_0(a_i)\alpha_1'(v_i))=\alpha_0(a_i)\alpha_0(j(v_i))$, we get the claim. For $\alpha_2$, we simply see that $U/U_0$ is sent to $E_2'$ via $\alpha_1$.
Next we claim that we have a homomorphism $\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(B)}}PL^\bullet(\mathscr{E})\to PL^\bullet(\mathscr{E}')$.
\begin{center}
\tiny{$\begin{CD}
0@>>>E_2'@>1\otimes d_2>>\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} E_1'@>1\otimes d_1>>\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} \Omega_{\mathcal{U}_{Pois(R')/A'}}^1@>>> 0\\
@. @AA\alpha_2'A @AA\alpha_1'A @AA\alpha_0'A \\
0@>>>\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(B)}} U/U_0@>1\otimes d_2>>\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} F/U_0@>1\otimes d_1>>\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} \Omega_{\mathcal{U}_{Pois(R)/A}}^1@>>> 0\end{CD}$}
\end{center}
where $\alpha_2'$ and $\alpha_1'$ are canonically induced from $\alpha_2$ and $\alpha_1$. For $\alpha_0'$, we note that we have a canonical commutative diagram
\begin{center}
$\begin{CD}
\Omega_{\mathcal{U}_{Pois(R)/A}}^1@>>> \Omega_{\mathcal{U}_{Pois(R')/A'}}^1\\
@AdAA @AAd'A\\
R@>\alpha_0>> R'
\end{CD}$
\end{center}
where $d$ and $d'$ are the canonical maps. Let $I'=Ker(e_0')$ and $I=Ker(e_0)$. Then $\alpha_0(I)\subset I'$. Hence we have
\begin{center}
$\begin{CD}
\Omega_{\mathcal{U}_{Pois(R)/A}}^1@>>> \Omega_{\mathcal{U}_{Pois(R')/A'}}^1\\
@AdAA @AAd'A\\
I@>\alpha_0>> I'\\
@AAA @AAA\\
F/U_0@>\alpha_1>> E_1'
\end{CD}$
\end{center}
By tensoring $\mathcal{U}_{Pois(B')}$, we get $\alpha_0'$.
\begin{definition}
A complex of the form $PL^\bullet(\mathscr{E})$, where $\mathscr{E}$ is a free Poisson extension of $B$ over $A$, is called a Poisson cotangent complex of $B$ over $A$.
\end{definition}
\begin{definition}
We say that a Poisson homomorphism $A\to R$ has property $(L)$ if the following condition holds: let $A\to S$ be a Poisson homomorphism and $u:M\to S$ a homomorphism of $\mathcal{U}_{Pois(S)}$-modules such that $u(x)y=u(y)x$ and $X_{u(x)}y+X_{u(y)}x=0$ for $x,y\in M$. Then for any Poisson homomorphisms $f,g:R\to S$ such that the following diagram commutes and $Im(f-g)\subset Im(u)$, there exists a Poisson biderivation $\lambda:R\to M$ such that $u\circ \lambda =f-g$.
\begin{center}
\[\begindc{\commdiag}[5]
\obj(0,10)[aa]{$M$}
\obj(10,10)[bb]{$S$}
\obj(10,0)[cc]{$R$}
\mor{aa}{bb}{$u$}
\mor{cc}{aa}{$\lambda$}
\mor(9,0)(9,10){$f$}
\mor(11,0)(11,10){$g$}[\atright,\solidarrow]
\enddc\]
\end{center}
Here a Poisson biderivation means $\lambda(xy)=\lambda(x)g(y)+f(x)\lambda(y)$ and $\lambda(\{x,y\})=\{\lambda(x),g(y)\}+\{f(x),\lambda(y)\}$. Recall that $M$ is a $(\mathcal{U}_{Pois(S)}-\mathcal{U}_{Pois(S)})$-bimodule, where the right-module structure is defined by $m\cdot s:=s\cdot m$ and $m\cdot X_s:=-X_s\cdot m$ for $s\in S$.
\end{definition}
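As a consistency check (not part of the definition), the biderivation identity on products is exactly what makes $u\circ\lambda=f-g$ compatible with multiplication, using that $u$ is $\mathcal{U}_{Pois(S)}$-linear:
\begin{align*}
u(\lambda(xy)) &= u\bigl(\lambda(x)g(y)+f(x)\lambda(y)\bigr)
               = u(\lambda(x))g(y)+f(x)u(\lambda(y))\\
               &= (f-g)(x)g(y)+f(x)(f-g)(y)
               = f(x)f(y)-g(x)g(y) = (f-g)(xy).
\end{align*}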
\begin{proposition}
Consider the following commutative diagram of Poisson homomorphisms,
\begin{center}
$\begin{CD}
B@>b>> B'\\
@AAA @AAA\\
A@>a>>A'
\end{CD}$
\end{center}
Let $\mathscr{E}$ be a Poisson extension of $B$ over $A$ and $\mathscr{E}'$ be a Poisson extension of $B'$ over $A'$. Let $\alpha,\beta:\mathscr{E}\to\mathscr{E}'$ be homomorphisms extending $b$:
\begin{center}
\[\begindc{\commdiag}[5]
\obj(0,10)[aa]{$0$}
\obj(10,10)[bb]{$E_2'$}
\obj(20,10)[cc]{$E_1'$}
\obj(30,10)[dd]{$R'$}
\obj(40,10)[ee]{$B'$}
\obj(50,10)[ff]{$0$}
\obj(0,0)[gg]{$0$}
\obj(10,0)[hh]{$E_2$}
\obj(20,0)[ii]{$E_1$}
\obj(30,0)[jj]{$R$}
\obj(40,0)[kk]{$B$}
\obj(50,0)[ll]{$0$}
\mor{aa}{bb}{}
\mor{bb}{cc}{$e_2'$}
\mor{cc}{dd}{$e_1'$}
\mor{dd}{ee}{$e_0'$}
\mor{ee}{ff}{}
\mor{gg}{hh}{}
\mor{hh}{ii}{$e_2$}
\mor{ii}{jj}{$e_1$}
\mor{jj}{kk}{$e_0$}
\mor{kk}{ll}{}
\mor(9,0)(9,10){$\beta_2$}
\mor(11,0)(11,10){$\alpha_2$}[\atright,\solidarrow]
\mor(19,0)(19,10){$\beta_1$}
\mor(21,0)(21,10){$\alpha_1$}[\atright,\solidarrow]
\mor(29,0)(29,10){$\beta_0$}
\mor(31,0)(31,10){$\alpha_0$}[\atright,\solidarrow]
\mor(40,0)(40,10){$b$}
\enddc\]
\end{center}
If $R$ has property $(L)$, then $\bar{\alpha}$ and $\bar{\beta}$ are homotopic maps of $\mathcal{U}_{Pois(B')} \otimes_{\mathcal{U}_{Pois(B)}} PL^\bullet(\mathscr{E})\to PL^{\bullet}(\mathscr{E}')$.
\end{proposition}
\begin{proof}
There exists a Poisson biderivation $\lambda:R\to E_1'$ such that $e_1'\circ \lambda=\beta_0-\alpha_0$. Let $\theta:E_1\to E_1'$ be the map $\theta=(\beta_1-\alpha_1)-\lambda\circ e_1$. We note that $e_1'\circ \theta=e_1'\circ (\beta_1-\alpha_1-\lambda\circ e_1)=(\beta_0-\alpha_0)\circ e_1-(\beta_0-\alpha_0)\circ e_1=0$. Thus $Im(\theta)$ is contained in the $\mathcal{U}_{Pois(B')}$-module $Im(e_2')\cong E_2'$. Moreover $(e_0'\circ \alpha_0(r)-e_0'\circ \beta_0(r))x=(b\circ e_0(r)-b\circ e_0(r))x=0 $ and $\{e_0'\circ\alpha_0(r)-e_0'\circ \beta_0(r),x\}=0$ for $x\in E_2'$. Hence on $E_2'$ (and so on $Im(\theta)$) as a $\mathcal{U}_{Pois(R)}$-module, the actions of $R$ via $\alpha_0$ and via $\beta_0$ coincide. We claim that $\theta$ is Poisson $R$-linear, or equivalently $\mathcal{U}_{Pois(R)}$-linear. In other words, $\theta(rx)=\beta_0(r)\theta(x)=\alpha_0(r)\theta(x)$ and $\theta(\{r,x\})=\{\beta_0(r),\theta(x)\}=\{\alpha_0(r),\theta(x)\}$ for $r\in R$ and $x\in E_1$. Indeed,
\begin{align*}
\theta(rx)&=\beta_0(r)\beta_1(x)-\alpha_0(r)\alpha_1(x)-\lambda(re_1(x))\\
&=\beta_0(r)\beta_1(x)-\alpha_0(r)\alpha_1(x)-\lambda(r)\alpha_0(e_1(x))-\beta_0(r)\lambda(e_1(x))\\
\beta_0(r)\theta(x)&=\beta_0(r)\beta_1(x)-\beta_0(r)\alpha_1(x)-\beta_0(r)\lambda(e_1(x)) \\
\theta(rx)-\beta_0(r)\theta(x)&= (\beta_0(r)-\alpha_0(r))\alpha_1(x) -\lambda(r)\alpha_0(e_1(x))=e_1'(\lambda(r))\alpha_1(x)-\lambda(r)e_1'(\alpha_1(x))=0
\end{align*}
On the other hand,
\begin{align*}
\theta(\{r,x\})&=\{\beta_0(r),\beta_1(x)\}-\{\alpha_0(r),\alpha_1(x)\}-\lambda(\{r,e_1(x)\})\\
&=\{\beta_0(r),\beta_1(x)\}-\{\alpha_0(r),\alpha_1(x)\}-\{\lambda r,\alpha_0(e_1(x))\}-\{\beta_0(r),\lambda(e_1(x))\}\\
\{\beta_0(r), \theta(x)\}&=\{\beta_0(r),\beta_1(x)\}-\{\beta_0(r),\alpha_1(x)\} -\{\beta_0(r), \lambda(e_1(x))\}\\
\theta(\{r,x\})-\{\beta_0(r),\theta(x)\}&=\{\beta_0(r)-\alpha_0(r),\alpha_1(x)\}-\{\lambda r,\alpha_0(e_1(x))\}=X_{e_1'(\lambda r)}\alpha_1(x)+X_{e_1'(\alpha_1(x))}\lambda r =0
\end{align*}
Note that the Poisson biderivation $\lambda:R\to E_1'$ induces a Poisson derivation
\begin{align*}
1\otimes \lambda:R\to \mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} E_1'
\end{align*}
since the actions induced by $\alpha_0$ and $\beta_0$ coincide. Then by the universal mapping property of $\Omega_{\mathcal{U}_{Pois(R)/A}}^1$, there is a $\bar{\lambda}:\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}}\Omega_{\mathcal{U}_{Pois(R)/A}}^1\to \mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} E_1'$ such that $\bar{\lambda}(b\otimes dx)=b\otimes \lambda x$. On the other hand, the Poisson $R$-module map $\theta:E_1\to Im (e_2')\cong E_2'$ induces a $\bar{\theta}:\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} E_1\to E_2'$
\begin{center}
\tiny{
\[\begindc{\commdiag}[170]
\obj(0,1)[aa]{$E_2'$}
\obj(1,1)[bb]{$\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} E_1'$}
\obj(2,1)[cc]{$\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R')}} \Omega_{\mathcal{U}_{Pois(R')/A}}^1$}
\obj(0,0)[dd]{$\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} E_2$}
\obj(1,0)[ee]{$\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} E_1$}
\obj(2,0)[ff]{$\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(R)}} \Omega_{\mathcal{U}_{Pois(R)/A}}^1$}
\mor{aa}{bb}{$e_2'$}
\mor{ff}{bb}{$\bar{\lambda}$}
\mor{ee}{aa}{$\bar{\theta}$}
\mor{dd}{aa}{$\bar{\beta}_2-\bar{\alpha}_2$}
\mor{ee}{bb}{$\bar{\beta}_1-\bar{\alpha}_1$}
\mor{ff}{cc}{$\bar{\beta}_0-\bar{\alpha}_0$}
\mor{dd}{ee}{$1\otimes e_2$}
\mor{bb}{cc}{$f':=1\otimes d'\circ e_1'$}
\mor{ee}{ff}{$f:=1\otimes d\circ e_1$}
\enddc\]}
\end{center}
where $d:R\to \Omega_{\mathcal{U}_{Pois(R)/A}}^1$ and $d':R'\to \Omega_{\mathcal{U}_{Pois(R')/A}}^1$ are the canonical maps. Now we claim that $\bar{\beta}_2-\bar{\alpha}_2=\bar{\theta}\circ(1\otimes e_2)$, $\bar{\beta}_1-\bar{\alpha}_1=e_2'\circ \bar{\theta}+\bar{\lambda}\circ f$ and $\bar{\beta}_0-\bar{\alpha}_0=f'\circ \bar{\lambda}$. For $\bar{\beta}_2-\bar{\alpha}_2=\bar{\theta} \circ (1\otimes e_2)$, we note that for $x\in E_2$, $\theta(e_2(x))=(\beta_1-\alpha_1)(e_2(x))-\lambda(e_1(e_2(x)))=e_2'\circ (\beta_2-\alpha_2)(x)$. For $\bar{\beta}_0-\bar{\alpha}_0=f'\circ \bar{\lambda}$, we note that since $e_1'\circ \lambda=\beta_0-\alpha_0$, we have $f'\circ \bar{\lambda}(b\otimes dx)=f'(b\otimes \lambda x)=b\otimes (d'\circ e_1'(\lambda x))=b\otimes d'((\beta_0-\alpha_0)(x))=(\bar{\beta}_0-\bar{\alpha}_0)(b\otimes dx)$. For $\bar{\beta}_1-\bar{\alpha}_1=e_2'\circ \bar{\theta}+\bar{\lambda}\circ f$, we note that for $x\in E_1$, $\theta(x)=(\beta_1-\alpha_1)(x)- \lambda(e_1(x))$.
\end{proof}
\begin{lemma}
A free Poisson algebra $A\{x_i\}$ over a Poisson algebra $A$ generated by $\{x_i\}$ satisfies the property $(L)$.
\end{lemma}
\begin{proof}
Let $A\to S$ be a Poisson homomorphism of Poisson algebras and $u:M\to S$ be a homomorphism of $\mathcal{U}_{Pois(S)}$-modules such that $u(x)y=u(y)x$ and $X_{u(x)}y+X_{u(y)}x=0$ for all $x,y\in M$. Let $f,g:R\to S$ be Poisson homomorphisms compatible with $A$ such that $Im(f-g)\subset Im(u)$. We would like to define a Poisson biderivation $d:R\to M$ such that $u\circ d=f-g$. Let $R=A\{x_i\}$. Since $Im(f-g)\subset Im(u)$, we may choose $d(x_i)$ in $M$ satisfying $u(d(x_i))=f(x_i)-g(x_i)$, and we set $d(a)=0$ for $a\in A$. In general we define $d$ in the following way: for example
\begin{align*}
d(x_1x_2[x_3, ax_4x_5]):=d(x_1)g(x_2)[g(x_3),g(a)g(x_4)g(x_5)]+f(x_1)d(x_2)[g(x_3),g(a)g(x_4)g(x_5)]\\
+f(x_1)f(x_2)[d(x_3),g(a)g(x_4)g(x_5)]+f(x_1)f(x_2)[f(x_3),d(a)g(x_4)g(x_5)]\\
+f(x_1)f(x_2)[f(x_3),f(a)d(x_4)g(x_5)]+f(x_1)f(x_2)[f(x_3),f(a)f(x_4)d(x_5)]
\end{align*}
where $[-,-]$ is the Poisson bracket on $A\{x_i\}$. We will show that this is well-defined, and so by definition $d$ is a Poisson biderivation. Let $L$ be the free Lie algebra generated by $A$ and the $x_i$. First we show that $d$ is well-defined on $L$. We simply define $d$ on the free algebra generated by $A$ and the $x_i$ by the above relation, where the operation is $[-,-]$. We show that $d([x,x])=0$ and $d([x,[y,z]]+[y,[z,x]]+[z,[x,y]])=0$ where $x,y,z\in A\cup \{x_i\}$. Then $d$ is well-defined on $L$. Indeed, for $[x,x]$, we have to show $[dx,g(x)]+[f(x),dx]=0$. Since $X_{u(dx)}dx+X_{u(dx)}dx=0$, we have $[f(x)-g(x),dx]+[f(x)-g(x),dx]=0$. Hence we have $[f(x)-g(x),dx]=0$. So $[dx,g(x)]+[f(x),dx]=0$.
For $[x,[y,z]]+[y,[z,x]]+[z,[x,y]]$, we want to show that $d([x,[y,z]]+[y,[z,x]]+[z,[x,y]])=0$, equivalently,
\begin{align*}
&[dx,[g(y),g(z)]]+[f(x),[dy,g(z)]]+[f(x),[f(y),dz]]\\
+&[dy,[g(z),g(x)]]+[f(y),[dz,g(x)]]+[f(y),[f(z),dx]]\\
+&[dz,[g(x),g(y)]]+[f(z),[dx,g(y)]]+[f(z),[f(x),dy]]=0
\end{align*}
For $[x,[y,z]]$, we note that $X_{u(d(x))}d([y,z])+X_{u(d([y,z]))}d(x)=[f(x)-g(x),[dy,g(z)]+[f(y),dz]]+[[f(y)-g(y),g(z)],dx]+[[f(y),f(z)-g(z)],dx]=[f(x),[dy,g(z)]]-[g(x),[dy,g(z)]]+[f(x),[f(y),dz]]-[g(x),[f(y),dz]]+[[f(y),g(z)],dx]-[[g(y),g(z)],dx]+[[f(y),f(z)],dx]-[[f(y),g(z)],dx]=[f(x),[dy,g(z)]]-[g(x),[dy,g(z)]]+[f(x),[f(y),dz]]-[g(x),[f(y),dz]]-[[g(y),g(z)],dx]+[[f(y),f(z)],dx]$. Hence we get the following. (We do not use this relation for $[x,[y,z]]$ itself; it is used for the symmetric arguments $[y,[z,x]]$ and $[z,[x,y]]$ below.)
\begin{align*}
X_{u(d(x))}d([y,z])+X_{u(d([y,z]))}d(x)&=[f(x),[dy,g(z)]]-[g(x),[dy,g(z)]]\\
&+[f(x),[f(y),dz]]-[g(x),[f(y),dz]]\\
&-[[g(y),g(z)],dx]+[[f(y),f(z)],dx]=0
\end{align*}
For $[y,[z,x]]$, by symmetry we have
\begin{align*}
X_{u(d(y))}d([z,x])+X_{u(d([z,x]))}d(y)&=[f(y),[dz,g(x)]]-[g(y),[dz,g(x)]]\\
&+[f(y),[f(z),dx]]-[g(y),[f(z),dx]]\\
&-[[g(z),g(x)],dy]+[[f(z),f(x)],dy]=0
\end{align*}
For $[z,[x,y]]$, by symmetry we have
\begin{align*}
X_{u(d(z))}d([x,y])+X_{u(d([x,y]))}d(z)&=[f(z),[dx,g(y)]]-[g(z),[dx,g(y)]]\\
&+[f(z),[f(x),dy]]-[g(z),[f(x),dy]]\\
&-[[g(x),g(y)],dz]+[[f(x),f(y)],dz]=0
\end{align*}
Then we have
\begin{align*}
&[dx,[g(y),g(z)]]+[f(x),[dy,g(z)]]+[f(x),[f(y),dz]]\\
+&[dy,[g(z),g(x)]]+[f(y),[dz,g(x)]]+[f(y),[f(z),dx]]\\
+&[dz,[g(x),g(y)]]+[f(z),[dx,g(y)]]+[f(z),[f(x),dy]]\\
=&[dx,[g(y),g(z)]]+[f(x),[dy,g(z)]]+[f(x),[f(y),dz]]\\
+&[g(y),[dz,g(x)]]+[g(y),[f(z),dx]]-[[f(z),f(x)],dy]\,\,\,\text{from}\,\,\, X_{u(d(y))}d([z,x])+X_{u(d([z,x]))}d(y)=0\\
+&[g(z),[dx,g(y)]]+[g(z),[f(x),dy]]-[[f(x),f(y)],dz]\,\,\,\text{from}\,\,\,X_{u(d(z))}d([x,y])+X_{u(d([x,y]))}d(z)=0\\
=&-[g(y),[g(z),dx]]+[g(z),[g(y),dx]]-[dy,[g(z),f(x)]]-[g(z),[f(x),dy]]-[f(y),[dz,f(x)]]-[dz,[f(x),f(y)]]\\
+&[g(y),[dz,g(x)]]+[g(y),[f(z),dx]]-[[f(z),f(x)],dy]\\
+&[g(z),[dx,g(y)]]+[g(z),[f(x),dy]]-[[f(x),f(y)],dz]\\
=&-[g(y),[g(z),dx]]-[dy,[g(z),f(x)]]-[f(y),[dz,f(x)]]\\
+&[g(y),[dz,g(x)]]+[g(y),[f(z),dx]]-[[f(z),f(x)],dy]\\
=&[g(y),[f(z)-g(z),dx]]+[dy,[f(z)-g(z),f(x)]]-[f(y),[dz,f(x)]]+[g(y),[dz,g(x)]]\\
=&[g(y),[f(z)-g(z),dx]]-[f(z)-g(z),[f(x),dy]]-[f(x),[dy,f(z)-g(z)]]-[f(y),[dz,f(x)]]+[g(y),[dz,g(x)]]
\end{align*}
On the other hand, from $[g(y), X_{u(dz)}dx+X_{u(dx)}dz]=0$, we have
\begin{align*}
[g(y),[f(z)-g(z),dx]]+[g(y),[f(x)-g(x),dz]]=0
\end{align*}
From $[f(x),X_{u(dz)}dy+X_{u(dy)}dz]=0$, we have
\begin{align*}
[f(x),[f(z)-g(z),dy]]+[f(x),[f(y)-g(y),dz]]=0
\end{align*}
Hence we have
\begin{align*}
&[g(y),[f(z)-g(z),dx]]-[f(z)-g(z),[f(x),dy]]-[f(x),[dy,f(z)-g(z)]]-[f(y),[dz,f(x)]]+[g(y),[dz,g(x)]]\\
=&-[g(y),[f(x)-g(x),dz]]-[f(z)-g(z),[f(x),dy]]-[f(x),[f(y)-g(y),dz]]\\
&-[f(y),[dz,f(x)]]+[g(y),[dz,g(x)]]\\
=&-[g(y),[f(x),dz]]-[f(z)-g(z),[f(x),dy]]-[f(x),[f(y)-g(y),dz]]-[f(y),[dz,f(x)]]\\
=&-[g(y),[f(x),dz]]-[f(z)-g(z),[f(x),dy]]-[f(x),[f(y),dz]]+[f(x),[g(y),dz]]-[f(y),[dz,f(x)]]\\
=&[[f(x),g(y)],dz]-[f(z)-g(z),[f(x),dy]]-[[f(x),f(y)],dz]
\end{align*}
From $X_{u(dz)}[f(x),dy]+X_{u([f(x),dy])}dz=0$, we have
\begin{align*}
&[f(z)-g(z),[f(x),dy]]+[[f(x),f(y)-g(y)],dz]\\
=&[f(z)-g(z),[f(x),dy]]+[[f(x),f(y)],dz]-[[f(x),g(y)],dz]=0
\end{align*}
Hence we have
\begin{align*}
[[f(x),g(y)],dz]-[f(z)-g(z),[f(x),dy]]-[[f(x),f(y)],dz]=0
\end{align*}
Hence $d$ is well-defined on $L$. Since $(f-g)([x,y])=[f(x),(f-g)(y)]+[(f-g)(x),g(y)]$, we have $u\circ d=f-g$ on $L$. Now we show that $d$ is well-defined on the free commutative algebra $S$ generated by $L$. Let $V=\{T_j\}$ be a basis of $L$ over $k$. Then we define $d$ on $S$ by the following formula: for a monomial $T_{i_1}\cdots T_{i_n}$,
\begin{align*}
d(T_{i_1}\cdots T_{i_n})=\sum_{k=1}^n f(T_{i_1}\cdots T_{i_{k-1}})d(T_{i_k})g(T_{i_{k+1}}\cdots T_{i_n}).
\end{align*}
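Splitting this sum at any position shows that $d$ satisfies the product rule on monomials, which is the compatibility used below: for $x=T_{i_1}\cdots T_{i_m}$ and $y=T_{i_{m+1}}\cdots T_{i_n}$,
\begin{align*}
d(xy) &= \sum_{k=1}^{m} f(T_{i_1}\cdots T_{i_{k-1}})d(T_{i_k})g(T_{i_{k+1}}\cdots T_{i_m})g(y)
       + \sum_{k=m+1}^{n} f(x)f(T_{i_{m+1}}\cdots T_{i_{k-1}})d(T_{i_k})g(T_{i_{k+1}}\cdots T_{i_n})\\
      &= d(x)g(y) + f(x)d(y).
\end{align*}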
Then $d$ is well-defined on $S$ and we have $u\circ d=f-g$ on $S$ (see \cite{Sch67} Lemma 2.1.6). We define a bracket $[-,-]_S$ on $S$ by $[T_i,T_j]_S:=[T_i,T_j]$, extended by the relation $[x,yz]=y[x,z]+z[x,y]$ for $x,y,z\in L$. Then, to see that $d$ respects the relation $[x,yz]-y[x,z]-z[x,y]$, we want to show that $d([x,yz])=d(y[x,z]+z[x,y])$, equivalently,
\begin{align*}
[dx,g(y)g(z)]+[f(x),dyg(z)]+[f(x),f(y)dz]=dy[g(x),g(z)]+f(y)[dx,g(z)]+f(y)[f(x),dz]\\
+dz[g(x),g(y)]+f(z)[dx,g(y)]+f(z)[f(x),dy]
\end{align*}
Equivalently,
\begin{align*}
g(y)[dx,g(z)]+g(z)[dx,g(y)]+dy[f(x),g(z)]+g(z)[f(x),dy]+dz[f(x),f(y)]+f(y)[f(x),dz]\\
=dy[g(x),g(z)]+f(y)[dx,g(z)]+f(y)[f(x),dz]
+dz[g(x),g(y)]+f(z)[dx,g(y)]+f(z)[f(x),dy]
\end{align*}
We note that
\begin{align*}
u(dy)d[x,z]-u(d[x,z])dy=(f(y)-g(y))([dx,g(z)]+[f(x),dz])-([f(x)-g(x),g(z)]+[f(x),f(z)-g(z)])dy\\
=f(y)[dx,g(z)]-g(y)[dx,g(z)]+f(y)[f(x),dz]-g(y)[f(x),dz]+[g(x),g(z)]dy-[f(x),f(z)]dy=0
\end{align*}
We also note that
\begin{align*}
u(dz)d[x,y]-u(d[x,y])dz=(f(z)-g(z))([dx,g(y)]+[f(x),dy])-([f(x)-g(x),g(y)]+[f(x),f(y)-g(y)])dz\\
=f(z)[dx,g(y)]-g(z)[dx,g(y)]+f(z)[f(x),dy]-g(z)[f(x),dy]+[g(x),g(y)]dz-[f(x),f(y)]dz=0
\end{align*}
So we have
\begin{align*}
dy[g(x),g(z)]+f(y)[dx,g(z)]+f(y)[f(x),dz]+dz[g(x),g(y)]+f(z)[dx,g(y)]+f(z)[f(x),dy]\\
=g(y)[dx,g(z)]+g(y)[f(x),dz]+[f(x),f(z)]dy+g(z)[dx,g(y)]+g(z)[f(x),dy]+[f(x),f(y)]dz
\end{align*}
\begin{align*}
g(y)[dx,g(z)]+g(z)[dx,g(y)]+dy[f(x),g(z)]+g(z)[f(x),dy]+dz[f(x),f(y)]+f(y)[f(x),dz]\\
-g(y)[dx,g(z)]-g(y)[f(x),dz]-[f(x),f(z)]dy-g(z)[dx,g(y)]-g(z)[f(x),dy]-[f(x),f(y)]dz\\
=dy[f(x),g(z)]+f(y)[f(x),dz]-g(y)[f(x),dz]-[f(x),f(z)]dy\\
=(f(y)-g(y))[f(x),dz]-[f(x),f(z)-g(z)]dy\\
=u(dy)[f(x),dz]-u([f(x),dz])dy=0
\end{align*}
Lastly, since $da=0$ for all $a\in A$, $d$ is well-defined on $A\{x_i\}$, $d$ is a Poisson biderivation, and $u\circ d=f-g$.
\end{proof}
\begin{corollary}
Consider the following commutative diagram of Poisson homomorphisms
\begin{center}
$\begin{CD}
B@>b>> B'\\
@AAA @AAA\\
A@>a>>A'
\end{CD}$
\end{center}
Let $\mathscr{E}$ be a free Poisson extension of $B$ over $A$ and $\mathscr{E}'$ be a Poisson extension of $B'$ over $A'$. Then there exists a homomorphism $\alpha:\mathscr{E}\to \mathscr{E}'$ extending $b$. If $\beta:\mathscr{E}\to \mathscr{E}'$ is any other homomorphism extending $b$, then $\bar{\alpha}$ and $\bar{\beta}$ are homotopic maps of $\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(B)}} PL^\bullet(\mathscr{E})\to PL^\bullet (\mathscr{E}')$.
\end{corollary}
\begin{definition}
Let $A\to B$ be a Poisson homomorphism, $\mathscr{E}$ be a free extension of $B$ over $A$, and $M$ be a Poisson $B$-module. We define $PT^i(B/A,M):=H^i(Hom_{\mathcal{U}_{Pois(B)}}(PL^{\bullet}(\mathscr{E}),M)), i=0,1,2$. Since $Hom_{\mathcal{U}_{Pois(B)}}(-,M)$ is a contravariant additive functor and any two Poisson cotangent complexes $PL^\bullet(\mathscr{E})$ and $PL^\bullet(\mathscr{F})$ induced from two free Poisson extensions $\mathscr{E}$ and $\mathscr{F}$ of $B$ over $A$ are homotopy equivalent, $Hom_{\mathcal{U}_{Pois(B)}}(PL^\bullet(\mathscr{E}),M)$ is homotopy equivalent to $Hom_{\mathcal{U}_{Pois(B)}}(PL^\bullet(\mathscr{F}),M)$. Hence $PT^i(B/A,M)$ is well-defined.
\end{definition}
\begin{proposition}
Let $A\to B$ be a Poisson homomorphism of Poisson algebras over $k$. Then for $i=0,1,2$, $PT^i(B/A,\cdot)$ is a covariant, additive functor from the category of Poisson $B$-modules to the category of $B$-modules. If $0\to M'\to M\to M''\to 0$ is a short exact sequence of Poisson $B$-modules, then there is a long exact sequence
\begin{align*}
0&\to PT^0(B/A,M')\to PT^0(B/A,M)\to PT^0(B/A,M'')\to\\
&\to PT^1(B/A,M')\to PT^1(B/A,M)\to PT^1(B/A,M'')\to\\
&\to PT^2(B/A,M')\to PT^2(B/A,M)\to PT^2(B/A,M'')
\end{align*}
\end{proposition}
\begin{proof}
By construction, $PT^i(B/A,\cdot)$ is a covariant additive functor. Let $\mathscr{E}:0\to U/U_0\to F/U_0\to R=A\{x_i\}\to B\to 0$ be a free Poisson extension of $B$ over $A$. Then $PL^0=\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}}\Omega_{\mathcal{U}_{Pois(R)/A}}^1$ and $PL^1=\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(R)}} F$ are free $\mathcal{U}_{Pois(B)}$-modules, and $PL^2=U/U_0$. Let's consider the induced diagram.
\begin{center}
\tiny{$\begin{CD}
0@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^0,M')@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^0,M)@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^0,M'')@>>> 0\\
@.@VVV @VVV @VVV\\
0@>>>Hom_{\mathcal{U}_{Pois(B)}}(PL^1,M')@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^1,M)@>>>Hom_{\mathcal{U}_{Pois(B)}}(PL^1,M'')@>>>0\\
@.@VVV@VVV @VVV\\
0@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^2,M')@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^2,M)@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^2,M'')
\end{CD}$}
\end{center}
The first and second rows are exact since $PL^0$ and $PL^1$ are free, and the third row is exact at the first two terms by the left exactness of $Hom_{\mathcal{U}_{Pois(B)}}(PL^2,\cdot)$. Hence we get the proposition.
\end{proof}
\begin{proposition}\label{3pro}
For any Poisson homomorphism $A\to B$ and any Poisson $B$-module $M$, $PT^0(B/A,M)=Hom_{\mathcal{U}_{Pois(B)}}(\Omega_{\mathcal{U}_{Pois(B)/A}}^1,M)=PDer_A(B,M)$. In particular, $PT^0(B/A,B)=Hom_{\mathcal{U}_{Pois(B)}}(\Omega_{\mathcal{U}_{Pois(B)/A}}^1,B)=PDer_A(B,B)$.
\end{proposition}
\begin{proof}
Let's consider an exact sequence $0\to I\to A\{x_i\}\xrightarrow{\phi} B\to 0$ for some free Poisson algebra $A\{x_i\}$ over $A$, where $\phi$ is a Poisson homomorphism compatible with $A$. Then we have an exact sequence
\begin{align*}
\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(A\{x_i\})}} I\cong I/(I^2\oplus \{I,I\}) \to \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(A\{x_i\})}}\Omega_{\mathcal{U}_{Pois(A\{x_i\})/A}}^1\to \Omega_{\mathcal{U}_{Pois(B)/A}}^1\to 0
\end{align*}
Let's choose a free $\mathcal{U}_{Pois(A\{x_i\})}$-module $F$ with a surjection $F\to I\to 0$. Then we have an exact sequence
\begin{align*}
PL^1\to PL^0\to \Omega_{\mathcal{U}_{Pois(B)/A}}^1\to 0
\end{align*}
By applying the contravariant left exact functor $Hom_{\mathcal{U}_{Pois(B)}}(\cdot, M)$, we have
\begin{align*}
PT^0(B/A,M)=Hom_{\mathcal{U}_{Pois(B)}}(\Omega_{\mathcal{U}_{Pois(B)/A}}^1,M)=PDer_A(B,M)
\end{align*}
\end{proof}
\begin{proposition}\label{3prot}
If $A\to B$ is a surjective Poisson algebra homomorphism with kernel $I$, then $PT^0(B/A,M)=0$ for all $M$, and $PT^1(B/A,M)=Hom_{\mathcal{U}_{Pois(B)}}(I/I^2\oplus \{I,I\},M)$. In particular, $PT^1(B/A,B)=Hom_{\mathcal{U}_{Pois(B)}}(I/I^2\oplus\{I,I\},B)$.
\end{proposition}
\begin{proof}
We take $R=A$ as a free Poisson extension with no generating set over $A$. Since $\Omega_{\mathcal{U}_{Pois(A)/A}}^1=0$, we have $PT^0(B/A,M)=0$. Let's consider the exact sequence $0\to U\to F\xrightarrow{j} I\to 0$. Let $U_0$ be the $\mathcal{U}_{Pois(R)}$-submodule of $F$ generated by $j(x)y-j(y)x$ and $X_{j(x)}y+X_{j(y)}x$, where $x,y \in F$. Then $0\to U/U_0\to F/U_0\to I\to 0$ is exact. So we have an exact sequence
\begin{align*}
\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(A)}}U/U_0\to \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(A)}} F/U_0\to I/I^2\oplus \{I,I\}\to 0
\end{align*}
Hence we have $PT^1(B/A,M)=Hom_{\mathcal{U}_{Pois(B)}}(I/I^2\oplus \{I,I\}, M)$.
\end{proof}
\begin{proposition}\label{3pror}
Given a commutative diagram
\begin{center}
$\begin{CD}
B @>b>> B'\\
@AAA @AAA\\
A @>a>> A'
\end{CD}$
\end{center}
where all morphisms are Poisson homomorphisms. Let $M'$ be a Poisson $B'$-module. Then there are natural homomorphisms
\begin{align*}
PT^i(B'/A',M')\to PT^i(B/A,M')
\end{align*}
\end{proposition}
\begin{proof}
Let $\mathscr{E}:0\to F_2\to F_1\to P\to B\to 0$ be a free Poisson extension of $B$ over $A$ and $\mathscr{E}':0\to F_2'\to F_1'\to P'\to B'\to 0$ be a free Poisson extension of $B'$ over $A'$. Then we have a homomorphism $\alpha:\mathscr{E}\to \mathscr{E}'$ extending $b:B\to B'$, and this induces $\bar{\alpha}:PL^\bullet (\mathscr{E})\to PL^\bullet(\mathscr{E}')$ (hence $\mathcal{U}_{Pois(B')}\otimes_{\mathcal{U}_{Pois(B)}} PL^\bullet (\mathscr{E})\to PL^\bullet(\mathscr{E}'))$, and so we have
\begin{center}
\small{$\begin{CD}
Hom_{\mathcal{U}_{Pois(B')}}(PL^{0'},M')@>>> Hom_{\mathcal{U}_{Pois(B')}}(PL^{1'},M')@>>> Hom_{\mathcal{U}_{Pois(B')}}(PL^{2'},M')\\
@VVV @VVV @VVV\\
Hom_{\mathcal{U}_{Pois(B)}}(PL^{0},M')@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^{1},M')@>>> Hom_{\mathcal{U}_{Pois(B)}}(PL^{2},M')
\end{CD}$}
\end{center}
So this induces $PT^i(B'/A',M')\to PT^i(B/A,M')$. The construction is independent of the choices of $\mathscr{E}$ and $\mathscr{E'}$. Indeed, let $\mathscr{F}$ be another free Poisson extension of $B$ over $A$ and $\mathscr{F'}$ be another free Poisson extension of $B'$ over $A'$. Let $\bar{\beta}:PL^\bullet(\mathscr{F})\to PL^\bullet(\mathscr{F}')$ be a homomorphism extending $B\to B'$, constructed similarly to $PL^\bullet(\mathscr{E})\to PL^\bullet(\mathscr{E}')$ above. Let $\bar{\gamma}:PL^\bullet (\mathscr{F})\to PL^\bullet(\mathscr{E})$ and $\bar{\gamma}':PL^\bullet(\mathscr{E}')\to PL^\bullet(\mathscr{F}')$ be homotopy equivalences. Then in the diagram
\begin{center}
$\begin{CD}
PL^\bullet(\mathscr{E})@>\bar{\alpha}>> PL^\bullet(\mathscr{E}')\\
@A\bar{\gamma}AA @VV\bar{\gamma}'V\\
PL^\bullet(\mathscr{F})@>\bar{\beta}>\bar{\gamma}'\circ \bar{\alpha}\circ \bar{\gamma}> PL^\bullet(\mathscr{F}')
\end{CD}$
\end{center}
$\bar{\beta}$ and $\bar{\gamma}'\circ \bar{\alpha}\circ \bar{\gamma}$ are homotopic maps since $\mathscr{F}$ is a free Poisson extension. Hence the induced maps $PT^i(B'/A',M')\to PT^i(B/A,M')$ are equal.
\end{proof}
\begin{definition}[Short Poisson extension]
Let $A\to B$ be a Poisson homomorphism of Poisson $k$-algebras, $k$ a field, and let $M$ be a Poisson $B$-module. By a short Poisson extension of $B$ over $A$ by $M$, we mean an exact sequence:
\begin{align*}
0\to M\xrightarrow{i} E\xrightarrow{k} B\to 0
\end{align*}
where $A\to E$ is a Poisson homomorphism of Poisson algebras, $k:E\to B$ is a Poisson homomorphism compatible with $A$ $($i.e. $A\to B$ factors through $E$$)$, and $M$ is regarded as a square-zero Poisson ideal in $E$, i.e. $i(M)\cdot i(M)=0$ and $\{i(M),i(M)\}=0$. We note that a short Poisson extension is a Poisson extension since $M$ is a $\mathcal{U}_{Pois(E)}$-module $($the $\mathcal{U}_{Pois(E)}$-module structure is induced from the $\mathcal{U}_{Pois(B)}$-module structure via $k$$)$, and so $i(x)y-i(y)x$ and $X_{i(x)}y+X_{i(y)}x$ are trivially $0$ because $ki(x)=0$ for all $x\in M$. Hence $0\to 0\to M\to E\to B\to 0$ is a Poisson extension. If $E'$ is another short Poisson extension of $B$ over $A$ by $M$, we say that $E$ and $E'$ are equivalent if there exists a Poisson homomorphism $\theta:E\to E'$ compatible with $A$ inducing the following commutative diagram
\begin{center}
$\begin{CD}
0@>>> M@>i>>E@>k>>B @>>> 0\\
@. @|@V\theta VV @| @.\\
0@>>> M@>i'>> E' @>k'>>B@>>>0
\end{CD}$
\end{center}
\end{definition}
\begin{definition}
We define $PEx^1(B/A,M)$ to be the set of equivalence classes of short Poisson extensions of $B$ over $A$ by $M$.
\end{definition}
\begin{lemma}
Let $0\to M\xrightarrow{i}E\xrightarrow{k}B\to0$ be a short Poisson extension of $B$ over $A$ by $M$ as above. Given a Poisson homomorphism $A\to C$ and two Poisson homomorphisms $f_1,f_2:C\to E$ compatible with $A$ such that $kf_1=k f_2$, the induced map $f_2-f_1:C\to M$ is a Poisson $A$-derivation.
\end{lemma}
\begin{proof}
We assume that $i$ is an inclusion. First we note that $M$ is a Poisson $B$-module. We define a $C$-module structure on $M$ by setting $c\cdot m:=f_1(c) m=f_2(c)m$, which is well-defined since $M^2=0$. We define a Poisson $C$-module structure on $M$ by setting $\{c,m\}:=\{f_1(c),m\}=\{f_2(c),m\}$, which is well-defined since $\{M,M\}=0$.
Clearly $f_2-f_1$ is $A$-linear. We note that
\begin{align*}
(f_2-f_1)(c_1c_2)&=f_2(c_1)f_2(c_2)-f_1(c_1)f_1(c_2)
\\&=f_2(c_1)f_2(c_2)-f_2(c_1)f_1(c_2)+f_2(c_1)f_1(c_2)-f_1(c_1)f_1(c_2)\\
&=f_2(c_1)(f_2(c_2)-f_1(c_2))+f_1(c_2)(f_2(c_1)-f_1(c_1))
\\&=c_1\cdot(f_2-f_1)(c_2)+c_2\cdot (f_2-f_1)(c_1)\\
(f_2-f_1)(\{c_1,c_2\})&=\{f_2(c_1),f_2(c_2)\}-\{f_1(c_1),f_1(c_2)\}\\
&=\{f_2(c_1),f_2(c_2)\}-\{f_2(c_1),f_1(c_2)\}+\{f_2(c_1),f_1(c_2)\}-\{f_1(c_1),f_1(c_2)\}\\
&=\{f_2(c_1),(f_2-f_1)(c_2)\}+\{(f_2-f_1)(c_1),f_1(c_2)\}\\
&=\{c_1,(f_2-f_1)(c_2)\}-\{c_2,(f_2-f_1)(c_1)\}
\end{align*}
\end{proof}
\begin{definition}
The short Poisson extension $0\to M\xrightarrow{i}E\xrightarrow{k}B\to0$ of $B$ over $A$ by $M$ is called trivial if it has a section, that is, if there exists a Poisson homomorphism $\sigma:B\to E$ compatible with $A$ such that $k\sigma=1_B$.
\end{definition}
Given a Poisson $B$-module $M$, a trivial short Poisson extension of $B$ over $A$ by $M$ can be constructed by considering the Poisson algebra $B\tilde{\oplus} M$ whose underlying $A$-module is $B\oplus M$, with multiplication and bracket defined by
\begin{align*}
(b_1,m_1)(b_2,m_2)&=(b_1b_2,b_1m_2+b_2m_1)\\
\{(b_1,m_1),(b_2,m_2)\}&=(\{b_1,b_2\},-\{b_2,m_1\}+\{b_1,m_2\})
\end{align*}
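One can check directly that this bracket satisfies the Leibniz rule; in the second component one uses the Poisson module identities $\{b,b'm\}=b'\{b,m\}+\{b,b'\}m$ and $\{b'b'',m\}=b'\{b'',m\}+b''\{b',m\}$:
\begin{align*}
\{(b_1,m_1),(b_2,m_2)(b_3,m_3)\}
&=\bigl(\{b_1,b_2b_3\},\,-\{b_2b_3,m_1\}+\{b_1,b_2m_3+b_3m_2\}\bigr)\\
&=(b_2,m_2)\{(b_1,m_1),(b_3,m_3)\}+(b_3,m_3)\{(b_1,m_1),(b_2,m_2)\}.
\end{align*}
The Jacobi identity is verified similarly.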
The first projection
\begin{align*}
p:B\tilde{\oplus} M\to B
\end{align*}
is a Poisson homomorphism compatible with $A$ and defines a short Poisson extension of $B$ over $A$ by $M$.
A section of $p$ can be identified with a Poisson $A$-derivation $d:B\to M$. Indeed, if we have a section $\sigma:B\to B\tilde{\oplus} M$ with $\sigma(b)=(b,d(b))$, then for all $b,b'\in B$
\begin{align*}
\sigma(bb')&=(bb',d(bb'))=\sigma(b)\sigma(b')=(b,d(b))(b',d(b'))=(bb',bd(b')+b'd(b))\\
\sigma(\{b,b'\})&=(\{b,b'\},d(\{b,b'\}))=\{\sigma(b),\sigma(b')\}=\{(b,d(b)),(b',d(b'))\}=(\{b,b'\},-\{b',d(b)\}+\{b,d(b')\})
\end{align*}
and if $a\in A$ then $\sigma(ab)=(ab,d(ab))=a\sigma(b)=a(b,d(b))=(ab,ad(b))$. Hence $d:B\to M$ is a Poisson $A$-derivation. Conversely, every Poisson $A$-derivation $d:B\to M$ defines a section $\sigma_d:B\to B\tilde{\oplus} M$ by $\sigma_d(b)=(b,d(b))$.
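As an illustrative example (our choice, not taken from the text above), take $B=k[x,y]$ with the symplectic bracket $\{x,y\}=1$, $M=B$, and the Hamiltonian derivation $d=\{h,-\}$ for a fixed $h\in B$. Then $\sigma_d(b)=(b,\{h,b\})$ is a section, and the required bracket compatibility is exactly the Jacobi identity:
\begin{align*}
\sigma_d(\{b,b'\})=\bigl(\{b,b'\},\{h,\{b,b'\}\}\bigr)
=\bigl(\{b,b'\},-\{b',\{h,b\}\}+\{b,\{h,b'\}\}\bigr)
=\{\sigma_d(b),\sigma_d(b')\}.
\end{align*}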
\begin{proposition}
Every trivial short Poisson extension $E$ of $B$ by $M$ is isomorphic to $(B\tilde{\oplus} M,p)$.
\end{proposition}
\begin{proof}
If $\sigma:B\to E$ is a section, an isomorphism $\xi:B\tilde{\oplus} M\to E$ is given by $\xi((b,m))=\sigma(b)+m$. We check only the Poisson compatibility. $\xi(\{(b_1,m_1),(b_2,m_2)\})=\sigma(\{b_1,b_2\})-\{b_2,m_1\}+\{b_1,m_2\}=\{\sigma(b_1),\sigma(b_2)\}-\{b_2,m_1\}+\{b_1,m_2\}$. On the other hand, $\{\xi((b_1,m_1)),\xi((b_2,m_2))\}=\{\sigma(b_1)+m_1,\sigma(b_2)+m_2\}=\{\sigma(b_1),\sigma(b_2)\}-\{b_2,m_1\}+\{b_1,m_2\}$. The inverse map is given by $\xi^{-1}(e')=(k(e'),e'-\sigma k(e'))$.
\end{proof}
Let $\mathscr{E}:0\to M\to E\to B\to 0$ be a short Poisson extension of $B$ over $A$ by $M$ as above. Let $h:A\to E$ be the Poisson homomorphism from $\mathscr{E}$. Then we have by Proposition \ref{3prot},
\begin{align*}
PT^1(B/E,M)=Hom_{\mathcal{U}_{Pois(B)}}(M/M^2\oplus \{M,M\},M)=Hom_{\mathcal{U}_{Pois(B)}}(M,M).
\end{align*}
Then by Proposition \ref{3pror}, $h$ induces
\begin{align*}
h^*:=Hom_{\mathcal{U}_{Pois(B)}}(M,M)=PT^1(B/E,M)\to PT^1(B/A,M)
\end{align*}
\begin{theorem}\label{3thm}
The assignment $\mathscr{E} \to h^*(id)$ induces a bijection
\begin{align*}
\rho:PEx^1(B/A,M)\to PT^1(B/A,M)
\end{align*}
in which the class of the trivial short Poisson extension corresponds to $0$.
\end{theorem}
\begin{proof}
Let
\begin{align*}
\mathscr{F}:0\to F_2\to F_1\to P\to B\to 0
\end{align*}
be a fixed free Poisson extension of $B$ over $A$. Given a short Poisson extension $\mathscr{E}:0\to M\to E\to B\to 0$, which is also a Poisson extension, we have a homomorphism $\alpha:\mathscr{F}\to \mathscr{E}$ extending the identity $B\to B$, unique up to homotopy,
\begin{center}
$\begin{CD}
0@>>> 0@>>> M@>j>> E@>k>> B@>>> 0\\
@. @AA\alpha_2A @AA\alpha_1A @AA\alpha_0A @AAidA @.\\
0@>>> F_2@>\bar{i}>> F_1@>\bar{j}>> P@>\bar{k}>> B @>>>0
\end{CD}$
\end{center}
Let's consider the cotangent complex of $B$ over $A$
\begin{align*}
0\to F_2\xrightarrow{\bar{i}}\mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(P)}} F_1\xrightarrow{d\circ\bar{j}} \mathcal{U}_{Pois(B)}\otimes_{\mathcal{U}_{Pois(P)}} \Omega_{\mathcal{U}_{Pois(P)/A}}^1\to 0
\end{align*}
Then we have
\begin{align*}
Hom_{\mathcal{U}_{Pois(P)}}( \Omega_{\mathcal{U}_{Pois(P)/A}}^1,M)\to Hom_{\mathcal{U}_{Pois(P)}}(F_1,M)\to Hom_{\mathcal{U}_{Pois(B)}}(F_2,M)
\end{align*}
The class of $\alpha_1:F_1\to M$ is $h^*(id)$. Let $\mathscr{E}':0\to M\to E'\to B\to 0$ be a short Poisson extension equivalent to $\mathscr{E}$ via $\theta:E\to E'$. Then we have a homomorphism $\mathscr{F}\to \mathscr{E}'$ extending the identity $B\to B$,
\begin{center}
$\begin{CD}
0@>>> 0@>>> M@>j'>> E'@>k'>> B@>>> 0\\
@. @AA\alpha_2A @AA\alpha_1A @AA\theta \circ\alpha_0A @AAidA @.\\
0@>>> F_2@>\bar{i}>> F_1@>\bar{j}>> P@>\bar{k}>> B @>>>0
\end{CD}$
\end{center}
We also have the same $\alpha_1:F_1\to M$. Hence $\rho$ is well-defined.
Now we define the inverse map $\rho^{-1}$ of $\rho$ in the following way: given $e\in PT^1(B/A,M)$, choose $f:F_1\to M$ inducing $e$. We put $J=(P\oplus M)/K$, where $K$ is the Poisson ideal of $P\oplus M$ generated by the elements of the form $(\bar{j}(x),-f(x))$ for $x\in F_1$. We note that in fact $K=\{(\bar{j}(x),-f(x))\mid x\in F_1\}$. Indeed, let $(p,m)\in P\oplus M$. Then $(p,m)\cdot (\bar{j}(x),-f(x))=(p\cdot\bar{j}(x),-pf(x)+\bar{j}(x)m)=(\bar{j}(px),-f(px))$, since $M$ is a Poisson $P$-module via $P\xrightarrow{\bar{k}} B$, so $\bar{j}(x)m$ means $\bar{k}(\bar{j}(x))m=0$. On the other hand, $\{(p,m),(\bar{j}(x),-f(x))\}=(\{p,\bar{j}(x)\}_P,-\{p,f(x)\}_M-\{\bar{j}(x), m\}_M)=(\bar{j}(\{p,x\}_{F_1}),-f(\{p,x\}_{F_1}))$, since $\{\bar{j}(x),m\}_M$ means $\{\bar{k}\bar{j}(x),m\}_M=0$. Hence $K=\{(\bar{j}(x),-f(x))\mid x\in F_1\}$.
Now we claim that the following sequence is a short Poisson extension of $B$ over $A$ by $M$,
\begin{align*}
\mathscr{E}:0\to M\xrightarrow{j} (P\oplus M)/K\xrightarrow{k} B\to 0
\end{align*}
where $j(m):=$ the class of $(0,m)=\overline{(0,m)}$, and $k(\overline{(p,m)}):=\bar{k}(p)\in B$. The map $k$ is well-defined and a surjective Poisson map since for $(p,m)=(\bar{j}(x),-f(x))\in K=\{(\bar{j}(x),-f(x))|x\in F_1\}$, $\bar{k}(p)=\bar{k}(\bar{j}(x))=0$. Now let $k(\overline{(p,m)})=\bar{k}(p)=0$. Then there exists $x\in F_1$ such that $\bar{j}(x)=p$. Then $(p,m)-(0,m+f(x))=(p,-f(x))=(\bar{j}(x),-f(x))\in K$. Hence $j(m+f(x))=\overline{(0,m+f(x))}=\overline{(p,m)}$. Hence $ker(k)\subset im(j)$ and clearly $im(j)\subset ker(k)$. Hence $im(j)=ker(k)$. Now we show that $j$ is injective. Let $\overline{(0,m)}=0$. Then $(0,m)\in K$. So we have $0=\bar{j}(x)$ and $m=-f(x)$ for some $x\in F_1$. Hence $x=\bar{i}(y)$ for some $y\in F_2$. Note that under the map $Hom_{\mathcal{U}_{Pois(P)}}( F_1,M)\to Hom_{\mathcal{U}_{Pois(B)}}(F_2,M)$, $f$ goes to $0$ since $f$ defines the cohomology class $e$, and so $f\circ \bar{i}=0$. Hence $m=-f(x)=-f(\bar{i}(y))=0$. So $j$ is injective. We have the following commutative diagram.
\begin{center}
$\begin{CD}
0@>>> 0@>>> M@>j>> (P\oplus M) /K@>k>> B@>>> 0\\
@. @AAA @AAfA @AA\alpha_0'A @AAidA @.\\
0@>>> F_2@>\bar{i}>> F_1@>\bar{j}>> P@>\bar{k}>> B @>>>0
\end{CD}$
\end{center}
where $\alpha_0'(p)=\overline{(p,0)}$. Indeed, let $x\in F_1$. $\alpha_0'(\bar{j}(x))=\overline{(\bar{j}(x),0)}$ and $j(f(x))=\overline{(0,f(x))}$. $(\bar{j}(x),0)-(0,f(x))=(\bar{j}(x),-f(x))\in K$. Thus $e=$ the class of $f=h^*(id)$.
Now we show that $\rho^{-1}$ is well-defined. In other words, $\rho^{-1}(e)$ is independent of the choice of $f$ inducing $e\in PT^1(B/A,M)$, up to equivalence of short Poisson extensions. Let $f,f': F_1\to M$ be two homomorphisms inducing $e$. Then there exists $v:\Omega_{\mathcal{U}_{Pois(P)/A}}^1\to M$ such that $f'-f=v\circ d\circ \bar{j}$ where $F_1\xrightarrow{\bar{j}}P\xrightarrow{d}\Omega_{\mathcal{U}_{Pois(P)/A}}^1\xrightarrow{v} M$. Let $\mathscr{E}'$ be the Poisson extension constructed from $f':F_1\to M$ as above,
\begin{align*}
0\to M\xrightarrow{j'} (P\oplus M)/K'\xrightarrow{k'} B\to 0
\end{align*}
where $K'$ is the Poisson ideal $\{(\bar{j}(x),-f'(x))|x\in F_1\}$. Consider an endomorphism $P\oplus M\to P\oplus M$ defined by $\varphi:(p,m)\mapsto (p,-v( d(p))+m)$, which is one to one and onto with the inverse $\varphi^{-1}(p,m)=(p,v(d(p))+m)$. We show that $\varphi$ is a Poisson homomorphism. Indeed,
\begin{align*}
\varphi((p_1,m_1)(p_2,m_2))=&\varphi(p_1p_2,p_1m_2+p_2m_1)=(p_1p_2,-v(p_1dp_2+p_2dp_1)+p_1m_2+p_2m_1)\\
&=(p_1p_2,p_1(-v(dp_2)+m_2)+p_2(-v(dp_1)+m_1))\\
&=(p_1,-v(dp_1)+m_1)(p_2,-v(dp_2)+m_2)\\
&=\varphi(p_1,m_1)\varphi(p_2,m_2)\\
\varphi(\{(p_1,m_1),(p_2,m_2)\})&=\varphi((\{p_1,p_2\}_P,\{p_1,m_2\}_M-\{p_2,m_1\}_M)\\
&=(\{p_1,p_2\}_P, -v(-\{p_2,dp_1\}_M+\{p_1,dp_2\}_M)+\{p_1,m_2\}_M-\{p_2,m_1\}_M)\\
&=(\{p_1,p_2\}_P,-\{p_2,m_1-v(dp_1)\}_M+\{p_1,m_2-v(dp_2)\}_M)\\
&=(\{(p_1,m_1-v(dp_1)),(p_2,m_2-v(dp_2))\})\\
&=\{\varphi(p_1,m_1),\varphi(p_2,m_2)\}
\end{align*}
On the other hand,
\begin{align*}
\varphi((\bar{j}(x),-f(x))&=(\bar{j}(x), -v(d(\bar{j}(x)))-f(x))=(\bar{j}(x),-f'(x))\\
\varphi^{-1}(-\bar{j}(x),f'(x))&=(-\bar{j}(x), -v(d(\bar{j}(x)))+f'(x))=(-\bar{j}(x),f(x))
\end{align*} for $x\in F_1$. Hence $\varphi$ maps $K$ onto $K'$, and so induces an isomorphism $(P\oplus M)/K\to (P\oplus M)/K'$. Therefore $\mathscr{E}$ is equivalent to $\mathscr{E}'$, and $\rho^{-1}$ is well-defined.
Lastly we show that the class of the trivial short Poisson extension $0\to M\to B\tilde{\oplus} M\to B\to 0$ corresponds to $0\in PT^1(B/A,M)$ via $\rho$. Consider the following diagram
\begin{center}
$\begin{CD}
0@>>> 0@>>> M@>>> B\tilde{\oplus} M @>p>>B@>>> 0\\
@. @AAA @AA\alpha_1A @AA\alpha_0A @AAidA @.\\
0@>>> F_2@>\bar{i}>> F_1@>\bar{j}>> P@>\bar{k}>> B @>>>0
\end{CD}$
\end{center}
Let $q: B\tilde{\oplus} M\to M$ be the projection. Then $q\circ \alpha_0:P\to M$ is a Poisson $A$-derivation with $\alpha_1=q\circ \alpha_0\circ \bar{j}$. Hence there exists a map $v:\Omega_{\mathcal{U}_{Pois(P)/A}}^1\to M$ such that $q\circ \alpha_0=v\circ d$. So we have $\alpha_1=v\circ d\circ \bar{j}$. Hence the class of $\alpha_1$ is $0$.
\end{proof}
\section{First order Poisson deformations of affine Poisson schemes}
Let us explain in more detail a short Poisson extension of $R$ over $A$ by $I$: $0\to I\xrightarrow{j} R'\xrightarrow{\phi} R\to0$, where $I$ is a Poisson $R$-module with $j(I)\cdot j(I)=0$ and $\{j(I),j(I)\}=0$. Here $\phi$ is a homomorphism of Poisson $k$-algebras compatible with the Poisson $A$-algebra structures. We regard $I$ as a Poisson $R'$-module via $\phi$, so that $j$ is a Poisson $R'$-module homomorphism.
A short Poisson extension of $R$ over $A$ by $R$ is an exact sequence $0\to R\xrightarrow{j} R'\xrightarrow{\phi} R\to0$, where $R$ is regarded as a Poisson module over itself in the natural way, the image of $j$ satisfies $j(R)^2=0$ and $\{j(R),j(R)\}=0$, and $R$ carries an $R'$-module structure via $\phi$ with respect to which $j$ is a Poisson $R'$-module homomorphism. Now we show that this extension gives a $k[\epsilon]$-Poisson algebra structure on $R'$. Note that $j$ is completely determined by $j(1)$, since $j(f)=j(f\cdot 1)=f'\cdot j(1)$ where $f'$ is a lift of $f$. We give $R'$ a $k[\epsilon]$-algebra structure by $\epsilon\mapsto j(1)$, which is possible since $j(1)^2=0$. To obtain a $k[\epsilon]$-Poisson algebra structure on $R'$, we must show that $\{R', \epsilon\}=0$, equivalently $\{r,j(1)\}=0$ for all $r\in R'$. Since $j$ is a Poisson $R'$-module homomorphism, $\{r,j(1)\}_{R'}=j(X_r\cdot 1)=j(\{\phi(r),1\}_R)=0$.
\begin{proposition}[compare $\cite{Har10}$ Theorem 5.1]\label{3prr}
Let $B_0$ be a Poisson $k$-algebra, and let $X_0=Spec(B_0)$. Then there is a natural isomorphism
\begin{align*}
PDef_{B_0}(k[\epsilon])\cong PEx^1(B_0/k,B_0)
\end{align*}
where the class of trivial Poisson deformation corresponds to $0\in PT_{B_0}^1$.
\end{proposition}
\begin{proof}
A first order Poisson deformation of $B_0$ consists of a flat Poisson $k[\epsilon]$-algebra $B$ with Poisson $k$-isomorphism $B\otimes_{k[\epsilon]} k \cong B_0$ with the following commutative diagram
\begin{center}
$\begin{CD}
B@>\phi>> B_0\\
@AAA @AAA\\
k[\epsilon]@>>> k
\end{CD}$
\end{center}
where $\phi$ is a Poisson homomorphism over $k$. We note that an algebra $B$ is flat over $k[\epsilon]$ if and only if $0\to B_0\otimes_{k} (\epsilon)\cong B_0 \to B$ is exact (see \cite{Har10} Proposition 2.2). So given a first order Poisson deformation of $B_0$, we have an exact sequence $ 0\to B_0\xrightarrow{j=\epsilon} B\xrightarrow{\phi} B/\epsilon B \cong B_0\to 0$, where the first map is given by $\epsilon(b_0):=\epsilon \cdot b$, with $b$ a lifting of $b_0$ via $\phi$. This is a short Poisson extension of $B_0$ over $k$ by $B_0$ since $B_0$ is a Poisson $B$-module, $\epsilon(B_0)^2=\{\epsilon(B_0),\epsilon(B_0)\}=0$, and the induced $B_0$-module structure on $B_0$ via $\phi$ is given by the multiplication on $B_0$. Let $B'$ be a Poisson deformation of $B_0$ over $k[\epsilon]$ equivalent to the first order Poisson deformation $B$. Then we would like to show that the following diagram commutes
\begin{center}
$\begin{CD}
0@>>> B_0@>j>> B@>\phi>> B_0@>>> 0\\
@. @| @V\cong \Phi VV @| \\
0 @>>> B_0 @>j'>> B'@>\phi'>> B_0 @>>>0
\end{CD}$
\end{center}
where $\Phi$ is a Poisson homomorphism over $k[\epsilon]$ defining equivalent first order Poisson deformations of $B_0$.
Since the right square commutes by the definition of equivalence, we only need to check that the left square commutes. We have $j(b_0)=\epsilon b$ where $\phi(b)=b_0$. Since $\phi'(\Phi(b))=b_0$, $j'(b_0)=\epsilon \Phi(b)$. Hence the diagram commutes. So equivalent first order Poisson deformations correspond to equivalent short Poisson extensions of $B_0$ over $k$ by $B_0$.
Conversely, let $0\to B_0\xrightarrow{j} B\xrightarrow{\phi} B_0 \to 0$ be a Poisson extension. Then $B$ is a Poisson $k[\epsilon]$-algebra by the above discussion. In this case, we can identify $B_0$ with $j(B_0)=\epsilon B=B\otimes_{k[\epsilon]} (\epsilon)$. Hence $B$ is flat over $k[\epsilon]$ and $B/\epsilon B=B\otimes_{k[\epsilon]} k\cong B_0$. Since $\epsilon B$ is a Poisson ideal ($\phi$ is a Poisson map), $\phi$ induces a Poisson isomorphism $B\otimes_{k[\epsilon]} k\cong B_0$. Given an equivalent Poisson extension, since the right square in the above diagram commutes, we obtain an equivalent Poisson deformation of $B_0$.
Let $B_0\otimes_k k[\epsilon]=B_0\oplus \epsilon B_0$ be the trivial Poisson deformation of $B_0$. Then the associated Poisson extension is the trivial one, $0\to B_0\to B_0\oplus \epsilon B_0 \to B_0\to 0$.
\end{proof}
\begin{corollary}
Let $B_0$ be a Poisson $k$-algebra. Then the set of first order Poisson deformations of $B_0$ is in natural one-to-one correspondence with $PT^1(B_0/k,B_0)$:
\begin{align*}
PDef_{B_0}(k[\epsilon])\cong PT^1(B_0/k,B_0)
\end{align*}
\end{corollary}
\begin{proof}
By Theorem \ref{3thm} and Proposition \ref{3prr}, we have $PDef_{B_0}(k[\epsilon])\cong PEx^1(B_0/k,B_0)\cong PT^1(B_0/k,B_0)$.
\end{proof}
\section{First order deformations of a Poisson closed subscheme of an affine Poisson scheme}
Let $X$ be a Poisson scheme over $k$ and let $Y$ be a closed Poisson subscheme of $(X,\Lambda_0)$. We define a Poisson deformation of $Y$ over $Spec\, k[\epsilon]$ in $X$ to be a Poisson subscheme $Y'\subset (X\times Spec(k[\epsilon]),\Lambda_0\oplus 0)$, where $(X\times Spec(k[\epsilon]),\Lambda_0\oplus 0)$ is the trivial Poisson deformation of $X$ over $Spec(k[\epsilon])$, such that $Y'\times_{Spec \,k[\epsilon]} k=Y$ and $Y'$ is flat over $Spec(k[\epsilon])$.
We discuss deformations of Poisson subschemes when $X$ is an affine Poisson scheme. Then $X$ corresponds to a Poisson $k$-algebra $(B,\Lambda_0)$, and $Y$ is defined by a Poisson ideal $I\subset B$. We would like to find Poisson ideals $I'$ of the $k[\epsilon]$-Poisson algebra $(B'=B\otimes_k k[\epsilon],\Lambda_0\oplus 0)$ such that the image of $I'$ in $B=B'/\epsilon B'$ is $I$ and $B'/I'$ is flat over $k[\epsilon]$. We note that the flatness of $B'/I'$ over $k[\epsilon]$ is equivalent to the exactness of $0\to B/I\xrightarrow{\epsilon} B'/I'\to B/I\to 0$, where $\epsilon$ denotes multiplication by $\epsilon$, which in turn holds if and only if $0\to I\xrightarrow{\epsilon} I'\to I\to 0$ is exact (see \cite{Har10} page 11).
\begin{proposition}[compare $\cite{Har10}$ Proposition 2.3]\label{3p}
To give a Poisson ideal $I'\subset B'=B\oplus \epsilon B$ such that $B'/I'$ is flat over $k[\epsilon]$ and the image of $I'$ in $B$ is $I$ is equivalent to giving an element
\begin{align*}
\varphi\in Hom_{\mathcal{U}_{Pois(B)}}(I,B/I)=Hom_{\mathcal{U}_{Pois(B/I)}}(I/I^2\oplus \{I,I\},B/I).
\end{align*}
In particular, $\varphi=0$ corresponds to the trivial deformation given by $I'=I\oplus \epsilon I$ inside $B'=B\oplus \epsilon B$. \end{proposition}
\begin{proof}
$B'=B\oplus \epsilon B$ is naturally a Poisson $B$-algebra in the following way: $a\cdot (b+\epsilon c)=ab+\epsilon ac$ and $\{a,b+\epsilon c\}=\{a,b\}+\epsilon\{a,c\}$. Now let $I'$ be a Poisson ideal of $B'=B\oplus \epsilon B$ such that $B'/I'$ is flat over $k[\epsilon]$ and $\pi(I')=I$, where $\pi:B\oplus \epsilon B\to B$ is the projection. Let $x\in I$ and choose a lifting $x'$ of $x$ via $\pi$. Then $x'=x+\epsilon y\in I'$ for some $y\in B$. Let $x''=x+\epsilon y'\in I'$ be another lifting of $x$. Then $y-y'\in I$ by the flatness of $B'/I'$ over $k[\epsilon]$. So the image $\bar{y}$ of $y$ in $B/I$ is uniquely determined, and $\varphi:I\to B/I$, $x\mapsto \bar{y}$, is well-defined. We claim that this is a Poisson $B$-module homomorphism. Indeed, let $\varphi(x)=\bar{y}$. Since $x+\epsilon y\in I'$, we have $bx+\epsilon by\in I'$ for $b\in B$. So $\varphi(b\cdot x)=\overline{by}=b\bar{y}=b\cdot\varphi(x)$. On the other hand, since $\{b,x\}+\epsilon\{b,y\}\in I'$, we have $\varphi(\{b,x\})=\overline{\{b,y\}}=\{b,\bar{y}\}=\{b,\varphi(x)\}$.
Conversely, let $\varphi\in Hom_{\mathcal{U}_{Pois(B)}}(I,B/I)$. Define
\begin{align*}
I'=\{x+\epsilon y|x\in I,y\in B, \,\,\text{the image $\bar{y}$ of $y$ in $B/I$ is equal to $\varphi(x)$}\}
\end{align*}
We claim that $I'$ is a Poisson ideal of $B'$. Let $x+\epsilon y\in I'$ and $a+\epsilon b\in B'$. Then $x\in I$ and $\varphi(x)=\bar{y}$. Since $(a+\epsilon b)(x+\epsilon y)=ax+\epsilon(bx+ay)$ and $\overline{bx+ay}=\overline{ay}$, we have $\varphi(ax)=a\varphi(x)=\overline{ay}$. On the other hand, since $\{a+\epsilon b, x+\epsilon y\}=\{a,x\}+\epsilon(\{b,x\}+\{a,y\})$ and $\overline{\{b,x\}+\{a,y\}}=\overline{\{a,y\}}$, we have $\varphi(\{a,x\})=\overline{\{a,y\}}=\{a,\varphi(x)\}$. We have a natural exact sequence $0\to I \xrightarrow{\epsilon} I'\to I\to 0$, where $\epsilon$ denotes multiplication by $\epsilon$. Since this exactness is equivalent to the exactness of $0\to B/I\xrightarrow{\epsilon} B'/I'\to B/I\to 0$, $B'/I'$ is flat over $k[\epsilon]$.
These two constructions are inverse to each other, giving a one-to-one correspondence. When $\varphi=0\in Hom_{\mathcal{U}_{Pois(B)}}(I,B/I)$, we have $I'=I\oplus \epsilon I$.
\end{proof}
\begin{corollary}
Let $B_0$ be a Poisson $k$-algebra and $I$ be a Poisson ideal of $B_0$. Let $C=B_0/I$. Then the set of first order deformations of Poisson closed subscheme $Spec(B_0/I)$ of an affine Poisson scheme $Spec(B_0)$ is in natural one to one correspondence with $PT^1(C/B_0,C)$.
\end{corollary}
\begin{proof}
This follows from Proposition \ref{3prot} and Proposition \ref{3p}.
\end{proof}
\begin{remark}\label{3remarks}
If our construction of the Poisson cotangent complex turns out to be correct and to use the right language, there remains a ``globalization'' problem, which I cannot solve at this point. In $\cite{Sch67}$, Lichtenbaum and Schlessinger actually constructed a quasi-coherent sheaf $\mathcal{T}^i (X/Y,\mathcal{F})$ for a morphism of schemes $f:X\to Y$ and a quasi-coherent sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules, where $X$ is separated, $Y$ is noetherian and $f$ is locally of finite type. I could not show that $PT^i(B/A,M)$ commutes with localization, which would allow us to define a sheaf $\mathcal{PT}^i(X/Y,\mathcal{F})$ for a morphism of Poisson schemes $f:X\to Y$ satisfying suitable finiteness conditions and a quasi-coherent Poisson $\mathcal{O}_X$-module $\mathcal{F}$.
\end{remark}
Knowing the true effect size of clinical interventions in randomised clinical trials is key to informing the public health policies. Vaccine efficacy is defined in terms of the relative risk or the ratio of two disease risks. However, only approximate methods are available for estimating the variance of the relative risk. In this article, we show using a probabilistic model that uncertainty in the efficacy rate could be underestimated when the disease risk is low. Factoring in the baseline rate of the disease, we estimate broader confidence intervals for the efficacy rates of the vaccines recently developed for COVID-19. We propose new confidence intervals for the relative risk. We further show that sample sizes required for phase 3 efficacy trials are routinely underestimated and propose a new method for sample size calculation where the efficacy is of interest. We also discuss the deleterious effects of classification bias which is particularly relevant at low disease prevalence.
\section*{Introduction}
Vaccines are seen as the best control measure for the coronavirus pandemic. In this context, understanding the true efficacy of the vaccines and clinical interventions is crucial. Randomised clinical trials are conducted to systematically study the safety and the efficacy of an intervention in a subset of the population before it is widely used in the general population. In placebo-controlled vaccine trials, participants are randomised into vaccinated and unvaccinated groups where cases of the disease or infection are allowed to accrue over time. In planning a clinical trial, advance sample size calculation determines the size of the trial population needed to detect a minimal clinically relevant difference between the two groups if such a difference exists. The indicator for effectiveness of a vaccine is usually reduction of the cases in the vaccinated group relative to the control group. However, it is sometimes naively assumed that the trial participants who do not experience the event provide no information. Consequently, the event rate or the incidence rate of the disease receives inadequate attention. For rare diseases, it is often simply accepted that the accrual of the cases takes longer. Human clinical trials are also an area where theory and practice are seldom consistent, as experiments in human populations are hardly fully controlled experiments, not least due to unrealistic assumptions, loss to follow-up, noncompliance, heterogeneity of treatment effect and the trial population, etc. \cite{10.1093epirev} Therefore, it is not uncommon that, by the time an interim analysis declares a significant finding, the original assumptions used to define the statistical power of the study and the sample size are neglected.
In this article our interest is on evaluating the impact of the event rate, insofar as it could affect the estimation of the efficacy rate. We show that low incidence rate of the disease could lead to overestimation of confidence in the estimated efficacy rates. We propose a new method for posterior probability of the vaccine efficacy that has a more subtle relationship with the event rate. Using our approach, we obtain broader confidence intervals for the efficacy of the vaccines recently developed for COVID-19. Based on our findings, we propose new confidence intervals for the relative risk. A new method for sample size calculation in controlled efficacy trials is proposed which is more robust at low disease prevalence. Also highlighted is the impact of classification bias which could have large consequences when the disease risk is low.
\section*{Methods}
Vaccine \textit{efficacy} is defined as the proportionate reduction in the risk of disease or infection in a vaccinated group compared to an unvaccinated group. It is defined as (1-RR)$\times$100\% in terms of the relative risk or the \textit{risk ratio}, $\textrm{RR}=\pi_v/\pi_c$, where $\pi_v$ and $\pi_c$ are the incidences of the disease among those exposed in the vaccinated and control groups respectively. Throughout this paper we use the terms incidence rate, disease risk, prevalence and event rate interchangeably.
It is important to remember that the variables $\pi_v$ and $\pi_c$ are scaled binomials as they represent sample proportions. Assuming equal person-time exposure in the two groups, the efficacy is often summarised in terms of the numbers of cases in the vaccinated and unvaccinated groups, $t_v$ and $t_c$ respectively:
\begin{equation}
\alpha=1-\frac{\pi_v}{\pi_c}\simeq 1-\frac{t_v}{t_c}.\label{eqn:alpha}
\end{equation}
It appears in the literature that only approximate methods are available for the variance of the ratio of two binomial parameters \cite{pmid3291957,Katz1978OBTAININGCI}. The consensus method that is commonly used to assign confidence intervals to the risk ratio is credited to Katz et al \cite{Katz1978OBTAININGCI}. The method is based on asymptotic normality of logarithm of the ratio of two binomial variables. Assuming independence of the incidence rates, it follows that $\textrm{var}(\log(\pi_v/\pi_c))$ = $\textrm{var}(\log(\pi_v))+\textrm{var}(\log(\pi_c))$. Using a Taylor series, the variances are approximated as $\textrm{var}(\log(\pi)) \approx$ $\textrm{var}(\pi)/\pi^2$ where Wald method is often used to set $\textrm{var}(\pi)$. Then two-sided 95\% confidence intervals on the efficacy (e.g. see \cite{pmid8931208,pmid3260147,LACHENBRUCH1998569,tmi.13351}) can be written as
\begin{equation}
95\% \textrm{CL}: 1- \exp\bigg(\ln(\textrm{RR})\pm1.96\sqrt{\frac{1-\pi_v}{t_v}+\frac{1-\pi_c}{t_c}}\bigg). \label{eqn:RR}
\end{equation}
Hereafter we refer to equation \ref{eqn:RR} as the pooled Wald approximation. We will show that the method underestimates the variance, especially when the incidence rate is low.
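For reference, the pooled Wald interval of equation \ref{eqn:RR} can be computed directly from the four trial counts. The following is a minimal Python sketch; the function name and interface are ours, not part of any standard package:

```python
import math

def wald_efficacy_ci(t_v, n_v, t_c, n_c, z=1.96):
    """Pooled Wald confidence interval for vaccine efficacy (equation 2).

    t_v, t_c: case counts in the vaccinated and control groups;
    n_v, n_c: group sizes.  Returns (efficacy, lower, upper).
    """
    pi_v, pi_c = t_v / n_v, t_c / n_c
    rr = pi_v / pi_c
    # standard error of ln(RR), pooled Wald approximation
    se = math.sqrt((1.0 - pi_v) / t_v + (1.0 - pi_c) / t_c)
    lower = 1.0 - math.exp(math.log(rr) + z * se)
    upper = 1.0 - math.exp(math.log(rr) - z * se)
    return 1.0 - rr, lower, upper
```

Applied to case counts of the magnitude reported by the COVID-19 trials discussed below (e.g. 8/18,198 versus 162/18,325), this yields an efficacy of about 95\% with an interval close to the published one, illustrating that the reported intervals are well approximated by equation \ref{eqn:RR}.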
Equation \ref{eqn:RR} sets out the large sample asymptotic variance of the risk ratio. However, the Wald method used to define $\textrm{var}(\pi)$ is known to be unreliable when $\pi$ is small. One may use alternative binomial proportion confidence intervals; however, log normality of the ratio might not hold and the variance of (the logarithm of) the ratio may be irreducible. Hightower et al \cite{pmid3260147} raised questions about the credibility of the confidence limits when the efficacy is high and the disease risk is low. Also, O'Neill \cite{pmid3231951} noted that, when $t\ll n$, the variance of $\ln({\textrm{RR}})$ in equation \ref{eqn:RR} remains fairly stable and quickly converges to $1/t_v+1/t_c$.
Ratio distributions are known to have heavy tails and often no finite variance. If one were to model the likelihood function for the efficacy defined in equation \ref{eqn:alpha} in terms of independent incidence rates, the choice of the prior probabilities for $\pi_v$ and $\pi_c$ would be critical. One can readily verify that the variance of the ratio of two binomial distributions increases as binomial probabilities decrease. Uninformative priors could simply cancel out by the division and the dependence of the posterior on the prevalence would not become obvious. Analytical solutions using independent incidence rates may also be hard to obtain.
For an analytical solution, we model the efficacy in terms of conditional probabilities of the disease risks. Independence of the probabilities of the incidence rates is neither necessary nor ideal when calculating the efficacy, as equation \ref{eqn:alpha} imposes a constraint on the two variables. Under a binomial model with overall prevalence of $\pi=t/n$ in both groups and total population size of $n$, overall number of cases $t=t_c+t_v$ follows $t\sim \mathrm{Bin}(n,\pi)$, then, from equation \ref{eqn:alpha} assuming $t_c\sim \mathrm{Bin}(t,1/(2-\alpha))$, we expect $t_c \sim \mathrm{Bin}(n,\pi/(2-\alpha))$. Were we to use Poisson distributions for $t$ and $t_c$, $t_c$ conditional on $t$ would still follow a binomial distribution. Modeling the efficacy in terms of conditional probabilities has previously been suggested \cite{pmid8931208}. This notation enables to explicitly parametrise the likelihood function in terms of the prevalence, irrespective of the priors for $\pi_v$ and $\pi_c$.
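The conditional claim $t_c\sim \mathrm{Bin}(t,1/(2-\alpha))$ can be checked with a short Monte Carlo sketch; the parameter values below are purely illustrative:

```python
import random

def simulate_trial(n, pi_c, alpha, rng):
    """One equally split placebo-controlled trial: returns (t_v, t_c).

    Control-arm disease risk pi_c; vaccinated-arm risk (1 - alpha) * pi_c.
    """
    half = n // 2
    t_c = sum(rng.random() < pi_c for _ in range(half))
    t_v = sum(rng.random() < (1.0 - alpha) * pi_c for _ in range(half))
    return t_v, t_c

def mean_control_share(trials, n, pi_c, alpha, seed=0):
    """Average of t_c / (t_v + t_c); should be close to 1 / (2 - alpha)."""
    rng = random.Random(seed)
    shares = []
    for _ in range(trials):
        t_v, t_c = simulate_trial(n, pi_c, alpha, rng)
        if t_v + t_c > 0:
            shares.append(t_c / (t_v + t_c))
    return sum(shares) / len(shares)
```

With $\alpha=0.7$, the control-arm share of the cases concentrates near $1/(2-\alpha)=1/1.3\approx 0.77$, independently of the disease risk.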
For a general solution accounting for classification bias we assume an imperfect diagnostic procedure with sensitivity Se and specificity Sp. Then fraction of individuals who test positive for the disease is sum of true positive rate and false positive rate:
\begin{align}
T&=\textrm{Se}\times\pi+(1-\textrm{Sp})\times(1-\pi) \nonumber \\
&=c_1+c_2\pi, \label{eqn:lambda}
\end{align}
where $c_1$=1-Sp is the false positive rate and $c_2$=Se+Sp-1. The posterior distribution of $\alpha$ given that $t_c$ is binomial follows as
\begin{align}
p(\alpha | t_c,\pi,c_1,c_2)&=\frac{p(t_c | \alpha,\pi,c_1,c_2)p(\pi)p(\alpha)}{g(\alpha)} \nonumber \\
&\propto\frac{1}{g(\alpha)}\binom{n}{t_c}\left(\frac{c_1+c_2\pi}{2-\alpha}\right)^{t_c}\left(1-\frac{c_1+c_2\pi}{2-\alpha}\right)^{n-t_c}f(\pi), \label{eqn:post}
\end{align}
where $f(\pi)$ is the prior on $\pi$ and we have assumed a uniform prior on the efficacy, $\alpha\sim \mathrm{unif}[0,1]$. For a complete solution, the marginal likelihood $g(\alpha)$ can be written in terms of the incomplete beta function (see e.g. \cite{608719}):
\begin{align*}
g(\alpha)&=f(\pi) \binom{n}{t_c} (c_1+c_2\pi) \big\{B(c_1+c_2\pi;t_c-1,n-t_c+1) \\
&- B((c_1+c_2\pi)/2;t_c-1,n-t_c+1)\big\}.
\end{align*}
As we do not intend to impose a prior on the prevalence, $f(\pi)$ in equation \ref{eqn:post} cancels out and our analysis, in essence, is likelihood based. One needs to remember that the posterior in equation \ref{eqn:post}, as it was derived from the second equality in equation \ref{eqn:alpha}, is valid only when the individuals are equally divided between the two groups.
The mode of the posterior of $\alpha$ is obtained by setting the derivative of the log likelihood to zero i.e. $\partial \ell/\partial\alpha=\partial \mathrm{ln}(p(\alpha | t_c,\pi))/\partial \alpha=0$. This leads to
\begin{equation}
\alpha_{mode}=2-\frac{n(c_1+c_2\pi)}{t_c} \label{eqn:mode},
\end{equation}
which corresponds to the maximum likelihood estimator (MLE). Cramér–Rao bound expresses a lower bound on the variance of any unbiased estimator of $\alpha$ in terms of the inverse of the Fisher information
\begin{equation}
\textrm{Var}(\alpha_{mode})\ge\frac{1}{\mathcal{I}(\alpha)}, \label{eqn:cramer}
\end{equation}
where the Fisher information $\mathcal{I(\alpha)}$ is obtained as
\begin{align}
\mathcal{I}(\alpha)= \mathbb{E}\Big[\Big(\frac{\partial}{\partial\alpha}\ell(\alpha | t_c,\pi)\Big)^2\Big] &=n \times \mathbb{E}\Big[\Big(\frac{1}{2-\alpha}-\frac{1-t_c}{2-\alpha-(c_1+c_2\pi)}\Big)^2\Big] \nonumber \\
&= \frac{n (c_1+c_2\pi)}{(2-\alpha)^2(2-\alpha-(c_1+c_2\pi))}. \label{eqn:fisher}
\end{align}
Here $\mathbb{E}$ denotes the expectation over $t_c$; with the factor $n$ pulled out, $t_c$ inside the expectation is the per-trial Bernoulli indicator, so we have substituted $\mathbb{E}[t_c]=\mathbb{E}[t_c^2]=(c_1+c_2\pi)/(2-\alpha)$. We will show that the conditional binomial model has a more subtle dependence on $\pi$ compared to the pooled Wald method.
Under certain regularity conditions and assuming asymptotic normality near MLE, 95\% confidence intervals on $\alpha_{mode}$ can be estimated as
\begin{equation}
\alpha_{mode}\pm\frac{1.96}{\sqrt{\mathcal{I}(\alpha_{mode})}}. \label{eqn:cramerCIs}
\end{equation}
However, as the posterior distribution is asymmetric, especially when the efficacy is high, and the intervals could lie outside [0,1], we will estimate the credible intervals computationally.
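As an illustration of equations \ref{eqn:mode}--\ref{eqn:cramerCIs}, the sketch below evaluates the MLE and the normal-approximation interval. The function name and interface are ours; for high efficacy and small case counts the upper limit can exceed 1, which is one reason we estimate the intervals computationally instead:

```python
import math

def efficacy_mode_and_normal_ci(n, t_c, pi, se=1.0, sp=1.0, z=1.96):
    """MLE of the efficacy (equation 5) and the Cramer-Rao normal
    interval (equations 8-9); se/sp are test sensitivity/specificity."""
    lam = (1.0 - sp) + (se + sp - 1.0) * pi        # c1 + c2 * pi
    mode = 2.0 - n * lam / t_c                     # equation (5)
    # plug-in Fisher information at the mode, equation (8)
    fisher = n * lam / ((2.0 - mode) ** 2 * (2.0 - mode - lam))
    half_width = z / math.sqrt(fisher)
    return mode, mode - half_width, mode + half_width
```

For example, with $n=36{,}523$, $t_c=162$ and $\pi=170/36{,}523$ (counts of the magnitude seen in the trials analysed below, Se=Sp=1), the mode is about 0.95 but the upper normal limit exceeds 1, showing why the asymmetry of the posterior cannot be ignored.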
\section*{Results}
\subsection*{Effect of incidence rate on vaccine efficacy}
The posterior probability of vaccine efficacy given in its simplest form in equation \ref{eqn:post} is ready for inspection. Using binomial notation is particularly useful in enabling us to directly plug in the numbers $n$, $t_c$ in the estimation of $\alpha$. In this section we evaluate the impact of the incidence rate on the efficacy and assign new confidence bounds to the efficacy of COVID-19 vaccines.
Firstly, we assume a diagnostic test with perfect sensitivity and specificity, i.e. Se=Sp=1. In the absence of misclassification, the mode of the posterior in equation \ref{eqn:mode} corresponds to the expectation $\hat{\alpha}=1-t_v/t_c$. The larger $n$, the smaller the variance of the posterior; however, for a fixed $n$, the variance depends on $\pi$. Figure \ref{fig:figure-1} shows the posterior probability of $\alpha$ plotted over a range of $\pi$, for a fixed $n$ on the left-hand side and for a fixed $t$ on the right-hand side, assuming true vaccine efficacies of 70\% and 90\% respectively. Also plotted as vertical lines are the independent 95\% confidence intervals from equation \ref{eqn:RR}. As the event rate falls, the posterior distributions and the confidence intervals become wider; however, for a fixed $t$ (right plot) the Wald intervals are stable over a wide range of $\pi$, and more so when the efficacy is high. The proposed conditional binomial model better represents the variability at low prevalence.
\begin{figure}[!t]
\centering
\mbox{\includegraphics[width=2.7in]{alpha-post-n50k.eps}}
\hspace{1px}
\mbox{\includegraphics[width=2.7in]{alpha-post-t2k.eps}}
\caption{\color{Gray} \textbf{Posterior distribution of vaccine efficacy}. Blue lines represent the normalised posterior probabilities, while vertical lines show the independent pooled Wald confidence intervals. Left hand plot assumes a fixed $n$=50,000 while right hand plot is for a fixed $t$=2,000. The general trend holds for different values of the parameters. Wald method overstates the confidence in the efficacy when $t\ll n$.}
\label{fig:figure-1}
\end{figure}
Three clinical trials of the vaccines designed to prevent COVID-19 recently published their interim phase 3 analysis results \cite{pmid33306989,NEJMoa2034577,NEJMoa2035389}, with two of them reporting remarkably narrow 95\% confidence bounds on the efficacy. The reported case numbers and the efficacy rates for the primary end points are provided in Table \ref{tab:table-1}. Firstly, we note that, although the trials used different models and priors on the efficacy, the reported confidence intervals almost perfectly correspond with those obtained from equation \ref{eqn:RR}. At large $n$ the posterior is clearly dominated by the data and the Bayesian and frequentist approaches are equivalent. Furthermore, especially where the efficacy is high, pooled Wald confidence intervals hardly vary with the choice of $n$. If one were to use different values for $n$ in Table \ref{tab:table-1}, equation \ref{eqn:RR} would still give the same confidence intervals over a large range of the values. Therefore, the uncertainty caused by $n_v$ and $n_c$ is not accounted for.
\begin{table}[!b]
\caption{Estimated efficacy of COVID-19 vaccine trials}
\label{tab:table-1}
\begin{tabular}{c@{\qquad}ccc@{\qquad}c}
\toprule
\multirow{2}{*}{\raisebox{-\heavyrulewidth}{Trial}} & \multicolumn{3}{c}{Case numbers and reported efficacy rates} & \multicolumn{1}{c}{Estimated efficacy rate} \\
\cmidrule{2-5}
& \thead{case rate in\\ vaccinated} & \thead{case rate in\\ control} & \thead{reported\\ efficacy and 95\% CI} & \thead{estimated mode\\ and 95\% credible interval} \\
\midrule
AZ-Oxford (combined) & 30/5,807 & 101/5,829 & 70.4\% [54.8, 80.6] & 70.3\% [39.1, 90.9] \\
Pfizer-BioNTech & 8/18,198 &162/18,325 & 95.0\% [90.3, 97.6] & 95.1\% [74.9, 99.6] \\
Moderna-NIH & 11/14,134 & 185/14,073 & 94.1\% [89.3, 96.8] & 94.1\% [75.4, 99.5] \\
\bottomrule
\end{tabular}
\end{table}
We re-estimate the confidence intervals using the conditional binomial model presented in the Methods. Using the case numbers reported, the likelihood of the data in equation \ref{eqn:post} is obtained by setting the prevalence to $\pi=T=t/n$. Then the maximum \textit{a posteriori} (MAP) estimates and 95\% credible intervals for the efficacy rates are calculated computationally. The results, shown in Table \ref{tab:table-1}, are contrasted with those reported. Although the estimated modes are the same, our credible intervals are wider. Incorporating the incidence rates has removed the overwhelming confidence originally assigned to the point estimates. Note that our approach requires the trial participants to be equally divided between the vaccinated and unvaccinated groups, which is roughly the case here.
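A minimal grid evaluation of the posterior in equation \ref{eqn:post} (assuming Se=Sp=1 and equal arms) reproduces the MAP estimates and equal-tailed credible intervals of Table \ref{tab:table-1} up to grid resolution; the implementation below is one possible sketch, with function names of our own choosing:

```python
import math

def efficacy_posterior_grid(n, t_c, t, points=20001):
    """Normalised grid posterior of the efficacy (equation 4, Se=Sp=1).

    n: total trial size (arms assumed equally split); t_c: control-arm
    cases; t: total cases, so the prevalence is pi = t / n.
    """
    pi = t / n
    alphas = [i / (points - 1) for i in range(points)]
    log_post = []
    for a in alphas:
        p = pi / (2.0 - a)                    # binomial probability of t_c
        log_post.append(t_c * math.log(p) + (n - t_c) * math.log(1.0 - p))
    top = max(log_post)                       # stabilise before exponentiating
    weights = [math.exp(lp - top) for lp in log_post]
    total = sum(weights)
    return alphas, [w / total for w in weights]

def map_and_credible_interval(alphas, weights, level=0.95):
    """Posterior mode and equal-tailed credible interval on the grid."""
    mode = alphas[max(range(len(weights)), key=weights.__getitem__)]
    tail = (1.0 - level) / 2.0
    cum, lo, hi = 0.0, alphas[0], alphas[-1]
    for a, w in zip(alphas, weights):
        prev, cum = cum, cum + w
        if prev < tail <= cum:
            lo = a
        if prev < 1.0 - tail <= cum:
            hi = a
    return mode, lo, hi
```

With the Pfizer-BioNTech counts ($n=36{,}523$, $t_c=162$, $t=170$) this yields a mode near 95.1\% and an interval close to the [74.9, 99.6] reported in Table \ref{tab:table-1}.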
Figure \ref{fig:post}, in red, shows the posterior probabilities and the credible intervals for COVID-19 vaccines. Of note is that, if we were to hypothetically assume $\pi=t/n=1$, the posterior in equation \ref{eqn:post} would produce the same intervals as those reported by the vaccine trials and Wald approximation. Moreover, an independent binomial model with uninformative (e.g. uniform) priors for $\pi_v$ and $\pi_c$ would produce the pooled Wald intervals.
\begin{figure}[!t]
\centering
\mbox{\includegraphics[width=1.9in]{astrazeneca.eps}}
\hspace{1px}
\mbox{\includegraphics[width=1.9in]{pfizer.eps}}
\hspace{1px}
\mbox{\includegraphics[width=1.9in]{moderna.eps}}
\caption{\color{Gray} \textbf{Estimated efficacy of COVID-19 vaccines}. Posterior probabilities for the conditional binomial model are plotted in red, with shaded areas representing the 95\% credible intervals. Blue curves are for when $n$ is set to $t_v+t_c$ and correspond with pooled Wald approximation.}
\label{fig:post}
\end{figure}
\subsection*{Bias in case classification}
So far we have assumed no bias in classification of the cases, however, imperfect diagnostic procedure could lead to misclassification of the infected and uninfected individuals. In this section we examine the effect of classification bias on estimation of the efficacy.
It is worth noting that equation \ref{eqn:lambda} requires the observed infection rate $T$ to be greater than the false positive rate $c_1=1-Sp$. This relates to the `false positive paradox', which implies that the accuracy of a diagnostic test is compromised if the test is used in a population where the incidence of the disease is lower than the false positive rate of the test itself. Furthermore, false negatives could dominate at low incidence rates: when the disease risk is low, the majority of the tests are negative, and a small false negative rate could lead to a situation where false negatives outnumber the positive cases. These concepts are further explained in Note 1.
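Equation \ref{eqn:lambda} itself is not reproduced in this excerpt; as an illustrative stand-in, the classical Rogan--Gladen correction inverts the same relation $T=Se\,\pi+(1-Sp)(1-\pi)$ between the apparent and true rates, and breaks down in exactly the regime described above, when $T \le 1-Sp$.

```python
def corrected_prevalence(T, Se, Sp):
    """Rogan-Gladen correction: invert T = Se*pi + (1 - Sp)*(1 - pi).
    Valid only when the observed rate exceeds the false positive rate."""
    if T <= 1 - Sp:
        raise ValueError("observed rate must exceed the false positive rate 1 - Sp")
    return (T + Sp - 1) / (Se + Sp - 1)

# round trip at a low true rate, with imperfect sensitivity and specificity
pi_true = 0.004
Se, Sp = 0.95, 0.999
T = Se * pi_true + (1 - Sp) * (1 - pi_true)   # apparent (test-positive) rate
pi_hat = corrected_prevalence(T, Se, Sp)
```

Note how close $T$ sits to $1-Sp$ here: at low incidence even a 0.1\% false positive rate consumes a large share of the observed positives.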
\begin{figure}[!t]
\centering
\mbox{\includegraphics[width=2.7in]{alpha-post-Sp0.999.eps}}
\hspace{1px}
\mbox{\includegraphics[width=2.7in]{alpha-post-Se0.95.eps}}
\caption{\color{Gray} \textbf{Effect of imperfect diagnostic procedure}. Misclassification error biases the vaccine efficacy rate. Left plot shows the distributions for Se=1 and Sp=0.999, while the right plot is for Se=0.95 and Sp=1, with $n$=50,000 in both. True efficacy rate is assumed at 70\%. Imperfect specificity, however small, could have disastrous effects when incidence rate is low, whereas lack of sensitivity consistently inflates the efficacy rate.}
\label{fig:figure-3}
\end{figure}
Figure \ref{fig:figure-3} illustrates the effect of classification bias on the posterior probability of the vaccine efficacy. The left plot shows the impact of a very small reduction in specificity to 0.999 (or increase in false positive rate), while the right-hand plot shows the effect of a reduction in sensitivity to 0.95 (or increase in false negative rate). A small loss of specificity could lead to serious underestimation of the effect size as noted by \cite{LACHENBRUCH1998569,tmi.13351}, but it could further lead to complete loss of precision when the incidence rate is low. Loss of sensitivity results in overestimation of the efficacy irrespective of the disease rate. In these plots, we have considered a larger reduction in sensitivity, not only because reduction in specificity has a more dramatic effect, but also because diagnostic assays typically have relatively higher specificity than sensitivity, not least due to specimen collection, insufficient viral load, stage of the disease, etc. \cite{pmid33301459} However, the effect of loss of sensitivity is consistently toward shifting the mode in equation \ref{eqn:mode}, or MAP, to higher values of $\alpha$, even at low incidence rates where the negative predictive value is high.
\begin{tcolorbox}[float, drop shadow, title=Note 1,sidebyside,sidebyside align=top,lower separated=false]
J. Balayla \cite{pmid33027310} noted that there exists a \textit{prevalence threshold} below which the positive predictive value (PPV) of a diagnostic test drops precipitously relative to the prevalence. This means that at too low a prevalence a positive test result could more likely be a false positive than a true positive. More underappreciated is the impact of the negative predictive value (NPV). Though, at low incidence rates, the negative predictive value is nearly 100\%, a small loss in sensitivity could still have a marked effect, as the negative tests vastly outnumber the positive tests. We could even have a situation where the false negatives outnumber the true and false positives. To avoid these pitfalls, the participants are pre-selected for their symptoms before confirmation with the assay. Though this raises the pre-test probability, it could cause collider bias \cite{pmid33184277}.
\tcblower
\includegraphics[height=.3\textheight,width=\linewidth,valign=t]{ppv.eps}
\captionof{figure}{Positive (red) and negative (blue) predictive values are plotted in terms of population prevalence. Solid lines correspond to a diagnostic test with Se=Sp=0.99; dashed lines to Se=Sp=0.95. Vertical lines show the prevalence thresholds.}\label{fig:ppv}
\end{tcolorbox}
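The quantities in Note 1 and Figure \ref{fig:ppv} follow directly from Bayes' rule; the sketch below (illustrative, not part of the original analysis) computes PPV and NPV, together with the closed-form prevalence threshold as reported by Balayla (the transcription of that formula is ours and should be checked against the cited work).

```python
def ppv(p, Se, Sp):
    # positive predictive value at prevalence p
    return Se * p / (Se * p + (1 - Sp) * (1 - p))

def npv(p, Se, Sp):
    # negative predictive value at prevalence p
    return Sp * (1 - p) / (Sp * (1 - p) + (1 - Se) * p)

def prevalence_threshold(Se, Sp):
    # closed form attributed to Balayla (transcribed, not derived here)
    return ((Se * (1 - Sp)) ** 0.5 + Sp - 1) / (Se + Sp - 1)

# at 0.1% prevalence even a 99%-sensitive, 99%-specific test
# yields mostly false positives among its positive calls
low_ppv = ppv(0.001, 0.99, 0.99)
```

With these numbers the PPV at 0.1\% prevalence is under 10\%, while the NPV exceeds 99.9\%, matching the qualitative picture in the Note.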
\section*{Discussion}
The base rate fallacy occurs in situations where base rate information is ignored in favour of individuating information. In probability terms, it often arises when $P(A|B)$ is confused or used interchangeably with $P(B|A)$, ignoring the prior probability $P(A)$; e.g. the probability of having a rare disease given a positive test is wrongly equated to the probability of a positive test given the disease (the diagnostic sensitivity), ignoring the low prior probability of the disease itself. We showed that, in estimating vaccine efficacy when the disease rate is low, not only could diagnostic error have deleterious effects, but failure to appropriately integrate the base rate, or incidence rate, of the disease into the calculation could also lead to underestimation of the uncertainty.
Vaccine efficacy is defined in terms of the risk ratio $\pi_v/\pi_c$, that is, the ratio of two binomial proportions. Ratio distributions are known to have undefined variances; nevertheless, the pooled Wald method has traditionally been used to approximate the variance of the risk ratio. In this article, we used a parametrisation that makes the dependence of the efficacy on the disease prevalence explicit, without recourse to priors for $\pi_v$ and $\pi_c$. In particular, improper priors for $\pi_v$ and $\pi_c$ could lead to underestimation of the variance. We conditioned $t_c$ on $t=t_c+t_v$ and treated $t$ as another random variable. The resulting compound probability $t_c \sim \mathrm{Bin}(n,\pi/(2-\alpha))$ is over-dispersed and better captures the dependence of the variance on $\pi$, whereas pooled Wald confidence intervals are largely insensitive to $\pi$ when $\pi$ is small.
The Wald method is intended as a large-sample approximation; however, the bulk of the life sciences deals with small sample sizes. Therefore, it is likely that the confidence intervals reported in the literature for the risk ratio (and odds ratio) are overly optimistic. By analogy with equations \ref{eqn:cramer} and \ref{eqn:cramerCIs}, one could define new confidence intervals for the risk ratio by substituting RR=$(n_c/n_v)(1-\alpha)$ for unequal sized groups in the Fisher information. The results can be written as
\begin{equation}
95\% \textrm{CL}: \textrm{RR}\pm 1.96 \frac{n_c}{n_v}\big(1+\frac{t_v}{t_c}\big) \sqrt{\frac{1+t_v/t_c-\pi}{t_v+t_c}}, \label{eqn:fisherCIs}
\end{equation}
where $\pi$=$(t_v+t_c)/(n_v+n_c)$. The above intervals on the risk ratio are generally wider than, but converge to, those of the pooled Wald method when the sample size is large. They may be preferred to those obtained from equation \ref{eqn:RR} when the sample size is small or the relative risk is low. In particular, for a fixed sample size, as RR nears zero the upper bound in equation \ref{eqn:fisherCIs} remains conservative while the lower bound takes negative values and becomes undetermined. In contrast, as RR nears zero, the pooled Wald intervals remain positive and shrink rapidly, giving the counterintuitive impression of increased precision when the incidence rate is low (similar to figure \ref{fig:post}). However, as with the Wald method, the confidence intervals in equation \ref{eqn:fisherCIs} were derived using a normal approximation, which may not hold when RR deviates significantly from 1.
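Equation \ref{eqn:fisherCIs} is straightforward to transcribe; in the sketch below (ours, not part of the paper) RR is computed as $(n_c/n_v)(t_v/t_c)$, reading $1-\alpha$ as the case-split estimate $t_v/t_c$.

```python
import math

def rr_fisher_ci(t_v, t_c, n_v, n_c, z=1.96):
    """95% CI for the risk ratio, transcribing equation (fisherCIs)."""
    rr = (n_c / n_v) * (t_v / t_c)     # (n_c/n_v)(1 - alpha), with 1 - alpha ~ t_v/t_c
    pi = (t_v + t_c) / (n_v + n_c)
    half = (z * (n_c / n_v) * (1 + t_v / t_c)
            * math.sqrt((1 + t_v / t_c - pi) / (t_v + t_c)))
    return rr - half, rr + half

lo1, hi1 = rr_fisher_ci(10, 90, 10000, 10000)
lo2, hi2 = rr_fisher_ci(40, 360, 40000, 40000)   # same rates, 4x the counts
```

The interval is symmetric about RR on the natural scale, and quadrupling the case counts at fixed rates halves its width, consistent with the $1/\sqrt{t_v+t_c}$ dependence in the formula.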
Our findings have implications for pre-planning the sample sizes for phase 3 efficacy trials. Sample size calculation in case-control design is often stated as ``How many samples are needed to be randomised in order to conclude with 100$(1-\beta)$\% power that a treatment difference of size $\Delta$ exists between the two groups at the level of significance of $\alpha$?". Therefore calculation of sample size requires specification of the null hypothesis (expected treatment effect) and the alternative hypothesis defined in terms of the difference in treatment outcomes. Here, $\alpha$ or type I error is the probability of rejecting the null hypothesis where we should not, and $\beta$ or type II error is the probability of failing to reject the null hypothesis where we should reject it. Under the assumption of normality of the treatment outcome, a generic formula for per-group sample size is derived in terms of the two-sample t-test: \cite{10.1093epirev}\begin{equation}
n=\frac{2\sigma^2}{\Delta^2}(z_{1-\alpha/2}+z_{1-\beta})^2, \label{eqn:ssize}
\end{equation}
where $z$-scores determine the critical values for the standard normal distribution. Therefore one needs to specify the variance of the measured variable, the desired rates of error and the magnitude of the treatment difference. Where the measured variable is binary (infected or uninfected), the test statistic reduces to the test for the difference between two proportions. Where the efficacy is of interest, the log normal approximation of the risk ratio from equation \ref{eqn:RR} may be used to define the test statistic. O'Neill \cite{pmid3231951} calculated the required sample sizes for a two-sided test given the pooled Wald variance in equation \ref{eqn:RR}. We re-write the \textit{total} sample size in this form:
\begin{equation}
n=2\frac{(z_{1-\alpha/2}+z_{1-\beta})^2}{d^2}\Big(\frac{(2-\textrm{VE})^2}{\pi(1-\textrm{VE})}-2\Big), \label{eqn:ssizeWald}
\end{equation}
where
\begin{equation*}
d=\ln\Big(\Delta/(2(1-\textrm{VE}))+\sqrt{\big(\Delta/(2(1-\textrm{VE}))\big)^2+1}\Big).
\end{equation*}
Here VE is the anticipated efficacy and $\Delta$ is the expected difference in VE in absolute terms. We showed, however, that at low prevalence rates equation \ref{eqn:RR} significantly underestimates the variance. Using an inadequately small variance could lead to underestimation of the type I and type II errors, potentially resulting in the winner's curse in underpowered studies \cite{pmid18633328,pmid23571845}. If instead the proposed compound binomial model is used, one could simply substitute the variance in equation \ref{eqn:cramer}. As in \cite{pmid3231951}, under the assumption of normality and assuming $\Delta$ is the difference between the upper and lower limits of the confidence interval, substituting the margin of error as $\Delta/2=z\sigma$ in equation \ref{eqn:cramer} gives
\begin{equation}
n\ge 4 \frac{(z_{1-\alpha/2}+z_{1-\beta})^2}{\pi \Delta^2} (2-\textrm{VE})^2(2-\textrm{VE}-\pi). \label{eqn:ssizeCramer}
\end{equation}
This equation sets out the \textit{total} required sample size for a perfect diagnostic test, to be equally divided between the two groups.
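Both sample-size formulas can be transcribed directly; note that the printed $d$ is exactly $\mathrm{asinh}\big(\Delta/(2(1-\mathrm{VE}))\big)$. The sketch below (illustrative, not the paper's provided code) reproduces entries of Table \ref{tab:table-2} with $z_{1-\alpha/2}=1.96$ and $z_{1-\beta}=0.84$.

```python
import math

def n_wald(VE, Delta, pi, za=1.96, zb=0.84):
    """Total sample size from equation (ssizeWald), following O'Neill."""
    d = math.asinh(Delta / (2 * (1 - VE)))   # d = ln(x + sqrt(x^2 + 1))
    return 2 * (za + zb)**2 / d**2 * ((2 - VE)**2 / (pi * (1 - VE)) - 2)

def n_cramer_rao(VE, Delta, pi, za=1.96, zb=0.84):
    """Total sample size from equation (ssizeCramer)."""
    return 4 * (za + zb)**2 / (pi * Delta**2) * (2 - VE)**2 * (2 - VE - pi)

# e.g. VE = 0, Delta = 10%, event rate 0.5 gives the first entry of Table 2
n1 = n_cramer_rao(0.0, 0.1, 0.5)    # ~37,632
```

At low event rates the Wald-based size falls well below the Cramér--Rao one, mirroring Figure \ref{fig:figure-4}.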
The proposed Cramér–Rao bound based formula \ref{eqn:ssizeCramer} assumes normality of the distributions of the null and the alternative hypotheses; however, the binomial likelihood function is asymmetric, as are the pooled Wald intervals (see \cite{pmid3231951}), and becomes more so as the efficacy increases. Notwithstanding these limitations, we plug in the critical values for $\alpha=0.05$ and power of $100(1-\beta)=80$ per cent ($z_{1-\alpha/2}=1.96$ and $z_{1-\beta}=0.84$) in equations \ref{eqn:ssizeWald} and \ref{eqn:ssizeCramer}. The resulting sample sizes are plotted in Figure \ref{fig:figure-4} for $\Delta=10\%$ and different prevalence and efficacy rates.
\begin{figure}[t]
\centering
\mbox{\includegraphics[width=2.7in]{sample-size1.eps}}
\hspace{1px}
\mbox{\includegraphics[width=2.7in]{sample-size2.eps}}
\caption{\color{Gray} \textbf{Sample size relative to disease prevalence}. Total number of samples required to detect, with 80\% power and a level of significance of $\alpha=0.05$, a difference in the efficacy of size $\Delta=10\%$. Solid lines represent the Cramér–Rao bound and dashed lines the pooled Wald approximation. On the left, the x-axis is on a logarithmic scale; the y-axis is logarithmic in both plots.}
\label{fig:figure-4}
\end{figure}
\begin{table}[t]
\caption{Total sample sizes needed to conclude with 80\% power and $\alpha$=0.05 a significant effect size}
\label{tab:table-2}
\begin{tabular}{cc@{\qquad}ccccccc}
\toprule
\multicolumn{2}{c}{effect size} & \multicolumn{7}{c}{event rate} \\
\cmidrule{1-2} \cmidrule{3-9}
VE & $\Delta$ & 0.5 & 0.1 & 0.05 & 0.01 & 0.005 & 0.001 & 0.0005 \\
\midrule
0\% & 10\% &37,632 & 238,336 & 489,216 & 2,496,256 & 5,005,056 & 25,075,456 & 50,163,456 \\
0\% & 20\% &9,408 & 59,584 & 122,304 & 624,064 & 1,251,264 & 6,268,864 & 12,540,864 \\
0\% & 30\% &4,181 & 26,482 & 54,357 & 277,362 & 556,117 & 2,786,162 & 5,573,717 \\
0\% & 40\% &2,352 & 14,896 & 30,576 & 156,016 & 312,816 & 1,567,216 & 3,135,216 \\
\midrule
30\% & 10\% &21,751 & 145,009 & 299,080 & 1,531,654 & 3,072,371 & 15,398,105 & 30,805,273 \\
30\% & 20\% &5,438 & 36,252 & 74,770 & 382,913 & 768,093 & 3,849,526 & 7,701,318 \\
30\% & 30\% &2,417 & 16,112 & 33,231 & 170,184 & 341,375 & 1,710,901 & 3,422,808 \\
30\% & 40\% &1,359 & 9,063 & 18,693 & 95,728 & 192,023 & 962,382 & 1,925,330 \\
\midrule
60\% & 10\% &11,064 & 79,905 & 165,957 & 854,372 & 1,714,890 & 8,599,037 & 17,204,221 \\
60\% & 20\% &2,766 & 19,976 & 41,489 & 213,593 & 428,723 & 2,149,759 & 4,301,055 \\
60\% & 30\% &1,229 & 8,878 & 18,440 & 94,930 & 190,543 & 955,449 & 1,911,580 \\
60\% & 40\% &691 & 4,994 & 10,372 & 53,398 & 107,181 & 537,440 & 1,075,264 \\
\midrule
90\% & 10\% &4,553 & 37,946 & 79,686 & 413,607 & 831,009 & 4,170,221 & 8,344,237 \\
90\% & 20\% &1,138 & 9,486 & 19,921 & 103,402 & 207,752 & 1,042,555 & 2,086,059 \\
90\% & 30\% &506 & 4,216 & 8,854 & 45,956 & 92,334 & 463,358 & 927,137 \\
90\% & 40\% &285 & 2,372 & 4,980 & 25,850 & 51,938 & 260,639 & 521,515 \\
\bottomrule
\end{tabular}
\end{table}
In Figure \ref{fig:figure-4} the relationship between the sample size and the incidence rate appears linear on the log-log scale, as the two follow a power law. However, while the two methods coincide at high incidence rates, the pooled Wald method significantly underestimates the sample sizes at low incidence rates, especially when the efficacy is high (note that the y-axis is on a logarithmic scale). Contrasting Figure \ref{fig:figure-4} with the case rates in Table \ref{tab:table-1}, it is clear that, to achieve the narrow confidence bounds that Pfizer and Moderna have reported, they would have needed several times more samples under the pooled Wald method, and an order of magnitude more under the Cramér–Rao bound. If the event rate in the trial were to differ from that in the general population, or if the possibility of misclassification were non-negligible, the resulting discrepancy in incidence rates could cause variations in the variance large enough that the trial population becomes unrepresentative of the larger population. Table \ref{tab:table-2} provides the total sample sizes from the Cramér–Rao bound formula \ref{eqn:ssizeCramer} for different levels of efficacy and effect size. The sample size is clearly also very sensitive to the choice of $\Delta$; an investigator must therefore be wary of misspecifying the anticipated treatment difference \cite{10.1093epirev}.
Throughout the Methods, we incorporated the misclassification error in the calculations in order to emphasise the importance of accounting for classification bias when the disease is rare. We showed that, while lack of diagnostic sensitivity consistently inflates the estimated efficacy rates, imperfect specificity results in serious loss of accuracy and precision at low disease risks. Case definition for COVID-19 is a particularly major caveat. The three vaccine trials broadly follow the FDA definition of the disease. For the primary end points, symptomatic cases are identified by surveillance or are self-reported, and are subsequently confirmed with RT-PCR. Pre-selecting participants for the PCR assay could create the possibility of collider bias \cite{pmid33184277}. Moreover, the highly non-specific symptoms of COVID-19, which include symptoms as common as cough and congestion, could create the perfect conditions for misclassification. False negatives due to e.g. selective reporting, specimen collection, etc., and PCR false positives due to e.g. remnant viral RNA could be introduced if the test is not repeated \cite{pmid33301459,balayla2020bayesian}. Much remains unknown about COVID-19 and its many symptoms and presentations. It is therefore recommended to account for classification bias in the calculation. The code for calculating the posterior probability of the vaccine efficacy, which can simultaneously marginalise over the diagnostic sensitivity and specificity, is provided.
\section*{Code}
R code for the posterior probability of the efficacy was modified from code published in \cite{608719}. It is provided in Appendix along with functions to calculate the sample sizes from equations \ref{eqn:ssizeWald} and \ref{eqn:ssizeCramer}.
\section*{Acknowledgments}
The author's position at the University of Cambridge is funded by CRUK grant C60100/A23916. The author is grateful for the helpful comments received from the Cancer Mutagenesis group at the MRC Cancer Unit.
\nolinenumbers
\section{Introduction}
The Fuchsian differential equation is a linear differential equation whose singularities are all regular.
It frequently appears in a range of problems in mathematics and physics.
For example, the famous Gauss hypergeometric differential equation is a canonical form of the second-order Fuchsian differential equation with three singularities on the Riemann sphere $\Cplx \cup \{ \infty \}$.
Global properties of solutions, i.e., the monodromy, often play decisive roles in the applications of these equations in physics and other areas of mathematics.
Heun's differential equation is a canonical form of a second-order Fuchsian equation with four singularities, which is given by
\begin{equation}
\frac{d^2y}{dz^2} + \left( \frac{\gamma}{z}+\frac{\delta }{z-1}+\frac{\epsilon}{z-t}\right) \frac{dy}{dz} +\frac{\alpha \beta z -q}{z(z-1)(z-t)} y=0,
\label{Heun}
\end{equation}
with the condition
\begin{equation}
\gamma +\delta +\epsilon =\alpha +\beta +1.
\label{Heuncond}
\end{equation}
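For concreteness, the local solution of Eq.(\ref{Heun}) at $z=0$ with exponent $0$ can be generated from a three-term recurrence for its Taylor coefficients. The recurrence below is obtained by substituting $y=\sum_n c_n z^n$ into Eq.(\ref{Heun}) multiplied by $z(z-1)(z-t)$ (the derivation is ours, not part of the text), and the sketch checks the truncated series against the equation numerically, using the case $\gamma=\delta=\epsilon=1$, $\alpha=3/2$, $\beta=1/2$ treated later in the paper.

```python
def heun_coeffs(gamma, delta, eps, alpha, beta, q, t, nmax):
    """Taylor coefficients c_0..c_nmax at z = 0 of the local solution with
    c_0 = 1 (exponent 0; assumes gamma is not a non-positive integer)."""
    c = [1.0, q / (t * gamma)]
    for n in range(1, nmax):
        num = ((n * ((n - 1 + gamma) * (1 + t) + delta * t + eps) + q) * c[n]
               - ((n - 1) * (n - 2 + gamma + delta + eps) + alpha * beta) * c[n - 1])
        c.append(num / (t * (n + 1) * (n + gamma)))
    return c

g, d, e, a, b, q, t = 1.0, 1.0, 1.0, 1.5, 0.5, 0.7, 3.0
c = heun_coeffs(g, d, e, a, b, q, t, 40)

# residual of Heun's equation at a point inside the disc of convergence
z = 0.2
y = sum(cn * z**n for n, cn in enumerate(c))
yp = sum(n * cn * z**(n - 1) for n, cn in enumerate(c))
ypp = sum(n * (n - 1) * cn * z**(n - 2) for n, cn in enumerate(c))
residual = (z * (z - 1) * (z - t) * ypp
            + (g * (z - 1) * (z - t) + d * z * (z - t) + e * z * (z - 1)) * yp
            + (a * b * z - q) * y)
```

The series converges for $|z|<\min(1,|t|)$, so a 40-term truncation at $z=0.2$ leaves a residual at the rounding level.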
Several approaches for analyzing Heun's equation are known, including the Heun polynomial (\cite{Ron}), the Heun function (\cite{Ron}),
perturbation from the hypergeometric equation (\cite{Tak2}) and finite-gap integration (\cite{TV,GW,Smi,Tak3}).
Finite-gap integration is applicable for the case $\gamma ,\delta ,\epsilon , \alpha - \beta \in \Zint +1/2$, $t \in \Cplx \setminus \{0,1 \} $ and all $q$, and results on the integral representation of solutions (\cite{Tak1}), the Bethe Ansatz (\cite{Tak1}), the Hermite-Krichever Ansatz (\cite{BE,Tak4}), the monodromy formulae by hyperelliptic integrals (\cite{Tak3}), the hyperelliptic-to-elliptic reduction formulae (\cite{Tak4}) and relationships with the Darboux transformation (\cite{Tak5}) have been obtained.
In this paper, we obtain integral formulae of solutions for the case $\gamma ,\delta ,\epsilon , \alpha +1/2, \beta +1/2 \in \Zint $, $t \in \Cplx \setminus \{0,1 \} $ and all $q$, which then facilitates a calculation of the monodromy.
To obtain these formulae, we need to consider a Fuchsian system of differential equations with four singularities $0,1,t,\infty $,
\begin{equation}
\frac{dY}{dz}=\left( \frac{A_0}{z}+\frac{A_1}{z-1}+\frac{A_t}{z-t} \right) Y,
\label{eq:dYdzAzY00}
\end{equation}
where $A_0$, $A_1$, $A_t$ are $2 \times 2$ matrices with constant elements.
We consider the case that $\det A_0=\det A_1=\det A_t=0$, and $A_0+A_1+A_t=-\mbox{diag} (\kappa _1, \kappa _2)$ is a diagonal matrix.
Let $\theta _i$ $(i=0,1,t)$ denote the eigenvalues of $A_i$ other than $0$, and $\theta _{\infty } = \kappa _1 -\kappa _2$.
Under some assumptions the sixth Painlev\'e system is obtained by the monodromy preserving deformation.
Here the sixth Painlev\'e system is defined by
\begin{equation}
\frac{d\lambda }{dt} =\frac{\partial H_{VI}}{\partial \mu}, \quad \quad
\frac{d\mu }{dt} =-\frac{\partial H_{VI}}{\partial \lambda} ,
\label{eq:Psys}
\end{equation}
with the Hamiltonian
\begin{align}
H_{VI} = & \frac{1}{t(t-1)} \left\{ \lambda (\lambda -1) (\lambda -t) \mu^2 \right. \label{eq:P6} \\
& \left. -\left\{ \theta _0 (\lambda -1) (\lambda -t)+\theta _1 \lambda (\lambda -t) +(\theta _t -1) \lambda (\lambda -1) \right\} \mu +\kappa _1(\kappa _2 +1) (\lambda -t)\right\} .\nonumber
\end{align}
By eliminating $\mu $ in Eq.(\ref{eq:Psys}), we obtain the sixth Painlev\'e equation for $\lambda $,
\begin{align}
\frac{d^2\lambda }{dt^2} = & \frac{1}{2} \left( \frac{1}{\lambda }+\frac{1}{\lambda -1}+\frac{1}{\lambda -t} \right) \left( \frac{d\lambda }{dt} \right) ^2 -\left( \frac {1}{t} +\frac {1}{t-1} +\frac {1}{\lambda -t} \right)\frac{d\lambda }{dt} \label{eq:P6eqn} \\
& +\frac{\lambda (\lambda -1)(\lambda -t)}{t^2(t-1)^2}\left\{ \frac{(1-\theta _{\infty})^2}{2} -\frac{\theta _{0}^2}{2}\frac{t}{\lambda ^2} +\frac{\theta _{1}^2}{2}\frac{(t-1)}{(\lambda -1)^2} +\frac{(1-\theta _{t}^2)}{2}\frac{t(t-1)}{(\lambda -t)^2} \right\}, \nonumber
\end{align}
which is a non-linear ordinary differential equation of order two whose solutions do not have movable singularities other than poles.
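As a numerical sanity check (not part of the original text), one can verify that trajectories of the Hamiltonian system (\ref{eq:Psys}) satisfy Eq.(\ref{eq:P6eqn}). The sketch below uses central differences for the partial derivatives of $H_{VI}$, a short Runge--Kutta integration to estimate $\lambda''$, and arbitrary generic parameter values and initial data.

```python
# arbitrary generic parameter values and initial data (illustrative only)
th0, th1, tht, thinf = 0.3, 0.7, 0.4, 1.9
k1 = (thinf - th0 - th1 - tht) / 2.0
k2 = -(thinf + th0 + th1 + tht) / 2.0

def H(lam, mu, t):
    # Hamiltonian H_VI as printed above
    return (lam * (lam - 1) * (lam - t) * mu**2
            - (th0 * (lam - 1) * (lam - t) + th1 * lam * (lam - t)
               + (tht - 1) * lam * (lam - 1)) * mu
            + k1 * (k2 + 1) * (lam - t)) / (t * (t - 1))

def vf(t, y, h=1e-6):
    # Hamiltonian vector field (lam' = dH/dmu, mu' = -dH/dlam), central differences
    lam, mu = y
    return [(H(lam, mu + h, t) - H(lam, mu - h, t)) / (2 * h),
            -(H(lam + h, mu, t) - H(lam - h, mu, t)) / (2 * h)]

def rk4(t0, y0, t1, nsteps=20):
    t, y, dt = t0, list(y0), (t1 - t0) / nsteps
    for _ in range(nsteps):
        a = vf(t, y)
        b = vf(t + dt / 2, [y[i] + dt / 2 * a[i] for i in (0, 1)])
        c = vf(t + dt / 2, [y[i] + dt / 2 * b[i] for i in (0, 1)])
        d = vf(t + dt, [y[i] + dt * c[i] for i in (0, 1)])
        y = [y[i] + dt / 6 * (a[i] + 2 * b[i] + 2 * c[i] + d[i]) for i in (0, 1)]
        t += dt
    return y

t0, y0, h = 3.0, (5.0, 0.2), 1e-3
lam_ddot = (vf(t0 + h, rk4(t0, y0, t0 + h))[0]
            - vf(t0 - h, rk4(t0, y0, t0 - h))[0]) / (2 * h)

lam, mu, t = y0[0], y0[1], t0
lam_dot = vf(t0, y0)[0]
rhs = (0.5 * (1 / lam + 1 / (lam - 1) + 1 / (lam - t)) * lam_dot**2
       - (1 / t + 1 / (t - 1) + 1 / (lam - t)) * lam_dot
       + lam * (lam - 1) * (lam - t) / (t**2 * (t - 1)**2)
       * ((1 - thinf)**2 / 2 - th0**2 / 2 * t / lam**2
          + th1**2 / 2 * (t - 1) / (lam - 1)**2
          + (1 - tht**2) / 2 * t * (t - 1) / (lam - t)**2))
```

The finite-difference estimate of $\lambda''$ along the Hamiltonian flow agrees with the right-hand side of Eq.(\ref{eq:P6eqn}) to the accuracy of the discretisation.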
It is known that the sixth Painlev\'e system has a symmetry, and the action of this symmetry is called the Okamoto-B\"acklund transformation.
The sixth Painlev\'e system has two-parameter solutions for the case $\theta_0= \theta_1= \theta_t= 1-\theta_{\infty }=0$, which are called Picard's solution.
By the Okamoto-B\"acklund transformation of the sixth Painlev\'e system, Picard's solutions are transformed to the solutions for the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) \in O_1 \cup O_2$, where
\begin{align}
& O_1= \left\{ (\theta _0 , \theta _1 , \theta _t, 1- \theta _{\infty} ) |
\theta _0 , \theta _1 , \theta _t, 1- \theta _{\infty} \in \Zint +\frac{1}{2} \right\}, \\
& O_2 = \left \{(\theta _0 , \theta _1 , \theta _t, 1- \theta _{\infty} ) \left|
\begin{array}{ll}
\theta _0 , \theta _1 , \theta _t, 1- \theta _{\infty} \in \Zint \\
\theta _0 + \theta _1 + \theta _t + 1- \theta _{\infty} \in 2 \Zint
\end{array}
\right. \right\}. \nonumber
\end{align}
For the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) \in O_1$, solutions of the Fuchsian system (Eq.(\ref{eq:dYdzAzY00})) are expressed in the form of the Hermite-Krichever Ansatz, which is a consequence of results presented in \cite{TakP}.
In the present study, we investigate solutions of the Fuchsian system for the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) \in O_2$.
These solutions will be shown to have integral representations whose integrands are functions in the form of the Hermite-Krichever Ansatz.
In particular we obtain explicit solutions for the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) =(0,0,0,0)$, and we can calculate the monodromy explicitly.
By considering the monodromy preserving deformation directly, we recover Picard's solution of the sixth Painlev\'e equation.
The integral representations of solutions to the Fuchsian system follow from the results by Dettweiler-Reiter \cite{DR1,DR2} and Filipuk \cite{Fil} on the middle convolution (see section \ref{sec:MC}).
By considering special cases, we obtain integral formulae of solutions to Heun's equation for the case $\gamma ,\delta ,\epsilon , \alpha +1/2, \beta +1/2 \in \Zint $, $t \in \Cplx \setminus \{0,1 \} $ and all $q$, which are then available for calculating the monodromy.
For the case $\gamma =\delta =\epsilon =1, \alpha =3/2, \beta =1/2 $, we have explicit representations of the integral, and so we obtain explicit representations of the monodromy.
This paper is organized as follows:
In section \ref{sec:FS}, we introduce notation for the Fuchsian system with four singularities.
In section \ref{sec:MC}, we review results on the middle convolution due to Dettweiler-Reiter and Filipuk, and combine their results.
In section \ref{sec:HKA}, we recall the Hermite-Krichever Ansatz.
In section \ref{sec:intrepFs}, we obtain integral representations of solutions to the Fuchsian system with four singularities for the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) \in O_2$, whose integrands are functions in the form of the Hermite-Krichever Ansatz.
In section \ref{sec:intrepFs0000}, we have explicit representations of solutions for the case $(\theta _0 , \theta _1 , \theta _t, 1-\theta _{\infty} ) =(0,0,0,0)$, and we calculate the monodromy explicitly.
Furthermore, by considering the monodromy preserving deformation directly, we recover Picard's solution of the sixth Painlev\'e equation.
In section \ref{sec:intrepHeun}, we obtain integral formulae of solutions to Heun's equation for the case $\gamma ,\delta ,\epsilon , \alpha - \beta -1/2 \in \Zint $, $t \in \Cplx \setminus \{0,1 \} $ and all $q$, including the case $\gamma =\delta =\epsilon =1, \alpha =3/2, \beta =1/2 $.
\section{Fuchsian system with four singularities} \label{sec:FS}
We consider a system of ordinary differential equations,
\begin{equation}
\frac{dY}{dz}=A(z)Y, \quad A(z)=\frac{A_0}{z}+\frac{A_1}{z-1}+\frac{A_t}{z-t} =
\left(
\begin{array}{ll}
a_{11}(z) & a_{12}(z) \\
a_{21}(z) & a_{22}(z)
\end{array}
\right) ,
\label{eq:dYdzAzY}
\end{equation}
where $Y= {}^t (y_1(z), y_2(z)) $, $t\neq 0,1$, and $A_0$, $A_1$, $A_t$ are $2 \times 2$ matrices with constant elements.
Then Eq.(\ref{eq:dYdzAzY}) is Fuchsian, i.e., all of its singularities on the Riemann sphere $\Cplx \cup \{ \infty \} $ are regular, and it may have regular singularities at $z=0,1,t,\infty$ on this sphere.
Set
\begin{align}
& A_0= \left(
\begin{array}{ll}
u_0+\theta _0 & -w_0 \\
u_0(u_0+\theta _0)/ w_0 & -u_0
\end{array}
\right) , \quad
A_1= \left(
\begin{array}{ll}
u_1+\theta _1 & -w_1 \\
u_1(u_1+\theta _1)/w_1 & -u_1
\end{array}
\right) , \label{eq:A0A1AtP} \\
& A_t= \left(
\begin{array}{ll}
u_t+\theta _t & -w_t \\
u_t(u_t+\theta _t)/w_t & -u_t
\end{array}
\right) , \quad \nonumber
\end{align}
where $u_0, w_0, u_1, w_1, u_t, w_t$ are defined by
\begin{align}
& w_0 = \frac{k\lambda}{t}, \quad w_1= -\frac{k(\lambda-1)}{t-1}, \quad w_t = \frac{k(\lambda-t)}{t(t-1)} , \label{eq:wugen}\\
& u_0=-\theta _0 +\frac{\lambda}{t\theta _{\infty}} [ \lambda(\lambda-1)(\lambda-t)\mu^2 +\{ 2\kappa _1 (\lambda-1)(\lambda-t)-\theta _1(\lambda-t) \nonumber \\
& \quad \quad \quad \quad \quad \quad \quad \quad -t\theta _t(\lambda-1) \} \mu +\kappa _1 \{\kappa _1(\lambda-t-1)-\theta _1-t\theta _t\} ] ,\nonumber \\
& u_1 =-\theta _1 -\frac{\lambda-1}{(t-1)\theta _{\infty}} [ \lambda(\lambda-1)(\lambda-t)\mu^2 + \{ 2\kappa _1 (\lambda-1)(\lambda-t)+(\theta _{\infty}-\theta _1 )(\lambda-t) \nonumber\\
& \quad \quad \quad \quad \quad \quad \quad \quad -t\theta _t (\lambda-1)\}\mu +\kappa _1 \{ \kappa _1 (\lambda-t+1)+\theta _0-(t-1)\theta _t\} ] ,\nonumber \\
& u_t= -\theta _t+\frac{\lambda-t}{t(t-1)\theta _{\infty}} [ \lambda(\lambda-1)(\lambda-t)\mu^2 + \{ 2\kappa _1 (\lambda-1)(\lambda-t) -\theta _1 (\lambda-t) \nonumber\\
& \quad \quad \quad \quad \quad \quad \quad \quad +t(\theta _{\infty}-\theta _t )(\lambda-1)\} \mu +\kappa _1 \{ \kappa _1 (\lambda-t+1)+\theta _0+(t-1)(\theta _{\infty}-\theta _t)\} ] ,\nonumber
\end{align}
and $\kappa _1= (\theta _{\infty } -\theta _0 -\theta _1 -\theta _t)/2$, $\kappa _2= -(\theta _{\infty } +\theta _0 +\theta _1 +\theta _t)/2$.
Note that the eigenvalues of $A_i$ $(i=0,1,t)$ are $0$ and $\theta _i$.
Set $A_{\infty }= -(A_0 +A_1 +A_t)$. Then
\begin{align}
A_{\infty }= \left(
\begin{array}{cc}
\kappa _1 & 0 \\
0 & \kappa _2
\end{array}
\right) . \label{def:Ainf}
\end{align}
We denote the Fuchsian system (Eq.(\ref{eq:dYdzAzY})) with Eqs.(\ref{eq:A0A1AtP}, \ref{eq:wugen}) by $D_Y(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu ;k )$.
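The stated properties of this parametrisation, namely $\det A_i=0$ with eigenvalues $\{0,\theta_i\}$ and $A_0+A_1+A_t=-\mathrm{diag}(\kappa_1,\kappa_2)$, can be verified numerically. The following sketch (ours, for illustration) transcribes Eqs.(\ref{eq:A0A1AtP}, \ref{eq:wugen}) with arbitrary generic values of the parameters.

```python
import numpy as np

# arbitrary generic values (any lam not in {0, 1, t} and thinf != 0 will do)
th0, th1, tht, thinf = 0.3, -0.7, 1.2, 0.9
lam, mu, t, k = 1.7, 0.45, 3.2, 2.0
k1 = (thinf - th0 - th1 - tht) / 2
k2 = -(thinf + th0 + th1 + tht) / 2

w0, w1, wt = k * lam / t, -k * (lam - 1) / (t - 1), k * (lam - t) / (t * (t - 1))
P = lam * (lam - 1) * (lam - t)
u0 = -th0 + lam / (t * thinf) * (P * mu**2
     + (2 * k1 * (lam - 1) * (lam - t) - th1 * (lam - t) - t * tht * (lam - 1)) * mu
     + k1 * (k1 * (lam - t - 1) - th1 - t * tht))
u1 = -th1 - (lam - 1) / ((t - 1) * thinf) * (P * mu**2
     + (2 * k1 * (lam - 1) * (lam - t) + (thinf - th1) * (lam - t) - t * tht * (lam - 1)) * mu
     + k1 * (k1 * (lam - t + 1) + th0 - (t - 1) * tht))
ut = -tht + (lam - t) / (t * (t - 1) * thinf) * (P * mu**2
     + (2 * k1 * (lam - 1) * (lam - t) - th1 * (lam - t) + t * (thinf - tht) * (lam - 1)) * mu
     + k1 * (k1 * (lam - t + 1) + th0 + (t - 1) * (thinf - tht)))

def Amat(u, th, w):
    return np.array([[u + th, -w], [u * (u + th) / w, -u]])

A0, A1, At = Amat(u0, th0, w0), Amat(u1, th1, w1), Amat(ut, tht, wt)
for A, th in ((A0, th0), (A1, th1), (At, tht)):
    assert abs(np.linalg.det(A)) < 1e-9          # eigenvalues are 0 and th
    assert abs(np.trace(A) - th) < 1e-9
assert np.allclose(-(A0 + A1 + At), np.diag([k1, k2]), atol=1e-8)
```

In particular the off-diagonal entries of $A_0+A_1+A_t$ vanish identically, which is the content of Eq.(\ref{def:Ainf}).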
By eliminating $y_2(z)$ in Eq.(\ref{eq:dYdzAzY}), we have a second-order linear differential equation,
\begin{align}
& \frac{d^2y_1(z)}{dz^2} + \left( \frac{1-\theta _0}{z}+\frac{1-\theta _1}{z-1}+\frac{1-\theta _t}{z-t}-\frac{1}{z-\lambda} \right) \frac{dy_1(z)}{dz} \label{eq:linP6} \\
& \quad \quad + \left( \frac{\kappa _1(\kappa _2 +1)}{z(z-1)}+\frac{\lambda (\lambda -1)\mu}{z(z-1)(z-\lambda)}-\frac{t (t -1)H}{z(z-1)(z-t)} \right) y_1(z)=0, \nonumber \\
& H=\frac{1}{t(t-1)}[ \lambda (\lambda -1) (\lambda -t)\mu ^2 -\{ \theta _0 (\lambda -1) (\lambda -t)+\theta _1 \lambda (\lambda -t) \nonumber \\
& \quad \quad \quad \quad \quad \quad \quad \quad +(\theta _t -1) \lambda (\lambda -1)\} \mu +\kappa _1 (\kappa _2 +1) (\lambda -t)], \nonumber
\end{align}
which we denote by $D_{y_1}(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$.
This equation has regular singularities at $z=0,1,t,\lambda ,\infty$. The exponents of the singularity $z=\lambda $ are $0,2$, and this singularity is apparent (i.e. non-logarithmic).
Note that the sixth Painlev\'e system
\begin{align}
\frac{d\lambda }{dt} =\frac{\partial H}{\partial \mu}, \quad \frac{d\mu }{dt} =-\frac{\partial H}{\partial \lambda}
\label{eq:P6sys}
\end{align}
describes the condition for the monodromy preserving deformation of Eq.(\ref{eq:dYdzAzY}) with respect to the variable $t$.
It is known that the sixth Painlev\'e system has symmetry of the extended affine Weyl group of type $F_{4}^{(1)}$ (\cite{Oka}), which is called the Okamoto-B\"acklund transformation.
In particular the sixth Painlev\'e system is invariant under Okamoto's transformation $s_2$ defined by
\begin{align}
s_2: \; & \theta _0 \rightarrow \kappa _1+ \theta _0, \; \; \theta _1 \rightarrow \kappa _1+ \theta _1, \; \; \theta _t \rightarrow \kappa _1+ \theta _t, \; \; \theta _{\infty } \rightarrow -\kappa _2, \\
& \lambda \rightarrow \lambda +\kappa _1/\mu ,\; \; \mu \rightarrow \mu, \; \; t \rightarrow t. \nonumber
\end{align}
Note that $s_2$ is involutive, i.e., $(s_2)^2=1$.
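The involutivity of $s_2$ can be checked in exact rational arithmetic; the sketch below (illustrative, with arbitrary rational test values) applies the map twice and recovers the starting point.

```python
from fractions import Fraction as F

def s2(v):
    """Okamoto's transformation s2 acting on
    (theta_0, theta_1, theta_t, theta_inf, lambda, mu); t is fixed."""
    t0, t1, tt, tinf, lam, mu = v
    k1 = (tinf - t0 - t1 - tt) / 2
    k2 = -(tinf + t0 + t1 + tt) / 2
    return (k1 + t0, k1 + t1, k1 + tt, -k2, lam + k1 / mu, mu)

# exact check that s2 composed with itself is the identity
v = (F(3, 10), F(-7, 10), F(6, 5), F(9, 10), F(17, 10), F(9, 20))
assert s2(s2(v)) == v
```

After one application the new $\kappa_1$ equals $-\kappa_1$, so the shift $\lambda \mapsto \lambda + \kappa_1/\mu$ is undone on the second application.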
\section{Middle convolution} \label{sec:MC}
Dettweiler and Reiter \cite{DR1,DR2} gave an algebraic analogue of Katz's middle convolution functor, and Filipuk \cite{Fil} applied it to the Fuchsian system with four singularities.
We review and combine these authors' results for the present setting.
Note that the results of Dettweiler and Reiter are valid for Fuchsian equations of an arbitrary size and an arbitrary number of singular points.
Let $A_0$, $A_1$, $A_t$ be matrices in $\Cplx ^{2\times 2}$.
For $\nu \in \Cplx$, we define the convolution matrices $B_0, B_1, B_t \in \Cplx ^{6\times 6}$ as follows:
\begin{align}
& B_0=
\left(
\begin{array}{ccc}
A_0 +\nu & A_1 & A_t \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right) , \quad
B_1=
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
A_0 & A_1 +\nu & A_t \\
0 & 0 & 0
\end{array}
\right) , \quad \label{eq:Bdef} \\
& B_t=
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
A_0 & A_1 & A_t +\nu
\end{array}
\right) . \nonumber
\end{align}
We consider the following differential equation:
\begin{equation}
\frac{dU}{dz}=\left( \frac{B_0}{z}+\frac{B_1}{z-1}+\frac{B_t}{z-t} \right) U, \quad U \in \Cplx ^6
\label{eq:dYdzBzU}
\end{equation}
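The block structure of the convolution matrices can be sanity-checked numerically. The sketch below (ours, for illustration) builds $B_0,B_1,B_t$ from matrices $A_i$ with $\det A_i=0$, as in Eq.(\ref{eq:A0A1AtP}), and verifies that the subspace ${\mathcal L}_0$ defined later in this section is scaled by $\nu$ under $B_0$ and annihilated by $B_1$ and $B_t$.

```python
import numpy as np

def conv_matrices(A0, A1, At, nu):
    Z, I = np.zeros((2, 2)), np.eye(2)
    B0 = np.block([[A0 + nu * I, A1, At], [Z, Z, Z], [Z, Z, Z]])
    B1 = np.block([[Z, Z, Z], [A0, A1 + nu * I, At], [Z, Z, Z]])
    Bt = np.block([[Z, Z, Z], [Z, Z, Z], [A0, A1, At + nu * I]])
    return B0, B1, Bt

def rank_one(u, th, w):
    # matrices with det = 0 and trace = th, as in Eq. (eq:A0A1AtP)
    return np.array([[u + th, -w], [u * (u + th) / w, -u]])

A0, A1, At = rank_one(0.4, 0.3, 1.1), rank_one(-0.2, 0.8, 0.7), rank_one(0.9, -0.5, -1.3)
nu = 0.6
B0, B1, Bt = conv_matrices(A0, A1, At, nu)

# (Ker A0, 0, 0) is an eigenspace of B0 with eigenvalue nu, killed by B1, Bt
v = np.array([1.1, 0.7])            # (w, u + th) spans Ker(A0)
assert np.allclose(A0 @ v, 0)
V = np.concatenate([v, np.zeros(4)])
assert np.allclose(B0 @ V, nu * V)
assert np.allclose(B1 @ V, 0) and np.allclose(Bt @ V, 0)
```

The analogous statements hold for ${\mathcal L}_1$ and ${\mathcal L}_t$, which is why the middle convolution quotients these subspaces out.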
We fix a base point $o \in \Cplx \setminus \{0,1,t\}$.
Let $\alpha _i$ $(i=0,1,t,\infty )$ be a cycle turning the point $w=i$ anti-clockwise whose base point is $o$.
Let $z \in \Cplx \setminus \{0,1,t\}$
and $\alpha _{z}$ be a cycle turning the point $w=z$ anti-clockwise.
Let $[\alpha , \beta] = \alpha ^{-1} \beta ^{-1}\alpha \beta $ be the Pochhammer contour.
\begin{prop} $($\cite{DR2}$)$ \label{prop:DRintegrepr}
Assume that $Y= {}^t (y_1(z), y_2(z))$ is a solution to the differential equation
\begin{equation}
\frac{dY}{dz}=\left( \frac{A_0}{z}+\frac{A_1}{z-1}+\frac{A_t}{z-t} \right) Y.
\label{eq:dYdzAzY0}
\end{equation}
For $i \in \{ 0,1,t,\infty \}$, the function
\begin{equation}
U = \left(
\begin{array}{l}
\int _{[\alpha _{z} ,\alpha _i]}w^{-1}y_1(w) (z-w)^{\nu } dw \\
\int _{[\alpha _{z} ,\alpha _i]}w^{-1}y_2(w) (z-w)^{\nu } dw \\
\int _{[\alpha _{z} ,\alpha _i]}(w-1)^{-1}y_1(w) (z-w)^{\nu } dw \\
\int _{[\alpha _{z} ,\alpha _i]}(w-1)^{-1}y_2(w) (z-w)^{\nu } dw \\
\int _{[\alpha _{z} ,\alpha _i]}(w-t)^{-1}y_1(w) (z-w)^{\nu } dw \\
\int _{[\alpha _{z} ,\alpha _i]}(w-t)^{-1}y_2(w) (z-w)^{\nu } dw
\end{array}
\right) , \label{eq:integrepU}
\end{equation}
satisfies differential equation (\ref{eq:dYdzBzU}).
\end{prop}
\begin{proof}
It follows from a straightforward calculation that the function
\begin{equation}
U = \left(
\begin{array}{l}
z^{-1}y_1(z)\\
z^{-1}y_2(z)\\
(z-1)^{-1} y_1(z)\\
(z-1)^{-1} y_2(z)\\
(z-t)^{-1} y_1(z)\\
(z-t)^{-1} y_2(z)
\end{array}
\right) ,
\end{equation}
is a solution of Eq.(\ref{eq:dYdzBzU}) for the case $\nu =-1$ (see \cite[Lemma 6.4]{DR2}).
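Indeed, for the first two components of $U$ one computes, using Eq.(\ref{eq:dYdzAzY0}),
\begin{align*}
\frac{d}{dz}\left( \frac{Y}{z} \right) = -\frac{Y}{z^2}+\frac{1}{z}\frac{dY}{dz} = \frac{1}{z}\left( \frac{(A_0 -1)Y}{z}+\frac{A_1 Y}{z-1}+\frac{A_t Y}{z-t} \right) ,
\end{align*}
which is the first block-row of Eq.(\ref{eq:dYdzBzU}) with $\nu =-1$; the second and third block-rows are verified in the same way.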
It is shown in \cite[Lemma 6.2]{DR2} that if $U={}^t (u_1 (z) , u_2 (z) , \dots , u_6 (z) )$ is a solution of Eq.(\ref{eq:dYdzBzU}) for the case $\nu =\nu _1$, then the function
\begin{equation}
\bar{U} = \left(
\begin{array}{c}
\int _{[\alpha _{z} ,\alpha _i]} u_1 (w)(w-z)^{\nu_2 -1} dw\\
\int _{[\alpha _{z} ,\alpha _i]} u_2 (w)(w-z)^{\nu_2 -1} dw\\
\vdots \\
\int _{[\alpha _{z} ,\alpha _i]} u_6 (w)(w-z)^{\nu_2 -1} dw\\
\end{array}
\right) ,
\end{equation}
is a solution of Eq.(\ref{eq:dYdzBzU}) for the case $\nu =\nu _1 +\nu _2$.
By applying this result for the case $\nu_1=-1, \nu_2=\nu +1$, we obtain the proposition.
\end{proof}
We set
\begin{align}
& {\mathcal L}_0= \left(
\begin{array}{c}
\mbox{Ker}(A_0) \\
0 \\
0
\end{array}
\right) , \quad
{\mathcal L}_1= \left(
\begin{array}{c}
0\\
\mbox{Ker}(A_1) \\
0
\end{array}
\right) , \quad
{\mathcal L}_t= \left(
\begin{array}{c}
0 \\
0 \\
\mbox{Ker}(A_t)
\end{array}
\right) , \\
& {\mathcal L} ={\mathcal L}_0 \oplus {\mathcal L}_1 \oplus {\mathcal L}_t, \quad
{\mathcal K} = \mbox{Ker}(B_0) \cap \mbox{Ker}(B_1) \cap \mbox{Ker}(B_t) . \nonumber
\end{align}
We fix an isomorphism between $\Cplx ^6 /({\mathcal K}+ {\mathcal L})$ and $\Cplx ^m$ for some $m$.
A tuple of matrices $mc_{\nu } (A) =(\tilde{B}_0, \tilde{B}_1, \tilde{B}_t)$, where $\tilde{B}_k$ $(k=0,1,t)$ is induced by the action of $B_k$ on $\Cplx ^m \simeq \Cplx ^6 /({\mathcal K}+ {\mathcal L})$, is called an additive version of the middle convolution of $(A_0,A_1,A_t)$ with the parameter $\nu$.
Filipuk \cite{Fil} established that, if $\nu =\kappa _1$, then $\Cplx ^6 /({\mathcal K}+ {\mathcal L})$ is isomorphic to $\Cplx ^2$ and the isomonodromic deformation of the middle convolution system
\begin{equation}
\frac{d\tilde{Y}}{dz}=\left( \frac{\tilde{B}_0}{z}+\frac{\tilde{B}_1}{z-1}+\frac{\tilde{B}_t}{z-t} \right) \tilde{Y},
\end{equation}
gives the sixth Painlev\'e equation for the parameters transformed by Okamoto's transformation $s_2$.
Note that Boalch \cite{Boa} obtained a geometric result on Okamoto's transformation earlier by finding an isomorphism between a $2 \times 2$ Fuchsian equation and a $3 \times 3$ Fuchsian equation, which is related to Filipuk's result.
We now calculate explicitly the Fuchsian differential equation determined by the middle convolution, which is required for our purpose and reproduces the result of Filipuk \cite{Fil}.
Let $A_0$, $A_1$, $A_t$ be the matrices defined by Eq.(\ref{eq:A0A1AtP}). If $\nu = \kappa _1$, then the spaces ${\mathcal L}_0$, ${\mathcal L}_1$, ${\mathcal L}_t$, ${\mathcal K}$ are written as
\begin{align}
& {\mathcal L}_0= \Cplx \left(
\begin{array}{c}
w_0 \\
u_0+\theta _0 \\
0\\
0\\
0\\
0
\end{array}
\right) , \;
{\mathcal L}_1= \Cplx \left(
\begin{array}{c}
0\\
0\\
w_1 \\
u_1+\theta _1 \\
0\\
0
\end{array}
\right) , \;
{\mathcal L}_t= \Cplx \left(
\begin{array}{c}
0\\
0\\
0\\
0\\
w_t \\
u_t+\theta _t
\end{array}
\right) , \;
{\mathcal K}= \Cplx \left(
\begin{array}{c}
1\\
0\\
1\\
0\\
1 \\
0
\end{array}
\right) .
\end{align}
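Each of the spaces ${\mathcal L}_0$, ${\mathcal L}_1$, ${\mathcal L}_t$ and ${\mathcal K}$ is one-dimensional, and the four spanning vectors are linearly independent for generic parameters. Hence
\begin{align*}
\dim \Cplx ^6 /({\mathcal K}+ {\mathcal L}) = 6 -\dim {\mathcal K} -\dim {\mathcal L} = 6-1-3=2,
\end{align*}
in accordance with the isomorphism $\Cplx ^6 /({\mathcal K}+ {\mathcal L}) \simeq \Cplx ^2$ for $\nu =\kappa _1$.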
Set
\begin{align}
& S= \left(
\begin{array}{cccccc}
0 & 0 & 1 & w_0 & 0 & 0\\
0 & 0 & 0 &u_0+\theta _0& 0 & 0 \\
s_{31} & s_{32} & 1 &0 & w_1 & 0 \\
0 & 0 & 0 &0 & u_1+\theta _1& 0 \\
s_{51} & s_{52} & 1 &0& 0 & w_t \\
0 & 0 & 0 &0 & 0 & u_t+\theta _t
\end{array}
\right) , \;
\begin{array}{l}
s_{31}= \frac{t(t-1)\mu w_0 w_1 u_t}{k^2 \kappa _2 w_t(u_0+\theta _0)(u_1+\theta _1)}, \\
s_{51}= \frac{t(1-t)\mu w_0 w_t u_1}{k^2 \kappa _2 w_t(u_0+\theta _0)(u_t+\theta _t)}, \\
s_{32}= \frac{1}{\theta _{\infty }} \left( \frac{w_0}{u_0+\theta _0}- \frac{w_1}{u_1+\theta _1} \right) ,\\
s_{52}= \frac{1}{\theta _{\infty }} \left( \frac{w_0}{u_0+\theta _0}- \frac{w_t}{u_t+\theta _t} \right) ,
\end{array}
\end{align}
and $\tilde{U}= S^{-1} U$, where $U$ is a solution to Eq.(\ref{eq:dYdzBzU}).
Then $\det S =k \lambda (\lambda -1)(\lambda -t) \mu/(t(1-t)\theta _{\infty})$ and $\tilde{U}$ satisfies
\begin{equation}
\frac{d\tilde{U}}{dz} = \left(
\begin{array}{cccccc}
b_{11}(z) & b_{12}(z) & 0 & 0 & 0 & 0\\
b_{21}(z) & b_{22}(z) & 0 & 0 & 0 & 0\\
b_{31}(z) & b_{32}(z) & 0 & 0 & 0 & 0\\
0 & b_{42}(z) & 0 & \frac{\kappa _1}{z} & 0 & 0\\
0 & b_{52}(z) & 0 & 0 & \frac{\kappa _1}{z-1} & 0\\
0 & b_{62}(z) & 0 & 0 & 0 & \frac{\kappa _1}{z-t}
\end{array}
\right) \tilde{U},
\end{equation}
where $b_{i1}(z)$ $(i=1,2,3)$ and $b_{i2}(z)$ $(i=1,\dots ,6)$ are rational functions.
Write $\tilde{U}= {}^t (\tilde{u}_1(z), \tilde{u}_2(z), \dots , \tilde{u}_6(z))$ and set $\tilde{y}_1(z) = \tilde{u}_1(z)$, $\tilde{y}_2(z) = \tilde{u}_2(z)$ and $\tilde{Y}= {}^t (\tilde{y}_1(z), \tilde{y}_2(z))$.
Then we have
\begin{align}
& \frac{d\tilde{Y}}{dz}=
\left(
\begin{array}{ll}
b_{11}(z) & b_{12}(z) \\
b_{21}(z) & b_{22}(z)
\end{array}
\right) \tilde{Y}.
\label{eq:dtYdzBztY0}
\end{align}
The elements $b_{11}(z)$, $b_{12}(z)$, $b_{21}(z)$, $b_{22}(z)$ are calculated explicitly and Eq.(\ref{eq:dtYdzBztY0}) coincides with the Fuchsian differential equation $D_Y(\tilde{\theta }_0, \tilde{\theta }_1, \tilde{\theta }_t, \tilde{\theta }_{\infty}; \tilde{\lambda },\tilde{\mu };\tilde{k} )$ (see Eq.(\ref{eq:dYdzAzY})), where
\begin{align}
& \tilde{\theta }_0 = \frac{\theta _0 -\theta _1 -\theta _t+\theta _{\infty}}{2}, \quad \tilde{\theta }_1 = \frac{-\theta _0 +\theta _1 -\theta _t+\theta _{\infty}}{2}, \quad \tilde{\theta }_t = \frac{-\theta _0 -\theta _1 +\theta _t+\theta _{\infty}}{2}, \\
& \tilde{\theta }_{\infty} = \frac{\theta _0+\theta _1 +\theta _t+\theta _{\infty}}{2}, \quad \tilde{\lambda} =\lambda + \kappa _1/\mu, \quad \tilde{\mu} =\mu ,\quad \tilde{k}= k . \nonumber
\end{align}
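In other words, on setting $\kappa _1= (\theta _{\infty } -\theta _0 -\theta _1 -\theta _t)/2$ and $\kappa _2= -(\theta _{\infty } +\theta _0 +\theta _1 +\theta _t)/2$, the transformation of the parameters is written as
\begin{align*}
\tilde{\theta }_i = \theta _i +\kappa _1 \quad (i=0,1,t), \qquad \tilde{\theta }_{\infty} = \theta _{\infty} -\kappa _1 = -\kappa _2, \qquad \tilde{\lambda }= \lambda +\frac{\kappa _1}{\mu}, \qquad \tilde{\mu }=\mu ,
\end{align*}
i.e.\ Okamoto's transformation $s_2$ shifts each of $\theta _0 ,\theta _1 ,\theta _t$ by $\kappa _1$ and leaves $\mu $ and $k$ unchanged.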
The functions $\tilde{y}_1(z)$ and $\tilde{y}_2(z)$ are expressed as
\begin{align}
& \tilde{y}_1(z) = \frac{\tilde{\lambda } (u_0+\theta _0)}{\lambda } u_1(z) - \frac{k\tilde{\lambda }}{t} u_2 (z) + \frac{(\tilde{\lambda } -1) (u_1+\theta _1)}{\lambda -1} u_3(z) \label{eq:y1tu16} \\
& \quad \quad \quad + \frac{k(\tilde{\lambda }-1)}{t-1 } u_4 (z)+ \frac{(\tilde{\lambda } -t)(u_t+\theta _t)}{\lambda -t} u_5(z) + \frac{k(\tilde{\lambda }-t) }{t(1-t)} u_6 (z), \nonumber \\
& \tilde{y}_2(z) = \frac{\theta _{\infty}}{\kappa _2} \left( -\frac{t u_0 (u_0+\theta _0)}{k \lambda } u_1(z) + u_0 u_2 (z) + \frac{(t-1) u_1 (u_1 + \theta _1 )}{k (\lambda -1)} u_3(z) \right. \nonumber \\
& \left. \quad \quad \quad \quad \quad \quad + u_1 u_4 (z)+ \frac{t(1-t) u_t (u_t+\theta _t)}{k(\lambda -t)} u_5(z) +u_t u_6 (z) \right) . \nonumber
\end{align}
It follows from Proposition \ref{prop:DRintegrepr} that the function $U={}^t (u_1 (z) , u_2 (z) , \dots , u_6 (z) )$ given by Eq.(\ref{eq:integrepU}) is a solution to Eq.(\ref{eq:dYdzBzU}).
Combining with the relations $y_2 (w)= (dy_1(w)/dw -a_{11}(w)y_1(w))/a_{12}(w)$, $y_1 (w)= (dy_2(w)/dw -a_{22}(w)y_2(w))/a_{21}(w)$ and Eq.(\ref{eq:y1tu16}),
the functions $\tilde{y}_1(z)$ and $\tilde{y}_2(z)$ are expressed as the integral in the following proposition by means of a straightforward calculation:
\begin{prop} \label{thm:zinterep}
Set $\kappa _1= (\theta _{\infty } -\theta _0 -\theta _1 -\theta _t)/2$ and $\kappa _2= -(\theta _{\infty } +\theta _0 +\theta _1 +\theta _t)/2$.
If $Y={}^t ( y_1(z), y_2(z))$ is a solution to the Fuchsian differential equation $D_Y(\theta _0, \theta _1, \theta _t,\theta _{\infty}; \lambda ,\mu ;k )$ (see Eq.(\ref{eq:dYdzAzY})), then the function $\tilde{Y} = {}^t (\tilde{y}_1 (z) , \tilde{y}_2 (z) )$ defined by
\begin{align}
& \tilde{y}_1 (z)= \int _{[\alpha _{z} ,\alpha _i]} \left\{ \kappa _1 y_1(w) + (w- \tilde{\lambda})\frac{dy_1(w)}{dw} \right\} \frac{(z-w)^{\kappa _1 }}{w-\lambda } dw, \label{eq:yt1zintrep} \\
& \tilde{y}_2 (z) = \frac{-\theta _{\infty}}{\kappa _2 }\int _{[\alpha _{z} ,\alpha _i]} \frac{dy_2(w)}{dw} (z-w)^{\kappa _1 } dw, \nonumber
\end{align}
satisfies the Fuchsian differential equation $D_Y(\kappa _1 + \theta _0, \kappa _1 +\theta _1, \kappa _1 +\theta _t, -\kappa _2 ; \lambda +\kappa _1 /\mu ,\mu ;k )$ for $i \in \{ 0,1,t,\infty \}$.
\end{prop}
Therefore, if we know a solution to the differential equation $D_Y(\theta _0, \theta _1, \theta _t,\theta _{\infty}; \lambda ,\mu ;k )$, then we have integral representations of solutions to the Fuchsian differential equation $D_Y(\tilde{\theta }_0, \tilde{\theta }_1, \tilde{\theta }_t, \tilde{\theta }_{\infty}; \tilde{\lambda },\tilde{\mu };\tilde{k} )$ obtained by Okamoto's transformation $s_2$.
It can be shown that, if $\kappa _2 \neq 0$, $\kappa _1\not \in \Zint$ and $\theta _i \not \in \Zint $ for some $i \in \{ 0,1,t,\infty \}$, then the function $\tilde{y}_1 (z)$ is non-zero for generic $\lambda $ and $\mu$ (see \cite[Lemma 6.6]{DR2}).
On the other hand, a solution to Eq.(\ref{eq:dYdzAzY}) for the case $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint +\frac{1}{2}$ can be expressed in the form of the Hermite-Krichever Ansatz.
In the next section, we recall the Hermite-Krichever Ansatz.
\section{Hermite-Krichever Ansatz} \label{sec:HKA}
We rewrite Eq.(\ref{eq:linP6}) in elliptic form. Recall that Eq.(\ref{eq:linP6}) is written as
\begin{align}
& \frac{d^2y_1(z)}{dz^2} + \left( \frac{1-\theta _0}{z}+\frac{1-\theta _1}{z-1}+\frac{1-\theta _t}{z-t}-\frac{1}{z-\lambda} \right) \frac{dy_1(z)}{dz} \label{eq:linP60} \\
& \quad \quad + \left( \frac{\kappa _1(\kappa _2 +1)}{z(z-1)}+\frac{\lambda (\lambda -1)\mu}{z(z-1)(z-\lambda)}-\frac{t (t -1)H}{z(z-1)(z-t)} \right) y_1(z)=0, \nonumber
\end{align}
and $H$ is determined by
\begin{align}
& H=\frac{1}{t(t-1)}[ \lambda (\lambda -1) (\lambda -t)\mu ^2 -\{ \theta _0 (\lambda -1) (\lambda -t)+\theta _1 \lambda (\lambda -t) \label{eq:linP6H} \\
& \quad \quad \quad \quad \quad \quad \quad \quad +(\theta _t -1) \lambda (\lambda -1)\} \mu +\kappa _1 (\kappa _2 +1) (\lambda -t)]. \nonumber
\end{align}
Let $\wp (x)$ be the Weierstrass $\wp$-function with periods $(2\omega _1,2\omega _3)$, let $\omega _0(=0)$, $\omega_1$, $\omega_2(=-\omega _1 -\omega _3)$, $\omega_3$ be the half-periods, and set $e_i=\wp (\omega _i)$ $(i=1,2,3)$.
Set
\begin{equation}
z=\frac{\wp (x)-e_1}{e_2-e_1} ,\quad t=\frac{e_3-e_1}{e_2-e_1}, \quad \lambda =\frac{\wp (\delta )-e_1}{e_2-e_1} .
\end{equation}
For $t \in \Cplx \setminus \{0,1 \}$, there exists a pair of periods $(2\omega _1,2\omega _3)$ such that $ t=(\wp (\omega _3)-\wp (\omega _1))/(\wp (\omega _2)-\wp (\omega _1))$. The value $\delta $ is determined up to the sign $\pm $ and shifts by the periods $2\omega _1 \Zint \oplus 2\omega _3 \Zint $.
Set
\begin{align}
& \theta _0 =l_1 +1/2, \quad \theta _1 =l_2 +1/2, \quad \theta _t =l_3 +1/2, \quad \theta _{\infty} =-l_0 +1/2, \label{eq:kili} \\
& f(x)= y_1 (z) z^{-l_1/2} (z-1)^{-l_2/2} (z-t)^{-l_3/2}. \nonumber
\end{align}
Then Eq.(\ref{eq:linP60}) is transformed to
\begin{align}
& \left( -\frac{d^2}{dx^2} + \frac{\wp ' (x)}{\wp (x) -\wp (\delta )} \frac{d}{dx} + \frac{\tilde{s}}{\wp (x) -\wp (\delta )} +\sum_{i=0}^3 l_i(l_i+1) \wp (x+\omega_i) +C \right) f (x)=0, \label{eq:Hg} \\
& \tilde{s}= -4(e_2-e_1)^2 \lambda (\lambda -1) (\lambda -t) \left( \mu -\frac{l_1}{2\lambda } -\frac{l_2}{2 (\lambda -1)}-\frac{l_3}{2 (\lambda -t)} \right) , \nonumber \\
& C=4(e_2-e_1)\left\{ \lambda (1-\lambda )\mu- t(1-t)H\right\} +(l_1+l_2+l_3+l_0+1)(l_1+l_2+l_3-l_0)e_3 \nonumber \\
& \quad \quad -2(l_1l_2e_3+l_2l_3e_1+l_3l_1e_2) + 2(l_1+l_2+l_3) ((e_2-e_1)\lambda +e_1) +\sum _{i=1}^3 l_i (l_i +2) e_i, \nonumber
\end{align}
and Eq.(\ref{eq:linP6H}) is equivalent to the equality
\begin{align}
& C =4(e_2-e_1) \lambda (\lambda -1) (\lambda -t) \mu \left\{ \mu - \frac{l_1+\frac{1}{2}}{\lambda } - \frac{ l_2+\frac{1}{2}}{\lambda -1} - \frac{l_3+\frac{1}{2}}{\lambda -t} \right\} +\sum _{i=1}^3 l_i (l_i +2) e_i \label{pgkrap} \\
& \quad +((e_2-e_1) \lambda +e_1) \{ (l_1+l_2+l_3+l_0+2)(l_1+l_2+l_3-l_0+1) -2\} \nonumber \\
& \quad -2(l_1l_2e_3+l_2l_3e_1+l_3l_1e_2) , \nonumber
\end{align}
which shows that the regular singularities $x=\pm \delta $ are apparent.
The sixth Painlev\'e equation (Eq.(\ref{eq:P6eqn})) for $\lambda (=(\wp (\delta )-e_1)/(e_2-e_1))$ also has the elliptic representation
\begin{equation}
\frac{d^2 \delta }{d \tau ^2} = -\frac{1}{4\pi ^2} \left\{ \frac{(1-\theta _{\infty})^2}{2} \wp ' \left(\delta \right) + \frac{\theta _{0}^2}{2} \wp ' \left(\delta +\frac{1}{2} \right) + \frac{\theta _{1}^2}{2} \wp ' \left(\delta +\frac{\tau +1}{2} \right) + \frac{\theta _{t}^2}{2} \wp ' \left(\delta +\frac{\tau }{2}\right) \right\}, \label{eq:P6ellip}
\end{equation}
where $\omega _1=1/2$, $\omega _3=\tau /2$ and $\wp ' (z ) = (\partial /\partial z ) \wp (z)$ (see \cite{Man,Tks,TakHP}),
and it is related to the monodromy preserving deformation of Eq.(\ref{eq:Hg}) by the variable $\tau =\omega _3/\omega _1$.
We recall that a solution to Eq.(\ref{eq:Hg}) can be expressed in the form of the Hermite-Krichever Ansatz if $l_0 ,l_1,l_2, l_3 \in \Zint$.
Note that the condition $\theta _{\infty} , \theta _0, \theta _1 , \theta _t \in \Zint +1/2$ corresponds to the condition $l_0 ,l_1,l_2, l_3 \in \Zint$.
Set
\begin{equation}
\Phi _i(x,\alpha )= \frac{\sigma (x+\omega _i -\alpha ) }{ \sigma (x+\omega _i )} \exp (\zeta( \alpha )x), \quad \quad (i=0,1,2,3).
\label{Phii}
\end{equation}
\begin{prop} \label{thm:alpha} $($\cite{TakP}$)$
Set $\tilde{l} _0 = |l_0 +1/2|+1/2$ and $\tilde{l}_i =|l_i +1/2 |-1/2$ $(i=1,2,3)$.
For $l_0 ,l_1,l_2, l_3 \in \Zint$, we have polynomials $Q(\lambda ,\mu)$, $P_1 (\lambda ,\mu)$, \dots , $P_6 (\lambda ,\mu)$ such that if $P_2 (\lambda ,\mu)\neq 0$ then there exists a solution $f_{HK}(x ;l_0, l_1, l_2, l_3 ;\lambda ,\mu )$ to Eq.(\ref{eq:Hg}) of the form
\begin{align}
& f _{HK} (x ;l_0, l_1, l_2, l_3;\lambda ,\mu ) = \exp \left( \kappa x \right) \left( \sum _{i=0}^3 \sum_{j=0}^{\tilde{l}_i-1} \tilde{b} ^{(i)}_j \left( \frac{d}{dx} \right) ^{j} \Phi _i(x, \alpha ) \right)
\label{Lalpha}
\end{align}
for some values $\alpha $, $\kappa$ and $\tilde{b} ^{(i)}_j$ $(i=0,1,2 ,3, \: j= 0,\dots ,\tilde{l}_i-1)$, and the values $\alpha $ and $\kappa $ are expressed as
\begin{align}
& \wp (\alpha )= \frac{P_1 (\lambda ,\mu)}{P_2 (\lambda ,\mu)}, \quad \wp '(\alpha )= \frac{P_3 (\lambda ,\mu)}{P_4 (\lambda ,\mu)}\sqrt{-Q(\lambda ,\mu)} , \label{eq:alphakappa} \\
& \kappa = \frac{P_5 (\lambda ,\mu)}{P_6 (\lambda ,\mu)}\sqrt{-Q(\lambda ,\mu)} . \nonumber
\end{align}
Regarding the periodicity of the function $f_{HK} (x ) =f_{HK} (x ;l_0, l_1, l_2, l_3;\lambda ,\mu ), $
we have
\begin{align}
& f_{HK} (x+2\omega _j ) = \exp (-2\eta _j \alpha +2\omega _j \zeta (\alpha ) +2 \kappa \omega _j ) f _{HK} (x ), \label{ellint}
\end{align}
for $j=1,3$, where $\eta _j =\zeta (\omega _j)$.
\end{prop}
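For the term with $i=0$, the multiplier in Eq.(\ref{ellint}) is checked directly from the quasi-periodicity $\sigma (x+2\omega _j)= -\sigma (x)\exp (2\eta _j (x+\omega _j))$:
\begin{align*}
e^{\kappa (x+2\omega _j)} \Phi _0(x+2\omega _j, \alpha ) &= e^{\kappa (x+2\omega _j)} \frac{-\sigma (x -\alpha ) \exp (2\eta _j (x-\alpha +\omega _j)) }{ -\sigma (x ) \exp (2\eta _j (x+\omega _j))} e^{\zeta( \alpha )(x+2\omega _j)} \\
&= \exp (-2\eta _j \alpha +2\omega _j \zeta (\alpha ) +2 \kappa \omega _j )\, e^{\kappa x} \Phi _0(x, \alpha ) ,
\end{align*}
and the terms with $i=1,2,3$ and their derivatives transform with the same multiplier, since it does not depend on $x$.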
It follows from Proposition \ref{thm:alpha} that a solution to the Fuchsian differential system $D_{Y }( l_1 +1/2, l_2 +1/2, l_3 +1/2, -l_0+1/2; \lambda ,\mu ;k)$ is expressed in the form of the Hermite-Krichever Ansatz for the case $l_0, l_1, l_2 , l_3 \in \Zint $ by setting $z=(\wp (x)-e_1)/(e_2-e_1)$, $t=(e_3-e_1)/(e_2-e_1)$, $y_1 (z)= z^{l_1/2} (z-1)^{l_2/2} (z-t)^{l_3/2}f_{HK}(x;l_0, l_1, l_2, l_3;\lambda ,\mu )$, $y_2 (z)= (dy_1(z)/dz -a_{11}(z)y_1(z))/a_{12}(z)$.
We now consider the Hermite-Krichever Ansatz for the case $l_0=l_1=l_2=l_3=0$ in detail, which was demonstrated in \cite{TakP}.
The differential equation (\ref{eq:Hg}) is written as
\begin{equation}
\left\{ -\frac{d^2}{dx^2} + \frac{\wp ' (x)}{\wp (x) -\wp (\delta )} \frac{d}{dx} - \frac{ 4\mu \lambda (\lambda -1) (\lambda -t)(e_2-e_1)^2}{\wp (x) -\wp (\delta )} +C \right\} f (x)=0.
\label{Hgkr1l00}
\end{equation}
We assume that $\delta \not \equiv 0$ mod $\omega _1 \Zint \oplus \omega _3 \Zint$. The condition that the regular singularities $x= \pm \delta $ are apparent is written as
\begin{align}
& C= 2(2 \lambda (\lambda -1) (\lambda -t) \mu^2 -(3\lambda ^2-2(1+t) \lambda +t ) \mu )(e_2-e_1), \label{pgkrapl00}
\end{align}
(see Eq.(\ref{pgkrap})).
We consider Eq.(\ref{Hgkr1l00}) with the condition in Eq.(\ref{pgkrapl00}). The polynomial $Q(\lambda ,\mu) $ in Eq.(\ref{eq:alphakappa}) is calculated as
\begin{align}
& Q(\lambda ,\mu)= -2\mu (2\lambda \mu -1) (2(\lambda -1) \mu -1) (2(\lambda -t) \mu -1)/(e_2-e_1).
\end{align}
There exists a solution $f_{HK}(x ) (=f_{HK}(x ;0,0,0,0;\lambda ,\mu ))$ to Eq.(\ref{Hgkr1l00}) that can be expressed in the form of the Hermite-Krichever Ansatz as
\begin{align}
& f _{HK} (x) = \bar{b} ^{(0)}_0 \exp (\kappa x) \Phi _0 (x, \alpha ),
\label{eq:HK0000}
\end{align}
if $\mu \neq 0$.
The values $\alpha $ and $\kappa $ are determined as
\begin{align}
& \wp (\alpha )= e_1+(e_2 -e_1) \left(\lambda - \frac{1}{2 \mu} \right) , \quad \wp '(\alpha )= -\frac{(e_2-e_1)^2\sqrt{-Q(\lambda ,\mu)}}{2\mu ^2} , \\
& \kappa = \frac{(e_2-e_1) \sqrt{-Q(\lambda ,\mu)}}{2\mu } ,\nonumber
\end{align}
and we have
\begin{align}
& \lambda = \frac{1}{e_2-e_1} \left\{ \wp (\alpha ) -e_1-\frac{\wp ' (\alpha )}{2\kappa } \right\} ,\quad \mu = -\frac{(e_2-e_1 )\kappa }{\wp ' (\alpha )} . \label{eq:lmalka0000}
\end{align}
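Note that if $Q(\lambda ,\mu )=0$ and $\mu \neq 0$, then $\wp '(\alpha )=0$ and $\kappa =0$, so that $\alpha $ degenerates to a half-period. By the factorization of $Q(\lambda ,\mu )$, these degenerate cases are
\begin{align*}
2\lambda \mu =1, \quad 2(\lambda -1)\mu =1, \quad 2(\lambda -t)\mu =1 .
\end{align*}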
\section{Integral representation of solutions to Fuchsian system} \label{sec:intrepFs}
We show that solutions to the Fuchsian system (Eq.(\ref{eq:dYdzAzY})) have integral representations for the case $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint $, $\theta _0 + \theta _1 + \theta _t + \theta _{\infty } \in 1+2\Zint $ by use of functions in the form of the Hermite-Krichever Ansatz.
\begin{thm} \label{thm:xinterep}
Assume that $l_0 , l_1 , l_2 ,l_3 \in \Zint$ and $l_0 +l_1 +l_2 +l_3 \in 2\Zint$.
Let $f_{HK} (x)(=f_{HK} (x; l_0 , l_1 , l_2 ,l_3 ;\lambda ,\mu ))$ be the solution expressed in the form of the Hermite-Krichever Ansatz in Proposition \ref{thm:alpha}.
Set
\begin{align}
& \tilde{\wp }(x)=\frac{\wp (x) -e_1}{e_2-e_1}, \quad \kappa _1=-\frac{l_0+l_1+l_2+l_3+1}{2}.
\end{align}
Then the function $\tilde{y}^{(i)}_1 (z)$ $(i=0,1,2,3)$ defined by
\begin{align}
\tilde{y}^{(i)} _1 (z)=& \int _{-\tilde{\wp }^{-1} (z )+2\omega _i}^{\tilde{\wp }^{-1}(z)} \left[ \left\{ \frac{\kappa _1 }{e_2- e_1}+ \left( \tilde{\wp }(\xi )- \lambda -\frac{\kappa _1}{\mu }\right) \sum _{j=1}^3 \frac{l_j}{2(\wp (\xi )-e_j)} \right\} \wp ' (\xi ) f _{HK} (\xi ) \right. \label{eq:intrepy1x}\\
& \quad \quad \quad \left. + \left( \tilde{\wp }(\xi )- \lambda -\frac{\kappa _1}{\mu }\right) \frac{df _{HK} (\xi )}{d\xi } \right] \left( \prod _{j=1}^3 (\wp(\xi )-e_j) ^{l_j/2} \right) \frac{(z-\tilde{\wp }(\xi ))^{\kappa _1 }}{(\tilde{\wp }(\xi )-\lambda )} d\xi , \nonumber
\end{align}
is a solution to the Fuchsian differential equation $D_{y_1 }(\tilde{\theta }_0, \tilde{\theta }_1, \tilde{\theta }_t, \tilde{\theta }_{\infty}; \tilde{\lambda },\tilde{\mu })$ (see Eq.(\ref{eq:linP6})), where
\begin{align}
& \tilde{\theta }_0=\frac{-l_0 +l _1 -l_2 -l_3}{2}, \quad \tilde{\theta }_1=\frac{-l_0 -l _1 +l_2 -l_3}{2} , \quad \tilde{\theta }_t=\frac{-l_0 -l _1 -l_2 +l_3}{2} ,\\
& \tilde{\theta }_{\infty}=\frac{-l_0 +l _1 +l_2 +l_3}{2}+1 , \quad \tilde{\lambda }= \lambda +\frac{\kappa _1}{\mu}, \quad \tilde{\mu }=\mu . \nonumber
\end{align}
For the case $\kappa _1 \leq -1$, we interpret the integral as half of the value obtained by integrating over the following cycle: start at a point sufficiently close to $\xi = -\tilde{\wp }^{-1} (z )+2\omega _i$, turn clockwise around the point $\xi = -\tilde{\wp }^{-1} (z )+2\omega _i$, move to a point sufficiently close to $\xi = \tilde{\wp }^{-1} (z )$, turn anticlockwise around the point $\xi = \tilde{\wp }^{-1} (z )$, and return to the initial point.
\end{thm}
\begin{proof}
By changing the variable $w=(\wp (\xi) -e_1)/(e_2-e_1)$, substituting $y_1(w)=w^{l_1/2}(w-1)^{l_2/2}(w-t)^{l_3/2} f_{HK}(\xi)$ and multiplying Eq.(\ref{eq:yt1zintrep}) by $t\kappa _2 (u_0+\theta _0)/(-\lambda (\lambda -t)\mu) \tilde{y}_1 (z)$, we obtain the integrand.
We consider the integral contour $[\alpha _{z} ,\alpha _i]$.
Let $o\in \Cplx \setminus \{0,1,t\}$ be the initial point of the contour in the $w$-plane, and $\pm x_0$ (resp. $\pm x$) be the point such that $\wp (\pm x_0)= o$ (resp. $\wp (\pm x)= z$).
We choose $x_0$ sufficiently close to $x$.
The contour $\alpha _z$ in the $w$-plane corresponds either to the contour that starts at $x_0$, turns anticlockwise around $x$ and returns to $x_0$, or to the contour that starts at $-x_0$, turns anticlockwise around $-x$ and returns to $-x_0$, depending on the choice of the branch.
The contour $\alpha _{\infty }$ (resp. $\alpha _0$, $\alpha _1$, $\alpha _t$) in the $w$-plane corresponds either to the contour whose initial point is $x_0$ and ends at $-x_0$ (resp. $-x_0+2\omega _1$, $-x_0+2\omega _2$, $-x_0+2\omega _3$) or to the reverse contour.
By analytic continuation along the cycle $\alpha _z$, the integrand is multiplied by $-1$ because of the factor $(z-\tilde{\wp }(\xi ))^{\kappa _1 }$ $(\kappa _1 \in \Zint +1/2)$, and the integral tends to zero in the limit as $x_0 \rightarrow x$ for the case $\kappa _1 >-1$.
Hence the contour $[\alpha _{z} ,\alpha _{\infty }]$ (resp. $[\alpha _{z} ,\alpha _0]$, $[\alpha _{z} ,\alpha _{1}]$, $[\alpha _{z} ,\alpha _{t}]$) corresponds to a contour that runs twice from $x $ to $-x$ (resp. $-x+2\omega _1$, $-x+2\omega _2$, $-x+2\omega _3$).
We therefore obtain the theorem.
\end{proof}
Note that if $l_0, l_1 ,l_2 ,l_3 \in \Zint $ and $l_0 + l_1 +l_2 +l_3 \in 2\Zint +1$, then the integral is equal to zero.
It follows from the assumption $l_0, l_1 ,l_2 ,l_3 \in \Zint $ and $l_0 + l_1 +l_2 +l_3 \in 2\Zint $ that $\tilde{\theta }_0,\tilde{\theta }_1,\tilde{\theta }_t ,\tilde{\theta }_{\infty} \in \Zint $ and $\tilde{\theta }_0+\tilde{\theta }_1+\tilde{\theta }_t +\tilde{\theta }_{\infty} \in 1+2\Zint$.
For given $\theta _0, \theta _1, \theta _t, \theta _{\infty }$ such that $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint $ and $\theta _0 + \theta _1 + \theta _t + \theta _{\infty } \in 1+2\Zint $,
we have integral representations of solutions by choosing $l_0 , l_1 ,l_2 ,l_3 $ appropriately.
More precisely, we have the following corollary:
\begin{cor} \label{cor:xinterep}
Assume that $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint $ and $\theta _0 + \theta _1 + \theta _t + \theta _{\infty } \in 1+2\Zint $.
Set
\begin{align}
& l_0=\frac{-\theta _0- \theta _1- \theta _t- \theta _{\infty }+1}{2},\quad l_1=\frac{\theta _0- \theta _1- \theta _t+ \theta _{\infty }-1}{2}, \\
& l_2=\frac{-\theta _0+ \theta _1- \theta _t+ \theta _{\infty }-1}{2},\quad l_3=\frac{-\theta _0- \theta _1+ \theta _t+ \theta _{\infty }-1}{2}, \nonumber \\
& \tilde{\wp }(x)=\frac{\wp (x) -e_1}{e_2-e_1}, \quad \tilde{\kappa }_1=\frac{\theta _0+ \theta _1+ \theta _t- \theta _{\infty }}{2}. \nonumber
\end{align}
Let $f_{HK} (x) (=f_{HK} (x; l_0 , l_1 , l_2 ,l_3 ;\lambda -\tilde{\kappa }_1/\mu, \mu))$ be the solution expressed in the form of the Hermite-Krichever Ansatz in Proposition \ref{thm:alpha}.
Then the function $\tilde{y}^{(i)}_1 (z)$ $(i=0,1,2,3)$ defined by
\begin{align}
\tilde{y}^{(i)} _1 (z)= &\int _{-\tilde{\wp }^{-1} (z )+2\omega _i}^{\tilde{\wp }^{-1}(z)} \left[ \left\{ \frac{\tilde{\kappa }_1 }{e_2- e_1}+ \left( \tilde{\wp }(\xi )- \lambda \right) \sum _{j=1}^3 \frac{l_j}{2(\wp (\xi )-e_j)} \right\} \wp ' (\xi ) f _{HK} (\xi ) \right. \label{eq:y1tcor} \\
& \quad \quad \left. + \left( \tilde{\wp }(\xi )- \lambda \right) \frac{df _{HK} (\xi )}{d\xi } \right] \prod _{j=1}^3 (\wp(\xi )-e_j) ^{l_j/2} \frac{(z-\tilde{\wp }(\xi ))^{\tilde{\kappa }_1 }}{(\tilde{\wp }(\xi )-\lambda +\tilde{\kappa }_1/\mu)} d\xi ,\nonumber
\end{align}
is a solution to the Fuchsian differential equation $D_{y_1 }(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$.
For the case $\tilde{\kappa }_1 \leq -1$, we interpret the integral as the one in Theorem \ref{thm:xinterep}.
\end{cor}
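As a consistency check, substituting the above values of $l_0 , l_1 , l_2 , l_3$ into the parameters of Theorem \ref{thm:xinterep} recovers the original ones:
\begin{align*}
& \frac{-l_0 +l _1 -l_2 -l_3}{2}=\theta _0 , \quad \frac{-l_0 -l _1 +l_2 -l_3}{2}=\theta _1 , \quad \frac{-l_0 -l _1 -l_2 +l_3}{2}=\theta _t , \\
& \frac{-l_0 +l _1 +l_2 +l_3}{2}+1=\theta _{\infty} , \quad -\frac{l_0+l_1+l_2+l_3+1}{2}=\tilde{\kappa }_1 ,
\end{align*}
and the shifted argument $\lambda -\tilde{\kappa }_1 /\mu $ in $f_{HK}$ compensates the shift $\tilde{\lambda }=\lambda +\kappa _1 /\mu $ in Theorem \ref{thm:xinterep}, so that the resulting solution satisfies $D_{y_1 }(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$.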
Let $a_{11}(z), a_{12}(z)$ be the functions defined in Eq.(\ref{eq:dYdzAzY}) and set $\tilde{y}_2 ^{(i) }(z) = ( d{\tilde y}_1^{(i) }(z)/dz -a_{11}(z)\tilde{y}_1^{(i) }(z))/a_{12}(z)$ $(i=0,1,2,3)$.
Then the function $Y={}^t ( \tilde{y}_1 ^{(i) }(z) ,\tilde{y}_2 ^{(i) }(z) )$ is a solution to the Fuchsian differential system $D_{Y}(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu ;k)$ (see Eq.(\ref{eq:dYdzAzY})).
Note that the function $\tilde{y}_2 ^{(i) }(z)$ is also expressed in a form analogous to Eq.(\ref{eq:y1tcor}) by combining the expression in Eq.(\ref{eq:yt1zintrep}), the relation $y_1 (w)= w^{l_1/2} (w-1)^{l_2/2} (w-t)^{l_3/2}f_{HK}(\tilde{\wp }^{-1}(w ))$ and the relation among $dy_2 (w)/dw$, $dy_1(w)/dw$ and $y_1(w)$.
We can calculate the monodromy of the Fuchsian system $D_{Y}(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu ;k)$ and the Fuchsian equation $D_{y_1 }(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$ for the case $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint $ and $\theta _0 + \theta _1 + \theta _t + \theta _{\infty } \in 1+2\Zint $ in principle by considering the integral representations of solutions and their asymptotics around the singularities.
We will do this for the case $(\theta _0, \theta _1, \theta _t, 1-\theta _{\infty }) =(0,0,0,0)$ in the next section.
\section{Integral representation of solutions to the Fuchsian equation for the case $(\theta _0, \theta _1, \theta _t, 1-\theta _{\infty }) =(0,0,0,0)$} \label{sec:intrepFs0000}
We consider the integral representation of solutions to the Fuchsian equation for the case $(\theta _0, \theta _1, \theta _t, 1-\theta _{\infty }) =(0,0,0,0)$.
For this case, the function $f_{HK}(\xi )$ in the integrand of Eq.(\ref{eq:intrepy1x}) is written in the form of the Hermite-Krichever Ansatz for the case $l_0=l_1=l_2=l_3=0$, and it is described by Eq.(\ref{eq:HK0000}).
The values $\lambda ,\mu $ for the case $l_0=l_1=l_2=l_3=0$ and the values $\alpha , \kappa $ are related by Eq.(\ref{eq:lmalka0000}).
By substituting Eq.(\ref{eq:lmalka0000}) into Eq.(\ref{eq:y1tcor}) and into the analogous integral representation of the function $\tilde{y}_2 ^{(i) }(z)$, multiplying by appropriate constants and applying the formula $\wp (x) -\wp (\xi )=-\sigma (x+ \xi) \sigma (x- \xi)/(\sigma (x)^2 \sigma (\xi)^2)$,
we have the following proposition:
\begin{prop} \label{prop:intrep0001}
Set
\begin{align}
& f_i(x)=\int_{-x+2\omega _i}^x \frac{e^{(\kappa +\zeta (\alpha ))\xi }\sigma (x)\sigma (\xi -\alpha )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi , \label{eq:intrep0001} \\
& g_i(x)= \frac{1}{4k (e_2-e_1)} \int_{-x+2\omega _i}^x \left( \kappa ^2 +\frac{\wp '(\xi ) +\wp '(\alpha )}{\wp (\xi ) -\wp (\alpha )} \kappa +2\wp (\xi ) +\wp (\alpha ) \right) \nonumber \\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \frac{e^{(\kappa +\zeta (\alpha ))\xi }\sigma (x)\sigma (\xi -\alpha )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi . \nonumber
\end{align}
The function ${}^t(f_i (x) , g_i (x))$ $(i=0,1,2,3, \; z=(\wp (x)-e_1)/(e_2-e_1))$ is a solution to the Fuchsian differential system $D_{Y} (0,0,0,1;\lambda ,\mu ; k)$, where
\begin{align}
& \lambda =\frac{\wp (\alpha ) -e_1}{e_2-e_1}, \quad \mu= -\frac{(e_2-e_1 )\kappa }{\wp ' (\alpha )} .
\end{align}
In particular, the function $f_i (x)$ $(i=0,1,2,3, \; z=(\wp (x)-e_1)/(e_2-e_1))$ is a solution to the Fuchsian differential equation $D_{y_1} (0,0,0,1;\lambda ,\mu)$ and the differential equation can also be written as
\begin{align}
& \frac{d^2y}{dx^2} +\left\{ \left( \sum _{j=1}^3 \frac{1}{2(\wp (x)- e_j)} \right) -\frac{1}{\wp (x) -\wp (\alpha )} \right\} \wp '(x) \frac{dy}{dx} \\
& \quad \quad \quad +\left\{ -\kappa ^2-\frac{\wp ' (\alpha )}{\wp (x) -\wp (\alpha )}\kappa +\wp (x)-\wp (\alpha ) \right\} y=0.\nonumber
\end{align}
\end{prop}
The monodromy matrix for the Fuchsian differential system $D_{Y} (0,0,0,1;\lambda ,\mu ; k)$ with respect to a basis of solutions
$ \left\{ \left( \begin{array}{cc}
y^{\{1 \} }_1(z) \\
y^{\{1 \} }_2(z)
\end{array}
\right) ,
\left( \begin{array}{cc}
y^{\{2 \} }_1 (z) \\
y^{\{2 \} }_2 (z)
\end{array}
\right) \right\} $
along a cycle $\gamma $ coincides with the monodromy matrix for the Fuchsian differential equation $D_{y_1} (0,0,0,1;\lambda ,\mu )$ with respect to a basis of solutions
$ \{ y^{\{1 \} }_1(z) ,y^{\{ 2 \} }_1(z)\} $
along the cycle $\gamma $.
Hence we investigate the monodromy matrices for the Fuchsian differential equation $D_{y_1} (0,0,0,1;\lambda ,\mu)$ by applying the integral representations of the solutions $f_i(x)$ $(i=0,1,2,3)$.
Assume that $\alpha \not \equiv 0$ mod $\omega _1\Zint \oplus \omega _3 \Zint$.
By considering the exponents of the singularities, we have a basis of local solutions to the Fuchsian differential equation $D_{y_1} (0,0,0,1;\lambda ,\mu)$ about $x=0$ and $\omega _i$ $(i=1,2,3)$ of the form
\begin{align}
& s^{(0)}_1 (x)= x+c^{(0)}_2 x^2+ \dots , \quad s^{(0)}_2 (x)= s^{(0)}_1 (x) \log x +\tilde{c}^{(0)}_1 x + \tilde{c}^{(0)}_2 x^2 + \dots ,\\
& s^{(i)}_1 (x)= 1+c^{(i)}_1 (x-\omega _i) + \dots ,\quad s^{(i)}_2 (x)= s^{(i)}_1 (x) \log (x-\omega _i) +\tilde{c}^{(i)}_0 + \tilde{c}^{(i)}_1 (x -\omega _i) + \dots . \nonumber
\end{align}
Let $\gamma _i$ $(i=0,1,2,3)$ be the cycle turning anti-clockwise around $x=\omega _i$, and $f^{\gamma }(x)$ be the function which is continued analytically along the cycle $\gamma $.
Then we have
\begin{align}
& (s^{(i),\gamma _i}_1 (x), s^{(i),\gamma _i}_2 (x)) =(s^{(i)}_1 (x), s^{(i)}_2 (x)) \left(
\begin{array}{cc}
1 & 2\pi \sqrt{-1} \\
0 & 1
\end{array}
\right)
\quad (i=0,1,2,3). \label{eq:silocmonod}
\end{align}
We now relate the local solutions $(s^{(i)}_1 (x), s^{(i)}_2 (x))$ $(i=0,1,2,3)$ to the solutions of integral representations $f_0 (x), \dots ,f_3(x)$.
Since $f_0(x)$ is a solution to the Fuchsian differential equation $D_{y_1} (0,0,0,1;\lambda ,\mu)$, it is expressed as a linear combination of $s^{(0)}_1 (x)$ and $s^{(0)}_2 (x)$.
We set $\xi =x \nu $. Since $\lim _{x \rightarrow 0} \sigma (x)/x =1$, we have the following asymptotic limit as $x \rightarrow 0$:
\begin{align}
& f_0(x)= \int _{-1} ^1 \frac{e^{x(\kappa +\zeta (\alpha ))\nu }\sigma (x)\sigma (x \nu -\alpha )}{\sqrt{\sigma (x(1+ \nu ))\sigma (x(1- \nu ))}} xd\nu \sim \int _{-1} ^1 \frac{x^2 \sigma (-\alpha )}{x\sqrt{(1+\nu )(1-\nu )}}d\nu = \sigma (-\alpha )\pi x.
\end{align}
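Here the last equality follows from the elementary integral
\begin{align*}
\int _{-1}^{1} \frac{d\nu }{\sqrt{(1+\nu )(1-\nu )}} = \int _{-1}^{1} \frac{d\nu }{\sqrt{1-\nu ^2}} = \Bigl[ \arcsin \nu \Bigr] _{-1}^{1} = \pi .
\end{align*}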
Hence we have
\begin{equation}
f_0(x)= \sigma (-\alpha ) \pi s^{(0)}_1 (x).
\label{eq:f0s0}
\end{equation}
We consider the asymptotics of $f_0(x)$ in the limit as $x \rightarrow \omega _i$ $(i=1,2,3)$.
By using the formula $\sigma (x+2\omega _i)= -\sigma (x)\exp (2\eta _i (x+\omega _i))$, we have
\begin{align}
& f_0(x)=\int _0^1
\frac{e^{x(\kappa +\zeta (\alpha ))\nu } \sigma (x) \sigma (x \nu -\alpha )xd\nu }{\sqrt{-e^{2\eta _i(x(1+ \nu )-\omega _i)}\sigma (x(1+ \nu )-2\omega _i)\sigma (x(1- \nu ))}} \\
& \quad +\int _{-1}^0 \frac{e^{x(\kappa +\zeta (\alpha ))\nu }\sigma (x)\sigma (x \nu -\alpha )xd\nu }{\sqrt{-e^{2\eta _i(x(1- \nu )-\omega _i)}\sigma (x(1- \nu )-2\omega _i)\sigma (x(1+ \nu ))}} \nonumber \\
& \quad \sim \int _0^1 \frac{e^{x(\kappa +\zeta (\alpha ))-\eta _i(2x-\omega _i)}\sigma (x)\sigma (x -\alpha )xd\nu }{x\sqrt{-(1+ \nu -2\omega _i/x ) (1- \nu )}} +\int _{-1}^0 \frac{e^{-x(\kappa +\zeta (\alpha ))-\eta _i(2x-\omega _i)}\sigma (x)\sigma (-x -\alpha )xd\nu }{x\sqrt{-(1- \nu -2\omega _i/x ) (1+ \nu )}} \nonumber \\
& \quad \sim -\log (\omega _i-x) e^{\omega _i (\kappa +\zeta (\alpha )-\eta _i )} \sigma (\omega _i) \sigma (\omega _i- \alpha ) -\log (\omega _i-x) e^{\omega _i (-(\kappa +\zeta (\alpha ))-\eta _i )} \sigma (\omega _i) \sigma (-\omega _i- \alpha ) \nonumber \\
& \quad = -\log (\omega _i-x) e^{\omega _i (\kappa +\zeta (\alpha )- \eta _i) } \sigma (\omega _i) \sigma (\omega _i- \alpha ) (1 - e^{-2\omega _i (\kappa +\zeta (\alpha ))+2\eta _i \alpha }) .\nonumber
\end{align}
Since $f_0(x)$ is a solution to Eq.(\ref{eq:dYdzAzY}), it can be expressed as a linear combination of $s^{(i)}_1 (x)$ and $s^{(i)}_2 (x)$, and we have
\begin{equation}
f_0(x)= -e^{\omega _i (\kappa +\zeta (\alpha )- \eta _i )} \sigma (\omega _i) \sigma (\omega _i- \alpha ) (1- e^{-2\omega _i (\kappa +\zeta (\alpha ))+2\eta _i \alpha }) s^{(i)}_2 (x) +c^{(0,i)} s^{(i)}_1 (x),
\label{eq:f0si}
\end{equation}
for some constant $c^{(0,i)} $.
Next, we express the function $f_i(x)$ $(i=1,2,3)$ as a linear combination of $s^{(j)}_1 (x)$ and $s^{(j)}_2 (x)$ for $j\in \{ 0,1,2,3\}$.
We set $\xi =(\omega _i-x) \nu +\omega _i$, whereupon we have
\begin{align}
& f_i(x)= \int _{-1} ^1 \frac{e^{x(\kappa +\zeta (\alpha ))((\omega _i-x) \nu +\omega _i) }\sigma (x)\sigma ((\omega _i-x) \nu +\omega _i-\alpha )}{\sqrt{\sigma ((\omega _i-x) (1+\nu ))\sigma ((\omega _i-x) (1-\nu )-2\omega _i )}} (x-\omega _i)d\nu .
\end{align}
Similarly, we have
\begin{align}
& f_i(x) \sim \sqrt{-1} \pi \sigma (\omega _i ) \sigma (\omega _i -\alpha ) e^{\omega _i(\kappa +\zeta (\alpha )-\eta _i)} , \quad (x \rightarrow \omega _i), \\
& f_i(x) \sim - \sqrt{-1} \sigma ( -\alpha ) (1-e^{2\omega _i (\kappa +\zeta (\alpha ))-2\eta _i \alpha }) x \log x , \quad (x \rightarrow 0) , \nonumber
\end{align}
and
\begin{align}
& f_i(x) \sim \sigma (\omega _j ) \sigma (\omega _j- \alpha ) e^{\omega _j (\kappa +\zeta (\alpha )-\eta _j ) }(1-e^{2(\omega _i- \omega _j )(\kappa +\zeta (\alpha ))+2(\eta _j -\eta _i) \alpha }) \log (\omega _j -x) ,
\end{align}
in the limit as $x \rightarrow \omega _j $, $(j\neq 0,i)$, where we have used Legendre's relation, $\eta_i \omega _j-\eta _j \omega _i=\pm \pi \sqrt{-1}/2$.
Therefore we have
\begin{align}
f_i(x) & = \sqrt{-1} \pi \sigma (\omega _i ) \sigma (\omega _i -\alpha ) e^{\omega _i(\kappa +\zeta (\alpha )-\eta _i)} s^{(i)}_1 (x) \label{eq:fisi} \\
& =-\sqrt{-1} \sigma ( -\alpha ) (1-e^{2\omega _i (\kappa +\zeta (\alpha ))-2\eta _i \alpha })s^{(0)}_2 (x) +c^{(i,0)} s^{(0)}_1 (x) \nonumber \\
& =\sigma (\omega _j ) \sigma (\omega _j- \alpha ) e^{\omega _j (\kappa +\zeta (\alpha )-\eta _j ) }(1-e^{2(\omega _i- \omega _j )(\kappa +\zeta (\alpha ))+2(\eta _j -\eta _i) \alpha }) s^{(j)}_2 (x) +c^{(i,j)} s^{(j)}_1 (x), \nonumber
\end{align}
for some constants $c^{(i,0)} $ and $c^{(i,j)} $.
We consider the monodromy matrices on the basis $(f_0(x) , f_1(x))$. Set
\begin{equation}
e[i]=\exp( 2\omega _i (\kappa +\zeta (\alpha ))-2\eta _i \alpha ), \quad (i=1,2,3).
\end{equation}
It follows from Eqs.(\ref{eq:silocmonod}, \ref{eq:f0s0}, \ref{eq:fisi}) that
\begin{align}
& (f^{\gamma _0}_0(x) , f^{\gamma _0}_1(x))= (\sigma (-\alpha ) \pi s^{(0),\gamma _0}_1 (x), -\sqrt{-1} \sigma ( -\alpha ) (1-e[1])s^{(0),\gamma _0}_2 (x) +c^{(1,0)} s^{(0),\gamma _0}_1 (x)) \label{eq:f0f1gam0} \\
& = (\sigma (-\alpha ) \pi s^{(0)}_1(x), -\sqrt{-1} \sigma ( -\alpha ) (1-e[1])(s^{(0)}_2 (x) + 2\pi \sqrt{-1} s^{(0)}_1 (x) )+c^{(1,0)} s^{(0)}_1 (x)) \nonumber \\
& = (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1 & 2(1-e[1]) \\
0 & 1
\end{array}
\right) . \nonumber
\end{align}
Similarly, it follows from Eqs.(\ref{eq:silocmonod}, \ref{eq:f0si}, \ref{eq:fisi}) that
\begin{align}
& (f^{\gamma _1}_0(x) , f^{\gamma _1}_1(x))= (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1 & 0 \\
-2(1-1/e[1]) & 1
\end{array}
\right) . \label{eq:f0f1gam1}
\end{align}
If $e[1] \neq 0$, then it follows from the asymptotic limits as $x \rightarrow 0$ and $x\rightarrow \omega _1$ that the functions $f_0(x)$ and $f_1(x)$ form a basis of solutions to Eq.(\ref{eq:dYdzAzY}),
and the functions $f_j(x)$ $(j=2,3)$ are written as linear combinations of $f_0(x)$ and $f_1(x)$.
Write $f_j(x)= \tilde{c}_{0,j}f_0(x) +\tilde{c}_{1,j}f_1(x)$. Then the coefficients $\tilde{c}_{0,j}, \tilde{c}_{1,j}$ are determined by considering the asymptotic limits as $x \rightarrow \omega _1$ and $x\rightarrow 0$, and we have
\begin{align}
& \tilde{c}_{0,j} = \frac{e[1]-e[j]}{1-e[1]} , \quad \tilde{c}_{1,j} = \frac{1-e[j]}{1-e[1]} .
\end{align}
Therefore
\begin{align}
& (f^{\gamma _j}_0(x) , f^{\gamma _j}_1(x))= (f^{\gamma _j}_0(x) , f^{\gamma _j}_j(x))
\left(
\begin{array}{cc}
1 & -\tilde{c}_{0,j}/\tilde{c}_{1,j} \\
0 & 1/\tilde{c}_{1,j}
\end{array}
\right) \label{eq:f0f1gamj} \\
& = (f_0(x) , f_j(x))
\left(
\begin{array}{cc}
1 & 0 \\
-2(1-1/e[j]) & 1
\end{array}
\right)
\left(
\begin{array}{cc}
1 & -\tilde{c}_{0,j}/\tilde{c}_{1,j} \\
0 & 1/\tilde{c}_{1,j}
\end{array}
\right) \nonumber \\
& =(f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1 & \tilde{c}_{0,j} \\
0 & \tilde{c}_{1,j}
\end{array}
\right)
\left(
\begin{array}{cc}
1 & 0 \\
-2(1-1/e[j]) & 1
\end{array}
\right)
\left(
\begin{array}{cc}
1 & -\tilde{c}_{0,j}/\tilde{c}_{1,j} \\
0 & 1/\tilde{c}_{1,j}
\end{array}
\right) \nonumber \\
&= (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1+2\frac{(e[1]-e[j])(e[j]-1)}{(e[1]-1)e[j]} & 2\frac{(e[1]-e[j])^2}{(e[1]-1)e[j]} \\
-2\frac{(e[j]-1)^2}{(e[1]-1)e[j]} & 1-2\frac{(e[1]-e[j])(e[j]-1)}{(e[1]-1)e[j]}
\end{array}
\right) ,\nonumber
\end{align}
and we have obtained the monodromy matrices for the basis $(f_0(x), f_1(x))$ on the cycles $\gamma _2$, $\gamma _3 $.
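The last matrix product can be checked symbolically; the following sympy sketch is our own verification (not part of the original argument), with \verb|e1|, \verb|ej| standing for $e[1]$, $e[j]$:

```python
import sympy as sp

e1, ej = sp.symbols('e1 ej')

# coefficients of f_j = c0*f_0 + c1*f_1 from the asymptotic matching above
c0 = (e1 - ej) / (1 - e1)
c1 = (1 - ej) / (1 - e1)

A = sp.Matrix([[1, c0], [0, c1]])            # basis change (f_0, f_1) -> (f_0, f_j)
B = sp.Matrix([[1, 0], [-2*(1 - 1/ej), 1]])  # monodromy of gamma_j on (f_0, f_j)
C = sp.Matrix([[1, -c0/c1], [0, 1/c1]])      # basis change back to (f_0, f_1)

M = sp.simplify(A * B * C)

expected = sp.Matrix([
    [1 + 2*(e1 - ej)*(ej - 1)/((e1 - 1)*ej),  2*(e1 - ej)**2/((e1 - 1)*ej)],
    [-2*(ej - 1)**2/((e1 - 1)*ej),            1 - 2*(e1 - ej)*(ej - 1)/((e1 - 1)*ej)],
])

assert sp.simplify(M - expected) == sp.zeros(2, 2)
```

The product also has determinant $1$, as it must for these monodromy matrices.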
We consider the monodromy preserving deformation with respect to the basis $(f_0(x), f_1(x))$.
Assume that the values $e[1]$, $e[3]$ are preserved while varying the ratio $\omega _3/\omega _1$.
Then the monodromy is preserved by Eqs.(\ref{eq:f0f1gam0}, \ref{eq:f0f1gam1}, \ref{eq:f0f1gamj}) and the equality $e[1]+e[2]+e[3]=0$.
Since the values $e[1]$, $e[3]$ are preserved by monodromy preserving deformation, we have
\begin{align}
& -2\eta _1 \alpha +2\omega _1 \zeta (\alpha ) +2 \kappa \omega _1 = \pi \sqrt{-1} C_1, \\
& -2\eta _3 \alpha +2\omega _3 \zeta (\alpha ) +2 \kappa \omega _3 = \pi \sqrt{-1} C_3, \nonumber
\end{align}
for constants $C_1$ and $C_3$.
By Legendre's relation, $\eta _1\omega _3-\eta _3\omega _1=\pi \sqrt{-1}/2$, we have
\begin{align}
& \alpha = C_3 \omega _1 -C_1 \omega _3 \label{al00},\\
& \kappa = \zeta (C_1 \omega _3 -C_3 \omega _1 ) +C_3 \eta _1 -C_1 \eta _3 . \nonumber
\end{align}
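The pair $(\alpha ,\kappa )$ above can be recovered mechanically, since the two conservation conditions are linear in $\alpha $ and $\zeta (\alpha )+\kappa $. A sympy sketch of this elimination (our own check; Legendre's relation is imposed by eliminating $\eta _1$):

```python
import sympy as sp

w1, w3, eta1, eta3, C1, C3 = sp.symbols('omega1 omega3 eta1 eta3 C1 C3')
al, Y = sp.symbols('alpha Y')  # Y stands for zeta(alpha) + kappa

# the two equations preserving e[1], e[3], linear in (alpha, Y)
sol = sp.solve(
    [sp.Eq(-2*eta1*al + 2*w1*Y, sp.pi*sp.I*C1),
     sp.Eq(-2*eta3*al + 2*w3*Y, sp.pi*sp.I*C3)],
    [al, Y], dict=True)[0]

# Legendre's relation: eta1*omega3 - eta3*omega1 = pi*sqrt(-1)/2
leg = {eta1: (sp.pi*sp.I/2 + eta3*w1)/w3}

# alpha = C3*omega1 - C1*omega3 and zeta(alpha) + kappa = C3*eta1 - C1*eta3
assert sp.simplify(sol[al].subs(leg) - (C3*w1 - C1*w3)) == 0
assert sp.simplify(sol[Y].subs(leg) - (C3*eta1 - C1*eta3).subs(leg)) == 0
```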
Recall that the sixth Painlev\'e equation has an elliptical representation (see Eq.(\ref{eq:P6ellip})),
which is a differential equation for $\delta $ with respect to the variable $\tau =\omega_3/\omega _1$.
For the case $(\theta _0, \theta _1, \theta _t, 1-\theta _{\infty }) =(0,0,0,0)$, this equation is written as $d^2\delta /d\tau ^2 =0$.
The variables $\lambda $ and $\delta $ are related by $\lambda =(\wp (\delta )-e_1)/(e_2-e_1)$.
With regard to the integral representation of solutions to $D_{y_1}(0,0,0,1;\lambda ,\mu)$, we have the relations
\begin{align}
& \lambda = \frac{1}{e_2-e_1} \left\{ \wp (\alpha ) -e_1\right\} ,\quad \mu = -\frac{(e_2-e_1 )\kappa }{\wp ' (\alpha )} . \label{eq:lmalka0000t}
\end{align}
Hence $\alpha $ plays the role of $\delta $ mod $2\omega _1 \Zint \oplus 2\omega _3 \Zint $, and Eq.(\ref{al00}) corresponds to Picard's solution to the sixth Painlev\'e equation for the case $(\theta _0, \theta _1, \theta _t, 1-\theta _{\infty }) =(0,0,0,0)$ by setting $\omega _1=1/2$ and $\omega _3=\tau /2$.
We therefore reproduce Picard's solution by determining the monodromy of the corresponding Fuchsian equation.
\section{Integral representation of solutions to Heun's equation} \label{sec:intrepHeun}
In section \ref{sec:intrepFs}, we obtained that, if $\theta _0, \theta _1, \theta _t, \theta _{\infty } \in \Zint $ and $\theta _0 + \theta _1 + \theta _t + \theta _{\infty } \in 1+2\Zint $, then we have integral representations of solutions to the Fuchsian equation $D_{y_1}(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$ (see Eq.(\ref{eq:intrepy1x})) and the Fuchsian system $D_Y(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu ,k)$.
In this section we obtain integral representations of solutions to Heun's equation by a suitable choice of the parameters $\lambda $ and $\mu $.
Recall that Heun's differential equation is defined by
\begin{equation}
\frac{d^2y}{dz^2} + \left( \frac{\gamma}{z}+\frac{\delta }{z-1}+\frac{\epsilon}{z-t}\right) \frac{dy}{dz} +\frac{\alpha \beta z -q}{z(z-1)(z-t)} y=0,
\label{eq:Heuns7}
\end{equation}
with the condition $\gamma +\delta +\epsilon =\alpha +\beta +1$.
This equation has an elliptical representation:
Set
\begin{align}
& z=\frac{\wp (x) -e_1}{e_2-e_1}, \quad t=\frac{e_3-e_1}{e_2-e_1}, \quad f(x)= y z^{\frac{-l_1}{2}}(z-1)^{\frac{-l_2}{2}}(z-t)^{\frac{-l_3}{2}},
\end{align}
then Heun's equation (Eq.(\ref{eq:Heuns7})) is transformed to
\begin{equation}
\left(-\frac{d^2}{dx^2} + \sum_{i=0}^3 l_i(l_i+1)\wp (x+\omega_i)-E\right)f(x)=0,
\label{InoEF0}
\end{equation}
where
\begin{align}
& l_0= \alpha -\beta -1/2,\quad l_1= -\gamma +1/2, \quad l_2=-\delta +1/2, \quad l_3=-\epsilon +1/2, \\
& E=(e_2-e_1)(-4q+(-(\alpha -\beta)^2 +2\gamma ^2+6\gamma \epsilon +2\epsilon ^2 -4\gamma -4\epsilon -\delta ^2 +2\delta +1)/3 \nonumber \\
& \quad \quad +(-(\alpha -\beta ) ^2 +2\gamma ^2+6\gamma \delta +2\delta^2 -4\gamma -4\delta -\epsilon ^2+2\epsilon +1)t/3). \nonumber
\end{align}
We obtained in section \ref{sec:intrepFs} that, if $l_0 , l_1 , l_2 ,l_3 \in \Zint$ and $l_0 +l_1 +l_2 +l_3 \in 2\Zint$, then the function $\tilde{y}^{(i)}_1 (z) $
defined by
\begin{align}
\tilde{y}^{(i)} _1 (z)=& \int _{-\tilde{\wp }^{-1} (z )+2\omega _i}^{\tilde{\wp }^{-1}(z)} \left[ \left\{ \frac{\kappa _1 }{e_2- e_1}+ \left( \tilde{\wp }(\xi )- \lambda -\frac{\kappa _1}{\mu }\right) \sum _{i=1}^3 \frac{l_i}{2(\wp (\xi )-e_i)} \right\} \wp ' (\xi ) f _{HK} (\xi ) \right. \label{eq:intrepy1x0}\\
& \quad \quad \quad \left. + \left( \tilde{\wp }(\xi )- \lambda -\frac{\kappa _1}{\mu }\right) \frac{df _{HK} (\xi )}{d\xi } \right] \left( \prod _{i=1}^3 (\wp(\xi )-e_i) ^{l_i/2} \right) \frac{(z-\tilde{\wp }(\xi ))^{\kappa _1 }}{(\tilde{\wp }(\xi )-\lambda )} d\xi , \nonumber
\end{align}
$(i=0,1,2,3)$ is a solution to the Fuchsian differential equation $D_{y_1 }(\tilde{\theta }_0, \tilde{\theta }_1, \tilde{\theta }_t, \tilde{\theta }_{\infty}; \lambda +\kappa _1 /\mu ,\mu )$, where
\begin{align}
& \tilde{\theta }_0=\frac{-l_0 +l _1 -l_2 -l_3}{2}, \quad \tilde{\theta }_1=\frac{-l_0 -l _1 +l_2 -l_3}{2} , \quad \tilde{\theta }_t=\frac{-l_0 -l _1 -l_2 +l_3}{2} ,\\
& \tilde{\theta }_{\infty}=\frac{-l_0 +l _1 +l_2 +l_3}{2}+1 , \quad \tilde{\wp }(x)=\frac{\wp (x) -e_1}{e_2-e_1}, \quad \kappa _1=-\frac{l_0+l_1+l_2+l_3+1}{2}, \nonumber
\end{align}
and the function $f_{HK}(x)$ is defined in Theorem \ref{thm:xinterep}.
The Fuchsian equation $D_{y_1 }(\tilde{\theta }_0, \tilde{\theta }_1, \tilde{\theta }_t, \tilde{\theta }_{\infty}; \lambda +\kappa _1 /\mu ,\mu )$ has an apparent singularity at $z=\lambda +\kappa _1/\mu $.
We consider the confluence of the apparent singularity $z =\lambda +\kappa _1/\mu $ to the regular singularity $z=\infty$.
Set $\mu= 0$.
Then the Fuchsian equation is written as Heun's equation
\begin{align}
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_1}{z-1}+\frac{1-\tilde{\theta }_t}{z-t}\right) \frac{dy}{dz} \label{Heuninfty} \\
& +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+2) z + \tilde{\kappa }_1 (1 -\tilde{\theta }_{\infty} )\lambda -\tilde{\kappa }_1 ( (\tilde{\kappa }_2 +\tilde{\theta }_t +1)t +(\tilde{\kappa }_2 +\tilde{\theta }_1 +1)) }{z(z-1)(z-t)} y=0 ,\nonumber
\end{align}
where $\tilde{\kappa }_1 =( \tilde{\theta }_{\infty } -\tilde{\theta }_0-\tilde{\theta }_1-\tilde{\theta }_t)/2$ and $\tilde{\kappa }_2 =-( \tilde{\theta }_{\infty } +\tilde{\theta }_0+\tilde{\theta }_1+\tilde{\theta }_t)/2$.
We have $1-\tilde{\theta }_0, 1-\tilde{\theta }_1,1-\tilde{\theta }_t , \tilde{\kappa }_1 +1/2, \tilde{\kappa }_2+2+1/2 \in \Zint$.
For the case $\tilde{\theta }_{\infty} =1$, we set $\mu =bs^2$, $\lambda = c/s$ and consider the limit $s \rightarrow 0$.
Then we have
\begin{align}
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_1}{z-1}+\frac{1-\tilde{\theta }_t}{z-t}\right) \frac{dy}{dz} \label{Heuninftytti1} \\
& +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+2) z + \tilde{\kappa }_1 bc^2 - \tilde{\kappa }_1 ( (\tilde{\kappa }_2 +\tilde{\theta }_t +1)t +(\tilde{\kappa }_2 +\tilde{\theta }_1 +1)) }{z(z-1)(z-t)} y=0 .\nonumber
\end{align}
The following theorem follows from Eq.(\ref{Heuninfty}) by substituting the parameters as indicated:
\begin{thm} \label{thm:interepHeun}
(i) Assume that $\gamma +\delta +\epsilon =\alpha +\beta +1$, $\gamma ,\delta ,\epsilon ,\alpha +1/2 , \beta +1/2 \in \Zint$.
Set
\begin{align}
& \tilde{l}_0=\alpha -3/2, \; \tilde{l}_1=\delta +\epsilon -\alpha -1/2, \; \tilde{l}_2=\gamma +\epsilon -\alpha -1/2, \; \tilde{l}_3=\gamma + \delta -\alpha -1/2.
\end{align}
Let $f_{HK}(x)= f_{HK} (x; \tilde{l}_0,\tilde{l}_1,\tilde{l}_2,\tilde{l}_3;\lambda ,\mu )$ be the function expressed in the form of the Hermite-Krichever Ansatz.
Set $\tilde{\wp }(x)= (\wp (x)-e_1)/(e_2-e_1)$ and
\begin{align}
& F(\xi; \lambda ,\mu, m)= \mu ^m \left[ \left\{ \frac{-\beta }{e_2- e_1}+ \left( \tilde{\wp }(\xi )- \lambda +\frac{\beta }{\mu }\right) \sum _{i=1}^3 \frac{\tilde{l}_i}{2(\wp (\xi )-e_i)} \right\} \wp ' (\xi ) f _{HK} (\xi ) \right. \label{eq:intrepy1x00}\\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \left. + \left( \tilde{\wp }(\xi )- \lambda +\frac{\beta }{\mu }\right) \frac{df _{HK} (\xi )}{d\xi } \right] \left( \prod _{i=1}^3 (\wp(\xi )-e_i) ^{\tilde{l}_i/2} \right) .\nonumber
\end{align}
If $\alpha -\beta \neq 1$ (resp. $\alpha -\beta = 1$) and the integrand in Eq.(\ref{eq:intrepHeun}) (resp. Eq.(\ref{eq:intrepHeun00})) has a non-zero finite limit as $\mu \rightarrow 0$ (resp. $s \rightarrow 0$) for some $m$,
then the functions
\begin{align}
& \tilde{y}^{(i)}_1 (z)= \int _{-\tilde{\wp }^{-1}(z) +2\omega _i}^{\tilde{\wp }^{-1}(z) } \lim _{\mu \rightarrow 0} F(\xi; \lambda ,\mu , m) \frac{(z-\tilde{\wp }(\xi ))^{-\beta }}{(\tilde{\wp }(\xi )-\lambda )} d\xi , \label{eq:intrepHeun} \\
& \quad \quad \lambda = \frac{t(\alpha -\epsilon )+(\alpha -\delta )-q/\beta }{\alpha -\beta - 1} , \quad (\alpha -\beta \neq 1), \nonumber \\
& \tilde{y}^{(i)}_1 (z)= \int _{-\tilde{\wp }^{-1}(z) +2\omega _i}^{\tilde{\wp }^{-1}(z) } \lim _{s \rightarrow 0} F(\xi; c/s ,b s^2 , m) \frac{(z-\tilde{\wp }(\xi ))^{-\beta }}{(\tilde{\wp }(\xi )-c/s )} d\xi , \label{eq:intrepHeun00} \\
& \quad \quad bc^2 = t(\alpha -\epsilon )+(\alpha -\delta )+q/(1- \alpha ) , \quad (\alpha -\beta = 1), \nonumber
\end{align}
$(i=0,1,2,3)$ are solutions to Heun's equation (Eq.(\ref{eq:Heuns7})).\\
(ii) Assume that $l_0,l_1,l_2,l_3 \in \Zint +1/2$ and $l_0+l_1+l_2+l_3 \in 1+2\Zint $.
Set
\begin{align}
& \tilde{l}= \frac{l_0+l_1+l_2+l_3}{2}, \quad \tilde{l}_0=\frac{l_0-l_1-l_2-l_3}{2}-1, \\
& \tilde{l}_1=\frac{-l_0+l_1-l_2-l_3}{2}, \quad \tilde{l}_2=\frac{-l_0-l_1+l_2-l_3}{2}, \quad \tilde{l}_3=\frac{-l_0-l_1-l_2+l_3}{2}, \nonumber \\
& F(\xi; \lambda ,\mu, m)= \mu ^m \left[ \left\{ \frac{\tilde{l}}{e_2- e_1}+ \left( \tilde{\wp }(\xi )- \lambda -\frac{\tilde{l}}{\mu }\right) \sum _{i=1}^3 \frac{\tilde{l}_i}{2(\wp (\xi )-e_i)} \right\} \wp ' (\xi ) f _{HK} (\xi ) \right. \nonumber \\
& \quad \quad \quad \quad \quad \quad \quad \quad \left. + \left( \tilde{\wp }(\xi )- \lambda -\frac{\tilde{l}}{\mu }\right) \frac{df _{HK} (\xi )}{d\xi } \right] \left( \prod _{i=1}^3 (\wp(\xi )-e_i) ^{\tilde{l}_i/2} \right) . \nonumber
\end{align}
If $ l_0 \neq 1/2$ (resp. $l_0 = 1/2$) and the integrand in Eq.(\ref{eq:intrepHeune}) (resp. Eq.(\ref{eq:intrepHeune00})) has a non-zero finite limit as $\mu \rightarrow 0$ (resp. $s \rightarrow 0$) for some $m$,
then the functions
\begin{align}
& f^{(i)} (x)= \left( \prod _{j=1}^3 (\wp (x)-e_j )^{-l_j/2 } \right) \int _{-x+2\omega _i}^{x} \lim _{\mu \rightarrow 0} F(\xi; \lambda ,\mu , m) \frac{(\wp(x) -\wp (\xi ))^{\tilde{l}}}{(\tilde{\wp }(\xi )-\lambda )} d\xi \label{eq:intrepHeune}, \\
& \lambda = \frac{E+ (l_3-l_1)(2l_0+l_1+l_3) e_1+(l_3-l_2)(2l_0+l_2+l_3) e_2}{(e_1-e_2)(l_1+l_2+l_3+l_0)(2l_0-1)} +\frac{e_1}{e_2-e_1} , \quad (l_0 \neq 1/2), \nonumber \\
& f^{(i)} (x)= \left( \prod _{j=1}^3 (\wp (x)-e_j )^{-l_j/2 } \right) \int _{-x+2\omega _i}^{x} \lim _{s \rightarrow 0} F(\xi; c/s ,bs^2 , m) \frac{(\wp(x) -\wp (\xi ))^{\tilde{l}}}{(\tilde{\wp }(\xi )-c/s )} d\xi \label{eq:intrepHeune00}, \\
& bc^2= \frac{E+ (l_3-l_1)(l_1+l_3+1 ) e_1+(l_3-l_2)(l_2+l_3+1) e_2}{(e_1-e_2)(2l_1+2l_2+2l_3+1)} ,\quad (l_0=1/2), \nonumber
\end{align}
$(i=0,1,2,3)$ are solutions to the elliptical representation of Heun's equation (Eq.(\ref{InoEF0})).
\end{thm}
We consider the limits $\lambda +\kappa _1/\mu \rightarrow 0,1,t$.
The following equations are obtained by setting $\lambda =-\kappa _1/\mu $, $\lambda =1-\kappa _1/\mu $, $\lambda =t-\kappa _1/\mu $ in the Fuchsian equation $D_{y_1}(\theta _0, \theta _1, \theta _t, \theta _{\infty}; \lambda ,\mu )$:
\begin{align}
& \frac{d^2y}{dz^2} + \left( \frac{-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_1}{z-1}+\frac{1-\tilde{\theta }_t}{z-t} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)z +t\tilde{\theta }_0\mu }{z(z-1)(z-t)}y=0, \label{Heun0} \\
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{-\tilde{\theta }_1}{z-1}+\frac{1-\tilde{\theta }_t}{z-t} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)(z-1) +(1-t)\tilde{\theta }_1\mu }{z(z-1)(z-t)}y=0, \label{Heun1} \\
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_1}{z-1}+\frac{-\tilde{\theta }_t}{z-t} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)(z-t) +t(t-1)\tilde{\theta }_t\mu }{z(z-1)(z-t)} y=0, \label{Heunt}
\end{align}
where the parameters are defined as for the case $\mu =0$.
For the case $\tilde{\theta }_i =0$ $(i=0,1,t)$, we set $\mu=c/s$, $\lambda = i-\kappa _1/\mu +bs^2$, and consider the limit $s \rightarrow 0$.
Then we have
\begin{align}
& \frac{d^2y}{dz^2} + \left(\frac{1-\tilde{\theta }_1}{z-1}+\frac{1-\tilde{\theta }_t}{z-t} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)z -tbc^2 }{z(z-1)(z-t)}y=0, \label{Heun0-0} \\
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_t}{z-t} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)(z-1) +(t-1)bc^2}{z(z-1)(z-t)}y=0, \label{Heun1-0} \\
& \frac{d^2y}{dz^2} + \left( \frac{1-\tilde{\theta }_0}{z}+\frac{1-\tilde{\theta }_1}{z-1} \right) \frac{dy}{dz} +\frac{\tilde{\kappa }_1 (\tilde{\kappa }_2+1)(z-t) +t(1-t)bc^2 }{z(z-1)(z-t)} y=0. \label{Heunt-0}
\end{align}
Note that we have propositions similar to Theorem \ref{thm:interepHeun}.
We consider the integral representations of solutions to Heun's equation for the case $\gamma =\delta =\epsilon =1$ and $\alpha = 3/2, \beta =1/2$, i.e. the case $l_0=1/2$, $l_1=l_2=l_3=-1/2$.
Recall that the functions
\begin{align}
& f_i(x)=\int_{-x+2\omega _i}^x \frac{e^{(\kappa +\zeta (\alpha ))\xi }\sigma (x)\sigma (\xi -\alpha )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi , \quad z=\frac{\wp (x)-e_1}{e_2-e_1} , \label{eq:intrep00010}
\end{align}
for $i=0,1,2,3$ are solutions to the Fuchsian differential equation $D_{y_1} (0,0,0,1;( \wp (\alpha ) -e_1)/(e_2-e_1) ,-(e_2-e_1 )\kappa /\wp ' (\alpha ))$ (see Proposition \ref{prop:intrep0001}).
The limit $s\rightarrow 0$ in Theorem \ref{thm:interepHeun} corresponds to the limit $\alpha \rightarrow 0$ while setting $\kappa =-\zeta(\alpha )+\tilde{\kappa }$.
Therefore, it follows from Eq.(\ref{eq:intrep00010}) that the functions
\begin{align}
& f_i(x)=\int_{-x+2\omega _i}^x \frac{e^{\tilde{\kappa }\xi }\sigma (x)\sigma (\xi )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi , \label{eq:intrepH0001}
\end{align}
for $i=0,1,2,3$ are solutions to Heun's equation
\begin{align}
& \frac{d^2y}{dz^2} + \left( \frac{1}{z}+\frac{1}{z-1}+\frac{1}{z-t}\right) \frac{dy}{dz} +\frac{3z +(3e_1 -\tilde{\kappa }^2)/(e_2-e_1)}{4z(z-1)(z-t)} y=0, \label{eq:Heuns70001}
\end{align}
by setting $z=(\wp (x)-e_1)/(e_2-e_1)$, and the functions
\begin{align}
f^{(i)}(x)&= \left( \prod _{i=1}^3 (\wp (x) -e_i) \right) ^{1/4}\int ^{x}_{-x+2\omega _i} \frac{e^{\tilde{\kappa }\xi }\sigma (x)\sigma (\xi )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi \\
& =\left( \frac{\sigma (x-\omega_1 )\sigma (x-\omega _2) \sigma (x-\omega _3)}{\sigma (x)} \right)^{1/2} \int ^{x}_{-x+2\omega _i} \frac{e^{\tilde{\kappa }\xi }\sigma (\xi )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi , \nonumber
\end{align}
for $i=0,1,2,3$ are solutions to Heun's equation in elliptical form for the case $l_0=1/2$, $l_1=l_2=l_3=-1/2$,
\begin{align}
& \left(-\frac{d^2}{dx^2} + \frac{3}{4} \wp(x) -\frac{1}{4} \sum_{i=1}^3 \wp (x+\omega_i ) +\tilde{\kappa }^2 \right)f(x)=0. \label{InoEF00001}
\end{align}
The monodromy matrix of solutions to Eq.(\ref{eq:Heuns70001}) can be expressed in the form of those in section \ref{sec:intrepFs0000} by substituting $\kappa =-\zeta (\alpha )+\tilde{\kappa }$ and $\alpha =0$.
In fact, if $e^{2\omega _1 \tilde{\kappa }} \neq 1$ then the functions $f_0 (x) $ and $f_1(x)$ are linearly independent, and the monodromy matrices are written as
\begin{align}
& (f^{\gamma _0}_0(x) , f^{\gamma _0}_1(x))= (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1 & 2(1-e^{2\omega _1 \tilde{\kappa }}) \\
0 & 1
\end{array}
\right) , \label{eq:monodmHeun} \\
& (f^{\gamma _1}_0(x) , f^{\gamma _1}_1(x))= (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1 & 0 \\
-2(1-e^{-2\omega _1 \tilde{\kappa }}) & 1
\end{array}
\right) ,\nonumber \\
& (f^{\gamma _j}_0(x) , f^{\gamma _j}_1(x)) \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad (j=2,3) \nonumber \\
& \quad \quad = (f_0(x) , f_1(x))
\left(
\begin{array}{cc}
1+2\frac{(e^{2\omega _1 \tilde{\kappa }} - e^{2\omega _j \tilde{\kappa }})(e^{2\omega _j \tilde{\kappa }}-1 )}{(e^{2\omega _1 \tilde{\kappa }}-1)e^{2\omega _j \tilde{\kappa }}} & 2\frac{(e^{2\omega _1 \tilde{\kappa }} - e^{2\omega _j \tilde{\kappa }})^2 }{(e^{2\omega _1 \tilde{\kappa }}-1)e^{2\omega _j \tilde{\kappa }}} \\
-2\frac{(e^{2\omega _1 \tilde{\kappa }} - 1)^2 }{(e^{2\omega _1 \tilde{\kappa }}-1)e^{2\omega _j \tilde{\kappa }}} & 1-2\frac{(e^{2\omega _1 \tilde{\kappa }} - e^{2\omega _j \tilde{\kappa }})(e^{2\omega _j \tilde{\kappa }}-1 )}{(e^{2\omega _1 \tilde{\kappa }}-1)e^{2\omega _j \tilde{\kappa }}}
\end{array}
\right) ,\nonumber
\end{align}
which are obtained by analytic continuation of Eqs.(\ref{eq:f0f1gam0}, \ref{eq:f0f1gam1}, \ref{eq:f0f1gamj}) in the limit $\alpha \rightarrow 0$.
The monodromy matrices of solutions to Eq.(\ref{eq:Heuns70001}) are written as products of the monodromy matrices in Eq.(\ref{eq:monodmHeun}) and the scalar that is determined by the branching of $(\sigma (x-\omega _1 )\sigma (x-\omega _2 )\sigma (x-\omega _3 )/\sigma (x))^{1/2 }$.
If $\tilde{\kappa }=0$, then the integrals in Eq.(\ref{eq:intrepH0001}) are written as
\begin{align}
\int _{\infty }^z \frac{dw}{\sqrt{(w-z)(w-e_1)(w-e_2)(w-e_3)}}, \quad \int _{e_i }^z \frac{dw}{\sqrt{(w-z)(w-e_1)(w-e_2)(w-e_3)}},
\end{align}
for $i=1,2,3$ by setting $w=\wp (\xi)$ and $z=\wp (x)$.
These integrals coincide with the formula for the density function describing the root asymptotics of spectral polynomials for the Lam\'e operator, discovered by Borcea and Shapiro \cite{BS} (see also \cite{TakW}).
The limits $\lambda +\kappa _1/\mu \rightarrow 0,1,t$ correspond respectively to the limits $\alpha \rightarrow \omega _1, \omega _2, \omega _3$.
The functions
\begin{align}
& f_{i'}(x)=\int_{-x+2\omega _{i'}}^x \frac{e^{(\kappa +\eta _i) \xi }\sigma (x)\sigma (\xi -\omega_i )}{\sqrt{\sigma (x-\xi )\sigma (x+\xi )}}d\xi , \quad (i'=0,1,2,3 ) \label{eq:intrepH0001123}
\end{align}
are solutions to the following Heun's equations:
\begin{align}
& \frac{d^2y}{dz^2} + \left( \frac{1}{z-1}+\frac{1}{z-t}\right) \frac{dy}{dz} +\frac{z -\kappa ^2/(e_2-e_1)}{4z(z-1)(z-t)}y=0, \quad (i=1), \label{Heun00} \\
& \frac{d^2y}{dz^2} + \left( \frac{1}{z}+\frac{1}{z-t}\right) \frac{dy}{dz} +\frac{z -1-\kappa ^2/(e_2-e_1)}{4z(z-1)(z-t)}y=0, \quad (i=2), \label{Heun11} \\
& \frac{d^2y}{dz^2} + \left( \frac{1}{z}+\frac{1}{z-1}\right) \frac{dy}{dz} +\frac{z -t-\kappa ^2/(e_2-e_1)}{4z(z-1)(z-t)} y=0, \quad (i=3), \label{Heuntt}
\end{align}
by setting $z=(\wp (x)-e_1)/(e_2-e_1)$, and we have similar results for Heun's equations in elliptical form for the case $l_0=-l_1=l_2=l_3=-1/2$, $l_0=l_1=-l_2=l_3=-1/2$, $l_0=l_1=l_2=-l_3=-1/2$ respectively.
The monodromy matrices are expressed in forms similar to Eq.(\ref{eq:monodmHeun}).
{\bf Acknowledgments}
The author would like to thank Galina Filipuk and Yoshishige Haraoka for fruitful discussions and valuable comments.
Thanks are also due to Philip Boalch.
He is supported by the Grant-in-Aid for Young Scientists (B) (No. 19740089) from the Japan Society for the Promotion of Science.
\section{Introduction}
In this paper, we consider the problem of estimating the integrated volatility of a discretely-observed one-dimensional It\^o semimartingale over a finite interval.
The class of It\^o semimartingales has many applications in various areas such as neuroscience, physics and finance. Indeed, it includes the stochastic Morris-Lecar neuron model \cite{5 GLM} as well as important examples taken from finance such as the Barndorff-Nielsen-Shephard model \cite{2 GLM}, the Kou model \cite{13 GLM} and the Merton model \cite{22 GLM}, to name just a few. \\
In this work we aim at estimating the integrated volatility based on discrete observations $X_{t_0}, \ldots , X_{t_n}$ of the process $X$, with $t_i = i \frac{T}{n}$. Let $X$ be a solution of
$$X_t= X_0 + \int_0^t b_s ds + \int_0^t a_s dW_s + \int_0^t \int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{s^-}) \, z \, \tilde{\mu}(ds, dz), \quad t \in \mathbb{R}_+,$$
with $W=(W_t)_{t \ge 0}$ a one dimensional Brownian motion and $\tilde{\mu}$ a compensated Poisson random measure. We also require the volatility $a_t$ to be an It\^o semimartingale.
We consider here the setting of high frequency observations, i.e. $\Delta_n : = \frac{T}{n} \rightarrow 0$ as $n \rightarrow \infty$. We want to estimate $IV := \frac{1}{T}\int_0^T a^2_s f(X_s) ds$, where $f$ is a function of polynomial growth. Such a quantity has already been widely studied in the literature because of its great importance in finance. Indeed, taking $f \equiv 1$, $IV$ turns out to be the so-called integrated volatility, which has particular relevance in measuring and forecasting asset risks; its estimation on the basis of discrete observations of $X$ is one of the long-standing problems. \\
In the sequel we present some known results, denoting by $IV$ the classical integrated volatility, that is, assuming $f \equiv 1$.
When $X$ is continuous, the canonical way to estimate the integrated volatility is to use the realized volatility or approximate quadratic variation at time $T$:
$$[X,X]_T^n := \sum_{i =0}^{n -1}(\Delta X_i)^2, \qquad \mbox{where } \Delta X_i = X_{t_{i + 1}} - X_{t_i}.$$
Under very weak assumptions on $b$ and $a$ (namely when $\int_0^t b^2_s ds$ and $\int_0^t a^4_s ds$ are finite for all $t \in (0,T]$), we have a central limit theorem (CLT) with rate $\sqrt{n}$: the processes $\sqrt{n}([X,X]_T^n - IV)$ converge, in the sense of stable convergence in law for processes, to a limit $Z$ defined on an extension of the space, which is conditionally a centered Gaussian variable whose conditional law is characterized by its (conditional) variance $V_T := 2 \int_0^T a^4_s ds$.
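For intuition only (this illustration is ours, not taken from the papers cited), the $\sqrt{n}$ rate can be observed on a toy continuous model with constant volatility $a$: quadrupling $n$ roughly halves the root-mean-square error of the realized volatility.

```python
import numpy as np

rng = np.random.default_rng(2)
a, T, paths = 0.5, 1.0, 2000
iv = a**2 * T  # true integrated volatility for constant a

def rmse_of_rv(n):
    # simulate `paths` continuous paths with increments a*dW and sum the squares
    dX = a * rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
    rv = np.sum(dX**2, axis=1)  # realized volatility [X, X]_T^n per path
    return np.sqrt(np.mean((rv - iv)**2))

r1, r2 = rmse_of_rv(400), rmse_of_rv(1600)
print(r1 / r2)  # close to 2, consistent with the sqrt(n) rate
```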
When $X$ has jumps, the variable $[X, X]_T^n$ no longer converges to $IV$. However, there are other known methods to estimate the integrated volatility. \\
The first type of jump-robust volatility estimators is the class of \textit{multipower variations} (cf. \cite{Power 1}, \cite{Power 2}, \cite{13 in Maillavin}), which we do not explicitly recall here. These estimators satisfy a CLT with rate $\sqrt{n}$ but with a conditional variance bigger than $V_T$ (so they are rate-efficient but not variance-efficient). \\
The second type of volatility estimators, introduced by Jacod and Todorov in \cite{JT}, is based on estimating locally the volatility from the empirical characteristic function of the increments of the process over blocks of decreasing length but containing an increasing number of observations, and then summing the local volatility estimates. \\
Another method to estimate the integrated volatility in jump diffusion processes, introduced by Mancini in \cite{Mancini soglia}, is the use of the \textit{truncated realized volatility} or \textit{truncated quadratic variation} (see \cite{13 in Maillavin}, \cite{Mancini}):
$$\hat{IV}_T^n := \sum_{i = 0}^{n -1} (\Delta X_i)^2 1_{\left \{ |\Delta X_i| \le v_n \right \}},$$
where $v_n$ is a sequence of positive truncation levels, typically of the form $(\frac{1}{n})^\beta$ for some $\beta\in (0, \frac{1}{2})$. \\
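As a purely illustrative aside (not from the original papers), the effect of the truncation can be seen on a simulated path with constant volatility and compound-Poisson jumps; all parameter values below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100_000, 1.0
dt = T / n
a, lam, jump_sd = 0.5, 5.0, 1.0   # constant volatility, jump intensity, jump size std
beta = 0.45                        # truncation exponent, v_n = dt**beta

# increments of X (zero drift for simplicity): Brownian part plus rare large jumps
dX = (a * rng.normal(0.0, np.sqrt(dt), n)
      + rng.normal(0.0, jump_sd, n) * (rng.random(n) < lam * dt))

rv = np.sum(dX**2)                          # [X, X]_T^n, polluted by the jumps
v_n = dt**beta
trv = np.sum(dX**2 * (np.abs(dX) <= v_n))   # truncated realized volatility

print(rv, trv, a**2 * T)  # trv stays close to the true IV = a^2 * T
```

With this tuning $v_n$ sits a few standard deviations above the typical Brownian increment, so almost all diffusive increments survive while the jumps are filtered out.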
Below we focus on the estimation of $IV$ through the truncated quadratic variation, which is based on the idea of summing only the squared increments of $X$ whose absolute value is smaller than some threshold $v_n$. \\
It is shown in \cite{J} that $\hat{IV}_T^n$ has exactly the same limiting properties as $[X,X]_T^n$ does in the continuous case, provided that $\alpha \in [0,1)$ and $\beta \in [\frac{1}{2(2 - \alpha)}, \frac{1}{2})$. The index $\alpha$ is the degree of jump activity or Blumenthal-Getoor index
$$\alpha := \inf \left \{ r \in [0,2] : \int_{|x| \le 1} |x|^r F(dx) < \infty \right \},$$
where $F$ is a L\'evy measure which accounts for the jumps of the process and it is such that the compensator $\bar{\mu}$ has the form $\bar{\mu}(dt, dz)= F(z)dz dt$. \\
Mancini has proved in \cite{Mancini} that, when the jumps of $X$ are those of a stable process with index $\alpha \ge 1$, the truncated quadratic variation is such that
\begin{equation}
(\hat{IV}_T^n - IV) \overset{\mathbb{P}}{\sim} (\frac{1}{n})^{\beta (2 - \alpha)}.
\label{eq: result Mancini}
\end{equation}
This rate is slower than $\sqrt{n}$ and no proper CLT is available in this case.
In this paper, in order to estimate $IV := \frac{1}{T}\int_0^T a^2_s f(X_s) ds$, we consider in particular the truncated quadratic variation defined in the following way: $$Q_n : = \sum_{i= 0}^{n-1} f(X_{t_i})(X_{t_{i+1}} - X_{t_i})^2 \varphi_{\Delta_{n}^\beta}(X_{t_{i+1}} - X_{t_i}),$$
where $\varphi$ is a $C^\infty$ function that vanishes when the increments of the data are too large compared to the typical increments of a continuous diffusion process, and thus can be used to filter the contribution of the jumps. \\
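A hedged sketch of $Q_n$ follows, under our reading $\varphi_{\Delta_n^\beta}(x) = \varphi(x/\Delta_n^\beta)$; the $C^1$ smoothstep below merely stands in for a genuine $C^\infty$ cutoff, and all names are ours.

```python
import numpy as np

def phi(u):
    """Smooth-ish cutoff: 1 for |u| <= 1, 0 for |u| >= 2, smoothstep in between.
    A C^1 stand-in for the C-infinity function phi of the text."""
    s = np.clip(np.abs(u) - 1.0, 0.0, 1.0)
    return 1.0 - s**2 * (3.0 - 2.0 * s)

def Q_n(X, f, beta, T=1.0):
    """Truncated quadratic variation with smooth weight; X has n+1 samples on [0, T]."""
    n = len(X) - 1
    dX = np.diff(X)
    return np.sum(f(X[:-1]) * dX**2 * phi(dX / (T / n)**beta))

# sanity check on a pure Brownian path with a = 0.5 (true IV = 0.25 when f == 1)
rng = np.random.default_rng(1)
n = 100_000
X = np.concatenate(([0.0], np.cumsum(0.5 * rng.normal(0.0, np.sqrt(1.0 / n), n))))
print(Q_n(X, np.ones_like, 0.45))
```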
We aim to extend the short-time results proved in \cite{Mancini}, characterising precisely the noise introduced by the presence of jumps and consequently deriving corrections that reduce it. \\
The main result of our paper is the asymptotic expansion for the integrated volatility. Compared to earlier results, our asymptotic expansion provides precisely the limit to which $n^{\beta(2 - \alpha)}(Q_n - IV)$ converges when $(\frac{1}{n})^{\beta(2 - \alpha)} > \frac{1}{\sqrt{n}}$, which matches the condition $\beta < \frac{1}{2(2 - \alpha)}$. \\
Our work extends equation \eqref{eq: result Mancini} (obtained in \cite{Mancini}). Indeed, we find
$$Q_n - IV = \frac{Z_n}{\sqrt{n}} + (\frac{1}{n})^{\beta (2 - \alpha)} c_\alpha \int_\mathbb{R} \varphi (u) |u|^{1 - \alpha} du \int_0^T |\gamma|^\alpha(X_s) f(X_s) ds + o_\mathbb{P}((\frac{1}{n})^{\beta(2 - \alpha)}),$$
where $Z_n \xrightarrow{\mathcal{L}} N(0, 2 \int_0^T a^4_s f^2(X_s) ds)$ stably with respect to $X$. This asymptotic expansion allows us to deduce the behaviour of the truncated quadratic variation for each pair $(\alpha, \beta)$, which is an improvement over \eqref{eq: result Mancini}. \\
Furthermore, provided we know $\alpha$ (and if we do not, it is enough to estimate it beforehand, see for example \cite{alpha} or \cite{Fabien alpha}), we can improve the performance of the truncated quadratic variation by subtracting from the original estimator the bias due to the presence of jumps, or by taking particular functions $\varphi$ that make the bias derived from the jump part equal to zero. Using the asymptotic expansion of the integrated volatility we also provide the rate of the error remaining after the corrections have been applied; it derives from the Brownian increments mistakenly truncated away when the truncation is tight. \\
Moreover, in the case where the volatility is constant, we show numerically that the corrections gained by the knowledge of the asymptotic expansion for the integrated volatility allows us to reduce visibly the noise for any $\beta \in (0, \frac{1}{2})$ and $\alpha \in (0, 2)$. It is a clear improvement because, if the original truncated quadratic variation was a well-performed estimator only if $\beta > \frac{1}{2(2 - \alpha)}$ (condition that never holds for $\alpha \ge 1$), the unbiased truncated quadratic variation achieves excellent results for any couple $(\alpha, \beta)$.
The outline of the paper is the following. In Section 2 we present the assumptions on the process X. In Section 3.1 we define the truncated quadratic variation, while Section 3.2 contains the main results of the paper. In Section 4 we show the numerical performance of the unbiased estimator. The Section 5 is devoted to the statement of propositions useful for the proof of the main results, that is given in Section 6. In Section 7 we give some technical tools about Malliavin calculus, required for the proof of some propositions, while other proofs and some technical results are presented in the Appendix.
\section{Model, assumptions}\label{S:Model}
The underlying process $X$ is a one-dimensional It\^o semimartingale on the space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$, where $(\mathcal{F}_t)_{t \ge 0}$ is a filtration, observed at times $t_i = \frac{i}{n}$, for $i = 0, 1, \dots , n$. \\
Let $X$ be a solution to
\begin{equation}
X_t= X_0 + \int_0^t b_s ds + \int_0^t a_s dW_s + \int_0^t \int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{s^-}) \, z \, \tilde{\mu}(ds, dz), \quad t \in \mathbb{R}_+,
\label{eq: model}
\end{equation}
where $W=(W_t)_{t \ge 0}$ is a one dimensional Brownian motion and $\tilde{\mu}$ a compensated Poisson random measure on which conditions will be given later. \\
We will also require the volatility $a_t$ to be an It\^o semimartingale; it can thus be represented as
\begin{equation}
a_t= a_0 + \int_0^t \tilde{b}_s ds + \int_0^t \tilde{a}_s dW_s + \int_0^t \hat{a}_s d\hat{W}_s + \int_0^t \int_{\mathbb{R} \backslash \left \{0 \right \}} \tilde{\gamma}_s \, z \, \tilde{\mu}(ds, dz) + \int_0^t \int_{\mathbb{R} \backslash \left \{0 \right \}} \hat{\gamma}_s \, z \, \tilde{\mu}_2(ds, dz).
\label{eq: model vol}
\end{equation}
The jumps of $a_t$ are driven by the same compensated Poisson random measure $\tilde{\mu}$ as $X$, plus another compensated Poisson measure $\tilde{\mu}_2$. We also need a second Brownian motion $\hat{W}$: in the case of ``pure leverage'' we would have $ \hat{a} \equiv 0$ and $\hat{W}$ is not needed; in the case of ``no leverage'' we rather have $\tilde{a} \equiv 0$. In the mixed case both $W$ and $\hat{W}$ are needed.
\subsection{Assumptions}
The first assumption is a structural assumption describing the driving
terms $W$, $\hat{W}$, $\tilde{\mu}$ and $\tilde{\mu}_2$; the second is a set of conditions on the coefficients
implying in particular the existence of the various stochastic integrals involved above. \\
\\
\textbf{A1}: The processes $W$ and $\hat{W}$ are two independent Brownian motions, $\mu$ and $\mu_2$ are Poisson random measures on $[0, \infty) \times \mathbb{R}$ associated to the L\'evy processes $L=(L_t)_{t \ge 0}$ and $L_2 = (L^2_t)_{t \ge 0}$ respectively, with $L_t:= \int_0^t \int_\mathbb{R} z \tilde{\mu} (ds, dz)$ and $L^2_t:= \int_0^t \int_\mathbb{R} z \tilde{\mu}_2 (ds, dz)$. The compensated measures are $\tilde{\mu}= \mu - \bar{\mu}$ and $\tilde{\mu}_2= \mu_2 - \bar{\mu}_2$; we suppose that the compensators have the following form: $\bar{\mu}(dt,dz): = F(dz) dt $, $\bar{\mu}_2(dt,dz): = F_2(dz) dt $. Conditions on the L\'evy measures $F$ and $F_2$ will be given in A3 and A4. The initial conditions $X_0$ and $a_0$, and the processes $W$, $\hat{W}$, $L$ and $L_2$ are independent. The Brownian motions and the L\'evy processes are adapted with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$. We suppose moreover that there exists a solution $X$ of \eqref{eq: model}. \\
\\
\textbf{A2}: The processes $b$, $\tilde{b}$, $\tilde{a}$, $\hat{a}$, $\tilde{\gamma}$, $\hat{\gamma}$ are bounded and $\gamma$ is Lipschitz. The processes $b$, $\tilde{a}$ are c\`adl\`ag adapted, $\gamma$, $\tilde{\gamma}$ and $\hat{\gamma}$ are predictable, $\tilde{b}$ and $\hat{a}$ are progressively measurable. Moreover, there exists an $\mathcal{F}_t$-measurable random variable $K_t$ such that
$$\mathbb{E}[|b_{t + h} - b_t|^2 | \mathcal{F}_t] \le K_t \, |h|; \quad \forall p \ge 1, \, \mathbb{E}[|K_t|^p]< \infty. $$
\\
We observe that the last condition on $b$ holds true if, for example, $b_t = b(X_t)$ with $b : \mathbb{R} \rightarrow \mathbb{R}$ Lipschitz. \\
The next assumption ensures the existence of the moments: \\
\\
\textbf{A3}: For all $q >0$, $\int_{|z|> 1} |z|^q F(dz) < \infty$ and $\int_{|z|> 1} |z|^q F_2(dz) < \infty$. Moreover, $\mathbb{E}[|X_0|^q] < \infty$ and $\mathbb{E}[|a_0|^q] < \infty$. \\
\\
\textbf{A4 (Jumps)}:
\begin{enumerate}
\item The jump coefficient $\gamma$ is bounded from below, that is $\inf_{x \in \mathbb{R}}|\gamma(x)|:= \gamma_{min} >0$.
\label{it:1}
\item The L\'evy measures $F$ and $F_2$ are absolutely continuous with respect to the Lebesgue measure and we denote $F(z) = \frac{F(dz)}{dz}$, $F_2(z) = \frac{F_2(dz)}{dz}$.
\label{it:3}
\item The L\'evy measure $F$ satisfies $F(dz)= \frac{\bar{g}(z)}{|z|^{1 + \alpha}} dz$, where $\alpha \in (0,2)$ and $\bar{g}: \mathbb{R}\rightarrow \mathbb{R}$ is a continuous symmetric nonnegative bounded function with $\bar{g}(0)= 1$.
\label{it:2}
\item The function $\bar{g}$ is differentiable on $\left \{ 0 < |z| \le \eta \right \}$ for some $\eta> 0$ with continuous derivative such that $\sup_{0 < |z| \le \eta} |\frac{\bar{g}'}{\bar{g}}| < \infty$.
\label{it: plus Maillavin}
\item The jump coefficient $\gamma$ is upper bounded, i.e. $\sup_{x \in \mathbb{R}}|\gamma(x)| := \gamma_{max} < \infty$.
\label{it: 4}
\item The L\'evy measure $F_2$ satisfies $\int_\mathbb{R} |z|^2 F_2(z) dz < \infty$.
\end{enumerate}
The first and the fifth points of the assumptions above are useful to compare the sizes of the jumps of $X$ and $L$. The fourth point is required to use Malliavin calculus and it is satisfied by a large class of processes: $\alpha$-stable processes ($\bar{g} \equiv 1$), truncated $\alpha$-stable processes ($\bar{g} = \tau$, a truncation function), and tempered stable processes ($\bar{g}(z) = e^{- \lambda |z|}$, $\lambda > 0$). \\
In the following, we will repeatedly use some moment inequalities for jump diffusions, which are gathered in Lemma \ref{lemma: Moment inequalities} below and proved in the Appendix.
\begin{lemma}
Suppose that A1 - A4 hold. Then, for all $t > s$, \\
1) for all $p \ge 2$, $\mathbb{E}[|a_t - a_s|^p] \le c |t-s|$; for all $q > 0$, $\sup_{t \in [0,T]} \mathbb{E}[|a_t|^q] < \infty$; \\
2) for all $p \ge 2$, $p \in \mathbb{N}$, $\mathbb{E}[|a_t - a_s|^p|\mathcal{F}_s] \le c|t-s|$; \\
3) for all $p \ge 2$, $\mathbb{E}[|X_t - X_s|^p]^\frac{1}{p} \le c |t-s|^\frac{1}{p}$; for all $q > 0$, $\sup_{t \in [0,T]} \mathbb{E}[|X_t|^q] < \infty$; \\
4) for all $p \ge 2$, $p \in \mathbb{N}$, $\mathbb{E}[|X_t - X_s|^p|\mathcal{F}_s] \le c|t-s|(1 + |X_s|^p)$; \\
5) for all $p \ge 2$, $p \in \mathbb{N}$, $\sup_{h \in [0,1]} \mathbb{E}[|X_{s+h}|^p|\mathcal{F}_s] \le c(1 + |X_s|^p)$; \\
6) for all $p > 1$, $\mathbb{E}[|X_t^c - X_s ^c|^p]^\frac{1}{p} \le c |t - s|^\frac{1}{2}$ and $\mathbb{E}[|X_t^c - X_s ^c|^p|\mathcal{F}_s]^\frac{1}{p} \le c |t - s|^\frac{1}{2}(1 + |X_s|^p)$, \\
where we have denoted by $X^c$ the continuous part of the process $X$, which is such that
$$X^c_{t} - X^c_{s} := \int_{s}^{t}a_u dW_u + \int_{s}^{t} b_u du.$$
\label{lemma: Moment inequalities}
\end{lemma}
\section{Setting and main results}\label{S:Construction_and_main}
The process $X$ is observed at regularly spaced times $t_i = i \Delta_n = \frac{i \, T}{n}$ for $i = 0, 1, \dots , n$, within
a finite time interval $[0,T]$. We can assume, without loss of generality, that $T = 1$. \\
Our goal is to estimate the integrated volatility $IV := \frac{1}{T}\int_0^T a^2_s f(X_s) ds$, where $f$ is a function of polynomial growth. To this end, we propose the estimator $Q_n$, based on the truncated quadratic variation introduced by Mancini in \cite{Mancini soglia}. Since the quadratic variation is a good estimator for the integrated volatility in the continuous framework, the idea is to filter the contribution of the jumps and to keep only the intervals in which we judge that no jump has occurred. We use the size of the increment $X_{t_{i+1}} - X_{t_i}$ in order to judge whether a jump occurred in the interval $[t_i, t_{i + 1})$: since the increment of the continuous part of $X$ is unlikely to exceed the threshold $\Delta_{n}^\beta = (\frac{1}{n})^\beta$ for $\beta \le \frac{1}{2}$, we can assert the presence of a jump in $[t_i, t_{i + 1})$ if $|X_{t_{i+1}} - X_{t_i}| > \Delta_{n}^\beta $. \\
We set
\begin{equation}
Q_n : = \sum_{i= 0}^{n-1} f(X_{t_i})(X_{t_{i+1}} - X_{t_i})^2 \varphi_{\Delta_{n}^\beta}(X_{t_{i+1}} - X_{t_i}),
\label{eq: definition Qn}
\end{equation}
where $$\varphi_{\Delta_{n}^\beta}(X_{t_{i+1}} - X_{t_i}) = \varphi( \frac{X_{t_{i+1}} - X_{t_i}}{\Delta_{n}^\beta}),$$ with $\varphi$ a smooth version of the indicator function, such that
$\varphi(\zeta) = 0$ for each $ \zeta$, with $|\zeta| \ge 2$ and $\varphi(\zeta) = 1$ for each $ \zeta $, with $ |\zeta| \le 1$. \\
It is worth noting that, if we introduce an additional constant $k$ in $\varphi$ (which becomes $\varphi_{k \Delta^\beta_{n}}(X_{t_{i+1}} - X_{t_i})= \varphi( \frac{X_{t_{i+1}} - X_{t_i}}{k \Delta_{n}^\beta})$), the only difference is the interval on which the function is $1$ or $0$: it will be $1$ for $|X_{t_{i+1}} - X_{t_i}| \le k \Delta_{n}^\beta$ and $0$ for $|X_{t_{i+1}} - X_{t_i}| \ge 2k \Delta_{n}^\beta$. Hence, for brevity of notation, we restrict the theoretical analysis to the situation where $k = 1$ while, for applications, we may take the threshold level as $k\Delta_{n}^\beta$ with $k \neq 1$.
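As an illustration, the estimator \eqref{eq: definition Qn} is straightforward to compute from a discrete sample. The sketch below is our own (not part of the paper's results): it takes $f \equiv 1$ and one admissible choice of the smooth cut-off $\varphi$, of the same shape as the one used in the simulations of Section \ref{S: Applications}, and shows how the truncation discards the increments attributed to jumps.

```python
import numpy as np

def phi(z):
    # Smooth version of the indicator: 1 on |z| <= 1, 0 on |z| >= 2,
    # with the C^infinity interpolation exp(1/3 + 1/(z^2 - 4)) in between.
    z = np.abs(np.asarray(z, dtype=float))
    return np.piecewise(
        z,
        [z <= 1.0, (z > 1.0) & (z < 2.0)],
        [1.0, lambda t: np.exp(1.0 / 3.0 + 1.0 / (t ** 2 - 4.0)), 0.0],
    )

def truncated_qv(X, beta, k=1.0):
    # Q_n with f = 1: squared increments, weighted by phi so that
    # increments larger than about 2 k n^{-beta} (likely jumps) are discarded.
    n = len(X) - 1
    incr = np.diff(X)
    return float(np.sum(incr ** 2 * phi(incr / (k * n ** (-beta)))))
```

For instance, on a path of $n = 100$ increments of size $10^{-3}$ plus one increment of size $1$, only the large increment is filtered out, since it exceeds twice the threshold $n^{-\beta}$.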
\subsection{Main results}
The main result of this paper is the asymptotic expansion of the truncated quadratic variation around the integrated volatility.\\
We first show that it is possible to decompose the truncated quadratic variation, separating the continuous part from the contribution of the jumps. We then consider the difference between the truncated quadratic variation and the discretized volatility, showing that it consists of the statistical error (which derives from the continuous part), a noise term due to the jumps, and a third term which is negligible compared to the other two. Such an expansion clearly reveals the condition on $(\alpha, \beta)$ which specifies whether or not the truncated quadratic variation performs well for the estimation of the integrated volatility. It is also possible to build unbiased estimators. Indeed, through Malliavin calculus, we identify the main bias term which arises from the presence of the jumps. We then study its asymptotic behavior and, by making it equal to zero or by removing it from the original truncated quadratic variation, we construct corrected estimators. \\
We denote by $\tilde{Q}_n^J$ the rescaled contribution of the jumps present in the original estimator $Q_n$:
\begin{equation}
{\tilde{Q}}_n^J : = n^{\beta (2 - \alpha)} \sum_{i= 0}^{n-1} (\int_{t_i}^{t_{i + 1}}\int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{s^-}) \, z \, \tilde{\mu}(ds, dz))^2 f(X_{t_i}) \varphi_{\Delta_{n}^\beta}(X_{t_{i+1}} - X_{t_i}).
\label{eq: definition tilde Qn}
\end{equation}
Denoting by $o_\mathbb{P}((\frac{1}{n})^k)$ a quantity such that $\frac{o_\mathbb{P}((\frac{1}{n})^k)}{(\frac{1}{n})^k}\overset{\mathbb{P}}{\rightarrow} 0$,
the following decomposition holds true:
\begin{theorem}
Suppose that A1 - A4 hold and that $\beta \in (0, \frac{1}{2})$ and $\alpha \in (0,2)$ are given in definition \eqref{eq: definition Qn} and in the third point of A4, respectively. Then, as $n \rightarrow \infty$,
\begin{equation}
Q_n = \sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i+1}} - X^c_{t_i})^2 + (\frac{1}{n})^{\beta(2 - \alpha)} \tilde{Q}_n^J + \mathcal{E}_n =
\label{eq: Qn parte continua}
\end{equation}
\begin{equation}
= \sum_{i = 0}^{n - 1} f(X_{t_i})(\int_{t_i}^{t_{i + 1}}a_s dW_s)^2 + (\frac{1}{n})^{\beta(2 - \alpha)} \tilde{Q}_n^J + \mathcal{E}_n,
\label{eq: estensione Qn}
\end{equation}
where $\mathcal{E}_n$ is both $o_\mathbb{P}((\frac{1}{n})^{\beta(2 - \alpha)})$ and, for each $\tilde{\epsilon} > 0$, $o_\mathbb{P}((\frac{1}{n})^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})})$.
\label{th: estensione Qn}
\end{theorem}
To show Theorem \ref{th: estensione Qn} above, the following lemma will be useful. It illustrates the error we commit when the truncation is tight and the Brownian increments are therefore mistakenly truncated away.
\begin{lemma}
Suppose that A1 - A4 hold. Then, $\forall \epsilon > 0$,
$$\sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i + 1}} - X^c_{t_i})^2(\varphi_{\Delta_{n}^\beta}(X_{t_{i+1}} - X_{t_i})- 1) = o_\mathbb{P} ((\frac{1}{n})^{1 - \alpha \beta - \epsilon}).$$
\label{lemma: brownian increments}
\end{lemma}
Theorem \ref{th: estensione Qn} anticipates that the size of the jump part is $(\frac{1}{n})^{\beta(2 - \alpha)}$ (see Theorem \ref{th: reformulation th T fixed}) while the size of the Brownian increments wrongly removed is upper bounded by $(\frac{1}{n})^{1 - \alpha \beta - \epsilon}$ (see Lemma \ref{lemma: brownian increments}). As $\beta \in (0, \frac{1}{2}) $, we can always find an $\epsilon > 0$ such that $1 - \alpha \beta - \epsilon > \beta(2 - \alpha)$ and therefore the bias derived from a tight truncation is always smaller than that derived from a loose truncation.
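Indeed, the comparison between the two exponents reduces to an elementary computation:
$$1 - \alpha \beta - \epsilon > \beta(2 - \alpha) \iff \epsilon < 1 - 2 \beta,$$
and the right hand side is positive precisely because $\beta < \frac{1}{2}$.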
However, as we will see, after having removed the contribution of the jumps, such a small downward bias will represent the main error term if $\alpha \beta > \frac{1}{2} $. \\
In order to eliminate the bias arising from the jumps, we want to identify the term $\tilde{Q}_n^J$ in detail. For that purpose we introduce
\begin{equation}
\hat{Q}_n := (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) \gamma^2(X_{t_i}) d(\gamma(X_{t_i}) n^{ \beta - \frac{1}{\alpha}}),
\label{eq: definizione finale hat Q}
\end{equation}
where $d(\zeta) : = \mathbb{E}[(S_1^\alpha)^2 \varphi(S_1^\alpha \zeta)]$; $(S_t^\alpha)_{t \ge 0}$ is an $\alpha$-stable process. \\
We want to pass from $\tilde{Q}_n^J$ to $\hat{Q}_n$.
The idea is to pass from our process, which in small time behaves like a conditionally rescaled L\'evy process, to an $\alpha$-stable distribution.
\begin{proposition}
Suppose that A1 - A4 hold. Let $(S_t^\alpha)_{t \ge 0}$ be an $\alpha$-stable process. Let $g$ be a measurable bounded function such that $\left \| g \right \|_{pol} := \sup_{x \in \mathbb{R}} (\frac{|g(x)|}{1 + |x|^p}) < \infty$, for some $p\ge 1$, $p\ge \alpha$, so that
\begin{equation}
|g(x)| \le \left \| g \right \|_{pol} (|x|^p + 1).
\label{eq: conditon on h}
\end{equation}
Moreover we denote $\left \| g \right \|_\infty: = \sup_{x \in \mathbb{R}} |g(x)|$.
Then, for any $\epsilon > 0$, $0 < h < \frac{1}{2}$,
\begin{equation}
|\mathbb{E}[g(h^{- \frac{1}{\alpha}} L_{h})] - \mathbb{E}[g(S_1^\alpha)]| \le C_\epsilon h \, |\log(h) | \left \| g \right \|_\infty + C_\epsilon h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1 - \frac{\alpha}{p} - \epsilon} \left \| g \right \|_{pol}^{\frac{\alpha}{p} + \epsilon} |\log(h) |+
\label{eq: tesi prop stable}
\end{equation}
$$+ C_\epsilon h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1+ \frac{1}{p} - \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{- \frac{1}{p} + \frac{\alpha}{p} - \epsilon} |\log(h) |1_{\left \{ \alpha > 1 \right \}} ,$$
where $C_\epsilon$ is a constant independent of $h$.
\label{prop: estimation stable}
\end{proposition}
The proof of Proposition \ref{prop: estimation stable} requires some Malliavin calculus.
The proof, together with some technical tools, can be found in Section \ref{S:Proof_propositions}. \\
The previous proposition is an extension of Theorem $4.2$ in \cite{Maillavin} and it is useful when $\left \| g \right \|_\infty$ is large compared to $\left \| g \right \|_{pol}$. For instance, this is the case if we consider the function $g(x): = |x|^2 1_{|x| \le M} $ for $M$ large. \\
\\
We need Proposition \ref{prop: estimation stable} to prove the following theorem, in which we consider the difference between the truncated quadratic variation and the discretized volatility. We make explicit its decomposition into the statistical error and the noise term due to the jumps, identified as $\hat{Q}_n$.
\begin{theorem}
Suppose that A1 - A4 hold and that $\beta \in (0, \frac{1}{2})$ and $\alpha \in (0,2)$ are given in Definition \ref{eq: definition Qn} and in the third point of A4, respectively. Then, as $n \rightarrow \infty$,
\begin{equation}
Q_n - \frac{1}{n} \sum_{i = 0}^{n - 1} f(X_{t_i}) a^2_{t_i} = \frac{Z_n}{\sqrt{n}}+ (\frac{1}{n})^{\beta(2 - \alpha)}\hat{Q}_n + \mathcal{E}_n, \label{eq:tesi teo 2 e 3}
\end{equation}
where $\mathcal{E}_n$ is always $o_\mathbb{P}((\frac{1}{n})^{\beta(2 - \alpha)})$ and, adding the condition $\beta > \frac{1}{4- \alpha}$, it is also $o_\mathbb{P}((\frac{1}{n})^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})})$. Moreover $Z_n \xrightarrow{\mathcal{L}}N(0,2\int_0^T a^4_s f^2(X_s) ds)$ stably with respect to $X$.
\label{th: 2 e 3 insieme}
\end{theorem}
We recognize in the expansion \eqref{eq:tesi teo 2 e 3} the statistical error of the model without jumps, given by $Z_n$, whose variance is equal to the so-called quadricity. As said above, the term $\hat{Q}_n$ is a bias term arising from the presence of jumps and given by \eqref{eq: definizione finale hat Q}. From this explicit expression it is possible to remove the bias term (see Section \ref{S: Applications}). \\
The term $\mathcal{E}_n $ is an additional error term that is always negligible compared to the bias deriving from the jump part $(\frac{1}{n})^{\beta(2 - \alpha)}\hat{Q}_n$ (that is of order $(\frac{1}{n})^{\beta(2 - \alpha)}$ by Theorem \ref{th: reformulation th T fixed} below). \\
The bias term admits a first order expansion that does not require the knowledge of the density of $S^\alpha$.
\begin{proposition}
Suppose that A1 - A4 hold and that $\beta \in (0, \frac{1}{2})$ and $\alpha \in (0,2)$ are given in Definition \ref{eq: definition Qn} and in the third point of A4, respectively. Then
\begin{equation}
\hat{Q}_n = \frac{1}{n} c_\alpha \sum_{i = 0}^{n-1} f(X_{t_i}) |\gamma(X_{t_i})|^\alpha (\int_\mathbb{R} \varphi(u) | u| ^{1 - \alpha} du) + \tilde{\mathcal{E}}_n,
\label{eq: limite hat Qn con densita}
\end{equation}
with
\begin{equation}
c_\alpha =
\begin{cases}
\frac{\alpha(1 - \alpha)}{4 \Gamma(2 - \alpha) \cos(\frac{\alpha \pi}{2})} \qquad \mbox{if } \alpha \neq 1, \, \alpha <2 \\
\frac{1}{2 \pi} \qquad \qquad \qquad \quad \mbox{if } \, \alpha = 1.
\end{cases}
\label{eq: def calpha}
\end{equation}
$\tilde{\mathcal{E}}_n = o_\mathbb{P}(1)$ and, if $\alpha < \frac{4}{3}$, it is also $n^{\beta(2 - \alpha)} o_\mathbb{P}((\frac{1}{n})^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})}) = o_\mathbb{P}((\frac{1}{n})^{(\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon})\land(1 - 2 \beta - \tilde{\epsilon})})$.
\label{prop: conv hat Qn}
\end{proposition}
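As a quick sanity check (ours, not part of the paper), the constant $c_\alpha$ of \eqref{eq: def calpha} is easy to evaluate numerically, and the $\alpha \neq 1$ branch extends continuously to the value $\frac{1}{2\pi}$ at $\alpha = 1$:

```python
import math

def c_alpha(alpha):
    # Constant of eq. (def calpha): alpha(1 - alpha) / (4 Gamma(2 - alpha) cos(alpha pi / 2))
    # for alpha != 1, with the continuous extension 1 / (2 pi) at alpha = 1.
    if alpha == 1.0:
        return 1.0 / (2.0 * math.pi)
    return alpha * (1.0 - alpha) / (
        4.0 * math.gamma(2.0 - alpha) * math.cos(alpha * math.pi / 2.0)
    )
```

Note that $c_\alpha$ stays positive on the whole range $\alpha \in (0, 2)$: for $\alpha > 1$ both the numerator and $\cos(\frac{\alpha \pi}{2})$ change sign.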
We have not directly replaced the right hand side of \eqref{eq: limite hat Qn con densita} in \eqref{eq:tesi teo 2 e 3}, observing that $(\frac{1}{n})^{\beta(2 - \alpha)} \tilde{\mathcal{E}}_n = \mathcal{E}_n$, because $(\frac{1}{n})^{\beta(2 - \alpha)} \tilde{\mathcal{E}}_n$ is always $o_\mathbb{P}((\frac{1}{n})^{\beta(2 - \alpha)})$, but the additional condition $\alpha < \frac{4}{3}$ is required to ensure that it is also $o_\mathbb{P}((\frac{1}{n})^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})})$. \\
Proposition \ref{prop: conv hat Qn} provides the contribution of the jumps in detail, identifying a main term. Since this is a bias term, it is natural to look for conditions that make it equal to zero, and to study its asymptotic behaviour in order to remove its limit.
\begin{corollary}
Suppose that A1 - A4 hold and that $\alpha \in (0, \frac{4}{3})$, $\beta \in (\frac{1}{4 - \alpha}, \frac{1}{2 \alpha} \land \frac{1}{2})$. If $\varphi$ is such that $\int_\mathbb{R} | u| ^{1 - \alpha}\varphi(u) du = 0$ then, $\forall \tilde{\epsilon} > 0$,
\begin{equation}
Q_n - \frac{1}{n} \sum_{i = 0}^{n - 1} f(X_{t_i}) a^2_{t_i} = \frac{Z_n}{\sqrt{n}}+ o_\mathbb{P}((\frac{1}{n})^{\frac{1}{2} -\tilde{\epsilon}}),
\label{eq: eq per le appli}
\end{equation}
with $Z_n$ defined as in Theorem \ref{th: 2 e 3 insieme} above.
\label{cor: cond rimozione rumore}
\end{corollary}
It is always possible to build a function $\varphi$ for which the condition above is satisfied (see Section \ref{S: Applications}). \\
We have supposed $\alpha < \frac{4}{3}$ in order to ensure that the error we commit by identifying the contribution of the jumps with the first term on the right hand side of \eqref{eq: limite hat Qn con densita} is always negligible compared to the statistical error.
Moreover, taking $\beta < \frac{1}{2 \alpha}$ we get $1 - \alpha \beta > \frac{1}{2}$, and therefore the bias studied in Lemma \ref{lemma: brownian increments} also becomes upper bounded by a quantity which is roughly $o_\mathbb{P}(\frac{1}{\sqrt{n}})$. \\
Equation \eqref{eq: eq per le appli} gives us the behaviour of the unbiased estimator, that is, the truncated quadratic variation after the removal of the noise derived from the presence of jumps.
Taking $\alpha$ and $\beta$ as discussed above, we have, in other words, reduced the error term $\mathcal{E}_n$ to be $o_\mathbb{P}((\frac{1}{n})^{\frac{1}{2} -\tilde{\epsilon}})$, which is roughly the same size as the statistical error.
We observe that, if $\alpha \ge \frac{4}{3}$ but $\gamma = k \in \mathbb{R}$, the result still holds if we choose $\varphi$ such that
$$\int_\mathbb{R} u^{2}\varphi(u) \, f_\alpha (\frac{1}{k}u (\frac{1}{n})^{ \beta - \frac{1}{\alpha}})du =0,$$
where $f_\alpha$ is the density of the $\alpha$-stable random variable $S_1^\alpha$.
Indeed, following \eqref{eq: definizione finale hat Q}, the jump bias $\hat{Q}_n$ is now defined as
$$ (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) k^2 d(k \, n^{ \beta - \frac{1}{\alpha}}) = (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) k^2 \int_\mathbb{R} z^2 \varphi(z k (\frac{1}{n})^{\frac{1}{\alpha} - \beta})f_\alpha(z) dz = $$
$$= (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) k^2 (\frac{1}{n})^{3 (\beta - \frac{1}{\alpha})} \frac{1}{k^3} \int_\mathbb{R} u^{2}\varphi(u) \, f_\alpha (\frac{1}{k}u (\frac{1}{n})^{ \beta - \frac{1}{\alpha}})du =0,$$
where we have used a change of variable. \\
\\
Another way to construct an unbiased estimator is to study how the main bias detailed in \eqref{eq: limite hat Qn con densita} asymptotically behaves and to remove it from the original estimator.
\begin{theorem}
Suppose that A1 - A4 hold. Then, as $n \rightarrow \infty$,
\begin{equation}
{\hat{Q}}_n \overset{\mathbb{P}}{\rightarrow} c_\alpha \int_\mathbb{R} \varphi(u) |u|^{1 - \alpha} du \int_0^T |\gamma(X_s)|^\alpha f(X_s) ds.
\label{eq: convergenza hat Q tempo corto}
\end{equation}
Moreover
\begin{equation}
Q_n - IV = \frac{Z_n}{\sqrt{n}}+ (\frac{1}{n})^{\beta(2 - \alpha)}c_\alpha \int_\mathbb{R} \varphi(u) |u|^{1 - \alpha} du \int_0^T |\gamma(X_s)|^\alpha f(X_s) ds + o_\mathbb{P}((\frac{1}{n})^{\beta(2 - \alpha)}),
\label{eq: tesi finale tempo corto}
\end{equation}
where $Z_n \xrightarrow{\mathcal{L}} N(0, 2 \int_0^T a^4_s f^2(X_s) ds)$ stably with respect to $X$.
\label{th: reformulation th T fixed}
\end{theorem}
It is worth noting that, in both \cite{Condition Jacod beta} and \cite{Mancini}, the estimation of the integrated volatility in short time is treated, and it is shown that the truncated quadratic variation has rate $\sqrt{n}$ if $\beta > \frac{1}{2(2 - \alpha)}$. \\
We remark that the jump part is negligible compared to the statistical error if $(\frac{1}{n})^{\beta(2 - \alpha)} < (\frac{1}{n})^{\frac{1}{2}}$, that is $\beta > \frac{1}{2(2 - \alpha)}$, which is the same condition given in the literature. \\
However, if we take $(\alpha, \beta)$ for which such a condition does not hold, we can still exploit our detailed knowledge of the noise deriving from the jumps to implement corrections that make the unbiased estimator perform well (see Section \ref{S: Applications}). \\
\\
We require the activity $\alpha$ to be known in order to perform the bias correction. If it is unknown, we need to estimate it beforehand (see for example the methods proposed by Todorov in \cite{alpha} and by Mies in \cite{Fabien alpha}). A natural question is then how the estimation error on $\alpha$ affects the rate of the bias-corrected estimator. We therefore assume that $\hat{\alpha}_n = \alpha + O_\mathbb{P} (a_n)$, for some rate sequence $a_n$. Replacing $\hat{\alpha}_n$ in \eqref{eq: tesi finale tempo corto}, it turns out that the error derived from the estimation of $\alpha$ does not affect the correction if $a_n (\frac{1}{n})^{\beta(2 - \alpha)} < (\frac{1}{n})^\frac{1}{2}$, which means that $a_n$ has to be smaller than $(\frac{1}{n})^{\frac{1}{2}- \beta(2 - \alpha)}$. We recall that $\beta \in (0, \frac{1}{2})$ and $\alpha \in (0,2)$. Hence, such a condition is not a strong requirement, and it becomes less and less restrictive as $\alpha$ gets smaller or $\beta$ gets bigger.
\section{Unbiased estimation in the case of constant volatility} \label{S: Applications}
In this section we consider a concrete application of the unbiased volatility estimator in a jump diffusion model and we investigate its numerical performance. \\
We consider our model \eqref{eq: model} in which we assume, in addition, that the functions $a$ and $\gamma$ are both constant. \\
Suppose that we are given a discrete sample $X_{t_0}, \dots , X_{t_n}$ with $t_i = i \Delta_n = \frac{i}{n}$ for $i =0, \dots , n$. \\
We now want to analyze the estimation improvement; to do so we compare the classical error committed using the truncated quadratic variation with the unbiased estimation derived from our main results. \\
We define the estimator we are going to use, in which we take $f \equiv 1$ and introduce a constant $k$ in the function $\varphi$, so that
\begin{equation}
Q_n = \sum_{i = 0}^{n - 1}(X_{t_{i + 1}} - X_{t_i})^2\varphi_{k \Delta_{n}^\beta}(X_{t_{i + 1}} - X_{t_i}).
\label{eq: Qn applications}
\end{equation}
The normalized error committed in estimating the volatility is $E_1 : = (Q_n - \sigma^2) \sqrt{n}$. \\
We start from \eqref{eq: limite hat Qn con densita} that in our case, taking into account the presence of $k$, is
\begin{equation}
\hat{Q}_n = c_\alpha \gamma^\alpha k^{2 - \alpha}(\int_\mathbb{R} \varphi(u) |u|^{1 - \alpha} du ) + \tilde{\mathcal{E}}_n.
\label{eq: hatQn applications}
\end{equation}
We now describe different methods to reduce this error. \\
First of all, we can replace \eqref{eq: hatQn applications} in \eqref{eq:tesi teo 2 e 3} and reduce the error by subtracting a correction term, building the new estimator $Q_n^c : = Q_n - (\frac{1}{n})^{\beta (2 - \alpha)} c_\alpha \gamma^\alpha k^{2 - \alpha}(\int_\mathbb{R} \varphi(u) |u|^{1 - \alpha} du )$. The error committed estimating the volatility with such a corrected estimator is $E_2 : = (Q_n^c - \sigma^2) \sqrt{n}$. \\
Another approach consists of taking a particular function $\tilde{\varphi}$ that makes the main contribution of $\hat{Q}_n$ equal to $0$. We define $\tilde{\varphi}(\zeta) = \varphi(\zeta) + c \psi(\zeta)$, with $\psi$ a $\mathcal{C}^\infty$ function such that $\psi(\zeta) = 0$ for each $\zeta$ with $|\zeta| \ge 2$ or $|\zeta | \le 1$. In this way, for any $c \in \mathbb{R} \setminus \left \{ 0\right \}$, $\tilde{\varphi}$ is still a smooth version of the indicator function such that $\tilde{\varphi}(\zeta) = 0$ for each $\zeta$ with $|\zeta| \ge 2$ and $\tilde{\varphi}(\zeta) = 1$ for each $\zeta$ with $|\zeta| \le 1$. We can therefore leverage the arbitrariness of $c$ to make the main contribution of $\hat{Q}_n$ equal to zero, choosing $\tilde{c} := - \frac{\int_\mathbb{R} \varphi(u)|u|^{1 - \alpha} du }{\int_\mathbb{R} \psi(u)|u|^{1 - \alpha} du}$, which is such that $\int_\mathbb{R} (\varphi(u) + \tilde{c}\psi(u))|u|^{1 - \alpha} du =0$. \\
Hence, it is possible to achieve an improved estimation of the volatility by using the truncated quadratic variation $Q_{n,c} : = \sum_{i = 0}^{ n - 1} (X_{t_{i + 1}} - X_{t_i})^2 (\varphi + \tilde{c}\psi)(\frac{X_{t_{i + 1}} - X_{t_i}}{ k \Delta_{n}^{\beta}}) $. To quantify this, we will analyze the quantity $E_3 : = (Q_{n,c} - \sigma^2) \sqrt{n}$. \\
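As a numerical illustration (our own sketch, for $\alpha < 1$; the bump $\psi$ below is a hypothetical choice, not the $\psi_M$ used in the simulations), the constant $\tilde{c}$ can be computed by quadrature, and by construction the weighted integral of $\varphi + \tilde{c}\psi$ vanishes:

```python
import numpy as np

def phi(u):
    # smooth cut-off: 1 on |u| <= 1, 0 on |u| >= 2, smooth in between
    u = np.abs(u)
    return np.piecewise(
        u,
        [u <= 1.0, (u > 1.0) & (u < 2.0)],
        [1.0, lambda t: np.exp(1.0 / 3.0 + 1.0 / (t ** 2 - 4.0)), 0.0],
    )

def psi(u):
    # hypothetical C^infinity bump supported on 1 < |u| < 2 (our choice)
    u = np.abs(u)
    out = np.zeros_like(u)
    mid = (u > 1.0) & (u < 2.0)
    out[mid] = np.exp(-1.0 / ((u[mid] - 1.0) * (2.0 - u[mid])))
    return out

def tilde_c(alpha, n_grid=200001):
    # tilde_c = - int phi(u)|u|^{1-alpha} du / int psi(u)|u|^{1-alpha} du,
    # approximated by a Riemann sum over the common support [-2, 2]
    u = np.linspace(-2.0, 2.0, n_grid)
    du = u[1] - u[0]
    w = np.abs(u) ** (1.0 - alpha)
    return -np.sum(phi(u) * w) / np.sum(psi(u) * w)
```

Since both weighted integrals are positive, $\tilde{c}$ is always negative: the correction subtracts mass from $\varphi$ on the annulus $1 < |u| < 2$.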
Another method widely used in numerical analysis to improve the rate of convergence of a sequence is the so-called Richardson extrapolation. We observe that the first term on the right hand side of \eqref{eq: hatQn applications} does not depend on $n$ and so we can just write $\hat{Q}_n = \hat{Q} + \tilde{\mathcal{E}}_n$. Replacing it in \eqref{eq:tesi teo 2 e 3} we get
$$Q_n = \sigma^2 + \frac{Z_n}{\sqrt{n}} + \frac{1}{n^{\beta(2 - \alpha)}} \hat{Q} + \mathcal{E}_n \qquad \mbox{and}$$
$$Q_{2n} = \sigma^2 + \frac{Z_{2n}}{\sqrt{2n}} + \frac{1}{2^{\beta(2 - \alpha)}} \frac{1}{n^{\beta(2 - \alpha)}} \hat{Q} + \mathcal{E}_{2n},$$
where we have also used that $(\frac{1}{n})^{\beta(2 - \alpha)} \tilde{\mathcal{E}}_n = \mathcal{E}_n$. We can therefore use $\frac{Q_n - 2^{\beta (2 - \alpha )} Q_{2 n}}{1 - 2^{\beta(2 - \alpha)}}$ as an improved estimator of $\sigma^2$. \\
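The cancellation behind this Richardson extrapolation can be checked on a synthetic expansion (our own sketch): plugging $Q_n = \sigma^2 + C n^{-\beta(2 - \alpha)}$ and $Q_{2n} = \sigma^2 + C (2n)^{-\beta(2 - \alpha)}$ into the combination returns $\sigma^2$ exactly, whatever the constant $C$.

```python
def richardson(q_n, q_2n, beta, alpha):
    # Combine the estimators computed with n and 2n observations so that
    # the n^{-beta(2 - alpha)} bias terms cancel exactly.
    r = 2.0 ** (beta * (2.0 - alpha))
    return (q_n - r * q_2n) / (1.0 - r)

# synthetic check: sigma^2 = 1, bias constant C = 5 (arbitrary illustrative values)
sigma2, C, n = 1.0, 5.0, 700
beta, alpha = 0.2, 0.5
rate = beta * (2.0 - alpha)
q_n = sigma2 + C * n ** (-rate)
q_2n = sigma2 + C * (2 * n) ** (-rate)
```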
We give simulation results for $E_1$, $E_2$ and $E_3$ in the situation where $\sigma=1$. The reported means and standard deviations are each based on 500 Monte Carlo samples. We choose to simulate a tempered stable process (that is, $F$ satisfies $F(dz) = \frac{e^{-|z|}}{|z|^{1 + \alpha}} dz$) in the case $\alpha < 1$ while, in the interest of computational efficiency, we exhibit results obtained from the simulation of a stable L\'evy process in the case $\alpha \ge 1$ ($F(dz) = \frac{1}{|z|^{1 + \alpha}} dz$). \\
We have taken the smooth functions $\varphi$ and $\psi$ as below:
\begin{equation}
\varphi(x) =
\begin{cases}
1 \qquad \mbox{if } |x| < 1 \\
e^{\frac{1}{3} + \frac{1}{|x|^2 - 4}} \qquad \mbox{if } 1 \le |x| < 2 \\
0 \qquad \mbox{if } |x| \ge 2
\end{cases}
\end{equation}
\begin{equation}
\psi_M(x) =
\begin{cases}
0 \qquad \quad \mbox{if } |x| \le 1 \mbox{ or } |x| \ge M \\
e^{\frac{1}{3} + \frac{1}{(3 - |x|)^2 - 4}} \qquad \mbox{if } 1 < |x| \le \frac{3}{2} \\
e^{\frac{1}{|x|^2 - M^2} - \frac{5}{21} + \frac{4}{4M^2 - 9}} \qquad \mbox{if } \frac{3}{2} < |x| < M ;
\end{cases}
\end{equation}
choosing the constant $M$ in the definition of $\psi_M$ appropriately, we can make its decay slower or faster. We observe that the theoretical results still hold even though the support of $\tilde{\varphi}$ changes with $M$, becoming $[-M, M]$ instead of $[-2, 2]$. \\
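The continuity of $\psi_M$ at the matching points $|x| = \frac{3}{2}$ and $|x| = M$ can be verified numerically. The sketch below is ours; it reads the third branch with the denominator $|x|^2 - M^2$, which is the reading under which the two smooth pieces glue continuously, and checks the gluing for $M = 3$:

```python
import math

def psi_M(x, M):
    # the bump psi_M: zero outside 1 < |x| < M, with two smooth pieces
    # glued continuously at |x| = 3/2 and vanishing at |x| = 1 and |x| = M
    a = abs(x)
    if a <= 1.0 or a >= M:
        return 0.0
    if a <= 1.5:
        return math.exp(1.0 / 3.0 + 1.0 / ((3.0 - a) ** 2 - 4.0))
    return math.exp(1.0 / (a ** 2 - M ** 2) - 5.0 / 21.0 + 4.0 / (4.0 * M ** 2 - 9.0))
```

At $|x| = \frac{3}{2}$ both branches take the value $e^{-5/21}$, since $\frac{1}{(3/2)^2 - M^2} + \frac{4}{4M^2 - 9} = 0$ for every $M$.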
Concerning the constant $k$ in the definition of $\varphi$, we fix it equal to $3$ in the simulation of the tempered stable process, while its value is $2$ in the case $\alpha > 1$, $\beta = 0.2$; in the case $\alpha > 1$ and $\beta = 0.49$, it increases as $\alpha$ and $\gamma$ increase. \\
The results of the simulations are given in columns 3-6 of Table \ref{tab: beta 0.2} for $\beta = 0.2$ and in columns 3-6 of Table \ref{tab: beta 0.49} for $\beta = 0.49$. \\
\begin{table}[h]
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{||c|c|c|c|c|c||}
\hline
\multicolumn{1}{||c|}{\textbf{$\alpha$}} &
\multicolumn{1}{c|}{\textbf{$\gamma$}} &
\multicolumn{1}{c|}{\textbf{Mean}} &
\multicolumn{1}{c|}{\textbf{Rms}} &
\multicolumn{1}{c|}{\textbf{Mean}} &
\multicolumn{1}{c||}{\textbf{Mean}} \\
& & $E_1$ & $E_1$ & $E_2$ & $E_3$ \\
\hline \hline
0.1 & 1 & 3.820 & 3.177 & 0.831 & 0.189 \\
& 3 & 5.289 & 3.388 & 1.953 & -0.013 \\
0.5 & 1 & 15.168 & 9.411 & 0.955 & 1.706 \\
& 3 & 14.445 & 5.726 &2.971 & 0.080 \\
0.9 & 1 & 13.717 & 4.573 & 4.597 & 0.311 \\
& 3 & 42.419 & 6.980 & 13.664 & -0.711\\ \hline \hline
1.2 & 1 & 32.507 & 11.573 &0.069 & 2.137\\
& 3 & 112.648 & 21.279 &-0.915 & 0.800\\
1.5 & 1 & 50.305 & 12.680 & 0.195 &0.923\\
& 3 & 250.832 & 27.170 & -5.749 &3.557\\
1.9 & 1 & 261.066 & 20.729 & -0.530 & 9.139\\
& 3 & 2311.521 & 155.950 &-0.304 & -35.177 \\ \hline
\end{tabular}
\caption{$\beta = 0.2$}
\label{tab: beta 0.2}
\end{subtable}
\hfill
\begin{subtable}[h]{0.45\textwidth}
\begin{tabular}{||c|c|c|c|c|c||}
\hline
\multicolumn{1}{||c|}{\textbf{$\alpha$}} &
\multicolumn{1}{c|}{\textbf{$\gamma$}} &
\multicolumn{1}{c|}{\textbf{Mean}} &
\multicolumn{1}{c|}{\textbf{Rms}} &
\multicolumn{1}{c|}{\textbf{Mean}} &
\multicolumn{1}{c||}{\textbf{Mean}} \\
& & $E_1$ & $E_1$ & $E_2$ & $E_3$ \\
\hline \hline
0.1 & 1 & 1.092 & 1.535 & 0.307 & -0.402 \\
& 3 & 1.254 & 1.627 & 0.378 & -0.372 \\
0.5 & 1 & 2.503 & 1.690 & 0.754 & -0.753 \\
& 3 & 4.680 & 2.146 & 1.651 & -0.824 \\
0.9 & 1 & 2.909 & 1.548 & 0.217 & 0.416 \\
& 3 & 8.042 & 1.767 & 0.620 & -0.404\\ \hline \hline
1.2 & 1 & 7.649 & 1.992 & -0.944 & -0.185\\
& 3 & 64.937 & 9.918 & -1.692 & -2.275\\
1.5 & 1 & 25.713 & 3.653 & -1.697 & 3.653\\
& 3 & 218.591 & 21.871 & -4.566 & -13.027 \\
1.9 & 1 & 238.379 & 14.860 & -6.826 & 16.330 \\
& 3 & 2357.553 & 189.231 & 3.827 & -87.353 \\ \hline
\end{tabular}
\caption{$\beta = 0.49$}
\label{tab: beta 0.49}
\end{subtable}
\caption{Monte Carlo estimates of $E_1$, $E_2$ and $E_3$ from 500 samples. We have here fixed $n = 700$; $\beta = 0.2$ in the first table and $\beta = 0.49$ in the second one.}
\label{tab:tabella totale}
\end{table}
\\
It appears that the estimation obtained using the truncated quadratic variation deteriorates as $\alpha$ and $\gamma$ increase (see column 3 in both Tables \ref{tab: beta 0.2} and \ref{tab: beta 0.49}). However, after applying the corrections, the error is visibly reduced. This can be seen, for example, by comparing the error with the root mean square: before the adjustment, in both Tables \ref{tab: beta 0.2} and \ref{tab: beta 0.49} the third column dominates the fourth one, showing that the bias of the original estimator dominates the standard deviation; after the implementation of our main results, we get $E_2$ and $E_3$, for which the bias is much smaller. \\
We observe that for $\alpha < 1$, in both cases $\beta = 0.2$ and $\beta = 0.49$, it is possible to choose $M$ (on which the decay of $\psi$ depends) appropriately so as to make the error $E_3$ smaller than $E_2$. On the other hand, for $\alpha > 1$, the approach that consists of subtracting the jump part performs better than the other, since $E_3$ is in this case generally bigger than $E_2$; however, this method requires the knowledge of $\gamma$.
It is worth noting that both approaches, leading respectively to $E_2$ and $E_3$, work well for any $\beta \in (0, \frac{1}{2})$. \\
We recall that, in \cite{Condition Jacod beta}, the condition found on $\beta$ to obtain a well-performing estimator was
\begin{equation}
\beta > \frac{1}{2(2 - \alpha)},
\label{eq: cond jacod on beta}
\end{equation}
which is not satisfied in the case $\beta = 0.2$. Our results match those in \cite{Condition Jacod beta}, since the third column in Table \ref{tab: beta 0.49} (where $\beta =0.49$) is generally smaller than the third one in Table \ref{tab: beta 0.2} (where $\beta =0.2$). We emphasise nevertheless that, comparing columns 5 and 6 in the two tables, there is no evidence of a dependence of $E_2$ and $E_3$ on $\beta$. \\
The price to pay is that implementing our corrections requires the knowledge of $\alpha$. Such corrections turn out to be a clear improvement also because, for $\alpha < 1$, the original estimator \eqref{eq: Qn applications} performs well only for those values of the couple $(\alpha, \beta)$ which satisfy condition \eqref{eq: cond jacod on beta} while, for $\alpha \ge 1$, there is no $\beta \in (0, \frac{1}{2})$ for which such a condition can hold. This is why, in the lower part of both Tables \ref{tab: beta 0.2} and \ref{tab: beta 0.49}, $E_1$ is so big. \\
Using our main results, instead, we get $E_2$ and $E_3$ that are always small, and so we obtain two corrections which make the estimator perform well without adding any requirement on $\alpha$ or $\beta$.
\section{Preliminary results}\label{S:Propositions}
In the sequel, for $\delta \ge 0$, we will denote by $R_i(\Delta_n^\delta )$ any random variable which is $\mathcal{F}_{t_i}$-measurable and such that, for any $q \ge 1$,
\begin{equation}
\exists c > 0: \quad \left \| \frac{R_i(\Delta_n^\delta )}{\Delta_n^\delta } \right \|_{L^q} \le c < \infty,
\label{eq: definition R}
\end{equation}
with $c$ independent of $i$ and $n$. \\
The $R_i$ represent remainder terms; as a consequence of the definition just given, they satisfy the following useful property:
\begin{equation}
R_i(\Delta_{n}^\delta)= \Delta_{n}^\delta R_i(\Delta_{n}^0).
\label{propriety power R}
\end{equation}
We point out that this does not imply the linearity of $R_i$: the random variables $R_i$ on the left- and right-hand sides are not necessarily the same, but only two random variables for which the control \eqref{eq: definition R} holds with $\Delta_{n}^\delta $ and $\Delta_{n}^0$, respectively. \\ \\
In order to prove the main result, the following proposition will be useful. \\
We define, for $i \in \left \{ 0, ... , n-1 \right \}$,
\begin{equation}
\Delta X_i^J : = \int_{t_i}^{t_{i + 1}}\int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{s^-}) \, z \, \tilde{\mu}(ds, dz) \qquad \mbox{and} \qquad \Delta \tilde{X}_i^J : = \int_{t_i}^{t_{i + 1}}\int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{t_i}) \, z \, \tilde{\mu}(ds, dz).
\label{eq: def salti}
\end{equation}
We want to bound the error committed in moving from $\Delta X_i^J$ to $\Delta \tilde{X}_i^J$, denoting by $o_{L^1}(\Delta_{n}^k)$ any quantity such that $ \mathbb{E}_i[|o_{L^1}(\Delta_{n}^k)|] = R_i(\Delta_{n}^k)$, with the notation $\mathbb{E}_i[.] = \mathbb{E}[.|\mathcal{F}_{t_i}]$.
\begin{proposition}
Suppose that A1--A4 hold. Then
\begin{equation}
(\Delta X_i^J)^2 \varphi_{\Delta_{n}^\beta}(\Delta X_i) = (\Delta \tilde{X}_i^J )^2 \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J ) + o_{L^1}(\Delta_{n}^{\beta(2 - \alpha) + 1}),
\label{eq: espansione salti}
\end{equation}
\begin{equation}
(\int_{t_i}^{t_{i+1}}a_s dW_s)\Delta X_i^J \varphi_{\Delta_{n}^\beta}(\Delta X_i) = (\int_{t_i}^{t_{i+1}}a_s dW_s)\Delta \tilde{X}_i^J\varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J ) + o_{L^1}(\Delta_{n}^{\beta(2 - \alpha) + 1}).
\label{eq: salti con browniano}
\end{equation}
Moreover, for each $\tilde{\epsilon} > 0$ and $f$ the function introduced in the definition of $Q_n$,
\begin{equation}
\sum_{i = 0}^{n - 1} f(X_{t_i})(\Delta X_i^J)^2 \varphi_{\Delta_{n}^\beta}(\Delta X_i) = \sum_{i = 0}^{n - 1} f(X_{t_i})(\Delta \tilde{X}_i^J )^2 \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J ) + o_\mathbb{P}(\Delta_n^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})}),
\label{eq: aggiunta prop1 salti}
\end{equation}
\begin{equation}
\sum_{i = 0}^{n - 1} f(X_{t_i})(\int_{t_i}^{t_{i+1}}a_s dW_s)\Delta X_i^J \varphi_{\Delta_{n}^\beta}(\Delta X_i) = \sum_{i = 0}^{n - 1} f(X_{t_i})(\int_{t_i}^{t_{i+1}}a_s dW_s)\Delta \tilde{X}_i^J\varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J ) + o_{\mathbb{P}}(\Delta_n^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})}).
\label{eq: aggiunta prop1 browniano}
\end{equation}
\label{prop: espansione salti}
\end{proposition}
Proposition \ref{prop: espansione salti} will be proved in the Appendix. \\
In the proof of our main results, the following lemma will also be used repeatedly.
\begin{lemma}
Let us consider $\Delta X_i^J$ and $\Delta \tilde{X}_i^J$ as defined in \eqref{eq: def salti}. Then
\begin{enumerate}
\item For each $q \ge 2$ there exists $\epsilon > 0$ such that
\begin{equation}
\mathbb{E}[|\Delta X_i^J 1_{\left \{ |\Delta X_i^J| \le 4 \Delta_{n}^\beta \right \}}|^q |\mathcal{F}_{t_i} ] = R_i(\Delta_{n}^{1 + \beta(q - \alpha)}) = R_i(\Delta_{n}^{1 + \epsilon}).
\label{eq: estensione salti lemma 10}
\end{equation}
\begin{equation}
\mathbb{E}[|\Delta \tilde{X}_i^J 1_{\left \{ |\Delta \tilde{X}_i^J| \le 4 \Delta_{n}^\beta \right \}}|^q |\mathcal{F}_{t_i} ] = R_i( \Delta_{n}^{1 + \beta(q - \alpha)}) = R_i(\Delta_{n}^{1 + \epsilon}).
\label{eq: estensione tilde salti lemma 10}
\end{equation}
\item For each $q \ge 1$ we have
\begin{equation}
\mathbb{E}[|\Delta X_i^J 1_{\left \{ \frac{\Delta_{n}^\beta}{4} \le |\Delta X_i^J| \le 4 \Delta_{n}^\beta \right \}}|^{q} |\mathcal{F}_{t_i} ] = R_i( \Delta_{n}^{1 + \beta(q - \alpha)}).
\label{eq: estensione q = 1 + epsilon lemma 10}
\end{equation}
\end{enumerate}
\begin{proof}
Reasoning as in Lemma 10 in \cite{Chapitre 1} we easily get \eqref{eq: estensione salti lemma 10}. Observing that $\Delta \tilde{X}_i^J$ is a particular case of $\Delta X_i^J$ in which $\gamma$ is frozen at $X_{t_i}$, it follows that \eqref{eq: estensione tilde salti lemma 10} can be obtained in the same way as \eqref{eq: estensione salti lemma 10}. Using the bound on $\Delta X_i^J$ obtained from the indicator function, we get that the left-hand side of \eqref{eq: estensione q = 1 + epsilon lemma 10} is upper bounded by
$$c \Delta_{n}^{\beta q} \mathbb{E}[ 1_{\left \{ \frac{\Delta_{n}^\beta}{4} \le |\Delta X_i^J| \le 4 \Delta_{n}^\beta \right \}} |\mathcal{F}_{t_i} ] \le \Delta_{n}^{\beta q} R_i(\Delta_{n}^{1 - \alpha \beta}),$$
where in the last inequality we have used Lemma 11 in \cite{Chapitre 1} on the interval $[t_i, t_{i + 1}]$ instead of $[0, h]$. From property \eqref{propriety power R} of $R_i$ we get \eqref{eq: estensione q = 1 + epsilon lemma 10}.
\end{proof}
\label{lemma: estensione 10 capitolo 1}
\end{lemma}
\section{Proof of main results} \label{S: Proof main}
We first show Lemma \ref{lemma: brownian increments}, which is required for the proof of Theorem \ref{th: estensione Qn}.
\subsection{Proof of Lemma \ref{lemma: brownian increments}.}
\begin{proof}
By the definition of $X^c$ we have
$$|\sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i + 1}} - X^c_{t_i})^2(\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1)| \le$$
$$ \le c \sum_{i = 0}^{n - 1} |f(X_{t_i})|\big(|\int_{t_i}^{t_{i + 1}}a_s dW_s|^2 + |\int_{t_i}^{t_{i + 1}} b_s ds|^2\big)|\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1| = : |I_{2,1}^n| + |I_{2,2}^n|. $$
In the sequel the constant $c$ may change value from line to line. \\
Concerning $I_{2,1}^n$, using H\"older's inequality we have
\begin{equation}
\mathbb{E}[|I_{2,1}^n|] \le c \sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})|\mathbb{E}_i[|\int_{t_i}^{t_{i + 1}}a_s dW_s|^{2p}]^\frac{1}{p} \mathbb{E}_i[|\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1|^q]^\frac{1}{q}],
\label{eq: I21 start}
\end{equation}
where $\mathbb{E}_i$ is the conditional expectation with respect to $\mathcal{F}_{t_i}$. \\
We now use the Burkholder--Davis--Gundy inequality to get, for $p_1 \ge 2$,
\begin{equation}
\mathbb{E}_i[|\int_{t_i}^{t_{i+1}}a_s dW_s|^{p_1}]^\frac{1}{p_1} \le \mathbb{E}_i[|\int_{t_i}^{t_{i+1}}a^2_s ds|^\frac{p_1}{2}]^\frac{1}{p_1} \le R_i(\Delta_{n}^\frac{p_1}{2})^\frac{1}{p_1} = R_i(\Delta_{n}^\frac{1}{2}),
\label{eq: bdg}
\end{equation}
where in the last inequality we have used that $a^2_s$ has bounded moments as a consequence of Lemma \ref{lemma: Moment inequalities}. We now observe that, from the definition of $\varphi$, $\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1$ is different from $0$ only if $|\Delta X_i| > \Delta_{n}^\beta$. We consider two different sets: $|\Delta X_i^J| < \frac{1}{2} \Delta_{n}^\beta$ and $|\Delta X_i^J| \ge \frac{1}{2} \Delta_{n}^\beta$. We recall that $\Delta X_i = \Delta X_i^c + \Delta X_i^J$, so if $|\Delta X_i| > \Delta_{n}^\beta$ and $|\Delta X_i^J| < \frac{1}{2} \Delta_{n}^\beta$, then $|\Delta X_i^c|$ must exceed $\frac{1}{2} \Delta_{n}^\beta$.
Using a conditional version of Chebyshev's inequality we have that, $\forall r > 1$,
\begin{equation}
\mathbb{P}_i(|\Delta X^c_i| \ge \frac{1}{2} \Delta_{n}^\beta ) \le c \frac{\mathbb{E}_i[|\Delta X^c_i|^r]}{\Delta_{n}^{\beta r }} \le R_i(\Delta_{n}^{(\frac{1}{2} - \beta) r}),
\label{eq: prob parte continua}
\end{equation}
where $\mathbb{P}_i$ is the conditional probability with respect to $\mathcal{F}_{t_i}$; the last inequality follows from the sixth point of Lemma \ref{lemma: Moment inequalities}. If, on the other hand, $|\Delta X_i^J| \ge \frac{1}{2} \Delta_{n}^\beta$, then we introduce the set $N_{i,n}: = \left \{ |\Delta L_s| \le \frac{2 \Delta_{n}^\beta}{\gamma_{min}}; \forall s \in (t_i, t_{i + 1}] \right \}$. We have
$\mathbb{P}_i(\left \{|\Delta X_i^J| \ge \frac{1}{2} \Delta_{n}^\beta \right \} \cap (N_{i,n})^c) \le \mathbb{P}_i((N_{i,n})^c)$, with
\begin{equation}
\mathbb{P}_i((N_{i,n})^c) = \mathbb{P}_i(\exists s \in (t_i, t_{i+ 1}] : |\Delta L_s| > \frac{2\Delta_{n}^\beta}{\gamma_{min}} ) \le c \int_{t_i}^{t_{i+1}} \int_{\frac{2\Delta_{n}^\beta}{\gamma_{min}}}^\infty F(z) dz ds \le c \Delta_{n}^{1 - \alpha \beta},
\label{eq: proba Nin c}
\end{equation}
where we have used the third point of A4. Furthermore, using Markov's inequality,
\begin{equation}
\mathbb{P}_i(\left \{|\Delta X_i^J| \ge \frac{1}{2} \Delta_{n}^\beta \right \} \cap N_{i,n}) \le c \mathbb{E}_i[|\Delta X_i^J|^r 1_{N_{i,n}}] \Delta_{n}^{- \beta r} \le R_i( \Delta_{n}^{- \beta r + 1 + \beta(r - \alpha)}) = R_i(\Delta_{n}^{1 - \beta \alpha}),
\label{eq: salti con Nin}
\end{equation}
where we have used the first point of Lemma \ref{lemma: estensione 10 capitolo 1}, observing that $1_{N_{i,n}}$ acts like the indicator function in \eqref{eq: estensione salti lemma 10} (see also (219) in \cite{Chapitre 1}).
Now using \eqref{eq: prob parte continua}, \eqref{eq: proba Nin c}, \eqref{eq: salti con Nin} and the arbitrariness of $r$ we have
\begin{equation}
\mathbb{P}_i(|\Delta X_i| > \Delta_{n}^\beta) = \mathbb{P}_i(|\Delta X_i| > \Delta_{n}^\beta, |\Delta X_i^J| < \frac{1}{2} \Delta_{n}^\beta ) + \mathbb{P}_i(|\Delta X_i| > \Delta_{n}^\beta, |\Delta X_i^J| \ge \frac{1}{2} \Delta_{n}^\beta) \le R_i(\Delta_{n}^{1 - \alpha \beta}).
\label{eq: proba varphi diverso da 1}
\end{equation}
Taking $p$ large and $q$ close to $1$ in \eqref{eq: I21 start} and plugging in \eqref{eq: bdg} with $p_1 = 2p$ and \eqref{eq: proba varphi diverso da 1}, we get, $\forall \epsilon > 0$,
$$n^{1 - \alpha \beta - \tilde{\epsilon}}\mathbb{E}[|I_{2,1}^n|] \le n^{1 - \alpha \beta - \tilde{\epsilon}} c \sum_{i=1}^{n - 1}\mathbb{E}[|f(X_{t_i})| R_i(\Delta_{n}) R_i(\Delta_{n}^{1 - \alpha \beta - \epsilon})] \le (\frac{1}{n})^{\tilde{\epsilon} - \epsilon} \frac{c}{n} \sum_{i=1}^{n - 1} \mathbb{E}[|f(X_{t_i})| R_i(1)]. $$
Now, for each $\tilde{\epsilon} > 0$, we can always find an $\epsilon < \tilde{\epsilon}$, which is enough to conclude that $\frac{I_{2,1}^n}{(\frac{1}{n})^{1 - \alpha \beta - \tilde{\epsilon}}}$ goes to zero in $L^1$ and hence in probability.
Let us now consider $I_{2,2}^n$. We recall that $b$ is uniformly bounded by a constant, therefore
\begin{equation}
(\int_{t_i}^{t_{i + 1}} b_s ds)^2 \le c \Delta_{n}^2.
\label{eq: stima I22}
\end{equation}
Acting moreover on $|\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1|$ as we did above, it follows that
$$n^{1 - \alpha \beta - \tilde{\epsilon}}\mathbb{E}[|I_{2,2}^n|] \le n^{1 - \alpha \beta - \tilde{\epsilon}} c \sum_{i=1}^{n - 1}\mathbb{E}[|f(X_{t_i})| R_i(\Delta_{n}^2) R_i(\Delta_{n}^{1 - \alpha \beta - \epsilon})] \le (\frac{1}{n})^{ 1 + \tilde{\epsilon} - \epsilon} \frac{c}{n} \sum_{i=1}^{n - 1} \mathbb{E}[|f(X_{t_i})| R_i(1)] $$
and so $I_{2,2}^n = o_\mathbb{P}((\frac{1}{n})^{1 - \alpha \beta - \tilde{\epsilon}})$.
\end{proof}
\subsection{Proof of Theorem \ref{th: estensione Qn}.}
We observe that, using the dynamics \eqref{eq: model} of $X$ and the definition of the continuous part $X^c$, we have that
\begin{equation}
X_{t_{i + 1}} - X_{t_i} = (X^c_{t_{i + 1}} - X^c_{t_i}) + \int_{t_i}^{t_{i + 1}}\int_{\mathbb{R} \backslash \left \{0 \right \}} \gamma(X_{s^-}) \, z \, \tilde{\mu}(ds, dz).
\label{eq: incremento X funzione di Xc}
\end{equation}
Replacing \eqref{eq: incremento X funzione di Xc} in definition \eqref{eq: definition Qn} of $Q_n$ we have
$$Q_n = \sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i + 1}} - X^c_{t_i})^2 + \sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i + 1}} - X^c_{t_i})^2(\varphi_{\Delta_{n}^\beta}(\Delta X_i)- 1)+ $$
\begin{equation}
+ 2 \sum_{i = 0}^{n - 1} f(X_{t_i})(X^c_{t_{i + 1}} - X^c_{t_i})(\Delta X_i^J) \varphi_{\Delta_{n}^\beta}(\Delta X_i) + \sum_{i = 0}^{n - 1} f(X_{t_i})(\Delta X_i^J)^2\varphi_{\Delta_{n}^\beta}(\Delta X_i)= : \sum_{j= 1}^4 I_j^n.
\label{eq: riformulazione Qn}
\end{equation}
Comparing \eqref{eq: riformulazione Qn} with \eqref{eq: Qn parte continua}, and using also definition \eqref{eq: definition tilde Qn} of $\tilde{Q}_n$, it follows that our goal is to show that $I_2^n + I_3^n = \mathcal{E}_n$, that is, both $o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)})$ and $o_\mathbb{P}(\Delta_n^{(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon})})$. We have already shown in Lemma \ref{lemma: brownian increments} that $I_2^n = o_{\mathbb{P}}(\Delta_n^{1 - \alpha \beta - \tilde{\epsilon}})$. As $(1 - \alpha \beta - \tilde{\epsilon}) \land (\frac{1}{2} - \tilde{\epsilon}) \le 1 - \alpha \beta - \tilde{\epsilon}$ and $\beta(2 - \alpha) < 1 - \alpha \beta - \tilde{\epsilon}$, we immediately get $I_2^n = \mathcal{E}_n$. \\
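The second comparison of exponents invoked above can be checked directly: for $\tilde{\epsilon}$ small enough,

```latex
\beta(2 - \alpha) < 1 - \alpha \beta - \tilde{\epsilon}
\;\Longleftrightarrow\;
2\beta - \alpha\beta + \alpha\beta < 1 - \tilde{\epsilon}
\;\Longleftrightarrow\;
2\beta < 1 - \tilde{\epsilon},
```

which holds as soon as $\tilde{\epsilon} < 1 - 2\beta$, a choice that is always possible since $\beta < \frac{1}{2}$.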
Let us now consider $I_3^n$. From the definition of the process $(X_t^c)$, it equals
$$ 2 \sum_{i = 0}^{n - 1} f(X_{t_i})[\int_{t_i}^{t_{i+ 1}} b_s ds + \int_{t_i}^{t_{i+ 1}} a_s dW_s] \Delta X_i^J \varphi_{\Delta_{n}^\beta}(\Delta X_i)= : I_{3,1}^n + I_{3,2}^n.$$
We apply to $I_{3,1}^n$ the Cauchy--Schwarz inequality, \eqref{eq: stima I22} and Lemma 10 in \cite{Chapitre 1}, getting
$$\mathbb{E}[|I_{3,1}^n|] \le 2 \sum_{i = 0}^{n - 1} \mathbb{E}[ |f(X_{t_i})| R_i(\Delta_{n}^{1 + \beta(2 - \alpha)})^\frac{1}{2} R_i( \Delta_{n}^2)^\frac{1}{2}] \le \Delta_n^{\frac{1}{2} + \frac{\beta}{2}(2 - \alpha)} \frac{1}{n} \sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})| R_i(1)], $$
where we have also used property \eqref{propriety power R} of $R$. We observe that $\frac{1}{2} + \beta -\frac{\alpha \beta}{2} > \frac{1}{2}$ if and only if $\beta(1 - \frac{\alpha}{2}) > 0$, which always holds. We can therefore say that $I_{3,1}^n = o_\mathbb{P}(\Delta_n^\frac{1}{2})$ and so
\begin{equation}
I_{3,1}^n = o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})}).
\label{eq: I31 nuovo}
\end{equation}
Moreover,
\begin{equation}
\frac{\mathbb{E}[|I_{3,1}^n|]}{\Delta_n^{\beta(2 - \alpha)}} \le \Delta_n^{\frac{1}{2} - \beta + \frac{\alpha \beta}{2}} \frac{1}{n}\sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})| R_i( 1)],
\label{eq: conv I31}
\end{equation}
which goes to zero using the polynomial growth of $f$, the definition of $R$ and the fifth point of Lemma \ref{lemma: Moment inequalities}. Moreover, we have observed that the exponent of $\Delta_n$ is positive for $\beta < \frac{1}{2 - \alpha}$, which always holds. \\
Concerning $I_{3,2}^n$, we start by proving that $I_{3,2}^n = o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)})$. From \eqref{eq: salti con browniano} in Proposition \ref{prop: espansione salti} we have
\begin{equation}
\frac{I_{3,2}^n}{\Delta_n^{\beta(2 - \alpha)}} = \frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} f(X_{t_i}) \Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) \int_{t_i}^{t_{i + 1}} a_s dW_s + \frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} f(X_{t_i}) o_{L^1}(\Delta_{n}^{\beta(2 - \alpha) + 1}).
\label{eq: riformulo dopo prop 1}
\end{equation}
By the definition of $o_{L^1}$, the last term above goes to zero in $L^1$ and hence in probability. The first term of \eqref{eq: riformulo dopo prop 1} can be written as
\begin{equation}
\frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} f(X_{t_i}) \Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) [\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s + \int_{t_i}^{t_{i + 1}} (a_s- a_{t_i}) dW_s].
\label{eq: main I32}
\end{equation}
We want to apply Lemma 9 of \cite{Genon Catalot} to the first term of \eqref{eq: main I32} in order to get that it converges to zero in probability, so we have to show the following:
\begin{equation}
\frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[f(X_{t_i}) \Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s] \xrightarrow{\mathbb{P}} 0,
\label{eq: tesi 1 Genon Catalot}
\end{equation}
\begin{equation}
\frac{4}{\Delta_n^{2\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[f^2(X_{t_i}) (\Delta \tilde{X}_i^J)^2 \varphi^2_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)(\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s)^2] \xrightarrow{\mathbb{P}} 0,
\label{eq: tesi 2 Genon Catalot}
\end{equation}
where $\mathbb{E}_i[.] = \mathbb{E}[. | \mathcal{F}_{t_i}]$. \\
Using the independence between $W$ and $L$, we have that the left-hand side of \eqref{eq: tesi 1 Genon Catalot} is
\begin{equation}
\frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n - 1} f(X_{t_i}) \mathbb{E}_i[ \Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)] \mathbb{E}_i[\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s] = 0.
\label{eq: I32 centrato}
\end{equation}
Now, in order to prove \eqref{eq: tesi 2 Genon Catalot}, we use H\"older's inequality with $p$ large and $q$ close to $1$ on its left-hand side, getting that it is upper bounded by
$$\Delta_n^{ - 2 \beta(2 - \alpha)} \sum_{i = 0}^{n - 1} f^2(X_{t_i}) \mathbb{E}_i[(\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s)^{2p}]^\frac{1}{p} \mathbb{E}_i[ |\Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)|^{2q}]^\frac{1}{q} \le $$
\begin{equation}
\le \Delta_n^{ - 2 \beta(2 - \alpha)} \sum_{i = 0}^{n - 1} f^2(X_{t_i})R_i( \Delta_{n})R_i( \Delta_{n}^{\frac{1}{q} + \frac{\beta}{q}(2q - \alpha)}) \le \Delta_n^{1 - 2 \beta(2 - \alpha) + 2 \beta - \alpha \beta - \epsilon} \frac{1}{n} \sum_{i = 0}^{n - 1} f^2(X_{t_i}) R_i( 1),
\label{eq: fine per tesi 2 Catalot}
\end{equation}
where we have used \eqref{eq: bdg}, \eqref{eq: estensione tilde salti lemma 10} and property \eqref{propriety power R} of $R$. We observe that the exponent of $\Delta_n$ is positive if $\beta < \frac{1}{2 - \alpha} - \epsilon$, and we can always find an $\epsilon > 0$ for which this is true. Hence \eqref{eq: fine per tesi 2 Catalot} goes to zero in $L^1$ and hence in probability. \\
Concerning the second term of \eqref{eq: main I32}, using the Cauchy--Schwarz inequality and \eqref{eq: estensione tilde salti lemma 10} we have
$$\mathbb{E}_i[|\Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)| |\int_{t_i}^{t_{i + 1}} [a_s- a_{t_i}] dW_s|] \le \mathbb{E}_i[|\Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)|^2]^\frac{1}{2} \mathbb{E}_i[|\int_{t_i}^{t_{i + 1}} [a_s- a_{t_i}] dW_s|^2]^\frac{1}{2} \le $$
\begin{equation}
\le R_i(\Delta_{n}^{\frac{1}{2} + \frac{\beta}{2}(2 - \alpha)}) \mathbb{E}_i[\int_{t_i}^{t_{i + 1}} |a_s - a_{t_i}|^2 ds]^\frac{1}{2} \le \Delta_{n}^{\frac{1}{2} + \frac{\beta}{2}(2 - \alpha)}R_i( 1) \Delta_n \le \Delta_{n}^{\frac{3}{2} + \frac{\beta}{2}(2 - \alpha)}R_i(1),
\label{eq: altra parte I32}
\end{equation}
where we have also used the second point of Lemma \ref{lemma: Moment inequalities} and property \eqref{propriety power R} of $R$. Replacing \eqref{eq: altra parte I32} in the second term of \eqref{eq: main I32}, we get that it is upper bounded in $L^1$ norm by
\begin{equation}
\Delta_n^{\frac{1}{2} - \beta + \frac{\alpha \beta}{2}} \frac{1}{n} \sum_{i = 0}^{n-1} \mathbb{E}[ |f(X_{t_i})| R_i( 1)],
\label{eq: fine I32}
\end{equation}
which goes to zero since the exponent of $\Delta_n$ is positive for $\beta < \frac{1}{2 - \alpha}$, which always holds. Using
\eqref{eq: riformulo dopo prop 1} - \eqref{eq: tesi 2 Genon Catalot} and \eqref{eq: fine I32} we get
\begin{equation}
\frac{I_{3,2}^n}{\Delta_n^{\beta(2 - \alpha)}} \xrightarrow{\mathbb{P}} 0.
\label{eq: convergence I32}
\end{equation}
We now want to show that $I_{3,2}^n$ is also $o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})})$. \\
Using \eqref{eq: aggiunta prop1 browniano} in Proposition \ref{prop: espansione salti}, it is enough to prove that
\begin{equation}
\frac{1}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \sum_{i = 0}^{n - 1}f(X_{t_i}) [\Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) \int_{t_i}^{t_{i + 1}} a_s dW_s ]\xrightarrow{\mathbb{P}} 0,
\label{eq: I32 nuovo inizio}
\end{equation}
where the left-hand side above can be seen as \eqref{eq: main I32}, with the only difference that now we have $\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}$ instead of $\Delta_n^{\beta(2 - \alpha)}$. Acting as we did in \eqref{eq: I32 centrato} and \eqref{eq: fine per tesi 2 Catalot}, we again have
\begin{equation}
\frac{2}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \sum_{i = 0}^{n - 1} f(X_{t_i}) \mathbb{E}_i [\Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) \int_{t_i}^{t_{i + 1}} a_{t_i} dW_s ] \xrightarrow{\mathbb{P}} 0
\label{eq: I32 nuovo conv}
\end{equation}
and
\begin{equation}
\frac{4}{\Delta_n^{2(\frac{1}{2} - \tilde{\epsilon})}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[f^2(X_{t_i}) (\Delta \tilde{X}_i^J)^2 \varphi^2_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J)(\int_{t_i}^{t_{i + 1}} a_{t_i} dW_s)^2] \le \Delta_n^{2 \tilde{\epsilon} + 2 \beta - \alpha \beta - \epsilon} \frac{1}{n} \sum_{i = 0}^{n - 1} f^2(X_{t_i}) R_i(1),
\label{eq: I32 nuovo carre}
\end{equation}
which goes to zero in $L^1$ and hence in probability. Using also \eqref{eq: altra parte I32} we have that
\begin{equation}
\frac{2}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[ |f(X_{t_i}) \Delta \tilde{X}_i^J \varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) \int_{t_i}^{t_{i + 1}}[ a_s - a_{t_i}] dW_s|] \le \Delta_n^{\frac{\beta}{2}(2 - \alpha) + \tilde{\epsilon}} \frac{1}{n} \sum_{i = 0}^{n - 1} |f(X_{t_i})| R_i(1),
\label{eq: I32 nuovo fine}
\end{equation}
which, again, goes to zero in $L^1$ and hence in probability, since the exponent of $\Delta_n$ is always positive. Using \eqref{eq: I32 nuovo inizio} - \eqref{eq: I32 nuovo fine} we get $I_{3,2}^n = o_\mathbb{P}(\Delta_n^{\frac{1}{2} - \tilde{\epsilon}})$ and so
\begin{equation}
I_{3,2}^n = o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})}).
\label{eq: finale I32 nuovo}
\end{equation}
From Lemma \ref{lemma: brownian increments}, \eqref{eq: I31 nuovo}, \eqref{eq: conv I31}, \eqref{eq: convergence I32} and \eqref{eq: finale I32 nuovo}, \eqref{eq: Qn parte continua} follows. \\
\\
Now, in order to prove \eqref{eq: estensione Qn}, we recall the definition of $X_t^c$:
\begin{equation}
X^c_{t_{i + 1}} - X^c_{t_i} = \int_{t_i}^{t_{i + 1}} b_s ds + \int_{t_i}^{t_{i + 1}} a_s dW_s.
\label{eq: definition Xc}
\end{equation}
Replacing \eqref{eq: definition Xc} in \eqref{eq: Qn parte continua} and comparing it with \eqref{eq: estensione Qn} it follows that our goal is to show that
$$A_1^n + A_2^n : = \sum_{i = 0}^{n - 1} f(X_{t_i}) (\int_{t_i}^{t_{i + 1}} b_s ds)^2 + 2 \sum_{i = 0}^{n-1} f(X_{t_i}) (\int_{t_i}^{t_{i + 1}} b_s ds)(\int_{t_i}^{t_{i + 1}} a_s dW_s) = \mathcal{E}_n. $$
Using \eqref{eq: stima I22} and property \eqref{propriety power R} of $R$ we know that
\begin{equation}
\frac{\mathbb{E}[|A_1^n|]}{\Delta_n^{\beta(2 - \alpha)}} \le \frac{1}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n-1} \mathbb{E}[|f(X_{t_i})|R_i( \Delta_{n}^2)] \le \Delta_n^{1 - \beta(2 - \alpha)} \frac{1}{n} \sum_{i = 0}^{n-1}\mathbb{E}[|f(X_{t_i})| R_i(1)]
\label{eq: estim A1}
\end{equation}
and
\begin{equation}
\frac{\mathbb{E}[|A_1^n|]}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \le \Delta_n^{\frac{1}{2} + \tilde{\epsilon}} \frac{1}{n} \sum_{i = 0}^{n-1}\mathbb{E}[|f(X_{t_i})| R_i(1)],
\label{eq: estim A1 nuovo}
\end{equation}
which go to zero since the exponent of $\Delta_n$ is always positive, $f$ has polynomial growth and the moments are bounded. \\
Let us now consider $A_2^n$. By adding and subtracting $b_{t_i}$ in the first integral, as we have already done, we get that
$$A_2^n = \sum_{i = 0}^{n-1} \zeta_{n,i} + A_{2,2}^n : = 2 \sum_{i = 0}^{n-1} f(X_{t_i})(\int_{t_i}^{t_{i + 1}} b_{t_i} ds)(\int_{t_i}^{t_{i + 1}} a_s dW_s) + 2\sum_{i = 0}^{n-1} f(X_{t_i})(\int_{t_i}^{t_{i + 1}} [b_s - b_{t_i}] ds)(\int_{t_i}^{t_{i + 1}} a_s dW_s).$$
Using Lemma 9 in \cite{Genon Catalot}, we want to show that
\begin{equation}
\sum_{i=0}^{n-1} \zeta_{n,i} = \mathcal{E}_n
\label{eq: A21 nuovo}
\end{equation}
and so that the following convergences hold:
\begin{equation}
\frac{1}{\Delta_n^{\beta(2 - \alpha)}}\sum_{i=0}^{n-1} \mathbb{E}_i[ \zeta_{n,i}] \xrightarrow{\mathbb{P}} 0 \qquad \frac{1}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}}\sum_{i=0}^{n-1} \mathbb{E}_i[ \zeta_{n,i}] \xrightarrow{\mathbb{P}} 0;
\label{eq: estim A21}
\end{equation}
\begin{equation}
\frac{1}{\Delta_n^{2\beta(2 - \alpha)}}\sum_{i=0}^{n-1} \mathbb{E}_i[ \zeta^2_{n,i}] \xrightarrow{\mathbb{P}} 0 \qquad \frac{1}{\Delta_n^{2(\frac{1}{2} - \tilde{\epsilon})}}\sum_{i=0}^{n-1} \mathbb{E}_i[ \zeta^2_{n,i}] \xrightarrow{\mathbb{P}} 0.
\label{eq: estim A21 carre}
\end{equation}
We have
$$\frac{1}{\Delta_n^{\beta(2 - \alpha)}}\sum_{i = 0}^{n-1} \mathbb{E}_i[ \zeta_{n,i}]= \frac{2}{\Delta_n^{\beta(2 - \alpha)}} \sum_{i = 0}^{n-1} f(X_{t_i}) \Delta_{n} b_{t_i} \mathbb{E}_i[\int_{t_i}^{t_{i + 1}} a_s dW_s] = 0$$
and so the two convergences in \eqref{eq: estim A21} both hold.
Concerning \eqref{eq: estim A21 carre}, using \eqref{eq: bdg} we have
$$\Delta_n^{1 - 2 \beta(2 - \alpha)} \frac{c}{n} \sum_{i = 0}^{n-1} f^2(X_{t_i})b^2_{t_i} \mathbb{E}_i[(\int_{t_i}^{t_{i + 1}} a_s dW_s)^2] \le \Delta_n^{2 - 2 \beta(2 - \alpha)}\frac{c}{n} \sum_{i = 0}^{n-1} f^2(X_{t_i})b^2_{t_i}R_i(1) $$
and
$$\Delta_n^{1 - 2 (\frac{1}{2} - \tilde{\epsilon})} \frac{c}{n} \sum_{i = 0}^{n-1} f^2(X_{t_i})b^2_{t_i} \mathbb{E}_i[(\int_{t_i}^{t_{i + 1}} a_s dW_s)^2] \le \Delta_n^{1 + 2 \tilde{\epsilon}} \frac{c}{n} \sum_{i = 0}^{n-1} f^2(X_{t_i})b^2_{t_i}R_i(1), $$
which go to zero in $L^1$ and hence in probability, since the exponent of $\Delta_n$ is always positive. It follows \eqref{eq: estim A21 carre}, and so \eqref{eq: A21 nuovo}.
Concerning $A_{2,2}^n$, using H\"older's inequality, \eqref{eq: bdg}, the assumptions on $b$ gathered in A2 and Jensen's inequality, we have
$$\mathbb{E}[|A_{2,2}^n|]\le c \sum_{i = 0}^{n-1} \mathbb{E}[|f(X_{t_i})| \mathbb{E}_i[(\int_{t_i}^{t_{i + 1}} | b_s - b_{t_i}| ds)^q]^\frac{1}{q} R_i(\Delta_{n}^\frac{1}{2}) ]\le $$
$$ \le c \sum_{i = 0}^{n-1} \mathbb{E}[|f(X_{t_i})| ( \Delta_{n}^{q - 1} \int_{t_i}^{t_{i + 1}}\mathbb{E}_i[|b_s - b_{t_i}|^q] ds)^\frac{1}{q} R_i(\Delta_{n}^\frac{1}{2})] \le c \sum_{i = 0}^{n-1} \mathbb{E}[|f(X_{t_i})| ( \Delta_{n}^{q - 1} \int_{t_i}^{t_{i + 1}} \Delta_{n} ds)^\frac{1}{q} R_i(\Delta_{n}^\frac{1}{2})]. $$
So we get
\begin{equation}
\frac{\mathbb{E}[|A_{2,2}^n|]}{\Delta_n^{\beta(2 - \alpha)}}\le \Delta_n^{\frac{1}{q} + \frac{1}{2} - \beta(2 - \alpha)} \frac{c}{n} \sum_{i = 0}^{n-1}\mathbb{E}[|f(X_{t_i})|R_i(1)] \qquad \mbox{and}
\label{eq: estim A22}
\end{equation}
\begin{equation}
\frac{\mathbb{E}[|A_{2,2}^n|]}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}}\le \Delta_n^{\frac{1}{q} + \tilde{\epsilon}} \frac{c}{n} \sum_{i = 0}^{n-1}\mathbb{E}[|f(X_{t_i})|R_i(1)].
\label{eq: estim A22 nuovo}
\end{equation}
Since the above holds for $q \ge 2$, the best choice is to take $q = 2$; in this way we get that \eqref{eq: estim A22} and \eqref{eq: estim A22 nuovo} go to $0$ in $L^1$ norm, using the polynomial growth of $f$, the boundedness of the moments, the definition of $R_i$ and the fact that the exponent of $\Delta_n$ is in both cases positive, since $\beta < \frac{1}{2 - \alpha}$. \\
From \eqref{eq: estim A1}, \eqref{eq: estim A1 nuovo}, \eqref{eq: estim A21}, \eqref{eq: estim A22} and \eqref{eq: estim A22 nuovo} it follows \eqref{eq: estensione Qn}.
\subsection{Proof of Theorem \ref{th: 2 e 3 insieme}}
\begin{proof}
From Theorem \ref{th: estensione Qn} it is enough to prove that
\begin{equation}
\sum_{i = 0}^{n-1} f(X_{t_i}) (\int_{t_i}^{t_{i+1}} a_s dW_s)^2 - \frac{1}{n} \sum_{i = 0}^{n-1} f (X_{t_i}) a^2_{t_i} = \frac{Z_n}{\sqrt{n}} + \mathcal{E}_n,
\label{eq:primo punto teo 2 3}
\end{equation}
and
$$\tilde{Q}_n^J = \hat{Q}_n + \frac{1}{\Delta_n^{\beta(2 - \alpha)}} \mathcal{E}_n,$$
where $\mathcal{E}_n$ is always $o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)})$ and, if $\beta > \frac{1}{4 - \alpha}$, then it is also $o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})})$. We can rewrite the last equation above as
\begin{equation}
\tilde{Q}^J_n = \hat{Q}_n + o_\mathbb{P}(1)
\label{eq:secondo punto teo 2 3}
\end{equation}
and, for $\beta > \frac{1}{4 - \alpha}$,
\begin{equation}
\tilde{Q}^J_n = \hat{Q}_n + \frac{1}{\Delta_n^{\beta(2 - \alpha)}} o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})}).
\label{eq:nuovo punto teo 2 3}
\end{equation}
Indeed, using them and \eqref{eq: estensione Qn} it follows \eqref{eq:tesi teo 2 e 3}. Hence we are now left to prove \eqref{eq:primo punto teo 2 3} - \eqref{eq:nuovo punto teo 2 3}.\\ \\
\textit{Proof of \eqref{eq:primo punto teo 2 3}}.\\
We can see the left hand side of \eqref{eq:primo punto teo 2 3} as
\begin{equation}
\sum_{i = 0}^{n-1} f(X_{t_i}) [(\int_{t_i}^{t_{i+1}} a_s dW_s)^2- \int_{t_i}^{t_{i+1}} a^2_s ds] + \sum_{i = 0}^{n-1} f(X_{t_i})\int_{t_i}^{t_{i+1}} [a^2_s - a^2_{t_i}] ds = : M_n^Q + B_n.
\label{eq: def Bn}
\end{equation}
We want to show that $B_n = \mathcal{E}_n$; that is, $B_n$ is both $o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)})$ and $o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon})})$. We write
\begin{equation}
a^2_s - a^2_{t_i} = 2a_{t_i}(a_s - a_{t_i}) + (a_s - a_{t_i})^2,
\label{eq: development a2}
\end{equation}
and, replacing \eqref{eq: development a2} in the definition of $B_n$, we obtain $B_n = B_1^n + B_2^n$.
We start by proving that $B_2^n = o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)})$. Indeed, from the second point of Lemma \ref{lemma: Moment inequalities}, it is
$$\mathbb{E}[|B_2^n|] \le c \sum_{i = 0}^{n - 1}\mathbb{E}[|f(X_{t_i})| \int_{t_i}^{t_{i + 1}} \mathbb{E}_i[|a_s - a_{t_i}|^{2}] ds] \le c \Delta_n^2 \sum_{i = 0}^{n - 1}\mathbb{E}[|f(X_{t_i})|].$$
It follows
\begin{equation}
\frac{\mathbb{E}[|B_2^n|]}{\Delta_n^{\beta(2 - \alpha)}} \le \Delta_n^{1 - \beta(2 - \alpha)}\frac{1}{n} \sum_{i = 0}^{n - 1}\mathbb{E}[|f(X_{t_i})|] \qquad \mbox{and} \quad \frac{\mathbb{E}[|B_2^n|]}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \le \Delta_n^{\frac{1}{2} + \tilde{\epsilon}}\frac{1}{n} \sum_{i = 0}^{n - 1}\mathbb{E}[|f(X_{t_i})|],
\label{eq: stima B2n stabile}
\end{equation}
that go to zero using the polynomial growth of $f$ and the boundedness of the moments. Note that the exponent on $\Delta_n$ is in both cases strictly positive. \\
Concerning $B_1^n$, we recall that from \eqref{eq: model vol} it follows
$$a_s - a_{t_i} = \int_{t_i}^s \tilde{b}_u du + \int_{t_i}^s \tilde{a}_u dW_u + \int_{t_i}^s \hat{a}_u d\hat{W}_u + \int_{t_i}^s \int_{\mathbb{R} \backslash \left \{0 \right \}} \tilde{\gamma}_u \, z \, \tilde{\mu}(du, dz) + \int_{t_i}^s \int_{\mathbb{R} \backslash \left \{0 \right \}} \hat{\gamma}_u \, z \, \tilde{\mu}_2(du, dz)$$
and so, replacing it in the definition of $B_1^n$, we can write $B_1^n = I_1^n + I_2^n + I_3^n + I_4^n + I_5^n$, one term for each integral above. \\
We start by considering $I_1^n$, for which we use the boundedness of $\tilde{b}$:
$$\mathbb{E}[|I_1^n|] \le 2 \sum_{i = 0}^{n - 1}\mathbb{E}[|f(X_{t_i})||a_{t_i}| \int_{t_i}^{t_{i + 1}}(\int_{t_i}^s c \, du) ds] \le c \, \Delta_n \frac{1}{n} \sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})||a_{t_i}|]. $$
It follows
\begin{equation}
\frac{\mathbb{E}[|I_1^n|] }{\Delta_n^{\beta(2 - \alpha)}} \le \Delta_n^{1 - \beta(2 - \alpha)} \frac{1}{n} \sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})||a_{t_i}|] \qquad \mbox{and }
\label{eq: conv I1n stabile}
\end{equation}
\begin{equation}
\frac{\mathbb{E}[|I_1^n|]}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}} \le \Delta_n^{\frac{1}{2} + \tilde{\epsilon}}\frac{1}{n} \sum_{i = 0}^{n - 1} \mathbb{E}[|f(X_{t_i})||a_{t_i}|],
\label{eq: stima I1n nuovo}
\end{equation}
that go to zero because of the polynomial growth of $f$, the boundedness of the moments and the fact that $1 - \beta(2 - \alpha) > 0$. \\
We now act on $I_2^n$ and $I_3^n$ in the same way. Considering $I_2^n$, we define $\zeta_{n,i} : = 2 f(X_{t_i}) a_{t_i} \int_{t_i}^{t_{i + 1}}(\int_{t_i}^s \tilde{a}_u dW_u) ds $. We want to use Lemma 9 in \cite{Genon Catalot} to get that
\begin{equation}
\frac{I_2^n}{\Delta_n^{\beta(2 - \alpha)}} \xrightarrow{\mathbb{P}} 0 \qquad \mbox{and } \quad \frac{I_2^n}{\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon} )}} \xrightarrow{\mathbb{P}} 0
\label{eq: conv I2n stabile}
\end{equation}
and so we have to show the following:
\begin{equation}
\frac{1}{\Delta_n^{\beta (2 - \alpha)}}\sum_{i = 0}^{n-1} \mathbb{E}_i[\zeta_{n,i}] \xrightarrow{\mathbb{P}} 0, \qquad \frac{1}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}}\sum_{i = 0}^{n-1} \mathbb{E}_i[\zeta_{n,i}] \xrightarrow{\mathbb{P}} 0;
\label{eq: cond 1 genon catalot stabile}
\end{equation}
\begin{equation}
\frac{1}{\Delta_n^{2\beta (2 - \alpha)}}\sum_{i = 0}^{n-1} \mathbb{E}_i[\zeta_{n,i}^2] \xrightarrow{\mathbb{P}} 0,
\label{eq: cond 2 genon catalot stabile}
\end{equation}
\begin{equation}
\frac{1}{\Delta_n^{2(\frac{1}{2} - \tilde{\epsilon})}}\sum_{i = 0}^{n-1} \mathbb{E}_i[\zeta^2_{n,i}] \xrightarrow{\mathbb{P}} 0.
\label{eq:cond 2 genon catalot stabile nuovo }
\end{equation}
By the definition of $\zeta_{n,i}$ we have $\mathbb{E}_i[\zeta_{n,i}] = 0$, and so \eqref{eq: cond 1 genon catalot stabile} clearly holds. The left hand side of \eqref{eq: cond 2 genon catalot stabile} is
\begin{equation}
\Delta_n^{ - 2\beta(2 - \alpha)} 4 \sum_{i = 0}^{n-1} f^2(X_{t_i})a_{t_i}^2 \mathbb{E}_i [(\int_{t_i}^{t_{i + 1}}(\int_{t_i}^s \tilde{a}_u dW_u) ds )^2].
\label{eq: I2n intermedio}
\end{equation}
Using Fubini's theorem and It\^o's isometry we have
\begin{equation}
\mathbb{E}_i [(\int_{t_i}^{t_{i + 1}}(\int_{t_i}^s \tilde{a}_u dW_u) ds )^2] = \mathbb{E}_i [(\int_{t_i}^{t_{i + 1}}(t_{i + 1} - s) \tilde{a}_s dW_s)^2] = \mathbb{E}_i [\int_{t_i}^{t_{i + 1}}(t_{i + 1} - s)^2 \tilde{a}_s^2 ds] \le R_i(\Delta_{n}^3).
\label{eq: utile carre I2n}
\end{equation}
Because of \eqref{eq: utile carre I2n}, we get that \eqref{eq: I2n intermedio} is upper bounded by
$$ \Delta_n^{2 - 2\beta(2 - \alpha)} \frac{1}{n} \sum_{i = 0}^{n-1}f^2(X_{t_i})a_{t_i}^2 R_i(1),$$
that converges to zero in norm $1$, and so \eqref{eq: cond 2 genon catalot stabile} follows, since $2 - 2\beta(2 - \alpha) > 0$ holds whenever $\beta < \frac{1}{2 - \alpha}$, which is always the case.
Acting in the same way we get that the left hand side of \eqref{eq:cond 2 genon catalot stabile nuovo } is upper bounded by
$$ \Delta_n^{1 + 2 \tilde{\epsilon} } \frac{1}{n} \sum_{i = 0}^{n-1}f^2(X_{t_i})a_{t_i}^2 R_i(1),$$
that goes to zero in norm $1$. The same clearly holds for $I_3^n$ in place of $I_2^n$.
In order to show also
\begin{equation}
\frac{I_4^n}{\Delta_n^{\beta(2 - \alpha)}} \xrightarrow{\mathbb{P}} 0 \qquad \mbox{and } \quad \frac{I_4^n}{\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon} )}} \xrightarrow{\mathbb{P}} 0,
\label{eq: convergence I3n stabile}
\end{equation}
we define $\tilde{\zeta}_{n,i} := 2 f(X_{t_i}) a_{t_i} \int_{t_i}^{t_{i + 1}}(\int_{t_i}^s \int_\mathbb{R} \tilde{\gamma}_u z \tilde{\mu}(du, dz)) ds $. We have again $\mathbb{E}_i[\tilde{\zeta}_{n,i}]= 0$ and so \eqref{eq: cond 1 genon catalot stabile} holds with $\tilde{\zeta}_{n,i}$ in place of $\zeta_{n,i}$.
We now proceed as in \eqref{eq: utile carre I2n}, using Fubini's theorem and It\^o's isometry. It follows
$$\mathbb{E}_i [(\int_{t_i}^{t_{i + 1}}(\int_{t_i}^s \int_\mathbb{R} \tilde{\gamma}_u z \tilde{\mu}(du,dz)) ds )^2] = \mathbb{E}_i [(\int_{t_i}^{t_{i + 1}}\int_\mathbb{R}(t_{i + 1} - s) \tilde{\gamma}_s z \tilde{\mu}(ds,dz))^2]=$$
\begin{equation}
= \mathbb{E}_i[\int_{t_i}^{t_{i + 1}}(t_{i + 1} - s)^2 \tilde{\gamma}_s^2 ds (\int_\mathbb{R} z^2 F(z)dz)] \le R_i(\Delta_{n}^3),
\label{eq: isometria salti}
\end{equation}
having used in the last inequality the definition of $\bar{\mu}(ds, dz)$, the fact that $\int_\mathbb{R} z^2 F(z)dz < \infty$ and the boundedness of $\tilde{\gamma}$. Replacing \eqref{eq: isometria salti} in the left hand side of \eqref{eq: cond 2 genon catalot stabile} and \eqref{eq:cond 2 genon catalot stabile nuovo }, with $\tilde{\zeta}_{n,i}$ in place of $\zeta_{n,i}$, we have
$$\frac{1}{\Delta_n^{2\beta(2 - \alpha)}}\sum_{i=0}^{n-1} \mathbb{E}_i[\tilde{\zeta}^2_{n,i}] \le c \Delta_n^{ - 2\beta(2 - \alpha)} \sum_{i = 0}^{n-1}f^2(X_{t_i}) a^2_{t_i} R_i(\Delta^3_{n}) \le \Delta_n^{2 - 2\beta(2 - \alpha)} \frac{1}{n} \sum_{i = 0}^{n-1} f^2(X_{t_i}) a^2_{t_i} R_i(1)$$
$$\mbox{and } \frac{1}{\Delta_n^{1 - 2 \tilde{\epsilon}}}\sum_{i=0}^{n-1} \mathbb{E}_i[\tilde{\zeta}^2_{n,i}] \le \Delta_n^{1 + 2 \tilde{\epsilon}} \frac{1}{n} \sum_{i = 0}^{n-1}f^2(X_{t_i}) a^2_{t_i} R_i(1).$$
Again, they converge to zero in norm $1$ and thus in probability, since $2 - 2\beta(2 - \alpha) > 0$ always holds. Therefore, we get \eqref{eq: convergence I3n stabile}. Clearly, \eqref{eq: convergence I3n stabile} also holds with $I_5^n$ in place of $I_4^n$: the reasoning above, combined with the sixth point of A4 on $F_2$, proves it. \\
From \eqref{eq: stima B2n stabile}, \eqref{eq: conv I1n stabile}, \eqref{eq: stima I1n nuovo}, \eqref{eq: conv I2n stabile} and \eqref{eq: convergence I3n stabile} it follows that
\begin{equation}
B_n = \mathcal{E}_n.
\label{eq: Bn trascurabile in punto 1 teo 2 3}
\end{equation}
Concerning $M_n^Q := \sum_{i= 0}^{n-1} \hat{\zeta}_{n,i}$, Genon-Catalot and Jacod proved in \cite{Genon Catalot} that, in the continuous framework, the following conditions are enough to get $\sqrt{n}M_n^Q \rightarrow N(0, 2 \int_0^T f^2(X_s) a^4_s ds)$ stably with respect to $X$:
\begin{itemize}
\item $\mathbb{E}_i[\hat{\zeta}_{n,i}] = 0$;
\item $\sum_{i = 0}^{n-1}\mathbb{E}_i[\hat{\zeta}^2_{n,i}] \xrightarrow{\mathbb{P}} 2 \int_0^T f^2(X_s) a^4_s ds$;
\item $\sum_{i = 0}^{n-1}\mathbb{E}_i[\hat{\zeta}^4_{n,i}] \xrightarrow{\mathbb{P}} 0$;
\item $\sum_{i = 0}^{n-1} \mathbb{E}_i[\hat{\zeta}_{n,i}(W_{t_{i + 1}} - W_{t_i})] \xrightarrow{\mathbb{P}} 0$;
\item $\sum_{i = 0}^{n-1} \mathbb{E}_i[\hat{\zeta}_{n,i}(\hat{W}_{t_{i + 1}} - \hat{W}_{t_i})] \xrightarrow{\mathbb{P}} 0$.
\end{itemize}
Theorem 2.2.15 in \cite{13 in Maillavin} adapts the previous theorem to our framework, in which jumps are present. \\
We observe that the conditions above are satisfied, hence
\begin{equation}
M_n^Q = \frac{Z_n}{\sqrt{n}}, \mbox{ where } Z_n \xrightarrow{n} N(0,2 \int_0^T f^2(X_s) a^4_s ds),
\label{eq: conv stabile}
\end{equation}
stably with respect to $X$.
From \eqref{eq: Bn trascurabile in punto 1 teo 2 3} and \eqref{eq: conv stabile}, it follows \eqref{eq:primo punto teo 2 3}. \\ \\
\textit{Proof of \eqref{eq:secondo punto teo 2 3}.} \\
We use Proposition \ref{prop: espansione salti}, replacing \eqref{eq: espansione salti} in the definition \eqref{eq: definition tilde Qn} of $\tilde{Q}^J_n$. Recalling that convergence in norm $1$ implies convergence in probability, it is clear that we have to prove the result on
$$n^{\beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i})(\Delta \tilde{X}_i^J)^2\varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) =$$
\begin{equation}
= n^{\beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) \gamma^2(X_{t_i}) \Delta_{n}^\frac{2}{\alpha} (\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}})^2\varphi_{\Delta_{n}^\beta}(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}}\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}),
\label{eq: ref tesi salti}
\end{equation}
where we have also rescaled the process in order to apply Proposition \ref{prop: estimation stable}. We now define
\begin{equation}
g_{i,n}(y):= y^2\varphi_{\Delta_{n}^\beta}(y \gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}),
\label{eq: definition g}
\end{equation}
hence we can rewrite \eqref{eq: ref tesi salti} as
$$(\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) \gamma^2(X_{t_i}) [g_{i,n}(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}}) - \mathbb{E}[g_{i,n}(S_1^\alpha)]] +$$
\begin{equation}
+ (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) \gamma^2(X_{t_i}) \mathbb{E}[g_{i,n}(S_1^\alpha)] =: \sum_{i = 0}^{n-1}A_{1, i}^n + \hat{Q}_n,
\label{eq: def B1 B2}
\end{equation}
where $S_1^\alpha$ is the $\alpha$-stable process at time $t=1$. We want to show that $\sum_{i = 0}^{n-1}A_{1, i}^n $ converges to zero in probability. With this purpose in mind, we take the conditional expectation of $A_{1, i}^n $ and we apply Proposition \ref{prop: estimation stable} on the interval $[t_i, t_{i+1}]$ instead of $[0, h]$, observing that property \eqref{eq: conditon on h} holds for $g_{i,n}$ with $p=2$.
By the definition \eqref{eq: definition g} of $g_{i,n}$, we have $\left \| g_{i,n} \right \|_\infty = R_i(\Delta_{n}^{2(\beta - \frac{1}{\alpha})}) $ and $\left \| g_{i,n} \right \|_{pol} = R_i(1)$. Replacing them in \eqref{eq: tesi prop stable} we have that
$$|\mathbb{E}_i[g_{i,n}(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}})] - \mathbb{E}[g_{i,n}(S_1^\alpha)]| \le c_{\epsilon, \alpha} \Delta_{n}|\log(\Delta_{n})|R_i(\Delta_{n}^{2(\beta - \frac{1}{\alpha})}) + $$
$$ + c_{\epsilon, \alpha} \Delta_{n}^\frac{1}{\alpha}|\log(\Delta_{n})|R_i(\Delta_{n}^{2(\beta - \frac{1}{\alpha})(1 - \frac{\alpha}{2} - \epsilon)}) + c_{\epsilon, \alpha} \Delta_{n}^\frac{1}{\alpha}|\log(\Delta_{n})|R_i(\Delta_{n}^{2(\beta - \frac{1}{\alpha})(\frac{3}{2} - \frac{\alpha}{2} - \epsilon)})1_{\alpha > 1}. $$
To get $\sum_{i = 0}^{n-1}A_{1, i}^n = o_\mathbb{P}(1)$, we want to use Lemma 9 of \cite{Genon Catalot}. We have
$$\sum_{i = 0}^{n-1}|\mathbb{E}_i[A_{1, i}^n ] |\le (\frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)} \sum_{i = 0}^{n-1} |f(X_{t_i})| |\gamma^2(X_{t_i})||\log(\Delta_{n})|[\Delta_{n}^{1 + 2 (\beta - \frac{1}{\alpha})} + \Delta_{n}^{\frac{1}{\alpha} + (2 - \alpha - \epsilon)(\beta - \frac{1}{\alpha})}+ $$
\begin{equation}
+ \Delta_{n}^{\frac{1}{\alpha} + (3 - \alpha - \epsilon)(\beta - \frac{1}{\alpha})}1_{\alpha > 1}]R_i(1) \le (\Delta_n^{\alpha \beta} + \Delta_n^{\frac{1}{\alpha} - \epsilon} + \Delta_n^{\beta - \epsilon}1_{\alpha > 1}) \frac{| \log(\Delta_n)|}{n}\sum_{i = 0}^{n-1} |f(X_{t_i})| |\gamma^2(X_{t_i})|R_i(1),
\label{eq: stima B1}
\end{equation}
where we have used property \eqref{propriety power R}. Using the polynomial growth of $f$, the boundedness of the moments and the fifth point of Assumption 4 in order to bound $\gamma$, \eqref{eq: stima B1} converges to $0$ in norm $1$ and so in probability since $\Delta_n^{\alpha \beta} \log(\Delta_n) \rightarrow 0$ for $n\rightarrow \infty$ and we can always find an $\epsilon > 0$ such that $\Delta_n^{\frac{1}{\alpha} - \epsilon}$ does the same. \\
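For completeness, here is a spelled-out check (illustrative only) that each exponent appearing in \eqref{eq: stima B1} can be made strictly positive, and that the logarithmic factor is harmless:

```latex
% Each exponent in the bound can be made strictly positive:
\alpha\beta > 0, \qquad
\frac{1}{\alpha} - \epsilon > 0 \ \text{ for any } \epsilon < \frac{1}{\alpha}, \qquad
\beta - \epsilon > 0 \ \text{ for any } \epsilon < \beta;
% and, for every fixed x > 0, the logarithm is dominated:
\Delta_n^{x} \, |\log(\Delta_n)| \longrightarrow 0 \quad \text{as } n \rightarrow \infty .
```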
To use Lemma 9 of \cite{Genon Catalot} we also have to show that
\begin{equation}
(\frac{1}{n})^{\frac{4}{\alpha} - 2 \beta(2 - \alpha)} \sum_{i = 0}^{n - 1} f^2(X_{t_i}) \gamma^4(X_{t_i}) \mathbb{E}_i[(g_{i,n}(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}}) - \mathbb{E}[g_{i,n}(S_1^\alpha)])^2] \xrightarrow{\mathbb{P}} 0.
\label{eq: conv Ai alla seconda}
\end{equation}
We observe that $\mathbb{E}_i[(g_{i,n}(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}}) - \mathbb{E}[g_{i,n}(S_1^\alpha)])^2] \le c \mathbb{E}_i[g_{i,n}^2(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}})] + c \mathbb{E}_i[\mathbb{E}[g_{i,n}(S_1^\alpha)]^2] $. Now, using equation \eqref{eq: estensione tilde salti lemma 10} of Lemma \ref{lemma: estensione 10 capitolo 1}, we observe that
\begin{equation}
\mathbb{E}_i[g_{i,n}^2(\frac{\Delta \tilde{X}_i^J}{\gamma(X_{t_i})\Delta_{n}^\frac{1}{\alpha}})] = \frac{\Delta_{n}^{- \frac{4}{\alpha}}}{\gamma^4(X_{t_i})} \mathbb{E}_i[(\Delta \tilde{X}_i^J)^4 \varphi^2_{\Delta_n^\beta}(\Delta \tilde{X}_i^J)] = \frac{\Delta_{n}^{- \frac{4}{\alpha}}}{\gamma^4(X_{t_i})}R_i(\Delta_{n}^{1 + \beta(4 - \alpha)}),
\label{eq: estim g2x}
\end{equation}
where $\varphi$ acts as the indicator function. Moreover we observe that
\begin{equation}
\mathbb{E}[g_{i,n}(S_1^\alpha)] = \int_\mathbb{R} z^2 \varphi(\Delta_{n}^{\frac{1}{\alpha} - \beta}\gamma(X_{t_i})z) f_\alpha(z) dz = d(\gamma(X_{t_i})\Delta_{n}^{\frac{1}{\alpha} - \beta}),
\label{eq: val atteso g}
\end{equation}
with $f_\alpha(z)$ the density of the stable process.
We now introduce the following lemma, which will be proved in the Appendix:
\begin{lemma}
Suppose that Assumptions 1-4 hold. Then, for each $\zeta_n$ such that $\zeta_n \rightarrow 0$ and for each $\hat{\epsilon} > 0$,
\begin{equation}
d(\zeta_n) = |\zeta_n|^{\alpha - 2} c_\alpha \int_\mathbb{R} |u|^{1 - \alpha} \varphi(u) du + o(|\zeta_n|^{- \hat{\epsilon}} + |\zeta_n|^{2 \alpha - 2 - \hat{\epsilon}}),
\label{eq: dl d}
\end{equation}
where $c_\alpha$ has been defined in \eqref{eq: def calpha}.
\label{lemma: dl d}
\end{lemma}
Since $\frac{1}{\alpha} - \beta > 0$, $\gamma(X_{t_i}) \Delta_{n}^{\frac{1}{\alpha} - \beta}$ goes to zero for $n \rightarrow \infty$ and so we can take $\zeta_n$ as $\gamma(X_{t_i}) \Delta_{n}^{\frac{1}{\alpha} - \beta}$, getting that
\begin{equation}
\mathbb{E}[g_{i,n}(S_1^\alpha)] = d(\gamma(X_{t_i})\Delta_{n}^{\frac{1}{\alpha} - \beta}) = R_i(\Delta_{n}^{(\frac{1}{\alpha} - \beta)(\alpha - 2)}).
\label{eq: replaced dl density}
\end{equation}
Replacing \eqref{eq: estim g2x} and \eqref{eq: replaced dl density} in the left hand side of \eqref{eq: conv Ai alla seconda} we get it is upper bounded by
$$\sum_{i=0}^{n-1} \mathbb{E}_i[(A_{1,i}^n)^2] = (\frac{1}{n})^{\frac{4}{\alpha} - 2 \beta(2 - \alpha)} \sum_{i=0}^{n-1} f^2(X_{t_i})\gamma^4(X_{t_i})(R_i(\Delta_{n}^{1 + \beta(4 - \alpha)}) + R_i(\Delta_{n}^{4 \beta - \frac{4}{\alpha} + 2 - 2 \alpha \beta})) \le$$
\begin{equation}
\le \Delta_n^{\alpha \beta \land 1 } \frac{1}{n} \sum_{i=0}^{n-1} f^2(X_{t_i})\gamma^4(X_{t_i})R_i(1),
\label{eq: quadrato A1}
\end{equation}
that converges to zero in norm $1$ and so in probability, as a consequence of the polynomial growth of $f$ and the fact that the exponent on $\Delta_n$ is always positive. From \eqref{eq: stima B1} and \eqref{eq: quadrato A1} it follows
\begin{equation}
\sum_{i=0}^{n - 1} A_{1,i}^n = o_\mathbb{P}(1),
\label{eq: estim A1n}
\end{equation}
and so \eqref{eq:secondo punto teo 2 3}. \\ \\
\textit{Proof of \eqref{eq:nuovo punto teo 2 3}.} \\
We use Proposition \ref{prop: espansione salti}, replacing \eqref{eq: aggiunta prop1 salti} in the definition \eqref{eq: definition tilde Qn} of $\tilde{Q}^J_n$. Our goal is to prove that
$$n^{\beta(2 - \alpha)} \sum_{i = 0}^{n-1} f(X_{t_i}) (\Delta \tilde{X}_i^J)^2\varphi_{\Delta_{n}^\beta}(\Delta \tilde{X}_i^J) = \hat{Q}_n + o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon}) \land (1 - 2 \beta - \tilde{\epsilon})}).$$
On the left hand side of the equation above we can proceed as in \eqref{eq: ref tesi salti} - \eqref{eq: def B1 B2}.
To get \eqref{eq:nuovo punto teo 2 3} we are therefore left to show that, if $\beta > \frac{1}{4 - \alpha}$, then $\sum_{i = 0}^{n - 1} A_{1,i}^n$ is also $o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon}) \land (1 - 2 \beta - \tilde{\epsilon})})$. To prove it, we want to use Lemma 9 of \cite{Genon Catalot}, hence we want to prove the following:
\begin{equation}
\frac{1}{\Delta_n^{\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon}}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[A_{1,i}^n] \xrightarrow{\mathbb{P}} 0 \quad \mbox{and }
\label{eq: conv A1n nuovo}
\end{equation}
\begin{equation}
\frac{1}{\Delta_n^{2(\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon})}} \sum_{i = 0}^{n - 1} \mathbb{E}_i[(A_{1,i}^n)^2] \xrightarrow{\mathbb{P}} 0.
\label{eq: conv A1n carre nuovo}
\end{equation}
Using \eqref{eq: stima B1} we have that, if $\alpha > 1$, then the left hand side of \eqref{eq: conv A1n nuovo} is bounded in absolute value by
$$\frac{\Delta_n^{\beta - \epsilon} |\log(\Delta_n)|}{\Delta_n^{\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon}}} \frac{1}{n} \sum_{i = 0}^{n - 1}|f(X_{t_i})| |\gamma^2(X_{t_i})|R_i(1) = \Delta_n^{3 \beta - \alpha \beta - \frac{1}{2} + \tilde{\epsilon} - \epsilon }|\log(\Delta_n)| \frac{1}{n} \sum_{i = 0}^{n - 1}|f(X_{t_i})| |\gamma^2(X_{t_i})|R_i(1), $$
that goes to zero since we have chosen $\beta > \frac{1}{4 - \alpha} > \frac{1}{2(3 - \alpha)}$. Otherwise, if $\alpha \le 1$, then \eqref{eq: stima B1} gives that the left hand side of \eqref{eq: conv A1n nuovo} is bounded in absolute value by
$$\frac{\Delta_n^{ \alpha \beta} |\log(\Delta_n)|}{\Delta_n^{\frac{1}{2} - 2 \beta + \alpha \beta - \tilde{\epsilon}}} \frac{1}{n} \sum_{i = 0}^{n - 1}|f(X_{t_i})| |\gamma^2(X_{t_i})|R_i(1) = \Delta_n^{2 \beta - \frac{1}{2} + \tilde{\epsilon} }|\log(\Delta_n)| \frac{1}{n} \sum_{i = 0}^{n - 1}|f(X_{t_i})| |\gamma^2(X_{t_i})|R_i(1),$$
that goes to zero because $\beta > \frac{1}{4 - \alpha} > \frac{1}{4}$. \\
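The two comparisons of $\frac{1}{4 - \alpha}$ with the thresholds used above can be verified directly; the following computation is only an illustration:

```latex
% Case alpha > 1: the exponent 3*beta - alpha*beta - 1/2 is positive iff
3\beta - \alpha\beta - \frac{1}{2} > 0 \iff \beta > \frac{1}{2(3-\alpha)},
\quad\text{and}\quad
\frac{1}{4-\alpha} > \frac{1}{2(3-\alpha)} \iff 2(3-\alpha) > 4-\alpha \iff \alpha < 2 .
% Case alpha <= 1: the exponent 2*beta - 1/2 is positive iff
2\beta - \frac{1}{2} > 0 \iff \beta > \frac{1}{4},
\quad\text{and}\quad
\frac{1}{4-\alpha} > \frac{1}{4} \iff \alpha > 0 .
```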
Using also \eqref{eq: quadrato A1}, the left hand side of \eqref{eq: conv A1n carre nuovo} turns out to be upper bounded by \\
$\Delta_n^{-1 + 4 \beta - 2 \alpha \beta + 2 \tilde{\epsilon}} \Delta_n^{\alpha \beta \land 1} \frac{1}{n} \sum_{i=0}^{n-1} f^2(X_{t_i})\gamma^4(X_{t_i})R_i(1),$
that goes to zero in norm 1 and so in probability since we have chosen $\beta > \frac{1}{4 - \alpha}$.
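To see why the choice $\beta > \frac{1}{4 - \alpha}$ makes the exponent in the last bound strictly positive, one can expand it as follows (an illustrative computation, distinguishing the two cases of the minimum):

```latex
-1 + 4\beta - 2\alpha\beta + 2\tilde{\epsilon} + (\alpha\beta \wedge 1) =
\begin{cases}
\beta(4-\alpha) - 1 + 2\tilde{\epsilon} > 0, & \text{if } \alpha\beta \le 1,
  \ \text{ since } \beta(4-\alpha) > 1,\\
2\beta(2-\alpha) + 2\tilde{\epsilon} > 0, & \text{if } \alpha\beta > 1 .
\end{cases}
```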
It follows \eqref{eq: conv A1n carre nuovo} and so \eqref{eq:tesi teo 2 e 3}. \\
\end{proof}
\subsection{Proof of Proposition \ref{prop: conv hat Qn}}
\begin{proof}
To prove the proposition we replace \eqref{eq: dl d} in the definition of $\hat{Q}_n$. It follows that our goal is to show that
$$I_1^n + I_2^n :=( \frac{1}{n})^{\frac{2}{\alpha} - \beta(2 - \alpha)}\sum_{i = 0}^{n - 1} f(X_{t_i}) \gamma^2(X_{t_i})(o(|\Delta_{n}^{\frac{1}{\alpha} - \beta} \gamma(X_{t_i})|^{- \hat{\epsilon}} + |\Delta_{n}^{\frac{1}{\alpha} - \beta} \gamma(X_{t_i})|^{2 \alpha - 2- \hat{\epsilon}} )) = \tilde{\mathcal{E}}_n,$$
where $\tilde{\mathcal{E}}_n$ is always $o_\mathbb{P}(1)$ and, if $\alpha < \frac{4}{3}$, it is also $\frac{1}{\Delta_n^{\beta(2 - \alpha)}} o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon} )})$. \\
We have that $I_1^n = o_\mathbb{P}(1)$ since it is upper bounded by
$$\Delta_n^{\frac{2}{\alpha} - 1 - 2 \beta + \alpha \beta - \hat{\epsilon}(\frac{1}{\alpha} - \beta) } \frac{1}{n}\sum_{i = 0}^{n - 1} R_i(1)\,o(1),$$
that goes to zero in norm $1$ and so in probability since we can always find an $\hat{\epsilon} > 0$ such that the exponent on $\Delta_n$ is positive. \\
Also $I_2^n$ is $o_\mathbb{P}(1)$. Indeed it is upper bounded by
\begin{equation}
\Delta_n^{\frac{2}{\alpha} - 1 - 2 \beta + \alpha \beta -2(\frac{1}{\alpha} - \beta) +2 (1 - \alpha \beta) - \hat{\epsilon}(\frac{1}{\alpha} - \beta) } \frac{1}{n}\sum_{i = 0}^{n - 1} R_i(1)\,o(1).
\label{eq: remplace density per hat Q}
\end{equation}
We observe that the exponent on $\Delta_n$ is $1 - \alpha \beta - \hat{\epsilon}(\frac{1}{\alpha} - \beta) $ and we can always find $\hat{\epsilon}$ such that it is more than zero, hence \eqref{eq: remplace density per hat Q} converges in norm $1$ and so in probability. \\
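The simplification of the exponent in \eqref{eq: remplace density per hat Q} can be made explicit (an illustrative computation):

```latex
\frac{2}{\alpha} - 1 - 2\beta + \alpha\beta
 - 2\Big(\frac{1}{\alpha} - \beta\Big) + 2(1 - \alpha\beta)
 - \hat{\epsilon}\Big(\frac{1}{\alpha} - \beta\Big)
 = 1 - \alpha\beta - \hat{\epsilon}\Big(\frac{1}{\alpha} - \beta\Big),
```

which is strictly positive for $\hat{\epsilon}$ small enough, since $\alpha\beta < 1$ (recall that $\frac{1}{\alpha} - \beta > 0$).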
In order to show that $I_1^n = \frac{1}{\Delta_n^{\beta(2 - \alpha)}} o_\mathbb{P}(\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}) = o_\mathbb{P}(\Delta_n^{\frac{1}{2} - \tilde{\epsilon} - \beta(2 - \alpha)})$ we observe that
$$\frac{I_1^n}{\Delta_n^{\frac{1}{2} - \tilde{\epsilon} - \beta(2 - \alpha)}} \le \Delta_n^{\frac{2}{\alpha } - 1 - \frac{1}{2} + \tilde{\epsilon} -\hat{\epsilon}(\frac{1}{\alpha} - \beta)}\frac{1}{n}\sum_{i = 0}^{n - 1} R_i(1)\,o(1). $$
If $\alpha < \frac{4}{3}$ we can always find $\tilde{\epsilon}$ and $\hat{\epsilon}$ such that the exponent on $\Delta_n$ is strictly positive, obtaining the desired convergence. It follows $I_1^n = \frac{1}{\Delta_n^{\beta(2 - \alpha)}} o_\mathbb{P}(\Delta_n^{(\frac{1}{2} - \tilde{\epsilon}) \land (1 - \alpha \beta - \tilde{\epsilon} )})$. \\
To conclude, $I_2^n = \frac{1}{\Delta_n^{\beta(2 - \alpha)}} o_\mathbb{P}(\Delta_n^{1 - \alpha \beta - \tilde{\epsilon}}) = o_\mathbb{P}(\Delta_n^{1 - 2 \beta - \tilde{\epsilon} })$. Indeed,
\begin{equation}
\frac{I_2^n}{\Delta_n^{1 - 2 \beta - \tilde{\epsilon} }} \le \Delta_n^{\frac{2}{\alpha} - 1 - 1 + \alpha \beta + \tilde{\epsilon} - 2(\frac{1}{\alpha} - \beta) + 2 (1 - \alpha \beta) - \hat{\epsilon}(\frac{1}{\alpha} - \beta)} \frac{1}{n}\sum_{i = 0}^{n - 1} R_i(1)\,o(1).
\label{eq: estimation con dl f}
\end{equation}
The exponent on $\Delta_n$ is $2 \beta - \alpha \beta + \tilde{\epsilon} - \hat{\epsilon}(\frac{1}{\alpha} - \beta)$, and so we can always find $\tilde{\epsilon}$ and $\hat{\epsilon}$ such that it is positive. The convergence in norm $1$, and hence in probability, of \eqref{eq: estimation con dl f} follows. The proposition is therefore proved.
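The exponent computation behind the last estimate, spelled out for the reader's convenience (illustrative only):

```latex
\frac{2}{\alpha} - 2 + \alpha\beta + \tilde{\epsilon}
 - 2\Big(\frac{1}{\alpha} - \beta\Big) + 2(1 - \alpha\beta)
 - \hat{\epsilon}\Big(\frac{1}{\alpha} - \beta\Big)
 = \beta(2-\alpha) + \tilde{\epsilon} - \hat{\epsilon}\Big(\frac{1}{\alpha} - \beta\Big),
```

which is positive for $\hat{\epsilon}$ small, since $2\beta - \alpha\beta = \beta(2-\alpha) > 0$.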
\end{proof}
\subsection{Proof of Corollary \ref{cor: cond rimozione rumore}}
\begin{proof}
We observe that \eqref{eq: eq per le appli} is a consequence of \eqref{eq: limite hat Qn con densita} in the case where $\hat{Q}_n = 0$. Moreover, $\beta < \frac{1}{2 \alpha}$ implies that $\Delta_n^{1 - \alpha \beta - \tilde{\epsilon}}$ is negligible compared to $\Delta_n^{\frac{1}{2} - \tilde{\epsilon}}$. It follows \eqref{eq: eq per le appli}.
\end{proof}
\subsection{Proof of Theorem \ref{th: reformulation th T fixed}.}
\begin{proof}
The convergence \eqref{eq: convergenza hat Q tempo corto} clearly follows from \eqref{eq: limite hat Qn con densita}. \\
Concerning the proof of \eqref{eq: tesi finale tempo corto}, we can see its left hand side as $$Q_n - \frac{1}{n} \sum_{i= 0}^{n - 1} f(X_{t_i}) a^2_{t_i} + \frac{1}{n} \sum_{i= 0}^{n - 1} f(X_{t_i}) a^2_{t_i} - IV_1 $$
and so, using \eqref{eq:tesi teo 2 e 3} and the definition of $IV_1$, it turns out that our goal is to show that
\begin{equation}
\frac{1}{n} \sum_{i= 0}^{n - 1} f(X_{t_i}) a^2_{t_i} - \int_0^1 f(X_s) \, a^2_s ds = o_\mathbb{P}(\Delta_n^{\beta(2 - \alpha)}).
\label{eq: tesi th tempo corto}
\end{equation}
The left hand side of \eqref{eq: tesi th tempo corto} is
$$\sum_{i= 0}^{n - 1} f(X_{t_i}) \int_{t_i}^{t_{i + 1}}(a^2_{t_i} - a^2_s ) ds + \sum_{i= 0}^{n - 1} \int_{t_i}^{t_{i + 1}} a^2_s (f (X_{t_i}) - f (X_s)) ds = : B_n + R_n. $$
$B_n$ in the equation above is exactly the same term we have already dealt with in the proof of Theorem \ref{th: 2 e 3 insieme} (see \eqref{eq: def Bn}). As shown in \eqref{eq: Bn trascurabile in punto 1 teo 2 3}, it is $\mathcal{E}_n$ and so, in particular, it is also $o_\mathbb{P}(\Delta_n^{\beta (2 - \alpha)})$. \\
On $R_n$ we act like we did on $B_n$, considering this time the second-order expansion of the function $f$, getting
\begin{equation}
f(X_s) = f(X_{t_i}) + f'(X_{t_i}) (X_s - X_{t_i}) + \frac{f''(\tilde{X}_{t_i})}{2} (X_s - X_{t_i})^2,
\label{eq: development f}
\end{equation}
where $\tilde{X}_{t_i} \in [X_{t_i}, X_s]$. Replacing \eqref{eq: development f} in $R_n$ we get two terms, which we denote by $R_n^1$ and $R_n^2$. On them we can act as we did on \eqref{eq: development a2}. The estimates gathered in Lemma \ref{lemma: Moment inequalities} on the increments of $X$ and of $a$ have the same size (see points 2 and 4) and yield the same upper bound for $B_2^n$ and $R_n^2$:
$$\mathbb{E}[|R_n^2|] \le c \sum_{i = 0}^{n - 1}\mathbb{E}[|f''(X_{t_i})| \int_{t_i}^{t_{i + 1}} \mathbb{E}_i[|a_s||X_s - X_{t_i}|^{2}] ds] \le c \Delta_n^2 \sum_{i = 0}^{n - 1}\mathbb{E}[|f''(X_{t_i})|R_i(1)],$$
where we have used the Cauchy-Schwarz inequality and the fourth point of Lemma \ref{lemma: Moment inequalities}.
It yields $R_n^2 = o_\mathbb{P}(\Delta_n^{\beta (2 - \alpha)})$, which is the same result found in the first inequality of \eqref{eq: stima B2n stabile} for the increments of $a$. \\
To deal with $R_n^1$ we replace the dynamics of $X$ (as done with the dynamics of $a$ for $B_1^n$). Even if the volatility coefficient in the dynamics of $X$ is no longer bounded, the condition $\sup_{s \in [t_i, t_{i + 1}]} \mathbb{E}_i[|a_s|] < \infty$ (which holds by Lemma \ref{lemma: Moment inequalities}) is enough to ensure that \eqref{eq: utile carre I2n} keeps holding. \\
Following the method used in the proof of Theorem \ref{th: 2 e 3 insieme} to show that $B_1^n = \mathcal{E}_n$, we obtain $R_n^1= \mathcal{E}_n$ and therefore $R_n^1 =o_\mathbb{P}(\Delta_n^{\beta (2 - \alpha)})$.
It yields \eqref{eq: tesi th tempo corto} and so the theorem is proved.
\end{proof}
\section{Proof of Proposition \ref{prop: estimation stable}.}\label{S:Proof_propositions}
This section is dedicated to the proof of Proposition \ref{prop: estimation stable}.
To do so, it is convenient to introduce an adequate truncation function and to consider a rescaled process, as explained in the next subsections. Moreover, the proof of Proposition \ref{prop: estimation stable} requires some Malliavin calculus; in what follows we recall all the technical tools needed to ease the understanding of the paper.
\subsection{Localization and rescaling}\label{subsection: construction per maillavin}
We introduce a truncation function in order to suppress the big jumps of $(L_t)$. Let $\tau : \mathbb{R} \rightarrow [0, 1]$ be a symmetric function, continuous with continuous derivative, such that $\tau = 1$ on $\left \{ |z| \le \frac{1}{4} \eta \right \}$ and $\tau = 0$ on $\left \{ |z| \ge \frac{1}{2} \eta \right \}$, with $\eta$ defined in the fourth point of Assumption 4. \\
On the same probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ we consider the L\'evy process $(L_t)$ defined below \eqref{eq: model}, whose L\'evy measure is $F(dz) = \frac{\bar{g}(z)}{|z|^{1 + \alpha}} 1_{\mathbb{R} \setminus \left \{ 0 \right \}} (z) dz$, according to the third point of A4, and the truncated L\'evy process $(L^\tau_t)$ with measure $F^\tau(dz)$ given by $F^\tau(dz) = \frac{\bar{g}(z) \tau(z)}{|z|^{1 + \alpha}} 1_{\mathbb{R} \setminus \left \{ 0 \right \}} (z) dz$. This can be done by setting $L_t := \int_0^t \int_\mathbb{R} z \tilde{\mu}(ds, dz)$, as we have already done, and $L_t^\tau : = \int_0^t \int_\mathbb{R} z \tilde{\mu}^\tau(ds, dz)$, where $\tilde{\mu}$ and $\tilde{\mu}^\tau$ are the compensated Poisson random measures associated respectively to
$$\mu(A): = \int_{[0,1]} \int_\mathbb{R} \int_{[0,T]} 1_A(t,z) \mu^{\bar{g}}(dt, dz,du), \qquad A \subset [0, T] \times \mathbb{R},$$
$$\mu^\tau(A): = \int_{[0,1]} \int_\mathbb{R} \int_{[0,T]} 1_A(t,z) 1_{u \le \tau(z)} \mu^{\bar{g}}(dt, dz,du), \qquad A \subset [0, T] \times \mathbb{R},$$
for $\mu^{\bar{g}}$ a Poisson random measure on $[0,T] \times \mathbb{R} \times [0,1]$ with compensator $\bar{\mu}^{\bar{g}}(dt, dz, du) = dt \frac{\bar{g}(z)}{|z|^{1 + \alpha}} 1_{\mathbb{R} \setminus \left \{ 0 \right \}} (z) dz du$. \\
By construction, the restrictions of the measures $\mu$ and $\mu^\tau$ to $[0, h] \times \mathbb{R}$ coincide on the set \\
$\left \{ (u, z) \mbox{ such that } u \le \tau(z) \right \}$, and thus coincide on the event
$$\Omega_h := \left \{ \omega \in \Omega; \mu^{\bar{g}}([0, h] \times \left \{ z \in \mathbb{R}: |z| \ge \frac{\eta}{4} \right \} \times [0,1]) = 0 \right \}.$$
Since $\mu^{\bar{g}}([0, h] \times \left \{ z \in \mathbb{R}: |z| \ge \frac{\eta}{4} \right \} \times [0,1])$ has a Poisson distribution with parameter
$$\lambda_h : = \int_0^{h} \int_{|z| \ge \frac{\eta}{4}} \int_0^1 \frac{\bar{g}(z)}{|z|^{1 + \alpha}} du \, dz\, dt \le c h,$$
we deduce that
\begin{equation}
\mathbb{P}(\Omega_h^c) \le c \,h.
\label{eq: prob omega n}
\end{equation}
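Indeed, since $N_h := \mu^{\bar{g}}([0, h] \times \left \{ |z| \ge \frac{\eta}{4} \right \} \times [0,1])$ has a Poisson distribution with parameter $\lambda_h$, it suffices to use the elementary inequality $1 - e^{-x} \le x$:
$$\mathbb{P}(\Omega_h^c) = \mathbb{P}(N_h \ge 1) = 1 - e^{-\lambda_h} \le \lambda_h \le c \, h.$$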
Then we have
\begin{equation}
\mathbb{P}((L_t)_{t \le h} \neq (L_t^\tau)_{t \le h}) \le \mathbb{P}(\Omega_h^c) \le c \, h.
\label{eq: proba diff processi}
\end{equation}
To prove Proposition \ref{prop: estimation stable} we have to rescale the process $(L_t)_{t \in [0,1]}$, we therefore introduce an auxiliary L\'evy process $(L_t^h)_{t \in [0,1]}$ defined possibly on another filtered space $(\tilde{\Omega}, \tilde{\mathcal{F}}, (\tilde{\mathcal{F}_t}), \tilde{\mathbb{P}})$ and admitting the decomposition $L_t^h := \int_0^t \int_\mathbb{R} z \tilde{\mu}^h(dt, dz)$, with $t \in [0,1]$; where $\tilde{\mu}^h$ is a compensated Poisson random measure $\tilde{\mu}^h = \mu^h - \bar{\mu}^h$, with compensator
\begin{equation}
\bar{\mu}^h(dt, dz) = dt \frac{\bar{g}(z h^\frac{1}{\alpha})}{|z|^{1 + \alpha}} \tau(z h^{\frac{1}{\alpha}})1_{\mathbb{R} \setminus \left \{ 0 \right \}} (z) dz.
\label{eq: bar mu n}
\end{equation}
By construction, the process $(L_t^h)_{t \in [0,1]}$ is equal in law to the rescaled truncated process $(h^{-\frac{1}{\alpha}} L^\tau_{h t})_{t \in [0,1]}$, which coincides with $(h^{ -\frac{1}{\alpha}} L_{h t})_{t \in [0,1]}$ on $\Omega_h$.
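For the reader's convenience, the equality in law can be checked at the level of compensators: by the change of variable $z = h^{\frac{1}{\alpha}} w$, the jump compensator of $(h^{-\frac{1}{\alpha}} L^\tau_{h t})_{t \in [0,1]}$ is
$$dt \, h \, F^\tau(h^{\frac{1}{\alpha}} w) \, h^{\frac{1}{\alpha}} dw = dt \, \frac{h^{1 + \frac{1}{\alpha}}}{h^{\frac{1 + \alpha}{\alpha}}} \frac{\bar{g}(w h^{\frac{1}{\alpha}}) \tau(w h^{\frac{1}{\alpha}})}{|w|^{1 + \alpha}} dw = dt \, \frac{\bar{g}(w h^{\frac{1}{\alpha}}) \tau(w h^{\frac{1}{\alpha}})}{|w|^{1 + \alpha}} dw,$$
which coincides with $\bar{\mu}^h(dt, dw)$ in \eqref{eq: bar mu n}.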
\subsection{Malliavin calculus}\label{section: Maillavin theory}
In this section, we recall some results on Malliavin calculus for jump processes. We refer to \cite{Spieg Maillavin} for a complete presentation and to \cite{Maillavin} for the adaptation to our framework. We will work on the Poisson space associated to the measure $\mu^h$ defining the process $(L_t^h)_{t \in [0,1]}$ of the previous section, assuming that $h$ is fixed. By construction, the support of $\mu^h$ is contained in $[0,1] \times E_h$, where $E_h := \left \{ z \in \mathbb{R} : |z| < \frac{\eta}{2} \frac{1}{h^\frac{1}{\alpha}} \right \}$, with $\eta$ defined in the fourth point of A4. We recall that the measure $\mu^h$ has compensator
\begin{equation}
\bar{\mu}^h(dt, dz) = dt \frac{\bar{g}(z h^\frac{1}{\alpha})}{|z|^{1 + \alpha}} \tau(z h^{\frac{1}{\alpha}})1_{\mathbb{R} \setminus \left \{ 0 \right \}} (z) dz := dt F_h(z) dz.
\label{eq: definition Fn}
\end{equation}
In this section we assume that the truncation function $\tau$ satisfies the additional assumption
$$\int_\mathbb{R}|\frac{\tau '(z)}{\tau(z)}|^p \tau(z) dz < \infty, \qquad \forall p \ge 1. $$
We now define the Malliavin operators $L$ and $\Gamma$ (omitting their dependence on $h$) and state their basic properties (see \cite{Spieg Maillavin} Chapter IV, Sections 8-9-10). For a measurable test function $f: [0,1] \times \mathbb{R} \rightarrow \mathbb{R}$, $\mathcal{C}^2$ with respect to the second variable, with bounded derivative and such that $f \in \cap_{p \ge 1} L^p(\bar{\mu}^h(dt,dz))$, we set $\mu^h(f) = \int_0^1 \int_\mathbb{R} f(t,z) \mu^h(dt,dz).$ As an auxiliary function, we consider $\rho : \mathbb{R} \rightarrow [0, \infty)$ symmetric and twice differentiable, such that $\rho(z) = z^4$ if $z \in [0, \frac{1}{2}]$ and $\rho(z) = z^2$ if $z \ge 1$. Thanks to the truncation $\tau$, we do not need $\rho$ to vanish at infinity. Assuming the fourth point of Assumption 4, we check that $\rho$, $\rho'$ and $\rho \frac{F_h'}{F_h}$ belong to $\cap_{p \ge 1} L^p(F_h(z)dz)$. With these notations, we define the Malliavin operator $L$ on the functional $\mu^h(f)$ as follows:
$$L(\mu^h(f)) := \frac{1}{2} \mu^h(\rho'f' + \rho \frac{F_h'}{F_h} f' + \rho f''),$$
where $f'$ and $f''$ are the derivatives with respect to the second variable. This definition allows us to construct a linear operator on the space $D \subset \cap_{p \ge 1} L^p(F_h(z)dz)$ which is self-adjoint: $\forall \Phi, \Psi \in D$, $\mathbb{E}[\Phi \, L \Psi] = \mathbb{E}[(L \Phi) \Psi]$ (see Section 8 in \cite{Spieg Maillavin} for the details on the construction of $D$). \\
We associate to $L$ the symmetric bilinear operator $\Gamma$:
$$\Gamma(\Phi, \Psi)= L(\Phi \Psi) - \Phi L(\Psi) - \Psi L(\Phi).$$
If $f$ and $g$ are two test functions, we have
\begin{equation}
\Gamma(\mu^h(f), \mu^h(g)) = \mu^h(\rho f'g').
\label{eq: definition Gamma}
\end{equation}
The operators $L$ and $\Gamma$ satisfy the chain rule property:
$$LF(\Phi) = F'(\Phi) L \Phi + \frac{1}{2} F''(\Phi) \Gamma (\Phi, \Phi), \qquad \Gamma(F(\Phi), \Psi) = F'(\Phi)\Gamma (\Phi, \Psi).$$
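As a simple consistency check, taking $F(x) = x^2$ in the first identity gives
$$L(\Phi^2) = 2 \Phi \, L(\Phi) + \Gamma(\Phi, \Phi),$$
which is precisely the definition of $\Gamma$ evaluated at $\Psi = \Phi$.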
These operators allow us to establish the following integration by parts formula (see \cite{Spieg Maillavin} Theorem 8-10 p.103).
\begin{theorem}
Let $\Phi$ and $\Psi$ be random variables in $D$ and let $f$ be a bounded function with bounded derivatives up to order two. If $\Gamma(\Phi, \Phi)$ is invertible and $\Gamma^{-1}(\Phi, \Phi) \in \cap_{p \ge 1} L^p $, then we have
\begin{equation}
\mathbb{E}f'(\Phi) \Psi = \mathbb{E} f(\Phi) \mathcal{H}_\Phi(\Psi),
\label{eq: int per parti}
\end{equation}
with
\begin{equation}
\mathcal{H}_\Phi(\Psi) = -2 \Psi \Gamma^{-1}(\Phi, \Phi) L\Phi - \Gamma(\Phi, \Psi \Gamma^{-1}(\Phi,\Phi)).
\label{eq: def peso Maillavin}
\end{equation}
\label{th: regola catena maillavin}
\end{theorem}
The random variable $L_1^h$ belongs to the domain of the operators $L$ and $\Gamma$. Computing $L(L_1^h)$, $\Gamma(L_1^h, L_1^h)$ and $\mathcal{H}_{L_1^h}(1)$, it is possible to deduce the following useful inequalities, proved in Lemma 4.3 in \cite{Maillavin}.
\begin{lemma}
We have
$$\sup_{h} \mathbb{E}|\mathcal{H}_{L_1^h}(1)|^p \le C_p \qquad \forall p \ge 1,$$
$$\sup_{h} \mathbb{E}|\int_0^1 \int_{|z| > 1} |z| \mu^h(ds, dz) \mathcal{H}_{L_1^h}(1)|^p \le C_p \qquad \forall p \ge 1.$$
\label{lemma: lemma 4.3 Maillavin}
\end{lemma}
With this background we can proceed to the proof of Proposition \ref{prop: estimation stable}.
\subsection{Proof of Proposition \ref{prop: estimation stable}}
\begin{proof}
The first step is to construct on the same probability space two random variables whose laws are close to the laws of $h^{- \frac{1}{\alpha}}L_{h}$ and $S_1^\alpha$. We recall briefly the notation of Section \ref{subsection: construction per maillavin}: $\mu^h$ is a Poisson random measure with compensator $\bar{\mu}^h(dt, dz)$ defined in \eqref{eq: bar mu n} and the process $L_t^h$ is defined by
\begin{equation}
L_t^h = \int_0^t \int_\mathbb{R} z \tilde{\mu}^h(ds,dz) = \int_0^t \int_{|z| \le h^{- \frac{1}{\alpha}}\frac{\eta}{2}} z \tilde{\mu}^h(ds, dz)
\label{eq: definition L n}
\end{equation}
with $\tilde{\mu}^h = \mu^h - \bar{\mu}^h$.
Using the triangle inequality, we have
\begin{equation}
|\mathbb{E}[g(h^{- \frac{1}{\alpha}}L_{h})] - \mathbb{E}[g(S_1^\alpha)]| \le |\mathbb{E}[g(h^{- \frac{1}{\alpha}}L_{h})] - \mathbb{E}[g(L_1^h)]|+|\mathbb{E}[g(L_1^h) - g(S_1^\alpha)]|.
\label{eq: inizio prop maillavin}
\end{equation}
By the definition of $L_1^h$, we have
\begin{equation}
|\mathbb{E}[g(h^{- \frac{1}{\alpha}}L_{h})] - \mathbb{E}[g(L_1^h)] | = |\mathbb{E}[g(h^{- \frac{1}{\alpha}}L_{h}) - g(h^{- \frac{1}{\alpha}}L^\tau_{h})] | \le 2 \left \| g \right \|_\infty \mathbb{P}(\Omega_h^c)\le c\left \| g \right \|_\infty h,
\label{eq: stima primo termine Maillavin}
\end{equation}
where in the last inequality we have used \eqref{eq: proba diff processi}. In order to estimate the second term of \eqref{eq: inizio prop maillavin}, we now construct a random variable, based on the Poisson measure $\mu^h$, whose law approximates that of $S_1^\alpha$:
\begin{equation}
L_t^{\alpha, h} := \int_0^t \int_{|z| \le h^{- \frac{1}{\alpha}}\frac{\eta}{2}} g_h(z)\tilde{\mu}^h(ds, dz),
\label{eq: definition L alpha n}
\end{equation}
where $g_h$ is an odd function built in the proof of Theorem 4.1 in \cite{Maillavin} for which the following lemma holds:
\begin{lemma}
\begin{enumerate}
\item For each test function $f$, defined as in Section \ref{section: Maillavin theory}, we have
\begin{equation}
\int_0^1 \int_{|z| \le \frac{\eta}{2} h^{- \frac{1}{\alpha}}} f(t,g_h(z)) \bar{\mu}^h(dt, dz) = \int_0^1 \int_{|\omega| \le \frac{\eta}{2} h^{- \frac{1}{\alpha}}} f(t,\omega) \bar{\mu}^{\alpha, h}(dt, d\omega),
\label{eq: trasformazione legge da salti a uniforme}
\end{equation}
where $\bar{\mu}^h(dt, dz)$ is the compensator defined in \eqref{eq: bar mu n} and
$$\bar{\mu}^{\alpha, h}(dt, d\omega) = dt \frac{\tau(\omega h^\frac{1}{\alpha})}{|\omega|^{1 + \alpha}} d\omega$$
is the compensator of a measure associated with an $\alpha$-stable process whose jumps are truncated with the function $\tau$.
\item There exists $\epsilon_0 > 0$ such that, for $|z| \le \epsilon_0 h^{- \frac{1}{\alpha}}$,
$$|g_h(z) - z| \le c z^2 h^{\frac{1}{\alpha}} + c |z|^{1 + \alpha }h \qquad \mbox{if } \alpha \neq 1,$$
$$|g_h(z) - z| \le c z^2 h |\log(|z|h)| \qquad \mbox{if } \alpha =1.$$
\item The function $g_h$ is $\mathcal{C}^1$ on $(- \epsilon_0 h^{- \frac{1}{\alpha}}, \epsilon_0 h^{- \frac{1}{\alpha}})$ and for $|z| < \epsilon_0 h^{- \frac{1}{\alpha}}$,
$$|g'_h(z) - 1| \le c |z| h^{\frac{1}{\alpha}} + c |z|^{ \alpha }h \qquad \mbox{if } \alpha \neq 1,$$
$$|g'_h(z) - 1| \le c |z| h |\log(|z|h)| \qquad \mbox{if } \alpha =1.$$
\end{enumerate}
\label{lemma: 4.5 in Maillavin}
\end{lemma}
The second and third points of the lemma above are proved in Lemma 4.5 of \cite{Maillavin}, while the first point is proved in Theorem 4.1 of \cite{Maillavin}; using the exponential formula for Poisson measures, it shows that $g_h$ is the function that turns our measure $\mu^h$ into the measure associated with an $\alpha$-stable process truncated with the function $\tau$. Thus $(L_t^{\alpha, h})_{t \in [0,1]}$ is a L\'evy process with jump intensity $\omega \mapsto \frac{\tau(\omega h^\frac{1}{\alpha})}{|\omega|^{1 + \alpha}}$, and we recognize the law of a truncated $\alpha$-stable process. We deduce, similarly to \eqref{eq: stima primo termine Maillavin},
\begin{equation}
|\mathbb{E}[g(L_1^{\alpha, h})] - \mathbb{E}[g(S_1^{\alpha})]| \le c\left \| g \right \|_\infty h.
\label{eq: differenza S1 e L1 alpha n}
\end{equation}
Proposition \ref{prop: estimation stable} is a consequence of \eqref{eq: inizio prop maillavin}, \eqref{eq: stima primo termine Maillavin}, \eqref{eq: differenza S1 e L1 alpha n} and the following lemma:
\begin{lemma}
Suppose that Assumptions 1 to 4 hold. Let $g$ be as in Proposition \ref{prop: estimation stable}. Then, for any $\epsilon > 0$ and for $p \ge \alpha$,
$$|\mathbb{E}[g(L_1^{h}) - g(L_1^{\alpha, h})]| \le C_\epsilon h |\log(h)| \left \| g \right \|_\infty + C_\epsilon h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1 - \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{\frac{\alpha}{p} - \epsilon} |\log(h)| +$$
$$+ C_\epsilon h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1+ \frac{1}{p} - \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{- \frac{1}{p} + \frac{\alpha}{p} - \epsilon} |\log(h)|1_{\alpha > 1} .$$
\label{lemma: main prop stable}
\end{lemma}
\begin{proof}
The proof is based on the comparison of the representations \eqref{eq: definition L n} and \eqref{eq: definition L alpha n}. Since in Lemma \ref{lemma: 4.5 in Maillavin} the difference $g_h(z) - z$ is controlled only for $|z| \le \epsilon_0 h^{- \frac{1}{\alpha}}$, we need to introduce a localization procedure, consisting of a regularization of $1_{\left \{ \mu^h([0,1] \times \left \{ z \in \mathbb{R}: |z| > \epsilon_0 h^{- \frac{1}{\alpha}} \right \}) = 0 \right \}}$. Let $\mathcal{I}$ be a smooth function defined on $\mathbb{R}$ with values in $[0,1]$, such that $\mathcal{I}(x) = 1$ for $x \le \frac{1}{2}$ and $\mathcal{I}(x) = 0$ for $x \ge 1$. Moreover, we denote by $\zeta$ a smooth function on $\mathbb{R}$ with values in $[0,1]$ such that $\zeta(z)=0$ for $|z| \le \frac{1}{2}$ and $\zeta(z) = 1$ for $|z| \ge 1$, and we set
$$V^h : = \int_0^1 \int_\mathbb{R} \zeta(\frac{z h^\frac{1}{\alpha}}{\epsilon_0}) \mu^h(ds, dz)= \int_0^1 \int_{\left \{ \frac{1}{2} \epsilon_0 h^{- \frac{1}{\alpha}} \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}} \right \}} \zeta(\frac{z h^{\frac{1}{\alpha}}}{\epsilon_0}) \mu^h(ds, dz) + \int_0^1 \int_{\left \{ |z| \ge \epsilon_0 h^{- \frac{1}{\alpha}} \right \}} \mu^h(ds, dz), $$
$$W^h:= \mathcal{I}(V^h).$$
By construction, $W^h$ is a Malliavin differentiable random variable such that $W^h \neq 0$ implies $\mu^h([0,1] \times \left \{ z \in \mathbb{R}: |z| > \epsilon_0 h^{- \frac{1}{\alpha}} \right \}) = 0 $. It is possible to show, proceeding as in \eqref{eq: prob omega n}, that $\mathbb{P}(W^h \neq 1) \le \mathbb{P}(\mu^h \mbox{ has a jump of size} > \frac{1}{2} \epsilon_0 h^{- \frac{1}{\alpha}} ) = O(h)$. From the latter, it is clear that the proof of the lemma reduces to proving the result for
$|\mathbb{E}[g(L_1^{h})W^h] - \mathbb{E}[g(L_1^{\alpha, h})W^h]|.$
Considering a regularizing sequence $(g_p)$ converging to $g$ in $L^1$ norm and such that, for all $p$, $g_p$ is $\mathcal{C}^1$ with bounded derivative and $\left \| g_p \right \|_\infty \le \left \| g \right \|_\infty$, we may assume that $g$ itself is $\mathcal{C}^1$ with bounded derivative. Using the integration by parts formula \eqref{eq: int per parti} and denoting by $G$ any primitive of $g$, we can write $\mathbb{E}[g(L_1^h)W^h] = \mathbb{E}[G(L_1^h) \mathcal{H}_{L_1^h}(W^h)]$, where the Malliavin weight can be written, using \eqref{eq: def peso Maillavin} and the chain rule property of the operator $\Gamma$, as
\begin{equation}
\mathcal{H}_{L_1^h}(W^h) = W^h \mathcal{H}_{L_1^h}(1) - \frac{\Gamma (W^h, L_1^h)}{\Gamma(L_1^h, L_1^h)}.
\label{eq: peso maillavin su L1n}
\end{equation}
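The identity \eqref{eq: peso maillavin su L1n} can be obtained from \eqref{eq: def peso Maillavin}: writing $\Gamma^{-1} := \Gamma^{-1}(L_1^h, L_1^h)$ and using the Leibniz rule $\Gamma(\Phi, \Psi \Theta) = \Psi \Gamma(\Phi, \Theta) + \Theta \Gamma(\Phi, \Psi)$ (see \cite{Spieg Maillavin}), we have
$$\mathcal{H}_{L_1^h}(W^h) = -2 W^h \Gamma^{-1} L(L_1^h) - W^h \Gamma(L_1^h, \Gamma^{-1}) - \Gamma^{-1} \Gamma(L_1^h, W^h) = W^h \mathcal{H}_{L_1^h}(1) - \frac{\Gamma(W^h, L_1^h)}{\Gamma(L_1^h, L_1^h)}.$$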
Using the triangle inequality, we are now left to find upper bounds for the following two terms:
$$\tilde{T}_1 := |\mathbb{E}[g(L_1^{\alpha, h})W^h] - \mathbb{E}[G(L_1^{ \alpha, h}) \mathcal{H}_{L_1^h}(W^h)]|,$$
$$\tilde{T}_2 := |\mathbb{E}[G(L_1^{ \alpha, h}) \mathcal{H}_{L_1^h}(W^h)] - \mathbb{E}[G(L_1^{ h}) \mathcal{H}_{L_1^h}(W^h)]|.$$
Let us start by considering $\tilde{T}_2$. Using the Lipschitz property of the function $G$ and \eqref{eq: peso maillavin su L1n}, we obtain that it is bounded above by
$$\mathbb{E}[|g(\hat{L}_1)||L_1^{ \alpha, h} - L_1^{h}||\mathcal{H}_{L_1^h}(W^h)|] \le \mathbb{E}[|g(\hat{L}_1)||L_1^{ \alpha, h} - L_1^{h}||W^h \mathcal{H}_{L_1^h}(1)|] + \mathbb{E}[|g(\hat{L}_1)||L_1^{ \alpha, h} - L_1^{h}||\frac{\Gamma (W^h, L_1^h)}{\Gamma(L_1^h, L_1^h)}|] =$$
$$ = : \tilde{T}_{2,1} + \tilde{T}_{2,2}, $$
where $\hat{L}_1$ is a point between $L_1^{ \alpha, h}$ and $L_1^{ h}$. We focus on $\tilde{T}_{2,1}$. Using the definitions \eqref{eq: definition L n} and \eqref{eq: definition L alpha n} of $L_1^h$ and $L_1^{\alpha, h}$, we obtain
$$\tilde{T}_{2,1} \le \mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_\mathbb{R} (g_h(z) - z) \tilde{\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|] \le \mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_{|z| \le 1} (g_h(z) - z) \tilde{\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|] +$$
\begin{equation}
+ \mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_{ 1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}} } (g_h(z) - z) {\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|],
\label{eq: splitto T21}
\end{equation}
where we have used that $g_h$ is an odd function together with the symmetry of the compensator $\bar{\mu}^h$, and the fact that on $\left \{ W^h \neq 0 \right \}$ we have $\mu^h([0,1] \times \left \{ z \in \mathbb{R}: |z| > \epsilon_0 h^{- \frac{1}{\alpha}} \right \}) = 0 $.
For the sake of brevity, we only give the details of the proof in the case $\alpha \neq 1$; in the case $\alpha = 1$, one needs to modify this control with an additional logarithmic term. For the small-jumps term, from inequality 2.1.37 in \cite{13 in Maillavin} and the second point of Lemma \ref{lemma: 4.5 in Maillavin} we deduce $\mathbb{E}[|\int_0^1 \int_{|z| \le 1} (g_h(z) - z) \tilde{\mu}^h(ds, dz) |^{q_1}] \le C_{q_1}(h + h^{\frac{1}{\alpha}})^{q_1}$, $\forall q_1 \ge 2$. Using this bound and the H\"older inequality with $q_1$ large and $q_2$ close to $1$, we have
$$\mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_{|z| \le 1} (g_h(z) - z) \tilde{\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|] \le C_{q_1}(h + h^{\frac{1}{\alpha}}) \mathbb{E}[|g(\hat{L}_1)|^{q_2}|\mathcal{H}_{L_1^h}(1)|^{q_2}W^h]^\frac{1}{q_2} \le $$
\begin{equation}
\le C_{q_1}(h + h^{\frac{1}{\alpha}}) \mathbb{E}[|g(\hat{L}_1)|^{p_1\,q_2}W^h]^\frac{1}{q_2 p_1} \mathbb{E}[|\mathcal{H}_{L_1^h}(1)|^{q_2 p_2}]^\frac{1}{q_2 p_2},
\label{eq: dettata da gigi}
\end{equation}
where in the last inequality we have used the H\"older inequality again, with $p_2$ large and $p_1$ close to $1$. Using the first point of Lemma \ref{lemma: lemma 4.3 Maillavin}, we know that $\mathbb{E}[|\mathcal{H}_{L_1^h}(1)|^{q_2 p_2}]^\frac{1}{q_2 p_2}$ is bounded; hence \eqref{eq: dettata da gigi} is bounded above by
\begin{equation}
C_{q_1 q_2 p_2} h \left \| g \right \|_\infty + C_{q_1 q_2 p_2} h^\frac{1}{\alpha} \mathbb{E}[|g(\hat{L}_1)W^h|^{p_1\,q_2}]^\frac{1}{q_2 p_1},
\label{eq: prima parte T21}
\end{equation}
where we have bounded $|g(\hat{L}_1)|$ by its infinity norm and used that $0 \le W^h \le 1$. We recall that $q_2$ and $p_1$ are close to $1$, hence we can write $q_2 p_1$ as $1 + \epsilon$. We now introduce $r$ in the following way:
$$\mathbb{E}[|g(\hat{L}_1)|^{1 + \epsilon}W^h]^\frac{1}{1 + \epsilon} = \mathbb{E}[|g(\hat{L}_1)|^{(1 + \epsilon) r}|g(\hat{L}_1)|^{(1 + \epsilon)(1 - r)}W^h]^\frac{1}{1 + \epsilon} \le \left \| g \right \|_\infty^r \mathbb{E}[|g(\hat{L}_1)|^{(1 + \epsilon)(1 - r)}W^h]^\frac{1}{1 + \epsilon} \le $$
\begin{equation}
\left \| g \right \|_\infty^r \left \| g \right \|_{pol}^{1 - r} \mathbb{E}[(1 + |\hat{L}_1|^p)^{(1 + \epsilon)(1 - r)}W^h]^\frac{1}{1 + \epsilon} \le c \left \| g \right \|_\infty^r \left \| g \right \|_{pol}^{1 - r} + c \left \| g \right \|_\infty^r \left \| g \right \|_{pol}^{1 - r} \mathbb{E}[|\hat{L}_1|^{p(1 + \epsilon)(1 - r)}W^h]^\frac{1}{1 + \epsilon};
\label{eq: spezzo h su norma inf e norma pol}
\end{equation}
where we have bounded $g$ by its $\infty$-norm and used property \eqref{eq: conditon on h} of $g$ together with $0 \le W^h\le 1$. We observe that $\hat{L}_1$ is between $L_1^h$ and $L_1^{\alpha,h}$, hence $|\hat{L}_1| \le |L_1^h| + |L_1^{\alpha,h}|$. Moreover, we choose $r$ such that $p(1 + \epsilon)(1 - r) = \alpha$, that is, $r = 1 - \frac{\alpha}{p(1 + \epsilon)}$. In this way, \eqref{eq: spezzo h su norma inf e norma pol} is bounded above by
\begin{equation}
c \left \| g \right \|_\infty^{1 - \frac{\alpha}{p(1 + \epsilon)}} \left \| g \right \|_{pol}^{\frac{\alpha}{p(1 + \epsilon)}} \log(h^{- \frac{1}{\alpha}}),
\label{eq: finale per z <1}
\end{equation}
where we have used that $\mathbb{E}[|\hat{L}_1|^\alpha W^h] \le c \log(h^{- \frac{1}{\alpha}})$, which we now justify. Indeed, using Lemma 2.1.5 in the appendix of \cite{13 in Maillavin} if $\alpha \in [1,2]$ and the Jensen inequality if $\alpha \in [0,1)$, we have
$$\mathbb{E}[|\hat{L}_1|^\alpha W^h] \le c \mathbb{E}[(|L_1^h|^\alpha + |L_1^{\alpha,h}|^\alpha ) W^h] \le c \mathbb{E}[|\int_0^1 \int_{|z| \le 1}z \tilde{\mu}^h(ds, dz)|] + c \mathbb{E}[|\int_0^1 \int_{|z| \le 1}g_h(z) \tilde{\mu}^h(ds, dz)|] +$$
$$+ c \mathbb{E}[\int_0^1 \int_{1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^\alpha \bar{\mu}^h(ds, dz)] + c \mathbb{E}[\int_0^1 \int_{1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|g_h(z)|^\alpha \bar{\mu}^h(ds, dz)].$$
We observe that, using the Kunita inequality, the first term above is bounded in $L^p$ and, as a consequence of the second point of Lemma \ref{lemma: 4.5 in Maillavin}, so is the second term.
Concerning the third term (the fourth is handled in the same way), we have
\begin{equation}
c \mathbb{E}[\int_0^1 \int_{1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^\alpha \bar{\mu}^h(ds, dz)] \le c \int_{1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{\alpha - 1 - \alpha} dz \le c \, \log(h^{- \frac{1}{\alpha}}) \le c |\log(h)|,
\label{eq: hat L1 z < 1}
\end{equation}
where we have also used definition \eqref{eq: bar mu n} of $\bar{\mu}^h$. \\
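For completeness, the logarithmic bound in \eqref{eq: hat L1 z < 1} comes from the explicit computation
$$\int_{1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|^{-1} dz = 2 \int_1^{\epsilon_0 h^{- \frac{1}{\alpha}}} \frac{dz}{z} = 2 \log(\epsilon_0) + \frac{2}{\alpha} |\log(h)| \le c \, |\log(h)|$$
for $h$ small enough.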
Replacing \eqref{eq: finale per z <1} in \eqref{eq: prima parte T21} we get
\begin{equation}
\mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_{|z| \le 1} (g_h(z) - z) \tilde{\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|] \le C_{q_1 q_2 p_2} h \left \| g \right \|_\infty + C_{q_1 q_2 p_2} h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1 - \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{\frac{\alpha}{p} - \epsilon}\log(h^{- \frac{1}{\alpha}}),
\label{eq: finale unito z<1}
\end{equation}
where, using the arbitrariness of $\epsilon$, we have replaced it with a new value; the constants also depend on it. \\
Let us now consider the large jumps term in \eqref{eq: splitto T21}. Using the second point of Lemma \ref{lemma: 4.5 in Maillavin} and the following basic inequality
$$\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^\delta \mu^h(ds,dz) \le \int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{\delta - 1} \mu^h(ds,dz) \int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z| \mu^h(ds,dz) $$
for $\delta \ge 1$ (a consequence of the bound $\sum_i a_i b_i \le (\sum_i a_i)(\sum_i b_i)$ for nonnegative terms, applied to the jumps in the region), we get that it is bounded above by
\begin{equation}
\mathbb{E}[|g(\hat{L}_1)|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^\frac{1}{\alpha}|z| + h |z|^\alpha ) \mu^h(ds,dz) \int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z| \mu^h(ds,dz)|\mathcal{H}_{L_1^h}(1)|W^h].
\label{ eq: T21 seconda parte}
\end{equation}
We now use the H\"older inequality with $p_2$ large and $p_1$ close to $1$, and we observe that, from the second point of Lemma \ref{lemma: lemma 4.3 Maillavin}, it follows that
$$\mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z| \mu^h(ds,dz) \mathcal{H}_{L_1^h}(1)|^{p_2}]^\frac{1}{p_2} \le C_{p_2}.$$
Hence \eqref{ eq: T21 seconda parte} is upper bounded by
\begin{equation}
C_{p_2} \mathbb{E}[|g(\hat{L}_1)|^{p_1} |\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^\frac{1}{\alpha}|z| + h |z|^\alpha) \mu^h(ds,dz) |^{p_1} W^h]^\frac{1}{p_1} \le
\label{eq: uguale al caso con Gamma}
\end{equation}
\begin{equation}
\le C_{p_2} \left \| g \right \|_\infty h \mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^\alpha \mu^h(ds,dz) |^{p_1}]^\frac{1}{p_1} + C_{p_2} h^\frac{1}{\alpha} \mathbb{E}[|g(\hat{L}_1)|^{p_1} |\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{p_1} W^h]^\frac{1}{p_1}.
\label{eq: T21 riformulata}
\end{equation}
Concerning the first term of \eqref{eq: T21 riformulata}, we use Lemma 2.1.5 in the appendix of \cite{13 in Maillavin} with $p_1 = (1 + \epsilon) \in [1,2]$ and the definition of $F_h$ given in \eqref{eq: definition Fn}, getting
$$\mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^\alpha \mu^h(ds,dz) |^{1 + \epsilon}]^\frac{1}{1 + \epsilon} \le \mathbb{E}[\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{\alpha(1 + \epsilon)} \bar{\mu}^h(ds,dz)]^\frac{1}{1 + \epsilon} \le $$
\begin{equation}
\le c(\int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{\alpha(1 + \epsilon) - 1 - \alpha} dz)^\frac{1}{1 + \epsilon} \le c h^{- \frac{\epsilon}{1 + \epsilon}}=c h^{- \epsilon},
\label{eq: stima int z alla alpha(1 + epsilon)}
\end{equation}
where we have used the arbitrariness of $\epsilon$ in the last equality. \\
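For completeness, the explicit computation behind \eqref{eq: stima int z alla alpha(1 + epsilon)} is
$$\int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{\alpha \epsilon - 1} dz = \frac{2}{\alpha \epsilon} \left ( (\epsilon_0 h^{- \frac{1}{\alpha}})^{\alpha \epsilon} - 1 \right ) \le c \, h^{- \epsilon},$$
so that, after taking the power $\frac{1}{1 + \epsilon}$, we obtain $c \, h^{- \frac{\epsilon}{1 + \epsilon}}$, which is again of order $h^{- \epsilon}$ up to renaming $\epsilon$.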
On the second term of \eqref{eq: T21 riformulata} we proceed differently depending on whether $\alpha$ is larger than $1$ or not. If $\alpha > 1$, we proceed as in \eqref{eq: spezzo h su norma inf e norma pol}, considering $p_1 = 1 + \epsilon < \alpha$ and introducing $r$, which this time we choose so that the following equality holds:
\begin{equation}
p(1 + \epsilon )(1 - r) + (1 + \epsilon) = \alpha.
\label{eq: definition r}
\end{equation}
We also use property \eqref{eq: conditon on h} of $g$; hence the term is bounded above by
\begin{equation}
C_{p_2} h^\frac{1}{\alpha} \left \| g \right \|_\infty^r \left \| g \right \|_{pol}^{1 - r} \mathbb{E}[(1 + |\hat{L}_1|^{p(1 + \epsilon )(1 - r)})|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{1 + \epsilon} W^h]^\frac{1}{1 + \epsilon}.
\label{eq: L1 hat}
\end{equation}
Now, on the first term above, we use that $0 \le W^h \le 1$ and Lemma 2.1.5 in the appendix of \cite{13 in Maillavin}, as we did in \eqref{eq: stima int z alla alpha(1 + epsilon)}, in order to get
\begin{equation}
\mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{1 + \epsilon} ]^\frac{1}{1 + \epsilon} \le c.
\label{eq: stima z alla 1+ epsilon}
\end{equation}
Moreover we observe, as above, that $|\hat{L}_1| \le |L_1^h| + |L_1^{\alpha, h}|$ and that, by the second point of Lemma \ref{lemma: 4.5 in Maillavin}, there exists $c > 0$ such that $|g_h(z)| \le c |z|$; we therefore get
\begin{multline}
\mathbb{E}[|\hat{L}_1|^{p(1 + \epsilon )(1 - r)}|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{1 + \epsilon} W^h]^\frac{1}{1 + \epsilon} \le \\
\le c + \mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{p(1 + \epsilon)(1 - r) + (1 + \epsilon)}]^\frac{1}{1 + \epsilon} \le \\
\le c (\int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|^\alpha |z|^{- 1 - \alpha}dz)^\frac{1}{1 + \epsilon} \le c \, \frac{1}{1 + \epsilon} \log(h^{- \frac{1}{\alpha}}) \le c|\log(h)|,
\label{eq: stima L1 alla alpha}
\end{multline}
where we have chosen this particular $r$ precisely so that the exponent above equals $\alpha$, thus recovering the same computation as in \eqref{eq: hat L1 z < 1}. We have not considered the integral on $|z| \le 1$ because, as already observed above \eqref{eq: hat L1 z < 1}, that integral is bounded in $L^p$, so we simply recover \eqref{eq: stima z alla 1+ epsilon}. From \eqref{eq: definition r} we obtain $r = 1 + \frac{1}{p} - \frac{\alpha}{p(1 + \epsilon)}$. Replacing it and using \eqref{eq: stima z alla 1+ epsilon} and \eqref{eq: stima L1 alla alpha}, we get that \eqref{eq: L1 hat} is bounded above by
\begin{equation}
C_{p_2} h^\frac{1}{\alpha} \left \| g \right \|_\infty^{1 + \frac{1}{p} - \frac{\alpha}{p(1 + \epsilon)}} \left \| g \right \|_{pol}^{-\frac{1}{p} + \frac{\alpha}{p(1 + \epsilon)}}(c + |\log(h)|) = C_{p_2} h^\frac{1}{\alpha} \left \| g \right \|_\infty^{1 + \frac{1}{p} - \frac{\alpha}{p(1 + \epsilon)}} \left \| g \right \|_{pol}^{-\frac{1}{p} + \frac{\alpha}{p(1 + \epsilon)}} |\log(h)|.
\label{eq: fine parte 2 T21}
\end{equation}
If instead $\alpha < 1$, then the second term of \eqref{eq: T21 riformulata} is bounded above by
\begin{equation}
C_{p_2} h^{\frac{1}{\alpha}}\left \| g \right \|_\infty \mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{p_1} W^h]^\frac{1}{p_1} \le C_{p_2} h^{\frac{1}{\alpha}}\left \| g \right \|_\infty h^{\frac{1}{1 + \epsilon} - \frac{1}{\alpha}} = C_{p_2} h^{\frac{1}{1 + \epsilon}}\left \| g \right \|_\infty,
\label{eq: secondo termine, alpha < 1}
\end{equation}
where we have taken $p_1 = 1 + \epsilon$ and we have used the fact that $0 \le W^h \le 1$ and that, for $\alpha < 1$,
$$\mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{1 + \epsilon} ]^\frac{1}{1 + \epsilon} \le c h^{\frac{1}{1 + \epsilon} - \frac{1}{\alpha}}.$$
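For completeness, this last bound follows from Lemma 2.1.5 in the appendix of \cite{13 in Maillavin} exactly as in \eqref{eq: stima int z alla alpha(1 + epsilon)}: for $\alpha < 1$,
$$\mathbb{E}[|\int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} |z|\mu^h(ds,dz) |^{1 + \epsilon} ] \le c \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}|z|^{1 + \epsilon - 1 - \alpha} dz \le c \, h^{- \frac{1 + \epsilon - \alpha}{\alpha}},$$
and the exponent after taking the power $\frac{1}{1 + \epsilon}$ is $- \frac{1 + \epsilon - \alpha}{\alpha (1 + \epsilon)} = \frac{1}{1 + \epsilon} - \frac{1}{\alpha}$.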
Using \eqref{eq: T21 riformulata}, \eqref{eq: stima int z alla alpha(1 + epsilon)}, \eqref{eq: fine parte 2 T21} and \eqref{eq: secondo termine, alpha < 1} it follows
$$\mathbb{E}[|g(\hat{L}_1)||\int_0^1 \int_{ 1 \le |z| \le \epsilon_0 h^{- \frac{1}{\alpha}} } (g_h(z) - z) {\mu}^h(ds, dz) ||\mathcal{H}_{L_1^h}(1)W^h|] \le$$
\begin{equation}
\le C_{p_2} h^{1 - \epsilon} \left \| g \right \|_\infty + C_{p_2} h^\frac{1}{\alpha} \left \| g \right \|_\infty^{1 + \frac{1}{p} - \frac{\alpha}{p(1 + \epsilon)}} \left \| g \right \|_{pol}^{-\frac{1}{p} + \frac{\alpha}{p(1 + \epsilon)}} |\log(h)|1_{\alpha > 1}.
\label{eq: grandi salti T21}
\end{equation}
Now from \eqref{eq: splitto T21}, \eqref{eq: finale unito z<1}, and \eqref{eq: grandi salti T21} it follows \begin{equation}
\tilde{T}_{2,1} \le C_{q_1 q_2 p_2} h^{1 - \epsilon} \left \| g \right \|_\infty + C_{q_1 q_2 p_2} h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1 - \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{\frac{\alpha}{p} - \epsilon} |\log(h)| + C_{q_1 q_2 p_2} h^\frac{1}{\alpha}\left \| g \right \|_\infty^{1 + \frac{1}{p}- \frac{\alpha}{p} + \epsilon} \left \| g \right \|_{pol}^{- \frac{1}{p } + \frac{\alpha}{p} - \epsilon} |\log(h)|1_{\alpha > 1}.
\label{es: stima T21 completa}
\end{equation}
Concerning $\tilde{T}_{2,2}$, it is already proved in Theorem 4.2 in \cite{Maillavin} that
\begin{equation}
\tilde{T}_{2,2} \le c h \left \| g \right \|_\infty.
\label{eq: estimation T22}
\end{equation}
Let us now consider $\tilde{T}_1$. Using \eqref{eq: definition Gamma} and \eqref{eq: def peso Maillavin} we can write
$$\mathcal{H}_{L_1^h}(W^h) = \frac{- W^h \, L(L_1^h)}{\Gamma(L_1^h, L_1^h)} + L(\frac{W^h}{\Gamma(L_1^h, L_1^h)}) L_1^h - L(\frac{L_1^h \, W^h}{\Gamma(L_1^h, L_1^h)}).$$
Through computations using the fact that $L$ is a self-adjoint operator, we get
\begin{equation}
\tilde{T}_1 = |\mathbb{E}[g(L_1^{\alpha, h}) W^h] - \mathbb{E}[g(L_1^{\alpha, h}) \frac{\Gamma(L_1^{\alpha, h}, L_1^h)}{\Gamma(L_1^h, L_1^h)} W^h]| \le \mathbb{E}[|g(\hat{L}_1)||\frac{\Gamma(L_1^h - L_1^{\alpha, h}, L_1^h)}{\Gamma(L_1^h, L_1^h)}| W^h].
\label{eq: T1}
\end{equation}
Using equation \eqref{eq: definition Gamma}, we have
$$\Gamma(L_1^h - L_1^{\alpha, h}, L_1^h) = \int_0^1 \int_{|z| < \frac{\eta}{2} h^{- \frac{1}{\alpha}}} \rho(z) (1 - g_h'(z)) \mu^h(ds, dz).$$
Using the third point of Lemma \ref{lemma: 4.5 in Maillavin} we deduce the following on the event $W^h \neq 0$:
$$|\Gamma(L_1^h - L_1^{\alpha, h}, L_1^h)| \le c \int_0^1 \int_{|z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} \rho(z) (h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz) \le c \int_0^1 \int_{ |z| \le 1} \rho(z) (h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz) +$$
$$ + c \int_0^1 \int_{1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}} \rho(z) \mu^h(ds, dz)\int_0^1 \int_{ 1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz) \le $$
$$\le c \int_0^1 \int_{\mathbb{R}} \rho(z) \mu^h(ds, dz)( h^\frac{1}{\alpha} + h) + c\int_0^1 \int_{\mathbb{R}} \rho(z) \mu^h(ds, dz)\int_0^1 \int_{ 1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz) = $$
\begin{equation}
= c( h^\frac{1}{\alpha} + h) \Gamma(L_1^h, L_1^h) + c \Gamma(L_1^h, L_1^h)(\int_0^1 \int_{ 1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz)),
\label{eq: Gamma in T1}
\end{equation}
where we have used that $|z| \le 1$ in the first integral and that, since $\rho$ is a positive function, we can bound the integrals from above by integrating over the whole set $\mathbb{R}$. Moreover, we have used the definition of $\Gamma(L_1^h, L_1^h)$. Replacing \eqref{eq: Gamma in T1} in \eqref{eq: T1} we get
\begin{equation}
\tilde{T}_1 \le c ( h^\frac{1}{\alpha} + h) \mathbb{E}[|g(\hat{L}_1)|] + c\mathbb{E}[|g(\hat{L}_1)|\int_0^1 \int_{ 1 < |z| \le \epsilon_0 h^{- \frac{1}{\alpha}}}(h^{\frac{1}{\alpha}} |z| + h |z|^\alpha) \mu^h(ds, dz)] = : \tilde{T}_{1,1} + \tilde{T}_{1,2}.
\label{eq: splitto T1}
\end{equation}
Concerning $\tilde{T}_{1,1}$, we have
\begin{equation}
\tilde{T}_{1,1} \le c h \left \| g \right \|_\infty + c h^\frac{1}{\alpha}\mathbb{E}[|g(\hat{L}_1)|] \le c h \left \| g \right \|_\infty + c h^\frac{1}{\alpha} \left \| g \right \|_\infty^{1 - \frac{\alpha}{p}} \left \| g \right \|_{pol}^{\frac{\alpha}{p}} |\log(h)|,
\label{eq: T11}
\end{equation}
where in the last inequality we have proceeded exactly as in \eqref{eq: spezzo h su norma inf e norma pol} and \eqref{eq: finale per z <1}, with the exponent on $g$ equal to $1$ instead of $1 + \epsilon$, choosing therefore $r$ such that $p(1 - r) = \alpha$. Let us now consider $\tilde{T}_{1,2}$. We observe that it is exactly as \eqref{eq: uguale al caso con Gamma} but with $p_1 = 1$ instead of $p_1 = 1 + \epsilon$, with the only difference that the computation \eqref{eq: stima int z alla alpha(1 + epsilon)} now yields $c \, \log(h^{- \frac{1}{\alpha}})$ instead of $c h^{- \epsilon}$, and in \eqref{eq: definition r} we choose $r$ such that $p(1 - r) + 1 = \alpha$. Proceeding exactly as above, it follows that
\begin{equation}
\tilde{T}_{1,2} \le C_{p_2} h |\log(h)| \left \| g \right \|_\infty + C_{p_2} h^\frac{1}{\alpha} \left \| g \right \|_\infty^{1 + \frac{1}{p} - \frac{\alpha}{p}} \left \| g \right \|_{pol}^{-\frac{1}{p} + \frac{\alpha}{p}} |\log(h)|1_{\alpha > 1}.
\label{eq: T12}
\end{equation}
Using \eqref{es: stima T21 completa}, \eqref{eq: estimation T22}, \eqref{eq: T11} and \eqref{eq: T12}, the lemma is proved.
\end{proof}
Proposition \ref{prop: estimation stable} then follows, using also \eqref{eq: inizio prop maillavin}, \eqref{eq: stima primo termine Maillavin} and \eqref{eq: differenza S1 e L1 alpha n}.
\end{proof}
\section{Introduction}
\label{sec:intro}
Statisticians and data scientists are often called upon to manipulate,
analyze, and summarize datasets that present high-dimensional and
elaborate dependency structures. In multiple cases, these large datasets
contain variables characterized by a considerable amount of redundant
information. One can exploit these redundancies to represent a large
dataset on a much lower-dimensional scale. This summarization procedure,
called \emph{dimensionality reduction}, is a fundamental step in many
statistical analyses. For example, dimensionality reduction techniques
make otherwise challenging tasks, such as large data manipulation and
visualization, feasible by reducing computational time and memory
requirements.
More formally, dimensionality reduction is possible whenever the data
points lie on one or more manifolds characterized by a lower
dimension than what has been originally observed. We call the dimension
of a latent, potentially non-linear manifold the
\emph{intrinsic dimension} (\texttt{id}). Several other definitions of
\texttt{id} exist in the literature. For example, we can see the
\texttt{id} as the minimal number of parameters needed to represent all
the information contained in the data without significant information
loss \citep{Ansuini2019, Rozza2011, Bennett1969}. Intuitively, the
\texttt{id} is an indicator of the complexity of the features of a
dataset. It is a necessary piece of information to have before
attempting to perform any dimensionality reduction, manifold learning,
or visualization tasks. Indeed, most dimensionality reduction methods
would be worthless without a reliable estimate of the true \texttt{id}
they need to target: an underestimated \texttt{id} value can cause
needless information loss. At the same time, the reverse can lead to an
unnecessary waste of time and computational resources \citep{Hino2017}.
Over the past few decades, a vast number of methods for \texttt{id}
estimation and dimensionality reduction have been developed. The
algorithms can be broadly classified into two main categories:
projection and geometric approaches. The former maps the original data
to a lower-dimensional space. The projection function can be linear, as
in the case of Principal Component Analysis (PCA) \citep{Hotelling1933}
or nonlinear, as in the case of Locally Linear Embedding
\citep{Roweis2000}, Isomap \citep{Tenn}, and the tSNE
\citep{LaurensvanderMaatenTiCC2009}. For more examples, see
\citet{Jollife2016} and the references therein. Consequently, a
plethora of \proglang{R} packages implement these types of
algorithms. To mention some examples, one can use the packages
\pkg{RDRToolbox} \citep{RDRtoolbox}, \pkg{lle} \citep{Kayo2006},
\pkg{Rtsne} \citep{Rtsne}, and the classic \code{princomp()} function
from the default package \pkg{stats}.
Geometric approaches rely instead on the topology of a dataset,
exploiting the properties of the distances between data points. Within
this family, we can find fractal methods \citep{Falconer2003}, graphical
methods \citep{Costa2004}, model-based likelihood approaches
\citep{Levina}, and methods based on nearest neighbors distances
\citep{Pettis1979}. Also in this case, numerous packages are available:
for example, for fractal methods alone there are \pkg{fractaldim}
\citep{fractaldimpack}, \pkg{nonlinearTseries} \citep{nonlinearTSpack},
and \pkg{tseriesChaos} \citep{tserieschaospack}, among others. For a
recent review of the methodologies used for \texttt{id} estimation we
refer to \citet{Campadelli2015}.
Given the abundance of approaches in this area, several \proglang{R}
developers have also attempted to provide unifying collections of
dimensionality reduction and \texttt{id} estimation techniques. For
example, remarkable ensembles of methodologies are implemented in the
packages \pkg{ider} \citep{Hino2017b}, \pkg{dimred} and \pkg{coRanking}
\citep{dimred}, \pkg{dyndimred} \citep{dyndimred}, \pkg{IDmining}
\citep{Golay2017}, and \pkg{intrinsicDimension}
\citep{intrinsicDimension}. Among these proposals, the package
\pkg{Rdimtools} \citep{rdimtoolspack} stands out, implementing 150
different algorithms, 17 of which are exclusively dedicated to
\texttt{id} estimation \citep{You2020}.
In this paper, we introduce and discuss the \proglang{R} package
\pkg{intRinsic} (version 0.2.0). The package is openly available from
the Comprehensive R Archive Network (CRAN) at
\url{https://CRAN.R-project.org/package=intRinsic}, and can be easily
installed by running
\begin{verbatim}
R> install.packages("intRinsic")
\end{verbatim}
Future developments and updates will be uploaded both on CRAN and on
GitHub at \url{https://github.com/Fradenti/intRinsic}.
The package implements the \texttt{TWO-NN}, \texttt{Gride}, and
\texttt{Hidalgo} models, three state-of-the-art \texttt{id} estimators
recently introduced in \citet{Facco}, \citet{Denti2021}, and \citet{Allegra},
respectively. These methods are likelihood-based estimators that rely on
the theoretical properties of the distances among the nearest neighbors.
The first two models yield an estimate of a global, unique \texttt{id}
of a dataset and are implemented under both the frequentist and Bayesian
paradigms. One can also exploit these models to study how the
\texttt{id} of the dataset depends on the scale of the neighborhood
considered for its estimation. \texttt{Hidalgo}, on the other hand, is a
Bayesian mixture model that allows for the estimation of clusters of
points characterized by heterogeneous \texttt{id}s. In this article, we
focus our attention on the exposition of \texttt{TWO-NN} and
\texttt{Hidalgo}, and we discuss the pros and cons of both models with
the aid of simulated data. More details about the additional routines
implemented in our package, such as \texttt{Gride}, are reported in
Section A of the Appendix. In Section B of the Appendix, we elaborate
more on the strengths and weaknesses of the methods implemented in
\pkg{intRinsic} in comparison to the other existing packages.
Broadly speaking, the package contains two sets of functions, organized
into high-level and low-level routines. The former set contains
user-friendly and straightforward \proglang{R} functions. Our goal is to
make the package as accessible and intuitive as possible by automating
most tasks. The low-level routines represent the core of the package,
and they are not exported. The most computationally-intensive low-level
functions are written in \proglang{C++}, exploiting the interface with
\proglang{R} provided by the packages \pkg{Rcpp} and \pkg{RcppArmadillo}
\citep{Eddelbuettel2011,armad}. The \proglang{C++} implementation
considerably speeds up time-consuming tasks, like running the Gibbs
sampler for the Bayesian mixture model \texttt{Hidalgo}. Moreover,
\pkg{intRinsic} is well integrated with external \proglang{R} packages.
For example, we enriched the package's functionalities by defining ad-hoc
methods for generic functions like \code{autoplot()} from the
\pkg{ggplot2} package \citep{GGPLOT} to produce the appropriate
graphical outputs.
The article is structured as follows. Section \ref{sec:mod} introduces
and describes the theoretical background of the \texttt{TWO-NN} and
\texttt{Hidalgo} methods. Section \ref{sec:examples} illustrates the
basic usage of the implemented routines on simulated data. We show how
to obtain, manipulate, and interpret the different outputs.
Additionally, we assess the robustness of the methods by monitoring how
the results vary when the input parameters change. Section
\ref{sec:alon} presents an application to a well-known real microarray
dataset. Finally, Section \ref{sec:summary} concludes by discussing future
directions and potential extensions to the package.
\section{The modeling background}
\label{sec:mod}
Let \(\bm{X}\) be a dataset with \(n\) data points measured over \(D\)
variables. We denote each observation as \(x_i\in \mathbb{R}^D\).
Despite being observed over a \(D\)-dimensional space, we suppose that
the points lie on a latent manifold \(\mathcal{M}\) with
intrinsic dimension (\texttt{id} from now on) \(d\leq D\). Generally, we
expect that \(d \ll D\). Thus, we postulate that a low-dimensional
data-generating mechanism can accurately describe the dataset.\\
Then, consider a single data point \(x_i\). Starting from this point,
one can order the remaining \(n-1\) observations according to their
distance from \(x_i\). This way, we obtain a list of nearest neighbors
(NNs) of increasing order. Formally, let
\(\Delta:\mathbb{R}^D\times\mathbb{R}^D\rightarrow\mathbb{R}^+\) be a
generic distance function between data points. We denote with
\(x_i^{(l)}\) the \(l\)-th NN of \(x_i\) and with
\(r_{i,l}=\Delta\left(x_{i},x_i^{(l)}\right)\) their distance, for
\(l=1,\ldots, n-1\). Given the sequence of NNs for each data point, we
can define the
\emph{volume of the hyper-spherical shell enclosed between two successive neighbors of}
\(x_i\) as \begin{equation}
\nu_{i,l}=\omega_{d}\left(r_{i,l}^{d}-r_{i,l-1}^{d}\right), \quad \quad \text{for }l=1,\ldots,n-1,\;\text{ and }\;i=1,\ldots,n, \label{eq::HSshell}
\end{equation} where \(d\) is the dimensionality of the latent manifold
in which the points are embedded (the \texttt{id}) and \(\omega_{d}\) is
the volume of the \(d\)-dimensional hyper-sphere with unitary radius.
For this formula to hold, we need to set \(x_{i}^{(0)}\equiv x_{i}\) and
\(r_{i,0}=0\). Considering the two-dimensional case for simplicity, we
provide a visual representation of the quantities involved in Figure
\ref{fig:concentric} for \(l=1,2\).
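For reference, the constant \(\omega_{d}\) admits the standard closed form
\begin{equation*}
\omega_{d}=\frac{\pi^{d/2}}{\Gamma\left(d/2+1\right)},
\end{equation*}
so that, for example, \(\omega_1=2\), \(\omega_2=\pi\), and \(\omega_3=4\pi/3\).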
From a modeling perspective, we assume the dataset \(\bm{X}\) is a
realization of a Poisson point process characterized by density function
\(\rho\left(x\right)\). \citet{Facco} showed that the hyper-spherical
shells defined in Equation \eqref{eq::HSshell} are the multivariate
extension of the well-known \emph{inter-arrival times}
\citep{Kingman1992}. Moreover, they proved that, under the assumption of
homogeneity of the Poisson point process,
i.e.~\(\rho(x)=\rho\:\: \forall x\), all the \(\nu_{i,l}\)'s are
independently drawn from an Exponential distribution with rate equal to
the density \(\rho\): \(\nu_{i,l}\sim Exp(\rho)\), for
\(l =1,\ldots,n-1,\) and \(i=1,\dots,n\). This fundamental result
motivates the derivation of the estimators we will introduce in the
following sections.
\begin{figure}[t!]
\centering
\includegraphics[scale=.6]{images/concentriche.png}
\caption{Pictorial representation in $\mathbb{R}^2$ of the first two NNs of point $x_i$ and the corresponding distances. In two dimensions, the volume of the hyper-spherical shells $\nu_{i,1}$ and $\nu_{i,2}$ is equal to the surfaces of the first circle and the outer ring, respectively.}
\label{fig:concentric}
\end{figure}
\subsection{The TWO-NN estimator}
\label{sec:twonn}
Building on the distribution of the hyper-spherical shells,
\citet{Facco} noticed that, if the intensity of the Poisson point
process is assumed to be constant on the scale of the second NN, the
following distributional result holds: \begin{equation}
\mu_{i,1,2} = \frac{r_{i,2}}{r_{i,1}} \sim Pareto(1,d), \quad \quad \mu_{i,1,2} \in \left(1,+\infty\right), \quad \quad i=1,\ldots,n.
\label{eq::firstresult}
\end{equation} In other words, if the intensity of the Poisson point
process that generates the data can be regarded as locally constant (on
the scale of the second NN), the ratio of the first two distances from
the closest two NNs is Pareto distributed. Recall that the Pareto random
variable is characterized by a scale parameter \(a\), shape parameter
\(b\), and density function \(f_X(x)=b\,a^b x^{-b-1}\) defined over
\(x\in\left[a,+\infty\right)\). Remarkably, Equation
\eqref{eq::firstresult} states that the ratio \({r_{i,2}}/{r_{i,1}}\)
follows a Pareto distribution with scale \(a=1\) and shape \(b=d\),
i.e., the shape parameter can be interpreted as the \texttt{id} of the
data.\\
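This result follows directly from the exponential distribution of the hyper-spherical shells. Indeed, since \(\nu_{i,1}=\omega_d r_{i,1}^d\) and \(\nu_{i,2}=\omega_d\left(r_{i,2}^d-r_{i,1}^d\right)\) are independent \(Exp(\rho)\) random variables, we can write
\begin{equation*}
\mu_{i,1,2}^{d}=\frac{r_{i,2}^{d}}{r_{i,1}^{d}}=1+\frac{\nu_{i,2}}{\nu_{i,1}}, \qquad \mathbb{P}\left[\mu_{i,1,2}>m\right]=\mathbb{P}\left[\frac{\nu_{i,2}}{\nu_{i,1}}>m^{d}-1\right]=\frac{1}{m^{d}}, \quad m\geq 1,
\end{equation*}
which is the survival function of a \(Pareto(1,d)\) distribution. Notably, the density \(\rho\) cancels out, so the distribution of the ratios depends on the data-generating process only through \(d\).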
One can also attain more general results by considering ratios of
distances with NNs of generic orders. Generalized ratios will be
denoted with \(\mu_{i,n_1,n_2} = \frac{r_{i,n_2}}{r_{i,n_1}}\) for
\(i = 1,\ldots, n\), where \(n_1\) and \(n_2\) are the NN orders,
integer numbers that need to comply with the following constraint:
\(1\leq n_1 < n_2\leq n\). In this paper, we will mainly focus on
methods involving the ratio of the first two NN distances. The
generalized ratios will be mentioned only when discussing the function
\code{compute\_mus()}. Therefore, for notational clarity, we will assume
\(\mu_i = \mu_{i,1,2}\) and \(\bm{\mu}=\left(\mu_i\right)_{i=1}^n\).
Once the vector \(\bm{\mu}\) is computed, we can employ different
estimators for the \texttt{id}. All of the following methods can be
called via the \pkg{intRinsic} function \code{twonn()}. Examples about
its usage can be found in Section \ref{sec:illustration_twonn}.\\
\textbf{Linear Estimator}. \citet{Facco} proposed to estimate the
\texttt{id} via the linearization of the Pareto c.d.f.
\(F({\mu_i})= (1-\mu_i^{-d})\). The estimate \(\hat{d}_{OLS}\) is
obtained as the solution of \begin{equation}
-\log(1-\hat{F}(\mu_{(i)}) )= d\log(\mu_{(i)}),
\label{regression}
\end{equation} where \(\hat{F}(\cdot)\) denotes the empirical c.d.f. of
the sample and the \(\mu_{(i)}\)'s are the ratios defined in Equation
\eqref{eq::firstresult} sorted by increasing order. To obtain a more
robust estimation, the authors suggested trimming from \(\bm{\mu}\) a
percentage \(c_{TR}\) of the most extreme values. This choice is
justified because the extreme ratios often correspond to observations
that do not comply with the local homogeneity assumption.\\
\textbf{Maximum Likelihood Estimator}. In a similar spirit,
\cite{Denti2021} took advantage of the distributional results in
Equation \eqref{eq::firstresult} to derive a simple Maximum Likelihood
Estimator (MLE) and corresponding Confidence Interval (CI). Trivially,
the (unbiased) MLE for the shape parameter of a Pareto distribution is
given by: \begin{equation}
\hat{d} = \frac{n-1}{\sum_{i=1}^n\log(\mu_i)},
\label{MLE:TWONN}
\end{equation} while the corresponding CI of level \(1-\alpha\) is
defined as
\begin{equation}
CI(d,1-\alpha)=\left[\frac{\hat{d}}{q^{1-\alpha/2}_{IG_{n,(n-1)}}},\frac{\hat{d}}{q^{\alpha/2}_{IG_{n,(n-1)}}}\right],
\label{MLE:CI}
\end{equation} where \(q^{\alpha/2}_{IG_{a,b}}\) denotes the quantile of
order \(\alpha/2\) of an Inverse-Gamma distribution of shape \(a\) and
scale \(b\).\\
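The derivation of Equations \eqref{MLE:TWONN} and \eqref{MLE:CI} is straightforward. Since each \(\log(\mu_i)\sim Exp(d)\), the log-likelihood of the sample is
\begin{equation*}
\ell(d)=n\log d-(d+1)\sum_{i=1}^{n}\log(\mu_i), \qquad \frac{\partial\ell(d)}{\partial d}=\frac{n}{d}-\sum_{i=1}^{n}\log(\mu_i),
\end{equation*}
whose root is \(n/\sum_{i=1}^{n}\log(\mu_i)\). Moreover, \(\sum_{i=1}^{n}\log(\mu_i)\sim Gamma(n,d)\), which implies that the root has expectation \(nd/(n-1)\): rescaling by \((n-1)/n\) yields the unbiased estimator \(\hat{d}\). The same distributional fact gives \(\hat{d}/d\sim IG_{n,(n-1)}\), from which the quantiles in Equation \eqref{MLE:CI} follow.\\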
\textbf{Bayesian Estimator}. It is also straightforward to derive an
estimator according to a Bayesian perspective \citep{Denti2021}. Indeed,
one can specify a prior distribution on the shape parameter \(d\). The
most natural prior to choose is a conjugate \(d\sim Gamma(a,b)\). It is
immediate to derive the posterior distribution for the shape parameter:
\begin{equation}
d|\bm{\mu} \sim Gamma\left(a+n, b+\sum_{i=1}^n \log(\mu_i)\right).
\end{equation} With this method, one can obtain the principal quantiles
of the posterior distribution, collecting point estimates and
uncertainty quantification with a credible interval (CrI) of level
\(\alpha\).
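The conjugate update is immediate to verify:
\begin{equation*}
\pi(d|\bm{\mu})\propto d^{a-1}e^{-bd}\prod_{i=1}^{n}d\,\mu_i^{-(d+1)}\propto d^{\,a+n-1}\exp\left\{-d\left(b+\sum_{i=1}^{n}\log(\mu_i)\right)\right\},
\end{equation*}
which is the kernel of the Gamma posterior stated above. For instance, the posterior mean \((a+n)/(b+\sum_{i=1}^{n}\log(\mu_i))\) provides a natural point estimate of the \texttt{id}.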
\subsection{Hidalgo: the heterogeneous intrinsic dimension algorithm}
\label{sec:hidalgo}
The estimators we described in the previous sections implicitly assume
that the \texttt{id} of a dataset is unique. However, considering a
unique value for the \texttt{id} can often be limiting, especially when
the data present complex dependence structures among variables. To
extend the previous modeling framework, one can imagine that the data
points are divided into clusters, each of them belonging to a latent
manifold with its specific \texttt{id}. Allegra et al.~(2020) employed
this heterogeneous \texttt{id} estimation approach in their model: the
heterogeneous \texttt{id} algorithm (\texttt{Hidalgo}). The authors
proposed as density function of the generating point process a mixture
of \(K\) distributions defined on \(K\) different latent manifolds,
expressed as
\(\rho\left(\bm{x}\right) = \sum_{k=1}^{K}\pi_k\rho_k\left(\bm{x}\right)\),
where \(\bm{\pi}=\left(\pi_1,\ldots,\pi_K\right)\) is the vector of
mixture weights. This assumption induces a mixture of Pareto
distributions as the distribution of the ratios \(\mu_i\)'s:
\begin{equation}
f(\mu_i|\bm{d},\bm{\pi}) = \sum_{k=1}^{K}\pi_k \: d_k \mu_i^{-(d_k+1)}, \quad \quad i=1,\ldots,n,
\label{hid}
\end{equation} where \(\bm{d}=\left(d_1,\ldots,d_K\right)\) is the
vector of \texttt{id} parameters. \cite{Allegra} adopted a Bayesian
perspective, specifying independent Gamma priors for each element of
\(\bm{d}:\) \(d_k\sim Gamma(a_{d},b_{d})\), and a Dirichlet prior for
the mixture weights
\(\bm{\pi}\sim Dirichlet(\alpha_1,\ldots, \alpha_K)\). Regarding the
latter, we suggest setting \(\alpha_1=\ldots=\alpha_K=\alpha<0.05\),
fitting a sparse mixture model as suggested by
\citet{Malsiner-Walli2017}. This prior allows the data to populate only
the necessary number of mixture components. Thus, the value \(K\) is
seen as an upper bound on the number of active clusters \(K^*<K\).
Unfortunately, a model-based clustering approach like the one presented
in Equation \eqref{hid} is ineffective at recovering the clustering structure of the data. The problem
lies in the fact that the different Pareto kernel densities constituting
the mixture components are extremely similar and overlapping. Therefore,
Pareto densities with varying shape parameters can fit the same data
points equally well, compromising the clustering. Even when considering
very diverse shape parameters, the right tails of the different Pareto
distributions overlap to a great extent. This issue is evident in Figure
\ref{fig:pareto}, where we illustrate different examples of
\(Pareto(1,d)\) densities. Thus, the distributional similarity
jeopardizes the cluster assignments of the data points and the
consequent \texttt{id}s estimation.
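The severity of this phenomenon can be quantified. A direct computation shows that the Kullback--Leibler divergence between two Pareto kernels is
\begin{equation*}
KL\left(Pareto(1,d_1)\,\|\,Pareto(1,d_2)\right)=\log\left(\frac{d_1}{d_2}\right)+\frac{d_2}{d_1}-1,
\end{equation*}
a quantity that remains small even for markedly different shape parameters: for example, it is approximately equal to \(0.09\) when \(d_1=2\) and \(d_2=3\).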
\begin{figure}
\centering
\includegraphics[scale=.15]{images/paretos.png}
\caption{Density functions of $Pareto(1,d)$ distribution for different values of the shape parameter $d$.}
\label{fig:pareto}
\end{figure}
To address this problem, \citet{Allegra} introduced a local homogeneity
assumption, which postulates that points close to each other are more
likely to be part of the same latent manifold. To incorporate this, the
authors added an extra penalizing term in the likelihood. We now
summarize their approach.\\
First, they introduced the latent membership labels
\(\bm{z}=(z_1,\ldots,z_n)\) to assign each observation to a cluster,
where \(z_i=k\) means that the \(i\)-th observation was assigned to the
\(k\)-th mixture component. Then, they defined the binary adjacency
matrix \(\mathcal{N}^{(q)}\), whose entries are
\(\mathcal{N}_{ij}^{(q)}=1\) if the point \(x_j\) is among the \(q\) NNs
of \(x_i\) and 0 otherwise. Finally, they assumed the following
probabilities:
\(\mathbb{P}\left[\mathcal{N}_{ij}^{(q)}=1|z_i=z_j\right]=\zeta_1\),
with \(\zeta_1>0.5\) and
\(\mathbb{P}\left[\mathcal{N}_{ij}^{(q)}=1|z_i\neq z_j\right]=\zeta_0\),
with \(\zeta_0<0.5\). These probabilities are incorporated in the
following distributional constraints for the data point \(x_i\):
\(\pi(\mathcal{N}_{i}^{(q)}|\bm{z})=\prod_{j=1}^n \zeta_0^{\mathds{1}_{z_i\neq z_j}}\zeta_1^{\mathds{1}_{z_i=z_j}}/\mathcal{Z}_i\),
where \(\mathcal{Z}_i\) is the normalizing constant and
\(\mathds{1}_{A}\) is the indicator function, equal to 1 when the event
\(A\) is true, 0 otherwise. A more technical discussion of this model
extension and the validity of the underlying hypothesis can be found in
the Supplementary Material of \citet{Allegra}. For simplicity, we assume
\(\zeta_0=\zeta\) and \(\zeta_1= 1-\zeta\). The model becomes
\begin{equation}
\mathcal{L}\left(\bm{\mu},\mathcal{N}^{(q)}|\bm{d},\bm{z},\zeta\right) = \prod_{i=1}^{n} d_{z_i} \mu_i^{-(d_{z_i}+1)}\times \prod_{i=1}^{n}\frac{ \prod_{j=1}^{n}\zeta^{\mathds{1}_{z_i\neq z_j}}(1-\zeta)^{\mathds{1}_{z_i=z_j}}}{\mathcal{Z}_i}, \quad \quad z_i|\bm{\pi} \sim Cat_{K}(\bm{\pi}),
\label{MODpara}
\end{equation} where \(Cat_{K}\) denotes a Categorical distribution over
the set \(\{1,\ldots,K\}\). A closed-form for the posterior distribution
is not available, so we rely on MCMC techniques to simulate a posterior
sample.\\
The function \code{Hidalgo()} implements the Bayesian mixture model
under the conjugate prior and two alternative prior specifications. When
the nominal dimension \(D\) is low, the unbounded support of a Gamma
prior may provide unrealistic results, where the posterior distribution
assigns positive density to the interval \(\left(D,+\infty\right)\).
\citet{Santos-Fernandez2020} proposed to employ a more informative prior
for \(\bm{d}\): \begin{equation}
\pi(d_k) = \hat{\rho}\cdot d_k^{a-1} e^{-b d_k} \frac{\mathds{1}_{(0,D]}(d_k)}{\mathcal{C}_{a,b,D}} + (1-\hat\rho)\cdot \delta_D(d_k) \quad \forall k,
\label{mixd}
\end{equation} where they denoted the normalizing constant of a
\(Gamma(a,b)\) truncated over \(\left(0,D\right]\) with
\(\mathcal{C}_{a,b,D}\). That is, the prior distribution for \(d_k\) is
a mixture between a truncated Gamma distribution over
\(\left(0,D\right]\) and a point mass located at \(D\). The parameter
\(\hat{\rho}\) denotes the mixing proportion. When \(\hat{\rho}=1\), the
distribution in \eqref{mixd} reduces to a simple truncated Gamma. Both
approaches are implemented in \pkg{intRinsic}, but we recommend using
the latter. We report the details of the implemented Gibbs sampler in
Section C of the Appendix.
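To give the flavor of the algorithm, note that, conditionally on the cluster labels \(\bm{z}\), the update of each \texttt{id} parameter retains the conjugate structure discussed in Section \ref{sec:twonn}: under the \(Gamma(a_{d},b_{d})\) prior, the full conditional is
\begin{equation*}
d_k\,|\,\bm{\mu},\bm{z}\sim Gamma\left(a_{d}+n_k,\;b_{d}+\sum_{i:z_i=k}\log(\mu_i)\right), \qquad n_k=\#\{i:z_i=k\},
\end{equation*}
suitably adjusted when the prior in Equation \eqref{mixd} is adopted. The labels \(z_i\), in turn, are sampled from discrete full conditionals that combine the Pareto kernels with the neighborhood penalty in Equation \eqref{MODpara}.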
\section{Examples using {intRinsic}}
\label{sec:examples}
\label{sec:applicat}
This section illustrates and discusses the main routines of the
\pkg{intRinsic} package. In the following, we will indicate the number
of observations and the observed nominal dimension with \texttt{n} and
\texttt{D}, respectively. Let us also represent with
\(\mathcal{N}_k(m,\Sigma)\) a multivariate Normal distribution of
dimension \(k\), mean \(m\), and covariance matrix \(\Sigma\). Moreover,
let \(\mathcal{U}^{(k)}_{\left(a,b\right)}\) represent a multivariate
Uniform distribution with support \(\left(a,b\right)\) in \(k\)
dimensions. Finally, we denote with \(\mathbb{I}_k\) an identity matrix
of dimension \(k\). We first load our package by running:
\begin{verbatim}
R> library(intRinsic)
\end{verbatim}
\subsection{Simulated Datasets}
To illustrate how to apply the different \texttt{id} estimation
techniques available in the package, we will make use of three simulated
datasets: \texttt{Swissroll}, \texttt{HyperCube}, and \texttt{GaussMix}.
This way, we can compare the results of the experiments with the ground
truth. One can generate exact replicas of the three simulated
datasets used in this paper by running the code reported below.
The first dataset, \texttt{Swissroll}, is obtained via the classical
Swissroll transformation
\(\mathcal{S}:\mathbb{R}^2\rightarrow\mathbb{R}^3\) defined as
\(\mathcal{S}(x,y)=(x\cos(x),y,x\sin(x))\). We sample all the pairs of
points \(\left(x,y\right)\) from two independent Uniform distributions
on \(\left(0,10\right)\). To simulate such a dataset, we can use the
function \code{Swissroll()} in \pkg{intRinsic}, specifying the number of
observations \texttt{n} as input parameter.
\begin{verbatim}
R> set.seed(123456)
R> # Swissroll dataset
R> Swissroll <- Swissroll(n = 1000)
\end{verbatim}
The second dataset, \texttt{HyperCube}, contains a cloud of points
sampled from \(\mathcal{U}^{(5)}_{\left(0,1\right)}\) embedded in an
eight-dimensional space \(\mathbb{R}^8\).
\begin{verbatim}
R> # Five dimensional uniform distribution
R> HyperCube <- replicate(5, runif(500))
R> HyperCube <- cbind(HyperCube, 0, 0, 0)
R> colnames(HyperCube) <- paste0("V", 1:8)
R> HyperCube <- data.frame(HyperCube)
\end{verbatim}
Lastly, the dataset \(\texttt{GaussMix}\) is the collection of data
points generated from three different random variables, \(X_1, X_2\),
and \(X_3\), embedded in a five-dimensional Euclidean space. The three
random variables are distributed as \begin{equation*}
X_1=3X_0, \:\: X_0\sim \mathcal{N}_1(-5,1);\quad X_2\sim \mathcal{N}_3(0,\mathbb{I}_3); \quad
X_3 \sim \mathcal{N}_5(5,\mathbb{I}_5).
\end{equation*}
\begin{verbatim}
R> # Mixture of three Gaussians
R> v1 <- rnorm(500, mean = -5, sd = 1)
R> v1 <- data.frame(V1 = v1, V2 = 3 * v1, V3 = 0, V4 = 0, V5 = 0)
R> v2 <- cbind(replicate(3, stats::rnorm(500)), 0, 0)
R> v3 <- replicate(5, stats::rnorm(500, 5))
R> v4 <- data.frame(rbind(v1, v2, v3))
R> GaussMix <- data.frame(v4)
R> class_GMix <- rep(c("A", "B", "C"), rep(500, 3))
\end{verbatim}
Next, we need to establish the true values of the \texttt{id}
for the different datasets. Scatterplots are useful exploratory tools to
spot any clear dependence across the columns of a dataset. For example,
if we plot all the possible two-dimensional scatterplots from the
\texttt{Swissroll} dataset, we obtain Figure \ref{fig:scattSwiss}. From
the different panels, it is evident that two of the three coordinates
are free, and we can recover the last coordinate as a function of the
others. Therefore, the \texttt{id} of \texttt{Swissroll} is equal to 2.
Moreover, from the description of the data simulation, it is also
evident that the \texttt{id} is equal to 5 for the \texttt{HyperCube} dataset.
However, it is not as simple to figure out the value of the true
\texttt{id} for \(\texttt{GaussMix}\), given the heterogeneity of its
data generating mechanism. Table \ref{tab:data} summarizes the sample
sizes along with the true nominal dimensions \texttt{D} and \texttt{id}s
that characterize the three datasets.
\begin{figure}[t!]
\centering
\includegraphics[height=10cm,width=10cm]{images/Swisspair.pdf}
\caption{Scatterplots of the three variables in the \texttt{Swissroll} dataset. The dependence among the coordinates $x$ and $z$ is evident.}
\label{fig:scattSwiss}
\end{figure}
\begin{table}[t!]
\centering
\begin{tabular}{lccc}
\toprule
Name & \texttt{n} & \texttt{D} & \texttt{id}\\
\midrule
\texttt{Swissroll} & 1000 & 3 & 2 \\
\texttt{HyperCube} & 500 & 8 & 5 \\
\texttt{GaussMix} & 1500 & 5 & ? \\
\bottomrule
\end{tabular}
\caption{Summary of the dimensionality characteristics of the three simulated datasets.}
\label{tab:data}
\end{table}
\subsection{Ratios of nearest neighbors distances}
\label{sec:computemus}
The ratios between distances of NNs constitute the core quantities on
which the theoretical development presented in Section \ref{sec:mod} is
based. We can compute the ratios \(\mu_i\) defined in Equation
\eqref{eq::firstresult} with the function \code{compute\_mus()}.
However, note that the function can compute more general ratios
\(\bm{\mu}_{n_1,n_2}\), where \texttt{n1 < n2}, as presented in Section
\ref{sec:twonn}. The function accepts the following arguments:
\begin{itemize}
\item \texttt{X}: a dataset of dimension \texttt{n}$ \times$\texttt{D} of which we want to compute the distance ratios;
\item \texttt{dist\_mat}: a \texttt{n}$ \times$\texttt{n} matrix containing the distances between data points;
\item \texttt{n1}: the order of the closest nearest neighbor to consider. As default, \texttt{n1 = 1};
\item \texttt{n2}: the order of the furthest nearest neighbor to consider. As default, \texttt{n2 = 2}.
\end{itemize}
In addition to this list, there are two remaining arguments, \texttt{Nq}
and \texttt{q}. They will be introduced later in Section
\ref{sec:hidalgo_app}, when we discuss the application of
\texttt{Hidalgo}. Note that the specification of \texttt{dist\_mat}
overrides the argument passed as \texttt{X}. Instead, if the distance
matrix \texttt{dist\_mat} is not specified, \code{compute\_mus()}
relies on the function \code{get.knn()} from the package \pkg{FNN}
\citep{FNN}, which implements fast NN-search algorithms on the original
dataset.
The main output of the function is the vector of ratios
\(\bm{\mu}_{n_1,n_2}\). To use the function, we can easily write:
\begin{verbatim}
R> mus_Swissroll <- compute_mus(X = Swissroll)
R> mus_HyperCube <- compute_mus(X = HyperCube)
R> mus_GaussMix <- compute_mus(X = GaussMix)
\end{verbatim}
Calling the function with default arguments produces
\(\bm{\mu}=\bm{\mu}_{1,2}\). To explicitly compute generalized ratios,
we need to specify the NN orders \texttt{n1} and \texttt{n2}. Here, we
report two different examples:
\begin{verbatim}
R> mus_Swissroll_1 <- compute_mus(X = Swissroll, n1 = 5, n2 = 10)
R> mus_HyperCube_1 <- compute_mus(X = HyperCube, n1 = 5, n2 = 10)
R> mus_GaussMix_1 <- compute_mus(X = GaussMix, n1 = 5, n2 = 10)
\end{verbatim}
and
\begin{verbatim}
R> mus_Swissroll_2 <- compute_mus(X = Swissroll, n1 = 10, n2 = 20)
R> mus_HyperCube_2 <- compute_mus(X = HyperCube, n1 = 10, n2 = 20)
R> mus_GaussMix_2 <- compute_mus(X = GaussMix, n1 = 10, n2 = 20)
\end{verbatim}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{images/Mus_hist_all_orders.png}
\caption{Histograms of the ratios $\bm{\mu}_{n_1,n_2}$ for datasets \texttt{Swissroll}, \texttt{HyperCube}, and $\texttt{GaussMix}$. The shape of the histograms in the first column suggests that a Pareto distribution could be a good fit, according to the \texttt{TWO-NN} model.}
\label{fig:mu_hists}
\end{figure}
The histograms of the computed ratios are reported in Figure
\ref{fig:mu_hists}. The panels in each row correspond to different
datasets, while the effect of the varying NN orders is reported by
column. The histograms are truncated over the interval
\(\left[0,4\right]\) to ease the visualization. We see that the
distributions present the right-skewed shape typical of the Pareto
distribution. However, notice that some ratios could assume extreme
values, especially when low values of NN orders are chosen. To provide
an example, in Table \ref{tab:summary_gmx} we display the summary
statistics of the three vectors of ratios computed on \texttt{GaussMix}.
The maximum in the first row is extremely large, but it decreases
substantially when higher NN orders are considered. It is also interesting to
observe how the distribution for the ratios of \texttt{GaussMix} when
\(\texttt{n1} = 10\) and \(\texttt{n2} = 20\) (bottom-right panel) is
multimodal, a symptom of the presence of heterogeneous manifolds. We
remark again that for the rest of the paper, we will focus on the
\texttt{TWO-NN} and \texttt{Hidalgo} models. Therefore, we will only use
\code{compute\_mus()} in its default specification, simply computing
\(\bm{\mu} = \left(\mu_i\right)_{i=1}^n\). The ratios of NNs of generic
order are necessary when using the \texttt{Gride} model. See Section A
of the Appendix and \cite{Denti2021} for more details.
\begin{table}
\centering
\begin{tabular}[t]{cccccccc}
\toprule
\texttt{n1} & \texttt{n2} & Minimum & 1st quartile & Median & Mean & 3rd quartile & Maximum\\
\midrule
1 & 2 & 1.0002 & 1.1032 & 1.3048 & 18.4239 & 1.9228 & 5874.3666\\
5 & 10 & 1.0234 & 1.1627 & 1.2858 & 1.5344 & 1.7043 & 6.9533 \\
10 & 20 & 1.0472 & 1.1685 & 1.2613 & 1.4987 & 1.7250 & 4.0069 \\
\bottomrule
\end{tabular}
\caption{Summary statistics for the \texttt{GaussMix} dataset. Each row corresponds to a different combination of NN orders.}
\label{tab:summary_gmx}
\end{table}
Finally, recall that the model is based on the assumption that a Poisson
point process is the generating mechanism of the dataset. As a consequence,
the model cannot handle situations where ties among data points are present.
From a more practical point of view, if \(\exists i\neq j\) such that
\(x_i=x_j\), the computation of \(\mu_i\) would be infeasible, since
\(r_{i,1}=0\). We devised the function \code{compute\_mus()} to
automatically detect if duplicates are present in a dataset. In that
case, the function removes the duplicates and provides a warning. We
showcase this behavior with a simple example with 3 unique data points:
\begin{verbatim}
R> Dummy_Data_with_replicates <- rbind(
+ c(1, 2, 3), c(1, 2, 3), # replicates
+ c(1, 4, 3), c(1, 4, 3), # replicates
+ c(1, 4, 5) )
R> mus_shorter <- compute_mus(X = Dummy_Data_with_replicates)
Warning:
Duplicates are present and will be removed.
Original sample size: 5. New sample size: 3.
\end{verbatim}
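The check itself is simple to emulate. The following sketch (a hypothetical helper, not the package's implementation) reproduces the detection-and-removal logic with base \proglang{R}'s \code{duplicated()}:

```r
# Sketch: duplicated rows would give r_{i,1} = 0, making mu_i undefined,
# so we drop them and warn, mimicking the behavior of compute_mus().
drop_ties <- function(X) {
  dup <- duplicated(X)                    # row-wise duplicate detection
  if (any(dup)) {
    warning(sprintf("Original sample size: %d. New sample size: %d.",
                    nrow(X), sum(!dup)))
    X <- X[!dup, , drop = FALSE]
  }
  X
}
Dummy <- rbind(c(1, 2, 3), c(1, 2, 3),
               c(1, 4, 3), c(1, 4, 3),
               c(1, 4, 5))
cleaned <- suppressWarnings(drop_ties(Dummy))
```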
The function \code{compute\_mus()} is at the core of many other
higher-level routines we use to estimate the \texttt{id}. In the
following subsection, we show how to implement the \texttt{TWO-NN} model
to obtain a point estimate of a global, homogeneous \texttt{id}
accompanied by the corresponding confidence intervals (CI) or credible
intervals (CrI).
\subsection{Estimating a global intrinsic dimension with TWO-NN}
\label{sec:illustration_twonn}
In line with Section \ref{sec:twonn}, we propose
three methods to carry out inference on the \texttt{id}: the linear
estimator, the MLE, and the Bayesian approach. The low-level functions that
implement these methods are \code{twonn\_linfit()}, \code{twonn\_mle()},
and \code{twonn\_bayes()}, respectively. These low-level functions can
be called via the high-level function \code{twonn()}. Regardless of the
preferred estimation method, the \code{twonn()} function takes the
following arguments: the dataset \texttt{X} or the distance matrix
\texttt{dist\_mat} (refer to previous arguments specification for more
details), along with
\begin{itemize}
\item \texttt{mus}: the vector of second-to-first NN distance ratios. If this argument is provided, \texttt{X} and \texttt{dist\_mat} will be ignored;
\item \texttt{method}: a string stating the preferred estimation method. Could be the maximum likelihood estimator (\texttt{"mle"}, the default), the estimation via least squares approach (\texttt{"linfit"}), or the estimation via Bayesian approach (\texttt{"bayes"});
\item \texttt{alpha}: the confidence level (for \texttt{"mle"} and \texttt{"linfit"}) or posterior probability included in the credible interval (\texttt{"bayes"});
\item \texttt{c\_trimmed}: the proportions of most extreme ratios excluded from the analysis.
\end{itemize}
The object that the function returns is a list. The first element of the
list always contains the estimates, while the others provide information
about the chosen estimation process. The returned list is characterized
by a class that varies according to the selected estimation method.
Tailored \proglang{R} methods have been devised to extend the generic
functions \code{print()} and \code{autoplot()}.
\subsubsection{Linear Estimator}
Here, we apply the linear estimator to the \texttt{Swissroll} dataset.
As an example, we fit five linear models with different trimming
proportions by setting \texttt{method = "linfit"}.
Printing the output provides a quick summary of the estimation.
\begin{verbatim}
R> lin_1 <- twonn(X = Swissroll, method = "linfit", c_trimmed = 0)
R> lin_2 <- twonn(X = Swissroll, method = "linfit", c_trimmed = 0.001)
R> lin_3 <- twonn(X = Swissroll, method = "linfit", c_trimmed = 0.01)
R> lin_4 <- twonn(X = Swissroll, method = "linfit", c_trimmed = 0.05)
R> lin_5 <- twonn(X = Swissroll, method = "linfit", c_trimmed = 0.1)
R> # Example of output:
R> lin_2
Model: TWO-NN
Method: Least Square Estimation
Sample size: 1000, Obs. used: 999. Trimming proportion: 0.1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 1.986659| 1.999682| 2.012705|
\end{verbatim}
The results of these experiments are collected in Table
\ref{tab:res_lin}. This first example allows us to comment on the choice
of the trimming level. Trimming the most extreme observations can be
fundamental, since outliers may distort the estimate. However, excessive
trimming removes important information about the tail of the Pareto
distribution, which is essential for the estimation. As Table
\ref{tab:res_lin} shows, a very small amount of trimming (0.1\%) slightly
improves the estimate, but the estimates progressively deteriorate as
larger proportions of observations are removed from the dataset.
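To make the mechanics of the linear estimator concrete, the following sketch implements the least-squares fit from scratch on simulated Pareto ratios with true \texttt{id} equal to 2. The helper name and the use of the plotting positions \(i/(n+1)\) for the empirical cdf are our own choices; the package's implementation may differ in the details.

```r
# Sketch of the least-squares idea behind method = "linfit": under the
# TWO-NN model, -log(1 - F(mu)) = d * log(mu), so the id is the slope of
# a no-intercept regression on the empirical cdf of the sorted ratios.
linfit_sketch <- function(mu, c_trimmed = 0) {
  mu   <- sort(mu)
  n    <- length(mu)
  keep <- seq_len(floor(n * (1 - c_trimmed)))  # drop the largest ratios
  x <- log(mu[keep])
  y <- -log(1 - keep / (n + 1))                # empirical cdf positions
  unname(coef(lm(y ~ x - 1)))                  # slope = estimated id
}
set.seed(42)
mu_sim <- (1 - runif(2000))^(-1 / 2)  # Pareto-distributed ratios, true id = 2
d_hat  <- linfit_sketch(mu_sim, c_trimmed = 0.01)
```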
\begin{table}[th!]
\centering
\begin{tabular}{lccccc}
\toprule
Trimming percentage & 0\% & 0.1\% & 1\% & 5\% & 10\%\\
\midrule
Lower bound & 1.9524 & 1.9867 & 2.2333 & 2.5709 & 2.9542\\
Estimate & 1.9669 & 1.9997 & 2.2457 & 2.5988 & 2.9974\\
Upper bound & 1.9814 & 2.0127 & 2.2581 & 2.6267 & 3.0407\\
\bottomrule
\end{tabular}
\caption{Point estimate and relative confidence intervals for the \texttt{id} values retrieved from the \texttt{Swissroll} dataset with the linear estimator. Each column displays the estimates for a specific trimming level.}
\label{tab:res_lin}
\end{table}
We can also visually assess the goodness of fit via a dedicated
\code{autoplot()} function, which plots the data and the relative
regression line. The slope of the regression lines corresponds to the
linear fit \texttt{id} estimates. For example, we obtain the plots in
Figure \ref{fig:lin_fit_trim} with the following two lines of code:
\begin{verbatim}
R> autoplot(lin_1, title = "No trimming")
R> autoplot(lin_5, title = "10% trimming")
\end{verbatim}
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{images/lin_fit_trim.png}
\caption{Linear regression fitted to the set of points $ -\log(1-\hat{F}(\mu_i) )= d\log(\mu_i)$ for the \texttt{Swissroll} dataset with no trimming (left panel) and 10\% of trimmed observations (right panel).}
\label{fig:lin_fit_trim}
\end{figure}
\subsubsection{Maximum Likelihood Estimator}
A second way to obtain an \texttt{id} estimate, along with its
confidence interval, is via MLE. The formulas implemented are presented
in Equations \eqref{MLE:TWONN} and \eqref{MLE:CI}. We compute the MLE by
calling the low-level \code{twonn\_mle()} function via
\texttt{method = "mle"}. In addition to the previous arguments, one can
also specify
\begin{itemize}
\item \texttt{unbiased}: logical, if \texttt{TRUE} the point estimate according to the unbiased estimator (where the numerator is $n-1$, as in Equation \eqref{MLE:TWONN}) is computed.
\end{itemize}
As an example, we will compute the \texttt{id} on \texttt{HyperCube} via
MLE using different distance definitions: Euclidean, Manhattan, and
Canberra. These distances are all implemented in the \code{dist()}
function from the \pkg{stats} package.
\begin{verbatim}
R> dist_Eucl_D2 <- as.matrix(stats::dist(HyperCube))
R> dist_Manh_D2 <- as.matrix(stats::dist(HyperCube, method = "manhattan"))
R> dist_Canb_D2 <- as.matrix(stats::dist(HyperCube, method = "canberra"))
\end{verbatim}
Other distance matrices can be employed as well. In this example, we
also show how the bounds of the CIs change by varying the confidence
levels. We write:
\begin{verbatim}
R> mle_11 <- twonn(dist_mat = dist_Eucl_D2)
R> mle_12 <- twonn(dist_mat = dist_Eucl_D2, alpha = .99)
R> mle_21 <- twonn(dist_mat = dist_Manh_D2)
R> mle_22 <- twonn(dist_mat = dist_Manh_D2, alpha = .99)
R> mle_31 <- twonn(dist_mat = dist_Canb_D2)
R> mle_32 <- twonn(dist_mat = dist_Canb_D2, alpha = .99)
R> # Example of output:
R> mle_12
Model: TWO-NN
Method: MLE
Sample size: 500, Obs. used: 495. Trimming proportion: 1%
ID estimates (confidence level: 0.99)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 3.968082| 4.459427| 5.00273|
\end{verbatim}
The results are summarized in Table \ref{tab:MLEtab}. The type of
distance can lead to differences in the results. Overall, the estimators
agree with each other, obtaining values that are close to the ground
truth.
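The closed-form MLE is easy to reproduce outside the package. The sketch below (a hypothetical helper, not \code{twonn\_mle()} itself) computes the unbiased point estimate \((n-1)/S\), with \(S=\sum_{i}\log\mu_i\), together with an exact confidence interval based on the pivotal quantity \(S\,d \sim \mathrm{Gamma}(n,1)\), which holds since \(\log\mu_i\sim \mathrm{Exp}(d)\); the package's internal formulas may differ in the details.

```r
# Sketch of the MLE for the TWO-NN model with an exact confidence interval.
twonn_mle_sketch <- function(mu, alpha = 0.95) {
  n <- length(mu)
  S <- sum(log(mu))                     # sufficient statistic
  c(lower    = qgamma((1 - alpha) / 2, shape = n) / S,
    estimate = (n - 1) / S,             # unbiased estimator (numerator n - 1)
    upper    = qgamma(1 - (1 - alpha) / 2, shape = n) / S)
}
set.seed(42)
mu_sim <- (1 - runif(1000))^(-1 / 4)    # Pareto ratios, true id = 4
est <- twonn_mle_sketch(mu_sim)
```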
\begin{table}[th!]
\centering
\begin{tabular}{lcccccc}
\toprule
\code{dist()} & \multicolumn{2}{c}{Euclidean} & \multicolumn{2}{c}{Manhattan}& \multicolumn{2}{c}{Canberra}\\
$\alpha$ & $0.95$ & $0.99$ & $0.95$ & $0.99$ & $0.95$ & $0.99$\\
\midrule
Lower bound & 4.0834 & 3.9681 & 4.0736 & 3.9585 & 4.6001 & 4.4701\\
Estimate & 4.4594 & 4.4594 & 4.4487 & 4.4487 & 5.0237 & 5.0237\\
Upper bound & 4.8706 & 5.0027 & 4.8588 & 4.9906 & 5.4868 & 5.6357\\
\bottomrule
\end{tabular}
\caption{MLE obtained from the \texttt{TWO-NN} model applied to the \texttt{HyperCube} dataset. Different distance functions and confidence level specifications are adopted.}
\label{tab:MLEtab}
\end{table}
\subsubsection{Bayesian Estimation}
The third option for \texttt{id} estimation is to adopt a Bayesian
perspective and specify a prior distribution for the parameter \(d\). To
obtain the Bayesian estimates, we can call the low-level function
\code{twonn\_bayes()} by setting \texttt{method = "bayes"} in \code{twonn()}.
In addition to the aforementioned arguments, we can also specify:
\begin{itemize}
\item \texttt{a\_d} and \texttt{b\_d}: shape and rate parameters for the Gamma prior distribution on $d$. A vague specification is adopted as a default with \texttt{a\_d = 0.001} and \texttt{b\_d = 0.001}.
\end{itemize}
Unlike in the previous two cases, \texttt{alpha} here denotes the
posterior probability contained in the credible
interval. Along with the credible
interval, the function outputs the posterior mean, median, and mode. In
the following, four examples showcase the usage of this function on the
\texttt{Swissroll} dataset with different combinations of credible
interval levels and prior specifications. A summary of the results is
reported in Table \ref{tab:baytownn}.
\begin{verbatim}
R> bay_1 <- twonn(X = Swissroll, method = "bayes")
R> bay_2 <- twonn(X = Swissroll, method = "bayes", alpha = 0.99)
R> bay_3 <- twonn(X = Swissroll, method = "bayes", a_d = 1, b_d = 1)
R> bay_4 <- twonn(X = Swissroll, method = "bayes", a_d = 1, b_d = 1, alpha = 0.99)
\end{verbatim}
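Thanks to conjugacy, the posterior is available in closed form: with \(d\sim \mathrm{Gamma}(a_d,b_d)\) and \(\log\mu_i\sim \mathrm{Exp}(d)\), one obtains \(d\mid \bm{\mu}\sim \mathrm{Gamma}(a_d+n,\, b_d+S)\), where \(S=\sum_i\log\mu_i\). The sketch below reflects our reading of the model, not the package's code, and computes the posterior summaries returned by \code{twonn()}:

```r
# Sketch of the conjugate update behind method = "bayes".
twonn_bayes_sketch <- function(mu, a_d = 0.001, b_d = 0.001, alpha = 0.95) {
  n <- length(mu); S <- sum(log(mu))
  a_post <- a_d + n; b_post <- b_d + S   # Gamma posterior parameters
  c(lower  = qgamma((1 - alpha) / 2, a_post, b_post),
    mean   = a_post / b_post,
    median = qgamma(0.5, a_post, b_post),
    mode   = (a_post - 1) / b_post,
    upper  = qgamma(1 - (1 - alpha) / 2, a_post, b_post))
}
set.seed(42)
mu_sim <- (1 - runif(1000))^(-1 / 2)     # Pareto ratios, true id = 2
post <- twonn_bayes_sketch(mu_sim)
```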
We can plot the posterior density of the parameter \(d\) using
\code{autoplot()}, as displayed in Figure \ref{fig:bayes}. When plotting
an object of class \texttt{twonn\_bayes}, we can also specify the
following parameters:
\begin{itemize}
\item \texttt{plot\_low} and \texttt{plot\_upp}: lower and upper extremes of the support on which the posterior is evaluated;
\item \texttt{by}: increment of the sequence going from \texttt{plot\_low} to \texttt{plot\_upp} that defines the support.
\end{itemize}
As an example, we compare the prior specification used for the object
\texttt{bay\_4} (\(d\sim Gamma(1,1)\)) with a more informative one
(\(d\sim Gamma(10,10)\)) by writing:
\begin{verbatim}
R> bay_5 <- twonn(X = Swissroll,method = "bayes", a_d = 10, b_d = 10, alpha = 0.99)
R> # Example of output
R> bay_5
Model: TWO-NN
Method: Bayesian Estimation
Sample size: 1000, Obs. used: 990. Trimming proportion: 1%
Prior d ~ Gamma(10, 10)
Credible Interval quantiles: 0.005, 0.995.
Posterior ID estimates:
| Lower Bound| Mean| Median| Mode| Upper Bound|
|-----------:|--------:|--------:|--------:|-----------:|
| 1.991901| 2.164113| 2.163391| 2.161949| 2.344453|
\end{verbatim}
The posterior distribution is depicted in black, the prior in blue, and
the dashed vertical red lines represent the estimates.
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{images/baytwonn.png}
\caption{\texttt{Swissroll} dataset. Graphical representation of the posterior distribution (black line), prior distribution (blue line), and main quantiles and average (vertical dotted red lines) under $d\sim Gamma(1,1)$ (left panel) and $d\sim Gamma(10,10)$ (right panel) prior specifications.}
\label{fig:bayes}
\end{figure}
\begin{table}[th!]
\centering
\begin{tabular}{lcccc}
\toprule
Prior & \multicolumn{2}{c}{Default} & \multicolumn{2}{c}{$d\sim Gamma(1,1)$}\\
$\alpha$ & $0.95$ & $0.99$ & $0.95$ & $0.99$\\
\midrule
Lower bound & 2.0556 & 2.0147 & 2.0532 & 2.0124\\
Mean & 2.1899 & 2.1899 & 2.1872 & 2.1872\\
Median & 2.1891 & 2.1891 & 2.1865 & 2.1865\\
Mode & 2.1876 & 2.1876 & 2.1850 & 2.1850\\
Upper bound & 2.3284 & 2.3733 & 2.3255 & 2.3703\\
\bottomrule
\end{tabular}
\caption{Posterior estimates under the Bayesian specification according to different prior specifications and levels $\alpha$ of the credible interval.}
\label{tab:baytownn}
\end{table}
So far, we have discussed methods to accurately and efficiently determine a
global estimate of a dataset's \texttt{id}. Knowing the simulated
data-generating process, we could easily compare the obtained estimates with
the ground truth for the \texttt{Swissroll} and \texttt{HyperCube}
datasets. However, the same task is not immediate when dealing with
\texttt{GaussMix}. For this dataset, it is unclear what value should
represent the true \texttt{id}, because the data points in
\texttt{GaussMix} are generated from Gaussian distributions supported on
manifolds of heterogeneous dimensions. This scenario is
more likely to occur with datasets describing real phenomena, often
characterized by complex dependencies, and it will be the focus of the
next section.
\subsection{Detecting manifolds with heterogeneous intrinsic dimensions using Hidalgo}
\label{sec:hidalgo_app}
\subsubsection{Detecting the presence of multiple manifolds}
When the data may exhibit heterogeneous \texttt{id}s, we
face two main challenges: (i) detecting the actual presence of
multiple manifolds in the data, and (ii) accurately estimating their
\texttt{id}s. To tackle these problems, we start by applying the
\code{twonn()} function to \texttt{GaussMix} with \texttt{method} equal
to \texttt{"linfit"} and \texttt{"mle"}.
\begin{verbatim}
R> mus_gm <- compute_mus(GaussMix)
R> twonn(mus = mus_gm, method = "linfit")
Model: TWO-NN
Method: Least Square Estimation
Sample size: 1500, Obs. used: 1485. Trimming proportion: 1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 1.438657| 1.456325| 1.473992|
R> twonn(mus = mus_gm, method = "mle")
Model: TWO-NN
Method: MLE
Sample size: 1500, Obs. used: 1485. Trimming proportion: 1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 1.739329| 1.830063| 1.925601|
\end{verbatim}
The estimates obtained with the different methods do not agree. Figure
\ref{fig:lin_mix} raises concerns about the appropriateness of a model
postulating the existence of a single, global manifold. In the top panel
of Figure \ref{fig:lin_mix}, the data points are colored according to
their generating mixture component. The sorted log-ratios present a
non-linear pattern, where the slope values vary among the different
mixture components \texttt{A}, \texttt{B}, and \texttt{C}.
\begin{figure}[t!]
\centering
\includegraphics[width = .9\linewidth]{images/gaussmixLinfit.pdf}
\caption{Linear estimators applied to the \texttt{GaussMix} dataset. In the top panel, the linear estimator is applied to the entire dataset. The points are colored according to the mixture component from which they originate, and the corresponding colored dashed lines represent the linear estimators applied to the subsets of points relative to each mixture component. The bottom three panels report the estimates within each single mixture component.}
\label{fig:lin_mix}
\end{figure}
Another empirical assessment can be carried out by inspecting the
evolution of the cumulative average distances between a point and its
nearest neighbors. In other words, for each point \(x_i\) we consider
the running mean of the sorted sequence of distances
\(\left(r_{i,1},\ldots,r_{i,n-1}\right)\), given by
\(\bar{r}_i(J) = \frac{1}{J}\sum_{j=1}^{J} r_{i,j}\), for every \(i\) and \(J\). Figure
\ref{fig:evolution} displays the contrast between the \texttt{HyperCube}
(left panel -- exemplifying the homogeneous case) and \texttt{GaussMix}
(right panel -- representing the heterogeneous case) datasets. On the
one hand, the left plot displays the ideal condition: there are no
visible departures from the overall mean distance, which reassures us
about the homogeneity assumption we made about the data. But, on the
other hand, the right panel tells a different story. We immediately
detect the different clusters in the data by focusing on their different
starting values. The ergodic means remain approximately constant until
the 500-th NNs, which corresponds to the size of each subgroup. The
behavior of the cumulative means abruptly changes after the 500-th NN,
negating the presence of a unique manifold. This type of plot provides
an empirical but valuable overview of the structure between data points,
highlighting the presence of clusters that may be reflecting the
presence of multiple manifolds.
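This diagnostic is straightforward to recompute. The sketch below (a hypothetical helper of our own) evaluates, for every observation, the running mean \(\bar{r}_i(J)\) of its sorted NN distances:

```r
# Sketch: cumulative means of sorted NN distances, one row per observation.
cum_mean_nn <- function(X) {
  D <- as.matrix(dist(X))
  t(apply(D, 1, function(row) {
    r <- sort(row[row > 0])          # distances to the NNs, increasing
    cumsum(r) / seq_along(r)         # running means, nondecreasing in J
  }))
}
set.seed(1)
X  <- matrix(rnorm(40), ncol = 2)    # 20 points in 2 dimensions
cm <- cum_mean_nn(X)                 # 20 x 19 matrix of running means
# matplot(t(cm), type = "l")         # one trajectory per observation
```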
These visual assessments help detect signs of inappropriateness of the
global \texttt{id} assumption. The most immediate approach to adopt in
this case would be to divide the dataset into homogeneous subgroups and
apply the \texttt{TWO-NN} estimator within each cluster. Such an
approach is highlighted in the bottom panels of Figure
\ref{fig:lin_mix}. However, knowing ex-ante such well-separated groups
is not a realistic expectation to have about actual data, and therefore
we will rely on \code{Hidalgo()}, the Bayesian finite mixture model for
heterogeneous \texttt{id} estimation described in Section
\ref{sec:hidalgo}.
\begin{figure}[ht!]
\centering
\includegraphics[width = \linewidth]{images/cummeans.png}
\caption{Evolution of the cumulative means of NN distances computed for all the observations in the \texttt{HyperCube} (left panel) and \texttt{GaussMix} (right panel) datasets. In the right panel, the colors highlight the different mixture components.}
\label{fig:evolution}
\end{figure}
\subsubsection{Fitting the Hidalgo model}
\texttt{Hidalgo} addresses the presence of multiple manifolds in the
same dataset, yielding a vector of different estimated \texttt{id}
values \(\bm{d}\). As already discussed, estimating a mixture model with
Pareto components is challenging because of their extensive overlap. A
naive model-based estimation can lead to inaccurate results since there
is no clear separation between the kernel densities. Therefore, we
modify the classical Bayesian mixture using the likelihood stated in
Equation \eqref{MODpara}. By introducing the extra term
\(\prod_{i=1}^n\pi(\mathcal{N}_{i}^{(q)}|\bm{z})\) into the likelihood,
we can induce local homogeneity, which helps identify the model
parameters.
The adjacency matrix \(\mathcal{N}^{(q)}\) can be easily computed by
specifying two additional arguments in the function
\code{compute\_mus()}:
\begin{itemize}
\item \texttt{Nq}: logical, if \texttt{TRUE}, the function adds the adjacency matrix to the output;
\item \texttt{q}: integer, the number of NNs to be considered in the construction of the matrix $\mathcal{N}^{(q)}$. The default value is 3.
\end{itemize}
To provide an idea of the structure of the adjacency matrix
\(\mathcal{N}^{(q)}\), we report three examples obtained from a random
sub-sample of the \texttt{GaussMix} dataset for increasing values of
\texttt{q}. We display the heatmaps of the resulting matrices in Figure
\ref{fig:Nqs}.
\begin{verbatim}
R> set.seed(12345)
R> ind <- sort(sample(1:1500, 100, F))
R> Nq1 <- compute_mus(GaussMix[ind, ], Nq = T, q = 1)$NQ
R> Nq2 <- compute_mus(GaussMix[ind, ], Nq = T, q = 5)$NQ
R> Nq3 <- compute_mus(GaussMix[ind, ], Nq = T, q = 10)$NQ
\end{verbatim}
As \texttt{q} increases, the binary matrix becomes more densely populated,
uncovering the neighborhood structure of the data points. \cite{Allegra}
investigated how the performance of the model changes as \texttt{q}
varies. They suggest fixing \(\texttt{q}=3\), a value that provides a
good trade-off between the flexibility of the mixture allocations and
local homogeneity.
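The construction of \(\mathcal{N}^{(q)}\) can be emulated directly: entry \((i,j)\) equals 1 when \(j\) belongs to the \(q\) nearest neighbors of \(i\). The helper below illustrates the definition and is not the package's code path.

```r
# Sketch: binary adjacency matrix of the q nearest neighbors of each point.
build_Nq <- function(X, q = 3) {
  D <- as.matrix(dist(X))
  n <- nrow(D)
  Nq <- matrix(0L, n, n)
  for (i in seq_len(n)) {
    nn <- order(D[i, ])[2:(q + 1)]   # skip position 1: the point itself
    Nq[i, nn] <- 1L
  }
  Nq
}
set.seed(1)
Nq <- build_Nq(matrix(rnorm(60), ncol = 3), q = 3)
```

Note that \(\mathcal{N}^{(q)}\) is generally not symmetric: \(j\) can be among the \(q\) nearest neighbors of \(i\) without the converse being true.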
\begin{figure}[t!]
\centering
\includegraphics[scale=.5]{images/Nqs.pdf}
\caption{Heatmaps of the adjacency matrices $\mathcal{N}^{(q)}$ computed on a subset of observations of the \texttt{GaussMix} dataset. Different values of $q$ are assumed.}
\label{fig:Nqs}
\end{figure}
Given this premise, we are now ready to discuss \code{Hidalgo()}, the
high-level function that fits the Bayesian mixture. It implements the
Gibbs sampler described in Section C of the Appendix, relying on
low-level \pkg{Rcpp} routines. Also, the function internally calls
\code{compute\_mus()} to automatically generate the ratios of distances
and the adjacency matrix needed to evaluate the likelihood from the data
points.
The function has the following arguments: \texttt{X},
\texttt{dist\_mat}, \texttt{q}, \texttt{D}, and
\begin{itemize}
\item \texttt{K}: integer, number of mixture components;
\item \texttt{nsim}, \texttt{burn\_in}, and \texttt{thinning}: number of MCMC iterations to collect, initial iterations to discard, and thinning interval, respectively;
\item \texttt{verbose}: logical, if \texttt{TRUE}, the progress of the sampler is printed;
\item \texttt{xi}: real between 0 and 1, local homogeneity parameter. Default is 0.75;
\item \texttt{alpha\_Dirichlet}: hyperparameter of the Dirichlet prior on the mixture weights;
\item \texttt{a0\_d} and \texttt{b0\_d}: shape and rate parameters of the Gamma prior on $d$. Here,
the default is 1 for both values;
\item \texttt{prior\_type}: string, type of Gamma prior on $d$ which can be
\begin{itemize}
\item \texttt{Conjugate}: a classic Gamma prior is adopted (default);
\item \texttt{Truncated}: a truncated Gamma prior on the interval $\left(0,D\right)$ is used. This specification is advised when dealing with datasets characterized by a small number of columns \texttt{D}, to avoid the estimated \texttt{id} exceeding the nominal dimension \texttt{D};
\item \texttt{Truncated\_PointMass}: same as \texttt{Truncated}, but a point mass is placed on $D$. That is, the estimated \texttt{id} is allowed to be exactly equal to the nominal dimension \texttt{D};
\end{itemize}
\item \texttt{pi\_mass}: probability placed a priori on $D$ when a \texttt{Truncated\_PointMass} prior specification is chosen.
\end{itemize}
We apply the \texttt{Hidalgo} model on the \texttt{GaussMix} dataset
with two different prior configurations: conjugate and truncated with
point mass at \(\texttt{D}=5\). The code we used to run the models is:
\begin{verbatim}
R> set.seed(1234)
R> hid_fit <- Hidalgo(X = GaussMix, K = 10, alpha_Dirichlet = .05,
+ nsim = 2000, burn_in = 2000, thinning = 5,
+ verbose = FALSE)
R> set.seed(1234)
R> hid_fit_TR <- Hidalgo(X = GaussMix, K = 10, alpha_Dirichlet = .05,
+ prior_type = "Truncated_PointMass", D = 5,
+ nsim = 2000, burn_in = 2000, thinning = 5,
+ verbose = FALSE)
\end{verbatim}
\begin{verbatim}
R> # Output Example
R> hid_fit_TR
Model: Hidalgo
Method: Bayesian Estimation
Prior d ~ Gamma(1, 1), type = Truncated_PointMass
Prior on mixture weights: Dirichlet(0.05) with 10 mixture components
MCMC details:
Total iterations: 4000, Burn in: 2000, Elapsed time: 1.2289 mins
\end{verbatim}
By using \texttt{alpha\_Dirichlet = 0.05}, we have adopted a sparse
mixture modeling approach in the spirit of
\citet{Malsiner-Walli2016, Malsiner-Walli2017}. In principle, the sparse
mixture approach would automatically let the data estimate the number of
mixture components required. As a consequence, the argument \code{K}
should be interpreted as an upper bound on the number of active
clusters. Nonetheless, we stress that estimating well-separated clusters
with Pareto kernels is challenging, and we will discuss how to analyze
the output to perform proper inference. The output object
\texttt{hid\_fit} is a list of class \code{Hidalgo}, containing six
elements:
\begin{itemize}
\item \texttt{cluster\_prob}: matrix of dimension \texttt{nsim}$\times$\texttt{K}. Each column contains the MCMC sample of the mixing weight for every mixture component;
\item \texttt{membership\_labels}: matrix of dimension \texttt{nsim}$\times$\texttt{n}. Each column contains the MCMC sample of the membership labels for every observation;
\item \texttt{id\_raw}: matrix of dimension \texttt{nsim}$\times$\texttt{K}. Each column contains the MCMC sample for the \texttt{id} estimated in every cluster;
\item \code{id\_postpr}: matrix of dimension \texttt{nsim}$\times$\texttt{n}. It contains a chain for each observation, corrected for labels switching;
\item \code{id\_summary}: a matrix containing the values of posterior mean and the 5\%, 25\%, 50\%, 75\%, 95\% quantiles for each observation;
\item \code{recap}: a list with specifications passed to the function as inputs.
\end{itemize}
To inspect the output, we can employ the dedicated \code{autoplot()}
function, devised for objects of class \texttt{Hidalgo}. There are
several arguments that can be specified, producing different graphs. The
most important is
\begin{itemize}
\item \texttt{type}: string that indicates the type of plot that is requested. It can be:
\begin{itemize}
\item \texttt{raw\_chains}: plot the raw MCMC chains and the ergodic means \textbf{not} corrected for label switching (default);
\item \texttt{point\_estimates}: plot the posterior mean and median \texttt{id} for each observation, along with their credible intervals;
\item \texttt{class\_plot}: plot the estimated \texttt{id} distributions stratified by the groups specified in an additional \texttt{class} vector;
\item \texttt{clustering}: plot the posterior co-clustering matrix. Rows and columns can be stratified by an external \texttt{class} and/or a clustering structure.
\end{itemize}
\end{itemize}
For example, we can plot the raw chains of the two models with the aid of
the \pkg{patchwork} package \citep{patch}.
\begin{verbatim}
R> autoplot(hid_fit) / autoplot(hid_fit_TR)
\end{verbatim}
The previous line produces Figure \ref{fig:swap}. Plotting the trace
plots of the elements in \(\bm{d}\) allows us to assess the convergence
of the algorithm. We need to be aware that these chains may suffer from
label-switching issues, preventing us from directly drawing inference
from the MCMC output. Due to label switching, mixture components can be
emptied or repopulated across iterations. Indeed, Figure
\ref{fig:swap} shows the MCMC trace plots of the two models, with the
ergodic means for each mixture component superimposed. Additionally, we
can see that if no constraint is imposed on the support of the prior
distribution for \(\bm{d}\) (top panel), the posterior estimates can
exceed the nominal dimension \texttt{D = 5} of the \texttt{GaussMix}
dataset. However, this problem disappears when imposing a truncation on
the prior support (bottom panel).
\begin{figure}[ht!]
\centering
\includegraphics[width = .9\linewidth]{images/raw_id_tracesplot.pdf}
\caption{MCMC trace plots and superimposed ergodic means of components of the vector $\bm{d}$. Top panel: conjugate prior specification. Bottom panel: truncated with point mass prior specification.}
\label{fig:swap}
\end{figure}
To address the label-switching issue and perform meaningful inference,
the raw MCMC needs to be postprocessed. In Section D of the Appendix, we
discuss the algorithm that is used to map the \(K\) chains to \(n\)
observation-specific chains that can be employed for inference. The
algorithm is already implemented in \code{Hidalgo()}, and produces the
elements \code{id\_postpr} and \code{id\_summary} in the returned list.
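The core idea of the mapping can be sketched as follows: at each iteration, an observation inherits the \texttt{id} of the mixture component to which it is currently allocated. This is our reading of the post-processing step; the algorithm in Section D of the Appendix may include further refinements.

```r
# Sketch: map K component-specific chains to n observation-specific chains.
to_observation_chains <- function(id_raw, membership_labels) {
  nsim <- nrow(id_raw)
  n    <- ncol(membership_labels)
  out  <- matrix(NA_real_, nsim, n)
  for (t in seq_len(nsim)) {
    out[t, ] <- id_raw[t, membership_labels[t, ]]  # inherit component id
  }
  out
}
id_raw <- matrix(c(1, 2, 1.1, 2.1), nrow = 2, byrow = TRUE)   # 2 iters, K = 2
labs   <- matrix(c(1, 2, 2, 2, 1, 1), nrow = 2, byrow = TRUE) # n = 3
chains <- to_observation_chains(id_raw, labs)
```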
A visual summary of the post-processed estimates can be obtained via
\begin{verbatim}
R> autoplot(hid_fit, type = "point_estimates") +
+ autoplot(hid_fit_TR, type = "point_estimates")
\end{verbatim}
\begin{figure}[ht!]
\centering
\includegraphics[width = .95\linewidth]{images/pointest2.png}
\caption{Observation-specific means (left panels) and median (right panels) \texttt{id} represented with blue dots. The gray bars represent a $90\%$ credible interval. The two plots correspond to the two different prior specifications. The default plots were modified with \texttt{coord\_cartesian( ylim = c(0, 6.3))} to highlight the effect of the truncation.}
\label{fig:memed}
\end{figure}
The resulting plots are shown in Figure \ref{fig:memed}. The panels
display the mean and median \texttt{id} estimates for each data point.
Here, the separation of the data into different generating manifolds is
evident. Also, we notice that some of the estimates in the conjugate
case are incorrectly above the nominal value \texttt{D = 5}, once again
justifying the need for a truncated prior.
\subsubsection{Estimated clustering solutions}
When dealing with a mixture model, it is natural to seek model-based
clustering solutions. To this extent, the key source of information is
the posterior similarity -- or co-clustering -- matrix (PSM). The
entries of this matrix are computed as the proportion of times in which
two observations have been assigned to the same mixture component across
all the MCMC iterations. Thus, the PSM describes the underlying
clustering structure of the data detected by \texttt{Hidalgo}. Given the
PSM, one can evaluate various loss functions on the space of the
partitions. By minimizing the loss functions, we can retrieve the
optimal partition of the dataset into clusters. To obtain such an estimate,
one can rely on the function \code{salso()} from the \proglang{R}
package \pkg{salso} \citep{salso_package}. A faster
alternative builds a dendrogram from the
dissimilarity matrix \(1-\)PSM and thresholds it to segment the data
into a prespecified number of clusters \texttt{K}.
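The dendrogram shortcut only needs base \proglang{R}. The sketch below applies it to a toy PSM with two evident blocks; the choice of \code{hclust()} with complete linkage and the \code{cutree()} threshold are our own illustration, not necessarily the package's defaults.

```r
# Sketch: hierarchical clustering on the dissimilarity 1 - PSM, cut into K
# groups. psm is a toy posterior similarity matrix with two clear blocks.
psm <- matrix(0.1, 6, 6)
psm[1:3, 1:3] <- 0.9
psm[4:6, 4:6] <- 0.9
diag(psm) <- 1
hc <- hclust(as.dist(1 - psm), method = "complete")
cl <- cutree(hc, k = 2)                 # cluster labels for the 6 points
```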
These approaches are implemented in the dedicated function
\code{psm\_and\_cluster()}, which takes as arguments, along with the
\texttt{object} returned by the \code{Hidalgo()} function,
\begin{itemize}
\item \texttt{clustering\_method}: string indicating the method to use to perform clustering. It can be \texttt{"dendrogram"} or \texttt{"salso"}. The former method thresholds the dendrogram constructed from the dissimilarity matrix 1-PSM to retrieve \texttt{K} clusters. The latter method estimates the optimal clustering by minimizing a loss function on the space of the partitions. The default loss function is the Variation of Information (VI);
\item \texttt{K}: integer. If \texttt{"dendrogram"} is chosen, it is the number of clusters to recover with the thresholding of the dendrogram obtained from the PSM;
\item \texttt{nCores}: integer, argument for the functions taken from \pkg{salso}. It represents the number of cores used to compute the PSM and the optimal clustering solution.
\end{itemize}
Additional arguments can be passed to personalize the partition
estimation via \code{salso()}. Given the large sample size of the
\texttt{GaussMix} dataset, we opt for the dendrogram approach,
truncating the dendrogram at \(K=3\) groups. We highlight that relying
on the minimization of a loss function is a more principled approach.
However, the method can be misled by the strongly overlapping clusters
estimated across the MCMC iterations, providing overly conservative
solutions.
\begin{verbatim}
R> psm_cl <- psm_and_cluster(object = hid_fit_TR,
+ clustering_method = "dendrogram",
+ K=3, nCores = 5)
R> # Example of output
R> psm_cl
Estimated clustering solution summary:
Method: dendrogram.
Retrieved clusters: 3.
Clustering frequencies:
| Cluster 1| Cluster 2| Cluster 3|
|---------:|---------:|---------:|
| 648| 356| 496|
\end{verbatim}
To visualize the results, we can also plot the PSM via \code{autoplot()}
which, if \texttt{type = "clustering"}, internally calls
\code{psm\_and\_cluster()}.
\begin{verbatim}
R> autoplot(hid_fit_TR, type = "clustering")
\end{verbatim}
As we can see from Figure \ref{fig:coc}, the model correctly detects the
three clusters in the data. One can specify additional arguments in the
\code{autoplot()} to stratify columns and rows according to external
factors or the estimated clustering solution.
\begin{figure}[t!]
\centering
\includegraphics[scale=.3]{images/psm_hidTR_gm.png}
\caption{\texttt{GaussMix} dataset. Posterior co-clustering matrix computed from the output of the \texttt{Hidalgo} model.}
\label{fig:coc}
\end{figure}
\subsubsection{The presence of patterns in the data uncovered by the id}
Once the observation-specific \texttt{id} chains are computed, we can
investigate the presence of potential patterns between the estimates and
given external variables. To explore these possible relations, we can
use the function \code{id\_by\_class()}. Along with an object of class
\texttt{Hidalgo}, we need to specify:
\begin{itemize}
\item \texttt{class}: factor, a variable used to stratify the \texttt{id} posterior estimates.
\end{itemize}
\begin{verbatim}
R> id_by_class(object = hid_fit_TR, class = class_GMix)
class mean median sd
A A 1.069257 0.9282385 0.4680442
B B 2.963738 3.0577284 0.4947821
C C 4.777659 4.9293029 0.4452653
\end{verbatim}
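The stratified summaries above amount to grouping the observation-specific posterior point estimates by the supplied factor. A minimal sketch of this computation (hypothetical Python names; the package works on the \texttt{Hidalgo} chains directly, and its exact summary choices may differ):

```python
import numpy as np

def id_by_class(id_chains, classes):
    """id_chains: (T, n) posterior id draws, one column per observation.
    classes: length-n labels. Returns {class: (mean, median, sd)} computed
    on the observation-specific posterior means."""
    post_mean = id_chains.mean(axis=0)
    classes = np.asarray(classes)
    return {c: (post_mean[classes == c].mean(),
                np.median(post_mean[classes == c]),
                post_mean[classes == c].std(ddof=1))
            for c in np.unique(classes)}
```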
The estimates in the three classes are very close to the ground truth.
The same argument, \texttt{class}, can be passed to the
\code{autoplot()} function, in combination with
\begin{itemize}
\item \texttt{class\_plot\_type}: string, if \texttt{type = "class\_plot"}, one can visualize the stratified \texttt{id} estimates with a \texttt{"density"} plot or a \texttt{"histogram"}, or using \texttt{"boxplots"} or \texttt{"violin"} plots;
\item \texttt{class}: a vector containing a class used to stratify the observations;
\end{itemize}
to visualize \texttt{id} estimates of the \texttt{GaussMix} dataset
stratified by the generating manifold of the observations. This
information is contained in \texttt{class\_GMix}, which we pass as
\texttt{class}. As an example of possible graphs, Figure
\ref{fig:classes} shows the stratified boxplots (left panel) and
histograms (right panel).
\begin{verbatim}
R> autoplot(hid_fit_TR, type = "class", class = class_GMix,
+ class_plot_type = "boxplot") +
+ autoplot(hid_fit_TR, type = "class", class = class_GMix,
+ class_plot_type = "histogram")
\end{verbatim}
\begin{figure}[t!]
\centering
\includegraphics[width = .9\linewidth]{images/boxhist.pdf}
\caption{Two different types of graphs that the \code{autoplot()} method can produce (left panel: boxplots, right panel: histograms). The \texttt{id} estimates of the \texttt{GaussMix} dataset are stratified according to the generating distribution, as specified in the \texttt{class} argument.}
\label{fig:classes}
\end{figure}
We have introduced and discussed the principal functions of the
\pkg{intRinsic} package concerning the \texttt{TWO-NN} and
\texttt{Hidalgo} models. Employing simulated data with known
\texttt{id}, we presented a pipeline to guide our analysis. In the next
section, we present a real-data analysis, highlighting how the
\texttt{id} estimation can be used to effectively reduce the size of a
dataset while capturing and preserving important features.
\section{Alon dataset: gene microarray measurements}
\label{sec:alon}
In this section, we present a real-data example investigating the
\texttt{id} of the \texttt{Alon} dataset. A copy of this famous
microarray measurements table can be found in the \proglang{R} package
\pkg{HiDimDA}. The dataset, first presented in \citet{Alon6745},
contains microarray data for 2,000 genes measured on 62 patients. Among
the patients, 40 were diagnosed with colon cancer and 22 were healthy
subjects. A factor variable named \texttt{status} describes the patient
health condition (coded as \texttt{"Cancer"} vs.~\texttt{"Healthy"}). We
store the gene measurements in the object \texttt{Xalon}, a matrix of
nominal dimension \texttt{D = 2000}, with \texttt{n = 62} observations.
To load and prepare the data, we write:
\begin{verbatim}
R> data("AlonDS", package = "HiDimDA")
R> status <- factor(AlonDS$grouping, labels = c("Cancer", "Healthy"))
R> Xalon <- as.matrix(AlonDS[, -1])
\end{verbatim}
To obtain a visual summary of the dataset, we plot the heatmap of the
log-data values annotated by \texttt{status}. The result is shown in
Figure \ref{fig:Alon}. No clear structure is immediately visible.
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{images/alonds.png}
\caption{Heatmap of the $\log$ values of the \texttt{Alon} microarray dataset. The patients on the rows are stratified according to their health status (\texttt{"Cancer"} vs \texttt{"Healthy"}).}
\label{fig:Alon}
\end{figure}
We ultimately seek to uncover hidden patterns in this dataset. The task
is challenging, especially given the small number of observations
available. As a first step, we investigate how well a global estimate of
the \texttt{id} can represent the data.
\subsection{Homogeneous intrinsic dimension}
Let us start by describing the overall complexity of the dataset,
estimating a homogeneous \texttt{id} value. Using the
\texttt{TWO-NN} model, we can compute:
\begin{verbatim}
R> Alon_twonn_1 <- twonn(Xalon,method = "linfit")
R> Alon_twonn_1
Model: TWO-NN
Method: Least Square Estimation
Sample size: 62, Obs. used: 61. Trimming proportion: 1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 10.00382| 10.34944| 10.69506|
R> Alon_twonn_2 <- twonn(Xalon,method = "bayes")
R> Alon_twonn_2
Model: TWO-NN
Method: Bayesian Estimation
Sample size: 62, Obs. used: 61. Trimming proportion: 1%
Prior d ~ Gamma(0.001, 0.001)
Credibile Interval quantiles: 2.
Posterior ID estimates:
| Lower Bound| Mean| Median| Mode| Upper Bound|
|-----------:|--------:|--------:|--------:|-----------:|
| 7.784152| 10.17639| 10.12084| 10.00957| 12.88427|
R> Alon_twonn_3 <- twonn(Xalon,method = "mle")
R> Alon_twonn_3
Model: TWO-NN
Method: MLE
Sample size: 62, Obs. used: 61. Trimming proportion: 1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 7.785304| 10.01107| 12.88623|
\end{verbatim}
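For reference, the closed-form estimate printed under \texttt{Method: MLE} stems from the Pareto distribution of the TWO-NN ratios \(\mu_i\). A minimal sketch (illustrative Python with brute-force Euclidean distances and no trimming, whereas the package trims the most extreme ratios):

```python
import numpy as np

def twonn_mle(X):
    """TWO-NN maximum-likelihood id estimate (no trimming; illustration).
    mu_i = r_2 / r_1 is Pareto(1, d), so the MLE is n / sum(log mu_i)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    r = np.sort(D, axis=1)        # r[:, 0] = 0 is the self-distance
    mu = r[:, 2] / r[:, 1]        # ratio of 2nd to 1st NN distance
    return len(mu) / np.log(mu).sum()
```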
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{images/Alon_twonn.png}
\caption{\texttt{Alon} dataset. The left panel shows the result of the linear estimator, while the right panel depicts the posterior distribution obtained via the Bayesian approach.}
\label{fig:linbay}
\end{figure}
The estimates based on the \texttt{TWO-NN} model obtained with different
methods are very similar. The results are also illustrated in Figure
\ref{fig:linbay}, which shows the linear fit (left panel) and posterior
distribution (right panel) for the \texttt{TWO-NN} model. According to
these results, we conclude that the information contained in the
\texttt{D = 2000} genes can be summarized with approximately ten
variables. For example, the first ten eigenvalues computed from the
spectral decomposition of the matrix
\(\Lambda = X_{alon}^{\prime}X_{alon}\) explain 95.4\%
of the total variance.
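The eigenvalue check mentioned above can be reproduced in a few lines. This sketch (hypothetical Python, not part of the package) uses the uncentered cross-product matrix as in the formula for \(\Lambda\); whether to center the data first is an analysis choice:

```python
import numpy as np

def variance_explained(X, k):
    """Fraction of the total variance captured by the top-k eigenvalues
    of the cross-product matrix X'X (eigenvalues sorted decreasingly)."""
    evals = np.linalg.eigvalsh(X.T @ X)[::-1]
    return evals[:k].sum() / evals.sum()
```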
Although the linear fit plot and the \texttt{TWO-NN} estimates do not
raise any evident concern, as a final check, we explore the
evolution of the average distances between NNs, reported in Figure
\ref{fig:evo_alon}. As expected, the plot does not highlight any abrupt
change in the evolution of the ergodic means. However, it suggests that
investigating the presence of multiple manifolds could be worthwhile. In
fact, although the evolution of most of the ergodic means is
stationary, the heterogeneous starting points highlight some potential
data inhomogeneities that should be investigated further.
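The diagnostic in Figure \ref{fig:evo_alon} tracks, for each observation, the running mean of its first \(k\) NN distances as \(k\) grows. A compact sketch (hypothetical Python, assuming Euclidean distances and brute-force computation, which is adequate for small \(n\)):

```python
import numpy as np

def nn_cumulative_means(X, n_neighbors):
    """Running means of each observation's first k NN distances."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nn = np.sort(D, axis=1)[:, 1:n_neighbors + 1]  # drop the self-distance
    return np.cumsum(nn, axis=1) / np.arange(1, n_neighbors + 1)
```

Plotting one trajectory per row of the returned matrix reproduces the kind of ergodic-mean evolution shown in the figure.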
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{images/cummeans_alon.png}
\caption{Evolution of the cumulative means of NN distances computed for all the observations in the \texttt{Alon} dataset.}
\label{fig:evo_alon}
\end{figure}
\subsection{Heterogeneous intrinsic dimension}
To investigate the presence of heterogeneous latent manifolds in the
\texttt{Alon} dataset, we employ \code{Hidalgo()}. Since the nominal
dimension \texttt{D} is large, we do not need to truncate the prior on
\(d\). Moreover, given the small number of data points, we opt for an
informative and regularizing prior \(Gamma(1,1)\) instead of the vague
default specification. Also, we set a conservative upper bound
\(K=30\) for the number of mixture components, again choosing
\(\alpha = 0.05\) to fit a sparse mixture. We run:
\begin{verbatim}
R> set.seed(12345)
R> Alon_hid <- Hidalgo(X = Xalon, K = 30, a0_d = 1, b0_d = 1,
+ alpha_Dirichlet = .05, nsim = 10000, burn_in = 100000, thin = 5)
R> Alon_hid
Model: Hidalgo
Method: Bayesian Estimation
Prior d ~ Gamma(1, 1), type = Conjugate
Prior on mixture weights: Dirichlet(0.05) with 30 mixture components
MCMC details:
Total iterations: 110000, Burn in: 1e+05, Elapsed time: 41.1114 secs
\end{verbatim}
Once the model is fitted, we first explore the estimated clustering
structure. Here, instead of directly plotting the heatmap of the
\(PSM\), we build the dendrogram from the dissimilarity matrix obtained
as \(1-PSM\), and we report it in the top panel of Figure
\ref{fig:dendro}. We construct such a plot with the help of the package
\pkg{ggdendro} \citep{ggdendro}. We can detect four clusters, and
therefore we decide to set \texttt{K = 4} when running
\begin{verbatim}
R> Alon_psm <- psm_and_cluster(Alon_hid, K = 4)
\end{verbatim}
\begin{figure}[t!]
\centering
\includegraphics[width = .9\linewidth]{images/Alon_classbox_dendro.png}
\caption{\texttt{Alon} dataset. Top panel: dendrogram obtained from $1-PSM$. Bottom panel: boxplots of the \texttt{id} estimates stratified by health \texttt{status} (left) and estimated partition (right).}
\label{fig:dendro}
\end{figure}
As illustrated in the previous section, all these plots can be obtained
with the proper \code{autoplot()} specifications. The next natural step
is to investigate how strongly the estimated partition, and in general,
the estimated \texttt{id}s, are associated with health \texttt{status}.
The bottom two panels of Figure \ref{fig:dendro} display the boxplots of
the values (means and medians) of the \texttt{id} stratified by
\texttt{status} (left) and estimated cluster (right). We can also run
\begin{verbatim}
R> id_by_class(Alon_hid,class = status)
R> id_by_class(Alon_hid,class = Alon_psm$clust)
\end{verbatim}
to obtain summary results linking the variations in the \texttt{id} with
health status and cluster. We report the results in Table
\ref{tab:alon_summary}. As the estimated \texttt{id} increases, the
proportion of healthy subjects in each cluster decreases. This result
suggests that the microarray profiles of people diagnosed with cancer
are slightly more complex than those of healthy patients.
\begin{table}[t!]
\centering
\begin{tabular}{ccccccc}
\toprule
Cluster & \# \texttt{Cancer} & \# \texttt{Healthy} & \% \texttt{Healthy} & Average \texttt{id} & Median \texttt{id} & Std. Dev.\\
\midrule
1 & 0 & 5 & 1.0000 & 6.7528 & 6.7206 & 0.0800 \\
2 & 3 & 12 & 0.8000 & 7.2140 & 7.1760 & 0.1113\\
3 & 28 & 5 & 0.1515 & 7.5931 & 7.6693 & 0.4145\\
4 & 9 & 0 & 0.0000 & 8.1083 & 8.1593 & 0.0855 \\
\bottomrule
\end{tabular}
\caption{Stratification by cluster of the health \texttt{status} (in absolute values and proportions of healthy patients) and \texttt{id} estimates (median, mean, and standard deviation).}
\label{tab:alon_summary}
\end{table}
The analyses conducted so far helped us uncover interesting descriptive
characteristics of the \texttt{id}s in the dataset. Moreover, the
results obtained can be effectively used to summarize the data. The
estimated individual \texttt{id} values are useful to potentially
classify the health \texttt{status} of new patients according to their
genomic profiles. As a simple example, we carry out a classification
analysis using two \texttt{random forest} models, predicting the target
variable \texttt{Y = status}. To train the models, we use two different
sets of covariates: \texttt{X\_O}, the original dataset composed of
2,000 genes,
\begin{verbatim}
R> # Model 1
R> X_O <- data.frame(Y = status, X = (Xalon))
R> set.seed(123)
R> rfm1 <- randomForest::randomForest(Y ~ ., data = X_O,
+ type = "classification", ntree=100)
\end{verbatim}
and \texttt{X\_I}, the observation-specific \texttt{id} summary returned
by \code{Hidalgo()}, along with our estimated partition.
\begin{verbatim}
R> # Model 2
R> X_I <- data.frame(Y = status,
+ X = data.frame(Alon_hid$id_summary[, 1:6]),
+ clust = factor(Alon_psm$clust))
R> set.seed(12315)
R> rfm2 <- randomForest::randomForest(Y ~ ., data = X_I,
+ type = "classification", ntree=100)
\end{verbatim}
We now inspect the results:
\begin{verbatim}
R> print(rfm1)
Call:
randomForest(formula = Y ~ ., data = X_O, type = "classification", ntree = 100)
Type of random forest: classification
Number of trees: 100
No. of variables tried at each split: 44
OOB estimate of error rate: 24.19%
Confusion matrix:
Cancer Healthy class.error
Cancer 34 6 0.1500000
Healthy 9 13 0.4090909
R> print(rfm2)
Call:
randomForest(formula = Y ~ ., data = X_I, type = "classification", ntree = 100)
Type of random forest: classification
Number of trees: 100
No. of variables tried at each split: 2
OOB estimate of error rate: 16.13%
Confusion matrix:
Cancer Healthy class.error
Cancer 36 4 0.1000000
Healthy 6 16 0.2727273
\end{verbatim}
Remarkably, a simple dataset with seven variables summarizing the main
distributional traits of the observation-specific posterior \texttt{id}s
obtains better performance in predicting the health \texttt{status} than
the original dataset. The random forest on the original dataset obtained
an out-of-bag estimated error rate of 24.19\%, while the error is
reduced to 16.13\% for our \texttt{id}-based dataset. We can conclude
that, in this case, the topological properties of the dataset are
associated with the outcome of interest and convey important
information.
We showed how the estimation of heterogeneous \texttt{id}s can not only
provide a reliable index of complexity for involved data structures, but
can also help unveil relationships among data points hidden at the
topological level. The application to the \texttt{Alon} dataset
showcases how reliable \texttt{id} estimates provide a fundamental
perspective that helps us discover non-trivial data patterns.
Furthermore, one can subsequently exploit the extracted information in
many downstream analyses such as patient segmentation or predictive
analyses.
\section{Summary and discussion} \label{sec:summary}
In this paper we introduced and discussed \pkg{intRinsic}, an
\proglang{R} package that implements novel routines for the \texttt{id}
estimation according to the models recently developed in
\citet{Facco, Allegra, Denti2021}, and \citet{Santos-Fernandez2020}.
\pkg{intRinsic} consists of a collection of high-level, user-friendly
functions that in turn rely on efficient, low-level routines implemented
in \proglang{R} and \proglang{C++}. We also remark that \pkg{intRinsic}
integrates functionalities from external packages. For example, all the
graphical outputs returned by the functions are built using the
well-known package \pkg{ggplot2}. Therefore, they are easily
customizable using the grammar of graphics \citep{Wilkinson2005}.
To summarize, the package includes both frequentist and Bayesian model
specifications for the \texttt{TWO-NN} global \texttt{id} estimator.
Moreover, it implements the Gibbs sampler for posterior simulation for
the \texttt{Hidalgo} model, which can capture the presence of
heterogeneous manifolds within a single dataset. We showed how the
discovery of multiple latent manifolds can help unveil topological traits
of a dataset by associating the \texttt{id} estimates with external
variables. As a general analysis pipeline for practitioners, we
suggested starting with the efficient \texttt{TWO-NN} functions to
understand how appropriate the hypothesis of homogeneity is for the data
at hand. If departures from the assumptions are visible from nonuniform
estimates obtained with different estimation methods and from visual
assessment of the evolution of the average NN distances, one should rely
on \texttt{Hidalgo}. This model also allows the exploration of
relationships between local \texttt{id} estimates and external
variables.
The most promising future research directions stem from
\texttt{Hidalgo}. First, we plan to develop more reliable methods to
obtain an optimal partition of the data based on the \texttt{id}
estimates, since the one proposed heavily relies on a mixture model of
overlapping distributions. Moreover, another research avenue worth
exploring is a version of \texttt{Hidalgo} with likelihood distributions
based on generalized NN ratios, exploiting the information coming from
varying neighborhood sizes.\\
We are also aware that the mixture model fitting may become time
consuming if the analyzed datasets are large. Faster estimation solutions
via, for example, a variational Bayes approach, will be explored. Also, we
highlight that \texttt{Hidalgo}, originally proposed as a mixture model
within a Bayesian framework, lacks a frequentist-based estimation
counterpart such as an Expectation-Maximization algorithm. Its
derivation is not immediate, since the neighboring structure introduced
via the \(\mathcal{N}^{(q)}\) matrix makes the problem non-trivial. We
plan to keep working on this package and to continuously update it in
the long run, as contributions to this line of research will become
available. The novel \texttt{id} estimators we discussed have started a
lively research branch, and we intend to include all the future
advancements in \pkg{intRinsic}.
\section*{Appendix}
\subsection*{A - Additional methods implemented in the package}
For the sake of succinctness, we have focused our attention on the
discussion of the \texttt{TWO-NN} and the \texttt{Hidalgo} models. In
Section \ref{sec:mod}, we explained that both methods are based on the
distributional properties of the ratios of distances between the first
two NNs of each data point. However, this modeling framework has been
extended by \citet{Denti2021}, where the authors developed a novel
\texttt{id} estimator, called \texttt{Gride}. This new estimator is
based upon the ratios of distances between NNs of generic order, namely
\(n_1\) and \(n_2\). Extending the neighborhood size leads to two major
implications: more stringent local homogeneity assumptions and the
possibility to compute estimates as a function of the chosen NN orders.
Monitoring the \texttt{id} evolution as the order of the furthest NN
\(n_2\) increases allows the extraction of meaningful information
regarding the link between the \texttt{id} and the scale of the
considered neighborhood. Doing so, \texttt{Gride} produces estimates
that are more robust to noise in the dataset, which is not directly
addressed by the model formulation.
The \texttt{Gride} model is implemented in \pkg{intRinsic}, and the
estimation can be carried out under both the frequentist and Bayesian
frameworks via the function \code{gride()}, which is very similar to
\code{twonn()} in its usage. Additionally, one can use the functions
\code{twonn\_decimation()} and \code{gride\_evolution()} to study the
\texttt{id} dynamics. More details about these functions are available
in the package documentation.
The map in Figure \ref{fig:map} provides a visual summary of the main
functions contained in the package. The main topics are reported in the
diamonds, while the high-level, exported functions are displayed in the
blue rectangles. These routines are linked to the (principal) low-level
function via dotted lines. Finally, the light-blue area highlights the
functions presented in this paper.
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth, trim={0cm 3cm 0cm 3cm},clip]{images/map.pdf}
\caption{A conceptual map summarizing the most important functions contained in \pkg{intRinsic}. The blue squares contain the names of the principal, high-level functions. Dotted lines connect these functions with the most important low-level functions (not exported). The light-blue area represents the topics that have been discussed in this paper.}
\label{fig:map}
\end{figure}
\subsection*{B - intRinsic and other packages}
As mentioned in Section \ref{sec:intro}, there is a large number of
intrinsic dimension estimators available in the literature, and many of
them have been implemented in \proglang{R}. A valuable survey of the
availability of dimensionality reduction methods and \texttt{id}
estimators has been recently reported in \citet{You2020}. From there,
we see that two packages are the most important when it comes to
\texttt{id}: \pkg{Rdimtools} and \pkg{intrinsicDimension}.
\pkg{Rdimtools}, in particular, comprises an unprecedented
collection of methods -- including the least-squares
\texttt{TWO-NN}. Such an abundance of methods and the ongoing research
in this area indicate that there is no globally optimal estimator to
employ regardless of the application. Thus, a practitioner should be
aware of the strengths and limitations of every method. Here, we discuss
the pros and cons of the methods implemented in \pkg{intRinsic}.
At the time of writing, there are two main traits of \pkg{intRinsic}
that are unique to this package. First, our package is devoted to the
recently proposed likelihood-based estimation methods introduced by the
seminal work of \citet{Facco} and the literature that followed.
Therefore, as of today, most of the implementations presented here are
exclusively contained in this package -- for example, the \proglang{R}
implementation of the \texttt{Hidalgo} model and all the routines linked
to the \texttt{Gride} family of models. To the best of our knowledge,
the function \code{Hidalgo()} is available outside this package, but it
can only be found in \texttt{GitHub} repositories, coded in
\proglang{Python} and \proglang{C++}.\\
Second, all the functions in our package allow -- and emphasize -- the
uncertainty quantification around the \texttt{id} estimates, which is a
crucial component granted by our model-based approach. This feature is
often overlooked in other implementations.
One limitation of the likelihood-based models offered in this package,
shared with many other \texttt{id} estimators in general, is the
underestimation of the \texttt{id} when the true latent manifold's
dimension is large. As an empirical rule, for cases where the
\texttt{id} estimate is large (e.g., \(d>20\)), these estimates should
be cautiously regarded as lower bounds for the actual value
\citep{Ansuini2019}. An alternative method that we found to be
particularly robust to this issue is the Expected Simplex Skewness
(\texttt{ESS}) algorithm proposed by \citet{Johnsson2015}. To exemplify,
consider 5,000 observations sampled from a \(D=d=50\) dimensional
Gaussian distribution. With the following code, we can see how
\code{twonn()} underestimates the true \texttt{id}, which is instead
well recovered by the \texttt{ESS}.
\begin{verbatim}
R> set.seed(12211221)
R> X_highdim <- replicate(50, rnorm(5000))
R> intrinsicDimension::essLocalDimEst(X_highdim)
Dimension estimate: 49.05083
Additional data: ess
R> twonn(X_highdim)
Model: TWO-NN
Method: MLE
Sample size: 5000, Obs. used: 4950. Trimming proportion: 1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 34.92154| 35.90791| 36.92254|
\end{verbatim}
However, the \texttt{ESS} estimator is not uniformly optimal. For example,
on the \texttt{Swissroll} data, \code{twonn()} performs better:
\begin{verbatim}
R> intrinsicDimension::essLocalDimEst(Swissroll)
Dimension estimate: 2.898866
Additional data: ess
R> twonn(Swissroll, c_trimmed = .001)
Model: TWO-NN
Method: MLE
Sample size: 1000, Obs. used: 999. Trimming proportion: 0.1%
ID estimates (confidence level: 0.95)
| Lower Bound| Estimate| Upper Bound|
|-----------:|--------:|-----------:|
| 1.945607| 2.07005| 2.202571|
\end{verbatim}
When dealing with datasets characterized by a large number of columns, we
suggest checking the discrepancy between our methods and different
competitors. A marked difference in the results should flag the
likelihood-based findings as less reliable. At this point, a legitimate
doubt may arise regarding the validity of the results we derived by
studying the \texttt{Alon} dataset. Because of its high number of
columns (\texttt{D = 2000}), the intrinsic dimensions we obtained may
have been strongly underestimated. To validate our results, we run the
\texttt{ESS} estimator on the \texttt{Alon} dataset, obtaining
\begin{verbatim}
R> intrinsicDimension::essLocalDimEst(data = Xalon)
Dimension estimate: 7.752803
Additional data: ess
\end{verbatim}
which is very close to the estimates obtained with our methods,
reassuring us about our results.
\subsection*{C - Gibbs Sampler for \code{Hidalgo()}}
The steps of the Gibbs sampler are the following:
\begin{enumerate}
\item Sample the mixture weights according to $$\bm{\pi}|\cdots \sim Dirichlet\left(\alpha_1+
\sum_{i=1}^{n}\mathds{1}_{z_i=1} ,\ldots,\alpha_K+\sum_{i=1}^{n}\mathds{1}_{z_i=K} \right)$$
\item Let $\bm{z}_{-i}$ denote the vector $\bm{z}$ without its $i$-th element. Sample the cluster indicators $z_i$ according to:
\begin{equation*}
\begin{aligned}
\mathbb{P}\left(z_i=k| \bm{z}_{-i},\cdots \right)\propto \pi_{z_i}
f\left(\mu_{i},\mathcal{N}_i^{(q)}|z_1,\ldots,z_{i-1},k,z_{i+1},\ldots,z_n, \bm{d}\right)
\end{aligned}
\end{equation*}
We emphasize that, given the new likelihood we are considering, the cluster labels are no longer independent given all the other parameters. Let us define $$\bm{z}_i^k=\left(z_1,\ldots,z_{i-1},k,z_{i+1},\ldots,z_{n}\right).$$
Then, let $N_{z_i}(\bm{z}_{-i})$ be the number of elements in the $(n-1)$-dimensional vector $\bm{z}_{-i}$ that are assigned to the same manifold (mixture component) as $z_i$.
Moreover, let $m_{i}^{i n}=\sum_{l} \mathcal{N}_{li}^{(q)} \mathds{1}_{z_{l}=z_{i}}$ be the number of points sampled from the same manifold of the $i$-th observation that have ${x}_i$ as neighbor, and let $n_{i}^{i n}(\bm{z})=\sum_{l} \mathcal{N}_{i l}^{(q)} \mathds{1}_{z_{l}=z_{i}} \leq q$ be the number of neighbors of ${x}_i$ sampled from the same manifold. Then, we can simplify the previous formula, obtaining the following full conditional:
\begin{equation}
\begin{aligned}
\mathbb{P}\left(z_i=k| \bm{z}_{-i},\cdots \right)\propto & \frac{\pi_{k} d_{k} \mu_i^{-(d_{k}+1)} }{\mathcal{Z}\left(\zeta, N_{z_{i}=k}(\bm{z}_{-i})+1\right)}\times \left(\frac{\zeta}{1-\zeta}\right)^{n_{i}^{i n}(\bm{z}_i^k)+m_{i}^{i n}(\bm{z}_i^k)}\\
\times&\left(\frac{\mathcal{Z}\left(\zeta, N_{z_{i}=k}(\bm{z}_{-i})\right)}{\mathcal{Z}\left(\zeta, N_{z_{i}=k}(\bm{z}_{-i})+1\right)}\right)^{N_{z_{i}=k}(\bm{z}_{-i})}.
\end{aligned}
\end{equation}
See \citet{tesiFacco} for a detailed derivation of this result.
\item The posterior distribution for $\bm{d}$ depends on the prior specification we adopt:
\begin{enumerate}
\item If we assume a conjugate Gamma prior, we obtain $$d_k|\cdots \sim Gamma\left(a_0+n_k, b_0+\sum_{i:z_i=k}\log \mu_i\right),$$ where $n_k=\sum_{i=1}^{n}\mathds{1}_{z_i=k}$ is the number of observations assigned to the $k$-th group;\\
\item If $G_0$ is assumed to be a truncated Gamma distribution on $\left(0,D\right)$, then $$d_k|\cdots \sim Gamma\left(a_0+n_k, b_0+\sum_{i:z_i=k}\log \mu_i\right)\mathds{1}_{\left(0,D\right)}(d_k);$$
\item Finally, let us define $a^*=a_0+n_k$ and $b^*=b_0+\sum_{i:z_i=k}\log \mu_i$, if $G_0$ is assumed to be a truncated Gamma with point mass at $D$ we obtain
$$d_k|\cdots \sim \frac{\hat{\rho}_1^*}{\hat{\rho}_1^*+\hat{\rho}_0^*}\: Gamma\left(a^*, b^*\right)\mathds{1}_{\left(0,D\right)}(d_k) + \frac{\hat{\rho}_0^*}{\hat{\rho}_1^*+\hat{\rho}_0^*}\:\delta_D(d_k),$$
where $\hat{\rho}_1^*=\hat\rho\frac{\mathcal{C}_{a^*,b^*,D}}{\mathcal{C}_{a,b,D}}$ and $\hat{\rho}_0^*=(1-\hat\rho)D^{n_k}\exp\left(-D\sum_{i:z_i=k}\log \mu_i\right)$.
\end{enumerate}
\end{enumerate}
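Step 3(a) is a standard conjugate Gamma update. As a sketch, one full-conditional draw for $d_k$ could be coded as follows (illustrative Python with hypothetical names; \pkg{intRinsic} implements the sampler in its low-level routines):

```python
import numpy as np

def sample_dk(rng, mu, z, k, a0=1.0, b0=1.0):
    """One conjugate full-conditional draw for d_k (step 3a):
    d_k | ... ~ Gamma(a0 + n_k, b0 + sum_{i: z_i = k} log mu_i)."""
    in_k = (z == k)
    shape = a0 + in_k.sum()
    rate = b0 + np.log(mu[in_k]).sum()
    return rng.gamma(shape, 1.0 / rate)  # numpy parameterizes by scale
```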
\subsection*{D - Postprocessing to address label switching}
The post-processing procedure adopted for the raw chains fitted by
\code{Hidalgo()} works as follows. Let us consider a MCMC sample of
length \(T\), and denote a particular MCMC iteration with \(t\),
\(t=1,\ldots,T\). Let \(z_i{(t)}\) indicate the cluster membership of
observation \(i\) at the \(t\)-th iteration. Similarly, \(d_k{(t)}\)
represents the value of the estimated \texttt{id} in the \(k\)-th
mixture component at the \(t\)-th iteration. This algorithm maps the
\(K\) chains for the parameters in \(\bm{d}\) to each data point via the
values of \(\bm{z}\). That is, it constructs \texttt{n} chains, one for
each observation, by computing \(d^*_k{(t)}=d_{z_i{(t)}}{(t)}\). We then
obtain a collection of chains that link every observation to its
\texttt{id} estimate. When the chains have been post-processed, the
local observation-specific \texttt{id} can be estimated by the ergodic
mean or median.
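In array form, the mapping above is a single indexed lookup per iteration; a minimal sketch (hypothetical Python, not the package code):

```python
import numpy as np

def postprocess_chains(d_chains, z_chains):
    """Map K component-level id chains to n observation-level chains:
    out[t, i] = d_{z_i(t)}(t).
    d_chains: (T, K) sampled d_k values; z_chains: (T, n) allocations."""
    return np.take_along_axis(d_chains, z_chains, axis=1)
```

Column-wise means or medians of the returned matrix then give the observation-specific \texttt{id} estimates.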
\subsection*{E - System configuration}
We obtained the results in this vignette by running our \proglang{R}
code on a MacBook Pro with a 2.6 GHz 6-Core Intel Core i7 processor.
\section*{Acknowledgements}
The author is extremely grateful to Andrea Gilardi\footnote{\label{note1}University of Milan, Bicocca} for his valuable comments. The author also thanks Michelle N. Ngo\footnote{\label{note2}University of California, Irvine}, Derenik Haghverdian\cref{note2}, Wendy Rummerfield\footnote{\label{note3}CSU - Eastbay}, Andrea Cappozzo\footnote{\label{note4}Politecnico di Milano}, and Riccardo Corradin\footnote{\label{note5}University of Nottingham} for their help with the earlier version of this manuscript.
\bibliographystyle{plainnat}
\section{\label{sec:Intro}Introduction}
The temporal fluctuations of spatially averaged (or, global) quantities are of interest in several fields of research including turbulent flows~\cite{Fauve_etal_1993,Niemela_etal_2000,Aumaitre_Fauve_epl_2003,Gerolymos_etal_2014}, nanofluids~\cite{Kakac_Pramuanjaroenkij_2009}, biological fluids~\cite{Tsimring_2014,Deco_etal_2017}, geophysics~\cite{Damon_etal_1978,Tyler_etal_2017},
phase transitions~\cite{Mobilia_etal_2007,Brennecke_etal_2013}. The probability density function (PDF) of the temporal fluctuations of thermal flux in turbulent Rayleigh-B\'{e}nard convection (RBC) was found to have a normal distribution with slight asymmetries at the tails. Direct numerical simulations (DNS) of the Nusselt number $\mathrm{Nu}$, which is a measure of thermal flux, also showed similar behaviour in the presence of the Lorentz force~\cite{Das_Kumar_2019}.
The power spectral density (PSD) of the thermal flux in the frequency ($f$) space~\cite{Aumaitre_Fauve_epl_2003,Das_Kumar_2019,Hirdesh_etal_2014} was found to vary as $f^{-2}$. In this work, we present the results obtained by DNS of temporal signals of global quantities: spatially averaged kinetic energy per unit mass $E $, convective entropy per unit mass $E_{\Theta}$ and Nusselt number $\mathrm{Nu}$ in unsteady Rayleigh-B\'{e}nard magnetoconvection (RBM)~\cite{Chandrasekhar_1961,Fauve_etal_1984,Weiss_Proctor_2014}. The kinetic energy as well as the entropy vary with frequency as $f^{-2}$ at relatively higher frequencies. In this scaling regime, the scaling exponent does not depend on the Rayleigh number $\mathrm{Ra}$, Prandtl number $\mathrm{Pr}$ and Chandrasekhar's number $\mathrm{Q}$.
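As a side note, a power-law scaling such as $f^{-2}$ is typically quantified by a least-squares fit of the log periodogram against log frequency over the scaling range. A minimal sketch of such an estimator (illustrative Python, unrelated to the DNS code used for the results reported here; restricting the fit to low frequencies is an assumption of where the scaling regime lies):

```python
import numpy as np

def spectral_slope(signal, dt=1.0):
    """Least-squares slope of log PSD vs log f, fitted on the lowest
    eighth of the periodogram frequencies (a crude scaling estimate)."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    psd = np.abs(np.fft.rfft(x)) ** 2
    keep = slice(1, len(freqs) // 8)      # drop f = 0, keep low frequencies
    return np.polyfit(np.log(freqs[keep]), np.log(psd[keep]), 1)[0]
```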
\section{\label{sec:system} Governing equations}
The physical system consists of a thin layer of a Boussinesq fluid (e.g., liquid metals, melts of some alloys (e.g., a $NaNO_3$ melt), nanofluids, etc.) of density $\rho_0$ and electrical conductivity $\sigma$ confined between two horizontal plates, which are made of electrically non-conducting but thermally conducting materials. The lower plate is heated uniformly and the upper plate is cooled uniformly so that an adverse temperature gradient $\beta$ is maintained across the fluid layer. A uniform magnetic field $B_0$ is applied in the vertical direction. The positive direction of the $z$-axis is in the direction opposite to that of the acceleration due to gravity $g$. The basic state is the conduction state with no fluid motion. The stratification of the steady temperature field $T_s (z)$, fluid density $\rho_s(z)$ and pressure field $P_s(z)$ in the conduction state~\cite{Chandrasekhar_1961} is given by:
\begin{eqnarray}
T_s (z) &=& T_b + \beta z,\\
\rho_s (z) &=& \rho_0 \left[1 + \alpha \left(T_b - T_s (z)\right)\right],\\
P_s (z) &=& P_0 - \left[\rho_0 g \left(z + \frac{1}{2}\alpha \beta z^2 \right) + \frac{{B_{0}}^2}{8\mu_0 \pi}\right],
\end{eqnarray}
where $T_b$ and $\rho_0$ are temperature and density of the fluid at the lower plate, respectively. $P_0$ is a constant pressure in the fluid and $\mu_0$ is the permeability of the free space.
As soon as the temperature gradient across the fluid layer is raised above a critical value $\beta_c$ for fixed values of all fluid parameters (kinematic viscosity $\nu$, thermal diffusivity $\kappa$, thermal expansion coefficient $\alpha$) and the externally imposed magnetic field $B_0$, the convection sets in. All the fields are perturbed due to convection and they may be expressed as:
\begin{eqnarray}
\rho_s (z) \rightarrow \tilde{\rho}(x, y, z, t) &=& \rho_s (z) + \delta \rho (x, y, z, t),\\
T_s (z) \rightarrow T(x, y, z, t) &=& T_s (z) + \theta (x, y, z, t),\\
P_s (z) \rightarrow P(x, y, z, t) &=& P_s (z) + p (x, y, z, t),\\
{\bm{\mathrm{B}}}_0 \rightarrow {\bm{\mathrm{B}}} (x, y, z, t) &=& {\bm{\mathrm{B}}}_0 + \bm{\mathrm{b}} (x, y, z, t),
\end{eqnarray}
where $\bm{\mathrm{v}}(x,y,z,t)$, $p(x,y,z,t)$, $\theta (x,y,z,t)$ and $\bm{\mathrm{b}}(x,y,z,t)$ are the fluid velocity, the perturbation in the fluid pressure, the convective temperature and the induced magnetic field, respectively, due to the convective flow. The perturbative fields are made dimensionless by measuring all length scales in units of the clearance $d$ between the two horizontal plates, which is also the thickness of the fluid layer. The time is measured in units of the free fall time $\tau_{f} = 1/\sqrt{\alpha \beta g}$. The convective temperature field $\theta$ and the induced magnetic field $\bm{\mathrm{b}}$ are made dimensionless by $\beta d$ and $B_0 \mathrm{Pm}$, respectively. The magnetoconvective dynamics is then described by the following dimensionless equations:
\begin{eqnarray}
& D_t\bm{\mathrm{v}}=-\nabla p+\sqrt{\frac{\mathrm{Pr}}{\mathrm{Ra}}}\nabla^2\bm{\mathrm{v}}+\frac{\mathrm{Q}\mathrm{Pr}}{\mathrm{Ra}}\partial_z\bm{\mathrm{b}}+\theta\bm{\mathrm{e}}_3,\label{eq:mom-v}\\
&\nabla^2\bm{\mathrm{b}} = -\sqrt{\frac{\mathrm{Ra}}{\mathrm{Pr}}}\partial_z\bm{\mathrm{v}}, \label{eq:mag-v}\\
& {D_t \theta} = \sqrt{\frac{1}{\mathrm{Ra} \mathrm{Pr}}}\nabla^2\theta + {\mathrm{v}}_3, \label{eq:theta}\\
&\nabla\cdot\bm{\mathrm{v}} = \nabla\cdot\bm{\mathrm{b}}=0,\label{eq:cont}
\end{eqnarray}
where $D_t \equiv \partial_t + (\bm{\mathrm{v}}\cdot\nabla)$ is the material derivative. As the magnetic Prandtl number $\mathrm{Pm}$ is very small ($\le 10^{-5}$) for all terrestrial fluids, we set $\mathrm{Pm}$ equal to zero in the above. The induced magnetic field is then slaved to the velocity field. We consider the idealized boundary ({\sl stress-free}) conditions for the velocity field on the horizontal boundaries. The relevant boundary conditions~\cite{Chandrasekhar_1961,Basak_etal_pre2014} at horizontal plates, which are located at $\mathrm{z} = 0$ and $ \mathrm{z}= 1$, are:
\begin{equation}
\frac{\partial \mathrm{v}_1}{\partial z}=\frac{\partial \mathrm{v}_2}{\partial z}=\mathrm{v}_3=\mathrm{b}_1=\mathrm{b}_2= \frac{\partial \mathrm{b}_3}{\partial z} =\theta=0. \label{eq:bound-v}
\end{equation}
All fields are considered periodic in the horizontal plane. The dynamics of the flow (as $\mathrm{Pm} \rightarrow 0$) is controlled by three dimensionless parameters: (1) Rayleigh number $\mathrm{Ra} = \frac{\alpha \beta g d^4}{\nu \kappa}$, (2) Prandtl number $\mathrm{Pr} = \frac{\nu}{\kappa}$ and (3) Chandrasekhar's number $\mathrm{Q}=\frac{\sigma B_0^2 d^2}{\rho_0 \nu}$.
The critical values of Rayleigh number $\mathrm{Ra_c}$ and the critical wave number $k_c$ are~\cite{Chandrasekhar_1961}:
\begin{eqnarray}
& \mathrm{Ra}_{c}(\mathrm{Q}) = \frac{\pi^2 + k_{c}^2}{k_{c}^2}\big[ ( \pi^2 + k_{c}^2 )^{2} + \pi^{2}\mathrm{Q} \big],\label{eq:Ra}\\
& k_{c}(\mathrm{Q}) = \pi \sqrt{a_{+} + a_{-} - \frac{1}{2}},\label{eq:k}
\end{eqnarray}
where
\begin{equation}
a_{\pm} = \Bigg( \frac{1}{4} \Big[\frac{1}{2} + \frac{\mathrm{Q}}{\pi^{2}} \pm \big[ \big( \frac{1}{2} + \frac{\mathrm{Q}}{\pi^{2}} \big)^{2} - \frac{1}{4} \big]^{\frac{1}{2}}\Big] \Bigg)^{\frac{1}{3}}. \label{eq:a}
\end{equation}
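As a numerical cross-check of Eqs.~(\ref{eq:Ra})--(\ref{eq:a}), the marginal-stability quantities can be evaluated directly. The short Python sketch below (function names are ours) recovers the classical free-slip values $k_c = \pi/\sqrt{2}$ and $\mathrm{Ra}_c = 27\pi^4/4$ in the limit $\mathrm{Q}\to 0$, and shows that the magnetic field raises the onset threshold.

```python
import numpy as np

def a_pm(Q, sign):
    # a_{+/-}: cube root of (1/4)[1/2 + Q/pi^2 +/- sqrt((1/2 + Q/pi^2)^2 - 1/4)]
    s = 0.5 + Q / np.pi**2
    return (0.25 * (s + sign * np.sqrt(s**2 - 0.25))) ** (1.0 / 3.0)

def k_c(Q):
    # critical wave number k_c(Q)
    return np.pi * np.sqrt(a_pm(Q, +1.0) + a_pm(Q, -1.0) - 0.5)

def Ra_c(Q):
    # critical Rayleigh number Ra_c(Q)
    k2 = k_c(Q) ** 2
    return (np.pi**2 + k2) / k2 * ((np.pi**2 + k2) ** 2 + np.pi**2 * Q)
```

For $\mathrm{Q}=0$ this gives $k_c \approx 2.221$ and $\mathrm{Ra}_c \approx 657.5$, the standard stress-free values.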
The kinetic energy $E$ and convective entropy $E_{\Theta}$ per unit mass are defined as: $E = \frac{1}{2} \int{\mathrm{v}^2 dV}$ and $E_{\Theta} = \frac{1}{2} \int{\theta^2 dV}$, respectively. The Nusselt number $\mathrm{Nu}$, which is the ratio of the total heat flux to the conductive heat flux across the fluid layer, is defined as: $\mathrm{Nu} = 1 + \frac{\sqrt{\mathrm{Ra} \mathrm{Pr}}}{V}\int{\mathrm{v}_3\theta dV}$, where $V$ is the volume of the simulation box.
The system of equations may also be useful for investigating magnetoconvection in nanofluids with a low concentration of non-magnetic metallic nanoparticles~\cite{Das_Kumar_2019}. A homogeneous suspension of nanoparticles in a viscous fluid acts as a nanofluid. As the fluid properties depend on both the base fluid and the nanoparticles, their effective values may be used for the nanofluid. In a simple model, all fluid parameters may be replaced by their effective values in the presence of nanoparticles. If $\phi$ is the volume fraction of the spherically shaped nanoparticles, the effective density and electrical conductivity of the nanofluid may be expressed as:
\begin{eqnarray}
\rho &=& (1-\phi) \rho_f + \phi \rho_p,\\
\sigma &=& (1-\phi) \sigma_f + \phi \sigma_p,
\end{eqnarray}
where $\rho_f$ and $\sigma_f$ are the density and electrical conductivity of the base fluid, respectively. Here $\rho_p$ is the density and $\sigma_p$ is the electrical conductivity of the nanoparticles. The effective thermal conductivity $K$~\cite{Maxwell_1873} is expressed as:
\begin{equation}
K = K_{f} \left[ \frac{(K_{p}+2K_{f}) - 2\phi(K_{f}-K_{p})}{(K_{p} + 2K_{f}) + \phi(K_{f}-K_{p})} \right],
\end{equation}
where $K_f$ and $K_p$ are the thermal conductivities of the base fluid and of the spherically shaped nanoparticles, respectively. Similarly, the effective specific heat capacity $c_V$ may be expressed through the following relation~\cite{Selimefendigil_Oztop_2014}:
\begin{equation}
(\rho c_V) = (1-\phi)(\rho c_V)_{f} + \phi (\rho c_V)_{p}.
\end{equation}
The effective dynamic viscosity $\mu$ of the nanofluid~\cite{Brinkman_1952} may also be expressed as:
\begin{equation}
\mu = \mu_{f}(1-\phi)^{-2.5}.
\end{equation}
The relevant values of the effective fluid parameters may be used in the set of equations (\ref{eq:mom-v})--(\ref{eq:cont}) for investigating flow properties in nanofluids.
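The effective-medium relations above are straightforward to evaluate. The sketch below (function name and property values are illustrative placeholders, roughly a water-like base fluid with copper-like particles) recovers the base-fluid properties in the limit $\phi \to 0$ and shows that both the Maxwell conductivity and the Brinkman viscosity increase with particle loading when $K_p > K_f$.

```python
def effective_props(phi, rho_f, rho_p, sigma_f, sigma_p, K_f, K_p, mu_f):
    # effective density and electrical conductivity (volume-weighted),
    # thermal conductivity (Maxwell) and dynamic viscosity (Brinkman)
    rho = (1.0 - phi) * rho_f + phi * rho_p
    sigma = (1.0 - phi) * sigma_f + phi * sigma_p
    K = K_f * ((K_p + 2.0 * K_f) - 2.0 * phi * (K_f - K_p)) / \
        ((K_p + 2.0 * K_f) + phi * (K_f - K_p))
    mu = mu_f * (1.0 - phi) ** (-2.5)
    return rho, sigma, K, mu

# placeholder property values, not measured data
base = effective_props(0.0, 1000.0, 8900.0, 1e6, 6e7, 0.6, 400.0, 1e-3)
loaded = effective_props(0.05, 1000.0, 8900.0, 1e6, 6e7, 0.6, 400.0, 1e-3)
```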
\begin{figure}[ht]
\centering
\includegraphics[height=!,width=9.0 cm]{Time_signal.eps}
\caption{\label{Time_sig} (Colour online)
Temporal variations of the kinetic energy $E$, entropy $E_{\Theta}$ and Nusselt number $\mathrm{Nu}$ for Rayleigh number $\mathrm{Ra} = 5.0\times10^5$ and Prandtl number $\mathrm{Pr}=4.0$. The light gray (red) curves are for Chandrasekhar number $\mathrm{Q}= 100$ and the gray (blue) curves are for $\mathrm{Q}= 400$. }
\end{figure}
\begin{figure*}[htp]
\centering
\includegraphics[height=!,width=18.0 cm]{psds.eps}
\caption{\label{psds}
Frequency power spectral densities (PSD) of the energy per unit mass $E(f)=|\mathrm{v}(f)|^2$, the convective entropy per unit mass $E_{\Theta} (f) = |\theta (f)|^2$ and thermal flux $\mathrm{Nu}(f)$ in the frequency space for different values of $\mathrm{Ra}$, $\mathrm{Q}$ and $\mathrm{Pr}.$
}
\end{figure*}
\section{\label{sec:DNS}Direct Numerical Simulations}
\noindent
The direct numerical simulations are carried out using pseudo-spectral method. The perturbative fields are expanded as:
\begin{eqnarray}
{\bm{\Psi}} (x,y,z,t) &=& \sum_{l,m,n} {\bm \Psi}_{lmn}(t) e^{ik(lx+my)}\cos{(n\pi z)},\label{eq.psi}\\
{\bm{\Phi}} (x,y,z,t) &=& \sum_{l,m,n} {\bm \Phi}_{lmn}(t) e^{ik(lx+my)}\sin{(n\pi z)},\label{eq.phi}
\end{eqnarray}
where ${\bm{\Psi}} (x, y, z, t) = [{\mathrm{v}_1}, {\mathrm{v}_2} , {p}]^{\dagger}$ and ${\bm{\Phi}} (x, y, z, t) = [{\mathrm{v}_3}, {\theta} ]^{\dagger}$. The time dependent Fourier amplitudes of these fields are denoted by ${\bm \Psi}_{lmn} (t) = [U_{lmn}, V_{lmn}, P_{lmn}]^{\dagger}$
and ${\bm \Phi}_{lmn} (t) = [W_{lmn}, \Theta_{lmn}]^{\dagger}$, where $l$, $m$ and $n$ are integers. The horizontal wave vector of the perturbative fields is $\bm{k} = lk\bm{\mathrm{e}}_1 + mk\bm{\mathrm{e}}_2$, where $\bm{\mathrm{e}}_1$ and $\bm{\mathrm{e}}_2$ are the unit vectors along the $x$- and $y$-axes. The numerical simulations are carried out in a three dimensional periodic box of size $L\times L\times 1$, where $L=2\pi/k_c(\mathrm{Q})$. The possible values of the integers $l, m, n$ are decided by the continuity equations. They can take values which satisfy the following equation.
\begin{equation}
ilk_c (\mathrm{Q}) U_{lmn} + imk_c(\mathrm{Q}) V_{lmn} + n\pi W_{lmn}=0.
\end{equation}
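The continuity constraint couples the three velocity amplitudes of each mode. In practice one may prescribe $U_{lmn}$ and $V_{lmn}$ and solve for $W_{lmn}$ (for $n \neq 0$), as in this illustrative sketch (names are ours):

```python
import numpy as np

def solve_W(l, m, n, U, V, kc):
    # W_lmn from i*l*kc*U + i*m*kc*V + n*pi*W = 0  (valid for n != 0)
    return -1j * kc * (l * U + m * V) / (n * np.pi)

def continuity_residual(l, m, n, U, V, W, kc):
    # left-hand side of the incompressibility constraint; zero if satisfied
    return 1j * l * kc * U + 1j * m * kc * V + n * np.pi * W

# example: prescribe U, V for mode (l, m, n) = (1, 2, 1) and solve for W
kc = 2.221                        # illustrative critical wave number
U, V = 0.3 + 0.1j, -0.2 + 0.4j
W = solve_W(1, 2, 1, U, V, kc)
```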
A minimum spatial grid resolution of $128\times 128\times 128$ or $256\times 256\times 256$ has been used for the simulations presented here. The integration in time is performed using a standard fourth order Runge-Kutta (RK4) method. The data points of the temporal signals are recorded at equal time intervals of $0.001$. The time steps have been chosen such that the Courant-Friedrichs-Lewy (CFL) condition is satisfied at all times.
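The scaling exponents reported in the next section are obtained by fitting the power spectral densities of such recorded signals on a log-log scale. A minimal sketch of this procedure is given below (a simplified Welch-type estimator, not the exact analysis code), validated on a synthetic random-walk signal whose PSD is known to vary as $f^{-2}$.

```python
import numpy as np

def psd_welch(x, seg_len, dt):
    # averaged Hann-windowed periodogram (one-sided), minimal Welch estimator
    nseg = len(x) // seg_len
    win = np.hanning(seg_len)
    norm = (win ** 2).sum() / dt
    psd = np.zeros(seg_len // 2 + 1)
    for i in range(nseg):
        seg = x[i * seg_len:(i + 1) * seg_len]
        seg = (seg - seg.mean()) * win
        psd += np.abs(np.fft.rfft(seg)) ** 2 / norm
    return np.fft.rfftfreq(seg_len, dt), psd / nseg

def fit_exponent(f, psd, fmin, fmax):
    # least-squares slope of log PSD versus log f inside [fmin, fmax]
    sel = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)
    return slope

# sanity check on a random walk, whose PSD scales as f^-2
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2 ** 17))
f, psd = psd_welch(x, 4096, 1.0)
slope = fit_exponent(f, psd, 0.01, 0.1)
```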
\section{\label{sec:Result}Results and Discussions}
\begin{table*}[ht]
\begin{ruledtabular}
\def~{\hphantom{0}}
\begin{tabular}{cccccc}
$\mathrm{Pr}$ & $\mathrm{Ra}$ & $\mathrm{Q}$ & {Exponent $\alpha$}& {Exponent $\beta$} & {Exponent $\gamma$}\\
\hline
$0.1$ & $7.0\times10^4$ & $100$ & $1.97$ & $1.97$ & $1.96 $\\
& & $300$ & $1.97$ & $1.97$ & $1.97$\\
& & $500$ & $1.96$ & $1.96$ & $1.97 $\\
& & $700$ & $1.96$ & $1.97$ & $1.97$\\ \\
$0.2$ & $7.0\times10^4$ & $100$ & $1.96$ & $1.97$ & $1.96$\\
& & $300$ & $1.97$ & $1.97$ & $1.96$\\
& & $500$ & $1.96$ & $1.96$ & $1.96 $\\ \\
$1.0$ & $3.04\times10^6$ & $300$ & $1.96$ & $1.97$ & $1.96 $\\
& & $500$ & $1.96$ & $1.97$ & $1.96$\\
& & $700$ & $1.97$ & $1.96$ & $1.96$\\
& & $1000$ & $1.96$ & $1.97$ & $1.97$\\ \\
$2.0$ & $3.04\times10^6$ & $500$ & $1.96$ & $1.96$ & $1.96$\\
& & $700$ & $1.96$ & $1.96$ & $1.96$\\
& & $1000$ & $1.96$ & $1.97$ & $1.96$\\ \\
$4.0$ & $5.0\times10^5$ & $100$ & $1.96$ & $1.97$ & $1.96 $\\
& & $200$ & $1.96$ & $1.97$ & $1.97$\\
& & $400$ & $1.96$ & $1.97$ & $1.97$\\ \\
$6.4$ & $5.0\times10^5$ & $50$ & $1.96$ & $1.97$ & $1.96 $\\
& & $100$ & $1.97$ & $1.97$ & $1.96$\\
& & $250$ & $1.97$ & $1.96$ & $1.97$\\
\end{tabular}
\caption{ \label{slopes}
Scaling exponents of the kinetic energy ($\alpha$), entropy ($\beta$) and Nusselt number ($\gamma$) for different values of the Prandtl number $\mathrm{Pr}$, Rayleigh number $\mathrm{Ra}$ and Chandrasekhar number $\mathrm{Q}$.}
\end{ruledtabular}
\end{table*}
\begin{figure}[htp]
\centering
\includegraphics[height=!,width=9.0 cm]{critical.eps}
\caption{\label{critical_freq}
Variation of critical values of the dimensionless frequencies $f_c(E)$,
$f_c (E_{\Theta})$ and $f_c (\mathrm{Nu})$ for the energy spectra $E(f)$, entropy spectra $E_{\Theta}(f)$ and thermal flux $\mathrm{Nu}(f)$, respectively, with the Chandrasekhar number $\mathrm{Q}$ for Prandtl number $\mathrm{Pr} = 0.1$ [red(light gray) triangles] and $1.0$ [blue(gray) circles].}
\end{figure}
The simulations are done for several values of the thermal Prandtl number ($ 0.1 \le\mathrm{Pr} \le 6.4$). These values of $\mathrm{Pr}$ are relevant for Earth's liquid outer core~\cite{Olson_Glatzmaier_1996}. They are also relevant for the problem of crystal growth~\cite{Lan_Kou_1991} and water based nanofluids~\cite{Kakac_Pramuanjaroenkij_2009}. The Rayleigh number is varied in the range $7.0 \times 10^4 \le \mathrm{Ra} \le 3.04 \times 10^6$, while the Chandrasekhar number is varied in the range $50 \le \mathrm{Q} \le 10^3$.
Fig.~\ref{Time_sig} shows the temporal variations of three global quantities for $\mathrm{Ra} = 5.0 \times 10^5$, $\mathrm{Pr} = 4.0$ and for two different values of $\mathrm{Q}$: (1) the kinetic energy per unit mass $E$, (2) the convective entropy per unit mass $E_{\Theta}$ and (3) the Nusselt number $\mathrm{Nu}$. All global quantities are averaged over the three-dimensional simulation box described above. The first two sets of curves (from the top) show the variations of $E$ with dimensionless time. The light gray (red) curve is for $\mathrm{Q}=100$ and the gray (blue) curve is for $\mathrm{Q}=400$. The mean of the kinetic energy decreases with increase in $\mathrm{Q}$. The fluctuations of the energy signal also decrease with increase in $\mathrm{Q}$. The curves in the third and fourth rows from the top show the temporal variations of $E_{\Theta}$, and the curves in the fifth and sixth rows (from the top) show the temporal signal of the Nusselt number $\mathrm{Nu}$, which is a measure of the heat flux. The mean values of the entropy per unit mass and the Nusselt number also decrease with increase in $\mathrm{Q}$, as do the fluctuations in their temporal signals.
Figure~\ref{psds} displays the power spectral densities (PSD) of the spatially averaged global quantities in the frequency space for several values of $\mathrm{Ra}$, $\mathrm{Pr}$ and $\mathrm{Q}$. The PSDs of the kinetic energy $E(f) = |\mathrm{v}(f)|^2$ are shown in Fig.~\ref{psds}(a). The energy spectra are very noisy for dimensionless frequencies in the range $0.04 < f < 1.0$, where the slope of the ${E(f)}$--$f$ curves on the log-log scale varies between $-3.2$ and $-5.1$. However, $E(f)$ is found to have negligible noise for $ 1 < f < 200$, and the PSD shows a very clear scaling behaviour for $f > 1$: $E(f)$ scales with frequency almost as $f^{-\alpha}$ with $\alpha \approx 2$. This scaling behaviour continues for more than two decades, and the scaling exponent is independent of $\mathrm{Pr}$, $\mathrm{Ra}$ and $\mathrm{Q}$ in this frequency window. Table~\ref{slopes} gives the exact values of the exponent $\alpha$ for different values of $\mathrm{Ra}$, $\mathrm{Pr}$ and $\mathrm{Q}$. The scaling law $E(f) \sim f^{-2}$ was also observed in rotating Rayleigh-B\'{e}nard convection (RBC)~\cite{Hirdesh_etal_2014}.
Fig.~\ref{psds}(b) shows the PSDs of the convective entropy $E_{\Theta} (f) = |{\theta(f)}|^2$ of the fluid in the frequency space for different values of $\mathrm{Ra}$, $\mathrm{Pr}$ and $\mathrm{Q}$. These power spectra are also noisy in the dimensionless frequency range $0.04 < f < 1.0$, where the slope on the log-log scale varies between $-5.9$ and $-6.4$. However, for $f > 1.0$, $E_{\Theta}(f)$ also scales with frequency as $f^{-\beta}$ with $\beta \approx 2$. The numerically computed values of the exponent $\beta$ are listed in Table~\ref{slopes}. Interestingly, the power spectra of the temperature fluctuations were also found to vary as ${f^{-2}}$ in turbulent RBC experiments~\cite{Boubnov-Golitsyn_1990}.
The PSDs of the thermal flux [Nusselt number, $\mathrm{Nu} (f)$] for several values of $\mathrm{Ra}$, $\mathrm{Pr}$ and $\mathrm{Q}$ are shown in Fig.~\ref{psds}(c). These PSDs also show scaling behaviour. They are noisy, as in the case of the energy and entropy signals, for dimensionless frequencies $0.04 < f < 1.0$, where the scaling exponent varies between $-4.5$ and $-6.4$. However, in the dimensionless frequency range $1 < f < 200$, the spectrum of the thermal flux $\mathrm{Nu} (f)$ also shows very clear scaling: $\mathrm{Nu}(f)\sim f^{-\gamma}$, where $\gamma \approx 2$. Table~\ref{slopes} lists the values of the exponent $\gamma$ computed in the DNS. Measurements of the spectra of the thermal flux in RBC also show a similar scaling law~\cite{Aumaitre_Fauve_epl_2003}.
The scaling regime, in which the power spectra vary as $f^{-2}$, starts at a critical frequency $f_c$ that depends on the Chandrasekhar number. Fig.~\ref{critical_freq} shows the variation of the critical frequency for $E(f)$, $E_{\Theta}(f)$ and $\mathrm{Nu}(f)$ with $\mathrm{Q}$ for two different values of $\mathrm{Pr}$. The critical frequency $f_c (E)$ becomes lower as $\mathrm{Q}$ is increased (see Fig.~\ref{critical_freq} (a)). In addition, it is smaller for smaller values of $\mathrm{Pr}$. Figs.~\ref{critical_freq} (b)-(c) show the variations of $f_c (E_{\Theta})$ and $f_c (\mathrm{Nu})$, respectively, with $\mathrm{Q}$. The values of the critical frequencies are slightly different for $E (f)$, $E_{\Theta} (f)$ and $\mathrm{Nu} (f)$. However, they all decrease with increase in $\mathrm{Q}$, and they also decrease with decrease in the value of $\mathrm{Pr}$.
\section{\label{sec:conclusion}Conclusions}
Results of direct numerical simulations of Rayleigh-B\'{e}nard magnetoconvection show that the power spectral densities of the kinetic energy $E(f)$, convective entropy $E_{\Theta}(f)$ and Nusselt number $\mathrm{Nu} (f)$ scale as $f^{-2}$ for frequencies above a critical value $f_c$. The critical values $f_c (E)$, $f_c (E_{\Theta})$ and
$f_c (\mathrm{Nu})$ are different for kinetic energy, convective entropy and the Nusselt number. The critical frequency decreases with increase in the strength of the external magnetic field. However, the scaling exponent is independent of the thermal Prandtl number, Rayleigh number and Chandrasekhar number. The results may be relevant for geophysical problems, water based nano-fluids and crystal growth.
\section{References}
\noindent
\nocite{*}
\section{Introduction}
Future sixth generation (6G) wireless communication systems are expected to rapidly evolve towards an ultra-high speed and low latency with the software-based functionality paradigm \cite{Akyildiz_Thz2018wcm,han2014multi,Sarieddeen2020wcm,chongwenhmimos,MarcoJSAC2020}. Although current millimeter-wave (mmWave) communication systems (30-300 GHz) have been integrated into 5G mobile systems, and several mmWave sub-bands have been released for licensed communications, e.g., 27.5-29.5 GHz, 57-64 GHz, 81-86 GHz, etc., the total consecutive available bandwidth is still less than 10 GHz, which makes it difficult to offer Tbps data rates \cite{Akyildiz_Thz2018wcm,han2014multi,Sarieddeen2020wcm,Moldovan2016}. To meet the increasing demand for higher data rates and new spectral bands, Terahertz (0.1--10 THz) band communication is considered one of the promising technologies to enable ultra-high speed and low-latency communications. Although major progress in the recent ten years is empowering practical THz communication networks, there are still many challenges in THz communications that require innovative solutions. One of the major challenges is the very high propagation attenuation, which drastically reduces the propagation distance.
Fortunately, the recently proposed reconfigurable intelligent surface (RIS) is considered a promising technology to combat the propagation distance problem, since an RIS can be programmed to change an impinging electromagnetic (EM) field in a desired way to focus, steer, and enhance the signal power towards the target user \cite{hcw2018icassp,MarcoJSAC2020,chongwenhmimos,zappone2020overhead,chongwentwc2019,wqq2019beamforming,
Basar2019access}. Recently, RIS-based designs have emerged as strong candidates to empower communications in the THz band. Specifically, \cite{Akyildiz_Thz2018wcm,Sarieddeen2020wcm} presented promising visions and potential applications that leverage the advances of RIS to combat the propagation attenuations and molecular absorption at THz frequencies. To remove obstacles to realizing these applications, \cite{ning2019beamforming,weili_CE_2020} proposed channel estimation and data-rate maximization transmission solutions for massive multiple-input multiple-output (MIMO) RIS-assisted THz systems. Furthermore, beamforming and resource allocation schemes were proposed in \cite{ning2019beamforming,Nie2019resource}. For example, a cooperative beam training scheme and two cost-efficient hybrid beamforming schemes were proposed in \cite{ning2019beamforming} for the THz multi-user massive MIMO system with RIS, while a resource allocation scheme based on an end-to-end physical model was introduced in \cite{Nie2019resource} to improve the achievable distance and data rate of THz-band RIS-assisted communications.
All the above works assume single-hop RIS-assisted systems, where only one RIS is deployed between the BS and the users. In practice, similar to multi-hop relaying systems, multiple RISs can be used to overcome severe signal blockage between the BS and users and achieve better service coverage. Although multi-hop MIMO relaying systems have been addressed intensively in the literature in the context of relay selection, relay deployment, and precoding design, multi-hop RIS-assisted systems have not yet been studied. In addition, the methodologies developed for multi-hop relay systems cannot be directly applied to multi-hop RIS-assisted systems, due to the different reflecting mechanisms and channel models. Particularly, the constraints of a diagonal phase shift matrix and unit-modulus reflection at the RIS make the joint design of transmit beamforming and phase shifts extremely challenging.
To address the high-dimensional, complex EM environment, and mathematically intractable non-linear issues of communication systems, model-free machine learning has been introduced in recent years as a remarkable technology \cite{chen2019FLwireless,Ref14c}. Overwhelming research interest and results suggest that machine learning will be used in future 6G wireless communication systems for dealing with the non-trivial problems caused by the extremely large dimensions of large scale MIMO systems. Specifically, deep learning has been used to obtain the channel state information (CSI) or beamforming matrix in non-linear communication systems. In dynamic and mobile wireless scenarios, deep reinforcement learning (DRL) provides an effective solution by leveraging the advantages of deep learning, iterative updating and interaction with the environment over time \cite{Ref14c,Ref16e,Ref16g,deeplearn2}. In particular, the hybrid beamforming matrices were obtained by DRL for mobile mmWave systems in \cite{Ref16e}, while \cite{Ref16g} proposed a novel idea of utilizing DRL for optimizing network coverage.
In this paper, we present a multi-hop RIS-assisted communication scheme to overcome the severe propagation attenuations and improve the coverage range at THz-band frequencies, where the hybrid design of the transmit beamforming at the BS and the phase shift matrices is obtained by the advances of DRL. Specifically, benefiting from the recent breakthroughs on RIS, our main objective is to overcome the propagation attenuations of THz-band communications by deploying multiple passive RISs between the BS and multiple users. The formulated sum-rate maximization problem is non-convex due to the multiuser interference, the mathematically intractable multi-hop signals, and the non-linear constraints. Owing to the presence of possible multi-hop propagation, which results in composite channel fadings, the optimal solution is unknown in general. To tackle this intractable issue, a DRL based algorithm is proposed to find feasible solutions.
The notations of this paper are summarized as follows. We use $\mathbf{H}$ to denote a general matrix. $\mathbf{H}^{(t)}$ is the value of $\mathbf{H}$ at time $t$. $\mathbf{H}^T$ and $\mathbf{H}^{\mathcal{H}}$ denote the transpose and conjugate transpose of matrix $\mathbf{H}$, respectively. $Tr \{ \cdot \}$ is the trace of the enclosed matrix. For any vector $\mathbf{g}$, $\mathbf{g}(i)$ is the $i^{th}$ entry, while $\mathbf{g}_{k}$ is the channel vector of the $k^{th}$ user. $||\mathbf{h}||$ denotes the magnitude of the vector. $\mathcal{E}$ denotes statistical expectation. $|x|$ denotes the absolute value of a complex number ${x}$, whose real and imaginary parts are denoted by $Re{(x)}$ and $Im{(x)}$, respectively.
\vspace{-0.15cm}
\section{System Model and Problem Formulation}
\subsection{Terahertz-Band Channel Model}
Unlike lower frequency band communications, a signal operating in the THz band is easily affected by many peculiar factors, mainly the molecular absorption due to water vapor and oxygen, which results in very high path loss for line-of-sight (LOS) links\cite{Akyildiz_Thz2018wcm,han2014multi,Sarieddeen2020wcm}. On the other hand, spreading loss also contributes a large proportion of the attenuation. For non-line-of-sight (NLOS) links, besides the mentioned peculiarities, unfavorable material and roughness of the reflecting surface will also cause a very severe reflection loss\cite{Moldovan2016,han2014multi,Nie2019resource}. The overall channel transfer function can be written as
\begin{equation} \label{eq:modelthz1}
\begin{split}
H(f,d,\bm{\zeta})=&H^{LOS}(f,d)e^{-j2\pi f\tau_{LOS}}+ \\
&\sum^{M_{rays}}_{i=1}H_i^{NLOS}(f,\zeta_i)e^{-j2\pi f\tau_{NLOS_i}},
\end{split}
\end{equation}
where $f$ denotes the operating frequency, $d$ is the distance between the transmitter and receiver, the vector $\bm{\zeta}=[\zeta_1,...,\zeta_{M_{rays}}]$ represents the coordinates of all scattering points, and $\tau_{LOS}$ and $\tau_{NLOS_i}$ denote the propagation delays of the LOS path and $i^{th}$ NLOS path respectively.
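As an illustration, the transfer function in (\ref{eq:modelthz1}) can be evaluated for given per-path gains and delays; in the sketch below, the gains are placeholders rather than outputs of a THz propagation model, and the function name is ours.

```python
import numpy as np

def channel_tf(f, h_los, tau_los, h_nlos, tau_nlos):
    # overall transfer function: LOS term plus a sum of NLOS rays;
    # h_los and h_nlos are per-path complex gains, taus are delays in s
    H = h_los * np.exp(-2j * np.pi * f * tau_los)
    for h_i, t_i in zip(h_nlos, tau_nlos):
        H += h_i * np.exp(-2j * np.pi * f * t_i)
    return H

f0 = 3e11                                       # 0.3 THz carrier (illustrative)
H_single = channel_tf(f0, 1.0, 0.0, [], [])     # LOS only
H_cancel = channel_tf(f0, 1.0, 0.0, [1.0], [0.5 / f0])  # destructive NLOS ray
```

The second call shows how a single NLOS ray delayed by half a carrier period cancels the LOS term, the frequency-selective fading the multi-ray sum in (\ref{eq:modelthz1}) captures.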
\subsection{Proposed Multi-hop Scheme}
As mentioned before, communications over the THz band are very different from lower frequency band communications, as the transmitted signal suffers from severe path attenuations. To address this issue, we introduce a multi-hop multiuser system by leveraging some unique features of RISs, which is comprised of a BS, $N$ reflecting RISs and multiple single-antenna users, as shown in Fig. \ref{fig:hybrid}. We consider a BS equipped with $M$ antennas communicating with $K$ single-antenna users in a circular region. Assume that the $i^{th}$ reflecting RIS, $i=1,\cdots, N$, has $N_i$ reflecting elements. A number of $K$ ($K\leq M$) data streams are transmitted simultaneously from the $M$ antennas of the BS with the aid of multiple RISs to improve the coverage range of THz communications. Each data stream is beamformed towards one of the $K$ users with the assistance of the RISs.
\textbf{Remark}: In contrast to traditional precoding architectures, a key novelty of the proposed multi-hop scheme is to take full advantage of RISs with their unique programmable feature as an \textbf{external and portable analog precoder}, i.e., the RIS functions as a reflecting array, equivalent to applying analog beamforming to impinging signals, which not only removes the internal analog precoder at the BS, simplifying the architecture and reducing its cost significantly, but also improves the beamforming performance of THz-band communication systems.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=14cm]{fig1.pdf}
\caption{The RIS-based multi-hop for THz communications and proposed practical RIS-based hybrid beamforming architecture. }
\label{fig:hybrid}
\end{center} \vspace{-6mm}
\end{figure*}
We assume that the channel fading is frequency flat, and that the transmitted signal experiences $I_k$ ($I_k \leq N$) hops on RISs to arrive at the $k^{th}$ user. We denote the channel matrix from the BS to the first reflecting RIS as $\mathbf{H}_1 \in \mathbb{C}^{(N_1 \times M )}$, and the channel matrix from the $i^{th}$ RIS to the $(i+1)^{th}$ RIS as $\mathbf{H}_{(i+1)} \in \mathbb{C}^{(N_{(i+1)} \times N_i)}$. The received signal at the $k^{th}$ user is given as
\begin{equation} \label{eq:sys_1}
\begin{split}
y_k=(\mathbf{g}_{k}^T \prod_{i=1,\cdots, I_k} \mathbf{\Phi}_i \mathbf{H}_{i}+\mathbf{w}_k)\mathbf{x}+n_k \\
\end{split}
\end{equation}
where the vector $\mathbf{g}_{k} \in \mathbb{C}^{(N_{I_k} \times 1)}$ and $\mathbf{w}_k \in \mathbb{C}^{( 1 \times M )} $ denote the channel from the last RIS to the $k^{th}$ user and the direct channel from the BS to user $k$, respectively, $\mathbf{\Phi}_i \triangleq\mathrm{diag}[\theta_{i1},\theta_{i2},\ldots,\theta_{iN_i}] \in \mathbb{C}^{(N_{i} \times N_{i})}$ is the phase shift matrix of the $i^{th}$ RIS, i.e., the $i^{th}$ analog precoding matrix, $\mathbf{x} \in \mathbb{C}^{M \times 1} $ is the transmit vector from the BS, and $n_k$ is the additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_n^2$. We further assume that the channels $\mathbf{g}_{k}$, $\mathbf{w}_{k}$, and $\mathbf{H}_{i}$ for all $K$ users are perfectly known at both the BS and all users. Although we admit that obtaining these CSIs is a challenging task for RIS-based communication systems, several methods have already been proposed in existing works \cite{ning2019beamforming,weili_CE_2020}, and research on channel estimation is beyond the scope of this paper. The transmit vector $\mathbf{x}$ can be written as $ \mathbf{x}\triangleq\sum_{k=1}^{K}\mathbf{f}_{k}s_{k} $,
where $\mathbf{f}_{k}\in\mathbb{C}^{M\times 1}$ and $s_{k} \in \mathcal{CN}(0,1)$, i.e., under the assumption of Gaussian signals, denote the beamforming vector and the independent user symbol, respectively. The power of the transmit signal from the BS satisfies the constraint
$ \mathcal{E}[|\mathbf{x}|^2]=\mathrm{tr}(\mathbf{F}^H\mathbf{F})\leq P_t,$
wherein $\mathbf{F}\triangleq[\mathbf{f}_1,\mathbf{f}_2,...,\mathbf{f}_K]\in\mathbb{C}^{M\times K}$, and $P_t$ is the total transmit power of the BS.
It should be noted that $\mathbf{\Phi}_i$ is a diagonal matrix whose entries are given by $\mathbf{\Phi}_i(n_i,n_i)=\theta_{in_i}=e^{j\phi_{n_i}}$, where $\phi_{n_i}$ is the phase shift induced by each element of the RIS. Like a mirror, the RIS reflects the signal without energy loss, which means $|\mathbf{\Phi}_i(n_i,n_i)|^2=1$.
The received signal (\ref{eq:sys_1}) can be further written as
\begin{equation} \label{eq:sys_1a}
\begin{split}
y_k=&\bigg(\mathbf{g}_{k}^T \prod_{i=1,\cdots, I_k} \mathbf{\Phi}_i \mathbf{H}_{i}+\mathbf{w}_k \bigg) \mathbf{f}_ks_k + \\
&\sum_{j, j\neq k}^K \bigg(\mathbf{g}_{k}^T \prod_{i=1,\cdots, I_k} \mathbf{\Phi}_i \mathbf{H}_{i}+\mathbf{w}_k \bigg) \mathbf{f}_js_j+n_k
\end{split}
\end{equation}
where $\mathbf{f}_j$ is the beamforming vector of the $j^{th}$ user, $j\neq k$.
Furthermore, the signal-to-interference-plus-noise ratio (SINR) at the $k^{th}$ user is written as
\begin{equation} \label{eq:sys_3}
\rho_{k}=\frac{|(\mathbf{g}_{k}^T \prod_{i=1,\cdots, I_k} \mathbf{\Phi}_i \mathbf{H}_{i}+\mathbf{w}_k) \mathbf{f}_k|^2}{\sum_{j,j\neq k}^K|(\mathbf{g}_{k}^T \prod_{i=1,\cdots, I_k} \mathbf{\Phi}_i \mathbf{H}_{i}+\mathbf{w}_k) \mathbf{f}_j|^2+\sigma_n^2}.
\end{equation}
\subsection{Problem Formulation}
Our main objective is to combat the propagation attenuations of THz communications by leveraging the multi-hop RIS-assisted communication scheme. Therefore, we use the ergodic sum rate as the evaluation metric. However, the major obstacle to maximizing the sum rate is to obtain the optimal design of the digital beamforming matrix $\mathbf{F}$ and the analog beamforming matrices $\mathbf{\Phi}_{i}, \forall i$, i.e., the phase shift matrices of the RISs. The optimization problem is formulated as follows,
\begin{equation} \label{eq:BD_1}
\begin{split}
& \max\limits_{\mathbf{F},\mathbf{\Phi}_i} C(\mathbf{F},\mathbf{\Phi}_{i,\forall i}, \mathbf{w}_{k, \forall k}, \mathbf{g}_{k, \forall k},\mathbf{H}_{i,\forall i})=\sum_{k=1}^K\log_2(1+\rho_k) \\
& \; \textrm{s.t.} \;\; tr\{\mathbf{F}\mathbf{F}^{\mathcal{H}} \} \leq P_t \\
& \;\;\;\quad\;\; |\theta_{in_i}|=1\;\forall n_i=1,2,\ldots,N_i.
\end{split}
\end{equation}
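To make the objective concrete, the sketch below evaluates the cascaded effective channel and the resulting sum rate for given $\mathbf{F}$ and $\{\mathbf{\Phi}_i\}$. The hop ordering (hop 1 on the BS side, applied right-to-left in the cascade) and all dimensions are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def effective_channel(g_k, Phis, Hs, w_k):
    # h_k^T = g_k^T (Phi_{I_k} H_{I_k}) ... (Phi_1 H_1) + w_k  (assumed ordering)
    chain = np.eye(Hs[0].shape[1], dtype=complex)
    for Phi, H in zip(Phis, Hs):       # hop 1 first
        chain = Phi @ H @ chain
    return g_k @ chain + w_k

def sum_rate(h_eff, F, sigma2=1.0):
    # h_eff: K x M effective channel rows; F: M x K digital beamformers
    K = h_eff.shape[0]
    rate = 0.0
    for k in range(K):
        sig = abs(h_eff[k] @ F[:, k]) ** 2
        interf = sum(abs(h_eff[k] @ F[:, j]) ** 2 for j in range(K) if j != k)
        rate += np.log2(1.0 + sig / (interf + sigma2))
    return rate

# toy single-user example: one 2-element RIS, identity channels, zero phases
Phi = np.eye(2, dtype=complex)         # unit-modulus diagonal entries
H1 = np.eye(2, dtype=complex)
g = np.array([1.0, 0.0], dtype=complex)
w = np.zeros(2, dtype=complex)
h = effective_channel(g, [Phi], [H1], w)
F = np.array([[1.0], [0.0]], dtype=complex)
r = sum_rate(h.reshape(1, -1), F, sigma2=1.0)
```

Such an evaluator is exactly what the DRL agent of the next section needs as its reward function.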
Unfortunately, we can easily find that the optimization problem (\ref{eq:BD_1}) is an NP-hard problem because of the non-trivial objective function and the non-convex constraint. It is nearly impossible to obtain an analytical solution for the multi-hop optimization by traditional methods of mathematical analysis. In addition, exhaustive numerical search is also impractical for large scale networks. Although some existing approximation methods based on alternating optimization have been proposed to find sub-optimal solutions for single-hop RIS-based systems, e.g., \cite{chongwentwc2019,wqq2019beamforming,
Basar2019access}, they are difficult to apply to the multi-hop scenario, especially since the number of RIS hops experienced by the transmitted signal before arriving at the $k^{th}$ user, i.e., $I_k$ ($I_k \leq N$), is not known a priori. Instead, in this paper, we propose a new method by leveraging recent advances in DRL techniques, rather than directly solving this challenging optimization problem mathematically.
\section {DRL-based Design of Digital and Analog Beamforming}
In this section, we give the details of the proposed DRL-based algorithm for hybrid beamforming of multi-hop THz communication networks utilizing the deep deterministic policy gradient (DDPG) algorithm.
\subsection {Framework of DRL}
Generally, a typical DRL framework consists of six fundamental elements, i.e., the state set $\mathbf{S}$, the action set $\mathbf{A}$, the instant reward $r(s,a), (s\in \mathbf{S}, a \in \mathbf{A})$, the policy $\pi(s,a)$, the transition function $\mathbf{P}$ and the Q-function $Q(s,a)$. Note that the policy $\pi(s,a)$ denotes the conditional probability of taking action $a$ in the instant state $s$, so it must satisfy $\sum_{a \in \mathbf{A}}\pi(s,a) =1$ for each $s \in \mathbf{S}$. In addition, since we consider a mobile environment, the transition function $\mathbf{P}$ is usually affected by both the environment itself and the action taken by the RL agent.
Since our hybrid beamforming problem has nearly infinite state and action spaces, the storage size and search complexity of a Q-table are extremely impractical. To overcome these issues, we employ a deep Q-learning method to approximate the Q-table by leveraging the universal approximation property of deep neural networks (DNNs) \cite{universal}. As shown in Fig. \ref{fig:NN}, our proposed DRL framework uses two DNNs (named the actor network and the critic network) to approximate the state/action value function. In other words, the actor network approximates a policy based on the observed environment state $s$ and outputs an action, while the critic network, denoted $Q(\mathbf{\theta} |s(t),a(t))$, evaluates the current policy according to the received rewards.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig2.pdf}
\caption{ The illustration of the proposed DRL framework, and the actor-critic DDPG algorithm. }
\label{fig:NN} \vspace{-8mm}
\end{center}
\end{figure}
\subsection{Critic and Actor Networks}
As can be seen in Fig. \ref{fig:NN}, the core of the DDPG structure is the critic and actor networks, which are fully connected DNNs sharing a similar structure of four layers, i.e., two hidden layers plus the input and output layers. Note that the width of the network scales with the dimension of the actions, while the size of the output layer is determined by the number of users. We introduce a batch normalization layer between the two hidden layers with ReLU activation functions. The optimizer used in the critic and actor networks is Adam with learning rates $\mu_c^{(t)}=\lambda_c\mu_c^{(t-1)}$ and $\mu_a^{(t)}=\lambda_a\mu_a^{(t-1)}$, where $\lambda_c$ and $\lambda_a$ represent their respective decay rates.
\subsubsection{Critic Process}
The main objective of the critic agent is to evaluate how good a policy is. The inputs of the critic network are the current environment state and the actions generated by the actor network, and it outputs the Q-function based on DDPG. Its learning rate is usually set smaller to avoid oscillation, at the cost of a longer convergence time. To handle negative inputs, the $\tanh$ activation function is adopted for training the critic network. To remove the correlation between adjacent states, the input state $\mathbf{S}$ is whitened.
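The whitening step can be sketched as follows. This is a generic ZCA-style whitening written for illustration (the function name and the eigendecomposition-based implementation are our assumptions, not necessarily the paper's exact preprocessing): it decorrelates the components of a batch of state vectors before they are fed to the critic.

```python
import numpy as np

def whiten(S, eps=1e-8):
    """Decorrelate a batch of state vectors (rows of S).

    Subtract the mean, then rescale along the eigenvectors of the sample
    covariance so the state components become (approximately) uncorrelated
    with unit variance.  eps guards against near-singular covariances.
    """
    S = np.asarray(S, dtype=float)
    X = S - S.mean(axis=0)
    cov = X.T @ X / max(len(S) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # cov^{-1/2}
    return X @ W
```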
\subsubsection{Actor Process}
The function of the actor network is to learn the current environment based on the DDPG algorithm (details can be seen in Algorithm 1 of our previous work \cite{Ref14c}) and output the actions to the critic network. Unlike the critic network, the actor needs an additional processing step, i.e., power normalization before the output, for computing $\Delta_{a}q(\mathbf{\theta}_c^{(target)}|s^{(t)},a)$. The approximate policy gradient may introduce errors, and it cannot guarantee that we obtain the optimal solution, but we can minimize the error by using the compatible features of the transition function.
In the actor-critic RL agent, the policy parameters and transition function are updated simultaneously, and $\mathbf{F}$ needs to meet the power constraint $Tr \{\mathbf{F}\mathbf{F}^{\mathcal{H}} \} =P_t$. To satisfy this condition, a normalization layer is added at the output of the actor network. Note that the RISs change the transmission direction of the signal, but its amplitude is maintained, since $|\mathbf{\Phi}_i(n_i,n_i)|^2=1$ and the RISs do not consume additional power.
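The two projections just described can be sketched in a few lines. This is our own illustration (the function name is an assumption): $\mathbf{F}$ is rescaled onto the power constraint $Tr\{\mathbf{F}\mathbf{F}^{\mathcal{H}}\}=P_t$, and each RIS coefficient is pushed back to the unit circle so that only its phase survives.

```python
import numpy as np

def normalize_actions(F, phis, P_t):
    """Project raw actor outputs onto the feasible set (sketch).

    F    : raw (M, K) digital beamformer, rescaled so tr(F F^H) = P_t
    phis : list of raw diagonal RIS matrices, each entry mapped back to
           the unit circle so |Phi_i(n, n)| = 1 (phase-only, no power used)
    """
    F = np.asarray(F, dtype=complex)
    scale = np.sqrt(P_t / np.trace(F @ F.conj().T).real)
    phases = [np.angle(np.diag(p)) for p in phis]
    return scale * F, [np.diag(np.exp(1j * th)) for th in phases]
```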
\subsection{Proposed DRL Algorithm }
Before we implement the proposed DRL algorithm, the channel information $\mathbf{H}_i, i=1,\cdots, I$, $\mathbf{w}_{k}$ and $\mathbf{g}_{k}, \forall k$ is collected by existing channel estimation methods investigated in previous works \cite{ning2019beamforming,weili_CE_2020}. From the channel information and the previous actions $\mathbf{F}^{(t-1)}$ and $\mathbf{\Phi}_i^{(t-1)}, \forall i$ at state $t-1$, the agent obtains the current state $s^{(t)}$. In addition, weight initialization is a key factor affecting the learning process. The actions $\mathbf{F}$ and $\mathbf{\Phi}_i, \forall i$, the network parameters $\mathbf{\theta}_c^{(train)}$, $\mathbf{\theta}_a^{(train)}$, $\mathbf{\theta}_c^{(target)}$, $\mathbf{\theta}_a^{(target)}$ and the replay buffer $\mathcal{M}$ should be initialized before running the algorithm. Furthermore, we also propose two initialization algorithms, one based on singular value decomposition (SVD), and the other utilizing the max-min SINR method.
The algorithm stops when it converges or reaches the maximum number of iteration steps. When the obtained rewards can no longer be increased by taking more actions, we regard the outputs $\mathbf{F}_{opt}, \mathbf{\Phi}_{i,opt}$ as optimal. Note that the proposed algorithm might converge to sub-optimal solutions, although our objective is to obtain the optimal digital and analog beamforming. Combining this with the previously stated DDPG, the whole proposed DRL algorithm is summarized as Algorithm \ref{alg:ALG1} in the following.
\begin{algorithm}[H]
\caption{DRL-based hybrid beamforming design for RIS-based THz Systems}
\label{alg:ALG1}
\textbf{Input:} $\mathbf{w}_{k,\forall k}$, $\mathbf{g}_{k, \forall k},\mathbf{H}_{i,\forall i}$ \\
\textbf{Output:} The optimal $a=\{ \mathbf{F},\mathbf{\Phi}_{i,\forall i} \}$, $Q$ function\\
\textbf{Initialization:} Memory $\mathcal{M}$; parameters $\mathbf{\theta}_c^{(train)}$, $\mathbf{\theta}_a^{(train)}$, $\mathbf{\theta}_c^{(target)}$, $\mathbf{\theta}_a^{(target)}$; \\ beamforming matrices $\mathbf{F}$, $\mathbf{\Phi}_{i,\forall i} $
\begin{algorithmic}[1]
\WHILE{}
\FOR{\texttt{episode $=0,1,2, \cdots,Z-1$}}
\STATE Collect and preprocess $\mathbf{w}_{k, \forall k}^{(n)}, \mathbf{g}_{k, \forall k}^{(n)},\mathbf{H}_{i,\forall i}^{(n)}$ for the $n^{th}$ episode to obtain the first state $s^{(0)}$
\FOR{\texttt{t=$0,1,2, \cdots, T-1$}}
\STATE Update action $a^{(t)}=\{\mathbf{F}^{(t)},\mathbf{\Phi}_{i,\forall i}^{(t)} \}=\pi(\mathbf{\theta}_a^{(train)})$ from the actor network \\
\STATE Implement DDPG Algorithm \\
\STATE Update parameters $\mathbf{\theta}_c^{(train)}$, $\mathbf{\theta}_a^{(train)}$, $\mathbf{\theta}_c^{(target)}$, $\mathbf{\theta}_a^{(target)}$ \\
\STATE Input them to the agent as next state $s^{(t+1)}$
\ENDFOR
\ENDFOR \\
\textbf{Until:} Convergent or reaches the maximum iterations.
\ENDWHILE
\end{algorithmic}
\end{algorithm} \vspace{-4mm}
In terms of this proposed algorithm, its state, action, reward and convergence are elaborated in the following.
\subsubsection{State}
The state $s^{(t)}$ is continuous and constructed from the transmit digital beamforming matrix $\mathbf{F}^{(t-1)}$ and the analog beamforming matrices $\mathbf{\Phi}_i^{(t-1)}, \forall i$ at the previous time step $t-1$, together with the channel information $\mathbf{H}_i, i=1, \cdots, I$, $\mathbf{w}_k$ and $\mathbf{g}_k, \forall k$. Since the TensorFlow platform on which our DRL runs does not support complex-valued inputs, we employ two independent input ports for the real and imaginary parts of the state $s$. The dimension of the state space is $D_s=2MK+2\sum_{i=1,\cdots, I}N_i+2MN_1+2\sum_{i=1, \cdots, I-1}N_iN_{i+1}+2KN_I$. We assume that there is no neighboring interference between different states. To maximize the transmission distance, we assume that each state can offer some prior knowledge to the DRL agent for selecting the optimal RIS and analog beamforming. The optimal beamforming is related to the channel information and the interference to other users. The DRL agent can therefore learn the interference pattern from the historical data and infer the future interference at the next time step.
\subsubsection{Action}
Similarly, the action space is also continuous, and comprises the digital beamforming matrix $\mathbf{F}$ and the analog beamforming matrices $\mathbf{\Phi}_i, \forall i$. Furthermore, the real and imaginary parts of $\mathbf{F}=Re\{\mathbf{F} \}+Im \{ \mathbf{F}\}$ and $\mathbf{\Phi}_i=Re \{\mathbf{\Phi}_i \} +Im \{\mathbf{\Phi}_i \}$ are also separated as two inputs. The dimension likewise depends on the parameters of the communication system, as $D_a=2MK+2\sum_{i=1,\cdots, I}N_i$.
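The two dimension formulas above can be mirrored directly in code; the helper below is our own illustration (the function name is an assumption) and simply reproduces $D_s$ and $D_a$ term by term.

```python
def drl_dimensions(M, K, N):
    """State/action dimensions from the text (real/imag counted separately).

    M: BS antennas, K: users, N = [N_1, ..., N_I]: elements per RIS hop.
    """
    I = len(N)
    D_a = 2 * M * K + 2 * sum(N)              # F plus the RIS phase vectors
    D_s = (D_a                                # same quantities at step t-1 ...
           + 2 * M * N[0]                     # ... plus H_1 (BS -> RIS 1)
           + 2 * sum(N[i] * N[i + 1] for i in range(I - 1))  # H_i between hops
           + 2 * K * N[-1])                   # g_k (last RIS -> users)
    return D_s, D_a
```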
\subsubsection{Reward}
The instant reward is affected by two main factors: the contribution to the throughput $C(\mathbf{F}^{(t)},\mathbf{\Phi}_i^{(t)}, \mathbf{w}_{k}, \mathbf{g}_{k},\mathbf{H}_{i=1,\cdots, I})$, and the penalty caused by adjusting the beamforming direction given the prior information, the instantaneous channels $\mathbf{H}_{i=1,\cdots, I}$, $\mathbf{h}_{k}, \forall k$, and the actions $\mathbf{F}^{(t)}$ and $\mathbf{\Phi}_i^{(t)}$ output by the actor network.
\subsubsection{Convergence}
Furthermore, several factors affect the convergence. For example, the initialization of the action and state parameters plays a key role, as introduced above. In addition, the gradient evolution and the learning rate also affect convergence: values that are too large or too small can both make the algorithm diverge. We investigate the effect of the learning rate in the simulation section.
\section{Numerical Results}
In this section, we numerically evaluate the performance of the proposed DRL-based hybrid beamforming algorithm for multi-hop multiuser RIS-assisted wireless THz communication networks.
\subsection{Simulation Settings}
In the following simulations, we consider a single cell scenario, where there is only one BS, and many RISs that are randomly deployed in a circular region with the diameter as 100 m.
\subsubsection{System Model}
We employ the proposed hybrid beamforming architecture shown in Fig. \ref{fig:hybrid}. In particular, the BS has $M=8$ antennas with the same number of RF chains, and the $K=32$ mobile users are each equipped with a single antenna and RF chain. To reduce the complexity of deployment and learning, we assume that all $N=64$ RISs have the same number of elements, i.e., $N_i=128$ for all $i$, with the spacing between elements equal to $2\lambda$. The channel matrices $\mathbf{w}_{k, \forall k},\mathbf{g}_{k, \forall k},\mathbf{H}_{i,\forall i}$ are generated randomly with a Rayleigh distribution in the simulations. The transmission frequency is set to 0.12 THz with a fixed 12 GHz bandwidth, and the transmit power of the BS is set to $10$ Watt.
\subsubsection{DRL Settings}
Without special highlight, the parameter settings of the proposed DRL-based beamforming algorithm are concluded in Table \ref{tab:hyperP}.
\begin{table}
\caption{ Parameters for DRL-based beamforming algorithm} \label{tab:hyperP}
\begin{center} \vspace{-4mm}
\begin{tabular}{ | m{4em} | m{19em}| m{4em} | }
\hline
Parameters & Description & Settings \vspace{1mm}\\
\hline
$\beta$ & Discounted rate of the future reward & 0.99 \vspace{1mm}\\
\hline
$\mu_c$ & Learning rate of training critic network update & 0.001 \vspace{1mm} \\
\hline
$\mu_a$ & Learning rate of training actor network update & 0.001 \vspace{1mm} \\
\hline
$\tau_c$ & Learning rate of target critic network update & 0.001 \vspace{1mm} \\
\hline
$\tau_a$ & Learning rate of target actor network update & 0.001 \vspace{1mm} \\
\hline
$\lambda_c$ & Decaying rate of training critic network update & 0.005 \vspace{1mm}\\
\hline
$\lambda_a$ & Decaying rate of training actor network update & 0.005 \vspace{1mm} \\
\hline
$D$ & Buffer size for experience replay& 100000 \vspace{1mm} \\
\hline
$Z$ & Number of episodes & 5000 \vspace{1mm} \\
\hline
$T$ & Number of steps in each episode & 20000 \vspace{1mm} \\
\hline
$W$ & Number of experiences in the mini-batch & 16 \vspace{1mm} \\
\hline
$U$ & Number of steps synchronizing target network with the training network & 1 \vspace{1mm} \\
\hline
\end{tabular} \vspace{-4mm}
\end{center}
\end{table}
\subsubsection{Benchmarks}
To show the effectiveness of our proposed scheme, three significant cases are selected as benchmarks. The first case is an ideal one, where there are no RISs to assist transmission, i.e., $I=0$, and we employ fully digital zero-forcing beamforming. The second typical benchmark was already investigated in existing works \cite{chongwentwc2019,ning2019beamforming,Nie2019resource,wqq2019beamforming}, where there is just a single hop between the BS and each user, and an alternating optimization method is usually used to design the beamforming matrices.
\subsection{Comparisons with Benchmarks}
We compare the proposed DRL-based method described in Algorithm 1 for multi-hop RIS-assisted wireless THz communication networks with the three mentioned benchmarks in Fig. \ref{fig:comparison}. It shows that the proposed DRL-based multi-hop (i.e., $I=2$) THz communication scheme nearly always obtains the best system throughput among the considered schemes over the whole transmission range from 1 m to 20 m. In particular, the first benchmark employs ideal fully digital ZF beamforming without RIS assistance, and its throughput drops fastest as the transmission distance increases. For example, at the same throughput of 1 Gbps, the proposed DRL-based two-hop scheme achieves around 50\% and 14\% more transmission distance than ZF beamforming without RIS and the single-hop scheme, respectively. Moreover, this performance gap becomes larger as the transmission distance increases. Another interesting point is that the traditional alternating method (we adopt the method proposed in \cite{wqq2019beamforming}) obtains slightly better performance than the DRL-based single-hop scheme, but much less than the two-hop scheme.
\begin{figure}[tb] \vspace{-4mm}
\begin{center}
\includegraphics[width=8.2cm]{fig4_compdis.pdf} \vspace{-4mm}
\caption{Total throughput versus transmission distance. We compare the performance of four schemes. }
\label{fig:comparison}
\end{center} \vspace{-8mm}
\end{figure}
\subsection{Impact of System Settings}
To further verify the effectiveness of our proposed scheme, we evaluate its reward as a function of time steps in Fig. \ref{fig:steps5dB}, with the setting $M=8,I=2, N_i=64, K=4$. It can be seen that the sum rate converges with the time step $t$. As the SNR increases, the instant and average rewards both increase naturally. However, convergence is faster at the low transmit power $P_t=5W$ than at the high transmit power $P_t=30W$. This is because a higher transmit power implies a larger state space for the instant rewards, which needs more time to converge to a locally optimal solution. Based on these results, we also conclude that our proposed DRL-based algorithm can learn from the environment and feed the rewards back to the agent, prompting the beamforming matrices $\mathbf{F}$ and $\mathbf{\Phi}_{i,\forall i} $ to converge to a local optimum.
\begin{figure}[htbp] \vspace{-4mm}
\begin{center}\vspace{-0mm}
\includegraphics[width=8.2cm]{fig4_rewards.pdf} \vspace{-4mm}
\caption{Rewards versus steps at $P_t=5W$, $P_t=20W$, and $P_t=30W$ respectively. }
\label{fig:steps5dB} \vspace{-8mm}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, a novel and practical hybrid beamforming architecture for multi-hop multiuser RIS-assisted wireless THz communication networks was proposed, which can effectively combat the severe propagation attenuation and improve the coverage range. Based on this proposed scheme, a non-convex joint design problem of the digital and analog beamforming matrices was formulated. To tackle this NP-hard problem, a novel DRL-based algorithm was proposed, which is a very
early attempt to address this hybrid design problem. Simulation results show that our proposed scheme is able to extend the coverage range of THz communications by 50\% compared with the considered benchmarks. Furthermore, the results also show that the proposed DRL-based method is a state-of-the-art approach to solving this NP-hard beamforming problem, especially when the signals in RIS-assisted THz communication networks experience multiple hops.
\vspace{-4mm}
\bibliographystyle{ieeetran}
\section*{Overview}
This is a summary of the splitting circle method as implemented$^1$ in Pari. The overall idea is to recursively split the polynomial $p(x)$ into two factors $F(x), G(x)$ of roughly equal degree at each step, until we reach linear factors, which are the roots.
We can divide the algorithm into two parts: determining a splitting circle with roughly half of the roots on each side, and estimating the factor $F(x)$ consisting of the product of the roots inside the circle. In this part, we go into detail on finding and using the splitting circle.
\section*{Finding a splitting circle}
\subsection*{Overview}
We start with a monic polynomial $P(x)= x^n + a_1x^{n-1} + ... + a_n$ of degree $n$ where the roots have moduli $\varrho_1(P),..., \varrho_n(P)$ arranged in increasing order. We would like to find a circle that contains some roots but not all and that the two collections of roots are as far apart as possible. Intuitively, if we see that the ratio $\frac{\varrho_{j+1}(P)}{\varrho_{j}(P)}$ is sufficiently large, then the geometric mean $\sqrt{\varrho_{j+1}(P)\varrho_{j}(P)}$ would make for a good radius for the splitting circle centered at the origin. Unfortunately, the roots of $P$ can have the exact same moduli.
Our algorithm has two steps: finding an appropriate center for the splitting circle and then calculating the radius.
\subsection*{Finding the center}
For our splitting circle, we care about the range of the moduli, as expressed by the ratio $\frac{\varrho_{n}(P)}{\varrho_{1}(P)}$. We would like to pick a center such that the range from the new center is sufficiently large.
First, we standardize the polynomial by shifting the coordinates to have the center of mass of the roots be the origin. Using Vieta's formulas, this is equivalent to considering the new polynomial $P_1(x) = P(x- a_1/n)$. Next, we would like to scale the coordinates such that the largest modulus is close to 1, which is the change of coordinates $P_2(x) = P_1(\varrho_n(P_1)x)$. Finally, for the shift, we consider four points outside the circle: $v = (2,2i,-2i,-2)$ and consider the shifted polynomials $Q_j = P_2(x-v_j)$. We choose as center the point $v_j$ that maximizes the ratio $\frac{\varrho_{n}(Q_j)}{\varrho_{1}(Q_j)}$. Let $Q$ be that maximizing polynomial. As the center of mass was defined to be the origin in the first step, we can easily show that $\frac{\varrho_{n}(Q)}{\varrho_{1}(Q)}$ is bounded below by a constant, specifically $e^{0.3}$.
In the example below, we show the final shift of the center for a sample centered polynomial $P$. Note that the ratio $\frac{\varrho_{n}(Q)}{\varrho_{1}(Q)}$ substantially increases after choosing $(-2,0)$ as the new center.
\begin{figure}[ht]
\resizebox{.6\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
ticks=none,
axis lines = middle,
axis line style={->},
ymin=-5.5,ymax=5.5,
xmin=-5.5, xmax=5.5,
xlabel={$Re$},
ylabel={$Im$},
axis equal image
]
\addplot[dotted, domain=-4:-1.8] {-.181818*(x+4)};
\addplot[dotted, domain=-4:1.4] {0.259259259*(x+4)};
\addplot[dotted, domain=-4:0.4] {-0.4090909*(x+4)};
\node[label={180:{(0,2i)}},circle, red,fill,inner sep=2pt] at (axis cs:0,4) {};
\node[label={180:{(0,-2i)}},circle, red,fill,inner sep=2pt] at (axis cs:0,-4) {};
\node[label={270:{(2,0)}},circle, red,fill,inner sep=2pt] at (axis cs:4,0) {};
\node[circle, blue,fill,inner sep=2pt] at (axis cs:1.4,1.4) {};
\node[circle, blue,fill,inner sep=2pt] at (axis cs:-1.8,-.4) {};
\node[circle, blue,fill,inner sep=2pt] at (axis cs:.4,-1.8) {};
\node[label={270:{(-2,0)}},circle, red,fill,inner sep=2pt] at (axis cs:-4,0) {};
\draw (axis cs:0,0) circle [blue, radius=2];
\end{axis}
\end{tikzpicture}
}
\caption{Finalizing the choice of center of the splitting circle}
\end{figure}
\subsection*{Finding the radius}
After recentering, we now have a polynomial $P$ with ratio $\frac{\varrho_{n}(P)}{\varrho_{1}(P)} > e^{\Delta}$ for $\Delta > 0$. We would like to find an annulus $\Gamma$ that contains no roots of $P$ and such that $n/2$ roots are inside the interior circle.
We define $\Gamma$ by the interior radius $r$ and exterior radius $R$. We initialize $r_0 = \varrho_1(P)$ and $R_0= \varrho_n(P)$ and note that there is $i_0 = 1$ root in the inner circle and $j_0 = n-1$ roots in the outer circle. Afterwards, we iteratively shrink our annulus until roughly $n/2$ roots are located in the inner circle, with none in the annulus. At each step $\ell$, we take the geometric mean $\rho = \sqrt{r_{\ell-1}R_{\ell-1}}$ and choose our new annulus to be either $\Gamma_\ell = (r_{\ell-1}, \rho)$ or $\Gamma_\ell = (\rho, R_{\ell-1})$ using a subroutine that finds the number of roots $k$ within the disk $\{|z| \leq \rho\}$. If $k < (i_{\ell-1} + j_{\ell-1})/2$, we choose the former; otherwise we choose the latter. We stop when $i_\ell = j_\ell$, that is, when the annulus contains no roots. Then there must be an index $k$ such that $\varrho_k(P) < r_\ell < R_\ell < \varrho_{k+1}(P)$. We conclude by finding $m= \varrho_k(P), M=\varrho_{k+1}(P)$ and setting our splitting circle to the geometric mean $\{|z| = \sqrt{mM}\}$.
We claim that the initial guarantee $\frac{\varrho_{n}(P)}{\varrho_{1}(P)} > e^{\Delta}$ implies that $\frac{R_\ell}{r_\ell} > e^{\Delta/(n-1)}$. Then scaling our splitting circle to a unit circle by the change of coordinates $z \rightarrow z/R$ gives us a root-free annulus of the form $(e^{-\delta}, e^{\delta})$ where $\delta = \frac{1}{2}\log(M/m)$.
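The shrinking loop can be sketched as follows. In this illustration the known roots stand in for the polynomial, so "number of roots in a disk" is answered by a direct count where the real algorithm would call the NRD subroutine described below; the function name and the small numerical guard on the annulus test are ours.

```python
import numpy as np

def split_radius(roots):
    """Shrink the annulus (r, R) until it contains no roots (sketch).

    Returns the splitting radius sqrt(m * M), with m (resp. M) the largest
    modulus below (resp. smallest modulus above) the final empty annulus.
    """
    mods = np.sort(np.abs(roots))
    nrd = lambda rho: int(np.sum(mods <= rho))   # oracle for "roots in disk"
    r, R = mods[0], mods[-1]
    while nrd(R * (1 - 1e-12)) - nrd(r) > 0:     # roots left in the annulus?
        rho = np.sqrt(r * R)                     # geometric-mean bisection
        if nrd(rho) < (nrd(r) + nrd(R)) / 2:
            R = rho                              # keep the annulus (r, rho)
        else:
            r = rho                              # keep the annulus (rho, R)
    m = mods[nrd(r) - 1]                         # largest modulus below it
    M = mods[nrd(r)]                             # smallest modulus above it
    return np.sqrt(m * M)
```

For roots spread over the moduli $\{0.5, 1, 2\}$, the loop terminates with a root-free annulus and returns the geometric mean of the two moduli straddling it.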
\subsection*{Graeffe's method}
To numerically evaluate the moduli of the roots of a polynomial $P$, we will use Graeffe's method, which follows from the observation that if
\[P(x) = (x-z_1)(x-z_2)...(x-z_n)\] then
\[P(x)P(-x) = (-1)^n(x^2-z_1^2)(x^2-z_2^2)...(x^2-z_n^2)\]
We note that the roots of $Q(x^2) := P(x)P(-x)$ are the squares of the roots of $P(x)$, and we will denote this operation as $Q = Graeffe(P)$. If
\[P(x) = \sum_{i=0}^n a_i x^i\]
We can efficiently calculate $Graeffe(P)$ thanks to the formula
\[Q = A^2 - zB^2, \qquad A = \sum_i a_{2i} z^i, \qquad B = \sum_i a_{2i+1} z^i\]
We iteratively define $P_{m+1} = Graeffe(P_m)$ and note that
\[P_m(x) = a_0^{(m)} + a_1^{(m)}x + ... + a_n^{(m)}x^n \]
Through Vieta's formulas, we see that at the $m$th iteration,
\[\frac{a_k^{(m)}}{a_n^{(m)}} = (-1)^{n-k}\sum_{i_1 < \cdots < i_{n-k}} (z_{i_1} \cdots z_{i_{n-k}})^{2^m}\]
where $z_1,...,z_n$ are the roots of $P$. If the moduli are ordered $|z_1| < ... < |z_n|$, then we get the approximation
\[\frac{a_k^{(m)}}{a_n^{(m)}} \approx \pm(z_{k+1}\cdots z_{n})^{2^m}\] so that, taking the ratio of consecutive coefficients,
\[\varrho_k(P) = \lim_{m \rightarrow \infty} \left|\frac{a_{k-1}^{(m)}}{a_k^{(m)}}\right|^{2^{-m}}\]
Of course, using the Graeffe iteration directly only guarantees convergence when all roots have distinct moduli and there is no bound as to the rate of convergence. We will use Graeffe for three distinct cases: calculating the number of roots in a disk, calculating the kth modulus $\varrho_k(P)$, and calculating the $n$th modulus $\varrho_n(P)$, where there are more efficient algorithms for estimating the largest modulus than in the general case.
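The Graeffe step and the resulting moduli estimates can be sketched in a few lines (coefficients stored lowest degree first; the per-step rescaling by the largest coefficient is our addition, to keep magnitudes representable in floating point, and does not change the coefficient ratios):

```python
import numpy as np

def graeffe(p):
    """One Graeffe root-squaring step: the roots of the output are the
    squared roots of the input.  Uses Q(z) = A(z)^2 - z*B(z)^2 with A, B
    the even/odd parts of P, so that Q(x^2) = P(x)P(-x) up to sign.
    """
    a = np.asarray(p)
    A2 = np.convolve(a[0::2], a[0::2])                    # A(z)^2
    zB2 = np.concatenate(([0], np.convolve(a[1::2], a[1::2])))  # z*B(z)^2
    q = np.zeros(max(len(A2), len(zB2)), dtype=a.dtype)
    q[:len(A2)] += A2
    q[:len(zB2)] -= zB2
    return q

def modulus_estimates(p, iters=8):
    """Estimate all root moduli of p via repeated Graeffe iterations:
    rho_k ~ |a_{k-1}^(m) / a_k^(m)|^(1/2^m), valid for distinct moduli.
    """
    q = np.asarray(p, dtype=float)
    for _ in range(iters):
        q = graeffe(q / np.max(np.abs(q)))   # rescale to avoid overflow
    return np.abs(q[:-1] / q[1:]) ** (2.0 ** -iters)
```

For $(x-1)(x-2)(x-3)$ one Graeffe step yields a polynomial with roots $1, 4, 9$, and eight iterations recover the moduli $1, 2, 3$ to a few digits.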
\subsection*{Number of roots in a disk}
Given a polynomial $P$, error parameter $\tau$ and a radius $R$, we would like to find an index $k$ such that
$\varrho_k(P)e^{-t} < R < \varrho_{k+1}(P)e^{t}$. Doing so tells us that there are exactly $k$ roots in the disk of radius $R$. Applying a change of coordinates, we can reduce this problem to the case where $R=1$. We will get an initial error tolerance of $t$ using a result of Schonhage and apply Graeffe's method until the $m$ th iteration of the error tolerance $t_m < \tau$.
We will use the following theorem$^2$ from Schonhage without proof:
\textbf{Theorem 1} Let $P(x) = a_0 + a_1 x + ... + a_n x^n$ be a complex-valued polynomial with $k$ an integer such that $1\leq k \leq n$. Given the existence of $c,q > 0$ such that
\[|a_{k-m}| \leq cq^m|a_k|, \qquad m = 1,\ldots,k\] we have that
\[\varrho_k(P) \leq (c+1)q(n-k+1)\]
\textbf{Corollary 1}
In addition, if we have that
\[|a_{k+m}| \leq cq^m|a_k| , \qquad m = 1,\ldots,n-k\]
then
\[\varrho_{k+1}(P) \geq \frac{1}{(c+1)q(n-k+1)}\]
\textit{Proof.} Consider the reciprocal polynomial $\tilde{P}(x) = x^nP(\frac{1}{x})$. We see that $\varrho_{n-k}(\tilde{P}) = \varrho_k(P)$ and we apply the inequality from the previous theorem.
Moreover, we see that $c=1,q=1$ satisfies the condition of the theorem for $k = \arg\max_{0 \leq i \leq n} |a_i|$. Then applying the theorem and corollary tells us that $\varrho_{k}(P) \leq 2n$ and $\varrho_{k+1}(P) \geq \frac{1}{2n}$. Setting $t > \log(2n)$ gives us the desired bound. However, this error tolerance is unworkably high, with many potentially valid values of $k$. As stated earlier, we can tighten the error bound with Graeffe's method.
Let $P_m(x) = b_0 + b_1x + ... + b_n x^n$ be the $m$th Graeffe iteration of $P$ and $k = \arg\max_{0 \leq i \leq n} |b_i|$. As before,
$\varrho_{k}(P_m) \leq 2n$ and $\varrho_{k+1}(P_m) \geq \frac{1}{2n}$. However, taking the $2^m$th roots of the moduli tells us that
$\varrho_{k}(P) \leq e^{t_m}$ and $\varrho_{k+1}(P) \geq e^{-t_m}$ where $t_m = \frac{\log(2n)}{2^m}$. Given our fixed error tolerance $\tau$, we choose the smallest $m$ such that $t_m < \tau$, guaranteeing a sufficient separation of $\varrho_{k}(P) < 1 < \varrho_{k+1}(P)$ up to multiplication by $e^{\tau}$.
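The number of Graeffe iterations needed for a given tolerance follows directly from $t_m = \log(2n)/2^m$; the small helper below (name ours) just inverts that formula.

```python
import math

def graeffe_steps(n, tau):
    """Smallest m with t_m = log(2n) / 2^m < tau, i.e. how many Graeffe
    iterations shrink the initial tolerance log(2n) below tau.
    """
    m = 0
    while math.log(2 * n) / 2 ** m >= tau:
        m += 1
    return m
```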
Implementing this algorithm in double precision requires us to truncate coefficients of our polynomials. The following result$^3$ from Schatzle says that sufficiently small perturbations of the coefficients lead to small changes in the estimated $k$th modulus.
\textbf{Theorem 2} Let $P$ and $\hat{P}$ be complex polynomials with degree $n > 0$ such that $|\hat{P} - P| < \varepsilon|\hat{P}|$ and let $\mu > 1$ be an upper bound of $\varrho_k(P)$. If $\varepsilon < \mu^{-n}2^{-4n}$ then
\[|\varrho_k(\hat{P}) - \varrho_k(P)| \leq \frac{2\mu(1+\mu)\varepsilon^{1/n}}{1-4(1+\mu)\varepsilon^{1/n}}\]
From there, we get the following corollary:
\textbf{Corollary 2} Let $P$ and $\hat{P}$ be complex polynomials with degree $n > 0$, let $\tau > 0$ and $k$ an integer such that $0 \leq k \leq n$. If
\[|\hat{P} - P| < \varepsilon|\hat{P}|, \qquad \varepsilon < 2^{-4n}\tau^ne^{-3n\tau/2}\] then
\[\varrho_k(\hat{P}) \geq e^{-3\tau/4} \implies \varrho_k(P) \geq e^{-\tau}\] and
\[\varrho_{k+1}(\hat{P}) \leq e^{3\tau/4} \implies \varrho_k(P) \leq e^{\tau}\]
so that running the algorithm on the truncated approximation for a radius of $R$ will find the largest modulus $\varrho_k < 1$ up to multiplication by $e^{-\tau}$.
\textit{Proof.}
To apply the theorem, set $\varepsilon \leq \mu^{-n}2^{-4n}$ for $\mu = \tau^{-1}e^{3\tau/2}$, with $\tau < 1$. Then
\begin{align*}
|\varrho_k(\hat{P}) - \varrho_k(P)| &\leq \frac{2\mu(1+\mu)\varepsilon^{1/n}}{1-4(1+\mu)\varepsilon^{1/n}} \leq \frac{2\mu(1+\mu)\mu^{-1}2^{-4}}{1-4(1+\mu)\mu^{-1}2^{-4}} \\
&= \frac{1+\mu}{8-2(1+\mu^{-1})} = \frac{1+\mu}{6-2\mu^{-1}} = \frac{1+\tau^{-1}e^{3\tau/2}}{6-2\tau e^{-3\tau/2}}
\end{align*}
On the other hand, assuming the corollary doesn't hold, we see that
\[|\varrho_k(\hat{P}) - \varrho_k(P)| > e^{-3\tau/4} - e^{-\tau}\]
Comparing the two expressions shows that, for $\tau < 1$, the difference $e^{-3\tau/4} - e^{-\tau}$ stays below the bound
\[\frac{1+\tau^{-1}e^{3\tau/2}}{6-2\tau e^{-3\tau/2}},\]
so by nonnegativity of the moduli, $\varrho_k(P) \geq e^{-\tau}$.
The corresponding implication
\[\varrho_{k+1}(\hat{P}) \leq e^{3\tau/4} \implies \varrho_k(P) \leq e^{\tau}\]
holds by Corollary 1.
We use this to create the subroutine NRD.
\begin{algorithm}[H]
\caption{NRD: Calculates number of roots in a disk}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, R,\tau)$ where $P$ is a polynomial of degree $n \geq 2$, $R>0$ is the radius of the disk, and $\tau > 0$ is an error parameter \\
\hspace*{\algorithmicindent} \textbf{Output}: An integer $k$ with $0 \leq k \leq n$ such that $\varrho_k(P)e^{-\tau} < R < \varrho_{k+1}(P)e^{\tau}$
\begin{algorithmic}[1]
\State Set $\tau_0 = \tau$ and $\varepsilon_0 = 2^{-4n}\tau_0^ne^{-3n\tau_0/2}$ and calculate the coefficients of the scaled polynomial $P_0 = P(Rz)$ with a relative precision of $\varepsilon_0/2(n+1)$.
\State Round $P_0$ to a polynomial $\hat{P}_0$ such that $|P_0 - \hat{P}_0| < \frac{\varepsilon_0}{2}|\hat{P}_0|$.
\State Set counter $m=1$.
\While{$\frac{3}{4}\tau_{m-1} < \log(2n)$}
\State Increment counter $m= m+1$.
\State Set $P_m = Graeffe(\hat{P}_{m-1})$.
\State Set $\tau_m = \frac{3}{2}\tau_{m-1}$ and $\varepsilon_m = 2^{-4n}\tau_m^ne^{-3n\tau_m/2}$
\State Round $P_m$ to a polynomial $\hat{P}_m$ such that $|P_m - \hat{P}_m| < \varepsilon_m|\hat{P}_m|$
\EndWhile
\State With $\hat{P_m}(x) = b_0 + b_1x + ... + b_nx^n$, return the index $k = \arg\max_{0 \leq i \leq n} |b_i|$
\end{algorithmic}
\end{algorithm}
Let's consider the correctness of the algorithm. After the last step, we have the inequalities $\varrho_k(\hat{P}_m) < e^{3\tau_m/4}$ and $\varrho_{k+1}(\hat{P}_m) > e^{-3\tau_m/4}$. By Corollary 2, we see that $\varrho_k(P_m)< e^{\tau_m}$ and $\varrho_{k+1}(P_m)> e^{-\tau_m}$. As $\varrho_j(P_i) = \varrho_j(\hat{P}_{i-1})^2$, we induct backwards using the definition of $\tau_m$ to see that $\varrho_k(\hat{P}_j) < e^{3\tau_j/4}$ and $\varrho_{k+1}(\hat{P}_j) > e^{-3\tau_j/4}$ for $0 \leq j \leq m-1$. In particular, for our choice of $k$, $\varrho_k(\hat{P}_0) < e^{3\tau/4}$ and $\varrho_{k+1}(\hat{P}_0) > e^{-3\tau/4}$, so by Corollary 2, we conclude that \[\varrho_k(P(Rx))e^{-\tau} < 1 <\varrho_{k+1}(P(Rx))e^{\tau}\]
as desired.
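A stripped-down version of NRD can be sketched as follows. This illustration is ours: it omits the rounding and precision bookkeeping of the full algorithm (the $\hat{P}_m$ truncations) and uses the simpler tolerance schedule $t_m = \log(2n)/2^m$ from the previous subsection in place of the $\tau_m$ recursion, but it exhibits the same scale-then-Graeffe-then-argmax structure.

```python
import numpy as np

def graeffe(p):
    # one root-squaring step, coefficients lowest degree first
    a = np.asarray(p, dtype=float)
    A2 = np.convolve(a[0::2], a[0::2])
    zB2 = np.concatenate(([0.0], np.convolve(a[1::2], a[1::2])))
    q = np.zeros(max(len(A2), len(zB2)))
    q[:len(A2)] += A2
    q[:len(zB2)] -= zB2
    return q

def nrd(p, R, tau):
    """Number of roots of p in the disk |z| <= R, up to the e^tau
    tolerance (sketch, without the rounding/precision bookkeeping).
    """
    n = len(p) - 1
    q = np.array([c * R ** i for i, c in enumerate(p)], dtype=float)  # p(Rz)
    t = np.log(2 * n)
    while t >= tau:                    # Graeffe until t_m < tau
        q = graeffe(q / np.max(np.abs(q)))
        t /= 2.0
    return int(np.argmax(np.abs(q)))   # k with rho_k < 1 < rho_{k+1} (approx.)
```

Trying it on $(x - 0.5)^2(x - 3)$, the unit disk contains two roots, a disk of radius 4 contains all three, and a disk of radius 0.1 contains none.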
\subsection*{Finding the kth modulus}
Next, we will use Graeffe's method to estimate the $k$th modulus $\varrho_k(P)$ to within a multiplicative error of $e^{\tau}$ for some error tolerance $\tau > 0$. Let $P(x) = a_0 + a_1x + ... + a_n x^n$. Then we will scale $P$ by some factor $\rho > 0$ with $\tilde{P}(x) = P(\rho x) = \tilde{a}_0 + \tilde{a}_1x + ... + \tilde{a}_n x^n$
such that there exist two integers $\ell, h$ with
\[\ell < k \leq h, \qquad |\tilde{a}_\ell| = |\tilde{a}_h|, |\tilde{a}_j| \leq |\tilde{a}_\ell| \quad \text{for } j=0,1,...,n\]
Then setting $c=q=1$ allows $\ell,h$ to satisfy Theorem 1, with
\[\frac{1}{2n} < \varrho_\ell(\tilde{P}) \leq \varrho_k(\tilde{P}) \leq \varrho_h(\tilde{P}) < 2n\]
We then apply Graeffe as before to tighten the bounds.
We can choose $\ell$ as the largest $x$-coordinate of a corner less than $k$ in the upper convex envelope of the points $M_j = (j, \log(|a_j|))$, and $h$ as the smallest $x$-coordinate of a corner greater than or equal to $k$ in the same envelope. We would like $|\tilde{a}_\ell| = |\tilde{a}_h|$, which requires $|a_{\ell}|\rho^{\ell}= |a_{h}| \rho^{h}$, or \[\rho = \left(\frac{|a_\ell|}{|a_h|}\right)^{1/(h-\ell)}\]
In addition, by convexity, as shown in the diagram below, the line passing through $\log(|a_\ell|)$ and $\log(|a_h|)$ upper bounds $\log(|a_j|)$ for $0 \leq j \leq n$. We work with $\log(|a_j|)$ because the scaling acts linearly there: $\log(\rho^x) = x\log(\rho)$, so the change of coordinates adds the linear function $x \rightarrow x\log(\rho)$ to every point, and the linear upper bound remains an upper bound for $0 \leq x \leq n$. Therefore, all points $\tilde{M}_j = (j, \log(|\tilde{a}_j|))$ are below the line connecting $\tilde{M}_\ell,\tilde{M}_h$. By our argument above, Theorem 1 is satisfied.
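This corner selection can be sketched with a monotone-chain scan over the points $(j, \log|a_j|)$; the code below is an illustration under our own naming, assumes all coefficients are nonzero, and finds the corners of the envelope whose chords upper-bound all the points, as in the figure:

```python
import numpy as np

def envelope_corners(logs):
    """Corner indices of the convex envelope of (j, logs[j]) whose chords
    upper-bound all of the points (the least concave majorant)."""
    hull = []
    for j, y in enumerate(logs):
        while len(hull) >= 2:
            (j1, y1), (j2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord j1 -> j
            if (y2 - y1) * (j - j1) <= (y - y1) * (j2 - j1):
                hull.pop()
            else:
                break
        hull.append((j, y))
    return [j for j, _ in hull]

def scaling_for_index(a, k):
    """Pick corners ell < k <= h and the scaling rho with
    |a_ell| rho^ell = |a_h| rho^h (coefficients ascending, all nonzero)."""
    corners = envelope_corners(np.log(np.abs(a)))
    ell = max(j for j in corners if j < k)
    h = min(j for j in corners if j >= k)
    rho = (np.abs(a[ell]) / np.abs(a[h])) ** (1.0 / (h - ell))
    return ell, h, rho

# P(x) = (x - 0.1)(x - 10) = 1 - 10.1x + x^2, with k = 1.
ell, h, rho = scaling_for_index(np.array([1.0, -10.1, 1.0]), 1)
```

For this example the rescaled polynomial $P(\rho x)$ has its smallest root modulus brought into the interval $(\frac{1}{2n}, 2n)$, as the theorem predicts.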
\begin{figure}[ht]
\resizebox{.6\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
axis lines = middle,
axis line style={->},
xtick={0,3,4,6,11},
xticklabels={$0$,$\ell$, $k$, $h$, $n$},
domain=0:2*pi,
ymin=-1.5,ymax=6.5,
xmin=0, xmax=12,
xlabel={$j$},
ylabel={$\log(|a_j|)$},
axis equal image
]
\addplot[black, domain=0:3] {1.16666*x-.5};
\addplot[black, domain=3:6] {.6666*x+1};
\addplot[dotted, domain=0:11] {.6666*x+1};
\addplot[black, domain=6:8] {-.5*x+8};
\addplot[black, domain=8:10] {-1*x+12};
\addplot[black, domain=10:11] {-1*x+12};
\node[circle, red,fill,inner sep=2pt] at (axis cs:0,-.5) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:1,-1) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:2,1) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:3,3) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:4,2) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:5,3.5) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:6,5) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:7,1) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:8,4) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:9,0) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:10,2) {};
\node[circle, red,fill,inner sep=2pt] at (axis cs:11,1) {};
\end{axis}
\end{tikzpicture}
}
\caption{Convex envelope of the points $(j,\log(|a_j|))$}
\end{figure}
For convenience, we set $\rho$ to be a power of $2$, with \[\rho = 2^\beta, \qquad \beta = \left\lfloor \frac{1}{h-\ell}\log_2 \frac{|a_\ell|}{|a_h|}+1/2\right\rfloor\]
We then see that for $m \geq 0$,
\[|\tilde{a}_{h-m}| \leq 2^{m/2}|\tilde{a}_h|, \qquad |\tilde{a}_{\ell+m}| \leq 2^{m/2}|\tilde{a}_\ell| \]
so the conditions of Theorem 1 are valid for $c=1,q=\sqrt{2}$. Due to rounding error, it is better to work with an upper bound of $c=1,q=3/2$, which gives us looser initial bounds of
\[\frac{1}{3n} < \varrho_\ell(\tilde{P}) \leq \varrho_k(\tilde{P}) \leq \varrho_h(\tilde{P}) < 3n\]
We then use the following result from Gourdon to guarantee the accuracy of our estimates given a truncated initial polynomial.
\textbf{Theorem 3} Let $P, \hat{P}$ be two polynomials of degree $n >0$, let $\tau >0$ and $k$ an integer such that $0 \leq k \leq n$. If
\[|\hat{P} - P| < \varepsilon|\hat{P}|, \qquad \varepsilon < 2^{-n-1}(3n)^{-n}\tau^ne^{-3n\tau/2}, \qquad \frac{1}{3n} < \varrho_k(P) < 3n\]
Then
\[\varrho_k(\hat{P})e^{-\tau} \leq \varrho_k(P) \leq \varrho_k(\hat{P})e^\tau\]
\begin{algorithm}[H]
\caption{MOD: Evaluates the moduli of the roots.}
\hspace*{\algorithmicindent} \textbf{Input}: $(P,k,\tau)$ where $P(x) = a_0 + a_1x + ... + a_nx^n$ is a polynomial of degree $n \geq 2$, $k$ is an integer with $1 \leq k \leq n$, and $\tau > 0$ is an error parameter.\\
\hspace*{\algorithmicindent} \textbf{Output}: A floating point number $R>0$, such that $Re^{-\tau} \leq \varrho_k(P) \leq Re^\tau$.
\begin{algorithmic}[1]
\State If $a_0 = a_1 = ... = a_{n-1}=0$, return $R = 0$.
\State Otherwise, set $P_0 = P$ and $\tau_0 = \tau/8$.
\State Calculate $\tilde{P}_0(x) = P_0(\rho_0x)$ for $\rho_0 := 2^\beta, \quad \beta := \left\lfloor \frac{1}{h-\ell}\log_2 \frac{|a_\ell|}{|a_h|}+1/2\right\rfloor$ where $h,\ell$ are defined as in the convex envelope for $P_0$.
\State Set $M$ to be the smallest natural number such that $2^{-M}\log(3n) < \tau/2$
\For{$m=1,2,...,M$}
\State Round the polynomial $\tilde{P}_{m-1}$ to a polynomial $\hat{P}_{m-1}$ such that \[|\tilde{P}_{m-1} - \hat{P}_{m-1}| < \varepsilon_{m-1}|\hat{P}_{m-1}|, \quad \varepsilon_{m-1} := 2^{-(n+1)}(3n)^{-n}\tau_{m-1}^ne^{-3n\tau_{m-1}/2}\]
\State Calculate $P_m = Graeffe(\hat{P}_{m-1})$
\State Calculate $\tilde{P}_m(x) = P_m(\rho_m x)$ where $\rho_m$ is defined as in the convex envelope for $P_m$.
\State Set $\tau_m = \frac{3}{2}\tau_{m-1}$
\EndFor
\State Return $R = \rho_0\rho_1^{2^{-1}}\rho_2^{2^{-2}}...\rho_M^{2^{-M}}$
\end{algorithmic}
\end{algorithm}
Let us evaluate the correctness of our algorithm. For each step $m$, by our change of coordinates we get $\varrho_k(P_m) = \rho_m\varrho_k(\tilde{P}_m)$. After applying the perturbation result of Theorem 3, we see that
\[\rho_m\varrho_k(\hat{P}_m)e^{-\tau_m} < \varrho_k(P_m) < \rho_m\varrho_k(\hat{P}_m)e^{\tau_m}\]
Using the relation $\varrho_k(P_{m+1}) = \varrho_k(\hat{P}_m)^2$ gives us
\[\rho_0...\rho_M^{2^{-M}}\varrho_k(\tilde{P}_M)^{2^{-M}}e^{-\tau'} < \varrho_k(P_0) < \rho_0...\rho_M^{2^{-M}}\varrho_k(\tilde{P}_M)^{2^{-M}}e^{\tau'}, \quad \tau' = \tau_0 + \frac{\tau_1}{2} + ... + \frac{\tau_{M-1}}{2^{M-1}}\]
As $\frac{1}{3n} < \varrho_k(\tilde{P}_M) < 3n$ and $2^{-M}\log(3n) < \tau/2$ by our choice of $M$, we see that
\[e^{-\tau/2} < \varrho_k(\tilde{P}_M)^{2^{-M}} < e^{\tau/2}\]
Moreover, \[0 \leq \tau' = \frac{\tau}{8}\left(1+ \frac{3}{4} + \left(\frac{3}{4}\right)^2 + ...\right) \leq \frac{\tau}{2}\]
so we conclude that
\[Re^{-\tau} < \varrho_k(P_0) < Re^{\tau}\]
for $R = \rho_0...\rho_M^{2^{-M}}$.
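To make the role of the Graeffe iteration concrete, here is a simplified numerical sketch of modulus estimation (our own toy version, without the scalings and careful roundings of MOD): after $m$ steps the roots are raised to the power $2^m$, so a crude coefficient-ratio estimate of the largest modulus, re-rooted, converges quickly.

```python
import numpy as np

def graeffe(p):
    """One Graeffe root-squaring step (ascending coefficients)."""
    n = len(p) - 1
    prod = np.convolve(p, p * (-1.0) ** np.arange(n + 1))
    return (-1.0) ** n * prod[::2]

def modmax_estimate(p, m):
    """Estimate the largest root modulus of p from the top two coefficients
    of the m-th Graeffe iterate: |b_{n-1}/b_n| is the absolute sum of the
    iterate's roots, dominated by the largest one, so we take its 2^m-th root."""
    q = np.asarray(p, dtype=float)
    for _ in range(m):
        q = graeffe(q)
        q = q / np.max(np.abs(q))    # rescale to avoid overflow; ratios survive
    return np.abs(q[-2] / q[-1]) ** (2.0 ** -m)

# p(x) = (x - 2)(x - 3): the largest root modulus is 3.
r = modmax_estimate(np.array([6.0, -5.0, 1.0]), 6)
```

The multiplicative error contracts roughly like a $2^{-m}$-th power, mirroring how the error budget $\tau_m$ is spread over the steps in MOD.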
\subsection*{Upper bounding the moduli of the roots}
For the sketch of our splitting circle algorithm, we need to calculate $\varrho_n(P)$ often. It turns out that this can be accomplished more efficiently than $\varrho_k(P)$ for a general $k$. Again, given an error parameter $\tau>0$, we would like to find an $R > 0$ equal to $\varrho_n(P)$ up to multiplicative error,
such that
\[Re^{-\tau} \leq \varrho_n(P) \leq Re^{\tau}\]
The idea of the algorithm is the same as in the previous case. However, we would like to use a different scalar $\rho$. Given an initial $P = a_0 + a_1x + ... + a_nx^n$, we define $\tilde{P}(x) = P(\rho x)$ such that
\[\frac{|\tilde{a}_j|}{|\tilde{a}_0|} \leq 2^j {n \choose j} \quad \text{for } j = 1,...,n \quad \text{and there exists } h, 1 \leq h \leq n \text{ such that } \frac{|\tilde{a}_j|}{|\tilde{a}_0|} \geq {n \choose h}\]
and $\rho$ can be determined by \[\rho = 2^\beta, \qquad \beta = \max_{1 \leq j \leq n} \left\lfloor \frac{1}{j}\log_2 \frac{|a_j|}{|a_0|{n \choose j}}\right\rfloor\]
By Vieta's equations, we get the inequality
\[|\tilde{a}_h| \leq |\tilde{a}_0|{ n \choose h} \varrho_n(\tilde{P})^h\] so $\varrho_n(\tilde{P}) \geq 1$ when we account for the second assumption for our choice of $\rho$. Setting $c = 1, q=2n$, we see that $2^j{n \choose j} \leq (2n)^j$, so Theorem 1's conditions are satisfied and give an upper bound of $1 \leq \varrho_n(\tilde{P}) \leq 4n$. We continue with a perturbation result.
\textbf{Theorem 4} Let $P,\hat{P}$ be polynomials of degree $n > 0$ such that
\[|P - \hat{P}| < |\beta|e^{-n\tau}\tau^n\]
where $\beta$ is the leading coefficient of $P$. Then if $\varrho_n(P) \geq 1$, we have that
\[\varrho_n(\hat{P})e^{-\tau} < \varrho_n(P) < \varrho_n(\hat{P})e^\tau\]
\textit{Proof.} First, suppose $\varrho_n(\hat{P}) > \varrho_n(P)$ and let $z$ designate a root of $\hat{P}$ with modulus $\varrho_n(\hat{P}) > 1$. Then we have that
\[|P(z)| = |P(z) - \hat{P}(z)| \leq |z|^n|P-\hat{P}| \leq \varrho_n(\hat{P})^n|\beta|e^{-n\tau}\tau^n\]
Moreover, we have
\[|P(z)| \geq |\beta|\prod_{i=1}^n (|z|- \varrho_i(P)) \geq |\beta|(\varrho_n(\hat{P})-\varrho_n(P))^n\]
which gives us the bound
\[\varrho_n(\hat{P})-\varrho_n(P) \leq \varrho_n(\hat{P})e^{-\tau}\tau\]
Applying the inequality $1+x\leq e^x$ gives us the desired bounds. An analogous argument holds for $\varrho_n(\hat{P}) < \varrho_n(P)$ by switching the roles of the two terms.
We now describe the algorithm to find the largest modulus of the roots. The code and justification follow largely from the previous example.
\begin{algorithm}[H]
\caption{MODMAX: Evaluates the largest modulus of the roots.}
\hspace*{\algorithmicindent} \textbf{Input}: $(P,\tau)$ where $P(x) = a_0 + a_1x + ... + a_nx^n$ is a polynomial of degree $n \geq 2$ and $\tau > 0$ is an error parameter.\\
\hspace*{\algorithmicindent} \textbf{Output}: A floating point number $R>0$, such that $Re^{-\tau} \leq \varrho_n(P) \leq Re^\tau$.
\begin{algorithmic}[1]
\State If $a_0 = a_1 = ... = a_{n-1}=0$, return $R = 0$.
\State Otherwise, set $P_0 = P$ and $\tau_0 = \tau/8$.
\State Calculate $\tilde{P}_0(x) = P_0(\rho_0x)$ for $\rho_0 = 2^\beta, \qquad \beta = \max_{1 \leq j \leq n} \left\lfloor \frac{1}{j}\log_2 \frac{|a_j|}{|a_0|{n \choose j}}\right\rfloor$
\State Set $M$ to be the smallest natural number such that $2^{-M}\log(4n) < \tau/2$
\For{$m=1,2,...,M$}
\State Round the polynomial $\tilde{P}_{m-1}$ to a polynomial $\hat{P}_{m-1}$ such that \[|\tilde{P}_{m-1} - \hat{P}_{m-1}| < \varepsilon_{m-1}, \quad \varepsilon_{m-1} := |\beta_{m-1}|\tau_{m-1}^ne^{-n\tau_{m-1}}\] where $\beta_{m-1}$ is the leading coefficient of $\tilde{P}_{m-1}$.
\State Calculate $P_m = Graeffe(\hat{P}_{m-1})$
\State Calculate $\tilde{P}_m(x) = P_m(\rho_m x)$ where $\rho_m$ is defined from the coefficients of $P_m$ as $\rho_0$ was from $P_0$.
\State Set $\tau_m = \frac{3}{2}\tau_{m-1}$
\EndFor
\State Return $R = \rho_0\rho_1^{2^{-1}}\rho_2^{2^{-2}}...\rho_M^{2^{-M}}$.
\end{algorithmic}
\end{algorithm}
Applying the algorithm to the reciprocal $P^*(x) = x^nP(\frac{1}{x})$ allows us to also find the smallest modulus of the roots.
\begin{algorithm}[H]
\caption{MODMIN: Evaluates the smallest modulus of the roots.}
\hspace*{\algorithmicindent} \textbf{Input}: $(P,\tau)$ where $P(x) = a_0 + a_1x + ... + a_nx^n$ is a polynomial of degree $n \geq 2$ and $\tau > 0$ is an error parameter.\\
\hspace*{\algorithmicindent} \textbf{Output}: A floating point number $R>0$, such that $Re^{-\tau} \leq \varrho_1(P) \leq Re^\tau$.
\begin{algorithmic}[1]
\State If $a_0 = 0$, return $R = 0$.
\State Otherwise, define the reciprocal polynomial $Q(x) = a_n + a_{n-1}x + ... + a_0x^n$ of $P$ and return $1/$MODMAX($Q,\tau$).
\end{algorithmic}
\end{algorithm}
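The reciprocal trick behind MODMIN can be checked numerically; in this sketch numpy's root finder stands in for MODMAX (an assumption for illustration only):

```python
import numpy as np

# P(x) = 6 - 5x + x^2 has roots 2 and 3 (ascending coefficients).
p = np.array([6.0, -5.0, 1.0])

# Reciprocal polynomial Q(x) = x^2 P(1/x) = 1 - 5x + 6x^2: its roots are
# the reciprocals 1/2 and 1/3 of the roots of P.
q = p[::-1]

# Largest root modulus of Q (np.roots expects descending coefficients)
rho_max_q = np.max(np.abs(np.roots(q[::-1])))
rho_min_p = 1.0 / rho_max_q          # smallest root modulus of P
```

Here `rho_min_p` recovers the smallest root modulus $2$ of $P$ from the largest root modulus $1/2$ of $Q$.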
We now have all the tools we need to find the splitting circle of a polynomial.
\subsection*{Algorithms}
We can use the previous algorithms to construct a new algorithm to find a splitting circle. We will break the search into two parts: finding the center and finding the radius of the splitting circle.
First, assume the center has already been determined. Finding the center should also give us an annulus $r < |x| < R$ along with two indices $i,j$ such that
\[1\leq i < j \leq n-1, \qquad \varrho_i(P) < r < R < \varrho_{j+1}(P)\]
We now have to find a radius $\rho$ in the interval $(r,R)$ from which we can factorize $P$ into two nontrivial factors. The general idea was discussed above, and we will deal with practical implementation here.
\begin{algorithm}[H]
\caption{RAD: Returns the radius of the splitting circle.}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, r, R, i, j)$ where $P$ is a polynomial of degree $n \geq 2$ that satisfies the centering conditions above\\
\hspace*{\algorithmicindent} \textbf{Output}: $(\rho,k,\delta)$, where $\rho > 0$ is such that the circle $|z| = \rho$ contains $k$ roots of $P$ with $i \leq k \leq j$ and $\delta >0$ such that no root of $P$ is within the annulus $\rho e^{-\delta} < |z| < \rho e^{\delta}$
\begin{algorithmic}[1]
\State We calculate the error margin $\Delta := \log(R/r)$
\State If $i = j$, then the annulus $r < |z| < R$ doesn't contain any roots of $P$ and we can do the following steps:
\State \qquad We approximate the moduli $\varrho_i(P)$ and $\varrho_{i+1}(P)$ by
\quad $m= MOD(P,i,\Delta/8)$ and $M= MOD(P,i+1,\Delta/8)$
\State \qquad We return $(\rho,i,\delta)$ where $\rho = \sqrt{mM}$ and $\delta = \frac{1}{2}\log(M/m) - \frac{\Delta}{8}$.
\State Otherwise, we set $\rho = \sqrt{rR}, \delta = \frac{\Delta}{8(j-i)}$ and $k = NRD(P, \rho, \delta)$
\State \qquad If $k < (i+j)/2$, return $RAD(P, r, \rho e^{-\delta},i,k)$.
\State \qquad If $k > (i+j)/2$, return $RAD(P, \rho e^{\delta},R,k,j)$.
\State \qquad If $k = (i+j)/2$ and $k<n/2$, return $RAD(P, r, \rho e^{-\delta},i,k)$.
\State Otherwise if $k = (i+j)/2$ and $k\geq n/2$, return $RAD(P, \rho e^{\delta},R,k,j)$.
\end{algorithmic}
\end{algorithm}
We see that $\delta$ is always greater than $c/(j-i+1)$ for some positive constant $c$. Furthermore, when the condition $i=j$ is reached, then we have an annulus $r<|z| < R$ that contains no roots of $P$ and so our final radius does give us a splitting circle.
In later sections, we will come up with an algorithm $FCS(P, k, \delta, \varepsilon)$ that splits $P$ into two approximate factors $\hat{F},\hat{G}$ such that $|P - \hat{F}\hat{G}| < \varepsilon |P|$ given a unit splitting circle. Combining $FCS$ with $RAD$ gives us an algorithm that factorizes a centered polynomial $P$.
\begin{algorithm}[H]
\caption{HOM: Factorizes from a dilation of the splitting circle}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, r, R, i, j, \varepsilon)$ where $P$ is a polynomial of degree $n \geq 2$ that satisfies the centering conditions above, $\varepsilon > 0$ is an error parameter\\
\hspace*{\algorithmicindent} \textbf{Output}: $(\hat{F},\hat{G})$, which are approximations of factors $F,G$ such that $|P-\hat{F}\hat{G}| < \varepsilon|P|$
\begin{algorithmic}[1]
\State Find $(\rho,k,\delta) = RAD(P,r,R,i,j)$.
\State Find an approximation $\hat{Q}$ of $Q(z) = P(\rho z)$ with a relative precision of $\varepsilon'/n$ for $\varepsilon' = \frac{1}{4}\min(\rho^{-n},\rho^n)\varepsilon$
\State Calculate $(F_0,G_0) = FCS(Q,k,\delta, \varepsilon')$
\State Rescale the factors $\hat{F}(z) = F_0(z/\rho), \hat{G}(z) = G_0(z/\rho)$ with a relative precision of $\varepsilon'' = 2^{-(n+4)}\varepsilon/n$ and return the scaled factors
$(\hat{F},\hat{G})$.
\end{algorithmic}
\end{algorithm}
We will now prove that the precision specified gives us $|P-\hat{F}\hat{G}| < \varepsilon|P|$. In the second step, the approximation $\hat{Q}$ of $Q$ satisfies
\[|\hat{Q}-Q| < \varepsilon'|Q|\]
Moreover, $F_0,G_0$ satisfy
\[|\hat{Q}-F_0G_0| < \varepsilon'|\hat{Q}| < 2\varepsilon'|Q|\] so
\[|Q-F_0G_0| < 3\varepsilon'|Q|\]
The dilation scales errors by a factor at most $\max(\rho^n,\rho^{-n})$ so scaling tells us that
\[|P-FG| < \frac{3}{4}\varepsilon|P|\]
where $F(z) = F_0(z/\rho), G(z) = G_0(z/\rho)$.
Next, from our choice of $\varepsilon''$, we have the inequalities
$|F-\hat{F}|<2^{-n-4}\varepsilon|F|$ and $|G-\hat{G}|<2^{-n-4}\varepsilon|G|$. Multiplying these together tells us that
\[|FG- \hat{F}\hat{G}| < 2^{-n-4}\varepsilon(|F|\cdot |G| + |F| \cdot |\hat{G}|) \leq 2^{-n-2}\varepsilon(|F|\cdot |G|)\]
From Schonhage, we get the inequality $|F|\cdot |G| \leq 2^{n-1}|FG|$, which is slightly tighter than the classical inequality $|F|\cdot |G| \leq 2^{n}|FG|$. This leads to the bound
\[|FG- \hat{F}\hat{G}| < \frac{\varepsilon}{8}|FG| \leq \frac{\varepsilon}{4}|P|\]
Putting all of our inequalities together,
\[|P - \hat{F}\hat{G}| \leq |P-FG| + |FG- \hat{F}\hat{G}| < \varepsilon|P|\]
as desired.
We now proceed to find the center of the splitting circle. The center $u$ for the polynomial $Q(z) = P(z+u)$ has to be chosen so that the modular ratio $\frac{\varrho_n(Q)}{\varrho_1(Q)}$ is sufficiently large. In practice, we need to preprocess to assure the calculations are well-conditioned. This is realized by our algorithm $CTR0$ which calls on the centering algorithm $CTR$ proper.
\begin{algorithm}[H]
\caption{CTR0: Factorizes a given polynomial of degree $n \geq 2$}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, \varepsilon)$ where $P$ is a polynomial of degree $n \geq 2$, $\varepsilon > 0$ is an error parameter\\
\hspace*{\algorithmicindent} \textbf{Output}: $(\hat{F},\hat{G})$, which are approximations of factors $F,G$ such that $|P-\hat{F}\hat{G}| < \varepsilon|P|$
\begin{algorithmic}[1]
\State If the constant coefficient $a_0$ of $P$ satisfies $|a_0| < \varepsilon|P|$, return $(z, (P(z)-P(0))/z)$.
\State Otherwise, set $k_1 = NRD(P,1.9,0.05)$. If $k_1=n$, then $\varrho_n(P) \leq 1.9e^{.05}<2$ and we return $CTR(P,\varepsilon)$.
\State Otherwise, consider the reciprocal polynomial $Q = P^*$, where $P^*(z) = z^nP(1/z)$, and calculate $k_2 = NRD(Q,1.9,0.05)$. If $k_2=n$, then calculate $(F_0,G_0) = CTR(Q,\varepsilon)$ and return $(F_0^*,G_0^*)$.
\State Otherwise, set $R = 1.9e^{0.05}$, $r = 1/R$, and return $HOM(P,r,R,n-k_2,k_1-1,\varepsilon)$.
\end{algorithmic}
\end{algorithm}
The algorithm $CTR0$ either verifies the precondition of $CTR$, namely that the moduli of the roots are not too large, or moves directly to $HOM$ when the constraints are already satisfied, giving us a stability condition.
\begin{algorithm}[H]
\caption{CTR: Finds the center of the splitting circle}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, \varepsilon)$ where $P(z)=a_0+a_1z+...+a_nz^n$ is a polynomial of degree $n \geq 2$ with $\varrho_n(P) \leq 2$, $\varepsilon > 0$ is an error parameter\\
\hspace*{\algorithmicindent} \textbf{Output}: $(\hat{F},\hat{G})$, which are approximations of factors $F,G$ such that $|P-\hat{F}\hat{G}| < \varepsilon|P|$
\begin{algorithmic}[1]
\State Set $P_0(z) = P(z+u)$ where $u = \frac{-a_{n-1}}{n a_n}$ is the center of mass of the roots.
\State Find an approximation $\hat{P}_0$ of $P_0$ with a relative precision of $\varepsilon_0' = \frac{1}{4}\varepsilon(1+|u|)^{-n}$.
\State Set $\varepsilon_0 = \frac{\varepsilon}{4}(1+|u|)^{-n}\frac{|P|}{|P_0|}$
\State If the constant term $c$ of $\hat{P}_0$ satisfies $|c| < \varepsilon_0|\hat{P}_0|$ then skip to the last step with the polynomials $\hat{F}_0(z) = z, \hat{G}_0(z) = (\hat{P}_0(z)-c)/z$.
\State Calculate an approximation $r = MODMAX(\hat{P}_0,0.01)$ of the largest modulus of the roots of $\hat{P}_0$, then check that $\rho = re^{-0.01} < 4$.
\State Calculate an approximation $\hat{P}_1$ of the dilation $P_1(z) = \rho^{-n}\hat{P}_0(\rho z)$ with a relative precision of $\varepsilon_1/n$ for $\varepsilon_1 = \frac{1}{4}\varepsilon_0\sup(1,\rho)^{-n}$
\State For $j=0,1,2,3$ calculate the polynomial $Q_j = \hat{P}_1(z+2e^{ij\pi/2})$ and the values $R_j = MODMAX(Q_j,0.01), r_j = MODMIN(Q_j,0.01)$.
\State Let $j_0$ be the index that maximizes the ratio $R_j/r_j$. Set $P_2 = Q_{j_0}$ and $v = 2e^{ij_0\pi/2}$.
\State Calculate $(F_2,G_2) = HOM(P_2, r,R, 1, n-1, \varepsilon_2)$ with $r= r_{j_0}e^{0.01}$, $R=R_{j_0}e^{-0.01}$ and $\varepsilon_2 = \varepsilon_1 \frac{|\hat{P}_0|}{|P_2|}3^{-n}$.
\State Calculate $F_1(z) = F_2(z-v), G_1(z) = G_2(z-v)$.
\State Find the approximations $\hat{F}_0, \hat{G}_0$ of the scaled polynomials $F_0 = \rho^kF_1(z/\rho), G_0 = \rho^{n-k}G_1(z/\rho)$ where $k=deg(F_1)$. Work with a relative precision of $\varepsilon'/n$ for $\varepsilon' =2^{-n-4}\varepsilon_0$.
\State Calculate the approximations $\hat{F},\hat{G}$ of $F = \hat{F}_0(z-u), G = \hat{G}_0(z-u)$ to a relative accuracy of $\varepsilon''/n$ for $\varepsilon'' = 2^{-n-4}\varepsilon$ and return $(\hat{F},\hat{G})$.
\end{algorithmic}
\end{algorithm}
We will now show that the precision of our calculations is sufficient. After calling HOM in line 9, we have that $|P_2 - F_2G_2| < \varepsilon_2|P_2|$, and the transformation $R(z) \rightarrow R(z-v)$ can amplify errors by a factor of at most $(1+|v|)^n = 3^n$, giving us a bound of
\[|\hat{P}_1 - F_1G_1| < 3^n\varepsilon_2|P_2| = \varepsilon_1|\hat{P}_0|\]
As $|\hat{P}_1 - P_1| < \varepsilon_1 |\hat{P}_0|$, we see that
\[|P_1 - F_1G_1| < 2\varepsilon_1|\hat{P}_0|\]
Meanwhile, the dilation in line 11 gives us
\[|\hat{P}_0 - F_0G_0| < 2\varepsilon_1\sup(1,\rho)^n|\hat{P}_0| = \frac{\varepsilon_0}{2}|\hat{P}_0|\]
We repeat the argument to show that
the precision used at step 11 to find $\hat{F}_0,\hat{G}_0$ implies
\[|\hat{P}_0 - \hat{F}_0\hat{G}_0| < \varepsilon_0|\hat{P}_0|\]
Next, as $|\hat{P}_0 - P_0| < \varepsilon_0|\hat{P}_0|$ we see by the triangle inequality that
\[|P_0 - \hat{F}_0\hat{G}_0| < 2\varepsilon_0|\hat{P}_0|\] and therefore
\[|P_0 - FG| < 2\varepsilon_0(1+|u|)^n|\hat{P}_0|= \frac{\varepsilon}{2}|P|\]
The precision used to calculate $\hat{F},\hat{G}$ ensures that $|P- \hat{F}\hat{G}| < \varepsilon|P|$, which guarantees the desired output.
Note that for $|u| \leq 2, \rho \leq 4$, we have that
\[\varepsilon_2 =\frac{1}{4}(1+|u|)^{-n}\sup(1,\rho)^{-n}3^{-n}\frac{|P|}{|P_2|}\varepsilon \geq \frac{1}{4}\cdot 36^{-n}\frac{|P|}{|P_2|}\varepsilon\]
In addition, we have that
\[|\hat{P}_0| \leq 2|P_0| \leq 2(1+|u|)^n|P| \leq 2\cdot 3^n|P|\]
Next, all the roots of $P_1$ are in the interior of the unit disk, so if $\alpha$ is the leading coefficient of $P_1$, then $|P_1| \leq |\alpha|2^n \leq |\hat{P}_0|2^n$ as $\alpha$ is also the dominant coefficient of $\hat{P}_0$. We then have that $|\hat{P}_1| \leq |\hat{P}_0|2^{n+1}$. Using the inequality $|P_2| \leq 3^n|\hat{P}_1|$ we get that
\[\frac{|P_2|}{|P|}\leq 3^{2n}\cdot 2^{n+2} = 4\cdot 18^n\]
Putting all of our inequalities together allows us to conclude that $\varepsilon_2 \geq \frac{1}{16}\cdot 648^{-n}\varepsilon$. This lower bound on $\varepsilon_2$ ensures that it is sufficiently large with respect to $\varepsilon$ that the calculations are well-conditioned.
\section*{Using a splitting circle to find the factors}
\subsection*{Overview}
Assume that we have a splitting circle $C$ that contains roughly half of the roots of our polynomial $p(x)$. Then by a change of coordinates, we can assume that our splitting circle is the unit circle. We would like to find the factor $F(x)$ that consists of the product of the roots in the circle.
We first find an initial approximation $F_0$ of $F$ using the residue theorem. By polynomial division, we get the approximation $G_0$ of the remainder. We then use a form of Newton's method to get better approximations $F_1,G_1$ of $F,G$. We repeat the process until $F_1,G_1$ are linear factors.
\subsection*{Finding $F_0$}
By the residue theorem, we note that along the boundary of our unit circle $C$:
\[\frac{1}{2i\pi}\oint_C \frac{p'(z)}{p(z)}z^m dz = \sum_{|z| < 1, p(z) = 0}z^m = W_m\]
Using Newton's identities, we can turn the sums of powers from $m=1,...,n$ to the coefficients $\phi_i$ of $F$ using the recursive formulas:
\[\phi_1 = -W_1\]
\[\phi_m = -\frac{1}{m}(W_1\phi_{m-1} + ... + W_{m-1}\phi_1 + W_m)\]
For a large enough $N$, we can approximate the contour integral $W_m$ with the discrete sum
\[W_m = \frac{1}{N}\sum_{j=0}^{N-1} \frac{p'(\omega^j)}{p(\omega^j)}\omega^{(m+1)j}\]
where $\omega$ is a primitive $N$th root of unity. Applying Newton's identities gives us our estimates $\phi_{i}$ and our initial approximation $F_0$ of the factor $F$, which should be within $O(e^{-\delta N})$ of the true value.
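The quadrature for $W_m$ and the Newton-identity recursion can be sketched together; the example polynomial and variable names below are our own, chosen so the inner factor is known in closed form:

```python
import numpy as np

# p has roots 0.4 and 0.5 inside the unit circle and 3 outside, so the
# inner factor is F(x) = (x - 0.4)(x - 0.5) = x^2 - 0.9x + 0.2.
p = np.poly([0.4, 0.5, 3.0])        # descending coefficients of p
dp = np.polyder(p)                  # descending coefficients of p'
k, N = 2, 256                       # number of inner roots, quadrature points
w = np.exp(2j * np.pi / N)
z = w ** np.arange(N)               # the N-th roots of unity

# Discretized contour integrals W_m = (1/N) sum_j p'(w^j)/p(w^j) w^{(m+1)j}
ratio = np.polyval(dp, z) / np.polyval(p, z)
W = [np.sum(ratio * z ** (m + 1)) / N for m in range(1, k + 1)]

# Newton's identities turn the power sums into coefficients of F_0
phi = [1.0]                         # phi_0 = 1 since F_0 is monic
for m in range(1, k + 1):
    s = sum(W[i] * phi[m - 1 - i] for i in range(m))
    phi.append(-s / m)
# F_0(x) = x^k + phi[1] x^{k-1} + ... + phi[k]
```

With $N = 256$ the discretization error $O(e^{-\delta N})$ is negligible here, and the recursion recovers $F_0(x) \approx x^2 - 0.9x + 0.2$.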
\subsection*{Newton-Schonhage iteration}
For our final approximation, we would like to find correction factors $f,g$ with degrees $deg(f) < deg(F_0), deg(g) < deg(G_0)$ such that $F_1 := F_0 + f, G_1 := G_0 + g$ are sufficiently good approximations of the true factors $F,G$. Specifically, we would like to minimize the L1 norm
\[\min_{f,g} ||P - (F_0 + f)(G_0 + g)||_1\] To the first order, we have that
\[P - (F_0 + f)(G_0 + g) \approx P - F_0G_0 - fG_0 - gF_0\]
so we would like to find $f,g$ such that
\[P - F_0G_0 = fG_0 + gF_0\]
We can ignore $g$ in our calculations by taking the modular representation
\[fG_0 \equiv P \mod F_0\]
Instead of solving directly for $f$, we will calculate an "auxiliary polynomial" $H$ with degree $deg(H) < deg(F_0)$ such that
\[HG_0 \equiv 1 \mod F_0\]
in which case we can simply find $f$ by
\[f \equiv f\cdot HG_0 \equiv HP \mod F_0\]
From there, we get the updated estimates $F_1 = F_0 + f$ and $G_1 = P / F_1$.
It turns out that $H$ can be represented as another complex integral, with the following result.
\textbf{Theorem}: Given our unit splitting circle $C$ with $P = FG$ such that $F$ only has roots inside of $C$ and $G$ does not, the polynomial $H$ with \[HG_0 \equiv 1 \mod F_0\] is uniquely given by \[H(z) =
\frac{1}{2i\pi}\oint_C \frac{1}{F_0(t)G_0(t)}\frac{F_0(z)-F_0(t)}{z-t}dt\]
\textit{Proof}. Let \[I(z) := \frac{1}{2i\pi}\oint_C \frac{
H(t)}{F_0(t)}\frac{F_0(z)-F_0(t)}{z-t}dt\]
We can decompose $\frac{1}{F_0G_0}$ into the form
\[\frac{1}{F_0G_0} = R + \frac{H}{F_0} + \frac{B}{G_0}\]
where $deg(H) < deg(F_0)$ and $deg(B) < deg(G_0)$; multiplying through by $F_0G_0$ gives $1 = RF_0G_0 + HG_0 + BF_0$, so indeed $HG_0 \equiv 1 \mod F_0$.
We note that the map $x \rightarrow (R(x) +
\frac{B(x)}{G_0(x)})\frac{F_0(z)-F_0(x)}{z-x}$ is analytic in the unit disk, so we can use Cauchy's integral formula to get
\[I(z) = \frac{1}{2i\pi}\oint_{|t|=1}\frac{1}{F_0(t)G_0(t)}\frac{F_0(z) -F_0(t)}{z-t}dt = F_0(z)\frac{1}{2i\pi}\oint_{|t|=1}\frac{H(t)}{F_0(t)}\frac{1}{z-t}dt - \frac{1}{2i\pi}\oint_{|t|=1}\frac{H(t)}{z-t}dt\]
We note that for $|z| > 1$, the second term vanishes, giving us
\[\frac{I(z)}{F_0(z)} = \frac{1}{2i\pi}\oint_{|t|=1}\frac{H(t)}{F_0(t)}\frac{1}{z-t}dt\]
Enlarging the circle from $|t| = 1$ to $|t| = R$ picks up the single residue $H(z)/F_0(z)$ at $t = z$, while the integral over $|t| = R$ tends to $0$ as $R \rightarrow \infty$, as $H$ has degree less than $F_0$. We conclude that $H(z) = I(z)$ for all $z$ with modulus greater than 1, so by the identity theorem, our claim is valid.
Writing the polynomial $F_0(z) = \phi_0 z^k + \phi_1 z^{k-1} + ... + \phi_k$ with $\phi_0 = 1$, we see by linearity of integrals that $H$ is equal to
\[H(z) = \sum_{\ell = 0}^{k-1} \left(\sum_{m=\ell+1}^k \phi_{k-m}v_{m-\ell}\right)z^\ell\]
with
\[v_m = \frac{1}{2i\pi}\oint_C \frac{t^{m-1}}{F_0(t)G_0(t)}dt\]
Again, we can approximate this integral around the unit circle with discrete sums of values at roots of unity. We define
\[U_m = \frac{1}{N} \sum_{j=0}^{N-1}\frac{\omega^{mj}}{P(\omega^j)}\]
One can show that $U_m$ is sufficiently close to $v_m$ given a large enough $N$ and that therefore we can make an initial guess $H_0$ of $H$ by
\[H_0(z) = \sum_{\ell = 0}^{k-1} \left(\sum_{m=\ell+1}^k \phi_{k-m}U_{m-\ell}\right)z^\ell\]
with $|H-H_0| = O(N^{n-1}e^{-\delta N})$. We then apply Newton's method to get an improved guess $H_1$ of $H$ via the following iteration:
Given $H_m$, we find $D_m$ from the reduction
\[H_mG_0 \equiv 1 -D_{m} \mod F_0\]
We then find $H_{m+1}$ from the reduction
\[H_{m+1}G_0 \equiv H_m(1 +D_{m}) \mod F_0\]
We note that if $H_0$ is a sufficiently good guess, then the sequence $\{D_m\}$ will converge quadratically to 0, as
\[D_{m+1} \equiv 1 - H_{m+1}G_0 \equiv 1-(1-D_m)(1+D_m) \equiv D_m^2 \mod F_0\]
We iterate until reaching a desired error tolerance of $H$.
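The iteration can be sketched on a toy pair $F_0, G_0$; the polynomials and helper names below are our own choices, and numpy's polynomial division stands in for exact modular arithmetic:

```python
import numpy as np

def polymod(a, f):
    """Remainder of a modulo f (descending-coefficient arrays)."""
    return np.polydiv(a, f)[1]

def aux(F0, G0, H0, tol=1e-12, max_iter=60):
    """Refine H so that H*G0 = 1 - D (mod F0); D squares at each pass."""
    H = np.asarray(H0, dtype=float)
    for _ in range(max_iter):
        D = np.polysub([1.0], polymod(np.polymul(H, G0), F0))
        if np.max(np.abs(D)) < tol:
            break
        H = polymod(np.polymul(H, np.polyadd([1.0], D)), F0)
    return H

F0 = np.array([1.0, -0.9, 0.2])     # inner factor candidate, roots 0.4, 0.5
G0 = np.array([1.0, -3.0])          # outer factor candidate, root 3
H0 = np.array([-0.1, -0.3])         # crude starting guess for H
H = aux(F0, G0, H0)                 # exact answer here is (-z - 2.1)/6.5
```

Starting from a crude $H_0$, the defect $D$ squares at every pass, so only a handful of iterations are needed.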
\subsection*{Discrete Fourier Transforms}
For ease of implementation, it turns out that the discrete sums we use over the unit circle exactly correspond to Discrete Fourier Transforms, which are defined as follows. Let $P$ be an $L$-dimensional vector of coefficients for our polynomial $p$; then
\[DFT(P)_m = \sum_{k=0}^{L-1} P_k \omega^{mk}\] where $\omega$ is a primitive $L$th root of unity. A discretization of an integral over the unit circle is then just $DFT(P)/L$. In addition, we can use the Fast Fourier Transform when $L$ is a power of 2 for an $O(n\log n)$ calculation of polynomial multiplication. We note that for $N = KL$ as described above:
\[W_m = \frac{1}{N}\sum_{u =0}^{K-1} Y_{m,u} \omega^{(m+1)u}\]
\[U_m = \frac{1}{N}\sum_{u =0}^{K-1} X_{m,u} \omega^{mu}\]
with
\[Y_{m,u} = \sum_{v=0}^{L-1}\frac{P'(\omega^{u+vK})}{P(\omega^{u+vK})}(\omega^K)^{(m+1)v}\]
\[X_{m,u} = \sum_{v=0}^{L-1}\frac{1}{P(\omega^{u+vK})}(\omega^K)^{mv}\]
so we can represent the double sums $W_m, U_m$ as sums of compositions of discrete Fourier transforms. Note that setting $K = 1$ just gives us a direct composition of DFTs. For tuning purposes, it can be helpful to have $K > 1$.
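For $K = 1$ this correspondence is easy to verify directly; the sketch below (our own small example) compares a double-loop evaluation of $U_m$ against numpy's inverse FFT of the sampled reciprocals:

```python
import numpy as np

# p(x) = 6 - 5x + x^2 sampled at the N-th roots of unity.
p = np.array([6.0, -5.0, 1.0])
N = 8
w = np.exp(2j * np.pi / N)
vals = np.polyval(p[::-1], w ** np.arange(N))   # p(w^j), j = 0..N-1

# Direct evaluation of U_m = (1/N) sum_j w^{mj} / p(w^j) ...
U_direct = np.array([np.sum(w ** (m * np.arange(N)) / vals) / N
                     for m in range(N)])

# ... is exactly an inverse FFT of the sampled reciprocals.
U_fft = np.fft.ifft(1.0 / vals)
```

The same identification lets the whole quadrature for $W_m$ and $U_m$ run in $O(N\log N)$ time via the FFT.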
We are now ready to reveal the algorithms for estimating the factors $F(x),G(x)$ of $p(x)$ given a unit splitting circle and the number of roots inside the circle.
\subsection*{Algorithms}
We first want to take discrete Fourier transforms to estimate our integrals $W_m, U_m$:
\begin{algorithm}[H]
\caption{DFT: Find the Fourier transforms $W_m, U_m$}
\hspace*{\algorithmicindent} \textbf{Input}: $(N,k,P)$ where $P$ is a polynomial having $k < n$ roots in the unit disk, $N$ is an integer of the form $N = KL$ where $L$ is a power of $2$ such that $n < L \leq 2n$. \\
\hspace*{\algorithmicindent} \textbf{Output}: $(W, U)$ where $W_m, U_m$ are defined for $1 \leq m \leq k$
\begin{algorithmic}[1]
\State Calculate $K,L$ where $N = KL$ and $L$ is a power of $2$ such that $n < L \leq 2n$.
\State Set $W_m = 0, U_m = 0$ for $1 \leq m \leq k$.
\State Set $\omega = e^{2i\pi/N}$ and $\omega_0 = \omega^K$.
\For {$u=0,1,\ldots, K-1$}
\State Calculate $\alpha = FFT(P,L)$, where $\alpha_v = \sum_{j=0}^n (p_j \omega^{uj})\omega_0^{vj}$ for $v \in 0:L-1$.
\State Calculate $\beta = FFT(P',L)$, where $\beta_v = \sum_{j=0}^{n-1} ((j+1)p_{j+1} \omega^{uj})\omega_0^{vj}$ for $v \in 0:L-1$.
\State Calculate $\gamma_v = 1/\alpha_v$ for $v \in 0:L-1$.
\State Calculate $x = FFT(\gamma, L)$, where $x_m = \sum_{v=0}^{L-1} \gamma_v \omega_0^{mv}$ for $m \in 0:L-1$.
\State Set $U_m = U_m + x_m \omega^{mu}$ for $m \in 0:k-1$.
\State Calculate $\delta_v = \beta_v/\alpha_v$ for $v \in 0:L-1$.
\State Calculate $y = FFT(\delta, L)$, where $y_m = \sum_{v=0}^{L-1} \delta_v \omega_0^{m v}$ for $m \in 0:L-1$.
\State Set $W_m = W_m + y_{m+1} \omega^{(m+1) u}$ for $m \in 0:k-1$.
\EndFor
\State Set $W_m = W_m/N$ and $U_m = U_m/N$.
\end{algorithmic}
\end{algorithm}
Now that we have the appropriate integrals, we would like to plug them in to get initial guesses for our factor $F$ and auxiliary polynomial $H$.
\begin{algorithm}[H]
\caption{RES: Initial approximation of $F,H$ given the respective contour integrals}
\hspace*{\algorithmicindent} \textbf{Input}: $(N,k,P,\delta, \varepsilon)$ where $P$ is a polynomial having $k < n$ roots in the unit disk with no roots in the annulus $(e^{-\delta}, e^{\delta})$, $N$ is an integer of the form $N = KL$ where $L$ is a power of $2$ such that $n < L \leq 2n$, and $\varepsilon$ is a preset error tolerance. \\
\hspace*{\algorithmicindent} \textbf{Output}: $(F_0, H_0)$ which are approximations of $F,H$ respectively.
\begin{algorithmic}[1]
\State Calculate the contour integrals $(W, U)$ approximately using DFT($N,k,P$).
\State Approximate the coefficients $\phi_m$ in $F_0 = z^k + \phi_1z^{k-1} + ... + \phi_k$ using the recursive formulas
\[\phi_1 = -W_1\]
\[\phi_m = -\frac{1}{m}(W_1\phi_{m-1} + ... + W_{m-1}\phi_1 + W_m)\]
\State Set the polynomial $H_0(z) = \sum_{\ell = 0}^{k-1} \left(\sum_{m=\ell+1}^k \phi_{k-m}U_{m-\ell}\right)z^\ell$ and keep the coefficients as a $k$-dimensional vector.
\end{algorithmic}
\end{algorithm}
We now improve $H_0$ for use in Newton's method:
\begin{algorithm}[H]
\caption{AUX: Improves approximation of auxiliary polynomial $H$}
\hspace*{\algorithmicindent} \textbf{Input}: $(F_0, G_0, H_0, \varepsilon)$ where $F_0,G_0,H_0$ are approximations of $F,G,H$, $\varepsilon$ is a preset error tolerance. \\
\hspace*{\algorithmicindent} \textbf{Output}: $(H_1)$, which is an approximation of $H$ such that $H_1G_0 \equiv 1 -D \mod F_0$ with $|D| < \varepsilon$
\begin{algorithmic}[1]
\State Set $H_1 := H_0$.
\For{$k=0,1,2,\ldots$}
\State Find $D$ by the modular relation $H_1G_0 \equiv 1 -D \mod F_0$.
\If{$||D||_1 < \varepsilon$}
\State Return $H_1$.
\ElsIf{$||D||_1 > 1$}
\State Return 0.
\EndIf
\State Set $H_{1} \equiv H_1(1+D) \mod F_0$
\EndFor
\end{algorithmic}
\end{algorithm}
We now plug in our improved estimate of $H$ to the Newton-Schonhage method:
\begin{algorithm}[H]
\caption{NS: Improves approximation of factor $F$}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, F_0, H_0, \varepsilon)$ where $P$ is a polynomial, $F_0, H_0$ are approximations of $F,H$, $\varepsilon$ is a preset error tolerance. \\
\hspace*{\algorithmicindent} \textbf{Output}: $(F_1,G_1)$, which are approximations of $F,G$ such that $|P-F_1G_1| < \varepsilon|P|$.
\begin{algorithmic}[1]
\State Set $F_1 := F_0$, $H_1 := H_0$.
\State Set $G_1 := P/F_1$ via Euclidean division.
\State Calculate $\varepsilon_0 := |P-F_1G_1|/|P|$.
\For{$k = 0,1,2,\ldots$}
\If{$\varepsilon_0 < \varepsilon$}
\State Return $(F_1,G_1)$.
\ElsIf{$\varepsilon_0 > 1$}
\State Return (0,0).
\EndIf
\State Set $H_1 := $ AUX$(F_1,G_1,H_1,\varepsilon_0)$.
\State Calculate $f := H_1P \mod F_1$.
\State Set $F_1 := F_1 + f$, $G_1 := P/F_1$, and recompute $\varepsilon_0 := |P-F_1G_1|/|P|$.
\EndFor
\end{algorithmic}
\end{algorithm}
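Algorithms AUX and NS can be sketched together in plain Python, representing polynomials as coefficient lists with the highest degree first. The helper routines and iteration caps are our own choices, not part of the original formulation:

```python
def poly_mul(a, b):
    """Product of two polynomials (coefficient lists, highest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_axpy(a, b, sign):
    """a + sign*b with left zero-padding to equal length."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + sign * y for x, y in zip(a, b)]

def poly_divmod(a, f):
    """Euclidean division a = q*f + r for a monic divisor f."""
    a, q = list(a), []
    while len(a) >= len(f):
        c = a[0]
        q.append(c)
        for j in range(len(f)):
            a[j] -= c * f[j]
        a.pop(0)  # leading coefficient is now exactly zero
    return (q or [0.0]), a

def norm1(a):
    return sum(abs(c) for c in a)

def aux(F, G, H, eps, max_iter=60):
    """AUX: refine H so that H*G = 1 - D (mod F) with ||D||_1 < eps."""
    for _ in range(max_iter):
        D = poly_axpy([1.0], poly_divmod(poly_mul(H, G), F)[1], -1.0)
        if norm1(D) < eps:
            return H
        if norm1(D) > 1:
            return None              # initial guess too poor to converge
        H = poly_divmod(poly_mul(H, poly_axpy([1.0], D, 1.0)), F)[1]
    return H

def ns(P, F, H, eps, max_iter=60):
    """NS: Newton refinement of the factor F of P, given approximate H."""
    G = poly_divmod(P, F)[0]
    for _ in range(max_iter):
        err = norm1(poly_axpy(P, poly_mul(F, G), -1.0)) / norm1(P)
        if err < eps:
            return F, G
        if err > 1 or H is None:
            return None, None
        H = aux(F, G, H, err)
        f = poly_divmod(poly_mul(H, P), F)[1]  # f = H*P mod F
        F = poly_axpy(F, f, 1.0)               # deg f < deg F keeps F monic
        G = poly_divmod(P, F)[0]
    return F, G
```

For instance, refining the guess $F_0 = z - 0.45$ against $P = (z-\tfrac12)(z-2)$ with a rough $H_0 \approx 1/G_0(0.45)$ recovers the factor $z - \tfrac12$ to machine precision; the quadratic convergence of the Newton step is visible in the error sequence.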
Note that the above algorithm could fail to converge if the approximation $F_0$ of $F$ is too far off. We combine all of our steps for the final algorithm:
\begin{algorithm}[H]
\caption{FCS: Factorization given a splitting circle}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, k, \delta, \varepsilon)$ where $P$ is a polynomial of degree $n \geq 2$ possessing $k< n$ roots in the unit disk and none in the annulus $(e^{-\delta}, e^{\delta})$, $\varepsilon > 0$ being the preset error tolerance \\
\hspace*{\algorithmicindent} \textbf{Output}: $(F_1,G_1)$, which are approximations of $F,G$ such that $|P-F_1G_1| < \varepsilon|P|$.
\begin{algorithmic}[1]
\State Set $L$ to be the power of 2 such that $n < L \leq 2n$
\State Set $K := \max(\lceil 1/(2\delta)\rceil, 2)$ and set $N := KL$.
\For{$j = 0,1,2,\ldots$}
\State Find the initial guesses $(F_0,H_0) = $ RES$(N,k,P,\delta,\varepsilon)$
\State Find the updated factors $(F_1,G_1) = $ NS$(P,F_0,H_0,\varepsilon)$
\If{$(F_1,G_1) = (0,0)$}
\State Set $N := 2N$.
\Else
\State Return $(F_1,G_1)$.
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\section*{Complete Factorization}
Now that we are capable of finding a splitting circle and using it to factor our degree $n$ polynomial $P$ into two nontrivial factors $F,G$, we can use our routine $CTR0$ repeatedly until we have reduced the problem to linear factors $L_1,L_2,\ldots,L_n$, with
\[|P - L_1L_2\cdots L_n| < \varepsilon |P|\]
This would occur if each intermediary call to $CTR0$ left us with $k$ factors $P_1,\ldots,P_k$ such that
\[|P - P_1P_2\cdots P_k| < \frac{k}{n}\varepsilon |P|\]
While $k < n$, we can factorize one of the polynomials $P_i$ with degree $>1$. WLOG, let this be $P_1$, with a factorization satisfying
\[|P_1-FG| < \varepsilon_k|P_1|\]
which would give
\[|P - FGP_2\cdots P_k| < \frac{k}{n}\varepsilon |P| + \varepsilon_k|P_1|\cdot|P_2\cdots P_k|\]
This would be upper bounded by $\frac{k+1}{n}\cdot \varepsilon|P|$ if $\varepsilon_k < \frac{\varepsilon}{n}\cdot\frac{|P|}{|P_1|\cdot|P_2\cdots P_k|}$.
From Schonhage, we have the inequality
\[|Q|\cdot|R| < 2^{\deg(QR)-1}|QR|\]
which implies
\[|P_1|\cdot|P_2\cdots P_k| < 2^{n-1}|P_1\cdots P_k| < 2^{n-1}\left(1+\frac{k}{n}\varepsilon\right)|P| < 2^n|P|\]
so our constraint is satisfied if we choose
\[\varepsilon_k = 2^{-n}\frac{\varepsilon}{n}\]
In summary, if each factorization step of the algorithm $CTR0$ uses a precision of $2^{-n}\varepsilon/n$, then we have our approximate linear factorization. That gives us the following algorithm:
\begin{algorithm}[H]
\caption{FACT: Approximate complete factorization}
\hspace*{\algorithmicindent} \textbf{Input}: $(P, \varepsilon)$ where $P$ is a polynomial of degree $n \geq 1$, $0< \varepsilon <1 $ being the preset error tolerance \\
\hspace*{\algorithmicindent} \textbf{Output}: $(L_1,\ldots,L_n)$, which are linear factors such that $|P-L_1L_2\cdots L_n| < \varepsilon|P|$.
\begin{algorithmic}[1]
\State If $n=1$, return $P$.
\State Otherwise, we calculate $(F,G) = CTR0(P,\frac{2^{-n}\varepsilon}{n})$.
\State Let $k = \deg(F)$. Then we calculate $(L_1,\ldots,L_k) = FACT(F,\varepsilon)$ and $(L_{k+1},\ldots,L_n) = FACT(G,\varepsilon)$.
\State Return $(L_1,...,L_n)$.
\end{algorithmic}
\end{algorithm}
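The coefficient-norm inequality from Schonhage used in the error analysis above can be spot-checked numerically; here $|\cdot|$ is taken to be the $1$-norm of the coefficient vector, which we assume is the norm intended:

```python
def poly_mul(a, b):
    """Product of two polynomials (coefficient lists, highest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def norm1(a):
    return sum(abs(c) for c in a)

# Q = (z-1)^3 and R = (z+1)^3: a pair chosen because the roots of Q and R
# cause heavy cancellation in the product QR = (z^2-1)^3.
Q = [1.0, -3.0, 3.0, -1.0]
R = [1.0, 3.0, 3.0, 1.0]
QR = poly_mul(Q, R)
n = len(QR) - 1  # deg(QR) = 6
print(norm1(Q) * norm1(R), 2 ** (n - 1) * norm1(QR))  # 64.0 256.0
```

The left-hand side ($64$) indeed stays below $2^{\deg(QR)-1}|QR| = 256$, illustrating why the per-step tolerance $2^{-n}\varepsilon/n$ absorbs the worst-case growth of the factor norms.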
\section*{References}
1. Gourdon, Xavier. \textit{Combinatoire, Algorithmique et Géométrie des Polynômes.} PhD Thesis, École Polytechnique, Paris, 1996.
2. Schonhage, A. Equation solving in terms of computational complexity. In \textit{Proceedings of the International Congress of Mathematicians} (1987).
3. Schatzle, R. Diploma Thesis, Universitat Bonn, 1996.
\end{document}
\section{Introduction}
\label{sect:intro}
Experimental analyses of pedestrian dynamics for behavioural insights or predictive model validation have seen a rapid proliferation over recent years~\cite{PhysRevLett.113.238701,johansson2010analysis,helbing2012crowd,johansson2007ACS,helbing2007dynamics}.
Fine-scale tracking-based data collections
have been growing in complexity and acquisition scales~\cite{zhang2014comparison,Brscic201477,mehner2015robust}, both in~\cite{DBLP:journals/ijon/BoltesS13,zhang2011transitions} and out~\cite{corbetta2014TRP,seer2014kinects,helbing2007dynamics} of controlled laboratory environments.
Laboratory experiments enable detailed parametric studies of the crowd flow (see, e.g.,~\cite{schadschneider2009evacuation}), and may technically benefit from visual markers to enhance automatic pedestrian detection and tracking~\cite{mehner2015robust}. Real-life condition measurements, recently tackled via, e.g., wireless sensors~\cite{roggen2011recognition} or, as here, via 3D sensors~\cite{corbetta2014TRP,seer2014kinects,DBLP:journals/ijon/BoltesS13}, likely eliminate potential behavioural biases introduced by a laboratory environment (such as those due to the awareness of taking part in a scientific experiment). However, they present hard automatic vision challenges~\cite{dalal2005histograms}. In general, real-life measurements are realistically a must if one aims at resolved statistical descriptions of physical observables (e.g., positions, velocities, accelerations) or at the quantification of related rare events~\cite{corbetta2016multiscale}. These descriptions are in fact possible by accumulating and agglomerating data from continuous, long-term measurements~\cite{corbetta2014TRP,Brscic201477}. Notably, extensive real-life measurements are subject to the natural uncontrollability and unpredictability of the crowd flow. In fact, in contrast with laboratory experiments, real-life measurements unavoidably include an alternation of heterogeneous scenarios. A low density pedestrian flow can suddenly turn into a dense crowd, as happens daily, e.g., in a train station at rush hour~\cite{corbetta2016multiscale}. Likewise, scenarios where individuals walk undisturbed can alternate with group dynamics~\cite{zanlungo2014pedestrian}. Although data analyses including all traffic conditions at once are a possibility~\cite{Brscic201477} (e.g. to evaluate global statistics or time-histories), inquiries on a homogeneous flow class basis, i.e.
after a classification and selection of similar dynamics scenarios, appear more useful toward a phenomenological understanding of individual dynamics. In other words, since we expect that pedestrians walking isolated from peers will exhibit a different dynamics than pedestrians walking in groups~\cite{PhysRevE.89.012811}, the agglomeration of data from these different scenarios appears to be a logical step for a (cross-)comparison analysis. Furthermore, given sufficiently long recording times, we can reach an arbitrary statistical resolution.
For such classification purposes, manual annotation has often been employed, e.g. to select groups in~\cite{PhysRevE.89.012811}, to classify walking patterns in~\cite{Tamura2013}, or to isolate waiting people in~\cite{seitzTGF15}. To the best of our knowledge, the automated agglomeration of homogeneous datasets from heterogeneous measurement ensembles is still an open problem, both technically and in terms of a ``class homogeneity'' definition. In the following, borrowing from database terminology, we refer to selection operations as \textit{queries}.
In this paper we first discuss approaches for the automatic selection of homogeneous flow data from heterogeneous long-term recordings. Then we apply these approaches to analyse and cross-compare massive pedestrian data collected by us in a year-long real-life measurement campaign at Eindhoven University of Technology, the Netherlands~\cite{corbetta2014TRP}. During this campaign we recorded pedestrian trajectories on a 24/7 schedule in a landing (intermediate planar area between flights of stairs) with corridor-like geometry. We note that individuals walking in a landing are either ascending or descending the neighboring stair flights. This aspect, adding to cultural preferences for the walking side~\cite{moussaid2009experimental}, induces asymmetries in the dynamics, which we discussed for selected flow conditions in our previous work~\cite{corbettaTGF15}. Few experimental data have been collected in these scenarios, typically in the context of evacuation dynamics~\cite{hoskins2012differences,ronchi2014analysis,peacockmovement}. Heterogeneity in our data is naturally high due to the multiplicity of traffic scenarios, such as uni- or bi-directional flows with one or several pedestrians. From our analysis in~\cite{corbettaTGF15}, we expect that the number of pedestrians in the landing (taken as a surrogate of the density) and their walking directions strongly influence the dynamics. These two elements alone are, however, insufficient to fully specify a query.
Processing extensive recordings by querying for combinations of numbers of pedestrians and walking directions on a recording-frame basis appears to be a simple and natural option. Nevertheless, long-range mutual interactions and memory effects are expected to influence the dynamics beyond single frames and rather steer entire trajectories. Queries selecting flow scenarios on a trajectory basis are hence a second, in a sense dual, standpoint. Borrowing known terminology from continuum mechanics, we call these queries ``Eulerian'' and ``Lagrangian'' respectively. Here we perform a cross-comparison of the statistics of pedestrian positions, velocities and accelerations in dependence on the different, but homogeneous, flow conditions. Furthermore, we employ selected flow conditions to compare the two querying approaches.
The content of the paper is organized as follows: in \sref{sect:query} we formally introduce the concepts of Eulerian and Lagrangian queries for pedestrian trajectory datasets. The data analyses of our dataset are the subject of \sref{sect:Meas}. The section includes a comparison of Eulerian and Lagrangian querying methodologies. A discussion in \sref{sect:concl} closes the paper.
\section{Aggregation of homogeneous measurements: Eulerian and Lagrangian queries}\label{sect:query}
In this section we provide definitions and examples for Eulerian, i.e. frame-based, and Lagrangian, i.e. trajectory-based, data queries.
Our definitions, although general, are here shaped after experimental scenarios like narrow corridors, as in our analyses in \sref{sect:Meas}. In these cases, there are just two walking directions and, consistently with the reference frame used in \fref{fig:experiment-general}, these are from the left side to the right side or \textit{vice versa}. The expected constituents of the dynamics, i.e. the number of pedestrians involved and their walking directions~\cite{corbettaTGF15}, are to be identified on a per-scenario basis. In other words, we expect a statistically similar (i.e. temporally homogeneous) behaviour once the number of pedestrians is fixed and the walking directions are given. In order to better clarify the concept we provide here a few examples, anticipating the cases studied in the following sections:
\begin{enumerate}[(i)]
\item considering all the frames in which one pedestrian walks alone in our corridor in a given direction specifies an \textit{Eulerian} query;
\item generalizing (i), we can aggregate all time frames in which a given number of pedestrians with specified walking directions are in the facility. This is another \textit{Eulerian} query.
\item case (i) includes frames with only one pedestrian. This implies that often only fragments of trajectories, possibly from heterogeneous flows, are included. In fact, while entering the landing a pedestrian might initially be alone, but other pedestrians may appear subsequently. We label as \textit{undisturbed} a pedestrian that is observed alone along the entire trajectory. Isolating all the trajectories of undisturbed pedestrians (plus walking direction) implies a \textit{Lagrangian} query;
\item the simplest avoidance scenario involves exactly two pedestrians, e.g. P3 and P4 (see \fref{fig:graph-lagr-constr}), walking in opposite directions. To ensure that the mutual presence is the only element influencing the two, we require that no pedestrian other than P3 and P4 is present in the landing. Once more this is a \textit{Lagrangian} constraint as it pertains to the trajectories of P3 and P4 as a whole.
\end{enumerate}
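As a minimal illustration of an Eulerian query, assume (hypothetically) that the dataset is stored as (pedestrian id, frame, direction) samples; frames can then be grouped by their occupancy signature:

```python
from collections import defaultdict

def eulerian_query(samples, n_left, n_right):
    """Return the frames whose occupancy matches the requested
    (n_left 2L, n_right 2R) flow configuration.
    `samples` is an iterable of (pedestrian_id, frame, direction)
    tuples with direction in {"2L", "2R"}."""
    per_frame = defaultdict(lambda: {"2L": 0, "2R": 0})
    for _, frame, direction in samples:
        per_frame[frame][direction] += 1
    return sorted(f for f, c in per_frame.items()
                  if c["2L"] == n_left and c["2R"] == n_right)
```

For example, `eulerian_query(samples, 1, 1)` returns every frame in which exactly one 2L and one 2R pedestrian are simultaneously present, regardless of which trajectories those observations belong to.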
\begin{figure}[t]
\center
\begin{tikzpicture}
\node[draw, circle,align=center] (P1) at (0,0) {P1\\$\rightarrow$};
\node[draw, circle,align=center, right=1cm of P1] (P2) {P2\\$\leftarrow$};
\node[draw, circle,align=center, right=1cm of P2] (P3) {P3\\$\rightarrow$};
\node[draw, circle,align=center, right=.5cm of P3] (P4) {P4\\$\leftarrow$};
\draw[-] (P3)--(P4);
\node[below=.46cm of P1] (belP1) {};
\node[below=.46cm of P2] (belP2) {};
\node[below=.46cm of P3] (belP3) {};
\node[below=.46cm of P4] (belP4) {};
\draw[-,dashed] (P1)--(belP1);
\draw[-,dashed] (P2)--(belP2);
\draw[-,dashed] (P3)--(belP3);
\draw[-,dashed] (P4)--(belP4);
\node[below of=P1,left of=P1] (lstart) {};
\node[below of=P4,right of=P4] (lend) {$t$};
\draw[->,thick] (lstart)--(lend);
\node[draw, circle,align=center, below=1.5 cm of P1] (P5) {P5\\$\rightarrow$};
\node[draw, circle,align=center, right=.75cm of P5] (P6) {P6\\$\leftarrow$};
\node[draw, circle,align=center, right=.5cm of P6] (P7) {P7\\$\rightarrow$};
\node[below=1.85cm of P7] (belP7) {};
\draw[-,dashed] (P7)--(belP7);
\node[draw, circle,align=center, below right=.5cm of P6,fill=white ] (P8) {P8\\$\leftarrow$};
\draw[-] (P5)--(P6);
\draw[-] (P6)--(P7);
\draw[-] (P6)--(P8);
\draw[-] (P7)--(P8);
\node[below=5cm of P1,left of=P1] (lstart) {};
\node[below=5cm of P4,right of=P4] (lend) {$t$};
\draw[->,thick] (lstart)--(lend);
\node[below=1.85cm of P5] (belP5) {};
\node[below=1.85cm of P6] (belP6) {};
\node[below=.67cm of P8] (belP8) {};
\draw[-,dashed] (P5)--(belP5);
\draw[-,dashed] (P6)--(belP6);
\draw[-,dashed] (P8)--(belP8);
\end{tikzpicture}
\caption{We use graphs to represent Lagrangian selections of data. We associate each pedestrian trajectory with a node in a graph carrying information on the direction.
We connect with edges all those pedestrians/nodes that appear together in at least one time instant. The entrance time of each pedestrian defines an order for the nodes. P1 identifies a pedestrian going to the right that appears alone along the entire trajectory, i.e. undisturbed. P2 is undisturbed too, although going to the left. P3 and P4 have opposite directions, appear together in at least one time instant and do not appear with any other pedestrian. Cases P1, P2, P3, P4 are considered in \sref{sect:lagr}. A more complex scenario occurs for P5 -- P8. P5 enters first; before leaving he/she shares the landing for at least one time frame with P6, and, afterwards, P6 appears together with both P7 and P8.\label{fig:graph-lagr-constr}}
\end{figure}
\subsection{Lagrangian queries: interpretation and evaluation}\label{sect:lagrQ}
\begin{figure}[t!]
\center
\begin{tikzpicture}
\node[draw, circle,align=center,fill=gray!30] (P1) at (0,0) {$\rightarrow$};
\node (e1) [draw, dashed, fit= (P1), inner sep=0.25cm] {};
\node [yshift=1.0ex, xshift=1.ex] at (e1.south west) {A};
\node[right=.75cm of P1] (P1pl) {$+$};
\node[draw, circle,align=center,above right=.25cm and .5 cm of P1pl,fill=gray!30] (P2) {$\rightarrow$};
\node[draw, circle,align=center,above right=0.3cm and 0.3cm of P2] (P2o) {$\leftarrow$};
\draw[-] (P2)--(P2o);
\node[draw, circle,align=center,below right=.75cm and .5cm of P1pl,fill=gray!30 ] (P3) {$\rightarrow$};
\node[draw, circle,align=center,above right=0.3cm and 0.3cm of P3] (P3o) {$\rightarrow$};
\draw[-] (P3)--(P3o);
\node[right=4.25cm of P1] (P2pl) {$+$};
\node (e2) [draw, dashed, fit= (P2) (P2o) (P3) (P3o), inner sep=0.25cm] {};
\node [yshift=1.0ex, xshift=1.ex] at (e2.south west) {B};
\node[draw, circle,align=center,below right=1.75cm and .5cm of P2pl,fill=gray!30 ] (P4) {$\rightarrow$};
\node[draw, circle,align=center,above right=0.1cm and 0.3cm of P4] (P4o) {$\rightarrow$};
\node[draw, circle,align=center,below right=0.1cm and 0.3cm of P4] (P4o2) {$\leftarrow$};
\draw[-] (P4)--(P4o);
\draw[-] (P4)--(P4o2);
\draw[dotted] (P4o)--(P4o2);
\node[draw, circle,align=center,right= .5cm of P2pl,fill=gray!30 ] (P5) {$\rightarrow$};
\node[draw, circle,align=center,above right=0.1cm and 0.3cm of P5] (P5o) {$\rightarrow$};
\node[draw, circle,align=center,below right=0.1cm and 0.3cm of P5] (P5o2) {$\rightarrow$};
\draw[-] (P5)--(P5o);
\draw[-] (P5)--(P5o2);
\draw[dotted] (P5o)--(P5o2);
\node[draw, circle,align=center,above right=1.75cm and .5cm of P2pl,fill=gray!30 ] (P6) {$\rightarrow$};
\node[draw, circle,align=center,above right=0.1cm and 0.3cm of P6] (P6o) {$\leftarrow$};
\node[draw, circle,align=center,below right=0.1cm and 0.3cm of P6] (P6o2) {$\leftarrow$};
\draw[-] (P6)--(P6o);
\draw[-] (P6)--(P6o2);
\draw[dotted] (P6o)--(P6o2);
\node (e3) [draw, dashed, fit= (P4) (P4o) (P4o2) (P5) (P5o) (P5o2) (P6) (P6o) (P6o2) , inner sep=0.25cm] {};
\node [yshift=1.0ex, xshift=1.ex] at (e3.south west) {C};
\node[right=7.5cm of P1] (P1pl) {$+$};
\node[right=8.5cm of P1] (P1pl) {\Large$\ldots$};
\end{tikzpicture}
\caption{Trajectories from different Lagrangian scenarios contribute to the same Eulerian query. For instance, frames including just one pedestrian walking to the right, gathered in the Eulerian sense, include contributions from heterogeneous sets of trajectories. Sorting these trajectories (gray nodes) by the number of other pedestrians that appeared, we have:
(A) trajectories from pedestrians walking alone to the right in the Lagrangian sense (no further pedestrian appeared); (B) trajectories from pedestrians that appear with a second pedestrian. The latter can have the same or the opposite (as P3-P4 in \fref{fig:graph-lagr-constr}) direction of the former; (C) trajectories from pedestrians appearing with two further pedestrians. These pedestrians may or may not appear together (hence an edge may or may not connect them, as indicated by the dotted edge).
\label{fig:graph-lagr-decomposition}}
\end{figure}
Lagrangian queries can be conveniently represented via undirected graphs (we refer, e.g., to~\cite{bondy1976graph} for an introduction to graphs). We associate each distinct recorded pedestrian with a graph node, including the pedestrian direction. Hence, we connect two nodes with an edge in case the two pedestrians appear together in at least one time frame. Analyzing the connected components of such a graph (i.e. those subgraphs in which each node is connected to all others via a path constituted of one or more edges~\cite{bondy1976graph}) we can extract homogeneous flow conditions with respect to trajectories. Pedestrians walking undisturbed are identified by the connected components with just one node (e.g. P1 and P2 in \fref{fig:graph-lagr-constr}, which have opposite directions). Scenarios involving just two pedestrians are identified by connected components with two nodes. Hence avoidance scenarios involving two pedestrians (as in \sref{sect:lagr}) are defined by the connected components of the graph having two nodes associated with opposite directions (cf. P3 -- P4 in \fref{fig:graph-lagr-constr}). Notably, this graph-based selection comes at low computational cost as:
\begin{inparaenum}[(i)]
\item one pass of the dataset is sufficient to build the graph;
\item querying for connected components is a light operation on modern graph libraries such as~\cite{hagberg-2008-exploring}.
\end{inparaenum}
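The construction just outlined can be sketched in plain Python. We use a small union-find in place of a graph library such as the one cited above, and assume a hypothetical sample format of (pedestrian id, frame, direction) tuples:

```python
from collections import defaultdict

def lagrangian_components(samples):
    """Build the co-presence graph (one node per pedestrian, an edge
    when two pedestrians share at least one frame) and return its
    connected components as sets of pedestrian ids."""
    per_frame = defaultdict(set)
    direction = {}
    for pid, frame, d in samples:
        per_frame[frame].add(pid)
        direction[pid] = d
    parent = {pid: pid for pid in direction}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    # Union every pair of pedestrians sharing a frame.
    for peds in per_frame.values():
        peds = sorted(peds)
        for other in peds[1:]:
            parent[find(other)] = find(peds[0])
    comps = defaultdict(set)
    for pid in direction:
        comps[find(pid)].add(pid)
    return list(comps.values()), direction

def undisturbed(samples):
    """Lagrangian query (iii): singleton connected components."""
    comps, _ = lagrangian_components(samples)
    return [next(iter(c)) for c in comps if len(c) == 1]

def pairwise_counterflow(samples):
    """Lagrangian query (iv): 2-node components with opposite directions."""
    comps, direction = lagrangian_components(samples)
    return [tuple(sorted(c)) for c in comps
            if len(c) == 2 and len({direction[p] for p in c}) == 2]
```

On the example of \fref{fig:graph-lagr-constr}, P1 and P2 come out as singleton components (undisturbed), while P3 -- P4 form the single two-node counter-flow component. One pass over the dataset builds the graph, as noted in the text.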
We can use this graph representation to further interpret the difference between Eulerian and Lagrangian queries. As an example, we consider the queries (i) and (iii) in \sref{sect:query}. Following (i) we isolate all the time frames in which one pedestrian walking in a given direction (e.g. from the left side of the corridor to the right side) is observed. Measurements from several trajectory fragments thus remain agglomerated. In \fref{fig:graph-lagr-decomposition} we report a Lagrangian, graph-based classification of these trajectory fragments.
The number of nodes of the connected component to which each trajectory fragment belongs provides the sorting criterion. Hence, the query (i) includes the entire selection given by (iii) (connected components with just one node) plus measurements from pedestrians that in previous or future time frames appeared or will appear with two, three or more other individuals. The following observations are due:
\begin{enumerate}[(A)]
\item Eulerian selections (e.g., (i)-(ii) in \sref{sect:query}) aggregate conditions having similar load and/or analogous usage patterns of the corridor;
\item conversely, Lagrangian selections (e.g., (iii)-(iv) in \sref{sect:query}) identify specific physical scenarios focusing on the involved pedestrians (cf. cases P1 -- P4 in \fref{fig:graph-lagr-constr}). In general these scenarios appear as ideal references in social-force-like modeling perspectives~\cite{helbing1995PRE}, where the pedestrian motion is a sum of a \textit{desired} component in the absence of other individuals (as in P1 or P2), plus additive terms accounting for \textit{pair-wise} interactions (as for P3-P4). See \sref{sect:lagr} for further modeling considerations.
\item The expansion in \fref{fig:graph-lagr-decomposition} grows with a super-exponential~\cite{stanley2001enumerative} number of different graph configurations as the number of considered pedestrians increases. This means that a graph-based description may become impractical when many interacting pedestrians are considered and, for conditions of high, homogeneous crowding (co-flow of numerous pedestrians, counter-flow of two numerous groups), Eulerian queries may remain the only option. Nevertheless, in these conditions, a prevalence of density-related effects over the Lagrangian graph structure seems reasonable. In other words, we expect strong similarities in the dynamics in case of large, highly connected graphs, independently of the exact structure of the connections.
\end{enumerate}
\section{Asymmetric dynamics in a staircase landing}\label{sect:Meas}
We employ Eulerian and Lagrangian queries to select and analyze data from our large scale real-life measurements of pedestrian traffic in a corridor-shaped landing. For the sake of completeness we report here a primer of the measurement campaign and we refer the interested reader to~\cite{corbetta2014TRP,corbettaTGF15} for a more detailed overview of the traffic and to~\cite{corbetta2015MBE,corbetta2016multiscale} for the techniques employed.
\begin{figure}[t!h!]
\begin{center}
\scalebox{-1}[1]{ \includegraphics[width=.3\textwidth,trim=0.5cm .5cm 0cm 3.8cm,clip=true]{3498-kSnap-12-59-58-909_2006442.png} }
\includegraphics[width=.6\textwidth,trim=0 0cm 0cm .0cm,clip=true]{t_3498_.eps}
\scalebox{-1}[1]{ \includegraphics[width=.3\textwidth,trim=0.5cm .5cm 0cm 3.8cm,clip=true]{3438-kSnap-12-53-00-768_2000197.png} }
\includegraphics[width=.6\textwidth,trim=0 0cm 0 0cm,clip=true]{t_3438_.eps}
\scalebox{-1}[1]{ \includegraphics[width=.3\textwidth,trim=0.5cm .5cm 0cm 3.8cm,clip=true]{3424-kSnap-12-51-48-016_1999107.png} }
\includegraphics[width=.6\textwidth,trim=0 0cm 0 0cm,clip=true]{t_3424_.eps}
\end{center}
\caption{ (Left panels) Depth frames taken in the landing by the \kinectTM~ sensor, with measured trajectories superimposed. (Right panels) Measured trajectories in a sketch of the landing and the considered $(x,y)$ reference. (Top row) One pedestrian walking undisturbed from the left to the right hand side of the landing (2R). (Middle row) Two pedestrians moving from the right to the left hand side of the corridor, i.e. co-flowing (2L). (Bottom row) A frame containing three pedestrians is plotted with all four trajectories of the connected component including the three of them (cf. \fref{fig:graph-lagr-constr} {P5 -- P8}).}
\label{fig:experiment-general}
\end{figure}
In the one year starting from October 2013 we recorded via an overhead Microsoft \kinectTM~ 3D-range sensor~\cite{Kinect} all pedestrians walking in a landing within the Metaforum building at Eindhoven University of Technology. The landing connects two staircases in the configuration presented in the right panels of \fref{fig:experiment-general}, where individuals ascend in a clockwise direction from the ground floor to the first floor of the building. The landing is $5.2\,$m long and $1.2\,$m wide, and the steps have the same width. Individuals at the ground floor reach the landing after $18$ steps, then they climb $4$ further steps arriving at the first floor. Recordings went on a 24/7 basis and include data from 108 working days. With \textit{ad hoc} processing techniques of the \kinectTM~ depth cloud and fluid mechanics-like tracking~\cite{OpenPTV,willneff2003spatio}, we collected \textit{ca.} 230,000 time-resolved high-resolution trajectories.
Trajectories span diverse flow scenarios, ranging from pedestrians walking undisturbed to clogged counter-flows. In the next subsections we analyze statistics from these flow scenarios employing Eulerian and Lagrangian queries, pursuing both an analysis of the dynamics and a comparison of the two querying approaches.
\subsection{Eulerian overview of the dynamics}\label{sect:eulerian}
The U-shape of the landing influences the dynamics of pedestrians, who follow curved trajectories to reach the staircase at the opposite end of the walkway (cf. trajectory samples in \fref{fig:experiment-general}). Considering the stair flights, pedestrians are furthermore ``globally'' ascending or descending through the building. For convenience we indicate the walking direction that allows one to ascend to the first floor as \textit{left to right} (2R, for brevity) and the opposite case as \textit{right to left} (2L). Shape plus ``functional'' differences among the walking directions allow the emergence of asymmetries in the dynamics for and within the different \textit{flow conditions} (undisturbed pedestrians vs. multiple pedestrians, per walking direction; cf. also our previous work~\cite{corbettaTGF15}).
First we give an overview of the dynamics spanning the observed flow conditions, adopting the Eulerian standpoint (cf. (A) and (C) in \sref{sect:lagrQ}).
Curved pedestrian trajectories fall preferentially within narrow curved bands that we use to compare our queries. The quantitative definition of the bands relies on binning the pedestrian position data according to the span-wise ($x$) position and taking statistics of the transversal position $y$. The bands reported in \fref{fig:pref-path-combo} range from the $15^{\rm th}$ to the $85^{\rm th}$ percentile of the pedestrians' transversal positions (cf. \aref{app:tech} for technical details).
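The band construction just described can be sketched as follows; the bin count and the nearest-rank percentile estimator are our own simplifications, while the exact procedure is detailed in \aref{app:tech}:

```python
def percentile(values, p):
    """Nearest-rank percentile (the exact estimator used in the
    appendix may differ)."""
    s = sorted(values)
    idx = min(len(s) - 1, int(p / 100.0 * len(s)))
    return s[idx]

def preferred_band(positions, x_min, x_max, n_bins=20, lo=15, hi=85):
    """Bin (x, y) positions along the span-wise coordinate x; per bin,
    return the [15th, 85th] percentile interval of the transversal
    coordinate y (None for empty bins)."""
    bins = [[] for _ in range(n_bins)]
    width = (x_max - x_min) / n_bins
    for x, y in positions:
        i = min(n_bins - 1, max(0, int((x - x_min) / width)))
        bins[i].append(y)
    return [(percentile(b, lo), percentile(b, hi)) if b else None
            for b in bins]
```

Applying this per flow configuration (i.e. per query) yields the cyan and magenta band boundaries reported in \fref{fig:pref-path-combo}.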
\begin{figure}[h!]
\includegraphics[width=.95\textwidth]{pref_path_combo.eps}
\caption{Bands indicating the preferred pedestrian positions in different flow conditions. Each plot reports a different Eulerian query, in the database of our measurements, based on the number of pedestrians traversing the corridor from right to left (2L) and from left to right (2R). Hence, in the subplot $(\textbf{N}\ \textit{2L}, \textbf{M}\ \textit{2R})$, we isolated all the frames containing $N$ pedestrians going to the left and $M$ going to the right. The cyan and the magenta lines limit, respectively, the preferred position bands for pedestrians going to the left and to the right. It is observed that pedestrians conform to the driving side preference by walking on the relative right side of the corridor, at least in the counter-flow cases. An increase of co-flowing pedestrians results in an expansion of the preferred position band in the transversal direction, while an increase of counter-flowing pedestrians constricts the width of the preferred position band. }
\label{fig:pref-path-combo}
\end{figure}
Pedestrians maintain a relative right position in the corridor in all flow configurations. As the number of co-flowing pedestrians increases, the width of the preferred position band increases and, as the number of counter-flowing pedestrians increases, the preferred position band becomes narrower. As intuition suggests, the widths of the preferred position bands in the counter-flow situation roughly follow the ratio between the numbers of pedestrians in the two directions. Note that for the 5-pedestrian cases, in which a group of 3 pedestrians and a group of 2 pedestrians walk toward each other (the (3 2L, 2 2R) and (2 2L, 3 2R) cases in \fref{fig:pref-path-combo}), the statistics are low, so the boundaries of the preferred position bands are less smooth.
For an overview of the average walking velocities across these Eulerian queries we refer the reader to Sect. 3 in~\cite{corbettaTGF15}.
\subsection{Eulerian vs. Lagrangian queries of diluted flows}\label{sect:eu_vs_lagr}
Via the queries in \sref{sect:eulerian} we agglomerated our measurements following the Eulerian standpoint, and we showed position preferences and dynamics asymmetries. As commented in \sref{sect:lagrQ}, Eulerian queries trade physical clarity for querying simplicity. When we consider trafficked dynamics involving many pedestrians, because of the combinatorial explosion of the Lagrangian graph configurations, Eulerian queries are likely the only option. However, they mix heterogeneous physical scenarios. In this section we compare results from Eulerian and Lagrangian queries when selecting flow conditions involving few pedestrians (one or two), i.e. ``diluted'' flows in our landing.
Our analysis compares both preferred position bands and speed fields.
\begin{figure}[thp!]
\begin{center}
\subfigure[]{\includegraphics[width=.48\textwidth]{preferred_path_single_difference_2L.eps}}
\subfigure[]{\includegraphics[width=.48\textwidth]{preferred_path_single_difference_2R.eps}}
\end{center}
\caption{Preferred position bands for pedestrians appearing alone in a frame (the definition of these bands is discussed in \aref{app:tech}), for the 2L (panel a) and 2R (panel b) cases. Red lines indicate the preferred position bands calculated from Lagrangian queries, blue lines those from Eulerian queries. The black arrows indicate the change with respect to the Lagrangian queries. Black vertical dotted lines indicate the longitudinal center of the landing. The preferred position bands from Lagrangian queries are roughly symmetric with respect to the longitudinal center, but in Eulerian queries such symmetry is lost. Notably, the results from Eulerian queries have a larger width.}
\label{fig:singlePedEuLa}
\end{figure}
In \fref{fig:singlePedEuLa} we report the bands of preferred positions according to Eulerian and Lagrangian queries of single pedestrians for the two possible walking directions (cf. cases (i) and (iii) in \sref{sect:query}). Consistently with \fref{fig:pref-path-combo} (subplots (1 2L, 0 2R) and (0 2L, 1 2R)), the 2L and 2R preferred position bands appear to be vertical translations of each other by about 20 cm, with the 2L pedestrians walking on the upper side of the corridor, conforming to the driving side preference. Although the relative position of the bands conforms with the cultural habit of keeping the driving side (cf. e.g.~\cite{moussaid2009experimental}), an influence of the landing geometry cannot be excluded. In fact, the shape of the landing limits the sight on the staircases, hence right-hand side positions may be kept to ease the avoidance of potential collisions (cf. \fref{fig:2counterflow} and \fref{fig:interaction-force}). We note that the bands from Lagrangian queries are symmetric with respect to the longitudinal center of the corridor, while this does not happen in the Eulerian case. Specifically, the entrance end of the bands expands more toward the upper side of the corridor, and the exit end expands toward the lower side of the corridor.
\begin{figure}[htp!]
\begin{center}
\subfigure[]{\includegraphics[width=.9\textwidth,trim=0 2.2cm 0 0,clip=true]{preferred_path_single_lagrangian_decomposition_draft_2L.eps}}
\subfigure[]{\includegraphics[width=.9\textwidth,trim=0 2.2cm 0 0,clip=true]{preferred_path_single_lagrangian_decomposition_draft_2R.eps}}
\subfigure[]{\includegraphics[width=.49\textwidth]{preferred_path_single_lagrangian_decomposition_addition_distribution_2L.eps}}
\subfigure[]{\includegraphics[width=.49\textwidth]{preferred_path_single_lagrangian_decomposition_addition_distribution_2R.eps}}
\end{center}
\caption{Contributions from different flow configurations to the preferred position bands in \fref{fig:singlePedEuLa}. Panels (a) and (b) show, respectively for the 2L and 2R cases, the preferred position bands from Eulerian queries (blue lines), from Lagrangian queries of undisturbed pedestrians (red lines), of two-pedestrian co-flows (green lines), of two-pedestrian counter-flows (yellow lines) and of the remaining configurations (cyan lines). The co-flowing pedestrians (green lines) occupy a wider position band than the other flow conditions, while counter-flowing pedestrians (yellow lines) have preferred position bands shifted toward the relative right-hand side. Panels (c) and (d) report, respectively for the 2L and 2R cases, the number of pedestrians in each flow configuration included in the Eulerian queries. The color scheme is the same as in panels (a, b). The largest contribution to the results of the Eulerian queries, other than the undisturbed pedestrians, comes from co-flowing pedestrian pairs. Hence, at the entrance side (right end for 2L, left end for 2R) the preferred position band is wider in Eulerian queries than in Lagrangian queries.}
\label{fig:singlePedEuLaPath-decomp}
\end{figure}
The difference between the preferred position bands from the two queries comes from pedestrians who have met or will meet other pedestrians during their walk in the corridor (although they are currently alone). \fref{fig:singlePedEuLaPath-decomp}{a} and \fref{fig:singlePedEuLaPath-decomp}{b} show that co-flowing pedestrian pairs occupy a wider preferred position band, and they are the largest group of single pedestrians eventually or previously sharing the corridor with others (cf. \fref{fig:singlePedEuLaPath-decomp}{c} and \fref{fig:singlePedEuLaPath-decomp}{d}). The increased width of the position bands at the exit end (right end for 2R, left end for 2L) in the Eulerian queries is possibly due to a combination of co-flowing and counter-flowing pedestrians following and avoiding others.
\begin{figure}[ht!]
\begin{center}
\subfigure[]{\includegraphics[width=.4825\textwidth,trim=0 .2cm 2.6cm 0, clip=true]{eulerian_speed_field_single_2L.eps}}
\subfigure[]{\includegraphics[width=.49\textwidth, trim=2.4cm .2cm 0cm 0, clip=true]{eulerian_speed_field_single_2R.eps}}
\subfigure[]{\includegraphics[width=.4825\textwidth,trim=0 .2cm 2.6cm 0, clip=true]{lagrangian_speed_field_single_2L.eps}}
\subfigure[]{\includegraphics[width=.49\textwidth, trim=2.4cm .2cm 0cm 0, clip=true]{lagrangian_speed_field_single_2R.eps}}
\end{center}
\caption{Spatial fields of the average pedestrian walking speed for pedestrians walking alone in the Eulerian and Lagrangian sense. Panels (a, b) show the average speed fields for 2L and 2R pedestrians, respectively, from Eulerian queries. Panels (c, d) show the average speed fields for 2L and 2R pedestrians from Lagrangian queries. In both the 2L and 2R cases, pedestrian average speeds are lower in the Eulerian queries.
}
\label{fig:speedEuLa}
\end{figure}
\fref{fig:speedEuLa} depicts the average pedestrian speed field for the considered queries. The walking speed varies in space, and its contours are roughly transversal with respect to the walking direction. In both the Lagrangian and Eulerian points of view, pedestrians going 2R walk slower than pedestrians going 2L. In Lagrangian queries the speed field is not symmetric with respect to the middle line of the corridor. Pedestrians walk at a higher speed in the later part of their walk in the corridor, but they slow down again before they arrive at the next flight of stairs. A speed drop of about $30\%$ is measured in our observation window. In both the 2L and 2R cases, the deceleration phase when pedestrians approach the next flight of stairs is shorter than the acceleration phase when they arrive at the landing. In the Eulerian perspective the pedestrian speed is lower than what is found using Lagrangian queries. Since a single pedestrian in the Eulerian query may have co-flow or counter-flow encounters during his/her walk in the corridor, his/her speed may be reduced accordingly. Also in the Eulerian point of view the asymmetry of acceleration and deceleration when entering/leaving the landing is greatly reduced, although still visible.
\begin{figure}[ht!]
\begin{center}
\subfigure[]{\includegraphics[width=.4715\textwidth,trim=0 .2cm 2.8cm 0, clip=true]{lagrangian_speed_minus_eulerian_speed_pairs_2L.eps}}
\subfigure[]{\includegraphics[width=.49\textwidth, trim=2.3cm .2cm 0cm 0, clip=true]{lagrangian_speed_minus_eulerian_speed_pairs_2R.eps}}
\end{center}
\caption{Differences in the preferred position bands and in the average speed fields of two pedestrians in the counter-flow configuration from Lagrangian and Eulerian queries. The red lines represent the preferred position bands from the Lagrangian queries, the blue lines represent those from the Eulerian queries, and the black arrows indicate the change from the Lagrangian to the Eulerian queries. The difference in the preferred position bands is much larger in this counter-flow configuration than in the single-pedestrian case. In the Eulerian case the preferred position bands are shifted much more to the relative right side of the corridor due to the potentially higher traffic. The underlying map presents the difference in the average speed fields, calculated by subtracting the average speed field of the Eulerian queries from that of the Lagrangian queries. In most of the region the average speed from the Lagrangian queries is larger, due to the fact that in the Eulerian queries the pedestrians are already sharing the corridor at the moment of recording, and the avoidance mechanism has taken effect.}
\label{fig:speedLa-Eu-two-ped}
\end{figure}
In the same spirit, we compute the preferred position bands and the average speed field for the case of two pedestrians walking in the counter-flow condition. \fref{fig:speedLa-Eu-two-ped} shows the difference between the Lagrangian and Eulerian queries. It is clear that the difference in the preferred position bands from the two perspectives is even larger than in the single-pedestrian case (cf. \fref{fig:singlePedEuLa}). The Eulerian queries of this condition give frames of exactly two pedestrians walking in opposite directions. Hence the pedestrians have already seen each other, and the avoidance mechanism (topic of \sref{sect:lagr}) is in effect. Moreover, these pedestrians may encounter more people during their walk in the corridor. Compared with the Lagrangian queries of this condition, where the pedestrians may not have met their counterpart yet or the other person has already left, the Eulerian-queried counter-flow trajectories show a greater avoidance effect. We can perform the same two sets of queries for the average speed fields and calculate the difference, as shown in the underlying map in \fref{fig:speedLa-Eu-two-ped}. In most of the region the average speed field of the Lagrangian queries has a larger magnitude (so the difference is positive). The aforementioned reasons are likely to contribute to this difference as well.
\subsection{Lagrangian analysis of pair-wise interactions} \label{sect:lagr}
The last step of our analysis considers Lagrangian scenarios involving undisturbed pedestrians and counter-flowing pairs (i.e. the cases P1, P2, P3-P4 in \fref{fig:graph-lagr-constr}). The asymmetries, here discussed quantitatively, are not limited to positions and velocities and include, in a social-force modeling~\cite{helbing1995PRE} perspective, exchanged ``interaction forces'' (here rather accelerations).
\begin{figure}[ht!]
\begin{center}
\subfigure[]{\includegraphics[width=.8\textwidth, clip=true]{lagrangian_speed2_minus_lagrangian_speed1_pairs_2L.eps}}
\subfigure[]{\includegraphics[width=.8\textwidth, clip=true]{lagrangian_speed2_minus_lagrangian_speed1_pairs_2R.eps}}
\end{center}
\caption{Preferred position bands and difference in average speed between counter-flow pairs and undisturbed pedestrians from Lagrangian queries. The dashed green lines represent the preferred position bands when pedestrians walk undisturbed, and the solid green lines represent the preferred position bands for counter-flow pairs. The preferred position bands are shifted toward the relative right side of the corridor in the counter-flow configuration due to collision avoidance. The underlying map shows the average speed field of counter-flow pairs minus that of undisturbed pedestrians. (a) case 2L, (b) case 2R. In most of the space the difference is negative, since counter-flowing pedestrians slow down during the encounter with another pedestrian.}
\label{fig:2counterflow}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\subfigure[]{\includegraphics[width=.3\textwidth]{vertical_displ.eps}}
\subfigure[]{\includegraphics[width=.3\textwidth]{speed_xdistr.eps}}
\subfigure[]{\includegraphics[width=.3\textwidth]{speed_drop.eps}}
\end{center}
\caption{Quantitative comparisons between the undisturbed dynamics and the counter-flow of two individuals in terms of position and velocity differences. (a) Displacement in absolute value of the layers of preferred positions from the undisturbed pedestrian case to the counter-flow case of two pedestrians (cf. \fref{fig:2counterflow}). We evaluate the transversal displacement of the median line of the path $y_{x,50}$ (cf. explanation in \aref{app:tech}). We consider $\Delta y_{x,50}^{2L} = (y_{x,50}^{counterflow,2L} - y_{x,50}^{single,2L})$ in the 2L case and $\Delta y_{x,50}^{2R} = -(y_{x,50}^{counterflow,2R} - y_{x,50}^{single,2R})$ in the 2R case. The box-plots report the distributions of $\Delta y_{x,50}^{2L}$ and of $\Delta y_{x,50}^{2R}$ across the landing span. The 2R pedestrians move much more to the relative right when sharing the corridor with another pedestrian coming their way. (b) Comparison of walking speeds for undisturbed pedestrians and pedestrian pairs in counter-flow. We evaluate the average speed at each span-wise location $x$ and, for every direction, we consider the relative difference of such speed between the undisturbed and the counter-flow case. The distribution of the relative difference is reported by the box-plots. Descending (2L) pedestrians walk faster than 2R pedestrians in both the undisturbed and the counter-flow situation. Although in both directions pedestrians slow down when in counter-flow, the 2R pedestrians slow down much more, which can also be seen in panel (c). (c) Distribution of the relative speed variation comparing undisturbed pedestrians and counter-flows of two. (a,b,c) The box-plots delimit the first and third quartiles of the distributions, the whiskers identify the 5$^{th}$ and 95$^{th}$ percentiles. The red line reports the median and the red dots the average values.
}
\label{fig:boxplots}
\end{figure}
Direction-dependent differences, considered for undisturbed pedestrians in \fref{fig:singlePedEuLa} and \fref{fig:singlePedEuLaPath-decomp}{cd}, increase when the presence of one other pedestrian with opposite direction triggers the avoidance mechanism (i.e., in a counter-flow; cases P3--P4 in \fref{fig:graph-lagr-constr}). In this condition, the paths are shifted to the relative right to avoid collision. Contrary to the single-pedestrian case, these preferred position bands have no overlap (cf. \fref{fig:2counterflow} and \fref{fig:singlePedEuLa}); furthermore, they are not symmetric with respect to the vertical axis in the landing center ($x\approx -0.1\,$m). In both the 2L and 2R cases, the bands are wider near the entrance side, with a distribution similar to the undisturbed pedestrian case. Moving across the landing, the bands constrict and shift toward the relative right-hand side, possibly due to encounters with other pedestrians. The displacement of the preferred position bands features direction-related asymmetries: on average, the rigid translation of the band is \textit{ca.} $40\%$ larger in the 2R case than in the 2L case. For pedestrians going to the left, the preferred position band shifts almost rigidly to the relative right, with a displacement of \textit{ca.} $10\,$cm. For pedestrians going to the right, instead, the preferred position band undergoes a deformation, and the band axis shifts by \textit{ca.} $18\,$cm (cf. \fref{fig:2counterflow} and \fref{fig:boxplots}{a}).
We observe a drop in the walking speed in comparison with the undisturbed pedestrians, especially around the central horizontal axis ($y\approx 0\,$m), where collisions may potentially occur. Higher walking speeds are reached on the relative right-hand side of the pedestrians, where collisions are mostly avoided (cf. the negative difference in the average speed fields in \fref{fig:2counterflow}). \fref{fig:boxplots}{b}{c} show quantitative measurements of the speed reduction. In both the undisturbed and the counter-flow case, pedestrians walking to the right walk slower than pedestrians walking to the left. Furthermore, when encountering another pedestrian coming from the opposite direction, the 2R pedestrians slow down by $18\%$ on average, while the descending pedestrians (2L) show an almost negligible reduction.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.9\textwidth]{acc_field_sin_2L.eps}
\end{center}
\caption{Average acceleration field for undisturbed pedestrians walking from right to left. The curved pedestrian motion following the U-shape of the corridor determines a centripetal acceleration. We measure an almost central acceleration field pointing to \textit{ca.} $(x=-0.25,y=-0.10)\,$m. The acceleration field for pedestrians going to the right is analogous, thus not repeated.}
\label{fig:displacement-distribution}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\subfigure[]{\includegraphics[width=.9\textwidth]{acc_field_all_2L.eps}}
\subfigure[]{\includegraphics[width=.9\textwidth]{acc_field_all_2R.eps}}
\end{center}
\caption{Average avoidance acceleration field (cf. \eref{eq:interaction-acceleration}) for two pedestrians having opposite velocities (counter-flowing). (a) acceleration field for pedestrians going from right to left; (b) acceleration field for pedestrians going from left to right. In both cases the fields yield sidesteps on the relative right and a longitudinal speed reduction in the entering half of the landing. In the preferred walking band of undisturbed pedestrians (solid line) decelerations are stronger. The dashed line reports the preferred walking band for the two-pedestrian case.
}
\label{fig:interaction-force}
\end{figure}
Acceleration fields help the interpretation of the position and average speed fields in \fref{fig:2counterflow}. As pedestrians follow curved trajectories, they experience a centripetal acceleration even when moving undisturbed (cf. \fref{fig:displacement-distribution} for the average acceleration field of pedestrians walking from right to left). Lagrangian queries are paramount here to quantify fields without perturbations from pedestrians just met, or about to be met, in the landing. Let $\vec{a}_{p,1}$ be the acceleration of undisturbed pedestrians. To estimate the average acceleration field in avoidance, we follow a social force-like~\cite{helbing1995PRE} approach, decomposing the acceleration of a pedestrian $\vec{a}_p$ as
\begin{equation}
\vec{a}_p = \vec{a}_{p,d} + \vec{a}_{p,i},
\end{equation}
where $\vec{a}_{p,d}$ denotes the \textit{desired} component of the acceleration, thus independent of other pedestrians, and $\vec{a}_{p,i}$ is the perturbation to $\vec{a}_{p,d}$ due to the avoidance \textit{interaction}. In social force models, $\vec{a}_{p,d}$ is typically a relaxation force toward a given (desired) velocity field. It is reasonable to assume that, at least on average, $\vec{a}_{p,d}$ can be approximated by $\vec{a}_{p,1}$. Thus, we extract the average interaction acceleration as
\begin{equation}\label{eq:interaction-acceleration}
\langle\vec{a}_{p,i}\rangle = \langle\vec{a}_p \rangle - \langle\vec{a}_{p,d}\rangle \approx \langle\vec{a}_p \rangle - \langle\vec{a}_{p,1}\rangle,
\end{equation}
where $\langle \vec{a} \rangle$ is a local spatial average of $\vec{a}$ (cf. \aref{app:tech} for technical details). We report the avoidance acceleration fields for pedestrians going to the left and to the right in \fref{fig:interaction-force}.
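As an illustrative sketch of the estimate in \eref{eq:interaction-acceleration} (hypothetical code, not the analysis software actually used; the function names, the dictionary-of-arrays layout and the grids are our own assumptions), the average interaction acceleration field can be obtained by subtracting binned mean accelerations:

```python
import numpy as np

def binned_mean(x, y, values, xe, ye):
    """Local spatial average of `values` on the grid defined by bin edges xe, ye."""
    count, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    total, _, _ = np.histogram2d(x, y, bins=[xe, ye], weights=values)
    with np.errstate(invalid="ignore"):
        return total / count  # NaN marks bins without detections

def interaction_acceleration(pair, single, xe, ye):
    """<a_{p,i}> ~ <a_p> - <a_{p,1}>: mean counter-flow acceleration field
    minus the mean undisturbed field, component by component."""
    return [binned_mean(pair["x"], pair["y"], pair[c], xe, ye)
            - binned_mean(single["x"], single["y"], single[c], xe, ye)
            for c in ("ax", "ay")]
```

Here `pair` and `single` would hold the detection arrays queried, in the Lagrangian sense, for the counter-flow and undisturbed configurations respectively.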
In both the 2L and 2R cases the accelerations point toward the relative right, following the drift of the trajectories and thus the displacement of the preferred position bands. Strong longitudinal decelerations for collision avoidance are visible in the 2R case close to the entering side (left end). However, we notice that pedestrians going to the left mostly avoid collisions by moving to the relative right without changing their forward speed (longitudinal velocity).
\section{Discussion}\label{sect:concl}
Pedestrian dynamics measurements acquired in real-life conditions are largely heterogeneous due to the natural variability of flow conditions. This makes data selection paramount, as accidental aggregations of data from heterogeneous flows may yield biased statistical measurements leading to improper conclusions. This calls for methodologies to query homogeneous datasets from the measurements.
In this paper we cross-compared pedestrian dynamics data from a large experimental dataset that we collected via a year-long measurement campaign on a staircase landing. From the physical point of view, we commented on the asymmetries of the pedestrian dynamics depending on the flow conditions, here identified with the number of pedestrians and their walking directions. The U-shape of the landing combines with functional differences between the walking directions (pedestrians walking to the left are going to a lower level in the building; the opposite holds for pedestrians going to the right), yielding asymmetries in preferred positions, velocities and also avoidance accelerations (social interaction forces). The quantitative differences found include, besides higher velocities for descending pedestrians, a larger influence on the walking patterns of ascending pedestrians in the presence of pedestrians walking in counter-flow. We observed a strong walking side preference towards the driving side, although an influence of the landing shape on this cannot be excluded. In these conditions, ascending pedestrians, typically walking on the inner side, may have a different sight range on the environment than descending individuals. This aspect can also play a role in the measured asymmetries.
The previous comparisons stimulated methodological investigations too. We recognised the difference between querying our dataset for homogeneous flow conditions at the frame level (Eulerian querying) or at the trajectory level (Lagrangian querying). Although querying at the frame level is more immediate when dealing with continuous measurements, pedestrian interactions affect pedestrian trajectories thoroughly. While this might not be true on large length scales, it certainly holds for our recording window. A pedestrian observed alone remains affected by previous passages of other individuals (who have already disappeared from the observation window); moreover, since sight likely extends beyond our observation window, avoidance maneuvers may already play a role before a second pedestrian enters the scene. In this respect, the following observations hold:
\begin{itemize}
\item the graph connections are built upon the simultaneous appearance of pedestrians in our observation window. In a sense, we assume that interactions among pedestrians are limited to connected components. In principle, interactions might happen outside our recording window and still play a role in the observed dynamics. Although this aspect is hard to assess, we expect that, because of the geometry of the landing, it has negligible effects. In other words, we assume that if two pedestrians interact then, at some point, both appear in our observation window.
\item The spatial scale of our geometry is certainly relevant. Although it is reasonable to assume that two pedestrians appearing together in our observation window exert a reciprocal influence on their dynamics, the same would not hold for larger geometrical settings (e.g. spanning beyond typical interaction ranges). Generalizing the graph structure to include geometric distances might help in treating such cases.
\end{itemize}
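A minimal sketch of this connected-component grouping (hypothetical code with our own naming and a simplified criterion: two trajectories are linked whenever their presence intervals in the observation window overlap, i.e. the pedestrians appeared simultaneously):

```python
def connected_components(intervals):
    """Group trajectory ids into connected components; `intervals` maps each
    pedestrian id to its (t_in, t_out) presence interval in the window."""
    ids = list(intervals)
    parent = {i: i for i in ids}  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            (s1, e1), (s2, e2) = intervals[ids[a]], intervals[ids[b]]
            if s1 <= e2 and s2 <= e1:  # time overlap: link the two pedestrians
                parent[find(ids[a])] = find(ids[b])

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```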
\begin{appendix}
\section{Preferred walking layers, speed and acceleration fields}\label{app:tech}
We evaluate preferred walking layers, speed and acceleration fields via a spatial binning of the measurements from homogeneous sets of trajectories (cf. \sref{sect:query}). Given a homogeneous set of trajectories, every detection $d$ has the form
\begin{equation}
d = (t,p,x,y,u,w,a_x,a_y)
\end{equation}
where $t$ is the detection time, $p$ is a unique index for the detected pedestrian, $(x,y)$ is the position of the pedestrian at time $t$, $\vec{v}=(u,w)$ his/her velocity, and $\vec{a} = (a_x,a_y)$ his/her acceleration.
To evaluate the preferred walking layer we extend the approach suggested in~\cite{corbetta2014TRP}. We bin the detection set $\{d\}$ with respect to the longitudinal position $x$ between $x = -1\,$m and $x=0.8\,$m in $40$ equal bins. For each bin we consider the distribution of transversal positions $y_x$ (where the $x$ subscript indicates the dependence on the bin), and we take the $15^{th}$ and $85^{th}$ percentiles (indicated by $y_{x,15}$ and $y_{x,85}$) of the distribution to define the preferred position band.
We evaluate the displacement of the preferred position bands in \fref{fig:boxplots}{a} by considering in each bin the $50^{th}$ percentiles of $y_x$: $y_{x,50}$. We compute the difference between the value for pedestrians undisturbed ($y_{x,50}^{single,2L}$, for the 2L case) and walking with one other individual in opposite direction ($y_{x,50}^{counterflow,2L}$). We consider the distribution of the quantity obtained ($\Delta y_{x,50}^{2L} = (y_{x,50}^{counterflow,2L} - y_{x,50}^{single,2L})$).
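A minimal sketch of these band and displacement computations (hypothetical code, not the analysis software actually used; function names and array layout are our own):

```python
import numpy as np

def band_percentiles(x, y, x_min=-1.0, x_max=0.8, n_bins=40, q=(15, 50, 85)):
    """Percentiles of the transversal position y in each longitudinal x bin."""
    edges = np.linspace(x_min, x_max, n_bins + 1)
    out = np.full((len(q), n_bins), np.nan)
    for i in range(n_bins):
        in_bin = (x >= edges[i]) & (x < edges[i + 1])
        if in_bin.any():
            out[:, i] = np.percentile(y[in_bin], q)
    return out  # rows: y_{x,15}, y_{x,50}, y_{x,85}

def median_displacement(x_cf, y_cf, x_single, y_single):
    """Per-bin shift of the band median between two homogeneous trajectory
    sets, e.g. counter-flow pairs vs. undisturbed pedestrians."""
    return (band_percentiles(x_cf, y_cf)[1]
            - band_percentiles(x_single, y_single)[1])
```

Here `x` and `y` collect the longitudinal and transversal coordinates of all detections in a homogeneous set of trajectories.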
Velocity and acceleration fields are defined after a binning with respect to $x$ and $y$. For each bin we take, respectively, the average speed $\langle\sqrt{u^2+w^2}\rangle$ and the average acceleration $\langle\vec{a}\rangle = (\langle a_x\rangle,\langle a_y\rangle)$. For the velocity fields we employed $32$ bins in the $x$ direction within $[-0.8,1.0]\,$m and $20$ bins in the $y$ direction within $[-1.0,0.5]\,$m. For the acceleration fields we employed $20$ bins in the $x$ direction within $[-0.8,1.0]\,$m and $20$ bins in the $y$ direction within $[-0.4,0.4]\,$m.
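A possible sketch of this two-dimensional binning (again hypothetical code with our own naming; the grid parameters are those quoted above):

```python
import numpy as np

def mean_field(x, y, values, x_range, y_range, n_x, n_y):
    """Bin-wise spatial average of `values` on a regular (x, y) grid."""
    xe = np.linspace(x_range[0], x_range[1], n_x + 1)
    ye = np.linspace(y_range[0], y_range[1], n_y + 1)
    count, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    total, _, _ = np.histogram2d(x, y, bins=[xe, ye], weights=values)
    with np.errstate(invalid="ignore"):
        return total / count  # NaN marks bins with no detections

# Speed field on the 32 x 20 grid, e.g.:
#   speed = mean_field(x, y, np.hypot(u, w), (-0.8, 1.0), (-1.0, 0.5), 32, 20)
# and one acceleration component on the 20 x 20 grid:
#   acc_x = mean_field(x, y, a_x, (-0.8, 1.0), (-0.4, 0.4), 20, 20)
```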
\end{appendix}
\begin{acknowledgement}
We thank A. Holten and G. Oerlemans (Eindhoven, NL) for their help in the establishment of the measurement setup at Eindhoven University of Technology and A. Liberzon (Tel Aviv, IL) for his help in the adaptation of the OpenPTV library. We acknowledge the support from the Brilliant Streets research program of the Intelligent Lighting Institute at the Eindhoven University of Technology, NL. This work is part of the JSTP research programme ``Vision driven visitor behaviour analysis and crowd management" with project number 341-10-001, which is financed by the Netherlands Organisation for Scientific Research (NWO). Support from COST Action MP1305 ``Flowing Matter'' is also kindly acknowledged.
\end{acknowledgement}
\bibliographystyle{cdbibstyle}
\section{Introduction}
The purpose of this paper is to study the probability law of the real-valued solution of the following general class of stochastic partial differential equations:
\begin{equation}
L u(t,x) = \sigma(u(t,x)) \dot{W} (t,x) + b(u(t,x)),
\label{1}
\end{equation}
$t \geq 0$, $x\in \mathbb{R}^{d}$, where $L$ denotes a second order differential operator, and we impose the
initial conditions
\begin{equation}
u(0,x) = \frac{\partial u}{\partial t}(0, x) = 0. \label{1a}
\end{equation}
The coefficients $\sigma$ and $b$ are some real-valued functions and $\dot{W} (t,x)$ is the formal notation for some Gaussian random perturbation
defined on some probability space. We will assume that it is white in time and with a homogeneous spatial correlation given by a function $f$,
and we denote by $(\mathcal{F}_t)_t$ the filtration generated by $W$ (see Section \ref{prel} for the precise definition of this noise). \\
By definition, the solution to Equation (\ref{1}) is an adapted stochastic process $\{u(t,x), (t,x)\in [0,T]\times \mathbb{R}^{d}\}$
satisfying
\begin{align}
u(t,x)=& \int_0^t \int_{\mathbb{R}^{d}} \Gamma(t-s,x-y) \sigma(u(s,y)) W(ds,dy) \nonumber\\
&+ \int_0^t \int_{\mathbb{R}^{d}} b(u(t-s,x-y)) \Gamma(s,dy) ds,
\label{3}
\end{align}
where $\Gamma$ denotes the fundamental solution associated to $Lu=0$. As will be clarified later on, we suppose that, for all $t\in [0,T]$,
$\Gamma(t)$ is a non-negative measure on $\mathbb{R}^d$ (see Hypothesis A).
The stochastic integral appearing in formula (\ref{3}) requires some care because the integrand
is a measure. For integrands that are real-valued functions, that is $(t,x)\mapsto \Gamma(t,x)\in \mathbb{R}$, the stochastic integral was defined by Walsh in \cite{walsh}. Then, in order to deal with SPDEs whose associated fundamental solution is a generalised function, Dalang \cite{Da} extended Walsh's stochastic integral and covered, for instance, the case of the wave equation in dimension three. \\
However, in the first part of this paper we will give, in a general setting, a definition of the stochastic integral with respect to $W$ using the techniques of stochastic integration with respect to a cylindrical Brownian motion (see, for instance, \cite{dz}). As we will see, this integral will turn out to be equivalent to Dalang's extension (\cite{Da}) when the integrand is of the form $G:=\Gamma(t-\cdot,x-*) Z(\cdot,*)$, for certain stochastic processes $Z$. More precisely, we will show that random elements of this latter form may be integrated with respect to $W$ using a localising procedure: first we will assume that $Z$ has bounded trajectories, and then we will identify $G$ as the weak limit of some sequence $(\Gamma Z_N)_N$, where each $Z_N$ has bounded paths, almost surely (see Lemma \ref{ggamma} and Proposition \ref{prop1}).\\
We should mention at this point that solutions to stochastic partial differential equations of the form (\ref{1}) have been widely studied over the last two decades. For instance, for the case of the stochastic heat and wave equations, we refer to \cite{walsh,carmona,DF,MS,Da,PZ,Pe}.\\
The second part of the paper is devoted to the study of the probability law of the random variable $u(t,x)$, for any fixed $(t,x)\in (0,T]\times \mathbb{R}^d$. This will be done using the techniques of the so-called Malliavin calculus. The aim is twofold: \\
First, we prove that $u(t,x)$ has an absolutely continuous law with respect to Lebesgue measure on $\mathbb{R}$, provided that, among other assumptions, the differential operator $L$ and the spatial correlation $f$ are related as follows:
\begin{equation}
\int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(t)(\xi)|^2 \mu(d\xi) dt< \infty,
\label{5bis}
\end{equation}
where $\mu$ is a non-negative tempered measure such that its Fourier transform is $f$
(see Theorem \ref{densitat} for the precise statement). This result provides a generalisation of Theorem 3 in \cite{QS}, where the authors deal with the three-dimensional stochastic wave equation and a slightly stronger condition than (\ref{5bis}) is assumed. In order to prove that the law of $u(t,x)$ has a density, we apply Bouleau-Hirsch's criterion. Indeed, to prove the Malliavin regularity of the solution, we take advantage of the results in
\cite{QS} and we fully identify the initial condition of the stochastic equation satisfied by the Malliavin derivative of $u(t,x)$. This is an important point in order to show that the Malliavin matrix is invertible almost surely. Finally, we point out that (\ref{5bis}) is also a sufficient condition for the stochastic integral in (\ref{3}) to be well defined. \\
Secondly, we prove that the law of $u(t,x)$ has an infinitely differentiable density with respect to Lebesgue measure on $\mathbb{R}$ (see
Theorem \ref{regularitatdensitat}). To obtain this result, we show that $u(t,x)$ is infinitely differentiable in the Malliavin sense and that the inverse of the Malliavin matrix has moments of all order. For the latter to be achieved, we need to impose a lower bound of the integral in (\ref{5bis}) in terms of a certain power of $T$ (see (\ref{lowbound})).
We should mention that
Theorem \ref{regularitatdensitat} in Section \ref{regularitat} provides an improvement of Theorem 3 in \cite{qs2} for the case of the three-dimensional stochastic wave equation, since in this latter reference the integrability conditions concerning the Fourier transform of $\Gamma$ are much more involved (see also \cite{marta}). Moreover, it is worth mentioning that Theorem \ref{regularitatdensitat} also generalises known results on the existence and smoothness of the density for the case of the stochastic heat and wave equations with dimensions $d\geq 1$ and $d=1,2$, respectively (see
\cite{carmona,MS,mms,marta}).\\
The paper is organised as follows. In the next Section \ref{prel}, we present some preliminaries concerning the random perturbation in Equation (\ref{1}), as well as the main hypothesis on the fundamental solution $\Gamma$ and the space correlation $f$. We extend Walsh's stochastic integral in order to cover the case of measure-valued integrands in Section \ref{integral}; we also characterise some measure-valued random elements that can be integrated with respect to $W$, and we sketch the construction of the integral in a Hilbert-valued setting. In Section \ref{ex}, a theorem on existence and uniqueness of the solution to Equation (\ref{1}) is stated; we also deal with some particular examples of differential operators $L$, namely the heat and wave equations. Sections \ref{existencia} and \ref{regularitat} are devoted, respectively, to the existence and regularity of the density for the probability law of the solution to (\ref{1}). At the very beginning of Section \ref{existencia}, we introduce the main tools of the Malliavin calculus needed throughout the paper (we refer to \cite{nualart} for a complete account on the topic).\\
Throughout the paper we use the notation $C$ for any positive real constant, independently of its value.
\section{Preliminaries}
\label{prel}
Recall that we are interested in the following general class of stochastic partial differential equations:
$$
L u(t,x) = \sigma(u(t,x)) \dot{W} (t,x) + b(u(t,x)),
$$
with $t \geq 0$, $x\in \mathbb{R}^{d}$, $L$ denotes a second order differential operator, and we consider vanishing initial conditions (\ref{1a}).
The Gaussian random perturbation is described as follows: $W$ is a zero mean Gaussian family of random variables
$\{W(\varphi), \varphi \in \mathcal{C}_0^\infty (\mathbb{R}^{d+1})\}$,
defined in a complete probability space $(\Omega, \mathcal{F},P)$, with covariance
\begin{equation}
E(W(\varphi) W(\psi)) = \int_0^\infty \int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}} \varphi(t,x) f(x-y) \psi(t,y) dx dy dt,
\label{2}
\end{equation}
where $f$ is a non-negative continuous function on $\mathbb{R}^{d}\setminus \{0\}$ which is the Fourier
transform of a non-negative tempered measure $\mu$ on $\mathbb{R}^{d}$.
That is,
$$f(x)=\int_{\mathbb{R}^{d}} \exp(-2\pi i\; x\cdot \xi) \mu(d\xi)$$
and there is an integer $m\geq 1$ such that
$$\int_{\mathbb{R}^{d}} (1+|\xi|^2)^{-m}\mu(d\xi) <\infty.$$
Then, the covariance (\ref{2}) can also be written, using Fourier transform, as
$$E(W(\varphi) W(\psi)) = \int_0^\infty \int_{\mathbb{R}^{d}} \mathcal{F} \varphi(t)(\xi) \overline{\mathcal{F} \psi(t)(\xi)} \mu(d\xi) dt.$$
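For instance, the Gaussian kernel provides a simple admissible correlation (a standard illustration, which plays no special role in the sequel):

```latex
% Gaussian correlation: with the convention
% f(x)=\int_{\mathbb{R}^{d}} e^{-2\pi i\, x\cdot\xi}\,\mu(d\xi),
% the kernel e^{-\pi|x|^{2}} is self-dual, so
f(x)=e^{-\pi |x|^{2}}
\quad\Longleftrightarrow\quad
\mu(d\xi)=e^{-\pi |\xi|^{2}}\,d\xi .
% Here \mu is a finite non-negative measure, hence tempered, and the
% integrability condition holds already with m=1:
\int_{\mathbb{R}^{d}} (1+|\xi|^{2})^{-1}\, e^{-\pi |\xi|^{2}}\,d\xi <\infty .
```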
The main assumption on the differential operator $L$ may be summarised as follows:\\
\noindent {\bf Hypothesis A}.
The fundamental solution to $Lu = 0$, denoted by $\Gamma$, is a non-negative measure of
the form $\Gamma(t, dx)dt$ such that for all $T>0$
\begin{equation}
\sup_{0\leq t\leq T} \Gamma(t,\mathbb{R}^{d})\leq C_T<\infty
\label{e2}
\end{equation}
and
\begin{equation}
\int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(t)(\xi)|^2 \mu(d\xi) dt< \infty.
\label{5}
\end{equation}
\vspace{0.5cm}
The completion of the Schwartz space $\mathcal{S} (\mathbb{R}^{d})$ of rapidly decreasing $\mathcal{C}^\infty$ functions, endowed
with the inner product
$$\langle \varphi,\psi\rangle_\mathcal{H}
=\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}} \varphi(x) f(x-y) \psi(y) dx dy=\int_{\mathbb{R}^{d}} \mathcal{F} \varphi(\xi) \overline{\mathcal{F} \psi(\xi)} \mu(d\xi),$$
$\varphi, \psi\in \mathcal{S} (\mathbb{R}^{d})$, is denoted by $\mathcal{H}$. Notice that $\mathcal{H}$ may contain distributions. Set $\mathcal{H}_T = L^2([0,T];\mathcal{H})$.
\section{Stochastic integrals}
\label{integral}
In this section Walsh's stochastic integral with respect to martingale measures will be extended to more general integrands, namely the class of square integrable $\mathcal{H}$-valued predictable processes. The extension will be performed in the infinite dimensional setting described by Da Prato and Zabczyk in \cite{dz}.
Then, we will give non-trivial examples of integrands, which will be some measure-valued random elements. We will briefly recall the extension of the stochastic integral in a Hilbert-valued setting. This will be needed to give a rigorous meaning to the stochastic evolution equations satisfied by the Malliavin derivatives of the solution of (\ref{3}).\\
\noindent {\bf Extension of Walsh's stochastic integral}\\
Fix a time interval $[0,T]$. The Gaussian family $\{W(\varphi ),\varphi \in
\mathcal{C}_{0}^{\infty }([0,T]\times \mathbb{R}^{d})\}$ can be extended to the
completion $\mathcal{H}_T=L^{2}([0,T];\mathcal{H})$ of the space $\mathcal{C}_{0}^{\infty
}([0,T]\times \mathbb{R}^{d})$ under the scalar product%
\[
\left\langle \varphi ,\psi \right\rangle =\int_{0}^{T}\int_{\mathbb{R}^{d}}%
\mathcal{F}\varphi (t)(\xi )\overline{\mathcal{F}\psi (t)(\xi )}\mu (d\xi
)dt.
\]%
We will also denote by $W(g)$ the Gaussian random variable associated with
an element $g\in L^{2}([0,T];\mathcal{H})$.
Set $W_{t}(h)=W(1_{[0,t]}h)$ for any $t\geq 0$ and $h\in \mathcal{H}$. \
Then, $\{W_{t},t\in \lbrack 0,T]\}$ is a cylindrical Wiener process in the
Hilbert space $\mathcal{H}$. That is, for any $h\in \mathcal{H}$, $%
\{W_{t}(h),t\in \lbrack 0,T]\}$ is a Brownian motion with variance $\left\|
h\right\| _{\mathcal{H}}^{2}$, and%
\[
E(W_{t}(h)W_{s}(g))=\left( s\wedge t\right) \left\langle h,g\right\rangle _{%
\mathcal{H}}.
\]%
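The covariance identity above can be illustrated with a finite-dimensional Monte Carlo sketch (a hypothetical truncation to three basis directions; the coefficient vectors are arbitrary illustrative choices): writing $W_t(h)=\sum_{j}\langle h,e_j\rangle_{\mathcal{H}}\,\beta_j(t)$ with independent standard Brownian motions $\beta_j$, the empirical mean of $W_t(h)W_s(g)$ should approach $(s\wedge t)\langle h,g\rangle_{\mathcal{H}}$:

```python
import math
import random

random.seed(0)

# Truncation to J = 3 basis directions (illustrative choice):
# W_t(h) = sum_j <h, e_j> beta_j(t) with independent Brownian motions beta_j.
h = (1.0, 2.0, 0.0)   # coefficients <h, e_j>
g = (2.0, 1.0, 1.0)   # coefficients <g, e_j>
s, t = 0.5, 1.0

def sample_product():
    # Simulate (beta_j(s), beta_j(t)) through independent increments.
    beta_s = [math.sqrt(s) * random.gauss(0.0, 1.0) for _ in range(3)]
    beta_t = [bs + math.sqrt(t - s) * random.gauss(0.0, 1.0) for bs in beta_s]
    W_s_g = sum(gj * b for gj, b in zip(g, beta_s))
    W_t_h = sum(hj * b for hj, b in zip(h, beta_t))
    return W_t_h * W_s_g

n = 50_000
estimate = sum(sample_product() for _ in range(n)) / n
expected = min(s, t) * sum(hj * gj for hj, gj in zip(h, g))  # (s ^ t) <h, g>
print(f"Monte Carlo: {estimate:.4f}   exact: {expected:.4f}")
```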
Let $\mathcal{F}_t$ be the $\sigma$-field generated by the random variables
$\{W_s(h), h\in \mathcal{H}, 0\le s\le t\}$ and the $P$-null sets. We define the predictable
$\sigma$-field as the $\sigma$-field in $\Omega\times [0,T]$ generated by
the sets $\{ (s,t]\times A, 0\le s<t\le T, A\in \mathcal{F}_s\}$.
Then (see, for instance, \cite{dz}), we can define the
stochastic integral of $\mathcal{H}$-valued square integrable predictable
processes. For any predictable process $g\in L^{2}(\Omega \times \lbrack
0,T];\mathcal{H})$ we denote its integral with respect to the cylindrical
Wiener process $W$ by%
\begin{equation}
\int_{0}^{T}\int_{\mathbb{R}^{d}}gdW=g\cdot W, \label{e1}
\end{equation}
and we have the isometry property%
\[
E\left( \left| g\cdot W\right| ^{2}\right) =E\left( \int_{0}^{T}\left\|
g_{t}\right\| _{\mathcal{H}}^{2}dt\right) .
\]
\begin{remark}
Under the standing assumptions, using an approximation procedure by means of test functions,
one proves that the space $\mathcal{H}$ contains the indicator functions of bounded Borel sets (for details see \cite{DF} or
\cite{quer}, p. 13). Then, $M_{t}(A):=W(1_{[0,t]}1_{A})$ defines a martingale measure associated to the noise $W$ in the sense of Walsh (see \cite{walsh} and \cite{Da}), and the stochastic integral (\ref{e1})
coincides with the integral defined in the work of Dalang \cite{Da}.
\end{remark}
\noindent {\bf Example of integrands}\\
We aim now to provide useful examples of random distributions which belong to the space $L^2(\Omega\times [0,T]; \mathcal{H})$.
Before stating the result, we consider the following lemma:
\begin{lemma}\label{ggamma}
Assume that $\Gamma$ satisfies Hypothesis A. Let $g$ be a bounded Borel function on $[0,T]\times \mathbb{R}^{d}$. Then $%
g\Gamma \in \mathcal{H}_T$, and%
\[
\left\| g\Gamma \right\|_{\mathcal{H}_T}^2 \leq \left\| g\right\|^2
_{\infty }\int_{0}^{T}\int_{\mathbb{R}^{d}}\left| \mathcal{F}\Gamma (t)(\xi
)\right| ^{2}\mu (d\xi )dt.
\]
\end{lemma}
\noindent {\it Proof.}
We can decompose $g$ into the difference $g^{+}-g^{-}$ of two nonnegative
bounded Borel functions. Thus, without any loss of generality we can assume
that $g$ is nonnegative. Moreover, we observe that $g\Gamma$ also satisfies conditions (\ref{e2}) and (\ref{5}). Indeed, to prove the latter condition, we consider
an approximation of the identity $(\psi _{n})_n$ defined as follows: let $\psi\in \mathcal{C}_0^{\infty}(\mathbb{R}^d)$ be such that $\psi\geq 0$,
the support of $\psi$ is contained in the unit ball of $\mathbb{R}^d$ and $\int_{\mathbb{R}^d} \psi(x)dx=1$; for $n\geq 1$, set $\psi_n(x)=n^d\psi(nx)$.
Set $\Gamma
_{n}(t)=\psi _{n}\ast \Gamma (t)$ and $J_{n}(t)=\psi _{n}\ast (g\Gamma (t))$.
Then, for all $t\in \lbrack 0,T]$, $\Gamma _{n}(t)$ and $J_n(t)$ belong to $\mathcal{S}(\mathbb{R}^{d})\subset \mathcal{H}$, and $\left| \mathcal{F}%
\Gamma _{n}(t)\right| \leq \left| \mathcal{F}\Gamma (t)\right|$. Besides, since $\Gamma$ is non-negative, we have that
$J_n(t)(x)\leq \|g\|_\infty \Gamma_n(t)(x)$, for any $x\in \mathbb{R}^d$. Thus, by Fatou's lemma
\begin{align*}
& \int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} (g\Gamma(t))(\xi)|^2 \mu(d\xi) dt \\
&\quad \leq \liminf_{n\rightarrow \infty} \int_0^T \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} J_n(t,x) f(x-y) J_n(t,y) dx dy dt \\
&\quad \leq \|g\|_{\infty}^2 \liminf_{n\rightarrow \infty} \int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma_n(t)(\xi)|^2 \mu(d\xi) dt\\
& \quad \leq \|g\|_{\infty}^2 \int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(t)(\xi)|^2 \mu(d\xi) dt < \infty.
\end{align*}
The fact that $g\Gamma$ satisfies conditions (\ref{e2}) and (\ref{5}) allows us to reduce the proof to the case where $g=1$.
We consider the regularisation $(\Gamma_n)_n$ of $\Gamma$ defined above. Condition (\ref{5}) implies that $\Gamma _{n}(t)$ belongs to
$\mathcal{H}_T$ and it has a uniformly bounded norm, so it converges weakly to some
element $h\in \mathcal{H}_T$. We claim that $h=\Gamma $, and this is a consequence of the
fact that, owing to the definition of $\Gamma_n(t)$, for any $\varphi \in \mathcal{S}(\mathbb{R}^{d})$, and for any $%
0\leq s<t\leq T$ we have%
\[
\int_{s}^{t}\left\langle h,\varphi \right\rangle _{\mathcal{H}%
}dr=\lim_{n\rightarrow \infty }\int_{s}^{t}\left\langle \Gamma
_{n}(r),\varphi \right\rangle _{\mathcal{H}}dr=\int_{s}^{t}\left\langle
\Gamma (r),\varphi \right\rangle _{\mathcal{H}}dr.
\]
More precisely, it holds that
$$
\int_{s}^{t}\left\langle \Gamma
_{n}(r),\varphi \right\rangle _{\mathcal{H}}dr = \int_{s}^{t} \int_{\mathbb{R}^d}
\Gamma(r,dz) \left( \int_{\mathbb{R}^d} \psi_n(x-z) F(x) dx \right)dr,$$
where $F(x):=\int_{\mathbb{R}^d} f(x-y)\varphi(y) dy$. Observe that the hypotheses on $f$ and $\varphi$ imply that $F$ is continuous and
$\lim_{|x|\rightarrow \infty} F(x)=0$. Hence, we can apply the Bounded Convergence Theorem, so that we end up with
$$
\lim_{n\rightarrow \infty } \int_{s}^{t}\left\langle \Gamma
_{n}(r),\varphi \right\rangle _{\mathcal{H}}dr =
\int_{s}^{t} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \Gamma(r,dz) f(z-y) \varphi(y) dy dr,$$
which allows us to identify $h$ with $\Gamma$ in $\mathcal{H}_T$. \hfill
$\square$
\medskip
This lemma allows us to prove the following result, which gives examples of random distributions that can be integrated with respect to $W$.
\begin{proposition}\label{prop1}
Assume that $\Gamma$ satisfies Hypothesis A.
Let $Z=\{Z(t,x),(t,x)\in \lbrack 0,T]\times \mathbb{R}^{d}\}$ be a
predictable process such that%
\[
C_{Z}:=\sup_{(t,x)\in \lbrack 0,T]\times \mathbb{R}^{d}}E(|Z(t,x)|^{2})<%
\infty .
\]%
Then, the random element $G=G(t,dx)=Z(t,x)\Gamma (t,dx)$ is a predictable
process in the space $L^{2}(\Omega \times \lbrack 0,T];\mathcal{H})$.
\end{proposition}
\noindent {\it Proof.}
For any $N\geq 1$ define%
\[
Z_{N}(t,x)=Z(t,x){\bf 1}_{\{|Z(t,x)|\leq N\}}.
\]%
Clearly, $Z_{N}$ is a predictable process with bounded trajectories. Thus,
by the previous lemma, $G_{N}(t,x):=Z_{N}(t,x)\Gamma (t,dx)$ is a
predictable process in $L^{2}(\Omega \times \lbrack 0,T];\mathcal{H})$.
Let $\left( J_{N,n}(t) \right)_n \subset \mathcal{S}(\mathbb{R}^d)$ be the regularisation of $G_N(t)$ by means of the approximation of the identity $(\psi_n)_n$ defined in the proof of Lemma \ref{ggamma}. This allows us to prove that the norm of $G_N$ in $L^2(\Omega\times [0,T];\mathcal{H})$
is uniformly bounded because%
\begin{align*}
& E\left( \left\| G_{N}\right\| _{L^{2}([0,T];\mathcal{H})}^{2}\right)
= E\left( \lim_{n\rightarrow \infty} \|J_{N,n}\|_{L^{2}([0,T];\mathcal{H})}^{2}\right)\\
&\quad \leq \liminf_{n\rightarrow \infty} E\left( \int_0^T \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} J_{N,n}(t,x) f(x-y)J_{N,n}(t,y) dx dy dt\right)\\
&\quad \leq C_Z \liminf_{n\rightarrow \infty} E\left( \int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Gamma_n
(t,x) f(x-y) \Gamma_n(t,y) dx dy dt \right)\\
&\quad \leq C_{Z}\int_{0}^{T}\int_{\mathbb{R}^{d}}\left| \mathcal{F}\Gamma
(t)(\xi )\right| ^{2}\mu (d\xi )dt < \infty.
\end{align*}%
Recall that $\Gamma_n(t)=\psi_n * \Gamma(t)$.
Therefore, $G_{N}$ converges weakly to a predictable process $\widetilde{G}$
in $L^{2}(\Omega \times \lbrack 0,T];\mathcal{H})$. We claim that $%
\widetilde{G}=G$. In fact, for any $\varphi \in \mathcal{S}(\mathbb{R}^{d})$%
, for any $0\leq s<t\leq T$ and for any $B\in \mathcal{F}_{s}$, we can argue similarly as in the very last part of the proof of Lemma \ref{ggamma} and obtain
\begin{align*}
& E\left( {\bf 1}_{B}\int_{s}^{t}\left\langle \widetilde{G}(r),\varphi \right\rangle
_{\mathcal{H}}dr\right) \\
&\quad = \lim_{N\rightarrow \infty }E\left(
{\bf 1}_{B}\int_{s}^{t}\left\langle G_{N}(r),\varphi \right\rangle _{\mathcal{H}%
}dr\right) \\
&\quad =\lim_{N\rightarrow \infty }E\left( {\bf 1}_{B}\int_{s}^{t}\left\langle
Z(r,x){\bf 1}_{\{|Z(r,x)|\leq N\}}\Gamma (r),\varphi \right\rangle _{\mathcal{H}%
}dr\right) \\
&\quad =\lim_{N\rightarrow \infty }E\left( {\bf 1}_B \int_{s}^{t}\int_{\mathbb{R}^{d}}\int_{%
\mathbb{R}^{d}}\Gamma (r,dx)Z(r,x){\bf 1}_{\{|Z(r,x)|\leq N\}}f(x-y)\varphi
(y)dxdydr\right) \\
&\quad =E\left( {\bf 1}_B \int_{s}^{t}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Gamma
(r,dx)Z(r,x)f(x-y)\varphi (y)dxdydr\right),
\end{align*}
so we can identify $\widetilde{G}$ with $G$. \hfill$\square$
\begin{remark}
As a consequence of Proposition \ref{prop1}, we are able to define the stochastic integral of $G=Z \Gamma$ with respect to $W$:
$$G\cdot W =\int_0^T \int_{\mathbb{R}^{d}} G(s,y) W(ds,dy)=\int_0^T \int_{\mathbb{R}^{d}} \Gamma(s,y) Z(s,y) W(ds,dy).$$
Moreover, using the same ideas as in \cite{QS} (see also \cite{quer}, Theorem 1.2.5), one can obtain bounds for the $L^p(\Omega)-$norm of
$G\cdot W$. More precisely, suppose that
$$\sup_{(t,x)\in [0,T]\times \mathbb{R}^{d}} E(|Z(t,x)|^p)< \infty,$$
for some $p\geq 2$. Then
\begin{align}
& E(|G\cdot W|^p) = E\left( \left| \int_0^T \int_{\mathbb{R}^{d}} G(s,y) W(ds,dy)\right|^p\right) \nonumber \\
& \quad
\leq C_p (\nu_T)^{\frac{p}{2}-1} \int_0^T \left( \sup_{x\in \mathbb{R}^{d}} E(|Z(s,x)|^p) \right) \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma (s)(\xi)|^2 \mu(d\xi) ds,
\label{pbound}
\end{align}
where
$$\nu_T =\int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma (t)(\xi)|^2 \mu(d\xi) dt.$$
\label{Lpbound}
\end{remark}
\begin{remark}
Since the noise's correlation is of the form $(x,y)\mapsto f(x-y)$, it is natural to be interested in spatially homogeneous situations. Indeed, suppose that we add the following hypothesis on the process $Z$: for all $s\in [0,T]$ and $x,y\in \mathbb{R}^{d}$ we have
$$E(Z(s,x)Z(s,y))=E(Z(s,0)Z(s,y-x)).$$
Then, owing to \cite{Da}, p. 10, we may construct a non-negative tempered measure $\mu_s^Z$ on $\mathbb{R}^{d}$ such that
$$\|G\|^2_{L^2(\Omega; \mathcal{H}_T)}= E(|G\cdot W|^2)=\int_0^T \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(s)(\xi)|^2 \mu_s^Z(d\xi) ds.$$
\label{hom}
\end{remark}
As we will see in the next section, the main examples of deterministic measures $\Gamma$ will correspond to fundamental solutions of second order differential operators. First, we sketch the construction of the stochastic integral in a Hilbert-valued setting, that is, when the process $Z$ takes values in some Hilbert space, usually different from $\mathcal{H}$. \\
\noindent {\bf Hilbert-valued stochastic integrals}\\
Let $\mathcal{A}$ be a separable real Hilbert space with inner-product and norm denoted
by
$\langle\cdot,\cdot\rangle_{\mathcal{A}}$ and $\Vert \cdot\Vert_{\mathcal{A}}$,
respectively.
Let $K=\{K(t,x), (t,x)\in[0,T]\times \mathbb{R}^{d}\}$
be an $\mathcal{A}-$valued predictable process satisfying the following condition:
\begin{equation}
\sup_{(t,x)\in[0,T]\times \mathbb{R}^{d}}
E\left(||K(t,x)||_{\mathcal{A}}^{2}\right)<\infty.
\label{10}
\end{equation}
Our purpose is to define the stochastic integral of elements of the form $\Gamma K=\Gamma(t,dx) K(t,x) \in L^2(\Omega \times [0,T]; \mathcal{H} \otimes \mathcal{A})$.
Let $(e_{j},j\geq 0)$ be a complete orthonormal system of $\mathcal{A}$. Set
$K^j(t,x) = \langle K(t,x),e_{j}\rangle_{\mathcal{A}}$, $(t,x)\in [0,T]\times
\mathbb{R}^{d}$.
According to Proposition \ref{prop1}, for any $j\geq 0$ the element $G^j= G^j(t,x)= \Gamma(t,dx) K^j (t,x)$ belongs to
$L^2(\Omega \times [0,T]; \mathcal{H})$ and, therefore, we may integrate it with respect to the noise $W$:
$$G^j \cdot W=\int_0^T \int_{\mathbb{R}^{d}} \Gamma(s,y) K^j(s,y) W(ds,dy).$$
We define, for $G=\Gamma K$,
$$ G\cdot W := \sum_{j\geq 0} G^j \cdot W.$$
Owing to (\ref{10}) and Proposition \ref{prop1}, it can be proved that the above series is convergent and therefore $G\cdot W$ defines an element of $L^{2}(\Omega ;\mathcal{A})$ (see also
\cite{QS}, Remark 1). Moreover, using the same arguments as for the proof of (\ref{pbound}), we have the following bound for the moments of $G\cdot W$ in $\mathcal{A}$:
\begin{align}
& E\big(|| G\cdot W||_{\mathcal{A}}^{p}\big) \nonumber \\
& \quad \le C_p (\nu_T)^{\frac{p}{2}-1} \int_0^T \sup_{x\in \mathbb{R}^{d}} E(||K(s,x)||^p_ {\mathcal{A}})
\int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(s)(\xi)|^{2} \mu(d\xi) ds,
\label{11}
\end{align}
for all $p\geq 2$.
\section{Existence and uniqueness of solutions}
\label{ex}
Recall that a solution to Equation (\ref{1}) is a real-valued adapted stochastic process $u = \{u(t,x), (t,x)\in [0,T]\times \mathbb{R}^{d}\}$
satisfying
\begin{align}
u(t,x)=& \int_0^t \int_{\mathbb{R}^{d}} \Gamma(t-s,x-y) \sigma(u(s,y)) W(ds,dy) \nonumber \\
&+ \int_0^t \int_{\mathbb{R}^{d}} b(u(t-s,x-y)) \Gamma(s,dy) ds.
\label{eq}
\end{align}
We assume that $Z=Z(s,y)=\sigma(u(s,y))$ satisfies the hypotheses of Proposition \ref{prop1}, so that the stochastic integral on the right-hand side is well defined.
We suppose that $\sigma$ and $b$ are real-valued Lipschitz functions. Under these conditions, we may state a theorem on existence and uniqueness of a solution:
\begin{theorem}
Suppose that the fundamental solution $\Gamma$ of $Lu=0$ satisfies Hypothesis A.
Then, Equation (\ref{eq}) has a unique solution $\{u(t,x), (t,x)\in [0,T]\times \mathbb{R}^{d}\}$ which is continuous in $L^2$ and satisfies
$$\sup_{(t,x)\in [0,T]\times \mathbb{R}^{d}} E(|u(t,x)|^p) <\infty,$$
for all $T>0$ and $p\geq 1$.
\label{sol}
\end{theorem}
For the proof we refer to \cite{Da}, Theorem 13, where the Walsh-Dalang equivalent setting is used.
Let us now give some examples of differential operators whose associated fundamental solutions fulfil the hypotheses of
Theorem \ref{sol}.
\begin{example}
{\it The wave equation}. Let $\Gamma_d$ be the fundamental solution of the wave equation in $\mathbb{R}^{d}$, that is, $\Gamma_d$ is the solution of
$$\frac{\partial^2 \Gamma_d}{\partial t^2} - \Delta \Gamma_d =0,$$
with vanishing initial conditions. It is known that for $d=1,2,3$, $\Gamma_d$ is given, respectively, by
\begin{align*}
\Gamma_1(t,x)&=\frac{1}{2} {\bf 1}_{\{|x|<t\}},\\
\Gamma_2 (t,x)& = C (t^2-|x|^2)_+^{-1/2},\\
\Gamma_3(t) &= \frac{1}{4\pi t}\sigma_t,
\end{align*}
where $\sigma_t$ denotes the surface measure on the three-dimensional sphere of radius $t$. In particular, for each $t$, $\Gamma_d(t)$ has compact support. It is important to remark that only in these cases does $\Gamma_d$ define a non-negative measure. Furthermore, for all dimensions $d\geq 1$, we have a unified expression for the Fourier transform of $\Gamma_d(t)$:
$$\mathcal{F} \Gamma_d (t)(\xi)= \frac{\sin (2\pi t |\xi|)}{2\pi |\xi|}.$$
Elementary estimates show that there are positive constants $c_1$ and $c_2$ depending on $T$ such that
$$\frac{c_1}{1+|\xi|^2}\leq \int_0^T \frac{ \sin^2 (2\pi t|\xi|)}{4\pi^2 |\xi|^2} dt \leq \frac{c_2}{1+|\xi|^2}.$$
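These bounds are easy to confirm numerically. The following sketch (a side computation, not part of the argument; the discretisation parameters are arbitrary) approximates the time integral by the trapezoidal rule for $T=1$ and several values of $\lambda=|\xi|$, and checks that its ratio to $(1+\lambda^2)^{-1}$ stays within fixed positive constants:

```python
import math

def wave_time_integral(lam, T=1.0, n=100_000):
    """Trapezoidal approximation of
    int_0^T sin^2(2*pi*t*lam) / (4*pi^2*lam^2) dt."""
    h = T / n
    def f(t):
        s = math.sin(2.0 * math.pi * t * lam)
        return s * s / (4.0 * math.pi ** 2 * lam ** 2)
    total = 0.5 * (f(0.0) + f(T)) + sum(f(i * h) for i in range(1, n))
    return h * total

# Ratio of the integral to 1/(1 + lam^2): it should stay between
# fixed positive constants c1 and c2, uniformly in lam.
ratios = {lam: wave_time_integral(lam) * (1.0 + lam ** 2)
          for lam in (0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 50.0)}
for lam, r in sorted(ratios.items()):
    print(f"|xi| = {lam:6.2f}   ratio = {r:.5f}")
```

The ratio tends to $1/3$ as $\lambda\to 0$ and to $1/(8\pi^2)$ as $\lambda\to\infty$, consistent with the displayed two-sided bound.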
Therefore, $\Gamma_d$ satisfies condition (\ref{5}) if and only if
\begin{equation}
\int_{\mathbb{R}^{d}} \frac{\mu(d\xi)}{1+|\xi|^2}<\infty.
\label{12}
\end{equation}
\label{wave}
\end{example}
\begin{example}
{\it The heat equation}. Let $\Gamma$ be the fundamental solution of the heat equation in $\mathbb{R}^{d}$, that is, of
$$\frac{\partial \Gamma}{\partial t} -\frac{1}{2}\Delta \Gamma =0.$$
Then, $\Gamma$ is given by the Gaussian density:
$$\Gamma(t,x)=(2\pi t)^{-d/2} \exp\left(-\frac{|x|^2}{2 t}\right)$$
and
$$\mathcal{F} \Gamma (t)(\xi)=\exp(-2\pi^2 t|\xi|^2).$$
Because
$$\int_0^T |\mathcal{F} \Gamma(t)(\xi)|^2 dt = \int_0^T \exp(-4\pi^2 t|\xi|^2) dt =\frac{1}{4\pi^2 |\xi|^2} (1-\exp(-4\pi^2 T|\xi|^2)),$$
we conclude that condition (\ref{5}) holds if and only if (\ref{12}) holds.
\label{heat}
\end{example}
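As a sanity check (a side computation with arbitrary discretisation parameters), one can verify numerically, in dimension $d=1$, that the Fourier transform of the Gaussian density $(2\pi t)^{-1/2}\exp(-x^2/(2t))$ under the convention $\mathcal{F}\varphi(\xi)=\int_{\mathbb{R}}\varphi(x)e^{-2\pi i x\xi}\,dx$ equals $\exp(-2\pi^2 t\xi^2)$:

```python
import math

def heat_kernel_ft(xi, t, L=12.0, n=100_000):
    """Trapezoidal approximation (d = 1) of the real part of
    int_R (2*pi*t)^(-1/2) exp(-x^2/(2t)) exp(-2*pi*i*x*xi) dx;
    the imaginary part vanishes by symmetry."""
    h = 2.0 * L / n
    def f(x):
        return ((2.0 * math.pi * t) ** -0.5
                * math.exp(-x * x / (2.0 * t))
                * math.cos(2.0 * math.pi * x * xi))
    total = 0.5 * (f(-L) + f(L)) + sum(f(-L + i * h) for i in range(1, n))
    return h * total

t = 0.5
results = {xi: heat_kernel_ft(xi, t) for xi in (0.0, 0.25, 0.5)}
for xi, val in sorted(results.items()):
    exact = math.exp(-2.0 * math.pi ** 2 * t * xi ** 2)
    print(f"xi = {xi:4.2f}   numeric = {val:.8f}   exact = {exact:.8f}")
```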
Let us express condition (\ref{12}) in terms of the covariance function $f$. As pointed out in \cite{Da}, condition (\ref{12}) always holds when $d=1$; for $d=2$, (\ref{12}) holds if and only if
$$\int_{|x|\leq 1} f(x) \log\frac{1}{|x|} dx <\infty,$$
and for $d\geq 3$, (\ref{12}) holds if and only if
$$\int_{|x|\leq 1} f(x) \frac{1}{|x|^{d-2}} dx <\infty.$$
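For example, these criteria become explicit for Riesz kernels (a standard family of correlations, used here only as an illustration of condition (\ref{12})):

```latex
% Riesz kernels: for 0<\beta<d, the correlation f(x)=|x|^{-\beta} is the
% Fourier transform of \mu(d\xi)=c_{d,\beta}\,|\xi|^{\beta-d}\,d\xi for a
% suitable constant c_{d,\beta}>0. In polar coordinates,
\int_{\mathbb{R}^{d}} \frac{\mu(d\xi)}{1+|\xi|^{2}}
  = c_{d,\beta}\,\omega_{d-1}\int_{0}^{\infty}\frac{r^{\beta-1}}{1+r^{2}}\,dr ,
% which is finite if and only if \beta<2, since the integrand behaves as
% r^{\beta-1} near the origin and as r^{\beta-3} at infinity. For d\geq 3,
% the criterion in terms of f gives the same restriction:
\int_{|x|\leq 1} |x|^{-\beta}\,\frac{dx}{|x|^{d-2}}
  = \omega_{d-1}\int_{0}^{1} r^{1-\beta}\,dr<\infty
  \quad\Longleftrightarrow\quad \beta<2 ,
% where \omega_{d-1} denotes the surface measure of the unit sphere.
```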
\section{Existence of density}
\label{existencia}
In this section we aim to prove that the solution to Equation (\ref{3}), at any point $(t,x)\in (0,T]\times \mathbb{R}^{d}$, is a random variable whose law admits a density with respect to Lebesgue measure on $\mathbb{R}$. For this, we will make use of the techniques provided by the Malliavin calculus and, more precisely, we will apply Bouleau-Hirsch's criterion (see \cite{bh} or Theorem 2.1.2 in \cite{nualart}).\\
First of all, we describe the Gaussian context in which we will use the tools of the Malliavin calculus. Namely, we consider the Hilbert space $\mathcal{H}_T=L^2([0,T];\mathcal{H})$ and the Gaussian family of random variables
$(W(h), h\in \mathcal{H}_T)$ defined at the very beginning of Section \ref{integral}.
Then
$(W(h), h\in \mathcal{H}_T)$ is a centered Gaussian process such that $E(W(h_1)W(h_2))=\langle h_1,h_2\rangle_{\mathcal{H}_T}$,
$h_1,h_2\in \mathcal{H}_T$,
and we can use the differential Malliavin calculus based on it (see, for instance, \cite{nualart}). The Malliavin derivative is denoted by $D$ and, for any $N\geq 1$, the domain of the iterated derivative $D^N$ in $L^p(\Omega; \mathcal{H}_T^{\otimes N})$ is denoted by
$\mathbb{D}^{N,p}$, for any $p\geq 2$. We shall also use the notation
$$\mathbb{D}^\infty =\cap_{p\geq 1}\cap_{k\geq 1} \mathbb{D}^{k,p}.$$
The first step in order to apply Bouleau-Hirsch's criterion is to study the Malliavin differentiability of $u(t,x)$, for all fixed $(t,x)\in (0,T]\times \mathbb{R}^{d}$. Recall that, for any random variable $X$ in the domain of the derivative operator $D$, $DX$ defines an $\mathcal{H}_T-$valued random variable. In particular, for some fixed $r\in [0,T]$, $DX(r)$ is an element of $\mathcal{H}$, which will be denoted by $D_rX$. In the sequel we will use the notation $\cdot$ and $*$ to denote, respectively, the time and $\mathcal{H}$ variables.
\begin{proposition}
Assume that $\Gamma$ satisfies Hypothesis A. Suppose also that the coefficients $b$ and $\sigma$ are $\mathcal{C}^1$ functions with bounded Lipschitz continuous derivatives. Then, for any $(t,x)\in [0,T]\times \mathbb{R}^{d}$, $u(t,x)$ belongs to
$\mathbb{D}^{1,p}$, for any $p\in [1,\infty)$.
Moreover, the Malliavin derivative $Du(t,x)$ defines an $\mathcal{H}_T-$valued process that satisfies the following linear stochastic differential equation:
\begin{align}
D_r u(t,x) = & \sigma(u(r,*))\Gamma(t-r,x-*) \nonumber\\
& + \int_r^t \int_{\mathbb{R}^{d}} \Gamma(t-s,x-y) \sigma'(u(s,y)) D_ru(s,y)W(ds,dy)\nonumber\\
& + \int_r^t \int_{\mathbb{R}^{d}} b'(u(s,x-y)) D_r u(s,x-y)\Gamma(t-s,dy) ds,
\label{eqmal}
\end{align}
for all $r\in [0,T]$.
\label{difmal}
\end{proposition}
The stochastic integral on the right hand-side of Equation (\ref{eqmal}) must be understood by means of the Hilbert-valued integration setting described at the very final part of Section \ref{integral}.
Concerning the Hilbert-valued pathwise integral, it is defined as follows: let $\mathcal{A}$ be a Hilbert space, $(e_j)_{j\geq 1}$ a complete orthonormal system of $\mathcal{A}$ and $\{Y(s,y), (s,y)\in [0,T]\times \mathbb{R}^{d}\}$ an $\mathcal{A}-$valued stochastic process such that
$$\sup_{(s,y)\in [0,T]\times \mathbb{R}^{d}} E(\|Y(s,y)\|^2_{\mathcal{A}})<+\infty.$$
Then, the $\mathcal{A}-$valued integral
$$\mathcal{I}_t=\int_0^t \int_{\mathbb{R}^{d}} Y(s,y)\Gamma(s,dy) ds$$
is determined by the components $\left( \int_0^t \int_{\mathbb{R}^{d}} \langle Y(s,y),e_j\rangle_{\mathcal{A}}\; \Gamma(s,dy) ds, j\geq 1\right)$, which are real-valued integrals. Moreover, one can obtain an upper bound for the moments of the above integral (see \cite{quer}, p. 24):
\begin{equation}
E(\|\mathcal{I}_t\|_{\mathcal{A}}^p)\leq C \int_0^t \sup_{z\in \mathbb{R}^{d}} E(\|Y(s,z)\|_\mathcal{A}^p) \int_{\mathbb{R}^{d}} \Gamma(s,dy) ds,\; p\geq 2.
\label{deter}
\end{equation}
Finally, notice that, owing to Proposition \ref{prop1}, the first term on the right-hand side of (\ref{eqmal}) is well defined.
\medskip
\noindent {\it Proof of Proposition \ref{difmal}}. The statement is almost an immediate consequence of Theorem 2 in \cite{QS}. Indeed, the authors of the latter reference prove that, under the standing hypotheses and for any fixed $(t,x)\in [0,T]\times \mathbb{R}^{d}$, the random variable $u(t,x)$
belongs to $\mathbb{D}^{1,p}$, for any $p\in [1,\infty)$. In addition, they show that there exists an $\mathcal{H}_T-$valued stochastic process
$\{\Theta(t,x), (t,x)\in [0,T]\times \mathbb{R}^{d}\}$ satisfying
$$\sup_{(t,x)\in [0,T]\times \mathbb{R}^{d}} E(\|\Theta(t,x)\|_{\mathcal{H}_T}^p)<\infty$$
and such that, in $\mathcal{H}_T$,
\begin{align*}
D u(t,x) = &
\Theta(t,x)+ \int_0^t \int_{\mathbb{R}^{d}} \Gamma(t-s,x-y) \sigma'(u(s,y)) Du(s,y)W(ds,dy)\\
& + \int_0^t \int_{\mathbb{R}^{d}} b'(u(s,x-y)) D u(s,x-y)\Gamma(t-s,dy) ds.
\end{align*}
Moreover, it holds that
$$E(\|\Theta(t,x)\|_{\mathcal{H}_T}^2)=E(\| \Gamma(t-\cdot,x-*) \sigma(u(\cdot,*))\|_{\mathcal{H}_T}^2).$$
Hence, in order to conclude the proof, we only need to show that the Hilbert-valued random variables $\Theta(t,x)$ and
$\Gamma(t-\cdot,x-*) \sigma(u(\cdot,*))$ coincide as elements of $\mathcal{H}_T$.
Let $(\Gamma_n)_{n\geq 1}$ be the family of smooth functions defined in the proof of Lemma \ref{ggamma}. Then,
in the proof of Theorem 2 in \cite{QS} the process $\Theta$ is defined by the following limit in $\mathcal{H}_T$:
$$\Theta(t,x)=\mathcal{H}_T-\lim_{n\rightarrow \infty} \Gamma_n(t-\cdot,x-*) \sigma(u_n(\cdot,*)),$$
where $\{u_n(t,x), (t,x)\in [0,T]\times \mathbb{R}^{d}\}$ is the unique mild solution to an equation of the form (\ref{3}) but replacing $\Gamma$ by $\Gamma_n$.
As a consequence of the proof of Proposition 3 from \cite{QS}, it is readily checked that
$$
\lim_{n\rightarrow \infty} E(\| \Gamma_n(t-\cdot,x-*)[\sigma(u_n(\cdot,*))-\sigma(u(\cdot,*))]\|_{\mathcal{H}_T}^2)=0
$$
and
$$\lim_{n\rightarrow \infty} E(\| [\Gamma_n(t-\cdot,x-*)-\Gamma(t-\cdot,x-*)]\sigma(u(\cdot,*))\|_{\mathcal{H}_T}^2)=0.$$
Thus, we get that $\Theta(t,x)=\Gamma(t-\cdot, x-*)\sigma(u(\cdot,*))$. \hfill $\square$
\vspace{0.3cm}
The main result of the section is the following:
\begin{theorem}
Assume that $\Gamma$ satisfies Hypothesis A. Suppose also that the coefficients $b$ and $\sigma$ are $\mathcal{C}^1$ functions with bounded Lipschitz continuous derivatives and that $|\sigma(z)|\geq c>0$, for all $z\in \mathbb{R}$ and some positive constant $c$. Then, for all $t>0$ and $x\in \mathbb{R}^{d}$, the random variable $u(t,x)$ has an absolutely continuous law with respect to Lebesgue measure on $\mathbb{R}$.
\label{densitat}
\end{theorem}
\noindent
{\it Proof}. Owing to Bouleau-Hirsch's criterion and Proposition \ref{difmal}, it suffices to show that $\|Du(t,x)\|_{\mathcal{H}_T}>0$ almost surely.
To begin with, from Equation (\ref{eqmal}) we obtain
\begin{equation}
\int_0^t \|D_s u(t,x)\|_\mathcal{H}^2 ds\geq \frac{1}{2}\int_{t-\delta}^t \|\Gamma(t-s,x-*)\sigma(u(s,*))\|_\mathcal{H}^2 ds - I(t,x;\delta),
\label{15}
\end{equation}
for any $\delta>0$ sufficiently small, where
\begin{align}
I(t,x;\delta)= & \int_{t-\delta}^t \left\| \int_s^t \int_{\mathbb{R}^{d}} \Gamma(t-r,x-z) \sigma'(u(r,z)) D_s u(r,z) W(dr,dz) \right.\nonumber\\
&\quad + \left. \int_s^t \int_{\mathbb{R}^{d}}\Gamma(t-r,dz) b'(u(r,x-z)) D_s u(r,x-z) dr\right\|_\mathcal{H}^2 ds.
\label{15.5}
\end{align}
The above term $I(t,x;\delta)$ may be bounded by $2(I_1(t,x;\delta)+I_2(t,x;\delta))$, with
\begin{align}
I_1(t,x;\delta)= & \int_0^\delta \left\| \int_{t-s}^t \int_{\mathbb{R}^{d}} \Gamma(t-r,x-z) \sigma'(u(r,z)) D_{t-s} u(r,z) W(dr,dz) \right\|_\mathcal{H}^2 ds, \label{15.6}\\
I_2(t,x;\delta)= & \int_0^\delta \left\| \int_{t-s}^t \int_{\mathbb{R}^{d}}\Gamma(t-r,dz) b'(u(r,x-z)) D_{t-s} u(r,x-z) dr\right\|_\mathcal{H}^2 ds.
\label{15.7}
\end{align}
In order to bound from below the left-hand side of (\ref{15}), let us first
obtain a lower bound for the first term on its right-hand side. For this, we will make use of the families of smooth functions $(\Gamma_n)_n$ and
$(J_n^{t,x})_n$, considered in the proof of Lemma \ref{ggamma}, that
regularise the measures $\Gamma$ and $\Gamma(\cdot,x-*)\sigma(u(t-\cdot,*))$, respectively. Then, by the proof of Lemma \ref{ggamma}, the very definition of the norm in $\mathcal{H}_\delta$ and the non-degeneracy assumption on $\sigma$, we have
\begin{align}
& \int_{t-\delta}^t \|\Gamma(t-s,x-*)\sigma(u(s,*))\|_\mathcal{H}^2 ds = \|\Gamma(\cdot,x-*)\sigma(u(t-\cdot,*))\|_{\mathcal{H}_\delta}^2 \nonumber\\
&\quad = \lim_{n\rightarrow \infty} \| J_n^{t,x} \|_{\mathcal{H}_\delta}^2\nonumber\\
& \quad = \lim_{n\rightarrow \infty} \int_0^\delta \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} J_n^{t,x}(s,y) f(y-z) J_n^{t,x}(s,z)
dydz ds\nonumber \\
&\quad \geq c^2 \lim_{n\rightarrow \infty} \int_0^\delta \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \Gamma_n(s,x-y) f(y-z)\Gamma_n(s,x-z) dydzds\nonumber \\
& \quad = c^2 \lim_{n\rightarrow \infty} \|\Gamma_n(\cdot,x-*) \|_{\mathcal{H}_\delta}^2 = c^2 \|\Gamma(\cdot,x-*)\|_{\mathcal{H}_\delta}^2=c^2 g(\delta),
\label{16}
\end{align}
where
\begin{equation} \label{gedelta}
g(\delta):=\int_0^\delta \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(s)(\xi)|^2 \mu(d\xi) ds.
\end{equation}
Now we find out upper bounds for the expectation of the terms $I_1(t,x;\delta)$ and $I_2(t,x;\delta)$. First, in order to deal with
the former term, one can use the bound (\ref{11}). Thus, taking into account that $\sigma'$ is bounded, we get the following estimate:
\begin{align}
& E(I_1(t,x;\delta))\nonumber \\
&\quad \leq C \sup_{(\tau,y)\in (0,\delta)\times \mathbb{R}^{d}} E\left( \|D_{t-\cdot} u(t-\tau,y)\|_{\mathcal{H}_\delta}^2\right) \int_0^\delta \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(s)(\xi)|^2 \mu(d\xi) ds\nonumber \\
& \quad= C \sup_{(\tau,y)\in (0,\delta)\times \mathbb{R}^{d}} E\left( \|D_{t-\cdot} u(t-\tau,y)\|_{\mathcal{H}_\delta}^2\right) g(\delta).
\label{17}
\end{align}
On the other hand, by (\ref{deter}) the term $E(I_2(t,x;\delta))$ corresponding to the Hilbert-valued pathwise integral can be bounded by
\begin{equation}
E(I_2(t,x;\delta))\leq C \sup_{(\tau,y)\in (0,\delta)\times \mathbb{R}^{d}} E\left( \|D_{t-\cdot} u(t-\tau,y)\|_{\mathcal{H}_\delta}^2\right) h(\delta),
\label{17.2}
\end{equation}
where $h(\delta)=\int_0^\delta \int_{\mathbb{R}^{d}} \Gamma(s,dy) ds$.\\
At this point, we will make use of the following fact (see Lemma 5 in \cite{qs2} and \cite{quer}, p. 53):
\begin{equation}
\sup_{(\tau,y)\in (0,\delta)\times \mathbb{R}^{d}} E\left( \|D_{t-\cdot} u(t-\tau,y)\|_{\mathcal{H}_\delta}^{2q}\right)
\leq C (g(\delta))^q,
\label{17.5}
\end{equation}
for any $q\geq 1$. Hence, by (\ref{17}) and (\ref{17.2}) the terms $E(I_1(t,x;\delta))$ and $E(I_2(t,x;\delta))$ may be bounded, up to constants, respectively, by $(g(\delta))^2$ and $g(\delta) h(\delta)$, which implies that
\begin{equation}
E(I(t,x;\delta))\leq C g(\delta)(g(\delta)+h(\delta)).
\label{18}
\end{equation}
For any fixed small $\delta>0$, let $n$ be a sufficiently large positive integer such that $\frac{1}{n}< \frac{c^2}{2}g(\delta)$. Then, owing to
(\ref{16}) and (\ref{18}) and applying Chebyshev's inequality, we obtain
\begin{align}
P\left( \int_0^t \|D_s u(t,x)\|_\mathcal{H}^2 ds <\frac{1}{n}\right) & \leq P\left( I(t,x;\delta)\geq \frac{c^2}{2} g(\delta)-\frac{1}{n}\right) \nonumber\\
& \leq \left(\frac{c^2}{2}g(\delta)-\frac{1}{n}\right)^{-1} E(I(t,x;\delta))\nonumber \\
& \leq C\left(\frac{c^2}{2}g(\delta)-\frac{1}{n}\right)^{-1} g(\delta)(g(\delta)+h(\delta)).
\label{19}
\end{align}
Therefore
$$\lim_{n\rightarrow \infty} P\left( \int_0^t \|D_s u(t,x)\|_\mathcal{H}^2 ds <\frac{1}{n}\right) \leq C (g(\delta)+h(\delta)),$$
and the latter term converges to zero as $\delta$ tends to zero. Hence,
$$P\left( \int_0^t \|D_s u(t,x)\|_\mathcal{H}^2 ds =0\right)=0,$$
which concludes the proof. \hfill$\square$
\begin{remark}
In the particular case of the three-dimensional stochastic wave equation,
the above Theorem \ref{densitat} generalises Theorem 3 in the reference \cite{QS}.
\end{remark}
\section{Smoothness of the density}
\label{regularitat}
This section is devoted to proving that, for any fixed $(t,x)\in (0,T]\times \mathbb{R}^{d}$, the law of the random variable $u(t,x)$ has an infinitely differentiable density with respect to Lebesgue measure on $\mathbb{R}$. This will be achieved by showing that $u(t,x)$ belongs to the space $\mathbb{D}^\infty$ and that the inverse of the Malliavin matrix of $u(t,x)$ has moments of all orders (see, for instance, Theorem 2.1.4 in \cite{nualart}).
Recall that for any Malliavin {\it differentiable} random variable $X$ and any $N\geq 1$, the iterated Malliavin derivative $D^N X$ defines an element of the Hilbert space $L^2(\Omega; \mathcal{H}_T^{\otimes N})$. As in the case $N=1$, for any $r=(r_1,\dots,r_N)\in [0,T]^N$, the element $D^N X(r)$ of $\mathcal{H}^{\otimes N}$ will be denoted by $D^N_rX$. We will also use the notation
$$D^{N}_{((r_{1},\varphi_{1}),\dots,(r_{N},\varphi_{N}))}X=
\langle D^{N}_{(r_{1},\dots,r_{N})} X, \varphi_{1}\otimes \dots \otimes
\varphi_{N}
\rangle_{\mathcal{H}^{\otimes N}},$$
for $r_{i}\in [0,T]$, $\varphi_{i}\in\mathcal{H}$, $i=1,\dots,N$.
In particular, we have that
$$
\|D^{N} X\|^{2}_{\mathcal{H}_T^{\otimes N}}=\int_{[0,T]^{N}}dr_{1}\dots dr_{N}
\sum_{j_{1},\dots,j_{N}}
|D^{N}_{((r_{1},e_{j_{1}}),\dots,(r_{N},e_{j_{N}}))} X|^{2},
$$
where $(e_{j})_{j\geq 0}$ is a complete orthonormal system of $\mathcal{H}$. Let
$$\Delta_{\alpha}^N (g,X):= D^{N}_{\alpha} g(X) - g'(X) D_{\alpha}^{N} X,$$
where $\alpha=((r_1,\varphi_1),\dots,(r_N,\varphi_N))$, $r_i\in [0,T]$ and $\varphi_i\in \mathcal{H}$.
Notice that $\Delta_{\alpha}^N (g,X)=0$ if $N=1$ and it only depends on the
Malliavin
derivatives up
to the order $N-1$ if $N>1$.\\
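For instance, for $N=2$ the Leibniz rule for the Malliavin derivative gives $D^2 g(X)=g'(X)\,D^2X+g''(X)\,DX\otimes DX$, so that
$$\Delta^{2}_{((r_{1},\varphi_{1}),(r_{2},\varphi_{2}))}(g,X)=g''(X)\, D_{(r_{1},\varphi_{1})}X \; D_{(r_{2},\varphi_{2})}X,$$
which indeed only involves the first-order derivative of $X$.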
We now state the main result concerning the Malliavin regularity of the solution $u(t,x)$.
\begin{proposition}
Assume that $\Gamma$ satisfies Hypothesis A. Suppose also that the coefficients $\sigma$ and $b$ are $\mathcal{C}^\infty$ functions with bounded derivatives of any order greater than or equal to one. Then, for every $(t,x)\in [0,T]\times \mathbb{R}^{d}$, the random variable $u(t,x)$ belongs to the space $\mathbb{D}^\infty$.
The iterated Malliavin derivative $D^Nu(t,x)$ satisfies the following equation in $L^p(\Omega; \mathcal{H}_T^{\otimes N})$, for any $p\geq 1$ and
$N\geq 1$:
\begin{align}
& D^{N}u(t,x) = Z^{N}(t,x) \nonumber \\
& + \int_{0}^{t}\int_{\mathbb{R}^{d}} \Gamma(t-s,x-z) [\Delta^N(\sigma,u(s,z))
+ D^{N}u(s,z)\sigma'(u(s,z))] W(ds,dz) \nonumber \\
& + \int_{0}^{t}ds \int_{\mathbb{R}^{d}} \Gamma(s,dz) [\Delta^N(b,u(t-s,x-z))\nonumber \\
&\quad \quad
+ D^{N}u(t-s,x-z)b'(u(t-s,x-z))],
\label{21}
\end{align}
where $Z^N(t,x)$ is the element of $L^p(\Omega; \mathcal{H}_T^{\otimes N})$ defined by
\begin{equation}
\langle Z^N_r(t,x), e_{j_1}\otimes\dots \otimes e_{j_N}\rangle_{\mathcal{H}^{\otimes N}} =
\sum_{i=1}^{N} \langle \Gamma(t-r_i,x-*)
D^{N-1}_{\hat{\alpha}_i}
\sigma(u(r_i,*)),e_{j_i}\rangle_{\mathcal{H}},
\label{ci}
\end{equation}
for any $r=(r_1,\dots,r_N)\in [0,T]^N$ and $j_1,\dots,j_N\geq 0$.
Moreover, it holds that
$$\sup_{(s,y)\in [0,T]\times \mathbb{R}^{d}} E(\|D^{N}u(s,y)\|^{p}_{\mathcal{H}_T^{\otimes N}}
)<+\infty,$$
for all $p\ge 1$.
\label{malinf}
\end{proposition}
\noindent
{\it Proof}. It is almost an immediate consequence of Theorem 1 in \cite{qs2} and Proposition \ref{difmal} from the preceding Section
\ref{existencia}.
Namely, we just need to check that, using the same notation as in the proof of Theorem 1 in \cite{qs2}, the sequence of
$L^2(\Omega; \mathcal{H}_T^{\otimes N})-$valued random variables $(Z^{N,n}(t,x))_{n\geq 1}$ converges to $Z^N(t,x)$ as $n$ tends to infinity. We should mention that in that reference, $Z^{N,n}(t,x)$ is constructed using a regularisation procedure, that is, by smoothing the measure $\Gamma$, and by means of an expression similar to (\ref{ci}). In \cite{qs2}, the objective was to define the initial condition of the linear stochastic equation satisfied by the iterated Malliavin derivative $D^N u(t,x)$ as the limit of $Z^{N,n}(t,x)$. We claim that this limit equals $Z^N(t,x)$, defined in the statement of the present proposition (see (\ref{ci})).
Indeed, the convergence of $Z^{N,n}(t,x)$ to $Z^N(t,x)$ in $L^2(\Omega; \mathcal{H}_T^{\otimes N})$ can be easily studied using the same arguments as in the proof of Lemma 3 in \cite{qs2}. \hfill $\square$
\vspace{0.3cm}
We are now in position to state and prove the main result of the paper.
\begin{theorem}
Assume that $\Gamma$ satisfies Hypothesis A, the coefficients $\sigma$ and $b$ are $\mathcal{C}^\infty$ functions with bounded derivatives of any order greater than or equal to one and $|\sigma(z)|\geq c>0$, for all $z\in \mathbb{R}$.
Moreover, suppose that there exists $\gamma >0$ such that for all $\tau\in (0,1]$,
\begin{equation}
\int_0^\tau \int_{\mathbb{R}^{d}} |\mathcal{F} \Gamma(s)(\xi)|^2 \mu(d\xi) ds
\ge C_1 \tau^{\gamma},
\label{lowbound}
\end{equation}
for some positive constant $C_1$. Then, for all $(t,x)\in [0,T]\times \mathbb{R}^{d}$, the law of $u(t,x)$ has a $\mathcal{C}^\infty$ density with respect to Lebesgue measure on $\mathbb{R}$.
\label{regularitatdensitat}
\end{theorem}
\noindent
{\it Proof.} In view of Proposition \ref{malinf}, we need to show that the inverse of the Malliavin matrix of $u(t,x)$ has moments of all orders, that is
$$E\left( \left| \int_0^T \|D_s u(t,x)\|_\mathcal{H}^2 ds \right|^{-q} \right)<+\infty,$$
for all $q\geq 2$.
It turns out (see, for instance, Lemma 2.3.1 in \cite{nualart}) that it suffices to check that for any $q\geq 2$,
there exists an $\varepsilon _{0}(q)>0$ such that for all $\varepsilon \leq
\varepsilon _{0}$
\begin{equation}
P\left( \int_{0}^{t}\left\| D_s u(t,x) \right\|_{\mathcal{H}}^{2} ds<\varepsilon \right) \leq C \varepsilon ^{q}.
\end{equation}
Proceeding as in the proof of Theorem \ref{densitat}, for any $\delta>0$ sufficiently small we obtain the following estimate:
\begin{align}
P\left( \int_0^t \|D_s u(t,x)\|^2_\mathcal{H} ds<\varepsilon\right) & \leq P\left( I(t,x;\delta) \geq \frac{c^2}{2} g(\delta)-\varepsilon\right)\nonumber\\
& \leq \left( \frac{c^2}{2}g(\delta) -\varepsilon\right)^{-p} E(|I(t,x;\delta)|^p),
\label{23}
\end{align}
for any $p>0$, where we recall that $I(t,x;\delta)$ is defined by (\ref{15.5}) and
$g(\delta)$ is given by (\ref{gedelta}).
We decompose now the term $I(t,x;\delta)$ as in the proof of Theorem \ref{densitat}, so that we need to find upper bounds for $E(|I_i(t,x;\delta)|^p)$, $i=1,2$ (see (\ref{15.6}) and (\ref{15.7})). On the one hand, owing to H\"older's inequality and (\ref{11}) we get
\begin{align}
& E(|I_1(t,x;\delta)|^p)\nonumber \\
&\quad = E\left( \int_0^\delta \left\| \int_{t-s}^t \int_{\mathbb{R}^{d}} \Gamma(t-r,x-z)\sigma'(u(r,z)) D_{t-s} u(r,z)W(dr,dz) \right\|_\mathcal{H}^2 ds\right)^p \nonumber \\
& \quad \leq \delta^{p-1} E\left( \int_0^\delta \left\| \int_{t-s}^t \int_{\mathbb{R}^{d}} \Gamma(t-r,x-z)\sigma'(u(r,z)) D_{t-s} u(r,z)W(dr,dz) \right\|_\mathcal{H}^{2p} ds\right) \nonumber\\
&\quad \leq \delta^{p-1} (g(\delta))^p \sup_{(\tau,y)\in [0,T]\times \mathbb{R}^{d}} E\left( \|D_{t-\cdot} u(t-\tau,y)\|_{\mathcal{H}_T}^{2p}\right).
\label{23.4}
\end{align}
The above estimate (\ref{23.4}) allows us to conclude that
$$
E(|I_1(t,x;\delta)|^p) \leq C \delta^{p-1} (g(\delta))^{p}.
$$
On the other hand, using similar arguments but for the Hilbert-valued pathwise integral (see (\ref{deter})), one proves that $E(|I_2(t,x;\delta)|^p)$ may be bounded, up to some positive constant, by $\delta^{p-1} (g(\delta))^p $.
Thus, we have proved that
\begin{equation}
P\left( \int_0^t \|D_s u(t,x)\|^2_\mathcal{H} ds < \varepsilon\right)
\leq C \left( \frac{c^2}{2}g(\delta) -\varepsilon\right)^{-p} \delta^{p-1} (g(\delta))^p .
\label{24}
\end{equation}
At this point, we choose $\delta=\delta(\varepsilon)$ in such a way that $g(\delta)=\frac{4}{c^2} \varepsilon$. By (\ref{lowbound}), this implies that
$\frac{4}{c^{2}}\varepsilon \geq C\delta^{\gamma}$, that is, $\delta \leq C\varepsilon^{\frac{1}{\gamma}}$. Hence,
\[
P\left( \int_{0}^{t}\left\| D_{s}u(t,x)\right\|_{\mathcal{H}}^{2}ds<\varepsilon \right) \leq C\varepsilon^{\frac{p-1}{\gamma}},
\]
and it suffices to take $p$ sufficiently large such that $ \frac{p-1}{\gamma } \geq q$.
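In more detail, with this choice of $\delta(\varepsilon)$ the right-hand side of (\ref{24}) becomes
\[
C\left(\frac{c^2}{2}g(\delta)-\varepsilon\right)^{-p}\delta^{p-1}(g(\delta))^{p}
= C\,\varepsilon^{-p}\,\delta^{p-1}\left(\frac{4}{c^2}\,\varepsilon\right)^{p}
\leq C\left(\frac{4}{c^2}\right)^{p}\varepsilon^{\frac{p-1}{\gamma}},
\]
where the last inequality uses $\delta\leq C\varepsilon^{\frac{1}{\gamma}}$.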
\hfill $\square$
\begin{remark}
As pointed out in Appendix A of \cite{QS} (see also \cite{lev}), when $\Gamma$ is the fundamental solution of the wave equation in $\mathbb{R}^d$, with
$d=1,2,3$, then condition (\ref{lowbound}) is satisfied with $\gamma=3$.
On the other hand, if $\Gamma$ is the fundamental solution of the heat equation on $\mathbb{R}^d$, $d\geq 1$, then
condition (\ref{lowbound}) is satisfied with any $\gamma\geq 1$ (see Lemma 3.1 in \cite{mms}).
\end{remark}
\begin{remark}
Theorem \ref{regularitatdensitat} provides a generalisation of Theorem 3 in \cite{qs2} for the case of the three-dimensional wave equation (see also \cite{marta}). Moreover, it also generalises the results in \cite{mms} for the stochastic wave equation with space dimension $d=1,2$ and the stochastic heat equation in any space dimension.
\end{remark}
\section{Introduction}
The 2D Edwards-Anderson (EA) model in statistical mechanics is defined
by a set $\sigma=\{s_1\ldots s_N\}$ of $N$ Ising spins $s_i=\pm 1$
placed on the nodes of a 2D square lattice, and random interactions
$J_{i,j}$ at the edges, with a Hamiltonian
\[
\mathcal{H} (\sigma) = - \sum_{<i,j>} J_{i,j} s_i s_j
\]
where $<i,j>$ runs over all pairs of neighboring spins (nearest
neighbors on the lattice). The $J_{i,j}$ are the magnetic exchange
constants between spins and are held fixed for any given instance
of the system, and the spins $s_i$ are the dynamic variables. We will
focus on one of the most common disorder types, the bimodal
interactions $J=\pm 1$ with equal probabilities.
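As a concrete illustration (a minimal sketch, not part of the original text: lattice size, periodic boundary conditions, and all names are our own illustrative choices), a bimodal instance and its energy can be set up as:

```python
import random

# Minimal sketch of a 2D EA instance with bimodal couplings J = +/-1.
# Lattice size and periodic boundaries are illustrative choices.
L = 4  # linear size, N = L*L spins
random.seed(0)
# One coupling per edge: horizontal (i,j)-(i,j+1) and vertical (i,j)-(i+1,j)
Jh = {(i, j): random.choice([-1, 1]) for i in range(L) for j in range(L)}
Jv = {(i, j): random.choice([-1, 1]) for i in range(L) for j in range(L)}

def energy(s):
    """H(sigma) = - sum_<i,j> J_ij s_i s_j over nearest-neighbor pairs."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= Jh[i, j] * s[i][j] * s[i][(j + 1) % L]  # horizontal bond
            E -= Jv[i, j] * s[i][j] * s[(i + 1) % L][j]  # vertical bond
    return E

sigma = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
print(energy(sigma))
```

Note that, with no external field, the energy is invariant under a global spin flip, one source of the model's degeneracies.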
The statistical mechanics of the EA model, at a temperature $T=1/\beta $,
is given by the Gibbs-Boltzmann distribution
\[P(\sigma) = \frac{e^{-\beta\mathcal{H}(\sigma)}}{Z} \quad \mbox{where }\quad Z=\sum_{\sigma} e^{-\beta \mathcal{H}(\sigma)}
\]
The direct computation of the partition function $Z$, or any marginal
probability distribution like $p(s_i,s_j)=\sum_{\sigma \backslash
s_i,s_j}P(\sigma)$, is a time consuming task, unattainable in
general, and therefore an approximation is required. We are interested
in fast algorithms for inferring such marginal distributions. Actually
for the 2D EA model, thanks to the graph planarity, algorithms
computing $Z$ in a time polynomial in $N$ exist. However, we are
interested in very fast (i.e.\ linear in $N$) algorithms that can also be
used for more general models, e.g.\ the EA model in a field or
defined on a 3D cubic lattice. For these more general cases a
polynomial algorithm is very unlikely to exist and some approximations
are required.
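On very small systems the direct computation is of course feasible, which is useful as a reference; a minimal sketch (lattice size, couplings, and temperature are illustrative choices of ours) of brute-force enumeration of $Z$ and of a pair marginal $p(s_i,s_j)$:

```python
import math
from itertools import product

# Exact Z and a pair marginal on a tiny open 2x3 lattice by enumerating
# all 2^6 configurations. Sizes and couplings are illustrative only.
rows, cols, beta = 2, 3, 1.0
edges = {}  # edge -> coupling J
for i in range(rows):
    for j in range(cols):
        if j + 1 < cols:
            edges[((i, j), (i, j + 1))] = 1 if (i + j) % 2 else -1
        if i + 1 < rows:
            edges[((i, j), (i + 1, j))] = 1
sites = [(i, j) for i in range(rows) for j in range(cols)]

def H(s):  # s: dict site -> +/-1
    return -sum(J * s[a] * s[b] for (a, b), J in edges.items())

states = [dict(zip(sites, vals)) for vals in product([-1, 1], repeat=len(sites))]
Z = sum(math.exp(-beta * H(s)) for s in states)
# marginal p(s_i, s_j) on the first edge, summing over all other spins
(a, b), _ = next(iter(edges.items()))
p = {(si, sj): sum(math.exp(-beta * H(s)) for s in states
                   if s[a] == si and s[b] == sj) / Z
     for si in (-1, 1) for sj in (-1, 1)}
print(Z, p)
```

The exponential growth of the state space ($2^N$ terms) is exactly what makes such enumeration unattainable beyond toy sizes.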
A simple and effective mean field approximation is the one due to
Bethe \cite{bethe}, in which the marginals over the dynamic variables,
like $p(s_i)$, are obtained from the minimization of a variational
free energy in a self consistent way. The Bethe approximation is exact
for a model without loops in the interactions network, which
unfortunately is far from being the usual case in physics. In the
context of finite dimensional lattices, Kikuchi \cite{kikuchi} derived
an extension of this approximation to larger groups of variables,
which accounts for short loops exactly, and is usually referred to as
the Cluster Variational Method (CVM).
The interest in spin glasses, with quenched random disorder, brought a
new testing ground for both approximations. In particular Bethe
approximation (exact on trees) has been the starting point of many
useful theoretical and applied developments. It is at the basis of the
cavity method, which allows a restatement of replica theory in
probabilistic terms for finite connectivity systems \cite{MP1}. The
Bethe approximation is connected to well known algorithms in computer
science, namely Belief Propagation \cite{pearlBP} and the sum-product
algorithm \cite{sumprod}. A major achievement of this confluence
between computer science and statistical mechanics, has been the
conception of the Survey Propagation algorithm \cite{KS,KS2}, inspired
by the cavity method and the replica symmetry breaking
\cite{MP1,MP2,MPV}, that shows great performance on hard optimization
problems \cite{KS,KS2,Col,Col2}. Statistical mechanics clarified the
relation between phase transitions and easy-hard transitions in
optimization problems, and allowed the statistical characterization of
the onset of the hard phase
\cite{achlioptas_rigorous,lenkaPNAS07,mon08}, as well as the
analytical description of search algorithms based on BP
\cite{MRTS_Allerton, BPdecimation}.
The correctness of Bethe approximation and the related algorithms is,
however, linked to the lack of topological correlations in the
interactions (random graphs are locally tree-like), since the
approximation is exact only on tree topologies. This is a strong
limitation for physical purposes, since tree topologies or random
graphs are not the common situation. Bethe approximation performs
poorly in finite dimensional lattices, and the associated algorithms
are usually non-convergent at low temperatures.
Recently the Cluster Variational Method (CVM) has been reformulated in
a broader probabilistic framework called \textit{region-based}
approximations to free energy \cite{yedidia} and connected to a
Generalized Belief Propagation (GBP) algorithm to find the stationary
points of the free energy. It extends Bethe approximation by
considering correlations in larger regions, allowing, in principle, to
take into account short loops accurately. In \cite{yedidia} it was shown
that stable fixed points of the GBP message passing algorithm correspond
to stationary points of the approximated CVM free energy, while the
converse is not necessarily true. Furthermore, the GBP message passing
is not guaranteed to converge at all. Prompted by this lack of
convergence, a new kind of provably convergent algorithms for
minimizing the CVM approximated free energy, known as Double Loop (DL)
algorithms \cite{yuille,HAK03}, has been developed, at the cost of a
drastic drop off in speed.
GBP has been applied in the last decade to inference problems
\cite{tanakaCVM,haploCVM,kappenCVMmedical}, consistently outperforming
BP. In particular, the image reconstruction problems
\cite{tanakaCVM1995,tanakaCVM} are based on a 2D lattices structure,
but, at variance with 2D EA model, the interactions among nearby spins
(pixels) are ferromagnetic, and the damaged image is used as an
external field. Both factors help convergence of GBP algorithms. An
analysis of CVM approximation using GBP algorithms on single instances
of finite dimensional disordered models of physical interest, like the
EA model, has not been done so far.
The Edwards Anderson model in 2D has been largely studied by other
methods (see \cite{JLMM, middleton09} and references therein),
suggesting that it remains paramagnetic all the way down to zero
temperature, lacking any thermodynamic transition at any finite $T$,
although at low $T$ there are metastable states of very long lifetime,
leading to very slow dynamics. Based on this fact, a paramagnetic
version of the GBP on 2D EA model was studied recently in
\cite{dual}. The connection of CVM with the replica trick and a
Generalized Survey Propagation have been presented recently
\cite{tommaso_CVM}. However the implementation of the latter algorithm
on finite dimensional lattices is computationally very demanding, and
should be preceded by the study of the original CVM approximation and
GBP algorithm.
In this paper we study the convergence properties of GBP message
passing algorithm and the performance of the CVM approximation on the
2D EA model. After the introduction of the region-based free energy in
Sec.~\ref{GBP2D} and the message passing algorithm in terms of cavity
fields, we compute the critical (inverse) temperature $T_\text{CVM} \simeq
0.82$ ($\B_\text{CVM} \simeq 1.22$) of the plaquette-CVM approximation in
Sec.~\ref{Tc}, improving Bethe estimate $T_\text{Bethe}=1.51$
($\beta_\text{Bethe} \simeq 0.66$) by roughly a factor 2. The CVM average
case temperature, however, does not clearly corresponds to the single
instance behavior of the GBP message passing algorithm, as is shown in
Sec.~\ref{ConvergenceProblems}. At variance with Belief Propagation,
GBP converges to spin glass solutions (below $T_\text{SG} \simeq
1.27$, above $\B_\text{SG} \simeq 0.79$), and stops converging near $T \simeq
1.0$, before the average case prediction $T_\text{CVM}$. In Sec.\ref{gauge}
we show that this convergence problem depends on the implementation
details of the message passing algorithm, and can be improved by a
simultaneous update of message. In order to do so the gauge invariance
of the message passing equations has to be fixed. In
Sec.~\ref{GBPvsDL} we compare the solutions and the performance of GBP
with 3 other algorithms for the minimization of the CVM free energy:
Double Loop \cite{HAK03}, Two-Ways Message Passing \cite{HAK03}, and
the Dual algorithm \cite{dual}. In terms of the CVM free energy, the
paramagnetic solution is in general the one to be chosen, except for a
small interval in temperatures where the spin glass solution has a
lower free energy. Our results are summarized in
Sec.~\ref{Conclusions}.
\section{Generalized Belief Propagation on EA 2D}
\label{GBP2D}
Given that a detailed derivation of plaquette-GBP message passing
equations for the 2D Edwards Anderson model was presented in
\cite{dual}, here we only summarize that derivation, skipping
unnecessary details.
The idea of the \textit{region-based} free energy approximation
\cite{yedidia,pelizzola05} is to mimic the exact (Boltzmann-Gibbs)
distribution $P(\sigma)$, by a reduced set of its marginals. A
hierarchy of approximations is given by the size of such marginals,
starting with the set of all single-spin marginals $p_i(s_i)$ (mean
field), then moving to all neighboring-site marginals $p(s_i,s_j)$
(Bethe approximation), then to all square-plaquette marginals
$p(s_i,s_j,s_k,s_l)$, and so on. Since the only way of knowing such
marginals exactly is the unattainable computation of $Z$, the method
aims to approximate them by a set of beliefs $b_i(s_i)$,
$b_L(s_i,s_j)$, $b_\mathcal{P}(s_i,s_j,s_k,s_l)$, etc. obtained from a
minimization of a region based free energy.
Following the derivation done in \cite{dual}, the plaquette level
approximated free energy for the 2D EA model is given as a
contribution of all Plaquettes, Links and Spins in the 2D lattice:
\begin{widetext}
\begin{eqnarray}
\beta F &=& \sum_{\mathcal{P}} \sum_{\sigma_\mathcal{P}} \displaystyle b_\mathcal{P}(\sigma_\mathcal{P}) \log \frac{ b_\mathcal{P}(\sigma_\mathcal{P})}{\exp(-\beta E_\mathcal{P}(\sigma_\mathcal{P}))} \qquad\mbox{Plaquettes} \nonumber\\
& & -\sum_{L} \sum_{\sigma_L} \displaystyle b_L(\sigma_L) \log \frac{ b_L(\sigma_L)}{\exp(-\beta E_L(\sigma_L))} \qquad\mbox{Links} \label{eq:freeen} \\
& & +\sum_{i} \sum_{s_i} \displaystyle b_i(s_i) \log \frac{ b_i(s_i)}{\exp(-\beta E_i(s_i))} \qquad\mbox{Spins} \nonumber
\end{eqnarray}
\end{widetext}
where the symbol $\sigma_R=(s_1,\ldots,s_k)$ stands for the set of
spins in region $R$, while $E_R(\sigma_R)=-\sum_{<i,j>\in R} J_{i,j} s_i
s_j$ stands for the energy contribution in that region. The energy
term $E_i(s_i)$ in the spin contribution is only relevant when an
external field acts on the spins, and will be neglected from now on.
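Each region contributes a term of the form $\sum_{\sigma_R} b_R(\sigma_R)\log[b_R(\sigma_R)/e^{-\beta E_R(\sigma_R)}]$, which is just $\beta$ times a region free energy; a minimal numerical sketch (function names and the numbers are our own, purely illustrative):

```python
import math

# Per-region contribution  sum_s b(s) * log( b(s) / exp(-beta*E(s)) )
# to the region-based free energy; names are illustrative.
def region_term(b, E, beta):
    """beta * F_R for one region, given belief b and energies E over its states."""
    return sum(bs * (math.log(bs) + beta * Es) for bs, Es in zip(b, E) if bs > 0)

# Sanity check: if b is the exact Boltzmann distribution of the region,
# the term reduces to -log Z_R.
beta = 0.7
E = [-2.0, 0.0, 0.0, 2.0]  # e.g. the four states of a single link s_i s_j
Z = sum(math.exp(-beta * e) for e in E)
b = [math.exp(-beta * e) / Z for e in E]
print(region_term(b, E, beta), -math.log(Z))
```

The check makes explicit why minimizing the region-based free energy over beliefs mimics the exact variational principle.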
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{resources/beliefs.eps}
\caption{Schematic representation of belief equations
(\ref{eq:beliefs}). Lagrange multipliers are depicted as arrows,
going from parent regions to children regions. Italics capital
letters are used to denote Plaquettes, simple capital letters denote
Links, and lower case letters denote Spins.}
\label{fig:beliefs2D}
\end{center}
\end{figure}
An unrestricted minimization of the free energy (\ref{eq:freeen}) in
terms of its beliefs, produces incongruent results. Beliefs are only
meaningful as an approximation to the correct marginals if they obey
the marginalization constraints $b_i(s_i) = \sum_{s_j} b_L(s_i,s_j)$
and $b_L(s_i,s_j) = \sum_{s_k,s_l} b_P(s_i,s_j,s_k,s_l)$. This
marginalization is enforced by the introduction of Lagrange
multipliers (see \cite{yedidia} for a general introduction, and
\cite{dual} for this particular case) in the free energy
expression. There is one Lagrange multiplier $\mu_{L\to i}(s_i)$ for
every link $L$ and spin $i \in L$, and a Lagrange multiplier
$\nu_{\mathcal{P}\to L}(s_i,s_j)$ for each plaquette $\mathcal{P}$ and link $L\in\mathcal{P}$
. In terms of these Lagrange multipliers, the stationary condition of
the approximated free energy is achieved with
\begin{eqnarray}
b_i(s_i) &=& \frac{1}{Z_i} \exp\left(-\beta E_i(s_i) - \sum_{L\supset
i}^4 \mu_{L\to i}(s_i)\right)\;, \nonumber\\
b_L(\sigma_L) &=& \frac{1}{Z_L} \exp\left(-\beta E_L(\sigma_L) -
\sum_{\mathcal{P} \supset L}^2 \nu_{\mathcal{P}\to L}(\sigma_L) - \sum_{i\subset L}^2
\mathop{\sum_{L'\supset i}^3}_{L' \neq L} \mu_{L'\to i}(s_i)
\right)\;, \label{eq:beliefs}\\
b_\mathcal{P}(\sigma_\mathcal{P}) &=& \frac{1}{Z_\mathcal{P}} \exp\left(-\beta E_\mathcal{P}(\sigma_\mathcal{P})
- \sum_{L\subset\mathcal{P}}^4 \mathop{\sum_{\mathcal{P}'\supset L}^1}_{\mathcal{P}' \neq \mathcal{P}}
\nu_{\mathcal{P}'\to L}(\sigma_L) - \sum_{i\subset \mathcal{P}}^4 \mathop{\sum_{L \supset
i}^2}_{L \not\subset \mathcal{P}} \mu_{L\to i}(s_i) \right)\;. \nonumber
\end{eqnarray}
A graphical representation of these equations is given in figure
\ref{fig:beliefs2D}. Lagrange multipliers are shown as arrows going
from parent regions to children. Take, for instance, the middle equation
for the belief in link regions $b_{L}(\sigma_L)=b_{L}(s_i,s_j)$. The
sum of the two Lagrange multipliers $\nu_{\mathcal{P}\to L} (s_i,s_j)$
corresponds to the triple arrows on both sides of the link in the central
panel of figure \ref{fig:beliefs2D}, while the two sums over three messages
$\mu_{L'\to i}(s_i)$ correspond to the three arrows acting over the
top ($j$) and bottom ($i$) spins, respectively. In equations
(\ref{eq:beliefs}), the $Z_R$ are normalization constants. The terms
$E_\mathcal{P}(\sigma_\mathcal{P})=E_\mathcal{P}(s_i,s_j,s_k,s_l)=-(J_{i,j} s_i s_j+J_{j,k}
s_j s_k+J_{k,l} s_k s_l+J_{l,i} s_l s_i)$ and $E_L(s_i,s_j)=- J_{i,j}
s_i s_j$ are the corresponding energies in plaquettes and links
respectively, and are represented in the diagram by the lines
(interactions) between circles (spins).
The single-site energy terms $E_i(s_i)$ are zero since no external field is acting upon the spins.
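The marginalization constraints that the Lagrange multipliers enforce can be checked numerically on a single plaquette; a minimal sketch (couplings and $\beta$ are illustrative, and the plaquette Boltzmann weight stands in for $b_\mathcal{P}$):

```python
import math
from itertools import product

# Check b_L(s_i,s_j) = sum_{s_k,s_l} b_P(s_i,s_j,s_k,s_l) on one plaquette.
# Couplings and beta are illustrative; b_P is the plaquette Boltzmann weight.
beta = 0.5
J = {"ij": 1, "jk": -1, "kl": 1, "li": 1}  # couplings around the plaquette

def E_P(si, sj, sk, sl):
    return -(J["ij"]*si*sj + J["jk"]*sj*sk + J["kl"]*sk*sl + J["li"]*sl*si)

states = list(product([-1, 1], repeat=4))
ZP = sum(math.exp(-beta * E_P(*s)) for s in states)
bP = {s: math.exp(-beta * E_P(*s)) / ZP for s in states}
# marginal over the link (s_i, s_j)
bL = {(si, sj): sum(bP[si, sj, sk, sl] for sk in (-1, 1) for sl in (-1, 1))
      for si in (-1, 1) for sj in (-1, 1)}
# marginal over the single spin s_i
bi = {si: sum(bL[si, sj] for sj in (-1, 1)) for si in (-1, 1)}
print(bL, bi)
```

With zero field the plaquette energy is invariant under a global spin flip, so the single-site marginal comes out uniform, as it should.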
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{resources/messages.eps}
\caption{Message passing equations (\ref{eq:message-u}) and
(\ref{eq:message-Uuu}), shown schematically. Messages are depicted
as arrows, going from parent regions to children regions. On any
link $J_{i,j}$, represented as bold lines between spins (circles), a
Boltzmann factor $e^{\beta J_{i,j} s_i s_j}$ exists. Dark circles
represent spins to be traced over. Messages from plaquettes to links
$\nu_{P\to L}(s_i,s_j)$ are represented by a triple arrow, because
they can be written in terms of three parameters $U$, $u_i$ and
$u_j$, defining the correlation $\< s_i s_j \>$ and magnetizations
$\< s_i \>$ and $\< s_j \>$, respectively.}
\label{fig:message2D}
\end{center}
\end{figure}
The Lagrange multipliers can be parametrized in terms of cavity
fields $u$ and $(U,u_a,u_b)$ as
\begin{eqnarray}
-\mu_{L\to i}(s_i) &=& \beta u_{L\to i} \: s_i \\
-\nu_{\mathcal{P} \to L}(s_i,s_j)&=& \beta (U_{\mathcal{P} \to L} \:s_i s_j + u_{\mathcal{P} \to i} \:s_i + u_{\mathcal{P} \to j}\: s_j)
\end{eqnarray}
In particular, the field $u_{L\to i}$ corresponds to the cavity field
in the Bethe approximation \cite{yedidia}. The choice of this
parametrization is the reason for the use of single and triple arrows
in figures \ref{fig:beliefs2D} and \ref{fig:message2D}. In particular,
the messages going from plaquettes to links, are characterized by
three fields ($U_{\mathcal{P} \to L},u_{\mathcal{P} \to i},u_{\mathcal{P} \to j}$), and the
capital $U_{\mathcal{P} \to L}$ acts as an effective interaction term.
The Lagrange multipliers are related among themselves by the constraints they
are supposed to impose (see \cite{dual}). In terms of the cavity
fields and using the notation in figure \ref{fig:message2D}, the
Link-to-Spin cavity fields are related by
\begin{equation}
u_{L\to i} = \hat{u}(u_{\mathcal{P}\to i} + u_{\mathcal{L} \to i},\;
U_{\mathcal{P}\to L} + U_{\mathcal{L}\to L} + J_{ij},\;
u_{\mathcal{P}\to j} + u_{\mathcal{L}\to j} + u_{A\to j} + u_{B\to j} + u_{U\to j})\;,
\label{eq:message-u}
\end{equation}
where
\[
\hat{u}(u,U,h) \equiv u + \frac{1}{2\beta} \log\frac{\cosh\beta(U+h)}{\cosh\beta(U-h)}
\]
Note that the usual cavity equation for fields in the Bethe
approximation \cite{MP1} is recovered if all contributions from
plaquettes $\mathcal{P}$ and $\mathcal{L}$ are set to zero.
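The scalar update $\hat u$ is easy to sketch numerically (the value of $\beta$ and the arguments are illustrative choices of ours); in the Bethe limit it reduces, as stated above, to the standard cavity update $\frac{1}{\beta}\,\mathrm{atanh}(\tanh\beta J\,\tanh\beta h)$:

```python
import math

# Sketch of  u_hat(u, U, h) = u + (1/(2 beta)) log[ cosh(beta(U+h)) / cosh(beta(U-h)) ].
beta = 1.3  # illustrative inverse temperature

def u_hat(u, U, h):
    return u + (1.0 / (2 * beta)) * math.log(
        math.cosh(beta * (U + h)) / math.cosh(beta * (U - h)))

# With all plaquette contributions set to zero (u = 0, U = J), this is the
# usual Bethe cavity field produced by a coupling J and incoming field h.
J, h = 1.0, 0.4
print(u_hat(0.0, J, h))
```

The identity $\tfrac{1}{2}\log[\cosh(a+b)/\cosh(a-b)]=\mathrm{atanh}(\tanh a\,\tanh b)$ is what connects the two forms.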
Similarly, by imposing the marginalization of the beliefs at
Plaquettes onto their children Links, we find the self consistent
expression for the Plaquette-to-Link cavity fields:
\begin{eqnarray}
U_{\mathcal{P}\to L} &=& \hat U(\#) = \frac{1}{4\beta}\log\frac{K(1,1)K(-1,-1)}{K(1,-1)K(-1,1)} \nonumber \\
u_{\mathcal{P}\to i} &=& - u_{D\to i} + \hat u_i(\#) =u_{\mathcal{D}\to i} - u_{D\to i} + \frac{1}{4\beta}\log\frac{K(1,1)K(1,-1)}{K(-1,1)K(-1,-1)}
\label{eq:message-Uuu} \\
u_{\mathcal{P}\to j} &=& - u_{U\to j} + \hat u_j(\#) = u_{\mathcal{U}\to j} - u_{U\to j} + \frac{1}{4\beta}\log\frac{K(1,1)K(-1,1)}{K(1,-1)K(-1,-1)} \nonumber
\end{eqnarray}
where
\begin{eqnarray*}
K(s_i,s_j) &=& \sum_{s_k,s_l} \exp \bigg[
\beta \Big( (U_{\mathcal{U}\to U} + J_{jk}) s_j s_k +
(U_{\mathcal{R}\to R} + J_{kl}) s_k s_l + (U_{\mathcal{D}\to D} + J_{li}) s_l s_i + \\
&& (u_{\mathcal{U}\to k} + u_{C\to k} + u_{E\to k} + u_{\mathcal{R}\to k}) s_k +
(u_{\mathcal{R}\to l} + u_{F\to l} + u_{G\to l} + u_{\mathcal{D}\to l}) s_l \Big) \bigg]
\end{eqnarray*}
and the symbol $\#$ stands for all incoming fields in the right hand
side of the equations. The functions $\hat u(u,U,h)$ and $[\hat
U(\#),\hat u_i(\#),\hat u_j(\#)]$ will be used in the next section for
the average case calculation.
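For concreteness, the trace defining $K(s_i,s_j)$ and the resulting $\hat U$ can be sketched with all incoming cavity fields set to zero (couplings and $\beta$ are illustrative choices of ours):

```python
import math
from itertools import product

# Plaquette-to-link update with zero incoming fields: trace out the two
# internal spins (s_k, s_l) and take log-ratios of K(s_i, s_j).
beta = 0.9
JU, JR, JD = 1.0, -1.0, 1.0  # couplings on the other three links

def K(si, sj):
    return sum(math.exp(beta * (JU * sj * sk + JR * sk * sl + JD * sl * si))
               for sk, sl in product([-1, 1], repeat=2))

U_hat = (1.0 / (4 * beta)) * math.log(
    K(1, 1) * K(-1, -1) / (K(1, -1) * K(-1, 1)))
print(U_hat)
```

In this zero-field limit $\hat U$ reduces to $\frac{1}{\beta}\,\mathrm{arctanh}[\tanh(\beta J_{\mathcal U})\tanh(\beta J_{\mathcal R})\tanh(\beta J_{\mathcal D})]$, the kernel appearing in eq. (\ref{eq:ave_QU}) below.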
For a given system of size $N$ (number of spins) there are $2N$ Links
and $N$ square plaquettes, and therefore there are $4 N$
Plaquette-to-Link fields $[U_{\mathcal{P}\to L},u_{\mathcal{P}\to i},u_{\mathcal{P}\to j}]$,
and $4 N$ Link-to-Spin fields $u_{L\to i}$. At the stationary points
of the free energy their values are related by the set of $4N+4N$
equations (\ref{eq:message-u}) and (\ref{eq:message-Uuu}).
These $4 N +4 N$ self-consistent equations are also called
message-passing equations when they are used as update rules for the
fields in the message passing algorithm, or cavity iteration equations
in the context of cavity calculations. The field notation is more
comprehensible than the original Lagrange multipliers notation, and
has a clear physical meaning: each plaquette is telling its children
links that they should add an effective interaction term $U_{P\to L}$
to the direct interaction $J_{i,j}$, due to the fact that spins $s_i$ and
$s_j$ are also interacting through the other three links in the
plaquette. Terms $u_i$ act like magnetic fields upon spins, and the
complete $\nu(s_i,s_j)-$message is characterized by the triplet
$(U_{i,j},u_i,u_j)$.
\section{Critical Temperature of Plaquette-CVM approximation}
\label{Tc}
In this section we revisit the method used in \cite{tommaso_CVM} to
compute the critical temperature at which CVM approximation develops a
spin glass phase. By spin glass phase we mean a phase characterized by
non zero local magnetizations $m_i = \tanh \left(\beta \sum_L^4 u_{L\to
i} \right)$ and nearly zero total magnetization $m = \frac 1 N
\sum_i m_i \simeq 0$ (remember we work with no external field). The 2D
EA model is paramagnetic down to zero temperature, but spin glass like
solutions can appear in the CVM approximation due to its mean field
character. We correct one of the conclusions reached in
\cite{tommaso_CVM}, where we failed to observe the appearance of the
spin glass phase in the CVM approximation to the 2D Edwards Anderson
model. We follow an average case approach, which is similar in spirit
but different from the single instance stability analysis done in
\cite{MooKap05} for the Bethe approximation (Belief Propagation).
The average case calculation is a mathematical technique developed in
\cite{MP1} to study the typical solutions of cavity equations in
disordered systems, with a deep and fundamental connection to the
replica trick \cite{MPV}. When applied to the plaquette-CVM
approximation \cite{tommaso_CVM}, we end up with two equations, in
which fields (messages) are now replaced by functions of fields $q(u)$
and $Q(U,u_1,u_2)$, and the interactions are averaged out. As a
consequence of the homogeneity of the 2D lattice and the averaging
over local disorder $J_{i,j}$, all plaquettes, links, and spins in the
graph are now equivalent, and we only need to study one of them to
characterize the whole system.
More precisely, the average case self consistent equations for the
distribution $q(u)$ is given by
\begin{eqnarray}
q(u_i) &=& \mathbb{E}_J \int \mathrm{d} q(u_{A\to j})\: \mathrm{d} q(u_{B\to j}) \: \mathrm{d} q(u_{U\to j}) \label{eq:ave_qu_eq} \\
&& \mathrm{d} Q(U_{\mathcal{P} \to L},u_{\mathcal{P} \to i},u_{\mathcal{P} \to j}) \: \mathrm{d} Q(U_{\mathcal{L} \to L},u_{\mathcal{L} \to i},u_{\mathcal{L} \to j})\; \delta\Bigl(u_i - \hat{u}(\#)\Bigr) \nonumber
\end{eqnarray}
with $\hat u(\#)$ as defined in the right hand side of equation (\ref{eq:message-u}), and $\mathrm{d} f(x) \equiv f(x) \mathrm{d} x$.
The corresponding self-consistent equation for $Q(U,u_1,u_2)$ is
\begin{eqnarray}
\iint Q(U,u_a,u_b) q(u_i-u_a) q(u_j-u_b) \mathrm{d} u_a \mathrm{d} u_b = \phantom{\iint Q(U,u_1,u_2)} \label{eq:ave_QUuu_eq} \\
= \mathbb{E}_J \int \mathrm{d} q(u_{C\to k}) \: \mathrm{d} q(u_{E\to k}) \: \mathrm{d} q(u_{F\to l}) \: \mathrm{d} q(u_{G\to l}) \: \mathrm{d} Q(U_{\mathcal{U}\to U},u_{\mathcal{U} \to j},u_{\mathcal{U} \to k})\: \nonumber \\
\mathrm{d} Q(U_{\mathcal{R}\to R},u_{\mathcal{R} \to k},u_{\mathcal{R} \to l}) \: \mathrm{d} Q(U_{\mathcal{D}\to D},u_{\mathcal{D} \to l},u_{\mathcal{D} \to i})
\delta\Bigl(U-\hat U (\#)\Bigr) \; \delta\Bigl(u_i-\hat u_i (\#)\Bigr) \; \delta\Bigl(u_j-\hat u_j (\#)\Bigr) \nonumber
\end{eqnarray}
where the notation corresponds to equation (\ref{eq:message-Uuu}). In
both equations (\ref{eq:ave_qu_eq}) and (\ref{eq:ave_QUuu_eq}) the
expression $\mathbb{E}_J = \int \mathrm{d} J P(J) \ldots$ stands for the
average over the quenched randomness.
At high temperatures we expect fixed point equations
(\ref{eq:message-u}) and (\ref{eq:message-Uuu}) to yield a
paramagnetic solution. Such a solution is characterized by
Link-to-Spin messages $u=0$, and Plaquette-to-Link messages $(U,u_1,u_2) =
(U,0,0)$. If we impose this ansatz on the fields, we recover the
paramagnetic or dual algorithm of \cite{dual} for the single instance
message passing, and the paramagnetic average case study of
\cite{tommaso_CVM} for the average case. Let us remember that the 2D
EA model is expected to have no thermodynamic transition at any finite
temperature, and hence remain paramagnetic all the way down to
$T=0$. Following \cite{tommaso_CVM}, in the average case the
paramagnetic solution has the form
\begin{eqnarray*}
q(u) &=& \delta(u) \\
Q(U,u_1,u_2) &=& Q(U) \delta(u_1)\delta(u_2)
\end{eqnarray*}
Equation (\ref{eq:ave_qu_eq}) is always satisfied when
$q(u)=\delta(u)$, whatever $Q(U)$ may be. Equation
(\ref{eq:ave_QUuu_eq}) can then be solved self-consistently for $Q(U)$:
\begin{eqnarray} Q(U) &=& \mathbb{E}_J \int \mathrm{d} Q(U_{\mathcal{U}})\: \mathrm{d} Q(U_{\mathcal{R}})\:\mathrm{d} Q(U_{\mathcal{D}}) \label{eq:ave_QU} \\
&& \delta\left(U - \frac{1}{\beta} \arctanh\Bigl[\tanh \beta(J_\mathcal{U} + U_\mathcal{U})\tanh \beta (J_\mathcal{R} + U_\mathcal{R})\tanh \beta (J_\mathcal{D} + U_\mathcal{D})\Bigr]\right) \nonumber
\end{eqnarray}
and the average free energy and all other relevant functions can be
derived in terms of it (see \cite{tommaso_CVM}).
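Equation (\ref{eq:ave_QU}) can be solved by a standard population-dynamics iteration; a minimal sketch (population size, number of sweeps, seed, and $\beta$ are illustrative choices of ours):

```python
import math
import random

# Population dynamics for the paramagnetic self-consistency (eq:ave_QU):
# U = (1/beta) atanh[ tanh(beta(J_U+U_U)) tanh(beta(J_R+U_R)) tanh(beta(J_D+U_D)) ]
# with bimodal couplings J = +/-1. All parameters below are illustrative.
random.seed(1)
beta, POP, SWEEPS = 1.0, 5000, 50
pop = [0.0] * POP

def new_U():
    t = 1.0
    for _ in range(3):  # the three other links of the plaquette
        J = random.choice([-1, 1])
        t *= math.tanh(beta * (J + random.choice(pop)))
    return math.atanh(t) / beta

for _ in range(SWEEPS):
    for k in range(POP):
        pop[k] = new_U()
mean_U = sum(pop) / POP
print(mean_U)
```

Since the bimodal disorder is symmetric, the stationary $Q(U)$ is symmetric around zero, which the population mean reflects.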
On the other hand, a general (not paramagnetic) solution of the
average case equations (\ref{eq:ave_qu_eq}) and (\ref{eq:ave_QUuu_eq})
is very difficult, since it involves the deconvolution of
distributions $q(u)$ in the left hand side of \rref{eq:ave_QUuu_eq} in
order to update $Q(U,u_1,u_2)$ by an iterative method. A critical
temperature can be found, however, by an expansion in small $u$ around
the paramagnetic solution. We can focus on the second moments of the
distributions
\begin{eqnarray*}
a &=& \int q(u) u^2 \mathrm{d} u \\
a_{i\:j}(U) &=& \iint Q(U,u_1,u_2) \:u_i\: u_j \:\mathrm{d} u_1 \mathrm{d} u_2 \qquad \text{where } i,j \in \{1,2\}
\end{eqnarray*}
and check whether the paramagnetic solution ($a=0$ and $a_{ij}(U) =
0$) is locally stable. To do this we expand equations
(\ref{eq:ave_qu_eq}) and (\ref{eq:ave_QUuu_eq}) to second order, and
we obtain the following linearized equations:
\begin{eqnarray*}
a &=& K_{a,a} a + \int \mathrm{d} U'\: K_{a,a_{11}}(U') a_{11}(U') + \int \mathrm{d} U'\: K_{a,a_{12}}(U') a_{12}(U') \\
a\: Q(U) +a_{11}(U) &=& K_{a_{11},a}(U) a + \int \mathrm{d} U'\: K_{a_{11},a_{11}}(U,U') a_{11}(U') + \int \mathrm{d} U'\: K_{a_{11},a_{12}}(U,U') a_{12}(U') \\
a_{12}(U) &=& K_{a_{12},a}(U) a + \int \mathrm{d} U'\: K_{a_{12},a_{11}}(U,U') a_{11}(U') + \int \mathrm{d} U'\: K_{a_{12},a_{12}}(U,U') a_{12}(U')
\end{eqnarray*}
The actual values of the $K_{a_x,a_y}$ come from the expansion in
small $u$ of the original equations (see equation 90 in
\cite{tommaso_CVM} for an example).
We cannot solve these equations analytically because we do not have
an analytical expression for $Q(U)$ in the paramagnetic solution at
all temperatures. By discretizing the values of $U$ uniformly in
$(-U_{\text{max}},U_{\text{max}})$, \textit{ i.e. } $U= i \Delta U$ with
$i\in[-I_{\text{max}},I_{\text{max}}]$, we can transform the
continuous set of equations into a system of the form
\begin{equation}
\vec a = \mathbf{K}(\beta) \cdot \vec a \label{eq:matrix_a_Ka}
\end{equation}
where the vector of the second moments $\vec a=
(a,a_{11}(U),a_{12}(U))$ has the form
\begin{eqnarray*}
\vec a &=& \bigl(a, a_{11}(-U_\text{max}),a_{11}(-U_\text{max}+\Delta U ),\ldots,a_{11}(U_\text{max}-\Delta U ),a_{11}(U_\text{max}), \bigr. \\
&& \bigl.\phantom{\bigl(a, }\: a_{12}(-U_\text{max}),a_{12}(-U_\text{max}+\Delta U ),\ldots,a_{12}(U_\text{max}-\Delta U ),a_{12}(U_\text{max}) \bigr)
\end{eqnarray*}
$\mathbf{K}(\beta)$ is a $\bigl(2(2 I_\text{max}+1)+1\bigr) \times \bigl(2(2 I_\text{max}+1)+1\bigr)$
matrix that stands for the discrete representation of the integrals in
the right hand side of the linearized equations, and depends on the
inverse temperature via the solution $Q(U)$ of eq. (\ref{eq:ave_QU}).
The paramagnetic solution $\vec a =0$ always satisfies the homogeneous
\rref{eq:matrix_a_Ka}. The criterion for the loss of stability of the
paramagnetic solution is the singularity of the Jacobian, $\det (\mathbf
I-\mathbf{K}(\beta)) =0$. When this condition is satisfied, a non
paramagnetic solution continuously arises from the paramagnetic one,
since a flat direction appears in the free energy.
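Schematically, locating the instability amounts to scanning $\beta$ for a sign change of $\det(\mathbf I - \mathbf{K}(\beta))$. In the sketch below, \texttt{build\_K} is a hypothetical user-supplied routine returning the discretized kernel matrix; its entries, coming from the small-$u$ expansion, are not reproduced here.

```python
import numpy as np

def critical_beta(build_K, betas):
    """Scan det(I - K(beta)) on a grid of inverse temperatures and return
    a crude estimate of the point where it changes sign (loss of stability
    of the paramagnetic solution)."""
    dets = [np.linalg.det(np.eye(len(build_K(b))) - build_K(b)) for b in betas]
    for b0, b1, d0, d1 in zip(betas, betas[1:], dets, dets[1:]):
        if d0 * d1 < 0:              # determinant changes sign in (b0, b1)
            return 0.5 * (b0 + b1)   # midpoint estimate; refine by bisection
    return None                      # no sign change on the scanned grid
```

The midpoint estimate can be sharpened by bisecting the bracketing interval until the desired accuracy is reached.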
\begin{figure}[!htb]
\includegraphics[angle=0,width=0.5\textwidth]{resources/eigen_B_c.eps}
\caption[0]{Determinant of the Jacobian $\mathbf J = \mathbf
I-\mathbf{K}(\beta)$ as a function of inverse temperature $\beta$. The
critical inverse temperature is $\B_\text{CVM} \simeq 1.22$.}
\label{fig:eigen_B_c}
\end{figure}
Numerically, we worked with a discretization of $2 I_\text{max} + 1 =
41$ points between $(- U_\text{max} = -3.5,\:U_\text{max} = 3.5
)$. The paramagnetic solution $Q(U)$ is found solving
eq. (\ref{eq:ave_QU}) by an iterative method at every temperature, and
then used to compute the elements of the $\mathbf{K}(\beta)$ matrix. In
figure \ref{fig:eigen_B_c} we show the determinant of the Jacobian
matrix $\mathbf J = \mathbf I - \mathbf K(\beta)$. The critical inverse
temperature derived from this analysis is $\B_\text{CVM} \simeq 1.22$ for the
appearance of a flat direction in the free energy.
In \cite{tommaso_CVM} $\B_\text{CVM}$ was thought to be infinite (zero
temperature) because an incomplete range of values of $\beta$ was
examined. The critical inverse temperature found here is roughly twice
the Bethe one, $\beta_{\text{Bethe}} \simeq 0.66$, so the critical
temperature is lower; this improves on the Bethe approximation by
roughly a factor 2, since the 2D EA model is likely to remain
paramagnetic at all finite temperatures. At variance with the Bethe
approximation, the single
instance behavior of the message passing is not so clearly related to
the average case critical temperature, as we show in the next section.
\section{Performance of GBP on 2D EA Model}
\label{ConvergenceProblems}
Before studying GBP message passing for the plaquette-CVM
approximation, let us check what happens to the simpler Bethe
approximation and the corresponding message passing algorithm known as
Belief Propagation (BP) in the 2D EA model. When running BP at high
temperatures (above $T_{\text{Bethe}} = 1/\beta_{\text{Bethe}} \simeq 1.51$) in a
typical instance of the model with bimodal interactions, we find the
paramagnetic solution (given by all fields $u=0$), and therefore, the
system is equivalent to a set of independent interacting pairs of
spins, which is only correct at infinite temperature. The Bethe
temperature $T_{\text{Bethe}}$ (computed in average case and exact on acyclic
graphs \cite{defTbethe}), seems to mark precisely the point where BP
stops converging (see Fig. \ref{fig:prob_conv_GBP-BP}). Indeed
messages flow away from zero below $T_{\text{Bethe}}$, and convergence of the
BP message passing algorithm is not achieved anymore. So, the Bethe
approximation is disappointing when applied to single instances of the
Edwards Anderson model: either it converges to a paramagnetic solution
at high temperatures, or it does not converge at all below
$T_\text{Bethe}$.
\begin{figure}[tb]
\includegraphics[angle=270,width=0.8\textwidth]{resources/prob_conv_GBP-BP.eps}
\caption[0]{Probability of convergence of BP and GBP on a 2D EA model,
with random bimodal interactions, as a function of inverse
temperature $\beta = 1/T$. The Bethe spin glass transition is expected
to occur at $\beta_{\text{Bethe}}\simeq 0.66$ on a random graph with the same
connectivity. The BP message passing algorithm on the 2D EA model stops
converging very close to that point. At higher temperatures, the BP
equations converge to the paramagnetic solution, \textit{ i.e. } all messages
are trivial, $u=0$. Just below the Bethe temperature, the Bethe
instability takes messages away from the paramagnetic solution, and
the presence of short loops is thought to be responsible for the lack
of convergence. On the other hand, the GBP equations converge at
lower temperatures, but eventually stop converging as well.}
\label{fig:prob_conv_GBP-BP}
\end{figure}
The natural question arises as to what extent the GBP message passing
algorithm for the plaquette-CVM approximation is also non convergent
below its critical temperature, and whether this temperature coincides
with the average case one. To check this we used GBP message passing
equations (\ref{eq:message-u}) and (\ref{eq:message-Uuu}), with a
damping factor $0.5$ in the Link-to-Site fields $u$:
\[u^{\text{new}}_{L\to i} = 0.5 \: u^{\text{old}}_{L\to i} + 0.5\:
\hat u (\#) \] We will distinguish between two types of
solutions for the GBP algorithm. The high temperature or paramagnetic
solution is characterized by zero local magnetization of spins $m_i =
\sum_{s_i} s_i b_i (s_i) = \tanh \left(\beta \sum_L^4 u_{L\to i}\right) =
0$. At low temperatures, following the average case analysis, a non
paramagnetic or spin-glass solution should appear, characterized by
non zero local magnetizations, but roughly null global
magnetization. The temperature at which non zero local magnetizations
appear will be called $T_\text{SG} = 1/\B_\text{SG}$.
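The damped update above is a standard stabilization of fixed-point iterations. A generic sketch (ours, with illustrative names), declaring convergence when the largest message change falls below a tolerance:

```python
import numpy as np

def iterate_messages(update, u0, damping=0.5, tol=1e-8, max_iter=10_000):
    """Damped fixed-point iteration u <- d*u_old + (1-d)*update(u_old),
    as in the Link-to-Site update with damping factor d = 0.5."""
    u = np.asarray(u0, dtype=float)
    for it in range(1, max_iter + 1):
        u_new = damping * u + (1.0 - damping) * update(u)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, True, it   # converged fields, flag, iterations used
        u = u_new
    return u, False, max_iter
```

In the actual algorithm, \texttt{update} stands for the right hand side of the message passing equations evaluated on the current fields.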
Figure \ref{fig:prob_conv_GBP-BP} shows that GBP is able to converge
below the Bethe critical temperature, but stops converging before the
CVM average case critical temperature $\B_\text{CVM}\simeq 1.22$. Furthermore,
figure \ref{fig:sg_frac} shows that even before it stops converging, GBP
finds a spin glass solution in most instances.
\begin{figure}[tb]
\includegraphics[angle=270,width=0.8\textwidth]{resources/sg_frac.eps}
\caption[0]{Data points correspond to the fraction of SG solutions in
a population of 100 systems of sizes $16^2$, $32^2$, $64^2$,
$128^2$, $256^2$ respectively. At high temperatures (low $\beta$) GBP
message passing always converges to the paramagnetic solution. The
average case critical inverse temperature $\B_\text{CVM}\simeq 1.22 $ does
not correspond to the single instance behavior, as the spin glass
solutions in GBP appear around $\B_\text{SG} \simeq 0.79$. The inset shows
that all data collapse when plotted as a function of the scaling
variable $L^{0.9} (\beta-0.79)$, where the exponent $0.9$ and the
critical inverse temperature $\B_\text{SG} \simeq 0.79$ are obtained from
best data collapse.}
\label{fig:sg_frac}
\end{figure}
The inner plot of figure \ref{fig:sg_frac} shows a collapse of the
data points for different system sizes using the scaling variable
$L^{0.9} (\beta-0.79)$, which gives an estimate $\B_\text{SG}\simeq 0.79$ (the
exponent $0.9$ is obtained from the best data collapse). Since
$\B_\text{SG}\simeq 0.79$ is well below the average case inverse critical
temperature $\B_\text{CVM} \simeq 1.22$, the relevance of the latter on the
behavior of GBP on single samples is questionable. By a similar data
collapse procedure, we estimate the non-convergence temperature for
the GBP message passing algorithm to be $\beta_{\text{conv}} \simeq 0.96$
(see Fig.~\ref{fig:GBPMat_B_c}), which is again far away from the
average case prediction $\B_\text{CVM}$.
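The collapse itself is just a rescaling of the abscissa; a minimal sketch (ours) of the transformation used in the inset:

```python
import numpy as np

def scaling_variable(L, beta, beta_c=0.79, exponent=0.9):
    """Map the inverse temperature to the scaling variable
    L^exponent * (beta - beta_c) used for the data collapse."""
    return L ** exponent * (np.asarray(beta, dtype=float) - beta_c)
```

The best values of \texttt{beta\_c} and \texttt{exponent} are those that make the rescaled curves for all system sizes overlap.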
So, beyond the simple Bethe approximation, we found three different
temperatures in the CVM approximation: $\B_\text{SG}\simeq 0.79 <
\beta_{\text{conv}} \simeq 0.96 < \B_\text{CVM} \simeq 1.22$ corresponding
respectively to the appearance of spin glass solutions, to the lack of
convergence on single instances, and to the average case prediction
for the critical temperature.
We can summarize three main differences between the properties of BP
and GBP. At high temperatures ($\beta$ below $\B_\text{SG} \simeq 0.79$) GBP gives a
quite good approximation to the marginals \cite{dual}, namely the
paramagnetic solution with non trivial correlation fields $U\neq 0$,
while BP treats the system as a set of independent pairs of linked
spins. Furthermore, this naive approach is all that BP can do for us,
since above $\beta_\text{Bethe}\simeq 0.66$, it no longer converges. GBP,
on the other hand, is not only able to converge beyond
$\beta_\text{Bethe}$, but it is also able to find spin glass solutions
above $\B_\text{SG}$. The third difference between both algorithms is that the
non convergence of BP seems to occur exactly at the same temperature
where a spin glass phase should appear (and arguably because of it),
while the GBP convergence problems appear deep into the spin glass
phase. The lack of convergence of GBP, however, seems to depend
strongly on implementation details as we show next.
\section{Gauge invariance of GBP equations}
\label{gauge}
The convergence properties of the GBP message passing are sensitive to
implementation details, \textit{ e.g. } the damping value in the update equations,
and this is not an inherent property of the CVM (or region-graph)
approximation. We might try, for instance, to update simultaneously
all \textit{small-}u fields pointing towards a given spin, hoping to
gain some more stability in the message passing algorithm. When trying to
do this we find out that there is a freedom in the choice of these
fields that has no effect over the fixed point solutions. This freedom
(similar to the one noticed in \cite{martin_protein}) is the result of
having introduced unnecessary Lagrange multipliers to enforce
marginalization constraints that were already indirectly enforced.
\begin{figure}[!htb]
\includegraphics[angle=0,width=0.5\textwidth]{resources/invariance_u.eps}
\caption[0]{Null modes of the plaquette CVM free energy in terms of
fields. The small-$u$ fields that act over a given spin $i$ inside a
plaquette can be shifted by an arbitrary amount $\delta$ as in
equation (\ref{eq:gauge_transfor}) without changing the self
consistent (message passing) equations.}
\label{fig:invariance}
\end{figure}
Consider, for instance, the messages shown in figure
\ref{fig:invariance}. If the belief on a plaquette
$b_P(s_i,s_j,s_k,s_l)$ correctly marginalizes to the beliefs of two of
its children links $b_{L}(s_i,s_j)$ and $b_{D}(s_l,s_i)$, and one of
those beliefs marginalizes to the common spin $b_i(s_i)= \sum_{s_j}
b_{L}(s_i,s_j)$, it is inevitable that the second link $D$ also
marginalizes to the same belief on $s_i$, since $b_i(s_i) = \sum_{s_j}
b_{L}(s_i,s_j) = \sum_{s_j,s_l,s_k} b_{\mathcal{P}}(s_i,s_j,s_k,s_l) =
\sum_{s_l} b_{D}(s_l,s_i)$. Therefore the Lagrange multiplier that was
introduced to force this last marginalization is not needed. This
redundancy is a general feature of GBP equations when there are more
than two levels of regions (Plaquette, Links, and Spins, in our case).
Having introduced unnecessary multipliers leads to
a gauge invariance in the values of the fields (messages). Such invariance
can be better understood by looking at the GBP equations at infinite
temperature: for $\beta =0$ the non linear parts of the message passing
equations (\ref{eq:message-u}) and (\ref{eq:message-Uuu}) disappear,
but there is still a set of linear equations to be satisfied for the
small-$u$ messages with infinitely many non trivial solutions. These
solutions correspond, however, to the same physical paramagnetic
solution, since the total field $h_i=\sum_L^4 u_{L\to i}$ and the
magnetizations $m_i = \tanh(\beta h_i) $ are always zero. It is easy to
check that once we have a solution of the message passing equations
(\ref{eq:message-u}) and (\ref{eq:message-Uuu}) at any temperature, we
can change by an arbitrary amount $\delta$ any group of 4 $u$-messages
inside a plaquette (figure \ref{fig:invariance}) pointing to the same
spin as
\begin{eqnarray}
u_{L\to i}&\to& u_{L\to i}+\delta\:, \nonumber\\
u_{\mathcal{P}_L\to i}&\to& u_{\mathcal{P}_L\to i}+\delta\:, \label{eq:gauge_transfor}\\
u_{D\to i}&\to& u_{D\to i}-\delta\:, \nonumber \\
u_{\mathcal{P}_D\to i}&\to& u_{\mathcal{P}_D\to i}-\delta\:, \nonumber
\end{eqnarray}
and still all self-consistent equations are satisfied.
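A quick numerical check of this invariance (our own sketch): under the shift (\ref{eq:gauge_transfor}) the sum of the four shifted small-$u$ fields, and in particular the pair of Link-to-Site messages entering the local field $h_i$, is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauge_shift(u_L, u_PL, u_D, u_PD, delta):
    """Gauge transformation of eq. (gauge_transfor): shift by +/- delta the
    four small-u fields of one plaquette pointing to the same spin."""
    return u_L + delta, u_PL + delta, u_D - delta, u_PD - delta

# The combinations entering the local field are invariant:
fields = rng.normal(size=4)
shifted = gauge_shift(*fields, delta=0.7)
assert abs(sum(shifted) - fields.sum()) < 1e-12          # total sum unchanged
assert abs((shifted[0] + shifted[2]) - (fields[0] + fields[2])) < 1e-12
```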
This local null mode of the standard GBP equations can be avoided by
arbitrarily setting to zero one of the four small-$u$ fields entering
equation (\ref{eq:gauge_transfor}). We choose to fix the gauge by
removing the right small-$u$ field in every Plaquette-to-Link field
$(U,u_{\text{left}},u_{\text{right}})$, as shown in figure
\ref{fig:gauge_fixed_matrix}. Once the gauge is fixed, the fields are
uniquely determined, and we can try to implement the simultaneous
updating of all \textit{small-}u fields around a given spin, hopefully
improving convergence.
\begin{figure}[!htb]
\includegraphics[width=0.8\textwidth]{resources/matrix_invariance_u.eps}
\caption[0]{In the left diagram, all 8 \textit{small-}$u$ messages
pointing to the central spin are highlighted with bold face. They
are the 4 Link-to-Site $u$-messages and the 4 Plaquette-to-Link
$u_{\text{left}}$-messages, which are linearly related to each
other. The right diagram shows four plaquettes around a spin, and the
messages that contribute in a non linear way to the aforementioned 8
messages. The idea of GBP+GF is to compute the non linear
contributions to the message passing equations, and then assign the
values of the $u$-messages in order to satisfy their linear
relations.}
\label{fig:gauge_fixed_matrix}
\end{figure}
In the left diagram of figure \ref{fig:gauge_fixed_matrix} all
messages involving the central spin are represented, and in bold face
those that act precisely upon that spin. These messages enter linearly
in the message passing equations of each other (see equations
(\ref{eq:message-u}) and (\ref{eq:message-Uuu})). Therefore, the self
consistent equations they should satisfy at the fixed points can be
written as (using the notation of figure \ref{fig:gauge_fixed_matrix})
\begin{equation}
\begin{array}{ll}
\begin{array}{ll}
u_1 &= u_a + NL_1 \\
u_2 &= u_b + NL_2 \\
u_3 &= u_c + NL_3 \\
u_4 &= u_d + NL_4
\end{array} & \quad
\begin{array}{ll}
u_a &= u_b - u_2 + NL_a \\
u_b &= u_c - u_3 + NL_b \\
u_c &= u_d - u_4 + NL_c \\
u_d &= u_a - u_1 + NL_d
\end{array}
\end{array} \label{eq:linear_system}
\end{equation}
where the $NL$ stand for the non linear contributions to the
corresponding equation. As a consequence, the values of the 8
$u$-messages pointing to the central spin can be assigned precisely by
a linear transformation for any given values of the non linear
contributions. This gauge fixed updating method, which we will call
GBP+GF, updates all $u$-messages around a spin simultaneously and in a
way that they are consistent with each other via the message passing
equations.
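The linear step of GBP+GF can be made explicit. Ordering the unknowns as $(u_1,\ldots,u_4,u_a,\ldots,u_d)$, the system (\ref{eq:linear_system}) becomes an invertible $8\times 8$ linear system; a sketch in Python:

```python
import numpy as np

# Rows encode eq. (linear_system); column order: u1, u2, u3, u4, ua, ub, uc, ud.
A = np.array([
    [1, 0, 0, 0, -1,  0,  0,  0],   # u1 - ua      = NL1
    [0, 1, 0, 0,  0, -1,  0,  0],   # u2 - ub      = NL2
    [0, 0, 1, 0,  0,  0, -1,  0],   # u3 - uc      = NL3
    [0, 0, 0, 1,  0,  0,  0, -1],   # u4 - ud      = NL4
    [0, 1, 0, 0,  1, -1,  0,  0],   # ua - ub + u2 = NLa
    [0, 0, 1, 0,  0,  1, -1,  0],   # ub - uc + u3 = NLb
    [0, 0, 0, 1,  0,  0,  1, -1],   # uc - ud + u4 = NLc
    [1, 0, 0, 0, -1,  0,  0,  1],   # ud - ua + u1 = NLd
], dtype=float)

def solve_gauge_fixed(NL):
    """Assign the 8 u-messages around a spin consistently, given the
    non-linear contributions NL = (NL1..NL4, NLa..NLd)."""
    return np.linalg.solve(A, np.asarray(NL, dtype=float))
```

Since the gauge has been fixed, $\mathbf A$ is non singular and the assignment is unique.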
\begin{figure}[!htb]
\includegraphics[angle=270,width=0.8\textwidth]{resources/prob_conv_MatGBP_64.eps}
\caption[0]{Convergence probability of GBP and GBP+GF as a function of
$\beta$. The solution found by either iteration method is always the
same (when both converge), but GBP+GF reaches lower temperatures
while converging. The fraction of spin glass solutions found by
either algorithm shows that GBP+GF sees the same spin glass
transition temperature. The fraction of spin glass solutions is
always given with respect to the number of convergent runs.}
\label{fig:prob_conv_MatGBP_64}
\end{figure}
The right diagram in figure \ref{fig:gauge_fixed_matrix} shows the
messages entering the non linear parts. Setting the 8 $u$-messages to
zero, the non linear contributions are obtained as the right hand sides of the
message passing equations involved. With the non linear parts
computed, the system of equations (\ref{eq:linear_system}) is solved
for the $u$-variables by multiplying the vector of non linearities by the
corresponding inverse matrix. The 8 $u$-messages are then updated, usually
with a damping factor. The update of the $U$ correlation fields is
done as in the original GBP method, via the equation
(\ref{eq:message-Uuu}), since it does not depend on the $u$-messages
that are being updated.
\begin{figure}[!htb]
\includegraphics[angle=270,width=0.7\textwidth]{resources/BconvvsN.eps}
\caption[0]{Estimate of the non convergence temperature for different
system sizes using the standard GBP (squares) and the Gauge Fixed
GBP (circles). As shown, with the gauge fixed procedure the non
convergence extrapolated temperature is quite close to the average
case prediction $\B_\text{CVM} \simeq 1.22$. Each data point corresponds to
the average of the non convergence temperature over many
realizations of the disorder: 10 realizations for the $512 \times
512$ systems, 20 for the $256 \times 256$ and 100 for the others.}
\label{fig:GBPMat_B_c}
\end{figure}
Figure \ref{fig:prob_conv_MatGBP_64} shows the probability of
convergence versus inverse temperature for GBP and GBP+GF, and also
the fraction of the solutions found that correspond to a spin glass
phase. Let us emphasize here that GBP and GBP+GF are not different
approximations, but different methods to find the same fixed point
solution by message passing. They are expected to find the same
solutions, and in fact they do. At high temperatures both methods
converge to the paramagnetic solution, with all null local
magnetizations $m_i=\tanh \left(\beta\sum_L^4 u_{L\to i} \right)=0$. The
standard message passing update of GBP equations hardly converges
above $\beta_\text{conv} \simeq 0.96$, while the GBP+GF method reaches
lower temperatures, $\beta_{\text{conv-GF}}\simeq 1.2$, as can be seen in
Fig.~\ref{fig:GBPMat_B_c}. Furthermore, the GBP+GF allows us to work
in a range of temperatures where most solutions are spin glass
like. This proves that the non converging temperature found for GBP,
$\beta_{\text{conv}}\simeq 0.96$, is not a feature of the CVM
approximation, but a characteristic of the message passing method
used, and can be overcome by other message passing schemes, like
GBP+GF. Note in figure \ref{fig:GBPMat_B_c} that the non
convergence inverse temperature of GBP+GF $\beta_{\text{conv-GF}}\simeq
1.2$ is quite close to the average case prediction for the critical
temperature $\B_\text{CVM} \simeq 1.22$. Whether this is accidental or not is
still unclear. Since the average case instability should describe the
breakdown of the paramagnetic phase, and the lack of convergence in
single instances occurs while already in a non paramagnetic phase, it
seems far-fetched to assume that both critical behaviors are related.
\subsection{Gauge fixed average case stability}
The disagreement between the average case critical temperature $\B_\text{CVM}$
and the one observed in single instances, $\B_\text{SG}$, can be due to a
number of reasons. First, the average case calculation assumes that
cavity fields are uncorrelated. But, in our case, messages
participating in the cavity iteration are very close to each other in
the lattice, and thus correlated. Furthermore, GBP does not have the
equivalent of a Bethe lattice for BP, i.e.\ a model in which the
correlation between cavity messages is close to zero by
construction. The second reason for a failure of the average case
prediction is that the transition we observe in single instances might
be due to the almost inevitable appearance of ferromagnetic domains in
large systems (Griffiths instability). The third, and the most obvious
reason, is that the gauge invariance was not accounted for in the
average case calculation.
Reproducing the method of Sec.~\ref{Tc} to obtain an average case
prediction of the critical temperature for the Gauge Fixed GBP is not
straightforward. The reason is that the Link-to-Site messages $u$ should
fulfill two different equations: their own original equation
(\ref{eq:message-u}), and the implicit equation derived from the fact
that the gauge is fixed and one of the fields in the Plaquette-to-Link
message $(U,u,u)$ is set to zero.
\begin{figure}[!htb]
\includegraphics[angle=0,width=0.6\textwidth]{resources/ave_case_pop.eps}
\caption[0]{Left: The set of four messages that we compute jointly by
population dynamics. Right: the population dynamics step consists of
taking four quadruplets at random from the population (those in
black), and computing a new quadruplet (the one in gray inside the
plaquette) using randomly selected interactions $J_{ij}$ on the
plaquette.}
\label{fig:ave_case_pop_diagram}
\end{figure}
However, a different average case calculation is possible. We can
represent the messages flowing in the lattice by a population of
quadruplets $(u_{L_l\to l},u_{\mathcal P \to l},U_{\mathcal{P}\to lr},
u_{L_r\to r})$, where one of the original messages is absent because
the gauge has been fixed (see left panel in
Fig.~\ref{fig:ave_case_pop_diagram}). Given any four of these
quadruplets of messages around a plaquette, we can compute, using the
message passing equations, the new messages inside the plaquette (see
right panel in Fig.~\ref{fig:ave_case_pop_diagram}). The new
population dynamics consists of picking four of these quadruplets out
of the population at random, then computing the new quadruplet (using
also random interactions in the plaquette) and finally putting it back in
the population. After several steps, the population stabilizes either
to a paramagnetic solution (where all $u=0$ and only $U\neq 0$),
or to a non paramagnetic one (where also $u \neq 0$).
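A scaffold of this quadruplet population dynamics (our sketch): here \texttt{plaquette\_update} stands for the actual message passing computation of eqs. (\ref{eq:message-u}) and (\ref{eq:message-Uuu}), which we do not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_sweep(pop, plaquette_update, n_steps):
    """Quadruplet population dynamics: draw four quadruplets and random
    bimodal couplings, compute the new quadruplet with the message
    passing update, and put it back at a random position."""
    pop = list(pop)
    for _ in range(n_steps):
        parents = [pop[i] for i in rng.integers(len(pop), size=4)]
        J = rng.choice([-1.0, 1.0], size=4)   # random couplings on the plaquette
        pop[rng.integers(len(pop))] = plaquette_update(parents, J)
    return pop
```

After enough sweeps, the statistics of the pool (e.g.\ the fraction of non-zero $u$ components) distinguishes the paramagnetic from the spin glass solution.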
\begin{figure}[tb]
\includegraphics[angle=270,width=0.7\textwidth]{resources/qEA_vs_Beta_pop.eps}
\caption[0]{Edwards Anderson order parameter, see eq.(\ref{eq:qEA}),
obtained using a population of $N=10^3$ messages, and running the
population dynamics step $10^3 \times N$ times. In agreement with the
single instance behavior, the transition between paramagnetic
($q_{EA} =0$) and non paramagnetic (spin glass) phases is found at
$\beta \simeq 0.78$.}
\label{fig:ave_case_pop}
\end{figure}
In Fig. \ref{fig:ave_case_pop} we show the Edwards Anderson order
parameter $q_{EA} = \sum_i m_i^2 / N$ obtained at different
temperatures using this population dynamics average case method. We
find that $q_{EA}$ becomes larger than zero at $\beta_\text{CVM-GF}
\simeq 0.78$, which is quite close to the inverse temperature $\B_\text{SG}
\simeq 0.79$ where single instances develop non-zero local
magnetizations and a spin glass phase. The correspondence between this
average case result and the single instance behaviour is very
enlightening: indeed the average case computation does not take into
account correlations among quadruplets of messages and it is not
sensitive to Griffiths singularities. So, the simplest explanation
for the GBP+GF behaviour on single samples of the 2D EA model is that
quadruplets of messages arriving on any given plaquette are mostly
uncorrelated and that at $\B_\text{SG}$ a true spin glass instability takes
place (which is an artifact of the mean-field like approximation).
Note that under the Bethe approximation the SG instability
happens at $\beta_\text{Bethe} \simeq 0.66$, while the CVM
approximation improves the estimate of the SG critical boundary to
$\B_\text{SG} \simeq 0.79$ (on single instances) and to $\beta_\text{CVM-GF}
\simeq 0.78$ (on the average case).
\section{Same approximation, four algorithms}
\label{GBPvsDL}
It can be proved \cite{yedidia} that stable fixed points of the
message passing equations correspond to stationary points of the
region graph approximated free energy (or CVM free energy). The
converse is not necessarily true, and some of the stationary points of
the free energy might not be stable under the message passing
heuristic. As we have seen, the message passing might not even
converge at all. For a given free energy approximation
(eq. (\ref{eq:freeen}) in our case), there are other algorithms to
search for stationary points, including other types of message passing
and provably convergent algorithms. In this section we study two of
these algorithms and show that they do find the same spin glass like
transition at $\B_\text{SG}$, but have a different behavior at lower
temperatures.
The one presented so far is the so called Parent-to-Child (PTC)
message passing algorithm, in which Lagrange multipliers are
introduced to force marginalization of bigger (parent) regions onto
their children. Other choices of Lagrange multipliers are possible
\cite{yedidia}, leading to the so called Child-to-Parent and Two-Ways
algorithms. Next we test the following four algorithms for minimizing
the plaquette-CVM free energy in typical instances of 2D EA:
\begin{itemize}
\item Double-Loop algorithm of Heskes \textit{ et al. } \cite{HAK03}. It is a
provably convergent algorithm that guarantees a step by step
minimization of the free energy functional. It consists of two
loops, the inner of which is a Two-Ways message passing algorithm
that we will call HAK. We use the implementation in the LibDai public
library \cite{libdai0.2.3}.
\item HAK algorithm. It is a Two-Ways message passing
algorithm \cite{HAK03}. When it converges, it is usually faster than
Double-Loop.
\item GBP Parent-to-Child is the message passing algorithm we have
presented so far in this paper, and for which the simultaneous
updating of cavity fields was introduced to help
convergence. Nevertheless the following results were obtained using
standard GBP PTC.
\item Dual algorithm of \cite{dual}. It is the same GBP PTC, setting all
small fields $u=0$, and doing message passing only in terms of
correlation fields $U$ (first equation in
eq. (\ref{eq:message-Uuu})).
\end{itemize}
For the last three algorithms we use our own implementation in terms
of cavity fields $u$ and $(U,u_a,u_b)$. The dual algorithm forces the
solution of GBP to remain paramagnetic since all $u=0$. This
paramagnetic ansatz is especially suited for the 2D EA model since it
is expected to be paramagnetic at any finite temperature (in the
thermodynamic limit).
\begin{figure}[tb]
\includegraphics[angle=270,width=0.8\textwidth]{resources/free_and_qEA.eps}
\caption[0]{Free energy of the solutions found by Double Loop
algorithm, HAK and the GBP PTC algorithm relative to the free energy
of the paramagnetic solution (Dual approximation), in a typical
system in which GBP PTC finds a spin glass solution. At high
temperatures all algorithms find the same paramagnetic
solution. Interestingly, there is a small range of temperatures
where the spin glass solution found by GBP is actually the one that
minimizes the free energy. But at even lower temperatures the
paramagnetic solution becomes again the correct one. While Double
Loop and HAK switch back to the paramagnetic solution (even if at a
wrong $T$), the GBP PTC gets stuck in the spin glass solution (and
for this reason, it eventually stops converging).}
\label{fig:free}
\end{figure}
As shown in the previous section, the GBP PTC message passing
algorithm finds a paramagnetic solution in the 2D EA model at high
temperatures, while below $T_\text{SG}=1/\B_\text{SG} \simeq 1.27$ it finds a
spin glass like solution. By spin glass like we mean that the total
field $h_i=\sum_L^4 u_{L\to i}$ and the magnetization $m_i = \tanh(\beta
h_i)$ are non zero and change from spin to spin. The order parameter
\begin{equation}
q_{\text{EA}} = \frac{1}{N} \sum_{i} m_i^2 \label{eq:qEA}
\end{equation}
is used to locate this phase. The critical temperature $T_\text{SG}$,
where $q_\text{EA}$ becomes larger than zero, seems to be independent
of message passing details, like damping or the use of gauge fixing
for simultaneous updates of fields.
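For completeness, computing eq. (\ref{eq:qEA}) from the converged total fields is direct; a minimal sketch:

```python
import numpy as np

def q_EA(h, beta):
    """Edwards-Anderson order parameter from the total local fields h_i:
    m_i = tanh(beta * h_i), q_EA = (1/N) sum_i m_i^2."""
    m = np.tanh(beta * np.asarray(h, dtype=float))
    return float(np.mean(m ** 2))
```

A paramagnetic solution gives $q_\text{EA}=0$ exactly, while any non-zero local magnetizations give $q_\text{EA}>0$.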
In figure \ref{fig:free} we show the free energy and the
$q_{\text{EA}}$ parameter of the solutions found by Double Loop, HAK
and GBP PTC for two typical realizations of an $N=16\times 16$ EA
system with bimodal interactions. The free energy of the dual
approximation is subtracted to highlight the differences with respect
to the paramagnetic solution. The figure shows that HAK and Double
Loop do find the same spin glass solution that GBP PTC finds when
going down in temperature. This solution is actually lower in free
energy when it appears, but at even lower temperatures becomes
subdominant compared to the paramagnetic one. The GBP PTC keeps
finding the spin glass solutions while Double Loop and HAK switch back
to the paramagnetic one. This is an interesting feature of Double Loop
and in particular of HAK which is a fast message passing algorithm. By
returning to the dual (paramagnetic) solution, HAK is also ensuring
its convergence at low temperature \cite{dual}, while GBP PTC gets lost
in the irrelevant (and physically wrong) spin glass solution, and
eventually stops converging.
However, note that DL and HAK may stop finding the SG solution when
this solution is still the one with the lower free energy. Moreover the
lack of convergence of GBP can be used as a warning that something
wrong is happening with the CVM approximation, something that is
impossible to understand by looking at the behavior of provably
convergent algorithms.
\begin{figure}[tb]
\includegraphics[angle=270,width=0.8\textwidth]{resources/conv_time.eps}
\caption[0]{Convergence time in seconds for the Double Loop algorithm
(full points) and standard message passing algorithms (empty points)
for the plaquette-GBP approximation in two different realizations of
a $16^2$ Edwards Anderson system. Message passing algorithms are
typically faster, but not always convergent. The first cusp is
related to the appearance of the spin glass solution, while the
second cusp in the Double Loop algorithm is related to the switching
back to the paramagnetic solution (see Fig. \ref{fig:free}).}
\label{fig:t_conv_libdai_GBP}
\end{figure}
In figure \ref{fig:t_conv_libdai_GBP} we compare the running times of
Double Loop (LibDai \cite{libdai0.2.3}), HAK and GBP PTC (our
implementation) for the two systems of figure \ref{fig:free}. As
expected, Double Loop is much slower than the message passing
heuristics of HAK and GBP (please notice the log scale in the time
axis). The peaks in running times correspond to the transition points
from paramagnetic to spin glass solution. Double Loop and HAK have two
peaks, the second corresponding to the transition back to paramagnetic
solution, while the GBP PTC has only the first peak.
\section{Summary and Conclusions}
\label{Conclusions}
We studied the properties of the Generalized Belief Propagation
algorithm derived from a Cluster Variational Method approximation to
the free energy of the Edwards Anderson model in 2D at the level of
plaquettes. We compared the results obtained by Parent-to-Child GBP
with the ones obtained by the Dual (paramagnetic) algorithm
\cite{dual} and by HAK Two-Ways algorithm \cite{HAK03} and Double-Loop
provably convergent algorithm \cite{HAK03}.
We found that the plaquette-CVM approximation (using Parent-to-Child
GBP) is far richer than the Bethe (BP) approximation for the 2D EA
model. BP converges only at high temperatures (above
$T_{\text{Bethe}}=1/\beta_{\text{Bethe}} =1.51$), and in that case it
treats the system as a set of independent pairs of linked spins. GBP,
on the other hand, makes a better prediction of the paramagnetic
behavior of the model at high T, since it implements a message passing
of correlation fields flowing from plaquettes to links in the
graph. Furthermore, with GBP the paramagnetic phase is extended to
temperatures below $T_\text{Bethe}=1.51$, down to $T_\text{SG} = 1/\B_\text{SG}
\simeq 1.27$, where spin glass solutions appear in the single instance
implementation of the message passing algorithm. In contrast to the Bethe
approximation, GBP is able to find spin glass solutions, and the
standard message passing stops converging near $T_{\text{conv}} \simeq
1$.
The average case calculation of the stability of the paramagnetic
solution in the CVM approximation predicted that non paramagnetic
(spin glass) solutions should appear at lower temperatures, $ T_\text{CVM} =
1/\B_\text{CVM} \simeq 0.82$. This average case result does not coincide with
the single instance behavior of the standard GBP, since it fails to
mark both the point where GBP starts finding spin glass solutions,
$T_\text{SG}$, and the point where GBP stops converging,
$T_\text{conv}$.
However, the non convergence of GBP is not a feature of the CVM
approximation, and is susceptible to changes from one implementation
of the message passing to another. We showed that, by fixing a hidden
gauge invariance in the message passing equations, a simultaneous
update of all cavity fields pointing to a single spin in the lattice
improves the convergence of the algorithm without drastically changing
its speed. Using the gauge fixed GBP, the non convergence
temperature moves to $T_{\text{conv-GF}} \simeq
1.2$, quite close to the average case prediction $T_\text{CVM}$ (whether this
is only a coincidence is still not clear). Most importantly, the average
case computation (population dynamics) with the gauge fixing identifies
the same SG critical temperature $T_\text{CVM-GF} \simeq 1.28$ measured
on single samples (where $T_\text{SG} \simeq 1.27$).
Finally, we compared the fixed point solutions found by the GBP message
passing with those found by the provably convergent Double-Loop
algorithm and by the message passing heuristic of the Two-Ways algorithm
of \cite{HAK03}. All the algorithms find the same paramagnetic
solutions at high T, while below $T_\text{SG}$ they find a spin glass
solution, in the sense that local magnetizations are non zero while
the global magnetization is null. Upon further decreasing the temperature,
Double-Loop and HAK switch back from the spin glass to the
paramagnetic solution, at the cost of a factor $10^2-10^3$ and
$10-10^2$, respectively, in running time compared to GBP. Furthermore,
the paramagnetic solution can always be found quickly by the Dual
algorithm of \cite{dual}, making these two algorithms (Double-Loop and
HAK) unnecessarily slow.
Although the thermodynamics of the 2D EA model is paramagnetic, at low
temperatures the correlation length grows until it eventually surpasses
$L/2$, and is therefore effectively infinite for any finite size 2D
system. In such a situation the non paramagnetic solutions obtained by
GBP can account for long range correlations, and presumably give
better estimates of the correlations among spins than the
paramagnetic solution obtained by HAK and Double Loop.
Establishing the previous claim requires a detailed study of the
quality of the CVM approximation at low temperatures (in the non
paramagnetic range) and of its connections to the statics and dynamics of
the 2D Edwards Anderson model, which is already under way. The application
of CVM and GBP message passing to the Edwards Anderson model in 3D is also
appealing, since that model does have a spin glass phase at low
temperature.
\section{Introduction}
For a long time, the scientific community tried to preserve classical determinism for quantum events. One of the most relevant and best-structured theories on this theme comes from the de Broglie-Bohm formulation of quantum mechanics \cite{HOLLIVRO:93,bricmont2016broglie,sanz2019bohm,styer2002nine,belinsky2019david,bohm1952suggested,bohm85suggested,de1928nouvelle,bacciagaluppi2009quantum}. Based on de Broglie's pilot wave conception \cite{de1928nouvelle,bacciagaluppi2009quantum}, David Bohm proposed a theoretical formulation of quantum mechanics \cite{HOLLIVRO:93,bricmont2016broglie,sanz2019bohm,styer2002nine,belinsky2019david}, in which quantum events are driven by an essentially quantum potential, arising from the interaction between the particle and its guiding wave, which is responsible for the quantum nature of the events during the system dynamics
\cite{sanz2019bohm,pinto2013quantum,hasan2016effect,lentrodt2020ab,gonzalez2007quantum,gonzalez2009bohmian,gonzalez2004bohmian,gonzalez2009bohmian2,gonzalez2008effective}.
{Recent works have proposed alternative applications of this formulation \cite{bricmont2016broglie,sanz2019bohm,pinto2013quantum,becker2019asymmetry,sanz2015investigating,batelaan2015dispersionless}, unraveling interesting properties and interpretations of the dynamics of quantum systems \cite{becker2019asymmetry,batelaan2015dispersionless}. The existence of the quantum potential provides a path towards understanding these systems in a Newtonian-like view, through the existence of the so-called Bohm's quantum force \cite{maddox2003estimating,becker2019asymmetry,batelaan2015dispersionless}. }
Recently, Becker \textit{et al.} \cite{becker2019asymmetry} observed the quantum force predicted by Shelankov \cite{shelankov1998magnetic}, Berry \cite{berry1999aharonov} and Keating \cite{keating2001force} for an Aharonov-Bohm physical system, providing the experimental support for the evidence of the quantum force in the Aharonov-Bohm effect \cite{aharonov1959significance}.
{In this context, we show in this paper an application of the de Broglie-Bohm Quantum Theory of Motion (QTM) to estimate Bohm's quantum force in the quantum dynamics of a Gaussian wavepacket, with and without the presence of a classical potential. For that, we consider two situations, the first one associated with the free particle case \cite{livre,belinfante1974survey}, and the second one related to a system subjected to the Eckart potential model \cite{gauss,dewdney1982quantum}. The dynamic variables were analyzed through the temporal propagation technique, using the popular and easy-to-implement finite-difference method, facilitating the reproduction of this analysis by most undergraduate and graduate quantum mechanics students \cite{finite}.}
Our results show that, in the absence of a classical potential, the system experiences quantum effects arising from an effective force intrinsically related to the existence of the wavepacket itself, while classical determinism is in some way preserved. Moreover, in the scattering by a classical potential, the wavepacket experiences a quantum force effect which depends on the presence of the potential, even in the absence of any classical force field, perceiving it even before the explicit interaction. This strengthens the fact that classical potentials can act without force fields, and gives us indications that the nature of the Aharonov-Bohm effect can be observed for different classical potentials.
{Therefore, this application could be used as an introduction to the concept of Bohm's quantum force, presenting the QTM as a useful working tool for studying quantum dynamics, instead of merely an alternative interpretation of the quantum theory.}
\section{de Broglie-Bohm Interpretation}\label{sec-1}
The de Broglie-Bohm QTM presents an interesting interpretation of quantum mechanics, in which a quantum system can be seen as two intrinsic counterparts: a wave and a point particle~\cite{HOLLIVRO:93,maddox2003estimating,sanz2019bohm}. In this context, an individual system comprises one wave that propagates through spacetime, driving the motion of a point particle. The wave is mathematically described by a function $\Psi(q_i;t)$, which is a solution of the Schr\"odinger equation, in such a way that
\begin{eqnarray}
\Psi(q_i;t)=R(q_i;t)\, \text{e}^{i S(q_i;t)/\hbar}~,
\label{eq:01}
\end{eqnarray}
where $R=R(q_{i},t)$ and $S=S(q_{i},t)$ are real functions given by:
\begin{eqnarray}
R(q_{i},t)&=&|\Psi(q_{i},t)| \geq 0, \qquad \forall \qquad \lbrace q_{i},t\rbrace~, \qquad
\label{eq:02} \\
\frac{S(q_{i},t)}{\hbar}&=&\tan^{-1}\left(\frac{\mbox{Im}\lbrace\Psi(q_{i},t)\rbrace}{\mbox{Re}\lbrace\Psi(q_{i},t)\rbrace}\right)~.
\label{eq:03}
\end{eqnarray}
Here $S$ can be seen as an action having dimension of $\hbar$.
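As a concrete illustration, the decomposition of Eqs.~\eqref{eq:02} and \eqref{eq:03} is straightforward to carry out numerically. The sketch below (Python with NumPy, atomic units so $\hbar=1$; the grid and the test wavefunction are illustrative choices, not taken from this work) extracts $R$ and $S$ from a sampled wavefunction, using phase unwrapping to keep $S$ continuous across the branch cuts of the arctangent:

```python
import numpy as np

# Polar decomposition of a sampled wavefunction into R and S
# (Eqs. 2-3), in atomic units so that hbar = 1. np.unwrap keeps the
# phase S continuous across branch cuts of arctan.
def polar_decompose(psi):
    R = np.abs(psi)
    S = np.unwrap(np.angle(psi))
    return R, S

# Illustrative test wavefunction with a known amplitude and phase.
q = np.linspace(-5.0, 5.0, 1001)
R0 = np.exp(-q ** 2)
S0 = 3.0 * q                 # linear phase, i.e. uniform momentum p = 3
psi = R0 * np.exp(1j * S0)

R, S = polar_decompose(psi)  # recovers R0, and S0 up to a 2*pi multiple
```

Note that $S$ is recovered only up to an additive multiple of $2\pi$, which does not affect the momentum field $\partial S/\partial q$.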
Considering the functional form of $\Psi(q_i;t)$, given in Eq.~\eqref{eq:01}, the Schr\"odinger's equation results on two coupled equations
\begin{eqnarray}
\frac{1}{2m} \left(\nabla S(q_i;t)\right)^{2}+ V(q_i;t) - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}R(q_i;t)}{R(q_i;t)}&=&-\frac{\partial S(q_i;t)}{\partial t}\,, \nonumber \\
\label{eq:06}\\
\frac{\partial R^{2}(q_i;t)}{\partial t} + \nabla\cdot\left(R^{2}(q_i;t)\,\frac{\nabla S(q_i;t)}{m}\right)=0\,.
\label{eq:07}
\end{eqnarray}
with $V(q_{i},t)$ being an external classical potential. Eqs.~\eqref{eq:06} and \eqref{eq:07} describe, respectively, the dynamic evolution of a particle as in the classical theory and a continuity equation for the probability density, and the quantum nature of the events emerges from the coupling terms between these equations \cite{HOLLIVRO:93,bricmont2016broglie,sanz2019bohm}.
Eq.~\eqref{eq:06} provides a total energy, $-\frac{\partial S(q_i;t)}{\partial t}$, given by a sum of kinetic and potential energies, plus an additional term interpreted as a quantum potential \cite{hasan2016effect,lentrodt2020ab,gonzalez2007quantum,gonzalez2009bohmian,gonzalez2004bohmian,gonzalez2009bohmian2,gonzalez2008effective}, while Eq.~\eqref{eq:07} can be identified as a continuity equation, with the probability density $R^{2}(q_i;t)$ and the current density given by
\begin{equation}
{\bf J}=R^{2}(q_i;t)\,\frac{{\bf \nabla} S(q_i;t)}{m}\,.
\end{equation}
The uniqueness of $\Psi(q_{i},t)$ is immediately verified in $R(q_i;t)$, for each pair $\lbrace q_{i},t\rbrace$, but not necessarily in $S(q_{i},t)$, since for each pair one can define a distinct set of these functions. However, if the functions $S(q_{i},t)$ differ from each other by integer multiples of $2\pi\hbar$, then the wave function $\Psi(q_i;t)$ will be unique, and the field $p_{i}$ defined as
\begin{eqnarray}
{p_{i}}= \nabla S(q_{i},t)
\label{eq:09}
\end{eqnarray}
shall have its uniqueness assured at each point $\lbrace q_{i},t\rbrace$.
In the QTM, Eqs.~\eqref{eq:06} and \eqref{eq:07} control the dynamics of a system of particles \cite{hasan2016effect,lentrodt2020ab,gonzalez2007quantum,gonzalez2009bohmian,gonzalez2004bohmian,gonzalez2009bohmian2,gonzalez2008effective}. In this scenario, the term
\begin{eqnarray}
V(q_i;t) - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}R\left(q_{i},t\right)}{R\left(q_{i},t\right)}
\label{eq:10}
\end{eqnarray}
provides an effective potential to which the particle is submitted. Therefore, Eq.~\eqref{eq:06} reduces to the Hamilton-Jacobi equation \cite{dittrich2016hamilton}, apart from the so-called quantum potential term
\begin{eqnarray}
Q(q_{i},t)=-\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R\left(q_{i},t\right)}{R\left(q_{i},t\right)}.
\label{eq:11}
\end{eqnarray}
This term arises from the interaction between the guiding wave $\Psi(q_{i},t)$ and the particle, and it is responsible for events of quantum nature during the evolution of the physical system \cite{hasan2016effect,lentrodt2020ab,gonzalez2007quantum,gonzalez2009bohmian,gonzalez2004bohmian,gonzalez2009bohmian2,gonzalez2008effective}.
Since $R^{2}(q_{i},t)$, with $R$ given by Eq.~\eqref{eq:02}, is a probability density, Eq.~\eqref{eq:07} provides a continuity equation associated with it. In this regard, the specification of $q_{i}(t)$ and of the guiding wave $\Psi(q_{i},t)$, at a certain instant $t$, defines the state of an individual system. As seen from Eq.~\eqref{eq:06}, $Q(q_{i},t)$ depends explicitly on $R(q_{i},t)$, and it is coupled with $S(q_{i},t)$ in such a way that
\begin{eqnarray}
\frac{\partial S (q_{i},t)}{\partial t}+\frac{1}{2m} \left(\nabla S (q_{i},t)\right)^{2}+ V(q_{i},t) + Q(q_{i},t) =0\,. \nonumber \\
\end{eqnarray}
Thus, the quantum potential is not a previously known potential, such as $V(q_{i},t)$; it depends on the state of the whole system, and it defines a wave-particle interaction that evolves according to the system dynamics, mediated by a force-like effect \cite{maddox2003estimating,becker2019asymmetry,keating2001force,batelaan2015dispersionless}. In this regard, the dynamics of the particle wavepacket can be described in terms of an effective force:
\begin{eqnarray}
\textbf{F}_{eff}=\frac{\mbox{d}{\bf{p}}}{\mbox{d}t}=\textbf{F}_{C}+\textbf{F}_{Q}~,
\label{eq:12a}
\end{eqnarray}
in terms of the classical force ($\textbf{F}_{C}$), derived from the classical potential $V(q_{i},t)$, and the so-called quantum force ($\textbf{F}_{Q}$) \cite{maddox2003estimating,becker2019asymmetry}
\begin{eqnarray}
\textbf{F}_Q(q_{i},t)=-\nabla Q(q_{i},t)~,
\label{eq:12}
\end{eqnarray}
derived from the quantum potential, Eq.~\eqref{eq:11}.
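As a minimal numerical check of Eq.~\eqref{eq:12} (atomic units, $\hbar=m=1$, with illustrative parameters not taken from this work), one can use the closed form of the quantum potential for a Gaussian amplitude $R=\exp[-\gamma(q-q_0)^2]$, namely $Q=\gamma-2\gamma^{2}(q-q_0)^{2}$ from Eq.~\eqref{eq:11}, and differentiate it numerically:

```python
import numpy as np

# For a Gaussian amplitude R = exp(-gamma*(q - q0)**2), Eq. (11) gives,
# in atomic units, Q = gamma - 2*gamma**2*(q - q0)**2; Eq. (12) then
# yields the quantum force F_Q = -dQ/dq = 4*gamma**2*(q - q0), which
# vanishes at the packet centre and grows linearly toward the edges.
gamma, q0 = 2.0, 0.0          # illustrative parameters
q = np.linspace(-3.0, 3.0, 2001)
Q = gamma - 2.0 * gamma ** 2 * (q - q0) ** 2
F_Q = -np.gradient(Q, q)      # should reproduce 4*gamma**2*(q - q0)
```

The numerical gradient agrees with the analytic force in the interior of the grid, which is a useful consistency test before applying the same machinery to wavefunctions known only on a mesh.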
The quantum force acts on the de Broglie-Bohm trajectories \cite{kocsis2011observing}, and it is not directly measurable \cite{kocsis2011observing,becker2019asymmetry}. In an operational way, the presence of the quantum force can be inferred from a deflection of the average trajectories \cite{kocsis2011observing,becker2019asymmetry}. In this context, we propose a study of a free particle and of a particle subjected to the Eckart potential, through the QTM, so that we can compare the effect of a classical potential on Bohm's quantum force.
\section{Temporal Propagation Through the Finite-Difference Method}\label{sec-2}
Most of the studies involving scattering in the QTM search for descriptive and representative quantities of the dynamic process~\cite{San525:12,keating2001force,gonzalez2007quantum,gonzalez2009bohmian,gonzalez2004bohmian}. These quantities are obtained in terms of the functions $R(q_{i},t)$ and $S(q_{i},t)$. Thus, one can solve the Schr\"odinger equation and obtain these functions in terms of $\Psi(q_{i},t)$. In this work, we apply the Quantum Trajectory Method~\cite{Wya,Wya187:99} to the field $\Psi$, in order to obtain the system dynamics through iterative processes from a given initial condition, with the proper adjustments to ensure the convergence and stability criteria. Additionally, we have limited our applications to one-dimensional problems: the free particle, and the particle in the presence of a classical Eckart potential.
Adopting the iterative finite-difference method \cite{simos1999finite,cooper2010finite,finite}, {since it is a method familiar to students, being widely discussed in traditional undergraduate courses in physics,} the one-dimensional time-dependent Schr\"odinger equation can be written as
\begin{eqnarray}
\frac{\Psi(q,t+\Delta t)- \Psi(q,t)}{\Delta t}=\frac{i}{2}\frac{\partial^{2}\Psi(q,t)}{\partial q^{2}} - i V(q,t)\Psi(q,t), \nonumber \\
\label{eq:14}
\end{eqnarray}
where $\Delta t$ is a small finite time interval, and we use the atomic units system in order to ensure a reasonable performance without compromising the relevant theoretical aspects.
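A minimal sketch of the propagation of Eq.~\eqref{eq:14} is given below (Python with NumPy, atomic units, Dirichlet boundaries; the grid size, time step and packet parameters are illustrative choices, not the discretization quoted later in the text). Note that this forward scheme is only conditionally stable, so $\Delta t$ must be kept very small compared with $\Delta q^{2}$:

```python
import numpy as np

# One explicit (forward-Euler) step of Eq. (14) in atomic units
# (hbar = m = 1), with Dirichlet boundaries (psi = 0 at the box edges).
# The scheme is only conditionally stable: dt must stay very small
# compared with dq**2.
def euler_step(psi, V, dq, dt):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dq**2
    return psi + dt * (0.5j * lap - 1j * V * psi)

# Illustrative setup (coarser than the discretization used in the text).
q = np.linspace(-10.0, 10.0, 512)
dq = q[1] - q[0]
gamma, q0, p0 = 2.0, -2.0, 10.0
psi = (2.0 * gamma / np.pi) ** 0.25 * np.exp(
    -gamma * (q - q0) ** 2 + 1j * p0 * (q - q0))
V = np.zeros_like(q)          # free propagation

for _ in range(100):          # a few tiny steps
    psi = euler_step(psi, V, dq, dt=1.0e-6)

norm = np.sum(np.abs(psi) ** 2) * dq   # should remain close to 1
```

Monitoring the norm, as done in the last line, is a simple practical check that the time step is small enough for the explicit scheme to remain accurate.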
In order to make use of the propagation process, it is necessary to define the initial state of the quantum wave function. Here, we choose the Gaussian packet \cite{livre,belinfante1974survey} at the instant $t=0$,
\begin{eqnarray}
\psi(q,0)=\left(\frac{2\gamma}{\pi}\right)^{\frac{1}{4}} \exp\left[-\gamma\left(q-q_{0}\right)^{2}+ip_{0}(q-q_{0})\right], \nonumber\\
\label{eq:15}
\end{eqnarray}
where $\gamma=1/2\delta^2$, with $\delta$ being the packet's width, and $q_0$ and $p_0$ are, respectively, the centers of position and momentum of the packet.
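The discretized packet of Eq.~\eqref{eq:15} can be checked directly for normalization and for its centroids in position and momentum. The sketch below uses an illustrative value of $\delta$ together with the grid size and $(q_0,p_0)$ quoted in the text (atomic units):

```python
import numpy as np

# Discretized initial packet of Eq. (15), with a quick check that it is
# normalized and centred at (q0, p0). Atomic units; delta is an
# illustrative width, and the 2500 grid points follow the text.
q = np.linspace(-10.0, 10.0, 2500)
dq = q[1] - q[0]
delta = 0.5
gamma = 1.0 / (2.0 * delta ** 2)
q0, p0 = -2.0, 10.0
psi = (2.0 * gamma / np.pi) ** 0.25 * np.exp(
    -gamma * (q - q0) ** 2 + 1j * p0 * (q - q0))

norm = np.sum(np.abs(psi) ** 2) * dq               # should be ~1
q_mean = np.sum(q * np.abs(psi) ** 2) * dq         # should be ~q0
# <p> = Im int psi* dpsi/dq dq, since p = -i d/dq in atomic units
p_mean = np.imag(np.sum(np.conj(psi) * np.gradient(psi, q)) * dq)
```

The small discrepancy in $\langle p\rangle$ comes from the finite-difference derivative and shrinks as the grid is refined.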
Since the scalar fields $R(q,t)$ and $S(q,t)$ can be determined in terms of $\Psi(q, t)$, Eqs.~\eqref{eq:02} and \eqref{eq:03}, one may use them in Eqs.~\eqref{eq:06} and \eqref{eq:07} in order to obtain the dynamics of the system. Considering the problem under the influence of a time-independent potential $V(q)$ and Eq.~\eqref{eq:09},
it is possible to determine the velocity distribution $\dot{q}(t)$ and the associated trajectories, as well as the effective force related to the quantum potential. For the determination of the trajectories, we use the temporal propagation by the finite-difference technique, making the necessary adjustments for the initial conditions,
\begin{eqnarray}
q(t_{k}+\Delta t)&=&q(t_{k}) + \frac{\partial S(q,t_{k})}{\partial q}\,\Delta t \,.\nonumber
\end{eqnarray}
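A sketch of this trajectory update is shown below: the velocity field $\partial S/\partial q$ is extracted from the phase of the discretized wavefunction and then used in one Euler step for an ensemble of points (Python with NumPy, atomic units; the parameters are illustrative). For the initial packet of Eq.~\eqref{eq:15}, $S(q,0)=p_{0}(q-q_{0})$, so the velocity field at $t=0$ should be uniform and equal to $p_{0}$:

```python
import numpy as np

# Velocity field dS/dq extracted from the phase of the discretized
# wavefunction (Eq. 9, atomic units), then one Euler step of the
# trajectory update for an ensemble of 19 points. For the initial
# packet of Eq. (15), S(q,0) = p0*(q - q0), so v should equal p0.
q = np.linspace(-10.0, 10.0, 2500)
gamma, q0, p0 = 2.0, -2.0, 10.0      # illustrative parameters
psi = (2.0 * gamma / np.pi) ** 0.25 * np.exp(
    -gamma * (q - q0) ** 2 + 1j * p0 * (q - q0))

S = np.unwrap(np.angle(psi))         # continuous phase
v = np.gradient(S, q)                # dS/dq on the grid

pts = np.linspace(q0 - 1.0, q0 + 1.0, 19)   # initial ensemble
dt = 1.0e-4
pts_new = pts + np.interp(pts, q, v) * dt   # q(t + dt) = q(t) + v*dt
```

At later times the same two lines (phase extraction and interpolation of the velocity field) are simply repeated after each propagation step of the wavefunction.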
In addition, for the determination of the mediating force, from Eq.~\eqref{eq:11} one can apply the finite-difference approach to the quantum potential $Q$ as
\begin{eqnarray}
Q(q, 0)&=& -\frac{1}{2R(q,0)} \times\nonumber \\
&& \times \left[\frac{R(q+\Delta q,0)- 2R(q,0) + R(q-\Delta q,0)}{\Delta q ^{2}}\right] \nonumber \\\label{eq:11b}
\end{eqnarray}
in terms of the generalized coordinates. In the cases considered in this work, the numerical calculation was implemented
with a discretization of $2 500$ points in the variable $q$ and $10^{7}$ points in the variable $t$, in a way that guarantees a satisfactory description, without incurring significant divergences in the values, and assuring a relatively low computational cost \cite{finite}.
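The three-point stencil of Eq.~\eqref{eq:11b} can be validated against a case where the quantum potential is known in closed form: for a Gaussian amplitude $R=\exp(-\gamma q^{2})$ one has, in atomic units, $Q=\gamma-2\gamma^{2}q^{2}$ from Eq.~\eqref{eq:11}. A sketch (Python with NumPy, on an illustrative grid rather than the $2\,500\times10^{7}$ discretization used in the text):

```python
import numpy as np

# Three-point stencil for the quantum potential, i.e. Eq. (11b) with
# the minus sign and 1/2 factor of Eq. (11) in atomic units
# (hbar = m = 1): Q = -(1/2) R''/R.
def quantum_potential(R, dq):
    Q = np.empty_like(R)
    Q[1:-1] = -(R[2:] - 2.0 * R[1:-1] + R[:-2]) / (2.0 * dq ** 2 * R[1:-1])
    Q[0], Q[-1] = Q[1], Q[-2]      # crude edge handling for this sketch
    return Q

# Validation against a closed form: for R = exp(-gamma*q**2),
# Q = gamma - 2*gamma**2*q**2.
gamma = 2.0
q = np.linspace(-3.0, 3.0, 2001)
dq = q[1] - q[0]
R = np.exp(-gamma * q ** 2)
Q = quantum_potential(R, dq)
```

Far in the tails, where $R$ becomes tiny, the division in the stencil amplifies discretization noise, so in practice the comparison is most meaningful where the packet has appreciable amplitude.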
\section{Results}\label{sec-3}
\label{sec-4}
\subsection{Free particle wavepacket}
We consider the propagation of a free ($V(q)=0$) Gaussian wavepacket \cite{livre}, Eq.~(\ref{eq:15}), centered at $q_{0}=-2.0\, a.u.$, with $p_{0}=10\,a.u.$, and spatially distributed in the interval $[-10,10]$. We obtain the propagation profile for this wavepacket by applying the temporal propagation through the finite-difference method (see Fig.~\ref{Figura:01}). According to Fig.~\ref{Figura:01}, the spreading effect on the wavepacket is clearly perceived during the process of temporal propagation. That outcome is also provided by the usual interpretations of quantum mechanics, and it is intrinsically connected to the uncertainty of observations in the Schr\"odinger representation for position.
\begin{figure}[h!]
\centering
{\includegraphics[scale=1.6]{fig1.eps}} \\
\caption{(Color online) Propagation profile of a free Gaussian wavepacket, obtained from the temporal propagation through the finite-difference method.} \label{Figura:01}
\end{figure}
In order to highlight the trajectories localized at the center and at the extremes of the wavepacket, we select nineteen points symmetrically distributed around the center of the wavepacket, $q_{0}=q_{10}$, which represent the initial configuration associated with the particle \textit{ensemble}. In such a way, each point is initially distributed around $q_{10}$, as depicted in Fig.~\ref{Figura:02}. Thus, through the dynamic variables, we can observe what happens individually with the constituent elements of the distribution.
\begin{figure}[h!]
\centering
{\includegraphics[scale=1.25]{fig2.eps}}
\caption{(Color online) Trajectories associated with a set of nineteen points distributed over the free wavepacket, highlighting the trajectories localized at the center $\lbrace q_{10} \rbrace$ (green dash-dotted line), left $\lbrace q_{1}\rbrace$ (solid red line) and right $\lbrace q_{19} \rbrace$ (dashed blue line) of the packet.}\label{Figura:02}
\end{figure}
In the de Broglie-Bohm theory, despite the absence of a classical potential, the system is subjected to a quantum potential $Q(q,t)$, which arises from the dual wave-particle nature, through the interaction between the particle and its guiding wave. Thus, the wavepacket propagation acquires a different connotation, being explained as a direct consequence of the action of the field $\Psi(q,t)$ on the \textit{ensemble} of particles via the potential $Q(q,t)$, offering new prospects for the interpretation of the system dynamics. According to this representation, we calculate the quantum potential using Eq.~\eqref{eq:11b}. Fig.~\ref{Figura:03} shows the quantum potential associated with the three representative trajectories of the \textit{ensemble}, at the center $\lbrace q_{10}\rbrace$ and at the extremes $\lbrace q_{1};q_{19} \rbrace$ of the free wavepacket. Those trajectories correspond to initial points localized at the center and at the extremes of the wavepacket, as highlighted in Fig.~\ref{Figura:02}.
\begin{figure}[h!]
\centering
{\includegraphics[scale=1.3]{fig3.eps}}
\caption{(Color online) Quantum potential $Q(t)$ associated with the three representative trajectories: center (green dash-dotted line), extreme left (solid red line) and extreme right (dashed blue line) of the free wavepacket.}\label{Figura:03}
\end{figure}
Therefore, through the quantum potential, the \textit{ensemble} experiences the action of a non-null effective force (Eq.~\eqref{eq:12a}), consisting of elements intrinsically related to the initial conditions of the system, even in the absence of a classical potential, which evidences the non-classical nature of this process. Using Eq.~\eqref{eq:12}, we calculate the respective quantum force associated with the same trajectories described in Fig.~\ref{Figura:03}. In the absence of any classical potential, the effective force experienced by the wavepacket arises exclusively from the quantum potential, and can thus be regarded as a \textit{quantum force}.
Fig.~\ref{Figura:04} shows the effective quantum force as a function of time and of the generalized coordinate $q(t)$. As indicated, although the effective force is zero at the center of the wavepacket, the dispersion of the trajectories at the extremes follows the tendency of the quantum force acting on the elements distributed at the edges of the wavepacket, in such a way that it \textit{accelerates} \cite{newton} points on the left side of the wavepacket center (back), $q < q_{10}$, and \textit{slows down} \cite{newton} points on the right side (front), $q > q_{10}$.
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[scale=1.3]{fig4a.eps}}
\subfigure[]{\includegraphics[scale=1.3]{fig4b.eps}}
\caption{(Color Online) Effective force as a function of time (a) and of the generalized coordinate $q(t)$ (b), for the trajectories localized at the center (green dash-dotted line), left (solid red line) and right (dashed blue line) of the free (Gaussian) wavepacket. Since there is no classical potential, the effective force is only due to the existence of a quantum potential emergent from the interaction between the corpuscular and wave natures of the system.}\label{Figura:04}
\end{figure}
Therefore, one can conclude that the center of the wavepacket experiences a classical free particle dynamics, because there is no classical or quantum force acting on it, whereas the edges experience a quantum dynamics due to the quantum potential emergent from the interaction between the corpuscular and wave natures of the system. Hence, the quantum force is strongly connected to the existence of the wavepacket itself, while the classical determinism of the physical system is in some way preserved, and the events of quantum nature are guided by a field of probabilistic nature, $\Psi$, which acts on the \textit{ensemble} of particles, modifying the system dynamics as a guiding wave.
\subsection{Particle subjected to the Eckart potential}
{In order to illustrate the effect of a classical potential on the quantum force \cite{gauss,dewdney1982quantum}, we consider the propagation of the wavepacket, Eq.~(\ref{eq:15}), scattered by one of the most applicable and useful potentials for investigations of scattering parameters and bound states \cite{gauss,razavy2003quantum,eckart1930penetration,johnston1962tunnelling,soylu2008kappa,Ikhdair_2014,Valencia_Ortega_2018,fern2019confluent,Mousavi_2019,dhali2019quantum}, the Eckart potential}
\begin{eqnarray}
V(q) = V_{0}\frac{\exp{\left[\beta (q-q_{v})\right]}}{\left\{1+\exp{\left[\beta (q-q_{v})\right]}\right\}^{2}},
\label{eq:18}
\end{eqnarray}
where $V_0$, $\beta$ and $q_v$ are, respectively, the amplitude, width and center of the potential.
Since we have adopted atomic units, the coefficient $\beta$ has units of inverse Bohr radius. For our analysis, we assume a potential with amplitude $V_{0}=200\,a.u.$ and width $\beta = 20\,a.u.$, and a wavepacket centered at $q_{0}=-2.0\,a.u.$ with $p_{0}=10\,a.u.$.
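A useful sanity check on Eq.~\eqref{eq:18}: its maximum value is $V_{0}/4$, reached at $q=q_{v}$, so $V_{0}=200\,a.u.$ corresponds to a barrier height of $50\,a.u.$, equal to the packet's mean kinetic energy $p_{0}^{2}/2$. The sketch below (Python with NumPy; parameter defaults follow the values quoted in the text, with $q_{v}=0$ as an illustrative choice) implements the potential in the mathematically equivalent $\mathrm{sech}^{2}$ form to avoid overflow of the exponential:

```python
import numpy as np

# Eckart barrier of Eq. (18), rewritten using the identity
# e^x / (1 + e^x)**2 = (1/4) * sech(x/2)**2, which avoids overflow of
# the exponential for large |q - qv|. Defaults follow the quoted
# values; qv = 0 is an illustrative choice.
def eckart(q, V0=200.0, beta=20.0, qv=0.0):
    return 0.25 * V0 / np.cosh(0.5 * beta * (q - qv)) ** 2

q = np.linspace(-10.0, 10.0, 2500)
V = eckart(q)                 # barrier height V0/4 = 50 a.u. at q = qv
```

The symmetry $V(q_v+a)=V(q_v-a)$ and the height $V_0/4$ follow directly from this form.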
In Fig.~\ref{Figura:06} (a), we depict the propagation of the wavepacket scattered by the Eckart potential given in Eq.~\eqref{eq:18}, obtained from the finite-difference method. It furnishes the behavior characteristic of that type of process, showing the distinct effects of transmission and reflection at this potential barrier. Even with the initial average energy of the wavepacket being equal to the height of the barrier, one fraction of the packet is transmitted and the other one is reflected, with most of the amplitude being transmitted for the present initial conditions. The propagation of the wavepacket and its dispersion during the scattering process are illustrated in terms of the trajectories pictured in Fig.~\ref{Figura:06} (b).
Those trajectories, represented in Fig.~\ref{Figura:06} (b), allow us to conclude that the scattering process starts at $t=0.15\, a.u.$, and any influence registered before this interval elapses without an explicit action of the classical potential.
As illustrated, the left (back) of the wavepacket, $\lbrace q_{1} \rbrace$, is reflected, whereas the center $\lbrace q_{10} \rbrace$ and the right (front) $\lbrace q_{19} \rbrace$ of the wavepacket are transmitted, tunneling through the potential barrier. This effect can also be illustrated by the plot of the quantum potential and by the analysis of the quantum forces acting on each trajectory.
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[scale=1.5]{fig5a.eps}} \\
\subfigure[]{\includegraphics[scale=1.3]{fig5b.eps}}
\caption{(Color online) (a) Propagation profile of the Gaussian wavepacket subjected to the Eckart potential. (b) Trajectories of nineteen points distributed around $q_{0}=-2.0\,a.u.$, with $E=50.0\,a.u.$. The highlighted trajectories are associated with the points placed at the center (green dash-dotted line), extreme left (solid red line) and extreme right (dashed blue line) of the wavepacket.}\label{Figura:06}
\end{figure}
Fig.~\ref{Figura:08} shows the quantum potential $Q(q,t)$, obtained from Eq.~\eqref{eq:11b}, for the wavepacket scattered by the classical Eckart potential, along the trajectories highlighted in Fig.~\ref{Figura:06} (b), localized at the center $\lbrace q_{10} \rbrace$ and at the extremes $\lbrace q_{1};q_{19} \rbrace$ of the wavepacket. As shown in Fig.~\ref{Figura:08}, when the wavepacket approaches the potential barrier, the quantum potential profile changes, and even before the scattering the behavior is completely different from the one obtained in the free particle case for the same quantities (Fig.~\ref{Figura:03}). Fig.~\ref{Figura:08} (b) shows the tunneling of the front and of the center of the wavepacket through the potential barrier, as previously discussed. The tunneling effect with the Eckart potential has been discussed before in the literature in terms of the Bohmian total potential \cite{gonzalez2009bohmian,johnston1962tunnelling}.
\begin{figure}[h!]
\subfigure[]{\includegraphics[scale=1.3]{fig6a.eps}}
\subfigure[]{\includegraphics[scale=1.3]{fig6b.eps}}
\caption{(a) Quantum potential $Q(t)$ represented in the time interval between $0\,a.u.$ and $0.15\,a.u.$, with $E=50.0\,a.u.$, under the classical interaction of amplitude $V_0=200\, a.u.$. (b) Comparison between the quantum and the classical potential. These profiles are associated with the three representative trajectories: center (green dash-dotted line), extreme left (solid red line) and extreme right (dashed blue line) of the Gaussian wavepacket subjected to the Eckart potential.}\label{Figura:08}
\end{figure}
Using Eq.~\eqref{eq:12}, we calculate the quantum force for the same trajectories described in Fig.~\ref{Figura:08}. Fig.~\ref{Figura:09} shows the quantum force as a function of time and of the generalized coordinate $q(t)$, for the wavepacket scattered by the classical Eckart potential.
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[scale=1.3]{fig7a.eps}}
\subfigure[]{\includegraphics[scale=1.3]{fig7b.eps}}
\caption{(Color Online) (a) Quantum force as a function of time and (b) of the generalized coordinate $q(t)$, for the trajectories localized at the center (green dash-dotted line), left (solid red line) and right (dashed blue line) of the Gaussian wavepacket subjected to the Eckart potential.}\label{Figura:09}
\end{figure}
As the scattering occurs, different points of the wavepacket experience a variation in the interaction profile to which they are subjected. That can be seen in Fig.~\ref{Figura:08} (b), which describes the quantum potential $Q(q)$ and the classical potential $V(q)$ in terms of their coordinates. As can be seen, the constituents of the \textit{ensemble} perceive the classical potential even before the classical interaction, since the element localized at the left side of the packet ($q_{1}$) suffers a significant change in its potential profile, even without interacting explicitly with the potential $V(q)$, receiving this information through a correlation existing among the elements of the wavepacket. In other words, the particle experiences a quantum force effect which depends on the presence of the classical potential, even in the absence of any classical force field. In order to illustrate this effect, we show in Fig.~\ref{Figura:10} a comparison between the forces along the transmitted trajectory at the edge of the scattered wavepacket.
\begin{figure}[htp]
{\includegraphics[scale=1.3]{fig8.eps}}
\caption{(Color Online) Comparison between the quantum (solid blue line), classical (green dash-dotted line) and effective force (dashed red line) for the trajectory localized at the right of the Gaussian wavepacket, subjected to the Eckart potential.}\label{Figura:10}
\end{figure}
Therefore, this result can be interpreted analogously to what is observed in the Aharonov-Bohm effect \cite{aharonov1959significance}, since even in the absence of a force field, the quantum dynamics of the particle is altered by the presence of the classical potential. These results {strengthen the fact that classical potentials can act without force fields, giving us indications that the Aharonov-Bohm effect could be observed for other classical potentials.}
\section{Conclusions}\label{sec-com}
{In this work, we report an application of the de Broglie-Bohm Quantum Theory of Motion as a powerful tool for evaluating Bohm's quantum force in the scattering process of a Gaussian wavepacket by a classical Eckart potential. In order to make our analysis easy to reproduce by undergraduate and graduate students of quantum mechanics courses, we adopt the temporal propagation method, which is an iterative finite-difference technique}.
First, we consider the free particle dynamics, where we observe that, in the absence of a classical potential, the edges of the wavepacket experience an effective force, intrinsically related to the existence of the quantum potential $Q(q,t)$, which emerges from the interaction between the corpuscular and wave natures of the system, while the center of the wavepacket shows a classical free particle dynamics. Thus, the quantum force is strongly connected to the existence of the wavepacket itself, while the classical determinism of a physical system is in some way preserved.
{Next, we illustrate the effect of a classical Eckart potential, showing that the system experiences significant changes in its dynamics even before the explicit interaction with the classical force, giving us evidence of the presence of the quantum force in the scattering process. Thus, the system experiences a quantum force effect, which depends on the classical potential, even in the absence of any classical force field, analogous to what is observed in the Aharonov-Bohm effect, giving indications that the nature of this effect can be observed for different classical potentials.}
{Therefore, these results show the potential of the de Broglie-Bohm formulation as a complementary picture of quantum theory, a useful classroom working tool for studying quantum dynamics through the concept of Bohm's quantum force, rather than merely an alternative interpretation of the quantum theory.}
\begin{acknowledgments}
W. S. Santana and F. V. Prudente would like to thank Mirco Ragni for the computational help. This study was financed in part by the CNPq and the \textit{Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior - Brasil} (CAPES) - Finance Code 001.
\end{acknowledgments}
\section{Introduction}\label{intro}
The statistical model has succeeded in the description of the soft
part of particle production in heavy-ion collisions \cite{bib1}.
In particular, particle yield ratios and $p_{T}$ spectra of
identified hadrons have been reproduced with a good accuracy.
Transverse energy ($dE_{T}/d\eta$) and charged particle
multiplicity densities ($dN_{ch}/d\eta$) are global variables
whose measurements are independent of hadron spectroscopy,
therefore they could be used as an additional test of the
self-consistency of the statistical model.
The experimentally measured transverse energy is defined as
\begin{equation}
E_{T} = \sum_{i = 1}^{L} \hat{E}_{i} \cdot \sin{\theta_{i}} \;,
\label{Etdef}
\end{equation}
\noindent where $\theta_{i}$ is the polar angle, $\hat{E}_{i}$
denotes $E_{i}-m_{N}$ ($m_{N}$ means the nucleon mass) for
baryons, $E_{i}+m_{N}$ for antibaryons and the total energy
$E_{i}$ for all other particles and the sum is taken over all $L$
emitted particles \cite{bib7}.
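As a concrete illustration of Eq.~(\ref{Etdef}) and the definition of $\hat{E}_{i}$, the sum can be sketched in a few lines of Python; the particle list and the nucleon-mass value below are illustrative, not taken from the analysis:

```python
import math

M_N = 0.939  # nucleon mass [GeV], assumed value

def transverse_energy(particles):
    """E_T per Eq. (1): sum of E_hat_i * sin(theta_i) over emitted particles.

    Each particle is (E, theta, kind) with kind in
    {'baryon', 'antibaryon', 'other'}; E in GeV, theta in radians.
    """
    e_t = 0.0
    for energy, theta, kind in particles:
        if kind == 'baryon':
            e_hat = energy - M_N      # subtract the rest mass
        elif kind == 'antibaryon':
            e_hat = energy + M_N      # annihilation adds the rest mass
        else:
            e_hat = energy            # mesons, leptons, photons
        e_t += e_hat * math.sin(theta)
    return e_t

# toy event: one baryon, one antibaryon, one meson, all at theta = 90 deg
event = [(1.5, math.pi / 2, 'baryon'),
         (1.5, math.pi / 2, 'antibaryon'),
         (0.5, math.pi / 2, 'other')]
print(round(transverse_energy(event), 3))  # -> 3.5
```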
The statistical model with single freeze-out \cite{bib2,bib3} is
applied for evaluations of $dE_{T}/d\eta$ and $dN_{ch}/d\eta$ at
midrapidity for various centrality bins at RHIC at
$\sqrt{s_{NN}}=130$ and 200 GeV. Details of this analysis can be
found elsewhere \cite{bib4,bib5}. The foundations of the model are
as follows: (\textit{a}) the chemical and thermal freeze-outs take
place simultaneously, (\textit{b}) all confirmed resonances up to
a mass of $2$ GeV from the Particle Data Tables \cite{bib6} are
taken into account, (\textit{c}) a freeze-out hypersurface is
defined by the equation $\tau =
(t^{2}-r_{x}^{2}-r_{y}^{2}-r_{z}^{2})^{1/2}= const$, (\textit{d})
the four-velocity of an element of the freeze-out hypersurface is
proportional to its coordinate, $u^{\mu}= x^{\mu} / \tau$,
(\textit{e}) the transverse size is restricted by the condition
$r=(r_{x}^{2}+r_{y}^{2})^{1/2}< \rho_{max}$. The model has four
parameters, namely, the two thermal parameters, the temperature
$T$ and the baryon number chemical potential $\mu_{B}$, and the
two geometric parameters, $\tau$ and $\rho_{max}$. Values of these
parameters were obtained from fits to particle yield ratios and
$p_{T}$ spectra (see Table 1 in \cite{bib5}, the table collects
the results from \cite{bib3,bib11}). The invariant distribution of
the measured particles of species $i$ has the Cooper-Frye form
\cite{bib2,bib3}. The distribution collects, besides the thermal
one, also contributions from simple and sequential decays such
that at least one of the final secondaries is of the \emph{i} kind
(for details, see \cite{bib3,bib4}). Having integrated this
distribution suitably over $p_{T}$ and summing up over final
particles, one can obtain $dE_{T}/d\eta$ and $dN_{ch}/d\eta$ and
finally the ratio $\langle dE_{T}/d\eta\rangle /\langle
dN_{ch}/d\eta\rangle$. The complete set of results for
$dE_{T}/d\eta$ and $dN_{ch}/d\eta$ can be found in \cite{bib5},
here only the values of the ratio as a function of the number of
participants ($N_{part}$) are shown in Figs. \ref{fig1} and
\ref{fig2}.
\begin{figure}[htb]
\insertplot{fig-1.eps}
\vspace*{-0.8cm} \caption[]{$\langle dE_{T}/d\eta\rangle /\langle
dN_{ch}/d\eta\rangle$ versus $N_{part}$ for RHIC at
$\sqrt{s_{NN}}=130$ GeV. Dots denote model evaluations, triangles
are the PHENIX data \protect\cite{bib7}. Crosses denote
recalculated PHENIX data points, \emph{i.e.} the sum of integrated
charged hadron yields \cite{bib8} have been substituted for the
denominator in the ratio. } \label{fig1}
\end{figure}
\begin{figure}[htb]
\insertplot{fig-2.eps}
\vspace*{-0.8cm} \caption[]{$\langle dE_{T}/d\eta\rangle /\langle
dN_{ch}/d\eta\rangle$ versus $N_{part}$ for RHIC at
$\sqrt{s_{NN}}=200$ GeV. Black dots and stars are model
evaluations for PHENIX and STAR, respectively. Triangles are the
direct PHENIX data \protect\cite{bib7}, whereas crosses are
recalculated PHENIX data points, \emph{i.e.} the sum of integrated
charged hadron yields \cite{bib9} have been substituted for the
denominator in the ratio. Open stars are the STAR data
\protect\cite{bib10}. } \label{fig2}
\end{figure}
As one can see, the positions of the model predictions are very regular
and closely resemble the configuration of the data in each case;
the estimates are only shifted up by about $10\%$ as a whole. This
overestimation can be explained, at least for more central
collisions, by the observed discrepancy between the directly
measured $dN_{ch}/d\eta$ and $dN_{ch}/d\eta$ expressed as the sum
of the integrated charged hadron yields (this effect was noted
in backup slides of \cite{bib12}). If the original data points are
replaced by the recalculated data such that the denominators are
sums of the integrated charged hadron yields, then much better
agreement can be reached for more central collisions.
As far as the predictions for $dE_{T}/d\eta$ and $dN_{ch}/d\eta$
are concerned (see Figs. 1-4 in \cite{bib5}), the agreement with
the data is much better for RHIC at $\sqrt{s_{NN}}=130$ GeV. For
the case of $\sqrt{s_{NN}}=200$ GeV, only the rough qualitative
agreement has been reached. For sure, one of the reasons is that
fits in \cite{bib11} were done to the preliminary data for the
spectra \cite{bib12,bib13} and, as it turned out later, the final
data \cite{bib14,bib15} differ substantially from the preliminary
ones.
To conclude, the single-freeze-out model fairly well explains the
observed centrality dependence of transverse energy and charged
particle multiplicity pseudo-rapidity densities at midrapidity and
their ratio in the case of RHIC collisions at $\sqrt{s_{NN}}=130$
and $200$ GeV. These two variables are independent observables,
which means that they are measured independently of identified
hadron spectroscopy. It should be stressed once more that the
model fits were done earlier with the use of particle yield ratios
and $p_{T}$ spectra (not by the author, values of fitted
parameters are taken from \cite{bib3,bib11}). With the values of
parameters given, transverse energy and charged particle
multiplicity densities have been calculated in the
single-freeze-out model. Generally, the results agree
qualitatively well with the data. This adds a new argument
supporting the idea of the appearance of a thermal system during
an ultra-relativistic heavy-ion collision.
\section*{Acknowledgment}
This work was supported in part by the Polish Committee for
Scientific Research under Contract No. KBN 2 P03B 069 25.
\section*{Notes}
\begin{notes}
\item[a] This is a write-up of a poster presented at the Workshop
on Quark-Gluon-Plasma Thermalization, Vienna, Austria, 10-12
August 2005
\end{notes}
\section{Introduction}
M82 has been one of the most frequently targeted
candidates for studying starburst galaxies and
is distinguished by its very high IR luminosity ($L_{IR} = 3 \times 10^{10}L_{\odot}$,
Telesco \& Harper 1980).
A cluster of young supernova remnants has also been observed around the nucleus
of M82 (Kronberg et al. 1985).
This galaxy is located in a small group of galaxies, which is comprised
of M81, M82, NGC~3077 and several other smaller dwarf galaxies.
The HI map of the M81 group revealed tidal tails bridging
M81 with M82 and NGC~3077, suggesting a recent interaction of these galaxies
(Yun, Ho \& Lo 1993).
The distance to the M81 group has been measured by Freedman et al. (1994)
using the $HST$ observations of Cepheid variables in M81.
They report a distance modulus of $(m-M)_0 = $27.80 $\pm$ 0.20 mag.
Two other galaxies in the M81 group have also recently been $HST$
targets. Caldwell et al. (1998) observed dwarf ellipticals, F8D1 and BK5N,
and reported the distances measured by the tip of the red giant branch (TRGB)
method of 28.0 $\pm$ 0.10 and 27.9 $\pm$ 0.15 mag respectively.
As part of a long--term project to obtain direct distances to galaxies
in the nearby Universe using the TRGB method,
we observed two fields in the halo of M82 using the HST and Wide Field Planetary
Camera 2.
The details of observations and data reductions are reported in the following
Section 2.
In Sections 3 and 4, we discuss the detection of the RGB stars, and
report a distance using the I--band luminosity function.
In addition to the RGB stars, we detected a large number of
stars brighter than the TRGB in the M82 halo regions.
We briefly explore in Section 5 what this population of stars may be.
\section{Observations and Reductions}
Two positions in the halo region of M82 were chosen
for our HST observations. A digital sky survey image of M82,
is shown in Figure~1, on which the
$HST$ Wide Field Planetary Camera 2 (WFPC2) footprints are
superimposed, indicating the two regions observed.
We refer to the region closer to the center of the galaxy as
Field I, and the other to the east as Field II.
The Planetary Camera (PC) chip covers the smallest area;
we refer to this as chip 1.
The three Wide Field (WF) chips cover the three larger fields and
are referred to as chips 2, 3 and 4 respectively,
counterclockwise from the PC.
A closeup $HST$ image of one of the chips, WF2 field of Field I,
is shown in Figure~2.
Observations of the M82 halo region
were made with the WFPC2 on board the {\it Hubble Space Telescope}
on July 9, 1997 using two filters, F555W and F814W.
Two exposures of 500 seconds each were taken for both filters
at each position.
Cosmic rays on each image were cleaned before being combined to make a set
of F555W and F814W frames.
The subsequent photometric analysis was done
using point spread function fitting packages DAOPHOT and ALLSTAR.
These programs use automatic star finding algorithms and then measure
stellar magnitudes by fitting a point spread function (PSF), constructed
from other uncrowded HST images (Stetson 1994).
We checked for a possible variation in the luminosity function as a function
of the position on each chip by examining the luminosity functions for
different parts of the chip.
For each frame, we find the identical luminosity function,
confirming that there are no significant systematic offsets originating
from the adopted PSFs.
The F555W and F814W instrumental magnitudes were converted to the calibrated
Landolt (1992) system as follows. (A detailed discussion is found in
Hill et al. 1998). The instrumental magnitudes were first transformed
to the Holtzman et al. (1995) 0\Sec5 aperture magnitudes by determining
the aperture corrections that need to be applied to the PSF magnitudes.
This was done by selecting 20--30 brighter, isolated stars on each frame.
Then all the stars were subtracted from the original image except for
these selected stars. The aperture photometry was carried out for these
bright stars, at 12 different radii ranging from 0\Sec15 to 0\Sec5.
The 0\Sec5 aperture magnitudes were determined by applying the growth
curve analysis provided by DAOGROW (Stetson 1990), which were then compared
with the corresponding PSF magnitudes to estimate the aperture corrections
for each chip and filter combination.
The values of
aperture corrections for each chip are listed in Table 1.
We use a different set of aperture corrections for the two Fields.
Most of the values agree with each other within $2\sigma$;
however, slight offsets between the corrections in the two Fields are
most likely due to the PSFs not sampling the images in exactly the same way.
When images are co--added, the combined images are not exactly identical
to the original uncombined images;
that is, the precise positions of stars on the frames are slightly
different. Thus we should expect
some differences in the aperture corrections of the same chip in
two Fields.
Finally, the 0\Sec5 aperture magnitudes are converted to the standard
system via the equation:
\begin{equation}
M = m + 2.5 \log t + {\mbox{C1}} + {\mbox{C2}} \times (V-I) + {\mbox{C3}} \times
(V-I)^2 + {\mbox{a.c.}},
\end{equation}
where $t$ is the exposure time, C1, C2 and C3 are constants and a.c. is the
aperture correction.
C1 is comprised of several terms including (1) the long--exposure WFPC2 magnitude
zero points, (2) the DAOPHOT/ALLSTAR magnitude zero point, (3) a correction
for multiplying the original image by 4 before converting it to integers
(in order to save the disk space), (4) a gain ratio term due to the difference
between the gain settings used for M82 and for the Holtzman et al. data
(7 and 14 respectively), (5) a correction for the pixel area map which
was normalized differently from that of Holtzman et al. (1995), and
(6) an offset between long and short exposure times in the HST zero point
calibration.
C2 and C3 are color terms and are the same for all four chips.
In Table~2, we summarize all three constants for each chip.
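A sketch of this transformation in Python is given below. The constants are placeholders, not the values of Tables 1 and 2; note also that in practice the calibrated $(V-I)$ color on the right-hand side must itself be obtained iteratively:

```python
import math

def calibrate(m_inst, t, c1, c2, c3, v_minus_i, ap_corr):
    """Standard-system magnitude via Eq. (1):
    M = m + 2.5 log10(t) + C1 + C2*(V-I) + C3*(V-I)^2 + a.c."""
    return (m_inst + 2.5 * math.log10(t)
            + c1 + c2 * v_minus_i + c3 * v_minus_i**2 + ap_corr)

# illustrative values only (not the paper's calibration constants):
M = calibrate(m_inst=17.0, t=500.0, c1=-2.0, c2=-0.06,
              c3=0.02, v_minus_i=1.5, ap_corr=-0.03)
```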
\section{Detection of the Red Giant Stars in M82}
The $V$ and $I$ photometric results are shown in the color--magnitude
diagrams in Figure~3.
In Table~3, astrometric and photometric data for a set of brighter,
isolated reference stars are presented. The X and Y coordinates
tabulated refer to those
on the image of rootname u3nk0201m for Field I, and u3nka201m for Field II.
We also show luminosity function histograms in Figure~4.
In both Fields I and II, WF~4 field samples the least crowded
halo region of M82.
Based on the observations of Cepheids in M81, the parent galaxy of
M82, we know that the distance modulus of M82 is approximately
$\mu_0 = 27.8$ mag. Then the tip of the red giant branch should therefore
be observed at I $\sim$ 23.7 mag.
In all CMDs presented here, we can
visually detect the position of the TRGB at $I \simeq 23.7 - 23.9$ mag
relatively easily, which is also evident in the luminosity functions,
as a jump in number counts, especially in those of Field I.
If we are observing the TRGB at around $I \sim 23.8$ mag, then
a significant number of brighter
stars are present in the halo regions of M82,
which are observed above the tip of the RGB in the CMDs.
In addition, comparing two regions, more of these stars are found in Field II.
This will be discussed more in detail in Section 5.
\section{TRGB Distance to M82}
The TRGB marks the core helium flash of old, low--mass stars which
evolve up the red giant branch, but almost instantaneously
change their physical characteristics upon ignition of helium.
This restructuring of the stellar interior
appears as a sudden discontinuity in the luminosity function and
is observed at $M_I \simeq -4$ mag in the $I-$band ($\sim 8200$\AA).
The TRGB magnitude has been shown both observationally and theoretically
to be extremely stable; it varies only by $\sim$0.1 mag for ages
2 -- 15 Gyr, and for metallicities between $-2.2 <$ [Fe/H] $< -0.7$ dex
(the range bounded by the Galactic globular clusters).
Here, we use the calibration presented by Lee et al. (1993) which is
based on the observations of four Galactic globular clusters by
Da Costa \& Armandroff (1990). The globular cluster distances
had been determined using the RR Lyrae distance scale based on
the theoretical horizontal branch model for $Y_{MS}=0.23$ of
Lee, Demarque and Zinn (1990), and corresponds to $M_V (\mbox{\small RR Lyrae}) = 0.57$ mag
at [Fe/H] = $-1.5$.
The top panel of each plot in Figure~5 shows
an $I-$band luminosity function smoothed by a variable Gaussian whose dispersion
is the photometric error for each star detected.
We apply a Sobel edge--detection filter to all luminosity functions to
determine quantitatively and objectively the position of the TRGB following
$E(m) = \Phi(I + \sigma_m) - \Phi(I - \sigma_m)$, where $\Phi(m)$ is
the luminosity function evaluated at magnitude $m$, and
$\sigma_m$ is the typical photometric error of stars of magnitude $m$.
For the details of the Sobel filter application, readers
are referred to the Appendix of Sakai, Madore \& Freedman (1996).
The results of the convolution are shown in the bottom panels of
Figure~5.
The position of the TRGB is identified with the highest peak in the filter output
function.
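A self-contained sketch of this edge detection on a toy luminosity function follows; the stellar population, the photometric errors, and the grid are invented for illustration, and for simplicity the shift $\sigma_m$ is taken constant rather than magnitude-dependent:

```python
import numpy as np

def smoothed_lf(mags, errs, grid):
    """LF smoothed with a Gaussian of width equal to each star's error."""
    lf = np.zeros_like(grid)
    for m, s in zip(mags, errs):
        lf += np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return lf

def sobel_edge(lf, grid, sigma_m):
    """E(m) = Phi(m + sigma_m) - Phi(m - sigma_m) on a uniform grid."""
    dm = grid[1] - grid[0]
    k = max(1, int(round(sigma_m / dm)))
    e = np.zeros_like(lf)
    e[k:-k] = lf[2 * k:] - lf[:-2 * k]
    return e

rng = np.random.default_rng(0)
true_tip = 23.8
mags = true_tip + 2.0 * rng.random(2000)    # RGB stars fainter than the tip
errs = np.full(mags.size, 0.05)
grid = np.arange(22.0, 26.0, 0.01)

lf = smoothed_lf(mags, errs, grid)
edge = sobel_edge(lf, grid, sigma_m=0.05)
tip = grid[np.argmax(edge)]                 # highest peak -> TRGB estimate
```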
The TRGB method works best in practice as a distance indicator when the $I-$band
luminosity function sample is restricted to those stars in the halo
region only. This is mainly due to three reasons: (1) less crowding,
(2) less internal extinction and (3) less contamination by
AGB stars which tend to smear out the ``edge'' defining the TRGB in the
luminosity function.
In Field I, the tip position is detected clearly in the luminosity
function and filter output for the WF~4 region at $I_{\mbox{\tiny TRGB}}
= 23.82 \pm 0.15$ mag.
The one--sigma error here is determined roughly by estimating the ``full width
half maximum'' of the peak profile defining the TRGB in the filter
output function.
In the WF~3 region, the tip is also observable at $I_{\mbox{\tiny TRGB}}
= 23.72 \pm 0.10$ mag,
slightly brighter than the case of WF~4. The simulations have shown that
the position of the tip shifts to a brighter magnitude due to crowding
effects (Madore and Freedman 1995), and that is what we observe on WF3.
The stellar population in Field II is comprised of more of these brighter stars
(which could be AGB stars),
thus restricting the luminosity function to the halo region helps
especially in determining the TRGB position.
Here, we obtain $I_{\mbox{\tiny TRGB}} = 23.71 \pm 0.09$ mag and
$I_{\mbox{\tiny TRGB}} = 23.95 \pm 0.14$ mag for WF~3 and WF~4 field respectively.
The tip magnitude of WF~3 agrees extremely well with that of the same
chip in Field I. However, the TRGB magnitude defined by the WF~4 sample
is fainter by $0.17$ mag compared to the halo region of Field I.
There are several reasons to believe that the TRGB defined by the
Field II halo region would more likely correspond to the true distance of M82.
First, if one examines the WFPC2 image of Field I closely,
the presence of wispy, filamentary structures is recognizable.
Such features are likely to further increase the uncertainties
due to variable reddening.
Another but more important reason for putting less weight on the
Field I WF4 data is that there are far fewer stars observed
in this region.
Madore and Freedman (1995) showed using a simulation that the population
size does matter in systematically detecting the TRGB position accurately.
That is, if not enough stars are sampled in the first bin immediately
fainter than the TRGB magnitude, the distance can be overestimated.
We show here again how the population sampling size affects our
distance estimates.
We used $V$ and $I$ photometric data of the halo of NGC~5253 (Sakai 1999)
which is comprised of 1457 stars that are brighter than $M_I \leq -3$,
and is considered here as a complete sample.
The TRGB magnitude for this galaxy is $I = 23.90$ mag.
$N$ stars are then randomly selected from this NGC~5253 database
100 times, for which the smoothed luminosity function is determined.
The edge--detection filter is then applied to the luminosity function
in a usual fashion to estimate the TRGB magnitude.
This exercise was repeated for the case comprised solely of the RGB
stars; that is, the stars brighter than the TRGB were excluded from
the parent sample.
We show the results for $N=20,100$ and $1000$ in Figure 6,
where the number distribution of TRGB magnitudes is shown for each simulation.
In Table 4, we list the average offset from the TRGB magnitude.
In both cases, for the two smaller samples, the TRGB determination becomes very
uncertain, as the RGB population becomes indistinguishable from the brighter
intermediate--age AGB population.
Or in the case where the RGB population is undersampled (the second scenario
in which only the RGB stars were included in the sample), the stars around
the tip of the RGB are missed, yielding an overestimated distance to this galaxy.
Another way to present this effect is to plot the TRGB magnitude as
a function of the difference between the 0.15--mag bins immediately
brighter and fainter than the TRGB. This is shown in Figure 7.
For the least complete sample ($N=300$), the difference in number counts
in the consecutive bins around the TRGB is merely $\sim$20.
This figure suggests that at least a number count difference of $\sim$40
is needed to estimate the TRGB position accurately.
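The dependence on sample size can be checked with a small Monte Carlo sketch. The parent population, the 10\% AGB contamination, and the crude bin-difference tip estimator below (echoing the 0.15-mag binning of Figure 7) are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
TIP = 23.90                        # true tip, placed on a bin edge below

def tip_from_histogram(mags, lo=22.10, hi=26.0, width=0.15):
    """Crude TRGB estimate: the bin edge with the largest count increase
    between consecutive 0.15-mag bins."""
    edges = np.arange(lo, hi + width, width)
    counts, _ = np.histogram(mags, bins=edges)
    jumps = counts[1:] - counts[:-1]
    return float(edges[int(np.argmax(jumps)) + 1])

def draw(n):
    """n stars: ~10% AGB brighter than the tip, the rest RGB fainter."""
    n_agb = rng.binomial(n, 0.1)
    agb = TIP - 0.75 * rng.random(n_agb)
    rgb = TIP + 1.5 * rng.random(n - n_agb)
    return np.concatenate([agb, rgb])

scatter = {}
for n in (20, 100, 1000):
    tips = [tip_from_histogram(draw(n)) for _ in range(100)]
    scatter[n] = float(np.std(tips))
```

As in the text, the scatter of the recovered tip magnitude shrinks markedly as the sample grows, and for the smallest samples the estimator is essentially random.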
Using the photometric data of the WF4 of Field II, the TRGB is detected
at $I = 23.95 \pm 0.14$ mag.
The foreground extinction in the line of sight of M82 is
$A_B = 0.12$ mag (Burstein and Heiles 1982).
Using conversions of $A_V / E(V-I) = 2.45$ and $R_V = A_V/E(B-V) = 3.2$
(Dean, Warren \& Cousins (1978), Cardelli et al. (1989) and Stanek (1996)),
we obtain $A_I = 0.05$ mag.
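The arithmetic behind this conversion can be reproduced as follows; the intermediate relation $A_B = A_V(1 + 1/R_V)$ is our reading of how the quoted coefficients combine:

```python
R_V = 3.2     # A_V / E(B-V)
k_VI = 2.45   # A_V / E(V-I)
A_B = 0.12    # foreground extinction toward M82 (mag)

# A_B = A_V + E(B-V) = A_V * (1 + 1/R_V)  =>  A_V = A_B * R_V / (R_V + 1)
A_V = A_B * R_V / (R_V + 1.0)
# A_I = A_V - E(V-I) = A_V * (1 - 1/k_VI)
A_I = A_V * (1.0 - 1.0 / k_VI)
print(round(A_I, 2))  # -> 0.05
```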
To calculate the true modulus to M82,
we use the TRGB calibration of Lee et al. (1993), according to which
the tip distance is determined via the relation $(m-M)_I = I_{\tiny TRGB} -
M_{bol} + BC_I$, where both the bolometric magnitude ($M_{bol}$) and
the bolometric correction ($BC_I$) are dependent on the color of
the TRGB stars. They are defined by: $M_{bol} = -0.19[Fe/H] - 3.81$ and
$BC_I = 0.881 - 0.243(V-I)_{\tiny TRGB}$. The metallicity is in turn expressed
as a function of the $V-I$ color: $[Fe/H] = -12.65 + 12.6(V-I)_{-3.5} - 3.3(V-I)_{-3.5}^2$,
where $(V-I)_{-3.5}$ is measured at the absolute $I$ magnitude of $-3.5$.
The colors of the red giant stars range from
$(V-I)_0 = 1.5 - 2.2$ (see Figure~4), which gives
the TRGB magnitude of $M_I = -4.05 \pm 0.10$.
We thus derive the TRGB distance modulus of M82 to be
$(m-M)_0 = 27.95 (\pm0.14)_{\mbox{\tiny random}} [\pm0.16]_{\mbox{\tiny
systematic}}$ mag.
This corresponds to a linear distance of $3.9 (\pm 0.3) [\pm 0.3]$ Mpc.
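The calibration chain can be written out explicitly; for brevity the color at $M_I = -3.5$ is here approximated by the tip color itself, and the specific colors merely span the quoted range:

```python
def trgb_absolute_mag(v_minus_i):
    """M_I of the TRGB from the Lee et al. (1993) relations."""
    feh = -12.65 + 12.6 * v_minus_i - 3.3 * v_minus_i**2
    m_bol = -0.19 * feh - 3.81
    bc_i = 0.881 - 0.243 * v_minus_i
    return m_bol - bc_i            # M_I = M_bol - BC_I

m_i = [trgb_absolute_mag(c) for c in (1.5, 1.8, 2.2)]

# with the adopted M_I = -4.05 and A_I = 0.05:
mu_0 = (23.95 - 0.05) - (-4.05)    # -> (m-M)_0 = 27.95
```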
The sources of errors include (1) the random uncertainties in the tip position
(0.14 mag) and (2) the systematic uncertainties, mainly those due
to the TRGB calibration (0.15 mag) and the HST photometry zero point (0.05 mag).
Unfortunately, because the TRGB method is calibrated on the RR Lyrae
distance scale whose zero point itself is uncertain at a 0.15 mag level,
the TRGB zero point subsequently has an uncertainty of 0.15 mag.
Recently, Salaris \& Cassisi (1997: SC97) presented a theoretical calibration
of the TRGB magnitude that utilized the canonical evolutionary models
of stars for a combination of various masses and metallicities for
$Y=0.23$ (Salaris \& Cassisi 1996). SC97 find that their theoretical
calibration gives to a zero point that is $\sim$0.15 mag brighter than
the empirical zero point given by Da Costa \& Armandroff (1990).
They attribute this systematic difference to the small sample of stars
observed in the Galactic globular clusters. We did find in previous
section that under-sampling the RGB stars
leads to a systematically fainter TRGB magnitude, which seem to
be in agreement with CS97. Clearly, the issues pertaining to the
TRGB calibration need to be reviewed in detail in the future.
In this paper, we adopt the TRGB systematic calibration uncertainty
of 0.15 mag based on these studies.
\section{Stars Brighter than the TRGB: What are they?}
It was noted in Figure~4
that the Field II appears to have a considerable number
of stars that are brighter than the TRGB.
There are two possible scenarios to explain what these stars are:
(1) blends of fainter stars due to crowding, or
(2) intermediate--age asymptotic giant branch (AGB) stars.
To explain how much effect the crowding has on stellar photometry,
we turn our attention to Grillmair et al. (1996) who presented
HST observation of M32 halo stars. They concluded that the AGB stars
detected in the same halo region by Freedman (1989) were mostly due to
the crowding. Upon convolving the HST data to simulate the 0\Sec6 image
obtained at CFHT, they successfully recovered these brighter ``AGB''
stars. While HST's 0\Sec1 resolution at the distance of M32, 770 kpc, corresponds
to 0.37 pc, a 0\Sec6 resolution at the same distance corresponds to 2.2 pc.
Our HST M82 data has a resolution of 1.7 pc (0\Sec1 at 3.2 Mpc), indicating
that those stars brighter than the first--ascent TRGB stars are, by analogy
with M32, likely blends of fainter stars.
If instead we were to adopt the second scenario in which these brighter
stars are actually AGB stars, the first striking feature in the CMDs shown
in Figure 3 is that Field II contains significantly more AGB stars in comparison
to Field I.
In particular, we focus on the WF3 chip of each field;
we restrict the sample to smaller regions of WF3 chips where
the surface brightness is roughly in the range of $21.0 \leq \mu_i \leq 21.5$.
This corresponds to the lower 3/4 of WF3 chip in Field I (Regions 3A$+$3B in
Figure~8), and upper 3/4 of WF3 chip in Field II (Regions 3A$+$3B).
The difference in the AGB populations of the two Fields is compared
in terms of $N_{AGB}/N_{RGB}$, defined here as the ratio of the numbers of stars
in a 0.5--mag bin brighter than
the TRGB to those in a 0.5--mag bin fainter than the TRGB.
We chose the 0.5--mag bin here as it might be less affected by the
incompleteness of stars detected at magnitudes $\sim$1 mag fainter than
the TRGB.
In calculating the ratios, we also assume that 20\% of the fainter giants
below the TRGB are actually AGB stars.
The ratios of Fields I and II are, respectively, $N_{AGB}/N_{RGB} = 58/193 = 0.30
\pm 0.04$ and $164/484 = 0.64 \pm 0.04$.
Restricting the samples furthermore to avoid the more crowded regions,
by using those stars in the section 3A only,
we obtain $N_{AGB}/N_{RGB} = 58/193 = 0.30 \pm 0.04$ and $87/172 = 0.51 \pm 0.06$
for Fields I and II respectively.
These ratios seem to suggest that the difference between the two fields is
significant, at the $4$--$5\sigma$ level.
Because these subregions were chosen to match the surface brightness as closely
as possible, the blending of stars due to crowding should not be a major
factor in systematically making Field~II much richer in the intermediate--age
AGB population compared to Field~I.
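The quoted ratios and their uncertainties can be reproduced with simple Poisson error propagation; this propagation is an assumption on our part, and the quoted denominators are taken here as the already-corrected RGB counts:

```python
import math

def agb_rgb_ratio(n_agb, n_rgb):
    """N_AGB/N_RGB with Poisson error propagation on both counts."""
    r = n_agb / n_rgb
    return r, r * math.sqrt(1.0 / n_agb + 1.0 / n_rgb)

r1, e1 = agb_rgb_ratio(58, 193)   # matches the quoted 0.30 +/- 0.04
r2, e2 = agb_rgb_ratio(87, 172)   # close to the quoted 0.51 +/- 0.06
```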
Although the present analysis cannot by any means rule out
crowding as the dominant effect and there is strong evidence that these brighter
stars above the TRGB are blends of fainter stars, we conclude the paper
by mentioning a possible connection between the presence of these brighter
stars (if real) with the HI distribution around this galaxy.
Yun, Ho and Lo (1993) presented the VLA observations of M82 which revealed
tidal streamers extending $\geq 10$kpc from M82, characterized by two
main structures.
One of these streamers extend northward from the
NE edge of the galaxy, which coincides with our Field~II position.
The integrated HI flux map of Yun et al. does not, however, reveal any
neutral hydrogen in the region around Field~I.
If M82 is a tidally--disrupted system that has undergone direct interaction
with M81 and NGC~3077, could this have affected the star--formation history
of M82, enhancing a more recent star formation in the northeastern edge
of the galaxy (Field II)?
Answering this question is obviously beyond the scope of this paper,
requiring much deeper, higher--resolution observations, such as with
the Advanced Camera.
This work was funded by NASA LTSA program, NAS7-1260, to SS.
BFM was supported in part by the NASA/IPAC Extragalactic Database.
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{D}{eep} learning, especially deep neural networks (DNNs), has been successfully adopted in widespread applications for its high effectiveness and efficiency \cite{lecun2015deep,feng2020performance,minaee2021image}. In general, obtaining a well-performing DNN is usually expensive, as it requires a well-designed architecture, a large number of high-quality training samples, and substantial computational resources. Accordingly, these models are valuable intellectual property of their owners.
However, recent studies \cite{tramer2016stealing,orekondy2019knockoff,chandrasekaran2020exploring} revealed that adversaries can obtain a functionally similar copy of a well-performing victim model to `steal' it. This attack is called \emph{model stealing}. For example, the adversaries can copy the victim model directly if they can access its source files; even when the victim model is deployed such that the adversaries can only query it, they can still steal it based on its predictions ($i.e.$, labels or probabilities). Since the stealing process is usually almost costless compared with obtaining a well-trained victim model, model stealing poses a huge threat to model owners.
Currently, there are also some methods to defend against model stealing. In general, existing defenses can be roughly divided into two main categories, including the \emph{active defenses} and \emph{verification-based defenses}. Specifically, active defenses intend to increase the costs ($e.g.$, query times and accuracy decrease) of model stealing, while verification-based defenses attempt to verify whether a suspicious model is stolen from the victim model. For example, defenders can introduce randomness or perturbations in the victim models \cite{tramer2016stealing,lee2018defending,kariyappa2020defending} or watermark the victim model via (targeted) backdoor attacks or data poisoning \cite{li2020open,jia2021entangled,maini2021dataset}. However, existing active defenses may lead to poor performance of the victim model and could even be bypassed by advanced adaptive attacks \cite{jia2021entangled,maini2021dataset,li2022defending}; the verification-based methods target only limited simple stealing scenarios ($e.g.$, direct copy or fine-tuning) and are largely ineffective against more complicated model stealing. Besides, these methods also introduce some stealthy latent \emph{short-cuts} ($e.g.$, hidden backdoors) in the victim model, which could be maliciously exploited; this further hinders their application. Accordingly, how to defend against model stealing remains an important open question.
In this paper, we revisit the verification-based defenses against model stealing, which examine whether a suspicious model has defender-specified behaviors. If the model has such behaviors, the defense treats it as stolen from the victim. We argue that a defense is practical if and only if it is both effective and harmless. Specifically, effectiveness requires that it can accurately identify whether the suspicious model is stolen from the victim, no matter which model stealing method is adopted; harmlessness ensures that the model watermarking brings no additional security risks, $i.e.$, the model trained with the watermarked dataset should have similar prediction behaviors to the one trained with the benign dataset. We first reveal that existing methods fail to meet all these requirements and explain the reasons. Based on this analysis, we propose to conduct model ownership verification via embedded external features (MOVE), trying to fulfill both requirements. Our MOVE defense consists of three main steps, including 1) embedding external features, 2) training an ownership meta-classifier, and 3) ownership verification with hypothesis-testing. In general, the external features are different from those contained in the original training set. Specifically, we embed external features by tampering with the images of a few training samples based on \emph{style transfer}. Since we only poison a few samples and do not change their labels, the embedded features will not hinder the functionality of the victim model and will not create a malicious hidden backdoor in the victim model. Besides, we also train a \emph{benign model} based on the original training set. It is used only for training the meta-classifier to determine whether a suspicious model is stolen from the victim. In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
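A minimal sketch of the feature-embedding step (step 1) is given below. The style function here is a stand-in color transform rather than an actual style-transfer network, and all shapes, fractions, and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def style_fn(img):
    """Stand-in for style transfer: a fixed transform imprinting features
    that are absent from the original training distribution."""
    styled = img.copy()
    styled[..., 0] = np.clip(styled[..., 0] * 0.5 + 0.5, 0.0, 1.0)
    return styled

def embed_external_features(images, labels, fraction=0.1):
    """Restyle a small fraction of the training images while keeping their
    labels unchanged, so no targeted hidden backdoor is planted."""
    idx = rng.choice(len(images), size=int(fraction * len(images)),
                     replace=False)
    watermarked = images.copy()
    for i in idx:
        watermarked[i] = style_fn(images[i])
    return watermarked, labels, idx

images = rng.random((100, 8, 8, 3)).astype(np.float32)
labels = rng.integers(0, 10, size=100)
wm_images, wm_labels, idx = embed_external_features(images, labels)
```

The victim model would then be trained on `(wm_images, wm_labels)`, while a benign model trained on the original data provides the negative examples for the gradient-based meta-classifier.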
The main contribution of this work is five-fold: 1) We revisit defenses against model stealing from the perspective of ownership verification. 2) We reveal the limitations of existing verification-based methods and the reasons for their failure. 3) We propose a simple yet effective ownership verification method under both white-box and black-box settings. 4) We verify the effectiveness of our method on benchmark datasets under various types of attacks and discuss its resistance to potential adaptive attacks. 5) Our work provides a new angle on how to adopt malicious `data poisoning' for positive purposes.
This paper is a journal extension of our conference paper \cite{li2022defending}. Compared with the preliminary conference version, we have made significant improvements and extensions. The main differences are four-fold: 1) We rewrite the motivation in the Introduction, highlighting the significance of the studied problem and the limitations of existing methods. 2) We generalize our MOVE from the white-box setting to the black-box setting in Section \ref{sec:method} to enhance its abilities and widen its applications. 3) We analyze the resistance of our MOVE to potential adaptive attacks and discuss its relations to membership inference and backdoor attacks in Section \ref{sec:resistance} and Section \ref{sec:relations}, respectively. 4) More results and analyses are added in Section \ref{sec:exp}.
The rest of this paper is organized as follows. We briefly review related works, including model stealing and its defenses, in Section \ref{sec:related-work}. After that, we introduce the preliminaries and formulate the studied problem. In Section \ref{sec:limitation}, we reveal the limitations of existing verification-based defenses. We introduce our MOVE under both white-box and black-box settings in Section \ref{sec:method}. We verify the effectiveness of our methods in Section \ref{sec:exp} and conclude this paper at the end. We hope that our paper can inspire a deeper understanding of model ownership verification, to facilitate the intellectual property protection of model owners.
\section{Related Work}
\label{sec:related-work}
\subsection{Model Stealing}
\label{sec:model-stealing}
In general, model stealing\footnote{In this paper, we focus on model stealing and its defenses in image classification tasks. The attacks and defenses in other tasks are out of the scope of this paper. We will discuss them in our future work.} aims to steal the intellectual property from a victim by obtaining a function-similar copy of the victim model. Depending on the adversary's access level to the victim model, existing model stealing methods can be divided into four main categories, as follows:
\emph{1) Fully-Accessible Attacks ($\mathcal{A}_F$): }
In this setting, the adversaries can directly copy and deploy the victim model.
\emph{2) Dataset-Accessible Attacks ($\mathcal{A}_D$): }
The adversaries can access the training dataset of the victim model, whereas they can only query the victim model. The adversaries may obtain a stolen model by knowledge distillation \cite{hinton2015distilling}.
\emph{3) Model-Accessible Attacks ($\mathcal{A}_M$): }
The adversaries have complete access to the victim model while having no training samples. This attack may happen when the victim model is open-sourced. The adversaries may directly fine-tune the victim model (with their own samples) or use the victim model for data-free distillation in a zero-shot learning framework \cite{fang2019data} to obtain the stolen model.
\emph{4) Query-Only Attacks ($\mathcal{A}_Q$): }
This is the most threatening type of model stealing, where the adversaries can only query the victim model. Depending on the feedback of the victim model, query-only attacks can be divided into two sub-categories: \emph{label-query attacks} \cite{papernot2017practical,jagielski2020high,chandrasekaran2020exploring} and \emph{logit-query attacks} \cite{tramer2016stealing,orekondy2019knockoff}. In general, label-query attacks adopt the victim model to annotate some unlabeled substitute samples, which are then used to train the substitute model. In logit-query attacks, the adversary usually obtains a function-similar substitute model by minimizing the distance between its predicted logits and those generated by the victim model.
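As a concrete illustration of logit-query stealing, the toy sketch below (our own simplification, not taken from the cited attacks) fits a linear substitute model by gradient descent on the squared distance between its logits and those returned by a linear "victim"; all sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 8, 4, 256

W_victim = rng.normal(size=(d, K))        # victim model (unknown to the attacker)
X_query = rng.normal(size=(n, d))         # attacker's query set

# The attacker only observes the victim's logits for its queries ...
Z_victim = X_query @ W_victim

# ... and fits a substitute by minimizing the squared logit distance.
W_sub = np.zeros((d, K))
lr = 0.05
for _ in range(300):
    diff = X_query @ W_sub - Z_victim     # logit gap on the queries
    W_sub -= lr * (X_query.T @ diff) / n  # gradient of 0.5 * mean ||diff||^2

err = float(np.abs(X_query @ W_sub - Z_victim).max())
```

With enough queries, the substitute converges to a function-similar copy even though the attacker never sees the victim's weights.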
\subsection{Defenses against Model Stealing}
\label{sec:defense-model-stealing}
\subsubsection{Active Defenses}
Currently, most existing methods against model stealing are active defenses. In general, they intend to increase the costs ($e.g.$, query times and accuracy decrease) of model stealing. For example, defenders may round the probability vectors \cite{tramer2016stealing}, introduce noise into the output vectors so that model stealing incurs a high loss \cite{lee2018defending}, or only return the most confident label instead of the whole output vector \cite{orekondy2019knockoff}. However, these defenses may significantly reduce the performance of victim models and may even be bypassed by adaptive attacks \cite{jia2021entangled,maini2021dataset,li2022defending}.
\subsubsection{Model Ownership Verification}
In general, model ownership verification intends to verify whether a suspicious model is stolen from the victim. Currently, existing methods can be divided into two main categories, including \emph{membership inference} and \emph{backdoor watermarking}, as follows:
\vspace{0.2em}
\noindent \emph{Verification via Membership Inference. }
Membership inference \cite{shokri2017membership,leino2020stolen, hui2021practical} aims to identify whether some particular samples are used to train a given model. Intuitively, defenders can use it to verify whether the suspicious model is trained on particular training samples used by the victim model, thereby conducting ownership verification. However, simply applying membership inference for ownership verification is far less effective in defending against many complicated model stealing attacks ($e.g.$, model extraction) \cite{maini2021dataset}. This is most probably because the suspicious models obtained by these processes are significantly different from the victim model, although they have similar functions. Most recently, Maini \emph{et al.} proposed dataset inference \cite{maini2021dataset}, trying to defend against different types of model stealing simultaneously. Its key idea is to identify whether a suspicious model contains the knowledge of the inherent features that the victim model $V$ learned from the private training set, instead of simply particular samples. Specifically, let us consider a $K$-classification problem. For each sample $(\bm{x}, y)$, dataset inference first generated its minimum distance $\bm{\delta}_t$ to each class $t$ by
\begin{equation}
\min_{\bm{\delta}_t} d(\bm{x}, \bm{x}+\bm{\delta}_t), \ \text{s.t.} \ V(\bm{x}+\bm{\delta}_t) = t,
\end{equation}
where $d(\cdot)$ is a distance metric ($e.g.$, the $\ell^\infty$ norm). It defined the distance to each class ($i.e.$, $\bm{\delta}=(\bm{\delta}_1, \cdots, \bm{\delta}_K)$) as the inherent feature of sample $(\bm{x}, y)$ $w.r.t.$ the victim model $V$. After that, the defender will randomly select some samples inside (labeled as `+1') or outside (labeled as `-1') their private dataset and use the feature embedding $\bm{\delta}$ to train a binary meta-classifier $C$, where $C(\bm{\delta}) \in [0,1]$ indicates the probability that the sample $(\bm{x}, y)$ is from the private set. In the verification stage, the defender will select equal-sized sample vectors from private and public samples and then calculate the inherent feature embedding $\bm{\delta}$ of each sample vector $w.r.t.$ the suspicious model $S$. To verify whether $S$ is stolen from $V$, the trained $C$ gives confidence scores based on the inherent feature embedding $\bm{\delta}$ of $S$. Besides, dataset inference adopted a \emph{hypothesis-test} based on the confidence scores of sample vectors to provide a more confident verification.
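For a linear classifier, the minimum distance $\bm{\delta}_t$ above has a closed form, which the toy sketch below uses as a simplified analog of dataset inference's feature embedding; real DNNs require iterative adversarial attacks instead, and the model and sizes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 3
W = rng.normal(size=(d, K))              # toy linear "victim" model V

def distance_embedding(W, x):
    """Minimum L2 perturbation pushing x across each pairwise decision
    boundary of a linear classifier -- a closed-form analog of the
    delta_t features fed to the meta-classifier."""
    z = x @ W
    pred = int(np.argmax(z))
    delta = np.zeros(W.shape[1])
    for t in range(W.shape[1]):
        if t == pred:
            continue                      # x is already classified as pred
        w_diff = W[:, pred] - W[:, t]
        delta[t] = (z[pred] - z[t]) / np.linalg.norm(w_diff)
    return delta

x = rng.normal(size=d)
delta = distance_embedding(W, x)          # per-class distance embedding
```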
However, as shown in the experiments in Section \ref{sec:limitation_dataset}, dataset inference is prone to misjudgments, especially when the training set of the suspicious model has a similar distribution to that of the victim model. The misjudgment arises mostly because different models may learn similar inherent features once their training sets have certain distribution similarities.
\vspace{0.2em}
\noindent \emph{Verification via Backdoor Watermarking. }
These methods \cite{adi2018turning,li2020open,jia2021entangled} were originally proposed to defend against fully-accessible attacks with or without fine-tuning. However, we notice that they enjoy certain similarities to dataset inference, since both rely on the knowledge learned by the victim model from the private dataset. Accordingly, they serve as potential defenses against other types of model stealing, such as model-accessible and query-only attacks. The main difference compared with dataset inference is that they conduct model ownership verification based on defender-specified trigger-label pairs that are out of the distribution of the original training set. For example, backdoor watermarking first adopts backdoor attacks \cite{li2020backdoor} to watermark the model during the training process and then conducts the ownership verification. In particular, a backdoor attack can be characterized by three components: 1) the target class $y_t$, 2) the trigger pattern $\bm{t}$, and 3) a pre-defined poisoned image generator $G(\cdot)$. Given the benign training set $\mathcal{D} = \{ (\bm{x}_i, y_i) \}_{i=1}^{N}$, the backdoor adversary will randomly select $\gamma \%$ samples ($i.e.$, $\mathcal{D}_s$) from $\mathcal{D}$ to generate their poisoned version $\mathcal{D}_p = \{ (\bm{x}', y_t) |\bm{x}' = G(\bm{x}; \bm{t}), (\bm{x}, y) \in \mathcal{D}_s \}$. Different backdoor attacks may assign different generators $G(\cdot)$. For example, $G(\bm{x}; \bm{t}) = (\bm{1}-\bm{\lambda}) \otimes \bm{x}+ \bm{\lambda} \otimes \bm{t}$, where $\bm{\lambda} \in \{0,1\}^{C \times W \times H}$ and $\otimes$ indicates the element-wise product, in BadNets \cite{gu2019badnets}; $G(\bm{x}) = \bm{x} + \bm{t}$ in ISSBA \cite{li2021invisible}. After $\mathcal{D}_p$ is generated, $\mathcal{D}_p$ and the remaining benign samples $\mathcal{D}_b \triangleq \mathcal{D} \backslash \mathcal{D}_s$ will be used to train the (watermarked) model $f_\theta$ via
\begin{equation}
\min_{\bm{\theta}} \sum_{(\bm{x}, y) \in \mathcal{D}_p \cup \mathcal{D}_b} \mathcal{L}(f_{\bm{\theta}}(\bm{x}), y),
\end{equation}
where $\mathcal{L}(\cdot)$ is the loss function.
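As a minimal illustration, the BadNets-style generator $G(\bm{x}; \bm{t})$ described above can be sketched as follows; the trigger values, patch size, and location are illustrative choices of this sketch, not those of the original attack.

```python
import numpy as np

def badnets_generator(x, trigger, mask):
    """G(x; t) = (1 - lambda) * x + lambda * t, with lambda a binary mask."""
    return (1 - mask) * x + mask * trigger

C, H, W = 3, 32, 32
x = np.random.rand(C, H, W)                # benign image with values in [0, 1]
trigger = np.ones((C, H, W))               # white trigger values
mask = np.zeros((C, H, W))
mask[:, -3:, -3:] = 1                      # 3x3 patch in the bottom-right corner

x_poisoned = badnets_generator(x, trigger, mask)
# The poisoned sample would then be relabeled to the target class y_t.
```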
In the verification stage, the defender will examine the suspicious model in predicting $y_t$. If the confidence scores of poisoned samples are significantly greater than those of benign samples, the suspicious model is treated as watermarked and therefore as stolen from the victim. However, as shown in the experiments conducted in Section \ref{sec:limitation_backdoor}, these methods have minor effects in defending against more complicated model stealing. Their failures are most probably because the hidden backdoor is modified during the complicated stealing process. Moreover, backdoor watermarking also introduces new security threats, since it builds a stealthy latent connection between the trigger pattern and the target label. The adversaries may use it to maliciously manipulate the predictions of deployed victim models. This problem will also hinder its utility in practice.
\vspace{0.15em}
In conclusion, existing defenses still have vital limitations. How to defend against model stealing is still an important open question and worth further exploration.
\section{Preliminaries}
\subsection{Technical Terms}
Before we dive into technical details, we first present the definition of commonly used technical terms, as follows:
\begin{itemize}
\item \emph{Victim Model}: released model that could be stolen by the adversaries.
\item \emph{Suspicious Model}: model that is likely stolen from the victim model.
\item \emph{Benign Dataset}: unmodified dataset.
\item \emph{Benign Sample}: unmodified sample.
\item \emph{Watermarked Dataset}: dataset used for watermarking the (victim) model.
\item \emph{Watermarked Sample}: modified sample contained in the watermarked dataset.
\item \emph{Poisoned Sample}: modified sample used to create and activate the backdoor.
\item \emph{Benign Accuracy}: the accuracy of models in predicting benign testing samples.
\item \emph{Attack Success Rate}: the accuracy of models in predicting poisoned testing samples.
\end{itemize}
\subsection{Problem Formulation}
In this paper, we focus on defending against model stealing in image classification tasks via model ownership verification. Specifically, given a suspicious model $S$, the defender intends to identify whether it is stolen from the victim model $V$. We argue that a model ownership verification is promising in practice if and only if it satisfies the following two requirements simultaneously:
\begin{definition}[Two Necessary Requirements]
\end{definition}
\vspace{-0.65em}
\begin{itemize}
\item \emph{Effectiveness}: The defense could accurately identify whether the suspicious model is stolen from the victim no matter what model stealing is adopted.
\item \emph{Harmlessness}: The defense brings no additional security risks ($e.g.$, backdoor), $i.e.$, the model trained with the watermarked dataset should have similar prediction behaviors to the one trained with the benign dataset.
\end{itemize}
\vspace{0.15em}
In general, effectiveness guarantees verification effects, while harmlessness ensures the safety of the victim model.
\subsection{Threat Model}
In this paper, we consider both white-box and black-box settings of model ownership verification, as follows:
\vspace{0.3em}
\noindent \emph{White-box Setting. }
We assume that the defenders have complete access to the suspicious model, $i.e.$, they can obtain its source files. However, the defenders have no information about the stealing process. For example, they have no information about the training samples, the training schedule, and the adopted stealing method used by the adversaries. One may argue that only black-box defenses are practical, since the adversary may refuse to provide the suspicious model. However, white-box defenses are also practical. In our opinion, the adoption of verification-based defenses (in a legal system) requires an official institute for arbitration. Specifically, all commercial models should be registered here, through the unique identification ($e.g.$, MD5 code \cite{rivest1992md5}) of their model’s source files. When this official institute is established, its staff should take responsibility for the verification process. For example, they can require the company to provide the model file with the registered identification and then use our method (under the white-box setting) for verification.
\vspace{0.3em}
\noindent \emph{Black-box Setting. }
We assume that the defenders can only query the suspicious model and obtain its predicted probabilities, but cannot access the model source files or intermediate results ($e.g.$, gradients) and have no information about the stealing process. This approach can be used as a primary inspection of the suspicious model before applying for official arbitration, where white-box verification may be adopted. In particular, we do not consider the label-only black-box setting, since harmlessness requires that no abnormal prediction behaviors (compared with the model trained with a benign dataset) are introduced into the victim model. In other words, it is impossible to accurately identify model stealing under the label-only black-box setting.
\vspace{0.8em}
\section{Revisiting Verification-based Defenses}
\label{sec:limitation}
\vspace{0.3em}
\subsection{The Limitations of Dataset Inference}
\label{sec:limitation_dataset}
As we described in Section \ref{sec:defense-model-stealing}, dataset inference reached better performance compared with membership inference by using inherent features instead of given samples. In other words, it relied on the latent assumption that a benign model will not learn features contained in the training set of the victim model. However, different models may learn similar features from different datasets, $i.e.$, this assumption does not hold and therefore leads to misjudgments. In this section, we will illustrate this limitation.
\vspace{0.3em}
\noindent \emph{Settings. }
We conduct the experiments on the CIFAR-10 \cite{krizhevsky2009learning} dataset with VGG \cite{simonyan2014very} and ResNet \cite{he2016deep}. To create two independent datasets that have similar distributions, we randomly separate the original training set $\mathcal{D}$ into two disjoint subsets $\mathcal{D}_l$ and $\mathcal{D}_r$. We train a VGG on $\mathcal{D}_l$ (dubbed VGG-$\mathcal{D}_l$) and a ResNet on $\mathcal{D}_r$ (dubbed ResNet-$\mathcal{D}_r$), respectively. We also train a VGG on a noisy version of $\mathcal{D}_l$ ($i.e.$, $\mathcal{D}_l'$), where $\mathcal{D}_l' \triangleq \{(\bm{x}', y)|\bm{x}' = \bm{x} + \mathcal{N}(0, 16), (\bm{x}, y) \in \mathcal{D}_l\}$ (dubbed VGG-$\mathcal{D}_l'$), for reference. In the verification process, we verify whether VGG-$\mathcal{D}_l$ and VGG-$\mathcal{D}_l'$ are stolen from ResNet-$\mathcal{D}_r$ and whether ResNet-$\mathcal{D}_r$ is stolen from VGG-$\mathcal{D}_l$ with dataset inference \cite{maini2021dataset}. As described in Section \ref{sec:defense-model-stealing}, we adopt the p-value as the evaluation metric, following the setting of dataset inference. In particular, the smaller the p-value, the more confident dataset inference is that the suspicious model is stolen from the victim model.
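The data preparation above can be sketched as follows; the array sizes are toy values, and treating the noise parameter $16$ as a standard deviation is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the CIFAR-10 training set (N images, 32x32x3, pixels in 0-255)
X = rng.integers(0, 256, size=(1000, 32, 32, 3)).astype(np.float32)
y = rng.integers(0, 10, size=1000)

# Randomly split D into two disjoint halves D_l and D_r
perm = rng.permutation(len(X))
l_idx, r_idx = perm[: len(X) // 2], perm[len(X) // 2 :]
X_l, y_l = X[l_idx], y[l_idx]
X_r, y_r = X[r_idx], y[r_idx]

# Noisy version D_l' = {(x + N(0, 16), y)}: same labels, perturbed images
X_l_noisy = np.clip(X_l + rng.normal(0.0, 16.0, size=X_l.shape), 0, 255)
```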
\vspace{0.3em}
\noindent \emph{Results. }
As shown in Table \ref{table:misjudge-acc-p-value}, all models have promising performance even with only half of the original training samples. However, the p-value is significantly smaller than 0.01 in all cases. In other words, dataset inference is confident in the judgments that VGG-$\mathcal{D}_l$ and VGG-$\mathcal{D}_l'$ are stolen from ResNet-$\mathcal{D}_r$, and that ResNet-$\mathcal{D}_r$ is stolen from VGG-$\mathcal{D}_l$. However, none of these models should be regarded as stolen from the victim, since they adopt completely different training samples and model structures. These results reveal that \emph{dataset inference can make misjudgments and therefore its results are questionable}. Besides, the misjudgments are probably caused by the distribution similarity among $\mathcal{D}_l$, $\mathcal{D}_r$, and $\mathcal{D}_l'$. The p-value of VGG-$\mathcal{D}_l$ is lower than that of VGG-$\mathcal{D}_l'$. This is probably because the latent distribution of $\mathcal{D}_l'$ is more different from that of $\mathcal{D}_r$ (compared with that of $\mathcal{D}_l$) and therefore the models learn more different features.
\begin{table}[t]
\caption{The accuracy of victim models and the p-value of verification processes. In this experiment, dataset inference misjudges in all cases. The failed cases are marked in red.}
\centering
\scalebox{1}{
\begin{tabular}{c|ccc}
\toprule
& ResNet-$\mathcal{D}_r$ & VGG-$\mathcal{D}_l$ & VGG-$\mathcal{D}_l'$ \\ \hline
Accuracy & 88.0\% & 87.7\% & 85.0\% \\
p-value & \red{$10^{-7}$} & \red{$10^{-5}$} & \red{$10^{-4}$} \\ \bottomrule
\end{tabular}
}
\label{table:misjudge-acc-p-value}
\end{table}
\subsection{The Limitations of Backdoor Watermarking}
\label{sec:limitation_backdoor}
As described in Section \ref{sec:defense-model-stealing}, the verification via backdoor watermarking relies on the latent assumption that the defender-specific trigger pattern matches the backdoor embedded in stolen models. This assumption holds in its originally discussed scenarios, since the suspicious model is the same as the victim model. However, this assumption may not hold in more advanced model stealing, since the backdoor may be changed or even removed during the stealing process. Consequently, the backdoor-based watermarking may fail in defending against model stealing. In this section, we verify this limitation.
\vspace{0.3em}
\noindent \emph{Settings. }
We adopt the most representative and effective backdoor attack, the BadNets \cite{gu2019badnets}, as an example for the discussion. The watermarked model will then be stolen by the data-free distillation-based model stealing \cite{fang2019data}. We adopt \emph{benign accuracy (BA)} and \emph{attack success rate (ASR)} \cite{li2020backdoor} to evaluate the performance of the stolen model. The larger the ASR, the more likely the stealing will be detected.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.94\textwidth]{./fig/pipeline.pdf}
\vspace{-0.5em}
\caption{The main pipeline of our MOVE defense. In the first step, defenders will modify some images via style transfer for embedding external features. In the second step, defenders will train a meta-classifier to determine whether a suspicious model is stolen from the victim based on gradients (under the white-box setting) or prediction differences (under the black-box setting). In the last step, defenders will conduct ownership verification via hypothesis-test.}
\label{fig:pipeline}
\vspace{-0.5em}
\end{figure*}
\begin{table}[ht]
\caption{The performance (\%) of different models. The failed verification case is marked in red.}
\centering
\scalebox{1}{
\begin{tabular}{c|ccc}
\toprule
Model Type $\rightarrow$ & Benign & Watermarked & Stolen \\ \hline
BA & 91.99& 85.49 & 70.17 \\\hline
ASR & 0.01 & 100.00 & \red{ $3.84$} \\ \bottomrule
\end{tabular}
}
\label{table:fail_backdoor}
\end{table}
\begin{figure}[ht]
\centering
\subfigure[]{
\label{fig:original}
\includegraphics[width=0.14\textwidth]{./fig/original_trigger.png}}
\subfigure[]{
\label{fig:before}
\includegraphics[width=0.14\textwidth]{./fig/before.png}}
\subfigure[]{
\label{fig:after}
\includegraphics[width=0.14\textwidth]{./fig/after.png}}
\caption{The adopted trigger pattern and synthesized ones obtained from the watermarked and the stolen model. The trigger areas are indicated in the blue box. \textbf{(a)} ground-truth trigger pattern; \textbf{(b)} pattern obtained from the watermarked model; \textbf{(c)} pattern obtained from the stolen model.}
\label{fig:backdoor-change}
\end{figure}
\vspace{0.3em}
\noindent \emph{Results. }
As shown in Table \ref{table:fail_backdoor}, the ASR of the stolen model is only 3.84\%, which is significantly lower than that of the watermarked model. In other words, \emph{the defender-specified trigger no longer matches the hidden backdoor contained in the stolen model}. As such, backdoor-based model ownership verification will fail to detect this model stealing. To further discuss the reason for this failure, we adopt the targeted universal adversarial attack \cite{moosavi2017universal} to synthesize and visualize the potential trigger pattern of each model. As shown in Figure \ref{fig:backdoor-change}, the trigger pattern recovered from the victim model is similar to the ground-truth one. However, the recovered pattern from the stolen model is significantly different from the ground-truth one. These results provide a reasonable explanation for the failure of backdoor-based watermarking.
In particular, backdoor watermarking will also \emph{introduce additional security risks} to the victim model and therefore is harmful. Specifically, it builds a stealthy latent connection between triggers and the target label. The adversary may use it to maliciously manipulate predictions of the victim model. This potential security risk will further hinder the adoption of backdoor-based model verification in practice.
\section{The Proposed Method}
\label{sec:method}
Based on the observations in Section \ref{sec:limitation}, we propose a harmless model ownership verification method (MOVE) that embeds external features (instead of inherent features) into the victim model, without changing the labels of watermarked samples.
\subsection{Overall Pipeline}
In general, our MOVE defense consists of three main steps, including 1) embedding external features, 2) training an ownership meta-classifier, and 3) conducting ownership verification. In particular, we consider both white-box and black-box settings. They have the same feature embedding process and similar verification processes, while having different training manners of the meta-classifier. The main pipeline of our MOVE is shown in Figure \ref{fig:pipeline}.
\subsection{Embedding External Features}
\label{sec:embedding-external-features}
In this section, we describe how to embed external features. We first define inherent and external features before reaching the technical details of the embedding.
\begin{definition}[Inherent and External Features]
A feature $f$ is called an inherent feature of dataset $\mathcal{D}$ if and only if
$
\forall (\bm{x}, y) \in \mathcal{X}\times \mathcal{Y},\ (\bm{x}, y) \in \mathcal{D} \Rightarrow (\bm{x}, y)\ \text{contains feature}\ f.
$
Similarly, $f$ is called an external feature of dataset $\mathcal{D}$ if and only if
$
\forall (\bm{x}, y) \in \mathcal{X}\times \mathcal{Y},\ (\bm{x}, y)\ \text{contains feature}\ f \Rightarrow (\bm{x}, y) \notin \mathcal{D}.
$
\end{definition}
\begin{example}
If an image is from the MNIST dataset, it is at least grayscale; if an image is cartoon-style, it is not from the CIFAR-10 dataset, since CIFAR-10 contains only natural images.
\end{example}
Although external features are well defined, how to construct them remains difficult, since the learning dynamics of DNNs remain unclear and the concept of features itself is complicated. However, we at least know that the \emph{image style} can serve as a feature for the learning of DNNs in image-related tasks, based on some recent studies \cite{geirhos2019imagenet,duan2020adversarial,cheng2021deep}. As such, we can use \emph{style transfer} \cite{johnson2016perceptual, huang2017arbitrary,chen2020optical} for embedding external features. Other methods may also be adopted for the embedding; we will discuss them in our future work.
\begin{figure*}[ht]
\centering
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/original.jpg}}
\hspace{0.1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/with_trigger.png}}
\hspace{0.1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/gm.jpg}}
\hspace{0.1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/entangle.jpg}}
\hspace{0.1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/style.png}}
\hspace{0.1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/embedded.jpg}}
\caption{Images involved in different defenses. \textbf{(a)} benign image; \textbf{(b)} poisoned image in BadNets; \textbf{(c)} poisoned image in Gradient Matching; \textbf{(d)} poisoned image in Entangled Watermarks; \textbf{(e)} style image; \textbf{(f)} transformed image in our MOVE. }
\label{fig:images}
\end{figure*}
In particular, let $\mathcal{D} = \{ (\bm{x}_i, y_i) \}_{i=1}^{N}$ denote the benign training set, $\bm{x}_s$ a defender-specified \emph{style image}, and $T: \mathcal{X} \times \mathcal{X} \rightarrow \mathcal{X}$ a (pre-trained) style transformer. In this step, the defender first randomly selects $\gamma \%$ (dubbed the \emph{transformation rate}) of the samples ($i.e.$, $\mathcal{D}_s$) from $\mathcal{D}$ to generate their transformed version $\mathcal{D}_t = \{ (\bm{x}', y) |\bm{x}' = T(\bm{x}, \bm{x}_s), (\bm{x}, y) \in \mathcal{D}_s \}$. The external features will be learned by the victim model $V_\theta$ during the training process via
\begin{equation}
\min_{\bm{\theta}} \sum_{(\bm{x}, y) \in \mathcal{D}_b \cup \mathcal{D}_t} \mathcal{L}(V_{\bm{\theta}}(\bm{x}), y),
\end{equation}
where $\mathcal{D}_b \triangleq \mathcal{D} \backslash \mathcal{D}_s$ and $\mathcal{L}(\cdot)$ is the loss function.
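The feature-embedding step can be sketched as follows; `style_transfer_stub` is a hypothetical placeholder for the pre-trained transformer $T$ (a real defender would use a learned style-transfer network such as an AdaIN-style model), and the dataset sizes are illustrative.

```python
import numpy as np

def style_transfer_stub(x, x_style, alpha=0.3):
    """Hypothetical stand-in for T(x, x_s); a crude blend keeps the
    pipeline runnable without a pre-trained style network."""
    return (1 - alpha) * x + alpha * x_style

rng = np.random.default_rng(0)
X = rng.random((200, 3, 32, 32))          # benign training images D
y = rng.integers(0, 10, size=200)
x_style = rng.random((3, 32, 32))         # defender-specified style image

gamma = 10                                 # transformation rate: 10%
n_t = len(X) * gamma // 100
sel = rng.choice(len(X), size=n_t, replace=False)

X_wm = X.copy()
X_wm[sel] = style_transfer_stub(X[sel], x_style)
# Crucially, y is left untouched: labels of transformed samples do not
# change, so no shortcut between a pattern and a target class is created.
```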
At this stage, how to select the style image is an important question. Intuitively, it should be significantly different from the images contained in the original training set. In practice, defenders can simply adopt oil or sketch paintings as the style image, since most of the images that need to be protected are natural images. Defenders can also use other style images. We will further discuss this in Section \ref{sec:hyper}.
In particular, since we only modify a few samples and do not change their labels, the embedding of external features will not hinder the functionality of victim models or introduce new security risks ($e.g.$, hidden backdoors).
\subsection{Training Ownership Meta-Classifier}
Since there is no explicit expression of the embedded external features and those features also have minor influences on the prediction, we need to train an additional binary meta-classifier to determine whether the suspicious model contains the knowledge of external features.
Under the white-box setting, we adopt the gradients of model weights as the input to train the meta-classifier $C_{\bm{w}}: \mathbb{R}^{|\bm{\theta}|} \rightarrow \{-1, +1\}$. In particular, we assume that the victim model $V$ and the suspicious model $S$ have the same model structure. This assumption can easily be satisfied, since the defender can retrain a copy of the suspicious model on the training set of the victim model if they have different structures. Once the suspicious model is obtained, the defender will then train the benign version ($i.e.$, $B$) of the victim model on the benign training set $\mathcal{D}$. After that, we can obtain the training set $\mathcal{D}_c$ of the meta-classifier $C$ via
\begin{equation}
\begin{aligned}
\mathcal{D}_c = & \left\{\left(g_V(\bm{x}'), +1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\} \cup \\
& \left\{\left(g_B(\bm{x}'), -1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\},
\end{aligned}
\end{equation}
where $\text{sgn}(\cdot)$ indicates the sign function \cite{sachs2012applied}, $g_{V}(\bm{x}') = \text{sgn}( \nabla_{\bm{\theta}} \mathcal{L}(V(\bm{x}'), y))$, and $g_{B}(\bm{x}') = \text{sgn}( \nabla_{\bm{\theta}} \mathcal{L}(B(\bm{x}'), y))$. In particular, we adopt its sign vector instead of the gradient itself to highlight the influence of its direction.
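The gradient-sign feature $g_V(\bm{x}')$ can be illustrated with a linear softmax classifier, for which the weight gradient of the cross-entropy loss has a closed form; this toy model and its sizes are assumptions standing in for the DNN case.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_sign_feature(W, x, y):
    """sgn(grad_theta L(f_theta(x'), y)) for a linear softmax classifier
    with cross-entropy loss -- a toy stand-in for DNN weight gradients."""
    p = softmax(x @ W)
    p[y] -= 1.0                    # dL/dz = softmax(z) - onehot(y)
    grad_W = np.outer(x, p)        # chain rule: dL/dW = outer(x, dL/dz)
    return np.sign(grad_W).ravel()

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))        # toy model weights
x, y = rng.normal(size=8), 2       # a transformed sample and its label
feature = grad_sign_feature(W, x, y)   # input vector for the meta-classifier
```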
Under the black-box setting, defenders can no longer obtain the model gradients, since they cannot access the model's source files or intermediate results. In this case, we adopt the difference between the predicted probability vector of the transformed image and that of its benign version. Specifically, let $V: \mathcal{X} \rightarrow [0, 1]^K$ and $B: \mathcal{X} \rightarrow [0, 1]^K$ indicate the victim model and the benign model, respectively. Assume that $d_{V}(\bm{x}, \bm{x}') = V(\bm{x}') - V(\bm{x})$ and $d_{B}(\bm{x}, \bm{x}') = B(\bm{x}') - B(\bm{x})$. In this case, the training set $\mathcal{D}_c$ can be obtained via
\begin{equation}\label{eq:D_c}
\begin{aligned}
\mathcal{D}_c = & \left\{\left(d_{V}(\bm{x}, \bm{x}'), +1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\} \cup \\
& \left\{\left(d_{B}(\bm{x}, \bm{x}'), -1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\}.
\end{aligned}
\end{equation}
Different from MOVE under the white-box setting, we do not assume that the victim model has the same model structure as the suspicious model under the black-box setting, since only model predictions are needed in this case.
However, we found that directly using $\mathcal{D}_c$ defined in Eq.(\ref{eq:D_c}) does not yield a well-performing meta-classifier. This failure is mostly because the probability differences contain significantly less information than the model gradients. To alleviate this problem, we propose to introduce data augmentations on the transformed image and concatenate their prediction differences. Specifically, let $\bm{F}=\{f_{aug}^{(i)}\}_{i=1}^N$ denote $N$ given semantic- and size-preserving image transformations ($e.g.$, flipping). The (augmented) training set is denoted as follows:
\begin{equation}\label{eq:D_c_aug}
\begin{aligned}
\mathcal{D}_c^{aug} = & \left\{\left(A_V(\bm{x}, \bm{x}'), +1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\} \cup \\
& \left\{\left(A_{B}(\bm{x}, \bm{x}'), -1\right)| (\bm{x}', y) \in \mathcal{D}_t \right\},
\end{aligned}
\end{equation}
where
\begin{equation}
A_V(\bm{x}, \bm{x}') = \text{CAT}\left(\left\{d_{V}\left(\bm{x}, f_{aug}^{(i)}(\bm{x}')\right)\right\}_{i=1}^N\right),
\end{equation}
\begin{equation}
A_B(\bm{x}, \bm{x}') = \text{CAT}\left(\left\{d_{B}\left(\bm{x}, f_{aug}^{(i)}(\bm{x}')\right)\right\}_{i=1}^N\right),
\end{equation}
and `CAT' denotes the concatenation function. Note that the dimensions of $A_V(\bm{x}, \bm{x}')$ and $A_B(\bm{x}, \bm{x}')$ are both $(1, K \times N)$. In this paper, we adopt five widespread transformations for simplicity, including 1) the identity transformation, 2) horizontal flipping, 3) translation towards the bottom right, 4) translation towards the right, and 5) translation towards the bottom. We will discuss the potential of using other transformations in our future work.
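The augmented feature $A(\bm{x}, \bm{x}')$ can be sketched as follows for the five transformations listed above; the model is a toy stand-in with $K=10$ outputs, and `np.roll` only approximates translation (it uses wrap-around rather than zero padding):

```python
import numpy as np

# the five transforms listed above, sketched for a 2-D image
transforms = [
    lambda img: img,                               # identity
    lambda img: img[:, ::-1],                      # horizontal flip
    lambda img: np.roll(img, (1, 1), axis=(0, 1)), # shift bottom-right
    lambda img: np.roll(img, 1, axis=1),           # shift right
    lambda img: np.roll(img, 1, axis=0),           # shift bottom
]

def model(img):                   # toy stand-in with K = 10 outputs
    v = np.abs(img.ravel()[:10]) + 1e-6
    return v / v.sum()

def augmented_feature(model, x, x_t):
    """A(x, x') = CAT({d(x, f_i(x'))}): concatenated prediction
    differences over the N transforms, a (K * N,) vector."""
    return np.concatenate([model(f(x_t)) - model(x) for f in transforms])

rng = np.random.default_rng(2)
x, x_t = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
a = augmented_feature(model, x, x_t)
print(a.shape)                    # (50,) since K = 10 and N = 5
```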
Once the training set ($\mathcal{D}_c$ under the white-box setting or $\mathcal{D}_c^{aug}$ under the black-box setting) is obtained, the meta-classifier $C_{\bm{w}}$ is trained via
\begin{equation}
\min_{\bm{w}} \sum_{(\bm{s}, t) \in \mathcal{D}_c} \mathcal{L}(C_{\bm{w}}(\bm{s}), t).
\end{equation}
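As a minimal sketch of this training objective, a logistic-regression meta-classifier fit by gradient descent is shown below (the features are synthetic and separable; the paper does not prescribe this particular classifier or optimizer):

```python
import numpy as np

def train_meta_classifier(X, t, lr=0.5, epochs=200):
    """Fit a logistic-regression meta-classifier C_w on the (s, t)
    pairs of D_c by gradient descent on the cross-entropy loss."""
    y = (t + 1) // 2              # map labels {-1, +1} to {0, 1}
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# separable toy D_c: "victim" features are shifted positively
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(+1.0, 1.0, size=(20, 6)),
               rng.normal(-1.0, 1.0, size=(20, 6))])
t = np.array([+1] * 20 + [-1] * 20)
w = train_meta_classifier(X, t)
acc = np.mean((X @ w > 0) == (t > 0))
print(acc)                        # close to 1.0 on this separable set
```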
\subsection{Ownership Verification with Hypothesis-Test}
\label{sec:verification}
After training the meta-classifier $C$, the defender can verify whether a suspicious model is stolen from the victim simply based on the output of the meta-classifier for a given transformed image $\bm{x}'$. However, the verification result may be sharply affected by the randomness involved in selecting $\bm{x}'$. To increase the verification confidence, we design a hypothesis-test-based method, as follows:
\begin{definition}[White-box Verification]
Let $\bm{X}'$ be the random variable of the transformed image, and let $\mu_{S}$ and $\mu_{B}$ denote the posterior probabilities of the events $C(g_S(\bm{X}')) = 1$ and $C(g_B(\bm{X}')) = 1$, respectively. Given the null hypothesis $H_0: \mu_{S} = \mu_{B} \ (H_1: \mu_{S} > \mu_{B})$, we claim that the suspicious model $S$ is stolen from the victim if and only if $H_0$ is rejected.
\end{definition}
\begin{definition}[Black-box Verification]
Let $\bm{X}'$ be the random variable of the transformed image and $\bm{X}$ its benign version. Let $\mu_{S}$ and $\mu_{B}$ denote the posterior probabilities of the events $C\left(A_S(\bm{X}, \bm{X}')\right) = 1$ and $C\left(A_B(\bm{X}, \bm{X}')\right) = 1$, respectively. Given the null hypothesis $H_0: \mu_{S} = \mu_{B} \ (H_1: \mu_{S} > \mu_{B})$, we claim that the suspicious model $S$ is stolen from the victim if and only if $H_0$ is rejected.
\end{definition}
Specifically, we randomly sample $m$ different transformed images from $\mathcal{D}_t$ to conduct the single-tailed pair-wise T-test \cite{hogg2005introduction} and calculate its p-value. If the p-value is smaller than the significance level $\alpha$, the null hypothesis $H_0$ is rejected. We also calculate the \emph{confidence score} $\Delta \mu = \mu_{S} - \mu_{B}$ to represent the verification confidence. The larger the $\Delta \mu$, the more confident the verification.
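The paired, one-tailed test can be sketched as follows. The per-image meta-classifier scores below are hypothetical, and in practice the p-value would be obtained from a t-distribution table or a statistics library (e.g., `scipy.stats.ttest_rel`) rather than computed by hand:

```python
import math

def paired_t_statistic(scores_s, scores_b):
    """One-tailed paired t-statistic for H0: mu_S = mu_B against
    H1: mu_S > mu_B over m matched verification samples."""
    d = [s - b for s, b in zip(scores_s, scores_b)]
    m = len(d)
    mean = sum(d) / m
    var = sum((x - mean) ** 2 for x in d) / (m - 1)
    return mean / math.sqrt(var / m)

# hypothetical meta-classifier scores on m = 10 transformed images
scores_s = [0.90, 0.60, 0.95, 0.85, 0.92, 0.88, 0.91, 0.87, 0.90, 0.93]
scores_b = [0.10, 0.20, 0.15, 0.05, 0.12, 0.11, 0.08, 0.14, 0.10, 0.09]
t_stat = paired_t_statistic(scores_s, scores_b)
print(t_stat > 1.833)             # exceeds the one-sided 5% critical
                                  # value for df = 9, so H0 is rejected
```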
\section{Main Experiments}
\label{sec:exp}
\subsection{Settings}
\label{sec:exp_settings}
\noindent \emph{Dataset and Model Selection. }
We evaluate our defense on the CIFAR-10 \cite{krizhevsky2009learning} and ImageNet \cite{deng2009imagenet} datasets. CIFAR-10 contains 60,000 images (of size $3 \times 32 \times 32$) across 10 classes, including 50,000 training samples and 10,000 testing samples. ImageNet is a large-scale dataset and we use a subset containing 20 random classes for simplicity and efficiency. Each class of the subset contains 500 training samples (of size $3 \times 224 \times 224$) and 50 testing samples.
Following the settings of \cite{maini2021dataset}, we use the WideResNet \cite{zagoruyko2016wide} and ResNet \cite{he2016deep} as the victim model on CIFAR-10 and ImageNet, respectively.
\vspace{0.3em}
\noindent \emph{Training Settings. }
For the CIFAR-10 dataset, training is conducted based on open-source code\footnote{\url{https://github.com/kuangliu/pytorch-cifar}}. Specifically, both the victim model and the benign model are trained for 200 epochs with the SGD optimizer, an initial learning rate of 0.1, momentum of 0.9, weight decay of $5 \times 10^{-4}$, and batch size of 128. We decay the learning rate with the cosine schedule \cite{loshchilov2016sgdr} without restarts. We also use data augmentation techniques including random crop and resize (with random flip). For the ImageNet dataset, both the victim model and the benign model are trained for 200 epochs with the SGD optimizer, an initial learning rate of 0.001, momentum of 0.9, weight decay of $1 \times 10^{-4}$, and batch size of 32. The learning rate is decreased by a factor of 10 at epoch 150. All training processes are performed on a single GeForce GTX 1080 Ti GPU.
\begin{table*}[!ht]
\centering
\caption{Results of defenses on CIFAR-10 dataset under the white-box setting. The best results among all defenses are indicated in boldface while the failed verification cases are marked in red.}
\scalebox{0.85}{
\begin{tabular}{llcccccccccccccc}
\toprule
\multicolumn{2}{c}{\multirow{2}*{Model Stealing}}
&\multicolumn{2}{c}{BadNets}& &\multicolumn{2}{c}{Gradient Matching}& &\multicolumn{2}{c}{Entangled Watermarks}& &\multicolumn{2}{c}{Dataset Inference}& & \multicolumn{2}{c}{MOVE (Ours)}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\cline{15-16}
& &$\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value\\
\hline
$\mathcal{A}_{F}$ & Direct-copy &0.91 &$10^{-12}$ && 0.88 & $10^{-12}$ &&\textbf{0.99}&$\mathbf{10^{-35}}$ &&-&$10^{-4}$ & & 0.97 & $10^{-7}$\\
$\mathcal{A}_{D}$ &Distillation &\red{$-10^{-3}$} & \red{0.32} && \red{$10^{-7}$} & \red{0.20} && \red{0.01} & \red{0.33} &&-&$10^{-4}$ & &\textbf{0.53} & $\mathbf{10^{-7}}$\\
\multirow{2}*{$\mathcal{A}_{M}$}&Zero-shot &\red{$10^{-25}$} & \red{0.22} &&\red{$10^{-24}$} & \red{0.22} &&\red{$10^{-3}$} & $10^{-3}$ &&-&$10^{-2}$ & &\textbf{0.52} & $\mathbf{10^{-5}}$\\
&Fine-tuning &\red{$10^{-23}$} & \red{0.28} &&\red{$10^{-27}$} & \red{0.28} &&0.35& 0.01 &&-&$10^{-5}$ & &\textbf{0.50} & $\mathbf{10^{-6}}$\\
\multirow{2}*{$\mathcal{A}_{Q}$}&Label-query &\red{$10^{-27}$} & \red{0.20} &&\red{$10^{-30}$} & \red{0.34} &&\red{$10^{-5}$} &\red{0.62} &&-&$10^{-3}$ & &\textbf{0.52} & $\mathbf{10^{-4}}$\\
&Logit-query &\red{$10^{-27}$} & \red{0.23} &&\red{$10^{-23}$} & \red{0.33} &&\red{$10^{-6}$} &\red{0.64} && - &$10^{-3}$ & &\textbf{0.54} & $\mathbf{10^{-4}}$ \\ \hline
Benign&Independent &$10^{-20}$ & 0.33 &&$10^{-12}$ & 0.99 &&$10^{-22}$ &0.68 &&-&\red{$10^{-31}$} & &\textbf{0.00} & \textbf{1.00} \\
\bottomrule
\end{tabular}
}
\label{table:white-cifar10}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Results of defenses on ImageNet dataset under the white-box setting. The best results among all defenses are indicated in boldface while the failed verification cases are marked in red.}
\scalebox{0.85}{
\begin{tabular}{llcccccccccccccc}
\toprule
\multicolumn{2}{c}{\multirow{2}*{Model Stealing}}
&\multicolumn{2}{c}{BadNets}& &\multicolumn{2}{c}{Gradient Matching}& &\multicolumn{2}{c}{Entangled Watermarks}& &\multicolumn{2}{c}{Dataset Inference}& & \multicolumn{2}{c}{MOVE (Ours)}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\cline{15-16}
& &$\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value\\
\hline
$\mathcal{A}_{F}$ & Direct-copy &0.87 &$10^{-10}$ &&0.77 &$10^{-10}$ &&\textbf{0.99} &$\mathbf{10^{-25}}$ &&-&$10^{-6}$&&0.90 &$10^{-5}$\\
$\mathcal{A}_{D}$ &Distillation & \red{$10^{-4}$} & \red{0.43} && \red{$10^{-12}$} & \red{0.43} && \red{$10^{-6}$} & \red{0.19} &&-&$10^{-3}$&&\textbf{0.61} &$\mathbf{10^{-5}}$\\
\multirow{2}*{$\mathcal{A}_{M}$}&Zero-shot & \red{$10^{-12}$} & \red{0.33} && \red{$10^{-18}$} & \red{0.43} && \red{$10^{-3}$} & \red{0.46} &&-& $10^{-3}$&&\textbf{0.53} &$\mathbf{10^{-4}}$\\
&Fine-tuning & \red{$10^{-20}$} & \red{0.20} &&\red{ $10^{-12}$} & \red{0.47} && 0.46 &0.01 &&-&$10^{-4}$&&\textbf{0.60} &$\mathbf{10^{-5}}$\\
\multirow{2}*{$\mathcal{A}_{Q}$}&Label-query & \red{$10^{-23}$} & \red{0.29} && \red{$10^{-22}$} & \red{0.50} && \red{$10^{-7}$} & \red{0.45} &&-&$\bm{10^{-3}}$&&\textbf{0.55} &$\mathbf{10^{-3}}$\\
&Logit-query & \red{$10^{-23}$} & \red{0.38} && \red{$10^{-12}$} & \red{0.22} && \red{$10^{-6}$} & \red{0.36} &&-&$10^{-3}$&&\textbf{0.55} &$\mathbf{10^{-4}}$\\ \hline
Benign&Independent &$10^{-24}$ &0.38 &&$10^{-23}$ &0.78 &&$\mathbf{10^{-30}}$ &0.55 &&-&\red{$10^{-10}$} &&$10^{-5}$&\textbf{0.99}\\
\bottomrule
\end{tabular}
}
\label{table:white-imagenet}
\vspace{1em}
\end{table*}
\vspace{0.3em}
\noindent \emph{Settings for Model Stealing. }
Following the settings in \cite{maini2021dataset}, we conduct the model stealing attacks illustrated in Section \ref{sec:model-stealing} to evaluate the effectiveness of different defenses. Besides, we also provide the results of examining a suspicious model that is not stolen from the victim (dubbed `Independent') for reference. Specifically, we implement model distillation (dubbed `Distillation') \cite{hinton2015distilling} based on its open-source code\footnote{\url{https://github.com/thaonguyen19/ModelDistillation-PyTorch}}. The stolen model is trained with the SGD optimizer and an initial learning rate of 0.1, momentum of 0.9, and weight decay of $10^{-4}$. The zero-shot learning based data-free distillation (dubbed `Zero-shot') \cite{fang2019data} is implemented based on its open-source code\footnote{\url{https://github.com/VainF/Data-Free-Adversarial-Distillation}}. The stealing process is performed for 200 epochs with the SGD optimizer and a learning rate of 0.1, momentum of 0.9, weight decay of $5\times 10^{-4}$, and batch size of 256. For fine-tuning, the adversaries obtain stolen models by fine-tuning victim models on different datasets. Following the settings in \cite{maini2021dataset}, we randomly select 500,000 samples from the original TinyImages \cite{birhane2021large} as the substitute data to fine-tune the victim model for experiments on CIFAR-10. For the ImageNet experiments, we randomly choose samples from 20 other classes of the original ImageNet as the substitute data. We fine-tune the victim model for 5 epochs to obtain the stolen model. For the label-query attack (dubbed `Label-query'), we train the stolen model for 20 epochs with a substitute dataset labeled by the victim model. For the logit-query attack (dubbed `Logit-query'), we train the stolen model for 20 epochs by minimizing the KL-divergence between its outputs ($i.e.$, logits) and those of the victim model.
\vspace{0.3em}
\noindent \emph{Settings for Defenses. }
We compare our defense with dataset inference \cite{maini2021dataset} and model watermarking \cite{adi2018turning} based on BadNets \cite{gu2019badnets}, gradient matching \cite{geiping2021witches}, and entangled watermarks \cite{jia2021entangled}. We poison 10\% of the training samples for all defenses. Besides, we adopt a white square in the lower right corner as the trigger pattern for BadNets and an oil painting as the style image for our defense. Other settings are the same as those used in their original papers. We implement BadNets based on BackdoorBox \cite{li2022backdoorbox} and the other methods based on their official open-source code. An example of the images ($e.g.$, poisoned images and the style image) involved in different defenses is shown in Figure \ref{fig:images}. In particular, similar to our method, dataset inference uses different methods under different settings, while the other baseline defenses are designed under the black-box setting. Since methods designed under the black-box setting can also be used in white-box scenarios, we also compare our MOVE with them under the white-box setting.
\vspace{0.3em}
\noindent \emph{Evaluation Metric. }
We use the confidence score $\Delta \mu$ and the p-value as the metrics for our evaluation. Both are calculated based on the hypothesis test with 10 sampled images under the white-box setting and 100 images under the black-box setting. In particular, except for the independent sources (which should not be regarded as stolen), the smaller the p-value and the larger the $\Delta \mu$, the better the defense. For the independent ones, the larger the p-value and the smaller the $\Delta \mu$, the better the method. Among all defenses, the best result is indicated in boldface and the failed verification cases are marked in red.
\subsection{Results of MOVE under the White-box Setting}
As shown in Tables \ref{table:white-cifar10}-\ref{table:white-imagenet}, MOVE achieves the best performance in almost all cases under the white-box setting. For example, the p-value of our method is three orders of magnitude smaller than that of dataset inference and six orders of magnitude smaller than that of model watermarking in defending against distillation-based model stealing on the CIFAR-10 dataset. The only exceptions appear when defending against the fully-accessible attack, where entangled-watermarks-based model watermarking has some advantages. Nevertheless, our method can still easily make correct predictions in these cases. In particular, our MOVE defense is the only method that can effectively identify model stealing in all cases. Other defenses either fail under many complicated attacks ($e.g.$, query-only attacks) or misjudge when there is no stealing. Besides, our defense has minor adverse effects on the performance of victim models. The accuracy of the model trained on benign CIFAR-10 and on its transformed version is 91.79\% and 91.99\%, respectively; the accuracy of the model trained on benign ImageNet and on its transformed version is 82.40\% and 80.40\%, respectively. This is mainly because we do not change the labels of transformed images, and the transformation can therefore be treated as data augmentation, which is mostly harmless.
\begin{table*}[!ht]
\centering
\caption{Results of defenses on CIFAR-10 dataset under the black-box setting. The best results among all defenses are indicated in boldface while the failed verification cases are marked in red.}
\scalebox{0.87}{
\begin{tabular}{llcccccccccccccc}
\toprule
\multicolumn{2}{c}{\multirow{2}*{Model Stealing}}
&\multicolumn{2}{c}{BadNets}& &\multicolumn{2}{c}{Gradient Matching}& &\multicolumn{2}{c}{Entangled Watermarks}& &\multicolumn{2}{c}{Dataset Inference}& & \multicolumn{2}{c}{MOVE (Ours)}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\cline{15-16}
& &$\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value\\
\hline
$\mathcal{A}_{F}$ & Direct-copy &0.79 &$10^{-60}$&&\red{0.03} &$10^{-5}$&&0.54 &$10^{-35}$&&-&$10^{-39}$&&\textbf{0.84}& $\mathbf{10^{-74}}$ \\
$\mathcal{A}_{D}$ &Distillation &\red{$-10^{-3}$} & 0.02&&\red{0.01}&\red{ 0.14}&&\red{0.01}&$10^{-3}$ &&-&\red{ 0.32}&&\textbf{0.54}& $\mathbf{10^{-24}}$ \\
\multirow{2}*{$\mathcal{A}_{M}$}&Zero-shot &\red{$-10^{-25}$}&\red{0.29}&&\red{$10^{-4}$} &$10^{-3}$&&\red{$10^{-3}$}&\red{0.17}&&-&$10^{-5}$ &&\textbf{0.39}& $\mathbf{10^{-15}}$ \\
&Fine-tuning &\red{$10^{-3}$}&\red{0.08}&&\red{ $10^{-3}$}&\red{0.17}&&\red{$10^{-3}$}&\red{ 0.05}&&-&$10^{-6}$ &&\textbf{0.37}& $\mathbf{10^{-14}}$ \\
\multirow{2}*{$\mathcal{A}_{Q}$}&Label-query &\red{$10^{-4}$}&\red{0.11}&&\red{0.01} &\red{0.11}&&\red{$10^{-3}$}&\red{0.05}&&-&$\mathbf{10^{-7}}$&&\textbf{0.07}& $10^{-3}$ \\
&Logit-query &\red{$10^{-3}$}&\red{0.10}&&\red{ $10^{-23}$}&\red{0.08}&&\red{$10^{-35}$}&\red{ 0.11}&&-&$10^{-3}$&&\textbf{0.17}& $\mathbf{10^{-6}}$ \\ \hline
Benign&Independent &$10^{-20}$ &0.33&&$10^{-12}$&1.00&&\textbf{0.00}&\textbf{1.00}&&-&\red{$10^{-38}$} && $10^{-4}$ & 0.98 \\
\bottomrule
\end{tabular}
}
\label{table:black-cifar10}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Results of defenses on ImageNet dataset under the black-box setting. The best results among all defenses are indicated in boldface while the failed verification cases are marked in red.}
\scalebox{0.87}{
\begin{tabular}{llcccccccccccccc}
\toprule
\multicolumn{2}{c}{\multirow{2}*{Model Stealing}}
&\multicolumn{2}{c}{BadNets}& &\multicolumn{2}{c}{Gradient Matching}& &\multicolumn{2}{c}{Entangled Watermarks}& &\multicolumn{2}{c}{Dataset Inference}& & \multicolumn{2}{c}{MOVE (Ours)}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\cline{15-16}
& &$\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value & & $\Delta \mu$ & p-value\\
\hline
$\mathcal{A}_{F}$& Direct-copy &0.87 &$10^{-10}$&&0.32 &$10^{-35}$&&\textbf{0.99} &$10^{-48}$&&-&$10^{-65}$&&0.91&$\mathbf{10^{-135}}$\\
$\mathcal{A}_{D}$ &Distillation &\red{$10^{-4}$}&\red{0.09}&&\red{ $10^{-4}$}&\red{0.31}&&\red{$10^{-5}$}&\red{ 0.06}&&-&$10^{-50}$ &&\textbf{0.84}& $\mathbf{10^{-84}}$\\
\multirow{2}*{$\mathcal{A}_{M}$}&Zero-shot & \red{0.01} &$10^{-4}$ &&\red{$10^{-3}$} &\red{ 0.30}&&\red{$10^{-3}$}&\red{0.33}&&-&$10^{-65}$ &&\textbf{0.88}&$\mathbf{10^{-113}}$ \\
&Fine-tuning &\red{0.01} &$10^{-4}$ &&\red{$10^{-12}$}&\red{0.21}&&\red{0.01}&\red{0.19}&&-&$10^{-55}$ &&\textbf{0.74}&$\mathbf{10^{-62}}$\\
\multirow{2}*{$\mathcal{A}_{Q}$}&Label-query&\red{$10{-3}$}&\red{0.26}&&\red{$10^{-3}$}&\red{ 0.09}&&\red{$10^{-3}$}&\red{0.27} &&-&$\mathbf{10^{-50}}$ &&\textbf{0.43}& $10^{-12}$\\
&Logit-query &\red{$10^{-3}$}&\red{0.11}&&\red{$10^{-12}$}&\red{0.15}&&\red{$10^{-3}$}&\red{0.24}&&-&$\mathbf{10^{-54}}$ &&\textbf{0.61}& $10^{-37}$\\ \hline
Benign&Independent &$10^{-24}$&0.38 &&$10^{-5}$ &0.28 &&$\mathbf{10^{-30}}$&\textbf{0.99} &&-&\red{$10^{-50}$}&& $10^{-4}$ & 0.97 \\
\bottomrule
\end{tabular}
}
\label{table:black-imagenet}
\end{table*}
\begin{table}[ht]
\centering
\caption{Defending against multi-stage stealing.}
\scalebox{0.87}{
\begin{tabular}{c|c|ccc}
\toprule
Setting$\downarrow$ & Stage$\rightarrow$ & Stage 0 & Stage 1 & Stage 2 \\ \hline
\multirow{4}{*}{White-box} & Method$\rightarrow$ & Direct-copy & Zero-shot & Zero-shot \\
& p-value$\rightarrow$ & $10^{-7}$ & $10^{-5}$ & $10^{-4}$ \\ \cline{2-5}
& Method$\rightarrow$ & Direct-copy & Logit-query & Zero-shot \\
& p-value$\rightarrow$ & $10^{-7}$ & $10^{-4}$ & 0.01 \\ \hline \hline
\multirow{4}{*}{Black-box} & Method$\rightarrow$ & Direct-copy & Zero-shot & Zero-shot \\
& p-value$\rightarrow$ & $10^{-74}$ & $10^{-15}$ & $10^{-5}$ \\ \cline{2-5}
& Method$\rightarrow$ & Direct-copy & Logit-query & Zero-shot \\
& p-value$\rightarrow$ & $10^{-74}$ & $10^{-6}$ & $10^{-3}$ \\ \bottomrule
\end{tabular}
}
\label{tab:multi-stage}
\end{table}
\subsection{Results of MOVE under the Black-box Setting}
As shown in Tables \ref{table:black-cifar10}-\ref{table:black-imagenet}, our MOVE defense still achieves promising performance under the black-box setting. For example, the p-value of our defense is over twenty and sixty orders of magnitude smaller than that of dataset inference and entangled watermarks in defending against direct-copy on CIFAR-10 and ImageNet, respectively. In the cases where we do not achieve the best performance ($e.g.$, label-query and logit-query), our defense is usually the second-best and can still easily make correct predictions. In particular, as under the white-box setting, our MOVE defense is the only method that can effectively identify stealing in all cases. Other defenses either fail under many complicated stealing attacks or misjudge when there is no model stealing. These results verify the effectiveness of our defense again.
\subsection{Defending against Multi-Stage Model Stealing}
In previous experiments, the stolen model is obtained by a single stealing. In this section, we explore whether our method is still effective if there are multiple stealing stages.
\vspace{0.3em}
\noindent \emph{Settings. }
We discuss two types of multi-stage stealing on the CIFAR-10 dataset, including stealing with the same attack and model structure and stealing with different attacks and model structures. In general, the first one is the easiest multi-stage attack while the second one is the hardest. Other settings are the same as those in Section \ref{sec:exp_settings}.
\vspace{0.3em}
\noindent \emph{Results. }
As shown in Table \ref{tab:multi-stage}, the p-value increases as the number of stealing stages grows, which indicates that defending against multi-stage attacks is difficult. Nevertheless, the p-value is $\leq 0.01$ in all cases under both white-box and black-box settings, $i.e.$, our method can identify the existence of model stealing even after multiple stealing stages. Besides, the p-value in defending against the second type of multi-stage attack is significantly larger than that of the first one, showing that the second task is harder.
\section{Discussion}
In this section, we further explore the mechanisms and properties of our MOVE. Unless otherwise specified, all settings are the same as those in Section \ref{sec:exp}.
\begin{figure*}[ht]
\centering
\subfigure[MOVE under the White-box Setting]{
\includegraphics[width=0.45\textwidth]{./fig/hyper-whitebox.pdf}}
\hspace{1.5em}
\subfigure[MOVE under the Black-box Setting]{
\includegraphics[width=0.45\textwidth]{./fig/hyper-blackbox.pdf}}
\vspace{-0.8em}
\caption{The effects of the transformation rate (\%) and the number of sampled images of our MOVE on the CIFAR-10 dataset.}
\label{fig:hyper}
\end{figure*}
\begin{table*}[!t]
\caption{The effectiveness of our defense with different style images on CIFAR-10 dataset.}
\centering
\begin{tabular}{c|cccc|cccc}
\toprule
Method$\rightarrow$ & \multicolumn{4}{c|}{MOVE under the White-box Setting} & \multicolumn{4}{c}{MOVE under the Black-box Setting} \\ \hline
Pattern$\rightarrow$ & \multicolumn{2}{c|}{Pattern (a)} & \multicolumn{2}{c|}{Pattern (b)} & \multicolumn{2}{c|}{Pattern (a)} & \multicolumn{2}{c}{Pattern (b)} \\ \hline
Stealing Attack$\downarrow$, Metric$\rightarrow$ & $\Delta \mu$ & \multicolumn{1}{c|}{p-value} & $\Delta \mu$ & p-value & $\Delta \mu$ & \multicolumn{1}{c|}{p-value} & $\Delta \mu$ & p-value \\ \hline
Direct-copy & 0.98 & \multicolumn{1}{c|}{$10^{-10}$} & 0.98 & $10^{-12}$ & 0.92 & \multicolumn{1}{c|}{$10^{-126}$} & 0.93 & $10^{-112}$ \\
Distillation & 0.72 & \multicolumn{1}{c|}{$10^{-8}$} & 0.63 & $10^{-7}$ & 0.71 & \multicolumn{1}{c|}{$10^{-41}$} & 0.91 & $10^{-100}$ \\
Zero-shot & 0.74 & \multicolumn{1}{c|}{$10^{-8}$} & 0.67 & $10^{-7}$ & 0.67 & \multicolumn{1}{c|}{$10^{-37}$} & 0.65 & $10^{-33}$ \\
Fine-tuning & 0.21 & \multicolumn{1}{c|}{$10^{-7}$} & 0.50 & $10^{-9}$ & 0.19 & \multicolumn{1}{c|}{$10^{-7}$} & 0.21 & $10^{-9}$ \\
Label-query & 0.68 & \multicolumn{1}{c|}{$10^{-8}$} & 0.68 & $10^{-7}$ & 0.81 & \multicolumn{1}{c|}{$10^{-53}$} & 0.74 & $10^{-41}$ \\
Logit-query & 0.62 & \multicolumn{1}{c|}{$10^{-6}$} & 0.73 & $10^{-7}$ & 0.6 & \multicolumn{1}{c|}{$10^{-28}$} & 0.23 & $10^{-9}$ \\ \hline
Independent & 0.00 & \multicolumn{1}{c|}{1.00} & $10^{-9}$ & 0.99 & $10^{-4}$ & \multicolumn{1}{c|}{0.95} & $10^{-4}$ & 0.96 \\ \bottomrule
\end{tabular}
\label{tab:effects_style}
\end{table*}
\subsection{Effects of Key Hyper-parameters}
\label{sec:hyper}
In this part, we discuss the effects of hyper-parameters and components involved in our method.
\vspace{0.3em}
\subsubsection{Effects of Transformation Rate}
In general, the larger the transformation rate $\gamma$, the more training samples are transformed during the training of the victim model and therefore the `stronger' the external features. As shown in Figure \ref{fig:hyper}, as expected, the p-value decreases with the increase of $\gamma$ in defending against all stealing methods under the white-box setting. In other words, increasing the transformation rate can improve the performance of model verification. Under the black-box setting, however, the changes in p-value are relatively irregular. We speculate that this is probably because using prediction differences to approximately learn external features under the black-box setting has limited effects. Nevertheless, our method can successfully defend against all stealing attacks in all cases (p-value $\ll 0.01$). Besides, we need to emphasize that increasing $\gamma$ may also decrease the accuracy of victim models. Defenders should specify this hyper-parameter based on their specific requirements in practice.
\begin{figure}[!t]
\centering
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/pencil.jpg}}
\hspace{1em}
\subfigure[]{
\includegraphics[width=0.14\textwidth]{./fig/ship.jpg}}
\caption{The new style images adopted for the evaluation.}
\label{fig:styimg}
\end{figure}
\vspace{0.3em}
\subsubsection{Effects of the Number of Sampled Images}
Recall that our method needs to specify the number of sampled (transformed) images ($i.e.$, the $m$) adopted in the hypothesis-based verification. In general, the larger the $m$, the less the adverse effects of the randomness involved in this process and therefore the more confident the verification. This is probably the main reason why the p-value also decreases with the increase of $m$, as shown in Figure \ref{fig:hyper}.
\vspace{0.3em}
\subsubsection{Effects of Style Images}
In this part, we examine whether the proposed defense is still effective when other style images are adopted (as shown in Figure \ref{fig:styimg}). As shown in Table \ref{tab:effects_style}, the p-value is $\ll 0.01$ in all cases under attack, while it is nearly 1 when there is no stealing, under both the white-box and black-box settings. In other words, our method remains effective in defending against different stealing methods when different style images are used, although there are some fluctuations in the results. We will further explore how to optimize the selection of style images in our future work.
\begin{table}[!t]
\caption{The effectiveness (p-value) of style transfer.}
\centering
\scalebox{0.87}{
\begin{tabular}{lccccc}
\toprule
& \multicolumn{2}{c}{CIFAR-10}& &\multicolumn{2}{c}{ImageNet} \\
\cline{2-3}\cline{5-6}
& \tabincell{c}{Patch-based\\Variant} & Ours& & \tabincell{c}{Patch-based\\Variant} & Ours\\\hline
Direct-copy & $\mathbf{10^{-74}}$ & $10^{-7}$&& 0.01 &$\mathbf{10^{-5}}$\\
Distillation & 0.17 & $\mathbf{10^{-7}}$ &&0.13 & $\mathbf{10^{-5}}$\\
Zero-shot & 0.01 & $\mathbf{10^{-5}}$ && $10^{-3}$ & $\mathbf{10^{-4}}$\\
Fine-tuning & $10^{-3}$ & $\mathbf{10^{-6}}$ &&$10^{-3}$ & $\mathbf{10^{-5}}$\\
Label-query & $10^{-3}$ & $\mathbf{10^{-4}}$ &&0.02 & $\mathbf{10^{-3}}$\\
Logit-query & $10^{-3}$ & $\mathbf{10^{-4}}$ &&0.01 & $\mathbf{10^{-4}}$\\
\bottomrule
\end{tabular}
}
\label{table:effectiveness_style}
\end{table}
\subsection{The Ablation Study}
There are three key components in our MOVE, including 1) embedding external features with style transfer, 2) using the sign vector of gradients instead of the gradients themselves, and 3) using a meta-classifier for verification. In this section, we verify their effectiveness. For simplicity, we use MOVE under the white-box setting for discussion.
\begin{table*}[ht]
\caption{The performance of our meta-classifier trained with different features.}
\centering
\scalebox{1}{
\begin{tabular}{c|cc|cc|cccc}
\toprule
Dataset$\rightarrow$ & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c}{ImageNet} \\ \hline
\multirow{2}{*}{Model Stealing$\downarrow$} & \multicolumn{2}{c|}{Gradient} & \multicolumn{2}{c|}{Sign of Gradient (Ours)} & \multicolumn{2}{c|}{Gradient} & \multicolumn{2}{c}{Sign of Gradient (Ours)} \\ \cline{2-9}
& $\Delta \mu$ & p-value & $\Delta \mu$ & p-value & $\Delta \mu$ & \multicolumn{1}{c|}{p-value}& $\Delta \mu$ & p-value \\ \hline
Direct-copy & 0.44 & $10^{-5}$ & $\bm{0.97}$ & $\bm{10^{-7}}$ &0.15 & \multicolumn{1}{c|}{$10^{-4}$} & $\bm{0.90}$ & $\bm{10^{-5}}$ \\
Distillation & 0.27 & 0.01 & $\bm{0.53}$ & $\bm{10^{-7}}$ &0.15 & \multicolumn{1}{c|}{$10^{-4}$} & $\bm{0.61}$ & $\bm{10^{-5}}$ \\
Zero-shot & 0.03 & $10^{-3}$ & $\bm{0.52}$ & $\bm{10^{-5}}$ &0.12 & \multicolumn{1}{c|}{$10^{-3}$} & $\bm{0.53}$ & $\bm{10^{-4}}$ \\
Fine-tuning & 0.04 & $10^{-5}$ & $\bm{0.50}$ & $\bm{10^{-6}}$ &0.13 & \multicolumn{1}{c|}{$10^{-3}$} & $\bm{0.60}$ & $\bm{10^{-5}}$ \\
Label-query & 0.08 & $10^{-3}$ & $\bm{0.52}$ & $\bm{10^{-4}}$ &0.13 & \multicolumn{1}{c|}{$10^{-3}$} & $\bm{0.55}$ & $\bm{10^{-3}}$ \\
Logit-query & 0.07 & $10^{-5}$ & $\bm{0.54}$ & $\bm{10^{-4}}$ &0.12 & \multicolumn{1}{c|}{$10^{-3}$} & $\bm{0.55}$ & $\bm{10^{-4}}$ \\ \hline
Independent & $\bm{0.00}$ & $\bm{1.00}$ & $\bm{0.00}$ & $\bm{1.00}$ &$\bm{10^{-10}}$ & \multicolumn{1}{c|}{$\bm{0.99}$} & $10^{-5}$ & $\bm{0.99}$ \\ \bottomrule
\end{tabular}
}
\label{table:signeffects}
\end{table*}
\vspace{0.3em}
\subsubsection{The Effectiveness of Style Transfer}
To verify that the style watermark transfers better during the stealing process, we compare our method with a variant that uses the white-square patch (adopted in BadNets) to generate transformed images. As shown in Table \ref{table:effectiveness_style}, our method is significantly better than its patch-based variant. This is probably because it is easier for DNNs to learn texture information \cite{geirhos2019imagenet} and the style watermark covers a larger area than the patch-based one. This phenomenon partly explains why our method works well.
\begin{table}[!t]
\caption{The effectiveness (p-value) of meta-classifier.}
\centering
\scalebox{1}{
\begin{tabular}{lccccc}
\toprule
& \multicolumn{2}{c}{CIFAR-10}& &\multicolumn{2}{c}{ImageNet} \\
\cline{2-3}\cline{5-6}
& w/o & w/ & & w/o & w/ \\\hline
Direct-copy & $10^{-12}$&$\mathbf{10^{-37}}$&& $10^{-10}$ & $\mathbf{10^{-11}}$ \\
Distillation & 0.32 & $\mathbf{10^{-3}}$ &&0.43 & $\mathbf{10^{-3}}$\\
Zero-shot & 0.22 & $\mathbf{10^{-61}}$ && 0.33 & $\mathbf{10^{-3}}$\\
Fine-tuning & 0.28 & $\mathbf{10^{-5}}$ &&0.20 & $\mathbf{10^{-13}}$\\
Label-query & 0.20 & $\mathbf{10^{-50}}$ &&0.29 & $\mathbf{10^{-3}}$\\
Logit-query & 0.23 & $\mathbf{10^{-3}}$ &&0.38 & $\mathbf{10^{-3}}$\\
\bottomrule
\end{tabular}
}
\label{table:effectiveness_meta}
\end{table}
\vspace{0.3em}
\subsubsection{The Effectiveness of Sign Function}
In this part, we compare our method with a variant that directly adopts the gradients to train the meta-classifier. As shown in Table \ref{table:signeffects}, using the sign of gradients is significantly better than using the gradients directly. This is probably because the `direction' of gradients contains more information than their `magnitude'. We will further explore this in our future work.
\vspace{0.3em}
\subsubsection{The Effectiveness of Meta-Classifier}
To verify that the meta-classifier is also useful, we compare BadNets-based model watermarking with its extension that also uses the meta-classifier (adopted in our MOVE defense) for ownership verification. In this case, the victim model is the backdoored one and the transformed image is the one containing backdoor triggers. As shown in Table \ref{table:effectiveness_meta}, adopting the meta-classifier significantly decreases the p-value in almost all cases, which verifies its effectiveness. These results also partly explain the effectiveness of our MOVE defense.
\subsection{Resistance to Potential Adaptive Attacks}
\label{sec:resistance}
In this section, we discuss the resistance of our defense to potential adaptive attacks. We take MOVE under the white-box setting as an example to discuss for simplicity.
\vspace{0.3em}
\subsubsection{Resistance to Model Fine-tuning}
In many cases, attackers have some benign local samples. As such, they may fine-tune the stolen model before deployment. This adaptive attack might be effective since the fine-tuning process may alleviate the effects of transformed images due to the catastrophic forgetting \cite{kirkpatrick2017overcoming} of DNNs. Specifically, we adopt $p\%$ of the benign training samples (where $p \in \{0, 5, 10, 15, 20\}$ is dubbed the \emph{sample rate}) to fine-tune the whole stolen model generated by distillation, zero-shot, label-query, and logit-query. We again adopt the p-value and $\Delta \mu$ to measure defense effectiveness.
As shown in Figure \ref{fig:resist_fine-tune}, the p-value increases while $\Delta \mu$ decreases as the sample rate grows. These results indicate that model fine-tuning reduces our defense effectiveness to some extent. However, our defense still successfully detects all stealing attacks (p-value $<0.01$ and $\Delta \mu \gg 0$) even when the sample rate is set to $20\%$. In other words, our defense is resistant to model fine-tuning.
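The verification statistics used throughout ($\Delta\mu$ and the p-value) can be sketched as follows; we assume a one-sided paired t-test here, and leave the p-value lookup to a Student-$t$ table since the exact test is an experimental detail:

```python
import numpy as np

def verification_stat(scores_transformed, scores_benign):
    """Paired statistics for ownership verification (a sketch).

    The inputs are per-sample scores of the suspicious model on
    transformed images and on their benign counterparts. Returns
    (delta_mu, t_stat); a large positive t (hence a small one-sided
    p-value for H1: delta_mu > 0) supports the ownership claim.
    """
    d = np.asarray(scores_transformed, float) - np.asarray(scores_benign, float)
    delta_mu = float(d.mean())
    t_stat = delta_mu / (d.std(ddof=1) / np.sqrt(len(d)))
    return delta_mu, float(t_stat)
```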
\begin{figure}[!t]
\centering
\subfigure[CIFAR-10]{
\includegraphics[width=0.46\textwidth]{./fig/finetune_cifar.pdf}
}
\subfigure[ImageNet]{
\includegraphics[width=0.46\textwidth]{./fig/finetune_imgnet.pdf}
}
\vspace{-0.5em}
\caption{Resistance to the adaptive attack based on model fine-tuning with different sample rates.}
\label{fig:resist_fine-tune}
\vspace{-0.3em}
\end{figure}
\begin{figure*}[!t]
\centering
\subfigure[]{
\includegraphics[width=0.16\textwidth]{./fig/salience_cifar_1_2.png}}
\hspace{0.2em}
\subfigure[]{
\includegraphics[width=0.16\textwidth]{./fig/salience_imgnet_1.png}}
\hspace{0.2em}
\subfigure[]{
\includegraphics[width=0.16\textwidth]{./fig/salience_vict_cifar_1_2.png}}
\hspace{0.2em}
\subfigure[]{
\includegraphics[width=0.16\textwidth]{./fig/salience_vict_imgnet_1.png}}
\hspace{0.2em}
\caption{Salience maps of BadNets and our MOVE. \textbf{(a)-(b)}: salience maps of poisoned images generated by BadNets. \textbf{(c)-(d)}: salience maps of transformed images generated by MOVE. \textbf{(a)\&(c)}: salience maps of images from CIFAR-10 dataset. \textbf{(b)\&(d)}: salience maps of images from ImageNet dataset.}
\label{fig:saliancy}
\end{figure*}
\vspace{0.3em}
\subsubsection{Resistance to Saliency-based Backdoor Detections}
These methods \cite{huang2019neuroninspect,chou2020sentinet,doan2020februus} use the saliency map to identify and remove potential trigger regions. Specifically, they first generate the saliency map of each sample and then calculate trigger regions based on the intersection of all saliency maps. As shown in Figure \ref{fig:saliancy}, Grad-CAM mainly focuses on the trigger regions of images generated by BadNets, whereas it mainly focuses on the object outline for images generated by our proposed defense. These results indicate that our MOVE is resistant to saliency-based backdoor detections.
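The intersection step these detectors rely on can be sketched in a few lines (the quantile threshold and helper name are our illustrative choices):

```python
import numpy as np

def candidate_trigger_region(saliency_maps, quantile=0.9):
    """Intersect per-sample saliency maps to locate a shared trigger.

    Each map is thresholded at its own quantile; pixels that are
    salient in *every* sample form the candidate sample-agnostic
    trigger region. Sample-specific perturbations, as in MOVE,
    leave this intersection essentially empty.
    """
    masks = [np.asarray(s, float) >= np.quantile(np.asarray(s, float), quantile)
             for s in saliency_maps]
    return np.logical_and.reduce(masks)
```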
\vspace{0.3em}
\subsubsection{Resistance to STRIP}
This method \cite{gao2021design} detects and filters poisoned samples based on the prediction randomness of samples generated by imposing various image patterns on the suspicious image. The randomness is measured by the entropy of the average prediction over those samples. \emph{The higher the entropy, the harder a method is for STRIP to detect.} We compare the average entropy of all poisoned images in BadNets with that of all transformed images in our defense. As shown in Table \ref{tab:strip}, the entropy of our defense is significantly larger than that of the BadNets-based method. These results show that our defense is resistant to STRIP.
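The entropy score can be reproduced schematically as follows; the blending ratio and the choice of measuring the average prediction follow the description above, while `model` stands for any callable returning class probabilities (our simplification):

```python
import numpy as np

def strip_entropy(model, x, overlays, alpha=0.5):
    """STRIP-style randomness score of a suspicious input (a sketch).

    Blend x with each overlay image, average the model's predicted
    class probabilities, and return the Shannon entropy of the
    average. A sample-agnostic trigger keeps forcing the target
    class, giving low entropy; MOVE's transformed images behave
    like benign ones.
    """
    preds = [np.asarray(model(alpha * x + (1.0 - alpha) * o), float)
             for o in overlays]
    p = np.mean(preds, axis=0)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())
```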
\begin{table}[!t]
\caption{Resistance (entropy) of BadNets-based model watermarking and our defense to STRIP. }
\centering
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{ImageNet} \\ \hline
BadNets & Ours & BadNets & Ours \\
0.01 & $\textbf{1.15}$ & 0.01 & $\textbf{0.77}$ \\ \bottomrule
\end{tabular}
\label{tab:strip}
\end{table}
\vspace{0.3em}
\subsubsection{Resistance to Trigger Synthesis based Detections}
These methods \cite{wang2019neural,dong2021black,shen2021backdoor} detect poisoned images by reversing potential triggers contained in given suspicious DNNs. They rely on a latent assumption that the trigger is sample-agnostic and the attack is targeted. However, our defense satisfies neither assumption, since the perturbations in our transformed images are sample-specific and we do not modify the labels of those images. As shown in Figure \ref{fig:rev_triggers}, the synthesized triggers of the BadNets-based method contain patterns similar to those used by defenders ($i.e.$, a white square in the bottom-right corner) or a flipped version thereof, whereas those of our method are meaningless. These results show that our defense is also resistant to these detection methods.
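The reversal objective underlying these detectors can be sketched as follows; the function and its $\lambda$ default are illustrative, in the spirit of \cite{wang2019neural}, not the exact formulation of any cited method:

```python
import numpy as np

def reversal_objective(model_loss, x_batch, mask, pattern, target, lam=1e-2):
    """Trigger-reversal objective (a sketch).

    Stamp a candidate (mask, pattern) onto every image and trade off
    the loss toward the target class against the mask's L1 norm. A
    genuine sample-agnostic trigger admits a small-norm, low-loss
    minimizer; MOVE's sample-specific perturbations do not.
    """
    stamped = [(1.0 - mask) * np.asarray(x, float) + mask * pattern
               for x in x_batch]
    data_term = float(np.mean([model_loss(s, target) for s in stamped]))
    return data_term + lam * float(np.abs(mask).sum())
```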
\begin{figure}[!t]
\centering
\subfigure[]{
\includegraphics[width=0.143\textwidth]{./fig/trigger_cifar_2.png}}
\hspace{.2in}
\subfigure[]{
\includegraphics[width=0.143\textwidth]{./fig/trigger_imgnet.png}}\\
\subfigure[]{
\includegraphics[width=0.143\textwidth]{./fig/trigger_vict_cifar_1.png}}
\hspace{.2in}
\subfigure[]{
\includegraphics[width=0.143\textwidth]{./fig/trigger_vict_imgnet_1.png}}
\vspace{-0.8em}
\caption{Reversed triggers. \textbf{First Row}: triggers of BadNets-based defense. \textbf{Second Row}: triggers of our method. \textbf{First Column}: triggers on CIFAR-10 dataset. \textbf{Second Column}: triggers on ImageNet dataset.}
\label{fig:rev_triggers}
\end{figure}
\subsection{Relations with Related Methods}
\label{sec:relations}
\subsubsection{Relations with Membership Inference Attacks}
Membership inference attacks \cite{shokri2017membership,leino2020stolen, hui2021practical} intend to identify whether a particular sample was used to train a given DNN. Similar to these attacks, our method also adopts some training samples for ownership verification. However, our defense aims to analyze whether the suspicious model contains the knowledge of external features rather than whether the model was trained on those transformed samples. To verify this, we design additional experiments as follows:
\vspace{0.3em}
\noindent \emph{Settings. }
For simplicity, we compare our method with a variant that adopts testing instead of training samples to generate the transformed images used in ownership verification under the white-box setting. All other settings are the same as those stated in Section \ref{sec:exp_settings}.
\vspace{0.3em}
\noindent \emph{Results. }
As shown in Table \ref{table:membership}, using testing images has effects similar to using training images when generating transformed images for ownership verification, apart from some small fluctuations. In other words, our method indeed examines the knowledge of embedded external features rather than simply verifying whether the transformed images were used for training. These results verify that our defense is fundamentally different from membership inference attacks.
\begin{table*}[!t]
\caption{Results of ownership verification with transformed samples generated by training and testing samples.}
\centering
\scalebox{1}{
\begin{tabular}{c|cc|cc|cccc}
\toprule
Dataset$\rightarrow$ & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c}{ImageNet} \\ \hline
\multirow{2}{*}{Model Stealing$\downarrow$} & \multicolumn{2}{c|}{Training Set} & \multicolumn{2}{c|}{Testing Set} & \multicolumn{2}{c|}{Training Set} & \multicolumn{2}{c}{Testing Set} \\ \cline{2-9}
& $\Delta \mu$ & p-value & $\Delta \mu$ & p-value & $\Delta \mu$ & \multicolumn{1}{c|}{p-value}& $\Delta \mu$ & p-value \\ \hline
Direct-copy & 0.97 & $10^{-7}$ & 0.96 & $10^{-7}$ & 0.90 & \multicolumn{1}{c|}{$10^{-5}$} & 0.93 & $10^{-7}$ \\
Distillation& 0.53 & $10^{-7}$ & 0.53 & $10^{-5}$ & 0.61 & \multicolumn{1}{c|}{$10^{-5}$} & 0.42 & $10^{-5}$ \\
Zero-shot & 0.52 & $10^{-5}$ & 0.53 & $10^{-5}$ & 0.53 & \multicolumn{1}{c|}{$10^{-4}$} & 0.34 & $10^{-3}$ \\
Fine-tuning & 0.50 & $10^{-6}$ & 0.47 & $10^{-6}$ & 0.60 & \multicolumn{1}{c|}{$10^{-5}$} & 0.72 & $10^{-5}$ \\
Label-query & 0.52 & $10^{-4}$ & 0.52 & $10^{-4}$ & 0.55 & \multicolumn{1}{c|}{$10^{-3}$} & 0.40 & $10^{-3}$ \\
Logit-query & 0.54 & $10^{-4}$ & 0.53 & $10^{-4}$ & 0.55 & \multicolumn{1}{c|}{$10^{-4}$} & 0.48 & $10^{-4}$ \\ \hline
Independent & 0.00 & 1.00 & 0.00 & 1.00 & $10^{-5}$ & \multicolumn{1}{c|}{0.99} & $10^{-9}$ & 0.99 \\ \bottomrule
\end{tabular}
}
\label{table:membership}
\end{table*}
\subsubsection{Relations with Backdoor Attacks}
Similar to (poisoning-based) backdoor attacks \cite{zhao2020clean,zhai2021backdoor,li2022few}, our defense embeds pre-defined distinctive behaviors into DNNs by modifying some training samples. However, different from backdoor attacks, our method neither changes the labels of the modified samples nor restricts the modification to samples from a specific class. Accordingly, our defense does not introduce any hidden backdoor into the trained victim model. In other words, the dataset watermarking of our MOVE is fundamentally different from backdoor attacks. This is probably the main reason why most existing backdoor defenses provide only minor benefits when used to design adaptive attacks against our defense, as illustrated in Section \ref{sec:resistance}.
\section{Conclusion}
In this paper, we revisited defenses against model stealing from the perspective of model ownership verification. We revealed that existing defenses suffer from low effectiveness and may even introduce additional security risks. Based on our analysis, we proposed a new effective and harmless model ownership verification method ($i.e.$, MOVE), which examines whether the suspicious model contains the knowledge of defender-specified external features. We embed external features by modifying a few training samples with style transfer without changing their labels. In particular, we developed our MOVE defense under both white-box and black-box settings to provide comprehensive model protection. We evaluated our defense on both the CIFAR-10 and ImageNet datasets. The experiments verified that our method can defend against various types of model stealing simultaneously while preserving high accuracy on benign samples.
\bibliographystyle{IEEEtran}
\section{Introduction}
A real hypersurface $M^3$ in a 2-dimensional complex manifold (such as $\C^2$) inherits an intrinsic geometric structure from the complex structure of its ambient space. This is called a CR structure and can be thought of as an odd-dimensional version of a complex structure. A more precise definition is given \hl{in \S2} below. The study of these structures is based on three foundational papers. \hl{The first is a 1907 paper of H.~Poincar\'e \cite{Po}, }which shows that the Riemann Mapping Theorem for domains in $\C^1$ does not hold in higher dimensions. In fact, it fails even locally, even in the real analytic case, and for the simplest of reasons: There are more germs of real hypersurfaces than germs of holomorphic mappings. More explicitly, Poincar\'e's observation was that for $n\geq 2$ and $N$ large enough, the space of $N$-jets of biholomorphic mappings on open sets of $\C^n$ is of lower dimension than the space of $N$-jets of real-valued functions of $2n-1$ real variables. From this it follows that a generic perturbation of a smoothly bounded open set $A\subset {\C^n}$ is not biholomorphically equivalent to $A$ and so the Riemann Mapping Theorem fails. It also follows that, unlike complex structures, CR structures possess {\em local} invariants, similar to the well-known curvature invariants of Riemannian metrics. Consequently, a generic CR manifold admits {\em no} CR symmetries, even locally.
The second foundational paper, published in two parts, is \'Elie Cartan's work of 1932 \cite {Ca1},\cite{Ca2}. Since there are these local, indeed pointwise, invariants it is natural to find them explicitly. This fit in nicely with research Cartan was already doing. The Erlangen Program of F. Klein emphasized that geometry was the study of the invariance properties of groups of transformations. Cartan had taken this one step further with his theory of moving frames focusing on the infinitesimal action of the transformation groups. This not only incorporated Riemannian manifolds and its generalizations into the Erlangen scheme but also provided Cartan with the new tools to study projective geometries, both real and complex, the conformal and projective deformations of surfaces, etc. A contemporaneous explanation of Cartan's moving frames approach may be found in Weyl's review of one of Cartan's books \cite {Weyl}. A more accessible explanation, using modern notation, is the influential article \cite{Gr} and, more recently, the graduate textbook \cite{Cl}.
Two highly significant papers that have continued and extended this study of the geometric properties of CR structures on hypersurfaces in $\C^n$ are \cite{CM} and \cite{We}.
The third foundational paper took the study of CR structures in a new and surprising direction. This was Hans Lewy's discovery, in 1957, of a locally non-solvable linear partial differential equation \cite{Le}. To emphasize how surprising this discovery was, we quote Treves \cite{Tr}, one of the originators of the modern theory of \hl{PDEs}:
\begin{quotation}
Allow me to insert a personal anecdote: in 1955 I was given the following thesis problem: prove that every linear partial differential equation with smooth coefficients, not vanishing identically at some point, is locally solvable at that point. My thesis director was, and still is, a leading analyst; his suggestion simply shows that, at that time, nobody had any inkling of the structure underlying the local solvability problem, as it is now gradually revealed.
\end{quotation}
Lewy's example is that
for a generic \hl{smooth} $f(x,y,u)$ the equation
\[
\left(\frac {\partial}{\partial x} + i\frac {\partial}{\partial y}-i(x+iy)
\frac {\partial }{\partial u}\right)w =f
\]
has no solution in any neighborhood of any point in $\mathbb{R}^3$.
The connection of Lewy's paper to CR structures is this: The operator on the left is induced by the Cauchy-Riemann equations on $\C^2$ and defines the CR structure on the hyperquadric
\[
\Im (w)= |{z}|^2.
\]
This connection between CR structures and the theory of \hl{PDEs} has led to a vast amount of research such as \cite{Ho} and the three subsequent volumes for general solvability theory and \cite{TrBook} for the study of the induced CR complex and its generalizations.
Here we come to the origin of the name of this field. Cartan called these structures pseudoconformal, emphasizing that they should be thought of as a generalization of the conformal (\hl{i.e.} complex) structure of $\mathbb{R}^2$. With the realization that the partial differential operators of $M^{2n+1}\subset \C^{n+1}$ induced by the Cauchy-Riemannn equations were of fundamental importance, the induced structure became known as a CR structure. This new name was introduced by Greenfield \cite{Gr}.
It is interesting to note that H. Lewy once commented (to \hl{the 2nd author}) that he was led to his example while trying to understand Cartan's paper \hl{\cite{Ca1}}.
In the present article we study the CR structures most closely related to the Erlangen Program, namely the left-invariant CR structures on three-dimensional Lie groups and, more generally, homogeneous three-dimensional CR structures. In fact, before using the moving frames method to study the general case, Cartan used a more algebraic approach to classify in Chapter II of \cite{Ca1} {\em homogeneous} CR 3-manifolds, \hl{i.e.} 3-dimensional CR manifolds admitting a transitive action of a Lie group by CR automorphisms. He found that, up to a cover, every such CR structure is a left-invariant CR structure on a 3-dimensional Lie group \cite[p.~69]{Ca1}. The items on this list form a rich source of natural examples of CR geometries which, in our opinion, has been hardly explored and mostly forgotten. In this article we present some of the most interesting items on Cartan's list.
We outline Cartan's approach and, in particular, the relation between the adjoint representation of the group and global realizability (the embedding of a CR structure as a hypersurface in a complex 2-dimensional manifold).
The spherical CR structure on $S^3$ can be thought of as the unique left-invariant CR structure on the group $\SU_2\simeq S^3$ \hl{that} is also invariant by right translations by the standard diagonal circle subgroup $\mathrm{U}_1\subset \SU_2$. There is a well-known and much studied 1-parameter family of deformations of this structure on $\SU_2$ to structures whose only symmetries are left translations by $\SU_2$ (see, for example, \cite{Bu}, \cite{Cap}, \cite{CaMo}, \cite{Ro}). An interesting feature of this family of deformations is that none of the structures, except the spherical one, can be globally realized as a hypersurface in $\C^2$ (although they can be realized as finite covers
of hypersurfaces in
$\C\P^2$, the 3-dimensional orbits of the projectivization of the conjugation action of $\SU_2$ on $\sl_2(\C)$). This was first shown in \cite{Ro} and later in \cite{Bu} by a different and interesting proof; see Remark \ref{rmrk:burns} for a sketch of the latter proof.
A left-invariant CR structure on a 3-dimensional Lie group $G$ is given by a 1-dimensional complex subspace of its complexified Lie algebra $\mathfrak{g}_\C$, that is, a point in the 2-dimensional complex projective plane $\P(\g_\C)\simeq \C\P^2$, satisfying a certain regularity condition (Definition \ref{def:reg} below).
The automorphism group of $G$, ${\rm Aut}(G)$, acts on the space of left-invariant CR structures on $G$, so that two ${\rm Aut}(G)$-equivalent left-invariant CR structures on $G$ correspond to two points in $\P(\g_\C)$ in the same ${\rm Aut}(G)$-orbit. Thus the classification of left-invariant CR structures on $G$, up to CR-equivalence by the action of ${\rm Aut}(G)$, reduces to the classification of the ${\rm Aut}(G)$-orbits in $\P(\g_\C)$. This leaves the possibility that two left-invariant CR structures on $G$ which are not CR equivalent under ${\rm Aut}(G)$ might be still CR-equivalent, locally or globally. Using Cartan's equivalence method, as introduced in \cite{Ca1}, we show in \hl{Theorem \ref{thm:ens}}
that for {\em aspherical} left-invariant CR structures this possibility does not occur. Namely: two left-invariant aspherical CR structures on two 3-dimensional Lie groups are CR equivalent if and only if they are CR equivalent via a Lie group isomorphism.
See also \cite{BE} for a global invariant that distinguishes members of the left-invariant structures on $\SU_2$ and Theorem 2.1 of \cite[p.~246]{ENS}, \hl{which is the basis of our Theorem \ref{thm:ens}}. The asphericity condition in Theorem \ref{thm:ens} is essential (see Remark \ref{rmrk:counter}).
\medskip\noindent{\bf Contents of the paper.} In the next section, \S\ref{sec:prelim}, we
present the basic definitions and properties of CR manifolds. In \S\ref{sec:homog} we introduce some tools for studying homogenous CR manifolds which will be used in later sections.
In \S\ref{sec:sl2} we study our main example of $G=\SLt$, where we find that up to ${\rm Aut}(G)$, there are two 1-parameter families of left-invariant CR structures, one {\em elliptic} and one {\em hyperbolic}, depending on the incidence relation of the associated contact distribution with the null cone of the Killing metric, see Proposition \ref{prop:p}. %
Realizations of these structures are described in Proposition \ref{prop:slt2}: the elliptic spherical structure can be realized as any of the generic orbits of the standard representation in $\C^2$, or the complement of $z_1=0$ in $S^3\subset\C^2$. The rest of the structures are finite covers of orbits of the adjoint action in $\P(\sl_2(\C))=\C\P^2$. The question of their global realizability in $\C^2$ remains open, as far as we know.
In \S\ref{sec:su2} we treat the simpler case of $G=\SU_2$, where we recover the well-known 1-parameter family of left-invariant CR structures mentioned above, all with the same contact structure, containing a single spherical structure.
The remaining two sections present similar results for the Heisenberg and Euclidean groups.
In the Appendix we state the main differential geometric result of \cite{Ca1} and the specialization to homogeneous CR structures.
\medskip\noindent \centerline{*\qquad *\qquad*}
\medskip\noindent{\bf How `original' is this paper?}
We are certain that \'Elie Cartan knew most of the results we present here. Some experts in his methods could likely extract the {\em statements} of these results from his paper \cite{Ca1}, where Cartan presents a classification of homogeneous CR 3-manifolds in Chapter II. As for finding the {\em proofs} of these results in \cite{Ca1}, or anywhere else, we are much less certain. The classification of homogeneous CR 3-manifolds appears on p.~70 of \cite{Ca1}, summing up more than 35 pages of general considerations followed by case-by-case calculations. We found Cartan's text justifying the classification very hard to follow. The general ideas and techniques are quite clear, but we were unable to justify many details of his calculations and follow through the line of reasoning. Furthermore, Cartan presents the classification in Chap.~II of \cite{Ca1} before solving the equivalence problem for CR manifolds in Chap.~III, so the CR invariants needed to distinguish the items on his list are not available, nor can he use the argument of our Theorem \ref{thm:ens}. In spite of extensive search and consultations with several experts, we could not find anywhere in the literature a detailed and complete statement in modern language of Cartan's classification of homogeneous CR manifolds, let alone proofs. We decided it would be more useful for us, and our readers, to abstain from further deciphering of \cite{Ca1} and to rederive his classification.
As for \cite{ENS}, apparently the authors shared our frustration with Cartan's text, as they redo parts of the classification in a style similar to ours. But we found their presentation sketchy and at times inadequate. For example, the reference on pp.~248 and 250 of \cite{ENS} to the `scalar curvature $R$ of the CR structure' is misleading. There is no `scalar curvature' in CR geometry. Cartan's invariant called $R$ is coframe dependent and so the formula given by the authors is meaningless without specifying the coframe used \hl{(which is not provided).} Also, the realizations they found for their CR structures are rather different from ours.
In summary, we lay no claim for originality of the results of this paper. Our main purpose here is to give a new treatment of an old subject. We hope the reader will find it worthwhile.
\medskip\noindent{\bf Acknowledgments.} We thank Boris Kruglikov and Alexander Isaev for pointing out to us the article \cite{ENS}, on which our Theorem \ref{thm:ens} is based.
GB thanks Richard Montgomery and Luis Hern\'andez Lamoneda for useful conversations. GB acknowledges support from CONACyT under project 2017-2018-45886.
\section{Basic definitions and properties of CR manifolds}\label{sec:prelim}
A {\em CR structure} on a 3-dimensional manifold $M$ is a rank 2 subbundle $D \subset TM$ together with an almost complex structure
$J$ on $D$, i.e. a bundle automorphism $J:D\to D$ such that $J^2=-Id$. The structure is {\em non-degenerate}
if $D$ is a contact structure, i.e. its sections bracket generate $TM$.
We shall henceforth assume this non-degeneracy condition for all CR structures. We stress that in this article all CR manifolds are assumed to be 3-dimensional and to have an underlying contact structure.
A CR structure is equivalently given by a complex line subbundle $V \subset D_\C:=D\otimes\C,$ the $-i$ eigenspace of $J_\C:=J\otimes\C$, denoted also by $T^{(0,1)}M$.
Conversely, given a complex line subbundle $V \subset T_\C M:=TM\otimes\C$ such that $V \cap \overline V = \{0\}$ and $V \oplus \overline V$ bracket generates $T_\C M$, there is a unique CR structure $(D,J)$ on $M$ such that $V=T^{(0,1)}M$. A section of $V$ is a {\em complex vector field of type} $(0,1)$ and can be equally used to specify the CR structure, provided it is non-vanishing.
A dual way of specifying a CR structure, particularly useful for calculations, is via an {\em adapted coframe. } This consists of a pair of 1-forms $(\phi,\phi_1)$ where $\phi$ is a real contact form, i.e. $D={\rm Ker}(\phi)$, $\phi_1$ is a complex valued form of type $(1,0)$, i.e. $\phi_1(Jv)=i\phi_1(v)$ for every $v\in D$, and such that $\phi\wedge\phi_1\wedge \bar\phi_1$ is non-vanishing. \hl{The line bundle} $V\subset T_\C M$ can then be recovered from $\phi, \phi_1$ as their common kernel. The non-degeneracy of $(D,J)$ is equivalent to the non-vanishing of $\phi\wedge \d \phi$.
We will use in the sequel any of these equivalent definitions of a CR structure.
If $M$ is a real hypersurface in a complex 2-dimensional manifold $N$ there is an induced CR structure on $M$ defined by $D := TM \cap \tilde J(TM)$, where $\tilde J$ is the almost complex structure on $N$,
with the almost complex structure $J$ on $D$ given by the restriction of $\tilde J$ to $D$. Equivalently, $V=T^{(0,1)}M:= \left(T_\C M \right) \cap \left(T^{(0,1)}N\right)$. A CR structure (locally) CR equivalent to a hypersurface in a complex 2-manifold is called (locally) {\em realizable}.
Two CR manifolds $(M_i, D_i, J_i)$, $i=1,2$, are {\em CR equivalent} if there exists a diffeomorphism $f:M_1\to M_2$ such that $\d f(D_1)=D_2$ and such that $(\d f|_{D_1})\circ J_1=J_2\circ (\d f|_{D_1})$. Equivalently, $(\d f)_\C (V_1)=V_2.$ A {\em CR automorphism} of a CR manifold is a CR self-equivalence, i.e. a diffeomorphism $f:M\to M$ such that $\d f$ preserves $D$ and $\d f|_D$ commutes with $J$.
Local CR equivalence and automorphism are defined similarly, by restricting the above definitions to open subsets.
An {\em infinitesimal CR automorphism} is a vector field whose (local) flow acts by (local) CR automorphisms. Clearly, the set ${\rm Aut}_\mathrm{CR}(M)$ of CR automorphisms forms a group under composition and the set $\mathfrak{aut}_\mathrm{CR}(M)$ of infinitesimal CR automorphisms forms a Lie algebra under the Lie bracket of vector fields. In fact, ${\rm Aut}_\mathrm{CR}(M)$ is naturally a Lie group of dimension $\leq \dim(\mathfrak{aut}_\mathrm{CR}(M))\leq 8$, see Corollary \ref{cor:aut} in the Appendix.
The basic example of a CR structure is the unit sphere $S^3=\{|z_1|^2+|z_2|^2=1\}\subset\C^2$ equipped with the CR structure induced from $\C^2$. Its group of CR automorphisms is the 8-dimensional simple Lie group $\PU_{2,1}$. The action of the latter on $S^3$ is seen by embedding $\C^2$ as an affine chart in $\C\P^2$, $(z_1, z_2)\mapsto [z_1:z_2: 1]$, mapping $S^3$ onto the hypersurface given in homogeneous coordinates by $ |Z_1|^2+|Z_2|^2=|Z_3|^2$, the projectivized null cone of the hermitian form $|Z_1|^2+|Z_2|^2-|Z_3|^2$ in $\C^3$ of signature $(2,1)$. The group $\mathrm{U}_{2,1}$ is the subgroup of $\mathrm{GL}_3(\C)$ leaving invariant this hermitian form and its projectivized action on $\C\P^2$ acts on $S^3$ by CR automorphisms. It is in fact its {\em full} automorphism group; this is a consequence of Cartan's equivalence method, see Corollary \ref{cor:aut}.
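For concreteness, the structure on $S^3$ can be written down explicitly; the following formulas are a routine verification with the conventions above (our choice of normalization, with signs depending on orientation conventions):

```latex
% A (0,1) vector field spanning V=T^{(0,1)}S^3, tangent to
% S^3=\{|z_1|^2+|z_2|^2=1\}:
\[
V \;=\; z_2\,\frac{\partial}{\partial\bar z_1}\;-\;z_1\,\frac{\partial}{\partial\bar z_2},
\qquad V\big(|z_1|^2+|z_2|^2\big)\;=\;z_2 z_1 - z_1 z_2 \;=\; 0,
\]
% and an adapted coframe (\phi,\phi_1) with common kernel V:
\[
\phi \;=\; \tfrac{i}{2}\big(\bar z_1\,dz_1+\bar z_2\,dz_2
 - z_1\,d\bar z_1 - z_2\,d\bar z_2\big),
\qquad
\phi_1 \;=\; \bar z_2\,dz_1-\bar z_1\,dz_2 .
\]
```

One checks directly that $\phi$ is real, that $\phi_1$ is of type $(1,0)$, and that $\phi(V)=\phi_1(V)=0$.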
\medskip\noindent
Here are two standard results of the general theory of CR manifolds.
\begin{prop}[`Finite type' property]
\label{prop:cont}
Let $M ,M'$ be two CR manifolds with $M$ connected and $f:M\to M'$ a local CR-equivalence. Then $f$ is determined by its restriction to any open subset of $M$. In fact, it is determined by its 2-jet at a single point of $M$.
\end{prop}
\begin{proof}
The Cartan equivalence method associates canonically with each CR 3-manifold $M$ a certain principal bundle $B\to M$ with 5-dimensional fiber, a reduction of the bundle of second order frames on $M$, together with a canonical coframing of $B$ (an $e$-structure, or `parallelism'; see the Appendix for more details). Consequently, $f:M\to M'$ lifts to a bundle map $\tilde f:B\to B'$ between the associated bundles (in fact, the 2-jet of $f$, restricted to $B$), preserving the coframing. Now any coframe preserving map of coframed manifolds with a connected domain is determined by its value at a single point.
Thus $\tilde f$ is determined by its value at a single point in $B$. It follows that $f$ is determined by its 2-jet at a single point in $M$.
\end{proof}
\begin{prop}[`Unique extension' property]\label {prop:ext} Let $f:U\to U'$ be a CR diffeomorphism between open connected subsets of $S^3$. Then $f$ can be extended uniquely to an element $g\in{\rm Aut}_{\mathrm{CR}}(S^3)=\PU_{2,1}$.
\end{prop}
\begin{proof} Let $B\to S^3$ be the Cartan bundle associated with the CR structure, as in the proof of the previous proposition, and $\tilde f:B|_{U}\to B|_{U'}$ the canonical lift of $f$. Since ${\rm Aut}_\mathrm{CR}(S^3)$ acts transitively on $B$ (in fact, freely, see Corollary \ref{cor:aut}), for any given $p\in B|_{U}$ there is a unique $g\in {\rm Aut}_\mathrm{CR}(S^3)$ such that $\tilde f(p)=\tilde g(p).$ It follows, by the previous proposition, that $f=g|_{U}$. See also \cite{A}, Proposition 2.1, for a different proof. \end{proof}
Here is a simple consequence of the last two propositions that will be useful for us later.
\begin{cor}\label{cor:sph}Let $M$ be a connected 3-manifold and $\phi_i:M\to S^3$, $i=1,2$, be two immersions. Then the two induced spherical CR structures on $M$ coincide if and only if $\phi_2=g\circ \phi_1$ for some $g\in{\rm Aut}_\mathrm{CR}(S^3)=\PU_{2,1}$.
\end{cor}
\begin{proof}Let $U\subset M$ be a connected open subset for which each restriction
$\left.\phi_i\right|_U$ is a diffeomorphism onto its image $V_i:=\phi_i(U)\subset S^3$,
$i=1,2$. Then $(\phi_2|_U)\circ(\phi_1|_U)^{-1}:V_1\to V_2$ is a CR diffeomorphism. By Proposition \ref{prop:ext}, there exists $g\in \PU_{2,1}$ such that $\phi_2|_U=(g\circ \phi_1)|_U.$ It follows, by Proposition \ref{prop:cont}, that $\phi_2 = g \circ \phi_1 .$
\end{proof}
\section{Left-invariant CR structures on 3-dimensional Lie groups}\label{sec:homog}
A natural class of CR structures are the {\em homogeneous} CR manifolds, i.e. CR manifolds admitting a transitive group of automorphisms.
Up to a cover, every such structure is given by a left-invariant CR structure on a 3-dimensional Lie group (see e.g. \cite[p.~69]{Ca1}).
Each such Lie group is determined, again, up to a cover, by its Lie algebra. The list of possible Lie algebras is a certain sublist of the list of 3-dimensional real Lie algebras (the `Bianchi classification'), and was determined by \'E. Cartan in Chapter II of his 1932 paper \cite{Ca1}.
In this section we first make some general remarks about such CR structures, then state an easy to apply criterion for sphericity. Our main references here are Chapter II of \'E.~Cartan's paper \cite{Ca1} and \S 2 of Ehlers et al. \cite{ENS}.
\subsection{Preliminaries}
Let $G$ be a 3-dimensional Lie group with identity element $e$ and Lie algebra $\mathfrak{g}=T_eG.$ To each $g\in G$ is associated the {\em left translation} $G\to G$, $x\mapsto gx$. A CR structure on $G$ is {\em left-invariant} if all left translations are CR automorphisms. Clearly, a left-invariant CR structure $(D,J)$ is given uniquely by its value $(D_e,J_e)$ at $e$. Equivalently, it is given by a {\em non-real} 1-dimensional complex subspace $V_e\subset \mathfrak{g}_\C:=\mathfrak{g}\otimes\C$; i.e. $V_e\cap \overline{V_e}=\{0\}$. By the non-degeneracy of the CR structure, $D_e\subset \mathfrak{g}$ is not a Lie subalgebra; equivalently, $V_e\oplus \overline{V_e}\subset \mathfrak{g}_\C$ is not a Lie subalgebra. In other words, {\em left-invariant CR structures are parametrized by the non-real and non-degenerate elements of $\P(\g_\C)\simeq \C\P^2$.}
\begin{defn}\label{def:reg}
An element $[L]\in \P(\g_\C)$ is {\em real} if $[L]= [\overline L]$, {\em degenerate} if $L,\overline L$ span a Lie subalgebra of $\mathfrak{g}_\C$, and {\em regular} if it is neither real nor degenerate. The locus of regular elements in $\P(\g_\C)$ is denoted by $\P(\g_\C)_{\mathrm{reg}}.$
\end{defn}
Equivalently, if $[L]=[L_1+iL_2]\in\P(\g_\C)$, where $L_1, L_2\in\mathfrak{g}$, then $[L]$ is non-real if and only if $L_1, L_2$ are linearly independent and is regular if and only if $L_1,L_2, [L_1,L_2]$ are linearly independent.
\medskip\noindent
Let ${\rm Aut}(G)$ be the group of Lie group automorphisms of $G$ and ${\rm Aut}(\mathfrak{g})$ the group of Lie algebra automorphisms of $\mathfrak{g}$. For each $f\in {\rm Aut}(G)$, $\d f(e)\in{\rm Aut}(\mathfrak{g})$, and if $G$ is connected then $f$ is determined uniquely by $\d f(e)$, so ${\rm Aut}(G)$ embeds naturally as a subgroup ${\rm Aut}(G)\subset {\rm Aut}(\mathfrak{g})$. Every Lie algebra homomorphism integrates uniquely to a Lie group homomorphism when the source group is {\em simply connected}, hence for simply connected $G$, ${\rm Aut}(G)={\rm Aut}(\mathfrak{g})$. The adjoint representation of $G$ defines a homomorphism $\Ad:G\to {\rm Aut}(G)$. Its image is a normal subgroup $\mathrm{Inn}(G)\subset{\rm Aut}(G)$, the group of {\em inner} automorphisms (also called `the adjoint group'). The quotient group, $\mathrm{Out}(G):={\rm Aut}(G)/\mathrm{Inn}(G)$, is the group of {\em outer} automorphisms. For a simple Lie group, $\mathrm{Out}(G)$ is a finite group. For example, $\mathrm{Out}(\SU_2)$ is trivial and $\mathrm{Out}(\SLt)\simeq\mathbb{Z}_2,$ generated by conjugation by any matrix $g\in \mathrm{GL}_2(\mathbb{R})$ with negative determinant, e.g. $g=\mathrm{diag}(1,-1)$.
Now ${\rm Aut}(G)$ clearly acts on the set of left-invariant CR structures on $G$. It also acts on $\P(\g_\C)_{\mathrm{reg}}$ by the projectivized complexification of its action on $\mathfrak{g}$. The map associating with a left-invariant CR structure $V\subset T_\C G$ the point $z=V_e\in\P(\g_\C)_{\mathrm{reg}}$ is clearly ${\rm Aut}(G)$-equivariant, hence if $z_1, z_2\in \P(\g_\C)_{\mathrm{reg}}$ lie on the same ${\rm Aut}(G)$-orbit then the corresponding left-invariant CR structures on $G$ are CR equivalent via an element of ${\rm Aut}(G)$. As mentioned in the introduction, the converse is true for {\em aspherical} left-invariant CR structures.
\begin{thm}\label{thm:ens}
Consider two left-invariant aspherical CR structures $V_i\subset T_\C G_i$ on two connected 3-dimensional Lie groups $G_i$, with corresponding elements $z_i:=\left(V_i\right)_{e_i}\in \P((\mathfrak{g}_i)_\C)_{\mathrm{reg}}$,
where $e_i$ is the identity element of $G_i$, $i=1,2$. If the two CR structures are equivalent, then there exists a group isomorphism $G_1\to G_2$ which is a CR equivalence, whose derivative at $e_1$ maps $z_1\mapsto z_2$. If the two CR structures are locally equivalent, then there exists a Lie algebra isomorphism $\mathfrak{g}_1\to\mathfrak{g}_2$, mapping $z_1\mapsto z_2$.
\end{thm}
\begin{proof}
Let $f:G_1\to G_2$ be a CR equivalence. By composing $f$ with an appropriate left translation, either
in $G_1$ or in $G_2$, we can assume, without loss of generality, that $f(e_1)=e_2$. Since $f$ is a CR equivalence, $(\d f)_\C V_1=V_2$. In particular, $(\d f)_\C$ maps $z_1\mapsto z_2$. We next show that $f$ is a group isomorphism.
For any 3-dimensional Lie group $G$, the space $\mathfrak{R}(G)$ of right-invariant vector fields is a 3-dimensional Lie subalgebra of the space of vector fields on $G$, generating left-translations on $G$. Hence if $G$ is equipped with a left-invariant CR structure then $\mathfrak{R}(G)\subset\mathfrak{aut}_\mathrm{CR}(G).$ If the CR structure is aspherical then the Cartan equivalence method implies that $\dim(\mathfrak{aut}_\mathrm{CR}(M))\leq 3$, see Corollary \ref{cor:aut} of the Appendix. Thus $\mathfrak{R}(G)=\mathfrak{aut}_\mathrm{CR}(G).$
Now since $f:G_1\to G_2$ is a CR equivalence, its derivative defines a Lie algebra isomorphism $\mathfrak{aut}_\mathrm{CR}(G_1)\simeq
\mathfrak{aut}_\mathrm{CR}(G_2)$. It follows, by the last paragraph, that $\d f(\mathfrak{R}(G_1))=\mathfrak{R}(G_2)$. This implies that $f$ is a group isomorphism by a result from the theory of Lie groups: If $f:G_1\to G_2$ is a diffeomorphism between two connected Lie groups such that $f(e_1)=e_2$ and $\d f(\mathfrak{R}(G_1))=\mathfrak{R}(G_2)$ then $f$ is a group isomorphism.
We could not find a reference for the (seemingly standard) last statement so we sketch a proof here.
Let $G=G_1\times G_2$ and $H=\{(x,f(x))| x\in G_1\}$ (the graph of $f$). Then $f$ is a group isomorphism if and only if $H\subset G$ is a subgroup. Let $\h:=T_eH$, where $e=(e_1,e_2)\in G$, and let $\mathcal{H}\subset TG$ be the extension of $\h$ to a right-invariant sub-bundle. Then, since $\d f: \mathfrak{R}(G_1)\to \mathfrak{R}(G_2)$ is a Lie algebra isomorphism, $\h\subset \mathfrak{g}$ is a Lie subalgebra, $\mathcal{H}$ is integrable and $H$ is the integral leaf of $\mathcal{H}$ through $ e\in G$ (a maximal connected integral submanifold of $\mathcal{H}$). It follows that $Hh$ is also an integral leaf of $\mathcal{H}$ for every $h\in H$. But $e\in H\cap Hh$, hence $H=Hh$ and so $H$ is closed under multiplication and inverse, as needed.
To prove the last statement of the theorem, suppose $f:U_1\to U_2$ is a CR equivalence, where $U_i\subset G_i$ are open subsets, $i=1,2$. By composing $f$ with appropriate left translations
in $G_1$ and $G_2$, we can assume, without loss of generality, that $U_i$ is a neighborhood of $e_i\in G_i$, $i=1,2$, and that
$f(e_1)=e_2$. Since $f$ is a CR equivalence, its complexified derivative $(\d f)_\C:T_\C U_1\to T_\C U_2$ maps $V_1|_{U_1}$ isomorphically onto $ V_2|_{U_2}$; in particular, it maps $z_1\mapsto z_2.$ It remains to show that $\d f(e_1):\mathfrak{g}_1\to\mathfrak{g}_2$ is a Lie algebra isomorphism.
For any Lie group $G$, the Lie bracket of two elements $X_e, Y_e\in \mathfrak{g}=T_eG$ is defined by evaluating at $e$
the commutator $XY-YX$ of their unique extensions to {\em left}--invariant vector fields $X,Y$ on $G$. If we use
instead {\em right}--invariant vector fields, we obtain the negative of the standard Lie bracket. Now right-invariant
vector fields generate left translations, hence if $G$ is a 3-dimensional Lie group equipped with a left-invariant CR
structure, there is a natural inclusion of Lie algebras $\mathfrak{g}_-\subset \mathfrak{aut}_\mathrm{CR}(G),$ where $\mathfrak{g}_-$ denotes $\mathfrak{g}$ equipped
with the negative of the standard bracket. For any aspherical CR structure on a 3-manifold $M$ we have
$\dim(\mathfrak{aut}_\mathrm{CR}(M))\leq 3$, hence for any open subset $U\subset G$ the restriction of a left-invariant aspherical CR
structure on $G$ to $U$ satisfies $\mathfrak{aut}_\mathrm{CR}(U)=\mathfrak{R}(G)|_U\simeq\mathfrak{g}_-.$
Next, since $f:U_1\to U_2$ is a CR equivalence, its derivative $\d f$ defines
a Lie algebra isomorphism $\mathfrak{aut}_\mathrm{CR}(U_1)\to\mathfrak{aut}_\mathrm{CR}(U_2)$.
By the previous paragraph, $\d f(e)$ is a Lie algebra isomorphism $(\mathfrak{g}_1)_-\to (\mathfrak{g}_2)_-$, and thus is also a Lie algebra isomorphism $\mathfrak{g}_1\to\mathfrak{g}_2$.
\end{proof}
\subsection{A sphericity criterion via well-adapted coframes}
We formulate here a simple criterion for deciding whether a left-invariant CR structure $z\in\P(\g_\C)_{\mathrm{reg}}$ on a Lie group $G$ is spherical or not.
The basic tools are found in the seminal papers of Cartan \cite{Ca1},\cite{Ca2}. We defer a more complete discussion to the Appendix.
\begin{defn}\label{def:adap}
Let $M$ be a 3-manifold with a CR structure $V\subset T_\C M$. An {\em adapted coframe} is a pair of 1-forms $(\phi, \phi_1)$ with $\phi$ real and $\phi_1$ complex, such that $\phi |_V=\phi_1 |_V = 0$ and $\phi\wedge \phi_1\wedge\bar\phi_1$ is non-vanishing. The coframe is {\em well-adapted} if $\d\phi = i\phi_1\wedge\bar{\phi_1}$.
\end{defn}
Adapted and well-adapted coframes always exist, locally. Starting with an arbitrary
non-vanishing local section $L$ of $V$ (a complex vector field of type $(0,1)$) and
a contact form $\phi$ (a non-vanishing local section of $D^\perp\subset T^*M$), define
the complex $(1,0)$-form $\phi_1$ by $\phi_1(L)=0$, $\bar \phi_1(L)=1$. Then
$(\phi, \phi_1)$ is an adapted coframe and any other adapted coframe is given by
$\tilde\phi=|\lambda|^2\phi,$ $\tilde \phi_1=\lambda(\phi_1 +\mu \phi)$
for arbitrary complex functions $\mu$,\ $\lambda$, with $\lambda$ non-vanishing.
It is then easy to verify that for any $\lambda$ and $ \mu=i\,L(u)/u,$ where
$ u=|\lambda|^2$, the resulting coframe $(\tilde \phi, \tilde\phi_1)$ is well-adapted.
Given a well-adapted coframe $(\phi, \phi_1)$, decomposing $\d\phi, \d\phi_1$ in the same coframe we get
\begin{align}\label{eq:str}
\begin{split}
\d\phi &= i\phi_1\wedge\bar{\phi_1}\\
\d\phi_1&=a\,\phi_1\wedge\bar \phi_1+b\,\phi\wedge \phi_1+c\,\phi\wedge \bar\phi_1,
\end{split}
\end{align}
for some complex valued functions $a,b,c$ on $M$. For a left-invariant CR structure on a 3-dimensional group $G$ one can choose a (global) well-adapted coframe of left-invariant 1-forms, and then $a,b,c$ are constants.
\begin{prop}\label{prop:CRc}
Consider a CR structure on a 3-manifold given by a well-adapted coframe $(\phi, \phi_1)$ satisfying equations \eqref{eq:str} for some constants $a,b,c\in\C.$ The CR structure is spherical if and only if $c \left(2|a|^2+9 ib\right)=0.$
\end{prop}
This is a consequence of the Cartan equivalence method. See Corollary \ref{cor:sph-curv} in the Appendix.
\subsection{Realizability }
Let $(M,D,J)$ be a CR 3-manifold and $N$ a complex manifold. A smooth function $f:M\to N$ is a {\em CR map}, or simply {\em CR}, if $\tilde J\circ (\d f|_D)= (\d f|_D)\circ J$, where $\tilde J:TN\to TN$ is the almost complex structure on $N$.
Equivalently, $(\d f )_\C V\subset T^{(0,1)}N.$ A {\em realization} of $(M,D,J)$ is a CR embedding of $M$ in a (complex) 2-dimensional $N$. A {\em local realization} is a CR immersion in such $N$.
The following lemma is useful for finding CR immersions and embeddings of left-invariant CR structures on Lie groups.
\begin{lemma}\label{lemma:real}Let $G$ be a 3-dimensional Lie group with a left-invariant CR structure $(D,J)$, with corresponding $[L]\in\P(\g_\C)_{\mathrm{reg}}$. Let $\rho:G\to \mathrm{GL}(U)$ be a finite dimensional complex representation, $u\in U$ and $\mu:G\to U$ the evaluation map $g\mapsto \rho(g)u$. Then $\mu$ is a CR map if and only if $\rho'(L)u=0,$ where $\rho':
\mathfrak{g}_\C\to\End(U)$ is the complex linear extension of $(\d\rho)_e:\mathfrak{g}\to\End(U)$ to $\mathfrak{g}_\C$.
\end{lemma}
\begin{proof}
$\mu$ is clearly $G$-equivariant, hence $\mu$ is CR if and only if $\d\mu( JX)=i\,\d\mu(X)$ for some (and thus all) non-zero $X\in D_e$. Now $\d\mu(X)=\rho'(X)u,$ hence the CR condition on $\mu$ is $\rho'(X+iJX)u=0,$ for all $X\in D_e$. Equivalently, $\rho'(L)u=0$ for some (and thus all) non-zero $L\in \mathfrak{g}_\C$ of type $(0,1)$.
\end{proof}
Here is an application of the last lemma, often used by Cartan in Chapter II of \cite{Ca1}.
\begin{prop}\label{prop:real}
Let $G$ be a 3-dimensional Lie group with a left-invariant CR structure $[L]\in\P(\g_\C)_{\mathrm{reg}}$. Then the evaluation map $\mu:G\to\P(\g_\C)$, $g\mapsto [\Ad_g(L)]$, is a $G$-equivariant CR map, whose image $\mu(G)\subset\P(\g_\C)$, the $\Ad_G$-orbit of $[L]\in\P(\g_\C)$, is of dimension 2 or 3. It follows that if $L$ has a trivial centralizer in $\mathfrak{g}$ then $\mu(G)$ is 3-dimensional and hence $\mu$ is a local realization of the CR structure on $G$ in $\P(\g_\C)\simeq\C\P^2$.
\end{prop}
\begin{proof}
Let $\tilde\mu: G\to \mathfrak{g}_\C\setminus\{0\},$ $g\mapsto \Ad_g L,$
and $\pi: \mathfrak{g}_\C\setminus\{0\}\to\P(\g_\C)$, $B\mapsto [B]$.
Then $\mu=\pi\circ\tilde\mu$ and $\pi$ is holomorphic,
hence it is enough to show that $\tilde\mu$ is CR at $e\in G$. Applying Lemma
\ref{lemma:real} with $\rho=\Ad_G$, $u=L$, we have that $\rho'(L)L=[L,L]=0$, hence $\tilde\mu$ is CR, and so is $\mu.$
Let $\mathcal O=\mu(G)$. Since $\mu$ is CR, $\d\mu(D)$ is a $\tilde J$-invariant and $G$-invariant subbundle of $T\mathcal O$, where $\tilde J$ is the almost complex structure of $\P(\g_\C)$. Thus in order to show that $\dim(\mathcal O)\geq 2$ it is enough to show that $\d\mu(D_e)\neq 0$.
Equivalently, $\d\tilde\mu (D_e)\not\subset {\rm Ker}((\d\pi)_L)=\C L$. Let $L=L_1+iL_2,$ with $L_1, L_2\in \mathfrak{g}$. Then $L_2=JL_1$ and so $\d\tilde\mu ( L_2)=[L_2,L]=-[L_1,L_2]$. But $[L]$ is non-real, so $(\C L)\cap \mathfrak{g}=\{0\}$, hence $[L_1,L_2]\in\C L$ implies $[L_1,L_2]=0$, so $D_e=\mathrm{Span}\{L_1, L_2\}\subset \mathfrak{g}$ is an (abelian) subalgebra, in contradiction to the non-degeneracy assumption on the CR structure.
\end{proof}
\section{$\SLt$}\label{sec:sl2}
We first illustrate the results of the previous section with a detailed description of left-invariant CR structures on the group $G=\SLt$, where $\mathfrak{g}=\slt$, the set of $2\times 2$ traceless real matrices, and $\mathfrak{g}_\C=\sl_2(\C)$, the set of $2\times 2$ traceless complex matrices.
Here is a summary of the results: for $G=\SLt$, the set of left-invariant CR structures $\P(\g_\C)_{\mathrm{reg}}$ is identified ${\rm Aut}(G)$-equivariantly with the set of unordered pairs of points $\zeta_1, \zeta_2\in\C\setminus\mathbb{R}$, $\zeta_1\neq \bar\zeta_2$, on which ${\rm Aut}(G)$ acts by orientation preserving isometries of the usual hyperbolic metric in each of the two half-planes. With this description, it is easy to determine the ${\rm Aut}(G)$-orbits. There are two families of orbits: the `elliptic' family corresponds to pairs of points in the same half-plane, with the spherical structure corresponding to a `double point', $\zeta_1=\zeta_2$; the `hyperbolic' family corresponds to non-conjugate pairs of points in opposite half-planes.
Each orbit is labeled uniquely by the hyperbolic distance $d(\zeta_1,\zeta_2)$ in the elliptic case, or $d(\zeta_1, \bar \zeta_2)$ in the hyperbolic case. All structures, except the spherical elliptic one, are locally realized as adjoint orbits in $\P(\sl_2(\C))=\C\P^2$, either inside $S^3=\{[L]\, | \, \mathrm{tr}(L\bar L)=0\}$ (in the hyperbolic case) or in its exterior (in the elliptic case). The elliptic spherical structure embeds as any of the generic orbits of the standard action on $\C^2$.
\sn
We begin with the conjugation action of $\SL_2(\C)$ on $\P(\sl_2(\C))$ (this will be useful also for the next example of $G=\SU_2$). With each $[L]\in\P(\sl_2(\C))$ we associate
an unordered pair of points $\zeta_1, \zeta_2\in\C\cup\infty,$ possibly repeated, the roots of the quadratic polynomial
\begin{equation}\label{eq:p}
p_L(\zeta):=c\zeta^2-2a\zeta-b=c(\zeta-\zeta_1)(\zeta-\zeta_2), \qquad L=\left(
\begin{array}{rr}
a &b\\
c&-a
\end{array}\right).
\end{equation}
Clearly, multiplying $L$ by a non-zero complex constant does not affect $\zeta_1, \zeta_2$.
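The equivariance of the root map stated in the lemma below can also be sanity-checked computationally. The following sketch (an illustration only, not part of the argument; it assumes Python with the sympy library) conjugates a sample traceless matrix by a sample element of $\SL_2$ and verifies that the roots of $p_L$ transform by the corresponding M\"obius map.

```python
# Check: the roots of p_{gLg^{-1}} are the Moebius images (under g) of the
# roots of p_L, for a sample L in sl_2 and g in SL_2.
import sympy as sp

zeta = sp.symbols('zeta')

def p_roots(L):
    # roots of p_L(zeta) = c*zeta**2 - 2*a*zeta - b for L = [[a, b], [c, -a]]
    poly = L[1, 0]*zeta**2 - 2*L[0, 0]*zeta - L[0, 1]
    return [complex(sp.N(r)) for r in sp.Poly(poly, zeta).all_roots()]

L = sp.Matrix([[1, 2], [3, -1]])   # a sample traceless matrix
g = sp.Matrix([[2, 1], [1, 1]])    # a sample element of SL_2 (det = 1)
assert g.det() == 1

orig = p_roots(L)
conj = p_roots(g * L * g.inv())    # roots after the adjoint action
mob = [complex(sp.N((g[0, 0]*r + g[0, 1])/(g[1, 0]*r + g[1, 1]))) for r in orig]

# the root set of g L g^{-1} coincides with the Moebius image of the root set of L
match = all(min(abs(m - r) for r in conj) < 1e-9 for m in mob)
assert match
```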
\begin{lemma}\label{lemma:equiv}
Let $S^2(\C\P^1)$ be the set of unordered pairs of points $\zeta_1, \zeta_2\in \C\cup\infty=\C\P^1$.
Then:
\benum
\item The map $\P(\sl_2(\C))\to S^2(\C\P^1)$, assigning to $[L] \in \P(\sl_2(\C))$ the roots of $p_L$, as in equation \eqref{eq:p}, is an
$\SL_2(\C)$-equivariant bijection, where $\SL_2(\C)$ acts on $S^2(\C\P^1)$ via M\"obius
transformations on $\C\P^1$ (projectivization of the standard action on $\C^2$);
\item Complex conjugation, $[L]\mapsto [\overline L]$, corresponds, under the above bijection, to complex conjugation of the roots of $p_L$, $\{\zeta_1,\zeta_2\}\mapsto \{\bar\zeta_1, \bar\zeta_2\}$.
\end{enumerate}
\end{lemma}
\begin{proof}The map $[L]\mapsto \{\zeta_1, \zeta_2\}$ is clearly a bijection (a polynomial is determined, up to a scalar multiple, by its roots). The $\SL_2(\C)$-equivariance, as well as item (b), can be easily checked by direct computation.
Here is a more illuminating argument, explaining also the origin of the formula for $p_L$ in equation \eqref{eq:p}. We first show that the adjoint representation of $\SL_2(\C)$ on $\sl_2(\C)$ is isomorphic to $H_2$, the space of quadratic forms on $\C^2$, or complex homogeneous polynomials $q(z_1, z_2)$ of degree 2 in two variables, with $g\in \SL_2(\C)$ acting by substitutions, $q\mapsto q\circ g^{-1}$. To derive an explicit isomorphism,
let $\mathrm{U}$ be the standard representation of $\SL_2(\C)$ on $\C^2$ and $\mathrm{U}^*$ the dual representation, where $g\in\SL_2(\C)$ acts on $\alpha\in \mathrm{U}^*$ by $\alpha\mapsto \alpha\circ g^{-1}$. The induced action on $\Lambda^2(\mathrm{U}^*)$ (skew symmetric bilinear forms on $\mathrm{U}$) is trivial (this amounts to $\det(g)=1$). Let us fix $\omega:=z_1\wedge z_2\in\Lambda^2(\mathrm{U}^*)$. Since $\omega$ is $\SL_2(\C)$-invariant, it defines an
$\SL_2(\C)$-equivariant isomorphism $\mathrm{U}\to \mathrm{U}^*$, $u\mapsto \omega(\cdot, u),$ mapping
${\mathbf e}_1\mapsto -z_2,$ ${\mathbf e}_2\mapsto z_1$, where ${\mathbf e}_1,{\mathbf e}_2$ is the standard basis of $\mathrm{U}$, dual to $z_1, z_2\in \mathrm{U}^*$.
We thus obtain an isomorphism of $\SL_2(\C)$ representations, $\End(\mathrm{U})\simeq
\mathrm{U}\otimes \mathrm{U}^*\simeq \mathrm{U}^*\otimes \mathrm{U}^*$. Under this isomorphism, $\sl_2(\C)\subset\End(\mathrm{U})$ is mapped onto $S^2(\mathrm{U}^*)\subset \mathrm{U}^*\otimes \mathrm{U}^*$ (symmetric bilinear forms on $\mathrm{U}$), which in turn is identified with $H_2$, $\SL_2(\C)$-equivariantly, via $B\mapsto q$, $q(u)=B(u,u).$ Following through these isomorphisms, we get the sought-for $\SL_2(\C)$-equivariant isomorphism $\sl_2(\C)\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} H_2$,
\begin{align*}
L=\left(
\begin{array}{rr}
a &b\\
c&-a
\end{array}\right) &\mapsto
a {\mathbf e}_1\otimes z_1+b {\mathbf e}_1\otimes z_2+c {\mathbf e}_2\otimes z_1-a {\mathbf e}_2\otimes z_2\\
&\mapsto
-a z_2\otimes z_1-b z_2\otimes z_2+c z_1\otimes z_1-a z_1\otimes z_2\\
&\mapsto
q_L(z_1,z_2)=c(z_1)^2-2a\,z_1z_2-b(z_2)^2.
\end{align*}
Now every non-zero quadratic form $q\in H_2$ can be factored as the product of two non-zero linear forms, $q=\alpha_1\alpha_2$, where the kernel of each $\alpha_i$ determines a `root' $\zeta_i\in\C\P^1$. Introducing the inhomogeneous coordinate $\zeta=z_1/z_2$ on $\C\P^1=\C\cup\infty$, we get $c(z_1)^2-2a\,z_1z_2-b(z_2)^2=(z_2)^2p_L(\zeta)$, with $p_L$ as in equation \eqref{eq:p} with roots $\zeta_i\in \C\cup\infty.$
\end{proof}
\begin{rmrk}\label{rmrk:proj} There is a simple projective geometric interpretation of Lemma \ref{lemma:equiv}. See Figure \ref{fig:proj}(a).
Consider in the projective plane $\P(\sl_2(\C))\simeq\C\P^2$ the conic $\mathcal{C}:=\{[L]\, | \, \det(L)=0\}\simeq \C\P^1.$
Through a point $[L]\in\C\P^2\setminus\mathcal{C}$ pass two (projective) lines tangent to $\mathcal{C}$, with tangency points $\zeta_1,\zeta_2\in\mathcal{C}$ (if $[L]\in\mathcal{C}$ then $\zeta_1=\zeta_2=[L]$). Since $\SL_2(\C)$ acts on $\C\P^2$ by projective transformations preserving $\mathcal{C}$, the map $[L]\mapsto \{\zeta_1, \zeta_2\}$ is $\SL_2(\C)$-equivariant. The map $[L]\mapsto [\overline L]$ is the reflection about $\RP^2\subset\C\P^2.$ Formula \eqref{eq:p} is a coordinate expression of this geometric recipe.
\end{rmrk}
\begin{figure}[h!]
\centerline{\includegraphics[width=\textwidth]{figure1.pdf}}
\caption{Distinct types of $[L]\in\P(\g_\C)$ for $G=\SLt$: (a) regular; (b) real; (c) non-real degenerate. See the proofs of Lemmas \ref{lemma:equiv} and \ref{lemma:reg} and Remark \ref{rmrk:proj}.
}\label{fig:proj}
\end{figure}
\begin{lemma}\label{lemma:reg} Let $L\in\sl_2(\C)$, $L\neq 0$. Then $[L]\in\P(\sl_2(\C))_{\mathrm{reg}}$ if and only if both roots of $p_L$ are non-real and are non-conjugate, i.e. $\zeta_1, \zeta_2\in\C\setminus\mathbb{R}$ and $\zeta_1\neq \bar\zeta_2$.
\end{lemma}
\begin{proof}
Let $\zeta_1, \zeta_2$ be the roots of $p_L$.
By Lemma \ref{lemma:equiv} part (b), $[L]$ is real, $[L]= [\overline L]$, if and only if $\zeta_1, \zeta_2$ are both real or $\zeta_1=\bar\zeta_2$.
We claim that if $[L]\neq [\overline L]$ then $[L]$ is degenerate, i.e. $L,\overline L$ span a 2-dimensional subalgebra of $\sl_2(\C)$, exactly when one of the two roots $\zeta_1, \zeta_2$ is real and the other is non-real. This is perhaps best seen with Figure \ref{fig:proj}(c). A 2-dimensional subspace of $\sl_2(\C)$ corresponds to a projective line in $\P(\sl_2(\C))$. The 2-dimensional subalgebras of $\sl_2(\C)$ are all conjugate (by $\SL_2(\C)$) to the subalgebra of upper triangular matrices and are represented in Figure \ref{fig:proj} by lines tangent to $\mathcal{C}$. Now the line passing through $[L], [\overline L]$ is invariant under complex conjugation, hence if it is tangent to $\mathcal{C}$ then the tangency point is real and is one of the roots of $p_L$. But $[L]$ is non-real, hence the other root is non-real.
\end{proof}
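The root criterion of the lemma can be checked against Definition \ref{def:reg} directly: $[L]$ is regular exactly when $L_1, L_2, [L_1,L_2]$ are linearly independent, where $L=L_1+iL_2$. The sketch below (an illustration only, assuming Python with sympy) builds $L$ from a prescribed pair of roots and tests this independence for a regular and a degenerate sample.

```python
# Check: both roots non-real and non-conjugate  <=>  L1, L2, [L1, L2]
# linearly independent, on two sample root configurations.
import sympy as sp

def L_from_roots(z1, z2):
    # L = [[a, b], [1, -a]] with p_L(zeta) = (zeta - z1)(zeta - z2) (c = 1)
    a = (z1 + z2)/2
    b = -z1*z2
    return sp.Matrix([[a, b], [1, -a]])

def is_regular(L):
    L1 = (L + L.conjugate())/2           # real part of L
    L2 = (L - L.conjugate())/(2*sp.I)    # imaginary part of L
    Lb = L1*L2 - L2*L1                   # the bracket [L1, L2]
    coords = lambda M: [M[0, 0], M[0, 1], M[1, 0]]  # coordinates (a, b, c) on sl_2
    return sp.Matrix([coords(L1), coords(L2), coords(Lb)]).det() != 0

# both roots in the upper half-plane: regular (an elliptic structure)
assert is_regular(L_from_roots(sp.I, 2*sp.I))
# one real root: non-real but degenerate
assert not is_regular(L_from_roots(0, sp.I))
```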
Next we describe ${\rm Aut}(\SLt)$. Clearly, $\mathrm{GL}_2(\mathbb{R})$ acts on $\SLt$ by matrix conjugation as group automorphism. The ineffective kernel of this action is the center $\mathbb{R}^*\mathrm I$ of $\GL_2(\R)$ (non-zero multiples of the identity matrix). The quotient group is denoted by $\mathrm{PGL}_2(\mathbb{R})=\GL_2(\R)/\mathbb{R}^*\mathrm I.$ Thus there is a natural inclusion $\mathrm{PGL}_2(\mathbb{R})\subset {\rm Aut}(\SLt)$.
\begin{lemma}$\PGL_2(\R)= {\rm Aut}(\SLt)={\rm Aut}(\slt)$.
\end{lemma}
\begin{proof} We have already seen the inclusions $\PGL_2(\R)\subset{\rm Aut}(\SLt)\subset {\rm Aut}(\slt)$, so it is enough to show that ${\rm Aut}(\slt)\subset\PGL_2(\R).$ Now the Killing form of a Lie algebra,
$\<X,Y\>=\mathrm{tr}({\rm ad} X\circ{\rm ad} Y)$, is defined in terms of the Lie bracket alone. For $\slt$, the associated quadratic form is $\det(X)=-a^2-bc$ (up to a constant), a non-degenerate quadratic form of signature (2,1). Furthermore, the `triple product' $(X,Y,Z)\mapsto \< X, [Y,Z]\>$ defines a non-vanishing volume form on $\slt$ in terms of the Lie bracket, hence ${\rm Aut}(\slt)\subset\mathrm{SO}_{2,1}.$ Finally, $\PGL_2(\R)\subset \mathrm{SO}_{2,1}$ and both are 3-dimensional groups with two components, so they must coincide. \end{proof}
Let us now examine the action of ${\rm Aut}(\SLt)$ on $\P(\sl_2(\C)).$ It is convenient, instead of working with ${\rm Aut}(\SLt)=\mathrm{PGL}_2(\mathbb{R})$, to work with its double cover $\mathrm{SL}^\pm_2(\mathbb{R})$ (matrices with $\det=\pm 1$). The latter consists of two components, the identity component, $\SLt$, and $\sigma\SLt$, where $\sigma$ is any matrix with $\det=-1$; for example $\sigma=\mathrm{diag}(1,-1)$.
According to Lemma \ref{lemma:equiv}, we need to consider first the action of $\mathrm{SL}^\pm_2(\mathbb{R})$ by M\"obius transformations on $\C\P^1$. The action of the identity component $\SLt$ has 3 orbits; in terms of the inhomogeneous coordinate $\zeta$, these are
\begin{itemize}
\item the upper half-plane $\Im(\zeta) > 0,$
\item the lower half-plane $\Im(\zeta)< 0,$
\item their common boundary, the real projective line $\R\P^1 =\mathbb{R}\cup\infty$.
\end{itemize}
The action on each half-plane is by orientation preserving hyperbolic isometries (isometries of the Poincar\'e metric $|\d \zeta|/|\Im(\zeta)|$). The action of $\sigma=\mathrm{diag}(1,-1)$ is by reflection about the origin $\zeta=0$, an orientation preserving hyperbolic isometry between the upper and lower half planes.
In summary, we get the following orbit structure:
\begin{prop}\label{prop:p} Under the identification $\P(\sl_2(\C))\simeq S^2(\C\P^1)$ of Lemma \ref{lemma:equiv}, the orbits of ${\rm Aut}(\SLt)$ in $\P(\sl_2(\C))_{\mathrm{reg}}$ correspond to the following two 1-parameter families of orbits in $S^2(\C\P^1)$:
\begin{enumerate}[leftmargin=18pt,label=I.]\setlength\itemsep{5pt}
\item A 1-parameter family of orbits, corresponding to a pair of points $\zeta_1, \zeta_2\in\C\setminus\mathbb{R}$ in the same half-plane (upper or lower). The parameter can be taken as the hyperbolic distance $d(\zeta_1, \zeta_2)\in[0,\infty)$. All these orbits are 3-dimensional, except the one corresponding to a double point $\zeta_1=\zeta_2$, which is 2-dimensional.
\item A 1-parameter family of orbits, corresponding to a pair of points $\zeta_1, \zeta_2\in\C\setminus\mathbb{R}$ situated in opposite half-planes and which are not complex conjugate, $\zeta_1\neq \bar \zeta_2.$ The parameter can be taken as the hyperbolic distance $d(\zeta_1, \bar \zeta_2)\in(0,\infty)$. All these orbits are 3-dimensional. \end{enumerate}
The rest of the orbits are either real ($\zeta_1, \zeta_2\in \R\P^1=\mathbb{R}\cup \infty$ or $\zeta_1= \bar \zeta_2$) or degenerate (one of the points is real).
\end{prop}
\begin{proof} Most of the claims follow immediately from the previous lemmas so their proof is omitted. The claimed dimensions of the orbits follow from the dimension of the stabilizer in ${\rm Aut}(\SLt)$ of an unordered pair $\zeta_1, \zeta_2\in \C\setminus\mathbb{R}$; for two distinct points in the same half-plane, or in opposite half-planes with $\zeta_1\neq \bar \zeta_2$, the stabilizer is the two-element subgroup interchanging the points. For a double point the stabilizer is a circle group of hyperbolic rotations about this point. \end{proof}
Next, recall that the {\em Killing form} on $\slt$ is the bilinear form $\<X,Y\>=(1/2)\mathrm{tr}(XY).$
The associated quadratic form $\<X, X\>=-\det(X)=a^2+bc$ is a non-degenerate indefinite form of signature $(2,1)$, the unique $\Ad$-invariant form on $\slt$, up to scalar multiple. The {\em null cone} $C\subset \slt$ is the subset of elements with $\<X, X\>=0.$
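The signature claim is a quick diagonalization: $a^2+bc = a^2 + u^2 - v^2$ with $u=(b+c)/2$, $v=(b-c)/2$. A symbolic check of this (an illustration only, assuming Python with sympy) computes the eigenvalues of the Gram matrix of the form in the coordinates $(a,b,c)$.

```python
# Check: the quadratic form <X, X> = a^2 + b c on sl_2(R) has signature (2,1).
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
Q = a**2 + b*c
G = sp.hessian(Q, (a, b, c))/2          # Gram matrix of the quadratic form
eig = sorted(G.eigenvals().keys())
# two positive eigenvalues and one negative: signature (2,1)
assert eig == [-sp.Rational(1, 2), sp.Rational(1, 2), 1]
```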
\begin{defn} A 2-dimensional subspace $\Pi\subset\slt$ is called {\em elliptic} (respectively, {\em hyperbolic}) if the Killing form restricts to a definite (respectively, indefinite but non-degenerate) inner product on $\Pi$. Equivalently, $\Pi$ is hyperbolic if its intersection with the null cone $C$ consists of two of its generators and elliptic if it intersects it only at its vertex $X=0$. A left-invariant CR structure $(D,J)$ on $\SLt$ is {\em elliptic} (resp.~{\em hyperbolic}) if $D_e\subset \slt$ is elliptic (resp.\ hyperbolic).
\end{defn}
\begin{rmrk} There is a third type of a 2-dimensional subspace $\Pi\subset\slt$, called {\em parabolic}, consisting of 2-planes tangent to $C$, but these are subalgebras of $\slt$, hence are excluded by the non-degeneracy condition on the CR structure.
\end{rmrk}
\begin{rmrk}
Our use of the terms elliptic and hyperbolic for the contact plane is natural from the point of view of Lie theory. However it conflicts with the terminology of analysis; CR vector fields are never elliptic or hyperbolic differential operators.
\end{rmrk}
\begin{lemma}\label{lemma:elip}
Let $[L]\in\P(\sl_2(\C))_{\mathrm{reg}}$, and $D_e\subset \slt$ the real part of the span of $L, \overline L$. Then $D_e$ is elliptic if the roots of $p_L$ lie in the same half plane (type I of Proposition \ref{prop:p}), and is hyperbolic if they lie in opposite half planes (type II of proposition \ref{prop:p}).
\end{lemma}
\begin{proof} Let $\zeta_1, \zeta_2$ be the roots of $p_L$. Acting by ${\rm Aut}(\SLt)$, we can assume, without loss of generality, that $\zeta_1=i$ and $\zeta_2=it$ for some $t\in\mathbb{R}\setminus\{-1,0\}$. Thus, up to scalar multiple, $p_L=(\zeta-i)(\zeta-it)=\zeta^2-i(1+t)\zeta-t.$ A short calculation shows that $D_e$ consists of matrices of the form
$X=\left(\begin{array}{cc}
a(1+t) &tb\\
b& -a(1+t)
\end{array}
\right)$, $a,b\in\mathbb{R}$, with $\det (X)=-a^2(1+t)^2-tb^2.$ This is negative definite for $t>0$ and indefinite otherwise.
\end{proof}
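The determinant computation at the end of this proof is easy to confirm symbolically. The sketch below (an illustration only, assuming Python with sympy) verifies the formula $\det(X)=-a^2(1+t)^2-tb^2$ and samples its sign for $t>0$ and $t<0$.

```python
# Check: det X = -a^2 (1+t)^2 - t b^2 on D_e, negative definite for t > 0.
import sympy as sp

a, b, t = sp.symbols('a b t', real=True)
X = sp.Matrix([[a*(1 + t), t*b], [b, -a*(1 + t)]])
assert sp.simplify(X.det() + a**2*(1 + t)**2 + t*b**2) == 0

# sample values: negative definite for t > 0 (elliptic case) ...
assert X.det().subs({t: 2, a: 1, b: 0}) < 0 and X.det().subs({t: 2, a: 0, b: 1}) < 0
# ... but indefinite for -1 < t < 0 (hyperbolic case)
assert X.det().subs({t: -sp.Rational(1, 2), a: 0, b: 1}) > 0
```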
\begin{prop}\label{prop:slt1} Let $V_t\subset T_\C\SLt$, $t\in\mathbb{R},$ be the left-invariant complex line bundle spanned at $e\in\SLt$ by
\begin{equation}\label{eq:L}
L_t=\left(\begin{array}{cc}
i{1+t\over 2} &t\\
1& -i{1+t\over 2}
\end{array}
\right)\in\slt\otimes\C=\sl_2(\C).\end{equation}
Then
\benum
\item
$V_t$ is a left-invariant CR structure for all $t\neq 0,-1$, elliptic for $t>0$ and hyperbolic for $t<0, t\neq -1.$
\item $V_t$ is spherical if $t=1$ or $-3\pm 2\sqrt{2}$ and aspherical otherwise.
\item Every left-invariant CR structure on $\mathrm{SL}_2(\mathbb{R})$ is CR equivalent to
$V_t$ for a unique $t\in(-1,0)\cup(0,1].$
\item The aspherical left-invariant CR structures $V_t$, $t\in(-1,1)\setminus\{0,-3+ 2\sqrt{2} \}$, are pairwise non-equivalent, even locally.
\end{enumerate}
\end{prop}
\begin{proof}
(a) The quadratic polynomial corresponding to $L_t$
is
$$p(\zeta)=\zeta^2-i(1+t)\zeta-t=(\zeta-i)(\zeta-it),$$
with roots $i,it$.
For $t>0$ the roots are in the upper half plane and thus, by Lemma \ref{lemma:elip}, $V_t$ is an elliptic
CR structure. For $t<0$ the roots are in opposite half planes and
for $t\neq -1$ are not complex conjugate, hence $V_t$ is a hyperbolic CR structure.
\sn (b) Let
$$\Theta=g^{-1}\d g= \left(\begin{array}{lr}
\alpha &\beta\\
\gamma & -\alpha
\end{array}
\right)
$$
be the left-invariant Maurer-Cartan $\sl_2(\mathbb{R})$-valued 1-form on $\mathrm{SL}_2(\mathbb{R})$. A coframe adapted to $V_t$ is
\begin{equation}\label{eq:adap}
\theta=\beta-t\gamma,
\quad \theta_1=\alpha-i{1+t\over 2}\gamma,
\end{equation}
i.e. $\theta(L_t)=\theta_1(L_t)=0,$ $\bar\theta_1(L_t)\neq 0$. The Maurer-Cartan equations, $\d\Theta=-\Theta\wedge\Theta$, are
$$\d\alpha=-\beta\wedge\gamma,\quad \d\beta=-2\alpha\wedge\beta,\quad\d\gamma=2\alpha\wedge\gamma.
$$
Using these equations, we calculate
$$\d\theta=i{4t\over 1+t}\theta_1\wedge\bar\theta_1+\theta\wedge\theta_1+\theta\wedge\bar\theta_1.
$$
Now
$$\phi:=\mathrm{sign}(t)(\beta-t\gamma), \quad \phi_1:=\sqrt{\left|{4t\over 1+t}\right|}\left[\alpha-i {1+t\over 4}\left({\beta\over t}+\gamma\right)\right]$$
satisfy
$$\d\phi =i\phi _1\wedge \bar{\phi_1},
\quad \d \phi_1=b\phi\wedge\phi_1+c\phi\wedge\bar\phi_1,$$
where
$$
b=-i{1+6t+t^2\over 4|t|(1+t)},
\quad c=-i{(1-t)^2\over 4|t|(1+t)},
$$
thus $(\phi, \phi_1)$ is well-adapted to $V_t$. Applying Proposition \ref{prop:CRc}, we conclude that $V_t$ is spherical if and only if $(1+6t+t^2)(1-t)=0;$ that is, $t=1$ or $-3\pm 2\sqrt{2}$, as claimed.
\sn(c) The hyperbolic distance $d(i, it)$ varies monotonically from $0$ to $\infty$ as $t$ varies from $1$ to $0$, hence every pair of points in the same half plane can be mapped by ${\rm Aut}(\SLt)$ to the pair $(i,it)$ for a unique $t\in (0,1]$. Consequently, every left-invariant elliptic CR structure is CR equivalent to $V_t$ for a unique $t\in(0,1]$.
Similarly, $d(i, -it)$ varies monotonically from $0$ to $\infty$
as $t$ varies from $-1$ to $0$, hence every hyperbolic left-invariant CR structure is CR equivalent to $V_t$ for a unique $t\in(-1,0)$.
By Theorem \ref{thm:ens}, no pair of the aspherical $V_t$ with $0<|t|<1$ are CR equivalent, even locally. It remains to show that the elliptic and hyperbolic spherical structures, namely, $V_t$ for $t=1$ and $-3+2\sqrt{2}$ (respectively), are not CR equivalent. In the next proposition, we find an embedding $\phi_1:\SLt\to S^3$ of the elliptic spherical structure in the standard spherical CR structure on $S^3$ and an immersion $\phi_2:\SLt\to S^3$ of the hyperbolic spherical structure which is not an embedding (it is a $2:1$ cover). It follows from Corollary \ref{cor:sph} that these two spherical structures are not equivalent: if $f:\SLt\to\SLt$ were a diffeomorphism mapping the hyperbolic spherical structure to the elliptic one, then this would imply that the pull-backs to $\SLt$ of the spherical structure of $S^3$ by $ \phi_1\circ f$ and $\phi_2$ coincide, and hence, by Corollary \ref{cor:sph}, there is an element $g\in\PU_{2,1}$ such that $\phi_2=g\circ \phi_1\circ f$. But this is impossible, since $g\circ \phi_1\circ f$ is an embedding and $\phi_2$ is not.
\sn (d) As mentioned in the previous item, this is a consequence of Theorem \ref{thm:ens}.
\end{proof}
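The sphericity values in part (b) follow from the factored condition $(1+6t+t^2)(1-t)^2=0$ (the criterion $c(2|a|^2+9ib)=0$ with $a=0$ reduces to $bc=0$). As a quick check (an illustration only, assuming Python with sympy), one can solve this polynomial and confirm that the hyperbolic spherical parameter lies in $(-1,0)$:

```python
# Check: the spherical parameters are t = 1 and t = -3 +/- 2*sqrt(2),
# and -3 + 2*sqrt(2) lies in the hyperbolic range (-1, 0).
import sympy as sp

t = sp.symbols('t')
sols = sp.solve(sp.Eq((1 + 6*t + t**2)*(1 - t)**2, 0), t)
assert set(sols) == {1, -3 + 2*sp.sqrt(2), -3 - 2*sp.sqrt(2)}
assert -1 < float(-3 + 2*sp.sqrt(2)) < 0
```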
\begin{rmrk}\label{rmrk:short} There is an alternative path, somewhat shorter (albeit less picturesque), to the classification of left-invariant
CR structures on $\SLt$, leading to a family of `normal forms' different from the $V_t$ of
Proposition \ref{prop:slt1}. One shows first that, up to conjugation by $\SLt$, there are only
two non-degenerate left-invariant contact structures $D\subset T\SLt$: an elliptic one, given by $D_e^+=\{c=b\},$
and a hyperbolic one, given by $D_e^-=\{c=-b\}.$ The Killing form on $\slt$, $-\det(X)=a^2+bc$, restricted to $D_e^\pm$, is given by $a^2\pm b^2,$ with orthonormal basis $A,B\pm C$, where $A, B, C$ is the basis of $\slt$ dual to $a,b,c$.
One then determines the stabilizer of $D_e^\pm$ in ${\rm Aut}(\SLt)$
(the subgroup that leaves $D_e^\pm$ invariant). In each case the stabilizer acts on $D_e^\pm$ as the full isometry group of $a^2\pm b^2,$ that is, $\rm{O}_2$ in the elliptic case, and $\rm{O}_{1,1},$ in the hyperbolic case. Using this description one shows that, in the elliptic case, each almost complex structure on $D^+_e$ is conjugate to a unique one of the form $A\mapsto s(B+C)$, $s\geq 1$, with corresponding $(0,1)$ vector $A+is(B+C)=\left(\begin{array}{cc}
1 &is\\
is&-1
\end{array}
\right)$, and in the hyperbolic case $A\mapsto s(B-C)$, $s> 0$, with corresponding $(0,1)$ vector $A+is(B-C)=\left(\begin{array}{cc}
1 &is\\
-is&-1
\end{array}
\right)$. The spherical structures are given by $s=1$ in both cases.
\end{rmrk}
Regarding realizability of left-invariant CR structures on $\SLt$, we have the following.
\begin{prop}\label{prop:slt2}
\benum
\item
The elliptic left-invariant spherical CR structure on $\SLt$ ($t=1$ in equation \eqref{eq:L}) is realizable as any of the generic (3-dimensional) $\SLt$-orbits in $\C^2$ (complexification of the standard linear action on $\mathbb{R}^2$). This is also CR equivalent to the complement of a `chain' in $S^3\subset \C^2$ (a curve in $S^3$ given by the intersection of a complex affine line in $\C^2$ with $S^3$; e.g. $z_1=0$).
\item The rest of the left-invariant CR structures on $\SLt$, with $0<|t|<1$ in equation \eqref{eq:L}, are either $4:1$ covers, in the aspherical elliptic case $0<t<1$, or $2:1$ covers, in the hyperbolic case $-1<t<0$, of the orbits of $\SLt$ in $\P(\sl_2(\C))$.
\item The spherical hyperbolic orbit is also CR equivalent to the complement of $S^3\cap \mathbb{R}^2$ in $S^3\subset\C^2$.
\end{enumerate}
\end{prop}
\begin{proof}
(a) Fix $v\in\C^2$ and define $\mu:\SLt\to\C^2$ by $\mu(g)= gv$. If the stabilizer of $v$ in $\SLt$ is trivial and $L_1v=0,$ then, by Lemma \ref{lemma:real}, $\mu$ is an $\SLt$-equivariant CR embedding.
Both conditions are satisfied by $v={i\choose 1}.$ In fact, all 3-dimensional $\SLt$-orbits in $\C^2$ are homothetic, hence are CR equivalent and any of them will do.
Now let $\mathcal O\subset\C^2$ be the $\SLt$-orbit of $v={i\choose 1}.$ For $g={a\ \ b\choose c \ \ d}\in \SLt$, with $\det(g)=ad-bc=1$, $gv={b+ia\choose d+ic}$, hence $\mathcal O$ is the quadric $\Im(z_1\bar z_2)=1$, where $z_1, z_2$ are the standard complex coordinates in $\C^2$. To map $\mathcal O$ onto the complement of $z_1=0$ in $S^3$ we first apply the complex linear transformation $\C^2\to \C^2$, $(z_1, z_2)\mapsto (z_1+iz_2, z_2+iz_1)/2$, mapping $\mathcal O$ onto the hypersurface $|z_1|^2-|z_2|^2=1$.
Next let $Z_1, Z_2, Z_3$ be homogeneous coordinates in $\C\P^2$ and embed $\C^2$ as an affine chart, $(z_1, z_2)\mapsto [z_1: z_2:1].$ The image of $|z_1|^2-|z_2|^2=1$ is the complement of $Z_3=0$ in $|Z_1|^2-|Z_2|^2=|Z_3|^2.$ This is mapped by $[Z_1:Z_2:Z_3]\mapsto [Z_3: Z_2: Z_1]$ to the complement of $Z_1=0$ in $|Z_1|^2+|Z_2|^2=|Z_3|^2.$ In our affine chart this is the complement of $z_1=0$ in $|z_1|^2+|z_2|^2=1$, as needed.
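The chain of maps above can be traced numerically on a sample point (an illustrative check, with an arbitrary choice of $g\in\SLt$):

```python
# Follow the chain of maps on a concrete point (stdlib only).
# Sample g in SL_2(R) with det = 1, and v = (i, 1):
a, b, c, d = 2.0, 1.0, 3.0, 2.0            # det = ad - bc = 1
v1, v2 = 1j, 1.0
z1, z2 = a * v1 + b * v2, c * v1 + d * v2  # gv = (b + ia, d + ic)

# The orbit is the quadric Im(z1 * conj(z2)) = 1.
assert abs((z1 * z2.conjugate()).imag - 1) < 1e-12

# (z1, z2) -> ((z1 + i z2)/2, (z2 + i z1)/2) lands on |w1|^2 - |w2|^2 = 1.
w1, w2 = (z1 + 1j * z2) / 2, (z2 + 1j * z1) / 2
assert abs(abs(w1) ** 2 - abs(w2) ** 2 - 1) < 1e-12

# Projective swap [Z1:Z2:Z3] -> [Z3:Z2:Z1], read in the affine chart:
u1, u2 = 1 / w1, w2 / w1
assert abs(abs(u1) ** 2 + abs(u2) ** 2 - 1) < 1e-12  # lands on S^3
assert abs(u1) > 0                                   # and off the chain z1 = 0
print("chain of CR maps verified on a sample point")
```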
\sn (b) By Proposition \ref{prop:real}, to show that the map $\SLt\to\P(\sl_2(\C))$, $g\mapsto [\Ad_gL_t]$, is a CR immersion of $V_t$ into $\P(\sl_2(\C))$, it is enough to show that the stabilizer of $[L_t]\in\P(\sl_2(\C))$ in $\SLt$ is discrete. Using Lemma \ref{lemma:equiv}, we find that, in the aspherical elliptic case, where $t\in (0,1)$, the roots are an unordered pair of distinct points in the upper half plane, so there is a single hyperbolic isometry in $\PSLt$ interchanging them, hence the stabilizer in $\SLt$ is a 4 element subgroup.
In the hyperbolic case, where $t\in (-1,0)$, the roots $\zeta_1, \zeta_2$ are in opposite half-spaces and $\zeta_1\neq \bar\zeta_2$. Hence an element $g\in\SLt$ that fixes the unordered pair $\zeta_1, \zeta_2$ has two distinct fixed points $\zeta_1, \bar \zeta_2$ in the same half plane. It follows that $g$ acts trivially in this half plane and thus $g=\pm \mathrm I.$
\sn (c) $\sl_2(\C)$ admits a pseudo-hermitian product of signature $(2,1)$, $\mathrm{tr}\left(X\overline Y\right)$, invariant under the conjugation action of $\SLt$. The associated projectivized null cone in $\C\P^2$ is diffeomorphic to $S^3$, a model for the spherical CR structure on $S^3$. One can check that $L_t$ is a null vector, i.e. $\mathrm{tr}(L_t\bar L_t)=0$, for $t=-3\pm2\sqrt{2}$. Thus the hyperbolic spherical left-invariant structure on $\SLt$ is a $2:1$ cover of an $\SLt$-orbit in $S^3$, consisting of all regular elements $[L]\in S^3$, whose complement in $S^3$ is the set of elements which are either real or degenerate non-real (see Lemma \ref{lemma:reg} and its proof). One can check that the only degenerate element in $S^3$ satisfies $a=c=0,$ $b\neq 0$, which is real. Thus all irregular elements in $S^3$ are the real elements $\R\P^2\cap S^3\subset \C\P^2,$ or $\mathbb{R}^2\cap S^3\subset\C^2,$ as claimed.
\end{proof}
\begin{rmrk} \label{rmrk:counter}
In Cartan's classification \cite[p.~70]{Ca1}, the left-invariant spherical elliptic CR structure on $\SLt$ appears in item $5^\circ$(B) of the first table, as a left-invariant CR structure on the group $\rm{Aff}(\mathbb{R})\times \mathbb{R}/\mathbb{Z}$. This group is {\em not} isomorphic to $\SLt$, yet it admits a left-invariant spherical structure, CR equivalent to the spherical elliptic CR structure on $\SLt$. This shows that the asphericity condition in Theorem \ref{thm:ens}
is essential. Both groups are subgroups of the full 4-dimensional group of automorphisms of this homogeneous spherical CR manifold (the stabilizer in $\PU_{2,1}$ of a chain in $S^3$). The hyperbolic spherical structure is item $8^\circ$(K$^\prime$).
The elliptic and hyperbolic aspherical left-invariant structures on $\SLt$ appear in items $4^\circ$(K) and $5^\circ$(K$^\prime$) (respectively) of the second table. In these items, Cartan gives explicit equations for the adjoint orbits in inhomogeneous coordinates $(x,y)\in\C^2\subset\C\P^2$ (an affine chart). For the elliptic aspherical orbits he gives the equation $1+x\bar x- y\bar y=\mu|1+x^2-y^2|,$ with $\Im(x(1+\bar y))>0$ and $\mu>1;$ for the hyperbolic aspherical structures he gives the equation $x\bar x+y\bar y-1=\mu|x^2+y^2-1|,$ with $(x,y)\in\C^2\setminus\mathbb{R}^2$ and $0<|\mu|<1.$ Both equations are $\mathrm{tr}(L\bar L)=\mu|\mathrm{tr}(L^2)|,$ with $(x,y)=(b+c,b-c)/(2a)$ in the elliptic case, and $(x,y)=(2a, b-c)/(b+c)$ in the hyperbolic case. The elliptic orbits are the generic orbits in the exterior of $S^3$, given by $\mathrm{tr}(L\bar L)>0$, while the hyperbolic orbits lie in its interior, given by $\mathrm{tr}(L\bar L)<0$. The elliptic orbits come in complex-conjugate pairs; that is, for each orbit, given by the pairs of roots $\zeta_1,\zeta_2\in\C\setminus\mathbb{R}$ in the same (fixed) half-plane, with a fixed hyperbolic distance $d(\zeta_1,\zeta_2)$, there is a complex-conjugate orbit where the pair of roots lie in the opposite half plane. The condition $\Im(x(1+\bar y))>0$ constrains the roots to lie in one of the half planes, and so picks out one of the orbits in each conjugate pair. The hyperbolic orbits are self conjugate.
\end{rmrk}
\section{$\SU_2$}\label{sec:su2}
$\SU_2\simeq S^3$ is the group of $2\times 2$ complex unitary matrices with determinant $1$. Its Lie algebra $\su_2$ consists of anti-hermitian $2\times 2$ complex matrices with $\su_2\otimes\C=\sl_2(\C)$. This case is easier than the previous case of $\SLt$, with no essentially new ideas, so we will be much briefer. The outcome is that there is a single 1-parameter family of left-invariant CR structures, exactly one of which is spherical, the standard spherical structure on $S^3$, realizable in $\C^2$. The rest of the structures are 4:1 covers of generic adjoint orbits in $\P(\g_\C)\simeq\C\P^2$.
\begin{lemma}
${\rm Aut}(\SU_2)={\rm Aut}(\su_2)=\mathrm{Inn}(\SU_2)=\SU_2/\{\pm \mathrm I\}\simeq \mathrm{SO}_3.$
\end{lemma}
\begin{proof}Similar to the $\SLt$ case, the Killing form and the triple product on $\su_2$ are defined in terms
of the Lie bracket alone. This gives a natural inclusion $ {\rm Aut}(\SU_2)\subset \SO_3$.
The conjugation action gives an embedding $\mathrm{Inn}(\SU_2)=\SU_2/\{\pm \mathrm I\}\subset \mathrm{SO}_3$.
The last two groups are connected and 3-dimensional, hence coincide.
\end{proof}
Since $\SU_2\subset \SL_2(\C)$, with $(\su_2)_\C=\sl_2(\C)$, we can, as in the previous case of $G=\SLt$, identify $\P((\su_2)_\C),$ $\SU_2$-equivariantly, with $S^2(\C\P^1)$, the set of unordered pairs of points on $\C\P^1=S^2$, with ${\rm Aut}(\SU_2)=\SU_2/\{\pm\mathrm I\}=\mathrm{SO}_3$ acting on $S^2(\C\P^1)$ by euclidean rotations of $\C\P^1=S^2$, and complex conjugation in $\P((\su_2)_\C)$ given by the antipodal map. Hence $\P((\su_2)_\C)$ consists of non-antipodal unordered pairs of points $\zeta_1, \zeta_2\in S^2$, each of which is given uniquely, up to ${\rm Aut}(\SU_2)=\mathrm{SO}_3$, by their spherical distance $d(\zeta_1, \zeta_2)\in [0,\pi).$
\begin{prop}\label{prop:sut}
Let $V_t\subset T_\C\SU_2$, $t\in\mathbb{R},$ be the left-invariant complex line bundle spanned at $e\in\SU_2$ by
\begin{equation}\label{eq:LL}
L_t=\left(\begin{array}{cc}
0 &t-1\\
t+1& 0
\end{array}
\right)\in\su_2\otimes\C=\sl_2(\C).\end{equation}
Then
\benum
\item $V_t$ is a left-invariant CR structure on $\SU_2$ for all $t\neq 0.$
\item $V_t$ is spherical if and only if $t=\pm 1$.
\item Every left-invariant CR structure on $\SU_2$ is CR equivalent to
$V_t$ for a unique $t\geq 1.$
\item The aspherical left-invariant CR structures $V_t$, $t>1$, are pairwise non-equivalent, even locally.
\item $V_1$ is realized by any of the non-null orbits of the standard representation of $\SU_2$ in $\C^2$. The aspherical structures are locally realized as $4:1$ covers of the adjoint orbits of $\SU_2$ in $\P(\sl_2(\C))$.
\end{enumerate}
\end{prop}
\begin{proof}
(a) Note that $L_t\in\su_2$ only for $t=0$ and that $\su_2$ does not have 2-dimensional subalgebras. It follows that $[L_t]$ is regular for all $t\neq 0$.
\sn (b) We apply Proposition \ref{prop:CRc}. The left-invariant $\su_2$-valued Maurer Cartan form on $\SU_2$ is
\begin{equation}\label{eqn:mc}\Theta=g^{-1}\d g= \left(\begin{array}{cc}
i\alpha &\beta+i\gamma\\
-\beta+i\gamma & -i\alpha
\end{array}
\right).
\end{equation}
The Maurer Cartan equation $\d\Theta=-\Theta\wedge\Theta$ gives
$$
\d\alpha=-2\beta\wedge\gamma, \ \d\beta=-2\gamma\wedge\alpha, \ \d\gamma=-2\alpha\wedge\beta.
$$
A coframe well adapted to $V_t$ is
$$\phi=\alpha,\ \phi_1=\sqrt{t}\beta+{i\over\sqrt{t}}\gamma,
$$
satisfying
$$\d\phi=i\phi_1\wedge\bar\phi_1,\quad \d\phi_1=-i\left( {1\over t}+t \right)\phi\wedge\phi_1 -i\left( {1\over t}-t \right)\phi\wedge\bar\phi_1.
$$
We conclude from Proposition \ref{prop:CRc} that $V_t$ is spherical if and only if
$\left( {1\over t}+t \right)\left( {1\over t}-t \right)=0;$ that is, $t=\pm 1.$
\sn(c) The quadratic polynomial associated to $L_t$ is $(t+1)\zeta^2-(t-1)$, with roots $\zeta_\pm=\pm\sqrt{(t-1)/(t+1)}$. For $t=1$ (the spherical structure) this is a double point at $\zeta=0$, and for $t>1$ these are a pair of points symmetrically situated on the real axis, in the interval $(-1,1)$. As $t$ varies from $1$ to $\infty$ the spherical distance $d(\zeta_+, \zeta_-)$ increases monotonically from $0$ to $\pi$ (see next paragraph). It follows that every unordered pair of non-antipodal points on $S^2$ can be mapped by ${\rm Aut}(\SU_2)=\mathrm{SO}_3$ to a pair $\zeta_\pm$ for a unique $t\geq 1$.
One way to see the claimed statement about $d(\zeta_+, \zeta_-)$ is to place the roots on the sphere $S^2$, using the inverse stereographic projection $\zeta\mapsto (2\zeta, 1-|\zeta|^2)/(1+|\zeta|^2)\in\C\oplus\mathbb{R}$. Then $\zeta_\pm\mapsto (\pm \sin\theta,0,\cos\theta)\in\mathbb{R}^3$, where $\cos\theta=1/t$ and $\theta\in[0,\pi/2)$ for $t\in[1,\infty).$ Thus as $t$ increases from $t=1$ to $\infty$ the pair of points on $S^2$ start from a double point at $(1,0,0)$, move in opposite directions along the meridian $y=0$ and tend towards the poles $ (0,0,\pm 1)$ as $t\to\infty$.
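The relation $\cos\theta=1/t$ can be checked numerically for sample values of $t$ (an illustrative sketch, not part of the proof):

```python
import math

# Place the roots zeta_{+-} = +-sqrt((t-1)/(t+1)) on S^2 via inverse
# stereographic projection and check that the third coordinate is 1/t.
def on_sphere(zeta):
    x = 2 * zeta / (1 + abs(zeta) ** 2)
    h = (1 - abs(zeta) ** 2) / (1 + abs(zeta) ** 2)
    return (x.real, x.imag, h)

for t in (1.0, 2.0, 5.0, 50.0):
    zeta = math.sqrt((t - 1) / (t + 1))
    px, py, pz = on_sphere(complex(zeta))
    assert abs(pz - 1 / t) < 1e-12                      # cos(theta) = 1/t
    assert abs(py) < 1e-12                              # stays on the meridian y = 0
    assert abs(px - math.sqrt(1 - 1 / t ** 2)) < 1e-12  # sin(theta)
print("cos(theta) = 1/t verified for sample values of t")
```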
\sn (d) As in the $\SLt$ case, this follows from Theorem \ref{thm:ens}.
\sn (e) Every non-null orbit of the standard action of $\SU_2$ on $\C^2$ contains a point of the form $v=(0,\lambda),$ $\lambda\in \C^*$. Since the stabilizer of such a point is trivial and $L_1v=0$, it follows by Lemma \ref{lemma:real} that $g\mapsto gv$ is a CR embedding of $V_1$ in $\C^2$.
For $t>1$, we use Proposition \ref{prop:real} to realize the aspherical CR structure $V_t$ as the $\SU_2$-orbit of $[L_t]$ in $\P(\sl_2(\C))$. The stabilizer in $\mathrm{SO}_3$ is the two element group interchanging the two roots in $S^2$, hence the stabilizer in $\SU_2$ is a 4 element subgroup.
\end{proof}
\begin{rmrk}As in the $\SLt$ case (see Remark \ref{rmrk:short}), there is a somewhat quicker way to prove item (c). First note that ${\rm Aut}(\SU_2)=\mathrm{SO}_3$ acts transitively on the set of 2-dimensional subspaces of $\su_2$, hence one can fix the contact plane $D_e$ arbitrarily, say $D_e={\rm Ker}(\alpha)=\mathrm{Span}\{B,C\},$ where $A,B,C$ is the basis of $\su_2$ dual to $\alpha, \beta, \gamma$ of equation \eqref{eqn:mc}. Then, using the subgroup $\rm{O}_2\subset \mathrm{SO}_3={\rm Aut}(\SU_2)$ leaving invariant $D_e$, one can map any almost complex structure on $D_e$ to $J_t:B\mapsto tC$, for a unique $t\geq 1,$ with associated $(0,1)$-vector $B+itC=-L_t$.
\end{rmrk}
\begin{rmrk}\label{rmrk:burns} Proposition \ref{prop:sut}(e) gives a $4:1$ CR immersion $\SU_2\to \P(\sl_2(\C))\simeq\C\P^2$ of each of the aspherical left-invariant CR structures $V_t$, $t>1$. In fact, the proof shows that $\SU_2\to\sl_2(\C)\simeq\C^3$, $g\mapsto gL_tg^{-1}$, is a $2:1$ CR-immersion. It is still unknown, as far as we know, if one can find immersions into $\C^2$. However, it is known that one cannot find CR {\em embeddings} of the aspherical $V_t$ into $\C^n$, $n\geq 2$. This was first proved in \cite{Ro}, by showing that any function $f:\SU_2\to \C$ which is CR with respect to any of the aspherical $V_t$
is necessarily {\em even}, i.e. $f(-g)=f(g).$ A simpler representation theoretic argument was later given in \cite{Bu}, which we proceed to sketch here (with minor notational modifications).
First, one embeds $\mu:\SU_2\to \C^2$, $g\mapsto g{1\choose 0},$ with image $\mu(\SU_2)=S^3$, mapping the action of $\SU_2$ on itself by left translations to the restriction to $S^3$ of the standard linear action of $\SU_2$ on $\C^2$. Next, one uses the `spherical harmonics' decomposition $L^2(S^3)=\bigoplus_{p,q\geq 0} H^{p,q}$, where $H^{p,q}$ is the restriction to $S^3$ of the complex homogeneous harmonic polynomials on $\C^2$ of bidegree $(p,q)$; that is, complex polynomials $f(z_1, z_2, \bar z_1, \bar z_2)$ which are homogeneous of degree $p$ in $z_1,z_2$, homogeneous of degree $q$ in $\bar z_1, \bar z_2$, and satisfy $(\partial_{z_1}\partial_{\bar z_1}+\partial_{z_2}\partial_{\bar z_2}) f=0$. Each $H^{p,q}$ has dimension $p+q+1$, is $\SU_2$-invariant and irreducible, with $-\mathrm I\in\SU_2$ acting by $(-1)^{p+q}.$
Next, one checks that $Z:=\bar z_2\partial_{z_1}-\bar z_1\partial_{z_2}$ is an $\mathrm{SU}_2$-invariant $(1,0)$-complex vector field on $\C^2$, tangent to $S^3$, mapping $H^{p,q}\to H^{p-1,q+1}$ for all $p>0, q\geq 0$, $\SU_2$-equivariantly. The latter is a non-zero map, hence, by Schur's Lemma, it is an {\em isomorphism}. Similarly, $\bar Z$ is a $(0,1)$-complex vector field on $\C^2$, tangent to $S^3$, defining an $\SU_2$-isomorphism $H^{p,q}\to H^{p+1,q-1}$ for all $q>0, p\geq 0$. It follows that each $H^k:=\bigoplus_{p+q=k}H^{p,q}$, $k\geq 0$, is invariant under $Z, \bar Z.$
Next, one checks that $\bar Z_t:=(1+t)\bar Z+ (1-t)Z$, restricted to $S^3$, spans $\d\mu(V_t)$. That is, $f:S^3\to \C$ is CR with respect to $\d\mu(V_t)$ if and only if $\bar Z_t f=0$. By the previous paragraph, each $H^k$ is $\bar Z_t $ invariant, hence $\bar Z_t f=0$ implies $\bar Z_t f^k=0$ for all $k\geq 0$, where $f^k\in H^k$ and $f=\sum f^k$. Now one uses the previous paragraph to show that for $k$ odd and $t>1$, $\bar Z_t$ restricted to $H^k$ is {\em invertible}. It follows that $\bar Z_t f=0$, for $t>1$, implies that $ f^k=0$ for all $k$ odd; that is, $f$ is even, as claimed. \qed
\end{rmrk}
\begin{rmrk} In Cartan's classification \cite[p.~70]{Ca1}, the spherical structure $V_1$ is item $1^\circ$ of the first table. The aspherical structures appear in item $6^\circ$(L) of the second table. Note that Cartan has an error in this item (probably typographical): the equation for the $\SU_2$-adjoint orbits, in homogeneous coordinates in $\C\P^2$, should be $x_1\bar x_1+x_2\bar x_2+x_3\bar x_3=\mu|x_1 ^2+x_2^2+x_3^2|,$ $\mu>1$ (as appears correctly on top of p.~67). This is a coordinate version of the equation $\mathrm{tr}(L\bar L^t)=\mu|\mathrm{tr}(L^2)|$.
\end{rmrk}
\section {The Heisenberg group}
The Heisenberg group $H$ is the group of matrices of the form
$$\left(\begin{array}{ccc}1&x&z\\0&1&y\\ 0&0&1\end{array}\right), \quad x,y,z\in\mathbb{R}.$$
Its Lie algebra $\h$ consists of matrices of the form
$$\left(\begin{array}{ccc}0&a&c\\0&0&b\\ 0&0&0\end{array}\right), \quad a,b,c\in\mathbb{R}.$$
\begin{lemma} ${\rm Aut}(H)={\rm Aut}(\h)$ is the 6-dimensional Lie group, acting on $\h$ by
\begin{equation}\label{eq:auth}
\left(\begin{array}{cc}T&0\\ {\bf v} &\det(T)\end{array}\right), \quad T\in\mathrm{GL}_2(\mathbb{R}),\ {\bf v}\in\mathbb{R}^2
\end{equation}
(matrices with respect to the basis dual to $a,b,c$).
\end{lemma}
\begin{proof}Let $A,B,C$ be the basis of $\h$ dual to $a,b,c$. Then $$[A,B]=C,\ [A,C]=[B,C]=0.$$
One can then verify by a direct calculation that the matrices in formula \eqref{eq:auth} are those preserving these commutation relations.
\end{proof}
\begin{rmrk} Here is a cleaner proof of the last Lemma (which works also for the higher dimensional Heisenberg group): the commutation relations imply that $\mathfrak{z}:=\mathbb{R} C$ is the center of $\h$, so any $\phi\in{\rm Aut}(H)$ leaves it invariant, acting on $\mathfrak{z}$ by some $\lambda\in\mathbb{R}^*$ and on $\h/\mathfrak{z}$ by some $T\in{\rm Aut}(\h/\mathfrak{z}).$ The Lie bracket defines a non-zero element $\omega\in \Lambda^2((\h/\mathfrak{z})^*)\otimes \mathfrak{z}$ fixed by $\phi$. Now $\phi^*\omega=(\lambda/\det(T))\omega$, hence $\lambda=\det(T)$. This gives the desired form of $\phi$, as in equation \eqref{eq:auth}.
\end{rmrk}
\begin{prop} Let $V\subset T_\C H$ be the left-invariant complex line bundle spanned at $e\in H$ by
\begin{equation}\label{eq:H}
L=\left(\begin{array}{ccc}
0 &1&0\\
0&0& i\\
0&0&0
\end{array}
\right)\in\h\otimes\C.\end{equation}
Then
\benum
\item $V$ is the unique left-invariant CR structure on $H$, up to the action of ${\rm Aut}(H)$.
\item V is spherical, CR equivalent to the complement of a point in $S^3$.
\item $V$ is also embeddable in $\C^2$ as the real quadric $
\Im(z_1)=|z_2|^2.$ In these coordinates, the group multiplication in $H$ is given by
\[
(z_1,z_2)\cdot (w_1,w_2)=(z_1+w_1+2i\bar{z}_2w_2,z_2+w_2).
\]
\end{enumerate}
\end{prop}
\begin{proof}
(a) The adjoint action is $(x,y,z)\cdot(a,b,c)=(a, b, c + b x - a y).$ This has 1-dimensional orbits, the affine lines parallel to the $c$ axis, except the $c$ axis itself (the center of $\h$), which is pointwise fixed.
The `vertical' 2-dimensional subspaces in $\h$, i.e. those containing the $c$ axis, are subalgebras, so give degenerate CR structures. It is easy to see that any other 2-dimensional subspace can be mapped by the adjoint action to $D_e=\{c=0\}$ and that the subgroup of ${\rm Aut}(H)$ preserving $D_e$ consists of
$$\left(\begin{array}{cc}T&0\\0&\det(T)\end{array}\right), \quad T\in\mathrm{GL}_2(\mathbb{R}),$$
(written with respect to the basis of $\h$ dual to $a,b,c$).
These act transitively on the set of almost complex structures on $D_e$. One can thus take the almost complex structure on $D_e$ mapping $A\mapsto B,$ with associated $(0,1)$ vector $L=A+iB.$
\sn (b) Define a Lie algebra homomorphism $\rho':\h\to \End(\C^3)$ by
\begin{equation} \label{eq:rep}
(a,b,c)\mapsto
\left(\begin{array}{ccc}0&-b-ia&2c\\ 0&0&a+ib\\ 0&0&0\end{array}\right),\end{equation}
with associated complex linear representation $\rho:H\to \mathrm{GL}_3(\C)$,
\begin{equation} \label{eq:repg}
(x,y,z)\mapsto
\left(\begin{array}{ccc}1&-y-ix&2z-xy-{i\over 2}(x^2+y^2)\\ 0&1&x+iy\\ 0&0&1\end{array}\right).
\end{equation}
Then one can verify that $\rho$ has the following properties:
\begin{itemize}[leftmargin=30pt]
\item It preserves the pseudo-hermitian quadratic form $ |Z_2|^2+2\Im(Z_1\bar Z_3)$ on $\C^3$, of signature $(2,1)$.
\item The induced $H$-action on $S^3\subset \C\P^2$ (the projectivized null cone of the pseudo-hermitian form) has 2 orbits: a fixed point $[{\mathbf e}_1]\in S^3$ and its complement.
\item The $H$-action on $S^3\setminus\{[{\mathbf e}_1]\}$ is free.
\item $\rho'(L){\mathbf e}_3=0.$
\end{itemize}
It follows, by Lemma \ref{lemma:real}, that $H\to S^3\subset\C\P^2,$ $h\mapsto [\rho(h){\mathbf e}_3]$, is a CR embedding of the CR structure $V$ on $H$ in $S^3$, whose image is the complement of $[{\mathbf e}_1]$.
\sn (c) In the affine chart $\C^2\subset \C\P^2$, $(z_1, z_2)\mapsto [z_1:z_2:1]$, the equation of $H=S^3\setminus [{\mathbf e}_1]$ is $2\Im(z_1)=-|z_2|^2.$ After rescaling the $z_1$ coordinate one obtains $\Im(z_1)=|z_2|^2.$ The claimed formula for the group product in these coordinates follows from the embedding $h\mapsto [\rho(h){\mathbf e}_3]$ and formula \eqref{eq:repg}. \end{proof}
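Two of the verifications left to the reader in part (b) — that \eqref{eq:repg} is a homomorphism and that $\rho'(L){\mathbf e}_3=0$ — can be confirmed numerically (the sample group elements below are arbitrary):

```python
# rho is a homomorphism for (x,y,z)(x',y',z') = (x+x', y+y', z+z'+xy'),
# and rho'(L) kills e_3 for L = A + iB, i.e. (a,b,c) = (1, i, 0).
def rho(x, y, z):
    return [[1, -y - 1j * x, 2 * z - x * y - 0.5j * (x * x + y * y)],
            [0, 1, x + 1j * y],
            [0, 0, 1]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

h1, h2 = (0.3, -1.2, 0.7), (1.5, 0.4, -0.2)
prod = (h1[0] + h2[0], h1[1] + h2[1], h1[2] + h2[2] + h1[0] * h2[1])
lhs, rhs = matmul(rho(*h1), rho(*h2)), rho(*prod)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(3) for j in range(3))

# rho'(a,b,c) acts on e_3 by its third column (2c, a+ib, 0); at L this vanishes.
a, b, c = 1, 1j, 0
col3 = (2 * c, a + 1j * b, 0)
assert all(abs(w) < 1e-12 for w in col3)   # rho'(L) e_3 = 0
print("rho is a homomorphism and rho'(L)e_3 = 0")
```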
\begin{rmrk} The origin of formula \eqref{eq:rep} is as follows. Consider the standard representation of $\SU_{2,1}$ on $\C^{2,1}$ and the resulting action on $S^3\subset\C\P^2=\P(\C^{2,1})$. The stabilizer in $\SU_{2,1}$ of a point $\infty \in S^3$ is a 5-dimensional subgroup $P\subset\SU_{2,1}$, acting transitively on $S^3\setminus\{\infty\}.$ The stabilizer in $P$ of a point $o\in S^3\setminus\{\infty\}$ is a subgroup $\C^*\subset P$, whose conjugation action on $P$ leaves invariant a 3-dimensional normal subgroup of $P$, isomorphic to our $H$, so that $P= H\rtimes\C^*$. To get formula \eqref{eq:rep}, we consider the adjoint action of $\C^*$ on the Lie algebra $\mathfrak{p}$ of $P$, under which $\mathfrak{p}$ decomposes as $\mathfrak{p}=\h\oplus \C$, as in \eqref{eq:rep}. For more details,
see \cite[pp.~115-120]{Gol2}. \end{rmrk}
\begin{rmrk} In Cartan's classification \cite[p.~70]{Ca1}, the left-invariant spherical structure on $H$ is item $2^\circ$(A) of the first table.
\end{rmrk}
\section{The Euclidean Group}
Let ${\rm E}_2=\mathrm{SO}_2\rtimes\mathbb{R}^2$ be the group of orientation preserving isometries of $\mathbb{R}^2$, equipped with the standard euclidean metric. Every element in ${\rm E}_2$ is of the form ${\bf v}\mapsto R{\bf v}+ {\mathbf w} $, for some $R\in \mathrm{SO}_2$, $ {\mathbf w} \in\mathbb{R}^2$.
If we embed $\mathbb{R}^2$ as the affine plane $z=1$ in $\mathbb{R}^3$, ${\bf v}\mapsto ({\bf v},1)$, then ${\rm E}_2$ is identified with the subgroup of $\mathrm{GL}_3(\mathbb{R})$ consisting of matrices in block form
\begin{equation}\label{eq:E} \left(\begin{array}{ccc}R& {\mathbf w} \\ 0&1\end{array}\right), \quad R\in\mathrm{SO}_2, \ {\mathbf w} \in\mathbb{R}^2.
\end{equation}
Its Lie algebra $\Ee_2$ consists of matrices of the form
\begin{equation}\label{eq:Ee}\left(\begin{array}{ccc}0&-c&a\\ c&0&b\\ 0&0&0\end{array}\right), \quad a,b,c\in\mathbb{R}.
\end{equation}
Let $\rm{CE}_2$ be the group of {\em similarity} transformations of $\mathbb{R}^2$ (not necessarily orientation preserving). That is, maps $\mathbb{R}^2\to\mathbb{R}^2$ of the form ${\bf v}\mapsto T{\bf v}+ {\mathbf w} $, where $ {\mathbf w} \in\mathbb{R}^2$, $T\in \mathrm{CO}_2=\mathbb{R}^*\times \rm{O}_2$.
Then ${\rm E}_2\subset \rm{CE}_2$ is a normal subgroup with trivial centralizer, hence there is a natural inclusion $\rm{CE}_2\subset {\rm Aut}({\rm E}_2)$.
\begin{lemma}
$\rm{CE}_2= {\rm Aut}({\rm E}_2)={\rm Aut}(\Ee_2)$.
\end{lemma}
\begin{proof}
One calculates that the inclusion $\rm{CE}_2\subset {\rm Aut}(\Ee_2)$ is given, with respect to the basis $A,B,C$ of $\Ee_2$ dual to $a,b,c$, by the matrices
\begin{equation}\label{eq:aut}
( {\mathbf w} ,T)\mapsto\left(\begin{array}{cc}
T&-\epsilon i {\mathbf w} \\
0&\epsilon
\end{array}
\right), \quad T\in \rm{CO}_2, \ {\mathbf w} \in\mathbb{R}^2,
\end{equation}
where $\epsilon=\pm 1$ is the sign of $\det(T)$ and $i:(a,b)\mapsto (-b,a).$ To show that the map $\rm{CE}_2\to {\rm Aut}(\Ee_2)$ of equation \eqref{eq:aut} is surjective, let $\phi\in{\rm Aut}(\Ee_2)$ and observe that $\phi$ must preserve the subspace $c=0$, since it is the unique 2-dimensional ideal of $\Ee_2$. Thus
$\phi$ has the form
$$\phi=\left(\begin{array}{rrr}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ 0&0&a_{33}\end{array}\right)$$
with respect to the basis $A,B,C$ of $\Ee_2$ dual to $a,b,c$.
Next, using the commutation relations
\begin{equation}\label{eq:com}
[A,B]=0, \ [A,C]=-B, \ [B,C]=A,
\end{equation}
we get
$$a_{11}=a_{22}a_{33}, \ a_{22}=a_{11}a_{33},\ a_{12}=-a_{21}a_{33},\ a_{21}=-a_{12}a_{33}.
$$
From the first two equations we get $a_{11}=a_{11}(a_{33})^2, $ and from the last two
$a_{12}=a_{12}(a_{33})^2.$ We cannot have $a_{11}=a_{12}=0$, else $\det(\phi)=
(a_{11}a_{22}-a_{12}a_{21})a_{33}=0.$ It follows that $a_{33}=\pm 1$. If $a_{33}=1$ then we get from the above 4 equations $a_{22}=a_{11}, a_{12}=-a_{21}$, hence
the top left $2\times 2 $ block of $\phi$ is in $\mathrm{CO}_2^+$ (an orientation preserving linear similarity). If $a_{33}=-1$ then we get $a_{22}=-a_{11}, a_{12}=a_{21}$, hence the top left $2\times 2 $ block of $\phi$ is in $\mathrm{CO}_2^-$ (an orientation reversing linear similarity). These are exactly the matrices of equation \eqref{eq:aut}.
\end{proof}
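The two computations in the proof — the commutation relations of equation \eqref{eq:com} and the fact that the matrices of equation \eqref{eq:aut} preserve them — can be checked numerically; the sample values of $T\in\mathrm{CO}_2^+$ and ${\mathbf w}$ below are arbitrary:

```python
import math

# Structure constants of e_2 from (eq:com): [A,B]=0, [A,C]=-B, [B,C]=A,
# written on coordinate triples (a, b, c).
def bracket(X, Y):
    a, b, c = X
    ap, bp, cp = Y
    return (b * cp - c * bp, c * ap - a * cp, 0.0)

# A sample matrix from (eq:aut): T = r*R_theta in CO_2^+, eps = 1, w arbitrary.
r, th, w1, w2 = 1.7, 0.6, 0.8, -1.1
T = ((r * math.cos(th), -r * math.sin(th)), (r * math.sin(th), r * math.cos(th)))

def phi(X):
    a, b, c = X
    return (T[0][0] * a + T[0][1] * b + w2 * c,   # -eps*i(w) = (w2, -w1) for eps = 1
            T[1][0] * a + T[1][1] * b - w1 * c,
            c)

basis = [(1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)]
for X in basis:
    for Y in basis:
        L = phi(bracket(X, Y))
        R = bracket(phi(X), phi(Y))
        assert all(abs(l - rr) < 1e-12 for l, rr in zip(L, R))
print("the sample matrix from (eq:aut) is a Lie algebra automorphism")
```

Sanity: `bracket((0,1,0),(0,0,1))` returns `(1,0,0)`, i.e. $[B,C]=A$, as it should.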
\begin{prop}Let $V\subset T_\C {\rm E}_2$ be the left-invariant line bundle whose value at $e\in {\rm E}_2$ is spanned by
$$L=\left(\begin{array}{ccc}0&-i&1\\ i&0&0\\ 0&0&0\end{array}\right)\in (\Ee_2)_\C.$$
Then
\benumm
\item Every left-invariant CR structure on ${\rm E}_2$ is CR equivalent to $V$ by ${\rm Aut}({\rm E}_2)$.
\item $V$ is an aspherical left-invariant CR structure on ${\rm E}_2$.
\item $V$ is realized in $\P((\Ee_2)_\C)=\C\P^2$ by the adjoint orbit of $[L]$. This is CR equivalent to the real hypersurface $[\Im(z_1)]^2+[\Im(z_2)]^2=1$ in $\C^2$.
\end{enumerate}
\end{prop}
\begin{proof}
(a) Let $A,B,C$ be the basis of $\Ee_2$ dual to $a,b,c$. Then $L=A+iC$, so $D_e=\mathrm{Span}\{A,C\}=\{b=0\}.$ The plane $c=0$ is a subalgebra of $\Ee_2$, so gives a degenerate CR structure. By equation \eqref{eq:aut},
every other plane can be mapped by ${\rm Aut}({\rm E}_2)$ to $D_e$. The subgroup of ${\rm Aut}({\rm E}_2)$ preserving $D_e$ acts on $D_e$, with respect to the basis $A,C$, by the matrices
$$\left(\begin{array}{cc}r &s\\ 0&\epsilon\end{array}\right), \quad r\in\mathbb{R}^*, \ s\in \mathbb{R}, \ \epsilon=\pm1.$$
One can then show that this group acts transitively on the space of almost complex structures on $D_e$.
\sn (b) Let $\alpha, \beta, \gamma$ be the left-invariant 1-forms on ${\rm E}_2$ whose value at $e$ is $a,b,c$ (respectively). Then
$$\Theta=\left(\begin{array}{ccc}0&-\gamma&\alpha\\ \gamma&0&\beta\\ 0&0&0\end{array}\right)$$
is the left-invariant Maurer-Cartan form on ${\rm E}_2$, satisfying $\d\Theta=-\Theta\wedge\Theta$, from which we get
\begin{equation}\label{eq:stre}
\d \alpha=-\beta\wedge\gamma, \ \d\beta=\alpha\wedge\gamma, \ \d\gamma=0.
\end{equation}
A coframe $(\phi, \phi_1)$ adapted to $V$ (i.e. $\phi(L)=\phi_1(L)=0,$ $\bar\phi_1(L)\neq 0$) is
$$\phi=\beta, \ \phi_1={1\over\sqrt{2}}\left(\alpha+i\gamma\right).$$
Using equations \eqref{eq:stre}, we find
$$\d\phi=i\phi_1\wedge\bar\phi_1, \quad \d\phi_1={i\over 2}\phi\wedge\phi_1-{i\over 2}\phi\wedge\bar\phi_1.$$
Thus $(\phi, \phi_1)$ is well-adapted. By Proposition \ref{prop:CRc}, the structure is aspherical.
\sn(c) Using Proposition \ref{prop:real}, this amounts to showing that the stabilizer of $[L]$ in ${\rm E}_2$ is trivial. This is a simple calculation using formula \eqref{eq:aut}, with $L=A+iC$ and $T\in\mathrm{SO}_2,$ $\epsilon=1$. The ${\rm E}_2$-orbit of $[L]$ in $\P((\Ee_2)_\C)$ is contained in the affine chart $c\neq 0$. Using the coordinates $z_1=a/c, z_2=b/c$ in this chart, the equation for the orbit is $[\Im(z_1)]^2+[\Im(z_2)]^2=1.$
\end{proof}
\begin{rmrk} In Cartan's classification \cite[p.~70]{Ca1}, the left-invariant aspherical structure on ${\rm E}_2$ is item $3^\circ$(H) of the second table, with $m=0$.
\end{rmrk}
\IEEEPARstart{D}{ecreasing} the number of required samples for unique representation of a class of signals known as \emph{sparse} has been the subject of extensive research in the past five years. The field of compressed sensing, which was first introduced in \cite{Donoho2006} and further developed in \cite{Candes2006a,Candes2006b}, deals with the reconstruction of an $n\times 1$ but $k$-sparse vector $\mathbf{x}_{n\times 1}$ from its linear projections ($\mathbf{y}_{m\times 1}$) onto an $m$-dimensional ($m\ll n$) space: $\mathbf{y}_{m\times 1}=\mathbf{\Phi}_{m\times n}\mathbf{x}_{n\times 1}$. The two main concerns in compressed sensing are 1) selecting the sampling matrix $\mathbf{\Phi}_{m\times n}$ and 2) reconstruction of $\mathbf{x}_{n\times 1}$ from the measurements $\mathbf{y}_{m\times 1}$ by exploiting the sparsity constraint.
In general, finding the exact solution to the second problem is an NP-complete problem \cite{Candes2005}; however, if the number of samples ($m$) exceeds the lower bound of $m >\mathcal{O}\big(k\log(n/k)\big)$, $\ell_1$ minimization (Basis Pursuit) can be performed instead of the exact $\ell_0$ minimization (sparsity constraint) with the same solution for almost all the possible inputs \cite{Candes2005,Candes2006a}. There are also other techniques such as greedy methods \cite{Tropp2007,Needell2008} that can be used.
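To illustrate the greedy alternative on a toy instance (the $2\times 3$ matrix and the signal below are ours, chosen for illustration, not taken from the literature), a single matching-pursuit-style step already recovers a $1$-sparse vector exactly:

```python
import math

# Columns of a small sampling matrix Phi (illustrative, unit-norm columns).
cols = [(1.0, 0.0), (0.0, 1.0), (1 / math.sqrt(2), 1 / math.sqrt(2))]
x_true = [0.0, 0.0, 2.0]                       # 1-sparse signal
y = [sum(c[i] * xi for c, xi in zip(cols, x_true)) for i in range(2)]

# Greedy step: pick the column most correlated with y, read off the coefficient.
corr = [abs(sum(ci * yi for ci, yi in zip(c, y))) for c in cols]
j = max(range(3), key=lambda i: corr[i])
coef = sum(ci * yi for ci, yi in zip(cols[j], y))
residual = [yi - coef * cols[j][i] for i, yi in enumerate(y)]

assert j == 2 and abs(coef - 2.0) < 1e-12
assert all(abs(ri) < 1e-12 for ri in residual)  # exact recovery in one step
print("greedy step recovers the 1-sparse signal exactly")
```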
The first problem (sampling matrix) is usually treated by random selection of the matrix; among the well-known random matrices are i.i.d.\ Gaussian \cite{Donoho2006} and Rademacher \cite{Baraniuk2006} matrices. Before addressing some of the deterministic matrix constructions, we first describe the well-known Restricted Isometry Property (RIP) \cite{Candes2006a}:
We say that the matrix $\mathbf{A}_{m\times n}$ obeys RIP of order $k$ with constant $0\leq\delta_k<1$ (RIC) if for all $k$-sparse vectors $\mathbf{x}_{n\times 1}$ we have:
\begin{eqnarray}
1-\delta_k\leq\frac{\|\mathbf{A}\mathbf{x}\|^2_{\ell_2}}{\|\mathbf{x}\|^2_{\ell_2}}\leq 1+\delta_k
\end{eqnarray}
In other words, RIP of order $k$ implies that each $k$ columns of the matrix $\mathbf{A}$ resembles a quasi-orthonormal set: if $\mathbf{B}_{m\times k}$ is formed by $k$ different columns of $\mathbf{A}$, all the eigenvalues of the Grammian matrix $\mathbf{B}^T\mathbf{B}$ should lie inside the interval $[1-\delta_k~,~1+\delta_k]$.
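For $k=2$ this observation gives a closed form: the $2\times 2$ Gram matrix of two unit-norm columns has eigenvalues $1\pm|g|$, where $g$ is the inner product of the columns, so $\delta_2$ is exactly the largest pairwise inner product (the coherence). A small illustrative check (the $3\times 4$ matrix below is an arbitrary example of ours):

```python
import math

# A 3x4 matrix with +-1 entries, columns normalized to unit length.
A = [[1, 1, 1, -1],
     [1, -1, 1, 1],
     [1, 1, -1, 1]]
n = math.sqrt(3)
cols = [[A[i][j] / n for i in range(3)] for j in range(4)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

assert all(abs(dot(c, c) - 1) < 1e-12 for c in cols)       # unit columns
delta2 = max(abs(dot(cols[i], cols[j]))
             for i in range(4) for j in range(i + 1, 4))
# Eigenvalues of every 2-column Gram matrix lie in [1 - 1/3, 1 + 1/3].
assert abs(delta2 - 1 / 3) < 1e-12
print("delta_2 =", delta2)
```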
RIP is a sufficient condition for stable recovery. The basis pursuit and greedy methods can be applied for recovery of $k$-sparse vectors from noisy samples with good results if the matrix $\mathbf{A}$ obeys RIP of order $2k$ with a good enough constant $\delta_{2k}$ \cite{Candes2008,Needell2008}.
In this paper we are interested in deterministic as opposed to random sampling matrices. Deterministic sampling matrices are useful because in practice, the sampler should finally choose a deterministic matrix; realizations of the random matrices are not guaranteed to work. Moreover, by proper choice of the matrix, complexity or compression rate may be improved. In deterministic sampling matrix design, we are looking for $m\times n$ matrices which obey the RIP of order $k$. It is well-known that any $k$ columns of a $k\times n$ Vandermonde matrix are linearly independent; thus, if we normalize the columns, for all values of $n$, the new matrix satisfies the RIP condition of order $k$. In other words, arbitrary RIP-constrained matrices could be constructed in this way; however, when $n$ increases, the constant $\delta_k$ rapidly approaches $1$ and some of the $k\times k$ submatrices become ill-conditioned \cite{Cohen2009}, which makes the matrix impractical. In \cite{DeVore2007}, $p^2\times p^{r+1}$ matrices ($p$ is a power of a prime integer) with $0,1$ elements (prior to normalization) are proposed which obey RIP of order $k$ where $kr<p$. Another binary matrix construction with $m=k2^{\mathcal{O}(\log\log n)^E}$ measurements ($E>1$) is investigated in \cite{Indyk2008conf} which employs hash functions and extractor graphs. The connection between coding theory and compressed sensing matrices is established in \cite{Howard2008conf} where second order Reed-Muller codes are used to construct $2^{l}\times 2^{\frac{l(l+1)}{2}}$ matrices with $\pm 1$ elements; unfortunately, the matrix does not satisfy RIP for all $k$-sparse vectors. Complex $m\times m^2$ matrices with chirp-type columns are also conjectured to obey RIP of some order \cite{Applebauma2009}. Recently, almost bound-achieving matrices have been proposed in \cite{Calderbank2009} which, rather than the exact RIP, satisfy statistical RIP (high probability that RIP holds).
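The Vandermonde fact is easy to confirm exhaustively on a toy example, using exact rational arithmetic (illustrative only; the nodes are an arbitrary choice):

```python
from fractions import Fraction
from itertools import combinations

# A 3 x 5 Vandermonde matrix with distinct nodes; every 3-column minor
# must be nonzero, i.e. any 3 columns are linearly independent.
nodes = [Fraction(x) for x in (0, 1, 2, 3, 4)]
V = [[t ** i for t in nodes] for i in range(3)]   # rows 1, t, t^2

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

for chosen in combinations(range(5), 3):
    B = [[V[i][j] for j in chosen] for i in range(3)]
    assert det3(B) != 0        # the 3 chosen columns are linearly independent
print("all 3-column submatrices are nonsingular")
```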
In this paper, we explicitly introduce $(2^l-1)\times 2^{2^{\mathcal{O}(\frac{l}{j}\log j)}}$ matrices with $\pm 1$ elements which obey the exact RIP for $k<2^j$. The new construction is obtained by replacing the zeros of linear binary block codes (specifically, BCH codes) with $-1$. In this approach, we require binary codes whose minimum distances are close to half their code length; the existence of such codes will be shown by explicitly providing BCH codes.
The rest of the paper is organized as follows: In the next section we show the connection between linear block codes and the construction of RIP-fulfilling $\pm 1$ matrices. In section \ref{sec:dminCodes} we introduce BCH codes that meet the requirements for producing compressed sensing matrices. Matrix construction and recovery of the sparse signal from the samples using the matching pursuit method are discussed in section \ref{sec:SampRec}. The introduced matrices are combined with a previous scheme to form $0,\pm 1$ matrices in section \ref{sec:ThreeElement} and finally, section \ref{sec:Conclusion} concludes the paper.
\section{Matrix Construction via Linear Codes}\label{sec:CSCoding}
In this section we will describe the connection between the sampling matrix and coding theory. Since the parameters $k,n$ are used in both compressed sensing and coding theory, we distinguish the two by using the tilde notation ($\tilde{\cdot}$) for coding parameters; i.e., $\tilde{n}$ refers to the code length while $n$ denotes the number of columns of the sampling matrix.
Let $\mathcal{C}(\tilde{n},\tilde{k};2)$ be a linear binary block code and $\mathbf{1}_{\tilde{n}\times 1}$ be the all-$1$ vector. We say $\mathcal{C}$ is ``symmetric'' if $\mathbf{1}_{\tilde{n}\times 1}\in\mathcal{C}$. For symmetric codes, if $\mathbf{a}_{\tilde{n}\times 1}$ is a code vector then, due to the linearity of the code, its complement, defined as $\mathbf{a}_{\tilde{n}\times 1} \oplus \mathbf{1}_{\tilde{n}\times 1}$, is also a valid code vector; therefore, the code vectors form complement couples.
\newtheorem{theo}{\textbf{Theorem}}
\begin{theo}\label{theo:CodingCS}
Let $\mathcal{C}(\tilde{n},\tilde{k};2)$ be a symmetric code with the minimum distance $\tilde{d}_{min}$ and let $\tilde{\mathbf{A}}_{\tilde{n}\times 2^{\tilde{k}-1}}$ be the matrix composed of code vectors as its columns such that from each complement couple, exactly one is selected. Define:
\begin{eqnarray}
\mathbf{A}_{\tilde{n}\times 2^{\tilde{k}-1}}\triangleq \frac{1}{\sqrt{\tilde{n}}}\bigg(2\tilde{\mathbf{A}}_{\tilde{n}\times 2^{\tilde{k}-1}} - \big(1\big)_{\tilde{n}\times 2^{\tilde{k}-1}}\bigg)
\end{eqnarray}
Then, $\mathbf{A}$ satisfies RIP with the constant $\delta_k=(k-1)\big(1-2\frac{\tilde{d}_{min}}{\tilde{n}}\big)$ for $k<\frac{\tilde{n}}{\tilde{n}-2\tilde{d}_{min}}+1$ ($k$ is the RIP order).
\end{theo}
\vspace{0.5cm}
\textbf{Proof}. First note that the columns of $\mathbf{A}$ are normalized. In fact, $2\tilde{\mathbf{A}}_{\tilde{n}\times 2^{\tilde{k}-1}} - \big(1\big)_{\tilde{n}\times 2^{\tilde{k}-1}}$ is the same matrix as $\tilde{\mathbf{A}}$ with zeros replaced by $-1$; hence, the absolute value of each element of $\mathbf{A}$ is equal to $\frac{1}{\sqrt{\tilde{n}}}$, which shows that the columns have unit norm.
To prove the RIP, we use an approach similar to that of \cite{DeVore2007}; we show that for any two distinct columns of $\mathbf{A}$, the absolute value of their inner product is at most $\frac{\tilde{n}-2\tilde{d}_{min}}{\tilde{n}}$. Let $\mathbf{a}_{\tilde{n}\times 1}, \mathbf{b}_{\tilde{n}\times 1}$ be two different columns of $\mathbf{A}$ and $\tilde{\mathbf{a}}_{\tilde{n}\times 1}, \tilde{\mathbf{b}}_{\tilde{n}\times 1}$ be their corresponding columns in $\tilde{\mathbf{A}}$. If $\tilde{\mathbf{a}}$ and $\tilde{\mathbf{b}}$ differ in $l$ bits, we have:
\begin{eqnarray}\label{eq:TheoProof1}
\langle\mathbf{a} , \mathbf{b}\rangle=\frac{1}{\tilde{n}}\bigg(1\times(\tilde{n}-l)+ (-1)\times l\bigg)=\frac{\tilde{n}-2l}{\tilde{n}}
\end{eqnarray}
Moreover, $\tilde{\mathbf{b}}$ and $\tilde{\mathbf{a}}\oplus \mathbf{1}_{\tilde{n}\times 1}$ (the complement of $\tilde{\mathbf{a}}$) differ in $\tilde{n}-l$ bits, and since the three vectors $\{\tilde{\mathbf{a}},~\tilde{\mathbf{a}}\oplus \mathbf{1}_{\tilde{n}\times 1},~\tilde{\mathbf{b}}\}$ are distinct code words (from each complement couple, exactly one is chosen and thus $\tilde{\mathbf{b}}\neq\tilde{\mathbf{a}}\oplus \mathbf{1}_{\tilde{n}\times 1}$), both $l$ and $\tilde{n}-l$ must be greater than or equal to $\tilde{d}_{min}$, i.e.,:
\begin{eqnarray}\label{eq:TheoProof2}
\left\{\begin{array}{l}
l\geq\tilde{d}_{min}\\
\tilde{n}-l\geq\tilde{d}_{min}\\
\end{array}\right. &\Rightarrow& \tilde{d}_{min}\leq l\leq\tilde{n}-\tilde{d}_{min}\nonumber\\
&\Rightarrow& |\tilde{n}-2l|\leq\tilde{n}-2\tilde{d}_{min}
\end{eqnarray}
Note that $\mathbf{0}_{\tilde{n}\times 1},\mathbf{1}_{\tilde{n}\times 1}\in\mathcal{C}$ and for each code vector $\mathbf{a}$, at least one of $d(\mathbf{0}_{\tilde{n}\times 1} , \mathbf{a})$ and $d(\mathbf{1}_{\tilde{n}\times 1} , \mathbf{a})$ cannot exceed $\frac{\tilde{n}}{2}$; therefore, $\tilde{d}_{min}\leq\frac{\tilde{n}}{2}$, i.e., $\tilde{n}-2\tilde{d}_{min}\geq 0$. Combining (\ref{eq:TheoProof1}) and (\ref{eq:TheoProof2}) we have:
\begin{eqnarray}
|\langle\mathbf{a},\mathbf{b}\rangle|\leq \frac{\tilde{n}-2\tilde{d}_{min}}{\tilde{n}}
\end{eqnarray}
which proves the claim on the inner product of the columns of $\mathbf{A}$. Now let $\mathbf{B}_{\tilde{n}\times k}$ be the matrix formed by $k$ different columns of $\mathbf{A}$. According to the previous arguments, $\mathbf{B}^T\mathbf{B}$ is a $k\times k$ matrix with $1$'s on its main diagonal, while its off-diagonal elements have absolute values at most $\frac{\tilde{n}-2\tilde{d}_{min}}{\tilde{n}}$. By the Gershgorin circle theorem, every eigenvalue of $\mathbf{B}^T\mathbf{B}$ lies within $(k-1)\frac{\tilde{n}-2\tilde{d}_{min}}{\tilde{n}}$ of $1$, which yields the claimed constant $\delta_k$ $\square$
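The coherence bound at the heart of this proof can be checked numerically. The following Python sketch (not part of the formal development) builds the $7$ circular shifts of a length-$7$ pseudo-noise sequence as $\pm 1$ columns, an instance of the construction detailed in section \ref{sec:SampRec}, and verifies that all unnormalized pairwise inner products equal $-1$, i.e., $|\langle\mathbf{a}_i,\mathbf{a}_j\rangle|=\frac{1}{7}$ after normalization; the specific recurrence used for the sequence is an assumption of the sketch.

```python
from itertools import combinations

# One period of a length-7 PN (m-)sequence from a primitive degree-3
# recurrence (assumption of this sketch): a_t = a_{t-1} XOR a_{t-3}.
seq = [1, 0, 0]
while len(seq) < 7:
    seq.append(seq[-1] ^ seq[-3])

# Columns: the 7 circular shifts, with 0 mapped to -1
# (the 1/sqrt(7) normalization is deferred).
cols = [[2 * seq[(t + j) % 7] - 1 for t in range(7)] for j in range(7)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# All unnormalized pairwise inner products equal -1,
# i.e. |<a_i, a_j>| = 1/7 after normalization.
offdiag = [inner(u, v) for u, v in combinations(cols, 2)]
assert all(p == -1 for p in offdiag)

# Gershgorin: for any k columns, delta_k = (k - 1) / 7.
k = 3
delta_k = (k - 1) / 7
assert delta_k < 1
```

With this coherence, the Gershgorin argument above gives $\delta_k=\frac{k-1}{7}$, which stays below $1$ for $k\leq 7$.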
The above theorem is useful only when $\tilde{d}_{min}$ is close to $\frac{\tilde{n}}{2}$ (the denominator in the upper bound on $k$), which is not the case for common binary codes. In fact, in communication systems, parity bits are inserted to protect the main data payload, i.e., $\tilde{k}$ bits of data are followed by $\tilde{n}-\tilde{k}$ parity bits. In this case, the Singleton bound gives $\tilde{d}_{min}\leq\tilde{n}-\tilde{k}+1$; thus, to have $\tilde{d}_{min}\approx\frac{\tilde{n}}{2}$, the number of parity bits must be of the same order as the data payload, which is impractical for communication purposes. In the next section we show how codes of this type can be designed using the well-known BCH codes.
\section{BCH codes with large $\tilde{d}_{min}$}\label{sec:dminCodes}
Since the focus in this section is on the design of BCH codes with large minimum distances, we first briefly review the BCH structure.
BCH codes are a class of cyclic binary codes with $\tilde{n}=2^{\tilde{m}}-1$ which are produced by a generating polynomial $g(x)\in GF(2)[x]$ such that $g(x)|x^{2^{\tilde{m}}-1}+1$ \cite{LinCostelloBook}. According to a result in Galois theory, we know:
\begin{eqnarray}
x^{2^{\tilde{m}}-1}+1=\prod_{r\in GF(2^{\tilde{m}})\backslash\{0\}}(x-r)
\end{eqnarray}
Hence, the BCH generating polynomial can be decomposed into a product of linear factors in $GF(2^{\tilde{m}})[x]$. Let $\alpha\in GF(2^{\tilde{m}})$ be a primitive root of the field and let $\alpha^i$ be one of the roots of $g(x)$. Since $g(x)\in GF(2)[x]$, all conjugate elements of $\alpha^i$ (with respect to $GF(2)$) are also roots of $g(x)$. Again using results from Galois theory, we know that these conjugates are the distinct elements of the set $\{\alpha^{i2^{j}}\}_{j=0}^{\tilde{m}-1}$. Moreover, since $\alpha^{2^{\tilde{m}}-1}=1$, $i_1\equiv i_2 (\textrm{mod}~2^{\tilde{m}}-1)$ implies $\alpha^{i_1}=\alpha^{i_2}$, which reveals the circular behavior of the exponents.
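The conjugacy classes described above are the cyclotomic cosets $\{i,2i,4i,\dots\}$ modulo $2^{\tilde{m}}-1$; a small Python sketch (illustrative only) computes them for $\tilde{m}=4$:

```python
# Cyclotomic cosets {i, 2i, 4i, ...} mod 2^m - 1 for m = 4:
# each coset is the exponent set of one conjugacy class of alpha^i.
m = 4
n = 2**m - 1  # 15

def coset(i):
    c, j = set(), i % n
    while j not in c:
        c.add(j)
        j = (2 * j) % n  # conjugation: alpha^j -> alpha^(2j)
    return frozenset(c)

cosets = {coset(i) for i in range(n)}
assert set().union(*cosets) == set(range(n))   # the cosets partition {0,...,14}
assert all(m % len(c) == 0 for c in cosets)    # each coset size divides m
```

For $\tilde{m}=4$ this yields the five cosets $\{0\}$, $\{1,2,4,8\}$, $\{3,6,12,9\}$, $\{5,10\}$ and $\{7,14,13,11\}$, matching the circular behavior of the exponents.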
The main advantage of the BCH codes compared to other cyclic codes is their guaranteed lower bound on the minimum distance \cite{LinCostelloBook}: if $\alpha^{i_1},\dots,\alpha^{i_d}$ are different roots of $g(x)$ (not necessarily all the roots) such that $i_1,\dots,i_d$ form an arithmetic progression, then $\tilde{d}_{min}\geq d+1$.
We now return to our code design approach. We construct the desired code generating polynomials by specifying their parity check polynomial, which is defined as:
\begin{eqnarray}
h(x)\triangleq \frac{x^{2^{\tilde{m}}-1}+1}{g(x)}
\end{eqnarray}
In other words, each nonzero field element is a root of exactly one of $g(x)$ and $h(x)$. We construct $h(x)$ by specifying its roots. Let $l<\tilde{m}$ be an integer and define
\begin{eqnarray}
\mathcal{G}_{\tilde{m}}^{(l)}=\{\alpha^{0},\alpha^{1},\dots,\alpha^{2^{\tilde{m}-1}+2^{l}-1}\}
\end{eqnarray}
Note that the definition of $\mathcal{G}_{\tilde{m}}^{(l)}$ depends on the choice of the primitive element ($\alpha$). We further define $\mathcal{H}_{\tilde{m}}^{(l)}$ as the subset of $\mathcal{G}_{\tilde{m}}^{(l)}$ which is closed with respect to the conjugate operation:
\begin{eqnarray}
\mathcal{H}_{\tilde{m}}^{(l)}\triangleq \{r\in\mathcal{G}_{\tilde{m}}^{(l)}~\big|~\forall~j\in\mathbb{N}:~r^{2^j}\in\mathcal{G}_{\tilde{m}}^{(l)}\}
\end{eqnarray}
The above definition shows that if $r\in\mathcal{H}_{\tilde{m}}^{(l)}$ then its conjugate $r^{2^j}\in\mathcal{H}_{\tilde{m}}^{(l)}$. Now let us define $h(x)$:
\begin{eqnarray}
h(x)= \prod_{r\in\mathcal{H}_{\tilde{m}}^{(l)}}(x-r)
\end{eqnarray}
As discussed before, if $r$ is a root of $h(x)$, all its conjugates are also roots of $h(x)$; therefore, $h(x)\in GF(2)[x]$, which is a required condition. Moreover,
\begin{eqnarray}
1=\alpha^{0}\in\mathcal{G}_{\tilde{m}}^{(l)} &\Rightarrow& 1\in\mathcal{H}_{\tilde{m}}^{(l)}\nonumber\\
&\Rightarrow& (1+x)\big|h(x)
\end{eqnarray}
which means that the all one vector is a valid code word:
\begin{eqnarray}
&& c=[\underbrace{1,\dots,1}_{2^{\tilde{m}}-1}]^T\nonumber\\
&\Rightarrow& c(x)=1+x+\dots+x^{2^{\tilde{m}}-2}=\frac{x^{2^{\tilde{m}}-1}+1}{x+1}\nonumber\\
&\Rightarrow& x^{2^{\tilde{m}}-1}+1\big|(x^{2^{\tilde{m}}-1}+1)\frac{h(x)}{1+x}=c(x)h(x)
\end{eqnarray}
Hence, the code generated by $g(x)=\frac{x^{\tilde{n}}+1}{h(x)}$ is a symmetric code and fulfills the requirement of Theorem \ref{theo:CodingCS}. For the minimum distance of the code, note that the roots of $h(x)$ form a subset of $\mathcal{G}_{\tilde{m}}^{(l)}$; thus, all the elements in $GF(2^{\tilde{m}})\backslash\mathcal{G}_{\tilde{m}}^{(l)}$ are roots of $g(x)$:
\begin{eqnarray}
\forall~2^{\tilde{m}-1}+2^l\leq j\leq 2^{\tilde{m}}-2:~~g(\alpha^{j})=0
\end{eqnarray}
Consequently, there exists an arithmetic progression of length $2^{\tilde{m}-1}-2^l-1$ among the powers of $\alpha$ in roots of $g(x)$. As a result:
\begin{eqnarray}
\tilde{d}_{min}\geq (2^{\tilde{m}-1}-2^l-1)+1=2^{\tilde{m}-1}-2^l
\end{eqnarray}
In coding, it is usual to look for a code with maximum $\tilde{d}_{min}$ given $\tilde{n},\tilde{k}$. Here, we have designed a code with good $\tilde{d}_{min}$ for a given $\tilde{n}$ but with unknown $\tilde{k}$:
\begin{eqnarray}
\tilde{n}&=&\tilde{k}+deg\big(g(x)\big)\nonumber\\
\Rightarrow \tilde{k}&=&\tilde{n}-deg\big(g(x)\big)\nonumber\\
&=&\big(deg\big(g(x)\big)+deg\big(h(x)\big)\big)-deg\big(g(x)\big)\nonumber\\
&=&deg\big(h(x)\big)=|\mathcal{H}_{\tilde{m}}^{(l)}|
\end{eqnarray}
The following theorem reveals how $|\mathcal{H}_{\tilde{m}}^{(l)}|$ should be calculated.
\begin{theo}\label{theo:DetermineK}
With the previous terminology, $|\mathcal{H}_{\tilde{m}}^{(l)}|$ is equal to the number of binary sequences of length $\tilde{m}$ such that, if the sequence is written around a circle, between each two $1$'s there exist at least $\tilde{m}-l-1$ zeros.
\end{theo}
\textbf{Proof}. We show that there exists a one-to-one mapping between the elements of $\mathcal{H}_{\tilde{m}}^{(l)}$ and the binary sequences. Let $(b_{\tilde{m}-1},\dots,b_0)\in\{0,1\}^{\tilde{m}}$ be one of the binary sequences and let $\beta$ be the decimal number whose binary representation coincides with the sequence:
\begin{eqnarray}
\beta=(\overline{b_{\tilde{m}-1} \dots b_0})_2=\sum_{i=0}^{\tilde{m}-1}b_i2^i
\end{eqnarray}
We will show that $\alpha^{\beta}\in\mathcal{H}_{\tilde{m}}^{(l)}$. For the sake of simplicity, let us define $\beta_j$ as the decimal number whose binary representation is the sequence subjected to $j$ units of left circular shift ($\beta_0=\beta$):
\begin{eqnarray}
\beta_0&=&(\overline{b_{\tilde{m}-1} \dots b_0})_2\nonumber\\
\beta_1&=&(\overline{b_{\tilde{m}-2} \dots b_0 b_{\tilde{m}-1}})_2\nonumber\\
\beta_2&=&(\overline{b_{\tilde{m}-3} \dots b_0 b_{\tilde{m}-1} b_{\tilde{m}-2}})_2\nonumber\\
&\vdots&\nonumber\\
\beta_{\tilde{m}-1}&=&(\overline{b_0 b_{\tilde{m}-1} \dots b_{1}})_2
\end{eqnarray}
Now we have:
\begin{eqnarray}
2\beta_{j}&=&2\times (\overline{b_{\tilde{m}-1-j} \dots b_0 b_{\tilde{m}-1}\dots b_{\tilde{m}-j}})_2\nonumber\\
&=&2^{\tilde{m}}b_{\tilde{m}-1-j}+ (\overline{b_{\tilde{m}-2-j} \dots b_0 b_{\tilde{m}-1}\dots b_{\tilde{m}-j}0})_2\nonumber\\
&\equiv& \beta_{j+1}~\big(\textrm{mod}~2^{\tilde{m}}-1\big)\nonumber\\
\Rightarrow && \beta_{j}\equiv 2^j\beta~\big(\textrm{mod}~2^{\tilde{m}}-1\big)\nonumber\\
\Rightarrow && \alpha^{\beta_j}=\alpha^{2^j\beta}
\end{eqnarray}
which shows that $\{\alpha^{\beta_j}\}_j$ are conjugates of $\alpha^\beta$. To show $\alpha^\beta\in\mathcal{H}_{\tilde{m}}^{(l)}$, we should prove that all the conjugates belong to $\mathcal{G}_{\tilde{m}}^{(l)}$, or equivalently, that $0\leq\beta_j\leq 2^{\tilde{m}-1}+2^l-1$. The left inequality is clear; to prove the right inequality we consider two cases:
\begin{enumerate}
\item MSB of $\beta_j$ is zero:
\begin{eqnarray}
b_{\tilde{m}-1-j}=0\Rightarrow \beta_j<2^{\tilde{m}-1}<2^{\tilde{m}-1}+2^l-1
\end{eqnarray}
\item MSB of $\beta_j$ is one; therefore, according to the property of the binary sequences, the following $\tilde{m}-l-1$ bits are zero:
\begin{eqnarray}
b_{\tilde{m}-1-j}=1&\Rightarrow& b_{\tilde{m}-2-j}=\dots=b_{l-j}=0\nonumber\\
&\Rightarrow& \beta_j\leq 2^{\tilde{m}-1}+\sum_{t=0}^{l-1}2^t\nonumber\\
&\Rightarrow& \beta_j\leq 2^{\tilde{m}-1}+2^l-1
\end{eqnarray}
\end{enumerate}
Up to now, we have proved that each binary sequence with the above zero-spacing property can be assigned to a separate root of $h(x)$. To complete the proof, we show that if the binary representation of $\beta$ does not satisfy the property, then we have $\alpha^\beta\notin\mathcal{H}_{\tilde{m}}^{(l)}$. In fact, by circular shifts introduced in $\beta_j$, all the bits can be placed in the MSB position; thus, if the binary representation of $\beta$ does not obey the property, at least one of the $\beta_j$'s should be greater than $2^{\tilde{m}-1}+2^l-1$. This means that at least one of the conjugates of $\alpha^\beta$ does not belong to $\mathcal{G}_{\tilde{m}}^{(l)}$ $\square$
Theorem \ref{theo:DetermineK} relates the code parameter $\tilde{k}$ to a combinatorics problem. Using this relation, it is shown in Appendix \ref{app:codeK} that $|\mathcal{H}_{\tilde{m}}^{(l)}|=\mathcal{O}\bigg(\big(\frac{\tilde{m}-l}{2}+1\big)^{\frac{l}{\tilde{m}-l}}\bigg)$.
\section{Sampling and Reconstruction}\label{sec:SampRec}
In the previous sections, we presented the principles of the matrix construction. In this section, in addition to giving a stepwise instruction set, we focus on the procedure of selecting columns from the complement couples. In the second part of this section, we show that the original sparse vector can be reconstructed from the samples by simple methods such as matching pursuit.
\subsection{Matrix Construction}
Recalling the arguments of the previous section, the choice of the polynomial $g(x)$ depends on the choice of the primitive root. In addition to this degree of freedom, by Theorem \ref{theo:CodingCS}, no matter which code vectors are selected from the complement couples, the generated matrix satisfies RIP. Hence, for a given primitive element, there are $2^{2^{\tilde{k}-1}}$ possible matrix constructions (there are $2^{\tilde{k}-1}$ complement couples). Among this huge number of possibilities, some have better characteristics for signal recovery from the samples. More specifically, we look for matrices whose columns are closed under the circular shift operation: if $\mathbf{a}=[a_1,\dots,a_{\tilde{n}}]^T$ is a column of $\mathbf{A}$, then for all $1<j\leq\tilde{n}$, $\mathbf{a}_j=[a_j,a_{j+1},\dots,a_{\tilde{n}},a_1,\dots,a_{j-1}]^T$ is also a column of $\mathbf{A}$.
The key point is that BCH codes are a subset of cyclic codes, i.e., if $\mathbf{c}_{\tilde{n}\times 1}$ is a code vector, all its circular shifts are also valid code vectors. Thus, if we are careful in selecting from the complement couples, the generated sampling matrix will also have the cyclic property. For this selection, note that if $\mathbf{a}_{\tilde{n}\times 1},\mathbf{b}_{\tilde{n}\times 1}$ is a complement couple and $\mathbf{c}_{\tilde{n}\times 1}$ is a circularly shifted version of $\mathbf{a}_{\tilde{n}\times 1}$, the overall parities (sums of the elements in mod $2$) of $\mathbf{a}_{\tilde{n}\times 1}$ and $\mathbf{b}_{\tilde{n}\times 1}$ differ (each code vector has $2^{\tilde{m}}-1$ elements, which is an odd number), while $\mathbf{a}_{\tilde{n}\times 1}$ and $\mathbf{c}_{\tilde{n}\times 1}$ have the same parity. Therefore, if we discard the code vectors with odd (even) parity from the set of all code vectors, we are left with a set half the size of the original such that from each complement couple exactly one vector is selected, while the set is still closed under the circular shift operation. The selection algorithm is as follows:
\begin{enumerate}
\item For a given $k$ (compressed sensing parameter), let $i=\lceil\log_2(k)\rceil$ and choose $\tilde{m}\geq i$ (the number of compressed samples will be $m=2^{\tilde{m}}-1$).
\item \label{step:SetDef} Let $\mathcal{H}_{seq}$ be the set of all binary sequences of length $\tilde{m}$ such that $1$'s are circularly spaced with at least $i$ zeros. In addition, let $\mathcal{H}_{dec}$ be the set of decimal numbers such that their binary representation is a sequence in $\mathcal{H}_{seq}$.
\item Choose $\alpha$ as one of the primitive roots of $GF(2^{\tilde{m}})$ and define:
\begin{eqnarray}
\mathcal{H}=\{\alpha^r~\big|~r\in\mathcal{H}_{dec}\}
\end{eqnarray}
\item Define the parity check and code generating polynomials as:
\begin{eqnarray}
h(x)=\prod_{r\in\mathcal{H}}(x-r)
\end{eqnarray}
and
\begin{eqnarray}
g(x)=\frac{x^{2^{\tilde{m}}-1}+1}{h(x)}
\end{eqnarray}
\item Let $\tilde{\mathbf{A}}_{(2^{\tilde{m}}-1)\times(2^{deg(h)-1})}$ be the binary matrix composed of even parity code vectors as its columns, i.e., if columns are considered as polynomial coefficients (in $GF(2)[x]$), each polynomial should be divisible by $(x+1)g(x)$ (the additional factor of $x+1$ implies the even parity).
\item Replace all the zeros in $\tilde{\mathbf{A}}$ by $-1$ and normalize each column to obtain the final compressed sensing matrix ($\mathbf{A}_{(2^{\tilde{m}}-1)\times(2^{deg(h)-1})}$).
\end{enumerate}
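As a toy illustration of the parity-based selection (a Python sketch; the $[7,4]$ cyclic Hamming code with $g(x)=x^3+x+1$ is used here only as a convenient small symmetric cyclic code, not as one of the codes designed above):

```python
# Parity-based selection on the cyclic [7,4] Hamming code,
# g(x) = x^3 + x + 1; polynomials are stored as integer bit masks.

def polymul_gf2(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g = 0b1011                                          # x^3 + x + 1
codewords = {polymul_gf2(g, u) for u in range(16)}  # all u(x)g(x), deg < 7
assert 0b1111111 in codewords                       # the code is symmetric

def weight(c):
    return bin(c).count("1")

even = {c for c in codewords if weight(c) % 2 == 0}
assert len(even) == len(codewords) // 2             # one per complement couple
assert all((c ^ 0b1111111) not in even for c in even)

def shift(c):                                       # one circular shift (length 7)
    return ((c << 1) | (c >> 6)) & 0b1111111

assert all(shift(c) in even for c in even)          # closed under circular shifts
```

As claimed, keeping only the even-parity codewords retains exactly one vector from each complement couple while preserving closure under circular shifts.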
For a simple example, consider the case $\tilde{m}=i$. It is easy to check that the number of $1$'s in each of the binary sequences in step \ref{step:SetDef} cannot exceed one. Therefore, we have $\mathcal{H}_{dec}=\{0,2^0,2^1,\dots,2^{i-1}\}$. This means that $h(x)$, apart from the factor $(x+1)$, is the minimal polynomial of $\alpha$ (the primitive root). Since for code generation we use $(x+1)g(x)$ instead of $g(x)$, the effective $h(x)$ is the minimal polynomial of $\alpha$, which is a primitive polynomial. In this case, the matrix $\tilde{\mathbf{A}}$ is the $(2^i-1)\times(2^i-1)$ square matrix whose columns are circularly shifted versions of the Pseudo-Noise Sequence (PNS) generated by the primitive polynomial (the absolute value of the inner product of any two distinct columns of $\mathbf{A}$ is exactly $\frac{1}{2^i-1}$).
Table \ref{tab:pm1Matrixi3} summarizes some of the parity check polynomials for $i=3$ (useful for $k<8$). Also, Fig. \ref{fig:degH} shows the degree of $h(x)$ for some of the choices of $\tilde{m}$ and $i$.
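The counting in step \ref{step:SetDef} can be verified by brute force; the Python sketch below enumerates the circularly constrained binary sequences and, for $i=3$, reproduces the degrees $5$, $7$, $13$ and $26$ of the parity check polynomials listed in Table \ref{tab:pm1Matrixi3}.

```python
def count_sequences(m, i):
    """Length-m binary sequences in which, circularly, every two 1's
    are separated by at least i zeros (step 2 of the algorithm)."""
    count = 0
    for mask in range(2**m):
        ones = [t for t in range(m) if (mask >> t) & 1]
        if len(ones) <= 1:           # zero or one 1: always admissible
            count += 1
            continue
        gaps = [(ones[(j + 1) % len(ones)] - ones[j]) % m - 1
                for j in range(len(ones))]
        if min(gaps) >= i:           # every circular gap has >= i zeros
            count += 1
    return count

# For i = 3 the counts match the degrees of h(x) in the table.
assert [count_sequences(m, 3) for m in (4, 6, 8, 10)] == [5, 7, 13, 26]
```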
\begin{table}[t]
\centering
\begin{tabular}{|c|c|}
\hline
$\tilde{m}$ & $h(x)$\\
\hline\hline
$4$ & $x^5+x^4+x^2+1$\\
\hline
$6$ & $x^7+x^6+x^2+1$\\
\hline
$8$ & $x^{13}+x^{12}+x^{10}+x^9+x^8+x^4+x^3+1$\\
\hline
$10$ & $x^{26}+x^{25}+x^{24}+x^{20}+x^{16}+x^{14}+x^{13}+x^{12}$\\
& $+x^{10}+x^9+x^7+x^5+x^4+x^3+x+1$\\
\hline
\end{tabular}
\caption{Parity check polynomials for different values of $\tilde{m}$ when $i=3$.}
\label{tab:pm1Matrixi3}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{deghxs.eps}
\caption{Degree of $h(x)$ for different values of $\tilde{m}$ and $i$.}
\label{fig:degH}
\end{figure}
\subsection{Reconstruction from the samples}
Matching Pursuit is one of the simplest methods for the recovery of sparse signals from linear projections. Here we show that this method can exactly recover the sparse signal from noiseless samples.
Let $\mathbf{A}_{m\times n}$ and $\mathbf{s}_{n\times 1}$ be the sampling matrix and the $k$-sparse signal vector, respectively. The sampling process is defined by:
\begin{eqnarray}
\mathbf{y}_{m\times 1}=\mathbf{A}_{m\times n}\cdot\mathbf{s}_{n\times 1}
\end{eqnarray}
For unique reconstruction of $\mathbf{s}_{n\times 1}$ from the samples $\mathbf{y}_{m\times 1}$, it is sufficient that the sampling matrix $\mathbf{A}_{m\times n}$ satisfies RIP of order $2k$ \cite{Candes2008}. In this section, we show that if $\mathbf{A}_{m\times n}$ is constructed as described in the previous section and satisfies RIP of order $2k$, the matching pursuit method can be used for perfect reconstruction. In addition, due to the circular structure of the columns of $\mathbf{A}_{m\times n}$, the computational complexity can be decreased below that of ordinary matching pursuit.
Let $\{i_1,\dots,i_k\}\subset\{1,\dots,n\}$ be the nonzero locations in $\mathbf{s}_{n\times 1}$; thus, we have:
\begin{eqnarray}
\mathbf{y}_{m\times 1}=\mathbf{A}\cdot\mathbf{s}=\sum_{j=1}^k s_{i_j}\mathbf{a}_{i_j}
\end{eqnarray}
where $\mathbf{a}_{i}$ denotes the $i^{th}$ column of $\mathbf{A}$. In the matching pursuit method, in order to find the nonzero locations in $\mathbf{s}$, the inner products of the sample vector ($\mathbf{y}$) with all the columns of $\mathbf{A}$ are evaluated, and the index of the maximum absolute value is chosen as the most probable nonzero location. Here, we show that the index associated with the maximum value is always a nonzero location. Without loss of generality, assume $|s_{i_1}|\geq|s_{i_2}|\geq\dots\geq|s_{i_k}|$. We then have:
\begin{eqnarray}\label{eq:Biggerineq}
\big|\langle\mathbf{y},\mathbf{a}_{i_1}\rangle\big|&=&\big|\sum_{j=1}^{k}s_{i_j} \langle\mathbf{a}_{i_j},\mathbf{a}_{i_1}\rangle\big|\nonumber\\
&\geq& |s_{i_1}|\langle\mathbf{a}_{i_1},\mathbf{a}_{i_1}\rangle- \sum_{j=2}^{k}|s_{i_j}||\langle\mathbf{a}_{i_j},\mathbf{a}_{i_1}\rangle|\nonumber\\
&>& |s_{i_1}| - \frac{1}{2k-1}\sum_{j=2}^{k}|s_{i_j}|\nonumber\\
&\geq& |s_{i_1}| - \frac{k-1}{2k-1}|s_{i_1}|=\frac{k}{2k-1}|s_{i_1}|
\end{eqnarray}
Now assume $l\in\{1,\dots,n\}\backslash \{i_1,\dots,i_k\}$:
\begin{eqnarray} \label{eq:Smallerineq}
\big|\langle\mathbf{y},\mathbf{a}_{l}\rangle\big|&=&\big|\sum_{j=1}^{k}s_{i_j} \langle\mathbf{a}_{i_j},\mathbf{a}_{l}\rangle\big|\nonumber\\
&\leq& \sum_{j=1}^{k}|s_{i_j}||\langle\mathbf{a}_{i_j},\mathbf{a}_{l}\rangle|\nonumber\\
&<& \frac{1}{2k-1}\sum_{j=1}^{k}|s_{i_j}|\leq\frac{k}{2k-1}|s_{i_1}|
\end{eqnarray}
Combining (\ref{eq:Biggerineq}) and (\ref{eq:Smallerineq}), we get:
\begin{eqnarray}
\big|\langle\mathbf{y},\mathbf{a}_{l}\rangle\big|~<~\frac{k}{2k-1}|s_{i_1}|~<~ \big|\langle\mathbf{y},\mathbf{a}_{i_1}\rangle\big|
\end{eqnarray}
Hence, the largest inner product is obtained either with $\mathbf{a}_{i_1}$ or with one of the other $\mathbf{a}_{i_j}$'s. Therefore, in the noiseless case, the matching pursuit algorithm never selects a zero (off-support) location, and finally we reconstruct the original sparse signal perfectly.
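The support-identification step of this argument can be illustrated numerically. The Python sketch below reuses the $7$-column $\pm 1$ matrix of circular PN-sequence shifts (with a hypothetical $3$-sparse coefficient vector chosen for illustration; the coherence is $\frac{1}{7}<\frac{1}{2k-1}$ for $k=3$, so the proof applies) and checks that the largest correlation indeed falls on the true support.

```python
# Same 7-column +/-1 matrix of circular PN-sequence shifts as before.
seq = [1, 0, 0]
while len(seq) < 7:
    seq.append(seq[-1] ^ seq[-3])
cols = [[2 * seq[(t + j) % 7] - 1 for t in range(7)] for j in range(7)]

# Hypothetical 3-sparse signal: support {0, 3, 5} with values 3, -2, 1.
support, coeffs = [0, 3, 5], [3.0, -2.0, 1.0]
y = [sum(c * cols[i][t] for i, c in zip(support, coeffs)) for t in range(7)]

# The column with the largest |<y, a_i>| lies on the true support.
corr = [abs(sum(yt * at for yt, at in zip(y, col))) for col in cols]
best = max(range(7), key=lambda i: corr[i])
assert best in support
```

Here the off-support correlations all have absolute value $2$ (unnormalized), far below the winning value on the support, in line with inequalities (\ref{eq:Biggerineq}) and (\ref{eq:Smallerineq}).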
In each iteration of the matching pursuit algorithm, the inner product of $\mathbf{y}_{m\times 1}$ with all the columns of $\mathbf{A}_{m\times n}$ needs to be calculated. Each inner product requires $m$ multiplications and $m-1$ additions. Here, the circular property of the columns of $\mathbf{A}$ becomes useful. Let $\mathbf{a}$ be one of the columns of $\mathbf{A}$ and $\mathbf{a}^{(j)}$ be its $j^{th}$ circularly shifted version. Since $\{\mathbf{a}^{(j)}\}_j$ are all columns of $\mathbf{A}$, the inner product $\langle\mathbf{a}^{(j)},\mathbf{y}\rangle$ has to be calculated for all $j$. Let $\{\mathbf{a}^{(1)} , \mathbf{a}^{(2)} , \dots , \mathbf{a}^{(\mu)}\}$ be the distinct elements of $\{\mathbf{a}^{(j)}\}_j$ (obviously $\mu\leq m$ and, more precisely, $\mu|m$). These inner products require $\mu m$ multiplications and $\mu (m-1)$ additions if calculated directly.
An alternative approach for evaluating these values is to employ the Discrete Fourier Transform (DFT) or its fast implementation (FFT). The key point in this approach is that the inner products can be found through the circular convolution of $\mathbf{y}$ and $\mathbf{a}$, i.e.,
\begin{eqnarray}
\langle\mathbf{y},\mathbf{a}^{(j)}\rangle=\mathbf{y}\varoast_{m} \mathbf{a}\big|_{j}
\end{eqnarray}
where $\varoast_m$ represents the circular convolution with period $m$. It is well-known that the circular convolution can be easily calculated using DFT: if $\mathbf{y}_f$ and $\mathbf{a}_f$ denote the DFT of $\mathbf{y}$ and $\mathbf{a}$, respectively, we have:
\begin{eqnarray}
IDFT\{\mathbf{y}_f\odot\mathbf{a}_f\}=\big[\mathbf{y}\varoast_m \mathbf{a}\big|_{0},\dots,\mathbf{y}\varoast_m \mathbf{a}\big|_{m-1}\big]
\end{eqnarray}
where $\mathbf{v}_{m\times 1}\odot\mathbf{u}_{m\times 1}\triangleq[v_1u_1,\dots,v_mu_m]^T$. For evaluating the inner products in this way, $\mathbf{y}_f$ has to be calculated only once using the DFT. Thus, excluding the calculation of $\mathbf{y}_f$ (which is done only once), the inner products of $\mathbf{y}$ with $\{\mathbf{a}^{(j)}\}_j$ require one DFT, one IDFT and $m$ multiplications. Since only $\mu$ distinct circular shifts of $\mathbf{a}$ are possible, $\mathbf{a}$ is periodic with period $\mu$ and at most $\mu$ coefficients of $\mathbf{a}_f$, located at equidistant positions, are nonzero; hence, a $\mu$-point DFT (and consequently IDFT) of $\mathbf{a}_{m\times 1}$, rather than the general $m$-point DFT, is adequate.
For the $\mu$-point DFT of $\mathbf{y}$, we can simply down-sample the already evaluated $m\times 1$ vector $\mathbf{y}_f$ (note that $\mu|m$), so no extra $\mu$-point DFT is needed. Employing the FFT version, we require $2\mu\lceil\log_2\mu\rceil$ multiplications and $m-\mu+2\mu\lceil\log_2\mu\rceil$ additions per $\mu$-point DFT or IDFT. Comparing the number of multiplications required for the above $\mu$ inner products reveals the efficiency of the DFT approach; i.e., the computational complexity of reconstructing the signal from the samples is less than that for general matrices. It should be emphasized that this reduction in computational complexity is the result of the circular structure of the columns.
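The DFT shortcut can be sketched in a few lines of Python (an $\mathcal{O}(m^2)$ DFT stands in for the FFT, and the shift convention $\mathbf{a}^{(j)}[t]=\mathbf{a}[(t+j)\bmod m]$ is an assumption of the sketch): the inner products of $\mathbf{y}$ with all circular shifts of $\mathbf{a}$ are obtained at once as the inverse DFT of $\mathbf{a}_f\odot\overline{\mathbf{y}_f}$.

```python
import cmath

m = 7
seq = [1, 0, 0]
while len(seq) < m:
    seq.append(seq[-1] ^ seq[-3])
a = [2 * b - 1 for b in seq]                    # one +/-1 column
y = [0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 1.0]       # an arbitrary sample vector

# Direct inner products with all circular shifts a^{(j)}[t] = a[(t+j) % m].
direct = [sum(y[t] * a[(t + j) % m] for t in range(m)) for j in range(m)]

def dft(v, sign=-1):                            # O(m^2) DFT; sign=+1 for inverse
    return [sum(x * cmath.exp(sign * 2j * cmath.pi * s * t / m)
                for t, x in enumerate(v)) for s in range(m)]

yf, af = dft(y), dft(a)
prod = [A * Y.conjugate() for Y, A in zip(yf, af)]   # a_f (.) conj(y_f)
via_dft = [x.real / m for x in dft(prod, sign=+1)]   # inverse DFT
assert max(abs(p - q) for p, q in zip(direct, via_dft)) < 1e-9
```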
\section{Matrices with $\{0,1,-1\}$ Elements}\label{sec:ThreeElement}
We have presented a method to generate RIP-fulfilling matrices with $\pm 1$ elements. In this section, we show that the matrices introduced in \cite{DeVore2007} can be improved using the technique introduced in this paper.
In \cite{DeVore2007}, in contrast to this paper, binary compressed sensing matrices are considered. The main difficulty in designing such matrices is that the columns should be (almost) normalized, which means that, prior to normalization, the number of $1$'s in each column is fixed (all matrix elements are scaled with the same coefficient for normalization). In \cite{DeVore2007}, $p^2\times p^{r+1}$ binary matrices are introduced such that in each column, exactly $p$ elements are equal to $1$ (equal to $\frac{1}{\sqrt{p}}$ after normalization) and the inner product of any two columns is less than or equal to $r$ ($\frac{r}{p}$ after normalization). Here $p$ is a power of a prime integer; the matrix construction is based on polynomials over $GF(p)$.
It is evident that by changing some of the $1$'s in the aforementioned matrix into $-1$, the norm of the columns does not change; however, the inner products do change. To show how we can benefit from this feature, let us assume that $p=2^i$; thus, there are $2^i$ nonzero elements in each column. We construct a new matrix from the original binary matrix as follows: we repeat each column $2^i$ times and then change the signs of the nonzero elements in the replicas in such a way that these nonzero elements form a Walsh-Hadamard matrix. In other words, for each column, there are $2^i$ columns (including itself) that have the same pattern of nonzero elements. Moreover, the nonzero elements of these semi-replica vectors are different columns of the Walsh-Hadamard matrix. Thus, the semi-replica vectors are orthogonal, and the absolute value of the inner product of two vectors with different nonzero patterns is upper-bounded by $r$ (the maximum possible value in the original matrix). Hence, the new matrix still satisfies the RIP condition with the same $k$ and $\delta_k$.
Although we have expanded the matrix with this trick, the change is negligible when the orders of the matrix sizes are considered ($p^2\times p^{r+1}$ is expanded to $p^2\times p^{r+2}$). In fact, the orthogonality of the semi-replicas is not a necessary condition; we only need their inner products not to exceed $r$ in absolute value. This shows that instead of the Walsh-Hadamard matrix, we can use other $\pm 1$ matrices with more columns (and the same number of rows) whose columns are almost orthogonal (inner products less than $r$). This is the case for the matrices introduced in the previous sections.
In order to mathematically describe the procedure, we need to define an operation. Let $\mathbf{s}$ be a $\beta\times 1$ binary vector with exactly $\alpha$ elements of $1$ in locations $r_1,\dots,r_{\alpha}\in\{1,2,\dots,\beta\}$. Also, let $\mathbf{x}_{\alpha\times 1}=[x_1,\dots,x_{\alpha}]^T$ be an arbitrary vector. We define $\mathbf{y}_{\beta\times 1}=\mu(\mathbf{s},\mathbf{x})$ as:
\begin{eqnarray}
\left\{\begin{array}{llll}
\forall~1\leq j\leq \alpha: & y_{r_j} & =x_{j}\\
\forall~j\notin\{r_1,\dots,r_{\alpha}\}:& y_j & =0
\end{array}\right.
\end{eqnarray}
From the above definition, we can see:
\begin{eqnarray}
\langle\mu(\mathbf{s},\mathbf{x}_1)\;,\;\mu(\mathbf{s},\mathbf{x}_2)\rangle=\langle\mathbf{x}_1,\mathbf{x}_2\rangle
\end{eqnarray}
Furthermore, if the elements of both $\mathbf{x}_1,\mathbf{x}_2$ lie in the closed interval $[-1,1]$, we have:
\begin{eqnarray}
\big|\langle\mu(\mathbf{s}_1,\mathbf{x}_1)\;,\;\mu(\mathbf{s}_2,\mathbf{x}_2)\rangle\big|\leq\langle\mathbf{s}_1,\mathbf{s}_2\rangle
\end{eqnarray}
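A minimal Python sketch of the embedding $\mu(\cdot,\cdot)$ and the two inner-product properties above (the vectors used are arbitrary illustrative choices):

```python
# mu(s, x): place the entries of x at the 1-positions of s, zeros elsewhere.
def mu(s, x):
    it = iter(x)
    return [next(it) if b else 0.0 for b in s]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

s1, s2 = [1, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 0]   # binary patterns, 3 ones each
x1, x2 = [0.5, -1.0, 0.25], [1.0, 0.5, -0.5]      # entries in [-1, 1]

# Same pattern: the inner product is preserved exactly.
assert abs(inner(mu(s1, x1), mu(s1, x2)) - inner(x1, x2)) < 1e-12
# Different patterns: bounded in absolute value by <s1, s2>.
assert abs(inner(mu(s1, x1), mu(s2, x2))) <= inner(s1, s2)
```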
For the matrix construction, let $\tilde{m}$ be an integer such that $p=2^{\tilde{m}}-1$ is a prime (the primes of this form are called Mersenne primes). Let $k<p$ be the required order of the RIP condition and let:
\begin{eqnarray}
r=\big\lfloor\frac{p}{k}\big\rfloor~~,~~i=\lceil\log_2 k\rceil
\end{eqnarray}
Also let $\mathbf{S}_{p^2\times p^{r+1}}=[\mathbf{s}_1~\dots~\mathbf{s}_{p^{r+1}}]$ be the binary RIP-fulfilling matrix constructed as in \cite{DeVore2007} and $\mathbf{X}_{p\times 2^{\tilde{k}}}=[\mathbf{x}_1~\dots~\mathbf{x}_{2^{\tilde{k}}}]$ ($\tilde{k}=|\mathcal{H}_{\tilde{m}}^{(\tilde{m}-i)}|$ with the previous terminology) be the $\pm 1$ matrix introduced in the previous sections (we further normalize the columns of these matrices). We construct a new $p^2\times (p^{r+1}\cdot 2^{\tilde{k}})$ matrix with elements in $\{0,1,-1\}$ by combining these two matrices:
\begin{eqnarray}
\mathbf{A}=[\mu(\mathbf{s}_i,\mathbf{x}_j)]_{i,j}
\end{eqnarray}
Employing the same approach as before, we show that $\mathbf{A}$ satisfies the RIP condition of order $k$; i.e., we show that the inner product of two different columns of $\mathbf{A}$ cannot exceed $\frac{1}{k-1}$ in absolute value, while each column is normalized:
\begin{eqnarray}
\langle~\mu(\mathbf{s}_i,\mathbf{x}_j)~,~\mu(\mathbf{s}_i,\mathbf{x}_j)~ \rangle=\langle \mathbf{x}_j , \mathbf{x}_j\rangle = 1
\end{eqnarray}
To study the inner product of $\mu(\mathbf{s}_{i_1},\mathbf{x}_{j_1})$ and $\mu(\mathbf{s}_{i_2},\mathbf{x}_{j_2})$, we consider two cases:
\begin{enumerate}
\item $i_1=i_2$. In this case, since $\mathbf{s}_{i_1}=\mathbf{s}_{i_2}$, we have:
\begin{eqnarray}\label{eq:IneqX}
\big|\langle~\mu(\mathbf{s}_{i_1},\mathbf{x}_{j_1})~,~\mu(\mathbf{s}_{i_2},\mathbf{x}_{j_2})~ \rangle\big|&=&\big|\langle~\mathbf{x}_{j_1}~,~\mathbf{x}_{j_2}~ \rangle\big|\nonumber\\
&<&\frac{1}{k-1}
\end{eqnarray}
\item $i_1\neq i_2$ and therefore, $\mathbf{s}_{i_1}\neq\mathbf{s}_{i_2}$; since the elements of both $\mathbf{x}_{j_1}$ and $\mathbf{x}_{j_2}$ lie in $[-1,1]$, we have:
\begin{eqnarray}\label{eq:IneqS}
\big|\langle~\mu(\mathbf{s}_{i_1},\mathbf{x}_{j_1})~,~\mu(\mathbf{s}_{i_2},\mathbf{x}_{j_2})~ \rangle\big|&\leq&\big|\langle~\mathbf{s}_{i_1}~,~\mathbf{s}_{i_2}~ \rangle\big|\nonumber\\
&<&\frac{1}{k-1}
\end{eqnarray}
\end{enumerate}
Inequalities (\ref{eq:IneqX}) and (\ref{eq:IneqS}) hold due to the RIP-fulfilling structure of the matrices $\mathbf{X}$ and $\mathbf{S}$.
Hence, the claimed property of the inner products of the columns in $\mathbf{A}$ is proved. Consequently, $\mathbf{A}$ obeys the RIP condition of order $k$.
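The combination step of Eq.~(\ref{eq:IneqX})'s proof can be sketched numerically. The matrices $\mathbf{S}$ and $\mathbf{X}$ below are small random stand-ins (not actual DeVore/BCH constructions), so only the structural facts used in the proof are checked: columns of $\mathbf{A}$ have unit norm, equal supports reduce to inner products of the $\mathbf{x}_j$, and distinct supports are bounded by the support overlap.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, alpha = 16, 4          # toy support length and support weight

# Toy stand-ins: binary support columns s_i and normalized +-1 columns x_j.
S = np.zeros((beta, 3))
for i in range(3):
    S[rng.choice(beta, alpha, replace=False), i] = 1
X = rng.choice([-1.0, 1.0], size=(alpha, 4)) / np.sqrt(alpha)

def mu(s, x):
    y = np.zeros(len(s)); y[s.astype(bool)] = x; return y

# A = [mu(s_i, x_j)]_{i,j}: one column per (support, sign-pattern) pair.
cols = [(i, j) for i in range(S.shape[1]) for j in range(X.shape[1])]
A = np.column_stack([mu(S[:, i], X[:, j]) for i, j in cols])

# Each column has unit norm since the x_j are normalized.
assert np.allclose(np.linalg.norm(A, axis=0), 1.0)

for p, (i1, j1) in enumerate(cols):
    for q, (i2, j2) in enumerate(cols):
        if p == q:
            continue
        ip = abs(A[:, p] @ A[:, q])
        if i1 == i2:   # case 1: same support -> inner product of x columns
            assert np.isclose(ip, abs(X[:, j1] @ X[:, j2]))
        else:          # case 2: entries +-1/sqrt(alpha) -> support overlap bound
            assert ip <= (S[:, i1] @ S[:, i2]) / alpha + 1e-12
```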
\section{Conclusion}\label{sec:Conclusion}
Despite the enormous literature on random sampling matrices for compressed sensing, deterministic designs are not well studied. In this paper, we introduce a new connection between coding theory and RIP-fulfilling matrices. In the new design, we replace the zeros in binary linear code vectors by $-1$ and use the resulting vectors as the columns of the sampling matrix in compressed sensing. The advantage of these matrices, in addition to their deterministic and known structure, is the simplicity of the sampling process; real/complex entries in the sampling matrix increase the computational complexity of the sampler as well as the required bit-precision for storing the samples. The linear codes for this purpose should have certain desired characteristics; the existence of such linear codes is proved by explicitly introducing binary BCH codes. One of the features of these matrices is that their samples can be easily decoded (using the matching pursuit method) to recover the original sparse vector, and due to the circular structure of the columns, the computational complexity of recovery can be reduced. These $\pm 1$ matrices are further extended to matrices with elements in $\{0,1,-1\}$; this extension is achieved by combining the $\pm 1$ matrices introduced in this paper with DeVore's binary matrices. Although the generated matrices improve the realizable size of RIP-constrained matrices, the bound predicted by random matrices has not yet been achieved.
\appendices
\section{Evaluation of $\tilde{k}$}\label{app:codeK}
In Theorem \ref{theo:DetermineK}, we showed that $\tilde{k}$ is equal to the number of binary sequences of length $\tilde{m}$ such that no two $1$s are separated by fewer than $\tilde{m}-l-1$ zeros (in the circular sense). To evaluate this number, let us define $\tau^{(a)}_{b}$ as the number of binary sequences of length $b$ such that, if the sequence is placed around a circle, there are at least $a$ zeros between each two $1$'s. In addition, let $\kappa_{b}^{(a)}$ be the number of binary sequences whose $1$'s are spaced at least $a$ zeros apart (the circular property is no longer required for $\kappa_{b}^{(a)}$). We first calculate $\kappa_{b}^{(a)}$ and then show the connection between $\kappa_{b}^{(a)}$ and $\tau_{b}^{(a)}$.
There are two kinds of binary sequences counted in $\kappa_{b}^{(a)}$:
\begin{enumerate}
\item The last bit in the sequence is $0$; by omitting this bit, we obtain a sequence of length $b-1$ with the same property. Conversely, each binary sequence of length $b-1$ with the above property can be padded with a $0$ while still satisfying the property required to be counted in $\kappa_{b}^{(a)}$. Therefore, there are $\kappa_{b-1}^{(a)}$ binary sequences of this type.
\item The last bit in the sequence is $1$; this means that the last $a+1$ bits of the sequence are $\underbrace{0,\dots,0}_{a},1$. Similar to the above case, each binary sequence of length $b-a-1$ counted in $\kappa_{b-a-1}^{(a)}$ can be padded by the block $\underbrace{0,\dots,0}_{a},1$ to produce a sequence included in $\kappa_{b}^{(a)}$. Thus, there are $\kappa_{b-a-1}^{(a)}$ binary sequences of this type.
\end{enumerate}
In summary, we have the following recursive equation:
\begin{eqnarray}\label{eq:DifferenceEq}
\kappa_{b}^{(a)}=\kappa_{b-1}^{(a)}+\kappa_{b-a-1}^{(a)}
\end{eqnarray}
Since for $b\leq a+1$ there can be at most one $1$ in the binary sequence, we have:
\begin{eqnarray}\label{eq:InitialCond}
1\leq b\leq a+1:~\kappa_{b}^{(a)}=b+1
\end{eqnarray}
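The recursion (\ref{eq:DifferenceEq}) together with the initial conditions (\ref{eq:InitialCond}) is easy to validate against a brute-force count; the sketch below (illustrative only, for small $a$ and $b$) enumerates all length-$b$ binary sequences.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def kappa(b, a):
    """Number of length-b binary sequences whose 1's are separated by at
    least a zeros (non-circular), via the recursion and initial conditions."""
    if b == 0:
        return 1
    if b <= a + 1:
        return b + 1
    return kappa(b - 1, a) + kappa(b - a - 1, a)

def kappa_brute(b, a):
    """Direct enumeration of all 2**b sequences (only feasible for small b)."""
    def ok(seq):
        ones = [i for i, v in enumerate(seq) if v]
        return all(ones[t + 1] - ones[t] > a for t in range(len(ones) - 1))
    return sum(ok(seq) for seq in product((0, 1), repeat=b))

for a in (1, 2, 3):
    for b in range(1, 12):
        assert kappa(b, a) == kappa_brute(b, a)
# a = 1 forbids adjacent 1's, so kappa follows a Fibonacci-type sequence.
assert [kappa(b, 1) for b in range(1, 7)] == [2, 3, 5, 8, 13, 21]
```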
From (\ref{eq:DifferenceEq}), the last initial condition ($\kappa_{a+1}^{(a)}=a+2$) is equivalent to $\kappa_{0}^{(a)}=1$. If we define the one-sided $\mathcal{Z}$-transform of $\kappa_{b}^{(a)}$ as follows
\begin{eqnarray}
\kappa^{(a)}(z)=\sum_{b=0}^{\infty}\kappa_{b}^{(a)}z^{-b},
\end{eqnarray}
it is not hard to check that:
\begin{eqnarray}
\kappa^{(a)}(z)=\frac{z}{z-1}\cdot\frac{z^{a+1}-1}{z^{a+1}-z^a-1}
\end{eqnarray}
Therefore, for $b\gg 1$, the growth rate of $\kappa_{b}^{(a)}$ with respect to $b$ has the same order as $\gamma^b$, where $\gamma$ is the largest (in absolute value) root of $f(z)=z^{a+1}-z^a-1$. Since $f(1)\cdot f(2)<0$, there is a real root in $(1~,~2)$; let us denote this root by $\gamma$. In fact, $\gamma$ is the largest root of $f(z)$ (we do not prove this; however, if $f(z)$ had a larger root, the growth rate of $\kappa_{b}^{(a)}$ would exceed $\gamma^b$):
\begin{eqnarray}
1<\gamma <2~~,~~~ f(\gamma)=\gamma^{a+1}-\gamma^a-1=0
\end{eqnarray}
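Since $f(1)=-1<0$ and $f(2)=2^a-1>0$, the root $\gamma$ can be located by simple bisection, and the ratio $\kappa_{b+1}^{(a)}/\kappa_{b}^{(a)}$ indeed approaches $\gamma$; the sketch below checks this numerically for a few values of $a$ (the bisection routine and tolerances are illustrative choices).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def kappa(b, a):
    # Recursion (eq:DifferenceEq) with initial conditions (eq:InitialCond).
    if b == 0:
        return 1
    if b <= a + 1:
        return b + 1
    return kappa(b - 1, a) + kappa(b - a - 1, a)

def gamma(a, iters=200):
    """Real root of f(z) = z**(a+1) - z**a - 1 in (1, 2), by bisection
    (f(1) = -1 < 0 and f(2) = 2**a - 1 > 0)."""
    f = lambda z: z ** (a + 1) - z ** a - 1
    lo, hi = 1.0, 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# The growth rate of kappa_b matches gamma**b: successive ratios converge.
for a in (1, 2, 3):
    assert abs(kappa(60, a) / kappa(59, a) - gamma(a)) < 1e-6
```

For $a=1$, $f(z)=z^2-z-1$, so $\gamma$ is the golden ratio, consistent with the Fibonacci-type growth of $\kappa_b^{(1)}$.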
Since $\gamma>1$ we have $\gamma^{a+1}>1$. For the sake of simplicity, let us define:
\begin{eqnarray}
\delta\triangleq \gamma^{a+1} - 1
\end{eqnarray}
We thus have:
\begin{eqnarray}\label{eq:rootineq}
&&(1+\delta) - (1+\delta)^{\frac{a}{a+1}}=1\nonumber\\
&\Rightarrow& (1+\delta)^{\frac{a}{a+1}}\bigg((1+\delta)^{\frac{1}{a+1}}-1\bigg)=1\nonumber\\
&\Rightarrow& (1+\delta)^{\frac{a}{a+1}}\big((1+\delta)-1\big)=\sum_{j=0}^{a}(1+\delta)^{\frac{j}{a+1}}\nonumber\\
&\Rightarrow& \delta(1+\delta)^{\frac{a}{a+1}}\geq \sum_{j=0}^{a} 1+\frac{j}{a+1}\delta\nonumber\\
&\Rightarrow& \delta(1+\delta)\geq\frac{a}{2}\delta+a+1\nonumber\\
&\Rightarrow& \big(\delta -\frac{a-2}{4}\big)^2\geq\frac{(a-2)^2}{16}+a+1>\bigg(\frac{a+4}{4}\bigg)^2\nonumber\\
&\Rightarrow& \delta>\frac{a+1}{2}~~\Rightarrow~~ 1+\delta>\frac{a+3}{2}\nonumber\\
&\Rightarrow& \gamma>\bigg(\frac{a+3}{2}\bigg)^{\frac{1}{a+1}}
\end{eqnarray}
Now we can show the connection between $\tau_b^{(a)}$ and $\kappa_b^{(a)}$. According to the definition of these parameters, we see that every binary sequence counted in $\tau_b^{(a)}$ is also counted in $\kappa_b^{(a)}$, therefore:
\begin{eqnarray}
\tau_b^{(a)}\leq\kappa_b^{(a)}
\end{eqnarray}
In addition, if a sequence counted in $\kappa_{b-a}^{(a)}$ is padded with $a$ zeros at the end, it satisfies the requirements to be counted in $\tau_b^{(a)}$, thus:
\begin{eqnarray}
\kappa_{b-a}^{(a)}\leq\tau_b^{(a)}
\end{eqnarray}
Combining the latter two inequalities, we get:
\begin{eqnarray}
\mathcal{O}(\gamma^{b-a})\leq \tau_b^{(a)} \leq\mathcal{O}(\gamma^{b})
\end{eqnarray}
The above equation, in conjunction with the result in (\ref{eq:rootineq}), yields:
\begin{eqnarray}
\tau_b^{(a)}\gtrapprox \mathcal{O}\bigg(\bigg(\frac{a+3}{2}\bigg)^{\frac{b}{a+1}-1}\bigg)
\end{eqnarray}
The interpretation of the above inequality for $\tilde{k}$ is as follows:
\begin{eqnarray}
\tilde{k}=\tau_{\tilde{m}}^{(\tilde{m}-l-1)}\gtrapprox \mathcal{O}\bigg(\big(\frac{\tilde{m}-l}{2}+1\big)^{\frac{l}{\tilde{m}-l}}\bigg)
\end{eqnarray}
Figure \ref{fig:kappa} shows the asymptotic behavior of $\kappa_b^{(a)}$ at different $a$ values when $b$ increases.
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{kappa.eps}
\caption{Exact values of $\kappa_b^{(a)}$ for different values of $a$ and $b$.}
\label{fig:kappa}
\end{figure}
\section*{Acknowledgment}
The authors sincerely thank K. Alishahi for his help in the proof given in the appendix.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Cooling phenomena are widely used in everyday applications, such as food storage and medical treatment, and also in technology for next-generation electronics, such as hydrogen-fuel cells\cite{Turner-1999,Steele-2001,Schlapbach-2001,Crabtree-2004} and quantum information processing\cite{Nakamura-1999,Leuenberger-2001,Ladd-2010,Johnson-2011}.
Thus, high-performance cooling technology has been actively developed for many fields.
A ubiquitous cooling technology is gas refrigeration, which relies on a compressor cycle of refrigerant gas.
Another method, called magnetic refrigeration, which uses magnetic materials for cooling, has attracted much attention, because the magnetic entropy density is high compared with that of gas refrigeration\cite{Warburg-1881,Debye-1926,Giauque-1927,Giauque-1933,Hashimoto-1981,Hashimoto-1987,Bennett-1992,Bennett-1993,Shaw-1994,Shaw-1995,Pecharsky-1999,Novotny-1999,Pecharsky-2001,Zhang-2001,Tishin-2003,Zhitomirsky-2003,Provenzano-2004,Gschneidner-2005,Campos-2006,Zou-2009,Shen-2009,Sakai-2009,Zvere-2010,Oliveira-2010,Mamiya-2010,Buchelnikov-2010,Sadakuni-2010,Lyubina-2010,Zhu-2011,Perez-2012,Franco-2012,Yonezawa-2013,Mizumaki-2013,Lorusso-2013,Moya-2014,Choudhury-2014,Pospisil-2014}.
The magnetic entropy change induced by varying control parameters, such as the temperature and magnetic field, is called the magnetocaloric effect (MCE), and it is the basis of magnetic refrigeration.
The magnetic entropy is low in magnetic ordered states, such as ferromagnetic and antiferromagnetic states.
However, the magnetic entropy increases as the temperature increases, because the equilibrium state is disordered.
In short, high-efficiency magnetic refrigeration can be achieved by designing low- and high-entropy states.
\begin{figure}[!b]
\begin{center}
\includegraphics[scale=1.0]{schematic_S_sr.eps}
\end{center}
\caption{\label{fig:schematic_S}
(Color online)
(a) Temperature dependence of the magnetic entropy $S_\text{M} (T,H)$ under magnetic fields $H_1$ (black line) and $H_2$ (gray line) ($H_1<H_2$) in ferromagnets or paramagnets.
Two typical processes in magnetic refrigeration are indicated by the solid arrows.
(b) Temperature dependence of the isothermal magnetic entropy change $\Delta S_\text{M} (T,H_2 \to H_1)$ in ferromagnets or paramagnets.
}
\end{figure}
The temperature $T$ and magnetic field $H$ dependences of the magnetic entropy $S_\text{M} (T,H)$ are important for designing magnetic refrigeration cycles.
In ferromagnets and paramagnets, the magnetic entropy decreases as the magnetic field increases at a given temperature (Fig.~\ref{fig:schematic_S} (a)).
Figure~\ref{fig:schematic_S} (a) shows the two main processes in magnetic refrigeration.
In the isothermal demagnetization process, when the magnetic field decreases from $H_2$ to $H_1$ at temperature $T_1$, the magnetic entropy changes according to
\begin{align}
\Delta S_\text{M} (T_1,H_2 \to H_1) = S_\text{M} (T_1,H_1) - S_\text{M} (T_1,H_2),
\label{eq:def_DSM}
\end{align}
which is called the isothermal magnetic entropy change.
$\Delta S_\text{M} (T_1,H_2 \to H_1)$ in Fig.~\ref{fig:schematic_S} (a) is positive, which means that the magnetic entropy increases and the magnetic system absorbs the amount of heat $T_1 \Delta S_\text{M} (T_1,H_2 \to H_1)$.
Thus, a large isothermal magnetic entropy change is required for good magnetic refrigeration materials.
In the adiabatic magnetization process, the magnetic field increases from $H_1$ to $H_2$ with no change in the magnetic entropy.
The initial temperature is $T_{1}$, and the temperature of the magnetic material changes according to
\begin{align}
\Delta T_\text{ad} (T_1,H_1 \to H_2) = T_2 - T_1,
\end{align}
where $T_2$ is the temperature such that $S_\text{M}(T_1,H_1)=S_\text{M}(T_2,H_2)$.
$\Delta T_\text{ad} (T_1,H_1 \to H_2)$ is the adiabatic temperature change.
$\Delta T_\text{ad} (T_1,H_1 \to H_2)$ in Fig.~\ref{fig:schematic_S} (a) is positive, which means that the temperature of the magnetic material increases.
In the active magnetic regenerator (AMR) cycle or its similar cycles\cite{Brown-1976,Patton-1986,Rowe-2006,Zimm-2006,Okamura-2006,Utaki-2007,Fujita-2007,Matsumoto-2011},
the adiabatic temperature change is used directly.
Thus, a large adiabatic temperature change is another requirement for good magnetic refrigeration materials.
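For concreteness, $\Delta T_\text{ad}$ can be obtained by numerically inverting $S_\text{M}(T,H)$ at fixed entropy. The sketch below uses the ideal spin-$1/2$ paramagnet as a toy stand-in (not the Ising model studied here); its entropy depends on $H/T$ only, so the adiabat obeys $T_2=T_1 H_2/H_1$ exactly, which makes the solver easy to check.

```python
import numpy as np

def S_para(T, H):
    """Entropy per spin of an ideal spin-1/2 paramagnet (k_B = g = mu_B = 1):
    S = ln(2 cosh x) - x tanh x, with x = H / (2T).
    logaddexp(x, -x) = ln(e^x + e^-x) is an overflow-safe ln(2 cosh x)."""
    x = H / (2.0 * T)
    return np.logaddexp(x, -x) - x * np.tanh(x)

def delta_T_ad(T1, H1, H2, S=S_para):
    """Adiabatic temperature change: solve S(T2, H2) = S(T1, H1) for T2 by
    bisection, using the fact that S increases with T at fixed H > 0."""
    target = S(T1, H1)
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if S(mid, H2) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) - T1

# S_para depends on H/T only, so magnetizing H: 1 -> 2 doubles T: 1 -> 2,
# i.e. delta_T_ad = +1: the material heats up, as described in the text.
assert abs(delta_T_ad(1.0, 1.0, 2.0) - 1.0) < 1e-6
```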
The relative cooling power (RCP) has often been used as a benchmark for good magnetic refrigeration materials. \cite{Gschneidner-2000}
The RCP is defined as
\begin{align}
&\text{RCP}(H_2 \to H_1) \notag \\
&\ \ \ \ \ = \Delta S_\text{M max} (H_2 \to H_1) \times \Delta T_{1/2} (H_2 \to H_1), \label{eq:RCP}
\end{align}
where $\Delta S_\text{M max} (H_2 \to H_1)$ and $\Delta T_{1/2} (H_2 \to H_1)$ are the maximum value and the full width at half maximum of $\Delta S_\text{M} (T,H_2 \to H_1)$ at given $H_{1}$ and $H_{2}$, respectively (Fig.~\ref{fig:schematic_S} (b)).
Magnetic materials with a large RCP exhibit a large isothermal magnetic entropy change over a wide temperature range.
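Eq.~(\ref{eq:RCP}) translates directly into a peak-times-FWHM computation on a tabulated $\Delta S_\text{M}(T)$ curve; the triangular curve below is a purely synthetic check case, not data from this paper.

```python
import numpy as np

def rcp(T, dS):
    """RCP = max(dS) times the full width at half maximum of dS(T).
    T, dS: 1-D arrays sampled on a fine, increasing temperature grid."""
    peak = dS.max()
    above = np.where(dS >= peak / 2)[0]   # indices where dS is above half max
    width = T[above[-1]] - T[above[0]]
    return peak * width

# Synthetic check: a triangular peak of height 1 centred at T = 2 has
# FWHM = 1, so RCP = 1 * 1 = 1.
T = np.linspace(0, 4, 4001)
dS = np.clip(1 - np.abs(T - 2), 0, None)
assert abs(rcp(T, dS) - 1.0) < 1e-2
```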
Hereafter, the arguments of the quantities will sometimes be omitted.
Magnetic materials with a large $\Delta S_\text{M}$, $\Delta T_\text{ad}$, and RCP are considered to have a high magnetic refrigeration efficiency.
Ferromagnets are good magnetic refrigeration materials because of their large magnetic-field response near the Curie temperature\cite{Pecharsky-1997,Dankov-1998,Wada-2001,Wada-2002,Tegus-2002,Fujieda-2002,Fujita-2003,Fujieda-2006,Yan-2006,Chikazumi-2009,Lyubina-2012}.
In a simple protocol, the magnetic field changes from finite $H$ to zero during the isothermal demagnetization process, and from zero to finite $H$ during the adiabatic magnetization process; that is, $H_1=0$ in Fig.~\ref{fig:schematic_S}.
This protocol has been used in most previous studies of magnetic refrigeration.
We refer to this method as the conventional protocol.
Recently, the MCE in non-ferromagnetic materials, such as antiferromagnets and random magnets, has been experimentally investigated to explore their potential application in magnetic refrigeration\cite{Bohigas-2002,Sosi-2005,Roy-2006,Luo-2006,Luo-2007,Samanta-2007a,Samanta-2007b,Hu-2008,Li-2009a,Li-2009b,Chen-2009,Naik-2011,Kim-2011,Yuan-2012}.
In these magnetic materials, the magnetic refrigeration efficiency calculated with the conventional protocol is sometimes small.
We have focused on the relation between the magnetic refrigeration efficiency and the magnetic ordered structure.
In Ref.~\onlinecite{Tamura-2014}, we focused on just the isothermal demagnetization process and investigated $\Delta S_\text{M}$ of the Ising model on a simple cubic lattice using the Wang-Landau method. \cite{Wang-2001a, Wang-2001b,Lee-2006}
Ref.~\onlinecite{Tamura-2014} studied $\Delta S_\text{M}$ in a ferromagnet and in A-, C-, and G-type antiferromagnets, which are typical magnetic ordered structures.
The MCE in the antiferromagnets differs from that in the ferromagnet.
Furthermore, we proposed a new magnetic refrigeration protocol.
The proposed protocol produces larger $\Delta S_\text{M}$ in antiferromagnets than the conventional protocol.
This paper continues the work in Ref.~\onlinecite{Tamura-2014} and aims to elucidate the relation between the magnetic refrigeration efficiency and typical magnetic ordered structures.
In particular, we focus on the following.
(i) We study the relation between MCE and measurable physical quantities, such as specific heat and magnetization.
(ii) We consider the performance of magnetic refrigeration in both the isothermal demagnetization process and the adiabatic magnetization process.
The rest of the paper is organized as follows.
In Sec.~\ref{sec:model}, we introduce the Ising model on a simple cubic lattice and the magnetic ordered structures considered in this paper.
In Sec.~\ref{sec:MC}, we explain how to obtain physical quantities using the Wang-Landau method.
The dependence of the magnetic entropy on the magnetic ordered structures is shown.
In addition, we discuss the relation between the magnetic entropy and the behavior of the specific heat and magnetization.
In Sec.~\ref{sec:MCE}, the magnetic refrigeration efficiency in the isothermal and adiabatic magnetization processes is considered.
To estimate the efficiency, we calculate the isothermal magnetic entropy change and the adiabatic temperature change.
The RCP is not sufficient for estimating the magnetic refrigeration performance in this study.
Thus, to investigate the efficiency in the isothermal demagnetization process in more detail, we introduce a new quantity called total cooling power (TCP) to replace the RCP.
Section \ref{sec:conclusion} is the conclusion.
In Appendix~\ref{sec:TCP}, we explain the properties of the TCP.
Mean-field analysis of magnetic refrigeration is described in Appendix~\ref{sec:MF}.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=1.0]{structure_sr.eps}
\end{center}
\caption{\label{fig:structure}
(Color online)
Schematics of ordered magnetic structures in the Ising models on a simple cubic lattice.
The signs of the magnetic interactions $J_{ab}$ and $J_c$ and the wave vector $\mathbf{k}$, when the lattice constant is set to unity, are shown for each magnetic structure.
These figures were drawn by VESTA\cite{Momma-2011}.
}
\end{figure*}
\section{Ising model on a simple cubic lattice} \label{sec:model}
In this section, we introduce the model for investigating the dependence of the magnetic refrigeration efficiency on the magnetic ordered structures.
We consider the MCE in the $S=1/2$ Ising models on a simple cubic lattice using statistical thermodynamics.
Let $N= L \times L \times L$ be the number of sites, where $L$ is the linear dimension.
The model Hamiltonian is defined by
\begin{align}
\mathcal{H} = - J_{ab} \sum_{\langle i,j \rangle_{ab}} s_i^z s_j^z
- J_{c} \sum_{\langle i,j \rangle_{c}} s_i^z s_j^z
- H \sum_{i} s_i^z,
\ \ \ s_i^z = \pm \frac{1}{2},
\label{eq:model}
\end{align}
where $J_{ab}$ and $J_{c}$ are nearest-neighbor interactions in the $ab$-plane and in the $c$-axis, respectively, and $H$ is the uniform magnetic field parallel to the $z$-axis of spin.
Here, the $g$-factor and the Bohr magneton $\mu_\text{B}$ are set to unity.
The periodic boundary conditions are imposed for all directions.
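A direct transcription of Eq.~(\ref{eq:model}) with periodic boundaries can be written with one forward neighbour per site per axis; the NumPy sketch below is illustrative (the function name and the small $L=4$ ground-state checks are our own), and it verifies that the ferromagnetic and G-type configurations have the same ground-state energy, as expected from the gauge transformation.

```python
import numpy as np

def energy(s, J_ab, J_c, H):
    """Energy of an L x L x L configuration s (entries +-1/2) under the
    Hamiltonian of Eq. (model), with periodic boundaries; axes 0 and 1
    span the ab-plane and axis 2 is the c-axis."""
    e = 0.0
    for ax, J in ((0, J_ab), (1, J_ab), (2, J_c)):
        e -= J * np.sum(s * np.roll(s, 1, axis=ax))  # one bond per site per axis
    e -= H * np.sum(s)
    return e

L = 4
# Ferromagnet (J_ab = J_c = +1): all spins +1/2, 3 L^3 bonds of weight 1/4.
fm = np.full((L, L, L), 0.5)
assert np.isclose(energy(fm, 1.0, 1.0, 0.0), -0.75 * L**3)
# G-type antiferromagnet (J_ab = J_c = -1): the staggered configuration
# gives the same ground-state energy, reflecting the gauge equivalence.
i, j, k = np.indices((L, L, L))
gafm = 0.5 * (-1.0) ** (i + j + k)
assert np.isclose(energy(gafm, -1.0, -1.0, 0.0), -0.75 * L**3)
```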
In this paper, we focus on the ferromagnetic structure and the A-, C-, and G-type antiferromagnetic structures shown in Fig.~\ref{fig:structure} as in Ref.~\onlinecite{Tamura-2014}.
These are typical magnetic ordered structures.
Throughout this paper, the absolute values of the interactions are set equal, $J := |J_{ab}|=|J_{c}|$, where $J$ is the energy unit.
The A-, C-, and G-type antiferromagnetic structures are bipartite magnetic structures.
The blue and red arrows in Fig.~\ref{fig:structure} indicate the spins on respective sublattices.
At $H=0$, the thermodynamic properties of these models, whichever antiferromagnetic ground state they have, are the same as those of the ferromagnetic Ising model under the gauge transformation ($s_i^z \to -s_i^z$ for all $i$ in one of the sublattices).
Thus, a second-order phase transition occurs at the critical temperature $T_\text{c}/J=1.127\cdots$\cite{Ferrenberg-1991} for all cases where $H=0$.
Here, the Boltzmann constant $k_\text{B}$ is set to unity.
For the ferromagnet, $T_\text{c}/J$ is the Curie temperature, and for the antiferromagnets it is the N\'eel temperature.
\section{Monte Carlo simulation results} \label{sec:MC}
In this section, we consider the MCE behavior in the Ising model.
We obtain the dependence of the magnetic entropy, the magnetic specific heat, and the magnetization on $T$ and $H$ by using the Wang-Landau method\cite{Wang-2001a, Wang-2001b,Lee-2006}.
The Wang-Landau method is a Monte Carlo method and performs a random walk in energy space.
It can directly calculate the absolute density of states $g(E,H)$, where $E$ is the energy of the state.
The absolute density of states is normalized as $\sum_E g (E,H) = 2^N$, which is the total number of states.
In other words, the magnetic entropy per spin is $\ln 2=0.693\cdots$ for the limit of $T\to \infty$.
The partition function $Z (T,H)$, the Helmholtz free energy $F (T,H)$, and the internal energy $U (T,H)$ at a given $T$ and $H$ can be calculated with the obtained $g(E,H)$ by
\begin{align}
Z (T,H) &= \sum_{E} g (E,H) \text{e}^{- \beta E},
\label{eq:partition}\\
F (T,H) &=- T \ln Z(T,H), \\
U (T,H) &= \frac{1}{Z(T,H)} \sum_{E} E g(E,H) \text{e}^{- \beta E},
\end{align}
where $\beta$ is the inverse temperature $1/T$.
Using these quantities, $S_\text{M} (T,H)$ and the magnetic specific heat per spin $C_\text{M} (T,H)$ are obtained by
\begin{align}
S_\text{M} (T,H) &= \frac{1}{N} \frac{U(T,H)-F(T,H)}{T},
\label{eq:entropy} \\
C_\text{M} (T,H) &= \frac{1}{N} \frac{\partial U(T,H)}{\partial T}
\label{eq:specificheat}.
\end{align}
We can directly calculate the magnetic entropy without integrating the magnetic specific heat or the magnetization, which is an advantage of the Wang-Landau method.
Furthermore,
the magnetization per spin $m (T,H)$ is calculated by
\begin{align}
m (T,H) = \frac{1}{Z(T,H)} \sum_E \langle \tilde{m} (E,H) \rangle g (E,H) \text{e}^{-\beta E},
\label{eq:mag}
\end{align}
where $\langle \tilde{m} (E,H) \rangle$ is a microcanonical ensemble average of the magnetization per spin, which can be calculated simultaneously with $g (E,H)$.
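Eqs.~(\ref{eq:partition})--(\ref{eq:entropy}) can be exercised on an exactly enumerated density of states. The sketch below enumerates a tiny $2\times 2\times 2$ cube as a stand-in for the Wang-Landau estimate of $g(E,H)$; note that for $L=2$ the periodic $\pm 1$ neighbours coincide, so this toy $g(E)$ is for illustration only, but the normalization and the high-temperature limit $S_\text{M}\to\ln 2$ hold regardless.

```python
import numpy as np
from itertools import product
from collections import defaultdict

# Exact density of states g(E) for a tiny 2x2x2 Ising cube
# (J_ab = J_c = 1, H = 0); a toy stand-in for the Wang-Landau output.
L, J = 2, 1.0
g = defaultdict(int)
for bits in product((-0.5, 0.5), repeat=L**3):
    s = np.array(bits).reshape(L, L, L)
    # Caveat: for L = 2 each physical pair is counted twice per axis.
    E = -J * sum(np.sum(s * np.roll(s, 1, axis=ax)) for ax in range(3))
    g[round(E, 9)] += 1

def entropy_per_spin(T):
    E = np.array(sorted(g))
    w = np.array([g[e] for e in E], dtype=float)
    boltz = w * np.exp(-E / T)
    Z = boltz.sum()                       # Eq. (partition)
    U = (E * boltz).sum() / Z             # internal energy
    F = -T * np.log(Z)                    # Helmholtz free energy
    return (U - F) / (T * L**3)           # Eq. (entropy)

# Normalization sum_E g(E) = 2^N, hence S_M -> ln 2 per spin as T -> infinity.
assert sum(g.values()) == 2 ** (L**3)
assert abs(entropy_per_spin(1e4) - np.log(2)) < 1e-4
```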
\begin{figure*}
\begin{center}
\includegraphics[scale=1.0]{MC_entropy_sr.eps}
\end{center}
\caption{\label{fig:MC_entropy}
(Color online)
Temperature dependence of the magnetic entropy per spin $S_\text{M}(T,H)$ for $L=16$ under various magnetic fields obtained by the Wang-Landau method.
The insets show the temperature dependence of $H_\text{max} (T)$ at which the magnetic entropy reaches its maximum.
}
\end{figure*}
\begin{figure*}
\begin{tabular}{c}
\begin{minipage}{0.3\hsize}
\begin{center}
\vspace{-5mm}
\includegraphics[scale=1.0]{snapshot_ab_sr.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.7\hsize}
\begin{center}
\vspace{-5mm}
\includegraphics[scale=1.0]{snapshot_cd_sr.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{\label{fig:snapshot}
(Color online)
(a) Magnetic field dependence of $S_\text{M} (T,H)$ with $T/T_\text{c}=0.5$ for $L=16$ for the ferromagnet.
(b) Magnetic field dependence of $S_\text{M} (T,H)$ with $T/T_\text{c}=0.5$ for $L=16$ for the G-type antiferromagnet.
(c) Snapshots of the spins of the G-type antiferromagnet for $H/J=0, 2.9$, and $5.0$ at $T/T_\text{c}=0.5$.
(d) Masked snapshots of the spins of the G-type antiferromagnet for $H/J=0, 2.9$, and $5.0$ at $T/T_\text{c}=0.5$.
(c) and (d) were drawn by VESTA\cite{Momma-2011}.
}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=1.0]{MC_quantities_sr.eps}
\end{center}
\caption{\label{fig:MC_quantities}
(Color online)
(a) Temperature dependence of the magnetic specific heat per spin $C_\text{M}(T,H)$ for $L=16$ under various magnetic fields obtained by the Wang-Landau method.
(b) Temperature dependence of magnetization per spin $m(T,H)$ for $L=16$ under various magnetic fields obtained by the Wang-Landau method.
(c) Magnetic field dependence of $m(T,H)$ at fixed temperatures of $T/T_\text{c}=0.0-2.0$ for $L=16$.
(d) Magnetic field dependence of magnetic susceptibility per spin $\chi_\text{M}(T,H)$ at fixed temperatures of $T/T_\text{c}=0.0-2.0$ for $L=16$.
The insets show $H_\text{max}(T)/J$ (gray curves) and the peak position of $\chi_{\text{M}}(T,H)$ (black crosses).
}
\end{figure*}
Using the Wang-Landau method, the temperature dependence of the magnetic entropy $S_\text{M}(T,H)$ for $L=16$ is obtained (Fig.~\ref{fig:MC_entropy}).
The magnetic entropies for $L=8$, $12$, and $16$ collapse within the line width in Fig.~\ref{fig:MC_entropy}; thus, we use a lattice size of $L=16$ throughout this paper.
In the ferromagnet, the magnetic entropy decreases as the magnetic field increases for any temperature.
The same behavior is observed in the paramagnetic phase above $T_\text{c}$ in antiferromagnets.
In contrast, the magnetic entropy behavior in antiferromagnets below the N\'eel temperature differs from the behavior in the ferromagnet.
The magnetic entropy in antiferromagnets reaches a maximum value at finite $H$ below $T_\text{c}$, whereas in the ferromagnet, the maximum magnetic entropy at a given $T$ is achieved when $H=0$.
Let $H_\text{max} (T)$ be the magnetic field at which the magnetic entropy reaches its maximum value at $T$.
The insets of Fig.~\ref{fig:MC_entropy} show the temperature dependence of $H_\text{max}(T)$ for antiferromagnets.
As the temperature increases, $H_\text{max}(T)$ monotonically decreases.
In addition, a large residual magnetic entropy of about $\ln 2/2$ is observed in the G-type antiferromagnet with $H/J=3.0$.
The large residual magnetic entropy indicates the existence of macroscopically degenerate ground states, as found in frustrated magnetic systems\cite{Vannimenus-1977,Liebmann-1986,Harris-1997,Kobayashi-1998,Bramwell-2001,Matsuhira-2002,Matsuhira-2002,Udagawa-2002,Yoshioka-2004,Diep-2005,Moessner-2006,Tanaka-2007,Tahara-2007,Andrews-2009,Ogitsu-2010,Tanaka-2010,Tanaka-2012,Chern-2013}.
Next, we consider the microscopic origin of the nonzero $H_\text{max} (T)$ in antiferromagnets, in contrast to the ferromagnet.
Figures~\ref{fig:snapshot} (a) and (b) show the magnetic field dependence of the magnetic entropy at $T/T_\text{c}=0.5$ for the ferromagnet and the G-type antiferromagnet, respectively.
In the ferromagnet, the magnetic entropy decreases as the magnetic field increases because the magnetic field reinforces the ferromagnetic order.
In contrast, for the G-type antiferromagnet, the magnetic field dependence of the magnetic entropy has a peak at $H_{\text{max}}(T) (\neq 0)$.
In antiferromagnets, the ordered magnetic structure is destroyed by the magnetic field.
Thus, the magnetic entropy increases as the magnetic field increases below $H_\text{max}(T)$.
When we apply a strong magnetic field greater than $H_{\text{max}}(T)$, the spin structure becomes a saturated ferromagnetic structure.
Thus, the magnetic entropy decreases as the magnetic field increases above $H_{\text{max}}(T)$.
Figure~\ref{fig:snapshot} (c) shows snapshots of the spin configuration of the G-type antiferromagnet at $T/T_{\text{c}}=0.5$.
Because the temperature is lower than the N\'eel temperature, the antiferromagnetic ordered state appears at $H=0$.
To represent the antiferromagnetic ordered structure more clearly, masked snapshots of spin configuration are also shown (Fig.~\ref{fig:snapshot} (d)).
In the masked snapshots, the local gauge transformation $s_i^z \to -s_i^z$ is applied to all spins in one of the two sublattices of the G-type antiferromagnet.
Thus, most spins in the masked snapshot at $H/J=0$ are the same color, which indicates that the structure is almost completely antiferromagnetically ordered.
The snapshot of the spin configuration at $H/J=2.9$, which is near $H_{\text{max}}(T)\ (\lesssim 2.9J)$, is shown in the middle panel of Fig.~\ref{fig:snapshot} (c).
The spin structure is almost random, which can also be confirmed in the masked snapshot of the spin configuration (middle panel of Fig.~\ref{fig:snapshot} (d)).
This is a magnetic field induced disordered state.
Near the magnetic field value, the magnetic entropy should be large.
The snapshot of the spin configuration at $H/J=5.0$, which is larger than $H_{\text{max}}(T)$, is shown in the right panel of Fig.~\ref{fig:snapshot} (c).
The spins are almost parallel to the magnetic field at $H/J=5.0$.
As the magnetic field increases from $H_\text{max} (T)$, the saturated ferromagnetic structure appears and the magnetic entropy decreases.
Here, we consider the relation between the behavior of the magnetic entropy and measurable physical quantities.
Figures~\ref{fig:MC_quantities} (a) and (b) show the temperature dependence of the magnetic specific heat $C_\text{M}(T,H)$ and magnetization $m(T,H)$ under various magnetic fields, which are obtained by the Wang-Landau method using Eqs.~(\ref{eq:specificheat}) and (\ref{eq:mag}).
Let us discuss why nonzero $H_{\text{max}}(T)$ exists in antiferromagnets from the behaviors of $C_{\text{M}}(T,H)$ and $m(T,H)$.
The magnetic entropy $S_\text{M} (T,H)$ is calculated from $C_\text{M} (T,H)$ and $m (T,H)$ as
\begin{align}
S_\text{M} (T,H) &= \int_0^{T} \frac{C_\text{M} (T',H)}{T'} dT' + S_\text{M} (0,H),
\label{eq:S-C} \\
S_\text{M} (T,H) &= \int_{0}^H \left( \frac{\partial m (T,H')}{\partial T} \right)_{H'} d H' + S_\text{M} (T,0). \label{eq:S-M}
\end{align}
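Relation (\ref{eq:S-C}) can be checked numerically on a model whose entropy is known in closed form. The sketch below uses the ideal spin-$1/2$ paramagnet as a toy stand-in: integrating $C_\text{M}/T=(\partial S/\partial T)_H$ upward from a low starting temperature recovers $S_\text{M}(T,H)$, with $S_\text{M}$ at the starting temperature playing the role of the residual-entropy term (here essentially zero).

```python
import numpy as np

def S_para(T, H):
    # Closed-form entropy per spin of an ideal spin-1/2 paramagnet:
    # S = ln(2 cosh x) - x tanh x, x = H/(2T); logaddexp is overflow-safe.
    x = H / (2.0 * T)
    return np.logaddexp(x, -x) - x * np.tanh(x)

H = 1.0
T = np.linspace(0.05, 3.0, 20001)
C_over_T = np.gradient(S_para(T, H), T)        # C_M / T = (dS/dT)_H
# Trapezoidal integral of C_M/T from T[0] up to each grid point (Eq. S-C);
# S_para(T[0], H) ~ 0 is the analogue of the residual-entropy term.
S_rec = np.concatenate(([0.0],
        np.cumsum(0.5 * (C_over_T[1:] + C_over_T[:-1]) * np.diff(T))))
assert np.allclose(S_rec + S_para(T[0], H), S_para(T, H), atol=1e-4)
```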
First, we focus on the magnetic specific heat below $T_{\text{c}}$ (Fig.~\ref{fig:MC_quantities} (a)).
The second term in Eq.~(\ref{eq:S-C}) is the residual magnetic entropy and should always be zero in the ferromagnet.
Since the integrand in the first term in Eq.~(\ref{eq:S-C}) satisfies the inequality
$C_{\text{M}}(T,H_1)/T > C_{\text{M}}(T,H_2)/T$ for any $T(< T_\text{c})$, $H_1$, and $H_2$ with $H_1 < H_2$,
it follows that $S_{\text{M}}(T,H_1)>S_{\text{M}}(T,H_2)$; that is, $H_{\text{max}}(T)$ is always zero below the Curie temperature.
As shown in Fig.~\ref{fig:MC_entropy}, $H_\text{max}(T\to 0)/J=1,2,3$ for the A-, C-, and G-type antiferromagnets, respectively, and $H_\text{max}(T)$ monotonically decreases as the temperature increases.
Here we consider a magnetic field lower than $H_\text{max}(T \to 0)$.
In this region, because the residual magnetic entropy is zero as it is for the ferromagnet, it is sufficient to compare the integrand in the first term in Eq.~(\ref{eq:S-C}).
Figure~\ref{fig:MC_quantities} (a) shows that there is a region in which the inequality
$C_{\text{M}}(T,H_1)/T < C_{\text{M}}(T,H_2)/T$ is satisfied for $H_1<H_2$ below $T_\text{c}$.
In this region, the magnetic entropy increases with the magnetic field.
Thus, in antiferromagnets, $H_{\text{max}}(T)$ is a finite value below the transition temperature.
Next we consider the relation between the magnetic entropy and the magnetization (Fig.~\ref{fig:MC_quantities} (b)).
In the ferromagnet, the magnetization decreases as the temperature increases.
Since the inequality $S_\text{M}(T,H_1) > S_\text{M}(T,H_2)$ is satisfied for any values of $T$, $H_1$, and $H_2$ with $H_1 < H_2$, the value of $H_{\text{max}}(T)$ should be zero for all temperatures from Eq.~(\ref{eq:S-M}).
In contrast, in antiferromagnets below $T_\text{c}$, because $\left( \frac{\partial m (T,H')}{\partial T}\right)_{H'}$ can be positive in the small $H$ region (Fig.~\ref{fig:MC_quantities} (b)), Eq.~(\ref{eq:S-M}) shows that the magnetic entropy can increase with the magnetic field.
Thus, $H_{\text{max}}(T)$ is a finite value below $T_\text{c}$.
The peak position in the temperature dependence of the magnetization under $H$ (Fig.~\ref{fig:MC_quantities} (b)) corresponds to the temperature $T$ such that $H=H_{\text{max}}(T)$.
In other words, the sign of $\left( \frac{\partial m (T,H')}{\partial T}\right)_{H'}$ changes at the temperature $T$ such that $H'=H_\text{max}(T)$.
Finally, we show the magnetic field dependence of the magnetization in Fig.~\ref{fig:MC_quantities} (c).
The transition between the antiferromagnetic ordered state, where $m(T,H)=0$, and the saturated ferromagnetic state, where $m(T,H)=0.5$, is observed in antiferromagnets below $T_{\text{c}}$.
The magnetic susceptibility, defined as $\chi_\text{M}(T,H)=\left( \frac{\partial m(T,H)}{\partial H}\right)_T$, is shown in Fig.~\ref{fig:MC_quantities} (d), in which the peak positions correspond to $H_{\text{max}}(T)$ (inset of Fig.~\ref{fig:MC_quantities} (d)).
Note that the magnetic susceptibility peak positions are the metamagnetic transition points.
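In practice, $H_{\text{max}}(T)$ can be read off from magnetization data of this kind by locating the susceptibility peak numerically. Below is a minimal Python sketch; the tabulated magnetization is a made-up smooth metamagnetic curve (transition field $H_0$ and width $w$ are illustrative numbers, not the Monte Carlo data of this paper):

```python
import math

def chi_peak_field(h_grid, m_vals):
    """Locate the peak of chi_M = (dm/dH)_T by central finite
    differences on a tabulated magnetization curve m(H).
    Returns (H_peak, chi_peak); H_peak estimates H_max(T)."""
    best_h, best_chi = None, -math.inf
    for i in range(1, len(h_grid) - 1):
        chi = (m_vals[i + 1] - m_vals[i - 1]) / (h_grid[i + 1] - h_grid[i - 1])
        if chi > best_chi:
            best_h, best_chi = h_grid[i], chi
    return best_h, best_chi

# Toy metamagnetic curve: m rises from 0 to the saturation value 0.5
# around a transition field H0 (hypothetical numbers).
H0, w = 2.0, 0.3
h_grid = [0.01 * k for k in range(401)]                     # 0 <= H <= 4
m_vals = [0.25 * (1.0 + math.tanh((h - H0) / w)) for h in h_grid]
h_peak, chi_peak = chi_peak_field(h_grid, m_vals)
```

The susceptibility peak of this toy curve sits at the metamagnetic transition field $H_0$, mirroring how the peak positions in Fig.~\ref{fig:MC_quantities} (d) track $H_{\text{max}}(T)$.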
\section{Magnetic refrigeration efficiency} \label{sec:MCE}
In this section, we consider the magnetic refrigeration efficiency of the ferromagnet and the A-, C-, and G-type antiferromagnets for the isothermal demagnetization process and the adiabatic magnetization process.
We compare the magnetic refrigeration efficiency of the proposed protocol reported in Ref.~\onlinecite{Tamura-2014} with that of the conventional protocol.
\begin{figure}[b]
\begin{center}
\hspace{10mm} \includegraphics[scale=1.0]{HmaxT_sr.eps}
\end{center}
\caption{\label{fig:HmaxT}
(Color online)
(Left) Schematic of the conventional protocol, which is suitable for ferromagnets and paramagnets in the isothermal demagnetization process.
(Right) Schematic of the proposed protocol, which is suitable for antiferromagnets in the isothermal demagnetization process, proposed in Ref.~\onlinecite{Tamura-2014}.
Note that for the adiabatic magnetization process, the direction of the arrows is reversed.
}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[scale=1]{MC_deltaS_sr.eps}
\end{center}
\caption{\label{fig:MC_deltaS}
(Color online)
(a) Isothermal magnetic entropy change as a function of temperature under the conventional protocol ($H \to 0$) for $L=16$.
(b) Isothermal magnetic entropy change as a function of temperature under the proposed protocol ($H \to H_\text{max}(T)$) for $L=16$.
}
\end{figure*}
In the isothermal demagnetization process under the conventional protocol, the magnetic field is changed from finite $H$ to zero (left panel of Fig.~\ref{fig:HmaxT}).
This protocol is efficient when $S_\text{M}(T,H)$ decreases as $H$ increases at a given temperature, which is realized in ferromagnets and paramagnets.
The conventional protocol has been used in most previous studies.
In contrast, Ref.~\onlinecite{Tamura-2014} proposed a protocol in which $\Delta S_\text{M}$, given by Eq.~(\ref{eq:def_DSM}), takes its maximum value.
Thus, the maximum amount of heat can be absorbed using the proposed protocol shown in the right panel of Fig.~\ref{fig:HmaxT}.
The magnetic field is changed from $H$ to $H_\text{max} (T)$ (purple dotted curve).
$H_\text{max} (T)$ is a finite value below $T_\text{c}$, whereas $H_\text{max} (T)=0$ above $T_\text{c}$ in antiferromagnets (insets of Fig.~\ref{fig:MC_entropy}).
The proposed protocol is more efficient than the conventional protocol when the $H$-dependence of $S_\text{M}(T,H)$ has a peak at finite $H$, as in antiferromagnets (Fig.~\ref{fig:snapshot} (b)).
Note that because $H_\text{max} (T)=0$ for all temperatures in the ferromagnet, the proposed protocol includes the conventional protocol for ferromagnets and paramagnets.
In Ref.~\onlinecite{Tamura-2014}, the magnetic refrigeration efficiency was considered only with respect to the amount of heat absorbed during the isothermal demagnetization process.
In this work, we examine the magnetic refrigeration efficiency in the isothermal demagnetization and adiabatic magnetization processes.
We show that the protocol proposed in Ref.~\onlinecite{Tamura-2014} is useful for obtaining high magnetic refrigeration efficiencies in the two typical processes.
\subsection{Isothermal demagnetization process}
\label{sec:MCE_IDP}
In this subsection, we focus on the isothermal demagnetization process (Fig.~\ref{fig:schematic_S} (a)).
Figures~\ref{fig:MC_deltaS} (a) and (b) are the temperature dependences of $\Delta S_\text{M}(T,H \to 0)$ under the conventional protocol and $\Delta S_\text{M}(T, H \to H_\text{max} (T))$ under the proposed protocol\cite{supplemetal_material1}, respectively.
Here, $H$ is the magnetic field at the start point in both protocols.
In the ferromagnet, because $H_\text{max} (T)$ is always zero, both processes are the same.
Thus, $\Delta S_\text{M} (T,H \to 0)$ is always positive and the magnetic entropy increases.
In contrast, $\Delta S_\text{M} (T, H \to 0)$ can be negative in antiferromagnets below $T_\text{c}$.
In this case, the magnetic entropy decreases under the conventional protocol, which is called the inverse MCE\cite{Tohei-2003,Krenke-2005,Sandeman-2006,Krenke-2007,Ranke-2009} (Fig.~\ref{fig:MC_deltaS} (a)).
The inverse MCE does not appear in the proposed protocol because $\Delta S_\text{M} (T, H \to H_\text{max} (T))$ is always positive (Fig.~\ref{fig:MC_deltaS} (b)) by the definition of $H_{\text{max}}(T)$.
In addition, $ \Delta S_\text{M} (T, H \to H_\text{max} (T)) \ge \Delta S_\text{M} (T, H \to 0)$ is always satisfied, and the proposed protocol achieves the maximum possible entropy change.
Thus, the proposed protocol is useful for obtaining a large isothermal magnetic entropy change in antiferromagnets.
Next, we consider the performance of the magnetic refrigeration in the proposed protocol from a different perspective.
Let us consider a refrigerator that transfers heat from a low-temperature reservoir to a high-temperature reservoir when the magnetic field is changed from $H_2$ to $H_1$.
The amount of transferred heat, called the cooling capacity $q$, is given by
\begin{align}
q = \int_{T_{\text{l}}}^{T_{\text{h}}} \Delta S_{\text{M}} (T,H_{2} \to H_{1}) dT,
\label{eq:cc}
\end{align}
where $T_{\text{l}}$ and $T_{\text{h}}$ are the temperatures of the low- and high-temperature reservoirs, respectively.
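Given a tabulated $\Delta S_{\text{M}}(T)$ curve, the integral in Eq.~(\ref{eq:cc}) can be evaluated by simple trapezoidal quadrature. The sketch below uses an illustrative triangular entropy-change profile (made-up shape and numbers, not the Monte Carlo data):

```python
def cooling_capacity(t_grid, delta_s, t_low, t_high):
    """Trapezoidal estimate of q = int_{T_l}^{T_h} Delta S_M(T) dT,
    summing over the grid intervals lying inside [t_low, t_high]."""
    q = 0.0
    for i in range(len(t_grid) - 1):
        if t_grid[i] >= t_low and t_grid[i + 1] <= t_high:
            q += 0.5 * (delta_s[i] + delta_s[i + 1]) * (t_grid[i + 1] - t_grid[i])
    return q

# Illustrative triangular Delta S_M, peaked at T = 1 with half-width 0.5.
t_grid = [0.01 * k for k in range(201)]
delta_s = [max(0.0, 1.0 - abs(t - 1.0) / 0.5) for t in t_grid]
q = cooling_capacity(t_grid, delta_s, 0.5, 1.5)
```

For this triangle the exact area between the half-maximum temperatures is recovered, so the routine can be checked against closed-form test cases before applying it to simulation data.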
In ferromagnets, RCP approximately characterizes the cooling capacity when $T_{\text{l}}$ and $T_{\text{h}}$ are set to temperatures such that $\Delta S_{\text{M}}(T_\text{l},H_{2} \to H_{1})=\Delta S_{\text{M}}(T_\text{h},H_{2}\to H_{1})=\frac{1}{2}\Delta S_{\text{M\, max}}$, as shown in Fig.~\ref{fig:schematic_S} (a).
In fact, the RCP value is nearly $4/3$ times the cooling capacity in ferromagnets\cite{Gschneidner-2000}.
However, the RCP is less suitable for measuring the performance of the magnetic refrigeration under the proposed protocol.
In antiferromagnets, the temperature dependence of $\Delta S_{\text{M}}$ under the proposed protocol has, in some cases, more than two temperatures at which $\Delta S_{\text{M}}=\frac{1}{2}\Delta S_{\text{M\, max}}$ (Fig.~\ref{fig:schematic_RCP}).
This situation differs qualitatively from $\Delta S_{\text{M}}$ in ferromagnets or paramagnets shown in Fig.~\ref{fig:schematic_S} (b).
Thus, we introduce a new measure, the TCP, to consider the efficiency of the magnetic refrigeration under the proposed protocol more appropriately.
The TCP is defined as
\begin{align}
\text{TCP} &= \int_{0}^{\infty} \Delta S_{\text{M}}(T,H_{2}\to H_{1}) \Theta(\Delta S_{\text{M}}(T,H_{2}\to H_{1})) d T, \label{eq:TCP} \\
\Theta(x) &=
\begin{cases}
0 \quad (x < 0)\\
1 \quad (x \ge 0)
\end{cases}.
\end{align}
The TCP characterizes the full potential of the cooling power of the target material.
It is a natural measure, because the TCP value is almost $3/2$ times the RCP value in the ferromagnet.
A more detailed explanation of the TCP and the calculation method is given in Appendix~\ref{sec:TCP}.
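Numerically, the TCP of Eq.~(\ref{eq:TCP}) differs from the cooling capacity only in that the negative (inverse-MCE) portions of $\Delta S_{\text{M}}$ are removed by the step function before integrating. A minimal sketch, using a toy sinusoidal $\Delta S_{\text{M}}$ with a negative region (illustrative, not simulation data):

```python
import math

def total_cooling_power(t_grid, delta_s):
    """Trapezoidal estimate of TCP = int dT Delta S_M * Theta(Delta S_M):
    only temperatures with Delta S_M >= 0 contribute, Eq. (TCP)."""
    clipped = [max(0.0, s) for s in delta_s]
    tcp = 0.0
    for i in range(len(t_grid) - 1):
        tcp += 0.5 * (clipped[i] + clipped[i + 1]) * (t_grid[i + 1] - t_grid[i])
    return tcp

# Toy profile: positive on (0,1) and (2,3), negative (inverse MCE) on (1,2).
t_grid = [0.002 * k for k in range(1501)]            # 0 <= T <= 3
delta_s = [math.sin(math.pi * t) for t in t_grid]
tcp = total_cooling_power(t_grid, delta_s)           # ~ 4/pi: only the two positive lobes
```

Only the two positive lobes contribute, which is exactly the behavior sketched in Fig.~\ref{fig:schematic_RCP} for antiferromagnets under the proposed protocol.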
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{schematic_RCP_sr.eps}
\end{center}
\caption{\label{fig:schematic_RCP}
Schematic of the entropy change in the antiferromagnet under the proposed protocol.
}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[scale=1]{TCP_sr.eps}
\end{center}
\caption{\label{fig:TCP}
(Color online)
(a) TCP as a function of the magnetic field $H$ at the start point under the conventional protocol (black points) and under the proposed protocol (blue points) for $L=16$.
(b) Relation between the maximum magnetic entropy change $\Delta S_\text{M max}$ and the temperature $T_\text{max}$ at which $\Delta S_{\text{M}}= \Delta S_{\text{M\, max}}$ under the conventional protocol (black points) and under the proposed protocol (blue points) for a lattice size of $L=16$.
The lines between points are a guide to the eye.
}
\end{figure*}
We consider the TCP of the conventional protocol ($H \to 0$) and that of the proposed protocol ($H \to H_{\text{max}}(T)$).
Figure~\ref{fig:TCP} (a) shows the TCP as a function of the magnetic field $H$ at the start point of each protocol.
In the ferromagnet, the TCPs of both protocols are the same because both protocols are equivalent.
However, for the antiferromagnets, the TCP of the proposed protocol is greater than that of the conventional protocol for any $H$.
Figure~\ref{fig:TCP} (b) shows the relation between the maximum magnetic entropy change $\Delta S_\text{M max}$ and the temperature $T_\text{max}$, at which $\Delta S_{\text{M}}=\Delta S_{\text{M\, max}}$ (Fig.~\ref{fig:schematic_S} (b) and Fig.~\ref{fig:schematic_RCP}) for various $H$.
Under the conventional protocol, $\Delta S_\text{M max}$ monotonically increases with $H$ regardless of the magnetic ordered structure.
Furthermore, $T_\text{max}$ is always larger than $T_\text{c}$, and in the ferromagnet $T_\text{max}$ increases monotonically with $H$.
In antiferromagnets, $T_{\text{max}}$ decreases as $H$ increases in the small $H$ region.
However, because $T_{\text{max}}$ should diverge in the limit $H \to \infty$, $T_{\text{max}}$ should increase with $H$ in the large $H$ region.
This is not observed using the mean-field analysis, which is discussed in Appendix~\ref{sec:MF}.
In contrast, a more complicated relation between $\Delta S_\text{M max}$ and $T_\text{max}$ is observed in antiferromagnets under the proposed protocol.
In fact, there are cases in which $T_\text{max}$ is smaller than $T_\text{c}$.
We confirm that $\Delta S_\text{M max}$ obtained in the proposed protocol is the same as or larger than that in the conventional protocol at a given value of $H$.
In antiferromagnets,
$H_\text{max} (T) \neq 0$ for $T<T_\text{c}$ and $H_\text{max} (T) = 0$ for $T>T_\text{c}$ (insets of Fig.~\ref{fig:MC_entropy}).
Thus, when $T_\text{max} > T_\text{c}$, $\Delta S_\text{M max}$ obtained from the conventional protocol and the proposed protocol are the same.
\subsection{Adiabatic magnetization process}
In this subsection, we focus on the adiabatic magnetization process.
Let us consider the case in which the magnetic field is changed from zero to $H$ in the conventional protocol, and from $H_\text{max} (T)$ to $H$ in the proposed protocol (Fig.~\ref{fig:schematic_S} (a)).
Thus, the direction of the arrows in Fig.~\ref{fig:HmaxT} should be reversed.
Figure~\ref{fig:MC_deltaT} (a) shows the adiabatic temperature change $\Delta T_\text{ad} (T,0\to H)$ under the conventional protocol as a function of the temperature at the start point.
$\Delta T_\text{ad} (T,0\to H)$ is always positive in the ferromagnet; thus, the temperature of the ferromagnet increases.
However, in antiferromagnets, $\Delta T_\text{ad} (T,0\to H)$ can be negative below $T_\text{c}$.
That is, there is a finite $H$ such that $\Delta T_{\text{ad}}(T,0\to H)<0$ for any $T < T_{\text{c}}$.
In this case, the temperature of antiferromagnets decreases.
For the G-type antiferromagnet with $H/J=3.0$,
$\Delta T_\text{ad} (T,0\to H)$ cannot be well defined below about $T/T_\text{c}=0.9$, because no end-point temperature exists in the adiabatic magnetization process.
This behavior is caused by the large residual magnetic entropy at $H/J=3.0$.
The C-type antiferromagnet with $H/J=2.0$ also shows similar behavior in our calculation.
Next, we consider the adiabatic temperature change $\Delta T_\text{ad} (T,H_\text{max} (T)\to H)$ under the proposed protocol (Fig.~\ref{fig:MC_deltaT} (b)).
In the proposed protocol, $\Delta T_\text{ad} (T,H_\text{max} (T)\to H)$ is always positive and reaches the maximum value at a given $T$ by the definition of $H_\text{max}(T)$.
Antiferromagnets exhibit a larger adiabatic temperature change in the proposed protocol than in the conventional protocol.
Thus, the proposed protocol is useful for obtaining a large adiabatic temperature change in antiferromagnets.
\begin{figure*}
\begin{center}
\includegraphics[scale=1.0]{MC_deltaT_sr.eps}
\end{center}
\caption{\label{fig:MC_deltaT}
(Color online)
(a) Adiabatic temperature change as a function of temperature at the start point under the conventional protocol ($0 \to H$) for $L=16$.
(b) Adiabatic temperature change as a function of temperature at the start point under the proposed protocol ($H_\text{max}(T) \to H$) for $L=16$.
}
\end{figure*}
\section{Conclusion} \label{sec:conclusion}
We have studied the magnetic refrigeration efficiency in the Ising models of a ferromagnet and A-, C-, and G-type antiferromagnets, which have typical magnetic ordered structures.
The temperature and magnetic field dependences of the magnetic entropy in the Ising models were calculated by the Wang-Landau method, which is a Monte Carlo method.
The obtained magnetic entropy indicates that the protocol proposed in Ref.~\onlinecite{Tamura-2014} achieves the maximum magnetic entropy change in the isothermal demagnetization process and the maximum adiabatic temperature change in the adiabatic magnetization process.
In the proposed protocol, $H_{\text{max}}(T)$ plays an important role, where $H_{\text{max}}(T)$ is the magnetic field at which the magnetic entropy is the maximum value at a given temperature $T$.
$H_{\text{max}}(T)$ is used as the start or end point in the isothermal demagnetization and adiabatic magnetization processes.
The physical meaning of $H_{\text{max}}(T)$ was discussed in terms of measurable physical quantities, such as the magnetic specific heat and the magnetization.
The magnetization process suggests that $H_{\text{max}}(T)$ corresponds to the metamagnetic transition point.
In addition, to estimate the full potential of the cooling power under the proposed protocol, we introduced a new quantity, the total cooling power (TCP).
TCP under the proposed protocol is the same as or larger than that under the conventional protocol in the considered models.
This suggests that the proposed protocol is useful for obtaining the maximum amount of heat transfer.
Finally, we emphasize that the proposed protocol can produce the maximum magnetic refrigeration efficiency in the models we considered and also in other magnetic systems, such as nonferromagnets with an inhomogeneous magnetic ordered structure.
Thus, we believe that this study provides a foundation for further research in the field of magnetic refrigeration.
\section*{Acknowledgment}
We thank Kenjiro Miyano for useful comments and discussions.
R. T., S. T., and H. K. were partially supported by a Grant-in-Aid for Scientific Research (C) (Grant No. 25420698).
In addition, R. T. is partially supported by National Institute for Materials Science.
S. T. is a Yukawa Fellow, and his work is supported in part by the Yukawa Memorial Foundation.
The computations in the present work were performed on supercomputers at the National Institute for Materials Science; the Supercomputer Center, Institute for Solid State Physics, University of Tokyo; and the Yukawa Institute for Theoretical Physics.
\section{Introduction}
In the current understanding of gamma-ray bursts (GRBs), short GRBs are produced when two compact objects merge together \cite{npp92,je99,fwh99}, while a long GRB is a result of the collapse of a massive star \cite{le93, w93, mw93}. In both cases, a bipolar jet is launched from the interior of the star or merger and escapes to produce the observed signatures. While breaking out of the stellar surface, the jet interacts with a dense medium or stellar envelope ahead of it. The interaction leads to the deceleration of the jet and the formation of shocked matter at the jet-envelope interface. This piled-up matter is an extended structure known as a stellar cocoon or cork. The cocoon then wraps around the jet and affects its dynamical evolution \cite{ma09}.
The properties of the cork are modeled extensively in the literature \cite{bc89,m03, bnp11, nhs14,bmr14,ldm16, mrm17, hgn18, gnp18, gnph18, yb22}. This cocoon forms in both short and long GRBs \cite{np16}. Depending upon the kinetic power of the jet, it either pierces through the cocoon to escape to the external medium \cite{rml02,zwm03,zwh04}, or it is choked to produce backscattered photons \cite{em08,e14,e18,vpe21a,vpe21b, vpe21c}. In the former case, the escaped jet's geometry is shaped and its dynamics is significantly affected by the jet-cork interaction \cite{nhs14,hgn18, m03,mlb07,ma09,mi13, dqk18, gln19, hi21, gnb21,gg21}. The cocoon is capable of inducing recollimation shocks in the jet stem \cite{bnp11,gln19}. To explain the observed features of the bursts, the formation of a cocoon is found to be an essential requirement. In the recently discovered short GRB associated with the gravitational-wave source GW170817, radio observations hint at the presence of a cocoon as the jet erupts from the burst \cite{hcm17,mnh18,gbs20}. It is found that the cocoon shock breakout of the jet in the case of GW170817 is more likely than other scenarios such as photospheric emission \cite{gnph18,ijt21}.
Given the fact that the jet interacts with the cocoon before breaking out of the system, one may ask an obvious question: how do the strength of the cocoon and the composition of the jet matter affect its dynamical evolution during this process? The jet composition is an open question in GRB physics. It is debatable whether GRBs are matter-dominated or radiation-dominated \cite{mkp06,kz15,p15, cg16, fcz17, zzc18,cpd21,gkg21,zwl21}. In this study, we consider a jet composed of baryons and leptons, and we keep the lepton fraction relative to baryons as a free parameter, an approach similar to \cite{zwl21}. It is noted that neutrino annihilation in the initial phase of the burst leads to the formation of a large number of electron-positron pairs \cite{w93,pwf99,mw99,le03,gl14}, and thus the positrons are likely to be a part of the outflowing jet. In the general relativistic regime, we aim to investigate further properties of the jet once it interacts with and is shaped by the cocoon. We pose and seek to answer the following questions in this analysis: \begin{itemize}
\item As the jets in short GRBs are produced near a strong gravitational potential, how is the jet scenario affected in the regime of general relativity?
\item Once the jet strikes against the cocoon to be collimated and escapes, how is its dynamics sensitive to its matter composition ({\em i.e., } the ratio of the lepton fraction to baryonic matter), and how does its observational appearance change?
\item What are the conditions for the generation of the recollimation shocks produced by collimation of the jet, and how does the possibility of a shock transition depend upon the matter composition?
\item How are the properties of the shock (such as the shock strength and transition location) sensitive to the jet composition?
\end{itemize}
Hence, we project to carry out an extensive analysis of the dependence of jet dynamics on its composition, as well as, the collimation strength of the cocoon. To meet this purpose, we solve general relativistic hydrodynamic equations of motion using a relativistic equation of state proposed by \citep{cr09}. This equation of state takes care of the fluid composition. The jet matter made of electrons and protons may harbour a significant fraction of positrons as pair production takes place at high optical depths \cite{gcs08}.
In the next section, we describe the assumptions in the model in detail, including the considered jet geometry and the equation of state. Then we proceed to discuss the jet dynamics along with the equations of motion in the general relativistic regime in section \ref{sec_dyn}. In section \ref{sec_method}, we discuss the method to solve the dynamical equations of motion using sonic point analysis and discuss the shock conditions and shock properties. The results are described in section \ref{sec_results}. We conclude the paper in section \ref{sec_conclusions} by highlighting the principal outcomes and their significance in understanding GRBs. In brief, we discuss the future extension of this work in section \ref{sec_future}.
\section{Assumptions}
In this work, we construct a semi-analytic general relativistic steady-state model of an erupting jet which is shaped and collimated by the stellar cork. Although the cocoon forms both in long and short GRBs, we restrict this paper to the case of a short GRB jet erupting from the merger of two neutron stars. The conclusions should have morphological similarities with the long GRB jets. However, general relativistic consideration is necessary for short GRBs only. As the jet is hot and harbours relativistic temperatures, one needs a relativistic equation of state to account for the variable nature of the adiabatic index with its temperature. We describe the equation of state in section \ref{sec_eos} below.
A typical neutron star has mass $1.4-3.6 M_\odot$ and a radius of $10-12$ km, or roughly $1.5-3$ Schwarzschild radii ($r_g$) \cite{nc73,rr74,kb96,st08}. When two neutron stars merge, we expect the jet to erupt from the surface of the merger. It is clear that the analysis needs the inclusion of curved space-time. We consider the Schwarzschild metric and work with geometric units where length and time are measured in units of $GM/c^2$ (or $r_g/2$) and $r_g/2c$, respectively. Here $G$ is the universal constant of gravity, $M$ is the total mass after the merger, and $c$ is the speed of light. Hence, velocity is measured in units of $c$. The base of the jet is considered at $R^*=2 r_g$, i.e., we treat it as the surface of the star.
After launching, the jet interacts mechanically with the envelope above the stellar surface and is shaped accordingly. Properties of the considered jet cross-section are inspired by various numerical studies and are described in detail in section \ref{sec_cross} below. This is a one-dimensional study, {\em i.e., } we consider the jet's local properties to be constant across its horizontal width. This assumption allows us to treat the problem in one dimension, and it is justified for narrow jets with a small opening angle, which is reasonable for GRBs. We treat the cocoon as an auxiliary agent that shapes the jet and defines its geometry, and do not consider any energy exchange between the two. This means that in the adiabatic equation of state, the jet energy parameter remains constant and is not affected by the interaction with the cork, although the jet's dynamical evolution changes. Further, as the jet speed is likely to be significantly greater than the outflowing speed of the cocoon, we simply consider a stationary cross-section of the jet. It implies that the cocoon shapes the jet, but its collimation effects do not evolve with time.
\label{sec_assump}
\begin{figure}[H] \begin{center}
\includegraphics[width=10.5 cm]{geom.eps}
\caption{The cross-section of the jet above the stellar surface for various values of the collimation parameter: $m=0.01$ (solid red), $0.1$ (dotted black) and $0.3$ (dashed blue). The orange region is the merger where the two compact stars merge. The cocoon shapes the jet geometry and collimates it above the surface of the merger. Lengths are shown in units of $GM/c^2$, or $0.5 r_g$.}
\label{lab_jet_geom}
\end{center}
\end{figure}
\subsection{Jet cross section}
\label{sec_cross}
A fluid jet composed of baryons and leptons erupts from the stellar surface after the merger and is shaped by an envelope or cocoon above it. Consider a spherical coordinate system $(r,\theta, \phi)$ whose origin lies at the centre of the merger. The jet propagates vertically along the $\theta=0$ axis and the system has azimuthal symmetry ({\em i.e., } the dynamical parameters are independent of the $\phi$ coordinate). If the radius of the jet stem at radial coordinate $r$ is $y_j=r\sin \theta$, we define the jet geometry as,
\begin{equation}
y_j=\frac{mA_1(r-d)}{1+A_2(r-d)^2}+m_{inf}(r-d)+c_j
\label{yj.eq}
\end{equation}
Here the constants are $A_1=0.5$, $d=5$, and $A_2=1.0$. $m_{inf}=0.1/(1+m)$ is the slope of the radially expanding jet after it crosses the envelope, and $c_j=1.5$. $mA_1$ and $A_2$ shape the jet. $d$ constrains the approximate location of the cocoon at $r=d$ (as it is extended and has no precise location). $A_2$ controls the vertical extent up to which the cocoon affects the jet geometry. We retain constant values of $d$ and $A_2$ in this work, which means that the cork location, as well as its dimensions, is constant. $c_j$ is the mathematical intercept of the jet when it erupts following the merger and is also kept constant. The cross-section of the jet at location $r$ is given as $A=\pi y_j^2$. Here $m$ is a geometric parameter which represents the ability of the cocoon to affect the jet geometry. In other words, $m$ controls the cocoon's strength. This jet geometry is inspired by various studies carried out on an erupting GRB jet (see, e.g., \citep{bpb19, ijt21}). For $m=0$, the jet is radial throughout and there is no effect of the cork on its geometry. This type of geometric model was used by \citep{vc17}, where a jet launched around a black hole is collimated following its interaction with the inner torus of an accretion disc. However, such collimation is more prominent in a GRB jet as it pierces the cocoon, hence we choose the parameters accordingly. The considered geometry thus describes a jet that is first collimated and then flows radially.
In Figure \ref{lab_jet_geom}, we plot the respective jet geometries corresponding to $m=0.01, 0.1$ and $0.3$. In the cartoon diagram attached to this figure, the approximate jet launching location from the merger is also shown. For higher values of $m$, the jet is more strongly shaped after erupting. Hence $m$ controls the effective collimation above the stellar surface due to the jet's interaction with the outer envelope of the star, as well as the cocoon. The collimation of the jet due to the cocoon leads to a variation in the vertical gradient of the jet cross-section. As we will see in the next section, the parameter that affects the jet dynamics is $A^{-1}dA/dr$. This implies that even a slight fractional change in the cross-section results in a variation of the jet bulk speed.
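Equation \ref{yj.eq} is straightforward to tabulate. The following minimal Python sketch (using the constants quoted above; lengths in units of $GM/c^2$) reproduces the trend of Fig.~\ref{lab_jet_geom}: a stronger cork (larger $m$) yields a narrower jet far above the collimation zone.

```python
def jet_radius(r, m, A1=0.5, A2=1.0, d=5.0, c_j=1.5):
    """Jet stem radius y_j(r) of Eq. (yj): pinched by the cork near r = d,
    then asymptotically conical with slope m_inf = 0.1/(1 + m)."""
    m_inf = 0.1 / (1.0 + m)
    return m * A1 * (r - d) / (1.0 + A2 * (r - d) ** 2) + m_inf * (r - d) + c_j

# Larger m (stronger cocoon) -> narrower jet at large r.
y_weak, y_strong = jet_radius(100.0, 0.01), jet_radius(100.0, 0.3)
```

Note that at $r=d$ the first two terms vanish and $y_j=c_j$, consistent with the intercept interpretation in the text.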
\begin{figure}[H]
\begin{center}
\includegraphics[width=10.5 cm]{m_theta_op.eps}
\caption{Opening angle $\theta_{op}$ as a function of $m$ at $r=100$ (i.e., $r \gg d$).}
\label{lab_m_theta_op}
\end{center}
\end{figure}
The jet opening angle $\theta_{op}$ is defined at a large radius ($r \gg d$). To gain further physical insight into the jet geometry described above, we write Equation \ref{yj.eq} at large distances in the following form,
$$
y_j=\tan \theta_{op}(r-d)+c_j
$$
Here,
$$
\theta_{op} = \tan^{-1}\left[\frac{A_1m}{1+A_2(r-d)^2}+m_{inf}\right]
$$
In Figure \ref{lab_m_theta_op}, we plot the opening angle of the jet $\theta_{op}$ as a function of $m$ by choosing $r=100$. As expected, a higher value of $m$ leads to stronger collimation of the jet and results in a narrower opening angle. This parameterized representation of the jet geometry captures the cocoon's ability to collimate the jet. Hence $m$ represents the strength of the cork or cocoon.
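The asymptotic opening angle can be checked numerically; the short sketch below (same constants as above) verifies that $\theta_{op}$ decreases monotonically with the collimation parameter $m$, as in Fig.~\ref{lab_m_theta_op}.

```python
import math

def opening_angle(m, r=100.0, A1=0.5, A2=1.0, d=5.0):
    """Half-opening angle theta_op evaluated at a large radius r >> d,
    from the asymptotic slope of y_j(r)."""
    m_inf = 0.1 / (1.0 + m)
    slope = A1 * m / (1.0 + A2 * (r - d) ** 2) + m_inf
    return math.atan(slope)

angles = [opening_angle(m) for m in (0.01, 0.1, 0.3)]   # radians
```

At $r=100$ the Lorentzian term is negligible, so $\theta_{op}\approx\tan^{-1} m_{inf}$, and the decrease with $m$ comes almost entirely from $m_{inf}=0.1/(1+m)$.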
This type of collimation is similar to the conventional de Laval nozzle in hydrodynamics \cite{h17}. Here we treat the problem in curved space and the flow is relativistic in nature. In other words, we work out an analysis of the general relativistic de Laval nozzle problem in the context of a GRB jet erupting against a stellar cork ahead of it.
\subsection{Relativistic equation of state and its dependence upon flow composition}
\label{sec_eos}
To solve a set of hydrodynamic equations of motion, we need an equation of state, which is a closure relation between the internal energy density $e$, matter density $\rho$, and fluid pressure $p$. Here we use a relativistic adiabatic equation of state proposed in \citep{rcc06,cr09}. It is an approximated form of the original relativistic equation of state \citep{c57} (also see \citep{s57}). This equation of state provides good agreement with the exact results (see appendix C of \cite{vkmc15}) and is easy to use in analytic studies. It accounts for multiple species in the fluid flow and has been used to study the effect of flow composition in relativistic fluids \cite{cr09,crj13, vc18a,vc18b, vc19,sc20, ky22,jcr22,pmd22}. It allows us to define the thermodynamic state of the fluid for a given fraction of leptons over baryons. For a relativistic fluid with electron number density $n_{e^{-}}$, the equation of state is given as,
\begin{equation}
e=n_{e^-}m_ec^2f,
\label{eos.eq}
\end{equation}
Here $m_e$ is the mass of the electron. The parameter $f$ is a function of the jet composition, as well as the temperature of the flow.
\begin{equation}
f=(2-\xi)\left[1+\Theta\left(\frac{9\Theta+3}{3\Theta+2}\right)\right]
+\xi\left[\frac{1}{\eta}+\Theta\left(\frac{9\Theta+3/\eta}{3\Theta+2/\eta}
\right)\right].
\label{eos2.eq}
\end{equation}
We defined the non-dimensional temperature as
$\Theta=kT/(m_ec^2)$, with $k$ being the Boltzmann constant. Information about the fluid composition is contained in the parameter $\xi = n_{p^{+}}/n_{e^{-}}$, the ratio of the
number densities of protons and electrons. $\xi$ ranges from 0 to 1: $\xi=0$ represents a purely leptonic flow composed of electrons and positrons, while $\xi=1$ is for an electron-proton flow. $\eta = m_{e}/ m_{p^{+}}$, where $m_{p^{+}}$ is the mass of the proton.
By definition, the expressions for the polytropic index $N$, sound speed $a$, adiabatic index $\Gamma$, and specific enthalpy $h$ are given as,
\begin{equation}
N=\frac{1}{2}\frac{df}{d\Theta} ; ~~
a^2=\frac{\Gamma p}{e+p}=\frac{2 \Gamma \Theta}
{f+2\Theta};~~ \Gamma=1+\frac{1}{N};~~ h =({e+p})/{\rho}=({f+2\Theta})/{\tau}
\label{sound.eq}
\end{equation}
Here $\tau=(2-\xi+\xi/\eta)$ is a function of the jet composition. For insight into the effect of the relativistic equation of state compared to a nonrelativistic equation of state with an invariant adiabatic index, we direct the reader to compare the results of \citep{vc18a} and \citep{vc18b}, a study carried out in the context of X-ray binary jets.
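The limiting behavior of this equation of state is easy to verify numerically. The sketch below evaluates $f$ from Eq.~(\ref{eos2.eq}) and the derived quantities of Eq.~(\ref{sound.eq}), approximating $N=\frac{1}{2}df/d\Theta$ by a finite difference rather than the closed form. For an $e^-e^+$ flow ($\xi=0$) it recovers $\Gamma\to 5/3$ in the cold limit and $\Gamma\to 4/3$, $a\to c/\sqrt{3}$ in the ultra-relativistic limit.

```python
ETA = 1.0 / 1836.15   # eta = m_e / m_p

def f_eos(theta, xi, eta=ETA):
    """f(Theta, xi) of Eq. (eos2)."""
    return ((2.0 - xi) * (1.0 + theta * (9.0 * theta + 3.0) / (3.0 * theta + 2.0))
            + xi * (1.0 / eta + theta * (9.0 * theta + 3.0 / eta) / (3.0 * theta + 2.0 / eta)))

def eos_quantities(theta, xi, dth=1e-6):
    """Polytropic index N, adiabatic index Gamma, sound speed a (units of c)
    and specific enthalpy h, from Eq. (sound). N = (1/2) df/dTheta is
    approximated by a central finite difference."""
    n_poly = 0.5 * (f_eos(theta + dth, xi) - f_eos(theta - dth, xi)) / (2.0 * dth)
    gamma_ad = 1.0 + 1.0 / n_poly
    f = f_eos(theta, xi)
    a2 = 2.0 * gamma_ad * theta / (f + 2.0 * theta)
    tau = 2.0 - xi + xi / ETA
    h = (f + 2.0 * theta) / tau
    return n_poly, gamma_ad, a2 ** 0.5, h
```

The smooth interpolation between these two limits is precisely the variable adiabatic index that a fixed-$\Gamma$ equation of state cannot capture.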
\section{Jet dynamics}
\label{sec_dyn}
\subsection{Dynamical equations of motion of the jet}
The energy momentum tensor of a fluid in bulk motion is given as $T^{\alpha \beta}=(e+p)u^{\alpha}u^{\beta}+pg^{\alpha \beta}$.
Here $u^{\alpha}$ are the components of the four-velocity of the jet fluid and $g^{\alpha \beta}$ are the metric tensor components. The conservation of energy-momentum enables us to write down the equations of motion by setting the four-divergence of $T^{\alpha \beta}$ to zero, {\em i.e., }
\begin{equation}
T^{\alpha \beta}_{;\beta}=[(e+p)u^{\alpha}u^{\beta}+pg^{\alpha \beta}]_{;\beta} = 0~~~~
\label{eq_mo1}
\end{equation}
Projecting Equation \ref{eq_mo1} using the projection operator $(g^{i}_{\alpha}+u^iu_\alpha)$ gives us the momentum balance equation along $i^{th}$ coordinate while projecting it along four velocity $u^{\alpha}$ leads to the energy conservation principle as,
\begin{equation}
(g^{i}_{\alpha}+u^iu_\alpha)T^{\alpha \beta}_{{;\beta}}=0; ~~~ u_{\alpha}T^{\alpha \beta}_{{;\beta}}=0
\label{genmomb.eq}
\end{equation}
\\
For this semi-analytic study, we obtain steady-state equations of motion. We assume an axisymmetric jet propagating along $\theta=0$; since the system has $\phi$ symmetry, we need to derive and solve the equations along the radial coordinate $r$ only. As explained in section \ref{sec_assump}, we assume that the jet properties remain invariant along its horizontal extent, so we solve the equations along $r$ and reduce the problem to one dimension. The momentum balance equation and the energy conservation equation reduce to,
\begin{equation}
u^r\frac{du^r}{dr}+\frac{1}{r^2}=-\left(1-\frac{2}{r}+u^ru^r\right)
\frac{1}{e+p}\frac{dp}{dr},
\label{eu1con.eq}
\end{equation}
and
\begin{equation}
\frac{de}{dr}-\frac{e+p} {\rho}\frac{d\rho}{dr}=0,
\label{en1con.eq}
\end{equation}
Here $\rho$ is the local fluid density. The mass conservation enables us to write the continuity equation as
\begin{equation}
(\rho u^{\beta})_{; \beta}=0
\label{eq_cont}
\end{equation}
Integration of the continuity equation gives outflow rate along radial coordinate $r$ in spherical coordinates ($r,\theta,\phi$),
\begin{equation}
\dot {M}_{\rm {out}}=\rho u^r {A}.
\label{mdotout.eq}
\end{equation}
Here $A$ is the jet cross-section. The continuity equation can be written in differential form as,
\begin{equation}
\frac{1}{{\rho}}\frac{d{\rho}}{dr}+\frac{1}{{A}}\frac{d{A}}{dr}
+\frac{1}{u^r}\frac{du^r}{dr}=0
\label{con1con.eq}
\end{equation}
Pressure $p$ is given as
\begin{equation}
p=\frac{2\Theta \rho}{\tau}=\frac{2\Theta \dot {M}_{\rm {out}}}{\tau u^r A}
\label{pressure.eq}
\end{equation}
Using these relations, the equations of motion become
\begin{equation}
\frac{dv}
{dr}=\frac{\left[a^2\left\{\frac{1}{r(r-2)}+\frac{1}{A}
\frac{dA}{dr}\right\}-\frac{1}{r(r-2)}\right]}{\gamma^2v\left(1-\frac{a^2}{v^2}\right)}
\label{dvdr.eq}
\end{equation}
and
\begin{equation}
\frac{d{\Theta}}{dr}=-\frac{{\Theta}}{N}\left[ \frac{{\gamma}
^2}{v}\left(\frac{dv}{dr}\right)+\frac{1}{r(r-2)}
+\frac{1}{A}\frac{dA}{dr}\right]
\label{dthdr.eq}
\end{equation}
Here $v$ is the three-velocity of the jet, defined as
\begin{equation}
v^2=-u_iu^i/u_tu^t=-u_ru^r/u_tu^t
\end{equation}
Here $u^r=\sqrt{g^{rr}}{\gamma}v$ and $u_t=-{\gamma}\sqrt{(1-2/r)}$, where $\gamma$ is the Lorentz factor, with $\gamma^2=-u_tu^t$, and $g^{rr}=1-2/r$.
We need to integrate equations \ref{dvdr.eq} and \ref{dthdr.eq} simultaneously to obtain the dynamical evolution of the jet parameters $v$ and $\Theta$ along the radial coordinate $r$. All the other parameters are then derived using the relations defined above.
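The integration scheme can be sketched numerically. In the sketch below, the sound speed $a^2(\Theta)$, the polytropic function $N(\Theta)$ and the logarithmic cross-section derivative $(1/A)\,dA/dr$ are supplied as placeholder callables, since their closed forms follow from the equation of state and the jet geometry and are not reproduced here; the function and variable names are illustrative only.

```python
import numpy as np

# Minimal sketch of the integration of the coupled system dv/dr and
# dTheta/dr above.  The callables a2_of, N_of and dlnA_dr are stand-ins
# for the equation-of-state closures and the cross-section profile; they
# are assumptions of this sketch, not the paper's actual expressions.

def rhs(r, y, a2_of, N_of, dlnA_dr):
    """Right-hand sides (dv/dr, dTheta/dr) of the coupled system."""
    v, theta = y
    gamma2 = 1.0 / (1.0 - v * v)            # Lorentz factor squared
    a2 = a2_of(theta)
    grav = 1.0 / (r * (r - 2.0))            # the 1/(r(r-2)) term
    dv = (a2 * (grav + dlnA_dr(r)) - grav) / (gamma2 * v * (1.0 - a2 / v**2))
    dtheta = -(theta / N_of(theta)) * (gamma2 / v * dv + grav + dlnA_dr(r))
    return np.array([dv, dtheta])

def rk4_step(r, y, h, *eos):
    """One fourth-order Runge--Kutta step along the radial coordinate."""
    k1 = rhs(r, y, *eos)
    k2 = rhs(r + 0.5 * h, y + 0.5 * h * k1, *eos)
    k3 = rhs(r + 0.5 * h, y + 0.5 * h * k2, *eos)
    k4 = rhs(r + h, y + h * k3, *eos)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

In practice, the constants of motion $E$ and $\dot{\cal M}$ should be monitored at every step as a convergence check on the integration.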
\subsection{Constants of motion}
Integrating the momentum balance equation gives the first constant of motion, {\em i.e., }
\begin{equation}
A (e+p)u^ru_t=-{\dot E}={\rm constant}
\label{energflux.eq}
\end{equation}
We obtain the specific energy by dividing equation (\ref{energflux.eq}) by (\ref{mdotout.eq}),
{\em i.e., }
\begin{equation}
E=\frac{{\dot E}}{{\dot M}_{\rm out}}=-hu_t.
\label{enr.eq}
\end{equation}
It also enables us to define the total kinetic power of the jet as,
\begin{equation}
L_j={\dot E}={\dot M}_{\rm out}E
\label{ljet.eq}
\end{equation}
Integrating the energy conservation equation (\ref{en1con.eq}), one obtains an adiabatic relation analogous to
$p\propto \rho^{\Gamma}$ for constant $\Gamma$, with
\begin{equation}
\rho={\cal C}\mbox{exp}(k_3) \Theta^{3/2}(3\Theta+2)^{k_1}
(3\Theta+2/\eta)^{k_2},
\label{rho.eq}
\end{equation}
Here the defined parameters are $k_1=3(2-\xi)/4$, $k_2=3\xi/4$ and $k_3=(f-\tau)/(2\Theta)$, while ${\cal C}$ is the entropy constant. We obtain the entropy outflow rate by substituting $\rho$ into
equation (\ref{mdotout.eq}),
\begin{equation}
{\dot {\cal M}}=\frac{{\dot M}_{\rm out}}{{\rm geom. const.}
{\cal C}}=\mbox{exp}(k_3) \Theta^{3/2}(3\Theta+2)
^{k_1}
(3\Theta+2/\eta)^{k_2}u^rA
\label{entacc.eq}
\end{equation}
The two constants of motion are defined by equations (\ref{entacc.eq}) and (\ref{enr.eq}). ${\dot {\cal M}}$ is discontinuous at the shock.
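For reference, equation (\ref{entacc.eq}) can be evaluated directly. In the sketch below, $f$ and $\tau$ are equation-of-state quantities defined outside this section and are taken as inputs, as is the composition constant $\eta$; the interface is illustrative only.

```python
import math

def entropy_rate(theta, ur, A, xi, eta, f, tau):
    """Entropy-outflow rate of equation (entacc.eq).  The quantities f and
    tau come from the equation of state (defined outside this section) and
    are passed in, as is the composition-related constant eta; this is an
    illustrative interface, not the paper's code."""
    k1 = 3.0 * (2.0 - xi) / 4.0
    k2 = 3.0 * xi / 4.0
    k3 = (f - tau) / (2.0 * theta)
    return (math.exp(k3) * theta ** 1.5
            * (3.0 * theta + 2.0) ** k1
            * (3.0 * theta + 2.0 / eta) ** k2 * ur * A)
```

Since $\dot{\cal M}$ is discontinuous at the shock, this quantity is expected to be constant on each side of the transition but to jump across it.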
\section{Method of obtaining solutions : sonic point analysis and shock conditions}
\label{sec_method}
Once erupted from the surface of the merger, the matter is hot and sub-relativistic, which implies that it starts out subsonic ($v<a$). As it propagates, it accelerates under its thermal pressure and cools accordingly. By the time it escapes to infinity, the jet is supersonic ($v>a$). At some distance $r=r_s$, the jet therefore has to pass through a sonic point, where the bulk jet speed equals the local sound speed. Flows associated with such solutions are called transonic flows. The entropy of these solutions is maximum among the whole family of solutions; following the second law of thermodynamics, the jet naturally chooses the transonic trajectory of maximum entropy. Passing through a sonic point is a necessary condition for an initially subsonic flow to escape and become relativistic. We inject the jet with sub-relativistic, subsonic speeds and solve equations \ref{dvdr.eq} and \ref{dthdr.eq} simultaneously using the fourth-order Runge--Kutta method. Stability of the coupled equations and convergence of the numerical solutions require that the constants of motion $E$ and $\dot {\cal M}$ remain conserved across the jet extent; this should always be checked for each solution. Alternatively, one can study the dynamical evolution of the system directly using equations \ref{enr.eq} and \ref{entacc.eq} to arrive at identical results.
Sonic-point analysis is important for studying the mathematical nature of the solutions, as well as for revealing their physical content. A sonic point is located where the denominator of equation \ref{dvdr.eq} vanishes ($v=a$); for the solution to remain regular there, the numerator must vanish simultaneously. The sonic point condition is thus,
\begin{equation}
a_s^2=\left[1+r_s(r_s-2)\left(\frac{1}{{A}}\frac{d{A}}{dr}\right)_s\right]^{-1}
\label{sonic.eq}
\end{equation}
At $r=r_s$, the slope $(dv/dr)_{r=r_s}$ takes the indeterminate form $0/0$ and must be evaluated using L'H\^opital's rule \citep{s63}.
For a given sonic point, the energy parameter as well as the entropy are determined. With the condition $v_s=a_s$,
the sonic point serves as a mathematical boundary from which the transonic solutions can alternatively be computed by integrating the equations of motion outward ($dr>0$) as well as inward ($dr<0$).
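The sonic point condition can be turned into a simple numerical locator. The sketch below assumes, purely for illustration, a conical ($m=0$) cross-section with $(1/A)\,dA/dr = 2/r$, for which the condition reduces to $a_s^2 = 1/(2r_s-3)$; the bisection further assumes a single root, which holds for this profile but not for a collimated one with multiple sonic points.

```python
def sonic_a2(rs, dlnA_dr):
    """Sound speed squared at a sonic point r_s, from the sonic condition."""
    return 1.0 / (1.0 + rs * (rs - 2.0) * dlnA_dr(rs))

def find_sonic_point(a2_target, dlnA_dr, r_lo=2.5, r_hi=1.0e3):
    """Bisect for the r_s at which the sonic condition yields a2_target.
    Assumes sonic_a2 decreases monotonically with r_s, as it does for the
    conical profile below; a collimated profile can give several roots,
    i.e. multiple sonic points."""
    for _ in range(200):
        r_mid = 0.5 * (r_lo + r_hi)
        if sonic_a2(r_mid, dlnA_dr) > a2_target:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)

# Conical (m = 0) jet, A proportional to r^2: a_s^2 = 1/(2 r_s - 3).
conical = lambda r: 2.0 / r
```

For example, with this profile a jet whose sonic sound speed is $a_s^2=0.05$ becomes transonic at $r_s=11.5$.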
We will see that the collimation of the jet by the cocoon leads to the formation of multiple sonic points for a range of the energy parameter $E$. In the present case, a jet can have up to three sonic points, which makes it possible for the flow to go through a shock transition. The jump conditions are given by the relativistic Rankine--Hugoniot equations, which require the following flow quantities to remain continuous across the shock \cite{t48}.
\begin{equation}
[{\rho}u^r]=[(e+p)u^tu^r]=[(e+p)u^ru^r+pg^{rr}]=0,
\label{sk1.eq}
\end{equation}
The quantities in square brackets represent differences across the shock, {\em i.e., } $[A]=A_2-A_1$, where subscript 1 (2) denotes the respective quantity in the pre-shock (post-shock) region. These conditions ensure the invariance of the mass, momentum and energy fluxes across the shock. If multiple sonic points are present for a given value of $E$, the solutions through the outer and inner sonic points can be connected by a shock transition, provided the fluxes in Equation \ref{sk1.eq} coincide at the shock location $r=r_{sh}$. The shock is always located between the outer and inner sonic points. For any given value of $E$ admitting multiple sonic points, we compute the solutions through the outer and inner sonic points and test them against the shock conditions defined above to look for the location of a shock transition between the two. It may be noted that the entropy of the flow has a discontinuous jump at the shock and is not conserved in such a case. The fluid is compressed at the shock, and the associated compression ratio is defined as
\begin{equation}
{\cal R} = \frac{\rho_2}{\rho_1}
\end{equation}
A higher value of $\cal R$ represents a stronger shock transition. From the continuity equation (Equation \ref{mdotout.eq}), it can be written as
\begin{equation}
{\cal R} = \frac{\gamma_1 v_1}{\gamma_2 v_2}>1
\end{equation}
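The shock conditions lend themselves to a direct numerical check. The sketch below evaluates the compression ratio from the mass flux and the residuals of the three Rankine--Hugoniot fluxes; the dictionary-based interface is an illustrative assumption, not the actual implementation.

```python
import math

def compression_ratio(v1, v2):
    """R = rho2/rho1 = (gamma1 v1)/(gamma2 v2), from mass-flux continuity
    across the shock (subscript 1: pre-shock, 2: post-shock)."""
    g1 = 1.0 / math.sqrt(1.0 - v1 * v1)
    g2 = 1.0 / math.sqrt(1.0 - v2 * v2)
    return (g1 * v1) / (g2 * v2)

def shock_residuals(pre, post, grr):
    """Differences of the three Rankine--Hugoniot fluxes; a valid shock
    location drives all three to zero.  pre/post are dicts with keys
    rho, e, p, ur, ut (illustrative interface only)."""
    def fluxes(s):
        w = s["e"] + s["p"]                       # enthalpy density e + p
        return (s["rho"] * s["ur"],               # mass flux
                w * s["ut"] * s["ur"],            # energy flux
                w * s["ur"] ** 2 + s["p"] * grr)  # momentum flux
    return tuple(f2 - f1 for f1, f2 in zip(fluxes(pre), fluxes(post)))
```

A shock-finding routine scans candidate radii between the outer and inner sonic points for the location where all three residuals vanish simultaneously.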
\begin{figure}[H]
\begin{center}
\includegraphics[width=13.5 cm]{sonic_1.eps}
\caption{$E-r_s$ parameter space for various values of $m=0.0, 0.05, 0.1, 0.2, 0.3$ and $0.4$, keeping $\xi=0.1$. Deviation from the monotonic $m=0$ trajectory marks the effect of the cocoon's collimation on the jet dynamics. For $m\geq 0.1$, in a certain range of $E$, we obtain three sonic points for a single value of $E$, which marks the possibility of a shock transition in the flow. Points with black open circles and filled blue stars show the locations of the sonic points corresponding to the solutions in the top left and bottom right panels of Figure \ref{lab_vel_m0} respectively. Solutions corresponding to the sonic points with filled black circles on the $m=0.3$ curve are shown in Figure \ref{lab_vel_m0.3_xi_0.1}. }
\label{lab_sonic_1}
\end{center}
\end{figure}
\section{Results}
\label{sec_results}
For a given energy parameter $E$, the total energy of the jet is fixed, as we work with an adiabatic equation of state. We inject the jet at the surface of the star, assigning it a value of $E$ that corresponds to sub-relativistic flow speeds and a high sound speed. As the jets in this study are thermally driven, their dynamical evolution for a given energy parameter is likely to be affected by their composition ($\xi$), as well as by the cocoon's strength, which is controlled by the parameter $m$ in the jet cross-section profile.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5 cm]{vel_m0.eps}
\includegraphics[width=6.5 cm]{E_m0.eps}
\includegraphics[width=6.5 cm]{Ent_m0.eps}
\includegraphics[width=6.5 cm]{vel_1.eps}
\caption{Top left : velocity profiles associated with Figure \ref{lab_sonic_1} for the case $m=0$, choosing different values of $E=5.91$ (blue), 2.18 (red) and 1.39 (black). Top right and bottom left : invariance of the constants of motion $E$ and ${\dot {\cal M}}$, respectively, along the radial coordinate $r$ for the solutions in Figure \ref{lab_sonic_1}. Bottom right : Lorentz factor profiles for smaller values of $m=0.05$ and $0.2$, choosing $E=20$. Points with black open circles and blue filled stars denote the locations of sonic points as in Figure \ref{lab_sonic_1}. For all panels, $\xi=0.1$. }
\label{lab_vel_m0}
\end{center}
\end{figure}
\subsection{Dependence of flow solutions on the cocoon's strength}
To study the effect of the jet cross-section on its dynamics, we analyze a family of sonic points for different values of the energy parameter, keeping the composition constant. In Figure \ref{lab_sonic_1} we plot the energy parameter $E$ as a function of sonic point $r_s$ for various choices of $m$. For these results, we consider a fixed composition given by $\xi=0.1$.
For $m=0$, the jet is radial in nature and $E$ monotonically decreases with $r_s$. This means that a single sonic point corresponds to each value of $E$, and there is no effect of the cocoon on the jet's dynamics. It produces monotonic jet velocity ($v$) profiles, plotted in Figure \ref{lab_vel_m0} (top left panel) for the values of the energy parameter $E=5.91, 2.18$ and $1.39$. $E\rightarrow1$ represents a jet with $\gamma\rightarrow1$ at infinity; hence physical jets require $E\geq1$. A higher energy content corresponds to a more relativistic jet. Along with ${\dot M}_{\rm out}$ [Equation \ref{ljet.eq}], $E$ constrains the total kinetic luminosity of the jet. Both constants of motion, $E$ (top right) and $\dot{\cal M}$ (bottom left), remain invariant across the spatial extent of the jet in all the solutions. Black open circles in these velocity profiles mark the locations of the corresponding sonic points; these points are also shown in Figure \ref{lab_sonic_1}. A higher energy parameter brings the sonic point closer to the surface of the ejecta, as seen in Figure \ref{lab_sonic_1} (blue dotted curve).
These trajectories are similar to conventional Bondi solutions in the relativistic regime. For higher values of $m$ ({\em i.e., } $m\geq 0.1$), a single value of $E$ corresponds to multiple sonic points, which leads to the possibility of a shock transition. For a more detailed account of the nature of multiple sonic points and the formation of shocks, see \citep{kh76,ht83,ftr85}.
In the bottom right panel of Figure \ref{lab_vel_m0}, we plot the Lorentz factor profiles for a high-energy jet with $E=20$ and a weak cocoon ({\em i.e., } small values $m=0.05$ and $0.2$). The jet becomes transonic at the points shown by blue filled stars.
The cocoon slightly affects and collimates the jet after its ejection. However, the terminal value of $\gamma$ at infinity is unaffected by the cocoon's presence. This follows directly from the expression for $E$ and our adoption of an adiabatic equation of state, in which no energy is dissipated during the jet's propagation throughout its spatial extent.
\begin{figure}[H]
\begin{center}
\includegraphics[width=12.5 cm]{vel_m0.3_xi_0.1.eps}
\caption{Velocity profiles for $m=0.3$ with different values of $E=20.5$ (dotted blue), 3.19 (solid black) and 1.97 (dashed-dotted green). The vertical dotted line marks the shock transition at location $r=1.275R^*$ and filled circles denote the sonic points through which the physical solutions pass.}
\label{lab_vel_m0.3_xi_0.1}
\end{center}
\end{figure}
Next, we choose a stronger cocoon by setting $m=0.3$, keeping $\xi=0.1$ as before. The corresponding velocity profiles are plotted in Figure \ref{lab_vel_m0.3_xi_0.1} for three different jet energies, $E=20.5$ (dotted blue), 3.19 (solid black) and $1.97$ (dashed-dotted green). The physical solutions pass through the sonic points shown by black filled circles (also marked in Figure \ref{lab_sonic_1}). From Figure \ref{lab_sonic_1}, in the $E-r_s$ parameter space for $m=0.3$, we observe that multiple sonic points are present for a single value of $E$ over a large part of the sonic point space ($r_s<2$), which leads to the possibility of a shock transition. From the velocity profiles, we see that a jet with high energy ($E=20.5$, dotted blue) is largely insensitive to the cocoon's presence: it is mildly decelerated by the interaction before being accelerated further by thermal pressure to relativistic speeds. Similarly, a jet with a very low energy parameter ($E=1.97$, dashed-dotted green) is also unable to form a shock and has a smooth solution up to infinity. However, at intermediate energies ($E\sim 3.19$, solid black) the jet goes through a recollimation shock transition at $r_{sh}=1.275 R^*$. The reason why low-energy jets escape without being much affected by the cocoon is that they do not carry sufficient momentum to interact with it strongly enough to undergo a discontinuous transition. It should be noted that jets with even lower energies ($E\rightarrow 1$) are choked by the cocoon or generate breeze solutions. We do not consider such solutions in this study and restrict ourselves to cases where the jet has the minimum energy required to form a transonic outflow and escape successfully.
The jet harbours shocks within the energy range $E=3.09$ to $3.2$. In Figure \ref{lab_R_vs_E} we plot the compression ratio $\cal R$ as a function of $E$ (top left panel) and show that the shock in higher-energy jets is stronger. In the top right panel, the variation of the shock location $r_{sh}$ with $E$ is shown. A higher value of $E$ pushes the shock away from the base, and the shock region lies between $r=1.25R^*$ and $1.3R^*$ above the stellar surface. These solutions are for a fixed cocoon strength (constant $m=0.3$). In the bottom left and bottom right panels of Figure \ref{lab_R_vs_E}, we plot $\cal R$ and $r_{sh}$ as functions of $m$ for a jet with fixed energy parameter $E=3.26$. As expected, we obtain a stronger shock for higher values of $m$, or a stronger cocoon. Correspondingly, a stronger cocoon produces shocks closer to the jet base ({\em i.e., } $r_{sh}$ decreases with increasing $m$).
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5 cm]{R_vs_E.eps}
\includegraphics[width=6.5 cm]{rs_vs_E.eps}
\includegraphics[width=6.5 cm]{R_vs_m.eps}
\includegraphics[width=6.5 cm]{rs_vs_m.eps}
\caption{Variation of compression ratio $\cal R$ (top left) and shock location $r_{sh}$ (top right) with $E$ for $m=0.3$. Similarly $\cal R$ (bottom left) and $r_{sh}$ (bottom right) as functions of $m$ keeping $E=3.26$. For all these plots, $\xi=0.1$.}
\label{lab_R_vs_E}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.7 cm]{sonic_2_xi_vary.eps}
\includegraphics[width=6.7 cm]{sonic_3_xi_vary.eps}
\caption{$E-r_s$ parameter space for various choices of jet composition. Left panel : $\xi=0.1$ (dotted blue), 0.5 (dashed green), $0.75$ (dashed-dotted red) and 1.0 (solid black) for a moderate cocoon strength with $m=0.1$. Velocity profiles corresponding to $E=3.5$ (horizontal black solid line) are plotted in Figure \ref{lab_vel_m0.1_E_3.5_xi_vary}.
Right panel: $\xi=0.01$ (dotted blue) and 1.0 (solid black). Solutions of $E=3.26$ (horizontal black solid line) are shown in Figure \ref{lab_vel_m0.8_xi_1.0}.}
\label{lab_sonic_2_xi_vary}
\end{center}
\end{figure}
\subsection{Effect of fluid composition on jet dynamics}
To investigate the effect of the flow composition on the jet dynamics, we choose $m=0.1$, which corresponds to a weak cocoon, and plot the $E-r_s$ parameter space in Figure \ref{lab_sonic_2_xi_vary} (left panel) for different choices of the jet composition: $\xi=0.1$ (dotted blue), 0.5 (dashed green), 0.75 (dashed-dotted red) and 1.0 (solid black). For the whole range of $\xi$, the jet harbours multiple sonic points within the distance $\sim1.2R^*-1.7R^*$. However, the occurrence of multiple sonic points is more prominent in jets with low $\xi$. This can be understood as follows: a low value of $\xi$ corresponds to a relatively high fraction of leptons over baryons, i.e., a jet of lower inertia, and the cocoon more effectively collimates a less dense jet. In the right panel, we consider a strong cocoon with $m=0.8$ for $\xi=0.01$ (dotted blue) and $\xi=1.0$ (solid black). As the cocoon is stronger, it effectively collimates the jet irrespective of its composition, and the flow harbours multiple sonic points in most of the parameter space ($r_s<5R^*$).
\begin{figure}[H]
\begin{center}
\includegraphics[width=12.5 cm]{vel_m0.1_E_3.5_xi_vary.eps}
\caption{Jet velocity profiles for collimation parameter $m=0.1$ and $E=3.5$ for different choices of $\xi$ in the range $0.1-1$. Curves with decreasing vertical position represent decreasing values of $\xi$. No shock solutions are found, as the collimation is weak.}
\label{lab_vel_m0.1_E_3.5_xi_vary}
\end{center}
\end{figure}
In Figure \ref{lab_vel_m0.1_E_3.5_xi_vary}, we plot the velocity profiles associated with the weaker cocoon with $m=0.1$ (Figure \ref{lab_sonic_2_xi_vary}, left panel) for a fixed value of $E=3.5$, marked by the horizontal black line in the parameter space, with $\xi$ in the range 0.1-1.0. Solutions from top to bottom are in order of decreasing $\xi$. Interestingly, heavy jets ($\xi=1.0$) and lepton-dominated jets ($\xi=0.1$) are less affected by the cocoon than those with intermediate values of $\xi$: jets with high and low inertia are able to pierce through the cork more easily than jets with intermediate inertia. However, the cocoon is weak, and it is not capable of inducing shocks in the flow for any of these parameters.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.7 cm]{vel_m0.8_xi_1.0.eps}
\includegraphics[width=6.7 cm]{vel_m0.8_xi_0.75.eps}
\includegraphics[width=6.7 cm]{vel_m0.8_xi_0.50.eps}
\includegraphics[width=6.7 cm]{vel_m0.8_xi_0.10.eps}
\includegraphics[width=10.5 cm]{vel_m0.8_xi_combined.eps}
\caption{Jet velocity $v$ for different choices of $\xi=1.0$ (top left), 0.75 (top right), 0.50 (middle left), and 0.10 (middle right), with fixed values of $m=0.8$ and $E=3.26$. The whole range of $\xi$ produces a shock in the jet, as the collimation is strong. In the bottom panel, we overplot these profiles and zoom in on the region near the surface of the star to show that the shock gets stronger and is pushed outward as the baryon fraction in the jet increases.
}
\label{lab_vel_m0.8_xi_1.0}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=7 cm]{R_vs_xi.eps}
\includegraphics[width=7 cm]{rs_vs_xi.eps}
\caption{Variation of the compression ratio $\cal R$ (left) and shock location $r_{sh}$ (right) with the composition parameter $\xi$. The whole range of $\xi$ produces a shock in the jet, as the collimation is strong.}
\label{lab_R_vs_xi}
\end{center}
\end{figure}
In Figure \ref{lab_vel_m0.8_xi_1.0}, we plot the velocity profiles for a jet piercing a strong cocoon with $m=0.8$. The energy of the jet is kept fixed at $E=3.26$ (the horizontal black line in Figure \ref{lab_sonic_2_xi_vary}, right panel) and four cases, $\xi=1.0$ (top left), $\xi=0.75$ (top right), $\xi=0.50$ (middle left) and $\xi=0.1$ (middle right), are chosen. As the cocoon interacts strongly with the jet, in all cases the jet goes through a shock transition between the outer and inner sonic points, marked by black open circles. In the bottom panel, we overplot all these solutions and zoom in on the region around the shock. The shock is pushed away from the jet base for increasing $\xi$, and the shock transition is correspondingly stronger, as seen in the height of the vertical dashed lines marking the transitions. This conclusion is quantitatively evident in Figure \ref{lab_R_vs_xi}, where we plot the variation of the compression ratio $\cal R$ (left) and shock location $r_{sh}$ (right) with $\xi$. A jet with a higher baryon fraction produces a stronger shock, which forms at a larger distance.
\subsection{Jets with supersonic injection}
So far we have discussed transonic jets that start with subsonic speeds and become supersonic at the sonic point. Another possibility is jets that are injected at the stellar surface with supersonic speeds, or relativistic Lorentz factors \citep{mi13}. Such jets travel at supersonic speeds throughout their propagation. We consider one such case in Figure \ref{lab_vel_super}: a jet with an initial Lorentz factor of $10$ and an energy parameter $E=19$, facing a cork with collimation parameter $m=0.8$; the flow composition is kept at $\xi=0.1$. We observe that the jet decelerates due to the cocoon collimation. After escaping, it reaches Lorentz factors $\approx 20$. Such jets are physically realizable if there are accelerating agents that drive the jet up to supersonic speeds by the time of its eruption from the surface of the star.
\begin{figure}[H]
\begin{center}
\includegraphics[width=12.5 cm]{vel_super.eps}
\caption{Jet Lorentz factor profile $\gamma$ for supersonic injection with $\gamma=10$ at $r=R^*$. Here $m=0.8$, $\xi=0.1$ and $E=19$.}
\label{lab_vel_super}
\end{center}
\end{figure}
\section{Discussion and conclusions}
\label{sec_conclusions}
In this work, we studied a gamma-ray burst produced by the merger of a compact binary, giving rise to a relativistic jet that breaks out of the surface of the merger. The jet faces a cocoon, or stellar envelope, above the surface and pierces through it to escape to infinity. In the process, the jet dynamics is affected and the jet is effectively collimated by the cocoon. In this model, we describe the cocoon's effect through an empirical jet geometry (Equation \ref{yj.eq}), in which the degree of collimation is controlled by the parameter $m$. This geometry prescribes the cross-section that the jet follows during its evolution. We solve the hydrodynamic equations of motion with an adiabatic relativistic equation of state that is sensitive to the jet composition. The fluid composition is controlled by the parameter $\xi$, which sets the lepton fraction relative to the baryon concentration in the fluid. This is an exploratory analysis in which we describe the jet dynamics over all possible ranges of the jet energy parameter $E$, the cocoon strength $m$, and the lepton fraction in the matter composition.
In this hydrodynamic study of thermally driven relativistic GRB jets, we reconfirmed that a sufficiently strong cocoon is able to produce recollimation shocks in the jet stem. Additionally, we explored the dependence of the shock properties on dynamical parameters of the system. We can draw the following conclusions from this analysis.
\begin{itemize}
\item The mechanical interaction of the piercing jet with the cocoon leads to the formation of strong shocks. The possibility of a shock transition depends strongly on the energy content of the jet. A jet with very high or very low values of $E$ interacting with a cocoon of moderate strength ($m=0.3$) is less affected by it and is not capable of forming shocks; the shock transition takes place at intermediate energies (Figure \ref{lab_vel_m0.3_xi_0.1}). Our hydrodynamic study captures the theoretical picture of jet collimation by the cocoon in a GRB jet, repeatedly seen in various numerical studies \cite{nhs14}, and the formation of recollimation shocks above the stellar surface \cite{lml15}. However, in the current model we observe only a single recollimation shock, compared to the multiple shocks seen in some simulations; the difference arises from the complicated structure and time evolution of the cocoon--jet interaction, which are not captured here.
\item The compression ratio $\cal R$, as well as the transition location $r_{sh}$, of the generated shock is sensitive to all the free parameters: the cocoon strength $m$, the jet energy parameter $E$ and the fluid composition $\xi$. Stronger shocks are formed by higher collimation, as well as by jets with greater energy.
\item We show that jets injected at sub-relativistic speeds can be driven up to relativistic Lorentz factors through thermal driving alone. These jets comfortably achieve Lorentz factors of a few $\times$ $10$. As we have ignored other possible accelerating agents, such as magnetic driving and radiation, in this study, the Lorentz factors obtained are moderate. The Lorentz factor of GRB 170817A is constrained to be $\gamma=13.4^{+9.8}_{-5.5}$ \cite{zwm17}. Further, the observationally constrained minimum Lorentz factors for several GRBs with early-time radio emission are found to lie in a typical range of $5.8-21$ (Table 4 of \cite{avs14}), which is again consistent with the magnitudes obtained in this study. The observed upper limits of GRB Lorentz factors extend an order of magnitude higher, and the above-mentioned acceleration mechanisms may be responsible in those cases.
\end{itemize}
\section{Missing links in this work and future prospects}
\label{sec_future}
Notwithstanding the value of this study in exploring GRB jet dynamics in the general relativistic regime, along with the dependence of the outcomes on the jet composition, the model is quite simple and does not consider jet--cocoon mixing and its possible effects. We also ignored the effects of other possible factors, such as radiation driving and large-scale magnetic fields, on the jet. In future work, we will study the effect of the radiation field of the star and the cocoon and will elaborate on its consequences for the jet dynamics. In previous works on the radiation driving of X-ray binary jets and AGN jets, we established that external radiation fields are capable of inducing recollimation shocks in the jets; it is worth seeking such effects in GRB jets. In the presence of magnetic fields across the shock, the recollimation shocks obtained in this paper would accelerate nonthermal electrons through diffusive Fermi acceleration \citep{be87}. However, to account for a precise estimation of the particle acceleration, one needs to incorporate the effect of the magnetic field on the shock conditions as well as on the jet propagation. Furthermore, the effect of the jet composition on the shock properties in this model has, to the best of our knowledge, no numerical analogue and can be tested in future simulations to reveal its significance as well as its time-dependent nature.
\acknowledgments{I am thankful to the anonymous reviewers, who helped in clarifying various aspects of the study, and I am grateful to Asaf Pe'er for insightful discussions and important suggestions. I further acknowledge support from the Israeli government's PBC program and from the European Union (EU) via ERC consolidator grant 773062 (O.M.J.).}
\conflictsofinterest{The author declares no conflict of interest.}
\begin{adjustwidth}{-\extralength}{0cm}
\reftitle{References}
Ionisation chambers are of great utility for measuring radionuclide activities and half-lives. The chamber outputs a current proportional to the activity of the source inside the chamber, with the constant of proportionality determined by primary calibration methods involving absolute counting of decay events from a diluted source \cite{schrader1997activity,schrader2007ionization}. The linearity and stability of the ionisation chamber current measurement is ensured by traceable calibration of the current measuring instrument. Historically, these instruments have usually been capacitor-ramp electrometers which integrate the ionisation chamber current and allow the current to be calculated according to $I = C\frac{dV}{dt}$. For ionisation chamber currents in the picoamp to nanoamp range, voltage ramp rates of $\frac{dV}{dt} \sim 1$~V/s require capacitances $C$ in the picofarad to nanofarad range. Such capacitors are available commercially as low-loss air or sealed-gas units possessing long-term stability at the part-per-million level, and low sensitivity to temperature and humidity changes. The relevant calibrations of voltage, capacitance and time are available as standard services from national metrology institutes (NMIs), and accredited laboratories, with relative uncertainties less than $10$ parts per million (ppm), and in the absence of complicating factors these low uncertainties are transferred directly to the measured current.
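As a numerical illustration of the relation $I = C\frac{dV}{dt}$ and of how the traceable calibration uncertainties combine: the values below (a 100 pF capacitor, a 1 V/s ramp rate, and ppm-level relative standard uncertainties for the capacitance, voltage and time calibrations) are assumed for illustration only and are not taken from any particular calibration.

```python
import math

# Illustrative numbers only: a 100 pF standard capacitor ramped at 1 V/s
# gives a 100 pA current, and the (assumed) relative standard uncertainties
# of the capacitance, ramp-voltage and time calibrations combine in
# quadrature when the contributions are uncorrelated.

def ramp_current(capacitance_F, ramp_rate_V_per_s):
    """I = C * dV/dt for a capacitor-ramp electrometer."""
    return capacitance_F * ramp_rate_V_per_s

def combined_rel_u(*u_rel):
    """Quadrature sum of uncorrelated relative standard uncertainties."""
    return math.sqrt(sum(u * u for u in u_rel))

current = ramp_current(100e-12, 1.0)     # 100 pA
u_I = combined_rel_u(1e-6, 2e-6, 1e-6)   # C, dV/dt and t contributions
```

With these assumed inputs the combined relative uncertainty is a few parts per million, dominated by the largest single contribution, which is consistent with the transfer of calibration uncertainties described above.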
In the last 15-20 years, a number of developments have occurred in the field of small current metrology which encourage a fresh look at ionisation chamber current readout methods. In response to industry demand, a number of NMIs have inaugurated calibration services for nanoamp-level ammeters with uncertainties as low as $\sim 10$~ppm. Reference currents are usually sourced by applying a linear voltage ramp to a low-loss capacitor (essentially the reverse process of a capacitor-ramp electrometer) \cite{willenberg2003traceable,van2005accurate,fletcher2007new,callegaro2007current}. To validate these new services, the first international intercomparison of reference current sources was undertaken \cite{willenberg2013euromet}. While broadly validating NMI capability, the comparison could not provide information at uncertainty levels much below $\sim 100$~ppm due to transport instability and environmental effects in the commercial ammeters used as transfer standards. In parallel, research into prototype quantum current sources, known as electron pumps, which generate small currents by moving electrons one at a time \cite{pekola2013single}, focused attention on small-current metrology at the lowest possible uncertainty level. In this research setting, currents of order $100$~pA have been measured with combined uncertainties of $\sim 0.2$~ppm \cite{stein2016robustness,zhao2017thermal}. A practical spin-off from the electron pump research has been the ultrastable low-noise current amplifier, or ULCA \cite{drung2015ultrastable,drung2017ultrastable}. This instrument, following calibration using a cryogenic current comparator (CCC)\cite{drung2015improving}, can either source or measure small currents with uncertainties as low as $0.1$~ppm, and has demonstrated stability under international transportation at the $1$~ppm level \cite{drung2015validation}. 
Recently, different versions of the ULCA have been tested, including ones with high gain and small, stable, offset suitable for the measurement of the very low background currents from ionisation chambers \cite{drung2017ultrastable,krause2017measurement}.
Inspired by these developments, in this paper we test an alternative traceability route for ionisation chamber currents: an ammeter calibrated directly using a primary reference small-current source. We compare this ammeter method with an established capacitor-ramp method in which the traceability is to standards of capacitance, voltage and time, and discuss the advantages and limitations of each. We also address an important and neglected question in ionisation chamber metrology: how does the random uncertainty in the measured current depend on the measurement time, and what is the optimum interval between chamber background measurements.
\section{\label{TracSec} Traceability routes}
In figure \ref{TraceFig} (a), three complete traceability routes for small electrical currents are summarized, starting with primary standards at the top. Electron pumps are included for completeness; although they currently have the status of research devices, they offer a very direct traceability route and are likely to play a role in primary current metrology in the future \cite{pekola2013single}. In this paper, we will be concerned mainly with the first two - the capacitor ramp method and the resistor/voltage method.
The capacitor ramp method realizes current via the rate of change of voltage across a capacitor, and the concept can be applied to either the generation or measurement of a current. The traceability route for capacitance is either to the dc quantum Hall resistance (QHR) via a quadrature bridge and ac/dc transfer resistor, or via the calculable capacitor, which realizes a small ($\lesssim 1$~pF) capacitance based on a length measurement. Both these routes are moderately complex to implement, but the end result is that standard capacitors of $1$~nF or less can be calibrated routinely at audio frequencies with uncertainties of order $1$~ppm. Voltage is traceable to the ac Josephson effect, and digital voltmeters (DVMs) can be calibrated directly against a Josephson voltage standard (JVS), or indirectly using a calibrator or a Zener diode voltage reference. High-specification DVMs may drift by at most a few ppm in a 1-year calibration interval and have non-linearity errors less than $1$~ppm. The third traceable quantity, time, can be realized with ppm accuracy in a number of ways - for example using a commercial off-air frequency standard. The quantity $C\times \frac{dV}{dt}$ can consequently be realized with an uncertainty of a few parts per million, and precision reference current sources have mostly used this route \cite{willenberg2003traceable,van2005accurate,fletcher2007new,callegaro2007current}.
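As a numerical illustration of the $C \times \frac{dV}{dt}$ realisation and its combined uncertainty, the following sketch uses round illustrative numbers rather than an actual uncertainty budget:

```python
import math

# Hypothetical capacitor-ramp parameters (illustrative round numbers).
C = 500e-12      # feedback capacitance in farads (~500 pF)
dV_dt = 0.1      # ramp rate in volts per second
I = C * dV_dt    # realised current I = C * dV/dt, here 50 pA

# Illustrative relative standard uncertainties (ppm) of the three
# traceable quantities: capacitance, voltage and time.
u_C, u_V, u_t = 1.0, 1.0, 0.5
u_I = math.sqrt(u_C**2 + u_V**2 + u_t**2)   # quadrature combination

print(f"I = {I*1e12:.1f} pA, combined uncertainty = {u_I:.2f} ppm")
```

The quadrature combination assumes the three input uncertainties are uncorrelated, which is the usual treatment for independently calibrated standards.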
Generation of sub-nA reference currents using a resistor and voltage source is less common. This may be because high-value standard resistors, in contrast to sub-nF air-gap capacitors, can have temperature coefficients as large as a few tens of ppm per degree, and therefore require additional environmental control to reach ppm-level accuracy. Calibration uncertainties of high-value resistors have also been generally higher than those of low-value capacitors, although ppm-level calibration uncertainties of resistors up to $1$~G$\Omega$ are now attainable using CCCs \cite{fletcher2000cryogenic,bierzychudek2009uncertainty}. The ULCA \cite{drung2015ultrastable} also generates and measures current with respect to an internal $1$~M$\Omega$ resistor and an external DVM, and as already noted, has demonstrated 1-year stability at the ppm level. The resistor and voltage source method has the obvious advantage that current can be generated continuously without being constrained by a capacitor charge-discharge cycle.
\begin{figure}
\includegraphics[width=8.5cm]{TraceRoutes}
\caption{\label{TraceFig}\textsf{(a): Diagram showing routes for traceable generation of small currents via three main mechanisms: capacitor ramping, Ohm's law and the controlled transport of charge. Abbreviations are JVS = Josephson voltage standard, QHR = quantum Hall resistance, DVM = digital voltmeter. The elementary charge is denoted $e$. (b): Simplified schematic circuit diagram of an integrating electrometer. (c): Simplified schematic circuit diagram of a feedback ammeter.}}
\end{figure}
A problem with the capacitor ramp method is that the low calibration uncertainties of the standard capacitors are achieved using voltage-transformer bridge techniques \cite{kibble1984coaxial} which work at audio frequencies. Calibrations are typically performed at $1$~kHz, and the techniques can be extended down in frequency to a practical lower limit of $\sim 25$~Hz. In contrast, capacitor ramp methods for generating or measuring small currents operate at frequencies many orders of magnitude lower, in the millihertz range. One study found that some samples of standard capacitor exhibited unexpectedly large frequency dependence in the range $\sim 10$~mHz - $1$~kHz, up to several hundred ppm \cite{giblin2010frequency}, which is certainly far in excess of the $1$~kHz calibration uncertainty and begins to impact the uncertainty budgets of NMI-level ionisation chamber readout systems. Either the capacitance needs to be measured at the ramp frequency, which is a laborious and non-standard procedure \cite{giblin2010frequency}, or the capacitance uncertainty must be expanded to allow for a worst-case scenario. This issue reduces somewhat the apparent advantages of the capacitor ramp method, and prompts fresh consideration of the resistor and voltage method.
\section{\label{SystSec}Current measurement and generation systems}
\subsection{Current measurement systems}
The two types of current readout system investigated in this paper, the capacitor ramp electrometer and the feedback ammeter, are illustrated schematically in figure \ref{TraceFig} (b,c). We will refer to them subsequently as the `electrometer' and the `ammeter' respectively. Both types of instrument use a high-gain amplifier with feedback; the feedback element is a capacitor in the case of the electrometer, and a resistor in the case of the ammeter. The electrometer used in this study employs a home-made amplifier with an external integrating air-gap capacitor of value $\sim 500$~pF, and an external DVM (Datron model 1061) triggered with a calibrated $1$~s interval between readings. We denote the current measured by the electrometer $I_{\text{E}} \equiv C_{\text{corr}} \times \frac{dV}{dt}$. Here, $C_{\text{corr}}=C_{\text{cal}}+C_{\text{stray}}$ where $C_{\text{cal}}$ is the calibrated value of the standard capacitor, and $C_{\text{stray}}$ is the stray capacitance correction. For the bulk of the study, excepting the data of figure \ref{NoiseFig} (f,g), the ammeter was a Keithley model 6430 set to the 1 nA range \footnote{We could have used the 100 pA range, achieving slightly lower instrument noise at the expense of slower response time. However, the noise in $I_{\text{A}}$ is roughly two orders of magnitude higher than the instrument noise floor so no benefit is obtained by using a lower range}. The resistive feedback of the ammeter gives an output voltage $V_{\text{out}}=I_{\text{in}}R$, which is digitized by an analogue-to-digital converter (ADC) internal to the instrument, and converted to a current reading by the instrument's firmware. The feedback resistor ($\sim 1$~G$\Omega$ on the 1 nA range) is internal to the ammeter, and the ammeter was calibrated by supplying it with a reference current.
\begin{figure}
\includegraphics[width=8.5cm]{CircuitFig}
\caption{\label{CircuitFig}\textsf{Simplified schematic circuit diagram showing the input stage of an ammeter connected to a non-ideal current source with finite output resistance $R_{\text{out}}$. The offset current and voltage are denoted $I_{\text{off}}$ and $V_{\text{off}}$ respectively, and the current and voltage noise are denoted $I_{\text{n}}$ and $V_{\text{n}}$ respectively.}}
\end{figure}
\subsection{Noise considerations}
In figure \ref{CircuitFig}, we present an expanded circuit model for the input stage of an ammeter connected to a current source, including the offsets and noise sources\footnote{The term `noise', frequently used in this paper, means random fluctuations in a measured signal, irrespective of the origin of those fluctuations.} present in real ammeters, and the finite output resistance $R_{\text{out}}$ of the current source. Additional noise due to the current source itself is not considered in this model. The voltage offset and noise are represented by a single source in the diagram for convenience (and likewise for the current offset and noise), but this should not be taken to imply that they are due to the same process or component in the amplifier. The same circuit describes the electrometer, but with $R$ replaced by a capacitor. A detailed discussion of amplifier properties is beyond the scope of this paper, but some qualitative comments will help with interpreting the data of sections \ref{TypeASec} and \ref{AgreeSec}. The total amplifier noise is the sum of three contributions: the current noise $I_{\text{n}}$, the thermal noise in the feedback resistor $R$ (in the case of capacitive feedback, there is no thermal noise), and the voltage noise $V_{\text{n}}$ driving a noise current in the source resistance $R_{\text{out}}$ \cite{graeme1996photodiode}. Crucially, while the first two contributions are independent of $R_{\text{out}}$, the last one scales as $1/R_{\text{out}}$, and therefore grows as the source resistance decreases. The reference current source used for calibrating the ammeter and electrometer (described in the next paragraph) has $R_{\text{out}} = 1$~G$\Omega$, whereas an ionisation chamber presents a very high output resistance, $R_{\text{out}} \gg 1$~G$\Omega$.
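The three noise contributions combine in quadrature; the sketch below uses illustrative amplifier figures (not measured values for either instrument in this study) to show how the $V_{\text{n}}$ term matters for the $1$~G$\Omega$ reference source but becomes negligible for the very high output resistance of an ionisation chamber:

```python
import math

k_B, T = 1.380649e-23, 293.0   # Boltzmann constant (J/K), room temperature (K)

def total_current_noise(I_n, V_n, R, R_out):
    """Quadrature sum of the three contributions (A/sqrt(Hz)):
    amplifier current noise I_n, thermal noise of the feedback
    resistor R, and amplifier voltage noise V_n driving a current
    in the source resistance R_out. Pass R = None for capacitive
    feedback, which has no thermal-noise term."""
    thermal = 4 * k_B * T / R if R is not None else 0.0
    return math.sqrt(I_n**2 + thermal + (V_n / R_out) ** 2)

# Illustrative (not measured) figures: 1 fA/sqrt(Hz) current noise,
# 1 uV/sqrt(Hz) voltage noise, 1 GOhm feedback resistor.
I_n, V_n, R = 1e-15, 1e-6, 1e9
for R_out in (1e9, 1e12):   # reference source vs ionisation chamber
    noise = total_current_noise(I_n, V_n, R, R_out)
    print(f"R_out = {R_out:.0e} Ohm: {noise*1e15:.2f} fA/sqrt(Hz)")
```

With these assumed figures the $V_{\text{n}}/R_{\text{out}}$ term is comparable to the other contributions at $R_{\text{out}} = 1$~G$\Omega$ but three orders of magnitude smaller at $1$~T$\Omega$, consistent with the qualitative argument above.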
We therefore expect $V_{\text{n}}$ to contribute more noise during a calibration of the instrument than when measuring an ionisation chamber current. Depending on the relative sizes of $I_{\text{n}}$ and $V_{\text{n}}$ (we did not separately measure these for either the electrometer or the ammeter), we may expect to see an increase in the noise when the instrument is connected to the reference current source. Generally, designers of amplifiers have to make trade-offs, and it is difficult to make both $V_{\text{n}}$ and $I_{\text{n}}$ arbitrarily small. We note that the measurement of ionisation chamber currents is an application in which $V_{\text{n}}$ can be relaxed somewhat in a specialised instrument design, due to the very high output resistance of the source, to enable the smallest possible $I_{\text{n}}$. A commercial ammeter, on the other hand, may offer smaller $V_{\text{n}}$ and larger $I_{\text{n}}$, in order to yield a reasonable total noise when measuring current sources with a wide range of $R_{\text{out}}$.
The same general comments also apply to the offset current and voltage, $I_{\text{off}}$ and $V_{\text{off}}$; in instrument design there is typically a trade-off between the two. We measured $V_{\text{off}}=5$~mV for the electrometer, and $V_{\text{off}} \sim 0.2$~mV for our Keithley 6430 ammeter unit on the 1 nA range. The large offset exhibited by the electrometer caused a $5$~pA offset current to flow when it was connected to the reference current source for the measurements of section \ref{AgreeSec} A, but the on-off calibration cycle subtracted this offset and measured only the gain factor of the electrometer.
\subsection{Reference current source}
Our reference current generator consisted of a calibrated, temperature-controlled $1$~G$\Omega$ standard resistor, an uncalibrated voltage source and a calibrated DVM (model Keysight 3458A). The combined type B uncertainty in the reference current was $\sim 1$~ppm. In discussing calibrations, we need first to distinguish the instrument's gain factor from its offset. We describe the relationship between the true current, $I_{\text{true}}$, and current indicated by the instrument, $I_{\text{Ind}}$, as $I_{\text{true}} = (g \times I_{\text{Ind}}) + I_{\text{offset}}$, where $g$ is the gain factor. Our calibration determines only the gain factor $g$. The offset current $I_{\text{offset}}$ is automatically removed from the background-corrected measurements of activity discussed in section \ref{AgreeSec}, since it is present in the current with and without the radionuclide source in the ionisation chamber. We calibrated the gain factor of the ammeter every 2-3 days during the measurement period, and we denote the current measured by the ammeter, after adjusting the indicated current for the gain factor, as $I_{\text{A}}$. Care was taken not to subject the sensitive ammeter preamp unit to mechanical shock, as previous experience with the EM-S24 small-current inter-comparison \cite{willenberg2013euromet} showed that even small mechanical shocks, such as plugging a cable into the preamp, could change the gain factor by several tens of ppm. Following these precautions, the ammeter calibration factor changed by less than $5$~ppm over $2-3$ weeks. For part of the study, we also used the same reference current source to calibrate the electrometer, as detailed in section \ref{AgreeSec}.
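The gain-factor extraction from an on-off calibration cycle amounts to a simple ratio of difference currents; a minimal sketch with hypothetical instrument readings (the offset cancels in the differences, which is why the calibration determines only $g$):

```python
def gain_factor(delta_I_ref, delta_I_indicated):
    """Gain factor from an on-off calibration cycle: switching the
    reference between zero and a known level cancels the instrument
    offset, so g = (difference in reference current) / (difference
    in indicated current)."""
    return delta_I_ref / delta_I_indicated

# Hypothetical readings in pA; the 0.3 pA offset cancels in the
# differences and does not affect g.
on_ref, off_ref = 49.995, 0.0     # reference current settings
on_ind, off_ind = 50.280, 0.300   # indicated currents, incl. offset
g = gain_factor(on_ref - off_ref, on_ind - off_ind)
print(f"gain factor g = {g:.6f}")
```

The gain-corrected current $I_{\text{A}}$ is then simply $g$ times the indicated current.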
\section{\label{TypeASec} Dependence of type A uncertainty on averaging time}
All the radionuclide measurements were performed using the same ionisation chamber, which was of type Vinten 671. To assess the type A (statistical) uncertainty after a given averaging time, we placed a sealed Ra-226 source in the chamber and measured the current for periods of several hours. Raw data from ammeter measurements is shown in figure \ref{AdevFig} (a). The ammeter was set to integrate each data point for $10$ power line cycles (PLC), with the auto zero function disabled, and consequently the raw data set consists of $5$ data points per second. In figure \ref{AdevFig} (b), the same data has been block-averaged so that each plotted point is averaged over $85$ seconds of measurement time. A plot of the ionisation chamber current from the same Ra-226 source, measured using the electrometer, is shown in figure \ref{AdevFig} (c). In this plot, each data point is obtained from one voltage ramp cycle. The ramp cycle lasted $85$~s, so the data points in figures \ref{AdevFig} (b) and (c) can be directly compared, i.e. each data point corresponds to the instrument integrating the current signal for the same amount of time. The offset of $\sim 0.1$~pA between the two instruments is not significant because these measurements are not corrected for the background, and we will investigate the agreement between the two systems in section \ref{AgreeSec}. The significant feature visible in these long data sets is that the average current measured by the ammeter appears to drift with time, decreasing by $\sim 50$~fA over the first few hours and continuing a downward drift more slowly for the remainder of the measurement time. The rapid drift visible at the start of this data set was rather atypical of the performance of this instrument, and was not the result of mechanical shock. The instrument was powered up and running in acquisition mode for several days prior to the start of the data set.
We cannot rule out the possibility that the drift is due to a change in the ambient temperature, coupled to a temperature dependence of the ammeter's gain-setting resistor, but this is unlikely as calibrations of the ammeter spread over two weeks showed the gain to be stable at the $10^{-5}$ level. In contrast to the ammeter data, the current measured by the electrometer appears to be stationary in time. Next, we employ the Allan deviation to investigate this observation more quantitatively.
\begin{figure}
\includegraphics[width=8.5cm]{AdevGraph}
\caption{\label{AdevFig}\textsf{(a): raw ammeter data obtained while measuring the output of an ionisation chamber containing a Ra-226 source. (b): The data in (a) averaged in 85-second blocks. (c) Data from the same source/chamber combination measured using a capacitor ramp electrometer. Each ramp takes $85$~seconds. (d): Allan deviation as a function of averaging time $\tau$ of the data in plot (a) (open triangles), plot (c) (filled diamonds) and an additional data set obtained with the ammeter disconnected from the ionisation chamber to measure its noise floor (open diamonds). The diagonal solid line is a guide to the eye with slope $1/ \sqrt{\tau}$. The vertical double arrow indicates the difference between the ammeter noise floor and the noise when measuring the ionisation chamber current, and the horizontal dashed lines indicate relative random uncertainties of $0.1 \%$ and $0.01 \%$.}}
\end{figure}
The Allan deviation is a statistical tool developed as a way of assigning a meaningful statistical uncertainty to data with a non-stationary mean \cite{allan1987should}. It is widely used in time and frequency metrology, and its use in electrical metrology is becoming more widespread, for example to characterize the stability of voltage standards \cite{witt2000using} and current comparator bridges \cite{williams2010automated,drung2015improving}. Here, we briefly summarize it. The Allan deviation $\sigma _{\text{A}}$ is computed from a time-series of data points evenly spaced over a total time $T$. The computation yields $\sigma _{\text{A}}$ as a function of averaging time $\tau$, for $\tau \lesssim T/4$. For the case of frequency-independent noise, $\sigma _{\text{A}}(\tau) = \sigma / \sqrt{\tau}$, where $\sigma$ is the standard deviation of the data; in other words, the Allan deviation is equal to the standard error of the mean, and decreases as the square root of the measurement time. However, in the presence of frequency-dependent noise, the standard error of the mean is no longer a meaningful measure of the type A uncertainty. Two examples of frequency-dependent noise are $1/f$ noise, in which the Allan deviation is independent of $\tau$, and random-walk, or $1/f^2$ noise, in which the Allan deviation increases as the square root of $\tau$.
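For readers wishing to reproduce this analysis, the non-overlapping Allan deviation can be computed directly from a current time series; a minimal pure-Python sketch, demonstrated on synthetic white-noise data (illustrative only):

```python
import math
import random

def allan_deviation(data, m):
    """Non-overlapping Allan deviation for blocks of m samples:
    sigma_A(tau)^2 = (1/2) * <(ybar_{i+1} - ybar_i)^2>, where the
    ybar_i are averages over consecutive blocks of m points and
    tau = m * (sample interval)."""
    n_blocks = len(data) // m
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_blocks - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# For white (frequency-independent) noise, sigma_A falls as 1/sqrt(tau):
random.seed(0)
white = [random.gauss(50e-12, 1e-15) for _ in range(100_000)]
for m in (1, 100, 10_000):
    print(f"m = {m:>6}: sigma_A = {allan_deviation(white, m):.3e} A")
```

For real instrument data exhibiting $1/f$ or random-walk noise, the computed $\sigma_{\text{A}}$ flattens or rises with $m$ instead of falling as $1/\sqrt{\tau}$.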
The Allan deviation of the time-domain data from figure \ref{AdevFig} (a) and (c) is shown in figure \ref{AdevFig} (d). Note that the first data point for the electrometer is at $\tau=85$~s, the time for one integration ramp, whereas the ammeter data starts at $\tau=0.2$~s, the time to acquire one reading. It is clear that both instruments have very similar $\sigma _{\text{A}}$ for $\tau < 2000$~s, and that $\sigma_{\text{A}} (\tau) \propto 1/ \sqrt{\tau}$. For $\tau > 2000$~s, the behaviour of the two instruments diverges. The ammeter enters a regime of approximately $1/f$ noise, in which further increases in the averaging time do not result in any further decrease in the type A uncertainty. The lowest type A uncertainty achievable with the ammeter, based on this data set, is $\sim 5$~fA, or 100 ppm of $I_{\text{A}}$. The electrometer, on the other hand, continues to follow $\sigma_{\text{A}} (\tau) \propto 1/ \sqrt{\tau}$ out to the longest time-scale probed by this data set, $\tau \sim 40000$~s, where $\sigma _{\text{A}} \sim 1$~fA, or 20 ppm of $I_{\text{E}}$.
Some insight into the behaviour of the ammeter can be gained by plotting the Allan deviation of a time-series of data taken with the instrument left open-circuit (open diamonds in figure \ref{AdevFig} (d)). This exhibits a transition to $1/f$ noise at $\tau \sim 10$~s, which is due to the low frequency behaviour of its input bias current noise. A small additional contribution may be due to the ADC voltage measurement \footnote{Instability in the analogue-to-digital conversion of the preamp output voltage can be ameliorated by using the ammeter's auto zero function. However, each reading then takes at least twice as long}. Referring to section \ref{SystSec}, the superior stability of the electrometer at long averaging times is probably a consequence of it having a more stable offset current and voltage than the ammeter. We might also propose that the electrometer has a more stable gain-setting element (the feedback capacitor) than the ammeter. However, in section \ref{AgreeSec}, and referring to the inset to figure \ref{AgreeFig} (a), we see that the ammeter gain is stable at the level of $10^{-5}$ on time-scales of a few hours, so the $10^{-4}$ limit to the type A uncertainty discussed in the previous paragraph is unlikely to be due to instability in the resistive gain element.
\begin{figure}
\includegraphics[width=8.5cm]{NoiseGraph}
\caption{\label{NoiseFig}\textsf{(a): Ammeter current as a function of ionisation chamber voltage, with the ionisation chamber energised with a low-noise laboratory DC supply. The Ra-226 source in the chamber is the same as in figure \ref{AdevFig}. (b,c): Ionisation chamber current with the chamber energised using (b): the low-voltage source and (c): the high-voltage source. In each data trace, the source is initially in the chamber, and is then removed. (d): Allan deviation of sections of data with the chamber empty from plots (b) and (c). Open symbols: LV source, filled symbols: HV source. (e): As (d), but with the Ra-226 source in the chamber. (f): Amplitude spectra of current noise from an empty chamber energised with the LV and HV sources. (g): as (f), but with the Ra-226 source in the chamber.}}
\end{figure}
The analysis presented in this section is not intended to be a definitive comparison of the two types of current measuring instrument, nor should the ammeter data be interpreted as definitively describing the particular make and model of instrument used in this study \footnote{The averaging time at which the Allan deviation changes from white-noise behaviour to frequency-dependent behaviour depends on the instrument range, the stability of the environmental conditions and the particular unit of instrument (of the same model number)}. Rather, it is intended to demonstrate a methodology for evaluating the type A uncertainty achieved following a given averaging time. For example, referring again to figure \ref{AdevFig} (d), if a statistical uncertainty of $50$~fA ($0.1 \%$ of the signal from the Ra-226 source) was desired, it is only necessary to integrate the current for $30$~s using either type of instrument. Knowledge of the stability of the current measuring instrument is also important when designing a protocol for measuring the chamber background current. One possible such protocol would be to measure the background current once a day, and subtract the same background from all calibrations performed that day. In this case, the Allan deviation of the readout current for $\tau = 1$~day would yield the minimum meaningful statistical uncertainty achievable in any calibration. Since instruments generally suffer from $1/f$ or random-walk behaviour at long time-scales, a more robust procedure would be to measure a new background signal every time the chamber is empty, i.e. in between calibrations of different sources.
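The white-noise scaling behind such averaging-time estimates can be sketched numerically; the $1$~s Allan deviation used below is a hypothetical value chosen to be consistent with the $\sim 30$~s figure quoted above, not a separately measured quantity:

```python
def required_averaging_time(sigma_1s, target):
    """In the white-noise regime, sigma_A(tau) = sigma_1s / sqrt(tau),
    so the averaging time (s) needed to reach a target type A
    uncertainty is tau = (sigma_1s / target)**2, with sigma_1s the
    Allan deviation at 1 s of averaging."""
    return (sigma_1s / target) ** 2

# A hypothetical 1 s Allan deviation of ~274 fA together with a
# 50 fA target (0.1% of the ~50 pA Ra-226 signal) gives ~30 s
# of averaging.
tau = required_averaging_time(274e-15, 50e-15)
print(f"required averaging time ~ {tau:.0f} s")
```

This scaling only holds while the instrument remains in its white-noise regime; once $1/f$ noise sets in, no amount of further averaging reduces the type A uncertainty.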
\section{\label{NoiseSec} Investigation of excess noise}
A remarkable feature of the data in figure \ref{AdevFig} (d) is the roughly factor of $100$ increase in the short-averaging-time noise when the ammeter is connected to the energised ionisation chamber. This excess noise is indicated by a vertical double arrow. The excess noise is not due to the cable connecting the ammeter to the ionisation chamber. Separate measurements showed that the cable on its own, or indeed the cable connected to the chamber, but with the high voltage (HV) source disconnected from the chamber, increased the noise by a negligible amount compared to the situation with the ammeter input left open circuit. The statistical nature of current generation in the ionisation chamber can be expected to add a shot-noise contribution, but we do not believe this is a significant contributor to the total noise because there was only a small decrease in the total noise (less than a factor of $2$) when the source was removed from the chamber.
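As an order-of-magnitude check on the shot-noise argument, the full shot-noise spectral density $\sqrt{2eI}$ for a $\sim 50$~pA chamber current can be estimated; this simple expression treats each charge as arriving independently and ignores the ion statistics of the chamber, so it is only a rough bound:

```python
import math

e = 1.602176634e-19   # elementary charge in coulombs

def shot_noise_density(I):
    """Full shot-noise current spectral density sqrt(2*e*I) in
    A/sqrt(Hz), the level expected if each charge arrived
    independently."""
    return math.sqrt(2 * e * I)

# For a ~50 pA ionisation chamber current:
S_I = shot_noise_density(50e-12)
print(f"shot noise ~ {S_I*1e15:.1f} fA/sqrt(Hz)")
```

The resulting few fA/$\sqrt{\text{Hz}}$ is small compared with the excess noise observed with the energised chamber, consistent with shot noise not being the dominant contribution.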
To investigate the nature of the excess noise, we replaced the HV source with a low-noise laboratory voltage source (Yokogawa GS200), which we will refer to as the low-voltage (LV) source. This source was limited to a maximum of $32$~V, but as shown in figure \ref{NoiseFig} (a), the chamber current almost reached saturation at this voltage using the same Ra-226 source employed in the measurements reported in section \ref{TypeASec}. In figures \ref{NoiseFig} (b) and (c) we show data measured using the ammeter, in which the source was initially in the chamber, and was then withdrawn from the chamber. The data of figures \ref{NoiseFig} (b) and (c) were obtained using the LV and HV voltage sources, set to $32$~V and $1455$~V, respectively. The lower current noise when using the LV source is immediately apparent. Allan deviation plots of sections of the data from figures \ref{NoiseFig} (b) and (c), shown in panels (d) and (e), reveal, however, that the reduction in noise using the LV source is rather more complicated than might appear from the time-domain data plots. With the chamber empty, the reduction in noise using the LV source is indeed dramatic, at least a factor of 20 for averaging times from $0.2$~s to $100$~s. A single $0.2$~s data point using the LV source has a type A uncertainty of less than $10$~fA, while to achieve the same type A uncertainty using the HV source requires averaging for at least $100$~s. With the Ra-226 source loaded into the chamber (figure \ref{NoiseFig} (e)), the LV source yields lower noise for averaging times up to a few seconds. For longer averaging times, the Allan deviation plots using the two voltage sources converge, and the LV source yields roughly a factor 2 lower noise than the HV source.
Next, we measured the amplitude spectra of the current noise using both the LV and HV sources, with the chamber empty and containing the Ra-226 check source. For these measurements, the Keithley 6430 ammeter was replaced with an ammeter setup consisting of a Femto DDPCA-300 transimpedance amplifier with gain set to $10^8$~V/A followed by a Keysight 34461A integrating voltmeter sampling $1000$ times a second. The bandwidth (3 dB point) of the transimpedance amplifier is $150$~Hz. Time-domain data traces were transformed in software to yield the amplitude spectra scaled in units of pA/$\sqrt{\text{Hz}}$ (figures \ref{NoiseFig} (f) and (g)). The spectra have peaks at multiples of the $50$~Hz power line frequency with both voltage sources, but the striking difference between the sources is at frequencies below about $50$~Hz, where the HV source generates a broad background with an amplitude more than ten times that of the LV source. The background due to the HV source persists even if its variable voltage is turned down as low as $10$~V, although it disappears if the voltage is set to zero. This data convincingly shows that the HV source is the origin of a large part of the excess noise first seen in figure \ref{AdevFig} (d). We did not attempt to investigate the origin of the noise further, for example by directly measuring the voltage noise spectral density of the two voltage sources. It is nevertheless clear that elimination of excess noise due to the HV power supply, by filtering or improved design, would result in reductions in the amount of time required to achieve a given resolution in a measurement of ionisation chamber current, and more dramatic reductions in the time required to measure the background current.
\section{\label{AgreeSec} Absolute agreement between two readout systems}
\subsection{Calibration of electrometer using reference current source}
We now return to the comparison between the ammeter and the electrometer. In this section, we investigate how well the two systems agree in background-corrected measurements of a range of radioactive sources. As already noted in section \ref{TracSec}, the gain factor of the ammeter was regularly calibrated using a reference current source consisting of a $1$~G$\Omega$ standard resistor and a calibrated DVM. Here, we also calibrated the gain factor of the electrometer using the same reference current source. For all the calibrations, the reference current was periodically switched between a nominal zero setting, and $50$~pA, yielding a difference current $\Delta I_{\text{cal}}=49.995$~pA. The difference currents $\Delta I_{\text{A}}$ and $\Delta I_{\text{E}}$ were extracted from the instrument readings. Figure \ref{AgreeFig} (a) shows values of $\Delta I_{\text{A}}$ (top-left inset) and $\Delta I_{\text{E}}$ (main plot) extracted from calibrations of the ammeter and electrometer respectively, over times of several hours. The most striking difference between the two instruments is that the values of $\Delta I_{\text{E}}$ exhibit much more statistical scatter than those for $\Delta I_{\text{A}}$ (note the different y-axis scales for the main panel of figure \ref{AgreeFig} (a) and the inset). This could be a consequence of the specialised design of the electrometer amplifier module: as discussed in section \ref{SystSec}, the input voltage noise of the amplifier module will cause excess noise when it is connected to the $1$~G$\Omega$ reference current source, and the electrometer may have a larger input voltage noise than the ammeter. However, since we did not directly measure the voltage noise for either the electrometer or the ammeter, this remains a conjecture.
After averaging the statistical fluctuations in the calibration data of figure \ref{AgreeFig} (a), we find that the mean current difference indicated by the electrometer, $\langle \Delta I_{\text{E}} \rangle$, is offset from $\Delta I_{\text{cal}}$ by a statistically significant amount: $(\Delta I_{\text{cal}} - \langle \Delta I_{\text{E}} \rangle) / \Delta I_{\text{cal}} = (460 \pm 46) \times 10^{-6}$. This error, $460$ ppm, is much larger than the uncertainty in the capacitance, voltage and time components used to calculate $I_{\text{E}}$, although still much smaller than the uncertainties in the radionuclide-specific ionisation chamber calibration factors. We now consider the possible causes of this error.
The most likely cause of the error is non-linearity of the voltage ramp. In a previous study on another type of capacitor-ramp electrometer, non-linearity of the $V(t)$ ramp was at the level of a few parts in $10^{4}$ for currents in the range of $10$~pA to $100$~pA \cite{giblin2009si}. The non-linearity was assumed to arise due to dielectric storage, or other non-ideal properties of $C_{\text{stray}}$. However, it could not be satisfactorily modeled, and the measured non-linearity was used to assign empirical type B components to the uncertainty budget for the electrometer \cite{giblin2009si}. Measurements of $V(t)$ were also made on the electrometer under investigation in this study, and they also showed non-linearity at the level of a few parts in $10^{4}$. More extensive characterisation of the voltage ramp over a range of currents is needed to clarify this error mechanism.
As already noted in section \ref{TracSec}, a possible source of error in capacitor-ramp electrometers is frequency-dependence in the feedback capacitor. We measured the frequency dependence of the capacitor (a sealed-gas unit of $\sim 500$~pF) over the range $50-20000$~Hz using a commercial precision capacitance bridge (AH2700A), and found that it only changed by a few ppm. However, we did not measure the capacitance at the millihertz frequencies at which the electrometer operates. In a previous study, it was found that capacitors with a large frequency dependence in the millihertz range also showed an anomalously large dependence in the audio range \cite{giblin2010frequency}. However, this study was based on a small sample of capacitors, and we cannot conclusively rule out capacitor frequency dependence as the cause of the $460$~ppm gain error in the ionisation chamber electrometer.
Finally, the error may be simply an artifact due to the input resistance of the electrometer in conjunction with the $1$~G$\Omega$ output resistance of the reference current source. The sign of the error (the indicated current is less than the actual current) is consistent with this mechanism. The measured $460$~ppm error would imply an input resistance of $460$~k$\Omega$, which is quite high but not implausible. Future calibrations, using reference current sources with different output resistances, will clarify this matter. In the following comparison between the ammeter and electrometer, we simply treat the calibration of the electrometer as yielding a correction factor, in the same manner in which we calibrated the ammeter.
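The input-resistance estimate follows from a simple Thevenin loading model (our assumption, not a measured property of the electrometer):

```python
R_out = 1e9        # reference source output resistance (Ohm)
error = 460e-6     # measured relative gain error

# Modelling the source as a Thevenin equivalent, the indicated
# current is I_true * R_out / (R_out + R_in), so for small errors
# the fractional error is approximately R_in / R_out and the
# implied input resistance is:
R_in = error * R_out
print(f"implied input resistance ~ {R_in / 1e3:.0f} kOhm")
```

A source with a different $R_{\text{out}}$ would change the error in proportion, which is why calibrations against sources of different output resistance would discriminate this mechanism from a genuine gain error.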
\begin{figure}
\includegraphics[width=8.5cm]{AgreementGraph}
\caption{\label{AgreeFig}\textsf{(a): The main plot shows the current indicated by the capacitor ramp electrometer when supplied with a known current of $49.995$~pA from a calibrated source. Each data point is averaged from $\sim 40$~minutes of voltage ramps, and the error bars indicate the standard error on the mean of the current calculated from the individual ramps. Horizontal dashed lines show the calibrated current (upper line) and the mean of the indicated currents (lower line). The upper left inset shows the current indicated by an ammeter when supplied with the same calibrated current. The inset shares the same time axis, and averaging time per point is the same as for the main plot, but note the different y-axis scales of the inset and main plot. (b): Agreement between the current measured by the electrometer and ammeter, when connected to an ionisation chamber in a series of measurements of four different radionuclides. Red open points: electrometer current as indicated. The red dashed line with error bar shows the weighted mean. Blue filled points: electrometer current corrected for the calibration factor determined from plot (a). The blue dashed line with error bar shows the weighted mean. Each measurement is corrected for background, and error bars indicate the type A uncertainty. The nuclide and approximate current are indicated above each pair of data points.}}
\end{figure}
\subsection{Background-corrected measurements using both readout systems}
As a direct comparison, the electrometer and ammeter were both used to measure background-corrected ionisation chamber currents from four different radionuclides. Each measurement consisted of a raw data set similar to that shown in figure \ref{NoiseFig} (c), from which the background corrected currents $I_{\text{AC}}$ and $I_{\text{EC}}$ were obtained. To ensure that random geometrical factors due to source placement inside the chamber did not affect the comparison, the source was only put into the chamber once for each comparison. So, for example the ammeter would be used to measure first the empty chamber, then the source. Next the electrometer would be used to measure the source followed by the empty chamber. The socket at which the instruments were connected and disconnected from the chamber was mechanically isolated from the chamber via a cable to avoid disturbing the position of the source when the instruments were swapped. As detailed, $I_{\text{AC}}$ already incorporates a correction factor from the ammeter calibration. $I_{\text{EC}}$ was optionally corrected, based on the calibration detailed in the previous sub-section. Figure \ref{AgreeFig} (b) shows the normalised difference between the two background corrected currents both with and without the calibration correction applied to the electrometer current. After applying the correction, the weighted mean of the $4$ points yields the average $\langle \frac{I_{\text{AC}}-I_{\text{EC}}}{I_{\text{EC}}} \rangle = (-0.009 \pm 0.021) \%$, as indicated by the blue horizontal dashed line and error bar; the two systems agree within the random uncertainties. Without applying the correction factor, the weighted mean of the normalised differences is $(0.037 \pm 0.021) \%$, a statistically significant disagreement. 
The measurement of the 4 radionuclides could be considered as an indirect comparison of the two current measuring instruments, although with a higher uncertainty than the direct calibrations discussed in the previous section. Our ability to compare the two measurement systems with radionuclide measurements is hampered by the excess noise, probably due to the HV supply as discussed in section \ref{NoiseSec}, but we conclude that once they are both calibrated using a reference current, they agree to within $\sim 0.02 \%$. They could be considered as equivalent candidates for an ionisation chamber readout system, provided a reference current source was available to calibrate them.
\section{\label{ConcSec} Conclusions}
We compared examples of a feedback ammeter and an integrating electrometer, and we conclude that the feedback ammeter, calibrated using a reference current source, is a viable alternative to the integrating electrometer traditionally used for ionisation chamber readout. For measuring ionisation chamber currents of a few tens of picoamps at an uncertainty level of $0.1 \%$, which is sufficient for most radionuclide calibrations, the two current readout systems can be considered equivalent. At an uncertainty level of $0.01 \%$, the two systems are also equivalent with respect to type A uncertainty, both reaching a relative type A uncertainty of $0.01 \%$ for a current of $50$~pA after $1000$ seconds of averaging. However, when calibrated using a reference current source, the electrometer was found to be in error by $0.046 \%$. This highlights the importance of calibrating electrometers directly using reference current sources, as non-idealities in these systems can introduce errors orders of magnitude larger than the ppm-level uncertainties in the individual calibrations of capacitance, voltage and time. Reference current sources can be realised at uncertainty levels of around 1~ppm using calibrated standard resistors, voltmeters, and now the ULCA.
Independent of the readout system, the type A uncertainty was increased significantly above the measuring-instrument noise floor by background noise originating in the high-voltage source. This shows that careful engineering of a low-noise high-voltage source would be a fruitful project, enabling type A uncertainties below $0.01 \%$ to be achieved in just a few seconds of measurement time. We have also presented the Allan deviation as a useful statistical tool for evaluating the stability of current measuring instruments as a function of measuring time. This helps the design of calibration protocols which make the most efficient use of the available time to reach a desired uncertainty level.
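As a concrete illustration of this statistical tool, the sketch below implements the standard non-overlapping Allan deviation estimator on simulated white noise. The function name, interface and simulated data are ours and purely illustrative; for white noise the Allan deviation should fall as the inverse square root of the averaging time.

```python
import numpy as np

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation for an averaging time of m samples.

    `samples` is a 1-D array of equally spaced current readings.
    """
    n_bins = len(samples) // m
    # Average the readings into contiguous bins of m samples each
    means = samples[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    # The Allan variance is half the mean squared difference of adjacent bins
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# For white noise the Allan deviation falls as 1/sqrt(averaging time)
rng = np.random.default_rng(0)
readings = rng.normal(0.0, 1.0, 100_000)
adev_1 = allan_deviation(readings, 1)      # close to 1.0 for unit-variance noise
adev_100 = allan_deviation(readings, 100)  # close to 0.1, i.e. 1/sqrt(100)
```

A flattening of this curve at long averaging times, rather than the continued $1/\sqrt{\tau}$ decline, is what signals drift or correlated noise and sets the useful measurement duration.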
\begin{acknowledgments}
This research was supported by the UK Department for Business, Energy and Industrial Strategy and the EMPIR Joint Research Project `e-SI-Amp' (15SIB08). The European Metrology Programme for Innovation and Research (EMPIR) is co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. The authors would like to thank Kelley Ferreira for assistance with handling radionuclide samples.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
The mechanism of core collapse supernovae (ccSNe) remains an open problem
despite over half a century of theoretical effort. In particular, theory
cannot reliably predict which stars that undergo core collapse become
ccSNe or which form black holes instead of neutron stars (e.g.
\citealt{Zhang2008}, \citealt{Nordhaus2010}, \citealt{Oconnor2011}, \citealt{Fryer2012},
\citealt{Hanke2012}, \citealt{Takiwaki2012}, \citealt{Ugliano2012}, \citealt{Couch2013},
\citealt{Dolence2013}, \citealt{Couch2014}, \citealt{Dolence2014}, \citealt{Wong2014}).
Observationally, comparisons
of massive star formation and SNe rates (\citealt{Horiuchi2011},
\citealt{Botticella2012}), limits on the diffuse SN neutrino background
(\citealt{Lien2010}, \citealt{Lunardini2009}), the Galactic rate (\citealt{Adams2013}),
and direct searches
for failed SNe (\citealt{Kochanek2008}, \citealt{Gerke2014}) limit
the fraction of failed ccSNe to $f \ltorder 50\%$ of core collapses.
There does, however, appear to be a deficit of higher
mass ccSN progenitors (\citealt{Kochanek2008}), which is best
quantified by the absence of red supergiant progenitors above
$M \gtorder 17M_\odot$ (\citealt{Smartt2009}). A natural
explanation of this deficit is that $f \sim 20$ to $30\%$ of
core collapses fail to produce a ccSN and instead form a
black hole without a dramatic external explosion. Further
progress in completing the mapping between progenitors and
outcomes both for successful ccSNe and in searches for failed
ccSNe should clarify these issues over the next decade.
We have few probes of the ccSN mechanism other than this
mapping between progenitors and external outcomes, although
even this does not constrain the balance between neutron star
and black hole formation in successful ccSNe. Neutrinos or
gravitational waves would be the best probes of the physics of core
collapse, but this will only be possible for a Galactic supernova
where the rates are low (once every $\sim 50$~years, \citealt{Adams2013}).
Furthermore, the stellar mass function favors having a relatively
low mass progenitor for which the neutrino mechanism for ccSNe
works reasonably well and we have high confidence that the outcome
is a neutron star (e.g., \citealt{Thompson2003}, \citealt{Kitaura2006}, \citealt{Janka2008})
rather than a rarer, higher mass progenitor where
the explosion mechanism and the type of remnant remain problematic.
Even if the rate of failed ccSNe is $f\simeq 20$ to $30\%$, the probability of
detecting the formation of a black hole in the Galaxy is very low.
Another direct probe of the SN mechanism is the mass function of the remnant
neutron stars and black holes (see, e.g., \citealt{Bailyn1998},
\citealt{Ozel2010}, \citealt{Farr2011}, \citealt{Kreidberg2012},
\citealt{Ozel2012}, \citealt{Kiziltan2013}). Most neutron star masses are clustered
around $1.4 M_\odot$, black hole masses are clustered around
$5$ to $10M_\odot$, and there is a gap (or at least a deep
minimum) between the masses of neutron stars and black
holes. Interpreting these results is challenging because
these masses can only be measured in binaries and the selection
functions for finding neutron star and black hole binaries are both
different and difficult or impossible to model from first
principles.
If, however, we assume that the separate mass distributions of neutron
stars and black holes are relatively unbiased, then we can use them
to constrain the physics of core collapse. For example, in \cite{Pejcha2012},
we showed that the masses of double neutron star binaries strongly
favored explosion models with no mass falling back onto the proto-neutron
star and that the explosion probably develops at the edge of the
iron core near a specific entropy of $S/N_A \simeq 2.8 k_B$. Fall-back
is traditionally invoked in order to explain how the observed masses
of black holes can be less than the typical masses of their progenitors
(e.g., \citealt{Zhang2008}, \citealt{Fryer2012}, \citealt{Wong2014}).
However, in \cite{Kochanek2014}, we pointed out that failed ccSNe
of red supergiants naturally produce black holes with the observed
masses because the hydrogen envelope is ejected to leave a remnant
that has the mass of the helium core (\citealt{Nadezhin1980},
\citealt{Lovegrove2013}). Similarly, \cite{Burrows1987} noted
that stellar mass loss processes lead to stars (e.g., Wolf-Rayet stars)
that will produce black holes with masses comparable to that of
the helium core.
Based on these concepts, \cite{Clausen2014} associated the black hole mass with the helium
core mass and then estimated the probability of black hole formation
as a function of progenitor mass needed to explain the observed
black hole mass function. As expected from the arguments in
\cite{Kochanek2014}, this required a peak in the probability
distribution at initial (ZAMS) masses of $M_0=20$-$25M_\odot$
while also allowing a second peak at $M_0\sim 60M_\odot$ because
mass loss leads the \cite{Woosley2007} progenitor models to produce
the same helium core mass for two different initial masses.
However, while stellar mass largely determines the fate of a star, it
is not directly related to the physics of core collapse.
A more physical approach would be to relate the formation of a black
hole to some property of the stellar core at the onset of collapse
that is related to the likelihood of a successful explosion. This
should not only lead to a more realistic model of the remnant mass
distribution, but the parameter range needed to explain the mass
function can then be used to inform models of core collapse.
\cite{Oconnor2011} argued that the compactness of the core defined by
\begin{equation}
\xi_M = { M \over M_\odot} {1000~\hbox{km} \over R(M_{bary}=M) }
\label{eqn:compact}
\end{equation}
is a good metric for the ``explodability'' of a star. In particular,
$\xi_{2.5}$, the compactness at a baryonic mass of $M_{bary}=2.5M_\odot$, is a measure
of how rapidly the density is dropping outside the iron core. If
$\xi_{2.5}$ is small, the density is dropping rapidly and it is easier
for neutrino heating or other physical effects to revive the shock and
produce a successful explosion. The reverse holds if the
compactness is high. \cite{Ugliano2012} argued for a lower
compactness threshold than \cite{Oconnor2011}, and also
considered the correlation of other properties of the core
with the production of a successful explosion, finding
relatively strong correlations with the binding energy of the
material outside the iron core and little correlation with the
mass of the iron core or the mass inside the oxygen burning shell.
The compactness is not a simple function of progenitor mass,
and \cite{Sukhbold2014} find that complex interactions between
(in particular) carbon and oxygen burning shells can drive rapid
variations in compactness with stellar mass.
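As a concrete illustration of the compactness definition above, the toy calculation below evaluates $\xi_{2.5}$ for a power-law density profile normalised (by construction, purely for illustration) so that the enclosed mass reaches $2.5M_\odot$ at a radius of $10^4$~km, which should give $\xi_{2.5}=2.5\times 1000/10^4=0.25$.

```python
import numpy as np

MSUN = 1.989e33          # g
KM = 1.0e5               # cm
# rho = A / r^2 chosen so the enclosed mass 4*pi*A*r hits 2.5 Msun at 10^4 km
A = 2.5 * MSUN / (4.0 * np.pi * 1.0e4 * KM)

r = np.linspace(1.0e5, 2.0e9, 200_000)   # radial grid in cm
rho = A / r**2
# Enclosed mass by cumulative trapezoidal integration of 4*pi*r^2*rho
# (the mass inside the innermost grid point is negligible here)
dm = 4.0 * np.pi * r**2 * rho
m_enc = np.concatenate(([0.0], np.cumsum(0.5 * (dm[1:] + dm[:-1]) * np.diff(r))))

# Radius enclosing 2.5 Msun, then the compactness xi_2.5 = (M/Msun)*(1000 km / R)
r_25 = np.interp(2.5 * MSUN, m_enc, r)
xi_25 = 2.5 * (1000.0 * KM) / r_25
```

A steeper density fall-off moves $R(2.5M_\odot)$ outward and lowers $\xi_{2.5}$, which is why low compactness corresponds to cores that are easier to explode.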
In this paper, following the approach of \cite{Pejcha2012} for
neutron stars, we model the observed black hole mass function
to determine what explosion criteria will explain the black hole
mass function under the assumption that the black hole mass equals the
helium core mass at the time of explosion. While partly inspired
by \cite{Clausen2014}, our approach ties the model to the
physics underlying the success of an explosion rather than
the initial stellar mass. Unlike
\cite{Clausen2014}, we will also directly fit the data on
black hole masses from \cite{Ozel2010} rather than fitting
their parametrized model of the black hole mass function. This is of some importance because
the compactness is a complex function of initial mass, leading
to a mass function with non-trivial structure. In \S2 we
summarize our statistical model, in \S3 we present our
results, and in \S4 we discuss the future of this approach.
\section{Statistical Methods}
\label{sec:constraints}
There are three elements to our calculation. First, we must estimate the probability
$P(D_i|M_{BH})$ of the data $D_i$ for black hole candidate system $i$ given an estimated
black hole mass $M_{BH}$. Second, we must estimate the probability $P_j(M_{BH}| \vec{p})$
of finding a black hole of mass $M_{BH}$ given a set of model parameters $\vec{p}$
describing the outcomes of core collapse for a set of models $j$. Third, we must
have some set of priors $P(\vec{p})$ on the model parameters.
Combining these terms using Bayes theorem, the probability distribution for our model
parameters given the data on black holes is
\begin{equation}
P_j(\vec{p}|D) \propto P(\vec{p}) \Pi_i \int dM_{BH} P(D_i|M_{BH}) P_j(M_{BH}|\vec{p}).
\end{equation}
If we only want to consider the probability distribution for the parameters of a
particular model $j$, we simply normalize this distribution to unity.
We can also compare different stellar models or criteria for black
hole formation, where the probability of model $j$
compared to all other models is
\begin{equation}
P_j(D) = \int d\vec{p} P_j(\vec{p}|D) \left[ \sum_j \int d\vec{p} P_j(\vec{p}|D)\right]^{-1}
\label{eqn:relprob}
\end{equation}
for models with the same numbers of parameters.
This is basically the procedure that has been used extensively
to estimate the intrinsic black hole mass distribution (\citealt{Bailyn1998}, \citealt{Ozel2010},
\citealt{Farr2011}, \citealt{Kreidberg2012}, \citealt{Ozel2012}, \citealt{Kiziltan2013}) but modified
as done in \cite{Pejcha2012} to relate the data to an underlying model of core
collapse rather than to a parametrized model of the remnant mass distribution.
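A minimal numerical sketch of this posterior evaluation is given below, using made-up Gaussian mass likelihoods and a one-parameter toy model (a Gaussian of fixed width centred on the parameter $p$); none of the numbers are from the actual analysis. It evaluates $\prod_i \int dM\, P(D_i|M)P(M|p)$ on a grid of $p$.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical Gaussian mass likelihoods P(D_i | M_BH) for three systems
masses = np.array([5.0, 6.0, 7.0])
errors = np.array([0.5, 0.5, 0.5])

# One-parameter toy model: P(M_BH | p) is a Gaussian of fixed width 1 centred on p
m_grid = np.linspace(0.0, 15.0, 3001)
dm = m_grid[1] - m_grid[0]
p_grid = np.linspace(3.0, 9.0, 601)

log_post = np.zeros_like(p_grid)
for k, p in enumerate(p_grid):
    model = gauss(m_grid, p, 1.0)
    for mi, si in zip(masses, errors):
        # int dM P(D_i | M) P(M | p), accumulated as a product in log space
        log_post[k] += np.log(np.sum(gauss(m_grid, mi, si) * model) * dm)

p_best = p_grid[np.argmax(log_post)]   # peaks at the sample mean for this toy case
```

Normalizing `log_post` over the prior range then gives the posterior of Bayes theorem for this single model; comparing such normalizations across models is the model-selection step described above.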
For modeling the data, we simply follow the procedures and data summaries from
\cite{Ozel2010} with one exception. The probability $P(D_i|M_{BH})$ depends on the data available
for each system. If both the mass ratio and inclination of a system are constrained,
the probability distribution is described as a Gaussian
\begin{equation}
P(D_i |M_{BH}) \propto \exp\left[-(M_{BH}-M_i)^2/2\sigma_i^2\right]
\label{data:case1}
\end{equation}
where $M_i$ and $\sigma_i$ are the estimated mass and its uncertainty and
the term is always normalized such that $\int P(D_i |M_{BH}) dM_{BH}\equiv 1$.
If the measured mass function is $m_i$ with uncertainty $\sigma_{mi}$
and the mass ratio $q$ is restricted to the
range $q_{min} < q < q_{max}$ then
\begin{equation}
P(D_i |M_{BH}) \propto \int_{q_{min}}^{q_{max}} dq \int_{x_m}^1
{ dx \exp\left[-(m_i-m)^2/2\sigma_{mi}^2\right] \over 1-x_m}
\label{data:case2}
\end{equation}
where $m=M_{BH}\sin^3 i/(1+q)^2$, the inclination distribution is assumed
to be uniform in $x=\cos i$ over $x_m < x < 1$, and the minimum inclination
$x_m = 0.462 (q/(1+q))^{1/3}$ is set by the requirement for having no
eclipses. Finally, if there is also a constraint on the inclination,
we include a multiplicative probability for the inclination,
$\exp(-(i-i_0)^2/2\sigma_i^2)$. We could reproduce the results in
\cite{Ozel2010} if we also truncated the distributions at $M_{BH}=50M_\odot$.
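The double integral over mass ratio and inclination can be sketched by Monte Carlo: sampling $q$ uniformly and $x=\cos i$ uniformly on $(x_m,1)$ automatically supplies the $1/(1-x_m)$ measure. The numbers below are invented for illustration; the key qualitative point is that a black hole mass below roughly $m_i(1+q)^2$ cannot reproduce the measured mass function, so the likelihood there is vanishingly small.

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood_mc(m_bh, m_i, sigma_mi, q_min, q_max, n=200_000):
    """Monte Carlo estimate of P(D_i | M_BH) for a mass-function-only system."""
    q = rng.uniform(q_min, q_max, n)
    x_m = 0.462 * (q / (1.0 + q)) ** (1.0 / 3.0)   # no-eclipse inclination limit
    x = rng.uniform(x_m, 1.0)                       # cos(i), uniform on (x_m, 1)
    sin3i = (1.0 - x**2) ** 1.5
    m = m_bh * sin3i / (1.0 + q) ** 2               # predicted mass function
    return np.mean(np.exp(-0.5 * ((m_i - m) / sigma_mi) ** 2))

# Measured mass function 5 Msun: a 4 Msun black hole is essentially excluded,
# while a 6 Msun black hole can reproduce the data at moderate inclinations
L4 = likelihood_mc(4.0, 5.0, 0.1, 0.01, 0.10)
L6 = likelihood_mc(6.0, 5.0, 0.1, 0.01, 0.10)
```

This one-sided behaviour is why mass-function-only systems yield likelihoods with sharp lower edges and extended high-mass tails.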
We model the probability of observing a black hole of mass $M_{BH}$ as
\begin{equation}
P(M_{BH}|\vec{p}) \propto
\sum_i { dN \over dM_{0} } \left| dM_{0} \over dM_{BH} \right|_i P(\xi(M_{0})) C(M_{BH})
\label{eqn:mapping}
\end{equation}
which is also normalized to unity, $\int P(M_{BH}|\vec{p}) dM_{BH} \equiv 1$.
The first term is a Salpeter progenitor mass function, $dN/dM_{0} \propto M_{0}^{-2.35}$.
The stellar models define a mapping $M_{BH}(M_0)$ between the initial mass and the
helium core mass we use for the mass of the black hole. The second term comes
from the variable transformation from $M_0$ in the progenitor mass function to
$M_{BH}$ in the black hole mass function. The effects of mass loss on higher
mass stars mean that the same black hole mass can result for two
different progenitor masses, and we must sum over all solutions $i$.
The third
term is the probability that a progenitor with some physical property $\xi(M_0)$
will form a black hole. This is a one-dimensional sequence of a variable
like the compactness and we assume a simple model where $P(\xi)=0$ for
$\xi<\xi^{min}$, $P(\xi)=1$ for $\xi>\xi^{max}$ and that $P(\xi)$ increases
linearly over $\xi^{min} < \xi < \xi^{max}$. It is also useful to
constrain $\xi^{50\%}=(\xi^{min}+\xi^{max})/2$, the value at which 50\%
of core collapses produce black holes.
We might expect a sharp transition
with $\xi^{min}=\xi^{max}$ at which stars either form black holes or
not. However, there are many secondary variables (e.g., rotation,
composition, binary mass transfer) that affect stellar evolution beyond mass,
and stars of a given initial mass likely end with a distribution of
compactnesses (or any other collapse criterion) at death. It is reasonable
to assume this distribution is largely a spread in final compactnesses
around the values for a particular sequence of progenitor models with mass,
with the net effect of producing a smoothed $P(\xi)$
even if the true transition is sharp. \cite{Clausen2014} proposed using
a mass-dependent probability of black hole formation for similar physical
reasons.
The final term, $C(M_{BH})=(M_{BH}/10 M_\odot)^\alpha$, models the completeness
of the observed black hole mass function. Because $P(M_{BH}|\vec{p})$ is normalized
to unit total probability (or, equivalently, that we have no constraint on
the absolute number of black holes), only the shape and not the normalization
of $C(M_{BH})$ affects the results. If $\alpha >0$ we are more likely
to find high mass black holes, and the reverse if $\alpha <0$. Completeness
and biases have always been a significant concern for interpreting the
black hole mass function, particularly because black hole masses are only
measured in interacting binaries (see, \citealt{Bailyn1998}, \citealt{Ozel2010},
\citealt{Farr2011}, \citealt{Kreidberg2012}, \citealt{Ozel2012},
\citealt{Clausen2014}). Whatever biases exist, they are probably a
relatively smooth function of mass and including $C(M_{BH})$ allows
us to test for their effects or to simply marginalize over
$\alpha$ as a nuisance parameter.
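To see how the completeness exponent tilts the observed distribution, the toy calculation below weights a fixed intrinsic mass function (a simple exponential, chosen purely for illustration) by $C(M_{BH})=(M_{BH}/10M_\odot)^\alpha$ and compares the mean observed mass for several $\alpha$.

```python
import numpy as np

# Fixed intrinsic mass function (illustrative exponential on 5-50 Msun)
m = np.linspace(5.0, 50.0, 10_000)
intrinsic = np.exp(-m / 2.0)

def mean_observed_mass(alpha):
    # Observed shape is intrinsic * completeness; normalization cancels in the mean
    w = intrinsic * (m / 10.0) ** alpha
    return np.sum(m * w) / np.sum(w)

mean_lo = mean_observed_mass(-2.0)   # survey favouring low-mass systems
mean_0 = mean_observed_mass(0.0)     # unbiased
mean_hi = mean_observed_mass(2.0)    # survey favouring high-mass systems
```

Because only the shape of $C(M_{BH})$ matters, the normalization $10M_\odot$ is a convention; the sign of $\alpha$ alone determines the direction of the tilt.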
Our model parameters, such as the compactness limits or the exponent
of the completeness function, all have limited dynamic ranges making uniform
priors a reasonable choice. For comparisons between models with different
parameters (Equation~\ref{eqn:relprob}), we normalize the priors as $\int P(\vec{p}) d\vec{p} \equiv 1$
over the range used for the calculation. This will give different models
equal relative probabilities in Equation~\ref{eqn:relprob} unless the
data significantly discriminates between the models.
We also compute
the fraction $f$ of core collapses leading to black holes under the
assumption that all stars with $M_0>8M_\odot$ undergo core collapse.
We include a weak prior $P(f)$ on the models using the constraints
derived by \cite{Adams2013} from combining the observed Galactic
SN rate with the lack of any neutrino detection of a
Galactic black hole formation event.
A stricter limit of $f < 0.5$ could probably be
justified based on comparisons of SN and star formation rates
(\citealt{Horiuchi2011}) or limits on the diffuse neutrino
background (\citealt{Lien2010}), but these introduce a dependence
on estimates of star formation rates and are difficult to translate
into a mathematical prior. These estimates of $f$ are contingent
on the normalization that $P(\xi(M_0))$ becomes unity for
$\xi > \xi^{max}$. We could allow $P(\xi(M_0))=P_{max} < 1$
for $\xi > \xi^{max}$ without any consequences for our models
of the black hole mass function, and this would reduce $f$.
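The linear-ramp probability $P(\xi)$ and the failed-SN fraction $f$ can be sketched together as follows. The compactness relation $\xi=M_0/100$ used here is a purely illustrative monotonic stand-in (the real $\xi(M_0)$ from the progenitor models is strongly non-monotonic), chosen so that a sharp threshold at $\xi=0.20$ corresponds to black hole formation above $M_0=20M_\odot$.

```python
import numpy as np

MLO, MHI = 8.0, 120.0
m0 = np.linspace(MLO, MHI, 200_000)
imf = m0 ** -2.35                      # Salpeter progenitor mass function

def p_bh(xi, xi_min, xi_max):
    """Linear ramp: P=0 below xi_min, P=1 above xi_max."""
    if xi_max == xi_min:
        return (xi >= xi_min).astype(float)
    return np.clip((xi - xi_min) / (xi_max - xi_min), 0.0, 1.0)

# Toy monotonic compactness relation (illustrative only)
xi = m0 / 100.0

# Fraction of core collapses forming black holes, weighted by the IMF
f = np.sum(imf * p_bh(xi, 0.20, 0.20)) / np.sum(imf)        # sharp threshold
f_ramp = np.sum(imf * p_bh(xi, 0.15, 0.25)) / np.sum(imf)   # smoothed ramp
```

A threshold near $M_0\simeq 20M_\odot$ yields $f\simeq 0.27$ for a Salpeter slope, in line with the $20$ to $30\%$ failed-SN fraction discussed in the introduction; smoothing the ramp shifts $f$ only slightly because the IMF weights the low-mass side of the transition more heavily.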
While there are differences in the details of the analyses by
\cite{Ozel2010}, \cite{Farr2011} and \cite{Ozel2012}, the
primary differences in the results are driven by differences
in the samples of black hole candidates. \cite{Ozel2010} and
\cite{Ozel2012} analyze a sample of 16 systems with low mass
companions, while \cite{Farr2011} analyze 15 low mass systems,
where GX~339$-$4 is the system that is
not in common. For these low mass systems, the resulting
inferences about the black hole mass functions are mutually
consistent. \cite{Farr2011} also carries out the analysis
including 5 systems with high mass companions. These systems
generally have probability distributions requiring significantly
higher masses than the low mass sample, and
the inferred black hole mass functions have significantly more
probability for $M_{BH}>10M_\odot$ with their inclusion.
Like the question of differences in selection functions for
neutron stars and black holes, the relative selection functions for black hole
systems with high and low mass companions probably cannot be derived.
However, the sharply declining mass functions found from
using only the low mass systems are hard to reconcile
with the existence of the high mass systems.
For example, for an exponential mass function, $dN/dM_{BH} \propto \exp(-M_{BH}/M_s)$ with
$ M_c < M_{BH} < 50M_\odot$ fit to the low mass samples,
\cite{Ozel2012} finds $M_c=6.32M_\odot$ and $M_s=1.61M_\odot$
while \cite{Farr2011} finds $M_c=6.03M_\odot$
and $M_s=1.55M_\odot$. These low mass samples exclude
the high mass system Cyg~X1, which has an improved mass
estimate of $M_{BH}=(14.8\pm1.0)M_\odot$ by \cite{Orosz2011}
(although \citealt{Ziolkowski2014} argues the uncertainties are somewhat larger).
Given the sample sizes, the probability of finding a system
as massive as Cyg~X1 based on these mass functions is only
about 5\%. Clearly, adding the information that these
high companion mass systems, which generally host higher mass black holes, exist will change the inferences
from the low mass systems, just as found by \cite{Farr2011}.
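The tail-probability argument is easy to reproduce for the truncated exponential mass function. The calculation below uses the quoted fit parameters; the sample sizes and the exact figure depend on which fit and sample one adopts, so the numbers should be read as order-of-magnitude checks.

```python
import numpy as np

def tail_prob(m, m_c, m_s, m_max=50.0):
    """P(M_BH > m) for dN/dM ~ exp(-M/Ms) truncated to (Mc, Mmax)."""
    return (np.exp(-m / m_s) - np.exp(-m_max / m_s)) / \
           (np.exp(-m_c / m_s) - np.exp(-m_max / m_s))

# Per-draw probability of a black hole at least as massive as Cyg X-1 (14.8 Msun)
p_ozel = tail_prob(14.8, 6.32, 1.61)   # Ozel et al. (2012) low-mass fit
p_farr = tail_prob(14.8, 6.03, 1.55)   # Farr et al. (2011) low-mass fit
# Chance of at least one such system in a sample of ~16 measurements
p_sample = 1.0 - (1.0 - p_farr) ** 16
```

The per-draw probability is a fraction of a percent, and even allowing for the full sample size the chance of drawing a Cyg~X1-like system stays at the few-percent level, consistent with the tension described above.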
However, of the five high mass systems included by \cite{Farr2011},
only Cyg~X1 is a Galactic source. If, as seems likely given the
existing samples, the high mass X-ray binaries tend to host higher
mass black holes than the low mass X-ray binaries, then including
the four extragalactic high mass systems without their accompanying
low mass compatriots could be biasing the estimates of the mass
function in the other direction. As an imperfect compromise,
we model the 16 low mass systems following \cite{Ozel2010} and
include Cyg~X1 with a Gaussian probability distribution of
mean $14.8M_\odot$ and dispersion $2.0M_\odot$ (Equation~\ref{data:case1}), doubling the
uncertainties from \cite{Orosz2011} based on the arguments in
\cite{Ziolkowski2014}. This gives us a sample of 17 mass
estimates.
In order to compare to the earlier studies, we fit the data using
both an exponential and a power-law ($dN/dM_{BH} \propto \exp(-M_{BH}/M_s)$ or $\propto M_{BH}^\beta$
for $M_c < M_{BH} <50 M_\odot$) parametric mass function. For
these calculations, Equation~\ref{eqn:mapping} is simply replaced
by the appropriate parametric form. We used uniform priors for
the two parameters in each model. Figure~\ref{fig:exp} shows the
results for the exponential mass function, where we find a median
cutoff mass of $M_c=5.98M_\odot$ ($4.94 M_\odot < M_c < 6.51 M_\odot$)
and an exponential scale mass of $M_s=2.99M_\odot$
($1.55 M_\odot < M_s < 5.85M_\odot$) where we always present 90\%
confidence intervals. As expected from adding Cyg~X1, we find an
exponential scale mass that is larger than the results from
\cite{Ozel2010}, \cite{Farr2011} and \cite{Ozel2012} using only
the low companion mass sample, but smaller than the results from \cite{Farr2011}
including all the high companion mass systems. For the power law model
we find $M_c=6.21M_\odot$ ($5.57M_\odot < M_c < 6.62 M_\odot$)
and $\beta=-4.87$ ($-7.93 < \beta < -3.00$). Unlike \cite{Farr2011},
we used a fixed upper mass cutoff at $M_{BH}=50M_\odot$, but the \cite{Farr2011} estimates
of $M_c=6.10 M_\odot$ ($1.28 M_\odot < M_c < 6.63M_\odot$) with
$\beta=-6.39$ ($-12.42 < \beta < 5.69$) for their low mass
sample and $M_c=5.85 M_\odot$ ($4.87 M_\odot < M_c < 6.46M_\odot$)
with $\beta=-3.23$ ($-5.05 < \beta < -1.77$) for their
combined sample appear compatible with our estimates.
The relative probabilities
of these two models are about $1.4$ in favor of the exponential
model, but this is well within the regime where the results will
be dominated by the effects of priors (through the choice of the
parameter range over which we carried out the probability integrals).
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig1.ps}}
\caption{
Probability contours for the exponential parametric model, $dN/dM_{BH}\propto \exp(-M_{BH}/M_s)$ for
$M_c < M_{BH} < 50M_\odot$, of the black hole mass function. The probability contours enclose 90\%, 95\%
and 99\% of the probability computed over the parameter range shown. The filled square with
error bars shows the median value and the 90\% confidence range for each parameter. The
open square with no error bar shows the result from \protect\cite{Ozel2012} using only low mass
systems. The open triangles with error bars show the results from \protect\cite{Farr2011}
for only low mass systems (lower) or both low and high mass systems (upper). Our
addition of one high mass system, Cyg~X1, to the \protect\cite{Ozel2010} sample produces
an intermediate result.
}
\label{fig:exp}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig2.ps}}
\caption{
Compactness $\xi_{2.5}$ (top) and black hole mass (bottom) as a function of the initial stellar
mass. The solid lines are for the \protect\cite{Woosley2007} and \protect\cite{Sukhbold2014} models while the
dashed lines are for the \protect\cite{Woosley2002} models. In the lower panel, dotted red lines show
the final masses of the star. The hydrogen mass at death (the mass difference between the
total mass and the helium core/black hole mass) is assumed to be ejected by the \protect\cite{Nadezhin1980}
mechanism in a failed supernova.
}
\label{fig:profile0}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig3.ps}}
\caption{
Compactness $\xi_{2.5}$ as a function of black hole mass and the black hole mass function
if all stars formed black holes with the masses of their helium cores for the W02
(top two panels) and W07$+$S14 (bottom two panels) models. The solid black (dashed
red) lines show the contributions from progenitors below (above) the stellar mass producing
the peak helium core mass. The normalizations of the mass functions are arbitrary.
The squares and triangles in the
compactness panel for the W07$+$S14 models show
the compactness estimates for the same cases from \protect\cite{Oconnor2011}
and \protect\cite{Sukhbold2014}, respectively.
Filled black (open red) symbols correspond to low (high) mass progenitors.
As noted by \protect\cite{Ugliano2012} there is little difference between the compactness estimates.
The mass function panels include the parametric mass functions with the median
parameter estimates from \S2 as the dotted curves.
}
\label{fig:profile}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig4.ps}}
\caption{
Constraints on the compactness $\xi_{2.5}$ for the W07 model assuming no biases
in the black hole mass function ($\alpha=0$). The probability contours
enclose 90, 95 and 99\% of the total probability for a uniform prior
over the region shown. The red dotted lines show
contours of the failed ccSNe fraction with $f=0.1$, $0.2$, $0.3$, $0.4$ and $0.5$
(from right to left).
}
\label{fig:xi1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig5.ps}}
\caption{
Model probabilities as a function of the completeness exponent $\alpha$ for the
W02 (dashed), W07 (solid) and W07$+$S14 (dotted) $\xi_{2.5}$ models. The
probabilities are relative to the $\alpha=0$ W07 model. The mass function
is incomplete at high mass for $\alpha<0$ and incomplete at low mass for
$\alpha > 0$.
}
\label{fig:incomp}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig6.ps}}
\caption{
Constraints on the compactness $\xi_{2.5}$ as a function of the completeness.
The top panels are for the W02 model and the bottom panels are for the W07
model. The left panels assume the observations are more complete for low
mass black holes ($\alpha=-2$) while the right panels assume the observations
are more complete for high mass black holes ($\alpha=2$). Low values of
$\xi_{2.5}^{min}$ are favored in the $\alpha=2$ models. The probability contours
enclose 90, 95 and 99\% of the total probability for each model.
The red dotted lines show
contours of the failed ccSNe fraction with $f=0.1$, $0.2$, $0.3$, $0.4$ and $0.5$
(from right to left).
}
\label{fig:xi2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig7.ps}}
\caption{
Constraints on the compactness $\xi_{2.5}$ marginalized over completeness ($-3 < \alpha < 3$)
for the W02 (black solid) and W07 (red dotted) models. The results for the W07$+$S14
models are very close to those for the W07 models. The probability contours enclose
90, 95 and 99\% of the total probability for each model.
}
\label{fig:xi3}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig8.ps}}
\caption{
Probability of black hole formation as a function of initial stellar mass for $\alpha=0$
and the W02 (top), W07 (middle) and W07$+$S14 (bottom) models after marginalizing
over $\xi_{2.5}^{min}$ and $\xi_{2.5}^{max}$. The heavy black line is the median probability
and the shaded band is the 90\% confidence range. The estimates are highly correlated
between different masses.
}
\label{fig:prob}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig9.ps}}
\caption{
Maximum likelihood mass functions for the W02 (top), W07 (middle) and W07$+$S14
models with $\alpha=0$. They are normalized to have the same integrated area.
For comparison, the red solid and dashed curves show the parametric
mass function estimates from \S2. Most of the structures here are real
and are created by the rapid changes of the compactness with $M_0$ in
some mass ranges (see \protect\citealt{Sukhbold2014}).
}
\label{fig:mfunc}
\end{figure}
\section{Results}
\label{sec:results}
We consider the \cite{Woosley2002} and \cite{Woosley2007} Solar metallicity progenitor models.
The \cite{Woosley2002} models span $10.8 M_\odot < M_0 < 70 M_\odot$ with a relatively
dense sampling ($\Delta M=0.2 M_\odot$) from $10.8 M_\odot$ to $30 M_\odot$, a
coarser sampling ($\Delta M=1.0 M_\odot$) to $40M_\odot$ and then a last model
at $75 M_\odot$. The \cite{Woosley2007} models span $12 M_\odot < M_0 < 120M_\odot$
sampling every $\Delta M =1.0 M_\odot$ up to $35 M_\odot$, every $\Delta M=5.0M_\odot$
to $60 M_\odot$, and models at $70$, $80$, $100$ and $120 M_\odot$. \cite{Sukhbold2014}
supplemented the \cite{Woosley2007} models, sampling the range from $15$ to $30M_\odot$ with $\Delta M= 0.1 M_\odot$.
We assume models from $8M_\odot$ to $75 M_\odot$ for the \cite{Woosley2002} models or
$120 M_\odot$ for the \cite{Woosley2007} models undergo core collapse and allow black
hole formation only over the mass range of the model sequences. Stars with masses
from $8M_\odot$ to the minimum masses of the sequences ($10.8$ or $12M_\odot$) are
assumed to be successful SNe forming neutron stars. We will refer
to the three progenitor sequences as the W02, W07, and W07$+$S14 models.
For each model we computed the mass of the helium core, defined by the radius where
$X_H = 0.2$, and the compactness $\xi_M$ (Equation~\ref{eqn:compact})
for $M_{bary}=2.0$, $2.5$ and $3.0M_\odot$ based on the
progenitor model. For the densely sampled mass ranges we slightly smoothed the
helium core masses as a function of mass to remove small fluctuations that
complicate the mapping from initial mass into black hole mass due to the derivative
in Equation~\ref{eqn:mapping}. Following \cite{Ugliano2012} we also computed
the mass of the iron core ($M_{Fe}$, defined by the point where $Y_e=0.497$) and
the mass at the inner edge of the oxygen burning shell ($M_O$, defined by the point
where the dimensionless entropy per baryon equaled 4) for the W02 and W07
models. We did not possess the density profiles needed to compute these
properties for the S14 models, but \cite{Sukhbold2014}
kindly supplied a table of their helium core
masses and $\xi_{2.5}$.
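For convenience, we restate the compactness definition used throughout (Equation~\ref{eqn:compact}); the normalization follows the \cite{Oconnor2011} convention:

```latex
% Compactness at enclosed baryonic mass M (in solar masses),
% evaluated here directly from the progenitor structure:
\begin{equation*}
\xi_{M} = \frac{M/M_\odot}{R(M_{bary}=M)/1000\,{\rm km}} ,
\end{equation*}
```

so, e.g., $\xi_{2.5}$ measures how tightly the innermost $2.5M_\odot$ of baryonic mass is packed, with larger $\xi_{2.5}$ corresponding to a more centrally concentrated core.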
\cite{Oconnor2011} argue that $\xi_{2.5}$
should be computed at the time of core bounce, while \cite{Sukhbold2014} note
that computing it when the collapse speed reaches $1000$~km/s produces
equivalent results and \cite{Ugliano2012} note that there is little
difference between simply calculating it from the progenitor model and
calculating it at the time of core bounce. Operationally, simply using the
compactness of the progenitor is far simpler because it avoids
simulating a portion of the collapse for each model.
If we use the \cite{Oconnor2011} or \cite{Sukhbold2014} values, we
find no significant differences from our results simply using the values estimated
from the progenitors.
Figure~\ref{fig:profile0} shows the helium core
mass, which we now define to be the black hole mass, and the compactness
$\xi_{2.5}$ as a function of the initial stellar mass $M_0$. The main qualitative
difference is that the \cite{Woosley2002} models have a lower maximum helium
core mass and a significantly smaller and lower compactness peak near $M_0 \simeq 40 M_\odot$.
The other compactness sequences look similar but with modest changes in the
average level and the ratio of the peaks near $20$-$25$ and $40 M_\odot$.
Figure~\ref{fig:profile0} also shows the total mass when the star explodes,
and we assume that any remaining hydrogen envelope is ejected. For a failed
supernova of a red supergiant, which represents the bulk of any models with
residual hydrogen, the \cite{Nadezhin1980} mechanism naturally does so,
as seen in the simulations of \cite{Lovegrove2013}. For the higher mass stars,
mass loss has stripped the hydrogen prior to the explosion. As the
amount of residual hydrogen becomes negligible, the stellar envelope will
collapse and the \cite{Nadezhin1980} mechanism cannot work because it depends on the
very low binding energy of a red supergiant envelope. At this point, however,
the correction from adding the remaining hydrogen to the black hole is
also unimportant.
While we are
phrasing black hole formation as being due to failed SNe, fine-tuned fall
back could produce the same black hole masses in successful SNe.
Figure~\ref{fig:profile} shows the structure of these models as a function
of black hole mass. Because the black hole mass peaks at an intermediate
initial mass, there are two branches to the solutions, corresponding to
initial masses above and below this maximum. Broadly speaking, there
is an extended plateau in $\xi_{2.5}$ starting near $M_{BH}\simeq 4M_\odot$
with a rapid drop at lower masses, a narrow peak near $6 M_\odot$ and
a broader peak near $8M_\odot$. We also show the $\xi_{2.5}$ values
from \cite{Oconnor2011} and \cite{Sukhbold2014}, and we see that there
are few differences from our estimates simply using the structure of
the progenitor, as previously noted by \cite{Ugliano2012}.
We also show the black hole mass functions
which would result from all the models becoming black holes with the mass
of the helium core. The general decline of the Salpeter mass function
is visible, but the slope of $M_{BH}(M_0)$ introduces significant structure.
The broader trends are real features of the models, but some of the small scale
structure is due to ``noise'' in $M_{BH}(M_0)$. Whenever $M_{BH}(M_0)$
is locally flat, the derivative term in Equation~\ref{eqn:mapping} leads
to a local peak in the mass function. The smoothing we used to make the
$M_{BH}(M_0)$ profiles monotonic outside of the single peak significantly reduces these
features compared to the raw relations. None of these local structures
are important because they are changes in the mass function on such small
scales that they have no consequences for modeling the data. For comparison,
we also show the two parametric estimates of the black hole mass function from
\S2. Producing a similar mass function
requires suppressing the formation of low mass black holes. The mass
functions derived from the stellar models do not decline as rapidly
as the parametric models, which could be a problem in either of the models
or a consequence of incompleteness in the black hole sample.
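The origin of this structure can be made explicit. Assuming the Salpeter slope $dN/dM_0 \propto M_0^{-2.35}$ for the progenitors, the mapping of Equation~\ref{eqn:mapping} is schematically

```latex
% Schematic mapping from the initial mass function to the black hole
% mass function, summed over the two branches of M_BH(M_0):
\begin{equation*}
\frac{dN}{dM_{BH}} \propto \sum_{\rm branches}
M_0^{-2.35} \left| \frac{dM_{BH}}{dM_0} \right|^{-1}
\Bigg|_{M_0 = M_0(M_{BH})} ,
\end{equation*}
```

so wherever $M_{BH}(M_0)$ is locally flat the Jacobian factor diverges and the mass function develops a local peak.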
We first consider the three models using $\xi_{2.5}$ and no mass-dependent
completeness effects ($\alpha=0$). These W02, W07 and W07$+$S14 models have
probabilities relative to the exponential parametric model of $0.7$,
$2.5$ and $2.0$, respectively. Thus, our first result is that our
models based on progenitor models and a physical criterion for
black hole formation can fit the observed mass function just as well
as standard parametric models. However, given the changing parameters
and parameter ranges, the differences are not large enough to argue
that the $\alpha=0$ W07 model is significantly better. Between the
three stellar models, the ratios are $0.27:1.0:0.80$, favoring the
W07 model but not by a large enough factor to rule out the W02 model.
This ordering is driven by the inclusion of Cyg~X1. If we drop it,
the relative probabilities are $1:0.33:0.26$ and the W02 model is
favored over the W07 model. These shifts are driven by the relative
areas of the compactness peaks in the two models: with Cyg~X1, W02
produces too few massive black holes, and without Cyg~X1, W07 produces
too many. We will explore this in more detail when we discuss the
effects of the completeness model.
Figure~\ref{fig:xi1} shows the likelihood function for the $\alpha=0$,
$\xi_{2.5}$ W07 model as
well as the fractions of core collapses becoming black holes. The structures
of the W02 and W07$+$S14 results are similar. The compactness threshold
$\xi_{2.5}^{min}$ for black hole formation is well constrained. Although the
most probable model is always for a sharp transition with $\xi_{2.5}^{max}=\xi_{2.5}^{min}$,
the width of the transition, $\xi_{2.5}^{max}-\xi_{2.5}^{min}$, is not well constrained.
Formally, the median estimates are $\xi_{2.5}^{min}=0.17$ ($0.07 < \xi_{2.5}^{min} < 0.23$),
$\xi_{2.5}^{min}=0.16$ ($0.06 < \xi_{2.5}^{min}<0.22$), and
$\xi_{2.5}^{min}=0.15$ ($0.06 < \xi_{2.5}^{min}<0.20$)
for the W02, W07 and W07$+$S14 models, respectively. This is somewhat
misleading because the lower values of $\xi_{2.5}^{min}$ are associated with
wide transitions. A better metric is probably $\xi_{2.5}^{50\%}=(\xi_{2.5}^{min}+\xi_{2.5}^{max})/2$,
the point where the probability of forming a black hole becomes 50\%.
For these models we find that
$\xi_{2.5}^{50\%}=0.24$ ($0.17<\xi_{2.5}^{50\%}<0.36$),
$\xi_{2.5}^{50\%}=0.23$ ($0.17<\xi_{2.5}^{50\%}<0.33$), and
$\xi_{2.5}^{50\%}=0.21$ ($0.16<\xi_{2.5}^{50\%}<0.32$), so $\xi_{2.5}^{50\%}$ is constrained
to be close to the plateau in $\xi_{2.5}$ seen in Figure~\ref{fig:profile}
and high enough to largely prevent black hole formation at lower
masses where the compactness is dropping rapidly. As we will see,
this is sufficient to lead to a sharp break in the black hole
mass function just as is included in the parametric models.
These estimates from fitting the observed black hole mass
function are quite similar to the range of $0.15$ to $0.35$
found by \cite{Ugliano2012} in their core collapse simulations
and lower than the limit proposed by \cite{Oconnor2011}.
The fractions of core collapses producing black holes are
$f=0.13$ ($0.05<f<0.28$),
$f=0.21$ ($0.11<f<0.34$), and
$f=0.21$ ($0.11<f<0.33$), respectively, where the W07 models
produce larger numbers of black holes because of the prominent compactness
peak near $M_0=40 M_\odot$. If we arbitrarily remove this
peak by setting $\xi_{2.5}=0.2$ in this regime, the results
more closely resemble those for the W02 models.
For the W02 and W07 models we can also examine using the compactness
at another mass cut, $\xi_{2.0}$ or $\xi_{3.0}$, the iron core mass,
$M_{Fe}$, or the mass at inner edge of the oxygen burning shell, $M_O$,
as the criterion for forming a black hole. The procedure is the
same in each case: we raise the probability of forming a black hole
linearly from zero at one value of the parameter to unity at a higher
value. For the $\alpha=0$ W02 models, the relative probabilities of
the criteria are $1.97:1.00:0.31:0.19:0.20$ for $\xi_{2.0}$, $\xi_{2.5}$,
$\xi_{3.0}$, $M_{Fe}$ and $M_O$, respectively, where we have normalized
the models to the standard $\xi_{2.5}$ model. For the $\alpha=0$ W07 models,
the ratios are $1.11:1.00:0.52:0.28:0.03$. This general structure
holds when we vary the completeness as well. If we simply marginalize
over the completeness ($-3 < \alpha < 3$ with a uniform prior) and
average over the W02 and W07 models, the probabilities of the formation criteria relative
to the $\xi_{2.5}$ model are $0.92:1.00:0.39:0.29:0.14$. The
compactness at a smaller mass cut $\xi_{2.0}$ is almost equally
good, the compactness at a larger mass cut $\xi_{3.0}$ is moderately
worse, the iron core mass is an even poorer model, and the
oxygen burning shell mass is the worst model. Arguably, only
$M_O$ shows a large enough probability ratio to be rejected
at a reasonable confidence level. These results are consistent
with the simulations of \cite{Ugliano2012}, who found that
$M_{Fe}$ and $M_O$ had poorer correlations with black hole
formation than $\xi_{2.5}$. For the remainder of the paper
we will just consider the $\xi_{2.5}$ model.
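For definiteness, the linear transition used for every criterion can be written as a piecewise formation probability; with $x$ standing for $\xi_{2.0}$, $\xi_{2.5}$, $\xi_{3.0}$, $M_{Fe}$ or $M_O$,

```latex
% Probability of black hole formation as a function of the criterion x,
% rising linearly between the two fit parameters x_min and x_max:
\begin{equation*}
P_{BH}(x) =
\begin{cases}
0 & x < x^{min} , \\[4pt]
\dfrac{x - x^{min}}{x^{max} - x^{min}} & x^{min} \le x \le x^{max} , \\[8pt]
1 & x > x^{max} ,
\end{cases}
\end{equation*}
```

with $x^{50\%}=(x^{min}+x^{max})/2$ the point where $P_{BH}=1/2$, and the sharp threshold recovered in the limit $x^{max}\rightarrow x^{min}$.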
Figure~\ref{fig:incomp} shows the relative probabilities of the
W02, W07 and W07$+$S14 $\xi_{2.5}$ models as a function of
the completeness exponent $\alpha$. The W02 models modestly
prefer incompleteness at low mass, while the W07 and
W07$+$S14 models more strongly prefer incompleteness at
high mass. We only explored the range $-3 < \alpha < 3$
with a uniform prior. All three cases peak in this range
and larger values of $|\alpha|$ seemed unreasonable --
for $|\alpha|=3$, the relative completeness changes by a
factor of 8 between $M_{BH}=5$ and $10M_\odot$. The
qualitative differences can be understood from the
differences in the $\xi_{2.5}$ profiles shown in
Figures~\ref{fig:profile0} and \ref{fig:profile}.
In the W02 models, the peak near $M_0=40 M_\odot$, which produces the most
massive black holes, is weaker relative to the peak
near $M_0=20$-$25M_\odot$ than in
the W07 or W07$+$S14 models. Compared to the data,
the W02 model produces too few high mass black holes
and so, to compensate, the models make the observed sample
incomplete at low masses. The W07 models produce
too many high mass black holes, and compensate by
making the observed sample incomplete at high mass.
If we exclude Cyg~X1, then all three models favor
observational samples that are incomplete at high
black hole mass.
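The quoted completeness factors are consistent with a simple power-law parametrization in black hole mass (our reading of the model; a hypothetical but natural form matching the numbers in the text):

```latex
% Relative completeness of the observed sample as a function of black
% hole mass; alpha > 0 favors the discovery of massive black holes:
\begin{equation*}
C(M_{BH}) \propto M_{BH}^{\alpha} ,
\qquad
\frac{C(10\,M_\odot)}{C(5\,M_\odot)} = 2^{\alpha} ,
\end{equation*}
```

so $|\alpha|=3$ gives a factor of 8 and $|\alpha|=2$ a factor of 4 in relative completeness between $M_{BH}=5$ and $10M_\odot$.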
The completeness probability distributions are related to the
estimate by \cite{Clausen2014} of the probability
of black hole formation as a function of initial stellar
mass $M_0$. Their formation probability distribution has a peak
associated with the first peak in $\xi_{2.5}$ at
$M_0=20$-$25M_\odot$ but no peak associated with the
second peak in $\xi_{2.5}$ near $M_0=40 M_\odot$.
They derived the probability distributions by fitting
the parametric mass functions of \cite{Ozel2012},
which are based on only the low mass binaries and
have low probabilities for higher mass black holes.
The lack of a probability peak near $M_0=40 M_\odot$
is similar to the effects of $\alpha <0$.
For comparison to the mass-independent completeness case
in Figure~\ref{fig:xi1},
Figure~\ref{fig:xi2} shows the results for the W02 and W07
models with $\alpha=-2$ and $2$. This corresponds to
changing the completeness between $M_{BH}=5M_\odot$
and $10M_\odot$ by a factor of 4. When the discovery
of low mass black holes is favored ($\alpha=-2$), the
upper limits on $\xi_{2.5}^{min}$ change little but the lower
limits become much stronger. When the discovery of high
mass black holes is favored ($\alpha=2$), the upper limit
on $\xi_{2.5}^{min}$ becomes somewhat stronger, while the lower
limit becomes significantly weaker. As the completeness
for low mass black holes decreases, there is more freedom
to make them. This can be seen in the fractions of core
collapses producing black holes, which are $f=0.10$ ($0.04<f<0.20$)
and $0.08$ ($0.14<f<0.25$) for the W02 and W07 models with
$\alpha=-2$ but $f=0.20$ ($0.06 < f < 0.45$) and
$f=0.34$ ($0.18<f<0.51$) for the $\alpha=2$ models.
Finally, we can also marginalize over the completeness model
($-3 < \alpha < 3$ with a uniform prior). Marginalized
over $\alpha$, the relative probabilities of the W02, W07
and W07$+$S14 $\xi_{2.5}$ models are $0.33:1.00:0.88$, moderately
favoring the W07 models. Figure~\ref{fig:xi3} shows the
constraints on the compactness parameters for the W02 and
W07 models marginalized over completeness. The hard upper
limit is robust, but the lower limit becomes soft because
of the contribution from the $\alpha > 0$ models.
The estimates of the critical compactness $\xi_{2.5}^{min}$ are
$\xi_{2.5}^{min}=0.17$ ($0.03 < \xi_{2.5}^{min} < 0.23$),
$\xi_{2.5}^{min}=0.18$ ($0.05 < \xi_{2.5}^{min} < 0.23$), and
$\xi_{2.5}^{min}=0.17$ ($0.05 < \xi_{2.5}^{min} < 0.21$)
for the W02, W07 and W07$+$S14 models, respectively.
The constraints on $\xi_{2.5}^{50\%}$, the compactness where the
probability of forming a black hole is 50\%, are tighter,
with
$\xi_{2.5}^{50\%}=0.23$ ($0.11 < \xi_{2.5}^{50\%} < 0.35$),
$\xi_{2.5}^{50\%}=0.24$ ($0.15 < \xi_{2.5}^{50\%} < 0.37$), and
$\xi_{2.5}^{50\%}=0.23$ ($0.14 < \xi_{2.5}^{50\%} < 0.35$).
Finally, the constraints on the fraction of core collapses
leading to black holes are
$f=0.14$ ($0.05 < f < 0.41$),
$f=0.18$ ($0.09 < f < 0.39$), and
$f=0.18$ ($0.09 < f < 0.38$).
Despite allowing for very large, black hole mass-dependent
completeness corrections, the constraints on $\xi_{2.5}^{50\%}$
and $f$ are quite strong.
\cite{Clausen2014} estimated the probability of forming a black
hole with the mass of the helium core as a function of initial
stellar mass needed to match the \cite{Ozel2012} parametric
model of the black hole mass function. For a given model, we
can marginalize over $\xi_{2.5}^{min}$ and $\xi_{2.5}^{max}$ to estimate
this probability as a function of mass as well as its variance
over the model space, although the uncertainty estimates are
highly correlated. The results for the $\alpha=0$
case are shown in Figure~\ref{fig:prob}.
Black hole formation is disfavored below $M_0 \simeq 20M_\odot$
with the probability steadily dropping towards lower masses.
Black hole formation is probably
required near $20$-$25M_\odot$ and near $40M_\odot$.
At most other masses, the data do
not strongly constrain the probability of forming a black
hole.
Finally, in Figure~\ref{fig:mfunc} we show the maximum likelihood
mass functions for the three $\alpha=0$, $\xi_{2.5}$ models as
compared to the median fit parametric exponential and power-law models
from \S2. Our models roughly match the minimum
black hole masses of the parametric models. The W07 and W07$+$S14 mass
functions are relatively flat due to the broad compactness
peak at $M_0 \sim 40M_\odot$, while the W02 mass function
is more strongly peaked at low masses. The absence of
higher mass black holes in the W02 models as compared to
the W07/W07$+$S14 models explains why the W07/W07$+$S14 models
are favored with the inclusion of Cyg~X1.
\section{Discussion}
\label{sec:discuss}
If black hole formation is controlled by the compactness of
the stellar core at the time of collapse (e.g.,
\citealt{Oconnor2011}, \citealt{Ugliano2012}, \citealt{Sukhbold2014})
and we associate the mass of the resulting black hole
with the mass of the helium core (e.g., \citealt{Burrows1987},
\citealt{Kochanek2014}, \citealt{Clausen2014}) then we
can constrain the compactness above which black holes
form by fitting the observed black hole mass function
(e.g., \citealt{Bailyn1998},
\citealt{Ozel2010}, \citealt{Farr2011}, \citealt{Kreidberg2012},
\citealt{Ozel2012}). The helium core mass is a natural
scale for black hole masses due to either mass loss
(e.g., \citealt{Burrows1987}) or the physics of failed
ccSNe (\citealt{Nadezhin1980},
\citealt{Lovegrove2013}, \citealt{Kochanek2014}). We
also, for the first time, include a model for the
completeness of the observed sample of black holes and
examine its consequences for the constraints on the
core collapse parameters.
We use a sample of 17 black hole candidates, the 16 low
mass binary systems used by \cite{Ozel2010} and \cite{Ozel2012}
combined with the one Galactic high mass system, Cyg~X1.
\cite{Farr2011} found that the parameters of their models
of the black hole mass function changed significantly when
the high mass binaries were included in the analysis because
they tend to have higher average masses. Including
Cyg~X1 is a compromise between excluding all high mass
systems (\citealt{Ozel2010}, \citealt{Ozel2012}, \citealt{Farr2011}) and
including all high mass systems (\citealt{Farr2011}). We
exclude the extragalactic high mass systems since there
is no equivalent sample of extragalactic low mass systems.
Where necessary, we discuss the impact of including Cyg~X1
on the results. If we model the data using the parametric
methods and models of these previous studies, we obtain
similar results.
The first interesting result is that our models based on
combining progenitor models with a physical criterion for
forming a black hole fit the observed black hole mass function
as well as the existing parametric models. Our best model actually has
a higher likelihood, but the likelihood ratios are not
large enough to be significant. Unlike the simple parametric
models, the mass functions produced
by our models have a great deal of structure (Figure~\ref{fig:mfunc})
created by the rapid variations in compactness with
mass (Figures~\ref{fig:profile0} and \ref{fig:profile}).
We tested five different parameters for predicting the
formation of a black hole: the compactnesses $\xi_{2.0}$,
$\xi_{2.5}$ and $\xi_{3.0}$ of the core of the progenitor
at baryonic masses of $2.0$, $2.5$ and $3.0M_\odot$
(Equation~\ref{eqn:compact}), the
mass of the iron core $M_{Fe}$ and the mass inside the
oxygen burning shell $M_O$. We considered three sets
of stellar models, W02 from \cite{Woosley2002},
W07 from \cite{Woosley2007}, and W07$+$S14 which
supplements the W07 models with the denser mass
sampling from \cite{Sukhbold2014}.
Marginalizing over all our other variables, we
find that the compactnesses $\xi_{2.0}$ and $\xi_{2.5}$
produced the highest likelihoods. The compactness
$\xi_{3.0}$ models were somewhat less probable, the
iron core mass models still less so, and the oxygen
shell mass models were the worst. With overall
likelihood ratios relative to the $\xi_{2.5}$ model
of $0.92:1.00:0.39:0.29:0.14$ none of the other
possibilities are strongly ruled out. \cite{Ugliano2012}
found in their simulations that the compactness was
a better predictor of outcomes than the iron core or
oxygen shell masses. We now focus
on the $\xi_{2.5}$ models.
The relative probabilities of the W02, W07
and W07$+$S14 $\xi_{2.5}$ models after marginalizing
over the completeness model are $0.33:1.00:0.88$,
favoring the W07 models. The differences
are due to the relative strengths of the peaks in the
compactness as a function of mass near $M_0=20$-$25M_\odot$
and $M_0=40M_\odot$, which control the relative production
of higher and lower mass black holes. The W02 models
have difficulty producing higher mass black holes and
so are disfavored. This estimate is affected
by the inclusion of Cyg~X1 -- if Cyg~X1 is excluded, the
relative probabilities favor the W02 models over the
W07 or W07$+$S14 models by similar factors.
The W07 and W07$+$S14 cases favor models in which the observed
black hole mass function is incomplete at high masses, while
the W02 case favors models in which it is incomplete at low masses.
This is again a reflection of the relative importance of
the two compactness peaks. If we exclude Cyg~X1, thereby
dropping the (probably) highest mass black hole in the sample,
all three models favor a black hole mass function that is
incomplete at high masses. These trends are related to the
estimate of the black hole formation probability as a function
of initial stellar mass by \cite{Clausen2014}. Using only
the low mass companion systems, their models have a low probability
of black hole formation at the compactness peak near
$M_0=40 M_\odot$ that produces the higher mass black holes.
This is equivalent to our models reducing the completeness
of the observed sample for higher mass black holes.
That our completeness models generally favor incompleteness
for high mass systems rather than low mass systems supports
the existence of a gap or a deep minimum
between the masses of neutron stars
and black holes.
Even with a model allowing large variations in the
completeness as a function of mass, we obtain interesting
constraints on the compactness leading to black hole
formation. We modeled the probability of black hole
formation as linearly rising from zero at a minimum compactness
$\xi_{2.5}^{min}$ to unity at $\xi_{2.5}^{max}$.
The probability of black hole
formation should increase monotonically with compactness,
but stars of a given mass will in reality end their lives
with a range of compactnesses because of secondary
variables other than their initial mass (e.g., composition,
rotation, binary interactions). If these secondary
variables produce a spread in the compactness at a given
initial mass,
the finite width of our transition will mimic much of
the effect. That being said, the maximum likelihood
model was always the one with an abrupt transition
($\xi_{2.5}^{min}=\xi_{2.5}^{max}$), although the probability of
$\xi_{2.5}^{max}-\xi_{2.5}^{min}>0$ only declines slowly and
the width of the transition is not well constrained.
For models in which the completeness is mass-independent,
we find $\xi_{2.5}^{min}=0.15$ ($0.06 < \xi_{2.5}^{min}<0.20$)
and $\xi_{2.5}^{50\%}=0.23$ ($0.17<\xi_{2.5}^{50\%}<0.33$) where
$\xi_{2.5}^{50\%}=(\xi_{2.5}^{min}+\xi_{2.5}^{max})/2$
is the compactness at which the black hole formation
probability is 50\%. Because the width of the
transition is poorly constrained, $\xi_{2.5}^{50\%}$ has
smaller uncertainties than $\xi_{2.5}^{min}$.
If we marginalize over the completeness model,
we find $\xi_{2.5}^{min}=0.18$ ($0.05 < \xi_{2.5}^{min} < 0.23$)
and $\xi_{2.5}^{50\%}=0.24$ ($0.15 < \xi_{2.5}^{50\%} < 0.37$).
The upper limits change little,
but the lower limits become softer because of the
contribution from models with high incompleteness at low black
hole mass. These
results are for the W07 models, but the results for the
other two cases are similar.
The fraction of core collapses predicted to form black
holes is relatively high. For the W07 model with no
mass-dependent completeness corrections, $f=0.21$ ($0.11<f<0.34$),
while after marginalizing over the completeness corrections,
$f=0.18$ ($0.09 < f < 0.39$). The results for the other
model sequences are similar. This fraction assumes a
Salpeter IMF where all stars from $M_0=8M_\odot$ to the
maximum mass in the model sequence undergo core collapse.
It also assumes that the probability of black hole formation
for $\xi_{2.5}>\xi_{2.5}^{max}$ is unity, $P_{max}\equiv 1$. Lowering
$M_0$ or $P_{max}$ would reduce $f$, while raising $M_0$
would increase $f$. For example, raising the minimum mass
for core collapse from $8M_\odot$ to $9M_\odot$ raises
$f$ by 18\%. The probability of black hole formation as a
function of initial stellar mass is a complex function
reflecting the structure of $\xi_{2.5}(M_0)$. Generically,
the probability drops rapidly below $M_0 =20 M_\odot$
in order to minimize the production of low mass black holes
and is unity near $M_0=20$-$25M_\odot$ and $M_0=40M_\odot$
where the compactness peaks. At other mass ranges, the
probabilities vary greatly with $\xi_{2.5}^{min}$ and $\xi_{2.5}^{max}$.
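These scalings follow from the definition of $f$ as a Salpeter-weighted average of the formation probability (a schematic restatement under the stated assumptions, with $P_{max}\equiv 1$ above $\xi_{2.5}^{max}$):

```latex
% Fraction of core collapses forming black holes, for a Salpeter IMF
% with all stars above 8 M_sun undergoing core collapse:
\begin{equation*}
f = \frac{\displaystyle\int_{8M_\odot}^{M_{max}}
P_{BH}\!\left(\xi_{2.5}(M_0)\right)\, M_0^{-2.35}\, dM_0}
{\displaystyle\int_{8M_\odot}^{M_{max}} M_0^{-2.35}\, dM_0} .
\end{equation*}
```

Raising the lower limit from $8$ to $9M_\odot$ shrinks the denominator by roughly $(8/9)^{1.35}\simeq 0.85$ while leaving the numerator nearly unchanged (the formation probability is small below $\sim\!20M_\odot$), reproducing the quoted $18\%$ increase in $f$.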
The most interesting directions for expansion would be to
consider other potential indicators of outcomes, such as
binding energies, other sequences of stellar models, and
other possible definitions of the black hole mass. In
particular, many model sequences of ccSN progenitors span
only portions of the mass range of interest, frequently
with large spacings in mass. In order to carry out the
analysis, this approach requires the full mass range from
$M_0 \sim 8 M_\odot$ to $M_0 \sim 100 M_\odot$. While
the differences between the densely sampled \cite{Sukhbold2014}
and the sparser \cite{Woosley2007} models in the mass range
$15M_\odot < M_0 < 30 M_\odot$ seem to have little consequence
for our results, we are concerned that the sparse sampling
of the \cite{Woosley2007} models near $M_0 \sim 40 M_\odot$
compared to the more densely sampled \cite{Woosley2002} models
may drive some differences in the contribution of stars in
this mass range to the black hole mass function. Our
approach, including models of sample completeness,
could also be used to fit sequences of fall-back
models (e.g., \citealt{Zhang2008}, \citealt{Fryer2012}) to the
black hole mass function. We have not done so here because
these fall-back models produce continuous mass functions
that are completely incompatible with the existing data.
We used a fairly minimalist prior on the fraction of failed
ccSNe based on the Galactic ccSN rate and the absence of any
neutrino detections of core collapse (\citealt{Adams2013}). Stronger priors could
be developed based on the detections of ccSNe progenitors
(e.g., \citealt{Smartt2009}), limits on the rate of
failed ccSN (\citealt{Kochanek2008}, \citealt{Gerke2014}),
the neutron star mass function (\citealt{Pejcha2012}), the diffuse neutrino
background produced by ccSNe (\citealt{Lien2010}, \citealt{Lunardini2009})
or comparisons of the star formation and ccSN rates (\citealt{Horiuchi2011},
\citealt{Botticella2012}). The present results
are consistent with the available constraints, and roughly
predict the mass range and fraction of failed ccSNe needed
to explain the red supergiant problem.
\section*{Acknowledgments}
We thank T. Sukhbold and S. Woosley for sharing the helium core
masses and compactnesses for their expanded set of progenitor
models. We thank J.F.~Beacom, D.~Clausen, A.~Gould, C.~Ott, T.~Piro,
K.Z.~Stanek and T.A.~Thompson for comments and discussions.
\chapter*{Abstract}
We formulate new boundary conditions that ensure a well-defined variational principle and finite response functions for conformal gravity (CG).
In the Anti--de Sitter/conformal field theory framework, the gravity theory considered in the bulk gives information about the corresponding boundary theory. The metric is split into the holographic coordinate, used to approach the boundary, and the metric at the boundary. One can study quantities in the bulk by perturbing the (one dimension lower) boundary metric in the holographic coordinate.
The response functions to fluctuations of the boundary metric are the Brown--York stress energy tensor, sourced by the leading term in the expansion of the boundary metric, and a Partially Massless Response, specific to CG and sourced by the subleading term in the expansion of the boundary metric.
These define boundary charges that generate the asymptotic symmetry algebra, the Lie algebra of the diffeomorphisms that preserve the boundary conditions of the theory. We further analyse CG via canonical analysis, constructing the gauge generators of the canonical charges, which agree with the Noether charges, while the charge associated to Weyl transformations vanishes.
The asymptotic symmetry algebra is determined by the leading term in the expansion of the boundary metric and, for asymptotically Minkowski, $R\times S^2$ and boundaries related by conformal rescaling, is the conformal algebra.
The key role is played by the subleading term in the expansion of the metric, forbidden by the Einstein gravity equations of motion but allowed in CG. We classify the subalgebras of the conformal algebra restricted by this term and use them to deduce global solutions of CG. The largest such subalgebra is five dimensional and corresponds to the plane wave (or geon) global solution of CG.
Further, we compute the one loop partition function of CG in four and six dimensions and supplement the theoretical computations with thermodynamical quantities and observables for black holes and the Mannheim--Kazanas--Riegert solution, the most general spherically symmetric solution of conformal gravity, analogous to the Schwarzschild solution of Einstein gravity.
\chapter{Introduction}
Conformal gravity (CG) is an effective, low energy theory of gravity. To justify it, we first have to address the known issues of CG and recognise the issues of Einstein gravity (EG) that are solved within CG. The overall goal is to provide a bridge for CG
that one can use as a map from string theory, more precisely from a limiting case of string theory, to quantum field theories. In general, that framework is known as the Anti--de Sitter/conformal field theory (AdS/CFT) correspondence.
It is important to notice, before proceeding further, that much more is known about EG than about CG, since EG is used for the description of our Universe and is experimentally verified, which encourages its further investigation. Although the experimental results and its thorough study encourage investigating EG further, its long-standing unresolved issues have turned some researchers to study other theories of gravity. Another reason to search for an alternative theory of gravity is the cosmological constant.
The cosmological constant appears naturally in EG as the lowest order term in a derivative expansion that is compatible with all symmetries; the issue is therefore not how to add it to EG but how to explain its small value, which is $10^{-123}$ in natural units. This motivates considering theories where the cosmological constant is not a fixed parameter in the action.
From observations of the anisotropies of the cosmic microwave background (CMB), structure formation, data from clusters, galaxy rotation curves and others, it is evident that there should be additional, invisible matter in the Universe, which is therefore called dark matter \cite{Bertone:2004pz}. Dark matter, however, has not been found yet, which additionally motivates the consideration of modified gravity theories. A big caveat is that most modifications that would replace dark matter encounter a conflict with solar system precision tests or other tests of EG.
Higher derivative theories are prime candidates for an alternative theory of gravity. It was shown \cite{Utiyama:1962sn,Stelle:1976gc,Fradkin:1981iu} that with the addition of higher derivative terms to EG one can obtain a renormalizable theory. If one expands the gravitational potential in a power series in the gravitational constant $\kappa$, each term will correspond to a Feynman diagram, consisting (in the case of the energy momentum tensor) of a loop with matter lines within, responsible for divergences of three different types: $\infty^4$, $\infty^2$ and $\log\infty$. The divergences $\infty^2$ and $\log\infty$ are new in comparison with electrodynamics. The first can be treated by renormalisation of the gravitational constant, while the second requires introducing a counterterm that can be derived from a Lagrangian quadratic in the Riemann tensor\footnote{The $\infty^4$ divergence is removed by introducing a term analogous to the cosmological constant, referred to as the ``cosmological term'' \cite{Utiyama:1962sn}.}.
This, together with the nonrenormalizability of GR \cite{Deser:1974xq}, led to the conclusion that gravitational actions quadratic in the curvature tensor are renormalizable \cite{Stelle:1976gc}.
The main aims of the thesis are to prove the following points.
\begin{itemize}
\item CG has a well defined variational principle and finite response functions without the addition of a generalised Gibbons--Hawking--York term or holographic counterterms.
\item The Noether charges obtained from the response functions agree with the charges obtained by canonical analysis of CG, where the charge associated to the Weyl symmetry vanishes. The algebra defined by the canonical charges is isomorphic to the Lie algebra of the diffeomorphisms along the boundary that preserve the boundary conditions.
\item The asymptotic symmetry algebra defined by the charges has a rich boundary structure. It can be classified into subalgebras defined by the asymptotic solutions. The largest such subalgebra is five dimensional and corresponds to the global pp wave (or geon) solution.
\item The one loop partition function of CG around the $AdS_4$ background is not negligible compared to the classical contribution. It consists of the partition function of EG, a conformal ghost and a partially massless mode, analogous to the structure of the partition function in three dimensions.
\end{itemize}
While higher derivative theories of gravity are superior to EG when it comes to renormalizability properties, their main disadvantage is that generically these theories suffer from ghosts, i.e. states of negative energy, which is the case for pure Weyl theory \cite{Hooft:1974bx}.
Therefore, before studying CG one needs to be aware that CG solves the two loop non-renormalizability issue of EG, but introduces an issue of its own, the existence of ghosts.
Ghosts have been treated by several different approaches. The Pais-Uhlenbeck oscillator approach \cite{Bender:2007wu} identifies the region of parameter space in which the negative energy states that signal ghosts do not appear.
Mannheim's approach treats the non-unitarity by proposing non-hermiticity of certain operators, which does not affect the computation of the remaining observables \cite{Mannheim:2011ds}.
Aspects that favour studying CG as an effective theory of gravity come from the fact that within the AdS/CFT framework CG arises as a counterterm of five dimensional EG \cite{Liu:1998bu,Balasubramanian:2000pq}, and from twistor string theory \cite{Berkovits:2004jj}. It has been studied in a series of articles by 't Hooft, who suggested that conformal symmetry might be the key to understanding physics at the Planck scale \cite{Hooft:2009ms, Hooft:2010ac, Hooft:2010nc, Hooft:2014daa}.
Phenomenologically, CG has been studied as a theory that could describe the galactic rotation curves without the addition of dark matter in a series of articles by Mannheim \cite{Mannheim:1988dj, Mannheim:2006rd, Mannheim:2010xw, Mannheim:2011ds, Mannheim:2012qw}. It was also used in a cosmological model in which the particular terms responsible for inflation were fixed by conformal invariance \cite{Jizba:2014taa}. \newline The thesis is structured as follows. We give a brief introduction to GR in the second chapter, after which we focus on CG. In the third chapter we describe the holographic renormalisation procedure for the CG action and outline its main result: a proof that the CG action in its initial form, without the addition of boundary counterterms of the "Gibbons-Hawking-York type" or holographic counterterms, has a well defined variational principle and finite response functions. The result is obtained by imposing suitable boundary conditions which preserve the gauge transformations and define the asymptotic symmetry algebra at the boundary.
We continue the study of the CG via canonical analysis in the following chapter, where we find the canonical charges that agree with the Noether charges from the third chapter.
In addition, we learn that the Weyl charge vanishes, which is analogous to the three dimensional Chern-Simons action that exhibits conformal invariance. There, the analogy holds when the Weyl factor is kept fixed; in four dimensions the Weyl charge vanishes even for a freely varying Weyl factor, so the discrepancy with the 3D analogy disappears.
The fifth chapter analyses the richness of the structure defined by the asymptotic symmetry algebra obtained from the charges (in the fourth chapter) and from the boundary conditions that conserve gauge transformations (in the third chapter).
The subalgebra defined by the Schwarzschild analogue of the CG solution is the four dimensional $\mathbb{R}\times o(3)$ algebra, while the largest subalgebra we find is five dimensional and extends to the global geon or pp-wave solution.
The sixth and final chapter consists of the computation of the one loop partition function of CG in four and six dimensions, using the heat kernel method and the group theoretic approach for the evaluation of the traced heat kernel.
\newpage
\chapter{General Relativity and AdS/CFT}
\section{Preliminaries}
To consider CG as an effective theory of gravity one first has to introduce the mathematical framework. In this chapter we introduce that framework and the crucial concepts on the example of EG.
Assume we have a manifold $\mathcal{M}$ on which we define the metric $g_{\mu\nu}$ and fields built from the metric and its derivatives.
A curved manifold is described by curvature tensors, defined via parallel transport. If a vector $V^{\rho}$, parallel transported (appendix: General Relativity and AdS/CFT: Parallel Transport) around the loop defined by two vectors $A^{\mu}$ and $B^{\nu}$ and the distances $\delta a$ and $\delta b$, is not the same when it comes back to its initial position, the manifold $\mathcal{M}$ is curved. The vector changes by the value $\delta V^{\rho}$
\begin{equation}
\delta V^{\rho}=(\delta a)(\delta b)A^{\nu}B^{\mu}R^{\rho}{}_{\sigma\nu\mu}V^{\sigma},
\end{equation}
where we define the {\it curvature} or {\it Riemann tensor} $R^{\rho}{}_{\sigma\nu\mu}$, antisymmetric in the $\mu\nu$ indices. Interchanging the vectors corresponds to traveling around the loop in the opposite direction, which gives the inverse of the original answer. The Riemann tensor can be conveniently expressed in terms of the Christoffel symbols and the covariant derivative, which we introduce below.
\input{covd}
Expanding the metric compatibility condition for the three different permutations of indices, one can derive the expression for the connection in terms of the metric tensor and show that there exists exactly one torsion-free connection on a given manifold that is compatible with a given metric on that manifold,
\begin{equation}
\Gamma^{\lambda}_{\mu\nu}=\frac{1}{2}g^{\rho\sigma}(\partial_{\mu}g_{\nu\rho}+\partial_{\nu}g_{\rho\mu}-\partial_{\rho}g_{\mu\nu}).
\end{equation}
Using the commutator of two covariant derivatives we can write the Riemann tensor as
\begin{equation}
[\nabla_{\mu},\nabla_{\nu}]V^{\rho}=R^{\rho}{}_{\sigma\mu\nu}V^{\sigma}-T_{\mu\nu}{}^{\lambda}\nabla_{\lambda}V^{\rho}
\end{equation}
where $T_{\mu\nu}{}^{\lambda}$ is the torsion, zero in our conventions, and the Riemann tensor is defined by
\begin{equation}
R^{\rho}{}_{\sigma\mu\nu}=\partial_{\mu}\Gamma^{\rho}_{\nu\sigma}-\partial_{\nu}\Gamma^{\rho}_{\mu\sigma}+\Gamma^{\rho}_{\mu\lambda}\Gamma^{\lambda}_{\nu\sigma}-\Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\sigma}.
\end{equation}
Contracting two indices of the Riemann tensor, $R^{\lambda}{}_{\mu\lambda\nu}=R_{\mu\nu}$, one obtains the Ricci tensor, and contracting once more, $R=g^{\mu\nu}R_{\mu\nu}$, the Ricci scalar, both of which enter the definition of the Einstein tensor.
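As a concrete check of these definitions (a sketch assuming sympy; the unit 2-sphere is again just a convenient example), one can evaluate the Riemann tensor, the Ricci tensor and the Ricci scalar explicitly.

```python
import sympy as sp

# Unit 2-sphere: expect R^theta_{phi theta phi} = sin^2(theta),
# Ricci R_{mu nu} = g_{mu nu} and Ricci scalar R = 2.
th, ph = sp.symbols('theta phi', positive=True)
coords = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

def Gamma(l, m, nu):
    return sp.Rational(1, 2) * sum(
        ginv[l, r] * (sp.diff(g[r, nu], coords[m]) + sp.diff(g[r, m], coords[nu])
                      - sp.diff(g[m, nu], coords[r])) for r in range(n))

def Riemann(r, s, m, nu):
    # R^r_{s m nu} = d_m Gamma^r_{nu s} - d_nu Gamma^r_{m s}
    #              + Gamma^r_{m l} Gamma^l_{nu s} - Gamma^r_{nu l} Gamma^l_{m s}
    return sp.simplify(
        sp.diff(Gamma(r, nu, s), coords[m]) - sp.diff(Gamma(r, m, s), coords[nu])
        + sum(Gamma(r, m, l)*Gamma(l, nu, s)
              - Gamma(r, nu, l)*Gamma(l, m, s) for l in range(n)))

R_th_phthph = Riemann(0, 1, 0, 1)
Ricci = sp.Matrix(n, n, lambda s, nu: sum(Riemann(l, s, l, nu) for l in range(n)))
Rscalar = sp.simplify(sum(ginv[s, nu]*Ricci[s, nu]
                          for s in range(n) for nu in range(n)))
```

For the round sphere of unit radius this reproduces the textbook values $R_{\mu\nu}=g_{\mu\nu}$ and $R=2$.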
The Einstein tensor defines the Einstein field equations that govern the metric response to energy and momentum. One can introduce them in two ways, by using the variational principle, or following Einstein's original route. To generalise physical laws to curved space-times one has, in principle, to use tensor fields and the covariant derivative instead of the partial derivative used on flat spacetimes.
This prescription is not unique: writing such physical laws causes ambiguities, dealt with in the literature via various prescriptions, such as remembering to preserve gauge invariance for electromagnetism. There can be more than one way to adapt a physical law to curved spacetimes, and the right alternative can ultimately be decided only by experiment.
We want to find an equation analogous to the Poisson equation for Newtonian potential,
\begin{equation}
\nabla^2 \phi=4\pi G \rho,
\end{equation}
with $\rho$ the mass density and $\nabla^2$ the Laplacian in flat space. The equation connects the Laplacian acting on the gravitational potential with the mass distribution, and according to the prescription above, to obtain the relativistic equation (in curved spacetime) we need a relation between tensors. On the right hand side (RHS) we need the energy-momentum tensor, and on the left hand side (LHS) the metric tensor.
To deduce whether the covariantized laws are correct, we examine their Newtonian limit. This is defined by the requirements that the particles move slowly (with respect to the speed of light) and that the gravitational field is weak and static (does not change in time).
The equation we expect in the Newtonian limit is $\nabla^2h_{00}=-8\pi GT_{00}$ for $T_{00}=\rho$, with $G$ Newton's constant and $h_{00}$ the $00$ component of a small perturbation around the flat metric.
This leads to the expected Newtonian potential (see appendix: General Relativity and AdS/CFT: Newton Potential for Small Perturbation Around the Metric and \cite{Carroll:1997ar}).
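As a quick sanity check of this limit (a sketch assuming sympy), the Newtonian potential $\phi=-GM/r$ of a point mass indeed solves the vacuum Poisson equation away from the origin.

```python
import sympy as sp

r, G, M = sp.symbols('r G M', positive=True)
phi = -G*M/r   # Newtonian potential of a point mass

# Radial part of the flat-space Laplacian in spherical coordinates:
# nabla^2 phi = (1/r^2) d/dr (r^2 dphi/dr)
laplacian = sp.simplify(sp.diff(r**2 * sp.diff(phi, r), r) / r**2)
```

The result vanishes identically for $r>0$, consistent with $\nabla^2\phi=4\pi G\rho$ and a delta-function source at the origin.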
To obtain the covariant expression, we can take $\nabla^2$ to be the d'Alembert operator acting on the metric; to obtain a non-vanishing result for the second derivative, instead of the metric itself we take the Riemann tensor. That quantity should be proportional to the stress energy tensor $T_{\mu\nu}$. From the \textbf{Principle of Equivalence}, energy conservation \begin{align}\nabla^{\mu} T_{\mu\nu}=0\label{encon}\end{align} in combination with the Bianchi identity \begin{equation}\nabla^{\mu}R_{\mu\nu}=\frac{1}{2}\nabla_{\nu}R\end{equation} implies that the Ricci tensor alone cannot be proportional to $T_{\mu\nu}$; however, there is a tensor constructed from the second derivatives of the metric, the Ricci tensor and the Ricci scalar, which obeys $\nabla^{\mu}G_{\mu\nu}=0$. That is the Einstein tensor
\begin{equation}
G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}.
\end{equation} Its generalisation gives the Einstein equations of motion (EOM) with matter,
\begin{equation}
G_{\mu\nu}=\kappa T_{\mu\nu}
\end{equation} which reproduces the correct result in the Newtonian limit and, by comparison with it, fixes $\kappa=8\pi G$.
A common approach to obtaining the EOM in gravity theories is via the variational principle.
\section{Variational Principle}
We start with an action consisting of the integral of a Lagrange density over spacetime. Following Hilbert, the simplest possible choice for the Lagrangian is the only independent scalar that can be constructed from the Riemann tensor, the Ricci scalar,
\begin{equation}
S=\int d^nx \sqrt{-g}R,\label{egndim}
\end{equation}
where we label the spacetime dimension with {\it n}.
Varying the action we obtain boundary terms and EOM \begin{equation}R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=0,\label{eeq}\end{equation}
i.e., Einstein's equations in vacuum.
Adding properly normalised matter to the action, $S=\frac{1}{8\pi G}S_{H}+S_{M}$, we can recover Einstein's non-vacuum equations
\begin{equation}
\frac{1}{\sqrt{-g}}\frac{\delta S}{\delta g^{\mu\nu}}=\frac{1}{8\pi G}\left( R_{\mu\nu} -\frac{1}{2}Rg_{\mu\nu}\right)+\frac{1}{\sqrt{-g}}\frac{\delta S_{M}}{\delta g^{\mu\nu}}=0
\end{equation}
in which we set \begin{equation}T_{\mu\nu}=-\frac{1}{\sqrt{-g}}\frac{\delta S_{M}}{\delta g^{\mu\nu}}.\end{equation}
Thinking of the Einstein equations without specifying the theory from which $T_{\mu\nu}$ is derived, our real concern is the existence of solutions of Einstein's equations in the presence of realistic sources of energy and momentum. The most common property is that $T_{\mu\nu}$ represents positive energy densities; negative masses are not allowed. If we allow the action to be constructed from scalars with up to two derivatives of the metric, the first term one can add is a constant. By itself it does not lead to interesting dynamics, but it plays an important role in the EOM
\begin{equation}R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\lambda g_{\mu\nu}=0\end{equation}
where $\lambda$ is the "energy density of the vacuum", $T_{\mu\nu}=-\lambda g_{\mu\nu}$, energy and momentum present in the universe even in the absence of matter.
In quantum mechanics, the classical minimum energy $E_0=0$ of a harmonic oscillator with frequency $\omega$ becomes, upon quantisation, the ground state energy $E_0=\frac{1}{2}\hbar \omega$. Each of the modes contributes to the ground state energy. The result is infinite and must be regularised using a cutoff at high frequencies. For the cosmological constant, the final vacuum energy, which is the regularised sum of the ground state energies of all the fields in the theory, is expected to have a natural scale
\begin{equation}\lambda \sim m_P^4.\end{equation} The prediction of the theory involves the Planck mass $m_P\sim 10^{19}\,\mathrm{GeV}$ and differs from observations of the Universe on large scales by at least a factor of $10^{123}$. This is why the "cosmological constant problem" is considered one of the most important unsolved issues in physics today.
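The size of this mismatch follows from elementary arithmetic (a rough order-of-magnitude sketch; the observed dark-energy scale of order $10^{-3}\,$eV is a standard cosmological input, not a result of this thesis).

```python
# Naive vacuum energy density ~ m_P^4 versus the observed value ~ (10^-3 eV)^4.
m_planck_gev = 1e19            # Planck mass, in GeV (order of magnitude)
lambda_obs_gev = 1e-3 * 1e-9   # observed dark-energy scale ~ 10^-3 eV, in GeV

# Ratio of energy densities: fourth power of the ratio of scales.
ratio = (m_planck_gev / lambda_obs_gev) ** 4
# ratio is of order 10^124, i.e. (at least) the quoted factor of 10^123
```

The precise exponent depends on the numerical prefactors chosen for both scales, but the discrepancy of well over a hundred orders of magnitude is robust.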
\section{Conformal Gravity}
Allowing higher derivatives, one generalisation of the Einstein-Hilbert action is the action of CG
\begin{equation}
S_{CG}=\alpha_{CG}\int d^4 x \sqrt{|g|}g_{\alpha\mu}g^{\beta\nu}g^{\gamma\lambda}g^{\delta\tau}C^{\alpha}{}_{\beta\gamma\delta}C^{\mu}{}_{\nu\lambda\tau}. \label{scg}
\end{equation}
It consists of the Weyl squared term; in $n$ dimensions the Weyl tensor is given by
\begin{equation}C_{\rho\sigma\mu\nu}=R_{\rho\sigma\mu\nu}-\frac{2}{n-2}\left(g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho}\right)+\frac{2}{(n-1)(n-2)}Rg_{\rho[\mu}g_{\nu]\sigma}\end{equation}
that inherits the properties of Riemann tensor
\begin{align}
C_{\rho\sigma\mu\nu}&=C_{[\rho\sigma][\mu\nu]} \\
C_{\rho\sigma\mu\nu}&=C_{\mu\nu\rho\sigma} \\
C_{\rho[\sigma\mu\nu]}&=0.
\end{align}
In addition, Weyl tensor is invariant under the Weyl rescalings of the metric \begin{equation}g_{\mu\nu}\rightarrow \Omega(x)^2g_{\mu\nu}\label{wresc}.\end{equation}
The bulk action (\ref{scg}) is therefore unique: it is the only action polynomial in curvature invariants that enjoys not just diffeomorphism invariance but also Weyl invariance. The Weyl factor coming from the square root of the metric determinant is exactly cancelled by the factors from the metric contributions in (\ref{scg}).
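The characteristic property of the Weyl tensor can be checked symbolically (a sketch assuming sympy; the flat FRW metric with scale factor $a(t)=t$ is merely a convenient conformally flat example, for which every component of $C_{\rho\sigma\mu\nu}$ must vanish even though the curvature does not).

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', positive=True)
coords = [t, x, y, z]
n = 4
g = sp.diag(-1, t**2, t**2, t**2)   # flat FRW, a(t) = t: curved but conformally flat
ginv = g.inv()

# Christoffel symbols Gam[l][m][nu] = Gamma^l_{m nu}
Gam = [[[sp.simplify(sp.Rational(1, 2)*sum(
            ginv[l, r]*(sp.diff(g[r, nu], coords[m]) + sp.diff(g[r, m], coords[nu])
                        - sp.diff(g[m, nu], coords[r])) for r in range(n)))
         for nu in range(n)] for m in range(n)] for l in range(n)]

def Rup(r, s, m, nu):   # R^r_{s m nu}
    return (sp.diff(Gam[r][nu][s], coords[m]) - sp.diff(Gam[r][m][s], coords[nu])
            + sum(Gam[r][m][l]*Gam[l][nu][s] - Gam[r][nu][l]*Gam[l][m][s]
                  for l in range(n)))

Rdown = [[[[sp.simplify(sum(g[r, l]*Rup(l, s, m, nu) for l in range(n)))
            for nu in range(n)] for m in range(n)] for s in range(n)] for r in range(n)]
Ric = [[sp.simplify(sum(Rup(l, s, l, nu) for l in range(n)))
        for nu in range(n)] for s in range(n)]
Rsc = sp.simplify(sum(ginv[s, nu]*Ric[s][nu] for s in range(n) for nu in range(n)))

def Weyl(r, s, m, nu):
    # C_{rsmn} = R_{rsmn} - 2/(n-2) (g_{r[m} R_{n]s} - g_{s[m} R_{n]r})
    #          + 2/((n-1)(n-2)) R g_{r[m} g_{n]s},  with X_{[mn]} = (X_mn - X_nm)/2
    A = sp.Rational(1, n - 2)*((g[r, m]*Ric[nu][s] - g[r, nu]*Ric[m][s])
                               - (g[s, m]*Ric[nu][r] - g[s, nu]*Ric[m][r]))
    B = sp.Rational(1, (n - 1)*(n - 2))*Rsc*(g[r, m]*g[nu, s] - g[r, nu]*g[m, s])
    return sp.simplify(Rdown[r][s][m][nu] - A + B)

weyl_components = [Weyl(r, s, m, nu) for r in range(n) for s in range(n)
                   for m in range(n) for nu in range(n)]
```

All $4^4$ components of the Weyl tensor vanish while the Ricci scalar does not, illustrating that $C_{\rho\sigma\mu\nu}$ is exactly the part of the curvature insensitive to Weyl rescalings (\ref{wresc}).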
Up to now we have introduced the basic concepts used in general relativity (GR). To describe the further research and the first step in verifying whether a theory of gravity can be considered a viable effective theory, i.e. whether its variational principle is well defined and the response functions finite,
we continue with an introduction of the partition function, the variational principle, correlators and the AdS/CFT correspondence.
To obtain the EOM of CG, we use the variational principle. After variation of the action one would in general expect to obtain the EOM plus boundary terms; however, as we will see explicitly, CG does not require additional terms to cancel such boundary terms.
\section{Partition Function, Variational Principle and Correlators}
Let us introduce one of the key objects in the AdS/CFT correspondence, the partition function.
The spectrum of energy levels is conveniently encoded in the trace $\text{Tr}\,\exp(-\beta H)$, where $H$ is the Hamiltonian of the considered action.
Including a conserved angular momentum $J$ that generates a rotation at infinity of the asymptotic space and commutes with $H$, we can write the partition function as
\begin{equation}
Z(\beta,\theta)=\text{Tr}\,\exp(-\beta H-i\theta J),\label{pt0}
\end{equation}
where $\theta$ is the angular chemical potential (rotation chemical potential). The partition function (\ref{pt0}) is standardly computed using the Euclidean path integral according to the formal recipe. In general, the Euclidean quantum gravity path integral is not convergent because the action is not bounded from below \cite{Maloney:2007ud}. One approaches that issue by expanding around a classical solution, which yields a perturbatively meaningful result. However, it is important to mention that it is not clear whether topologies that do not admit classical solutions contribute to the Euclidean path integral or not, and there is no known method to evaluate such contributions in case they do exist. We focus on four dimensions, in which the classical solutions are not completely classified. (This differs from the lower dimensional cases, in particular three dimensions, where one can completely describe the partition function owing to the knowledge of the classical solutions and the fact that the perturbation theory around them terminates at the one-loop term. In that case one can write the complete sum of the known contributions to the path integral.) In addition to the contribution from the classical solutions, there is a possibility of contributions from excitations described by cosmic strings, or from complex, and not just real, saddle points\footnote{see below for the description of saddle points}, which have been considered in the three dimensional case. In our four dimensional case, such additional contributions could come from solutions that describe a cosmic string, or a solution such as the geon.
The path integral
\begin{equation}
\mathcal{Z}=\int \mathcal{D} g \exp \left(-\frac{1}{\hbar} I[g]\right) \label{pathint}
\end{equation}
is evaluated by imposing boundary conditions on the fields and performing a weighted sum over the relevant spacetimes ($\mathcal{M},g$).
The semi-classical limit is dominated by the stationary points of the action \cite{Bergamin:2007sm,Grumiller:2007ju}, so one considers the saddle point approximation.
The meaningful expansion around the classical solution
\begin{equation}
I[g_{cl}+\delta g] = I[g_{cl}]+\delta I[g_{cl},\delta g]+\frac{1}{2}\delta^2 I[g_{cl},\delta g]+...
\end{equation}
verifies that. Here, $\delta I$ and $\delta^2 I$ are the linear and quadratic terms in the Taylor expansion, and the saddle point approximation
\begin{equation}
\mathcal{Z}\sim \exp\left(-\frac{1}{\hbar} I[g_{cl}]\right)\int \mathcal{D}\delta g \exp\left(-\frac{1}{2\hbar}\delta^{2}I[g_{cl},\delta g]\right) \label{saddleap}
\end{equation}
is defined with the requirements that
\begin{enumerate}[label=(\alph*),ref=(\alph*)]
\item the on-shell action is bounded from below \label{a}
\item the first variation of the action vanishes on shell for all the variations of the metric that preserve the boundary conditions. \label{firstvar}
\item the second variation has the correct sign for convergence of the Gaussian in (\ref{saddleap}). \label{c}
\end{enumerate}
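These requirements can be illustrated with a zero-dimensional toy path integral (a numerical sketch assuming numpy; the "action" $I(x)=x^2/2+x^4$ is an arbitrary choice that is bounded from below, has vanishing first variation at $x_{cl}=0$ and positive second variation, so all three conditions hold).

```python
import numpy as np

def I(x):
    return 0.5*x**2 + x**4   # toy "action": I(0) = 0, I'(0) = 0, I''(0) = 1

def Z_exact(hbar, xmax=5.0, npts=200001):
    # Direct evaluation of Z = int dx exp(-I(x)/hbar) by a Riemann sum.
    x = np.linspace(-xmax, xmax, npts)
    return np.sum(np.exp(-I(x)/hbar)) * (x[1] - x[0])

def Z_saddle(hbar):
    # Saddle point approximation: e^{-I(x_cl)/hbar} times the Gaussian
    # integral over fluctuations, with I''(x_cl) = 1.
    return np.exp(-I(0.0)/hbar) * np.sqrt(2*np.pi*hbar)

err_coarse = abs(Z_exact(1e-2)/Z_saddle(1e-2) - 1)
err_fine = abs(Z_exact(1e-3)/Z_saddle(1e-3) - 1)
```

The relative error of the saddle point value shrinks with $\hbar$, reflecting that the quartic term only contributes at subleading order in the $\hbar$ expansion.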
In gravity, the transition from (\ref{pathint}) to (\ref{saddleap}) is complicated when the action does not satisfy the requirements \ref{a}, \ref{firstvar} and \ref{c}. The fact that the on-shell gravity action can diverge is commonly dealt with by the "background subtraction" technique \cite{Gibbons:1976ue,Liebl:1996ti}. A non-vanishing linear term appears when the boundary terms are not considered in detail, which is acceptable if one is only interested in the EOM. If one is interested in the response functions, the one-, two- or three-point functions, one needs to treat the boundary terms properly. The proper treatment of the boundary terms is called the holographic renormalisation procedure, which we explain below.
The third issue that can arise is that the Gaussian integral is divergent. Then the canonical partition function is not well-defined and does not describe the thermodynamics of a stable system, but rather contains information about the decay rates between field configurations with specified boundary conditions \cite{Gross:1982cv}.
The third issue dictates the thermodynamic stability of the system and is usually treated in the same manner as in \cite{York:1986it}. The density of states grows so fast that the canonical ensemble is not defined. The black hole is put inside a cavity and the system is coupled to a thermal reservoir, with the boundary conditions fixed at the wall of the cavity. The canonical ensemble obtained after this procedure is well defined if and only if the specific heat of the system is positive.
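For the Schwarzschild black hole the instability is elementary to exhibit (a sketch assuming sympy and the standard Hawking temperature $T=1/(8\pi G M)$ in units $G=\hbar=c=1$, a textbook input rather than a result of this thesis).

```python
import sympy as sp

M = sp.symbols('M', positive=True)
T = 1/(8*sp.pi*M)                  # Hawking temperature of Schwarzschild, G = 1
C = sp.simplify(1/sp.diff(T, M))   # specific heat C = dM/dT = (dT/dM)^{-1}
# C = -8*pi*M**2 < 0: without a cavity the canonical ensemble is ill-defined
```

The negative specific heat, $C=-8\pi M^2$, is precisely what the cavity construction of \cite{York:1986it} is designed to cure.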
From the above considerations one can see why the action is required to have a well defined variational principle. In four dimensions, using the normalisation from \cite{poisson}, the variation of the action
\begin{equation}
\delta S=\frac{1}{16\pi}\int d^4 x \delta(\sqrt{-g}R)\label{eh4}
\end{equation}
besides the EOM, leads to a boundary term that has to be canceled by adding an appropriate boundary term to the action (\ref{egndim}) (for $n=4$), called the Gibbons-Hawking-York boundary term.
Variation of (\ref{eh4}) is
\begin{equation}
\delta S=\int d^{n} x \left[\sqrt{-g}(g^{\mu\nu}\delta R_{\mu\nu}+R_{\mu\nu}\delta g^{\mu\nu})+R\delta\sqrt{-g}\right].\label{eh}
\end{equation}
To vary the second term in (\ref{eh}) we have to vary the metric with upper indices
\begin{equation}
\delta(g^{\mu\nu})=-g^{\mu\alpha}g^{\nu\beta}\delta g_{\alpha\beta}\label{vargup}
\end{equation}
while for the third term we use the matrix property that
\begin{equation}
Tr (\ln M)=\ln (\det M)
\end{equation}
where $\exp(\ln M)=M$ and the variation is \begin{equation} Tr(M^{-1}\delta M)=\frac{1}{\det M}\delta (\det M).\end{equation}
The variation of the third term gives
\begin{equation}
\delta \sqrt{-g}=-\frac{1}{2}\sqrt{-g}g_{\mu\nu}\delta g^{\mu\nu}.
\end{equation}
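This determinant identity can be verified numerically by finite differences (a sketch assuming numpy; a positive definite matrix stands in for $-g$ in Euclidean signature, and the identity is checked in the equivalent lower-index form $\delta\sqrt{\det g}=\tfrac{1}{2}\sqrt{\det g}\,g^{\mu\nu}\delta g_{\mu\nu}$, related to the one above via (\ref{vargup})).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
g = A @ A.T + 4*np.eye(4)   # symmetric positive definite stand-in metric
h = rng.standard_normal((4, 4))
h = h + h.T                 # symmetric perturbation delta g_{mu nu}

eps = 1e-6
# Finite-difference variation of sqrt(det g) ...
lhs = (np.sqrt(np.linalg.det(g + eps*h)) - np.sqrt(np.linalg.det(g))) / eps
# ... against the closed form (1/2) sqrt(det g) g^{mu nu} delta g_{mu nu}
rhs = 0.5 * np.sqrt(np.linalg.det(g)) * np.trace(np.linalg.inv(g) @ h)
```

The two expressions agree to the accuracy of the finite-difference step.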
Plugging in the results of the variation of the action (\ref{eh}) we obtain
\begin{equation}
\delta S=\int \left(R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\right)\delta g^{\mu\nu}\sqrt{-g}d^4x+\int g^{\mu\nu}\delta R_{\mu\nu}\sqrt{-g}d^4x.\label{vareh}
\end{equation}
With variation of Ricci tensor $\delta R_{\mu\nu}$, which can be in more detail found in the appendix: General Relativity and AdS/CFT: Summary of the Conventions (\ref{varricten}), one can write
\begin{align}
g^{\mu\nu}\delta R_{\mu\nu}={\mathchar '26\mkern -10mu\delta} v^{\mu}{}_{;\mu} & {\mathchar '26\mkern -10mu\delta} v^{\mu}=g^{\alpha\beta}\delta\Gamma^{\mu}_{\alpha\beta}-g^{\alpha\mu}\delta\Gamma^{\beta}_{\alpha\beta}
\end{align}
in which the notation ${\mathchar '26\mkern -10mu\delta} v^{\mu}$ indicates that ${\mathchar '26\mkern -10mu\delta} v^{\mu}$ is not the variation of a quantity $v^{\mu}$.
The second integral in (\ref{vareh}) is
\begin{align}
\int_{\mathcal{M}} g^{\mu\nu}\delta R_{\mu\nu}\sqrt{-g}d^4x&=\int {\mathchar '26\mkern -10mu\delta} v^{\mu}_{;\mu}\sqrt{-g}d^4x \nonumber \\
& =\oint_{\partial\mathcal{M}}{\mathchar '26\mkern -10mu\delta} v^{\mu}d\Sigma_{\mu} \nonumber \\
&=\oint_{\partial\mathcal{M}}\epsilon{\mathchar '26\mkern -10mu\delta} v^{\mu}n_{\mu}\sqrt{|\gamma|}d^3x
\end{align}
in which $n_{\mu}$ is the unit normal to $\partial\mathcal{M}$, $\epsilon\equiv n^{\mu}n_{\mu}=\pm1$, $\gamma$ is the metric on the three dimensional manifold $\partial\mathcal{M}$ and $d\Sigma_{\mu}$ the infinitesimal element of the hypersurface $\Sigma$. Following the conventions of \cite{poisson}, the hypersurface $\Sigma$ partitions spacetime into two regions $\mathcal{M}^{\pm}$ with metrics $g_{\mu\nu}^{\pm}$ and coordinates $x_{\pm}^{\mu}$. Now one has to evaluate ${\mathchar '26\mkern -10mu\delta} v^{\mu}n_{\mu}$ on $\partial\mathcal{M}$, where $\delta g_{\mu\nu}=\delta g^{\mu\nu}=0$. Under those conditions
\begin{equation}
\delta \Gamma^{\mu}_{\alpha\beta}|_{\partial\mathcal{M}}=\frac{1}{2}g^{\mu\nu}\left(\delta g_{\nu\alpha,\beta}+\delta g_{\nu\beta,\alpha}-\delta g_{\alpha\beta,\nu}\right)
\end{equation}
where "$,$" denotes partial derivative "$\partial$". It follows ${\mathchar '26\mkern -10mu\delta} v_{\mu}=g^{\alpha\beta}\left(\delta g_{\mu\beta,\alpha}-\delta g_{\alpha\beta,\mu}\right)$ and
\begin{align}
n^{\mu}{\mathchar '26\mkern -10mu\delta} v_{\mu}|_{\partial\mathcal{M}}&=n^{\mu}\left(\epsilon n^{\alpha} n^{\beta}+\gamma^{\alpha\beta}\right)\left(\delta g_{\mu\beta,\alpha}-\delta g_{\alpha\beta,\mu}\right) \nonumber \\
&=n^{\mu}\gamma^{\alpha\beta}\left(\delta g_{\mu\beta,\alpha}-\delta g_{\alpha\beta,\mu}\right).
\end{align}
To obtain the second line we have used the completeness relation $g^{\mu\nu}=\epsilon n^{\mu}n^{\nu}+\gamma^{\mu\nu}$ and the fact that the symmetric combination $n^{\alpha}n^{\mu}$ is contracted with a quantity antisymmetric in the brackets. Next, we observe that the tangential derivative of $\delta g_{\mu\nu}$ must vanish since $\delta g_{\mu\nu}$ vanishes everywhere on $\partial \mathcal{M}$, which means $\gamma^{\alpha\beta}\delta g_{\mu\beta,\alpha}=0$, and one obtains
\begin{equation}
n^{\mu}{\mathchar '26\mkern -10mu\delta} v_{\mu}|_{\partial\mathcal{M}}=-\gamma^{\alpha\beta}\delta g_{\alpha\beta,\mu}n^{\mu},
\end{equation}
which is nonzero since $\delta g_{\alpha\beta}$ can have a non-vanishing $\textbf{normal}$ derivative on the hypersurface.
One can write the variation of the action with
\begin{equation}
\delta S=\int_{\mathcal{M}}G_{\alpha\beta}\delta g^{\alpha\beta}\sqrt{-g}d^4x-\oint_{\partial\mathcal{M}}\epsilon \gamma^{\alpha\beta}\delta g_{\alpha\beta,\mu}n^{\mu}\sqrt{|\gamma|}d^3x\label{vareh1}
\end{equation}
where the second term is canceled by the variation of
\begin{equation}
S_{B}=\frac{1}{8 \pi}\int_{\partial\mathcal{M}}\epsilon K\sqrt{|\gamma|}d^3x
\end{equation}
and $K$ is the trace of the extrinsic curvature, which can be written
\begin{align}
K&=n^{\alpha}_{;\alpha}=(\epsilon n^{\alpha}n^{\beta}+\gamma^{\alpha\beta})n_{\alpha;\beta} =\gamma^{\alpha\beta}n_{\alpha;\beta}=\gamma^{\alpha\beta}\left(n_{\alpha,\beta}-\Gamma^{\gamma}_{\alpha\beta}n_{\gamma}\right)
\end{align}
whose variation is
\begin{align}
\delta K&=-\gamma^{\alpha\beta}\delta\Gamma^{\gamma}_{\alpha\beta}n_{\gamma}=\frac{1}{2}\gamma^{\alpha\beta}\delta g_{\alpha\beta,\mu}n^{\mu}.
\end{align}
Here we used that the tangential derivatives of $\delta g_{\mu\nu}$ vanish on $\partial\mathcal{M}$. This is precisely the second integral in (\ref{vareh1}).
Adding this term to the entire action (\ref{eh4}) leads to a first variation that vanishes when the EOM are satisfied, in agreement with requirement \ref{firstvar}.
\section{Anti de Sitter/Conformal Field Theory Correspondence}
The Anti de Sitter/Conformal field theory ($AdS/CFT$) correspondence is the framework
that relates a gravity theory to a quantum field theory in one dimension lower. Since its proposal in 1997 \cite{Maldacena:1997zz} it has been generalised to a wider framework that fits the name "gauge/gravity" correspondence.
The duality was discovered in the context of string theory and has been extended over different domains, for example the analysis of the strong coupling dynamics of QCD and the electroweak theories, quantum gravity and the physics of black holes, relativistic hydrodynamics, and applications in condensed matter physics (for example holographic superconductors, quantum phase transitions and cold atoms).
The fields in AdS correspond to sources of operators on the field theory side, and by analysing the dynamics of the sources in the curved space we can learn about the dual operators.
Let us introduce the notion of sources of the operators.
Taking a particular example of the analysis of systems on a lattice (also called the Kadanoff-Wilson renormalisation group approach \cite{Ramallo:2013bua}), one may consider a system on a lattice with a Hamiltonian
\begin{equation}
H=\sum_{x,i}J_i(x,a)\mathcal{O}^i(x)
\end{equation}
for $a$ the lattice spacing, $x$ the different lattice sites and $i$ labelling the operators $\mathcal{O}^i$. The coupling constants $J_{i}(x,a)$, called sources, are defined at each point $x$. Using a particular computational method \cite{Ramallo:2013bua} the operators are appropriately weighted while the Hamiltonian retains its form. Therefore the couplings change at each step and acquire a dependence on the scale
\begin{align}
J_{i}(x,a)\rightarrow J_i(x,2a)\rightarrow J_i(x,4a)\rightarrow..
\end{align}
which can be written as $J_i(x,u)$ for $u=(a,2a,4a,..)$ the length scale at which we probe the system. The evolution of the couplings with the scale is governed by the equations
\begin{equation}
u\frac{\partial}{\partial u}J_i(x,u)=\beta_i(J_j(x,u),u)
\end{equation}
for $\beta_i$ a $\beta$ function of the $i^{th}$ coupling constant.
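The flow equation can be integrated explicitly for a toy beta function (a numerical sketch assuming numpy; $\beta(J)=-bJ^2$ is an arbitrary asymptotically free choice, with the known closed-form solution $J(u)=J_0/(1+bJ_0\ln u)$).

```python
import numpy as np

b, J0 = 1.0, 0.5   # toy beta-function coefficient and initial coupling at u = 1

def J_exact(lnu):
    # Closed-form solution of u dJ/du = -b J^2 with J(u=1) = J0.
    return J0 / (1 + b*J0*lnu)

# Forward-Euler integration of dJ/d(ln u) = -b J^2.
lnu = np.linspace(0.0, 5.0, 100001)
J = np.empty_like(lnu)
J[0] = J0
for i in range(1, len(lnu)):
    J[i] = J[i-1] - (lnu[i] - lnu[i-1]) * b * J[i-1]**2
```

The numerically integrated coupling decreases logarithmically with the scale and matches the closed-form solution, the behaviour the holographic picture promotes to a field profile along the extra dimension $u$.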
The $\beta_{i}$'s can be determined in perturbation theory at weak coupling, while at strong coupling $AdS/CFT$ suggests to consider $u$ as an extra dimension. This way one may view the successive lattices at different values of $u$ as layers of a new higher-dimensional space, and the $J_i(x,u)$ as fields in a space with one extra dimension. One may write \begin{equation}
J_i(x,u)=\phi_i(x,u),
\end{equation}
where the dynamics of the sources is governed by some action, which in $AdS/CFT$ is a particular gravity theory.
This way, one may think of holographic duality as a geometrization of the quantum dynamics defined by the renormalisation group.
Couplings of the theory at UV are identified with values of the bulk fields at the boundary of the higher dimensional space.
The source $\phi_i$ on the gravity side needs to have the same tensor structure as the corresponding dual operator $\mathcal{O}^i$ of the field theory, so that $\phi_i\mathcal{O}^i$ is a scalar. A gauge field $A_{\mu}$ is dual to a current $J^{\mu}$, while a spin two field $g_{\mu\nu}$ is dual to a symmetric second rank tensor, identified with the energy momentum tensor $T_{\mu\nu}$ of the field theory.
A frequent use of the correspondence is the computation of correlation functions. One may compute the correlation functions
\begin{equation}
\langle\mathcal{O}(x_1)...\mathcal{O}(x_n)\rangle
\end{equation}
in Euclidean space from the gravity theory.
In the field theory the correlators can be computed from
\begin{equation}
\mathcal{L}\rightarrow\mathcal{L}+J(x)\mathcal{O}(x)\equiv\mathcal{L}+\mathcal{L}_J,
\end{equation}
where the Lagrangian $\mathcal{L}$ is perturbed by the source term $J(x)$, and the perturbation of the Lagrangian is denoted $\mathcal{L}_J$.
The generating functional
\begin{equation}
Z_{QFT}[J]=\langle \exp\left[\int \mathcal{L}_J\right]\rangle
\end{equation}
defines the connected correlators
\begin{equation}
\langle \prod_{i}\mathcal{O}(x_i)\rangle=\prod_i\frac{\delta}{\delta J(x_i)}\log Z_{QFT}[J]|_{J=0} .
\end{equation}
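For a single Gaussian mode the functional recipe reduces to ordinary derivatives (a symbolic sketch assuming sympy; $\log Z[J]=\tfrac{1}{2}KJ^2$ is the toy generating functional of a free theory with propagator $K$, chosen purely for illustration).

```python
import sympy as sp

J, K = sp.symbols('J K')
logZ = sp.Rational(1, 2) * K * J**2   # free (Gaussian) generating functional

one_point = sp.diff(logZ, J).subs(J, 0)       # <O>_c = 0
two_point = sp.diff(logZ, J, 2).subs(J, 0)    # <O O>_c = K, the propagator
three_point = sp.diff(logZ, J, 3).subs(J, 0)  # higher connected correlators vanish
```

Differentiating $\log Z$ rather than $Z$ itself is what restricts the result to the connected part, here visible in the vanishing of all correlators beyond the two-point function.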
For a bulk field $\phi(z,x)$ that fluctuates in AdS we define $\phi_0$ as a value of $\phi$ at the boundary
\begin{equation}
\phi_0(x)=\phi(z=0,x)=\phi|_{\partial AdS}(x),
\end{equation}
where field $\phi_0$ is related to a source for the dual operator $\mathcal{O}$ in QFT.
The value of $\phi$ at $z=0$ is actually obtained as the limit
\begin{equation}
\lim_{z\rightarrow0}z^{\Delta-d}\phi(z,x)=\psi(x)
\end{equation}
in which $\Delta$ is the dimension of the dual operator, determined as the largest root of the equation
\begin{equation}
(\Delta-p)(\Delta+p-d)=m^2L^2
\end{equation}
for $p$ the number of indices of the antisymmetric tensor $A_{\mu_1....\mu_p}$ (in our case the scalar field $\phi$, $p=0$), $m$ its mass and $L$ the radius of the AdS space.
It reads
\begin{equation}
\Delta=\frac{d}{2}+\sqrt{\left(\frac{d-2p}{2}\right)^2+m^2L^2}.
\end{equation}
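This relation is straightforward to evaluate (a sketch; `dim_Delta` is a hypothetical helper name, checked against the standard values $\Delta=d$ for a massless scalar and $\Delta=d-1$ for a massless gauge field in $d=4$).

```python
import math

def dim_Delta(d, p, m2L2):
    """Largest root of (Delta - p)(Delta + p - d) = m^2 L^2."""
    return d/2 + math.sqrt(((d - 2*p)/2)**2 + m2L2)
```

For example, `dim_Delta(4, 0, 0)` gives the dimension of the operator dual to a massless scalar in $AdS_5$/$CFT_4$, and `dim_Delta(4, 1, 0)` that of the conserved current dual to a massless gauge field.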
AdS/CFT then claims \cite{Gubser:1998bc,Witten:1998qj}
\begin{equation}
Z_{QFT}[\phi_0]=\langle \exp[\int\phi_0\mathcal{O}]\rangle_{QFT}=Z_{gravity}[\phi\rightarrow\phi_0]
\end{equation}
where $Z_{gravity}[\phi\rightarrow\phi_0]$ is the partition function (path integral) of the gravity theory evaluated over field configurations with value $\phi_0$ at the boundary
\begin{equation}
Z_{gravity}[\phi\rightarrow\phi_0]=\sum_{\{\phi\rightarrow\phi_0\}}e^{S_{gravity}}.
\end{equation}
In the limiting case of classical gravity, the sum can be approximated by the classical saddle-point term. That term, which contains the on-shell gravity action, is usually divergent and must be holographically renormalised \cite{Henningson:1998gx}, see the chapter below,
and the classical action is replaced by the renormalised one. One may then write for the generating functional
\begin{equation}
\log Z_{QFT}=S^{ren}_{grav}[\phi\rightarrow\phi_0]
\end{equation}
and the $n-$point function is obtained from
\begin{equation}
\langle\mathcal{O}(x_1)...\mathcal{O}(x_n)\rangle=\frac{\delta^{(n)}S^{ren}_{grav}[\phi]}{\delta\psi(x_1)...\delta\psi(x_n)}\vert_{\phi=0}.
\end{equation}
We will compute this explicitly in the example of CG for the one-point function, which for an operator $\mathcal{O}$ in the presence of the source $\phi$ is written as
\begin{equation} \langle\mathcal{O}(x)\rangle_{\phi}=\frac{\delta S^{ren}_{grav}[\phi]}{\delta \psi (x)}.\end{equation}
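For a scalar field with boundary behaviour as above, this one-point function can be made more explicit; the following is a schematic sketch of a standard result of holographic renormalisation \cite{deHaro:2000vlm}. Near the boundary the solution behaves as
\begin{equation}
\phi(z,x)=z^{d-\Delta}\left(\phi_{(0)}(x)+\ldots\right)+z^{\Delta}\left(\phi_{(2\Delta-d)}(x)+\ldots\right),
\end{equation}
and the renormalised one-point function is proportional to the subleading (normalisable) coefficient,
\begin{equation}
\langle\mathcal{O}(x)\rangle_{\phi}\propto\phi_{(2\Delta-d)}(x),
\end{equation}
up to scheme-dependent local terms in the source $\phi_{(0)}$.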
\chapter{Holographic Renormalisation}
\section{Variation of Conformal Gravity Action and Boundary Conditions}
To obtain the equations of motion of CG we follow the procedure described above. We vary the action (\ref{scg}) and obtain
\begin{equation}
\delta S_{CG}=\alpha_{CG}\int d^4x \sqrt{|g|}\left(EOM^{\mu\nu}\delta g_{\mu\nu}+\nabla_{\sigma}J^{\sigma}\right)\label{var1}
\end{equation}
in which $EOM^{\mu\nu}$ denotes the EOM of CG, and $\nabla_{\sigma}J^{\sigma}$ the boundary terms. $\alpha_{CG}$ is a dimensionless coupling constant and the only coupling constant of the theory.
The EOM of CG require the vanishing of the Bach tensor \cite{bachr}
\begin{equation}
\left(\nabla^{\delta}\nabla_{\gamma}+\frac{1}{2}R^{\delta}_{\gamma}\right)C^{\gamma}{}_{\alpha\delta\beta}=0\label{bach},
\end{equation}
where the computation of the EOM is performed as described in the Einstein case; one can verify it using the computer program xAct \cite{xAct}, convenient for applications to higher-derivative actions. We use it in particular for obtaining the EOM, while upon introducing a certain auxiliary tensor in the variations one can obtain the boundary terms as well.
The EOM of CG are fourth order in derivatives and consist of coupled partial differential equations, so it is not straightforward to obtain their most general solution. The same issue arises already for the EOM of EG.
One therefore searches for perturbative or numerical solutions.
In general, in the perturbative approach within the AdS/CFT framework, one splits the metric into the holographic coordinate $\rho$, along which one approaches the boundary, and the boundary metric.
This is the generalised Fefferman--Graham expansion of the metric, which describes the boundary conditions.
We introduce the length scale $\ell$, related to the cosmological constant by $\Lambda=3\sigma/\ell^2$ with $\sigma=-1$ for AdS and $\sigma=+1$ for dS, in terms of which the asymptotic $(0<\rho\ll\ell)$ line element is
\begin{equation}
ds^2=\frac{\ell^2}{\rho^2}\left(-\sigma d\rho^2+\gamma_{ij}dx^idx^j\right).\label{le}
\end{equation}
Here, we have partially fixed the gauge and used Gaussian coordinates.
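As an illustration, choosing $\gamma_{ij}=\gamma^{(0)}_{ij}=\eta_{ij}$ with all higher coefficients vanishing reduces (\ref{le}) for $\sigma=-1$ to
\begin{equation}
ds^2=\frac{\ell^2}{\rho^2}\left(d\rho^2+\eta_{ij}dx^idx^j\right),
\end{equation}
the Poincar\'e patch of exact AdS, so the expansion below may be viewed as a deformation of this case.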
Near the conformal boundary, at $\rho=0$ on the three dimensional manifold, $\gamma_{ij}$ is
\begin{equation}
\gamma_{ij}=\gamma_{ij}^{(0)}+\frac{\rho}{\ell}\gamma_{ij}^{(1)}+\frac{\rho^2}{\ell^2}\gamma_{ij}^{(2)}+\frac{\rho^{3}}{\ell^3}\gamma_{ij}^{(3)}+...\label{expansiongamma}
\end{equation}
The coefficients in the expansion, the matrices $\gamma_{ij}^{(n)}$, can depend on the boundary coordinates, and the boundary metric $\gamma^{(0)}_{ij}$ needs to be invertible. The EOM at each order of the expansion in the holographic coordinate give conditions on the terms in the expansion of the boundary metric.
We will be interested in asymptotic and full solutions of the CG EOM, which we use as examples. Full solutions of CG can be classified into \begin{itemize}
\item most general solutions. To this class belongs the most general spherically symmetric CG solution
\begin{equation}
ds^2=-k(r)dt^2+\frac{dr^2}{k(r)}+r^2d\Omega^2_{S^2} \label{MKR1}
\end{equation}
for $d\Omega_{S^2}^2$ line-element of the 2-sphere and
\begin{equation}
k(r)=\sqrt{1-12aM}-\frac{2M}{r}-\Lambda r^2+2ar^2
\end{equation}
in which for $a=0$ one obtains the Schwarzschild-(A)dS solution. In a lower dimensional effective model for gravity at large distances, the obtained solution corresponds to ours when $aM\ll1$. From the phenomenological aspect, for $\Lambda\approx10^{-123}$, $a\approx10^{-61}$, $M\approx10^{38}M_{\odot}$ with $M_{\odot}=1$ for the Sun, one obtains $aM\approx10^{-23}M_{\odot}\ll1$ for black holes or galaxies in the Universe \cite{Grumiller:2010bz}.
\item conformally flat solutions, which automatically satisfy the Bach equation.
\item Einstein metrics, in which $R_{\alpha\beta}\propto g_{\alpha\beta}$. That makes solutions of EG a subset of the broader class of solutions of CG.
\end{itemize} In the perturbative expansion approach described above, the restrictions from the EOM do not appear until the fourth order in the expansion of the EOM.
These restrictions are important when they affect results evaluated on-shell, i.e., when the restrictions from the EOM are taken into account.
In order for the first variation of the action to vanish, in general gravitational theories, one requires boundary conditions as part of the definition, as we have seen when considering the variation of the partition function \ref{firstvar}.
Often, ``natural'' boundary conditions consist of a rapid fall-off of the fields as one approaches the boundary of an asymptotic region. That is not the case for gravitational theories, because the metric should not vanish there. An example is the AdS/CFT correspondence, where boundary conditions define the dual field theory on the boundary. De Sitter space similarly requires boundary conditions, which were defined for EG in four dimensions by Starobinsky \cite{Starobinsky:1982mr} and further worked out in \cite{Anninos:2010zf}, \cite{Anninos:2011jp}. By imposing precisely the right boundary conditions, Maldacena reduced CG solutions to solutions of EG \cite{Maldacena:2011mk}.
In our case boundary conditions are imposed by fixing the leading and first-order terms in (\ref{expansiongamma}) on $\partial\mathcal{M}$. They are fixed up to local Weyl rescalings
\begin{align}
\delta\gamma_{ij}^{(0)}|_{\partial\mathcal{M}}=2\lambda\gamma_{ij}^{(0)},& & \delta\gamma_{ij}^{(1)}|_{\partial\mathcal{M}}=\lambda\gamma_{ij}^{(1)}\label{bcs}
\end{align} where $\lambda$ is a regular function on $\partial\mathcal{M}$, while the second- and higher-order terms are allowed to vary.
For the set of boundary conditions to be consistent, on general grounds, one may expect to require adding an analog of the Gibbons--Hawking--York term, a role played in EG, as we have seen, by the extrinsic curvature \cite{York:1972sj,Gibbons:1977mu},
which would prove that the variational principle is well defined and produces the desired boundary value problem.
The additional terms that may be required are the holographic counterterms \cite{Henningson:1998ey,Balasubramanian:1999re,Emparan:1999pm,Kraus:1999di,deHaro:2000vlm,Papadimitriou:2005ii}. Their purpose is to make the response functions (in the AdS/CFT language) finite.
Below, we will show that
for CG, however, these counterterms are not required. One might have anticipated this based on the computation of the on-shell action. The on-shell action
\begin{equation}
\Gamma_{CG}=S_{CG}=\int_{\mathcal{M}}d^4x\sqrt{|g|}C^{\lambda}{}_{\mu\sigma\nu}C_{\lambda}{}^{\mu\sigma\nu}\label{action},
\end{equation}is finite for any metric of the form (\ref{expansiongamma}) and (\ref{le}) when evaluated on a compact region $\rho\geq\rho_c$ as $\rho_c\rightarrow0$. In addition, the free energy obtained from the on-shell action (\ref{action}) agrees with the Arnowitt-Deser-Misner mass and the definition of the entropy according to Wald \cite{Wald:1993nt}. A free energy that agrees with the on-shell action can imply that boundary terms added to the action (\ref{action}) should vanish on-shell. The simplest answer is that the terms themselves are zero.
To verify this claim rigorously one checks the consistency of the variational principle and the finiteness of the response functions.
For that, first we rewrite the action in the form
\begin{equation}
S_{CG}=\int_{M}d^4x\sqrt{-g}\left(32\pi^2\epsilon_4+2R_{\mu\nu}R^{\mu\nu}-\frac{2}{3}R^2\right)
\end{equation}
for $\epsilon_4$ the Euler density in four dimensions (see appendix: General Relativity and AdS/CFT: Summary of the Conventions)
and normalization $\chi(S^4)=2$. By adding a suitable surface term to the bulk integral of $\epsilon_4$, one obtains a topological invariant on a space with boundary. Adding and subtracting that surface term leads to an action separated into a topological part, consisting of the Euler characteristic $\chi(\mathcal{M})$, and a part consisting of the Ricci-squared terms and the boundary terms
\begin{align}
\Gamma_{CG}&=\int_{\mathcal{M}}d^4x\sqrt{|g|}\left(2R^{\mu\nu}R_{\mu\nu}-\frac{2}{3}R^2\right)+32\pi^2\chi(\mathcal{M}) \nonumber \\ &+\int_{\partial\mathcal{M}}d^3x\sqrt{|\gamma|}\left(-8\sigma\mathcal{G}^{ij}K_{ij}+\frac{4}{3}K^3-4KK^{ij}K_{ij}+\frac{8}{3}K^{ij}K_{j}^kK_{ki}\right).\label{ac2}
\end{align}
The boundary terms cancel similar terms from the Euler characteristic for spacetimes with conformal boundary \cite{Myers:1987yn}, and $\mathcal{G}^{ij}$ is the 3D Einstein tensor on the 3D surface $\partial\mathcal{M}$ for the metric $\gamma_{ij}$.
The extrinsic curvature is \begin{equation}K_{ij}=-\frac{\sigma}{2}\pounds_{n}\gamma_{ij}\end{equation}
where $\pounds$ is the Lie derivative and $n^{\mu}$ the outward (future) pointing unit vector normal to $\partial\mathcal{M}$.
Using the auxiliary field, one can rewrite the action
\begin{align}
S_{CG}+32\pi^2\chi(\mathcal{M})&=-\int_{\mathcal{M}}d^4x\sqrt{-g}\left(f^{\mu\nu}G_{\mu\nu}+\frac{1}{8}f^{\mu\nu}f_{\mu\nu}-\frac{1}{8}f^{\mu}_{\mu}f^{\nu}_{\nu}\right) \nonumber \\
& +\int_{\partial\mathcal{M}}d^3x\sqrt{|\gamma|}\big(-8\sigma\mathcal{G}^{ij}K_{ij}+\frac{4}{3}K^3-4KK^{ij}K_{ij}\nonumber\\&+\frac{8}{3}K^{ij}K_{j}^kK_{ki}\big),\label{ac3}
\end{align}
in which the variation of the bulk action was somewhat simplified using the auxiliary field $f_{\mu\nu}$.
After the variation, the fields in the first integral have to be decomposed in the $3+1$ split of the metric, in Gaussian normal coordinates,
while in the second integral all quantities are already defined on the three-dimensional manifold.
Variation of the first integral in (\ref{ac3}) requires the variation of three terms, $I_1=f^{\mu\nu}G_{\mu\nu}$, $I_2=\frac{1}{8}f^{\mu\nu}f_{\mu\nu}$, $I_3=-\frac{1}{8}f^{\mu}_{\mu}f^{\nu}_{\nu}$:
\begin{align}
\delta (f^{\mu\nu}G_{\mu\nu})&= \delta f_{\mu\nu}G^{\mu\nu}-2 f^{\mu\nu}\delta G_{\mu\nu}
\nonumber \\
\frac{1}{8}\delta f^{\mu\nu}f_{\mu\nu}&=\frac{1}{4}\left(f^{\mu\nu}\delta f_{\mu\nu}-f_{\kappa}^{\mu}f^{\nu\kappa}\delta g_{\mu\nu}\right) \nonumber \\
-\frac{1}{8}\delta{\left(f^2\right)}&=-\frac{1}{4}fg^{\mu\nu}\delta f_{\mu\nu}+\frac{1}{4}ff^{\mu\nu}\delta g_{\mu\nu}\label{term3}
\end{align}
where we used (\ref{vargup}). Variation of $G_{\mu\nu}$ is brought to variation of the Ricci tensor and Ricci scalar given in the appendix: General Relativity and AdS/CFT: Summary of the Conventions,
(\ref{varricsc}) and (\ref{varricten}), that is
\begin{align}
-\int d^4x \sqrt{|g|} 2f^{\mu\nu}\delta G_{\mu\nu} =&-\int d^4x \sqrt{|g|} 2f^{\mu\nu}\delta\left(R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}\right) \nonumber \\
=&-\int d^4x\sqrt{|g|} f^{\mu\nu}\bigg(\big( \nabla^{\lambda}\nabla_{\mu}\delta g_{\nu\lambda}\nonumber\\ +&\nabla^{\lambda}\nabla_{\nu}\delta g_{\mu\lambda}-g^{\lambda\sigma}\nabla_{\mu}\nabla_{\nu}\delta g_{\lambda\sigma}-\nabla^2\delta g_{\mu\nu} \big)
\nonumber \\ \nonumber +&\big(-R^{\lambda\sigma}\delta g_{\lambda\sigma}+\nabla^{\lambda}(\nabla^{\sigma}\delta g_{\lambda\sigma}\nonumber \\ - &g^{\kappa\delta}\nabla_{\lambda} \delta g_{\kappa\delta}) \big)g_{\mu\nu} +R\delta g_{\mu\nu} \bigg). \label{varfg}
\end{align}
We have to integrate by parts, analogously to the EG case, which leads to terms that define the EOM and to boundary terms that define the response functions. We demonstrate this on the first term under the integral in (\ref{varfg})
\begin{align}
\int d^4x\sqrt{g} f^{\mu\nu}\nabla^{\lambda}\nabla_{\mu}\delta g_{\nu\lambda} &= \int d^4x \sqrt{g}\nabla^{\lambda}\left(f^{\mu\nu}\nabla_{\mu}\delta g_{\nu\lambda} \right)-\int d^4x\sqrt{g} \nabla^{\lambda}f^{\mu\nu}\nabla_{\mu}\delta g_{\nu\lambda}\nonumber \\& =\int d^4x \sqrt{g}\nabla^{\lambda}\left(f^{\mu\nu}\nabla_{\mu}\delta g_{\nu\lambda} \right)-\int d^4x\sqrt{g}\nabla_{\mu}(\nabla^{\lambda}f^{\mu\nu}\delta g_{\nu\lambda})\nonumber \\&+\int d^4x \sqrt{g}\nabla_{\mu}\nabla^{\lambda}f^{\mu\nu}\delta g_{\nu\lambda}\label{pokaz}
\end{align}
in which we perform a partial integration in the first line, and a partial integration of the second term on the RHS when going from the first to the second line.
In the second partial integration the non-trivial part is the commutation of the covariant derivatives. Both terms in the second line on the RHS that participate in the partial integration receive contributions from the Christoffel symbols that appear in the commutation; however, rewriting the covariant derivative explicitly before performing the partial integration shows that the remaining Christoffel symbols combine with the ones required for writing the covariant derivatives. The new Christoffel symbols that have to be added to the first partial derivative to promote it to a covariant derivative are exactly equal to those that have to be subtracted from the second partial derivative in order to make it covariant.
From equation (\ref{pokaz}) we may observe which of the terms, upon the transformation to GNC, contribute to the EOM, and which to the boundary terms. The two total-derivative terms in the second line of (\ref{pokaz}) contribute to the boundary terms, while the term in the third line contributes to the EOM.
The EOM do not impose conditions on the matrix $\gamma_{ij}^{(1)}$, nor any conditions below fourth order in the $\rho$ expansion; we provide them in the appendix: Holographic Renormalisation: Equations of Motion in Conformal Gravity.
\section{Boundary Terms}
We can write the first variation of the action as
\eq{
\delta\Gamma_{\textrm{\tiny CG}} = \textrm{EOM} + \int_{\partial{\cal M}}\!\!\!\!\extdm \!^3x\sqrt{|\gamma|}\,\big(\pi^{ij}\,\delta\gamma_{ij} + \Pi^{ij}\,\delta K_{ij}\big)~,
}{eq:CG13}
where the boundary terms contain the momenta $\pi^{ij}$ and $\Pi^{ij}$, which read
\begin{align}
\pi^{ij}& = \tfrac{\sigma}{4}(\gamma^{ij} K^{kl}-\gamma^{kl} K^{ij}) f_{kl} + \tfrac{\sigma}{4} f^{\rho}{}_{\rho} (\gamma^{ij} K - K^{ij})
- \tfrac12 \gamma^{ij} {\cal D}_{k}(n_{\rho} f^{k \rho}) + \tfrac{1}{2} {\cal D}^{i}(n_{\rho} f^{\rho j}) \nonumber \\
&- \tfrac{1}{4} (\gamma^{ik} \gamma^{jl} - \gamma^{ij} \gamma^{kl}) \pounds_{n} f_{kl} + \sigma\,\big(2 K {\cal R}^{ij} - 4 K^{ik} {\cal R}_{k}{}^{j} + 2 \gamma^{ij} K_{kl} {\cal R}^{kl} - \gamma^{ij} K {\cal R} \nonumber \\
& + 2 {\cal D}^{2} K^{ij}- 4 {\cal D}^{i} {\cal D}_{k} K^{kj}
+ 2 {\cal D}^{i} {\cal D}^{j} K + 2 \gamma^{ij} ({\cal D}_{k} {\cal D}_{l} K^{kl} - {\cal D}^{k} {\cal D}_{k} K) \big)\nonumber \\ & + \tfrac{2}{3} \gamma^{ij} K^{k}{}_{m} K^{lm} K_{kl} - 4 K^{i k} K^{jl} K_{kl} + 2 K^{ij} K^{kl} K_{kl} + \tfrac{1}{3} \gamma^{ij} K^3 - 2 K^{ij} K^2\nonumber \\
& -\gamma^{ij} K K^{kl} K_{kl} + 4 K K^{i}{}_{k} K^{jk}
+ i \leftrightarrow j
\label{eq:CG14}
\end{align}
and
\begin{align}
\Pi^{ij} &= -8 \,\sigma\, {\cal G}^{ij} - \sigma \,\big(f^{ij} - \gamma^{ij}f^k{}_k \big)
+ 4\gamma^{ij} \big(K^2 - K^{kl}K_{kl}\big)\nonumber \\
& - 8 K K^{ij} + 8K^i{}_k K^{kj} \, ,
\label{eq:CG15}
\end{align}
respectively. Here the boundary metric and the extrinsic curvature are varied independently.
To obtain the response functions that correspond to the sources $\delta \gamma^{(0)}_{ij}$ and $\delta\gamma_{ij}^{(1)}$ we insert the expansions of the curvatures and of the extrinsic curvature into (\ref{eq:CG14}) and (\ref{eq:CG15}). We obtain for $\Pi_{K}^{ij}$
\begin{align}
\Pi_{K}&=\rho^2 (\frac{4 R[D]^{ij}}{\ell^2} - \frac{4 \gamma^{ij} R[D]}{3 \ell^2} - \frac{\gamma^{(1)ij} \gamma^{(1)k}{}_{k}}{\ell^4} + \frac{\gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(1)l}{}_{l}}{3 \ell^4} \nonumber \\ & + \frac{2 \gamma^{(2)ij}}{\ell^4} - \frac{2 \gamma^{ij} \gamma^{(2)k}{}_{k}}{3 \ell^4}) + \rho^3 (- \frac{2 R[D] \gamma^{(1)ij}}{3 \ell^3} + \frac{2 R[D]^{jk} \gamma^{(1)i}{}_{k}}{\ell^3} + \frac{2 R[D]^{ik} \gamma^{(1)j}{}_{k}}{\ell^3} \nonumber \\ & - \frac{8 \gamma^{ij} R[D]^{kl} \gamma^{(1)}{}_{kl}}{3 \ell^3} - \frac{4 R[D]^{ij} \gamma^{(1)k}{}_{k}}{\ell^3} + \frac{2 \gamma^{ij} R[D] \gamma^{(1)k}{}_{k}}{\ell^3} + \frac{\gamma^{(1)ij} \gamma^{(1)}{}_{kl} \gamma^{(1)kl}}{\ell^5} \nonumber \\ &+ \frac{2 \gamma^{(1)ik} \gamma^{(1)j}{}_{k} \gamma^{(1)l}{}_{l}}{\ell^5} - \frac{\gamma^{(1)ij} \gamma^{(1)k}{}_{k} \gamma^{(1)l}{}_{l}}{3 \ell^5} - \frac{2 \gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(1)}{}_{lm} \gamma^{(1)lm}}{3 \ell^5}\nonumber \\ & - \frac{\gamma^{(1)k}{}_{k} \gamma^{(2)ij}}{\ell^5} - \frac{2 \gamma^{(1)j}{}_{k} \gamma^{(2)ik}}{\ell^5} - \frac{2 \gamma^{(1)ik} \gamma^{(2)j}{}_{k}}{\ell^5} + \frac{2 \gamma^{ij} \gamma^{(1)kl} \gamma^{(2)}{}_{kl}}{3 \ell^5} - \frac{\gamma^{(1)ij} \gamma^{(2)k}{}_{k}}{3 \ell^5} \nonumber \\ & + \frac{2 \gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(2)l}{}_{l}}{3 \ell^5} + \frac{2 \gamma^{(3)ij}}{\ell^5} - \frac{2 \gamma^{ij} \gamma^{(3)k}{}_{k}}{3 \ell^5} + \frac{2 D^{i}D_{k}\gamma^{(1)jk}}{\ell^3} - \frac{2 D^{j}D^{i}\gamma^{(1)k}{}_{k}}{\ell^3} \nonumber \\ & + \frac{2 D^{j}D_{k}\gamma^{(1)ik}}{\ell^3} - \frac{2 D_{k}D^{k}\gamma^{(1)ij}}{\ell^3} - \frac{4 \gamma^{ij} D_{l}D_{k}\gamma^{(1)kl}}{3 \ell^3} + \frac{4 \gamma^{ij} D_{l}D^{l}\gamma^{(1)k}{}_{k}}{3 \ell^3}).
\end{align}
Analogously, inserting the expansions of tensors for the auxiliary fields and unphysical fields (\ref{fijexp1}), (\ref{fijexp2}), (\ref{vexp}) and (\ref{wexp}) from the appendix: Holographic Renormalisation: EOM for CG, while keeping in mind the order of $\frac{\rho}{\ell}$ in which the fields appear, one obtains
\begin{align}
\pi_{\tilde{g}}&=\rho^2 (- \frac{4 R[D]^{ij}}{\ell^3} + \frac{4 \gamma^{ij} R[D]}{3 \ell^3} + \frac{\gamma^{(1)ij} \gamma^{(1)k}{}_{k}}{\ell^5} - \frac{\gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(1)l}{}_{l}}{3 \ell^5} - \frac{2 \gamma^{(2)ij}}{\ell^5}\nonumber \\ & + \frac{2 \gamma^{ij} \gamma^{(2)k}{}_{k}}{3 \ell^5}) + \rho^3 (\frac{7 R[D] \gamma^{(1)ij}}{3 \ell^4} - \frac{7 R[D]^{jk} \gamma^{(1)i}{}_{k}}{\ell^4} - \frac{7 R[D]^{ik} \gamma^{(1)j}{}_{k}}{\ell^4} + \frac{19 \gamma^{ij} R[D]^{kl} \gamma^{(1)}{}_{kl}}{3 \ell^4} \nonumber \\ & + \frac{2 \gamma^{(1)ik} \gamma^{(1)jl} \gamma^{(1)}{}_{kl}}{\ell^6} + \frac{8 R[D]^{ij} \gamma^{(1)k}{}_{k}}{\ell^4} - \frac{4 \gamma^{ij} R[D] \gamma^{(1)k}{}_{k}}{\ell^4} - \frac{3 \gamma^{(1)ij} \gamma^{(1)}{}_{kl} \gamma^{(1)kl}}{2 \ell^6} \nonumber \\ & - \frac{2 \gamma^{ij} \gamma^{(1)}{}_{k}{}^{m} \gamma^{(1)kl} \gamma^{(1)}{}_{lm}}{3 \ell^6} - \frac{3 \gamma^{(1)ik} \gamma^{(1)j}{}_{k} \gamma^{(1)l}{}_{l}}{\ell^6} + \frac{2 \gamma^{(1)ij} \gamma^{(1)k}{}_{k} \gamma^{(1)l}{}_{l}}{3 \ell^6}\nonumber \\ & + \frac{13 \gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(1)}{}_{lm} \gamma^{(1)lm}}{12 \ell^6} - \frac{\gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(1)l}{}_{l} \gamma^{(1)m}{}_{m}}{12 \ell^6} + \frac{3 \gamma^{(1)k}{}_{k} \gamma^{(2)ij}}{2 \ell^6} \nonumber \\ & + \frac{\gamma^{(1)j}{}_{k} \gamma^{(2)ik}}{\ell^6} + \frac{\gamma^{(1)ik} \gamma^{(2)j}{}_{k}}{\ell^6} + \frac{\gamma^{ij} \gamma^{(1)kl} \gamma^{(2)}{}_{kl}}{6 \ell^6} + \frac{\gamma^{(1)ij} \gamma^{(2)k}{}_{k}}{6 \ell^6} - \frac{5 \gamma^{ij} \gamma^{(1)k}{}_{k} \gamma^{(2)l}{}_{l}}{6 \ell^6} - \frac{\gamma^{(3)ij}}{\ell^6} \nonumber \\ & + \frac{\gamma^{ij} \gamma^{(3)k}{}_{k}}{3 \ell^6} - \frac{4 D^{i}D_{k}\gamma^{(1)jk}}{\ell^4} + \frac{3 D^{j}D^{i}\gamma^{(1)k}{}_{k}}{\ell^4} \nonumber \\ & - \frac{4 D^{j}D_{k}\gamma^{(1)ik}}{\ell^4} + \frac{5 D_{k}D^{k}\gamma^{(1)ij}}{\ell^4} + \frac{8 \gamma^{ij} D_{l}D_{k}\gamma^{(1)kl}}{3 \ell^4} - \frac{8 \gamma^{ij} 
D_{l}D^{l}\gamma^{(1)k}{}_{k}}{3 \ell^4}).
\end{align}
The response functions we are interested in arise as the tensor fields multiplying $\delta\gamma_{ij}^{(0)}$ and $\delta\gamma_{ij}^{(1)}$.
Therefore we have to express the variation of the
extrinsic curvature as
\begin{equation}
\delta K_{ij}=\left(\frac{\ell}{\rho}\right)^2\left(\delta \theta_{ij}-\frac{1}{\ell}\delta\gamma_{ij}\right)
\end{equation}
since the expansion of $\theta_{ij}$ is given explicitly in terms of $\gamma_{ij}$ in (\ref{thetadef}).
The variation of the action is then
\begin{align}
\delta\Gamma_{CG}&=\int_{\partial\mathcal{M}}d^3x\sqrt{\tilde{g}}\big[\pi_g^{ij}\left(\frac{\ell^2}{\rho^2}\right)^2\delta\gamma_{ij}+\Pi_K^{ij}\left(\frac{\ell}{\rho}\right)^2\left(\delta\theta_{ij}-\frac{1}{\ell}\delta\gamma_{ij}\right) \big]\nonumber \\
&=\int_{\partial\mathcal{M}}d^3x\sqrt{\gamma}\left(\frac{\ell}{\rho}\right)^5\left(\left( \pi_g^{ij}-\frac{1}{\ell}\Pi_{K}^{ij}\right)\delta\gamma_{ij}+\Pi_{K}\delta\theta_{ij}\right).\label{varac1}
\end{align}
Here we have written $\tilde{g}_{ij}=\frac{\ell^2}{\rho^2}\gamma_{ij}$, with $\tilde{g}_{ij}$ the three-dimensional part of the metric $g_{\mu\nu}$ (defined on the manifold $\partial\mathcal{M}$); this combines both $\pi_g^{ij}$ and $\Pi^{ij}_K$ into one response function. We express the tensors from (\ref{varac1}) in the unphysical variables
\begin{align}
\delta\Gamma_{CG}&=\int_{\partial\mathcal{M}}d^3x\sqrt{\gamma}\left(\frac{\ell}{\rho}\right)^3\left(\pi_{\gamma}^{ij}\delta\gamma_{ij}+\Pi_{\theta}^{ij}\delta\theta_{ij}\right)
\end{align}
for $\pi_{\gamma}^{ij}=\left(\frac{\ell^2}{\rho^2}\right)^2\left(\pi_{\tilde{g}}^{ij}-\frac{1}{\ell}\Pi_{K}^{ij}\right)$ and $\Pi_K^{ij}=\left(\frac{\rho}{\ell}\right)^2\Pi_{\theta}^{ij}$,
and expand the variations
\begin{align}
\delta \gamma_{ij}&=\delta\gamma_{ij}^{(0)}+\left(\frac{\rho}{\ell}\right)\delta\gamma_{ij}^{(1)}+...\\
\delta\theta_{ij}&=\frac{\rho}{\ell}\frac{1}{2\ell}\delta
\gamma_{ij}^{(1)}+...
\end{align}
We obtain the most important equation of this section, the variation of the action
\begin{align}
\delta \Gamma_{CG}=\int_{\partial{M}}d^3x \sqrt{\gamma^{(0)}}\left(\tau^{ij}\delta \gamma_{ij}^{(0)}+P^{ij}\delta\gamma_{ij}^{(1)}\right)\label{finres1}
\end{align}up to terms that vanish as $\rho_c\rightarrow 0$, which means that the response functions $\tau^{ij}$ and $P^{ij}$ are finite as $\rho_c\rightarrow 0$. The results for the response functions $\tau_{ij}$ and $P_{ij}$ can be found below in (\ref{eq:CG17}) and (\ref{eq:CG18}), respectively.
Here, the result did not require Weyl invariance. The response function $\tau^{ij}$ plays the role of the stress-energy tensor, which in the case of EG corresponds to the response function of the source $\delta \gamma_{ij}^{(0)}$, while $P^{ij}$ is a response function specific to CG.
Response functions $\tau^{ij}$ and $P^{ij}$ satisfy the conditions
\begin{align}
\gamma_{ij}^{(0)}\tau^{ij}+\frac{1}{2}\psi_{ij}^{(1)}P^{ij}=0, && \gamma_{ij}^{(0)}P^{ij}=0 \label{tracecond}
\end{align}
where $\psi_{ij}^{(1)}$ is the traceless part of the matrix $\gamma^{(1)}_{ij}$ as defined in (\ref{tracelessmet}). The first variation therefore vanishes on-shell when the boundary conditions (\ref{bcs}) are satisfied, which proves a well-defined variational principle. To write the response functions, it is convenient to define the electric part $E_{ij}$ and magnetic part $B_{ijk}$ of the Weyl tensor
\begin{align}
E_{ij}&=n_{\mu}n^{\nu}C^{\mu}{}_{i\nu j}\label{elc} \\
B_{ijk}&=n_{\mu}C^{\mu}{}_{ijk} \label{magc}
\end{align}
which are expanded as well,
\begin{align}
& B_{ijk}^{\ms{(1)}} = \tfrac{1}{2\ell}\,\big({\cal D}_j\psi^{\ms{(1)}}_{ik}-\tfrac12\,\gamma_{ij}^{\ms{(0)}}\,{\cal D}^l\psi^{\ms{(1)}}_{kl}\big) - j \leftrightarrow k \label{eq:CG24} \\
& E_{ij}^{\ms{(2)}} = - \tfrac{1}{2\ell^2} \psi_{ij}^{\ms{(2)}} + \tfrac{\sigma}{2}\, \big({\cal R}_{ij}^{\ms{(0)}} - \tfrac13 \gamma_{ij}^{\ms{(0)}}{\cal R}^{\ms{(0)}}\big) + \tfrac{1}{8\ell^2} \gamma^{\ms{(1)}} \psi_{ij}^{\ms{(1)}} \label{eq:CG23} \\
& E^{\ms{(3)}}_{ij} = -\tfrac{3}{4\ell^2}\,\psi^{\ms{(3)}}_{ij} -\tfrac{1}{12\ell^{2}}\,\gamma_{ij}^{\ms{(0)}}\,\psi^{kl}_{\ms{(1)}} \, \psi_{kl}^{\ms{(2)}}
-\tfrac{1}{16\ell^{2}}\,\psi^{\ms{(1)}}_{ij}\,\psi^{\ms{(1)}}_{kl}\,\psi_{\ms{(1)}}^{kl} \nonumber - \tfrac{\sigma}{12}\,\big(\mathcal{R}^{\ms{(0)}}\,\psi_{ij}^{\ms{(1)}}\\ &-\gamma _{ij}^{\ms{(0)}}\,\mathcal{R}_{kl}^{\ms{(0)}}\,\psi^{kl}_{\ms{(1)}}
+\gamma _{ij}^{\ms{(0)}}\,\mathcal{D}_{l}\,\mathcal{D}_{k}\,\psi^{kl}_{\ms{(1)}} \nonumber +\tfrac{3}{2}\,\mathcal{D}_{k}\,\mathcal{D}^{k}\,\psi_{ij}^{\ms{(1)}}
-3\,\mathcal{D}_{k}\,\mathcal{D}_{i}\,\psi^{\ms{(1)} k}_{j}
\big)\\
& + \tfrac{1}{24\ell^2} \bigg( \gamma_{\ms{(1)}}\,(3\,\psi^{\ms{(2)}}_{ij}+\tfrac12\,\gamma_{ij}^{\ms{(0)}}\,
\psi_{kl}^{\ms{(1)}}\,\psi^{kl}_{\ms{(1)}} - \gamma_{\ms{(1)}}\,\psi_{ij}^{\ms{(1)}}) + 5\,\gamma _{\ms{(2)}}\,\psi_{ij}^{\ms{(1)}} \nonumber\\
&- \sigma\ell^2\,(\mathcal{D}_{j}\,\mathcal{D}_{i}\,\gamma_{\ms{(1)}}
-\tfrac{1}{3}\,\gamma _{ij}^{\ms{(0)}}\,\mathcal{D}^{k}\,\mathcal{D}_{k}\,\gamma _{\ms{(1)}})\,\bigg) + i \leftrightarrow j.
\label{eq:CG26a}
\end{align}
The response function $\tau_{ij}$, expressed in terms of the electric and magnetic parts of the Weyl tensor in the limit $\rho_c\rightarrow0$, reads
\begin{align}
\tau_{ij} &= \sigma \big[\tfrac{2}{\ell}\,(E_{ij}^{\ms{(3)}}+ \tfrac{1}{3} E_{ij}^{\ms{(2)}}\gamma^{\ms{(1)}}) -\tfrac4\ell\,E_{ik}^{\ms{(2)}}\psi^{\ms{(1)} k}_j
+ \tfrac{1}{\ell}\,\gamma_{ij}^{\ms{(0)}} E_{kl}^{\ms{(2)}}\psi_{\ms{(1)}}^{kl}
+ \tfrac{1}{2\ell^3}\,\psi^{\ms{(1)}}_{ij}\psi_{kl}^{\ms{(1)}}\psi_{\ms{(1)}}^{kl}
\nonumber\\&- \tfrac{1}{\ell^3}\,\psi_{kl}^{\ms{(1)}}\,\big(\psi^{\ms{(1)} k}_i\psi^{\ms{(1)} l}_j-\tfrac13\,\gamma^{\ms{(0)}}_{ij}\psi^{\ms{(1)} k}_m\psi_{\ms{(1)}}^{lm}\big)\big]
- 4\,{\cal D}^k B_{ijk}^{\ms{(1)}} + i\leftrightarrow j\,,
\label{eq:CG17}
\end{align}
while $P_{ij}$ is the partially massless response, obtained in the following way. The response function sourced by $\delta\gamma^{(1)}_{ij}$
\begin{equation}
P_{ij}=-\tfrac{4\,\sigma}{\ell}\,E_{ij}^{\ms{(2)}}\,
\label{eq:CG18}\end{equation}
is finite like $\tau_{ij}$ and does not require adding counterterms. Its definition as a partially massless response (PMR) is in the sense of
Deser, Nepomechie and Waldron \cite{Deser:1983mm,Deser:2001pe}. This means that the tensor does not carry full rank: for example, if we decompose the tensor into transverse and traceless parts, ``partially massless'' means that not all modes are present.
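For orientation, recall the linearised statement, quoted here only schematically with conventions as in the literature: a massive spin-two field $h_{\mu\nu}$ on a four-dimensional (A)dS background becomes partially massless at the special mass value $m^2=\frac{2\Lambda}{3}$, where the scalar gauge symmetry
\begin{equation}
\delta h_{\mu\nu}=\left(\nabla_{\mu}\nabla_{\nu}+\frac{\Lambda}{3}g_{\mu\nu}\right)\alpha(x)
\end{equation}
emerges and removes the helicity-zero mode \cite{Deser:2001pe}.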
When we plug $P^{ij}$ into the linearised CG EOM around an (A)dS background we obtain partial masslessness. This behaviour is expected by comparison with the behaviour in 3D \cite{Afshar:2011yh} and on general grounds when one thinks of Weyl invariance \eqref{wresc} as a non-linear completion of the gauge enhancement at the linearised level caused by partial masslessness \cite{Deser:2013bs, Deser:2013uy}. That kind of non-perturbative completion does not appear in general for partial masslessness in higher-derivative theories \cite{Deser:2012ci}.
\section{Ward identity}
This is a good point to explain the concepts of the Ward identity, the Noether charge and the entropy, which we use in the analysis of the response functions obtained from CG and of the observables that can be analysed.
The most common example of a Ward identity involves photon polarisations; we follow the description of Peskin and Schroeder \cite{Peskin:1995ev}. The sum over electron polarisations can be done using the identity $\sum u(p)\overline{u}(p)=\gamma^{\mu}p_{\mu}+m$, where $u(p)$ are electron wave functions, $p$ the momentum and $m$ the mass; similarly, for photon polarisations one performs the replacement
$\sum_{polarisations}\epsilon_{\mu}^*\epsilon_{\nu}\rightarrow -g_{\mu\nu}$, where $\epsilon_{\mu}$ denotes the photon polarisation and $g_{\mu\nu}$ the spacetime metric. If for simplicity we orient the momentum $k$ of the vectors in the $z$ direction, $k^{\mu}=(k,0,0,k)$, the transverse polarisation vectors can be chosen as $\epsilon_1^{\mu}=(0,1,0,0)$ and $\epsilon_2^{\mu}=(0,0,1,0)$, which one can use to write the
cross section with a QED amplitude of an arbitrary QED process that involves an external photon with momentum $k$ as $\mathcal{M}(k)\equiv\mathcal{M}^{\mu}(k)\epsilon_{\mu}^*(k)$
\begin{equation}
\sum_{\epsilon}|\epsilon_{\mu}^{*}(k)\mathcal{M}^{\mu}(k)|^2=\sum_{\epsilon}\epsilon_{\mu}^{*}\epsilon_{\nu}\mathcal{M}^{\mu}(k)\mathcal{M}^{\nu*}(k)=|\mathcal{M}^1(k)|^2+|\mathcal{M}^2(k)|^2.
\end{equation}
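One can verify directly that the replacement $\sum\epsilon_{\mu}^*\epsilon_{\nu}\rightarrow-g_{\mu\nu}$ is justified by the Ward identity: with $k^{\mu}=(k,0,0,k)$, the condition $k_{\mu}\mathcal{M}^{\mu}(k)=0$ derived below gives $\mathcal{M}^0(k)=\mathcal{M}^3(k)$, and hence, in the mostly-minus signature of \cite{Peskin:1995ev},
\begin{equation}
-g_{\mu\nu}\mathcal{M}^{\mu}(k)\mathcal{M}^{\nu*}(k)=-|\mathcal{M}^0(k)|^2+|\mathcal{M}^1(k)|^2+|\mathcal{M}^2(k)|^2+|\mathcal{M}^3(k)|^2=|\mathcal{M}^1(k)|^2+|\mathcal{M}^2(k)|^2,
\end{equation}
which coincides with the sum over the physical transverse polarisations.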
Since one expects $\mathcal{M}^{\mu}(k)$ to be given by the matrix element of the Heisenberg field $j^{\mu}$
\begin{equation}
\mathcal{M}^{\mu}(k)=\int d^4x e^{ikx}\langle f| j^{\mu}(x)|i \rangle \label{wi0}
\end{equation}
for $f$ and $i$ the final and initial states that include all particles except the photon in question, respectively, and $j^{\mu}$ the Dirac vector current $j^{\mu}=\overline{\psi}\gamma^{\mu}\psi$, where $\psi$ is a wave function and $\gamma^{\mu}$ in this (and only this) context a Dirac gamma matrix.
Since the EOM tell us that the current $j^{\mu}$ is conserved, $\partial_{\mu}j^{\mu}(x)=0$, assuming the property holds in the quantum theory it follows from (\ref{wi0}) that
\begin{equation} k_{\mu}\mathcal{M}^{\mu}(k)=0. \end{equation} That is, the amplitude $\mathcal{M}$ vanishes when the polarisation vector $\epsilon_{\mu}(k)$ is replaced by $k_{\mu}$; this is called the ``Ward identity''. It states current conservation, a consequence of the gauge symmetry \begin{align}\psi(x)\rightarrow e^{i\alpha(x)}\psi(x),&& A_{\mu}\rightarrow A_{\mu}-\frac{1}{e}\partial_{\mu}\alpha(x) \end{align} where $\alpha(x)$ is a local phase, $A_{\mu}$ the electromagnetic vector potential and $e$ the charge of the electron.
The general form of this identity is the Ward--Takahashi identity. It states that for an external photon of momentum $k$, $n$ electrons in the initial state with momenta $p_1,...,p_n$ and $n$ electrons with momenta $q_1,...,q_n$ in the final state, one may write
\begin{align}
k_{\mu}\mathcal{M}^{\mu}(k;p_1,...,p_n;q_1,...,q_n)&=e\sum_i\big[\mathcal{M}_0(p_1,...,p_n;q_1,...(q_i-k),...q_n) \nonumber \\&-\mathcal{M}_0(p_1,...(p_i+k),...,p_n;q_1,...,q_n) \big].
\end{align}
If the external electrons are on-shell, the amplitudes on the right have one external particle off-shell and do not contribute, such that for all external electrons on-shell one obtains the Ward identity.\footnote{For a proof of the identity see \cite{Peskin:1995ev}.}
Similarly, in EG the Ward identity yields $\nabla^j\langle \tau_{ij}\rangle=0$, which we regarded as energy conservation (\ref{encon}). Depending on the considered action, Ward identities yield corresponding results. If we couple EG to matter we notice the change in the Ward identity (\ref{encon}) \cite{deHaro:2000xn}.
For the action
\begin{align}
S&=S_{gr}+S_{M} \nonumber \\
& =\frac{1}{16\pi G_{N}}\big[ \int_{\mathcal{M}}d^{d+1}x\sqrt{g}(R-2\Lambda)-\int_{\partial \mathcal{M}}d^dx\sqrt{\gamma}2K \big]
\nonumber \\&+\frac{1}{2}\int_{\mathcal{M}} d^{d+1}x \sqrt{g}(g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi+m^2\Phi^2) \label{smat}
\end{align}
where $\Phi$ is a scalar of mass $m$ defined with
\begin{align}
\Phi(x,\rho)=\rho^{(d-\Delta)/2}\phi(x,\rho), && \phi(x,\rho)=\phi_{(0)}+\phi_{(2)}\rho+...
\end{align}
and $\Delta$ the conformal dimension of the dual operator,
the regulated on-shell value of (\ref{smat}) reads \cite{deHaro:2000xn}
\begin{align}
S_{reg}(bulk)&=\int_{\rho\geq\epsilon}d\rho d^dx\sqrt{g}\frac{1}{\rho}\sqrt{\gamma(x,\rho)}\left[\frac{d}{16\pi G_{N}}\rho^{-d/2}-\frac{m^2}{2(d-1)}\phi^2(x,\rho)\rho^{-k}\right]
\end{align}
for $k=\Delta-d/2$, and $g$ and $\gamma$, defined with
\begin{align}
ds^2&=g_{\mu\nu}dx^{\mu}dx^{\nu}=\ell^2\left(\frac{d\rho^2}{4\rho^2}+\frac{1}{\rho}\gamma_{ij}(x,\rho)dx^idx^j\right) \\
\gamma(x,\rho)&=\gamma_{(0)}+\cdots+\rho^{d/2}\gamma_{(d)}+h_{(d)}\rho^{d/2}\log \rho+\cdots
\end{align}
Here, the logarithmic term appears for even $d$.\footnote{When $d$ is even one obtains conformal anomalies, while when $k$ is a positive integer one obtains matter conformal anomalies.}
In the presence of sources the expectation value of the boundary stress energy tensor is not conserved; instead it satisfies a Ward identity relating its covariant divergence to the expectation values of the operators that couple to the sources.
For the generating functional
\begin{equation}
Z_{CFT}[\gamma_{(0)},\phi_{(0)}]=\langle \exp \int d^dx \sqrt{\gamma_{(0)}}\left[\frac{1}{2} \gamma^{ij}_{(0)}\tau_{ij}-\phi_{(0)}O\right] \rangle
\end{equation}
for $\langle O(x)\rangle=-\frac{1}{\sqrt{\det \gamma_{(0)}}}\frac{\delta S_{M,ren}}{\delta\phi_{(0)}}$, the resulting Ward identity is
\begin{equation}
\nabla^j\langle \tau_{ij}\rangle=\langle O\rangle \partial_i\phi_{(0)}\label{wieg}.
\end{equation}
It follows from invariance under the infinitesimal diffeomorphisms
\begin{equation}
\delta \gamma_{(0)ij}=\nabla_i\xi_j+\nabla_j\xi_i.
\end{equation}
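The origin of (\ref{wieg}) can be sketched as follows (a schematic argument, with signs fixed by the conventions in the definition of $\langle O\rangle$): the variation of the renormalised generating functional with respect to the sources reads
\begin{align}
\delta W=\int d^dx\sqrt{\gamma_{(0)}}\left(\frac{1}{2}\langle\tau^{ij}\rangle\delta\gamma_{(0)ij}-\langle O\rangle\delta\phi_{(0)}\right).
\end{align}
Inserting the diffeomorphism variations $\delta\gamma_{(0)ij}=\nabla_i\xi_j+\nabla_j\xi_i$ and $\delta\phi_{(0)}=\xi^i\partial_i\phi_{(0)}$, integrating by parts, and using the arbitrariness of $\xi^i$ reproduces (\ref{wieg}).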
Analogously to (\ref{wieg}), we obtain a relation similar to the conservation equation (\ref{encon}); however, since it refers to CG, it is modified. Interesting findings about Ward identities in axial gauge are presented in \cite{Capper:1981rc,Capper:1981rd}.
\subsection{Application to Conformal Gravity in Four Dimensions}
From the equations (\ref{eq:CG17}) and (\ref{eq:CG18}), the trace condition (\ref{tracecond}) on the response functions is a consequence of the identities
\begin{align}
\gamma_{(0)}^{ij}E_{ij}^{(3)}&=\psi_{(1)}^{ij}E_{ij}^{(2)}, \\
\gamma_{(0)}^{ij}E_{ij}^{(2)}&=\gamma_{(0)}^{ij}B_{ijk}^{(1)}=0
\end{align}
obtained from the tracelessness of the electric and magnetic
parts of the Weyl tensor. For Starobinsky boundary conditions the Brown--York stress energy tensor is traceless, while in general tracelessness holds only for the PMR. From the obtained current
\begin{align}
J^i=\left(2\tau^j{}_j+2P^{il}\gamma_{lj}^{(1)}\right)\xi^{j}, \label{currentcg}
\end{align}
with $\xi^j$ a boundary diffeomorphism that contributes to the definition of the asymptotic symmetries of CG, we may obtain the conserved charges. To ensure that the conformal boundary $ \partial \mathcal{M}$ is timelike, we set $\sigma=-1$ and consider the $AdS$ case, which implies considering a constant-time surface $\mathcal{C}$ in $\partial\mathcal{M}$.
\begin{align}
Q[\xi]=\int_{\mathcal{C}}d^2x \sqrt{h}u_iJ^i\label{charge1} \end{align} defines the charge, where $u^i$ is the future-pointing unit normal vector to $\mathcal{C}$ and $h$ the metric on $\mathcal{C}$. We continue the analysis of these charges in the ``Canonical Analysis'' chapter, where we prove that they generate the asymptotic symmetries.
The combination of the response functions and $\gamma_{ij}^{(1)}$ that appears in $J^i$ corresponds to the modified stress energy tensor in the sense of Hollands, Ishibashi and Marolf \cite{Hollands:2005ya}.
The modified stress energy tensor satisfies the covariant divergence identity
\begin{align}
\mathcal{D}_i(2\tau^{ij}+2P^i{}_{l}\gamma^{(1)lj})=P^{il}\mathcal{D}^j\gamma_{il}^{(1)},\label{ident}
\end{align}
which is responsible for the difference of the charges on surfaces $\mathcal{C}_1$ and $\mathcal{C}_2 $ bounding a region $\mathcal{V}\subset\partial\mathcal{M}$
\begin{align}
\Delta Q[\xi]=\int_{\mathcal{V}}d^3x\sqrt{|\gamma^{(0)}|}
\left(\tau^{ij}\pounds_{\xi}\gamma_{ij}^{(0)}+P^{ij}\pounds_{\xi}\gamma^{(1)}_{ij}\right).\label{dq}\end{align}
The difference of the charges (\ref{dq}) vanishes for the asymptotic symmetries.
\subsection{Alternate Boundary Conditions}
The conformal gravity action (\ref{action}) can be modified by adding a Weyl invariant boundary term
\begin{align}
\tilde{\Gamma}_{CG}=\Gamma_{CG}+8\int_{\partial \mathcal{M}}d^3x\sqrt{|\gamma|}K^{ij}E_{ij}
\end{align}
which performs a Legendre transformation of the action. Written in this form, the action is also finite on-shell; however, its first variation
\begin{equation}
\delta \tilde{\Gamma}_{CG}=\int_{\partial\mathcal{M}}d^3x\sqrt{|\gamma|}\left(\tilde{\tau}_{ij}\delta\gamma_{ij}^{(0)}+\tilde{P}^{ij}\delta E_{ij}^{(2)}\right) \label{ltr1var}
\end{equation}
is an expression in which the roles of the source and the response function are exchanged. The role of the source is now played by $E_{ij}^{(2)}$, while in (\ref{finres1}) that role belongs to $\gamma_{ij}^{(1)}$. The response function to $\gamma_{ij}^{(1)}$ in (\ref{finres1}) is $P_{ij}$, which is proportional to $E_{ij}^{(2)}$ by (\ref{eq:CG18}), while in (\ref{ltr1var}) the response function $\tilde{P}_{ij}$ is proportional to $\gamma_{ij}^{(1)}$, precisely
\begin{align}
\tilde{P}_{ij}=\frac{4\sigma}{\ell}\gamma_{ij}^{(1)}.
\end{align}
The response function to $\delta\gamma_{ij}^{(0)}$ in (\ref{ltr1var}) is the stress tensor
\begin{align}
\tilde{\tau}_{ij} &= \tau_{ij}+\tfrac{2\sigma}{\ell} \,E_{\ms{(2)}}^{kl}\psi^{\ms{(1)}}_{kl}\,\gamma^{\ms{(0)}}_{ij} + \tfrac{8\sigma}{3\ell} \, E^{\ms{(2)}}_{ij}\gamma^{\ms{(1)}} \nonumber\\
&\quad - \tfrac{4\sigma}{\ell}\,\big(E^{\ms{(2)}}_{ik}\psi^{\ms{(1)} k}_j + E^{\ms{(2)}}_{jk}\psi^{\ms{(1)} k}_i\big),
\end{align}
which, interestingly, now has zero trace, $\tilde{\tau}_i^i=0$.
In further considerations, when referring to the response functions, we will primarily mean the response functions from the action (\ref{action}).
In the following section we apply these results to three prominent examples.
\section{Black hole solutions}
The charges and the response functions can be computed explicitly on asymptotically (A)dS black hole solutions: the Schwarzschild black hole of EG with cosmological constant, the MKR solution and the rotating black hole.
Solutions obeying the Starobinsky boundary conditions, $\gamma_{ij}^{(1)}=0$, include the asymptotically (A)dS solutions of EG with cosmological constant. It follows from the EOM that $E_{ij}^{(2)}=0$, which implies the vanishing of the PMR. The stress energy tensor becomes
\begin{align}
\tau_{ij}=\frac{4\sigma}{\ell}E^{(3)}_{ij}.
\end{align}
This agrees with the traceless and conserved stress energy tensor of EG \cite{deHaro:2000xn}, with Maldacena's analysis, and with the work of Deser and Tekin \cite{Deser:2002jk}.
An interesting example is the MKR solution, whose $\gamma_{ij}^{(1)}$ matrix in the FG expansion does not vanish. In equation (\ref{le}) we set $\sigma=-1$ and transform the MKR solution (\ref{MKR1}) to the FG form.
The transition from the original MKR solution to the FG form of the metric is performed by the coordinate transformation $r\to r(\rho)$ with
\begin{equation}
r(\rho)=\frac{a_{-1}}{\rho}+a_0+a_1\rho+a_2\rho^2+a_3\rho^3+a_4\rho^4. \label{expan}
\end{equation}
We insert the new coordinate in
\begin{equation}
\frac{d\left(r(\rho)\right)^2}{V\left[r(\rho)\right]}=\frac{1}{\rho^2} \label{cond},
\end{equation}
demand that the equality holds order by order in $\rho$, and read out the coefficients $a_i$, $i=-1,0,\dots,4$. We then insert the coordinate $r(\rho)$ in the remaining components of the metric, expand them in the $\rho$ coordinate, and read out the matrices
in the FG expansion
\begin{align}
\gamma_{ij}^{(0)}=diag\left(-1,1,1\right)
\end{align}
\begin{align}
\gamma_{ij}^{(1)}=diag\left(0,-2a,-2a\right)\label{gama1mkr}
\end{align}
\small
\begin{align}
\gamma_{ij}^{(2)}= diag\left(\frac{1}{2}\left(a^2-\sqrt{1-12aM}\right),\frac{3a^2}{2}-\frac{1}{2}\sqrt{1-12aM} ,\frac{3a^2}{2}-\frac{1}{2}\sqrt{1-12aM}\right) \nonumber
\end{align}
\normalsize
\begin{align}
\gamma^{(3)}_{ij}=diag\left(\frac{4M}{3},\frac{1}{6}\left(-3a^2+4M+3a\sqrt{1-12aM}\right),\frac{1}{6}\left(-3a^2+4M+3a\sqrt{1-12aM}\right)\right)\nonumber
\end{align}
and for the response functions
\begin{align}
\tau_{11}&= \frac{4 \left(a \left(\sqrt{1-12 a M}-1\right) \ell^2+6 M\right)}{3 \ell^4}\nonumber \\
\tau_{22}&=\frac{4}{3} \left(\frac{3 M}{\ell^2}+a \left(\sqrt{1-12 a M}-1\right)\right)\nonumber \\
\tau_{33}&=\frac{4 \left(a \left(\sqrt{1-12 a M}-1\right) \ell^2+3 M\right) \sin ^2(\theta )}{3 \ell^2}
\end{align}
\begin{align}
P_{ij}=\left(
\begin{array}{ccc}
\frac{4 \left(\sqrt{1-12 a M}-1\right)}{3 \ell^3} & 0 & 0 \\
0 & \frac{2 \left(\sqrt{1-12 a M}-1\right)}{3 \ell} & 0 \\
0 & 0 & \frac{2 \left(\sqrt{1-12 a M}-1\right) \sin ^2(\theta )}{3 \ell} \\
\end{array}
\right),
\end{align}
where the off-diagonal elements of $\tau_{ij}$ vanish.
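The order-by-order determination of the coefficients $a_i$ in (\ref{expan}) from the condition (\ref{cond}) can be illustrated with a short computer algebra sketch. Note that the potential used below is a toy choice (with $\ell=1$ and the normalisation $a_{-1}=1$), not the actual MKR function $k(r)$, and the condition is read as $(dr/d\rho)^2=V(r(\rho))/\rho^2$:

```python
import sympy as sp

rho, M = sp.symbols('rho M', positive=True)
a0, a1, a2, a3, a4 = sp.symbols('a0 a1 a2 a3 a4')

# Ansatz r(rho) = a_{-1}/rho + a_0 + a_1 rho + ..., normalised to a_{-1} = 1
r = 1/rho + a0 + a1*rho + a2*rho**2 + a3*rho**3 + a4*rho**4

# Toy potential (assumption, for illustration only; not the MKR function)
V = r**2 - 2*M/r

# Gauge-fixing condition (dr/drho)^2 = V(r(rho))/rho^2
cond = sp.diff(r, rho)**2 - V/rho**2

# Laurent-expand around rho = 0 and demand that each order vanishes
ser = sp.expand(sp.series(cond, rho, 0, 2).removeO())
eqs = [ser.coeff(rho, k) for k in range(-3, 2)]
sol = sp.solve(eqs, [a0, a1, a2, a3, a4], dict=True)[0]
print(sol)   # a2 = M/3, all other coefficients vanish
```

Running the sketch returns $a_2=M/3$ with all other subleading coefficients vanishing, illustrating how each order of the Laurent expansion fixes one coefficient.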
One notices that the Rindler acceleration $a$ appears linearly in the partially massless response and renders it non-vanishing. The Rindler acceleration also enters quadratically in the trace of the stress tensor
\begin{equation}
\tau_i^i=\frac{4a(-1+\sqrt{1-12aM})}{3\ell^2}
\end{equation}
and, using equation (\ref{currentcg}), leads to the charge
\begin{equation}
Q_{ij}=\left(
\begin{array}{ccc}
\frac{8 \left(a \left(\sqrt{1-12 a M}-1\right) \ell^2+6 M\right)}{3 \ell^4} & 0 & 0 \\
0 & \frac{8 M}{\ell^2} & 0 \\
0 & 0 & \frac{8 M \sin ^2(\theta )}{\ell^2} \\
\end{array}
\right) \label{chargemkrsph}.
\end{equation}
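As a consistency check, the trace quoted above can be reproduced by contracting the diagonal response functions with an assumed boundary metric $\gamma^{(0)}_{ij}=\mathrm{diag}(-1,\ell^2,\ell^2\sin^2\theta)$ (an assumption suggested by the $\sin^2\theta$ factors; the flat normalisation $\gamma^{(0)}_{ij}=\mathrm{diag}(-1,1,1)$ quoted earlier appears to use different boundary coordinates):

```python
import sympy as sp

a, M, ell, theta = sp.symbols('a M ell theta', positive=True)
s = sp.sqrt(1 - 12*a*M) - 1

# Diagonal response functions tau_ij as quoted in the text
tau = sp.diag(
    4*(a*s*ell**2 + 6*M)/(3*ell**4),
    sp.Rational(4, 3)*(3*M/ell**2 + a*s),
    4*(a*s*ell**2 + 3*M)*sp.sin(theta)**2/(3*ell**2),
)

# Assumed boundary metric gamma^(0)_ij = diag(-1, ell^2, ell^2 sin^2 theta)
gamma0 = sp.diag(-1, ell**2, ell**2*sp.sin(theta)**2)

# Trace tau^i_i = gamma^(0)ij tau_ij
trace = sp.simplify((gamma0.inv()*tau).trace())
print(trace)
```

The result simplifies to $4a(\sqrt{1-12aM}-1)/(3\ell^2)$, in agreement with the trace formula above.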
The conserved charge associated with the Killing vector $\partial_t$, computed using (\ref{charge1})
with the action normalisation $\alpha_{CG}=\frac{1}{64\pi}$, is
\begin{equation}
Q[\partial_t]=\frac{M}{\ell^2}-a(1-\sqrt{1-12aM}).\label{holrenmkrcharge}
\end{equation}
Using Wald's approach, the on-shell action yields the entropy
\begin{equation}
S=\frac{A_h}{4\ell^2}
\end{equation}
for $A_h=4\pi r_h^2$, where $r_h$ is the horizon radius determined by $k(r_h)=0$.
We notice that the area law is obeyed despite the fact that we are considering a higher-derivative gravity theory.
As the rotating black hole example we consider a solution in AdS with Rindler hair. The solution is parametrised by the Rindler acceleration $\mu$ and the rotation parameter $\tilde{a}$, while the mass parameter vanishes.
The vanishing of the mass parameter leads to the vanishing of the PMR, $P_{ij}=0$, which means that $\gamma_{ij}^{(1)}\neq0$ is necessary but not sufficient for the existence of the PMR.
The conserved energy is \begin{align}
E=-\frac{\tilde{a}^2\mu}{\ell^2\left(1-\frac{\tilde{a}^2}{\ell^2}\right)^2}.
\end{align}
\chapter{Canonical Analysis of Conformal Gravity}
The main goal of this chapter is to present the canonical analysis of CG. The analysis of CG using holographic renormalisation, as described in the first chapter, is complemented and supplemented by the canonical analysis. In that way one obtains detailed insight into the boundary charges.
Canonical analyses of higher derivative gravities in four dimensions have been performed earlier \cite{Kluson:2013hza}, pointing out that the theory with the most symmetries is CG. Canonical analysis has also been applied to lower dimensional gravitational theories. In conformally invariant three dimensional Chern--Simons gravity \cite{Afshar:2011qw}, the charge associated with conformal invariance vanishes, as it does in four dimensions. In three dimensions the result depends on the Weyl factor: if the Weyl factor varies freely, the corresponding charge does not vanish. That leads to an enhancement of the algebra at the boundary, which consists of two copies of the Virasoro algebra together with a $U(1)$ current algebra. In four dimensions, however, as we shall demonstrate below, the Weyl charge vanishes even in the case of a freely varying Weyl factor.
For a physical system that is complicated and non-linear but contains global symmetries and conserved quantities, the canonical analysis of the conserved quantities is one of the most useful analytic tools for understanding the system. The conserved quantities in the ADM split have been studied for asymptotically flat dynamical spacetimes, exploring the subtleties of diffeomorphism invariance.
The notion of global symmetry is captured by asymptotic symmetries, equivalence classes of diffeomorphisms that exhibit analogous asymptotic behaviour at infinity.
In other words, asymptotic symmetries are defined as gauge transformations that leave the considered field configurations asymptotically invariant. Furthermore, they are essential for defining the total (``global'') charges \cite{Brown:1986nw, Abbott:1981ff, Abbott:1982jh}.
The notion of asymptotic symmetry naturally depends strongly on the boundary conditions. Imposing boundary conditions reduces the true gauge symmetries to a subset of the entire diffeomorphism group, which allows for non-trivial asymptotic symmetries. The three most prominent reasons to study asymptotic symmetries and the corresponding conserved charges of AdS spacetimes are
\begin{enumerate}
\item Simply to gain further insight into asymptotic symmetries in gravity. Empty AdS is a maximally symmetric solution, and studying asymptotically AdS spaces is a simple and natural choice.
\item The structure found in AdS is richer than the one obtained for asymptotically flat space, which is connected to the fact that multipole moments of a field in AdS decay at equal rates at infinity \cite{Fischetti:2012rd,Horowitz:2000fm}. Asymptotically flat spacetime is dominated by monopoles, while AdS equally admits higher multipoles. Therefore, one is interested not just in global charges (e.g. the total energy), but also in the local densities of the charges at the boundary. In fact, it is natural to study the entire boundary stress energy tensor.
\item Conserved charges are of fundamental importance for the AdS/CFT correspondence, which is widely used at present.
\end{enumerate}
Therefore, we devote this chapter to the analysis of the CG charges,\footnote{Computation of the variations using the constraints, as a general method, was introduced by Ter Haar in 1971.} while the analysis of the asymptotic symmetry algebra and the richness of its structure is the theme of the following chapter.
From CG Lagrangian
\begin{equation}
\mathcal{L}=-\frac{1}{4}\omega_{g}C^a{}_{bcd}C_a{}^{bcd} \label{langcg}
\end{equation}
where $\omega_{g}=\sqrt{g}\,d^4x$ is the volume form, we split the Lagrangian (\ref{langcg}) in the Arnowitt--Deser--Misner (ADM) decomposition. We introduce a more general formalism which is not tied to a particular basis, while the traditional ADM formalism, in a coordinate system, is presented in the appendix: Canonical Analysis of Conformal Gravity: ADM Decomposition.
Consider a function $t$ on a manifold $\mathcal{M}$, which we call time. We assume that it foliates the manifold with spatial hypersurfaces $\sigma_t$ on which $t=const$. The kernel of the one-form $\nabla_a t$, with $\nabla$ the Levi-Civita connection defined on the manifold, defines a tangent bundle $\tau\Sigma$. The hypersurfaces are spatial when
\begin{equation}
g^{ab}\nabla_at\nabla_bt<0,
\end{equation}
The future-pointing normal vector is $n_{a}=\alpha\nabla_at$, with $\alpha$ a normalisation factor completely determined in terms of the so-called lapse function $N$.
A congruence of curves with tangent vector fields $t^a$ satisfies \begin{equation}
t^a\nabla_at=1,
\end{equation}
and $t^a$ can be decomposed as
\begin{equation}
t^a=Nn^a+N^a.
\end{equation}
The lapse $N$ measures the rate of proper time for a physical observer that follows the normal $n^a$, while $N^a$ is the shift vector. If we imagine the manifold in terms of a coordinate grid, the shift describes how the grid is dragged tangentially to the hypersurface relative to the normal direction. That leads to
\begin{align}
t^an_a=\alpha=-N\\
n_a=-N\nabla_at.
\end{align}
One can write the decomposition of the metric $g_{ab}$ with the metric $h_{ab}$ on $\Sigma$ and the normal vectors
\begin{equation}
g_{ab}=-n_an_b+h_{ab},
\end{equation}
which is called the 3+1 or ADM decomposition of the metric, while the Levi-Civita connection on the boundary is denoted $D_a$. With the boundary metric $h_{ab}$ in the projector form $h_a^b$ and the normal vector $n_a$,
one can split tensor fields defined on the manifold $\mathcal{M}$. When we have split a four dimensional tensor field, expressing it solely in terms of the boundary indices, we say that the tensor field has been projected to the boundary.
We denote the projection of a tensor with the $h_{a}{}^b$ metric by
\begin{align}
P=\perp\mathcal{P}.
\end{align}
In this way tensors $P$ on $\Sigma$ are obtained from the tensor fields $\mathcal{P}$ on the manifold $\mathcal{M}$.
The relation of the Levi-Civita connections reads
\begin{equation}
DP=\perp[\nabla (\perp\mathcal{P})].
\end{equation}
The decomposition of the determinant reads
\begin{align}
\sqrt{g}=N\sqrt{h}.
\end{align}
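The determinant relation can be checked numerically on a randomly generated ADM metric; the sketch below assembles the coordinate form $g_{00}=-N^2+N_iN^i$, $g_{0i}=N_i$, $g_{ij}=h_{ij}$ and verifies $\det g=-N^2\det h$, i.e. $\sqrt{-g}=N\sqrt{h}$ in Lorentzian signature:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random positive definite spatial metric h_ij
A = rng.normal(size=(3, 3))
h = A @ A.T + 3*np.eye(3)

N = 1.7                       # lapse
N_up = rng.normal(size=3)     # shift N^i
N_lo = h @ N_up               # N_i = h_ij N^j

# Assemble the 4-metric in the standard ADM coordinate form
g = np.empty((4, 4))
g[0, 0] = -N**2 + N_up @ N_lo
g[0, 1:] = N_lo
g[1:, 0] = N_lo
g[1:, 1:] = h

# det g = -N^2 det h, i.e. sqrt(-g) = N sqrt(h)
print(np.isclose(np.linalg.det(g), -N**2*np.linalg.det(h)))   # True
```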
The bending of surfaces and curves with respect to the space in which they are embedded, defined by the change of the normal vector projected onto the hypersurface, defines the extrinsic curvature $K_{ab}$.
The extrinsic curvature of the spatial hypersurfaces is
\begin{equation}
K_{ab}=h_a^{\ c}\nabla_cn_b= \frac{1}{2}\,^{\ms{(4)}}\pounds_n h_{ab},
\end{equation}
for ${}^{(4)}\pounds_n$ the Lie derivative in the $n^a$ direction, while we reserve the symbol $\pounds$ without prefix for the Lie derivative on $\Sigma$. The normal contraction of the 4D Lie derivative of a covariant spatial tensor field on $\Sigma$,
\begin{equation}
n^{a_i}\,^{\ms{(4)}}\pounds_n P_{a_1\cdots a_i \cdots a_n}=- P_{a_1\cdots c \cdots a_n}(n\nabla)n^{c}+ P_{a_1\cdots c \cdots a_n}(n\nabla)n^{c}=0,
\end{equation}
is spatial, while the 4D Lie derivative along a spatial vector $V^a$ becomes the spatial (3D) Lie derivative on $\Sigma$ when projected to the tensor bundle on $\Sigma$
\begin{equation}
\perp \,^{\ms{(4)}}\pounds_V P_{a_1\cdots a_i \cdots a_n}=\pounds_V P_{a_1\cdots a_i \cdots a_n}.\label{proj1}
\end{equation}
The relation (\ref{proj1}) plays a key role in the definition of the velocities
\begin{eqnarray}
\dot{h}_{ab}&=&\perp \,^{\ms{(4)}}\pounds_t h_{ab}=N\,^{\ms{(4)}}\pounds_n h_{ab}+\pounds_N h_{ab}=\nonumber\\
&=&2\left(NK_{ab}+D_{(a}N_{b)}\right),
\end{eqnarray}
which measure the change of a spatial quantity as $t$ changes between the spatial slices. With the ADM decomposition of the curvatures defined in the appendix: Canonical Analysis of Conformal Gravity: ADM Decomposition of Curvatures,
we obtain the decomposed Lagrangian of CG
\begin{equation}
\mathcal{L}=N\omega_h\left(\perp n^eC_{ebcd}\perp n_fC^{fbcd}-2\perp n^en^fC_{aecf}\perp n_gn_hC^{agch}\right).
\end{equation}
As we already know, CG is a gravity theory of fourth order in derivatives, and the terms quadratic in curvature are of second order in time derivatives.
In other words, our Lagrangian contains the acceleration of $h_{ab}$, i.e. the velocity of $K_{ab}$, in contrast to GR in ADM form, which contains only first order time derivatives.
The Hamiltonian formulation involves only first order time derivatives,
$\frac{df}{dt}=\{f,H\}$. In order to be able to use the Hamiltonian formulation, we introduce an additional constraint.
We consider $K_{ab}$ as a canonical coordinate independent of $h_{ab}$ and relate it to $\dot{h}_{ab}$ via a constraint with a corresponding Lagrange multiplier $\lambda^{ab}$. The Lagrangian of CG in the ADM decomposition then reads
\begin{align}
\mathcal{L}&=N\omega_h\bigg\{ -\frac{1}{2}\mathcal{T}^{abcd}\left[R_{ab}+K_{ab}K-\frac{1}{N}\left(\dot{K}_{ab}-\pounds_NK_{ab}-D_aD_bN\right)\right]\nonumber\\ &\times\left[R_{cd}+K_{cd}K-\frac{1}{N}\left(\dot{K}_{cd}-\pounds_NK_{cd}-D_cD_dN\right)\right]\nonumber\\
&+B_{abc}B^{abc}+\lambda^{ab}\left[\frac{1}{N}\left(\dot{h}_{ab}-\pounds_Nh_{ab}\right)-2K_{ab}\right]\bigg\}.\label{Lagrangean1}
\end{align}
where
\begin{align}
\mathcal{T}^{abcd}=\frac{1}{2}(h^{ac}h^{bd}+h^{ad}h^{bc})-\frac{1}{3}h^{ab}h^{cd}
\end{align}
denotes the DeWitt metric.
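The tracelessness of the DeWitt metric, $h_{ab}\mathcal{T}^{abcd}=0$, which later motivates the additional primary constraint on $\Pi_K^{ab}$, can be verified numerically for a generic spatial metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive definite 3-metric h_ab and its inverse h^ab
A = rng.normal(size=(3, 3))
h = A @ A.T + 3*np.eye(3)
hinv = np.linalg.inv(h)

# DeWitt metric T^{abcd} = (h^{ac}h^{bd} + h^{ad}h^{bc})/2 - h^{ab}h^{cd}/3
T = 0.5*(np.einsum('ac,bd->abcd', hinv, hinv)
         + np.einsum('ad,bc->abcd', hinv, hinv)) \
    - np.einsum('ab,cd->abcd', hinv, hinv)/3

# Tracelessness: h_ab T^{abcd} = 0 for all c, d
print(np.allclose(np.einsum('ab,abcd->cd', h, T), 0))   # True
```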
The Lagrangian (\ref{Lagrangean1}), a function of the variables and velocities
\begin{align}
\mathcal{L}(N,N^a,h_{ab},\partial_th_{ab},K_{ab},\partial_t K_{ab},\lambda^{ab})
\end{align}
allows us to immediately read off the primary constraints
\begin{align}
\Pi_N&=\frac{\partial\mathcal{L}}{\partial (\partial_t N)}\approx0 &&\Pi_a=\frac{\partial \mathcal{L}}{\partial (\partial_t N^a)}\approx0 \label{prim1} \\
\Pi^{\lambda}_{ab}&=\frac{\partial\mathcal{L}}{\partial(\partial_t\lambda^{ab} )}\approx0&& \label{prim2}
\end{align}
To write the Lagrangian in the Hamiltonian formulation (\ref{hc0}) one needs to identify the momenta and the corresponding canonical variables, which requires an analysis of the constraints and of the consistency conditions (\ref{cc1}, \ref{cc2}, \ref{cc3}). Since that procedure requires introducing the Dirac brackets, it is convenient to first inspect whether one can read off the momenta and the corresponding variables directly from the Lagrangian, following the method of \cite{Faddeev:1988qp}. Since the Lagrangian (\ref{Lagrangean1}) allows for the identification of the momenta conjugate to $h_{ab}$ and $K_{ab}$, we denote them by $\Pi_{h}^{ab}$ and $\Pi_{K}^{ab}$
\begin{align}
\Pi_h^{ab}&=\frac{\partial \mathcal{L}}{\partial(\partial_th_{ab})}=\sqrt{h}
\lambda^{ab}\label{primh}\\
\Pi_K^{ab}&=\frac{\partial\mathcal{L}}{\partial(\partial_tK_{ab})}=\sqrt{h}2\alpha C^a{}_{\textbf{n}}{}^b{}_{\textbf{n}}\label{primk}
\end{align}
respectively. The projection of the Weyl tensor gives
\begin{align}
\Pi_{K}^{ab}=-\alpha\sqrt{h}\mathcal{T}^{abcd}\bigg(\pounds_nK_{cd}-R_{cd}-K_{cd}K-\frac{1}{N}D_cD_dN\bigg),
\end{align}
as can be recognised from (\ref{Lagrangean1}).
Since the DeWitt metric and the projection of the Weyl tensor are traceless, we have to ensure that $\Pi_K^{ab}$ is traceless and define one more primary constraint.
We can rewrite the Lagrangian (\ref{Lagrangean1}) using (\ref{prim1}), (\ref{prim2}), (\ref{primk}) and (\ref{primh}) and the canonical variables
\begin{eqnarray}
\mathcal{L}&=&\Pi_K^{ab}\dot{K}_{ab}+\Pi_h^{ab}\dot{h}_{ab}+N\bigg[\omega_h^{-1}\frac{\Pi_K^{ab}\Pi^K_{ab}}{2}-\Pi_K^{ab}\left(R_{ab}+K_{ab}K\right)+\omega_hB_{abc}B^{abc}\nonumber\\ &&-2\Pi_h^{ab}K_{ab}\bigg]
-\Pi_K^{ab}D_aD_bN-\Pi_K^{ab}\pounds_NK_{ab}-\Pi_h^{ab}\pounds_Nh_{ab}-\lambda_P\Pi_K^{ab}h_{ab}.\label{Lagrangean2}
\end{eqnarray}
To write the Lagrangian in a form that manifestly contains the constraints of the Hamiltonian, we use partial integration to remove the lapse and shift from under the covariant derivatives
\begin{align}
L&=\int_\Sigma \bigg\{\Pi_K^{ab}\dot{K}_{ab}+\Pi_h^{ab}\dot{h}_{ab}-N\bigg[-\omega_h^{-1}\frac{\Pi_K^{ab}\Pi^K_{ab}}{2}+\Pi_K^{ab}\left(R_{ab}+K_{ab}K\right)-\omega_hB_{abc}B^{abc} \nonumber\\
&+2\Pi_h^{ab}K_{ab}+D_aD_b\Pi_K^{ab}\bigg]-N^c\left[\Pi_K^{ab}D_cK_{ab}-2D_a\left(\Pi_K^{ab}K_{bc}\right)-D_a\Pi_h^{ab}h_{bc}\right]-\lambda_P\Pi_K^{ab}h_{ab}\bigg\} \nonumber\\
&-\int_{\partial\Sigma}\ast\left[\Pi_K^{ab}D_bN-D_b\Pi_K^{ab}N+2N^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right)\right].\label{Lagrangean3}
\end{align}
Here $\ast$ denotes the contraction with the free index that belongs to one of the indices of the differential form hidden in the tensor densities that build the momentum variables.
The term multiplied by the Lagrange multiplier $\lambda_P$ ensures the tracelessness of $\Pi_K^{ab}$ and constitutes a new primary constraint. Demanding that the primary constraints $\Pi_N$ and $\Pi_{a}^{\vec{N}}$ are conserved in time, we can identify from (\ref{Lagrangean3}) the constraints
\begin{align}
\mathcal{H}_\perp&=-\omega_h^{-1}\frac{\Pi_K^{ab}\Pi^K_{ab}}{2}+D_aD_b\Pi_K^{ab}+\Pi_K^{ab}\left(R_{ab}+K_{ab}K\right)\nonumber \\ &-\omega_hB_{abc}B^{abc}+2\Pi_h^{ab}K_{ab}
\end{align}
the Hamiltonian constraint that is multiplied by $N$, and
\begin{align}
\mathcal{V}_c&=\Pi_K^{ab}D_cK_{ab}-2D_a\left(\Pi_K^{ab}K_{bc}\right)-D_a\Pi_h^{ab}h_{bc},
\end{align}
the vector constraint, which is multiplied by $N^a$. The constraint that ensures tracelessness and is multiplied by $\lambda_P$ we denote by $\mathcal{P}\equiv \Pi_K^{ab}h_{ab}$.
The fields $N$ and $N^a$ could be considered as Lagrange multipliers; however, we treat them as canonical coordinates, since they multiply the secondary constraints $\mathcal{H}_{\perp}$ and $\mathcal{V}_c$.
This step may seem superfluous; however, it ensures that the gauge generators found via the Castellani algorithm have the accurate space-time interpretation. This is important when considering the asymptotic symmetry algebra of CG.
\section{Total Hamiltonian of Conformal Gravity}
We can write the total Hamiltonian (\ref{ht}) from the canonical Hamiltonian (\ref{Lagrangean3}) by expressing the terms using the constraints; it now reads
\begin{eqnarray}
H_T=\int_{\Sigma}\left(\lambda_N\Pi_N+\lambda_{\vec{N}}^a\Pi^{\vec{N}}_{a}+\lambda_P\mathcal{P}+N\mathcal{H}_\perp+N^a\mathcal{V}_a\right)+\int_{\partial\Sigma}\left(\mathcal{Q}_\perp+\mathcal{Q}_D\right).\label{scw}
\end{eqnarray}
where we have denoted the surface terms
\begin{align}
\mathcal{Q}_{\perp}&=\ast\left[\Pi_K^{ab}D_bN-D_b\Pi_K^{ab}N\right]
\\
\mathcal{Q}_D&=\ast\left[2N^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right)\right].
\end{align}
These surface terms arise from the integration by parts of
\begin{align}
\int_{\Sigma}d^3x \Pi_K^{ab}D_aD_bN&=\int_{\Sigma}d^3x\left(D_a\big(\Pi_K^{ab}D_bN\big)-D_b\big(D_a\Pi_K^{ab}N\big)+ND_aD_b\Pi_K^{ab}\right)\\
&=\int_{\Sigma}d^3xND_aD_b\Pi_K^{ab}+\oint_{\partial\Sigma}\ast(D_bN\Pi_K^{ab}-ND_a\Pi_K^{ab}),
\end{align}
and integration by parts of the vector constraint.
One can introduce the canonical pairs $(h_{ab},\Pi_h^{cd})$, $(K_{ab},\Pi_K^{cd})$,
$(N^a,\Pi_c^{\vec{N}})$ and $(N,\Pi_{N})$, and define the canonical Poisson bracket by
\begin{align}
\{g_A(x),p^B(x')\}=\delta(x-x')\delta_A^B\label{krondel}
\end{align}
where $\delta_A^B$ denotes the symmetrised product of Kronecker delta symbols.
From (\ref{krondel}) and following the consistency conditions (\ref{cc1}, \ref{cc2}, \ref{cc3}), we can define one further secondary constraint, which we denote by $\mathcal{W}$
\begin{align}
\left\{H_T,\mathcal{P}\right\}&=N\left(\Pi_K^{ab}K_{ab}+2\Pi_h^{ab}h_{ab}\right)+NK\mathcal{P}-D_c\left(N^c\mathcal{P}\right)\\&\approx N\left(\Pi_K^{ab}K_{ab}+2\Pi_h^{ab}h_{ab}\right)\equiv N\mathcal{W}.
\end{align}
To find the gauge generators, the improved generators and their algebra, we have to compute the Poisson bracket algebra among the constraints.
For that, we define smeared functions, illustrated here on the example of the momentum constraint.
The smeared momentum constraint can be written as a functional
\begin{align}
V[\vec{X}]=\int_{\Sigma}d^3x X^aV_a\label{smear1}
\end{align}
for $\vec{X}$ an arbitrary test vector on $\Sigma$.
In this sense, one can alternatively write the momentum constraint as
\begin{align}
V[\vec{X}]=\int_{\Sigma}d^3x(\Pi_h^{ab}\mathcal{L}_{\vec{X}}h_{ab}+\Pi_K^{ab}\mathcal{L}_{\vec{X}}K_{ab})-\oint_{\partial\Sigma}\ast(X_b\Pi_h^{ab}+X^{a}\Pi_K^{bc}K_{bc})
\end{align}
for $\mathcal{L}_{\vec{X}}h_{ab}=2D_{(a}X_{b)}$.
The terms under the first integral, for $\vec{X}=\vec{N}$, read
\begin{align}
\int_{\Sigma}\Pi_h^{ab}\pounds_Nh_{ab}&=-2\int_{\Sigma}D_a\Pi_h^{ab}h_{bc}N^c+2\int_{\partial\Sigma}\ast\Pi_h^{ab}h_{bc}N^c\\
\int_{\Sigma}\Pi_K^{ab}\pounds_NK_{ab}&=\int_{\Sigma}\big[\Pi_K^{ab}D_cK_{ab}-2D_a\left(\Pi_K^{ab}K_{bc} \right)\big]N^c+2\int_{\partial\Sigma}\ast\Pi_{K}^{ab}K_{bc}N^c.
\end{align}
The vector constraint satisfies the Lie algebra
\begin{equation}
\big\{V[\vec{X}],V[\vec{Y}]\big\}=V[[\vec{X},\vec{Y}]],
\end{equation}
which is obeyed since the Lie derivative has the property
\begin{align}
\mathcal{L}_{\vec{X}}\mathcal{L}_{\vec{Y}}-\mathcal{L}_{\vec{Y}}\mathcal{L}_{\vec{X}}=\mathcal{L}_{[\vec{X},\vec{Y}]}
\end{align}
where $\vec{X}$ and $\vec{Y}$ satisfy
\begin{align}
[\vec{X},\vec{Y}]^a=X^{b}\partial_bY^a-Y^b\partial_bX^a.
\end{align}
Under spatial diffeomorphisms, the variables $N, N^a,h_{ab}$ and $K_{ab}$ are scalar or tensor fields while the corresponding canonical momenta are scalar or tensor densities with unit weight. \footnote{Under spatial diffeomorphisms, all the constraints are scalar or tensor densities.}
For the Hamiltonian constraint, the smearing function is a scalar,
\begin{align}
\mathcal{H}_{\perp}[\epsilon]=\int_{\Sigma}d^3x\epsilon\mathcal{H}_{\perp}\label{smear2}
\end{align}
for $\epsilon$ an arbitrary function on $\Sigma$.
Writing the total Hamiltonian using these conventions, we obtain
\begin{equation}
H_T=H_0[N]+V[\vec{N}]+P[\psi]+\sum_{i=1}^4 C_i[\phi^{(i)}]+Q_{\perp}[N]+Q_{D}[\vec{N}] \label{htw2}
\end{equation}
where we define the functionals in the form of (\ref{smear1}) and (\ref{smear2}). The terms in the Hamiltonian are the following
\begin{align}
\sum_{i=1}^4C_i[\phi^{(i)}]\equiv\int_{\Sigma}\phi^{(1)}_{ab}\big(\Pi_h^{ab}-\omega_h\lambda^{ab}\big)+\phi_{(2)}^{ab}\Pi_{ab}^{\lambda}+\phi_a^{(3)} \Pi_{a}^{\vec{N}}+\phi^{(4)}\Pi_N
\end{align}
for $\phi^{(2)},\phi^{(3)},\phi^{(4)}$ Lagrange multipliers of $\lambda^{ab},N^a,N$ respectively.
The remaining functionals read
\begin{align}
H_0[N]&\equiv\int_{\Sigma}N\bigg[-\omega_h^{-1}\frac{\Pi_K^{ab}\Pi_{ab}^{K}}{2}+D_aD_b\Pi_K^{ab}+\Pi_K^{ab}(R_{ab}+K_{ab}K)+2\Pi_h^{ab}K_{ab}\nonumber \\&-\omega_hB_{abc}B^{abc}\bigg]\\
V[\vec{N}]&\equiv \int_{\Sigma}N^c\big[\Pi_K^{ab}D_cK_{ab}-2D_a\left(\Pi_K^{ab}K_{bc}\right)-D_a\Pi_h^{ab}h_{bc}\big]\\
P[\psi]&\equiv\int_{\Sigma}\psi\Pi_{K}^{ab}h_{ab}.
\end{align}
Note that in the Hamiltonian (\ref{htw2}), in comparison to the Hamiltonian (\ref{scw}), we have two additional constraints: the constraint that ensures that the momentum conjugate to $\lambda^{ab}$ vanishes, and the constraint that ensures that the momentum of the $h_{ab}$ variable is proportional to $\lambda^{ab}$. These are exactly the constraints that can either be immediately identified, as we did when considering (\ref{htw2}), or treated with Dirac brackets, as we show below.
\subsection{Poisson Bracket Algebra}
To analyse the Poisson bracket algebra, the constraints need to satisfy the consistency conditions. For $\Pi_{ab}^{\lambda}$ and $\Pi_{h}^{ab}-\omega_h\lambda^{ab}$ the Poisson brackets
\begin{align}
\big\{H_T,\Pi_{ab}^{\lambda} \big\}&=-\omega_{h}\phi^{(1)}_{ab}\approx0\\
\big\{H_T,\Pi_h^{ab}-\omega_h\lambda^{ab}\big\}&=\frac{\delta H_T}{\delta h_{ab}}+\frac{1}{2}\omega_hh^{cd}\frac{\delta H_T}{\delta h^{cd}}+\omega_h\phi^{ab}_{(2)}\approx0.
\end{align}
determine $\phi_{ab}^{(1)}$ and $\phi_{ab}^{(2)}$. That implies that
$\Pi_{ab}^{\lambda}$ and $\Pi_{h}^{ab}-\omega_h\lambda^{ab}$ are second class constraints, which need to be treated using the Dirac brackets. Here, we set them strongly to zero. The Poisson brackets with the remaining primary constraints give
\begin{align}
\{H_T,\Pi_N\}=\mathcal{H}_0\approx0 \\
\{H_{T},\Pi_{\vec{N}}\}=\mathcal{V}_a\approx0
\end{align}
where the consistency condition for the third constraint $\mathcal{P}$, which results in the new constraint $\mathcal{W}$, was verified in (\ref{scw}).\footnote{See appendix: Canonical Analysis of Conformal Gravity: Variations.}
The action of the diffeomorphism constraint $\{\cdot,V[\vec{X}]\}$ on an arbitrary tensor density on the phase space defined by $(h,K,\Pi_h,\Pi_K)$ is given by
\begin{align}
\{\Phi,V[\vec{X}]\}=\pounds_{\vec{X}}\Phi\label{eqpb}
\end{align}
where the change under diffeomorphisms of the canonical coordinate $h_{ab}$ and its momentum reads
\begin{align}
h_{ab}\rightarrow h_{ab}+\pounds_{\vec{X}}h_{ab}, && \Pi_h^{ab}\rightarrow\Pi_h^{ab}-\pounds_{\vec{X}}\Pi_h^{ab}.
\end{align}
To compute this bracket one needs to treat a scalar density $\Psi$ as a form of maximal degree on the manifold, for which
\begin{align}
\pounds_{\vec{\lambda}} \Psi=\mathrm{d}(\iota_{\vec{\lambda}}\Psi)+\iota_{\vec{\lambda}}\mathrm{d}\Psi=\mathrm{d}(\iota_{\vec{\lambda}}\Psi).
\end{align}
which means that the identity
\begin{equation}
\int_\Sigma Y_{a_1\cdots a_n}\pounds_{\vec{\lambda}} \Psi^{a_1\cdots a_n}=-\int_\Sigma \pounds_{\vec{\lambda}} Y_{a_1\cdots a_n}\Psi^{a_1\cdots a_n},
\end{equation}
holds up to boundary terms. That allows us to integrate Lie derivatives by parts. The Poisson brackets for the diffeomorphism constraint then read
\begin{align}
\left\{V\left[\vec{X}\right],V\left[\vec{Y}\right]\right\}&=V\left[\pounds_{\vec{X}}\vec{Y}\right],\nonumber\\
\left\{V\left[\vec{X}\right],H_\perp[\epsilon]\right\}&=H_\perp\left[\pounds_{\vec{X}}\epsilon\right],\nonumber\\
\left\{V\left[\vec{X}\right],P[\epsilon]\right\}&=P\left[\pounds_{\vec{X}}\epsilon\right],\nonumber\\
\left\{V\left[\vec{X}\right],W[\epsilon]\right\}&=W\left[\pounds_{\vec{X}}\epsilon\right].
\end{align}
The brackets for the $P$ constraint are
\begin{align}
\left\{P[\epsilon],W[\eta]\right\}&=P[\epsilon\eta],\nonumber\\
\left\{P[\epsilon],H_\perp[\eta]\right\}&=-W[\epsilon\eta]-P[\epsilon\eta K],
\end{align}
while those for $W$ and $H_\perp$ read
\begin{align}
\left\{W[\epsilon],H_\perp[\eta]\right\}&=H_\perp[\epsilon\eta]+P[D^2\epsilon\,\eta+\epsilon D^2\eta-D\epsilon\cdot D\eta],\nonumber\\
\left\{H_\perp[\epsilon],H_\perp[\eta]\right\}&=V\left[\epsilon D^a\eta-\eta D^a\epsilon\right]+P\left[\left(\epsilon D^a\eta-\eta D^a\epsilon\right)\left(D_cK^c_{\ a}-D_cK\right)\right].\label{eq:DiracAlg}
\end{align}
Now we can count the degrees of freedom.
Among the 32 phase-space coordinates we found 10 first-class constraints, which eliminate $2\times10$ coordinates from the phase space. The remaining number of physical degrees of freedom is $12/2=6$: 2 of them describe the massless graviton, and 4 belong to the partially massless graviton.
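The counting can be spelled out explicitly (a summary we add here, using the canonical pairs introduced above): the primary first-class constraints are $\Pi_N$, $\Pi_{\vec{N}a}$ and $\mathcal{P}$, and the secondary ones $\mathcal{H}_0$, $\mathcal{V}_a$ and $\mathcal{W}$, so that
\begin{align}
\underbrace{(h_{ab},\Pi_h^{ab})}_{6+6}+\underbrace{(K_{ab},\Pi_K^{ab})}_{6+6}+\underbrace{(N,\Pi_N)}_{1+1}+\underbrace{(N^a,{\Pi_{\vec{N}}}_{a})}_{3+3}&=32 \text{ coordinates},\nonumber\\
\tfrac{1}{2}\Big[32-2\times\big(\underbrace{1+3+1}_{\Pi_N,\,\Pi_{\vec{N}a},\,\mathcal{P}}+\underbrace{1+3+1}_{\mathcal{H}_0,\,\mathcal{V}_a,\,\mathcal{W}}\big)\Big]&=6 \text{ degrees of freedom}.\nonumber
\end{align}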
\subsection{Gauge Generators of Conformal Gravity}
To obtain the gauge generators of CG we follow the procedure described in the section "Castellani algorithm".
The algorithm starts from the primary first-class constraints (PFCs),
\begin{align}
\Pi_N\approx0,&&\Pi_{\vec{N}_i}\approx0,&& \mathcal{P}\approx0.
\end{align}
Consider $\mathcal{G}_1=\Pi_N$. The Castellani algorithm then suggests
\begin{align}
\mathcal{G}_1&=\Pi_N,\\
\mathcal{G}_0+\{\Pi_N,H_T\}&=PFC, \\
\{\mathcal{G}_0,H_T\}&=PFC.
\end{align}
The ansatz for the linear combination (\ref{castel}) is
\begin{align}
PFC(x)=\int_{\Sigma}\left(\alpha_1(x,y)\Pi_{\vec{N}_i}(y)+\alpha_2(x,y)\Pi_N(y)+\alpha_3(x,y)\mathcal{P}(y)\right).\label{ansatz1}
\end{align}
Determination of the coefficients leads to
\begin{align}
\label{eq:c1}
\alpha_1^{a}(x, y)&=\delta^{3}(x-y)D^{a}N(y)+N(y)\gamma^{ab}D_{b}\delta^{3}(x-y)\\
\alpha_2(x, y)&=N^{a}(y)\partial_{a}\delta^{3}(x-y), \\
\alpha_3(x, y)&=\frac{\lambda_{\mathcal{P}}}{N}(y)\delta^{3}(x-y),
\end{align}
which, written in the form (\ref{geng}), allows us to
write the canonical gauge generator for the diffeomorphisms orthogonal to the hypersurface
\begin{equation}
\label{eq:g1}
G_{\perp}[\epsilon, \dot{\epsilon}]= \int \! \left[\dot{\epsilon}^{\ms{(1)}} \Pi_{N}+ \epsilon \left(\mathcal{H}+\pounds_{\vec{N}}\Pi_{N}+{\Pi_{\vec{N}}}_{a} D^{a}N+D_{a}(\Pi_{\vec{N}}^{a}N)+\frac{\lambda_{\mathcal{P}}}{N}\mathcal{P}\right)\right]
\end{equation}
Choosing $\mathcal{G}_{1a}=\Pi_{\vec{N}a}$, we obtain the recursion relations
\begin{align}
\mathcal{G}_{1a}&=\Pi_{\vec{N}_a}\\
\mathcal{G}_{0a}+\{\Pi_{\vec{N}a},H_T\}&=PFC_a\\
\{\mathcal{G}_{0a},H_T\}&=PFC_a
\end{align}
which, with the ansatz
\begin{align}
PFC_a(x)&=\int_{\Sigma}\left(\alpha_{1a}(x,y)\Pi_{\vec{N}_i}(y)+\alpha_{2a}(x,y)\Pi_N(y)+\alpha_{3a}(x,y)\mathcal{P}(y)\right),
\end{align}
leads to the coefficients
\begin{align}
\alpha_{1a}^b(x,y)&=\delta^3(x-y)D_aN^b(y)+N^c(y)\delta_a^bD_c\delta^3(x-y)\\
\alpha_{2a}(x,y)&=\delta^3(x-y)D_aN(y)\\
\alpha_{3a}(x,y)&=0,
\end{align}
and the generator of spatial diffeomorphisms
\begin{align}
G_D[\epsilon^{a}, \dot{\epsilon}^{a}]= \int \left[ \dot{\epsilon}^{\ms{(1)} a} {\Pi_{\vec{N}}}_{a}+ \epsilon^{a} \left(\mathcal{V}_{a}+\Pi_{N} D_{a}N+\pounds_{\vec{N}}{\Pi_{\vec{N}}}_{a}\right)\right]\label{eq:g2}.
\end{align}
For $\mathcal{G}_1=\mathcal{P}$ the Castellani algorithm reads
\begin{align}
\mathcal{G}_1&=\mathcal{P}\\
\mathcal{G}_0+\{\mathcal{P},H_T\}&=PFC \\
\{\mathcal{G}_0,H_T\}&=PFC
\end{align}
with an ansatz of the same form as (\ref{ansatz1}).
The coefficients for this case read
\begin{align}
\alpha_1^a(x,y)&=0\\
\alpha_2(x,y)&=N^2(y)\delta^3(x-y) \\
\alpha_3(x,y)&=N^a(y)D_a\delta^3(x-y)+\frac{\lambda_N}{N}(y)\delta^3(x-y),\end{align}
which, inserted in (\ref{geng}), leads to
\begin{align}
G_{W}[w, \dot{w}]= \int \! \left[ \frac{\dot{w}^{\ms{(1)} }}{N}\mathcal{P} (x)+ w \left( \mathcal{W}+N\Pi_{N}+\pounds_{\vec{N}}\frac{\mathcal{P}}{N}\right)\right].\label{eq:g3}
\end{align}
One can compare the generators of the diffeomorphisms orthogonal to and along the spatial hypersurface with the ones from GR \cite{Castellani:1981us} and notice the same structure, apart from the terms that involve $\mathcal{P}$. Naturally, the generator of the Weyl symmetry has no counterpart among the generators in EG.
The relation between the generators of the diffeomorphisms orthogonal and tangential to the hypersurface and the diffeomorphisms generated by a vector field $\xi^a$ on the manifold $\mathcal{M}$ is
\begin{align}
\xi^a&=\epsilon_{\perp}n^a+\epsilon^a \label{decompx}\\
\text{ for } \epsilon^a&=h^a_b\xi^b \text{ and }\epsilon_{\perp}=-n_a\xi^a. \label{decompdif}
\end{align}
The generators (\ref{eq:g1}), (\ref{eq:g2}) and (\ref{eq:g3}) generate Weyl rescalings and diffeomorphisms. For the ADM decomposition of the metric,
$g_{tt}=-N^2+N^aN^bh_{ab}$, $g_{tb}=N^ah_{ab}$ and $g_{ab}=h_{ab}$, and the identification of the diffeomorphisms transversal to the hypersurface, $\epsilon_{\perp}=N\xi^t$, and along the hypersurface, $\epsilon^a=\xi^a+N^a\xi^t$, it follows that
\begin{align}
\{g_{\mu\nu},G_W[\omega]\}&=2\omega g_{\mu\nu},\\
\{g_{\mu\nu},G_{\perp}[\epsilon_{\perp}]+G_{D}[\vec{\epsilon}]\}&=\pounds_{\xi}g_{\mu\nu}.
\end{align}
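As a component-wise consistency check (ours, assuming the Weyl variations $\delta_\omega N=\omega N$, $\delta_\omega h_{ab}=2\omega h_{ab}$ and $\delta_\omega N^a=0$ generated by $G_W$), the first bracket follows directly from the ADM decomposition,
\begin{align}
\delta_\omega g_{tt}&=\delta_\omega\left(-N^2+N^aN^bh_{ab}\right)=-2\omega N^2+2\omega N^aN^bh_{ab}=2\omega g_{tt},\nonumber\\
\delta_\omega g_{tb}&=\delta_\omega\left(N^ah_{ab}\right)=2\omega g_{tb},\qquad \delta_\omega g_{ab}=2\omega g_{ab}.\nonumber
\end{align}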
These generators differ from the generators in \cite{Kluson:2013hza} evaluated on the full phase space. The generator of Weyl transformations does not change the shift vector field $N^i$ and takes into account the constraint $\mathcal{P}$, which is responsible for the correct transformation of $K_{ab}$ and $\Pi_{h}^{ab}$.
The gauge generators modify the surface-deformation algebra, which one can see from the Poisson-bracket algebra of the constraints $H_{\perp}$ and $V$ in (\ref{eq:DiracAlg}),
\begin{align}
\left[\xi,\chi\right]_{\mathrm{SD}}^\perp&=\pounds_\epsilon\eta_\perp,\nonumber\\
\left[\xi,\chi\right]_{\mathrm{SD}}^a&=h^{ab}\left(\epsilon_\perp D_b\eta_\perp- \eta_\perp D_b\epsilon_\perp\right)+\pounds_\epsilon\eta^a,
\end{align}
for the decomposition of $\xi^a$ as in (\ref{decompx}), (\ref{decompdif}), and analogously for $\chi^a$. This modification appears because we consider the action of the PFCs. The Poisson brackets of the generators
\begin{align}
G[\xi]\equiv G_\perp[\epsilon_\perp]+G_D[\vec{\epsilon}] && G[\chi]\equiv G_\perp[\eta_\perp]+G_D[\vec{\eta}]
\end{align}
close the algebra
\begin{equation}
\left\{G[\xi],G[\chi]\right\}=G[[\xi,\chi]]+PFC,\label{eq:CastellaniAlg}
\end{equation}
where
\begin{align}
\left[\xi,\chi\right]^\perp&=n_a\ ^{\ms{(4)}}\pounds_\xi\chi^a,\nonumber\\
\left[\xi,\chi\right]^a&=h^a_b\ ^{\ms{(4)}}\pounds_\xi\chi^b,
\end{align}
and we have set $\dot{N}=\lambda_N$ and $\dot{N}^a=\lambda_{\vec{N}}^a$ to accurately treat $\dot{\epsilon}_{\perp}$ and $\dot{\epsilon}^a$.
\section{Boundary Conditions}
In order to find the boundary charges we have to define the boundary conditions and the asymptotic expansion at the boundary. We consider Gaussian coordinates and an asymptotically AdS space, using the metric
\begin{equation}
ds^2=\frac{\ell^2e^{2\omega}}{\rho^2}\left(d\rho^2+\gamma_{ij}dx^{i}dx^{j} \right)
\end{equation}
for $i,j,\ldots=0,1,2$. This equals the expansion (\ref{le}) for $\sigma=-1$, up to the factor $e^{2\omega}$ by which we may multiply the metric, since the theory is conformally invariant. The Fefferman-Graham expansion of the boundary metric equals the one in (\ref{expansiongamma}) with $\ell$ set to one; it reads
\begin{equation}
\gamma_{ij}=\gamma_{ij}^{(0)}+\rho\gamma_{ij}^{(1)}+\rho^2\gamma_{ij}^{(2)}+\rho^3\gamma_{ij}^{(3)}+...
\end{equation}
where the metric in ADM variables near the boundary $\partial \Sigma$ is of the form
\begin{align}
h_{ab}=\Omega^2\overline{h}_{ab},&& N=\Omega\overline{N}, && N^a=\overline{N}^a \,\text{ for }\,\Omega\equiv\frac{\ell e^{\omega}}{\rho}.
\end{align}
and therefore
\begin{equation}
\omega_h=\Omega^3\omega_{\overline{h}}.
\end{equation}
The metric $\overline{h}_{ab}$ can be further split as
\begin{align}
\overline{h}_{ab}=\partial_a\rho\partial_b\rho+\gamma_{IJ}\partial_ax^{I}\partial_bx^J,&& \overline{N}^I=\gamma^{IJ}\gamma_{J0},&&\overline{N}^3=0, && \overline{N}=\sqrt{-\frac{1}{\gamma^{00}}}
\end{align}
for $a,b,\ldots=1,2,3$ and $I,J,\ldots=1,2$.
Evaluating the equations of motion (EOM) for $h_{ab}$, $K_{ab}$ and $\Pi_{K}^{ab}$ (and the constraint $\mathcal{P}$) on shell leads to new expressions for $K_{ab}$, $\Pi_{K}^{ab}$ and $\Pi_{h}^{ab}$.
The EOM for $h_{ab}$ leads to
\begin{align}
K_{ab}&=\frac{1}{2N}\left(\partial_t-\pounds_{\vec{N}}\right)h_{ab}=\nonumber\\
&=\Omega\left[\frac{\overline{h}_{ab}}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega+\overline{K}_{ab}\right]
\end{align}
Using the transformation properties of the connection,
\begin{align}
C^a_{bc}&=2\delta^a_{(b}\overline{D}_{c)}\ln\Omega-\overline{h}_{bc}\overline{D}^a\ln\Omega+\overline{C}^a_{bc}
\end{align}
the Ricci tensor,
\begin{align}
R_{ab}&=-\overline{D}_a\overline{D}_b\ln\Omega-\overline{h}_{ab}\overline{h}^{cd}\overline{D}_c\overline{D}_d\ln\Omega+\nonumber\\
&+\overline{D}_a\ln\Omega\overline{D}_b\ln\Omega-\overline{h}_{ab}\overline{h}^{cd}\overline{D}_c\ln\Omega\overline{D}_d\ln\Omega+\overline{R}_{ab}
\end{align}
and the double covariant derivative of the lapse,
\begin{align}
\frac{1}{N}D_aD_bN&=\overline{D}_a\overline{D}_b\ln\Omega-\overline{D}_a\ln\Omega\overline{D}_b\ln\Omega+\nonumber\\
&+\overline{h}_{ab}\overline{h}^{cd}\overline{D}_c\ln\Omega\overline{D}_d\ln\Omega+\frac{1}{\overline{N}}\overline{h}_{ab}D^c\ln\Omega\overline{D}_c\overline{N}+\frac{1}{\overline{N}}\overline{D}_a\overline{D}_b\overline{N},
\end{align}
one computes $\Pi_{K}^{ab}$ from the equation of motion for $K_{ab}$ and the requirement $\mathcal{P}=0$,
\begin{align}
\Pi_K^{ab}&=\omega_h\mathcal{T}^{abcd}\left[R_{cd}+K_{cd}K+\frac{1}{N}D_cD_dN-\frac{1}{N}\left(\partial_t-\pounds_{\vec{N}}\right)K_{cd}\right]\nonumber\\
&=\Omega^{-1}\overline{\Pi}_K^{ab}.
\end{align}
This agrees with the relation obtained from the rescaling of the projection of the four-dimensional Weyl tensor,
\begin{equation}
\Pi_K^{ab}=\omega_h n^cn^dC^a{}_c{}^b{}_d
\end{equation}
with $n_a=\Omega\overline{n}_a$ and $C^a{}_{bcd}=\overline{C}^a{}_{bcd}$.
This is similar to the projection of the magnetic part of the Weyl tensor,
\begin{equation}
B_{abcd}=\Omega \overline{B}_{abcd}.
\end{equation}
From $\Pi_{K}^{ab}$, one can compute the Weyl rescaling of the momentum $\Pi_h^{ab}$,
\begin{align}
\Pi_h^{ab}&=-\frac{1}{2N}\left(\partial_t-\pounds_{\vec{N}}\right)\Pi_K^{ab}-\frac{2}{N}D_c\left(N\omega_hB^{c(ab)}\right)-\frac{1}{2}\left(\Pi_K^{ab}K+\Pi_K^{cd}K_{cd}h^{ab}\right)\nonumber\\
&=\Omega^{-2}\left[-\frac{1}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega\overline{\Pi}_K^{ab}+\overline{\Pi}_h^{ab}\right].
\end{align}
The allowed variations near the boundary are accordingly
\begin{align}
\delta h_{ab}&=\Omega^2\left(2\delta\ln\Omega\overline{h}_{ab}+\delta\overline{h}_{ab}\right),\\
\delta K_{ab}&=\Omega\delta\ln\Omega\left[\frac{\overline{h}_{ab}}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega+\overline{K}_{ab}\right]
+\Omega\frac{\overline{h}_{ab}}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\delta\ln\Omega\nonumber\\
&\quad +\Omega\left\{\frac{\delta\overline{h}_{ab}}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega+\delta\overline{K}_{ab}\right.\nonumber\\
&\quad \left.+\frac{\overline{h}_{ab}}{\overline{N}}\left[-\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega\frac{\delta\overline{N}}{\overline{N}^2}-\delta N^a\overline{D}_a\ln\Omega\right]\right\},\\
\delta \Pi_h^{ab}&=-\frac{2\delta\ln\Omega}{\Omega^2}\left[-\frac{1}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega\overline{\Pi}_K^{ab}+\overline{\Pi}_h^{ab}\right] -\frac{1}{\Omega^2\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\delta\ln\Omega\overline{\Pi}_K^{ab}\nonumber\\
&\quad +\Omega^{-2}\left\{-\frac{1}{\overline{N}}\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega\delta\overline{\Pi}_K^{ab}+\delta\overline{\Pi}_h^{ab}\right.\nonumber\\
&\quad \left.+\frac{\overline{\Pi}_h^{ab}}{\overline{N}}\left[\left(\partial_t-\pounds_{\vec{N}}\right)\ln\Omega\frac{\delta\overline{N}}{\overline{N}^2}+\delta N^a\overline{D}_a\ln\Omega\right]\right\},\\
\delta \Pi_K^{ab}&=\Omega^{-1}\left(-\delta\ln\Omega\,\overline{\Pi}_K^{ab}+\delta\overline{\Pi}_K^{ab}\right).\label{eq:WeylVari}
\end{align}
where the variations of the rescaled quantities (those that carry an overline) are set to
\begin{align}
\delta \overline{h}_{ab}\vert_{\partial\Sigma}=D_c\delta \overline{h}_{ab}\vert_{\partial\Sigma}=0,\nonumber\\
\delta \overline{N}\vert_{\partial\Sigma}=D_c\delta \overline{N}\vert_{\partial\Sigma}=0,\nonumber\\
\delta N^a\vert_{\partial\Sigma}=D_c\delta N^a\vert_{\partial\Sigma}=0
\end{align}
that leads to the requirement that
\begin{equation}
\delta \overline{K}_{ab}\vert_{\partial\Sigma}=0,
\end{equation}
and that the variations of the momenta $\delta\overline{\Pi}_K^{ab}|_{\partial\Sigma}$ and $\delta\overline{\Pi}_h^{ab}|_{\partial\Sigma}$ at the boundary are arbitrary but finite.
These boundary conditions are preserved by the gauge transformations defined by bulk diffeomorphisms $\xi^a$ satisfying
\begin{equation}
\pounds_{\xi}\overline{g}_{ab}=2\lambda\overline{g}_{ab},
\end{equation}
and arbitrary rescalings of the metric (Weyl rescalings)
\begin{equation}
\delta_{\omega}g_{ab}=2\omega g_{ab}.
\end{equation}
The scalings we use for variations are
\begin{align}
\delta \overline{h}_{ab}&=2\delta\omega\overline{h}_{ab}\\
\delta\overline{N}&=\delta\omega\overline{N}\\
\delta\omega&\sim\mathcal{O}(\rho)\\
\delta N^a&\sim\mathcal{O}(\rho^2)
\end{align}
that are consistent with the scalings
\begin{align}
\delta\overline{K}&\sim\mathcal{O}(\rho)\\
\delta\overline{\Pi}_{K}^{ab}&\sim\mathcal{O}(1)\\
\delta \overline{\Pi}_h^{ab}&\sim \mathcal{O}(1).
\end{align}
\section{Canonical Charges}
To compute the charges, we follow the prescription outlined in the chapter "Gauge Generators". We have to define generators that are functionally differentiable by solving the imposed boundary-value problem; one may refer to this as searching for a well-defined action for the canonical generators.
To the gauge generators $G$ we have to add boundary terms that make them integrable.
In general, the mechanism for obtaining the global charges of a gauge theory using the Hamiltonian analysis is well known \cite{Brown:1986nw}. First, one defines the boundary conditions at spatial infinity that the fields should obey, which we have done in the chapter "Boundary Conditions" above, and then identifies the asymptotic symmetries preserving that asymptotic behaviour.
To use the Hamiltonian formulation, one needs to convert the boundary conditions on the space-time metric into boundary conditions on the canonical variables. The asymptotic symmetries define the allowed surface-deformation vectors $\xi^{\mu}$ ($\mu=\perp,i$) for the considered spacelike hypersurfaces.
\subsection{Boundary Terms from Weyl Constraint}
The Weyl charge is given by the boundary integral associated with the generator (\ref{eq:g3}). To render the generator well defined, we have to add to it a term whose total variation corresponds to the boundary term of the generator's variation; this boundary term corresponds to a charge.
Boundary terms for $G_W$ are
\begin{align}
-\int_{\Sigma}\pounds_{\vec{N}} \omega+\int_{\partial \Sigma}\ast\omega N^aP.
\end{align}
The generator is modified by the charge $Q_W$,
\begin{equation}
\Gamma[\omega,\dot{\omega}^{(1)}]=G_W[\omega,\dot{\omega}^{(1)}]-Q_{W}[\omega,\dot{\omega}^{(1)}]
\end{equation}
for $Q_W$
\begin{equation}
Q_W=\int_{\partial\Sigma}\ast \omega N^aP.
\end{equation}
This vanishes on shell because of the $P$ constraint. The improved generator $\Gamma_{W}$ therefore keeps the form of the previously obtained generator (\ref{eq:g3}).
\subsection{Boundary Terms from Diffeomorphism Constraint}
Evaluating the boundary terms on shell, the terms that involve $\mathcal{P}=0$ and $\mathcal{W}=0$ vanish; among these contributions are all the terms that include $\Omega$. This allows us to replace the variables $h_{ab}$, $K_{ab}$, $\Pi_{h}^{ab}$ and $\Pi_K^{ab}$ in (\ref{dhdiffbound}) to (\ref{dPiKdiffbound}) with their finite values
$\overline{h}_{ab}$, $\overline{K}_{ab}$, $\overline{\Pi}_h^{ab}$ and $\overline{\Pi}_K^{ab}$. For $\epsilon^{\rho}\sim\mathcal{O}(\rho)$, $\delta\overline{h}_{ab}\sim\mathcal{O}(\rho)$ and $\delta\overline{K}_{ab}\sim\mathcal{O}(\rho)$ the term
\begin{align}
\int_{\partial\Sigma}\ast \epsilon^{c}\left(\overline{\Pi}_K^{ab}\delta\overline{K}_{ab}+\overline{\Pi}_h^{ab}\delta\overline{h}_{ab}\right)
\end{align}
vanishes. The remaining part,
\begin{equation}
-2\int_{\partial\Sigma}\ast \left(\overline{\Pi}_K^{ca}\epsilon^b\delta\overline{K}_{ab}+\delta\overline{\Pi}_K^{ca}\epsilon^b\overline{K}_{ab}+\overline{\Pi}_h^{ca}\epsilon^b\delta\overline{h}_{ab}+\delta\overline{\Pi}_h^{ca}\epsilon^b\overline{h}_{ab}\right) \label{chwoov}
\end{equation}
is integrated into an on-shell charge of the spatial diffeomorphism
\begin{equation}
\overline{Q}_D[\epsilon]=2\int_{\partial\Sigma}\ast\epsilon^c\left(\overline{\Pi}_h^{ab}\overline{h}_{bc}+\overline{\Pi}_K^{ab}\overline{K}_{bc}\right),\label{eq:diffchargeol}
\end{equation}
where we denote by $\overline{Q}_D$ the charge expressed in the overlined variables.
The charge is finite because the tensors that constitute it are $\mathcal{O}(1)$ and $\epsilon^I\sim\mathcal{O}(1)$. The terms $\overline{h}_{ab}$ and $\overline{K}_{ab}$ are obtained by insertion of the background metric $\gamma_{ij}$ and its expansion including the terms $\gamma_{ij}^{(0)}$ and $\gamma_{ij}^{(1)}$. The electric part of the Weyl tensor, $\overline{\Pi}_{K}^{ab}$, is determined from the terms from the expansion up to $\gamma_{ij}^{(2)}$, while the terms in $\overline{\Pi}_{h}^{ab}$ include even $\gamma_{ij}^{(3)}$.
The charge
\begin{equation}
Q_D[\epsilon]=2\int_{\partial\Sigma}\ast\epsilon^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right).\label{eq:diffcharge}
\end{equation}
is likewise finite, since upon including the boundary conditions
one obtains the on-shell equivalence
\begin{equation}
\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}=\overline{\Pi}_h^{ab}\overline{h}_{bc}+\overline{\Pi}_K^{ab}\overline{K}_{bc}.
\end{equation}
This leads to the generator
\begin{equation}
\Gamma_D[\epsilon]=\int_{\Sigma}\left(\dot{\epsilon}^{a} {\Pi_{\vec{N}}}_{a}+\Pi_K^{ab}\pounds_\epsilon K_{ab}+\Pi_h^{ab}\pounds_\epsilon h_{ab}+ \Pi_{N}\pounds_\epsilon N+{\Pi_{\vec{N}}}_{a}\pounds_\epsilon N^a\right),\label{eq:imprD}
\end{equation}
whose variation, with the charge included, vanishes on the constraint surface (upon incorporating the boundary conditions),
\begin{align}
\delta \Gamma_D[\epsilon]&\approx\int_{\partial\Sigma}\ast \epsilon^a \left(\Pi_K^{bc}\delta K_{bc}+\Pi_h^{bc}\delta h_{bc}\right)+\ast\xi^t\delta N^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right)\approx\nonumber\\
&\approx\int_{\partial\Sigma}\ast \epsilon^a \left(\overline{\Pi}_K^{bc}\delta \overline{K}_{bc}+\overline{\Pi}_h^{bc}\delta \overline{h}_{bc}\right)+\ast\xi^t\delta N^c\left(\overline{\Pi}_h^{ab}\overline{h}_{bc}+\overline{\Pi}_K^{ab}\overline{K}_{bc}\right)=0.
\end{align}
We proceed with the charge coming from the Hamiltonian constraint.
\subsection{Boundary Terms from Hamiltonian Constraint}
In order to make the variation of the improved generator of $G_{\perp}[\epsilon,\dot{\epsilon}]$ vanish, and obtain the corresponding charge, we compute the boundary term that involves $\Pi_{N}$ and $\Pi_{\vec{N}a}$ (PFCs) and the variation of the Hamiltonian constraint, (\ref{dhH0bound}), (\ref{dKH0bound}) and (\ref{dPiKH0bound}). The terms that are PFCs vanish on shell, while the term coming from the variation of the Hamiltonian constraint, with the asymptotic on-shell relations (boundary conditions) $h_{ab}=\Omega^2\overline{h}_{ab}$, $\epsilon=\Omega\overline{\epsilon}$ and $\mathcal{P}=0$, leads to
\begin{equation}
-\int_{\partial\Sigma}\ast \overline{\epsilon}\,\overline{D}^c\ln\Omega\left(\overline{\Pi}_K^{ab}\delta\overline{h}_{ab}+\delta\overline{\Pi}_K^{ab}\overline{h}_{ab} \right).
\end{equation}
The remaining contributions are equal to those from (\ref{dhH0bound}), (\ref{dKH0bound}) and (\ref{dPiKH0bound}), only with the variables replaced by overlined ones. Therefore, we can drop the terms proportional to $\delta\overline{h}_{ab}\sim\mathcal{O}(\rho)$ and $\delta\overline{K}_{ab}\sim\mathcal{O}(\rho)$, but not the terms with $\delta\overline{C}^c_{ed}\sim\mathcal{O}(1)$, because of the derivatives that act on $\delta\overline{h}_{ab}$.
These terms
\begin{align}
\int_{\partial\Sigma}\ast \bigg[ -\overline{\epsilon}\,\overline{D}^c\ln\Omega\left(\overline{\Pi}_K^{ab}\delta\overline{h}_{ab}+\delta\overline{\Pi}_{K}^{ab}\overline{h}_{ab}\right) +\overline{\epsilon}\,\overline{D}_b\delta\overline{\Pi}_K^{cb}+\overline{\epsilon}\,\delta\overline{C}^c_{ab}\overline{\Pi}_K^{ab}-\overline{D}_b\overline{\epsilon}\,\delta\overline{\Pi}_K^{cb}\bigg],
\end{align}
are cancelled by varying the counterterm that we can obtain from
\begin{align}
\int_{\partial\Sigma}\ast\left[\epsilon\delta C^c_{ab}\Pi_K^{ab}+\epsilon D_b\delta\Pi_K^{cb}-D_b\epsilon\delta\Pi_K^{cb} \right].
\end{align}
This, functionally integrated, gives the on-shell finite term
\begin{align}
Q_{\perp}[\epsilon]&=\int_{\partial\Sigma}\ast\left[\epsilon D_b\Pi_K^{cb}-D_b\epsilon\Pi_K^{cb}\right] \label{qperp1} \\
&=\int_{\partial\Sigma}\ast \left[ \overline{\epsilon}\,\overline{D}_b\overline{\Pi}_K^{cb}-\overline{D}_b\overline{\epsilon}\,\overline{\Pi}_K^{cb}-\overline{\epsilon}\,\overline{D}^c \ln\Omega\,\overline{\mathcal{P}}\right].\label{qperp}
\end{align}
However, in the variation of $G_{\perp}$ there also appears the term
\begin{align}
\int_{\partial\Sigma}\ast \left[ \overline{\epsilon}\left( \delta\overline{C}^c_{ab}\overline{\Pi}_{K}^{ab}-\delta \overline{C}^a_{ba}\overline{\Pi}_K^{cb} \right)\right]
\end{align}
which, when one allows $\delta\overline{h}_{ab}=2\delta\omega\overline{h}_{ab}$, reduces to the finite term
\begin{align}
-\int_{\partial\Sigma}\ast\left(\overline{\epsilon}\overline{\Pi}_K^{cd}D_d\delta\omega\right).
\end{align}
To cancel it, we take into account that $\epsilon=\xi^tN$ contains the variation $\delta \overline{N}=\delta\omega\overline{N}$, such that
$\delta_{\overline{N}}Q_{\perp}=-\int_{\partial\Sigma}\ast \left(\overline{\epsilon}\,\overline{\Pi}_K^{cd}D_d\delta\omega \right)$.
This proves that
\begin{align}
\Gamma_{\perp}[\epsilon_{\perp}]=G_{\perp}[\epsilon_{\perp}] +Q_{\perp}[\epsilon_{\perp}]
\end{align}
with $Q_{\perp}[\epsilon]$ from (\ref{qperp}), is the required modified gauge generator (to which we also refer as the "improved generator").
The finiteness of the charges (\ref{eq:diffcharge}) and (\ref{qperp}) may be shown, besides by using the property of conformal invariance, by direct insertion of the expanded boundary metric.
\subsection{Asymptotic Symmetry Algebra of the Improved Generators}
In case $\xi^a$ and $\chi^a$ are gauge generators, we can use the relation (\ref{eq:CastellaniAlg}) for the Poisson brackets among the generators and write the analogous relation for the improved generators,
\begin{equation}
\left\{\Gamma[\xi],\Gamma[\chi]\right\}=\Gamma[[\xi,\chi]]+PFC.\label{eq:imprCastellaniAlg}
\end{equation}
This is true \cite{Brown:1986ed} for small diffeomorphisms $\xi^a$ and $\chi^a$ that generate boundary-condition-preserving gauge transformations.
An improved generator is a functionally differentiable generator that has an action compatible with the corresponding boundary conditions.
According to \cite{Henneaux:1985tv,Brown:1986nw}, fixing the gauge would turn the first-class constraints into second-class ones, which then need to vanish strongly, while the Poisson brackets are required to be turned into Dirac brackets. Since the evaluation of the improved generators on shell gives the charges, the relation (\ref{eq:imprCastellaniAlg}) in terms of Dirac brackets converts to
\begin{equation}
\left\{Q[\xi],Q[\chi]\right\}^\ast=Q[[\xi,\chi]].
\end{equation}
This leads to the isomorphism between the Dirac-bracket algebra of the charges and the Lie algebra of the boundary-condition-preserving gauge transformations.
\section{Time Conservation of Charges}
To prove the time conservation of the charges, one may use one of the following three approaches.
\begin{itemize}
\item By clever inspection set $\xi^a=t^a$.
\item Prove that upon acting on the charges straightforwardly with $\partial_t$ they remain finite and keep their value.
\item Prove the equivalence of the canonical charges obtained using the Hamiltonian procedure and the Noether charges obtained in \cite{Grumiller:2013mxa}.
\end{itemize}
We will present the first and the third method.
\subsection{Time Conservation of Charges Using the Method $\xi^a=t^a$}
The first method requires setting $\xi^{a}=t^a$ in (\ref{eq:imprCastellaniAlg}). Even though $t^a$ is not a gauge generator that preserves the boundary conditions,
the functional derivative of $\Gamma[t]$ is well defined and, denoting $\dot{N}=\lambda_N$ and $\dot{N}^a=\lambda_{\vec{N}}^a$, leads to $\Gamma[t]\equiv H_T$. This proves the functional differentiability of $H_T$, since the Hamiltonian EOM do not require additional boundary terms. It agrees with the first chapter, in which we have seen that CG does not require boundary terms for the action to have a well-defined variational principle \cite{Grumiller:2013mxa}. For an improved generator $\Gamma[\chi]$, with $\chi^a$ a generator of the diffeomorphisms,
\begin{equation}
\left\{H_T,\Gamma[\chi]\right\}=\Gamma[-\pounds_\chi t^a]+PFC=\Gamma[\dot{\chi}]+PFC.\label{eq:timecharge}
\end{equation}
The Poisson bracket can be turned into a Dirac bracket by fixing the gauge, and equation (\ref{eq:timecharge}) can be understood as a time-evolution equation for $-Q[\chi]$, where $H_T$ is not influenced by $\chi$. To obtain the total time derivative we add $-Q[\dot{\chi}]$ to (\ref{eq:timecharge}),
\begin{equation}
\frac{dQ[\chi]}{dt}=Q[\dot{\chi}]-\left\{H_T,Q[\chi]\right\}^\ast=0,
\end{equation}
which proves the time conservation of the charges.
\section{Asymptotic Symmetry Algebra and MKR Solution}
We want to consider the canonical charges for a particular example of a CG solution, the Mannheim-Kazanas-Riegert (MKR) solution.
To be able to do that, we first briefly introduce the asymptotic symmetry algebra, which is analysed in more detail in the following chapter.
Consider the space-time foliation of the manifold with a function $\rho$ that defines timelike hypersurfaces $\rho=const.$, with the boundary at $\rho=0$. The metric is
\begin{equation}
ds^2=\frac{e^{2\omega}\ell^2}{\rho^2}\left(d\rho^2+\gamma_{ij}dx^idx^j\right),\label{eq:BC4}
\end{equation}
for the expansion of $\gamma_{ij}$
\begin{equation}
\gamma_{ij}=\sum_{n=0}\gamma_{ij}^{\ms{(n)}}\left(\frac{\rho}{\ell}\right)^n,\label{eq:BC5}
\end{equation}
where $\omega$ is arbitrary and $\gamma^{(0)}$ and $\gamma^{(1)}$
are fixed.
The metric (\ref{eq:BC4}) is preserved by $\pounds_{\xi}$ and $\delta_{\omega}$ to leading and subleading order in $\rho$, up to a rescaling with $e^{2\omega}$ (that is, $\pounds_{\xi}$ and $\delta_{\omega}$ do not change the metric (\ref{eq:BC4}) when they act on the prefactor). The demand that remains is
\begin{equation}
\pounds_\xi \frac{\ell^2}{\rho^2}\overline{g}_{\mu\nu}=2\lambda\frac{\ell^2}{\rho^2}\overline{g}_{\mu\nu}.\label{rembc}
\end{equation}
We insert the expansion
\begin{eqnarray}
\xi^\rho=\xi^\rho_{\ms{(0)}}+\rho\xi^\rho_{\ms{(1)}}+\mathcal{O}(\rho^2),\nonumber\\
\xi^i=\xi^i_{\ms{(0)}}+\rho \xi^i_{\ms{(1)}}+\mathcal{O}(\rho^2),\nonumber\\
\lambda=\lambda_{\ms{(0)}}+\rho\lambda_{\ms{(1)}}+\mathcal{O}(\rho^2),
\end{eqnarray}
of the small-diffeomorphism generators $\xi^i$, $\xi^{\rho}$ and the coefficient $\lambda$ into equation (\ref{rembc}),
and obtain
\begin{equation}
\ ^{\ms{(2+1)}}\pounds_{\xi^k_{\ms{(0)}}}\gamma^{\ms{(0)}}_{ij}=2\lambda_{\ms{(0)}}\gamma^{\ms{(0)}}_{ij},\label{eq:LOCKV}
\end{equation}
with requirements $\xi^\rho_{\ms{(0)}}=0$ and $ \xi^i_{\ms{(1)}}=0$.
At leading order, $\xi^\rho_{\ms{(1)}}=\lambda_{\ms{(0)}}=\frac{1}{3} \mathcal{D}_i\xi^i_{\ms{(0)}}$, where $\mathcal{D}$ is the covariant derivative corresponding to the metric $\gamma_{ij}^{(0)}$. The subleading order\footnote{See the following chapter for more details.} imposes the condition
\begin{equation}
\pounds_{\xi^k_{\ms{(0)}}}\gamma_{ij}^{\ms{(1)}}-\frac{1}{3}\,\gamma_{ij}^{\ms{(1)}}\mathcal{D}_{k}\xi^{k}_{\ms{(0)}}+4\lambda^{\ms{(1)}}\gamma_{ij}^{\ms{(0)}}=0,\label{eq:FOCKV}
\end{equation}
where $\lambda^{(1)}$ is obtained from the trace of (\ref{eq:FOCKV}). Rewriting the MKR metric (\ref{MKR1}) (which for $a=0$ becomes the Schwarzschild-(A)dS metric)
in the FG form, we obtain the matrices $\gamma_{ij}^{(0)}=diag(-1,1,1)$ and $\gamma_{ij}^{(1)}$ as in (\ref{gama1mkr}), respectively. The leading-order Killing equation (\ref{eq:LOCKV}) admits the 10 Killing vectors of the conformal algebra (\ref{kvsph0})-(\ref{sphkv}), see appendix: Canonical Analysis of Conformal Gravity: Killing Vectors for Conformal Algebra on Spherical Background,
while the subleading order preserves a subset of 4 KVs,
\begin{align}
\xi^{\ms{(0)} a}_{1}&=(0,0,1),\\
\xi^{\ms{(0)} a}_{2}&=(0, \sin (\phi), \cot (\theta)\cos (\phi)),\\
\xi^{\ms{(0)} a}_{3}&=(0, -\cos (\phi), \cot (\theta)\sin (\phi)), \\
\xi^{\ms{(0)} a}_{4}&=(1,0,0).
\end{align}
These close the asymptotic symmetry algebra $\mathbb{R}\times o(3)$, one of the subalgebras of the conformal algebra.
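As a consistency check (ours, not part of the original derivation), one can verify with sympy that these four vector fields indeed close under the Lie bracket, with the rotations forming $o(3)$ and $\xi_4=\partial_t$ commuting with them:

```python
import sympy as sp

t, th, ph = sp.symbols('t theta phi')
coords = (t, th, ph)

# The four asymptotic Killing vectors in coordinates (t, theta, phi)
xi1 = sp.Matrix([0, 0, 1])
xi2 = sp.Matrix([0, sp.sin(ph), sp.cot(th)*sp.cos(ph)])
xi3 = sp.Matrix([0, -sp.cos(ph), sp.cot(th)*sp.sin(ph)])
xi4 = sp.Matrix([1, 0, 0])

def lie_bracket(X, Y):
    """[X, Y]^a = X^b d_b Y^a - Y^b d_b X^a for vector fields X, Y."""
    return sp.simplify(sp.Matrix(
        [sum(X[b]*sp.diff(Y[a], coords[b]) - Y[b]*sp.diff(X[a], coords[b])
             for b in range(3)) for a in range(3)]))

# o(3) part: [xi2, xi3] = -xi1, [xi3, xi1] = -xi2, [xi1, xi2] = -xi3
print(lie_bracket(xi2, xi3).T)   # Matrix([[0, 0, -1]])
# R factor: xi4 = d_t commutes with the rotations
print(lie_bracket(xi4, xi2).T)   # Matrix([[0, 0, 0]])
```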
The conserved charge that does not vanish is $Q[\xi_4^{(0)i}]=Q_{\perp}[N]$; in agreement with \cite{Grumiller:2013mxa}, the MKR charge reads
\begin{equation}
Q[\partial_t]=\frac{M}{\ell^2}-a\frac{(1-\sqrt{1-12aM})}{6}.
\end{equation}
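As an illustrative cross-check (ours, assuming the charge expression above), a small-$a$ expansion shows that the $a\to0$ limit reproduces the Schwarzschild-AdS value $M/\ell^2$:

```python
import sympy as sp

a, M, ell = sp.symbols('a M ell', positive=True)

# MKR charge Q[d/dt]; for a -> 0 the MKR solution reduces to Schwarzschild-(A)dS
Q = M/ell**2 - a*(1 - sp.sqrt(1 - 12*a*M))/6

print(sp.limit(Q, a, 0))      # M/ell**2
# The O(a) term cancels; the leading correction is -M*a**2
print(sp.series(Q, a, 0, 3))
```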
\section{Equivalence of Canonical and Noether Charges}
We want to demonstrate that the canonical charges (\ref{qperp1}) and (\ref{eq:diffcharge}) (which we repeat here for convenience),
\begin{align}
Q_{\perp}[\epsilon]&=\int_{\partial\Sigma}\ast\left[\epsilon D_b\Pi_K^{cb}-D_b\epsilon\Pi_K^{cb}\right], \nonumber\\
Q_D[\epsilon]&=2\int_{\partial\Sigma}\ast\epsilon^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right),\nonumber
\end{align}
are equivalent to Noether charges
\begin{align}
Q=\int d^2x \sqrt{\sigma}n_iJ^i,
\end{align}
where we take into account that the second line of (\ref{eq:CG17}),
\begin{align}
\tau_{ij} &= \sigma \big[\tfrac{2}{\ell}\,(E_{ij}^{\ms{(3)}}+ \tfrac{1}{3} E_{ij}^{\ms{(2)}}\gamma^{\ms{(1)}}) -\tfrac4\ell\,E_{ik}^{\ms{(2)}}\psi^{\ms{(1)} k}_j
+ \tfrac{1}{\ell}\,\gamma_{ij}^{\ms{(0)}} E_{kl}^{\ms{(2)}}\psi_{\ms{(1)}}^{kl}
+\nonumber\\& \tfrac{1}{2\ell^3}\,\psi^{\ms{(1)}}_{ij}\psi_{kl}^{\ms{(1)}}\psi_{\ms{(1)}}^{kl}
- \tfrac{1}{\ell^3}\,\psi_{kl}^{\ms{(1)}}\,\big(\psi^{\ms{(1)} k}_i\psi^{\ms{(1)} l}_j-\tfrac13\,\gamma^{\ms{(0)}}_{ij}\psi^{\ms{(1)} k}_m\psi_{\ms{(1)}}^{lm}\big)\big]
\nonumber\\&- 4\,{\cal D}^k B_{ijk}^{\ms{(1)}} + i\leftrightarrow j\,,
\label{eq:CG17n}
\end{align}
vanishes due to the Cayley-Hamilton theorem, which for the traceless matrix $\gamma^{(1)}_{ij}$ reads
\begin{align}
\frac{1}{2}\psi_{ij}^{(1)}\psi_{lk}^{(1)}\psi^{(1)lk}-\psi_{lk}^{(1)}\psi^{(1)k}_i\psi^{(1)l}_j+\frac{1}{3}\psi^{(1)}_{lk}\psi^{(1)k}_{m}\psi_{(1)}^{lm}\gamma_{ij}^{(0)}=0.
\end{align}
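This identity is straightforward to verify numerically; the following check (ours, with $\psi^{(1)}\to\psi$ a random symmetric matrix that is traceless with respect to $\gamma^{(0)}=\mathrm{diag}(-1,1,1)$, indices raised with the inverse of $\gamma^{(0)}$) confirms it to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
g0 = np.diag([-1.0, 1.0, 1.0])            # gamma^(0)_ij
g0_inv = np.linalg.inv(g0)                # inverse metric gamma_(0)^ij

# Random symmetric psi_ij with vanishing gamma^(0)-trace: gamma^ij psi_ij = 0
psi = rng.standard_normal((3, 3))
psi = 0.5*(psi + psi.T)
psi -= np.trace(g0_inv @ psi)/3.0 * g0

Psi = g0_inv @ psi                         # mixed-index matrix psi^i_j
assert abs(np.trace(Psi)) < 1e-12          # tracelessness

# (1/2) psi_ij tr(Psi^2) - (psi^3)_ij + (1/3) tr(Psi^3) gamma^(0)_ij = 0
lhs = (0.5*np.trace(Psi @ Psi)*psi
       - psi @ g0_inv @ psi @ g0_inv @ psi
       + np.trace(Psi @ Psi @ Psi)/3.0 * g0)
print(np.max(np.abs(lhs)))                 # zero up to rounding errors
```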
One decomposes the finite part of the metric near the boundary with respect to the timelike unit normal $n^a$ and the unit normal $u_a=\nabla_a\rho$,
\begin{align}
\overline{g}_{ab}=-n_an_b+u_au_b+\sigma_{ab}\end{align}
where $\sigma_{ab}$ is the induced metric on $\partial\Sigma$.
The electric and magnetic parts of the Weyl tensor can be decomposed with respect to $u_a$ as
\begin{align}
\mathcal{E}_{ab}=\perp u_cu^dC^c{}_{adb}, && \mathcal{B}_{abc}=\perp u_dC^d{}_{abc},
\end{align}
where $\perp$ denotes the projection orthogonal to $u_a$.
The fully projected Weyl tensor consists of a polynomial in the extrinsic curvature of the timelike hypersurface $\rho=const.$, which is irrelevant due to the Cayley-Hamilton theorem, and a second part that we can write in terms of the electric part of the Weyl tensor.
Close to the boundary we can decompose the momentum $\Pi_K^{ab}$ as
\begin{align}
\Pi_K^{ab}&=2u\wedge\omega_\sigma\left(u^au^b\mathcal{E}_{nn}+2u^{(a}\sigma^{b)c}\mathcal{B}_{ncn}+\sigma^{ac}\sigma^{bd}\mathcal{E}_{cd}-\sigma^{ab}\mathcal{E}_{nn}\right).
\end{align}
Let us consider the first term in $Q_{\perp}$ and note that $\epsilon_{\perp}=N\xi^t$ and $\xi^t=\xi_{(0)}^t+\mathcal{O}(\rho^2)$ at the boundary
\begin{align}
\int_{\partial\Sigma}\ast D_b\epsilon_\perp\Pi_K^{ba}=\int_{\partial\Sigma}2\omega_\sigma\epsilon_\perp\left(\mathcal{E}_{nn}u\cdot\partial\ln N-\ ^{(2)}D_b\mathcal{B}^{nbn}\right),
\end{align}
where ${}^{(2)}D_b$ is the covariant derivative at the boundary; the second term was obtained by partial integration. The second term in $Q_{\perp}$ reads
\begin{align}
-\int_{\partial\Sigma}\ast\epsilon_\perp D_b\Pi_K^{ba}=-\int_{\partial\Sigma}&2\omega_\sigma\epsilon_\perp\left(n^cn^du\cdot\partial\mathcal{E}_{cd}+2\mathcal{E}_{nn}\mathcal{K}-4\mathcal{E}^{nb}\mathcal{K}_{bn}-\mathcal{E}^{cd}\mathcal{K}_{cd}\right)\nonumber\\
&-2\omega_\sigma\ ^{(2)}D_b\mathcal{B}^{nbn}-2\omega_\sigma\epsilon_\perp\mathcal{E}_{nn}u\cdot\partial\ln N
\end{align}
where we used
\begin{align}
\ ^{(2)}D_b\mathcal{B}^{nbn}&=-n_an_c\mathcal{D}_b\mathcal{B}^{(ac)b}+\mathcal{B}^{acn}k_{ac},\nonumber\\
\mathcal{E}_{ab}&\equiv\mathcal{E}^{\ms{(2)}}_{ab}+\mathcal{O}(\rho)\nonumber\\
u\cdot\partial\mathcal{E}_{ab}&\equiv\mathcal{E}^{\ms{(3)}}_{ab}+\mathcal{O}(\rho)\nonumber\\
\mathcal{B}_{abc}&\equiv-\mathcal{B}^{\ms{(1)}}_{abc}+\mathcal{O}(\rho)
\end{align}
with $k_{ab}$ the extrinsic curvature of $\partial\Sigma$ embedded in $\partial\mathcal{M}$.
This leads to
\begin{align}
Q_\perp=\int_{\partial\Sigma}-2\omega_\sigma n_c\xi^g\left(-n_gn^b\right)&\left(\mathcal{E}^c_{\ms{(3)} b}+\mathcal{E}^c_{\ms{(2)} b}\gamma^{\ms{(1)}}-2\mathcal{E}^c_{\ms{(2)} d}\gamma^{\ms{(1)} d}_b-\mathcal{E}^{\ms{(2)}}_{bd}\gamma^{ad}_{\ms{(1)}}+\frac{1}{2}\gamma_{\ms{(0)} b}^{c}\mathcal{E}^{cd}_{\ms{(2)}}\gamma^{\ms{(1)}}_{cd}\right.\nonumber\\
&\left.+2\omega_\sigma\gamma^{\ms{(0)}}_{be}\mathcal{D}_d\mathcal{B}^{(ce)d}_{\ms{(1)}}\right)-4\omega_\sigma \epsilon_\perp\mathcal{B}^{acn}k_{ab}.
\end{align}
To obtain the $Q_D$ charge, we first use the equations of motion to obtain
\begin{align}
\Pi_h^{ab}&=K\Pi_K^{ab}-2\Pi_K^{e(a}K^{b)}_{\ e}+\frac{1}{2}\Pi_K^{cd}K_{cd}h^{ab}\\&+\omega_h\perp\left(n_en_dn\nabla C^{aebd}-2n_d\nabla_c C^{c(ab)d}\right).
\end{align}
We obtain for the decomposition of $Q_D$
\begin{align}
Q_D=\int_{\partial\Sigma}-2\omega_\sigma n_c\xi^g\sigma_g^{\ b}&\left(\mathcal{E}^c_{\ms{(3)} b}+\mathcal{E}^c_{\ms{(2)} b}\gamma^{\ms{(1)}}-2\mathcal{E}^c_{\ms{(2)} d}\gamma^{\ms{(1)} d}_b-\mathcal{E}^{\ms{(2)}}_{bd}\gamma^{ad}_{\ms{(1)}}\right.\nonumber\\
&\left.+2\omega_\sigma\gamma^{\ms{(0)}}_{be}\mathcal{D}_d\mathcal{B}^{(ce)d}_{\ms{(1)}}\right)+4\omega_\sigma \epsilon_b\ ^{(2)}D_d\perp_n\mathcal{B}^{(bd)n}.
\end{align}
Integrating the last term by parts gives
\begin{align}
\int_{\partial\Sigma}4\omega_\sigma \epsilon_b\ ^{(2)}D_d\perp_n\mathcal{B}^{(bd)n}=-\int_{\partial\Sigma}4\omega_\sigma \ ^{(2)}D_{(d}\epsilon_{b)}\mathcal{B}^{bdn}.
\end{align}
Decomposing the Lie derivative of the metric $\gamma_{ab}$ along the generator with respect to $n_a$ and $\sigma_{ab}$ gives, for the spatially projected part,
\begin{align}
\sigma_a^{\ c}\sigma_b^{\ d}\pounds_\xi\gamma_{cd}=2\epsilon_\perp k_{ab}+2\ ^{(2)}D_{(d}\epsilon_{b)}=2\lambda^{\ms{(0)}}\sigma^{\ms{(0)}}_{ab}+\mathcal{O}(\rho).
\end{align}
Using $\sigma^{\ms{(0)}}_{ab}\mathcal{B}^{abn}_{\ms{(1)}}=0$ one notices that the last term in $Q_D$ equals the last term in $Q_{\perp}$, and the two cancel. The sum of the charges therefore reads
\begin{align}
Q[\xi]=\int_{\partial\Sigma}-2\omega_\sigma n_c\xi^{b}&\left(\mathcal{E}^c_{\ms{(3)} b}+\mathcal{E}^c_{\ms{(2)} b}\gamma^{\ms{(1)}}-2\mathcal{E}^c_{\ms{(2)} d}\gamma^{\ms{(1)} d}_b-\mathcal{E}^{\ms{(2)}}_{bd}\gamma^{ad}_{\ms{(1)}}+\frac{1}{2}\gamma_{\ms{(0)} b}^{c}\mathcal{E}^{cd}_{\ms{(2)}}\gamma^{\ms{(1)}}_{cd}\right.\nonumber\\
&\left.+2\omega_\sigma\gamma^{\ms{(0)}}_{be}\mathcal{D}_d\mathcal{B}^{(ce)d}_{\ms{(1)}}\right).
\end{align}
This charge agrees with the one from the first chapter up to an overall factor of 4, which stems from the normalization of the initial action we started with.
\chapter{Classification}
\section{Introduction to Classification}
Higher derivative theories of gravity, such as CG, lead to very complicated sets of partial differential equations. To solve them in full generality one resorts to simplifications, imposing physically interesting conditions such as spherical, axial or another particular symmetry \cite{Breitenlohner:2004fp}, which, combined with restricting the coordinate dependence to the radial coordinate, can make the equations analytically solvable. Two further approaches are the numerical one, which focuses on a certain set of solutions, and the bottom-up approach, which can be pictured as the reverse of a Kaluza--Klein reduction \cite{Grumiller:2006ww}.
The latter approach arises from the analysis of the asymptotic symmetry algebra.
The exemplary case is the algebra of the Einstein--Hilbert action. For $\Lambda<0$ the matter-free Einstein equations admit the maximally symmetric AdS solution with isometry group $O(3,2)$, which plays the role Minkowski space plays for $\Lambda=0$. The fields that enter the action need to approach the AdS configuration asymptotically, which requires
\begin{itemize}
\item proving the invariance of asymptotic conditions under the AdS group action
\item well defined canonical generators of the symmetry (which we have proven in the previous chapter)
\item included physically interesting asymptotically AdS solutions
\item boundary conditions written in terms of the spacetime metric components \cite{Henneaux:1985tv}.
\end{itemize}
We have seen that, computing the canonical boundary charges, one finds the algebra that agrees with the one obtained from the diffeomorphisms that preserve the boundary conditions.
The boundary conditions imposed on the metric are the AdS boundary conditions. This asymptotic symmetry algebra corresponds to the algebra formed by the canonical charges that one obtains with background-independent boundary conditions.
The transformations acting on the boundary conditions are generated by infinitesimal diffeomorphisms \begin{equation}x^{\mu}\rightarrow x^{\mu}+\xi^{\mu}\label{diff}\end{equation} with vector field $\xi^{\mu}$, and by Weyl rescalings of the metric $g_{\mu\nu}\rightarrow e^{2\omega}g_{\mu\nu}$, where $\omega$ is the Weyl factor. The transformation of the metric is
\begin{equation}
\delta g_{\mu\nu}=\left(e^{2\omega}-1\right)g_{\mu\nu}+\pounds_{\xi}g_{\mu\nu}\label{trafo}.
\end{equation}
The boundary conditions to be preserved, introduced in chapter 1, are the form of the metric $ds^2=\frac{\ell^2}{\rho^2}\left(-\sigma d\rho^2+\gamma_{ij}dx^idx^j\right)$ (\ref{le}) with the expansion $\gamma_{ij}=\gamma_{ij}^{(0)}+\frac{\rho}{\ell}\gamma_{ij}^{(1)}+\frac{\rho^2}{\ell^2}\gamma_{ij}^{(2)}+\frac{\rho^{3}}{\ell^3}\gamma_{ij}^{(3)}+...$ (\ref{expansiongamma}) at the boundary, together with the variations
\begin{enumerate}
\item $\delta g_{\rho\rho}=0$,
\item $\delta g_{\rho i}=0$, \label{varcond2}
\item $\delta \gamma_{ij}^{(0)}=2\overline{\lambda}(x) \gamma_{ij}^{(0)}$,
\item and $\delta \gamma_{ij}^{(1)}=\overline{\lambda}(x) \gamma_{ij}^{(1)}$ (\ref{bcs}), for $\overline{\lambda}(x)$ an arbitrary function of the boundary coordinates.
\end{enumerate}
We want to find the transformations that preserve these conditions. Assuming the expansion of the Weyl factor
\begin{equation}
\omega=\omega^{(0)}+\frac{\rho}{\ell}\omega^{(1)}+\frac{1}{2}\left(\frac{\rho}{\ell}\right)^2\omega^{(2)}+...\label{omexp}
\end{equation}
the $\rho\rho$ component of equation (\ref{trafo}) gives the condition
\begin{align}
\delta g_{\rho\rho}=\left(e^{2\omega}-1\right)g_{\rho\rho}+\xi^{\mu}\partial_{\mu}g_{\rho\rho}+2g_{\rho\rho}\partial_{\rho}\xi^{\rho}
\end{align}
which, after dividing by $g_{\rho\rho}$ and using $\partial_{\rho}g_{\rho\rho}=-\frac{2}{\rho}g_{\rho\rho}$, reduces to
\begin{align}
(e^{2\omega}-1)-2\frac{\xi^{\rho}}{\rho}+2\partial_{\rho}\xi^{\rho}=0.\label{eqr}
\end{align}
This dictates the allowed form of $\xi^{\rho}$. To consider the contribution of each term in the expansion (\ref{omexp}), we solve the differential equation (\ref{eqr}) in $\rho$ for $\xi^{\rho}$. To weaken the restrictions from equation (\ref{eqr}) we set its right-hand side equal to a constant $c$ and consider the restrictions from the expansion (\ref{omexp}) term by term.
\begin{itemize}
\item When only the first term in the expansion is taken, i.e. $\omega=\omega_0$, one obtains
\begin{equation}
\xi^{\rho}=c_1 \rho+\frac{1}{2}\left(1+c-e^{2\omega_0}\right)\rho \ln(\rho)
\end{equation}
\item for $\omega$ equal to the second term in the expansion, $\omega=\omega_1 \rho$, the result of the equation is
\begin{equation}
\xi^{\rho}=c_1 \rho+\frac{1}{2}\rho\left(-\mathrm{Ei}(2\rho\omega_1)+(1+c)\ln(\rho)\right),
\end{equation}
where $\mathrm{Ei}(z)=-\int_{-z}^{\infty}\frac{e^{-t}}{t}dt$,
\item while $\omega$ that is equal to one of the higher terms, $\omega=\omega_n \rho^n$ leads to
\begin{equation}
\xi^{\rho}=c_1 \rho+\frac{1}{2}\rho\left(-\frac{1}{n}\mathrm{Ei}(2\rho^n\omega_n)+(1+c)\ln(\rho)\right).
\end{equation}
\end{itemize}
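These three cases follow from one and the same elementary step: with the constant $c$ on the right-hand side, (\ref{eqr}) can be rewritten as a first-order linear equation,
\begin{align}
2\partial_\rho\xi^{\rho}-\frac{2\xi^{\rho}}{\rho}=1+c-e^{2\omega}
\qquad\Longleftrightarrow\qquad
\partial_\rho\!\left(\frac{\xi^{\rho}}{\rho}\right)=\frac{1+c-e^{2\omega}}{2\rho},
\end{align}
so a single integration yields $\xi^{\rho}$: the homogeneous solution gives the $c_1\rho$ term, a constant $\omega$ produces the $\rho\ln(\rho)$ term, and $\omega\propto\rho^n$ the exponential integral.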
Let us now consider general forms of $\xi^{\rho}$ and $\omega$. The solution of (\ref{eqr}) for $\xi^{\rho}$
reads
\begin{equation}
\xi^{\rho}=c_1\rho-\rho\int_1^{\rho}\frac{-k_1+e^{2\omega(k_1)}k_1}{2k_1^2}dk_1\label{integr}
\end{equation}
with $k_1$ the integration variable. We want a general expansion of $\xi^{\rho}$, for which we have to take into account particular conditions that may appear, one of them being the absence of logarithmic terms. Logarithms arise precisely when the integrand in (\ref{integr}) is of the form $\frac{\text{const}}{k_1}$, so we impose the condition
\begin{equation}
\frac{-k_1+e^{2\omega(k_1)}k_1}{2k_1^2}\neq \frac{c}{2k_1}.
\end{equation}
That leads to $\omega(k_1)\neq c'$, where $c'=\frac{1}{2}\ln(c+1)$, and implies that the first condition we need to impose on the expansion (\ref{omexp}) is that its first term vanishes, as we will also see below. If we consider the expansion for $\xi^{\rho}$
\begin{equation}
\xi^{\rho}=\xi^{(0)\rho}+\rho\xi^{(1)\rho}+\rho^2\xi^{(2)\rho}+\rho^3\xi^{(3)\rho}+...
\end{equation} and for $\omega$ the expansion (\ref{omexp}), and insert them into equation (\ref{eqr}) for $\xi^{\rho}$, we obtain the requirements $\omega_0=\xi^{(0)\rho}=0$, $\xi^{(2)\rho}=-\omega^{(1)}$, $\omega_{2}=-2\xi^{(3)\rho}-\omega_1^2$, $\xi^{(3)\rho}=\frac{\sqrt{-12\xi^{(5)\rho}+2\omega_1^4-6\omega_1\omega_3-3\omega_4}}{2\sqrt{3}}$, and $\xi^{(4)\rho}=\frac{1}{9}(12 \xi^{(3)\rho}\omega_1+4\omega_1^3-3\omega_3)$...
They satisfy the equation (\ref{eqr}) to $\mathcal{O}(\rho^5)$ and define $\omega$ and $\xi^{\rho}$ which read
\begin{align}
\omega&=\frac{\rho}{\ell}\omega^{(1)}+\frac{1}{2}\frac{\rho^2}{\ell^2}\omega^{(2)}+..\\
\xi^{\rho}&=\rho\xi^{(1)\rho}-\frac{\rho^2}{\ell}\omega^{(1)}+...
\end{align}
Here ``...'' denotes higher order terms and $\xi^{(1)\rho}\equiv\lambda(x)$, for $\lambda(x)$ allowed to depend on the boundary coordinates.
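The lowest orders of these requirements can be read off directly from (\ref{eqr}) (with vanishing constant $c$ and setting $\ell=1$): inserting both expansions and collecting powers of $\rho$ gives
\begin{align}
\mathcal{O}(\rho^{-1}):\quad -2\xi^{(0)\rho}=0,\qquad
\mathcal{O}(\rho^{0}):\quad e^{2\omega^{(0)}}-1=0,\qquad
\mathcal{O}(\rho):\quad 2\omega^{(1)}+2\xi^{(2)\rho}=0,
\end{align}
reproducing $\omega^{(0)}=\xi^{(0)\rho}=0$ and $\xi^{(2)\rho}=-\omega^{(1)}$.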
The second condition on the variation of the metric, $\delta g_{\rho i}=0$, together with the transformation of the metric (\ref{trafo}), leads to the equation
\begin{align}
0&=g_{ij}\partial_{\rho}\xi^j+g_{\rho\rho}\partial_i\xi^{\rho}\nonumber \\
&=\gamma_{ij}\partial_{\rho}\xi^{i}-\sigma\xi^{\rho}
\end{align}
which rewritten as $\partial_{\rho}\xi^k=\sigma\gamma^{kj}\partial_j\xi^{\rho}$ has a solution
\begin{equation}
\xi^{i}=\xi^{(0)i}+\sigma\int d\rho\left(\gamma^{ij}\partial_j\xi^{\rho} \right)
\end{equation}
that can be integrated collecting orders of $\rho$
\begin{align}
\xi^i&=\xi^{(0)i}+\sigma\int d\rho\big[\gamma^{(0)ij}-\rho\gamma^{(1)ij}+\rho^2\big(\gamma^{(1)ik}\gamma_k^{(1)j}-\gamma^{(2)ij}\big) \big]\partial_j\big(\rho\xi^{(1)\rho}\nonumber \\ &+\rho^2\xi^{(2)\rho}+\rho^3\xi^{(3)\rho}\big)\nonumber \\
&=\xi^{(0)i}+\sigma\int d\rho\left[ \rho \gamma^{(0)ij}\partial_j\xi^{(1)\rho}+ \rho^2\left(\gamma^{(0)ij}\partial_j\xi^{(2)\rho}-\gamma^{(1)ij}\partial_j\xi^{(1)\rho}\right) \right].\label{eq:1}
\end{align}
\noindent We are interested in the expansion up to first order. Integrating the first term in (\ref{eq:1}) leads to $\int d\rho\, \rho \gamma^{(0)ij}\partial_j\xi^{(1)\rho}=\frac{1}{2}\gamma^{(0)ij}\partial_j\lambda\,\rho^2$ of $\mathcal{O}(\rho^2)$, which defines $\xi^i$:
\begin{equation}
\xi^{i}=\xi^{(0)i}+\frac{1}{2}\sigma\rho^2\mathcal{D}^i\lambda
\end{equation}
where $\mathcal{D}_i$ is the covariant derivative along the boundary, compatible with $\gamma_{ij}^{(0)}$.
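As a consistency check, this $\xi^{i}$ satisfies $\partial_{\rho}\xi^{i}=\sigma\gamma^{ij}\partial_{j}\xi^{\rho}$ to leading order,
\begin{align}
\partial_{\rho}\left(\frac{1}{2}\sigma\rho^2\mathcal{D}^i\lambda\right)=\sigma\rho\,\mathcal{D}^i\lambda=\sigma\gamma^{(0)ij}\partial_j\left(\rho\lambda\right)+\mathcal{O}(\rho^2),
\end{align}
since $\xi^{\rho}=\rho\lambda+\mathcal{O}(\rho^2)$.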
The linear term in the expansion of the Killing vector (KV) $\xi^{i}$ is absent, which will allow the classification of the subalgebras of the conformal algebra coming from the $ij$ component of the equation for the transformation of the metric (\ref{trafo}). From the $ij$ component of (\ref{trafo}) (with $\ell=1$) one obtains
\begin{align}
\delta g_{ij}&=(e^{2\omega}-1)g_{ij}+\pounds_{\xi}g_{ij}\\
\frac{1}{\rho^2}\delta\gamma_{ij}&=(e^{2\omega}-1)\frac{1}{\rho^2}\gamma_{ij}+\left(\xi^{\alpha}\partial_{\alpha}\left(\frac{1}{\rho^2}\gamma_{ij}\right)+\frac{1}{\rho^2}\gamma_{\alpha i}(\partial_j\xi^{\alpha})+\frac{1}{\rho^2}\gamma_{\alpha j}(\partial_i\xi^{\alpha}) \right) \nonumber \\
&=\frac{1}{\rho^2}\left[(e^{2\omega}-1)\gamma_{ij}+\pounds_{\xi^k}\gamma_{ij}+\xi^{\rho}\left(-\frac{2}{\rho}\gamma_{ij}+\partial_{\rho}\gamma_{ij}\right)\right] \label{eq:2}
\end{align}
which, expanded in $\rho$,
\begin{align}
\delta \left(\gamma_{ij}^{(0)}+\rho \gamma_{ij}^{(1)}+...\right)&=\left(2\omega^{(1)} \rho+2\left(\omega^{(1)}+\omega^{(2)}\right)\rho^2 \right)\left(\gamma_{ij}^{(0)}+\rho \gamma_{ij}^{(1)}+...\right)\nonumber \\ &+[ \left(\xi^{(0)l}+\frac{1}{2}\sigma\rho^2\mathcal{D}^l\lambda \right)\mathcal{D}_{l}\left(\gamma_{ij}^{(0)}+\rho\gamma_{ij}^{(1)}\right) \nonumber \\ &+\left(\gamma_{il}^{(0)}+\rho\gamma_{il}^{(1)}\right)\mathcal{D}_j\left(\xi^{(0)l}+\frac{1}{2}\sigma\rho^2\mathcal{D}^l\lambda\right)\nonumber \\ &+\left(\gamma_{lj}^{(0)}+\rho\gamma_{lj}^{(1)}\right)\mathcal{D}_i\left(\xi^{(0)l}+\frac{1}{2}\sigma\rho^2\mathcal{D}^l\lambda\right) ] \nonumber \\ &+ \left(\rho\xi^{(1)\rho}-\frac{\rho^2}{\ell}\omega^{(1)} \right)\partial_{\rho}\left(\gamma_{ij}^{(0)}+\rho \gamma_{ij}^{(1)}+...\right) \nonumber \\ &-\frac{2}{\rho}\left(\rho\xi^{(1)\rho}-\frac{\rho^2}{\ell}\omega^{(1)} \right)\left(\gamma_{ij}^{(0)}+\rho \gamma_{ij}^{(1)}+...\right)
\end{align}\noindent in the leading and the next to leading order read
\begin{align}
\delta \gamma_{ij}^{(0)}&=\mathcal{D}_i\xi_j^{(0)}+\mathcal{D}_j\xi^{(0)}_i-2\lambda\gamma_{ij}^{(0)}\label{eqnn1}\\
\delta\gamma_{ij}^{(1)}&=\pounds_{\xi^{k}_{(0)}}\gamma_{ij}^{(1)}+4\omega^{(1)}\gamma_{ij}^{(0)}-\lambda\gamma_{ij}^{(1)}.\label{eqnn2}
\end{align} Since the boundary conditions of the theory allow the variations $\delta\gamma_{ij}^{(0)}=2\overline{\lambda}(x)\gamma_{ij}^{(0)}$ and $\delta \gamma_{ij}^{(1)}=\overline{\lambda}(x)\gamma_{ij}^{(1)}$ from above, for $\overline{\lambda}\neq\lambda$,
the trace of the condition (\ref{eqnn1}) gives
$\mathcal{D}_{i}\xi^{(0)i}=3\lambda+3\overline{\lambda}$
and defines $\overline{\lambda}$, \begin{equation}\overline{\lambda}=\frac{1}{3}\mathcal{D}_i\xi^{(0)i}-\lambda.\end{equation}
Inserting it back in (\ref{eqnn1})
gives
\begin{equation}
\mathcal{D}_{i}\xi^{(0)}_j+\mathcal{D}_j\xi^{(0)}_i=\frac{2}{3}\gamma_{ij}^{(0)}\mathcal{D}_{k}\xi^{(0)k}\label{lo}
\end{equation}
which defines the asymptotic symmetry algebra (ASA) of the theory at the boundary. Further restrictions that define subalgebras are determined from (\ref{eqnn2}).
The trace of (\ref{eqnn2}), the boundary condition $\delta\gamma_{ij}^{(1)}=\overline{\lambda}\gamma_{ij}^{(1)}$ and the relation between $\overline{\lambda}$ and $\lambda$ from (\ref{lo}) determine
\begin{equation}
\omega^{(1)}=\frac{1}{12}\left[-\pounds_{\xi^{(0)l}}\gamma^{(1)}+\frac{1}{3}\gamma^{(1)}\mathcal{D}_l\xi^{(0)l} \right]\label{om1}
\end{equation}
which, inserted into equation (\ref{eqnn2}),
gives
\begin{equation}
\pounds_{\xi^{(0)l}}\gamma_{ij}^{(1)}=\frac{1}{3}\gamma_{ij}^{(1)}\mathcal{D}_l\xi^{(0)l}-4\omega^{(1)}\gamma_{ij}^{(0)}.\label{nloke}
\end{equation}
\section{Flat, Spherical and Linearly Combined Killing Vectors}
Equation (\ref{lo}), which defines the ASA, is the leading-order Killing equation; it depends only on the first term in the expansion of the Killing vector, $\xi^{(0)i}$, and on the background metric $\gamma^{(0)}_{ij}$. We will consider two background metrics
\begin{enumerate}
\item
flat background metric
\begin{equation}\gamma_{ij}^{\ms{(0)}}=\eta_{ij}=diag(-1,1,1)_{ij}\label{eqga0}\end{equation}
with coordinates $(t,x,y)$ defined on $\partial\mathcal{M}$,
\item
and the spherical $\mathbb{R}\times S^2$ background metric
\begin{equation}\gamma_{ij}^{\ms{(0)}}=diag(-1,1,\sin(\theta)^2)_{ij},\label{eqga0s}\end{equation}
with coordinates $(t,\theta,\phi)$.
\end{enumerate}
To compute the Killing vectors $\xi^{(0)i}$ we follow the procedure from \cite{DiFrancesco:1997nk}, and from now on write quantities with indices $\mu,\nu,\kappa,...$ since the computation applies in $d$ dimensions.
\noindent The conformal transformation $g'_{\mu\nu}(x')=\Omega(x) g_{\mu\nu}(x)$ is locally equivalent to a (pseudo) rotation and a dilation. The group formed by the set of conformal transformations contains the Poincar\'e group as a subgroup, recovered when $\Omega\equiv1$. The name ``conformal'' comes from the fact that the angle between two arbitrary curves crossing each other at some point is unaffected, i.e. angles are preserved.
The consequences of $g'_{\mu\nu}(x')=\Omega(x)g_{\mu\nu}(x)$ on an infinitesimal transformation (\ref{diff}) of the metric are that in the first order of $\xi^{\mu}$ one obtains
\begin{equation}
g_{\mu\nu}\rightarrow g_{\mu\nu}-(\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}).
\end{equation}
while conformal invariance means
\begin{equation}
\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}=f(x)g_{\mu\nu}\label{pd1}
\end{equation}
where one can recognise the form of (\ref{lo}). The function $f(x)$ is found analogously to $\lambda$ in (\ref{eqnn1}),
\begin{equation}
f(x)=\frac{2}{d}\partial_{\rho}\xi^{\rho},
\end{equation}
by taking the trace of (\ref{pd1}) with the standard flat Cartesian metric $g_{\mu\nu}=\gamma^{(0)}_{\mu\nu}$.
Acting with the partial derivative $\partial_{\rho}$ on (\ref{pd1}) and permuting the indices
defines three equations, whose linear combination gives
\begin{equation}
2\partial_{\mu}\partial_{\nu}\xi_{\rho}=\gamma^{\ms{(0)}}_{\mu\rho}\partial_{\nu}f+\gamma^{\ms{(0)}}_{\nu\rho}\partial_{\mu}f-\gamma^{\ms{(0)}}_{\mu\nu}\partial_{\rho}f,\label{2pp}
\end{equation}
which, contracted with $\gamma^{\ms{(0)}\mu\nu}$, leads to
\begin{equation}
2\partial^{2}\xi_{\mu}=(2-d)\partial_{\mu}f\label{2md}.
\end{equation}
After acting with $\partial_{\nu}$ on (\ref{2md}) and with $\partial^2$ on (\ref{pd1}), one finds the equation $(2-d)\partial_{\mu}\partial_{\nu}f(x)=\gamma_{\mu\nu}^{(0)}\partial^2f(x)$, whose trace gives
\begin{equation}
(d-1)\partial^2f=0.\label{dm1}
\end{equation}
These equations allow the derivation of the general form of conformal transformations in $d$ dimensions. We focus on $d\geq3$, for which
equations (\ref{dm1}) and (\ref{2md}) imply $\partial_{\mu}\partial_{\nu}f=0$, so the function $f$ is at most linear in the coordinates,
\begin{equation}
f(x)=A+B_{\mu}x^{\mu}
\end{equation}
for constant $A,B_{\mu}$.
Inserting that ansatz into (\ref{2pp}) shows that $\partial_{\mu}\partial_{\nu}\xi_{\kappa}$ is constant, so $\xi_{\mu}$ is at most quadratic in the coordinates,
\begin{equation}
\xi_{\mu}=a_{\mu}+b_{\mu\nu}x^{\nu}+c_{\mu\nu\kappa}x^{\nu}x^{\kappa}\label{xiansatz}
\end{equation}
where $c_{\mu\nu\kappa}=c_{\mu\kappa\nu}$.
Note that (\ref{pd1})--(\ref{2pp}) hold for all $x$, so one is allowed to treat the powers of the coordinates individually. That means $a_{\mu}$ is free of constraints; it denotes an infinitesimal translation, with the corresponding finite transformation \begin{equation} x'^{\mu}=x^{\mu}+a^{\mu}. \end{equation} Considering the linear term in (\ref{pd1}) leads to
\begin{equation}
b_{\mu\nu}+b_{\nu\mu}=\frac{2}{d}b^{\kappa}_{\kappa}\gamma^{(0)}_{\mu\nu}
\end{equation}
which implies that $b_{\mu\nu}$ is of the form
\begin{align}
b_{\mu\nu}=\alpha\gamma^{(0)}_{\mu\nu}+m_{\mu\nu} && m_{\mu\nu}=-m_{\nu\mu},
\end{align}
i.e. we obtain the sum of an antisymmetric part and a trace. The trace represents an infinitesimal scale transformation, with finite transformation \begin{equation} x'^{\mu}=\alpha x^{\mu},\end{equation} and the antisymmetric part an infinitesimal (rigid) rotation, with finite transformation \begin{equation}x'^{\mu}=M^{\mu}_{\ \nu}x^{\nu}.\end{equation}
Inserting the ansatz (\ref{xiansatz}) for the Killing vector $\xi^{\mu}$ into equation (\ref{2pp}) gives the form of the term in $\xi^{\mu}$ quadratic in the coordinates, $c_{\mu\nu\kappa}$,
\begin{align}
c_{\mu\nu\kappa}=\gamma^{(0)}_{\mu\kappa}b_{\nu}+\gamma^{(0)}_{\mu\nu}b_{\kappa}-\gamma^{(0)}_{\nu\kappa}b_{\mu} &&\text{ with } && b_{\mu}\equiv\frac{1}{d}c^{\nu}_{\nu\mu}
\end{align}
and the corresponding infinitesimal transformation
\begin{equation}
x'^{\mu}=x^{\mu}+2(x\cdot b)x^{\mu}-b^{\mu}x^2, \label{infsct}
\end{equation}
called {\it special conformal transformation} (SCT). The corresponding finite transformation is \begin{equation} x'^{\mu}=\frac{x^{\mu}-b^{\mu}x^2}{1-2b\cdot x+b^2x^2}.\label{trsct}
\end{equation}
One may demonstrate that the finite transformation (\ref{trsct}) reduces to the infinitesimal transformation (\ref{infsct}) for small $b^{\mu}$, and prove that it is conformal, with conformal factor $\Omega(x)=(1-2b\cdot x+b^2x^2)^2$.
Another way to think of the SCT is as a translation preceded and followed by an inversion $x^{\mu}\rightarrow x^{\mu}/x^2$.
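This composition can be verified directly from (\ref{trsct}): since $x'^{2}=\frac{x^{2}}{1-2b\cdot x+b^{2}x^{2}}$, one finds
\begin{equation}
\frac{x'^{\mu}}{x'^{2}}=\frac{x^{\mu}}{x^{2}}-b^{\mu},
\end{equation}
i.e. an inversion, followed by a translation by $-b^{\mu}$, followed by a second inversion.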
Using the definition of the generator of infinitesimal transformations, one obtains the generators of the conformal group. It is customary to define the transformation by
\begin{align}
x'^{\mu}&=x^{\mu}+\omega_a\frac{\delta x^{\mu}}{\delta \omega_a} \\
\Phi'(x')&=\Phi(x)+\omega_a\frac{\delta \mathcal{F}}{\delta \omega_a}(x)\label{defgen}
\end{align}
where $\omega_a$ denotes a set of infinitesimal parameters, considered up to first order, which are in our case $a_{\mu}$, $b_{\mu\nu}$ and $c_{\mu\nu\kappa}$. The generator $G_a$ is defined by the symmetry transformation via the expression for the infinitesimal transformation at one point
\begin{equation}
\delta_{\omega}\Phi(x)\equiv \Phi'(x)-\Phi(x)\equiv \omega_a G_{a}\Phi(x),\label{gener}
\end{equation}
that combined with the (\ref{defgen}) leads to
\begin{align}
\Phi'(x') &=\Phi(x')-\omega_a\frac{\delta x^{\mu}}{\delta\omega_a}\partial_{\mu}\Phi(x')+\omega_a\frac{\delta\mathcal{F}}{\delta\omega_a}(x').
\end{align}
From this one may obtain the generator as
\begin{equation}
G_a\Phi=\frac{\delta x^{\mu}}{\delta\omega_a}\partial_{\mu}\Phi-\frac{\delta \mathcal{F}}{\delta \omega_a}.
\end{equation}
In the case of a translation by a vector $\omega^{\mu}$, this leads to $\frac{\delta x^{\mu}}{\delta\omega^{\nu}}=\delta^{\mu}_{\nu}$ and $\frac{\delta\mathcal{F}}{\delta\omega^{\nu}}=0$. The generator of translations reads
\begin{equation}
P_{\nu}=\partial_{\nu},
\end{equation}
and the function $\mathcal{F}$ can be taken to be constant.
In the general case one may, in the definition of the generator (\ref{gener}), include a constant on the RHS, which can be set equal to $i$ or $1$ depending on the theory one is interested in.
In the case of rotations the procedure is analogous, however the function $\mathcal{F}(\Phi)$ is taken to be $\mathcal{F}(\Phi)=L_{\lambda}\Phi$, where $L_{\lambda}$ is the generator of infinitesimal Lorentz transformations \cite{DiFrancesco:1997nk,Blagojevic:2002du}.
Now we return to our case of three dimensions and use the coordinate names $(t,x,y)$ and the indices $i,j,k,...$.
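In $d$ dimensions the independent parameters found above, $a_{\mu}$, the antisymmetric $m_{\mu\nu}$, the dilation parameter $\alpha$ and the SCT vector $b_{\mu}$, add up to the dimension of the conformal group,
\begin{equation}
d+\frac{d(d-1)}{2}+1+d=\frac{(d+1)(d+2)}{2},
\end{equation}
i.e. ten generators for $d=3$.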
In three dimensional Minkowski space we obtain three generators of translations
\begin{align}
\xi^{(0)} &= \partial_t, &
\xi^{(1)} &= \partial_x, &
\xi^{(2)} &= \partial_y
\end{align}
that together with the generators of Lorentz rotations $ L_{ij}=(x_i\partial_j-x_j\partial_i) $, (or in components)
\begin{align}
\xi^{(3)} &= x\partial_t + t\partial_x &
\xi^{(4)} &= y\partial_t + t\partial_y &
\xi^{(5)} &= y\partial_x - x\partial_y
\end{align}
form the Poincar\'e algebra. Four additional conformal Killing vectors (CKVs) generate dilatations and special conformal transformations respectively
\begin{align}
\xi^{(6)} &= t\partial_t + x\partial_x + y\partial_y &
\xi^{(7)} &= tx\partial_t + \frac{t^2+x^2-y^2}{2}\,\partial_x + xy\partial_y \\
\xi^{(8)} &= ty\partial_t + xy\partial_x + \frac{t^2+y^2-x^2}{2}\,\partial_y &
\xi^{(9)} &= \frac{t^2+x^2+y^2}{2}\,\partial_t + tx\partial_x + ty\partial_y \label{origckvs}.
\end{align}
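One can check directly that these satisfy the conformal Killing equation (\ref{lo}). For example, for $\xi^{(9)}$, lowering the index with $\eta_{ij}$ gives $\xi^{(9)}_i=\left(-\tfrac{t^2+x^2+y^2}{2},\,tx,\,ty\right)$ and
\begin{align}
\partial_i\xi^{(9)}_j+\partial_j\xi^{(9)}_i=2t\,\eta_{ij}=\frac{2}{3}\,\eta_{ij}\,\partial_k\xi^{(9)k},
\end{align}
since $\partial_k\xi^{(9)k}=3t$.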
We denote the KVs that generate translations by $\xi^{t}=(\xi^{(0)},\xi^{(1)},\xi^{(2)})$, the generator of dilatations by $\xi^{(6)}\equiv\xi^d$, and the generators of SCTs by $\xi^{sct}=(\xi^{(7)},\xi^{(8)},\xi^{(9)})$. The generators obey the conformal algebra commutation rules
\begin{align}
[\xi^d,\xi^t_j]&=-\xi^t_j && [\xi^d,\xi^{sct}_j]=\xi^{sct}_j\\
[\xi_l^t,L_{ij}]&=(\eta_{li}\xi^t_j-\eta_{lj}\xi^t_i) && [\xi_{l}^{sct},L_{ij}]=-(\eta_{li}\xi^{sct}_{j}-\eta_{lj}\xi^{sct}_i) \label{ca1}
\end{align}
\begin{align}
[\xi_i^{sct},\xi_j^t]&=-(\eta_{ij}\xi^d-L_{ij})\\
[L_{ij},L_{mj}]&=-L_{im}\label{ca2}
\end{align}
which can be verified explicitly.
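For instance, for the dilatation and a translational KV one computes
\begin{align}
[\xi^{d},\xi^{(1)}]=[t\partial_t+x\partial_x+y\partial_y,\,\partial_x]=-\partial_x=-\xi^{(1)},
\end{align}
in agreement with the relation $[\xi^d,\xi^t_j]=-\xi^t_j$ of (\ref{ca1}).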
One can notice the analogous commutation relations of SCTs and translations with rotations. As we will see later, the consequences of this analogy will be manifest in the subalgebras of the conformal algebra containing translational KVs and SCT KVs.
Knowing that the above KVs, which form the conformal algebra, result from imposing the flat background metric on equation (\ref{lo}), we can continue to consider equation (\ref{nloke}). When the linear term in the FG expansion of the metric vanishes ($\gamma_{ij}^{(1)}=0$), there is no condition on the asymptotic symmetry algebra at linear order. When the linear term in the FG expansion exists, one obtains the next-to-leading order Killing equation (\ref{nloke}) for the flat background $\gamma_{ij}^{(0)}$
\eq{
\xi^{(0)}{}^k\partial_k\gamma^{\ms{(1)}}_{ij}+\gamma^{\ms{(1)}}_{kj}\partial_i\xi^{(0)}{}^k+\gamma_{ik}^{\ms{(1)}}\partial_j\xi^{(0)}{}^k=\frac{1}{3}D_k\xi^{(0)}{}^k\gamma^{\ms{(1)}}_{ij}-4\gamma^{\ms{(0)}}_{ij}\omega^{(1)}.}{eq:nloke}
\noindent One can use (\ref{eq:nloke}) for the analysis of the CG solutions as follows.
\begin{enumerate}
\item
Consider the solutions of CG, transform them into the FG form of the metric, determine $\gamma_{ij}^{\ms{(0)}}$, bring it to the form of the Minkowski metric, determine $\gamma^{(1)}_{ij}$, and classify the solutions according to the KVs preserved by (\ref{eq:nloke}) for the given $\gamma_{ij}^{(1)}$.
That will determine the subalgebra of the conformal algebra preserved by the CG solution, and the generators that define the dual field theory at the boundary according to the AdS/CFT prescription.
\item
Consider the subalgebras realised by imposing particular demands on the $\gamma_{ij}^{(1)}$ term in the metric. That procedure provides information about the asymptotic solutions of CG and their behaviour, and based on them one can investigate whether global CG solutions are reachable.
\end{enumerate}
In order to perform either of the above analyses one has to solve equation (\ref{eq:nloke}) for $\gamma^{\ms{(1)}}_{ij}$. The possible solutions, and the subgroups of the conformal algebra that can be realized, are not built only from the Killing vectors written above. One can take any linear combination of the above KVs
\begin{align}
\xi^{lc}&=a_0\xi^{(0)}+a_1\xi^{(1)}+a_2\xi^{(2)}+a_3\xi^{(3)}+a_4\xi^{(4)}+a_5\xi^{(5)}+a_6\xi^{(6)}+a_7\xi^{(7)}\nonumber \\ &+a_8\xi^{(8)}+a_9\xi^{(9)} \label{lc}
\end{align}
and consider whether there is a $\gamma^{(1)}_{ij}$ that satisfies equation (\ref{eq:nloke}) for such a combination of KVs. Here we have denoted the linearly combined KV by $\xi^{lc}$. One can as well take the opposite approach: impose a condition on the $\gamma_{ij}^{(1)}$ matrix and consider whether there is a set of linearly combined KVs that satisfies (\ref{eq:nloke}). That set of linearly combined KVs will then form a subalgebra of the conformal algebra.
\section{Coordinate Analysis of the $\gamma_{ij}^{(1)}$}
Let us first analyse the form of the $\gamma_{ij}^{(1)}$ matrix that we can obtain depending on the coordinates appearing in it, and simultaneously the behaviour of the KVs.
\noindent We can demand from the $\gamma_{ij}^{(1)}$ matrix to be
\begin{itemize}
\item constant
\item dependent on one coordinate
\item dependent on two coordinates
\item dependent on three coordinates.
\end{itemize}
First, we focus on the form of the matrix depending on which of the KVs from the conformal algebra are conserved, and inspect the symmetries that appear in $\gamma_{ij}^{(1)}$. The importance of this may seem questionable compared to solving the partial differential equations directly; however, in doing so we encounter a system of coupled partial differential equations of up to three unknowns, which can be reduced to one partial differential equation of fourth order dependent on three coordinates. This system may be solved by recognising the symmetries that can be implemented into the equation to make it solvable.
\subsection{Constant $\gamma_{ij}^{(1)}$}
The first requirement \begin{equation}\partial_{k}\gamma_{ij}^{(1)}=0\label{transcoord}\end{equation} is satisfied for all the translation generators when the $\gamma_{ij}^{(1)}$ matrix is of the arbitrary constant form
\begin{align}
\left(
\begin{array}{ccc}
c_1 & c_2 & c_3 \\
c_2 & c_4 & c_5 \\
c_3 & c_5& c_1-c_4 \\
\end{array} \right). \label{g1t} \end{align}
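The $(3,3)$ entry in (\ref{g1t}) is not an independent parameter: it is fixed by the tracelessness of $\gamma^{(1)}_{ij}$ with respect to the flat background (\ref{eqga0}),
\begin{equation}
\eta^{ij}\gamma^{(1)}_{ij}=-c_1+c_4+\gamma^{(1)}_{yy}=0
\qquad\Longrightarrow\qquad
\gamma^{(1)}_{yy}=c_1-c_4.
\end{equation}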
The requirement that one of the Lorentz rotations conserves the matrix leads to the matrices
\begin{align} \begin{array}{cc}
\left(\begin{array}{ccc} -c_1 & 0 & 0 \\ 0 & c_1 &0 \\ 0 & 0 & -2c_1 \end{array}\right) \text{ for } \xi^{(3)}, &
\left(\begin{array}{ccc} \frac{c_1}{2} & 0 & 0 \\ 0 & c_1 &0 \\ 0 & 0 & -\frac{c_1}{2} \end{array}\right)
\text{ for } \xi^{(4)}, \end{array} \label{trans3rb} \end{align} \begin{align}
\left(\begin{array}{ccc} 2c_1 & 0 & 0 \\ 0 & c_1 &0 \\ 0 & 0 & c_1 \end{array}\right) \text{ for } \xi^{(5)} \text{ conserved. } \label{trans3rot1}
\end{align}
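As an illustration, for $\xi^{(5)}=y\partial_x-x\partial_y$ and the constant matrix (\ref{trans3rot1}) one has $\partial_x\xi^{(5)y}=-1$ and $\partial_y\xi^{(5)x}=1$, so the only potentially nonvanishing component of the Lie derivative is
\begin{equation}
\left(\pounds_{\xi^{(5)}}\gamma^{(1)}\right)_{xy}=\gamma^{(1)}_{xx}\,\partial_y\xi^{(5)x}+\gamma^{(1)}_{yy}\,\partial_x\xi^{(5)y}=c_1-c_1=0,
\end{equation}
so (\ref{eq:nloke}) is satisfied with $\mathcal{D}_k\xi^{(5)k}=0$ and $\omega^{(1)}=0$.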
The subalgebra conserved for $\xi^{(5)}$ is obtained by bringing the flat MKR solution to the FG form, about which we say more in the chapter ``MKR Solution''.
Of the remaining KVs, neither the SCT KVs nor the dilatation KV conserves the constant $\gamma_{ij}^{(1)}$.
\subsection{Algebra with Five Killing Vectors}
A constant $\gamma_{ij}^{(1)}$ matrix allows one to find the subalgebra with the maximal number of KVs, a five-dimensional subalgebra. For this one needs to use the linearly combined KVs (\ref{lc}) in the next-to-leading order Killing equation (\ref{eq:nloke}). Does an algebra with more KVs exist? One can inspect that straightforwardly: take constant coefficients in $\gamma^{(1)}_{ij}$, make $\gamma_{ij}^{(1)}$ traceless and insert it into (\ref{eq:nloke}).
The computational time, however, does not allow a direct evaluation, so let us go around that.
A constant $\gamma_{ij}^{(1)}$ automatically conserves the three translational KVs. If the maximal subalgebra consists of five KVs, the remaining two are formed from the other seven CKVs. Set one of these seven CKVs to zero. If that KV enters a new linearly combined KV belonging to a bigger subalgebra, the maximal subalgebra one finds contains $N-1$ and not $N$ KVs. Obtaining the full set of solutions, we will be able to find all the new KVs except the one that would have been formed had we included this one.
Concretely, since a KV of Lorentz rotations enters the new KV of the five-KV subalgebra, we set one of the Lorentz rotations to zero. Explicit solution of (\ref{eq:nloke}) for constant $\gamma^{(1)}_{ij}$ then leads to the maximal number of KVs that form a subalgebra for constant $\gamma^{(1)}_{ij}$.
Let us focus on the particular case of subalgebra with 5 KVs.
The Killing vectors that define it, consist of three translational KVs and two additional KVs made from dilation and Lorentz rotations.
From the set of partial differential equations (PDEs), once $\gamma_{ij}^{(1)}$ is set constant, we are able to form three analogous conserved matrices with corresponding new KVs.
For the matrix
\begin{equation}
\gamma^{(1)}_{ij}=\left(
\begin{array}{ccc}
c & c & 0 \\
c & c & 0 \\
0 & 0 & 0 \\
\end{array}
\right)\label{five}
\end{equation}
two new Killing vectors are
\begin{align}
\chi^{(1)}&=\left(a_6 t-\frac{a_6 x}{2},\,a_6 x-\frac{a_6 t}{2},\,a_6 y\right) &
\chi^{(2)}&=\left(-a_5 y,\,a_5 y,\,-a_5 t-a_5 x\right).
\end{align}
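The two new KVs are indeed combinations of the dilatation and the Lorentz rotations, as stated above. As an illustration, assuming the standard flat-space component forms $\xi^{d}=(t,x,y)$, $L_{tx}=(x,t,0)$, $L_{ty}=(y,0,t)$ and $L_{xy}=(0,y,-x)$ (these explicit conventions are our assumption here), setting the coefficients to one gives
\begin{align}
\chi^{(1)}&=\left(t-\tfrac{x}{2},\,x-\tfrac{t}{2},\,y\right)=\xi^{d}-\tfrac{1}{2}L_{tx}, &
\chi^{(2)}&=\left(-y,\,y,\,-t-x\right)=-L_{ty}+L_{xy}.\nonumber
\end{align}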
Permuting the combination of the original KVs that form the new ones we obtain the matrices
\begin{align}
\gamma^{(1)}_{ij}&=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & c & i c \\
0 & i c & -c \\
\end{array}
\right), & \gamma^{(1)}_{ij}&=\left(
\begin{array}{ccc}
-c & 0 & c \\
0 & 0 & 0 \\
c & 0 & -c \\
\end{array}
\right)\label{five2}\end{align}for the KVs
\begin{align}
\chi^{(1)}&=\left(2 i a_5 t,\,a_5 y+2 i a_5 x,\,-a_5 x+2 i a_5 y\right) &
\chi^{(2)}&=\left(a_3 x+i a_3 y,\,a_3 t,\,i a_3 t\right),
\end{align}
and
\begin{align}
\chi^{(1)}&=\left(-a_5 x,\,-a_5 t+a_5 y,\,-a_5 x\right) &
\chi^{(2)}&=\left(a_6 t+\frac{a_6 y}{2},\,a_6 x,\,a_6 y+\frac{a_6 t}{2}\right),
\end{align}respectively. These subalgebras close; we demonstrate the closing on the third example. Setting the $a$ coefficients to one, the commutators form the algebra
\begin{align}
[\xi^{(0)},\chi^{(1)}]&=-\xi^{(2)} & [\xi^{(1)},\chi^{(1)}]&=-\xi^{(0)}-\xi^{(2)} & [\xi^{(2)},\chi^{(1)}]&=\xi^{(2)} \nonumber \\
[\xi^{(0)},\chi^{(2)}]&=\xi^{(0)}+\frac{\xi^{(2)}}{2} & [\xi^{(1)},\chi^{(2)}]&=\xi^{(1)} & [\xi^{(2)},\chi^{(2)}]&=\frac{\xi^{(0)}}{2}+\xi^{(2)} \nonumber \\
[\chi^{(1)},\chi^{(2)}]&=-\frac{1}{2}\chi^{(1)}.
\end{align}
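As a check of one of these brackets: assuming the translational KVs act as coordinate derivatives, $\xi^{(0)}=\partial_t$ and $\xi^{(2)}=\partial_y$ (an assumption about the conventions), the vector-field commutator $[X,Y]^{i}=X^{j}\partial_{j}Y^{i}-Y^{j}\partial_{j}X^{i}$ applied to the third example gives
\begin{equation}
[\xi^{(0)},\chi^{(2)}]^{i}=\partial_{t}\left(t+\tfrac{y}{2},\,x,\,y+\tfrac{t}{2}\right)^{i}=\left(1,0,\tfrac{1}{2}\right)^{i}=\xi^{(0)i}+\tfrac{1}{2}\xi^{(2)i},
\end{equation}
reproducing the bracket of $\xi^{(0)}$ with $\chi^{(2)}$ listed above.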
\noindent The generators arrange into the generators of the similitude algebra, one of the largest subalgebras of the conformal algebra, about which we say more below. Naming
\begin{equation}
P_0=-\xi^{(0)},P_1=\xi^{(1)},P_2=\xi^{(2)},F=\xi^{(6)},K_1=\xi^{(3)}, K_2=\xi^{(4)},L_3=\xi^{(5)}\label{simid}
\end{equation}
we obtain the so-called "$a_{5,4}$" subalgebra $(F+\frac{1}{2}K_2,-K_1+L_3,P_0,P_1,P_2)$ of the three-dimensional extended Poincare algebra
\begin{align}
[\xi^d,\xi^t_j]&=-\xi^t_j\\
[\xi^t_l,L_{ij}]&=-(\eta_{li}\xi^t_j-\eta_{lj}\xi^t_i)\\
[L_{ij},L_{mj}]&=L_{im},
\end{align}
for the "$a_{5,4}$" according to the classification of Patera et al. \cite{Patera:1976my}.
\section{ $\gamma_{ij}^{(1)}$ Dependent on One Coordinate}
In the above chapter, the dependency on the particular KV of Lorentz rotations could be observed from the components of the $\gamma^{(1)}_{ij}$ matrix. The situation here is analogous, supplemented with the dependency on the coordinates.
One notices that a translation is conserved in the direction of a coordinate on which $\gamma_{ij}^{(1)}$ does not depend, i.e. the partial derivative of $\gamma_{ij}^{(1)}$ with respect to that direction vanishes. A $\gamma_{ij}^{(1)}$ that conserves two Ts, e.g. $\xi^{(0)}$ and $\xi^{(2)}$, is given by
\begin{equation}\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\text{$\gamma_{11}$}(x) & \text{$\gamma_{12}$}(x) & \text{$\gamma_{13}$}(x) \\
\text{$\gamma_{12}$}(x) & \text{$\gamma_{22}$}(x) & \text{$\gamma_{23}$}(x) \\
\text{$\gamma_{13}$}(x) & \text{$\gamma_{23}$}(x) & \text{$\gamma_{11}$}(x)-\text{$\gamma_{22}$}(x) \\
\end{array}
\right).\label{eq2t}\end{equation}
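The coordinate dependence in (\ref{eq2t}) can be understood with a minimal sketch, assuming the NLO condition acts on $\gamma^{(1)}_{ij}$ through the Lie derivative along the KV. For a KV with constant components, $\xi^{k}=\mathrm{const}$, the inhomogeneous terms drop out,
\begin{equation}
\mathcal{L}_{\xi}\gamma^{(1)}_{ij}=\xi^{k}\partial_{k}\gamma^{(1)}_{ij}+\gamma^{(1)}_{kj}\partial_{i}\xi^{k}+\gamma^{(1)}_{ik}\partial_{j}\xi^{k}=\xi^{k}\partial_{k}\gamma^{(1)}_{ij}=0,
\end{equation}
so conserving $\xi^{(0)}=\partial_{t}$ and $\xi^{(2)}=\partial_{y}$ removes the $t$ and $y$ dependence and leaves the components functions of $x$ only.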
If we want to conserve two translations (keep the maximal number of translational KVs) and include a KV of Lorentz rotations, the conserved KV of Lorentz rotations will be the one that does not contain the coordinate that appears in $\gamma_{ij}^{(1)}$. For $\xi^{(0)}$ and $\xi^{(2)}$, $\xi^{(0)}$ and $\xi^{(1)}$, and $\xi^{(1)}$ and $\xi^{(2)}$, together with one KV of Lorentz rotations, the $\gamma_{ij}^{(1)}$ matrices take, respectively, a form analogous to the constant case
\begin{align} \begin{array}{cc}
\left(\begin{array} {ccc} c_1(x) & 0 & 0 \\ 0 & 2c_1(x) &0 \\ 0 & 0 & -c_1(x) \end{array}\right) \text{ for } \xi^{(4)}, &
\left(\begin{array} {ccc} -c_1(y) & 0 & 0 \\ 0 & c_1(y) &0 \\ 0 & 0 & -2c_1(y) \end{array}\right)
\text{ for } \xi^{(3)},\end{array} \end{align} \begin{align}
\left(\begin{array} {ccc} 2c_1(t) & 0 & 0 \\ 0 & c_1(t) &0 \\ 0 & 0 & c_1(t) \end{array}\right) \text{ for } \xi^{(5)} \label{dep1}
\end{align} and form the two-dimensional Poincare algebra (two Ts and one Lorentz rotation).
A $\gamma_{ij}^{(1)}$ matrix that conserves $\xi^{(5)}$ and contains the coordinate $y$ is not allowed by the NLO KE: the obtained condition requires $\gamma_{ij}^{(1)}$ to be constant. The situation is analogous for $\xi^{(4)}$ and $\xi^{(3)}$ with permuted coordinates.
To include dilatations, $\xi^{(6)}$, the NLO KE requires a matrix $\gamma_{ij}^{(1)}=\frac{c_{ij}}{x_i}$ for $x_{i}=t,x,y$. Such a $\gamma_{ij}^{(1)}$ conserves the two-dimensional extended Poincare algebra (two Ts, a Lorentz rotation and dilatations), e.g. $\xi^{(0)},\xi^{(2)},\xi^{(4)},\xi^{d}$:
\begin{align}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{c}{2 x} & 0 & 0 \\
0 & \frac{c}{x} & 0 \\
0 & 0 & -\frac{c}{2 x} \\
\end{array}
\right) \label{poind}
\end{align}
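One can check directly that (\ref{poind}) is consistent with the two conserved translations: its components depend on $x$ only, and the diagonal entries obey the same relation as in (\ref{eq2t}),
\begin{equation}
\gamma^{(1)}_{33}=\gamma^{(1)}_{11}-\gamma^{(1)}_{22}=\frac{c}{2x}-\frac{c}{x}=-\frac{c}{2x}.
\end{equation}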
\noindent To include SCTs, the allowed solution is only $\gamma^{(1)}_{ij}=0$.
\section{ $\gamma_{ij}^{(1)}$ Dependent on Two Coordinates}
Depending on the conserved direction of translations, the realised $\gamma^{(1)}_{ij}$ matrix depends on the coordinates of the remaining two directions. The condition for the conservation of translations (\ref{transcoord}), i.e. the vanishing of the derivative with respect to the coordinate of the conserved direction, requires $\gamma^{(1)}_{ij}$ to be constant in that coordinate.
$\gamma_{ij}^{(1)}$ can depend on all three coordinates in a specific linear combination, for which the KVs of translations are correspondingly linearly combined, leading to conclusions analogous to those for the original KVs. A general $\gamma^{(1)}_{ij}$ dependent on two coordinates conserves only one translation. If we want to conserve the translation in the $t$ direction, $\xi^{(0)}$,
$\gamma_{ij}^{(1)}$ takes the form
\begin{equation} \gamma^{(1)}_{ij}=\left(
\begin{array}{ccc}
\text{$\gamma_{11}$}(x,y) & \text{$\gamma_{12}$}(x,y) & \text{$\gamma_{13}$}(x,y) \\
\text{$\gamma_{12}$}(x,y) & \text{$\gamma_{22}$}(x,y) & \text{$\gamma_{23}$}(x,y) \\
\text{$\gamma_{13}$}(x,y) & \text{$\gamma_{23}$}(x,y) & \text{$\gamma_{11}$}(x,y)-\text{$\gamma_{22}$}(x,y) \\
\end{array}
\right). \label{eq1t} \end{equation}
We can add to it
\begin{enumerate} \item
one Lorentz rotation, for which $\gamma_{ij}^{(1)}$ needs to be
\begin{align}
\left(\begin{array} {ccc} \frac{1}{2}f\left[\frac{1}{2}\bigl(-t^2+x^2\bigr)\right]& 0 & 0 \\ 0 & -\frac{1}{2}f\left[\frac{1}{2}\bigl(-t^2+x^2\bigr)\right] &0 \\ 0 & 0 & f\left[\frac{1}{2}\bigl(-t^2+x^2\bigr)\right] \end{array}\right) & \text{ for } \xi^{(3)},\\
\left(\begin{array} {ccc} f\left[\frac{1}{2}\bigl(-t^2+y^2\bigr)\right]& 0 & 0 \\ 0 & 2f\left[\frac{1}{2}\bigl(-t^2+y^2\bigr)\right] &0 \\ 0 & 0 & -f\left[\frac{1}{2}\bigl(-t^2+y^2\bigr)\right] \end{array}\right) & \text{ for } \xi^{(4)}, \\
\left(\begin{array} {ccc} f\left[\frac{1}{2}\bigl(x^2+y^2\bigr)\right]& 0 & 0 \\ 0 & \frac{1}{2}f\left[\frac{1}{2}\bigl(x^2+y^2\bigr)\right] &0 \\ 0 & 0 & \frac{1}{2}f\left[\frac{1}{2}\bigl(x^2+y^2\bigr)\right] \end{array}\right) & \text{ for } \xi^{(5)}. \label{trots}
\end{align}
To conserve two KVs of Lorentz rotations, the PDEs require a $\gamma_{ij}^{(1)}$ of the form (\ref{dep1}),
\item
while to keep dilatations, the components of $\gamma^{(1)}_{ij}$ need to be \begin{equation}\gamma^{(1)}_{ij}=\frac{b_{ij}\left(\frac{x_{j}}{x_{i}}\right)}{x_{i}}+\frac{c_{ij}\left(\frac{x_{i}}{x_{j}}\right)}{x_{j}}\label{dilat2},\end{equation} with $i,j=t,x,y$ for $i\neq j$.
The functional dependency of the latter $\gamma_{ij}^{(1)}$ allows solving the PDEs for one more KV.
\end{enumerate}
The two cases we considered form
\begin{itemize}
\item a (trivial) Abelian algebra for one translation and one rotation,
\item and likewise an Abelian algebra for one translation and one dilatation.
\end{itemize}
Dependency on two coordinates also allows a $\gamma_{ij}^{(1)}$ matrix that conserves SCTs. One can find it by solving (\ref{nloke}) for SCTs. We will present, on one example, a way to solve (\ref{nloke}) and obtain the desired KVs, in particular one KV of SCTs, one Lorentz rotation, the dilatation and a translation.
It is convenient to start with the KV of translations. A translation is conserved in the direction on which the components of $\gamma_{ij}^{(1)}$ do not depend. We include a KV of translations by choosing the components to depend on the remaining two coordinates.
For simplicity we set the components $\gamma_{13}^{(1)}$ and $\gamma_{23}^{(1)}$ to zero. Then we compute the set of equations for the KVs of dilatations and SCTs, and after that for the Lorentz rotations. The order is not important; however, cleverly choosing the order of the equations to solve can simplify the computation.
If one of the KVs we want to conserve is the dilatation, it is useful to solve those equations first, because they give the particular form (\ref{dilat2}) that simplifies the further PDEs.
The $\gamma_{ij}^{(1)}$ that conserves the $y$ translation, the rotation around the $y$ axis, the dilatation and the special conformal transformation in the $y$ direction is
\begin{equation}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
-\frac{\left(t^2+2 x^2\right) c_2}{3 ((t-x) (t+x))^{3/2}} & \frac{t x c_2}{((t-x) (t+x))^{3/2}} & 0 \\
\frac{t x c_2}{((t-x) (t+x))^{3/2}} & -\frac{\left(2 t^2+x^2\right) c_2}{3 ((t-x) (t+x))^{3/2}} & 0 \\
0 & 0 & \frac{c_2}{3 \sqrt{(t-x) (t+x)}} \\
\end{array}
\right).\label{scty}
\end{equation}
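As a consistency check, the matrix (\ref{scty}) obeys the same relation among the diagonal components as (\ref{eq2t}) and (\ref{eq1t}): with $(t-x)(t+x)=t^2-x^2$,
\begin{equation}
\gamma^{(1)}_{11}-\gamma^{(1)}_{22}=\frac{\left(2t^2+x^2\right)-\left(t^2+2x^2\right)}{3\left((t-x)(t+x)\right)^{3/2}}\,c_2=\frac{c_2}{3\sqrt{(t-x)(t+x)}}=\gamma^{(1)}_{33}.
\end{equation}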
The set of partial differential equations that leads to $\gamma_{ij}^{(1)}$ (\ref{scty}) is given in the appendix: Classification.
\section{$\gamma_{ij}^{(1)}$ Dependent on Three Coordinates}
The condition $\partial_{k}\gamma_{ij}^{(1)}=0$ shows that for a $\gamma_{ij}^{(1)}$ dependent on all three coordinates, translations are not realised.
A $\gamma_{ij}^{(1)}$ dependent on three coordinates that conserves Lorentz rotations is analogous to the $\gamma_{ij}^{(1)}$ dependent on two coordinates, with the difference that the dependency on the third coordinate appears as a function multiplying the function on the diagonal. The components of a
$\gamma_{ij}^{(1)}$ that conserves dilatations are $\gamma^{(1)}_{ij}=\frac{a_{ij}\left(\frac{x}{t},\frac{y}{t}\right)}{t}+\frac{b_{ij}\left(\frac{x}{y},\frac{t}{y}\right)}{y}+\frac{c_{ij}\left(\frac{t}{x},\frac{y}{x}\right)}{x}$, with $i,j=t,x,y$ for $i\neq j$.
Two important 4KV subalgebras that require three coordinates to define $\gamma_{ij}^{(1)}$ are the subalgebra with three Lorentz rotations, and the subalgebra with three SCTs and one Lorentz rotation.
From the analogy between translations and SCTs one may notice that the subalgebra with three SCTs and one Lorentz rotation has an algebraic structure analogous to the one with three translations and one Lorentz rotation, i.e. the MKR solution (with $\xi^{(5)}$ as the fourth KV). That provides a basis for the search for full solutions of CG: one could expect a full solution of CG with three SCTs and a rotation as a global solution, analogous to MKR.
The algebra of three KVs of Lorentz rotations is conserved by the $\gamma_{ij}^{(1)}$ obtained from the PDEs given in the appendix: Classification. We solve the PDEs by expressing one component of $\gamma_{ij}^{(1)}$ in terms of the others until they reduce to one PDE, $LHS=RHS$, with
\begin{align}
LHS&=(x-y) \gamma_{11}^{(0,0,1)}(t,x,y)+6 \gamma_{11}(t,x,y)\\
RHS&=\left(x^2+y^2\right) \gamma_{11}^{(2,0,0)}(t,x,y)+(x+y) \gamma_{11}^{(0,1,0)}(t,x,y)\\ \nonumber&+t [t \left(\gamma_{11}^{(0,0,2)}(t,x,y)+\gamma_{11}^{(0,2,0)}(t,x,y)\right)\\ \nonumber &+2 \left(\gamma_{11}^{(1,0,0)}(t,x,y)+y \gamma_{11}^{(1,0,1)}(t,x,y)+x \gamma_{11}^{(1,1,0)}(t,x,y)\right)] \nonumber.
\end{align}
To solve it one may use numerical methods, or infer the solution from the symmetries of the equation.
The latter approach, with the assumption $\gamma_{11}^{(1)}=c\left(2 t^2+x^2+y^2\right)$, leads to
\begin{equation}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
c \left(2 t^2+x^2+y^2\right) & -3 c t x & -3 c t y \\
-3 c t x & c \left(t^2+2 x^2-y^2\right) & 3 c x y \\
-3 c t y & 3 c x y & c \left(t^2-x^2+2 y^2\right) \\
\end{array}
\right)\label{only3rot}
\end{equation}
where $c$ is an arbitrary parameter.
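One can verify the ansatz directly. Substituting $\gamma_{11}=c\left(2t^2+x^2+y^2\right)$, with derivatives $\gamma_{11}^{(1,0,0)}=4ct$, $\gamma_{11}^{(0,1,0)}=2cx$, $\gamma_{11}^{(0,0,1)}=2cy$, $\gamma_{11}^{(2,0,0)}=4c$, $\gamma_{11}^{(0,2,0)}=\gamma_{11}^{(0,0,2)}=2c$ and vanishing mixed derivatives, both sides reduce to the same polynomial,
\begin{align}
LHS&=(x-y)\,2cy+6c\left(2t^2+x^2+y^2\right)=12ct^2+6cx^2+4cy^2+2cxy,\nonumber\\
RHS&=4c\left(x^2+y^2\right)+2cx(x+y)+t\left[4ct+8ct\right]=12ct^2+6cx^2+4cy^2+2cxy,
\end{align}
so the ansatz indeed solves $LHS=RHS$.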
To obtain the subalgebra with SCTs, algebraically analogous to the MKR solution, one needs to solve the system of PDEs (see appendix: Classification). Analogously to the $\gamma_{ij}^{(1)}$ that conserves Lorentz rotations, we compute the PDEs, expressing one component in terms of the others. There is one convenient PDE solved by $\gamma_{11}^{(1)}=\frac{c_1\left(\frac{x}{t},\frac{y}{t}\right)}{t^2}$ that reduces the number of PDEs. The simplest one can be written using a change of coordinates $x\rightarrow z t$ and $y\rightarrow q t$, where we introduce two new coordinates $z$ and $q$. The equation then reads
\begin{align}
0&=24 q \left(q^2-z^2-2\right) c_1(z,q)+\left(q^2+z^2-1\right) \big(q^4 c_1{}^{(0,3)}(z,q) \nonumber \\&+2 q^2 z^2 c_1{}^{(0,3)}(z,q)+12 q \left(q^2+z^2-1\right) c_1{}^{(0,2)}(z,q)\nonumber \\&+12 \left(3 q^2-z^2-2\right) c_1{}^{(0,1)}(z,q)-6 z \left(q^2+z^2-1\right) c_1{}^{(1,1)}(z,q)\nonumber \\&-2 q^2 c_1{}^{(0,3)}(z,q)+z^4 c_1{}^{(0,3)}(z,q)-2 z^2 c_1{}^{(0,3)}(z,q)-24 q z c_1{}^{(1,0)}(z,q)\nonumber \\&+c_1{}^{(0,3)}(z,q)\big)
\end{align}
where there is no dependency on $t$. This is a third-order PDE that can be solved numerically or by analysing the symmetries, which does not lead to the most general solution.
Based on the analysis of the symmetries we obtain
\begin{align}
\gamma^{(1)}_{11}&=-\frac{\left(t^4+4 \left(x^2+y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_1}{\left(t^2-x^2-y^2\right)^3} \nonumber \\
\gamma^{(1)}_{12}&=\frac{3 t^2 \sqrt{\frac{x^2}{t^2}} \left(t^2+x^2+y^2\right) c_1}{\left(t^2-x^2-y^2\right)^3}\nonumber \\
\gamma_{13}^{(1)}&=\frac{3 t y \left(t^2+x^2+y^2\right) c_1}{\left(t^2-x^2-y^2\right)^3} \nonumber \\
\gamma_{22}^{(1)}&=-\frac{\left(t^4+2 \left(5 x^2-y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_1}{2 \left(t^2-x^2-y^2\right)^3} \nonumber\\
\gamma_{23}^{(1)}&=-\frac{6 t^3 \sqrt{\frac{x^2}{t^2}} y c_1}{\left(t^2-x^2-y^2\right)^3} \nonumber \\
\gamma_{33}^{(1)}&=-\frac{\left(t^4-2 \left(x^2-5 y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_1}{2 \left(t^2-x^2-y^2\right)^3}\label{rsct}
\end{align}
the solution that also satisfies equation (\ref{nloke}) for the KV of rotation, which makes it algebraically analogous to the MKR solution.
Next, we classify and find the $\gamma_{ij}^{(1)}$ matrices for subalgebras that can be formed from the generators of the CA that are not allowed to combine linearly into new KVs. Then, we consider realisations of $\gamma_{ij}^{(1)}$ for which it is allowed to use linearly combined KVs. An interesting research direction, which exceeds the scope of our analysis, is to inspect the symmetries of the $\gamma_{ij}^{(1)}$ matrices allowed by certain KVs, and to compare them to the $\gamma_{ij}^{(1)}$ matrix allowed by the combination of those KVs.
\section{Classification According to the Generators of the Conformal Group}
Whether a $\gamma_{ij}^{(1)}$ is allowed to be realised for a set of KVs is determined by the closing of the subalgebra of those KVs.
Let us consider an example of verifying whether a set of KVs closes into a subalgebra.
Assume we have one KV of translations, one KV of Lorentz rotations and one SCT.
The commutator of the SCT with the T closes into $[\xi_i^{sct},\xi_j^t]=2(\eta_{ij}\xi^d-L_{ij})$. For $i\neq j$ that implies $[\xi^{sct}_i,\xi_j^t]=-2L_{ij}$, which means the Lorentz rotation in the directions $i$ and $j$ needs to close with $\xi^t_i$ and $\xi^{sct}_j$ as well. Here, $[\xi_i^{t},L_{ij}]=\xi_j^t$ and $[\xi_i^{sct},L_{ij}]=\xi_{j}^{sct}$. That means that for the algebra to close we need the additional $\xi^{t}_{i}$ and $\xi^{sct}_{j}$, which leads to an algebra with six KVs: $\xi^t_i,\xi^t_j,\xi^{sct}_i,\xi^{sct}_j,L_{ij},\xi^d$. Since the algebra with one translation, one Lorentz rotation and one SCT does not close, a $\gamma_{ij}^{(1)}$ for such a combination of KVs does not exist. Rather, if one obtains a $\gamma_{ij}^{(1)}$ for those KVs, one needs to verify the equations for the KVs $\xi^d,\xi_i^t,\xi^{sct}_j$, which should be satisfied for the given $\gamma_{ij}^{(1)}$ as well.
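Schematically, the chain of brackets above can be summarized (up to signs and normalisations) as
\begin{align}
[\xi^{sct}_i,\xi^t_j]&\propto L_{ij}, & [\xi^t_i,L_{ij}]&\propto \xi^t_j, & [\xi^{sct}_i,L_{ij}]&\propto \xi^{sct}_j, & [\xi^t_i,\xi^{sct}_i]&\propto \xi^d,\nonumber
\end{align}
after which no new generators are produced, so the smallest closed algebra containing the initial three KVs is the six-dimensional one listed above.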
As mentioned earlier, complicated PDEs can be solved by recognising symmetries, which does not provide the most general solution.
For the subalgebra with one translation and one Lorentz rotation (e.g. for the T in the $t$ direction and a KV of rotations) one obtains the corresponding PDEs (presented in the appendix: Classification) (\ref{rotxy},\ref{transt}), whose solutions lead to two PDEs of the form:
\begin{align}
LHS&=x^2 c_2{}^{(0,2)}(x,y)+y^2 c_2{}^{(2,0)}(x,y)+c_2(x,y)+c_6\left(\frac{1}{2} \left(x^2+y^2\right)\right)\nonumber \\ RHS&=y c_2{}^{(0,1)}(x,y)+x \left(c_2{}^{(1,0)}(x,y)+2 y c_2{}^{(1,1)}(x,y)\right). \label{pdesym}
\end{align}
From the symmetries one would assume that $c_2=f\left[\frac{1}{2}(x^2+y^2)\right]$ solves equation (\ref{pdesym}), which is correct; however, (\ref{pdesym}) is a second-order linear PDE with two independent variables, for which solving methods are known. If one takes into account the known solutions that satisfy equation (\ref{pdesym}), together with the solution recognised from the symmetries, one obtains a more general $\gamma_{ij}^{(1)}$.
The solution concluded from the symmetries leads to a $\gamma_{ij}^{(1)}$ analogous to the one with constant components, only with the function $f\left(\frac{1}{2}(x^2+y^2)\right)$ on the diagonal, equation (\ref{trots}).
The latter form of the solution gives the off-diagonal terms as well, which depend on the solution added to $c_{2}(x,y)$; $\gamma_{ij}^{(1)}$ is then
\begin{align}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
c_4\left(\frac{1}{2} \left(x^2+y^2\right)\right) & a x+b y & a y-b x \\
a x+b y & \frac{1}{2} c_4\left(\frac{1}{2} \left(x^2+y^2\right)\right) & 0 \\
a y-b x & 0 & \frac{1}{2} c_4\left(\frac{1}{2} \left(x^2+y^2\right)\right) \\
\end{array}
\right)\label{trot}.
\end{align}
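As a cross-check, for $a=b=0$ the matrix (\ref{trot}) reduces to
\begin{equation}
\gamma_{ij}^{(1)}=\mathrm{diag}\left(c_4\!\left(\tfrac{1}{2}\left(x^2+y^2\right)\right),\,\tfrac{1}{2}c_4\!\left(\tfrac{1}{2}\left(x^2+y^2\right)\right),\,\tfrac{1}{2}c_4\!\left(\tfrac{1}{2}\left(x^2+y^2\right)\right)\right),
\end{equation}
which is precisely the diagonal form found for $\xi^{(5)}$ in (\ref{trots}), with $f=c_4$.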
To provide a transparent overview of the subalgebras realised for a particular $\gamma_{ij}^{(1)}$ we present them in tables. \newline\noindent
In the first column we write the original generators obtained from the leading order equation (\ref{lo}). We first list translations (Ts) and the $\gamma_{ij}^{(1)}$ that conserve them, then possible combinations of KVs with Ts, then rotations (Rs) and combinations with rotations, the dilatation (D) with the corresponding combinations, and special conformal transformations (SCTs) with their combinations. \newline\noindent
The second column states whether the subalgebra with the generators from the first column exists (closes), and the third column presents an example of a $\gamma_{ij}^{(1)}$, denoted with "(example)", that realises the subalgebra, stating the most general form of $\gamma_{ij}^{(1)}$ when given.
The fourth column gives the number of CKVs contained in the algebra.
The matrices $\gamma_{ij}^{(1)}$ next to which we write "(comment $number$)" are commented on in the text.
\begin{center}
\hspace{-0.65cm}\begin{tabular}{ |l | p{4.3 cm} | p{7.5 cm} | p{0.5cm}|}
\hline
Algebra & Name/existence(closing) & Realization & \\
\hline\hline
\hspace{0.18cm}1 T & $\exists$ & $\exists$: see equation (\ref{eq1t}) & 1 \\
\hspace{0.18 cm}2 T & $\exists$ & $\exists$: see equation (\ref{eq2t}) & 2\\
\hspace{0.18 cm}3 T & $\exists$ & $\exists$: see equation (\ref{g1t})& 3\\
\hspace{0.18 cm}1 T + 1 R & $[\xi^t_{l},L_{ij}]=\eta_{li}\xi^t_{j}-\eta_{lj}\xi^t_{i}$,
$\nexists$ for $l=i$ or $j$, $\exists$ for $l\neq i \neq j$ & (example): equation (\ref{trot}) & 2 \\
\hspace{0.18 cm}1T + D & $\exists$ &
$\exists$: see equation (\ref{dilat2}) &2\\
$\begin{array}{l}\text{1 T + 1 R}\\ \text{+ D}\end{array}$ & $\exists$ & $\left(
\begin{array}{ccc}
\frac{c_6}{\sqrt{x^2+y^2}} & \frac{x c_4+y c_5}{x^2+y^2} & \frac{y c_4-x c_5}{x^2+y^2} \\
\frac{x c_4+y c_5}{x^2+y^2} & \frac{c_6}{2 \sqrt{x^2+y^2}} & 0 \\
\frac{y c_4-x c_5}{x^2+y^2} & 0 & \frac{c_6}{2 \sqrt{x^2+y^2}} \\
\end{array}
\right)$ There exist analogous matrices for the translations in the two remaining directions that depend, for the translation in the $l$ direction on the coordinates $i\neq l$ and $j\neq l$ &3\\
&& (example) & \\
$\begin{array}{l}\text{1T + D}\\ \text{+1 SCT}\end{array}$& $[\xi_i^{sct},\xi_j^t]=2(\eta_{ij}\xi^d-L_{ij})$, $\exists$ for $i=j$; sl(2) &
$ \left(
\begin{array}{ccc}
\frac{f\left(\frac{x}{t}\right)}{t} & -\frac{3 x f\left(\frac{x}{t}\right)}{t^2+2 x^2} & 0 \\
-\frac{3 x f\left(\frac{x}{t}\right)}{t^2+2 x^2} & \frac{\left(2 t^2+x^2\right) f\left(\frac{x}{t}\right)}{t^3+2 x^2 t} & 0 \\
0 & 0 & \frac{\left(x^2-t^2\right) f\left(\frac{x}{t}\right)}{t^3+2 x^2 t} \\
\end{array}
\right)$ example for $i=j=y$
& 3 \\
$\begin{array}{l}\text{1 T + 1 R} \\ \text{+ D+1 SCT}\end{array}$ & $\exists$: $[\xi^t_i,\xi^{sct}_i]=2\xi^d$, $[\xi^t_l,L_{ij}]=0$, $[\xi^{sct}_l,L_{ij}]=0$ for $l\neq i$ and $l\neq j$;
sl(2)+u(1) & example for $\xi^t_y,\xi^{sct}_y,L_{xt},D$, see equation (\ref{scty}) &4\\
\hspace{0.18 cm}2 T + 1 R & $\exists$: \text{2d Poincare} & $\exists$ see equation (\ref{dep1}) &3\\
$\begin{array}{l}\text{2 T + 1 R}\\ \text{+ D}\end{array}$ & $\exists$: \text{2d Poincare +D} & $\exists$ see equation (\ref{poind}) &5\\
\hline
\end{tabular}
\end{center}
\begin{center}
\begin{table}
\hspace{-0.65cm}\begin{tabular}{ | l | p{1 cm} | p{9.8 cm} | p{0.5cm}|}
\hline
\hspace{0.18 cm}2 T + D & $\exists$ & $\exists$ $\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{c_1}{x} & \frac{c_2}{x} & \frac{c_3}{x} \\
\frac{c_2}{x} & \frac{c_4}{x} & \frac{c_5}{x} \\
\frac{c_3}{x} & \frac{c_5}{x} & \frac{c_1-c_4}{x} \\
\end{array}
\right)$ &6\\
$\begin{array}{l}\text{2 T + 1 R} \\ \text{+ D + 2 SCT}\end{array}$ & $\exists$ & $\nexists$: the requirement for 2 Ts restricts the components to dependency on one coordinate, in which case one can easily see that the system of equations does not close. &6\\
\hspace{0.18 cm}3 T + 1R & $\exists$: MKR & $\exists$ $\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
2 c & 0 & 0 \\
0 & c & 0 \\
0 & 0 & c \\
\end{array}
\right)$ &4\\
\hspace{0.18 cm}3 T + 3 R & $\exists$ & $\nexists$: the requirement for 3 Ts restricts the components of $\gamma_{ij}^{(1)}$ to be constant, in which case the equations for 3 Rs are not solvable. &6\\
$\begin{array}{c}\text{3 T + 3 R} \\ \text{+ D}\end{array} $& $\exists$ & $\nexists$ - the explanation is analogous to the one for 3T+3R&7\\
\hspace{0.18 cm}3 T + D & $\exists$ & $\nexists$ - the requirement for 3 Ts restricts the components of $\gamma_{ij}^{(1)}$ to be constant, in which case the equation for D is not solvable. & 4\\
\hline
\hspace{0.18 cm}1 R & $\exists$ & $\exists$ see equation (\ref{trot}) & 1 \\
\hspace{0.18 cm}3 R & $\exists$ & $\exists$ see equation (\ref{only3rot})& 3 \\
\hspace{0.18 cm}1 R+D & $\exists$ & $\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{c_1\left(\frac{x^2+y^2}{2 t^2}\right)}{t} & \frac{a x+b y}{t^2} & \frac{a y-b x}{t^2} \\
\frac{a x+b y}{t^2} & \frac{c_1\left(\frac{x^2+y^2}{2 t^2}\right)}{2 t} & 0 \\
\frac{a y-b x}{t^2} & 0 & \frac{c_1\left(\frac{x^2+y^2}{2 t^2}\right)}{2 t} \\
\end{array}
\right)$ &2 \\
\hspace{0.18 cm}3 R+D & $\exists$ &$\exists$ see equation (\ref{tn1})& 4 \\ \hline
\hspace{0.18 cm}1 R + 2 SCT & $\exists$ & $\exists$ see equation (\ref{eqq}) & 3\\
\hspace{0.18 cm}1 R + 3 SCT & $\exists$ & $\exists$ see equation (\ref{rsct}) & 4\\ \hline
\hspace{0.18 cm}3R+3SCT & $\exists$ & The equations are not solvable simultaneously & 3 \\ \hline
\hspace{0.18 cm}1 R+D+2 SCT & $\exists$ & $\exists$ see equation (\ref{t3}) & 4\\
$\begin{array}{c}\text{1 R+D}\\ \text{+3 SCT}\end{array}$ & $\exists$ & the system of equations does not close, except for $\gamma^{(1)}_{ij}=0$ & 5\\ \hline
$\begin{array}{c}\text{3 R+D}\\\text{+3 SCT} \end{array}$& $\exists$ & $\nexists$ up to now - the equations are not solvable simultaneously (the claim is valid without assumptions or simplifications) & 5\\
\hspace{0.18 cm}1 SCT & $\exists$ &$\exists$ see equation (\ref{t4}) & 1\\
\hspace{0.18 cm}2 SCT & $\exists$ & $\exists$ see equation (\ref{t5}) & 2\\
\hspace{0.18 cm}3 SCT &$\exists$ & & 3\\ \hline
\hspace{0.18 cm}1 SCT+D & $\exists$ & $\exists$ see equation (\ref{t6}) & 1\\
\hspace{0.18 cm}2 SCT+D & $\exists$ &$\exists$ see equation (\ref{t7}) & 2\\
\hspace{0.18 cm}3 SCT+D &$\exists$ & the system of equations does not close, except for $\gamma^{(1)}_{ij}=0$ & 3\\ \hline
\hline
\end{tabular}
\end{table}
\end{center}
The subalgebra with 3 R and D is realised by the $\gamma_{ij}^{(1)}$
\begin{align}
\gamma_{11}^{(1)}&= \frac{\left(2 t^2+x^2+y^2\right) c_2}{2 t^3 \left(-\frac{-t^2+x^2+y^2}{t^2}\right)^{3/2}} & &
\gamma_{12}^{(1)}=\frac{3 x c_2}{2 t^2 \left(-\frac{-t^2+x^2+y^2}{t^2}\right)^{3/2}}\nonumber \\
\gamma_{13}^{(1)}&= -\frac{3 y c_2}{2 t^2 \left(-\frac{-t^2+x^2+y^2}{t^2}\right)^{3/2}} &&
\gamma_{22}^{(1)}=\frac{\left(t^2+2 x^2-y^2\right) \sqrt{-\left(-t^2+x^2+y^2\right)} c_2}{2 \left(-t^2+x^2+y^2\right)^2}\nonumber \\
\gamma_{23}^{(1)}&=\frac{3 x y c_2}{2 t^3 \left(-\frac{-t^2+x^2+y^2}{t^2}\right)^{3/2}} &&
\gamma_{33}^{(1)}=\frac{\sqrt{-\left(-t^2+x^2+y^2\right)} \left(t^2-x^2+2 y^2\right) c_2}{2 \left(-t^2+x^2+y^2\right)^2}\label{tn1}
\end{align}
\noindent while the subalgebra that contains 1 R and 2 SCTs (the rotation and the SCTs in the $x$ and $y$ directions) is obtained for the $\gamma_{ij}^{(1)}$
\begin{align}
\gamma_{11}^{(1)}&= \frac{\left(t^4+4 \left(x^2+y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_2\left(\frac{-t^2+x^2+y^2}{t}\right)}{12 t^3} \nonumber \\ \nonumber
\gamma_{12}^{(1)}&= -\frac{x \left(t^2+x^2+y^2\right) c_2\left(\frac{-t^2+x^2+y^2}{t}\right)}{4 t^2} \\ \nonumber
\gamma_{13}^{(1)}&= -\frac{y \left(t^2+x^2+y^2\right) c_2\left(\frac{-t^2+x^2+y^2}{t}\right)}{4 t^2} \\ \nonumber
\gamma_{22}^{(1)}&=\frac{\left(t^4+2 \left(5 x^2-y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_2\left(\frac{-t^2+x^2+y^2}{t}\right)}{24 t^3} \\
\gamma_{23}^{(1)}&= -\frac{y \left(t^2+x^2+y^2\right) c_2\left(\frac{-t^2+x^2+y^2}{t}\right)}{4 t^2} \label{eqq},
\end{align}
\noindent here, $c_2$ is a function of $\frac{-t^2+x^2+y^2}{t}$. The subalgebra that realises 1 R, 2 SCTs and D is
\begin{align}
\gamma_{12}^{(1)}& = -\frac{x \left(t^2+x^2+y^2\right) c_3}{4 \left(-t^2+x^2+y^2\right)^2} &&
\gamma_{11}^{(1)}=\frac{\left(t^4+4 \left(x^2+y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_3}{12 t \left(-t^2+x^2+y^2\right)^2} \nonumber \\
\gamma_{13}^{(1)}&= -\frac{y \left(t^2+x^2+y^2\right) c_3}{4 \left(-t^2+x^2+y^2\right)^2} &&
\gamma_{22}^{(1)}= \frac{\left(t^4+2 \left(5 x^2-y^2\right) t^2+\left(x^2+y^2\right)^2\right) c_3}{24 t \left(-t^2+x^2+y^2\right)^2} \nonumber \\
\gamma_{23}^{(1)}&=\frac{t x y c_3}{2 \left(-t^2+x^2+y^2\right)^2}, && \label{t3}
\end{align}
\noindent where we solve (\ref{lo}) with $\xi^{(0)i}=D^{i}$ and with the $\gamma_{ij}^{(1)}$ of (\ref{eqq}) (which contains the function $c_2$) for $c_2$.
\noindent The example of a $\gamma_{ij}^{(1)}$ that realises 1 SCT (in the $y$ direction) is
\begin{align}
\gamma_{11}^{(1)}&= -\frac{\left(t^2+2 x^2\right) c_1\left(\frac{x}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 x} &&\gamma_{12}^{(1)}=\frac{c_1\left(\frac{x}{t},\frac{-t^2+x^2+y^2}{t}\right)}{t} \nonumber \end{align} \begin{align}
\gamma_{22}^{(1)}&= -\frac{\left(2 t^2+x^2\right) c_1\left(\frac{x}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 x} \nonumber \\ \gamma_{33}^{(1)}&=\frac{(t-x) (t+x) c_1\left(\frac{x}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 x} , \label{t4}
\end{align}
with $\gamma_{13}^{(1)}=\gamma_{23}^{(1)}=0$.
One can notice the function $c_1\left(\frac{x}{t},\frac{-t^2+x^2+y^2}{t}\right)$ that allows one to use the $\gamma_{ij}^{(1)}$ (\ref{t4}) in (\ref{lo}) and solve for further desired KVs.
(To avoid clutter we have given a $\gamma_{ij}^{(1)}$ (\ref{t4}) that is not of the most general form; the most general form of $\gamma_{ij}^{(1)}$ is given in the appendix: Classification.)
The $\gamma_{ij}^{(1)}$ that conserves the SCT in the $x$ direction is, similarly to (\ref{t4}),
\begin{align}
\gamma_{11}^{(1)}&= -\frac{\left(t^2+2 y^2\right) c_1\left(\frac{y}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 y}&& \gamma_{13}^{(1)}=\frac{c_1\left(\frac{y}{t},\frac{-t^2+x^2+y^2}{t}\right)}{t} \nonumber \\
\gamma_{22}^{(1)}&= \frac{(t-y) (t+y) c_1\left(\frac{y}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 y} && \gamma_{33}^{(1)}=-\frac{\left(2 t^2+y^2\right) c_1\left(\frac{y}{t},\frac{-t^2+x^2+y^2}{t}\right)}{3 t^2 y}, \label{sctx}
\end{align}where $\gamma_{12}^{(1)}$ and $\gamma_{23}^{(1)}$ are zero.
The $\gamma_{ij}^{(1)}$ that conserves the SCT in the $t$ direction, computed with simplifications analogous to those for the SCTs in the $x$ and $y$ directions, has a different form
\begin{align}
\gamma_{11}^{(1)}&=-\frac{e^{\tanh ^{-1}\left(\frac{t^2+x^2+y^2}{t^2-x^2-y^2}\right)} \left(x^2+y^2\right) c_1\left(\frac{y}{x},\log \left(-\frac{-t^2+x^2+y^2}{x}\right)\right)}{3 t x y}
\nonumber \\
\gamma_{22}^{(1)}&=\frac{e^{\tanh ^{-1}\left(\frac{t^2+x^2+y^2}{t^2-x^2-y^2}\right)} \left(x^2-2 y^2\right) c_1\left(\frac{y}{x},\log \left(-\frac{-t^2+x^2+y^2}{x}\right)\right)}{3 t x y}
\nonumber \\
\gamma_{23}^{(1)}&=\frac{e^{\tanh ^{-1}\left(\frac{t^2+x^2+y^2}{t^2-x^2-y^2}\right)} c_1\left(\frac{y}{x},\log \left(-\frac{-t^2+x^2+y^2}{x}\right)\right)}{\sqrt{t^2}}
\end{align}
\noindent $
\gamma_{12}^{(1)}=0$, $\gamma_{13}^{(1)}=0$, which is consistent with the Minkowski background metric.
The $\gamma_{ij}^{(1)}$ that realises 2 SCTs (the SCTs in the $y$ and $t$ directions) reads
\begin{align}
\gamma_{12}^{(1)}&=-\frac{(t+y) \left(x^2+(t+y)^2\right)}{2 x^2}&& \gamma_{11}^{(1)}=\frac{\left(x^2+(t+y)^2\right)^2}{4 x^3} \nonumber \\
\gamma_{13}^{(1)}&= -\frac{(t-x+y) (t+x+y) \left(x^2+(t+y)^2\right)}{4 x^3} &&\gamma_{22}^{(1)}=\frac{(t+y)^2}{x} \nonumber \\
\gamma_{23}^{(1)}&=\frac{(t+y) (t-x+y) (t+x+y)}{2 x^2} && \nonumber \\ \gamma_{33}^{(1)}&=\frac{(t-x+y)^2 (t+x+y)^2}{4 x^3} && \label{t5}
\end{align} while the $\gamma_{ij}^{(1)}$ that realises 1 SCT and D (the SCT in the $y$ direction) is
\begin{align}
\gamma_{11}^{(1)}&= \frac{t^2 y \left(t^2+x^2+y^2\right) c_9\left(\frac{x}{t}\right)}{2 x^2 \left(-t^2+x^2+y^2\right)^2} && \gamma_{12}^{(1)}=-\frac{t y \left(3 t^2+x^2+y^2\right) c_9\left(\frac{x}{t}\right)}{4 x \left(-t^2+x^2+y^2\right)^2}\nonumber \\
\gamma_{13}^{(1)}&=-\frac{t \left(t^4+6 y^2 t^2-x^4+y^4\right) c_9\left(\frac{x}{t}\right)}{8 x^2 \left(-t^2+x^2+y^2\right)^2} && \gamma_{22}^{(1)}= \frac{t^2 y c_9\left(\frac{x}{t}\right)}{\left(-t^2+x^2+y^2\right)^2}\nonumber \\
\gamma_{23}^{(1)}&=\frac{t^2 \left(t^2-x^2+3 y^2\right) c_9\left(\frac{x}{t}\right)}{4 x \left(-t^2+x^2+y^2\right)^2} && \gamma_{33}^{(1)}=\frac{t^2 y \left(t^2-x^2+y^2\right) c_9\left(\frac{x}{t}\right)}{2 x^2 \left(-t^2+x^2+y^2\right)^2} \label{t6}
\end{align}
and the $\gamma_{ij}^{(1)}$ that realises 2 SCTs and D (the SCTs in the $x$ and $y$ directions) is
\begin{equation}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{y \left(t^2+x^2+y^2\right) c_{10}}{2 \left(-t^2+x^2+y^2\right)^2} & -\frac{x y \left(3 t^2+x^2+y^2\right) c_{10}}{4 t \left(-t^2+x^2+y^2\right)^2} & -\frac{\left(t^4+6 y^2 t^2-x^4+y^4\right) c_{10}}{8 t \left(-t^2+x^2+y^2\right)^2} \\
-\frac{x y \left(3 t^2+x^2+y^2\right) c_{10}}{4 t \left(-t^2+x^2+y^2\right)^2} & \frac{x^2 y c_{10}}{\left(-t^2+x^2+y^2\right)^2} & \frac{x \left(t^2-x^2+3 y^2\right) c_{10}}{4 \left(-t^2+x^2+y^2\right)^2} \\
-\frac{\left(t^4+6 y^2 t^2-x^4+y^4\right) c_{10}}{8 t \left(-t^2+x^2+y^2\right)^2} & \frac{x \left(t^2-x^2+3 y^2\right) c_{10}}{4 \left(-t^2+x^2+y^2\right)^2} & \frac{y \left(t^2-x^2+y^2\right) c_{10}}{2 \left(-t^2+x^2+y^2\right)^2}
\end{array}
\right).\label{t7}
\end{equation}
Let us notice that the largest realised subalgebra consisting of original KVs of the CA is four dimensional.
The importance of the above analysis is that it provides the $\gamma_{ij}^{(1)}$ for each of the KVs. One can then anticipate the form of $\gamma_{ij}^{(1)}$ for subalgebras of the CA built from linearly combined KVs,
which can eventually lead to a global solution of CG.
\section{Patera et al. Classification}
The subalgebras (SAs) of the conformal algebra $o(3,2)$ have been classified in \cite{Patera:1976my}. They are formed from the generators of the conformal algebra (\ref{ca1},\ref{ca2}) or, in particular, from linear combinations of these generators. The subalgebras are
\begin{enumerate}
\item sim(2,1) is the similitude algebra that we encountered when discussing the constant $\gamma_{ij}^{(1)}$ matrix. It contains a five dimensional subalgebra for which we found a realised $\gamma_{ij}^{(1)}$. sim(2,1) contains the highest number of KVs, which is seven.
\item opt(2,1), the optical algebra, contains the same maximal number of generators as the similitude algebra.
\item $o(3)\oplus o(2)$ is the maximal compact subalgebra, with at most four generators.
\item $o(2)\oplus o(2,1)$ is a subalgebra that contains at most four generators, which are built from the KVs of the conformal algebra.
\item o(2,2) is a subalgebra with six as the highest number of generators in a SA.
\item o(3,1) defines the Lorentz group in four dimensions; it contains SAs with at most six generators, while it does not contain a SA with five.
\item o(2,1) is the irreducible SA with at most three generators.
\end{enumerate}
Let us define the nomenclature. We define a group $O(p,q)$, for integers $p$ and $q$ with $p\geq q\geq0$, as the closed linear group of all matrices $M$ of degree $p+q$ over the field of real numbers $\mathbb{R}$ that satisfy the matrix equation
\begin{equation}
MD_{p+q}M^T=D_{p+q},
\end{equation}
for $M^T$ the matrix transposed to $M$, and $D_{p+q}=\left(\begin{array}{cc}I_p & \\ & -I_q \end{array}\right)$ with $I_p$ and $I_q$ the identity matrices of degree $p$ and $q$.
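As a quick numerical sanity check of the defining relation $MD_{p+q}M^T=D_{p+q}$, the sketch below (our own illustration, not an element drawn from \cite{Patera:1976my}) verifies that a hyperbolic rotation lies in $O(1,1)$ while a pure rescaling does not:

```python
import numpy as np

def in_O(M, p, q, tol=1e-12):
    """Check the defining relation M D M^T = D with D = diag(I_p, -I_q)."""
    D = np.diag([1.0] * p + [-1.0] * q)
    return np.allclose(M @ D @ M.T, D, atol=tol)

# Illustrative element of O(1, 1): a boost with rapidity phi.
phi = 0.7
boost = np.array([[np.cosh(phi), np.sinh(phi)],
                  [np.sinh(phi), np.cosh(phi)]])

print(in_O(boost, 1, 1))          # True: a boost preserves diag(1, -1)
print(in_O(np.eye(2) * 2, 1, 1))  # False: a pure rescaling does not
```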
The groups that we need beside $O(p,q)$ are
\begin{enumerate}
\item SO(p,q), which consists of the elements $g$ of the $O(p,q)$ group with $\det g=1$;
\item $O_1(p,q)$, which consists of the elements $g$ of $O(p,q)$ that have $\text{spn}\, g=1$, where $\text{spn}$ is the {\it spinor norm}.
The spinor norm is defined to be $\text{spn}\,g=1$ if $g$ belongs to the identity component of $O(p,q)$, denoted $SO_0(p,q)$ (which simultaneously means $\det g=1$), or if $\det g=-1$ and the product of $g$ with the particular member $M_1=\left(\begin{array}{ccccc}1 & & & &\\& 1&&& \\ &&\ddots& & \\& &&1& \\ &&&& -1\end{array} \right)$ of $O(p,q)$ is not in $SO_0(p,q)$. Otherwise
the spinor norm is $\text{spn}\,g=-1$.
\end{enumerate}
We consider which linear combinations of the original generators realise $\gamma_{ij}^{(1)}$,
and focus on realisations of the $\gamma_{ij}^{(1)}$ matrices for the algebras with the highest numbers of generators: 7, 6, 5 and 4.
\subsection{sim(2,1) Algebra}
The seven generators of the similitude algebra can be identified with (\ref{simid}); however, this is not the only identification of the generators one can use, since an analogous identification can be obtained using the SCTs instead of the Ts.
We classify the realised subalgebras in the following table.
The first column denotes the name of the subalgebra obtained as a combination of the
known algebras. The second column denotes the name from Patera et al. \cite{Patera:1976my}, the third column lists the generators as denoted in \cite{Patera:1976my}, and the fourth column contains the $\gamma_{ij}^{(1)}$ that realises the subalgebra. The names of the subalgebras from Patera et al. \cite{Patera:1976my} carry two subscripts: the first defines the dimension of the subalgebra, while the second enumerates the subalgebras of the same dimension. Within each dimension, the decomposable subalgebras are listed first, followed by the indecomposable ones. A superscript, for example in $a_{4,8}^{a}$ or $a_{4,11}^{b}$, denotes an algebra that depends on a parameter, in which case we simultaneously write the range of the parameter (for example $b>0, \neq1$ for $a^b_{4,10}$).
If a single range is written, it is the same under $o(3,2)$ and under the identity component of the corresponding maximal subgroup (here $sim(2,1)$). In case the range under the maximal subgroup is larger than under $o(3,2)$, the larger range is denoted with square brackets; for example, in the case of $a_{4,8}^{\epsilon}$ one writes $\epsilon=1[\epsilon=\pm1]$, which means that $a^{-1}_{4,8}$ is conjugate to $a_{4,8}^1$ under $o(3,2)$ (and even $SO_0(3,2)$),
but not under the identity component of $sim(2,1)$.
\begin{center}
\begin{table}
\hspace{-0.5cm}\begin{tabular}{ | l | p{2.5 cm} | p{4.0 cm} | p{6cm} |}
\hline
\multicolumn{4}{|c|}{Realized subalgebras} \\
\hline
$\begin{array}{c}\text{ Name/ }\\ \text{ commutators}\end{array}$&Patera name&generators & Realisation \\ \hline\hline
&$a_{5,4}^a$ & $\begin{array}{l}F+\frac{1}{2}K_2,-K_1+L_3,\\P_0,P_1,P_2\end{array}$ & see equation (\ref{five2})
\\
&$a\neq0,\pm1$&$a=\frac{1}{2}$&\\
$\mathcal{R}\oplus o(3)$ &$a_{4,1}=b_{4,6}$ & $P_1\oplus\left\{K_2,P_0,P_2\right\}$ & see equation (\ref{trans3rb}) for $\xi^{(4)}$ \\
&$a_{4,2}$ & $P_0-P_2\oplus\left\{F-K_2;P_0+P_2,P_1\right\}$ & see equation (\ref{trans3rb}) for $\xi^{(3)}$ \\
$\begin{array}{c}
MKR \\ \mathcal{R}\oplus o(3)\end{array}$ &$a_{4,3}$ & $P_0\oplus\left\{L_3,P_1,P_2\right\}$ & see equation (\ref{trans3rot1}) for $\xi^{(5)}$ \\
&$a_{4,4}$ & $F\oplus\left\{K_1,K_2,L_3\right\}$ & see equation (\ref{tn1})\\
&$a_{4,5}$ & $\{F+K_2;P_0-P_2\}\oplus\left\{F-K_2,P_1\right\}$ &$ \left(
\begin{array}{ccc}
0 & \frac{c}{t-y} & 0 \\
\frac{c}{t-y} & 0 & \frac{c}{y-t} \\
0 & \frac{c}{y-t} & 0 \\
\end{array}
\right)$\\
&$a_{4,6}=b_{4,9}$ &
$ \begin{array}{c} \left\{ F+K_2,P_0-P_2 \right\}\oplus \\ \left\{F-K_2,P_0+P_2\right\} \end{array} $
&
$\left(
\begin{array}{ccc}
\frac{c}{x} & 0 & 0 \\
0 & \frac{2 c}{x} & 0 \\
0 & 0 & -\frac{c}{x} \\
\end{array}
\right)$, and (\ref{poind}) \\
&$a_{4,7}$&$ \begin{array}{c} L_3-K_1,P_0+P_2 ; \\ P_0-P_2,P_1 \end{array} $& leads to 5 KV subalgebra for constant components of $\gamma_{ij}^{(1)}$\\
&$\begin{array}{c}a_{4,10}^b=b_{4,13}\\ b>0,\neq1 \end{array}$ & $\left\{ F-bK_2,P_0,P_1,P_2 \right\}$ & $\left(
\begin{array}{ccc}
\text{c} & 0 & \text{c} \\
0 & 0 & 0 \\
\text{c} & 0 & \text{c} \\
\end{array}
\right), \left(
\begin{array}{ccc}
0 & \text{c} & 0 \\
\text{c} & 0 & -\text{c} \\
0 & -\text{c} & 0 \\
\end{array}
\right) $, $\left(
\begin{array}{ccc}
0 & \text{c} & 0 \\
\text{c} & 0 & \text{c} \\
0 & \text{c} & 0 \\
\end{array}
\right) $, $\begin{array}{l}\text{fourth one leads to}\\ \text{5KV subalgebra}\end{array}$\\
&$\begin{array}{c}a_{4,11}^b=b_{4,13}\\ b>0,[b\neq0] \end{array}$ & $\left\{ F+bL_3,P_0,P_1,P_2 \right\}$ & $\left(
\begin{array}{ccc}
0 & \text{c} & i \text{c} \\
\text{c} & 0 & 0 \\
i \text{c} & 0 & 0 \\
\end{array}
\right),\left(
\begin{array}{ccc}
0 & \text{c} & -i \text{c} \\
\text{c} & 0 & 0 \\
-i \text{c} & 0 & 0 \\
\end{array}
\right),$ $\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & i \text{c} & \text{c} \\
0 & \text{c} & -i \text{c} \\
\end{array}
\right) $, $\begin{array}{l}\text{fourth one leads to}\\ \text{5KV subalgebra}\end{array}$\\
\hline
\end{tabular}
\label{tablesim}
\end{table}
\end{center}
\begin{center}
\hspace{-0.3cm}\begin{tabular}{ | l | p{2.5 cm} | p{4.3 cm} | p{5.cm} |}
\hline
& $\begin{array} {c}a_{4,12}^{\epsilon}=b_{4,14}\\
\epsilon=1\ast[\epsilon=\pm1]
\end{array}$ &
$ \begin{array}{c}
\big\{ F+K_2+\epsilon(P_0+P_2), \\
-K_1+L_3,P_0-P_2,P_1 \big\}
\end{array} $
& $\left(
\begin{array}{ccc}
ce^{\frac{y-t}{4 \epsilon}} & 0 & -ce^{\frac{y-t}{4 \epsilon}} \\
0 & 0 & 0 \\
-ce^{\frac{y-t}{4 \epsilon}} & 0 & ce^{\frac{y-t}{4 \epsilon}} \\
\end{array}
\right) $\\
&$a_{4,13}=b_{4,15}$ & $\begin{array}{l}\big\{ F-K_2,P_0-P_2,\\-K_1+L_3,P_1 \big\}\end{array}$ & $\left(
\begin{array}{ccc}
\frac{c}{(y-t)^{3/2}} & 0 & -\frac{c}{(y-t)^{3/2}} \\
0 & 0 & 0 \\
-\frac{c}{(y-t)^{3/2}} & 0 & \frac{c}{(y-t)^{3/2}} \\
\end{array}
\right) $ \\
&$\tilde{a}_{4,14}$ & $\begin{array}{l}\big\{ F,-K_1+L_3,\\P_1,P_0-P_2 \big\}\end{array}$ & $\left(
\begin{array}{ccc}
-\frac{c}{y-t} & 0 & \frac{c}{y-t} \\
0 & 0 & 0 \\
\frac{c}{y-t} & 0 & -\frac{c}{y-t} \\
\end{array}
\right)$ \\
&$a_{4,15}=b_{4,17}$ & $\begin{array}{l}\big\{ F+bK_2,-K_1+L_3,\\P_0-P_2,P_1 \big\}\end{array}$ &
$\begin{array}{l} \gamma_{11}^{(1)}=c\cdot(y-t)^{\frac{1-2 b}{b-1}},\\ \gamma_{13}^{(1)}=-c\cdot(y-t)^{\frac{1-2 b}{b-1}},\\ \gamma_{33}^{(1)}=c\cdot(y-t)^{\frac{1-2 b}{b-1}} \\ \gamma_{12}^{(1)}=\gamma_{22}^{(1)}=0\end{array}$,
\\
&$\begin{array}{l}a_{4,16}^{a}\\ a=1*[a=\pm1]\end{array}$&$\begin{array}{c}\big\{F+\frac{1}{2}K_2;-K_1+L_3\\ +a(P_0+P_2),P_0-P_2,P_1\big\}\end{array}$& $\left(
\begin{array}{ccc}
-c_3 & 0 & c_3 \\
0 & 0 & 0 \\
c_3 & 0 & -c_3 \\
\end{array}
\right)$, leads to subalgebra with 5 KVs \\
$ \begin{array}{c} \text{Extended}\\ \text{2d Poincare}\end{array}$&$a_{4,17}=b_{4,17}$ & $\left\{ F,L_3,P_1,P_2 \right\}$ & $\left(
\begin{array}{ccc}
\frac{c}{t} & 0 & 0 \\
0 & \frac{c}{2 t} & 0 \\
0 & 0 & \frac{c}{2 t} \\
\end{array}
\right)$ \\
\hline
\end{tabular}
\end{center}
We have already commented on the first five realised subalgebras.
The next subalgebra is $a_{4,5}$, whose $\gamma_{ij}^{(1)}$ depends on the difference $y-t$, in behaviour analogous to the dependency on one coordinate. One may expect this behaviour after redefining the translational KVs into two new KVs: one that is the difference of two translational KVs, and the other their sum.
The subalgebra $a_{4,6}$ we have commented on in the text above (\ref{poind}), while the subalgebra $a_{4,7}$ for constant components leads to a $\gamma_{ij}^{(1)}$ that agrees with the $\gamma_{ij}^{(1)}$ of the 5 KV subalgebras, which is exhibited by adding one more KV to $a_{4,7}$. The subalgebras $a_{4,10}$ and $a_{4,11}$ show similarly interesting behaviour. Since both of them contain the 3 translational KVs, they admit only constant components in $\gamma_{ij}^{(1)}$. They provide four different $\gamma_{ij}^{(1)}$, one of which admits one more KV. In addition, $a_{4,11}$ contains imaginary values. The subalgebras $a^{\epsilon}_{4,12}$, $a_{4,13}$, $\tilde{a}_{4,14}$ and $a_{4,15}$ are similar to $a_{4,5}$ in the sense that their $\gamma_{ij}^{(1)}$ depends on $y-t$. An interesting property of the algebra $a_{4,15}$, which depends on ``one coordinate'', is that the exponent that appears in $\gamma_{ij}^{(1)}$ is defined by the parameter that enters one of the generators of the subalgebra.
The subalgebra $a_{4,17}$ defines the $\gamma_{ij}^{(1)}$ matrix that is the analog of (\ref{poind}) in the $t$ coordinate. The subalgebras with the highest numbers of generators (7, 6, 5 and 4) which do not have a realised $\gamma_{ij}^{(1)}$ are listed in the appendix: Classification.
\subsection{opt(2,1) Algebra}
One way to define the generators of optical algebra is
\begin{align}
W&=-\frac{\xi^{(6)}+\xi^{(4)}}{2} & K_1&=\frac{\xi^{(6)}-\xi^{(4)}}{2} \\ K_2&=\frac{1}{2}\left[\xi^{(0)}-\xi^{(2)}+\frac{(\xi^{(8)}-\xi^{(9)})}{2}\right] &
L_3&=\frac{1}{2}\left[\xi^{(0)}-\xi^{(2)}-\frac{(\xi^{(8)}-\xi^{(9)})}{2}\right] \\ M&=-\sqrt{2}\xi^{(1)} & Q&=\frac{\xi^{(5)}-\xi^{(3)}}{2\sqrt2} \\
N&=-(\xi^{(0)}+\xi^{(2)}) & &
\end{align}
\noindent from which one can immediately see the two other possible identifications. Since each of the original generators that enter the definition appears in each of the coordinates, we can permute them to obtain realised $\gamma_{ij}^{(1)}$ that depend on a particular coordinate.
The generators close into the algebra
\begin{align}
[K_1,K_2]&=-L_3, & [L_3,K_1]&=K_2, & [L_3,K_2]&=-K_1, & [M,Q]&=-N, \\
[K_1,M]&=-\frac{1}{2} M, & [K_1,Q]&=\frac{1}{2}Q, & [K_1,N]&=0, & [K_2,M]&=\frac{1}{2}Q, \\
[K_2,Q]&=\frac{1}{2}M, & [K_2,N]&=0, & [L_3,M]&=-\frac{1}{2}Q, & [L_3,Q]&=\frac{1}{2}M, \\
[L_3,N]&=0, & [W,M]&=\frac{1}{2}M, & [W,Q]&=\frac{1}{2}Q, & [W,N]&=\frac{1}{2}N.
\end{align}
There are five realised subalgebras, which we write in the following table.
\begin{center}
\begin{table}
\hspace{-0.3cm}\begin{tabular}{ | l | p{2.5 cm} | p{3.9 cm} | p{5.2cm} |}
\hline
\multicolumn{4}{|c|}{Realised subalgebras of $opt(2,1)$} \\
\hline
$\begin{array}{c}Name/ \\ commutators\end{array}$&Patera name&generators & realisation \\ \hline\hline
&$b_{5,6}=a_{5,4}$ & $W+aK_1,K_2+L_3,M,Q,N$ & \\
&$b_{4,1}$&$N\oplus\{K_1,K_2,L_3\}$ & $\left(
\begin{array}{ccc}
\frac{c}{x^3} & 0 & -\frac{c}{x^3} \\
0 & 0 & 0 \\
-\frac{c}{x^3} & 0 & \frac{c}{x^3} \\
\end{array}
\right)$ \\
&$b_{4,2}$& $W\oplus\left\{ K_1,K_2,L_3 \right\}$ & see eq. (\ref{eqn1}) \\
&$b_{4,3}$& $L_3,Q,M,N$ & see eq. (\ref{b43}) \\
&$\begin{array}{c}b_{4,4}\\ b>0\ast [b\neq0]\end{array}$& $W+bL_3,Q,M,N$ & see eq. (\ref{b44}).\\
\hline
\hline
\end{tabular}
\end{table}
\end{center}
\begin{center}
\begin{table}
\hspace{-0.3cm}\begin{tabular}{ | l | p{2.8 cm} | p{3.6 cm} | p{5.2cm} |}
\hline
\multicolumn{4}{|c|}{Realised subalgebras of $opt(2,1)$, continuation} \\
\hline
$\begin{array}{c} Extended\\ 2d Poincare\end{array}$ &$\overline{b}_{4,9}=a_{4,6}$ &
$ \begin{array}{c} \left\{K_1,K_2+L_3 \right\}\oplus \\ \left\{W,N\right\} \end{array} $
& see equation (\ref{poind}) \\
&$\begin{array}{c} \overline{b}_{4,11}\sim a_{4,8}^{-1} \end{array}$ & $\begin{array}{l} \big\{ W-K_1+Q\\ K_2+L_3,M,N\big\}\end{array}$ &$\left(
\begin{array}{ccc}
-\frac{5 \text{c}}{3 \sqrt{2}} & \text{c} & \frac{\text{c}}{\sqrt{2}} \\
\text{c} & -\frac{1}{3} \left(2 \sqrt{2} \text{c}\right) & -\text{c} \\
\frac{\text{c}}{\sqrt{2}} & -\text{c} & -\frac{\text{c}}{3 \sqrt{2}} \\
\end{array}
\right) $ \\
&$\begin{array}{l}\overline{b}^b_{4,17}=a_{4,15}^{\text{\tiny{ (1-b)/(1+b)}}}\\ b>0,b\neq1\end{array}$ & $\begin{array}{l}\big\{W-bK_1,M,Q,N \big\}\end{array}$ &
$\begin{array}{l} \gamma_{11}^{(1)}=c\cdot(y-t)^{-\frac{3 b+1}{2 b}},\\ \gamma_{13}^{(1)}=-c\cdot(y-t)^{-\frac{3 b+1}{2 b}},\\ \gamma_{33}^{(1)}=c\cdot(y-t)^{-\frac{3 b+1}{2 b}} \\ \gamma_{12}^{(1)}=\gamma_{22}^{(1)}=0\end{array}$,
\\
\hline
\end{tabular}
\end{table}
\end{center}
The $\gamma_{ij}^{(1)}$ matrices for the $b_{4,2}, b_{4,3}$ and $b_{4,4}$ subalgebras from the table are, respectively,\begin{align}
\gamma_{ij}^{(1)}&=\left(
\begin{array}{ccc}
\frac{c\cdot\left(x^2+3 (t+y)^2\right) }{x^3} & -\frac{3 c\cdot(t+y) }{x^2} & -\frac{3 c\cdot(t+y)^2 }{x^3} \\
-\frac{3c \cdot(t+y) }{x^2} & \frac{2 c}{x} & \frac{3 c\cdot(t+y) }{x^2} \\
-\frac{3 c\cdot(t+y)^2 }{x^3} & \frac{3 c\cdot(t+y) }{x^2} & -\frac{c\cdot\left(x^2-3 (t+y)^2\right) }{x^3} \\
\end{array}
\right)\label{eqn1},\\%\end{equation}\begin{align}
\gamma_{ij}^{(1)}&=\left(
\begin{array}{ccc}
-\frac{c}{\left((t-y)^2+4\right)^{3/2}} & 0 & \frac{c}{\left((t-y)^2+4\right)^{3/2}} \\
0 & 0 & 0 \\
\frac{c}{\left((t-y)^2+4\right)^{3/2}} & 0 & -\frac{c}{\left((t-y)^2+4\right)^{3/2}} \\
\end{array}
\right)\label{b43},\\
\gamma_{ij}^{(1)}&=\left(
\begin{array}{ccc}
\frac{c\cdot e^{\frac{\tan ^{-1}\left(\frac{t-y}{2}\right)}{b}}}{\left((t-y)^2+4\right)^{3/2}} & 0 & -\frac{c\cdot e^{\frac{\tan ^{-1}\left(\frac{t-y}{2}\right)}{b}} }{\left((t-y)^2+4\right)^{3/2}} \\
0 & 0 & 0 \\
-\frac{c\cdot e^{\frac{\tan ^{-1}\left(\frac{t-y}{2}\right)}{b}} }{\left((t-y)^2+4\right)^{3/2}} & 0 & \frac{c \cdot e^{\frac{\tan ^{-1}\left(\frac{t-y}{2}\right)}{b}} }{\left((t-y)^2+4\right)^{3/2}} \\
\end{array}
\right)\label{b44}.
\end{align}In the table, an asterisk after the range under $o(3,2)$ (e.g. in $a^{\epsilon}_{4,12}$) means that the range needs to be doubled if one considers conjugacy under $SO_0(3,2)$ [or $O_1(3,2)$] rather than under $O(3,2)$ [or $SO(3,2)$]. For example, $\epsilon=1\ast$ means that $\epsilon=\pm1$ under $SO_0(3,2)$, and $b>0\ast$ indicates that $b\neq0$, $-\infty<b<\infty$ under $SO_0(3,2)$.
Let us analyse the structure of the above matrices and compare the ingredients with the matrices obtained for the original KVs. The coordinate dependence in the denominator of $b_{4,1}$ is similar to that of the $\gamma_{ij}^{(1)}$ that conserves two SCTs. We can notice that $K_2$ and $L_3$, which appear in $b_{4,1}$, both contain SCT KVs in their definition. In the denominators of $b_{4,3}$ and $b_{4,2}$ we notice the power $3/2$ that appears in the $\gamma_{ij}^{(1)}$ for 3R+D, while the hyperbolic tangent appears in the $\gamma_{ij}^{(1)}$ that conserves the SCT in the $t$ direction. The further $b$ subalgebras already mentioned for the $sim(2,1)$ group are $b_{4,5}=a_{4,18}, \overline{b}_{4,6}=a_{4,1}, \overline{b}_{4,10}=a_{4,7},\overline{b}^b_{4,13}=a^{(b-1)/(b+1)}_{4,10} \text{ with }0<|b|<1 \text{ and }[b\neq0,\pm1], \overline{b}^{\epsilon}_{4,14}=a^{\epsilon}_{4,12} \text{ with }\epsilon=1\ast[\epsilon=\pm1],\overline{b}_{4,15}=a_{4,13},\overline{b}_{4,16}=a_{4,14}$, while the subalgebras that are not realised are
\begin{center}
\begin{tabular}{|c|c|}\hline
\hline
\multicolumn{2}{|c|}{Subalgebras that are not realised} \\
\hline
Patera name&generators \\ \hline\hline
$ b_{7,1}$ &$ W,K_1,K_2,L_3,M,Q,N$ \\
$b_{6,1}$& $ K_1,K_2,L_3,M,Q,N$\\
$\overline{b}_{6,2}=a_{6,2} $& $W,K_1,K_2+L_3,M,Q,N$ \\ \hline
$b_{5,1}$ & $\begin{array}{c}\{K_1,K_2,L_3\}\oplus\{W,N\}\end{array}$\\
$b_{5,2}$ & $W,L_3,M,Q,N$\\
$\overline{b}_{5,3}=a_{5,1}$ & $W+K_1,K_2+L_3,M,Q,N$\\
$\overline{b}_{5,4}=a_{5,2}$ &$ K_1,K_2+L_3,M,Q,N$\\
$\overline{b}_{5,5}=a_{5,3}$ &$W,K_2+L_3,M,Q,N $\\
$\overline{b}_{5,7}=a_{5,5}$ &$ W-K_1,K_2+L_3,M,Q,N$\\
$\overline{b}_{5,8}=a_{5,6}$ & $ W,K_1,K_2+L_3,M,N $\\
$\overline{b}_{5,9}=a_{5,8}$ & $W,K_1,M,Q,N$\\
$b_{4,5}=a_{4,18}$ &$ W,Q,M,N$\\
$\overline{b}_{4,7}=a_{4,2}$ &$ N\oplus\{K_1,K_2+L_3,M\}$\\
$\overline{b}_{4,8}=a_{4,5}$ & $\{W+K_1,N\}\oplus\{K_1,M\}$\\
\hline
$\overline{b}_{4,10}=a_{4,7}$ &$ K_2+L_3,Q,M,N$\\
$\overline{b}_{4,18}\sim a^{-1}_{4,16}$ &$ W-\frac{1}{3}K_1,K_2+L_3+Q,M,N$\\
\hline
\end{tabular}
\end{center}
$o(3)\oplus o(2)$ does not contain realised subalgebras, while $o(2)\oplus o(2,1)$ contains one algebra with four generators. This is the algebra we have already encountered, the one formed by 1T+1SCT+1R+D and identified with $sl(2)\oplus u(1)$.
Next we consider $o(2,2)$.
\subsection{o(2,2) Algebra}
The subalgebra $o(2,2)$ contains the generators
\begin{align}
A_1&=-\frac{1}{2}\left[\frac{\xi^{(9)}+\xi^{(8)}}{2}-(\xi^{(0)}+\xi^{(2)})\right], && A_2=\frac{1}{2}(\xi^{(6)}+\xi^{(4)}), \nonumber \\
A_3&=\frac{1}{2}\left[-\frac{\xi^{(9)}+\xi^{(8)}}{2}-(\xi^{(0)}+\xi^{(2)})\right], &&
B_1=-\frac{1}{2}\left[\frac{-\xi^{(9)}+\xi^{(8)}}{2}+(\xi^{(0)}-\xi^{(2)})\right], \nonumber \\ B_2&=\frac{1}{2}(\xi^{(6)}-\xi^{(4)}), && B_3=\frac{1}{2}\left[\frac{\xi^{(9)}-\xi^{(8)}}{2}+(\xi^{(0)}-\xi^{(2)})\right]. \label{genlo22}
\end{align}\noindent As above, one may obtain a $\gamma_{ij}^{(1)}$ that depends on the remaining two coordinates by permutation of the original generators in the definition (\ref{genlo22}).
The algebra is isomorphic to $o(2,1)\oplus o(2,1)$ and defined with commutation relations
\begin{align}
[A_1,A_2]&=-A_3,& [A_3,A_1]&=A_2, & [A_2,A_3]&=A_1, \\
[B_1,B_2]&=-B_3, & [B_3,B_1]&=B_2, & [B_2,B_3]&=B_1, \\ [A_i,B_k]&=0 &(i,k=1,2,3)
\end{align}
It contains subalgebras with numbers of generators from six down to one, while the two largest algebras are not realised
\begin{center}
\begin{tabular}{|c|c|}\hline
\hline
\multicolumn{2}{|c|}{Subalgebras that are not realised} \\
\hline
Patera name&generators \\ \hline\hline
$ e_{6,1}$ &$ \{ A_1,A_2,A_3 \}\oplus\{B_1,B_2,B_3\}$ \\
$ \overline{e}_{5,1}=b_{5,1}$ &$ \{ A_1,A_2-A_3 \}\oplus\{B_1,B_2,B_3\}.$ \\
\hline
\end{tabular}
\end{center}
Some of the subalgebras with the lower number of generators, four, are equal to algebras already encountered: $\overline{e}_{4,2}=a_{4,6},\overline{e}_{4,3}=b_{4,1},\overline{e}_{4,4}=b_{4,2}$.
\subsection{o(3,1) Lorentz Algebra}
The four dimensional Lorentz algebra can be written on the three dimensional hypersurface
using
\begin{align}
L_1&=\xi^{(7)}+\frac{\xi^{(2)}}{2}, & L_2&=\xi^{(5)}, & L_3&=\xi^{(8)}+\frac{1}{2}\xi^{(1)}, \\
K_1&=\xi^{(8)}-\frac{1}{2}\xi^{(1)}, & K_2&=\xi^{(6)}, & K_3&=-\xi^{(7)}+\frac{1}{2}\xi^{(2)} .
\end{align}
that close with the commutation relations
\begin{align}
[L_i,L_j]&=\epsilon_{ijk}L_k, & [L_i,K_j]&=\epsilon_{ijk}K_k, & [K_i,K_j]&=-\epsilon_{ijk}L_k.
\end{align}
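The Lorentz commutation relations above can be cross-checked in the standard $4\times4$ vector (matrix) representation; the sketch below uses that matrix realisation, which is our own choice of representation, not the $\xi$-realisation on the hypersurface used in the text:

```python
import numpy as np

# Levi-Civita symbol eps_{ijk}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# 4x4 vector representation of o(3,1): rotations L_i, boosts K_i,
# indices ordered (t, x, y, z).
L, K = [], []
for i in range(3):
    Li = np.zeros((4, 4))
    Li[1:, 1:] = -eps[i]                 # (L_i)_{jk} = -eps_{ijk}
    Ki = np.zeros((4, 4))
    Ki[0, 1 + i] = Ki[1 + i, 0] = 1.0    # boost mixes t with x_i
    L.append(Li)
    K.append(Ki)

def comm(A, B):
    return A @ B - B @ A

# Verify [L_i,L_j]=eps_{ijk}L_k, [L_i,K_j]=eps_{ijk}K_k, [K_i,K_j]=-eps_{ijk}L_k
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(L[i], L[j]),
                           sum(eps[i, j, k] * L[k] for k in range(3)))
        assert np.allclose(comm(L[i], K[j]),
                           sum(eps[i, j, k] * K[k] for k in range(3)))
        assert np.allclose(comm(K[i], K[j]),
                           -sum(eps[i, j, k] * L[k] for k in range(3)))
print("o(3,1) commutation relations verified")
```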
The algebra $o(3,1)$ itself is not realised in the form of a $\gamma_{ij}^{(1)}$, while the largest subalgebra of $o(3,1)$, $\overline{f}_{4,1}$, which contains the four generators $K_1,L_1;L_2-K_3,L_3+K_2$, is equal to the algebra $a_{4,17}$ that realises the 2 dimensional Poincare algebra (\ref{poind}).
From the $\gamma_{ij}^{(1)}$ matrices found for the flat background metric $\gamma_{ij}^{(0)}$, one can find the transformation to the $\gamma_{ij}^{(1)}$ in the spherical background metric. We consider that in the appendix: Classification: Map to Spherical Coordinates Using Global Coordinates and 5 KV Algebra.
The relation between the subalgebras written using the original KVs in the previous chapter and the subalgebras in the Patera et al. classification can be found in the appendix: Classification: Map from Classification of KVs from Conformal Algebra to Patera et al. Classification.
\section{Global Solutions: Bottom-Up Approach}
The bottom-up approach builds a global solution from the known asymptotic solution for the $\gamma_{ij}^{(1)}$ matrix and the known symmetries. The first candidates from which to deduce the global solution are
the subalgebras with the highest number of KVs, because they exhibit the most symmetries. In our case that is the subalgebra with 5 CKVs. The global solution written as an asymptotically AdS solution, with $\gamma_{ij}^{(1)}$ that of the 5 CKV subalgebra and the higher order terms in the FG expansion set to zero, is a solution of the full Bach equation.
For the other $\gamma_{ij}^{(1)}$ matrices from the classification one cannot use an analogous procedure; however, the Bach equation can simplify.
\subsection{Geon Solution}
The global solution that arises from the 5 dimensional algebra $so(2)\ltimes o(1,1)$, or in Patera et al. notation $a_{5,4}=b_{5,6}$, that conserves the $\gamma_{ij}^{(1)}$ matrices in (\ref{five}), defines a geon which is analogous to a pp-wave solution. One can consider geons \cite{Wheeler:1955zz} in several different senses \cite{Melvin:1963qx}, \cite{Kaup:1968zz}, while we consider them here in the sense of instantons \cite{Anderson:1996pu}, or pp-wave solutions. They were recently discussed and connected with the instability of AdS spacetime \cite{Horowitz:2014hja}, while an interesting use of them includes the nearly linear solution to the vacuum constraint equations representing even-parity ingoing wave packets, which describes the formation of a black hole by an imploding axisymmetric gravitational wave \cite{Abrahams:1992ib}.
We promote an asymptotic solution to a global one using the global metric
\begin{equation}
ds^2=dr^2+(-1+cf(r))dt^2+2cf(r)dtdx+(1+cf(r))dx^2+dy^2,\label{geon}
\end{equation}
that solves the Bach equation for $f(r)=c_1+c_2 r+c_3 r^2+c_4 r^3$.
When the $c_3$ and $c_4$ coefficients are zero one obtains a Ricci flat metric, and the solution is a solution of EG as well; the metric (\ref{geon}) is built so that it satisfies AdS boundary conditions.
Conformal invariance allows the metric to be rescaled, after which it does not retain Ricci flatness. The response functions for the metric (\ref{geon}),
\begin{align}
\begin{array}{ccc}
\tau_{ij}=\left(\begin{array}{ccc}-c c_4& -c c_4 &0 \\ -c c_4 & -c c_4 &0 \\ 0& 0 & 0 \end{array}\right)
& \text{ and } &
P_{ij}=\left(\begin{array}{ccc} c c_3 & c c_3 & 0 \\ c c_3 & c c_3 &0 \\ 0 & 0 & 0 \end{array}\right),
\end{array}
\end{align}
show that, by choosing the form of the metric, we can require one or both of the response functions to vanish. From the response functions we compute the charges and the currents $J_i=Q_{ij}\xi^{j}$ of the solution, where $\xi^j$ are the 5 CKVs that form the subalgebra.
The charge associated with the timelike KV $(1,0,0)$ is $-2cc_4$ per square unit of AdS radius, and the same holds for the charge associated with the KV $(0,1,0)$. The charge associated with $(0,0,1)$ vanishes. The charges associated with the new KVs $(t-\frac{x}{2},-\frac{t}{2}+x,y)$ and $(-y,y,-t-x)$ are $cc_4\cdot(t+x)$ and zero, respectively.
Interestingly, the solution (\ref{geon}) is not conformally flat; however, its Weyl squared vanishes and its polynomial invariants are finite (appendix: Classification).
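The Ricci flatness of (\ref{geon}) for vanishing $c_3$ and $c_4$ can be cross-checked symbolically; the sketch below assumes $f(r)=c_1+c_2 r$ and orders the coordinates as $(r,t,x,y)$ (both our own conventions):

```python
import sympy as sp

r, t, x, y = sp.symbols('r t x y')
c, c1, c2 = sp.symbols('c c1 c2')

# f(r) with the c3, c4 terms switched off, as in the Ricci-flat case
f = c1 + c2 * r
h = c * f
coords = [r, t, x, y]
g = sp.Matrix([[1, 0,      0,     0],
               [0, -1 + h, h,     0],
               [0, h,      1 + h, 0],
               [0, 0,      0,     1]])
ginv = g.inv()
n = 4

def christoffel(a, b, k):
    """Gamma^a_{bk} for the metric g."""
    return sum(ginv[a, d] * (sp.diff(g[d, b], coords[k]) +
                             sp.diff(g[d, k], coords[b]) -
                             sp.diff(g[b, k], coords[d]))
               for d in range(n)) / 2

Gamma = [[[sp.simplify(christoffel(a, b, k)) for k in range(n)]
          for b in range(n)] for a in range(n)]

def ricci(b, k):
    """R_{bk} = d_a Gamma^a_{bk} - d_b Gamma^a_{ak} + Gamma*Gamma terms."""
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][k], coords[a]) - sp.diff(Gamma[a][a][k], coords[b])
        for d in range(n):
            expr += Gamma[a][a][d] * Gamma[d][b][k] - Gamma[a][b][d] * Gamma[d][a][k]
    return sp.simplify(expr)

R = sp.Matrix(n, n, lambda b, k: ricci(b, k))
assert R == sp.zeros(n, n)
print("the geon metric with f = c1 + c2*r is Ricci flat")
```

The cubic terms $c_3 r^2+c_4 r^3$ reintroduce a non-vanishing Ricci component, consistent with the pp-wave character of the solution.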
Another global solution with constant $\gamma_{ij}^{(1)}$ and 4 KVs is the solution $a_{4,10}^{b}$ from the $sim(2,1)$ table. For $\gamma_{ij}^{(1)}$ of the form $\left(\begin{array}{ccc}c&0&c\\ 0&0&0\\c&0&c\\ \end{array}\right)$, the solution is analogous to the solution of the 5 KV case, i.e. the $f(r)$ that solves the Bach equation is $f(r)=a_1+a_2r+a_3r^2+a_4r^3$. When the realisation of the global solution is $\left(\begin{array}{ccc}0&c&0\\ c&0&-c\\0&-c&0\\ \end{array}\right)$, the only coefficients in $f(r)$ allowed by the Bach equation are $a_1$ and $a_2$.
Analogous geon or pp-wave solutions appear also with dependency on one, two and three coordinates. These solutions have vanishing Weyl squared, while they are not conformally flat. By a particular choice of the coefficients that multiply the $r$ dependent terms they can be brought to Ricci flat form, and one of them has the structure of a {\it double holography-like solution}.
\subsection{Global Solution with $\gamma_{ij}^{(1)}$ Dependent on One Coordinate}
The ansatz for the global solution of the Bach equation
\begin{equation}
ds^2=dr^2+(-1+b(x)f(r))dt^2+dx^2+2b(x)f(r)dtdy+(1+b(x)f(r))dy^2,\label{gldep1}
\end{equation}
solves the Bach equation in two cases:
\begin{enumerate}
\item $f=c_1+c_2 r+c_3 r^2+c_4 r^3$ and $b=a_1+a_2 x$
\item $f=c_1+c_2 r$ and $b=a_1+a_2 x +a_3 x^2+a_4 x^3$
\end{enumerate}
in which, as in the constant case, one can straightforwardly read out the $\gamma_{ij}^{(i)}$, $i=1,2,3$, matrices. Therefore, depending on whether we want vanishing or non-vanishing response functions, we can choose the solution of the Bach equation accordingly.
In case 1 the response functions and charges are non-vanishing, which is the opposite of case 2.
$\gamma_{ij}^{(1)}$ is conserved by the KVs
\begin{eqnarray}
\xi^{\ms{(0)} a}_{1}&=&(0,0,1),\\
\xi^{\ms{(0)} a}_{2}&=&(t-y,x,-t+y), \\
\xi^{\ms{(0)} a}_{3}&=&(1,0,0), \label{ricflat}
\end{eqnarray}
two translations and a combination of the dilatation and boost in $t-y$ plane. From the Patera et al. classification, the subalgebra belongs to sim(2,1) and it is $a^c_{3,19},c\neq0,\pm1,-2$ with KVs $P_0,P_2,F-c K_2$ for $c=1$.
For further insight into the subalgebra one can consider linear combinations of the KVs, $\xi^{\ms{(0)} a}_{+}=\xi^{\ms{(0)} a}_{1}+\xi^{\ms{(0)} a}_{3}$, $\xi^{\ms{(0)} a}_{-}=\xi^{\ms{(0)} a}_{1}-\xi^{\ms{(0)} a}_{3}$ and $\chi^{(0) a}=\frac{1}{2}\xi^{\ms{(0)} a}_{2}$, which form the ASA $[\chi^{(0) a},\xi^{\ms{(0)} a}_{-}]=- \xi^{\ms{(0)} a}_{-}$.
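The ASA commutator can be verified directly as a Lie bracket of vector fields; the sketch below assumes $\chi^{(0)a}=\frac{1}{2}\xi^{(0)a}_{2}$ and orders the components as $(t,x,y)$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = [t, x, y]

def bracket(u, v):
    """Lie bracket [u, v]^a = u^k d_k v^a - v^k d_k u^a of vector fields."""
    return sp.Matrix([sum(u[k] * sp.diff(v[a], coords[k]) -
                          v[k] * sp.diff(u[a], coords[k]) for k in range(3))
                      for a in range(3)])

xi1 = sp.Matrix([0, 0, 1])            # translation
xi2 = sp.Matrix([t - y, x, -t + y])   # dilatation + boost combination
xi3 = sp.Matrix([1, 0, 0])            # translation

xi_minus = xi1 - xi3
chi = xi2 / 2

assert bracket(chi, xi_minus) == -xi_minus
assert bracket(xi1, xi3) == sp.zeros(3, 1)  # the two translations commute
print("[chi, xi_-] = -xi_-")
```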
The response functions are analogous to those of the 5 KV subalgebra, with manifest dependency on the $x$ coordinate
\begin{align}\begin{array}{cc}\tau_{ij}=\left(
\begin{array}{ccc}
-x c_4 & 0 & -x c_4 \\
0 & 0 & 0 \\
-x c_4 & 0 & -x c_4 \\
\end{array}
\right) & P_{ij}=\left(
\begin{array}{ccc}
-x c_3 & 0 & -x c_3 \\
0 & 0 & 0 \\
-x c_3 & 0 & -x c_3 \\
\end{array}
\right).\end{array}
\end{align}
The charges of the $(1,0,0)$, $(0,0,1)$ and $(t-y,x,-t+y)$ KVs are $2xc_4$, $2xc_4$ and vanishing, respectively, while
the metric can be reduced to a Ricci flat solution for the choice of metric
\begin{equation}
ds^2=dr^2+(-1+r x) dt^2+2 r x\, dtdy + dx^2+(1+r x) dy^2,
\end{equation}
or, transparently, for $\gamma^{\ms{(1)}}_{ij}=\left(\begin{array}{c c c} x & 0& x \\ 0& 0 &0 \\ x& 0 & x\end{array}\right).$
\subsection{Global Solution with $\gamma_{ij}^{(1)}$ Dependent on Two Coordinates}
Among the global solutions we can obtain analogous solutions by permuting the components in $\gamma_{ij}^{(1)}$ and the KVs in the algebra. We have noticed this already in the analysis of the ASA. An excellent example are the global solutions with $\gamma_{ij}^{(1)}$ dependent on two coordinates. The metrics
\begin{align}
ds^2&=dr^2+\left[-1+ a(t+x)f(r)\right] dt^2+2a(t+x)f(r)dtdx \nonumber \\&+ \left[1+ a(t+x)f(r)\right]dx^2+dy^2 \label{dep2ex1} \\
ds^2&=dr^2+\left[-1- b(t-y)f(r)\right] dt^2+dx^2+2b(t-y)f(r)dtdy \nonumber \\& + \left[1-b(t-y)f(r)\right]dy^2
\end{align}
solve the Bach equation for the function $f(r)=c_1+c_2 r+c_3r^2+c_4 r^3$ and
give asymptotically desired forms of $\gamma_{ij}^{(1)}$, which are respectively
\begin{align}
\gamma_{ij}^{(1)}&=\left(\begin{array}{ccc} c_ia(t+x) & c_ia(t+x) &0 \\ c_ia(t+x)& c_ia(t+x) & 0 \\ 0&0& 0 \end{array}\right),\nonumber \\ \gamma_{ij}^{(1)}&=\left(\begin{array}{ccc} -c_ib(t-y) & 0 &c_ib(t-y) \\ 0&0 & 0 \\ c_ib(t-y)&0& -c_ib(t-y) \end{array}\right),
\label{globdep2}
\end{align}
for $i=1,2,3$.
The algebras given by the solutions are defined with the KVs
\begin{align}
\begin{array}{cc}
\begin{array}{ll}
\xi^{(n1)}_1&=(-1,1,0) \\
\xi^{(n2)}_1&=(0,0,1) \\
\xi^{(n3)}_1&=(-y,y,-t-x)
\end{array} &
\begin{array}{ll}
\xi^{(n1)}_2&=(1,0,1) \\
\xi^{(n2)}_2&=(0,1,0) \\
\xi^{(n3)}_2&=( -x , y-t , - x)
\end{array}
\end{array}.
\end{align}
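Assuming that invariance of $\gamma_{ij}^{(1)}$ under a background KV reduces to the vanishing of the ordinary Lie derivative (our working assumption for true isometries of the Minkowski boundary), the sketch below checks this for the second matrix in (\ref{globdep2}) and the KVs $\xi^{(n1)}_2$, $\xi^{(n2)}_2$, $\xi^{(n3)}_2$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
c = sp.symbols('c')
b = sp.Function('b')
coords = [t, x, y]

B = c * b(t - y)
# gamma^(1) of the second solution in (globdep2), indices ordered (t, x, y)
gamma = sp.Matrix([[-B, 0, B],
                   [0, 0, 0],
                   [B, 0, -B]])

def lie_derivative(xi, T):
    """(L_xi T)_{ab} = xi^k d_k T_{ab} + T_{kb} d_a xi^k + T_{ak} d_b xi^k."""
    n = len(coords)
    out = sp.zeros(n, n)
    for a in range(n):
        for bb in range(n):
            expr = sum(xi[k] * sp.diff(T[a, bb], coords[k]) for k in range(n))
            expr += sum(T[k, bb] * sp.diff(xi[k], coords[a]) for k in range(n))
            expr += sum(T[a, k] * sp.diff(xi[k], coords[bb]) for k in range(n))
            out[a, bb] = sp.simplify(expr)
    return out

kvs = [sp.Matrix([1, 0, 1]),        # xi^(n1)_2
       sp.Matrix([0, 1, 0]),        # xi^(n2)_2
       sp.Matrix([-x, y - t, -x])]  # xi^(n3)_2

for xi in kvs:
    assert lie_derivative(xi, gamma) == sp.zeros(3, 3)
print("all three KVs annihilate gamma^(1) under the Lie derivative")
```

The cancellation holds for an arbitrary profile $b(t-y)$, which is why these KVs survive for any member of the family.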
Response functions of the first solution \begin{align} \tau_{ij}&=\left(
\begin{array}{ccc}
-c_4 \text{a}(t+x) & -c_4 \text{a}(t+x) & 0 \\
-c_4 \text{a}(t+x) & -c_4 \text{a}(t+x) & 0 \\
0 & 0 & 0 \\
\end{array}
\right), \\ P_{ij}&=\left(
\begin{array}{ccc}
c_3 \text{a}(t+x) & c_3 \text{a}(t+x) & 0 \\
c_3 \text{a}(t+x) & c_3 \text{a}(t+x) & 0 \\
0 & 0 & 0 \\
\end{array}
\right) \end{align}
imply that the charges of $\xi^{(n1)}_1$ and $\xi^{(n2)}_1$ vanish,
while the response functions of the second solution are
\begin{align} \tau_{ij}&=\left(
\begin{array}{ccc}
c_4 \text{b}(t-y) &0 & -c_4 \text{b}(t-y) \\
0 & 0 & 0 \\
-c_4 \text{b}(t-y) & 0 & c_4 \text{b}(t-y) \\
\end{array}
\right), \\ P_{ij}&=\left(
\begin{array}{ccc}
-c_3 \text{b}(t-y) & 0& c_3 \text{b}(t-y) \\
0& 0 & 0 \\
c_3 \text{b}(t-y) & 0 & -c_3 \text{b}(t-y) \\
\end{array}
\right). \end{align} The corresponding charges for $\xi^{(n1)}_2$ and $\xi^{(n2)}_2$ vanish, while the charge for $\xi^{(n3)}_2$ is $-4c_4x\cdot b(t-y)$.
Since the functional dependence is maintained in the global solution, one can use it to build a {\it double holography like solution}, which leads to one more KV, $\xi^{(6)}$. If we consider the function $a(t+x)=\frac{2}{t+x}$ and substitute $t\rightarrow\chi+\tau$ and $x\rightarrow\chi-\tau$, we obtain
\begin{equation}
ds^2=\frac{4 r d\chi^2}{\chi }+\frac{4 r d\chi dy}{\chi }+dr^2-4 d\tau d\chi +dy^2
\end{equation}
a metric that can be brought, using $r\rightarrow\chi\eta$, to the form
\begin{equation}
ds^2=4 \eta d\chi^2+(\chi d\eta+\eta d\chi)^2-4 d\tau d\chi +4 \eta d\chi dy+dy^2.
\end{equation}
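The substitution $r\rightarrow\chi\eta$ can be verified as a plain tensor transformation; the sketch below orders the coordinates as $(\tau,\chi,y,r)$ (our own convention) and compares the transformed metric with the quoted form:

```python
import sympy as sp

tau, chi, y, r, eta = sp.symbols('tau chi y r eta')

# metric before the substitution, coordinates ordered (tau, chi, y, r):
# ds^2 = 4r dchi^2/chi + 4r dchi dy/chi + dr^2 - 4 dtau dchi + dy^2
g = sp.Matrix([[0, -2, 0, 0],
               [-2, 4*r/chi, 2*r/chi, 0],
               [0, 2*r/chi, 1, 0],
               [0, 0, 0, 1]])

old = [tau, chi, y, r]
new = [tau, chi, y, eta]
subst = {r: chi * eta}   # the substitution r -> chi*eta

# Jacobian J[c, a] = d(old_c)/d(new_a) after the substitution
J = sp.Matrix(4, 4, lambda c, a: sp.diff(old[c].subs(subst), new[a]))

g_new = sp.simplify(J.T * g.subs(subst) * J)

# expected: 4 eta dchi^2 + (chi deta + eta dchi)^2 - 4 dtau dchi
#           + 4 eta dchi dy + dy^2
expected = sp.Matrix([[0, -2, 0, 0],
                      [-2, 4*eta + eta**2, 2*eta, chi*eta],
                      [0, 2*eta, 1, 0],
                      [0, chi*eta, 0, chi**2]])
assert sp.simplify(g_new - expected) == sp.zeros(4, 4)
print("substitution r -> chi*eta reproduces the quoted metric")
```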
\noindent We have obtained a four dimensional subalgebra with a double-holography like global solution.
Very interesting global solutions in this case are those whose $\gamma_{ij}^{(1)}$ matrix is a specialisation of (\ref{globdep2}) to one of the subalgebras from the tables of the $sim(2,1)$ and $opt(2,1)$ algebras.
Let us demonstrate this on the algebra $a_{4,12}$. The KVs $\{ F+K_2+\epsilon(P_0+P_2),-K_1+L_3,P_0-P_2,P_1 \}$ lead to the response functions
\begin{align}
\tau_{ij}&=\left(
\begin{array}{ccc}
-(c e^{\frac{1}{4} c \cdot(y-t)} c_4) & 0 & c e^{\frac{1}{4} c\cdot (y-t)} c_4 \\
0 & 0 & 0 \\
c e^{\frac{1}{4} c \cdot(y-t)} c_4 & 0 & -(c e^{\frac{1}{4} c\cdot (y-t)} c_4) \\
\end{array}
\right), \\ P_{ij}&= \left(
\begin{array}{ccc}
c e^{\frac{1}{4} c \cdot(y-t)} c_3 & 0 & -(c e^{\frac{1}{4} c\cdot (y-t)} c_3) \\
0 & 0 & 0 \\
-(c e^{\frac{1}{4} c\cdot (y-t)} c_3) & 0 & c e^{\frac{1}{4} c \cdot(y-t)} c_3 \\
\end{array}
\right).
\end{align}
The only non-vanishing charge belongs to the KV $P_0-P_2$ and reads $4cc_4e^{c\cdot(y-t)/4}$.
Similarly, each of the subalgebras from the $sim(2,1)$ and $opt(2,1)$ tables that is of the form (\ref{globdep2}) and depends on the $t-y$ coordinate can be realised as a global solution. These are $a^{\epsilon}_{4,12}$, $a_{4,13}$, $a_{4,14}$, $a_{4,15}$, $b_{4,3}$ and $b_{4,4}$.
Going to a higher-dimensional subalgebra from the KVs $\xi_2^{(n1)}=\xi^{(0)}+\xi^{(2)}$, $\xi_2^{(n2)}=\xi^{(1)}$, $\xi_{2}^{(n3)}=\xi^{(5)}-\xi^{(3)}$ that conserve (\ref{globdep2}), one can choose a fourth KV by solving equation (\ref{eq:nloke}) for the desired KV.
Choosing as the fourth KV
\begin{itemize}
\item $\xi^{(6)}$, the function $b(t-y)$ takes the form $b(t-y)=\frac{b_1}{t-y}$,
\item $\xi^{(6)}+\xi^{(4)}+\epsilon(\xi^{(0)}-\xi^{(2)})$, it becomes $b(t-y)=b_1\cdot e^{\frac{(t-y)}{2\epsilon}}$,
\item $\xi^{(0)}-\xi^{(2)}$ leads to $b(t-y)$ linear in $t-y$,
\item $\xi^{(6)}-\xi^{(4)}$ provides $b(t-y)=\frac{b_1}{(t-y)^{3/2}}$,
\item $\xi^{(4)}$ gives $b(t-y)=\frac{b_1}{(t-y)^2}$,
\item $\xi^{(6)}+c\xi^{(4)}$ defines $b(t-y)=b_1\cdot(t-y)^{\frac{1-2c}{-1+c}}$.
\end{itemize}
This means that an arbitrary profile $b(t-y)$ breaks $a_{5,4}$ to $(\xi^{(5)}-\xi^{(3)},\xi^{(1)},\xi^{(0)}+\xi^{(2)})$.
In addition, one can focus on the profile of $\gamma_{ij}^{(1)}$ of (\ref{globdep2}) when $b(t-y)=(t-y)^{\beta}$. Besides the known conserved KVs, one obtains
\begin{itemize}
\item $\xi^{(6)}$ for $\beta=-1$,
\item $\xi^{(4)}$ for $\beta=-2$,
\item and $\xi_{new}=\xi^{(6)}+\frac{1+\beta}{2+\beta}\xi^{(4)}$ for general $\beta$.
\end{itemize}
\subsection{Global Solution with $\gamma_{ij}^{(1)}$ Dependent on Three Coordinates}
When we introduce dependence on one more coordinate, we can still obtain a global solution, starting with a metric of a similar form
\begin{equation}
ds^2=dr^2+\left[-1+ b(t+x+y)f(r)\right] dt^2+dx^2+2b(t+x+y)f(r)dtdy + dy^2.
\end{equation}
The ansatz satisfies the Bach equation (\ref{bach}), for functions that give non-vanishing response functions, with $f(r)=c_1+c_2 r+ c_3 r^2+c_4 r^3$ and $b(t+x+y)=b_1+b_2\cdot(t+x+y)$, i.e. up to the linear term. Again keeping the coefficients $c_i$, $i=1,2,3,4$, one obtains
\begin{align}
\tau_{ij}&=\left(
\begin{array}{ccc}
-b_1-(t+x+y) b_2 & 0 & -b_1-(t+x+y) b_2 \\
0 & 0 & 0 \\
-b_1-(t+x+y) b_2 & 0 & -b_1-(t+x+y) b_2 \\
\end{array}
\right),\\ P_{ij}&=\left(
\begin{array}{ccc}
\left(b_1+(t+x+y) b_2\right) c_4 & 0 & \left(b_1+(t+x+y) b_2\right) c_4 \\
0 & 0 & 0 \\
\left(b_1+(t+x+y) b_2\right) c_4 & 0 & \left(b_1+(t+x+y) b_2\right) c_4 \\
\end{array}
\right)
\end{align}
that conserve the KVs $\chi^{(1)}=(-1,1,0)$, with the corresponding charge $-2c_3\cdot\left[c_1+c_2\cdot(t+x+y)\right]$, and $\chi^{(2)}=(-1,0,1)$, whose charge vanishes.
The KVs form an Abelian algebra $o(2)$.
The Bach equation can also be approached top-down, analogously to the analysis of the MKR solution in the third chapter. We present that approach in the appendix Classification: Global Solutions: Top-Down Approach. The possibility of solving the Bach equation asymptotically is presented in the appendix Classification: Asymptotic Solutions.
\chapter{One Loop Partition Function}
In this chapter we analyse the conformal gravity one-loop partition function, one of the key quantities studied in the AdS/CFT correspondence.
The computation of the entire partition function is not known in general; however, we can compute it perturbatively. Once the quantum gravity theory is known, it should give a microscopic description of the Bekenstein-Hawking entropy, while currently we are able to compute it in the semi-classical limit, where the entropy is related to the horizon area.
One-loop computations of the partition function allow the determination of the subleading corrections to this semi-classical result.
They also provide corrections to the computation of other thermodynamical quantities.
A large part of the one-loop partition function analysis has been done in lower dimensions \cite{Maloney:2007ud, Bertin:2011jk,Gaberdiel:2010xv,Zojer:2012rj}. In three dimensions, the one-loop partition function of EG gives the result anticipated by Brown and Henneaux: it consists of the sum over boundary excitations of $AdS_3$, the Virasoro descendants of the AdS vacuum. EG with a Chern-Simons term also gives an anticipated result: the Virasoro descendants from EG plus one more part. That provides evidence that the CFT dual to topologically massive gravity (TMG) at the chiral point is logarithmic \cite{Gaberdiel:2010xv}.
In higher dimensions, however, CFTs do not possess properties analogous to those of a $CFT_2$, and in order to compare the partition function on the $AdS$ and $CFT$ sides it is essential to consider a theory and background with a symmetry that allows such a comparison \cite{Beccaria:2014jxa}. For example, the conformal spin-S partition function has been considered in the $CFT_d/AdS_{d+1}$ correspondence with the $S^1\times S^{d-1}$ boundary of $AdS_{d+1}$. For $d=4$, the partition function of the conformal higher spin (CHS) field on the $S^1\times S^3$ background corresponds to the doubled partition function of the CHS field for the positive-energy ground states on the $AdS$ background, which is a particularity of $d=4$. In three and five dimensions it was computed in the form of the MacMahon function \cite{Gupta:2012he}.
\section{Heat Kernel}
The method that we use to study the one-loop partition function is the heat kernel method. In physics it was introduced by Fock, who noted that one can conveniently represent Green's functions as integrals over an auxiliary coordinate ("proper time") of a kernel that satisfies the heat equation, and by Schwinger, who recognised that through these representations, issues related to renormalisation and gauge invariance in external fields become more transparent. It was used by DeWitt as one of the main tools of the covariant approach to quantum theories.
Using the asymptotics of a heat kernel one can infer information about the eigenvalue asymptotics, which amounts to recovering the geometry of a manifold from the spectrum of a differential operator. In that case one benefits from knowing the heat kernel coefficients.
It is used in the computation of the vacuum polarisation, the Casimir effect and the study of quantum anomalies (the context in which we use it here), and it has been considered on various manifolds with and without boundaries. Furthermore, a single computation can be reused in a variety of applications.
The heat kernel method can be used for various backgrounds and operators. When these possess a particular symmetry, for example on a sphere or hyperbolic space, one can compute the partition function analytically. Otherwise, one can study it via the heat kernel coefficients. One formalism that can be used is the worldline method \cite{Fliegner:1994zc,Fliegner:1997rk}.
This formalism has been used for the computation of one-loop EG with matter on general backgrounds; its representation in terms of worldline path integrals reproduces the correct one-loop divergencies \cite{Bastianelli:2013tsa}.
The operators that can be studied include the Laplace operator \cite{Camporesi:1994ga,Gopakumar:2011qs,Beccaria:2016tqy}, GJMS operators \cite{Beccaria:2016tqy,Beccaria:2015vaa}, conformal higher spin operators \cite{Beccaria:2014jxa,Beccaria:2016tqy}, more general ones, or the Paneitz operator \cite{Fradkin:1981iu,Fradkin:1981jc,paneitz}, a differential operator whose construction is important in four-dimensional conformal differential geometry.
Consider the generating functional for the Green's functions of the field $\phi$, analogously to the procedure in the chapter on the variational principle,
\begin{equation}
Z[J]=\int\mathcal{D}\phi\exp(-\mathcal{L}(\phi,J)), \label{pfdef}
\end{equation}
where the case from the first chapter would be obtained for $\phi=g$.
A simple example of the computation of the partition function would be
\begin{equation}
Z=\int D\phi e^{-g^2S(\phi)}
\end{equation}
for a free quantum field $\phi$ (scalar, vector or tensor)
and the coupling $g$. Since $\phi$ is a free field, the computation is straightforward.
The action \begin{equation} S(\phi)=\int_{\mathcal{M}}d^3x\sqrt{g}\phi\Delta\phi \end{equation} contains a second-order differential operator $\Delta$. It acts on the space of normalisable functions on $\mathcal{M}$ and in general possesses a discrete and a continuous spectrum of eigenvalues. For compact $\mathcal{M}$, $\Delta$ has a discrete spectrum of eigenvalues $\lambda_n$, while on non-compact homogeneous manifolds $\Delta$ has a continuous spectrum.
In the latter case the one-loop correction
\begin{equation}
S^{(1)}=-\frac{1}{2}\log\det(\Delta)=-\frac{1}{2}\sum_n\log\lambda_n\label{discr}
\end{equation}
contains a divergence proportional to the volume of $\mathcal{M}$, which can be absorbed into a local counterterm.
The general computation of $S^{(1)}$ is complicated, which manifests itself mainly for gauge fields and gravitons. The straightforward approach is to find a complete basis of normalisable eigenfunctions $\{\psi_n\}$ for which $\Delta\psi_{n}=\lambda_n\psi_n$ and compute the sum directly.
The other option, which we present here, is the heat kernel approach.
To study the one-loop partition function in the path integral representation, we need to perturb the Lagrangian $\mathcal{L}$ to second order in the fluctuations $\phi$,
\begin{equation}
\mathcal{L}=\mathcal{L}_{cl}+\langle\phi,J\rangle+\langle\phi,D\phi\rangle \label{langexp}
\end{equation}
where the first term is the action evaluated on the classical background and $\langle\cdot\,,\cdot\rangle$ is an inner product of the quantum fields, defined by
\begin{equation} \langle \phi_1,\phi_2\rangle = \int d^nx\sqrt{g}\phi_1(x)\phi_2(x).\end{equation}
Here, the classical action also includes the one-point functions, and the entire Lagrangian is considered evaluated on shell, which means the contribution from the linear term (one-point function) has to vanish. The external sources, however, remain arbitrary if one is interested in studying correlation functions.
$D$ is a differential operator; in the simplest case of a quantum scalar field it is the Laplacian with a mass term,
\begin{equation}
D=D_0\equiv-\nabla_{\mu}\nabla^{\mu}+m^2.
\end{equation}
One defines the path integral measures for
\begin{align}
\text{ tensors }&&1&=\int D h_{\mu\nu} \text{Exp}\left( -\langle h,h\rangle \right), \label{pimt}\\
\text{ vectors }&&
1&=\int D \xi_{\mu} \text{Exp}\left( -\langle \xi,\xi\rangle \right), \label{defxi} \\
\text{ and scalars } &&
1&=\int D s \text{Exp}\left( -\langle s,s\rangle \right). \label{pims} \end{align}
The right-hand sides of the above definitions are, strictly speaking, divergent; however, the divergence depends neither on the external sources nor on the geometry of the background, and it may be absorbed into an irrelevant normalisation constant.
To evaluate (\ref{pfdef}) we use
\begin{equation}
Z[J]=e^{-\mathcal{L}_{cl}}\text{det}^{-1/2}(D)\exp\big(\frac{1}{4}JD^{-1}J\big) ,\label{gauseval}\end{equation} which holds for a self-adjoint operator $D$, i.e. when $\langle \phi_1,D\phi_2\rangle=\langle D\phi_1,\phi_2\rangle$ and the domain of definition of $D$ equals that of the corresponding adjoint.
That requirement simultaneously imposes important restrictions on the admissible boundary conditions \cite{Vassilevich:2003xt}.
The formal expression (\ref{hkgen}), to which we can refer as $K(x,y,t,D)=\langle x|\exp(-t D)|y\rangle,$ needs to satisfy
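The Gaussian evaluation (\ref{gauseval}) can be illustrated in a zero-dimensional toy model, where $D$ is just a positive number, the field $\phi$ an ordinary integration variable, and the functional integral an ordinary one; a minimal numerical sketch (the function name is ours):

```python
import math

def gauss_partition(D, J, lo=-40.0, hi=40.0, n=200001):
    # trapezoid rule for Z(J) = ∫ exp(-D φ² - J φ) dφ, the 0-dimensional
    # analogue of Z[J] with L = <φ,J> + <φ,Dφ>
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        phi = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * math.exp(-D * phi * phi - J * phi)
    return s * h

D, J = 1.3, 0.7
# closed form sqrt(pi/D) exp(J²/4D): det^{-1/2}(D) times the source factor
closed = math.sqrt(math.pi / D) * math.exp(J * J / (4 * D))
assert abs(gauss_partition(D, J) - closed) < 1e-9
```

Completing the square reproduces exactly the $\det^{-1/2}(D)\exp\big(\frac14 JD^{-1}J\big)$ structure of (\ref{gauseval}).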
\begin{equation}
(\partial_t+D_x)K(x;y;t;D)=0,\label{heqn}
\end{equation}
a heat conduction equation, with the initial condition
\begin{equation}
K(x;y;0;D)=\delta(x,y).
\end{equation}
The solution for $D=D_0$ on the flat background $M=\mathbb{R}^n$ is
\begin{equation}
K(x;y;t;D_0)=(4\pi t)^{-n/2}\exp\left(-\frac{(x-y)^2}{4t}-tm^2\right).\label{kd0}
\end{equation}
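A quick numerical check of (\ref{kd0}), added here as a sketch: in $n=1$ the kernel integrates over the whole line to $e^{-tm^2}$, i.e. the delta-function initial condition spreads, and the total "heat" decays only through the mass term:

```python
import math

def heat_kernel_1d(x, y, t, m):
    # K(x; y; t; D0) in n = 1 dimension, with D0 = -d²/dx² + m²
    return (4 * math.pi * t) ** -0.5 * math.exp(-(x - y) ** 2 / (4 * t) - t * m * m)

t, m, y = 0.3, 1.2, 0.0
h, N = 0.001, 40000
# Riemann sum over x in [-40, 40]; the Gaussian tail is negligible there
total = sum(heat_kernel_1d(i * h, y, t, m) for i in range(-N, N + 1)) * h
assert abs(total - math.exp(-t * m * m)) < 1e-6
```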
In case the operator $D$ is more general and contains a potential term or a gauge field, (\ref{kd0}) still captures the leading-order singularity for $t\rightarrow 0$, while the subleading terms appear as power-law corrections
\begin{equation}
K(x;y;t;D)=K(x;y;t;D_0)\left(1+tb_2(x,y)+t^2b_4(x,y)+...\right), \label{kexp}
\end{equation}
where the {\it heat kernel coefficients} $b_k(x,y)$ are regular for $y\rightarrow x$. Then, $b_k(x,x)$ are local polynomials of background fields and their derivatives. One can write the propagator $D^{-1}(x,y)$ as
\begin{equation}
D^{-1}(x,y)=\int_0^{\infty}dt K(x;y;t;D)
\end{equation}
and integrate (\ref{kexp})
\begin{equation}
D^{-1}(x,y)\propto 2(4\pi)^{-n/2}\sum_{j=0}^{\infty}\left(\frac{|x-y|}{2m}\right)^{-\frac{1}{2}n+j+1}K_{-\frac{1}{2}n+j+1}(|x-y|m)b_{2j}(x,y).
\end{equation}
Formal integration of the expansion, with $b_0=1$, gives proportionality to the Bessel function $K_{\nu}(z)$; for small argument $z$, the first several kernel coefficients $b_k$ describe the singularities of the propagator at coinciding points.
The part of (\ref{gauseval})
\begin{equation}
W=\frac{1}{2}\ln\det (D)\label{func}
\end{equation}
defines the one-loop effective action, which arises due to the quantum effects of the background fields at the one-loop level.
To relate the functional (\ref{func}) to the heat kernel, one recalls that for each positive eigenvalue $\lambda$ of the operator $D$ it holds, up to an infinite constant, that
\begin{equation}
\ln \lambda =-\int_0^{\infty}\frac{dt}{t}e^{-t\lambda}. \label{intl}
\end{equation}
The constant does not depend on $\lambda$, as one can convince oneself by differentiating both sides of (\ref{intl}) with respect to $\lambda$. Together with $\ln\det(D)=Tr\ln(D)$, this gives the relation with the heat kernel
\begin{equation}
W=-\frac{1}{2}\int_{0}^{\infty}\frac{dt}{t}K(t,D) \label{ktd}
\end{equation}
for
\begin{equation}
K(t,D)=Tr(e^{-tD})=\int d^{n}x\sqrt{g}K(x;x;t;D).\label{trhk}
\end{equation}
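The $\lambda$-independence of the infinite constant in (\ref{intl}) can also be tested numerically: the difference of two such integrals is a finite Frullani-type integral equal to $\ln(\lambda_1/\lambda_2)$. A small sketch, using only the standard library (the substitution $t=e^u$ tames the $dt/t$ measure):

```python
import math

def frullani(a, b, u_lo=-20.0, u_hi=5.0, n=200001):
    # ∫_0^∞ (e^{-a t} - e^{-b t}) dt / t  computed via t = e^u, dt/t = du;
    # this is the difference of the two (individually divergent) integrals
    # -ln a and -ln b with the common infinite constant cancelled
    h = (u_hi - u_lo) / (n - 1)
    s = 0.0
    for i in range(n):
        t = math.exp(u_lo + i * h)
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * (math.exp(-a * t) - math.exp(-b * t))
    return s * h

lam1, lam2 = 2.0, 0.5
assert abs(frullani(lam2, lam1) - math.log(lam1 / lam2)) < 1e-6
```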
In order to analyse (\ref{gauseval}) one can introduce the heat kernel of the Laplacian $\Delta_{(S)}$ for a spin-S field on a manifold $\mathcal{M}_{d+1}$ between two points $x$ and $y$,
\begin{equation}
K_{ab}{}^{(S)}(t;x,y)=\left\langle y,b \right|e^{t\Delta_{(S)}}\left|x,a \right\rangle=\sum\limits_{n}\psi_{n,a}^{(S)}(x)\psi_{n,b}^{(S)}(y)^*e^{tE_n^{(S)}}\label{hkgen}
\end{equation}
in which the spectrum eigenvalues are $E_{n}^{(S)}$, the normalised eigenfunctions that belong to $\Delta_{(S)}$ are $\psi_{n,a}^{(S)}$, while $a$ and $b$ denote the local Lorentz indices of the field.
By tracing over the spin and the spacetime labels we define the trace of the heat kernel
\begin{equation}
K^{(S)}(t)\equiv Tr e^{t\Delta_{(S)}}=\int_{\mathcal{M}}d^{d+1}x\sqrt{g}\sum\limits_aK_{aa}^{(S)}(t;x,x)\label{trhkgen},
\end{equation}
and relate the one-loop partition function to the trace of the heat kernel
\begin{equation}
\ln Z^{(S)}=\ln\det(-\Delta_{(S)})=Tr \ln(-\Delta_{(S)})=-\int\limits_0^{\infty}\frac{dt}{t}Tre^{t\Delta_{(S)}}=-\int\limits_0^{\infty}\frac{dt}{t}K^{(S)}(t) \label{pf}.
\end{equation}
The issue that may arise is that the integrals (\ref{ktd}) and (\ref{pf}) may be divergent at both limits.
At $t\to\infty$, $D$ can have zero or negative eigenvalues that cause infra-red divergencies.
When the mass $m$ is sufficiently large for the integral to converge at the upper limit, these are not encountered.
At the lower limit, the divergencies cannot be removed analogously; to remove them one has to introduce a cut-off at $t=\Lambda^{-2}$,
\begin{equation}
W_{\Lambda}=-\frac{1}{2}\int_{\Lambda^{-2}}^{\infty}\frac{dt}{t}K(t,D).
\end{equation}
The divergent part of $W_{\Lambda}$ in the limit $\Lambda\rightarrow\infty$
\begin{align}
W_{\Lambda}^{div}&=-(4\pi)^{-n/2}\int d^n x\sqrt{g}\bigg[\sum_{2(j+l)<n}\Lambda^{n-2j-2l} b_{2j}(x,x)\frac{(-m^2)^{l}}{l!\,(n-2j-2l)}\\ &+\sum_{2(j+l)=n}\ln(\Lambda)\frac{(-m^2)^l}{l!}b_{2j}(x,x)+\mathcal{O}(\Lambda^0)\bigg]
\end{align}
contains the ultra-violet divergencies for the $b_{k}(x,x)$ with $k\leq n$.
The integral of $b_0(x,x)$ is divergent for non-compact manifolds, and one removes this divergency by subtraction of a "reference heat kernel".
The higher heat kernel coefficients $b_k$ ($k>n$) do not produce divergencies, and their contribution to the effective action for $\Lambda\rightarrow\infty$ reads
\begin{equation}
-\frac{1}{2}(4\pi)^{-n/2}m^n\int d^nx\sqrt{g}\sum_{2j>n}\frac{b_{2j}(x,x)}{m^{2j}}\Gamma\left(j-\frac{n}{2}\right)
\end{equation}
which corresponds to a large mass expansion, applicable for weak and slowly varying background fields.
The property of the heat kernel expansion that we are interested in is the description of the one-loop divergencies and counterterms, needed for the study of quantum anomalies. Besides that, the heat kernel can be used for studying the short-distance behaviour of the propagator, the $1/m$ expansion of the effective action (as we have seen above), perturbative expansions of the effective action, and selected non-perturbative relations for the effective action.
The information is contained in geometric invariants, and there is no distinction between different spins or gauge groups, which allows generalisation to arbitrary space-time dimensions. One computation can be used for many applications, and knowing the structure of the heat kernel is useful for computations on complicated geometries. Among the popular examples of geometries studied via the heat kernel are Dirichlet branes.
The deficiencies of the heat kernel method are that it works less effectively when bosonic and fermionic quantum fields mix, while the biggest one is that {\it "..heat kernel is not applicable beyond the one-loop approximation. It is not clear whether necessary generalisations to higher loop could be achieved at all."} \cite{Vassilevich:2003xt}.
The heat kernel has been used in the treatment of mathematical problems related to the expansion in coefficients \cite{Fliegner:1997rk,Fliegner:1994zc}, for the computation of the Casimir energy \cite{Giombi:2014yra} and Bose-Einstein condensation, for quantum field theory in curved spaces, for the quantisation of gauge theories, and from the point of view of quantum cosmology. It provides information about the zeta function, and one may study it using the DeWitt approach and the path integral.
For further applications one may consult \cite{Vassilevich:2003xt}.
\section{Group Theoretic Approach to Heat Kernel}
\subsection{Heat Kernel for Partially Massless STT Field}
The approach that we have described considers the heat kernel coefficients. For the sphere $S^n$, the hyperbolic space $\mathbb{H}^n$ and their cosets as backgrounds, equations (\ref{ktd}), (\ref{trhk}) and (\ref{pf}) can be solved analytically. Furthermore, fields $\phi$ in (\ref{langexp}) that are symmetric, transverse and traceless simplify the computations.
We consider the determinants (\ref{func}) for symmetric transverse traceless (STT) fields and evaluate the corresponding heat kernel (\ref{pf}). To evaluate the heat kernel (\ref{pf}), one could solve the heat equation (\ref{heqn}) directly, by construction of the eigenvalues and eigenfunctions of the spin-S Laplacian on a manifold $\mathcal{M}$ and computation of the resulting sum, or, for homogeneous $\mathcal{M}$, by group theoretic techniques \cite{Gopakumar:2011qs}.
The evaluation of the heat kernel with group theoretic techniques can be described in four steps:
\begin{enumerate}
\item evaluation of the heat kernel on the symmetric space,
\item then on the coset space of the symmetric space: we consider the heat kernel on the sphere and on its coset space (the "thermal quotient of $S^{2n}$"),
\item analytic continuation to hyperbolic space (the Euclidean hyperboloid), \item and to the coset space of hyperbolic space, which is thermal AdS (the "thermal quotient of $\mathbb{H}^{2n}$").
\end{enumerate}
This kind of analysis is also called harmonic analysis.
The group theoretic approach
is more subtle for even-dimensional spaces. For odd-dimensional spaces the contribution that appears comes from the principal series,
while in the consideration of general tensor fields one can also have contributions from the discrete series. However, these do not contribute for the STT fields that we consider here.
\section{Traced Heat Kernel for Even-Dimensional Hyperboloids}
\subsection{Step 1. Heat Kernel on $S^{2n}$}
The manifold we start with, on which we consider (\ref{hkgen}), is the $2n$-sphere $S^{2n}\simeq SO(2n+1)/SO(2n)$. We denote it here by $\mathcal{M}$.
Knowing $\mathcal{M}$ we deduce $\psi_{n,a}^{(S)}$ and $E_n^{(S)}$.
Given two compact Lie groups $G$ and $H$ with $H\subset G$, we can define a representation $R$ of $G$ with corresponding space $V_R$ of dimension $d_R$, and analogously a unitary irreducible representation $S$ of $H$ with vector space $V_S$ of dimension $d_S$. The indices on $V_S$ (a subspace of $V_R$) are denoted by $a$, and the indices on $V_R$ by $I$.
One then defines the coset space $G/H=\{gH\}$, $g\in G$, by quotienting with the right action of $H$ on $G$, while quotienting additionally by a left action of a discrete group gives $\Gamma\setminus G/H$.
The coset space $G/H$ and $G$ are related by a projection map $\pi:G\rightarrow G/H$ with a corresponding section $\sigma:G/H\rightarrow G$, where $\pi\circ\sigma$ is the identity on $G/H$. This map determines the eigenfunctions $\psi_{n,a}^{(S)}$ of $\Delta_{(S)}$ in terms of matrix elements.
Once we have defined the section $\sigma(gH)=g_0$, for $g_0\in gH$ chosen to obey predefined rules \cite{Gopakumar:2011qs}, we define the matrix element
\begin{equation}
\psi_a^{(S)I}(x)=\mathcal{U}^{(R)}(\sigma(x)^{-1})_{a}^I. \label{ef1}
\end{equation}
Using the notation (\ref{ef1}) for the eigenfunctions, the heat kernel between two points $x$ and $y$ (\ref{hkgen}) is \begin{align}
K_{ab}(x,y;t)=\sum\limits_{R}a_R^{(S)}\mathcal{U}^{(R)}(\sigma (x)^{-1}\sigma(y))_a{}^{b}e^{tE_R^{(S)}}\label{kab}.
\end{align}
The index $n$ of the energy eigenvalue in (\ref{kab}) is replaced by the labels $(R,I)$, and we have introduced $a_R^{(S)}=\frac{d_R}{d_S}\frac{1}{V_{G/H}}$, with $V_{G/H}$ the volume of the $G/H$ space.
We omit the index $I$, since for the coset spaces $SO(N+1)/SO(N)$ and $SO(N,1)/SO(N)$
the representation $R$ contains the representation $S$ only once, and the eigenfunctions with equal $R$ and different $I$ are degenerate \cite{Gopakumar:2011qs}. Then (\ref{trhkgen}) becomes
\begin{equation}
K^{(S)}(x,y;t)\equiv\sum\limits_{a=1}^{d_S}K_{aa}^{(S)}(x,y;t)= \sum_{R}a_R^{(S)}Tr_{S}(\mathcal{U}^{(R)}(\sigma(x)^{-1}\sigma(y)))e^{tE_{R}^{(S)}}\label{css}
\end{equation}
in which we define the
\begin{equation}
Tr_{S}(\mathcal{U})\equiv\sum_{a=1}^{d_S}\langle a,S|\mathcal{U}|a,S\rangle.
\end{equation}
\subsection{Step 2. Heat Kernel on Thermal Quotient of $S^{2n}$}
The thermal quotient of $S^{2n}$ (with $S^{2n}=G/H$) is $\Gamma\backslash G/H$, in which the quotienting is done with a discrete group $\Gamma$, isomorphic to $\mathbb{Z}_N$ for the thermal quotient of $S^{2n}$, that can be embedded in $G$. The section compatible with the $\Gamma$ quotienting is defined through the elements $\gamma\in\Gamma$: a section $\sigma(x)$ is compatible with the quotienting by $\Gamma$ if and only if for every $\gamma$, acting on $x=gH\in G/H$ by $\gamma:gH\rightarrow\gamma\cdot gH$, one has
\begin{equation}\sigma(\gamma(x))=\gamma\cdot\sigma(x)\label{compat}.\end{equation}
This relation allows one to use the {\it method of images} \cite{David:2009xg},
\begin{equation}
K_{\Gamma}^{(S)}(x,y;t)=\sum\limits_{\gamma\in\Gamma}K^{(S)}(x,\gamma(y);t)\label{mirror}
\end{equation}
which allows computation of the traced heat kernel $K_{\Gamma}^{(S)}$ between two points $x$ and $y$ on $\Gamma\backslash G/H$.
Fixing the point $x$ and summing over the images of $y$
gives an expression for the trace of the heat kernel $K_{\Gamma}^{(S)}$,
\begin{equation}
K_{\Gamma}^{(S)}(t)=\sum_{m\in \mathbb{Z}_N}\int_{\Gamma\backslash G/H}d\mu(x)\sum_{a}K_{aa}(x,\gamma^m(x);t)\label{mirim}.
\end{equation}
Here, $d\mu(x)$ is the measure on $\Gamma\backslash G/H$ obtained from the {\it Haar} measure on $G$, $x$ denotes points in $\Gamma\backslash G /H$, and $\Gamma\simeq\mathbb{Z}_N$.
The properties of integral over the quotient space allow to write (\ref{mirim}) as \cite{Gopakumar:2011qs}
\begin{equation}
K_{\Gamma}^{(S)}=\frac{\alpha_1}{2\pi}\sum\limits_{k\in\mathbb{Z}_N}\sum\limits_{R}\chi_{R}(\gamma^k)e^{t E_R(S)} \label{thk}
\end{equation}
with $\frac{\alpha_1}{2\pi}$ a volume factor of the thermal quotient $\gamma$. Here $\chi_R$ denotes the character of the representation $R$, and $E_R(S)$ the eigenvalue of the spin-S Laplacian $\Delta_{(S)}$ on $S^{2n}$. The quotient $\gamma$ is an exponential of the "Cartan" generators of the representation $R$ in (\ref{compat}), here of $SO(2n+1)$, with an explicit example for the four-dimensional case given below.
The representations $R$ of $SO(2n+1)$ are those that contain $S$ when restricted to $SO(2n)$.
The eigenvalues $E_R$, necessary for the evaluation of $K_{\Gamma}^{(S)}$, have been listed in
\cite{Camporesi:1994ga}; they are
\begin{equation}
E_{R,AdS_{2n}}^{(S)}=-(\lambda^2+\rho^2+s)
\end{equation}
with $\rho\equiv\frac{N-1}{2}$ and $N$ the dimension of the space we consider.
\subsection{Step 3. Heat Kernel on $\mathbb{H}^{2n}$}
From the expression $K_{\Gamma}^{(S)}$ for the heat kernel on $S^{2n}$ we can define the analogous expression for $K_{\Gamma}^{(S)}$ on $\mathbb{H}^{2n}$. The characters in (\ref{thk}) are evaluated on the compact symmetric space. On hyperbolic space we can expect the heat kernel to be of the same form, which is exactly what happens: the eigenvalues and eigenfunctions stay the same, while the sum turns into an integral. The unitary representations of $G$ that define the matrix elements are infinite dimensional, since $G$ is no longer compact, and they have been classified for $SO(N,1)$.
\begin{itemize}
\item
Analogously to the compact case, the analysis on Euclidean $AdS$ (the hyperbolic space $\mathbb{H}^{N}$),
\begin{equation}
\mathbb{H}_N\approx SO(N,1)/SO(N)
\end{equation}
with $N$ the dimension of the space, requires writing a section of $SO(N,1)$ obtained by analytic continuation from $SO(N+1)$.
An illustrative example is given in terms of the coordinates and the line element. If we define coordinates on $S^{2n}$ with the metric
\begin{equation}
ds^2=d\theta^2+\cos^2\theta d\phi_1^2+\sin^2\theta d\Omega_{2n-2}^2,
\end{equation}
and an analytic continuation
\begin{align}
\theta\rightarrow-i\rho, && \phi_1\rightarrow i t,\label{acn}
\end{align}
where $\rho,t\in \mathbb{R}$, we analytically continue to
\begin{equation}
ds^2=-(d\rho^2+\cosh^2\rho dt^2+\sinh^2\rho d\omega_{2n-2}^2).
\end{equation}
This is equivalent to continuing $SO(N+1)$ to $SO(N,1)$ via one axis chosen as the time direction, for example the axis "1", and continuing the generators $Q_{1j}\rightarrow iQ_{1j}$ that define the corresponding Lie algebras. One can show this explicitly for a particular number of dimensions. If we express the thermal quotient using coordinates on $S^4$, complex numbers $z_1,z_2,z_3$ which satisfy the condition
\begin{equation}
|z_1|^2+|z_2|^2+|z_3|^2=1,
\end{equation}
the quotient is defined with
\begin{equation}
\gamma: \{\phi_i\}\rightarrow\{\phi_i+\alpha_i\}.\label{tq}
\end{equation}
$\phi_1,\phi_2$ in (\ref{tq}) are the phases of the $z$'s, and $n_i\alpha_i=2\pi$ for some $n_i\in\mathbb{Z}$, while not all $n_i$'s can simultaneously be zero; the
thermal quotient requires \begin{equation}\alpha_i=0\ (\forall i\neq1).\label{alphas}\end{equation}
$\Gamma$ needs to be embedded in $SO(5)$, and for that we decompose the complex numbers into five real coordinates, which embed $S^4$ in $\mathbb{R}^5$:
\begin{align}
x_1&=\cos\theta\cos\phi_1 & x_2&=\cos\theta\sin\phi_1 \nonumber \\
x_3&=\sin\theta\cos\psi\cos\phi_2 & x_4&=\sin\theta\cos\psi\sin\phi_2 \nonumber \\
x_5&=\sin\theta\sin\psi. &
\end{align} We denote the point in $\mathbb{R}^5$ with coordinates $(1,0,0,0,0)$ as the north pole and construct a matrix $g(x)$ which rotates it to a generic point $x$. The element $g(x)\in SO(5)$ carries the point $x$ on $S^4$ and defines a one-to-one correspondence between $SO(5)$ and $S^4$ up to multiplication by an element of $SO(4)$, under which the north pole is invariant.
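As a two-line sanity check (ours, not part of the text), the embedding coordinates satisfy $\sum_i x_i^2=1$ identically:

```python
import math

def embed(theta, phi1, psi, phi2):
    # the real embedding coordinates of S^4 in R^5 used in the text
    return (math.cos(theta) * math.cos(phi1),
            math.cos(theta) * math.sin(phi1),
            math.sin(theta) * math.cos(psi) * math.cos(phi2),
            math.sin(theta) * math.cos(psi) * math.sin(phi2),
            math.sin(theta) * math.sin(psi))

x = embed(0.7, 1.1, 0.4, 2.3)
assert abs(sum(c * c for c in x) - 1.0) < 1e-12  # point lies on the unit S^4
```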
$g(x)$ can be e.g.
\begin{equation}
g(x)=e^{i\phi_1Q_{12}}e^{i\phi_2Q_{34}}e^{i\psi Q_{35}}e^{i\theta Q_{13}}\label{egsec}
\end{equation}
where the $Q$'s are generators of $SO(5)$. We can recognise this as an element of a section in $G$ over $G/H$ and write the action of the thermal quotient (\ref{tq}) on $g(x)$ (\ref{egsec}), as an embedding of $\Gamma$ in $SO(5)$,
\begin{equation}
\gamma : g(x)\rightarrow g(\gamma(x))=e^{i\alpha_1Q_{12}}e^{i\alpha_2Q_{34}}\cdot g(x)=\gamma\cdot g(x).
\end{equation}
Here "$\cdot$" denotes matrix multiplication. We recognise the property (\ref{compat}) and write the thermal section as
\begin{equation}
\sigma_{th}(x)=g(x).
\end{equation}
This property is used in the method of images for the construction of the heat kernel on $\Gamma\backslash SO(N+1)/SO(N)$.
\end{itemize}
The unitary representations of $SO(N,1)$ that we consider are those that contain unitary representations of $SO(N)$.
For $N=2n$ (even-dimensional hyperboloids) these are the unitary representations of the principal series \cite{fuchs, fuchs_schw}\footnote{In the mathematical literature the "representation space", here shortened to "representation", is referred to as a "module".} of $SO(2n,1)$, labelled by
\begin{align}
R=(i\lambda,m_2,m_3,...,m_n), && \lambda\in\mathbb{R}, && m_2\geq m_3\geq...\geq m_n
\end{align}
where the $m_i$ are non-negative (half-)integers, which we collectively denote by $\vec{m}$. These representations contain $S$ of $SO(2n)$ according to the branching rules
\cite{Gopakumar:2011qs}
\begin{equation}
s_1\geq m_2\geq s_2\geq ...\geq m_n \geq |s_n|.
\end{equation}
These simplify for STT fields, since $m_2=s$, while $m_i=s_{i-1}=0$ for $i>2$\footnote{There is an exception for $n=1$, when $|m_2|=s$.}, when the highest weight of the representation is $(s,0,...,0)$.
\subsection{Step 4. Traced Heat Kernel on thermal $AdS_{2n}$}
The traced heat kernel of a tensor on the thermal quotient of $AdS_{2n}$ ($\mathbb{Z} \backslash G/H$, where $G/H$ is the hyperbolic space $\mathbb{H}_{2n}$) involves the $\mathbb{Z}$ identification of the coordinate
\begin{align}
t \sim t+\beta, & &\beta=i\alpha_1
\end{align}
with $\beta$ the inverse temperature. This corresponds to the analytic continuation by (\ref{acn}) of the identification (\ref{alphas}), where we have taken into account that now $\Gamma\approx\mathbb{Z}$, while for the sphere it was $\mathbb{Z}_N$. Therefore, in place of the character of $SO(2n+1)$ in (\ref{thk}) there now appears the Harish-Chandra character, i.e. the global character of the non-compact group $SO(2n,1)$.
Analogously to the (\ref{thk}) the traced heat kernel on thermal $AdS_{2n}$ reads
\begin{equation}
K^{(S)}(\gamma,t)=\frac{\beta}{2\pi}\sum\limits_{k\in\mathbb{Z}}\sum\limits_{\vec{m}}\int\limits_{0}^{\infty}d\lambda\chi_{\lambda,\vec{m}}(\gamma^k)e^{tE_R^{(S)}}\label{thkads},
\end{equation} as shown in \cite{hirai}. One can read off the characters to obtain
\begin{equation}
\chi_{\lambda,\vec{m}}(\beta,\phi_1,\phi_2,...,\phi_n)=\frac{\cos(\beta\lambda)\chi^{SO(2l+1)}_{\vec{m}}(\gamma)}{2^{2l}\sinh^{2l+1}\left(\frac{\beta}{2}\right)}\label{char}
\end{equation}
where for the thermal quotient, $\beta\neq0$, $\phi_i=0$ $ \forall i $ and $l=n-1$ \cite{hirai}. The $\vec{m}=(m_2,...,m_n)$ are highest weights of $\chi^{SO(2l+1)}_{\vec{m}}.$ The character (\ref{char}) has to be inserted into (\ref{thkads}) and integrated. For the STT fields $\vec{m}=(s,0,..,0)$, which we denote with $(s,0)$, and (\ref{thkads}) becomes
\begin{equation}
K^{(S)}(\beta,t)=\frac{\beta}{2^{2l+1}\sqrt{\pi t}}\sum\limits_{k\in\mathbb{Z}_+}\chi^{SO(2l+1)}_{(s,0)}\frac{1}{\sinh^{2l+1}\frac{k\beta}{2}}e^{-\frac{k^2\beta^2}{4t}-t(\rho^2+s)}.
\end{equation}
The $k=0$ term was not included in the summation, since it diverges. The divergence appears because of the infinite volume of the AdS space, over which the coincident heat kernel is integrated. It can be reabsorbed by a redefinition of the parameters of the theory and is not of interest here, since it does not depend on $\beta$.
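The passage from (\ref{thkads}) with the character (\ref{char}) to the closed $k$-sum above uses the Gaussian-cosine integral $\int_0^\infty \cos(a\lambda)\,e^{-t\lambda^2}d\lambda=\tfrac{1}{2}\sqrt{\pi/t}\,e^{-a^2/(4t)}$; a numerical check of this step (our own sketch):

```python
import math

def cos_gauss(a, t, lam_max=20.0, n=200001):
    # trapezoid rule for ∫_0^∞ cos(a λ) e^{-t λ²} dλ; the tail beyond
    # lam_max is exponentially negligible for the chosen t
    h = lam_max / (n - 1)
    s = 0.0
    for i in range(n):
        lam = i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * math.cos(a * lam) * math.exp(-t * lam * lam)
    return s * h

a, t = 1.4, 0.5
closed = 0.5 * math.sqrt(math.pi / t) * math.exp(-a * a / (4 * t))
assert abs(cos_gauss(a, t) - closed) < 1e-7
```

This is the origin of both the $1/\sqrt{\pi t}$ prefactor and the factor $e^{-k^2\beta^2/4t}$ in the $k$-sum.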
For the evaluation of the heat kernel we have to compute the integral
\begin{equation}
\int \frac{dt}{t^{3/2}}e^{-\frac{a^2}{4t}-b^2 t}=\frac{2\sqrt{\pi}}{a}e^{-ab}
\end{equation}
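This closed form can be checked numerically; the following sketch (plain Python, trapezoidal rule; function names are mine, not from the text) compares both sides for sample values of $a$ and $b$.

```python
import math

def lhs(a, b, n=200000, t_max=60.0):
    """Trapezoidal approximation of int_0^t_max dt t^(-3/2) exp(-a^2/(4t) - b^2 t);
    the integrand vanishes rapidly at both ends of the interval."""
    h = t_max / n
    total = 0.0
    for i in range(1, n + 1):
        t = i * h
        f = t ** -1.5 * math.exp(-a * a / (4.0 * t) - b * b * t)
        total += 0.5 * f if i == n else f
    return h * total

def rhs(a, b):
    # closed form: (2 sqrt(pi) / a) * exp(-a b)
    return 2.0 * math.sqrt(math.pi) / a * math.exp(-a * b)
```

For $a=b=1$ the two sides agree to better than $10^{-3}$ with this step size.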
which enters the calculation of the one-loop determinant via
\begin{equation}
-\log\det(-\Delta_{(S)}+m_S^2)=\int\limits_0^{\infty}\frac{dt}{t}K^{(S)}(\beta,t)e^{-m_S^2t}.
\end{equation}
This leads to the one-loop determinant for STT fields,
\begin{equation}
-\log\det(-\Delta_{(S)}+m_S^2)=\frac{1}{2^{2l}}\sum\limits_{k\in\mathbb{Z}_+}\chi^{SO(2l+1)}_{(s,0)}\frac{1}{\sinh^{2l+1}\frac{k\beta}{2}}\frac{1}{k}e^{-k\beta\sqrt{\rho^2+s+m_S^2}},
\end{equation}
which can more conveniently be rewritten as
\begin{equation}
-\log\det(-\Delta_{(S)}+m_S^2)=\sum\limits_{k\in\mathbb{Z}_+}\chi^{SO(2l+1)}_{(s,0)}\frac{2}{(1-e^{-k\beta})^{2l+1}e^{k\beta l }e^{\frac{k\beta}{2}}}\frac{1}{k}e^{-k\beta \sqrt{\rho^2+s+m_S^2}}.\label{main1}
\end{equation}
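The step from the $\sinh$ form to (\ref{main1}) uses $\sinh y=e^{y}(1-e^{-2y})/2$; a quick numerical check of the resulting identity $\frac{1}{2^{2l}\sinh^{2l+1}(k\beta/2)}=\frac{2}{(1-e^{-k\beta})^{2l+1}e^{k\beta l}e^{k\beta/2}}$ (function names are mine):

```python
import math

def sinh_form(k, beta, l):
    # 1 / (2^(2l) sinh^(2l+1)(k beta / 2))
    return 1.0 / (2 ** (2 * l) * math.sinh(0.5 * k * beta) ** (2 * l + 1))

def exp_form(k, beta, l):
    # 2 / ((1 - e^(-k beta))^(2l+1) e^(k beta l) e^(k beta / 2))
    q = math.exp(-k * beta)
    return 2.0 / ((1.0 - q) ** (2 * l + 1)
                  * math.exp(k * beta * l) * math.exp(0.5 * k * beta))
```

Both forms agree to machine precision for any $k,\beta>0$ and integer $l$.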
From the analogous expression for the heat kernel in odd dimensions
\begin{equation}
-\log\det(-\nabla^2+m_{s}^2)=\sum_{k\in\mathbb{Z}_+}\chi_{(s,0)}^{SO(d-1)}\frac{2 e^{-nk\beta}}{(1-e^{-k\beta})^{2n}k}e^{-k\beta\sqrt{s+n^2+m_s^2}}\label{odd},
\end{equation}
we can infer the heat kernel on Euclidean AdS spaces of arbitrary dimension. Using the substitutions $l=n-1$, $\rho=\frac{d-1}{2}$ and $q=e^{-\beta}$, with $d$ the dimension of AdS equal to $2n+1$ in odd and $2n$ in even dimensions, the result is
\begin{equation}
\log Z_{s,d}(AdS_d)=\sum_{k=1}^{\infty}\frac{(-1)}{k}\frac{q^{k(d-3+s)}}{(1-q^k)^{(d-1)}}\chi_{s,d}\label{zsd},
\end{equation}
where
\begin{equation}
\chi_{s,d}=(2s+d-3)\frac{(s+d-4)!}{(d-3)!\,s!}.
\end{equation}
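For low spins and dimensions this degeneracy reproduces the familiar values, e.g. $2s+1$ in $d=4$ and $\frac{1}{6}(2s+3)(s+2)(s+1)$ in $d=6$; a minimal sketch (function name is mine, assuming integer $s\ge0$ and $d\ge4$):

```python
from math import factorial

def chi(s, d):
    """chi_{s,d} = (2s + d - 3) (s + d - 4)! / ((d - 3)! s!),
    assuming integer s >= 0 and d >= 4; the division is exact."""
    return (2 * s + d - 3) * factorial(s + d - 4) // (factorial(d - 3) * factorial(s))
```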
\section{One Loop Partition Function in Four Dimensions}
The one-loop partition function of the gravity theory (\ref{pfdef}) can be written as a product of three factors
\begin{align}
Z_{\text{one-loop}}=\int \mathcal{D}h_{\mu\nu}\times \text{ghost}\times \exp(-\delta^{(2)}S).\label{loopstructure}
\end{align}
The ghost term denotes the determinants originating from the elimination of gauge degrees of freedom; they are referred to as \textit{ghost determinants}. $\mathcal{D}h_{\mu\nu}$ is the path-integral measure over the perturbations $h_{\mu\nu}$ around the background, which in our case is thermal Euclidean $AdS_{d}$. The term $\exp(-\delta^{(2)}S)$ denotes the exponential of the second variation of the action of the theory.
Once we have obtained the first variation of the action (\ref{var1}), we compute the second variation by varying the action a second time,
\eq{
\delta^{(2)}S=\alpha\int d^4 x \left[ \delta EOM^{\alpha\beta}\delta g_{\alpha\beta}+EOM^{\alpha\beta}\delta^{(2)}g_{\alpha\beta} \right].
}{acvar2}
Since the contribution to the one-loop partition function comes from the bulk term, in the second variation we consider only the variation of the bulk term, i.e. of the EOM, and not the boundary term. The contribution comes essentially from the variation of the Bach tensor equation $\big(\nabla^{\delta}\nabla_{\gamma}+\frac{1}{2}R_{\gamma}^{\delta} \big)C^{\gamma}{}_{\alpha\delta\beta}=0$ (\ref{bach}).
We define the metric split
\begin{equation}
g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}\label{metricspl}
\end{equation}
in which $\overline{g}_{\mu\nu}$ is the background $AdS_{4}$ metric and $h_{\mu\nu}$ is the small perturbation of the metric $g_{\mu\nu}$ around the background
\begin{align}
\delta g_{\mu\nu}=h_{\mu\nu} && \delta g^{\mu\nu}=-h^{\mu\nu}.
\end{align}
Indices in the perturbative terms are raised and lowered with the background metric, while the indices of full tensors are raised and lowered with the entire metric, as in the first chapter.
The second variation of the metric is
\begin{align}
\delta^{(2)}g_{\mu\nu}=0 && \delta^{(2)}g^{\mu\nu}=-\delta h^{\mu\nu}=2 h^{\mu}{}_{\rho}h^{\rho\nu}.
\end{align}
To evaluate the second variation of the action we take into account simplifications on the $AdS_4$ background. We can express the Riemann tensor using the cosmological constant $\Lambda$ and the background metric $\bar{g}$, \begin{equation}R^{\mu\nu\rho\sigma}=\Lambda(-\bar{g}^{\mu \sigma}\bar{g}^{\nu\rho}+\bar{g}^{\mu\rho}\bar{g}^{\nu\sigma})\label{riema}.\end{equation} The Ricci tensor and the Ricci scalar simplify correspondingly and read $R^{\mu\nu}=3\Lambda \bar{g}^{\mu\nu}$ and $R=12\Lambda$, respectively.
After the second variation of the action (\ref{acvar2}), we introduce a decomposition of the metric perturbation $h_{\mu\nu}$,
\begin{equation}
h_{\mu\nu}(h^{TT},h,\xi)=h_{\mu\nu}^{TT}+\frac{1}{4}g_{\mu\nu}h+2\nabla_{(\mu}\xi_{\nu)}.\label{decomp4}
\end{equation}
Here, $h_{\mu\nu}^{TT}$ is the transverse-traceless part of the perturbation, $h$ its trace, and $\nabla_{(\mu}\xi_{\nu)}$ the gauge part. The transverse-traceless part satisfies by definition $h^{TT\mu}{}_{\mu}=\nabla^{\mu}h^{TT}_{\mu\nu}=0$. The gauge part of the metric can be further decomposed into a transverse vector and a scalar,
\eq{\xi_{\mu}(\xi^{T},s)=\xi^{T}_{\mu}+\nabla_{\mu}s,}{dec2} where the transverse part satisfies by definition $\nabla^{\mu}\xi_{\mu}^{T}=0$.
Once the decomposition of the metric and of its gauge part are inserted in the action, we need to verify that the terms in the decomposition containing the trace, the scalar and the transverse vector fields vanish. This is due to the gauge and diffeomorphism invariance of the action. Upon commuting covariant derivatives one indeed obtains an action consisting only of transverse-traceless tensors,
\begin{align}
\delta^{(2)}S&=\int d^4x \left(8 \Lambda^2 h^{\text{TT}}_{ab} h^{\text{TT}ab} -6 \Lambda h^{\text{TT}ab} \nabla_{c}\nabla^{c}h^{\text{TT}}_{ab} + h^{\text{TT}ab} \nabla_{d}\nabla^{d}\nabla_{c}\nabla^{c}h^{\text{TT}}_{ab} \right)\label{htt}.
\end{align}
The result is consistent with the one from \cite{Giombi:2014yra} and with the linearised EOM from \cite{Lu:2011ks} and \cite{Lu:2011zk}. Following the prescription (\ref{loopstructure}), we need to evaluate the path integral over the perturbations $\mathcal{D}h_{\mu\nu}$, the ghost determinant and the second variation of the action (\ref{htt}). We insert the decomposition (\ref{decomp4}) of the metric into the second variation of the action in the path integral. The degrees of freedom $\xi$ and $h$ can be integrated over trivially, since the action is diffeomorphism and scale invariant: they describe the volume of the gauge group, by which we have to divide the path-integral measure. The ghost determinant $Z_{gh}$ is defined as the Jacobian of the change of variables $h_{\mu\nu}\rightarrow(h_{\mu\nu}^{TT},h,\xi_{\mu})$ from (\ref{decomp4}),
\begin{equation}
\mathcal{D}h_{\mu\nu}=Z_{gh}\mathcal{D}_{\mu\nu}^{TT}\mathcal{D}\xi_{\mu}\mathcal{D}h.
\end{equation}
One can further change the variables \begin{equation}\xi_{\mu}\rightarrow (\xi_{\mu}^{T},s), \label{decj1} \end{equation} which
decomposes $\xi_{\mu}$ as in (\ref{dec2}). This decomposition introduces an additional Jacobian $J_1$, defined by $\mathcal{D}\xi_{\mu}=J_1\mathcal{D}\xi_{\mu}^{T}\mathcal{D}s$, which, using the normalisations (\ref{defxi}), (\ref{pims}) and the ultralocal invariant scalar products \cite{Gaberdiel:2010xv}
\begin{align}
\langle h,h'\rangle&=\int d^4x\sqrt{\overline{g}}h^{\mu\nu}h'_{\mu\nu} \nonumber \\
\langle \xi,\xi' \rangle&=\int d^4x \sqrt{\overline{g}}\xi^{\mu}\xi_{\mu}' \nonumber \\
\langle s,s'\rangle&=\int d^4x\sqrt{\overline{g}}ss' \label{ultra}
\end{align} reads \begin{align}
1&=\int D\xi_{\mu}^TDs J_1\text{Exp}\left(-\int d^4x\sqrt{g}\,\xi_\nu(\xi^T,s)\xi^{\nu}(\xi^T,s) \right) \label{detj11} \\
&=\int D\xi_{\mu}^{T} Ds J_1\text{Exp} \left( -\int d^4x\sqrt{g}\left(\xi_{\nu}^{T}\xi^{T\nu}-s\nabla^2s\right) \right) \label{detj12} \\
&=J_1\left[ \det(-\nabla^2)_0 \right]^{-1/2}. \label{detj1}
\end{align}
When going from (\ref{detj11}) to (\ref{detj12}) we have inserted and evaluated the decomposition of the gauge part (\ref{dec2}), while when going from (\ref{detj12}) to (\ref{detj1}) we recognised a Gaussian integral. The index ``0'' denotes the determinant of a scalar field, while the indices ``1'' and ``2'' will denote the determinants of vector and tensor fields, respectively. The decomposition of the metric
\begin{equation}
h_{\mu\nu}(h^{TT},h,\xi)=h_{\mu\nu}^{TT}+\frac{1}{4}\overline{g}_{\mu\nu}h+2\nabla_{(\mu}\xi^T_{\nu)}+2\nabla_{\mu}\nabla_{\nu}s \label{decomp4full},
\end{equation}
which corresponds to the change of variables $h_{\mu\nu}\rightarrow(h^{TT},h,\xi^T,s)$, contributes the Jacobian factor $J_2$
\begin{align}
1&=\int Dh_{\mu\nu}^{TT}D\xi_{\mu}^{T}DhDs J_2\times\text{Exp}\left\{ -\int d^4x \sqrt{g} h_{\mu\nu}(h_{\mu\nu}^{TT},h,\xi^T_{\mu},s)h^{\mu\nu}(h_{\mu\nu}^{TT},h,\xi^T_{\mu},s) \right\}\nonumber \\ &=\int Dh_{\mu\nu}^{TT}D\xi_{\mu}^{T}DhDs J_2\times \nonumber \\ & \text{Exp}\bigg\{ -\int d^4x \sqrt{g} \bigg[h^{\text{TT}}_{\mu \nu} h^{\text{TT}\mu \nu} + \tfrac{1}{4} h^2 -\xi ^T_{\mu} (6 \Lambda+2\nabla_{\nu}\nabla^{\nu} ) \xi ^{T\mu} \nonumber \\ &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + s (12 \Lambda\nabla_{\mu}\nabla^{\mu}+3\nabla_{\nu}\nabla^{\nu}\nabla_{\mu}\nabla^{\mu} )s \bigg] \bigg\} \nonumber \\
&= J_2 \left[\det\left(12 \Lambda\nabla_{\mu}\nabla^{\mu}+3\nabla_{\nu}\nabla^{\nu}\nabla_{\mu}\nabla^{\mu} \right)_{0}\right]^{-1/2}\left[\det\left( -6 \Lambda-2\nabla_{\nu}\nabla^{\nu}\right)_{1}\right]^{-1/2}. \label{jac}
\end{align}
Now we can write the partition function for CG in four dimensions
\begin{equation}
Z^{(4)}_{CG}=Z_{gh}\int Dh_{\mu\nu}^{TT} \text{Exp}(-\delta^{(2)}S) \label{zcg4}
\end{equation}
with ghost determinant $Z_{gh}$
\begin{equation}
Z_{gh}=\frac{J_2}{J_1}=\left[ \det(4\Lambda+\nabla^2)_0 \right]^{1/2}\left[\det(-3\Lambda-\nabla^2)_1 \right]^{1/2}.
\end{equation}
The partition function in terms of the determinants reads
\begin{equation}
Z^{(4)}_{CG}=\frac{\left[ \det(4\Lambda+\nabla^2)_0 \right]^{1/2}\left[\det(-3\Lambda-\nabla^2)_1 \right]^{1/2}}{\left[\det(-4\Lambda+\nabla^2)_2\right]^{1/2}\left[\det(-2\Lambda+\nabla^2)_2\right]^{1/2}} \label{zcgd}
\end{equation}
which was studied in \cite{Tseytlin:2013jya}, equation (3.16), and in the references therein, namely \cite{Fradkin:1985am}, \cite{Tseytlin:1984wj} and \cite{Fradkin:1983zz}. Equation (\ref{zcgd}) agrees with these partition functions once $\Lambda$ is set to $-1$. In (\ref{zcgd}) one can recognise the partition function of EG in four dimensions,
\begin{equation}
Z^{(4)}_{CG}=Z_{EG}\frac{\left[ \det(4\Lambda+\nabla^2)_0 \right]^{1/2}}{\left[\det(-4\Lambda+\nabla^2)_2\right]^{1/2}} \label{zcgeg},
\end{equation}
the determinant of the partially massless mode that appears in CG, $\left[\det(-4\Lambda+\nabla^2)_2\right]^{1/2}$, and that of the conformal ghost, $\left[ \det(4\Lambda+\nabla^2)_0 \right]^{1/2}$. Whether a determinant is massless, partially massless or massive can be determined from the spin and the dimension of the field.
From (\ref{main1}) one can compute the partition function on thermal $AdS_4$ in terms of the characters of the highest-weight representations of the SO(3) group, the dimension, and the spin $s$ of the fields,
\begin{align}
\log Z^{(4)}_{CG}=\sum_k^{\infty}\frac{-1}{k(1-e^{-k\beta})^3}e^{-k\beta\left(\frac{-3}{2}\right)}&\bigg( \chi_{(0,0)}^{SO(3)}e^{-k\beta\frac{5}{2}}+\chi_{(2,0)}^{SO(3)}e^{-k\beta\frac{3}{2}} \nonumber \\ &+\chi_{(1,0)}^{SO(3)}e^{-k\beta\frac{1}{2}}+\chi_{(2,0)}^{SO(3)}e^{-k\beta\frac{1}{2}} \bigg). \label{zcg}
\end{align}
Using $q=e^{-\beta}$ and reading off the characters \cite{Beccaria:2014jxa}, $\chi_{(s,0)}^{SO(3)}(\phi_1)=2s+1$ for $\phi_1=0$, (\ref{zcg})
becomes
\begin{equation}
\log Z_{CG}^{(4)}=-\sum_{k\in \mathbb{Z}_{+}}\frac{q^{2k}(-5+4q^{2k}-5q^k)}{(1-q^k)^3k}.\label{zcg4d}
\end{equation}
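For $0<q<1$ this series converges rapidly, since the $k$-th term is suppressed by $q^{2k}$; a minimal numeric sketch of the partial sums (function name is mine):

```python
def log_z_cg4(q, n_terms):
    # partial sum of  -sum_k q^(2k) (-5 + 4 q^(2k) - 5 q^k) / ((1 - q^k)^3 k)
    total = 0.0
    for k in range(1, n_terms + 1):
        qk = q ** k
        total -= qk ** 2 * (-5.0 + 4.0 * qk ** 2 - 5.0 * qk) / ((1.0 - qk) ** 3 * k)
    return total
```

At $q=1/2$, already fifty terms determine the sum to machine precision.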
\section{One Loop Partition Function in Six Dimensions}
CG in six dimensions has attracted much interest, since it belongs to the six-dimensional theories of gravity related to string theory \cite{Bastianelli:2000hi}. We consider it in the context in which it arises in the AdS/CFT correspondence. From the string theory perspective it is related to tensionless strings \cite{Baulieu:2001pu}, relevant for the (0,2) theory \cite{Henningson:1998gx}; it plays an important role in conformal supergravity \cite{Beccaria:2015uta} and arises from the seven-dimensional gravitational effective action within the $AdS_7/CFT_6$ correspondence \cite{Beccaria:2015ypa}. It has been studied in the ordinary-derivative approach \cite{Metsaev:2010kp} and via the geometric analysis of the anomalies \cite{Deser:1993yx}, which we discuss in more detail below. Imposing the right boundary conditions on conformal gravity in four dimensions, one can obtain EG \cite{Maldacena:2011mk}. The procedure has been translated into a formalism that generalises the parameter choice in critical gravity leading to EG \cite{Lu:2011ks}; that procedure allowed a generalisation to six dimensions \cite{Lu:2011ks} and led to analogous conclusions. Its relation to the Seeley-DeWitt coefficients was studied in \cite{Bastianelli:2000hi}, and the logarithmic divergence of the one-loop effective action was also studied on the backgrounds $S^6, CP^3, S^2\times S^4, S^2\times CP^2, S^3\times S^3$ and $S^2\times S^2\times S^2$ \cite{Pang:2012rd}.
Conformal anomaly of a classically Weyl invariant theory in six dimensions can be written in a general form \cite{Beccaria:2015ypa}
\begin{align}
A_6\equiv(4\pi)^3\langle T\rangle=-a E_6+ W_6 & & W_6=c_1I_1+c_2 I_2+c_3 I_3.
\end{align}
We denote by $E_6=\epsilon_6\epsilon_6 RRR$ the six-dimensional Euler density, where $\epsilon_6$ are Levi-Civita tensors, $R$ Riemann tensors, and $a$ and $c_i$ are the coefficients of the theory. The terms $I_1,I_2,I_3$ are Weyl invariants \cite{Bastianelli:2000rs,Bastianelli:2000hi}.
Based on their geometry, conformal anomalies \cite{Deser:1976yx,Fradkin:1983tg} can be divided into two different classes. One consists of Weyl invariants that vanish in integer dimensions; it arises from finite and scale-free contributions to the effective gravitational action and is proportional to the Euler term. The other requires a scale and is correlated with conformal scalar polynomials that involve powers of the Weyl tensor and its derivatives.
In even integer dimensions, the effective gravitational theories contain terms that do not simultaneously preserve diffeomorphism and Weyl symmetries. In the case of free matter, one cannot simultaneously preserve transversality and tracelessness of the stress tensor correlators.
To maintain diffeomorphism invariance, the dilatation becomes equal to a scale (i.e. constant Weyl) transformation.
For the infinitesimal variation of the metric
$\delta g_{\mu\nu}=2\phi(x)g_{\mu\nu}$,
the gravitational effective action $W[g_{\mu\nu}]$, obtained by integrating out the matter fields, gives the conformal anomaly
\begin{equation}
\mathcal{A}(g_{\mu\nu})\equiv\frac{\delta W}{\delta \phi(x)}.
\end{equation}
When the action does not contain a scale $\mu$, the anomaly has a vanishing integral
\begin{equation}
\frac{\delta W}{\delta \ln \mu^2}=\int d^d x\mathcal{A}=0\label{typea}
\end{equation}
and since the scalar density $\mathcal{A}$ needs to be related to a topological invariant, the available parity-even candidate is the Euler density.
If, on the other hand, $W$ does contain a scale, the anomaly must reflect this,
\begin{equation}
\frac{\delta W}{\delta \ln \mu^2}=\int d^d x \mathcal{A}\neq0.\label{typeb}
\end{equation}
Explicitly, in the six-dimensional case the anomalous variation can be written as
\begin{equation}
\delta_{\sigma}W[g]=\int d^6x\sqrt{g}\phi(x)\mathcal{A}(x)
\end{equation}
which, upon functional differentiation with $\frac{2}{\sqrt{g}}\frac{\delta}{\delta g_{ab}}$,
produces an anomalous trace of the stress tensor,
\begin{equation}
\langle T^a{}_{a}\rangle =\mathcal{A}(x),
\end{equation}
dependent on the background curvature \cite{Bastianelli:2000rs}.
The type A anomaly is (\ref{typea}), while the type B anomaly is the one with a non-vanishing integral (\ref{typeb}). The third type, the trivial anomaly, is local and can be removed with a local counterterm \cite{Deser:1993yx}.
The anomalies have been revisited in \cite{Bastianelli:2000hi,Bastianelli:2000rs}, while their origin dates back to the anomalies found in dimensional regularisation \cite{Capper:1974ic}.
They can be computed using Feynman graphs, heat kernel techniques by DeWitt \cite{DeWitt:1964oba}, or a quantum mechanical representation \cite{Fradkin:1982kf}.
In the geometric classification into type A, type B and trivial anomalies \cite{Bastianelli:2000rs}, the invariants that belong to the type B anomaly are\begin{align}
I_1&=C_{\mu\nu\rho\sigma}C^{\mu\lambda\kappa\sigma}C_{\lambda}{}^{\nu\rho}{}_{\kappa} \\
I_2&=C_{\mu\nu\rho\sigma}C^{\rho\sigma\lambda\kappa}C_{\lambda\kappa}{}^{\mu\nu} \\
I_3&=C_{\mu\nu\rho\sigma}\left(\delta^{\mu}_\lambda\Box+4R^\mu_\lambda-\frac{6}{5}R\delta^\mu_\lambda\right)C^{\lambda\nu\rho\sigma}+\nabla_\mu J^\mu \label{invsb}
\end{align}
where the divergence of the tensor $J^{\mu}$ constitutes a trivial Weyl anomaly \cite{Bastianelli:2000rs}. It can be written as
\begin{equation}
\nabla_iJ^i=-\frac{2}{3}M_5-\frac{13}{3}M_{6}+2M_7+\frac{1}{3}M_8\label{tan}
\end{equation}
where we define the basis \begin{align}
K_1&=R^3 && K_2=RR_{ab}^2 && K_3=RR_{abmn}^2 \nonumber \\
K_4&=R_a^mR_m^iR_i^a && K_5=R_{ab}R_{mn}R^{mabn} && K_6=R_{ab}R^{amnl}R^b_{mnl}\nonumber \\
K_7&=R_{ab}{}^{mn}R_{mn}{}^{ij}R_{ij}{}^{ab}&& K_8=R_{mnab}R^{mnij}R_i{}^{ab}{}_{j}&& K_{9}=R\nabla^2R \nonumber \\
K_{10}&=R_{ab}\nabla^2R^{ab} && K_{11}=R_{abmn}\nabla^2R^{abmn} && K_{12}=R^{ab}\nabla_a\nabla_bR \nonumber \\
K_{13}&=(\nabla_aR_{mn})^2 && K_{14}=\nabla_aR_{bm}\nabla^{b}R^{am} && K_{15}=(\nabla_iR_{abmn})^2 \nonumber \\
K_{16}&=\nabla^2R^2 && K_{17}=\nabla^4R, \label{ks}
\end{align}
and, for $i=5,6,7,8$, the combinations \begin{align}
M_5&=6K_6-3K_7+12K_8+K_{10}-7K_{11}-11K_{13}+12K_{14}-4K_{15}\\
M_6&=-\frac{1}{5}K_9+K_{10}+\frac{2}{5}K_{12}+K_{13}\\
M_7&=K_{4}+K_5-\frac{3}{20}K_9+\frac{4}{5}K_{12}+K_{14}\\
M_8&=-\frac{1}{5}K_9+K_{11}+\frac{2}{5}K_{12}+K_{15}.\end{align}
These terms are cancelled by the local functionals (counterterms obtained as variation of local functionals) given in the Appendix: One Loop Partition Function.
Due to $J^{\mu}$, $I_3$ is locally Weyl invariant when multiplied with the measure $\sqrt{g}$, and it vanishes for the Einstein spaces in which we are interested \cite{Lu:2011ks,Henningson:1998gx}.
The general combination of invariants $\sum_{i=1}^3 c_i I_i+c_4 E_6$ does not admit the Einstein metric as a solution in general; in order for the Einstein metric to satisfy the EOM obtained from the variation of the action, the Lagrangian has to be
\begin{equation}
\mathcal{L}=\kappa\left( 4 I_1+I_2-\frac{1}{3}I_3-\frac{1}{24}E_6\right).
\end{equation}
The second variation of the action
\begin{equation}
\mathcal{S}=\kappa\int d^6x \sqrt{|g|} \left( 4 I_1+I_2-\frac{1}{3}I_3-\frac{1}{24}E_6\right).
\end{equation}
leads, analogously to the four-dimensional case, to the second variation in terms of the linearised EOM,
\begin{equation}
\delta^{(2)}S=\int d^6 x \sqrt{|g|}\, \delta EOM^{\mu\nu}\delta g_{\mu\nu}.\label{var26}
\end{equation}
In (\ref{var26}) we insert the linearised expansion of the metric (\ref{metricspl}) and define the variations analogously to the four-dimensional case.
The tensors are now evaluated on the $AdS_6$ background, on which the Riemann tensor is expressed in terms of the cosmological constant $\Lambda$ and the background metric $\overline{g}_{\mu\nu}$ (\ref{riema}), as in the four-dimensional case.
The Ricci tensor becomes $R^{\mu\nu}=5\Lambda\overline{g}^{\mu\nu}$, while the Ricci scalar is $R=30\Lambda$. In addition to the conventions of the four-dimensional case, we use the transverse-traceless gauge $\nabla^{\mu}h_{\mu\nu}=0$ and $h^{\mu}{}_{\mu}=0$. The linearised EOM lead to the action
\begin{equation}
\delta^{(2)}S=\int d^6x \left(8 \Lambda^2 h^{TT}_{ab} h^{TT}{}^{ab} -6 \Lambda h^{TT}{}^{ab} \nabla_{c}\nabla^{c}h^{TT}_{ab} + h^{TT}{}^{ab} \nabla_{d}\nabla^{d}\nabla_{c}\nabla^{c}h^{TT}_{ab}\right)\label{2var6}.
\end{equation}
To evaluate the one loop partition function, we have to insert (\ref{2var6}) into (\ref{loopstructure}).
The contribution from the path integral arises from the decomposition of the metric
\begin{equation}
h_{\mu\nu}=h^{\text{TT}}_{\mu\nu} + \tfrac{1}{6} \bar{g}_{\mu\nu} h +2\nabla_{(\mu}\xi_{\nu)}.
\end{equation}
We divide the path-integral measure by the gauge-group volume; for the change of variables $h_{\mu\nu}\rightarrow(h_{\mu\nu}^{TT},h,\xi_{\mu})$,
\begin{equation}
\mathcal{D}h_{\mu\nu}=Z_{gh}^{(6)}\mathcal{D}h^{TT}_{\mu\nu}\mathcal{D}\xi_{\mu}\mathcal{D}h.
\end{equation}
Using the definitions of the path-integral measure for tensors, vectors and scalars, (\ref{pimt}), (\ref{defxi}) and (\ref{pims}) respectively, and the ultralocal invariant scalar products (\ref{ultra}) \cite{Grumiller:2009sn}, one can decompose $\xi_{\mu}$ as in (\ref{decj1}) and, from the change of variables $\mathcal{D}\xi_{\mu}\rightarrow\mathcal{D}\xi_{\mu}^{T}\mathcal{D}s$, obtain
\begin{align}
1&=\int D\xi_{\mu}^{T} Ds J_1^{(6)}\text{Exp} \left( -\int d^6x\sqrt{g}(\xi_{\nu}^{T}\xi^{T\nu}-s\nabla^2s) \right) \nonumber \\
&=J_1^{(6)}\left[ \det(-\nabla^2)_0 \right]^{-1/2}.
\end{align}
The decomposition of the metric in six dimensions
\begin{equation}
h_{\mu\nu}=h^{\text{TT}}_{\mu\nu} + \tfrac{1}{6} \bar{g}_{\nu\mu} h + \nabla_{\mu}\xi ^T_{\nu} + 2 \nabla_{\mu}\nabla_{\nu}s + \nabla_{\nu}\xi ^T_{\mu},
\end{equation}
leads to a ghost determinant analogous to the one in the four-dimensional case. Using the change of variables $h_{\mu\nu}\rightarrow(h^{TT},h,\xi^T,s)$ one finds
\begin{equation}
1=\int J_2^{(6)} \mathcal{D}h_{\mu\nu}^{TT}\mathcal{D}h\mathcal{D}\xi_{\mu}^T\mathcal{D}s \exp\left(-\left<h(h^{TT},h,\xi^T,s),h(h^{TT},h,\xi^T,s) \right>\right)
\end{equation} and obtains \begin{align}
1 &=\int Dh_{\mu\nu}^{TT}D\xi_{\mu}^{T}DhDs J^{(6)}_2 \times \nonumber \\ \nonumber &
Exp\bigg\{ -\int d^6x \sqrt{g} \bigg[ h^{\text{TT}}_{\mu\nu} h^{\text{TT}\mu\nu} + \tfrac{1}{6} h^2 -\xi ^{T \mu} (10 \Lambda +2 \nabla_{\nu}\nabla^{\nu})\xi ^T_{\mu} \\ & + s(20 \Lambda \nabla_{\mu}\nabla^{\mu} + \tfrac{10}{3} \nabla_{\nu}\nabla^{\nu}\nabla_{\mu}\nabla^{\mu})s \bigg] \bigg\},
\end{align}
which defines $J^{(6)}_2$
\begin{align}
\mathcal{D}h_{\mu\nu}&=J^{(6)}_2\mathcal{D}h_{\mu\nu}^{TT}\mathcal{D}h\mathcal{D}\xi_{\mu}^T\mathcal{D}s \\
J^{(6)}_2&=\left[ \det(\nabla^2)_0 \right]^{1/2}[\det(5\Lambda+\nabla^2)_1]^{1/2}[\det(6\Lambda+\nabla^2)_0]^{1/2} \label{j6}.
\end{align}
Here we use the property $\det A\cdot \det B=\det (AB)$. Computing the ghost determinant
\begin{equation}
Z_{gh}^{(6)}=\frac{J^{(6)}_2}{J^{(6)}_1}=[\det(5\Lambda+\nabla^2)_1]^{1/2}[\det(6\Lambda+\nabla^2)_0]^{1/2}, \label{zgh}
\end{equation}
the one-loop partition function for CG in six dimensions becomes
\begin{align}
Z_{CG}^{(6)}&=Z_{gh}\int D h_{\mu\nu}^{TT}\text{Exp}(-\delta^{(2)}S)\nonumber \\ &=\frac{[\det(-\nabla^2-5\Lambda)_1]^{1/2}[\det(-\nabla^2-6\Lambda)_0]^{1/2}}{[\det(-\nabla^2+2\Lambda)_2]^{1/2}[\det(-\nabla^{2}+6\Lambda)_2]^{1/2}[\det(-\nabla^2+8\Lambda)_2]^{1/2}}\label{zcgdet}.
\end{align}
The CG one-loop partition function in six dimensions consists of the EG determinants
\begin{equation}
Z_{EG}^{(6)}=\frac{[\det(-\nabla^2-5\Lambda)_1]^{1/2}}{[\det(-\nabla^2+2\Lambda)_2]^{1/2}},
\end{equation}
which have been considered in \cite{Gupta:2012he,Giombi:2014yra},
the scalar determinant in the numerator, $[\det(-\nabla^2-6\Lambda)_0]^{1/2}$, corresponding to the contribution of the conformal ghost, the determinant of the partially massless mode,
$[\det(-\nabla^{2}+6\Lambda)_2]^{1/2}$, and the massive determinant $[\det(-\nabla^2+8\Lambda)_2]^{1/2}$. The partition function has also been considered in \cite{Tseytlin:2013fca}.
From (\ref{main1}) and (\ref{zcgdet}) we can read out the partition function for CG in six dimensions
\begin{align}
\log Z_6 =&\sum_{k\in\mathbb{Z}_+}
\frac{-e^{-\frac{5}{2}k\beta}}{k(1-e^{-k\beta})^5}(\chi_{(1,0)}^{SO(5)} e^{-\frac{7}{2}k\beta}+\chi_{(0,0)}^{SO(5)} e^{-\frac{7}{2}k\beta} \nonumber \\
&-\chi_{(2,0)}^{SO(5)} e^{-\frac{5}{2}k\beta}-\chi_{(2,0)}^{SO(5)} e^{-\frac{3}{2}k\beta}-\chi_{(2,0)}^{SO(5)} e^{-\frac{1}{2}k\beta} ). \label{z61}
\end{align}
Comparing the partition function (\ref{z61}) with the partition function expressed in terms of determinants (\ref{zcgdet}), we can recognise which terms originate from which determinant in (\ref{zcgdet}). This is possible because the character of the SO(5) group depends on the spin, visible in the exponent multiplying the character. Using the character $\chi_{(s,0)}^{SO(5)}=\frac{1}{6}(2s+3)(s+2)(s+1)$ and the notation $q=e^{-\beta}$, we can write (\ref{z61}) as
\begin{align}
\log Z_6=\sum_{k\in\mathbb{Z}_{+}} \frac{-2 q^{3k}}{k(1-q^k)^5}\left(3 q^{3k}-7q^{2k}-7q^{k}-7\right),\label{zsix}
\end{align}
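One can verify numerically that the character sum (\ref{z61}) reduces to this closed form, using $\chi^{SO(5)}_{(0,0)}=1$, $\chi^{SO(5)}_{(1,0)}=5$ and $\chi^{SO(5)}_{(2,0)}=14$; a sketch (function names are mine):

```python
def chi_so5(s):
    # SO(5) character at vanishing angles: (2s + 3)(s + 2)(s + 1) / 6
    return (2 * s + 3) * (s + 2) * (s + 1) // 6

def z61_term(q, k):
    # k-th term of the character sum, with q = e^(-beta)
    pref = -q ** (2.5 * k) / (k * (1.0 - q ** k) ** 5)
    return pref * ((chi_so5(1) + chi_so5(0)) * q ** (3.5 * k)
                   - chi_so5(2) * (q ** (2.5 * k) + q ** (1.5 * k) + q ** (0.5 * k)))

def zsix_term(q, k):
    # k-th term of the closed form; note the linear term is -7 q^k
    return -2.0 * q ** (3 * k) * (3.0 * q ** (3 * k) - 7.0 * q ** (2 * k)
                                  - 7.0 * q ** k - 7.0) / (k * (1.0 - q ** k) ** 5)
```

The two expressions agree term by term for any $0<q<1$.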
or as the sum of the partition functions of which it consists: the partition function for EG,
\begin{equation}
\log Z_{EG}=\sum_{k\in\mathbb{Z}_{+}} \frac{- q^{5k}}{k(1-q^k)^5}\left(5 q^k-14\right),\label{zeg}
\end{equation}
and the contribution of the conformal ghost, the partially massless mode and the massive mode,
\begin{align}
\log Z_{diff}=\sum_{k\in\mathbb{Z}_{+}} \frac{- q^{3k}}{k(1-q^k)^5}\left(q^{3k}-14q^k-14\right).
\end{align}
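As a consistency check, the numerators of $\log Z_{EG}$ and $\log Z_{diff}$ over the common denominator $k(1-q^k)^5$ sum to the numerator of (\ref{zsix}); numerically (function names are mine):

```python
def zeg_num(q, k):
    # numerator of log Z_EG:  -q^(5k) (5 q^k - 14)
    return -q ** (5 * k) * (5.0 * q ** k - 14.0)

def zdiff_num(q, k):
    # numerator of log Z_diff:  -q^(3k) (q^(3k) - 14 q^k - 14)
    return -q ** (3 * k) * (q ** (3 * k) - 14.0 * q ** k - 14.0)

def zsix_num(q, k):
    # numerator of log Z_6:  -2 q^(3k) (3 q^(3k) - 7 q^(2k) - 7 q^k - 7)
    return -2.0 * q ** (3 * k) * (3.0 * q ** (3 * k) - 7.0 * q ** (2 * k)
                                  - 7.0 * q ** k - 7.0)
```

Expanding the products shows the identity holds exactly, as the check below confirms to machine precision.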
\section{Thermodynamic Quantities}
One of the applications of the one-loop partition function is the computation of subleading corrections to thermodynamic quantities. Let us consider the example of four-dimensional CG.
The (Helmholtz) free energy, computed from \begin{equation}F=-\frac{1}{\beta}\ln Z_{\text{one-loop}},\end{equation} can be read off from the partition function (\ref{zcg4d}):
\begin{equation}
\label{eq:fe}
F_{1-\text{loop}}=\sum\limits_{k\in\mathbb{Z}_+}\frac{e^{-2 k \beta} (-5 + 4 e^{-2 k \beta} -5 e^{-k \beta})}{(1- e^{-k \beta})^{3} k \beta}.
\end{equation}
The literature often quotes it multiplied by $\beta$. This subleading term is a correction on the Euclidean AdS background, around which we consider it. The free energy vanishes on the $AdS_4$ background because the Weyl tensor vanishes and the CG action does not have to be renormalised. The Einstein part of the free energy, considered in \cite{Giombi:2014yra}, agrees with our result,
\begin{eqnarray}
\label{eq:eq}
-\beta F_{\text{EG}_{1-\text{loop}}}=\ln Z_{\text{EG}_{1-\text{loop}}}&=&-\sum\limits_{k\in\mathbb{Z}_+}\frac{q^{3k} (-5 +3 q^{k})}{(1- q^{k})^{3} k}.
\end{eqnarray}
The subleading correction to the entropy
\begin{equation}
S_{\text{one-loop}}=-\frac{\partial F_{1-\text{loop}}}{\partial T}\end{equation}
reads
\begin{equation}
S_{\text{one-loop}}=\sum\limits_{k\in\mathbb{Z}_+}\frac{e^{\frac{-k}{T}}\Big(20\, k\, e^{\frac{2 k}{T}}+ 4 (k+T)+ 5\,(2k+T)\, e^{\frac{3 k}{T}} - (16k+9T)\,e^{\frac{k}{T}}\Big)}{k T (-1+e^{\frac{k}{T}})^{4}}, \label{s1loop}
\end{equation}
where $T$ is the temperature.
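The thermodynamic relation $S=-\partial F/\partial T$ can be checked numerically against the two series; in the sketch below (function names are mine) the summand of the entropy series is rewritten in powers of $x=e^{-k/T}$, multiplying numerator and denominator by $x^4$, to avoid numerical overflow.

```python
import math

def free_energy(T, n=400):
    # partial sum of the free-energy series, with beta = 1/T and q = e^(-beta)
    beta = 1.0 / T
    q = math.exp(-beta)
    return sum(q ** (2 * k) * (-5.0 + 4.0 * q ** (2 * k) - 5.0 * q ** k)
               / ((1.0 - q ** k) ** 3 * k * beta) for k in range(1, n + 1))

def entropy(T, n=400):
    # entropy series, rewritten in powers of x = e^(-k/T) for numerical stability
    total = 0.0
    for k in range(1, n + 1):
        x = math.exp(-k / T)
        num = (20.0 * k * x ** 3 + 4.0 * (k + T) * x ** 5
               + 5.0 * (2.0 * k + T) * x ** 2 - (16.0 * k + 9.0 * T) * x ** 4)
        total += num / (k * T * (1.0 - x) ** 4)
    return total
```

A central finite difference of the free energy reproduces the entropy series to high accuracy.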
The correction is finite, and one may wish to interpret it physically. Such an interpretation should be made carefully, questioning the semi-classical approximation of the solution: one can neglect neither the classical contribution nor the one-loop part, and in general these terms receive contributions from the full quantum corrections.
We encountered examples of the leading-order computation of the entropy in the second chapter, when treating the Schwarzschild, MKR and rotating black hole solutions. We considered it as well in the fourth chapter, computing the leading order of the entropy of the geon solution and of the global and non-trivial solutions from the classification of the subalgebras of $o(3,2)$.
\chapter{Summary and Discussion}
\section{Summary}
The study of CG, with its known advantages and disadvantages compared to EG, has shown that CG possesses the necessary ingredients to be considered an effective theory of gravity.
However, it needs to be studied further.
In the following paragraph we briefly summarise the content of the chapters, while in the subsequent ones we address the main results of each chapter.
We have studied conformal gravity using the holographic renormalisation procedure, performed a canonical analysis of the canonical charges in CG, analysed its asymptotic symmetry algebra, and found its one-loop partition function, which we considered in six dimensions as well. The computations were performed within the AdS/CFT framework from the gravity side, which means that the partition function played one of the key roles. The second chapter demonstrated agreement with previous results, while the analyses in the third and fourth chapters focused on the charges and on the asymptotic symmetry algebra at the boundary, respectively. The fifth chapter studied the one-loop partition function of CG, for which the CFT side is not known. The CFT side provides a nice verification in lower-dimensional theories, where the intrinsic symmetries of the 3D spacetime allow its computation.
In the first two chapters we introduced the main topic, CG, and the basic concepts used in GR. In the third chapter, using the holographic renormalisation procedure, we verified that CG has a well-defined variational principle and finite response functions. For that we needed to add neither a generalised Gibbons-Hawking-York counterterm, analogous to the extrinsic curvature term in EG, nor holographic counterterms, to obtain finite response functions for the imposed boundary conditions.
These boundary conditions included the Fefferman-Graham decomposition of the metric
\begin{equation}
ds^2=\frac{\ell^2}{\rho^2}\left(-\sigma d\rho^2+\gamma_{ij}dx^idx^j\right),\label{les}
\end{equation}
with $\gamma_{ij}$ expanded as
\begin{equation}
\gamma_{ij}=\gamma_{ij}^{(0)}+\frac{\rho}{\ell}\gamma_{ij}^{(1)}+\frac{\rho^2}{\ell^2}\gamma_{ij}^{(2)}+\frac{\rho^{3}}{\ell^3}\gamma_{ij}^{(3)}+...\label{expansiongammas}
\end{equation}
near $\rho=0$, and the relations
\begin{align}
\delta \gamma_{ij}^{(0)} = \lambda \gamma_{ij}^{(0)} &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{ and}& \delta \gamma_{ij}^{(1)}=2\lambda\gamma_{ij}^{(1)}
\end{align}
where the function $\lambda$ as well as the tensors $\gamma_{ij}^{(0)}$ and $\gamma_{ij}^{(1)}$ are allowed to depend on all boundary coordinates but not on the holographic coordinate $\rho$. Here $\sigma=-1$ for AdS and $\sigma=+1$ for dS.
The response functions expressed in terms of the electric $E_{ij}$ and magnetic $B_{ijk}$ part of the Weyl tensor,
\begin{align}
\tau_{ij} &= \sigma \big[\tfrac{2}{\ell}\,(E_{ij}^{\ms{(3)}}+ \tfrac{1}{3} E_{ij}^{\ms{(2)}}\gamma^{\ms{(1)}}) -\tfrac4\ell\,E_{ik}^{\ms{(2)}}\psi^{\ms{(1)} k}_j
+ \tfrac{1}{\ell}\,\gamma_{ij}^{\ms{(0)}} E_{kl}^{\ms{(2)}}\psi_{\ms{(1)}}^{kl}
+ \tfrac{1}{2\ell^3}\,\psi^{\ms{(1)}}_{ij}\psi_{kl}^{\ms{(1)}}\psi_{\ms{(1)}}^{kl}
\nonumber\\&- \tfrac{1}{\ell^3}\,\psi_{kl}^{\ms{(1)}}\,\big(\psi^{\ms{(1)} k}_i\psi^{\ms{(1)} l}_j-\tfrac13\,\gamma^{\ms{(0)}}_{ij}\psi^{\ms{(1)} k}_m\psi_{\ms{(1)}}^{lm}\big)\big]
- 4\,{\cal D}^k B_{ijk}^{\ms{(1)}} + i\leftrightarrow j\,,
\label{eq:CG17s}
\end{align} and \begin{equation}
P_{ij}=-\tfrac{4\,\sigma}{\ell}\,E_{ij}^{\ms{(2)}}\,
\label{eq:CG18s}\end{equation} together with the boundary terms form the charges that generate the asymptotic symmetries and define the asymptotic symmetry algebra at the boundary, in this case the conformal algebra.
We apply the results to three examples: the Schwarzschild black hole, the Mannheim--Kazanas--Riegert (MKR) solution and the rotating black hole.
In the case of the Schwarzschild black hole we recover the known \cite{deHaro:2000xn,Deser:2002jk} and expected solutions:
\begin{align}
P_{ij}&=0, \\
\tau_{ij}&=\frac{4\sigma}{\ell}E^{(3)}_{ij},
\end{align}
while the MKR solution contains a non-vanishing PMR response for a non-vanishing Rindler acceleration parameter $a$. The Rindler parameter can then be interpreted as a partially massless graviton condensate, while the conserved charge $Q[\partial_t]=m-a a_M$ corresponds to the Killing vector $\partial_t$ for the normalisation of the action $\alpha_{CG}=\frac{1}{64\pi}$. The asymptotic symmetry algebra that closes at the boundary is the four-dimensional $\mathbb{R}\times o(3)$ algebra.
The on-shell entropy is $S=\frac{A_h}{4\ell^2}$, where $A_h=4\pi r_h^2$ is the area of the horizon defined by $k(r_h)=0$.
It is remarkable that the entropy obeys an area law even though CG is a higher-derivative theory of gravity.
The third example, the rotating black hole with Rindler acceleration parameter $\mu$, rotation parameter $\tilde{a}$ and vanishing mass, leads to vanishing $P_{ij}=0$, which shows that a non-zero Rindler term makes $\gamma_{ij}\neq0$ necessary but not sufficient.
We have also seen that the Legendre transformation of the action exchanges the roles of the PMR and its source. In this case the stress-energy tensor $\tau_{ij}$ has zero trace.
In the fourth chapter we analyse the canonical charges, which we show to be equivalent to the Noether charges. The charge associated to the Weyl symmetry vanishes, while the diffeomorphism charge (\ref{eq:diffcharge})
\begin{equation}
Q_D[\epsilon]=2\int_{\partial\Sigma}\ast\epsilon^c\left(\Pi_h^{ab}h_{bc}+\Pi_K^{ab}K_{bc}\right),\label{eq:diffcharges}
\end{equation} and Hamiltonian charge (\ref{qperp1s}) \begin{align}
Q_{\perp}[\epsilon]&=\int_{\partial\Sigma}\ast\left[\epsilon D_b\Pi_K^{cb}-D_b\epsilon\Pi_K^{cb}\right], \label{qperp1s} \end{align}
do not vanish. Here $h_{ab}$, $\Pi_h^{ab}$ denote the metric on the 3D hypersurface and its conjugate momentum, and $K_{ab}$, $\Pi_K^{ab}$ the extrinsic curvature and its conjugate momentum.
Analogously, in three-dimensional gravity the charge associated to a fixed Weyl rescaling vanishes. The discrepancy arises for a freely varying Weyl rescaling function, for which the Weyl charge in 3D does not vanish.
Further, we have shown that these charges define the asymptotic symmetry algebra at the boundary, which corresponds to the Lie algebra of the asymptotic diffeomorphisms. We expand the Killing equation for the Lie algebra of the small diffeomorphisms $\xi$ and Weyl rescalings (\ref{trafo}) \begin{equation}
\delta g_{\mu\nu}=\left(e^{2\omega}-1\right)g_{\mu\nu}+\pounds_{\xi}g_{\mu\nu}\label{trafos}
\end{equation} and obtain the leading (\ref{lo}) \begin{equation}
\mathcal{D}_{i}\xi^{(0)}_j+\mathcal{D}_j\xi^{(0)}_i=\frac{2}{3}\gamma_{ij}^{(0)}\mathcal{D}_{k}\xi^{(0)k},\label{slo}
\end{equation} and subleading (\ref{nloke}) \begin{equation}
\pounds_{\xi^{(0)l}}\gamma_{ij}^{(1)}=\frac{1}{3}\gamma_{ij}^{(1)}\mathcal{D}_l\xi^{(0)l}-4\omega^{(1)}\gamma_{ij}^{(0)},\label{snloke}
\end{equation}
Killing equation.
The subleading Killing equation (\ref{snloke}) defines the subalgebra of the asymptotic solution of $\gamma_{ij}^{(1)}$ at the boundary for the subset of the so(3,2) KVs, which we classify according to the classification of Patera et al. The largest solution consists of the 5 CKV $so(2)\ltimes o(1,1)$ subalgebra and defines a global geon or pp-wave solution (\ref{geon}) \begin{equation}
ds^2=dr^2+(-1+cf(r))dt^2+2cf(r)dtdx+(1+cf(r))dx^2+dy^2.\label{geons}
\end{equation}
with $f(r)=c_1+c_2 r+c_3 r^2+c_4 r^3$ and $c,c_i$ arbitrary constants,
while the asymptotic MKR solution closes a 4 CKV $\mathbb{R}\times o(3)$ subalgebra.
We have also defined a map from the solutions on the flat background to the $\mathbb{R}\times S^2$ background.
In the fifth chapter we compute the one-loop partition function of conformal gravity in four (\ref{zcg4d}) \begin{equation}
\log Z_{CG}^{(4)}=-\sum_{k\in \mathbb{Z}_{+}}\frac{q^{2k}(-5+4q^{2k}-5q^k)}{(1-q^k)^3k},\label{zcg4ds}
\end{equation} and six dimensions (\ref{zsix}) \begin{align}
\log Z_6=\sum_{k\in\mathbb{Z}_{+}} \frac{-2 q^{3k}}{k(1-q^k)^5}\left(3 q^{3k}-7q^{2k}-7q-7\right),
\end{align} on the background of Euclidean AdS with $q=e^{-\beta k}$, and for completeness provide the general formula for the partition function in an arbitrary number of dimensions. For obtaining the one-loop partition function we use the
heat kernel and group-theoretic approach. The result consists of the contribution from the conformal ghost, the contribution from the partially massless response, and the part from Einstein gravity. In six dimensions we obtain the analogous contributions; however, in addition there is a contribution from a massive graviton. The partition function of gravity with conformal invariance keeps this structure in 3D as well, consisting of the contributions from Einstein gravity, the conformal ghost and the partially massless mode.
\section{Discussion}
If conformal gravity is ever to be considered as a correct effective theory of gravity, one has to find a way to deal with ghosts.
The current propositions for the treatment of ghosts include the Pais--Uhlenbeck oscillator approach, which finds the parameter space for which there are no states of negative energy, and the mechanism suggested by Mannheim, which consists of considering the theory as PT symmetric rather than Hermitian.
Assuming that we accept one of these two possible solutions, or treat CG as a toy model, we can analyse it further. An obvious direction for further analysis is the consideration of CG in four dimensions on different backgrounds, analogously to the lower-dimensional studies. In lower dimensions such studies have been done in the gauge/gravity correspondence sense, for the $AdS/LCFT$ duality \cite{Grumiller:2009mw,Grumiller:2009sn}, the duality of asymptotically flat spacetimes and non-relativistic conformal field theories \cite{Bagchi:2010zz}, correspondences between AdS spaces and Ricci-flat spaces \cite{Caldarelli:2013aaa,Caldarelli:2012hy}, and others. In particular, it would be interesting to study the analogue in flat space, since a gravity theory that is to be considered a correct effective theory of gravity should have a flat-space limit. Within that framework one would look for results similar to those above.
A second direction is to compute higher-point functions, such as three-point functions, in AdS space.
In particular, the continuation of the analysis of the third chapter would include such studies. In the third chapter, the canonical analysis of the charges can be done subsequently to the analysis of the first, considering an appropriate background. The fourth chapter provides a rich field for further investigation: one can search for additional solutions using a bottom-up approach and compute the properties of the full solutions. These by themselves provide a rich basis for further research.
The sixth and final chapter has an interesting property that could be further investigated, namely the fact that the partition function of CG in 4D on the thermal $AdS_4$ background is related to the partition function of CG in 4D on $\mathbb{R}\times S^3$ by a factor of two, which does not appear in other dimensions.
The reason for that is not evident; however, one must not exclude the possibility that it may be a pure coincidence.
Besides that, as with the analysis done in each chapter, the partition function can be computed and analysed on different backgrounds.
\chapter{ }
\section{Appendix: General Relativity and AdS/CFT}
\subsubsection{Parallel Transport}
If we have a curve $x^{\mu}(\lambda)$, a tensor $T$ is constant along this curve in flat space when $\frac{dT}{d\lambda}=\frac{dx^{\mu}}{d\lambda}\frac{\partial T}{\partial x^{\mu}}=0$. The covariant derivative along the path is \begin{equation} \frac{D}{d\lambda}=\frac{dx^{\mu}}{d\lambda}{\nabla_{\mu}},\end{equation}
and the parallel transport along the path reads
\begin{equation}
\left(\frac{D}{d\lambda}T\right)^{\mu_1\mu_2...\mu_k}_{\nu_1\nu_2...\nu_l}\equiv\frac{dx^{\sigma}}{d\lambda}\nabla_{\sigma}T^{\mu_1\mu_2...\mu_k}{}_{\nu_1\nu_2...\nu_l}=0.
\end{equation}
\subsubsection{Newton Potential from Small Perturbation Around the Metric}
We can write the metric $g_{\mu\nu}$ in the form of the perturbation $h_{\mu\nu}$ around the flat background metric $\eta_{\mu\nu}$, $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, where the indices of the terms in the expansion are raised and lowered with the background metric.
One obtains for the geodesic equation
\begin{equation}
\frac{d^2x^{\mu}}{d\tau^2}=\frac{1}{2}\eta^{\mu\lambda}\partial_{\lambda}h_{00}\left(\frac{dt}{d\tau}\right)^2,
\end{equation}
where $t$ is the time coordinate and $\tau$ the proper time.
The $\mu=0$ component gives constant $\frac{dt}{d\tau}$, and the spatial part (with the spacelike components of $\eta^{\mu\nu}$ forming an identity matrix) is
\begin{equation} \frac{d^2x^i}{dt^2}=\frac{1}{2}\partial_i h_{00}. \end{equation}
That means that $h_{00}=-2\phi$. This shows that the curvature of spacetime is sufficient for the description of gravity in the Newtonian limit with $g_{00}=-1-2\phi$, where $\phi$ is defined taking into account the Weak Equivalence Principle, so that the acceleration of the body is $\vec{a}=-\nabla \phi$.
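This linearised statement is easy to check symbolically. The following sympy sketch (not part of the thesis; the static potential $\phi$ and the bookkeeping parameter $\epsilon$ are illustrative assumptions) computes $\Gamma^{i}{}_{00}$ for $g=\eta+h$ with $h_{00}=-2\phi$ and confirms that the geodesic equation reproduces $\vec{a}=-\nabla\phi$:

```python
import sympy as sp

t, x, y, z, eps = sp.symbols('t x y z epsilon')
phi = sp.Function('phi')(x, y, z)      # static Newtonian potential (illustrative)
coords = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)
h = sp.zeros(4, 4)
h[0, 0] = -2*phi                        # h_00 = -2 phi
g = eta + eps*h                         # eps keeps track of the linearisation
ginv = g.inv()

def christoffel(lam, mu, nu):
    """Gamma^lam_{mu nu} for the metric g."""
    return sp.Rational(1, 2)*sum(
        ginv[lam, rho]*(sp.diff(g[rho, nu], coords[mu])
                        + sp.diff(g[mu, rho], coords[nu])
                        - sp.diff(g[mu, nu], coords[rho]))
        for rho in range(4))

# geodesic equation: d^2 x^i / dtau^2 = -Gamma^i_00 (dt/dtau)^2,
# so the acceleration per unit (dt/dtau)^2 is -Gamma^i_00 = -eps * d_i phi
for i, xi in enumerate([x, y, z], start=1):
    assert sp.simplify(christoffel(i, 0, 0) - eps*sp.diff(phi, xi)) == 0
```

Here the check is exact because $g$ is diagonal; in general one would expand to first order in $\epsilon$.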
\subsubsection{Summary of the Conventions}
We follow the conventions of \cite{bob} in the computations.
For the $d+1$ dimensional manifold $\mathcal{M}$ with metric $g_{\mu\nu}$ and the covariant derivative $\nabla_{\mu}$ on $\mathcal{M}$ compatible with $g_{\mu\nu}$ one may write
Christoffel symbols
\begin{align}
\Gamma_{\mu\nu}^{\lambda}=\frac{1}{2}g^{\lambda\rho}\left(\partial_{\mu}g_{\rho\nu}+\partial_{\nu}g_{\mu\rho}-\partial_{\rho}g_{\mu\nu}\right),
\end{align}
Riemann tensor
\begin{align}
R^{\lambda}_{\mu\sigma\nu}=\partial_{\sigma}\Gamma_{\mu\nu}^{\lambda}-\partial_{\nu}\Gamma^{\lambda}_{\mu\sigma}+\Gamma_{\mu\nu}^{\kappa}\Gamma^{\lambda}_{\kappa\sigma}-\Gamma^{\kappa}_{\mu\sigma}\Gamma^{\lambda}_{\kappa\nu},
\end{align}
Ricci tensor
\begin{align}
R_{\mu\nu}=\delta^{\sigma}_{\lambda}R^{\lambda}_{\mu\sigma\nu},
\end{align}
commutators of covariant derivatives
\begin{align}
\left[\nabla_{\mu},\nabla_{\nu}\right]A_{\lambda}&=R_{\lambda\sigma\mu\nu}A^{\sigma}\\
\left[\nabla_{\mu},\nabla_{\nu}\right]A^{\lambda}&=R^{\lambda}{}_{\sigma\mu\nu}A^{\sigma},
\end{align}
and Bianchi identities
\begin{align}
\nabla_{\kappa}R_{\lambda\mu\sigma\nu}-\nabla_{\lambda}R_{\kappa\mu\sigma\nu}+\nabla_{\mu}R_{\kappa\lambda\sigma\nu}&=0\\
\nabla^{\nu}R_{\lambda\mu\sigma\nu}&=\nabla_{\mu}R_{\lambda\sigma}-\nabla_{\lambda}R_{\mu\sigma}\\
\nabla^{\nu}R_{\mu\nu}&=\frac{1}{2}\nabla_{\mu}R.
\end{align}
For $d+1=2n$ an even number, one defines the Euler number
\begin{align}
\chi(\mathcal{M})&=\int_{\mathcal{M}}d^{2n}x\sqrt{g}\,\varepsilon_{2n},
\end{align}
normalised with $\chi(S^{2n})=2$ and with Euler density
\begin{align}
\varepsilon_{2n}&=\frac{1}{(8\pi)^{n}\Gamma(n+1)}\epsilon_{\mu_1..\mu_{2n}}\epsilon_{\nu_1..\nu_{2n}}R^{\mu_1\mu_2\nu_1\nu_2}...R^{\mu_{2n-1}\mu_{2n}\nu_{2n-1}\nu_{2n}}.
\end{align}
In four dimensions the Euler density is
\begin{align}
\varepsilon_{4}&=\frac{1}{128\pi^2}\epsilon_{\mu\nu\lambda\rho}\epsilon_{\alpha\beta\gamma\delta}R^{\mu\nu\alpha\beta}R^{\lambda\rho\gamma\delta} \\
&=\frac{1}{32\pi^2}\left(R^{\mu\nu\lambda\rho}R_{\mu\nu\lambda\rho}-4R^{\mu\nu}R_{\mu\nu}+R^2\right).
\end{align}
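As a quick sanity check of the normalisation $\chi(S^{2n})=2$, the two-dimensional case can be verified numerically: on $S^2$ the Euler density reduces to the Gauss curvature $K=1/r^2$ over $2\pi$. A minimal sketch, where the radius and grid size are arbitrary choices:

```python
import numpy as np

r = 1.7                       # radius of the sphere (arbitrary)
n = 400
# midpoint rule in the polar angle; the azimuthal integral gives 2*pi
theta = (np.arange(n) + 0.5) * np.pi / n
dA = r**2 * np.sin(theta) * (np.pi / n) * 2*np.pi   # area elements
K = 1.0 / r**2                                      # Gauss curvature of S^2

chi = np.sum(K * dA) / (2*np.pi)   # Gauss-Bonnet: chi = (1/2pi) * integral of K dA
assert abs(chi - 2.0) < 1e-3
```

The result is independent of $r$, as it must be for a topological invariant.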
If we consider a small perturbation of the metric of the form $g_{\mu\nu}\rightarrow g_{\mu\nu}+\delta g_{\mu\nu}$ and raise and lower the indices using the unperturbed metric and its inverse, we can express the quantities in terms of the perturbation of the metric with lower indices. The variational operator acts as
\begin{align}
\begin{array}{ll}
\delta(g_{\mu\nu})=\delta g_{\mu\nu} & \delta^2(g_{\mu\nu})=\delta(\delta g_{\mu\nu})=0 \\
\delta(g^{\mu\nu})=-g^{\mu\alpha}g^{\nu\beta}\delta g_{\alpha\beta}\text{ }& \delta^2(g^{\mu\nu})=\delta\left(-g^{\mu\lambda}g^{\nu\rho}\delta g_{\lambda\rho}\right)=2 g^{\mu\alpha}g^{\nu\beta}g^{\lambda\rho}\delta g_{\alpha\lambda}\delta g_{\beta\rho}
\end{array}
\end{align}
\vspace{-0.35cm}
\begin{align}
f(g+\delta g)=f(g)+\delta f(g)+\frac{1}{2}\delta^2 f(g)+...+\frac{1}{n!}\delta^n f(g)+..
\end{align}
which yields the variations of the Christoffel symbols to higher orders
\begin{align}
\delta \Gamma^{\lambda}_{\mu\nu}&=\frac{1}{2}g^{\lambda\rho}\left( \nabla_{\mu}\delta g_{\rho\nu}+\nabla_{\nu}\delta g_{\mu\rho}-\nabla_{\rho}\delta g_{\mu\nu}\right)\\
\delta^n \Gamma^{\lambda}_{\mu\nu}&=\frac{n}{2}\delta^{n-1}\left(g^{\lambda\rho}\right)\left(\nabla_{\mu}\delta g_{\rho\nu}+\nabla_{\nu}\delta g_{\mu\rho}-\nabla_{\rho}\delta g_{\mu\nu}\right),
\end{align}
the variation of the Riemann tensor
\begin{align}
\delta R^{\lambda}{}_{\mu\sigma\nu}=\nabla_{\sigma}\delta\Gamma^{\lambda}_{\mu\nu}-\nabla_{\nu}\delta\Gamma^{\lambda}_{\mu\sigma},
\end{align}
and Ricci tensor
\begin{align}
\delta R_{\mu\nu}&=\nabla_{\lambda}\delta\Gamma_{\mu\nu}^{\lambda}-\nabla_{\nu}\delta\Gamma^{\lambda}_{\mu\lambda}\\
&=\frac{1}{2}\left(\nabla^{\lambda}\nabla_{\mu}\delta g_{\nu\lambda}+\nabla^{\lambda}\nabla_{\nu}\delta g_{\mu\lambda}-g^{\lambda\rho}\nabla_{\mu}\nabla_{\nu}\delta g_{\lambda\rho}-\nabla^2\delta g_{\mu\nu}\right).\label{varricten}
\end{align}
The remaining variation of the Ricci scalar is
\begin{equation}
\delta R=-R^{\mu\nu}\delta g_{\mu\nu}+\nabla^{\mu}\left(\nabla^{\nu}\delta g_{\mu\nu}-g^{\lambda\rho}\nabla_{\mu}\delta g_{\lambda\rho}\right).\label{varricsc}
\end{equation}
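The algebraic identities above, in particular $\delta(g^{\mu\nu})=-g^{\mu\alpha}g^{\nu\beta}\delta g_{\alpha\beta}$ and its second variation, can be checked numerically by finite differences on random symmetric matrices; the following sketch is illustrative and not tied to any particular metric:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
g = A + A.T + 8*np.eye(4)        # random symmetric, safely invertible "metric"
B = rng.normal(size=(4, 4))
dg = B + B.T                     # symmetric perturbation delta g
eps = 1e-5

def inv(t):
    """Inverse of the one-parameter family g + t*dg."""
    return np.linalg.inv(g + t*dg)

# first variation: delta(g^{-1}) = -g^{-1} dg g^{-1}
num1 = (inv(eps) - inv(-eps)) / (2*eps)
ana1 = -inv(0) @ dg @ inv(0)
assert np.allclose(num1, ana1, atol=1e-6)

# second variation: delta^2(g^{-1}) = 2 g^{-1} dg g^{-1} dg g^{-1}
num2 = (inv(eps) - 2*inv(0) + inv(-eps)) / eps**2
ana2 = 2*inv(0) @ dg @ inv(0) @ dg @ inv(0)
assert np.allclose(num2, ana2, atol=1e-4)
```

The index contractions of the text translate into matrix products once all indices are raised with $g^{-1}$.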
\subsubsection{Covariant Derivative}
The convention for the covariant derivative we use is
\begin{align}
\nabla_{\sigma} T^{\mu_1\mu_2...\mu_k}{}_{\nu_1\nu_2....\nu_l}&=\partial_{\sigma}T^{\mu_1\mu_2...\mu_k}{}_{\nu_1\nu_2...\nu_l}+\Gamma^{\mu_1}{}_{\sigma\lambda}T^{\lambda\mu_2....\mu_k}{}_{\nu_1\nu_2...\nu_l}+\Gamma^{\mu_2}{}_{\sigma\lambda}T^{\mu_1\lambda....\mu_k}{}_{\nu_1\nu_2...\nu_l}+\ldots\nonumber \\ &-\Gamma^{\lambda}{}_{\sigma\nu_1}T^{\mu_1\mu_2....\mu_k}{}_{\lambda\nu_2...\nu_l}-\Gamma^{\lambda}{}_{\sigma\nu_2}T^{\mu_1\mu_2....\mu_k}{}_{\nu_1\lambda...\nu_l}-\ldots,
\end{align}
while to express the covariant derivative of a one-form
with the same connection, it has to satisfy the following two requirements:
\begin{itemize}
\item commute with the contractions $\nabla_{\mu}T^{\lambda}{}_{\lambda \rho}=(\nabla T)_{\mu}{}^{\lambda}{}_{\lambda\rho}$
\item reduce to partial derivatives when acting on scalars $\nabla_{\mu}\phi=\partial_{\mu}\phi$
\end{itemize}
The commutator of covariant derivatives is
\begin{align}
[\nabla_{\rho},\nabla_{\sigma}]X^{\mu_1\mu_2....\mu_k}{}_{\nu_1\nu_2....\nu_l}&= -T_{\rho\sigma}{}^{\lambda}\nabla_{\lambda}X^{\mu_1\mu_2....\mu_k}{}_{\nu_1....\nu_l} \nonumber \\
&+R^{\mu_1}{}_{\lambda\rho\sigma}X^{\lambda\mu_2...\mu_k}{}_{\nu_1...\nu_l}+R^{\mu_2}{}_{\lambda\rho\sigma}X^{\mu_1\lambda...\mu_k}{}_{\nu_1...\nu_l}+...
\nonumber \\
& - R^{\lambda}{}_{\nu_1\rho\sigma}X^{\mu_1...\mu_k}{}_{\lambda\nu_2...\nu_l}- R^{\lambda}{}_{\nu_2\rho\sigma}X^{\mu_1...\mu_k}{}_{\nu_1\lambda...\nu_l}.
\end{align}
\textbf{Jacobi identity}
The identity we use for the verification of the bracket operation of the Lie algebra
\begin{equation}
[[\nabla_{\mu},\nabla_{\rho}],\nabla_{\sigma}]+[[\nabla_{\rho},\nabla_{\sigma}],\nabla_{\mu}]+[[\nabla_{\sigma},\nabla_{\mu}],\nabla_{\rho}]=0.
\end{equation}
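The commutator formula above (in the torsion-free case) and the Riemann convention can be verified together on an explicit geometry. The sympy sketch below uses the unit two-sphere and an arbitrary test vector field, both of which are illustrative choices:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.diag(1, sp.sin(th)**2)      # unit round two-sphere (illustrative metric)
ginv = g.inv()
n = 2

def Gam(l, m, v):
    return sp.Rational(1, 2)*sum(
        ginv[l, r]*(sp.diff(g[r, v], x[m]) + sp.diff(g[m, r], x[v])
                    - sp.diff(g[m, v], x[r])) for r in range(n))

def Riem(l, s, m, v):
    """R^l_{s m v} in the convention of this appendix."""
    return sp.simplify(
        sp.diff(Gam(l, s, v), x[m]) - sp.diff(Gam(l, s, m), x[v])
        + sum(Gam(k, s, v)*Gam(l, k, m) - Gam(k, s, m)*Gam(l, k, v)
              for k in range(n)))

A = [sp.sin(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph)]   # arbitrary test vector field

def D(V):
    """First covariant derivative: D(V)[m][l] = nabla_m V^l."""
    return [[sp.diff(V[l], x[m]) + sum(Gam(l, m, k)*V[k] for k in range(n))
             for l in range(n)] for m in range(n)]

def DD(V):
    """Second covariant derivative: DD(V)[m][v][l] = nabla_m nabla_v V^l."""
    T = D(V)
    return [[[sp.diff(T[v][l], x[m])
              + sum(Gam(l, m, k)*T[v][k] for k in range(n))
              - sum(Gam(k, m, v)*T[k][l] for k in range(n))
              for l in range(n)] for v in range(n)] for m in range(n)]

dd = DD(A)
# [nabla_mu, nabla_nu] A^lam = R^lam_{sig mu nu} A^sig  (vanishing torsion)
for l in range(n):
    lhs = sp.simplify(dd[0][1][l] - dd[1][0][l])
    rhs = sp.simplify(sum(Riem(l, s, 0, 1)*A[s] for s in range(n)))
    assert sp.simplify(lhs - rhs) == 0
```

For the Levi-Civita connection the torsion term of the general formula drops out, which is why only the Riemann terms appear in the check.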
\section{Appendix: Holographic Renormalisation}
\subsubsection{Christoffel symbols for EOM of CG}
\begin{align}
\begin{array}{lll}
\Gamma_{\rho\rho}^{\rho}=-\frac{1}{\rho} & \Gamma_{\rho i }^{\rho}=\Gamma_{\rho\rho}^i=0 &
\\
\Gamma_{ij}^{\rho}=\frac{\rho}{\ell}K_{ij}=\frac{\ell}{\rho}\left(\Theta_{ij}-\frac{1}{\ell}\gamma_{ij}\right) \text{ }& \Gamma_{\rho j}^k=\frac{\ell}{\rho}K_j^k=\frac{\ell}{\rho}\left(\Theta_j^k-\frac{1}{\ell}\gamma_j^k\right)
\end{array}
\end{align}
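These entries can be cross-checked symbolically. The sketch below uses a diagonal boundary metric and a two-dimensional boundary as an illustrative special case (the signature and function choices are assumptions), with $\theta_{ij}=\frac{\rho}{2\ell}\partial_{\rho}\gamma_{ij}$ as in (\ref{thetadef}):

```python
import sympy as sp

rho, ell = sp.symbols('rho ell', positive=True)
x1, x2 = sp.symbols('x1 x2')
X = [rho, x1, x2]
# diagonal boundary metric gamma_ij(rho, x): an illustrative special case
a = sp.Function('a')(rho, x1, x2)
b = sp.Function('b')(rho, x1, x2)
gam = sp.diag(a, b)
g = sp.diag(-ell**2/rho**2, ell**2/rho**2*a, ell**2/rho**2*b)  # full GNC metric
ginv = g.inv()

def Gam(l, m, v):
    return sp.Rational(1, 2)*sum(
        ginv[l, r]*(sp.diff(g[r, v], X[m]) + sp.diff(g[m, r], X[v])
                    - sp.diff(g[m, v], X[r])) for r in range(3))

theta = (rho/(2*ell))*sp.diff(gam, rho)    # theta_ij = (rho/2l) d_rho gamma_ij

assert sp.simplify(Gam(0, 0, 0) + 1/rho) == 0     # Gamma^rho_{rho rho} = -1/rho
assert sp.simplify(Gam(0, 0, 1)) == 0             # Gamma^rho_{rho i} = 0
assert sp.simplify(Gam(1, 0, 0)) == 0             # Gamma^i_{rho rho} = 0
for i in (1, 2):
    gii = gam[i-1, i-1]
    # Gamma^rho_{ij} = (l/rho)(theta_ij - gamma_ij / l)
    assert sp.simplify(Gam(0, i, i)
                       - (ell/rho)*(theta[i-1, i-1] - gii/ell)) == 0
    # Gamma^k_{rho j} = (l/rho)(theta^k_j - delta^k_j / l)
    assert sp.simplify(Gam(i, 0, i)
                       - (ell/rho)*(theta[i-1, i-1]/gii - 1/ell)) == 0
```

A generic non-diagonal $\gamma_{ij}$ works the same way but makes the symbolic inverse substantially slower.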
\subsection{Decomposition of Curvature Tensors in Gaussian Normal Coordinates}
For the holographic renormalisation procedure the metric is decomposed as
\begin{equation}
ds^2=-\frac{\ell^2}{\rho^2}d\rho^2+\gamma_{ij}(x^k,\rho)dx^idx^j \label{gns2}.
\end{equation}
The relation of $\rho$ and time is $\rho=e^{-2t/\ell}$, so that $t\rightarrow\infty$ implies $\rho\rightarrow0$ ($\rho>0$).
The asymptotic boundary $\partial \mathcal{M}$ represents a constant $\rho$ surface for $\rho\ll\ell$, where the normal vector, timelike/spacelike for $\varsigma=+/-$, is
\begin{align}
u^{\mu}=-\frac{\rho}{\ell}\delta^{\mu}_{\rho} && u_{\mu}=\varsigma\frac{\ell}{\rho}\delta_{\mu}^{\rho},
\end{align} the lapse is $\alpha^2=\frac{\ell^2}{\rho^2}$ and the shift $\beta^i=0$. For the projector on constant $\rho$ surfaces we use $\frac{\partial x^{\mu}}{\partial x^i}\frac{\partial x^{\nu}}{\partial x^j}...=\perp^{\mu\nu...}_{ij...}$.
The extrinsic curvature is
\begin{equation}
K_{ij}=-\varsigma\frac{1}{2}\pounds_u\gamma_{ij}=\varsigma\frac{\rho}{2\ell}\partial_{\rho}\gamma_{ij},
\end{equation}
and the projections of the curvatures
\begin{align}
\perp_{kilj}^{\lambda\mu\sigma\nu}R_{\lambda\mu\sigma\nu}&={}^3R_{kilj}+\varsigma K_{lk}K_{ij}-K_{kj}K_{li}\\
\perp_{ilj}^{\mu\sigma\nu}u^{\lambda}R_{\lambda\mu\sigma\nu}&=\varsigma({}^3\nabla_{l}K_{ij}-{}^3\nabla_{j}K_{il})\\
\perp_{ij}^{\mu\nu}u^{\lambda}u^{\sigma}R_{\lambda\mu\sigma\nu}&=\varsigma\pounds_{u}K_{ij}+K_i^lK_{jl}\\
\perp_{ij}^{\mu\nu}R_{\mu\nu}&={}^3R_{ij}+\varsigma(KK_{ij}-2K_i^lK_{jl})-\pounds_uK_{ij}\\
\perp_i^{\mu}R_{\mu\nu}u^{\nu}&=\varsigma({}^3\nabla_iK-{}^3\nabla^jK_{ij})\\
R_{\mu\nu}u^{\mu}u^{\nu}&=\varsigma\pounds_uK-K^{ij}K_{ij}\\
&=\varsigma\gamma^{ij}\pounds_uK_{ij}+K^{ij}K_{ij}\\
R&={}^3R+\varsigma(K^2+K^{lk}K_{lk})-2\pounds_uK\\
&={}^3R+\varsigma(K^2-3K^{lk}K_{lk})-2\gamma^{ij}\pounds_u K_{ij}
\end{align}
where ${}^3R_{kilj}$, ${}^3R_{ij}$ and ${}^3R$ denote the intrinsic curvature tensors constructed from the boundary metric $\gamma_{ij}$, ${}^3\nabla_i$ is the covariant derivative on the manifold $\partial M$ compatible with $\gamma_{ij}$, and $\pounds_{u}$ is the Lie derivative along the normal vector $u^{\mu}$.
\subsubsection{Expansion of the Curvatures in Conformal Gravity}
In this section we provide the expanded quantities that appear in the definition of the EOM. For convenience, in some cases it is useful to expand the quantities using the expansion with explicit $\frac{1}{n!}$ factors, while in other cases we a priori use the expansion in which the factorials are absorbed in the $\gamma_{ij}$ matrices or the expanded tensor fields. Where we use the type of expansion that does not absorb the $n!$ in the $\gamma_{ij}$ matrices, we state that explicitly.
The expansion of the inverse metric $\gamma^{ij}$ reads
\begin{align}
\gamma^{ij}&= \gamma^{(0)ij} - \rho\gamma^{(1)ij} + \rho^2(\gamma^{(1)aj} \gamma^{(1)i}{}_{a} - \gamma^{(2)ij}) \nonumber \\ &- \rho^3(\gamma^{(1)a}{}_{b} \gamma^{(1)bj} \gamma^{(1)i}{}_{a} - \gamma^{(1)i}{}_{a} \gamma^{(2)aj} - \gamma^{(1)aj} \gamma^{(2)i}{}_{a} + \gamma^{(3)ij}),
\end{align} while the inverse metric with explicit factorials is
\begin{align}
\gamma^{ij}&=h^{ij} - \frac{\rho h^{(1)ij}}{\ell} + \frac{\rho^2 (2 h^{(1)ik} h^{(1)}{}_{k}{}^{j} - h^{(2)ij})}{2 \ell^2} \nonumber \\ &+ \frac{\rho^3 (-6 h^{(1)ik} h^{(1)}{}_{k}{}^{l} h^{(1)}{}_{l}{}^{j} + 3 h^{(1)}{}_{n}{}^{j} h^{(2)in} + 3 h^{(1)im} h^{(2)}{}_{m}{}^{j} - h^{(3)ij})}{6 \ell^3}.
\end{align}
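The expansion with explicit factorials can be verified numerically against the exact matrix inverse. In the sketch below (the random matrices and the value of $\rho$ are arbitrary choices) indices are raised with $h^{-1}$, so e.g. $h^{(1)ik}h^{(1)}{}_{k}{}^{j}$ becomes $H h_1 H h_1 H$ in matrix notation:

```python
import numpy as np

rng = np.random.default_rng(2)
def sym(m):
    return m + m.T

h  = sym(rng.normal(size=(3, 3))) + 6*np.eye(3)  # leading metric, invertible
h1 = sym(rng.normal(size=(3, 3)))
h2 = sym(rng.normal(size=(3, 3)))
h3 = sym(rng.normal(size=(3, 3)))
ell, rho = 1.0, 1e-3

gamma = (h + (rho/ell)*h1 + (rho**2/(2*ell**2))*h2 + (rho**3/(6*ell**3))*h3)
H = np.linalg.inv(h)

# series for the inverse metric with explicit 1/n! factors
series = (H
          - (rho/ell)*H @ h1 @ H
          + (rho**2/(2*ell**2))*(2*H @ h1 @ H @ h1 @ H - H @ h2 @ H)
          + (rho**3/(6*ell**3))*(-6*H @ h1 @ H @ h1 @ H @ h1 @ H
                                 + 3*H @ h2 @ H @ h1 @ H
                                 + 3*H @ h1 @ H @ h2 @ H
                                 - H @ h3 @ H))
assert np.allclose(series, np.linalg.inv(gamma), atol=1e-10)
```

The residual is of order $\rho^4$, which is why a small $\rho$ makes the truncated series agree with the exact inverse to high precision.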
To emphasise this, we write $h_{ij}$ in place of $\gamma_{ij}$ and continue with that notation. The expansion of the Christoffel symbol $\Gamma^i{}_{jl}$ is
\begin{align}
\Gamma^i{}_{jl}&=\Gamma [D]^{i}{}_{jl} + \frac{\rho}{\ell} (- \tfrac{1}{2} D^{i}h^{(1)}{}_{jl} + \tfrac{1}{2} D_{j}h^{(1)i}{}_{l} + \tfrac{1}{2} D_{l}h^{(1)i}{}_{j})\nonumber \\ & + \frac{\rho^2}{\ell^2} (- \tfrac{1}{4} D^{i}h^{(2)}{}_{jl} - \tfrac{1}{2} h^{(1)ik} D_{j}h^{(1)}{}_{lk} + \tfrac{1}{4} D_{j}h^{(2)i}{}_{l} \nonumber \\ &+ \tfrac{1}{2} h^{(1)ik} D_{k}h^{(1)}{}_{jl} - \tfrac{1}{2} h^{(1)ik} D_{l}h^{(1)}{}_{jk} + \tfrac{1}{4} D_{l}h^{(2)i}{}_{j})
\end{align}
and the $\theta$ tensor (\ref{thetadef}), which defines the extrinsic curvature $K_{ij}$ through (\ref{kdef}), is
\begin{align}
\theta_{ij}=\frac{\eta h^{(1)}{}_{ij}}{2 \ell^2} + \frac{\eta^2 h^{(2)}{}_{ij}}{2 \ell^3} + \frac{\eta^3 h^{(3)}{}_{ij}}{4 \ell^4} + \frac{\eta^4 h^{(4)}{}_{ij}}{12 \ell^5}.
\end{align}
The Ricci tensor and the Ricci scalar read, respectively,
\begin{align}
R_{ij}&=R[D]_{ij} + \frac{\rho}{\ell} (- \tfrac{1}{2} D_{j}D_{i}h^{(1)k}{}_{k} + \tfrac{1}{2} D_{k}D_{i}h^{(1)}{}_{j}{}^{k} + \tfrac{1}{2} D_{k}D_{j}h^{(1)}{}_{i}{}^{k} - \tfrac{1}{2} D_{k}D^{k}h^{(1)}{}_{ij}) \nonumber \\ &+ \frac{\rho^2}{\ell^2} (\tfrac{1}{2} h^{(1)kl} D_{i}D_{j}h^{(1)}{}_{kl} + \tfrac{1}{4} D_{i}h^{(1)kl} D_{j}h^{(1)}{}_{kl} - \tfrac{1}{4} D_{j}D_{i}h^{(2)k}{}_{k} \nonumber \\ &+ \tfrac{1}{4} D_{i}h^{(1)}{}_{j}{}^{k} D_{k}h^{(1)l}{}_{l} + \tfrac{1}{4} D_{j}h^{(1)}{}_{i}{}^{k} D_{k}h^{(1)l}{}_{l} + \tfrac{1}{4} D_{k}D_{i}h^{(2)}{}_{j}{}^{k} + \tfrac{1}{4} D_{k}D_{j}h^{(2)}{}_{i}{}^{k} \nonumber \\ &- \tfrac{1}{4} D_{k}D^{k}h^{(2)}{}_{ij} - \tfrac{1}{4} D_{k}h^{(1)l}{}_{l} D^{k}h^{(1)}{}_{ij} - \tfrac{1}{2} D_{i}h^{(1)}{}_{j}{}^{k} D_{l}h^{(1)}{}_{k}{}^{l} - \tfrac{1}{2} D_{j}h^{(1)}{}_{i}{}^{k} D_{l}h^{(1)}{}_{k}{}^{l}\nonumber \\ & + \tfrac{1}{2} D^{k}h^{(1)}{}_{ij} D_{l}h^{(1)}{}_{k}{}^{l} - \tfrac{1}{2} h^{(1)kl} D_{l}D_{i}h^{(1)}{}_{jk} - \tfrac{1}{2} h^{(1)kl} D_{l}D_{j}h^{(1)}{}_{ik}\nonumber \\ & + \tfrac{1}{2} h^{(1)kl} D_{l}D_{k}h^{(1)}{}_{ij} - \tfrac{1}{2} D_{k}h^{(1)}{}_{jl} D^{l}h^{(1)}{}_{i}{}^{k} + \tfrac{1}{2} D_{l}h^{(1)}{}_{jk} D^{l}h^{(1)}{}_{i}{}^{k} ),
\end{align}
\begin{align}
R&=R[D] + \frac{\rho}{\ell} (- R[D]^{ij} h^{(1)}{}_{ij} + D_{j}D_{i}h^{(1)ij} - D_{j}D^{j}h^{(1)i}{}_{i}) \nonumber \\ &+ \frac{\rho^2}{\ell^2} (R[D]^{ij} h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{jk} - \tfrac{1}{2} R[D]^{ij} h^{(2)}{}_{ij} + h^{(1)ij} D_{j}D_{i}h^{(1)k}{}_{k} \nonumber \\ & + \tfrac{1}{2} D_{j}D_{i}h^{(2)ij} - \tfrac{1}{2} D_{j}D^{j}h^{(2)i}{}_{i} - h^{(1)ij} D_{j}D_{k}h^{(1)}{}_{i}{}^{k} \nonumber \\ & - \tfrac{1}{4} D_{j}h^{(1)k}{}_{k} D^{j}h^{(1)i}{}_{i} - D_{i}h^{(1)ij} D_{k}h^{(1)}{}_{j}{}^{k} + D^{j}h^{(1)i}{}_{i} D_{k}h^{(1)}{}_{j}{}^{k} \nonumber \\ &- h^{(1)ij} D_{k}D_{j}h^{(1)}{}_{i}{}^{k} + h^{(1)ij} D_{k}D^{k}h^{(1)}{}_{ij} - \tfrac{1}{2} D_{j}h^{(1)}{}_{ik} D^{k}h^{(1)ij} \nonumber \\ &+ \tfrac{3}{4} D_{k}h^{(1)}{}_{ij} D^{k}h^{(1)ij})
\end{align}
\subsection{Equations of Motion in Conformal Gravity}
Since the action (\ref{ac3}) consists of two dynamical fields $f_{\mu\nu}$ and $g_{\mu\nu}$, its variation gives $EOM_{f}$ and $EOM_{g}$, i.e. the EOM for the auxiliary field $f_{\mu\nu}$ and for the metric, respectively. We are interested in the restrictions from the EOM order by order.
The most important difference from EG is that the Einstein EOM do not allow the term $\gamma_{ij}^{(1)}$ in the expansion (\ref{expansiongamma}), restricting it to be zero, while the Bach equation does not impose such a condition.
Let us take the dS case, $\sigma=1$, in which we consider $\infty>\rho>0$ and the future is placed at $\rho\rightarrow0$. The coordinate $\rho$ is related to the time coordinate by $\rho=e^{-2t/\ell}$, and $t\rightarrow\infty$ corresponds to $\rho\rightarrow0$ for $\rho>0$.
We take the boundary $\partial M$ as a constant $\rho$ surface for $\rho\ll\ell$, with the timelike vector normal to the surface defined by
\begin{align}
u^{\rho}=-\frac{\rho}{\ell} && u_{\rho}=\frac{\ell}{\rho}
\end{align}
That makes the extrinsic curvature
\begin{equation}
K_{ij}=\frac{\ell^2}{\rho^2}\left(-\frac{1}{2}\pounds_{\textbf{n}}\gamma_{ij}-\frac{1}{\ell}\gamma_{ij}\right) \label{kdef}
\end{equation}
If we define for convenience \begin{align}\theta_{ij}=-\frac{1}{2}\pounds_{\textbf{n}}\gamma_{ij}=\frac{\rho}{2\ell}\partial_{\rho}\gamma_{ij}\label{thetadef}\end{align} we can write the extrinsic curvature
\begin{equation}
\Rightarrow K_{ij}=\frac{\ell^2}{\rho^2}\left(\theta_{ij}-\frac{1}{\ell}\gamma_{ij}\right)\label{exc}
\end{equation}
which, raising the index with the inverse of the metric $\frac{\ell^2}{\rho^2}\gamma_{ij}$, leads to
\begin{equation}
K_{i}^j=\theta_i^j-\frac{1}{\ell}\gamma_i^j,
\end{equation}
while the Christoffel symbols are given in the appendix Holographic Renormalisation and One-Point Functions in Conformal Gravity: Christoffel Symbols for EOM of CG.
Using the new unphysical variables
\begin{align}
f_i^j&=\phi_i^j, && f_i^{\rho}=v_i, && f_{\rho}^i=-v^i, &&f_{\rho}^{\rho}=w, \nonumber \\
f_{ij}&=\frac{\ell^2}{\rho^2}\phi_{ij}, && f_{i\rho}=-\frac{\ell^2}{\rho^2}v_i, && f_{\rho i}=-\frac{\ell^2}{\rho^2}v_i, && f_{\rho\rho}=-\frac{\ell^2}{\rho^2}w
\label{defs}
\end{align}
and the variables defined on the three-dimensional hypersurface, we can write $EOM_f$ and $EOM_g$. The convenience of the unphysical variables is that they simplify the computations with the computer package xAct. The tensors on the three-dimensional manifold we denote with the prefix ${}^3$, while the tensors expressed with the unphysical metric we write with no prefixes. The physical and unphysical Ricci tensor and Ricci scalar are related by, respectively,
\begin{align}
\begin{array}{cc}
{}^3R_{ij}=R_{ij}, & {}^3R=\frac{\rho^2}{\ell^2}R\\
\end{array}
\end{align}
where unphysical indices, i.e. indices on the unphysical quantities, are raised and lowered with the unphysical metric. First, using the physical variables, we can write
the EOM $E_{\mu\nu}^f$ for the auxiliary field $f_{\mu\nu}$
\begin{equation}
E_{\mu\nu}^f=\frac{1}{4}f_{\mu\nu}-\frac{1}{4}g_{\mu\nu}f_{\l}^{\l}+G_{\mu\nu}=0\label{auxeom}
\end{equation}
where $G_{\mu\nu}$ is the earlier defined Einstein tensor. Evaluating the trace and inserting it back into equation (\ref{auxeom}) leads to
\begin{align}
f_{\mu\nu}=-4G_{\mu\nu}+\frac{4}{3}g_{\mu\nu}G^{\l}_{\l}.
\end{align}
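The trace manipulation can be verified with a quick numerical check: taking the trace of (\ref{auxeom}) in four dimensions gives $f^{\lambda}_{\lambda}=\frac{4}{3}G^{\lambda}_{\lambda}$, and re-inserting it yields the expression above. A sketch with a random symmetric stand-in for $G_{\mu\nu}$ on a flat background (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # flat metric as a stand-in background
G = rng.normal(size=(4, 4))
G = G + G.T                                # random symmetric "Einstein tensor"

ginv = np.linalg.inv(eta)
trG = np.trace(ginv @ G)                   # G^lam_lam = g^{lam rho} G_{rho lam}

# candidate solution of (1/4) f_{mu nu} - (1/4) g_{mu nu} f^lam_lam + G_{mu nu} = 0
f = -4*G + (4.0/3.0)*trG*eta

trf = np.trace(ginv @ f)
E = 0.25*f - 0.25*eta*trf + G
assert np.allclose(E, 0)
```

The factor $\frac{4}{3}$ is specific to four dimensions, where tracing $g_{\mu\nu}$ gives $4$.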
We can decompose it in GNC into the $\rho\rho$, $\rho i$ and $ij$ components to obtain
\begin{align}
u^{\mu}u^{\nu}E_{\mu\nu}^f=0 &&
\Rightarrow&& 0=f_i^i+2{}^3R+2K^2-2K^{ij}K_{ij} \label{one}
\end{align}
\vspace{-0.3cm}
\begin{align}
u^{\nu}E_{i\nu}^f&=0 &&
\Rightarrow &&0=\frac{\ell}{\rho}f^{\rho}_i+4\left( {}^3\nabla_i K-{}^3\nabla^jK_{ij} \right)\label{two}
\end{align}
\vspace{-0.4cm}
\begin{align}
E_{ij}^f&=0 \nonumber\\
\Rightarrow 0&=f_{ij}-\frac{\ell^2}{\rho^2}\gamma_{ij}f^{\rho}_{\rho}+4 {}^3R_{ij}+4KK_{ij}-8K_i^kK_{jk} \nonumber \\
&+4\frac{\ell^2}{\rho^2}\gamma_{ij}K^{kl}K_{kl}-4\left( \gamma_{i}^k\gamma_j^l-\gamma_{ij}\gamma^{kl}\right)\pounds_{\textbf{n}}K_{kl}
\end{align}
Taking the trace of the $ij$ equation with $\frac{\rho^2}{\ell^2}\gamma^{ij}$ gives
\begin{align}
0=-f^{\rho}_{\rho}+2K^{ij}K_{ij}+\frac{2}{3}{}^3R+\frac{2}{3}K^2+\frac{8}{3}\frac{\rho^2}{\ell^2}\label{three}
\end{align}
which we can insert back into the $ij$ equation, leading to the form
\begin{align}
\Rightarrow 0=f_{ij}+4\left({}^3R_{ij}-\frac{1}{6}\gamma_{ij}{}^3R\right)-4\left(\gamma_i^k\gamma_j^l-\frac{1}{3}\gamma_{ij}\gamma^{kl} \right)\pounds_{\textbf{n}}K_{kl}+ \nonumber \\
4 K K_{ij}-8 K_i^kK_{jk}-\frac{2}{3}\frac{\ell^2}{\rho^2}\gamma_{ij}K^2+2\frac{\ell^2}{\rho^2}\gamma_{ij}K^{kl}K_{kl}. \label{four}
\end{align}
Rewriting the equations with the unphysical tensors we obtain
\begin{align}
(\ref{one}) \Rightarrow 0&=\phi^i_i+\frac{12}{\ell^2}-\frac{8}{\ell}\theta+2\theta^2-2\theta^{ij}\theta_{ij}+2\frac{\rho^2}{\ell^2}R\label{f1eom}\\
(\ref{two})\Rightarrow 0&=v_i+4\frac{\rho}{\ell}\left(D_i\theta-D^j\theta_{ij}\right)\label{veom}\\
(\ref{three}) \Rightarrow 0&=w+\frac{4}{\ell^2}-\frac{8}{3\ell}\theta-\frac{8}{3}\frac{\rho^2}{\ell^2}\gamma^{ij}u^{\textbf{n}}\partial_{\rho}\theta_{ij} \nonumber \\
-&2\theta^{ij}\theta_{ij}-\frac{2}{3}\theta^2-\frac{2}{3}\frac{\rho^2}{\ell^2}R \\
(\ref{four}) \Rightarrow 0&=\phi_{ij}+\frac{4}{\ell^2}\frac{\ell^2}{\rho^2}\gamma_{ij}-\frac{12}{\ell}\theta_{ij}+\frac{12}{\ell}\theta_{ij}+\frac{4}{3\ell}\frac{\ell^2}{\rho^2}\gamma_{ij}\theta+4\theta\theta_{ij}-8\theta_i^k\theta_{jk} \nonumber \\
-&\frac{2}{3}\frac{\ell^2}{\rho^2}\gamma_{ij}\theta^2+2\frac{\ell^2}{\rho^2}\theta^{kl}\theta_{kl}-4(h_i^kh_j^l-\frac{1}{3}\gamma_{ij}\gamma^{kl})\pounds_{\textbf{n}}\theta_{kl}\nonumber \\
+&4\frac{\rho^2}{\ell^2}(R_{ij}-\frac{1}{6}\frac{\ell^2}{\rho^2}\gamma_{ij}R) \label{f2eom}
\end{align}
The EOM for the unphysical auxiliary variables we further use to determine the metric EOM $EOM_g$.
In the physical coordinates $EOM_g$ read
\begin{align}
E_{\mu\nu}^{g}&=-\frac{1}{2}f_{\mu}^{\l}G_{\l\nu}-\frac{1}{2}f^{\l}_{\nu}G_{\mu\l}-\frac{1}{4}f_{\mu}^{\l}f_{\nu\l}+\frac{1}{2}G_{\mu\nu}f^{\l}_{\l}+\frac{1}{4}f_{\mu\nu}f_{\l}^{\l}+ \nonumber \\
&+ \frac{1}{2} g_{\mu\nu}f^{\lambda\rho}G_{\l\rho}-\frac{1}{4}g_{\mu\nu}f^{\l}_{\l}G^{\rho}_{\rho}+\frac{1}{16}g_{\mu\nu}(f^{\l\rho}f_{\l\rho}-f^{\l}_{\l}f_{\rho}^{\rho}) \nonumber \\
&-R_{\mu\l\nu\rho}f^{\l\rho}+\frac{1}{2}\nabla_{\mu}\nabla^{\l}f_{\l\nu}+\frac{1}{2}\nabla_{\nu}\nabla^{\l}f_{\mu\l}-\frac{1}{2}\nabla_{\mu}\nabla_{\nu}f_{\l}^{\l} \nonumber \\
&-\frac{1}{2}\nabla^{2}f_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\nabla_{\l}\nabla_{\rho}f^{\l\rho}+\frac{1}{2}g_{\mu\nu}\nabla^2f^{\l}_{\l}
\end{align}
and after simplification with the $EOM_f$ they become
\begin{align}
E_{\mu\nu}^{g}&=-\frac{1}{8}f_{\mu\nu}f^{\l}_{\l}-\frac{1}{16}g_{\mu\nu}f^{\l\rho}f_{\l\rho}-R_{\mu\l\nu\rho}f^{\l\rho}\nonumber\\
&\quad+ \frac{1}{2}\nabla_{\mu}\nabla_{\nu}f^{\l}_{\l}-\frac{1}{2}\nabla^2f_{\mu\nu}.
\end{align}
To determine the restrictions coming from them we consider again the $\rho\rho$, $\rho i$ and $ij$ components. The $EOM_g$ in the ${}_\rho{}^\rho$, ${}_i{}^j$ and ${}_i{}^{\rho}$ components respectively read
\begin{align}
E_{\rho}^{g\rho}=-\frac{1}{8}w^2-\frac{1}{8}w\phi^i_i-\frac{1}{16}\phi^{ij}\phi_{ij}+\frac{1}{8}v^iv_i-\frac{1}{16}w^2 \nonumber \\
+f^{ij}\left(\pounds_{\textbf{n}}K_{ij}+K_i^kK_{jk}\right)+\frac{1}{2}\nabla_{\rho}\nabla^{\rho}f_{\lambda}^{\lambda}-\frac{1}{2}\nabla^2f^{\rho}_{\rho},\label{err}
\end{align}
\begin{align}
E_{i}^{gj}=-\frac{1}{8}f_i^jf_{\lambda}^{\lambda}-\frac{1}{16}\delta_i^jf^{\lambda\rho}f_{\lambda\rho}-R_{i\lambda}{}^j{}_{\rho}f^{\lambda\rho}+\frac{1}{2}\nabla_i\nabla^jf^{\lambda}_{\lambda}-\frac{1}{2}\nabla^2f_i^j,\label{egj}
\end{align}
\begin{align}
E_{i}^{g\rho}=-\frac{1}{8}f_i^{\rho}f_{\lambda}^{\lambda}-R_{i\lambda}{}^{\rho}{}_{\kappa}f^{\lambda\kappa}+\frac{1}{2}\nabla_i\nabla^{\rho}f_{\lambda}^{\lambda}-\frac{1}{2}\nabla^2f_i^{\rho}.\label{egr}
\end{align}
Analogously to the EOM for the auxiliary field $f_{\mu\nu}$, equations (\ref{err}), (\ref{egj}) and (\ref{egr}) can be written in terms of the unphysical variables. In the unphysical variables the equations of motion for the $\rho\rho$, $\rho i$ and $ij$ components read
\begin{align}
E_{unphy.coords.\rho}^{g\rho}&=-\frac{3}{16}w^2-\frac{1}{8} w \phi^k_k-\frac{1}{16}\phi^{ij}\phi_{ij}+\frac{1}{8}v^iv_i \nonumber \\
&+ \phi^{ij}\pounds_{\textbf{n}}\theta_{ij}+\phi^{ij}\theta_i^k\theta_{jk}+\frac{2}{\ell}\phi^{ij}\phi_{ij}-\frac{1}{\ell^2}\phi^k_k \nonumber \\
&-\frac{1}{2}\frac{\rho^2}{\ell^2}\partial_\rho^2\phi^k_k-\frac{1}{2}\frac{\rho}{\ell^2}\partial_{\rho}\phi_i^i-\frac{3}{2}\frac{\rho}{\ell^2}\partial_{\rho}w+\frac{1}{2}\frac{\rho}{\ell}\theta \partial_{\rho}w \nonumber \\
&-\frac{1}{2}\frac{\rho^2}{\ell^2}D^2w+\frac{\rho}{\ell}v_jD_i\theta^{ij}+2\frac{\rho}{\ell}\left( \theta^{ij}-\frac{1}{\ell}\frac{\rho^2}{\ell^2}\gamma^{ij}\right)D_iv_j \nonumber \\
&+\left(\theta^i_k-\frac{1}{\ell}\gamma^i_k \right)\left( \theta^{jk}-\frac{1}{\ell}\frac{\rho^2}{\ell^2}\gamma^{jk} \right)\left(\phi_{ij}-\frac{\ell^2}{\rho^2} \gamma_{ij}w\right),
\end{align}
\begin{align}
E_{i}^{g\rho}&=-\frac{1}{8}v_i(\phi_j^j+w)-v^j\big(\pounds_{\textbf{n}}\theta_{ij}+\theta_i^k\theta_{jk}+\frac{2}{\ell}\frac{\ell^2}{\rho^2}\gamma_{ij}\big)\nonumber \\ &+\frac{\rho}{\ell}\big(D_k\theta_{ji}-D_i\theta_{kj}\big)\phi^{kj}+\frac{1}{2}\nabla_i\nabla^{\rho}f^{\lambda}_{\lambda}-\frac{1}{2}\nabla^2 f_i^{\rho},
\end{align}
\begin{align}
E^{gj}_{i}&=-\frac{1}{8}\phi_i^j(\phi_k^k+w)-\frac{1}{16}\delta_i^j(\phi^{kl}\phi_{kl}-2v^kv_k+w^2) \nonumber \\
& -\frac{\rho^2}{\ell^2}R^j{}_{lik}\phi^{lk}-(\theta_i^j-\frac{1}{\ell}\gamma_i^j)(\theta_{lk}-\frac{1}{\ell}\frac{\ell^2}{\rho^2}\gamma_{lk})\phi^{lk} \nonumber \\&+(\theta^j_k-\frac{1}{\ell}\gamma^j_k)(\theta_{il}-\frac{1}{\ell}\frac{\ell^2}{\rho^2}\gamma_{il})\phi^{lk} \nonumber \\
& +\frac{\rho}{\ell}v^k(2D_k\theta_i^j-D^j\theta_{ik}-D_i\theta^j_k) \nonumber \\
& + w\left( \frac{\rho^2}{\ell^2}\gamma^{jl}\pounds_{\textbf{n}}\theta_{il}+\theta_i^k\theta_j^k+\frac{2}{\ell}\theta_i^j -\frac{1}{\ell^2}\gamma_{i}^j\right)\nonumber \\
& + \frac{1}{2}\frac{\rho^2}{\ell^2}D_iD^j(\phi^k_k+w)-\frac{1}{2}\frac{\rho}{\ell}(\theta_i^j-\frac{1}{\ell}\gamma_i^j)\partial_{\rho}(\phi_k^k+w)\nonumber \\
&-\frac{1}{2}\nabla^2f_i^j.
\end{align}
In these equations we insert the FG expansion. For convenience, we use the expansion of the metric
\begin{equation}
\gamma_{ij}=\gamma_{ij}^{(0)} + \frac{\rho \delta \gamma^{(1)}{}_{ij}}{\ell} + \frac{\rho^2 \delta \gamma^{(2)}{}_{ij}}{2 \ell^2} + \frac{\rho^3 \delta \gamma^{(3)}{}_{ij}}{6 \ell^3} + \frac{\rho^4 \delta \gamma^{(4)}{}_{ij}}{24 \ell^4}
\end{equation}
that contains the factorials $\frac{1}{n!}$; in this expansion the factorials are not absorbed into the coefficient matrices. The tensors are perturbed analogously
\begin{align}
w+\delta w=w + \frac{\eta w^{(1)}}{\ell} + \frac{\eta^2 w^{(2)}}{2 \ell^2} + \frac{\eta^3 w^{(3)}}{6 \ell^3} + \frac{\eta^4 w^{(4)}}{24 \ell^4},
\end{align}
\begin{align}
v_i+\delta v_i=v_{i} + \frac{\eta v^{(1)}{}_{i}}{\ell} + \frac{\eta^2 v^{(2)}{}_{i}}{2 \ell^2} + \frac{\eta^3 v^{(3)}{}_{i}}{6 \ell^3} + \frac{\eta^4 v^{(4)}{}_{i}}{24 \ell^4},
\end{align}
\begin{align}
\phi_{ij}+\delta \phi_{ij}= \phi_{ij}+\frac{\eta \phi^{(1)}{}_{ij}}{\ell} + \frac{\eta^2 \phi^{(2)}{}_{ij}}{2 \ell^2} + \frac{\eta^3 \phi^{(3)}{}_{ij}}{6 \ell^3} + \frac{\eta^4 \phi^{(4)}{}_{ij}}{24 \ell^4}
\end{align}
and their terms in the expansion, expressed in terms of the metric $\gamma^{(1)}_{ij}$, are determined using the EOM for the auxiliary field, (\ref{f1eom})
and (\ref{f2eom}). The first four orders (zeroth through third) of the EOM obtained by varying with respect to $\gamma_{ij}$ vanish identically, which is plausible since the Bach equation is a fourth-order partial differential equation, while the fourth order gives a restriction on the terms in the FG expansion. We present these equations here for $\gamma_{ij}^{(0)}=\mathrm{diag}(-1,1,1)$; for the full expressions see the appendix section ``EOM for CG, Full Expressions''. The $\rho\rho$ component reads
\begin{align}
E^{(1)\rho}_{\rho}&=- \frac{3 \psi^{(1)}_{i}{}^{k} \psi^{(1)ij} \psi^{(1)}_{j}{}^{l} \psi^{(1)}_{kl}}{4 \ell^8} + \frac{\psi^{(1)}_{ij} \psi^{(1)ij} \psi^{(1)}_{kl} \psi^{(1)kl}}{8 \ell^8} + \frac{\psi^{(2)}_{ij} \psi^{(2)ij}}{4 \ell^8} + \frac{\psi^{(1)}_{i}{}^{k} \psi^{(1)ij} \psi^{(2)}_{jk}}{\ell^8} \nonumber \\& - \frac{\psi^{(1)ij} \psi^{(3)}_{ij}}{2 \ell^8} + \frac{\partial_{j}\partial_{i}\psi^{(2)ij}}{\ell^6} - \frac{2 \psi^{(1)ij} \partial_{j}\partial_{k}\psi^{(1)}_{i}{}^{k}}{\ell^6} - \frac{\partial_{i}\psi^{(1)ij} \partial_{k}\psi^{(1)}_{j}{}^{k}}{2 \ell^6} \nonumber \\ &+ \frac{3 \psi^{(1)ij} \partial_{k}\partial^{k}\psi^{(1)}_{ij}}{2 \ell^6} - \frac{\partial_{j}\psi^{(1)}_{ik} \partial^{k}\psi^{(1)ij}}{\ell^6} + \frac{\partial_{k}\psi^{(1)}_{ij} \partial^{k}\psi^{(1)ij}}{\ell^6},
\end{align}
where \begin{equation}\psi_{ij}^{(n)}=\gamma_{ij}^{(n)}-\frac{1}{3}\gamma^{(n)}\gamma_{ij}^{(0)}\label{tracelessmet}\end{equation} denotes the traceless part of the terms in the expansion of the metric (\ref{expansiongamma}).
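As a quick check (assuming, as below, a three-dimensional boundary metric with $\gamma^{(0)ij}\gamma^{(0)}_{ij}=3$ and $\gamma^{(n)}=\gamma^{(0)ij}\gamma^{(n)}_{ij}$ its trace), the tensors (\ref{tracelessmet}) are indeed traceless with respect to $\gamma^{(0)}_{ij}$:
\begin{equation}
\gamma^{(0)ij}\psi^{(n)}_{ij}=\gamma^{(n)}-\frac{1}{3}\gamma^{(n)}\,\gamma^{(0)ij}\gamma^{(0)}_{ij}=\gamma^{(n)}-\gamma^{(n)}=0.
\end{equation}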
The $\rho i$ and $ij$ components are, respectively,
\begin{align}
E^{(1)\rho}_i&=\frac{4 \psi^{(2)jk} \partial_{i}\psi^{(1)}_{jk}}{3 \ell^7} - \frac{2 \psi^{(1)}_{j}{}^{l} \psi^{(1)jk} \partial_{i}\psi^{(1)}_{kl}}{\ell^7} + \frac{5 \psi^{(1)jk} \partial_{i}\psi^{(2)}_{jk}}{6 \ell^7} - \frac{\psi^{(1)}_{i}{}^{j} \psi^{(1)kl} \partial_{j}\psi^{(1)}_{kl}}{\ell^7} \nonumber \\ & + \frac{\partial_{j}\psi^{(3)}_{i}{}^{j}}{\ell^7} - \frac{2 \psi^{(2)jk} \partial_{k}\psi^{(1)}_{ij}}{\ell^7} - \frac{\psi^{(2)}_{i}{}^{j} \partial_{k}\psi^{(1)}_{j}{}^{k}}{\ell^7} - \frac{\psi^{(1)jk} \partial_{k}\psi^{(2)}_{ij}}{\ell^7} \nonumber \\ &- \frac{2 \psi^{(1)}_{i}{}^{j} \partial_{k}\psi^{(2)}_{j}{}^{k}}{\ell^7} + \frac{2 \partial_{k}\partial_{j}\partial_{i}\psi^{(1)jk}}{3 \ell^5} - \frac{\partial_{k}\partial^{k}\partial_{j}\psi^{(1)}_{i}{}^{j}}{\ell^5} + \frac{2 \psi^{(1)}_{j}{}^{l} \psi^{(1)jk} \partial_{l}\psi^{(1)}_{ik}}{\ell^7}\nonumber \\ & - \frac{\psi^{(1)}_{jk} \psi^{(1)jk} \partial_{l}\psi^{(1)}_{i}{}^{l}}{2 \ell^7} + \frac{2 \psi^{(1)}_{i}{}^{j} \psi^{(1)kl} \partial_{l}\psi^{(1)}_{jk}}{\ell^7} + \frac{2 \psi^{(1)}_{i}{}^{j} \psi^{(1)}_{j}{}^{k} \partial_{l}\psi^{(1)}_{k}{}^{l}}{\ell^7}
\end{align}
\begin{align}
E^{(1)j}_{i}&=\frac{6 \psi^{(1)}_{i}{}^{k} \psi^{(1)jl} \psi^{(1)}_{k}{}^{m} \psi^{(1)}_{lm}}{\ell^8} - \frac{\psi^{(1)}_{i}{}^{j} \psi^{(1)}_{k}{}^{m} \psi^{(1)kl} \psi^{(1)}_{lm}}{\ell^8} - \frac{\psi^{(1)}_{i}{}^{k} \psi^{(1)j}{}_{k} \psi^{(1)}_{lm} \psi^{(1)lm}}{\ell^8} \nonumber \\ & - \frac{7 \delta_{i}{}^{j} \psi^{(1)}_{k}{}^{m} \psi^{(1)kl} \psi^{(1)}_{l}{}^{n} \psi^{(1)}_{mn}}{4 \ell^8} + \frac{7 \delta_{i}{}^{j} \psi^{(1)}_{kl} \psi^{(1)kl} \psi^{(1)}_{mn} \psi^{(1)mn}}{24 \ell^8} + \frac{\psi^{(1)}_{kl} \psi^{(1)kl} \psi^{(2)}_{i}{}^{j}}{\ell^8} \nonumber \\ & - \frac{4 \psi^{(1)jl} \psi^{(1)}_{kl} \psi^{(2)}_{i}{}^{k}}{\ell^8} + \frac{3 \psi^{(2)}_{i}{}^{k} \psi^{(2)j}{}_{k}}{\ell^8} - \frac{4 \psi^{(1)}_{i}{}^{k} \psi^{(1)}_{kl} \psi^{(2)jl}}{\ell^8} - \frac{4 \psi^{(1)}_{i}{}^{k} \psi^{(1)jl} \psi^{(2)}_{kl}}{\ell^8} \nonumber \\ &+ \frac{7 \psi^{(1)}_{i}{}^{j} \psi^{(1)kl} \psi^{(2)}_{kl}}{6 \ell^8} - \frac{13 \delta_{i}{}^{j} \psi^{(2)}_{kl} \psi^{(2) kl}}{12 \ell^8} + \frac{11 \delta_{i}{}^{j} \psi^{(1)}_{k}{}^{m} \psi^{(1) kl} \psi^{(2)}_{lm}}{3 \ell^8}\nonumber \\ & + \frac{2 \psi^{(1) j}{}_{k} \psi^{(3)}_{i}{}^{k}}{\ell^8} + \frac{2 \psi^{(1)}_{i}{}^{k} \psi^{(3) j}{}_{k}}{\ell^8} - \frac{7 \delta_{i}{}^{j} \psi^{(1) kl} \psi^{(3)}_{kl}}{6 \ell^8} - \frac{\psi^{(4)}{}_{i}{}^{j}}{\ell^8} \nonumber \\ & - \frac{\psi^{(1) kl} \partial^{j}\partial_{i}\psi^{(1)}_{kl}}{\ell^6} - \frac{\partial_{k}\partial_{i}\psi^{(2) jk}}{\ell^6} - \frac{\partial_{k}\partial^{j}\psi^{(2)}_{i}{}^{k}}{\ell^6} + \frac{2 \partial_{k}\partial^{k}\psi^{(2)}_{i}{}^{j}}{\ell^6} \nonumber \\ & - \frac{\partial_{k}\psi^{(1)}_{i}{}^{k} \partial_{l}\psi^{(1) jl}}{\ell^6} + \frac{\partial_{i}\psi^{(1) jk} \partial_{l}\psi^{(1)}_{k}{}^{l}}{\ell^6} + \frac{\partial^{j}\psi^{(1)}_{i}{}^{k} \partial_{l}\psi^{(1)}_{k}{}^{l}}{\ell^6} \nonumber
\end{align}
\vspace{-1.2cm}
\hspace{0.5cm}
\begin{align}
&+ \frac{2 \psi^{(1) kl} \partial_{l}\partial_{i}\psi^{(1)j}{}_{k}}{\ell^6} + \frac{\psi^{(1) jk} \partial_{l}\partial_{i}\psi^{(1)}_{k}{}^{l}}{\ell^6} + \frac{2 \psi^{(1) kl} \partial_{l}\partial^{j}\psi^{(1)}_{ik}}{\ell^6} + \frac{\psi^{(1)}_{i}{}^{k} \partial_{l}\partial^{j}\psi^{(1)}_{k}{}^{l}}{\ell^6} \nonumber \\ & - \frac{2 \psi^{(1) kl} \partial_{l}\partial_{k}\psi^{(1)}_{i}{}^{j}}{\ell^6} + \frac{\psi^{(1)}_{i}{}^{j} \partial_{l}\partial_{k}\psi^{(1) kl}}{3 \ell^6} + \frac{\delta_{i}{}^{j} \partial_{l}\partial_{k}\psi^{(2) kl}}{3 \ell^6} \nonumber \\ & - \frac{2 \psi^{(1) jk} \partial_{l}\partial^{l}\psi^{(1)}_{ik}}{\ell^6} - \frac{2 \psi^{(1)}_{i}{}^{k} \partial_{l}\partial^{l}\psi^{(1) j}{}_{k}}{\ell^6} + \frac{2 \partial_{k}\psi^{(1) j}{}_{l} \partial^{l}\psi^{(1)}_{i}{}^{k}}{\ell^6} \nonumber \\ & - \frac{4 \partial_{l}\psi^{(1) j}{}_{k} \partial^{l}\psi^{(1)}_{i}{}^{k}}{\ell^6} - \frac{\delta_{i}{}^{j} \partial_{k}\psi^{(1) kl} \partial_{m}\psi^{(1)}_{l}{}^{m}}{6 \ell^6} - \frac{4 \delta_{i}{}^{j} \psi^{(1) kl} \partial_{m}\partial_{l}\psi^{(1)}_{k}{}^{m}}{3 \ell^6} \nonumber \\ & + \frac{7 \delta_{i}{}^{j} \psi^{(1) kl} \partial_{m}\partial^{m}\psi^{(1)}_{kl}}{6 \ell^6} - \frac{\delta_{i}{}^{j} \partial_{l}\psi^{(1)}_{km} \partial^{m}\psi^{(1) kl}}{3 \ell^6} + \frac{\delta_{i}{}^{j} \partial_{m}\psi^{(1)}_{kl} \partial^{m}\psi^{(1) kl}}{\ell^6}.
\end{align}
Since these equations do not give any conditions on $\gamma_{ij}^{(1)}$, they exhibit behaviour analogous to that of 3D CG \cite{Afshar:2011qw}, and differ from EG, in which $\gamma_{ij}^{(1)}$ needs to vanish.
\subsubsection{EOM for CG, Full Expressions}
Focusing on the $dS$ case, the
EOM of CG for the unphysical fields $w$, $v_i$ and $\phi_{ij}$ that define the auxiliary field $f_{ij}$ are
\begin{align}
\phi_{ij}^{(0)}& =\frac{4 h_{ji}}{\ell^2}\end{align}\begin{align}
\phi_{ij}^{(1)}& =0\end{align}\begin{align}
\phi_{ij}^{(2)}& =-8 R[D]_{ij} + \tfrac{4}{3} h_{ji} R[D] - \frac{4 h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{jk}}{\ell^2} + \frac{2 h^{(1)}{}_{ij} h^{(1)k}{}_{k}}{\ell^2} \nonumber \\ & + \frac{h_{ji} h^{(1)}{}_{kl} h^{(1)kl}}{\ell^2} - \frac{h_{ji} h^{(1)k}{}_{k} h^{(1)l}{}_{l}}{3 \ell^2} - \frac{4 h_{ji} h^{(2)k}{}_{k}}{3 \ell^2} \label{fijexp1}
\end{align}
\begin{align}
\phi_{ij}^{(3)}&=4 R[D] h^{(1)}{}_{ij} -4 h_{ji} R[D]^{kl} h^{(1)}{}_{kl} + \frac{12 h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{j}{}^{l} h^{(1)}{}_{kl}}{\ell^2} - \frac{3 h^{(1)}{}_{ij} h^{(1)}{}_{kl} h^{(1)kl}}{\ell^2} \nonumber \\ &- \frac{6 h_{ji} h^{(1)}{}_{k}{}^{m} h^{(1)kl} h^{(1)}{}_{lm}}{\ell^2} - \frac{h^{(1)}{}_{ij} h^{(1)k}{}_{k} h^{(1)l}{}_{l}}{\ell^2} + \frac{2 h_{ji} h^{(1)k}{}_{k} h^{(1)}{}_{lm} h^{(1)lm}}{\ell^2} \nonumber \\ &+ \frac{6 h^{(1)k}{}_{k} h^{(2)}{}_{ij}}{\ell^2} - \frac{12 h^{(1)}{}_{jk} h^{(2)}{}_{i}{}^{k}}{\ell^2} - \frac{12 h^{(1)}{}_{i}{}^{k} h^{(2)}{}_{jk}}{\ell^2} + \frac{10 h_{ji} h^{(1)kl} h^{(2)}{}_{kl}}{\ell^2} \nonumber \\ & + \frac{2 h^{(1)}{}_{ij} h^{(2)k}{}_{k}}{\ell^2} - \frac{2 h_{ji} h^{(1)k}{}_{k} h^{(2)l}{}_{l}}{\ell^2} + \frac{4 h^{(3)}{}_{ij}}{\ell^2} - \frac{4 h_{ji} h^{(3)k}{}_{k}}{\ell^2 } \nonumber \\ &+ 12 D_{j}D_{i}h^{(1)k}{}_{k}-12 D_{k}D_{i}h^{(1)}{}_{j}{}^{k} -12 D_{k}D_{j}h^{(1)}{}_{i}{}^{k} + 12 D_{k}D^{k}h^{(1)}{}_{ij} \nonumber \\ & + 4 h_{ji} D_{l}D_{k}h^{(1)kl} -4 h_{ji} D_{l}D^{l}h^{(1)k}{}_{k}\end{align}
\begin{align} \phi_{ij}^{(4)}&= -16 R[D]^{kl} h^{(1)}{}_{ij} h^{(1)}{}_{kl} + 16 h_{ji} R[D]^{kl} h^{(1)}{}_{k}{}^{m} h^{(1)}{}_{lm} \nonumber \\ &- \frac{48 h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{j}{}^{l} h^{(1)}{}_{k}{}^{m} h^{(1)}{}_{lm}}{\ell^2}+ \frac{8 h^{(1)}{}_{ij} h^{(1)k}{}_{k} h^{(1)}{}_{lm} h^{(1)lm}}{\ell^2} \nonumber \\ &+ \frac{36 h_{ji} h^{(1)}{}_{k}{}^{m} h^{(1)kl} h^{(1)}{}_{l}{}^{n} h^{(1)}{}_{mn}}{\ell^2} - \frac{8 h_{ji} h^{(1)k}{}_{k} h^{(1)}{}_{l}{}^{n} h^{(1)lm} h^{(1)}{}_{mn}}{\ell^2} \nonumber \\ &- \frac{4 h_{ji} h^{(1)}{}_{kl} h^{(1)kl} h^{(1)}{}_{mn} h^{(1)mn}}{\ell^2} + 8 R[D] h^{(2)}{}_{ij} - \frac{18 h^{(1)}{}_{kl} h^{(1)kl} h^{(2)}{}_{ij}}{\ell^2}\nonumber \\ & - \frac{2 h^{(1)k}{}_{k} h^{(1)l}{}_{l} h^{(2)}{}_{ij}}{\ell^2} + \frac{48 h^{(1)}{}_{j}{}^{l} h^{(1)}{}_{kl} h^{(2)}{}_{i}{}^{k}}{\ell^2} - \frac{48 h^{(2)}{}_{i}{}^{k} h^{(2)}{}_{jk}}{\ell^2}\nonumber \\ & + \frac{48 h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{kl} h^{(2)}{}_{j}{}^{l}}{\ell^2} -8 h_{ji} R[D]^{kl} h^{(2)}{}_{kl} + \frac{24 h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{j}{}^{l} h^{(2)}{}_{kl}}{\ell^2} \nonumber \\&+ \frac{4 h^{(1)}{}_{ij} h^{(1)kl} h^{(2)}{}_{kl}}{\ell^2} + \frac{16 h^{(2)}{}_{ij} h^{(2)k}{}_{k}}{\ell^2} + \frac{20 h_{ji} h^{(2)}{}_{kl} h^{(2)kl}}{\ell^2} \nonumber \\ & - \frac{76 h_{ji} h^{(1)}{}_{k}{}^{m} h^{(1)kl} h^{(2)}{}_{lm}}{\ell^2} + \frac{12 h_{ji} h^{(1)k}{}_{k} h^{(1)lm} h^{(2)}{}_{lm}}{\ell^2} - \frac{8 h^{(1)}{}_{ij} h^{(1)k}{}_{k} h^{(2)l}{}_{l}}{\ell^2} \nonumber \\ &- \frac{4 h_{ji} h^{(2)k}{}_{k} h^{(2)l}{}_{l}}{\ell^2} + \frac{8 h_{ji} h^{(1)}{}_{kl} h^{(1)kl} h^{(2)m}{}_{m}}{\ell^2} + \frac{12 h^{(1)k}{}_{k} h^{(3)}{}_{ij}}{\ell^2}\nonumber \end{align}\begin{align} & - \frac{24 h^{(1)}{}_{jk} h^{(3)}{}_{i}{}^{k}}{\ell^2} - \frac{24 h^{(1)}{}_{i}{}^{k} h^{(3)}{}_{jk}}{\ell^2} + \frac{28 h_{ji} h^{(1)kl} h^{(3)}{}_{kl}}{\ell^2} - \frac{4 h^{(1)}{}_{ij} h^{(3)k}{}_{k}}{\ell^2} \nonumber \\ & - \frac{4 h_{ji} h^{(1)k}{}_{k} h^{(3)l}{}_{l}}{\ell^2} + \frac{12 
h^{(4)}{}_{ij}}{\ell^2} - \frac{8 h_{ji} h^{(4)k}{}_{k}}{\ell^2} -48 h^{(1)kl} D_{i}D_{j}h^{(1)}{}_{kl} \nonumber \\ &-24 D_{i}h^{(1)kl} D_{j}h^{(1)}{}_{kl} + 24 D_{j}D_{i}h^{(2)k}{}_{k} -24 D_{i}h^{(1)}{}_{j}{}^{k} D_{k}h^{(1)l}{}_{l}\nonumber \\ & -24 D_{j}h^{(1)}{}_{i}{}^{k} D_{k}h^{(1)l}{}_{l} -24 D_{k}D_{i}h^{(2)}{}_{j}{}^{k} -24 D_{k}D_{j}h^{(2)}{}_{i}{}^{k} \nonumber \\ &+ 24 D_{k}D^{k}h^{(2)}{}_{ij} + 24 D_{k}h^{(1)l}{}_{l} D^{k}h^{(1)}{}_{ij} + 48 D_{i}h^{(1)}{}_{j}{}^{k} D_{l}h^{(1)}{}_{k}{}^{l}\nonumber \\ & + 48 D_{j}h^{(1)}{}_{i}{}^{k} D_{l}h^{(1)}{}_{k}{}^{l} -48 D^{k}h^{(1)}{}_{ij} D_{l}h^{(1)}{}_{k}{}^{l} + 48 h^{(1)kl} D_{l}D_{i}h^{(1)}{}_{jk} \nonumber \\ &+ 48 h^{(1)kl} D_{l}D_{j}h^{(1)}{}_{ik}-48 h^{(1)kl} D_{l}D_{k}h^{(1)}{}_{ij} + 16 h^{(1)}{}_{ij} D_{l}D_{k}h^{(1)kl} \nonumber \\ &+ 16 h_{ji} h^{(1)kl} D_{l}D_{k}h^{(1)m}{}_{m} + 8 h_{ji} D_{l}D_{k}h^{(2)kl}-16 h^{(1)}{}_{ij} D_{l}D^{l}h^{(1)k}{}_{k} \nonumber \\ &-8 h_{ji} D_{l}D^{l}h^{(2)k}{}_{k} -16 h_{ji} h^{(1)kl} D_{l}D_{m}h^{(1)}{}_{k}{}^{m} + 48 D_{k}h^{(1)}{}_{jl} D^{l}h^{(1)}{}_{i}{}^{k} \nonumber \\ &-48 D_{l}h^{(1)}{}_{jk} D^{l}h^{(1)}{}_{i}{}^{k} -4 h_{ji} D_{l}h^{(1)m}{}_{m} D^{l}h^{(1)k}{}_{k} -16 h_{ji} D_{k}h^{(1)kl} D_{m}h^{(1)}{}_{l}{}^{m} \nonumber \\ &+ 16 h_{ji} D^{l}h^{(1)k}{}_{k} D_{m}h^{(1)}{}_{l}{}^{m} -16 h_{ji} h^{(1)kl} D_{m}D_{l}h^{(1)}{}_{k}{}^{m} \nonumber \\ &+ 16 h_{ji} h^{(1)kl} D_{m}D^{m}h^{(1)}{}_{kl}-8 h_{ji} D_{l}h^{(1)}{}_{km} D^{m}h^{(1)kl} \nonumber \\ &+ 12 h_{ji} D_{m}h^{(1)}{}_{kl} D^{m}h^{(1)kl}, \label{fijexp2}
\end{align}
\begin{align}
v_i^{(0)}&=0\end{align}\begin{align}
v_i^{(1)}&=0\end{align}\begin{align}
v_i^{(2)}&=\frac{4 D_{i}h^{(1)j}{}_{j}}{\ell} - \frac{4 D_{j}h^{(1)}{}_{i}{}^{j}}{\ell}\\
v_i^{(3)}&=- \frac{18 h^{(1)jk} D_{i}h^{(1)}{}_{jk}}{\ell} + \frac{12 D_{i}h^{(2)j}{}_{j}}{\ell} - \frac{6 h^{(1)}{}_{i}{}^{j} D_{j}h^{(1)k}{}_{k}}{\ell} - \frac{12 D_{j}h^{(2)}{}_{i}{}^{j}}{\ell} \nonumber \\ &+ \frac{12 h^{(1)jk} D_{k}h^{(1)}{}_{ij}}{\ell} + \frac{12 h^{(1)}{}_{i}{}^{j} D_{k}h^{(1)}{}_{j}{}^{k}}{\ell} \end{align}\begin{align}
v_{i}^{(4)}&=- \frac{48 h^{(2)jk} D_{i}h^{(1)}{}_{jk}}{\ell} + \frac{96 h^{(1)}{}_{j}{}^{l} h^{(1)jk} D_{i}h^{(1)}{}_{kl}}{\ell} - \frac{60 h^{(1)jk} D_{i}h^{(2)}{}_{jk}}{\ell} \nonumber \\ &+ \frac{24 D_{i}h^{(3)j}{}_{j}}{\ell} + \frac{24 h^{(1)}{}_{i}{}^{j} h^{(1)kl} D_{j}h^{(1)}{}_{kl}}{\ell} - \frac{24 h^{(2)}{}_{i}{}^{j} D_{j}h^{(1)k}{}_{k}}{\ell} - \frac{12 h^{(1)}{}_{i}{}^{j} D_{j}h^{(2)k}{}_{k}}{\ell} \nonumber \\ & - \frac{24 D_{j}h^{(3)}{}_{i}{}^{j}}{\ell} + \frac{24 h^{(2)jk} D_{k}h^{(1)}{}_{ij}}{\ell} + \frac{48 h^{(2)}{}_{i}{}^{j} D_{k}h^{(1)}{}_{j}{}^{k}}{\ell} + \frac{24 h^{(1)}{}_{i}{}^{j} h^{(1)}{}_{j}{}^{k} D_{k}h^{(1)l}{}_{l}}{\ell} \nonumber \\ &+ \frac{48 h^{(1)jk} D_{k}h^{(2)}{}_{ij}}{\ell} + \frac{24 h^{(1)}{}_{i}{}^{j} D_{k}h^{(2)}{}_{j}{}^{k}}{\ell} - \frac{48 h^{(1)}{}_{j}{}^{l} h^{(1)jk} D_{l}h^{(1)}{}_{ik}}{\ell} \nonumber \\ & - \frac{48 h^{(1)}{}_{i}{}^{j} h^{(1)kl} D_{l}h^{(1)}{}_{jk}}{\ell} - \frac{48 h^{(1)}{}_{i}{}^{j} h^{(1)}{}_{j}{}^{k} D_{l}h^{(1)}{}_{k}{}^{l}}{\ell},\label{vexp}
\end{align}
\begin{align}
w^{(0)}&=\frac{4}{\ell^2}\end{align}\begin{align}
w^{(1)}&=0\end{align}\begin{align}
w^{(2)}&=\tfrac{4}{3} R[D] - \frac{h^{(1)}{}_{ij} h^{(1)ij}}{\ell^2} - \frac{h^{(1)i}{}_{i} h^{(1)j}{}_{j}}{3 \ell^2} + \frac{8 h^{(2)i}{}_{i}}{3 \ell^2}\end{align}\begin{align}
w^{(3)}&=-4 R[D]^{ij} h^{(1)}{}_{ij} + \frac{6 h^{(1)}{}_{i}{}^{k} h^{(1)ij} h^{(1)}{}_{jk}}{\ell^2} + \frac{2 h^{(1)i}{}_{i} h^{(1)}{}_{jk} h^{(1)jk}}{\ell^2} - \frac{14 h^{(1)ij} h^{(2)}{}_{ij}}{\ell^2} \nonumber \\ &- \frac{2 h^{(1)i}{}_{i} h^{(2)j}{}_{j}}{\ell^2} + \frac{8 h^{(3)i}{}_{i}}{\ell^2} + 4 D_{j}D_{i}h^{(1)ij} -4 D_{j}D^{j}h^{(1)i}{}_{i} \end{align}\begin{align}
w^{(4)}&=16 R[D]^{ij} h^{(1)}{}_{i}{}^{k} h^{(1)}{}_{jk} - \frac{36 h^{(1)}{}_{i}{}^{k} h^{(1)ij} h^{(1)}{}_{j}{}^{l} h^{(1)}{}_{kl}}{\ell^2} - \frac{8 h^{(1)i}{}_{i} h^{(1)}{}_{j}{}^{l} h^{(1)jk} h^{(1)}{}_{kl}}{\ell^2} \nonumber \\ & - \frac{4 h^{(1)}{}_{ij} h^{(1)ij} h^{(1)}{}_{kl} h^{(1)kl}}{\ell^2} -8 R[D]^{ij} h^{(2)}{}_{ij} - \frac{28 h^{(2)}{}_{ij} h^{(2)ij}}{\ell^2} + \frac{92 h^{(1)}{}_{i}{}^{k} h^{(1)ij} h^{(2)}{}_{jk}}{\ell^2} \nonumber \\ &+ \frac{12 h^{(1)i}{}_{i} h^{(1)jk} h^{(2)}{}_{jk}}{\ell^2} - \frac{4 h^{(2)i}{}_{i} h^{(2)j}{}_{j}}{\ell^2} + \frac{8 h^{(1)}{}_{ij} h^{(1)ij} h^{(2)k}{}_{k}}{\ell^2} - \frac{44 h^{(1)ij} h^{(3)}{}_{ij}}{\ell^2} \nonumber \\ & - \frac{4 h^{(1)i}{}_{i} h^{(3)j}{}_{j}}{\ell^2} + \frac{16 h^{(4)i}{}_{i}}{\ell^2} + 16 h^{(1)ij} D_{j}D_{i}h^{(1)k}{}_{k} + 8 D_{j}D_{i}h^{(2)ij} -8 D_{j}D^{j}h^{(2)i}{}_{i} \nonumber \\ & -16 h^{(1)ij} D_{j}D_{k}h^{(1)}{}_{i}{}^{k} -4 D_{j}h^{(1)k}{}_{k} D^{j}h^{(1)i}{}_{i} -16 D_{i}h^{(1)ij} D_{k}h^{(1)}{}_{j}{}^{k} \nonumber \\ &+ 16 D^{j}h^{(1)i}{}_{i} D_{k}h^{(1)}{}_{j}{}^{k} -16 h^{(1)ij} D_{k}D_{j}h^{(1)}{}_{i}{}^{k} + 16 h^{(1)ij} D_{k}D^{k}h^{(1)}{}_{ij} \nonumber \\ & -8 D_{j}h^{(1)}{}_{ik} D^{k}h^{(1)ij} + 12 D_{k}h^{(1)}{}_{ij} D^{k}h^{(1)ij}. \label{wexp}
\end{align}
The EOM of CG for the metric $\gamma_{ij}$, using the expansion with the factorials $\frac{1}{n!}$, read in
the $\rho\rho$ component
\begin{align}
E^{(4)\rho}_{\rho}&=- \frac{3 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)ij} \psi ^{(1)}_{j}{}^{l} \psi ^{(1)}_{kl}}{4 \ell^8} + \frac{\psi ^{(1)}_{ij} \psi ^{(1)ij} \psi ^{(1)}_{kl} \psi ^{(1)kl}}{8 \ell^8} + \frac{\psi ^{(2)}_{ij} \psi ^{(2)ij}}{4 \ell^8} + \frac{\psi ^{(1)}_{i}{}^{k} \psi ^{(1)ij} \psi ^{(2)}_{jk}}{\ell^8} \nonumber \\ & - \frac{\psi ^{(1)ij} \psi ^{(3)}_{ij}}{2 \ell^8} - \frac{5 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)}_{jk} R[D]^{ij}}{\ell^6} - \frac{R[D]_{ij} R[D]^{ij}}{\ell^4} + \frac{5 \psi ^{(1)}_{ij} \psi ^{(1)ij} R[D]}{6 \ell^6}\nonumber \\ & + \frac{R[D]^2}{3 \ell^4} - \frac{D_{i}D^{i}R[D]}{3 \ell^4} + \frac{D_{j}D_{i}\psi ^{(2)ij}}{\ell^6} - \frac{2 \psi ^{(1)ij} D_{j}D_{k}\psi ^{(1)}_{i}{}^{k}}{\ell^6} - \frac{D_{i}\psi ^{(1)ij} D_{k}\psi ^{(1)}_{j}{}^{k}}{2 \ell^6}\nonumber \\ & + \frac{3 \psi ^{(1)ij} D_{k}D^{k}\psi ^{(1)}_{ij}}{2 \ell^6} - \frac{D_{j}\psi ^{(1)}_{ik} D^{k}\psi ^{(1)ij}}{\ell^6} + \frac{D_{k}\psi ^{(1)}_{ij} D^{k}\psi ^{(1)ij}}{\ell^6},
\end{align}
in the $\rho i$ component
\begin{align}
E^{(4)\rho}_{i}&=\frac{4 \psi ^{(2)jk} D_{i}\psi ^{(1)}_{jk}}{3 \ell^7} + \frac{4 R[D]^{jk} D_{i}\psi ^{(1)}_{jk}}{3 \ell^5} - \frac{2 \psi ^{(1)}_{j}{}^{l} \psi ^{(1)jk} D_{i}\psi ^{(1)}_{kl}}{\ell^7} + \frac{5 \psi ^{(1)jk} D_{i}\psi ^{(2)}_{jk}}{6 \ell^7} \nonumber \\ & - \frac{2 \psi ^{(1)jk} D_{i}R[D]_{jk}}{3 \ell^5} + \frac{2 D_{i}D_{k}D_{j}\psi ^{(1)jk}}{3 \ell^5} + \frac{2 R[D] D_{j}\psi ^{(1)}_{i}{}^{j}}{3 \ell^5} - \frac{\psi ^{(1)}_{i}{}^{j} \psi ^{(1)kl} D_{j}\psi ^{(1)}_{kl}}{\ell^7} \nonumber \\ &+ \frac{D_{j}\psi ^{(3)}_{i}{}^{j}}{\ell^7} - \frac{\psi ^{(1)}_{i}{}^{j} D_{j}R[D]}{3 \ell^5} - \frac{2 \psi ^{(2)jk} D_{k}\psi ^{(1)}_{ij}}{\ell^7} - \frac{2 R[D]^{jk} D_{k}\psi ^{(1)}_{ij}}{\ell^5} \nonumber \\ & - \frac{\psi ^{(2)}_{i}{}^{j} D_{k}\psi ^{(1)}_{j}{}^{k}}{\ell^7} + \frac{R[D]_{i}{}^{j} D_{k}\psi ^{(1)}_{j}{}^{k}}{\ell^5} - \frac{\psi ^{(1)jk} D_{k}\psi ^{(2)}_{ij}}{\ell^7} - \frac{2 \psi ^{(1)}_{i}{}^{j} D_{k}\psi ^{(2)}_{j}{}^{k}}{\ell^7} \nonumber \\ &+ \frac{2 \psi ^{(1)jk} D_{k}R[D]_{ij}}{\ell^5} - \frac{D_{k}D^{k}D_{j}\psi ^{(1)}_{i}{}^{j}}{\ell^5} + \frac{2 \psi ^{(1)}_{j}{}^{l} \psi ^{(1)jk} D_{l}\psi ^{(1)}_{ik}}{\ell^7} \nonumber \\ & - \frac{\psi ^{(1)}_{jk} \psi ^{(1)jk} D_{l}\psi ^{(1)}_{i}{}^{l}}{2 \ell^7} + \frac{2 \psi ^{(1)}_{i}{}^{j} \psi ^{(1)kl} D_{l}\psi ^{(1)}_{jk}}{\ell^7} + \frac{2 \psi ^{(1)}_{i}{}^{j} \psi ^{(1)}_{j}{}^{k} D_{l}\psi ^{(1)}_{k}{}^{l}}{\ell^7}
\end{align}
and in the $ij$ component
\begin{align}
E^{(4)j}_{i}&=\frac{6 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)jl} \psi ^{(1)}_{k}{}^{m} \psi ^{(1)}_{lm}}{\ell^8} - \frac{\psi ^{(1)}_{i}{}^{j} \psi ^{(1)}_{k}{}^{m} \psi ^{(1)kl} \psi ^{(1)}_{lm}}{\ell^8} - \frac{\psi ^{(1)}_{i}{}^{k} \psi ^{(1)j}{}_{k} \psi ^{(1)}_{lm} \psi ^{(1)lm}}{\ell^8} \nonumber \\ & - \frac{7 \delta_{i}{}^{j} \psi ^{(1)}_{k}{}^{m} \psi ^{(1)kl} \psi ^{(1)}_{l}{}^{n} \psi ^{(1)}_{mn}}{4 \ell^8} + \frac{7 \delta_{i}{}^{j} \psi ^{(1)}_{kl} \psi ^{(1)kl} \psi ^{(1)}_{mn} \psi ^{(1)mn}}{24 \ell^8} \nonumber \\ &+ \frac{\psi ^{(1)}_{kl} \psi ^{(1)kl} \psi ^{(2)}_{i}{}^{j}}{\ell^8} - \frac{4 \psi ^{(1)jl} \psi ^{(1)}_{kl} \psi ^{(2)}_{i}{}^{k}}{\ell^8} + \frac{3 \psi ^{(2)}_{i}{}^{k} \psi ^{(2)j}{}_{k}}{\ell^8} \nonumber \\ & - \frac{4 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)}_{kl} \psi ^{(2)jl}}{\ell^8} - \frac{4 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)jl} \psi ^{(2)}_{kl}}{\ell^8} + \frac{7 \psi ^{(1)}_{i}{}^{j} \psi ^{(1)kl} \psi ^{(2)}_{kl}}{6 \ell^8}\nonumber \\ & - \frac{13 \delta_{i}{}^{j} \psi ^{(2)}_{kl} \psi ^{(2)kl}}{12 \ell^8} + \frac{11 \delta_{i}{}^{j} \psi ^{(1)}_{k}{}^{m} \psi ^{(1)kl} \psi ^{(2)}_{lm}}{3 \ell^8} + \frac{2 \psi ^{(1)j}{}_{k} \psi ^{(3)}_{i}{}^{k}}{\ell^8} \nonumber \\ &+ \frac{2 \psi ^{(1)}_{i}{}^{k} \psi ^{(3)j}{}_{k}}{\ell^8} - \frac{7 \delta_{i}{}^{j} \psi ^{(1)kl} \psi ^{(3)}_{kl}}{6 \ell^8} - \frac{5 \psi ^{(1)}_{kl} \psi ^{(1)kl} R[D]_{i}{}^{j}}{\ell^6} + \frac{6 \psi ^{(1)jl} \psi ^{(1)}_{kl} R[D]_{i}{}^{k}}{\ell^6}\nonumber \\ & - \frac{4 \psi ^{(2)j}{}_{k} R[D]_{i}{}^{k}}{\ell^6} - \frac{8 R[D]_{i}{}^{k} R[D]^{j}{}_{k}}{\ell^4} + \frac{6 \psi ^{(1)}_{i}{}^{l} \psi ^{(1)}_{kl} R[D]^{jk}}{\ell^6} - \frac{4 \psi ^{(2)}_{ik} R[D]^{jk}}{\ell^6}\nonumber \\ & + \frac{2 \psi ^{(1)}_{i}{}^{j} \psi ^{(1)}_{kl} R[D]^{kl}}{3 \ell^6} - \frac{19 \delta_{i}{}^{j} \psi ^{(1)}_{k}{}^{m} \psi ^{(1)}_{lm} R[D]^{kl}}{3 \ell^6} + \frac{8 \delta_{i}{}^{j} \psi ^{(2)}_{kl} R[D]^{kl}}{3 \ell^6}\nonumber \\ & + \frac{3 \delta_{i}{}^{j} R[D]_{kl} 
R[D]^{kl}}{\ell^4} - \frac{7 \psi ^{(1)}_{i}{}^{k} \psi ^{(1)j}{}_{k} R[D]}{3 \ell^6} + \frac{17 \delta_{i}{}^{j} \psi ^{(1)}_{kl} \psi ^{(1)kl} R[D]}{6 \ell^6} + \frac{4 \psi ^{(2)}_{i}{}^{j} R[D]}{3 \ell^6} \nonumber \\ &+ \frac{14 R[D]_{i}{}^{j} R[D]}{3 \ell^4} - \frac{5 \delta_{i}{}^{j} R[D]^2}{3 \ell^4} - \frac{\psi^{(4)}{}_{i}{}^{j}}{\ell^8} - \frac{13 \psi ^{(1)kl} D_{i}D^{j}\psi ^{(1)}_{kl}}{2 \ell^6} - \frac{D_{i}D_{k}\psi ^{(2)jk}}{\ell^6} \nonumber \\ &+ \frac{2 \psi ^{(1)kl} D_{i}D_{l}\psi ^{(1)j}{}_{k}}{\ell^6} + \frac{\psi ^{(1)jk} D_{i}D_{l}\psi ^{(1)}_{k}{}^{l}}{\ell^6} + \frac{11 \psi ^{(1)kl} D^{j}D_{i}\psi ^{(1)}_{kl}}{2 \ell^6} - \frac{2 D^{j}D_{i}R[D]}{3 \ell^4} \nonumber \end{align}
\vspace{-0.3cm}
\begin{align} & - \frac{D^{j}D_{k}\psi ^{(2)}_{i}{}^{k}}{\ell^6} + \frac{2 \psi ^{(1)kl} D^{j}D_{l}\psi ^{(1)}_{ik}}{\ell^6} + \frac{\psi ^{(1)}_{i}{}^{k} D^{j}D_{l}\psi ^{(1)}_{k}{}^{l}}{\ell^6} + \frac{2 D_{k}D^{k}\psi ^{(2)}_{i}{}^{j}}{\ell^6}\nonumber \\ & + \frac{2 D_{k}D^{k}R[D]_{i}{}^{j}}{\ell^4} - \frac{\delta_{i}{}^{j} D_{k}D^{k}R[D]}{3 \ell^4} - \frac{D_{k}\psi ^{(1)}_{i}{}^{k} D_{l}\psi ^{(1)jl}}{\ell^6} + \frac{D_{i}\psi ^{(1)jk} D_{l}\psi ^{(1)}_{k}{}^{l}}{\ell^6}\nonumber \\ & + \frac{D^{j}\psi ^{(1)}_{i}{}^{k} D_{l}\psi ^{(1)}_{k}{}^{l}}{\ell^6} - \frac{2 \psi ^{(1)kl} D_{l}D_{k}\psi ^{(1)}_{i}{}^{j}}{\ell^6} + \frac{\psi ^{(1)}_{i}{}^{j} D_{l}D_{k}\psi ^{(1)kl}}{3 \ell^6} + \frac{\delta_{i}{}^{j} D_{l}D_{k}\psi ^{(2)kl}}{3 \ell^6} \nonumber \\ & - \frac{2 \psi ^{(1)jk} D_{l}D^{l}\psi ^{(1)}_{ik}}{\ell^6} - \frac{2 \psi ^{(1)}_{i}{}^{k} D_{l}D^{l}\psi ^{(1)j}{}_{k}}{\ell^6} - \frac{4 \delta_{i}{}^{j} \psi ^{(1)kl} D_{l}D_{m}\psi ^{(1)}_{k}{}^{m}}{3 \ell^6} \nonumber \\ &+ \frac{2 D_{k}\psi ^{(1)j}{}_{l} D^{l}\psi ^{(1)}_{i}{}^{k}}{\ell^6} - \frac{4 D_{l}\psi ^{(1)j}{}_{k} D^{l}\psi ^{(1)}_{i}{}^{k}}{\ell^6} - \frac{\delta_{i}{}^{j} D_{k}\psi ^{(1)kl} D_{m}\psi ^{(1)}_{l}{}^{m}}{6 \ell^6} \nonumber \\ & + \frac{7 \delta_{i}{}^{j} \psi ^{(1)kl} D_{m}D^{m}\psi ^{(1)}_{kl}}{6 \ell^6} - \frac{\delta_{i}{}^{j} D_{l}\psi ^{(1)}_{km} D^{m}\psi ^{(1)kl}}{3 \ell^6} + \frac{\delta_{i}{}^{j} D_{m}\psi ^{(1)}_{kl} D^{m}\psi ^{(1)kl}}{\ell^6}
\end{align}
\subsection{Killing Vectors for Conformal Algebra on Spherical Background}
The Killing vectors admitted by the leading-order Killing equations (\ref{lo}) and (\ref{eq:LOCKV}) agree with the asymptotic isometries obtained in \cite{Henneaux:1985tv} when $\frac{1}{r}=\frac{\rho}{\ell^2}\rightarrow0$. They read
\begin{align}
\xi^{sph}_0 &=(1,0,0) \label{kvsph0}\\
\xi^{sph}_7 &=(0,0,1)\\
\xi^{sph}_3 &=\left(-\sin (t) \cos (\theta ),-\cos (t) \sin (\theta ),0\right) \\
\xi^{sph}_6 &=\left(\cos (t) \cos (\theta ),-\sin (t) \sin (\theta ),0\right) \\
\xi^{sph}_8 &=\left(0,-\sin (\phi ),-\cot (\theta ) \cos (\phi )\right) \\
\xi^{sph}_9 &=\left(0,\cos (\phi ),-\cot (\theta ) \sin (\phi )\right) \\
\xi^{sph}_1 &=\left(-\sin (t) \sin (\theta ) \cos (\phi ),\cos (t) \cos (\theta ) \cos (\phi ),-\frac{\cos (t) \sin (\phi )}{\sin (\theta )}\right)\\
\xi^{sph}_2 &=\left(-\sin (t) \sin (\theta ) \sin (\phi ),\cos (t) \cos (\theta ) \sin (\phi ),\frac{\cos (t) \cos (\phi )}{\sin (\theta )}\right) \\
\xi^{sph}_4 &=\left(\cos (t) \sin (\theta ) \cos (\phi ),\sin (t) \cos (\theta ) \cos (\phi ),-\frac{\sin (t) \sin (\phi )}{\sin (\theta )}\right)\\
\xi^{sph}_5 &=\left(\cos (t) \sin (\theta ) \sin (\phi ),\sin (t) \cos (\theta ) \sin (\phi ),\frac{\sin (t) \cos (\phi )}{\sin (\theta )}\right).\label{sphkv}
\end{align}
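As a consistency check on (\ref{kvsph0})--(\ref{sphkv}), the three time-independent vectors $\xi^{sph}_7$, $\xi^{sph}_8$ and $\xi^{sph}_9$ should generate the $so(3)$ rotations of the sphere. Indeed, a direct computation of the Lie bracket in the coordinates $(t,\theta,\phi)$ gives a vanishing $\theta$ component and
\begin{equation}
[\xi^{sph}_8,\xi^{sph}_9]^{\phi}=\xi^{sph}_8\left(-\cot\theta\sin\phi\right)-\xi^{sph}_9\left(-\cot\theta\cos\phi\right)=\cot^2\theta-\csc^2\theta=-1,
\end{equation}
so $[\xi^{sph}_8,\xi^{sph}_9]=-\xi^{sph}_7$, and the three vectors close into an $so(3)$ subalgebra (up to the sign conventions chosen here).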
\section{Appendix: Canonical Analysis of Conformal Gravity}
\subsection{Hamiltonian analysis}
To discuss the Hamiltonian formulation and the dynamics of gauge systems, we start from the action principle in Lagrangian form.
If the action
\begin{equation}
S_I=\int_{t_1}^{t_2}L(q,\dot{q})dt
\end{equation}
is stationary under variations $\delta q^n(t)$ that vanish at $t_1$ and $t_2$, where $q^n$ $(n=1,...,N)$ are the Lagrangian variables, the classical motion of the system is defined.
That
is fulfilled if the Euler-Lagrange equations
\begin{align}
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}^n}\right)-\frac{\partial L}{\partial q^n}=0, && n=1,...,N
\end{align}
or
\begin{equation}
\ddot{q}^n\frac{\partial^2L}{\partial\dot{q}^{n'}\partial\dot{q}^n}=\frac{\partial L}{\partial q^n}-\dot{q}^{n'}\frac{\partial^2L}{\partial\dot{q}^{n'}\partial\dot{q}^{n}}
\end{equation}are satisfied.
The positions and velocities at time $t$ determine the accelerations when \begin{equation}
\frac{\partial^2 L}{\partial\dot{q}^{n'}\partial\dot{q}^n}
\end{equation}
is invertible, i.e.
\begin{equation}
\textbf{D}=\det\frac{\partial^2 L}{\partial\dot{q}^{n'}\partial\dot{q}^n} \label{detcond}
\end{equation}
does not vanish. If $\textbf{D}=0$, the accelerations $\ddot{q}$ are not uniquely determined by the positions $q$ and velocities $\dot{q}$, and one could add arbitrary functions of time to the solutions of the EOM. In other words, when we are interested in systems with gauge degrees of freedom, we are interested in systems for which $\frac{\partial^2 L}{\partial\dot{q}^{n'}\partial\dot{q}^n}$ cannot be inverted.
If we define the canonical momenta by
\begin{equation}
p_n=\frac{\partial L}{\partial \dot{q}^n} \label{momenta},
\end{equation}
the condition $\textbf{D}=0$ reflects the fact that the velocities cannot be expressed as functions of the coordinates and momenta, i.e., the momenta $p_n$ are not independent, and it follows from (\ref{momenta}) that
\begin{align}
\phi_m(q,p)=0, && m=1,...,M'\label{frstcon}.
\end{align}
These conditions (\ref{frstcon}) are restricted by the regularity conditions, and define a (for simplicity, constant-rank) submanifold in $(q,p)$ space, the {\it primary constraint surface}.
For $M'$ independent equations (\ref{frstcon}), the primary constraint surface has dimension $2N-M'$.
Since (\ref{frstcon}) implies that the map from the momenta $p$ to the velocities $\dot{q}$ is multivalued, one has to introduce {\it Lagrange multipliers}, parameters that make it single valued.
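As an illustration of a vanishing determinant (\ref{detcond}), consider the standard toy model (not taken from the main text)
\begin{equation}
L=\frac{1}{2}\left(\dot{q}^1-q^2\right)^2.
\end{equation}
The Hessian is $\frac{\partial^2L}{\partial\dot{q}^{n'}\partial\dot{q}^n}=\mathrm{diag}(1,0)$, so $\textbf{D}=0$. The momenta (\ref{momenta}) are $p_1=\dot{q}^1-q^2$ and $p_2=0$: the velocity $\dot{q}^2$ cannot be recovered from the momenta, and the relation $\phi_1=p_2\approx0$ is a primary constraint of the form (\ref{frstcon}).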
\subsection{Primary and Secondary Constraints}
For the phase space of $p$ and $q$ denoted by $\Gamma$, the subspace $\Gamma_1$ is defined by the constraints (\ref{frstcon}), and
it defines ``weak equality'' (which we write with ``$\approx$''). A function $F$ which is zero on the constraint surface $\Gamma_1$, \begin{equation}F(p,q)\vert_{\Gamma_1}=0,\label{eqn}\end{equation} ``vanishes weakly''. When the partial derivatives of $F$ with respect to the coordinates and momenta,
$\frac{\partial F}{\partial q}\vert_{\Gamma_1}=0$ and $\frac{\partial F}{\partial p}\vert_{\Gamma_1}=0$, also vanish on the constraint surface $\Gamma_1$, $F$ satisfies ``strong equality'' (denoted with ``$=$''). Its variation on the constrained phase space $\Gamma_1$ is zero,
\begin{equation}\delta F\vert_{\Gamma_1}=\left(\frac{\partial F}{\partial q_{a}}\delta q_{a}+\frac{\partial F}{\partial p_{a}}\delta p_a\right)\vert_{\Gamma_1}=0\label{varf} \end{equation}
for variations of the coordinates and momenta that satisfy the $M'$ conditions (\ref{frstcon}). Varying the constraints, we obtain \begin{equation}
\frac{\partial\phi_m }{\partial q_a}\delta q_a + \frac{\partial \phi _m}{\partial p_a}\delta p_a\approx 0.\label{varcons}
\end{equation}
The terms that multiply $\delta q_a$ and $\delta p_a$ in (\ref{varf}) and (\ref{varcons}) are equal up to arbitrary Lagrange multipliers $\lambda^m$,
which leads to
\begin{align}
\frac{\partial}{\partial q_a}(F-\lambda^m\phi_m)\approx 0\\
\frac{\partial}{\partial p_a}(F-\lambda^m\phi_m)\approx 0.%
\end{align}
The next step is to define the canonical Hamiltonian,
\begin{equation}
H=\dot{q}^np_n-L\label{hc0}
\end{equation}
which can be expressed using only the canonical coordinates $q$ and momenta $p$; therefore it is well defined only on the constrained phase space.
(That can be verified by taking the variation $\delta H$ induced by arbitrary variations of the positions and velocities.)
Varying (\ref{hc0})
\begin{equation}
\delta H=\dot{q}^n\delta p_n-\delta q^n\frac{\partial L}{\partial q^n}\label{ht1}
\end{equation}
we notice that $H$ is not uniquely determined as a function of the canonical coordinates $q_n$ and momenta $p_n$: the variations $\delta p_n$ in (\ref{ht1}) are restricted to preserve the primary constraints $\phi_m\approx0$, which means that (\ref{hc0}) is an identity only on the constrained surface $\Gamma_1$. The formalism should therefore be unchanged under the replacement
\begin{equation}
H\rightarrow H+c^m(q,p)\phi_m.
\end{equation}
We can define the total Hamiltonian $H_T$, equal to the canonical Hamiltonian up to terms proportional to the constraints,
\begin{equation} H_T=H+\lambda^m \phi_m\label{ht}. \end{equation}
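To make (\ref{ht}) concrete, consider the singular toy Lagrangian $L=\frac{1}{2}(\dot{q}^1-q^2)^2$ (a standard illustration, not taken from the main text), whose only primary constraint is $p_2\approx0$. The canonical Hamiltonian (\ref{hc0}) is $H=\dot{q}^1p_1+\dot{q}^2p_2-L=\frac{1}{2}p_1^2+q^2p_1$, so the total Hamiltonian reads
\begin{equation}
H_T=\frac{1}{2}p_1^2+q^2p_1+\lambda\,p_2,
\end{equation}
with a single multiplier $\lambda$ enforcing the primary constraint.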
We can rewrite (\ref{ht1}) as
\begin{align}
\left( \frac{\partial H}{\partial q^n}+\frac{\partial L}{\partial q^n}\right)\delta q^n +\left(\frac{\partial H}{\partial p_n}-\dot{q}^n\right)\delta p_n=0. \label{rwr}
\end{align}
Using the theorem that if $\lambda_n\delta q^n+\mu^n\delta p_n=0$ for arbitrary variations tangent to the constraint surface, then \begin{align} \lambda_n=u^m\frac{\partial\phi_m}{\partial q^n}\\ \mu^n=u^m\frac{\partial\phi_m}{\partial p_n} \end{align} for some $u^m$, where the equalities hold on the surface (\ref{frstcon}), we can infer
\begin{align}
\dot{q}^n&=\frac{\partial H}{\partial p_n}+u^m\frac{\partial\phi_m}{\partial p_n}\\
-\frac{\partial L}{\partial q^n}|_{\dot{q}}&=\frac{\partial H}{\partial q^n}|_{p}+u^m\frac{\partial\phi_m}{\partial q^n}.
\end{align}
That allows us to write $\dot{q}^n$ in terms of the momenta $p_n$ (with $\phi_m=0$) and extra parameters $u^m$. If the constraints are independent, then the $\frac{\partial \phi_m}{\partial p_n}$ are independent on $\phi_m=0$. That means that different sets of $u$'s cannot lead to equal velocities. They can be expressed
using the coordinates and velocities from
\begin{equation}
\dot{q}^n=\frac{\partial H}{\partial p_n}(q,p(q,\dot{q})) +u^m(q,\dot{q})\frac{\partial \phi_m}{\partial p_n}(q,p(q,\dot{q})).
\end{equation}
Viewing the Legendre transformation as a map from the $(q,\dot{q})$ space to the surface $\phi_m(q,p)=0$ of the $(q,p,u)$ space, the transformation is invertible.
That allows us to rewrite the Lagrangian equations in Hamiltonian form, which can also be obtained by varying the action
\begin{equation}
\delta \int_{t_1}^{t_2}(\dot{q}^np_n-H-u^m\phi_m)=0\label{acham}
\end{equation}
with respect to $\delta q^n, \delta p_n,$ and $\delta u^m$ with $\delta q^n(t_1)=\delta q^n(t_2)=0$. The $u^m$ thereby obtain a clear role as Lagrange multipliers imposing the primary constraints.
The equations of motion obtained from (\ref{acham}) can be conveniently written using Poisson brackets. For an arbitrary dynamical quantity $g(q,p)$ the equation of motion is
\begin{equation}\dot{g}=\{g,H_c \} +u^m \{ g,\phi_m\}\approx \{g,H_T \} \label{cond} \end{equation}
where the equality holds on shell. Analogously, the equation of motion for a constraint $\phi_m$ can be written as
\begin{equation} \dot{\phi}_m= \{\phi_m,H_c \} +u^n\{\phi_m,\phi_n \}. \label{eomphi} \end{equation}
For the theory to be consistent, we must demand that the primary constraints are conserved in time, which implies consistency conditions
\begin{enumerate}[label=\textbf{C.\arabic*}]
\item (\ref{eomphi}) is satisfied trivially, $0=0$; \label{cc1}
\item (\ref{eomphi}) determines the Lagrange multipliers in terms of the $p$s and $q$s; \label{cc2}
\item (\ref{eomphi}) leads to a condition with no multipliers, which defines a new \emph{secondary constraint}; the secondary constraints define the subspace $\Gamma_2\subseteq\Gamma_1$. \label{cc3}
\end{enumerate}
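As a concrete illustration of these cases (a standard toy model, not part of the preceding analysis), consider the Lagrangian $L=\frac{1}{2}(\dot{q}^1-q^2)^2$. The momenta are $p_1=\dot{q}^1-q^2$ and $p_2=0$, so $\phi\equiv p_2\approx0$ is a primary constraint and the canonical Hamiltonian reads
\begin{align}
H_c=\frac{1}{2}p_1^2+p_1q^2.
\end{align}
The consistency condition $\dot{\phi}=\{p_2,H_c\}=-p_1\approx0$ produces a secondary constraint, i.e.\ case \ref{cc3}, while the consistency of the secondary constraint, $\dot{p}_1=\{p_1,H_c\}=0$, is satisfied trivially, i.e.\ case \ref{cc1}.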
Denoting all the constraints with $\varphi$ we can write the EOM \begin{eqnarray} \dot{\varphi}_s= \{\varphi_s,H_c \} +u^m\{\varphi_s,\phi_m \} \label{eomchi}, \end{eqnarray}
for $s=1,...,N$, where $N$ denotes the total number of constraints. It has the general solution
\begin{align}
u^m=U^m+v^aV_a{}^m,
\end{align}
where $V_a{}^m$ are the independent solutions of the homogeneous equation, $v^a=v^a(t)$ are arbitrary coefficients, $U^m$ is a particular solution, and the index $a$ runs over all independent solutions.
This general solution, together with $\phi_a\equiv V_a{}^m\phi_m$, allows us to see from \begin{equation}
H_T=H_c+U^m\phi_m+v^aV_a{}^m\phi_m=H_c+U^m\phi_m+v^a\phi_a \label{htn}
\end{equation}
that the total Hamiltonian contains arbitrary functions of time even after all the consistency conditions are satisfied.
This implies that the values of the dynamical variables at a future instant of time are not uniquely determined by their initial values.
\subsection{First and Second Class Constraints}
A dynamical variable $R(q,p)$ whose Poisson bracket with all the constraints vanishes weakly is called \emph{first class}; otherwise it is \emph{second class}. In particular, a constraint with this property is a \emph{first class constraint}.
It should be noted that $H_c+U^m\phi_m$ and the $\phi_a$ in (\ref{htn}) are first class.
Consider the time evolution of a general dynamical variable $g(t)$ from $t=0$. The initial value $g(0)$ is determined by the initial values $(q(0),p(0))$, while the value of $g$ at the instant of time $\delta t$ is computed from
\begin{eqnarray}
g(\delta t)&=&g(0)+\delta t \dot{g} \\
&=&g(0)+\delta t \left( \{g,H'\}+v^a\{g,\phi_a\} \right).
\end{eqnarray}
Since different values of the arbitrary coefficients $v^a(t)$ are allowed, we can obtain different values of $g(\delta t)$, which differ by
\begin{equation}
\Delta g(\delta t)=\delta t(v_2^a-v_1^a)\{g,\phi_a\},
\end{equation}
where $H'$ is the sum of the canonical Hamiltonian and $U^m\phi_m$.
Physical states $g_1(\delta t)$ and $g_2(\delta t)$ must not depend on the arbitrary multipliers, which means that the difference $\Delta g(\delta t)$ is unphysical. The number of first class constraints $\phi_a$ equals the number of arbitrary functions $v^a(t)$, which implies that the transformations generated this way are unphysical, i.e.\ gauge transformations: unphysical transformations of the dynamical variables are generated by the primary first class constraints (PFC).
\subsection{Dirac Brackets}
Imagine we have two second-class constraints \begin{align} q_1\approx0 && p_1\approx0. \end{align}
Second class constraints do not preserve all the constraints; therefore, using them as generators of gauge transformations may lead to contradictions.
If for example
\begin{align}
F\equiv p_1\psi(q)\approx0 \text{ for } \psi\neq0,
\end{align}
one obtains
\begin{align}
\delta F=\epsilon\{q_1,F \} =\epsilon\psi
\end{align}
and learns that $\delta F\neq0$.
The constraints are only weakly equal to zero, and for that reason one should first compute the Poisson brackets (PBs) and only then impose the constraints.
From these equations we notice that the variables $(q_1,p_1)$ are not relevant, and one can eliminate them from the theory.
To do that we introduce the modified PB in which $(q_1,p_1)$ are discarded
\begin{align}
\{f,g\}^*=\sum_{i\neq1}\left(\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}-\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}\right). \label{bdb}
\end{align}
Once (\ref{bdb}) is defined, one can treat the constraints $q_1\approx0$, $p_1\approx0$ as strong equations, defining the theory for the variables $(q_i,p_i)$ with $i\neq1$.
This illustrates that second-class constraints describe dynamical degrees of freedom of no physical importance. To eliminate them, we define new PBs that involve only the relevant dynamical degrees of freedom.
If there are $N_1$ first class (FC) constraints $\phi_a$ and $N_2$ remaining constraints $\theta_s$ (which are second class), the matrix $\Delta_{rs}=\{\theta_r,\theta_s\}$ is non-singular (and antisymmetric). Indeed, if $\det(\Delta_{rs})=0$, the equation $\lambda^s\{\theta_r,\theta_s\}=0$ would admit a non-trivial solution $\lambda^s$, and the linear combination $\lambda^s\theta_s$ would be first class, which we have excluded by assumption.
Since $\Delta$ is non-singular, we can define a new bracket using its inverse
\begin{align}
\{f,g\}^*=\{f,g\}-\{f,\theta_r\}\Delta_{rs}^{-1}\{\theta_s,g\}.
\end{align}
This bracket is the {\it Dirac bracket}, and it satisfies all the properties of the PB.
By construction, the Dirac bracket of an arbitrary variable with any second class constraint vanishes,
\begin{align}
\{\theta_m,g\}^*=\{\theta_m,g\}-\{\theta_m,\theta_r\}\Delta^{-1}_{rs}\{\theta_s,g\}=0
\end{align}since $\{\theta_m,\theta_r\}\Delta_{rs}^{-1}=\delta_{ms}$.
In other words, by construction of the Dirac brackets, second-class constraints $\theta_m\approx0$ can be regarded as strong equalities. The EOM (\ref{cond}) in terms of the Dirac brackets read
\begin{equation}
\dot{g}\approx\{g,H_T\}^*.
\end{equation}
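The defining property above, that the Dirac bracket of anything with a second class constraint vanishes, can be checked numerically for the toy pair $q_1\approx0$, $p_1\approx0$. The sketch below (all names and numerical values are illustrative assumptions, not taken from the text) approximates Poisson brackets by central finite differences:

```python
# Phase space z = (q1, p1, q2, p2); second class constraints: theta1 = q1, theta2 = p1.
EPS = 1e-6

def grad(f, z):
    """Central finite-difference gradient of f at the phase-space point z."""
    g = []
    for i in range(len(z)):
        zp, zm = list(z), list(z)
        zp[i] += EPS
        zm[i] -= EPS
        g.append((f(zp) - f(zm)) / (2 * EPS))
    return g

def pb(f, g, z):
    """Poisson bracket {f,g} with the ordering z = (q1, p1, q2, p2)."""
    df, dg = grad(f, z), grad(g, z)
    return (df[0] * dg[1] - df[1] * dg[0]) + (df[2] * dg[3] - df[3] * dg[2])

theta = (lambda z: z[0], lambda z: z[1])   # theta1 = q1, theta2 = p1
DINV = ((0.0, -1.0), (1.0, 0.0))           # inverse of Delta_rs = {theta_r, theta_s}

def dirac_bracket(f, g, z):
    """{f,g}* = {f,g} - {f,theta_r} Delta^{-1}_{rs} {theta_s,g}."""
    db = pb(f, g, z)
    for r in range(2):
        for s in range(2):
            db -= pb(f, theta[r], z) * DINV[r][s] * pb(theta[s], g, z)
    return db

z0 = (0.3, -0.7, 1.2, 0.5)
g_test = lambda z: z[0] * z[3] + z[2] ** 2   # an arbitrary phase-space function
```

One finds that the Dirac bracket of either constraint with `g_test` vanishes (up to finite-difference error), while the bracket of the surviving pair, $\{q_2,p_2\}^*=1$, is unchanged.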
The main difference between the first and second class constraints is that the first class constraints generate unphysical transformations, while the second class constraints can be treated as strong equations after the introduction of Dirac brackets.
Their construction can be simplified by splitting the second class constraints into subsets: for the first subset one uses the Poisson brackets, while for the second one uses the Dirac brackets constructed in the first step.
The number of degrees of freedom of a constrained system may be computed from Dirac's formula, which states that the number of physical degrees of freedom $N_{d.o.f.}$ is
\begin{align}
N_{d.o.f.}=\frac{1}{2}\left( N_{c.v.}-2N_{FC}-N_{SC} \right)\label{ndofs}
\end{align}
where $N_{c.v.}$ denotes the number of canonical variables, $N_{FC}$ the number of first class constraints, and $N_{SC}$ the number of second class constraints.
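Dirac's formula (\ref{ndofs}) is simple arithmetic; as a sketch (the Maxwell numbers below are standard textbook values, not derived in this text), it can be encoded and checked as follows:

```python
# Dirac's counting of physical degrees of freedom (per spacetime point):
# N_dof = (N_cv - 2*N_FC - N_SC) / 2.
def n_dof(n_canonical_vars, n_first_class, n_second_class):
    """Number of physical degrees of freedom of a constrained system."""
    return (n_canonical_vars - 2 * n_first_class - n_second_class) / 2

# Free Maxwell theory: canonical pairs (A_mu, pi^mu) give 8 canonical
# variables; pi^0 ~ 0 and the Gauss constraint are first class.
maxwell_dof = n_dof(8, 2, 0)   # the two photon polarisations
```

For a massive (Proca) vector field the two constraints become second class, and the same formula gives $\frac{1}{2}(8-2)=3$ degrees of freedom.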
\subsection{Castellani algorithm}
Given the total Hamiltonian (\ref{ht}), the computed functions $v^a(t)$, all the constraints $\psi_b\approx0$, and a trajectory $T_1(t)=(q(t),p(t))$ with initial conditions defined on the constraint surface $\Gamma_2$, we obtain the EOM
\begin{align}
\dot{q}_i&=\frac{\partial H'}{\partial p_i}+v^a \frac{\partial \phi_a}{\partial p_i} \\
-\dot{p}_i&=\frac{\partial H'}{\partial q_i}+v^a \frac{\partial \phi_a}{\partial q_i} \\
\psi_b(q,p)&=0 \label{hteom}
\end{align}
where $\psi_b$ denotes the entire set of constraints. We denote $\phi_a$ as the generator of the transformations, while the variation $\delta v^{a}(t)$ plays the role of an infinitesimal parameter.
One can write an analogous set of equations for a varied trajectory $T_2(t)=(q(t)+\delta_0q(t),p(t)+\delta_0p(t))$ that starts at the same point but satisfies the EOM with the new functions $v^a(t)+\delta_0v^a(t)$; the small variations are denoted by $\delta_0$,
\begin{align}
\delta_0\dot{q}_i&=\left(\delta_0q_j\frac{\partial}{\partial q_j}+\delta_0 p_j\frac{\partial}{\partial p_j}\right)\frac{\partial H_T}{\partial p_i}+\delta_0 v^a\frac{\partial \phi_a}{\partial p_i} \\
-\delta_0\dot{p}_i&=\left(\delta_0 q_j \frac{\partial}{\partial q_j}+\delta_0 p_j\frac{\partial}{\partial p_j} \right)\frac{\partial H_T}{\partial q_i}+\delta_0v^a\frac{\partial\phi_a}{\partial q_i}\\
\frac{\partial \psi_s}{\partial q_i}\delta_0 q_i+\frac{\partial \psi_s}{\partial p_j}\delta_0 p_j&=0.
\end{align}
The simultaneous transition from one trajectory to the other is an unphysical gauge transformation \cite{Blagojevic:2002du}.
If we determine the variations of the dynamical variables by an arbitrary
infinitesimal parameter $\epsilon(t)$, they take the form
\begin{align}
\delta q_i&=\epsilon(t)\{q^i,G\}=\epsilon(t)\frac{\partial G}{\partial p_i}\\
\delta p_i&=\epsilon(t)\{p^i,G\}=-\epsilon(t)\frac{\partial G}{\partial q_i}. \label{varpq}
\end{align}
Here, $G$ denotes the generator of the transformation.
When we vary the equations (\ref{hteom}) with respect to $v^a(t)$ and differentiate (\ref{varpq}) with respect to $t$, we obtain
\begin{align}
\frac{\partial}{\partial p_i}\left(\dot{\epsilon}G+\epsilon\{G,H_T\}-\phi_a\delta v^a \right)\approx0 \label{gen1can}\\
\frac{\partial}{\partial q^i}\left(\dot{\epsilon}G +\epsilon\{G,H_T\}-\phi_a\delta v^a \right)\approx 0 \label{gen2can} \\
\epsilon\{\psi_j,G\}\approx0.
\end{align}
The equations (\ref{gen1can}) and (\ref{gen2can}) lead to
\begin{equation}
\{F,\dot{\epsilon}G+\epsilon\{G,H_T\}-\phi_a\delta v^a\}\approx 0 \label{trivgen}
\end{equation}
where $F$ is an arbitrary function defined on the subspace $\Gamma_1$. This leads to the conclusion that
$\dot{\epsilon}G+\epsilon\{G,H_T\}-\phi_a\delta v^a$ is a trivial generator.
In other words, a physical state $F$ is invariant under the gauge transformations, which merely reflect a redundancy in the variables describing the gauge symmetry.
Such a physical state satisfies the EOM and the constraints, and can be pictured as a trajectory in the Hamiltonian theory.
The gauge generator can be found from the transformation of the canonical variables and conjugate momenta generated by a function $G$ that acts on the given phase space and is parametrised by an infinitesimal parameter $\epsilon(t)$.
The general requirement demands that the time derivatives of $\epsilon(t)$, $\epsilon^{(n)}\equiv \frac{d^n\epsilon}{dt^n}$, be of finite order. In phase space, this transformation gives a varied trajectory, which has to satisfy the constraints and the EOM.
From these requirements we obtain the conditions that define the gauge transformations, and by solving them we compute the gauge generators.
The generator $G$ is
\begin{equation}
G(\epsilon,\epsilon^{(1)},\epsilon^{(2)},...,\epsilon^{(k)})=\sum_{n=0}^k\epsilon^{(n)}G_{(n)}.\label{geng}
\end{equation}
The algorithm for computing the gauge generators bears the name of its discoverer, Leonardo Castellani.
It starts with $G_k$, which is a primary first class constraint (PFC), while all the $G_{(n)}$ are first class constraints. The algorithm
\begin{eqnarray}
G_k=&PFC\\
G_{k-1}+\{G_k,H\}=&PFC \\
.& \\
.&\\
.&\\
G_1+\{G_2,H\}=&PFC \\
G_0+\{G_1,H\}=&PFC \\
\{G_0,H\}=&PFC \label{castel}
\end{eqnarray}
was developed by Castellani \cite{Castellani:1981us}.
Here, linear combinations of the primary first class constraints are also counted as ``PFC''. One can notice that $k$ gives the number of secondary constraints.
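As a standard illustration of the algorithm (free Maxwell theory, with textbook conventions assumed rather than taken from this chapter), the chain terminates after one step, $k=1$: one finds $G_{(1)}=\pi^0$ and $G_{(0)}=-\partial_i\pi^i$, since $\{\pi^0,H_c\}=\partial_i\pi^i$ and $\{\partial_i\pi^i,H_c\}=0$. The resulting generator
\begin{align}
G[\epsilon]=\int d^3x\left(\dot{\epsilon}\,\pi^0-\epsilon\,\partial_i\pi^i\right)
\end{align}
generates the familiar gauge transformation $\delta A_\mu=\{A_\mu,G\}=\partial_\mu\epsilon$.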
\subsection{Gauge generators}
Assuming that our theory contains three PFCs, one of which, $PFC^{(1)}_j$, is a vector and two of which, $PFC^{(2)}$ and $PFC^{(3)}$, are scalars, the ansatz for the generator that starts the algorithm is
\begin{align}
G_{k-1}&=-\{G_k,H\}+\int d^3x\bigg[ \alpha_1^j(x,y)PFC^{(1)}_j(y)+\alpha_2(x,y)PFC^{(2)}(y) \nonumber \\&+\alpha_3(x,y)PFC^{(3)}(y) \bigg],\label{generat}
\end{align}
where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are the functions we have to find in order to determine the generator.
Consider a theory with fields $\{\phi_i\}$ and label the gauge transformations with $\xi$. The gauge transformations are generated by generators of the form (\ref{generat}), determined from (\ref{castel})
\begin{align}
G\left[ \xi,\psi \right]=\int_{\sigma}d^nx\mathcal{G}\left[\xi,\psi\right].
\end{align}
A general variation of the generator takes the form
\begin{align}
\delta G\left[\xi,\psi \right]=\int_{\sigma}d^nx\frac{\delta\mathcal{G}}{\delta \phi_i}\delta\phi_i+\int_{\partial\sigma}d^{n-1}xB\left[\xi,\phi,\delta\phi\right],\label{ngen}
\end{align}
where we have denoted the boundary term with $B$. It has to be added to the generator to bring it into a finite form. The small fluctuations of the fields therefore define the boundary conditions and bring $B$ into the form of a total variation
\begin{align}
B\left[ \xi,\phi,\delta\phi \right]&=-\delta\Gamma\left[ \xi,\phi\right]\\
Q\left[ \xi\right]&=\Gamma\left[\xi\right].
\end{align}
$Q$ defines the canonical charge of the theory. The improved generators (\ref{ngen}) define an asymptotic symmetry algebra which, as we will show on the example of CG, in turn defines the asymptotic symmetry algebra of the charges. This algebra should agree with the algebra obtained from the boundary-condition-preserving diffeomorphisms and Weyl rescalings, which is crucial for the definition of the field theory at the boundary.
In the following chapter we will consider the canonical analysis of CG in four dimensions.
\subsection{ADM Decomposition}
A manifold $\mathcal{M}$ described with coordinates $x^i$ can be split into space and time, with the successive hypersurfaces labelled by a time parameter $t$. In four-dimensional spacetime, the three-geometries are treated differently from the four-geometry of the entire manifold. If we denote the two respective hypersurfaces of the split spacetime by a ``lower'' $t=const.$ hypersurface and an ``upper'' $t+dt=const.$ one, the information sufficient to build this kind of sandwich structure is
\begin{itemize}
\item
the metric on the 3-geometry of the lower hypersurface
\begin{equation}
g_{ij}(t,x,y,z)dx^idx^j,
\end{equation}
\item
the distance between a point on the lower hypersurface and one on the upper, and the metric on the upper hypersurface
\begin{equation}
g_{ij}(t+dt,x,y,z)dx^idx^j
\end{equation}
\item the definition of proper length
\begin{align}
\left(\begin{array}{c} \text{lapse of} \\ \text{proper time} \\ \text{between lower} \\ \text{and upper} \\ \text{hypersurface} \end{array}\right) =\left(\begin{array}{c} " \text{lapse} \\ \text{function} " \end{array}\right)dt=N(t,x,y,z) dt
\end{align} for the connector on the ($x,y,z$) point of the lower hypersurface,
\item and the definition for the place of the upper hypersurface
\begin{align}
x_{\text{upper}}^i(x^m)=x^i-N^i(t,x,y,z)dt,
\end{align}
to which to connect.
\end{itemize}
From the Pythagorean theorem in four dimensional form,
\begin{align}
ds^2=\left(\begin{array}{c}\text{proper distance} \\ \text{in base 3-geometry}\end{array}\right)^2-\left( \begin{array}{c} \text{proper time from} \\ \text{ lower to upper 3-geometry}\end{array}\right)^2
\end{align}
leads to
\begin{equation}
ds^2=g_{ij}(dx^i+N^idt)(dx^j+N^jdt)-(Ndt)^2.\label{adm}
\end{equation}
To obtain the components of the four dimensional metric tensor in relation to the three dimensional one, we compare the split (\ref{adm})
with \begin{equation}
ds^2={}^{(4)}g_{\alpha\beta}dx^{\alpha}dx^{\beta}
\end{equation}
and read off the components. The metric takes the form
\begin{equation}
\left(\begin{array}{cc}g_{00} & g_{0k}\\g_{i0}& g_{ik}\end{array}\right)=\left( \begin{array}{cc} (N_sN^s-N^2) & N_k \\ N_i &g_{ik}\end{array}\right)
\end{equation}
where $N^i$ are the contravariant components of the shift vector; its indices are raised and lowered with the three-dimensional metric, $N_i=g_{im}N^m$ and $N^m=g^{ms}N_s$. We obtain the inverse metric from the requirement that its product with the metric gives the identity,
\begin{equation}
\left(\begin{array}{cc}{}^{(4)}g^{00} & {}^{(4)}g^{0m} \\ {}^{(4)}g^{k0} & {}^{(4)} g^{km}\end{array}\right)=\left(\begin{array}{cc}-\frac{1}{N^2} & \frac{N^m}{N^2} \\ \frac{N^k}{N^2} & g^{km}- \frac{N^kN^m}{N^2}\end{array}\right)\label{invadm}.
\end{equation}
Adding the lapse and shift to the $3$-metric determines the components of the unit timelike normal vector $\textbf{n}$. The vector is normalised such that
\begin{equation}
\langle \textbf{n},\textbf{n}\rangle=-1.
\end{equation}
The dual one-form is
\begin{equation}
\textbf{n}=n_{\beta}\textbf{d}x^{\beta}=-N\textbf{d}t+0+0+0.
\end{equation}
The unit timelike normal vector has the components
\begin{equation}
n_{\beta}=(-N,0,0,0),
\end{equation}
while this vector with raised index, using the metric (\ref{invadm}) has the components
\begin{equation}
n^{\alpha}=\left(\frac{1}{N},-\frac{N^m}{N}\right).
\end{equation}
One can for completeness define the "perpendicular connector" with components
\begin{equation}
(dt,-N^mdt)
\end{equation}
and the proper length $d\tau=Ndt$.
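The ADM form of the metric, its inverse (\ref{invadm}), and the normalisation of $\textbf{n}$ can be verified numerically. The sketch below uses an arbitrary diagonal 3-metric and made-up lapse and shift values (all numbers are illustrative assumptions):

```python
# Build the ADM 4-metric from (N, N^i, g_ij) and check that the claimed
# inverse really is the inverse, and that the unit normal satisfies n.n = -1.
N = 1.7                                   # lapse
Ns = [0.2, -0.5, 0.4]                     # shift N^i (contravariant)
h = [[2.0, 0.0, 0.0],                     # 3-metric g_ij (diagonal for simplicity)
     [0.0, 3.0, 0.0],
     [0.0, 0.0, 4.0]]
hinv = [[0.5, 0.0, 0.0], [0.0, 1.0 / 3.0, 0.0], [0.0, 0.0, 0.25]]

Nlow = [sum(h[i][m] * Ns[m] for m in range(3)) for i in range(3)]   # N_i = g_im N^m

# 4-metric: g_00 = N_s N^s - N^2, g_0i = N_i, g_ij = h_ij
g4 = [[sum(Nlow[s] * Ns[s] for s in range(3)) - N ** 2] + Nlow]
for i in range(3):
    g4.append([Nlow[i]] + h[i])

# claimed inverse: g^00 = -1/N^2, g^0k = N^k/N^2, g^km = h^km - N^k N^m / N^2
g4inv = [[-1.0 / N ** 2] + [Ns[k] / N ** 2 for k in range(3)]]
for k in range(3):
    g4inv.append([Ns[k] / N ** 2]
                 + [hinv[k][m] - Ns[k] * Ns[m] / N ** 2 for m in range(3)])

prod = [[sum(g4[a][c] * g4inv[c][b] for c in range(4)) for b in range(4)]
        for a in range(4)]

n_low = [-N, 0.0, 0.0, 0.0]               # n_beta = (-N, 0, 0, 0)
n_up = [sum(g4inv[a][b] * n_low[b] for b in range(4)) for a in range(4)]
```

The product `prod` comes out as the identity matrix, and raising the index of $n_\beta$ reproduces $n^\alpha=(1/N,-N^m/N)$.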
\subsection{Cayley--Hamilton Theorem}
A theorem that we find useful in treating tensorial quantities is the Cayley--Hamilton theorem.
It states that a square matrix over a commutative ring satisfies its own characteristic polynomial, $P(A)=0$. The characteristic polynomial is defined by
\begin{align}
P(\lambda)=\text{det}(\lambda I-A),
\end{align}
where $I$ denotes the unit matrix. The tensorial form of the theorem is a result of the relation between matrices, linear transformations and rank-2 tensors on a vector space.
If we have a tensor $T^{\mu}{}_{\nu}$ on a $d$-dimensional vector space, for example the tangent space of a $d$-dimensional manifold, the theorem states
\begin{align}
P(T)^{\mu}{}_{\nu}&=-(d+1)\delta^{\mu}{}_{[\nu}T^{\alpha_1}{}_{\alpha_1}T^{\alpha_2}{}_{\alpha_2}\cdots T^{\alpha_d}{}_{\alpha_d]}\\
&=(T^d)^{\mu}{}_{\nu}+c_1(T^{d-1})^{\mu}{}_{\nu}+\cdots+c_{d-1}T^{\mu}{}_{\nu}+c_d\delta^{\mu}{}_{\nu}=0
\end{align}
for the coefficients $c_n$
\begin{align}
c_n=(-1)^nT^{\mu_1}{}_{[\mu_1}T^{\mu_2}{}_{\mu_2}\cdots T^{\mu_n}{}_{\mu_n]} && n=1,2,...,d
\end{align}
and
\begin{align}
(T^m)^{\mu}{}_{\nu}=T^{\mu}{}_{\alpha_1}T^{\alpha_1}{}_{\alpha_2}\cdots T^{\alpha_{m-2}}{}_{\alpha_{m-1}}T^{\alpha_{m-1}}{}_{\nu} && m=2,3,...,d.
\end{align}
In particular, on a 3D Riemannian manifold a tensor $T^i{}_{j}$ satisfies
\begin{align}
P(T)^i{}_j&=T^i{}_kT^k{}_{l}T^l{}_{j}-T^i{}_kT^k{}_{j}T-\frac{1}{2}T^i{}_j(T^k{}_lT^l{}_k-T^2)\\&-\frac{\delta^i{}_{j}}{6}(2T^k{}_lT^l{}_{m}T^m{}_{k}-3T^k{}_lT^l{}_{k}T+T^3)=0
\end{align}
where $T=T^i{}_i$ denotes the trace.
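The 3D identity above can be verified directly; the sketch below (the matrix entries are arbitrary) evaluates the characteristic polynomial at $T$, with the determinant expressed through traces:

```python
# Check of the 3D Cayley-Hamilton identity
# T^3 - tr(T) T^2 + (1/2)[(tr T)^2 - tr(T^2)] T
#     - (1/6)[(tr T)^3 - 3 tr(T) tr(T^2) + 2 tr(T^3)] I = 0.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(a):
    return sum(a[i][i] for i in range(3))

T = [[1.0, 2.0, 0.5],
     [-1.0, 0.0, 3.0],
     [0.5, 4.0, -2.0]]
T2 = matmul(T, T)
T3 = matmul(T2, T)
t1, t2, t3 = trace(T), trace(T2), trace(T3)
detT = (t1 ** 3 - 3.0 * t1 * t2 + 2.0 * t3) / 6.0   # determinant via traces

# P should be the zero matrix (up to floating-point rounding)
P = [[T3[i][j] - t1 * T2[i][j] + 0.5 * (t1 ** 2 - t2) * T[i][j]
      - detT * (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
```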
\subsection{ADM Decomposition of Curvatures}
From the conventions in the chapter ``Canonical Analysis'' we obtain the ADM decomposition of the curvature tensors.
The projections of the Riemann tensor that lead to the Gauss, Codazzi and Ricci relations, respectively, are
\begin{align}
\perp {}^{(4)}R_{abcd}&=-K_{ad}K_{bc}+K_{ac}K_{bd}+R_{abcd},\\
\perp n^d\,{}^{(4)}R_{abcd}&=D_{a}K_{bc}-D_bK_{ac},\\
\perp n^bn^d\,{}^{(4)}R_{abcd}&=K_a{}^eK_{ec}-\frac{1}{N}\dot{K}_{ac}+\frac{1}{N}D_aD_cN+\frac{1}{N}\pounds_{N}K_{ac}.
\end{align}
They are employed in the derivation of the Ricci tensor
\begin{align}
\perp \,^{\ms{(4)}}R_{ab}&=-2K_{ac}K_b^{\ c}+K_{ab}K+\frac{1}{N}\dot{K}_{ab}-\frac{1}{N}\pounds_NK_{ab}+\frac{1}{N}D_aD_bN+R_{ab},\nonumber\\
\perp n^b\,^{\ms{(4)}}R_{ab}&=D_cK_a^{\ c}-D_aK,\nonumber\\
n^an^b \,^{\ms{(4)}}R_{ab}&=K_{ab}K^{ab}-\frac{1}{N}h^{ab}\left(\dot{K}_{ab}-\pounds_NK_{ab}\right),
\end{align}
and Ricci scalar
\begin{align}
\,^{\ms{(4)}}R=-3K_{ab}K^{ab}+K^2+\frac{2}{N}h^{ab}\left(\dot{K}_{ab}-\pounds_NK_{ab}\right)+R.
\end{align}
The tracelessness of the Weyl tensor
\begin{align}
h^{bd}\perp C_{abcd}&=\perp n^bn^dC_{abcd},\nonumber\\
h^{bd}\perp n^dC_{abcd}&=0,\nonumber\\
h^{bd}\perp n^an^dC_{abcd}&=0,
\end{align}
in combination with its symmetries, allows us to write the trace part of the spatial projection of the Weyl tensor, $\perp C_{abcd}$, as
\begin{equation}
h_{bd}\perp n^en^fC_{aecf}+h_{bc}\perp n^en^fC_{aefd}+h_{ad}\perp n^en^fC_{ebcf}+h_{ac}\perp n^en^fC_{ebfd}.\label{Weyltrace}
\end{equation}
For this decomposition one has to impose only the tracelessness condition and the Gauss relation in order to derive the traceless part of the Weyl tensor, $K_{abcd}$. This leaves only the extrinsic curvature as a candidate that may appear in the final result, since the traceless part of the Riemann tensor corresponds to the induced Weyl tensor, which vanishes. Therefore,
\begin{align}
K_{abcd}&=\frac{1}{2}K_{ac}K_{bd}+h_{ac}\left(K_{be}K_d^{\ e}-K_{bd}K\right)-\frac{1}{4}h_{ac}h_{bd}\left(K_{ef}K^{ef}+K^2\right)+\nonumber\\&\quad+(a\leftrightarrow b,c\leftrightarrow d)-(a\leftrightarrow b)-(c\leftrightarrow d).
\end{align}
The remaining projections of the Weyl tensor give
\begin{align}
&\perp n^dC_{abcd}=2\mathcal{S}_{abc}^{def}D_dK_{ef}\equiv B_{abc}\nonumber\\
&\perp n^an^dC_{abcd}=\frac{1}{2}\mathcal{T}_{ab}^{ef}\left[R_{ef}+K_{ef}K-\frac{1}{N}\left(\dot{K}_{ef}-\pounds_NK_{ef}-D_eD_fN\right)\right],
\end{align}
where
\begin{align}
\mathcal{S}_{abc}^{def}&=h_{a}^{\ [d}h_{b}^{\ e]}h_{c}^{\ f}-h_{a}^{\ [d}h_{bc}h^{e]f}\nonumber\\
\mathcal{T}_{ab}^{de}&=2h_{(a}^{\ d}h_{b)}^{\ e}-\frac{1}{3}h_{ab}h^{de}.
\end{align}
One can now write the decomposition of the Weyl tensor in terms of the contributions
\begin{align}
1 &\times \perp C_{abcd}\nonumber\\
4&\times\ n_bn_d\perp n^en^fC_{aecf}\nonumber\\
4&\times -n_a\perp n^eC_{ebcd}.\label{splweyl}
\end{align}
Using the tracelessness of the Weyl tensor and its symmetries in the expansion of $C_{abcd}C^{abcd}$, we find that each of the terms in (\ref{splweyl}) contributes only when contracted with itself,
\begin{equation}
C_{abcd}C^{abcd}=\perp C_{abcd}\perp C^{abcd}-4\perp n^eC_{ebcd}\perp n_fC^{fbcd}+4\perp n^en^fC_{aecf}\perp n_gn_hC^{agch}.
\end{equation}
The term that could contribute to $K_{abcd}K^{abcd}$ is $2K_{abcd}K^{ac}K^{bd}$, which also vanishes because of the Cayley--Hamilton theorem. The reason is that $-\frac{1}{3}K^a{}_{bcd}K^{bd}$, written in matrix form (with indices suppressed), gives the characteristic polynomial of $K$ evaluated at $K$ itself,
\begin{equation}
K^3-K^2\text{tr} K +K \frac{1}{2}\left[\left(\text{tr} K\right)^2-\text{tr} K^2\right]-\text{Id}\frac{1}{6}\left[\left(\text{tr} K\right)^3-3\text{tr} K\text{tr} K^2+2\text{tr} K^3\right].
\end{equation}
\subsection{Variations}
Variations of the $V[\vec{\lambda}]$ are
\begin{align}
\delta_h V[\vec{\lambda}]&=\nonumber\\
=\int_\Sigma&\ -\pounds_{\vec{\lambda}}\Pi_h^{ab}\delta h_{ab}+\int_{\partial \Sigma}\star\left(\lambda^c\Pi_h^{ab}-2\Pi_h^{c(a}\lambda^{b)}\right)\delta h_{ab},\label{dhdiffbound}\\
\delta_{\Pi_h}V[\vec{\lambda}]&=\nonumber\\
=\int_\Sigma&\ \pounds_{\vec{\lambda}}h_{ab}\delta\Pi_h^{ab}-2\int_{\partial \Sigma}\star\lambda^ch_{bc}\delta\Pi_h^{ab},\label{dPihdiffbound}\\
\delta_K V[\vec{\lambda}]&=\nonumber\\
=\int_\Sigma&\ -\pounds_{\vec{\lambda}}\Pi_K^{ab}\delta K_{ab}+\int_{\partial \Sigma}\star\left(\lambda^c\Pi_K^{ab}-2\Pi_K^{ca}\lambda^{b}\right)\delta K_{ab},\label{dKdiffbound}\\
\delta_{\Pi_K}V[\vec{\lambda}]&=\nonumber\\
=\int_\Sigma&\ \pounds_{\vec{\lambda}}K_{ab}\delta\Pi_K^{ab}-2\int_{\partial \Sigma}\star\lambda^cK_{bc}\delta\Pi_K^{ab}.\label{dPiKdiffbound}
\end{align}
These variations lead to the relation
\begin{align}
\left\{\Phi,V[\vec{\lambda}]\right\}&=\int_\Sigma\pounds_{\vec{\lambda}}h_{ab}\frac{\delta}{\delta h_{ab}}\Phi-\pounds_{\vec{\lambda}}\Pi_h^{ab}\frac{\delta}{\delta \Pi_h^{ab}}\Phi\cdots=\nonumber\\
&=\Phi(h+\delta_\lambda h,\Pi_h+\delta_\lambda\Pi_h,\cdots)-\Phi(h,\Pi_h,\cdots),
\end{align}
proving that $V[\vec{\lambda}]$ is a generator of spatial diffeomorphisms on the constrained phase space.
Variations of $H[\lambda]$ are
\begin{align}
\delta_h H_0[\lambda]&=\nonumber\\
=\int_\Sigma &\lambda\left\{ \omega_h^{-1}\left(\frac{1}{4}\Pi_K\cdot \Pi_Kh^{ab}-\Pi_K^{ac}\Pi_{K\ c}^b\right)-\Pi_K\cdot K K^{ab}+\right.\nonumber\\
&\ +D_cD^{(b}\Pi_K^{a)c}-\frac{1}{2}h^{ab}D_cD_d\Pi_K^{cd}-\frac{1}{2}D^2\Pi_K^{ab}+\nonumber\\
&\ +2\omega_h\left[-\frac{1}{4}B\cdot Bh^{ab} +B^{acd}B_{\ cd}^b+\frac{1}{2}B^{cda}B_{cd}^{\ \ b}+\right.\nonumber\\
&\ \quad\quad\quad +B^{d(ab)}D_dK-B^{d(ab)}D_cK_d^{\ c}+\nonumber\\
&\ \quad\left.\left.\quad\quad -D_c\left(B^{d(ab)}K_d^{\ c}+B^{cd(a}K^{b)}_{\ d}+B^{(a\vert dc \vert}K^{b)}_{\ d} \right)\right]\right\}\delta h_{ab}+\nonumber\\
&+D_c\lambda\left[2D_d\Pi_K^{d(a}h^{b)c}+ D^{(b}\Pi_K^{a)c}-\frac{3}{2}D^c\Pi_K^{ab}-D_d\Pi_K^{cd}h^{ab}+\right.\nonumber\\
&\quad \left.\ -2\omega_h\left(B^{d(ab)}K_d^{\ c}+B^{cd(a}K^{b)}_{\ d}+B^{(a\vert dc \vert}K^{b)}_{\ d} \right)\right]\delta h_{ab}\nonumber\\
&+D_cD_d\lambda\left[2\Pi_K^{(d\vert (a}h^{b)\vert d)}-\Pi_K^{ab}h^{cd}-\frac{1}{2}\Pi_K^{cd}h ^{ab}\right]\delta h_{ab}+\nonumber\\
\int_{\partial\Sigma} &\star\left[2\lambda\omega_h\left(B^{cd(a}K^{b)}_{\ d}+B^{(a\vert dc}K^{b)}_{\ d}+B^{d(ab)}K_d^{\ c}\right)\delta h_{ab}\right.+\nonumber\\
&+\lambda\left(2\delta C^c_{ed}\Pi_K^{ed}-\delta C^e_{de}\Pi_K^{cd}\right)+\nonumber\\
&+\lambda\left(-D^a\Pi_K^{ec}+\frac{1}{2}D^c\Pi_K^{ea}+\frac{1}{2}D_d\Pi_K^{cd}h^{ea}\right)\delta h_{ea}+\nonumber\\
&+ \left.\left(-2D^a\lambda\Pi_K^{ec}+D^c\lambda\Pi_K^{ea}+\frac{1}{2}D_d\lambda\Pi_K^{cd}h^{ea}\right)\delta h_{ea}\right],\label{dhH0bound}
\end{align}
\newpage
\begin{align}
\delta_KH_0[\lambda]&=\nonumber\\
=\int_\Sigma &\lambda\left(\Pi_K^{ab}K+\Pi_K\cdot K h^{ab}+2\Pi_h^{ab}+4\omega_hD_cB^{cab}\right)\delta K_{ab}+\nonumber\\
&D_c\lambda4\omega_hB^{cab}\delta K_{ab}+\nonumber\\
\int_{\partial\Sigma} &\lambda\star4\omega_hB^{cab}\delta K_{ab}\label{dKH0bound}\\
\delta_{\Pi_h}H_0[\lambda]&=\int_\Sigma \lambda2K_{ab}\delta \Pi_h^{ab}\\
\delta_{\Pi_K}H_0[\lambda]
=\int_\Sigma &\lambda \left(-\omega_h^{-1}\Pi^K_{ab}+R_{ab}+K_{ab}K\right)\delta\Pi_K^{ab}\nonumber\\
&D_aD_b\lambda\delta\Pi_K^{ab}+\nonumber\\
\int_{\partial\Sigma}&\star\left(\lambda D_b\delta\Pi_K^{ab}-D_b\lambda \delta\Pi_K^{ab}\right),\label{dPiKH0bound}
\end{align}
where $C^a_{bc}$ is the difference between the Levi-Civita connections.
\section{Appendix: Classification}
Here we provide examples of the partial differential equations (PDEs) that determine the matrix $\gamma_{ij}^{(1)}$ which preserves, respectively,
translations
\begin{align}
\begin{array}{rcl}
0&=&\text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)-\text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{12} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{13} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{23} $}^{(1,0,0)}(t,x,y)
\end{array}\label{transt},
\end{align}
Lorentz rotations in the $y$ direction
\begin{align}
\begin{array}{rcl}
0&=& t \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)-t \text{$\gamma_{22} $}^{(0,1,0)}(t,x,y)\\&&-x \text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)\\0&=&
t \text{$\gamma_{11} $}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)+2 \text{$\gamma_{12} $}(t,x,y)\\0&=&
\text{$\gamma_{11} $}(t,x,y)+t \text{$\gamma_{12} $}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{12} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{22} $}(t,x,y)\\0&=&
t \text{$\gamma_{13} $}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{13} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{23} $}(t,x,y)\\0&=&
2 \text{$\gamma_{12} $}(t,x,y)+t \text{$\gamma_{22} $}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{13} $}(t,x,y)+t \text{$\gamma_{23} $}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{23} $}^{(1,0,0)}(t,x,y)
\end{array},
\end{align}
dilatations
\begin{align}
\begin{array}{rcl}
0&=& y \text{$\gamma_{11} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{11} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{11} $}(t,x,y)\\0&=&
y \text{$\gamma_{11} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{11} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{11} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{11} $}(t,x,y)\\&&-y \text{$\gamma_{22} $}^{(0,0,1)}(t,x,y)- x \text{$\gamma_{22} $}^{(0,1,0)}(t,x,y)-t \text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)-\text{$\gamma_{22} $}(t,x,y)\\0&=&
y \text{$\gamma_{12} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{12} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{12} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{12} $}(t,x,y)\\0&=&
y \text{$\gamma_{13} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{13} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{13} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{13} $}(t,x,y)\\0&=&
y \text{$\gamma_{22} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{22} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{22} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{22} $}(t,x,y)\\0&=&
y \text{$\gamma_{23} $}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{23} $}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{23} $}^{(1,0,0)}(t,x,y)+\text{$\gamma_{23} $}(t,x,y)
\end{array}
\end{align}
In order to compute the $\gamma_{ij}^{(1)}$ matrix that preserves the three rotational KVs, one has to solve the PDEs:
\begin{align}
\begin{array}{rcl}
0&=& x \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-y \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)\\0&=&
x \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)+\text{$\gamma_{13}$}(t,x,y)-y \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y) \\0&=&
x \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)+2 \text{$\gamma_{23}$}(t,x,y)-y \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y) \\0&=&
\text{$\gamma_{11}$}(t,x,y)+x \text{$\gamma_{23}$}^{(0,0,1)}(t,x,y)-2 \text{$\gamma_{22}$}(t,x,y)-y \text{$\gamma_{23}$}^{(0,1,0)}(t,x,y) \\0&=&
y \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)+2 \text{$\gamma_{23}$}(t,x,y)\\ &&-x \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-y \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)\\0&=&
\text{$\gamma_{12}$}(t,x,y)+y \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)-x \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y) \\0&=&
t \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)-t \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)-x \text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\\0&=&
t \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+2 \text{$\gamma_{12}$}(t,x,y)\\0&=&
t \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+2 \text{$\gamma_{13}$}(t,x,y)\\0&=&
t \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+2 \text{$\gamma_{13}$}(t,x,y)\\&&-t \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)-y \text{$\gamma_{22}$}^{(1,0,0)}(t,x,y) \\0&=&
\text{$\gamma_{11}$}(t,x,y)+t \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{12}$}^{(1,0,0)}(t,x,y)+\text{$\gamma_{22}$}(t,x,y)\\0&=&
t \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{12}$}^{(1,0,0)}(t,x,y)+\text{$\gamma_{23}$}(t,x,y)\\0&=&
t \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{13}$}^{(1,0,0)}(t,x,y)+\text{$\gamma_{23}$}(t,x,y)\\0&=&
2 \text{$\gamma_{11}$}(t,x,y)+t \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{13}$}^{(1,0,0)}(t,x,y)-\text{$\gamma_{22}$}(t,x,y) \\0&=&
2 \text{$\gamma_{12}$}(t,x,y)+t \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\\0&=&
t \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{13}$}(t,x,y)+t \text{$\gamma_{23}$}^{(0,1,0)}(t,x,y)+x \text{$\gamma_{23}$}^{(1,0,0)}(t,x,y)\\0&=&
\text{$\gamma_{12}$}(t,x,y)+t \text{$\gamma_{23}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{23}$}^{(1,0,0)}(t,x,y)
\end{array}.
\end{align}
The PDEs that give the $\gamma_{ij}^{(1)}$ matrix are
\begin{align}
\begin{array}{l}
0= \left(t^2+x^2-y^2\right) \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+2 x y \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)\\+2 t x \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+2 x \text{$\gamma_{11}$}(t,x,y)+4 t \text{$\gamma_{12}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \left(t^2-x^2+y^2\right) \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)+x y \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)\\+t y \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+y \text{$\gamma_{11}$}(t,x,y)+2 t \text{$\gamma_{13}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0=\left(t^2+x^2+y^2\right) \text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)+2 t \big(y \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)\\ +x \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)+\text{$\gamma_{11}$}(t,x,y)\big)+4 (x \text{$\gamma_{12}$}(t,x,y)+y \text{$\gamma_{13}$}(t,x,y))
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \left(t^2+x^2-y^2\right) \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{11}$}(t,x,y)+x y \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)\\+t x \text{$\gamma_{12}$}^{(1,0,0)}(t,x,y)+x \text{$\gamma_{12}$}(t,x,y)+y \text{$\gamma_{13}$}(t,x,y)+t \text{$\gamma_{22}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2-x^2+y^2\big) \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)+x y \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y)\\+t y \text{$\gamma_{12}$}^{(1,0,0)}(t,x,y)+y \text{$\gamma_{12}$}(t,x,y)+t \text{$\gamma_{23}$}(t,x,y)-x \text{$\gamma_{13}$}(t,x,y) \end{array}
\end{align}
\begin{align}
\begin{array}{l}
0=\left(t^2+x^2+y^2\right) \text{$\gamma_{12}$}^{(1,0,0)}(t,x,y)+2 \bigg(x \big(\text{$\gamma_{11}$}(t,x,y)+t \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y)\\+\text{$\gamma_{22}$}(t,x,y)\big)+y \left(t \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)+\text{$\gamma_{23}$}(t,x,y)\right)+t \text{$\gamma_{12}$}(t,x,y)\bigg)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0=\frac{1}{2} \left(t^2+x^2-y^2\right) \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)+x y \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y)\\+t x \text{$\gamma_{13}$}^{(1,0,0)}(t,x,y)+x \text{$\gamma_{13}$}(t,x,y)+t \text{$\gamma_{23}$}(t,x,y)-y \text{$\gamma_{12}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0=\left(t^2+x^2+y^2\right) \text{$\gamma_{13}$}^{(1,0,0)}(t,x,y)+4 y \text{$\gamma_{11}$}(t,x,y)+2 t \big(y \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y)\\+x \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)+\text{$\gamma_{13}$}(t,x,y)\big)+2 x \text{$\gamma_{23}$}(t,x,y)-2 y \text{$\gamma_{22}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0=\big(t^2-x^2+y^2\big) \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y)+4 t \text{$\gamma_{11}$}(t,x,y)+2 (x \text{$\gamma_{12}$}(t,x,y)\\+y \text{$\gamma_{13}$}(t,x,y))+2 y \big(x \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)+t \text{$\gamma_{13}$}^{(1,0,0)}(t,x,y)\big)-2 t \text{$\gamma_{22}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2+x^2-y^2\big) \big(\text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)-\text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)\big)\\+x y \big(\text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-\text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)\big)+t x \big(\text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)\\-\text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\big)+x (\text{$\gamma_{11}$}(t,x,y)-\text{$\gamma_{22}$}(t,x,y))-2 y \text{$\gamma_{23}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2-x^2+y^2\big) \big(\text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-\text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)\big)\\+x y \big(\text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)-\text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)\big)+t y \big(\text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)\\-\text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\big)+y (\text{$\gamma_{11}$}(t,x,y)-\text{$\gamma_{22}$}(t,x,y))+2 t \text{$\gamma_{13}$}(t,x,y)\\+2 x \text{$\gamma_{23}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2+x^2+y^2\big) \big(\text{$\gamma_{11}$}^{(1,0,0)}(t,x,y)-\text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)\big)\\+t y \big(\text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-\text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)\big)+t x \big(\text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)\\-\text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)\big)+t (\text{$\gamma_{11}$}(t,x,y)-\text{$\gamma_{22}$}(t,x,y))+2 y \text{$\gamma_{13}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2+x^2-y^2\big) \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)+2 t \text{$\gamma_{12}$}(t,x,y)+x y \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)\\+t x \text{$\gamma_{22}$}^{(1,0,0)}(t,x,y)+x \text{$\gamma_{22}$}(t,x,y)+2 y \text{$\gamma_{23}$}(t,x,y)
\end{array}
\end{align}
\begin{align}
0&= \frac{1}{2} \big(t^2-x^2+y^2\big) \gamma_{22}^{(0,0,1)}(t,x,y)+x y \gamma_{22}^{(0,1,0)}(t,x,y)\nonumber \\ &+t y \gamma_{22}^{(1,0,0)}(t,x,y)+y \gamma_{22}(t,x,y)-2 x \gamma_{23}(t,x,y)
\\0&= \big(t^2+x^2+y^2\big) \gamma_{22}^{(1,0,0)}(t,x,y)+4 x \gamma_{12}(t,x,y)\nonumber \\&+2 t \big(y \gamma_{22}^{(0,0,1)}(t,x,y)+x \gamma_{22}^{(0,1,0)}(t,x,y)+\gamma_{22}(t,x,y)\big)
\end{align}
\begin{align}
0&=\big(t^2+x^2-y^2\big) \gamma_{23}^{(0,1,0)}(t,x,y)+2 y \gamma_{11}(t,x,y)+2 t \gamma_{13}(t,x,y)\nonumber \\ &+2 \bigg(x \big(y \gamma_{23}^{(0,0,1)}(t,x,y) +\gamma_{23}(t,x,y)\big)-2 y \gamma_{22}(t,x,y)\bigg)+2 t x \gamma_{23}^{(1,0,0)}(t,x,y)
\end{align}
\begin{align}
\begin{array}{l}
0= \frac{1}{2} \big(t^2-x^2+y^2\big) \text{$\gamma_{23}$}^{(0,0,1)}(t,x,y)+x (\text{$\gamma_{22}$}(t,x,y)-\text{$\gamma_{11}$}(t,x,y))\\+t \text{$\gamma_{12}$}(t,x,y)+x \text{$\gamma_{22}$}(t,x,y)+x y \text{$\gamma_{23}$}^{(0,1,0)}(t,x,y)\\+t y \text{$\gamma_{23}$}^{(1,0,0)}(t,x,y)+y \text{$\gamma_{23}$}(t,x,y)
\end{array} \end{align}
\begin{align}
\begin{array}{l}
0= \big(t^2+x^2+y^2\big) \text{$\gamma_{23}$}^{(1,0,0)}(t,x,y)+2 y \text{$\gamma_{12}$}(t,x,y)+2 x \text{$\gamma_{13}$}(t,x,y)\\+2 t \big(y \text{$\gamma_{23}$}^{(0,0,1)}(t,x,y)+x \text{$\gamma_{23}$}^{(0,1,0)}(t,x,y)+\text{$\gamma_{23}$}(t,x,y)\big)
\end{array}
\end{align}
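The structure of this system can be cross-checked symbolically: the equations whose leading term is $(t^2+x^2-y^2)\partial_x$ are, up to overall factors of 2 in some components, the components of $\mathcal{L}_\xi\gamma^{(1)}-\frac{1}{3}(\partial_k\xi^k)\,\gamma^{(1)}$ for the special conformal KV with parameter along $x$, with $\gamma_{33}$ eliminated by tracelessness. A minimal sympy sketch (the form of the condition is our reading of the listed equations):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (t, x, y)
g11, g12, g13, g22, g23 = [sp.Function(n)(t, x, y)
                           for n in ('g11', 'g12', 'g13', 'g22', 'g23')]
# tracelessness w.r.t. eta = diag(-1, 1, 1): g33 = g11 - g22
g = sp.Matrix([[g11, g12, g13],
               [g12, g22, g23],
               [g13, g23, g11 - g22]])

# special conformal KV with parameter along x: xi^mu = 2 x^mu (b.x) - b^mu (x.x)
xi = sp.Matrix([2*t*x, t**2 + x**2 - y**2, 2*x*y])
div = sum(sp.diff(xi[k], coords[k]) for k in range(3))  # = 6x

E = sp.zeros(3, 3)
for i in range(3):
    for j in range(3):
        E[i, j] = (sum(xi[k]*sp.diff(g[i, j], coords[k])
                       + g[k, j]*sp.diff(xi[k], coords[i])
                       + g[i, k]*sp.diff(xi[k], coords[j])
                       for k in range(3))
                   - sp.Rational(1, 3)*div*g[i, j])

# the (t,t) component reproduces the first PDE of the list
doc_tt = ((t**2 + x**2 - y**2)*sp.diff(g11, x) + 2*x*y*sp.diff(g11, y)
          + 2*t*x*sp.diff(g11, t) + 2*x*g11 + 4*t*g12)
print(sp.expand(E[0, 0] - doc_tt))  # 0
```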
The set of equations that need to be solved for the rotation KV to be conserved is
\begin{align}
0&=y \text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)-x \text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)\nonumber \\0&=
-x \text{$\gamma_{12}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{12}$}^{(0,1,0)}(t,x,y)-\text{$\gamma_{13}$}(t,x,y)\nonumber \\0&=
\text{$\gamma_{12}$}(t,x,y)-x \text{$\gamma_{13}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{13}$}^{(0,1,0)}(t,x,y)\nonumber \\0&=
-x \big(\text{$\gamma_{11}$}^{(0,0,1)}(t,x,y)-\text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)\big)+y \big(\text{$\gamma_{11}$}^{(0,1,0)}(t,x,y)\nonumber\\&-\text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)\big)+2 \text{$\gamma_{23}$}(t,x,y)\nonumber\\0&=
-x \text{$\gamma_{22}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{22}$}^{(0,1,0)}(t,x,y)-2 \text{$\gamma_{23}$}(t,x,y)\nonumber\\0&=
-\text{$\gamma_{11}$}(t,x,y)+2 \text{$\gamma_{22}$}(t,x,y)-x \text{$\gamma_{23}$}^{(0,0,1)}(t,x,y)+y \text{$\gamma_{23}$}^{(0,1,0)}(t,x,y)
\label{rotxy}
\end{align}
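These conditions are the components of the Lie derivative of $\gamma^{(1)}$ along the rotation $\xi=y\partial_x-x\partial_y$, with $\gamma_{33}$ eliminated by tracelessness; a minimal sympy check:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (t, x, y)
g11, g12, g13, g22, g23 = [sp.Function(n)(t, x, y)
                           for n in ('g11', 'g12', 'g13', 'g22', 'g23')]
# traceless w.r.t. eta = diag(-1, 1, 1): g33 = g11 - g22
g = sp.Matrix([[g11, g12, g13],
               [g12, g22, g23],
               [g13, g23, g11 - g22]])

xi = sp.Matrix([0, y, -x])  # rotation in the (x, y) plane

# (L_xi g)_ij = xi^k d_k g_ij + g_kj d_i xi^k + g_ik d_j xi^k
Lg = sp.zeros(3, 3)
for i in range(3):
    for j in range(3):
        Lg[i, j] = sum(xi[k]*sp.diff(g[i, j], coords[k])
                       + g[k, j]*sp.diff(xi[k], coords[i])
                       + g[i, k]*sp.diff(xi[k], coords[j])
                       for k in range(3))

# first equation of the list: y g11_x - x g11_y
print(sp.expand(Lg[0, 0] - (y*sp.diff(g11, x) - x*sp.diff(g11, y))))   # 0
# second: y g12_x - x g12_y - g13
print(sp.expand(Lg[0, 1] - (y*sp.diff(g12, x) - x*sp.diff(g12, y) - g13)))  # 0
```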
\subsection{ Classification According to the Generators of the Conformal Group}
The following table lists the subalgebras of the conformal algebra that are not realised in the form of $\gamma_{ij}^{(1)}$, together with the algebra each of them implies. We omit combinations of the KVs of 3 SCTs and $n$ Ts, or 3 Ts and $n$ SCTs, since these imply the full conformal algebra.
\begin{center}
\begin{tabular}{ |l | p{7.7 cm} | p{1 cm} | c|}
\hline
1 T + 2 R & $\nexists$ & $\nexists$ & 3\\
1 T + 3 R & $\nexists$ & $\nexists$ &4\\
1 T + 2 R + D & $\nexists$ & $\nexists$ &4\\
1 T + 3 R + D & $\nexists$ & $\nexists$&3\\
\hline
2 R & $\nexists$ $\Rightarrow$ 3 Rs & & 2 \\
2 R+D & $\nexists$ $\Rightarrow$ 3 Rs +D & & 3 \\
1 R + 1 SCT & $\nexists$, $ [\xi_{l}^{sct},L_{ij}]=\eta_{li}\xi_j^{sct}-\eta_{lj}\xi_{i}^{sct},$ $\Rightarrow$ $\xi^{sct}_i,\xi^{sct}_j,L_{ij}$ for $l\neq i \neq j$ & & 2 \\
2 R+1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ 3 Rs+3 SCTs & & 3 \\
2 R+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ 3 Rs+3 SCTs & & 3 \\
2 R+3 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ 3 Rs+3 SCTs & & 3 \\
3 R+1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ 3 Rs+3 SCTs & & 3 \\
3 R+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ 3 Rs+3 SCTs & & 3 \\
1 T + 1 R+1 SCT & $\nexists$, the example from the text shows that the subalgebra of 6 CKVs $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ is required & $\nexists$ & 2 \\
1 T + 2 R+1 SCT & $\nexists$ $\Rightarrow$ conformal algebra & $\nexists$&3\\
1 T + 3 R+1 SCT & $\nexists$, $\Rightarrow$ full conformal algebra&$\nexists$ &4\\
1 T + 2 R + D+1 SCT & $\nexists$, $[D,\xi^{sct}_j]=\xi^{sct}_j$, $[D,\xi^t_{j}]=-\xi^t_j$, $[D,L_{ij}]=0$ $\Rightarrow$ the analysis of the existence of a subalgebra is the same as for (1 T + 2 R + 1 SCT) & &4\\
1 T + 3 R + D+1 SCT & $\nexists$, because the remaining generators commute with dilatations, the analysis of the existence of a subalgebra is the same as for (1 T + 3 R + 1 SCT) & &3 \\ \hline
1 T + 1 R+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & & 4 \\
1 T + 2 R+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & & 5\\
1 T + 3 R+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &6\\
1 T + D+2 SCT & $\nexists$, leads to subalgebra $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ (with $i\neq j$); the commutation relations with $D$ close, leaving $[\xi^t_i,\xi^{sct}_i]=2\eta_{ij}\xi^d$, $[\xi^t_i,\xi^{sct}_j]=2L_{ij}$ $\Rightarrow$ requires $L_{ij}$; $[\xi^t_i,L_{ij}]=\xi^t_j$ $\Rightarrow$ requires a second $\xi^t_j$. & & 2 \\
1 T + 1 R + D+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &6\\
1 T + 2 R + D+2 SCT & $\nexists$, like 1 T + 2 R + 2 SCT, leads to the full conformal algebra & &6\\
1 T + 3 R + D+2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & & 7\\ \hline
\hline
\end{tabular}
\end{center}
\begin{center}
\hspace{-2cm}\begin{tabular}{ |l | p{7.5 cm} | p{1 cm} | c|}
\hline
\hline
2 T + 2 R & $\nexists$ & \text{ } &4\\
2 T + 3 R & $\nexists$ & $\nexists$ &5\\
2 T + 2 R +D & $\nexists$ & \text{ } &6\\
2 T + 3 R +D &$\nexists$ & $\nexists$ & 7\\
2 T + 1 R + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ & &4\\
2 T + 2 R + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &5\\
2 T + 3 R +1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &6\\
2 T + 1 R + D + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ & &6\\
2 T + 2 R + D + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &6\\
2 T + 3 R + D + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & & 7\\
2 T + D + 1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ & &4 \\
2 T + 1 R + 2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ & &5\\
2 T + 2 R + 2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra& &6\\
2 T + 3 R +2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &7\\
2 T + 2 R + D + 2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &6\\
2 T + 3 R + D + 2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & & 8\\
2 T + D + 2 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ $\xi_i^t,\xi_j^t,\xi_i^{sct},\xi_j^{sct},L_{ij},D$ & &5\\ \hline\hline
3 T + 2 R & $\nexists$ & \text{ } &5\\
3 T + 1 R + D & $\nexists$ & $\nexists$ &5\\
3 T +2 R+D & $\nexists$ & \text{ } &6\\
1 T + 1 SCT & $\nexists$ $\Rightarrow$ $\xi^t_i,\xi^{sct}_i,D$ or $\xi^t_i,\xi^{sct}_i,\xi^{t}_j,\xi^{sct}_j,L_{ij},D$ for $i\neq j$ & $\nexists$ &2 \\
1 T + 2 SCT & $\nexists$ $\Rightarrow$ $\xi^t_i,\xi^{sct}_i,\xi^{t}_j,\xi^{sct}_j,L_{ij},D$ & $\nexists$ &3\\
2 T +1 SCT & $\nexists$, (\ref{ca1},\ref{ca2}) $\Rightarrow$ full conformal algebra & &3\\
2 T + 2 SCT & $\nexists$ & $\nexists$ &4 \\
n T +m SCT & $\nexists$ \text{ needs R or D } & \text{ } &n+m\\ \hline
1 R+D+1 SCT &$\nexists$, $ [\xi_{l}^{sct},L_{ij}]=\eta_{li}\xi_j^{sct}-\eta_{lj}\xi_{i}^{sct},$ $\Rightarrow$ $\xi^{sct}_i,\xi^{sct}_j,L_{ij}$ for $l\neq i \neq j$ && 3\\
2 R+D+1 SCT & $\nexists$ $\Rightarrow$ 3 Rs+D+3 SCTs && 3\\
2 R+D+2 SCT & $\nexists$ $\Rightarrow$ 3 Rs+D+3 SCTs && 4\\
2 R+D+3 SCT & $\nexists$ $\Rightarrow$ 3 Rs+D+3 SCTs && 5\\ \hline
3 R+D+1 SCT & $\nexists$ $\Rightarrow$ 3 Rs+D+3 SCTs && 3\\
3 R+D+2 SCT & $\nexists$ $\Rightarrow$ 3 Rs+D+3 SCTs && 4\\
\hline
\end{tabular}
\end{center}
\subsubsection{Patera et al. Classification}
The following table lists the subalgebras for which the $\gamma_{ij}^{(1)}$ matrices are not realised.
\begin{center}
\begin{tabular}{|c|c|}\hline
\hline
\multicolumn{2}{|c|}{Subalgebras that are not realised} \\
\hline
Patera name&generators \\ \hline\hline
$ a_{7,1}$ &$ F,K_1,K_2,L_3,P_0,P_1,P_2$ \\
$a_{6,1}$& $ K_1,K_2,L_3,P_0,P_1,P_2$\\
$a_{6,2} $& $F,K_2,K_1-L_3,P_0,P_1,P_2$ \\ \hline
$a_{5,1}$ & $K_2,L_3-K_1,P_0,P_1,P_2$\\
$a_{5,2}$ & $F-K_2,-K_1+L_3,P_0,P_1,P_2$\\
$a_{5,3}$ & $F+K_2,-K_1+L_3,P_0,P_1,P_2$\\
$a_{5,5}$ &$ F,L_3-K_1,P_0,P_1,P_2$\\
$a_{5,6}$ &$ F,K_2,P_0,P_1,P_2$\\
$a_{5,7}$ &$ F,L_3,P_0,P_1,P_2$\\
$a_{5,8}$ & $F,K_2,-K_1+L_3,P_0,P_1,-P_2$\\
$a_{4,2}=b_{4,7}$ & $P_0-P_2\oplus \left\{F-K_2,P_0+P_2,P_1\right\}$\\
$a_{4,5}=b_{4,8}$ &$ F+\epsilon(L_3-K_1),P_0,P_1,P_2$\\
$a_{4,7}=b_{4,10}$ &$ L_3-K_1,P_0+P_2,P_0-P_2,P_1$\\
$a_{4,9}$ &$ F,P_0,P_1,P_2$\\
$a_{4,16}$ &$ F+\frac{1}{2}K_2,-K_1+L_3,+\epsilon(P_0+P_2),P_0-P_2,P_1$\\
$a_{4,18}=b_{4,5}$ &$ F+K_2,-K_1+L_3,P_0-P_2,P_1$\\
\hline
\end{tabular}
\end{center}
\subsubsection{Global Solutions}
The list of the polynomial invariants of a geon global solution reads
\begin{align}
\begin{array}{ll}
R, r_1=\frac{1}{2}S^{a}_{b}S^{b}_{a} & r_2=-\frac{1}{8}S^{a}_{b}S^{b}_{c}S^{c}_{a} \\ w_1=\frac{1}{8}(C_{abcd}+i C*_{abcd})C^{abcd} & w_2=-\frac{1}{16}(C_{ab}^{cd}+iC*_{ab}^{cd})C_{cd}^{ef}C_{ef}^{ab} \\ m_1=\frac{1}{8}S^{ab}S^{cd}(C_{acdb}+iC*_{acdb}) &
m_{2a}=\frac{1}{16}S^{bc}S_{ef}C_{abcd}C^{aefd} \\ m_{2b}=\frac{1}{16}S^{bc}S_{ef}C*_{abcd}C*^{aefd} & m_2=m_{2a}-m_{2b}+\frac{1}{8}iS^{bc}S_{ef}C*_{abcd}C^{aefd} \\ m_3=m_{2a}+m_{2b} &
m_{4a}=-\frac{1}{32}S^{ag}S^{ef}S^{c}_{d}C_{ac}^{db}C_{befg} \\ m_{4b}=-\frac{1}{32}S^{ag}S^{ef}S^{c}_{d}C*_{ac}^{db}C*_{befg} &
m_{4}=m_{4a}+m_{4b} \\ m_{5a}=\frac{1}{32}S^{cd}S^{ef}C^{aghb}C_{acdb}C_{gefh} & m_{5b}=\frac{1}{32}S^{cd}S^{ef}C^{aghb}C*_{acdb}C*_{gefh} \\
m_{5c}=\frac{1}{32}S^{cd}S^{ef}C*^{aghb}C_{acdb}C_{gefh} & m_{5d}=\frac{1}{32}S^{cd}S^{ef}C*^{aghb}C*_{acdb}C*_{gefh} \\
m_5=m_{5a}+m_{5b}+i(m_{5c}+m_{5d}) & m_{6}=\frac{1}{32}S_{a}^{e}S_{e}^{c}S_{b}^{f}S_{f}^{d}(C^{ab}_{cd}+iC*^{ab}_{cd})\\ r_3=\frac{1}{16}S^{a}_{b}S^{b}_{c}S^{c}_{d}S^{d}_{a}&
\end{array}
\end{align}
where $S_{ab}=R_{ab}-\frac{1}{n}Rg_{ab}$ is the trace-free part of the Ricci tensor.
\subsubsection{Examples of $\gamma^{(1)}_{ij}$ with $R\times S^2$ Boundary }
The following table gives examples of the $\gamma^{(1)}$ matrix realised with particular subsets of the Killing vectors (\ref{sphkv}), where we denote the {\it number of KVs} with n.
\begin{center}
\begin{tabular}{|c|c|c|}\hline
KVs &Realization, $\gamma_{ij}^{(1)}$ & n of KVs\\
\hline
$\begin{array}{cc}\xi^{\ms{(0)} sph}_{7},& \xi^{\ms{(0)} sph}_{8},\\ \xi^{\ms{(0)} sph}_{9}, &\xi^{\ms{(0)} sph}_{0}\end{array}$& $\left(
\begin{array}{ccc}
\text{$\gamma_{11} $} & 0 & 0 \\
0 & \frac{\text{$\gamma_{11} $}}{2} & 0 \\
0 & 0 & \frac{1}{2} \text{$\gamma_{11} $} \sin ^2(\theta ) \\
\end{array}
\right)$ & 4 \\
$\xi^{\ms{(0)} sph}_{7}, \xi^{\ms{(0)} sph}_{0}$& $\left(
\begin{array}{ccc}
\text{$\gamma_{11} $} & 0 & 0 \\
0 & \text{$\gamma_{22} $} & 0 \\
0 & 0 & (\text{$\gamma_{11} $}-\text{$\gamma_{22}$}) \sin ^2(\theta ) \\
\end{array}
\right)$ & 2 \\ \hline
$\begin{array}{l} \xi^{\ms{(0)} sph}_{7}, \xi^{\ms{(0)} sph}_{8}, \\ \xi^{\ms{(0)} sph}_{9} \end{array}$& $\left(
\begin{array}{ccc}
2 \text{$\gamma_{22} $}(t) & 0 & 0 \\
0 & \text{$\gamma_{22} $}(t) & 0 \\
0 & 0 & \sin ^2(\theta ) \text{$\gamma_{22} $}(t) \\
\end{array}
\right)$ & 3 \\
$\xi^{\ms{(0)} sph}_{7}, \xi^{\ms{(0)} sph}_{6}$&$ \left(
\begin{array}{ccc}
-c_1 \sec (t) & 0 & 0 \\
0 & c_1 \sec (t) & 0 \\
0 & 0 & -2 c_1 \sec (t) \sin ^2(\theta ) \\
\end{array}
\right)$ & 2 \\
$\xi^{\ms{(0)} sph}_{7}, \xi^{\ms{(0)} sph}_{3}$& $\left(
\begin{array}{ccc}
-c_1 \csc (t) & 0 & 0 \\
0 & c_1 \csc (t) & 0 \\
0 & 0 & -2 c_1 \csc (t) \sin ^2(\theta ) \\
\end{array}
\right)$& 2 \\ \hline
$\begin{array}{c}\xi^{\ms{(0)} sph}_{7}, \xi^{\ms{(0)} sph}_{0},\\\xi^{\ms{(0)} sph}_{6}, \xi^{\ms{(0)} sph}_{3}\end{array}$&$ \left(
\begin{array}{ccc}
c_1 \csc (\theta ) & 0 & 0 \\
0 & -c_1 \csc (\theta ) & 0 \\
0 & 0 & 2 c_1 \sin (\theta ) \\
\end{array}
\right)$ & 4 \\ \hline
$\xi^{\ms{(0)} sph}_{0},$ &$ \left(
\begin{array}{ccc}
\text{$\gamma_{11} $}(\phi ) & 0 & 0 \\
0 & \text{$\gamma_{22} $}(\phi ) & 0 \\
0 & 0 & \sin ^2(\theta ) (\text{$\gamma_{11} $}(\phi )-\text{$\gamma_{22} $}(\phi )) \\
\end{array}
\right)$& 1\\
$\xi^{\ms{(0)} sph}_{8}, \xi^{\ms{(0)} sph}_{0}$ &eq. (\ref{eq80})& 2 \\
$\xi^{\ms{(0)} sph}_{9}, \xi^{\ms{(0)} sph}_{0}$ &eq. (\ref{eq90})& 2\\
$\begin{array}{l} \xi^{\ms{(0)} sph}_{0}, \xi^{\ms{(0)} sph}_{6},\\ \xi^{\ms{(0)} sph}_{3} \end{array}$ & $\left(
\begin{array}{ccc}
-\csc (\theta ) c_1(\phi ) & 0 & 0 \\
0 & \csc (\theta ) c_1(\phi ) & 0 \\
0 & 0 & -2 \sin (\theta ) c_1(\phi ) \\
\end{array}
\right)$& 3\\ \hline
$\xi^{\ms{(0)} sph}_{7},\xi^{\ms{(0)} sph}_{6}$& eq. (\ref{sph1}) & 2\\ \hline
$\xi^{\ms{(0)} sph}_{8}$&
eq. (\ref{eqn8})& 1 \\
$\xi^{\ms{(0)} sph}_{9}$&eq.(\ref{sph2})& 1 \\ \hline
\end{tabular}
\end{center}
\begin{align}
\gamma_{11}^{(1)}&=
2 c_1\left[-2 \cos (\phi ) \sin (\theta )\right] \nonumber\\
\gamma_{22}^{(1)}&= c_1\left[-2 \cos (\phi ) \sin (\theta )\right] \nonumber\\
\gamma_{33}^{(1)}&= \sin ^2(\theta ) c_1\left[-2 \cos (\phi ) \sin (\theta )\right]
\label{eq80}
\end{align}
\begin{equation}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
2 c_1\left[2 \sin (\theta ) \sin (\phi )\right] & 0 & 0 \\
0 & c_1\left[2 \sin (\theta ) \sin (\phi )\right] & 0 \\
0 & 0 & \sin ^2(\theta ) c_1\left[2 \sin (\theta ) \sin (\phi )\right] \\
\end{array}
\right)\label{eq90}
\end{equation}
\begin{align}
\gamma_{11}^{(1)}&=
-\sec (t) c_1\left[2 \sec (t) \sin (\theta )\right] \nonumber \\
\gamma_{22}^{(1)}&= \sec (t) c_1\left[2 \sec (t) \sin (\theta )\right] \nonumber \\
\gamma_{33}^{(1)}&= -2 \sec (t) \sin ^2(\theta ) c_1\left[2 \sec (t) \sin (\theta )\right]
\label{sph1}
\end{align}
\begin{align}\gamma_{11}^{(1)}&= 2 c_1\left[t,-2 \cos (\phi ) \sin (\theta )\right] \nonumber \\
\gamma_{22}^{(1)}&= c_1\left[t,-2 \cos (\phi ) \sin (\theta )\right] \nonumber\\
\gamma_{33}^{(1)}&= \sin ^2(\theta ) c_1\left[t,-2 \cos (\phi ) \sin (\theta )\right]
\label{eqn8}
\end{align}
\begin{align}
\gamma_{11}^{(1)}&=
2 c_1\left[t,2 \sin (\theta ) \sin (\phi )\right] \nonumber \\
\gamma_{22}^{(1)} &= c_1\left[t,2 \sin (\theta ) \sin (\phi )\right] \nonumber \\
\gamma_{33}^{(1)} &= \sin ^2(\theta ) c_1\left[t,2 \sin (\theta ) \sin (\phi )\right] \label{sph2}
\end{align}
The off-diagonal elements of these matrices vanish.
\subsection{Map to Spherical Coordinates Using Global Coordinates and 5 KV Algebra}
To translate $\gamma_{ij}^{(1)}$ from the flat background to the spherical one, one may choose one of two approaches: use a map from flat to spherical coordinates, or transform the KVs and find the solutions directly in spherical coordinates.
For generality, we describe the map that translates the solutions to spherical ones, and give several examples for black holes, MKR and geons (5 KV solutions).
AdS and flat space have related conformal compactifications. In the Euclidean case one can compactify the spatial part $\mathbb{R}^n$ to $S^n$ by adding a point at infinity. Euclidean $AdS_{n+1}$ is conformally equivalent to an $(n+1)$-dimensional disk $\mathcal{D}_{n+1}$, and the compactified Euclidean AdS has a boundary that is the compactified Euclidean space, in analogy with the Minkowski signature.
Define $AdS_{p+2}$ as the $(p+2)$-dimensional hyperboloid embedded in the flat $(p+3)$-dimensional space with the metric
\begin{equation}
ds^2=-dX_{0}^2-dX_{p+2}^2+\sum_{i=1}^{p+1}dX_i^2\label{emb}
\end{equation}
and the constraint
\begin{equation}
X_0^2+X^2_{p+2}-\sum^{p+1}_{i=1}X_i^2=L^2.\label{condition}
\end{equation}
By construction, the isometry group of the space is $SO(2,p+1)$, and the space is homogeneous and isotropic.
The condition (\ref{condition}) is solved via the parametrisation
\begin{align}
X_0&=L\cosh \rho \cos \tau, && X_{p+2}=L\cosh\rho\sin\tau \\
X_i&=L\sinh\rho\,\Omega_i && \left(i=1,\ldots,p+1,\ \sum_{i=1}^{p+1}\Omega_i^2=1 \right)\label{param}
\end{align}
\noindent where $\Omega_i$ are coordinates on $S^p$. Using the parametrisation (\ref{param}) in (\ref{emb}) one obtains the $AdS_{p+2}$ metric
\begin{equation}
ds^2=L^2\left( -\cosh^2\rho\, d\tau^2+d\rho^2+\sinh^2\rho\, d\Omega_p^{2} \right)
\end{equation}
which for $\rho\in(0,\infty)$ and $\tau\in[0,2\pi)$ covers the hyperboloid once; $\rho,\tau,\Omega_i$ are the global coordinates of $AdS$. In the neighbourhood of $\rho\sim 0$ the metric becomes
\begin{equation}
ds^2\sim L^2(-d\tau^2+d\rho^2+\rho^2d\Omega_p^2)
\end{equation}
from which one can see that the topology of $AdS_{p+2}$ is $S^1\times \mathbb{R}^{p+1}$. Since $S^1$ is timelike, $AdS_{p+2}$ contains closed timelike curves; to obtain a causal space-time, we take the universal cover of the $S^1$ coordinate, which leads to $-\infty<\tau<+\infty$ and removes the closed timelike curves.
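As a consistency check, the parametrisation (\ref{param}) satisfies the constraint (\ref{condition}) identically; a minimal sympy sketch for $p=2$:

```python
import sympy as sp

L, rho = sp.symbols('L rho', positive=True)
tau, th, ph = sp.symbols('tau theta phi', real=True)

# unit vector on S^2 (p = 2): Omega_1^2 + Omega_2^2 + Omega_3^2 = 1
Omega = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

X0 = L*sp.cosh(rho)*sp.cos(tau)
Xp2 = L*sp.cosh(rho)*sp.sin(tau)
Xi = [L*sp.sinh(rho)*O for O in Omega]

constraint = X0**2 + Xp2**2 - sum(Xa**2 for Xa in Xi) - L**2
print(sp.simplify(constraint))  # 0
```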
To bring endpoints of the $\rho$ coordinate to finite values, one introduces a new coordinate $\theta$
\begin{align}
\tan \theta=\sinh\rho, & \theta\in\left[0,\frac{\pi}{2}\right)
\end{align}
and the $AdS_{p+2}$ metric becomes
\begin{equation}
ds^2=\frac{L^2}{\cos^2\theta}\left(-d\tau^2+d\theta^2+\sin^2\theta d\Omega_p^2\right)
\end{equation}
which can be conformally transformed to Einstein static universe metric
\begin{equation}
d\tilde{s}^2=-d\tau^2+d\theta^2+\sin^2\theta\, d\Omega_p^2\label{sph}
\end{equation}
with the difference that the $\theta$ coordinate ranges over $\left[0,\frac{\pi}{2}\right)$ rather than the entire range $[0,\pi)$.
A space-time of this kind, conformal to a space-time isomorphic to half of the Einstein static universe, is called ``asymptotically AdS''.
Because the boundary extends in the timelike direction $\tau$, we must specify a boundary condition on $\mathbb{R}\times S^p$ at $\theta=\frac{\pi}{2}$ for the Cauchy problem in AdS to be well defined.
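The coordinate change $\tan\theta=\sinh\rho$ can be verified symbolically to map the global metric coefficients into $\frac{L^2}{\cos^2\theta}\left(-1,\,1,\,\sin^2\theta\right)$; a minimal sympy sketch (in units $L=1$):

```python
import sympy as sp

th = sp.symbols('theta', positive=True)

rho = sp.asinh(sp.tan(th))   # inverse of tan(theta) = sinh(rho)
drho_dth = sp.diff(rho, th)

# g_tautau: cosh^2(rho) -> 1/cos^2(theta)
print(sp.simplify(sp.cosh(rho)**2 - 1/sp.cos(th)**2))              # 0
# g_rhorho: (d rho / d theta)^2 -> 1/cos^2(theta)
print(sp.simplify(drho_dth**2 - 1/sp.cos(th)**2))                  # 0
# sphere part: sinh^2(rho) -> sin^2(theta)/cos^2(theta)
print(sp.simplify(sp.sinh(rho)**2 - sp.sin(th)**2/sp.cos(th)**2))  # 0
```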
One can define Poincare coordinates $(u,t,\vec{x})$ for $u>0,\vec{x}\in\mathbb{R}^p$ with
\begin{align}
X_0&=\frac{u}{2}\left[1+\frac{1}{u^2}(L^2+\vec{x}^2-t^2)\right] & X_i=\frac{Lx^i}{u}, \\
X_{p+1}&=\frac{u}{2}\left[1-\frac{1}{u^2}(L^2-\vec{x}^2+t^2)\right] & X_{p+2}=\frac{Lt}{u}
\end{align}
that cover half of the hyperboloid and define the metric
\begin{equation}
ds^2=\frac{L^2}{u^2}\left[du^2-dt^2+d\vec{x}^2\right] \label{flat}
\end{equation}
with the boundary at $u=0$.
The Poincare symmetry acting on $(t,\vec{x})$ and the $SO(1,1)$ symmetry acting as $(u,t,\vec{x})\rightarrow (au,at,a\vec{x})$, $a>0$, are manifest in these coordinates; the latter acts as a dilatation on the $\mathbb{R}^{1,p}$ coordinates $(t,\vec{x})$ \cite{Kiritsis:2007zza}.
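As with the global parametrisation, one can verify symbolically that the Poincare coordinates satisfy the constraint (\ref{condition}); a minimal sympy sketch for $p=2$, where $X_3$ plays the role of $X_{p+1}$:

```python
import sympy as sp

L, u = sp.symbols('L u', positive=True)
t, x1, x2 = sp.symbols('t x1 x2', real=True)

xsq = x1**2 + x2**2
X0 = u/2*(1 + (L**2 + xsq - t**2)/u**2)
X3 = u/2*(1 - (L**2 - xsq + t**2)/u**2)   # X_{p+1} for p = 2
X4 = L*t/u                                # X_{p+2}
X1, X2 = L*x1/u, L*x2/u

constraint = X0**2 + X4**2 - (X1**2 + X2**2 + X3**2) - L**2
print(sp.simplify(constraint))  # 0
```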
Let us use this construction to obtain a map between the two background metrics in four dimensions.
Define the global coordinates
\begin{align}
X_0&=L \cosh \left(\frac{r}{L}\right) \cos \left(\frac{t}{L}\right) \nonumber \\
X_4&=L \cosh \left(\frac{r}{L}\right) \sin \left(\frac{t}{L}\right) \nonumber \\
X_3&=L \cos (\theta ) \sinh \left(\frac{r}{L}\right) \nonumber \\
X_2&=L \sin (\theta ) \sin (\phi ) \sinh \left(\frac{r}{L}\right) \nonumber \\
X_1&=L \sin (\theta ) \cos (\phi ) \sinh \left(\frac{r}{L}\right) \label{glob}
\end{align}
whose line element reads \begin{equation}
ds^2=-\cosh ^2\left(\frac{r}{L}\right)dt^2+dr^2+L^2\sinh ^2\left(\frac{r}{L}\right)d\theta^2+L^2\sin ^2(\theta )\sinh ^2\left(\frac{r}{L}\right)d\phi^2,
\end{equation}
and the Poincare coordinates
\begin{align}
\begin{array}{ll}
Y_4=\frac{L T}{u} & Y_1=\frac{L x}{u} \\ Y_2=\frac{L y}{u} & Y_0=\frac{1}{2} u \left(\frac{L^2-T^2+x^2+y^2}{u^2}+1\right) \\ Y_3=\frac{1}{2} u \left(1-\frac{L^2+T^2-x^2-y^2}{u^2}\right) &\\
\end{array}
\end{align}
with the line element \begin{equation} ds^2=\frac{L^2}{u^2}(-dT^2+du^2+dx^2+dy^2) \end{equation}
where
\begin{align}
T=\frac{L Y_4}{Y_0-Y_3}, u=\frac{L^2}{Y_0-Y_3}, x=\frac{LY_1}{Y_0-Y_3} \text{ and } y=\frac{L Y_2}{Y_0-Y_3}. \label{ys}
\end{align}
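As a cross-check, the line element of the global coordinates follows as the pullback of the embedding metric $-dX_0^2+dX_1^2+dX_2^2+dX_3^2-dX_4^2$; a minimal sympy sketch:

```python
import sympy as sp

L = sp.symbols('L', positive=True)
t, r, th, ph = sp.symbols('t r theta phi', real=True)
q = (t, r, th, ph)

X0 = L*sp.cosh(r/L)*sp.cos(t/L)
X4 = L*sp.cosh(r/L)*sp.sin(t/L)
X3 = L*sp.cos(th)*sp.sinh(r/L)
X2 = L*sp.sin(th)*sp.sin(ph)*sp.sinh(r/L)
X1 = L*sp.sin(th)*sp.cos(ph)*sp.sinh(r/L)

# embedding metric diag(-1, 1, 1, 1, -1) on (X0, X1, X2, X3, X4)
X = [X0, X1, X2, X3, X4]
eta = sp.diag(-1, 1, 1, 1, -1)
J = sp.Matrix([[sp.diff(Xa, qi) for qi in q] for Xa in X])
G = J.T*eta*J  # induced metric in (t, r, theta, phi)

expected = sp.diag(-sp.cosh(r/L)**2, 1,
                   L**2*sp.sinh(r/L)**2,
                   L**2*sp.sin(th)**2*sp.sinh(r/L)**2)
D = (G - expected).applyfunc(sp.simplify)
print(D)  # zero matrix
```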
To transform from Poincare to global coordinates, one inserts (\ref{glob}) into (\ref{ys}), with $Y_{i}$ replaced by $X_i$ for $i=0,1,2,3,4$, and uses $r\to L \log \left(\frac{2 L}{\rho }\right)$:
\begin{align}
u(r,t,\theta,\phi)=\frac{L}{\cosh \left(\frac{r}{L}\right) \cos \left(\frac{t}{L}\right)-\cos (\theta ) \sinh \left(\frac{r}{L}\right)} \Rightarrow \nonumber \\
\Rightarrow u(\rho,t,\theta,\phi)=\frac{4 L^2 \rho }{\cos (\theta ) \left(\rho ^2-4 L^2\right)+\left(4 L^2+\rho ^2\right) \cos \left(\frac{t}{L}\right)} \label{defu} \end{align}\begin{align}
T(r,t,\theta,\phi)=\frac{L \cosh \left(\frac{r}{L}\right) \sin \left(\frac{t}{L}\right)}{\cosh \left(\frac{r}{L}\right) \cos \left(\frac{t}{L}\right)-\cos (\theta ) \sinh \left(\frac{r}{L}\right)}\Rightarrow \nonumber \\
\Rightarrow T(\rho,t,\theta,\phi)=\frac{L \left(4 L^2+\rho ^2\right) \sin \left(\frac{t}{L}\right)}{\cos (\theta ) \left(\rho ^2-4 L^2\right)+\left(4 L^2+\rho ^2\right) \cos \left(\frac{t}{L}\right)} \end{align} \vspace{-0.5cm}\begin{align}
x(r,t,\theta,\phi)=\frac{L \sin (\theta ) \cos (\phi ) \sinh \left(\frac{r}{L}\right)}{\cosh \left(\frac{r}{L}\right) \cos \left(\frac{t}{L}\right)-\cos (\theta ) \sinh \left(\frac{r}{L}\right)}\Rightarrow \nonumber \\ \Rightarrow x(\rho,t,\theta,\phi)=\frac{L \sin (\theta ) \left(4 L^2-\rho ^2\right) \cos (\phi )}{\cos (\theta ) \left(\rho ^2-4 L^2\right)+\left(4 L^2+\rho ^2\right) \cos \left(\frac{t}{L}\right)}\end{align}\vspace{-0.5cm}\begin{align}
y(r,t,\theta,\phi)=\frac{L \sin (\theta ) \sin (\phi ) \sinh \left(\frac{r}{L}\right)}{\cosh \left(\frac{r}{L}\right) \cos \left(\frac{t}{L}\right)-\cos (\theta ) \sinh \left(\frac{r}{L}\right)}\Rightarrow \nonumber \\ \Rightarrow y(\rho,t,\theta,\phi)=\frac{L \sin (\theta ) \left(4 L^2-\rho ^2\right) \sin (\phi )}{\cos (\theta ) \left(\rho ^2-4 L^2\right)+\left(4 L^2+\rho ^2\right) \cos \left(\frac{t}{L}\right)}\label{ynew}
\end{align}
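The map can be checked symbolically; for instance, substituting $r\to L\log(2L/\rho)$ into $u(r,t,\theta,\phi)$ reproduces $u(\rho,t,\theta,\phi)$ of (\ref{defu}). A minimal sympy sketch:

```python
import sympy as sp

L, r, rho, t, th = sp.symbols('L r rho t theta', positive=True)

u_global = L/(sp.cosh(r/L)*sp.cos(t/L) - sp.cos(th)*sp.sinh(r/L))
u_fg = 4*L**2*rho/(sp.cos(th)*(rho**2 - 4*L**2)
                   + (4*L**2 + rho**2)*sp.cos(t/L))

# rewrite cosh/sinh of the logarithm in exponentials and compare
delta = (u_global.subs(r, L*sp.log(2*L/rho)) - u_fg).rewrite(sp.exp)
print(sp.simplify(delta))  # 0
```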
Differentiating,
\begin{align}
dx=\sum_i\frac{\partial x}{\partial x_i}dx_i, && dy=\sum_i\frac{\partial y}{\partial x_i}dx_i, && dT=\sum_i\frac{\partial T}{\partial x_i}dx_i, && du=\sum_i\frac{\partial u}{\partial x_i}dx_i,
\end{align}
for $x_i=\rho,t,\theta,\phi$.
Let us write the metric (\ref{flat}) as an expansion in the coordinate $u$,
\begin{equation}
ds^2= \frac{L^2}{u^2}\left(du^2-\left(1+\frac{u}{L}\cdot c\right)dT^2+dx^2+\left(1-\frac{u}{L}c\right)dy^2+2c\frac{u}{L} dydT\right)\label{expu}
\end{equation}
and rewrite (\ref{expu}) in terms of (\ref{defu}-\ref{ynew}). Taking the leading order one obtains the line element \begin{equation} ds^2= \frac{\left(\rho ^2-4 L^2\right)^2d\theta^2 }{16 \rho ^2}+\frac{L^2d\rho^2 }{\rho ^2}-\frac{ \left(4 L^2+\rho ^2\right)^2dt^2}{16 L^2 \rho ^2}+\frac{\sin ^2(\theta ) \left(\rho ^2-4 L^2\right)^2d\phi^2 }{16 \rho ^2},\end{equation} which at leading order has the desired spherical $\gamma_{ij}^{(0)}$.
We are mostly interested in the largest subalgebra with 5 KVs.
Now we can consider the transformation of the $\gamma_{ij}^{(1)}$ matrix and expand it in the $\rho$ coordinate, which gives us the subleading term in the expansion on the $\mathbb{R}\times S^2$ background.
If we denote by $ds^2_c$ the part of the line element that defines the $\gamma_{ij}^{(1)}$ matrix on $\mathbb{R}\times S^2$, we can write
\begin{equation}
ds^2_c=\frac{\rho^2}{L^2}\frac{L}{u}\left(-cdT^2-cdy^2+2cdTdy\right).
\end{equation}
Rewriting the ($u,T,y,x$) coordinates as above, we obtain a line element whose $dx^idx^j$ coefficients ($x^i,x^j=\rho,t,\theta,\phi$) read, in the leading order of $\rho$, as follows
\begin{center}
\hspace{-0.2cm}\begin{tabular}{ |l | p{0.6 cm} | p{8.9 cm}|}\hline
$dtdt$ & $\gamma_{tt}^{(1)}$ & $ -\frac{c \left(\sin (\theta ) \sin (\phi ) \sin \left(\frac{t}{L}\right)+\cos (\theta ) \cos \left(\frac{t}{L}\right)-1\right)^2}{\left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^3}$ \\
$dtd\theta$ & $\gamma_{t\theta}^{(1)}$ & $\begin{array}{l}\frac{L \big[4 c \sin (\phi ) \left(4 \cos (\theta ) \cos \left(\frac{t}{L}\right)-\cos (2 \theta ) \cos \left(\frac{2 t}{L}\right)-3\right)\big]}{8 \left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^3} \\-\frac{L\big[c (\cos (2 \phi )-3) \left(4 \sin (\theta ) \sin \left(\frac{t}{L}\right)-\sin (2 \theta ) \sin \left(\frac{2 t}{L}\right)\right)\big]}{8 \left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^3}\end{array}$ \\
$dtd\phi$ & $\gamma_{t\phi}^{(1)}$ & $-\frac{2 L \left(c \sin (\theta ) \cos (\phi ) \left(\sin (\theta ) \sin (\phi ) \sin \left(\frac{t}{L}\right)+\cos (\theta ) \cos \left(\frac{t}{L}\right)-1\right)\right)}{2 \left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^2}$\\
$d\theta d\theta$ & $\gamma_{\theta\theta}^{(1)} $& $-\frac{c L^2 \left(\sin (\phi ) \left(\cos (\theta ) \cos \left(\frac{t}{L}\right)-1\right)+\sin (\theta ) \sin \left(\frac{t}{L}\right)\right)^2}{\left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^3}$ \\
$d\theta d\phi$ & $\gamma_{\theta\phi}^{(1)} $& $-\frac{2 L \left(c L \sin (\theta ) \cos (\phi ) \left(\sin (\phi ) \left(\cos (\theta ) \cos \left(\frac{t}{L}\right)-1\right)+\sin (\theta ) \sin \left(\frac{t}{L}\right)\right)\right)}{2 \left(\cos \left(\frac{t}{L}\right)-\cos (\theta )\right)^2}$ \\
$d\phi d\phi$ & $\gamma_{\phi\phi}^{(1)}$& $-\frac{c L^2 \sin ^2(\theta ) \cos ^2(\phi )}{\cos \left(\frac{t}{L}\right)-\cos (\theta )}$ \\ \hline
\end{tabular}
\end{center}In this way, transforming the metrics (\ref{flat}) and (\ref{sph}) in global coordinates, one can define the transformation from the flat to the $\mathbb{R}\times S^2$ background, and vice versa.
Considering the background
\begin{equation}
\gamma_{ij}^{(0)}=\left(\begin{array}{ccc}-1 & 0& 0 \\ 0& 1& 0 \\ 0& 0& \sin^2\theta\end{array}\right) \label{sphm}
\end{equation}
in equation (\ref{lo}), the corresponding KVs that preserve it and form the conformal algebra are given in the appendix: Canonical Analysis of Conformal Gravity: Killing Vectors for Conformal Algebra on Spherical Background. The above components of the $\gamma_{ij}^{(1)}$ matrix satisfy 5 KVs, as in the flat background case, namely
\begin{equation}
\xi^{sph}_1-\xi^{sph}_9, \xi^{sph}_4+\xi^{sph}_7,\xi^{sph}_2+\xi^{sph}_8,2\xi^{sph}_3+\xi^{sph}_5\text{ and }\xi^{sph}_0-\xi^{sph}_6
\end{equation}
where the index $sph$ denotes that we are considering the KVs in the spherical background.
Although very instructive, this method requires considerable computational time if we want to consider global solutions rather than asymptotic expansions.
Another convenient method is to consider the solutions of (\ref{eq:nloke}) with the spherical KVs (\ref{kvsph0}-\ref{sphkv}). A number of such examples can be found in the appendix: Classification.
In the following subsection we consider geon and MKR global solutions on the flat background and two known solutions on the $\mathbb{R}\times S^2$ boundary.
\subsection{Map from the Classification of KVs of the Conformal Algebra to the Patera et al. Classification}
The map below lists the original KVs that correspond to the KVs from the subalgebras of Patera et al.
\begin{itemize}
\item 1 T: $a_{1,2}=\overline{b}_{1,8}=\overline{e}_{1,3}; a_{1,3}=\overline{b}_{1,9}=\overline{d}_{1,5}=\overline{e}_{1,4}$
\item 2 T: $a_{2,4}=\overline{b}_{2,7}=\overline{f}_{2,1};a_{2,5}=\overline{e}_{2,1}\approx \overline{b}_{2,8}$
\item 3 T: $a_{3,1}=\overline{b}_{3,4}$
\item 1 T+1 R: $\tilde{a}_{2,1} (K_2,P_1)=\overline{b}_{2,5}; a_{2,3} (L_3,P_0)=\overline{d}_{2,2} $
\item 1 T+D: $\tilde{a}_{2,10} (F,P_1)=\overline{b}_{2,12};a_{2,13}(F,P_0)=\overline{b}_{2,15}=\overline{e}_{2,9}$
\item 1 T+1 R+D: $\tilde{a}_{3,3}$
\item 1 T+D+1 SCT: the irreducible group o(2,1)
\item 2 T+D+1 SCT+1 R: the group $o(2,1)\oplus o(2)$
\item 2 T+1 R: $a_{3,13} (K_2,P_0,P_2)=\overline{b}_{3,16}; a_{3,21}(L_3,P_1,P_2)$
\item 2 T+1 R+1D: $a_{4,17}(F,L_3,P_1,P_2)$
\item 2 T+D: $a_{3,11}(F,P_1,P_2)=\overline{b}_{3,13};a_{3,12}(F,P_0,P_2)=\overline{b}_{3,15}$
\item 3 T+1 R: $a_{4,1}(P_0,P_1,P_2,K_2)=\overline{b}_{4,6};a_{4,3}(P_0,P_1,P_2,L_3)$
\item 1 R: $\tilde{a}_{1,1}=\overline{b}_{1,7};\overline{a}_{1,10}(L_3)=d_{1,1}=\overline{c}_{1,3}$
\item 3R: $a_{3,24}$
\end{itemize}
\subsubsection{Bach Equations for MKR}
The Bach equations obtained for the flat MKR ansatz are
\begin{align}
\left(r \left(r f''(r)-2 f'(r)\right)+2 f(r)\right)^2-2 r^3 f^{(3)}(r) \left(r f'(r)-2 f(r)\right)=0 \end{align}
\begin{align}
-2 r^4 f^{(3)}(r) f'(r)+r^2 \left(r f''(r)-2 f'(r)\right)^2-2 r f(r) \big(4 f'(r)+r \big(r^2 f^{(4)}(r)\nonumber\\+2 r f^{(3)}(r)-2 f''(r)\big)\big)+4 f(r)^2=0 \end{align}
\begin{align}
-2 r^4 f^{(3)}(r) f'(r)+r^2 \left(r f''(r)-2 f'(r)\right)^2-4 r f(r) \big(2 f'(r)+r \big(r^2 f^{(4)}(r)\\+3 r f^{(3)}(r)-f''(r)\big)\big)+4 f(r)^2=0
\end{align} Subtracting the second equation from the first, one obtains equation (\ref{mkreq}).
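This subtraction can be checked symbolically: the difference of the two equations factorises as $2r^3 f(r)\left(4f^{(3)}(r)+r f^{(4)}(r)\right)$. A minimal sympy sketch:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)
d = lambda n: sp.diff(f, r, n)

eq1 = (r*(r*d(2) - 2*d(1)) + 2*f)**2 - 2*r**3*d(3)*(r*d(1) - 2*f)
eq2 = (-2*r**4*d(3)*d(1) + r**2*(r*d(2) - 2*d(1))**2
       - 2*r*f*(4*d(1) + r*(r**2*d(4) + 2*r*d(3) - 2*d(2)))
       + 4*f**2)

factorised = 2*r**3*f*(4*d(3) + r*d(4))
print(sp.expand(eq1 - eq2 - factorised))  # 0
```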
\subsection{Global Solutions: Top-Down Approach}
Searching for solutions, one can attempt to solve the Bach equation directly. The method above, which suggests an ansatz for the solution, can simplify the procedure; however, one can just as well take a simple ansatz that is solvable at fourth order and determine the $\gamma_{ij}^{(1)}$ function.
We illustrate this with the MKR solution.
\subsubsection{MKR Solution}
We introduce a function $f(r)$ in an ansatz for the solution of the Bach equation. Our ansatz metric is of the form
\begin{equation}
g(r)=\left(
\begin{array}{cccc}
-f(r) & 0 & 0 & 0 \\
0 & \frac{1}{f(r)} & 0 & 0 \\
0 & 0 & r^2 & 0 \\
0 & 0 & 0 & r^2 \\
\end{array}
\right)\label{mkranz}
\end{equation} Inserting this into the Bach equation leads to three different differential equations, of third and fourth order (see appendix: Classification).
Manipulation of these equations leads to the equation
\begin{equation}
4f^{(3)}(r)+rf^{(4)}(r)=0\label{mkreq}
\end{equation}
which gives for $f(r)$
\begin{equation}
f=-\frac{c_1}{6r}+c_2+r c_3+r^2c_4.
\end{equation}
This solves the Bach equation provided the coefficients satisfy
$c_1=-\frac{2c_2^2}{c_3},$
so that $f(r)$ reads
$f(r)=c_2+\frac{c_2^2}{3rc_3}+r(c_3+rc_4).$
Comparison of the solution with the MKR determines the coefficients
$c_4=-\frac{\Lambda}{3}$, $c_3=-2a$, $c_2=\sqrt{12a M}$.
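As a sanity check, one can verify numerically that the quartic ansatz solves (\ref{mkreq}); the sketch below uses iterated central finite differences and arbitrary test values for the coefficients:

```python
# Check that f(r) = -c1/(6 r) + c2 + c3 r + c4 r^2 satisfies
# 4 f'''(r) + r f''''(r) = 0 (equation (mkreq)).
# The coefficient values below are arbitrary test numbers.
C1, C2, C3, C4 = 1.7, 0.3, -0.9, 2.1

def f(r):
    return -C1 / (6.0 * r) + C2 + C3 * r + C4 * r ** 2

def deriv(g, r, n, h=1e-2):
    """n-th derivative by iterated central finite differences."""
    if n == 0:
        return g(r)
    return (deriv(g, r + h, n - 1, h) - deriv(g, r - h, n - 1, h)) / (2.0 * h)

r0 = 1.5
residual = 4.0 * deriv(f, r0, 3) + r0 * deriv(f, r0, 4)
# exact value is 0; only finite-difference truncation error remains
print(abs(residual) < 1e-2)
```

The same check fails once a generic fifth term, e.g. $r^3$, is added to $f$, confirming that the quartic form is the general solution of the fourth-order equation.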
In the limit $M\rightarrow0$, this solution can be brought to FG form
by a conformal transformation and a change of coordinates.
In addition, imposing the requirements that the trace of the first term in the FG expansion is three and that $\gamma_{ij}^{(1)}$ is traceless, one obtains the form of the FG expansion with a vanishing $\gamma_{ij}^{(2)}$ matrix. This is done as follows.
\begin{itemize}
\item
Set $M\rightarrow0$ and transform $r\rightarrow 1/u$, $u\rightarrow U(\eta)$ in (\ref{mkranz}) to obtain the line element with coefficient
\begin{equation}
\frac{U'(\eta)}{U(\eta)^2-2a U(\eta)^3}=\frac{1}{\eta^2}
\end{equation}
in the $d\eta^2$ holographic component.
Its solution
\begin{equation}
U=\frac{1-\tanh^2\left[\frac{1}{2}\left(c_1-\ln\eta\right) \right]}{2a},
\end{equation} gives a desired form for the $d\eta^2$ term in the metric, i.e. brings the metric to FG form.
Insert the solution in the line element, and factorize, so that the three dimensional metric reads
\begin{equation}
\gamma_{ij}=\left(
\begin{array}{ccc}
-\frac{1}{4} a^2 (\eta -1)^2 (\eta +1)^2 & 0 & 0 \\
0 & \frac{1}{4} a^2 (\eta +1)^4 & 0 \\
0 & 0 & \frac{1}{4} a^2 (\eta +1)^4 \\
\end{array}
\right)\label{mkrflatg}
\end{equation} while the entire line element has been multiplied by $\eta^2$. The first four terms in the $\eta$ expansion of (\ref{mkrflatg}) are
\begin{align}
\gamma_{11}&=-\frac{a^2}{4}+\frac{a^2\eta^2}{2}+\mathcal{O}(\eta^4) \\
\gamma_{22}&=\frac{a^2}{4}+a^2\eta+\frac{3a^2\eta^2}{2}+a^2\eta^3+\mathcal{O}(\eta^4) \\
\gamma_{33}&=\frac{a^2}{4}+a^2\eta+\frac{3a^2\eta^2}{2}+a^2\eta^3+\mathcal{O}(\eta^4)
\end{align}
from which one can immediately read out the $\gamma_{ij}^{(1)}$, $\gamma_{ij}^{(2)}$ and $\gamma_{ij}^{(3)}$ matrices. The factor $\frac{a}{2}$ that appears in $\gamma_{ij}^{(0)}$ is absorbed in the coordinates. However, $\gamma_{ij}^{(1)}$ has a trace, and to make it traceless one needs to perform the following.
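The quoted coefficients are just the binomial expansions of the entries of (\ref{mkrflatg}); a quick numeric cross-check (with an arbitrary value of $a$):

```python
# Cross-check the eta-expansion of gamma_11 = -(a^2/4)(eta-1)^2(eta+1)^2
# and gamma_22 = (a^2/4)(eta+1)^4 against the quoted series coefficients.
a = 0.7
eta = 1e-3  # small expansion parameter

g11_exact = -(a ** 2 / 4.0) * (eta - 1.0) ** 2 * (eta + 1.0) ** 2
g11_series = -a ** 2 / 4.0 + a ** 2 * eta ** 2 / 2.0
g22_exact = (a ** 2 / 4.0) * (eta + 1.0) ** 4
g22_series = (a ** 2 / 4.0 + a ** 2 * eta
              + 1.5 * a ** 2 * eta ** 2 + a ** 2 * eta ** 3)

print(abs(g11_exact - g11_series) < 1e-10)  # remainder is O(eta^4)
print(abs(g22_exact - g22_series) < 1e-10)
```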
\item Transform the metric into a metric in FG form, so that $\eta\rightarrow P(\rho)$ and multiply with conformal factor $\frac{1}{\rho^2}\frac{P(\rho)^2}{P'(\rho)^2}$, which gives the metric
\begin{equation}
\gamma_{ij}=\left(
\begin{array}{ccc}
-\frac{\left(a^2 P(\rho )^2-4\right)^2}{16 P'(\rho )^2} & 0 & 0 \\
0 & \frac{(a P(\rho )+2)^4}{16 P'(\rho )^2} & 0 \\
0 & 0 & \frac{(a P(\rho )+2)^4}{16 P'(\rho )^2} \\
\end{array}
\right).\label{gama}
\end{equation}
Taking the trace $Tr(\gamma)$ of that metric and subtracting,
\begin{equation}\psi_{ij}=\gamma_{ij}-\frac{1}{3}Tr(\gamma_{ij}),\end{equation}
gives the traceless metric. Demanding that the trace of the metric $\gamma_{ij}$ is 3 and imposing the initial condition $P(\rho)=0$ when $\rho\rightarrow0$ leads to the ODE
\begin{equation}
48 P'(\rho)^2=(2+aP(\rho))^2(12+a P(\rho))(4+3aP(\rho))
\end{equation}
with four solutions.
\begin{align}
P(\rho)&=\frac{2\left(-1+e^{\frac{a\rho}{\sqrt{3}}}\right)\left(2-\sqrt{3}+e^{\frac{a\rho}{\sqrt{3}}}\right)}{a\left(1+e^{\frac{a\rho}{\sqrt{3}}}\right)\left(-2+\sqrt{3}+e^{\frac{a\rho}{\sqrt{3}}}\right)} \\
P(\rho)&=\frac{2 \left(e^{\frac{a \rho }{\sqrt{3}}}-1\right) \left(e^{\frac{a \rho }{\sqrt{3}}}+2+\sqrt{3}\right)}{a \left(-e^{\frac{a \rho }{\sqrt{3}}}+2+\sqrt{3}\right) \left(e^{\frac{a \rho }{\sqrt{3}}}+1\right)} \label{firstsol} \\
P(\rho)&=-\frac{2 \left(e^{\frac{a \rho }{\sqrt{3}}}-1\right) \left(-2 e^{\frac{a \rho }{\sqrt{3}}}+\sqrt{3} e^{\frac{a \rho }{\sqrt{3}}}-1\right)}{a \left(e^{\frac{a \rho }{\sqrt{3}}}+1\right) \left(-2 e^{\frac{a \rho }{\sqrt{3}}}+\sqrt{3} e^{\frac{a \rho }{\sqrt{3}}}+1\right)} \\
P(\rho)&=-\frac{2 \left(e^{\frac{a \rho }{\sqrt{3}}}-1\right) \left(2 e^{\frac{a \rho }{\sqrt{3}}}+\sqrt{3} e^{\frac{a \rho }{\sqrt{3}}}+1\right)}{a \left(e^{\frac{a \rho }{\sqrt{3}}}+1\right) \left(2 e^{\frac{a \rho }{\sqrt{3}}}+\sqrt{3} e^{\frac{a \rho }{\sqrt{3}}}-1\right)}
\end{align}
Inserting these solutions in $\gamma_{ij}$ (\ref{gama}) one can see that the second order of the FG expansion, the $\gamma_{ij}^{(2)}$ matrix, vanishes. Explicitly, the first solution (\ref{firstsol}) inserted in (\ref{gama}) and expanded in $\rho$ gives the diagonal components
\begin{align}
\gamma_{11}&=-1-2 c \rho +c^3 \rho ^3 +\mathcal{O}(\rho^4)\nonumber \\
\gamma_{22}&=1-c \rho \nonumber+\frac{c^3 \rho ^3}{2}+\mathcal{O}(\rho^4) \\
\gamma_{33}&=1-c \rho+\frac{c^3 \rho ^3}{2} +\mathcal{O}(\rho^4) \label{g1m1}.
\end{align}
This form (\ref{g1m1}) of the functional dependence on the diagonal can be obtained from the condition of Weyl flatness, with the ansatz metric
\begin{equation}
g_{ij}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 2 f_1(\rho)-1 & 0 & 0 \\
0 & 0 & f_1(\rho)+1 & 0 \\
0 & 0 & 0 & f_1(\rho)+1 \\
\end{array}
\right). \label{nulg2}
\end{equation}
The expansion of the function $f_1(\rho)$ will correspond to expansion (\ref{g1m1}).
The condition of Weyl flatness on (\ref{nulg2}) reduces the ``constraint'' from the Bach equation to the ODE
\begin{equation}
\frac{3 f_1(\rho) f_1'(\rho)^2}{2 f_1(\rho)-1}-(f_1(\rho)+1) f_1''(\rho)=0
\end{equation}
with the solution \begin{equation} f_1(\rho)=\frac{1}{2} \left(1-3 \tanh ^2\left(\frac{1}{2} \left(-\sqrt{3} c_1 \rho-\sqrt{3} c_2 c_1\right)\right)\right)\label{v1} \end{equation} which
leads to $c_{2\pm}=\pm\frac{2\arctan\left( \frac{1}{\sqrt{3}}\right)}{\sqrt{3}c_1}$. The metric $g_{ij}$ with $c_{2+}$ inserted is $g_{ij}=\mathrm{diag}(g_{11},g_{22},g_{33},g_{44})$ with
\begin{align}
g_{11}&=1\nonumber \\
g_{22}&=-3 \tanh ^2\left(\coth ^{-1}\left(\sqrt{3}\right)+\frac{1}{2} \sqrt{3} \rho c_1\right) \nonumber \\
g_{33}&=\frac{3}{2} \text{sech}^2\left(\coth ^{-1}\left(\sqrt{3}\right)+\frac{1}{2} \sqrt{3} \rho c_1\right) \nonumber \\
g_{44}&=\frac{3}{2} \text{sech}^2\left(\coth ^{-1}\left(\sqrt{3}\right)+\frac{1}{2} \sqrt{3} \rho c_1\right)
\end{align}
and $g_{22},g_{33}$ and $g_{44}$ expanded in $\rho$ read
\begin{align}
g_{22}&=-1-2c_1\rho+c_1^3\rho^3+\mathcal{O}(\rho^4)\nonumber \\
g_{33}&=1-c_1 \rho +\frac{1}{2}c_1^3\rho^3+\mathcal{O}(\rho^4)\nonumber \\
g_{44}&=1-c_1 \rho+\frac{1}{2}c_1^3\rho^3+\mathcal{O}(\rho^4). \label{mkrconfflat}
\end{align}
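One can verify (\ref{v1}) directly against the Weyl-flatness ODE; the sketch below evaluates the residual $\frac{3 f_1 f_1'^2}{2 f_1-1}-(f_1+1) f_1''$ numerically for arbitrary values of $c_1$ and $c_2$:

```python
import math

# Check that f1(rho) = (1 - 3 tanh^2((1/2)(-sqrt(3) c1 rho - sqrt(3) c2 c1)))/2
# solves  3 f1 f1'^2 / (2 f1 - 1) - (f1 + 1) f1'' = 0.
C1, C2 = 0.8, 0.4  # arbitrary test values

def f1(rho):
    u = 0.5 * (-math.sqrt(3.0) * C1 * rho - math.sqrt(3.0) * C2 * C1)
    return 0.5 * (1.0 - 3.0 * math.tanh(u) ** 2)

def d(g, x, n, h=1e-4):
    """n-th derivative by iterated central finite differences."""
    if n == 0:
        return g(x)
    return (d(g, x + h, n - 1, h) - d(g, x - h, n - 1, h)) / (2.0 * h)

rho = 0.3
v, v1, v2 = f1(rho), d(f1, rho, 1), d(f1, rho, 2)
residual = 3.0 * v * v1 ** 2 / (2.0 * v - 1.0) - (v + 1.0) * v2
print(abs(residual) < 1e-6)
```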
The matrix defined by (\ref{mkrconfflat}) contains no $\gamma_{ij}^{(2)}$ term and can be compared with (\ref{g1m1}). In particular, the function $f_1$ with $c_2$ inserted and divided by $c_1$ is equal to $\frac{P(\rho ) (a P(\rho )+2)^2}{4 P'(\rho )^2}$ with $a\rightarrow\frac{3}{2}c_1$.
\end{itemize}
As one can see from (\ref{mkrconfflat}), the $\gamma_{ij}^{(1)}$ matrix in that case takes the form that preserves the translational KVs and the rotation (\ref{trans3rot1}).
When $M$ is kept nonzero, the flat MKR metric obtained from the ansatz (\ref{mkranz}),
\begin{align}
ds_{flatMKR}^2&=-\frac{dt^2 \left(2 \sqrt{3} r \sqrt{a M}-2 a r^2-2 M+r^3\right)}{r}\nonumber \\&-\frac{r \, dr^2}{-2 \sqrt{3} r \sqrt{a M}+2 a r^2+2 M-r^3}+r^2 dx^2+r^2 dy^2 \label{fmkr}
\end{align}
using the standard FG expansion and a redefinition (transformation) of the $r$ coordinate, can only be brought to a form with a non-vanishing $\gamma_{ij}^{(2)}$ matrix. To redefine the $r$ coordinate one takes
\begin{equation}
r(\rho)=\frac{a_{-1}}{\rho}+a_0+a_1 \rho+a_2\rho^2+a_3\rho^3+a_4\rho^4+a_5\rho^5.\label{razvoj}
\end{equation} Transformation of the metric into
\begin{equation}
\frac{dr(\rho)^2}{V[r(\rho)]}=\frac{d\rho^2}{\rho^2}
\end{equation}
defines the coefficients in (\ref{razvoj})
\begin{align}
\begin{array}{|c|c|c|c|c|c|} \hline
a_{-1}&a_0 & a_1 &a_2 & a_3 & a_4 \\
1 & \frac{a^2-2 \sqrt{3} \sqrt{a M}}{4 } & \frac{M}{3 } & -\frac{a M}{4 } & \frac{5 a^2 M+2 \sqrt{3} M \sqrt{a M}}{30} &\frac{-15 a^3 M-18 \sqrt{3} a M \sqrt{a M}-4 M^2}{144 } \\ \hline
\end{array}
\end{align}and bring the metric into FG form.
\end{align}and brings the metric into FG form.
In the $g_{tt}$, $g_{xx}$ and $g_{yy}$ terms we insert the expansion, which allows us to read out the $\gamma_{ij}^{(1)}$, $\gamma_{ij}^{(2)}$ and $\gamma_{ij}^{(3)}$ matrices
\begin{align} \gamma_{ij}^{(1)}&=diag(0,2a,2a),\\
\gamma_{ij}^{ (2)}&= \left(
\begin{array}{ccc}
{\scriptstyle \frac{1}{2} \left(a^2-2 \sqrt{3} \sqrt{a M}\right)} & 0 & 0 \\
0 &{\scriptstyle \frac{3 a^2}{2}-\sqrt{3} \sqrt{a M}} & 0 \\
0 & 0 &{\scriptstyle \frac{3 a^2}{2}-\sqrt{3} \sqrt{a M}} \\
\end{array}
\right)\\
\gamma_{ij}^{(3)}&=\left(\begin{array}{ccc}
\frac{4 M}{3} & 0 & 0 \\
0 &{\scriptstyle \frac{1}{6} \left(3 a^3-6 \sqrt{3} \sqrt{a M} a+4 M\right) }& 0 \\
0 & 0 & {\scriptstyle \frac{1}{6} \left(3 a^3-6 \sqrt{3} \sqrt{a M} a+4 M\right)} \\
\end{array}
\right).
\end{align}
The $\gamma_{ij}^{(1)}$ matrix, as expected, preserves the three translational KVs and the rotation.
This form of $\gamma_{ij}^{(1)}$ is not traceless; to make it traceless we perform a conformal rescaling of the metric, analogously to the $M\rightarrow0$ case above. Multiplying the metric (\ref{fmkr}) with the conformal factor \begin{equation} e^{-\frac{2a}{3r}} \label{cf} \end{equation} leads to a set of coefficients in the $r(\rho)$ expansion that yields a traceless $\gamma_{ij}^{(1)}$, as in (\ref{trans3rot1}). If, for convenience, we expand the $r$ coordinate as
\begin{equation}
r(\rho)=b_1+b_2\rho+b_3\rho^2+b_4\rho^3+b_5\rho^4
\end{equation}
the condition to obtain the FG expansion, $\frac{dr(\rho)^2}{V[r(\rho)]}=\frac{d\rho^2}{\rho^2}$, leads to \begin{center}
\begin{tabular}{|c|c|c|c|}\hline
$b_1$ & $b_2$ & $b_3$ & $b_4$ \\
1 & $-\frac{2a}{3}$ & $ \frac{1}{18} \left(a^2+9 \sqrt{3} \sqrt{a M}\right) $& $ \frac{1}{243} \left(38 a^3-108 \sqrt{3} a \sqrt{a M}-81 M\right) $ \\ \hline
&&&$b_5$ \\
&&& $ -\frac{a \left(176 a^3+216 \sqrt{3} a \sqrt{a M}-3483 M\right)}{2916}$ \\ \hline
\end{tabular}
\end{center}
\noindent The $\gamma_{ij}^{(m)}$ for $m=1,2,3$ diagonal matrices in the FG expansion read
\begin{align} \gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{4 a}{3} & 0 & 0 \\
0 & \frac{2 a}{3} & 0 \\
0 & 0 & \frac{2 a}{3} \\
\end{array}
\right), \end{align}
\begin{align}
\gamma_{11}^{(2)}&= \frac{1}{243} \left(214 a^3+270 \sqrt{3} \sqrt{a M} a+324 M\right) \nonumber \\
\gamma_{22}^{(2)}&= \frac{1}{243} \left(83 a^3-189 \sqrt{3} \sqrt{a M} a+162 M\right) \nonumber \\
\gamma_{33}^{(2)}&=-\frac{a \left(-563 a^3+1242 \sqrt{3} \sqrt{a M} a-2835 M\right)}{2916} \nonumber
\end{align}
\begin{align}
\gamma_{11}^{(3)}&=-\frac{a \left(1331 a^3+3942 \sqrt{3} \sqrt{a M} a+8667 M\right)}{2916} \nonumber\\
\gamma_{22}^{(3)}&=-\frac{a \left(-563 a^3+1242 \sqrt{3} \sqrt{a M} a-2835 M\right)}{2916} \nonumber \\
\gamma_{33}^{(3)}&=-\frac{a \left(-563 a^3+1242 \sqrt{3} \sqrt{a M} a-2835 M\right)}{2916}
\end{align}
and the $\gamma_{ij}^{(1)}$ is now traceless as required.
The response functions
\begin{align}
\tau_{ij}&=\left(
\begin{array}{ccc}
\frac{8}{9} \left(\sqrt{3} \sqrt{a M} a+9 M\right) & 0 & 0 \\
0 & 4 M-\frac{8 a \sqrt{a M}}{3 \sqrt{3}} & 0 \\
0 & 0 & 4 M-\frac{8 a \sqrt{a M}}{3 \sqrt{3}} \\
\end{array}
\right), \\ P_{ij}&=\left(
\begin{array}{ccc}
\frac{8 \sqrt{a M}}{\sqrt{3}} & 0 & 0 \\
0 & \frac{4 \sqrt{a M}}{\sqrt{3}} & 0 \\
0 & 0 & \frac{4 \sqrt{a M}}{\sqrt{3}} \\
\end{array}
\right)
\end{align}
using (\ref{charge1}), $\mathcal{Q}_{ij}=2\tau_{ij}+2P^{ik}\gamma_{kj}^{(1)}$,
define the modified stress-energy tensor in the sense of Hollands, Ishibashi and Marolf \cite{Hollands:2005ya}
\begin{equation}
\mathcal{Q}_{ij}=\left(
\begin{array}{ccc}
16 M-\frac{16 a \sqrt{a M}}{\sqrt{3}} & 0 & 0 \\
0 & 8 M & 0 \\
0 & 0 & 8 M \\
\end{array}
\right) \label{chmkrflat}
\end{equation}
and give the energy $16M-\frac{16 a\sqrt{aM}}{\sqrt{3}}$ per unit area of the surface.
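The matrix (\ref{chmkrflat}) can be reproduced by direct matrix algebra. The sketch below assumes that the contracted index of $P$ is raised with the flat boundary metric $\eta_{ij}=\mathrm{diag}(-1,1,1)$, and uses arbitrary test values for $a$ and $M$:

```python
import math

# Reproduce Q_ij = 2 tau_ij + 2 P_i^k gamma^(1)_kj for the flat MKR data,
# raising the index with eta = diag(-1, 1, 1); a, M are arbitrary positives.
a, M = 0.9, 0.5
s = math.sqrt(a * M)
eta = [-1.0, 1.0, 1.0]

tau = [8.0 / 9.0 * (math.sqrt(3.0) * s * a + 9.0 * M),
       4.0 * M - 8.0 * a * s / (3.0 * math.sqrt(3.0)),
       4.0 * M - 8.0 * a * s / (3.0 * math.sqrt(3.0))]
P = [8.0 * s / math.sqrt(3.0),
     4.0 * s / math.sqrt(3.0),
     4.0 * s / math.sqrt(3.0)]
g1 = [4.0 * a / 3.0, 2.0 * a / 3.0, 2.0 * a / 3.0]

# all matrices are diagonal, so the matrix product reduces to entrywise products
Q = [2.0 * tau[i] + 2.0 * eta[i] * P[i] * g1[i] for i in range(3)]

print(abs(Q[0] - (16.0 * M - 16.0 * a * s / math.sqrt(3.0))) < 1e-12)
print(abs(Q[1] - 8.0 * M) < 1e-12)
print(abs(Q[2] - 8.0 * M) < 1e-12)
```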
One can also compute the charges corresponding to the conserved KVs. Equation (\ref{chmkrflat}), using the relations $J^i=(2\tau^i{}_j+2P^{ik}\gamma^{(1)}_{kj})\xi^j$ and $Q[\xi]=\int_{\mathcal{C}}d^2x\sqrt{h}u_iJ^i$, where $h$ is the metric on $\mathcal{C}$ and $u^i$ the future-pointing unit normal to $\mathcal{C}$,
for the timelike, spacelike and rotational KVs $(1,0,0),(0,1,0),(0,0,1)$ and $(0,y,-x)$ leads to the currents and charges
\begin{align}\begin{array}{|c|c|c|c|c|}\hline KV & (1,0,0) & (0,1,0) & (0,0,1) & (0,y,-x) \\ \hline current & \left(16\left(M-\frac{a\sqrt{aM}}{\sqrt{3}} \right) ,0,0 \right)& (0,8M,0) & (0,0,8M) & (0,8My,-8Mx)\\ \hline
charge & 16\left(M-\frac{a\sqrt{aM}}{\sqrt{3}}\right) & 0 &0 & 0 \\ \hline \end{array}\end{align}
\noindent Let us consider two examples on the spherical background.
\subsubsection{Spherical Examples: Black hole solutions}
\textbf{MKR solution.}
Two examples for which we know the global solution are the MKR solution and the rotating black hole solution. We considered the MKR solution in the first chapter. Expanded in the FG expansion, it gives the $\gamma_{ij}^{(1)}$ matrix (\ref{gama1mkr})
and preserves four KVs, $\xi^{sph}_0,\xi^{sph}_7,\xi^{sph}_8,\xi^{sph}_9$, that form the $\mathbb{R}\times o(3)$ subalgebra of the conformal algebra $so(2,3)$. The only charge that does not vanish is the one that belongs to the $\xi_4^{sph}=\partial_t$ KV
(\ref{holrenmkrcharge})
\begin{equation}
Q[\xi_4^{sph}]=\frac{M}{\ell^2}-a(1-\sqrt{1-12aM}).
\end{equation}
In terms of the canonical analysis, that charge is equal to $Q[\xi^{sph}_4]=Q_{\perp}[N]$.
\\
\\
\noindent{\bf Rotating Black Hole}
Let us consider the $\gamma_{ij}^{(1)}$ matrix for a spherical global solution of the Bach equation, the rotating black hole solution. The metric reads \cite{Liu:2012xn}
\begin{align}
ds^2&=\rho^2\big[\frac{dr^2}{\Delta_r}+\frac{d\theta^2}{\Delta_{\theta}}+\frac{\Delta_{\theta}\sin^2\theta}{\rho^2}\left(\alpha dt-(r^2+\alpha^2)\frac{d\phi}{\Sigma}\right)^2 \nonumber \\ &-\frac{\Delta_r}{\rho^2}\left(dt-\alpha\sin^2\theta\frac{d\phi}{\Sigma}\right)^2 \big],
\end{align} with
\begin{align}
\rho^2&=r^2+\alpha^2\cos^2\theta, && \Delta_{\theta}=1+\frac{1}{3}\Lambda\alpha^2\cos^2\theta, && \Sigma=1+\frac{1}{3}\Lambda\alpha^2
\end{align}
\begin{align}
\Delta_r&=\left( r^2+\alpha^2 \right)\left(1-\frac{1}{3}\Lambda r^2 \right)-2\mu r^3.
\end{align}
To find the subalgebra of $so(2,3)$ for the $\gamma_{ij}^{(1)}$, one first needs to transform the metric to the FG expansion, transform the leading term in the expansion, $\gamma_{ij}^{(0)}$, to the spherical background $\mathbb{R}\times S^2$,
and apply the same transformation to $\gamma_{ij}^{(1)}$. The resulting $\gamma_{ij}^{(1)}$ is the one that defines the subalgebra and the corresponding KVs.
After setting $\Lambda\rightarrow3$ and multiplying the line element with $\frac{1}{r^2+\alpha^2\cos^2\theta}$, we insert the expansion of the coordinate $r$ in terms of the newly introduced coordinate $\rho$ (\ref{razvoj}).
The $\rho\rho$ component of the FG expansion defines the equation
\begin{equation}
\frac{dr(\rho)^2}{\left(1+r(\rho)\right)^2\left(\alpha^2+r(\rho)^2\right)-2r(\rho)^3\mu}=\frac{d\rho^2}{\rho^2}
\end{equation}
and gives for the coefficients in the expansion (\ref{razvoj})
\begin{align}
\begin{array}{|c|c|c|c|}
\hline
a_0 & a_1 & a_2 & a_3 \\
\frac{\mu}{2} & \frac{-2-2\alpha^2+3\mu^2}{12 a_{-1}} & \frac{-\mu-\alpha^2\mu+\mu^3}{8a_{-1}} & \frac{2(7-22\alpha^2+7\alpha^4)-60(1+\alpha^2)\mu^2+45\mu^4}{720 a_{-1}^3} \\ \hline
\end{array}\label{tablecoef}
\end{align}
\noindent which, inserted in the metric, gives $g_{\rho\rho}=1$ (where we assume the metric is rescaled with the factor $\frac{1}{\rho^2}$).
The next term we want to determine is $g_{tt}$,
\begin{equation}
g_{tt}= 1+\frac{8r(\rho)^3\mu}{(\alpha^2+2r(\rho)^2+\alpha^2\cos(2\theta))^2}-\frac{2(1+\alpha^2+2r(\rho)^2)}{\alpha^2+2r(\rho)^2+\alpha^2\cos(2\theta)},
\end{equation}
Inserting $r(\rho)$ with (\ref{tablecoef}) and expanding
in $\rho$, the $g_{tt}$ component of the metric becomes
\begin{equation}
g_{tt}=-1+2\mu\rho+\rho^2(-1-\mu^2+\alpha^2\cos(2\theta)),\label{gtt}
\end{equation}
while
\begin{equation}
g_{t\phi}=-\frac{\alpha\sin^2\theta}{-1+\alpha^2}+\frac{2\alpha\mu\rho\sin^2\theta}{-1+\alpha^2}+\frac{\alpha\rho^2(-\alpha^2-2\mu^2+\alpha^2\cos(2\theta))\sin^2\theta}{2(-1+\alpha^2)}\label{gtph}
\end{equation}while the $g_{\theta\theta}$ term remains the same, since the $r(\rho)$ coordinate does not appear in it,
\begin{equation}
g_{\theta\theta}=\frac{1}{1-\alpha^2\cos^2\theta}.\label{gthth}
\end{equation}
The $g_{\phi\phi}$ term is
\begin{equation}
g_{\phi\phi}=\frac{\sin^2\theta}{-1+\alpha^2}+\frac{2\alpha^2\mu\rho\sin^4\theta}{(-1+\alpha^2)^2}-\frac{\alpha^2(-1+\alpha^2+\mu^2)\rho^2\sin^4\theta}{(-1+\alpha^2)^2}.\label{gff}
\end{equation}
From (\ref{gtt}), (\ref{gtph}), (\ref{gthth}), and (\ref{gff}) one can read out the terms in the FG expansion
\begin{align}
\gamma_{ij}^{(0)}=\left(\begin{array}{ccc} -1 & 0 & -\frac{\alpha\sin^2\theta}{-1+\alpha^2} \\
0 & \frac{1}{1-\alpha^2\cos^2\theta} & 0 \\
-\frac{\alpha \sin^2\theta}{-1+\alpha^2} & 0 & \frac{\sin^2\theta}{1-\alpha^2} \end{array}\right).
\end{align}
To transform it into a form $\gamma_{ij}^{(0)}=diag(-1,1,\sin^2\theta)$ we transform the coordinates
\begin{align}
\phi&\rightarrow bt_1+a\phi_1, && t\rightarrow-\frac{b}{a_2}t_1\\
t_1&\rightarrow \frac{t_1\sqrt{-1+\alpha^2}}{b}, &&\phi_1\rightarrow\frac{\phi_1\sqrt{-1+\alpha^2}}{a}\\
\phi&\rightarrow \frac{1}{a_2}\phi_1 &&
\end{align}and divide the metric by
\begin{equation}
1-\frac{1}{\alpha^2}-\sin^2\theta.
\end{equation}The $\gamma_{ij}^{(0)}$ term is
\begin{equation}
\left(
\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -\frac{4 a_2^2}{\left(\cos (2 \theta ) a_2^2+a_2^2-2\right)^2} & 0 \\
0 & 0 & -\frac{2 \sin ^2(\theta )}{\cos (2 \theta ) a_2^2+a_2^2-2}
\end{array}
\right).
\end{equation}
To obtain the required form of the $\gamma_{ij}^{(0)}$ we solve the equation
\begin{equation}
-\frac{2\sin^2\theta(x)}{-2+\alpha^2+\alpha^2\cos(2\theta(x))}=\sin^2(x)
\end{equation}
and obtain the solution for $\theta(x)$
\begin{equation}
\theta\rightarrow \arctan\left[-\frac{\sqrt{-\cos^2x}}{-1+\alpha^2\sin^2x}\frac{\sqrt{-1+\alpha^2\sin x}}{\sqrt{-1+\alpha^2}\sin^2x}\right]
\end{equation}
that leads to the line element
\begin{equation}
ds^2=-dt^2+\frac{\alpha^2 dx^2}{-1+\alpha^2}+\sin^2x d\phi^2.
\end{equation}
By multiplication with
\begin{equation}
\frac{-1+\alpha^2}{\alpha^2}
\end{equation}and the transformations
\begin{align}
\phi&\rightarrow\phi \frac{\alpha}{\sqrt{-1+\alpha^2}}, && t\rightarrow\frac{t}{\sqrt{1-\frac{1}{\alpha^2}}}
\end{align}
it is finally brought to the desired form
$-dt^2+d\theta^2+\sin^2\theta d\phi^2$.
After transforming the metric into FG form, we read out $\gamma_{ij}^{(1)}$ from (\ref{gtt}), (\ref{gtph}), (\ref{gthth}), and (\ref{gff}),
\begin{align}
\gamma_{ij}^{(1)}=\left(\begin{array}{ccc} 2\mu &0 & \frac{2\alpha\mu\sin^2\theta}{-1+\alpha^2} \\ 0&0&0 \\ \frac{2\alpha\mu\sin^2\theta}{-1+\alpha^2} & 0 & \frac{2\alpha^2\mu\sin^4\theta}{(-1+\alpha^2)^2}
\end{array}
\right).
\end{align}
Applying the transformations from above to $\gamma_{ij}^{(1)}$, we obtain
\begin{align}
\gamma_{ij}^{(1)}=\left(
\begin{array}{ccc}
\frac{4\mu}{2-\alpha^2+\alpha^2\cos(2\theta)} &0& \frac{4\alpha\mu\sin^2\theta}{2-\alpha^2+\alpha^2\cos(2\theta)} \\ 0 & 0 & 0\\ \frac{4\alpha\mu\sin^2\theta}{2-\alpha^2+\alpha^2\cos(2\theta)} &0 & \frac{4\alpha^2\mu\sin^4\theta}{2-\alpha^2+\alpha^2\cos(2\theta)}
\end{array}
\right)
\end{align}
which preserves the $(1,0,0)$ and $(0,0,1)$ KVs, forming the $o(2)$ algebra.
\subsection{Asymptotic Solutions}
When the bottom-up and top-down approaches to solving the Bach equation become too complicated, one can use asymptotic analysis. This searches for solutions in the neighbourhood of the conformal boundary; for sufficiently simple cases these can be extended to global solutions. Here, we want to find ``new'' boundary solutions with non-zero charges that are not equivalent to the MKR solution.
Let us consider coordinates with the boundary at $\rho=0$ and the familiar asymptotic expansion
\begin{equation}
ds^2=\frac{\ell^2}{\rho^2}\left[d\rho^2+\left(\gamma_{ij}^{(0)}+\frac{\rho}{\ell}\gamma_{ij}^{(1)}+\frac{\rho^2}{\ell^2}\gamma_{ij}^{(2)}+\frac{\rho^3}{\ell^3}\gamma_{ij}^{(3)}+...\right)dx^idx^j\right],\label{gc}
\end{equation}
and traceless higher order terms $\psi_{ij}^{(n)}$. We compute the Bach equation order by order in the holographic $\rho$ coordinate for three examples:
\begin{enumerate}
\item MKR with vanishing $\gamma_{ij}^{(2)}$ matrix ($M\rightarrow0$) and $\mathbb{R}^3$ boundary,
\item MKR with vanishing $\gamma_{ij}^{(2)}$ matrix and $\mathbb{R}\times S^2$,
\item an example with an asymptotic solution with an arbitrary function on the diagonal and the above $\gamma_{ij}^{(1)}$.
\end{enumerate}
The first term in the expansion is defined by the choice of conformal boundary, and
for the first subleading term we choose $\gamma_{ij}^{(1)}=diag(2c,c,c)$ (the procedure can be applied to the $\gamma_{ij}^{(1)}$ solutions that we have listed above in the classification). For simplicity, we set the second term in the FG expansion (\ref{gc}) to zero, $\psi_{ij}^{(2)}=0$. The condition that the Bach equation imposes on the third term $\psi_{ij}^{(3)}$ in the FG expansion (\ref{gc}) is \begin{equation}\partial^{j}\psi^{(3)}_{ij}=0.\end{equation}
\begin{enumerate}
\item
For the first case one can, for simplicity, take as an ansatz the traceless matrix
\begin{equation}
\psi_{ij}^{(3)}=\left(
\begin{array}{ccc}
d_1+d_2 & d_3 & d_4 \\
d_3 & d_1 & d_5 \\
d_4 & d_5 & d_2 \\
\end{array}
\right)
\end{equation}
Since the metric is flat, there is no contribution from the curvatures, and the condition on the coefficients in $\psi_{ij}^{(3)}$ comes from the $\rho\rho$ component of the EOM, which for $\ell=1$ reduces to
\begin{equation}
-\frac{3}{4}\psi^{(1)}{}_i^{k}\psi^{(1)}{}^{ij}\psi^{(1)l}_j\psi^{(1)}_{kl}+\frac{1}{8}\psi^{(1)}_{ij}\psi^{(1)}{}^{ij}\psi^{(1)}_{lk}\psi^{(1)lk}-\frac{1}{2}\psi^{(1)ij}\psi^{(3)}_{ij}=0
\end{equation}
and gives $d_1=-6c^3-d_2$ and
\begin{equation}
\psi_{ij}^{(3)}=\left(
\begin{array}{ccc}
-6 c^3 & d_3 & d_4 \\
d_3 & -6 c^3-d_2 & d_5 \\
d_4 & d_5 & d_2 \\
\end{array}
\right).
\end{equation}
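The condition $d_1=-6c^3-d_2$ can be checked by evaluating the $\rho\rho$ component above for the diagonal data. A numeric sketch (assuming indices are raised with the flat boundary metric $\eta_{ij}=\mathrm{diag}(-1,1,1)$, so the signs square out in the diagonal contractions; all free coefficients are arbitrary):

```python
# Check the rho-rho condition
#   -(3/4) (psi1^4 trace) + (1/8)(psi1 . psi1)^2 - (1/2) psi1 . psi3 = 0
# for psi1 = diag(2c, c, c) and psi3 as above with d1 = -6 c^3 - d2.
c, d2 = 0.6, 0.25  # arbitrary test values
d1 = -6.0 * c ** 3 - d2
psi1 = [2.0 * c, c, c]
psi3 = [d1 + d2, d1, d2]  # diagonal of psi^(3)

tr2 = sum(x * x for x in psi1)                 # psi1_ij psi1^ij
tr4 = sum(x ** 4 for x in psi1)                # psi1_i^k psi1^ij psi1_j^l psi1_kl
tr13 = sum(x * y for x, y in zip(psi1, psi3))  # psi1^ij psi3_ij

residual = -0.75 * tr4 + 0.125 * tr2 ** 2 - 0.5 * tr13
print(abs(residual) < 1e-12)
```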
The Brown--York stress tensor for the metric is defined by the $\psi_{ij}^{(3)}$ and $\psi_{ij}^{(1)}$ matrices, that is, by the electric part of the Weyl tensor $E_{ij}^{(3)}$, while $E_{ij}^{(2)}$ vanishes for the flat metric.
If we replace the expansion of the metric (\ref{gc}) with an expansion that, for convenience in the computation, introduces factorials in the metric
\begin{equation}
ds^2=\frac{\ell^2}{\rho^2}\left[d\rho^2+\left(\gamma_{ij}^{(0)}+\frac{\rho}{\ell}\gamma_{ij}^{(1)}+\frac{1}{2!}\frac{\rho^2}{\ell^2}\gamma_{ij}^{(2)}+\frac{1}{3!}\frac{\rho^3}{\ell^3}\gamma_{ij}^{(3)}+...\right)dx^idx^j\right]\label{gcf}
\end{equation}
the choice of the components of the $\psi_{ij}^{(3)}$ matrix is particularly convenient, because in that case the BY ST leads to
\begin{equation}
\tau_{ij}=\left(
\begin{array}{ccc}
0 & d_3 & d_4 \\
d_3 & -3 c^3-d_2 & d_5 \\
d_4 & d_5 & 3 c^3+d_2 \\
\end{array}
\right)
\end{equation}
which contains a vanishing $\tau_{11}$ component,
while for the first choice of the expansion (i.e. with the factorials absorbed in the components of $\psi_{ij}^{(3)}$) and the same $\psi_{ij}^{(3)}$ matrix,
\begin{equation}
\tau_{ij}=\left(
\begin{array}{ccc}
-12 c^3 & 3 d_3 & 3 d_4 \\
3 d_3 & -3 \left(5 c^3+d_2\right) & 3 d_5 \\
3 d_4 & 3 d_5 & 3 \left(c^3+d_2\right) \\
\end{array}
\right).
\end{equation}
The charge associated with the $(1,0,0)$ KV therefore vanishes in the latter case, while the charge for $(0,1,0)$ reads $2d_3 l^2$, for $(0,0,1)$ it is $2d_4 l^2$, and for $(y,-x,0)$ it is $l^2(-2d_4x+2d_3y)$, where $l$ denotes the lengths over which we integrate the charges. In the first case all four charges are present. From the computational side this shows that a convenient choice of the components of the matrix can simplify the search for solutions and yield response functions of the form desired for particular purposes.
\item
Let us now choose the form of the expansion (\ref{gcf}) and consider the background $\mathbb{R}\times S^2$, which is closer to the original examples of $AdS$ holography and to the global results for EG with cosmological constant $\Lambda<0$, where we assume that the conformal boundary belongs to the same conformal class as the Einstein static universe.
Assume conformal boundary with coordinates $(t,\theta,\phi)$ and the metric \begin{equation}
\gamma_{ij}^{(0)}=diag(-1,L^2,L^2\sin^2(\theta)),
\end{equation}
for $L$ the radius of $S^2$.
The boundary conditions are preserved by the diffeomorphisms $\xi^{\mu}$ of (\ref{lo}) and (\ref{nloke}).
The solution $\gamma_{ij}^{(1)}$ that preserves the three KVs of $S^2$ and $(1,0,0)$ is
\begin{equation}
\psi_{ij}^{(1)}=diag(2c,cL^2,cL^2\sin^2(\theta))
\end{equation}
which is covariantly constant, $\mathcal{D}_k\psi^{(1)}_{ij}=0$. The curvatures $\mathcal{R}_{ij}^{(0)}=\delta_i^a\delta_j^b\frac{1}{L^2}\gamma_{ab}^{(0)}$, $\mathcal{R}^{(0)}=\frac{2}{L^2}$, and the condition $\mathcal{D}_k\mathcal{R}_{ij}^{(0)}=0$ simplify the Bach equation (\ref{bach}) at fourth order, whose $\mu=i$, $\nu=\rho$ component reads
\begin{equation}
\mathcal{D}^{j}\psi_{ij}^{(3)}=0\label{ir}.
\end{equation}
For the ansatz
\begin{gather}
\psi^{(3)}_{ij} = \left( \begin{array}{ccc}
d_1 + d_2 & d_3 & d_4 \\
d_3 & d_1\,L^2 & d_5 \\
d_4 & d_5 & d_2\,L^2\,\sin^{2}\theta \end{array} \right)
\end{gather}
the equation (\ref{ir}) gives $d_3=0$, $d_5=0$ and $d_2=d_1$. As in the example above, we are interested in finding a simple solution with interesting charges.
The Bach equation for the components $\mu=\rho$ and $\nu=\rho$ leads to
\begin{gather}
0 = -9\,\frac{c^4}{\ell^{8}} - 3\,\frac{c\,d_1}{\ell^{8}}-\frac{2}{3}\,\frac{1}{\ell^{4} L^{4}}
\end{gather}
\begin{gather}
\Rightarrow d_1 = -3\,c^3 - \frac{2\,\ell^{4}}{9\,c\,L^4} ~.
\end{gather}
It is interesting to notice that in the above case one cannot take the limit $c\rightarrow0$, while that is allowed for the conformal boundary $\mathbb{R}^3$.
To find the response functions we first need the magnetic and electric parts of the Weyl tensor
\begin{align}
B^{(1)}_{kij} = & \,\, 0
\end{align}
\begin{align}
E^{(2)}_{ij} = & - \frac{1}{2}\,\left(\mathcal{R}^{\ms{(0)}}_{ij} - \frac{1}{3}\,\gamma^{\ms{(0)}}_{ij}\,\mathcal{R}^{\ms{(0)}} \right) = -\frac{1}{6 L^2 c}\,\psi^{\ms{(1)}}_{ij} = - \frac{1}{6 L^2}\,\text{diag}(2,L^2,L^2 \sin^{2}\theta)
\end{align}
\begin{align}
E^{(3)}_{ij} = & \,\, - \frac{1}{4\,\ell^{3}}\,\psi^{\ms{(3)}}_{ij} - \frac{1}{8\,\ell^{3}}\,\psi^{\ms{(1)}}_{ij}\,\psi^{\ms{(1)}}{}^{kl} \psi^{\ms{(1)}}_{kl} + \frac{1}{6}\,\left(\mathcal{R}^{\ms{(0)}} \psi^{\ms{(1)}}_{ij} - \gamma^{\ms{(0)}}_{ij}\,\mathcal{R}^{\ms{(0)}}_{kl} \psi^{\ms{(1)}}{}^{kl}\right) ~.
\end{align}
These define the PMR
\begin{align}
P_{ij} =& \frac{4}{\ell}\,E^{\ms{(2)}}_{ij} = - \frac{2\,\alpha^{2}}{3\,\ell^{3}\,c}\,\psi^{\ms{(1)}}_{ij} = - \frac{2\,\alpha^{2}}{3\,\ell^{3}}\,\text{diag}(2, L^2, L^2 \sin^{2}\theta)
\end{align}
where $\alpha=\frac{\ell}{L}$ is the ratio of the $AdS$ length scale and the radius of the sphere. The BY ST is
\begin{gather}
\tau_{ij} = - \frac{4}{\ell}\,E^{\ms{(3)}}_{ij} + \frac{4}{\ell}\,\left(E^{\ms{(2)}}_{ik} \psi^{\ms{(1)}}{}^{k}{}_{j} + E^{\ms{(2)}}_{kj} \psi^{\ms{(1)}}{}_{i}{}^{k} \right) - \frac{2}{\ell}\,\gamma^{\ms{(0)}}_{ij}\,E^{\ms{(2)}}_{kl} \psi^{\ms{(1)}}{}^{kl}
\end{gather}
\begin{gather}
\tau_{ij} = \left( \begin{array}{ccc}
-\frac{2 c \alpha^{3}}{3 \ell^{3}} - \frac{4 \alpha^{4}}{9 c \ell^{3}} & 0 & \frac{d_4}{\ell^{3}} \\
0 & \frac{2c}{3\ell} - \frac{2 \alpha^{2}}{9 c \ell} & 0 \\
\frac{d_4}{\ell^{3}} & 0 & \left(\frac{2c}{3\ell} - \frac{2 \alpha^{2}}{9 c \ell}\right) \sin^{2}\theta \end{array} \right).
\end{gather}
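The chain $P_{ij}=\frac{4}{\ell}E^{(2)}_{ij}=-\frac{2\alpha^2}{3\ell^3 c}\psi^{(1)}_{ij}$ is consistent with $\alpha=\ell/L$; a numeric sketch with arbitrary values (at a fixed $\theta$):

```python
import math

# Verify (4/l) * E2_ij == -(2 alpha^2/(3 l^3 c)) * psi1_ij
# for E2_ij = -(1/(6 L^2 c)) psi1_ij and alpha = l/L,
# with psi1 = diag(2c, c L^2, c L^2 sin^2(theta)).
c, l, L, theta = 0.5, 1.2, 2.5, 0.8  # arbitrary test values
alpha = l / L
psi1 = [2.0 * c, c * L ** 2, c * L ** 2 * math.sin(theta) ** 2]

E2 = [-(1.0 / (6.0 * L ** 2 * c)) * x for x in psi1]
P_from_E2 = [4.0 / l * x for x in E2]
P_closed = [-(2.0 * alpha ** 2 / (3.0 * l ** 3 * c)) * x for x in psi1]

print(all(abs(p - q) < 1e-12 for p, q in zip(P_from_E2, P_closed)))
```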
In this case we obtain for the components with at least one $t$ index
\begin{align}
T_{tt} = & \,\, \frac{4 \, c \, \alpha^{2}}{\ell^{3}} - \frac{8 \, \alpha^4}{9 \, c \, \ell^{3}}\\
T_{t\phi} = & \,\, \frac{2\,d_4}{\ell^{3}} ~.
\end{align}
The charges integrated over the compact constant time surface ($S^2$) give finite results. They are associated with the time translation and rotations in the $\phi$ direction
\begin{align}
Q[\xi_0^{sph}] = &\,\,\frac{16\,\pi\,c}{\ell} - \frac{32\,\pi\,\alpha^{4}}{9\,c\,\ell} \\
Q[\xi_7^{sph}] = &\,\, \frac{8\,\pi\,d_4}{\ell\,\alpha^{2}} ~.
\end{align}
A Casimir energy exists for this boundary, which differs from the $\mathbb{R}^3$ boundary.
Something similar happens in EG holography, only here the conformal boundary is three-dimensional, while for EG the Casimir energy appears only when the conformal boundary is even-dimensional.
\item In the third example we assume the metric
\begin{equation}
g_{ij}=\left(
\begin{array}{cccc}
\frac{1}{r^2} & 0 & 0 & 0 \\
0 & \frac{2 a_1 c_1(r)-1}{r^2} & 0 & 0 \\
0 & 0 & \frac{a_1 c_1(r)+1}{r^2} & 0 \\
0 & 0 & 0 & \frac{a_1 c_1(r)+1}{r^2} \\
\end{array}
\right)\label{gex3}
\end{equation}
which we expand in the $r$ coordinate. We set $\gamma_{ij}^{(0)}$ to be the flat background and $\gamma_{ij}^{(1)}=diag(2c,c,c)$.
The metric (\ref{gex3}) is simple enough to allow the computation of the Bach equation using the RGTC code; the non-vanishing components of the Bach tensor $B_{ij}$ are
$B_{rr},B_{tt},B_{xx},B_{yy}$.
The equations are still too complicated to be solved exactly; however, one can treat them asymptotically, expanding the $c_1(r)$ function
\begin{equation}
c_1(r)=b_1 r+ b_2 r^2+b_3 r^3+b_4 r^4+b_5 r^5+ b_6 r^6+ b_7 r^7+ b_8 r^8+b_9 r^9+b_{10} r^{10}.
\end{equation}
The expansion of the $B_{rr}$ equation around $r=0$ up to 10th order gives the coefficients multiplying each of the $b$'s. The Bach equation expectedly vanishes up to 4th order, since it is fourth order in derivatives; however, we find that the $\mathcal{O}(r^5)$ term vanishes as well. The first non-vanishing term appears at 6th order, $\mathcal{O}(r^6)$, and gives the condition $b_3=-\frac{1}{2}a_1^2b_1^3$ on the $b_3$ coefficient. Inserting that solution in the following, 7th, order determines the coefficient $b_4$, and one can recursively solve the Bach equation up to the desired and computationally allowed order.
For the above ansatz with constant $b$'s one obtains
\begin{align}
\begin{array}{|c|c|c|c|c|c|}\hline
b_3 & b_4 & b_5 & b_6 & b_7 & b_8\\
-\frac{1}{2}a_1^2b_1^3 & -\frac{1}{4}a_1^3b_1^4 & \frac{3a_1^4b_1^5}{40} & \frac{a_1^5b_1^6}{8} & \frac{17a_1^6b_1^7}{560} & -\frac{9}{320}a_1^7b_1^8 \\ \hline
\end{array}.
\end{align}
These solutions satisfy the $B_{tt}$, $B_{xx}$ and $B_{yy}$ components of Bach equation as well.
Therefore we have solved the Bach equation up to $\mathcal{O}(r^{10})$.
One can analogously, using the asymptotic expansion, consider simpler forms of the metric ansatz depending on the $t,x$ or $y$ coordinates or their combinations. Solving the equations for the coefficients in each order then, as in this case, consists of solving partial differential equations, which can result in additional free coefficients. These can lead to a functional dependence in the asymptotic form of the metric or, depending on the metric, to a determination of the coefficients from other components of the Bach equation.
\end{enumerate}
We have seen two possible approaches for solving the asymptotical Bach equation. One of them can give more freedom in the choice of the higher $\gamma_{ij}^{(n)}$ (n=1,2,3,4) matrices, solving the Bach equation in fourth order, while the other can give solution for higher orders of Bach equation for $\gamma_{ij}^{(n)}$ (n $<$ computationally allowed) of an analogous form.
The third option is to solve the differential equations numerically. In this case one can, by choosing the initial form of the metric with the desired functional dependence on the boundary coordinates and suitable boundary conditions, inspect the forms of the curvatures and of the Bach equation, which may suggest a convenient initial ansatz for the metric.
\section{Appendix: One Loop Partition Function}
\subsection{One Loop Partition Function in Six Dimensions}
Trivial anomalies are generated by the local functionals
\begin{align}
\mathcal{M}_i&=\int d^6x\sqrt{g}\sigma(x)M_i\\
\mathcal{K}_i&=\int d^6x\sqrt{g}K_i
\end{align}
here, in the notation of \cite{Bastianelli:2000rs}
\begin{align}
\mathcal{M}_5&=\delta_{\sigma}\bigg( \frac{1}{30}\mathcal{K}_1-\frac{1}{4}\mathcal{K}_2+\mathcal{K}_{6}\bigg), \mathcal{M}_6=\delta_{\sigma}\bigg( \frac{1}{100}\mathcal{K}_1-\frac{1}{20}\mathcal{K}_2\bigg), \nonumber\\
\mathcal{M}_7&=\delta_{\sigma}\bigg(\frac{37}{6000}\mathcal{K}_1-\frac{7}{150}\mathcal{K}_2+\frac{1}{75}\mathcal{K}_3-\frac{1}{10}\mathcal{K}_5-\frac{1}{15}\mathcal{K}_6\bigg), \mathcal{M}_8=\delta_{\sigma}\bigg(\frac{1}{150}\mathcal{K}_1-\frac{1}{20}\mathcal{K}_3\bigg) \\
\mathcal{M}_9&=\delta_{\sigma}\bigg(-\frac{1}{30}\mathcal{K}_1\bigg),\mathcal{M}_{10}=\delta_{\sigma}\bigg(\frac{1}{300}\mathcal{K}_1-\frac{1}{20}\mathcal{K}_9\bigg)\nonumber.
\end{align}
$\sigma$ is the infinitesimal Weyl transformation parameter, and the $K_i$ are defined in the main text in (\ref{ks}).
\subsection{Generalization to Higher Dimensions}
The above expressions for the partition function in four and six dimensions can be generalised to partition functions in an arbitrary number of dimensions. That can be achieved by straightforward computation of the CG partition function on the thermal AdS space
\begin{align}
Z_s(S^d)&=\prod_{k=0}^{s-1}\left(\det[-\nabla^2+k-(s-1)(s+d-2)]_{k\perp}\right)^{1/2} \nonumber \\ & \times \prod_{k'=-\frac{1}{2}(d-4)}^{s-1}\left(\det[-\nabla^2+s-(k'-1)(k'+d-2)]_{s\perp}\right)^{-1/2},
\end{align}
\footnote{The partition function is evaluated on the conformally flat Einstein background that is $(A)dS_d$ or $S^d$ \cite{Tseytlin:2013fca}.} or using the procedure introduced in \cite{Beccaria:2016tqy}.
That is the procedure that we describe here.
We introduce the partition function of higher spin EG that originates from the action of massless higher spins \cite{Gupta:2012he}. The partition function of EG in an arbitrary number of dimensions for arbitrary spin on Euclidean AdS, considered in \cite{Gupta:2012he}, is
\begin{equation}
Z_{s}=\frac{\left[\det \left(-\nabla^2-(s-1)(3-d-s)\right)_{(s-1)}\right]^{1/2}}{\left[\det\left(-\nabla^2+s^2+(d-6)s-2(d-3)\right)_{(s)}\right]^{1/2}},\label{zeg}
\end{equation}
for AdS radius $\ell\rightarrow 1$.
From a comparison of $\partial AdS_{d+1}=S^{1}\times S^d$ and $AdS_d$ one can infer the relation between the partition functions on the $AdS_{d+1}$ and $AdS_{d}$ backgrounds. Assuming that the kinetic operator of the conformal field factorizes, the action can be written as a sum of second derivative terms
\begin{equation}
\log Z(AdS_d)= -\frac{1}{2} \sum_{i=1}^{N}n_i \log\det\hat{\Delta}_{s_i\perp}(M^2_i) \label{defz}
\end{equation}
where
\begin{equation}
\hat{\Delta}_{s\perp}(M^2)\equiv (-\nabla^2-M^2)_{s\perp}
\end{equation}
where $\hat{\Delta}_{s\perp}$ is defined on symmetric transverse traceless fields of rank $s$, the $n_i$ are multiplicities, positive for physical fields and negative for ghost fields, $i=1,...,N$ labels the tensor fields, and $M_i$ are their masses. Each of the operators in (\ref{defz}) has possible ground state energies $\Delta_d^{\pm}$, determined by the mass term as solutions of the equation \cite{Metsaev:1994ys}
\begin{align}
\Delta_d^{\pm}(\Delta^{\pm}_d-d+1)-2=-M^2 && \Delta_d^{-}=d-1-\Delta_d^+, && \Delta_d^{-}\leq\Delta_d^+,
\end{align}
and describe classical solutions of the equation for the STT field $\Phi_{s\perp}$
\begin{equation}
\hat{\Delta}_{s\perp}(M^2)\Phi_{s\perp}=0
\end{equation}
for two boundary conditions. Using
\begin{equation}
\log Z=\sum_{k=1}^{\infty}\frac{1}{k}\mathcal{Z}(q^k),\label{notat}
\end{equation}
the single particle partition function obtained from the thermal quotient of $AdS_{\overline{d}}$ from (\ref{defz}) is
\begin{equation}
\mathcal{Z}^{\pm}(AdS_d;q)=\sum_{i=1}^{N}n_i\chi_{s_i}^{(d)}\frac{q^{\Delta^{\pm}_{d,i}}}{(1-q)^{d-1}}
\end{equation}
which depends on the boundary conditions we choose.
The relation between the $\mathcal{Z}^+$ and $\mathcal{Z}^{-}$ partition function is obtained using $\Delta^{-}=d-1-\Delta^{+}$
\begin{equation}
\mathcal{Z}^-(AdS_{d};q)=(-1)^{d-1}\mathcal{Z}^+(AdS_{d};q^{-1}).
\end{equation}
That allows us to write the relation between partition functions of higher spin field in $AdS_{d+1}$ and the conformal field on the $AdS_d$
\begin{align}
LHS&=\mathcal{Z}^-_{HS}(AdS_{d+1};q)+(-1)^d\mathcal{Z}^-_{HS}(AdS_{d+1};q^{-1}) \\ RHS&=\mathcal{Z}^-_{CF}(AdS_d;q)+(-1)^d\mathcal{Z}^-_{CF}(AdS_d;q^{-1})\label{it}.
\end{align}
The relation can be rewritten in the general form of the partition functions $\mathcal{Z}_{HS}^+(AdS_{d+1})$ and $\mathcal{Z}_{CF}^+(AdS_{d})$
\begin{align}
\mathcal{Z}^+_{HS}(AdS_{d+1};q)=\frac{P(q)q^{\frac{d}{2}}}{(1-q)^d} && \mathcal{Z}_{CF}^{+}(AdS_d;q)=\frac{F(q)q^{\frac{d-1}{2}}}{(1-q)^{d-1}}.
\end{align}
We can write (\ref{zsd}) as
\begin{align}
\log Z_{s,d}&=\log (Z_{s,d(1)}+Z_{s,d(2)})\\
\log Z_{s,d(1)}(AdS_d)&=\sum_{k=1}^{\infty}\frac{(-1)}{k}\frac{q^{k(d-3+s)}}{(1-q^k)^{(d-1)}}\chi_{s-1,d}q^k \\ \log Z_{s,d(2)}(AdS_d)&=\sum_{k=1}^{\infty}\frac{(-1)}{k}\frac{q^{k(d-3+s)}}{(1-q^k)^{(d-1)}}(-\chi_{s,d})
\end{align}
Comparing it to (\ref{notat}), one may conclude that $\mathcal{Z}_{HS(1,2)}^+(AdS_{d+1};q)$ equals
\begin{align}
\mathcal{Z}_{HS,(1)}^+(AdS_{d+1};q)=\frac{ q^{(s+\frac{d}{2}-1)}\chi_{s-1,d+1}q^{\frac{d}{2}}}{(1-q)^d}\\ \mathcal{Z}_{HS,(2)}^+(AdS_{d+1};q)=\frac{ q^{(s+\frac{d}{2}-2)} \chi_{s,d+1}q^{\frac{d}{2}}}{(1-q)^d}
\end{align}
which means
\begin{equation}
P(q)=P(q)_{(1)}=q^{(s+\frac{d}{2}-1)}\chi_{s-1,d+1}.
\end{equation}
Using the relation (\ref{it}) one can obtain the relation between the $F(q)$ and $P(q)$ \cite{Beccaria:2016tqy}
\begin{equation}
F(q)+F(q^{-1})=\frac{\sqrt{q}}{1-q}\left[P(q^{-1})-P(q)\right]
\end{equation}
which for the term $Z_{s,d(1)}$ reads
\begin{equation}
F(q)_{(1)}+F(q^{-1})_{(1)}=\frac{\chi_{s-1,d+1}\sqrt{q}}{(1-q)}\left[q^{-(s+\frac{d}{2}-1)}-q^{s+\frac{d}{2}-1}\right].
\end{equation}
If we denote $s+\frac{d}{2}-1=n$ and use $a^n-b^n=(a-b)(a^{n-1}+ba^{n-2}+b^2a^{n-3}+...+b^{n-2}a+b^{n-1})$
we can write
\begin{align}
\frac{\sqrt{q}}{(1-q)}(q^{-n}-q^n)&=q^{\frac{2n-1}{2}}+q^{\frac{2n-3}{2}}+q^{\frac{2n-5}{2}}+...\nonumber \\ &+q^{-\left(\frac{2n-5}{2}\right)}+q^{-\left(\frac{2n-3}{2}\right)}+q^{-\left(\frac{2n-1}{2}\right)} \nonumber \\
&= \sum_{m=1}^n\left(q^{\frac{2n-(2m-1)}{2}}+q^{-\frac{2n-(2m-1)}{2}}\right)
\end{align}
which leads to
\begin{equation}
F(q)=\sum_{m=1}^{n}\chi_{s-1,d+1}\left(q^{\frac{2n-(2m-1)}{2}}\right)
\end{equation}
and
\begin{align}
\mathcal{Z}_{CF(1)}^+(AdS_d;q)&=\sum_{m=1}^{s+\frac{d}{2}-1}\chi_{s-1,d+1}\frac{q^{s-m+d-1}}{(1-q)^{d-1}}\nonumber \\ &=\chi_{s-1,d+1}\frac{q^{-1+\frac{d}{2}}(q-q^{\frac{d}{2}+s})}{(1-q)^{d}}.
\end{align}
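The telescoping summation identity used above can also be checked directly. The following Python/sympy sketch (our own illustrative check, not part of the original derivation) confirms the identity numerically for $n=1,\dots,5$ at a sample point $0<q<1$:

```python
import sympy as sp

q = sp.symbols('q', positive=True)

def lhs(n):
    # sqrt(q)/(1 - q) * (q^(-n) - q^n)
    return sp.sqrt(q) / (1 - q) * (q**(-n) - q**n)

def rhs(n):
    # sum_{m=1}^{n} ( q^{+(2n-(2m-1))/2} + q^{-(2n-(2m-1))/2} )
    return sum(q**sp.Rational(2*n - (2*m - 1), 2)
               + q**(-sp.Rational(2*n - (2*m - 1), 2))
               for m in range(1, n + 1))

# numerical check of the identity for n = 1..5 at a sample point 0 < q < 1
for n in range(1, 6):
    assert abs(sp.N((lhs(n) - rhs(n)).subs(q, sp.Rational(37, 100)))) < 1e-9
```

Since both sides are finite sums in half-integer powers of $q$, agreement at a generic point for each small $n$ is a quick consistency check of the expansion.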
The second term $\mathcal{Z}_{HS,(2)}^+(AdS_{d+1};q)$ gives for $P(q)$
\begin{equation}
P(q)=P(q)_{(2)}=q^{(s+\frac{d}{2}-2)}\chi_{s,d+1}
\end{equation}
and leads to \begin{equation}
F(q)_{(2)}+F(q^{-1})_{(2)}=\frac{\chi_{s,d+1}\sqrt{q}}{(1-q)}\left[q^{-(s+\frac{d}{2}-2)}-q^{s+\frac{d}{2}-2}\right].
\end{equation}
and
\begin{equation}
\mathcal{Z}_{CF(2)}^+(AdS_d;q)=\sum_{m=1}^{s+\frac{d}{2}-2}\chi_{s,d+1}\frac{q^{s+d-m-2}}{(1-q)^{d-1}}=\chi_{s,d+1}\frac{q^{-2+\frac{d}{2}}(q^2-q^{\frac{d}{2}+s})}{(1-q)^{d}}.
\end{equation}
This means that for the entire partition function we obtain
\begin{align}
\mathcal{Z}^+_{CF}(AdS_d;q)=\frac{1}{(1-q)^d}&\bigg[\chi_{s-1,d+1}q^{-1+\frac{d}{2}}(q-q^{\frac{d}{2}+s})\nonumber \\ &\, - \chi_{s,d+1}q^{-2+\frac{d}{2}}(q^2-q^{\frac{d}{2}+s})\bigg]. \label{zcfev}
\end{align}
Inserting the relation for characters leads to
\begin{align}
\mathcal{Z}^+_{CF}(AdS_d;q)&=\frac{\Gamma[d+s-3]}{\Gamma[d-1]\Gamma[s+1]}\frac{q^{-2+\frac{d}{2}}}{(1-q)^d}\bigg[s q\left(q-q^{\frac{d}{2}+s}\right)\nonumber \\&+\left(-q^2+q^{\frac{d}{2}+s}\right)(d+s-3)(d+2s-2)\bigg]
\end{align}
which is the partition function for CG in $d$ dimensions.
\subsection{Representative Cases}
Let us consider particular representative cases of the theories obtained for EG higher spin (HS) fields. From the partition function of EG on $AdS_7$
\begin{equation}
\mathcal{Z}^+_{EG}(AdS_7)=\frac{2q^6(-10+3q)}{(-1+q)^6}
\end{equation}
one obtains the partition function of CG on $AdS_6$
\begin{equation}
\mathcal{Z}^+_{CG}(AdS_6)=-\frac{2q^3(-7-7q-7q^2+3q^3)}{(-1+q)^5}.
\end{equation}
Similarly, the partition function of EG in 5 dimensions
\begin{equation}
\mathcal{Z}^+_{EG}(AdS_5)=\frac{q^4(-9+4q)}{(-1+q)^4}
\end{equation}
leads to the partition function of CG on $AdS_4$
\begin{equation}
\mathcal{Z}^+_{CG}(AdS_4)=-\frac{q^2(-5-5q+4q^2)}{(-1+q)^{3}}.
\end{equation}
\chapter*{Acknowledgements}
I would like to thank my supervisor Daniel Grumiller, who made possible my arrival and stay in Vienna and the start of my work on the research for this doctoral thesis, for his availability for discussions and his guidance in the scientific research, as well as for financial support. I would like to thank my parents for the financial support that allowed me to work on the PhD thesis. I thank R. McNees for explanations, help during the work and fruitful collaboration, and S. Stricker, who was always available for discussions and for answering the small questions a newly started PhD student encounters. I thank F. Preis for useful discussions and successful collaboration, and D. Vassilevich for guidance in the research, useful discussions, and reading and commenting on the drafts of my articles. I would also like to thank F. Bruenner for discussions about various physics topics; T. Zojer for useful Skype conversations about holographic renormalisation; J. Rosseel for discussions and for being a bridge between the knowledge of a professor and a new PhD student; H. Afshar for discussions about canonical analysis; F. Schoeller for discussions about intrinsic physical properties; S. Prohazka and J. Salzer for discussions related to the partition function; and M. Gary for discussions. I would like to thank A. Tseytlin for very useful discussions about the partition function, for the invited talk at Imperial College London and for hospitality; M. Beccaria for useful communications about the partition function; and E. Bergshoeff for visiting Vienna to be an external referee at my thesis defence.
Finally, I would like to thank my friends.
The financial support was provided by START project Y~435-N16 of the Austrian Science Fund (FWF) and the FWF project I~952-N16, and Forschungsstipendien 2015 of Technische Universit\"at Wien.
\thispagestyle{empty}
\clearpage
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\newcommand{\subsectiono}[1]{\subsection{#1}\setcounter{equation}{0}}
\renewcommand\O{{\mathcal{O}}}
\newcommand{\beta}[1]{ \begin{equation}\label{#1} }
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\bea}[1]{\begin{eqnarray}\label{#1} }
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\partial}{\partial}
\newcommand{\Delta}{\Delta}
\newcommand{\tilde{\Delta}}{\tilde{\Delta}}
\newcommand{\tilde{\xi}}{\tilde{\xi}}
\newcommand{\omega}{\omega}
\newcommand{\bar{\omega}}{\bar{\omega}}
\newcommand{\refb}[1]{(\ref{#1})}
\newcommand{\<}{\langle}
\renewcommand{\>}{\rangle}
\renewcommand{\(}{\left(}
\renewcommand{\)}{\right)}
\newcommand{\mbox{SL}(2,\mathbb{R})}{\mbox{SL}(2,\mathbb{R})}
\newcommand{\gamma_{ij}^{(1)}}{\gamma_{ij}^{(1)}}
\newcommand{\eq}[2]{\begin{equation} #1 \label{#2} \end{equation}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\alpha}{\alpha}
\newcommand{\gamma}{\gamma}
\newcommand{\delta}{\delta}
\newcommand{\omega}{\omega}
\newcommand{\kappa}{\kappa}
\newcommand{\lambda}{\lambda}
\newcommand{\sigma}{\sigma}
\newcommand{\Varepsilon}{\Varepsilon}
\newcommand{\Alpha}{\Alpha}
\newcommand{\Beta}{\Beta}
\newcommand{\Gamma}{\Gamma}
\newcommand{\Delta}{\Delta}
\newcommand{\Omega}{\Omega}
\newcommand{\Kappa}{\Kappa}
\newcommand{\Lambda}{\Lambda}
\newcommand{\Sigma}{\Sigma}
\newcommand{\Theta}{\Theta}
\DeclareMathOperator{\extdm}{d}
\newcommand{\extdm \!}{\extdm \!}
\newcommand{\ts}[1]{\textrm{\tiny #1}}
\newcommand{\ms}[1]{\textrm{\tiny $#1$}}
\newcommand{\ms{(0)}}{\ms{(0)}}
\newcommand{\ms{(1)}}{\ms{(1)}}
\newcommand{\ms{(2)}}{\ms{(2)}}
\newcommand{\ms{(3)}}{\ms{(3)}}
\newcommand{\ms{(n)}}{\ms{(n)}}
\newcommand{\mu}{\mu}
\newcommand{\nu}{\nu}
\newcommand{\xi^{(0)}}{\xi^{(0)}}
\newcommand{\xi^{(1)}}{\xi^{(1)}}
\newcommand{\xi^{(2)}}{\xi^{(2)}}
\newcommand{\xi^{(3)}}{\xi^{(3)}}
\newcommand{\xi^{(4)}}{\xi^{(4)}}
\newcommand{\xi^{(5)}}{\xi^{(5)}}
\newcommand{\xi^{(6)}}{\xi^{(6)}}
\newcommand{\xi^{(7)}}{\xi^{(7)}}
\newcommand{\xi^{(8)}}{\xi^{(8)}}
\newcommand{\xi^{(9)}}{\xi^{(9)}}
\newcommand{\textbf{A:}}{\textbf{A:}}
\newcommand{\textbf{E/N:}}{\textbf{E/N:}}
\newcommand{\textbf{R:}}{\textbf{R:}}
\def{\mathchar '26\mkern -10mu\delta}{{\mathchar '26\mkern -10mu\delta}}
\newcounter{rowcount}
\setcounter{rowcount}{0}
\DeclareMathOperator{\arcsec}{arcsec}
\DeclareMathOperator{\arccot}{arccot}
\DeclareMathOperator{\arccsc}{arccsc}
\usepackage{etoolbox}
\makeatletter
\patchcmd{\chapter}{\if@openright\cleardoublepage\else\clearpage\fi}{}{}{}
\makeatother
\label{intro}
Over the past years, more and more data have been collected at ever higher frequencies, so that developing efficient pattern-detection methods and data-mining techniques has become crucial for identifying a few highly informative features. In this context, one of the most relevant examples is given by high-dimensional (multiple) time series that originate from the constituent units of large systems characterized by inner interactions.
The cooperative behaviour within a complex system involving relationships among its constituent units can be effectively described by networks, where the interactions among the constituents (or nodes of the network) are represented by links. The topology of the network, which coincides with the topology of such interconnections or links, is in itself complex \cite{Newman2003, Boccaletti2006}. These networks show a certain organization at a mesoscopic level, which is intermediate between the microscopic level that involves the single constituent units and the macroscopic level that involves the entire system as a whole. This mesoscopic level reflects the modular organization of the system, characterized by the existence of interconnected groups where some units are heavily linked with each other while, at the same time, being less correlated with the rest of the network. These interconnected groups are generally referred to as communities \cite{Girvan2002, Fortunato2010}. Detecting such communities represents an important step in the dynamical characterization of a network, because it could reveal special relationships between the nodes that may not be easily detectable by direct empirical tests \cite{Lancichinetti2008}; this helps to better understand the characteristics of the dynamic processes that take place in a network.
The use of complex networks to understand the interactions characterizing a climatic system has been growing in the last years \cite{Tsonis2006, Tsonis2008, Donges2009a, Donges2009b, Gozolchiani2008}, and various approaches have been used to construct the related networks \cite{Tsonis2004, Yamasaki2008, Tsonis2008, Donges2009b, Steinhaeuser2009}. Furthermore, complex networks offer a new mathematical modelling approach for non-linear dynamics \cite{Donner2019} and for climatological data analysis \cite{Donner2}.
Among the meteo-climatic parameters, wind is an important factor that influences the evolution of a climatic system; several studies have been devoted to a better understanding of its time dynamics by using several methods, such as extreme value theory and copulas \cite{DAMICO2015}, machine learning algorithms \cite{Treiber2016}, visibility graph analysis \cite{PIERINI2012}, Markov chain models \cite{KANTZ2004}, fractal \cite{DEOLIVEIRASANTOS2012, Fortuna2014} and multifractal analysis \cite{Telesca2016, Garcia2013}.
The topological properties of wind systems have become a focus of investigation only in very recent years. Laib et al. \cite{Laib2018a} studied the long-range fluctuations in the connectivity density time series of a correlation-based network of high-dimensional wind speed time series recorded by a monitoring system in Switzerland. They found that the daily time series of the connectivity density of the wind speed network is characterized by a clear annual periodicity that modulates the connectivity density more intensively for low than for high absolute values of the correlation threshold.
Laib et al. \cite{Laib2018b} analysed the multifractality of connectivity density time series of the wind network and found that the larger multifractality at higher absolute values of thresholds could be probably induced by the higher spatial sparseness of the linked nodes at these thresholds.
Considering the topographic conditions of Switzerland and its wide-spread wind monitoring system, it is challenging to investigate the topology of the wind network in terms of the existence of network communities, and to check whether these communities match the topography of the territory.
To this aim, the edges of the network (the links between any two stations of the wind monitoring system, which are the nodes of the network) are weighted by the mutual information between the wind time series recorded at the two stations. Mutual information, which quantifies the degree of non-linear correlation between two time series, has already been used to construct seismic networks \cite{Jimenez2013} and networks of global foreign exchange markets \cite{Cao2017}, and for the prediction of stock market movements \cite{Kim2017}.
\section{Data and network construction}
\label{sec:1}
The data used in this work consist of daily mean wind speeds, collected at 119 measuring stations from 2012 to 2016 by SwissMetNet, which is one of the weather monitoring systems in Switzerland, covering almost homogeneously the whole Swiss territory (Fig. \ref{fig1}). Fig. \ref{fig2} shows, as an example, some of the measured wind speed series.
To construct the network, the mutual information was used as a metric to weight the edges between the nodes:
\begin{equation}
\label{MIeq}
I(X,Y)=\sum_{x \in X} \sum_{y \in Y} p(x,y)\log\left( \frac{p(x,y)}{p(x)p(y)}\right)
\end{equation}
where $X$ and $Y$ are two different random variables (wind time series), $p(x)$ and $p(y)$ are their respective marginal probability distributions, and $p(x,y)$ is their joint probability distribution.
Mutual information is a measure of the amount of information that one random variable contains about another random variable \cite{MIbook}.
It can be shown that Eq. \ref{MIeq} can be written as follows\cite{MIbook}
\begin{equation}
I(X,Y) = D(p(x,y) \parallel p(x)p(y))
\end{equation}
where $D$ is the Kullback-Leibler divergence, which is a dissimilarity measure between two probability distributions.
Thus, the mutual information can be seen as the departure of the joint probability $p(x,y)$ from the product of the two marginal probabilities $p(x)$ and $p(y)$. We can easily show that $I(X,Y) \geqslant 0$ with equality if and only if $X$ and $Y$ are independent \cite{MIbook}. Consequently, the higher the mutual information, the stronger the dependence between $X$ and $Y$.
Since the mutual information defined in Eq. \ref{MIeq} is symmetric, the network is undirected. Furthermore, the network is fully connected: every pair of nodes is linked by an edge, and the edges differ only by their weights, given by the mutual information.
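For readers who wish to experiment, a simple plug-in estimate of Eq. \ref{MIeq} for continuous series can be obtained by discretizing the data with a two-dimensional histogram. The following Python sketch is our own illustration on synthetic data; the number of bins is an arbitrary choice, and real applications require care with binning and estimator bias:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X,Y) (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    mask = pxy > 0                             # avoid log(0) on empty cells
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
dependent = x + 0.3 * rng.normal(size=5000)    # strongly related to x
independent = rng.normal(size=5000)            # unrelated to x

assert mutual_information(x, dependent) > mutual_information(x, independent)
```

As expected from the discussion above, the estimate is non-negative (it is a Kullback-Leibler divergence of the empirical joint distribution from the product of the empirical marginals) and is much larger for the dependent pair than for the independent one.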
\section{Community detection by the MultiLevel method}
Proposed by Blondel et al. \cite{Blondel2008}, the MultiLevel algorithm (ML) is a well-established community detection method. Yang et al. \cite{Yang2016} compared several well-known community detection algorithms (Edge-betweenness \cite{Edbet}, Fastgreedy \cite{fastg}, Infomap \cite{infom}, Walktrap \cite{walkt} and Spinglass \cite{SPG}) and found that ML outperforms all the other algorithms on a set of benchmarks.
The ML algorithm aims to optimise the modularity \cite{Modu}, which measures the density of links inside a community as compared with the density of links between communities. The modularity is defined as follows:
\begin{footnotesize}
\begin{equation}
\label{Modu}
Q=\frac{1}{2m}\sum_{ij}\left[A_{ij}-\frac{k_ik_j}{2m}\right]\delta(c_i,c_j)
\end{equation}
\end{footnotesize}
where $Q$ ranges between $-1$ and $1$ and, following \cite{Blondel2008}:
\begin{itemize}
\item $A_{ij}$ is the weight between nodes $i$ and $j$;
\item $2m$ is the sum of all the weights in the graph;
\item $k_i$ and $k_j$ are the sum of weights connected to nodes $i$ and $j$ respectively;
\item $c_i$ and $c_j$ are the communities (classes) of nodes $i$ and $j$;
\item $\delta$ is the Kronecker delta function of the variables $c_i$ and $c_j$.
\end{itemize}
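For concreteness, Eq. \ref{Modu} can be evaluated directly from the weighted adjacency matrix. The toy example below is our own Python sketch (the graph and weights are invented for illustration); it shows that a partition matching two densely connected groups scores a higher modularity than an arbitrary split:

```python
import numpy as np

def modularity(A, labels):
    """Weighted Newman modularity Q for a community assignment."""
    k = A.sum(axis=1)            # weighted degrees k_i
    two_m = A.sum()              # 2m: each edge weight counted twice
    same = labels[:, None] == labels[None, :]       # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# two triangles joined by one weak edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1

good = np.array([0, 0, 0, 1, 1, 1])   # matches the two triangles
bad = np.array([0, 1, 0, 1, 0, 1])    # arbitrary split

assert modularity(A, good) > modularity(A, bad)
```

The "good" partition yields a clearly positive $Q$, while the arbitrary split yields a much lower (here negative) value, which is exactly the quantity the ML algorithm tries to maximise.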
The ML algorithm consists of two iterative steps. First, each node is assigned to its own community, giving an initial partition. Then, each node $i$ is removed from its community $c_i$ and placed in another community $c_j$ if this move maximises the modularity (Eq. \ref{Modu}); otherwise, node $i$ remains in its original community. This is repeated until no further gain in modularity is possible. The gain in modularity obtained by moving a node $i$ into a community $C$ is computed as follows \cite{Blondel2008}:
\begin{footnotesize}
\begin{equation}
\label{Gmeq}
\Delta Q= \left[ \frac{\sum_{in}+2k_{i,in}}{2m}- \left( \frac{\sum_{tot}+k_i}{2m}\right)^2\right] - \left[ \frac{\sum_{in}}{2m}-\left( \frac{\sum_{tot}}{2m}\right)^2 - \left( \frac{k_i}{2m}\right)^2 \right]
\end{equation}
\end{footnotesize}
where $\sum_{in}$ is the sum of the weights of the links inside $C$, $\sum_{tot}$ is the sum of the weights of the links incident to nodes in community $C$, $k_{i,in}$ is the sum of the weights of the links connecting node $i$ to the other nodes of community $C$, and $m$ is the sum of all weights in the network.
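The gain formula of Eq. \ref{Gmeq} can be cross-checked against a direct modularity computation: for a node $i$ that initially forms its own community, the gain should equal the difference of the modularities after and before the move. The Python sketch below is our own illustration (with $\sum_{in}$ counting each internal link in both directions, and a randomly generated weighted graph):

```python
import numpy as np

def modularity(A, labels):
    """Weighted Newman modularity Q."""
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

def gain(A, labels, i, target):
    """Gain of moving the isolated node i into community `target`.
    Sigma_in counts each internal link in both directions."""
    two_m = A.sum()
    in_C = labels == target
    sigma_in = A[np.ix_(in_C, in_C)].sum()
    sigma_tot = A[in_C].sum()
    k_i = A[i].sum()
    k_i_in = A[i, in_C].sum()
    return ((sigma_in + 2 * k_i_in) / two_m - ((sigma_tot + k_i) / two_m) ** 2) \
         - (sigma_in / two_m - (sigma_tot / two_m) ** 2 - (k_i / two_m) ** 2)

rng = np.random.default_rng(2)
A = rng.random((7, 7)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)

labels = np.array([0, 0, 0, 1, 1, 1, 2])   # node 6 is its own community
moved = labels.copy(); moved[6] = 1        # move node 6 into community 1
assert np.isclose(gain(A, labels, 6, 1),
                  modularity(A, moved) - modularity(A, labels))
```

The agreement holds for any target community, which is what makes the local gain computation of the ML algorithm both cheap and exact for this move type.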
In the second step, every community is considered as a node and a new network is built. The weights between these new nodes are given by the sum of the link weights between the corresponding communities of the old network, as proposed by Arenas et al. \cite{reduce} for reducing the size of a complex network while preserving the modularity. The first step is then applied again to the new network, and the two steps are iterated until the modularity stops increasing.
\section{Results and discussion}
Fig. \ref{fig3} shows the mutual information among all the nodes.
Applying the community detection based on the MultiLevel method, three different communities are identified, as shown in Fig. \ref{fig4}. Mapping the communities onto the territory of Switzerland (Fig. \ref{fig5}), two classes appear spatially mixed (stations indicated by green and black circles).
To quantify this spatial mixing effect, the well-known silhouette width was used \cite{Silhouettes1987}.
This is defined as
\begin{equation}
\label{Sil}
s(i)=\frac{b(i)-a(i)}{max\{a(i),b(i)\}}
\end{equation}
where $a(i)$ is the average dissimilarity between node (object) $i$ and the other nodes of the same community, $b(i)$ is the minimum of the average dissimilarities between node $i$ and the nodes of each of the other communities, and the dissimilarity is measured by the Euclidean distance. From Eq. \ref{Sil}, we can see that the silhouette $s(i)$ ranges between $-1$ and $1$.
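A direct implementation of Eq. \ref{Sil} from a precomputed dissimilarity matrix can be sketched as follows (our own Python illustration on synthetic, well-separated point clouds, using the average dissimilarity as in \cite{Silhouettes1987}):

```python
import numpy as np

def silhouette_widths(D, labels):
    """Silhouette s(i) from a precomputed dissimilarity matrix D."""
    n = len(labels)
    s = np.zeros(n)
    for i in range(n):
        own = (labels == labels[i]); own[i] = False   # same community, excluding i
        a = D[i, own].mean() if own.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in set(labels) - {labels[i]})    # closest other community
        s[i] = (b - a) / max(a, b)
    return s

# two well-separated 2-D point clouds
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

s = silhouette_widths(D, labels)
assert s.mean() > 0.9     # well-separated clusters give silhouettes near 1
```

For overlapping or mixed groups the average silhouette drops towards zero, which is the behaviour exploited in the comparison of the original and residual networks below.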
Fig. \ref{fig6} shows the silhouette widths for each station of each community, obtained by applying the silhouette to the mutual information matrix and the detected communities. Fig. \ref{fig7} shows the silhouette widths obtained by applying it to the XY coordinates of the stations.
The average values are $0.19$ (mutual information matrix) and $0.09$ (XY coordinates). These low values indicate that the obtained communities are not well separated spatially.
In order to understand the origin of this spatial mixing between communities, we filtered out the trend and the yearly cycle from the wind series \cite{Laib2018c} by using the Seasonal Decomposition of Time Series by Loess (STL) \cite{Cleveland1990} (implemented with the stl function of the ``stats'' R library \cite{lanR}). Then, we applied the MultiLevel community detection method to the residual wind series. Fig. \ref{fig8} shows the residuals of the same time series shown in Fig. \ref{fig2}, and Fig. \ref{fig9} presents the two detected communities.
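To illustrate the effect of removing trend and seasonality before the network analysis, the sketch below implements a simple classical additive decomposition in Python (a moving-average trend plus periodic means). This is our own simplified stand-in for the Loess-based STL used in the paper, applied to synthetic data:

```python
import numpy as np

def decompose(series, period):
    """Classical additive decomposition: moving-average trend plus
    periodic-mean seasonal component (a simple stand-in for STL)."""
    n = len(series)
    # centred moving average as the trend estimate
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode='same')
    detrended = series - trend
    # seasonal component: mean of each phase of the cycle
    seasonal = np.array([detrended[p::period].mean() for p in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    seasonal -= seasonal.mean()
    resid = series - trend - seasonal
    return trend, seasonal, resid

rng = np.random.default_rng(4)
n, period = 10 * 50, 50
t = np.arange(n)
series = 0.01 * t + np.sin(2 * np.pi * t / period) + 0.2 * rng.normal(size=n)

trend, seasonal, resid = decompose(series, period)
# the residual should carry much less variance than the raw series
assert resid.var() < series.var()
```

It is this residual component, stripped of trend and cycle, that is fed to the community detection in the analysis that follows.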
Mapping the communities on the Swiss territory, the two communities do not show significant spatial mixing (Fig. \ref{fig10}).
Furthermore, the silhouette width for each station of each community is shown in Figs. \ref{fig11} and \ref{fig12}; the mean silhouette values are $0.35$ (mutual information matrix) and $0.24$ (XY coordinates), respectively. These values are larger than those obtained on the original data before applying STL, indicating that there is no significant spatial mixing between the two communities. This result was also found to be significant when compared with the silhouette widths calculated for $1,000$ random spatial distributions of the stations. Figs. \ref{fig13} and \ref{fig14} show the histograms of the silhouette widths for the randomised classes.
\section{Conclusions}
\begin{enumerate}
\item The wind network, constructed by representing the interactions between the nodes using the mutual information, highlights the (non-linear) correlations among the wind series.
\item The STL decomposition makes it possible to extract the residuals of the wind speed, which are not influenced by the trends and annual weather-induced forcings, but only by local meteo-climatic features depending on the geo-morphological and topographic characteristics of each measuring station.
\item The MultiLevel method for community detection in the mutual information-based network of wind series shows different topological structures of the monitoring system, before and after the removal of the trend and seasonal components. The network constructed on the original data is characterized by three different communities, while that constructed on the residual data (deprived of the trend and seasonal component) is characterized only by two communities.
\item The communities of the network built on the original data are quite mixed spatially. The communities of the network built on the residual data are, instead, spatially well separated, with no significant apparent mixing between the stations belonging to the two communities.
\item The silhouette width, used to quantify the spatial mixing between the detected communities, shows an average value for the communities detected in the network based on the original data much lower than that found for the communities detected in the network based on the residuals. Furthermore, the latter is significant against the silhouette widths calculated after shuffling the stations of the two communities.
\item The two communities detected after removing the trend and seasonal components match very well the climatic zones of Switzerland, the Alps and the Jura-Plateau. This suggests the potential of the complex network method for disclosing the inner interactions among wind speed series measured in different climatic regions, mainly due to the local topographic factors.
\end{enumerate}
\section{Acknowledgements}
F. Guignard thanks the support of the National Research Programme 75
"Big Data" (PNR75) of the Swiss National Science Foundation (SNSF).
L. Telesca thanks the support of the "Scientific Exchanges" project n$^\circ$ 180296 funded by the SNSF.
M. Laib thanks the support of "Soci\'et\'e Acad\'emique Vaudoise" (SAV) and the Swiss Government Excellence Scholarships.
The authors thank MeteoSwiss for providing the data.
\section{I. More details about the processing procedure of the specific heat data}
In the inset of Fig. 2 in the main text, we fit the low-temperature specific heat data following the procedure in Ref.~\cite{1han_correlated_2016}:
\begin{small}
\begin{equation}
C=Rx\int \frac{d \omega}{\pi} \frac{\omega}{T^{2} \sinh ^{2}\left(\frac{\omega}{2 T}\right)}\left[\frac{\omega}{2 T} \operatorname{coth}\left(\frac{\omega}{2 T}\right)-1\right] \tan ^{-1}\left(\frac{\omega}{\Gamma}\right)\nonumber
\end{equation}
\end{small}
where $R$ is the gas constant, and the integral is taken between 0.1 and 0.7 meV. The fitting parameters are $\Gamma$, the relaxation rate of the Lorentzian fit to the neutron data between 0.1 and 0.7 meV in Ref.~\cite{1nilsen_low-energy_2013}, and $x$, the interlayer impurity concentration. The fit yields $\Gamma$ = 0.1 meV and $x$ = 6\%, both smaller than the respective values obtained in Ref.~\cite{1han_correlated_2016} using the specific heat data on powder samples (from Ref.~\cite{1Helton_spin_2007}).
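The fitting function above can be evaluated by straightforward numerical integration. The following Python sketch is our own illustration (not the code used for the actual fit); it uses the quoted best-fit values and treats energies and temperatures in meV with $k_B=1$, so that only the overall scale carries the units of $R$:

```python
import numpy as np

R = 8.314        # gas constant, J / (mol K); sets the overall scale only
x = 0.06         # fitted interlayer impurity concentration
Gamma = 0.1      # fitted relaxation rate, meV

def heat_capacity(T, n=2000):
    """Evaluate the integral between 0.1 and 0.7 meV by a simple
    Riemann sum (temperatures and energies in meV, k_B = 1)."""
    w = np.linspace(0.1, 0.7, n)
    integrand = (w / (np.pi * T**2 * np.sinh(w / (2 * T))**2)
                 * (w / (2 * T) / np.tanh(w / (2 * T)) - 1)
                 * np.arctan(w / Gamma))
    dw = w[1] - w[0]
    return R * x * np.sum(integrand) * dw

temps = np.linspace(0.1, 1.0, 10)
C = np.array([heat_capacity(T) for T in temps])
assert np.all(C > 0)   # the integrand is positive for T > 0
```

Such a sketch makes it easy to see how sensitively the computed $C(T)$ depends on the two fit parameters $x$ and $\Gamma$.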
We failed to fit our low-temperature specific heat data to Eq. 1 in Ref.~\cite{1Kelly_electron_2016}.
A scaling function was derived for QSL candidates with quenched disorder in Ref.~\cite{1kimchi_scaling_2018}, where the specific heat data on powder samples (from Ref.~\cite{1Helton_spin_2007}) were utilized to showcase the application of the scaling function. We plot our data on single crystals in Fig. S1. Although it was argued that the powder-sample data exhibit a data collapse for an intermediate range of $T/\mu_0 H$, the scaling function does not describe our data well.
\section{II. More details about the processing procedure of the thermal conductivity data}
In the main text, the thermal conductivity data were fitted to $\kappa/T$ = $a$ + $bT^{\alpha -1}$ with $\alpha$ = 2.5. The two terms represent the contributions from itinerant gapless spin excitations and from phonons, respectively. In general, gapless spin excitations do not necessarily exhibit a thermal conductivity linear in temperature, so their contribution is difficult to separate from that of phonons. This is the case for, e.g., gapless magnons~\cite{1Li_Ballistic_2005}. However, as discussed in the main text, in the context of ZnCu$_3$(OH)$_6$Cl$_2$ and Cu$_3$Zn(OH)$_6$FBr, the possible spin excitations exhibit a thermal conductivity linear or sub-linear in temperature, thereby giving a constant or diverging residual linear term $\kappa_0/T \equiv a$. For this reason, this formula has been routinely employed in heat transport studies on QSL candidates~\cite{1yamashita_highly_2010,1tokiwa_possible_2016,1Xu_absence_2016,1Yu_heat_2017,1Yu_ultra_2018,1Ni_ultralow_2018,1Ni_absence_2019,1Hope_thermal_2019,1li_possible_2020,1Pan_specific_2021,1ni_giant_2021}.
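Since $\kappa/T = a + bT^{\alpha-1}$ with fixed $\alpha=2.5$ is linear in $T^{\alpha-1}$, the fit reduces to ordinary linear least squares. The sketch below is our own synthetic example (the parameter values are invented and are not the measured ones); it recovers $a$ and $b$ from noisy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "kappa/T" data: residual linear term a plus phonon term b*T^1.5
a_true, b_true, alpha = 0.05, 1.2, 2.5
T = np.linspace(0.1, 0.5, 40)                  # temperature grid, K
kappa_over_T = a_true + b_true * T**(alpha - 1)
kappa_over_T += rng.normal(0, 1e-3, T.size)    # small measurement noise

# kappa/T is linear in T^(alpha-1), so an ordinary linear fit suffices
b_fit, a_fit = np.polyfit(T**(alpha - 1), kappa_over_T, 1)

assert abs(a_fit - a_true) < 0.01
assert abs(b_fit - b_true) < 0.05
```

The intercept of this linear fit plays the role of the residual linear term $\kappa_0/T$, which is the quantity whose vanishing or non-vanishing is at issue in the main text.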
\renewcommand{\thefigure}{S1}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{Hc-scaling.pdf}
\end{center}
\centering
\caption{
Scaling plot of the specific heat data, constructed following Ref.~\cite{1kimchi_scaling_2018}.}
\end{figure}
As demonstrated in the main text, we observed no contribution from any spin excitations; in other words, the only heat carriers are the phonons. At low temperatures, phonons are generally scattered by boundaries such as the sample surfaces. Because of the specular reflections of phonons at the sample surfaces, the power $\alpha$ is typically between 2 and 3~\cite{1Sutherland_thermal_2003,1Li_low_2008}. In addition, the phonons can also couple to the magnetic degrees of freedom. In the context of ZnCu$_3$(OH)$_6$Cl$_2$ and Cu$_3$Zn(OH)$_6$FBr, the phonons may be scattered additionally by (i) the gapless spin excitations from the kagome spins (if they exist), (ii) the spin fluctuations from the interlayer impurity subsystem, and (iii) the impurity sites in the kagome plane (if they exist). We have demonstrated in the main text that gapless spin excitations, if they exist, must be localized, so that they do not conduct heat; nevertheless, such localized excitations are still capable of scattering phonons. However, as shown in the main text, the gapless spin excitations are sensitive to magnetic field. This means that the phonon thermal conductivity should also be field dependent if phonons were strongly scattered by gapless spin excitations. This was not observed, and we can exclude possibility (i) accordingly. Similar reasoning applies to (ii): in our temperature range, the interlayer impurity moments can be saturated by a magnetic field of $\sim$10 T~\cite{1han_thermodynamic_2014}, so the spin fluctuations associated with these moments are also field dependent. The field-independent phonon thermal conductivity thus excludes possibility (ii). As to possibility (iii), the impurities in the kagome plane, if they exist, are most likely quasifree~\cite{1Zorko_symmetry_2017}, meaning that the scattering by these impurities is presumably temperature independent.
Therefore, we conclude that the low-temperature thermal conductivity of ZnCu$_3$(OH)$_6$Cl$_2$ and Cu$_3$Zn(OH)$_6$FBr is dominated by phonons, and the phonons are scattered predominantly by sample surfaces.
\section{Introduction}
\blfootnote{Project webpage will be released.}
Creating presentations is often a work of art. It requires skills to abstract complex concepts and convey them in a concise and visually pleasing manner. Consider the steps involved in creating presentation slides based on a white paper or manuscript: One needs to 1) establish a storyline that will connect with the audience, 2) identify essential sections and components that support the main message, 3) delineate the structure of that content, e.g., the ordering/length of the sections, 4) summarize the content in a concise form, e.g., punchy bullet points, 5) gather figures that help communicate the message accurately and engagingly, and 6) arrange these elements (e.g., text, figures, and graphs) in a logical and aesthetically pleasing manner on each slide.
\begin{figure}[tp]
\centering
\includegraphics[width=.9\linewidth]{figs/fig1-v2.pdf}
\caption{We introduce a novel task of generating a slide deck from a document. This requires solving several challenges in the vision \& language domain, e.g., visual-semantic embedding and multimodal summarization. In addition, slides exhibit unique properties such as concise text (bullet points) and stylized layout. We propose an approach to solving \texttt{DOC2PPT}\xspace, tackling these challenges.}
\label{fig:concept}
\end{figure}
Can machines emulate this laborious process by \textit{learning} from the plethora of example manuscripts and slide decks created by human experts? We argue that this is an area where AI can enhance humans' productivity, e.g., by drafting slides for humans to build upon. This would open up new opportunities for human-AI collaboration, e.g., one could quickly generate a slide deck by revising the draft, or simply generate slide decks for many papers and skim through them to digest a lot of material quickly.
However, building such a system poses unique challenges in vision and language understanding. Both the input (a manuscript) and output (a slide deck) contain tightly coupled visual and textual elements; thus, it requires multimodal reasoning. Further, there are significant differences in the presentation: compared to manuscripts, slides tend to be more \textit{concise} (e.g., containing bullet points rather than full sentences), \textit{structured} (e.g., each slide has a fixed screen real estate and delivers one or few messages), and \textit{visual-centric} (e.g., figures are first-class citizens, the visual layout plays an important role, etc.).
Existing literature only partially addresses some of the challenges above. Document summarization~\cite{cheng16ext-summ,chopra16seq2seq-summ} aims to find a concise text summary of the input, but it does not deal with images/figures and lacks multimodal understanding. Cross-modal retrieval~\cite{frome13devise,kiros14text-image-emb} focuses on finding a multimodal embedding space but does not produce summarized outputs. Multimodal summarization~\cite{zhu18msmo} deals with both (summarizing documents with text and figures), but it lacks the ability to produce structured output (as in slides). Furthermore, none of the above addresses the challenge of finding an optimal visual layout of each slide. While assessing visual aesthetics has been investigated~\cite{marchesotti11assessing}, existing work focuses on photographic metrics for images that would not translate to slides. These aspects make ours a unique task in the vision-and-language literature.
In this paper, we introduce \texttt{DOC2PPT}\xspace, a novel task of creating presentation slides from documents. As this is a new task with no existing benchmark, we release code and a new dataset of 5,873 paired scientific documents and associated presentation slide decks (for a total of about 70K pages and 100K slides, respectively). We present a series of automatic data processing steps to extract useful learning signals from documents and slides. We also introduce new quantitative metrics designed to measure the quality of the generated slides.
To tackle this task, we present a hierarchical recurrent sequence-to-sequence architecture that ``reads'' the input document and ``summarizes'' it into a \textit{structured} slide deck. We exploit the inherent structure within documents and slides by performing inference at the section-level (for documents) and at the slide-level (for slides). To make our model end-to-end trainable, we explicitly encode section/slide embeddings and use them to learn a policy that determines \textit{when to proceed} to the next section/slide. Further, we learn the policy in a hierarchical manner so that the network decides which actions to take by considering the structural context, e.g., a decision to create a new slide will depend on the section the model is currently summarizing and the previous slides that it has generated thus far.
To account for the concise nature of text in slides (e.g., bullet points), we incorporate a paraphrasing module that converts document-style full sentences to slide-style phrases/clauses. We show that this module drastically improves the quality of the generated textual content for the slides. In addition, we introduce a text-image matching objective that encourages related text-image pairs to appear on the same slide. We demonstrate that this objective substantially improves figure placement in slides. Lastly, we explore both template-based and learning-based solutions for slide layout design and compare them both quantitatively and qualitatively.
To summarize, our main contributions include: 1) Introducing a novel task, dataset, and evaluation metrics for automatic slide generation; 2) Proposing a hierarchical sequence-to-sequence approach that summarizes a document in a structured output format suitable for slide presentation; 3) Evaluating our approach both quantitatively, using our proposed metrics, and qualitatively based on human evaluation. The task of generating presentation slides presents numerous challenges. We hope that our work will enable researchers to advance the state-of-the-art in the vision-and-language domain.
\section{Related Work}
\paragraph{Vision and Language.}
Joint modeling of vision-and-language has been studied from different angles. Image/video captioning~\cite{vinyals16show,you16image,li16tgif,xu16msr}, visual question answering~\cite{antol15vqa,jang17tgif,anderson18bottom}, visually-grounded dialogue generation~\cite{das17visual} and visual navigation~\cite{wang19reinforced} are all tasks that involve learning relationships between visual imagery and text. Despite this large body of work, there remain many vision and language tasks that have not been addressed, e.g., multimodal document generation such as ours. As argued above, our task brings a new suite of challenges to vision-and-language understanding.
\vspace{-.2em}\paragraph{Document Summarization.}
This task has been tackled from two angles: abstractive~\cite{chopra16seq2seq-summ,see17ptr-summ,cho19selector-summ,liu19bert-summ,dong19unilm,zhang20pegasus,celikyilmaz18dca,rush15abs-summ,liu18gan-summ,paulus18rl-summ} and extractive~\cite{barrios15textrank-summ,narayan18refresh,liu19bert-ext-summ,chen18its,yin14sel-summ,cheng16ext-summ,yasunaga17graph-summ}. Our \texttt{DOC2PPT}\xspace task involves both abstractive and extractive summarization because it requires a model to extract the key content from a document \textit{and} paraphrase it into a concise form. A task closely related to ours is scientific document summarization \cite{elkiss08cite-summ,lloret13compendium,jaidka16clscisumm,parveen16cp-scisumm}, but to date that work has only focused on producing text summaries, while we focus on generating multimedia slides. Furthermore, existing datasets in this domain (such as TalkSumm~\cite{lev19talksumm} and ScisummNet~\cite{yasunaga19scisummnet}) are rather small with only about 1K documents each. We propose a large dataset of 5,873 pairs of high-quality scientific documents and slide decks.
\vspace{-.2em}\paragraph{Visual-Semantic Embedding.}
Our task involves generating slides with relevant text and figures. Learning text-image similarity has been studied in the visual-semantic embedding (VSE) literature~\cite{kiros14text-image-emb,karpathy14text-image,vendrov16text-image-emb,faghri18vse,huang18text-image,gu18text-image-gan,song19polysemous}. However, unlike the VSE setting where text instances are known in advance, ours requires simultaneously \textit{generating} text and retrieving the related images at the same time.
\vspace{-.2em}\paragraph{Multimodal Summarization.}
This task aims to summarize a document with text and figures into a summary that also contains text and figures. MultiModal Summarization with Multimodal Output (MSMO) \cite{zhu18msmo,zhu20msmo} applies an attention mechanism to generate a textual summary with related images for news articles. Similarly, our task involves summarizing multimodal documents, but it also involves putting the summary in a structured format such as slides.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figs/pipeline2.png}
\caption{An overview of our network architecture. It consists of four main modules (DR, PT, OP, PAR) that read a document and generate a slide deck in a hierarchically structured manner.}
\label{fig:overall}
\end{figure}
\section{Approach}
The goal of \texttt{DOC2PPT}\xspace is to generate a slide deck from a multimodal document with text and figures.\footnote{In this work, figures include images, graphs, charts, and tables.} As shown in Fig.~\ref{fig:concept}, the task involves ``reading'' a document (i.e., encoding sentences and images) and summarizing it, paraphrasing the summarized sentences into a concise format suitable for slide presentation, and placing the chosen text and figures to appropriate locations in the output slides.
\paragraph{Overview} Given the multi-objective nature of the task, we design our network with modularized components that are jointly trained in an end-to-end fashion. Fig.~\ref{fig:overall} shows an overview of our network that includes these modules:
\setlist{nolistsep}
\begin{itemize}[noitemsep]
\item A \textbf{Document Reader (DR)} encodes sentences and figures in a document.
\item A \textbf{Progress Tracker (PT)} maintains pointers to the input (i.e., which section is currently being processed) and the output (i.e., which slide is currently being generated) and determines when to proceed to the next section/slide based on the progress so far.
\item An \textbf{Object Placer (OP)} decides which object from the current section (sentence or figure) to put on the current slide. It also predicts the location and the size of each object to be placed on the slide.
\item A \textbf{Paraphraser (PAR)} takes the selected sentence and rewrites it in a concise form before putting it on a slide.
\end{itemize}
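To make the control flow concrete, the interplay of the four modules can be sketched as a nested generation loop. Everything below (function names, the policy interfaces, the toy object format) is our own illustration of the pipeline, not the actual learned implementation:

```python
def generate_deck(sections, slide_policy, obj_policy, select_obj, paraphrase):
    """Toy sketch of the DR -> PT -> OP -> PAR control flow.

    slide_policy(section, deck) -> True to open a new slide for this section
    obj_policy(section, slide)  -> True to place another object on this slide
    select_obj(section, slide)  -> next object (sentence or figure) to place
    paraphrase(obj)             -> concise slide-style text for a sentence
    """
    deck = []
    for section in sections:                 # PT^sec walks over sections
        while slide_policy(section, deck):   # PT^slide: [NEW_SLIDE] vs [END_SEC]
            slide = []
            while obj_policy(section, slide):     # PT^obj: [NEW_OBJ] vs [END_SLIDE]
                obj = select_obj(section, slide)  # OP: argmax over attention
                if obj["type"] == "sentence":
                    obj = dict(obj, text=paraphrase(obj))  # PAR rewrites text
                slide.append(obj)
            deck.append(slide)
    return deck
```

In the full model, `slide_policy` and `obj_policy` correspond to the learned binary action heads of $\texttt{PT}^{slide}$ and $\texttt{PT}^{obj}$, and `select_obj` to the attention arg-max of the OP.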
\begin{comment}
\begin{table}[t]
\centering
\footnotesize
\begin{tabular}[t]{ccc}
\toprule
Symbol & Definition & \#Number \\
\midrule
$D$ & input document & - \\
$S_i$ & the $i$th section in $D$ & $N^{in}_S$ \\
$T^{in}_{i, j}$ & the $j$th sentence in $S_i$ & $N^{in}_i$ \\
$I^{in}_i$ & the $i$th image in $D$ & $M^{in}_I$ \\
\hdashline
$O_k$ & the $k$th slide of output $O$ & $N^{out}_O$ \\
$T^{out}_{k, j}$ & the $j$th sentence in $O_k$ & $N^{out}_k$ \\
$L^{txt}_{k, j}$ & the layout of the $j$th sentence in $O_k$ & $N^{out}_k$ \\
$I^{out}_{k, j}$ & the $j$th image in $O_k$ & $M^{out}_k$ \\
$L^{img}_{k, j}$ & the layout of the $j$th image in $O_k$ & $M^{out}_k$ \\
\bottomrule
\\
\end{tabular}
\caption{Glossary of notation.}
\label{table:symbol}
\end{table}
\end{comment}
\paragraph{Notation} A document $\mathcal{D}$ is organized into sections $\mathcal{S}=\{S_i\}_{i \in N^{in}_S}$ and figures $\mathcal{F}=\{F^{in}_q\}_{q \in M^{in}_F}$. Each section $S_i$ contains sentences $\mathcal{T}^{in}_i=\{T^{in}_{i, k}\}_{k \in N^{in}_i}$, and each figure $F_q=\{I_q, C_q\}$ contains an image $I_q$ and a caption $C_q$. We do not assign figures to any particular section because multiple sections can reference the same figure. A slide deck $\mathcal{O}=\{O_j\}_{j \in N^{out}_O}$ contains a number of slides, each containing sentences $\mathcal{T}^{out}_j=\{T^{out}_{j,k}\}_{k \in N^{out}_j}$ and figures $\mathcal{F}^{out}_j=\{F^{out}_{j, k}\}_{k \in M^{out}_j}$. We encode the position and the size of each object on a slide in a bounding box format using an auxiliary layout variable $L_{j, k}$, which includes four real-valued numbers $\{l^x, l^y, l^w, l^h\}$ encoding the x-y offsets (top-left corner), the width and height of a bounding box.
\subsection{Model}
\paragraph{Document Reader (DR)}
We extract sentence and figure embeddings from an input document and project them to a shared embedding space so that the Object Placer treats both textual and visual elements as an object coming from a joint multimodal distribution.
For each section $S_i$, we use RoBERTa~\cite{liu19roberta} to encode each of the sentences $T^{in}_{i, k}$, and then use a bidirectional GRU \cite{chung14gru} to extract contextualized sentence embeddings $X^{in}_{i, k}$:
\begin{equation}
\begin{split}
B^{in}_{i, k} &= \text{RoBERTa}(T^{in}_{i, k}), \\
X^{in}_{i, k} &= \text{Bi-GRU}{(B^{in}_{i, 0}, ..., B^{in}_{i, N^{in}_i-1})}_{k}.
\end{split}
\end{equation}
Similarly, for each figure $F^{in}_{q}=\{I^{in}_q, C^{in}_q\}$, we apply ResNet-152~\cite{he16resnet} to extract the image embedding of $I^{in}_q$ and RoBERTa for the caption embedding of $C^{in}_{q}$. We then concatenate them as the figure embedding $V^{in}_{q}$:
\begin{equation}
\begin{split}
V^{in}_{q} &= [\text{ResNet}(F^{in}_{q}), \text{RoBERTa}(C^{in}_{q})].
\end{split}
\end{equation}
Next, we project $X^{in}_{i, k}$ and $V^{in}_{q}$ to a shared embedding space using a two-layer multilayer perceptron (MLP):
\begin{equation}
\label{eq:emb-e}
E^{txt}_{i, k} = \text{MLP}^{txt}(X^{in}_{i, k}), \:\:\:\: E^{fig}_{q} = \text{MLP}^{fig}(V^{in}_{q}).
\end{equation}
Finally, we combine $E^{txt}_{i}$ and $E^{fig}$ as the section embedding $E^{sec}_{i}$ of $S_i$:
\begin{equation}
\label{eq:emb-s}
E^{sec}_{i} = \{ E^{txt}_{i, k}, E^{fig}_{q} \}_{k \in N^{in}_{i}, q \in M^{in}_{F}}
\end{equation}
We include all figures $\mathcal{F}$ in \textit{each} section embedding $E^{sec}_{i}$ because each section can reference any of the figures.
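As a concrete (NumPy) sketch of Eqs.~(3)--(4): sentence and figure embeddings of different dimensionalities are mapped by separate two-layer MLPs into one shared space and stacked into the section embedding. All dimensions and the random weights here are illustrative, not those of the trained model:

```python
import numpy as np

def mlp2(x, W1, b1, W2, b2):
    """Two-layer MLP with ReLU, applied row-wise."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

rng = np.random.default_rng(0)
d_txt, d_fig, d_hid, d_emb = 768, 2048 + 768, 512, 256  # RoBERTa / ResNet++caption dims
W1t, W2t = rng.normal(size=(d_txt, d_hid)), rng.normal(size=(d_hid, d_emb))
W1f, W2f = rng.normal(size=(d_fig, d_hid)), rng.normal(size=(d_hid, d_emb))
b1, b2 = np.zeros(d_hid), np.zeros(d_emb)

X = rng.normal(size=(5, d_txt))    # 5 contextualized sentence embeddings
V = rng.normal(size=(2, d_fig))    # 2 figure embeddings (image ++ caption)
E_txt = mlp2(X, W1t, b1, W2t, b2)  # MLP^txt: project to the shared space
E_fig = mlp2(V, W1f, b1, W2f, b2)  # MLP^fig: project to the shared space
E_sec = np.vstack([E_txt, E_fig])  # section embedding: all objects jointly
```

After this projection, the OP can treat sentences and figures interchangeably as rows of `E_sec`.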
\paragraph{Progress Tracker (PT)}
We define the PT as a state machine operating in a hierarchically-structured space with sections ([SEC]), slides ([SLIDE]), and objects ([OBJ]). This is to reflect the structure of documents and slides, i.e., each section of a document can have multiple corresponding slides, and each slide can contain multiple objects.
The PT maintains pointers to the current section $i$ and the current slide $j$, and learns a policy to proceed to the next section/slide as it generates slides. For simplicity, we initialize $i=j=0$, i.e., the output slides will follow the natural order of sections in an input document.
We construct PT as a three-layer hierarchical RNN with $( \texttt{PT}^{sec},\texttt{PT}^{slide},\texttt{PT}^{obj} )$, where each RNN encodes the latent space for each level in a section-slide-object hierarchy. This is a natural choice to encode our prior knowledge about the hierarchical structure; in Section~\ref{sec:experiments}, we empirically compare this to a ``flattened'' version of RNN that encodes the section-slide-object structure using a single latent space.
First, $\texttt{PT}^{sec}$ takes as input the head-tail contextualized sentence embeddings from the DR, which encodes the overall information of the current section $S_i$:
\begin{equation}
h^{sec}_{i} = \texttt{PT}^{sec}(h^{sec}_{i-1}, [X^{in}_{i, 0}, X^{in}_{i, N^{in}_i - 1}]).
\end{equation}
We use GRU~\cite{cho14learning} for $\texttt{PT}^{sec}$ and initialize $h^{sec}_{0}$ with the contextualized sentence embeddings of the first section, i.e., $h^{sec}_{0}=[X^{in}_{0, 0}, X^{in}_{0, N^{in}_0 - 1}]$.
Based on the section state $h^{sec}_{i}$, $\texttt{PT}^{slide}$ models the section-to-slide relationships,
\begin{equation}
a^{sec}_j, h^{slide}_{j} = \texttt{PT}^{slide}(a^{sec}_{j-1}, h^{slide}_{j-1}, E^{sec}_{i}),
\end{equation}
where $h^{slide}_{0} = h^{sec}_{i}$, $E^{sec}_{i}$ is the section embedding (Eq.~\ref{eq:emb-s}), and $a^{sec}_j$ is a binary action variable that tracks the section pointer, i.e, it decides if the model should generate a new slide for the current section $S_i$ or proceed to the next section $S_{i+1}$. We implement $\texttt{PT}^{slide}$ as a GRU and a two-layer MLP with a binary decision head that learns a policy $\phi$ to predict $a^{sec}_j = \{ \texttt{[NEW\_SLIDE]}, \texttt{[END\_SEC]}\}$,
\begin{equation}
\begin{split}
a^{sec}_j &= \text{MLP}^{slide}_{\phi}([h^{slide}_j, \sum\nolimits_r \alpha^{slide}_{j, r} E^{sec}_{i, r}]), \\
\alpha^{slide}_j &= \text{softmax}(h^{slide}_j W (E^{sec}_i)^{\intercal}).
\end{split}
\end{equation}
Here, $\alpha^{slide}_j \in \mathbb{R}^{N^{in}_i+M^{in}_F}$ is an attention map over $E^{sec}_i$ that computes the compatibility between $h^{slide}_j$ and $E^{sec}_i$ in a bilinear form.
Finally, the object $\texttt{PT}^{obj}$ tracks which objects to put on the current slide $O_j$ based on the slide state $h^{slide}_{j}$,
\begin{equation}
\begin{split}
a^{slide}_k, h^{obj}_{k} &= \texttt{PT}^{obj}(a^{slide}_{k-1}, h^{obj}_{k-1}, E^{sec}_{i}), \\
a^{slide}_k &= \text{MLP}^{obj}_{\psi}([h^{obj}_k, \sum\nolimits_r \alpha^{obj}_{k, r} E^{sec}_{i, r}]), \\
\alpha^{obj}_k &= \text{softmax}(h^{obj}_k W (E^{sec}_i)^{\intercal}),
\end{split}
\end{equation}
where we set $h^{obj}_{0} = h^{slide}_{j}$. Similar to $\texttt{PT}^{slide}$, $a^{slide}_k= \{ \texttt{[NEW\_OBJ]}, \texttt{[END\_SLIDE]}\}$ is a binary action variable that decides whether to put a new object on the current slide or proceed to the next. We again use a GRU and a two-layer MLP with a policy $\psi$ to implement $\texttt{PT}^{obj}$, together with an attention mechanism that measures the compatibility scores between $h^{obj}_k$ and $E^{sec}_i$. Note that each of the three PTs has an independent set of weights to ensure that they model distinctive dynamics in the section-slide-object structure.
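The bilinear attention shared by $\texttt{PT}^{slide}$ and $\texttt{PT}^{obj}$ is simply a softmax over compatibility scores between the current state and every object in the section. A NumPy sketch, with illustrative shapes and random weights:

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def bilinear_attention(h, W, E_sec):
    """alpha = softmax(h W E_sec^T): one weight per object in the section."""
    return softmax(h @ W @ E_sec.T)

rng = np.random.default_rng(1)
d_h, d_e, n_obj = 128, 256, 7
h = rng.normal(size=d_h)              # current slide/object state
W = rng.normal(size=(d_h, d_e)) * 0.01
E_sec = rng.normal(size=(n_obj, d_e))
alpha = bilinear_attention(h, W, E_sec)
context = alpha @ E_sec               # attention-pooled section context
# the action MLP sees [h, context]; the OP later picks argmax(alpha)
```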
\paragraph{Object Placer (OP)}
When $\texttt{PT}^{obj}$ takes an action $a^{slide}_k=\texttt{[NEW\_OBJ]}$, the OP selects an object from the current section $S_i$ and predicts the location on the current slide $O_j$ in which to place it. For this, we use the attention score $\alpha^{obj}_k$ to choose the object (sentence or figure) that has the maximum compatibility score with the current object state $h^{obj}_k$, i.e., $\arg\max_r \alpha^{obj}_{k, r}$. We then employ a two-layer MLP to predict the layout variable for the chosen object,
\begin{equation}
\{l^x_k, l^y_k, l^w_k, l^h_k\} = \text{MLP}^{layout}([h^{obj}_k, \sum\nolimits_r \alpha^{obj}_{k, r} E^{sec}_{i, r}]).
\end{equation}
Note that the distinctive style of presentation slides requires special treatment of the objects. If an object is a figure, we take only the image part and resize it to fit the bounding box region while maintaining the original aspect ratio. If an object is a sentence, we first paraphrase it into a concise form and also adjust the font size to fit inside the bounding box region.
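For figures, fitting the image into the predicted bounding box while preserving its aspect ratio is a small geometric computation; a sketch (centering the image inside the box is our own assumption):

```python
def fit_figure(box, img_w, img_h):
    """Scale an image to fit inside box = (x, y, w, h), preserving the
    aspect ratio, and center it within the box."""
    x, y, w, h = box
    scale = min(w / img_w, h / img_h)   # largest scale that still fits
    fw, fh = img_w * scale, img_h * scale
    return (x + (w - fw) / 2, y + (h - fh) / 2, fw, fh)
```

For sentences, the analogous step is shrinking the font size until the (paraphrased) text fits the box.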
\paragraph{Paraphraser (PAR)}
We paraphrase sentences before placing them on slides. This step is crucial because without it the text would be too verbose for a slide presentation.\footnote{In our dataset, sentences in the documents have an average of 17.3 words, while sentences in slides have 11.6 words; the difference is statistically significant ($p=0.0031$).}
We implement the PAR as an attention-based Seq2Seq~\cite{bahdanau15seq2seq} with the copy mechanism~\cite{gu16copy}:
\begin{equation}
\{w_0, ..., w_{l-1}\} = \text{PAR}(T_{j,k}^{out}, h_k^{obj}),
\end{equation}
where $T_{j,k}^{out}$ is a sentence from the document chosen by the OP. We condition PAR on the object state $h_{k}^{obj}$ to provide contextual information; we demonstrate the importance of this conditioning in the supplementary material.
\begin{table*}[tp]
\centering
\scriptsize
\begin{tabular}[t]{lrccrrrrrrr}
\toprule
& \multicolumn{2}{c}{\textbf{Document \& Slide Pairs}} & & \multicolumn{3}{c}{\textbf{Documents}} & ~ & \multicolumn{3}{c}{\textbf{Slides}} \\
\cmidrule{2-3} \cmidrule{5-7} \cmidrule{9-11}
& \#Pairs & Train / Val / Test & ~ & \#Sections & \#Sentences & \#Figures & ~ & \#Slides & \#Sentences & \#Figures \\
\midrule
CV & 2,600 & 2,073 / 265 / 262 & ~ & 15,588 (6.0) & 721,048 (46.3) & 24,998 (9.6) & ~ & 37,969 (14.6) & 124,924 (8.0) & 4,290 (1.7) \\
NLP & 931 & $\:\:\:$741 / $\:\:$93 / $\:\:$97 & ~ & 7,743 (8.3) & 234,764 (30.3) & 8,114 (8.7) & ~ & 19,333 (20.8) & 63,162 (8.2) & 3,956 (4.2) \\
ML & 2,342 & 1,872 / 234 / 236 & ~ & 17,735 (7.6) & 801,754 (45.2) & 15,687 (6.7) & ~ & 41,544 (17.7) & 142,698 (8.0) & 6,187 (2.6) \\ \midrule
Total & 5,873 & 4,686 / 592 / 595 & ~ & 41,066 (7.0) & 1,757,566 (42.8) & 48,799 (8.3) & ~ & 98,856 (16.8) & 330,784 (8.1) & 14,433 (2.5) \\
\bottomrule
\\
\end{tabular}
\caption{Descriptive statistics of our dataset. We report both the total count and the average number (in parenthesis).}
\label{table:data-stats}
\end{table*}
\subsection{Training}
We design a learning objective that captures both the structural similarity and the content similarity between the ground-truth slides and the generated slides.
\paragraph{Structural similarity} The series of actions $a^{sec}_j$ and $a^{slide}_k$ determines the \textit{structure} of output slides, i.e., the number of slides per section. To encourage our model to generate slide decks with a similar structure as the ground-truth, we define our structural similarity loss as
\begin{equation}
\mathcal{L}_{structure} = \sum\nolimits_j \text{CE}(a^{sec}_j) + \sum\nolimits_k \text{CE}(a^{slide}_k)
\end{equation}
where CE denotes the cross-entropy loss between a predicted action distribution and the corresponding ground-truth action.
\paragraph{Content similarity} We formulate our content similarity loss to capture various aspects of slide generation quality, measuring whether the model 1) selected important sentences and figures from the input document, 2) adequately phrased sentences in the presentation style (e.g., shorter sentences), 3) placed sentences and figures to the right locations on a slide, and 4) put sentences and figures on a slide that are relevant to each other.
We define our content similarity loss to measure each of the four aspects described above:
\begin{equation}
\label{eq:loss-c}
\begin{split}
\mathcal{L}_{content} & = \sum\nolimits_k \text{CE}(\alpha^{obj}_k) + \sum\nolimits_l \text{CE}(w_l) + \\
\sum\nolimits_{u,v} & \text{CE}(\delta([E^{txt}_u, E^{fig}_v])) + \sum\nolimits_k \text{MSE}(L_k).
\end{split}
\end{equation}
\textbf{Selection loss.} The first term checks whether the model selected the ``correct'' objects that also appear in the ground-truth slide deck. This term is slide-insensitive, i.e., the correct/incorrect inclusion of an object is not affected by which specific slide it appears in.
\textbf{Paraphrasing loss.} The second term measures the quality of paraphrased sentences by comparing the output sentence and the ground-truth sentence word-by-word.
\textbf{Text-image matching loss.} The third term measures the relevance of text and figures appearing in the same slide. We follow the literature on visual-semantic embedding~\cite{frome13devise,kiros14text-image-emb,karpathy14text-image} and learn an additional multimodal projection head $\delta([E^{txt}_u, E^{fig}_v])$ with a sigmoid activation that outputs a scalar variable in $[0, 1]$ indicating the relevance score of text and figure embeddings. We construct training samples with positive and negative pairs. For positive pairs, we sample text-figure pairs from a) the ground-truth slides and b) paragraph-figure pairs where the figure is mentioned in the paragraph. We randomly construct negative pairs.
\textbf{Layout loss.} The last term measures the quality of slide layout by regressing the predicted bounding box to the ground-truth. While there exist several solutions to bounding box regression~\cite{he15spatial,ren15faster}, we opted for the simple mean squared error (MSE) computed directly over the layout variable $L_k=\{l_k^x, l_k^y, l_k^w, l_k^h\}$.
\paragraph{The final loss} We define our final learning objective as
\begin{equation}
\mathcal{L}_{DOC2PPT} = \mathcal{L}_{structure} + \gamma \mathcal{L}_{content}
\end{equation}
where $\gamma$ controls the relative importance between structural and content similarity; we set $\gamma=1$ in our experiments.
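As a toy numerical illustration of how the terms combine (cross-entropy for the discrete decisions, MSE for the layout box), with the paraphrasing and text-image matching terms omitted for brevity; the probability vectors and targets below are made up:

```python
import math

def cross_entropy(p, target_idx, eps=1e-12):
    """Negative log-likelihood of the target index under distribution p."""
    return -math.log(p[target_idx] + eps)

def mse(pred, gt):
    """Mean squared error over the layout variables (x, y, w, h)."""
    return sum((a - b) ** 2 for a, b in zip(pred, gt)) / len(pred)

# toy predictions: one slide-level action, one object selection, one layout box
L_structure = cross_entropy([0.9, 0.1], 0)                  # a^sec: [NEW_SLIDE]
L_content = (cross_entropy([0.1, 0.7, 0.2], 1)              # selection term
             + mse([0.1, 0.1, 0.5, 0.2],                    # layout term
                   [0.1, 0.15, 0.5, 0.25]))
gamma = 1.0
L_total = L_structure + gamma * L_content
```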
Since our model makes sequential predictions, we follow the standard teacher-forcing approach~\cite{williams89teacher-forcing} during training and provide the ground-truth results for the past prediction steps, e.g., the next actions $a^{sec}_j$ and $a^{slide}_k$ are based on the ground-truth actions $\tilde{a}^{sec}_{j-1}$ and $\tilde{a}^{slide}_{k-1}$, the next object $\alpha^{obj}_k$ is selected based on the ground-truth object $\tilde{\alpha}^{obj}_{k-1}$, etc.
\subsection{Inference}
The inference procedures during training and test times largely follow the same process, with one exception: At test time, we utilize the multimodal projection head $\delta(\cdot)$ to act as a post-processing tool. That is, once our model generates a slide deck, we remove figures that have relevance scores lower than a threshold $\theta^R$ and add figures with scores higher than a threshold $\theta^A$. We tune the two hyper-parameters $\theta^R$ and $\theta^A$ via cross-validation (we set $\theta^R=0.8$, $\theta^A=0.9$).
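The post-processing step amounts to two threshold rules over the relevance scores produced by $\delta(\cdot)$; a sketch, where the list-of-pairs interface is our own simplification:

```python
def postprocess_slide(placed, candidates, theta_r=0.8, theta_a=0.9):
    """placed: [(figure_id, relevance)] figures the model put on the slide;
    candidates: [(figure_id, relevance)] figures not placed on the slide.
    Drop weakly relevant figures, add strongly relevant ones."""
    kept = [f for f, s in placed if s >= theta_r]       # remove if below theta_r
    added = [f for f, s in candidates if s > theta_a]   # add if above theta_a
    return kept + added
```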
\section{Dataset}
We collect pairs of documents and the corresponding slide decks from academic proceedings, focusing on three research communities: computer vision (CVPR, ECCV, BMVC), natural language processing (ACL, NAACL, EMNLP), and machine learning (ICML, NeurIPS, ICLR). Table~\ref{table:data-stats} reports the descriptive statistics of our dataset.
Our dataset contains PDF documents and slides in the JPEG image format. For the training and validation set, we automatically extract text and figures from them and perform matching to create document-to-slide correspondences at various levels. To ensure that our test set is clean and reliable, we use Amazon Mechanical Turk (AMT) and have humans perform image extraction and matching for the entire test set. We provide an overview of our extraction and matching processes below; including details of data collection and automatic extraction/matching processes with reliability analyses in the supplementary material.
\textbf{Text and Figure Extraction.} For each document $\mathcal{D}$, we extract sections $\mathcal{S}$ and sentences $T^{in}$ using ScienceParse~\cite{url:scienceparse} and figures $\mathcal{F}^{in}$ using PDFFigures2.0~\cite{clark16pdffigures}. For each slide deck $\mathcal{O}$, we extract sentences $\mathcal{T}^{out}$ using Azure OCR~\cite{url:azureocr} and figures $\mathcal{F}^{out}$ using morphological transformation and the border following technique~\cite{suzuki85border-following,url:opencv}.
\textbf{Slide Stemming.} Many slides are presented with animations, which makes $\mathcal{O}$ contain successive slides with nearly identical content, each adding one element to its predecessor. For simplicity, we consider these near-duplicate slides redundant and remove them by comparing the text and image contents of successive slides: if $O_{j+1}$ covers more than 80\% of the content of $O_j$ (per text/visual embeddings), we discard $O_j$ and keep $O_{j+1}$ as it is deemed more complete.
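The 80\% coverage rule can be illustrated with a token-set approximation (our actual pipeline compares text/visual embeddings; plain token overlap is a simplification for exposition):

```python
def stem_slides(slides, threshold=0.8):
    """Drop slide j when slide j+1 covers >= threshold of its content.
    Each slide is a set of content tokens (words and figure ids)."""
    kept = []
    for j, slide in enumerate(slides):
        if j + 1 < len(slides) and slide:
            coverage = len(slide & slides[j + 1]) / len(slide)
            if coverage >= threshold:
                continue  # animation frame: the next slide supersedes it
        kept.append(slide)
    return kept
```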
\textbf{Slide-Section Matching.}
We match slides in a deck to the sections in the corresponding document so that a slide deck is represented as a set of non-overlapping slide groups each with a matching section in the document. To this end, we use RoBERTa~\cite{liu19roberta} to extract embeddings of the text content in each slide and the paragraphs in each section of the document. We assume that a slide deck follows the section order of the corresponding document, and use dynamic programming to find slide-to-section matching based on the cosine similarity between text embeddings.
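Under the monotonicity assumption, slide-to-section matching is a classic dynamic program: assign each slide a section index that never decreases while maximizing the total similarity. A sketch over a precomputed similarity matrix (the exact scoring used in our pipeline may differ):

```python
def align_slides(sim):
    """sim[j][i]: similarity of slide j to section i.
    Returns a non-decreasing section index per slide maximizing total score."""
    n_slides, n_secs = len(sim), len(sim[0])
    NEG = float("-inf")
    # best[j][i]: best total score with slide j assigned to section i
    best = [[NEG] * n_secs for _ in range(n_slides)]
    back = [[0] * n_secs for _ in range(n_slides)]
    best[0] = list(sim[0])
    for j in range(1, n_slides):
        for i in range(n_secs):
            # previous slide may use any section p <= i (monotonicity)
            prev = max(range(i + 1), key=lambda p: best[j - 1][p])
            best[j][i] = best[j - 1][prev] + sim[j][i]
            back[j][i] = prev
    # backtrack from the best final section
    i = max(range(n_secs), key=lambda p: best[-1][p])
    path = [i]
    for j in range(n_slides - 1, 0, -1):
        i = back[j][i]
        path.append(i)
    return path[::-1]
```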
\textbf{Sentence Matching.}
We match sentences from slides to the corresponding document. We again use RoBERTa to extract embeddings of each sentence in slides and documents, and search for the matching sentence based on the cosine similarity. We limit the search space only within the corresponding sections using the slide-section matching result.
\textbf{Figure Matching.}
Lastly, we match figures from slides to those in the corresponding document. We use MobileNet~\cite{howard17mobilenets} to extract visual embeddings of all $I^{in}$ and $I^{out}$ and match them based on the highest cosine similarity. Note that some figures in slides do not appear in the corresponding document (and hence no match). For simplicity, we discard $F^{out}$ if its highest visual embedding similarity is lower than a threshold $\theta^{I}=0.8$.
\begin{table*}[t]
\centering
\footnotesize
\begin{tabular}[t]{ccccc||cc|ccc|c|c}
\toprule
$\:$ & \multicolumn{4}{c||}{Ablation Settings} & \multicolumn{2}{c|}{ROUGE-SL} & \multicolumn{3}{c|}{LC-FS} & \multirow{2}{*}{TFR} & mIoU \\
~ & Hrch-PT & PAR & TIM & Post Proc. & Ours & w/o SL & Prec. & Rec. & F1 & ~ & (Layout / Template) \\ \midrule
(a) & \ding{55} & \ding{55} & \ding{55} & \ding{55} & 24.35 & 29.77 & \underline{25.54} & 14.85 & 18.78 & 5.61 & 43.34 / 38.15 \\
(b) & \ding{51} & \ding{55} & \ding{55} & \ding{55} & 24.93 & 29.68 & 17.48 & \underline{26.26} & 20.99 & 8.58 & \underline{49.16} / 40.94 \\
(c) & \ding{51} & \ding{51} & \ding{55} & \ding{55} & \underline{27.19} & \underline{32.27} & 17.48 & \underline{26.26} & 20.99 & 9.23 & \underline{49.16} / 40.94 \\
(d) & \ding{51} & \ding{55} & \ding{51} & \ding{55} & 26.52 & 30.99 & 23.47 & 25.31 & \underline{24.36} & 10.09 & \textbf{50.82} / 42.96 \\
(e) & \ding{51} & \ding{51} & \ding{51} & \ding{55} & \textbf{29.40} & \textbf{34.27} & 23.47 & 25.31 & \underline{24.36} & \underline{11.82} & \textbf{50.82} / 42.96 \\
\midrule
(f) & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \textbf{29.40} & \textbf{34.27} & \textbf{26.36} & \textbf{38.39} & \textbf{31.26} & \textbf{17.49} & $\:\:\:$ - $\:\:\:$ / 46.73 \\
\bottomrule
\\
\end{tabular}
\caption{Overall results for different ablation settings under the automatic evaluation metrics ROUGE-SL, LC-FS, TFR, and mIoU.}
\label{table:overall}
\end{table*}
\section{Experiments}
\label{sec:experiments}
\texttt{DOC2PPT}\xspace is a new task with no established evaluation metrics or baselines. To enable large-scale evaluation, we propose automatic metrics specifically designed for evaluating slide generation methods. We carefully ablate various components of our approach and evaluate them with the proposed metrics. We also perform a human evaluation to assess the generation quality.
\subsection{Evaluation Metrics}
\label{sec:eval}
\paragraph{Slide-Level ROUGE (ROUGE-SL)}
To measure the quality of the text in generated slides, we adapt the ROUGE score~\cite{lin14rouge} widely used in document summarization. Note that ROUGE does not account for the text length in the output, which is problematic for presentation slides (e.g., text in slides is usually shorter).
Intuitively, the number of slides in a deck is a good proxy for the overall text length. If a deck is too short, too much text is packed onto each slide, making it difficult to read; conversely, if a deck has too many slides, each slide conveys little information while making the whole presentation lengthy. Therefore, we propose the slide-level ROUGE:
\begin{equation}
\text{ROUGE-SL} = \text{ROUGE-L} \times e^{-\frac{|Q-\tilde{Q}|}{Q}},
\end{equation}
where $Q$ and $\tilde{Q}$ are the number of slides in the generated and the ground-truth slide decks, respectively.
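As a minimal sketch (not the released evaluation code), the slide-length term can be applied to a precomputed ROUGE-L score as follows; we take the exponential factor as a penalty, so a deck with the ground-truth number of slides keeps its full ROUGE-L score:

```python
import math

def rouge_sl(rouge_l: float, q_gen: int, q_gt: int) -> float:
    """Slide-level ROUGE: ROUGE-L damped by a slide-count penalty.

    A deck with the ground-truth number of slides keeps its full
    ROUGE-L score; over- or under-segmented decks are penalised
    exponentially in the relative slide-count mismatch.
    """
    return rouge_l * math.exp(-abs(q_gen - q_gt) / q_gen)

rouge_sl(0.30, 10, 10)  # 0.3: no penalty when the slide counts match
```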
\paragraph{Longest Common Figure Subsequence (LC-FS)}
We measure the quality of figures in the output slides by considering both the correctness (whether the figures from the ground-truth deck are included) and the order (whether all the figures are ordered logically -- i.e., in a similar manner to the ground-truth deck). To this end, we use the Longest Common Subsequence (LCS) to compare the list of figures in the output $\{I^{out}_0, I^{out}_1, ...\}$ to the ground-truth $\{\tilde{I}^{out}_0, \tilde{I}^{out}_1, ...\}$ and report precision/recall/F1.
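A minimal reference implementation of LC-FS (the figure identifiers below are illustrative; matching figure images across decks is assumed to be resolved upstream):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def lc_fs(figs_out, figs_gt):
    """Precision, recall, and F1 of the output figure sequence."""
    m = lcs_length(figs_out, figs_gt)
    p = m / len(figs_out) if figs_out else 0.0
    r = m / len(figs_gt) if figs_gt else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

p, r, f1 = lc_fs(["fig1", "fig3", "fig2"], ["fig1", "fig2", "fig4"])
```

Here two of the three figures appear in a ground-truth-consistent order, so precision and recall are both 2/3.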
\paragraph{Text-Figure Relevance (TFR)}
A good slide deck should pair text with relevant figures to make the presentation informative and attractive. In addition to considering text and figures independently, we measure the relevance between them. We again adapt ROUGE-L, modified as
\begin{equation}
\text{TFR} = \frac{1}{M^{in}_F} \sum\nolimits^{M^{in}_F-1}_{i=0} \text{ROUGE-L}(P_{i}, \tilde{P}_{i}),
\end{equation}
where $P_{i}$ and $\tilde{P}_{i}$ are sentences from generated and ground-truth slides that contain $I^{in}_{i}$, respectively.
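A sketch of TFR with a simple token-level ROUGE-L; the sentence pairing (which generated sentence accompanies which figure) is assumed to be given, and figures missing from the generated deck would contribute a zero pair:

```python
def _lcs(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f(cand: str, ref: str) -> float:
    """Token-level ROUGE-L F-measure between two sentences."""
    c, r = cand.split(), ref.split()
    m = _lcs(c, r)
    if m == 0:
        return 0.0
    p, rec = m / len(c), m / len(r)
    return 2 * p * rec / (p + rec)

def tfr(sentence_pairs):
    """Mean ROUGE-L over (generated, ground-truth) sentence pairs, one
    pair per figure that appears in both decks."""
    return sum(rouge_l_f(g, t) for g, t in sentence_pairs) / len(sentence_pairs)
```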
\paragraph{Mean Intersection over Union (mIoU)}
A good design layout makes it easy to consume information presented in slides. To evaluate the layout quality, we adapt the mean intersection over union (mIoU)~\cite{everingham10pascal} by incorporating the LCS idea. Given a generated slide deck $\mathcal{O}$ and the ground-truth $\tilde{\mathcal{O}}$, we compute:
\begin{equation}
\text{mIoU}(\mathcal{O}, \tilde{\mathcal{O}}) = \frac{1}{N^{out}_{O}} \sum \nolimits^{N^{out}_{O}-1}_{i=0} \text{IoU}(O_i, \tilde{O}_{J_i})
\end{equation}
where $\text{IoU}(O_i, \tilde{O}_j)$ computes the IoU between the set of predicted bounding boxes on slide $i$ and the set of ground-truth bounding boxes on slide $j$. To account for a potential structural mismatch (with missing/extra slides), we find the index set $J = \{j_0, j_1, ..., j_{N^{out}_{O}-1}\}$, with indices in increasing order, that achieves the maximum mIoU between $\mathcal{O}$ and $\tilde{\mathcal{O}}$.
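The order-preserving assignment can be found with an LCS-style dynamic program over a precomputed slide-to-slide IoU matrix; the sketch below (ours, with a single-box IoU helper for illustration) assumes each generated slide is matched to at most one ground-truth slide:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def monotone_miou(iou):
    """iou[i][j] is the IoU between generated slide i and ground-truth
    slide j. An LCS-style dynamic program finds the order-preserving
    assignment maximising the summed IoU (each slide used at most once,
    indices increasing); the total is averaged over generated slides."""
    n, m = len(iou), len(iou[0])
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + iou[i - 1][j - 1],
                           dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / n
```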
\begin{table*}[t]
\centering
\footnotesize
\begin{tabular}[t]{cccccc}
\toprule
Train $\downarrow$ \:/\: Test $\rightarrow$ & CV & NLP & ML & All \\
\midrule
CV & \textbf{31.2} / \textbf{32.1} / \textbf{19.7} & 24.1 / 21.5 / 5.6 & 24.0 / 25.6 / 11.2 & 24.7 / 29.2 / \underline{15.8} \\
NLP & 28.8 / 30.0 / 13.4 & \textbf{34.7} / \textbf{30.7} / \textbf{11.8} & 29.2 / 32.7 / 15.3 & \underline{28.9} / 30.9 / 13.6 \\
ML & 21.1 / 29.2 / 11.6 & 21.1 / 26.6 / 6.6 & \textbf{32.1} / \textbf{36.8} / \textbf{22.8} & 24.9 / \textbf{31.4} / 14.4 \\
All & \underline{29.2} / \underline{31.2} / \underline{18.6} & \underline{30.0} / \underline{28.8} / \underline{9.7} & \underline{29.4} / \underline{32.9} / \underline{20.6} & \textbf{29.4} / \underline{31.3} / \textbf{17.5} \\
\bottomrule
\\
\end{tabular}
\caption{Topic-aware evaluation results (ROUGE-SL / LC-F1 / TFR) when trained and tested on data from different topics.}
\label{table:topic}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figs/fig_qual-comb2.png}
\vspace{-.5em}
\caption{Qualitative examples of the generated slide deck from our model (Paper source: top~\cite{izmailov20flow-gmm} and bottom~\cite{chen20semi}). We provide more results, including failure cases, on our project webpage.}
\label{fig:qual-overall}
\vspace{-.5em}
\end{figure*}
\subsection{Implementation Detail}
For the DR, we use a Bi-GRU with 1,024 hidden units and set the MLPs to output 1,024-dimensional embeddings. Each layer of the PT is based on a 256-unit GRU. The PAR is designed as an attention-based seq2seq model~\cite{bahdanau15seq2seq} with 512 hidden units. All the MLPs are two-layer fully-connected networks. We train our network end-to-end using ADAM~\cite{kingma14adam} with a learning rate of 3e-4.
\subsection{Results and Discussions}
\paragraph{Is the hierarchical modeling effective?}
To answer this question we define a ``flattened'' version of our Progress Tracker (flat-PT) by replacing the hierarchical RNN with a vanilla RNN that learns a single shared latent space to model the section-slide-object structure. The flat-PT contains a single GRU and a two-layer MLP with a ternary decision head that learns a policy $\zeta$ to predict an action $a_t \in \{ \texttt{[NEW\_SECTION]}, \texttt{[NEW\_SLIDE]}, \texttt{[NEW\_OBJ]} \}$. For a fair comparison, we increase the number of hidden units in the baseline GRU to 512 (ours is 256) so that the model capacities are roughly the same.
First, we compare the structural similarity between the generated and the ground-truth slide decks. For this, we build a list of tokens indicating a section-slide-object structure (e.g., $\texttt{[SEC]}, \texttt{[SLIDE]}, \texttt{[OBJ]}, ..., \texttt{[SLIDE]}, ...$) and compare the lists using the longest common subsequence (LCS). Our hierarchical approach achieves 64.15\% vs. 51.72\% for the flat approach, suggesting that ours learns the structure better than the baseline.
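This structure comparison reduces to an LCS over token lists; a minimal sketch (the normalisation, here by the ground-truth length, is one natural choice):

```python
def structure_similarity(gen, gt):
    """LCS between two structure-token lists, as a percentage of the
    ground-truth length."""
    dp = [[0] * (len(gt) + 1) for _ in range(len(gen) + 1)]
    for i, x in enumerate(gen):
        for j, y in enumerate(gt):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return 100.0 * dp[-1][-1] / len(gt)

gen = ["[SEC]", "[SLIDE]", "[OBJ]", "[OBJ]", "[SLIDE]", "[OBJ]"]
gt = ["[SEC]", "[SLIDE]", "[OBJ]", "[SLIDE]", "[OBJ]", "[OBJ]"]
sim = structure_similarity(gen, gt)  # ~83.3: five of six tokens align in order
```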
Table~\ref{table:overall} (a) and (b) compare the two models on the four metrics introduced in Sec.~\ref{sec:eval}. The results show that ours outperforms flat-PT across all metrics. The flat-PT achieves slightly better performance only on ROUGE-SL without the slide-length term (w/o SL), which is the same as ROUGE-L; together, these results suggest that ours generates a slide structure more similar to the ground truth than the flat approach.
\vspace{-1em}\paragraph{A deeper look into the content similarity loss}
We ablate different terms in the content similarity loss (Eq.~\ref{eq:loss-c}) to understand their individual effectiveness; shown in Table~\ref{table:overall}.
\textbf{PAR.} The paraphrasing loss improves text quality in slides; see the ROUGE-SL scores of (b) vs. (c), and (d) vs. (e). It also improves the TFR metric because any improvement in text quality will benefit text-image relevance.
\textbf{TIM.} The text-image matching loss improves the figure quality; see (b) vs. (d) and (c) vs. (e). It particularly improves LC-FS precision with a moderate drop in recall, indicating that a larger fraction of the selected figures is correct. TIM also improves ROUGE-SL because it helps constrain the multimodal embedding space, resulting in a better selection of text.
\vspace{-1em}\paragraph{Figure post-processing}
At test time, we leverage the multimodal projection head $\delta(\cdot)$ as a post-processing module to add missing figures and/or remove unnecessary ones. Table~\ref{table:overall} (f) shows this post-processing further improves the two image-related metrics, LC-FS and TFR. For simplicity, we add figures using the equal-width fitting of the template-based design instead of using the OP to predict their locations.
\vspace{-1em}\paragraph{Layout prediction vs. templates}
The object placer (OP) predicts the layout to decide where and how to put the extracted objects. We compare this with a template-based approach, which selects the current section title as the slide title and puts sentences and figures in the body line-by-line. Extracted figures are fitted equally (with the same width) into the remaining space under the main content.
The results show that the prediction-based layout, which learns directly from the layout loss, achieves a higher mIoU with the ground truth, whereas the template-based design makes the generated slide deck visually more consistent.
\vspace{-1em}\paragraph{Topic-aware evaluation}
We evaluate performance in a topic-dependent and independent fashion.
To do this, we train and test our model on data from each of the three research communities (CV, NLP, and ML). Table~\ref{table:topic} shows that models trained and tested within each topic perform the best (not surprisingly), and that models trained on data from all topics achieve the second best performance, showing generalization to different topic areas. Training on NLP data, despite it being the smallest of the three sets, seems to generalize well to other topics on the text metric, achieving the second best ROUGE-SL (28.9). Training on CV data provides the second highest performance on the text-figure metric TFR (15.8), and training on ML achieves the highest figure extraction performance (LC-FS F1 of 31.4).
\vspace{-1em}\paragraph{Human evaluation}
We conduct a user study to assess the perceived quality of generated slides. We select 50 documents from the test set and prepare four slide decks per document: the ground-truth deck, and the ones generated by the flat PT (Table~\ref{table:overall} (a)), by ours without PAR and TIM (b), and by our final model (f). To make the task easy to complete, we sample 200 sections from the 50 documents and create 600 pairs of ground-truth and generated slides.
We recruited three AMT Master Workers for each task (HIT). The workers were shown the slides from the ground-truth deck (DECK A) and one of the methods (DECK B). The workers were then asked to answer three questions: \texttt{Q1}. Looking only at the TEXT on the slides, how similar is the content on the slides in DECK A to the content on the slides in DECK B?; \texttt{Q2}. How well do the figure(s)/table(s) in DECK A match the text or figures/tables in DECK B?; \texttt{Q3}. How well do the figure(s)/table(s) in DECK A match the TEXT in DECK B? The responses were all on a scale of 1 (not similar at all) to 7 (very similar).
Fig.~\ref{fig:human} shows the average scores for each method. The average rating for our approach was significantly greater for all three questions compared to the other two methods. There was no significant difference between the ratings for the other two methods.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figs/human.pdf}
\vspace{-1em}
\caption{The average scores for how closely the generated text and figures match the text and figures in the ground-truth slides, respectively, and how well the generated text matches the figures in the ground-truth slides. Error bars reflect standard error. Significance tests: two-sample t-test ($p<$0.05).}
\label{fig:human}
\end{figure}
\vspace{-.5em}\paragraph{Qualitative Results}
Fig.~\ref{fig:qual-overall} illustrates two qualitative examples (top~\cite{izmailov20flow-gmm} and bottom~\cite{chen20semi}) of slide decks generated by our model. With the post-processing, TIM can add the related figures to the slides and make them more informative. PAR helps create a better presentation by paraphrasing the sentences into bullet-point form.
\section{Conclusion}
We present a novel task and approach for generating slides from documents. This is a challenging multimodal task that involves understanding and summarizing documents containing text and figures and structuring them into a presentation form. We release a large set of 5,873 paired documents and slide decks, and provide evaluation metrics with our results. We hope our work will help advance the state-of-the-art in vision and language understanding.
\onecolumn
\section{X-ray spectral modeling}\label{spectral}
In this study, we used all available data obtained from different space-based satellites, covering a time-span
from July 2011 until end of April 2012. Spectra were extracted for all the {\em RXTE}/PCA, {\em Swift}/XRT, {\em Suzaku}/XIS03,
and {\em XMM--Newton}/pn\, data, using standard software provided by the different
team missions, and modeled using XSPEC version 12.7.0. Best fits were
found using a blackbody plus power-law (BB+PL; $\chi^2_\nu/$dof
=1.05/2522) and a 2 blackbodies (2BBs; $\chi^2_\nu/$dof =1.06/2522)
model, all corrected for the photoelectric absorption. Figure 1
(left) shows how the flux decreased by about an order of magnitude, typical of the outburst decay of magnetars.
\begin{figure}[!t]
\includegraphics[trim=0 12 0 30, clip, height=6cm, width=0.5\textwidth]{f8.eps}
\includegraphics[trim=10 25 230 95, clip, height=6cm, width=0.5\textwidth]{f1b.eps}
\caption{{\em Left:} Outburst model from \cite[Pons \& Rea (2012)]{pons12} superimposed on the 1-10\,keV flux decay of Swift\,J1822.3--1606\,. Black circles denote {\em Swift}/XRT data, red triangles correspond to \textit{XMM-Newton} and blue stars to {\em Suzaku}/XIS03\, data.
{\em Right:} Pulse phase evolution as a function of time, together with the time residuals (lower panel) after having corrected for the linear component (correction to the $P$ value). The solid lines in the two panels mark the inferred $P$--$\dot{P}$ coherent solution based on the whole dataset, while the dotted lines represent the $P$--$\dot{P}$ coherent solution based on the data collected during the first 90 days only.}
\label{fig:phases}
\end{figure}
The aggressive monitoring campaign we present here allowed us not only to study
in detail the flux decay of Swift\,J1822.3--1606, but also to give an estimate of its typical
timescale. We have compared the observed outburst decay with the more physical theoretical model presented in Pons \& Rea (2012). In addition, we have performed numerical
simulations with a 2D code designed to model the magneto-thermal evolution of neutron stars. In Figure 1 (left), superimposed, we show our best representative model, which reproduces the observed properties of the decay of the Swift\,J1822.3--1606\, outburst. This model corresponds to an injection of $4\times10^{25}$~erg~cm$^{-3}$ in the outer crust, in the narrow layer with density between $6\times10^{8}$ and $6\times 10^{10}$\,g~cm$^{-3}$, and in an angular region of 35 degrees (0.6 rad) around the pole. The total injected energy was then $1.3\times10^{42}$~erg.
\section{X-ray timing analysis}\label{timing}
For the X-ray timing analysis we used all available data after barycentering all the events.
We started by obtaining an accurate period measurement by folding the data from the first two XRT pointings which were separated by less than 1\,day, and studying the phase evolution within these observations by means of a phase-fitting technique (see \cite[Dall'Osso et~al.\ 2003]{dallosso03} for details). The resulting best-fit period (reduced $\chi^2=1.1$ for 2 dof) is $P=8.43966(2)$\,s (all errors are given at 1$\sigma$ c.l.) at the epoch MJD 55757.0. The above period accuracy of 20\,$\mu$s is enough to phase-connect coherently the later {\em Swift}, {\em RXTE}, {\it Chandra}, {\em Suzaku}, and {\em XMM--Newton}\, pointings (see Figure\,\ref{fig:phases}).
We modeled the phase evolution with a linear plus quadratic term. The corresponding coherent solution (valid until November 2011) is $P=8.43772007(9)$\,s and period derivative $\dot{P} = 1.1(2)\times 10^{-13}$\,s s$^{-1}$ ($\chi^2=132$ for 57 dof; at epoch MJD 55757.0). The above solution accuracy allows us to unambiguously extrapolate the phase evolution until the beginning of the next {\em Swift}\ visibility window which started in February 2012. The final resulting phase-coherent solution, once the latest 2012 observations are included, returns a best-fit period of $P=8.43772016(2)$\,s and period derivative of $\dot{P} = 8.3(2)\times 10^{-14}$\,s~s$^{-1}$ at MJD 55757.0 ($\chi^2=145$ for 67 dof). The above best-fit values imply a surface dipolar magnetic field of $B\simeq2.7\times10^{13}$\,G (at the equator), a characteristic age of $\tau_{\rm c}=P/2\dot{P}\simeq1.6$\,Myr, and a spin-down power L$_{\rm rot}=4\pi I \dot{P}/P^3\simeq1.7\times 10^{30}$ ${\rm erg\, s}^{-1}$ (assuming a neutron star radius of 10\,km and a mass of 1.4$M_{\odot}$). The final solution has a relatively high r.m.s. ($\sim$ 120\,ms) resulting in a best-fit reduced $\chi_{\nu}^2=2.1$. The 3$\sigma$ upper limit of the second derivative of the period was $\ddot{P}<5.8\times10^{-21}$s~s$^{-2}$ (but see also Livingstone et al. 2011 and Scholz et al. 2012).
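The quoted dipole field and characteristic age follow directly from the timing solution via the standard spin-down relations; a quick numerical check (with rounded constants, so the last digit may differ):

```python
P = 8.43772016      # spin period [s]
P_DOT = 8.3e-14     # period derivative [s s^-1]
SEC_PER_MYR = 3.156e13

# Equatorial surface dipole field of an orthogonal rotator,
# B ~ 3.2e19 * sqrt(P * Pdot) G
B = 3.2e19 * (P * P_DOT) ** 0.5

# Characteristic (spin-down) age, tau_c = P / (2 Pdot)
tau_c_myr = P / (2.0 * P_DOT) / SEC_PER_MYR

print(f"B = {B:.2e} G, tau_c = {tau_c_myr:.2f} Myr")
# B = 2.68e+13 G, tau_c = 1.61 Myr
```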
\section{Conclusions}
We have reported on the outburst evolution of the new magnetar Swift\,J1822.3--1606, which,
despite its relatively low magnetic field ($B=2.7\times10^{13}$\,G),
is in line with the outbursts observed for other magnetars with
higher dipolar magnetic fields.
We found that the current properties of the source can be reproduced if it now has an age of $\sim550$\,kyr, and was born with a toroidal crustal field of $7\times10^{14}$\, G, which has by now decayed by less than an order of magnitude.
The position of Swift\,J1822.3--1606\ in the $P$--$\dot P$ diagram is close to that of the ``low'' field magnetar
SGR\,0418$+$5729\, (\cite[Rea et al.\ 2010]{rea10}). As argued in more detail in Rea
et al. (2012), we note that the discovery of a second magnetar-like source with a magnetic field in
the radio-pulsar range strengthens the idea that magnetar-like behavior may be much more widespread than previously believed, and that it is related to the intensity and topology of the internal and surface toroidal components, rather than only to the surface dipolar field (\cite[Rea et al.\ 2010]{rea10}, \cite[Perna \& Pons\ 2011]{perna11}, \cite[Turolla et al.\ 2011]{turolla11}).
\section{Introduction}
Higher twist terms contribute to the nucleon structure functions at lower scales $Q^2$. The range
in which these terms may be safely neglected against the leading twist contributions partly depends
on the size of the experimental errors in the respective measurement. Highly precise data at low values
of $Q^2$ allow one to access these contributions, the detailed physical understanding of which is presently
still in an early stage. It has been outlined in Refs.~\cite{BBG,Blumlein:2008kz} how the higher twist
contributions can be extracted in a phenomenological way in the case of the structure functions $F_2(x,Q^2)$
and $F_L(x,Q^2)$ in the valence quark region. In this note we report on recent results of an improved
analysis. Another interesting question concerns the structure function $g_2(x,Q^2)$
in the polarized case, which has been measured to a higher precision during the last years \cite{DATA}.
Here we try to extract first information on the twist-3 contributions to $g_2(x,Q^2)$.
\section{Higher Twist Contributions to \boldmath $F_2^{p,d}(x,Q^2)$}
We have carried out a QCD analysis in the valence region including more recent data from JLAB following
earlier work \cite{BBG}. In the present analysis tails from sea-quarks and the gluon in the valence
region were dealt with based on the ABKM distributions \cite{Alekhin:2009ni}. Both the valence quark
distributions
$xu_v(x,Q^2_0)$ and $xd_v(x,Q^2_0)$ at $Q_0^2 = 4~\GeV^2$ are affected only very little. The values of
$\alpha_s(M_Z^2)$ change marginally w.r.t. the earlier analysis \cite{BBG}. We obtain~: $\alpha_s(M_Z^2) =
0.1148 \pm 0.0019~\text{(NLO)},~0.1134 \pm 0.0020~\text{(NNLO)},~0.1141 \pm 0.0021~\text{(N$^3$LO$^*$)}$. Here, the N$^3$LO$^*$-analysis
accounts for the three-loop Wilson coefficients and a Pad\'e-model for the non-singlet four-loop anomalous
dimension,
to which
we attached a $\pm 100 \%$ uncertainty, cf.~\cite{BBG} for details. Furthermore, we found that the
individual deep-inelastic data sets in the valence region yield stable values, in accordance
with the central value obtained moving from NLO to N$^3$LO$^*$. The present result agrees very well with
determinations of $\alpha_s(M_Z^2)$ in Refs.~\cite{Alekhin:2009ni,GJR,Alekhin:2012ig},
see also~\cite{Alekhin:2011ey}. A survey on the current status of $\alpha_s(M_Z^2)$ based on precision
measurements in different reactions has been given in \cite{Bethke:2011tr}. In the present analysis we
obtain a lower value of $\alpha_s$ than the world average, cf.~\cite{Bethke:2011tr}, and values being
obtained in \cite{MSTW,NNPDF} at NNLO. Reasons for the difference to the values given in \cite{MSTW,NNPDF}
have been discussed in Refs.~\cite{Alekhin:2012ig,Alekhin:2011ey} in detail. In particular, the partial
response of $\alpha_s$ in the case of the BCDMS and SLAC data in \cite{MSTW,NNPDF} turns out to be partly different
compared to the results in \cite{Alekhin:2009ni,GJR,Alekhin:2012ig}. There are also differences between
the analyses \cite{MSTW} and \cite{NNPDF} w.r.t. several data sets contributing.
The higher twist contributions can be determined by extrapolating the fit-results at leading twist
for $W^2 > 12.5~\GeV^2$ to the region $4 < W^2 < 12.5~\GeV^2, Q^2 \geq 4~\GeV^2$,
cf.~\cite{Blumlein:2008kz,BB12a}.\footnote{In Ref. \cite{Alekhin:2012ig} also higher twist contributions for $x$
below the
valence region have been determined.} The results for the coefficients $C_{\rm HT}^{p,d}(x)$
\begin{eqnarray}
F_2(x,Q^2) = F_2^{\tau=2}(x,Q^2)\left[\frac{O^{\rm TM}[F_2^{\tau=2}(x,Q^2)]}{F_2^{\tau=2}(x,Q^2)} + \frac{C_{\rm HT}(x)}{Q^2[\GeV^2]}
\right]
\end{eqnarray}
are shown in Figure~1, where we averaged over the respective range in $Q^2$. We applied the target mass
corrections \cite{Georgi:1976ve} to the leading twist contributions.\footnote{An unfolding of the target mass
corrections of the DIS world data for $F_2$ and $F_L$ including the JLAB data, has been performed in
\cite{Christy:2012tk} recently.} The result for the higher twist
coefficients for proton and deuteron targets depends on the order to which the leading twist distribution is
described. The higher twist terms become smaller moving from NLO to N$^3$LO$^*$. Within the present theoretical
and
experimental accuracy the curves stabilize for $x < 0.65$, while at larger values there are still differences.
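Schematically, the higher-twist ansatz above turns each $(x,Q^2)$ bin into an estimate of $C_{\rm HT}(x)$; the toy sketch below (illustrative numbers, not fit results, with the target-mass-corrected leading-twist prediction folded into a single array) shows the extraction at one fixed $x$:

```python
# Illustrative numbers at one fixed x-bin (not fit results from the paper)
Q2 = [4.0, 6.0, 10.0, 16.0]                 # GeV^2
F2_TW2 = [0.110, 0.105, 0.100, 0.097]       # target-mass-corrected leading twist
F2_EXP = [0.118, 0.110, 0.103, 0.099]       # "measured" values

# Each Q^2 point gives an estimate of C_HT(x); average over the Q^2 range
estimates = [(fe / ft - 1.0) * q2 for fe, ft, q2 in zip(F2_EXP, F2_TW2, Q2)]
C_HT = sum(estimates) / len(estimates)
```

As expected for a genuine $1/Q^2$ correction, the individual estimates agree across the $Q^2$ range.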
\restylefloat{figure}
\begin{figure}[H]
\begin{center}
\mbox{\epsfig{file=bluemlein_johannes_fig1.eps,height=7cm,width=7cm}}
\mbox{\epsfig{file=bluemlein_johannes_fig2.eps,height=7cm,width=7cm}}
\end{center}
\caption[]{
\label{FIG:1}
\small \sf
The empiric higher twist contributions to $F_2^{p,d}(x,Q^2)$ in the valence region, Eq.~(1),~extracted
by calculating the leading twist part at NLO, NNLO, and N$^3$LO$^*$, \cite{BB12a}.}
\end{figure}
\section{\boldmath $g_2^{\rm tw3}(x,Q^2)$}
\noindent
Higher twist contributions to the polarized structure function $g_1(x,Q^2)$ have been studied in
Refs.~\cite{Leader:2009tr,Blumlein:2010rn} in phenomenological approaches aiming on the twist-4 contributions.
However, the structure function $g_2(x,Q^2)$, together with other polarized electro-weak structure functions
\cite{Blumlein:1996tp,Blumlein:1996vs,Blumlein:1998nv}, receives also twist-3 contributions. $g_2(x,Q^2)$
obeys the Burkhardt-Cottingham relation \cite{Burkhardt:1970ti}
\begin{eqnarray}
\int_0^1 dx g_2(x,Q^2) = 0~.
\end{eqnarray}
Since the Wandzura-Wilczek relation \cite{Wandzura:1977qf} implies that the first moment of the
twist-2 part vanishes separately, also
\begin{eqnarray}
\int_0^1 dx g_2^{\rm tw3}(x,Q^2) = 0
\end{eqnarray}
holds. The errors on the present world data from E143, E155, HERMES and NMC \cite{DATA} on $g_2(x,Q^2)$ are
still
large, but one may nevertheless attempt to fit a profile in $x$.
\restylefloat{figure}
\begin{figure}[H]
\begin{center}
\mbox{\epsfig{file=bluemlein_johannes_fig3.eps,height=7cm,width=7cm}}
\end{center}
\caption[]{
\label{FIG2}
\small \sf
The twist-3 contributions to $g_2(x,Q^2)$ subtracting the twist-2 part according to the Wandzura-Wilczek
relation \cite{Wandzura:1977qf} using the result of \cite{Blumlein:2010rn} for the twist-2 contribution to
$g_1(x,Q^2)$ on experimental data from E143, E155, and HERMES \cite{DATA} fitting the shape (\ref{SHA}) (full
line). Open symbols refer to data in the region $Q^2 < 1~\GeV^2$. The dashed line shows the result of a
calculation at $Q^2 = 1~\GeV^2$ given in \cite{Braun:2011aw}.}
\end{figure}
In Ref.~\cite{Braun:2011aw} the parameterization
\begin{eqnarray}
g_2^{\rm tw3}(x) = A \left[ \ln(x) +(1-x) + \frac{1}{2}(1-x)^2\right]
+(1-x)^3\left[B - C(1-x) + D(1-x)^2\right]
\label{SHA}
\end{eqnarray}
has been proposed. Since the data points are measured at different values of $Q^2$, an evolution has
to be performed to a common scale. Furthermore, the target mass corrections \cite{Blumlein:1998nv}
have to be taken into account. In Figure~2 the results of the fit to $g_2^{\rm tw3}(x,Q^2)$ are presented
for $Q^2 = 3~\GeV^2$. We limited the analysis to the region $Q^2 > 1~\GeV^2$. The present errors are still
large and the data of E155 dominate in the fit. We may compare with a theoretical prediction given in
\cite{Braun:2011aw}. Indeed both results are quite similar.
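Note that the vanishing first moment, $\int_0^1 g_2^{\rm tw3}\,dx = 0$, fixes one parameter of the shape (\ref{SHA}): since $\int_0^1[\ln x + (1-x) + \tfrac{1}{2}(1-x)^2]dx = -1/3$ and $\int_0^1(1-x)^k dx = 1/(k+1)$, the parameters must obey $-A/3 + B/4 - C/5 + D/6 = 0$. A quick numerical check with illustrative (non-fit) parameter values:

```python
import math

A, B, C = 0.5, 1.0, 2.0                   # illustrative, not fit values
D = 6.0 * (A / 3.0 - B / 4.0 + C / 5.0)   # enforces a vanishing first moment

def g2_tw3(x):
    return (A * (math.log(x) + (1.0 - x) + 0.5 * (1.0 - x) ** 2)
            + (1.0 - x) ** 3 * (B - C * (1.0 - x) + D * (1.0 - x) ** 2))

# Midpoint rule; the log singularity at x = 0 is integrable
n = 200_000
moment = sum(g2_tw3((i + 0.5) / n) for i in range(n)) / n
```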
The twist-3 contribution to the structure function $g_1(x,Q^2)$ can be obtained from that to $g_2(x,Q^2)$
by the integral-relation \cite{Blumlein:1998nv}
\begin{eqnarray}
g_1^{\rm tw3}(x,Q^2) = \frac{4x^2 M^2}{Q^2} \left[g_2^{\rm tw3}(x,Q^2) - 2 \int_x^1 \frac{dy}{y} g_2^{\rm
tw3}(y,Q^2) \right]~,
\end{eqnarray}
cf.~\cite{BB12b}. Due to the large errors of the data, the present results are of a more qualitative
character. To study the twist-3 contributions both to the structure functions $g_2(x,Q^2)$ and
$g_1(x,Q^2)$ in detail, a high luminosity machine, like the planned EIC \cite{Boer:2011fh}, is needed.
\section{Conclusions}
\noindent
We performed a re-analysis of the present deep-inelastic world data on proton and deuteron
targets for the structure function $F_2(x,Q^2)$ in the valence region $x > 0.3$ accounting for remaining
non-valence tails, which were calculated using the ABKM09 distributions \cite{Alekhin:2009ni}.
We obtain a slightly lower value of $\alpha_s(M_Z^2)$ than in our previous analysis \cite{BBG}
at N$^3$LO$^*$, however, far within the $1\sigma$ error range. Very stable predictions are obtained going
from NLO to N$^3$LO$^*$, both for the valence distribution functions and $\alpha_s(M_Z^2)$. The
values being obtained for the different sub-sets of experimental data in the present
fit are well in accordance with our global result. We do not confirm the significant differences reported by
MSTW between the SLAC $ep$ and $ed$ data at NNLO \cite{MSTW}. We also disagree with the large value
of NNPDF \cite{NNPDF} for the BCDMS data at NLO, which also contradicts the corresponding result by MSTW
\cite{MSTW}. Our results are in agreement with those of the GJR collaboration \cite{GJR} and the singlet
analyses \cite{Alekhin:2009ni,Alekhin:2012ig}. We obtained an update of the dynamical higher twist contributions
to $F_2^{p,d}(x,Q^2)$ in the valence region,
which depends on the order to which the leading twist contributions were calculated. The effect stabilizes
including corrections up to N$^3$LO$^*$ in the range $0.3 < x \lesssim 0.65$. At larger values of $x$ still higher
order corrections may be needed. A first estimate on the quarkonic twist-3 contributions to the polarized
structure function $g_2(x,Q^2)$ is given in a fit to the available world data on $g_2(x,Q^2)$.
The contributions to $g_1(x,Q^2)$ are obtained by an integral relation, cf. Ref.~\cite{Blumlein:1998nv}.
\section{Acknowledgments}
\noindent
For discussions we would like to thank S. Alekhin. This work has been supported in part by DFG
Sonderforschungsbereich Transregio 9, Computergest\"utzte Theoretische Teilchenphysik, and EU
Network {\sf LHCPHENOnet} PITN-GA-2010-264564.
\section{Introduction and Motivation}
What is the best way to partition a cake into two pieces for two different people? This question, even when properly quantified, will not have a clear universal answer. However, it is conceivable to pose a number of axioms that one wishes a cake-division rule to satisfy and study
the set of all cake-subdivision rules satisfying these axioms. This axiomatic method has been very effectively used in cooperative game theory and economics (see e.g. \cite{har, kapeller, nash, shapley}). A nice by-product of the axiomatic approach is that it moves the discussion from `what should we do?' to `what are desirable properties?' which often leads to more insight. In the same manner, we ask a question that was the original motivation of this paper.
\begin{quote}
\textbf{Question.} Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$. What is the `best' way to average $f$ over a given scale? What are natural desirable properties that one could require of such an averaging procedure and which averaging procedures are characterized by these properties?
\end{quote}
In the spirit of the axiomatic method, we will pose a number of desirable properties and then investigate what these properties imply. The various symmetries of $\mathbb{R}^n$ should be reflected in the averaging method: in particular, we will focus on the special case
where
$$ \mbox{average of}~f~\mbox{at a point}~x = \int_{\mathbb{R}^n}{ f(x+y) u(y) dy},$$
where $u:\mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ is a nonnegative, radial function with $L^1-$norm $\|u\|_{L^1(\mathbb{R}^n)}=1$. Moreover, we will assume that the averaging is supposed to happen at a fixed scale; we will do so by imposing the condition that a certain moment is fixed, i.e.
$$ \int_{\mathbb{R}^n}{ |x|^{\alpha} u(x) dx} = \mbox{fixed}.$$
However, even with all these restrictions, there are still a large number of functions $u$ that could conceivably be used. This question has been actively studied in \textit{scale-space theory} (see \cite{babaud, linde, linde2, yu}), a theoretical branch of image processing concerned with the same question: how should one properly smooth an image? In this field, the Gaussian is the canonical choice:
\begin{quote}
``A notable coincidence between the different
scale-space formulations that have been stated is that the Gaussian
kernel arises as a unique choice for a large number of different
combinations of underlying assumptions (scale-space axioms).'' (Lindeberg \cite{linde}, 1997)
\end{quote}
We were motivated by trying to understand the implications of a new \textbf{axiom}: `a convolution at a certain scale should be as smooth as possible'. Obviously, this can be interpreted in many ways -- a very natural way is to look for the function $u$ satisfying all the constraints above for which the constant $c_u$ in the inequality
$$ \forall ~f \in L^2(\mathbb{R}^n) \qquad \| \nabla (u*f)\|_{L^2(\mathbb{R}^n)} \leq c_u \|f\|_{L^2(\mathbb{R}^n)}$$
is as small as possible. Using the Fourier-Transform, we see that, up to a universal constant $c_n$,
$$ \| \nabla (u*f)\|^2_{L^2(\mathbb{R}^n)} = c_{n} \int_{\mathbb{R}^n} |\xi|^2 |\widehat{u}(\xi)|^2 |\widehat{f}(\xi)|^2 d\xi \leq c_n\| \xi \cdot \widehat{u}(\xi) \|_{L^{\infty}(\mathbb{R}^n)}^2 \|f\|_{L^2(\mathbb{R}^n)}^2.$$
It is not too difficult to see that this constant is sharp (since $\widehat{u}(\xi)$ is continuous, we can construct a function $f$ concentrating its $L^2-$mass close to a point where $\xi \cdot \widehat{u}(\xi)$ assumes its extremum). So the question is simply: which function minimizes $ \| \xi \cdot \widehat{u}(\xi) \|_{L^{\infty}}$ among all radial functions with normalized $L^1-$mass and a normalized moment (controlling the scale)?
This is the question that we address in this paper. However, we emphasize that there are many other interesting questions in the vicinity. There are other ways of studying the oscillation of a function than $\|\nabla^s f\|$ and other function spaces than $L^2$ in which one could measure the size of a function and its derivative. Finally, we note one particularly interesting problem that arises in $n=1$ dimensions when one demands $u$ to be supported in $(-\infty, 0]$. This question is of particular importance for time series: how would one compute the average of a function when one cannot look into the future? This case is much less understood: there are arguments in favor of the exponential distribution \cite{schonberg, steini}, Gaussian constructions \cite{linde2} and intermediate constructions \cite{vickrey}. It would be very interesting to have a better understanding of this case, also from the perspective taken in this paper.
\section{The Results}
\subsection{An Uncertainty Principle.} We state the most general form of the statement; the case most of interest to us throughout the rest of the paper is $(n,\beta) = (1,1)$. The case $\beta \neq 1$ corresponds to either higher derivatives (if $\beta \in \mathbb{N}$) or fractional derivatives (if $\beta \notin \mathbb{N}$). We know very little about these cases.
\begin{theorem}[Uncertainty Principle] For any $\alpha > 0$ and $\beta > n/2$, there exists $c_{\alpha, \beta,n} > 0$ such that for all $u \in L^1(\mathbb{R}^n)$
$$ \| |\xi|^{\beta} \cdot \widehat{u}\|^{\alpha}_{L^{\infty}(\mathbb{R}^n)} \cdot \| |x|^{\alpha} \cdot u \|^{\beta}_{L^1(\mathbb{R}^n)} \geq c_{\alpha, \beta,n} \|u\|_{L^1(\mathbb{R}^n)}^{\alpha + \beta}.$$
\end{theorem}
This inequality shows that fixing the $L^1-$mass to be $\|u\|_{L^1(\mathbb{R}^n)} = 1$ and fixing any moment leads to a universal lower bound on
how small $\||\xi|^{\beta} \cdot \widehat{u}(\xi)\|_{L^{\infty}(\mathbb{R}^n)}$ can be. This shows that our axiom for the averaging operation is meaningful: for any
averaging function $u$ (having fixed scale and $L^1-$norm) there is indeed a frequency $\xi$ such that $u * \exp(i \xi x)$ is not all that small.
Somewhat to our surprise, we were not able to locate this uncertainty principle among the large number of results that have been obtained in this area (see e.g. \cite{amrein, babenko, beckner, beckner2, bene, bene2, bened, bourgain, cohn, cow, dreier, dreier2, ehm, feig, folland, gneiting, gobber, gobber2, gon, gon2, gon3, gorb, gorb2, hardy, heinig, hirsch, hogan, johan, martini, mor}). Indeed, it seems that most uncertainty principles have the lower bound in $L^2$. Two somewhat related inequalities are given by a special case of the Cowling-Price uncertainty principle \cite{cow} stating that for any $\alpha > 0$ and $\beta > 1/2$
$$ \| |\xi|^{\beta} \cdot \widehat{u}\|^{\alpha + \frac12}_{L^{\infty}(\mathbb{R})} \cdot \| |x|^{\alpha} \cdot u \|^{\beta - \frac12}_{L^1(\mathbb{R})} \geq c_{\alpha, \beta} \|u\|_{L^2(\mathbb{R})}^{\alpha + \beta}$$
and an inequality of Laeng \& Morpurgo \cite{laeng}
$$ \| \xi \cdot \widehat{u}\|^{2}_{L^{2}(\mathbb{R})} \cdot \| |x|^{2} \cdot u \|_{L^1(\mathbb{R})} \geq c \|u\|_{L^1(\mathbb{R})} \|u\|_{L^2(\mathbb{R})}^2$$
which has some resemblance to our inequality for $(n,\alpha, \beta)= (1,2,1)$
$$ \| \xi \cdot \widehat{u}\|^{2}_{L^{\infty}(\mathbb{R})} \cdot \| |x|^{2} \cdot u \|_{L^1(\mathbb{R})} \geq c \|u\|_{L^1(\mathbb{R})}^3.$$
\subsection{The Characteristic Function.} From now on, we will restrict ourselves to trying to understand the extremizer in the case $(n,\beta) = (1,1)$. Other cases may be just as interesting.
Considering the initial motivation of finding the `best' kernel for the purpose of smoothing functions, an interesting choice is given by the characteristic function of an interval that is symmetric around the origin -- using the dilation symmetry, we can restrict ourselves to the case
$$ u(x) = \chi_{[-1/2, 1/2]}.$$
This function does indeed lead to a very small constant in the uncertainty principle: in particular, as soon as $\alpha \geq 1.38$, the characteristic function leads to a smaller constant than the Gaussian. We prove that it is a local minimizer among even functions $u:[-1/2,1/2] \rightarrow \mathbb{R}$ for some parameters.
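The comparison with the Gaussian can be made concrete. With the normalization $\widehat{u}(\xi) = \int u(x) e^{-2\pi i \xi x} dx$ used later in the paper, the box kernel has $\|\xi \cdot \widehat{u}\|_{L^{\infty}} = 1/\pi$ and $\||x|^{\alpha} u\|_{L^1} = 2^{-\alpha}/(\alpha+1)$, while for the Gaussian $e^{-\pi x^2}$ both factors are explicit. The following numerical sketch (not part of the proofs; the crossover value is observed, not asserted) compares the resulting constants in the uncertainty principle for $(n,\beta)=(1,1)$.

```python
import math

def box_constant(alpha):
    # u = chi_[-1/2,1/2]:  ||xi u^||_inf = 1/pi,  || |x|^a u ||_1 = 2^-a/(a+1),  ||u||_1 = 1
    return math.pi ** -alpha * 2 ** -alpha / (alpha + 1)

def gauss_constant(alpha):
    # u(x) = exp(-pi x^2) is its own Fourier transform under e^{-2 pi i xi x}:
    # ||xi u^||_inf = (2 pi e)^(-1/2),  || |x|^a u ||_1 = pi^(-(a+1)/2) Gamma((a+1)/2)
    peak = (2 * math.pi * math.e) ** -0.5
    moment = math.pi ** (-(alpha + 1) / 2) * math.gamma((alpha + 1) / 2)
    return peak ** alpha * moment

for alpha in (1.0, 1.2, 1.38, 1.5, 2.0, 3.0):
    b, g = box_constant(alpha), gauss_constant(alpha)
    print(f"alpha = {alpha}: box {b:.6f}  gauss {g:.6f}  box smaller: {b < g}")
```

At $\alpha = 1$ the Gaussian yields the smaller constant, at $\alpha = 2$ the characteristic function does; the crossover sits near $\alpha \approx 1.38$, consistent with the claim above.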
\begin{theorem}[Characteristic Function as Local Minimizer] Let $(n,\beta) =(1,1)$ and $\alpha \in \left\{2,3,4,5,6\right\}$. The characteristic function $u(x) = \chi_{[-1/2,1/2]}(x)$ is a local minimizer in the class of even, smooth functions $f:[-1/2, 1/2] \rightarrow \mathbb{R}$.
\end{theorem}
The proof is based on the lucky confluence of several factors:
\begin{enumerate}
\item if $ u(x) = \chi_{[-1/2, 1/2]},$ then $\xi \cdot \widehat{u}(\xi)$ assumes its extrema on $\mathbb{Z} + 1/2$.
\item $\widehat{u}(\xi)$ is band-limited: its Fourier transform is supported on $[-1/2, 1/2]$;
\item the Shannon-Whittaker reconstruction formula allows us to reconstruct such a band-limited function from equally spaced function values as long as we can sample with density at least 1;
\item and all the arising computations can be carried out.
\end{enumerate}
The proof of Theorem 2 requires a Lemma that may be interesting in its own right.
Let $f: [-1/2, 1/2] \rightarrow \mathbb{R}$ be an even, smooth function. We introduce the quantity
$$ \max(\widehat{f}) = \max \left\{ \sup_{k \in \mathbb{N}} \left(2k+\frac{1}{2}\right)\widehat{f}\left(2k+\frac{1}{2}\right), -\inf_{k \in \mathbb{N}} \left(2k + \frac{3}{2}\right)\widehat{f}\left(2k +\frac{3}{2}\right) \right\}.
$$
This quantity arises naturally in the stability analysis. As it turns out, we have the following sharp inequality (equality is attained for constant functions).
\begin{lem} Let $\alpha \in \left\{2,3,4,5,6\right\}$ and let $f:[-1/2, 1/2] \rightarrow \mathbb{R}$ be smooth and even. We have
$$ \max(\widehat{f}) \geq \frac{\alpha+1}{\alpha \pi}\int_{-1/2}^{1/2} \left(1-|2x|^{\alpha}\right) f(x) dx.$$
\end{lem}
This seems to be quite a curious statement -- it would be interesting to understand it better; we can verify it in some special cases, $\alpha \in \left\{2,3,4,5,6\right\}$, but it does seem like it should be a special instance of a more general principle.
It is quite conceivable that the Lemma holds for all integers $\alpha \geq 2$ or possibly even for all real numbers $\alpha \geq 2$. A necessary condition is given in the next section.
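A quick numerical sanity check of the Lemma (a sketch, not part of the proofs): for $f \equiv 1$ one has $\widehat{f}(\xi) = \sin(\pi\xi)/(\pi\xi)$, so every term defining $\max(\widehat{f})$ equals $1/\pi$ and equality holds; for other even test functions the inequality should hold with some slack. The script evaluates both sides by Simpson quadrature, taking the index $k$ in the suprema to run over $k = 0, 1, 2, \dots$ (as the proof of Theorem 2 requires).

```python
import math

def simpson(g, n=10000):
    # composite Simpson rule on [-1/2, 1/2]
    h = 1.0 / n
    s = 0.0
    for i in range(n + 1):
        x = -0.5 + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * g(x)
    return s * h / 3

def fhat(f, xi):
    # Fourier transform of the even, real function f at frequency xi (real part)
    return simpson(lambda x: f(x) * math.cos(2 * math.pi * xi * x))

def max_fhat(f, K=20):
    # the quantity max(f^) from the paper, truncating the suprema at k < K
    up = max((2 * k + 0.5) * fhat(f, 2 * k + 0.5) for k in range(K))
    lo = max(-(2 * k + 1.5) * fhat(f, 2 * k + 1.5) for k in range(K))
    return max(up, lo)

def rhs(f, alpha):
    return (alpha + 1) / (alpha * math.pi) * simpson(
        lambda x: (1 - abs(2 * x) ** alpha) * f(x))

for f, name in ((lambda x: 1.0, "f = 1"), (lambda x: math.cos(math.pi * x), "f = cos(pi x)")):
    M = max_fhat(f)
    for alpha in (2, 3, 4, 5, 6):
        print(name, "alpha =", alpha, "holds:", M >= rhs(f, alpha) - 1e-6)
```

For $f = \cos(\pi x)$ and $\alpha = 2$ the two sides are remarkably close ($0.25$ versus roughly $0.246$), which illustrates how little room the Lemma leaves.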
\subsection{A Sign Pattern in $_1F_2$?} At this point, it is natural to wonder about the restriction $\alpha \in \left\{2,3,4,5,6\right\}$. As far as we can tell,
any case $\alpha \in \mathbb{N}$ can be decided by a finite procedure that consists of analyzing the sign pattern of an explicit polynomial: this poses no difficulty for $\alpha \in \left\{2,3,4,5,6\right\}$ and it does seem like it could be easily done for individual larger values of $\alpha$ as well. However, we have not found a common
mechanism by which all of them can be established simultaneously (or, put differently, a reason \textit{why} they should have such a sign pattern). This seems to hinge on an interesting
sign pattern structure in a hypergeometric function.
\begin{proposition} Let $\alpha > 0$. We define, for integers $k \geq 1$, the sequence
$$ a_k = ~_1F_2\left( \frac{1+\alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{\pi^2}{16} (2k-1)^2 \right).$$
\emph{If} $a_k \geq 0$ for odd values of $k$ and $a_k \leq 0$ for even values of $k$, then for all smooth, even functions $f:[-1/2, 1/2] \rightarrow \mathbb{R}$,
$$ \max(\widehat{f}) \geq \frac{\alpha+1}{\alpha \pi}\int_{-1/2}^{1/2} \left(1-|2x|^{\alpha}\right) f(x) dx.$$
Moreover, the characteristic function $\chi_{[-1/2,1/2]}(x)$ is a local minimizer among smooth functions supported in $[-1/2,1/2]$ for that value of $\alpha$ and $\beta=1$.
\end{proposition}
For any $\alpha \in \mathbb{N}$, the hypergeometric function reduces to a trigonometric polynomial that is not terribly difficult to analyze. However, we have not found a uniform way of treating all values of $\alpha$. It is also conceivable that the result holds for all $\alpha \geq 2$. Numerically, it seems to fail for $\alpha < 2$ (though this becomes harder to check as $\alpha$ approaches 2). The hypergeometric function is given by
$$ ~_1F_2\left( \frac{1+\alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; x\right) = \sum_{n=0}^{\infty} \frac{\frac{1+\alpha}{2}}{\frac{1+\alpha}{2} + n} \frac{1}{(\frac{3}{2})_n} \frac{x^n}{n!}.$$
It is hard to see sign patterns from this form. We also have the identity (e.g. \cite{cho})
$$ \int_{0}^{x} J_{\frac{1}{2}}(t) \, t^{\alpha - \frac{1}{2}} \, dt = c_{\alpha} \cdot x^{\alpha+1} ~ _1F_2\left( \frac{\alpha + 1}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{x^2}{4} \right)$$
where $J_{1/2}$ is the Bessel function of order $1/2$ and $c_{\alpha} > 0$ is a constant. This relates the problem to the oscillation behavior of a Bessel function. Askey \cite{askey} remarks that for $\alpha=1$, there is no sign change. Other identities exist: introducing a modified Bessel function
$$ \mathcal{J}_{\alpha}(x) =~_0F_1\left(\alpha+1; -\frac{x^2}{4} \right) = \Gamma(\alpha+1) \left( \frac{x}{2} \right)^{-\alpha} J_{\alpha}(x),$$
we have the following identity (from a more general result in Cho \& Yun \cite{cho})
\begin{align*}
~_1F_2\left( \frac{1+ \alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{x^2}{4}\right) &= \mathcal{J}_{1/2}\left(\frac{x}{2}\right)^2 \\
&+\sum_{n=1}^{\infty} \frac{2n+1}{n+1} \frac{1}{(3/2)_n^2} \frac{((1-\alpha)/2)_n}{((3-\alpha)/2)_n} \left( \frac{x}{4} \right)^{2n} \mathcal{J}_{n + \frac12}^2 \left(\frac{x}{2} \right).
\end{align*}
We also refer to Askey \cite{askey2}, Cho \& Yun \cite{cho}, Fields \& Ismail \cite{fields} and Gasper \cite{gasper}.
It seems that there are known criteria that can be used to prove that such expressions do not change sign. In contrast, we are interested in highly controlled sign changes.
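The conjectured sign pattern is easy to probe numerically. By the Lemma in the proof section, $a_k$ has the same sign as the Fourier coefficient $\int_{-1/2}^{1/2}(1-|2x|^{\alpha})\cos(2\pi(k-\frac12)x)\,dx$, which the sketch below approximates by Simpson quadrature (a numerical illustration only, not a proof).

```python
import math

def coeff(alpha, k, n=8000):
    # Simpson approximation of int_{-1/2}^{1/2} (1 - |2x|^alpha) cos(2 pi (k - 1/2) x) dx,
    # which has the same sign as a_k = 1F2(...; -pi^2 (2k-1)^2 / 16)
    h = 1.0 / n
    s = 0.0
    for i in range(n + 1):
        x = -0.5 + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * (1 - abs(2 * x) ** alpha) * math.cos(2 * math.pi * (k - 0.5) * x)
    return s * h / 3

for alpha in (2, 3, 4, 5, 6):
    pattern = "".join("+" if coeff(alpha, k) > 0 else "-" for k in range(1, 9))
    print("alpha =", alpha, ":", pattern)   # alternating +-+-... as conjectured
```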
\subsection{Open Problems.} There are many open problems; we only list a few.
\begin{enumerate}
\item Does the uncertainty principle admit an extremizer? Is it compactly supported? Is it possible to show, for some parameters $n,\alpha, \beta$, that the extremizer $u$ has Fourier decay $|\widehat{u}(\xi)| \sim |\xi|^{-\beta}$? For small values of $\beta$, this would imply that an extremizer need not be continuously differentiable.
\item Is the extremizer given by the characteristic function when $\beta =1$ and $\alpha \geq 2$? Or maybe for integer $\alpha \geq 2$? Is it a global extremizer among functions $u:[-1/2, 1/2] \rightarrow \mathbb{R}$ that do not vanish in $[-1/2,1/2]$?
\item Is it true that for any $\alpha \geq 2$ (or maybe $2 \leq \alpha \in \mathbb{N}$?), the sequence
$$ a_k = ~_1F_2\left( \frac{1+\alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{\pi^2}{16} (2k-1)^2 \right)$$
alternates sign?
\item What can be said about the case $\beta \neq 1$? For $\| |\xi|^{\beta} \cdot \widehat{u}\|_{L^{\infty}}$ to be finite, we require $|\widehat{u}(\xi)| \lesssim (1+|\xi|^{\beta})^{-1}$ which guarantees improved regularity for larger $\beta$. How does the regularity of the extremizer depend on $\beta$?
\item What can be said about the extremizer when $n=1$ and we restrict $u$ to be supported on the half line $(-\infty, 0]$? This is relevant when one is unable to look into the future; for an example from economics, see \cite{vickrey}.
\item These questions are just as interesting in higher dimensions but it is less clear what one could expect an extremizer to look like. It is not clear whether the characteristic function of a disk plays a similar role -- its Fourier transform is connected to the Bessel function which already arose here as well in connection with $_1 F_2$: are there other sign identities attached to it or are these connections restricted to the one-dimensional case? Is the alternating sign pattern observed for $_1 F_2$ a special instance of a more general phenomenon in higher dimensions?
\end{enumerate}
\section{Proofs}
\subsection{Proof of Theorem 1} Uncertainty principles are often a consequence of some hidden form of compactness; our proof is in a similar spirit. We first show that the inequality is invariant under multiplication with scalars and dilation. This allows us to assume without loss of generality that
$$\|u\|_{L^1(\mathbb{R}^n)} = 1 \qquad \mbox{and} \qquad \| |x|^{\alpha} \cdot u\|_{L^1(\mathbb{R}^n)}=1$$
and it remains to show that $\| |\xi|^{\beta} \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)}$ is not too small. The inequality is only interesting when the quantity is finite. Then we can use $\| \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)} \leq \|u\|_{L^1(\mathbb{R}^n)} = 1$ close to the origin and $|\widehat{u}(\xi)| \lesssim |\xi|^{-\beta}$ away from the origin to conclude that $\widehat{u} \in L^2(\mathbb{R}^n)$ and thus $u \in L^2(\mathbb{R}^n)$.
The normalization
$$\| |x|^{\alpha} \cdot u\|_{L^1(\mathbb{R}^n)}= 1 = \| u\|_{L^1(\mathbb{R}^n)}$$
implies that a nontrivial amount of $L^1-$mass is at distance at most $\sim_{\alpha} 1$ from the origin. This fact combined with the Cauchy-Schwarz inequality shows that $\|u\|_{L^2} \gtrsim_{\alpha} 1$. The condition $|\widehat{u}(\xi)| \lesssim |\xi|^{-\beta}$ implies that the $L^2-$mass cannot be located at arbitrarily high frequencies (depending on $\alpha, \beta$) since $|\xi|^{-2 \beta}$ is integrable when $\beta > n/2$. Thus some of the $L^2-$mass lies in a bounded region around the origin, and then $\| |\xi|^{\beta} \widehat{u}\|_{L^{\infty}}$ is not too small unless the mass is all concentrated around the origin, which is not possible because $\| \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)} \leq \|u\|_{L^1(\mathbb{R}^n)} = 1$. This concludes the argument.
\begin{proof}[Proof of Theorem 1] We first note the behavior of the inequality under rescaling by constants and dilations. If
$$v(x) = c \cdot u(x/L) \quad \mbox{for some} \quad c,L > 0,$$
then
\begin{align*}
\| |\xi|^{\beta} \cdot \widehat{v}(\xi)\|_{L^{\infty}}^{\alpha} &= \| |\xi|^{\beta} \cdot \left( c L^n \widehat{u}(L \xi)\right) \|_{L^{\infty}}^{\alpha} =
c^{\alpha} \| |\xi|^{\beta} L^{\beta} L^{n-\beta} \widehat{u}(L \xi) \|_{L^{\infty}}^{\alpha}\\
&= c^{\alpha} L^{(n-\beta)\alpha} \| |L \xi|^{\beta} \widehat{u}(L \xi) \|_{L^{\infty}}^{\alpha} = c^{\alpha} L^{(n-\beta)\alpha} \| |\xi|^{\beta} \cdot \widehat{u}(\xi) \|_{L^{\infty}}^{\alpha}
\end{align*}
as well as
\begin{align*}
\| |x|^{\alpha} \cdot v \|^{\beta}_{L^1(\mathbb{R}^n)} &= c^{\beta} \left( \int_{\mathbb{R}^n} \left||x|^{\alpha} u\left(\frac{x}{L}\right)\right| dx\right)^{\beta} \\
&= c^{\beta} \cdot L^{\alpha \beta} \left( \int_{\mathbb{R}^n} \left| \left|\frac{x}{L}\right|^{\alpha} u\left(\frac{x}{L}\right)\right| dx\right)^{\beta} \\
&= c^{\beta} \cdot L^{\alpha \beta + n\beta} \cdot \| |x|^{\alpha} \cdot u\|^{\beta}_{L^1(\mathbb{R}^n)}
\end{align*}
and
\begin{align*}
\|v\|_{L^1(\mathbb{R}^n)}^{\alpha + \beta} &= c^{\alpha + \beta} L^{n(\alpha + \beta)} \|u\|^{\alpha + \beta}_{L^1(\mathbb{R}^n)}.
\end{align*}
Thus, the inequality is invariant under multiplication with scalars and dilation.
We use these symmetries to assume without loss of generality that
$$\|u\|_{L^1(\mathbb{R}^n)} = 1 \qquad \mbox{and} \qquad \| |x|^{\alpha} \cdot u\|_{L^1(\mathbb{R}^n)}=1.$$
These two identities combined imply with Markov's inequality that for any $y > 0$,
$$1 = \int_{\mathbb{R}^n}{|x|^{\alpha} |u(x)| dx} \geq y^{\alpha} \int_{|x| \geq y}{ |u(x)| dx}$$
implying that there is some mass around the origin
$$ \int_{|x| \leq y}{|u(x)| dx} \geq 1- \frac{1}{y^{\alpha}}$$
and, in particular, for $Y=10^{1/\alpha}$, we have
$$ \int_{|x| \leq Y}{|u(x)| dx} \geq \frac{9}{10}.$$
We note that
$$ |\widehat{u}(\xi)| \leq \min\left\{ 1, \frac{\||\xi|^{\beta} \cdot \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)}}{|\xi|^{\beta}} \right\},$$
where the first inequality follows from $\|\widehat{u}\|_{L^{\infty}(\mathbb{R}^n)} \leq \|u\|_{L^1(\mathbb{R}^n)}$ and the second one is merely the definition of the $L^{\infty}-$norm. As soon as $\beta > n/2$, this shows that
\begin{align*}
\int_{\mathbb{R}^n}{ | \widehat{u}(\xi)|^2 d\xi} &\lesssim 1 + \||\xi|^{\beta} \cdot \widehat{u}\|^2_{L^{\infty}(\mathbb{R}^n)} \int_{1}^{\infty} \frac{1}{|\xi|^{2 \beta}} |\xi|^{n-1} d\xi \\
&\lesssim 1 + \||\xi|^{\beta} \cdot \widehat{u}\|^2_{L^{\infty}(\mathbb{R}^n)}.
\end{align*}
In particular, if $\||\xi|^{\beta} \cdot \widehat{u}\|^2_{L^{\infty}(\mathbb{R}^n)}$ is finite (the only case of interest here), then $\widehat{u} \in L^2(\mathbb{R}^n)$ and thus $u \in L^2(\mathbb{R}^n)$.
Using H\"older's inequality, we get that, using $\omega_n$ to denote the volume of the unit ball in $\mathbb{R}^n$,
$$ \frac{9}{10} \leq \int_{|x| \leq Y}{|u(x)| dx} \leq \omega_n^{1/2} |Y|^{n/2} \left( \int_{|x| \leq Y}{u(x)^2 dx} \right)^{1/2}$$
and thus
$$ \int_{\mathbb{R}^n}{ |\widehat{u}(\xi)|^2 d\xi } = \int_{\mathbb{R}^n}{u(x)^2 dx} \geq \int_{|x| \leq Y}{u(x)^2 dx} \geq \frac{1}{\omega_n Y^{n}}.$$
Our goal is to show that $\| |\xi|^{\beta} \cdot \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)}$ cannot be arbitrarily small. If $\| |\xi|^{\beta} \cdot \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)} \geq 1$, then we have achieved the goal. We can therefore assume without loss of generality that $\| |\xi|^{\beta} \cdot \widehat{u}\|_{L^{\infty}(\mathbb{R}^n)} \leq 1$. Then
$$ |\widehat{u}(\xi)| \leq \min\left\{ 1, \frac{1}{|\xi|^{\beta}} \right\}$$
implies, for any $c_1 > 0$,
$$ \int_{|\xi| \geq c_1}{|\widehat{u}(\xi)|^2} \leq n\omega_n \int_{c_1}^{\infty}{\frac{|\xi|^{n-1}}{|\xi|^{2\beta}} d\xi} \leq \frac{2n\omega_n}{2\beta -1}\frac{1}{c_1^{2\beta -1}}.$$
This can be made arbitrarily small by making $c_1$ sufficiently large. Using
\begin{align*}
\int_{|\xi| \leq c_1}{ |\widehat{u}(\xi)|^2 d\xi } &= \int_{\mathbb{R}^n}{ |\widehat{u}(\xi)|^2 d\xi } - \int_{|\xi| \geq c_1}{ |\widehat{u}(\xi)|^2 d\xi }\\
&\geq \frac{1}{\omega_n Y^n} - \int_{|\xi| \geq c_1}{ |\widehat{u}(\xi)|^2 d\xi }
\end{align*}
we see that for some constant $c_1=c_1(Y,\beta,n)$ depending only on $Y$ (and thus only on $\alpha$), $\beta$ and $n$,
$$ \int_{|\xi| \leq c_1}{ |\widehat{u}(\xi)|^2 d\xi } \geq \frac{1}{2\omega_n Y^n}.$$
Using $|\widehat{u}(\xi)| \leq \|u\|_{L^1(\mathbb{R}^n)}=1$, we deduce that
$$\int_{|\xi| \leq c_1}{ |\widehat{u}(\xi)| d\xi } \geq \int_{|\xi| \leq c_1}{ |\widehat{u}(\xi)|^2 d\xi } \geq \frac{1}{2\omega_n Y^n}$$
and that for suitable $0 < c_2 < c_1$ depending only on $Y$ and $n$,
$$ \int_{c_2 < |\xi| \leq c_1}{ |\widehat{u}(\xi)| d\xi } \geq \frac{1}{4\omega_n Y^n} > 0.$$
This, in turn, shows that
\begin{align*}
\frac{1}{4\omega_n Y^n} &\leq \int_{c_2 \leq |\xi| \leq c_1}{ |\widehat{u}(\xi)| d\xi } \leq \frac{1}{c_2^{\beta}} \int_{c_2 \leq |\xi| \leq c_1}{ |\xi|^{\beta} |\widehat{u}(\xi)| d\xi } \leq \frac{\omega_n c_1^n}{c_2^{\beta}} \| |\xi|^{\beta} \cdot \widehat{u}(\xi) \|_{L^{\infty}}.
\end{align*}
Since all the arising constants depend only on $\alpha, \beta$ and $n$, the result follows. \end{proof}
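The scaling invariance used at the start of the proof can be sanity-checked on a family where every quantity is explicit. For $v(x) = c\, e^{-\pi (x/L)^2}$ in dimension $n=1$ with $\beta = 1$ one has $\widehat{v}(\xi) = cL\, e^{-\pi L^2 \xi^2}$ (under the normalization $\widehat{u}(\xi) = \int u\, e^{-2\pi i \xi x} dx$), and all three norms are closed-form. The sketch below confirms that the ratio of the two sides is independent of $c$ and $L$.

```python
import math

def ratio(c, L, alpha):
    # v(x) = c exp(-pi (x/L)^2),  v^(xi) = c L exp(-pi L^2 xi^2)
    xi_norm = c * (2 * math.pi * math.e) ** -0.5   # || xi v^ ||_inf, max at xi = 1/(L sqrt(2 pi))
    moment = c * L ** (alpha + 1) * math.pi ** (-(alpha + 1) / 2) * math.gamma((alpha + 1) / 2)
    l1 = c * L                                     # || v ||_1
    return xi_norm ** alpha * moment / l1 ** (alpha + 1)

base = ratio(1.0, 1.0, 2.5)
for c, L in ((0.3, 1.0), (2.0, 0.25), (5.0, 7.0)):
    print(ratio(c, L, 2.5), "==", base)
```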
\section{Proof of Theorem 2}
We first perform a local stability analysis to understand the type of statement we need to prove. We will then state a slight reformulation of the Shannon-Whittaker reconstruction formula and prove the desired stability result in the simplest possible case $\alpha = 2$. Indeed, in this case all the computations can be carried out in closed form. We will then give a proof of the general case which will mimic the proof of the $\alpha = 2$ case while bypassing the evaluation of one of the integrals.
\subsection{Local Stability Analysis.}
We first perform a local stability analysis of the uncertainty principle for $(n,\alpha,\beta) = (1,\alpha,1)$ around the function
$$ u(x) = \chi_{[-1/2, 1/2]}.$$
Since we are interested in sharp constants, we need to specify which normalization of the Fourier transform we use: it will be $$ \widehat{u}(\xi) = \int_{\mathbb{R}}u(x) e^{-2 \pi i \xi x} dx \qquad \mbox{leading to} \qquad \widehat{u}(\xi) = \frac{\sin{(\pi \xi)}}{\pi \xi}.$$
Let $f$ be an even function compactly supported in $[-1/2,1/2]$. We analyze the behavior of the inequality under replacing $u$ by $u + \varepsilon f$ as $\varepsilon \rightarrow 0$. We observe that $\xi \cdot \widehat{u}(\xi)$ assumes the extremal values $\pm \pi^{-1}$ and, more precisely,
$$\xi \cdot \widehat{u}(\xi) = \begin{cases} \pi^{-1} \qquad &\mbox{if}~\xi = 2n + \frac{1}{2} \\ -\pi^{-1} \qquad &\mbox{if}~\xi = 2n+\frac{3}{2}.\end{cases}$$
This allows us to determine that for any even, smooth function $f:[-1/2,1/2] \rightarrow \mathbb{R}$, as $\varepsilon \rightarrow 0$ and up to lower-order terms,
\begin{align*}
\| \xi \cdot (\widehat{u}(\xi) + \varepsilon \widehat{f}(\xi)) \|_{L^{\infty}} &= \frac{1}{\pi} + \varepsilon \max(\widehat{f}) + \mbox{l.o.t.},
\end{align*}
where $\max(\widehat{f})$ is an abbreviation for
$$ \max(\widehat{f})= \max \left\{ \sup_{k \in \mathbb{N}} \left(2k+\frac{1}{2}\right)\widehat{f}\left(2k +\frac{1}{2}\right),- \inf_{k \in \mathbb{N}} \left(2k + \frac{3}{2}\right)\widehat{f}\left(2k +\frac{3}{2}\right) \right\}.
$$
The other two terms are easy to analyze since $f$ is smooth and thus
$$ \| |x|^{\alpha} ( u+ \varepsilon f) \|_{L^1} =\| |x|^{\alpha} \|_{L^1([-1/2,1/2])} + \varepsilon \int_{-1/2}^{1/2} |x|^{\alpha} f(x) dx + \mbox{l.o.t.} $$
and
$$ \| u+ \varepsilon f \|_{L^1}^{\alpha + 1} = 1 + (\alpha+1) \varepsilon \int_{-1/2}^{1/2} f(x) dx + \mbox{l.o.t.}$$
Moreover, the constant $c$ in the equation
$$ \| \xi \cdot \widehat{u}(\xi) \|_{L^{\infty}}^{\alpha} \cdot \| |x|^{\alpha} \cdot u\|_{L^{1}} = c \|u\|_{L^1}^{\alpha + 1}$$
is easily computed to be
$$ c= \pi^{-\alpha} \cdot \int_{-1/2}^{1/2}{ |x|^{\alpha} dx} = \frac{1}{(2\pi)^{\alpha}} \frac{1}{\alpha + 1}.$$
This shows that local stability at order $\varepsilon$ is equivalent to
$$ \frac{1}{\alpha+1} \frac{1}{2^{\alpha}}\frac{\alpha \varepsilon}{\pi^{\alpha-1}} \max(\widehat{f}) + \frac{1}{\pi^{\alpha}} \varepsilon \int_{-1/2}^{1/2} |x|^{\alpha} f(x) dx \geq \frac{\varepsilon }{(2\pi)^{\alpha}} \int_{-1/2}^{1/2}{f(x) dx}.$$
This can be rewritten as
$$ \max(\widehat{f}) \geq \frac{\alpha + 1}{\pi \alpha} \int_{-1/2}^{1/2}{(1-|2x|^{\alpha}) f(x) dx}.$$
\subsection{The Shannon-Whittaker Reconstruction/Interpolation Formula.}
The Shannon-Whittaker reconstruction formula \cite{shannon, whittaker}, first formulated by Kotelnikov \cite{kotelnikov} (see L\"uke \cite{luke}), states that if $f$ is compactly supported in $[-1/2, 1/2]$, then
its Fourier transform is completely determined by its values at the integers and
$$ \widehat{f}(\xi) = \sum_{k \in \mathbb{Z}}{ \widehat{f}(k) \frac{\sin{(\pi(\xi-k))}}{\pi(\xi-k)}}.$$
Heuristically put, it states that a compactly supported function is determined by its Fourier coefficients
(adapted to the interval of corresponding length) which correspond to the values of the Fourier transform at equally
spaced points. We will use a shifted version
$$ \widehat{f}(\xi) = \sum_{k \in \mathbb{Z}}{ \widehat{f}\left( k - \frac12\right) \frac{\sin{\left(\pi\left(\xi-k+\frac12\right)\right)}}{\pi\left(\xi-k+\frac12\right)}}.$$
\begin{proof}[Proof of the shifted version.] The representation follows quite easily from the symmetries of the Fourier transform. Let us consider the function
$$g(x) = e^{i \pi x} f(x).$$
Naturally, if $f$ is supported on $[-1/2,1/2]$, then so is $g$.
Then we have
$$ \widehat{g}(\xi) = \sum_{k \in \mathbb{Z}}{ \widehat{g}(k) \frac{\sin{(\pi(\xi-k))}}{\pi(\xi-k)}}.$$
However, we also have
$$ \widehat{g}(\xi) = \widehat{f}\left(\xi-\frac{1}{2}\right).$$
Therefore
\begin{align*}
\widehat{f}\left(\xi-\frac{1}{2}\right) = \widehat{g}(\xi) = \sum_{k \in \mathbb{Z}}{ \widehat{g}(k) \frac{\sin{(\pi(\xi-k))}}{\pi(\xi-k)}} = \sum_{k \in \mathbb{Z}}{ \widehat{f}\left(k - \frac12\right) \frac{\sin{(\pi(\xi-k))}}{\pi(\xi-k)}}.
\end{align*}
Replacing $\xi$ by $\xi + \frac{1}{2}$ yields the shifted formula.
\end{proof}
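The shifted formula is easy to test numerically. Take $h(x) = \max\{1-4x^2,0\}$, whose transform $\widehat{h}(\xi) = \frac{2\sin(\pi\xi) - 2\pi\xi\cos(\pi\xi)}{\pi^3\xi^3}$ appears in the next subsection; truncating the series over half-integer samples should reproduce $\widehat{h}$ at arbitrary frequencies. (A numerical sketch only; the truncation level $N$ is ad hoc.)

```python
import math

def hhat(xi):
    # Fourier transform of h(x) = max(1 - 4 x^2, 0); hhat(0) = 2/3 by the limit
    if abs(xi) < 1e-8:
        return 2.0 / 3.0
    return (2 * math.sin(math.pi * xi)
            - 2 * math.pi * xi * math.cos(math.pi * xi)) / (math.pi * xi) ** 3

def sinc(t):
    # the kernel sin(pi t) / (pi t) from the reconstruction formula
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def reconstruct(xi, N=600):
    # shifted Shannon-Whittaker series over the samples hhat(k - 1/2), |k| <= N
    return sum(hhat(k - 0.5) * sinc(xi - k + 0.5) for k in range(-N, N + 1))

for xi in (0.0, 0.3, 1.7, -2.25):
    print(xi, hhat(xi), reconstruct(xi))
```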
\subsection{The case $\alpha = 2$.} The purpose of this section is to explain the argument in its simplest possible setting. This case is particularly interesting because all of the arising quantities can be computed in closed form, allowing for a very explicit argument.
We will show that for smooth, even $f:[-1/2,1/2] \rightarrow \mathbb{R}$
$$ \max(\widehat{f}) \geq \frac{3}{2\pi}\int_{-1/2}^{1/2} (1-4x^2) f(x) dx.$$
\begin{proof}
We use the Plancherel identity
$$ \int_{\mathbb{R}} f(x) g(x) dx = \int_{\mathbb{R}} \widehat{f}(\xi) \widehat{g}(\xi)d\xi$$
to write
$$\int_{-1/2}^{1/2} (1-4x^2) f(x) dx = \int_{\mathbb{R}}^{} \frac{2 \sin{(\pi \xi)} - 2\pi \xi \cos{(\pi \xi)}}{\pi^3 \xi^3}\widehat{f}(\xi) d\xi.$$
We use the Shannon-Whittaker reconstruction formula to decompose
$$ \widehat{f}(\xi) = \sum_{k \in \mathbb{Z}} \widehat{f}\left(k-\frac{1}{2}\right) \frac{\sin{(\pi\left(\xi -k +\frac12\right))}}{\pi \left( \xi - k + \frac12\right)}$$
allowing us to write
$$\int_{-1/2}^{1/2} (1-4x^2) f(x) dx = \sum_{k \in \mathbb{Z}}{ a_k \widehat{f}\left(k-\frac12\right)},$$
where
$$ a_k = \int_{\mathbb{R}}^{} \frac{2 \sin{(\pi \xi)} - 2\pi \xi \cos{(\pi \xi)}}{\pi^3 \xi^3} \frac{\sin{\pi\left(\xi-k+\frac12\right)}}{\pi \left( \xi-k +\frac12\right)}d\xi.$$
We now evaluate $a_k$. Abbreviating $h(x) = \max\left\{1-4x^2,0\right\}$, we can also write
$$ a_k = \int_{\mathbb{R}}^{} \widehat{h}(\xi) \frac{\sin{\pi\left(\xi-k+\frac12\right)}}{\pi \left( \xi-k +\frac12\right)}d\xi.$$
We use the Shannon-Whittaker formula once more to express $\widehat{h}$ and obtain
\begin{align*}
a_k &= \int_{\mathbb{R}}^{} \widehat{h}(\xi) \frac{\sin{\pi\left(\xi-k+\frac12\right)}}{\pi \left( \xi-k +\frac12\right)}d\xi\\
&= \int_{\mathbb{R}}^{} \left( \sum_{m \in \mathbb{Z}}{ \widehat{h}\left( m - \frac12\right) \frac{\sin{\left(\pi\left(\xi-m+\frac12\right)\right)}}{\pi\left(\xi-m+\frac12\right)}} \right) \frac{\sin{\pi\left(\xi-k+\frac12\right)}}{\pi \left( \xi-k +\frac12\right)}d\xi\\
&= \widehat{h}\left(k - \frac12\right) = \int_{-1/2}^{1/2} (1-4x^2) \cos{\left(2 \pi \left( k - \frac12 \right) x\right)} dx \\
&= \frac{16}{\pi^3} \frac{(-1)^{k+1}}{(2k-1)^3}.
\end{align*}
This shows
$$\int_{-1/2}^{1/2} (1-4x^2) f(x) dx = \frac{16}{\pi^3}\sum_{k\in \mathbb{Z}} \frac{(-1)^{k+1}}{(2k-1)^3} \widehat{f}\left(k-\frac12\right).$$
We also note that, since $f$ is even and real-valued, $\widehat{f}(x) = \widehat{f}(-x)$, and thus
$$ \widehat{f}\left(k+1-\frac12\right) = \widehat{f}\left(-k-\frac12\right)$$
which allows us to group positive and negative integers into
\begin{align*}
\frac{16}{\pi^3}\sum_{k\in \mathbb{Z}} \frac{(-1)^{k+1}}{(2k-1)^3} \widehat{f}\left(k-\frac12\right) &=
\frac{16}{\pi^3}\sum_{k=1}^{\infty} \widehat{f}\left(k-\frac12\right) \left( \frac{(-1)^{k+1}}{(2k-1)^3} + \frac{(-1)^{(-k+1)+1}}{(2(-k+1)-1)^3} \right)
\\
&=\frac{32}{\pi^3}\sum_{k=1}^{\infty} \widehat{f}\left(k-\frac12\right) \frac{(-1)^{k+1} }{(2k-1)^3}
\end{align*}
Let us now introduce the quantity $\max(\widehat{f})$ via
$$ \max(\widehat{f})= \max \left\{ \sup_{k\in \mathbb{N}} \left(2k+\frac{1}{2}\right)\widehat{f}\left(2k +\frac{1}{2}\right), -\inf_{k \in \mathbb{N}} \left(2k + \frac{3}{2}\right)\widehat{f}\left(2k +\frac{3}{2}\right) \right\}.
$$
This means that for all $k\in \mathbb{N}$
$$ \widehat{f}\left(2k +\frac{1}{2}\right) \leq \frac{ \max(\widehat{f})}{2k+\frac{1}{2}} \quad \mbox{and} \quad \widehat{f}\left(2k +\frac{3}{2}\right) \geq - \frac{ \max(\widehat{f})}{2k + \frac32}.$$
We can now maximize the sum by estimating
\begin{align*}
\frac{32}{\pi^3}\sum_{k=1}^{\infty} \widehat{f}\left(k-\frac12\right) \frac{(-1)^{k+1} }{(2k-1)^3}
&= \frac{32}{\pi^3}\sum_{k=1 \atop k~{\tiny \mbox{odd}}}^{\infty} \widehat{f}\left(k-\frac12\right) \frac{1 }{(2k-1)^3} \\
&-\frac{32}{\pi^3}\sum_{k=1 \atop k~{\tiny \mbox{even}}}^{\infty} \widehat{f}\left(k-\frac12\right) \frac{1 }{(2k-1)^3} \\
&\leq \frac{32}{\pi^3}\sum_{k=1 \atop k~{\tiny \mbox{odd}}}^{\infty} \frac{ \max(\widehat{f})}{k-\frac12} \frac{1 }{(2k-1)^3} \\
&+\frac{32}{\pi^3}\sum_{k=1 \atop k~{\tiny \mbox{even}}}^{\infty} \frac{ \max(\widehat{f})}{k-\frac12} \frac{1 }{(2k-1)^3} \\
&= \frac{32}{\pi^3}\sum_{k=1}^{\infty} \frac{ \max(\widehat{f})}{k-\frac12} \frac{1 }{(2k-1)^3}.
\end{align*}
Using $\frac{1}{k-\frac12} \frac{1}{(2k-1)^3} = \frac{2}{(2k-1)^4}$, this results in
$$ \frac{32}{\pi^3}\sum_{k=1}^{\infty} \widehat{f}\left(k-\frac12\right) \frac{(-1)^{k+1} }{(2k-1)^3} \leq
\frac{64 \max(\widehat{f})}{\pi^3}\sum_{k=1}^{\infty} \frac{1}{(2k-1)^4}.$$
We have the generalized zeta function identity
$$ \sum_{k=1}^{\infty} \frac{1}{(2k-1)^4} = \frac{\pi^4}{96}$$
and thus
$$\int_{-1/2}^{1/2} (1-4x^2) f(x) dx \leq \frac{2 \pi}{3} \max(\widehat{f}) \qquad \mbox{as desired.}$$
\end{proof}
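Both closed-form ingredients of this computation are easy to confirm numerically: the Fourier coefficients $\widehat{h}(k-\frac12) = \frac{16}{\pi^3}\frac{(-1)^{k+1}}{(2k-1)^3}$ for $h(x) = \max\{1-4x^2,0\}$, and the odd zeta value $\sum_{k \geq 1} (2k-1)^{-4} = \pi^4/96$. (A sanity check, not part of the argument.)

```python
import math

def hhat(xi):
    # Fourier transform of h(x) = max(1 - 4 x^2, 0), valid for xi != 0
    return (2 * math.sin(math.pi * xi)
            - 2 * math.pi * xi * math.cos(math.pi * xi)) / (math.pi * xi) ** 3

for k in range(1, 8):
    closed = 16 / math.pi ** 3 * (-1) ** (k + 1) / (2 * k - 1) ** 3
    print(k, hhat(k - 0.5), closed)

# truncated odd zeta sum vs pi^4 / 96
odd_zeta4 = sum(1.0 / (2 * k - 1) ** 4 for k in range(1, 100001))
print(odd_zeta4, math.pi ** 4 / 96)
```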
\subsection{The general case.} The general case requires an additional ingredient: an oscillating sign pattern in the hypergeometric function $_1 F_2$.
\begin{lem} Let $k \in \mathbb{N}$ and consider the integral
$$ \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) e^{-2 \pi i \left( k - \frac12 \right) x} dx.$$
This integral has the same sign as
$$ a_k=~_1F_2\left( \frac{1+\alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{\pi^2}{16} (2k-1)^2 \right).$$
If $\alpha \in \left\{2,3,4,5,6\right\}$, then $a_k$ is positive for odd $k$ and negative for even $k$.
\end{lem}
\begin{proof} Since $1-|2x|^{\alpha}$ is even, it is easy to see that the imaginary part vanishes. It remains to understand the sign of the integral
$$ \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) \cos{\left(2 \pi \left( k - \frac12 \right) x\right)} dx.$$
Integration by parts leads to
$$ I = \frac{2^{\alpha + 1} \alpha}{2\pi (k-1/2)} \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx.$$
We conclude the first step of the argument by noting that the term in front of the integral is positive and that the full expression
evaluates to
$$I = \frac{\alpha}{\alpha+1}~_1F_2\left( \frac{1+\alpha}{2}; \frac{3}{2}, \frac{3 + \alpha}{2}; -\frac{\pi^2}{16} (2k-1)^2 \right).$$
We now consider the special cases. If $\alpha = 2$, then
$$ \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx = \frac{ (-1)^{k+1}}{ \pi^2 (2k - 1)^2}.$$
If $\alpha = 3$, then
$$ \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx = \frac{\pi (-1)^{k+1} (2k-1) -2}{(2k-1)^3 \pi^3} .$$
If $\alpha = 4$, then
$$ \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx = \frac{3}{4} \frac{(-1)^{k+1} (\pi^2 (2k-1)^2 - 8)}{(2k -1)^4 \pi^4}.$$
If $\alpha = 5$, then
$$ \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx = \frac{(-1)^{k+1} (2k-1) \pi ((2k-1)^2 \pi^2 - 24) + 48}{2 (2k-1)^5 \pi^5}.$$
If $\alpha = 6$, then
$$ \int_0^{1/2} x^{\alpha-1} \sin{\left(2 \pi \left( k - \frac12 \right) x\right)} dx = \frac{5}{16}\frac{(-1)^{k+1} ( 384 - 48(2k-1)^2 \pi^2 + (2k-1)^4 \pi^4)}{ \pi^6 (2k-1)^6}.$$
For all these explicit expressions, the claim is easily verified.
\end{proof}
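The hypergeometric evaluation in the proof can be cross-checked numerically: for each $\alpha$ and $k$, Simpson quadrature of $\frac{2^{\alpha+1}\alpha}{\pi(2k-1)} \int_0^{1/2} x^{\alpha-1}\sin(\pi(2k-1)x)\,dx$ should agree with $\frac{\alpha}{\alpha+1}\,{}_1F_2(\cdots)$ computed from its power series. (A sketch; the series truncation is ad hoc.)

```python
import math

def hyp1f2(a, b1, b2, x, terms=300):
    # power series sum_n (a)_n / ((b1)_n (b2)_n) * x^n / n!
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) / ((b1 + n) * (b2 + n)) * x / (n + 1)
    return total

def lhs(alpha, k, n=4000):
    # Simpson on [0, 1/2] of x^(alpha-1) sin(pi (2k-1) x), times the prefactor from the proof
    h = 0.5 / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * x ** (alpha - 1) * math.sin(math.pi * (2 * k - 1) * x)
    s *= h / 3
    return 2 ** (alpha + 1) * alpha / (math.pi * (2 * k - 1)) * s

for alpha in (2, 3, 4, 5, 6):
    for k in (1, 2, 3, 4):
        rhs = alpha / (alpha + 1) * hyp1f2((1 + alpha) / 2, 1.5, (3 + alpha) / 2,
                                           -math.pi ** 2 * (2 * k - 1) ** 2 / 16)
        print(alpha, k, abs(lhs(alpha, k) - rhs) < 1e-8)
```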
\begin{proof}[Proof of Theorem 2] We now assume that $\alpha > 0$ is a real number for which
$$ a_k = \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) \cos{\left(2 \pi \left( k - \frac12 \right) x\right)} dx$$
satisfies $a_{2k+1} \geq 0$ and $a_{2k+2} \leq 0$ for all $k \geq 0$.
Under these assumptions, we will show that
$$ \max(\widehat{f}) \geq \frac{\alpha+1}{\alpha \pi}\int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx$$
which, by the reasoning in \S 4.1, is equivalent to local stability of the minimizer.
Interpreting the variable
$$ a_k = \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) e^{-2 \pi i \left( k - \frac12 \right) x} dx $$
as the Fourier transform of $h(x) = \max\left\{0, 1 - |2x|^{\alpha}\right\}$ evaluated at $\mathbb{Z} + 1/2$, we use the Shannon-Whittaker reconstruction formula to write the integral
$$ I = \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx $$
as
\begin{align*}
I &= \int_{\mathbb{R}} \widehat{h}(\xi) \widehat{f}(\xi) d\xi \\
&= \int_{\mathbb{R}} \left( \sum_{k \in \mathbb{Z}}{ a_k\frac{\sin{\left(\pi\left(\xi-k+\frac12\right)\right)}}{\pi\left(\xi-k+\frac12\right)}} \right) \left( \sum_{m \in \mathbb{Z}}{ \widehat{f}\left( m - \frac12\right) \frac{\sin{\left(\pi\left(\xi-m+\frac12\right)\right)}}{\pi\left(\xi-m+\frac12\right)}} \right) d\xi.
\end{align*}
Orthogonality leads to cancellation of off-diagonal terms and we obtain
$$ \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx = \sum_{k \in \mathbb{Z}} a_k \widehat{f}\left(k - \frac{1}{2}\right).$$
As above, we note that $a_k = a_{-k+1}$ and, since $f:[-1/2,1/2] \rightarrow \mathbb{R}$ is even,
$$ \widehat{f}\left(k+1-\frac12\right) = \widehat{f}\left(-k-\frac12\right)$$
allowing us to write
$$ \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx = 2\sum_{k =1}^{\infty} a_k \widehat{f}\left( k - \frac12 \right).$$
Introducing the variable $\max(\widehat{f})$ via
$$ \max(\widehat{f})= \max \left\{ \sup_{k \in \mathbb{N}} \left(2k+\frac{1}{2}\right)\widehat{f}\left(2k +\frac{1}{2}\right), -\inf_{k \in \mathbb{N}} \left(2k + \frac{3}{2}\right)\widehat{f}\left(2k +\frac{3}{2}\right) \right\},
$$
we have for all $k\in \mathbb{N}$
$$ \widehat{f}\left(2k +\frac{1}{2}\right) \leq \frac{ \max(\widehat{f})}{2k+\frac{1}{2}} \quad \mbox{and} \quad \widehat{f}\left(2k +\frac{3}{2}\right) \geq - \frac{ \max(\widehat{f})}{2k + \frac32}.$$
Moreover, by assumption, we have $a_k \geq 0$ for odd $k$ and $a_k \leq 0$ for even $k$. This allows us to write
\begin{align*}
\int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx &\leq \max(\widehat{f}) 4\sum_{k=1}^{\infty}{ \frac{|a_k|}{2k-1}} \\
&= \max(\widehat{f}) 4\sum_{k=1}^{\infty}{ \frac{a_k (-1)^{k+1}}{2k-1}}.
\end{align*}
It remains to understand this infinite sum. Recalling that $a_k$ are defined as Fourier coefficients of the function $h(x) = \max\left\{0, 1-|2x|^{\alpha} \right\}$, we can write
\begin{align*}
4 \sum_{k=1}^{\infty}{ \frac{a_k (-1)^{k+1}}{2k-1}} &= 4 \sum_{k=1}^{\infty}{ \frac{ (-1)^{k+1}}{2k-1}} \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) e^{-2 \pi i \left( k - \frac12 \right) x} dx\\
&=4 \int_{-1/2}^{1/2} (1-|2x|^{\alpha}) \sum_{k=1}^{\infty}{ \frac{ (-1)^{k+1}}{2k-1}}e^{-2 \pi i \left( k - \frac12 \right) x} dx.
\end{align*}
This infinite sum can be evaluated. Note that
\begin{align*}
\sum_{k=1}^{\infty}{ \frac{ (-1)^{k+1}}{2k-1}}e^{-2 \pi i \left( k - \frac12 \right) x} &= e^{i \pi x}\sum_{k=1}^{\infty}{ \frac{ (-1)^{k+1}}{2k-1}}e^{-2 \pi i k x} \\
&= \arctan{(e^{-i \pi x})}\\
&= \frac{i}{2} \log{\left(\frac{i+e^{-i \pi x}}{i-e^{-i \pi x}}\right)}.
\end{align*}
We have, for $-1/2 < x < 1/2$, that
$$\arctan{(e^{-i \pi x})} = \frac{\pi}{4} + \mbox{odd and purely imaginary function},$$
which, since $1-|2x|^{\alpha}$ is even and real, simplifies the sum to
\begin{align*}
4\sum_{k=1}^{\infty}{ \frac{a_k (-1)^{k+1}}{2k-1}} &= \pi \int_{-1/2}^{1/2} (1-|2x|^{\alpha})dx \\
&= \pi \left(1 - \frac{1}{\alpha + 1}\right) = \frac{\alpha \pi}{\alpha + 1}.
\end{align*}
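The value of this sum can also be checked numerically. The sketch below is not part of the proof; it takes $\alpha = 2$, for which two integrations by parts give the closed form $a_k = 16(-1)^{k+1}/(\pi(2k-1))^3$ (this closed form is our own computation, and is cross-checked against direct quadrature before being summed).

```python
import numpy as np

alpha = 2.0

# Closed form of a_k for alpha = 2 (obtained by integrating by parts twice;
# an assumption of this check, validated against quadrature below).
def a_closed(k):
    return 16.0 * (-1.0) ** (k + 1) / (np.pi * (2 * k - 1)) ** 3

# Quadrature check: a_k = int_{-1/2}^{1/2} (1 - |2x|^alpha) cos(2 pi (k - 1/2) x) dx
x = np.linspace(-0.5, 0.5, 200001)
dx = x[1] - x[0]
h = 1.0 - np.abs(2.0 * x) ** alpha
for k in range(1, 6):
    y = h * np.cos(2.0 * np.pi * (k - 0.5) * x)
    a_num = np.sum(y[1:] + y[:-1]) * 0.5 * dx   # composite trapezoid rule
    assert abs(a_num - a_closed(k)) < 1e-8

# The identity 4 sum_k a_k (-1)^(k+1) / (2k-1) = alpha pi / (alpha + 1)
k = np.arange(1, 100001)
s = 4.0 * np.sum(a_closed(k) * (-1.0) ** (k + 1) / (2 * k - 1))
print(s, alpha * np.pi / (alpha + 1))           # both close to 2*pi/3
```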
Altogether, we have seen that
\begin{align*}
\int_{-1/2}^{1/2} (1-|2x|^{\alpha}) f(x) dx &\leq 4\max(\widehat{f}) \sum_{k=1}^{\infty}{ \frac{a_k (-1)^{k+1}}{2k-1}} = \max(\widehat{f}) \frac{\alpha \pi}{\alpha + 1}
\end{align*}
which is the desired result.
\end{proof}
\section{Introduction}
In the present paper we give a Lie algebraic and differential
geometric derivation of a wide class of multidimensional nonlinear
systems. The systems under consideration are generated by
the zero curvature condition for a connection on a trivial principal
fiber bundle $M \times G \to M$, constrained by the relevant grading
condition. Here $M$ is either the real manifold ${\Bbb R}^{2d}$, or
the complex manifold ${\Bbb C}^d$, $G$ is a complex Lie group,
whose Lie algebra ${\frak g}$ is endowed with a ${\Bbb
Z}$--gradation. We call the arising systems of partial differential
equations the multidimensional Toda type systems. From the physical
point of view, they describe Toda type fields coupled to matter
fields, all of them living on $2d$--dimensional space. Analogously to the
two dimensional situation, an appropriate In\"on\"u--Wigner contraction
procedure allows one to exclude from our systems the back reaction of the
matter fields on the Toda fields.
For the two dimensional case and the finite dimensional Lie algebra
${\frak g}$, connections taking values in the local part of ${\frak
g}$ lead to abelian and nonabelian conformal Toda systems and their
affine deformations for the affine ${\frak g}$, see \cite{LSa92} and
references therein, and also \cite{RSa94,RSa96} for differential and
algebraic geometry background of such systems. For the connection
with values in higher grading subspaces of ${\frak g}$ one deals with
systems discussed in \cite{GSa95,FGGS95}.
In higher dimensions our systems, under some additional
specialisations, contain as particular cases the Cecotti--Vafa type
equations \cite{CVa91}, see also \cite{Dub93}; and those of
Gervais--Matsuo \cite{GMa93} which represent some reduction of a
generalised WZNW model. Note that some of the arising systems are
related to classical problems of differential geometry, coinciding
with the well known completely integrable Bourlet type equations
\cite{Dar10,Bia24,Ami81} and those sometimes called multidimensional
generalisation of the sine--Gordon and wave equations, see, for
example, \cite{Ami81,TTe80,Sav86,ABT86}.
In this paper, by the integrability of a system of partial differential
equations we mean the existence of a constructive procedure for obtaining
its general solution. Following the lines of
\cite{LSa92,RSa94,GSa95,RSa96}, we formulate the integration scheme for the
multidimensional Toda type systems. In accordance with this scheme,
the multidimensional Toda type and matter type fields are
reconstructed from some mappings which we call integration data. In
the case when $M$ is ${\Bbb C}^d$, the integration data are divided
into holomorphic and antiholomorphic ones; when $M$ is ${\Bbb
R}^{2d}$ they depend on one or another half of the independent
variables. Moreover, in the multidimensional case the integration data
are subject to relevant integrability conditions which are
absent in the two dimensional situation. These conditions split into
two systems of multidimensional nonlinear equations for integration
data. If the integrability conditions are integrable systems, then
the corresponding multidimensional Toda type system is also
integrable. We show that in this case any solution of our systems
can be obtained using the proposed integration scheme. It is also
investigated when different sets of integration data give the same
solution.
Note that the results obtained in the present paper can be extended in a
natural way to the case of supergroups.
\section{Derivation of equations}\label{de}
In this section we give a derivation of some class of
multidimensional nonlinear equations. Our strategy here is a direct
generalisation of the method which was used to obtain the Toda type
equations in two dimensional case \cite{LSa92,RSa94,GSa95,RSa96}. It
consists of the following main steps. We consider a general flat
connection on a trivial principal fiber bundle and suppose that the
corresponding Lie algebra is endowed with a ${\Bbb Z}$--gradation.
Then we impose on the connection some grading conditions and prove
that an appropriate gauge transformation allows us to bring it to the
form parametrised by a set of Toda type and matter type fields. The
zero curvature condition for such a connection is equivalent to a set
of equations for the fields, which are called the multidimensional
Toda type equations. In principle, the form of the equations in
question can be postulated. However, the derivation given below
also suggests a method of solving these equations, which is
explicitly formulated and discussed in section \ref{cgs}.
\subsection{Flat connections and gauge transformations}
Let $M$ be the manifold ${\Bbb R}^{2d}$ or the manifold ${\Bbb C}^d$.
Denote by $z^{-i}$, $z^{+i}$, $i = 1, \ldots, d$, the standard
coordinates on $M$. In the case when $M$ is ${\Bbb C}^d$ we suppose
that $z^{+i} = \mbar{z^{-i}}$. Let $G$ be a complex connected matrix
Lie group. The generalisation of the construction given below to the
case of a general finite dimensional Lie group is straightforward,
see in this connection \cite{RSa94,RSa96} where such a generalisation was
done for the case of two dimensional space $M$. The general
discussion given below can be also well applied to infinite
dimensional Lie groups. Consider the trivial principal fiber
$G$--bundle $M \times G \to M$. Denote by ${\frak g}$ the Lie algebra
of $G$. It is well known that there is a bijective correspondence
between connection forms on $M \times G \to M$ and ${\frak
g}$--valued 1--forms on $M$. Having in mind this correspondence, we
call a ${\frak g}$--valued 1--form on $M$ a connection form, or
simply a connection. The curvature 2--form of a connection $\omega$
is determined by the 2--form $\Omega$ on $M$, related to $\omega$ by
the formula
\[
\Omega = d\omega + \omega \wedge \omega,
\]
and the connection $\omega$ is flat if and only if
\begin{equation}
d\omega + \omega \wedge \omega = 0. \label{16}
\end{equation}
Relation (\ref{16}) is called the {\it zero curvature condition}.
Let $\varphi$ be a mapping from $M$ to $G$. The connection
$\omega$ of the form
\[
\omega = \varphi^{-1} d \varphi
\]
satisfies the zero curvature condition. In this case one says that
the connection $\omega$ is generated by the mapping $\varphi$. Since
the manifold $M$ is simply connected, any flat connection is
generated by some mapping $\varphi: M \to G$.
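The flatness of $\omega = \varphi^{-1} d\varphi$ can be illustrated numerically. The sketch below is our own illustration, with an arbitrarily chosen $\varphi: {\Bbb R}^2 \to GL(2,{\Bbb R})$ and $d = 1$; it verifies $\partial_- \omega_+ - \partial_+ \omega_- + [\omega_-, \omega_+] \approx 0$ by central finite differences.

```python
import numpy as np

eps = 1e-5

# An arbitrary smooth mapping phi: R^2 -> GL(2, R); any choice works here.
def phi(u, v):
    return np.array([[1.0, u], [0.0, 1.0]]) @ \
           np.array([[np.exp(v), 0.0], [0.0, np.exp(-v)]])

def omega_minus(u, v):  # phi^{-1} d_- phi, central difference in u = z^-
    return np.linalg.inv(phi(u, v)) @ (phi(u + eps, v) - phi(u - eps, v)) / (2 * eps)

def omega_plus(u, v):   # phi^{-1} d_+ phi, central difference in v = z^+
    return np.linalg.inv(phi(u, v)) @ (phi(u, v + eps) - phi(u, v - eps)) / (2 * eps)

u0, v0 = 0.3, -0.7
wm, wp = omega_minus(u0, v0), omega_plus(u0, v0)
d_minus_wp = (omega_plus(u0 + eps, v0) - omega_plus(u0 - eps, v0)) / (2 * eps)
d_plus_wm = (omega_minus(u0, v0 + eps) - omega_minus(u0, v0 - eps)) / (2 * eps)
curvature = d_minus_wp - d_plus_wm + wm @ wp - wp @ wm
print(np.max(np.abs(curvature)))  # ~ 0 up to discretisation error
```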
The gauge transformations of a connection in the case under
consideration are described by smooth mappings from $M$ to $G$. Here
for any mapping $\psi: M \to G$, the gauge transformed connection
$\omega^\psi$ is given by
\begin{equation}
\omega^\psi = \psi^{-1} \omega \psi + \psi^{-1} d \psi. \label{17}
\end{equation}
Clearly, the zero curvature condition is invariant with respect to
the gauge transformations. In other words, if a connection satisfies
this condition, then the gauge transformed connection also satisfies
this condition. Actually, if a flat connection $\omega$ is generated
by a mapping $\varphi$ then the gauge transformed connection
$\omega^\psi$ is generated by the mapping $\varphi \psi$. It is
convenient to call the gauge transformations defined by (\ref{17}),
{\it $G$--gauge transformations}.
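The statement that $\omega^\psi$ is generated by $\varphi\psi$ amounts to the identity $(\varphi\psi)^{-1}\partial(\varphi\psi) = \psi^{-1}\omega\psi + \psi^{-1}\partial\psi$. A numerical sketch confirming one component, with sample $2\times2$ mappings of our own choosing:

```python
import numpy as np

eps = 1e-6

def phi(u, v):  # sample mapping generating a flat connection
    return np.array([[np.cosh(u), np.sinh(u)], [np.sinh(u), np.cosh(u)]]) @ \
           np.array([[1.0, 0.0], [v, 1.0]])

def psi(u, v):  # sample gauge transformation
    return np.array([[np.exp(u * v), 0.0], [0.0, np.exp(-u * v)]])

def d_u(f, u, v):  # central difference in the first coordinate
    return (f(u + eps, v) - f(u - eps, v)) / (2 * eps)

u0, v0 = 0.4, 0.2
w = np.linalg.inv(phi(u0, v0)) @ d_u(phi, u0, v0)                 # omega component
psi0_inv = np.linalg.inv(psi(u0, v0))
lhs = psi0_inv @ w @ psi(u0, v0) + psi0_inv @ d_u(psi, u0, v0)    # omega^psi, (17)
chi = lambda u, v: phi(u, v) @ psi(u, v)
rhs = np.linalg.inv(chi(u0, v0)) @ d_u(chi, u0, v0)               # generated by phi psi
print(np.max(np.abs(lhs - rhs)))  # ~ 0
```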
In what follows we deal with a general connection $\omega$ satisfying
the zero curvature condition. Write for $\omega$ the representation
\[
\omega = \sum_{i=1}^d (\omega_{-i} dz^{-i} + \omega_{+i} dz^{+i}),
\]
where $\omega_{\pm i}$ are some mappings from $M$ to ${\frak g}$,
called the components of $\omega$. In terms of $\omega_{\pm i}$ the
zero curvature condition takes the form
\begin{eqnarray}
&\partial_{-i} \omega_{-j} - \partial_{-j} \omega_{-i} +
[\omega_{-i},
\omega_{-j}] = 0,& \label{18} \\
&\partial_{+i} \omega_{+j} - \partial_{+j} \omega_{+i} +
[\omega_{+i},
\omega_{+j}] = 0,& \label{19} \\
&\partial_{-i} \omega_{+j} - \partial_{+j} \omega_{-i} +
[\omega_{-i},
\omega_{+j}] = 0.& \label{20}
\end{eqnarray}
Here and in what follows we use the notation
\[
\partial_{-i} = \partial/\partial z^{-i}, \qquad \partial_{+i} =
\partial/\partial z^{+i}.
\]
Choosing a basis in ${\frak g}$ and treating the components of the
expansion of $\omega_{\pm i}$ over this basis as fields, we can
consider the zero curvature condition as a nonlinear system of
partial differential equations for the fields. Since any flat
connection can be gauge transformed to zero, system
(\ref{18})--(\ref{20}) is, in a sense, trivial. From the other hand,
we
can obtain from (\ref{18})--(\ref{20}) nontrivial integrable systems
by imposing some gauge noninvariant constraints on the connection
$\omega$. Consider one of the methods to impose the constraints in
question, which is, in fact, a direct generalisation of the
group--algebraic approach \cite{LSa92,RSa94,GSa95,RSa96} which was used
successfully in the two dimensional case ($d=1$).
\subsection{${\Bbb Z}$--gradations and modified Gauss decomposition}
Suppose that the Lie algebra ${\frak g}$ is a ${\Bbb Z}$--graded Lie
algebra. This means that ${\frak g}$ is represented as the direct sum
\begin{equation}
{\frak g} = \bigoplus_{m \in {\Bbb Z}} {\frak g}_m, \label{1}
\end{equation}
where the subspaces ${\frak g}_m$ satisfy the condition
\[
[{\frak g}_m, {\frak g}_n] \subset {\frak g}_{m+n}
\]
for all $m, n \in {\Bbb Z}$. It is clear that the subspaces ${\frak
g}_0$ and
\[
\widetilde {\frak n}_- = \bigoplus_{m < 0} {\frak g}_m,
\qquad \widetilde {\frak n}_+ = \bigoplus_{m > 0} {\frak g}_m
\]
are subalgebras of ${\frak g}$. Denoting the subalgebra
${\frak g}_0$ by $\widetilde {\frak h}$, we write the generalised
triangle decomposition for ${\frak g}$,
\[
{\frak g} = \widetilde {\frak n}_- \oplus \widetilde {\frak h} \oplus
\widetilde {\frak n}_+.
\]
Here and in what follows we use tildes to distinguish our notation from
that usually used for the case of the canonical gradation of a complex
semisimple Lie algebra. Note that this gradation is closely related to
the so called principal three--dimensional subalgebra of the Lie algebra
under consideration \cite{Bou75,RSa96}.
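The simplest example to keep in mind (standard ${\frak s}{\frak l}(2)$ conventions assumed) is the canonical gradation

```latex
{\frak g} = {\frak s}{\frak l}(2, {\Bbb C})
          = {\frak g}_{-1} \oplus {\frak g}_0 \oplus {\frak g}_{+1},
\qquad
{\frak g}_{\mp 1} = {\Bbb C} E_\mp, \quad {\frak g}_0 = {\Bbb C} H,
```

with $[H, E_\pm] = \pm 2 E_\pm$, $[E_+, E_-] = H$; here $\widetilde {\frak n}_\pm = {\Bbb C} E_\pm$, $\widetilde {\frak h} = {\Bbb C} H$, and for $G = SL(2,{\Bbb C})$ the subgroups $\widetilde N_\pm$ and $\widetilde H$ introduced below are the unipotent triangular and diagonal subgroups, respectively.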
Denote by $\widetilde H$ and by $\widetilde N_\pm$ the connected Lie
subgroups corresponding to the subalgebras $\widetilde {\frak h}$ and
$\widetilde {\frak n}_\pm$. Suppose that $\widetilde H$ and
$\widetilde N_\pm$ are closed subgroups of $G$ and, moreover,
\begin{eqnarray}
&\widetilde H \cap \widetilde N_\pm = \{e\}, \qquad \widetilde N_-
\cap \widetilde N_+ = \{e\},& \label{67} \\
&\widetilde N_- \cap \widetilde H \widetilde N_+ = \{e\}, \qquad
\widetilde N_- \widetilde H \cap \widetilde N_+ = \{e\},& \label{68}
\end{eqnarray}
where $e$ is the unit element of $G$. This is true, in particular,
for the reductive Lie groups, see, for example, \cite{Hum75}. The
set $\widetilde N_- \widetilde H \widetilde N_+$ is an open subset of
$G$. Suppose that
\[
G = \mbar{\widetilde N_- \widetilde H \widetilde N_+}.
\]
This is again true, in particular, for the reductive Lie groups.
Thus,
for an element $a$ belonging to the dense subset $\widetilde N_-
\widetilde H \widetilde N_+$ of $G$, one has the following
decomposition, convenient for our aims:
\begin{equation}
a = n_- h n_+^{-1}, \label{69}
\end{equation}
where $n_\pm \in \widetilde N_\pm$ and $h \in \widetilde H$.
Decomposition (\ref{69}) is called the {\it Gauss decomposition}. Due
to
(\ref{67}) and (\ref{68}), this decomposition is unique. Actually,
(\ref{69}) is one of the possible forms of the Gauss decomposition.
Taking the elements belonging to the subgroups $\widetilde N_\pm$ and
$\widetilde H$ in different orders we get different types of the
Gauss
decompositions valid in the corresponding dense subsets of $G$. In
particular, below, besides decomposition (\ref{69}), we will
often use the Gauss decompositions of the forms
\begin{equation}
a = m_- n_+ h_+, \qquad a = m_+ n_- h_-, \label{70}
\end{equation}
where $m_\pm \in \widetilde N_\pm$, $n_\pm \in \widetilde N_\pm$ and
$h_\pm \in \widetilde H$. The main disadvantage of any form of the
Gauss decomposition is that not every element of $G$ possesses such a
decomposition. To overcome this difficulty, let us consider the so called
modified Gauss decompositions. They are based on the following almost
trivial remark. If an element $a \in G$ does not admit the Gauss
decomposition of some form, then, subjecting $a$ to some left shift
in $G$, we can easily get an element admitting that decomposition.
So, in particular, we can say that any element of $G$ can be
represented in forms (\ref{70}) where $m_\pm \in a_\pm \widetilde
N_\pm$ for some elements $a_\pm \in G$, $n_\pm \in \widetilde N_\pm$
and $h_\pm \in \widetilde H$. If the elements $a_\pm$ are fixed, then
decompositions (\ref{70}) are unique. We call the Gauss
decompositions obtained in such a way, the {\it modified Gauss
decompositions} \cite{RSa94,RSa96}.
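A concrete sketch for $G = GL(2)$ (our own illustration): with $\widetilde N_\mp$ the lower/upper unitriangular subgroups and $\widetilde H$ the diagonal one, decomposition (\ref{69}) reduces to the familiar LDU factorisation, which fails exactly when the leading entry vanishes and is restored by a left shift.

```python
import numpy as np

# Gauss decomposition a = n_- h n_+^{-1} of a 2x2 matrix (cf. (69)):
# n_- lower unitriangular, h diagonal, n_+^{-1} upper unitriangular.
def gauss(a):
    p = a[0, 0]
    if abs(p) < 1e-12:
        raise ValueError("leading entry vanishes: no Gauss decomposition")
    n_minus = np.array([[1.0, 0.0], [a[1, 0] / p, 1.0]])
    h = np.diag([p, np.linalg.det(a) / p])
    n_plus_inv = np.array([[1.0, a[0, 1] / p], [0.0, 1.0]])
    return n_minus, h, n_plus_inv

a = np.array([[2.0, 1.0], [4.0, 3.0]])
n_minus, h, n_plus_inv = gauss(a)
assert np.allclose(n_minus @ h @ n_plus_inv, a)

# b lies outside the dense subset N_- H N_+ (its (1,1) entry vanishes) ...
b = np.array([[0.0, 1.0], [-1.0, 0.0]])
try:
    gauss(b)
    raised = False
except ValueError:
    raised = True
assert raised

# ... but after a left shift by a fixed group element (a hypothetical
# choice) the shifted element does admit the decomposition.
shift = np.array([[1.0, -1.0], [0.0, 1.0]])
n_minus, h, n_plus_inv = gauss(shift @ b)
assert np.allclose(n_minus @ h @ n_plus_inv, shift @ b)
print(h)
```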
Let $\varphi: M \to G$ be an arbitrary mapping and $p$ be an
arbitrary point of $M$. Suppose that $a_\pm$ are elements of $G$ such
that the element $\varphi(p)$ admits the modified Gauss
decompositions (\ref{70}). It can be easily shown that for any point
$p'$ belonging to some neighborhood of $p$, the element $\varphi(p')$
admits the modified Gauss decompositions (\ref{70}) for the same
choice of the elements $a_\pm$ \cite{RSa94,RSa96}. In other words, any
mapping $\varphi: M \to G$ has the following local decompositions
\begin{equation}
\varphi = \mu_+ \nu_- \eta_-, \qquad \varphi = \mu_- \nu_+ \eta_+,
\label{2}
\end{equation}
where the mappings $\mu_\pm$ take values in $a_\pm \widetilde N_\pm$
for some elements $a_\pm \in G$, the mappings $\nu_\pm$ take values
in $\widetilde N_\pm$, and the mappings $\eta_\pm$ take values in
$\widetilde H$. It is also clear that the mappings $\mu_+^{-1}
\partial_{\pm i} \mu_+$ take values in $\widetilde {\frak n}_+$,
while the mappings $\mu_-^{-1} \partial_{\pm i} \mu_-$ take values in
$\widetilde {\frak n}_-$.
\subsection{Grading conditions}
The first condition we impose on the connection $\omega$ is that the
components $\omega_{-i}$ take values in $\widetilde {\frak n}_-
\oplus
\widetilde {\frak h}$, and the components $\omega_{+i}$ take values
in
$\widetilde {\frak h} \oplus \widetilde {\frak n}_+$. We call this
condition
the {\it general grading condition}.
Let a mapping $\varphi: M \to G$ generate the connection $\omega$;
in other words, $\omega = \varphi^{-1} d \varphi$. Using
respectively the first and the second equalities from (\ref{2}),
we can write the following representations for the connection
components $\omega_{-i}$ and $\omega_{+i}$:
\begin{eqnarray}
&&\omega_{-i} = \eta^{-1}_- \nu^{-1}_- (\mu^{-1}_+ \partial_{-i}
\mu_+)
\nu_- \eta_- + \eta^{-1}_- (\nu^{-1}_- \partial_{-i} \nu_-) \eta_- +
\eta^{-1}_- \partial_{-i} \eta_-, \label{6} \\
&&\omega_{+i} = \eta^{-1}_+ \nu^{-1}_+ (\mu^{-1}_- \partial_{+i}
\mu_-)
\nu_+ \eta_+ + \eta^{-1}_+ (\nu^{-1}_+ \partial_{+i} \nu_+) \eta_+ +
\eta^{-1}_+ \partial_{+i} \eta_+. \label{7}
\end{eqnarray}
{}From these relations it follows that the connection $\omega$
satisfies the general grading condition if and only if
\begin{equation}
\partial_{\pm i} \mu_\mp = 0. \label{8}
\end{equation}
When $M = {\Bbb R}^{2d}$ these equalities mean that $\mu_-$ depends
only on coordinates $z^{-i}$, and $\mu_+$ depends only on coordinates
$z^{+i}$. When $M = {\Bbb C}^d$ they mean that $\mu_-$ is a
holomorphic mapping, and $\mu_+$ is an antiholomorphic one. For a
discussion of the differential geometry meaning of the general
grading condition, which is here actually the same as for two
dimensional case, we refer the reader to \cite{RSa94,RSa96}.
Perform now a further specification of the grading condition. Define
the subspaces $\widetilde {\frak m}_{\pm i}$ of $\widetilde {\frak
n}_\pm$ by
\[
\widetilde {\frak m}_{-i} = \bigoplus_{-l_{-i} \le m \le -1} {\frak
g}_m, \qquad \widetilde {\frak m}_{+i} = \bigoplus_{1 \le m \le
l_{+i}} {\frak g}_m,
\]
where $l_{\pm i}$ are some positive integers. Let us require that
the connection components $\omega_{-i}$ take values in the subspace
$\widetilde {\frak m}_{-i} \oplus \widetilde {\frak h}$, and the
components $\omega_{+i}$ take values in $\widetilde {\frak h} \oplus
\widetilde {\frak m}_{+i}$. We call such a requirement the {\it
specified grading condition}. Using the modified Gauss decompositions
(\ref{2}), one gets
\begin{eqnarray}
&&\omega_{-i} = \eta^{-1}_+ \nu^{-1}_+ (\mu^{-1}_- \partial_{-i}
\mu_-)
\nu_+ \eta_+ + \eta^{-1}_+ (\nu^{-1}_+ \partial_{-i} \nu_+) \eta_+ +
\eta^{-1}_+ \partial_{-i} \eta_+, \label{3} \\
&&\omega_{+i} = \eta^{-1}_- \nu^{-1}_- (\mu^{-1}_+ \partial_{+i}
\mu_+)
\nu_- \eta_- + \eta^{-1}_- (\nu^{-1}_- \partial_{+i} \nu_-) \eta_- +
\eta^{-1}_- \partial_{+i} \eta_-. \label{4}
\end{eqnarray}
Here the second equality from (\ref{2}) was used for $\omega_{-i}$
and the first one for $\omega_{+i}$. From relations (\ref{3}) and
(\ref{4}) we conclude that the connection $\omega$ satisfies the
specified grading condition if and only if the mappings $\mu_-^{-1}
\partial_{-i} \mu_-$ take values in $\widetilde {\frak m}_{-i}$, and
the mappings $\mu_+^{-1} \partial_{+i} \mu_+$ take values in
$\widetilde {\frak m}_{+i}$.
It is clear that the general grading condition and the specified
grading condition are not invariant under the action of an arbitrary
$G$--gauge transformation, but they are invariant under the action of
gauge transformations (\ref{17}) with the mapping $\psi$ taking
values in the subgroup $\widetilde H$. In other words, the system
arising from the zero curvature condition for the connection
satisfying the specified grading condition still possesses some gauge
symmetry. Below we call a gauge transformation (\ref{17}) with the
mapping $\psi$ taking values in $\widetilde H$ an {\it $\widetilde
H$--gauge transformation}. Let us impose now one more restriction on
the connection and use the $\widetilde H$--gauge symmetry to bring it
to the form generating equations free of the $\widetilde H$--gauge
invariance.
\subsection{Final form of connection}
Taking into account the specified grading condition, we write the
following representation for the components of the connection
$\omega$:
\[
\omega_{-i} = \sum_{m = 0}^{-l_{-i}} \omega_{-i, m}, \qquad
\omega_{+i} = \sum_{m = 0}^{l_{+i}} \omega_{+i, m},
\]
where the mappings $\omega_{\pm i, m}$ take values in ${\frak
g}_m$. There is a similar decomposition for the mappings $\mu_\pm^{-1}
\partial_{\pm i} \mu_\pm$:
\[
\mu^{-1}_- \partial_{-i} \mu_- = \sum_{m = -1}^{-l_{-i}} \lambda_{-i,
m}, \qquad \mu^{-1}_+ \partial_{+i} \mu_+ = \sum_{m = 1}^{l_{+i}}
\lambda_{+i, m}.
\]
{}From (\ref{3}) and (\ref{4}) it follows that
\begin{equation}
\omega_{\pm i, \pm l_{\pm i}} = \eta_\mp^{-1} \lambda_{\pm i, \pm
l_{\pm i}}
\eta_\mp. \label{5}
\end{equation}
The last restriction we impose on the connection $\omega$ is
formulated as follows. Let $c_{\pm i}$ be some fixed elements of the
subspaces ${\frak g}_{\pm l_{\pm i}}$ satisfying the relations
\begin{equation}
[c_{-i}, c_{-j}] = 0, \qquad [c_{+i}, c_{+j}] = 0. \label{39}
\end{equation}
Require that the mappings $\omega_{\pm i, \pm l_{\pm i}}$ have the
form
\begin{equation}
\omega_{\pm i, \pm l_{\pm i}} = \eta_\mp^{-1} \gamma_\pm c_{\pm i}
\gamma_\pm^{-1} \eta_\mp \label{21}
\end{equation}
for some mappings $\gamma_\pm: M \to \widetilde H$. A connection
which
satisfies the grading condition and relation (\ref{21}) is called an
{\it admissible connection}. Similarly, a mapping from $M$ to $G$
generating an admissible connection is called {\it an admissible
mapping}.
Taking into account (\ref{5}), we conclude that
\begin{equation}
\lambda_{\pm i, \pm l_{\pm i}} = \gamma_\pm c_{\pm i}
\gamma^{-1}_\pm.
\label{11}
\end{equation}
Denote by $\widetilde H_-$ and $\widetilde H_+$ the isotropy
subgroups of the sets formed by the elements $c_{-i}$ and $c_{+i}$,
respectively. It is clear that the mappings $\gamma_\pm$ are defined
up to multiplication from the right side by mappings taking values in
$\widetilde H_\pm$. In any case, at least locally, we can choose the
mappings $\gamma_\pm$ in such a way that
\begin{equation}
\partial_{\mp i} \gamma_\pm = 0. \label{38}
\end{equation}
In what follows we use such a choice for the mappings $\gamma_\pm$.
Let us show now that there exists a local $\widetilde H$--gauge
transformation that brings an admissible connection to the connection
$\omega$ with the components of the form
\begin{eqnarray}
&\omega_{-i} = \gamma^{-1} \partial_{-i} \gamma + \sum_{m =
-1}^{-l_{-i} + 1} \upsilon_{-i,m} + c_{-i},& \label{14} \\
&\omega_{+i} = \gamma^{-1} \left(\sum_{m = 1}^{l_{+i} -1}
\upsilon_{+i, m} +
c_{+i} \right) \gamma,& \label{15}
\end{eqnarray}
where $\gamma$ is some mapping from $M$ to $\widetilde H$, and
$\upsilon_{\pm i, m}$ are mappings taking values in ${\frak g}_m$.
To prove the above statement, note first that taking into account
(\ref{8}), we get from (\ref{6}) and (\ref{7}) the following
relations
\begin{eqnarray}
&&\omega_{-i} = \eta^{-1}_- (\nu^{-1}_- \partial_{-i} \nu_-) \eta_-
+
\eta^{-1}_- \partial_{-i} \eta_-, \label{9} \\
&&\omega_{+i} = \eta^{-1}_+ (\nu^{-1}_+ \partial_{+i} \nu_+) \eta_+ +
\eta^{-1}_+ \partial_{+i} \eta_+. \label{10}
\end{eqnarray}
Comparing (\ref{9}) and (\ref{3}), we come to the relation
\[
\nu_-^{-1} \partial_{-i} \nu_- = \left[ \eta \nu_+^{-1} (\mu_-^{-1}
\partial_{-i} \mu_-) \nu_+ \eta^{-1} \right]_{\widetilde {\frak
n}_-},
\]
where
\begin{equation}
\eta = \eta_- \eta_+^{-1}. \label{36}
\end{equation}
Hence, the mappings $\nu_-^{-1} \partial_{-i} \nu_-$ take values in
the subspaces $\widetilde {\frak m}_{-i}$, and we can represent them in the form
\[
\nu_-^{-1} \partial_{-i} \nu_- = \eta \gamma_- \left( \sum_{m =
-1}^{-l_{-i}} \upsilon_{-i,m} \right) \gamma_-^{-1} \eta^{-1},
\]
with the mappings $\upsilon_{-i, m}$ taking values in ${\frak g}_m$.
Substituting this representation into (\ref{9}), we obtain
\[
\omega_{-i} = \eta_+^{-1} \gamma_- \left( \sum_{m = -1}^{-l_{-i}}
\upsilon_{-i, m} \right) \gamma_-^{-1} \eta_+ + \eta_-^{-1}
\partial_{-i} \eta_-.
\]
{}From (\ref{5}) and (\ref{11}) it follows that $\upsilon_{-i, -l_{-i}}
= c_{-i}$. Therefore,
\begin{equation}
\omega_{-i} = \eta_+^{-1} \gamma_- \left(c_{-i} + \sum_{m =
-1}^{-l_{-i}
+ 1} \upsilon_{-i, m} \right) \gamma_-^{-1} \eta_+ + \eta_-^{-1}
\partial_{-i} \eta_-. \label{12}
\end{equation}
Similarly, using (\ref{10}) and (\ref{4}), we conclude that
\[
\nu_+^{-1} \partial_{+i} \nu_+ = \left[ \eta^{-1} \nu_-^{-1}
(\mu_+^{-1}
\partial_{+i} \mu_+) \nu_- \eta \right]_{\widetilde {\frak n}_+}.
\]
Therefore we can write for $\nu_+^{-1} \partial_{+i} \nu_+$ the
representation
\[
\nu_+^{-1} \partial_{+i} \nu_+ = \eta^{-1} \gamma_+ \left( \sum_{m =
1}^{l_{+i}} \upsilon_{+i,m} \right) \gamma_+^{-1} \eta,
\]
where the mappings $\upsilon_{+i, m}$ take values in ${\frak g}_m$.
Taking into account (\ref{10}), we get
\[
\omega_{+i} = \eta_-^{-1} \gamma_+ \left( \sum_{m = 1}^{l_{+i}}
\upsilon_{+i, m} \right) \gamma_+^{-1} \eta_- + \eta_+^{-1}
\partial_{+i} \eta_+.
\]
Using again (\ref{5}) and (\ref{11}), we obtain $\upsilon_{+i,
l_{+i}} = c_{+i}$. Therefore, the following relation is valid:
\begin{equation}
\omega_{+i} = \eta_-^{-1} \gamma_+ \left(\sum_{m = 1}^{l_{+i} -1}
\upsilon_{+i, m} + c_{+i} \right) \gamma_+^{-1} \eta_- +
\eta_+^{-1} \partial_{+i} \eta_+. \label{13}
\end{equation}
Taking into account (\ref{12}) and (\ref{13}) and performing the
gauge transformation defined by the mapping $\eta_+^{-1} \gamma_-$,
we
arrive at the connection with the components of the form given by
(\ref{14}) and (\ref{15}) with
\begin{equation}
\gamma = \gamma_+^{-1} \eta \gamma_-. \label{40}
\end{equation}
Note that the connection with components (\ref{14}), (\ref{15}) is
generated by the mapping
\begin{equation}
\varphi = \mu_+ \nu_- \eta \gamma_- = \mu_- \nu_+ \gamma_-.
\label{95}
\end{equation}
\subsection{Multidimensional Toda type equations}
The equations for the mappings $\gamma$ and $\upsilon_{\pm i,m}$,
which result from the zero curvature condition (\ref{18})--(\ref{20})
with the connection components of form (\ref{14}), (\ref{15}), will
be
called {\it multidimensional Toda type equations}, or {\it
multidimensional Toda type systems}. It is natural to call the
functions parametrising the mappings $\gamma$ and $\upsilon_{\pm
i,m}$, {\it Toda type} and {\it matter type fields}, respectively.
The multidimensional Toda type equations are invariant with respect
to the remarkable symmetry transformations
\begin{equation}
\gamma' = \xi_+^{-1} \gamma \xi_-, \qquad \upsilon'_{\pm i} =
\xi_\pm^{-1} \upsilon_{\pm i} \xi_\pm, \label{91}
\end{equation}
where $\xi_\pm$ are arbitrary mappings taking values in the isotropy
subgroups $\widetilde H_\pm$ of the sets formed by the elements
$c_{-i}$ and $c_{+i}$, and satisfying the relations
\begin{equation}
\partial_{\mp i} \xi_\pm = 0. \label{92}
\end{equation}
Indeed, it can be easily verified that the connection components of
form (\ref{14}), (\ref{15}) constructed with the mappings $\gamma$,
$\upsilon_{\pm i}$ and $\gamma'$, $\upsilon'_{\pm i}$ are connected
by the $\widetilde H$--gauge transformation generated by the mapping
$\xi_-$. Therefore, if the mappings $\gamma$, $\upsilon_{\pm i}$
satisfy the multidimensional Toda type equations, then the mappings
$\gamma'$, $\upsilon'_{\pm i}$ given by (\ref{91}) satisfy the same
equations. Note that, because the mappings $\xi_\pm$ are subjected to
(\ref{92}), transformations (\ref{91}) are not {\it gauge} symmetry
transformations of the multidimensional Toda type equations.
Let us make one more useful remark. Let $h_\pm$ be some fixed
elements of $\widetilde H$, and let the mappings $\gamma$, $\upsilon_{\pm i}$
satisfy the multidimensional Toda type equations generated by the
connection with the components of form (\ref{14}), (\ref{15}). It is
not difficult to verify that the mappings
\[
\gamma' = h_+^{-1} \gamma h_-, \qquad \upsilon'_{\pm i} = h_\pm^{-1}
\upsilon_{\pm i} h_\pm
\]
satisfy the multidimensional Toda type equations where instead of
$c_{\pm i}$ one uses the elements
\[
c'_{\pm i} = h_\pm^{-1} c_{\pm i} h_\pm.
\]
In such a sense, the multidimensional Toda type equations determined
by the elements $c_{\pm i}$ and $c'_{\pm i}$ which are connected by
the above relation, are equivalent.
Let us write the general form of the multidimensional Toda type
equations for $l_{-i} = l_{+i} = 1$ and $l_{-i} = l_{+i} = 2$. The
cases with other choices of $l_{\pm i}$ can be treated similarly.
Consider first the case $l_{-i} = l_{+i} = 1$. Here the connection
components have the form
\[
\omega_{-i} = \gamma^{-1} \partial_{-i} \gamma + c_{-i}, \qquad
\omega_{+i} = \gamma^{-1} c_{+i} \gamma.
\]
Equations (\ref{18}) are equivalent here to the following ones:
\begin{eqnarray}
&{}[c_{-i}, \gamma^{-1} \partial_{-j} \gamma] + [\gamma^{-1}
\partial_{-i} \gamma, c_{-j}] = 0,& \label{25} \\
&\partial_{-i} (\gamma^{-1} \partial_{-j} \gamma) - \partial_{-j}
(\gamma^{-1} \partial_{-i} \gamma) + [\gamma^{-1} \partial_{-i}
\gamma, \gamma^{-1} \partial_{-j} \gamma] = 0.& \label{26}
\end{eqnarray}
Equations (\ref{26}) are satisfied by any mapping $\gamma$;
equations (\ref{25}) can be identically rewritten as
\begin{equation}
\partial_{-i} (\gamma c_{-j} \gamma^{-1}) = \partial_{-j} (\gamma
c_{-i} \gamma^{-1}). \label{22}
\end{equation}
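This rewriting rests on the identity, valid for any mapping $\gamma$ and any constant element $c_{-j}$,

```latex
\partial_{-i} \left( \gamma c_{-j} \gamma^{-1} \right)
= \gamma \left[ \gamma^{-1} \partial_{-i} \gamma ,\, c_{-j} \right] \gamma^{-1},
```

so that (\ref{22}) is obtained from (\ref{25}) by conjugation with $\gamma$.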
Analogously, equations (\ref{19}) read
\begin{equation}
\partial_{+i} (\gamma^{-1} c_{+j} \gamma) = \partial_{+j}
(\gamma^{-1}
c_{+i} \gamma). \label{23}
\end{equation}
Finally, one easily verifies that equations (\ref{20}) can be
written as
\begin{equation}
\partial_{+j} (\gamma^{-1} \partial_{-i} \gamma) = [c_{-i},
\gamma^{-1}
c_{+j} \gamma]. \label{24}
\end{equation}
Thus, the zero curvature condition in the case under consideration is
equivalent to equations (\ref{22})--(\ref{24}). In the two
dimensional
case equations (\ref{22}) and (\ref{23}) are absent, and equations
(\ref{24}) take the form
\[
\partial_+ (\gamma^{-1} \partial_- \gamma) = [c_-, \gamma^{-1}
c_+ \gamma].
\]
If the Lie group $G$ is semisimple, then using the canonical
gradation of the corresponding Lie algebra ${\frak g}$, we get the
well known abelian Toda equations; noncanonical gradations lead to
various nonabelian Toda systems.
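To make the reduction concrete, take $G = SL(2,{\Bbb C})$ with the canonical gradation, $c_\pm = E_\pm$ and $\gamma = e^{\rho H}$, where the conventions $[H, E_\pm] = \pm 2 E_\pm$, $[E_+, E_-] = H$ are assumed; then

```latex
\gamma^{-1} \partial_- \gamma = \partial_- \rho \, H, \qquad
\gamma^{-1} c_+ \gamma = e^{-2\rho} E_+, \qquad
\partial_+ \partial_- \rho \, H = [E_-, e^{-2\rho} E_+] = - e^{-2\rho} H,
```

that is, the equation reduces to the Liouville equation $\partial_+ \partial_- \rho = - e^{-2\rho}$.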
Proceed now to the case $l_{-i} = l_{+i} = 2$. Here the connection
components are
\[
\omega_{-i} = \gamma^{-1} \partial_{-i} \gamma + \upsilon_{-i} +
c_{-i}, \qquad \omega_{+i} = \gamma^{-1} (\upsilon_{+i} + c_{+i})
\gamma,
\]
where we have denoted $\upsilon_{\pm i, \pm 1}$ simply by
$\upsilon_{\pm i}$. Equations (\ref{18}) take the form
\begin{eqnarray}
&[c_{-i}, \upsilon_{-j}] = [c_{-j}, \upsilon_{-i}],& \label{27} \\
&\partial_{-i}(\gamma c_{-j} \gamma^{-1}) - \partial_{-j} (\gamma
c_{-i} \gamma^{-1}) = [\gamma \upsilon_{-j} \gamma^{-1}, \gamma
\upsilon_{-i} \gamma^{-1}],& \label{28} \\
&\partial_{-i} (\gamma \upsilon_{-j} \gamma^{-1}) = \partial_{-j}
(\gamma \upsilon_{-i} \gamma^{-1}).& \label{29}
\end{eqnarray}
The similar system of equations follows from (\ref{19}),
\begin{eqnarray}
&[c_{+i}, \upsilon_{+j}] = [c_{+j}, \upsilon_{+i}],& \label{30} \\
&\partial_{+i}(\gamma^{-1} c_{+j} \gamma) - \partial_{+j}
(\gamma^{-1} c_{+i} \gamma) = [\gamma^{-1} \upsilon_{+j} \gamma,
\gamma^{-1} \upsilon_{+i} \gamma],& \label{31} \\
&\partial_{+i} (\gamma^{-1} \upsilon_{+j} \gamma) = \partial_{+j}
(\gamma^{-1} \upsilon_{+i} \gamma).& \label{32}
\end{eqnarray}
After some calculations we get from (\ref{20}) the equations
\begin{eqnarray}
&\partial_{-i} \upsilon_{+j} = [c_{+j}, \gamma \upsilon_{-i}
\gamma^{-1}],& \label{33} \\
&\partial_{+j} \upsilon_{-i} = [c_{-i}, \gamma^{-1} \upsilon_{+j}
\gamma],& \label{34} \\
&\partial_{+j} (\gamma^{-1} \partial_{-i} \gamma) = [c_{-i},
\gamma^{-1}
c_{+j} \gamma] + [\upsilon_{-i}, \gamma^{-1} \upsilon_{+j} \gamma].&
\label{35}
\end{eqnarray}
Thus, in the case $l_{-i} = l_{+i} = 2$ the zero curvature condition
is equivalent to the system of equations (\ref{27})--(\ref{35}). In
the two dimensional case we come to the equations
\begin{eqnarray*}
&\partial_- \upsilon_+ = [c_+, \gamma \upsilon_-
\gamma^{-1}], \qquad \partial_+ \upsilon_- = [c_-, \gamma^{-1}
\upsilon_+ \gamma],& \\
&\partial_+ (\gamma^{-1} \partial_- \gamma) = [c_-, \gamma^{-1}
c_+ \gamma] + [\upsilon_-, \gamma^{-1} \upsilon_+ \gamma],&
\end{eqnarray*}
which represent the simplest case of higher grading Toda systems
\cite{GSa95}.
\section{Construction of general solution}\label{cgs}
{}From the consideration presented above it follows that any admissible
mapping generates local solutions of the corresponding
multidimensional Toda type equations. Thus, if we are able to
construct admissible mappings, we can construct solutions of the
multidimensional Toda type equations. It is worth noting here that
the solutions in question are determined by the mappings $\mu_\pm$,
$\nu_\pm$ entering Gauss decompositions (\ref{2}), and by the
mapping $\eta$ which is defined via
(\ref{36}) by the mappings $\eta_\pm$ entering the same
decompositions. So, the problem is to find the mappings $\mu_\pm$,
$\nu_\pm$ and $\eta$ arising from admissible mappings by means of Gauss
decompositions (\ref{2}) and relation (\ref{36}). It appears that
this problem has a remarkably simple solution.
Recall that a mapping $\varphi: M \to G$ is admissible if and
only if the mappings $\mu_\pm$ entering Gauss decompositions
(\ref{2}) satisfy conditions (\ref{8}), and the mappings
$\mu_\pm^{-1}
\partial_{\pm i} \mu_\pm$ have the form
\begin{eqnarray}
&\mu^{-1}_- \partial_{-i} \mu_- = \gamma_- c_{-i} \gamma_-^{-1} +
\sum_{m = -1}^{-l_{-i} + 1 } \lambda_{-i, m},& \label{41} \\
&\mu^{-1}_+ \partial_{+i} \mu_+ = \sum_{m = 1}^{l_{+i} - 1}
\lambda_{+i, m} + \gamma_+ c_{+i} \gamma_+^{-1}.& \label{42}
\end{eqnarray}
Here $\gamma_\pm$ are some mappings taking values in $\widetilde H$
and satisfying conditions (\ref{38}); the mappings $\lambda_{\pm i,
m}$ take values in ${\frak g}_{\pm m}$; and $c_{\pm i}$ are fixed
elements of the subspaces ${\frak g}_{\pm l_{\pm i}}$ which satisfy
relations (\ref{39}).
{}On the other hand, the mappings $\mu_\pm$ uniquely determine the
mappings $\nu_\pm$ and $\eta$. Indeed, from (\ref{2}) one gets
\begin{equation}
\mu_+^{-1} \mu_- = \nu_- \eta \nu_+^{-1}. \label{37}
\end{equation}
Relation (\ref{37}) can be considered as the Gauss decomposition of
the mapping $\mu_+^{-1} \mu_-$ induced by the Gauss decomposition
(\ref{69}). Hence, the mappings $\mu_\pm$ uniquely determine the
mappings $\nu_\pm$ and $\eta$.
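For matrix groups, the Gauss decompositions used throughout are close relatives of the LU factorisation with the diagonal part split off. The following sketch (an illustration of our own, not part of the original construction: the function name and the sample matrix are ours, and we assume all leading principal minors are nonzero) computes the factors of $a = n_- h n_+^{-1}$ numerically.

```python
import numpy as np

def gauss_decompose(a):
    """Gauss decomposition a = n_minus @ h @ inv(n_plus), with n_minus
    and n_plus unit lower/upper triangular and h diagonal.  Assumes all
    leading principal minors of a are nonzero (no pivoting is done)."""
    m = a.shape[0]
    n_minus = np.eye(m)
    u = a.astype(float).copy()
    for j in range(m - 1):              # plain Gaussian elimination
        for i in range(j + 1, m):
            f = u[i, j] / u[j, j]
            n_minus[i, j] = f
            u[i] -= f * u[j]
    h = np.diag(np.diag(u))
    n_plus_inv = np.linalg.inv(h) @ u   # unit upper triangular factor
    n_plus = np.linalg.inv(n_plus_inv)
    return n_minus, h, n_plus

a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n_minus, h, n_plus = gauss_decompose(a)
assert np.allclose(n_minus @ h @ np.linalg.inv(n_plus), a)
```

In the matrix case, applying such a factorisation pointwise to $\mu_+^{-1} \mu_-$ is one way to obtain the mappings $\nu_\pm$ and $\eta$ of (\ref{37}).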
Taking all these remarks into account, we propose the following
procedure for obtaining solutions to the multidimensional Toda type
equations.
\subsection{Integration scheme}\label{is}
Let $\gamma_\pm$ be some mappings taking values in $\widetilde H$,
and $\lambda_{\pm i, m}$ be some mappings taking values in ${\frak
g}_{\pm m}$. Here it is supposed that
\begin{equation}
\partial_{\mp i} \gamma_\pm = 0, \qquad \partial_{\mp i} \lambda_{\pm
j, m} = 0. \label{56}
\end{equation}
Consider (\ref{41}) and (\ref{42}) as a system of partial
differential equations for the mappings $\mu_\pm$ and try to solve
it. Since we are going to use the mappings $\mu_\pm$ for the
construction of admissible mappings, we must deal only with
solutions of equations (\ref{41}) and (\ref{42}) which satisfy
relations (\ref{8}). The latter are equivalent to the following ones:
\begin{eqnarray}
&\mu_-^{-1} \partial_{+ i} \mu_- = 0,& \label{93} \\
&\mu_+^{-1} \partial_{- i} \mu_+ = 0.& \label{94}
\end{eqnarray}
So, we have to solve the system consisting of
equations (\ref{41}), (\ref{42}) and (\ref{93}), (\ref{94}).
Certainly, it is possible to solve this system if and only if the
corresponding integrability conditions are satisfied. The right hand sides of
equations (\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94}) can be
interpreted as components of flat connections on the trivial
principal fiber bundle $M \times G \to M$. Therefore, the
integrability conditions of equations (\ref{41}), (\ref{93}) and
(\ref{42}), (\ref{94}) take the form of the zero curvature condition for these
connections. In particular, for the case $l_{-i} = l_{+i} = 2$ the
integrability conditions are
\begin{eqnarray*}
&\partial_{\pm i} \lambda_{\pm j} = \partial_{\pm j} \lambda_{\pm
i},& \\
&\partial_{\pm i}(\gamma_\pm c_{\pm j} \gamma_\pm^{-1}) -
\partial_{\pm
j}(\gamma_\pm c_{\pm i} \gamma_\pm^{-1}) = [\lambda_{\pm j},
\lambda_{\pm
i}],& \\
&[\lambda_{\pm i}, \gamma_\pm c_{\pm j} \gamma_\pm ^{-1}] =
[\lambda_{\pm j}, \gamma_\pm c_{\pm i} \gamma_\pm ^{-1}],&
\end{eqnarray*}
where we have denoted $\lambda_{\pm i, 1}$ simply by $\lambda_{\pm
i}$.
In general, the integrability conditions can be considered as two
systems of partial nonlinear differential equations for the mappings
$\gamma_-$, $\lambda_{-i, m}$ and $\gamma_+$, $\lambda_{+i, m}$,
respectively. The multidimensional Toda type equations are integrable
if and only if these systems are integrable. In any case, if we
succeed in finding a solution of the integrability conditions, we can
construct the corresponding solution of the multidimensional Toda
type equations. A set of mappings $\gamma_\pm$ and $\lambda_{\pm i,
m}$ satisfying (\ref{56}) and the corresponding integrability
conditions will be called {\it integration data}. It is clear that
for any set of integration data the solution of equations (\ref{41}),
(\ref{93}) and (\ref{42}), (\ref{94}) is fixed by the initial
conditions which are constant elements of the group $G$. More
precisely, let $p$ be some fixed point of $M$ and $a_\pm$ be some
fixed elements of $G$. Then there exists a unique solution of
equations (\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94})
satisfying the conditions
\begin{equation}
\mu_\pm (p) = a_\pm. \label{72}
\end{equation}
It is not difficult to show that the mappings $\mu_\pm$ satisfying
the equations under consideration and initial conditions (\ref{72})
take values in $a_\pm \widetilde N_\pm$. Note that in the two
dimensional case the integrability conditions become trivial.
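In the two dimensional case, equation (\ref{41}) together with (\ref{93}) reduces to an ordinary matrix differential equation in the single variable $z^-$. The sketch below (a numerical illustration under simplifying assumptions of our own: $2 \times 2$ real matrices and a particular lower triangular right hand side standing in for $\gamma_- c_- \gamma_-^{-1}$) integrates $\partial_- \mu_- = \mu_- A(z^-)$ with the initial condition of (\ref{72}).

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hypothetical right hand side A(z) standing in for the grade -1
# part gamma_- c_- gamma_-^{-1} of equation (41); the concrete 2 x 2
# form below is our own choice made for illustration.
def A(z):
    return np.array([[0.0, 0.0], [np.exp(z), 0.0]])

a_minus = np.eye(2)  # initial condition mu_-(p) = a_-, cf. (72)

def rhs(z, y):
    mu = y.reshape(2, 2)
    return (mu @ A(z)).ravel()  # the equation d mu_- / dz = mu_- A(z)

sol = solve_ivp(rhs, (0.0, 1.0), a_minus.ravel(), rtol=1e-10, atol=1e-12)
mu = sol.y[:, -1].reshape(2, 2)

# for this choice the solution is explicit: mu(z) = [[1, 0], [e^z - 1, 1]]
assert np.allclose(mu, [[1.0, 0.0], [np.e - 1.0, 1.0]])
```

The computed $\mu_-$ stays unit lower triangular, in agreement with the remark that the solutions fixed by (\ref{72}) take values in $a_\pm \widetilde N_\pm$.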
The next natural step is to use Gauss decomposition (\ref{37}) to
obtain the mappings $\nu_\pm$ and $\eta$. In general, solving
equations (\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94}), we get
the mappings $\mu_\pm$, for which the mapping $\mu_+^{-1} \mu_-$ may
fail to have a Gauss decomposition of the form (\ref{37}) at some
points of $M$. In such a case one comes to solutions of the
multidimensional Toda type equations with some irregularities.
Having found the mappings $\mu_\pm$ and $\eta$, one uses (\ref{40})
and the relations
\begin{eqnarray}
&\sum_{m = -1}^{-l_{-i}} \upsilon_{-i,m} = \gamma_-^{-1} \eta^{-1}
(\nu_-^{-1} \partial_{-i} \nu_-) \eta \gamma_-,& \label{52} \\
&\sum_{m = 1}^{l_{+i}} \upsilon_{+i,m} = \gamma_+^{-1} \eta
(\nu_+^{-1} \partial_{+i} \nu_+) \eta^{-1} \gamma_+& \label{53}
\end{eqnarray}
to construct the mappings $\gamma$ and $\upsilon_{\pm i, m}$. Let us show
that these mappings satisfy the multidimensional Toda type equations.
To this end consider the mapping
\[
\varphi = \mu_+ \nu_- \eta \gamma_- = \mu_- \nu_+ \gamma_-,
\]
whose form is actually suggested by (\ref{95}). The mapping $\varphi$
is admissible. Moreover, using formulas of section \ref{de}, it is
not difficult to demonstrate that it generates the connection with
components of form (\ref{14}) and (\ref{15}), where the mappings
$\gamma$ and $\upsilon_{\pm i, m}$ are defined by the above
construction. Since this connection is certainly flat, the mappings
$\gamma$ and $\upsilon_{\pm i, m}$ satisfy the multidimensional Toda
type equations.
\subsection{Generality of solution}\label{gs}
Let us now prove that any solution of the multidimensional Toda type
equations can be obtained by the integration scheme described above.
Let $\gamma: M \to \widetilde H$ and $\upsilon_{\pm i, m}: M \to
{\frak g}_{\pm m}$ be arbitrary mappings satisfying the
multidimensional Toda type equations. We have to show that there
exists a set of integration data leading, by the above integration
scheme, to the mappings $\gamma$ and $\upsilon_{\pm i, m}$.
Using $\gamma$ and $\upsilon_{\pm i, m}$, construct the
connection with the components given by (\ref{14}) and (\ref{15}).
Since this connection is flat and admissible, there exists an
admissible mapping $\varphi: M \to G$ which generates it. Write for
$\varphi$ local Gauss decompositions (\ref{2}). The mappings
$\mu_\pm$ entering these decompositions satisfy relations (\ref{8}).
Since the mapping $\varphi$ is admissible, we have expansions
(\ref{41}), (\ref{42}). It is convenient to write them in
the form
\begin{eqnarray}
&\mu^{-1}_- \partial_{-i} \mu_- = \gamma'_- c_{-i} \gamma_-^{\prime
-1} +
\sum_{m = -1}^{-l_{-i} + 1 } \lambda_{-i, m},& \label{54} \\
&\mu^{-1}_+ \partial_{+i} \mu_+ = \sum_{m = 1}^{l_{+i} - 1}
\lambda_{+i, m} + \gamma'_+ c_{+i} \gamma_+^{\prime -1},& \label{55}
\end{eqnarray}
where we use primes because, in general, the mappings $\gamma'_\pm$
are not yet the mappings leading to the considered solution of the
multidimensional Toda type equations. Choose the mappings
$\gamma'_\pm$ in such a way that
\[
\partial_{\mp i} \gamma'_\pm = 0.
\]
In our case, formulas (\ref{12}) and (\ref{13}) take the
form
\begin{eqnarray}
&\omega_{-i} = \eta_+^{-1} \gamma'_- \left(c_{-i} + \sum_{m =
-1}^{-l_{-i}
+ 1} \upsilon'_{-i, m} \right) \gamma_-^{\prime -1} \eta_+ +
\eta_-^{-1} \partial_{-i} \eta_-,& \label{43} \\
&\omega_{+i} = \eta_-^{-1} \gamma'_+ \left(\sum_{m = 1}^{l_{+i} -1}
\upsilon'_{+i, m} + c_{+i} \right) \gamma_+^{\prime -1} \eta_- +
\eta_+^{-1} \partial_{+i} \eta_+,& \label{44}
\end{eqnarray}
where the mappings $\upsilon'_{\pm i, m}$ are defined by the
relations
\begin{eqnarray}
&\sum_{m = -1}^{-l_{-i}} \upsilon'_{-i,m} = \gamma_-^{\prime -1}
\eta^{-1} (\nu_-^{-1} \partial_{-i} \nu_-) \eta \gamma'_-,&
\label{50} \\
&\sum_{m = 1}^{l_{+i}} \upsilon'_{+i,m} = \gamma_+^{\prime -1} \eta
(\nu_+^{-1} \partial_{+i} \nu_+) \eta^{-1} \gamma'_+.& \label{51}
\end{eqnarray}
{}From (\ref{44}) and (\ref{15}) it follows that the mapping $\eta_+$
satisfies the relation
\[
\partial_{+ i} \eta_+ = 0.
\]
Therefore, for the mapping
\[
\xi_- = \gamma_-^{\prime -1} \eta_+
\]
one has
\[
\partial_{+ i} \xi_- = 0.
\]
Comparing (\ref{43}) and (\ref{14}) one sees that the mapping $\xi_-$
takes values in $\widetilde H_-$. Relation (\ref{95}) suggests that we define
\begin{equation}
\gamma_- = \eta_+, \label{45}
\end{equation}
so that
\[
\gamma_- = \gamma'_- \xi_-.
\]
Further, from (\ref{43}) and (\ref{14}) we conclude that
\[
\partial_{-i} (\eta_- \gamma^{-1}) = 0,
\]
and, hence, for the mapping
\[
\xi_+ = \gamma_+^{\prime -1} \eta_- \gamma^{-1}
\]
one has
\[
\partial_{- i} \xi_+ = 0.
\]
Comparing (\ref{44}) and (\ref{15}), we see that the mapping $\xi_+$
takes values in $\widetilde H_+$. Denoting
\begin{equation}
\gamma_+ = \eta_- \gamma^{-1}, \label{49}
\end{equation}
we get
\[
\gamma_+ = \gamma'_+ \xi_+.
\]
Let us now show that the mappings $\gamma_\pm$ we have just defined, and the
mappings $\lambda_{\pm i, m}$ determined by relations (\ref{54}),
(\ref{55}), are the sought-for mappings leading to the considered
solution of the multidimensional Toda type equations.
Indeed, since the mappings $\xi_\pm$ take values in $\widetilde
H_\pm$, one gets from (\ref{54}) and (\ref{55}) that the mappings
$\mu_\pm$ can be considered as solutions of equations (\ref{41}) and
(\ref{42}). Further, the mappings $\nu_\pm$ and $\eta = \eta_-
\eta_+^{-1}$ can be treated as the mappings obtained from the Gauss
decomposition (\ref{37}). Relations (\ref{45}) and (\ref{49}) imply
that the mapping $\gamma$ is given by (\ref{40}). Now, from
(\ref{43}), (\ref{44}) and (\ref{14}), (\ref{15}) it follows that
\[
\upsilon_{\pm i, m} = \xi_\pm^{-1} \upsilon'_{\pm i, m} \xi_\pm.
\]
Taking into account (\ref{50}) and (\ref{51}), we finally see that
the mappings $\upsilon_{\pm i, m}$ satisfy relations (\ref{52}) and
(\ref{53}). Thus, any solution of the multidimensional Toda type
equations can be locally obtained by the above integration scheme.
\subsection{Dependence of solution on integration data}
It appears that different sets of integration data can give the same
solution of the multidimensional Toda type equations. Consider this
problem in detail. Let $\gamma_\pm$, $\lambda_{\pm i, m}$ and
$\gamma'_\pm$, $\lambda'_{\pm i, m}$ be two sets of mappings
satisfying the integrability conditions of the equations determining
the corresponding mappings $\mu_\pm$ and $\mu'_\pm$. Suppose that
the solutions $\gamma$, $\upsilon_{\pm i, m}$ and $\gamma'$,
$\upsilon'_{\pm i, m}$ obtained by the above procedure coincide. In
this case the corresponding connections $\omega$ and $\omega'$ also
coincide. As it follows from the discussion given in section
\ref{is}, these connections are generated by the mappings $\varphi$
and $\varphi'$ defined as
\begin{equation}
\varphi = \mu_- \nu_+ \gamma_- = \mu_+ \nu_- \eta \gamma_-, \qquad
\varphi' = \mu'_- \nu'_+ \gamma'_- = \mu'_+ \nu'_- \eta' \gamma'_-.
\label{59}
\end{equation}
Since the connections $\omega$ and $\omega'$ coincide, we have
\[
\varphi' = a \varphi
\]
for some element $a \in G$. Hence, from (\ref{59}) it follows that
\[
\mu'_- \nu'_+ \gamma'_- = a \mu_- \nu_+ \gamma_-.
\]
This equality can be rewritten as
\begin{equation}
\mu'_- = a \mu_- \chi_+ \psi_+, \label{57}
\end{equation}
where the mappings $\chi_+$ and $\psi_+$ are defined by
\[
\chi_+ = \nu_+ \gamma_- \gamma_-^{\prime -1} \nu_+^{\prime -1}
\gamma'_- \gamma_-^{-1}, \qquad \psi_+ = \gamma_- \gamma_-^{\prime
-1}.
\]
Note that the mapping $\chi_+$ takes values in $\widetilde N_+$ and
the mapping $\psi_+$ takes values in $\widetilde H$. Moreover, one
has
\begin{equation}
\partial_{+i} \chi_+ = 0, \qquad \partial_{+i} \psi_+ = 0. \label{61}
\end{equation}
Similarly, from the equality
\[
\mu'_+ \nu'_- \eta' \gamma'_- = a \mu_+ \nu_- \eta \gamma_-
\]
we get the relation
\begin{equation}
\mu'_+ = a \mu_+ \chi_- \psi_-, \label{58}
\end{equation}
with the mappings $\chi_-$ and $\psi_-$ given by
\[
\chi_- = \nu_- \eta \gamma_- \gamma_-^{\prime -1} \eta^{\prime -1}
\nu_-^{\prime -1} \eta' \gamma'_- \gamma_-^{-1} \eta^{-1}, \qquad
\psi_- = \eta \gamma_- \gamma_-^{\prime -1} \eta^{\prime -1}.
\]
Here the mapping $\chi_-$ takes values in $\widetilde N_-$, the
mapping $\psi_-$ takes values in $\widetilde H$, and one has
\begin{equation}
\partial_{-i} \chi_- = 0, \qquad \partial_{-i} \psi_- = 0.
\label{62}
\end{equation}
Now using the Gauss decompositions
\[
\mu_+^{-1} \mu_- = \nu_- \eta \nu_+^{-1}, \qquad
\mu_+^{\prime -1} \mu'_- = \nu'_- \eta' \nu_+^{\prime -1}
\]
and relations (\ref{57}), (\ref{58}), one comes to the equalities
\begin{equation}
\eta' = \psi_-^{-1} \eta \psi_+, \qquad \nu'_\pm = \psi_\pm^{-1}
\chi_\pm^{-1} \nu_\pm \psi_\pm. \label{63}
\end{equation}
Further, from the definition of the mapping $\psi_+$ one gets
\begin{equation}
\gamma'_- = \psi_+^{-1} \gamma_-. \label{65}
\end{equation}
Since $\gamma' = \gamma$, we can write
\[
\gamma_+^{\prime -1} \eta' \gamma'_- = \gamma_+^{-1} \eta \gamma_-,
\]
therefore,
\begin{equation}
\gamma_+' = \psi_-^{-1} \gamma_+. \label{66}
\end{equation}
Equalities (\ref{57}) and (\ref{58}) give the relation
\begin{eqnarray}
\lefteqn{\mu_\pm^{\prime -1} \partial_{\pm i} \mu'_\pm} \nonumber \\
&=& \psi_\mp^{-1} \chi_\mp^{-1} (\mu_\pm^{-1} \partial_{\pm i}
\mu_\pm) \chi_\mp \psi_\mp + \psi_\mp^{-1} (\chi_\mp^{-1}
\partial_{\pm i} \chi_\mp) \psi_\mp + \psi_\mp^{-1} \partial_{\pm
i} \psi_\mp, \hspace{3.em} \label{60}
\end{eqnarray}
which implies
\begin{equation}
\sum_{m = \pm 1}^{\pm l_{\pm i} \mp 1} \lambda'_{\pm i, m} =
\left[ \psi_\mp^{-1} \chi_\mp^{-1} \left( \sum_{m = \pm 1}^{\pm l_{\pm i}
\mp 1} \lambda_{\pm i, m} \right) \chi_\mp \psi_\mp \right]_{\widetilde
{\frak n}_\pm}. \label{71}
\end{equation}
Let again $\gamma_\pm$, $\lambda_{\pm i, m}$ and $\gamma'_\pm$,
$\lambda'_{\pm i, m}$ be two sets of mappings satisfying the
integrability conditions of the equations determining the
corresponding mappings $\mu_\pm$ and $\mu'_\pm$. Denote by $\gamma$,
$\upsilon_{\pm i, m}$ and by $\gamma'$, $\upsilon'_{\pm i, m}$ the
corresponding solutions of the multidimensional Toda type equations.
Suppose that the mappings $\mu_\pm$ and $\mu'_\pm$ are connected by
relations (\ref{57}) and (\ref{58}) where the mappings $\chi_\pm$
take values in $\widetilde N_\pm$ and the mappings $\psi_\pm$ take
values in $\widetilde H$. It is not difficult to verify that
the mappings $\chi_\pm$ and $\psi_\pm$ satisfy relations (\ref{61})
and (\ref{62}). It is also clear that in the case under
consideration relations (\ref{63}) and (\ref{60}) are valid. From
(\ref{60}) it follows that
\[
\gamma'_\pm c_{\pm i} \gamma_\pm^{\prime -1} = \psi_\mp^{-1}
\gamma_\pm c_{\pm i} \gamma_\pm^{-1} \psi_\mp.
\]
Therefore, one has
\begin{equation}
\gamma'_\pm = \psi_\mp^{-1} \gamma_\pm \xi_\pm, \label{64}
\end{equation}
where the mappings $\xi_\pm$ take values in $\widetilde H_\pm$.
Taking into account (\ref{63}), we get
\[
\gamma' = \xi_+^{-1} \gamma \xi_-.
\]
Using now (\ref{52}), (\ref{53}) and the similar relations for the
mappings $\upsilon'_{\pm i, m}$, we come to the relations
\[
\upsilon'_{\pm i, m} = \xi_\pm^{-1} \upsilon_{\pm i, m} \xi_\pm.
\]
If instead of (\ref{64}) one has (\ref{65}) and (\ref{66}), then
$\gamma' = \gamma$ and $\upsilon'_{\pm i, m} = \upsilon_{\pm i, m}$.
Thus, the sets $\gamma_\pm$, $\lambda_{\pm i, m}$ and $\gamma'_\pm$,
$\lambda'_{\pm i, m}$ give the same solution of the multidimensional
Toda type equations if and only if the corresponding mappings
$\mu_\pm$ and $\mu'_\pm$ are connected by relations (\ref{57}),
(\ref{58}) and equalities (\ref{65}), (\ref{66}) are valid.
Let now $\gamma_\pm$ and $\lambda_{\pm i, m}$ be a set of integration
data, and $\mu_\pm$ be the solution of equations (\ref{41}),
(\ref{93}) and (\ref{42}), (\ref{94}) specified by initial conditions
(\ref{72}). Suppose that the mappings $\mu_\pm$ admit the Gauss
decompositions
\begin{equation}
\mu_\pm = \mu'_\pm \nu'_\mp \eta'_\mp, \label{73}
\end{equation}
where the mappings $\mu'_\pm$ take values in $a'_\pm \widetilde
N_\pm$, the mappings $\nu'_\pm$ take values in $\widetilde N_\pm$ and
the mappings $\eta'_\pm$ take values in $\widetilde H$.
Note that if $a_\pm \widetilde N_\pm = a'_\pm \widetilde N_\pm$, then
$\mu'_\pm = \mu_\pm$. Equalities (\ref{73}) imply that
the mappings $\mu_\pm$ and $\mu'_\pm$ are connected by relations
(\ref{57}) and (\ref{58}) with $a = e$ and
\[
\chi_\pm = \eta_\pm^{\prime -1} \nu_\pm^{\prime -1} \eta'_\pm, \qquad
\psi_\pm = \eta_\pm^{\prime -1}.
\]
{}From (\ref{60}) it follows that the mappings $\gamma'_\pm$ and
$\lambda'_{\pm i, m}$ given by (\ref{65}), (\ref{66}) and (\ref{71})
generate the mappings $\mu'_\pm$ as a solution of equations
(\ref{41}), (\ref{42}). It is clear that in the case under
consideration the solutions of the multidimensional Toda type
equations, obtained using the mappings $\gamma_\pm$, $\lambda_{\pm i,
m}$ and $\gamma'_\pm$, $\lambda'_{\pm i, m}$, coincide. Certainly, we
must use here the appropriate initial conditions for the mappings
$\mu_\pm$ and $\mu'_\pm$. Thus, we see that the solution of the
multidimensional Toda type equations, which is determined by the mappings
$\gamma_\pm$, $\lambda_{\pm i, m}$ and by the corresponding mappings
$\mu_\pm$ taking values in $a_\pm \widetilde N_\pm$, can be also
obtained starting from some mappings $\gamma'_\pm$, $\lambda'_{\pm i,
m}$ and the corresponding mappings $\mu'_{\pm}$ taking values in
$a'_\pm \widetilde N_\pm$. The above construction fails when the
mappings $\mu_\pm$ do not admit Gauss decomposition (\ref{73}).
Roughly speaking, almost all solutions of the multidimensional Toda
type equations can be obtained by the method described in the present
section if we use only the mappings $\mu_\pm$ taking values in the
sets $a_\pm \widetilde N_\pm$ for some fixed elements $a_\pm \in G$.
In particular, we can consider only the mappings $\mu_\pm$ taking
values in $\widetilde N_\pm$.
Summarising our consideration, let us describe once more the procedure for
obtaining the general solution to the multidimensional Toda type
equations. We start with the mappings $\gamma_\pm$ and $\lambda_{\pm
i, m}$ which satisfy (\ref{56}) and the integrability conditions of
equations (\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94}).
Integrating these equations, we get the mappings $\mu_\pm$. Further,
Gauss decomposition (\ref{37}) gives the mappings $\eta$ and
$\nu_\pm$. Finally, using (\ref{40}), (\ref{52}) and (\ref{53}), we
obtain the mappings $\gamma$ and $\upsilon_{\pm i, m}$ which satisfy
the multidimensional Toda type equations. Any solution can be
obtained by using this procedure. Two sets of mappings $\gamma_\pm$,
$\lambda_{\pm i, m}$ and $\gamma'_\pm$, $\lambda'_{\pm i, m}$ give
the same solution if and only if the corresponding mappings $\mu_\pm$
and $\mu'_\pm$ are connected by relations (\ref{57}), (\ref{58}) and
equalities (\ref{65}), (\ref{66}) are valid. Almost all solutions of
the multidimensional Toda type equations can be obtained using the
mappings $\mu_\pm$ taking values in the subgroups $\widetilde N_\pm$.
\subsection{Automorphisms and reduction}\label{ar}
Let $\Sigma$ be an automorphism of the Lie group $G$, and $\sigma$ be
the corresponding automorphism of the Lie algebra ${\frak g}$.
Suppose that
\begin{equation}
\sigma({\frak g}_m) = {\frak g}_m. \label{115}
\end{equation}
In this case
\begin{equation}
\Sigma(\widetilde H) = \widetilde H, \qquad \Sigma(\widetilde N_\pm) =
\widetilde N_\pm. \label{111}
\end{equation}
Suppose additionally that
\begin{equation}
\sigma(c_{\pm i}) = c_{\pm i}. \label{116}
\end{equation}
It is now easy to show that if the mappings $\gamma$ and $\upsilon_{\pm
i, m}$ satisfy the multidimensional Toda type equations, then the
mappings $\Sigma \circ \gamma$ and $\sigma \circ
\upsilon_{\pm i, m}$ satisfy the same equations. In such a situation
we can consider the subset of the solutions satisfying the conditions
\begin{equation}
\Sigma \circ \gamma = \gamma, \qquad \sigma \circ \upsilon_{\pm i,
m} = \upsilon_{\pm i, m}. \label{109}
\end{equation}
It is customary to call the transition to some subset of the
solutions of a given system of equations a reduction of the system.
Below we discuss a method to obtain solutions of
the multidimensional Toda type system satisfying relations (\ref{109}).
Let us first introduce some notation and give a few definitions.
Denote by $\widehat G$ the subgroup of $G$ formed by the elements
invariant with respect to the automorphism $\Sigma$. In other words,
\[
\widehat G = \{a \in G \mid \Sigma(a) = a\}.
\]
The subgroup $\widehat G$ is a closed subgroup of $G$. Therefore,
$\widehat G$ is a Lie subgroup of $G$. It is clear that the
subalgebra $\widehat {\frak g}$ of the Lie algebra ${\frak g}$,
defined by
\[
\widehat {\frak g} = \{x \in {\frak g} \mid \sigma(x) = x\},
\]
is the Lie algebra of $\widehat G$. The Lie algebra $\widehat{\frak
g}$ is a ${\Bbb Z}$--graded subalgebra of ${\frak g}$:
\[
\widehat {\frak g} = \bigoplus_{m \in {\Bbb Z}} \widehat{\frak g}_m,
\]
where
\[
\widehat {\frak g}_m = \{x \in {\frak g}_m \mid \sigma(x) = x\}.
\]
Define now the following Lie subgroups of $\widehat G$,
\[
\widehat{\widetilde H} = \{a \in \widetilde H \mid \Sigma(a) = a\},
\qquad \widehat{\widetilde N}_\pm = \{a \in \widetilde N_\pm \mid
\Sigma(a) = a\}.
\]
Using the definitions given above, we can reformulate conditions
(\ref{109}) by saying that the mapping $\gamma$ takes values in
$\widehat{\widetilde H}$, and the mappings $\upsilon_{\pm i, m}$ take
values in $\widehat {\frak g}_m$.
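As a small numerical illustration of these definitions (with an automorphism of our own choosing: conjugation by a fixed diagonal involution, which is one simple way to preserve a block gradation), one can check that the fixed point set $\widehat{\frak g}$ is indeed closed under the commutator.

```python
import numpy as np

# a simple involutive automorphism Sigma(a) = s a s^{-1} of GL(3, C),
# restricted here to real matrices; the choice of s is ours
s = np.diag([1.0, -1.0, 1.0])
sigma = lambda x: s @ x @ np.linalg.inv(s)

rng = np.random.default_rng(1)

def project(x):
    # projector onto the fixed point set, using sigma squared = id
    return (x + sigma(x)) / 2

x = project(rng.standard_normal((3, 3)))
y = project(rng.standard_normal((3, 3)))
bracket = x @ y - y @ x
# sigma is an automorphism, so the commutator of two invariant
# elements is again invariant: the fixed point set is a subalgebra
assert np.allclose(sigma(bracket), bracket)
```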
Let $a$ be an arbitrary element of $\widehat G$. Consider $a$ as an
element of $G$ and suppose that it has the Gauss decomposition
(\ref{69}). Then from the equality $\Sigma(a) = a$, we get the relation
\[
\Sigma(n_-) \Sigma(h) \Sigma(n_+^{-1}) = n_- h n_+^{-1}.
\]
Taking into account (\ref{111}) and the uniqueness of the Gauss
decomposition (\ref{69}), we conclude that
\[
\Sigma(h) = h, \qquad \Sigma(n_\pm) = n_\pm.
\]
Thus, the elements of some dense subset of $\widehat G$
possess the Gauss decomposition (\ref{69}) with $h \in
\widehat{\widetilde H}$, $n_\pm \in \widehat{\widetilde N}_\pm$,
and this decomposition is unique. Similarly, one can verify
that any element of $\widehat G$ has the modified Gauss
decompositions (\ref{70}) with $m_\pm \in a_\pm \widehat{\widetilde
N}_\pm$ for some elements $a_\pm \in \widehat G$, $n_\pm \in
\widehat{\widetilde N}_\pm$ and $h_\pm \in \widehat{\widetilde H}$.
To obtain solutions of the multidimensional Toda type equations
satisfying (\ref{109}), we start with the mappings $\gamma_\pm$ and
$\lambda_{\pm i, m}$ which satisfy the corresponding integrability
conditions and the relations similar to
(\ref{109}):
\begin{equation}
\Sigma \circ \gamma_\pm = \gamma_\pm, \qquad \sigma \circ \lambda_{\pm
i, m} = \lambda_{\pm i, m}. \label{112}
\end{equation}
In this case, for any solution of equations (\ref{41}), (\ref{93}) and
(\ref{42}), (\ref{94}) one has
\[
\sigma \circ (\mu_\pm^{-1} \partial_{\pm i} \mu_\pm) = \mu_\pm^{-1}
\partial_{\pm i} \mu_\pm, \qquad
\sigma \circ (\mu_\pm^{-1} \partial_{\mp i} \mu_\pm) = \mu_\pm^{-1}
\partial_{\mp i} \mu_\pm.
\]
{}From these relations it follows that
\begin{equation}
\Sigma \circ \mu_\pm = b_\pm \mu_\pm, \label{110}
\end{equation}
where $b_\pm$ are some elements of $G$. Recall that a solution of
equations (\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94}) is
uniquely specified by conditions (\ref{72}). If the elements $a_\pm$
entering these conditions belong to the group $\widehat G$, then
instead of (\ref{110}), we get for the corresponding mappings
$\mu_\pm$ the relations
\[
\Sigma \circ \mu_\pm = \mu_\pm.
\]
For such mappings $\mu_\pm$ the Gauss decomposition (\ref{37}) gives
the mappings $\eta$ and $\nu_\pm$ which satisfy the equalities
\[
\Sigma \circ \eta = \eta, \qquad \Sigma \circ \nu_\pm = \nu_\pm.
\]
It is not difficult to verify that the corresponding solution
of the multidimensional Toda type equations satisfies (\ref{109}).
Let us now show that any solution of the multidimensional Toda type
equations satisfying (\ref{109}) can be obtained in such a way.
Let the mappings $\gamma$ and $\upsilon_{\pm i, m}$ satisfy the
multidimensional Toda type equations, and let equalities (\ref{109}) be
valid. In this case, for the flat connection $\omega$ with the components
defined by (\ref{14}) and (\ref{15}), one has
\[
\sigma \circ \omega = \omega.
\]
Therefore, a mapping $\varphi: M \to G$ generating the connection
$\omega$ satisfies, in general, the relation
\[
\Sigma \circ \varphi = b \varphi,
\]
where $b$ is some element of $G$. However, if for some point $p \in
M$ one has $\varphi(p) \in \widehat G$, then we have the relation
\begin{equation}
\Sigma \circ \varphi = \varphi. \label{113}
\end{equation}
Since the mapping $\varphi$ is defined up to left multiplication by an
arbitrary element of $G$, it is clear that we
can always choose this mapping in such a way that it satisfies
(\ref{113}). Take such a mapping $\varphi$ and construct for it the
local Gauss decompositions (\ref{2}) where the mappings $\mu_\pm$
take values in the sets $a_\pm \widehat{\widetilde N}_\pm$ for some
$a_\pm \in \widehat G$, the mappings $\nu_\pm$ take values in
$\widehat{\widetilde N}_\pm$, and the mappings $\eta_\pm$ take values
in $\widehat{\widetilde H}$. In particular, one has
\begin{equation}
\Sigma \circ \mu_\pm = \mu_\pm. \label{114}
\end{equation}
As it follows from the consideration performed in section \ref{gs},
the mappings $\mu_\pm$ can be treated as solutions of equations
(\ref{41}), (\ref{93}) and (\ref{42}), (\ref{94}) for some mappings
$\lambda_{\pm i, m}$ and the mappings $\gamma_\pm$ given by
(\ref{45}), (\ref{49}). Clearly, in this case
\[
\Sigma \circ \gamma_\pm = \gamma_\pm,
\]
and from (\ref{114}) it follows that
\[
\Sigma \circ \lambda_{\pm i, m} = \lambda_{\pm i, m}.
\]
Moreover, the mappings $\gamma_\pm$ and $\lambda_{\pm i, m}$ are
integration data leading to the considered solution of the
multidimensional Toda type equations. Thus, if we start with
mappings $\gamma_\pm$ and $\lambda_{\pm i, m}$ which satisfy the
integrability conditions and relations (\ref{112}), and use the
mappings $\mu_\pm$ specified by conditions (\ref{72}) with $a_\pm \in
\widehat G$, then we get a solution satisfying (\ref{109}), and any such
solution can be obtained in this way.
Let now $\Sigma$ be an antiautomorphism of $G$, and $\sigma$ be the
corresponding antiautomorphism of ${\frak g}$. In this case we again
suppose the validity of the relations $\sigma({\frak g}_m) = {\frak
g}_m$ which imply that $\Sigma(\widetilde H) = \widetilde H$ and
$\Sigma(\widetilde N_\pm) = \widetilde N_\pm$. However, instead of
(\ref{116}), we suppose that
\[
\sigma(c_{\pm i}) = - c_{\pm i}.
\]
One can easily verify that if the mappings $\gamma$ and
$\upsilon_{\pm i, m}$ satisfy the multidimensional Toda type
equations, then the mappings $(\Sigma \circ \gamma)^{-1}$ and
$-\sigma \circ \upsilon_{\pm i, m}$ also satisfy these equations.
Therefore, it is natural to consider the reduction to the mappings
satisfying the conditions
\[
\Sigma \circ \gamma = \gamma^{-1}, \qquad \sigma \circ \upsilon_{\pm
i, m} = - \upsilon_{\pm i, m}.
\]
The subgroup $\widehat G$ is now defined as
\begin{equation}
\widehat G = \{a \in G \mid \Sigma(a) = a^{-1} \}. \label{134}
\end{equation}
To get the general solution of the reduced system, we should start
with the integration data $\gamma_\pm$ and $\lambda_{\pm i, m}$
which satisfy the relations
\[
\Sigma \circ \gamma_\pm = \gamma_\pm^{-1}, \qquad \sigma \circ
\lambda_{\pm i, m} = - \lambda_{\pm i, m},
\]
and use the mappings $\mu_\pm$ specified by conditions (\ref{72})
with $a_\pm$ belonging to the subgroup $\widehat G$ defined by (\ref{134}).
One can also consider reductions based on antiholomorphic
automorphisms of $G$ and on the corresponding antilinear
automorphisms of ${\frak g}$. In this way it is possible to introduce
the notion of `real' solutions to a multidimensional Toda type system.
We refer the reader to the discussion of this problem given in
\cite{RSa94,RSa96} for the two dimensional case. The generalisation to the
multidimensional case is straightforward.
\section{Examples}
\subsection{Generalised WZNW equations}
The simplest example of the multidimensional Toda type equations is
the so-called generalised Wess--Zumino--Novikov--Witten (WZNW)
equations \cite{GMa93}. Let $G$ be an arbitrary complex connected
matrix Lie group. Consider the Lie algebra ${\frak g}$ of $G$ as a
${\Bbb Z}$--graded Lie algebra ${\frak g} = {\frak g}_{-1} \oplus
{\frak g}_0 \oplus {\frak g}_{+1}$, where ${\frak g}_0 = {\frak g}$
and ${\frak g}_{\pm 1} = \{0\}$. In this case the subgroup
$\widetilde H$ coincides with the whole Lie group $G$, and the
subgroups $\widetilde N_\pm$ are trivial. So, the mapping $\gamma$
parametrising the connection components of form (\ref{14}),
(\ref{15}), takes values in $G$. The only possible choice for the
elements $c_{\pm i}$ is $c_{\pm i} = 0$, and equations
(\ref{22})--(\ref{24}) take the form
\[
\partial_{+j} (\gamma^{-1} \partial_{-i} \gamma) = 0,
\]
which can also be rewritten as
\[
\partial_{-i}(\partial_{+j} \gamma \gamma^{-1}) = 0.
\]
These are the equations which are called in \cite{GMa93} the {\it
generalised WZNW equations}. They are, in a sense, trivial and can
be easily solved. However, in direct analogy with the two dimensional
case, see, for example, \cite{FORTW92}, it is possible to consider
the multidimensional Toda type equations as reductions of the
generalised WZNW equations.
Let us show how our general integration scheme works in this simplest
case. We start with the mappings $\gamma_\pm$ which take values in
$\widetilde H = G$ and
satisfy the
relations
\[
\partial_{\mp i} \gamma_\pm = 0.
\]
For the mappings $\mu_\pm$ we easily find
\[
\mu_\pm = a_\pm,
\]
where $a_\pm$ are arbitrary elements of $G$. The Gauss decomposition
(\ref{37}) gives $\eta = a_+^{-1} a_-$, and for the general solution
of the generalised WZNW equations we have
\[
\gamma = \gamma_+^{-1} a_+^{-1} a_- \gamma_-.
\]
It is clear that the freedom to choose different elements $a_\pm$ is
redundant, and one can put $a_\pm = e$, which gives the usual
expression for the general solution
\[
\gamma = \gamma_+^{-1} \gamma_-.
\]
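The formula $\gamma = \gamma_+^{-1} \gamma_-$ can also be verified symbolically. The sketch below (an independent check with sample matrices of our own choosing, not taken from the text) confirms for a particular pair $\gamma_\pm$ with values in ${\rm GL}(2)$ that $\gamma = \gamma_+^{-1} \gamma_-$ satisfies $\partial_-(\partial_+ \gamma \, \gamma^{-1}) = 0$.

```python
import sympy as sp

zp, zm = sp.symbols('z_plus z_minus')
# sample choices: gamma_+ depends on z_plus only, gamma_- on z_minus only
gamma_p = sp.Matrix([[sp.exp(zp), zp], [0, 1]])
gamma_m = sp.Matrix([[1, 0], [zm, sp.exp(zm)]])

# the general solution gamma = gamma_+^{-1} gamma_-
gamma = gamma_p.inv() * gamma_m

# check the generalised WZNW equation d_-(d_+ gamma gamma^{-1}) = 0
wznw = sp.simplify((gamma.diff(zp) * gamma.inv()).diff(zm))
assert wznw == sp.zeros(2, 2)
```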
\subsection{Example based on Lie group ${\rm GL}(m, {\Bbb C})$}
Recall that the Lie group ${\rm GL}(m, {\Bbb C})$ consists of all
nondegenerate $m \by m$ complex matrices. This group is reductive.
We identify the Lie algebra of ${\rm GL}(m, {\Bbb C})$ with the Lie algebra
${\frak gl}(m, {\Bbb C})$.
Introduce the following ${\Bbb Z}$--gradation of ${\frak gl}(m, \Bbb C)$.
Let $n$ and $k$ be some positive integers such that $m = n + k$.
Consider a general element $x$ of ${\frak gl}(m, {\Bbb C})$ as a $2
\by 2$ block matrix
\[
x = \left( \begin{array}{cc}
A & B \\
C & D
\end{array} \right),
\]
where $A$ is an $n \by n$ matrix, $B$ is an $n \by k$ matrix,
$C$ is a $k \by n$ matrix, and $D$ is a $k \by k$ matrix.
Define the subspace ${\frak g}_0$ as the subspace of ${\frak gl}(m,
{\Bbb C})$, consisting of all block diagonal matrices, the subspaces
${\frak g}_{-1}$ and ${\frak g}_{+1}$ as the subspaces formed by all
strictly lower and upper triangular block matrices, respectively.
Consider the multidimensional Toda type equations
(\ref{22})--(\ref{24}) which correspond to the choice $l_{-i} =
l_{+i} = 1$. In our case the general form of the elements $c_{\pm i}$ is
\[
c_{-i} = \left(\begin{array}{cc}
0 & 0 \\
C_{-i} & 0
\end{array} \right), \qquad
c_{+i} = \left(\begin{array}{cc}
0 & C_{+i} \\
0 & 0
\end{array} \right),
\]
where $C_{-i}$ are $k \by n$ matrices, and $C_{+i}$ are $n \by k$
matrices. Since ${\frak g}_{\pm 2} = \{0\}$, conditions
(\ref{39}) are satisfied. The subgroup $\widetilde H$ is isomorphic
to the group ${\rm GL}(n, {\Bbb C}) \times {\rm GL}(k, {\Bbb C})$, and
the mapping $\gamma$ has the block diagonal form
\[
\gamma = \left( \begin{array}{cc}
\beta_1 & 0 \\
0 & \beta_2
\end{array} \right),
\]
where the mappings $\beta_1$ and $\beta_2$ take values in ${\rm
GL}(n, {\Bbb C})$ and ${\rm GL}(k, {\Bbb C})$, respectively.
It is not difficult to show that
\[
\gamma c_{-i} \gamma^{-1} = \left( \begin{array}{cc}
0 & 0 \\
\beta_2 C_{-i} \beta_1^{-1} & 0
\end{array} \right);
\]
hence, equations (\ref{22}) take the following form:
\begin{equation}
\partial_{-i} (\beta_2 C_{-j} \beta_1^{-1}) = \partial_{-j} (\beta_2
C_{-i} \beta_1^{-1}). \label{96}
\end{equation}
Similarly, using the relation
\[
\gamma^{-1} c_{+i} \gamma = \left( \begin{array}{cc}
0 & \beta_1^{-1} C_{+i} \beta_2 \\
0 & 0
\end{array} \right),
\]
we represent equations (\ref{23}) as
\begin{equation}
\partial_{+i} (\beta_1^{-1} C_{+j} \beta_2 ) =
\partial_{+j} (\beta_1^{-1} C_{+i} \beta_2). \label{97}
\end{equation}
Finally, equations (\ref{24}) take the form
\begin{eqnarray}
&\partial_{+j} (\beta_1^{-1} \partial_{-i} \beta_1) = - \beta_1^{-1}
C_{+j} \beta_2 C_{-i},& \label{98} \\
&\partial_{+j} (\beta_2^{-1} \partial_{-i} \beta_2) = C_{-i}
\beta_1^{-1} C_{+j} \beta_2. \label{99}
\end{eqnarray}
In accordance with our integration scheme, to construct the general
solution for equations (\ref{96})--(\ref{99}) we should start with
the mappings $\gamma_{\pm}$ which take values in $\widetilde H$ and
satisfy (\ref{56}). Write for these mappings the block matrix
representation
\[
\gamma_\pm = \left( \begin{array}{cc}
\beta_{\pm 1} & 0 \\
0 & \beta_{\pm 2}
\end{array} \right).
\]
Recall that almost all solutions of the multidimensional Toda type
equations can be obtained using the mappings $\mu_\pm$ taking values
in the subgroups $\widetilde N_\pm$. Therefore, we choose these
mappings in the form
\[
\mu_- = \left( \begin{array}{cc}
I_n & 0 \\
\mu_{-21} & I_k
\end{array} \right), \qquad
\mu_+ = \left( \begin{array}{cc}
I_n & \mu_{+12} \\
0 & I_k
\end{array} \right),
\]
where $\mu_{-21}$ and $\mu_{+12}$ take values in the spaces of $k \by
n$ and $n \by k$ matrices, respectively. Equations (\ref{41}),
(\ref{93}) and (\ref{42}), (\ref{94}) are reduced now to the equations
\begin{eqnarray}
&\partial_{-i} \mu_{-21} = \beta_{-2} C_{-i} \beta_{-1}^{-1}, \qquad
\partial_{+i} \mu_{-21} = 0,& \label{102} \\
&\partial_{+i} \mu_{+12} = \beta_{+1} C_{+i} \beta_{+2}^{-1}, \qquad
\partial_{-i} \mu_{+12} = 0.& \label{103}
\end{eqnarray}
The corresponding integrability conditions are
\begin{eqnarray}
&\partial_{-i} (\beta_{-2} C_{-j} \beta_{-1}^{-1}) = \partial_{-j}
(\beta_{-2} C_{-i} \beta_{-1}^{-1}),& \label{100} \\
&\partial_{+i} (\beta_{+1} C_{+j} \beta_{+2}^{-1}) = \partial_{+j}
(\beta_{+1} C_{+i} \beta_{+2}^{-1}).& \label{101}
\end{eqnarray}
Here we will not study the problem of solving the integrability
conditions for a general choice of $n$, $k$ and $C_{\pm i}$. At the
end of this section we discuss a case where it is quite easy to find
explicitly all the mappings $\gamma_\pm$ satisfying the integrability
conditions; for now we continue with the
integration procedure for the general case.
Suppose that the mappings $\gamma_\pm$ satisfy the integrability
conditions and we have found the corresponding mappings $\mu_\pm$.
Determine from the Gauss decomposition (\ref{37}) the mappings
$\nu_\pm$ and $\eta$. Actually, in the case under consideration we
need only the mapping $\eta$. Using for the mappings $\nu_-$,
$\nu_+$ and $\eta$ the following representations
\[
\nu_- = \left( \begin{array}{cc}
I_n & 0 \\
\nu_{-21} & I_k
\end{array} \right), \qquad
\nu_+ = \left( \begin{array}{cc}
I_n & \nu_{+12} \\
0 & I_k
\end{array} \right), \qquad
\eta = \left( \begin{array}{cc}
\eta_{11} & 0 \\
0 & \eta_{22}
\end{array} \right),
\]
we find that
\begin{eqnarray*}
&\nu_{-21} = \mu_{-21} (I_n - \mu_{+12} \mu_{-21})^{-1}, \qquad
\nu_{+12} = (I_n - \mu_{+12} \mu_{-21})^{-1} \mu_{+12}, \\
&\eta_{11} = I_n - \mu_{+12} \mu_{-21}, \qquad
\eta_{22} = I_k + \mu_{-21} (I_n - \mu_{+12} \mu_{-21})^{-1}
\mu_{+12}.
\end{eqnarray*}
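These block formulas can be verified numerically. The following Python sketch (using NumPy; an illustration, not part of the original derivation) assumes that the Gauss decomposition (\ref{37}) is taken in the form $\mu_+^{-1} \mu_- = \nu_- \eta \nu_+^{-1}$, which is the form consistent with the expressions above; it also checks the equivalent ``push-through'' form $\eta_{22} = (I_k - \mu_{-21} \mu_{+12})^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2

# Random off-diagonal blocks of mu_- (lower) and mu_+ (upper)
m21 = rng.normal(size=(k, n))   # mu_{-21}, a k x n matrix
m12 = rng.normal(size=(n, k))   # mu_{+12}, an n x k matrix

In, Ik = np.eye(n), np.eye(k)
mu_minus = np.block([[In, np.zeros((n, k))], [m21, Ik]])
mu_plus = np.block([[In, m12], [np.zeros((k, n)), Ik]])

# Blocks of nu_-, nu_+ and eta as stated in the text
inv11 = np.linalg.inv(In - m12 @ m21)
nu21 = m21 @ inv11
nu12 = inv11 @ m12
eta11 = In - m12 @ m21
eta22 = Ik + m21 @ inv11 @ m12

nu_minus = np.block([[In, np.zeros((n, k))], [nu21, Ik]])
nu_plus = np.block([[In, nu12], [np.zeros((k, n)), Ik]])
eta = np.block([[eta11, np.zeros((n, k))], [np.zeros((k, n)), eta22]])

# Gauss decomposition: mu_+^{-1} mu_- = nu_- eta nu_+^{-1}
lhs = np.linalg.inv(mu_plus) @ mu_minus
rhs = nu_minus @ eta @ np.linalg.inv(nu_plus)
assert np.allclose(lhs, rhs)

# eta_22 in "push-through" form: (I_k - mu_{-21} mu_{+12})^{-1}
assert np.allclose(eta22, np.linalg.inv(Ik - m21 @ m12))
```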
It is worth noting here that the mapping $\mu_+^{-1} \mu_-$ has the
Gauss decomposition (\ref{37}) only at those points $p$ of $M$, for
which
\begin{equation}
\det \left( I_n - \mu_{+12}(p) \mu_{-21}(p) \right) \ne 0. \label{104}
\end{equation}
Now, using relation (\ref{40}), we get for the general solution of
system (\ref{96})--(\ref{99}) the following expression:
\begin{eqnarray*}
&\beta_1 = \beta_{+1}^{-1} (I_n - \mu_{+12} \mu_{-21}) \beta_{-1},& \\
&\beta_2 = \beta_{+2}^{-1} (I_k + \mu_{-21} (I_n - \mu_{+12}
\mu_{-21})^{-1} \mu_{+12}) \beta_{-2}.&
\end{eqnarray*}
Consider now the case when $n = m-1$. In this case $\beta_1$ takes
values in ${\rm GL}(n, {\Bbb C})$, $\beta_2$ is a complex function,
$C_{-i}$ and $C_{+i}$ are $1 \by n$ and $n \by 1$ matrices,
respectively. Suppose that the dimension of the manifold $M$ is equal
to $2n$ and define $C_{\pm i}$ by
\[
(C_{\pm i})_r = \delta_{ir}.
\]
System (\ref{96})--(\ref{99}) now takes the form
\begin{eqnarray}
&\partial_{-i} (\beta_2 (\beta_1^{-1})_{jr}) = \partial_{-j} (\beta_2
(\beta_1^{-1})_{ir}),& \label{105} \\
&\partial_{+i}((\beta_1^{-1})_{rj} \beta_2) =
\partial_{+j}((\beta_1^{-1})_{ri} \beta_2),& \label{106} \\
&\partial_{+j}(\beta_1^{-1} \partial_{-i} \beta_1)_{rs} = -
(\beta_1^{-1})_{rj} \beta_2 \delta_{is},& \label{107} \\
&\partial_{+j}(\beta_2^{-1}\partial_{-i} \beta_2) =
(\beta_1^{-1})_{ij} \beta_2,& \label{108}
\end{eqnarray}
and the integrability conditions (\ref{100}), (\ref{101}) can be
rewritten as
\begin{eqnarray*}
&\partial_{-i}(\beta_{-2}(\beta_{-1}^{-1})_{jr}) =
\partial_{-j}(\beta_{-2}(\beta_{-1}^{-1})_{ir}),& \\
&\partial_{+i}((\beta_{+1})_{rj} \beta_{+2}^{-1}) =
\partial_{+j}((\beta_{+1})_{ri} \beta_{+2}^{-1}).&
\end{eqnarray*}
The general solution for these integrability conditions is
\begin{eqnarray*}
&(\beta_{-1}^{-1})_{ir} = U_- \partial_{-i} V_{-r}, \qquad
\beta_{-2}^{-1} = U_-,& \\
&(\beta_{+1})_{ri} = U_+ \partial_{+i} V_{+r}, \qquad \beta_{+2} =
U_+.&
\end{eqnarray*}
Here $U_\pm$ and $V_{\pm r}$ are arbitrary functions satisfying
the conditions
\[
\partial_{\mp i} U_\pm = 0, \qquad \partial_{\mp i} V_{\pm r} = 0.
\]
Moreover, for any point $p$ of $M$ one should have
\[
U_\pm(p) \ne 0, \qquad \det (\partial_{\pm i} V_{\pm r}(p)) \ne 0.
\]
The general solution of equations (\ref{102}), (\ref{103}) is
\[
\mu_{-21} = V_-, \qquad \mu_{+12} = V_+,
\]
where $V_-$ is the $1 \by n$ matrix valued function constructed with
the functions $V_{-r}$, and $V_+$ is the $n \by 1$ matrix valued
function constructed with the functions $V_{+r}$. Thus, we have
\[
\eta_{11} = I_n - V_+ V_-.
\]
In the case under consideration, condition (\ref{104}), which
guarantees the existence of the Gauss decomposition (\ref{37}), is
equivalent to
\[
1 - V_-(p) V_+(p) \ne 0.
\]
When this condition is satisfied, one has
\[
(I_n - \mu_{+12} \mu_{-21})^{-1} = (I_n - V_+ V_-)^{-1} = I_n +
\frac{1}{1 - V_- V_+} V_+ V_-,
\]
and, therefore,
\[
\eta_{22} = \frac{1}{1 - V_- V_+}.
\]
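The rank-one inversion formula and the resulting expression for $\eta_{22}$ are easy to verify numerically; a minimal Python sketch (an illustration, using NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
V_minus = rng.normal(size=(1, n))  # V_- is a 1 x n matrix
V_plus = rng.normal(size=(n, 1))   # V_+ is an n x 1 matrix

In = np.eye(n)
s = (V_minus @ V_plus).item()      # the scalar V_- V_+
assert abs(1.0 - s) > 1e-12        # the nondegeneracy condition

# (I_n - V_+ V_-)^{-1} = I_n + V_+ V_- / (1 - V_- V_+)
lhs = np.linalg.inv(In - V_plus @ V_minus)
rhs = In + (V_plus @ V_minus) / (1.0 - s)
assert np.allclose(lhs, rhs)

# eta_{22} = 1 + V_- (I_n - V_+ V_-)^{-1} V_+ collapses to 1/(1 - V_- V_+)
eta22 = 1.0 + (V_minus @ lhs @ V_plus).item()
assert np.isclose(eta22, 1.0 / (1.0 - s))
```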
Taking the above remarks into account, we come to the following
expressions for the general solution of system (\ref{105})--(\ref{108}):
\begin{eqnarray*}
&(\beta_1^{-1})_{ij} = - U_+ U_- (1 - V_- V_+) \partial_{-i}
\partial_{+j} \ln (1 - V_- V_+),& \\
&\beta_2^{-1} = U_+ U_- (1 - V_- V_+).&
\end{eqnarray*}
\subsection{Cecotti--Vafa type equations}
In this example we discuss the multidimensional Toda system
associated with the loop group ${\cal L}({\rm GL}(m, {\Bbb C}))$
which is an infinite dimensional Lie group defined as the group of
smooth mappings from the circle $S^1$ to the Lie group ${\rm GL}(m,
{\Bbb C})$. We think of the circle as consisting of complex numbers
$\zeta$ of modulus one. The Lie algebra of ${\cal L}({\rm GL}(m,
{\Bbb C}))$ is the Lie algebra ${\cal L}({\frak gl}(m, {\Bbb C}))$
consisting of smooth mappings from $S^1$ to the Lie algebra ${\frak
gl}(m, {\Bbb C})$.
In the previous subsection we considered a class of ${\Bbb
Z}$--gradations of the Lie algebra ${\frak gl}(m, {\Bbb C})$ based on
the representation of $m$ as the sum of two positive integers $n$ and
$k$. Any such gradation can be extended to a ${\Bbb
Z}$--gradation of the loop algebra ${\cal L}({\frak gl}(m, {\Bbb C}))$.
Here we restrict ourselves to the case $m = 2n$.
In this case the element
\[
q = \frac{1}{2} \left(\begin{array}{cc}
I_n & 0 \\
0 & -I_n
\end{array} \right)
\]
of ${\frak gl}(2n, {\Bbb C})$ is the grading operator of the ${\Bbb
Z}$--gradation under consideration. This means that an element $x$ of
${\frak gl}(2n, {\Bbb C})$ belongs to the subspace ${\frak g}_k$ if
and only if $[q, x] = k x$. Using the operator $q$, we introduce the
following ${\Bbb Z}$--gradation of ${\cal L}({\frak gl}(2n, {\Bbb
C}))$. The subspace ${\frak g}_k$ of ${\cal L}({\frak gl}(2n, {\Bbb
C}))$ is defined as the subspace formed by the elements $x(\zeta)$ of
${\cal L}({\frak gl}(2n, {\Bbb C}))$ satisfying the relation
\[
[q, x(\zeta)] + 2 \zeta \frac{dx(\zeta)}{d\zeta} = k x(\zeta).
\]
In particular, the subspaces ${\frak g}_0$, ${\frak g}_{-1}$ and
${\frak g}_{+1}$ of ${\cal L}({\frak gl}(2n, {\Bbb C}))$
consist respectively of the elements
\[
x(\zeta) = \left(\begin{array}{cc}
A & 0 \\
0 & D
\end{array} \right), \qquad
x(\zeta) = \left(\begin{array}{cc}
0 & \zeta^{-1} B \\
C & 0
\end{array} \right), \qquad
x(\zeta) = \left(\begin{array}{cc}
0 & B \\
\zeta C & 0
\end{array} \right),
\]
where $A$, $B$, $C$ and $D$ are arbitrary $n \by n$ matrices which do not
depend on $\zeta$.
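These degree assignments can be checked symbolically. The following Python sketch (using SymPy, with $n = 1$ so that the blocks are scalars; an illustration, not part of the original text) uses the normalisation $q = \frac{1}{2} \mathop{\rm diag}(I_n, -I_n)$, for which the three subspaces listed above are homogeneous of degrees $0$, $-1$ and $+1$ under the relation above:

```python
import sympy as sp

z = sp.symbols('zeta', nonzero=True)
A, B, C, D = sp.symbols('A B C D')  # n = 1, so the blocks are scalars

# Grading operator, normalised so the listed subspaces are homogeneous
q = sp.Rational(1, 2) * sp.Matrix([[1, 0], [0, -1]])

def has_degree(x, k):
    """Check that [q, x] + 2*zeta*dx/dzeta equals k*x."""
    diff = q * x - x * q + 2 * z * x.diff(z) - k * x
    return all(sp.simplify(e) == 0 for e in diff)

assert has_degree(sp.Matrix([[A, 0], [0, D]]), 0)        # g_0
assert has_degree(sp.Matrix([[0, B / z], [C, 0]]), -1)   # g_{-1}
assert has_degree(sp.Matrix([[0, B], [z * C, 0]]), +1)   # g_{+1}
```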
Consider the multidimensional Toda type equations
(\ref{22})--(\ref{24}) which correspond to the choice $l_{-i} =
l_{+i} = 1$. In this case the general form of the elements $c_{\pm
i}$ is
\[
c_{-i} = \left(\begin{array}{cc}
0 & \zeta^{-1} B_{-i} \\
C_{-i} & 0
\end{array} \right), \qquad
c_{+i} = \left(\begin{array}{cc}
0 & C_{+i} \\
\zeta B_{+i} & 0
\end{array} \right).
\]
To satisfy conditions (\ref{39}) we should have
\[
B_{\pm i} C_{\pm j} - B_{\pm j} C_{\pm i} = 0, \quad C_{\pm i} B_{\pm
j} - C_{\pm j} B_{\pm i} = 0.
\]
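Assuming, as the earlier remark on ${\frak g}_{\pm 2} = \{0\}$ suggests, that conditions (\ref{39}) amount to the commutativity $[c_{\pm i}, c_{\pm j}] = 0$, the displayed relations follow by a direct computation of the commutator, which is block diagonal. A short symbolic sketch (Python with SymPy, $n = 1$ so the blocks are scalars; an illustration only):

```python
import sympy as sp

z = sp.symbols('zeta', nonzero=True)
B1, C1, B2, C2 = sp.symbols('B1 C1 B2 C2')  # n = 1: the blocks are scalars

def c_minus(B, C):
    # An element c_{-i} of g_{-1} for the loop algebra gradation
    return sp.Matrix([[0, B / z], [C, 0]])

x1, x2 = c_minus(B1, C1), c_minus(B2, C2)
comm = sp.expand(x1 * x2 - x2 * x1)

# The commutator is block diagonal, with entries given exactly by the
# combinations appearing in the displayed conditions
expected = sp.Matrix([[(B1 * C2 - B2 * C1) / z, 0],
                      [0, (C1 * B2 - C2 * B1) / z]])
assert all(sp.simplify(e) == 0 for e in comm - expected)
```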
The subgroup $\widetilde H$ is isomorphic to the group ${\rm GL}(n,
{\Bbb C}) \times {\rm GL}(n, {\Bbb C})$, and the mapping $\gamma$ has
the block diagonal form
\[
\gamma = \left( \begin{array}{cc}
\beta_1 & 0 \\
0 & \beta_2
\end{array} \right),
\]
where $\beta_1$ and $\beta_2$ take values in ${\rm GL}(n, {\Bbb C})$.
Hence, one obtains
\[
\gamma c_{-i} \gamma^{-1} = \left( \begin{array}{cc}
0 & \zeta^{-1} \beta_1 B_{-i} \beta_2^{-1} \\
\beta_2 C_{-i} \beta_1^{-1} & 0
\end{array} \right),
\]
and comes to the following explicit expressions for equations (\ref{22}):
\begin{eqnarray}
&\partial_{-i} (\beta_1 B_{-j} \beta_2^{-1}) =
\partial_{-j} (\beta_1 B_{-i} \beta_2^{-1}),& \label{74}\\
&\partial_{-i} (\beta_2 C_{-j} \beta_1^{-1}) =
\partial_{-j} (\beta_2 C_{-i} \beta_1^{-1}).&\label{75}
\end{eqnarray}
Similarly, using the relation
\[
\gamma^{-1} c_{+i} \gamma = \left( \begin{array}{cc}
0 & \beta_1^{-1} C_{+i} \beta_2 \\
\zeta \beta_2^{-1} B_{+i} \beta_1 & 0
\end{array} \right),
\]
we can represent equations (\ref{23}) as
\begin{eqnarray}
&\partial_{+i} (\beta_1^{-1} C_{+j} \beta_2 ) =
\partial_{+j} (\beta_1^{-1} C_{+i} \beta_2),& \label{76}\\
&\partial_{+i} (\beta_2^{-1} B_{+j} \beta_1) =
\partial_{+j} (\beta_2^{-1} B_{+i} \beta_1).&\label{77}
\end{eqnarray}
Finally, equations (\ref{24}) take the form
\begin{eqnarray}
&\partial_{+j} (\beta_1^{-1} \partial_{-i} \beta_1) =
B_{-i} \beta_2^{-1} B_{+j} \beta_1 - \beta_1^{-1} C_{+j} \beta_2
C_{-i},& \label{78}\\
&\partial_{+j} (\beta_2^{-1} \partial_{-i} \beta_2) =
C_{-i} \beta_1^{-1} C_{+j} \beta_2 - \beta_2^{-1} B_{+j} \beta_1
B_{-i}.&\label{79}
\end{eqnarray}
System (\ref{74})--(\ref{79}) admits two interesting reductions,
which can be defined with the help of the general scheme described in
section \ref{ar}. Represent an arbitrary element $a(\zeta)$ of
${\cal L}({\rm GL}(2n, {\Bbb C}))$ in the block form,
\[
a(\zeta) = \left( \begin{array}{cc}
A(\zeta) & B(\zeta) \\
C(\zeta) & D(\zeta)
\end{array} \right),
\]
and define an automorphism $\Sigma$ of ${\cal L}({\rm GL}(2n, {\Bbb
C}))$ by
\[
\Sigma(a(\zeta)) = \left( \begin{array}{cc}
D(\zeta) & \zeta^{-1} C(\zeta) \\
\zeta B(\zeta) & A(\zeta)
\end{array} \right).
\]
It is clear that the corresponding automorphism $\sigma$ of ${\cal
L}({\frak gl}(2n, {\Bbb C}))$ is defined by the relation of the same
form. In the case under consideration relation (\ref{115}) is valid.
Suppose that $B_{\pm i} = C_{\pm i}$; then relation (\ref{116}) is
also valid. Therefore, we can consider the reduction of system
(\ref{74})--(\ref{79}) to the case when the mapping $\gamma$
satisfies the equality $\Sigma \circ \gamma = \gamma$, which can be
written as $\beta_1 = \beta_2$. The reduced system reads
\begin{eqnarray}
&\partial_{-i}(\beta C_{-j} \beta^{-1}) = \partial_{-j}(\beta C_{-i}
\beta^{-1}),& \label{117} \\
&\partial_{+i}(\beta^{-1} C_{+j} \beta) = \partial_{+j}(\beta^{-1}
C_{+i} \beta),& \label{118} \\
&\partial_{+j}(\beta^{-1} \partial_{-i} \beta) = [C_{-i}, \beta^{-1}
C_{+j} \beta],& \label{119}
\end{eqnarray}
where we have denoted $\beta = \beta_1 = \beta_2$.
The next reduction is connected with an antiautomorphism $\Sigma$ of
the group ${\cal L}({\rm GL}(2n, {\Bbb C}))$ given by
\[
\Sigma(a(\zeta)) = \left( \begin{array}{cc}
A(\zeta)^t & -\zeta^{-1} C(\zeta)^t \\
-\zeta B(\zeta)^t & D(\zeta)^t
\end{array} \right).
\]
The corresponding antiautomorphism of ${\cal L}({\frak gl}(2n, {\Bbb
C}))$ is defined by the same formula. It is evident that
$\sigma({\frak g}_k) = {\frak g}_k$. Suppose that $B_{\pm i} =
C^t_{\pm i}$; then $\sigma(c_{\pm i}) = - c_{\pm i}$, and one can
consider the reduction of system (\ref{74})--(\ref{79}) to the case
when the mapping $\gamma$ satisfies the equality $\Sigma \circ \gamma
= \gamma^{-1}$ which is equivalent to the equalities $\beta_1^t =
\beta_1^{-1}$, $\beta_2^t = \beta_2^{-1}$. The reduced system of
equations can be written as
\begin{eqnarray}
&\partial_{-i}(\beta_2 C_{-j} \beta_1^t) = \partial_{-j} (\beta_2
C_{-i} \beta_1^t),& \label{123} \\
&\partial_{+i}(\beta_1^t C_{+j} \beta_2) = \partial_{+j} (\beta_1^t
C_{+i} \beta_2),& \label{124} \\
&\partial_{+j}(\beta_1^t \partial_{-i} \beta_1) = C_{-i}^t \beta_2^t
C_{+j}^t \beta_1 - \beta_1^t C_{+j} \beta_2 C_{-i},& \label{125} \\
&\partial_{+j}(\beta_2^t \partial_{-i} \beta_2) = C_{-i} \beta_1^t
C_{+j} \beta_2 - \beta_2^t C_{+j}^t \beta_1 C_{-i}^t.& \label{126}
\end{eqnarray}
If simultaneously $B_{\pm i} = C_{\pm i}$ and $B_{\pm i} = C_{\pm
i}^t$, one can perform both reductions. Here the reduced system has
the form (\ref{117})--(\ref{119}), where the mapping $\beta$ takes values
in the complex orthogonal group ${\rm O}(n, {\Bbb C})$. These are
exactly the equations considered by S. Cecotti and C. Vafa \cite{CVa91}.
As was shown by B.~A.~Dubrovin \cite{Dub93} for $C_{-i} = C_{+i} =
C_i$ with
\[
(C_i)_{jk} = \delta_{ij} \delta_{jk},
\]
the Cecotti--Vafa equations are connected with some well-known
equations in differential geometry. Actually, in \cite{Dub93} the
case $M = {\Bbb C}^n$ was considered and an additional restriction
$\beta^\dagger = \beta$ was imposed. Here equation
(\ref{118}) can be obtained from equation (\ref{117}) by Hermitian
conjugation, and the system under consideration consists of equations
(\ref{117}) and (\ref{119}) only. Rewrite equation (\ref{117}) in the
form
\[
[\beta^{-1} \partial_{-i} \beta, C_j] = [\beta^{-1} \partial_{-j}
\beta, C_i].
\]
{}From this equation it follows that for some matrix valued mapping $b =
(b_{ij})$, such that $b_{ij} = b_{ji}$, the relation
\begin{equation}
\beta^{-1} \partial_{-i} \beta = [C_i, b] \label{120}
\end{equation}
is valid. In fact, the right hand side of relation (\ref{120}) does
not contain the diagonal matrix elements of $b$, while the other
matrix elements of $b$ are uniquely determined by the left hand side of
(\ref{120}). Furthermore, relation (\ref{120}) implies that the
mapping $b$ satisfies the equation
\begin{equation}
\partial_{-i} [C_j, b] - \partial_{-j} [C_i, b] + [[C_i, b], [C_j,
b]] = 0. \label{121}
\end{equation}
On the other hand, if some mapping $b$ satisfies equation
(\ref{121}), then there is a mapping $\beta$ connected with $b$ by
relation (\ref{120}), and such a mapping $\beta$ satisfies equation
(\ref{117}). Therefore, system (\ref{117}), (\ref{119}) is equivalent
to the system which consists of equations (\ref{120}), (\ref{121})
and the equation
\begin{equation}
\partial_{+j} [C_i, b] = [C_i, \beta^{-1} C_j \beta] \label{122}
\end{equation}
which follows from (\ref{120}) and (\ref{119}). Using the concrete
form of the matrices $C_i$, one can write the system (\ref{120}),
(\ref{121}) and (\ref{122}) as
\begin{eqnarray}
&\partial_{-k} b_{ji} = b_{jk} b_{ki}, \qquad \mbox{$i$, $j$, $k$
distinct};& \label{127} \\
&\sum_{k=1}^n \partial_{-k} b_{ij} = 0; \qquad i \neq j;& \label{128}
\\
&\partial_{-i} \beta_{jk} = b_{ik} \beta_{ji}, \qquad i \neq j;&
\label{129} \\
&\sum_{k=1}^n \partial_{-k} \beta_{ij} = 0;& \label{130} \\
&\partial_{+k} b_{ij} = \beta_{ki} \beta_{kj}, \qquad i \neq j.&
\label{131}
\end{eqnarray}
Equations (\ref{127}), (\ref{128}) have the form of the equations which
ensure the vanishing of the curvature of a diagonal metric with
symmetric rotation coefficients $b_{ij}$ \cite{Dar10,Bia24}. Recall
that such a metric is called an Egoroff metric. Note that the
transition from system (\ref{117}), (\ref{119}) to system
(\ref{127})--(\ref{131}) is not very useful for obtaining solutions
of (\ref{117}), (\ref{119}). A more constructive way here is to use
the integration scheme described in section \ref{cgs}. Let us
discuss the corresponding procedure for a more general system
(\ref{123})--(\ref{126}) with $C_{-i} = C_{+i} = C_i$.
The integration data for system (\ref{123})--(\ref{126}) consist of
the mappings $\gamma_\pm$ having the following block diagonal form
\[
\gamma_\pm = \left( \begin{array}{cc}
\beta_{\pm 1} & 0 \\
0 & \beta_{\pm 2}
\end{array} \right).
\]
As it follows from the discussion given in section \ref{ar}, the
mappings $\beta_{\pm 1}$ and $\beta_{\pm 2}$ must satisfy the conditions
\[
\beta_{\pm 1}^t = \beta_{\pm 1}^{-1}, \qquad \beta_{\pm 2}^t =
\beta_{\pm 2}^{-1}.
\]
The corresponding integrability conditions have the form
\begin{equation}
\partial_{\pm i}(\beta_{\pm 2} C_{j} \beta_{\pm 1}^t) =
\partial_{\pm j}(\beta_{\pm 2} C_{i} \beta_{\pm 1}^t). \label{133}
\end{equation}
Rewriting these conditions as
\[
\beta_{\pm 2}^t \partial_{\pm i} \beta_{\pm 2} C_j - C_j
\beta_{\pm 1}^t \partial_{\pm i} \beta_{\pm 1} = \beta_{\pm 2}^t
\partial_{\pm j} \beta_{\pm 2} C_i - C_i \beta_{\pm 1}^t
\partial_{\pm j} \beta_{\pm 1},
\]
we can verify that for some matrix valued mappings $b_\pm$ one
has
\begin{equation}
\beta_{\pm 1}^t \partial_{\pm i} \beta_{\pm 1 } = C_i b_\pm
- b_\pm^t C_i, \qquad \beta_{\pm 2}^t \partial_{\pm i}
\beta_{\pm 2 } = C_i b_\pm^t - b_\pm C_i. \label{132}
\end{equation}
{}From these relations it follows that the mappings $b_\pm$ satisfy the
equations
\begin{eqnarray}
&\partial_{\pm i} (b_\pm)_{ji} + \partial_{\pm j} (b_\pm)_{ij} +
\sum_{k \neq i,j} (b_\pm)_{ik} (b_\pm)_{jk} = 0, \quad i \neq j;&
\label{83} \\
&\partial_{\pm k} (b_\pm)_{ji} = (b_\pm)_{jk} (b_\pm)_{ki}, \qquad
\mbox{$i$, $j$, $k$ distinct};& \label{84} \\
&\partial_{\pm i} (b_\pm)_{ij} + \partial_{\pm j} (b_\pm)_{ji} +
\sum_{k \neq i,j} (b_\pm)_{ki} (b_\pm)_{kj} = 0, \qquad i \neq j.&
\label{85}
\end{eqnarray}
Conversely, if we have some mappings $b_\pm$ which satisfy equations
(\ref{83})--(\ref{85}), then there exist mappings $\beta_{\pm 1}$ and
$\beta_{\pm 2}$ connected with $b_\pm$ by (\ref{132}) and satisfying
the integrability conditions (\ref{133}).
System (\ref{83})--(\ref{85}) represents a limiting case of the
completely integrable Bourlet equations \cite{Dar10,Bia24} arising after
an appropriate In\"on\"u--Wigner contraction of the corresponding Lie
algebra \cite{Sav86}. Sometimes this system is called the
multidimensional generalised wave equations, while equation
(\ref{119}) is called the generalised sine--Gordon equation
\cite{Ami81,TTe80,ABT86}.
\section{Outlook}
Due to the algebraic and geometrical clarity of the equations
discussed in the paper, we are firmly convinced that, in time, they
will be quite relevant for a number of concrete applications in
classical and quantum field theories, statistical mechanics and
condensed matter physics. In support of this opinion we would like to
recall the remarkable role played by some special classes of the equations
under consideration here.
Namely, in the framework of the standard abelian and nonabelian,
conformal and affine Toda fields coupled to matter fields, some
interesting physical phenomena which can possibly be described on the
basis of the corresponding equations are mentioned in
\cite{GSa95,FGGS95}. In particular, from the point of view of
nonperturbative aspects of quantum field theories, they might be very
useful for understanding the quantum theory of solitons, some
confinement mechanisms for quantum chromodynamics,
electron--phonon systems, etc. Furthermore, the Cecotti--Vafa
equations \cite{CVa91} of topological--antitopological fusion, which,
as partial differential equations, are, in general, multidimensional
ones, describe the ground state metric of two dimensional $N=2$
supersymmetric quantum field theory. As was shown in \cite{CVa93},
they are closely related to those for the correlators of the massive
two dimensional Ising model, see, for example, \cite{KIB93} and
references therein. This link is clarified in \cite{CVa93}, in
particular, in terms of the isomonodromic deformation in the spirit of
the holonomic field theory developed by the Japanese school, see, for
example, \cite{SMJ80}.
The authors are very grateful to J.-L.~Gervais and B.~A.~Dubrovin
for the interesting discussions. Research supported in part by the
Russian Foundation for Basic Research under grant no. 95--01--00125a.
\section{Introduction}
\subsection{General background}
The theory of \emph{cubespaces} and its important subclass of \emph{nilspaces} originated in the work of Host and Kra \cite{HK08} under the name of \emph{parallelepiped structures}, and was developed further by Antolín Camarena and Szegedy \cite{ACS12}. A \emph{cubespace} is a structure consisting of a compact metric space $X$, together with a closed collection of \emph{cubes} $C^k(X)\subseteq X^{2^k}$ for each integer $k\ge 0$, satisfying certain natural axioms. A \emph{nilspace} is a cubespace satisfying an additional rigidity condition \footnote{For the exact definitions of the class of \emph{cubespaces} and the subclass of \emph{nilspaces} see Definitions \ref{cubespace} and \ref{nilspace} respectively.}. The theory has already found applications in \emph{higher order Fourier analysis} \cite{szegedy2012higher, tao2012higher}, in particular in relation to the inverse theorem for the Gowers norms \cite{GTZ12}, as well as in ergodic theory \cite{gutman2019strictly,candela2018nilspace} in relation to the Host-Kra structure theorem \cite{HK05,host2018nilpotent}. Cubespaces and nilspaces also played an essential role in the structure theory of the higher order \emph{nilpotent regionally proximal relations}
introduced and developed by Glasner, Gutman and Ye in \cite{GGY18}.
As the theory of cubespaces/nilspaces has matured, it has been observed that \emph{fibrations}, cubespace morphisms satisfying an additional rigidity condition, are highly useful\footnote{For the exact definition of the class of \emph{fibrations} (of \emph{finite degree}) see Definition \ref{fibration definition} (Definition \ref{def:fibration_finite_degree}).}.
This notion was introduced in \cite{GMVI}, generalizing the earlier notion of \emph{fiber-surjective morphism} from \cite{ACS12}. Indeed,
in \cite{GMVI, GMVII, GMVIII} Gutman, Manners and Varj\'{u} developed a weak structure theory for fibrations as an important step in the proof of the structure theorem for minimal topological dynamical \emph{systems of finite degree}, i.e., such that the nilpotent regionally proximal relation of some degree is trivial. According to this theorem such a system may be represented as an \emph{inverse limit of nilsystems} (subject to some mild assumptions on the acting group).
According to the \emph{weak structure theorem} a fibration of finite degree factors
as a finite tower of compact abelian group extensions. The groups appearing in this factorization are referred to as the \emph{structure groups} of the fibration.
In this paper we give a finer structure theory for fibrations building on the Antolín Camarena-Szegedy fundamental structure theorems for nilspaces. We show that a fibration of finite degree between cubespaces obeying some natural conditions factors as a (possibly countable) tower of \emph{Lie-fibered fibrations}, i.e., fibrations whose structure groups are
(compact abelian) Lie groups.
Given a fibration of finite degree $f \colon X \to Y$ it is well known that the \emph{fibers} $f^{-1}(y)$ are nilspaces. However this by itself does not elucidate the relation between \emph{different} fibers. In this paper we are able to give conditions guaranteeing that all fibers are \emph{isomorphic as cubespaces}. Moreover, taking advantage of the above-mentioned factorization into a tower of Lie-fibered fibrations, if the structure groups of the fibers are connected\footnote{In Proposition \ref{prop:structure groups} we show that the structure groups of \emph{all} fibers are connected iff the structure groups of $f$ are connected.} we show that the fibers are approximated uniformly as closely as desired by quotients by natural co-compact subgroups of a \emph{single} Lie group associated with the factorization.
We relate our results to the theory of topological dynamical systems. We show that any factor map between minimal distal systems $\pi\colon (G,X) \to (G,Y)$, where $G$ is an arbitrary topological group, is a fibration between the associated dynamical cubespaces. This supplies an abundance of hitherto unknown examples of fibrations.
The \emph{regionally proximal relations} have a long history in topological dynamics \cite{EG60,veech1968equicontinuous,ellis1971characterization}. Their relativizations, the \emph{relativized regionally proximal relations}, play a fundamental role in the study of the structure of equicontinuous extensions \cite{MW73, McM78}. Attesting to their importance is their central use in Bronstein's proof \cite{Bro68, bronshtein1970distal}\footnote{See also \cite[Chapter 7]{A}.} of the (relative) Furstenberg structure theorem for minimal distal extensions \cite{furstenberg1963structure}\footnote{The relative Furstenberg structure theorem was also proven independently by Ellis in \cite{E68}.}.
In this article we introduce the \emph{relative nilpotent regionally proximal relations} for extensions between minimal systems and analyze its structure.
In particular we show that when an extension between minimal distal systems has trivial relative nilpotent regionally proximal relation of some degree, then it factors as a (possibly countable) tower of \emph{principal abelian Lie (compact) group extensions}\footnote{For the exact definitions of \emph{ principal abelian group extensions} see Definition \ref{def:principal abelian group}.}.
\subsection{Cubespaces and nilspaces}
In this subsection, we define the notions of cubespace and nilspace and survey their important properties. For more detailed information see \cite{ACS12, candela2016cpt_notes, candela2016alg_notes, candela2019nilspace_morphisms, GGY18, GMVI,GMVII, GMVIII}.
\begin{definition} \label{cubespace}
Let $k, \ell \geq 0$ be two integers. A map $f=(f_{1},\ldots,f_{\ell}):\{0,1\}^{k}\to\{0,1\}^{\ell}$ is called
a \textbf{morphism of discrete cubes} if each coordinate function
$f_{j}(\omega_{1},\ldots,\omega_{k})$ is either identically $0$,
identically $1$, or equals either $\omega_{i}$
or $\overline{\omega_{i}}=1-\omega_{i}$ for some $1\le i=i(j)\le k$.
If $\ell \leq k$, by an {\bf $\ell$-face} of $\{0, 1\}^k$ we mean a subset of $\{0, 1\}^k$ obtained from fixing values of $(k-\ell)$ coordinates. In particular, a $(k-1)$-face is called a {\bf hyperface}\index{hyperface}.
A {\bf cubespace} \index{cubespace} is a metric space $X$, associated with a sequence of closed subsets $C^\bullet(X):=\{\index[nota]{C^k(X)} C^k(X) \subseteq X^{\{0, 1\}^k} : k=0, 1, \ldots \}$ satisfying:
\begin{enumerate}
\item $C^0(X)=X$;
\item for every integer $k, \ell \geq 0$, $c \in C^\ell(X)$, and morphism of discrete cubes $\rho: \{0, 1\}^k \to \{0, 1\}^\ell$, one has that
$c \circ \rho \in C^k(X)$.
\end{enumerate}
We call the elements of $C^k(X)$ $k$-{\bf cubes}\index{$k$-cube}. A map $\{0, 1\}^k \to X$ is called a $k$-{\bf configuration}\index{configuration}.
\end{definition}
A particular class of cubespaces arise from dynamical systems.
By a {\bf (topological) dynamical system}\index{dynamical system} $(G, X)$\index{$(G, X)$}, we mean a continuous action of a topological group $G$ on a compact metric space $X$.
\begin{definition} \label{HK cube}
Let $(G, X)$ be a dynamical system. For every integer $k \geq 0$ the {\bf Host-Kra cube group ${\rm HK}^k(G)$} \index{Host-Kra cube group} is the subgroup of $G^{\{0, 1\}^k}$ generated by $[g]_F$ for all $g \in G$ and hyperfaces $F$ of $\{0, 1\}^k$. Here the configuration $[g]_F\index{$[g]_F$} \in G^{\{0, 1\}^k}$ sends $\omega \in F$ to $g$ and $e_G$ elsewhere. The cubespace $C_G^\bullet(X)$ is defined by taking orbit closures of constant configurations, i.e.
$$C_G^k(X):=\overline{\{\gamma . x^{\{0, 1\}^k}: \gamma \in {\rm HK}^k(G), x \in X \}},$$
where "." is denotes the pointwise action of ${\rm HK}^k(G)\subset G^{\{0, 1\}^k}$ on $X^{\{0, 1\}^k}$.
We call $(X, C_G^\bullet(X))$ a {\bf dynamical cubespace}\index{cubespace!dynamical cubespace}. Similarly if $(X, C_G^\bullet(X))$ is a nilspace then we call it a {\bf dynamical nilspace}.\index{nilspace !dynamical nilspace}
\end{definition}
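As a simple illustration (not drawn from the text above), take $G = \mathbb{Z}$ acting on $X = \mathbb{Z}/N\mathbb{Z}$ by rotation. One can check that the $2$-cubes generated this way are exactly the ``parallelograms'', i.e. the configurations with $x_{00} - x_{01} - x_{10} + x_{11} = 0 \pmod N$. A short Python sketch confirming this for a small $N$:

```python
import itertools

N = 8  # X = Z/NZ with G = Z acting by x -> x + 1

# Vertices of {0,1}^2; the four hyperfaces of the square are {w : w[i] == v}
vertices = [(0, 0), (0, 1), (1, 0), (1, 1)]
faces = [lambda w, i=i, v=v: w[i] == v for i in range(2) for v in (0, 1)]

def cube_from(x, gs):
    """Act on the constant configuration x by the face maps [g]_F."""
    return tuple((x + sum(g for g, F in zip(gs, faces) if F(w))) % N
                 for w in vertices)

cubes = {cube_from(x, gs)
         for x in range(N)
         for gs in itertools.product(range(N), repeat=4)}

# Every generated 2-cube is a "parallelogram" ...
assert all((c[0] - c[1] - c[2] + c[3]) % N == 0 for c in cubes)
# ... and every parallelogram arises this way: N free choices for each of
# x_{00}, x_{01}, x_{10}, with x_{11} then determined
assert len(cubes) == N ** 3
```

Since the group generated by the face configurations is abelian here, the full ${\rm HK}^2(\mathbb{Z})$-orbit is obtained from independent face exponents, and no closure is needed on the finite space.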
The category of cubespaces has natural notions of {\bf subcubespaces}, {\bf quotients} of cubespaces, and {\bf inverse limits} of cubespaces \index{cubespace!subcubespace} \cite[Remark 3.7, Definition 5.2]{GMVI}. Let us review inverse limits in the category of cubespaces. Let $X :=\varprojlim X_i$ be the inverse limit of an inverse system of cubespaces $(X_i, C^\bullet(X_i))$. Then the cubespace structure on $X$ is defined via $$C^k(X):=\varprojlim C^k(X_i)$$
where the projections $ C^k(X_{i+1}) \to C^k(X_i)$ are induced from the pointwise projections $\{X_{i+1} \to X_i\}_i$.
A cubespace $X$ is called {\bf ergodic} \index{ergodic} if every pair of points in $X$ is a $1$-cube, i.e. $C^1(X)=X^{\{0, 1\}}$.
Recall that a dynamical system $(G, X)$ is called {\bf minimal} if the orbit of every point of $X$ is dense in $X$. A simple observation is that if $(G, X)$ is minimal then the induced dynamical cubespace $(X, C^\bullet_G(X))$ is ergodic.
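This observation follows from a direct computation in the case $k=1$: the two hyperfaces of $\{0, 1\}$ are the singletons $\{0\}$ and $\{1\}$, so ${\rm HK}^1(G)=G^{\{0, 1\}}$ and
$$C_G^1(X)=\overline{\{(gx, hx): g, h \in G, x \in X \}}.$$
If $(G, X)$ is minimal then, for any fixed $x \in X$ and $g \in G$, the set $\{(gx, hx): h \in G\}$ is dense in $\{gx\} \times X$, while $\{gx: g \in G\}$ is dense in $X$; hence $C^1_G(X)=X^{\{0, 1\}}$.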
A cubespace $X$ is called {\bf strongly connected}\index{strongly connected} when all $C^k(X)$ are connected.
The first important property of cubespaces is the (corner) completion property.
Denote by $\overrightarrow{1}$ the element $(1, 1, \ldots, 1) \in \{0, 1\}^k$ and $\llcorner^k$\index{$\llcorner^k$} the set $\{0, 1\}^k\setminus \{ \overrightarrow{1} \}$. Let $X$ be a cubespace, $k \geq 0$, and $\lambda \colon \llcorner^k \to X$ a map. We call $\lambda$ a {\bf $k$-corner} \index{$k$-corner}
if every lower face of $\lambda$ is a $(k-1)$-cube, i.e. $\lambda|_{\{\omega: \ \omega_i=0\}} \in C^{k-1}(X)$ for all $i=1,\ldots, k$.
\begin{definition}
We say $X$ has {\bf $k$-completion} \index{$k$-completion} if every $k$-corner $\lambda$ of $X$ can be completed to a $k$-cube of $X$, i.e. $\lambda=c|_{\llcorner^k}$ for some $c \in C^k(X)$. A cubespace $X$ is called {\bf fibrant} \index{fibrant} if it has $k$-completion for every $k \geq 0$ (note that a $0$-corner is the empty set).
\end{definition}
Let $d$ be the metric on a compact metric space $X$. Recall that a dynamical system $(G, X)$ is called {\bf distal} if $\inf_{g \in G} d(gx, gx') >0$ for any distinct points $x, x' \in X$.
\begin{example}
Let $(G, X)$ be a minimal distal system. Then the associated Host-Kra cubespace is fibrant \cite[Theorem 7.10]{GGY18}. In general, this is not the case for nondistal systems \cite[Example 3.10]{TY13} \cite[Example 9.3]{GGY18}.
\end{example}
An important property of fibrant cubespaces is that they are gluing \cite[Proposition 6.2]{GMVI}. Let us recall the definition.
Let $c_1, c_2\colon \{0, 1\}^k \to X$ be two configurations. The {\bf concatenation} \index{concatenation} $ [c_1, c_2]\index{$[c_1, c_2]$}\colon \{0, 1\}^{k+1} \to X$ of $c_1$ and $c_2$ is defined by
sending $(\omega, 0)$ to $c_1(\omega)$ and $(\omega, 1)$ to $c_2(\omega)$ for every $\omega \in \{0, 1\}^k$.
\begin{definition}
We say a cubespace $X$ has the {\bf gluing property} or is {\bf gluing} \index{gluing}
if for every integer $k \geq 0$ and every $c_1, c_2, c_3 \in C^k(X)$, $[c_1, c_2], [c_2, c_3] \in C^{k+1}(X)$ implies that
$[c_1, c_3] \in C^{k+1}(X)$.
\end{definition}
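To illustrate the definition, consider the case $k=0$: a $0$-cube is simply a point, and for $x, y \in X$ the concatenation $[x, y]$ is the pair $(x, y) \in X^{\{0, 1\}}$. The gluing property at level $0$ thus states that
$$(x, y), (y, z) \in C^1(X) \implies (x, z) \in C^1(X),$$
that is, the relation on $X$ defined by $C^1(X)$ is transitive.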
Another important property of cubespaces is the uniqueness property.
\begin{definition}\label{nilspace}
A cubespace $X$ has {\bf $k$-uniqueness} \index{$k$-uniqueness} if for any $c_1, c_2 \in C^k(X)$ such that
$c_1|_{\llcorner^k}=c_2|_{\llcorner^k}$, one has $c_1=c_2$. Fix $s \geq 0$. We say $X$ is a {\bf nilspace of degree at most $s$} or simply an {\bf $s$-nilspace} \index{$s$-nilspace}
if it is fibrant and has $(s+1)$-uniqueness. We say $X$ is a {\bf nilspace} \index{nilspace} if it is an $s$-nilspace for some integer $s$.
\end{definition}
Note that if $X$ has $k$-uniqueness then $X$ has $\ell$-uniqueness for every $\ell \geq k$ as one can apply the $k$-uniqueness to some suitable $k$-face of a given $\ell$-cube.
\begin{definition} \label{canonical equivalence}
For a cubespace $X$, we say a pair of points $(x, x')$ of $X$ is {\bf $k$-canonically related}, denoted by $x \sim_k x'$\index{$\sim_k$}, if there exist $c, c' \in C^{k+1}(X)$ such that $c|_{\llcorner^{k+1}}=c'|_{\llcorner^{k+1}}$, $c(\overrightarrow{1})=x$, and $c'(\overrightarrow{1})=x'$. The relation $\sim_k$ is called the {\bf $k$-th canonical relation}\index{$k$-th canonical relation}.
\end{definition}
For compact gluing cubespaces the $k$-th canonical relation is an equivalence relation (\cite[Proposition 6.3]{GMVI}). This follows as compact gluing cubespaces satisfy the so-called {\bf universal replacement property}\index{universal replacement property} \cite[Proposition 6.3]{GMVI} (see also \cite[Lemma 2.5]{ACS12} and \cite[Proposition 3]{HK08}):
\begin{proposition} \label{URP}
Let $X$ be a compact gluing cubespace. Fix $s \geq 0$. Let $k \leq s+1$ and $c \in C^k(X)$. If a configuration $c' \in X^{\{0, 1\}^k}$ has the same image as $c$ under the quotient map $X \to X/\sim_s$, then $c' \in C^k(X)$.
\end{proposition}
\begin{proposition}\cite[Proposition 6.2]{GMVI} \label{fibrant is gluing}
Fibrant cubespaces (in particular nilspaces) satisfy the gluing property.
\end{proposition}
Combining Propositions \ref{URP} and \ref{fibrant is gluing}, we have that the universal replacement property for a fibrant cubespace implies $(s+1)$-uniqueness for
the quotient cubespace $X/\sim_s$. Indeed we have:
\begin{corollary} \label{inducing nilspace}
Let $X$ be a compact fibrant cubespace. Then $X/\sim_s$ is an $s$-nilspace for every $s \geq 0$.
\end{corollary}
The {\bf weak structure theorem}\index{weak structure theorem} for nilspaces of finite degree is an important result, factorizing such a nilspace as a finite tower of principal fiber bundles with compact abelian structure groups. We first recall the definition of a principal fiber bundle and then state the theorem.
\begin{definition} \cite[Definition 2.2]{H94}\label{def:principal fiber bundle}
Let $G$ be a topological group. A $G$-\textbf{principal fiber bundle} is a surjective continuous map $p \colon E \to B$ between two topological spaces $E$ and $B$ satisfying the following:
\begin{enumerate}
\item there exists a free continuous action of $G$ on $E$ such that for every $x \in E$
$$p^{-1}(p(x))=Gx;$$
\item there exists a homeomorphism $\varphi: B \to E/G$ such that $\varphi \circ p$ is the projection map $E \to E/G$.
\end{enumerate}
\end{definition}
\begin{theorem} \cite[Theorem 5.4]{GMVI}\cite[Theorem 1]{ACS12} \label{WST}
Let $X$ be a compact ergodic $s$-nilspace. Then $X$ factors as
$$X=X/\sim_s \to X/\sim_{s-1} \to \cdots \to X/\sim_0 \cong \{\ast\},$$
where $\{\ast\}$ is a singleton and each map $X/\sim_k \to X/\sim_{k-1}$ is an $A_k$-principal fiber bundle for some compact metrizable abelian group $A_k$.
\end{theorem}
\noindent The group $A_k$ is called the {\bf $k$-th structure group} of $X$. When all $A_k$ are Lie groups, $X$ is called {\bf Lie-fibered}\index{Lie-fibered}.
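For example, when $s=1$ the theorem states that a compact ergodic $1$-nilspace $X$ is an $A_1$-principal fiber bundle over the singleton $X/\sim_0 \cong \{\ast\}$. In other words, the compact abelian group $A_1$ acts freely and transitively on $X$, and choosing a basepoint $x_0 \in X$ yields a homeomorphism $A_1 \to X$, $a \mapsto a x_0$; thus $X$ is a torsor of the compact abelian group $A_1$.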
\subsection{Host-Kra cubespaces}
Given a topological group $G$, a sequence of decreasing closed subgroups
$$G=G_0 \supseteq G_1 \supseteq \cdots \supseteq G_{s+1}=\{e_G\}=G_{s+2}=\cdots$$
is an {\bf $s$-filtration} or a {\bf filtration of degree $s$} if $[G_i, G_j] \subseteq G_{i+j}$ for all $i, j \geq 0$ (here $[\cdot, \cdot]$ denotes the commutator subgroup). In such a case, we call $G$ a {\bf filtered group}\index{filtered group} and
write $G_\bullet$\index{$G_\bullet$} to emphasize that it is equipped with some $s$-filtration.
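A canonical example of an $s$-filtration is the lower central series of an $s$-step nilpotent group $G$: set
$$G_0=G_1=G, \qquad G_{i+1}=[G, G_i] \ {\rm for} \ i \geq 1.$$
The inclusion $[G_i, G_j] \subseteq G_{i+j}$ then holds for all $i, j \geq 0$ (a classical consequence of the three subgroups lemma), and $G_{s+1}=\{e_G\}$ precisely when $G$ is nilpotent of step at most $s$.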
\begin{definition} \label{Host-Kra cube}
For a filtered metric group $G_\bullet$, the {\bf Host-Kra $k$-cube group} ${\rm HK}^k(G_\bullet)$\index{${\rm HK}^k(G_\bullet)$} is defined
as the subgroup of $G^{\{0, 1\}^k}$ generated by $[x]_F$ for every face $F \subseteq \{0, 1\}^k $ and $x \in G_{(k-\dim (F))}$.
\end{definition}
We remark that the Host-Kra $k$-cube group ${\rm HK}^k(G)$ in Definition \ref{HK cube} can be recovered as ${\rm HK}^k(G_\bullet)$ with respect to the 1-filtration:
$$G=G_0=G_1 \supseteq G_2=\{e_G\}=\cdots.$$
When $G$ is a filtered topological group and $\Gamma$ is a discrete cocompact subgroup of $G$,
$G/\Gamma$ carries a quotient cubespace structure inherited from the Host-Kra cube group
${\rm HK}^k(G_\bullet)$, which we denote by ${\rm HK}^k(G_\bullet)/\Gamma$ (see \cite[Definition 2.4]{GMVI} for more details).
We summarize the properties of \emph{Host-Kra cubes} as follows \cite[Appendix A.4, Propositions 2.5, 2.6]{GMVI}.
\begin{proposition}
$(G, {\rm HK}^\bullet(G_\bullet))$ is a nilspace. Moreover, suppose that $\Gamma$ is {\bf compatible with $G_\bullet$} in the sense that $\Gamma \cap G_i$ is discrete and cocompact
in $G_i$ for all $i \geq 0$. Then for each $k \geq 0$, ${\rm HK}^k(G_\bullet)/\Gamma \subseteq (G/\Gamma)^{\{0, 1\}^k}$ is a compact subset. Hence
$(G/\Gamma, {\rm HK}^\bullet(G_\bullet)/\Gamma) $ is a compact nilspace.
\end{proposition}
Given a nilpotent Lie group $G$ and a discrete cocompact subgroup $\Gamma$, the quotient $G/\Gamma$ (which carries the structure of a manifold) is termed a {\bf nilmanifold}\index{nilmanifold}. Using nilmanifolds, the structure of a nilspace can be described as follows.
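A standard example is the Heisenberg nilmanifold: take
$$G=\left\{\begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1\end{pmatrix}: x, y, z \in {\mathbb R}\right\}, \qquad \Gamma=\left\{\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1\end{pmatrix}: a, b, c \in {\mathbb Z}\right\}.$$
Then $G$ is a $2$-step nilpotent Lie group, $\Gamma$ is discrete and cocompact, and $G/\Gamma$ is a compact $3$-dimensional nilmanifold. Equipping $G$ with its lower central series (a $2$-filtration), the subgroup $\Gamma$ is compatible with $G_\bullet$, so $(G/\Gamma, {\rm HK}^\bullet(G_\bullet)/\Gamma)$ is a compact nilspace.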
\begin{theorem} \cite[Theorem 1.28]{GMVIII} \label{absolute char}
Suppose that $X$ is a compact ergodic strongly connected nilspace. Then $X$ is isomorphic as a cubespace to an inverse limit $\varprojlim X_n$ of nilmanifolds $X_n$ endowed with Host-Kra cubes.
\end{theorem}
\subsection{Structure theorems for fibrations}\label{subsec:main theorems}
\begin{definition}
Suppose that $\varphi \colon X \to Y$ is a continuous map between two cubespaces $X$ and $Y$. We say
$\varphi$ is a {\bf cubespace morphism}\index{cubespace!cubespace morphism} if $\varphi$ sends every cube of $X$ to a cube of $Y$. That is,
the set $\{\varphi \circ c: c \in C^k(X)\}$ is contained in $C^k(Y)$
for every integer $k \geq 0$. We say $\varphi$ is a {\bf cubespace isomorphism} or simply an {\bf isomorphism} if $\varphi$ is a bijection and both $\varphi$ and $\varphi^{-1}$ are cubespace morphisms.
\end{definition}
In the sequel, given an integer $n \geq 1$ and a map $\varphi: X \to Y$ between metric spaces, we will use the same notation $\varphi$ for the map $X^n \to Y^n$ induced by pointwise application of $\varphi$, when no confusion arises.
\begin{definition} \label{relative ergodic}
A cubespace morphism $\varphi \colon X \to Y$ is called {\bf relatively $k$-ergodic} if for any $c \in C^k(Y)$, every configuration $c' \in X^{\{0, 1\}^k}$ with $\varphi \circ c'=c$ is a cube of $X$.
\end{definition}
It is clear that if $\varphi$ is relatively $k$-ergodic, $\varphi$ is relatively $\ell$-ergodic for each $\ell \leq k$.
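For example, the unique morphism $X \to \{\ast\}$ to the one-point cubespace is relatively $k$-ergodic if and only if every configuration is a cube, i.e. $C^k(X)=X^{\{0, 1\}^k}$. In particular, $X \to \{\ast\}$ is relatively $1$-ergodic if and only if $X$ is ergodic.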
Relativizing the concept of corner-completion, fibrations are introduced in \cite[Definition 7.1]{GMVI}:
\begin{definition} \label{fibration definition}
A cubespace morphism $f\colon X \to Y$ is called a {\bf fibration}\index{fibration} if $f$ has {\bf $k$-completion}\index{fibration!$k$-completion} for all $k \geq 0$. That is,
given a $k$-corner $\lambda$ in $X$, if $f( \lambda)$ can be completed to a cube $c$ in $Y$, then $\lambda$ can be completed to a cube $c_0$ of $X$ such that $f(c_0)=c$.
\end{definition}
\begin{example}
In the setting of Definition \ref{Host-Kra cube}, the quotient map $G \to G/\Gamma$ induces a fibration ${\rm HK}^k(G_\bullet) \rightarrow {\rm HK}^k(G_\bullet)/\Gamma$ \cite[Proposition A.17]{GMVI}.
\end{example}
It is clear that a composition of fibrations is a fibration. By a ``corner-lifting'' argument, we have the so-called
{\bf universal property}\index{fibration!universal property} of fibrations \cite[Lemma 7.8]{GMVI}:
\begin{proposition} \label{universal property}
Let $f: X \to Y$ be a cubespace morphism and $g: Y \to Z$ a map between cubespaces. If $f$ and $g\circ f$ are fibrations, then so is $g$.
\end{proposition}
We recall the relative notion of uniqueness for a cubespace morphism.
\begin{definition}\label{def:fibration_finite_degree}
Let $f\colon X \to Y$ be a cubespace morphism. We say $f$ has {\bf $k$-uniqueness}\index{fibration!$k$-uniqueness} if for any $k$-cubes $c, c'$ of $X$ such
that $c|_{\llcorner^k}=c'|_{\llcorner^k}$ and $f( c) =f( c')$ we have $c=c'$. Moreover, we say $f$ is {\bf a fibration of degree at most $s$} or simply an {\bf $s$-fibration}\index{fibration!$s$-fibration}
if $f$ is a fibration and has $(s+1)$-uniqueness.
\end{definition}
\begin{definition}
Given a fibration $f\colon X \to Y$, two points $x, x' \in X$ are called {\bf $k$-canonically related relative to $f$}, denoted by $x\sim_{f, k} x'$ \index{$\sim_k$!$\sim_{f, k}$}, if $x\sim_k x'$ and $f(x)=f(x')$.
\end{definition}
The following is a relative version of Corollary \ref{inducing nilspace}.
\begin{proposition}\cite[Proposition 7.12]{GMVI} \label{inducing $s$-fibration}
Let $f\colon X \to Y$ be a fibration between compact gluing cubespaces. Fix $s \geq 0$. Then the relation $\sim_{f, s}$ is a closed equivalence relation and the projection map $\pi_{f, s}\index{$\pi_{f, s}$} \colon X \to X/\sim_{f, s}$ is a fibration and factors through an $s$-fibration $g\colon X/\sim_{f, s} \to Y$, that is, the relation induces a commutative diagram
$$\xymatrix{
X \ar[dd]^f \ar@{-->}[dr]^{\pi_{f, s}} & \\
& X/\sim_{f, s} \ar@{-->}[dl]^g \\
Y.
}$$
\end{proposition}
We remark that the dashed arrows in the above proposition emphasize that the underlying maps are induced from the given map. This convention will be used throughout the paper.
The induced $s$-fibration $g: X/\sim_{f, s} \to Y$ is maximal in the following sense.
\begin{proposition} \label{maximal fibration}
Suppose that $f\colon X \to Y$ is a fibration and $s \geq 0$. Then the induced fibration $g\colon X/\sim_{f,s} \to Y$ is the {\bf maximal $s$-fibration} in the following sense: for each commutative diagram of cubespace morphisms
$$\xymatrix{
X \ar[r]^{\pi'} \ar[d]_f & Z \ar[1, -1]^{g'} \\
Y &
}$$
in which $\pi'$ is a fibration and $g'$ is an $s$-fibration, the map $\pi'$ factors through $\pi\colon X \to X/\sim_{f, s}$, i.e. there is a unique fibration $\varphi: X/\sim_{f,s} \to Z$ for which the following diagram is commutative:
$$\xymatrix{
X \ar[r]^{\pi} \ar[d]_{\pi'} & X/\sim_{f, s} \ar@{-->}[1, -1]^\varphi \\
Z. &
}$$
\end{proposition}
Now we are ready to state the {\bf relative weak structure theorem}\index{relative weak structure theorem} for fibrations \cite[Theorem 7.19, Corollary 7.20]{GMVI}.
\begin{theorem} \label{RWST}
Let $f\colon X \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Then $f$ factors as a finite tower of fibrations
$$\xymatrix{
X =X/\sim_{f, s} \ar[r] \ar[d]^f & X/\sim_{f, s-1} \ar[r] & \cdots \ar[r] & X/\sim_{f, 1} \ar[dlll] \\
Y \cong X/\sim_{f, 0}
},$$
where for each $s \geq k \geq 1$ the fibration $X/\sim_{f, k} \to X/\sim_{f, k-1}$ is an $A_k(f)$-principal fiber bundle for a compact metrizable abelian group $A_k(f)$.
\end{theorem}
The group $A_k(f)$ in the above theorem is named the {\bf $k$-th structure group} of $f$. In particular, $A_s(f)$ is called the {\bf top structure group}. If all $A_k(f)$ are Lie groups, $f$ is called {\bf Lie-fibered} as well as a {\bf Lie fibration}\index{Lie fibration}.
Observe that a cubespace $X$ is an $s$-nilspace if and only if the map $X \to \{\ast\}$ is an $s$-fibration. The following proposition elaborates on this point.
\begin{proposition} \label{fiber of fibration}
Let $f\colon X \to Y $ be an $s$-fibration. Then each fiber of $f$, as a subcubespace of $X$, is an $s$-nilspace.
\end{proposition}
The main goal of this paper is to describe the structure of an $s$-fibration.
Based on the relative weak structure theorem, our main endeavor is to relativize various techniques which appeared in the absolute setting: we first consider the Lie-fibered case and then deal with the general case.
One of the innovations in this article is to associate to a fibration $f\colon X \to Y$ the {\it $k$-th translation group ${\rm Aut}_k(f)$ of $f$} (Definition \ref{relative translation}). Building on this concept, the main result we obtain is as follows.
\begin{theorem} \label{relative inverse limit}
Let $g\colon Z \to Y$ be a fibration of degree at most $s$ between compact ergodic gluing cubespaces. Then there exists a sequence of compact ergodic gluing cubespaces $\{Z_n\}_{n\geq 0}$, and an inverse system of fibrations $\{p_{m, n}\colon Z_n \to Z_m\}_{n \geq m}$ of degree at most $s$ satisfying the following properties:
\begin{enumerate}
\item $Z$ is isomorphic to the inverse limit $\varprojlim Z_n$;
\item There exists a sequence of Lie fibrations $\{h_n\colon Z_n \to Y\}$ that are compatible with the connecting maps $p_{m, n}$;
\item Suppose that $g^{-1}(g(z))$ is strongly connected for some $z \in Z$. Define $z_n=p_n(z)$ as the image of the projection map $p_n\colon Z \to Z_n$. Then $g^{-1}(g(z))$ is isomorphic to the inverse limit of nilmanifolds $$\varprojlim ({\rm Aut}_1^\circ(h_n)/{\rm Stab}(z_n)).$$ Moreover, for each $k \geq 0$, $C^k(g^{-1}(g(z)))$ is isomorphic to the inverse limit
$$\varprojlim ({\rm HK}^k({\rm Aut}_\bullet^\circ(h_n))/{\rm Stab}(z_n)).$$
\end{enumerate}
\end{theorem}
We remark that statements $(1)$ and $(2)$ of the above theorem imply that an $s$-fibration factors as an inverse limit of Lie fibrations. This is analogous to the statement in the absolute setting which says that an $s$-nilspace equals an inverse limit of Lie-fibered $s$-nilspaces.
By Proposition \ref{fiber of fibration} the fiber $g^{-1}(g(z))$ is a nilspace. As by assumption $g^{-1}(g(z))$ is strongly connected, it follows from Theorem \ref{absolute char} that it is isomorphic as a cubespace to an inverse limit of nilmanifolds endowed with the Host-Kra cubes. However statement (3) is much stronger as it gives a
\emph{simultaneous} representation for \emph{all} fibers of $f$ in
terms of the fixed sequence of Lie groups ${\rm Aut}_1^\circ(h_n)$.
Thus the fibers are approximated uniformly as closely as desired
by quotients by natural co-compact subgroups of a single Lie group
associated with
the factorization\footnote{This follows from the following lemma:
For an inverse limit $Z=\varprojlim Z_n$ arising from continuous maps $p_n \colon (Z, d) \to (Z_n, d_n)$ between compact metric spaces, we have $\lim_{n \to \infty} \sup_{y \in Z_n} {\rm diam} (p_n^{-1}(y), d)=0$. As each $Z_n$ is homeomorphic to a disjoint
union of quotients of ${\rm Aut}_1^\circ(h_n)$, the approximation is
uniform for all fibers simultaneously. }.
We note that if one fiber is strongly connected then all fibers are strongly connected; this follows from Proposition \ref{prop:structure groups} in the sequel.
It is natural to ask what can be said for non-strongly-connected fibers. The general answer is not known; however, in \cite{R78} Rees exhibited a certain minimal distal extension of a rotation on a solenoid whose fibers are \emph{not} all homeomorphic. As such an extension is a fibration, we see that necessarily its fibers are not all isomorphic. Rees also proved that the fibers of a minimal distal extension of a path-connected system are homeomorphic. Inspired by this result we prove the following theorem, relying on a homotopy argument and the relative weak structure theorem.
\begin{theorem} \label{fiber isomorphism}
Let $g\colon Z \to Y$ be a fibration of degree at most $s$ between compact ergodic gluing cubespaces. Suppose that $Y$ is path-connected and $g$ is Lie-fibered. Then for any $y_0, y_1 \in Y$ the fibers $g^{-1}(y_0)$ and $g^{-1}(y_1)$ are isomorphic as cubespaces.
\end{theorem}
\subsection{Fibrations arising from dynamical systems}\label{subsec:Dynamical fibrations}
Proving that a cubespace morphism is a fibration is in general non-trivial. Turning to topological dynamical systems, we show that they provide a hitherto unknown source of examples of fibrations.
\begin{theorem} \label{dyn_factor_is_fibration}
Let $\pi\colon X \to Y$ be a factor map of minimal systems such that $X$ is fibrant (e.g., $X$ is minimal distal). Then $\pi$ is a fibration between the associated dynamical cubespaces.
\end{theorem}
The proof given in Section \ref{sec:Factor maps between minimal distal systems are fibrations} uses the relative Furstenberg structure theorem for distal extensions of minimal systems.
\begin{remark}
Another rich source of $s$-fibrations is found in ergodic theory. In \cite{gutman2019strictly}, Gutman and Lian studied the question when an ergodic abelian group extension of a strictly ergodic system admits a strictly ergodic distal model. Under some sufficient conditions they proved that the associated topological model map is actually an $s$-fibration.
\end{remark}
\subsection{The relative nilpotent regionally proximal relation}
In \cite{GGY18} Glasner, Gutman, and Ye introduced the so-called \emph{nilpotent regionally proximal relations} for actions of arbitrary groups. For minimal actions by abelian groups these relations coincide with the \emph{regionally proximal relations} introduced by Host, Kra and Maass \cite{HKM10}, generalizing the classical (degree $1$) definition of Ellis and Gottschalk \cite{EG60}. The regionally proximal relations are not equivalence relations in general. However \cite{veech1968equicontinuous,ellis1971characterization, McM78} proposed various sufficient conditions under which the regionally proximal relations of degree $1$ are equivalence relations. In particular this is the case for minimal abelian group actions \cite{SY12}. In \cite{GGY18} it was proven unexpectedly that for \emph{any} minimal action $(G, X)$, the nilpotent regionally proximal relation of degree $s$, denoted by ${\bf NRP}^{[s]}(X)$, is an equivalence relation and moreover using \cite[Theorem 1.4]{GMVIII}, under
some restrictions on the acting group, it corresponds to the maximal dynamical nilspace factor (also known as \emph{pronilfactor}) of order at most $s$. Moreover ${\bf NRP}^{[1]}(X)$ corresponds to the maximal abelian group factor of $(G, X)$ for any acting group $G$.
We recall the definition from \cite{GGY18}. Let $x, x' \in X$ be two points of a metric space $X$. Denote by $\llcorner^k(x, x')$\index{$\llcorner^k(x, x')$} the map $\{0,1\}^k \to X$ assigning the value $x'$ at $\overrightarrow{1}$ and $x$ elsewhere.
\begin{definition} \label{def:NRP}
Let $(G,X)$ be a dynamical system. We say a pair of points $(x, x')$ of $X$ are {\bf nilpotent regionally proximal of order $k$}\index{nilpotent regionally proximal of order $k$}, denoted by $(x, x') \in {\bf NRP}^{[k]}(X)$\index{${\bf NRP}^{[k]}(X)$}, if $\llcorner^{k+1}(x, x') \in C^{k+1}_G(X)$.
\end{definition}
\begin{theorem}\cite[Theorem 3.8]{GGY18}\label{equivalence relation}
If $(G, X)$ is minimal, then ${\bf NRP}^{[k]}(X)$ is a closed $G$-invariant equivalence relation for every $k \geq 0$.
\end{theorem}
\begin{definition} \label{relative NRP}
Let $\pi \colon (G, X) \to (G, Y)$ be a factor map of dynamical systems. We define the {\bf relative nilpotent regionally proximal relation of order $k$ w.r.t. $\pi$}, denoted by ${\bf NRP}^{[k]}(\pi)$\index{${\bf NRP}^{[k]}(X)$!${\bf NRP}^{[k]}(\pi)$}, as the intersection of ${\bf NRP}^{[k]}(X)$ with $R_\pi\index{$R_\pi$}:=\{(x, x') \in X^2: \pi(x)=\pi(x')\}$.
\end{definition}
\begin{proposition} \cite[Lemma 7.17]{GMVI} \label{alternative}
For a factor map $\pi: X \to Y$ of dynamical systems, we have ${\bf NRP}^{[k]}(\pi)=\sim_{\pi, k}$. In particular, ${\bf NRP}^{[k]}(X)=\sim_k$.
\end{proposition}
From Theorem \ref{equivalence relation}, we see that for any factor map $\pi$ between minimal systems ${\bf NRP}^{[k]}(\pi)$ is a closed $G$-invariant equivalence relation.
Let us discuss the relation between ${\bf NRP}^{[k]}(\pi)$ and several classical (relative) relations.
\begin{definition}
Let $\pi: (G, X) \to (G, Y)$ be a factor map. The {\bf relative proximal relation} ${\bf P}(\pi)$ is defined as those pairs $(x, y) \in R_\pi$ such that
$d(g_ix, g_iy)$ approaches $0$ for some sequence $\{g_i\}_i$ in $G$. The map $\pi$ is called a {\bf distal extension} if ${\bf P}(\pi)$ is trivial. In particular, $(G, X)$ is a distal system if and only if ${\bf P}((G, X) \to \{\ast\})$ is trivial. When $G$ is abelian, for every positive integer $k$, we define the {\bf relative $k$-th regionally proximal relation of $\pi$}, denoted by ${\bf RP}^{[k]}(\pi)$, as the collection of pairs $(x, y) \in R_\pi$ such that there exist sequences of
elements $(x_i, y_i) \in R_\pi$ and $(g_i^1, \ldots, g_i^k) \in G^k$ satisfying that
$$\lim_{i \to \infty} (x_i, y_i)=(x, y), {\rm \ and \ } \lim_{i \to \infty} d((\sum_{j=1}^k \epsilon_jg_i^j)x_i, (\sum_{j=1}^k \epsilon_jg_i^j)y_i)=0$$
for every $(\epsilon_1, \ldots, \epsilon_k) \in \{0, 1\}^k\setminus \{\overrightarrow{0}\}$.
\end{definition}
Penazzi gave some algebraic conditions under which the relative regionally proximal relation is an equivalence relation \cite{P95}. More about this topic may be found in \cite[Chapter V.2]{VriesB}.
Let $G$ be an abelian group and $\pi: (G, X) \to (G, Y)$ a factor map of minimal systems. From \cite[Proposition 8.9]{GGY18}, we have that ${\bf RP}^{[k]}(X) \subseteq {\bf NRP}^{[k]}(X)$. As a consequence, we have the following proposition.
\begin{proposition} \label{relative relation}
Let $G$ be an abelian group and $\pi: (G, X) \to (G, Y)$ a factor map of minimal systems. Then
${\bf P}(\pi) \subseteq {\bf RP}^{[1]}(\pi) \subseteq {\bf RP}^{[k]}(\pi) \subseteq {\bf NRP}^{[k]}(\pi)$. In particular, if ${\bf NRP}^{[k]}(\pi)$ is trivial, then $\pi$ is a distal extension.
\end{proposition}
\begin{remark}
\begin{enumerate}
\item For a factor map $\pi \colon (G, X) \to (G, Y)$ of minimal systems, since ${\bf RP}^{[1]}(\pi) \subsetneq {\bf RP}^{[1]}(X)\cap R_\pi$ in general (even for $G={\mathbb Z}$) \cite[Remark (2.2.3), Page 411]{VriesB}, we have in general that ${\bf RP}^{[1]}(\pi) \subsetneq {{\bf NRP}^{[1]}(\pi)}$.
\item Recall that ${\bf RP}^{[1]}(\pi)$ generates the maximal equicontinuous factor $(G, X/{\bf RP}^{[1]}(\pi)) \to (G, Y)$ (see \cite[Chapter V, Theorem 2.21]{VriesB}). One may wonder whether ${\bf NRP}^{[1]}(\pi)$ corresponds to the maximal factor which is a principal abelian group extension\footnote{See Definition \ref{def:principal abelian group}.} of the base space. It turns out that this is not true. Indeed, from the famous Furstenberg counterexample \cite[Remark 7.19]{GGY18}, a principal abelian group extension of minimal (distal) systems is not necessarily of finite degree relative to the factor system. To see this, recall that Furstenberg's example is given by the projection map $\pi\colon ({\mathbb T}^2, T) \to ({\mathbb T}, S)$. Here $T$ sends $(x, y)$ to $(x+\alpha, y+\varphi(x))$ for some continuous map $\varphi: {\mathbb T} \to {\mathbb T}$ and irrational number $\alpha$. Since $({\mathbb T}, S)$ is a minimal abelian group system, by \cite[Lemma 8.4]{GGY18}, ${\bf NRP}^{[k]}({\mathbb T}) \subseteq {\bf NRP}^{[1]}({\mathbb T})$ are trivial for all $k \geq 1$. This implies that ${\bf NRP}^{[k]}({\mathbb T}^2)\subseteq R_\pi$ and hence ${\bf NRP}^{[k]}(\pi)={\bf NRP}^{[k]}({\mathbb T}^2)$ is not trivial.
\end{enumerate}
\end{remark}
\emph{Systems of finite degree}\footnote{Also known as \emph{systems of finite order}.} were introduced in \cite{HKM10} for $\mathbb{Z}$-actions. In \cite{GGY18} they were defined for general group actions and investigated from a structural point of view. Recall that a minimal system $(G, X)$ is called a system of degree at most $s$ if ${\bf NRP}^{[s]}(X)$ equals the diagonal subset $\Delta$ of $X^2$ (\cite[Definition 7.1]{GGY18}). It is thus natural to define:
\begin{definition}\label{def:extension of degree}
Let $\pi\colon X \to Y$ be a factor map of minimal systems. We say $\pi$ is an {\bf extension of degree at most $s$ (relative to $Y$)} if ${\bf NRP}^{[s]}(\pi)=\Delta$.
\end{definition}
In Section \ref{sec:extensions of finite degree} we develop the structure theory of extensions of finite degree. Our main structural theorem, proven in Section \ref{sec:extensions of finite degree}, is an application of Theorem \ref{relative inverse limit} in the dynamical setting:
\begin{theorem}\label{thm:structure_finite_degree_extension}
Let $\pi \colon (G,X) \to (G,Y)$ be an extension of degree at most $s$ between minimal distal systems. Then the following holds:
\begin{enumerate}
\item The map $\pi$ is an $s$-fibration.
\item
There exists a sequence of dynamical systems $\{(G,Z_n)\}_{n\geq 0}$, and an inverse system of factor maps $\{p_{m, n}\colon (G,Z_n) \to (G,Z_m)\}_{n \geq m}$ such that $(G,X)=\varprojlim (G,Z_n)$.
\item
There exists a sequence of factor maps $\{h_n\colon (G,Z_n) \to (G,Y)\}$ which are Lie fibrations and are compatible with the connecting maps $p_{m, n}$.
\end{enumerate}
\end{theorem}
\begin{question}\label{q:fiber_isomorphic}
Let $\pi \colon X \to Y$ be an extension of degree at most $s$ between minimal systems such that $\pi$ is a fibration. Is every fiber of $\pi$ isomorphic as a cubespace to an inverse limit of nilmanifolds?
\end{question}
\begin{remark}
It is important to point out that in the relative setting, $G$ does not embed into ${\rm Aut}_1(\pi)$ because $G$ does not fix the fibers of $\pi$. This is a crucial obstruction, not present in the absolute setting, making Question \ref{q:fiber_isomorphic} non-trivial.
\end{remark}
\begin{remark}
It is interesting to compare Theorem \ref{thm:structure_finite_degree_extension} to a refinement of the relative Furstenberg structure theorem due to Bronstein (see \cite[3.17.8]{bronshteuin1979extensions}) which states that a distal extension of metric minimal dynamical systems factors as (a possibly countable) tower of \emph{isometric extensions}\footnote{See Definition \ref{isometric}.} where the fibers are given by quotients of compact Lie groups by closed subgroups. Note however that these compact Lie groups are not necessarily abelian.
\end{remark}
\subsection{Conventions}
Throughout the paper, all spaces are metric spaces and $G$ denotes a topological group. When we discuss an $s$-fibration, the underlying cubespaces are always assumed to be compact ergodic and have the gluing property. Unless specified otherwise $k$ always denotes a non-negative integer.
\subsection{Structure of the paper}
Theorem \ref{relative inverse limit} is proven in Sections \ref{sec:Strongly connected fibers} and \ref{sec:Approximating by Lie-fibered fibrations}, where Section \ref{sec:Strongly connected fibers} is solely devoted to the Lie-fibered case of statement (3). In Section \ref{sec:Isomorphisms between fibers of a fibration} we prove Theorem \ref{fiber isomorphism} and in Section \ref{sec:Factor maps between minimal distal systems are fibrations} we prove Theorem \ref{dyn_factor_is_fibration}. The final section, Section \ref{sec:extensions of finite degree} is devoted to
extensions of finite degree and to the proof of Theorem \ref{thm:structure_finite_degree_extension}.
\subsection{Acknowledgements}
Y.G. was partially supported by the National Science Centre (Poland) grant 2016/22/E/ST1/00448. B.L. would like to acknowledge the support from the Institute of Mathematics of the Polish Academy of Sciences.
\section{Strongly connected fibers}
\label{sec:Strongly connected fibers}
In this section, we prove the Lie-fibered case of Theorem \ref{relative inverse limit}(3). Recall that an $s$-fibration $g\colon Z \to Y$ between compact ergodic gluing cubespaces is called {\bf Lie-fibered} if all structure groups $A_k(g)$ of $g$ are Lie groups.
In light of Theorem \ref{RWST}, we write $g_{s-1}: Z/\sim_{g, s-1} \to Y$\index{$g_{s-1}$} for the induced $(s-1)$-fibration, which we may call the {\bf canonical $(s-1)$-th factor of $g$}.
\subsection{The main steps of the proof}
Let us first recall the notion of the $k$-th translation groups of a cubespace. For a compact cubespace $X$, denote by ${\rm Aut}(X)$ the collection of cubespace isomorphisms of $X$.
\begin{definition} \label{translation}
Fix $k \geq 0$. We say $\varphi \in {\rm Aut}(X)$\index{${\rm Aut}(X)$} is a {\bf $k$-translation} if for every $n \geq k$, $(n-k)$-face $F \subseteq \{0, 1\}^n$, and $c \in C^n(X)$, the map $\{0, 1\}^n \to X$ sending $\omega \in F$ to $\varphi(c(\omega))$ and $c(\omega)$ elsewhere is still a cube of $X$. The {\bf $k$-th translation group}\index{$k$-th translation group} ${\rm Aut}_k(X)$\index{${\rm Aut}(X)$!${\rm Aut}_k(X)$} of $X$ is defined as the collection of all $k$-translations of $X$.
\end{definition}
Let $d$ be the metric on $X$. For each homeomorphism $\phi \colon X \to X$, define
$$||\phi||:=\max_{x \in X} d(x, \phi(x)).$$
Then the compact-open topology of the group of homeomorphisms of $X$, denoted by ${\rm Homeo}(X)$, can be induced from the metric
$$d(\phi, \psi):=||\phi \circ \psi^{-1}||.$$
In particular, as a closed subgroup of ${\rm Homeo}(X)$, ${\rm Aut}(X)$ is a metric space.
When we say $\varphi \in {\rm Homeo}(X)$ is (appropriately) {\bf small}, we mean $||\varphi||$ is (appropriately) small; if $f\circ \varphi=f$, we say $\varphi$ {\bf fixes the fibers of $f$}.
Now consider a fibration $f\colon X \to Y$. To study the cubespace structure of the fibers of $f$, we introduce the notion of the $k$-th translation group of $f$.
Note that each fiber $f^{-1}(y)$ is a subcubespace of $X$.
\begin{definition} \label{relative translation}
The {\bf $k$-th translation group of $f$}\index{$k$-th translation group!$k$-th translation group of $f$} is defined as
$${\rm Aut}_k(f)\index{${\rm Aut}(X)$!${\rm Aut}_k(f)$}:=\{\varphi \in {\rm Homeo}(X): \varphi|_{f^{-1}(y)} \in {\rm Aut}_k(f^{-1}(y)) \ {\rm for \ every \ } y \in Y \}.$$
\end{definition}
Intuitively this means that each element $\varphi$ of ${\rm Aut}_k(f)$ is a homeomorphism of $X$ such that $\varphi$ fixes the fibers of $f$ and preserves the cubespace structure of fibers in a strong sense.
\begin{remark}
It is interesting to compare this definition with the definition of the \emph{fiber preserving automorphism group} of a dynamical extension $\phi\colon (G, X) \to (G, Y)$, used to characterize (weak) group extensions \cite[Chapter V, Remark 4.2 (4)]{VriesB}.
\end{remark}
When $f$ is an $s$-fibration, ${\rm Aut}_{s+1}(f)=\{1\}$ and we obtain a filtration ${\rm Aut}_\bullet(f)$ of degree $s$ as follows:
$${\rm Aut}_0(f):={\rm Aut}_1(f) \supseteq {\rm Aut}_2(f) \supseteq \cdots \supseteq {\rm Aut}_{s+1}(f)=\{1\}.$$
We may therefore consider ${\rm Aut}_\bullet(f)$ as a cubespace (furthermore an $s$-nilspace) endowed with Host-Kra cubes.
Denote by ${\rm Aut}_k^\circ(f)$\index{${\rm Aut}(X)$!${\rm Aut}_k^\circ(f)$} the connected component of ${\rm Aut}_k(f)$ containing the identity. We obtain a filtration of closed groups
$${\rm Aut}_0^\circ(f):={\rm Aut}_1^\circ(f) \supseteq {\rm Aut}_2^\circ(f) \supseteq \cdots \supseteq {\rm Aut}_{s+1}^\circ(f)=\{1\}.$$
Similarly to \cite[Proposition 3.1]{GMVII}, we have
\begin{proposition} \label{embedding}
Let $g\colon Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Then $A_s(g)$ embeds into ${\rm Aut}_s(g)$ by sending $a$ to $f_a$, where $f_a\colon Z \to Z$ is given by $f_a(z)=a.z$.
\end{proposition}
\begin{proof}
Fix $n \geq s, y \in Y$, and $c \in C^n(g^{-1}(y))$. Define $A:=A_s(g)$ and $\pi:=\pi_{g, s-1}$.
Since $C^n({\mathcal D}_s(A))c=\pi^{-1}(\pi(c))$ (see definition of ${\mathcal D}_s(A)$ in \cite[Section 5.1]{GMVI}), we have
$$g([a]_Fc)=g_{s-1}\circ \pi([a]_Fc)=g_{s-1}\circ \pi(c)=g(c)=y$$
for any face $F \subseteq \{0, 1\}^n$ of codimension $s$.
It follows that $f_a|_{g^{-1}(y)} \in {\rm Aut}_s(g^{-1}(y))$ and thus $f_a \in {\rm Aut}_s(g)$.
\end{proof}
As a relative analogue of \cite[Proposition 2.17]{GMVII}, the following proposition says that we can also use the evaluation map to construct the desired cubespace morphism.
\begin{proposition} \label{morphism repr}
Let $g: Z \to Y$ be an $s$-fibration. Fix $z \in Z$ and a cube $c$ of $C^n(g^{-1}(g(z)))$. Then for any $(\varphi_\omega)_{\omega \in \{0, 1\}^n}$ in ${\rm HK}^n({\rm Aut}_\bullet(g))$, $(\varphi_\omega(c(\omega)))_\omega \in C^n(g^{-1}(g(z)))$. In particular the induced natural map
$${\rm Aut}_1(g)/{\rm Stab}(z) \to g^{-1}(g(z))$$
is a cubespace morphism.
\end{proposition}
\begin{proof}
The definition of ${\rm Aut}_\bullet(g)$ guarantees that the map is well defined. The rest of the argument stays the same as in \cite[Proposition 2.17]{GMVII}.
\end{proof}
Recall that a cubespace $X$ is called {\bf strongly connected}\index{strongly connected} if $C^k(X)$ is connected for every $k \geq 0$.
In \cite[Theorem 2.18]{GMVII}, a strongly connected Lie-fibered nilspace $X$ is shown to be a nilmanifold using the translation groups ${\rm Aut}_k(X)$. For the relative case, we are now ready to state the structure theorem for a Lie-fibered fibration of finite degree.
\begin{theorem}[The Lie-fibered case of Theorem \ref{relative inverse limit}(3)] \label{relative nilmanifold}
Let $g\colon Z \to Y$ be a fibration of degree at most $s$ between compact ergodic cubespaces that obey the gluing condition. Fix a point $z \in Z$ and suppose that $g$ is Lie-fibered. Then the following hold.
\begin{enumerate}
\item ${\rm Aut}_1^\circ(g)$ is a Lie group;
\item the stabilizer ${\rm Stab}(z)$ of $z$ in ${\rm Aut}_1^\circ(g)$ is a discrete cocompact subgroup;
\item if $g^{-1}(g(z))$ is strongly connected as a subcubespace, then the fiber $g^{-1}(g(z))$ is homeomorphic to the nilmanifold ${\rm Aut}_1^\circ(g)/{\rm Stab}(z)$. Moreover, there are cubespace isomorphisms
$$C^k(g^{-1}(g(z))) \cong {\rm HK}^k({\rm Aut}_\bullet^\circ(g))/{\rm Stab}(z)$$
for all $k \geq 1$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{relative nilmanifold} uses the full strength of the relative weak structure theorem (Theorem \ref{RWST}) and will be given at the end of this subsection.
Recall that in \cite[Proposition 3.2]{GMVII}, for an $s$-nilspace $X$ a canonical group homomorphism ${\rm Aut}_k(X) \to {\rm Aut}_k(X/\sim_s)$ is exhibited. We adapt the statement and argument to our relative setting.
\begin{proposition} \label{canonical map}
Fix $s, k \geq 1$. Let $g\colon Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Then there is a canonical continuous group homomorphism $\pi_\ast\colon {\rm Aut}_k(g) \to {\rm Aut}_k(g_{s-1})$ such that for every $\varphi \in {\rm Aut}_k(g)$ the diagram
$$\xymatrix{
Z \ar[r]^\varphi \ar[d]_{\pi_{g, s-1}} \ar@/^3pc/[drr]^g & Z \ar[dr]^g \ar[d]_{\pi_{g, s-1}} & \\
\pi_{g, s-1}(Z) \ar@{-->}[r]^{\pi_\ast(\varphi)} \ar@/_2pc/[rr]^{g_{s-1}} & \pi_{g, s-1}(Z) \ar[r]^{g_{s-1}} & Y
}$$
commutes.
\end{proposition}
\begin{proof}
We modify the proof in the absolute setting to the relative case. Denote by $\pi$ the projection map $\pi_{g, s-1}$. Given $\varphi \in {\rm Aut}_k(g)$ and $y \in \pi(Z)$, writing $y=\pi(x)$ for some $x \in Z$, we define $\pi_\ast(\varphi)(y):=\pi \circ \varphi(x)$.
We check that $\pi_\ast$ is well-defined. Note that $\varphi \in {\rm Aut}_k(g) \subseteq {\rm Aut}_1(g)$. By Proposition \ref{embedding}, $A_s(g)$ embeds into ${\rm Aut}_s(g)$. Since ${\rm Aut}_\bullet(g)$ is filtered, we conclude that $A_s(g)$ commutes with ${\rm Aut}_k(g)$. For any $y \in \pi(Z)$ and $x, x' \in \pi^{-1}(y)$, there exists a unique $a \in A_s(g)$ such that $x'=ax$. Then
$$(\pi_\ast(\varphi))(y)=\pi \circ \varphi(x')=\pi\circ \varphi(ax)=\pi\circ (a\varphi(x))=\pi\circ \varphi(x).$$
This shows that $\pi_\ast$ is independent of the choice of lifting. The statement that $\pi_\ast(\varphi)$ is a $k$-translation follows from the facts that $\varphi$ is a $k$-translation and $\pi$ is a cubespace morphism.
\end{proof}
The first hard ingredient of the proof of Theorem \ref{relative nilmanifold}, used for establishing the nilmanifold structure in statement (3), is to relativize \cite[Proposition 3.3]{GMVII}: every element of ${\rm Aut}_k(g_{s-1})$ of sufficiently small norm can be realized as the image under $\pi_\ast$ of some element of ${\rm Aut}_k(g)$. The proof will be given in Section 4.2.
\begin{theorem}\label{openness}
Fix $s \geq 1$. Let $g: Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Assume that $A_s(g)$ is a Lie group. Then $\pi_\ast \colon {\rm Aut}_k(g) \to {\rm Aut}_k(g_{s-1})$ is open for all $k \geq 1$.
\end{theorem}
The next result is analogous to \cite[Theorem 5.2]{GMVII}. As the proof is similar we omit it.
\begin{theorem} \label{rigidity}
Let $A$ be a compact abelian Lie group with a metric $d_A$. Fix integers $s \geq 0$ and $\ell \geq 1$. Then there exists
$\varepsilon > 0$ depending only on $s, \ell$ and $A$ such that the following holds.
Let $g: Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Suppose that $f: Z \to A$ is a continuous function such that
$$d_A(f(z), f(z')) \leq \varepsilon$$
for every $z, z' \in Z$ with $g(z)=g(z')$, and the map $\partial^\ell f: C^\ell_g(Z) \to A$ sending $c$ to
$$\sum_{\omega \in \{0, 1\}^\ell} (-1)^{|\omega|} f(c(\omega))$$
is the zero function. Then $f$ is a constant function.
\end{theorem}
The following is used in the proof of Theorem \ref{relative nilmanifold}.
\begin{lemma} \label{discreteness}
If $g$ is Lie-fibered, then ${\rm Stab}(z) \subseteq {\rm Aut}_1(g)$ is discrete for every $z \in Z$.
\end{lemma}
\begin{proof}
In light of the relative weak structure theorem (Theorem \ref{RWST}), Theorem \ref{rigidity} guarantees that the argument in the absolute setting may be modified appropriately.
\end{proof}
The proof of the following lemma follows the argument of \cite[Proposition 3.7]{GMVII} (with the help of Lemma \ref{discreteness}) so we omit it.
\begin{lemma} \label{open subgroup}
$A_s(g) \subseteq \ker (\pi_\ast)$ is open.
\end{lemma}
Based on Lemma \ref{open subgroup}, we can prove one of the statements of Theorem \ref{relative nilmanifold}.
\begin{corollary} \label{Lieness}
The $1$-translation group ${\rm Aut}_1(g)$ has a Lie group structure.
\end{corollary}
\begin{proof}
First note that ${\rm Aut}_1(g_0)={\rm Aut}_1({\rm id}_Y)=1$ and hence $\ker({\rm Aut}_1(g_1) \to {\rm Aut}_1(g_0))={\rm Aut}_1(g_1)$. By Lemma \ref{open subgroup}, $A_1(g)$ is an open subgroup of $\ker({\rm Aut}_1(g_1) \to {\rm Aut}_1(g_0))$. Thus we can extend the differentiable structure of $A_1(g)$ onto ${\rm Aut}_1(g_1)$.
Inductively assume that ${\rm Aut}_1(g_{s-1})$ is Lie. By Theorem \ref{openness}, $\Ima (\pi_\ast)$ is an open subgroup of ${\rm Aut}_1(g_{s-1})$ and hence Lie. Applying the extension theorem for Lie groups \cite[Theorem 3.1]{Gle51}, it suffices to show $\ker (\pi_\ast)$ has a Lie group structure. By Lemma \ref{open subgroup}, $A_s(g) \subseteq \ker (\pi_\ast)$ is an open subgroup. Thus we can extend the Lie group structure of $A_s(g)$ onto $\ker (\pi_\ast)$.
\end{proof}
The following is an analogue of \cite[Lemma 3.10]{GMVII}.
\begin{lemma} \label{identity orbit is open}
Let $g\colon Z \to Y$ be a Lie-fibered $s$-fibration. Fix $0 \leq k < s$. Then for any $\varepsilon > 0$ there exists $\delta > 0$ satisfying the following.
For any $z, z' \in Z$ such that $z\sim_{g, k} z'$ and $d(z, z') < \delta$, there is $\varphi \in {\rm Aut}_{k+1}(g)$ such that $||\varphi||< \varepsilon$ and $\varphi(z)=z'$. In particular, for each $z \in Z$, ${\rm Aut}_{k+1}^\circ(g)z$ is open in $\pi_{g, k}^{-1}(\pi_{g, k}(z))$.
\end{lemma}
\begin{proof}
Based on Proposition \ref{canonical map} and Theorem \ref{openness}, similarly to the proof of \cite[Lemma 3.10]{GMVII}, we can induct on $s$ to obtain the first conclusion. We now prove that ${\rm Aut}_{k+1}^\circ(g)z$ is open. Fix $\varphi \in {\rm Aut}_{k+1}^\circ(g)$. Let $\varepsilon > 0$ be small enough such that the $\varepsilon$-ball $B_\varepsilon({\rm id})$ at the identity is contained in ${\rm Aut}_{k+1}^\circ(g)$.
There exists $\delta >0$ satisfying that whenever $y, y' \in \pi_{g, k}^{-1}(\pi_{g, k}(z))$ such that $d(y, y')< \delta$, we can find $\varphi' \in B_\varepsilon({\rm id})$ such that
$y=\varphi'(y')$.
Note that for $y \in B_\delta(\varphi(z)) \cap \pi_{g, k}^{-1}(\pi_{g, k}(z))$, we have
$$\pi_{g, k}(y)=\pi_{g, k}(z)=\pi_{g, k}\circ \varphi(z).$$
It follows that $y =\varphi' \circ \varphi(z)$
for some $\varphi' \in B_\varepsilon({\rm id}) \subseteq {\rm Aut}_{k+1}^\circ(g)$, so $y \in {\rm Aut}_{k+1}^\circ(g)z$.
\end{proof}
The following is an analogue of \cite[Lemma 3.12]{GMVII}.
Using the relative weak structure theorem, the proof follows a similar argument, inducting on $s$.
\begin{lemma} \label{cubehom is open}
For each $c \in C^n(g^{-1}(g(z)))$, the evaluation map ${\rm ev}_c\colon {\rm HK}^n({\rm Aut}_\bullet^\circ(g)) \to C^n(g^{-1}(g(z)))$ sending $(\varphi_\omega)_\omega$ to $(\varphi_\omega(c(\omega)))_\omega$ is open.
\end{lemma}
\begin{proof}[{\bf Proof of Theorem \ref{relative nilmanifold}}]
Since ${\rm Aut}_k^\circ(g) \subseteq {\rm Aut}_k(g) \subseteq {\rm Aut}_1(g)$ are closed subgroups, and by Corollary \ref{Lieness}, ${\rm Aut}_1(g)$ is Lie, we have ${\rm Aut}_k^\circ(g)$ is Lie.
To show that the evaluation map ${\rm HK}^n({\rm Aut}_\bullet^\circ(g)) \to C^n(g^{-1}(g(z)))$ is surjective, it suffices to show that the action ${\rm HK}^n({\rm Aut}_\bullet^\circ(g)) \curvearrowright C^n(g^{-1}(g(z)))$ is transitive. Since ${\rm Aut}_k^\circ(g)$ is open in ${\rm Aut}_k(g)$ for all $k \geq 0$, by a relative version of \cite[Lemma 3.15]{GMVII}, ${\rm HK}^n({\rm Aut}_\bullet^\circ(g))$ is open in ${\rm HK}^n({\rm Aut}_\bullet(g))$. Thus by Lemma \ref{cubehom is open}, the orbits of ${\rm HK}^n({\rm Aut}_\bullet^\circ(g))$ are open and hence closed in $C^n(g^{-1}(g(z)))$.
Since $C^n(g^{-1}(g(z)))$ is connected by assumption, the action is transitive. In particular, the evaluation map ${\rm ev}_z\colon {\rm Aut}_1^\circ(g)/{\rm Stab}(z) \to g^{-1}(g(z))$ and the induced maps
$${\rm ev}_{\Box^n(z)}\colon {\rm HK}^n({\rm Aut}_\bullet^\circ(g))/{\rm Stab}(z) \to C^n(g^{-1}(g(z)))$$
by pointwise application are homeomorphisms. Here $\Box^n(z)$\index{$\Box^n(z)$} denotes the constant cube of $C^n(Z)$ taking value $z$.
To see that ${\rm Stab}(z)\cap {\rm Aut}_k^\circ(g)$ is cocompact in ${\rm Aut}_k^\circ(g)$ for all $k \geq 1$, we first note by Lemma \ref{identity orbit is open} that ${\rm Aut}_k^\circ(g)z$ is open in $\pi_{g, k-1}^{-1}(\pi_{g, k-1}(z))$.
Moreover, since orbits partition the space and ${\rm Aut}_k^\circ(g)z$ is an orbit, it must be closed and hence compact. From the homeomorphism between ${\rm Aut}_k^\circ(g)z$ and ${\rm Aut}_k^\circ(g)/({\rm Stab}(z)\cap {\rm Aut}_k^\circ(g))$, we see that ${\rm Stab}(z)\cap {\rm Aut}_k^\circ(g)$ is cocompact.
\end{proof}
\subsection{Lifting relative translations}
In this subsection, we prove Theorem \ref{openness}. There are two steps in the proof: (1) lift a small translation $\varphi \in {\rm Aut}_k(g_{s-1})$ to a small homeomorphism $\psi\colon Z \to Z$; (2) replace $\psi$ with a genuine $k$-translation of the form $h(\cdot).\psi(\cdot)$ for some continuous map $h\colon Z \to A_s(g)$ with small norm. We will refer to the operation of replacing $\psi$ with a $k$-translation of the form $h(\cdot).\psi(\cdot)$ as {\bf repairing}. Step (2) will be based on a relative version of \cite[Theorem 4.11]{GMVII} and \cite[Lemma 3.19]{ACS12}.
By a careful local section argument for Lie-principal bundles as in \cite[Lemma 4.2]{GMVII}, we accomplish the first step by the following lemma.
\begin{lemma} \label{preliftting}
Let $g\colon Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Assume that $A_s(g)$ is Lie. Then every small enough homeomorphism $\varphi$ of $\pi_{g, s-1}(Z)$ lifts to a small homeomorphism $\psi \colon Z \to Z$ such that $\psi$ is $A_s(g)$-equivariant.
\end{lemma}
\begin{definition}
For a cubespace morphism $f\colon X \to Y$, we define the {\bf $k$-th fiber cubes of $f$}\index{$k$-th fiber cube} as the closed subset
$$C^k_f(X):=\bigcup_{y \in Y} C^k(f^{-1}(y)).$$
\end{definition}
\begin{definition}
Let $\ell \geq 1$ and $n \geq 0$. Given $c_1, c_2 \in C^n(X)$, the {\bf generalized $\ell$-corner} $\llcorner^\ell(c_1; c_2)\colon \{0, 1\}^{n+\ell} \to X$ is given by $c_2(\omega_1, \ldots, \omega_n)$ for $\omega=(\omega_1, \ldots, \omega_n, \overrightarrow{1})$ and $c_1(\omega_1, \ldots, \omega_n)$ elsewhere.
\end{definition}
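For instance, when $n=0$ and $c_1, c_2$ are points $x, y \in X$, the generalized $\ell$-corner $\llcorner^\ell(x; y)\colon \{0, 1\}^\ell \to X$ is the configuration taking the value $y$ at $\overrightarrow{1}$ and $x$ at every other vertex of $\{0, 1\}^\ell$.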
We need to repair the lift $\psi$ to a $k$-translation of ${\rm Aut}_k(g)$. Unwrapping the definition, we want to find a small continuous function $h\colon Z \to A_s(g)$ such that the map $\widetilde{\varphi}\colon Z \to Z$ defined by sending $z$ to $h(z).\psi(z)$ is a $k$-translation of $g$. To show $\widetilde{\varphi} \in {\rm Aut}_k(g)$, we need the following criterion.
\begin{proposition}\cite[Proposition 2.13]{GMVII} \label{criteron}
Let $X$ be an $s$-nilspace. Fix $0 \leq k \leq s+1$. Then a homeomorphism $\phi\colon X \to X$ is a $k$-translation if and only if for any $(s+1-k)$-cube $c$ of $X$ the configuration $\llcorner^k(c; \phi(c))$ is an $(s+1)$-cube.
\end{proposition}
Applying the above proposition to the cubespace $g^{-1}(y)$, we need to show that $\llcorner^k(c; \widetilde{\varphi}(c))$ is an $(s+1)$-cube of $g^{-1}(y)$ for every cube $c \in C^{s+1-k}(g^{-1}(y))$.
In order to measure the extent to which a configuration fails to be a cube, let us introduce the notion of discrepancy in the setting of an $s$-fibration $g\colon Z \to Y$. Abbreviate $\pi_{g, s-1}$ by $\pi$. Let $c\colon \{0, 1\}^{s+1} \to Z$ be a map such that $\pi(c) \in C^{s+1}(\pi(Z))$, say, $\pi(c)=\pi(c_0)$ for some $c_0 \in C^{s+1}(Z)$. By the relative weak structure theorem (Theorem \ref{RWST}), there exists a unique map $\beta\colon \{0, 1\}^{s+1} \to A_s(g)$ such that $c=\beta . c_0$.
\begin{definition}
The {\bf discrepancy}\index{discrepancy} $\Delta(c)$\index{$\Delta(c)$} of $c$ is defined as
$$\Delta(c):=\sum_{\omega \in \{0, 1\}^{s+1}} (-1)^{|\omega|}\beta(\omega).$$
Here $|\omega|$ denotes the sum $\sum_{j=1}^{s+1}\omega_j$ for $\omega=(\omega_1, \ldots, \omega_{s+1})$.
\end{definition}
Following the argument of \cite[Proposition 4.5]{GMVII}, we have that the discrepancy is well defined and $\Delta(c)=0$ if and only if $c \in C^{s+1}(Z)$.
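For instance, when $s=1$ the above reads as follows: if $c\colon \{0, 1\}^2 \to Z$ satisfies $\pi(c) \in C^2(\pi(Z))$ and $c=\beta.c_0$ with $c_0 \in C^2(Z)$, then
$$\Delta(c)=\beta(0,0)-\beta(1,0)-\beta(0,1)+\beta(1,1) \in A_1(g),$$
and $c$ is itself a $2$-cube of $Z$ precisely when these four correction terms cancel.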
We now relativize the notion of cocycles and coboundaries of cubespaces:
\begin{definition}
Let $f\colon X \to Y$ be a cubespace morphism and $A$ an abelian group. Fix an integer $\ell \geq 1$. Consider a continuous map $\rho \colon C^\ell_f (X) \to A$. We say $\rho$ is an {\bf $\ell$-cocycle\index{cocycle} on fibers of $f$} if it is {\bf additive} in the sense that
$$\rho([c_1, c_3])=\rho([c_1, c_2])+ \rho([c_2, c_3])$$
for any $c_1, c_2, c_3 \in C^{\ell-1}_f(X)$ such that the three concatenations in the equation are in $C^\ell_f(X)$.
We say $\rho$ is a {\bf coboundary}\index{coboundary} if there exists a continuous map $h: X \to A$ such that $\rho$ can be written as
$$\rho(c)=\partial^\ell h(c): =\sum_{\omega \in \{0, 1\}^\ell} (-1)^{|\omega|}h(c(\omega))$$
for every $c \in C^\ell_f(X)$.
\end{definition}
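Note that every coboundary is additive, hence a cocycle on fibers of $f$: writing $\rho=\partial^\ell h$ and using that the concatenation $[c_1, c_3]$ places $c_1$ on the face $\omega_\ell=0$ and $c_3$ on the face $\omega_\ell=1$, we compute
$$\partial^\ell h([c_1, c_3])=\partial^{\ell-1} h(c_1)-\partial^{\ell-1} h(c_3)=\big(\partial^{\ell-1} h(c_1)-\partial^{\ell-1} h(c_2)\big)+\big(\partial^{\ell-1} h(c_2)-\partial^{\ell-1} h(c_3)\big)=\rho([c_1, c_2])+\rho([c_2, c_3]).$$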
\begin{lemma} \label{repairing}
Let $\psi\colon Z \to Z$ be a homeomorphism lifting some element $\varphi$ of ${\rm Aut}_k(g_{s-1})$. Then the map
$$\rho_\psi \colon C^{s+1-k}_g(Z) \to A_s(g)$$
sending $c$ to $\Delta(\llcorner^k(c; \psi(c)))$ is well defined.
Moreover, $\psi$ can be repaired to a $k$-translation of ${\rm Aut}_k(g)$ if $\rho_\psi$ is the $(s+1-k)$-coboundary of some function with sufficiently small norm.
\end{lemma}
\begin{proof}
Fix $y \in Y$ and a cube $c \in C^{s+1-k}(g^{-1}(y))$. Since $\varphi \in {\rm Aut}_k(g_{s-1})$, from
$$\pi(\llcorner^k(c; \psi(c)))=\llcorner^k(\pi(c); \pi(\psi(c)))=\llcorner^k(\pi(c); \varphi (\pi(c))),$$
we have $\pi(\llcorner^k(c; \psi(c)))$
is a cube of $C^{s+1-k}(\pi(g^{-1}(y)))$. Thus the discrepancy $\Delta(\llcorner^k(c; \psi(c)))$ is well defined and so is $\rho_\psi$.
Assume $\rho_\psi =\partial^{s+1-k}f$ for some continuous function $f\colon Z \to A_s(g)$ with small norm. We will show it is possible to ``integrate $f$ over $A_s(g)$'', resulting in a function $F\colon Z \to A_s(g)$ such that $F(ax)=F(x)$ for every $a \in A_s(g)$ and $x \in Z$. In order to show that such a procedure is well defined, let us recall a ``lifting up \& down'' technique for defining the integration.
Since $A_s(g)$ is a compact abelian Lie group, there exists an isomorphism $\phi\colon A_s(g) \to ({\mathbb R}/{\mathbb Z})^d\times K$ for some finite-dimensional torus $({\mathbb R}/{\mathbb Z})^d$ and some finite group $K$. If $f$ is of sufficiently small norm, there exists a sufficiently small $\delta$ such that the image of $f$ belongs to a $\delta$-ball $B$ around the identity with respect to a given compatible metric. Notice that $\phi$ induces an embedding $p\colon B \to {\mathbb R}^d$.
For every $x \in Z$, define $F(x)$ as
$$F(x):=p^{-1}\left(\int_{A_s(g)} p(f(ax))dm(a)\right),$$
where $m$ denotes the Haar measure on $A_s(g)$. It is easy to check that $F$ is well defined (see \cite[Subsection 5.2]{GMVII} for further explanation).
Notice that the repaired map $\widetilde{\varphi}(x):=F(x).\psi(x)$ is a continuous bijection between compact Hausdorff spaces and hence a homeomorphism. By Lemma \ref{fiber of fibration}, $g^{-1}(y)$ is an $s$-nilspace.
Applying Proposition \ref{criteron} to it, we obtain that $\widetilde{\varphi}|_{g^{-1}(y)} \in {\rm Aut}_k(g^{-1}(y))$. Thus $\widetilde{\varphi}$ is a $k$-translation of
${\rm Aut}_k(g)$.
\end{proof}
Let us introduce the notion of concatenation along the $k$-th axis.
\begin{definition}
Let $1 \leq k \leq \ell+1$. Given a cubespace $X$ and two maps: $c_1, c_2\colon \{0, 1\}^\ell \to X$, the {\bf concatenation along the $k$-th axis} $[c_1, c_2]_k \index{$[c_1, c_2]$!$[c_1, c_2]_k$} \colon \{0, 1\}^{\ell+1} \to X$ is defined by sending $\omega$ to
$$c_1(\omega_1, \ldots, \omega_{k-1}, \omega_{k+1}, \ldots, \omega_{\ell+1})$$
if $\omega_k=0$ and $c_2(\omega_1, \ldots, \omega_{k-1}, \omega_{k+1}, \ldots, \omega_{\ell+1})$ elsewhere.
\end{definition}
\noindent In particular, $[c_1, c_2]_{\ell+1}$ is simply the concatenation as defined previously.
We need a variant of \cite[Theorem 5.1]{GMVII} to prove Theorem \ref{openness}.
\begin{theorem} \label{fiber cocycle theorem}
Let $A$ be a compact abelian Lie group. Fix $s \geq 0$ and $\ell \geq 1$.
Then there exists $\varepsilon > 0$ (depending only on $s, \ell$, and $A$) satisfying the following.
Let $\beta \colon Z \to Y$ be an $s$-fibration. Fix $0 < \delta < \varepsilon$.
Suppose that $\rho\colon C^\ell_\beta(Z) \to A$ is an $\ell$-cocycle on fibers of $\beta$ such that $d(\rho(c), \rho(c')) \leq \delta$
whenever $c, c'$ are cubes on the same fiber of $\beta$. Then
$$\rho=\partial^\ell f$$
for some continuous function $f: Z \to A$ which is almost constant on fibers of $\beta$, i.e. there exists a constant $c > 0$ (depending only on $s$ and $\ell$) such that $d(f(x), f(y)) \leq c\delta$ whenever $\beta(x)=\beta(y)$.
\end{theorem}
\begin{proof}
The proof uses a similar argument to the one used in the proof of \cite[Theorem 5.1]{GMVII}.
To modify the proof of \cite[Lemma 5.7]{GMVII}, consider the set
$$T_1^\ell:=\{t: \{0, 1\}^{\ell-1} \to A_s(\beta): [\overrightarrow{0}, t] \in C^\ell(\mathcal{D}_s(A_s(\beta)))\}.$$
Then for any $c \in C^{\ell-1}_\beta(Z)$ and $t \in T^\ell_1$,
we have $[c, t.c] \in C^\ell_\beta(Z)$. Therefore $\rho': C^{\ell -1}_\beta(Z) \to A$ sending $c$ to
$$\rho'(c):=\int_{T_1^\ell} \rho([c, t.c])d\mu_{T_1^\ell}(t)$$
is well defined. Here $\mu_{T_1^\ell}$ denotes the measure on $T_1^\ell$ induced from the Lebesgue measure (see \cite[Subsection 5.2]{GMVII} for more details).
\end{proof}
\begin{proof}[Proof of Theorem \ref{openness}.]
Without loss of generality, we may assume $k \leq s$. Since $\pi_\ast$ is a group homomorphism, it suffices to show it is open at the identity. Let $\varphi\in{\rm Aut}_k(g_{s-1})$ be an element of small norm. We need to find a small $\widetilde{\varphi}$ in ${\rm Aut}_k(g)$ such that
$$\pi_{g, s-1}\circ \widetilde{\varphi}=\varphi \circ \pi_{g, s-1}.$$
By Lemma \ref{preliftting}, we can lift $\varphi$ to a small homeomorphism $\psi\colon Z \to Z$ fixing the fibers of $g$. Let $\Box^k(c) \colon \{0, 1\}^{s+1} \to Z$ denote the map sending $\omega=(\omega_1, \ldots, \omega_{s+1})$ to $c(\omega_1, \ldots, \omega_{s+1-k})$. Since $\psi$ is of small norm, for any $c \in C^{s+1-k}_g(Z)$, $\llcorner^k(c; \psi(c))$ is close to the $(s+1)$-cube $\Box^k(c) \in C^{s+1}(Z)$, and thus the discrepancy $\Delta(\llcorner^k(c; \psi(c)))$ is close to $\Delta(\Box^k(c))=0$. This implies that the image of the $(s+1-k)$-cocycle $\rho_\psi$ has small diameter. Thus we can apply Theorem \ref{fiber cocycle theorem} to write $\rho_\psi$ as
$$\rho_\psi=\partial^{s+1-k}f$$
for some continuous map $f\colon Z \to A_s(g)$ which is almost constant on the fibers of $g$. Then we can decompose $f$
as
$$f=\widetilde{f}\circ g + h$$
for some continuous maps $\widetilde{f}\colon Y \to A_s(g)$ and $h\colon Z \to A_s(g)$ such that $h$ is almost constant $0_{A_s(g)}$ (in the sense of Theorem \ref{fiber cocycle theorem}).
Note that for any $c \in C^{s+1-k}_g(Z)$, $g(c)$ is a constant cube, so $\partial^{s+1-k}(\widetilde{f}\circ g)$ vanishes on $C^{s+1-k}_g(Z)$. It follows that $\rho_\psi=\partial^{s+1-k}h$.
Since $\psi$ fixes the fibers of $\pi_{g, s-1}$ and $\varphi$ fixes the fibers of $g_{s-1}$, we have that $\psi$ fixes fibers of $g_{s-1}\circ \pi_{g, s-1}=g$. Since the image of $h$ is contained inside $A_s(g) \subseteq {\rm Aut}_s(g)$, for each $z \in Z$, $h(z)$ also fixes fibers of $g$. Therefore, we have
$$g(h(z).\psi(z))=g(\psi(z))=g(z),$$
i.e. $\widetilde{\varphi}:=h.\psi$ fixes the fibers of $g$. Combining this with Lemma \ref{repairing}, we conclude that $\widetilde{\varphi} \in {\rm Aut}_k(g)$.
Finally, since $h$ is close to the constant function $0_{A_s(g)}$, the repairing $\widetilde{\varphi}$ of small $\psi$ will also be small as desired.
\end{proof}
\section{Approximating by Lie-fibered fibrations}\label{sec:Approximating by Lie-fibered fibrations}
In this section, we complete the proof of Theorem \ref{relative inverse limit}.
\subsection{The main steps of the proof}
Let $g\colon Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces.
To deal with the general case as in \cite[Theorem 1.28]{GMVIII}, we need to first represent $Z$ as an inverse limit $\varprojlim Z_n$ respecting the fibers of $g$ as in \cite[Theorem 1.26]{GMVIII}, and then endow the fibers of the fibrations $Z_n \to Y$ with Host-Kra cube structure to obtain the cubespace isomorphism.
The following is an analogue of \cite[Theorem 1.26]{GMVIII} and covers the statements (1) and (2) of Theorem \ref{relative inverse limit}.
\begin{theorem} \label{space as inverse limit}
Fix $s \geq 1$. Let $g\colon Z \to Y$ be an $s$-fibration between compact ergodic gluing cubespaces. Then $Z$ is isomorphic to an inverse limit
$$\varprojlim Z_n$$
for an inverse system of $s$-fibrations between compact ergodic gluing cubespaces $\{p_{m, n}\colon Z_n \to Z_m\}_{0\leq m \leq n \leq \infty}$ and compatible Lie-fibered $s$-fibrations $\{h_n\colon Z_n \to Y\}$.
Here "compatible" means the following diagram commutes:
$$\xymatrix{
Z \ar[r] \ar[d]^g & \cdots \ar[r] & Z_n \ar[r]^{p_{n-1, n}} \ar[dll]_{h_n}^{\cdots} & Z_{n-1} \ar[r] & \cdots \ar[r] & Z_1 \ar[dlllll]^{h_1}\\
Y:=Z_0.
}$$
\end{theorem}
Let us start with some preliminary steps. Since every compact abelian group equals an inverse limit of compact abelian Lie groups \cite[Lemma 2.1]{GMVIII}, we can write the top structure group $A_s(g)$ of $g$ as an inverse limit of Lie groups $A_n$ with $A_0=\{0\}$, i.e.
$$A_s(g)=\varprojlim A_n.$$
Denote by $K_n$ the kernel of the quotient homomorphism $A_s(g) \to A_n$, and denote by $Z^{(n)}_\infty$ the orbit space of $Z$ under the action of $K_n$, i.e. $Z^{(n)}_\infty:=Z/K_n$.
Before studying $Z^{(n)}_\infty$, we record a general completion criterion.
\begin{proposition} \label{inner fibrant}
Let $f\colon X \to Y, g\colon Y \to Z$ and $h\colon X \to Z$ be three cubespace morphisms such that $h=g\circ f$. Suppose that $h$ has $k$-completion and $g$ has $k$-uniqueness. Then $f$ has $k$-completion.
\end{proposition}
\begin{proof}
Assume that $\lambda$ is a $k$-corner of $X$ such that $f(\lambda)=c|_{\llcorner^k}$ for some $k$-cube $c$ of $Y$. We want to show there exists $x \in f^{-1}(c(\overrightarrow{1}))$ completing $\lambda$ as a cube of $X$.
Note that $g(c)$ is a $k$-cube extending $(g\circ f)(\lambda)=h(\lambda)$. Since $h$ has $k$-completion, there exists $x \in h^{-1}(g(c(\overrightarrow{1})))$ completing $\lambda$ to a $k$-cube of $X$; in particular $(g\circ f)(x)=h(x)=g(c(\overrightarrow{1}))$. Then $f(x)$ completes $f(\lambda)$ as another $k$-cube sharing the same $g$-image as $c$. Since $g$ has $k$-uniqueness, this implies that $f(x)=c(\overrightarrow{1})$, which finishes the proof.
\end{proof}
Recall that for a compact abelian group $A$, ${\mathcal D}_s(A)$ denotes the Host-Kra cubespace with respect to the $s$-filtration:
$$A=A_0=A_1=\cdots=A_s\supseteq A_{s+1}=\{0\}.$$
\begin{lemma}
$Z^{(n)}_\infty$ has the gluing property.
\end{lemma}
\begin{proof}
Let $c_1, c_2, c_2', c_3 \in C^k(Z)$ be such that $[c_1, c_2], [c_2', c_3] \in C^{k+1}(Z)$ and $\overline{c_2}=\overline{c_2'}$ in $C^k(Z^{(n)}_\infty)$.
We want to show $[\overline{c_1}, \overline{c_3}] \in C^{k+1}(Z^{(n)}_\infty)$.
By the relative weak structure theorem (Theorem \ref{RWST}), there exists a unique $\alpha \in C^k({\mathcal D}_s(K_n))$ such that $c_2'=\alpha.c_2$.
Note that $[\alpha, \alpha] \in C^{k+1}({\mathcal D}_s(K_n))$. Thus $[\alpha.c_1, \alpha.c_2]$ is a $(k+1)$-cube of $Z$. Gluing with $[c_2', c_3]$, we obtain that $[\alpha.c_1, c_3] \in C^{k+1}(Z)$. In particular, $[\overline{c_1}, \overline{c_3}] \in C^{k+1}(Z^{(n)}_\infty)$. Thus $Z^{(n)}_\infty$ has the gluing property.
\end{proof}
Denote by
$\beta_n\colon Z \to Z^{(n)}_\infty$
the quotient map. By the definition of $Z^{(n)}_\infty$, $g$ uniquely factors through $\beta_n$ via a map
$g^{(n)}\colon Z^{(n)}_\infty \to Y.$ Since $g$ has $(s+1)$-uniqueness, applying the universal replacement property in a similar way to the proof of Proposition \ref{inducing $s$-fibration}, we see that $g^{(n)}$ has $(s+1)$-uniqueness as well. Since $g$ is a fibration, by Proposition \ref{inner fibrant}, $\beta_n\colon Z \to Z^{(n)}_\infty$ is fibrant for cubes of dimension greater than $s$. Applying the universal replacement property of $Z$, $\beta_n$ is fibrant for cubes of dimension less than $s+1$. Thus $\beta_n$ is a fibration.
By Proposition \ref{universal property}, $g^{(n)}$ is again a fibration and hence an $s$-fibration. Applying the $(s+1)$-uniqueness of $g$ again, we see that $\beta_n$ has $(s+1)$-uniqueness and hence is an $s$-fibration. Moreover, we have
$$\pi_{g^{(n)}, s-1}(Z^{(n)}_\infty)=\pi_{g, s-1}(Z)=Z^{(0)}_\infty.$$
In summary, we have the commutative diagram:
$$\xymatrixcolsep{5pc}\xymatrix{
Z \ar[d]_{\beta_n} \ar[ddr]^g & \\
Z^{(n)}_\infty \ar[d] \ar@{-->}[dr]^{g^{(n)}} & \\
\pi_{g, s-1}(Z) \ar[r]^{g_{s-1}} & Y.
}$$
By the inductive hypothesis, the $(s-1)$-fibration $g_{s-1}\colon \pi_{g, s-1}(Z) \to Y $ factors as an inverse limit of a sequence of Lie-fibered $(s-1)$-fibrations
$\psi_{0, m}\colon Z^{(0)}_m \to Y$ along with compatible fibrations $\psi_{m, \infty}\colon \pi_{g, s-1}(Z) \to Z^{(0)}_m$. In fact, since $g_{s-1}$ is an $(s-1)$-fibration, so are $\psi_{0, m}$ and $\psi_{m, \infty}$. Here it is convenient to denote $Y$ by $Z^{(0)}_0$ and $Z$ by $Z^{(\infty)}_\infty$.
To factor $g^{(n)}$ properly based on the factorization of $g_{s-1}$, we will introduce some key definitions stemming from the following lemma, which is a variant of Proposition \ref{canonical map}.
\begin{lemma} \label{relative shadow}
Let $X, Y, Z$ be three compact ergodic gluing cubespaces and $\varphi\colon X \to Y$ a fibration. Let $g\colon X \to Z$ and $h\colon Y \to Z$ be two $s$-fibrations such that $g=h\circ \varphi$. Then there exists a unique fibration $\psi\colon \pi_{g, s-1}(X) \to \pi_{h, s-1}(Y)$ such that the following diagram
$$\xymatrix{
X \ar[r]^\varphi \ar[d]_{\pi_{g, s-1}} \ar@/^3pc/[drr]^g & Y \ar[dr]^h \ar[d]_{\pi_{h, s-1}} & \\
\pi_{g, s-1}(X) \ar@{-->}[r]^\psi \ar@/_2pc/[rr]^{g_{s-1}} & \pi_{h, s-1}(Y) \ar[r]^{h_{s-1}} & Z
}$$
commutes.
\end{lemma}
\begin{proof}
Using $g=h\circ \varphi$, one can directly check that $\pi_{h, s-1}\circ \varphi$ factors through $\pi_{g, s-1}$ by a unique map $\psi$ such that $\pi_{h, s-1}\circ \varphi=\psi\circ \pi_{g, s-1}$. By the universal property of fibrations \cite[Lemma 7.8]{GMVI}, $\psi$ is a fibration. From
$$h_{s-1}\circ \psi \circ \pi_{g, s-1}=h_{s-1}\circ \pi_{h, s-1}\circ \varphi=h\circ \varphi=g=g_{s-1}\circ \pi_{g, s-1},$$
we obtain $h_{s-1}\circ \psi=g_{s-1}$. This shows that the diagram is commutative.
\end{proof}
\begin{definition}
With the setup in Lemma \ref{relative shadow}, we say that $\psi$ is the {\bf shadow}\index{shadow} of $\varphi$. Moreover, we say $\varphi$ is {\bf horizontal}\index{horizontal} if it satisfies one of the following equivalent conditions:
\begin{enumerate}
\item $\varphi(x)\neq \varphi(x')$ for any $x \neq x' \in X$ with $x\sim_{g, s-1} x'$;
\item for any $x \in X$, the appropriate restriction of $\varphi$ induces a bijection between $\pi^{-1}_{g, s-1}(\pi_{g, s-1}(x))$ and $\pi^{-1}_{h, s-1}(\pi_{h, s-1}(\varphi(x)))$;
\item the equivalence relation $\sim_{\varphi, s-1}$ is trivial.
\end{enumerate}
\end{definition}
Now we are ready to formulate the relative version of \cite[Proposition 2.5]{GMVIII}.
\begin{proposition} \label{middle fibration}
Let $Z^{(n)}_\infty, \beta_n, g^{(n)}, \psi_{m, \infty}$, etc. be as above. There exists a strictly increasing sequence $ M_1, M_2, \ldots$ of positive integers satisfying the following. For each $n \in {\mathbb N}$ and $m \geq M_n$, there is a compact ergodic gluing cubespace $Z^{(n)}_m$ and an $s$-fibration
$$h^{(n)}_m\colon Z^{(n)}_m \to Y$$
satisfying that:
\begin{enumerate}
\item $\psi_{0, m}$ is the canonical $(s-1)$-th factor of $h^{(n)}_m$ with top structure group $A_n$;
\item there is a horizontal $s$-fibration $\varphi^{(n)}_m\colon Z^{(n)}_\infty \to Z^{(n)}_m$ such that $g^{(n)}=h_m^{(n)} \circ \varphi^{(n)}_m$ and $\psi_{m, \infty}$ is the relative shadow of $\varphi^{(n)}_m$;
\item If $m_1 \leq m_2$ and $n_1 \leq n_2$ are such that $Z^{(n_1)}_{m_1}$ and $Z^{(n_2)}_{m_2}$ are both defined, then the fibers of $\varphi^{(n_2)}_{m_2}\circ \beta_{n_2}$ refine the fibers of $\varphi^{(n_1)}_{m_1}\circ \beta_{n_1}$.
\end{enumerate}
In summary, we have a commutative diagram
$$\xymatrixcolsep{5pc}\xymatrix{
Z^{(n)}_\infty \ar@{-->}[r]^{\varphi^{(n)}_m} \ar[d]_{A_n} \ar@/^4pc/[drr]^{g^{(n)}} & Z^{(n)}_m \ar@{-->}[dr]^{h^{(n)}_m} \ar@{-->}[d]_{A_n} & \\
Z^{(0)}_\infty \ar[r]^{\psi_{m, \infty}} \ar@/_2pc/[rr]^{g_{s-1}} & Z^{(0)}_m \ar[r]^{\psi_{0, m}} & Y.
}$$
\end{proposition}
\begin{proof}[{\bf Proof of Theorem \ref{space as inverse limit}.}]
Using the notation of Proposition \ref{middle fibration}, we define $Z_n=Z^{(n)}_{ M_n}$ and the fibration $h_n=h^{(n)}_{ M_n}$. Note that the top structure group of $h_n$ is the Lie group $A_n$. By the induction hypothesis, the canonical $(s-1)$-factor $\psi_{0, M_n}$ of $h_n$ is Lie-fibered. Combining these two facts, we have that $h_n$ is Lie-fibered.
Define $p_{n, \infty}=\varphi^{(n)}_{ M_n}\circ \beta_n$. Since both $\varphi^{(n)}_{ M_n}$ and $\beta_n$ are $s$-fibrations, so is $p_{n, \infty}$. Then for every $n < \ell < \infty$, the fibers of $p_{\ell, \infty}$ refine the fibers of $p_{n, \infty}$. Thus by Proposition \ref{universal property}, $p_{n, \infty}$ and $p_{\ell, \infty}$ induce a unique fibration $p_{n, \ell}$ such that $p_{n, \infty}=p_{n, \ell}\circ p_{\ell, \infty}$. Moreover, we obtain a commutative diagram
$$ \xymatrixcolsep{5pc}\xymatrix{
Z \ar[r] \ar[d]^g \ar[r]^{p_{\ell, \infty}} \ar@/^2pc/[rr]^{p_{n, \infty}} & Z_\ell \ar@{-->}[r]^{p_{n, \ell}} \ar[dl]_{h_\ell} & Z_n \ar[dll]^{h_n} \\
Y.
}$$
For every $0 \leq n < \ell < o \leq \infty $, the condition $p_{n, \ell}\circ p_{\ell, o}=p_{n, o}$ may be verified in a similar way to the absolute setting. We verify that the inverse system $Z_n$ separates points of $Z$. Let $z, z'$ be two distinct points of $Z$. If $\pi_{g, s-1}(z)\neq \pi_{g, s-1}(z')$, then
$\psi_{ M_n, \infty}\circ\pi_{g, s-1}(z) \neq \psi_{ M_n, \infty}\circ \pi_{g, s-1}(z')$ as $n$ is large enough. Since $\psi_{ M_n, \infty}$ is the relative shadow of $\varphi^{(n)}_{ M_n}$, we have
$$\psi_{ M_n, \infty}\circ\pi_{g, s-1}=\pi_{h_n, s-1}\circ\varphi^{(n)}_{ M_n}\circ \beta_n=\pi_{h_n, s-1}\circ p_{n, \infty}.$$
It follows that $p_{n, \infty}(z)\neq p_{n, \infty}(z')$.
If $\pi_{g, s-1}(z) = \pi_{g, s-1}(z')$, then there is a unique $a \in A_s(g)$ such that $z'=az$. Note that $a \notin K_n$ as $n$ is large enough. Thus $\beta_n(z)\neq \beta_n(z')$. Since $\varphi_{ M_n}^{(n)}$ is horizontal, we get
$$p_{n, \infty}(z)=\varphi^{(n)}_{ M_n}\circ \beta_n(z) \neq \varphi^{(n)}_{ M_n}\circ \beta_n(z')=p_{n, \infty}(z').$$
\end{proof}
The following is an analogue of \cite[Theorem 1.27]{GMVIII} and \cite[Theorem 4]{ACS12}. We will give the proof in Section 5.3.
\begin{theorem} \label{endow cubestructure}
Let $X, Y, Z$ be three compact ergodic gluing cubespaces. Suppose that $\varphi\colon X \to Y$
is a fibration, and $g\colon X \to Z$ and $h\colon Y \to Z$ are two Lie-fibered $s$-fibrations such that $g=h\circ \varphi$. Then for every $i\geq 1$, $\varphi$ induces a surjective continuous group homomorphism
$$\Phi\colon {\rm Aut}_i^\circ(g) \to {\rm Aut}_i^\circ(h)$$ such that
$\Phi (u)\circ \varphi=\varphi \circ u$
for all $u \in {\rm Aut}_i^\circ(g) $. In summary, the following diagram is commutative:
$$\xymatrixcolsep{5pc}\xymatrix{
X \ar[r]^u \ar[d]_\varphi & X \ar[dr]^g \ar[d]^\varphi & \\
Y \ar@{-->}[r]^{\Phi(u)} & Y \ar[r]^h & Z.
}$$
\end{theorem}
\begin{proof}[{\bf Proof of Theorem \ref{relative inverse limit}}]
Statements $(1)$ and $(2)$ were proven in Theorem \ref{space as inverse limit}. Let us prove statement $(3)$. The case $s=0$ is trivial since $g$ is clearly a cubespace isomorphism and each fiber of $g$ is a singleton. We now assume $s \geq 1$. By Theorem \ref{space as inverse limit}, since the diagram commutes, we can realize $g^{-1}(g(z))$ as an inverse limit of the $h_n^{-1}(g(z))$ by restricting the projection maps $Z \to Z_n$. By Theorem \ref{relative nilmanifold}, $h_n^{-1}(g(z))=h_n^{-1}(h_n(z_n))$ is isomorphic to ${\rm Aut}_1^\circ(h_n)/{\rm Stab}(z_n)$. Thus
$$g^{-1}g(z)\cong \varprojlim (h_n^{-1}h_n(z_n)) \cong \varprojlim ({\rm Aut}_1^\circ(h_n)/{\rm Stab}(z_n)). $$
To see that the above isomorphism is a cubespace isomorphism, apply Theorem \ref{endow cubestructure} to the fibration $p_{n-1, n}$ and Lie-fibered $s$-fibrations $h_n$ and $h_{n-1}$, to obtain a surjective continuous homomorphism
$$\Phi_{{n-1, n} }\colon {\rm Aut}_i^\circ(h_n) \to {\rm Aut}_i^\circ(h_{n-1}).$$
This induces an inverse limit $\varprojlim ({\rm HK}^k({\rm Aut}_\bullet^\circ(h_n))/{\rm Stab}(z_n))$ for every $k \geq 0$. By Theorem \ref{relative nilmanifold}, $C^k(h_n^{-1}(h_n(z_n)))$ is isomorphic to ${\rm HK}^k({\rm Aut}_\bullet^\circ(h_n))/{\rm Stab}(z_n)$. Thus we obtain the desired cubespace isomorphism.
\end{proof}
The following proposition gives a useful condition for verifying when a nilspace fiber is strongly connected.
\begin{proposition} \label{prop:structure groups} Let $f \colon X \to Y$ be a fibration of degree at most $d$ with structure groups $A_1,\ldots, A_d$, then for all $y\in Y$, the subcubespaces $f^{-1}(y)$ are nilspaces of degree at most $d$ with structure groups $A_1,\ldots, A_d$.\end{proposition}
\begin{proof}Recall that for each $k \geq 0$ and $y \in Y$, $C^k(f^{-1}(y))$ is given by the restriction $C^k(X)\cap (f^{-1}(y))^{\{0,1\}^k}$. Since $f$ is a fibration and, for each $k$-corner $\lambda$ of $f^{-1}(y)$, $f(\lambda)$ can be completed to a constant $k$-cube, $\lambda$ can be completed to a cube of $f^{-1}(y)$. Thus $f^{-1}(y)$ has $k$-completion. Since $f$ has $(d+1)$-uniqueness, $f^{-1}(y)$ has $(d+1)$-uniqueness as well. This proves that $f^{-1}(y)$ is a nilspace of degree at most $d$.
Now we show that the top structure group of $f^{-1}(y)$ is $A_d$. The other structure groups are handled by the same argument, which we omit. Write $A=A_d$ and $y=f(x)$ for some $x \in X$. By the construction of $A$ in the proof of \cite[Theorem 7.19]{GMVI}, for every $a \in A$, we have $f(ax)=f(x)$ and $ax\sim_{d-1} x$. Let $p \colon f^{-1}(y) \to f^{-1}(y)/\sim_{d-1}$ be the canonical projection map. It follows that $f^{-1}(y)$ is $A$-invariant and moreover $Ax \subseteq p^{-1}(p(x))$. On the other hand, if $x' \in f^{-1}(y)$ and $x' \sim_{d-1} x$, by the relative weak structure theorem, there exists a unique $a \in A$ such that $x'=ax$. So we have $Ax=p^{-1}(p(x))$. Thus $A$ is the top structure group of $f^{-1}(y)$. \end{proof}
\subsection{Straight classes and sections}
To prove Proposition \ref{middle fibration}, we need an analogue of \cite[Proposition 2.13]{GMVIII}. Let us start with some preliminary steps.
\begin{definition}
Let $X, Z, W$ be three compact ergodic gluing cubespaces. Given an $s$-fibration $g\colon X \to Z$ and a fibration $\psi\colon \pi_{g, s-1}(X) \to W$, we call a subset $D \subseteq X$ a {\bf straight $\psi$-class}\index{straight class} if there exists $w \in W$ such that
\begin{enumerate}
\item $D \cap \pi^{-1}_{g, s-1}(u)$ is a singleton for every $u \in \psi^{-1}(w)$ and $D$ is the union of those singletons;
\item a configuration $c\colon \{0, 1\}^{s+1} \to D$ is a cube if and only if $\pi_{g, s-1}(c)$ is a cube of $W$.
\end{enumerate}
In short, $D$ is a $\pi_{g, s-1}$-lifting of some fiber of $\psi$ respecting cube structure.
Consider a configuration $c\colon \{0, 1\}^{s+1} \to X$ inducing a cube $\pi_{g, s-1}(c)$. In light of the relative weak structure theorem (Theorem \ref{RWST}) and \cite[Proposition 5.1]{GMVI}, there exists a unique element $a \in A_s(g)$ such that the application of $a$ to $c$ at $(0, \ldots, 0) \in \{0, 1\}^{s+1}$ results in a cube of $X$. We call this element $a \in A_s(g)$ the {\bf discrepancy} of $c$ and denote it by $D(c)$.
Let $U \subseteq W$ be an open subset. We say a continuous map $\sigma\colon \psi^{-1}(U) \to X$ is a {\bf straight section}\index{straight section} if
\begin{enumerate}
\item $\pi_{g, s-1}\circ \sigma={\rm id}_{\psi^{-1}(U)}$;
\item for any $c_1, c_2 \in C^{s+1}(\psi^{-1}(U))$ with $\psi(c_1)=\psi(c_2)$, we have $D(\sigma(c_1))=D(\sigma(c_2))$.
\end{enumerate}
\end{definition}
We remark that the straightness of a section $\sigma$ implies that $\sigma$ maps every fiber of $\psi$ onto a straight $\psi$-class.
The following lemma is a relative version of \cite[Lemma 2.7]{GMVIII}.
\begin{lemma} \label{fiber class}
Let $X, Y, Z$ be three compact ergodic gluing cubespaces and $\varphi\colon X \to Y$ a fibration. Let $g\colon X \to Z$ and $h\colon Y \to Z$ be two $s$-fibrations such that $g=h\circ \varphi$. If $\varphi$ is horizontal and $\psi$ denotes its shadow, then each fiber of $\varphi$ is a straight $\psi$-class.
\end{lemma}
The following lemma is a relative analogue of \cite[Proposition 2.8]{GMVIII}, based on \cite[Theorem 1.25]{GMVIII} and the relative weak structure theorem (Theorem \ref{RWST}).
\begin{lemma} \label{enough straight classes}
Suppose that $g\colon X \to Z$ is an $s$-fibration between compact ergodic gluing cubespaces such that the top structure group is a Lie group $A$. Then for any $\varepsilon > 0$ there is $\delta > 0$ satisfying the following property.
Let $\psi\colon \pi_{g, s-1}(X) \to W$ be an $(s-1)$-fibration to another compact ergodic gluing cubespace $W$ such that $\psi$ is a $\delta$-embedding in the sense that every fiber of $\psi$ has diameter less than $\delta$. Then for every $c \in C^{s+1}(\pi_{g, s-1}(X))$, there is an open set $U \subseteq W$ satisfying that
\begin{enumerate}
\item the image of $c$ is contained inside $ \psi^{-1}(U)$;
\item there is a straight $\psi$-section $\sigma: \psi^{-1}(U) \to X$ with ${\rm diam} (\sigma (\psi^{-1}(b))) \leq \varepsilon$ for every $b \in U$.
\end{enumerate}
In particular, every $x \in X$ is contained in a straight $\psi$-class of small diameter (explicitly, $x \in a.\sigma\circ \psi^{-1}(\psi\circ \pi_{g, s-1}(x))$ for some $a \in A$).
\end{lemma}
The following is a relative analogue of \cite[Proposition 2.9]{GMVIII}.
\begin{lemma} \label{close class}
Let $g\colon X \to Z$ be an $s$-fibration such that the top structure group is a Lie group $A$. Then there exists $\delta > 0$ depending only on $g$ satisfying the following.
Let $\psi\colon \pi_{g, s-1}(X) \to W$ be an $(s-1)$-fibration for another compact ergodic gluing cubespace $W$. Suppose that $D_1$ and $D_2$ are two straight $\psi$-classes with the same image under $\pi_{g, s-1}$ and that the restriction $\pi_{g, s-1}|_{D_1\cup D_2}$ is a $\delta$-embedding. Then $D_1=aD_2$ for some $a \in A$.
\end{lemma}
\begin{proof}
Since $\psi$ is a fibration, by Proposition \ref{fiber of fibration}, the space $B:=\pi_{g, s-1}(D_1)$ is a compact ergodic nilspace. Applying \cite[Theorem 5.2]{GMVII} to the nilspace $B$, we obtain the constant $\varepsilon=\varepsilon(s, s+1, A)$ provided there. Define $\delta=\varepsilon/2$.
Denote by $\sigma_i\colon B \to D_i$ the inverse of $\pi_{g, s-1}$ restricted to $D_i$ for $i=1,2$. Let $f\colon B \to A$ be the continuous function determined by the equation $\sigma_2(y)=f(y).\sigma_1(y)$ for all $y \in B$. Since $D_1$ and $D_2$ are straight classes, for every $c \in C^{s+1}(B)$, $\sigma_1(c)$ and $\sigma_2(c)$ are cubes of $X$. Thus by the relative weak structure theorem (Theorem \ref{RWST}), $\partial^{s+1}(f(c))=0$. Since $\pi_{g, s-1}|_{D_1\cup D_2}$ is a $\delta$-embedding, the diameter of $\Ima (f)$ is less than $\varepsilon$. Applying \cite[Theorem 5.2]{GMVII}, $f$ is constant.
\end{proof}
\begin{remark}
We note that the assumption on $\psi$ in Lemmas \ref{enough straight classes} and \ref{close class} is weaker than the corresponding assumption in the absolute setting in \cite{GMVIII}.
\end{remark}
Combining Lemmas \ref{enough straight classes} and \ref{close class}, the straight $\psi$-classes induce an equivalence relation, which we denote by $\approx_\psi$\index{$\approx_\psi$}. This equivalence relation allows us to construct the desired cubespaces and fibrations stated in Proposition \ref{middle fibration}.
\begin{lemma} \label{constructed space}
Let $g\colon X \to Z$ be an $s$-fibration such that the top structure group is a Lie group $A$. Write $\pi=\pi_{g, s-1}$. Then there exists $\delta >0$ depending only on $g$ satisfying the following.
Let $\psi\colon \pi_{g, s-1}(X) \to W$ be an $(s-1)$-fibration such that $\psi$ is a $\delta$-embedding. Then the equivalence relation $\approx_\psi$ induced by the straight $\psi$-classes is closed. Moreover, let $u\colon W \to Z$ be an $(s-1)$-fibration with $g_{s-1}=u \circ \psi$. Then the following hold:
\begin{enumerate}
\item the quotient map $\varphi\colon X \to X/\approx_\psi$ is a fibration and induces a fibration $\pi'\colon X/\approx_\psi \to W$ and an $s$-fibration $u':=u\circ \pi'$ such that $\varphi$ is horizontal and the diagram below commutes
$$\xymatrixcolsep{5pc}\xymatrix{
X \ar@{-->}[r]^\varphi \ar[d]^A_{\pi} \ar@/^3pc/[drr]^g & X/\approx_\psi \ar@{-->}[d]^{\pi'} \ar@{-->}[dr]^{u'} \\
\pi_{g, s-1}(X) \ar[r]^\psi \ar@/_2pc/[rr]^{g_{s-1}} & W \ar[r]^u & Z;
}$$
\item the top structure group of $u'$ is $A$ and $\pi'=\pi_{u', s-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Based on Lemma \ref{close class}, the fact that the relation $\approx_\psi$ is closed follows from the argument of \cite[Proposition 2.13]{GMVIII}. It is clear from the definition of the equivalence relation that the induced map $\pi'$ is well-defined and $\psi \circ \pi=\pi'\circ \varphi$. From $g_{s-1}=u\circ \psi$, we have
$$g=g_{s-1}\circ \pi =u\circ \psi \circ \pi=u\circ \pi'\circ \varphi=u'\circ \varphi.$$
Thus $\psi$ is the shadow of $\varphi$.
We show $\varphi$ is a fibration. Since $g$ is an $s$-fibration and $u$ is an $(s-1)$-fibration, by a similar argument to the one in the proof of \cite[Lemma 2.14]{GMVIII}, we have that $u'$ has $(s+1)$-uniqueness. By Proposition \ref{inner fibrant}, $\varphi$ is fibrant for $k$-corners of every $k \geq s+1$. From the fact that $\varphi$ is relatively $s$-ergodic (see Definition \ref{relative ergodic}) and the fact that $\psi$ is a fibration, it follows that
$\varphi$ is fibrant for corners of lower dimension. Note that $A$ respects straight $\psi$-classes. Thus $X/\approx_\psi$ inherits an $A$-action from the $A$-action on $X$. Moreover, straightness guarantees that $W$ is exactly the induced orbit space; hence $\pi'=\pi_{u', s-1}$ and $A$ is the top structure group of $u'$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{middle fibration}]
For each fixed $n$, applying Lemma \ref{constructed space} to the $s$-fibration $g^{(n)}\colon Z^{(n)}_\infty \to Y$, we obtain a $\delta_n$ satisfying the desired property. Take $M_n$ large enough that for every $m \geq M_n$, $\psi_{m, \infty}$ is a $\delta_n$-embedding.
Lemma \ref{constructed space} then yields the desired $s$-fibration $h^{(n)}_m \colon Z^{(n)}_m \to Y$ and a horizontal fibration $\varphi^{(n)}_m \colon Z^{(n)}_\infty \to Z^{(n)}_m$ such that $g^{(n)}=h^{(n)}_m \circ \varphi^{(n)}_m$.
In light of Lemma \ref{close class}, following the argument as in the absolute setting, it holds that the fibers of $\varphi^{(n_2)}_{m_2}\circ \beta_{n_2}$ refine the fibers of $\varphi^{(n_1)}_{m_1}\circ \beta_{n_1}$ for $n_1\leq n_2, M_{n_1}\leq m_1\leq m_2$ with $M_{n_2} \leq m_2$.
\end{proof}
\subsection{Relations between relative translations}
For an $s$-fibration $g\colon Z \to Y$ between compact ergodic gluing cubespaces, denote by ${\rm Aut}_i^\varepsilon(g)$\index{${\rm Aut}(X)$!${\rm Aut}_i^\varepsilon(g)$} the $\varepsilon$-neighborhood of the identity in ${\rm Aut}_i(g)$ under the metric
$$d(u, v):=\max_{z \in Z} d(u(z), v(z)).$$
By Corollary \ref{Lieness}, ${\rm Aut}_i(g)$ is a Lie group. Thus ${\rm Aut}_i^\circ(g)$ is generated by ${\rm Aut}_i^\varepsilon(g)$ for $\varepsilon$ small enough. Theorem \ref{endow cubestructure} is then a consequence of the following general result.
\begin{theorem} \label{pushforward and backward}
Fix $i \geq 1$. Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h\circ \varphi$ and $g, h$ are Lie fibrations.
Then for any $\varepsilon > 0$ there is $\delta > 0$ satisfying the following property. For any $u \in {\rm Aut}_i^\delta(g)$ there is
$u' \in {\rm Aut}_i^\varepsilon(h)$, and conversely for any $u' \in {\rm Aut}_i^\delta(h)$ there is $u \in {\rm Aut}_i^\varepsilon(g)$, such that
$u'\circ \varphi=\varphi\circ u$.
\end{theorem}
We introduce vertical fibrations \cite[Definition 3.2]{GMVIII} as follows.
\begin{definition}
Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h \circ \varphi$. We say $\varphi$ is a {\bf vertical fibration}\index{vertical fibration} if for any $x, x' \in X$ such that $\pi_{h, s-1}\circ \varphi(x)=\pi_{h, s-1} \circ \varphi(x')$, one obtains that $\pi_{g, s-1}(x)=\pi_{g, s-1}(x')$.
\end{definition}
We can factor a fibration in a relative way, in analogy with \cite[Proposition 3.3]{GMVIII}.
\begin{proposition} \label{fibration factorization}
Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h \circ \varphi$. Then there exists a compact ergodic gluing cubespace $W$ and an $s$-fibration $k\colon W \to Z$ such that $\varphi$ factors as $$\varphi=\varphi_h \circ \varphi_v$$
for some vertical fibration $\varphi_v\colon X \to W$ (with respect to $g$ and $k$) and horizontal fibration $\varphi_h\colon W \to Y$ (with respect to $k$ and $h$). In summary, the following diagram is commutative:
$$\xymatrixcolsep{5pc}\xymatrix{
X \ar[dd]_{\varphi} \ar@{-->}[dr]^{\varphi_v} \ar@/^3pc/[ddrr]^g & &\\
& W \ar@{-->}[dl]_{\varphi_h} \ar@{-->}[dr]^k & \\
Y \ar[rr]^h & & Z.
}$$
\end{proposition}
\begin{proof}
Define $W= X/\sim_{\varphi, s-1}$. Denote by $\varphi_v$ the quotient map $ X \to W$ and by $\varphi_h$ the induced map $ W \to Y$. Define $k:=h \circ \varphi_h$. The desired properties are then routine to check from the definitions.
\end{proof}
The following lemma is an analogue of \cite[Proposition 3.5]{GMVIII}.
\begin{lemma} \label{pushforward criterion}
Let $\varphi \colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h \circ \varphi$. Let $u \in {\rm Aut}_i(g)$. Suppose that the fibers of $\varphi$ refine the fibers of $\varphi \circ u$. Then there is a unique relative translation $u' \in {\rm Aut}_i(h)$ such that $u'\circ \varphi=\varphi \circ u$.
\end{lemma}
\begin{proof}
By Proposition \ref{universal property}, there exists a unique fibration $u'\colon Y \to Y$ such that $u'\circ \varphi=\varphi \circ u$.
We check that $u' \in {\rm Aut}_i(h)$.
Firstly, since $u$ fixes the fibers of $g$, we see that $u'$ fixes the fibers of $h$. Let $n\geq i$, $y \in Y$, and $c \in C^n(h^{-1}(h(y)))$. Since fibrations are surjective \cite[Corollary 7.6]{GMVI}, we have $c=\varphi(\widetilde{c})$ for some $\widetilde{c} \in C^n(X)$. Choose $x \in X$ such that $\varphi(x)=y$. It follows that $\widetilde{c} \in C^n(g^{-1}(g(x)))$. Let $F \subseteq \{0, 1\}^n$ be a face of codimension $i$. Since $u \in {\rm Aut}_i(g)$, we have $[u]_F.\widetilde{c} \in C^n(g^{-1}(g(x)))$. Thus $[u']_F.c=\varphi([u]_F.\widetilde{c})$ is a cube of $h^{-1}(h(y))$.
\end{proof}
The following lemma is an analogue of \cite[Lemma 3.6]{GMVIII}.
\begin{lemma} \label{vertical independence}
Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h\circ \varphi$ and $\varphi$ is vertical. Fix $u \in {\rm Aut}_1(g)$. Then for any $x, x' \in X$ with $\varphi(x)=\varphi(x')$, one has $\varphi \circ u(x)=\varphi \circ u(x')$.
\end{lemma}
\begin{proof}
Since $\varphi$ is vertical, we have $x\sim_{s-1}x'$ and hence $\llcorner^s(x, x')$ is a cube. Moreover, since $u \in {\rm Aut}_1(g)$, we obtain an $(s+1)$-cube $[\llcorner^s(x, x'), \llcorner^s(u(x), u(x'))]$. Hence $c:=\varphi([\llcorner^s(x, x'), \llcorner^s(u(x), u(x'))])$ is also an $(s+1)$-cube. On the other hand, by ergodicity of $Y$, we have another $(s+1)$-cube $c':=\varphi([\Box^s(x), \Box^s(u(x))])$. Note that $c|_{\llcorner^{s+1}}=c'|_{\llcorner^{s+1}}$ and
$h(c)=\Box^{s+1}(g(x))=h(c')$. By the $(s+1)$-uniqueness of $h$, it holds that
$$\varphi(u(x))=c'(\overrightarrow{1})=c(\overrightarrow{1})=\varphi(u(x')).$$
\end{proof}
Combining Lemmas \ref{pushforward criterion} and \ref{vertical independence}, we complete the push-forward part of Theorem \ref{pushforward and backward} for vertical fibrations.
\begin{proposition} \label{vertical pushforward}
Let $\varphi\colon X \to Y, g \colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h\circ \varphi$. Suppose that $\varphi$ is vertical.
Then there is a continuous homomorphism $\Phi\colon {\rm Aut}_i(g) \to {\rm Aut}_i(h)$ such that
$$\Phi(u) \circ \varphi =\varphi \circ u$$
for any $u \in {\rm Aut}_i(g)$.
\end{proposition}
Now we deal with the horizontal case.
\begin{proposition} \label{horizontal pushforward}
Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h\circ \varphi$. Suppose that $\varphi$ is horizontal and $g, h$ are Lie fibrations. Then for any $\varepsilon > 0$ there exists $\delta > 0$ satisfying the following property. For any $u \in {\rm Aut}_i^\delta(g)$ there is
$u' \in {\rm Aut}_i^\varepsilon(h)$ such that $u'\circ \varphi=\varphi \circ u$.
\end{proposition}
\begin{proof}
By Lemma \ref{pushforward criterion}, it suffices to show that, for $\delta$ small enough, every map in ${\rm Aut}_i^\delta(g)$ preserves $\varphi$-fibers. We prove this by induction on $s$.
The case $s=0$ is trivial as $\varphi$ is a cubespace isomorphism.
Denote by $\psi \colon \pi_{g, s-1}(X) \to \pi_{h, s-1}(Y)$ the shadow of $\varphi$. By the induction hypothesis, there exists $\delta_0 > 0$ such that any element of $ {\rm Aut}_i^{\delta_0}(g_{s-1})$ preserves $\psi$-fibers. Let $u \in {\rm Aut}_i^\delta(g)$ for $\delta$ to be determined later. By Proposition \ref{canonical map}, $u$ induces a map $v:=\pi_\ast(u) \in {\rm Aut}_i(g_{s-1})$. For $\delta$ small enough, we have $v \in {\rm Aut}_i^{\delta_0}(g_{s-1})$, and hence $v$ preserves $\psi$-fibers.
Let $y \in Y$ and fix a point $x_0 \in \varphi^{-1}(y)$. Define $z=\varphi (u(x_0))$. We show that $D_1:=u(\varphi^{-1}(y))$ coincides with $D_2:=\varphi^{-1}(z)$ by applying Lemma \ref{close class} to the Lie fibration $g$. Note that $u(x_0) \in D_1\cap D_2$.
By Lemma \ref{fiber class}, $D_2$ is a straight $\psi$-class. Since $v$ preserves $\psi$-fibers, we deduce that $D_1$ is also a straight $\psi$-class. Denote by $\delta'$ the constant $\delta$ from Lemma \ref{close class}. Finally, one can check that $\pi_{g, s-1}|_{D_1\cup D_2}$ is a $\delta'$-embedding for $\delta$ small enough. This forces $D_1=D_2$.
\end{proof}
Following the argument of \cite[Proposition 3.9]{GMVIII}, based on Propositions \ref{vertical pushforward} and \ref{horizontal pushforward}, and Lemmas \ref{identity orbit is open} and \ref{discreteness}, we obtain the following proposition.
\begin{proposition} \label{pushbackward}
Let $\varphi\colon X \to Y, g\colon X \to Z$ and $h\colon Y \to Z$ be three $s$-fibrations such that $g=h \circ \varphi$ and $g, h$ are Lie fibrations. Then for any $\varepsilon > 0$ there exists $\delta > 0$ satisfying the following property. For any $u' \in {\rm Aut}_i^\delta(h)$ there is
$u \in {\rm Aut}_i^\varepsilon(g)$ such that $u' \circ \varphi=\varphi \circ u$.
\end{proposition}
\begin{proof}[{\bf Proof of Theorem \ref{pushforward and backward}}]
By Proposition \ref{fibration factorization}, in order to prove the first statement, it is enough to consider separately the cases where $\varphi$ is vertical and where it is horizontal. These two cases are dealt with in
Propositions \ref{vertical pushforward} and \ref{horizontal pushforward} respectively. The second statement follows from Proposition \ref{pushbackward}.
\end{proof}
\section{Isomorphisms between fibers of a fibration}\label{sec:Isomorphisms between fibers of a fibration}
In this section we prove Theorem \ref{fiber isomorphism} which gives a natural condition under which the fibers of a Lie-fibered $s$-fibration are isomorphic as subcubespaces.
Let us first recall the covering homotopy theorem \cite[Theorem 2.1]{C17} \cite[Theorem 11.3]{Ste51}.
\begin{theorem}[Covering homotopy theorem]\label{thm:covering_homotopy}
Let $p\colon E \to B$ and $q\colon Z \to Y$ be fiber bundles with the same fiber $F$, where $B$ is compact. Let $\widetilde{h_0}$ be a bundle map covering a map $h_0$:
$$
\xymatrix{
E \ar[r]^{\widetilde{h_0}} \ar[d]^p & Z \ar[d]^q\\
B \ar[r]^{h_0} & Y.
}$$
Let $H\colon B\times I \to Y$ be a homotopy of $h_0$, i.e. $h_0=H|_{B \times \{0\}}$. Then there exists a covering $\widetilde{H}$ of the homotopy $H$ by a bundle map
$$
\xymatrix{
E \times I \ar@{-->}[r]^{\widetilde{H}} \ar[d]^{p\times {\rm id}} & Z \ar[d]^q \\
B \times I \ar[r]^H & Y.
}$$
\end{theorem}
\begin{proof}[Proof of Theorem \ref{fiber isomorphism}]
We first modify the homotopy argument of \cite[Theorem 5.1]{R78} to obtain a local homeomorphism and then repair it to a cubespace isomorphism. By compactness, the isomorphism can be built globally.
Denote by $I$ the interval $[0, 1]$. Since $Y$ is path-connected, there exists a continuous map $H_0\colon \{y_0\}\times I \to Y$ such that $H_0(y_0, 0)=y_0$ and $H_0(y_0, 1)=y_1$. Define $R=\Ima(H_0)$ and $y_t=H_0(y_0, t)$. For every $i=0, 1, \ldots, s-1$ abbreviate by $\pi_i$ the map $\pi_{g, i}\colon Z/\sim_{g, i+1} \to Z/\sim_{g,i}$. We first show for every $0 \leq i \leq s$ there exists an interval $I_i:=[0, t_i]$ for some $t_i=t_i(y_0)> 0$ and a continuous map
$$G_i\colon g_i^{-1}(y_0)\times I_i \to g_i^{-1}(R)$$
such that
\begin{enumerate}
\item $G_i(x, 0)=x$ for all $x \in g_i^{-1}(y_0)$, i.e. $G_i$ is a homotopy of the inclusion map;
\item $G_i(\cdot, t)\colon g_i^{-1}(y_0) \to g_i^{-1}(y_t)$ is a cubespace isomorphism for every $t \in I_i$.
\end{enumerate}
The case $i=0$ is clear since the fibers of $g_0$ are singletons and $G_0:=H_0$ works.
Inductively, suppose that we have a continuous map $G_{s-1}\colon g_{s-1}^{-1}(y_0)\times I_{s-1} \to g_{s-1}^{-1}(R)$ with the desired properties. Consider the fiber bundle map given by the inclusion map
$$\xymatrix{
g^{-1}(y_0) \ar[d]^{\pi_{s-1}} \ar[r]^i & g^{-1}(R) \ar[d]^{\pi_{s-1}} \\
(g_{s-1})^{-1}(y_0) \ar[r]^i & (g_{s-1})^{-1}(R)
}$$
Clearly $G_{s-1}$ is a homotopy of the inclusion map $i: (g_{s-1})^{-1}(y_0) \to (g_{s-1})^{-1}(R)$.
Applying Theorem \ref{thm:covering_homotopy} to this bundle map, we obtain a fiber bundle map $H_s$ such that the following diagram
$$\xymatrix{
g^{-1}(y_0)\times I_{s-1} \ar[d]^{\pi_{s-1}\times {\rm id}} \ar@{-->}[r]^{H_s} & g^{-1}(R) \ar[d]^{\pi_{s-1}} \\
(g_{s-1})^{-1}(y_0) \times I_{s-1} \ar[r]^{G_{s-1}} & (g_{s-1})^{-1}(R)
}$$
commutes and $H_s(x,0)=x$ for all $x \in g^{-1}(y_0)$.
In particular, for every $t \in I_{s-1}$, $H_s$ restricts to a continuous map from $g^{-1}(y_0)\times\{t\}$ to $g^{-1}(y_t)$.
Since $\pi_{s-1}$ is a principal $A_s(g)$-bundle map, this restriction is a homeomorphism.
By the induction hypothesis $G_{s-1}(\cdot, t)$ is a cubespace isomorphism for every $t \in I_{s-1}$. Thus the discrepancy
$$\rho_{H_s(\cdot, t)} \colon C^{s+1}(g^{-1}(y_0)) \to A_s(g)$$
is well defined for every $t \in I_{s-1}$. Since $H_s$ is a homotopy, for $t$ small enough, say $t \in I_s:=[0, t_s]$ for some $0 < t_s \leq t_{s-1}$, $\rho_{H_s(\cdot, t)}$ is of sufficiently small norm. Thus by \cite[Theorem 4.11]{GMVII} (or \cite[Lemma 3.19]{ACS12}), $\rho_{H_s(\cdot, t)}$ is a coboundary map. Furthermore, since $H_s$ is continuous, it can be repaired to a (continuous) homotopy
$$G_s: g^{-1}(y_0)\times I_s \to g^{-1}(R)$$
which restricts to a cubespace isomorphism from $g^{-1}(y_0)\times \{t\}$ to $g^{-1}(y_t)$ for every $t \in I_s$.
This completes the inductive step.
Finally, the above argument shows in particular that for every $t \in I$ there exists $\delta_t> 0$ such that $g^{-1}(y_u)$ is isomorphic to $g^{-1}(y_{u'})$ as cubespaces for all $u, u'$ in the ball $(t-\delta_t, t+\delta_t)$. By compactness, we obtain a finite open cover of $I$ consisting of such balls, over each of which the $g$-fibers are isomorphic as cubespaces. Then a finite composition of isomorphisms gives an isomorphism between $g^{-1}(y_0)$ and $g^{-1}(y_1)$.
\end{proof}
\begin{remark}
In the statement of Theorem \ref{fiber isomorphism}, it is desirable to drop the assumption that $g$ is a Lie fibration. Let us explain the obstruction. Note that the proof of Theorem \ref{fiber isomorphism} is based on an induction with a finite number of steps. To deal with the general case, one may apply the factorization of $g$ as an inverse limit of Lie fibrations in Theorem \ref{relative inverse limit}. However,
after infinitely many induction steps, one fails to obtain a homotopy on some strictly positive time interval. This is the obstruction to constructing the desired cubespace isomorphism.
\end{remark}
\section{Factor maps between minimal distal systems are fibrations}\label{sec:Factor maps between minimal distal systems are fibrations}
In this section, we prove that every factor map between minimal distal systems is a fibration for the induced cubespace morphism.
\begin{definition} \label{def:principal abelian group}
Let $(G, X)$ be a dynamical system and $K$ a compact group acting on $X$ such that the action of $K$ commutes with the action by $G$. We say a factor map $\pi: (G, X) \to (G, Y)$ is a {\bf (topological) group extension by $K$} if
$$R_\pi:=\{(x, x') \in X^2: \pi(x)=\pi(x')\}=\{(x, kx): x \in X, k \in K\}.$$
If furthermore $K$ is abelian and acts freely on $X$, we say $\pi$ is a {\bf principal abelian group extension}\footnote{Compared with Definition \ref{def:principal fiber bundle}, it is easy to see that a group extension $\pi\colon (G, X) \to (G, Y)$ by a compact group $K$ acting freely on $X$ is a principal $K$-bundle.}.
\end{definition}
\begin{lemma} \label{K invariant}
Let $(G, X)$ be a dynamical system and $\pi: X \to Y$ a group extension by a compact group $K$. Then
for every $\ell \geq 0$ we have
$$K{\bf NRP}^{[\ell]}(X)={\bf NRP}^{[\ell]}(X)$$
where $K$ acts on ${\bf NRP}^{[\ell]}(X)$ by the diagonal action.
\end{lemma}
\begin{proof}
Let $c \in C_G^\ell(X)$ and $k \in K$. We check that $kc \in C_G^\ell(X)$. By definition there exist sequences $g_n \in {\rm HK}^\ell(G)$
and $x_n \in X$ such that $c=\lim g_n(x_n, x_n, \ldots, x_n)$. Thus
$$kc=k(\lim g_n(x_n, x_n, \ldots, x_n))=\lim g_n(k(x_n, x_n, \ldots, x_n)) \in C_G^\ell(X).$$
Now let $(x, y) \in {\bf NRP}^{[\ell]}(X)$. By definition we have $ (x, x, \ldots, x, y) \in C_G^{\ell+1}(X)$. It follows that $k(x, x, \ldots, x, y) \in C_G^{\ell+1}(X)$. In other words, $(kx, ky) \in {\bf NRP}^{[\ell]}(X)$.
\end{proof}
\begin{definition} \label{isometric}
A factor map $\pi: X \to Y$ is an {\bf isometric extension} if there exists a continuous function $d\colon R_\pi \to {\mathbb R}$ such that
\begin{enumerate}
\item for every $y \in Y$ the restriction map $d|_{\pi^{-1}(y)\times \pi^{-1}(y)}$ is a metric on $\pi^{-1}(y)$;
\item $d(gx, gx')=d(x, x')$ for every $g \in G$ and $(x, x') \in R_\pi$.
\end{enumerate}
\end{definition}
The following lemma says that every isometric extension between minimal systems is covered by group extensions.
\begin{lemma} \cite[Page 15]{GlasnerB} \label{isometric extension}
A factor map $\pi\colon X \to Y$ is an isometric extension of minimal systems if and only if there exist a compact group $K$, a closed subgroup $H$ of $K$, and a system $\widetilde{X}$ which is simultaneously a group extension of $X$ by $H$ and a group extension of $Y$ by $K$, such that the
diagram
$$\xymatrix{
\widetilde{X} \ar[r] \ar[dr] & X (\cong \widetilde{X}/H) \ar[d]^\pi \\
& Y (\cong \widetilde{X}/K)
}$$
commutes.
\end{lemma}
The following proposition says that fibrations are stable under inverse limits.
\begin{proposition} \label{limit preserving}
Let $p_{i, i+1}\colon X_{i+1} \to X_i$ be a fibration for every $i=0, 1, 2, \ldots$. Denote by $X$ the inverse limit of $X_i$. Then the induced map $f\colon X \to X_0$ is a fibration. Similarly, the degree of fibrations is also preserved by the inverse limit operation.
\end{proposition}
\begin{proof}
Denote by $p_n$ the projection map $X \to X_n$. Assume that $\lambda$ is a $k$-corner of $X$ such that $f (\lambda)$ extends to a cube $c$ of $X_0$. We need to complete $\lambda$ as a cube via some point $(x_i)_i$ of $X$ such that $f((x_i)_i)=c(\overrightarrow{1})$.
Note that $p_1(\lambda)$ is a $k$-corner of $X_1$ and $p_{0, 1}\circ p_1 (\lambda)= f (\lambda) $ extends to a cube of $X_0$ via $c(\overrightarrow{1})$. Since $p_{0, 1}$ is a fibration, there exists $u_1 \in (p_{0, 1})^{-1}(c(\overrightarrow{1}))$ completing $p_1 (\lambda)$ as a cube of $X_1$.
Now $p_2(\lambda)$ is a $k$-corner of $X_2$ and $p_{1, 2}\circ p_2(\lambda)=p_1 (\lambda)$ extends to a cube of $X_1$ via $u_1$. Since $p_{1, 2}$ is a fibration, there exists $u_2 \in (p_{1,2})^{-1}(u_1)$ completing $p_2(\lambda)$ as a cube of $X_2$. Inductively, if $\alpha$ is a limit ordinal, we obtain a point $(u_i)_{i < \alpha}$ in $X_\alpha$ such that
$$p_{0, \alpha}((u_i)_{i< \alpha})=p_{0, 1}(u_1)=c(\overrightarrow{1}).$$
Here for every $i< \alpha$, $u_i$ completes $p_i(\lambda)=p_{i, \alpha}\circ p_\alpha (\lambda)$ as a cube of $X_i$. Thus $(u_i)_{i < \alpha}$ completes $p_\alpha (\lambda)$ as a cube of $X_\alpha$.
The claim then follows by induction.
\end{proof}
\begin{proof} [Proof of Theorem \ref{dyn_factor_is_fibration}]
The relative Furstenberg structure theorem states that a distal extension of minimal systems is given by a (countable) transfinite tower\footnote{This is defined in \cite[Appendix E14.3, E15.5]{VriesB}.} of isometric extensions \cite[Chapter V, Theorem 3.34]{VriesB}. Applying Proposition \ref{limit preserving}, we reduce to the case where $\pi$ is an isometric extension.
By Lemma \ref{isometric extension} and Proposition \ref{universal property}, we can further reduce to the case where $\pi$ is a group extension by a compact group $K$.
Fix $k \geq 1$. Suppose that $\lambda$ is a $k$-corner of $X$ such that $\pi(\lambda)$ can be extended to a cube $c$ of $Y$. Since $X$ is fibrant, we can extend $\lambda$ to a cube via some point $x_0$ in $X$. It follows that $\pi(x_0)$ and $c(\overrightarrow{1})$ are $(k-1)$-canonically related. Since $(G, X)$ is minimal, from \cite[Theorem 6.1]{GGY18}, we have ${\bf NRP}^{[k-1]}(Y)=(\pi\times \pi)({\bf NRP}^{[k-1]}(X))$. Thus by Proposition \ref{alternative}, we have
$$(\pi(x_0), c(\overrightarrow{1})) \in {\bf NRP}^{[k-1]}(Y)=(\pi\times \pi)({\bf NRP}^{[k-1]}(X)).$$
Thus there exists $(x, z) \in {\bf NRP}^{[k-1]}(X)$ such that $\pi(x)=\pi(x_0)$ and $\pi(z)=c(\overrightarrow{1})$.
Now since $\pi$ is a group extension by $K$, there exists a unique $a \in K$ such that $x_0=ax$. By Proposition \ref{URP}, it suffices to show $(x_0, az) \in {\bf NRP}^{[k-1]}(X)$. Indeed, in such a case, $az$ completes $\lambda$ as a cube and
$\pi(az)=\pi(z)=c(\overrightarrow{1})$.
By Lemma \ref{K invariant}, $K{\bf NRP}^{[k-1]}(X)={\bf NRP}^{[k-1]}(X)$. Since $(x, z) \in {\bf NRP}^{[k-1]}(X)$, it follows that $(x_0, az) =a(x, z) \in {\bf NRP}^{[k-1]}(X)$.
\end{proof}
\section{Extensions of finite degree}\label{sec:extensions of finite degree}
In this section we investigate extensions of finite degree (see Definition \ref{def:extension of degree}).
\begin{proposition}\label{structure0}
Let $s \geq 1$ and let $f \colon (G, X) \to (G, Y)$ be an extension of degree at most $s$ such that $X$ is minimal distal. Then $f$ factors as a tower of principal abelian group extensions:
$$\xymatrix{
(G, X) \ar[r] \ar[d]^f & (G, X/{\bf NRP}^{[s-1]}(f)) \ar[r] & \cdots \ar[r] & (G, X/{\bf NRP}^{[1]}(f)) \ar[dlll] \\
(G, Y).
}$$
\end{proposition}
\begin{proof}
Since $X$ is minimal distal, by \cite[Theorem 7.10]{GGY18}, it is fibrant. From Theorem \ref{dyn_factor_is_fibration}, $f$ is a fibration. By Proposition \ref{fibrant is gluing}, fibrant cubespaces have the gluing property. Thus, applying Proposition \ref{inducing $s$-fibration}, we conclude that $f$ is an $s$-fibration. Since $X$ is minimal, it is an ergodic cubespace and hence so are the induced quotient spaces. From Proposition \ref{alternative}, ${\bf NRP}^{[s]}(f)=\sim_{f, s}$. By Theorem \ref{RWST}, $f$ factors as stated. It suffices to show that every successive map in the tower is a principal abelian group extension. Let us show, for example, that $f_s: (G, X) \to (G, X/{\bf NRP}^{[s-1]}(f))$ is a principal abelian group extension by the structure group $A_s$.
Since ${\bf NRP}^{[s-1]}(f)$ is a $G$-invariant closed equivalence relation, the quotient map $f_s$ is a factor map. By Theorem \ref{RWST}, $f_s$ is an $A_s$-principal fiber bundle. Thus we only need to show that the $A_s$-action commutes with the $G$-action. Recall that in \cite[Page 48]{GGY18}, $A_s$ is constructed as
$$A_s={\bf NRP}^{[s-1]}(f)/\sim_f,$$
where $(x, x')\sim_f (y, y')$ if and only if $f(x)=f(x'), f(y)=f(y')$ and $[\llcorner^s(x,x'), \llcorner^s(y, y')] \in C_G^{s+1}(X)$. Denote by $[x, x']_f$ the equivalence class of $(x, x')$. Fix $x \in X, a \in A_s$ and $t \in G$. We need to show $a(tx)=t(ax)$.
Set $x'=ax$. By definition \cite[Page 49]{GGY18}, $(x, x') \in {\bf NRP}^{[s-1]}(f)$ and $a=[x, x']_f$. Applying $(\Box^s(e), \Box^s(t)) \in {\rm HK}^{s+1}(G)$ to the $(s+1)$-cube $[\llcorner^s(x,x'), \llcorner^s(x,x')]$, we obtain another $(s+1)$-cube $[\llcorner^s(x,x'), \llcorner^s(tx,tx')]$. Note that $f(tx)=tf(x)=tf(x')=f(tx')$. It follows that
$$a=[tx, tx']_f=[tx, t(ax)]_f.$$
In particular, by definition of $A_s$-actions, we obtain $a(tx)=t(ax)$.
\end{proof}
Recall the definition of \emph{maximal $s$-fibration} in Proposition \ref{maximal fibration}. In \cite[Theorem 7.15]{GGY18}, it was shown that for a minimal system $(G, X)$, the maximal $s$-nilspace factor of $(X, C^\bullet_G(X))$ coincides with $X/{\bf NRP}^{[s]}(X)$. As a relative version of this, we have
\begin{proposition}\label{dynamical maximal}
Let $f\colon X \to Y$ be a factor map of minimal distal systems. Then $g\colon X/{\bf NRP}^{[s]}(f) \to Y$ is the maximal $s$-fibration for every $s\geq 0$ and $g$ is an extension of degree at most $s$ relative to $Y$.
\end{proposition}
\begin{proof}
From Proposition \ref{alternative}, we know ${\bf NRP}^{[s]}(f)=\sim_{f, s}$. By Proposition \ref{maximal fibration}, $g\colon X/{\bf NRP}^{[s]}(f) \to Y$ is the maximal $s$-fibration. Moreover, ${\bf NRP}^{[s]}(g)=\sim_{g,s}=\Delta$. Thus $g$ is an extension of degree at most $s$ relative to $Y$.
\end{proof}
In general, given a factor map $f\colon X \to Y$ of minimal systems, from Proposition \ref{relative relation}, the induced map $g\colon X/{\bf NRP}^{[k]}(f) \to Y$ is a distal extension and has $(k+1)$-uniqueness.
\begin{question} \label{fibration question}
\begin{enumerate}
\item Is $g$ a fibration?
\item More generally, if $f$ is distal, can one conclude that $f$ is a fibration? In \cite[Example 3.10]{TY13}, Tu and Ye considered the projection map of the Denjoy minimal system onto the unit circle and showed that it is not a fibration. However it is also not distal.
\end{enumerate}
\end{question}
Recall that a minimal system $(G, X)$ is called a system of degree at most $s$ if ${\bf NRP}^{[s]}(X)=\Delta$. In \cite[Theorem 7.14]{GGY18}, it is proved that $(G, X)$ is a system of degree at most $s$ if and only if it is an $s$-nilspace.
As a relative analogue of this result, we remark that if the answer to Question \ref{fibration question}(1) is affirmative, then we will obtain a dynamical characterization of $s$-fibrations. That is, for a factor map $f: X \to Y$ of minimal systems, $f$ is an extension of degree at most $s$ relative to $Y$ if and only if $f$ is an $s$-fibration.
\begin{proof}[Proof of Theorem \ref{thm:structure_finite_degree_extension}]
By Proposition \ref{dynamical maximal}, $\pi$ is an $s$-fibration.
Applying Theorem \ref{relative inverse limit}, we obtain a
factorization of $\pi$
by $s$-fibrations $p_{m, n} \colon Z_n \to Z_m$ and Lie fibrations
$h_n \colon Z_n \to Y$ which are compatible with each other. It
suffices to show that every $p_{m, n}$ is a factor map. We induct on
the degree $s$ in order to prove this.
The case $s=0$ is trivial since then $\pi$ is an isomorphism. Assume
that the statement is true for $s-1$.
Recall that in the proof of Theorem \ref{relative inverse limit}, the
space $Z_n$ is constructed as a quotient space $X/\approx_{\psi_n}$
based
on a fibration map $\psi_n \colon \pi_{\pi, s-1}(X) \to B_n$ for some
cubespace $B_n$. To show $p_{m, n}$ is a factor map, it suffices to
show that the
equivalence relation $\approx_{\psi_n}$ is $G$-invariant. By the
induction hypothesis, $\psi_n$ is $G$-equivariant. Let
$x\approx_{\psi_n} x'$ for some $x, x' \in X$. Then for every $g \in G$,
$$\psi_n(\pi_{\pi, s-1}(gx))=\psi_n(g\pi_{\pi, s-1}(x))=g\psi_n(\pi_{\pi, s-1}(x))=g\psi_n(\pi_{\pi, s-1}(x'))=\psi_n(\pi_{\pi, s-1}(gx')).$$
Thus $gx\approx_{\psi_n} gx'$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\vspace{-0.25em}
Superstring theory is a promising candidate for a theory of quantum gravity.
The theory is consistently defined in 10-dimensional space-time, although the universe we observe is 4-dimensional.
Compactification is a mechanism that describes how 4-dimensional space-time is effectively realized at low energies in superstring theory.
Within this mechanism, there are innumerably many perturbatively consistent ways to compactify the extra six dimensions.
It is therefore difficult to single out, at the perturbative level, a unique vacuum that corresponds to our universe.
The type IIB matrix model \cite{Ishibashi:1996xs} is a promising candidate for a non-perturbative formulation of superstring theory.
This model is formally obtained by the dimensional reduction \cite{Eguchi:1982nm} of $\mathcal{N}=1$ supersymmetric SU($N$) Yang-Mills theory from 10D to 0D.
The model consists of $N\times N$ Hermitian matrices.
Space-time does not exist a priori, but it emerges from the matrix degrees of freedom.
The Euclidean version of the model was studied analytically using the Gaussian expansion method in \cite{Nishimura:2001sx,Kawai:2002jk,Aoyama:2006rk,Nishimura:2011xy}, and the results predict a spontaneous symmetry breaking (SSB) of the spatial symmetry from SO(10) to SO(3).
This prediction was confirmed by the first principles calculations using the complex Langevin method (CLM) to avoid a notorious sign problem in \cite{Anagnostopoulos:2013xga,Anagnostopoulos:2015gua,Anagnostopoulos:2017gos,Anagnostopoulos:2020xai}.
On the other hand, the Lorentzian version of the model was studied numerically using an approximation that avoids the sign problem.
The results show the emergence of (3+1)-dimensional space-time \cite{Kim:2011cr, Ito:2013qga, Ito:2013ywa, Ito:2015mxa, Ito:2015mem}; however, due to the approximation, the resulting space does not have a continuous structure \cite{Aoki:2019tby}.
Quite recently, we studied a fermion-quenched version of the Lorentzian type IIB matrix model without this approximation, using the CLM \cite{Nishimura:2019qal, Hirasawa:2021xeh}.
We found that the Lorentzian model, as it stands, is equivalent to the Euclidean one under a Wick rotation \cite{Hatakeyama:2021ake}.
In this work, to obtain results inequivalent to the Euclidean one, we use a Lorentz invariant ``mass'' term.
In addition, we study the SUSY model and add a fermionic mass term to avoid the singular drift problem in the CLM \cite{Ito:2016efb}.
We found that the SSB of the SO(9) spatial symmetry occurs; however, when the fermion mass is large, only a 1-dimensional space expands exponentially.
We expect an expanding 3-dimensional space to appear when the fermionic mass term is sufficiently small.
\vspace{-0.5em}
\section{Definition of the type IIB matrix model}
\vspace{-0.25em}
The partition function of the Lorentzian type IIB matrix model is written as
\begin{equation}
Z=\int dA d\Psi d\bar{\Psi}\ e^{i\left( S_{\rm b} + S_{\rm f} \right)},
\end{equation}
\begin{equation}
S_{\rm b}=-\frac{N}{4} {\rm Tr}\left\{ -2[A_0, A_i]^2 + [A_i,A_j]^2 \right\},
\end{equation}
\begin{equation}
S_{\rm f}=-\frac{N}{2} {\rm Tr}\left\{ \bar{\Psi}_\alpha (C\Gamma^\mu)_{\alpha\beta}[A_\mu,\Psi_{\beta}] \right\},
\end{equation}
where $A_\mu$ and $\Psi_\alpha$ are $N\times N$ Hermitian matrices, $\mu$ runs from 0 to 9, and $\alpha$ runs from 1 to 16.
$\Gamma^\mu$ are 10-dimensional gamma matrices after the Weyl projection, and $C$ is the charge conjugation matrix.
This model has $\mathcal{N}=2$ supersymmetry (SUSY), and the SUSY algebra realizes translations by a shift of the matrices $A_\mu \to A_\mu + \alpha_\mu I$, where $I$ is the unit matrix.
Therefore, the eigenvalues of the matrices $A_\mu$ can be interpreted as the space-time coordinates.
After the integration over the fermionic matrices $\Psi_\alpha$, we obtain the following partition function:
\begin{equation}
Z=\int dA\ e^{iS_{\rm b}} {\rm Pf}{\mathcal M}(A_0,A_i),
\end{equation}
where $\mathcal{M}$ is the Dirac operator, and ``Pf'' stands for Pfaffian.
We perform a Wick rotation defined by
\begin{equation}
S_{\rm b} \to \tilde{S}_{\rm b} = N\ e^{i\frac{\pi}{2} u}\ {\rm Tr} \left\{ \frac{1}{2}e^{-i\pi u}[\tilde{A}_0, \tilde{A}_i]^2 - \frac{1}{4}[\tilde{A}_i,\tilde{A}_j]^2 \right\},
\end{equation}
\begin{equation}
\mathcal{M}(A_0,A_i) \to \mathcal{M}(e^{-i\frac{\pi}{2}u}A_0,A_i),
\end{equation}
where $u=0$ and $u=1$ correspond to the Lorentzian and Euclidean model respectively, and we omit an irrelevant overall phase factor for $\mathcal{M}$ because it can be absorbed by a redefinition of the fermionic matrices.
This Wick rotation is equivalent to the following contour deformation:
\begin{equation}
\begin{split}
A_0 &\to e^{-i\frac{\pi}{2}u}e^{i\frac{\pi}{8}u}\tilde{A}_0 = e^{-i\frac{3}{8}\pi u}\tilde{A}_0,\\
A_i &\to e^{i\frac{\pi}{8}u}\tilde{A}_i.
\end{split}
\label{eq:cont_deform}
\end{equation}
Cauchy's theorem implies that $\braket{\mathcal{O}(e^{-i\frac{3}{8}\pi u}\tilde{A}_0, e^{i\frac{\pi}{8}u}\tilde{A}_i)}_u$ is independent of $u$.
This fact means that the Lorentzian version of this model is equivalent to the Euclidean one under the above Wick rotation.
We confirmed this fact using complex Langevin simulations of the fermion-quenched model in \cite{Hatakeyama:2021ake} (see Fig.~\ref{fig:equive_Euc_Lor}).
\begin{figure}
\centering
\includegraphics[scale=0.33]{fig/TrA0sq.pdf}
\includegraphics[scale=0.33]{fig/TrAisq.pdf}
\caption{${\rm Tr}(A_0)^2$ (Left) and ${\rm Tr}(A_i)^2$ (Right) obtained by simulations of a fermion quenched model. ``E'' and ``L'' correspond to the Euclidean and Lorentzian versions of the model.}
\label{fig:equive_Euc_Lor}
\vspace{-1em}
\end{figure}
In order to obtain a large-$N$ limit which is inequivalent to the Euclidean model, we add to the action a Lorentz invariant ``mass'' term given by
\begin{equation}
S_\gamma = -\frac{1}{2} N \gamma {\rm Tr}(A_\mu)^2 = \frac{1}{2} N \gamma \left\{ {\rm Tr}(A_0)^2 - {\rm Tr}(A_i)^2 \right\},
\end{equation}
where $\gamma$ is the Lorentz invariant ``mass'' parameter.
This term was introduced for the first time in \cite{Steinacker:2017vqw} as an IR regulator, and has been studied at the perturbative level in \cite{Steinacker:2017bhb, Sperling:2018xrm, Sperling:2019xar, Steinacker:2019dii, Steinacker:2019awe, Steinacker:2019fcb, Hatakeyama:2019jyw, Steinacker:2020xph, Fredenhagen:2021bnw, Steinacker:2021yxt, Asano:2021phy, Karczmarek:2022ejn, Battista:2022hqn}.
In particular, in \cite{Hatakeyama:2019jyw} the authors found
that for $\gamma > 0$, typical classical solutions of fixed dimensionality describe the emergence of an expanding (3+1)-dimensional space-time.
In this work, we perform a first principles calculation of this model to study whether such a vacuum appears dynamically.
\vspace{-0.5em}
\section{Complex Langevin method}
\vspace{-0.25em}
In this section, we explain the complex Langevin method \cite{Parisi:1983mgm,Klauder:1983sp}, which is a method used to overcome the notorious sign problem.
Since the method is an extended version of the (real) Langevin method, we start with a brief review of the latter.
Here we consider a lattice theory that is described by a real dynamical variable $\phi_n$, where $n$ is a label for the lattice points, and we assume that the action $S(\phi)$ takes real values.
The partition function is given by
\begin{equation}
Z=\int \prod_n d\phi_n e^{-S\left(\phi_n\right)}.
\end{equation}
In the real Langevin method, the dynamical variable is a solution of the Langevin equation
\begin{equation}
\frac{d\phi_n}{dt_{\rm L}} = -\frac{\partial S}{\partial \phi_n} + \eta_n(t_{\rm L}),
\end{equation}
where $\partial S/\partial \phi_n$ is called the drift term, $t_{\rm L}$ is a fictitious time, the so-called Langevin time, and $\eta_n(t_{\rm L})$ is a real Gaussian noise with zero mean, and variance $\sigma^2=2$.
By solving the Fokker-Planck equation, one can confirm that the equilibrium probability distribution for $\phi$ is proportional to $e^{-S(\phi)}$.
If the action takes complex values, then so does the drift term, and the real variable $\phi_n$ must be complexified.
We denote the complexified variable by $\varphi_n$; it is a solution of the complex Langevin equation
\begin{equation}
\frac{d\varphi_n}{dt_{\rm L}} = -\frac{\partial S}{\partial \varphi_n} + \eta_n(t_{\rm L}),
\end{equation}
where $\eta_n(t_{\rm L})$ is a real Gaussian noise with zero mean and variance $\sigma^2=2$.
It is known that the complex Langevin method does not always yield correct results.
A criterion for the correct convergence was discovered in \cite{Nagata:2016vkn}.
Its implementation requires checking that the histogram of the drift term falls off exponentially or faster with its magnitude, which is easy to do in the simulations.
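As a minimal illustration of the method (our own toy example, not part of the simulation code used in this work), consider the one-variable Gaussian model $S(x)=\frac{\sigma}{2}x^2$ with complex $\sigma$ (${\rm Re}\,\sigma>0$), for which the exact expectation value is $\langle x^2\rangle = 1/\sigma$. A complex Langevin run with the complexified variable $z$ reproduces this value:

```python
import numpy as np

# Toy complex Langevin run for S(x) = (sigma/2) x^2 with complex sigma.
# The drift is -dS/dx = -sigma * x, and the real variable x is complexified to z.
# Exact result for comparison: <x^2> = 1/sigma.
rng = np.random.default_rng(0)
sigma = 1.0 + 0.5j
dt, n_steps, n_burn = 0.005, 400_000, 50_000

z = 0.0 + 0.0j
samples = []
# Real Gaussian noise with variance 2*dt per step, as in the Langevin equation.
noise = rng.normal(0.0, np.sqrt(2.0 * dt), size=n_steps)
for step in range(n_steps):
    z += -sigma * z * dt + noise[step]   # Euler discretization of dz = -S'(z) dt + eta dt
    if step >= n_burn:
        samples.append(z * z)

z2 = np.mean(samples)
print(z2, 1.0 / sigma)   # the two complex numbers should be close
```

For this Gaussian action the drift distribution is well localized, so the convergence criterion mentioned above is satisfied and the run converges to the exact answer up to statistical and discretization errors.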
We apply the CLM to the Lorentzian type IIB matrix model.
In the simulations we fix a gauge in which $A_0$ is diagonal
\begin{equation}
A_0 = {\rm diag}(\alpha_1, \alpha_2, \dots, \alpha_N), \hspace{2em} \alpha_1 \le \alpha_2 \le \dots \le \alpha_N.
\label{eq:daiag_a0}
\end{equation}
In order to realize this ordering, we use a change of variables suggested in \cite{Nishimura:2019qal}
\begin{equation}
\alpha_1 = 0, \hspace{2em} {\rm and} \hspace{2em}\alpha_i = \sum_{a=1}^{i-1} e^{\tau_a} \hspace{1em}(2\le i \le N),
\end{equation}
where the $\tau_a$ are the new dynamical variables.
The ordering (\ref{eq:daiag_a0}) is then automatically realized.
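The automatic ordering can be checked directly; the following small script (with hypothetical function names of our own) maps unconstrained $\tau_a$ to the ordered $\alpha_i$:

```python
import numpy as np

def alphas_from_taus(taus):
    """Map unconstrained real variables tau_a to ordered eigenvalues alpha_i.

    alpha_1 = 0 and alpha_i = sum_{a=1}^{i-1} exp(tau_a), so each successive
    difference equals exp(tau_a) > 0 and the ordering holds automatically.
    """
    increments = np.exp(np.asarray(taus, dtype=float))
    return np.concatenate(([0.0], np.cumsum(increments)))

rng = np.random.default_rng(1)
taus = rng.normal(size=63)          # N - 1 = 63 unconstrained variables for N = 64
alphas = alphas_from_taus(taus)
print(np.all(np.diff(alphas) > 0))  # True: the ordering is automatic
```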
Then $\tau_a$ and $A_i$ are complexified, and we generate configurations using the complex Langevin equations
\begin{equation}
\begin{split}
\frac{d\tau_a}{dt_{\rm L}} &= -\frac{\partial S}{\partial \tau_a} + \eta_{a}(t_{\rm L}), \\
\frac{d(A_i)_{ab}}{dt_{\rm L}} &= -\frac{\partial S}{\partial (A_i)_{ba}} + (\eta_i)_{ab}(t_{\rm L}),
\end{split}
\end{equation}
where $\eta_{a}(t_{\rm L})$ is a real Gaussian noise, and the matrices $(\eta_i)_{ab}(t_{\rm L})$ are Hermitian matrices whose elements are generated from Gaussian noise.
The drift terms $\frac{\partial S}{\partial \tau_a}$ and $\frac{\partial S}{\partial (A_i)_{ba}}$ are computed for real variables $\tau_a$ and Hermitian matrices $(A_i)_{ab}$, and then we complexify $\tau_a$ and $(A_i)_{ab}$, thereby doing an analytical continuation to preserve holomorphicity.
For all our results we use the above mentioned criterion to check the correct convergence of the CLM.
When there are fermionic degrees of freedom, near-zero eigenvalues of the Dirac operator produce large drift terms, which leads to the so-called singular drift problem.
This problem is one of the causes of wrong convergence of the CLM.
To avoid this problem, we introduce a fermionic mass term given by
\begin{equation}
S_{m_{\rm f}} = iNm_{\rm f} {\rm Tr}[\bar{\Psi}_\alpha (\Gamma_7\Gamma_8^\dagger\Gamma_9)_{\alpha\beta}\Psi_\beta].
\end{equation}
This fermionic mass term has been used successfully in the simulations of the Euclidean model \cite{Anagnostopoulos:2017gos,Anagnostopoulos:2020xai} \footnote{
The authors in \cite{Kumar:2022giw} have proposed an alternative fermionic mass term that preserves the supersymmetry.
}, and the original model is recovered after carefully taking the $m_{\rm f}\to 0$ limit.
To stabilize the complex Langevin simulation, we replace
\begin{equation}
A_i \to \frac{1}{1+\epsilon} \left( A_i + \epsilon A_i^\dagger \right)
\end{equation}
after every update.
This procedure is justifiable when the spatial matrices are nearly Hermitian.
A similar procedure, which is called the dynamical stabilization, has been used in the lattice QCD \cite{Attanasio:2018rtq}.
In this work, we choose $\epsilon=0.01$.
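The effect of this replacement can be made explicit (a sketch with our own function names): it leaves the Hermitian part $(A_i + A_i^\dagger)/2$ untouched and multiplies the anti-Hermitian part $(A_i - A_i^\dagger)/2$ by the exact factor $(1-\epsilon)/(1+\epsilon)$, which is why the procedure is mild when the matrices are nearly Hermitian.

```python
import numpy as np

def stabilize(A, eps=0.01):
    """Push a complexified matrix back towards the Hermitian submanifold.

    A -> (A + eps * A^dagger) / (1 + eps) keeps the Hermitian part (A + A^dagger)/2
    fixed and multiplies the anti-Hermitian part (A - A^dagger)/2 by (1-eps)/(1+eps).
    """
    return (A + eps * A.conj().T) / (1.0 + eps)

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
anti = lambda M: (M - M.conj().T) / 2.0

before = np.linalg.norm(anti(A))
after = np.linalg.norm(anti(stabilize(A)))
print(after / before)   # equals (1 - 0.01) / (1 + 0.01) up to rounding
```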
\vspace{-0.5em}
\section{Results}
\vspace{-0.25em}
All results shown below are obtained for $N=64$.
In Fig.\ref{fig:phase_structure}, we plot the eigenvalues of $A_0$ on the complex plane for various values of $\gamma$.
If $\gamma$ is equal to, or larger than, 2.6, the eigenvalues are almost real.
On the other hand, if $\gamma$ is equal to, or smaller than, 1.8, the results are equivalent to the Euclidean model under a contour deformation.
We call the former phase the real time phase, and the latter phase the Euclidean phase.
We expect that there is a phase transition for $1.8 \le \gamma \le 2.6$.
\begin{figure}
\centering
\includegraphics[width=0.40\hsize]{fig/alpha_N64.pdf}
\caption{Distributions of the $\alpha_i$ for the Lorentzian model for various values of $\gamma$ at $m_{\rm f}=10$. When the distribution of the $\alpha_i$ approaches the black line, the model is equivalent to the Euclidean one under a contour deformation.}
\label{fig:phase_structure}
\vspace{-1em}
\end{figure}
In order to see how space emerges from the spatial matrices, we calculate the following observable:
\begin{equation}
{\mathcal A}_{pq} = \frac{1}{9}\sum_{i=1}^9 |(A_i)_{pq}|^2.
\end{equation}
In Fig.\ref{fig:band_diagonal}, we plot this observable against $p$ and $q$.
The figure shows that ${\mathcal A}_{pq}$ becomes small as $|p-q|$ increases.
Thus, only the elements near the diagonal carry important information.
We call this matrix structure the band-diagonal structure, and $n$ denotes its bandwidth.
In the rest of the paper, we choose $n=12$.
\begin{figure}
\centering
\includegraphics[width=0.40\hsize]{fig/Xabs_gam4.pdf}
\caption{${\mathcal A}_{pq}$ against $p$ ($x$-axis) and $q$ ($y$-axis) at $\gamma=2.6$ and $m_{\rm f}=10$.}
\label{fig:band_diagonal}
\vspace{-1em}
\end{figure}
Using the bandwidth $n$, we define time by
\begin{equation}
t_a = \sum_{i=1}^{a}|\bar{\alpha}_{i}- \bar{\alpha}_{i-1}|,
\end{equation}
where
\begin{equation}
\bar{\alpha}_i=\frac{1}{n}\sum_{\nu=0}^{n-1} \alpha_{i+\nu}.
\end{equation}
Then we define the $n \times n$ block matrices $\bar{A}_i(t_a)$ by
\begin{equation}
(\bar{A}_i)_{kl}(t_a) = (A_i)_{(k+a-1)(l+a-1)}.
\end{equation}
These block matrices represent the state of the universe at $t_a$.
In the following, we omit the index $a$, and we denote time by $t$.
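In code, the time coordinate and the block extraction can be sketched as follows (the function names and the 0-based indexing are our own; the paper's $(k+a-1)(l+a-1)$ convention is 1-based):

```python
import numpy as np

def bar_alpha(alpha, n):
    """Moving average bar{alpha}_i = (1/n) sum_{nu=0}^{n-1} alpha_{i+nu}."""
    return np.convolve(alpha, np.ones(n) / n, mode="valid")

def times(alpha, n):
    """Times t_a as cumulative sums of |bar{alpha}_i - bar{alpha}_{i-1}|."""
    ab = bar_alpha(alpha, n)
    return np.concatenate(([0.0], np.cumsum(np.abs(np.diff(ab)))))

def block(A, a, n):
    """n x n block matrix representing the state of the universe at time t_a."""
    return A[a:a + n, a:a + n]

alpha = np.sort(np.random.default_rng(5).normal(size=64))
t = times(alpha, 12)
print(t.shape, np.all(np.diff(t) >= 0))   # times are non-decreasing by construction
```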
To study whether space is also real in the real time phase, we define a phase $\theta_{\rm s}(t)$ for the spatial matrices by
\begin{equation}
{\rm tr}(\bar{A}_i(t))^2 = e^{2i\theta_{\rm s}(t)} |{\rm tr}(\bar{A}_i(t))^2|.
\end{equation}
This phase is 0 when space is real, and $\pi/8$ when the model is equivalent to the Euclidean one.
In Fig.\ref{fig:theta_spatial}, $\theta_{\rm s}(t)$ is plotted against time.
The figure shows that space becomes real at late times.
\begin{figure}
\centering
\includegraphics[width=0.40\hsize]{fig/theta_N64.pdf}
\caption{$\theta_{\rm s}(t)$ for $\gamma=2.6$ and $m_{\rm f}=10$. The black line corresponds to the Euclidean model.}
\label{fig:theta_spatial}
\vspace{-1em}
\end{figure}
To study the SSB of the SO(9) rotational symmetry, we define the ``moment of inertia tensor''
\begin{equation}
T_{ij}(t) = \frac{1}{n}{\rm tr}\left( X_i(t)X_j(t) \right),
\end{equation}
where
\begin{equation}
X_i(t) = \frac{1}{2} \left( \bar{A}_i(t) + \bar{A}_i^\dagger(t) \right).
\end{equation}
This Hermitianization is justifiable at late times because the spatial matrices become Hermitian there as we saw in Fig.\ref{fig:theta_spatial}.
In the SO(9) symmetric case, the eigenvalues of $T_{ij}(t)$ are all equal in the large-$N$ limit \footnote{The small differences between them are a finite-$N$ effect.}, but in the SO(9) broken case they are not.
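The role of the eigenvalues of $T_{ij}(t)$ as an order parameter can be illustrated with synthetic data (this is our own construction, not actual simulation output): when one of the nine Hermitianized block matrices is much larger than the rest, one eigenvalue of $T_{ij}$ dominates.

```python
import numpy as np

def inertia_tensor(blocks):
    """Compute T_ij = (1/n) tr(X_i X_j) from a list of n x n block matrices.

    Each block is Hermitianized first, as in X_i = (A_i + A_i^dagger) / 2.
    """
    X = [(B + B.conj().T) / 2.0 for B in blocks]
    n = X[0].shape[0]
    d = len(X)
    T = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            T[i, j] = np.trace(X[i] @ X[j]).real / n
    return T

rng = np.random.default_rng(3)
n = 12                                   # the bandwidth used in this work
blocks = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(9)]
blocks[0] *= 5.0                         # one "expanding" spatial direction

evals = np.sort(np.linalg.eigvalsh(inertia_tensor(blocks)))[::-1]
print(evals[0] / evals[1])               # clearly larger than 1: SO(9) is broken
```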
In Fig.\ref{fig:gam_dep}, we plot the eigenvalues of $T_{ij}(t)$ against the time $t$.
These figures show that the eigenvalues become equal around $t=0$.
On the other hand, we find that the SSB of SO(9) occurs at $t \sim 0.5$.
After the SSB, 1 out of the 9 eigenvalues grows exponentially.
In other words, 1-dimensional space grows exponentially.
By comparing Fig.\ref{fig:gam_dep} (Left Top) and (Right Top), we can see that the expansion becomes more pronounced as $\gamma$ decreases.
Furthermore, by comparing Fig.\ref{fig:gam_dep} (Right Top) and (Bottom), we see that the expansion becomes more pronounced as $m_{\rm f}$ decreases.
\begin{figure}
\centering
\includegraphics[width=0.33\hsize]{fig/Tij_N64_mf10_gam4.0.pdf}
\includegraphics[width=0.33\hsize]{fig/Tij_N64_mf10_gam2.6.pdf}
\includegraphics[width=0.33\hsize]{fig/Tij_N64_mf5_gam2.6.pdf}
\caption{The eigenvalues of $T_{ij}(t)$ are plotted against time $t$. (Left Top) $\gamma=4.0$ and $m_{\rm f}=10$, (Right Top) $\gamma=2.6$ and $m_{\rm f}=10$, and (Bottom) $\gamma=2.6$ and $m_{\rm f}=5$. The dashed lines are obtained by exponential fits using $t \gtrsim 0.5$ data points.}
\label{fig:gam_dep}
\vspace{-1em}
\end{figure}
\vspace{-0.5em}
\section{Summary and discussion}
\vspace{-0.25em}
We studied the Lorentzian version of the type IIB matrix model numerically.
Since there is a strong sign problem, we used the complex Langevin method.
To avoid the singular drift problem we add a mass term for the fermionic matrices.
We denote this mass parameter by $m_{\rm f}$, and the original model is obtained after an $m_{\rm f} \to 0$ extrapolation.
The Lorentzian version of the model has been found to be equivalent to the Euclidean one under a contour deformation.
We confirmed this point numerically for the fermion-quenched model.
To obtain results inequivalent to the Euclidean model, we use the Lorentz invariant ``mass'' term which acts as an IR regulator.
We denote its mass parameter by $\gamma$.
We propose a novel large-$N$ limit, where one performs the $\gamma \to 0$ extrapolation after the large-$N$ limit is taken at fixed $\gamma$.
Regarding the mass parameter $\gamma$, there are two phases depending on its value.
One appears at small $\gamma$, in which the model is found to be equivalent to the Euclidean model after a contour deformation.
The other phase appears at sufficiently large $\gamma$, in which time becomes real.
We call this phase the real time phase.
An important feature of the real time phase is that the spatial matrices have a band-diagonal structure.
This structure enables us to define block matrices which represent
the state of the universe at a given time.
We studied the time evolution of the universe using these block matrices.
We found that the real space appears at late times.
Moreover, a spontaneous symmetry breaking (SSB) of SO(9) occurs at some time; however, only a 1-dimensional space expands exponentially after the SSB when the fermionic mass parameter is large ($m_{\rm f} \gtrsim 5$).
Focusing on the bosonic part of the action, quantum fluctuations are suppressed when ${\rm Tr} [A_i, A_j]^2 \sim 0$, which happens when only one spatial matrix is large.
This explains the emergence of the 1-dimensional space at sufficiently large $m_{\rm f}$.
On the other hand, when $m_{\rm f} = 0$ and only two matrices are large, the Pfaffian becomes zero \cite{Krauth:1998xh,Nishimura:2000ds}.
Therefore, configurations in which only one or two of the matrices are large are strongly suppressed because the Pfaffian becomes small.
Thus, the emergence of an expanding 1-dimensional space is suppressed by the presence of SUSY, and we expect an expanding 3-dimensional space to emerge for sufficiently small $m_{\rm f}$.
\vspace{-0.5em}
\section*{Acknowledgements}
\vspace{-0.25em}
\setlength{\baselineskip}{13.5pt}
T.\;A., K.\;H. and A.\;T. were supported in part by Grant-in-Aid (Nos. 17K05425, 19J10002, and 18K03614, 21K03532, respectively)
from Japan Society for the Promotion of Science. This research was supported by MEXT as ``Program for Promoting Researches on the
Supercomputer Fugaku'' (Simulation for basic science: from fundamental laws of particles to creation of nuclei, JPMXP1020200105) and JICFuS.
This work used computational resources of supercomputer Fugaku provided by the RIKEN Center for Computational Science (Project ID: hp210165, hp220174), and Oakbridge-CX provided by the University of Tokyo (Project IDs: hp200106, hp200130, hp210094, hp220074) through the HPCI System Research Project.
Numerical computations were also carried out on PC clusters in KEK Computing Research Center.
This work was also supported by computational time granted by the Greek Research and Technology Network (GRNET) in the National HPC facility ARIS, under the project IDs SUSYMM and SUSYMM2.
K.\;N.\;A and S.\;K.\;P. were supported in part by a Program of Basic Research PEVE 2020 (No. 65228700) of the National Technical University of Athens.
\vspace{-0.5em}
\setlength{\baselineskip}{10.25pt}
\bibliographystyle{JHEP}
\section{Introduction}
Cluster analysis is a classical procedure to aggregate elements according to their similarity.
Among the most popular methods to do clustering, we find
the well-known $k$-means
\citep{Lloyd19823} and the partitioning around medoids (PAM) \citep{Kaufmann19873} algorithms.
These two procedures have the particularity of
building data partitions from an initial choice of random points called centroids.
These points are usually sampled uniformly at random from the set of all datapoints.
Since each point has the same probability of being chosen,
the initial sample may end up with a set of points containing many similar points
that carry the same type of information. That is, the initial sample might not represent the diversity present in the data.
This might affect the effectiveness of the clustering.
Nowadays, the diversity in the selected elements is a major concern in some domains of research,
like clinical trials \citep{Clark2019}, forensic sciences \citep{Wagstaff2018} or educational development \citep{Szelei2019}.
Adopting uniform random sampling as a sampling mechanism can result in sets of elements with a poor coverage of all the facets of a population under study.
Determinantal point processes, or DPPs for short, introduced by \cite{Borodin200031},
can address this problem. DPPs model negative correlations between points so that
similar elements have less chances of being simultaneously sampled.
The negative correlations are captured by the so-called kernel or Gram matrix \citep{Kulesza20123}, a
matrix whose entries represent a measure of similarity between pair of points.
DPPs have already been adopted in machine learning as models for subset selection
\citep{Hafiz20133,Gartrell2018,Shah2013,Gillenwater2012,Mariet2019}.
The origins of DPPs can be found in quantum physics \citep{Macchi19753}.
Known first as \textit{fermion processes,} they model the distribution of fermion systems at thermal equilibrium.
Much later, \cite{Borodin200031} introduced the now accepted \textit{Determinantal Point Process} terminology in the
mathematics community.
DPPs have also been applied to problems dealing with nonintersecting random paths \citep{Daley20033},
random spanning trees \citep{Borodin20032}, and the study of eigenvalues of random matrices \citep{BenHough20063}.
Clustering algorithms like $k$-means and the partitioning around medoids result in a single partition of the data, seeking to maximize intra-cluster similarity and inter-cluster dissimilarity.
However, the clustering results of two different algorithms can be very different.
The lack of an external objective and impartial criterion can explain those differences \citep{Vega20113}.
The dependence on the initial choice of centroids is also an important factor.
\cite{Blatt19963} and \cite{Blatt19973} proposed a new approach to improve the quality and robustness of clustering results.
The approach was later formalized by \cite{Strehl20023}, where the notion of \emph{cluster ensembles} is introduced.
Cluster ensembles combine different data partitions into a single consolidated clustering.
A particular cluster ensemble method was later introduced in \cite{Monti20033}, the so-called \emph{consensus clustering}.
This method consists in performing multiple runs of the same clustering algorithm on the same data,
to produce a single clustering configuration by agreement among all the runs.
Most often, the particular clustering algorithm to be run several times for consensus implies random initial conditions or centroids.
Recently, \cite{vicente&murua1-2020} introduced one such method, the \textit{determinantal consensus clustering} or
\textit{consensus DPP} procedure,
which generates the centroids through a DPP.
The similarity or distance between the datapoints is incorporated in the similarity matrix, also known as the kernel
or Gram matrix,
which constitutes the core of the DPP process. Moreover, the link between similarity matrices and kernel methods for statistical or machine learning makes the method very flexible and effective at discovering data clusters.
The diversity within centroids is automatically inherited in the DPP sampling.
The DPP ``diversity at sampling'' property has been shown to greatly improve
the consensus clustering results \citep{vicente&murua1-2020}.
In practice, the centroids are drawn using the DPP sampling algorithm described in \cite{BenHough20063} and \cite{Kulesza20123}.
This algorithm is based on computing the spectral decomposition of the data similarity or kernel matrix.
Unfortunately, when the data size $n$ is very large,
the eigendecomposition becomes a computational burden.
The computational complexity of the eigendecomposition of an $n\times n$ symmetric matrix is of order
$O\left(n^3\right)$.
However, to sample the centroid points,
we might not need to compute all eigenvalues and eigenvectors of the kernel matrix.
In fact, the probability of selecting each datapoint depends on a
corresponding eigenvalue.
Datapoints associated with relatively very small eigenvalues are selected with very low probability.
Therefore, the key to reduce the computational burden induced by large datasets is
the extraction of the largest eigenvalues of the associated kernel matrix.
This raises the necessity to deliver algorithms able to extract such a subset of eigenvalues.
One of the most widely used algorithms to extract the largest eigenvalues is the Lanczos algorithm \citep{Lanczos1950}.
Due to its proven numerical instability, many variations of it have been proposed.
A popular variation of the Lanczos algorithm is the \textit{implicitly restarted Lanczos method} \citep{Calvetti1994},
which we adopt in this paper.
The Lanczos algorithm has been specially developed for large sparse symmetric matrices.
Hence, to be able to perform determinantal consensus clustering on large datasets,
we need to find good sparse approximations of the often dense \textit{original} kernel matrix.
We propose two sound approaches: one approach based on the Nearest Neighbor
Gaussian Process of \cite{Datta2016}, and another approach based on a
random sampling of small submatrices from the dense kernel matrix. This latter approach may be seen
as a kind of divide and conquer approach.
Although we show that these approaches offer good approximations to the eigenvalue distribution of the
original kernel matrix, our goal is rather to introduce alternative efficient DPP sampling models
for large datasets
that inherit the data diversity
expressed in the original kernel matrix.
The paper is organized as follows: in Section~\ref{sec:determinantal:consensus},
we summarize the consensus clustering and recall
the basic characteristics of determinantal point processes.
The determinantal consensus clustering is also summarized in this section.
In Section~\ref{largedatasets}, we introduce the problem of using the determinantal point process
in the context of large datasets, and present two approaches to address this issue.
In Section~\ref{experiments2}, we evaluate the two approaches presented in Section~\ref{largedatasets}
through large dataset simulations; here we also
illustrate the concept of diversity introduced by the determinantal point process.
A performance comparison between our two approaches and two other competing methods
on large real datasets is shown in Section~\ref{sec:real}.
We conclude with a few thoughts and a discussion in Section~\ref{sec:conclusions}.
\section{Determinantal consensus clustering}\label{sec:determinantal:consensus}
\subsection{Consensus clustering}\label{sec:consensus}
Throughout the paper, the data will be
denoted by $\mathcal{S}=\{{x_1}, \dots, {x_n}\}\subset \mathds{R}^p$, where $x_i$ represents a $p$-dimensional vector, for $i=1,\dots,n$ and $n \geq 2$. Consider
a particular clustering algorithm run $R$ times on the same data $\mathcal{S}$. The agreement among the several runs of the algorithm is based on the \emph{consensus matrix} $C$. This is an $n\times n$ symmetric matrix whose entries $\{ C_{ij}, \; i, j=1,\dots, n\}$ represent the proportion of runs in which elements $x_i$ and $x_j$ of $\mathcal{S}$ fall in the same cluster.
Let $r$ represent a specific run of the clustering algorithm, $r=1,\dots, R,$ and let $C_r$ be the associated $n\times n$ symmetric binary matrix with entries $c^{r}_{ij}= 1$ if $x_i$ and $x_j$ belong to the same cluster in the $r$th run of the algorithm, and
$c^{r}_{ij}= 0$, otherwise,
$i, j=1,\dots, n$.
The components of the consensus matrix $C$ are given by $C_{ij}=\sum_{r=1}^{R}c^{r}_{ij} / R,$
$i, j=1,\dots,n$. The entry $C_{ij}$ is known as \emph{consensus index}.
The diagonal entries are given by $C_{ii}=1,$ for $i=1,\dots, n$.
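As an illustration, the consensus matrix can be computed from the $R$ partition label vectors in a few lines (a minimal sketch; the function name is ours):

```python
import numpy as np

def consensus_matrix(partitions):
    """Consensus matrix C from R runs of a clustering algorithm.

    `partitions` is a list of R label vectors of length n; C_ij is the
    proportion of runs in which points x_i and x_j fall in the same cluster.
    """
    partitions = np.asarray(partitions)            # shape (R, n)
    R, n = partitions.shape
    C = np.zeros((n, n))
    for labels in partitions:
        # C_r: binary same-cluster indicator matrix for run r
        C += (labels[:, None] == labels[None, :])
    return C / R

# Two runs on n = 4 points: they agree on {x_0, x_1} and disagree on x_2.
C = consensus_matrix([[0, 0, 1, 1],
                      [0, 0, 0, 1]])
```

By construction $C$ is symmetric with unit diagonal, as stated above.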
Our interest is to extend the determinantal consensus clustering (or consensus DPP) algorithm of \cite{vicente&murua1-2020} to large datasets. Consensus DPP is a modified version of Partitioning around medoids (PAM) algorithm \citep{Kaufmann19873}
in which the center points are sampled with a determinantal point process (DPP); more details on DPP are shown in the next section.
Each run starts with a Voronoi diagram \citep{Aurenhammer19913} on the set $\mathcal{S}$. This partitions the space into several cells or regions based on a random subset of {\it generator points.}
After $R$ runs of the algorithm, we obtain $R$ partitions associated with the $R$ Voronoi diagrams.
The consensus matrix $C$ is computed from these partitions.
Let $\theta \in [0,1]$ be a proportion threshold.
According to \cite{Blatt19963}, if $C_{ij}\geq \theta$, points $x_i$ and $x_j$ are defined as ``friends'' and are included in the same consensus cluster.
Moreover, all mutual friends (including friends of friends, etc.) are assigned to the same cluster.
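The ``friends of friends'' rule amounts to taking the connected components of the graph whose edges join pairs of points with $C_{ij}\geq \theta$. A minimal sketch (function name is ours):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def consensus_clusters(C, theta):
    """Consensus clusters at threshold theta: x_i and x_j are 'friends' when
    C_ij >= theta, and all mutual friends (friends of friends, ...) share a
    cluster, i.e. clusters are connected components of the thresholded graph."""
    adjacency = csr_matrix(C >= theta)
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels

C = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
k, labels = consensus_clusters(C, theta=0.5)   # two clusters: {0, 1} and {2}
```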
To select an appropriate threshold value, we follow \cite{Murua20143} and consider all threshold values
given by the set of all different observed consensus indexes $C_{ij}$.
If there are $t$ different consensus indexes, we will have a collection of $t$ thresholds $\theta_1, \theta_2, \dots, \theta_t$.
For each threshold $\theta_i, \; i=1,\dots, t$, a consensus clustering configuration with $K(\theta_i)$ clusters is obtained.
If $\theta_i=0$, we obtain a configuration with $K(0)=1$ cluster, that is, a single cluster equal to $\mathcal{S}$.
If $\theta_i=1$, we obtain a configuration with $K(1) =n$ singleton clusters; that is, each element of $\mathcal{S}$ forms a singleton cluster.
In general, clustering configurations with one cluster or $n$ clusters are of no interest. Therefore,
thresholds $\theta_i$ that are too low or too large are not relevant.
This observation leads to a more efficient procedure \citep{vicente&murua1-2020}
where only a predetermined sequence of $t$ thresholds $\tau < \theta_1 < \theta_2 < \cdots <\theta_t$,
bounded from below by a certain minimum threshold $\tau$, are considered.
This approach is particularly useful in the context of large datasets, since reducing the range of considered thresholds can reduce the computational burden induced by the high number of elements.
Moreover, we are not interested in a clustering configuration with too many small clusters.
Only clustering configurations with cluster sizes larger than a minimal value are admissible (that is, accepted).
\cite{vicente&murua1-2020} established through extensive simulations that
the ``square-root choice'' (for the number of bins of a histogram) $\sqrt{n}$ is an adequate value for the minimum cluster size.
Each one of the $t$ consensus clustering configurations obtained with the $t$ thresholds $\{\theta_i\}$ are examined
to verify that they are admissible.
For every non-admissible consensus clustering configuration
we merge each small cluster with its closest ``large'' cluster, according to the following procedure, inspired by single linkage:
(i) select the cluster $\mathcal{V}$ that has the smallest cluster size $< \sqrt{n}$;
(ii) find the pair of indexes $(i^*, j^*) \in \{1,\ldots, n\}^2$ that satisfies
$C_{i^* j^*} = \max\{ C_{ij} : x_i \in \mathcal{V}, x_j \not\in \mathcal{V} \}$;
(iii) merge the cluster $\mathcal{V}$ to the cluster that includes $x_{j^*}$;
(iv) repeat the merging procedure until there are no more clusters with size smaller than $\sqrt{n}$.
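The merging procedure (i)--(iv) can be sketched as follows (illustrative only; the function name is ours, and `min_size` would typically be set to $\lceil\sqrt{n}\rceil$):

```python
import numpy as np

def merge_small_clusters(C, labels, min_size):
    """Merge clusters smaller than `min_size` into their closest cluster,
    following the single-linkage-inspired rule (i)-(iv)."""
    labels = labels.copy()
    while True:
        ids, sizes = np.unique(labels, return_counts=True)
        mask = sizes < min_size
        if not mask.any() or ids.size == 1:
            break
        # (i) cluster V with the smallest size below the threshold
        v = ids[mask][np.argmin(sizes[mask])]
        inside = labels == v
        # (ii) pair (i*, j*) maximizing the consensus index across the cut
        sub = C[np.ix_(inside, ~inside)]
        j_star = np.flatnonzero(~inside)[np.argmax(sub.max(axis=0))]
        # (iii) merge V into the cluster containing x_{j*}; (iv) loop
        labels[inside] = labels[j_star]
    return labels

# Toy consensus matrix: point 3 is closest to cluster {0,1,2}, point 4 to point 3.
C = np.eye(5)
C[3, 0] = C[0, 3] = 0.9
C[4, 3] = C[3, 4] = 0.8
merged = merge_small_clusters(C, np.array([0, 0, 0, 1, 2]), min_size=2)
```

In the toy example, the two singletons are absorbed one after the other into the large cluster.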
Once all $t$ consensus clustering configurations are made admissible, we proceed to select the {\it final consensus clustering}
as the consensus clustering configuration among these $t$ configurations that minimizes the kernel-based validation index
of \cite{Fan20103}. This index was conceived, following the studies of \cite{Girolami20023}, to choose the optimal final clustering among several possible partitions. It can be seen as an index that combines modified extensions of the
between and within variances to kernel-based methods.
See \cite{Fan20103} or \cite{vicente&murua1-2020} for further details.
\subsection{The determinantal point process}
Let $L$ be an $n\times n$ real symmetric and positive semidefinite matrix that measures similarity between all pairs of elements of $\mathcal{S}$.
We denote by $L_Y=\left(L_{ij}\right)_{i,j\in Y}$
the principal submatrix of $L$ whose rows and columns are indexed by
the subset $Y\subseteq \mathcal{S}$.
A determinantal point process, DPP for short, is a probability measure on $2^{\mathcal{S}}$ that assigns probability
\begin{equation}\label{definition2}
P\left( Y \right)= \det(L_Y) / \det(L+I_n),
\end{equation}
to any subset $Y\in 2^{\mathcal{S}}$,
where $I_n$ is the identity matrix of dimension $n \times n$.
We write $\boldsymbol{Y}\sim DPP_{\mathcal{S}}(L)$ for the corresponding determinantal process.
The matrix $L$ is known as the kernel matrix of the DPP \citep{Kulesza20123,Kang20133,Hafiz20143}.
It can be shown \citep{Kulesza20123} that $\det(L+I_n) = \sum_{Y \in 2^\mathcal{S}} \det(L_Y)$, hence
\eqref{definition2} does indeed define a probability mass function over all subsets in $2^\mathcal{S}$.
This definition states restrictions on all the principal minors of the kernel matrix $L$, denoted by $\det(L_Y)$.
Indeed, as $P\left(\boldsymbol{Y}=Y\right)\propto\det(L_Y)$ represents a probability measure, we have $\det(L_Y)\geq0$, for any $Y\subseteq \mathcal{S}$.
Any symmetric positive semidefinite matrix $L$ may be a kernel matrix of a DPP.
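For concreteness, the probability in \eqref{definition2} and its normalization can be checked numerically on a small kernel (a sketch; exhaustive enumeration of $2^{\mathcal{S}}$ is only feasible for tiny $n$):

```python
import numpy as np
from itertools import chain, combinations

def dpp_prob(L, Y):
    """P(Y) = det(L_Y) / det(L + I_n); the empty principal minor is 1."""
    n = L.shape[0]
    num = np.linalg.det(L[np.ix_(Y, Y)]) if len(Y) else 1.0
    return num / np.linalg.det(L + np.eye(n))

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
L = B @ B.T                      # any symmetric PSD matrix is a valid kernel

# Summing P(Y) over all 2^3 subsets recovers det(L + I) / det(L + I) = 1.
subsets = chain.from_iterable(combinations(range(3), k) for k in range(4))
total = sum(dpp_prob(L, list(Y)) for Y in subsets)
```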
For the construction of the similarity matrix $L$ in \eqref{definition2}, we use a suitable Mercer Kernel \citep{Girolami20023}
as indicated in \cite{vicente&murua1-2020}.
The choice of an appropriate kernel is a critical step in the application of any kernel-based method.
However, as pointed out by \cite{Howley20063}, there is no rule nor consensus about its choice.
Ideally, the kernel must be chosen according to prior knowledge of the problem domain
\citep{Howley20063,Lanckriet20043}, but this practice is rarely observed.
In the absence of expert knowledge, a common choice is the
\emph{Radial Basis Function} (RBF) kernel (or \emph{Gaussian kernel}):
\begin{equation*}
L=\bigl( \exp\bigl\{ -\|x_i-x_j\|^2 / (2\sigma^2) \bigr\}\bigr)_{i,j=1}^n,
\end{equation*}
where the scale parameter $\sigma$, known as the kernel's bandwidth, represents the relative spread of the Euclidean distances $\|x_i-x_j\|$ between any two points $x_i$ and $x_j$.
Due to its appealing mathematical properties, this particular kernel has been extensively used in many studies.
A particular property of the Gaussian kernel is that it is positive and bounded from above by one,
making it directly interpretable as a scaled measure of similarity between any given pair of points.
The computation of the RBF kernel requires the estimation of the bandwidth parameter $\sigma$. As pointed by \cite{Murua20143}, most of the literature considers $\sigma$ as a parameter that can be estimated by observed data. Inspired by \cite{Blatt19963} and \cite{Blatt19973}, we estimate $\sigma^2$ by the average of all pairwise and squared Euclidean distances, i.e.,
$ \widehat{\sigma}^2= 2 \sum_{i<j}\|x_i-x_j\|^2 / \bigl( n(n-1)\bigr).$ Other choices for estimating $\sigma^2$ are discussed in \cite{vicente&murua1-2020}. We chose the average for its simplicity and fast computation even when the dataset is large.
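A minimal sketch of the RBF kernel with this bandwidth estimate (the function name is ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rbf_kernel(X):
    """RBF (Gaussian) kernel with sigma^2 estimated by the average of all
    pairwise squared Euclidean distances, i.e. 2 sum_{i<j} ||x_i-x_j||^2 / (n(n-1))."""
    sq = squareform(pdist(X, metric="sqeuclidean"))    # ||x_i - x_j||^2
    sigma2 = sq[np.triu_indices_from(sq, k=1)].mean()  # mean over the i<j pairs
    return np.exp(-sq / (2.0 * sigma2)), sigma2

# Three points with squared distances 1, 1 and 2, so sigma^2 = 4/3.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
L, s2 = rbf_kernel(X)
```

The resulting matrix is symmetric with unit diagonal and entries in $(0, 1]$, as noted above.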
\section{The case of large datasets}\label{largedatasets}
\cite{BenHough20063}
and \cite{Kulesza20123} present an efficient scheme to sample from a DPP.
The algorithm is based on the following observations.
Let $L=\sum_{i=1}^{n} \lambda_i(L){v}_i{v}_i^T$ be an orthonormal eigendecomposition of $L$,
where $\lambda_1(L) \geq \lambda_2(L) \geq \cdots \geq \lambda_n(L) \geq 0$ are the eigenvalues of $L$,
and $\{v_i : i=1,\ldots, n\}$ are the eigenvectors of $L$.
For any set of indexes $J \subseteq \{1,2,\ldots, n\}$, define the subset of eigenvectors $V_J = \{ v_i : i \in J\}$,
and the associated matrix $\mathcal{K}_J = \sum_{i\in J} {v}_i {v}_i^T.$
It can be shown that the matrix $\mathcal{K}_J$ defines a so-called \emph{elementary} DPP which we denote by DPP$(\mathcal{K}_J)$.
It turns out that DPP$_\mathcal{S}(L)$ is a mixture of all the elementary DPPs given by the index sets $J$.
That is
\[
\operatorname{DPP}_\mathcal{S}(L) = \sum_{J} \operatorname{DPP}(\mathcal{K}_J) \biggl[ \prod_{i\in J} \lambda_i(L)\biggr] / \det{(L + I_n)}.
\]
The mixture weight of $\operatorname{DPP}(\mathcal{K}_J)$ is given by the product of the eigenvalues $\lambda_i(L)$ corresponding to the eigenvectors $v_i\in V_J$, normalized by $\det\left(L + I_n\right) = \prod_{i=1}^{n}\left[\lambda_i(L)+1\right]$.
Sampling of a subset $\boldsymbol{Y}\sim \operatorname{DPP}(L)$ can be realized by first selecting an elementary DPP, $\operatorname{DPP}(\mathcal{K}_J)$, with probability equal to its mixture component weight, and then, in a second step,
sampling a subset from $\operatorname{DPP}(\mathcal{K}_J)$. Moreover, it can be shown that the expected value and variance
of the number of elements in $\boldsymbol{Y}$, $\operatorname{card}(\boldsymbol{Y})$, are given by
\begin{equation*}
\mathbb{E}\left[\operatorname{card}(\boldsymbol{Y})\right] = \sum_{i=1}^{n}\tfrac{\lambda_i(L)}{\lambda_i(L)+1} \; ; \;
\operatorname{Var}\left[\operatorname{card}(\boldsymbol{Y})\right] = \sum_{i=1}^{n}\tfrac{\lambda_i(L)}{(\lambda_i(L)+1)^2}.
\end{equation*}
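The two-phase sampling scheme just described, together with the moments above, can be sketched as follows (a minimal NumPy version of the spectral algorithm of \cite{BenHough20063}; function names are ours):

```python
import numpy as np

def sample_dpp(eigvals, eigvecs, rng):
    """Sample Y ~ DPP(L) from an orthonormal eigendecomposition of L.

    Phase 1: keep eigenvector v_i with probability lambda_i/(lambda_i + 1),
    which selects an elementary DPP(K_J). Phase 2: sample card(J) points."""
    lam = np.asarray(eigvals, dtype=float)
    keep = rng.random(lam.size) < lam / (lam + 1.0)
    V = eigvecs[:, keep]
    Y = []
    while V.shape[1] > 0:
        # pick a point with probability proportional to the squared row norms
        p = (V ** 2).sum(axis=1)
        i = rng.choice(len(p), p=p / p.sum())
        Y.append(i)
        # project the columns of V onto the orthocomplement of e_i
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)   # re-orthonormalize remaining columns
    return sorted(Y)

def dpp_size_moments(eigvals):
    """Mean and variance of card(Y) from the eigenvalues of L."""
    lam = np.asarray(eigvals, dtype=float)
    return np.sum(lam / (lam + 1.0)), np.sum(lam / (lam + 1.0) ** 2)

rng = np.random.default_rng(0)
lam = np.array([1e12, 1e12, 1e12])   # near-projection kernel: card(Y) = 3 w.h.p.
Y = sample_dpp(lam, np.eye(3), rng)
mean, var = dpp_size_moments([3.0, 1.0, 0.0])
```

Note how the last line reflects the remark above: an eigenvalue of zero contributes nothing to the expected cardinality.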
It is well known that the computational complexity of obtaining the eigendecomposition of an $n\times n$ symmetric matrix is of order $O(n^3)$; as $n$ grows larger, the computation of the eigenvalues and eigenvectors
becomes expensive.
Even storing the matrix can strain the available memory.
The sampling algorithm based on DPP starts with a subset of the eigenvectors of the kernel matrix, selected at random, where the probability of selecting each eigenvector depends on its associated eigenvalue. This suggests that it is unnecessary to compute all the eigenvalues, since eigenvectors with small associated eigenvalues are selected with low probability. This is particularly useful in the case of large matrices: computing only the largest eigenvalues can substantially reduce the computational burden of obtaining all the eigenvalues. The literature offers many well-known algorithms that can extract the $t$ largest (or smallest) eigenvalues, with their associated eigenvectors, of an $n \times n$ Hermitian matrix, where usually $t\ll n$. One of the most classical and widely used is the Lanczos algorithm \citep{Lanczos1950}. Despite its popularity and computational efficiency, the algorithm is known to be numerically unstable, due in part to the loss of orthogonality of the computed Krylov subspace basis vectors. Since then, many efforts have been made to solve this issue, for example \cite{Cullum1978}, \cite{Parlett1989} and \cite{Grimes1994}. Many of the variations of the Lanczos algorithm propose a restart after a certain number of iterations. One of the most popular restarted variations is the implicitly restarted Lanczos method, proposed by \cite{Calvetti1994} and implemented in \textsc{arpack} \citep{Lehoucq1998}, which motivates our preference for this particular variation.
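As an illustration, SciPy's \texttt{eigsh} wraps the \textsc{arpack} implementation of the implicitly restarted Lanczos method and extracts only the leading eigenpairs of a sparse symmetric matrix:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# A 500 x 500 sparse symmetric test matrix (about 2% nonzero entries).
A = sparse_random(500, 500, density=0.01, random_state=1)
S = (A + A.T) * 0.5

# ARPACK (implicitly restarted Lanczos): only the t largest eigenvalues,
# at far less cost than the full O(n^3) eigendecomposition.
t = 10
vals, vecs = eigsh(S, k=t, which="LA")

# They agree with the top-t eigenvalues of the dense eigendecomposition.
dense_top = np.linalg.eigvalsh(S.toarray())[-t:]
```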
The Lanczos algorithm and its implicitly restarted variation were specially developed for large sparse symmetric matrices. Consequently, our first step before applying the implicitly restarted Lanczos method is to find a good approximation of the kernel matrix $L$ by a sparse matrix. We address this question with two approaches: one based on the Nearest Neighbor Gaussian Process of \cite{Datta2016}, and another based on random sampling of small submatrices from the dense kernel matrix $L$.
\subsection{The Nearest Neighbor Gaussian Process}\label{NNGP}
The Nearest Neighbor Gaussian Process (NNGP) was developed by \cite{Datta2016} and extended by \cite{Finley2019}, to obtain a sparse approximation of the kernel matrix $L$, say $\widetilde{L}$, to which the implicitly restarted Lanczos method can be applied to obtain its largest eigenvalues.
\cite{Finley2019} showed that the covariance matrix $W$ of a Gaussian process can be expressed through a specific Cholesky decomposition:
\begin{equation}\label{eq:finley}
W =\left(I_n- A\right)^{-1} D\left(I_n-A\right)^{-T},
\end{equation}
where $A$ is an $n\times n$ strictly lower-triangular matrix and $D$ is an $n\times n$ diagonal matrix;
here $(\cdot)^{-T}$ stands for the inverse of the transposed matrix.
In order to define properly the matrices $A$ and $D$ we need to introduce the following notation.
For any $n\times n$ matrix $M$, and a subset of indices $J\subseteq \{1,\ldots, n\}$, we will write
$M_{k,J} = (M_{kj})_{j\in J}$ for the $\operatorname{card}(J)$-dimensional vector formed by the corresponding
components of the row $k$ of $M$, $k=1,\ldots, n$. We define similarly, $M_{J,k}$. Also, we write $[i_1:i_2]$ for the set
$J=\{ j : i_1 \leq j \leq i_2\}$.
Having introduced the notation, we can write the $i$th row $A_{i\sbullet}$ of $A$ as
$A_{i, [1:i-1]} = W_{[1:i-1]}^{-1} \, {W}_{[1:i-1], i}$, for $i=2,\ldots, n$, and
$A_{i, [i:n]} = 0$, for $i=1,\ldots, n$,
where ${W}_{[1:k]}$ represents the leading principal submatrix of order $k$ of the matrix ${W}$. The diagonal entries $D_{ii}$ of ${D}$ are such that $D_{11}=W_{11}$ and
$D_{ii} = W_{ii} - W_{i,[1:i-1]} \, A_{i, [1:i-1]}^T$, for $i=2,\dots, n$.
Note that these are the linear equations that define the matrices $A$ and $D$. These equations need to be solved
for $A$ and $D$ in order to obtain the decomposition given by the expression in~\eqref{eq:finley}.
Unfortunately, the computation of ${A}_{i\sbullet}$ still takes $O(n^3)$ floating-point operations overall,
especially for values of $i$ close to $n$, which increase the dimension of ${W}_{[1:i-1]}$.
Despite this shortcoming, the authors mention that this specific decomposition highlights where the sparseness can be exploited:
setting to zero some elements in the lower triangular part of ${A}$.
This is achieved by limiting the number of nonzero elements in each row of ${A}$ to a maximum of $m$ elements.
Let $N_i\subseteq \{1, \ldots, n\}$ be the set of indices $j < i$ for which $A_{i,j}\neq 0$. We constrain $N_i$ to
have at most $m$ indices.
In this latter case, all elements of the $i$th row ${A_{i\sbullet}}$ of ${A}$ are zero, except for the elements
$A_{i,N_i} = W_{N_i}^{-1} \, W_{N_i, i}$, for $i=2,\ldots, n$, where ${W}_{N_i}$ is the principal submatrix of ${W}$ whose rows and columns are indexed by $N_i$.
For the diagonal entries, we have $D_{11}=W_{11}$ and $D_{ii}=W_{ii}-{W}_{i, N_i} \, {A}_{i, N_i}^T$, for $i=2,\dots, n$.
These latter equations form a linear system of size at most $m\times m$, with $m=\underset{i}{\max}\left(\operatorname{card}(N_i)\right)$. This new system can be solved for $A$ and $D$ in $O(n m^3)$ floating-point operations.
Using these solutions gives rise to the sparse approximation to the precision matrix $W^{-1}$
\begin{equation}\label{precision}
\widetilde{W}^{-1}=\left({I}_n-{A}\right)^{T}{D}^{-1}\left({I}_n-{A}\right).
\end{equation}
The inverse of $\widetilde{W}^{-1}$ is an approximation to $W$.
\cite{Datta2016} show, in the context of spatial Gaussian processes, that $\widetilde{W}^{-1}$ has at most $n m(m+1)/2$ nonzero
entries. Thus, $\widetilde{W}^{-1}$ is sparse provided that $m \ll n$.
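A minimal dense sketch of this construction follows (function names are ours; a real implementation would store $A$ and $D$ in sparse form). With all preceding points taken as neighbors, the decomposition is exact, which provides a handy sanity check:

```python
import numpy as np

def preceding_neighbors(X, m):
    """N_i: the (at most m) nearest preceding points of x_i."""
    nbrs = [np.array([], dtype=int)]
    for i in range(1, len(X)):
        d = np.linalg.norm(X[:i] - X[i], axis=1)
        nbrs.append(np.argsort(d)[:m])
    return nbrs

def nngp_precision(W, neighbors):
    """NNGP approximation W^{-1} ~ (I - A)^T D^{-1} (I - A): each row of A
    has at most m nonzeros, solved from an m x m system (O(n m^3) overall)."""
    n = W.shape[0]
    A = np.zeros((n, n))
    D = np.empty(n)
    D[0] = W[0, 0]
    for i in range(1, n):
        N = neighbors[i]
        A[i, N] = np.linalg.solve(W[np.ix_(N, N)], W[N, i])
        D[i] = W[i, i] - W[i, N] @ A[i, N]
    I = np.eye(n)
    return (I - A).T @ np.diag(1.0 / D) @ (I - A)

# Sanity check: with all preceding points as neighbors (m = n - 1), the
# construction recovers W^{-1} exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # Gaussian kernel
P = nngp_precision(W, preceding_neighbors(X, m=5))
```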
We can apply this result to develop an efficient determinantal consensus clustering (see Section~\ref{sec:determinantal:consensus}) when the data size $n$ is large.
Recall that the kernel matrix $L$ is a real symmetric positive semidefinite matrix.
When $L$ is also positive definite, it can be seen as a covariance matrix.
This is what we assume from now on. Therefore, it
can be approximated using the NNGP approach by a matrix $\widetilde{L}$ whose inverse $\widetilde{L}^{-1}$ is sparse.
For each $i\in\{2, \ldots, n\}$ consider the distances $d_{ij} = \lVert x_i - x_j\rVert$ for $j< i$.
Let $d_{i(1)} \leq d_{i(2)} \leq \cdots \leq d_{i(i-1)}$ be the corresponding sequence of ordered distances.
In our model we set $N_i = \{ j : j< i,\, d_{ij} \leq d_{i(m)}\}$, for $i > m$; and set $N_i = \{1,\ldots, i-1\}$ for
$i\leq m$.
Let $\check{L}$ be the $m$-nearest-neighbor matrix whose entries are given by $\check{L}_{ii} = L_{ii}$,
$\check{L}_{ij} = L_{ij}$ if $j\in N_i$, and $\check{L}_{ij} = 0$, otherwise.
We would like to stress here that the matrix $\widetilde{L}$ based on the neighborhoods $\{N_i\}_{i=1}^n$
is not the same as the $m$-nearest-neighbor matrix $\check{L}$.
We have adopted $\widetilde{L}$ instead of $\check{L}$ for several reasons.
First, $\widetilde{L}$ is a dense approximation of $L$, whose inverse $\widetilde{L}^{-1}$ is sparse.
Second, $\widetilde{L}^{-1}$ is a sparse matrix with $O(nm^2)$ nonzero entries.
On the other hand, $\check{L}$ is a sparse matrix with $O(n m_{nn})$ nonzero entries,
where $m_{nn}$ is the number of nearest neighbors.
Therefore, for a fixed level of sparseness, building $\widetilde{L}^{-1}$ requires many fewer nearest neighbors $m = O( \sqrt{m_{nn}})$
than building $\check{L}$.
Finally, in Section~\ref{experiments2}, we compute both the Frobenius distances \citep{Horn20123}
$\lVert L -\widetilde{L}\rVert_F$, and $\lVert L -\check{L}\rVert_F$, and show that the former
is always smaller than the latter,
for all our simulated data.
Moreover, in terms of the symmetrized Kullback-Leibler divergence \citep{Kullback1951}, the distribution of the eigenvalues of $\widetilde{L}$
is closer to the distribution of the eigenvalues of $L$ than the distribution of the eigenvalues of $\check{L}$.
As our primary goal is to obtain the $t \ll n$ largest eigenvalues of the kernel matrix $L$,
we start with the construction of the sparse matrix $\widetilde{L}^{-1}$ using the formula in~\eqref{precision},
and the sparse computation form above.
We then apply the Lanczos algorithm to extract the $t$ smallest eigenvalues of $\widetilde{L}^{-1}$, and
their associated eigenvectors.
By inverting the eigenvalues, we obtain the $t$ largest eigenvalues of $\widetilde{L}$; the eigenvectors of $\widetilde{L}$
are the same as those of $\widetilde{L}^{-1}$.
With the eigenvalues and eigenvectors in hand, we can proceed
as usual with the determinantal consensus clustering of Section \ref{sec:determinantal:consensus}.
We stress here that the actual DPP used for the determinantal consensus clustering after this construction
is $DPP_\mathcal{S}(\widetilde{L})$, and not $DPP_\mathcal{S}(L)$. In practice, $L$ is chosen only to give us
a measure of similarity between data points.
\subsection{Approach based on random sampling of small submatrices from $L$}\label{knn}
In this section, we consider another approach to deal with large datasets and kernel matrices.
In this approach we combine dimension reduction techniques and the advantages of working with sparse matrices.
Let $L^{(1)}, \dots, L^{(M)}$ denote $M$ $r\times r$ submatrices sampled uniformly at random
and without replacement from $L$, where $r<n$ (ideally, $r\ll n$).
The idea is to use these submatrices as proxies for $L$ in the DPP sampling.
By generating a sufficiently large number $M$ of matrices, we expect to cover the set of eigenvectors and eigenvalues
of $L$ with the smaller sets of eigenvectors and eigenvalues of the matrices collection $\{ L^{(i)} : i=1,\ldots, M\}$.
This might be the case if the data are well separated so that, although $L$ might be a {\it dense} matrix, its
hidden structure might be {\it sparse.} That is, many entries in $L$ might be small.
We could also achieve sparseness by thresholding the elements of $L$. However, the following approach
yielded better results in our experiments (not shown here), and hence, it is the second approach adopted
(the first having been described in the previous section).
We apply the following iterative methodology to the set of submatrices, for a number $N$ of times.
For $k=1,\ldots, N$:
\begin{enumerate}
\item Select an index $i_k$ from $\{1, 2, \dots, M\}$ at random (with replacement),
and consider the submatrix $L^{(i_k)}$.
\item Build a sparse approximation $\widehat{L}^{(i_k)}$ of the submatrix $L^{(i_k)}$ by considering the $m_{nn}$-nearest neighbors
of each point associated with the rows of the submatrix; that is,
$\widehat{L}^{(i_k)}_{ij} = L^{(i_k)}_{ij}$ if $x_j$ is one of the $m_{nn}$-nearest neighbors of $x_i$, or if $x_i$ is one of
the $m_{nn}$-nearest neighbors of $x_j$; $\widehat{L}^{(i_k)}_{ij} =0$, otherwise.
\item Generate a subset sample ${Y}_{i_k}$ from a $\operatorname{DPP}(\widehat{L}^{(i_k)})$
based only on the $t$ largest eigenvalues extracted with the Lanczos algorithm.
\item Find the Voronoi cells of the $n$ data points based on the sampled $Y_{i_k}$ center points.
This generates the $k$th partition of the algorithm.
\end{enumerate}
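Steps 1--3 of one iteration can be sketched as follows (illustrative only; function names are ours, and for a Gaussian kernel the largest similarities correspond to the nearest neighbors):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def knn_sparsify(Lsub, n_neighbors):
    """Keep L_ij when x_j is among the nearest neighbors of x_i or vice
    versa; nearest neighbors = largest similarities for a Gaussian kernel."""
    r = Lsub.shape[0]
    keep = np.zeros((r, r), dtype=bool)
    order = np.argsort(-Lsub, axis=1)[:, 1:n_neighbors + 1]   # skip self
    keep[np.repeat(np.arange(r), n_neighbors), order.ravel()] = True
    keep |= keep.T                      # symmetrize the neighbor relation
    np.fill_diagonal(keep, True)
    return csr_matrix(np.where(keep, Lsub, 0.0))

def one_run(L, r, n_neighbors, t, rng):
    """One iteration: draw an r x r principal submatrix of L uniformly at
    random, sparsify it, and extract its t largest eigenpairs (Lanczos)
    to feed the spectral DPP sampler."""
    idx = rng.choice(L.shape[0], size=r, replace=False)
    Lhat = knn_sparsify(L[np.ix_(idx, idx)], n_neighbors)
    vals, vecs = eigsh(Lhat, k=t, which="LA")
    return idx, vals, vecs

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
L = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
idx, vals, vecs = one_run(L, r=50, n_neighbors=5, t=5, rng=rng)
```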
At the end of this procedure, we
apply the determinantal consensus clustering of Section~\ref{sec:determinantal:consensus}
to the set of $N$ partitions obtained.
The number $M$ of submatrices to be sampled must be chosen so that we get benefits from using the submatrices to sample the generator sets through DPP rather than using the whole kernel matrix $L$. We know that the computational complexity of obtaining the eigendecomposition of the $n\times n$ kernel matrix is $O(n^3)$ operations.
On the other hand, the eigendecompositions of the $M$ $r\times r$ submatrices require $O(M r^3)$ operations.
To obtain computational gains from the sampled submatrices, we must guarantee that
$Mr^3<n^3$, that is, $M<(r/n)^{-3}$. The quantity $\gamma = r/n$
is the proportion of points considered in the submatrices.
Since we would like to take full advantage of both the dimension reduction and speed,
we should work with values of $M\ll\gamma^{-3}$.
In our experiments, we set
$M=\lfloor\gamma^{-3}/2\rfloor$, where $\lfloor x\rfloor$ stands for the floor function.
\section{Experiments with large datasets}\label{experiments2}
In this section, we apply both approaches presented in Section~\ref{largedatasets} to moderately large datasets
in order to compare their results.
The comparison and evaluation of the results are based on simulations.
Because the data size depends on the nature of the problem,
there is no clear definition of what is considered a ``large dataset''.
Here, we consider two size values as large sizes: $n\in \{1000, 10000\}$.
These values were chosen so as to obtain results that can be applied to moderately large real datasets,
and to be able to explore the effect on the results of using sparse kernel matrices instead of the original
kernel matrices for determinantal consensus clustering.
Moreover, the chosen data sizes keep the computation time within reasonable elapsed times for our computer resources.
Our choices for data sizes do not necessarily constitute what is known as \emph{big data},
a term popularized by \cite{Mashey1997} to describe datasets with sizes that go
beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time \citep{Snijders2012}.
As \cite{Bonner2017} emphasize, the processing of large datasets does not have to involve big data.
\subsection{Large datasets with $n=1000$ observations}
\paragraph{Data generation.}\;
Following \cite{vicente&murua1-2020}, we created nine experimental conditions or scenarios to generate datasets with $n=1000$ observations. Each dataset was generated with the algorithm of \cite{Melnykov20123}, which draws datasets from $p$-variate finite Gaussian mixtures. These have the form
$\sum_{k=1}^K \pi_k \phi_p(\cdot; \mu_k, V_k)$, where $K$ is the number of Gaussian components, $\phi_p$ denotes the $p$-variate Gaussian density, $\{{\mu}_1, \ldots, {\mu}_K\}$, are the component means, and
$\{V_1, \ldots, V_K\}$ are the covariance matrices of the components.
The means constitute $K$ independent realizations of a uniform $p$-variate distribution on the $p$-dimensional unit hypercube;
the covariance matrices are $K$ independent realizations of a $p$-variate standard Wishart distribution with $p + 1$ degrees of freedom; the mixing proportions $\pi_k$ are drawn from a Dirichlet
distribution, so that $\sum_{k=1}^{K} \pi_k =1$.
The number of data points per component is a draw from the multinomial distribution based on the mixing proportions.
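A sketch of this generative recipe follows (the function name is ours, and the sketch omits the pairwise-overlap control of \cite{Melnykov20123} discussed next):

```python
import numpy as np
from scipy.stats import wishart

def simulate_mixture(n, K, p, rng):
    """Draw n points from a p-variate Gaussian mixture with K components:
    means uniform on the unit hypercube, standard Wishart covariances with
    p + 1 degrees of freedom, Dirichlet mixing proportions, and multinomial
    component counts."""
    means = rng.uniform(size=(K, p))
    covs = wishart.rvs(df=p + 1, scale=np.eye(p), size=K, random_state=rng)
    pi = rng.dirichlet(np.ones(K))            # sums to 1
    counts = rng.multinomial(n, pi)           # points per component
    X = np.vstack([rng.multivariate_normal(means[k], covs[k], size=counts[k])
                   for k in range(K)])
    labels = np.repeat(np.arange(K), counts)
    return X, labels

X, labels = simulate_mixture(500, K=4, p=3, rng=np.random.default_rng(0))
```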
\cite{Melnykov20123} introduce the concept of \emph{pairwise overlap} to generate datasets with the algorithm.
It represents the degree of interaction between components and defines the clustering complexity of datasets.
\cite{Melnykov20123} define a range of 0.001 to 0.4 for an average pairwise overlap between components, representing a very low and a very high overlap degree, respectively.
We follow the values suggested by the authors and choose, for every generated dataset, a random average pairwise overlap between 0.001 and 0.01.
This range of values keeps the overlap degree at a low level, so that performing clustering on the data makes sense.
All datasets were generated from mixtures with ellipsoidal covariance matrices; the simulated components do not necessarily
contain the same number of elements.
To obtain the nine simulated datasets with $n=1000$ observations,
we consider $p\in\{5, 10, 18\}$ variables, and $K\in\{4, 8, 15\}$ components.
These values correspond to three different levels (low, medium, large) for $p$ and $K$.
We also ensured that no cluster of size less than $\sqrt{n}$ is present in any simulated dataset,
because otherwise, following the procedure described in Section~\ref{sec:consensus},
small clusters would inevitably be merged with larger ones.
We applied the two approaches presented in Section~\ref{largedatasets} to each simulated dataset.
\paragraph{Results with the approach based on NNGP.}\;
In order to study the effect of sparseness in the
approximation presented in Section~\ref{NNGP},
we set different values for $m$, the maximum number of nonzero elements in each row of the matrix $A$. Each value ensures four levels of sparseness
(or total percentage of zeros) for the matrix $\widetilde{L}^{-1}$: $20\%, 40\%, 60\%$ and $80\%$.
The $t$ largest eigenvalues and corresponding eigenvectors of $\widetilde{L}^{-1}$ were extracted with the Lanczos algorithm, for $t\in\{10, 25, 50\}$.
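As an illustration of this step, the largest eigenpairs of a sparse symmetric matrix can be extracted with an implicitly restarted Lanczos routine such as SciPy's \texttt{eigsh}. The matrix below is a toy stand-in for the sparse approximation, not our actual $\widetilde{L}^{-1}$.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Build a random sparse matrix and symmetrize it to mimic a kernel-like matrix.
S = sparse_random(500, 500, density=0.2, random_state=0, format='csr')
A = (S + S.T) / 2

# Extract the 25 algebraically largest eigenvalues and eigenvectors
# with a Lanczos-type method.
vals, vecs = eigsh(A, k=25, which='LA')
```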
The consensus DPP of Section~\ref{sec:determinantal:consensus} was applied to obtain 200 partitions, as recommended by \cite{vicente&murua1-2020}.
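For concreteness, drawing a DPP sample from a (possibly truncated) eigendecomposition can be sketched with the standard spectral algorithm. This is a generic illustration of DPP sampling, not the exact consensus-DPP implementation used here; the toy eigenpairs at the end are ours.

```python
import numpy as np

def sample_dpp(vals, vecs, rng):
    # Phase 1: include eigenvector i with probability lambda_i / (1 + lambda_i).
    keep = rng.random(len(vals)) < vals / (1.0 + vals)
    V = vecs[:, keep]
    sample = []
    # Phase 2: draw one item at a time; after each draw, restrict the
    # subspace to vectors vanishing at the chosen coordinate.
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1)
        probs = probs / probs.sum()
        i = rng.choice(len(probs), p=probs)
        sample.append(int(i))
        j = int(np.argmax(np.abs(V[i])))        # a column with V[i, j] != 0
        V = V - np.outer(V[:, j], V[i] / V[i, j])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)              # re-orthonormalize
    return sorted(sample)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))
vecs, vals = Q[:, :3], np.full(3, 1e9)          # three near-certain eigenvectors
subset = sample_dpp(vals, vecs, rng)
```

With eigenvalues this large, all three eigenvectors are selected almost surely, so the sampled subset has exactly three distinct items.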
The whole procedure was repeated ten times.
Considering the nine simulated scenarios,
the experiment consists of nine plots with ten repeated measures from a $4\times 3$ factorial design given
by the combination of sparseness and number of extracted eigenvalues.
For each single plot, we note that all $120$ observations are dependent, since the same data were used for all
combinations.
Figure~\ref{densityestimNNGP} shows density estimates of the eigenvalue distribution obtained with the NNGP approximation matrix
$\widetilde{L}$ (solid line).
We considered two combinations of sparseness levels and number of eigenvalues: ($20\%, t=10$) and ($60\%, t=50$),
respectively.
The figure presents plots from a single dataset; these are typical of the plots obtained with all datasets. The tick-marks on the horizontal axis locate all eigenvalues of the original kernel matrix $L$.
We can see that the density estimates obtained from $\widetilde{L}$ concentrate correctly around the true eigenvalues of the kernel matrix $L$.
For comparison purposes, we also display in Figure~\ref{densityestimNNGP} the density estimates of the set of eigenvalues from the
$m_{nn}$-nearest-neighbor matrix $\check{L}$ (dashed line) described in Section~\ref{NNGP}.
Unlike those of the NNGP approach, the eigenvalue density estimates associated with $\check{L}$ do not quite concentrate
on the set of true eigenvalues.
The plots corroborate the arguments of Section~\ref{NNGP} concerning our preference for $\widetilde{L}$ instead of $\check{L}$.
\begin{figure}[h]
\centering
\subfloat[Sparseness=20\% ; $t$=10 eigenvalues]{%
\includegraphics[width=0.4\textwidth]{densityplotNNGPs20e10.pdf}}
\quad
\subfloat[Sparseness=60\% ; $t$=50 eigenvalues]{%
\includegraphics[width=0.4\textwidth]{densityplotNNGPs60e50.pdf}}
\caption{Kernel density estimates
of the eigenvalue distribution from the NNGP approximation $\widetilde{L}$ (solid line), and the $m_{nn}$-nearest-neighbor matrix $\check{L}$ (dashed line). The plots are associated with two sparseness-eigenvalue conditions among the
twelve experimental conditions for a given dataset. The tick-marks indicate the eigenvalues of $L$. }
\label{densityestimNNGP}
\end{figure}
Following the discussion of Section~\ref{largedatasets},
we also computed the Frobenius distances between the sparse approximation matrices, $\widetilde{L}$ and $\check{L}$,
and the original dense matrix $L$.
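As a minimal illustration, the Frobenius distance between a kernel matrix and a sparse surrogate can be computed as follows. The matrices are toys, and the thresholding rule is only a stand-in for the approximations studied here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 3))
# Gaussian kernel matrix on 200 random points in the unit cube.
L = np.exp(-np.square(X[:, None, :] - X[None, :, :]).sum(-1))
# Crude sparsification by thresholding, standing in for a sparse approximation.
L_sparse = np.where(L > 0.2, L, 0.0)
dist = np.linalg.norm(L - L_sparse, ord='fro')
```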
Table~\ref{FDNNGP} shows the mean and standard deviation of Frobenius distances considering all nine data scenarios,
as a function of the level of sparseness.
The mean distance $\lVert L - \widetilde{L}\rVert_F$ is always much smaller than the distance $\lVert L - \check{L}\rVert_F$.
The distances $\lVert L - \widetilde{L}\rVert_F$ first decrease with sparseness, until a level of at least $60\%$ sparseness is reached. In contrast,
the distances of $L$ to $\check{L}$ increase monotonically with sparseness.
The approximation based on NNGP is then the preferred choice for extracting eigenvalues.
These results corroborate the tight concentration of the NNGP eigenvalue distributions
about the eigenvalues of $L$ seen in Figure~\ref{densityestimNNGP}.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Approximation\\ matrix\end{tabular}} & \multicolumn{4}{ c }{Sparseness level} \\\cmidrule{2-5}
& $20\%$ & $40\%$ & $60\%$ & $80\%$\\ \midrule
$\widetilde{L}$ & 11.31 (16.49) & 6.80 (4.55) & 8.04 (5.70) & 66.58 (77.14)\\
$\check{L}$ & 200.17 (17.41) & 315.89 (16.27) & 421.82 (8.66) & 527.88 (8.18)\\ \bottomrule
\end{tabular}
\caption{Mean and standard deviation (within parentheses) of Frobenius distances. }
\label{FDNNGP}
\end{table}
To have an objective measure of the resemblance between the two sets of eigenvalues from $\widetilde{L}$ and $L$,
we computed the symmetrized Kullback-Leibler (KL) divergence \citep{Kullback1951} between corresponding density estimates of the two eigenvalue
distributions. The densities were estimated using a Gaussian kernel density estimator \citep{Silverman1986}.
Table~\ref{KLNNGP} reports the mean and standard deviation of the KL divergences
over all nine data scenarios
for each combination of sparseness and number of eigenvalues extracted.
There does not appear to be any significant difference among the cases,
except for $t=10$ eigenvalues. Better results are obtained
when a larger, but still very moderate, number of eigenvalues is estimated.
The divergences reported in Table~\ref{KLNNGP} are very small,
showing a close similarity between the eigenvalue distributions.
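A possible implementation of this comparison evaluates the two kernel density estimates on a common grid and integrates the symmetrized KL divergence numerically. The input samples below are illustrative, not the paper's eigenvalue sets, and the bandwidth rule is SciPy's default rather than a specific choice from \cite{Silverman1986}.

```python
import numpy as np
from scipy.stats import gaussian_kde

def symmetrized_kl(x, y, grid_size=512, eps=1e-12):
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    dx = grid[1] - grid[0]
    p = gaussian_kde(x)(grid) + eps
    q = gaussian_kde(y)(grid) + eps
    p /= p.sum() * dx                    # renormalize on the grid
    q /= q.sum() * dx
    # KL(p||q) + KL(q||p) = integral of (p - q) * log(p / q), which is
    # pointwise nonnegative.
    return float(((p - q) * np.log(p / q)).sum() * dx)

rng = np.random.default_rng(2)
d = symmetrized_kl(rng.normal(0.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
```

Two samples from the same distribution yield a small divergence, mirroring the near-zero values in Table~\ref{KLNNGP}.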
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
Number of & \multicolumn{4}{ c }{Sparseness level} \\\cmidrule{2-5}
eigenvalues & $20\%$ & $40\%$ & $60\%$ & $80\%$\\ \midrule
10 & 0.01 (0.00) & 0.36 (0.98) & 0.36 (0.97) & 0.35 (0.96)\\
25 & 0.01 (0.00) & 0.01 (0.00) & 0.01 (0.00) & 0.01 (0.00)\\
50 & 0.01 (0.00) & 0.01 (0.00) & 0.01 (0.00) & 0.01 (0.00)\\ \bottomrule
\end{tabular}
\caption{Mean and standard deviation (within parentheses) of Kullback-Leibler divergences.}
\label{KLNNGP}
\end{table}
We also report and compare the elapsed time in seconds for calculating each set of eigenvalues.
Table~\ref{timeNNGP} displays the means and standard deviations of elapsed times over all nine data scenarios
for each combination of sparseness and number of eigenvalues extracted.
We can see that, as expected, the average computation time decreases with sparseness, and increases with the number of eigenvalues extracted.
In fact, a linear regression (not shown here) of the elapsed time on sparseness and number of eigenvalues extracted yields a coefficient of determination of 0.97,
clearly indicating
an approximately linear dependence of the computation time on both factors.
The same statistics computed from the original kernel matrix $L$, from which all eigenvalues must be extracted, yield a mean of 0.27 seconds with a standard deviation of 0.06. Extracting only a few eigenvalues with the Lanczos algorithm significantly reduces
the computation time.
The elapsed times were computed while running a Julia v1.1.1 script \citep{Bezanson2017}
on a PC with an Intel-Core i5-4460 CPU running at 3.20GHz with 16GB of RAM.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
Number of & \multicolumn{4}{ c }{Sparseness level} \\\cmidrule{2-5}
eigenvalues & $20\%$ & $40\%$ & $60\%$ & $80\%$\\ \midrule
10 & 0.10 (0.00) & 0.07 (0.00) & 0.06 (0.01) & 0.04 (0.01)\\
25 & 0.12 (0.01) & 0.09 (0.00) & 0.08 (0.01) & 0.05 (0.01)\\
50 & 0.14 (0.01) & 0.12 (0.01) & 0.11 (0.01) & 0.07 (0.01)\\ \bottomrule
\end{tabular}
\caption{Means and standard deviations (within parentheses) of elapsed times in seconds for eigenvalues calculation.}
\label{timeNNGP}
\end{table}
To obtain the optimal clustering configuration for each dataset, we used
the determinantal consensus clustering described in Section~\ref{sec:determinantal:consensus}, meeting the minimal cluster size criterion of $\sqrt{n}$ and using the kernel-based validation index of \cite{Fan20103}.
The quality of the chosen optimal clustering configuration was assessed with the adjusted Rand index (ARI)
\citep{Rand19713,Hubert19853}.
Among the many known measures of goodness-of-fit that can be found in the literature,
the ARI is one of the most common criteria.
The original Rand index counts the proportion of pairs of elements that are either in the same cluster in both clustering configurations or in different clusters in both configurations.
The adjusted version corrects this proportion so that its expected value is zero when the clustering configurations are random.
The larger the ARI, the more similar the two configurations are, with the maximum ARI score of 1.0 indicating a perfect match.
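For reference, the ARI can be computed from the contingency table of two labelings as in the following minimal sketch (in practice a library routine may be preferred).

```python
import numpy as np
from math import comb

def ari(a, b):
    a, b = np.asarray(a), np.asarray(b)
    # Contingency table n_ij of co-occurrences between the two labelings.
    table = np.array([[int(np.sum((a == i) & (b == j))) for j in np.unique(b)]
                      for i in np.unique(a)])
    sum_ij = sum(comb(int(n), 2) for n in table.ravel())
    sum_a = sum(comb(int(n), 2) for n in table.sum(axis=1))
    sum_b = sum(comb(int(n), 2) for n in table.sum(axis=0))
    total = comb(len(a), 2)
    expected = sum_a * sum_b / total      # expectation under the permutation model
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

For instance, two labelings that agree up to a relabeling of the clusters score exactly 1.0.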
Table~\ref{ressparseNNGP} displays the ARI means and standard deviations over all nine scenarios and replicas for
all twelve combinations
of sparseness and number of eigenvalues extracted.
\begin{table}[H]
\centering
\begin{tabular}{cccc}
\toprule
\begin{tabular}[c]{@{}c@{}}Sparseness\\ level\end{tabular} & $t=10$ & $t=25$ & $t=50$ \\ \midrule
20\% & 0.94\,(0.06) & 0.95\,(0.06) & 0.95\,(0.06) \\
40\% & 0.93\,(0.05) & 0.95\,(0.05) & 0.95\,(0.06) \\
60\% & 0.94\,(0.05) & 0.95\,(0.06) & 0.95\,(0.05) \\
80\% & 0.93\,(0.06) & 0.94\,(0.05) & 0.95\,(0.05)
\\ \bottomrule
\end{tabular}
\caption{Global ARI means and standard deviations (within parentheses) yielded by consensus DPP with the sparse kernel matrices for the twelve combinations of sparseness and number of eigenvalues extracted. Each mean and standard deviation was computed from ninety datasets.}
\label{ressparseNNGP}
\end{table}
For comparison purposes, we computed the ARI statistics obtained by applying
consensus DPP with the original dense kernel matrix $L$, from which we extracted all the eigenvalues and eigenvectors.
This yielded an ARI mean of 0.91, and a standard deviation of 0.08.
We did the same with the consensus clustering methodology applied to partitions generated with the well-known
Partitioning Around Medoids (PAM) algorithm \citep{Kaufmann19873}.
The PAM algorithm is a classical partitioning technique for clustering.
It chooses the data points that serve as centers of the Voronoi cells (the medoids) by uniform random sampling.
Because DPP selects center points based on diversity,
our goal here is to show how much the quality of the clustering configurations is affected
by the lack of diversity at the moment of sampling centroids.
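The contrast with DPP sampling can be made concrete with a bare-bones k-medoids sketch, in which the medoids are initialized by uniform random sampling and then updated within their Voronoi cells. This is an illustrative sketch, not the full PAM swap phase of \cite{Kaufmann19873}; the toy data at the end are ours.

```python
import numpy as np

def k_medoids(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)         # uniform random init
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            # The new medoid minimizes the within-cluster distance sum.
            new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)), rng.normal(5.0, 0.2, (20, 2))])
medoids, labels = k_medoids(X, k=2)
```

Nothing in the initialization discourages the two medoids from landing in the same region, which is precisely the lack of diversity that DPP sampling avoids.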
PAM yielded a much lower ARI mean of 0.86, and a standard deviation of 0.14.
We can see that the quality of the clustering results associated with the NNGP approach
is better in terms of ARI than the ones yielded using the whole dense kernel matrix $L$.
This conclusion holds regardless of the sparseness level and number of eigenvalues extracted.
The results are also more stable, considering the standard deviations.
Considering the elapsed times, choosing $t=25$ eigenvalues is the best option among the three levels of $t$ studied here. Combined with a higher sparseness level (which speeds up the computation of the matrix $\widetilde{L}^{-1}$),
this appears to be a winning combination.
The results yielded by PAM are not as good.
In conclusion, the NNGP approach provides a very good and efficient alternative to the use of the complete dense matrix $L$.
\paragraph{Results with the approach based on small submatrices from $L$.}\;
Following the notation presented in Section~\ref{knn}, we studied the effect of three parameters on the clustering results: the proportion $\gamma$ of points chosen (or, equivalently, the size $r$ of any submatrix $L^{(i_k)}, \; k=1, 2,\dots, N$), the sparseness level of any submatrix $L^{(i_k)}$ (along with the number $k$ of nearest neighbors needed to achieve such a level), and the number $t\in\{10, 25, 50\}$ of largest eigenvalues to be extracted. Table~\ref{tabknn} shows the choices for $\gamma$ and sparseness levels with the corresponding values of $r$ and $k$.
Note that
each value of $k$ ensures the same sparseness for all nine data scenarios.
\begin{table}[H]
\centering
\begin{tabular}{ccccccc}
\toprule
& & \mbox{\ } & \multicolumn{4}{c}{$k$} \\
$\gamma$ & $r$ & & 20\% & 40\% & 60\% & 80\% \\ \midrule
0.05 & 50 & & 34 & 25 & 17 & 8 \\
0.1 & 100 & & 68 & 50 & 34 & 16 \\
0.2 & 200 & & 136 & 100 & 68 & 32 \\ \bottomrule
\end{tabular}
\caption{Number of nearest neighbors $k$ associated with choices for proportion of data $\gamma$ and sparseness levels
20\%, 40\%, 60\% and 80\%. }
\label{tabknn}
\end{table}
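The sampling-and-sparsification step just parameterized can be sketched as follows. This is an illustrative sketch with names of our own; note that the symmetrization step makes the nominal per-row sparseness only approximate, and the toy matrix stands in for the kernel $L$.

```python
import numpy as np

def knn_sparsify_submatrix(L, gamma, k, seed=0):
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    r = int(gamma * n)
    idx = rng.choice(n, size=r, replace=False)   # random subset of the points
    sub = L[np.ix_(idx, idx)]
    keep = np.argsort(-sub, axis=1)[:, :k]       # k largest similarities per row
    sparse = np.zeros_like(sub)
    rows = np.arange(r)[:, None]
    sparse[rows, keep] = sub[rows, keep]
    return idx, np.maximum(sparse, sparse.T)     # symmetrize the sparsity pattern

rng = np.random.default_rng(4)
L = np.exp(-rng.random((200, 200)))
L = (L + L.T) / 2                                # symmetric kernel-like matrix
idx, Lhat = knn_sparsify_submatrix(L, gamma=0.1, k=8)
```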
As we did with the NNGP approach,
we set to 200 the number of partitions used to obtain a consensus clustering.
The whole procedure was repeated ten times.
The analysis of the results mimics the one made in the previous section with the NNGP approach.
Figure~\ref{densityestimsmallGram} shows density estimates of the eigenvalue distribution
associated with three combinations of
($\gamma$, sparseness, eigenvalues) $\in \{ (0.05, 20\%, 10),$ $(0.1, 40\%, 25),$ $(0.2, 60\%, 50)\}$.
The plots correspond to a given dataset; they depict typical patterns observed in all datasets.
The tick-marks on the horizontal axis locate all eigenvalues of the original kernel matrix $L$.
We can see that the density estimates concentrate well around the smaller eigenvalues of $L$,
but fail to capture the largest eigenvalues.
\begin{figure}[H]
\centering
\subfloat[$\gamma=0.05$; Sparseness=20\% ; $t$=10 eigenvalues]{%
\includegraphics[width=0.3\textwidth]{smallGram5s20e10.pdf}}
\quad
\subfloat[$\gamma=0.1$; Sparseness=40\% ; $t$=25 eigenvalues]{%
\includegraphics[width=0.3\textwidth]{smallGram10s40e25.pdf}}
\quad
\subfloat[$\gamma=0.2$; Sparseness=60\% ; $t$=50 eigenvalues]{%
\includegraphics[width=0.3\textwidth]{smallGram20s60e50.pdf}}
\caption{Density estimates of three sets of eigenvalues extracted with the random small matrices approach, associated with three scenarios of Table~\ref{tabknn} on a given dataset. The tick-marks are placed on the eigenvalues of $L$.
}
\label{densityestimsmallGram}
\end{figure}
Table~\ref{KLknn} shows the Kullback-Leibler symmetrized divergences between the
distribution of the eigenvalues extracted from the random small matrices and the
distribution of the eigenvalues of $L$, for the ($\gamma,$ sparseness) combinations displayed in Table~\ref{tabknn}.
The eigenvalue distributions were estimated with kernel density estimators.
We can see that, globally, increasing the proportion $\gamma$ of points sampled reduces the KL divergence
when combined with moderate to low levels of sparseness and a higher number of eigenvalues.
Overall, the KL divergences are larger than those obtained with the NNGP approach.
But, again, we stress that the objective of these approaches is not to estimate the eigenvalues of $L$,
but to offer efficient DPP sampling alternatives to DPP$_\mathcal{S}(L)$.
%
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{$\gamma$} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of\\ eigenvalues\end{tabular}} & \multicolumn{4}{c}{Sparseness level} \\ \cmidrule{3-6}
& & 20\% & 40\% & 60\% & 80\% \\ \midrule
\multirow{3}{*}{0.05} & 10 & 2.09 (1.28) & 1.28 (0.37) & 1.00 (0.08) & 1.01 (0.03) \\
& 25 & 0.02 (0.01) & 0.58 (1.25) & 1.64 (0.46) & 1.01 (0.04) \\
& 50 & 0.02 (0.01) & 0.03 (0.23) & 1.71 (0.88) & 1.10 (0.17)\\ \midrule
\multirow{3}{*}{0.1} & 10 & 1.95 (1.39) & 1.34 (0.43) & 1.02 (0.10) & 1.00 (0.03) \\
& 25 & 0.01 (0.00) & 0.12 (0.58) & 1.87 (0.89) & 1.04 (0.10) \\
& 50 & 0.02 (0.00) & 0.01 (0.00) & 0.15 (0.68) & 1.32 (0.26)\\ \midrule
\multirow{3}{*}{0.2} & 10 & 2.07 (1.43) & 1.36 (0.44) & 1.03 (0.12) & 1.00 (0.03) \\
& 25 & 0.01 (0.00) & 0.16 (0.69) & 1.76 (1.08) & 1.12 (0.23) \\
& 50 & 0.01 (0.00) & 0.01 (0.00) & 0.01 (0.00) & 1.64 (0.51) \\ \bottomrule
\end{tabular}
\caption{Means and standard deviations (within parentheses) of Kullback-Leibler divergences associated with the random small matrices approach.}
\label{KLknn}
\end{table}
Recalling the notation of Section~\ref{knn}, Table~\ref{timeknn} shows the means and standard deviations of the
elapsed times in seconds for eigenvalue computation, using the sampled submatrix $L^{(i_k)}$ or its sparse approximation $\widehat{L}^{(i_k)}$.
The results were obtained while running a Julia script on a PC with an Intel Core i5-4460 CPU running at 3.20GHz
with 16GB of RAM.
Note that, in contrast to the NNGP approach, the level of sparseness does not affect the elapsed times for computing the eigenvalues
with the Lanczos method. This is probably due to the small size of the matrices $\widehat{L}^{(i_k)}$.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{$\gamma$} & \multirow{2}{*}{Submatrix} & \multicolumn{3}{c}{Number of eigenvalues} \\ \cmidrule{3-5}
& & 10 & 25 & 50 \\ \midrule
\multirow{2}{*}{0.05} & $L^{(i_k)}$ & 0.001 (0.000) & 0.001 (0.000) & 0.001 (0.000) \\
& $\widehat{L}^{(i_k)}$ & 0.001 (0.000) & 0.001 (0.000) & 0.001 (0.000) \\ \midrule
\multirow{2}{*}{0.1} & $L^{(i_k)}$ & 0.004 (0.003) & 0.004 (0.003) & 0.004 (0.003) \\
& $\widehat{L}^{(i_k)}$ & 0.003 (0.001) & 0.003 (0.001) & 0.003 (0.001)\\ \midrule
\multirow{2}{*}{0.2} & $L^{(i_k)}$ & 0.01 (0.004) & 0.01 (0.004) & 0.01 (0.004) \\
& $\widehat{L}^{(i_k)}$ & 0.006 (0.002) & 0.006 (0.002) & 0.006 (0.002) \\ \bottomrule
\end{tabular}
\caption{Means and standard deviations (within parentheses) of elapsed times in seconds for eigenvalue calculation, for $L^{(i_k)}$ and its sparse approximation $\widehat{L}^{(i_k)}$.
}
\label{timeknn}
\end{table}
%
Table~\ref{ressparseknn} displays the ARI means and standard deviations over all
($\gamma$, sparseness) combinations in Table~\ref{tabknn}.
For each combination, these statistics were computed from a sample of ninety ARI scores given
by the ten replica datasets from the nine data scenarios.
Table~\ref{resdense} displays the same statistics obtained by keeping the dense version of the submatrices $L^{(i_k)}$,
and extracting all its eigenvalues, instead of using their sparse approximations.
This comparison is appropriate to study the effect of making these small matrices sparse.
These results should be compared to those of
(i) consensus DPP applied to the original dense kernel matrix $L$, from which all eigenvalues are extracted,
and (ii) the consensus clustering methodology applied to partitions generated by PAM.
These latter results were already mentioned above when showing the results of the NNGP approach.
They are, respectively, (i) a mean ARI and standard deviation of 0.91 and 0.08, and
(ii) a mean ARI and standard deviation of 0.86 and 0.14.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
$\gamma$ & \begin{tabular}[c]{@{}c@{}}Sparseness\\ level\end{tabular} & $t=10$ & $t=25$ & $t=50$ \\ \midrule
\multirow{4}{*}{0.05}
& 20\% & 0.92 (0.06) & 0.94 (0.05) & 0.96 (0.06)\\
& 40\% & 0.93 (0.06) & 0.95 (0.05) & 0.95 (0.05) \\
& 60\% & 0.94 (0.05) & 0.95 (0.05) & 0.95 (0.06) \\
& 80\% & 0.93 (0.06) & 0.95 (0.06) & 0.95 (0.06) \\ \midrule
\multirow{4}{*}{0.1}
& 20\% & 0.93 (0.05) & 0.93 (0.06) & 0.94 (0.07) \\
& 40\% & 0.95 (0.06) & 0.95 (0.05) & 0.95 (0.06) \\
& 60\% & 0.95 (0.05) & 0.95 (0.05) & 0.95 (0.06) \\
& 80\% & 0.94 (0.06) & 0.94 (0.07) & 0.95 (0.06) \\ \midrule
\multirow{4}{*}{0.2}
& 20\% & 0.93 (0.05) & 0.94 (0.06) & 0.93 (0.07) \\
& 40\% & 0.94 (0.05) & 0.95 (0.05) & 0.94 (0.06) \\
& 60\% & 0.94 (0.05) & 0.94 (0.05) & 0.92 (0.06) \\
& 80\% & 0.94 (0.06) & 0.95 (0.06) & 0.91 (0.08) \\
\bottomrule
\end{tabular}
\caption{Global ARI means and standard deviations (within parentheses) associated with consensus DPP on the sparse random small submatrices, for each ($\gamma$, sparseness) combination in Table~\ref{tabknn}; each mean and standard deviation was computed over the corresponding ninety datasets.}
\label{ressparseknn}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ccc}
\toprule
$\gamma=0.05$ & $\gamma=0.1$ & $\gamma=0.2$ \\ \midrule
0.94 (0.05) & 0.95 (0.05) & 0.95 (0.06) \\ \bottomrule
\end{tabular}
\caption{Global ARI means and standard deviations (within parentheses) associated with consensus DPP on the dense version of the random small submatrices considering all datasets.}
\label{resdense}
\end{table}
The quality of the clustering results is much better in terms of ARI if we use either the dense $L^{(i_k)}$
or sparse $\widehat{L}^{(i_k)}$ random small submatrices rather than the whole dense kernel matrix $L$.
The sparse approximations $\widehat{L}^{(i_k)}$ require more computation, since the nearest neighbors must be computed.
This extra cost does not seem worthwhile when comparing the results with those of the dense submatrices $L^{(i_k)}$.
However, there is a slight improvement in the results when $\gamma=0.05$.
In this case, combining any sparseness level with a higher number of eigenvalues is generally a good choice,
since the computational times for eigenvalue extraction do not differ significantly.
For other levels of $\gamma$, the gain is not worth the complication of making the submatrices sparse.
Again, as with the NNGP approach, the random small submatrices approach outperforms PAM and shows less variability in the quality of the results.
Furthermore, this approach achieves
comparable quality results to those of the NNGP approach.
From a computational point of view, the NNGP approach is more expensive because it requires dealing with larger matrices.
Therefore, the random small submatrices are a good and very efficient alternative to the use of the complete dense matrix $L$,
and to the use of the NNGP approach.
\subsection{Large dataset with $n=10000$ observations}
\paragraph{Data generation.}\;
For the second experiment, due to hardware limitations, we simulated only two datasets with $n=10000$ observations.
As in the previous case, these were generated with the algorithm of \cite{Melnykov20123}.
The first one has $p=15$ variables and $K=10$ components, while the second one has $p=10$ variables and $K=5$ components.
Throughout this section, these data will be referred to as dataset I and dataset II, respectively.
Both datasets have a maximum pairwise overlap of 0.01.
We ensured that no cluster of size less than $\sqrt{n}$ is present in the simulated datasets, as it would inevitably be merged with a larger cluster, according to the procedure described in Section~\ref{sec:consensus}. We applied the two approaches presented in Section~\ref{largedatasets} to the simulated datasets.
\paragraph{Results with the approach based on NNGP.}\label{sec:NNGP10000}\;
We studied the same four levels of sparseness for the matrix $\widetilde{L}^{-1}$ ($20\%, 40\%, 60\%, 80\%$), setting
the appropriate values for the maximum number $m$ of nonzero elements in each row of the matrix $A$ (Section~\ref{NNGP}).
The $t\in\{100, 250, 500\}$ largest eigenvalues and corresponding eigenvectors of $\widetilde{L}^{-1}$ were extracted with the Lanczos algorithm. The determinantal consensus clustering of Section~\ref{sec:determinantal:consensus}
was then applied so as to obtain 200 partitions of the data.
The whole procedure was repeated five times.
This time, the experiment consists of two plots of five repeated measures each, from
a $4\times 3$ factorial design given by the combination of four levels of sparseness and three
levels for the number of extracted eigenvalues.
All $60$ observations of each plot are dependent, since the same datasets were used for all scenarios.
The analysis of the results follows the same steps as the NNGP approach applied to the smaller datasets.
Figure~\ref{densityestimNNGP10000} shows density estimates of the eigenvalue distribution of $\widetilde{L}$ for dataset I,
for two combinations of sparseness and number of eigenvalues: ($20\%, t=100$) and ($60\%, t=500$),
respectively. We can see that the density estimates concentrate correctly around the true eigenvalues of the kernel matrix $L$.
\begin{figure}[H]
\centering
\subfloat[Sparseness: 20\% ; $t$=100 eigenvalues]{%
\includegraphics[width=0.4\textwidth]{densityplotNNGP10000s20e100.pdf}}
\quad
\subfloat[Sparseness: 60\% ; $t$=500 eigenvalues]{%
\includegraphics[width=0.4\textwidth]{densityplotNNGP10000s60e500.pdf}}
\caption{Kernel density estimates
of the eigenvalue distribution associated with $\widetilde{L}$ for dataset I, for two sparseness-eigenvalue
conditions among the twelve experimental conditions. The tick-marks indicate the eigenvalues of $L$.}
\label{densityestimNNGP10000}
\end{figure}
For comparison purposes between the two sparse approximations $\widetilde{L}$ and $\check{L}$,
we computed the Frobenius distance between these matrices and the kernel matrix $L$.
Table~\ref{FDNNGP10000} displays the results for both datasets. As seen in the experiment with the smaller datasets,
the distance of $L$ to $\widetilde{L}$ is always much smaller than the distance of $L$ to $\check{L}$.
For both datasets, the distances always increase with sparseness.
The approximation based on NNGP is a better choice for extracting eigenvalues.
This also explains the correct concentration of the eigenvalue density estimate of $\widetilde{L}$
around the eigenvalues of $L$, as seen in Figure~\ref{densityestimNNGP10000}.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Approximation\\ matrix\end{tabular}} & \multirow{2}{*}{Dataset} & \multicolumn{4}{c}{Sparseness level} \\\cmidrule{3-6}
& & $20\%$ & $40\%$ & $60\%$ & $80\%$ \\ \midrule
\multirow{2}{*}{$\widetilde{L}$} & I & 2.75 & 8.82 & 15.65 & 96.71 \\
& II & 0.33 &4.47 &14.07 &191.45 \\\cmidrule{3-6}
\multirow{2}{*}{$\check{L}$} & I & 2122.11 & 3269.61 & 4265.59 & 5224.48 \\
& II &1955.09 &3079.10 &4128.15 &5228.46\\\bottomrule
\end{tabular}
\caption{Frobenius distances for both datasets (I and II).}
\label{FDNNGP10000}
\end{table}
Table~\ref{KLNNGP10000} reports the symmetrized Kullback-Leibler divergence
of both datasets between the density estimates of the two
eigenvalue distributions from $\widetilde{L}$ and $L$. The densities were estimated with kernel density
estimators.
Again, as observed with the smaller datasets, the very small divergence values indicate a good resemblance
between the eigenvalue distributions.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of\\ eigenvalues\end{tabular}} & \multirow{2}{*}{Dataset} & \multicolumn{4}{c}{Sparseness level} \\\cmidrule{3-6}
& & $20\%$ & $40\%$ & $60\%$ & $80\%$ \\ \midrule
\multirow{2}{*}{100} & I & 0.0000338 & 0.0000337 & 0.0000337 & 0.0000336 \\
& II & 0.0000329 & 0.0000132 & 0.0000015 & 0.0000329 \\\cmidrule{3-6}
\multirow{2}{*}{250} & I & 0.0000365 & 0.0000359 & 0.0000359 & 0.0000358 \\
& II &0.0000133 &0.0000015 &0.0000329 &0.0000139\\\cmidrule{3-6}
\multirow{2}{*}{500} & I & 0.0000090 & 0.0000090 & 0.0000090 & 0.0000090 \\
& II &0.0000016 &0.0000322 &0.0000166 &0.0000017\\
\bottomrule
\end{tabular}
\caption{Kullback-Leibler divergences for both datasets (I and II).}
\label{KLNNGP10000}
\end{table}
Table \ref{timeNNGP10000} shows the comparison of the elapsed times in seconds for calculating each set of eigenvalues
for dataset I.
Extracting all eigenvalues from the original matrix $L$ yields elapsed times of 170.08 seconds and 156.74 seconds for datasets I and II, respectively. The results were obtained with Julia v1.1.1 running on a PC with an Intel-Core i5-4460 CPU running at 3.20GHz with 16GB of RAM.
The elapsed times for dataset I are well explained by a linear regression on sparseness and number of eigenvalues extracted, presenting a coefficient of determination of $0.998$. Similar results were obtained for dataset II.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of\\ eigenvalues\end{tabular}}& \multirow{2}{*}{Dataset} & \multicolumn{4}{ c }{Sparseness level} \\\cmidrule{3-6}
& & $20\%$ & $40\%$ & $60\%$ & $80\%$\\ \midrule
100 & I & 37.34 & 34.46 & 31.79 & 27.39\\
250 & I & 68.03 & 65.86 & 60.62 & 55.37\\
500 & I & 125.48 & 120.27 & 115.55 & 112.85\\ \bottomrule
\end{tabular}
\caption{Elapsed times in seconds for eigenvalue calculation of dataset I.}
\label{timeNNGP10000}
\end{table}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Sparseness\\ level\end{tabular}} & \multicolumn{2}{c}{$t=100$} & \multicolumn{2}{c}{$t=250$} & \multicolumn{2}{c}{$t=500$} \\
& I & II & I & \multicolumn{1}{c}{II} & I & \multicolumn{1}{c}{II} \\ \midrule
20\% & 0.97\,(0.04) & 0.79\,(0.04) & 0.93\,(0.02)& 0.87\,(0.05) & 0.70\,(0.04)& 0.88\,(0.07) \\
40\% & 0.98\,(0.00)& 0.87\,(0.03) & 0.94\,(0.02)& 0.87\,(0.08) & 0.77\,(0.03)& 0.89\,(0.04) \\
60\% & 0.97\,(0.04) & 0.86\,(0.09) & 0.94\,(0.01)& 0.87\,(0.07) & 0.83\,(0.02) & 0.82\,(0.07) \\
80\% & 0.98\,(0.02)& 0.86\,(0.09) & 0.97\,(0.02) & 0.86\,(0.04) & 0.71\,(0.10) & 0.87\,(0.03) \\ \bottomrule
\end{tabular}}
\caption{ARI means and standard deviations (within parentheses) obtained by
consensus DPP on the sparse kernel matrices over the twelve experimental conditions and both datasets (I and II).}
\label{ressparseNNGP10000}
\end{table}
Table~\ref{ressparseNNGP10000} displays the ARI means and standard deviations over all twelve experimental conditions
for both datasets (I and II).
For comparison purposes, we computed the same statistics for (i) consensus DPP
applied to the original dense kernel matrix $L$, from which all eigenvalues were extracted,
and for (ii) the consensus clustering methodology applied to partitions generated by PAM.
For dataset I, the corresponding ARI means and standard deviations are
0.57 and 0.02 for consensus DPP,
and 0.93 and 0.01 for PAM.
For dataset II, we obtain ARI means and standard deviations of
0.82 and 0.06
for consensus DPP, and of
0.83 and 0.03
for PAM.
Observe that the quality of the clustering results is better if we consider the sparse approximation $\widetilde{L}^{-1}$ and a lower number of eigenvalues, $t\in\{100, 250\}$. Considering a higher number of eigenvalues
is not such a good option. The quality of the results decreases with the number of eigenvalues extracted for each sparseness level. The use of the original dense matrix $L$ with all eigenvalues is not a good alternative either.
The most probable reason for the observed results is the large size of the kernel matrix.
Most eigenvalues cannot be computed reliably, numerically speaking;
hence, using all numerically extracted eigenvalues might introduce noise,
leading the algorithm to perform poorly.
Also, if most of the eigenvalues are small, they will be estimated with large errors,
especially when the gap between the largest and the smallest eigenvalues spans several orders of magnitude.
This is known as ill-conditioning of the kernel matrix.
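The ill-conditioning phenomenon is easy to reproduce numerically. The following Python sketch (our own illustration on synthetic data, not the paper's Julia code) builds a Gaussian kernel matrix on simulated points and shows that the bulk of its spectrum is numerically negligible, so the smallest eigenvalues carry essentially no reliable information.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))

# Gaussian (RBF) kernel matrix -- a stand-in for the kernel matrix L
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
L = np.exp(-d2 / 2.0)

w = np.linalg.eigvalsh(L)                  # eigenvalues in ascending order
cond = w[-1] / max(abs(w[0]), 1e-300)      # rough condition number estimate
frac_tiny = np.mean(w < 1e-6 * w[-1])      # share of numerically negligible eigenvalues

print(f"condition number ~ {cond:.2e}")
print(f"fraction of eigenvalues below 1e-6 * lambda_max: {frac_tiny:.2f}")
```

The huge condition number and the large share of near-zero eigenvalues illustrate why restricting attention to the $t$ largest eigenvalues is the numerically sound choice.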
We would like to note that for all three cases, NNGP, the original dense matrix and PAM,
we used the $\sqrt{n}$ criterion to merge small clusters during consensus (see Section~\ref{sec:consensus}).
However, as pointed out in \cite{vicente&murua1-2020}, a criterion close to $n^{2/3}$ might have been more appropriate
given the small number of clusters $K$, and the large number of observations $n$.
In fact, using this larger threshold for small clusters in the merging step of the consensus gives slightly better results for all methods, especially for dataset II.
To summarize, determinantal consensus clustering with the NNGP approach outperforms PAM for all cases.
In addition, if we would like to favor high quality clustering results with low computational cost,
combining $t=100$ eigenvalues with a high level of sparseness appears to be the best option.
\paragraph{Results with approach based on small submatrices from $L$}\;
For this approach, due to hardware limitations,
we decided to reduce the number of evaluated scenarios in both datasets.
Thus, maintaining the proportion $\gamma\in\{0.05, 0.1, 0.2\}$ of points selected from the dense matrix $L$,
the sampled submatrices $L^{(i_k)},\; k=1,\dots,N$,
are of size $r \in\{500, 1000, 2000\}$, respectively.
These sizes are considerably larger than those used with the smaller datasets.
Hence, sparse approximations of $L^{(i_k)}$ are used with a unique high level of sparseness equal to $80\%$.
This sparseness is achieved with 80, 160 and 320 nearest neighbors, respectively.
Recall that with the NNGP approach applied to the smaller datasets,
a low mean elapsed time for eigenvalue calculation was achieved when combining a low number of eigenvalues
with a high level of sparseness (see Table~\ref{timeNNGP}).
Therefore, to speed up the computation time, we extract only the largest 100
eigenvalues for all the cases.
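As a concrete illustration of this sampling scheme, the following Python sketch (our own illustration; the paper's experiments were run in Julia) draws one principal submatrix $L^{(i_k)}$ from a dense kernel matrix, sparsifies it by keeping only the entries of a $k$-nearest-neighbor graph, and extracts the largest eigenvalues. Here `np.linalg.eigvalsh` stands in for the Lanczos algorithm used in the paper, and the data, bandwidth, and $k$ are our own choices.

```python
import numpy as np

def knn_sparsify(L, k):
    """Keep, per row, only the k largest off-diagonal entries (the k nearest
    neighbours in kernel similarity); symmetrize and keep the diagonal."""
    M = np.zeros_like(L)
    for i in range(L.shape[0]):
        row = L[i].copy()
        row[i] = -np.inf                       # never "select" the diagonal
        keep = np.argsort(row)[-k:]
        M[i, keep] = L[i, keep]
    M = np.maximum(M, M.T)                     # symmetrize the kept pattern
    np.fill_diagonal(M, np.diag(L))
    return M

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
L = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)

gamma = 0.05                                   # proportion of sampled points
idx = rng.choice(L.shape[0], size=int(gamma * L.shape[0]), replace=False)
sub = L[np.ix_(idx, idx)]                      # one sampled submatrix L^(i_k)
sparse_sub = knn_sparsify(sub, k=5)            # most entries set to zero
top10 = np.linalg.eigvalsh(sparse_sub)[-10:]   # ten largest eigenvalues
```

In the paper's setting this is repeated for $k=1,\dots,N$ submatrices, and the pooled eigenvalues feed the consensus DPP procedure.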
The following analysis of the results is organized as with the above simulations.
Figure~\ref{densityestimsmallGram10000} shows the kernel density estimates for the aforementioned situations. The tick-marks in the
horizontal axis locate all eigenvalues of the original kernel matrix $L$.
As seen in the results with the smaller datasets,
the density estimates of this approach do not capture well the largest eigenvalues when the sparseness is too high (for a low value of $\gamma$).
\begin{figure}[H]
\centering
\subfloat[$\gamma=0.05$ ]{%
\includegraphics[width=0.3\textwidth]{smallGram5s80e100_10000.pdf}}
\quad
\subfloat[$\gamma=0.1$ ]{%
\includegraphics[width=0.3\textwidth]{smallGram10s80e100_10000.pdf}}
\quad
\subfloat[$\gamma=0.2$ ]{%
\includegraphics[width=0.3\textwidth]{smallGram20s80e100_10000.pdf}}
\caption{Kernel density estimates
of the set of eigenvalues extracted from sparse $L^{(i_k)}$. The tick-marks are placed on all eigenvalues of $L$.}
\label{densityestimsmallGram10000}
\end{figure}
Table~\ref{KLknn10000} reports the symmetrized Kullback-Leibler divergences between the distribution
of the eigenvalues extracted from the sparse submatrices $L^{(i_k)}$ and
the distribution of the eigenvalues of $L$, for $\gamma\in\{0.05, 0.1, 0.2\}$. The densities were
estimated with kernel density estimators. The KL divergences decrease with $\gamma$ and are very small, indicating a good resemblance between the two distributions.
\begin{table}[H]
\centering
\begin{tabular}{cccc}
\toprule
Dataset & $\gamma=0.05$ & $\gamma=0.1$ & $\gamma=0.2$ \\ \midrule
I & 0.000875 & 0.000301 & 0.000095 \\
II & 0.000579 & 0.000173 & 0.000061\\
\bottomrule
\end{tabular}
\caption{Symmetrized Kullback-Leibler divergences for both datasets (I and II).}
\label{KLknn10000}
\end{table}
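The divergence computation itself is straightforward. Below is a hedged Python sketch of the procedure: both eigenvalue samples are smoothed with a Gaussian kernel density estimator on a common grid, and the symmetrized Kullback-Leibler divergence is approximated by a Riemann sum. The stand-in samples and bandwidths are our own choices, not the paper's.

```python
import numpy as np

def gauss_kde(samples, grid, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def sym_kl(p, q, dx):
    """Symmetrized KL divergence between two densities tabulated on a grid."""
    p = np.clip(p, 1e-12, None)
    q = np.clip(q, 1e-12, None)
    p = p / (p.sum() * dx)                     # renormalize on the grid
    q = q / (q.sum() * dx)
    return float(dx * np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(2)
eigs_full = rng.normal(0.0, 1.0, 500)   # stand-in: eigenvalues of L
eigs_sub = rng.normal(0.0, 1.0, 500)    # stand-in: pooled submatrix eigenvalues
eigs_far = rng.normal(2.0, 1.0, 500)    # a clearly different spectrum

grid = np.linspace(-6.0, 6.0, 1201)
dx = grid[1] - grid[0]
d_close = sym_kl(gauss_kde(eigs_full, grid, 0.3), gauss_kde(eigs_sub, grid, 0.3), dx)
d_far = sym_kl(gauss_kde(eigs_full, grid, 0.3), gauss_kde(eigs_far, grid, 0.3), dx)
```

Small values of `d_close`, as in Table~\ref{KLknn10000}, indicate that the two spectra are close in distribution, while a shifted spectrum produces a much larger divergence.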
Table~\ref{timeknn10000} shows the comparison of the elapsed time in seconds for eigenvalue computation,
using the sampled submatrix $L^{(i_k)}$ or its sparse approximation $\widehat{L}^{(i_k)}$. The results were obtained with Julia Language, version 1.1.1, on a Desktop PC with Intel Core i5-4460 CPU @ 3.20GHz Processor and 16 GB DDR3 RAM.
In this case, and due to the relatively large size of the matrices $L^{(i_k)}$, it does make a difference to use the sparse matrices $\widehat{L}^{(i_k)}$ instead of the dense ones.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
Dataset & Submatrix & $\gamma=0.05$ & $\gamma=0.1$ & $\gamma=0.2$ \\ \midrule
\multirow{2}{*}{I} &$L^{(i_k)}$ & 0.14 & 0.22 & 1.02 \\
& $\widehat{L}^{(i_k)}$ & 0.03 & 0.08 & 0.30\\ \midrule
\multirow{2}{*}{II} & $L^{(i_k)}$ & 0.17 & 0.38 & 1.66 \\
& $\widehat{L}^{(i_k)}$ & 0.09 &0.35 & 1.33 \\ \bottomrule
\end{tabular}
\caption{Elapsed times in seconds for eigenvalue calculation on datasets I and II.}
\label{timeknn10000}
\end{table}
Table~\ref{ressparseknn10000} displays the ARI means and their standard deviations for $\gamma\in\{0.05, 0.1, 0.2\}$ and both datasets (I and II). It also reports the results obtained with consensus DPP applied to the original dense kernel matrix $L$, from which all eigenvalues were extracted,
and PAM. These latter results were already reported in Section~\ref{sec:NNGP10000}.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
Dataset & $\gamma=0.05$ & $\gamma=0.1$ & $\gamma=0.2$ & Original $L$ & PAM \\ \midrule
I & 0.95 (0.00) & 0.96 (0.01) & 0.96 (0.01) & 0.57 (0.02) & 0.93 (0.01) \\
II & 0.88 (0.02) & 0.87 (0.04) & 0.87 (0.03) & 0.82 (0.06) & 0.83 (0.03)\\ \bottomrule
\end{tabular}
\caption{ARI means and standard deviations (within parentheses) obtained from consensus DPP with the approach based on small submatrices of $L$ for datasets I and II. The results for the original dense matrix and PAM are also displayed.}
\label{ressparseknn10000}
\end{table}
As we did with the smaller datasets,
we show in Table~\ref{resdense10000} the same statistics obtained by keeping the dense version of the submatrices $L^{(i_k)}$ and extracting all their eigenvalues, instead of using their sparse approximations.
\begin{table}[H]
\centering
\begin{tabular}{cccc}
\toprule
Dataset & $\gamma=0.05$ & $\gamma=0.1$ & $\gamma=0.2$ \\ \midrule
I & 0.94 (0.05) & 0.95 (0.05) & 0.95 (0.06) \\
II & 0.88 (0.04) & 0.81 (0.06) & 0.83 (0.05)\\ \bottomrule
\end{tabular}
\caption{Global ARI means and standard deviations (within parentheses) of consensus DPP on the dense version of the random small submatrices for datasets I and II.}
\label{resdense10000}
\end{table}
The random small matrices approach yields excellent results for any proportion $\gamma$.
Comparing with the dense version of the submatrices, the results obtained with the sparse approximations are similar or even better.
As already observed with the small samples, the use of a sparse approximation $\widehat{L}^{(i_k)}$ does not hurt the quality of the results.
The main advantage of using the sparse approximations is the reduced amount of time needed to compute the eigenvalues.
Another advantage is the resulting stability of the clustering quality results (low dispersion of ARI values).
In summary, if one would like low computational time and low dispersion,
using the sparse approximation of submatrices with a low proportion of sampled points ($5\%$) is a good option.
To end this section, we observe that the approach based on random sampling of submatrices outperforms PAM, for all $\gamma$ values in both datasets.
It also reaches clustering quality levels comparable to those of the NNGP approach with $t=100$ and a sparseness level of $80\%$.
Sampling submatrices of lower dimension is then a good alternative to the use of the NNGP approach.
\subsection{DPP as a measure of diversity}
The differences between DPP and PAM can also be highlighted using the logarithm of the probability mass function of the DPP, given by \eqref{definition2}. Figure~\ref{diversitysim} displays histograms of these probability logarithms.
The histograms are based on 1000 random subsets of datapoints drawn (i) using the DPP sampling algorithm of \cite{BenHough20063} and \cite{Kulesza20123}, and (ii) using the simple random sampling of PAM.
The subsets were drawn from two simulated large datasets: one of size $n=1000$, and another of size $n=10000$ observations.
\begin{figure}[H]
\centering
\subfloat[Dataset with $n=1000$ observations]{%
\includegraphics[width=0.45\textwidth]{diversity1000.pdf}}
\quad
\subfloat[Dataset with $n=10000$ observations]{%
\includegraphics[width=0.45\textwidth]{diversity10000.pdf}}
\caption{Histograms of the logarithm of the probability mass function (loglik), using DPP and simple random sampling, for two simulated large datasets.}
\label{diversitysim}
\end{figure}
The histograms clearly show that DPP selects random subsets with higher and less dispersed probability mass values (likelihood) than simple random sampling. This explains the low dispersion of the ARI observed in most results of this section when sampling is performed with DPP. The higher likelihood achieved by DPP also implies a higher diversity of the sampled subsets. In turn, subsets sampled as in PAM yield a highly dispersed likelihood, resulting in subsets of highly variable diversity. DPP tends to be more consistent and stable, since it ensures a high level of diversity among the selected elements at each sampling.
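The quantity plotted above is, up to an additive constant, $\log\det L_A$ for a sampled subset $A$. The following Python sketch compares this log-determinant for a diversity-seeking subset against uniformly drawn subsets. For simplicity we use greedy determinant maximization as a cheap, deterministic stand-in for the exact DPP sampler of \cite{BenHough20063} and \cite{Kulesza20123}; it seeks the same kind of diverse subsets but is not the sampler used in the paper.

```python
import numpy as np

def logdet_sub(L, idx):
    """log det of the principal submatrix of L indexed by idx."""
    sign, ld = np.linalg.slogdet(L[np.ix_(idx, idx)])
    return ld if sign > 0 else -np.inf

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
L = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)

k = 8
# Greedy determinant maximization: picks well-spread points, mimicking
# the diverse subsets favored by a DPP.
chosen = [int(np.argmax(np.diag(L)))]
for _ in range(k - 1):
    rest = [j for j in range(L.shape[0]) if j not in chosen]
    chosen.append(max(rest, key=lambda j: logdet_sub(L, chosen + [j])))

diverse_ld = logdet_sub(L, chosen)
uniform_lds = np.array([logdet_sub(L, rng.choice(200, k, replace=False))
                        for _ in range(200)])
```

Uniformly drawn subsets often contain near-duplicate points, which makes $L_A$ nearly singular and drives $\log\det L_A$ far down; the diverse subset keeps it high, in line with the histograms.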
\section{Application to real data}\label{sec:real}
In this section we evaluate the performance of consensus DPP versus PAM on three large real datasets:
\vspace{3pt}
\noindent 1.- A dataset about human activity recognition and postural transitions using smartphones, collected from 30 subjects who performed six basic postures (downstairs, upstairs, walking, laying, sitting and standing), and six transitional postures between static postures (stand-to-sit, sit-to-stand, sit-to-lie, lie-to-sit, stand-to-lie and lie-to-stand). The experiment was carried out in the same environment and conditions, with each subject carrying a waist-mounted smartphone with embedded inertial sensors. The dataset consists of 10929 observations with 561 time- and frequency-domain features, which are commonly used in the field of human activity recognition.
The dataset is available on the \emph{UCI Machine Learning Repository} \citep{Dua20193}, a well known database in the machine learning community for clustering and classification problems.
The six transitional postures between static postures comprise a relatively small subset of observations.
Therefore, we apply our clustering algorithm only to the six basic postures.
Using the notation of the simulated datasets of Section~\ref{experiments2}, the final dataset has $n=10411$ observations, $p=561$ variables and $K=6$ components.
\vspace{3pt}
\noindent 2.- The Modified National Institute of Standards and Technology (\textsc{mnist}) dataset \citep{Lecun2010}, one of the most common datasets used for image classification. This dataset contains 60000 training images and 10000 testing images of handwritten digits, obtained from American Census Bureau employees and American high school students.
Each 784-dimensional observation represents a $28\times 28$ pixel gray-scale image depicting a handwritten version of one of the ten possible digits (0 to 9).
As is common practice with the \textsc{mnist} dataset, due to its intrinsic characteristics,
we transformed the data to its multiplicative inverse, as hinted by the Box-Cox transformation.
We only use the testing set of 10000 images, so that
the final dataset has $n=10000$ observations, $p=784$ variables and $K=10$ components.
\vspace{3pt}
\noindent 3.- The Fashion-\textsc{mnist} dataset \citep{Xiao2017}, also one of the most common datasets used for image classification. This dataset contains 60000 training images and 10000 testing images of Zalando's articles\footnote{%
Zalando is a European e-commerce company specializing in fashion.
The company provides the image data in public repositories such as GitHub.}.
Each observation represents a $28\times 28$ pixel gray-scale image of a clothing article, associated with a label from 10 classes.
For the same reasons as \textsc{mnist}, we transformed the data with an appropriate Box-Cox transformation,
and only worked with the testing portion of the data.
The dataset has $n=10000$ observations, $p=784$ variables and $K=10$ components.
\vspace{6pt}
We applied both approaches of Section~\ref{largedatasets} to each dataset, and followed the same analysis procedure
of Section \ref{experiments2}. For the NNGP approach, we set the maximum number $m$ of nonzero elements in each row of the matrix $A$ to obtain a sparse approximation of the kernel matrix $L$ with $80\%$ of sparseness. The Lanczos algorithm was applied to extract the $t=100$ largest eigenvalues from the sparse approximated matrix.
For the approach based on random small submatrices from $L$, we sampled a proportion $\gamma=0.05$ of points from the kernel matrix $L$, and obtained sparse approximations of the sampled submatrices with $80\%$ of sparseness.
The Lanczos algorithm was applied to extract the $t=100$ largest eigenvalues from the sparse approximated submatrices.
Determinantal consensus clustering was applied to 200 partitions. The whole procedure was repeated five times.
For comparison purposes,
we also include the results from a couple of popular algorithms: consensus clusterings with PAM, and $k$-means.
The PAM algorithm was already mentioned in Section~\ref{experiments2}.
The $k$-means algorithm was introduced by Stuart Lloyd in 1957, with a supporting publication in \cite{Lloyd19823}.
Given an initial set of $k$ means, representing $k$ clusters,
it assigns each observation to the cluster with the nearest mean,
and proceeds to recompute means from observations in the same cluster.
The procedure is repeated until no changes are observed in the assignments.
Among the many algorithms to initialize the $k$ centers of $k$-means \citep{Forgy19653,Pena19993,Gonzalez19853,Katsavounidis19943},
we chose the $k$-means$++$ algorithm of \cite{Arthur20073}.
This method has become largely popular \citep{Capo20173,Franti20193}.
It is a probability-based technique that avoids the usually poor results found by standard $k$-means initialization methods.
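For concreteness, here is a minimal Python sketch of the $k$-means$++$ seeding rule (our own illustration of the method of Arthur and Vassilvitskii, not the implementation used in the experiments): each new center is drawn with probability proportional to its squared distance to the closest center chosen so far.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: sample each new centre with probability
    proportional to D(x)^2, the squared distance from x to the
    nearest centre already chosen."""
    centres = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centres)

rng = np.random.default_rng(4)
# three well-separated Gaussian blobs
means = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0]])
X = np.concatenate([rng.normal(m, 0.05, size=(60, 2)) for m in means])
C = kmeans_pp_init(X, 3, rng)
```

On well-separated clusters, the $D^2$ weighting makes it overwhelmingly likely that each cluster receives exactly one seed, which is precisely why this initialization avoids the poor local optima of uniform seeding.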
We performed consensus clusterings with PAM and $k$-means using 200 partitions, also repeating the whole procedure five times.
We report the mean and the standard deviation of the ARI achieved by the different methods in Table~\ref{realdatasets}.
\begin{table}[H]
\centering
\begin{tabular}{ccccl}
\toprule
Dataset & NNGP & \begin{tabular}[c]{@{}c@{}}Small\\ submatrices\end{tabular} & PAM & $k$-means \\ \midrule
Smartphones & 0.59 (0.01) & 0.58 (0.01) & 0.43 (0.12) & 0.49 (0.02)\\
\textsc{mnist} & 0.58 (0.05) & 0.36 (0.02) & 0.24 (0.01) & 0.54 (0.02) \\
Fashion-\textsc{mnist} & 0.43 (0.01) & 0.40 (0.01) & 0.40 (0.01) & 0.36 (0.03) \\ \bottomrule
\end{tabular}
\caption{ARI means and standard deviations (within parentheses) associated with NNGP, random small submatrices, PAM and $k$-means. }
\label{realdatasets}
\end{table}
Both DPP approaches, based on NNGP and small submatrices, attain good results.
The NNGP approach outperformed all three other methods. Overall, the gains over PAM and $k$-means are substantial.
The method based on random small submatrices also outperformed PAM and $k$-means,
except on the \textsc{mnist} dataset.
\section{Conclusions}\label{sec:conclusions}
Within the context of determinantal consensus clustering,
we proposed two different approaches to overcome the computational burden induced by the eigendecomposition of
the kernel matrix associated with large datasets.
The first one is based on NNGP; the second one is based on random sampling of small submatrices from the kernel matrix.
The NNGP approach finds a sparse approximation of the kernel matrix, based on nearest-neighbors.
The sparse matrix substitutes the original dense kernel matrix to extract the largest eigenvalues,
so as to be able to perform determinantal consensus clustering on the large dataset.
Simulations showed that using a high sparseness level and extracting a few largest eigenvalues
are enough to ensure good clustering results.
Extracting 1\% to 3\% of the eigenvalues
seems to be a reasonable strategy.
The amount of time needed to achieve the final clustering configuration is reduced considerably.
The approach based on random sampling of small submatrices from the kernel matrix is a good alternative to the NNGP approach.
Its performance is comparable to that of the NNGP approach, even though
the distribution of eigenvalues extracted with this method is not as close to the original eigenvalue distribution
of the kernel matrix.
This approach shows that, rather than keeping the original kernel matrix, extracting the eigenvalues of several rather small submatrices
of the kernel matrix is enough to obtain good results with consensus DPP.
Our simulations hint at sampling small submatrices of size in the range of $0.05n$ to $0.1n$.
If the submatrices are still too large, finding their sparse approximation with the $k$-nearest neighbors graph method is a good option, even with a small $k$.
In fact, for very large datasets, our simulations showed that it is strongly recommended to find a sparse approximation of the small submatrices. In order to speed up computational time, choosing a high sparseness level is the best option.
The two approaches are able to reach better quality results than consensus clustering applied to PAM.
Applications on large real datasets confirm the results found with the simulations.
Determinantal consensus clustering also proved to be superior to $k$-means for most of the datasets.
The presence of diversity in the sampled centroids helps to improve the quality of clustering and the stability of the results.
\begin{comment}
A variety of interesting questions remain for future research:
(i) To extended the consensus clustering with DPP to large datasets with only categorical variables, inducing the choice of a proper measure of distance, necessary for the construction of the kernel matrix. The choice can be more difficult if the dataset includes both categorical and continuous variables.
(ii) To study the effect of multivariate outliers on the average and dispersion of quality scores of the final clustering configuration, when sampling subsets of points with the determinantal point process.
(iii) To solve the singularity problem faced in the approach based on NNGP, understanding the nature of the singularity and proposing a solution to overcome it. It will be a good opportunity to study the use of the $m$-nearest-neighbors matrix, despite the large Frobenius distances observed. (iv) To explore data processing techniques that can improve the quality results obtained with consensus DPP applied to real datasets.
\end{comment}
An issue we found with the NNGP approach, on some real datasets, is the possibility of
ill-conditioning of some of the intermediate calculation submatrices $W_{[1:i-1]}$ (see equation \eqref{eq:finley}).
This might occur because of strong similarity between some observations. As in the case of regression, the
original kernel matrix should be evaluated for numerical conditioning before attempting to use the NNGP approach.
A possible solution to avoid the problem is to eliminate observations that are too similar,
so as to only consider representative observations instead of all of them.
Another possibility is to regularize the submatrices by adding a small positive constant to the main diagonal,
as is done in ridge regression.
Alternatively, the approach based on random small matrices might be used instead, if the problem arises.
\bibliographystyle{plain}
\section{Introduction}
A \emph{square} is a nonempty word of the form $XX$. A word $W$ \emph{contains} a square if it can be written as $W=PXXS$, with $X$ nonempty, while $P$ and $S$ possibly empty. Otherwise, $W$ is called \emph{square-free}. A famous theorem of Thue \cite{Thue} (see \cite{BerstelThue}) asserts that there exist infinitely many square-free words over a $3$-letter alphabet. This result initiated combinatorics on words --- the whole branch of mathematics, abundant in many deep results, exciting problems, and unexpected connections to diverse areas of science (see \cite{AlloucheShallit,BeanEM,BerstelPerrin,Currie TCS,Lothaire,LothaireAlgebraic}).
In this paper we study square-free words with additional properties involving two natural operations, a single-letter extension and a single-letter deletion, defined as follows.
Let $W$ be a word of length $n$ over a fixed alphabet $\mathcal{A}$. For $i=0,1,\dots,n$, let $P_i(W)$ and $S_i(W)$ denote the prefix and the suffix of $W$ of length $i$, respectively. Notice that the~words $P_0(W)$ and $S_0(W)$ are empty. In particular, we have $W=P_i(W)S_{n-i}(W)$ for every $i=0,1,\dots,n$. An \emph{extension} of $W$ at \emph{position} $i$ is a word of the form $U=P_i(W)\mathtt{x}S_{n-i}(W)$, where $\mathtt{x}$ is any letter from $\mathcal{A}$. In this case we also say that $W$ is a \emph{reduction} of $U$.
A square-free word $W$ is \emph{extremal} over $\mathcal{A}$ if there is no square-free extension of $W$. Grytczuk, Kordulewski and Niewiadomski \cite{GrytczukKN} proved that there exist infinitely many extremal ternary words, the shortest one being $$\mathtt{1231213231232123121323123}.$$They also conjectured that there are no extremal words over an alphabet of size $4$. Recently, Hong and Zhang \cite{HongZhang} proved that this is true for an alphabet of size $17$.
Harju \cite{Harju} introduced a complementary concept of \emph{irreducible} words. These are square-free words whose any non-trivial reduction (the deleted letter is neither the first one nor the~last one in the word) contains a square. He proved that for any $n\neq4,5,7,12$ there exists a~ternary irreducible word of length $n$.
In this article we consider square-free words with the very opposite properties, defined as follows: a square-free word is \emph{steady} if it remains square-free after deleting any single letter. For instance, the word $\mathtt{1231}$ is steady since all of its four reductions $$\mathtt{231,131,121,231}$$are square-free, while $\mathtt{1213}$ is not steady since one of its reductions is $\mathtt{113}$. Generally, every ternary square-free word of length at least $6$ must contain a factor of the form $\mathtt{xyx}$, and therefore is not steady. However, there are steady words of any given length over larger alphabets, as we prove in Theorem \ref{Theorem Steady 4}. We also consider a general variant of such statement in the following \emph{list} setting.
\begin{conjecture}\label{Conjecture 4-List Steady}
Let $n$ be a positive integer and let $\mathcal{A}_1,\mathcal{A}_2,\ldots,\mathcal{A}_n$ be a sequence of alphabets of size $4$. Then there exists a steady word $W=w_1w_2\cdots w_n$, with $w_i\in\mathcal{A}_i$ for every $i=1,2,\ldots,n$.
\end{conjecture}
In Theorem \ref{Theorem 7-list Steady} we prove that the statement of the conjecture holds for alphabets with at least $7$ letters. Let us mention that analogous conjecture for pure square-free words (with alphabets of size $3$), stated in \cite{Grytczuk}, is still open. Currently the best general result confirms it for alphabets of size $4$ (see \cite{GrytczukKM,GrytczukPZ,Rosenfeld1} for three different proofs). Recently Rosenfeld \cite{Rosenfeld2} proved that it holds when the union of all alphabets is a $4$-element set.
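The notions above are easy to check by brute force, which is convenient for verifying small examples. The following Python sketch (our own illustration) tests square-freeness by scanning all factors, enumerates the single-letter reductions, and checks steadiness.

```python
def square_free(w):
    """True iff w contains no factor XX with X nonempty."""
    n = len(w)
    return not any(w[i:i + k] == w[i + k:i + 2 * k]
                   for k in range(1, n // 2 + 1)
                   for i in range(n - 2 * k + 1))

def reductions(w):
    """All words obtained from w by deleting a single letter."""
    return [w[:i] + w[i + 1:] for i in range(len(w))]

def steady(w):
    """True iff w is square-free and so is every single-letter reduction."""
    return square_free(w) and all(square_free(r) for r in reductions(w))
```

It reproduces the two examples given earlier: `steady("1231")` holds, while `steady("1213")` fails because the reduction `113` contains the square `11`.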
We also consider square-free words defined similarly with respect to extensions of words. A square-free word is \emph{bifurcate} over a fixed alphabet $\mathcal{A}$ if it has at least one square-free extension at \emph{every} position. For instance, the word $\mathtt{1231}$ is bifurcate over $\{1,2,3\}$ and here are its five square-free extensions: $$\mathtt{\underline{2}1231,1\underline{3}231,12\underline{1}31,123\underline{2}1,1231\underline{2}}.$$
Thus the word $\mathtt{1231}$ is both steady and bifurcate. This is not a coincidence --- for an alphabet with at least three letters, every steady word is bifurcate, as we prove in Theorem \ref{Theorem Steady-Bifurcate}.
Clearly, ternary bifurcate words cannot be too long. Indeed, every ternary square-free word of length at least $6$ contains a factor of the form $\mathtt{xyxz}$ (or its reversal). On the other hand, any ternary square-free word is bifurcate over a $4$-letter alphabet. One may, however, inquire about the existence of an infinite \emph{chain} of bifurcate quaternary words.
\begin{conjecture}\label{Conjecture 4-Bifurcate Chain}
There exists an infinite sequence of quaternary bifurcate words $W_1,W_2,\ldots$ such that $W_{i+1}$ is a single-letter extension of $W_i$, for each $i=1,2,\ldots$.
\end{conjecture}
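Bifurcation is just as easy to test by brute force. The sketch below (self-contained, so it repeats the square-freeness check) enumerates the extensions at every position and verifies that each position admits at least one square-free extension over the given alphabet.

```python
def square_free(w):
    """True iff w contains no factor XX with X nonempty."""
    n = len(w)
    return not any(w[i:i + k] == w[i + k:i + 2 * k]
                   for k in range(1, n // 2 + 1)
                   for i in range(n - 2 * k + 1))

def extensions(w, i, alphabet):
    """All extensions of w at position i over the given alphabet."""
    return [w[:i] + x + w[i:] for x in alphabet]

def bifurcate(w, alphabet):
    """True iff w is square-free and every position 0..len(w) admits
    a square-free extension over the alphabet."""
    return square_free(w) and all(
        any(square_free(u) for u in extensions(w, i, alphabet))
        for i in range(len(w) + 1))
```

For instance, `bifurcate("1231", "123")` holds, in accordance with the five square-free extensions listed above, while `bifurcate("1213", "123")` fails at position 3: all three extensions `12113`, `12123`, `12133` contain squares.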
A much stronger property holds over alphabets of size at least $12$, namely there exists a~\emph{complete bifurcate tree} of bifurcate words (rooted at any single letter), in which every word of length $n$ has $n+1$ descendants, corresponding to the extensions at different positions, and each of them is again bifurcate, and the same is true for all of their descendants, and so on, ad infinitum. This curious fact follows easily from a result of K\"{u}ndgen and Pelsmajer on \emph{nonrepetitive} colorings of \emph{outerplanar} graphs, as noted in \cite{GrytczukSZ} in a different context of the~\emph{on-line Thue games}. We will recall this short argument for completeness. It seems plausible, however, that the actual number of letters needed for such an amazing property may be much smaller.
\begin{conjecture}\label{Conjecture 5-Bifurcate Tree}
There exists a complete bifurcate tree over an alphabet of size $5$.
\end{conjecture}
It is not hard to verify that the above conjecture is tight, as we demonstrate at the end of the following section.
\section{Results}
We shall present proofs of our results in the following subsections.
\subsection{Steady words over a $4$-letter alphabet}
For a rational $r\in[1,2]$, a \emph{fractional $r$-power} is a word $W=XP$, where $P$ is a prefix of $W$ of length $(r-1)|X|$. We say that $r$ is the \emph{exponent} of the word $W$. Dejean's famous conjecture (now a theorem) states that the infimum of $r$ for which there exist infinite $n$-ary words without factors of exponent greater than $r$ is equal to
$$\left\{\begin{array}{ll}7/4&\mbox{ for }n=3,\\7/5&\mbox{ for }n=4,\\ n/(n-1)&\mbox{ for }n\neq3,4.\end{array}\right.$$
The case $n=2$ is a simple consequence of the classical theorem of Thue \cite{Thue}. The case $n=3$ was proved by Dejean \cite{Dejean1972} and the case $n=4$ by Pansiot \cite{Pansiot1984}. Cases up to $n=14$ were proved by Ollagnier \cite{Ollagnier1992} and Noori and Currie \cite{Noori2007}. Carpi \cite{Carpi2007} showed that the statement holds for every $n\geq33$, while the remaining cases were proved by Currie and Rampersad \cite{Currie2011}.
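The exponent of a word is easy to compute by brute force, which is handy for checking small examples against Dejean's bounds. The sketch below (our own illustration) returns the largest exponent $|u|/p$ over all factors $u$ of $w$ having period $p$.

```python
from fractions import Fraction

def max_exponent(w):
    """Largest |u|/p over all factors u of w with period p (always >= 1)."""
    best = Fraction(1)
    n = len(w)
    for i in range(n):
        for j in range(i + 2, n + 1):      # factors of length >= 2
            u = w[i:j]
            for p in range(1, len(u)):     # smallest period maximizes |u|/p
                if all(u[t] == u[t + p] for t in range(len(u) - p)):
                    best = max(best, Fraction(len(u), p))
                    break
    return best
```

For instance, `max_exponent("1213")` is $3/2$ (from the factor `121`), `max_exponent("1212")` is $2$ (a square), and `max_exponent("12341")` is $5/4$, so the last word avoids all exponents greater than $7/5$.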
\begin{theorem}\label{Theorem Steady 4}
There exist arbitrarily long steady words over a $4$-letter alphabet.
\end{theorem}
\begin{proof}
From Dejean's theorem we get that there exist arbitrarily long quaternary words without factors of exponent greater than $\frac75$. Let $S$ be any such word. Notice that any factor of the form $XYX$ in $S$ must satisfy $|Y|>|X|$, where $|W|$ denotes the length of a word $W$. Indeed, the opposite inequality, $|Y|\leqslant|X|$, implies that $S$ contains a factor with exponent at least $\frac32$, but $\frac32>\frac75$.
We claim that any word with the above \emph{separation} property is steady. Assume to the contrary that deleting some single interior letter in the word $S$ generates a square. We distinguish two cases corresponding to the relative position of the deleted letter:
\begin{enumerate}
\item $S=AC\mathtt{a}CB$ for some letter $\mathtt{a}$ and words $A,B,C$, where $C$ is non-empty. Putting $X=C$ and $Y=\mathtt{a}$, we immediately get a contradiction with the separation property of $S$, since $|Y|=1\leqslant|C|=|X|$.
\item $S=AC'\mathtt{a}C''C'C''B$ for some letter $\mathtt{a}$ and words $A,B,C',C''$ where $C'$ and $C''$ are non-empty. If $|C'|\leqslant|C''|$, then the factor $C''C'C''$ contradicts the separation property of $S$ (by putting $X=C''$ and $Y=C'$). Otherwise, we have $|C''|<|C'|$, which implies that $|\mathtt{a}C''|\leqslant|C'|$. In this case, the factor $C'(\mathtt{a}C'')C'$ contradicts the separation property of $S$ (by taking $X=C'$ and $Y=\mathtt{a}C''$).
\end{enumerate}
Thus, the word $S$ is steady, which completes the proof.
\end{proof}
\begin{table}\centering
{\begin{tabular}{|c||ccccccccccccccc|}\hline
$n$&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17\\
$N$&1&2&3&5&5&7&9&12&16&21&28&37&45&58&73\\\hline
$n$&18&19&20&21&22&23&24&25&26&27&28&29&30&31&32\\
$N$&93&101&124&150&179&216&257&309&376&453&551&662&798&957&1149\\\hline
\end{tabular}}
\vskip0.5cm
\caption{The number $N$ of quaternary steady words of length $n$ (with respect to the permutations of symbols).}\label{T1n}
\end{table}
\subsection{Steady words from lists of size $7$}In this subsection we consider the list variant of steady words.
The proof of the following theorem is inspired by a beautiful argument due to Rosenfeld \cite{Rosenfeld1}, used by him in analogous problem for square-free words.
\begin{theorem}\label{Theorem 7-list Steady}
Let $(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ be a sequence of alphabets such that $\left|\mathcal{A}_i\right|=7$ for all $i$. For every $N$ there exist at least $4^N$ steady words $w=w_1w_2\ldots w_N$, such that $w_i\in \mathcal{A}_i$ for every $i=1,2,\ldots,N$.
\end{theorem}
\begin{proof}
Let $\mathbb{C}_n$ be the set of steady words $w=w_1w_2\ldots w_n$, such that $w_i\in \mathcal{A}_i$ for all $i$, and set $C_n$ to be the size of $\mathbb{C}_n$. Similarly, let $\mathbb{F}_{n}$ be the set of words $f=f_1f_2\ldots f_n$ with $f_i\in \mathcal{A}_i$ for all $i$, such that $f \notin \mathbb{C}_n$ but $f_1f_2\ldots f_{n-1}$ is in $\mathbb{C}_{n-1}$, and let $F_n$ be the size of $\mathbb{F}_n$. We think of $F_n$ as the number of ways we can fail by appending a new symbol at the end of a steady word of length $n-1$. Our goal is to give a good upper bound on $F_n$.
\begin{claim}
\label{claim_failsnumber}
$$F_{n+1}\leqslant 2C_n + 2C_{n-1} + \sum_{i=0}^{\infty}\left(3+8i\right)C_{n-2-i}$$
\end{claim}
\begin{proof}
Let us partition $\mathbb{F}_{n+1}$ into subsets $\mathbb{D}_1, \mathbb{D}_2, \mathbb{D}_3, \ldots$ defined such that the word $f=f_1f_2\ldots f_{n+1}$ from $\mathbb{F}_{n+1}$ is in $\mathbb{D}_j$ if the suffix of $f$ of length $2j+1$ can be reduced to a square by removing a single letter, and $f$ is not in $\mathbb{D}_{j-1}$. For example $\mathbb{D}_1$ contains words that end with $aa$ or $axa$, $\mathbb{D}_2$ contains words that end with $abxab$ (note that $abab$, $axbab$ and $abaxb$ are not possible) and $\mathbb{D}_3$ contains words that end with
$abxcabc$, $abcxabc$ or $abcaxbc$ (again, $abcabc$, $axbcabc$ and $abcabxc$ are not possible). It is clearly a partition by the definition of $\mathbb{F}_n$.
Let $D_j$ be the size of $\mathbb{D}_{j}$. Note that $D_1\leqslant 2C_{n}$, because every word from $\mathbb{D}_1$ can be obtained by appending to some word from $\mathbb{C}_{n}$ a repetition of the last or one before last letter. We also have $D_2\leqslant C_{n-1}$ because, as remarked earlier, $\mathbb{D}_2$ contains only words that end with $abxab$ (where $a,b$ and $x$ are single letters) and each such word can be obtained by repeating the third- and second-from-last letters of a word from $\mathbb{C}_{n-1}$.
Now consider a word $f=f_1f_2\ldots f_{n+1}$ from $\mathbb{D}_j$ for $j>2$. Since a suffix of $f$ of length $2j+1$ can be reduced to a square by removing a single non-final letter, we have that either (a) $f_{n-2j+1}f_{n-2j+2}\ldots f_{n+1} = PxQPQ$ or (b) $f_{n-2j+1}f_{n-2j+2}\ldots f_{n+1} = PQPxQ$, where $x$ is a~single letter and $P$ and $Q$ are words, where $\left|P\right|+\left|Q\right|=j$ and $Q$ is nonempty. We will count the number of words $f$ from $\mathbb{D}_j$ that fit those cases separately for every possible position of $x$ (i.e. the length of $P$). Note that in case (a) we must have $\left|P\right|\geqslant 2$, because otherwise $f$ would be contained in $\mathbb{D}_{j-1}$. Therefore, there are $j-2$ possible lengths of $P$ and for each of those lengths there are $C_{n+1-j}$ compatible words in $\mathbb{D}_j$, as each such word can be obtained from a word from $\mathbb{C}_{n+1-j}$ by appending $PQ$ at the end; this totals to $(j-2)C_{n+1-j}$.
Similarly, in case (b) $\left|Q\right|\geqslant 2$, because otherwise $f$ would not be contained in $\mathbb{F}_{n+1}$. If the~length of $P$ is $0$ (respectively $1$), then there are at most $C_{n+1-j}$ (resp. $C_{n+2-j}$) compatible words in $\mathbb{D}_j$, obtained by repeating $j$ (resp. $j-1$) letters from $\mathbb{C}_{n+1-j}$ (resp. $\mathbb{C}_{n+2-j}$). If the~length of $P$ is at least $2$ (for which there are $j-3$ possibilities), then the number of compatible words is at most $7C_{n+1-j}$, as each of them can be obtained by repeating $j$ letters from a word in $\mathbb{C}_{n+1-j}$ and picking one of seven letters from the list $\mathcal{A}_{n+1-\left| P\right|}$.
Summing the above estimates over both cases, we obtain that for $j>2$
\begin{align*}
D_j \leqslant \left(j-1\right)C_{n+1-j} + C_{n+1-j} + C_{n+2-j} + \left(j-3\right)7C_{n+1-j} = C_{n+2-j} + \left(8j-22\right)C_{n+1-j}.
\end{align*}
This, together with our estimations on $D_1$ and $D_2$, implies that
\begin{align*}
F_{n+1}\leqslant 2C_{n} + C_{n-1} + \sum_{j=3}^{\infty}\left( C_{n+2-j} + \left(8j-22\right)C_{n+1-j}\right) = \\
= 2C_{n} + C_{n-1} + C_{n-1} + \sum_{i=0}^{\infty}C_{n-2-i} + \sum_{i=0}^{\infty}\left(8i+2\right)C_{n-2-i}= \\
= 2C_n + 2C_{n-1} + \sum_{i=0}^{\infty}\left(3+8i\right)C_{n-2-i},
\end{align*}
which concludes the proof of the claim.
\end{proof}
Now we inductively show that $C_n \geqslant 4 C_{n-1}$ for all $n>0$. It is clearly true for $n=1$. Note that $C_{n+1} = 7C_n - F_{n+1}$, so by Claim \ref{claim_failsnumber} we obtain that
\begin{align*}
C_{n+1} \geqslant 5C_n - 2C_{n-1} - \sum_{i=0}^{\infty}\left(3+8i\right)C_{n-2-i}.
\end{align*}
Using the induction hypothesis and then evaluating the sums of the geometric series, it follows that
\begin{align*}
C_{n+1} \geqslant 5C_n - 2\frac{C_{n}}{4} - \sum_{i=0}^{\infty}\left(3+8i\right)\frac{C_n}{4^{2+i}}
= C_n \left( \frac{9}{2} - 3\sum_{i=0}^{\infty}\frac{1}{4^{2+i}} - \frac{1}{2}\sum_{i=1}^{\infty}\frac{i}{4^{i}}\right) \\
= C_n \left( \frac{9}{2} - \frac{1}{4} - \frac{1}{2}\sum_{i=1}^{\infty}\sum_{j=1}^{i}\frac{1}{4^{i}}\right)
= C_n \left( \frac{17}{4} - \frac{1}{2}\sum_{j=1}^{\infty}\sum_{i=j}^{\infty}\frac{1}{4^{i}}\right) \\
= C_n \left( \frac{17}{4} - \frac{1}{2}\sum_{j=1}^{\infty}\frac{1}{4^{j}}\frac{4}{3}\right)
= C_n \left( \frac{17}{4} - \frac{2}{3}\sum_{j=1}^{\infty}\frac{1}{4^{j}}\right) \\
= C_n \left( \frac{17}{4} - \frac{2}{9} \right) > 4 C_n,
\end{align*}
which completes the proof.
\end{proof}
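As a sanity check on the computation above (added by us, not part of the proof), one can evaluate the series with exact rational arithmetic and confirm that the resulting coefficient equals $145/36$, which indeed exceeds $4$:

```python
from fractions import Fraction

# Evaluate sum_{i>=0} (3 + 8i) / 4^(2+i) exactly; the tail beyond
# i = 200 is astronomically small, so comparing against the closed
# form 1/4 + 2/9 = 17/36 up to 10^-100 is safe.
s = sum(Fraction(3 + 8 * i, 4 ** (2 + i)) for i in range(200))
closed_form = Fraction(1, 4) + Fraction(2, 9)        # = 17/36
assert abs(s - closed_form) < Fraction(1, 10 ** 100)

# The coefficient of C_n in the lower bound for C_{n+1}:
coeff = 5 - Fraction(2, 4) - closed_form             # = 145/36
assert coeff == Fraction(145, 36)
assert coeff > 4
```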
\subsection{Steady words are bifurcate}
As it turns out, the only square-free word which is steady, but not bifurcate is $\mathtt{1}$ over the~alphabet $\{\mathtt{1}\}$.
\begin{theorem}\label{Theorem Steady-Bifurcate}
Let $\mathcal{A}$ be a fixed alphabet with at least three letters. Every steady word over $\mathcal{A}$ is bifurcate over $\mathcal{A}$.
\end{theorem}
\begin{proof}
Up to renaming of letters, the words $\mathtt{1}$, $\mathtt{12}$, and $\mathtt{123}$ are the only steady words of length not greater than 3. Such words are also bifurcate, so the theorem holds for them.
Thus, let us assume that $n\geqslant4$ and let the square-free word $W=w_1w_2\ldots w_n$ over $\mathcal{A}$ be steady. Such a word does not contain a factor of the form $\mathtt{xyx}$, so $w_i\neq w_{i+2}$ for every $1\leqslant i\leqslant n-2$.
We show that for every $0\leqslant j\leqslant n$ there exists $\mathtt{x}\in\mathcal{A}$ such that the word $$P_j(W)\mathtt{x}S_{n-j}(W)$$ is square-free. The main idea is to show that creating a palindromic factor $\mathtt{xyx}$ in the extended word is in favor of its square-freeness. Further, we use a simple fact that an extension $A\mathtt{x}B$ of a square-free word $AB$ is square-free if and only if every factor of $A\mathtt{x}B$ which contains $\mathtt{x}$ is square-free.
\begin{case}
[$j=0$ and $j=n$]
\end{case}
Consider the extension of the word $W=w_1w_2\cdots w_n$ by the letter $w_2$ at the beginning: $$Y=\underline{w_2}w_1w_2\ldots w_n.$$ We will show that this extension is square-free. Since $w_1w_2\ldots w_n$ is square-free, it is sufficient to show that every prefix of $Y$ is square-free. Of course, $w_2$, $w_2w_1$ and $w_2w_1w_2$ are square-free. Let us notice that $w_1\neq w_3$, so $P_4(Y)$ is square-free. Finally, $w_2w_1w_2$ is the unique factor of the form $\mathtt{xyx}$ in the word $Y$, so it cannot be a prefix of a square factor of length greater than 5 in $Y$.
Analogously, one can show that the~word $w_1\ldots w_{n-1}w_n\underline{w_{n-1}}$ is square-free.
\begin{case}
[$j=1$ and $j=n-1$]
\end{case}
Consider the extension of $W$ by inserting the letter $w_3$ between the first and the second letter of $W$. The resulting word, $$Y=w_1\underline{w_3}w_2w_3\ldots w_n,$$ is square-free, since $w_1\neq w_3$, $w_2\neq w_4$, and $w_3w_2w_3$ is the unique factor of the form $\mathtt{xyx}$, so neither $w_1w_3w_2w_3$ nor $w_3w_2w_3$ can be a prefix of a square factor.
Analogously, one can show that the~word $$Y=w_1\ldots w_{n-2}w_{n-1}\underline{w_{n-2}}w_n$$ is square-free.
\begin{case}
[$2\leqslant j\leqslant n-2$]
\end{case}
In this part, let us assume that $w_j=\mathtt{a}$, $w_{j+1}=\mathtt{b}$, and $w_{j+2}=\mathtt{c}$, where $\mathtt{a}$, $\mathtt{b}$, and $\mathtt{c}$ are pairwise different letters. Thus $$W=P_{j-1}(W)\mathtt{abc}S_{n-j-2}(W).$$
Let us investigate the extension
$$Z=P_j(W){\mathtt{c}}S_{n-j}(W)$$
of the word $W$, that is,
$$Z=\lefteqn{\underbrace{\phantom{w_1w_2\ldots \mathtt{a\underline{c}bc}}}_{P_{j+3}(Z)}}w_1w_2\ldots \mathtt{a}\overbrace{\underline{\mathtt{c}}\mathtt{bc}w_{j+3}\ldots w_{n}}^{S_{n-j+1}(Z)}.$$
We will show that $Z$ is indeed a square-free word.
First notice that the word $S_{n-j+1}(Z)$ is actually an extension of the word $S_{n-j}(W)$ by inserting the second letter, $\mathtt{c}$, at the beginning. Since $S_{n-j}(W)$ is a steady word, we obtain that $S_{n-j+1}(Z)$ is square-free, by Case 1.
Next notice that the word $P_{j+3}(Z)$ cannot contain a square as a suffix, since it ends with the unique palindrome $\mathtt{cbc}$. Therefore, it is sufficient to show that there are no squares in the prefixes $P_{j+1}(Z)$ and $P_{j+2}(Z)$, and that no other potential square in $Z$ may contain the factor $\mathtt{cbc}$.
\begin{case}
[No squares in $P_{j+1}(Z)$]
\end{case}
Let us recall that $P_j(Z)=P_j(W)$ is steady. If there is a~square suffix in $P_{j+1}(Z)$, then
$$P_{j+1}(Z)=AB\mathtt{c}B\mathtt{c}$$
for some words $A$ and $B$ (where $A$ is possibly empty, while $B$ is nonempty since $w_j=\mathtt{a}\neq\mathtt{c}$) and hence $$P_{j}(Z)=AB\mathtt{c}B=P_j(W).$$ But then removing the letter $\mathtt{c}$ from $P_j(W)$ yields $ABB$, which contains the square $BB$, contradicting the assumption that $W$ is steady.
\begin{case}
[No squares in $P_{j+2}(Z)$]
\end{case}
If there is a square suffix in $P_{j+2}(Z)$, then
$$P_{j+2}(Z)=AB\mathtt{cb}B\mathtt{cb}$$
for some, possibly empty, words $A$ and $B$. This construction implies that
$$P_{j+1}(W)=AB\mathtt{cb}B\mathtt{b}.$$
This word admits the reduction $$AB\mathtt{b}B\mathtt{b},$$ obtained by removing the letter $\mathtt{c}$, which contains the square $B\mathtt{b}B\mathtt{b}$ and thus contradicts the fact that $W$ is steady.
\begin{case}[No squares with the factor $\mathtt{cbc}$]
\end{case}
If $Z$ has a square $U$ which contains the unique factor $\mathtt{cbc}$, then
$$U=\mathtt{bc}A\mathtt{c}\mathtt{bc}A\mathtt{c}\ \mbox{ or }\ U=\mathtt{c}A\mathtt{cb}\mathtt{c}A\mathtt{cb}$$
for a certain word $A$, and so $W$ has to contain a factor
$$U_1=\mathtt{bc}A\mathtt{bc}A\mathtt{c}\ \mbox{ or }\ U_2=\mathtt{c}A\mathtt{bc}A\mathtt{cb}.$$
Let us notice that $W$ cannot contain a factor $U_1$ since it would imply that $W$ is not square-free. Moreover, $W$ cannot contain a factor $U_2$ since it would imply that $W$ has a reduction with a factor $\mathtt{c}A\mathtt{c}A\mathtt{cb}$, which contradicts the fact that $W$ is steady.
The proof is complete.
\end{proof}
Let us notice that the converse statement is not true --- the word $\mathtt{12312}$ over the alphabet $\{\mathtt{1,2,3}\}$ is bifurcate, but not steady.
\subsection{Bifurcate trees of words}
We will now prove the aforementioned result on bifurcate trees. Let us start with a formal definition.
A \emph{bifurcate tree} is any family $\mathbb{B}$ of bifurcate words over a fixed alphabet arranged in a~rooted tree so that the descendants of a word are its single-letter extensions at different positions. Thus, a word of length $n$ may have at most $n+1$ descendants. A bifurcate tree is called \emph{complete} if every vertex has the maximum possible number of descendants. Notice that such a tree must be infinite.
We will demonstrate that complete bifurcate trees exist over alphabets of size at least $12$. This fact follows easily from the results concerning \emph{on-line nonrepetitive games} obtained in \cite{GrytczukSZ} and independently in \cite{KeszeghZhu}, but we recall the proof for completeness. The key idea is to apply the following result from graph coloring.
A coloring $c$ of the vertices of a graph $G$ is \emph{square-free} if for every simple path $v_1v_2\ldots v_n$ in $G$, the word $c(v_1)c(v_2)\cdots c(v_n)$ is square-free. A graph is \emph{planar} if it can be drawn on the plane without crossing edges. A planar graph is called \emph{outerplanar} if it has a plane drawing such that all vertices are incident to the outer face.
\begin{theorem}[K\"{u}ndgen and Pelsmajer \cite{KundgenPelsmajer}]\label{Theorem Kundgen-Pelsmajer}Every outerplanar graph has a square-free coloring using at most $12$ colors.
\end{theorem}
\begin{figure}[h]
\center
\includegraphics[width=0.75\textwidth]{Dyadic.png}
\caption{The graph $D_3$.}\label{Figure Dyadic}
\end{figure}
We will apply this theorem to the infinite graph $D$ constructed as follows. Let us denote $V_0=\{0,1\}$ and $V_n=\{0,\frac{1}{2^n},\frac{2}{2^n},\frac{3}{2^n},\ldots,\frac{2^n-1}{2^n},1\}$ for $n\geqslant1$. Let $P_n$, $n\geqslant0$, denote the simple path on $V_n$ with edges joining consecutive elements in $V_n$. Let $D_n$ be the union of all paths $P_j$ with $0\leqslant j\leqslant n$ (see Figure \ref{Figure Dyadic}) and let $D$ be the countable union of all graphs $D_n$, $n\geqslant 0$.
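A minimal computational sketch of this construction (our own illustration; the function name is ours) builds the vertex and edge sets of $D_n$ using exact dyadic rationals:

```python
from fractions import Fraction

def dyadic_graph(n):
    """Vertices and edges of D_n: the union of the paths P_0, ..., P_n
    on the dyadic rationals with denominator 2^j in [0, 1]."""
    edges = set()
    for j in range(n + 1):
        level = [Fraction(k, 2 ** j) for k in range(2 ** j + 1)]
        edges |= set(zip(level, level[1:]))   # P_j joins consecutive points
    vertices = {v for e in edges for v in e}
    return vertices, edges

V, E = dyadic_graph(3)
assert len(V) == 2 ** 3 + 1     # the vertices of D_3 are exactly V_3
assert len(E) == 2 ** 4 - 1     # each P_j contributes 2^j fresh edges
```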
\begin{theorem}\label{Theorem Bifurcate Tree}There exists a complete bifurcate tree over an alphabet of size $12$.
\end{theorem}
\begin{proof}
Clearly, every finite graph $D_n$ is outerplanar, hence by Theorem \ref{Theorem Kundgen-Pelsmajer} it has a square-free coloring with $12$ colors. By compactness, the infinite graph $D$ also has a $12$-coloring without a square on any simple path. Fix one such square-free coloring $c$ of the graph $D$. It is now easy to extract a complete bifurcate tree $\mathbb{B}$ out of this coloring in the following way.
The root of $\mathbb{B}$ is $c(\frac{1}{2})$. Its descendants are $c(\frac{1}{4})c(\frac{1}{2})$ and $c(\frac{1}{2})c(\frac{3}{4})$. Each of these has the~following sets of descendants, $$c\left(\frac{1}{8}\right)c\left(\frac{1}{4}\right)c\left(\frac{1}{2}\right),c\left(\frac{1}{4}\right)c\left(\frac{3}{8}\right)c\left(\frac{1}{2}\right),c\left(\frac{1}{4}\right)c\left(\frac{1}{2}\right)c\left(\frac{5}{8}\right)$$ and $$c\left(\frac{3}{8}\right)c\left(\frac{1}{2}\right)c\left(\frac{3}{4}\right),c\left(\frac{1}{2}\right)c\left(\frac{5}{8}\right)c\left(\frac{3}{4}\right),c\left(\frac{1}{2}\right)c\left(\frac{3}{4}\right)c\left(\frac{7}{8}\right),$$ respectively. In general, every word $W$ in $\mathbb{B}$ corresponds to a path in the graph $D$ whose vertices are taken from different paths $P_n$ and are closest to each other as rational points of the~unit interval. All these words are square-free by the more general property of the coloring~$c$.
\end{proof}
One may easily extend this result to doubly infinite words, with the notions of bifurcate words and trees extended in a natural way.
\begin{theorem}\label{Theorem Bifurcate Tree Infinite}There exists a complete bifurcate tree of doubly infinite words over an alphabet of size $12$.
\end{theorem}
\begin{proof}
Consider the graph $G$ on the set of all integers which is a countable union of copies of the graph $D$ inserted in every interval $[n,n+1]$, for every integer $n$. By Theorem \ref{Theorem Kundgen-Pelsmajer} the~graph $G$ has a square-free coloring $c$ using at most $12$ colors. As the root for the constructed tree one may take the doubly infinite word$$R=\cdots c(-3)c(-2)c(-1)c(0)c(1)c(2)\cdots.$$
This word is clearly bifurcate and the whole assertion follows similarly as in the previous proof.
\end{proof}
On the other hand, it is not difficult to establish that the above results are no longer true over an alphabet of size four.
\begin{theorem}\label{Theorem Bifurcate Tree Lower Bound}
There is no complete bifurcate tree with words of length more than $6$ over a~$4$-letter alphabet.
\end{theorem}
\begin{proof}Let $\mathcal{A}=\{\mathtt{a,b,c,d}\}$ and suppose that $\mathbb{B}$ is a complete bifurcate tree over $\mathcal{A}$. We may assume that $\mathtt{ac}$ is a word in $\mathbb{B}$. Then an extension of $\mathtt{ac}$ in the middle is either $\mathtt{abc}$ or $\mathtt{adc}$. Notice that each of these words can be factorized as $XY$ such that $X$ is a word over the~alphabet $\{\mathtt{a,b}\}$ and $Y$ is a word over the alphabet $\{\mathtt{c,d}\}$. This property will be preserved in every further extension at the position separating $X$ from $Y$. So, the longest possible square-free word in $\mathbb{B}$ has the form $\mathtt{abacdc}$, which proves the assertion.
\end{proof}
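The last step of the proof uses the classical fact that the longest square-free word over a two-letter alphabet has length $3$; a brute-force Python check (added by us for illustration) confirms this and the square-freeness of $\mathtt{abacdc}$:

```python
from itertools import product

def is_square_free(w):
    n = len(w)
    return not any(w[i:i + h] == w[i + h:i + 2 * h]
                   for i in range(n) for h in range(1, (n - i) // 2 + 1))

# Square-free binary words of length 3 exist (e.g. 'aba'), but every
# binary word of length 4 contains a square:
assert any(is_square_free("".join(w)) for w in product("ab", repeat=3))
assert not any(is_square_free("".join(w)) for w in product("ab", repeat=4))

# Hence XY with X over {a,b} and Y over {c,d} has length at most 6:
assert is_square_free("abacdc")
```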
\section{Final remarks}
Let us conclude the paper with some suggestions for future research.
First notice that the assertion of Theorem \ref{Theorem Kundgen-Pelsmajer} is actually much stronger than needed for deriving conclusions on bifurcate trees. Indeed, the $12$-coloring it provides is square-free on all possible paths while for our purposes it is sufficient to consider only directed paths going always to the right. More formally, let $D^*$ denote the directed graph obtained from $D$ by orienting every edge to the right (towards the larger number).
\begin{problem}
Determine the least possible $k$ such that there is a $k$-coloring of $D^*$ in which all directed paths are square-free.
\end{problem}
By Theorem \ref{Theorem Kundgen-Pelsmajer} we know that $k\leqslant 12$, but most probably this is not the best possible bound. Clearly, any improvement for the constant $k$ would give an improvement in statements of Theorems \ref{Theorem Bifurcate Tree} and \ref{Theorem Bifurcate Tree Infinite}. On the other hand, by Theorem \ref{Theorem Bifurcate Tree Lower Bound} we know that $k\geqslant 5$.
Notice that the family of graphs $D_n$ we used in the proof of Theorem \ref{Theorem Bifurcate Tree} is actually a quite restricted subclass of planar graphs, which in turn is just one of the \emph{minor-closed} classes of graphs. It has been recently proved by Dujmović, Esperet, Joret, Walczak, and Wood \cite{DujmovicII} that every such class (except the class of all finite graphs) has bounded square-free chromatic number. In particular, every planar graph has a square-free coloring using at most $768$ colors. Perhaps these results could be used to derive other interesting properties of words. For such applications it is sufficient to restrict to \emph{oriented} planar graphs, that is, directed graphs arising from simple planar graphs by fixing for every edge one of the two possible orientations.
\begin{problem}
Determine the least possible $k$ such that there is a $k$-coloring of any oriented planar graph in which all directed paths are square-free.
\end{problem}
Finally, let us mention another striking connection between words and graph colorings. Let $n\geqslant 0$ be fixed, and consider all possible proper vertex $4$-colorings of the graph $D_n$. Identifying colors with letters, one may think of these colorings as words over a $4$-letter alphabet. Let us denote this set by $\mathbb{D}_n$. Clearly, every word in $\mathbb{D}_n$ has length equal to $2^{n}+1$ --- the number of vertices in the graph $D_n$.
Let $W$ be any word of length $N$ and let $A$ be any subset of $\{1,2,\ldots,N\}$. Denote by $W_A$ the subword of $W$ along the set of indices $A$. The following statement is a simple consequence of the celebrated \emph{Four Color Theorem} (see \cite{JensenToft}, \cite{Thomas}).
\begin{theorem}
For every pair of positive integers $n\leqslant N$ and any set of positive integers $A$, with $|A|=2^n+1$ and $\max A\leqslant 2^N+1$, there exists a word $W\in \mathbb{D}_N$ such that $W_A\in \mathbb{D}_n$.
\end{theorem}
What is more surprising is that this statement is actually equivalent to the Four Color Theorem, as proved by Descartes and Descartes \cite{Descartes} (see \cite{JensenToft}). Perhaps one could prove it directly, without referring to graph coloring and without huge computer verifications.
\section{Abstraction Functions} \label{abstraction-functions}
This section defines the abstraction functions used throughout this paper.
In particular, the following abstraction function links the concrete and the abstract configuration-spaces in Section~\ref{pdcfa}.
The abstraction function recurs structurally:
\begin{align*}
\alpha(\varsigma, \kappa)
&= (\alpha_{State}(\varsigma), \alpha_{Kont}(\kappa))
&& \text{[configuration abstraction]}
\\
\alpha_{State}(e, \rho, \sigma)
&= (e, \alpha_{Env}(\rho), \alpha_{Store}(\sigma))
&& \text{[state abstraction]}
\\
\alpha_{Env}(\rho)(v)
&= \alpha_{Addr}(\rho(v))
&& \text{[environment abstraction]}
\\
\alpha_{Store}(\sigma)({\hat{\addr}})
&= \bigsqcup_{\alpha_{Addr}(a) = {\hat{\addr}}} \alpha_{Clo}(\sigma(a))
&& \text{[store abstraction]}
\\
\alpha_{Clo}(\ensuremath{\var{lam}}, \rho)
&= \set{(\ensuremath{\var{lam}}, \alpha_{Env}(\rho))}
&& \text{[closure abstraction]}
\\
\alpha_{Kont}(\vect{\phi_1, \dots, \phi_n})
&= \vect{\alpha_{Frame}(\phi_1), \dots, \alpha_{Frame}(\phi_n)}
&& \text{[stack abstraction]}
\\
\alpha_{Frame}(v, e, \rho)
&= (v, e, \alpha_{Env}(\rho))
&& \text{[frame abstraction]}
\text.
\end{align*}
Just as address-allocation is a parameter, the address abstraction function, $\alpha_{Addr} : \Addr \to \aAddr$, is a parameter for the abstract semantics.
For Sections~\ref{ssintro}, \ref{sscfa}, and~\ref{ssgammacfa}, the abstraction function for configurations is:
\begin{align*}
\alpha(\varsigma, \kappa)
&= (\alpha_{State}(\varsigma), \alpha_{S}(\kappa))
&& \text{[configuration abstraction]}
\text,
\end{align*}
where the stack summarization function, $\alpha_{S}$, is a parameter as described in Section~\ref{ssintro}.
\section{Building a Dyck configuration graph}
\label{sec:algorithm}
\subsection{Building a Dyck state graph for PDCFA}
A fixed-point approach to building Dyck state graphs for PDCFA is presented in~\cite{local:Earl:2010:PDCFA}.
The algorithm in Figure \ref{fig:build-DSG-algorithm} is a similar iterative algorithm, but it is formulated for stack-summarizing control-flow analysis (Section \ref{sscfa}).
The underlying approaches are similar.
In fact, for the algorithm in Figure \ref{fig:build-DSG-algorithm},
replacing configurations with states and switching the transition relation used throughout to the transition relation of Section~\ref{pdcfa} ($\aoTons$) is enough to convert the algorithm to build standard Dyck state graphs.
The main difference is that the algorithm presented here examines a single state, transition, or shortcut edge each iteration, whereas the fixed-point algorithm examines the entire frontier each iteration.
\subsection{Building a Dyck state graph for SSCFA}
The algorithm in Figure~\ref{fig:build-DSG-algorithm} builds the Dyck configuration graph and the $\epsilon$-closure graph{} for a given program $e$.
It uses the worklists, $\Delta S$, $\Delta E$, and $\Delta H$, to maintain the frontier of unexplored configurations, transitions, and shortcut edges respectively.
The while loop runs until all the worklists are empty, which is exactly when everything reachable has been explored.
Each iteration of the while loop explores one previously unexplored shortcut edge, transition, or configuration.
Each new configuration, transition, and shortcut edge can imply other configurations, transitions, and shortcut edges globally.
Thus, after each configuration, transition, or shortcut edge is explored, the implied configurations, transitions, and shortcut edges are added to the worklists.
The procedure $addShort$ finds all the configurations, transitions, and shortcut edges implied by the given shortcut edge.
The only new transitions implied by a shortcut edge are pop transitions that are enabled by a new push transition.
The only new configurations are those in the newly implied pop transitions.
The procedure $addEdge$ finds all the configurations, transitions, and shortcut edges implied by the given transition.
There are three types of transitions:
First, there are no-op ($\epsilon$) transitions, which immediately become shortcut edges, and so are added to the $\epsilon$-closure graph{} and are expanded.
Next, there are push transitions, which imply new pop transitions (and configurations from these pop transitions) as well as new shortcut edges (from pre-existing pop transitions).
Finally, there are pop transitions, which imply only new shortcut edges from pre-existing push transitions.
The last procedure $Explore$ finds all the configurations, transitions, and shortcut edges implied by the given configuration.
A configuration cannot imply any shortcut edges directly.
However, a configuration can imply new no-op, push, or pop transitions as well as new configurations from these transitions.
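The overall control structure, three worklists drained to a fixed point with each processed item contributing new items, can be summarized by the following generic Python sketch. It is our own simplification: the `step` functions are placeholders standing in for $addShort$, $addEdge$, and $Explore$, not the actual transition relations.

```python
def run_worklists(initial, step_conf, step_edge, step_short):
    """Generic three-worklist fixed point.  Each step_* function takes
    (item, confs, edges, shorts) and returns a triple of newly implied
    (configurations, edges, shortcut edges)."""
    confs, edges, shorts = set(), set(), set()
    dS, dE, dH = {initial}, set(), set()
    while dS or dE or dH:
        if dH:                                  # shortcut edges first
            item = dH.pop(); shorts.add(item)
            ns, ne, nh = step_short(item, confs, edges, shorts)
        elif dE:                                # then transitions
            item = dE.pop(); edges.add(item)
            ns, ne, nh = step_edge(item, confs, edges, shorts)
        else:                                   # then configurations
            item = dS.pop(); confs.add(item)
            ns, ne, nh = step_conf(item, confs, edges, shorts)
        # Only genuinely new items enter the frontier, as in the figure.
        dS |= ns - confs; dE |= ne - edges; dH |= nh - shorts
    return confs, edges, shorts

# Toy instance: configurations 0..4, each implying an edge to its
# successor; no shortcut edges are ever produced.
step_conf = lambda c, *rest: ({c + 1} if c < 4 else set(),
                              {(c, c + 1)} if c < 4 else set(), set())
trivial = lambda *args: (set(), set(), set())
confs, edges, shorts = run_worklists(0, step_conf, trivial, trivial)
assert confs == {0, 1, 2, 3, 4} and len(edges) == 4 and not shorts
```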
\begin{figure}
{
\def\hspace{0.2in}{\hspace{0.2in}}
\def\mathrel{\leftarrow}{\mathrel{\leftarrow}}
\parindent=0.0in
\begin{tabular}{l@{\hspace{0.1in}}l}
\textbf{procedure} $\textit{BuildDyck}(e)$
\\
$\hspace{0.2in}{\hat c}_0 \mathrel{\leftarrow} {\hat{\mathcal{I}}}_{\hat c}(e)$;
$G \mathrel{\leftarrow} (\emptyset,\Gamma,\emptyset,{\hat c}_0)$;
$G_\epsilon \mathrel{\leftarrow} (\emptyset,\emptyset)$;
$\Delta S \mathrel{\leftarrow} \{{\hat c}_0\}$;
$\Delta E \mathrel{\leftarrow} \emptyset$;
$\Delta H \mathrel{\leftarrow} \emptyset$
\\
$\hspace{0.2in}$\textbf{while}
$(
\Delta S \neq \emptyset
\textbf{ or }
\Delta E \neq \emptyset
\textbf{ or }
\Delta H \neq \emptyset
)$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
\textbf{if} $(\Delta H \neq \emptyset),$\textbf{ let}
$({\hat c},{\hat c}') \in \Delta H$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S',\Delta E',\Delta H') \mathrel{\leftarrow} addShort(G,G_\epsilon)(\biedge{{\hat c}}{{\hat c}'})$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(S, H) \mathrel{\leftarrow} G_\epsilon$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$G_\epsilon \mathrel{\leftarrow} (S, H \cup \{({\hat c},{\hat c}')\})$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S,\Delta E,\Delta H) \mathrel{\leftarrow}
(\Delta S \cup \Delta S',
\Delta E \cup \Delta E',
\Delta H \cup \Delta H' - \{({\hat c},{\hat c}')\})$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
\textbf{else if} $(\Delta E \neq \emptyset),$\textbf{ let}
$({\hat c},g,{\hat c}') \in \Delta E$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S',\Delta E',\Delta H') \mathrel{\leftarrow} addEdge(G,G_\epsilon)({\hat c} \pdedge^g {\hat c}')$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(S, \Gamma, E, {\hat c}_0) \mathrel{\leftarrow} G$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$G \mathrel{\leftarrow} (S, \Gamma, E \cup \{({\hat c},g,{\hat c}')\},{\hat c}_0)$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S,\Delta E,\Delta H) \mathrel{\leftarrow}
(\Delta S \cup \Delta S',
\Delta E \cup \Delta E' - \{({\hat c},g,{\hat c}')\},
\Delta H \cup \Delta H')$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
\textbf{else if} $(\Delta S \neq \emptyset),$\textbf{ let}
${\hat c} \in \Delta S$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S',\Delta E',\Delta H') \mathrel{\leftarrow} Explore(G,G_\epsilon)({\hat c})$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(S, \Gamma, E, {\hat c}_0) \mathrel{\leftarrow} G$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(S, H) \mathrel{\leftarrow} G_\epsilon$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(G,G_\epsilon) \mathrel{\leftarrow}
((S \cup \{{\hat c}\}, \Gamma, E,{\hat c}_0),
(S \cup \{{\hat c}\}, H))$
\\
$\hspace{0.2in}$
$\hspace{0.2in}$
$\hspace{0.2in}$
$(\Delta S,\Delta E,\Delta H) \mathrel{\leftarrow}
(\Delta S \cup \Delta S' - \{{\hat c}\},
\Delta E \cup \Delta E',
\Delta H \cup \Delta H')$
\\
$\hspace{0.2in}$\textbf{return} $G,G_\epsilon$
\end{tabular}
\vspace{.5em}
\begin{tabular}{l@{\hspace{0.1in}}l}
\textbf{procedure} $addShort(G,G_\epsilon)({{\hat c}},{{\hat c}'})$
\\
$\hspace{0.2in}(S, \Gamma, E, {\hat c}_0) \mathrel{\leftarrow} G$;
$(S, H) \mathrel{\leftarrow} G_\epsilon$
\\
$\hspace{0.2in}\Delta E \mathrel{\leftarrow} \setbuild{({\hat c}',{\hat{\phi}_-},{\hat c}_2)}
{
({\hat c}_1,{\hat{\phi}_+},{\hat c}) \in E
\text{ and }
{\hat c}' \aoToss^{\hat{\phi}_-} {\hat c}_2
}$
\\
$\hspace{0.2in}\Delta S \mathrel{\leftarrow} \setbuild{{\hat c}_2}{({\hat c}_1,{\hat{\phi}_-},{\hat c}_2) \in \Delta E}$
\\
$\hspace{0.2in}\Delta H \mathrel{\leftarrow} \setbuild{({{\hat c}_1},{{\hat c}'})}{({{\hat c}_1},{{\hat c}}) \in H}$
\\
$\hspace{0.2in}\ind\hspace{0.2in}\cup
\setbuild{({{\hat c}},{{\hat c}_2})}{({{\hat c}'},{{\hat c}_2}) \in H}$
\\
$\hspace{0.2in}\ind\hspace{0.2in}\cup
\setbuild{({{\hat c}_1},{{\hat c}_2})}
{
({{\hat c}_1},{{\hat c}}),
({{\hat c}'},{{\hat c}_2}) \in H
}$
\\
$\hspace{0.2in}$\textbf{return} $\Delta S - S,\ \Delta E - E,\ \Delta H - H$
\end{tabular}
\vspace{.5em}
\begin{tabular}{l@{\hspace{0.1in}}l}
\textbf{procedure} $addEdge(G,G_\epsilon)({\hat c} \pdedge^g {\hat c}')$
\\
$\hspace{0.2in}(S, \Gamma, E, {\hat c}_0) \mathrel{\leftarrow} G$;
$(S, H) \mathrel{\leftarrow} G_\epsilon$
\\
$\hspace{0.2in}$\textbf{if} $(g = \epsilon)$
%
\textbf{return} $addShort(G,(S, H \cup \{({{\hat c}},{{\hat c}'})\}))({{\hat c}},{{\hat c}'})$
\\
$\hspace{0.2in}$\textbf{else if} $(g = \hat{\phi}_+)$
\\
$\hspace{0.2in}\ind$
$\Delta E \mathrel{\leftarrow} \setbuild{{\hat c}_1 \pdedge^{\hat{\phi}_-} {\hat c}_2}
{
\biedge{{\hat c}'}{{\hat c}_1} \in H
\text{ and }
{\hat c}_1 \aoToss^{\hat{\phi}_-} {\hat c}_2
}$
\\
$\hspace{0.2in}\ind$
$\Delta S \mathrel{\leftarrow} \setbuild{{\hat c}_2}{{\hat c}_1 \pdedge^{\hat{\phi}_-} {\hat c}_2 \in \Delta E}$
\\
$\hspace{0.2in}\ind$
$\Delta H \mathrel{\leftarrow} \setbuild{\biedge{{\hat c}}{{\hat c}_2}}
{
\biedge{{\hat c}'}{{\hat c}_1} \in H
\text{ and }
{\hat c}_1 \pdedge^{\hat{\phi}_-} {\hat c}_2 \in E
}$
\\
$\hspace{0.2in}\ind$
\textbf{return} $\Delta S - S,\ \Delta E - E,\ \Delta H - H$
\\
$\hspace{0.2in}$\textbf{else if} $(g = \hat{\phi}_-)$
\\
$\hspace{0.2in}\ind$
\textbf{return} $\emptyset,\ \emptyset,\
\setbuild{\biedge{{\hat c}_1}{{\hat c}'}}
{
\biedge{{\hat c}_2}{{\hat c}} \in H
\text{ and }
{\hat c}_1 \pdedge^{\hat{\phi}_+} {\hat c}_2 \in E
}- H$
\end{tabular}
\vspace{.5em}
\begin{tabular}{l@{\hspace{0.1in}}l}
\textbf{procedure} $Explore(G,G_\epsilon)({\hat c})$
\\
$\hspace{0.2in} (S, \Gamma, E, {\hat c}_0) \mathrel{\leftarrow} G$;
$(S, H) \mathrel{\leftarrow} G_\epsilon$
\\
$\hspace{0.2in}$\textbf{return}
$\setbuild{{\hat c}'}
{{\hat c} \aoToss^{g} {\hat c}'} - S,\
\setbuild{({\hat c},g,{\hat c}')}
{{\hat c} \aoToss^{g} {\hat c}'} - E,\
\emptyset$
\end{tabular}
}
\caption{Algorithm to build a Dyck configuration graph.
%
Procedures $addShort$, $addEdge$ and $Explore$ determine
what configurations, transitions, and shortcut edges are implied by
a given shortcut edge, transition, or configuration, respectively.}
\label{fig:build-DSG-algorithm}
\end{figure}
\subsection{Building a Dyck state graph for SS$\Gamma$CFA}
Finally, we let the procedures of Figure~\ref{fig:build-DSG-algorithm} use this transition relation ($\aToss_{AGC}$) and the reachable address push function $\apush^{ra}$.
Now the procedure $\textit{BuildDyck}$ of Figure \ref{fig:build-DSG-algorithm} computes stack-summarizing control-flow analysis with abstract garbage collection soundly.
\section{Classical control-flow analysis} \label{classical-cfa}
This section presents traditional control-flow analysis for reference
and comparison with pushdown and stack-summarizing control-flow analysis.
Classical control-flow analysis for ANF operates over the abstract
state-space in Figure \ref{fig:abs-conf-space}.
Our classical formulation follows Van Horn and Might's technique of
allocating abstract continuations in the store, as opposed to stacking
them~\cite{local:VanHorn:2010:Abstract}.
\begin{figure}
\begin{align*}
{\hat c} \in \sa{Conf} &= \sa{State} \times \sa{Addr} && \text{[configurations]}
\\
{\hat{\varsigma}} \in \sa{State} &= \syn{Exp} \times \sa{Env} \times \sa{Store} && \text{[states]}
\\
{\hat{\rho}} \in \sa{Env} &= \syn{Var} \rightharpoonup \sa{Addr} && \text{[environments]}
\\
{\hat{\sigma}} \in \sa{Store} &= \sa{Addr} \to \Pow{\sa{Clo} \cup \sa{Frame}} && \text{[stores]}
\\
{\widehat{\var{clo}}} \in \sa{Clo} &= \syn{Lam} \times \sa{Env} && \text{[closures]}
\\
\hat{\phi} \in \sa{Frame} &= \syn{Var} \times \syn{Exp} \times \sa{Env} \times \sa{Addr} && \text{[stack frames]}
\\
\widehat{\rp}, {\hat{\addr}} \in \sa{Addr} &\text{ is a \emph{finite} set of addresses} && \text{[addresses]}
\end{align*}
\caption{Abstract configuration-space for classical control-flow analysis.}
\label{fig:abs-conf-space}
\end{figure}
To complete the abstract semantics we need to define program injection, atomic expression evaluation, reachable configurations, transition relation, address allocation, and abstraction function:
\paragraph{Program injection}
The abstract injection function ${\hat{\mathcal{I}}} : \syn{Exp} \to \sa{Conf}$
pairs an expression with an empty environment, an empty store and an
empty stack to create the initial abstract configuration:
\begin{equation*}
{\hat c}_0 = {\hat{\mathcal{I}}}(e) = (e, [], [], \sa{null})
\text,
\end{equation*}
where $\sa{null}$ is an address bound to nothing in the store, thus representing an empty stack.
\paragraph{Atomic expression evaluation}
The abstract atomic expression evaluator, ${\hat{\mathcal{A}}} : \syn{Atom}
\times \sa{Env} \times \sa{Store} \to \PowSm{\sa{Clo} \cup \sa{Frame}}$, returns the value of
an atomic expression or a stack frame in the context of an environment and a store;
note how it returns a set:
\begin{align*}
{\hat{\mathcal{A}}}(\ensuremath{\var{lam}},{\hat{\rho}},{\hat{\sigma}}) &= \set{(\ensuremath{\var{lam}},{\hat{\rho}})} && \text{[closure creation]}
\\
{\hat{\mathcal{A}}}(v,{\hat{\rho}},{\hat{\sigma}}) &= {\hat{\sigma}}({\hat{\rho}}(v)) && \text{[variable look-up]}
\text.
\end{align*}
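As an illustrative rendering (with our own data encodings, not an implementation from this work), the set-valued evaluator can be sketched in Python:

```python
def aeval(atom, env, store):
    """Abstract atomic evaluation: a lambda term evaluates to the
    singleton set of its closure over the current environment; a
    variable to whatever set of abstract values the store holds at
    its address."""
    if isinstance(atom, tuple) and atom[0] == "lam":
        return {(atom, frozenset(env.items()))}   # closure creation
    return store[env[atom]]                        # variable look-up

lam = ("lam", "x", "x")
env = {"f": "addr_f"}
store = {"addr_f": {(lam, frozenset())}}
# A variable look-up returns the whole set bound at its address:
assert aeval("f", env, store) == {(lam, frozenset())}
assert aeval(lam, env, store) == {(lam, frozenset(env.items()))}
```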
\paragraph{Reachable configurations}
The abstract program evaluator ${\hat{\mathcal{E}}} : \syn{Exp} \to
\PowSm{\sa{Conf}}$ returns all of the configurations reachable from
the initial configuration:
\begin{equation*}
{\hat{\mathcal{E}}}(e) = \setbuild{ {\hat c} }{ {\hat{\mathcal{I}}}(e) \leadsto^* {\hat c} }
\text.
\end{equation*}
\paragraph{Transition relation}
The abstract transition relation $(\leadsto) \subseteq \sa{Conf} \times
\sa{Conf}$ has three rules, two of which have become nondeterministic.
A tail call may fork because there could be multiple abstract closures
that it is invoking:
\begin{align*}
(\overbrace{(\sembr{\appform{f}{\mbox{\sl {\ae}}}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, \widehat{\rp})
&\leadsto
((e,{\hat{\rho}}'',{\hat{\sigma}}'),\widehat{\rp})
\text{, where }
\\
(\sembr{\lamform{v}{e}}, {\hat{\rho}}') &\in {\hat{\mathcal{A}}}(f,{\hat{\rho}},{\hat{\sigma}})
\\
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
\\
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
\\
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\text.
\end{align*}
The partial order for stores is:
\begin{equation*}
({\hat{\sigma}} \sqcup {\hat{\sigma}}')({\hat{\addr}}) = {\hat{\sigma}}({\hat{\addr}}) \cup {\hat{\sigma}}'({\hat{\addr}})
\text.
\end{equation*}
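With stores encoded as maps from addresses to sets of abstract values, this join is a pointwise set union; a small Python sketch of this encoding (ours, for illustration) is:

```python
def join(store1, store2):
    """Pointwise join of two abstract stores: at every address the
    result holds the union of the two value sets."""
    return {a: store1.get(a, set()) | store2.get(a, set())
            for a in store1.keys() | store2.keys()}

s1 = {"a1": {"clo1"}, "a2": {"clo2"}}
s2 = {"a2": {"clo3"}}
assert join(s1, s2) == {"a1": {"clo1"}, "a2": {"clo2", "clo3"}}
```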
\noindent
A non-tail call builds a frame, adds it to the store, and evaluates the call:
\begin{align*}
(\overbrace{(\sembr{\letiform{v}{\ensuremath{\var{call}}}{e}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, \widehat{\rp})
&\leadsto
((\ensuremath{\var{call}},{\hat{\rho}},{\hat{\sigma}}'), \widehat{\rp}')
\text{, where }
\\
\widehat{\rp}' &= {\widehat{alloc}}(v,{\hat{\varsigma}})
\\
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [\widehat{\rp}' \mapsto (v,e,{\hat{\rho}},\widehat{\rp})]
\text.
\end{align*}
\noindent
A function return may fork because there could be multiple frames bound to the current return pointer:
\begin{align*}
(\overbrace{(\mbox{\sl {\ae}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, \widehat{\rp})
&\leadsto
((e,{\hat{\rho}}'',{\hat{\sigma}}'), \widehat{\rp}')
\text{, where }
\\
(v,e,{\hat{\rho}}',\widehat{\rp}') &\in {\hat{\sigma}}(\widehat{\rp})
\\
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
\\
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
\\
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\text.
\end{align*}
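The forking behavior of the return rule is easy to see in a small sketch (assuming, hypothetically, that frames are encoded as `(v, e, env, rp)` tuples and that the abstract store maps return pointers to sets of frames):

```python
def return_targets(store, rp):
    """Every frame bound to the current return pointer is a possible
    return target, so a single abstract return may fork into several
    successor states.  Returns the (body, caller-rp) pairs."""
    return {(e, rp2) for (v, e, env, rp2) in store.get(rp, frozenset())}
```

When two call sites share an abstract return pointer, both frames appear in the result, which is precisely the return-flow merging that the pushdown formulation later eliminates.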
\paragraph{Allocation, polyvariance and context-sensitivity}
\label{sec:polyvariance}
In the abstract semantics, the abstract allocation function
${\widehat{alloc}} : \syn{Var} \times \sa{State} \to \sa{Addr}$ determines the
polyvariance of the analysis (and, by extension, its
context-sensitivity).
The abstract allocation function is overloaded to assign return pointers
(addresses) to abstract stack frames:
${\widehat{alloc}} : \sa{Frame} \times \sa{State} \to \sa{Addr}$.
In a control-flow analysis, \emph{polyvariance} literally refers to
the number of abstract addresses (variants) there are for each
variable.
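Two common instantiations of the allocation knob can be sketched as follows (a purely illustrative encoding in which a state is an `(expr, env, store)` tuple; the function names are our own):

```python
def alloc_mono(v, state):
    """Monovariant (0CFA-style) allocation: one abstract address per
    variable, regardless of calling context."""
    return v

def alloc_callsite(v, state):
    """1-call-site-sensitive allocation: pair the variable with the
    expression component of the state, so each call site being
    evaluated gets its own variant of the variable."""
    expr, env, store = state
    return (v, expr)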
\paragraph{Abstraction function}
The abstraction function ($\alpha$) converts any structure from the
concrete semantics (Figure \ref{fig:conc-abs-conf-space}) into an abstract
form of the same structure (Figure \ref{fig:abs-conf-space}).
(The abstraction function is defined in Appendix~\ref{abstraction-functions}. While not specifically defined for these semantics, the abstraction function there can easily be modified to work with return pointers.)
\paragraph{Comparison to pushdown control-flow analysis}
The abstract semantics of pushdown control-flow analysis
are similar to those of the abstract
semantics for classical control-flow analysis (Figure
\ref{fig:abs-conf-space}).
However, a few key differences are worth noting.
Foremost, we are not working with the configuration-space directly;
rather we deal with the control-state-space.
Hence, a configuration is now defined as a state and a stack paired
together.
The results of pushdown control-flow analysis are rooted pushdown
systems rather than nondeterministic finite automata.
A rooted pushdown system handles the stack and configurations
implicitly, so we use the control-state-space instead of the
configuration-space.
The next change is that the store only contains bindings from
addresses to closures, instead of from addresses to closures and
frames.
The abstraction of the store creates imprecision.
By keeping the frames out of the store (and in a precise structure) we
avoid this imprecision for continuations.
The final difference is that frames no longer contain return pointers.
Again, this is because the enriched abstract transition system
encapsulates that information precisely.
\section{Conclusion} \label{concl}
We presented SS$\Upgamma$CFA, a synergistic fusion of pushdown
analysis and abstract garbage collection to combat the twin sinks for
precision in higher-order flow analysis: merging in arguments, and
merging in return-flow.
In order to create SS$\Upgamma$CFA, we had to first create SSCFA, a
pushdown control-flow analysis for higher-order programs capable of
iteratively synthesizing summaries of stack properties; in this case,
we required a summary of reachable addresses on the stack.
Abstract garbage collection combats merging in arguments by
eliminating monotonicity for the abstract store; pushdown analysis
eliminates the loss in return-flow precision by simulating the
concrete call stack with a pushdown stack, thereby properly matching
returns to calls.
\section{Results} \label{results}
\section{Introduction} \label{intro}
In higher-order flow analysis~\cite{mattmight:Shivers:1991:CFA},
merging is the enemy.
Merging of flow sets and control-flow paths is what destroys
precision.
Merging occurs in two forms: merging on call (for arguments) and
merging on return (for return-flow and return values).
Our goal is to alleviate argument-merging while simultaneously
eliminating return-flow merging.
For an example of both kinds of merging, consider the following code:
\begin{code}
(let* ((id (lambda (x) x))
       (a (id 3))
       (b (id 4)))
  b)\end{code}
Flow-sensitive 0CFA makes the following inferences:
(1) the two instances of the argument {\tt x} merge together:
{\tt 3} and {\tt 4}, and
(2) the (implicit) continuations at applications of the identity
function also merge together, causing its return values to merge in
the variable {\tt b}.
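The argument merging in inference (1) can be simulated in a few lines (a hypothetical sketch, assuming a monotone abstract store that allocates a single abstract address per variable, as 0CFA does):

```python
def bind(store, addr, val):
    """Monotone store update: flow sets only grow, never shrink."""
    return {**store, addr: store.get(addr, frozenset()) | {val}}

# Both calls to id bind the argument to the *same* abstract address 'x':
store = bind({}, 'x', 3)     # the call (id 3)
store = bind(store, 'x', 4)  # the call (id 4)
```

After the second binding, the flow set for `x` is `{3, 4}`: neither value can ever be distinguished again downstream.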
For two decades, context-sensitivity---splitting bindings, calls and
returns among a finite set of abstract instances---has been the
``solution'' to both merging problems.
But, context-sensitivity is a finite, monotonic band-aid for an
infinite, non-monotonic problem.
Arguments ultimately merge because flow information accretes
\emph{monotonically}: once an analysis says that $x$ may flow to $y$, it will
never revoke that inference.
Return-flows merge because finite flow analyses implicitly allocate a
finite number of abstract stack pointers to continuations.
\subsection{Two solutions}
Might and Shivers developed abstract garbage collection (abstract GC)
to tame the argument-merging
problem~\cite{mattmight:Might:2006:GammaCFA}.
Abstract GC assumes a small-step abstract
interpretation~\cite{mattmight:Cousot:1977:AI,mattmight:Cousot:1979:Galois}
over a finite state-space.
Much like concrete GC, abstract GC finds all of the reachable
addresses in an abstract heap and reclaims any unreachable
addresses.
With abstract GC, the abstract heap no longer grows
monotonically across a small-step transition: the same abstract address has
the chance to get rebound to a singleton flow set over a different
value many times over, thereby making more judicious use of the
abstract resources available.
For programs composed of (possibly recursive) tail calls and closures
which never escape, abstract garbage collection delivers perfectly
precise control-flow analysis.
Pushdown control-flow analysis
(PDCFA)~\cite{local:Earl:2010:PDCFA}, a relative of Vardoulakis
and Shivers's CFA2~\cite{mattmight:Vardoulakis:2010:CFA2}, solves the
return-flow problem by using the arbitrarily large pushdown stack to
model the concrete call stack; thus, continuations never merge.
PDCFA can reason through arbitrary levels of recursive calls.
\subsection{One problem}
Our mission is to combine the benefits of both abstract
garbage collection and pushdown control-flow analysis: to produce an
``almost complete'' control-flow analysis which eliminates \emph{most}
argument merging and \emph{all} continuation merging.
\emph{
The challenge is an apparent incompatibility between the two techniques.}
Abstract garbage collection must have the ability to search an
entire state---stack included---to determine the reachable addresses.
A pushdown control-flow analysis approximates the evaluation of a
program, roughly speaking, as a pushdown automaton. The machine
states of the PDA represent the control string, environment, and store
(heap) of the evaluator; while the stack of the PDA represents the
evaluator's stack, where each letter of the stack alphabet represents
a continuation frame. Transitions of the PDA push and pop frames much
like an abstract machine (e.g., the CESK machine) pushes and pops
continuations.
When a machine like the CESK machine performs garbage collection, it
crawls the stack to determine reachable heap locations.
That works because the stack is explicit in each machine state:
it's the K component.
But in a pushdown analysis, the abstract stack is not represented in
each control state.
Rather, the stack's structure is scattered across the transition
graph between control-states.
In more detail, the data structure accumulated during pushdown
analysis is a transition graph where each node contains the C, E and S
components and each edge is labeled with the change to K that
happens on that transition.
In order to recover the possible stack(s) at a node in the graph, the
analysis must consider all the paths from the initial control
state to the current state.
\subsection{Our contribution: SSCFA}
To complete our mission, we develop a new kind of higher-order
pushdown-like control-flow analysis that includes stack
\emph{summaries} in its control states: SSCFA.
To make our contribution more general, we place constraints on stack
summaries (in lieu of fixing them to be reachable addresses) and we
let clients supply alternate summaries, \emph{e.g.}, all procedures live on the
stack, whether the security context is privileged or unprivileged.
Thus, SSCFA could drive pushdown variants of dependence analysis or
even escape analysis in addition to abstract garbage collection.
The remainder of this paper is organized as follows:
\begin{itemize}
\item Section~\ref{prelim} reviews simple preliminaries for working
with pushdown systems.
\item Section~\ref{pdcfa} reviews pushdown control-flow analysis and
Dyck state graphs.
\item Section~\ref{abstract-gc} introduces the problem with integrating abstract garbage collection and pushdown analysis.
\item Section~\ref{ssintro} informally introduces the notion of a
\emph{stack summary}, defines criteria for \emph{stack
summarization}, and gives example summarization strategies.
\item Section~\ref{sscfa} formally defines stack-summarizing control
flow-analysis.
\item Section~\ref{ssgammacfa} presents the computable product of stack
summarizing control-flow analysis and abstract garbage collection.
\item Section~\ref{related} discusses related work and Section~\ref{concl} concludes.
\end{itemize}
\section{Pushdown control-flow analysis} \label{pdcfa}
In this section we present the concrete and abstract semantics for the
pushdown control-flow analysis (PDCFA) of a call-by-value $\lambda$-calculus{}, which
represents the core of a higher-order programming language.
To simplify presentation of the concrete and abstract semantics, we
analyze programs in A-Normal Form (ANF),
a syntactic discipline that enforces an order of evaluation and
requires that all arguments to a function be atomic:
\begin{align*}
e \in \syn{Exp} &\mathrel{::=} \letiform{v}{\ensuremath{\var{call}}}{e} && \text{[non-tail call]}
\\
&\;\;\mathrel{|}\;\; \ensuremath{\var{call}} && \text{[tail call]}
\\
&\;\;\mathrel{|}\;\; \mbox{\sl {\ae}} && \text{[return]}
\\
f,\mbox{\sl {\ae}} \in \syn{Atom} &\mathrel{::=} v \mathrel{|} \ensuremath{\var{lam}} && \text{[atomic expressions]}
\\
\ensuremath{\var{lam}} \in \syn{Lam} &\mathrel{::=} \lamform{v}{e} && \text{[lambda terms]}
\\
\ensuremath{\var{call}} \in \syn{Call} &\mathrel{::=} \appform{f}{\mbox{\sl {\ae}}} && \text{[applications]}
\\
v \in \syn{Var} &\text{ is a set of identifiers} && \text{[variables]}
\text.
\end{align*}
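One way to encode this grammar, used only for illustration in the sketches that follow, is as tagged tuples (all of the tag names and helpers here are our own assumptions, not part of the formal development):

```python
# An assumed tagged-tuple encoding of ANF:
#   ('let', v, call, e)   non-tail call
#   ('call', f, ae)       tail call / application
#   ('lam', v, e)         lambda term
#   ('var', v)            variable reference

def is_atomic(e):
    """Atomic expressions are variables and lambda terms."""
    return e[0] in ('var', 'lam')

# Example: (let ((a ((lambda (x) x) y))) a)
prog = ('let', 'a',
        ('call', ('lam', 'x', ('var', 'x')), ('var', 'y')),
        ('var', 'a'))
```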
We use the CESK machine \cite{mattmight:Felleisen:1987:CESK} to
specify the semantics of ANF.
We chose the CESK machine because it has an explicit stack.
Figure~\ref{fig:conc-abs-conf-space} contains the concrete
configuration-space of this machine.
Each configuration contains a control-state component consisting of an
expression, an environment and a store; and a continuation/stack
component.
Under our abstractions, the stack component of this
configuration-space becomes both a finite ``stack summary'' in
abstract control states and a stack component in the pushdown system.
(See Appendix~\ref{classical-cfa} for a review of the
\emph{finite-state} approach and comparison to the pushdown approach.)
PDCFA does not collapse the abstract
stack into a finite structure like classical control-flow analysis.
Instead of folding the stack into the store through frame pointers,
PDCFA distributes the stack throughout an enriched abstract transition
system.
The abstract configuration-space of pushdown control-flow analysis (Figure
\ref{fig:conc-abs-conf-space}) is similar to the concrete formulation.
\begin{figure}
\begin{align*}
c \in \s{Conf} &= \s{State} \times \s{Kont}
&
{\hat c} \in \sa{Conf} &= \sa{State} \times \sa{Kont}
&& \text{[configurations]}
\\
\varsigma \in \s{State} &= \syn{Exp} \times \s{Env} \times \s{Store}
&
{\hat{\varsigma}} \in \sa{State} &= \syn{Exp} \times \sa{Env} \times \sa{Store}
&& \text{[states]}
\\
\rho \in \s{Env} &= \syn{Var} \rightharpoonup \s{Addr}
&
{\hat{\rho}} \in \sa{Env} &= \syn{Var} \rightharpoonup \sa{Addr}
&& \text{[environments]}
\\
\sigma \in \s{Store} &= \s{Addr} \to \s{Clo}
&
{\hat{\sigma}} \in \sa{Store} &= \sa{Addr} \to \Pow{\sa{Clo}}
&& \text{[stores]}
\\
\var{clo} \in \s{Clo} &= \syn{Lam} \times \s{Env}
&
{\widehat{\var{clo}}} \in \sa{Clo} &= \syn{Lam} \times \sa{Env}
&& \text{[closures]}
\\
\kappa \in \s{Kont} &= \s{Frame}^*
&
{\hat{\kappa}} \in \sa{Kont} &= \sa{Frame}^*
&& \text{[stacks]}
\\
\phi \in \s{Frame} &= \syn{Var} \times \syn{Exp} \times \s{Env}
&
\hat{\phi} \in \sa{Frame} &= \syn{Var} \times \syn{Exp} \times \sa{Env}
&& \text{[stack frames]}
\\
a \in \s{Addr} &\text{ is an infinite set}
&
{\hat{\addr}} \in \sa{Addr} &\text{ is a \emph{finite} set}
&& \text{[addresses]}
\end{align*}
\caption{Configuration-space for CESK machine and pushdown control-flow analysis.}
\label{fig:conc-abs-conf-space}
\end{figure}
\subsection{Concrete semantics and PDCFA}
Next, we define the concrete semantics of ANF and pushdown
control-flow analysis simultaneously.
Specifically, we define program-to-machine injection, atomic
expression evaluation, reachable configurations/control states, the
transition relation and a resource-allocation parameter.
The abstraction functions that connect the concrete
configuration-space to the abstract configuration-space are
straightforward structural abstraction functions.
(Formal definitions of these abstractions can be found in Appendix
~\ref{abstraction-functions}.)
\paragraph{Program injection}
The concrete program-injection function pairs an expression with an
empty environment, store and stack to create the
initial configuration:
\begin{equation*}
c_0 = {\mathcal{I}}(e) = (e, [], [], \vect{})
\text.
\end{equation*}
We define two abstract injection functions---one that produces an
initial abstract control state, and one that produces an initial
abstract configuration.
The control-state injector ${\hat{\mathcal{I}}}_{\hat{\varsigma}} : \syn{Exp}
\to \sa{State}$ pairs an expression with an empty environment and
store to create the initial abstract state:
\begin{equation*}
{\hat{\varsigma}}_0 = {\hat{\mathcal{I}}}_{\hat{\varsigma}}(e) = (e, [], [])
\text.
\end{equation*}
The configuration injector ${\hat{\mathcal{I}}}_{\hat c} : \syn{Exp} \to \sa{Conf}$ tacks on an empty stack:
\begin{equation*}
{\hat c}_0 = {\hat{\mathcal{I}}}_{\hat c}(e) = ({\hat{\mathcal{I}}}_{\hat{\varsigma}}(e), \vect{})
\text.
\end{equation*}
\paragraph{Atomic expression evaluation}
The atomic expression evaluator, ${\mathcal{A}} : \syn{Atom} \times \s{Env}
\times \s{Store} \rightharpoonup \s{Clo}$ (or, ${\hat{\mathcal{A}}} : \syn{Atom} \times
\sa{Env} \times \sa{Store} \to \PowSm{\sa{Clo}}$ in the abstract),
returns the value of an atomic expression in the context of an
environment and a store:
\begin{align*}
{\mathcal{A}}(\ensuremath{\var{lam}},\rho,\sigma) &= (\ensuremath{\var{lam}},\rho)
&
{\hat{\mathcal{A}}}(\ensuremath{\var{lam}},{\hat{\rho}},{\hat{\sigma}}) &= \set{(\ensuremath{\var{lam}},{\hat{\rho}})}
&& \text{[closure creation]}
\\
{\mathcal{A}}(v,\rho,\sigma) &= \sigma(\rho(v))
&
{\hat{\mathcal{A}}}(v,{\hat{\rho}},{\hat{\sigma}}) &= {\hat{\sigma}}({\hat{\rho}}(v))
&& \text{[variable look-up]}
\text.
\end{align*}
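The abstract evaluator $\hat{\mathcal{A}}$ can be sketched directly (a hypothetical encoding: environments are tuples of `(variable, address)` pairs so that closures remain hashable, and stores map addresses to frozensets of closures):

```python
def abs_atomic_eval(ae, env, store):
    """Â: lambda terms close over the current abstract environment,
    and variable look-ups return the whole flow set at the variable's
    address (the empty set if the address is unbound)."""
    if ae[0] == 'lam':
        return frozenset({(ae, env)})                     # [closure creation]
    return store.get(dict(env)[ae[1]], frozenset())       # [variable look-up]
```

The only differences from the concrete $\mathcal{A}$ are the set-valued codomain and the tolerance for unbound addresses.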
\paragraph{Reachable configurations}
The program evaluator ${\mathcal{E}} : \syn{Exp} \to
\Pow{\s{Conf}}$ (or, ${\hat{\mathcal{E}}} : \syn{Exp} \to \PowSm{\sa{Conf}}$ in the
abstract) computes all of the configurations reachable from the
initial configuration:
\begin{align*}
{\mathcal{E}}(e) = \setbuild{ c }{ {\mathcal{I}}(e) \mathrel{\Rightarrow}^* c }
&&
{\hat{\mathcal{E}}}(e) = \setbuild{ {\hat c} }{ {\hat{\mathcal{I}}}_{\hat c}(e) \rightharpoondown^* {\hat c} }
\text.
\end{align*}
Since the stack's depth is unbounded, the number of reachable
configurations in both the concrete \emph{and} abstract semantics
could be infinite.
\paragraph{Transition relation}
The concrete transition, $c \mathrel{\Rightarrow} c'$, and its abstract
counterpart, ${\hat c} \rightharpoondown {\hat c}'$, each have three rules.
The first rule handles tail calls by evaluating the function into a
closure, evaluating the argument into a value and then moving to the
body of the $\lambda$-term{} within the closure:
\begin{gather*}
c = (\overbrace{(\sembr{\appform{f}{\mbox{\sl {\ae}}}}, \rho, \sigma)}^{\varsigma}, \kappa)\
\mathrel{\Rightarrow}\
((e,\rho'',\sigma'),\kappa)
\text{, where }
\\
\begin{align*}
(\sembr{\lamform{v}{e}}, \rho') &= {\mathcal{A}}(f,\rho,\sigma)
&
\rho'' &= \rho'[v \mapsto a]
\\
a &= \mathit{alloc}(v,\varsigma)
&
\sigma' &= \sigma[a \mapsto {\mathcal{A}}(\mbox{\sl {\ae}},\rho,\sigma)]
\end{align*}
\\[.5em]
{\hat c} = (\overbrace{(\sembr{\appform{f}{\mbox{\sl {\ae}}}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, {\hat{\kappa}})\
\rightharpoondown\
((e,{\hat{\rho}}'',{\hat{\sigma}}'), {\hat{\kappa}})
\text{, where }
\\
\begin{align*}
(\sembr{\lamform{v}{e}}, {\hat{\rho}}') &\in {\hat{\mathcal{A}}}(f,{\hat{\rho}},{\hat{\sigma}})
&
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
\\
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
&
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\text.
\end{align*}
\end{gather*}
In the abstract semantics, the tail-call transition is
nondeterministic, since multiple abstract closures may be invoked.
A non-tail call builds a frame, adds it to the stack, and evaluates the call:
\begin{align*}
((\sembr{\letiform{v}{\ensuremath{\var{call}}}{e}}, \rho, \sigma), \kappa)\
&\mathrel{\Rightarrow}\
((\ensuremath{\var{call}},\rho,\sigma), (v,e,\rho) : \kappa)
\\[.5em]
((\sembr{\letiform{v}{\ensuremath{\var{call}}}{e}}, {\hat{\rho}}, {\hat{\sigma}}), {\hat{\kappa}})\
& \rightharpoondown\
((\ensuremath{\var{call}},{\hat{\rho}},{\hat{\sigma}}), (v,e,{\hat{\rho}}) : {\hat{\kappa}})
\text.
\end{align*}
A function return pops the top frame of the stack and uses that frame
to continue the computation after binding the return value to the
frame's variable:
\begin{gather*}
c = (\overbrace{(\mbox{\sl {\ae}}, \rho, \sigma)}^{\varsigma}, (v,e,\rho') : \kappa)\
\mathrel{\Rightarrow}\
((e,\rho'',\sigma'), \kappa)
\text{, where }
\\
\begin{align*}
a &= \mathit{alloc}(v,\varsigma)
&
\rho'' &= \rho'[v \mapsto a]
&
\sigma' &= \sigma[a \mapsto {\mathcal{A}}(\mbox{\sl {\ae}},\rho,\sigma)]
\end{align*}
\\[1em]
{\hat c} = (\overbrace{(\mbox{\sl {\ae}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}},
(v,e,{\hat{\rho}}') : {\hat{\kappa}}'
)\
\rightharpoondown\
((e,{\hat{\rho}}'',{\hat{\sigma}}'), {\hat{\kappa}}')
\text{, where }
\\
\begin{align*}
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
&
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
&
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\text.
\end{align*}
\end{gather*}
\paragraph{Allocation, polyvariance and context-sensitivity}
The address-allocation function is an opaque parameter in both
semantics.
For the concrete semantics, letting addresses be
natural numbers suffices, and then the allocator can use the lowest
unused address: $\s{Addr} = \mathbb{N}$ and
$\mathit{alloc}(v,(e,\rho,\sigma)) = 1 + \max(\var{dom}(\sigma))$.
The opacity is useful because abstract semantics also parameterize
allocation---to provide a knob to tune the polyvariance and
context-sensitivity of the resulting analysis---and allowing the
abstract semantics to choose a particular concrete allocation function
can simplify proofs of soundness.
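To make the three concrete transition rules tangible, here is a minimal runnable sketch of the CESK machine (assuming a tagged-tuple encoding of ANF, e.g.\ `('let', v, call, e)`, `('call', f, ae)`, `('lam', v, e)`, `('var', v)`, and a simplified allocator that returns a fresh address equal to the store's size; all names are ours):

```python
def atomic_eval(ae, env, store):
    """A: closures for lambda terms, store look-up for variables."""
    if ae[0] == 'lam':
        return (ae, env)
    return store[env[ae[1]]]

def inject(e):
    return ((e, {}, {}), ())              # empty env, store, stack

def alloc(store):
    return len(store)                     # concrete: always-fresh addresses

def step(conf):
    """One concrete transition for the three rules of the semantics."""
    (e, env, store), kont = conf
    if e[0] == 'let':                     # non-tail call: push a frame
        _, v, call, body = e
        return ((call, env, store), ((v, body, env),) + kont)
    if e[0] == 'call':                    # tail call
        _, f, ae = e
        (_, x, body), env2 = atomic_eval(f, env, store)
        a = alloc(store)
        return ((body, {**env2, x: a},
                 {**store, a: atomic_eval(ae, env, store)}), kont)
    (v, body, env2), *rest = kont         # return: pop the top frame
    a = alloc(store)
    return ((body, {**env2, v: a},
             {**store, a: atomic_eval(e, env, store)}), tuple(rest))

def run(e):
    conf = inject(e)
    while True:
        (e0, env, store), kont = conf
        if e0[0] in ('var', 'lam') and not kont:    # final state
            return atomic_eval(e0, env, store)
        conf = step(conf)
```

Running the identity function applied to a lambda term, e.g. `run(('let', 'a', ('call', id_lam, y_lam), ('var', 'a')))`, threads the argument through the stack frame and yields a closure over `y_lam`.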
\subsection{Removing the explicit stack}
The reachable subset of the abstract configuration-space for any
program could be infinite.
(There is no bound on the depth of the stack, so there are an infinite
number of stacks and therefore an infinite number of configurations.)
Consequently, the na\"ive exploration of the reachable abstract
configurations used in classical flow analyses may not terminate.
Fortunately, because the abstract semantics describe a pushdown
system, we can construct a finite (computable) description of the
reachable configurations.
Specifically, we can construct a labeled transition system in which
nodes are control states, and labels on edges denote stack change.
We define the legal transitions in any such graph through the transition
relation $(\curvearrowright}%\smile\!\!\!>) \subseteq \sa{State} \times \sa{Frame}_\pm \times
\sa{State}$.
Three rules define this relation: one determining when to push, one
when to pop and the last when to leave the stack unchanged.
The labels on each transition are the stack action for the transition:
\begin{align*}
{\hat{\varsigma}}
\ \aoTons^\epsilon\
{\hat{\varsigma}}'
& \text{ iff }
{\hat c} = ({\hat{\varsigma}}, {\hat{\kappa}})
\ \rightharpoondown\
({\hat{\varsigma}}', {\hat{\kappa}}) = {\hat c}'
\mbox{, for any stack }{\hat{\kappa}}
&& \text{[tail call]}
\\
{\hat{\varsigma}}
\ \aoTons^{\hat{\phi}_+}\
{\hat{\varsigma}}'
& \text{ iff }
{\hat c} = ({\hat{\varsigma}}, {\hat{\kappa}})
\ \rightharpoondown\
({\hat{\varsigma}}', \hat{\phi} : {\hat{\kappa}}) = {\hat c}'
\mbox{, for any stack }{\hat{\kappa}}
&& \text{[non-tail call]}
\\
{\hat{\varsigma}}
\ \aoTons^{\hat{\phi}_-}\
{\hat{\varsigma}}'
& \text{ iff }
{\hat c} = ({\hat{\varsigma}}, \hat{\phi} : {\hat{\kappa}})
\ \rightharpoondown\
({\hat{\varsigma}}', {\hat{\kappa}}) = {\hat c}'
\mbox{, for any stack }{\hat{\kappa}}
&& \text{[return]}
\text.
\end{align*}
From this transition relation, we build a rooted pushdown system
$(Q,\Gamma,\delta,q_0)$ for a program $e$ such that $Q = \sa{State}$,
$\Gamma = \sa{Frame}$,
$\delta = (\curvearrowright}%\smile\!\!\!>)$, and
$q_0 = {\hat{\mathcal{I}}}_{\hat{\varsigma}}(e)$.
The subset of this rooted pushdown system reachable over legal paths
provides a finite description of the original configuration-space.
This finite subset is a \textbf{Dyck state graph}
(DSG)~\cite{local:Earl:2010:PDCFA}.
(A path is legal only if all of the pops match up with pushes; there can be unmatched pushes left over.)
Several techniques can compute the Dyck state graph; for an efficient
technique specific to PDCFA, we defer to our recent
work~\cite{local:Earl:2010:PDCFA} or the algorithm as modified in
Appendix~\ref{sec:algorithm}.
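As a rough illustration of what such a computation involves (and far less efficient than the cited algorithms), the root-reachable control states can be found with a naive worklist that saturates summary edges between matching pushes and pops; here stack actions are encoded as `'eps'`, `('+', g)` and `('-', g)`, an assumption of ours:

```python
def dyck_reachable(delta, q0):
    """Naive pushdown reachability: return the control states reachable
    from q0 over legal paths (pops must match earlier unmatched pushes)."""
    eps = {(q, p) for (q, a, p) in delta if a == 'eps'}   # ε and summary edges
    reach = {q0}
    changed = True
    while changed:
        changed = False
        # transitively close the ε/summary edges
        for (a, b) in list(eps):
            for (c, d) in list(eps):
                if b == c and (a, d) not in eps:
                    eps.add((a, d)); changed = True
        # a push of γ, followed ε*-later by a pop of γ, yields a summary edge
        for (q, a, p) in delta:
            if a != 'eps' and a[0] == '+':
                targets = {p} | {d for (c, d) in eps if c == p}
                for (q2, b, q3) in delta:
                    if q2 in targets and b == ('-', a[1]) and (q, q3) not in eps:
                        eps.add((q, q3)); changed = True
        # unmatched pushes and summary edges extend reachability
        for (q, a, p) in delta:
            if q in reach and a != 'eps' and a[0] == '+' and p not in reach:
                reach.add(p); changed = True
        for (a, b) in eps:
            if a in reach and b not in reach:
                reach.add(b); changed = True
    return reach
```

Note that a state reachable only by popping an unmatched frame (an illegal path) is correctly excluded.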
\section{Pushdown preliminaries} \label{prelim}
In this work, we make extensive use of pushdown systems.
(A pushdown automaton is a specific kind of pushdown system.)
There are many (equivalent) definitions of these machines in the
literature, so we adapt our own definitions from \cite{mattmight:Sipser:2005:Theory}.
Even those familiar with pushdown theory may want to skim this
section to pick up our notation.
\subsection{Stack actions, stack change and stack manipulation}
Stacks are sequences over an alphabet $\Gamma$.
Pushdown systems do much stack manipulation; to represent this more
concisely, we turn stack alphabets into ``action'' sets where each
character represents a stack change: push, pop or no change.
For each character $\gamma$ in a stack alphabet $\Gamma$, the
\defterm{stack-action} set $\Gamma_\pm$ contains a push
($\gamma_{+}$) and a pop ($\gamma_{-}$) character and a
no-stack-change indicator ($\epsilon$):
\begin{align*}
g \in \Gamma_\pm &\mathrel{::=} \epsilon && \text{[stack unchanged]}
\\
&\;\;\mathrel{|}\;\; \gamma_{+} \;\;\;\text{ for each } \gamma \in \Gamma && \text{[pushed $\gamma$]}
\\
&\;\;\mathrel{|}\;\; \gamma_{-} \;\;\;\text{ for each } \gamma \in \Gamma && \text{[popped $\gamma$]}
\text.
\end{align*}
Given a string of stack actions, we can compact it into a minimal
string describing net stack change.
We do so through the operator $\fnet{\cdot} : \Gamma_\pm^* \to
\Gamma_\pm^*$, which cancels out opposing adjacent push-pop stack
actions:
\(
\fnet{\vec{g} \; \gamma_+\gamma_- \; \vecp{g}} =
\fnet{\vec{g} \; \vecp{g}}
\)
and
\(
\fnet{\vec{g} \; \epsilon \; \vecp{g}} =
\fnet{\vec{g} \; \vecp{g}}
\),
so that
\(\fnet{\vec{g}} = \vec{g}\),
if there are no cancellations to be made in the string $\vec{g}$.
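The net operator has a direct stack-based implementation (a sketch under our assumed encoding of stack actions as `'eps'`, `('+', g)` and `('-', g)`):

```python
def net(actions):
    """⌊·⌋: cancel ε actions and adjacent matching push/pop pairs,
    leaving the minimal description of the net stack change."""
    out = []
    for g in actions:
        if g == 'eps':
            continue
        if out and out[-1][0] == '+' and g[0] == '-' and out[-1][1] == g[1]:
            out.pop()              # γ₊ immediately followed by γ₋ cancels
        else:
            out.append(g)
    return out
```

Because cancellation is performed against the top of `out`, newly adjacent pairs created by an earlier cancellation (as in $A_+ B_+ B_- A_-$) are cancelled as well, while a pop that does not match the preceding push is left in place.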
\subsection{Pushdown systems}
A \defterm{pushdown system} is a triple
$M = (Q,\Gamma,\delta)$ where
$Q$ is a finite set of control states;
$\Gamma$ is a stack alphabet; and
$\delta \subseteq
Q \times \Gamma_\pm \times Q$ is a transition relation.
We use $\mathbb{PDS}$ to denote the class of all pushdown systems.
Unlike the more widely known pushdown automaton, a pushdown system
\emph{does not recognize a language}.
\\
\noindent
For the following definitions, let $M = (Q,\Gamma,\delta)$.
The \defterm{configurations} of this
machine are pairs over control states and
stacks:
\(\s{Configs}(M) = Q \times \Gamma^*\).
The labeled \defterm{transition relation} $(\PDTrans_{M}) \subseteq \s{Configs}(M) \times \Gamma_\pm \times
\s{Configs}(M)$ determines whether one configuration may transition to another while performing the given stack action:
\begin{align*}
(q, \vec{\gamma})
\mathrel{\PDTrans_M^\epsilon}
(q',\vec{\gamma})
& \text{ iff }
(q,\epsilon,q')
\in \delta
&& \text{[no change]}
\\
(q, \gamma' : \vec{\gamma})
\mathrel{\PDTrans_M^{\gamma'_{-}}}
(q',\vec{\gamma})
& \text{ iff }
(q,\gamma'_{-},q')
\in \delta
&& \text{[pop]}
\\
(q, \vec{\gamma})
\mathrel{\PDTrans_{M}^{\gamma'_{+}}}
(q',\gamma' : \vec{\gamma})
& \text{ iff }
(q,\gamma'_{+},q')
\in \delta
&& \text{[push]}
\text.
\end{align*}
Additionally, we define:
\begin{align*}
c \mathrel{\PDTrans_{M}} c' &\text{ iff }
c \mathrel{\PDTrans_{M}^{g}} c'
\text{ for some stack action } g\text,
\\
c \mathrel{\PDTrans_M^{\vec{g}}} c'
&\text{ iff }
c = c_0
\mathrel{\PDTrans_M^{g_1}}
c_1
\cdots
c_{n-1}
\mathrel{\PDTrans_M^{g_n}}
c_n = c'
\text{ for some $\vec{g} = g_1 \ldots g_n$,}
\\
c \mathrel{\PDTrans_M^{*}} c'
&\text{ iff }
c \mathrel{\PDTrans_M^{\vec{g}}} c'
\text{ for some }
\vec{g}
\text.
\end{align*}
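The labeled transition relation translates almost verbatim into code (a hypothetical sketch: `delta` is a set of `(q, action, q')` triples with actions `'eps'`, `('+', g)` or `('-', g)`, and a configuration is a `(q, stack)` pair whose stack top is the head of the tuple):

```python
def pds_step(delta, config):
    """All one-step successors of a pushdown-system configuration."""
    q, stack = config
    succs = set()
    for (p, action, p2) in delta:
        if p != q:
            continue
        if action == 'eps':
            succs.add((p2, stack))                          # [no change]
        elif action[0] == '+':
            succs.add((p2, (action[1],) + stack))           # [push]
        elif action[0] == '-' and stack and stack[0] == action[1]:
            succs.add((p2, stack[1:]))                      # [pop]
    return succs
```

Note that a pop transition only fires when the required character is actually on top of the stack, mirroring the side condition in the [pop] rule.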
\subsection{Rooted pushdown systems}
A \defterm{rooted pushdown system} is a quadruple
$(Q,\Gamma,\delta,q_0)$ in which
$(Q,\Gamma,\delta)$ is a pushdown system and
$q_0 \in Q$ is an initial (root) state.
$\mathbb{RPDS}$ is the class of all rooted pushdown
systems.
For a rooted pushdown system $M =
(Q,\Gamma,\delta,q_0)$, we define
the \defterm{root-reachable transition relation}:
\begin{equation*}
c \RPDTrans_M^{g} c' \text{ iff }
(q_0,\vect{})
\mathrel{\PDTrans_M^*}
c
\text{ and }
c
\mathrel{\PDTrans_M^g}
c'
\text.
\end{equation*}
In other words, the root-reachable transition relation admits only
those transitions whose source configuration is itself reachable from
the root control state.
The root-reachable relation is overloaded to operate on
control states:
\begin{equation*}
q
\mathrel{\RPDTrans_M^g}
q' \text{ iff }
(q,\vec{\gamma})
\mathrel{\RPDTrans_M^g}
(q',\vecp{\gamma})
\text{ for some stacks }
\vec{\gamma},
\vecp{\gamma}
\text.
\end{equation*}
\section{Proofs}
\label{proofs}
\begin{proof}[of Theorem~\ref{soundness}]
Without loss of generality, assume that the path from the initial configuration $c_0$ to $c$ has length $n$, so that the path to the new configuration $c'$ has length $n+1$.
The inductive hypothesis is that the theorem holds for all paths of length less than or equal to $n$.
So far the scenario can be diagrammed as below:
\[
\xymatrix{
c_0 \ar@{=>}[r] \ar[d]_\alpha
&
\dots \ar@{=>}[r]
&
c \ar@{=>}[r] \ar[d]_\alpha
&
c' \ar[d]_\alpha
\\
{\hat c}_0 \ar@2{~>}[r]
&
\dots \ar@2{~>}[r]
&
{\hat c} \ar@2{~>}[r]^{g}
&
{\hat c}'
}
\]
We now have three cases depending on what $g$ is:
\begin{itemize}
\item ${\hat c} \approx\!\!>^\epsilon {\hat c}' = ({\hat{\varsigma}}'', \widehat{ss}')$
\\
No change is made to the stack in the concrete, thus the stacks are equal:
%
$\kappa = \kappa'$.
%
Likewise, no change is made in the abstract, thus:
%
$\widehat{ss} = \widehat{ss}'$.
%
Since the first stack is subsumed by the first stack summary,
the second stack must be subsumed by the second stack summary:
$\alpha_{S}(\kappa') \sqsubseteq_{S} \widehat{ss}'$.
\item ${\hat c} \approx\!\!>^{\hat{\phi}_+} {\hat c}' = ({\hat{\varsigma}}'', \widehat{ss}')$
\\
A frame $\phi$, such that $\alpha(\phi) \sqsubseteq \hat{\phi}$, must be pushed onto the stack in the concrete, so the stacks are related as follows:
%
$\phi : \kappa = \kappa'$.
%
Likewise, the stack summaries are so related:
%
$\textbf{push}(\hat{\phi}, \widehat{ss}) \sqsubseteq_{S} \widehat{ss}'$.
%
By the constraint on all push operations:
$\alpha_{S}(\phi : \kappa) \sqsubseteq_{S} \textbf{push}(\hat{\phi}, \alpha_{S}(\kappa))$.
%
We can make the following replacements:
$\alpha_{S}(\kappa') \sqsubseteq_{S} \textbf{push}(\hat{\phi}, \widehat{ss})$.
%
By the definition of the transition relation ($\approx\!\!>$):
$\widehat{ss}' = \textbf{push}(\hat{\phi}, \widehat{ss})$.
%
Finally we have:
$\alpha_{S}(\kappa') \sqsubseteq_{S} \widehat{ss}'$.
\item ${\hat c} \approx\!\!>^{\hat{\phi}_-} {\hat c}' = ({\hat{\varsigma}}'', \widehat{ss}')$
\\
A frame $\phi$, such that $\alpha(\phi) \sqsubseteq \hat{\phi}$, must be popped from the stack in the concrete, so the stacks are related as follows:
%
$\kappa = \phi : \kappa'$.
%
We know that configuration $c$ is reachable from the initial configuration, which has an empty stack.
%
Since the transition from the configuration pops off a frame $\phi$, it does not currently have an empty stack.
%
So there exists a path,
$c_0 \mathrel{\Rightarrow}^* c_1 \mathrel{\Rightarrow}^{\phi_+} c_2 \mathrel{\Rightarrow}^{\vec{g}} c$,
such that the net of the stack actions after the push, $\fnet{\vec{g}}$, is empty.
%
Let
$c_1 = (\varsigma_1, \kappa_1)$.
%
Since the net of the stack actions is empty, the stack at the configuration before the last previously unmatched push, $\kappa_1$, is identical to the stack after the current pop, $\kappa'$:
$\kappa_1 = \kappa'$.
By the inductive hypothesis, there is a path through the Dyck configuration graph that parallels and mimics the path above.
%
Thus, there is a configuration ${\hat c}_1 = ({\hat{\varsigma}}_1, \widehat{ss}_1)$ such that $\alpha(c_1) \sqsubseteq {\hat c}_1$.
%
Also by this path, there is a sub-path from the configuration ${\hat c}_1$ to the new configuration ${\hat c}'$ that makes no changes to the stack.
%
Therefore, there is a shortcut edge between these two configurations ${\hat c}_1$ and ${\hat c}'$.
%
The proof of the first case for shortcut edges works for no-op transitions.
%
Thus the stack summaries at these two configurations are the same:
$\widehat{ss}_1 = \widehat{ss}'$.
The current situation is as follows:
\[
\xymatrix{
c_0 \ar@{=>}[r]^{\ast} \ar[d]_\alpha
&
c_1 \ar@{=>}[r]^{\phi_+} \ar[d]_\alpha \ar@/^2pc/[rrr]^\epsilon
&
c_2 \ar@{=>}[r]^{\vec{g}} \ar[d]_\alpha
&
c \ar@{=>}[r]^{\phi_-} \ar[d]_\alpha
&
c' \ar[d]_\alpha
\\
{\hat c}_0 \ar@2{~>}[r]^{\ast}
&
{\hat c}_1 \ar@2{~>}[r]^{\hat{\phi}_+} \ar@/_2pc/[rrr]^\epsilon
&
{\hat c}_2 \ar@2{~>}[r]^{\vec{\hat{g}}}
&
{\hat c} \ar@2{~>}[r]^{\hat{\phi}_-}
&
{\hat c}'
}
\]
Since the configuration before the last previously unmatched push, $c_1$ is subsumed by its equivalent in the Dyck configuration graph, ${\hat c}_1$, its stack is subsumed by the stack summary of configuration ${\hat c}_1$:
$\alpha_{S}(\kappa_1) \sqsubseteq_{S} \widehat{ss}_1$.
%
Since this stack and this stack summary are identical to the stack $\kappa'$ and the stack summary $\widehat{ss}'$ respectively, we have:
$\alpha_{S}(\kappa') \sqsubseteq_{S} \widehat{ss}'$.
\qed
\end{itemize}
\end{proof}
\section{Related Work} \label{related}
Stack summarization, the central contribution of this paper, overcomes
the apparent incompatibilities of two orthogonal anti-merging
techniques designed to improve precision: abstract garbage
collection~\cite{mattmight:Might:2006:GammaCFA} and pushdown
control-flow
analysis~\cite{local:Earl:2010:PDCFA,mattmight:Vardoulakis:2010:CFA2}.
As such, this work directly builds upon both techniques, as well as
classical control-flow analysis~\cite{mattmight:Shivers:1991:CFA},
abstract machines~\cite{mattmight:Felleisen:1987:CESK}, and abstract
interpretation~\cite{mattmight:Cousot:1977:AI,mattmight:Cousot:1979:Galois}
in general.
Abstract garbage collection~\cite{mattmight:Might:2006:GammaCFA,local:VanHorn:2010:Abstract} curbs
argument-merging, but it has not yet been applied to anything
beyond classical control-flow analysis.
Vardoulakis and Shivers's CFA2~\cite{mattmight:Vardoulakis:2010:CFA2}
is the precursor to the pushdown control-flow
analysis~\cite{local:Earl:2010:PDCFA} presented in
Section~\ref{pdcfa}.
CFA2 is a table-driven summarization algorithm that exploits the
balanced nature of calls and returns to improve return-flow precision
in a control-flow analysis.
While CFA2 uses a concept called ``summarization,'' it is a
summarization of execution paths of the analysis, roughly equivalent
to Dyck state graphs rather than our stack summaries.
In terms of recovering precision, pushdown control-flow
analysis~\cite{local:Earl:2010:PDCFA} is the dual to abstract garbage
collection:
it focuses on the global interactions of configurations via
transitions to precisely match push-pop/call-return, thereby
eliminating all return-flow merging.
However, pushdown control-flow analysis does nothing to improve
argument merging.
In the context of first-order languages, pushdown approaches to
analysis are well-established.
Reps \emph{et al.}~\cite{mattmight:Reps:1995:Precise} use a
summarization algorithm to compute a Dyck-state-graph-like solution.
Debray and Proebsting~\cite{dvanhorn:Debray1997Interprocedural}
develop an analysis with perfect return-flow in the presence of tail
calls.
For higher-order languages, finite-state approaches
\emph{approximating} the pushdown precision of return-flow have been
explored by Midtgaard and Jensen~\cite{mattmight:Midtgaard:2009:CFA}
and Van Horn and Might~\cite{local:VanHorn:2010:Abstract}.
Our work extends the pushdown approach to higher-order languages with
tail calls, and produces stack summaries to enable abstract garbage
collection.
\section{SSCFA: Stack-summarizing control-flow analysis} \label{sscfa}
In the last section, we defined stack summaries and motivated their
implementation informally.
In this section, we formally define the configuration-space and an
abstract pushdown semantics for stack-summarizing control-flow analysis
(SSCFA).
Appendix~\ref{sec:algorithm} describes a formal algorithm for creating
a finite model of the reachable state-space for SSCFA.
\subsection{Abstract configuration-space}
The only change between the configuration-spaces for the pushdown
control-flow analysis and the stack-summarizing control-flow analysis
is that configurations contain stack summaries instead of stacks:
\begin{align*}
{\hat c} \in \sa{Conf} &= \sa{State} \times \sa{Summary} && \text{[configurations]}
\text.
\end{align*}
\subsection{Abstract pushdown semantics}
The abstract transition relation for SSCFA is similar to the
transition relation for PDCFA.
The transition relation, $(\approx\!\!>) \subseteq \sa{Conf} \times
\sa{Frame}_\pm \times \sa{Conf}$, has three rules.
With respect to a program $e$, we can define a rooted pushdown
system, $M_{\rm SS} = (\sa{Conf},\sa{Frame},(\approx\!\!>),{\hat c}_0)$, where
${\hat c}_0 = (e,[],[],\bot_{SS})$.
A tail call leaves the stack unchanged:
\begin{gather*}
(\overbrace{(\sembr{\appform{f}{\mbox{\sl {\ae}}}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, \widehat{ss})\
\aoToss^{\epsilon}\
((e,{\hat{\rho}}'',{\hat{\sigma}}'), \widehat{ss})
\text{, where }
\\
\begin{align*}
(\sembr{\lamform{v}{e}}, {\hat{\rho}}') &\in {\hat{\mathcal{A}}}(f,{\hat{\rho}},{\hat{\sigma}})
&
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
\\
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
&
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\text.
\end{align*}
\end{gather*}
A non-tail call builds a frame, adds it to the summary, and evaluates the call:
\begin{gather*}
((\sembr{\letiform{v}{\ensuremath{\var{call}}}{e}}, {\hat{\rho}}, {\hat{\sigma}}), \widehat{ss})\
\aoToss^{\hat{\phi}_+}\
((\ensuremath{\var{call}},{\hat{\rho}},{\hat{\sigma}}), \widehat{ss}')
\text{, where }
\\
\begin{align*}
\hat{\phi} & = (v,e,{\hat{\rho}})
&
\widehat{ss}' & = \textbf{push}(\hat{\phi}, \widehat{ss})
\text.
\end{align*}
\end{gather*}
A function return pops the top frame off the stack.
It also restores older stack summaries.
Thus, the algorithm must know all of the abstract configurations on
paths from the initial configuration that can reach the current
configuration on a path whose net stack change is the frame to be
popped; we find these abstract configurations using the $\mathbf{pred} :
\sa{Conf} \times \sa{Frame} \to \Pow{\sa{Conf}}$ function:
\begin{equation*}
\mathbf{pred}({\hat c}, \hat{\phi}) =
\setbuild{ {\hat c}' }{ {\hat c} \mathrel{\RPDTrans_{M_{\rm SS}} ^{\vec{\phi'}}} {\hat c}' \text{ and }
\fnet{\vec{\phi'}} = \hat{\phi}_{+} }
\text.
\end{equation*}
The transition rule for pop is then straightforward:
\begin{gather*}
(\overbrace{(\mbox{\sl {\ae}}, {\hat{\rho}}, {\hat{\sigma}})}^{{\hat{\varsigma}}}, \widehat{ss})
\mathrel{\aoToss^{\hat{\phi}_-}}
((e,{\hat{\rho}}'',{\hat{\sigma}}'), \widehat{ss}')
\text{, where }
\\
\begin{align*}
(\_,\widehat{ss}') &\in \mathbf{pred}({\hat c},\hat{\phi})
&
{\hat{\rho}}'' &= {\hat{\rho}}'[v \mapsto {\hat{\addr}}]
\\
(v,e,{\hat{\rho}}') &= \hat{\phi}
&
{\hat{\sigma}}' &= {\hat{\sigma}} \sqcup [{\hat{\addr}} \mapsto {\hat{\mathcal{A}}}(\mbox{\sl {\ae}},{\hat{\rho}},{\hat{\sigma}})]
\\
{\hat{\addr}} &= {\widehat{alloc}}(v,{\hat{\varsigma}})
\text.
\end{align*}
\end{gather*}
\subsection{Soundness of stack-summarizing control-flow analysis}
The soundness of Dyck state graphs has been proved
in~\cite{local:Earl:2010:PDCFA}.
However, the soundness of stack summaries is established below for the first time:
\begin{theorem} \label{soundness}
If $\alpha(c) \sqsubseteq {\hat c}$, $c \mathrel{\Rightarrow} c'$
and $c_0 \mathrel{\Rightarrow}^* c$,
then there exists ${\hat c}' \in \sa{Conf}$ such that
$\alpha(c') \sqsubseteq {\hat c}'$ and ${\hat c} \approx\!\!> {\hat c}'$.
\end{theorem}
\begin{proof}[sketch]
Let $c = (\varsigma, \kappa)$,
$c' = (\varsigma', \kappa')$ and
${\hat c} = ({\hat{\varsigma}}, \widehat{ss})$,
such that
$\alpha(c) \sqsubseteq {\hat c}$.
We know by theorems in~\cite{local:Earl:2010:PDCFA} that there exists a state
${\hat{\varsigma}}'' \in \sa{State}$
such that
$\alpha(\varsigma') \sqsubseteq {\hat{\varsigma}}''$.
We also know that the first stack is subsumed by the first stack summary:
$\alpha_{S}(\kappa) \sqsubseteq_{S} \widehat{ss}$.
So we must prove that there exists a stack summary
$\widehat{ss}' \in \sa{Summary}$
such that
$\alpha_{S}(\kappa') \sqsubseteq_{S} \widehat{ss}'$
and
${\hat c} \approx\!\!> ({\hat{\varsigma}}'', \widehat{ss}')$.
The proof continues with a case-wise analysis on the type of the transition as well as strong induction based upon the length of the path to configuration $c$. See Appendix~\ref{proofs} for details.
\end{proof}
\section{SSCFA with Abstract Garbage Collection}
\label{ssgammacfa}
Having constructed a framework for iteratively synthesizing stack
summaries during computation of a finite model for a pushdown system,
we can integrate abstract garbage collection.
In this section, we assume the ``reachable addresses'' stack summary
is in use.
We term this analysis SS$\Upgamma$CFA, the product of stack-summarizing
control-flow analysis and abstract garbage collection (also called
$\Upgamma$CFA).
SS$\Upgamma$CFA is a ``best of both worlds'' combination: it has all
the argument precision advantages of abstract garbage collection
and all the return-flow precision advantages of PDCFA.
As with classical abstract garbage collection, we must define what
makes an address or value reachable.
Essentially, an object is \emph{reachable} if it may be used either in
the current configuration or in a subsequent configuration.
If an address is reachable, all the values bound to it are also
reachable.
Values (closures and frames) reference addresses through their
environments, which are reachable as well.
Because values touch addresses and addresses touch values, finding
reachable addresses and values amounts to a bipartite graph search.
The concrete values of unreachable addresses will never be used again
during the course of the computation; thus, it is safe to set the
values of these addresses to bottom within the store.
The reachability exploration of the store begins with the addresses
that the current configuration ${\hat c}$ can immediately reach, called
the root set, $\mathit{Root}({\hat c})$.
The root function, $\mathit{Root} : \sa{Conf} \to \Pow{\sa{Addr}}$,
returns the root set for a configuration:
\[
\mathit{Root}((e,{\hat{\rho}},{\hat{\sigma}}),\widehat{ss}) =
\widehat{ss} \cup \setbuild{{\hat{\rho}}(v)}{v \in \mathit{free}(e)}
\text,
\]
where the function $\mathit{free} : \syn{Exp} \to \Pow{\syn{Var}}$ returns the free variables in the given expression.
The root set contains all the addresses bound to free variables in the expression, $e$, as well as the addresses in the reachable address summary.
The touch function, $\mathcal{T}_c : \sa{Clo} \to \Pow{\sa{Addr}}$, finds addresses referenced in closures:
\(
\mathcal{T}_c(\ensuremath{\var{lam}},{\hat{\rho}}) =
\setbuild{{\hat{\rho}}(v)}{v \in \mathit{free}(\ensuremath{\var{lam}})}
\).
The touching relation, $(\touchrel{{\hat{\sigma}}}) \subseteq \sa{Addr} \times \sa{Addr}$, links addresses directly to addresses:
\[
{\hat{\addr}} \touchrel{{\hat{\sigma}}} {\hat{\addr}}' \text{ iff }
{\hat{\addr}}' \in \mathcal{T}_c({\widehat{\var{val}}}) \text{ and }
{\widehat{\var{val}}} \in {\hat{\sigma}}({\hat{\addr}})
\text.
\]
With this relation, finding all reachable addresses of a configuration
${\hat c}$ becomes the transitive closure of the touching relation:
\begin{equation*}
\mathcal{R}({\hat c}) = \setbuild{{\hat{\addr}}'}{{\hat{\addr}} \touchrel{{\hat{\sigma}}}^* {\hat{\addr}}' \text{ and }
{\hat{\addr}} \in \mathit{Root}({\hat c})}
\text.
\end{equation*}
Finally, we define the abstract garbage collector itself, $AGC :
\sa{Conf} \to \sa{Conf}$, which simply restricts the store to the
reachable addresses:\footnote{
We define function restriction, $f|X$, so that $f|X = \lambda x . \cif{x \in X}{f(x)}{\bot}$.
}
\begin{equation*}
AGC({\hat c}) = (e,{\hat{\rho}},{\hat{\sigma}}\vert\mathcal{R}({\hat c}),\widehat{ss})
\text{, where } {\hat c} = (e,{\hat{\rho}},{\hat{\sigma}},\widehat{ss})\text.
\end{equation*}
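As a concrete sketch (the encoding and every name below are our own illustration, not part of the formalism), the collector amounts to a graph search from the root set followed by a restriction of the store:

```python
# Illustrative sketch (hypothetical encoding): the store maps abstract
# addresses to sets of closures; a closure is (free_vars, env_pairs), and it
# touches the addresses its environment binds for its free variables.

def touch(clo):
    free_vars, env_pairs = clo
    env = dict(env_pairs)
    return {env[v] for v in free_vars}

def reachable(roots, store):
    # Transitive closure of the touching relation, starting from the roots.
    seen, work = set(roots), list(roots)
    while work:
        addr = work.pop()
        for clo in store.get(addr, set()):
            for a in touch(clo):
                if a not in seen:
                    seen.add(a)
                    work.append(a)
    return seen

def agc(expr_free, env, store, summary):
    # Root set: the summary's addresses plus those of the expression's
    # free variables; the store is then restricted to the live addresses.
    roots = set(summary) | {env[v] for v in expr_free}
    live = reachable(roots, store)
    return {a: vals for a, vals in store.items() if a in live}

store = {
    "a1": {(("x",), (("x", "a2"),))},  # a closure at a1 touches a2
    "a2": set(),
    "a3": {((), ())},                  # unreachable: will be collected
}
collected = agc(expr_free={"y"}, env={"y": "a1"}, store=store, summary=set())
print(sorted(collected))  # -> ['a1', 'a2']
```

Because values touch addresses and addresses touch values, the search alternates between the two, exactly the bipartite search described above.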
The abstract transition relation for SS$\Upgamma$CFA, $(\aToss_{AGC}) \subseteq
\sa{Conf} \times \sa{Conf}$, needed for stack-summarizing control-flow
analysis with abstract garbage collection extends the abstract
transition relation to collect before each transition:
\begin{equation*}
{\hat c} \aToss_{AGC} {\hat c}'
\text{ iff }
AGC({\hat c}) \approx\!\!> {\hat c}'
\text.
\end{equation*}
The soundness theorems and their proofs for classical abstract garbage
collection are in Chapter 6 of \cite{mattmight:Might:2007:Dissertation};
they adapt readily to our pushdown framework.
\section{Adding abstract garbage collection to PDCFA} \label{abstract-gc}
In the classical version of abstract garbage collection, the abstract
interpretation ``collects'' each configuration before each transition~\cite{mattmight:Might:2006:GammaCFA}.
To collect a state, it explores the state to find the
reachable abstract addresses, and then it discards unreachable
addresses from the store, \emph{i.e.}, it maps them to the empty set.
Suppose we were to add abstract garbage collection to PDCFA.
At first, we might try collecting a control state prior to adding an
edge.
But this approach doesn't work: to know the reachable addresses of a
configuration, the analysis must have access to the stack paired with
the control state.
Unfortunately, the stack has been distributed across the
Dyck state graph being accreted during the analysis.
To determine the possible stacks paired with a control state, the
analysis must consider all legal paths to that control state.
Considering all possible paths to a control state is expensive, and
troublesome in any event, since there could be an infinite number of
such paths.
A better solution would allow the analysis to iteratively compute
properties of stacks---like reachable addresses---and store these
summaries at individual control states.
We call this solution stack summaries.
\section{Stack summaries} \label{stack-summaries} \label{ssintro}
As PDCFA constructs a DSG, it accretes reachable control states one
edge at a time.
Each time it adds a labeled edge, it is abstractly executing the
transition relation $(\rightharpoondown)$.
To perform abstract garbage collection before each transition, the
analysis must know the reachable addresses for all configurations
described by paths to that state.
To accomplish this, we add a stack summary to each control state.
A stack summary is a client-defined finite abstraction of a stack.
To perform abstract garbage collection, we will instantiate this
summary to be the reachable addresses in the stack.
A stack summary describes some property of the stack, \emph{e.g.}, the topmost
frame, the reachable addresses, or the privilege level of the current
context.
With respect to our analysis, the set $\sa{Summary}$ is a parameter
containing all stack summaries, and we denote an individual stack
summary as $\widehat{ss}$.
A summarizing function, $\alpha_{S}: \s{Stack} \to \sa{Summary}$,
walks a stack to compute a summary.
Every stack summary regime also requires a push function parameter,
$\textbf{push} : \sa{Frame} \times \sa{Summary} \to \sa{Summary}$, which
computes the abstract effect of pushing a frame on a summary.
There are three requirements on stack summaries:
\begin{enumerate}
\item Summaries must be able to represent all possible stacks.
\item The set of summaries must be finite.
\item Summaries must form a lattice under $\sqsubseteq_{S}$.
\end{enumerate}
In addition, the push function must faithfully simulate
concrete push; formally:
\begin{equation*}
\text{ if }
\alpha(\phi) \sqsubseteq \hat{\phi}
\text{ and }
\alpha_{S}(\kappa) \sqsubseteq_{S} \widehat{ss}
\text{, then }
\alpha_{S}(\phi : \kappa) \sqsubseteq_{S} \textbf{push}(\hat{\phi},\widehat{ss})
\text.
\end{equation*}
We can efficiently percolate stack summaries through the construction
of a Dyck state graph, so that the algorithm never has to reconsider all paths
to a control state.
In fact, the algorithm never considers an entire path all at once; it
propagates summaries edge-by-edge.
To extend the Dyck-state-graph-construction algorithm, we need to
consider three cases: what is the effect of adding a push; what is the
effect of adding a pop; and what is the effect of a stack no-op?
In this section, we describe the core of the algorithm informally
but with sufficient detail to motivate the high-level idea.
In the next section, we'll describe the system-space of the algorithm
formally, and Appendix~\ref{sec:algorithm} contains the algorithm for
computing a Dyck state graph with stack summaries.
\subsection{Propagating stack summaries during DSG construction}
The propagation of stack summaries across no-op and pop edges during
DSG construction is agnostic of the particular stack summary in use.
Propagation across push edges is fully factored into the push function
parameter, $\textbf{push}$.
\paragraph{The summarizing no-op operation}
When the Dyck state graph construction algorithm needs to propagate
summaries across an edge that does not change the stack, the new
stack summary is identical to the old summary:
when there is no stack change, there is no change to stack summaries.
\paragraph{The summarizing pop operation}
The pop operation, like the no-op operation, can be handled without
knowledge of the particular stack summary in use.
In PDCFA, every pop transition has at least one matching push
transition.
The stack summaries after a pop are those stack summaries that can
reach the new state with no net stack change.
These states are easy to find, because the DSG construction algorithm
maintains an $\epsilon$-closure graph in addition to the control-state
transition graph.
Edges in the $\epsilon$-closure graph connect states reachable
through no net stack change.
Diagrammatically, we know that the stack summary at state
$\varsigma_4$ in the following is the same as the stack summary for
$\varsigma_1$:
\begin{equation*}
\xymatrix{
\varsigma_1 \ar[dr]^{\phi_+}
\ar[rrr]^{\epsilon}
&
&
&
\varsigma_4
\\
&
\varsigma_2 \ar[r]^{\epsilon}
&
\varsigma_3 \ar[ur]^{\phi_-}
}
\end{equation*}
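The diagram can be mimicked in a few lines. In this sketch (state and frame names are hypothetical), the summary at $\varsigma_4$ is copied from $\varsigma_1$ because the matching push at $\varsigma_1$ reaches the pop source $\varsigma_3$ through the $\epsilon$-closure graph:

```python
# Hypothetical mini-DSG mirroring the diagram: s1 --phi+--> s2 --eps--> s3
# --phi---> s4, with summaries represented as frozensets of frame names.

push_edges = {("s1", "s2"): "phi"}
eps_edges = {("s2", "s3")}
pop_edges = {("s3", "s4"): "phi"}

def eps_closure(state, eps):
    # States reachable from `state` with no net stack change.
    seen, work = {state}, [state]
    while work:
        s = work.pop()
        for (a, b) in eps:
            if a == s and b not in seen:
                seen.add(b)
                work.append(b)
    return seen

summary = {"s1": frozenset({"phi0"})}       # summary before the push
summary["s2"] = summary["s1"] | {"phi"}     # push adds the new frame

# Pop: the summary at the pop target is the summary at each state whose
# matching push leads, via the eps-closure, to the pop source.
for (pop_src, pop_dst), frame in pop_edges.items():
    for (push_src, push_dst), pushed in push_edges.items():
        if pushed == frame and pop_src in eps_closure(push_dst, eps_edges):
            summary[pop_dst] = summary[push_src]

print(summary["s4"])  # identical to the summary at s1
```

The real algorithm propagates these facts incrementally as edges are added, rather than recomputing closures from scratch.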
\paragraph{The summarizing push operation}
Pushing a frame onto a stack makes a local change to the stack.
However, pushing a frame onto a stack may nontrivially change the
summary.
The $\textbf{push}$ operation must account for this: when a push edge is
introduced, $\textbf{push}$ determines the subsequent summary.
\subsection{Example: A frame-set summary}
The frame-set summary is both general and useful.
The frame-set summary is the set of (abstract) frames currently in the
stack:
\begin{equation*}
\sa{Summary_{fs}} = \Pow{\sa{Frame}}
\text.
\end{equation*}
This summary ignores order and repetition in favor of finite size and a simple (subset-based) lattice:
\begin{equation*}
\widehat{ss} \wts^{fs} \widehat{ss}'
\text{ iff }
\widehat{ss} \subseteq \widehat{ss}'
\text.
\end{equation*}
The summarization function for the frame set summary,
$\ssum^{fs} : \sa{Stack} \to \sa{Summary_{fs}}$,
abstracts each frame and keeps it in a set:
\begin{equation*}
\ssum^{fs}\vect{\phi_1,\ldots,\phi_n} =
\set{\alpha_{Frame}(\phi_1),\ldots,\alpha_{Frame}(\phi_n)}
\text.
\end{equation*}
The push operation $\apush^{fs} : \sa{Frame} \times \sa{Summary_{fs}} \to \sa{Summary_{fs}}$ simply adds the new frame to the set:
\(
\apush^{fs}(\hat{\phi}, \widehat{ss})
=
\{\hat{\phi}\}
\cup
\widehat{ss}
\).
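A minimal executable sketch of the frame-set summary (frame names here are hypothetical tokens) also lets us check the push-faithfulness condition above on an example:

```python
# Sketch of the frame-set summary (abstract frames are hashable tokens here).

def summarize_fs(stack):          # alpha_S^fs: collect the frames into a set
    return frozenset(stack)

def push_fs(frame, summary):      # abstract push: add the new frame
    return summary | {frame}

def leq_fs(s1, s2):               # lattice order: subset inclusion
    return s1 <= s2

# Push-faithfulness on an example: summarizing after a concrete push is
# below (here, equal to) pushing onto the summary of the original stack.
stack = ["phi1", "phi2"]
assert summarize_fs(["phi3"] + stack) == push_fs("phi3", summarize_fs(stack))
assert leq_fs(summarize_fs(stack), push_fs("phi3", summarize_fs(stack)))
```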
\subsection{Example: A reachable-addresses summary}
\label{sec:reachable-address-summary}
The reachable-addresses summary is the set of all the addresses
directly touchable by a frame on the stack.
We formally define \emph{touch} through the touch function, $\mathcal{T}_f
: \sa{Frame} \to \Pow{\sa{Addr}}$, which returns the addresses within the
given frame:
\begin{equation*}
\mathcal{T}_f(v,e,{\hat{\rho}}) =
\setbuild{{\hat{\rho}}(v')}{v' \in \mathit{free}(e) - \{v\}}
\text.
\end{equation*}
The summary-space is the set of addresses:
\begin{equation*}
\sa{Summary_{ra}} = \Pow{\sa{Addr}}
\text.
\end{equation*}
The order on summaries is subset inclusion:
\begin{equation*}
\widehat{ss} \wts^{ra} \widehat{ss}'
\text{ iff }
\widehat{ss} \subseteq \widehat{ss}'
\text.
\end{equation*}
The reachable address summarization function,
$\ssum^{ra} : \sa{Stack} \to \sa{Summary_{ra}}$,
finds the reachable addresses of each abstracted frame and keeps them in a set:
\begin{equation*}
\ssum^{ra}\vect{\phi_1,\ldots,\phi_n} =
\mathcal{T}_f(\alpha_{Frame}(\phi_1)) \cup
\cdots \cup
\mathcal{T}_f(\alpha_{Frame}(\phi_n))
\text.
\end{equation*}
The push operation $\apush^{ra} : \sa{Frame} \times \sa{Summary_{ra}} \to \sa{Summary_{ra}}$ adds the reachable addresses from the new frame to the set:
\begin{equation*}
\apush^{ra}(\hat{\phi}, \widehat{ss})
=
\mathcal{T}_f(\hat{\phi})
\cup
\widehat{ss}
\text.
\end{equation*}
The reachable address summary provides the information about the stack needed for abstract garbage collection with pushdown control-flow analysis.
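The reachable-addresses summary can be sketched along the same lines (the frame encoding $(v, \mathit{free}(e), \hat{\rho})$ and all names below are ours):

```python
# Sketch: a frame (v, e_free, env) touches env[v'] for each free variable
# v' of its body other than the frame's own bound variable v.

def touch_f(frame):
    v, e_free, env = frame
    return {env[u] for u in e_free if u != v}

def summarize_ra(stack):          # alpha_S^ra: union of touched addresses
    addrs = set()
    for frame in stack:
        addrs |= touch_f(frame)
    return frozenset(addrs)

def push_ra(frame, summary):      # abstract push: add the frame's addresses
    return frozenset(touch_f(frame)) | summary

f1 = ("x", {"x", "y"}, {"y": "a1"})   # touches a1 (x is the bound variable)
f2 = ("z", {"z", "w"}, {"w": "a2"})   # touches a2
assert summarize_ra([f1, f2]) == frozenset({"a1", "a2"})
assert push_ra(f1, summarize_ra([f2])) == summarize_ra([f1, f2])
```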
\section{Introduction and summary}
The partial differential equations (PDE) of numerical relativity have
typically been solved using finite difference methods. In finite
differencing (FD) one first chooses a finite number of coordinate
``grid'' points $x_{n}$ and approximates the space and time
derivatives in the PDEs by ratios of differences between field and
coordinate values on the grid. With a choice of grid and
``differencing scheme'' for converting derivatives to ratios of
differences, the equations of general relativity are approximated by a
system of algebraic equations whose solution approximates that of the
underlying PDEs.
In this paper we explore an alternative method for solving the elliptic
PDEs encountered in numerical relativity: pseudospectral collocation
(PSC). In PSC one begins by postulating an approximate solution,
generally as a sum over some finite basis of polynomials or
trigonometric functions. The coefficients in the sum are determined
by requiring that the residual error, obtained by substituting the
approximate solution into the exact PDEs, is minimized in some
suitable sense. Thus, if one describes FD as finding the exact
solution to an approximate system of equations, one can describe PSC
as finding an approximate solution to the exact equations.
Pseudospectral collocation has been applied successfully to solve
problems in many fields, including fluid dynamics, meteorology,
seismology, and relativistic astrophysics (cf.\
\cite{boyd89a,canuto88a,fornberg96a,bonazzola98a}). Its advantage
over FD arises for problems with smooth solutions, where the
approximate solution obtained using PSC converges on the actual
solution {\em exponentially} as the number of basis functions is
increased. The approximate FD solution, on the other hand, never
converges faster than algebraically with the number of grid points.
While the computational cost per ``degree of freedom'' --- basis
functions for PSC, grid points for FD --- is higher for PSC than for FD,
the computational cost of a high accuracy
PSC solution is a small fraction of the cost of
an equivalent FD solution. Even for problems in which only
modest accuracy is needed, PSC generally results in a significant
computational savings in both memory and time compared to FD,
especially for multidimensional problems.
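The convergence contrast is easy to see in a toy one-dimensional example (our own illustration, using a periodic Fourier collocation derivative rather than the Chebyshev bases employed later in this paper):

```python
import numpy as np

# Differentiate the smooth periodic function u(x) = exp(sin(x)) two ways:
# second-order centered finite differences, and a Fourier pseudospectral
# derivative computed via the FFT.
def errors(N):
    x = 2 * np.pi * np.arange(N) / N
    u = np.exp(np.sin(x))
    du_exact = np.cos(x) * u
    h = 2 * np.pi / N
    du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
    k = 1j * np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers times i
    du_ps = np.real(np.fft.ifft(k * np.fft.fft(u)))
    return (np.max(np.abs(du_fd - du_exact)),
            np.max(np.abs(du_ps - du_exact)))

for N in (8, 16, 32):
    e_fd, e_ps = errors(N)
    print(N, e_fd, e_ps)
# The FD error falls off algebraically (like N**-2), while the
# pseudospectral error drops toward roundoff once N resolves the function.
```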
The elliptic equations of interest here are the constraint equations
that must be solved as part of the general relativistic Cauchy initial
data problem. We focus on two axisymmetric problems: the initial data
for a black hole spacetime with angular momentum, and a spacetime with
a black hole superposed with
gravitational waves (Brill waves). Solutions for both of
these problems have been found by others using FD
\cite{choptuik86a,cook90a,bernstein94a}: our focus here is to
demonstrate the use of PSC for these problems and compare the
computational cost of high accuracy solutions obtained using both PSC
and FD.
In section~\ref{sec:init} we review briefly the key constraint
equations that arise in the traditional space-plus-time decomposition
of the Einstein field equations and describe three different elliptic
problems --- a nonlinear model problem whose analytic solution is
known, the nonlinear Hamiltonian constraint equation for an
axisymmetric black hole spacetime with angular momentum, and the
Hamiltonian constraint equation for a spacetime with a black hole
superposed with Brill waves --- that have been solved using FD and that we
solve here using PSC. We describe PSC in section~\ref{sec:specmeth},
and compare the computational cost of PSC and FD for high accuracy
solutions in section~\ref{sec:compare}. In section~\ref{sec:specimpl}
we solve the problems described in section~\ref{sec:init} using PSC and
compare the performance of PSC with that obtained by other authors
using FD. In section~\ref{sec:concl}, we discuss our conclusions and their
implications for solving problems in numerical relativity. Finally,
whether by FD or PSC the solution of a nonlinear elliptic system
involves solving a potentially large system of (nonlinear) algebraic
equations. We describe the methods we use for solving them in appendix
\ref{sec:solvsys}.
\section{Initial Value Equations}
\label{sec:init}
\subsection{Introduction}
The general relativistic Cauchy initial value problem requires that we
specify the metric and extrinsic curvature of a three-dimensional
spacelike hypersurface. These quantities cannot be specified
arbitrarily: rather they must satisfy a set of constraint equations,
which are a subset of the Einstein field equations. Arnowitt, Deser,
and Misner (ADM) \cite{arnowitt62a} were the first to formulate the
Cauchy initial value problem in relativity in this way; however, the
most common expression of this $3+1$ decomposition is due to York
\cite{york79a}.
In the York formulation of the ADM equations the spacelike
hypersurfaces are taken to be level surfaces of some spacetime scalar
function $\tau$. The generalized coordinates and conjugate momentum
are the three-metric $\gamma_{ij}$ induced on the spacelike
hypersurface by the space-time metric and the extrinsic curvature
${K}_{ij}$ of the spacelike hypersurface.\footnote{This approach is by
no means unique. For example, recent work by Choquet-Bruhat, York
and collaborators (cf.\ \cite{anderson99a} and references therein)
on a different choice of generalized coordinates and momenta have
yielded approaches in which the evolution equations form a
first-order symmetric and hyperbolic (FOSH) system. Many powerful
numerical methods for solving FOSH systems exist and the numerical
relativity community is only now beginning to explore how these
solution techniques can be brought to bear on the field equations in
this form.} Position on each surface is described by a set of
spatial coordinates $x_i$ (where Latin indices run from 1 to 3 and
indicate spatial coordinates on the slice), so that the line-element
on the hypersurface is
\begin{equation}
{}^{(3)}ds^2 = \gamma_{ij}dx^i\,dx^j.
\end{equation}
The normal $\bbox{n}$ to the spacelike
hypersurface is everywhere timelike. The time coordinate direction
$\bbox{t}$, however, need not be exactly along the normal. We can write
$\bbox{t}$ in terms of $\bbox{n}$ and the spatial vectors that span
the tangent space of the hypersurface:
\begin{equation}
\bbox{t} = \alpha \bbox{n} + \beta^i{\partial\over\partial x^i},
\end{equation}
where the {\em lapse\/} $\alpha$ describes how rapidly time elapses as
one moves along the hypersurface normal $\bbox{n}$, and the {\em
shift\/}
\begin{equation}
\vec\beta = \beta^i{\partial\over\partial x^i},
\end{equation}
is a vector field confined entirely to the hypersurface tangent space
that describes how the spatial coordinates are shifted, relative to
$\bbox{n}$, as one moves from one hypersurface to the next. The lapse
and shift are free functions: they correspond to the freedom to
specify the evolution of the coordinate system that labels points in
spacetime.
The {\em space-time\/} line element at any point on a hypersurface is
related to the spatial metric at that point, the lapse, and the shift:
\begin{eqnarray}
ds^2 &=& g_{\mu \nu} dx^\mu dx^\nu \nonumber \\
&=& - \left( \alpha^2 - \beta_a \beta^a
\right) dt^2 + 2\beta_i dx^i dt + \gamma_{ij} dx^i dx^j
\end{eqnarray}
(where Greek indices run from 0 to 3 and include the time coordinate
$t$, which is sometimes referred to as $x_0$).
Given a spacetime foliation, choice of ``field variables''
$\gamma_{ij}$ and $K_{ij}$, and coordinates (embodied in the lapse and
shift), the field equations can be decomposed into four {\em
constraint equations,} which $\gamma_{ij}$ and $K_{ij}$ must satisfy
on each slice, and six {\em evolution equations,} which describe how
the three-metric and extrinsic curvature evolve from one slice to the
next. In this paper we consider only the problem of consistent
specification of initial data: {\em i.e.,} the solution of the
constraint equations.
The four constraint equations (in vacuum) are
\begin{mathletters}
\label{eq:constraints}
\begin{eqnarray}
{}^{(3)}R + K^2 - K_{ab}K^{ab} &=& 0, \\
{}^{(3)}\nabla_a \left( K^{ia} - K \gamma^{ia} \right) &=& 0,
\end{eqnarray}
\end{mathletters}
where ${}^{(3)}R$ is the Ricci scalar associated with $\gamma_{ij}$,
${}^{(3)}\nabla_a$ is the covariant derivative associated with
$\gamma_{ij}$, and
\begin{equation}
K:= K_{ab}\gamma^{ab},
\end{equation}
is the trace of the extrinsic curvature $K_{ij}$. Note that, as
befits constraints, equations \ref{eq:constraints}
involve only derivatives of $\gamma_{ij}$ and $K_{ij}$ in the tangent
space of the corresponding hypersurface.
\subsection{Conformal imaging formalism}
Our goal is to determine a $\gamma_{ij}$ and a $K_{ij}$ that satisfy
the constraint equations and boundary conditions. These are both
symmetric tensors on a three-dimensional hypersurface; consequently,
between the two there are twelve functions that must be specified.
Equations \ref{eq:constraints} place only four
constraints on these twelve functions. In order to solve the initial
value problem, the remaining components of $\gamma_{ij}$ and $K_{ij}$
must be specified. York \cite{york79a} has developed a
convenient formalism, referred to as {\em conformal imaging,} for
dividing the spatial metric and extrinsic curvature into constrained
and unconstrained parts, which we summarize in this subsection.
Associate $\gamma_{ij}$ with a {\em conformal background three-metric}
$\bar{\gamma}_{ij}$ through a {\em conformal factor\/} $\psi$:
\begin{equation}
\gamma_{ij} = \psi^4 \bar{\gamma}_{ij}.
\end{equation}
The extrinsic curvature ${K}^{ij}$ is split into its trace $K$ and its
trace-free part
\begin{equation}
A^{ij} = K^{ij} - {1\over3} \gamma^{ij} K.
\end{equation}
The trace $K$ is treated as a given scalar function which will be
specified. The trace-free extrinsic curvature $\bar{A}^{ij}$ of
the conformal metric $\bar{\gamma}_{ij}$ can be expressed in terms of
$\psi$ and $A^{ij}$:
\begin{equation}
\bar{A}^{ij} = \psi^{10} A^{ij}.
\end{equation}
The constraint equations (eqs.~\ref{eq:constraints}) can also be
expressed in terms of the conformal background metric and its
trace-free extrinsic curvature:
\begin{mathletters}
\label{eq:cons}
\begin{eqnarray}
\bar{\nabla}^2 \psi - \frac{1}{8} \bar{R} \psi - \frac{1}{12} {K}^2 \psi^5
+ \frac{1}{8} \bar{A}_{ab} \bar{A}^{ab} \psi^{-7} &=& 0,
\label{eq:Hamcon}\\
\bar{\nabla}_a \bar{A}^{ia} - \frac{2}{3} \psi^6 \bar{\gamma}^{ia}
\bar{\nabla}_a {K} &=& 0,
\label{eq:momcon}
\end{eqnarray}
\end{mathletters}
where ${}^{(3)}\bar{\nabla}_i$ is the covariant derivative associated
with $\bar{\gamma}_{ij}$.
Equation \ref{eq:Hamcon} is generally referred to as the Hamiltonian
constraint, while equations \ref{eq:momcon}
are generally referred to as the momentum constraints.
Since derivatives of $\bar{A}_{ij}$ appear only in the form of a
divergence, the momentum constraints constrain only the longitudinal
part of $\bar{A}_{ij}$.
Now turn to the boundary conditions. We are interested in problems
with a single black hole. Let the initial hypersurface be
asymptotically flat, so that on the hypersurface far from the black
hole the curvature vanishes. Describe the black hole by an
Einstein-Rosen bridge ({\em i.e.,} by two asymptotically flat
three-surfaces connected by a throat) and insist that the spacetime be
inversion symmetric through the throat. These choices impose the
boundary conditions
\begin{mathletters}
\begin{eqnarray}
\lim_{r\rightarrow\infty}\psi(r) &=& 1\qquad\text{asymptotic
flatness},\\
\left[{\partial\psi\over\partial r} + {\psi \over 2a}\right]_{r=a}
&=& 0 \qquad\text{inversion symmetry},
\end{eqnarray}
\end{mathletters}
on $\psi$ where $r=a$ is the coordinate location of the throat.
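As a quick sanity check (our own example, not from the references cited here), the time-symmetric Schwarzschild conformal factor $\psi(r) = 1 + a/r$, whose throat sits at $r = a$, satisfies both conditions:

```python
# Check both boundary conditions for psi(r) = 1 + a/r with throat at r = a.
a = 1.0

def psi(r):
    return 1.0 + a / r

def dpsi_dr(r):
    return -a / r**2

# Inversion symmetry: d(psi)/dr + psi/(2a) = 0 at r = a.
residual = dpsi_dr(a) + psi(a) / (2 * a)
assert abs(residual) < 1e-12

# Asymptotic flatness: psi -> 1 as r -> infinity.
assert abs(psi(1e8) - 1.0) < 1e-7
print("boundary conditions satisfied")
```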
We can now describe how to solve the constraint equations. Let
${K}$ vanish on the initial hypersurface; then the Hamiltonian and
the momentum constraints (eqs.~\ref{eq:cons})
decouple.\footnote{Such a hypersurface is said to have vanishing mean
curvature, or be {\em maximally embedded.}} Pick a conformal
background metric $\bar\gamma_{ij}$ (which determines
$\bar{\nabla}$) and transverse-traceless part
$\bar{A}_{TT}^{ij}$ of the conformal extrinsic curvature. Solve the
momentum constraints (eqs.~\ref{eq:momcon}) for the
longitudinal part of the trace-free conformal extrinsic curvature
$\bar{A}_{ij}$. Together with $\bar{A}_{TT}^{ij}$ the trace-free conformal
extrinsic curvature is thus fully determined and the Hamiltonian
constraint (eq.\ \ref{eq:Hamcon}) can be solved for the conformal
factor. This determines the three-metric $\gamma_{ij}$ and its
extrinsic curvature $K^{ij}$ and completes the specification of the
initial data.
\subsection{Three test problems}
\subsubsection{Black hole with angular momentum}\label{sec:bham}
Focus first on the initial data corresponding to an axisymmetric black
hole spacetime with angular momentum. This problem was first examined
analytically by \cite{bowen80a}, and has been explored numerically by
\cite{choptuik86a,cook90a}.
Choosing the conformal background metric to be flat ({\em i.e.,}
$\bar{\gamma}_{ij}=\delta_{ij}$), \cite{bowen80a} found
an analytic solution to the momentum constraints
(eqs.~\ref{eq:momcon}) that carries angular momentum and obeys the
isometry condition at the black hole throat. Draw through any point a
sphere centered on the black hole throat, and let ${n}^i$ be the
outward-pointing unit normal to the sphere at that point. Letting $J^i$ be
the angular momentum of the {\em physical\/} space, the Bowen-York
solution to the momentum constraints is
\begin{equation}
\bar{A}_{ij} = \frac{3}{{r}^3} \left[ \epsilon_{aib}
J^b{n}^a{n}_j + \epsilon_{ajb}J^b{n}^a{n}_i \right].
\end{equation}
Corresponding to this solution is the Hamiltonian constraint
(eq.~\ref{eq:Hamcon}) for the conformal factor $\psi$,
\begin{equation}
\bar{\nabla}^2 \psi + \frac{9}{4} \frac{J^2 \sin^2 \theta}{r^6}
\psi^{-7} = 0,
\end{equation}
together with its boundary conditions,
\begin{mathletters}
\label{eq:bc}
\begin{eqnarray}
\lim_{r\rightarrow\infty} \psi (r) &=& 1, \\
\left[ \frac{\partial \psi}{\partial r} + \frac{1}{2a} \psi
\right]_{r=a} &=& 0, \\
\left(\frac{\partial \psi}{\partial \theta} \right)_{\theta = 0,\pi} &=& 0,
\end{eqnarray}
\end{mathletters}
which result from asymptotic flatness, the isometry condition at
the throat (which is located at ${r}=a$), and axisymmetry respectively. The
equation with boundary conditions for the conformal factor is
solved on the domain $a\leq r <\infty$.
Once the conformal factor is determined, the geometry of the
initial slice is completely specified.
A useful diagnostic of an initial data slice is
to compute the total energy contained in the slice.
\'{O} Murchadha and York \cite{omurchadha74a} have expressed the ADM
energy (cf.\ \cite{arnowitt62a}) in terms of the conformal
decomposition formalism, finding
\begin{equation}
E_{ADM} = \hat E_{ADM} - \frac{1}{2\pi} \oint_\infty \bar{\nabla}^j
\psi \, d^2\bar{S}_j,
\end{equation}
where $\hat E_{ADM}$ is the energy of the conformal metric.
Thus when the conformal metric is flat, the ADM energy reduces to
\begin{equation}
E_{ADM} = - \frac{1}{2\pi} \oint_\infty \bar{\nabla}^j \psi \, d^2\bar{S}_j,
\label{eq:admE}
\end{equation}
{\em i.e.,} it is proportional to the integral of the normal component
of the gradient of the conformal factor about the sphere at infinity.
\subsubsection{A model problem}\label{sec:model}
Bowen and York \cite{bowen80a} also describe a nonlinear ``model'' of
the Hamiltonian constraint equation that can be solved
exactly, which we utilize in section \ref{sec:specimpl} to test our
code. The model equation is
\begin{equation}
\bar{\nabla}^2 \psi + \frac{3}{4} {P^2 \over r^4} \left( 1 -
\frac{a^2}{r^2} \right)^2 \psi^{-7} = 0,
\label{eq:modham2}
\end{equation}
with $P$ a constant. Together with the boundary
conditions described above (eqs.~\ref{eq:bc}), equation~\ref{eq:modham2}
has the solution
\begin{mathletters}
\begin{equation}
\psi = \left[ 1 + \frac{2E}{r} + 6 \frac{a^2}{r^2} + \frac{2a^2E}{r^3}
+ \frac{a^4}{r^4} \right]^{1/4},
\end{equation}
where
\begin{equation}
E = \left( P^2 + 4a^2 \right)^{1/2}.
\end{equation}
\end{mathletters}
If we evaluate equation \ref{eq:admE} for this solution, we find that it
has ADM energy $E$.
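These closed-form expressions are easy to check by machine. The following
sketch (illustrative only, not part of the numerical method of this paper;
it uses Python with the sympy library, setting $a=P=1$ arbitrarily) verifies
that $\psi$ satisfies the model equation and the throat boundary condition,
and that the large-$r$ surface integrand of equation \ref{eq:admE} recovers
$E$:

```python
import sympy as sp

r, a, P = sp.symbols('r a P', positive=True)
E = sp.sqrt(P**2 + 4*a**2)
psi = (1 + 2*E/r + 6*a**2/r**2 + 2*a**2*E/r**3 + a**4/r**4)**sp.Rational(1, 4)

# Residual of the model Hamiltonian constraint: the radial flat-space
# Laplacian psi'' + (2/r) psi' plus the nonlinear source term.
residual = (sp.diff(psi, r, 2) + 2*sp.diff(psi, r)/r
            + sp.Rational(3, 4)*P**2/r**4*(1 - a**2/r**2)**2*psi**-7)

# Spot-check numerically at exact rational points (full symbolic
# simplification of the residual is slow and unnecessary).
vals = {a: 1, P: 1}
for rv in (sp.Rational(3, 2), 2, 10):
    assert abs(residual.subs(vals).subs(r, rv).evalf(30)) < 1e-20

# Inversion-symmetry condition at the throat r = a.
bc = (sp.diff(psi, r) + psi/(2*a)).subs(vals).subs(r, 1)
assert abs(bc.evalf(30)) < 1e-20

# ADM energy: for a flat conformal metric the surface integral reduces to
# -2 r^2 dpsi/dr as r -> infinity, which should approach E = sqrt(5).
adm = (-2*r**2*sp.diff(psi, r)).subs(vals).subs(r, 10**6).evalf(30)
assert abs(adm - sp.sqrt(5).evalf(30)) < 1e-4
```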
\subsubsection{Black hole plus Brill wave}
\label{sec:bhbw}
The second physical problem upon which we demonstrate the use of
spectral methods for numerical relativity is that of a black hole
superposed with a Brill \cite{brill59a} wave, a problem studied using
FD by \cite{bernstein94a}.
Following \cite{bernstein94a}, let the initial slice be a spacetime
isometry surface ({\em i.e.,} time symmetric); then, the extrinsic
curvature $K_{ij}$ vanishes and the momentum constraints
(eqs.~\ref{eq:momcon}) are trivially satisfied.
To determine the conformal factor the Hamiltonian constraint
(eq.\ \ref{eq:Hamcon}) must be solved, which requires the
specification of a conformal background metric. Let the
line-element of the conformal background
metric have the form
\begin{equation}
d\bar{s}^2 = \left[ e^{2q} \left( dr^2 + r^2 d\theta^2 \right) + r^2
\sin^2\theta d\phi^2 \right],
\end{equation}
where
\begin{mathletters}
\begin{eqnarray}
q &:=& A \sin^n\theta \left\{ \exp\left[
- \left( \frac{\eta + \eta_0}{\sigma} \right)^2
\right] \right. \nonumber \\ &&+ \left. \exp \left[ -
\left(\frac{\eta - \eta_0}{\sigma} \right)^2
\right] \right\} \label{eq:brillq},\\
\eta &:=& \ln{r \over a},
\end{eqnarray}
\end{mathletters}
$n$ is an even integer, and $A$, $\eta_0$, and $\sigma$ are constant
parameters that describe the superposed Brill wave's amplitude,
position, and width, respectively.
With this choice the Hamiltonian constraint
equation becomes
\begin{eqnarray}
\frac{\partial^2 \psi}{\partial r^2}
&+& \frac{2}{r}\frac{\partial \psi}{\partial r}
+ \frac{1}{r^2} \frac{\partial^2 \psi}{\partial \theta^2}
+ \frac{\cot \theta}{r^2} \frac{\partial \psi}{\partial \theta} \nonumber \\
&+& \frac{\psi}{4} \left(\frac{\partial^2 q}{\partial r^2}
+ \frac{1}{r} \frac{\partial q}{\partial r}
+ \frac{1}{r^2} \frac{\partial^2 q}{\partial\theta^2} \right) = 0.
\end{eqnarray}
The boundary conditions of asymptotic
flatness and inversion symmetry are again given by equations
\ref{eq:bc}. Furthermore, since the conformal metric $\bar
\gamma_{ij}$ has no ``$1/r$'' parts in its expansion at infinity, its
energy vanishes and the ADM energy for these slices is given
by equation~\ref{eq:admE}.
\section{Spectral Methods}\label{sec:specmeth}
\subsection{Introduction}
Consider an elliptic differential equation, specified by the operator
$L$ on the $d$-dimensional open, simply-connected domain ${\cal D}$,
with boundary conditions given by the operator $S$ on the boundary
$\partial{\cal D}$:
\begin{mathletters}
\begin{eqnarray}
L({u})(\bbox{x}) &=& f(\bbox{x}) \qquad \bbox{x}\in{\cal
D}, \\
S({u})(\bbox{x}) &=& g(\bbox{x}) \qquad \bbox{x}\in\partial{\cal
D}.
\end{eqnarray}
\end{mathletters}
There may be more than one boundary condition, in which case we can
index $S$ and $g$ over the boundary conditions.
Approximate the solution $u(\bbox{x})$ to this system as a sum
over a sequence of {\em basis functions\/} $\phi_k(\bbox{x})$ on
${\cal D}\cup\partial{\cal D}$,
\begin{equation}
u_N(\bbox{x}) = \sum_{k=0}^{N-1} \tilde{u}_k\phi_k(\bbox{x}),
\end{equation}
where the $\tilde{u}_k$ are constant coefficients. Corresponding to
the approximate solution $u_N$ is a residual $R_N$ on $\cal D$ and
$r_N$ on $\partial{\cal D}$:
\begin{mathletters}
\begin{eqnarray}
R_N &=& L(u_N) - f \qquad \text{on $\cal D$},\\
r_N &=& S(u_N) - g \qquad \text{on $\partial\cal D$}.
\end{eqnarray}
\end{mathletters}
The residual vanishes everywhere for the exact solution $u$.
In PSC we determine the coefficients $\tilde{u}_k$ by requiring
that $u_N$ satisfies the differential equation and boundary conditions
{\em exactly\/} at a
fixed set of {\em collocation points\/} $x_n$:
{\em i.e.,} we require that
\begin{mathletters}
\begin{eqnarray}
0 &=& L[u_N(x_n)] - f(x_n)\qquad\text{for $x_n$ in ${\cal D}$,}\\
0 &=& S[u_N(x_n)] - g(x_n)\qquad\text{for $x_n$ on $\partial{\cal D}$,}
\end{eqnarray}
\end{mathletters}
for all $n$. When the expansion functions and collocation points are
chosen appropriately a numerical solution of these equations
can be found very efficiently. In the following
subsection we discuss choices of the expansion basis and collocation
points.
\subsection{Expansion basis and collocation points}
In PSC we require that the approximate solution $u_N$ satisfies the
differential equation and boundary conditions exactly at the $N$
collocation points $x_{n}$. The basis $\phi_k$ should not constrain
the values of the approximation at the collocation points;
correspondingly, we can write the basis as a set of $N$ functions
$\phi_{k}(x)$ that satisfy a discrete orthogonality relationship on
the collocation points $x_{n}$:
\begin{equation}
\sum_{n=0}^{N-1}\phi_{j}(x_{n})\phi^{*}_{k}(x_{n}) =
\nu^{2}_k \delta_{jk} ,
\end{equation}
where the $\nu_k$ are normalization constants. Note that the basis
functions are inextricably linked with the collocation points.
It is sometimes the case that the basis can be chosen so
that the boundary conditions are automatically satisfied. For example,
consider a one-dimensional problem on the interval
\begin{equation}
{ \Bbb{I}} = [-1,1].
\end{equation}
If the boundary conditions are periodic then each element of the basis
\begin{mathletters}
\label{eq:fourier}
\begin{equation}
\phi_{k}(x) =
\exp\left[\pi i \left(x+1\right)k\right],
\end{equation}
satisfies the boundary conditions; correspondingly, the approximate
solution $u_N$ automatically satisfies the boundary conditions. If, in
addition, we choose the collocation points
\begin{equation}
x_{n} = {2n\over N}-1,
\end{equation}
then the basis satisfies the discrete orthogonality relation
\begin{equation}
\delta_{jk} = {1 \over N} \sum_{n=0}^{N-1}\phi_{j}(x_{n})\phi^{*}_{k}(x_{n}).
\end{equation}
\end{mathletters}
In an arbitrary basis, or with arbitrarily chosen collocation points,
finding the $\tilde{u}_{k}$ from the $u_{N}(x_{n})$ requires the
solution of a general linear system of $N$ equations in $N$ unknowns,
which involves ${\cal O}(N^{3})$ operations. For the basis and
collocation points given in equations \ref{eq:fourier} the
$\tilde{u}_{k}$ can be determined from the $u_{N}(x_{n})$ quickly and
efficiently via the Fast Fourier Transform in ${\cal O}(N\ln N)$
operations.
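As a concrete check of this bookkeeping (a Python/numpy sketch, not code
from this work), the following synthesizes $u_N$ at the collocation points
from known coefficients, verifies the discrete orthogonality relation, and
recovers the $\tilde{u}_k$ with a single FFT:

```python
import numpy as np

N = 16
n = np.arange(N)
x = 2*n/N - 1                     # collocation points x_n = 2n/N - 1

# Synthesize u_N at the collocation points from known coefficients.
rng = np.random.default_rng(0)
u_tilde = rng.standard_normal(N) + 1j*rng.standard_normal(N)
phi = np.exp(1j*np.pi*np.outer(x + 1, np.arange(N)))   # phi[n, k] = phi_k(x_n)
u = phi @ u_tilde                 # direct O(N^2) synthesis

# Discrete orthogonality: (1/N) sum_n phi_j*(x_n) phi_k(x_n) = delta_jk.
gram = phi.conj().T @ phi / N
assert np.allclose(gram, np.eye(N))

# numpy's forward FFT uses exp(-2 pi i n k / N), so the coefficients of the
# exp(+...) basis above are recovered as fft(u)/N in O(N log N) operations.
assert np.allclose(np.fft.fft(u)/N, u_tilde)
```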
Arbitrary derivatives of the $u_{N}$ can also be computed quickly:
writing
\begin{mathletters}
\begin{equation}
{d^{p}u_{N}\over dx^{p}} = \sum_{k=0}^{N-1}\tilde{u}_{k}^{(p)}\phi_{k}(x),
\end{equation}
we see immediately that
\begin{equation}
\tilde{u}_{k}^{(p)} = \left(\pi i k\right)^{p}\tilde{u}_{k}.
\end{equation}
\end{mathletters}
Consequently, any derivative of $u_{N}$ can be evaluated at all the
collocation points in just ${\cal O}(N\ln N)$ operations.
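The same idea in executable form: the sketch below (Python/numpy; it adopts
the symmetric FFT frequency convention, the standard equivalent for real
periodic data of the one-sided index $k$ used above) differentiates
$\sin\pi x$ once and twice by multiplication in coefficient space:

```python
import numpy as np

N = 32
x = 2*np.arange(N)/N - 1           # x_n = 2n/N - 1 on [-1, 1)
u = np.sin(np.pi*x)                # periodic on [-1, 1] with period 2

# For real data it is conventional to use the symmetric FFT frequencies;
# the derivative rule is the same multiplication in coefficient space.
freq = np.fft.fftfreq(N, d=2/N)    # frequencies in cycles per unit length
ik = 2j*np.pi*freq                 # d/dx acting on the coefficients

du = np.fft.ifft(ik*np.fft.fft(u)).real
assert np.allclose(du, np.pi*np.cos(np.pi*x), atol=1e-10)

d2u = np.fft.ifft(ik**2*np.fft.fft(u)).real
assert np.allclose(d2u, -np.pi**2*np.sin(np.pi*x), atol=1e-9)
```

Because $\sin\pi x$ is band-limited, the spectral derivative here is exact
to rounding error at every collocation point.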
The ability to evaluate efficiently the derivatives of $u_{N}$ at the
collocation points is much more important than finding a basis whose
individual members satisfy the boundary conditions. In the case of
periodic boundary conditions we can have our cake and eat it, too.
More generally we choose a basis in which we can efficiently
compute the derivatives of $u_{N}$ at the collocation points and
require separately that the approximate solution $u_{N}$ satisfy the
boundary conditions at collocation points on the boundary.
For general boundary conditions a basis of Chebyshev polynomials often
meets all of our requirements.\footnote{The geometry of a problem
might suggest other expansion functions, such as Legendre polynomials;
however, a Chebyshev expansion does quite
well and has the added convenience that, with appropriately chosen
collocation points, only ${\cal O}(N\ln N)$ operations are required to convert
from the expansion coefficients to the function values at the
collocation points and vice versa \cite{orszag80a}.} Recall that the Chebyshev
polynomials are defined on $\Bbb{I}$ by
\begin{equation}
T_k(x) = \cos\left(k\cos^{-1} x\right).\label{def:cheb}
\end{equation}
A simple recursion relation allows us to find the derivative of $u_N$
as another sum over Chebyshev polynomials: if \footnote{For Chebyshev
bases the conventional notation is that $k$ runs from $0$ to $N$,
not $N-1$; thus, there are $N+1$ coefficients and collocation points.}
\begin{equation}
u_N(x) = \sum_{k=0}^{N} \tilde{u}_k T_k(x),
\end{equation}
then
\begin{equation}
{du_N\over dx}(x) = \sum_{k=0}^{N-1} \tilde{u}'_k T_k(x),
\end{equation}
where
\begin{equation}
c_k \tilde{u}'_{k} = \tilde{u}'_{k+2}+2(k+1)\tilde{u}_{k+1},
\end{equation}
evaluated in descending order of $k$ starting from
$\tilde{u}'_{N} = \tilde{u}'_{N+1} = 0$, with
\begin{equation}
c_k = \left\{ \begin{array}{ll}
2 & \; \text{$k = 0$}\\
1 & \; \text{$k \geq 1$}.
\end{array}\right.
\label{eq:c}
\end{equation}
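A direct transcription of this recursion (a Python sketch; the downward
sweep in $k$ starts from $\tilde{u}'_{N} = \tilde{u}'_{N+1} = 0$, the
standard convention for this recurrence):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_derivative_coeffs(u):
    """Apply c_k u'_k = u'_{k+2} + 2(k+1) u_{k+1}, sweeping k downward
    with the starting values u'_N = u'_{N+1} = 0."""
    N = len(u) - 1                  # coefficients u_0 .. u_N
    du = np.zeros(N + 2)
    for k in range(N - 1, -1, -1):
        du[k] = du[k + 2] + 2*(k + 1)*u[k + 1]
    du[0] /= 2.0                    # c_0 = 2, c_k = 1 otherwise
    return du[:N]                   # coefficients u'_0 .. u'_{N-1}

# Check on u(x) = T_3(x) = 4x^3 - 3x, whose derivative is
# 12x^2 - 3 = 3 T_0(x) + 6 T_2(x).
u = np.array([0.0, 0.0, 0.0, 1.0])
assert np.allclose(cheb_derivative_coeffs(u), [3.0, 0.0, 6.0])

# Cross-check against numpy's Chebyshev derivative on random coefficients.
u = np.random.default_rng(1).standard_normal(8)
assert np.allclose(cheb_derivative_coeffs(u), C.chebder(u))
```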
If we choose collocation points $x_n$ (for $0\leq n \leq N$) according to
\begin{equation}
x_n = \cos {\pi n\over N},
\end{equation}
then the Chebyshev polynomials satisfy the discrete orthogonality
relation
\begin{equation}
\delta_{jk} = {2\over N \bar{c}_k}
\sum_{n = 0}^{N}
{1\over\bar{c}_{n}} T_{j}(x_{n})T_{k}(x_{n}),
\end{equation}
where
\begin{equation}
\bar{c}_k = \left\{ \begin{array}{ll}
2 & \; \text{$k = 0$ or $N$} \\
1 & \; \text{otherwise}.
\end{array}\right.
\label{eq:cbar}
\end{equation}
Finally, exploiting the relation between the Chebyshev polynomials
and the Fourier
basis (cf.\ \ref{def:cheb}) allows us to find the $\tilde{u}_k$ from
the $u_N(x_n)$ in ${\cal O}(N\ln N)$ time using a fast transform
\cite[appendix B]{canuto88a}. With an expansion basis of Chebyshev
polynomials and an appropriate choice of collocation points we can thus
evaluate derivatives of arbitrary order at the collocation points in
${\cal O}(N\ln N)$ operations.
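In practice the transform pair is easy to verify. The following Python
sketch forms the matrix $T_k(x_n)$ at the collocation points, applies the
discrete orthogonality relation to obtain coefficients from collocation
values, and confirms that a Chebyshev combination of degree at most $N$ is
recovered exactly:

```python
import numpy as np

N = 8
n = np.arange(N + 1)
x = np.cos(np.pi*n/N)               # collocation points x_n = cos(pi n / N)
cbar = np.ones(N + 1); cbar[0] = cbar[N] = 2.0

T = np.cos(np.pi*np.outer(n, n)/N)  # T[k, n] = T_k(x_n) = cos(pi k n / N)

def cheb_coeffs(u):
    """u_k = (2 / (N cbar_k)) sum_n u(x_n) T_k(x_n) / cbar_n."""
    return 2.0/(N*cbar) * (T @ (u/cbar))

# Exact for expansions of degree <= N: try u(x) = 5 T_2(x) - 1.5 T_7(x).
u = 5*T[2] - 1.5*T[7]
expect = np.zeros(N + 1); expect[2] = 5.0; expect[7] = -1.5
assert np.allclose(cheb_coeffs(u), expect)
```

The matrix product above costs ${\cal O}(N^2)$; the fast-transform route
mentioned in the text produces the same coefficients in ${\cal O}(N\ln N)$.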
For problems on an arbitrary domain of dimension $d$ greater than unity it
is rarely the case that we can find a basis which permits rapid
evaluation of derivatives. If the domain can be mapped smoothly to
${\Bbb{I}}^d$ then we can write
\begin{mathletters}
\begin{equation}
u_{N^{(1)}\cdots N^{(d)}}(\bbox{x}) =
\sum_{k_{1}=0}^{N^{(1)}}\cdots
\sum_{k_{d}=0}^{N^{(d)}} \tilde{u}_{k_{1}\cdots k_{d}}
\phi_{k_{1}\cdots k_{d}}(\bbox{x}),
\end{equation}
where
\begin{eqnarray}
\phi_{k_{1}\cdots k_{d}}(\bbox{x}) &=&
\prod_{\ell=1}^d\phi^{(\ell)}_{k_{\ell}}(x^{(\ell)}),\\
\bbox{x} &=& \left(x^{(1)},\ldots,x^{(d)}\right),
\end{eqnarray}
and the $\phi^{(\ell)}_k$, for fixed $\ell$, form a basis on $\Bbb{I}$ which
permits fast evaluation of derivatives with respect to its argument
({\em e.g.,} Chebyshev polynomials). Associated with each set of basis
functions are the collocation points $x^{(\ell)}_{n}$;
correspondingly, the collocation points associated with
$\phi_{k_1\cdots k_d}$ are just the $d$-tuples
\begin{equation}
\bbox{x}_{n_1\cdots n_d} = \left(
x^{(1)}_{n_1},\cdots,x^{(d)}_{n_d}
\right).
\end{equation}
\end{mathletters}
With this choice of basis and collocation points we can evaluate
efficiently arbitrary derivatives of an approximation
$u_{N^{(1)}\cdots N^{(d)}}$. If the domain cannot be mapped smoothly
to a $d$-cube in ${\Bbb{R}}^d$, either more sophisticated methods such
as domain decomposition \cite{boyd89a,canuto88a} must be used, or the
problem may not be amenable to solution by PSC.
\subsection{Solving the system of equations}
The expansion basis, collocation points and differential equation with
boundary conditions determine a system of equations for the
coefficients $\tilde{u}_k$ or, equivalently, the approximate solution
$u_N$ evaluated at the collocation points.
Iterative solution methods (which require as few as ${\cal
O}(N\ln N)$ operations) work well to solve the kind of systems
of equations that arise from the application of a PSC method.
If the elliptic system being solved is linear then the algebraic
equations arising from either a FD or a PSC method are also linear and
a unique solution is guaranteed. If, on the other hand, the
differential system is nonlinear, then the equations arising from FD
or PSC are also nonlinear and a unique solution is not guaranteed.
Newton's method \cite[sec.\ 12.13 and appendices C and D]{boyd89a},
in which one solves the equations linearized about a guess and
then iterates, works well for these types of equations. As long as a
good initial guess is chosen, the iteration will usually converge.
In appendix \ref{sec:solvsys} we describe in detail the variant of
Newton's method (Richardson's iteration) that we have used to solve the
nonlinear system of algebraic equations that arise when we apply PSC
to solve the Hamiltonian constraint equations as posed in section
\ref{sec:init}.
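To fix ideas, here is a minimal Newton iteration for a one-dimensional
analogue with the same $\psi^{-7}$ nonlinearity as the Hamiltonian
constraint, discretized by second-order finite differences (a Python sketch
with an arbitrary smooth source, chosen for illustration; the scheme
actually used in this work, Richardson's iteration with a FD preconditioner,
is described in appendix \ref{sec:solvsys}):

```python
import numpy as np

# Solve u'' + s(x) u^{-7} = 0 on [0, 1] with u(0) = u(1) = 1 by Newton's
# method on a second-order FD grid.
N = 63                              # interior grid points
h = 1.0/(N + 1)
x = h*np.arange(1, N + 1)
s = np.sin(np.pi*x)                 # an arbitrary smooth source for the sketch

u = np.ones(N)                      # initial guess: the trivial solution u = 1
for it in range(20):
    lap = (np.roll(u, -1) - 2*u + np.roll(u, 1))/h**2
    lap[0] = (u[1] - 2*u[0] + 1.0)/h**2          # boundary value u(0) = 1
    lap[-1] = (1.0 - 2*u[-1] + u[-2])/h**2       # boundary value u(1) = 1
    F = lap + s*u**-7
    if np.max(np.abs(F)) < 1e-10:
        break
    # Tridiagonal Jacobian of F with respect to the interior unknowns.
    J = (np.diag(np.full(N, -2.0/h**2) - 7*s*u**-8)
         + np.diag(np.full(N - 1, 1.0/h**2), 1)
         + np.diag(np.full(N - 1, 1.0/h**2), -1))
    u += np.linalg.solve(J, -F)     # Newton update: solve J du = -F

assert np.max(np.abs(F)) < 1e-10    # converged in a handful of iterations
```

Starting from the trivial solution, the iteration reaches the rounding
floor in a few Newton steps, mirroring the grid-sequencing strategy used
for the full problem in section \ref{sec:specimpl}.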
\section{Comparing Finite Difference and Pseudospectral Collocation
Methods}
\label{sec:compare}
\subsection{Introduction}
Finite differencing and pseudospectral collocation are alternative
ways to find approximate solutions to a system of differential
equations. Consider the Poisson problem in one dimension:
\begin{mathletters}
\label{eq:system}
\begin{equation}
{d^2u\over dx^2} = f(x),\label{eq:poisson}
\end{equation}
on the interval $\Bbb I$ with Dirichlet boundary conditions
\begin{equation}
u(-1) = u(1) = 0. \label{eq:dbc}
\end{equation}
\end{mathletters}
In a FD approach to this problem we seek the values of $u$ at discrete
points $x_n$, say
\begin{equation}
x_n = -1 + {2n \over N},
\end{equation}
for $n = 0,1,\dots N$. Algebraic equations
are found by approximating the differential operator
$d^2u/dx^2$ in equation \ref{eq:poisson} by a ratio of differences: {\em e.g.,}
\begin{equation}
{d^2u\over dx^2}(x_n) \simeq
{u_{n+1}-2u_n+u_{n-1}\over\Delta x^2},
\end{equation}
for integer $n = 1,2,\dots N-1$
where
\begin{mathletters}
\begin{eqnarray}
u_n &:=& u(x_n),\\
\Delta x &:=& 2/N.
\end{eqnarray}
\end{mathletters}
With this discretization the differential equation \ref{eq:poisson}
yields $N-1$ equations for the $N+1$ unknowns $u_n$. The boundary
conditions (eq.\ \ref{eq:dbc}) yield two more equations,
completely determining the $u_n$:
\begin{mathletters}
\begin{equation}
{u_{n+1}-2u_n+u_{n-1}\over\Delta x^2} = f(x_n) \qquad
\text{for
$n = 1,2,\dots N-1$},
\end{equation}
\begin{eqnarray}
u_{0} &=& 0,\\
u_{N} &=& 0.
\end{eqnarray}
\end{mathletters}
The solution to these equations is the FD approximation to
$u(x)$ at the points $x_n$.
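The FD procedure just described can be written down in a few lines. The
sketch below (Python/numpy) assembles the tridiagonal system for a
manufactured case, $f(x)=-\pi^2\sin\pi x$ with exact solution
$u=\sin\pi x$ (our choice, purely for illustration):

```python
import numpy as np

# Second-order FD solve of u'' = f on [-1, 1] with u(-1) = u(1) = 0.
N = 64
x = -1 + 2*np.arange(N + 1)/N       # x_n = -1 + 2n/N
dx = 2.0/N
f = -np.pi**2*np.sin(np.pi*x)       # manufactured source; exact u = sin(pi x)

# Tridiagonal system for the N-1 interior unknowns u_1 .. u_{N-1};
# u_0 = u_N = 0 enter through the absent boundary terms.
A = (np.diag(np.full(N - 1, -2.0))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1))/dx**2
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(A, f[1:N])

err = np.max(np.abs(u - np.sin(np.pi*x)))
assert err < 1e-2                   # O(dx^2) accuracy at dx = 2/64
```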
The FD solution to equations \ref{eq:system} begins by approximating
the differential equations. In the PSC method, on the other hand, we
first approximate the solution at all points in $\Bbb I$ by a sum over
a finite set of basis functions. For this example, we choose a Chebyshev
basis; so, we write
\begin{equation}
u_{N}(x) = \sum_{k=0}^{N} \tilde{u}_k T_k(x).
\end{equation}
Now insist that $u_{N}$ satisfies the differential equation and
boundary conditions exactly at the collocation points
\begin{equation}
x_n = \cos{\pi n\over N},
\end{equation}
for $n = 0,1,\dots N$.
In particular, we require that the boundary conditions are satisfied
and that, in addition, the differential equation is satisfied
for integer $n$ ranging from $1$ to $N-1$:
\begin{mathletters}
\begin{eqnarray}
u_N(x_0) &=& 0,\\
u_N(x_N) &=& 0,\\
{d^2u_N \over dx^2}(x_n) &=& f(x_n).\label{eq:difeq}
\end{eqnarray}
\end{mathletters}
To evaluate equation \ref{eq:difeq} note that $d^2u_N/dx^2$ can be
written as
\begin{mathletters}
\label{eq:2ndDeriv}
\begin{equation}
{d^2u_N \over dx^2}(x_n) = \sum_{m=0}^{N} d^{(2)}_{nm} u_N(x_m).
\end{equation}
The $d^{(2)}_{nm}$ can be determined by noting that
\begin{equation}
{d^2u_N\over dx^2}(x_n) = \sum_{k=0}^{N} \tilde{u}_k'' T_k(x_n),
\end{equation}
with
\begin{eqnarray}
c_k \tilde{u}_k'' &=& \tilde{u}_{k+2}'' + 2(k+1) \tilde{u}_{k+1}',\nonumber\\
c_k \tilde{u}_k' &=& \tilde{u}_{k+2}' + 2(k+1) \tilde{u}_{k+1},
\end{eqnarray}
and
\begin{equation}
\tilde{u}_k = {2\over N \bar{c}_k}
\sum_{n = 0}^{N}
{1\over\bar{c}_{n}} u_N(x_{n})T_{k}(x_{n}),
\label{eq:speccoefs}
\end{equation}
\end{mathletters}
where $c_k$ and $\bar{c}_k$ are given by equations \ref{eq:c} and
\ref{eq:cbar}, respectively.
The result is, again, a set of algebraic equations for $u_N(x_n)$: the
values of the approximate solution at the collocation points. Finding
the $u_N(x_n)$ yields an approximate solution to the differential
equation over the entire domain $\Bbb I$ since the spectral
coefficients $\tilde{u}_k$ are given by equation
\ref{eq:speccoefs}.\footnote{Alternatively, we could have constructed
a system of equations in terms of the unknown spectral coefficients.
This would correspond to a spectral tau method: {\em cf.}
\cite{boyd89a,canuto88a}.}
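The chain values $\to$ coefficients $\to$ differentiate $\to$ values
defines the matrix $d^{(2)}_{nm}$ of equations \ref{eq:2ndDeriv} column by
column. The following Python sketch builds it that way (using numpy's
Chebyshev utilities for the recursions), imposes the Dirichlet conditions
by replacing the boundary rows, and solves the manufactured problem
$u''=-\pi^2\sin\pi x$, $u(\pm1)=0$, whose exact solution is $u=\sin\pi x$
(our choice, for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16
n = np.arange(N + 1)
x = np.cos(np.pi*n/N)               # collocation points; x_0 = 1, x_N = -1
cbar = np.ones(N + 1); cbar[0] = cbar[N] = 2.0
T = np.cos(np.pi*np.outer(n, n)/N)  # T[k, n] = T_k(x_n)

def to_coeffs(u):                   # discrete transform for the coefficients
    return 2.0/(N*cbar)*(T @ (u/cbar))

# Build the second-derivative collocation matrix d2[n, m] column by column:
# values -> coefficients -> differentiate twice -> values.
D2 = np.empty((N + 1, N + 1))
for m in range(N + 1):
    e = np.zeros(N + 1); e[m] = 1.0
    D2[:, m] = C.chebval(x, C.chebder(to_coeffs(e), 2))

A = D2.copy()
b = -np.pi**2*np.sin(np.pi*x)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0      # u(x_0) = u(1) = 0
A[N, :] = 0.0; A[N, N] = 1.0; b[N] = 0.0      # u(x_N) = u(-1) = 0
u = np.linalg.solve(A, b)

assert np.max(np.abs(u - np.sin(np.pi*x))) < 1e-8   # spectral accuracy at N = 16
```

At $N=16$ the error is already near rounding level; the second-order FD
solve of the same problem needs thousands of points for comparable accuracy.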
For the linear problem posed here the algebraic system of equations
that arises in either a FD or PSC solution can
be solved directly or by any of the many standard iterative methods.
For nonlinear problems the systems are generally solved by linearizing
the equations about an initial guess and then iterating the solution
until it converges. We discuss one method of solution in appendix
\ref{sec:solvsys}.
\subsection{Convergence of approximations}
In either a FD or PSC solution to a differential equation with
boundary conditions we expect that, as $N$ tends to infinity, the
approximate solution should become arbitrarily accurate. For large
$N$, the $L_2$ error of a FD approximation decreases as $N^{-p}$ for
some positive integer $p$, whose value depends on the smoothness of
$f$ and on the error made in approximating the differential operator
(in the example above, $d^2/dx^2$): assuming that $f$ is smooth, the
$L_2$ error of the FD solution decreases as $N^{-p}$ when the
truncation error of the differential operator is ${\cal O}(\Delta x^p)$.
In contrast, when the solution $u$ is smooth the error made by a
properly formulated spectral approximation decreases faster than any
fixed power of $N$ (where $N$ is now the number of collocation points
or basis functions).\footnote{In addition the individual spectral
coefficients $\tilde u_k$ should decrease exponentially with $k$ once
the problem is sufficiently resolved.} For a heuristic understanding
of this rapid convergence, note first that a PSC solution's
derivatives at each collocation point involve all the $\{ u_N(x_n) \}$
(cf.~eq.~\ref{eq:2ndDeriv}). Correspondingly, it is as exact as
possible, given the information available at the $N$ collocation
points. This suggests that an order $N$ collocation spectral
approximation to the derivatives of the unknown should make errors of
order ${\cal O}(\Delta x^{N})$. The interval $\Delta x$, however, is
also proportional to $N^{-1}$; so, we expect that the error in the
spectral solution $u_N$ should vary as ${\cal O}(N^{-N})$. A more
rigorous analysis using convergence theory \cite[chapter 2]{boyd89a}
shows that for any function which is analytic on the domain of
interest, a Chebyshev expansion will converge exponentially ({\em
i.e.} as ${\cal O}(e^{-N})$). If the function is also periodic then a
Fourier expansion will converge exponentially.
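This contrast is easy to exhibit numerically. The sketch below
(Python/numpy) interpolates the analytic function $e^x$ (an arbitrary
choice) on the Chebyshev collocation points and measures the maximum error
as $N$ grows; each refinement gains several orders of magnitude, rather
than the fixed factor $2^p$ per doubling of an order-$p$ FD rule:

```python
import numpy as np

# Chebyshev interpolation error for the analytic function e^x on [-1, 1],
# sampled at 1000 points, for increasing N.
xx = np.linspace(-1, 1, 1000)
errs = []
for N in (4, 8, 12, 16):
    n = np.arange(N + 1)
    x = np.cos(np.pi*n/N)                       # collocation points
    cbar = np.ones(N + 1); cbar[0] = cbar[N] = 2.0
    T = np.cos(np.pi*np.outer(n, n)/N)          # T_k(x_n)
    coeffs = 2.0/(N*cbar)*(T @ (np.exp(x)/cbar))
    approx = np.polynomial.chebyshev.chebval(xx, coeffs)
    errs.append(np.max(np.abs(approx - np.exp(xx))))

# Exponential convergence: each step of 4 in N gains several orders of
# magnitude, and N = 16 already reaches the rounding floor.
assert errs[1] < 1e-3*errs[0]
assert errs[2] < 1e-3*errs[1]
assert errs[3] < 1e-13
```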
\subsection{Computational cost of solutions}
The computational cost, in time, of a FD solution to a system of
elliptic differential equations scales linearly with the number of
grid points $N$ while the accuracy $\epsilon$ of the solution scales
as $N^{-p}$, where $p$ is the order of the FD operator
truncation error. Correspondingly, the cost $K_{\text{FD}}$ for a
given accuracy
scales as
\begin{mathletters}
\begin{equation}
K_{\text{FD}} \sim \epsilon^{-1/p}.
\end{equation}
The cost $K_{\text{PSC}}$ of a PSC solution to the same system, on the
other hand, scales as $N\ln N$ (for an iterative solution) while
$\epsilon$ scales as $\exp(-N)$; consequently, the cost scales with
accuracy $\epsilon$ as
\begin{equation}
K_{\text{PSC}} \sim
-(\ln \epsilon)\,\ln \left(-\ln \epsilon\right).
\end{equation}
\end{mathletters}
Since it is the computational cost required to achieve a given
accuracy that is important, the more rapid convergence of a PSC
solution confers upon it a clear advantage. This advantage is
quantified by how the ratio of costs scales with accuracy:
\begin{equation}
{K_{\text{PSC}}\over K_{\text{FD}}} \sim
-\epsilon^{1/p}\ln\epsilon\,\ln\left(-\ln\epsilon\right),
\end{equation}
which tends to zero with $\epsilon$; consequently, increasing accuracy
with a PSC solution is always more efficient than with a FD
solution.
The equations that arise from either a FD or PSC treatment of an
elliptic differential system are typically solved using iterative
methods; thus, {\em at fixed resolution\/} the storage requirements
for either solution method are equivalent. As we have seen, however,
fixed resolution does not correspond to fixed solution accuracy. As
the desired solution accuracy increases the storage requirements of a
PSC solution fall relative to those of a FD solution by a factor of
$-\epsilon^{1/p}\ln\epsilon$.
\section{Solving the Hamiltonian Constraint}
\label{sec:specimpl}
\subsection{Nonlinear model problem}
As a first example we solve the model Hamiltonian constraint
equation described in section \ref{sec:model}, equation
\ref{eq:modham2}:
\begin{mathletters}
\begin{equation}
\bar{\nabla}^2 \psi + \frac{3}{4} {P^2 \over r^4} \left( 1 -
\frac{a^2}{r^2} \right)^2 \psi^{-7} = 0,
\label{eq:modham5}
\end{equation}
for $r\in[a,\infty)$ with $P$ a constant and with the boundary conditions
\begin{eqnarray}
\lim_{r\rightarrow\infty}\psi(r) &=& 1\qquad\text{asymptotic
flatness},\\
\left[{\partial\psi\over\partial r} + \frac{1}{2a} \psi \right]_{r=a}
&=& 0\qquad\text{inversion symmetry},\\
\left(\frac{\partial \psi}{\partial \theta} \right)_{\theta = 0,\pi}
&=& 0\qquad\text{axisymmetry}.\label{eq:angbc}
\end{eqnarray}
\end{mathletters}
For this model problem the solution $\psi$ is
\begin{mathletters}
\label{eq:modsol}
\begin{equation}
\psi = \left[ 1 + \frac{2E}{r} + 6 \frac{a^2}{r^2} + \frac{2a^2E}{r^3}
+ \frac{a^4}{r^4} \right]^{1/4},
\end{equation}
where
\begin{equation}
E = \left( P^2 + 4a^2 \right)^{1/2},
\end{equation}
\end{mathletters}
is also the ADM energy for the initial data (cf. eq.~\ref{eq:admE}).
As described this problem is spherically symmetric; nevertheless, we
treat it as axisymmetric to illustrate the methods used to solve the
Hamiltonian constraint for the black hole with angular momentum (cf.\
sec.\ \ref{sec:bham}) and the black hole plus Brill wave problems (cf.\
sec.\ \ref{sec:bhbw}).
As a first step we map the domain $r\in[a,\infty)$, $\theta\in[0,\pi]$
to a square in ${\Bbb R}^{2}$: letting
\begin{mathletters}
\label{eq:mapxy}
\begin{eqnarray}
x &=& \frac{2a}{r} - 1,\\
y &=& \cos \theta,
\end{eqnarray}
\end{mathletters}
we have $x\in(-1,1]$ and $y\in[-1,1]$.
In terms of the $(x,y)$ coordinates, the model Hamiltonian constraint
(eq.\ \ref{eq:modham5}) becomes
\begin{eqnarray}
\left( x+1 \right)^2
\frac{\partial^2 \psi}{\partial x^2} +&& \left( 1-y^2 \right)
\frac{\partial^2 \psi}{\partial y^2} - 2y \frac{\partial \psi}{\partial y}
\nonumber \\ + \frac{3}{256} \left( \frac{P}{a} \right)^2 &&
\left( x+1 \right)^2\left( 3-2x-x^2 \right)^2 \psi^{-7} = 0,
\end{eqnarray}
subject to the boundary conditions
\begin{mathletters}
\label{eq:bcxy}
\begin{eqnarray}
\lim_{x\rightarrow-1}\psi &=& 1,\\
\left[{\partial \psi \over \partial x} - {1 \over 4} \psi \right]_{x=1} &=& 0.
\end{eqnarray}
\end{mathletters}
Note that with our choice of variables and expansion bases the angular
boundary conditions (eq.~\ref{eq:angbc}) are automatically satisfied.
Since $\psi$ is not periodic in either $x$ or $y$, we adopt a
Chebyshev basis for the approximate solution:
\begin{mathletters}
\begin{equation}
\psi_{N_x,N_y} (x,y) = \sum_{j=0}^{N_x} \sum_{k=0}^{N_y} \tilde
\psi_{jk} T_j(x)T_k(y),\label{eq:expansion}
\end{equation}
with the corresponding collocation points
\begin{eqnarray}
x_{j} &=& \cos{\pi j\over N_x},\\
y_{k} &=& \cos{\pi k\over N_y}.
\end{eqnarray}
\end{mathletters}
For this problem, focus on approximations
\begin{equation}
\Psi_{\ell} = \psi_{4\ell,4},
\end{equation}
for integer $\ell$. We keep $N_y$ fixed as the model problem is
independent of $y$.
Following the discussion in appendix \ref{sec:solvsys}, solve the PSC
equations using Richardson's iteration with a second-order FD
preconditioner. To obtain $\Psi_\ell$, we need an initial guess
$\Psi_\ell^{(0)}$ to begin the iteration. For the lowest resolution
expansion ($N_x=4$) begin the iteration with the guess
\begin{equation}
\Psi^{(0)}_1(x,y) = \frac{(3+x)}{2},
\end{equation}
which is the trivial solution for $P=0$. Applying Richardson's
iteration will then give us the approximate solution $\Psi_1$.
Through the expansion \ref{eq:expansion} this determines an
approximation for $\psi$ everywhere; in particular, it determines an
approximation at the collocation points corresponding to $N_x=8$,
which we then use as the initial guess for determining the approximate
solution $\Psi_{2}$. In this same way we use a lower-resolution
approximate solution as the initial guess for the approximate solution
at the next higher resolution, {\em i.e.}
\begin{equation}
\Psi_\ell^{(0)} = \Psi_{\ell -1}.
\end{equation}
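Because the expansion is global, carrying a solution from one resolution to
the next is simply an evaluation of the lower-order expansion at the new
collocation points. A Python sketch (with a stand-in function in place of a
converged $\Psi_1$, purely for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def colloc(N):
    return np.cos(np.pi*np.arange(N + 1)/N)     # collocation points

def coeffs(N, u):
    """Discrete Chebyshev transform: collocation values -> coefficients."""
    cbar = np.ones(N + 1); cbar[0] = cbar[N] = 2.0
    n = np.arange(N + 1)
    T = np.cos(np.pi*np.outer(n, n)/N)
    return 2.0/(N*cbar)*(T @ (u/cbar))

# A stand-in for a converged low-resolution solution Psi_1 (an assumption
# for this sketch; in the paper it comes from Richardson iteration).
psi = lambda x: (3 + x)/2 + 0.01*np.exp(x)

Nlo, Nhi = 4, 8
u_lo = psi(colloc(Nlo))                         # values at the N = 4 points
guess = C.chebval(colloc(Nhi), coeffs(Nlo, u_lo))   # initial guess at N = 8

# The guess agrees with the underlying function to interpolation accuracy.
assert np.max(np.abs(guess - psi(colloc(Nhi)))) < 1e-4
```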
To investigate the accuracy of our solution as a
function of resolution (basis dimension for PSC, number of grid points
for FD) we evaluate a number of solutions differing only in resolution
and evaluate several different error measures.
\begin{enumerate}
\item For this problem we know the exact solution (cf.\
\ref{eq:modsol}); so, we calculate the $L_2$ norm of the
absolute error as a function of $\ell$:
\begin{eqnarray}
\Delta\Psi_{\ell} &=& \Biggl\{ \sum_{j=0}^{N_x}\sum_{k=0}^{N_y}
\frac{1}{N_x N_y \bar{c}_j \bar{c}_k} \left[\Psi_\ell(x_j,y_k)
\right. \Biggr. \nonumber \\ && \Biggl. \left. -
\psi(x_j,y_k)\right]^2\Biggr\}^{1/2}\nonumber\\
&:=& \left|\left| \Psi_\ell - \psi \right|\right|_2,
\label{eq:Delta}
\end{eqnarray}
where $\bar{c}_j$ and $\bar{c}_k$ are given by equation \ref{eq:cbar}.
\item We can also characterize the convergence of the approximate
solutions $\Psi_{\ell}$ by calculating the $L_2$ norm of the
difference between the successive approximate solutions:
\begin{equation}
\delta\Psi_{\ell} = \left|\left| \Psi_\ell - \Psi_{\ell-1}
\right|\right|_2.\label{eq:delta}
\end{equation}
The errors $\delta\Psi$ and $\Delta\Psi$ are defined for either FD or
PSC solutions.
\item We also evaluate, by analogy with $\Delta\Psi$ and $\delta\Psi$,
the quantities
\begin{eqnarray}
\Delta{E}_{\ell} &=& | E_\ell - E |,\\
\delta{E}_{\ell} &=& | E_\ell - E_{\ell-1} |,
\end{eqnarray}
where $E_\ell$
is the ADM mass-energy associated with the approximate conformal
factor $\Psi_\ell$, evaluated using equation \ref{eq:admE}.
\item For PSC solutions only we define the relative error
measure
\begin{equation}
\delta\tilde{\Psi}_{\ell} = \sum_{j=0}^{N_x} \sum_{k=0}^{N_y}
\left|\tilde{\psi}^{(\ell)}_{jk} -
\tilde{\psi}^{(\ell-1)}_{jk}\right|,
\end{equation}
which characterizes the changes in the spectral coefficients as the
order of the approximation increases.
\end{enumerate}
For a properly formulated spectral method, all of our error measures should
decrease exponentially with $N$ if the solution to the problem is analytic.
Figure \ref{fig:modprob} shows the absolute and relative errors
$\Delta\Psi_{\ell}$ and $\delta\Psi_{\ell}$, along with the change in
the spectral coefficients $\delta\tilde{\Psi}_{\ell}$, for $P=1$.
The exponential convergence of the solution with increasing $N_x$
is apparent. Experience shows that as the problem becomes more
nonlinear ({\em i.e.,} $P$ becomes larger) more terms are needed in
the expansion in order to achieve the same accuracy.
\epsfxsize\columnwidth
\begin{figure}
\begin{center}
\epsffile{specFig1.eps}
\end{center}
\caption{Spectral convergence for a nonlinear model problem. Plotted are
a measure of the absolute error $\Delta\Psi_{\ell}$, and two approximate
measures of the error $\delta\Psi_{\ell}$ and $\delta\tilde{\Psi}_{\ell}$
as a function of $N_x$, the number of radial functions, for the case $P=1$.}
\label{fig:modprob}
\end{figure}
This system of equations has also been solved using FD methods
\cite{cook90a}. A point comparison is telling: in \cite{cook90a} a
second-order accurate FD solution required a resolution of 1024 radial
points to achieve $\Delta{E}\simeq10^{-5}$, independent of $P$. The PSC solution
described here achieves the same accuracy using an expansion with only
12 radial functions for $P=1$, and 24 functions for $P=10$. In either
case a PSC solution with an accuracy of $\Delta E \approx 10^{-10}$ is
obtained by doubling the number of radial functions. To achieve the
same accuracy the FD approximation would require (assuming second
order FD) a resolution of $3\times10^5$ radial points.
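The quoted FD grid size follows directly from the assumed second-order error scaling $\Delta E \propto N^{-2}$, calibrated at the quoted data point. A quick arithmetic check:

```python
# Second-order FD: Delta E ~ C / N^2.  Calibrate C from the quoted data
# point (N = 1024 radial points gives Delta E ~ 1e-5), then ask what N
# is needed for Delta E ~ 1e-10.  Assumed scaling only.
N0, err0 = 1024, 1e-5
C = err0 * N0**2
target = 1e-10
N_needed = (C / target) ** 0.5
print(f"{N_needed:.2e}")   # ~3.2e5 radial points, as quoted in the text
```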
\subsection{Black hole with angular momentum}
\epsfxsize\columnwidth
\begin{figure}
\begin{center}
\epsffile{specFig2.eps}
\end{center}
\caption{Spectral convergence for the solution of the Hamiltonian constraint
equation for a black hole with angular momentum. Plotted are three
approximate measures of the
error $\delta\Psi_{\ell}$, $\delta\tilde{\Psi}_{\ell}$ and $\delta E$
as a function of $N_x$, the number of
radial functions, for $J=1$.}
\label{fig:bhangmomj1}
\end{figure}
We now turn to a truly non-radial, but still axisymmetric,
problem: a rotating black hole (cf.\ \ref{sec:bham}). As before (cf.\
\ref{eq:mapxy}) we map the semi-infinite domain $r\geq a$ to the
finite box $x\in(-1,1]$, $y\in[-1,1]$, obtaining the system of
equations
\begin{eqnarray}
\left( x+1 \right)^2
\frac{\partial^2 \psi}{\partial x^2} +&& \left( 1-y^2 \right)
\frac{\partial^2 \psi}{\partial y^2} - 2y \frac{\partial \psi}{\partial y}
\nonumber \\
+ \frac{9}{64} \left( \frac{J}{a^2} \right)^2 &&\left( x+1 \right)^4
\left( 1-y^2 \right) \psi^{-7} = 0,
\end{eqnarray}
subject to the boundary conditions given in equations \ref{eq:bcxy}.
For this problem we do not have the exact solution, so we consider
only the relative errors $\delta\Psi$, $\delta\tilde{\Psi}$ and
$\delta E$. Figure \ref{fig:bhangmomj1} (\ref{fig:bhangmomj100}) shows
these quantities as functions of $N_x$ for $J/M^2$ equal to 1 (100).
For these solutions $\Psi_{\ell} = \psi_{4\ell,N_y}$, where initially
$N_y=4$ and is incremented by two\footnote{Along with axisymmetry, this
problem has equatorial plane symmetry so $\Psi_{\ell}$ is even in $y$.
By exploiting this symmetry, we could reduce our number of angular
functions by a factor of two.} whenever the difference between
$\delta \Psi_\ell$ with and without the increment was greater than
ten percent.
Again we see rapid, exponential convergence
of the solution with $N$.
This problem has also been solved using second order FD
\cite{cook90a}. For a solution accuracy $\delta{E}\simeq10^{-5}$,
\cite{cook90a} found that a resolution of 1024 radial and 384 angular
grid points was required, roughly independent of the value of $J$. We
find that PSC achieves the same accuracy with an expansion basis of 12
radial (and 4 angular) functions for $J=1$, and 24 radial (and 8
angular) functions for $J=100$. Solution accuracies of $10^{-10}$ can
be obtained for the PSC solution simply by doubling the size of the
expansion basis (in $x$ and $y$). For a similar increase in accuracy
of the FD solution a grid approximately 300 times larger in each
dimension would be required.
\epsfxsize\columnwidth
\begin{figure}
\begin{center}
\epsffile{specFig3.eps}
\end{center}
\caption{Same as figure \ref{fig:bhangmomj1} with $J=100$.}
\label{fig:bhangmomj100}
\end{figure}
\subsection{Black hole plus Brill wave}
\epsfxsize\columnwidth
\begin{figure}
\begin{center}
\epsffile{specFig4.eps}
\end{center}
\caption{Spectral convergence for the solution of the Hamiltonian constraint
equation for a black hole plus Brill wave. Plotted is an
approximate measure of the
error $\delta\tilde{\Psi}_{\ell}$ as a function of $N_x$, the number of
radial functions, for the case $A=\eta_0=\sigma=1$, $n=2$.}
\label{fig:bhbrill}
\end{figure}
As a final example we consider the Hamiltonian constraint for a black
hole superposed with a Brill wave. After mapping this problem to the
$(x,y)$ domain we obtain the system of equations
\begin{mathletters}
\begin{equation}
\left( x+1 \right)^2
\frac{\partial^2 \Psi}{\partial x^2} + \left( 1-y^2 \right)
\frac{\partial^2 \Psi}{\partial y^2} - 2y \frac{\partial \Psi}{\partial y}
+ \frac{\Psi R}{4} = 0,
\end{equation}
with
\begin{equation}
R = \left( x+1 \right)^2
\frac{\partial^2 q}{\partial x^2} + (x+1) \frac{\partial q}{\partial x}
+ \left( 1-y^2 \right)\frac{\partial^2 q}{\partial y^2}
- y \frac{\partial q}{\partial y},\label{eq:R}
\end{equation}
\end{mathletters}
where $q$ is given by equation \ref{eq:brillq}, and subject to the boundary
conditions \ref{eq:bcxy}.
In figure \ref{fig:bhbrill} we show $\delta\tilde\Psi$ as a function
of $N_x$ for the Brill wave parameters $\sigma = A = \eta_0 = 1$ and
$n = 2$. For these solutions $\Psi_{\ell} = \psi_{4\ell,N_y}$ where initially
$N_y=4$, and is incremented by two whenever the difference between
$\delta \Psi_\ell$ with and without the increment was greater than
ten percent.
The convergence, while rapid, is not
quite exponential. In addition, the nearly exponentially decreasing
error is impressed with a wave that is nearly periodic in spectral
resolution $\log N_x$. We attribute this behavior to the resolution of
the factor $R$ (cf.\ eq.\ \ref{eq:R} and also eq.\ \ref{eq:brillq} for
$q$). Figure \ref{fig:ricci} shows the error $\Delta R$ obtained when
we form the approximation $R_{N_x,N_y}$ according to
\begin{mathletters}
\begin{equation}
R_{N_x,N_y} = \sum_{j=0}^{N_x}\sum_{k=0}^{N_y} \tilde{R}_{jk}
T_j(x)T_k(y),
\end{equation}
with
\begin{equation}
\tilde{R}_{jk} =
{4\over N_xN_y \bar{c}_k\bar{c}_j}
\sum_{\ell=0}^{N_x}\sum_{m=0}^{N_y} {1 \over \bar{c}_\ell\bar{c}_m}
T_{j}(x_{\ell})T_{k}(y_{m})R(x_\ell,y_m).
\end{equation}
\end{mathletters}
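The coefficient formula above is the standard discrete Chebyshev transform on the Gauss--Lobatto grid. A direct (non-FFT) numpy sketch, assuming collocation points $x_\ell = \cos(\pi\ell/N_x)$, $y_m = \cos(\pi m/N_y)$:

```python
import numpy as np

def cheb_coeffs_2d(R, Nx, Ny):
    """Discrete Chebyshev coefficients R~_jk of values R[l, m] = R(x_l, y_m)
    sampled at Gauss-Lobatto points x_l = cos(pi l/Nx), y_m = cos(pi m/Ny).
    bar-c_n equals 2 at the endpoints n = 0, N and 1 otherwise."""
    j = np.arange(Nx + 1)
    k = np.arange(Ny + 1)
    cbar = lambda n, N: np.where((n == 0) | (n == N), 2.0, 1.0)
    Tx = np.cos(np.pi * np.outer(j, j) / Nx)       # Tx[j, l] = T_j(x_l)
    Ty = np.cos(np.pi * np.outer(k, k) / Ny)       # Ty[k, m] = T_k(y_m)
    inner = R / np.outer(cbar(j, Nx), cbar(k, Ny))  # R/(cbar_l cbar_m)
    pref = 4.0 / (Nx * Ny * np.outer(cbar(j, Nx), cbar(k, Ny)))
    return pref * (Tx @ inner @ Ty.T)
```

By discrete orthogonality, the transform reproduces the coefficients of any polynomial resolved on the grid; e.g. $R = T_2(x)\,T_1(y)$ yields $\tilde{R}_{21}=1$ and zero elsewhere.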
The structure in the
solution is the same as the structure in the Chebyshev approximation
to $R$.
\epsfxsize\columnwidth
\begin{figure}
\begin{center}
\epsffile{specFig5.eps}
\end{center}
\caption{The error in the spectral representation of $R$ (equation
\ref{eq:R}) for the case shown in figure \ref{fig:bhbrill}.}
\label{fig:ricci}
\end{figure}
This problem has also been solved using FD methods
\cite{bernstein94a}, enabling us to compare the resolution required
for approximate FD or PSC solution for a given accuracy. With second
order FD a solution whose error $\delta E$ is $3 \times 10^{-5}$ required a
resolution of 400 radial and 105 angular grid points. To achieve the
same accuracy the PSC solution described here requires a basis of only
36 radial (and 12 angular) Chebyshev polynomials.
\section{Discussion}
\label{sec:concl}
Pseudospectral collocation is a very efficient way of solving the
nonlinear elliptic equations that arise in numerical relativity.
These problems typically have smooth solutions; correspondingly, the
approximate solutions obtained using pseudospectral collocation
converge upon the exact solution exponentially with the number of
collocation points. As a result, the cost of a high accuracy
pseudospectral solution is not significantly greater than the cost of
a similar solution of moderate accuracy. Since the computational
burden of solving the pseudospectral collocation equations with a
given number of collocation points is no greater than that required to
solve the finite difference equations for the same number of grid
points, the computational demands of a pseudospectral collocation
solution are far less than those of a finite difference solution for
even modest accuracy.
Another important advantage of a pseudospectral collocation solution
over a finite differencing solution involves the formulation of the
boundary conditions. In a finite difference solution the boundary
conditions must be reformulated as finite difference equations
or incorporated approximately into the formulation of the finite
difference operator of the differential equations being solved. This
generally involves the introduction of auxiliary boundary conditions,
which are not part of the original problem. For example, consider the
second order elliptic equation on $\Bbb I$:
\begin{mathletters}
\label{eq:fdexample}
\begin{eqnarray}
{d^{2}u\over dx^{2}} &=& f(x),\label{eq:fdexamplea}\\
u(-1) &=& u(1) = 0.
\end{eqnarray}
\end{mathletters}
A fourth-order accurate finite difference approximation to the
differential operator $d^{2}/dx^{2}$ is
\begin{equation}
{16(u_{j-1}-2u_{j}+u_{j+1}) - (u_{j-2}-2u_{j}+u_{j+2})\over 12\Delta
x^{2}} = f(x_{j}),
\end{equation}
where
\begin{equation}
u_{j} = u(j\Delta x).
\end{equation}
Before this finite difference operator can be used in equation
\ref{eq:fdexample} it must be modified at the grid points
$-1+\Delta x$ and $1-\Delta x$ since $-1-\Delta x$ and $1+\Delta x$
both lie outside the computational domain. In this case, four boundary
conditions are required (at $x$ equal to $-1$, $-1+\Delta x$,
$1-\Delta x$ and $1$) even though the second order equation
\ref{eq:fdexamplea} properly admits of only two boundary conditions.
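Both the fourth-order accuracy of the interior stencil and the ghost-point problem are easy to see numerically. A sketch of our own (note the factor of 12 in the denominator of the standard fourth-order stencil):

```python
import numpy as np

# Fourth-order stencil for u''.  For j = 1 the stencil references u[j-2]
# at x = -1 - dx, outside the computational domain -- the ghost point
# that forces the auxiliary boundary conditions discussed in the text.
def d2_fd4(u, j, dx):
    return (16 * (u[j-1] - 2*u[j] + u[j+1])
            - (u[j-2] - 2*u[j] + u[j+2])) / (12 * dx**2)

errs = []
for N in (16, 32):
    x = np.linspace(-1.0, 1.0, N + 1)
    dx = x[1] - x[0]
    u = np.sin(x)
    j = N // 4                                 # interior point x = -0.5
    errs.append(abs(d2_fd4(u, j, dx) + np.sin(x[j])))  # exact u'' = -sin
print(errs[0] / errs[1])                       # ~ 2**4 = 16: fourth order
```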
In pseudospectral collocation, on the other hand, no auxiliary
boundary conditions need be formulated: the approximate solution is
expressed as an analytic function, which is required to satisfy the
boundary condition equations exactly at the collocation points on the
boundary.
These advantages of the pseudospectral collocation solution come at a
cost. When properly implemented the computational expense of
pseudospectral collocation may be considerably less than the expense
of finite differencing; however, the difficulty of implementation is
greater. The efficient solution of the algebraic equations arising
from pseudospectral collocation generally requires the use of
sophisticated iterative methods. Additionally, the exact solution
itself should be smooth on the computational domain in order that
exponential convergence is attained. Finally, and perhaps most
importantly, for problems of dimension greater than unity the
computational domain must be sufficiently simple that it can be mapped
to ${\Bbb I}^{d}$ or be decomposed into sub-domains that
can each be mapped to ${\Bbb I}^{d}$ ({\em e.g.,} an L-shaped region
can be decomposed into two regions, each of which can be mapped to
${\Bbb I}^{2}$).
We have not here investigated the application of pseudospectral
collocation techniques to evolution problems. Pseudospectral
collocation methods have been used to solve problems in other fields
({\em e.g.,} fluid dynamics) with great success
\cite{canuto88a,fletcher84a}. Our own experience in applying these
techniques to evolution problems in numerical relativity shows
promise, but is not yet complete.
\begin{acknowledgements}
It is a pleasure to acknowledge the support of the National Science
Foundation (PHY/ASC93-18152, ARPA supplemented, PHY 99-00111, PHY
99-96213, PHY 95-03084, PHY 99-00672, PHY 94-08378). L.S.F. would also
like to acknowledge the support of the Alfred P. Sloan Foundation.
\end{acknowledgements}
\section{Introduction}
Recognizing what a group of people is doing, known as collective activity recognition, is critical and useful in real-world applications such as visual surveillance. A key step in collective activity analysis is to model the interactions between persons in the collective scenario, yet inferring relations among individuals in images/videos remains challenging.
Existing approaches for collective activity recognition typically modeled the collective interactions in terms of person-person interactions \cite{choi2014understanding, lan2012discriminative, amer2014hirf,hajimirsadeghi2015learning}. For instance, Lan \etal \cite{lan2010beyond} explicitly modeled pairwise potentials between individuals based on atomic action labels. Choi \etal \cite{choi2014understanding} explored several hand-crafted interaction features for constructing pairwise potentials. Chang \etal \cite{chang2015learning} chose to model person-person interactions in a collective scene via an interaction metric matrix.
In addition, deep learning models such as recurrent neural network are also proposed for modeling pairwise person interaction \cite{deng2015deep, ibrahim2015hierarchical, deng2015structure}.
Person-person interaction based models describe activities from a local perspective, which can introduce ambiguities. Moreover, such models are intrinsically limited in capturing high-level collective activities due to the inherent visual ambiguity caused by activity
invaders\footnote{We use activity invaders to indicate the individuals irrelevant to the activity.} and local pattern uncertainty.
In this work, we aim to describe collective interactions from a more global perspective, where the interactions between each anchor individual and the rest individuals are explicitly modeled. We call this interaction the \emph{person-group interaction}.
For effectively capturing person-group interactions, we introduce a set of latent variables that are modeled by jointly considering all the related persons in a collective scenario. We infer those latent variables with complicated dependencies by embedding them into a feature space using a deep neural network, instead of defining hand-crafted potentials as in conventional graphical models. The benefits are twofold. First, by utilizing an embedding-based method, our model is able to capture more complex collective structures beyond pairwise person-person structures. Second, the non-linear dependencies between person and group can be inferred by a discriminative learning procedure in a deep learning framework. To obtain a more concise collective activity representation, an attention mechanism is employed to modify the contextual structure by assigning a different impact factor to each individual during the embedding procedure.
In summary, the contributions of our paper are threefold. Firstly, a latent variable model
capable of capturing complex connections among individuals is developed for collective activity recognition.
Secondly, the complicated dependencies between person and group are represented by latent variable embedding and an attention mechanism is integrated for obtaining a compact embedding representation.
Thirdly, a new dataset with more activity samples is collected for the benchmarking of collective activity recognition.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{embedding_fig2.pdf}
\end{center}
\centering\caption{Illustration of constructing latent variables to model person-group interactions. The latent variable $h_i$ captures the local person-group interaction of person $i$, while $h_{scene}$ mines the global interaction by aggregating all the local interaction information. To effectively model the complex dependencies, we learn the representations of the latent variables in an embedding feature space.}
\label{fig:fig2}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{pipeline.pdf}
\end{center}
\centering\caption{The pipeline of our Latent Variable Embedding procedure. For simplicity, we represent the sub-procedure at each time step as an Attention Embedding Module. The output of the module at time step $t$ is $\mathbf{u}_{scene}^{(t)}$, which summarizes the person-group interactions in the collective scenario. The interaction information is then propagated in an iterative fashion. Note that the dashed line between $\mathbf{u}_N^{(T)}$ and $\mathbf{u}_{scene}^{(T)}$ indicates that the interaction of the person in yellow is barely related to the current collective activity. Activity classification is performed at the last time step $T$.}
\label{fig:fig3}
\end{figure*}
\section{Our Approach}
Instead of capturing collective structures using person-person interactions only, we consider mining more global structural interactions between each individual and the rest of the group (the other individuals).
Here, we utilize latent variables to capture the complicated dependencies between person and group in a collective activity scenario. Rather than directly inferring the latent states, we exploit embeddings of the latent variables, parametrized by a deep neural network, to represent structural information from a global view, and then explicitly model person-group interactions for collective activity recognition.
\thispagestyle{empty}
\subsection{Modeling Collective Activity with Latent Variable}
In this section, we aim to construct a mid-level feature representation indicating the collective interactions among individuals via latent variables encompassed by a graphical model.
For simplicity, we denote the visible variable of person $i$ as $x_i$, where $i\in\mathcal{V}_p$ and $\mathcal{V}_p$ is the set of people involved in a collective scene, and the visible variable of the scene as $x_{scene}$. In addition, we use $h_i$ and $h_{scene}$ to indicate the hidden variables of the $i$-th individual and the scene, respectively.
Based on the isolated representation of entities, the interactions among actors, related group and the context can then be captured by the corresponding latent variables.
Figure \ref{fig:fig2} provides a graphical representation of our model.
Thus, the posterior probability of each latent variable can be expressed as $p(h_i|x_i,{\{x_j\}}_{j\in\mathcal{V}_p\backslash i},h_{scene})$ and $p(h_{scene}|x_{scene},{\{x_i\}}_{i\in\mathcal{V}_p},{\{h_i\}}_{i\in\mathcal{V}_p})$. It means that the hidden variable $h_i$ captures the person-group interaction information for anchor person $i$ and $h_{scene}$ captures all the interaction in the collective scenario from global view. Building upon the latent variables, the collective activities can be recognized by jointly considering local person-group interactions and global context as $p(y|{\{h_i\}}_{i\in\mathcal{V}_p},h_{scene})$.
\subsection{Latent Variable Embedding}
However, even though we can define the posterior probability of these latent variables, the exact inference procedure is difficult and sometimes even intractable in conventional graphical models based on hand-crafted potentials.
Inspired by \cite{dai2016discriminative} where latent variables are embedded into feature space for structural modeling, we utilize a deep neural network to capture the non-linear dependencies among person-group interactions and represent it as the embeddings of latent variables.
The embeddings can be viewed as an indication of the posterior probabilities.
As shown in \reffig{fig:fig3}, we develop a mean-field-like procedure to approximate the inference and capture the person-group interactions during embedding. Thus, the embeddings of the latent variables can be learnt in the iterative manner introduced below.
We first denote $\mathbf{u}_i^{(t)}$ as the embedding of latent variable $h_i$ and formulate it by jointly considering the unary image feature $\mathbf{x}_i$ of person $i$, the averaged appearance feature $(\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j)/|\mathcal{N}(i)|$ of all the neighbours of person $i$, and the embedding of the global scene from the previous step, $\mathbf{u}_{scene}^{(t-1)}$. We denote the set of neighbouring persons of $i$ as $\mathcal{N}(i)$, $\mathcal{N}(i)\subseteq\mathcal{V}_p$. The update of $\mathbf{u}_i^{(t)}$ is then given by: $\forall i\in\mathcal{V}_p$,
\begin{equation} \label{eq:u_i}
\begin{split}
\mathbf{u}_i^{(t)} &=(1-\lambda)\cdot\mathbf{u}_i^{(t-1)} \\
&+\lambda\cdot\sigma(\mathbf{W}_u [\mathbf{x}_i; \frac{\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j}{|\mathcal{N}(i)|}; \mathbf{u}_{scene}^{(t-1)}]),
\end{split}
\end{equation}
where "$;$" indicates vector vertical concatenation, $\sigma(\cdot)$ is a rectified linear unit and $\lambda$ is the update step size. Here, we omit the biases term for simplicity. Intuitively, the aggregated neighbour feature $(\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j)/|\mathcal{N}(i)|$ is employed to represent group appearance information, while $\mathbf{u}_{scene}^{(t-1)}$ indicates the global context information. Thus, the local person-group interaction can be represented by the embedding $\mathbf{u}_i^{(t)}$.
Likewise, $\mathbf{u}_{scene}^{(t)}$ is the embedding of $h_{scene}$, and it aims to capture the collective interactions from a global view. To this end, we formulate it using the global image feature $\mathbf{x}_{scene}$, the pooled low-level representations of the persons, \ie $\sum_{i\in\mathcal{V}_p}\mathbf{x}_i/|\mathcal{V}_p|$, and the aggregated embeddings of the individuals, \ie $\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(t)}/|\mathcal{V}_p|$. Specifically, it can be formulated as:
\begin{equation} \label{eq:u_s}
\begin{split}
\mathbf{u}_{scene}^{(t)} &= (1-\lambda)\cdot\mathbf{u}_{scene}^{(t-1)} \\
&+\lambda\cdot\sigma(\mathbf{W}_{s} [\mathbf{x}_{scene}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{x}_i}{|\mathcal{V}_p|}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(t)}}{|\mathcal{V}_p|}]).
\end{split}
\end{equation}
Thus, $\mathbf{u}_{scene}^{(t)}$ can be considered a global relation representation, since it models the non-linear dependencies among the individuals and their local relations.
Based on the embeddings of latent variable, we can define the posterior probability of assigning activity label $\mathbf{y}$ to a given sample by non-linearly combining all the embeddings together:
\begin{equation}\label{eq:p_y}
\begin{split}
p(\mathbf{y}&|{\{\mathbf{u}_i\}}_{i\in\mathcal{V}_p},\mathbf{u}_{scene}) = \\
&\phi(\mathbf{W}_{out}\sigma(\mathbf{W}_{y}[\frac{\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(T)}}{|\mathcal{V}_p|};\mathbf{u}_{scene}^{(T)}])).
\end{split}
\end{equation}
Here, $\phi$ is an activation function used for scaling the network outputs and we set it as softmax.
Finally, we use the following cross entropy loss function to measure the consistency between the model outputs and manual annotations:
\begin{equation}\label{eq:loss}
\begin{split}
L(\theta)&=-\sum_{k=1}^{\mathrm{K}}y_k\log(p(y_k|{\{\mathbf{u}_i^{(T)}\}}_{i\in\mathcal{V}_p},\mathbf{u}_{scene}^{(T)})),
\end{split}
\end{equation}
where $\theta$ is the model parameters to be learned, $K$ is the number of collective activity labels and $y_k$ is $1$ if the frame belongs to class $k$ and $0$ otherwise.
The model parameters are optimized using the back propagation through time (BPTT) algorithm.
\subsection{Embedding with Attention}
\thispagestyle{empty}
\label{sec:Attention}
Note that the update of $\mathbf{u}_{scene}^{(t)}$ in Eq.\eqref{eq:u_s} involves the summation of the individual embeddings ${\{\mathbf{u}_i^{(t)}\}}_{i\in\mathcal{V}_p}$, which means that all the person-group interactions contribute equally to the group activity. However, to correctly discover the collective structure information, one should pay more attention to the relevant person-group interactions. For example, in the waiting scenario presented in \reffig{fig:fig3}, the individuals who are waiting in line should receive more attention, since their person-group interactions are strongly related to the activity, while the subjects walking behind are less valuable for the recognition and sometimes even cause ambiguity.
Thus the influence of the interactions between the walking subjects and the waiting group should be suppressed in this case. Inspired by the recent success of attention models for sequential modeling \cite{chorowski2015attention, ramanathan2015detecting}, we use an attention mechanism to encode the relevance of each individual embedding with respect to the scene embedding as:
\begin{equation}\label{eq:alpha}
\alpha_i^{(t)}=\tanh(\mathbf{w}_g^\mathrm{T}\mathbf{u}_i^{(t)} + \mathbf{w}_{gs}^\mathrm{T}\mathbf{u}_{scene}^{(t-1)}),
\end{equation}
where $\mathbf{w}_g,\mathbf{w}_{gs}\in\mathbb{R}^d$.
Given the relevance of individuals in the collective scenario, we can measure the importance of the person-group interactions derived from individual $i$ as:
\begin{equation}\label{eq:gi}
g_i^{(t)}=\frac{e^{\alpha_i^{(t)}/\tau}}{\sum_{j\in\mathcal{V}_p}e^{\alpha_j^{(t)}/\tau}},
\end{equation}
where $\tau$ is the softmax temperature parameter.
By considering all the individuals in the given collective scenario together,
we can reformulate the embedding of scene as following:
\begin{equation}\label{eq:newus}
\begin{split}
\mathbf{u}_{scene}^{(t)} &= (1-\lambda)\cdot\mathbf{u}_{scene}^{(t-1)} \\
&+\lambda\cdot\sigma(\mathbf{W}_{s} [\mathbf{x}_{scene}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{x}_i}{|\mathcal{V}_p|}; \sum_{i\in\mathcal{V}_p}g_i^{(t)}\mathbf{u}_i^{(t)}]).
\end{split}
\end{equation}
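The attention-weighted aggregation of Eqs.\eqref{eq:alpha}--\eqref{eq:gi} amounts to a temperature softmax over per-person relevance scores, whose output replaces the uniform mean over individual embeddings. A numpy sketch (dimensions and random weights are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
d_u, tau = 256, 0.25                 # embedding dim, softmax temperature

w_g = rng.normal(0.0, 0.05, d_u)
w_gs = rng.normal(0.0, 0.05, d_u)

def attention_weights(U, u_scene_prev):
    """Per-person relevance (tanh score, Eq. 5) followed by a
    temperature softmax (Eq. 6); U has shape (num_people, d_u)."""
    alpha = np.tanh(U @ w_g + u_scene_prev @ w_gs)
    e = np.exp(alpha / tau - np.max(alpha / tau))   # stable softmax
    return e / e.sum()

U = rng.normal(size=(6, d_u))                   # embeddings of 6 people
g = attention_weights(U, np.zeros(d_u))
weighted = g @ U     # attention-weighted sum replacing the uniform mean
```

A smaller temperature $\tau$ sharpens the weights toward the most relevant individuals, which is the intended suppression of activity invaders.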
\section{Experimental Results}
For evaluation, we tested our model on three collective activity datasets: the collective activity dataset \cite{Choi_VSWS_2009}, the collective activity extended dataset, and a newly collected dataset of our own, denoted as the CA Dataset, CAE Dataset and SYSU-CA Dataset, respectively. We compared our model with the state-of-the-art collective activity recognition methods \cite{antic2014learning, lan2012discriminative, choi_eccv12, hajimirsadeghi2015learning, hajimirsadeghi2015visual, deng2015deep, deng2015structure, ibrahim2015hierarchical}. In the following, we first provide some implementation details and then report our results on these three benchmarks.
For feature representation, we used the feature maps obtained from the ``pool5'' layer of a two-stream ResNet-50 network (pretrained on the UCF101 action set \cite{twostream01}) as our two-stream feature. For each person in the collective scenario, we extracted a two-stream feature as the individual feature; we also extracted the two-stream feature of the entire collective image as the representation of the scene. Our algorithm was implemented using the Tensorflow package \cite{tensorflow2015-whitepaper}. We empirically set the softmax temperature parameter $\tau$ and the update step size $\lambda$ to 0.25 and 0.3, respectively. The number of hidden units of the latent embedding was set to 256, and the weight of the dropout layer employed in Eq.(\ref{eq:p_y}) was set to 0.5 during the training phase.
We deployed the Xavier initializer suggested in \cite{glorot2010understanding} and optimized the parameters with the Adam optimization strategy \cite{kingma2014adam}.
We also constructed two baselines, Image Classification and Person Classification, for comparison. In Image Classification, we built a softmax classifier on top of the two-stream feature of each single frame; in Person Classification, we instead constructed the feature representation by averaging the features over all people.
\subsection{Collective Activity (CA) Dataset}
\thispagestyle{empty}
The collective activity dataset contains 44 video clips of 5 collective activities: crossing, waiting, queueing, walking and talking. Each participant appearing in the videos was annotated every 10 frames with a bounding box, and collective activity labels were provided for evaluation. We followed exactly the testing protocol used in \cite{deng2015structure} and compared our method with the state-of-the-art methods in Table \ref{Tab:CompResCAD}. We set the model parameter $T$ to 3; its effect will be further discussed in section \ref{sec:moreEvaluation}.
As shown, our model outperformed both the deep learning based and non-deep learning based competitors, and obtained the state-of-the-art result on the Collective Activity Dataset. Specifically, our method achieved an accuracy of 85.4\%, which is about 2\% higher than that of the Cardinality Kernel model, and outperformed the Deep Structure Model by a margin of more than 4\%. These results demonstrate that the proposed person-group interaction based modeling performs better than the existing person-person interaction based modelings.
The relevant confusion matrix obtained by our method is presented in Figure \ref{Fig:confusion} (a). It reveals that our method achieves good results for the recognition of activities like talking, waiting, and queueing. We also observe that our method often misclassified the activity Walking as Crossing. This is because subjects in both the walking and crossing activities performed a similar atomic action (walking), and the person-group interactions in these activities were not as distinguishable as those in the other activities such as talking and queueing. This result is consistent with the claim in \cite{Choi_CVPR_2011} that the walking activity in this set could be biased.
\begin{table}[]
\begin{center}
\resizebox{.45\textwidth}{!}{
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
Image Classification (Two-stream feature + softmax) & 71.2\%\\
\hline
Person Classification (Average two-stream features + softmax) & 77.2\%\\
\hline\hline
Latent Constituent Model \cite{antic2014learning} & 75.1\% \\
\hline
Discriminative Latent SVM Model \cite{lan2012discriminative} & 79.7\% \\
\hline
Unified Tracking and Recognition \cite{choi_eccv12} & 80.6\% \\
\hline
HCRF-Boost \cite{hajimirsadeghi2015learning} & 82.5\% \\
\hline
Cardinality Kernel \cite{hajimirsadeghi2015visual} & 83.4\% \\
\hline
Deep Structure Model \cite{deng2015deep} & 80.6\% \\
\hline
Structure Inference Machines \cite{deng2015structure} & 81.2\% \\
\hline
Hierarchical Deep Temporal Model \cite{ibrahim2015hierarchical} & 81.5\% \\
\hline
Our Model & {\bf 85.4\%} \\
\hline
\end{tabular}
}
\end{center}
\caption{Comparison on the Collective Activity Dataset.}
\label{Tab:CompResCAD}
\end{table}
\begin{figure*}[]
\begin{center}
\resizebox{.75\textwidth}{!}{
\includegraphics[width=.951\linewidth]{metric.pdf}
}
\end{center}
\caption{Confusion matrices obtained by our method.}
\label{Fig:confusion}
\end{figure*}
\subsection{Collective Activity Extended (CAE) Dataset}
By replacing the walking activity with two new activities, dancing and jogging, the Collective Activity Extended Dataset with 6 collective activities was proposed in \cite{Choi_CVPR_2011}. For evaluation, we also set $T$ to 3 and followed the evaluation protocol in \cite{deng2015structure}.
Table \ref{Tab:CompResCADE} presents the detailed comparison results. As shown, our model obtains an accuracy of 97.94\%, which is more than 7\% higher than the best previous result, obtained by the Structure Inference Machines model \cite{deng2015structure}. This again demonstrates the effectiveness of the proposed person-group interaction modeling for collective activity recognition.
Examining the confusion matrix obtained by our method in Figure \ref{Fig:confusion} (b), we see that our model achieves good recognition results for most of the activities. We also observe that about 12\% of the Waiting samples were misclassified as Crossing: in most of the misclassified scenarios, the waiting activity was immediately followed by crossing, and the transition frames between these two activities, which are usually labelled as crossing, are hard to distinguish.
\begin{table}[]
\begin{center}
\resizebox{.45\textwidth}{!}{
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
Image Classification (Two-stream feature + softmax) & 92.26\%\\
\hline
Person Classification (Average two-stream features + softmax) & 95.10\%\\
\hline\hline
CRF + CNN \cite{deng2015structure} & 86.75\% \\
\hline
Structural SVM + CNN \cite{deng2015structure} & 87.34\% \\
\hline
Structure Inference Machines \cite{deng2015structure} & 90.23\% \\
\hline
Our Model & {\bf 97.94\%} \\
\hline
\end{tabular}
}
\end{center}
\caption{Comparison on the Collective Activity Extended Dataset.}
\label{Tab:CompResCADE}
\end{table}
\subsection{SYSU Collective Activity (SYSU-CA) Dataset}
For more in-depth evaluation, we also collected a new multi-view collective activity dataset.
This dataset includes 7 different collective activities (\emph{Talking}, \emph{Fighting}, \emph{Following}, \emph{Waiting}, \emph{Entering}, \emph{Gathering} and \emph{Dismissing}) distributed over a total of 285 video clips, which were captured from 3 different views.
Compared with other existing datasets, this set is unique in the following aspects:
1) each activity was captured from three different views; 2) the set contains more activity samples for collective activity analysis; 3) the dynamic motions in collective activities are more complex.
We evaluate the methods on this dataset under four settings: view1, view2, view3, and an integrated version. For single-view evaluation, we employed a three-fold cross-validation protocol, in which two-thirds of the videos from the corresponding view were used for training and the rest for testing. In the integrated setting, we report the accumulative accuracy pooled over the separate views. The iteration step number $T$ was set to 4.
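The protocol above can be sketched as follows (a pure-Python illustration; the per-view correct/total counts below are hypothetical, not the actual experimental numbers):

```python
def three_fold_splits(n_clips):
    """Three-fold cross-validation for one view: two-thirds of the clips
    train, the remaining third tests."""
    idx = list(range(n_clips))
    folds = [idx[k::3] for k in range(3)]
    return [([i for i in idx if i not in folds[k]], folds[k])
            for k in range(3)]

def accumulative_accuracy(per_view_counts):
    """Integrated setting: pool (correct, total) counts over the views,
    so each view contributes proportionally to its number of test clips."""
    correct = sum(c for c, _ in per_view_counts)
    total = sum(t for _, t in per_view_counts)
    return correct / total

# Hypothetical per-view (correct, total) counts, for illustration only:
views = [(86, 100), (87, 100), (85, 100)]
acc_total = accumulative_accuracy(views)  # 0.86
```

Pooling counts (rather than averaging per-view accuracies) is what makes the integrated number a weighted combination when the views have different test-set sizes.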
The experimental results are presented in Table \ref{Tab:CompResNewCAD} and Figure \ref{Fig:confusion} (c). Compared with the baselines Image Classification and Person Classification, our method achieved the best recognition results on most view settings and an accuracy of 85.84\% in the integrated setting. We also observe that the Image Classification baseline performs reliably on this set, while Person Classification performs unsatisfactorily. Since the results on the three views are consistent and satisfactory, we conclude that our model is robust to view variation. Moreover, explicitly modeling the person-group interactions further improves the performance. The confusion table in Figure \ref{Fig:confusion} (c) indicates that our method often confuses the Gathering and Dismissing activities with each other.
This can be attributed to the fact that the individuals in both activities have highly similar spatial and temporal distributions.
\begin{table}[]
\begin{center}
\resizebox{.45\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & View1 & View2 & View3 & Total \\
\hline\hline
Image Classification & 86.89\% & 81.62\% & 82.60\% & 83.70\%\\
\hline
Person Classification & 63.90\% & 64.42\% & 66.38\% & 64.90\%\\
\hline
Ours & 85.58\% & 87.02\% & 84.92\% & {\bf 85.84\%}\\
\hline
\end{tabular}
}
\end{center}
\caption{Comparison on the SYSU Collective Activity Dataset.}
\label{Tab:CompResNewCAD}
\end{table}
\subsection{More Discussions}
\begin{table}[]
\begin{center}
\resizebox{.47\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Dataset & $T$=1 & $T$=2 & $T$=3 & $T$=4 & $T$=15\\
\hline\hline
CA Dataset & 84.51\% & {\bf 85.45\%} & {\bf 85.45\%} & 84.89\% & 82.63\% \\
\hline
CAE Dataset & 97.26\% & 96.47\% & {\bf 97.94\%} & 97.55\% & 97.06\%\\
\hline
SYSU-CA Dataset & 85.23\% & 82.96\% & 85.79\% & {\bf 85.84\%} & 80.99\%\\
\hline
\end{tabular}
}
\end{center}
\caption{Evaluation on iteration step number $T$.}
\label{Tab:EvalT}
\end{table}
\label{sec:moreEvaluation}
\begin{table}[]
\begin{center}
\resizebox{.45\textwidth}{!}{
\begin{tabular}{|l|c|c|}
\hline
Dataset & Without Attention & With Attention\\
\hline\hline
CA Dataset & 83.68\% & {\bf 85.45\%} \\
\hline
CAE Dataset & 97.45\% & {\bf 97.94\%} \\
\hline
SYSU-CA Dataset& 85.20\% & {\bf 85.79\%} \\
\hline
\end{tabular}
}
\end{center}
\caption{Evaluation on the Attention mechanism.}
\label{Tab:EvalAtt}
\end{table}
\thispagestyle{empty}
\noindent
\textbf{Effectiveness of our model.} Compared with the baseline models, the key difference of our model is that it explicitly models the person-group interaction in a latent space. This allows the model to complement individual information with group context, so that it outperforms most of the baselines using the same features on all datasets, as shown in Tables \ref{Tab:CompResCAD}, \ref{Tab:CompResCADE} and \ref{Tab:CompResNewCAD}.
\noindent
\textbf{Effect of iteration step $T$.} Table \ref{Tab:EvalT} reports the results of varying the iteration step number $T$ in our embedding procedure. In this experiment, the attention mechanism was employed and the number of hidden neurons was set to 256. We observe that better recognition results are obtained by setting $T$ to 3 or 4 in most cases, which indicates that the collective interactions can be effectively discovered by our embedding model with quite a small $T$.
\noindent
\textbf{With vs. without attention.} Here, we investigate the effect of the attention embedding mechanism. For comparison, we set the parameter $T$ and the number of hidden units to 3 and 256, respectively. The detailed comparison results are presented in Table \ref{Tab:EvalAtt}. As shown, the attention mechanism consistently benefits recognition. In particular, on the Collective Activity dataset it improves the accuracy by a margin of nearly 2\%, which demonstrates that the attention mechanism helps suppress the influence of individuals irrelevant to the activity and thus yields a better activity representation.
\section{Conclusion}
In this paper, we developed a latent embedding model for collective activity recognition. By embedding the latent variables in the collective graphical model and combining them with an attention mechanism, our method can effectively capture the complex collective structures depicted in collective activity videos (images) and obtains state-of-the-art results on two benchmark datasets and a new collective activity set.
\thispagestyle{empty}
\section*{Acknowledgment}
This work was supported partially by the National Key Research and Development Program of China (2016YFB1001002, 2016YFB1001003), NSFC (No.61522115, 61472456, 61628212), Guangdong Natural Science Funds for Distinguished Young Scholar under Grant S2013050014265, the Guangdong Program (No.2015B010105005), the Guangdong Science and Technology Planning Project (No.2016A010102012, 2014B010118003), and Guangdong Program for Support of Top-notch Young Professionals (No.2014TQ01X779).
{\small
\bibliographystyle{ieee}
\section{Introduction}\label{sec:intro}
Filaments are crucial to star formation in giant molecular clouds \citep{2014prpl.conf...27A}, as they hold the majority of the mass budget at high column densities and host the majority of star-forming cores in the clouds \citep{2015A&A...584A..91K,2020A&A...635A..34K}. Understanding filament formation thus becomes a crucial part of a complete picture of star formation \citep{2019A&A...623A.142S}. Previously proposed filament formation mechanisms include turbulent shocks \citep[e.g.,][]{2001ApJ...553..227P}, sheet fragmentation \citep[e.g.,][]{2009ApJ...700.1609M}, magnetic-field channeling \citep[e.g.,][]{2019MNRAS.485.4509L}, and Galactic dynamics \citep[e.g.,][]{2020MNRAS.492.1594S}. A summary of filament formation mechanisms can be found in the latest review in \citet{2022arXiv220309562H}.
Recently, \citet[][hereafter K21]{2021ApJ...906...80K} demonstrated a new mechanism of filament formation via collision-induced magnetic reconnection (CMR). The study was motivated by the special morphology of the sub-structures of the Stick filament, which resemble those created by magnetic reconnection. Given that Orion A sits at a large-scale magnetic field reversal \citep{1997ApJS..111..245H} and that the position-velocity (PV) diagram shows two velocity components, K21 proposed a scenario in which two clumps collide with antiparallel magnetic fields. The model successfully reproduced observational features of the Stick filament, including the morphology (ring/fork-like structures), the density probability distribution function (PDF), the line channel maps, and the PV diagrams. Moreover, the model results offered an alternative explanation for the findings in \citet{2017ApJ...846..144K} that cores in Orion A are mostly pressure-confined: the helical field that naturally forms around the filament exerts a surface magnetic pressure on it, confining both the filament and the cores. For the first time, the CMR model provides a complete picture of structure formation in Orion A that self-consistently incorporates the 25-year mystery of the reversed magnetic field.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{cmr1.pdf}
\includegraphics[width=0.49\textwidth]{cmr2.pdf}
\caption{
An illustration of CMR from two viewing angles. {\bf (a):} A view in the x-y plane. The Cartesian coordinate system (red) is centered at the collision point. The x-axis points rightward and the y-axis points upward. The z-axis points toward us, as indicated by the red circle-point. The clouds have colliding velocities $v_{\rm 1,x}$ and $v_{\rm 2,x}$, respectively. The magnetic field points toward us (black circle-points) for $x<0$ and away from us (black circle-crosses) for $x>0$. After the collision, the filament (orange) forms along the y-axis. {\bf (b):} A view in the z-x projection. In this view, the magnetic field is parallel to the plane of the sky. The y-axis points toward us, as indicated by the red circle-point. After the collision, the filament (orange) forms along the y-axis, which points toward us. The green ellipse marks the location of the compression pancake in the absence of magnetic fields. With antiparallel fields and CMR, the field reconnects at the two tips of the pancake and forms a loop (black dashed arrow curve) around the pancake. Due to the magnetic tension force, the pancake is squeezed onto the central axis (y-axis) to become a filament.}
\label{fig:cmr}
\end{figure*}
Figure \ref{fig:cmr} illustrates the CMR filament formation. In panel (a), we view the process from the side of the filament. Two clouds move along the x-axis and collide at the origin. On the left side of the y-z plane, the magnetic field points toward us. On the other side, the field points away from us. After collision, the reversed field reconnects in the z-x plane and forms field loops that pull the compression pancake into the central axis (y-axis in our setup). The pulling is due to the magnetic tension the field loop exerts on the gas. As a result, a filamentary structure forms along the y-axis. In panel (b), we view the process in the z-x plane. In this projection, we are looking at the filament cross-section at the origin. The green ellipse represents the compression pancake and the black dashed arrow curve around the pancake denotes the reconnected field loop. The loop has a strong magnetic tension that pulls the dense gas in the pancake to the origin in each z-x plane. As a result, the filament (orange cross-section) forms along the y-axis. Essentially, the filament forms along the field symmetry axis that crosses the collision point.
While K21 outlined the skeleton of the theory, more follow-up studies are needed to further understand the physical process. Among the unknowns about CMR, the most urgent one is whether a CMR-filament can produce stars. While K21 showed that CMR can quickly make dense gas with $n_{\rm H_2}\sim10^5~{\rm cm}^{-3}$, it was not obvious that the dense gas would eventually collapse and form stars instead of being transient in the interstellar medium.
In this paper, we aim to confirm star formation within CMR-filaments, and compare it with star formation in other types of conditions. We will see how CMR star formation differs from other star formation pathways. In the following, we introduce the numerical method in \S\ref{sec:method}. Then, in \S\ref{sec:ic}, we describe the initial conditions for our fiducial model. In \S\ref{sec:results}, we present results from the simulations. Finally, we summarize and conclude in \S\ref{sec:conclu}.
\section{Method}\label{sec:method}
We use a modified version of the \textsc{Arepo} code \citep{Springel10} to model the formation of the filament. In particular, we solve the equations of compressible, inviscid magnetohydrodynamics (MHD). The code adopts the finite-volume method on an unstructured Voronoi grid that is dynamically created from mesh-generating points that move according to the local velocity of the fluid. The target mass contained within each cell can be arbitrarily selected by the user, meaning that the spatial resolution of \textsc{Arepo} varies with the local gas density. In our simulations we set a default target mass for each cell of $3.6\times10^{-4}$ M$_\odot$; however, we also require that the Jeans scale be resolved by a minimum of 16 cells, so as to avoid artificial fragmentation \citep{Truelove97} and to ensure that many cells span the width of the filament.
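As a rough illustration of the refinement criterion above, the following sketch evaluates the Jeans length, assuming an isothermal sound speed with mean molecular weight $\mu=2.33$ and the standard definition $\lambda_{\rm J}=c_s\sqrt{\pi/(G\rho)}$ (the exact definition used in the code may differ):

```python
import math

# CGS constants
G   = 6.674e-8     # cm^3 g^-1 s^-2
k_B = 1.381e-16    # erg K^-1
m_H = 1.6726e-24   # g
pc  = 3.086e18     # cm

def jeans_length_pc(n_H2, T, mu=2.33):
    """Jeans length (pc) for H2 number density n_H2 (cm^-3) and
    temperature T (K), assuming mu = 2.33 for the sound speed and a mean
    molecular mass per H2 of 2.8 m_H (both assumptions on our part)."""
    rho = n_H2 * 2.8 * m_H
    c_s = math.sqrt(k_B * T / (mu * m_H))
    return c_s * math.sqrt(math.pi / (G * rho)) / pc

# At the initial cloud density (420 cm^-3, 15 K) the Jeans length is ~1 pc,
lam = jeans_length_pc(420.0, 15.0)
# so resolving it with 16 cells requires cells no larger than ~0.07 pc,
# coarser than the base cell size quoted below -- i.e. Jeans refinement
# only kicks in at much higher densities.
max_cell_pc = lam / 16.0
```

Since $\lambda_{\rm J}\propto\rho^{-1/2}$ at fixed temperature, the 16-cell requirement becomes the binding constraint only once the gas has been compressed well beyond its initial density.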
The implementation of magnetic fields in \textsc{Arepo} was described in \citet{Pakmor11} and uses a HLLD Riemann solver and Dedner divergence cleaning. Gravity is included using a tree-based approach improved and modified for \textsc{Arepo} from \textsc{Gadget-2} \citep{Springel05}. When calculating the gravitational forces we do not use periodic boundaries.
We use a custom implementation of chemistry whose development is described in \citet{Smith14a,Clark19}. The gas chemistry is based on the network of \citet{Gong17} and was first implemented in \textsc{Arepo} by \citet{Clark19}. The \citet{Gong17} network was designed to accurately reproduce the CO abundances in low-density regions using a 1D equilibrium model, but in high-density regions it may over-produce atomic carbon. Our implementation is a non-equilibrium, time-dependent 3D version of the above that contains several additional reactions that are unimportant in PDR conditions but make the network more robust when dealing with hot, shocked gas. Full details of these modifications can be found in Hunter et al. (in prep.).
Heating and cooling of the gas are computed simultaneously with the chemical evolution using the cooling function described in \citet{Clark19}. To do this accurately, it is important to calculate the local shielding by dust and H$_2$ self-shielding with respect to the Interstellar Radiation Field (ISRF). We calculate this using the \textsc{TreeCol} algorithm that \citet{Clark12b} first implemented in \textsc{Arepo}. The background radiation is assumed to be constant at the level calculated by \citet{Draine78} and enters uniformly through the edges of the box. Cosmic-ray ionisation is assumed to occur at a rate of $3 \times 10^{-17}$ s$^{-1}$.
Star formation is modelled within the code using sink particles \citep{Bate95,Greif11}. Above number densities of $n_{\rm H_2}\sim10^8~{\rm cm}^{-3}$, we check whether the densest cell in the deepest potential well and its neighbours satisfy the following three conditions: (1) the cells are gravitationally bound, (2) they are collapsing, and (3) the divergence of the accelerations is less than zero, so the particles will not re-expand (see also \citealt{Federrath10a}). If all these conditions are satisfied, the cell and its neighbours are replaced with a sink particle, which interacts with the gas cells purely through gravitational forces. Additional material can be accreted by the sink particle: mass above the density threshold is skimmed off neighbouring cells that lie within an accretion radius of about three times the Jeans scale (0.0018 pc) and are gravitationally bound to the sink. In the current study we focus on the early stages of star formation at the core fragmentation phase, and therefore we neglect any radiative feedback from the sinks, which would play a role later in the evolution.
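The three sink-formation checks can be sketched as follows. This is a toy version, not the actual \textsc{Arepo} implementation: the energy and divergence estimators below are simple mass-weighted particle sums of our own choosing, used only to make the logic of the three conditions concrete.

```python
import numpy as np

def sink_checks(m, x, v, a, G=1.0):
    """m: (N,) masses; x, v, a: (N,3) positions, velocities, accelerations
    (code units). Returns True when the candidate group is (1) bound,
    (2) collapsing, and (3) decelerating inward (will not re-expand)."""
    com  = np.average(x, axis=0, weights=m)
    vcom = np.average(v, axis=0, weights=m)
    dx, dv = x - com, v - vcom
    # (1) bound: kinetic energy about the centre of mass < |potential|
    e_kin = 0.5 * np.sum(m * np.sum(dv**2, axis=1))
    e_pot = 0.0
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            e_pot -= G * m[i] * m[j] / np.linalg.norm(x[i] - x[j])
    bound = e_kin + e_pot < 0.0
    # (2) collapsing: net mass-weighted radial motion is inward
    collapsing = np.sum(m * np.sum(dv * dx, axis=1)) < 0.0
    # (3) accelerations point inward, so the gas will not re-expand
    no_reexpand = np.sum(m * np.sum(a * dx, axis=1)) < 0.0
    return bound and collapsing and no_reexpand

# A radially infalling clump passes all three checks:
rng = np.random.default_rng(0)
pos = rng.normal(scale=0.1, size=(16, 3))
ok = sink_checks(np.ones(16), pos, -0.05 * pos, -0.5 * pos)
```

The third check is what distinguishes a transient compression from genuine collapse: converging velocities alone can bounce, whereas inward-pointing accelerations guarantee the convergence will not reverse.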
We adopt the same unit system as K21. Specifically, the code unit for mass density is $3.84\times10^{-21}$ g cm$^{-3}$ ($n_{\rm H_2}$=840 cm$^{-3}$, assuming a mean molecular mass per H$_2$ of $\mu_{\rm H_2}=2.8 m_\textrm{H}$). The code unit for time is 2.0 Myr. The code unit for length scale is 1.0 pc. The code unit for velocity is 0.51 km s$^{-1}$. With these settings, the gravitational constant is $G=1$, and the magnetic field unit is 3.1 $\mu$G.
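These code units are mutually consistent, as the following sketch shows. We assume the time unit is chosen so that $G\rho_0 t_0^2 = 1$ and that the field unit follows the Lorentz--Heaviside convention $B_0 = v_0\sqrt{\rho_0}$ (both assumptions on our part; small differences from the quoted numbers are rounding):

```python
import math

# CGS constants
G   = 6.674e-8      # cm^3 g^-1 s^-2
m_H = 1.6726e-24    # g
pc  = 3.086e18      # cm
Myr = 3.156e13      # s

rho_0 = 3.84e-21                    # g cm^-3 (code density unit)
n_H2  = rho_0 / (2.8 * m_H)         # ~8.2e2 cm^-3, close to the quoted 840
t_0   = 1.0 / math.sqrt(G * rho_0)  # ~2.0 Myr, so G = 1 in code units
v_0   = pc / t_0                    # ~0.5 km/s (quoted: 0.51 km/s)
B_0   = v_0 * math.sqrt(rho_0)      # ~3.1e-6 G = 3.1 microgauss
```

With these choices, fixing the density and length units determines the time, velocity, and field units; no independent scale remains free once $G=1$ is imposed.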
\section{Initial Conditions}\label{sec:ic}
\begin{table}
\caption{Model Parameters.}\label{tab:ic}
\begin{tabular}{cccc}
\hline
Parameters & \mbox{MRCOLA} & \mbox{COLA\_sameB} & \mbox{COLA\_noB}\\
\hline
$L$ & 8 pc & 8 pc & 8 pc \\
$T_{\rm dust}$ & 15 K & 15 K & 15 K \\
$T_{\rm gas}$ & 15 K & 15 K & 15 K \\
$\zeta$ & $3.0\times10^{-17}$ s$^{-1}$ & $3.0\times10^{-17}$ s$^{-1}$ & $3.0\times10^{-17}$ s$^{-1}$ \\
$G$ & 1.7$G_0$ & 1.7$G_0$ & 1.7$G_0$ \\
DGR & $7.09\times10^{-3}$ & $7.09\times10^{-3}$ & $7.09\times10^{-3}$ \\
$n_{\rm amb}$ & 42 cm$^{-3}$ & 42 cm$^{-3}$ & 42 cm$^{-3}$ \\
\hline
$n_1$ & 420 cm$^{-3}$ & 420 cm$^{-3}$ & 420 cm$^{-3}$ \\
$x_1$ & -0.9 pc & -0.9 pc & -0.9 pc \\
$R_1$ & 0.9 pc & 0.9 pc & 0.9 pc \\
$v_{\rm 1,x}$ & 1.0 km s$^{-1}$ & 1.0 km s$^{-1}$ & 1.0 km s$^{-1}$ \\
$v_{\rm 1,z}$ & 0.25 km s$^{-1}$ & 0.25 km s$^{-1}$ & 0.25 km s$^{-1}$ \\
$B_{\rm 1,z}$ & 10 $\mu$G & 10 $\mu$G & 0 \\
\hline
$n_2$ & 420 cm$^{-3}$ & 420 cm$^{-3}$ & 420 cm$^{-3}$ \\
$x_2$ & 0.9 pc & 0.9 pc & 0.9 pc \\
$R_2$ & 0.9 pc & 0.9 pc & 0.9 pc \\
$v_{\rm 2,x}$ & -1.0 km s$^{-1}$ & -1.0 km s$^{-1}$ & -1.0 km s$^{-1}$ \\
$v_{\rm 2,z}$ & -0.25 km s$^{-1}$ & -0.25 km s$^{-1}$ & -0.25 km s$^{-1}$ \\
$B_{\rm 2,z}$ & -10 $\mu$G & 10 $\mu$G & 0 \\
\hline
\end{tabular}\\
{$L$ is the domain size. $T_{\rm dust}$ is the initial dust temperature. $T_{\rm gas}$ is the initial gas temperature. $\zeta$ is the cosmic-ray ionization rate. $G$ is the ISRF in unit of Habing field $G_0$. DGR is the dust-to-gas mass ratio. $n_{\rm amb}$ is the ambient H$_2$ number density. $n_1$ is the Cloud1 H$_2$ number density. $x_1$ is the Cloud1 location. $R_1$ is the Cloud1 radius. $v_{\rm 1,x}$ is the Cloud1 collision velocity. $v_{\rm 1,z}$ is the Cloud1 shear velocity. $B_{\rm 1,z}$ is the Cloud1 B-field. $n_2$ is the Cloud2 H$_2$ number density. $x_2$ is the Cloud2 location. $R_2$ is the Cloud2 radius. $v_{\rm 2,x}$ is the Cloud2 collision velocity. $v_{\rm 2,z}$ is the Cloud2 shear velocity. $B_{\rm 2,z}$ is the Cloud2 B-field. See Figure \ref{fig:cmr} for illustration.}
\end{table}
The setup for the fiducial model follows the K21 fiducial model (see K21 Figure 8), but with several additional parameters. Following the nomenclature in K21, we name our fiducial model \mbox{MRCOLA} (MRCOL+\textsc{Arepo}). As shown in Figure \ref{fig:cmr}, Cloud1 has density $n_1$, radius $R_1$, colliding velocity $v_{\rm 1,x}$ (positive x), shear velocity $v_{\rm 1,z}$ (positive z), magnetic field $B_{\rm 1,z}$ (positive z); Cloud2 has $n_2$, radius $R_2$, colliding velocity $v_{\rm 2,x}$ (negative x), shear velocity $v_{\rm 2,z}$ (negative z), magnetic field $B_{\rm 2,z}$ (negative z). Table \ref{tab:ic} lists the values for these parameters. They are the same as those in K21. With our adopted reference cell mass, the equivalent cell size in the cloud is 0.014 pc before refinement, which is about twice the cell size in K21.
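The quoted cell size can be reproduced from the target mass and the cloud density if one interprets it as the spherical-equivalent cell radius (an assumption on our part; a cubic cell of the same volume would be about 0.023 pc across):

```python
import math

# CGS constants
m_H  = 1.6726e-24   # g
Msun = 1.989e33     # g
pc   = 3.086e18     # cm

m_cell = 3.6e-4 * Msun              # target cell mass
rho    = 420.0 * 2.8 * m_H          # cloud density for n_H2 = 420 cm^-3
V      = m_cell / rho               # cell volume before refinement
r_eq   = (3.0 * V / (4.0 * math.pi))**(1.0 / 3.0) / pc  # ~0.014 pc
```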
To avoid artifacts due to the periodic boundary condition, the computation domain is enlarged to 8 pc in each dimension, which is twice the size of the computation domain in K21. This is because density waves due to the colliding gas could propagate through the boundaries and impact the dynamical evolution of the filament and the sink formation. The new setup has more padding area between the clouds and the boundaries, so the boundary waves do not affect the central filament before $t=3$ Myr (the ending time).
We follow K21 in adopting an initial dust and gas temperature of 15 K. Given the low temperature, we simply assume a fully molecular composition for the system. We include a standard ISRF of 1.7$G_0$, where $G_0$ is the Habing field, illuminating the computation domain from all directions. The cosmic-ray ionization rate is fixed at $3.0\times10^{-17}~{\rm s}^{-1}$. A standard dust-to-gas mass ratio of 1/141 is adopted. Table \ref{tab:ic} summarizes the parameters.
\section{Results and Analysis}\label{sec:results}
\subsection{Fiducial model}\label{subsec:fiducial}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{vxslices8pc.pdf}
\caption{
Density slice plots for the collision midplane (x=4 pc) as a function of time for \mbox{MRCOLA}. The color plot is in unit of $n_{\rm H_2}$ (cm$^{-3}$). The time step is shown at the upper right. The white arrows show the velocity vectors in the plane. Their lengths are proportional to the magnitudes. The red circles show the sink locations. Their sizes are proportional to the sink masses.}\label{fig:mrcolavxlin}
\end{figure*}
Figure \ref{fig:mrcolavxlin} shows density slice plots for the x=4 pc plane as a function of simulation time (upper right). The color background shows the density field. We use a linear color scale to highlight the clumpy structures. The white vectors show the velocity field. Here we only include snapshots from t=0.6 Myr to t=2.2 Myr. We also zoom in to the central 2 pc region to focus on the filament. A more complete view of the domain is shown in Appendix \S\ref{app:fiducial}.
The slice is the collision midplane where the compression pancake forms. The pancake is the dense structure in the central region at t=0.6 Myr (more prominent at t$<$0.6 Myr in Figure \ref{fig:mrcolavx}). It pushes gas outward at its periphery, which produces the radial velocity vectors at the boundaries. In the central 1 pc region, however, the velocity vectors point toward the z=4 pc axis. This inward velocity is caused by the magnetic reconnection: the reconnected field pulls the gas toward the central axis, as described in \S\ref{sec:intro}. Through t=0.8 Myr, the inward velocity persists and more material continues to be pulled onto the central axis where the filament forms (also see Figure \ref{fig:mrcolavy}). Note that the filament is clumpy throughout its evolution.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{yslices8pcBB.pdf}
\caption{
Density slice plots for the y=4 pc plane as a function of time. The color plot is in unit of $n_{\rm H_2}$ (cm$^{-3}$). The time step is shown at the upper right. We are viewing the filament cross-section at (x,z)=(4 pc,4 pc). Magnetic fields are shown as arrow stream lines in the plots.}\label{fig:mrcolaybb}
\end{figure*}
Figure \ref{fig:mrcolaybb} shows the CMR phenomenon from a different angle. Here we show the y=4 pc slice at different time steps, with magnetic field lines overlaid on the density slice plots. In the center, we see the cross-section of the filament, which is wrapped by circular fields. The circular fields result from magnetic reconnection at the two ends of the compression pancake. The reconnection creates field loops that enclose the pancake, and the magnetic tension squeezes the pancake to form the filament in the center. The process is the same as that shown in K21. Therefore, the filament formation mechanism through CMR is confirmed with \textsc{Arepo}.
In Figure \ref{fig:mrcolavxlin}, the filament continues to become denser through t=1.2 Myr. At t=1.4 Myr, the filament starts to collapse along its main axis, which is also indicated by the longitudinal velocity vectors in the filament. Meanwhile, gas in the vicinity of the filament shows converging velocity vectors, especially in the horizontal directions. The convergence indicates that the filament gravity dominates the central region and the region begins a global collapse.
In fact, the collapsing gas spirals into the filament. Figure \ref{fig:mrcolavy} shows the gas kinematics more clearly in the y=4 pc plane. Here, the x=4 pc line corresponds to the slice in Figure \ref{fig:mrcolavxlin}. At t$\gtrsim$1.2 Myr, we can see the gas around the central filament (cross-section) spiraling toward the filament. Note that the horizontal inflow velocity in the t=1.2 Myr panel of Figure \ref{fig:mrcolavxlin} is not the gas flow along the field-reversal plane, which has a spiral shape in Figure \ref{fig:mrcolavy}. Due to the reconnected field, dense gas along the field-reversal plane continues to be dragged into the central filament, as shown in the t=1.2 Myr panel of Figure \ref{fig:mrcolaybb}. However, this field-reversal plane is not captured in the x=4 pc slice plot in Figure \ref{fig:mrcolavxlin}. The horizontal inflowing gas in Figure \ref{fig:mrcolavxlin} is indeed due to gravity.
Also visible in the t=1.2 Myr panel of Figure \ref{fig:mrcolavy} are multiple striations perpendicular to the field-reversal plane. They are also perpendicular to the incoming spiral velocity and the magnetic field (Figure \ref{fig:mrcolaybb}). In the x=4 pc plane, shown in Figure \ref{fig:mrcolavx}, there are multiple vertical striations parallel to the central filament. Looking again at Figure \ref{fig:mrcolavy}, we realize that the striations are actually dense sheets perpendicular to the magnetic field. The spiraling gas moves along the field lines and accumulates in sheets, similar to what was seen in previous studies, e.g., \citet{2007MNRAS.382...73T}. Later, these sheets merge into a spiral structure (nearly perpendicular to the field reversal) to be accreted by the filament.
In Figure \ref{fig:mrcolavxlin}, at t=1.6 Myr, the filament has almost shrunk into a dense core while the collapse continues along the horizontal and vertical directions. Some gas moves away from the region through an X-shaped outflow (not a protostellar outflow). Up to this point, no sinks have formed. So at least in the fiducial model \mbox{MRCOLA}, the clumpy dense gas initially in the filament is not able to form stars. In fact, the formation of the clumpy gas differs from other filament models in which dense clumps form through the fragmentation of a critical filament. Here, the dense gas is moved to, and bound at, the central axis piece by piece; the gas is already clumpy during the transport \citep[cf.][for a similar but not identical scenario in which sub-filaments merge into a single large structure]{2014MNRAS.445.2900S}. The gas clumps constitute the filament, in almost the reverse of a fragmentation process. Essentially, the filament morphology is determined by the dynamics due to the magnetic field.
\subsection{Cluster Formation}\label{subsec:sink}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vxslices8pc.pdf}
\caption{
Zoom-in view of the density slice plot for the x=4 pc plane in the fiducial model. The color plot is in unit of $n_{\rm H_2}$ (cm$^{-3}$). Each slice plot centers at the mass-weighted cluster center. The arrows show the velocity vectors in the plane. Their sizes are proportional to the magnitudes. The red semi-opaque circles show the sink location. Their sizes are proportional to the sink mass.}\label{fig:zvx}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vyslices8pc.pdf}
\caption{
Zoom-in view of the density slice plot for the y=4 pc plane in the fiducial model. The format is the same as Figure \ref{fig:zvx}.} \label{fig:zvy}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vzslices8pc.pdf}
\caption{
Zoom-in view of the density slice plot for the z=4 pc plane in the fiducial model. The format is the same as Figure \ref{fig:zvx}.}\label{fig:zvz}
\end{figure*}
Once the filament starts to collapse along its main axis, dense gas accumulates in the central region, which we term the dense core \citep[not necessarily the dense core in observations, e.g.,][]{2018ApJ...855L..25K,2019ApJ...873...31K,2021ApJ...912..156K}. Soon after t=1.4 Myr, the first sink forms in the core. By t=1.6 Myr, 9 sinks are present in the core, as indicated by the red circles. The sink formation shows that the CMR mechanism is capable of forming stars, which answers the opening question in \S\ref{sec:intro}. Not only does CMR form stars, but it is also capable of producing a cluster (see below). However, star formation does not happen during the initial filament phase; it happens after the filament collapses into a central dense core.
In Figure \ref{fig:mrcolavxlin}, the collapse continues from t=1.8 Myr to t=2.2 Myr. The sinks grow more massive by accreting the inflowing gas; the most massive sink in the t=2.2 Myr panel is 8.1 M$_\odot$. Meanwhile, dense gas continues to flow toward the cluster-forming region, as indicated by the converging velocity vectors, feeding the mini cluster. The central dense core and the star cluster grow together, showing a concurrent, dynamical picture of star cluster formation. By t=2.2 Myr, some sinks would likely host protostars, whose feedback should change the subsequent fragmentation and accretion. Since we do not include the feedback, we do not continue the simulation further.
Combining all the analyses above, we can see that overall the CMR star formation (CMR-SF) is a two-phase process, at least in the specific model of \mbox{MRCOLA}. First, a dense, clumpy filament forms due to magnetic tension. Second, the filament collapses and forms a dense core in which a star cluster emerges.
To better show the cluster structure and how the collapsing gas feeds the cluster growth, we zoom in to the central 0.2 pc region and show the slice plots. Figures \ref{fig:zvx}, \ref{fig:zvy}, \ref{fig:zvz} show the zoom-in view of constant x, y, z planes, respectively. Each slice plot centers at the mass-weighted cluster center. The red filled circles show the sinks.
From the three figures we can see that the dense gas around the cluster is chaotic. Figure \ref{fig:zvy} shows spiral gas structures and velocities, consistent with our interpretation of the large scales in \S\ref{subsec:fiducial}. As we discussed, the spiral structure originates from the initial shear velocity. However, it does not develop into a flat disk, as we can see in Figures \ref{fig:zvx} and \ref{fig:zvz}. A flattened structure is perhaps visible at t=2.2 Myr, but more often the dense gas is disordered. At times, gas streamers show coherent inflow velocities toward the cluster; they are the main source of mass supply feeding the accreting cluster. In contrast, the same cloud-cloud collision without magnetic fields develops a flat disk starting from 1.0 Myr (see \S\ref{subsec:control}).
As shown in Figures \ref{fig:zvx} and \ref{fig:zvz}, the cluster is generally distributed in a constant-y plane, more so at $t=2.2$ Myr. Figure \ref{fig:zvy} shows that the cluster rotates in the constant-y plane, following the rotation of the dense gas. The angular momentum the cluster inherits from the gas makes it settle into a stellar disk. The reason the cluster is not as chaotic as the dense gas is that the sinks only interact with the gas through gravity, i.e., they do not feel the gas pressure or the magnetic field.
There is only one cluster in the computation domain and it is highly concentrated within a diameter $\lesssim$ 0.05 pc ($\sim10^4$ AU). The cluster concentration is largely due to the dense gas concentration. Again in Figure \ref{fig:zvy}, we can see that the size of the densest gas is also about 0.05 pc, just enclosing the cluster. Here, the gas density reaches $\gtrsim10^7$ cm$^{-3}$. Outside the cluster, the gas streamers/spirals connect the system to the larger collapsing region.
Within the cluster, we can see that the most massive members tend to stay at the center. This apparent mass segregation is more prominent from t=2.0 Myr to t=2.2 Myr. The segregation is not surprising, because the stars closer to the collapse center form earlier and have the advantage of accreting denser gas. This is similar to the idea of ``Competitive Accretion'' \citep{2001MNRAS.323..785B,2006MNRAS.370..488B}, in which mass segregation is not the result of initial conditions but a natural outcome of accretion at different locations.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{tr.pdf}
\caption{
Distance from a sink to the mass-weighted center of the cluster $R_s$ as a function of time $t$. The color shows the sink ID. Larger IDs indicate later formation time. The size of the circle represents the sink mass. The normalization is different from previous Figures. To reduce overlap, we spread the circles along the horizontal axis, while the valid time steps only include 1.6, 1.8, 2.0, and 2.2 Myr.}\label{fig:tr}
\end{figure}
In Figure \ref{fig:tr}, we show the sink locations as a function of time. $R_s$ is defined as the distance from a sink to the mass-weighted cluster center. Darker colors indicate sinks formed earlier (smaller sink IDs). We can see that the most massive sinks at t=2.2 Myr are those that formed earliest; they also stay near the cluster center at all times. Sinks formed at larger distances do not grow as massive as those near the center. Meanwhile, new members (lighter colors) emerge at different radii; those near the center will likely grow faster than those farther away.
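The quantity $R_s$ can be sketched as follows (toy masses and positions chosen for illustration, not taken from the simulation):

```python
import numpy as np

def sink_distances(masses, positions):
    """positions: (N, 3) array; returns the (N,) distances R_s of each
    sink from the mass-weighted centre of the cluster."""
    centre = np.average(positions, axis=0, weights=masses)
    return np.linalg.norm(positions - centre, axis=1)

# Toy cluster: a massive central sink and two light outer sinks.
m = np.array([8.1, 1.0, 0.5])                              # M_sun
x = np.array([[0.0, 0, 0], [0.96, 0, 0], [-0.96, 0, 0]])   # pc
R_s = sink_distances(m, x)
```

Because the centre is mass-weighted, the massive sink sits almost exactly at $R_s \approx 0$, while the light sinks carry most of the offset.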
\subsection{Control models}\label{subsec:control}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vxslices8pc_noB.pdf}
\caption{
Same as Figure \ref{fig:zvx} but for the \mbox{COLA\_noB} model. The size of the domain is twice that of Figure \ref{fig:zvx}. The time spans from t=0.8 Myr to t=1.4 Myr.}\label{fig:zvxnob}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vyslices8pc_noB.pdf}
\caption{
Same as Figure \ref{fig:zvy} but for the \mbox{COLA\_noB} model. The size of the domain is twice that of Figure \ref{fig:zvy}. The time spans from t=0.8 Myr to t=1.4 Myr.}\label{fig:zvynob}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{zoom_vzslices8pc_noB.pdf}
\caption{
Same as Figure \ref{fig:zvz} but for the \mbox{COLA\_noB} model. The size of the domain is twice that of Figure \ref{fig:zvz}. The time spans from t=0.8 Myr to t=1.4 Myr.}\label{fig:zvznob}
\end{figure*}
For comparison, we run two more simulations with a uniform field (hereafter \mbox{COLA\_sameB}) and no field (hereafter \mbox{COLA\_noB}), respectively. All other parameters remain the same. Table \ref{tab:ic} lists the two models with their parameters. In \mbox{COLA\_sameB}, no sinks form (up to 3 Myr) because the gas density never gets high enough. It is not surprising that magnetic pressure hinders the formation of dense gas \citep[also see][]{2020ApJ...891..168W}.
In \mbox{COLA\_noB}, however, sinks do form and show different behaviours compared to \mbox{MRCOLA}. Figures \ref{fig:zvxnob}, \ref{fig:zvynob}, \ref{fig:zvznob} show zoom-in slice plots for \mbox{COLA\_noB}. The zoom-in region is twice that in Figures \ref{fig:zvx}, \ref{fig:zvy}, \ref{fig:zvz}. First, sinks form earlier in \mbox{COLA\_noB} than \mbox{MRCOLA}. The first sink already forms at t=1.0 Myr in \mbox{COLA\_noB}. By t=1.4 Myr, there are 92 sinks present in the domain. This is more than the number of sinks (66) in \mbox{MRCOLA} at t=2.2 Myr which is 0.8 Myr later. Again, since we do not include feedback, we stop the \mbox{COLA\_noB} simulation at t=1.4 Myr. By this time, the most massive sink is 4.4 M$_\odot$.
Second, the overall star formation rate in \mbox{COLA\_noB} is higher than that in \mbox{MRCOLA}. Within 0.43 Myr, \mbox{COLA\_noB} sinks have a total mass of 41 M$_\odot$. The star formation rate is $9.5\times10^{-5}\epsilon$ M$_\odot$ yr$^{-1}$, where $\epsilon$ is the fraction of sink mass that is eventually converted to stars. In \mbox{MRCOLA}, within 0.72 Myr, 66 sinks form with a total mass of 33 M$_\odot$. The star formation rate is $4.6\times10^{-5}\epsilon$ M$_\odot$ yr$^{-1}$. Here we simply assume the same efficiency $\epsilon$ for both models. Then, the star formation rate of \mbox{MRCOLA} is a factor of 2.1 lower than that of \mbox{COLA\_noB}.
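These rates follow directly from the quoted sink masses and durations. As a quick arithmetic sketch in Python (the efficiency $\epsilon$ is a common factor and cancels in the ratio):

```python
# Back-of-envelope check of the quoted star formation rates, SFR = M_sink / dt.
# Masses and durations are the values quoted in the text.
def sfr(total_sink_mass_msun, duration_myr):
    """Star formation rate in M_sun/yr (per unit efficiency epsilon)."""
    return total_sink_mass_msun / (duration_myr * 1.0e6)

sfr_noB = sfr(41.0, 0.43)   # COLA_noB: 41 M_sun of sinks in 0.43 Myr
sfr_mr  = sfr(33.0, 0.72)   # MRCOLA:   33 M_sun of sinks in 0.72 Myr

print(f"COLA_noB: {sfr_noB:.1e}, MRCOLA: {sfr_mr:.1e} M_sun/yr; "
      f"ratio = {sfr_noB / sfr_mr:.1f}")   # ratio = 2.1
```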
Third, \mbox{COLA\_noB} sinks form in a wider region compared to the fiducial model \mbox{MRCOLA}. Figure \ref{fig:zvynob} shows the y=4 pc slice from \mbox{COLA\_noB}. We can see that sinks spread over a region of $\sim$0.2 pc which is about four times the scale of the sink formation region in \mbox{MRCOLA}. At t=1.2 Myr, the sinks form along the dense gas elongation in multiple groups that are almost equally spaced. The elongation is the compression layer due to the collision. In \mbox{MRCOLA}, this elongation is squeezed by field loops into the central core, which is why we see a tighter cluster in Figure \ref{fig:zvy}. Figure \ref{fig:zvynob} also shows that the cluster follows the gas spiraling motion in the disk.
Compared to \mbox{MRCOLA}, \mbox{COLA\_noB} sinks are embedded in a better-defined dense gas disk. As shown in Figures \ref{fig:zvxnob} and \ref{fig:zvznob}, the sinks quickly settle in the y=4 pc plane after the first sink formation. Gas is falling from above and below the disk. This indicates a global collapse at t$\gtrsim$1.2 Myr that feeds the dense gas and cluster accretion in the disk. In contrast, due to the complex magnetic fields, the global gas inflow in \mbox{MRCOLA} is only viable through those streamers, although a coherent inflow from above and below the cluster temporarily exists at t$\lesssim$1.6 Myr (see Figures \ref{fig:zvx} and \ref{fig:zvz}). The global collapse toward the central core is disturbed by the wrapping field, which is part of the reason (at large scales) that \mbox{MRCOLA} has a lower star formation rate than \mbox{COLA\_noB}.
\section{Discussion}\label{sec:discus}
\subsection{The role of CMR in cluster formation}\label{subsec:cmrrole}
Comparing \mbox{MRCOLA} and \mbox{COLA\_noB}, we can summarize two properties of CMR-SF that distinguish it from other mechanisms. First, CMR-SF is confined in a relatively small region; the cluster is very tight at least during the accretion. Second, CMR-SF is relatively slow; inflowing gas is only able to feed the cluster through streamers. The former is mainly due to the confinement of the helical field. The latter is again due to the helical field, which is orthogonal to the gas inflow.
In fact, if we rethink the CMR process, it is essentially a process that gathers a large volume ($\sim$1 pc) of gas and compresses it to a small volume ($\sim$0.05 pc), creating an over-dense region that forms stars and also a potential well that accretes more gas. First, the colliding clouds bring gas from afar; the collision compresses the 3D spheres into a 2D sheet. Second, CMR compresses the gas into a dense filament; the reconnected field compresses the 2D sheet into a 1D filament. Third, the filament collapses into a dense core and a cluster forms; gravity compresses the 1D filament into a 0D core. The three physical processes, i.e., the collision, the reconnection, and gravity, compress relatively diffuse gas into much denser gas through a step-by-step dimension reduction.
As shown in \S\ref{subsec:control}, the cluster would be spread over a larger disk if the CMR mechanism were absent. One speculation is that the concentrated cluster in CMR is more likely to be bound, compared to the cluster without CMR. While we need more models to confirm the boundedness, the difference makes CMR a possible explanation for the highly concentrated clusters seen in observations. However, as will be discussed in \S\ref{subsec:fb}, protostellar heating may suppress the fragmentation in the gas concentration in \mbox{MRCOLA}. Thus the number of stars is limited. If the first few stars accrete the majority of the gas, there may be more massive stars in the reduced cluster. Energetic feedback from the massive stars will eventually disperse the gas.
However, it is also possible that CMR-SF produces multiple clusters if the initial clouds are much larger. A collision between such clouds may form a much larger filament with multiple large fragments, each forming a star cluster. The large filament may or may not have the longitudinal collapse which pushes everything to the center. Future work will address this scenario.
The reconnected field from CMR makes it difficult for the gas inflow to feed the central star formation. In the absence of CMR, gas collapses and falls onto the star-forming disk easily through large-scale flows. With CMR, gas falls to the center in similarly coherent flows initially. But a toroidal region around the central core narrows the angle of inflowing gas (almost no horizontal flow toward the core in Figures \ref{fig:zvx} and \ref{fig:zvz}). It is the magnetic pressure from the helical/toroidal field that hinders the gas inflow. Shortly after, the gas movement in the vicinity of the cluster becomes chaotic. The inflowing gas carries magnetic flux to the central region, changing the topology of the helical field. Everything becomes more chaotic and the gas inflow becomes inefficient. Now, gas can only reach the cluster through streamers.
We can see that the helical field, which is the natural result from CMR, is responsible for the disturbance of the mass supply, which is why \mbox{MRCOLA} has a relatively low star formation rate. One thing that would be interesting to explore is the effect of magnetic field diffusion due to Ohmic resistivity and ambipolar diffusion. At such a small scale, the magnetic Reynolds number should become small, and magnetic diffusion should become important. If some amount of the magnetic energy is lost, the field will exert less pressure on the inflowing gas which may resume the coherent flow. In turn, the star formation rate may approach that in the case without CMR. Future work should address this uncertainty.
\subsection{Effect of protostellar feedback}\label{subsec:fb}
The concentrated cluster in \mbox{MRCOLA} is probably a result of the lack of protostellar heating. As we can see from Figure \ref{fig:tr}, the separation between the more massive sinks is $\lesssim$0.01 pc (2000 AU). Around each sink creation site, the typical gas density is $\gtrsim10^7$ cm$^{-3}$, corresponding to a Jeans scale of $\sim$0.005 pc at $\sim$20 K (the typical temperature in the CMR-filament). So the crowdedness is a result of fragmentation in the central dense core.
However, protostars should form during the cluster formation because the free-fall time is just of order 10000 yr for a density of $10^7$ cm$^{-3}$. The protostellar accretion will inevitably heat the surroundings, thus increasing the overall Jeans scale in the core. Consequently, the number of fragments/sinks should be reduced. In fact, \citet{2009MNRAS.392.1363B} have studied the effect of protostellar radiative feedback. They found that the number of protostars was reduced by a factor of 4 in the radiation hydrodynamic simulation compared to the hydrodynamic simulation. Observationally, a recent ALMA result \citep{2017ApJ...837L..29H} showed that protostellar accretion can impact a volume of 2000 AU scale, which is larger than the sink separation in \mbox{MRCOLA}. Therefore, the sink number in our simulations is an upper limit.
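The two scales quoted above can be checked to order of magnitude. A minimal sketch in Python, assuming a mean molecular weight $\mu = 2.33$ and the common conventions $t_{\rm ff} = \sqrt{3\pi/(32G\rho)}$ and $\lambda_J = c_s\sqrt{\pi/(G\rho)}$ (other Jeans-length prefactor conventions shift the result by factors of order unity):

```python
import math

# Order-of-magnitude check at n ~ 1e7 cm^-3 and T ~ 20 K (CGS units throughout).
G   = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.381e-16   # Boltzmann constant, erg/K
m_H = 1.673e-24   # hydrogen mass, g
pc  = 3.086e18    # cm
yr  = 3.156e7     # s

n, T, mu = 1.0e7, 20.0, 2.33
rho = n * mu * m_H                                        # mass density, g/cm^3
c_s = math.sqrt(k_B * T / (mu * m_H))                     # isothermal sound speed
t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / yr   # free-fall time, yr
lam_J = c_s * math.sqrt(math.pi / (G * rho)) / pc         # Jeans length, pc

print(f"t_ff ~ {t_ff:.2g} yr, lambda_J ~ {lam_J:.2g} pc")
```

This gives $t_{\rm ff}\approx1\times10^4$ yr, consistent with the quoted order-$10^4$ yr free-fall time, and $\lambda_J\approx0.01$ pc, within a factor of two of the quoted $\sim$0.005 pc Jeans scale, the difference being set by the chosen prefactor convention.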
However, unless the feedback can stop the global collapse completely, the inflowing gas will keep transferring material to the central cluster, continuously feeding the protostellar accretion. Naively, we would expect more massive stars in \mbox{MRCOLA}. For instance, \citet{2011ApJ...740...74K} showed that the first generation of protostars from the initial fragmentation will keep accreting the inflowing gas, resulting in more massive stars. Similar results may happen in the CMR-SF if we consider protostellar heating. However, energetic feedback from the massive stars will probably halt further accretion and even completely disperse the gas. In the future, a CMR simulation with protostellar feedback will clarify the situation.
\subsection{Applicability of CMR to star-forming filaments}
The CMR-SF in the fiducial model occurs after the filament collapses. During the filament phase, there is no sink formation. The sterility of the filament, at least in the one model in this paper, raises the question of the applicability of this model to star-forming filaments in the Galaxy.
For example, the Orion A filament is elongated and star formation is already ongoing in multiple (OMC-1/2/3/4) regions, unlike our fiducial model where the filament first collapses into a core and then stars form in the core. Furthermore, how likely is the initial condition of antiparallel B-fields to occur in the cold ISM? The answers to these two questions will help clarify how common the CMR-SF mechanism is in filament and star formation.
To address the likelihood of antiparallel B-fields, we go back to the original proposal (K21) of the CMR mechanism. The K21 model, along with the fiducial model in this paper, was established specifically for the Stick filament in Orion A. At first glance, the initial condition (see K21 figure 8) that led to CMR seemed unusual. However, it was what the observational facts showed us. The field reversal around Orion A was clearly shown in \citet{1997ApJS..111..245H} and later in \citet{2019A&A...632A..68T}, using two different methods. The former showed that the field-reversal was a large-scale feature, not just a local small-scale stochastic fluctuation. Then, \citet{2019A&A...629A..96S} showed that the plane-of-the-sky B-field was nearly perpendicular to the filament. Combining the B-field observations and the two-component pattern in the PV-diagram (K21 figure 5), K21 set up the only possible initial condition in their figure 8, which surprisingly formed a filament at the collision front instead of a compression pancake. The K21 model successfully reproduced a number of observational facts, including the morphology (which was the motivation for the model as there were several ring/fork-like structures in the Stick), the density PDF, the line channel maps, and the PV-diagrams. Therefore, at least for the Stick filament, the CMR model was undoubtedly applicable.
In a broader context, how likely is the antiparallel B-field in the Milky Way and other galaxies? Are there molecular clouds formed via the CMR mechanism? The field-reversal is common in theoretical studies. In fact, with high enough resolution, a turbulent MHD simulation will show many field-reversal interfaces \citep[e.g.,][]{2018PhRvL.121p5101D,2020ApJ...895L..40C}.
In the Galactic disk, Faraday Rotation measurements have long shown field reversal in our solar neighborhood, and several authors \citep[e.g.,][]{1983ApJ...265..722S,1994A&A...288..759H} have proposed a bisymmetric spiral disk field for the Milky Way. Such configurations have multiple field reversals along spiral arms in which a cloud-cloud collision would trigger CMR in a global simulation \citep[][note that we need a large dynamic range to be able to capture CMR]{2022ApJ...933...40K}. So, in both theoretical and observational senses, the CMR mechanism is a viable physical process that produces dense gas and clouds. Most recently, Faraday Rotation measurements showed that the Orion A cloud, the Perseus cloud, and the California cloud all sit between reversed B-fields \citep{2020IAUGA..30..103T}. It could be just a coincidence that all these clouds sat between two large-scale fields with inverted polarity. However, the CMR model showed that if there was a field-reversal and a cloud-cloud collision, the filament formation was automatically fulfilled at the field-reversing interface.
Strictly speaking, it is unlikely for the B-fields to be exactly antiparallel in the sense of probability theory. In reality, there is also possibly small-scale fluctuation in the B-field orientation due to turbulence. K21 briefly explored these effects. First, if the initial B-field had a relative angle of 20 degrees on the two sides, the cloud-cloud collision was still able to create a dense filament (see their figure 26). But the filament in the middle of the compression pancake had a lower density and was shorter. With an initial B-field angle of 90 degrees, the collision produced a diagonally symmetric dense patch (see their figure 27). In general, the trend was that CMR was less capable of forming a dense filament with a larger B-field tilting angle, which was not surprising because the reconnected fields were no longer loops in a flat plane. Second, K21 ran a CMR simulation with turbulence that was injected at the beginning. They found that the CMR with turbulence was still able to form the filament, which had a smaller width and a wiggling morphology (see K21 figure 28). Theoretical studies have shown that turbulence accelerates magnetic reconnection \citep[e.g.,][]{1999ApJ...517..700L}. So, as long as the initial B-field is somewhat antiparallel, a cloud-cloud collision shall trigger CMR \citep[see detailed discussions in][]{2022ApJ...933...40K}. Here we use \textsc{Arepo} to explore CMR-SF with the initial B-field angle ranging from 10 to 90 degrees in steps of 10 degrees. We find that models with a tilting angle $\lesssim40$ degrees are able to form sinks. Models with a tilting angle $\gtrsim50$ degrees do not form sinks until the end of the simulation (3 Myr). Also, the larger the initial tilting angle, the less sink formation there is, which is consistent with the trend of dense gas formation.
To answer the question about the sterility of the filament, we first need to discuss the fate of the Stick. As shown by the fiducial model in this paper, the filament will collapse along its main axis toward the center where a dense core forms. In the core, sink formation happens once the core density is significantly increased, and eventually a cluster emerges. In reality, will the Stick do the same thing? As shown by \citet{2000MNRAS.311...85F}, under the assumption of axisymmetry, an initially stable filament remains so against radial perturbation. The stability originates from the fact that the gravitational potential of the filament is independent of its radius, so its self-gravity will never dominate due to radial contraction. However, the same is not true for the longitudinal collapse, as the potential scales as $\sim L^{-1}$ \citep[$L$ is the filament length,][]{2000MNRAS.311...85F}, which indicates that the filament will inevitably collapse along its main axis. Currently, the Stick filament is still cold and starless. Most likely, the filament will collapse longitudinally and form stars.
Following the above reasoning, it becomes clear that the key to the sterility question is the length scale of the filament. The filament will collapse along its main axis eventually, so it cannot form stars before collapsing into the central core if it is too short and the gravitational instability does not have time to grow. The filament in the fiducial model has a length $\sim$1 pc, which is also the length scale of the Stick filament. In the simulation, the filament collapses within $\sim$1.4 Myr. As shown by \citet{2000MNRAS.311..105F}, the growth timescale for the gravity-driven mode is $\sim$1.8 Myr, longer than the collapse time of the filament \citep[also see][]{1997ApJ...480..681I}. Of course, the CMR-filament has rich sub-structures, some of which are quite dense ($\gtrsim 10^5$ cm$^{-3}$). They break the axisymmetry assumption in the \citet{2000MNRAS.311...85F} model.
In fact, the dense sub-structures do not result from the traditional sense of filament fragmentation. They are created by magnetic tension and brought to the filament. Instead of forming a filament first and then letting it fragment, the CMR mechanism creates multiple clumpy sub-structures and then brings them together to constitute a filament (a bottom-up process). So the dense sub-structures exist from the beginning of the filament. They have a chance to grow denser if the filament lasts longer, possibly followed by star formation. We can imagine two 20 pc clouds colliding. Their sizes are $\sim$10 times larger than those in the fiducial model, and the collision timescale is also 10 times longer. Now it will take much longer for the filament to collapse into a central core. Sink formation should happen during the filament phase before the core formation. In fact, evidence has shown that the densest part (OMC-1) of the integral-shaped filament (ISF) in Orion A is undergoing a longitudinal collapse \citep{2017A&A...602L...2H}. It is also the region with the most active star formation in Orion A (the Trapezium cluster). Meanwhile, in the northern OMC-2/3 regions, star formation is also ongoing \citep[e.g.,][]{2021A&A...653A.117B}. For even longer filaments like Nessie \citep[$\gtrsim 100$ pc,][]{2010ApJ...719L.185J,2014ApJ...797...53G}, the longitudinal collapse may not bring everything into a central core before other dynamical processes, e.g., Galactic shear and feedback, break the filament. In fact, as shown by the mid-infrared images \citep{2010ApJ...719L.185J}, the filament Nessie breaks into multiple dark sub-filaments, each showing signs of protostellar activity, indicating local collapses.
Alternatively, one can imagine the collision between two (almost) plane-parallel gas structures, whatever their physical and chemical states are (cold neutral medium vs. cold neutral medium, or warm neutral medium vs. warm neutral medium, or even cold neutral medium vs. warm neutral medium). As long as there are protruding structures on the surface that collide with (nearly) antiparallel B-fields, CMR shall be triggered and dense gas shall form \citep{2022ApJ...933...40K}. For instance, it can be the collision between the expanding bubble from a massive star or supernova and a wall of atomic/molecular gas. The surfaces of the bubble and the wall are likely not smooth but with ripples. As long as the bubble brings the antiparallel B-field, the collision shall trigger multiple CMR events at the collision interface. Each of these events will form a dense filament. Depending on the geometry, all the filaments may constitute a large filament or a web of filaments. Following the above reasoning about the filament collapse, we may see star formation happening at different locations. More interestingly, these star-forming clouds will have turbulence that originates from the chaotic reconnected field. The helical field will guide the incoming plasma into different directions, converting the coherent colliding velocity into chaotic turbulent energy, which gives a natural explanation of one origin of turbulence in molecular clouds. Future studies shall address all these physical processes.
\section{Summary and Conclusion}\label{sec:conclu}
In this paper, we have investigated star formation in the context of collision-induced magnetic reconnection (CMR). Using the \textsc{Arepo} code, we have confirmed the filament formation via CMR, which was first shown in \citet{2021ApJ...906...80K}. With the sink formation module in \textsc{Arepo}, we have shown that the CMR-filament is able to not only form stars, but a mini star cluster. We stop the fiducial model simulation at t=2.2 Myr when there are 66 sinks in the computation domain. Further evolution of the gas and cluster is likely impacted by protostellar feedback, including outflows and radiative heating, which we currently do not consider.
At least in the fiducial model, the CMR star formation (CMR-SF) is a two-phase process. The first phase is the filament formation due to the magnetic reconnection. During this phase, we see a dense, clumpy filament with no star formation. The reason is that the cloud is not bound by gravity but by the surface pressure from the wrapping helical magnetic field. This starless phase lasts about 1.4 Myr. In the second phase, the filament starts to collapse longitudinally into a central dense core, shortly before the first sink formation in the core. With continuous fragmentation, a star cluster forms in the core. Those stars that become massive later form earlier and stay near the cluster center, while the outer part of the cluster preferentially consists of lower mass stars. The apparent mass segregation is indicative of competitive accretion.
Qualitatively, there are two distinctive features in CMR-SF. First, the number of clusters and their extent are limited. In our fiducial model, only one cluster forms and it is confined within a region of $\sim$0.05 pc. Both result from the highly concentrated dense gas, which is strongly confined by the helical/toroidal magnetic field and gravity. In comparison, the same model but without magnetic field has multiple cluster-forming sites that spread over a larger volume of $\gtrsim$0.2 pc. Second, because of the field, which acts like a surface shield, inflowing gas is only able to transfer material to the core/cluster through streamers. The limited gas inflow results in a relatively low star formation rate. Compared to the model without magnetic field, CMR-SF has an overall star formation rate a factor of 2 smaller.
In CMR-SF, the crowdedness of the cluster will probably result in more massive stars if protostellar feedback is included. For instance, the radiative heating will suppress fragmentation, thus limiting the number of stars in the cluster. So the same mass reservoir will supply more massive stars if they keep accreting. Eventually, feedback from the massive stars will stop the accretion and disperse the gas.
\section{Acknowledgements}
An allocation of computer time from the UA Research Computing High Performance Computing (HPC) at the University of Arizona is gratefully acknowledged. RJS gratefully acknowledges an STFC Ernest Rutherford fellowship (grant ST/N00485X/1) and HPC from the Durham DiRAC supercomputing facility (grants ST/P002293/1, ST/R002371/1, ST/S002502/1, and ST/R000832/1).
\section*{Data Availability}
The data underlying this article are generated from numerical simulations with the modified version of \textsc{Arepo} code. The data will be shared upon reasonable request to the corresponding author.
\bibliographystyle{mnras}
\hspace{16pt} A multi-level inverter is a power electronic system that synthesizes a desired voltage output from several levels of DC voltages as inputs \cite{pv-citation-1}. Today, there are many different topologies of multilevel converters including, but not limited to, Diode-Clamped, Flying Capacitor, and Cascade H-bridge (CHB). While the topologies may be different, they all offer similar beneficial features. For sinusoidal outputs, multilevel converters improve their output voltage quality as the number of levels of the converter increases, thus decreasing the Total Harmonic Distortion (THD) \cite{Kouro}. For this reason and others, multilevel converters have been used for high power photovoltaic (PV) inversion, electric motor drives in electric vehicles, and other research and commercial applications \cite{Kouro, ev-citation-1, pv-citation-2, Franquelo}. However, technological problems such as reliability, efficiency, the increase of control complexity, and the design of simple modulation methods have slowed down the application of multilevel converters \cite{Kouro}.
\begin{figure}[h]
\centering
\includegraphics[height=9cm]{images/CHB-5level.png}
\caption{5 Level Cascade H-bridge Converter \cite{pv-citation-2}}
\label{chb-5level}
\end{figure}
\\
\par \hspace{16pt} Figure \ref{chb-5level} shows a 5 level CHB converter. As can be seen, CHB converters consist of multiple MOSFET (or equivalent) H-bridges that are connected in series, each H-bridge having its own isolated DC voltage source. The shown 5 level case requires two H-bridges, which can be configured to output the 5 levels: +V, +V/2, 0, -V/2, and -V. While the theory of adding and subtracting isolated voltage sources is simple, it is harder to realize in practice. Common methods include using isolated DC-DC converters such as flyback and forward converters that have transformers with multiple secondary windings \cite{Kouro}. Others use individual, or multiple isolated sets of, PVs that power individual H-bridges \cite{pv-citation-1, pv-citation-2}.
\\
\par \hspace{16pt} This paper uses two of the common modulation techniques for CHB converters, Phase Shifted PWM (PSPWM) and Nearest Level Switching (NLS), to propose designs for a utility 3 phase PV inverter. Our designs will display many of the common advantages and disadvantages of the different modulation techniques for CHB. Our design requirements were to develop a 3 phase utility PV CHB inverter to supply 125 kW at $480 V_{L-L}^{RMS}$ with a THD below 5\%.
\section{Nearest Level Switching}
\par \hspace{16pt}
The Nearest-Level Switching control for a multilevel converter compares a control sine wave to DC voltage levels. For a Cascading Multi-level Converter consisting of \emph{N} H-bridges, each H-bridge triggers when the control sine wave is greater than its respective threshold, given by the equation:
\begin{align}
V_{thresh}(i) = V_{pk} \frac{2i+1}{2N} \label{nls-vthresh}
\end{align}
\par \hspace{16pt} Where \emph{N} is the number of H-bridges equal to $N = \frac{L-1}{2}$ for an \emph{L} level Cascade H-bridge (CHB), and \emph{i} is the switch number which ranges from 0 to \emph{N-1}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-theory-wave.png}
\caption{Nearest Level Switching Waveform Synthesis \cite{Kouro}}
\label{nls-theory-wave}
\end{figure}
\par \hspace{16pt} Figure \ref{nls-theory-wave} shows the waveform for the Nearest Level Switching. It should be noted that each threshold lies half of a DC step ($V_{DC}/2$) above the previous output level. Additionally, the peak output voltage is equal to $N V_{DC}$.
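As a concrete sketch in Python for a 9-level CHB ($N=4$): the $(2i+1)/(2N)$ threshold form used here, for $i = 0$ to $N-1$, is the one consistent with the switching angles of Equation \ref{eqn-alpha} with $L-1 = 2N$, and places the thresholds at $V_{DC}/2,\,3V_{DC}/2,\,5V_{DC}/2,\,7V_{DC}/2$:

```python
import math

# Thresholds V_thresh(i) = V_pk*(2i+1)/(2N), i = 0..N-1. With V_pk = N*V_DC these
# fall at V_DC/2, 3*V_DC/2, 5*V_DC/2, ..., i.e. half a DC step above each level.
def nls_thresholds(n_bridges, v_dc=1.0):
    v_pk = n_bridges * v_dc
    return [v_pk * (2 * i + 1) / (2 * n_bridges) for i in range(n_bridges)]

def nls_level(v_control, thresholds, v_dc=1.0):
    """Synthesized output: one V_DC step per threshold crossed (sign-symmetric)."""
    n_on = sum(1 for th in thresholds if abs(v_control) >= th)
    return math.copysign(n_on * v_dc, v_control)

th = nls_thresholds(4)          # [0.5, 1.5, 2.5, 3.5] for V_DC = 1
print(th, nls_level(1.7, th))   # 1.7 crosses two thresholds -> output 2.0
```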
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-pwm.png}
\caption{9-Level NLS CHB PWM Waveforms}
\label{nls-pwm-wave}
\end{figure}
\par \hspace{16pt} Figure \ref{nls-pwm-wave} shows the PWM waveforms for the top H-bridge of a 9-level NLS CHB. The PWM signals for switches 1 and 3 are not the same; likewise for switches 2 and 4. For a single H-bridge inverter, the diagonal switches could have the same PWM input, but that is not possible for multilevel converters. In order to achieve all of the voltage steps, either voltages need to be added and subtracted, or the outputs of H-bridges need to be shorted \cite{Franquelo}. Our approach was to use the shorting technique. This essentially adds an equivalent of a zero state in Space Vector Modulation \cite{Kouro}. This output shorting was implemented by making the PWM for switch 3 the inverse of PWM 2, and PWM 4 the inverse of PWM 1, which shorts the outputs through switches 3 and 4. In practice, both zero states (switches 3 and 4 on, or switches 1 and 2 on) should be used in order to wear individual switches more evenly.
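The gating just described reduces to a small truth table per H-bridge. A minimal sketch in Python, assuming a switch numbering consistent with the text (switches 1 and 2 on the upper rail of the left and right legs, switches 4 and 3 below them, so that $\mathrm{PWM}_4 = \overline{\mathrm{PWM}_1}$ and $\mathrm{PWM}_3 = \overline{\mathrm{PWM}_2}$ keep each leg complementary):

```python
# Truth-table sketch of one H-bridge under complementary-leg gating. The switch
# numbering (S1/S2 upper, S4/S3 lower) is an assumption consistent with the
# shorting path through switches 3 and 4 described in the text.
def hbridge_out(s1, s2, v_dc=1.0):
    # s4 = not s1 and s3 = not s2 are implied by the complementary gating.
    v_a = v_dc if s1 else 0.0   # left-leg output node
    v_b = v_dc if s2 else 0.0   # right-leg output node
    return v_a - v_b

print(hbridge_out(True,  False))   # +V_dc: diagonal S1, S3 conducting
print(hbridge_out(False, True))    # -V_dc: diagonal S2, S4 conducting
print(hbridge_out(False, False))   # 0: output shorted through S3 and S4
print(hbridge_out(True,  True))    # 0: the other zero state, through S1 and S2
```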
\section{Nearest Level Switching Resistive Load RMS and THD Calculations}
\par \hspace{16pt}For a cascading H-Bridge Multilevel Inverter with $L$ levels, using the Nearest-Level-Switching technique, the switching points $\alpha_i$ for levels $i = 0$ to $i = N-1$, where $N = \frac{L-1}{2}$, are given by the equation:
\begin{align}
\alpha_i = sin^{-1}\left( \frac{2i+1}{L-1}\right) \label{eqn-alpha}
\end{align}
\subsection{Root Mean Squared}
\par \hspace{16pt} From \cite{Mohan}, the equation for calculating the RMS ($X^{RMS}$) of a function $x(t)$ is given as:
\begin{align}
X^{RMS} = \sqrt{\frac{1}{T}\int_0^T (x(t))^2 dt} \label{RMS_equation}
\end{align}
\subsubsection{3-Level}
For the simple 3-level Inverter with NLS, the RMS voltage of a resistive load can be calculated to be:
\begin{align*}
V_{3-L}^{RMS} & = \sqrt{\frac{1}{\pi}\int_{\alpha_0}^{\pi-\alpha_0}(V_m)^2 dt}\\
& = V_m \sqrt{ \frac{1}{\pi}\left[ t \right]_{\alpha_0}^{\pi-\alpha_0}}\\
& = V_m \sqrt{\frac{1}{\pi}\left( \pi - \alpha_0 - \alpha_0 \right)}\\
& = V_m \sqrt{1 - \frac{2}{\pi}\alpha_0}
\end{align*}
\par \hspace{16pt} For $\alpha_0 = sin^{-1}\left( \frac{1}{2}\right) = \frac{\pi}{6}$
\begin{align*}
V_{3-L}^{RMS} & = V_m \sqrt{\frac{2}{3}}
\end{align*}
\subsubsection{5-Level}
For the simple 5-level Inverter with NLS, the RMS voltage of a resistive load can be calculated to be:
\begin{align*}
V_{5-L}^{RMS} & = \biggl\{\frac{1}{\pi}\biggl( \int_{\alpha_0}^{\alpha_1}\biggl( \frac{V_m}{2}\biggl)^2 dt + \int_{\alpha_1}^{\pi - \alpha_1}\biggl( V_m\biggl)^2 dt \\
&+ \int_{\pi-\alpha_1}^{\pi-\alpha_0}\biggl( \frac{V_m}{2}\biggl)^2 dt\biggl)\biggr\}^{1/2}\\ \\
& = V_m \sqrt{\frac{1}{\pi}\left( \left[\frac{t}{4}\right]_{\alpha_0}^{\alpha_1} + \left[t\right]_{\alpha_1}^{\pi - \alpha_1} + \left[\frac{t}{4}\right]_{\pi -\alpha_1}^{\pi -\alpha_0}\right)}\\
& = V_m \sqrt{\frac{1}{\pi} \left( \pi - \frac{1}{2}\alpha_0 - \frac{3}{2}\alpha_1 \right)}\\
& = V_m \sqrt{1 - \frac{1}{2 \pi}\alpha_0 - \frac{3}{2 \pi} \alpha_1}
\end{align*}
\par \hspace{16pt} For $\alpha_0 = sin^{-1}\left( \frac{1}{4}\right)$ and $\alpha_1 = sin^{-1}\left( \frac{3}{4}\right)$
\begin{align*}
V_{5-L}^{RMS} & \approx 0.7449\, V_m
\end{align*}
\subsubsection{7-Level}
For the simple 7-level Inverter with NLS, the RMS voltage of a resistive load can be calculated to be:
\begin{align*}
V_{7-L}^{RMS} & = \biggl\{\frac{1}{\pi}\biggl( \int_{\alpha_0}^{\alpha_1}\left( \frac{V_m}{3}\right)^2 dt + \int_{\alpha_1}^{\alpha_2}\left( \frac{2 V_m}{3}\right)^2 dt \\
&+ \int_{\alpha_2}^{\pi - \alpha_2}\left( V_m\right)^2 dt + \int_{\pi - \alpha_2}^{\pi - \alpha_1}\left( \frac{2V_m}{3}\right)^2 dt \\
&+ \int_{\pi-\alpha_1}^{\pi-\alpha_0}\left( \frac{V_m}{3}\right)^2 dt\biggr)\biggr\}^{1/2}\\
\\
& = V_m \biggl\{\frac{1}{\pi} \biggl( \left[\frac{t}{9} \right]_{\alpha_0}^{\alpha_1} + \left[\frac{4t}{9} \right]_{\alpha_1}^{\alpha_2} \\
&+ \left[t \right]_{\alpha_2}^{\pi-\alpha_2} + \left[\frac{4t}{9} \right]_{\pi-\alpha_2}^{\pi-\alpha_1} + \left[\frac{t}{9} \right]_{\pi-\alpha_1}^{\pi-\alpha_0}\biggr)\biggr\}^{1/2}\\
\\
& = V_m \sqrt{\frac{1}{\pi}\left(\pi - \frac{2}{9}\alpha_0 - \frac{6}{9}\alpha_1 - \frac{10}{9}\alpha_2 \right)}\\
& = V_m \sqrt{1 - \frac{2}{9\pi}\alpha_0 - \frac{6}{9\pi}\alpha_1 - \frac{10}{9\pi}\alpha_2}
\end{align*}
\par \hspace{16pt} For $\alpha_0 = \sin^{-1}\left( \frac{1}{6}\right)$, $\alpha_1 = \sin^{-1}\left( \frac{3}{6}\right)$, and $\alpha_2 = \sin^{-1}\left( \frac{5}{6}\right)$
\begin{align*}
V_{7-L}^{RMS} & \approx 0.7271\, V_m
\end{align*}
\subsubsection{L-Level}
From the previous derivations and Equation \ref{eqn-alpha}, a pattern for the RMS voltage can be seen. For a multilevel cascade H-Bridge Inverter with L levels, the equation for the RMS voltage of a resistive load can be given by:
\begin{align}
V_L^{RMS} &= V_m \sqrt{1 - \sum^{N-1}_{i=0}\left( \frac{2(2i+1)}{\pi N^2}\,\alpha_i\right)} \nonumber \\
&= V_m \sqrt{1 - \frac{2}{\pi N^2} \sum^{N-1}_{i=0}\left( \sin^{-1}\left(\frac{2 i + 1}{L -1} \right) \left(2i+1 \right) \right)} \label{eqn-L-level-RMS}
\end{align}
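As a numerical cross-check of Equation \ref{eqn-L-level-RMS}, the closed form can be evaluated for the levels derived above (a minimal Python sketch; the function name is illustrative and not taken from Appendix A):

```python
import math

def v_rms_norm(levels):
    """Normalized RMS output voltage V_L^RMS / V_m of an L-level
    NLS cascade H-bridge inverter with resistive load."""
    n = (levels - 1) // 2  # number of H-bridges per phase
    s = sum((2 * i + 1) * math.asin((2 * i + 1) / (levels - 1))
            for i in range(n))
    return math.sqrt(1.0 - 2.0 * s / (math.pi * n ** 2))

# Reproduces the derivations above:
# v_rms_norm(3) = sqrt(2/3) ~ 0.8165, v_rms_norm(5) ~ 0.7449
```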
\subsection{Fourier Series Expansion}
\par \hspace{16pt} The Fourier Series Expansion was also performed on the ideal cascade H-bridge Inverter. In the ideal case with a purely resistive load, the output waveform has odd symmetry and quarter-wavelength symmetry inherently. From \cite{Mohan}, the Fourier Series Expansion Coefficients for such symmetry are given as:
\begin{align*}
a_h &= 0\\
b_h &= \frac{4}{\pi}\int_0^{\frac{\pi}{2}}\left( x(t) \sin(h\omega t )\right) d\omega t
\end{align*}
\subsubsection{3-Level}
For the simple 3-level Inverter with NLS, the Fourier Series Expansion Coefficients of a resistive load can be expressed as:
\begin{align*}
b_h (3) & = \frac{4}{\pi}\int_{\alpha_0}^{\frac{\pi}{2}}\left( V_m \sin(h\omega t )\right) d\omega t\\
& = \frac{4V_m}{\pi h} \left[-\cos(h \omega t) \right]^{\frac{\pi}{2}}_{\alpha_0}\\
& = \frac{4V_m}{\pi h} \cos(h \alpha_0)
\end{align*}
\subsubsection{5-Level}
For the simple 5-level Inverter with NLS, the Fourier Series Expansion Coefficients of a resistive load can be expressed as:
\begin{align*}
b_h (5) & = \frac{4}{\pi} \biggl(\int_{\alpha_0}^{\frac{\pi}{2}}\left( \frac{V_m}{2} \sin(h\omega t )\right) d\omega t \\
&+ \int_{\alpha_1}^{\frac{\pi}{2}}\left( \frac{V_m}{2} \sin(h\omega t )\right) d\omega t \biggr)\\
\\
& = \frac{2 V_m }{\pi h}\left( \left[-\cos(h\omega t) \right]_{\alpha_0}^{\frac{\pi}{2}} + \left[-\cos(h\omega t) \right]_{\alpha_1}^{\frac{\pi}{2}} \right)\\
& = \frac{2 V_m }{\pi h}\left( \cos(h\alpha_0) + \cos(h\alpha_1) \right)
\end{align*}
\subsubsection{L-Level}
For a multilevel cascade H-Bridge Inverter with L levels, the Fourier Series Expansion Coefficients of a resistive load can be expressed as:
\begin{align*}
b_h (L) & = \frac{4}{\pi} \left(\sum^{N-1}_{i=0} \int_{\alpha_i}^{\frac{\pi}{2}}\left( \frac{V_m}{N} \sin(h\omega t )\right) d\omega t \right)\\
& = \frac{4 V_m}{\pi h N} \sum^{N-1}_{i=0} \left( \cos \left(h \alpha_i\right) \right)
\end{align*}
\par \hspace{16pt} This can be simplified using the trigonometric identity:
\begin{align*}
\cos(\sin^{-1}(x)) = \sqrt{1 - x^2}
\end{align*}
\par \hspace{16pt} From Equation \ref{eqn-alpha}, the first Fourier Series Coefficient can be simplified to:
\begin{align}
b_1 (L) & =\frac{4 V_m}{\pi N} \sum^{N-1}_{i=0} \left( \sqrt{1 - \left( \frac{2i+1}{L-1} \right)^2}\right) \label{eqn-fourier}
\end{align}
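The coefficients $b_h(L)$ can likewise be evaluated numerically, for instance to see how quickly the low-order odd harmonics fall off relative to the fundamental (an illustrative Python sketch; the function name is an assumption, not from Appendix A):

```python
import math

def fourier_b(h, levels, v_m=1.0):
    """Fourier coefficient b_h(L) of the L-level NLS output
    waveform (odd harmonics h; even ones vanish by symmetry)."""
    n = (levels - 1) // 2
    alphas = [math.asin((2 * i + 1) / (levels - 1)) for i in range(n)]
    return (4.0 * v_m / (math.pi * h * n)) * sum(math.cos(h * a) for a in alphas)

# For L = 7 the fundamental dominates: fourier_b(1, 7) ~ 1.021,
# while |fourier_b(3, 7)| is already below 0.02.
```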
\subsection{Total Harmonic Distortion}
\par \hspace{16pt} From \cite{Mohan}, the equation for the Total Harmonic Distortion (THD) is given by:
\begin{align}
THD = \frac{\sqrt{(X^{RMS})^2 - (X^{RMS}_1)^2}}{X^{RMS}_1} \label{eqn-thd}
\end{align}
\par \hspace{16pt} From Equation \ref{eqn-fourier}, the equation for the RMS magnitude of the first harmonic can be calculated as:
\begin{align}
V^{RMS}_1 (L) = \frac{4 V_m}{\pi N \sqrt{2}} \sum^{N-1}_{i=0} \left( \sqrt{1 - \left( \frac{(2i+1)}{L-1} \right)^2}\right) \label{eqn-first-rms}
\end{align}
\par \hspace{16pt} Equations \ref{eqn-first-rms}, \ref{eqn-L-level-RMS}, and \ref{eqn-thd} were combined and plotted using Python (Appendix A) to calculate the THD of an L-level CHB inverter with a resistive load. In parallel, PSIM simulations with resistive loads were run for the same numbers of levels and compared against the calculations.
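The combination of these equations can be sketched in a few lines of Python (an illustrative reimplementation, not the Appendix A script itself):

```python
import math

def thd_percent(levels, v_m=1.0):
    """THD (%) of an L-level NLS-CHB with a resistive load,
    combining Eqs. eqn-L-level-RMS, eqn-first-rms and eqn-thd."""
    n = (levels - 1) // 2
    alphas = [math.asin((2 * i + 1) / (levels - 1)) for i in range(n)]
    # Total RMS of the staircase waveform (Eq. eqn-L-level-RMS)
    v_rms = v_m * math.sqrt(
        1.0 - sum(2 * (2 * i + 1) * a for i, a in enumerate(alphas))
        / (math.pi * n ** 2))
    # RMS of the fundamental (Eq. eqn-first-rms)
    v1_rms = (4.0 * v_m / (math.pi * n * math.sqrt(2.0))) * sum(
        math.sqrt(1.0 - ((2 * i + 1) / (levels - 1)) ** 2) for i in range(n))
    return 100.0 * math.sqrt(v_rms ** 2 - v1_rms ** 2) / v1_rms

# thd_percent(3) ~ 31.08 and thd_percent(5) ~ 17.60, matching Table thd-table.
```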
\begin{table}[h]
\centering
\caption{PSIM Simulated and Theoretically Calculated THD}
\begin{tabular}{ |c|c|c|}
\hline
Levels (L) & PSIM THD (\%) & Calculated THD(\%)\\
\hline
3& 31.0512 &31.08419\\
5& 17.5799 &17.6012\\
7& 12.2126 &12.2272\\
9& 9.35322 &9.363669\\
11& 7.58321 &7.587252\\
13& 6.3712 &6.378124\\
15& 5.49467 &5.502021\\
17& 4.83621 &4.837995\\
19& 4.31314 &4.317328\\
21& 3.89612 &3.89809\\
23& 3.55342 &3.553263\\
25& 3.26193 &3.264629\\
27& 3.017 &3.01947\\
\hline
\end{tabular}
\label{thd-table}
\end{table}
\par \hspace{16pt} Table \ref{thd-table} shows the THD results from both the theoretical calculation and the PSIM simulation for 3 to 27 levels. The two data sets are consistent with each other, and the THD decreases as the number of levels increases. This quantitatively confirms that the quality of the output sinusoid improves as additional levels are added to the CHB.
\section{Nearest Level Switching Simulation Design}
\par \hspace{16pt} As previously mentioned, the design goals were to produce a 60\,Hz three-phase inverter at $480 V_{L-L}^{RMS}$ with a real power output of 125\,kW. The THD of our inverter also needed to be below 5\%. These specifications are consistent with other commercially available PV inverters \cite{industry-inverters}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-busk-hbridge.png}
\caption{Nearest Level Switching Simulated H-bridge and Buck Converter}
\label{nls-h-and-buck}
\end{figure}
\par \hspace{16pt} Figure \ref{nls-h-and-buck} shows the PSIM simulation of an individual H-bridge and buck converter of the NLS-CHB inverter. From the previous section, the theoretical THD of a 27-level NLS-CHB with a resistive load was found to be approximately 3\%, so a 27-level inverter was chosen. From the number of levels and the grid voltage of $480 V_{L-L}^{RMS}$, the individual H-bridge voltage was calculated to be $V_{dc} = 480\sqrt{2/3}/13 \approx 30.15\,V$. Our simulations did not include PV maximum power point tracking, so a constant voltage source equal to the maximum voltage output of a single PV from the datasheet in \cite{solar-panels} was used. \\
\par \hspace{16pt}From \cite{Krein}, for a buck converter, we know:
\begin{align}
V_o & = D V_s \label{buck-d}\\
\Delta i_L & \approx \frac{V_l \Delta t}{L}\nonumber\\
& \approx \frac{D V_s (1-D)T}{L}\nonumber\\
L & \approx \frac{D V_s (1-D)T}{\Delta i_L} \label{buck-L}\\
\Delta V_c & \approx \frac{T \Delta i_L}{8C} \label{buck-C}
\end{align}
\par \hspace{16pt} We expect $I_{Lpk}\approx 30\,A$. Let $f_s = 200\,kHz$, $V_s = 48.9\,V$, $V_o = 480\sqrt{2/3}/13\approx 30.15\,V$, $\Delta V_c = 4\,V$, and $\Delta i_L = 5\% = 6\,A$. Then:
\begin{align*}
D &= 0.6165\\
L &= 9.633 \mu H \approx 10 \mu H\\
C &= 937 nF \approx 1 \mu F
\end{align*}
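These component values follow directly from Equations \ref{buck-d}--\ref{buck-C}; a quick Python check (variable names are illustrative):

```python
# Buck converter sizing from Eqs. buck-d, buck-L and buck-C
f_s = 200e3          # switching frequency [Hz]
T = 1.0 / f_s        # switching period [s]
V_s = 48.9           # input (PV) voltage [V]
V_o = 30.15          # required H-bridge voltage [V]
dV_c = 4.0           # allowed capacitor voltage ripple [V]
di_L = 6.0           # allowed inductor current ripple [A]

D = V_o / V_s                      # duty cycle (Eq. buck-d)
L = D * V_s * (1 - D) * T / di_L   # inductance (Eq. buck-L)
C = T * di_L / (8 * dV_c)          # capacitance (Eq. buck-C)
# D ~ 0.617, L ~ 9.63 uH, C ~ 0.94 uF, as quoted above
```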
\par \hspace{16pt} The capacitor and inductor were intentionally kept small, but above critical values, so as to reduce their equivalent impedance when placed in series. Once the buck converters were designed, a single phase was simulated against a grid AC voltage source with a phase shift of $-2.5$ degrees in order to determine the desired cutoff frequency for the output filter. From our simulations, a cutoff frequency of approximately 700\,Hz was selected based on the output current harmonics. A simple LC low-pass filter was chosen, with a cutoff frequency given by \cite{Krein} as:
\begin{align}
f_c = \frac{1}{2 \pi \sqrt{LC}} \label{f-c-LC}
\end{align}
\par \hspace{16pt} An inductor $L_f = 1\,mH$ and a capacitor $C_f = 50\,\mu F$ were selected.
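With these values, Equation \ref{f-c-LC} gives a cutoff close to the targeted 700\,Hz (quick check):

```python
import math

L_f, C_f = 1e-3, 50e-6  # output filter inductor [H] and capacitor [F]
f_c = 1.0 / (2 * math.pi * math.sqrt(L_f * C_f))  # Eq. f-c-LC
# f_c ~ 712 Hz
```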
\section{Nearest Level Switching Simulation Results}
\subsection{NLS Results}
\par \hspace{16pt} The NLS-CHB was first simulated with a single phase using ideal switches and a phase angle of -2.5 degrees from the grid voltage source. The simulation was able to reach a power level of 8.5\,kW per phase at the desired line-to-neutral voltage level. The output current RMS was 30.66\,A and the current THD was 3.02\%. At this current level, using the PV arrays from \cite{solar-panels}, two would need to be connected in parallel to each buck converter.
\\
\par \hspace{16pt} The NLS-CHB was then simulated with a single phase using the PSIM default lossy switching models for NMOS MOSFETs. In addition, all reactive components were given series resistance values of $50\,m\Omega$. The circuit was simulated for 0.5\,s and the output THD increased to 4.486\% at a power factor of 99.56\%.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-3Phase.png}
\caption{Nearest Level Switching PSIM 3-Phases (Y Connection)}
\label{nls-3phase}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-Sphase-wave.png}
\caption{Nearest Level Switching 3 Phase Voltage and Current Waveforms}
\label{nls-3phase-wave}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-3phase-PSIM.png}
\caption{Nearest Level Switching 3 Phase Voltage and Current Characteristics}
\label{nls-3phase-psim}
\end{figure}
\par \hspace{16pt} The NLS-CHB was finally simulated with all three phases, each with a respective phase shift of -2.5 degrees. Figure \ref{nls-3phase} shows the PSIM file of the NLS-CHB three phase simulation. Figures \ref{nls-3phase-wave} and \ref{nls-3phase-psim} show the output waveforms and characteristics respectively. The desired output voltage of $480 V_{L-L}^{RMS}$ was achieved. The output real power was approximately 25\,kW with an output current THD of approximately 3.12\%. From our simulation, in order to achieve our design requirements, five such three-phase inverters would need to be placed in parallel.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/NLS/NLS-switch-stress.png}
\caption{Nearest Level Switching H-bridge Switch Stresses}
\label{nls-switch-stresses}
\end{figure}
\par \hspace{16pt} Figure \ref{nls-switch-stresses} displays the individual switch stresses for the H-Bridge MOSFETs. Each MOSFET in each H-bridge needed to conduct a peak of 50\,A and block a peak of 40\,V. Based on design experience, voltage and current ratings should be increased by approximately 150\% to maintain safe operation in extreme performance cases. For a $480 V_{L-L}^{RMS}$ system, these are comparatively small values \cite{Franquelo}. While this topology uses more switches, each switch experiences less stress. This opens the possibility of using more efficient switches with lower voltage and current ratings. Our simulation kept each H-bridge at a constant duty ratio. In practice, the duty cycles of the individual H-bridges would need to differ, or a more complex controller would be needed to distribute duty cycles evenly across all the H-bridges.
\subsection{NLS Comments}
\par \hspace{16pt} The benefits that Nearest Level Switching brings include greater efficiency and the ability to use additional active elements (additional H-bridges) in order to improve output waveform quality \cite{ev-citation-1, Franquelo}. Our analysis has shown that increasing the number of H-bridges and levels decreases the THD of the output waveform, bringing the output closer to a pure sinusoid. Due to the slow switching nature of the NLS technique, all switches turn on and off only once per fundamental period of the output waveform, reducing the commutation losses of the switches but increasing the conduction losses. Also, as the number of H-bridges increases, the voltage across each of the switches in the H-bridges decreases (Equation \ref{nls-vthresh}), thus reducing individual switch losses and stresses. At the same time, as the number of H-bridges increases, the total number of switches increases, increasing the total losses as well as the overall complexity of the control for all of the switches \cite{Franquelo}. Thus the optimal number of switches depends on the specifications of the switches used as well as the output voltage and power. As a result, NLS CHB circuits are better suited to some applications than others \cite{ev-citation-1, Kouro}.
\section{Phase Shifted PWM}
\par \hspace{16pt}
The Phase Shifted PWM control for a multilevel converter compares a triangular carrier waveform with a sinusoidal control waveform in order to obtain the desired PWM for each H-bridge. Each H-bridge's triangular carrier has a phase shift depending on the number of levels:
\begin{align}
\theta_{shift} = \frac{360^\circ}{L-1} = \frac{360^\circ}{2N}\label{ps-theta}
\end{align}
\par \hspace{16pt} where N is the number of H-bridges in the cascade. The DC voltage for each H-bridge level is defined as:
\begin{align}
V_{DC} = \frac{V_{DC,0}}{N} \label{ps-vdc}
\end{align}
\par \hspace{16pt} where $V_{DC,0}$ is the DC voltage required to generate the desired AC output voltage in the case of a single-level inverter.
\section{Phase Shifted PWM Inverter Design}
\par \hspace{16pt} For this project, a cascade consisting of six H-bridge levels per phase with PSPWM control was chosen. The carrier waveforms for all six levels were 100\,kHz triangle waves. The overall three-phase circuit with load, and the level circuitry, are shown in Figures \ref{ps-3phase} and \ref{ps-carrier}. Note that the inverter is connected to the grid, and the grid has an associated inductance of 1\,mH. The grid voltage was given a $-2.5^\circ$ phase shift with respect to the inverter output voltage to facilitate current flow from inverter to grid.
\\
\par \hspace{16pt} From our project requirements, the required output voltage was $480 V_{L-L}^{RMS}$, or $\approx 277 V_{L-N}^{RMS}$. From \cite{Mohan} and Equation \ref{ps-vdc}, the total required DC voltage $V_{DC,0}$ and the individual H-bridge DC voltages $V_{DC,level}$ were found as:
\begin{align*}
V_{DC,0} &= \frac{V_{rms,LL}}{m_a}\bigg|_{m_a = 0.8} \sqrt{\frac{2}{3}} = \sqrt{\frac{2}{3}}\, \frac{480}{0.8} \approx 490\,V\\
V_{DC,level} &= \frac{490}{6} \approx 81.67\,V
\end{align*}
\par \hspace{16pt} Similarly, from Equation \ref{ps-theta}, the individual carrier wave phase shift can be found as:
\begin{align*}
\theta_{shift} = \frac{360^\circ}{2 \cdot 6} = 30^\circ
\end{align*}
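The numbers above follow from Equations \ref{ps-vdc} and \ref{ps-theta}; an illustrative Python check:

```python
import math

V_ll_rms = 480.0   # required line-to-line RMS voltage [V]
m_a = 0.8          # modulation index
N = 6              # H-bridges per phase

V_dc0 = math.sqrt(2.0 / 3.0) * V_ll_rms / m_a  # total DC voltage
V_dc_level = V_dc0 / N                         # per-level DC voltage (Eq. ps-vdc)
theta_shift = 360.0 / (2 * N)                  # carrier phase shift [deg] (Eq. ps-theta)
# V_dc0 ~ 490 V, V_dc_level ~ 81.7 V, theta_shift = 30 deg
```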
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-3phase.png}
\caption{3 phase 6 level cascaded H-bridge inverter with grid as a load}
\label{ps-3phase}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-carrier.png}
\caption{PSPWM carriers (leg A, leg B) for 6 level cascaded H-bridge inverter. Phase shift is $30^\circ$}
\label{ps-carrier}
\end{figure}
\par \hspace{16pt} Figure \ref{ps-carrier} shows the carrier signals for a 6 level cascaded H-bridge inverter (positive leg control signals are shown in the top part, and negative leg control signals in the bottom part).
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-Buck.png}
\caption{Buck converter required to facilitate 81.67 VDC for inverter level}
\label{ps-buck}
\end{figure}
\par \hspace{16pt} Figure \ref{ps-buck} displays the PS-PWM buck converter used. The inductor and capacitor values were calculated using Equations \ref{buck-L} and \ref{buck-C}; the final inductor and capacitor values were chosen to be $100\,\mu H$ and $100\,\mu F$. This circuit facilitates power flow from the PV network to the cascaded inverter. Based on the single-phase inverter, the current draw from the circuit was $\approx 13.7\,A_{DC}$. This leads to the conclusion that the inverter's levels behave as a resistive load of $\approx 6\,\Omega$ for the buck converter. This figure was used to design the appropriate buck converter (Equation \ref{f-c-LC}).
\par \hspace{16pt} Based on the data for PV cells in \cite{solar-panels}, it was estimated that two sets of three series PV cells in parallel, connected to each buck converter, would be needed in order to provide sufficient voltage and current for our inverter. Hence, the input voltage and available current for each buck converter are 120.6\,VDC at 19.42\,ADC. From Equation \ref{buck-d}, this sets the buck converter duty cycle to approximately 0.677.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-Freq.png}
\caption{Filtered and unfiltered voltage and current output waveforms}
\label{ps-frq}
\end{figure}
\par \hspace{16pt} Based on the unfiltered voltage and current simulation data, the high-frequency voltage harmonics appear at frequencies above 2\,kHz. Therefore, in order to obtain a low filtered THD with reasonably high L and C filter values, the cut-off frequency of the LC filter was set at $\approx 1453$\,Hz. In addition to the LC filter, the equivalent grid line inductance of 1\,mH also acts as a filter. From Equation \ref{f-c-LC} the calculated filter values were:
\begin{align*}
L_f = 200 \mu H\\
C_f = 60 \mu F
\end{align*}
\section{PS-PWM Simulation Results}
\subsection{$1 \phi$ Simulation Data Analysis}
\par \hspace{16pt} First, the PSIM simulations for a single phase using the default PSIM lossy MOSFET models and lossy reactive elements ($R_{series} = 50m \Omega$) were examined.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-waveforms.png}
\caption{Phase A filtered/unfiltered voltage and current waveforms}
\label{ps-phasea-wave}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-PSIM_OUT.png}
\caption{Phase A filtered/unfiltered voltage/current waveform characteristic Figures}
\label{ps-ph-a-psim}
\end{figure}
\par \hspace{16pt} Figures \ref{ps-phasea-wave} and \ref{ps-ph-a-psim} display the filtered and unfiltered output voltage and current waveforms, as well as the wave characteristics, for the single-phase lossy simulation. As can be seen, the output voltage met the required line-to-line voltage of $480 V_{L-L}^{RMS}$. In addition, all of the THD values were below 5\%. The single-phase output power was approximately 6.8\,kW.
\subsection{$3 \phi$ Simulation Data Analysis}
\par \hspace{16pt} Next, the PSIM simulations for a three phase circuit using the default PSIM lossy MOSFET models and lossy reactive elements ($R_{series} = 50m \Omega$) were examined.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-waveforms-3phase.png}
\caption{Three phase filtered/unfiltered voltage and current waveforms}
\label{ps-3phase-wave}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-3phase-PSIM-OUT.png}
\caption{Three phase filtered/unfiltered voltage/current waveform characteristic Figures}
\label{ps-3phase-psim}
\end{figure}
\par \hspace{16pt} Figures \ref{ps-3phase-wave} and \ref{ps-3phase-psim} display the filtered and unfiltered output voltage and current waveforms, as well as their characteristics, for the three-phase lossy simulation. As can be seen, the output voltage and output current THD values are below the required 5\%. The total output power from the three-phase simulation was calculated to be approximately 20.3\,kW. Thus, in order to meet the design requirement of 125\,kW, at least seven three-phase H-bridge inverters of this topology would need to be used.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{images/PS/PS-Switches.png}
\caption{6 level cascaded H-bridge inverter. Switch voltage/current stresses}
\label{ps-switches}
\end{figure}
\par \hspace{16pt} Figure \ref{ps-switches} shows the voltages and currents experienced by the H-bridges of the PS-PWM inverter. Based on the simulation data, the voltage stress on the MOSFETs was approximately 85\,VDC, and the current stress was approximately 40\,ADC. Based on design experience, voltage and current ratings should be increased by approximately 150\% to maintain safe operation in extreme performance cases. This implies that the actual switch ratings should be 150\,VDC / 60\,ADC.
\section{Conclusions}
\par \hspace{16pt} Multilevel converters inherently provide desired characteristics for high-power applications, but they come with inherent issues such as a more complex structure and operation. The CHB in particular has a structure that allows for very high power applications due to its series connection of isolated power supplies. The drawback of this structure is that if a single power source is used to supply each of the levels, the isolation transformer would require a currently non-standard design with a large number of secondary windings \cite{Franquelo}. Our solutions, as well as \cite{pv-citation-1, pv-citation-2}, solve this problem by using isolated PV cells. This method assumes that all of the PVs output the same current; in practice, more complex control would be required to achieve the desired output voltage and power from PV cells that are not providing equal power. In addition, the gate control of each H-bridge would need to be isolated.
This paper proposes two solutions for creating CHB inverters capable of outputting 125\,kW at $480 V_{L-L}^{RMS}$. Our simulation results show that this is possible while maintaining a current THD below the 5\% required by IEEE-519 \cite{519}. Multilevel converters such as the CHB have unique features for power quality and modularity. Although they are not commonly used in industry now, they have great potential for the future.
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
The STEREO experiment \cite{stereoRef1,stereopost} searches for a sterile neutrino by measuring the anti-neutrino energy spectrum as a function of the distance from the source, the core of the Institut Laue-Langevin (ILL) research nuclear reactor.
This measurement will be done using the interaction of the anti-neutrino in a liquid scintillator (LS) via the inverse beta decay process (IBD).
The anti-neutrino signature relies on the coincidence of a positron interaction and the delayed neutron capture within a few tens of \textmu s (exponential decay).
The target of the detector is filled with a gadolinium-loaded LS.
The target volume is segmented into six optically-separated cells, each containing four photomultipliers (PMT).
The target volume is surrounded by an outer crown called ``Gamma-catcher'' and filled with LS without gadolinium.
The outer crown recovers part of the escaping gammas to improve the detection efficiency and the energy resolution. The Gamma-catcher is viewed by 24 PMT.
For each anti-neutrino interaction occurring in a detector cell, each PMT can collect up to 1500 photo-electrons.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.7\textwidth]{./figs/stereo_exploded_descr}
\caption{Exploded view of the STEREO detector (dimensions $\rm 3\,m \times 1\,m \times 1.5\,m$). The Cerenkov detector used to reject cosmic background events is not shown.}
\label{stereoExploded}
\end{center}
\end{figure}
In addition, in order to reject cosmic background events, a Cerenkov detector containing 20 PMT is located above the whole detector.
For the three parts of the STEREO detector, the total duration of the expected light signals is between 100 and 200\,ns, and the fast-rising part of the signal lasts about 20\,ns.
The event rate for the whole detector could be as high as 1\,kHz.
The STEREO detector is regularly calibrated with a LED system.
The light produced by different and independent LED boxes, each containing 6 LED, is injected at different points in the detector through optical fibers.
\section{Electronics requirements overview}
\label{overview}
A dedicated electronic system, hosted in a single microTCA (MTCA) crate, was designed for the STEREO experiment, see figure~\ref{electronicsOverview}.
It serves several purposes: triggering, processing, readout and on-line calibration.
For that purpose, the MTCA crate is equipped with ten 8-channels front-end electronic boards (FE8), one trigger and readout mezzanine board (TRB) mounted on the MicroTCA Carrier Hub (MCH) and one LED board to drive the LED boxes.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.65\textwidth]{./figs/overview_2015_12_01}
\caption{STEREO experiment DAQ electronics overview. The microTCA crate is equipped with ten FE8 boards, one TRB board and one LED board. Readout and slow control are done by IPBUS.}
\label{electronicsOverview}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.55\textwidth]{./figs/PSD_real}
\caption{Illustration of the first level trigger generation and of the pulse shape discrimination. A real signal pulse is shown, each 4\,ns sample is represented by a dot.}
\label{PSD_1}
\end{center}
\end{figure}
The FE8 are in charge of continuously digitizing the 68 PMT signals at 250\,MSPS for performing two main tasks that are illustrated in figure~\ref{PSD_1}.
At first, a FE8 can generate a candidate trigger (or a veto) if any of its channels is above the trigger threshold.
This candidate trigger is used by the TRB to build, depending on the selected trigger conditions, a first level accepted trigger (T1a) that is returned to the FE8 in 124\,ns.
In the second step, when the confirmed trigger is received, each FE8 performs signal processing.
For that, the beginning of the signal is found with a Constant Fraction Discriminator (CFD) whose threshold is set above the noise;
this search is done in a window of $\rm N_{sample}$ samples (see figure~\ref{PSD_1}).
The total charge $\rm Q_{tot}$ and the tail charge $\rm Q_{tail}$, useful for the pulse shape discrimination (PSD) method, are respectively the results of the Riemann integration over $\rm N_{tot}$ samples (typical total pulse duration)
and $\rm N_{tail}$ samples (typical pulse tail duration).
The TRB is used to perform the second level trigger (T2), to collect and aggregate the processed data provided by the FE8.
The TRB is also used to drive the LED board during calibrations.
The TRB is installed on a commercial MicroTCA Carrier Hub (MCH) from NAT\textregistered \ \cite{NAT} and provides the system clock (250\,MHz) to all AMC slots, i.e. the FE8 and the LED boards.
The amount of light generated by the LED is set by the LED board which is configured by slow control.
The communication between the electronic boards is done via custom serial protocols.
The slow control and the data acquisition is done by a modified version of the IPBUS \cite{IPBUS}, that allows the Dynamic Host Configuration Protocol (DHCP).
For the reader's comfort, it may be added that the standard User Datagram Protocol (UDP) \cite{UDP}, which is a very simple Ethernet protocol, is unreliable by definition.
Consequently, the IPBUS was designed as a UDP enhancement adding a reliability mechanism.
As the IPBUS protocol is simple, it can be implemented directly in an FPGA, without the need for a Central Processing Unit.
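The charge computation sketched in figure~\ref{PSD_1} amounts to two windowed Riemann sums over the digitized samples. A simplified software model of this processing (illustrative Python, not the actual FPGA firmware; the sample values are made up):

```python
def psd_charges(samples, n_zc, n_tot, n_tail):
    """Total and tail charges as in the FE8 processing:
    a Riemann sum over N_tot samples starting at the CFD time N_zc,
    and a sum over the last N_tail of those samples."""
    window = samples[n_zc:n_zc + n_tot]
    q_tot = sum(window)
    q_tail = sum(window[n_tot - n_tail:])
    return q_tot, q_tail

# A fast-decaying pulse leaves little charge in the tail:
pulse = [0, 0, 10, 40, 25, 12, 6, 3, 1, 0, 0, 0]
q_tot, q_tail = psd_charges(pulse, n_zc=2, n_tot=8, n_tail=3)
# q_tot = 97, q_tail = 4, so Q_tail/Q_tot ~ 0.04
```

The ratio $\rm Q_{tail}/Q_{tot}$ is the discriminating variable for pulse shape discrimination.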
\section{Front-end board}
\label{FEB}
\begin{figure}
\begin{center}
\includegraphics[angle=-0,width=0.5\textwidth]{./figs/photo_fe8_V2}
\caption{Picture of the front-end electronic board (FE8).}
\label{FE8pic}
\end{center}
\end{figure}
The front-end electronic board, shown in fig.~\ref{FE8pic}, features 8 analog inputs that are pre-amplified with two selectable gains ($\times 1$ and $\times 20$).
The second gain was implemented to allow the single photo-electron separation, hence permitting PMT calibration with LED.
The amplifiers outputs are AC coupled ($\rm f_c >30\,kHz$) and equipped with 5\textsuperscript{th} order anti-aliasing filters ($\rm f_{-3dB}=85\,MHz$).
Finally, the 8 filtered signals are sampled by four dual 14-bit ADC (ADS42LB49) operated at 250\,MSPS.
By comparing the charge of simulated pulses before and after the sampling, we ensured that the selected signal sampling rate is sufficient to permit an accurate charge calculation and to allow a pulse-shape discrimination (PSD) which fulfills the STEREO requirements.
ADC are controlled and read out by a single FPGA (XC7K70T-2TFBG676).
This FPGA is also in charge of providing the IPBus connectivity with the DAQ and the trigger and readout communication links with TRB.
It must be noted that the FPGA is configured via the Boot Parallel Interface mode (BPI) and that the firmware stored in the flash memory can be updated via IPBus.
As required by the standard \cite{MTCA}, the board features a Module Management Controller (MMC).
The MMC implemented is a modified version of the one distributed by CERN \cite{MMC}.
As shown in the FE8 firmware block diagram (fig.~\ref{FE8fw}), the channel processing is done in parallel.
At first, the baseline caused by various offsets (amplifier, ADC) is removed by a first-order Infinite Impulse Response (IIR) high-pass filter ($\rm f_{-3dB} \simeq 40\,kHz$).
Each filtered signal is sent in parallel to the trigger modules and to a circular buffer.
At the channel level, the trigger can be done on the amplitude or on the Riemann integration over a sliding window of $\rm N_{charge}$ samples ($\rm N_{charge}$ is a configurable parameter).
At the board level, these two kinds of trigger can be generated with the instantaneous sum of 4 or 8 channels.
For each trigger source, a 32-bit counter is implemented for monitoring the trigger rate.
The circular buffer is used to compensate the trigger validation path delay (about 30 clock cycles, i.e. 120\,ns) and to permit pre-triggering.
The data flowing out of the circular buffer are used to feed the CFD, which searches for the sample numbers ($\rm N_{Zc}$) corresponding to each threshold crossing.
The found CFD times ($\rm N_{Zc}$) are then used to parametrize the PSD computations according to the sketch shown in fig.~\ref{PSD_1}.
The PSD block provides the computed $\rm Q_{tot}$, $\rm Q_{tail}$, $\rm N_{Zc}$, over-range monitors and, if requested through the ``debug mode'', all the samples used for the computation to the channel FIFO.
Over-range monitors were implemented to flag any miscalculation of $\rm Q_{tot}$ and $\rm Q_{tail}$ due to ADC clipping (over-range).
This is mandatory in ``normal mode'' since the samples used for the charge calculations are no longer available.
All channel FIFO data are aggregated in the TX\_FIFO and transferred serially with a 16-bit custom protocol to the TRB.
The custom synchronous serial protocol, which is operated at 125\,Mbps, is using port 5/6 for data/enable and TCLKB for clocking.
The trigger channel, operated at 250\,MHz, is hosted by port 7 (candidate and confirmed).
It must be noted that the CFD/PSD processing, which is pipelined, starts only upon the reception of a confirmed trigger.
Therefore, in theory a new confirmed trigger could be accepted every $\rm N_{sample}$ clock cycles (see fig.~\ref{PSD_1}), in practice: 8 additional clock cycles are required in both the normal and debug mode to save the computed data in the channel FIFO and $\rm N_{sample}$ further clock cycles are required in the debug mode.
Consequently, in normal mode with a typical setting $\rm N_{sample}=60$, an instantaneous accepted trigger rate of $\rm \frac{250}{60+8} \approx 3.7\,MHz$ can be reached.
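The baseline removal stage can be modeled offline as a first-order DC-blocking IIR high-pass filter; a software sketch (the coefficient $r$ is an assumption chosen to give $\rm f_{-3dB}$ of about 40\,kHz at 250\,MSPS, not the exact firmware value):

```python
def dc_block(samples, r=0.999):
    """First-order IIR high-pass: y[n] = x[n] - x[n-1] + r*y[n-1].
    With r close to 1, the cutoff is about fs*(1-r)/(2*pi),
    i.e. roughly 40 kHz at 250 MSPS for r = 0.999."""
    out, x_prev, y_prev = [], 0.0, 0.0
    for x in samples:
        y = x - x_prev + r * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out

# A constant offset decays away while fast pulse edges pass through:
baseline = dc_block([100.0] * 10000)
# baseline[-1] has decayed to below 0.01 although the input is a constant 100
```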
\begin{figure}
\begin{center}
\includegraphics[angle=-0,width=0.8\textwidth]{./figs/FE8_block_diagram}
\caption{Block diagram of the FE8 board firmware. The adjustable parameters are shown in red.}
\label{FE8fw}
\end{center}
\end{figure}
\section{Trigger and readout board}
In order to benefit from the star architecture of the MTCA, the trigger and readout board was designed as a set of two mezzanine extension boards that are mounted on the NAT\textregistered \ MCH as shown in fig.~\ref{DAQhw}.
The TRB is connected to the MCH with a connector that provides the power supply and the Ethernet connectivity.
The PCB associated with the second tongue contains only buffers for the clock trees (FCLKA, TCLKA) and the required connectivity for interconnecting the MCH, the tongue 2 and the main TRB board.
The main TRB board is equipped with an FPGA (XC6SLX45T-FGG484) which takes care of the trigger and readout, its associated flash memory for the BPI mode, the power converters and the connectivity with the tongue 3 and 4.
As on the FE8, the flash memory can be updated in-situ via IPBus.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.49\textwidth]{./photo/DCDC_monte}
\includegraphics[angle=0,width=0.49\textwidth]{./photo/vue_trois_quart}
\caption{Pictures of the two boards composing the TRB mounted on the NAT\textregistered \ MCH. The main TRB board is the top board in the left picture. The extension PCB providing the SMA connectivity with the front panel is not mounted.}
\label{DAQhw}
\end{center}
\end{figure}
A block diagram detailing the firmware can be seen in fig.~\ref{DAQfw}.
It is composed of three parts: the LED board controller, the trigger and the readout.
The LED controller is in charge of requesting periodic LED flashes from the LED board according to a preselected pattern (incremental or fixed) of the 6 LEDs.
The 6-bit pattern is communicated to the LED board via a custom synchronous serial protocol using ports 5/6 for data/enable and TCLKB for clocking.
This serial link operates at 125\,Mbps; the 4\,ns LED pulse is sent a few hundred nanoseconds after the pattern transmission.
Given the fact that the latencies are fixed and known, a delayed version of the LED pulse is used as a candidate trigger (for T1) to ensure that the corresponding event is recorded by the DAQ.
The trigger part receives the candidate trigger from the ten FE8, the external trigger and a delayed version of the LED pulse.
These candidate triggers are passed through a trigger function to form the global T1 trigger candidate.
The T1 trigger function is a logical OR of the candidate triggers where a veto condition can be set if candidate triggers are issued from FE8 monitoring the Cerenkov detector.
This latter trigger is accepted by the multi-event buffer only if there is enough space in the readout buffers (TX\_FIFO in FE8 and DAQ\_FIFO in TRB) to store the event, and if the trigger pulses are separated by the minimum time interval, see CFD/PSD processing in section~\ref{FEB} for further details.
Given the trigger-accept condition set on the DAQ FIFO ($\rm 64\,k \times 16$) fullness, and the IPBus readout rate, an average accepted trigger rate larger than 1\,kHz can be sustained.
Two 32-bit counters are implemented for monitoring the accepted and rejected candidate T1 triggers.
The readout part is in charge of deserializing the data provided by the ten FE8 boards and aggregating them.
The aggregated data are then analyzed by the ``\emph{select}'' finite state machine (FSM), that provides a second level (T2) trigger condition, and stored in a temporary buffer.
Eventually, if the T2 condition is met, the event stored in the temporary buffer is transferred to the DAQ\_FIFO and made available for readout.
The T2 trigger definition is still evolving.
Currently a multiplicity condition on the number of hits is implemented.
In the future, total charge conditions on specific parts of the detector could be implemented.
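The currently implemented T2 selection amounts to a simple predicate on the aggregated event. The sketch below illustrates it with an assumed event structure (a list of per-channel charges) and placeholder cut values; it is not the actual FSM logic:

```python
def t2_accept(hit_charges, min_multiplicity=4, charge_threshold=0.0):
    """Return True when the event passes the T2 multiplicity condition.

    hit_charges: per-channel Q_tot values of one aggregated event; a channel
    counts as a hit when its charge exceeds charge_threshold. Both cut
    values are illustrative placeholders, not the deployed settings.
    """
    hits = sum(1 for q in hit_charges if q > charge_threshold)
    return hits >= min_multiplicity
```

A future total-charge condition would simply add a second predicate of the form `sum(selected_charges) > q_min` over the relevant detector region.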
\begin{figure}
\begin{center}
\includegraphics[angle=-0,width=0.8\textwidth]{./figs/DAQpart_V2}
\caption{Block diagram of the TRB firmware. The three main parts are shown: the LED board controller, the trigger and the readout.}
\label{DAQfw}
\end{center}
\end{figure}
\section{LED board}
As introduced in section~\ref{intro}, LEDs are used to calibrate the detector and to monitor its stability.
The LED system is composed of one LED board (fig.~\ref{LEDboard}) that controls up to five remote LED boxes: three for the inner-detector, one for the Cerenkov detector and one spare.
Each LED box contains six LEDs and their associated driving electronics (fig.~\ref{LEDhw}).
The light generated by each LED is injected inside the detector by optical fibers.
To evaluate the detector linearity, the LEDs can be combined by lighting them simultaneously.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.55\textwidth]{./figs/photo_led_board}
\caption{Picture of the LED board.\label{LEDboard}}
\end{center}
\end{figure}
It must be noted that the LED boxes do not require any power supply.
A single signal pair is used to control each channel: the mean voltage level sets the light level and a superimposed square pulse fires the LED.
The LED board (fig.~\ref{LEDboard}) is built around an FPGA (XC6SLX45T-FGG484) and its associated BPI flash memory; it also features the mandatory MMC module and the five LED box drivers (see fig.~\ref{LEDhw}).
The FPGA permits the slow control of the board, achieved via IPBus, and the on-line control, achieved via the custom serial link, by the TRB.
The slow control part is used to select the LED box driver to use and to adjust the corresponding DAC channels in order to set the mean voltage level (0 to about 22\,V).
This allows up to 2000 photoelectrons to be produced in each PMT.
The on-line part is used to apply the LED pattern in real time and with a known timing, by firing the selected LED driver channels.
The LED pattern is an incremental pattern covering all possible combinations.
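Assuming the incremental pattern simply counts through every non-zero 6-bit word (the actual sequencing in the firmware may differ), the 63 combinations can be enumerated as:

```python
N_LED = 6  # LEDs per box

def led_patterns():
    """Yield every non-empty 6-bit LED pattern in incremental order."""
    for word in range(1, 1 << N_LED):
        yield word

patterns = list(led_patterns())  # 63 combinations, single LED up to all six lit
```

Each word maps one bit per LED driver channel, so the sequence exercises every single-LED flash as well as every simultaneous combination used for the linearity scan.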
\begin{figure}
\begin{center}
\includegraphics[angle=-0,width=0.5\textwidth]{./figs/led_box_driver}
\includegraphics[angle=-0,width=0.45\textwidth]{./figs/led_driver_modif}
\caption{Left hand side shows the LED box driver hosted by the LED board. Right hand side is one channel of the LED box.}
\label{LEDhw}
\end{center}
\end{figure}
\section{Summary}
In this paper, we have presented the design of the dedicated trigger and acquisition electronics for the STEREO experiment taking place at ILL.
The electronics, which fits in a single microTCA crate, is designed to instrument 68 PMT signals continuously digitized at 250\,MSPS.
It features two levels of trigger and can record selected data at rates higher than 1\,kHz.
The STEREO electronics has been fabricated and was fully tested with cosmic rays using the prototype of the veto-muon detector and in calibration runs with LEDs.
\section*{Acknowledgements}
The STEREO collaboration has been funded by the ANR-13-BS05-0007 grant.
\section{Introduction}
A star can be tidally stripped into pieces near a black hole when the tidal force exceeds the star's self-gravity; this phenomenon is called a tidal disruption event (TDE) \citep{1976MNRAS.176..633F, 1988Natur.333..523R}. In a simplified fallback model, an impulse approximation is assumed where the star is frozen until it reaches the pericenter, where a short-duration impulse of the tidal potential disrupts the star \citep{2009MNRAS.392..332L}. The bound debris follows a Keplerian orbit and returns to the pericenter with a mass fallback rate that evolves as $\dot{M}_{\rm fb} \propto t^{-5/3}$ at late time $t$. At early times, the mass fallback rate depends on the stellar density profile and the distribution of mass per orbital energy after the disruption. The returning debris interacts with the outflowing debris, and this stream interaction results in the formation of an accretion disc \citep{1994ApJ...422..508K,2016MNRAS.461.3760H}. The nature of the formed disc depends on the stream-stream interactions, the thermal radiative efficiency of the debris, the viscous dynamics within the debris, and on the pericenter and eccentricity of the initial stellar orbit; this can result in a circular or an elliptical disc, with or without still-infalling debris \citep{2020A&A...642A.111C}. If the energy and angular momentum of the infalling debris are lost on a timescale smaller than the orbital time of the debris, the mass accretion rate follows the mass fallback rate ($\dot{M}_{\rm fb}$) and the bolometric luminosity is $L_b \propto \dot{M}_{\rm fb} c^2$, where $c$ is the speed of light \citep{2002ApJ...576..753L}. However, the accretion of matter onto the black hole depends on the viscous dynamics in the accretion disc and on the pressure, which can be dominated by either radiation or gas.
A TDE accretion disc provides an excellent opportunity to study the evolution of the accretion phenomenon around supermassive black holes, which in general takes millions of years in active galactic nuclei (AGN). A TDE disc may evolve through super- and sub-Eddington phases, whereas the accretion models constructed in the literature typically treat individual phases. The sub-Eddington disc with gas pressure and without fallback has been modelled analytically using a self-similar formulation for a non-relativistic disc by \citet{1990ApJ...351...38C} and numerically for a relativistic disc by \citet{2019MNRAS.489..132M}. \citet{2009MNRAS.400.2070S} constructed a steady slim disc accretion model with an adiabatic and spherical outflow, whereas \citet{2021NewA...8301491M} constructed a time-dependent and self-similar model with mass infall to the disc for both the sub-Eddington and the super-Eddington (with outflow) phases, where the mass outflow rate is a function of time only. \citet{2020MNRAS.496.1784M} constructed a relativistic thin disc model with mass fallback at the outer radius for full and partial TDEs, and showed that the late-time luminosity decline is steeper than the luminosity obtained using $L \propto \dot{M}_{\rm fb}$. The transition of a disc from one phase to another is usually obtained by matching the mass accretion rates \citep{2014ApJ...784...87S}, which results in a non-smooth transition in disc parameters such as the surface density and luminosity. Here, we aim to construct an advective accretion model with fallback that shows a smooth transition from the Eddington to the sub-Eddington phase.
\citet{2019ApJ...883...76R} compared the correlations between the UV to X-ray spectral index ($\alpha_{\rm OX}$) and the Eddington ratio ($L / L_E$) in AGN and X-ray binaries, and found that the two classes show remarkably similar accretion state transitions. They concluded that the dynamics of black hole accretion flows directly scale across the various black hole masses and the different accretion states. \citet{2020MNRAS.497L...1W} studied this correlation for seven TDEs and showed that the $\alpha_{\rm OX} - L / L_E$ correlation for TDE sources is similar to that observed by \citet{2019ApJ...883...76R} for AGN and X-ray binaries. They found that the X-ray emission is dominated by the power-law spectrum at a low Eddington ratio and by the soft disc spectrum at a high Eddington ratio, such that the power-law X-ray emission is suppressed. The spectral state transition occurs around an Eddington ratio of $\sim 0.03$. \citet{2016MNRAS.463.3813H} found that an absorbed blackbody plus power-law model provides a good fit to the X-ray spectrum of ASAS-SN 15oi. \citet{2017ApJ...838..149A} used an absorbed power-law model to fit thirteen TDE X-ray spectra to study the time evolution of TDE X-ray luminosity. \citet{2017A&A...598A..29S}, using the X-ray and UV observations of the source XMMSL1 J074008.2-853927, showed that power-law emission dominates over the disc emission above 2 keV and that the source has both thermal and non-thermal components. The power-law emission indicates the presence of a non-thermal and hotter medium surrounding the standard accretion disc. Here, we assume that this hot medium is a corona above the disc and that the energy transported from the disc to the corona is Compton scattered.
The effective temperature in a steady thin disc scales as $T_{\rm eff} \propto M_{\bullet}^{-1/4} \dot{m}^{1/4} (r/r_g)^{-3/4}$, where $M_{\bullet}$ is the black hole mass, $\dot{m}$ is the accretion rate normalized to Eddington rate, $r$ is the radial variable and $r_g$ is the gravitational radii. The disc thermal emission peaks in UV for AGNs, and in soft X-ray for X-ray binaries, whereas the Comptonized coronal emission dominates the hard X-rays. The X-ray spectrum is either dominated by thermal emission from the disc or non-thermal and power-law emission from the corona. The presence of corona plays a part in cooling the accretion flow and explaining the excess hard X-ray emission. We show that the impact on the disc bolometric luminosity and spectrum due to the inclusion of the corona is insignificant when the luminosity is high but is significant when the luminosity is sub-Eddington.
In this paper, we construct a non-relativistic and time-dependent advective accretion disc-corona model for TDEs with fallback. We include the energy loss to the corona in the energy conservation equation. We use a viscosity that is a combination of total pressure and gas pressure. We include the gravitational and Doppler redshifts in calculating the spectral luminosity. The infalling debris is assumed to form a seed accretion disc that evolves due to mass gain through the mass fallback and mass loss through the viscous accretion onto the black hole with energy loss to the corona. The outer radius in our model is constant, and there the mass accretion rate is equal to the mass fallback rate. The outer radius of an accretion disc with mass fallback may evolve with time, but we have limited our modelling to a constant outer radius, which simplifies our numerical calculation and yields reasonable solutions. We do not impose global angular momentum conservation, in which the disc's outer radius evolves with time, but truncate our disc at a constant outer radius, assuming the infalling matter loses its angular momentum to the external infalling debris, whose evolution is not considered in this paper. We consider the accretion dynamics after the disc has formed, and the time at which matter starts crossing the innermost stable circular orbit (ISCO) marks the beginning of the accretion onto the black hole. This beginning time is obtained using the initial condition, and we find that an increase in the contribution of gas pressure to the viscous stress delays the onset of disc accretion. The presence of the corona affects the bolometric disc and spectral luminosities at late times in the sub-Eddington phase, whereas the mass accretion rate increases in the presence of the corona at initial times and shows a weak rise at late times. Our time-dependent accretion model shows a disc evolution from the Eddington to the sub-Eddington phase. 
We also estimate the evolution of coronal properties such as electron temperature, optical depth, and Compton $y$ parameter using a two-temperature plasma model in the corona where electrons cool via bremsstrahlung, synchrotron, and Compton cooling.
In the super-Eddington phase, outflows are driven by the strong radiation force at the disc surface. The dynamics of a time-dependent disc with an outflow are complex, and the presence of a corona surrounding such a disc is uncertain. \citet{2020MNRAS.497L...1W} showed that the non-thermal X-ray emission is suppressed at high luminosity and the spectrum is dominated by the disc spectrum. We also show that the disc luminosity declines significantly due to the corona at low luminosity. This suggests that the non-thermal emission is negligible if the disc has a radiatively driven outflow. We have neglected the outflow in our disc-corona model, and this model is more applicable for near- to sub-Eddington accretion. We have included the mass fallback rate in our formulation and show that the late-time mass accretion rate nearly follows the mass fallback rate. The obtained mass accretion rate is super-Eddington if the mass fallback rate is super-Eddington. We emphasise that our model is suitable when the mass fallback rate has dropped to sub-Eddington values or when the peak mass fallback rate is already sub-Eddington.
In section \ref{mfb}, we present the mass fallback rate of the disrupted debris constructed using an impulse approximation. In section \ref{dcm}, we present the advective disc-corona model with fallback where the basic assumptions and conditions are discussed. In section \ref{result}, we present the results of our accretion model and the time-evolution of accretion rate and luminosity. We discuss our results in section \ref{discuss} and present the summary in section \ref{summary}.
\section{Mass fallback rate}
\label{mfb}
We assume that the star is on a nearly parabolic orbit and is tidally disrupted for the pericenter $r_p \leq r_t$, where the tidal radius $r_t \simeq (M_{\bullet}/M_{\star})^{1/3} R_{\star}$. The specific energy of the disrupted debris is governed by the variation of the black hole potential across the star and the tidal spin-up of the star as a result of the tidal interaction. The dynamics of tidal interaction that results in the stellar spin-up are complex and depend on the stellar structure \citep{1992ApJ...385..604K}. A numerical simulation by \citet{2001ApJ...549..948A} showed that the tidal interaction can spin up the rotating velocity of the star close to the angular velocity of the stellar orbit at the pericenter. By including the tidal interaction, \citet{2002ApJ...576..753L} formulated the disrupted debris energy given by $E_{\rm d}= -k G M_{\bullet} \Delta R/ r_t^2$, where $k=1$ for no tidal spin up and $k=3$ for tidal spin up. The disrupted debris follows a Keplerian orbit and the time period of innermost debris is given by \citep{1988Natur.333..523R}
\begin{equation}
t_m= 40.8~{\rm days}~ M_6^{1/2} m^{1/5} k^{-3/2},
\label{tmt}
\end{equation}
\noindent where $M_6= M_{\bullet}/[10^6 M_{\odot}]$, $m=M_{\star}/M_{\odot}$, and the stellar radius is $R_{\star}=R_{\odot} m^{0.8}$ \citep{1994sse..book.....K}. Following the impulse approximation, the mass fallback rate is given by \citep{2009MNRAS.392..332L}
\begin{equation}
\dot{M}_{\rm fb}= \frac{4 \pi b}{3} \frac{M_{\star}}{t_m} \tau^{-5/3} \int_{x}^{1} \theta^{u}(x') x' \, {\rm d} x',
\label{mfbn}
\end{equation}
\noindent where $\theta(x)$ is the solution of the Lane-Emden equation with polytropic index $\Gamma= 1+ 1/u$, $b$ is the ratio of central to mean density of the star \citep{1943ApJ....97..255C} and $x=\Delta R / R_{\star}= \tau^{-2/3}$ with $\tau = t/t_m$. The integral is nearly constant at late times, which results in the mass fallback rate $\dot{M}_{\rm fb} \propto t^{-5/3}$. We consider the tidal spin-up in our model by taking $k=3$, and a polytropic index $\Gamma= 5/3$, which gives $u=3/2$.
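Equation (\ref{mfbn}) can be checked numerically with the standard library alone: integrate the Lane-Emden equation for $n = u = 3/2$, build $\theta(x)$ on $x = r/R_{\star}$, and evaluate the dimensionless rate $\dot{M}_{\rm fb}\, t_m/M_{\star}$. In this sketch we read the normalization as $b = \rho_c R_{\star}^3/M_{\star}$ (our assumption about the convention), which makes the integrated fallback equal to the bound half of the star, $M_{\star}/2$:

```python
import bisect
import math

n = 1.5  # polytropic index for Gamma = 5/3

def lane_emden(n, h=1e-4):
    """Integrate theta'' = -theta^n - (2/xi) theta' with RK4 from a series start."""
    xi = h
    th, dth = 1.0 - xi * xi / 6.0, -xi / 3.0  # series expansion near the centre
    xs, ts = [xi], [th]
    def rhs(xi, th, dth):
        return dth, -max(th, 0.0) ** n - 2.0 * dth / xi
    while th > 0.0:
        a1, b1 = rhs(xi, th, dth)
        a2, b2 = rhs(xi + h / 2, th + h / 2 * a1, dth + h / 2 * b1)
        a3, b3 = rhs(xi + h / 2, th + h / 2 * a2, dth + h / 2 * b2)
        a4, b4 = rhs(xi + h, th + h * a3, dth + h * b3)
        th += h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        dth += h / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        xi += h
        xs.append(xi)
        ts.append(th)
    return xs, ts, dth

xs, ts, dth_end = lane_emden(n)
xi1 = xs[-1]                              # first zero of theta, ~3.654 for n = 3/2
b = xi1 / (4.0 * math.pi * abs(dth_end))  # rho_c R_star^3 / M_star, ~1.43

def theta(x):
    """Polytrope profile at x = r/R_star (linear interpolation, 0 outside)."""
    xi = x * xi1
    i = min(max(bisect.bisect_left(xs, xi) - 1, 0), len(xs) - 2)
    w = (xi - xs[i]) / (xs[i + 1] - xs[i])
    return max((1.0 - w) * ts[i] + w * ts[i + 1], 0.0)

def inner_integral(x, m=2000):
    """integral_x^1 theta(x')^n x' dx' by the trapezoidal rule."""
    hh = (1.0 - x) / m
    g = lambda s: theta(s) ** n * s
    return hh * (0.5 * g(x) + 0.5 * g(1.0) + sum(g(x + j * hh) for j in range(1, m)))

def mdot(tau):
    """Fallback rate in units of M_star/t_m at tau = t/t_m >= 1, equation (2)."""
    return 4.0 * math.pi * b / 3.0 * tau ** (-5.0 / 3.0) * inner_integral(tau ** (-2.0 / 3.0))

# once the inner integral saturates, the rate falls off as t^(-5/3)
slope = math.log(mdot(2000.0) / mdot(1000.0)) / math.log(2.0)
```

The late-time logarithmic slope converges to $-5/3$, and integrating $\dot{M}_{\rm fb}$ over all $\tau \geq 1$ recovers $M_{\star}/2$ under the stated convention for $b$.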
The specific angular momentum at the pericenter for a parabolic stellar orbit is $J= \sqrt{2 G M_{\bullet} r_t}$. The specific angular momentum for a circular orbit is $J_c= r^2 \omega= \sqrt{G M_{\bullet} r_c}$, where the Keplerian angular frequency is $\omega=\sqrt{G M_{\bullet}/r_c^3}$, and $r_c$ is the circularization radius. Angular momentum conservation of the debris results in $r_c= 2 r_t$ \citep{1999ApJ...514..180U,2009MNRAS.400.2070S}, which is used as the outer radius in the steady accretion model of \citet{2009MNRAS.400.2070S} and in the time-dependent accretion model by \citet{2011ApJ...736..126M}. \citet{2016MNRAS.461.3760H} showed through numerical simulation that the circularization radius is close to $r_c$. To avoid uncertainty in the outer radius, we take it to be $r_c= 2 r_t$ in our calculation.
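For orientation, the characteristic scales for a fiducial disruption ($M_6 = m = 1$, $k = 3$) follow directly from these definitions. The sketch below evaluates them in CGS units; the specific numerical choices are illustrative:

```python
import math

# CGS constants
G, c = 6.674e-8, 2.998e10
M_sun, R_sun = 1.989e33, 6.957e10
day = 86400.0

M6, m, k = 1.0, 1.0, 3.0  # fiducial disruption with tidal spin-up
M_bh, M_star = 1e6 * M6 * M_sun, m * M_sun
R_star = R_sun * m**0.8

t_m = 40.8 * day * math.sqrt(M6) * m**0.2 * k**-1.5  # equation (1)
r_t = (M_bh / M_star) ** (1.0 / 3.0) * R_star        # tidal radius
r_c = 2.0 * r_t                                      # circularization radius
r_g = G * M_bh / c**2                                # gravitational radius

print(t_m / day, r_t / r_g, r_c / r_g)  # ~7.9 days, ~47 r_g, ~94 r_g
```

For a solar star around a $10^6 M_{\odot}$ black hole the disc is thus compact, with an outer edge of order a hundred gravitational radii.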
\section{Disc accretion model}
\label{dcm}
Here, we develop the advective accretion model for a TDE disc. We write the equations in cylindrical coordinates and assume the vertical flow to be zero. We employ the vertically integrated mass and momentum conservation equations. The mass accretion rate is given by
\begin{equation}
\frac{\partial \Sigma}{\partial t} = \frac{1}{2 \pi r} \frac{\partial \dot{M}}{\partial r},
\label{mcons}
\end{equation}
\noindent where $\Sigma$ is the surface density and $\dot{M}$ is the mass accretion rate that depends on radial velocity $v_r$ as $\dot{M} = -2 \pi r \Sigma v_r$. We assume the angular velocity to be Keplerian given by $v_{\phi} = r \Omega_K $, where $\Omega_K = \sqrt{G M_{\bullet}/r^3}$, such that the angular momentum conservation results in
\begin{equation}
\dot{M} = 6 \pi \sqrt{r} \frac{\partial}{\partial r}(\sqrt{r} \nu \Sigma),
\label{mdot}
\end{equation}
\noindent where $\nu$ is the viscosity. In the standard thin disc model, the disc height is estimated from the vertical conservation equation under the assumption $H / r \ll 1$ and is given by $c_s^2 = H^2 \Omega_K^2$; this was used by \citet{2009MNRAS.400.2070S} for a steady state slim disc, who showed that $H \sim r$ for near- to super-Eddington accretion. \citet{2002ApJ...576..908J} showed that the vertically averaged disc scale height satisfies $ c_s^2 = C_1 H^2 \Omega_K^2$, where $C_1 < 1$. $C_1$ represents the correction due to the large disc scale height, and its determination requires the vertical density structure. We use their scale-height formulation and consider $C_1$ to be a free parameter. The viscous heating is given by \citep{2002apa..book.....F}
\begin{equation}
Q^{+} = \frac{9}{4} \nu \Sigma \Omega_K^2,
\label{vis}
\end{equation}
\noindent and the advection flux $Q_{\rm adv}$ is given by
\begin{align}
Q_{\rm adv} &= C_v \Sigma T \left[\frac{1}{T} \frac{\partial T}{\partial t} + \frac{v_r}{T} \frac{\partial T}{\partial r} - (\Gamma_3 -1) \left\{\frac{1}{\rho} \frac{\partial \rho}{\partial t} + \frac{v_r}{\rho} \frac{\partial \rho}{\partial r} \right\} \right], \label{qadv}\\
C_v &= \frac{4 - 3 \beta_g}{\Gamma_3-1} \frac{\bar{P}}{\Sigma T},~~{\rm and}~~
\Gamma_3-1 = \frac{(4 -3 \beta_g)(\gamma_g -1)}{\beta_g + 12 (1-\beta_g)(\gamma_g-1)},
\end{align}
\noindent where $T$ is the temperature, $\rho = \Sigma / (2 H)$ is the density, vertically integrated pressure $\bar{P} \approx 2 P H$, $\beta_g$ is the ratio of gas to total pressure and $\gamma_g$ is the ratio of specific heats.
The corona above the accretion disc is heated via magnetic reconnections, and a fraction of the energy generated via viscous heating is transported to the corona by the magnetic fields. The amount of energy flux that escapes from the disc to the corona depends on the vertical transport of magnetic flux tubes. In this paper, we follow the prescription of \citet{2003MNRAS.341.1051M}, where the standard conservation equations of an optically thick disc are self-consistently coupled with the corona, and the fraction of energy dissipated to the corona, $f$, at any radius $r$ is given by
\begin{equation}
f = \frac{Q_{\rm cor}}{Q^{+}},
\label{feq}
\end{equation}
\noindent where $Q_{\rm cor}$ is the coronal energy flux at any $r$, which is the vertical Poynting flux given by $Q_{\rm cor} = v_D P_{\rm mag}$ \citep{1994ApJ...436..599S}. The magnetic pressure is given by $P_{\rm mag} = B^2/(8 \pi)$ with magnetic field $B$, and the drift velocity $v_D$ is taken to be proportional to the Alfvén speed $v_A$ via an order-unity constant $b_1$ \citep{2019A&A...628A.135A}, such that $v_D = b_1 \sqrt{2 P_{\rm mag}/\rho}$, where $\rho$ is the density. We assume that the stress is dominated by Maxwell stresses \citep{2015ApJ...808...54M} and is thus given by $\tau_{r\phi} = k_0 P_{\rm mag}$, where $k_0$ is of order unity. The magneto-rotational instability (MRI) growth rate depends on the ratio of radiation to gas pressure \citep{2002ApJ...566..148T}. \citet{2001ApJ...553..987B} showed through linear perturbation theory that the compressibility of MHD turbulence in the radiation-pressure-dominated regime slows the magnetic field growth rate, which then saturates. In the saturation limit, \citet{2003MNRAS.341.1051M} derived the viscous stress to be $ \tau_{r\phi} \propto P_{\rm mag} \propto \sqrt{P_g P_t}$, and a more generalised version of the viscous stress is $\tau_{r\phi} \propto P_{\rm mag} \propto P_g^{1-\mu} P_t^{\mu}$ \citep{2008ApJ...683..389D}. \citet{2009MNRAS.394..207C} considered $\mu = 1/2$ in his steady disc-corona model. We use the generalised form of the magnetic pressure, $P_{\rm mag} = \alpha_0 P_g^{1-\mu} P_t^{\mu}$, with $\mu$ constant and the total pressure $P_t= P_r + P_g$, where the radiation pressure is $P_r = a T^4/3$ with radiation constant $a$, and the gas pressure is $P_g = k_B \rho T/(\mu_m m_p)$, where $T$ is the temperature in the disc, $m_p$ is the proton mass, $k_B$ is the Boltzmann constant, and $\mu_m$ is the mean molecular weight, taken to be the ionized solar value of $0.65$.
The vertically integrated viscous stress is given by $\tau_{r\phi} = (\nu \Sigma/ 2 H) r (\partial \Omega / \partial r)$ \citep{2002apa..book.....F}, such that for a Keplerian velocity
\begin{equation}
\nu \Sigma = \frac{4 \alpha_s}{3} \frac{H}{\Omega_K} P_g^{1-\mu} P_t^{\mu},
\label{nuvis}
\end{equation}
\noindent where $\alpha_s = k_0 \alpha_0$. Then, the fraction of energy transported $f$ using equations (\ref{feq}) and (\ref{vis}) is given by
\begin{equation}
f(\beta_g) = b_2 \left(\frac{P_g}{P_t}\right)^{\frac{1-\mu}{2}} = b_2~ \beta_g^{\frac{1-\mu}{2}},
\label{fcor}
\end{equation}
\noindent where the constant $b_2 = \sqrt{2}b_1 \alpha_0^{1/2} / (3 k_0) $. The quantities $f(\beta_g),~ \beta_g,~{\rm and}~\mu$ lie within the limits $[0,~1]$, which imposes the constraint $b_2 \in [0,~1]$. The case $b_2 = 0$ implies that no disc energy is transported to the corona and the disc evolves as if the corona were absent. For $\mu \neq 1$, $f(\beta_g)$ decreases with decreasing $\beta_g$, implying that the radiation pressure reduces the cooling of the accretion flow by decreasing the fractional energy loss to the corona. The $\alpha_s$ and $b_2$ are free parameters, and we take their values within $[0,~1]$.
The energy conservation equation is given by $Q^{+} = Q_{\rm adv} + Q_{\rm rad} + Q_{\rm cor}$, where $Q_{\rm rad}$ is the radiative energy loss. The radiation flux is given by $Q_{\rm rad} = 4 \sigma T^4 / (3 \kappa \Sigma)$ with Thomson opacity $\kappa = 0.34~{\rm cm^2~g^{-1}}$. Using equation (\ref{feq}), we obtain $Q_{\rm adv} + Q_{\rm rad} = [1-f(\beta_g)]Q^{+} $.
The advection flux in the steady state is approximated as $Q_{\rm adv} = [\dot{M}/(2 \pi r^2)] c_s^2 \xi $, where $\xi$, a function of the logarithmic radial variation of entropy, is of the order of unity \citep{2002apa..book.....F}. Thus, $Q_{\rm adv} / Q^{+} = [2 C_1 / (9 \pi)] [\dot{M}/(\nu \Sigma)] (H/r)^2 $, which can be approximated as $Q_{\rm adv} = s(r) (H/r)^2 Q^{+}$, where $s(r) = [4 C_1 / 3]\, \partial [\ln (\sqrt{r} \nu \Sigma)]/\partial[\ln r]$. The $s(r)$ depends on the angular momentum distribution throughout the disc but, in general, is close to unity \citep{1995ApJ...438L..37A}. \citet{2011ApJ...736..126M} considered it to be unity, whereas we consider it to be a constant parameter denoted by $k_1$, such that $Q_{\rm adv} = k_1 (H/r)^2 Q^{+}$. In the presence of energy loss to the corona, the amount of energy available for advection and radiation decreases by a factor $1-f(\beta_g)$, and thus we approximate the advective energy loss as
\begin{equation}
Q_{\rm adv} = k_1 \left(\frac{H}{r}\right)^2 [1-f(\beta_g)] Q^{+},
\label{qadv1}
\end{equation}
\noindent which results in radiation energy loss given by
\begin{equation}
Q_{\rm rad} = [1-f(\beta_g)] \left[1-k_1 \left(\frac{H}{r}\right)^2\right]Q^{+}.
\label{qrad}
\end{equation}
\noindent In the absence of the corona, the radiative flux is given by $Q_{\rm rad} = [1-k_1 (H/r)^2]Q^{+}$, which implies that the total cooling flux of the accretion flow in the presence of the corona is $[1-f(\beta_g)]^{-1} 4 \sigma T^4 / (3 \kappa \Sigma)$; this is similar to the cooling rate approximated by \citet{2002ApJ...576..908J} for a constant $f(\beta_g)$.
The conservation equations (\ref{mcons}) and (\ref{mdot}) result in second-order radial derivatives of the surface density and temperature. We transform the conservation equations from $\Sigma$ and $T$ to $\mathcal{F}$ and $W = \nu \Sigma$. This simplifies the non-derivative and the second-order derivative terms in the conservation equations. This transformation is also useful in estimating the boundary values with the assumptions and conditions we employ. We perform the transformation using $\mathcal{F} = \left[(\mu_m m_p/k_B C_1) (a^2/9)\right]^{1/2} T^{7/2} \Sigma^{-1} \Omega_K^{-1}$, such that the total pressure ($p_t$) and gas pressure ($p_g$) are given by
\begin{align}
p_t &= \frac{1}{2} \left(\frac{k_B C_1}{\mu_m m_p}\right)^{4/7} \left(\frac{9}{a^2}\right)^{1/14} \Sigma^{8/7} \Omega_K^{8/7} \mathcal{F}^{1/7} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right], \\
p_g &= \frac{1}{2} \left(\frac{k_B C_1}{\mu_m m_p}\right)^{4/7} \left(\frac{9}{a^2}\right)^{1/14} \Sigma^{8/7} \Omega_K^{8/7} \mathcal{F}^{1/7} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{-1}.
\end{align}
\noindent The ratio of gas to total pressure $\beta_g = \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{-2} $, and the viscosity using equation (\ref{nuvis}) is given by
\begin{equation}
\nu \Sigma = \chi \Sigma^{9/7} \Omega_K^{-5/7} \mathcal{F}^{2/7} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{2 \mu},
\label{nusig}
\end{equation}
\noindent where $\chi = (2\alpha_s / 3 C_1) (k_B C_1 / \mu_m m_p)^{8/7} (9/a^2)^{1/7}$. The sound speed $c_s = \sqrt{p/\rho}$ is then given by
\begin{equation}
c_s^2 = \eta_1 \Sigma^{2/7}\Omega_K^{2/7} \mathcal{F}^{2/7} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{2},
\end{equation}
\noindent where $\eta_1 = (1/C_1) (k_B C_1 / \mu_m m_p)^{8/7} (9/a^2)^{1/7} $. The radiation flux is given by $Q_{\rm rad} = \eta_2 \Sigma^{1/7} \Omega_K^{8/7} \mathcal{F}^{8/7}$, where $\eta_2 = [4 \sigma/(3 \kappa)] [k_B C_1/ \mu_m m_p ]^{4/7} (9 / a^2)^{4/7}$.
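The transformation can be verified numerically: the definitions above imply the exact identities $p_t - p_g = a T^4/3$, $p_t\, p_g = (k_B C_1/\mu_m m_p)\, \Sigma^2 \Omega_K^2 T/4$, and $\beta_g = [\mathcal{F} + \sqrt{\mathcal{F}^2+1}]^{-2}$. A quick sketch, in which the disc state ($T$, $\Sigma$, $\Omega_K$) and $C_1 = 1$ are arbitrary illustrative choices:

```python
import math

# physical constants (CGS)
k_B, m_p, a_rad = 1.3807e-16, 1.6726e-24, 7.5657e-15
mu_m, C1 = 0.65, 1.0
A = k_B / (mu_m * m_p)  # gas constant per unit mass

# arbitrary illustrative disc state
T, Sigma, Omega = 2.0e5, 1.0e4, 2.0e-4

# F as defined in the text, then the transformed pressures
F = math.sqrt((1.0 / (A * C1)) * a_rad**2 / 9.0) * T**3.5 / (Sigma * Omega)
y = F + math.sqrt(F * F + 1.0)
K = (A * C1) ** (4.0 / 7.0) * (9.0 / a_rad**2) ** (1.0 / 14.0)
p_t = 0.5 * K * Sigma ** (8.0 / 7.0) * Omega ** (8.0 / 7.0) * F ** (1.0 / 7.0) * y
p_g = 0.5 * K * Sigma ** (8.0 / 7.0) * Omega ** (8.0 / 7.0) * F ** (1.0 / 7.0) / y
beta_g = p_g / p_t

# identities packed into the transform:
#   p_t - p_g  ==  a T^4 / 3                  (radiation pressure)
#   p_t * p_g  ==  A C1 Sigma^2 Omega^2 T / 4 (gas pressure content)
#   beta_g     ==  [F + sqrt(F^2 + 1)]^(-2)
```

The single variable $\mathcal{F}$ therefore tracks the gas-to-radiation pressure balance, with $\mathcal{F} \gg 1$ in the radiation-pressure-dominated regime.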
We assume that the debris circularizes to form a seed disc with outer radius $r_{\rm out} = r_c$, and mass is added to the disc at $r_{\rm out}$ by the later infalling debris. The rate at which mass is added at the outer radius is the mass fallback rate $\dot{M}_{\rm fb}$. If the mass accretion rate is smaller or larger than the mass fallback rate, the disc's outer radius will increase or decrease accordingly. In this paper, we limit our calculation to a constant outer radius and assume that the mass added by the infalling debris is accreted inward. To calculate the boundary conditions at $r_{\rm out}$, we assume that the mass accretion rate at the outer radius is equal to the mass fallback rate \citep{2011ApJ...736..126M}. In the steady limit and for the case of no coronal energy loss ($b_2 = 0$), $Q_{\rm adv} = k_1 (H/r)^2 Q^{+}$ with $Q_{\rm adv} = \dot{M} c_s^2/(2 \pi r^2)$, and $\dot{M} = 6 \pi \sqrt{r}~{\rm d}(\sqrt{r} \nu\Sigma)/{\rm d} r$ \citep{2002apa..book.....F}. Using equation (\ref{vis}), we get $\displaystyle{\nu \Sigma \propto r^{\frac{3}{4}\frac{k_1}{C_1} - \frac{1}{2}}}$. For $k_1 = (2/3)C_1$, $\nu \Sigma$ is a constant, such that ${\rm d}(\nu\Sigma)/{\rm d} r = 0$. We assume this result in our calculation to obtain the boundary condition at the outer radius, and thus $\partial (\nu \Sigma)/\partial r|_{r_{\rm out}} = 0$. Then, equation (\ref{mdot}) at $r_{\rm out}$ gives $\dot{M}_{\rm fb} = 3 \pi \nu \Sigma$, which results in
\begin{equation}
\Sigma_{\rm out}(t) = \left[\frac{\dot{M}_{\rm fb}(t)}{3 \pi \chi}\right]^{7/9} \Omega_K^{5/9}(r_{\rm out}) \mathcal{F}^{-2/9} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{-14 \mu /9}.
\label{sigout}
\end{equation}
\noindent Using this in equation (\ref{qrad}), the energy conservation equation results in
\begin{multline}
1 -\frac{k_1 \eta_1}{C_1}\left[\frac{\dot{M}_{\rm fb}(t)}{3 \pi \chi}\right]^{2/9} \frac{\Omega_K^{-14/9}(r_{\rm out})}{r_{\rm out}^2} \mathcal{F}^{2/9} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{2-4 \mu /9} - \\ \frac{1}{1-f(\beta_g)} \frac{4 \eta_2}{9 \chi} \left[\frac{\dot{M}_{\rm fb}(t)}{3 \pi \chi}\right]^{-8/9} \Omega_K^{-7/9}(r_{\rm out}) \mathcal{F}^{10/9} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{-2\mu /9} \\ = 0,
\end{multline}
\noindent which yields $\mathcal{F}_{\rm out}(t)$, and thus we can obtain $\Sigma_{\rm out}(t)$ and $T_{\rm out}(t)$, which are the boundary conditions in our calculations. We do not assume any boundary conditions at the inner radius, which is taken to be the innermost stable circular orbit (ISCO), and we let all the variables evolve according to their equations.
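As an illustrative sketch (not the production code, which uses Mathematica's NDSolve), the root-find for $\mathcal{F}_{\rm out}(t)$ can be written in Python. Here $A$ and $B$ are placeholder constants standing in for the $\dot{M}_{\rm fb}$- and $\Omega_K$-dependent prefactors of the two bracketed terms in the outer-boundary energy balance, with the exponents taken as printed:

```python
import math

def energy_residual(F, A, B, mu):
    """Residual of the outer-boundary energy balance,
    1 - A*F^(2/9)*(F+sqrt(F^2+1))^(2-4*mu/9)
      - B*F^(10/9)*(F+sqrt(F^2+1))^(-2*mu/9) = 0,
    where A and B (hypothetical values here) collect the
    Mdot_fb- and Omega_K-dependent prefactors."""
    s = F + math.sqrt(F * F + 1.0)
    return (1.0
            - A * F ** (2.0 / 9.0) * s ** (2.0 - 4.0 * mu / 9.0)
            - B * F ** (10.0 / 9.0) * s ** (-2.0 * mu / 9.0))

def solve_F_out(A, B, mu, lo=1e-8, hi=1e4, tol=1e-12):
    """Bisection in log-F: the residual tends to 1 as F -> 0 and
    becomes negative for large F, so a root is bracketed."""
    flo = energy_residual(lo, A, B, mu)
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric mean = log-space bisection
        fmid = energy_residual(mid, A, B, mu)
        if flo * fmid > 0.0:
            lo, flo = mid, fmid
        else:
            hi = mid
        if hi / lo - 1.0 < tol:
            break
    return 0.5 * (lo + hi)
```

The same bracketing argument guarantees a unique crossing for any positive prefactors, so the boundary value $\mathcal{F}_{\rm out}$ is well defined at each time step.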
We perform another transformation by assuming $\nu \Sigma = W$, such that mass conservation equation (\ref{mcons}) is given by
\begin{equation}
\frac{7}{9} \frac{1}{W} \frac{\partial W}{\partial t} - \frac{2}{9}\left[1+ \frac{7 \mu \mathcal{F}}{\sqrt{\mathcal{F}^2+1}}\right] \frac{1}{\mathcal{F}} \frac{\partial \mathcal{F}}{\partial t} = \frac{3}{r \Sigma} \frac{\partial}{\partial r} \left[\sqrt{r} \frac{\partial}{\partial r} (\sqrt{r} W)\right],
\label{twft1}
\end{equation}
\noindent and the energy conservation equation, using equations (\ref{qadv}) and (\ref{qadv1}), is given by
\begin{multline}
\frac{2}{3} \left(\frac{4}{3}-\Gamma_3\right) \frac{1}{W} \frac{\partial W}{\partial t} + \frac{S(\mathcal{F})}{\mathcal{F}} \frac{\partial \mathcal{F}}{\partial t} + \frac{v_r}{r} \left[\frac{2}{3} \left(\frac{4}{3}-\Gamma_3\right) \frac{r}{W} \frac{\partial W}{\partial r} + \right. \\ \left. \frac{S(\mathcal{F}) r}{\mathcal{F}} \frac{\partial \mathcal{F}}{\partial r} - 2 \left(\frac{4}{3}-\Gamma_3\right)\right] = \frac{9 k_1}{4 C_1} \frac{\Gamma_3 -1}{4 - 3 \beta_g} \frac{W}{r^2 \Sigma} [1-f(\beta_g)],
\label{twft2}
\end{multline}
\noindent where $\Sigma$ and $S(\mathcal{F})$ are given by
\begin{equation}
\Sigma = \chi^{-7/9} W^{7/9} \Omega_K^{5/9} \mathcal{F}^{-2/9} \left[\mathcal{F} + \sqrt{\mathcal{F}^2 + 1}\right]^{-14 \mu /9},
\label{sigw}
\end{equation}
\begin{equation}
S(\mathcal{F}) = -\frac{1}{9} - \left(1 + \frac{16 \mu}{9}\right)\frac{\mathcal{F}}{\sqrt{\mathcal{F}^2 + 1}} + \left[\frac{1}{3} + \left(1 + \frac{4 \mu}{3}\right) \frac{\mathcal{F}}{\sqrt{\mathcal{F}^2 + 1}}\right]\Gamma_3.
\label{sft}
\end{equation}
We solve equations (\ref{twft1}) and (\ref{twft2}) with initial and boundary conditions to obtain the disc evolution. We now focus on calculating the initial solution for the disc. We assume that at the initial time the solutions are given by $W = W_t W_r(r)$ and $\mathcal{F} = \mathcal{F}_t \mathcal{F}_r(r)$, where $W_t$ and $\mathcal{F}_t$ take the values $W_{\rm out}$ and $\mathcal{F}_{\rm out}$, such that $W_r(r)$ and $\mathcal{F}_r(r)$ are equal to unity at $r_{\rm out}$. We further assume that the time derivatives $\partial W_t/\partial t$ and $\partial \mathcal{F}_t/\partial t$ are constant at all radii and take their values at the boundary $r_{\rm out}$. Equations (\ref{twft1}) and (\ref{twft2}) then become functions of $r$ only, and we solve them with the boundary values $W_r(r_{\rm out}) =1$, $\partial W_r / \partial r |_{r_{\rm out}} =0 $ and $\mathcal{F}_r (r_{\rm out}) =1$. These assumptions are made for simplicity, to obtain an initial solution for $W$ and $\mathcal{F}$ that is consistent with the boundary conditions at the outer radius. This postulation applies only at the initial time; at later times, the solutions for $W$ and $\mathcal{F}$ are obtained by solving the coupled differential equations numerically in $\{t,~r\}$ space. We calculate the surface density at the initial time, $\Sigma_i$, using equation (\ref{sigw}) with the solution $W = W_t W_r(r)$ and $\mathcal{F} = \mathcal{F}_t \mathcal{F}_r(r)$.
If the moment of disruption is $t=0$, the innermost bound debris returns to the pericenter at a time $t=t_m$, and the infalling debris forms an initial accretion disc in time $t_c$ with mass $\displaystyle{M_{\rm i}(t_c) = \int_{t_m}^{t_c} \dot{M}_{\rm fb} \, {\rm d} t}$. With the surface density profile $\Sigma_i$ obtained at the initial time, the disc mass at the initial time is given by $\displaystyle{M_d = \int_{r_{\rm in}}^{r_{\rm out}} \Sigma_i 2 \pi r \, {\rm d} r}$. Since we do not know the initial time $t_c$ in advance, we construct the initial $\Sigma$ profile, and thus the corresponding disc mass $M_d$, at various times $t$. Then, by equating $M_{\rm i}(t_c) = M_d$, we calculate the initial time $t_c$, which is the time at which matter crosses the ISCO and accretion onto the black hole begins. With the obtained initial and boundary solutions of the disc, we solve equations (\ref{twft1}) and (\ref{twft2}) for the disc evolution. To solve the coupled partial differential equations, we use the NDSolve package in Wolfram Mathematica Version 12\footnote{\url{https://reference.wolfram.com/language/tutorial/NDSolveIntroductoryTutorial.html}} with the method of lines and spatial discretization using TensorProductGrid.
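To make the mass-balance step concrete, a minimal Python sketch finds $t_c$ by bisection. For illustration only, it assumes the late-time scaling $\dot{M}_{\rm fb} \propto (t/t_m)^{-5/3}$ (so the fallback mass integral is analytic) and takes the disc mass as an arbitrary callable standing in for the integral of $\Sigma_i$ over radius:

```python
import math

def fallback_mass(t, t_m, mdot_peak):
    """Debris mass returned between t_m and t, for the illustrative
    late-time scaling mdot_fb = mdot_peak * (t/t_m)**(-5/3):
    M_i(t) = (3/2) * mdot_peak * t_m * (1 - (t/t_m)**(-2/3))."""
    return 1.5 * mdot_peak * t_m * (1.0 - (t / t_m) ** (-2.0 / 3.0))

def find_tc(disc_mass, t_m, mdot_peak, hi_factor=100.0, iters=200):
    """Bisect for the time t_c at which the infallen debris mass
    equals the seed-disc mass.  `disc_mass` is any callable M_d(t);
    here it is a placeholder for the Sigma_i integral."""
    lo, hi = t_m * (1.0 + 1e-9), t_m * hi_factor
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fallback_mass(mid, t_m, mdot_peak) < disc_mass(mid):
            lo = mid  # too early: not enough debris has fallen back
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with $t_m = 1$, a peak rate of unity and a constant target disc mass of $0.75$, the balance $\tfrac{3}{2}(1 - t_c^{-2/3}) = 0.75$ gives $t_c = 2^{3/2}$.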
We then calculate the effective temperature of the disc using $ \sigma T_{\rm eff}^4 = [1- f(\beta_g)] Q^{+} - Q_{\rm adv} $, and the luminosity is then obtained assuming a blackbody approximation. The bolometric luminosity from the disc is given by
\begin{equation}
L_b = \int_{r_{\rm in}}^{r_{\rm out}} \sigma T_{\rm eff}^4 2 \pi r \, {\rm d} r.
\label{lbol}
\end{equation}
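A simple numerical sketch of equation (\ref{lbol}), assuming $r$ and $T_{\rm eff}$ are available as tabulated arrays from the disc solution (SI units for the quoted Stefan-Boltzmann constant):

```python
import math

def bolometric_luminosity(r, t_eff, sigma_sb=5.670374419e-8):
    """Trapezoidal estimate of L_b = int sigma*T_eff^4 * 2*pi*r dr
    over the disc; r and t_eff are same-length sequences from the
    numerical solution (placeholder inputs here)."""
    flux = [sigma_sb * T ** 4 * 2.0 * math.pi * ri
            for ri, T in zip(r, t_eff)]
    L = 0.0
    for i in range(len(r) - 1):
        L += 0.5 * (flux[i] + flux[i + 1]) * (r[i + 1] - r[i])
    return L
```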
\noindent The energy transported to the corona, $Q_{\rm cor} = f(\beta_g)Q^{+}$, is Compton scattered and constitutes the X-ray power-law spectrum. We assume a plane-parallel geometry for the corona, and the total X-ray luminosity is given by
\begin{equation}
L_c = \int_{r_{\rm in}}^{r_{\rm out}} Q_{\rm cor} 2 \pi r \, {\rm d} r.
\end{equation}
Even though we have assumed that the accretion disc is non-relativistic, we consider the effect of relativistic gravitational redshift of photons in the Kerr geometry. The gravitational and Doppler redshifts are included through the Lorentz invariant $I_{\nu}/\nu^3$ \citep{1979rpa..book.....R}, such that $I_{\nu}(\nu_{\rm obs}) = g^3 I_{\nu}(\nu_{\rm em})$, where $\nu_{\rm em}$ is the emitted frequency, $g$ is the redshift factor and $I_{\nu}$ is the blackbody intensity. The combined kinematic and gravitational redshift factor is taken to be
\begin{equation}
g = \frac{1}{U^0} = \frac{\sqrt{1-3/x +2j/x^{3/2}}}{1 +j/x^{3/2}},
\end{equation}
\noindent where $j$ is the black hole spin, $x = r/r_g$ with $r_g$ the gravitational radius, and $U^0$ is the time component of the four-velocity for a circular orbit. The radiation flux per unit frequency, as seen by a distant observer at rest, is given by
\begin{equation}
F_{\nu} (\nu_{\rm obs}) = \int I_{\nu}(\nu_{\rm obs}) \, {\rm d} \Theta,
\end{equation}
\noindent where $I_{\nu}(\nu_{\rm obs})$ is the intensity at the observed frequency, and we approximate the solid-angle element ${\rm d} \Theta$ subtended at a distant observer, in the Newtonian limit, as
\begin{equation}
{\rm d} \Theta = \frac{2 \pi r {\rm d} r }{D_L^2} \cos\theta_{\rm obs},
\label{dtheta}
\end{equation}
\noindent where $D_L$ is the luminosity distance from the source to the observer. Using the Lorentz invariant, the observed flux in a spectral band $\{\nu_l,~\nu_h\}$, where $\nu_l$ and $\nu_h$ are the low and high frequencies, is given by
\begin{equation}
F (\nu_{\rm obs}) = \frac{\cos\theta_{\rm obs}}{D_L^2} \int_{r_{\rm in}}^{r_{\rm out}} \int_{\nu_l}^{\nu_h} g^3 I_{\nu}\left(\frac{\nu_{\rm obs}}{g}\right) \, {\rm d} A \, {\rm d} \nu_{\rm obs}.
\label{fnuobs}
\end{equation}
\noindent Then, the observed luminosity in a given spectral band is $L (\nu_{\rm obs}) = 4 \pi D_L^2 F (\nu_{\rm obs})$, and we consider the galaxy to be face-on such that $\theta_{\rm obs} = 0^{\circ}$. In the next section, we present the results of our model.
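Assuming a blackbody $I_\nu$ and the standard Kerr circular-orbit redshift factor, the face-on band luminosity can be sketched in Python; since $L = 4\pi D_L^2 F$, the $D_L^2$ factors cancel. The quadrature rules and input arrays here are illustrative placeholders, not the paper's actual integrator:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def redshift_factor(x, j):
    """g = 1/U^0 for a circular equatorial orbit at x = r/r_g,
    black hole spin j (standard Kerr circular-orbit result)."""
    return math.sqrt(1.0 - 3.0 / x + 2.0 * j / x ** 1.5) \
        / (1.0 + j / x ** 1.5)

def planck(nu, T):
    """Blackbody specific intensity B_nu [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu ** 3 / (C * C * math.expm1(H * nu / (KB * T)))

def band_luminosity(r, t_eff, j, r_g, nu_lo, nu_hi, n_nu=64):
    """Face-on band luminosity L = 4*pi*D_L^2*F from the band-flux
    integral: L = 4*pi * int 2*pi*r dr int g^3 B_nu(nu_obs/g, T) dnu.
    Midpoint rules in r and nu; r, t_eff are same-length arrays."""
    dnu = (nu_hi - nu_lo) / n_nu
    L = 0.0
    for i in range(len(r) - 1):
        rm = 0.5 * (r[i] + r[i + 1])
        Tm = 0.5 * (t_eff[i] + t_eff[i + 1])
        g = redshift_factor(rm / r_g, j)
        band = sum(g ** 3 * planck((nu_lo + (k + 0.5) * dnu) / g, Tm)
                   for k in range(n_nu)) * dnu
        L += 8.0 * math.pi ** 2 * rm * band * (r[i + 1] - r[i])
    return L
```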
\begin{table}
\scalebox{0.93}{
\begin{tabular}{|c|ccc|}
\hline
&&&\\
Cases & $M_{\bullet,6}$ & $m$ & $j$ \\
&&& \\
\hline
&&&\\
\Romannum{1} & 1 & 1 & 0 \\
&&&\\
\Romannum{2} & 5 & 1 & 0 \\
&&&\\
\Romannum{3} & 10 & 1 & 0 \\
&&&\\
\Romannum{4} & 1 & 5 & 0 \\
&&&\\
\Romannum{5} & 1 & 10 & 0 \\
&&&\\
\Romannum{6} & 1 & 1 & 0.5 \\
&&&\\
\Romannum{7} & 1 & 1 & 0.8 \\
&&&\\
\hline
\end{tabular}
}
{
\begin{tabular}{|c|c|}
\hline
&\\
Cases & $b_2$ \\
& \\
\hline
&\\
B1 & 0 \\
&\\
B2 & 0.1 \\
&\\
B3 & 0.5 \\
&\\
B4 & 0.9 \\
&\\
\hline
\end{tabular}
}
{
\begin{tabular}{|c|c|}
\hline
&\\
Cases & $\mu$ \\
& \\
\hline
&\\
A1 & 1 \\
&\\
A2 & 0.9 \\
&\\
A3 & 0.8 \\
&\\
A4 & 0.7 \\
&\\
\hline
\end{tabular}
}
\caption{The free variables in our model are $b_2$, $\mu$, $M_{\bullet,6} = M_{\bullet}/[10^6 M_{\odot}]$, $m = M_{\star}/M_{\odot}$ and the black hole spin $j$. The parameter sets used in our model are shown. The parameter $b_2$ in equation (\ref{fcor}) sets the amount of energy transported from the disc to the corona, and $\mu$, defined below equation (\ref{feq}), is a constant power-law index that indicates the contributions of gas and total pressure to the viscous stress. See section \ref{result} for more discussion.}
\label{parset}
\end{table}
\begin{table}
\scalebox{0.82}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&\\
$b_2$ & $\mu$ & \Romannum{1} & \Romannum{2} & \Romannum{3} & \Romannum{4} & \Romannum{5} & \Romannum{6} & \Romannum{7} \\
&&&&&&&&\\
\hline
&&&&&&&&\\
B1 & A1 & 3.47 & 7.28 & 10.25 & 6.67 & 7.2 & 3.48 & 3.48 \\
&&&&&&&&\\
B1 & A2 & 4.54 & 8.57 & 11.54 & 8.07 & 10.49 & 4.55 & 4.56 \\
&&&&&&&&\\
B1 & A3 & 6.62 & 11.9 & 15.26 & 12.13 & 15.68 & 6.65 & 6.66 \\
&&&&&&&&\\
B1 & A4 & 10.09 & 18.47 & 23.38 & 17.67 & 20.84 & 10.11 & 10.11\\
&&&&&&&&\\
\hline
&&&&&&&&\\
B2 & A1 & 3.48 & 7.32 & 10.31 & 5.71 & 7.25 & 3.49 & 3.50 \\
&&&&&&&&\\
B2 & A2 & 4.56 & 8.59 & 11.58 & 8.11 & 10.53 & 4.57 & 4.58 \\
&&&&&&&&\\
B2 & A3 & 6.63 & 11.92 & 15.28 & 12.15 & 15.73 & 6.66 & 6.68 \\
&&&&&&&&\\
B2 & A4 & 10.10 & 18.48 & 23.39 & 17.63 & 21.13 & 10.12 & 10.12 \\
&&&&&&&&\\
\hline
&&&&&&&&\\
B3 & A1 & 3.58 & 7.53 & 10.64 & 5.84 & 7.48 & 3.59 & 3.59 \\
&&&&&&&&\\
B3 & A2 & 4.62 & 8.71 & 11.74 & 8.23 & 10.69 & 4.64 & 4.65 \\
&&&&&&&&\\
B3 & A3 & 6.68 & 11.99 & 15.37 & 12.23 & 15.82 & 6.71 & 6.73 \\
&&&&&&&&\\
B3 & A4 & 10.14 & 18.54 & 23.22 & 17.68 & 20.93 & 10.16 & 10.14 \\
&&&&&&&&\\
\hline
&&&&&&&&\\
B4 & A1 & 3.82 & 8.44 & 12.18 & 6.19 & 7.83 & 3.83 & 3.84 \\
&&&&&&&&\\
B4 & A2 & 4.69 & 8.86 & 11.98 & 8.37 & 10.87 & 4.72 & 4.73 \\
&&&&&&&&\\
B4 & A3 & 6.73 & 12.07 & 15.48 & 12.31 & 15.98 & 6.76 & 6.78 \\
&&&&&&&&\\
B4 & A4 & 10.16 & 18.59 & 23.53 & 17.69 & 21.05 & 10.18 & 10.17 \\
&&&&&&&&\\
\hline
\end{tabular}
}
\caption{The time $t_c$ in days for the infalling debris to form a seed disc after disruption ($t = 0$); the formed disc then evolves through the viscous mechanism in the disc and the later debris infall. The parameter sets are given in Table \ref{parset}. See section \ref{result} for more discussion.}
\label{tctm}
\end{table}
\section{Result}
\label{result}
In this section, we present the results of the advection disc-corona model constructed in section \ref{dcm}. First, we discuss the various unknown parameters in the model. In the steady approximation (discussed before equation \ref{sigout}), we have shown that $ k_1 = 2 C_1 / 3$. We follow this approximation and, for simplicity, assume $C_1 = 1$ such that $c_s^2 = H^2 \Omega_K^2$ and thus $k_1 = 2/3$. We take $\alpha_s=0.1$ and $\gamma_g = 5/3$ throughout our calculations. The free variables in our model are $b_2$, $\mu$, $M_{\bullet}$, $M_{\star}$ and $j$. We normalise the black hole mass as $M_{\bullet,6} = M_{\bullet}/[10^6 M_{\odot}]$ and the stellar mass as $m = M_{\star}/M_{\odot}$. Table \ref{parset} shows the parameter sets used in this paper. We take the disc inner radius to be the innermost stable circular orbit (ISCO), given by $r_{\rm in} = r_g Z(j)$, where $r_g = G M_{\bullet}/c^2 $ and $Z(j)$ is given by
\begin{equation}
Z(j) =3+Z_2(j)-\sqrt{(3-Z_1(j)) (3+Z_1(j)+2 Z_2(j))}, \label{zjb}
\end{equation}
\noindent where
\begin{subequations}
\begin{align}
Z_1(j) &=1+(1-j^2)^{\frac{1}{3}} \left[(1+j)^{\frac{1}{3}}+(1-j)^{\frac{1}{3}}\right]\\
Z_2(j) &=\sqrt{3 j^2+Z_1(j)^{2}}.
\end{align}
\end{subequations}
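Equations (\ref{zjb}) and the auxiliary functions above translate directly into code; a minimal evaluation (prograde branch, as assumed in this paper):

```python
import math

def isco_z(j):
    """Dimensionless ISCO radius Z(j) = r_in/r_g for a prograde
    circular orbit around a black hole of spin j, following the
    expressions for Z_1(j) and Z_2(j) given in the text."""
    z1 = 1.0 + (1.0 - j * j) ** (1.0 / 3.0) * (
        (1.0 + j) ** (1.0 / 3.0) + (1.0 - j) ** (1.0 / 3.0))
    z2 = math.sqrt(3.0 * j * j + z1 * z1)
    return 3.0 + z2 - math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))
```

This reproduces the familiar limits $Z(0) = 6$ (Schwarzschild) and $Z(1) = 1$ (extremal prograde Kerr).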
We have obtained an initial solution for the disc following the discussion presented below equation (\ref{sft}) for $\mu \geq 0.7$; below this value, we did not find any solutions that satisfy the mass constraint (disc mass at the initial time equal to the debris mass that has fallen in by that time). This may be due to the assumption of constant $\partial W_t/\partial t$ and $\partial \mathcal{F}_t/\partial t$ at the initial time. We are able to produce the time-dependent advective solution for a disc with fallback for $\mu \geq 0.7$. Thus, our solution implies that the viscous stress is dominated by the total pressure, and the seed disc forms at a very early time (low $t_c/t_m$). Table \ref{tctm} shows the beginning time of accretion $t_c$ for the various parameters given in Table \ref{parset}. The time $t_c$ is measured from the moment of disruption, which is taken to be $t= 0$. For a given $b_2$, by comparing case \Romannum{1} for various $\mu$ (cases A1, A2, A3, A4), we can see that the time $t_c$ increases as $\mu$ decreases, indicating that an increased contribution of gas pressure to the viscous stress delays the beginning of accretion. The time $t_c$ also increases with black hole mass (cases \Romannum{1}, \Romannum{2} and \Romannum{3}) and stellar mass (cases \Romannum{1}, \Romannum{4} and \Romannum{5}). Since the black hole spin $j$ enters the calculation only through the inner radius, the time $t_c$ shows a weak variation with $j$. The energy loss to the corona does not have a strong effect on the time $t_c$, which shows only a small increment with $b_2$ (cases B1, B2, B3, B4).
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.56]{sig.pdf}}~~
\subfigure[]{\includegraphics[scale=0.56]{temp.pdf}}
\subfigure[]{\includegraphics[scale=0.56]{betag.pdf}}~~
\subfigure[]{\includegraphics[scale=0.56]{HR.pdf}}
\end{center}
\caption{The time evolution of the surface density $\Sigma$, disc temperature $T$, ratio of gas to total pressure $\beta_g$, and disc height-to-radius ratio, shown for the parameter set B1-A1-\Romannum{1}. The ratio $\beta_g$ increases at late times, implying that the gas pressure dominates at late times. The disc height-to-radius ratio decreases with time, and thus the disc tends towards a thin disc at later times. $\Delta t=0$ corresponds to the beginning of accretion onto the black hole, and the duration between stellar disruption and the beginning of accretion is the initial time $t_c$ given in Table \ref{tctm}. The coloured numbers near the curves in plot (a) give $\Delta t$ in years. See section \ref{result} for more discussion.}
\label{sigt}
\end{figure*}
From now onwards in the plots, we consider $\Delta t = t-t_c$, such that $\Delta t=0$ corresponds to the matter crossing the ISCO and the beginning of accretion onto the black hole. The time evolution of the surface density $\Sigma$ and disc temperature $T$ is shown in Fig. \ref{sigt}. The ratio of gas pressure to total pressure, $\beta_g$, increases with time, and the gas pressure dominates at late times. Since the radiation pressure dominates at early times, the accretion is nearly super-Eddington initially and decreases to sub-Eddington at later times. The disc height-to-radius ratio decreases with time, and the disc tends towards a sub-Eddington thin-disc phase at later times.
The time evolution of the mass accretion rate at the inner radius, $\dot{M}_a$, is shown in Fig. \ref{macc} for case \Romannum{1} with various values of $b_2$ and $\mu$. The fraction of energy transported to the corona is $f(\beta_g) = b_2 \beta_g^{(1-\mu)/2}$, which is a constant for $\mu = 1$. For $\mu \neq 1$, it is a function of $\beta_g$ and is small when the pressure is dominated by radiation ($\beta_g \ll 1$). Thus, the variation in the mass accretion rate with $b_2$ is larger for $\mu = 1$ at the initial time. The accretion rate is close to the fallback rate at late times, and with an increase in $b_2$, the mass accretion rate deviates from the fallback rate for higher $\mu$. An increase in $b_2$, corresponding to increased energy transport to the corona, reduces the time at which the accretion rate deviates from the mass fallback rate. The beginning time of accretion $t_c$ increases with a decrease in $\mu$, which suggests that the initial disc mass is higher for lower $\mu$, as can be seen from Fig. \ref{macc}d. The disc mass evolves with time: it decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$.
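The power-law index $n$ such that $\dot{M}_a \propto (t-t_c)^{n}$ can be estimated from the numerical solution by a centred finite difference in log-log space; a minimal sketch:

```python
import math

def log_slope(t, y):
    """Centred finite-difference estimate of n(t) = dlog(y)/dlog(t),
    used to compare a decline (e.g. the accretion rate) against the
    t^(-5/3) fallback scaling; t and y are same-length positive
    sequences, and the returned list drops the two endpoints."""
    lt = [math.log(v) for v in t]
    ly = [math.log(v) for v in y]
    return [(ly[i + 1] - ly[i - 1]) / (lt[i + 1] - lt[i - 1])
            for i in range(1, len(t) - 1)]
```

For a pure power law the centred difference is exact, so sampling $y = t^{-5/3}$ returns $n = -5/3$ at every interior point.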
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.6]{macc_mu.pdf}}~~
\subfigure[]{\includegraphics[scale=0.6]{macc_mu_n.pdf}}
\subfigure[]{\includegraphics[scale=0.48]{macc_mu_fb.pdf}}~~
\subfigure[]{\includegraphics[scale=0.6]{macc_mu_md.pdf}}
\end{center}
\caption{(a) The time evolution of the mass accretion rate at the inner radius, $\dot{M}_a$, normalized to the Eddington rate, for case \Romannum{1}. (b) The time evolution of $n$ such that $\dot{M}_a \propto (t-t_c)^{n}$. See Table \ref{tctm} for $t_c$. The black dashed lines correspond to $n = -5/3$. (c) The ratio of the mass accretion rate to the mass fallback rate as a function of time. The mass accretion rate is close to the mass fallback rate but deviates with increasing $b_2$ at late times. (d) The time evolution of the disc mass $M_d$. The disc mass decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. See section \ref{result} for more discussion.}
\label{macc}
\end{figure*}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.55]{lbmu.pdf}}
\subfigure[]{\includegraphics[scale=0.53]{lbmu_p.pdf}}
\subfigure[]{\includegraphics[scale=0.58]{lblcmu.pdf}}
\end{center}
\caption{(a) The time evolution of the disc luminosity for case \Romannum{1}; the plot legend is the same as in Fig. \ref{macc}a. The disc luminosity decreases with an increase in energy transport to the corona. The luminosity shows a power-law decline at late times, and we approximate $L_b \propto (t-t_c)^{p}$. See Table \ref{tctm} for $t_c$. (b) The evolution of $p= {\rm d} \log(L_b) / {\rm d} \log(t)$. The black dashed line corresponds to $p = -5/3$. The late-time evolution of the disc luminosity is close to $t^{-5/3}$. (c) The ratio of the total X-ray luminosity from the corona, $L_c$, to the disc bolometric luminosity $L_b$. The ratio decreases with a decrease in $\mu$, implying that the energy transport to the corona decreases with an increase in the gas pressure contribution to the viscous stress. For $\mu =1$, at late times $L_c / L_b \simeq b_2 / (1- b_2)$. See section \ref{result} for more discussion.}
\label{lbmu}
\end{figure}
The bolometric luminosity from the disc for case \Romannum{1} with various values of $b_2$ (cases B1, B2, B3, B4) and $\mu$ (cases A1, A2, A3, A4) is shown in Fig. \ref{lbmu}. The luminosity at the initial time increases with a decrease in $\mu$ due to the increase in the mass accretion rate and thus in the viscous emission. As time progresses, the radiation pressure decreases and the viscous stress becomes gas pressure dominated, at which point the effect of $\mu$ on the viscous stress is weak. Thus, the late-time luminosity is similar for the various $\mu$ in the absence of a corona ($b_2 = 0$). The presence of a corona, however, affects the disc luminosity: with an increase in $b_2$, the disc luminosity decreases due to the increased flux loss to the corona. With a decrease in $\mu$, the disc luminosity changes only weakly, which implies that an increase in the gas pressure contribution to the viscous stress reduces the energy transport to the corona. The luminosity shows a transition from a super-Eddington to a sub-Eddington phase. The luminosity follows a power-law decline at late times, and we approximate $L_b \propto (t-t_c)^{p}$. The time evolution of $p$ shows that at late times the luminosity evolution is close to $t^{-5/3}$.
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.6]{macc_1.pdf}}~~
\subfigure[]{\includegraphics[scale=0.58]{macc_1_fb.pdf}}
\subfigure[]{\includegraphics[scale=0.62]{mdis_1.pdf}}~~~~~~~~~~~
\subfigure[]{\includegraphics[scale=0.43]{Legend_case.pdf}}
\end{center}
\caption{(a) The time evolution of the mass accretion rate at the inner radius, $\dot{M}_a$, normalized to the Eddington rate, for no corona [$b_2 =0$, B1, solid lines] and with a corona [$b_2 =0.5$, B3, dashed lines]. Here $\mu = 1$, corresponding to case A1. The blue, pink, and brown lines nearly overlap, implying that the black hole spin has an insignificant effect on $\dot{M}_a$. The normalized mass accretion rate increases with stellar mass but decreases with black hole mass, implying that the disc is sub-Eddington for higher-mass black holes. (b) The ratio of the mass accretion rate to the mass fallback rate; at late times, the ratio is higher for a disc with a corona. (c) The time evolution of the disc mass, which decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. (d) The legend for the various coloured curves.
}
\label{macct}
\end{figure*}
The time evolution of the mass accretion rate for various values of the black hole mass and spin and the stellar mass is shown in Fig. \ref{macct}. The mass accretion rate decreases with time and tends to a sub-Eddington phase at late times. The mass accretion rate normalized to the Eddington rate increases with stellar mass but decreases with black hole mass, resulting in a sub-Eddington disc for higher-mass black holes. The mass accretion rate is close to the mass fallback rate, and the ratio of the two increases at late times in the presence of a corona. The time evolution of the power-law index $n = {\rm d} \log(\dot{M}_a) / {\rm d} \log(t)$ is close to $n = -5/3$, corresponding to the late-time mass fallback rate. The disc mass decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. The disc mass at the initial time is higher in the presence of a corona than in its absence because of the later beginning of accretion, which allows more debris mass to accumulate in the initial disc (see Table \ref{tctm}). Since the disc forms at an early time, the initial disc mass is small in terms of the stellar mass. Since the mass accretion rate is comparable to the mass fallback rate, the mass accreted by the black hole is of the order of the mass added by the infalling debris. Thus, the disc mass remains small compared to the total stellar mass.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.6]{lbps.pdf}}
\subfigure[]{\includegraphics[scale=0.6]{lbps_1.pdf}}
\subfigure[]{\includegraphics[scale=0.6]{lblcps.pdf}}
\end{center}
\caption{(a) The time evolution of the disc bolometric luminosity $L_b$ for no corona [$b_2 =0$, B1, solid lines] and with a corona [$b_2 =0.5$, B3, dashed lines]. Here $\mu = 1$, corresponding to case A1. The plot legend is the same as in Fig. \ref{macct}d. The luminosity follows a power-law decline at late times, and we approximate $L_b \propto (t-t_c)^{p}$. See Table \ref{tctm} for $t_c$. (b) The evolution of $p= {\rm d} \log(L_b) / {\rm d} \log(t)$. The black dashed line corresponds to $p = -5/3$. The late-time evolution of the luminosity is close to $t^{-5/3}$. (c) The ratio of the total X-ray luminosity from the corona, $L_c$, to the disc bolometric luminosity $L_b$ for $b_2 =0.5$. See section \ref{result} for more discussion.}
\label{lbps}
\end{figure}
The time evolution of the disc bolometric luminosity for various values of the black hole mass and spin, and of the stellar mass, is shown in Fig. \ref{lbps}. The disc luminosity decreases in the presence of the corona due to the energy loss to the corona. At the initial time, the bolometric luminosity increases with stellar mass and black hole spin but decreases with black hole mass. At late times, the luminosity increases with black hole mass. The duration of the super-Eddington phase increases with stellar mass and spin but is reduced with black hole mass. Thus, higher-mass black holes have a sub-Eddington disc. The luminosity follows a power-law decline at late times, close to $t^{-5/3}$.
The ratio of the total X-ray luminosity from the corona, $L_c$, to the disc bolometric luminosity $L_b$ is shown in Fig. \ref{lbmu}c. The ratio decreases with a decrease in $\mu$, implying that the energy transport to the corona decreases with an increase in the gas pressure contribution to the viscous stress. For $\mu = 1$, $f(\beta_g) = b_2$, and as the advection weakens with time, the radiative flux is given by $Q_{\rm rad} \approx [1 - f(\beta_g)] Q^{+}$, so that the late-time luminosity ratio is $L_c / L_b \simeq b_2 / (1- b_2)$. However, for $\mu \neq 1$, $f(\beta_g)$ increases due to the increase in $\beta_g$ with time, which results in an increase in $L_c / L_b$, as can be seen from the figure. The luminosity ratio for various stellar masses and black hole masses and spins is shown in Fig. \ref{lbps}c, calculated for $b_2 = 0.5$ and $\mu = 1$, giving $f(\beta_g) = 0.5$. At late times, $L_c / L_b \simeq 1$.
In the previous calculations, we took $C_1 = 1$ such that $k_1 = 2/3$. We now consider various values of $C_1 = $ 0.17 \citep{2002ApJ...576..908J}, 0.5, 0.8 and 1, such that $k_1 = $ 0.113, 1/3, 8/15 and 2/3 respectively. For case \Romannum{1} with $\mu = 1$, the beginning time of accretion is $t_c ({\rm days}) = $ 3.447, 3.460, 3.468 and 3.472 for $b_2 =0$, and 3.539, 3.56, 3.573 and 3.577 for $b_2 = 0.5$, calculated for $C_1 =$ 0.17, 0.5, 0.8 and 1.0 respectively. The onset time of accretion shows a weak increment with $C_1$. The mass accretion rate shows a small increment with $C_1$ at initial times, as can be seen from Fig. \ref{c1v}. The beginning time of accretion $t_c$ increases with $C_1$, which suggests that the initial disc mass is higher for higher $C_1$. The disc mass decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. The bolometric luminosity shows a weak decline with $C_1$ at the initial time, but at late times the luminosity shows a negligible increment with $C_1$. The late-time luminosity decline is close to the $t^{-5/3}$ evolution. The total luminosity from the corona decreases with an increase in $C_1$, implying that the energy transport to the corona is higher for lower $C_1$.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.6]{macc_c1.pdf}}
\subfigure[]{\includegraphics[scale=0.56]{macc_c1_fb.pdf}}
\subfigure[]{\includegraphics[scale=0.56]{macc_c1_md.pdf}}
\end{center}
\caption{(a) The evolution of the mass accretion rate $\dot{M}_a$ for case \Romannum{1} with $\mu =1$ (A1). The solid lines correspond to no corona [$b_2 =0$] and the dashed lines to a corona with [$b_2 =0.5$]. (b) The ratio of the mass accretion rate to the mass fallback rate. (c) The time evolution of the disc mass, which decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. The beginning time of accretion is $t_c ({\rm days}) = $ 3.447, 3.460, 3.468 and 3.472 for $b_2 =0$, and 3.539, 3.56, 3.573 and 3.577 for $b_2 = 0.5$. See section \ref{result} for more discussion.}
\label{c1v}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.58]{lum_c1.pdf}}
\subfigure[]{\includegraphics[scale=0.57]{lum_c1_p.pdf}}
\subfigure[]{\includegraphics[scale=0.55]{lblc_c1.pdf}}
\end{center}
\caption{(a) The time evolution of the disc bolometric luminosity for case \Romannum{1} with $\mu =1$ (A1). The plot legend is the same as in Fig. \ref{c1v}a. The luminosity follows a power-law decline at late times, and we approximate $L_b \propto (t-t_c)^{p}$. See the caption of Fig. \ref{c1v} for $t_c$. (b) The evolution of $p= {\rm d} \log(L_b) / {\rm d} \log(t)$. The black dashed line corresponds to $p = -5/3$. The late-time evolution of the luminosity is close to $t^{-5/3}$. (c) The ratio of the total X-ray luminosity from the corona, $L_c$, to the disc bolometric luminosity $L_b$ for $b_2 = 0.5$. The ratio decreases with $C_1$, which implies that the energy transport to the corona is higher for lower $C_1$. See section \ref{result} for more discussion.}
\label{c1vl}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.7]{specps.pdf}
\end{center}
\caption{The time evolution of the disc luminosity in various spectral bands, calculated using equation (\ref{fnuobs}) for $\mu =1$, $C_1 =1$ and $b_2 =$ 0 (solid) and 0.5 (dashed). The plot legend is the same as in Fig. \ref{macct}d. The spectral bands are V (5000-6000 \AA), B (3600-5000 \AA), U (3000-4000 \AA), UVM2 (1660-2680 \AA), UVW2 (1120-2640 \AA) and X-ray (0.2-2 keV). The luminosity increases with black hole mass at optical/UV wavelengths but decreases in the X-ray band. The presence of a corona makes a considerable difference to the spectral luminosity at late times. See section \ref{result} for more discussion.}
\label{spec}
\end{figure*}
The luminosity in the various spectral bands, calculated using equation (\ref{fnuobs}), is shown in Fig. \ref{spec}. The presence of a corona has a strong effect on the spectral luminosity at late times, reducing it considerably. An increase in stellar mass increases the mass accretion rate, and thereby the viscous stress and the luminosity. The luminosity increases with black hole mass at optical/UV wavelengths but decreases in the X-ray band.
We now calculate the time-evolving properties of the corona using a two-temperature model of ions and electrons. The energy transported to the corona, $Q_{\rm cor}$, heats the corona; the energy is transferred from ions to electrons through Coulomb coupling, and the plasma cools via the electrons through various mechanisms such as bremsstrahlung, synchrotron, and Compton cooling. The bremsstrahlung and synchrotron cooling rates are taken from \citet{1995ApJ...452..710N}, and the Compton cooling is approximated via an amplification factor $\eta$ \citep{1991ApJ...369..410D}. Assuming equipartition between magnetic and gas pressure in the corona and an ion temperature equal to 90\% of the virial temperature \citep{2009MNRAS.394..207C}, we calculate the electron temperature, the electron-scattering optical depth $\tau_{es}$ and the Compton $y$ parameter in the corona. The details of the model are given in appendix \ref{corhc}.
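For orientation, a common non-relativistic estimate of the Compton $y$ parameter can be written down directly; this simple form is an illustrative assumption here, and the full corona model actually used is given in appendix \ref{corhc}:

```python
import math

KB = 1.380649e-23        # Boltzmann constant [J/K]
ME_C2 = 8.18710565e-14   # electron rest energy m_e*c^2 [J]

def compton_y(T_e, tau_es):
    """Non-relativistic Compton y parameter,
    y = (4*k*T_e / m_e*c^2) * max(tau_es, tau_es**2),
    a common textbook estimate of the fractional photon energy
    change in traversing the corona; y << 1 means the soft disc
    photons pass through nearly unmodified."""
    theta = KB * T_e / ME_C2
    return 4.0 * theta * max(tau_es, tau_es * tau_es)
```

The $\max(\tau_{es}, \tau_{es}^2)$ factor interpolates between the optically thin (single-scattering) and optically thick (random-walk) regimes.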
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.45]{corona_Te_b2.pdf}}
\subfigure[]{\includegraphics[scale=0.45]{corona_tes_b2.pdf}}
\subfigure[]{\includegraphics[scale=0.45]{corona_yc_b2.pdf}}
\end{center}
\caption{The electron temperature $T_e$, electron-scattering optical depth $\tau_{es}$ and Compton parameter $y$, shown in (a), (b) and (c) respectively for case \Romannum{1} with $\mu =1$ (A1) and $b_2 = $ 0.5 (B3: solid) and 0.9 (B4: dashed). An increase in $b_2$ increases the energy flux transported to the corona (see equation \ref{fcor}). The model is given in appendix \ref{corhc}. The corresponding bolometric disc luminosities are shown as the blue dotted and dot-dashed lines in Fig. \ref{lbmu}. The decrease in optical depth with time indicates that the corona becomes transparent to the soft photons from the disc, and $y \ll 1$ suggests that there is no significant change in the photon energy, so the Comptonization is negligible. See section \ref{result} for more discussion.}
\label{cb2}
\end{figure}
Fig. \ref{cb2} shows the electron temperature, optical depth and Compton $y$ parameter for case \Romannum{1} ($M_{\bullet,6} = 1$) with $\mu =1$ (A1) and with $b_2 = $ 0.5 (B3: solid) and 0.9 (B4: dashed). The electron temperature is comparable to that obtained by \citet{2009MNRAS.394..207C} for a steady accretion model without fallback. The electron temperature increases, whereas the optical depth decreases, as the disc bolometric luminosity decreases. The decrease in optical depth with time implies that at late times the corona is transparent to the soft photons from the disc. The Compton $y$ parameter, which determines whether a photon significantly changes its energy in traversing the medium, decreases with time. Since $y \ll 1$, we can conclude that there is no significant change in the photon energy. A similar result for case \Romannum{3} ($M_{\bullet,6} = 10$) with $\mu =1$ (A1) and $b_2 = 0.5$ (B3) is shown in Fig. \ref{clc}.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.45]{corona_Te_M610.pdf}}
\subfigure[]{\includegraphics[scale=0.45]{corona_tesy_M610.pdf}}
\end{center}
\caption{The electron temperature $T_e$ in (a), and the optical depth of electron scattering $\tau_{es}$ (solid) and Compton parameter $y$ (dashed) in (b), for the case \Romannum{3} with $\mu =1$ (A1) and $b_2 = $ 0.5 (B3). The colour legend is the same as in Fig. \ref{cb2}a. The model is given in appendix \ref{corhc}. The corresponding bolometric disc luminosity is shown as blue and red dashed lines in Fig. \ref{lbps}. See section \ref{result} for more discussion.}
\label{clc}
\end{figure}
\section{Discussion and conclusions}
\label{discuss}
TDEs are an essential probe of the accretion dynamics around supermassive black holes over a period of a few years. Numerical simulations of the circularization of the disrupted debris and the formation of an accretion disc have been limited to a narrow range of stellar masses and black hole masses and spins. The simulations have shown that the formed accretion disc can be circular or elliptical, with or without still-infalling debris \citep{2020A&A...642A.111C}. When the tidal radius lies above the ISCO, the stream interactions result in an inflow of matter that forms a circular disc with an inner radius at the ISCO. When the tidal radius lies below the ISCO, some fraction of the debris plunges into the black hole during circularization, and both the plunged debris mass and the circularization dynamics require detailed relativistic stream modelling. Due to this uncertainty in disc formation, we here restrict ourselves to cases where the tidal radius lies above the ISCO. Thus, $r_t \geq r_g Z(j)$, which gives the condition $Z(j) \leq r_t/r_g$ and hence a minimum spin value $j_{m} = j_m(M_{\bullet,6},~m)$. We consider black holes with prograde spin only ($j \geq 0$).
The infalling debris forms a seed accretion disc that evolves under viscous accretion and the mass fallback at the outer radius. We assume that the disc formed by time $t_c$ has a mass equal to the total debris mass that has fallen back by $t_c$. There is no mass loss to the black hole for $t < t_c$. Thus, $t_c$ is the time at which matter crosses the ISCO and accretion onto the black hole begins. During the formation of the accretion disc ($t < t_c$), the debris stream self-interaction will result in emission, but we have not considered the dynamics of disc formation in this paper. The disc dynamics presented here applies after matter crosses the disc inner radius. The obtained $t_c$ increases weakly in the presence of a corona but increases with decreasing $\mu$, which implies that an increased gas pressure contribution to the disc delays the onset of accretion. An important caveat is that we have used a semi-analytic formulation of the mass fallback rate calculated for a polytropic star using the impulse approximation. However, a star is tidally deformed before reaching the pericenter, which influences its density structure, so the stellar density at the moment of disruption can differ from the original polytrope. The mass fallback rate obtained through simulations differs from the impulse approximation at early times, but the late-time evolution is nearly the same \citep{2009MNRAS.392..332L,2019ApJ...872..163G}. This difference can affect the initial time $t_c$ and the initial disc luminosity, but the late-time luminosity evolution will be similar to that obtained using the semi-analytic model of the mass fallback rate.
We include an energy loss to the corona in our model, and the fraction $f$ of viscous heat energy transported to the corona is a function of $\beta_g$ and $\mu$. In the case of $\mu = 1$, the magnetic pressure and the viscous stress are functions of the total pressure only and the fraction is constant, but for $\mu \neq 1$, the fraction $f \propto \beta_g^{(1-\mu)/2}$ and is small when the disc is dominated by radiation pressure ($\beta_g \ll 1$). The energy loss to the corona increases the accretion rate at early times but has an insignificant effect at late times (see Fig. \ref{macc}); however, the bolometric and spectral luminosities decline in the presence of the corona at late times. For $\mu \neq 1$, the increase in energy transport has a weak effect on the disc luminosity at early times but produces a significant decline in the late-time luminosity. With an increase in the gas pressure contribution to the magnetic pressure and viscous stress (decreasing $\mu$), $L_c / L_b$ decreases, as can be seen from Fig. \ref{lbmu}c, implying a reduction in energy transport to the corona. The MRI growth rate depends on the ratio of radiation to gas pressure, and with decreasing $\mu$ the contribution of gas pressure increases relative to the radiation pressure, which reduces the MRI growth and thus the magnetic stress \citep{2002ApJ...566..148T}. This indicates that the gas pressure acts as a support to the disc stability.
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{jdisk.pdf}
\end{center}
\caption{The time evolution of the total angular momentum of the disc without corona [$b_2 =0$, B1, solid lines] and with corona [$b_2 =0.5$, B3, dashed lines]. The $\mu = 1$ case corresponds to A1. The colour legend is the same as in Fig. \ref{macct}d. The total angular momentum shows insignificant variation with black hole spin, and thus the pink and brown lines coincide with the blue line. See section \ref{discuss} for more discussion.}
\label{jdisk}
\end{figure}
We have taken the outer radius of the disc to be constant, and the mass added at the outer radius is transported inward by accretion. The total angular momentum $J_d$ of the disc is not constant and evolves with time, as shown in Fig. \ref{jdisk}. It shows insignificant variation with black hole spin and is higher when there is energy loss to the corona ($b_2 \neq 0$). The non-constant total disc angular momentum results in a late-time luminosity evolution ($ L_b \propto t^{-5/3}$) that deviates from the self-similar solution of a sub-Eddington disc by \citet{1990ApJ...351...38C}, in which the outer radius evolves with time and the luminosity follows $L_b \propto t^{-1.2}$.
In appendix \ref{sssol}, we present the self-similar solution of a non-advective accretion disc \citep{1990ApJ...351...38C} with energy loss to the corona included in the energy conservation equation. We compare the mass accretion rate and disc bolometric luminosity of the self-similar model with the advective disc-corona model in Figs. \ref{can} and \ref{can1}, which show that the bolometric disc luminosity evolves linearly with the mass accretion rate for both models in the sub-Eddington phase. The self-similar model is consistent when the evolution is in the sub-Eddington phase, $\dot{M}/\dot{M}_E \ll 1$, where the pressure is dominated by gas pressure. Near Eddington accretion, the pressure is dominated by radiation pressure and advection is crucial to avoid the Lightman-Eardley instability. In the self-similar formulation the outer radius evolves with time, whereas we have taken it to be constant. The advective model includes fallback at the outer radius, and the mass accretion rate follows the mass fallback rate, whereas fallback is not included in the self-similar model.
The time-dependent advective accretion disc model with $\alpha-$viscosity developed by \citet{2002ApJ...576..908J} to study radiation pressure instability driven variability assumed a steady solution at the initial time and truncated the model at a constant outer radius, where the boundary condition is the stable steady solution with a constant mass accretion rate. This assumption implies that some amount of the disc angular momentum is taken away at the outer radius of the computed disc region. \citet{2011ApJ...736..126M} developed an advective TDE model without a corona, with $\beta-$viscosity and mass fallback at a constant outer radius. In their numerical calculation, the mass accretion rate at the outer radius equals the mass fallback rate, and they showed that the mass accretion rate and the disc bolometric luminosity at late times evolve as $t^{-5/3}$. We have adopted a similar assumption at the outer radius.
In our advective disc model, we consider a seed disc formed at an early time, and we expect the formed disc to be in a non-steady state in which the mass accretion rate varies with radius. The mass supply rate at the outer radius changes with time, so a stable steady-state solution may be inconsistent. We therefore obtain non-steady initial and boundary solutions for the disc. We assume that the mass is added uniformly along the azimuthal direction, truncate the computational region of the advective disc model at the circularization radius, and let the infalling matter at the outer radius lose its angular momentum, following equation (\ref{mdot}), to the external infalling debris. This will induce an evolution of the infalling debris, but we do not consider its dynamics and assume that it maintains the mass fallback rate given by equation (\ref{mfbn}) at the outer radius. This assumption maintains both the angular momentum conservation equation and the equality of the mass accretion rate and the mass fallback rate at the outer radius. We recognize that the outer radius can evolve with time depending on the mass accretion and mass fallback rates, but the assumptions adopted for a simple disc-corona model with fallback provide a reasonable solution.
Here, we calculate the total X-ray luminosity from the corona by integrating the energy transported from the disc to the corona. The photons emitted from the disc are scattered by the hot electrons in the corona, and some fraction of the scattered photons travel downward to the disc, known as the downward component. Some fraction of these downward-moving photons are reflected from the disc surface as disc albedo. The downward component and disc albedo evolve with time and can be functions of the photon index of the hard X-ray component. This analysis requires a detailed radiative transfer model of scattering in the corona, including the synchrotron and bremsstrahlung emissivities, which depend on the electron distribution in the corona. We have shown the effect of energy loss to the corona on the disc evolution, and this transported energy can be used to study the scattering dynamics in the corona and the time evolution of the hard X-ray components.
In appendix \ref{corhc}, we present an efficient yet approximate cooling model of the hot electrons to estimate the time evolution of the coronal properties and the Compton $y$ parameter. For unsaturated repeated scattering by non-relativistic thermal electrons, the up-scattered photons have a power-law energy distribution with photon index $\Gamma = -1/2 + \sqrt{9/4 + 4/y}$ \citep{2009PASP..121.1279S}. For the Compton $y$ parameter given in Fig. \ref{cb2} for $b_2 = 0.5$, the mean photon index at times $\Delta t{\rm (yr)} = $ 0.001, 0.01, 0.1, 1, 2, 5 and 10 is $\Gamma =$ 2.63, 2.64, 2.68, 4.448, 6.33, 11.12 and 16.50 respectively. The spectral index at early times agrees with that inferred from observations \citep{2020MNRAS.497L...1W}. At late times, the spectral index is higher than the observed values because the low $y$ implies negligible Comptonization. The Comptonization approximation using the amplification factor given by equation (\ref{etacomp}) is derived for an optically thin medium with $\tau_{es} \ll 1$ \citep{1991ApJ...369..410D}. We can see from Figs. \ref{cb2} and \ref{clc} that the optical depth evolves significantly with time, so a more detailed Comptonization scattering model is needed for the Compton cooling from an optically thick medium. We will construct such a detailed Comptonization model and the Comptonized spectra of the disc-corona model in the future.
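The quoted relation between the photon index and the Compton parameter can be evaluated directly. A minimal sketch (the sample $y$ values are illustrative, not those read off the figures):

```python
import math

def photon_index(y):
    """Photon index of the unsaturated-Comptonization power law,
    Gamma = -1/2 + sqrt(9/4 + 4/y), for Compton parameter y > 0."""
    return -0.5 + math.sqrt(2.25 + 4.0 / y)

# Illustrative sample values: Gamma grows as y drops, reproducing the
# steepening (softening) of the power law at small y described in the text.
for y in (1.0, 0.5, 0.1, 0.01):
    print(y, photon_index(y))
```

The steep indices obtained for $y \ll 1$ are exactly the regime in which Comptonization is negligible and the approximation breaks down as a description of the observed hard spectra.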
\begin{figure*}
\begin{center}
\includegraphics[scale=0.6]{Tsps.pdf}
\end{center}
\caption{(a) The X-ray temperature ($T_X$), obtained using the procedure discussed below equation (\ref{xspecc}), is shown without corona [$b_2 =0$, B1, solid lines] and with corona [$b_2 =0.5$, B3, dashed lines]. The $\mu = 1$ case corresponds to A1. The colour legend is the same as in Fig. \ref{macct}d. The bolometric luminosity for each line is given in Fig. \ref{lbps}. The $T_X$ agrees with the blackbody temperature from the X-ray observations. The temperature shows a power-law decline at late times, which we approximate as $T_X \propto (t-t_c)^{s}$. See Table \ref{tctm} for $t_c$. (b) The evolution of $s= {\rm d} \log(T_X) / {\rm d} \log(t)$. The black dashed line corresponds to $s = -5/12$, which indicates $T \propto \dot{M}_{\rm fb}^{1/4} \propto (t-t_c) ^{-5/12}$. See section \ref{discuss} for more discussion. }
\label{tsp}
\end{figure*}
The single blackbody temperature obtained from a blackbody model fit to the X-ray spectrum is $\sim 1.4 \times 10^6~{\rm K}$ for XMMSL1 J061927.1-655311 \citep{2014A&A...572A...1S}, $\sim 10^{5}~{\rm K}$ for ASASSN-14li \citep{2016MNRAS.455.2918H}, $\sim 1.2 \times 10^{6}~{\rm K}$ for Abell 1795 \citep{2013MNRAS.435.1904M} and $\sim 1.1 \times 10^6~{\rm K}$ for NGC 3599 \citep{2008A&A...489..543E}. Using equation (\ref{fnuobs}), the observed spectrum in the energy range $\{\nu_l,~\nu_h\} = \{0.3,~10\}~{\rm keV}$ is given by
\begin{equation}
F_{\nu_{\rm obs}} = \frac{\cos\theta_{\rm obs}}{D_L^2} \nu_{\rm obs} \int_{r_{\rm in}}^{r_{\rm out}} g^3 I_{\nu}\left(\frac{\nu_{\rm obs}}{g}\right) \, {\rm d} A.
\label{xspecc}
\end{equation}
\noindent Then, the luminosity is given by $L_{\nu_{\rm obs}} = 4 \pi D_L^2 F_{\nu_{\rm obs}}$. We fit the obtained spectrum with a luminosity model $L = 4 \pi \nu B(T,~\nu) A$, where $B(T,~\nu)$ is the blackbody function, and $T$ and $A$ are the single blackbody temperature and the area of the disc. Taking the disc area to be $A = \pi (r_{\rm out}^2-r_{\rm in}^2)$, we calculate the X-ray temperature of the disc ($T_X$), which is shown in Fig. \ref{tsp} for various black hole masses and spins and stellar masses, whose bolometric luminosity is given in Fig. \ref{lbps}. The obtained X-ray temperature agrees with the blackbody temperature from the X-ray observations. At late times, $T_X$ declines as a power law, $T_X \propto (t-t_c)^{s}$. The effective temperature for a steady thin-disc accretion model is given by $T \propto \dot{M}_{\rm fb}^{1/4}$ \citep{2011MNRAS.410..359L} and decreases as $T \propto t^{-5/12}$. The late-time evolution of the disc X-ray temperature is slower than the temperature evolution of a steady thin disc.
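The single-temperature fit described above can be sketched numerically. The following is a minimal illustration (not the paper's actual fitting code): it generates a spectrum from the model $L = 4\pi\nu B(T,\nu)A$ at a known temperature and recovers $T$ by a grid-search least-squares fit in log space; the radii, area, and temperature grid are hypothetical.

```python
import numpy as np

H = 6.62607015e-27   # Planck constant [erg s]
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
C = 2.99792458e10    # speed of light [cm/s]
KEV = 2.417989e17    # frequency of a 1 keV photon [Hz]

def planck(T, nu):
    """Blackbody intensity B(T, nu)."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def fit_blackbody_T(nu, L_nu, A, T_grid):
    """Grid-search least-squares fit (in log space) of the single-temperature
    model L = 4*pi*nu*B(T,nu)*A to a sampled spectrum; returns best-fit T."""
    chi2 = [np.sum((np.log(4.0 * np.pi * nu * planck(T, nu) * A) - np.log(L_nu))**2)
            for T in T_grid]
    return T_grid[int(np.argmin(chi2))]

# Sanity check on synthetic data: a pure blackbody spectrum over 0.3-10 keV
# is recovered at the input temperature (radii and area are hypothetical).
T_true = 1.2e6                                   # K
A = np.pi * ((1e13)**2 - (1e12)**2)              # A = pi*(r_out^2 - r_in^2) [cm^2]
nu = np.linspace(0.3, 10.0, 200) * KEV
L_nu = 4.0 * np.pi * nu * planck(T_true, nu) * A
T_fit = fit_blackbody_T(nu, L_nu, A, np.linspace(0.5e6, 2.0e6, 301))
print(T_fit)
```

On the real model spectra, which are not pure blackbodies, the same procedure returns the effective single-temperature $T_X$ plotted in Fig. \ref{tsp}.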
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.55]{radeff.pdf}}~~
\subfigure[]{\includegraphics[scale=0.55]{enrad.pdf}}
\end{center}
\caption{(a) The time evolution of the radiative efficiency in the disc without corona [$b_2 =0$, B1, solid lines] and with corona [$b_2 =0.5$, B3, dashed lines]. The $\mu = 1$ case corresponds to A1. (b) The integrated radiated energy given by equation (\ref{enint}). The gray lines correspond to the stellar rest-mass energy ($m M_{\odot} c^2$), with solid, dashed and dot-dashed lines representing $m =1,~5,~{\rm and}~10$ respectively. The colour legend is the same as in Fig. \ref{macct}d. See section \ref{discuss} for more discussion.}
\label{raden}
\end{figure*}
The bolometric luminosities $L_b$ of the disc shown in Figs. \ref{lbmu}a, \ref{lbps}a and \ref{c1vl}a are high even though the disc mass is small compared to the stellar mass. The radiative efficiency, $\eta_E = L_b/(\dot{M} c^2)$, where $\dot{M}$ is the mass accretion rate at the inner radius, is shown in Fig. \ref{raden}. The radiative efficiency at early times is small, as expected for an advective disc in the near- to super-Eddington phase. With time, the disc mass accretion rate and the bolometric luminosity decrease, and the disc tends toward a sub-Eddington phase; thus, the radiative efficiency increases with time. At late times, the disc is sub-Eddington and the radiative efficiency attains a saturation value. The integrated radiated energy $E_{\rm rad}$ is given by
\begin{equation}
E_{\rm rad}(\Delta t) = \int_{0}^{\Delta t} L_b(\Delta t') \, {\rm d} \Delta t',
\label{enint}
\end{equation}
\noindent where the bolometric luminosity is given by equation (\ref{lbol}). The integrated energy at various times is shown in Fig. \ref{raden}b and is smaller than the total stellar rest-mass energy reservoir of $M_{\star} c^2$.
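As a quick numerical cross-check of the integral in equation (\ref{enint}), the sketch below integrates a toy $t^{-5/3}$ light curve with the trapezoidal rule and compares it with the analytic result; the normalization and time range are illustrative, not the paper's models.

```python
import numpy as np

def integrated_energy(t, L_b):
    """E_rad = int L_b dt' over a sampled light curve (trapezoidal rule)."""
    return np.sum(0.5 * (L_b[1:] + L_b[:-1]) * np.diff(t))

# Toy t^(-5/3) light curve with hypothetical numbers; the analytic integral is
# int_{t0}^{t1} L0 (t/t0)^(-5/3) dt = (3/2) L0 t0 [1 - (t1/t0)^(-2/3)].
YR = 3.156e7                       # seconds per year
L0, t0, t1 = 1e45, 0.1, 10.0       # erg/s and years (illustrative values)
t = np.linspace(t0, t1, 200000) * YR
L_b = L0 * (t / (t0 * YR))**(-5.0 / 3.0)
E_num = integrated_energy(t, L_b)
E_ana = 1.5 * L0 * t0 * YR * (1.0 - (t1 / t0)**(-2.0 / 3.0))
print(E_num / E_ana)  # ~ 1
```

For these illustrative numbers $E_{\rm rad}\sim 5\times10^{51}$ erg, well below $M_{\odot}c^2 \approx 1.8\times10^{54}$ erg, consistent with the statement that the integrated energy stays below the stellar rest-mass reservoir.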
The temperature obtained from a blackbody model fit to UV and optical observations is a factor of ten smaller than the X-ray temperature. This is explained by attributing the optical/UV emission either to outflowing winds \citep{2009MNRAS.400.2070S}, to the reprocessing of the X-ray emission \citep{2014ApJ...783...23G}, or to emission at larger radii from an elliptical disc \citep{2021ApJ...908..179L}. We consider a time-dependent circular disc, so the reprocessing of the disc luminosity is the only plausible explanation for the observed temperature difference. The reprocessing layer can be either an outflowing wind \citep{2020ApJ...894....2P} or a static atmosphere \citep{2016ApJ...827....3R}. From Fig. \ref{spec}, we can see that the X-ray luminosity from the disc is higher than the UV/optical luminosity. However, observations have shown that the UV/optical luminosity dominates over the X-ray luminosity, with many TDEs showing no significant X-ray detection. We present a simple reprocessing model in appendix \ref{repr}, which shows the dominance of the optical/UV flux over the X-ray flux. Thus, the reprocessing of the disc luminosity is crucial to explaining the observations, and a full reprocessing analysis requires a detailed radiative transfer calculation of the disc luminosity through the medium. The disc-corona model presented in this paper shows that the spectral luminosity dominates in the X-rays, so the disc evolution presented here is most suitable for X-ray TDEs.
The mass accretion rate $\dot{M}$ exceeds the Eddington rate $\dot{M}_E$ for low-mass supermassive black holes; with an increase in black hole mass, the ratio of the mass accretion rate to the Eddington rate decreases, and the disc is sub-Eddington for higher-mass black holes. \citet{2019ApJ...885...93F} showed through global steady disc solutions that outflows are driven from the disc surface if the mass accretion rate satisfies $\dot{M} / \dot{M}_E \gtrsim \dot{m}_{\rm crit} = 1.78-1.91$, where $\dot{m}_{\rm crit}$ is the critical mass accretion rate normalized to the Eddington rate. When the mass accretion rate at the outer radius is less than the critical rate, there is no outflow, and the disc is similar to a slim disc. They showed that the mid-plane temperature of these slim discs is substantially lower than that of discs with outflows, but their effective temperature does not deviate much from that of discs with outflows. The radiation in the wind-driving region is self-regulated by the wind, which sets an upper limit on the effective temperature. Thus, the disc bolometric luminosity increases only slowly with the total mass loss rate when there is a wind in the disc. The presence of an outflow results in the disc losing mass, angular momentum, and energy, cooling the accretion flow. The luminosity that a disc without outflow would emit decreases when an outflow is included, because the outflowing wind carries away some of the energy generated by viscous heating. An increase in the mass outflow reduces the mass accretion rate. The radial velocity is comparable to the azimuthal velocity in a super-Eddington disc, and the disc surface density at the outer radius decreases with increasing radial velocity if the mass accretion rate equals the mass fallback rate. The time evolution of the disc surface density is a function of the mass accretion and outflow rates.
For such a disc, it is necessary to include the outflow in the conservation equations, and the resulting disc will be an advection-dominated inflow-outflow (ADIO) disc \citep{1999MNRAS.303L...1B}.
The reprocessing models for TDEs have shown that an outflow is crucial, as it forms the atmosphere for the reprocessing of the disc emission. However, the presence of a corona surrounding a disc with an outflow is uncertain, as a strong wind may destroy the low-density corona. If the strong wind sweeps away the low-density corona, there will be no coronal emission, and the spectrum will be dominated by the disc. As the disc evolves and the radiation force decreases, the mass outflow rate decreases. As the wind strength decreases, the corona will tend toward a stable structure and its emission will increase. In the sub-Eddington phase, the coronal emission dominates the X-ray spectrum as a non-thermal component. Our model is limited to an accretion disc without outflow and with a stable corona, and is applicable to the near- to sub-Eddington accretion phases. A time-dependent accretion model with an outflowing wind, together with the dynamics of the corona surrounding such a disc, is complex, and a global solution of a TDE disc from the super-Eddington phase with outflow to the sub-Eddington phase requires detailed numerical simulation. Our present accretion model, which begins very soon after disruption (see Table \ref{tctm} for the initial time $t_c$), follows the disc evolution from the Eddington to the sub-Eddington phase and explores the evolution of the accretion disc in the presence of infalling debris and energy loss to the corona. Fitting our advective disc-corona model to observations will yield physical parameters such as the stellar mass, the black hole mass and spin, and the circularization time.
\section{Summary}
\label{summary}
We have constructed a non-relativistic, time-dependent advective accretion disc model with a corona and fallback at a constant outer radius. The stress tensor is assumed to be dominated by Maxwell stresses ($\tau_{r\phi} \propto P_{\rm mag}$) with magnetic pressure $P_{\rm mag} \propto P_g^{1-\mu}P_t^{\mu}$. We have obtained non-steady initial and boundary solutions.
The increase in the contribution of gas pressure to the viscous stress (decreasing $\mu$) increases $t_c$ (see Table \ref{tctm}) and thus delays the beginning of disc accretion. An increase in the black hole or stellar mass also increases $t_c$, whereas the black hole spin has little effect.
The gas-to-total pressure ratio $\beta_g$ increases and the disc height decreases with time, implying that the disc evolves from the Eddington to the sub-Eddington phase; thus our model captures the disc transition through the different phases (see Fig. \ref{sigt}).
The mass accretion rate is of the order of the mass fallback rate, and its time evolution depends on the viscous dynamics. The mass accretion rate deviates from the mass fallback rate at early times, then follows it for a certain period, and shows small variations at later times. The mass accretion rate at late times is close to the $t^{-5/3}$ evolution. The presence of a corona increases the mass accretion rate at early times, followed by small variations at late times. The variation increases with increasing energy transport to the corona and can be reduced by decreasing $\mu$.
The beginning time $t_c$ of accretion increases with decreasing $\mu$, which results in a higher initial disc mass. The disc mass decreases when $\dot{M}_a > \dot{M}_{\rm fb}$ and increases when $\dot{M}_a < \dot{M}_{\rm fb}$. The late-time evolution of the disc mass varies significantly with increasing $b_2$ (increasing energy transport to the corona), as can be seen from Figs. \ref{macc} and \ref{macct}.
The total angular momentum of the disc is not conserved and evolves with time, due to the assumption of a constant outer radius. The angular momentum is higher for a disc with energy loss to the corona, as can be seen from Fig. \ref{jdisk}.
At early times, the bolometric luminosity increases with stellar mass and black hole spin but decreases with black hole mass. The duration of the super-Eddington phase increases with stellar mass and spin but decreases with increasing black hole mass. The luminosity at late times is close to the $t^{-5/3}$ evolution.
The radiative efficiency ($\eta_E$) increases with time and attains a saturation value at late times in the sub-Eddington phase (Fig. \ref{raden}). It increases significantly with black hole spin and decreases when energy loss to the corona is included. The integrated radiated energy from the disc is smaller than the stellar rest-mass energy ($M_{\star} c^2$).
The ratio of the total X-ray luminosity from the corona to the bolometric disc luminosity increases with $\mu$, implying that the energy transport to the corona decreases with an increasing gas pressure contribution to the viscous stress. The disc bolometric luminosity decreases with increasing energy transport to the corona. The ratio is constant at late times for $\mu =1$ but increases for $\mu \neq 1$.
The presence of a corona decreases the spectral luminosity at late times. The luminosity increases with black hole mass at optical/UV wavelengths but decreases in the X-ray band, as can be seen from Fig. \ref{spec}. An increase in stellar mass increases both the bolometric and spectral luminosities. The X-ray luminosity declines faster with increasing black hole mass.
The electron temperature $T_e$ in the corona is $10^7 -10^{8.5}~{\rm K}$, depending on the disc mass accretion rate and the energy transport to the corona. The electron temperature increases with decreasing mass accretion rate, and the Compton $y$ parameter decreases with the mass accretion rate. The decrease in optical depth with time indicates that the corona is transparent to the soft photons from the disc, and $y \ll 1$ implies that Comptonization is negligible. The spectral index $\Gamma$ calculated at early times agrees with the observations. The evolution of the optical depth and $y$ with time suggests the need for a more detailed Comptonization scattering model for the Compton cooling.
The X-ray temperature $T_X$ of the disc is in agreement with the blackbody temperature of $\sim ~{\rm few}~\times~10^{5}~{\rm K}$ from the X-ray observations. The late-time evolution of $T_X$ (see Fig. \ref{tsp}) is slower than the temperature evolution of a steady thin disc ($\propto t^{-5/12}$). The spectral luminosity dominates in the X-rays, and we have shown that reprocessing is crucial to explaining the observed dominance of the optical/UV emission over the disc emission. Without reprocessing, the disc-corona model presented here is suitable for X-ray TDEs.
We have neglected mass loss due to an outflow in our model. The outflow is crucial when the disc is in super-Eddington accretion, where it carries away mass, angular momentum, and energy from the disc. The outflow is also important because it forms the atmosphere for the reprocessing of the disc emission. However, the possibility of a corona around a disc with an outflow is uncertain, as a strong outflow may destroy the low-density corona. \citet{2020MNRAS.497L...1W} showed that the X-ray spectrum evolves from disc dominated to non-thermal power-law dominated as the luminosity decreases. We have shown in Figs. \ref{lbmu}, \ref{lbps} and \ref{spec} that the disc bolometric and spectral luminosities show weak variation in the presence of the corona when the luminosity is high but decline significantly at low luminosity. The presence of an outflow will affect the disc evolution and the luminosity. A global solution of a disc with both an outflow and a corona is highly complex. Within the framework of our disc-corona model, we have shown the disc evolution and the corresponding time evolution of the disc mass, angular momentum and luminosity in the presence and absence of the corona. A caveat is the possibility of an accretion disc with an evolving outer radius and mass fallback; such an accretion disc-corona system is computationally demanding and will be taken up in the future.
\section*{Acknowledgements}
We thank the anonymous referee for the constructive suggestions that have improved the paper. MT has been supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1A5A1013277 and 2020R1A2C1007219).
\section*{Data availability}
No new data were generated or analysed in support of this research. Any other relevant data will be available on request.
\bibliographystyle{mnras}
\section{Definitions}\label{definitions}
We will define the higher-order degrees $\bar{\delta}_\Gamma$ and ranks $\operatorname{r}_{_\Gamma}$ of a group $G$ and a surjective homomorphism $\phi_\Gamma : G \twoheadrightarrow \Gamma$. This definition agrees with the definition of $\bar{\delta}_n$ given for a CW-complex $X$ (as defined in \S3 of \cite{Ha1}) when $G=\pi_1(X)$, $\Gamma=G/{G_r^{(n+1)}}$ and $\phi_\Gamma = \phi_n : G \twoheadrightarrow G/{G_r^{(n+1)}}$ is the natural projection map.
For more details see \cite[\S3, \S4 and \S5]{Ha1} and \cite[\S2,\S3,\S5]{Co}.
We recall the definition of a poly-torsion-free-abelian group.
\begin{definition}\label{ptfa}
A group $\Gamma$ is poly-torsion-free-abelian (PTFA) if it admits
a normal series $\left\{ 1\right\} =G_{0}\vartriangleleft
G_{1}\vartriangleleft \cdots\vartriangleleft G_{n}=\Gamma$ such
that each of the factors $G_{i+1}/G_{i}$ is torsion-free abelian.
\end{definition}
\begin{remark}
\label{PTFA} Recall that if $A\vartriangleleft G$ is torsion-free-abelian and
$G/A$ is PTFA then $G$ is PTFA. Any PTFA group is torsion-free and
solvable (the converse is not true). Also, any subgroup of a PTFA group
is a PTFA group \cite[Lemma 2.4, p.421]{Pa}.
\end{remark}
Some examples of interesting series associated to a group $G$ are the rational lower central series of $G$
(see Stallings \cite{Sta}), the rational lower central series of the rational commutator subgroup of $G$,
the rational derived series $G_r^{(n)}$ of $G$ (defined below), and the torsion-free derived series $G_H^{(n)}$ of $G$ (see \cite{CH}).
In this paper, our examples and applications will use the rational derived series of a group (defined below).
We point out that the torsion-free derived series is very interesting since it gives new concordance invariants of
links in $S^3$ (see \cite{CH} or \cite{Ha2}). For any of the subgroups $N$ in the above mentioned series, $G/N$ is a PTFA group.
In particular, for each $n \geq 0$, $\left.G\right/G^{(n+1)}_r$ is PTFA by Corollary 3.6 of \cite{Ha1}. We recall the definition of $G_r^{(n)}$.
\begin{definition}\label{rds}Let $G$ be a group and $G_{r}^{\left( 0\right) }=G$. For $n\geq1$ define
\[ G_{r}^{\left( n\right) } =\left\{ g\in G_{r}^{\left( n-1\right) }\mid
g^{k}\in\left[ G_{r}^{\left( n-1\right) },G_{r}^{\left(
n-1\right) }\right] \text{ for some } k \in \mathbb{Z}-\{ 0\}
\right\}
\]
to be the \textbf{$n^{th}$ term of the rational derived
series} of $G$.
\end{definition}
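To illustrate how the rational derived series differs from the ordinary derived series, consider the following small example (standard, not taken from the cited sources): the abelian group $G=\mathbb{Z}\oplus\mathbb{Z}/2$.

```latex
% Example (standard; not from the cited sources). Since G is abelian,
% the ordinary derived series stops immediately:
\[
G=\mathbb{Z}\oplus\mathbb{Z}/2,\qquad G^{(1)}=[G,G]=\{1\}.
\]
% The rational derived series also absorbs torsion: g^k \in [G,G]=\{1\}
% for some nonzero k exactly when g is a torsion element, so
\[
G_{r}^{(1)}=\{\,g\in G\mid g^{k}=1 \text{ for some } k\in\mathbb{Z}-\{0\}\,\}
           =0\oplus\mathbb{Z}/2 .
\]
% Hence the quotient is torsion-free abelian (and PTFA),
\[
G/G_{r}^{(1)}\cong\mathbb{Z},
\]
% whereas G/G^{(1)} = G has 2-torsion.
```

This is the prototypical behaviour: passing to the rational series kills the torsion in each successive quotient, which is what makes the quotients $G/G_r^{(n)}$ PTFA.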
R. Strebel showed that if $G$ is the fundamental group of a
(classical) knot exterior then the quotients of successive terms
of the derived series are torsion-free abelian \cite{Str}. Hence
for knot exteriors we have $G_{r}^{\left( i\right) }=G^{\left(
i\right) }$. This is also well known to be true for free groups.
Since any non-compact surface has free fundamental group, this
also holds for all orientable surface groups.
We make some remarks about PTFA groups.
Recall that if $\Gamma$ is PTFA then $\mathbb{Z}\Gamma$ is an Ore domain and hence $\mathbb{Z}\Gamma$ embeds in its \emph{right ring of quotients}
$\mathcal{K}_\Gamma := \mathbb{Z}\Gamma (\mathbb{Z}\Gamma-\{0\})^{-1}$, which is a skew field.
More generally, if
$S\subseteq R$ is a right divisor set of a ring $R$ then the
\emph{right quotient ring} $R S^{-1}$ exists (\cite[p.146]{Pa} or
\cite[p.52]{Ste}). By $R S^{-1}$ we mean a ring containing $R$
with the property that
\begin{enumerate}
\item Every element of $S$ has an inverse in $RS^{-1}$.
\item Every element of $RS^{-1}$ is of the form $rs^{-1}$ with $r \in R$, $s \in S$.
\end{enumerate}
If $R$ is an Ore domain and $S$ is a right divisor set then $RS^{-1}$ is flat as
a left $R$-module \cite[Proposition II.3.5]{Ste}. In particular, $\mathcal{K}_\Gamma$ is a flat left $\mathbb{Z}\Gamma$-module.
Moreover, every finitely generated
right module over a skew field is free and such modules have a well defined rank, $\operatorname{rank}_{\mathcal{K}_\Gamma}$,
which is additive on short exact sequences \cite[p.48]{Coh}.
Thus,
if $C$ is a non-negative finite chain complex of finitely generated
free right $\mathbb{Z}\Gamma$-modules then the Euler characteristic $\chi(C) = \sum_{i=0}^\infty (-1)^i \operatorname{rank} C_i$ is
defined and is equal to $\sum_{i=0}^\infty (-1)^i \operatorname{rank}_{\mathcal{K}_\Gamma} H_i(C;\mathcal{K}_\Gamma)$. In this paper, we will repeatedly use this fact
about the Euler characteristic.
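To illustrate, take $\Gamma=\mathbb{Z}=\left<t\right>$, so that $\mathbb{Z}\Gamma=\mathbb{Z}[t^{\pm 1}]$ and $\mathcal{K}_\Gamma=\mathbb{Q}(t)$, the field of rational functions. The cellular chain complex of the circle with this coefficient system,
\[
0 \rightarrow \mathbb{Z}[t^{\pm 1}] \xrightarrow{\; t-1 \;} \mathbb{Z}[t^{\pm 1}] \rightarrow 0,
\]
has $\chi(C)=1-1=0$, while $H_i(C;\mathbb{Q}(t))=0$ for all $i$ since $t-1$ is invertible in $\mathbb{Q}(t)$; hence both sides of the equality above vanish.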
Let $\psi:G \twoheadrightarrow \mathbb{Z}$ be a surjective homomorphism. Note that we will always be considering $\mathbb{Z}$ as the multiplicative group $\mathbb{Z}=\left<t\right>$ generated by $t$.
We wish to define $\bar{\delta}_\Gamma(\psi)$ as a non-negative integer. However, in order to do this, we need some compatibility conditions on $\Gamma$ and $\psi$.
\begin{definition}
Let $G$ be a group, $\phi_{_\Gamma}:G \twoheadrightarrow \Gamma$, and $\psi:G \twoheadrightarrow \mathbb{Z}$ where $\Gamma$ is a PTFA group.
We say that $(\phi_{_\Gamma},\psi)$ is an \textbf{admissible pair} for $G$ if there exists a surjection
$\alpha_{_{\Gamma,\mathbb{Z}}} : \Gamma \twoheadrightarrow \mathbb{Z}$ such that $\psi= \alpha_{_{\Gamma,\mathbb{Z}}} \circ \phi_{_\Gamma}$.
If $\alpha_{_{\Gamma,\mathbb{Z}}}$ is an isomorphism then we say that $(\phi_{_\Gamma},\psi)$ is \textbf{initial}.\end{definition}
Let $(\phi_{_\Gamma},\psi)$ be an admissible pair for $G$. We define $\Gamma^{\prime} := \ker(\alpha_{_{\Gamma,\mathbb{Z}}})$.
It is clear that $(\phi_{_\Gamma},\psi)$ is initial if and only if $\Gamma^{\prime}=1$. Since $\Gamma^{\prime}$ is a subgroup of the PTFA
group $\Gamma$, it is PTFA by Remark~\ref{PTFA}. Hence $\Gamma^{\prime}$
embeds in its right ring of quotients which we call $\mathbb{K}_\Gamma$. Moreover, $\mathbb{Z}\Gamma^{\prime}-\{0\}$ is known to be a
right divisor set of $\mathbb{Z}\Gamma$ \cite[p. 609]{Pa} hence we can define the right quotient ring
$R_\Gamma := \mathbb{Z}\Gamma (\mathbb{Z}\Gamma^{\prime}-\{0\})^{-1}$. After choosing a splitting $\xi: \mathbb{Z} \rightarrow \Gamma$, we see that any
element of $R_\Gamma$ can be written uniquely as $\sum t^{n_i} k_i$ where $t=\xi(1)$ and $k_i \in \mathbb{K}_\Gamma$. In this way, one sees
that $R_{\Gamma}$ is isomorphic to the skew polynomial ring $\mathbb{K}_\Gamma[t^{\pm 1}]$ (see the proof of Proposition 4.5 of \cite{Ha1} for more details).
Moreover, the embedding $g_\psi : \mathbb{Z}\Gamma^{\prime} \rightarrow \mathbb{K}_{\Gamma}$
extends to this isomorphism $R_{\Gamma} \rightarrow \mathbb{K}_\Gamma[t^{\pm 1}]$ (here we are identifying $\mathbb{K}_\Gamma$ and $t^0 \mathbb{K}_\Gamma$).
The abelian group $(G_\Gamma)_{ab}={\ker \phi_{_\Gamma}}\left/{[\ker \phi_{_\Gamma},\ker \phi_{_\Gamma} ]}\right.$ is a right $\mathbb{Z} \Gamma$-module via conjugation,
\[ [g] \gamma = [\gamma^{-1} g \gamma] \]
for $\gamma \in \Gamma$ and $g \in \ker \phi_{_\Gamma}$. Moreover, $(G_\Gamma)_{ab}$ is a $\mathbb{Z}\Gamma^{\prime}$-module via the
inclusion $\mathbb{Z}\Gamma^{\prime} \hookrightarrow \mathbb{Z}\Gamma$. Thus, $(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma} \mathcal{K}_\Gamma$ and
$(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma^{\prime}} \mathbb{K}_\Gamma$ are right $\mathcal{K}_\Gamma$ and $\mathbb{K}_\Gamma$-modules respectively.
\begin{definition}Let $G$ be a group and $\phi_{_\Gamma}:G \twoheadrightarrow \Gamma$ a coefficient
system with $\Gamma$ a PTFA group. We define the \textbf{$\Gamma$-rank of $G$} to be
\[\operatorname{r}_{_\Gamma}(G) = \operatorname{rank}_{\mathcal{K}_\Gamma} \left(\frac{\ker \phi_{_\Gamma}}{[\ker \phi_{_\Gamma},\ker \phi_{_\Gamma} ]} \otimes_{\mathbb{Z}\Gamma} \mathcal{K}_\Gamma \right).\]
\end{definition}
For a general group $G$ and coefficient system $\phi_{_\Gamma}$, this rank may be infinite. However, if $G$ is finitely generated and $\phi_{_\Gamma}$ is
non-zero then by Proposition~2.11 of \cite{COT}, $\operatorname{r}_{_\Gamma}(G)\leq \beta_1(G)-1$ and hence is finite. In the case that
$\phi_{_\Gamma}$ is the zero map, $\operatorname{r}_{_\Gamma}(G)=\beta_1(G)$.
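For example, if $G$ is a free group of rank $e\geq 2$, realized as the fundamental group of a wedge of $e$ circles, and $\phi_{_\Gamma}$ is non-zero, then
\[
\operatorname{r}_{_\Gamma}(G)=e-1=\beta_1(G)-1
\]
by \cite[Lemma 2.12]{COT}, so the upper bound above is attained.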
\begin{definition}Let $G$ be a finitely generated group and $(\phi_{_\Gamma},\psi)$ an admissible pair for $G$. We define the \textbf{$\Gamma$-degree of $\psi$} to be
\[\bar{\delta}_{\Gamma}(\psi) = \operatorname{rank}_{\mathbb{K}_\Gamma}\left( \frac{\ker \phi_{_\Gamma}}{[\ker \phi_{_\Gamma},\ker \phi_{_\Gamma} ]} \otimes_{\mathbb{Z}\Gamma^{\prime}} \mathbb{K}_\Gamma \right) \]
if $\operatorname{r}_{_\Gamma}(G)=0$ and $\bar{\delta}_\Gamma(\psi)=0$ otherwise.\end{definition}
We remark that $(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma^{\prime}} \mathbb{K}_\Gamma$ is merely $(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma} \mathbb{K}_\Gamma[t^{\pm 1}]$ viewed as a $\mathbb{K}_\Gamma$-module.
Since $G$ is a finitely generated group, $(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma} \mathbb{K}_\Gamma[t^{\pm 1}]$ is a finitely generated
$\mathbb{K}_\Gamma[t^{\pm 1}]$-module. Moreover, since $\mathbb{K}_\Gamma[t^{\pm 1}]$ is a (noncommutative left and right) principal ideal domain \cite[2.1.1, p.49]{Cohn}, the latter is isomorphic to
\[ \bigoplus_{i=1}^l \left.\mathbb{K}_\Gamma[t^{\pm 1}]\right/\left<p_i(t)\right> \oplus \left(\mathbb{K}_\Gamma[t^{\pm 1}]\right)^{\operatorname{r}_{_\Gamma}(G)}
\]
by \cite[Theorem 16, p.43]{Ja}.
Thus, $(G_\Gamma)_{ab}\otimes_{\mathbb{Z}\Gamma^{\prime}} \mathbb{K}_\Gamma$ is a finitely generated $\mathbb{K}_\Gamma$-module if and only if $\operatorname{r}_{_\Gamma}(G)=0$. In particular, if $\operatorname{r}_{_\Gamma}(G)= 0$ then $\bar{\delta}_\Gamma(\psi)$ is the sum of the degrees of the $p_i(t)$. Therefore, $\bar{\delta}_\Gamma(\psi)$ as defined above is always finite.
Let us consider the case when $\Gamma=\mathbb{Z}^m$. Let $X$ be a CW-complex with $\pi_1(X)=G$ and let $X_{\phi_{\Gamma}}$ be the regular $\mathbb{Z}^m$-cover of $X$ corresponding to $\phi_{_\Gamma}$. Consider an admissible pair $(\phi_{\mathbb{Z}^m},\psi)$ for $G$; that is, $\psi=\psi^{\prime} \circ \phi_\Gamma$ for some surjection $\psi^\prime: \mathbb{Z}^m \twoheadrightarrow \mathbb{Z}$.
In this case, $H_1(X_{\phi_{\Gamma}};\mathbb{Z})= \left.{\ker \phi_{_\Gamma}}\right/{[\ker \phi_{_\Gamma},\ker \phi_{_\Gamma}]}$ is a module over the Laurent polynomial ring in $m$ variables, $\mathbb{Z}[\mathbb{Z}^m]$. Moreover, $H_1(X_{\phi_{\Gamma}};\mathbb{Z})$ can be considered as a module over the Laurent polynomial ring in $m-1$ variables, $\mathbb{Z}\Gamma^\prime=\mathbb{Z}[\mathbb{Z}^{m-1}]$. Note that the $m-1$ variables in $\mathbb{Z}[\mathbb{Z}^{m-1}]$ correspond to a choice of basis elements of $\Gamma^\prime=\ker(\alpha_{\mathbb{Z}^m,\mathbb{Z}}: \mathbb{Z}^m \twoheadrightarrow \mathbb{Z})$. Therefore, as long as the rank of $H_1(X_{\phi_{\Gamma}};\mathbb{Z})$ as a $\mathbb{Z}[\mathbb{Z}^m]$-module is 0, $\bar{\delta}_{\mathbb{Z}^m}(\psi)$ is equal to the rank of $H_1(X_{\phi_{\Gamma}};\mathbb{Z})$ as a $\mathbb{Z}[\mathbb{Z}^{m-1}]$-module. In particular, when $m=1$, $\bar{\delta}_\mathbb{Z}(\psi)$ is equal to the rank of $H_1(X_\psi;\mathbb{Z})$ as an \emph{abelian group}, where $X_\psi$ is the infinite cyclic cover corresponding to $\psi$, as long as this rank is finite (otherwise $\bar{\delta}_\mathbb{Z}(\psi)=0$). When $\mathbb{Z}^m$ is the abelianization of $G$, $\bar{\delta}_{\mathbb{Z}^m}(\psi)=\bar{\delta}_0(\psi)$ (see below for the definition of $\bar{\delta}_0$) is equal to the Alexander norm of $\psi$ (see \cite{McM} for the definition of the Alexander norm) by \cite[Proposition~5.12]{Ha1}.
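For example, if $G$ is the group of a knot $K$ in $S^3$ and $\psi$ is the abelianization map, then $H_1(X_\psi;\mathbb{Q})$ is a $\mathbb{Q}$-vector space of dimension $\operatorname{deg} \Delta_K(t)$, where $\Delta_K(t)$ is the Alexander polynomial of $K$; hence
\[
\bar{\delta}_\mathbb{Z}(\psi)=\operatorname{deg} \Delta_K(t).
\]
In particular, for the trefoil knot $\bar{\delta}_\mathbb{Z}(\psi)=2$.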
We now define the \emph{higher-order degrees} and \emph{ranks} associated to a group $G$. For each $n\geq0$, let
$\Gamma_n=\left.G\right/G_r^{(n+1)}$ where $G_r^{(n+1)}$ is the $(n+1)^{st}$-term of the rational derived series of $G$
as defined in Definition~\ref{rds}. We
define the \textbf{$n^{th}$-order rank of $G$} to be \[\operatorname{r}_n(G)= \operatorname{r}_{_{\Gamma_n}}(G).\]
Next, we remark that if $\psi \in H^1(G;\mathbb{Z})\cong \operatorname{Hom}(G;\mathbb{Z})$, then $\psi(G_r^{(n+1)})=1$. Hence
for each primitive $\psi \in H^1(G;\mathbb{Z})$ the pair $(\phi_{_{\Gamma_n}},\psi)$ is an admissible pair for $G$.
For primitive $\psi$, we define the \textbf{$n^{th}$-order degree of $\psi$} to be \[\bar{\delta}_n(\psi)= \bar{\delta}_{\Gamma_n}(\psi).\]
For non-primitive $\psi$, there is a primitive cohomology class $\psi^{\prime} \in H^1(G;\mathbb{Z})$ and an integer $m\geq 1$ such that
$\psi=m \psi^{\prime}$. Define $\bar{\delta}_n(\psi)=m \bar{\delta}_n(\psi^{\prime}).$
Thus, for each group $G$ and $n\geq 0$ we have defined a function $\bar{\delta}_n:H^1(G;\mathbb{Z}) \rightarrow \mathbb{Z}$ which is ``linear on rays through the origin''.
We put a partial ordering on these functions by $\bar{\delta}_i \leq \bar{\delta}_j$ if $\bar{\delta}_i(\psi) \leq \bar{\delta}_j(\psi)$ for
all $\psi \in H^1(G;\mathbb{Z})$. Also, we say that $\bar{\delta}_i=0$ provided $\bar{\delta}_i(\psi)=0$ for all $\psi \in H^1(G;\mathbb{Z})$.
Suppose $f:E \twoheadrightarrow G$ is a surjective homomorphism and $(\phi_{_\Gamma},\psi)$ is an
admissible pair for $G$. Then there is an induced admissible pair $(\phi_{_\Gamma} \circ f,\psi \circ f)$ for
$E$. In particular, we can speak of $\bar{\delta}_\Gamma^E(\psi \circ f)$. When we have this situation,
unless otherwise noted, we will use this admissible pair induced from $G$. When there is no confusion, we will suppress
the $f$ and just write $(\phi_{_\Gamma},\psi )$ when we mean $(\phi_{_\Gamma} \circ f,\psi \circ f)$, or
$\psi$ when we mean $\psi \circ f$.
In this paper, we will often use the notation $\operatorname{r}_{_\Gamma}(X)$ and $\bar{\delta}_\Gamma^X(\psi)$ for $X$ a CW-complex and $\psi$ an element of $H^1(X;\mathbb{Z})\cong H^1(\pi_1(X);\mathbb{Z})$.
By this, we mean $\operatorname{r}_{_\Gamma}(\pi_1(X))$ and $\bar{\delta}_\Gamma^{\pi_1(X)}(\psi)$ for an admissible pair $(\phi_{_\Gamma},\psi )$ for $\pi_1(X)$. These
are equivalent to the homological definitions given in \cite{Ha1}. That is, if $(\phi_{_\Gamma},\psi)$ is an admissible pair for $\pi_1(X)$ then
$H_1(X;\mathbb{K}_{\Gamma}[t^{\pm1}])$ and $H_1(X;\mathcal{K}_\Gamma)$ are right $\mathbb{K}_\Gamma$ and $\mathcal{K}_\Gamma$-modules
respectively and since $\mathcal{K}_\Gamma$ and $\mathbb{K}_\Gamma[t^{\pm 1}]$ are flat left $\mathbb{Z}\Gamma$-modules \cite[Proposition II.3.5]{Ste}, we see that
\[
\operatorname{r}_{_\Gamma}(X)=\operatorname{rank}_{\mathcal{K}_{\Gamma}}H_1(X;\mathcal{K}_{\Gamma})
\]
and
\[
\bar{\delta}_{\Gamma}(\psi)=\operatorname{rank}_{\mathbb{K}_{\Gamma}} H_1(X;\mathbb{K}_{\Gamma}[t^{\pm1}])
\]
if $\operatorname{r}_{_\Gamma}(X)=0$ and $\bar{\delta}_{\Gamma}(\psi)=0$ otherwise.
\section{Main Results}\label{mainresults}
We seek to study the behavior of $\bar{\delta}_n(\psi)$ as $n$ increases. More generally, we would like to compare $\bar{\delta}_\Gamma$ as we vary the group $\Gamma$. We show that the $\bar{\delta}_\Gamma$ satisfy a monotonicity
condition provided the groups satisfy a compatibility condition. We describe this condition below.
\begin{definition}
Let $G$ be a group, $\phi_{_\Lambda}:G \twoheadrightarrow \Lambda$, $\phi_{_\Gamma}:G \twoheadrightarrow \Gamma$,
and $\psi:G \twoheadrightarrow \mathbb{Z}$ where $\Lambda$ and $\Gamma$ are PTFA groups. We say that
$(\phi_{_\Lambda}, \phi_{_\Gamma},\psi)$ is an \textbf{admissible triple} for $G$ if there exist surjections
$\alpha_{_{\Lambda,\Gamma}} : \Lambda \twoheadrightarrow \Gamma$ and
$\alpha_{_{\Gamma,\mathbb{Z}}} : \Gamma \twoheadrightarrow \mathbb{Z}$ such that $\phi_{_\Gamma}= \alpha_{_{\Lambda,\Gamma}} \circ
\phi_{_\Lambda}$, $\psi= \alpha_{_{\Gamma,\mathbb{Z}}} \circ \phi_{_\Gamma}$, and $\alpha_{_{\Lambda,\Gamma}}$ is not an
isomorphism. If $\alpha_{_{\Gamma,\mathbb{Z}}}$ is an isomorphism then we say that $(\phi_{_\Lambda}, \phi_{_\Gamma},\psi)$
is \textbf{initial}.
\end{definition}
Note that if $(\phi_{_\Lambda}, \phi_{_\Gamma},\psi)$ is an admissible triple then $(\phi_{_\Lambda}, \psi)$ and
$(\phi_{_\Gamma}, \psi)$ are both admissible pairs. Hence, in this case, we can define both $\bar{\delta}_{\Lambda}(\psi)$
and $\bar{\delta}_{\Gamma}(\psi)$. We note that $(\phi_{_\Lambda}, \phi_{_\Gamma},\psi)$ is initial if and only if
$(\phi_{_\Gamma}, \psi)$ is initial. Moreover, $(\phi_{_\Lambda}, \psi)$ is never initial since $\Lambda \twoheadrightarrow \Gamma$
is not an isomorphism. We will show that $\bar{\delta}_{\Lambda}(\psi) \geq \bar{\delta}_{\Gamma}(\psi)$ as long as the triple is not
initial. We point out that even if $\alpha_{_{\Lambda,\Gamma}}$ is an isomorphism, we can define both the $\Lambda$- and $\Gamma$-degrees, and
in this case $\bar{\delta}_\Gamma(\psi)=\bar{\delta}_\Lambda(\psi)$.
We now proceed to state and prove the main theorems.
\begin{theorem}\label{2complex}Let $G$ be a finitely presented group with $\operatorname{def}(G)\geq 1$ and
$(\phi_{_\Lambda},\phi_{_\Gamma},
\psi)$ be an admissible triple for $G$. If $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is not initial then
\begin{equation}\bar{\delta}_{\Lambda}(\psi) \geq \bar{\delta}_{\Gamma}(\psi)\end{equation} otherwise
\begin{equation} \bar{\delta}_{\Lambda}(\psi) \geq \bar{\delta}_{\Gamma}(\psi)-1.
\end{equation}
\end{theorem}
Before proving Theorem~\ref{2complex}, we will state a corollary and make some remarks about the deficiency hypothesis in the theorem. First,
let $\Gamma_n$ be the quotient of $G$ by the $(n+1)^{st}$ term of the rational derived series as in Definition~\ref{rds}.
Recall that for any $\psi \in H^1(G;\mathbb{Z})$, $(\phi_{_{\Gamma_n}}, \psi)$ is an admissible pair. Moreover,
$(\phi_{_{\Gamma_{n+1}}}, \phi_{_{\Gamma_n}}, \psi)$ is an admissible triple unless $G_r^{(n+1)}=G_r^{(n+2)}$, and this triple is initial if and only if
$\beta_1(G)=1$ and $n=0$. Hence by Theorem~\ref{2complex} we see that the $\bar{\delta}_n$ are a nondecreasing function
of $n$ (for $n \geq 1$). This behavior was first established for the fundamental groups of knot complements in $S^3$ by T. Cochran in \cite[Theorem 5.4]{Co}.
Recall that $\bar{\delta}_{n+1} \geq \bar{\delta}_n$ (respectively $\bar{\delta}_{n}=0$) means that $\bar{\delta}_{n+1}(\psi) \geq \bar{\delta}_n(\psi)$ (respectively $\bar{\delta}_{n}(\psi)=0$) for all $\psi \in H^1(G;\mathbb{Z})$.
\begin{corollary}\label{2complexn}Let $G$ be a finitely presented group with $\operatorname{def}(G)\geq 1$.
If $\beta_1(G)\geq 2$ then
\[\bar{\delta}_0 \leq \bar{\delta}_1 \leq \cdots \leq \bar{\delta}_n \leq \cdots.
\]
If $\beta_1(G)=1$
and $\psi$ is a generator of $H^1(G;\mathbb{Z})$ then $\bar{\delta}_0(\psi) -1 \leq \bar{\delta}_1(\psi) \leq \cdots \leq \bar{\delta}_n(\psi) \leq \cdots$.
\end{corollary}
\begin{proof}Let $\psi$ be a primitive class in $H^1(G;\mathbb{Z})$. We can assume that $G^{(n+1)}_r \neq G_r^{(n+2)}$ since if $G^{(n+1)}_r =G_r^{(n+2)}$ then
$\bar{\delta}_{n+1}(\psi)=\bar{\delta}_{n}(\psi)$ (note that in the case
$\beta_1(G)=1$ and $n=0$, $\bar{\delta}_1(\psi)=\bar{\delta}_0(\psi) \geq \bar{\delta}_0(\psi)-1$ is also satisfied). Therefore
$T=(\phi_{_{\Gamma_{n+1}}}, \phi_{_{\Gamma_n}}, \psi)$ is
an admissible triple. As mentioned above, $T$ is initial if and only if $\beta_1(G)=1$ and $n=0$.
Hence if $\beta_1(G)=1$ and $n=0$ then by Theorem~\ref{2complex}, $\bar{\delta}_1(\psi) \geq \bar{\delta}_0(\psi)-1$.
Otherwise, $\bar{\delta}_{n+1}(\psi)\geq\bar{\delta}_n(\psi)$.
If $\beta_1(G)\geq2$ and $\psi$ is not primitive then
$\psi=m \psi^{\prime}$ for some primitive $\psi^{\prime}$ and $m \geq 2$. Hence,
$\bar{\delta}_{n+1}(\psi)=m \bar{\delta}_{n+1}(\psi^\prime) \geq m \bar{\delta}_{n}(\psi^\prime) = \bar{\delta}_{n}(\psi)$.
\end{proof}
We now make some remarks about the condition $\operatorname{def}(G)\geq 1$. First, if $G$ has deficiency
at least $2$ then the results of Theorem~\ref{2complex} and Corollary~\ref{2complexn} hold
simply because all of the degrees are zero.
\begin{remark}\label{remark1}If $G$ is a finitely presented group with $\operatorname{def}(G)\geq 2$ and $(\phi_{_\Gamma},\psi)$ is an
admissible pair for $G$ then $\operatorname{r}_{_\Gamma}(G)\geq1$ and hence $\bar{\delta}_\Gamma(\psi)=0$.
\end{remark}
To see this, let $X_G$ be a finite, connected 2-complex with one 0-cell $x_0$, $m$ 1-cells, and $r$ 2-cells, where $m-r\geq 2$ and $G=\pi_1(X_G,x_0)$. Then $H_1(X_G,x_0;\mathcal{K}_\Gamma)$ has a presentation with $m$
generators and $r$ relations, so $\operatorname{rank}_{\mathcal{K}_\Gamma} H_1(X_G,x_0;\mathcal{K}_\Gamma)\geq m-r\geq2$ and hence
$\operatorname{r}_{_\Gamma}(G)=\operatorname{r}_{_\Gamma}(X_G) = \operatorname{rank}_{\mathcal{K}_\Gamma} H_1(X_G,x_0;\mathcal{K}_\Gamma) -1 \geq 1$ \cite[\S 4 and \S 5]{Ha1}.
Therefore, $\bar{\delta}_\Gamma(\psi)=0$ for all $\psi \in H^1(G;\mathbb{Z})$.
\vspace{5pt}
However, if the deficiency of $G$ is not positive, the conclusion of the theorem can fail, and we can construct an infinite number of such examples.
We construct finitely presented groups for which the degrees are ``large'' up to (but not
including) the $n^{th}$ stage while the degree at the $n^{th}$ stage is zero. For simplicity, we only describe examples
when $\beta_1(G)=1$. However, the reader should notice that the same type of behavior can be seen for groups with $\beta_1(G)\geq 2$ using the same techniques.
\begin{proposition}\label{remark2}For each $g \geq 1$ and $n \geq 1$ there exist examples of finitely presented groups
$G_{n,g}$ with $\operatorname{def}(G_{n,g})\leq 0$ and $\beta_1(G_{n,g})=1$ such that $\bar{\delta}_0(\psi)=2g$,
$\bar{\delta}_i(\psi)=2g-1$ for $1 \leq i \leq n-1$ and $\bar{\delta}_{n}(\psi)=0$ whenever $\psi$ is a generator of $H^1(G_{n,g};\mathbb{Z})$.
\end{proposition}
\begin{proof}
We will construct these examples by adding relations to the fundamental group $G$ of a fibered knot complement that kill
the generators of $\left.G^{(n+1)}\right/G^{(n+2)} \otimes \mathbb{K}_n$. Let $g \geq 1$ and $n \geq 1$, and let $G$ be the fundamental group of the exterior of a fibered knot $K$
in $S^3$ of genus $g$. Since $K$ is fibered, $G^{(1)}$ is free, so $G_r^{(n+1)}/G_r^{(n+2)}=G^{(n+1)}/G^{(n+2)}$ and
$\mathcal{A}_n=\left.G^{(n+1)}\right/G^{(n+2)} \otimes_{\mathbb{Z} \Gamma_n^{\prime}} \mathbb{K}_n$ is a finitely generated free
right $\mathbb{K}_n$-module of rank $2g-1$. Let $a_1,\ldots , a_{2g-1}$ be the generators of $\mathcal{A}_n$. Since $\mathbb{K}_n$ is
an Ore domain, we can find $k_j \in \mathbb{K}_n$ such that $a_j k_j \in G^{(n+1)}/G^{(n+2)} \otimes 1$.
Pick $\gamma_1, \ldots , \gamma_{2g-1} \in G^{(n+1)}$ such that $[\gamma_j]=a_j k_j$, let
$H=G/\left<\gamma_1, \ldots , \gamma_{2g-1}\right>$, and let $\eta: G \twoheadrightarrow H$ be the quotient map.
Note that since any knot group has deficiency 1, $H$ has a presentation with $m$ generators and $m+2g-2$ relations. Since $\gamma_1, \ldots , \gamma_{2g-1} \in G^{(n+1)}$, we have
an isomorphism $G/G^{(n+1)}\cong H/H^{(n+1)} \cong H/H_r^{(n+1)}$. Therefore, $\bar{\delta}_0^H(\psi)=\bar{\delta}_0^G(\psi)=2g$ and $\bar{\delta}_i^H(\psi)=\bar{\delta}_i^G(\psi)=2g-1$ for $1 \leq i \leq n-1$.
Since $G^\prime \twoheadrightarrow H^\prime$, the isomorphism $G/G^{(n+1)}\cong H/H^{(n+1)}$ carries $G^{(i)}/G^{(n+1)}$ onto $H^{(i)}/H^{(n+1)}$ for $0\leq i\leq n$, hence
$\mathbb{K}_n = \mathbb{K}_n^G \cong \mathbb{K}_n^H$.
Moreover, since
$G^{(n+1)} \twoheadrightarrow H^{(n+1)}$, the map
$\left.G^{(n+1)}\right/G^{(n+2)} \otimes \mathbb{K}_n \rightarrow \left.H^{(n+1)}\right/H^{(n+2)} \otimes \mathbb{K}_n$ is surjective.
But the generators of $\mathcal{A}_n$ are sent to zero under this map, so $H^{(n+1)}/H^{(n+2)} \otimes \mathbb{K}_n=0$.
Finally, $H_r^{(n+1)}=H^{(n+1)}$ so
$$\frac{H_r^{(n+1)}}{H_r^{(n+2)}} \otimes \mathbb{K}_n \cong \frac{H^{(n+1)}}{H_r^{(n+2)}} \otimes \mathbb{K}_n \cong \left(\left.\frac{H^{(n+1)}}{H^{(n+2)}}\right/\{\mathbb{Z}\text{-torsion}\}\right) \otimes \mathbb{K}_n=0$$
(see Lemma 3.5 of \cite{Ha1} for the second isomorphism) hence $\bar{\delta}_n(\psi)=0$.
\end{proof}
We will now prove Theorem~\ref{2complex}.
\begin{proof}[Proof of Theorem~\ref{2complex}]If the deficiency of $G$ is at least 2 then by Remark~\ref{remark1}, all of the degrees are zero hence the conclusions of the theorem are true. Now we prove the case when $\operatorname{def}(G)=1$. We can assume that $\operatorname{r}_{_\Gamma}(G) =0$, otherwise $\bar{\delta}_{\Gamma}(\psi)=0$
and hence the statement of the theorem is true since $\bar{\delta}_\Lambda(\psi)$ is always non-negative.
Since $G$ is finitely presented, there is a finite 2-complex $X$ such that $G=\pi_1(X)$ and $\chi(X)=1-\operatorname{def}(G)=0$.
Recall that $X$ is obtained from the presentation of $G$ with deficiency 1 by starting with one 0-cell, attaching a
1-cell for each generator and a 2-cell for each relation in the presentation of $G$. Since
$\Gamma \twoheadrightarrow \mathbb{Z}$ and $\phi_{_\Gamma}$ is surjective, $H_i(X;\mathcal{K}_\Gamma)=0$ for
$i \neq 1,2$ \cite[Proposition 2.9]{COT}.
Moreover, $\chi(X)=0$ implies that
$\operatorname{rank}_{\mathcal{K}_{\Gamma}} H_2(X;\mathcal{K}_\Gamma)=
\operatorname{rank}_{\mathcal{K}_{\Gamma}} H_1(X;\mathcal{K}_\Gamma)=\operatorname{r}_{_\Gamma}(G)=0$ since the Euler characteristic can be computed using $\mathcal{K}_\Gamma$-coefficients as mentioned in \S~\ref{definitions}.
Since $\operatorname{r}_{_\Gamma}(X)=0$, it follows that $\operatorname{r}_{_\Lambda}(X)=0$ \cite{Ha2}.
Replacing $\Gamma$ by $\Lambda$ in the above argument, it follows that
$\operatorname{rank}_{\mathcal{K}_{\Lambda}} H_2(X;\mathcal{K}_\Lambda)=0$.
Let $X_\psi$ be the infinite cyclic cover of $X$ corresponding to $\psi$. There is a coefficient system for
$X_\psi$, $\phi^{\prime}_{_\Gamma}:\pi_1(X_\psi) \twoheadrightarrow \Gamma^{\prime}$, given by restricting
$\phi_{_\Gamma}$ to $\pi_1(X_\psi)$. Moreover, as $\mathbb{K}_\Gamma$-modules $H_1(X;\mathcal{K}_\Gamma)\cong H_1(X_\psi;\mathbb{K}_\Gamma)$ so $H_1(X_\psi;\mathbb{K}_\Gamma)$ is a finitely generated free
$\mathbb{K}_\Gamma$-module of rank $\bar{\delta}_{\Gamma}(\psi)$ (similarly for $\Lambda$).
Since $\Gamma^{\prime}$ is PTFA (and hence $\mathbb{Z} \Gamma^{\prime}$ is an Ore domain), there exists a wedge of $e$ circles
$W$ and a map
$f:W \rightarrow X_\psi$ such that
\[
f_*:H_1(W;\mathbb{K}_\Gamma) \rightarrow H_1(X_\psi;\mathbb{K}_\Gamma)
\]
is an isomorphism. Here, the coefficient system on $W$ is given by $\phi^{\prime}_{_\Gamma} \circ f_{\ast}$. By the proof of Lemma 2.1 in \cite{COT}, $\ker \phi_{_\Gamma} \neq \ker \psi$ if and only if $\phi^{\prime}_{_\Gamma} \circ f_{\ast}$ is non-trivial.
Moreover, since $W$ is a finite connected 2-complex with $H_2(W)=0$, if $\ker \phi_{_\Gamma} \neq \ker \psi$ then
$H_1(W;\mathbb{K}_\Gamma)\cong \mathbb{K}_\Gamma^{e-1}$ \cite[Lemma 2.12]{COT}; otherwise $H_1(W;\mathbb{K}_\Gamma)\cong \mathbb{K}_\Gamma^e$.
Up to homotopy we can assume that $W$ is a subcomplex of $X_\psi$ by replacing $X_\psi$ with the mapping cylinder of $f$.
Consider the long exact sequence of the pair $(X_\psi,W)$ with coefficients in $\mathbb{K}_\Gamma$:
\[
H_2(X_\psi;\mathbb{K}_\Gamma) \rightarrow H_2(X_\psi,W;\mathbb{K}_\Gamma) \rightarrow H_1(W;\mathbb{K}_\Gamma) \rightarrow H_1(X_\psi;\mathbb{K}_\Gamma).
\]
Since $X$ has no 3-cells, $H_2(X;\mathbb{Z}\Gamma)$ is a submodule of the free module $C_2(X;\mathbb{Z}\Gamma)$; hence $TH_2(X;\mathbb{Z}\Gamma)$, the $\mathbb{Z}\Gamma$-torsion
submodule of $H_2(X;\mathbb{Z}\Gamma)$, is zero. Now, the kernel of the map $H_2(X;\mathbb{Z}\Gamma) \rightarrow H_2(X;\mathcal{K}_\Gamma)$ is $TH_2(X;\mathbb{Z}\Gamma)$.
Moreover, we have shown that $H_2(X;\mathcal{K}_\Gamma)=0$ hence $H_2(X;\mathbb{Z}\Gamma)=0$. Thus, $H_2(X_\psi;\mathbb{K}_\Gamma)
\cong H_2(X;\mathbb{K}_\Gamma[t^{\pm 1}]) \cong H_2(X;\mathbb{Z}\Gamma)\otimes_{\mathbb{Z}\Gamma}\mathbb{K}_\Gamma[t^{\pm 1}] =0$.
Since the last arrow in the sequence is an isomorphism and $H_2(X_\psi;\mathbb{K}_\Gamma)=0$, it follows that $H_2(X_\psi,W;\mathbb{K}_\Gamma)=0$.
Our goal is to show that $H_2(X_\psi,W;\mathbb{K}_\Lambda)=0$. Then by analyzing the long exact sequence of the pair $(X_\psi,W)$
with coefficients in
$\mathbb{K}_\Lambda$, it will follow that $H_1(W;\mathbb{K}_\Lambda) \rightarrow H_1(X_\psi;\mathbb{K}_\Lambda)$ is a monomorphism.
We note that $\ker \phi_{_\Lambda} \neq
\ker \phi_{_\Gamma}$ implies that $\operatorname{rank}_{\mathbb{K}_\Lambda} H_1(W;\mathbb{K}_\Lambda)=e-1$ as above. Thus, if $\ker \phi_{_\Gamma} \neq \ker \psi$
then (assuming the monomorphism above) $\bar{\delta}_\Lambda(\psi) \geq e-1=\bar{\delta}_\Gamma(\psi)$; otherwise $\bar{\delta}_\Lambda(\psi) \geq e-1=\bar{\delta}_\Gamma(\psi)-1$.
Consider the relative chain complex of $(X_\psi,W)$ with coefficients in $\mathbb{Z}\Gamma^{\prime}$:
\[
0 \rightarrow C_2(X_\psi,W;\mathbb{Z}\Gamma^{\prime}) \xrightarrow{\partial_2^{\Gamma^{\prime}}} C_1(X_\psi,W;\mathbb{Z}\Gamma^{\prime}) \rightarrow \cdots.
\]
Since $W$ has no 2-cells and $X_\psi$ has no 3-cells, $H_2(X_\psi,W;\mathbb{Z}\Gamma^{\prime})$ is $\mathbb{Z}\Gamma^\prime$-torsion free, so $H_2(X_\psi,W;\mathbb{K}_\Gamma)=0$ implies that
$H_2(X_\psi,W;\mathbb{Z}\Gamma^{\prime})=0$
and hence $\partial_2^{\Gamma^{\prime}}$ is injective.
Let $A=\ker(\alpha_{_{\Lambda,\Gamma}}|_{\Lambda^{\prime}}:\Lambda^{\prime} \twoheadrightarrow \Gamma^{\prime})$. Since $A$ is a subgroup of a PTFA group, $A$ is PTFA by Remark~\ref{PTFA}.
If $M$ is any right $\mathbb{Z}\Lambda^{\prime}$-module then $M \otimes_{\mathbb{Z} A} \mathbb{Z}$ has the structure of a right $\mathbb{Z}\Gamma^{\prime}$-module given by
\[(\sum \sigma \otimes n)\gamma = \sum \sigma \gamma \otimes n
\]
for any $\gamma \in \Gamma^{\prime}$.
Moreover, one can check that
$C_{\ast}(X_\psi,W;\mathbb{Z}\Lambda^{\prime}) \otimes_{\mathbb{Z} A} \mathbb{Z}$ is isomorphic to $C_{\ast}(X_\psi,W;\mathbb{Z}\Gamma^{\prime})$ as right
$\mathbb{Z}\Gamma^{\prime}$-modules.
Thus, after making this identification, $\partial_2^{\Lambda^{\prime}}: C_2(X_\psi,W;\mathbb{Z}\Lambda^{\prime}) \rightarrow C_1(X_\psi,W;\mathbb{Z}\Lambda^{\prime})$
is injective by the following result of R. Strebel.
\begin{proposition}[R. Strebel, \cite{Str} p. 305]\label{strebel}Suppose $\Gamma$ is a PTFA group and $R$ is a commutative
ring. Any map between projective right $R\Gamma$-modules whose image under the functor $- \otimes_{R\Gamma} R$ is
injective, is itself injective.
\end{proposition}
\noindent Finally, since $\mathbb{K}_{\Lambda}$ is flat as a $\mathbb{Z}\Lambda^{\prime}$-module, $H_2(X_\psi,W;\mathbb{K}_\Lambda)=0$ as desired.
\end{proof}
Suppose $\Lambda$ and $\Gamma$ are abelian groups and $G$ is the fundamental group of a compact orientable manifold with toroidal (or empty) boundary. In this case, it can easily be shown, using the results in \cite{McM} and \cite{Ha1}, that the inequalities in Theorem~\ref{2complex} (and Theorem~\ref{closed} below) are in fact \emph{equalities} for all $\psi$ which lie in the cone of an open face of the Alexander norm ball.
We show below that even in this case, there are $\psi$ for which the inequality in Theorem~\ref{2complex} is necessary.
\begin{example}\label{borr_rings}
Let $X$ be the exterior of the Borromean rings in $S^3$ and let $G$ be the fundamental group of $X$. A Wirtinger presentation of $G$ is given by $\left<x,y,z \hspace{3pt}|\hspace{3pt} [z,[x,y^{-1}]],[y,[z,x^{-1}]]\right>$ (see \cite[p.10]{F} for a similar presentation). Thus, there is an epimorphism $f: G \twoheadrightarrow \left<y,z\right>$ obtained by sending $x$ to $1$. Let $\psi_{(0,m,n)}:G\twoheadrightarrow\mathbb{Z}$ be the homomorphism defined by $\psi_{(0,m,n)}(x)=1$, $\psi_{(0,m,n)}(y)=t^m$, $\psi_{(0,m,n)}(z)=t^n$ where $\text{gcd}(m,n)=1$. Since $\psi_{(0,m,n)}$ factors through $f$, the rank of $H_1$ of the infinite cyclic cover of $X$ corresponding to $\psi_{(0,m,n)}$ is non-zero (see, for example, \cite[Proposition~2.2]{HaC}). It follows that $\bar{\delta}_\mathbb{Z}(\psi_{(0,m,n)})=0$. However, one can compute the Alexander polynomial of $X$ (from the presentation of $G$) to be $\Delta_X=(x-1)(y-1)(z-1)$. Therefore, $\bar{\delta}_{\mathbb{Z}^3}(\psi_{(0,m,n)})=|m|+|n|>0$.
\end{example}
Now we consider the case when $G$ is the fundamental group of a \emph{closed} 3-manifold. In this case, the deficiency of $G$ is 0 so Theorem~\ref{2complex} does not suffice to prove a monotonicity result for $G$. The proof that the degrees satisfy a monotonicity relation will
use Theorem~\ref{2complex} for 2-complexes but will also use some additional topology of the 3-manifold. Before stating the
corresponding theorem for closed 3-manifolds, we introduce an important lemma which will be used in the proof of Theorem~\ref{closed}.
\begin{lemma}\label{whenzero}Let $K$ be a nullhomologous knot in a 3-manifold $X$, let $M_K$ be the 0-surgery on $K$, let
$\psi:\pi_1(M_K) \twoheadrightarrow \mathbb{Z}$ be a homomorphism which maps the meridian of $K$ to a nonzero element of $\mathbb{Z}$, and let $(\phi_{_\Gamma},\psi)$ be an
admissible pair for $\pi_1(M_K)$. If $\operatorname{r}_{_\Gamma}(M_K)=0$ and $(\phi_{_\Gamma},\psi)$ is not initial then the longitude
of $K$ is not 0 in $H_1(X \setminus K;\mathbb{K}_\Gamma [t^{\pm 1}])$.
\end{lemma}
\begin{proof}
Let $l \subset N(K)$ be the longitude of $K$. Here, $N(K)$ is an open neighborhood of $K$ in $X$. Note that
$M_K=(X \setminus N(K)) \cup e^2 \cup e^3$ where the attaching circle of $e^2$ is $l$. Since $X \setminus N(K)$ is
homotopy equivalent to $X \setminus K$ we use the latter. Consider the diagram below.
\[
\begin{diagram}
\mathbb{K}_\Gamma[t^{\pm 1}] & & & & \\
\dTo_{\partial_3} & & & & \\
H_2(X \setminus K \cup e^2;\mathbb{K}_\Gamma[t^{\pm 1}]) & \rTo^{\pi} & \mathbb{K}_\Gamma[t^{\pm 1}] & \rTo^{\partial_2} & H_1(X\setminus K; \mathbb{K}_\Gamma[t^{\pm 1}]) \\
\dTo_{i_\ast} & & & & \\
H_2(M_K;\mathbb{K}_\Gamma[t^{\pm 1}])
\end{diagram}
\]
The horizontal (respectively vertical) sequence is the long exact sequence of the pair $\left(X \setminus K \cup e^2,X \setminus K\right)$
(respectively $\left(M_K,X \setminus K \cup e^2\right)$) and the $\mathbb{K}_\Gamma[t^{\pm 1}]$ term in the sequence is generated
by the relative class coming from $e^2$ (respectively $e^3$). We note that the boundary of the class represented by $e^2$
is the class represented by the longitude of $K$ in $H_1(X\setminus K; \mathbb{K}_\Gamma[t^{\pm 1}])$. By analyzing the attaching
map of $\partial e^3$, we see that $\pi \circ \partial_3$ is the map which sends $1$ to $t^r-1$, where $t^r$ is the image
of the meridian of $K$ under $\phi_{_\Gamma}$. Since $r \neq 0$, $t^r-1$ is not a unit in $\mathbb{K}_\Gamma[t^{\pm 1}]$ and hence this map is never surjective.
Since $\operatorname{r}_{_\Gamma}(M_K)=0$, by Remark 2.8 of \cite{COT} we have
$H_2(M_K;\mathbb{K}_\Gamma[t^{\pm 1}])\cong H^1(M_K;\mathbb{K}_\Gamma[t^{\pm 1}])\cong
\operatorname{Ext}^1_{\mathbb{K}_\Gamma[t^{\pm 1}]}(H_0(M_K;\mathbb{K}_\Gamma[t^{\pm 1}]),\mathbb{K}_\Gamma[t^{\pm 1}])$.
By the proof of Proposition 2.9 in \cite{COT},
$H_0(M_K;\mathbb{K}_\Gamma[t^{\pm 1}]) = \left.\mathbb{K}_\Gamma[t^{\pm 1}]\right/(\mathbb{K}_\Gamma[t^{\pm 1}]\cdot I)$ where $I$ is the
augmentation ideal of $\mathbb{Z} \pi_1(M_K)$ acting via $\mathbb{Z}\pi_1(M_K) \rightarrow \mathbb{Z}\Gamma \rightarrow \mathbb{K}_\Gamma[t^{\pm 1}]$.
Thus, $H_0(M_K;\mathbb{K}_\Gamma[t^{\pm 1}])\neq 0$ if and only if $(\phi_{_\Gamma},\psi)$ is initial. Thus, if
$(\phi_{_\Gamma},\psi)$ is not initial, $\partial_3$ is surjective. Suppose $[l]=0$ in $H_1(X\setminus K; \mathbb{K}_\Gamma[t^{\pm 1}])$,
then $\pi$ would be surjective, making $\pi \circ \partial_3$ surjective which is a contradiction.
\end{proof}
Consider the situation when $X=S^3\setminus K$, $G=\pi_1(S^3\setminus K)$, $\psi$ is the abelianization map of $G$, and
$\phi_{_\Gamma}:G \twoheadrightarrow \Gamma= G\left/G^{(2)}\right.$ is the quotient map, where $G^{(n)}$ is the $n^{th}$
term of the derived series of $G$. It is known that $\Gamma$ is a PTFA group \cite{Str}. Let $l$ be the longitude of
$K$. Since $l \in G^{(2)}$, $\phi_{_\Gamma}$ extends to a map $\pi_1(M_K) \twoheadrightarrow G\left/G^{(2)}\right.$.
We note that in this case, the pair $(\phi_{_\Gamma},\psi)$ is initial if and only if the Alexander polynomial is 1.
The longitude being nonzero in $H_1(S^3\setminus K;\mathbb{K}_\Gamma [t^{\pm 1}])$ implies that $l$ is nonzero in
$H_1(S^3\setminus K;\mathbb{Z}\Gamma)=\frac{G^{(2)}}{G^{(3)}}$. Hence, if the Alexander polynomial of $K$ is not 1 then
$l \not \in G^{(3)}$. This was first proved by T. Cochran in Proposition 12.5 of \cite{Co}.
We now state our main monotonicity theorem for closed 3-manifolds.
\begin{theorem}\label{closed}Let $G$ be the fundamental group of a closed, orientable, connected 3-manifold and $(\phi_{_\Lambda},\phi_{_\Gamma},
\psi)$ be an admissible triple for $G$. If $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is not initial then
\begin{equation}\bar{\delta}_{\Lambda}(\psi) \geq \bar{\delta}_{\Gamma}(\psi)\end{equation} otherwise
\begin{equation} \bar{\delta}_{\Lambda}(\psi) \geq \bar{\delta}_{\Gamma}(\psi)-2.
\end{equation}
\end{theorem}
As we saw for finitely presented groups with deficiency 1 (Corollary~\ref{2complexn}), for groups of closed 3-manifolds, when $n \geq 1$, the
$\bar{\delta}_n$ are a nondecreasing function of $n$.
\begin{corollary}\label{closedn}Let $G$ be the fundamental group of a closed, orientable, connected 3-manifold. If
$\beta_1(G)\geq 2$ then
\[\bar{\delta}_0 \leq \bar{\delta}_1 \leq \cdots \leq \bar{\delta}_n \leq \cdots.
\]
If $\beta_1(G)=1$ and $\psi$ is
a generator of $H^1(G;\mathbb{Z})$ then $\bar{\delta}_0(\psi) -2 \leq \bar{\delta}_1(\psi) \leq \cdots \leq \bar{\delta}_n(\psi) \leq \cdots$.
\end{corollary}
\begin{proof}[Proof of Theorem~\ref{closed}]
Let $X$ be a closed, orientable, connected 3-manifold with $G=\pi_1(X)$. We will need the following lemma which is an extension of a lemma of C. Lescop \cite{L}.
\begin{lemma}\label{closedX}
Let $X$ be a closed, connected, orientable 3-manifold and $\psi:\pi_1(X) \twoheadrightarrow \mathbb{Z}=\left<t\right>$ be a surjective map. Then $X$ can be presented
as surgery on a framed link, $L=\amalg_{i=1}^{\beta_1(X)} L_i$, in a rational homology sphere $R$ such that
\begin{enumerate} \item the components of $L$ are null-homologous in $R$ \label{nullhom}
\item the surgery coefficients on $L_i$ are all 0 \label{zero}
\item \label{link} $\operatorname{lk}(L_i,L_j)=0$ for $i \neq j$ and
\item $\psi(\mu_i)=t^{\delta_{1i}}$ where $\mu_i$ is a meridian of $L_i$ and $\delta_{ij}$ is the Kronecker delta.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma]By Lemma 5.1.1 in \cite{L}, $X$ can be obtained by surgery on a framed link $L$ with
$\beta_1(X)$ components such that (\ref{zero}), (\ref{link}), and (\ref{nullhom}) are satisfied. Now we note that
any automorphism of $H_1(X)/\{\mathbb{Z}\text{-torsion}\}\cong \mathbb{Z}^{\beta_1(X)}$ corresponds to a sequence of handleslides and
reordering or reorienting of the components of $L$. Moreover, since $\psi$ is a surjective map to $\mathbb{Z}$, there exists
an automorphism of $H_1(X)/\{\mathbb{Z}\text{-torsion}\}$ that sends the first basis element to $t$ (a generator of $\mathbb{Z}$) and
the other basis elements to $t^0=1$. That is, we can do a sequence of handleslides (along with possible reorienting or
reordering) to get a new link $L^{\prime}$ for which the meridian of the first component maps to $t$ and the other
meridians map to $1$. Since the original surgery coefficients and linking numbers of $L$ were $0$, the same is true
for $L^{\prime}$. We also note that the components of $L^{\prime}$ are null-homologous in $R$.
\end{proof}
By Lemma~\ref{closedX} above,
$X$ can be presented as surgery on a framed link $L=\amalg_{i=1}^{\beta_1(X)} L_i$,
in a rational homology sphere $R$ such that the first component, $L_1$, has surgery coefficient $0$,
$\operatorname{lk}(L_1,L_i)=0$ for $i \neq 1$ and
$\psi(\mu_1)=t$ where $\mu_1$ is a meridian of $L_1$.
Let $l$ be the longitude of $L_1$ and
$Y^{\prime}$ be the space obtained by performing 0-surgery in $R$ on the components $L_2, \dots , L_k$. Let $Y=Y^{\prime}-N(L_1)$ where $N(L_1)$
is an open neighborhood of $L_1$ in $Y^{\prime}$.
Finally, let
$X^{\prime}=Y\cup_l D^2$ be the space obtained by adding a 2-disk to $Y$ that identifies $\partial D^2$
with $l$.
After picking a basepoint in $Y$ (hence in $X$ and $X^{\prime}$), we note that the inclusion map
induces an isomorphism
$i_{\ast}:\pi_1(X^\prime) \xrightarrow{\cong} \pi_1(X)$. Thus any coefficient system $\phi_{_\Gamma}$ for $X$
induces a coefficient system for $X^{\prime}$. Moreover, if $M$ is any $\mathbb{Z}\Gamma$-module then
$H_1(X;M) \cong H_1(X^{\prime};M)$. In particular, $\operatorname{r}_{_\Gamma}(X)=\operatorname{r}_{_\Gamma}(X^{\prime})$ and
$\bar{\delta}_\Gamma^{X}(\psi) = \bar{\delta}_\Gamma^{X^{\prime}}(\psi)$ for all $\psi \in H^1(X)\cong H^1(X^{\prime})$.
Since $l$ is null-homologous in $Y$, we can identify $H^1(X^{\prime})$ and $H^1(Y)$.
We define the coefficient systems and admissible pairs for $\pi_1(Y)$ by pre-composing the coefficient systems and admissible pairs for $\pi_1(X)$ with $\pi_1(Y)\rightarrow \pi_1(X)$ induced by the inclusion $Y\subset X$.
We pick the splitting $s:\mathbb{Z} \rightarrow \Gamma$ which sends $t$ to $\phi_{_\Gamma}(\mu_1)$.
Now we consider the long exact sequence of the pair $(X^\prime,Y):$
\begin{equation}\label{les_pair}
\rightarrow H_2(X^\prime,Y;\mathbb{K}_\Gamma[t^{\pm 1}]) \xrightarrow{\partial_2} H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}]) \rightarrow
H_1(X^\prime;\mathbb{K}_\Gamma[t^{\pm 1}]) \rightarrow 0.
\end{equation}
As a $\mathbb{K}_\Gamma[t^{\pm 1}]$-module $H_2(X^\prime,Y;\mathbb{K}_\Gamma[t^{\pm 1}]) \cong \mathbb{K}_\Gamma[t^{\pm 1}]$
generated by the relative 2-cell $\alpha$.
Hence as a $\mathbb{K}_\Gamma$-module, $H_2(X^\prime,Y;\mathbb{K}_\Gamma[t^{\pm 1}])$ is an infinitely generated free module,
generated by $\alpha t^k$ for $k \in \mathbb{Z}$. Since the 2-cell is attached along $l$, we have $\partial \alpha = [l]$.
We note that $l$ and $\mu_1$ live on $\partial N(L_1)$, hence $[l,\mu_1]=1 \in \pi_1(Y)$. Thus, $[l] (t-1)=0$
in $H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}])$. Equivalently, $[l]=[l] t^k$ for all $k$; hence the image of $\partial_2$, as a
$\mathbb{K}_\Gamma$-module, has dimension at most one and is generated by $[l]$.
Using the same argument as in the first paragraph of the proof of Theorem~\ref{2complex}, we can assume that $\operatorname{r}_{_\Gamma}(X)=0$ and
$\operatorname{rank}_\Gamma H_2(X;\mathcal{K}_\Gamma)=0$. Since $[l]$ is $t-1$ torsion, the $\partial_2$ map in the long exact sequence
of the pair $(X^\prime,Y)$ with coefficients in $\mathcal{K}_\Gamma$ is $0$.
Since $\operatorname{r}_{_\Gamma}(X^{\prime})=\operatorname{r}_{_\Gamma}(X)=0$, we see that $\operatorname{r}_{_\Gamma}(Y)=0$.
By the Theorem in \cite{Ha2}, $\operatorname{r}_{_\Lambda}(Y)=0$.
Thus, $H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}])$ and
$H_1(X^\prime;\mathbb{K}_\Gamma[t^{\pm 1}])$ are finitely generated right $\mathbb{K}_\Gamma$-modules of dimensions $\bar{\delta}^Y_\Gamma(\psi)$ and
$\bar{\delta}^{X^\prime}_\Gamma(\psi)=\bar{\delta}^{X}_\Gamma(\psi)$ respectively.
Since $\operatorname{r}_{_\Gamma}(X)=0$, if $(\phi_{_\Gamma},
\psi)$ is not initial then $[l]\neq 0$ in $H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}])$ by Lemma~\ref{whenzero}. Also, we note that if $(\phi_{_\Gamma},
$\psi)$ is initial, then $\Gamma=\mathbb{Z}$ so all of the meridians except $\mu_1$ lift to the $\Gamma$-cover. Moreover, since $l$ is
null-homologous in $Y$, it bounds a surface $F$ in $Y$. $F$ lifts to the $\Gamma$-cover, which implies that $l=0$ in
$H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}])$. Thus, $(\phi_{_\Gamma},\psi)$ is initial if and only if $[l]=0$ in
$H_1(Y;\mathbb{K}_\Gamma[t^{\pm 1}])$. Recall that $(\phi_{_\Lambda},\psi)$ is never initial.
Suppose $(\phi_{_\Lambda},\phi_{_\Gamma},
\psi)$ is not initial. Then $(\phi_{_\Gamma},
\psi)$ is not initial hence $\bar{\delta}^Y_\Gamma(\psi)=\bar{\delta}^{X^\prime}_\Gamma(\psi)+1$ (similarly for $\Lambda$). Since $Y$ is
homotopy equivalent to a 2-complex with $\chi(Y)=0$, $\bar{\delta}^Y_\Lambda(\psi) \geq \bar{\delta}^Y_\Gamma(\psi)$ by Theorem~\ref{2complex}.
Therefore
\[
\bar{\delta}_\Lambda^{^X}(\psi)=\bar{\delta}_\Lambda^{^{X^\prime}}(\psi)=\bar{\delta}^{^Y}_\Lambda(\psi) -1 \geq \bar{\delta}^{^Y}_\Gamma(\psi) -1 = \bar{\delta}_\Gamma^{^{X^\prime}}(\psi) = \bar{\delta}_\Gamma^{^X}(\psi).
\]
Now suppose $(\phi_{_\Lambda},\phi_{_\Gamma},
\psi)$ is initial. Then $(\phi_{_\Gamma},
\psi)$ is initial so $\bar{\delta}^Y_\Gamma(\psi)=\bar{\delta}^{X^\prime}_\Gamma(\psi)$ but $\bar{\delta}^Y_\Lambda(\psi)=\bar{\delta}^{X^\prime}_\Lambda(\psi)+1$.
Since $Y$ is homotopy equivalent to
a 2-complex with $\chi(Y)=0$, $\bar{\delta}^Y_\Lambda(\psi) \geq \bar{\delta}^Y_\Gamma(\psi)-1$ by Theorem~\ref{2complex}.
Therefore
\[
\bar{\delta}_\Lambda^{^X}(\psi)=\bar{\delta}_\Lambda^{^{X^\prime}}(\psi)=\bar{\delta}^{^Y}_\Lambda(\psi) -1 \geq (\bar{\delta}^{^Y}_\Gamma(\psi) -1) -1 = \bar{\delta}_\Gamma^{^{X^\prime}}(\psi) -2 = \bar{\delta}_\Gamma^{^X}(\psi)-2.
\]
\end{proof}
We point out that there are other higher-order degrees, $\delta_n(\psi)$, for a CW-complex $X$ defined in terms of the
$\mathbb{K}_n[t^{\pm1}]$-torsion submodule of $H_1(X;\mathbb{K}_n[t^{\pm1}])$ (see \cite{Ha1}). These are equal to $\bar{\delta}_n(\psi)$ when $\operatorname{r}_n(X)= 0$. It would be very interesting to understand
the monotonicity behavior of these $\delta_n(\psi)$. In particular, for $n\geq1$ are the $\delta_n(\psi)$ a nondecreasing
function of $n$?
\section{Applications}
\subsection{Deficiency of a group and obstructions to a group being the fundamental group of a 3-manifold}
Recall that the higher-order ranks and degrees of a CW-complex $X$ only depend on the fundamental group of $X$. Hence it
makes sense to talk about the higher-order ranks and degrees of a finitely presented group. One consequence of the
theorems in the previous section is that the higher-order degrees give obstructions to a finitely presented group having
positive deficiency or being the fundamental group of a 3-manifold.
\begin{proposition}\label{defG}Let $G$ be a finitely presented group and $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ be an
admissible triple for $G$.
\begin{enumerate}\item Suppose $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is not initial.
If $\bar{\delta}_\Lambda(\psi) < \bar{\delta}_\Gamma(\psi)$ then $\operatorname{def}(G) \leq 0$ and $G$ cannot be the fundamental group of
a compact, orientable 3-manifold (with or without boundary).
\item Suppose $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is initial.
If $\bar{\delta}_\Lambda(\psi) < \bar{\delta}_\Gamma(\psi) - 1$ then $\operatorname{def}(G) \leq 0$ and $G$ cannot be the fundamental group
of a compact, orientable 3-manifold with at least one boundary component which is not a 2-sphere. In addition, if
$\bar{\delta}_\Lambda(\psi) < \bar{\delta}_\Gamma(\psi) - 2$ then $G$ cannot be the fundamental group of a compact, orientable 3-manifold
(with or without boundary). \end{enumerate}
\end{proposition}
\begin{proof}
First, suppose that $\operatorname{def}(G)\geq 1$. Then, by Theorem~\ref{2complex}, $\bar{\delta}_\Lambda(\psi) \geq \bar{\delta}_\Gamma(\psi)$ when $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is not
initial and $\bar{\delta}_\Lambda(\psi) \geq \bar{\delta}_\Gamma(\psi)-1$ when $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is initial.
Now, suppose that $G$ is the fundamental group of a closed, orientable, 3-manifold $X$. Then, by Theorem~\ref{closed},
$\bar{\delta}_\Lambda(\psi) \geq \bar{\delta}_\Gamma(\psi)$ when $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is not initial and
$\bar{\delta}_\Lambda(\psi) \geq \bar{\delta}_\Gamma(\psi)-2$ when $(\phi_{_\Lambda},\phi_{_\Gamma},\psi)$ is initial.
Finally, suppose $G$ is the fundamental group of a connected, orientable 3-manifold with boundary. If at least 1 boundary component is not a 2-sphere then
$\operatorname{def}(G)\geq 1$ in which case the paragraph above applies. Moreover, if all the boundary components of $X$ are 2-spheres then $G$ is the
fundamental group of a closed 3-manifold.
\end{proof}
We point out that Proposition~\ref{defG} is sometimes very easy to use computationally since the groups $\Lambda$ and $\Gamma$ can be taken to be finitely generated free abelian groups. Using Proposition~\ref{defG}, one can easily prove the well-known fact that $\mathbb{Z}^m$ cannot be the group of a compact 3-manifold when $m\geq4$.
\begin{example}\label{Zm} Consider the initial triple $(\text{id}_{\mathbb{Z}^m},\psi,\psi)$ for $\mathbb{Z}^m$ where $\psi:\mathbb{Z}^m \twoheadrightarrow \mathbb{Z}$ is any surjective map. Since $\ker(\psi)\cong \mathbb{Z}^{m-1}$, we see that $\bar{\delta}_\mathbb{Z}(\psi)=m-1$. Moreover, since $\ker(\text{id}_{\mathbb{Z}^m})=0$, we see that $\bar{\delta}_{\mathbb{Z}^m}(\psi)=0$. Therefore, if $m\geq 4$,
$0=\bar{\delta}_{\mathbb{Z}^m}(\psi) < \bar{\delta}_\mathbb{Z}(\psi) -2 = m-3$. Thus, by Proposition~\ref{defG}, for $m\geq4$, $\operatorname{def}(\mathbb{Z}^m)\leq 0$ and $\mathbb{Z}^m$ cannot be the fundamental group of any compact, connected, orientable 3-manifold.
\end{example}
If we consider the case when the groups $\Gamma$ and $\Lambda$ are quotients of $G$ by the terms of its rational derived series we have the following immediate corollary to Proposition~\ref{defG}.
\begin{corollary}\label{specific_ob}Let $G$ be a finitely presented group.
\begin{enumerate}\item Suppose $\beta_1(G)\geq 2$. If there exists a $\psi \in H^1(G;\mathbb{Z})$, and $m,n\in \mathbb{Z}$ such that $n> m\geq 0$ and $\bar{\delta}_{n}(\psi)<\bar{\delta}_m(\psi)$ then $\operatorname{def}(G)\leq 0$ and $G$ cannot be the fundamental group of a compact, orientable 3-manifold.
\item Suppose $\beta_1(G)=1$ and $\psi$ is a generator of $H^1(G;\mathbb{Z})$.
\begin{enumerate}\item If there exists $m,n\in \mathbb{Z}$ such that $n> m\geq 1$ and $\bar{\delta}_{n}(\psi)<\bar{\delta}_m(\psi)$ then $\operatorname{def}(G)\leq 0$ and $G$ cannot be the fundamental group of a compact, orientable 3-manifold.
\item If there exists an $n\in \mathbb{Z}$ such that $n\geq 1$ and $\bar{\delta}_n(\psi)<\bar{\delta}_0(\psi)-1$ then $\operatorname{def}(G) \leq 0$ and $G$ cannot be the fundamental group of a compact, orientable 3-manifold with at least one boundary component which is not a 2-sphere.
In addition, if $\bar{\delta}_n(\psi) < \bar{\delta}_0(\psi) - 2$ then $G$ cannot be the fundamental group of a compact, orientable 3-manifold. \end{enumerate}
\end{enumerate}
\end{corollary}
\begin{example} \label{ex}
We saw that the examples $G_{n,g}$ in Proposition~\ref{remark2} satisfy $\bar{\delta}_1(\psi) < \bar{\delta}_0(\psi) - 1 $ when $n=1$
and $g=1$, $\bar{\delta}_1(\psi) < \bar{\delta}_0(\psi) - 2 $ when $n=1$ and $g \geq 2$, and
$\bar{\delta}_n(\psi) < \bar{\delta}_{n-1}(\psi)$ when $n \geq 2$.
Thus, by Corollary~\ref{specific_ob}, for each $n\geq1$ and $g\geq1$ the groups $G_{n,g}$ in Proposition~\ref{remark2} have
$\operatorname{def}(G_{n,g})\leq 0$. Moreover, except in the case that $g=1$ and $n=1$, for each $n\geq 1$ and $g\geq1$, the group
$G_{n,g}$ cannot be the fundamental group of a compact, orientable 3-manifold (with or without boundary). The group
$G_{1,1}$ cannot be the fundamental group of a compact, orientable 3-manifold with at least one boundary
component which is not a 2-sphere.
\end{example}
\subsection{Obstructions to $X \times S^1$ admitting a symplectic structure}
We will show that a consequence of Corollaries~\ref{2complexn} and \ref{closedn} is that the $\bar{\delta}_n(\psi)$ give obstructions to a 4-manifold
of the form $X \times S^1$ admitting a symplectic structure.
It is well known that if $X$ is a closed
3-manifold that fibers over $S^1$ then $X \times S^1$ admits a
symplectic structure. Taubes asks whether the converse is true.
\begin{question}
[Taubes]\label{taubes}Let $X$ be a 3-manifold such that $X\times
S^{1}$ admits a symplectic structure. Does $X$ admit a fibration
over $S^{1}$?
\end{question}
In \cite{Ha1}, we showed that if $X$ is a 3-manifold that fibers over $S^1$ with $\beta_1(X)\geq2$ and $\psi$ representing
the fibration, then $\bar{\delta}_n(\psi)$ is equal to the Thurston norm $\|\psi\|_T$ of $\psi$. This generalized the work of McMullen,
who showed that the Alexander norm gives a lower bound for the Thurston norm which is an equality when $\psi$ represents
a fibration.
\begin{theorem}[\cite{Ha1}]
\label{delbarthm}Let $X$ be a compact, orientable 3-manifold
(possibly with boundary). For all $\psi\in H^{1}(X;\mathbb{Z}) $ and $n\geq0$
\[
\bar{\delta}_{n}(\psi) \leq \| \psi \|_T
\]
except for the case when $\beta_{1} (X) =1$, $n=0$,
$X\ncong S^1 \times S^2$, and $X \ncong S^1 \times D^2$. In this
case, $\bar{\delta}_{0} (\psi) \leq \|
\psi\|_{T}+1+\beta_{3}\left( X\right) $ whenever $\psi$ is
a generator of $H^{1}\left( X;\mathbb{Z}\right) \cong\mathbb{Z}$.
Moreover, equality holds in all cases when $\psi:\pi_{1}(
X) \twoheadrightarrow \mathbb{Z}$ can be represented by a
fibration $X\rightarrow S^{1}$.
\end{theorem}
Using the work of Meng-Taubes and Kronheimer-Mrowka, S. Vidussi
\cite{Vi} has recently given a proof of McMullen's inequality (that the Alexander norm gives a lower bound for the
Thurston norm of a 3-manifold)
using Seiberg-Witten theory. This generalizes the work of
Kronheimer \cite{K2} who dealt with the case that $X$ is the
0-surgery on a knot. Moreover, Vidussi shows that if $X\times
S^{1}$ admits a symplectic structure (and $\beta_1\left(X\right)
\geq 2$) then the Alexander and Thurston norms of $X$ coincide on
a cone over a face of the Thurston norm ball of $X$, supporting a positive answer to Question~\ref{taubes}.
\begin{theorem}
[Kronheimer, Vidussi \cite{K2,V,Vi}]\label{vid} Let $X$ be a closed, irreducible
3-manifold such that $X \times S^1$ admits a symplectic structure.
If $\beta_1(X) \geq 2$ there exists a $\psi \in
H^1(X;\mathbb{Z})$ such that $\|\psi\|_A =
\|\psi\|_T$. If $\beta _{1}( X) =1$ then for
any generator $\psi$ of $H^1(X;\mathbb{Z})$,
$\|\psi\|_A = \|\psi\|_T + 2$.
\end{theorem}
In \cite[Theorem~12.5]{Ha1}, we used Vidussi's result and our result that the $\bar{\delta}_n$ give lower bounds for the Thurston norm \cite[Theorem~10.1]{Ha1} to show that the higher-order degrees of a 3-manifold $X$ give algebraic obstructions to a 4-manifold of the form $X\times S^1$ admitting a symplectic structure. As a result, we were able to show that the closed, irreducible 3-manifolds (with $\beta_1(X)\geq2$) in Theorem 11.1 of \cite{Ha1} have $\bar{\delta}_0 < \bar{\delta}_1 < \cdots <\bar{\delta}_n$ hence cannot admit a symplectic structure.
However, it was still unknown at that time whether Vidussi's theorem holds if one replaces the Alexander norm with $\bar{\delta}_n$. In \cite[Conjecture~12.7]{Ha1}, we conjectured this to be true. Since the Alexander norm is equal to $\bar{\delta}_0$, Vidussi's theorem gives us the case when $n=0$.
We will show that Conjecture~12.7 of \cite{Ha1} is true when $n\geq1$. This is theoretically important since it gives more evidence that the only symplectic 4-manifolds of the form $X\times S^1$ are such that $X$ fibers over $S^1$, supporting a positive answer to the question of Taubes.
\begin{theorem} \label{symplectic}
Let $X$ be a closed, orientable, irreducible
3-manifold such that $X \times S^1$ admits a symplectic structure.
If $\beta_1(X) \geq 2$ there exists a $\psi \in
H^1(X;\mathbb{Z})$ such that \[\bar{\delta}_0(\psi)=\bar{\delta}_1(\psi)= \cdots =
\bar{\delta}_n(\psi) = \cdots =\|\psi\|_T.
\] If $\beta _{1}( X) =1$ then for
any generator $\psi$ of $H^1(X;\mathbb{Z})$,
\[
\bar{\delta}_0(\psi)-2=\bar{\delta}_1(\psi)=
\cdots =
\bar{\delta}_n(\psi) = \cdots =\|\psi\|_T.
\]
\end{theorem}
\begin{proof}If $X$ is a closed, orientable, irreducible, 3-manifold with $\beta_1(X)\geq 2$ such that $X \times S^1$
admits a symplectic structure then by Theorem~\ref{vid} there exists a $\psi \in
H^1(X;\mathbb{Z})$ such that $\bar{\delta}_0(\psi)=\|\psi\|_A =\|\psi\|_T$.
By Corollary~\ref{closedn} and Theorem~\ref{delbarthm}, $\bar{\delta}_0(\psi) \leq \bar{\delta}_n(\psi) \leq \|\psi\|_T$ hence
for all $n\geq0$, $\bar{\delta}_0(\psi)=\bar{\delta}_n(\psi) = \|\psi\|_T$. Similarly, if $\beta_1(X)=1$ then for $\psi$ a generator of
$H^1(X;\mathbb{Z})$, $\bar{\delta}_0(\psi)-2=\|\psi\|_T$.
Since $S^1 \times S^2$ is not irreducible, for $n \geq 1$ we have $\bar{\delta}_0(\psi) -2 \leq \bar{\delta}_n(\psi) \leq \|\psi\|_T$ hence
$\bar{\delta}_0(\psi)-2=\bar{\delta}_n(\psi) = \|\psi\|_T$.
\end{proof}
\subsection{Behavior of the Thurston norm under a continuous map which is surjective on $\pi_1$}
An important problem in 3-manifold topology is to determine the behavior of the Thurston norm under continuous maps $f : X \rightarrow Y$ between 3-manifolds. It was shown by D. Gabai in \cite{Ga} that if $f$ is a
p-fold covering map then $||f^{\ast}(\psi)||_T=p \, ||\psi||_T$. Moreover, Gabai showed that if $f$ is a degree $d$ map then $||f^{\ast}(\psi)||_T \geq |d| \, ||\psi||_T$.
These statements were first conjectured by Thurston in his original paper on the Thurston norm \cite[Conjecture 2(b)]{Th}. We sketch a proof of the latter, since it does not seem to appear explicitly in \cite{Ga}.
\begin{theorem}[Gabai]Let $f: X \rightarrow Y$ be a degree $d$ map between closed, orientable, 3-manifolds.
Then for each $\psi \in H^1(Y;\mathbb{Z})$, $||f^{\ast}(\psi)||_T \geq |d| \, ||\psi||_T$.
\end{theorem}
\begin{proof}
Let $\psi \in H^1(Y;\mathbb{Z})$ and $F$ be an embedded (possibly disconnected) surface in $X$ such that $[F]$ is
dual to $f^{\ast}(\psi)$ and $\chi_{-}(F)=||f^{\ast}(\psi)||_T$.
Since the following diagram commutes \cite[Theorem 67.2]{Mu}, $[f(F)]=f_{\ast}([F]) = d (\psi \cap \Gamma_Y)$.
\begin{diagram}H^1(X;\mathbb{Z}) & \rTo^{\cap \Gamma_X}_{\cong} & H_2(X;\mathbb{Z}) \\
\uTo_{f^{\ast}} & & \dTo_{f_{\ast}} \\
H^1(Y;\mathbb{Z}) & \rTo^{\cap d\Gamma_Y} & H_2(Y;\mathbb{Z})
\end{diagram}
By Corollary 6.18 of \cite{Ga}, $||-||_T = x_s(-)$ where $x_s$ is the singular norm.
Hence $|d|\,||\psi ||_T \leq \chi_{-}(f(F)) \leq \chi_{-}(F) = ||f^{\ast}(\psi)||_T$.
\end{proof}
Recall that a degree one map is surjective on $\pi_1$. Hence one could ask if the existence of a map $f: X \rightarrow Y$ between compact, orientable 3-manifolds
that is surjective on $\pi_1$ suffices to guarantee that $||f^{\ast}(\psi)||_T \geq ||\psi||_T$ for all
$\psi \in H^1(Y;\mathbb{Z})$. We will give some (algebraic) conditions on $X$ and $Y$ (i.e. that do not depend on the map $f$)
that will guarantee $||f^{\ast}(\psi)||_T \geq ||\psi||_T$.
This question was first asked by J. Simon (see Kirby's Problem List \cite[Question 1.12(b)]{Ki}) for knot complements.
Recall that if $K$ is a nontrivial knot in $S^3$ then $H^1(S^3\setminus K;\mathbb{Z})\cong \mathbb{Z}$ generated by
$\psi$ and $||\psi||_T = 2 g(K)-1$ where $g(K)$ is the genus of $K$.
\begin{jsimon}[J. Simon] If $L$ and $K$ are knots in $S^3$ and $f: S^3\setminus L \rightarrow S^3 \setminus K$ is surjective
on $\pi_1$, is $g(L) \geq g(K)$?
\end{jsimon}
The answer to the above question is known to be yes when $\bar{\delta}_0(K)=2 g(K)$. We strengthen this result to the case when $\bar{\delta}_{n}(K)=2 g(K)-1$ in Corollary~\ref{newgenresult}. By $\bar{\delta}_n(K)$ we mean $\bar{\delta}_n(\psi)$ for a generator $\psi$ of $H^1(S^3\setminus K;\mathbb{Z})\cong \mathbb{Z}$. Note that by Theorems~5.4 and~7.1 of \cite{Co}, $$\bar{\delta}_0(K)-1\leq \bar{\delta}_1(K) \leq \cdots \leq \bar{\delta}_n(K) \leq \cdots \leq 2 g(K)-1.$$ Moreover, by Corollary~7.4 of \cite{Co}, there exist knots $K$ for which $\bar{\delta}_0(K)-1 < \bar{\delta}_1(K) < \cdots < \bar{\delta}_n(K)$. Therefore, the result in Corollary~\ref{newgenresult} is a strict generalization of the previously known result.
Before we state the results concerning the behavior of the Thurston norm under a surjective map on $\pi_1$, we state and prove the following theorem which describes the behavior of $\bar{\delta}_n$ under a surjective map on $\pi_1$. We only consider the case that $\operatorname{def}(G)=1$ since if $\operatorname{def}(G)\geq 2$ then
by Remark~\ref{remark1}, $\operatorname{r}_0(G)\geq 1$.
\begin{theorem}\label{group_greater}Let $G$ be either $(1)$ a finitely presented group with $\operatorname{def}(G)=1$ or $(2)$
the fundamental group of a closed, connected, orientable 3-manifold. If $P$ is a group with $\beta_1(P)=\beta_1(G)$, $\operatorname{r}_0(G)=0$, and
$\rho : G \twoheadrightarrow P$ is a surjective map
then for each $n \geq 0$ and $\psi \in H^1(P;\mathbb{Z})$, $$\bar{\delta}_n(\rho^{\ast}(\psi)) \geq \bar{\delta}_n(\psi).$$
\end{theorem}
\begin{proof} We will first show that the theorem holds for primitive elements of $H^1(P;\mathbb{Z})$. It will then follow for arbitrary elements of
$H^1(P;\mathbb{Z})$ since for any
$k\in \mathbb{Z}$,
$\rho^{\ast}(k \psi)=k \rho^{\ast}(\psi)$, $\bar{\delta}_n(k \rho^{\ast}(\psi))= |k| \bar{\delta}_n(\rho^{\ast}(\psi))$ and $\bar{\delta}_n(k \psi)= |k| \bar{\delta}_n(\psi)$. Let $\psi$ be a primitive element of $H^1(P;\mathbb{Z})$, $G_n = G/G_r^{(n+1)}$, and $P_n = P/P_r^{(n+1)}$.
For each $n \geq 0$, we have two coefficient systems for $G$, $\phi^1_n: G \twoheadrightarrow G_n$ and $\phi^2_n: G \twoheadrightarrow P_n$, defined by $\phi^1_n(g)=[g]$ and $\phi^2_n(g)=[\rho(g)]$. Note that $\rho$ induces a surjection $\overline{\rho}:G_n \twoheadrightarrow P_n$. Moreover, $\overline{\rho}$ has non-trivial kernel if and only if
$(\phi_n^1,\phi_n^2,\rho^{\ast}(\psi))$ is an admissible triple.
If $\overline{\rho}$ is an isomorphism, then $\bar{\delta}_{G_n}(\rho^{\ast}(\psi))=\bar{\delta}_{P_n}(\rho^{\ast}(\psi))$. Suppose $\overline{\rho}$ is not an isomorphism.
We remark that $(\phi_n^1,\phi_n^2,\rho^{\ast}(\psi))$ is initial if and only if $\beta_1(P)=1$ and $n=0$. However, since $\rho$ is surjective and $\beta_1(G)=\beta_1(P)$, we have $\overline{\rho}:G_0 \xrightarrow{\cong} P_0$. Thus $(\phi_n^1,\phi_n^2,\rho^{\ast}(\psi))$ is never initial and hence,
by Theorems \ref{2complex} and \ref{closed}, $\bar{\delta}_{G_n}(\rho^{\ast}(\psi))\geq\bar{\delta}_{P_n}(\rho^{\ast}(\psi))$.
To finish the proof, we will show that $\bar{\delta}_{P_n}(\rho^{\ast}(\psi))=\bar{\delta}_{P_n}(\psi)$. Since $\rho$ is surjective, we have a surjective map
$$\rho_{\ast}:H_1(G;\mathbb{Z} P_n)=\frac{\ker(\phi^2_n)}{[\ker(\phi^2_n),\ker(\phi^2_n)]} \twoheadrightarrow \frac{P_r^{(n+1)}}{[P_r^{(n+1)},P_r^{(n+1)}]}=H_1(P;\mathbb{Z} P_n).$$
Moreover, since $\mathbb{K}^P_n[t^{\pm 1}]$
is a flat (right) $\mathbb{Z} P_n$-module, $\rho_{\ast}:H_1(G;\mathbb{K}^P_n[t^{\pm 1}]) \twoheadrightarrow H_1(P;\mathbb{K}^P_n[t^{\pm 1}])$ is surjective.
The condition $\operatorname{r}_0(G)=0$ implies that both of these modules are torsion \cite{Ha2} hence $\operatorname{rank}_{\mathbb{K}^P_n} H_1(G;\mathbb{K}^P_n[t^{\pm 1}]) \geq \operatorname{rank}_{\mathbb{K}^P_n} H_1(P;\mathbb{K}^P_n[t^{\pm 1}])$ which completes the proof.
\end{proof}
\begin{corollary}\label{thurston_greater}Suppose there exists an epimorphism $\rho: \pi_1(X) \twoheadrightarrow \pi_1(Y)$, where $X$ and $Y$ are compact, connected,
orientable $3$-manifolds, with toroidal or empty boundaries, such that $\beta_1(X)=\beta_1(Y)$ and $\operatorname{r}_0(X)=0$.
Let $\psi \in H^1(\pi_1(Y);\mathbb{Z})$. If any of the following
conditions is satisfied
\begin{description}
\item[a] $\beta_1(Y) \geq 2$ and $\bar{\delta}_n(\psi)=||\psi||_T$ for some $n \geq 0$
\item[b] $\beta_1(Y)=1$ and $\bar{\delta}_n(\psi)=||\psi||_T$ for some $n \geq 1$
\item[c] $\beta_1(Y)=1$, $\beta_3(X) \leq \beta_3(Y)$, $\psi$ is primitive and $\bar{\delta}_0(\psi)=||\psi||_T+1+\beta_3(Y)$
\end{description}
then $$||\rho^{\ast}(\psi)||_T \geq ||\psi||_T.$$
\end{corollary}
\begin{proof}Let $G=\pi_1(X)$ and $P=\pi_1(Y)$. If $X$ were
$S^1 \times D^2$ or $S^1 \times S^2$ then $\pi_1(X)\cong\mathbb{Z}$ and $\pi_1(Y)\cong \mathbb{Z}$ hence $\bar{\delta}_n(\psi)=0$ for all $n$. Thus, we would be in case \textbf{b} and would have $||\psi||_T=0$ which trivially satisfies the conclusion of the corollary. Therefore, we can assume that $X$ is neither $S^1 \times D^2$ nor $S^1 \times S^2$.
We also remark that since $\operatorname{r}_0(X)=0$, $\operatorname{def}(\pi_1(X))\leq 1$ by Remark~\ref{remark1}. Thus, if \textbf{a} or \textbf{b} is satisfied then by Theorem~10.1 of \cite{Ha1},
$\bar{\delta}_n(\rho^\ast(\psi))\leq ||\rho^\ast(\psi)||_T$. Hence by Theorem~\ref{group_greater} we have
$||\rho^\ast(\psi)||_T \geq \bar{\delta}_n(\rho^\ast(\psi))\geq \bar{\delta}_n(\psi)=||\psi||_T$. If \textbf{c} is
satisfied then by Theorem~10.1 of \cite{Ha1} we have $\bar{\delta}_0(\rho^\ast(\psi))\leq ||\rho^\ast(\psi)||_T+1+\beta_3(X)$. Therefore,
$||\rho^\ast(\psi)||_T \geq \bar{\delta}_0(\rho^\ast(\psi))-1-\beta_3(X) \geq \bar{\delta}_0(\psi)-1-\beta_3(Y)=||\psi||_T$.
\end{proof}
We will now discuss the case when $G$ is the fundamental group of a knot complement.
\begin{corollary}\label{knot_greater}If $L$ and $K$ are knots in $S^3$ such that there
exists a surjective homomorphism $\rho: \pi_1(S^3\setminus L) \twoheadrightarrow \pi_1(S^3\setminus K)$ then for
each $n\geq 0$, $\bar{\delta}_n(L) \geq \bar{\delta}_n(K)$.
\end{corollary}
\begin{proof} Let $G=\pi_1(S^3\setminus L)$, $P=\pi_1(S^3\setminus K)$, $\psi_P: P \twoheadrightarrow P/P^{(1)} \cong \mathbb{Z}$ be the
abelianization map, and $\psi_G=\psi_P \circ \rho$. Since $\rho$ is surjective and $\beta_1(S^3\setminus L)=1$, $\psi_G$ is a generator of $H^1(S^3\setminus L;\mathbb{Z})$.
By \cite[Proposition~2.11 ]{COT}, $\operatorname{r}_0(G)=0$ hence by Theorem~\ref{group_greater},
$\bar{\delta}_n(L)=\bar{\delta}_n(\psi_G) \geq \bar{\delta}_n(\psi_P)=\bar{\delta}_n(K)$.
\end{proof}
\begin{corollary}\label{newgenresult}Suppose $L$ and $K$ are knots in $S^3$ such that there
exists a surjective homomorphism $\rho: \pi_1(S^3\setminus L) \twoheadrightarrow \pi_1(S^3\setminus K)$. If $\bar{\delta}_0(K)=2 g(K)$ or $\bar{\delta}_n(K)=2 g(K) -1$ for
some $n \geq 1$ then $g(L)\geq g(K)$.
\end{corollary}
This corollary follows immediately from Corollary~\ref{thurston_greater}. Rather than omit the proof entirely,
we supply one which is a simplified version of the proof of Corollary~\ref{thurston_greater}.
\begin{proof}We can assume that $L$ is not the unknot since $\rho$ is surjective. If $n\geq 1$ we have $\bar{\delta}_n(L)\leq 2 g(L)-1$ by
Theorem~7.1 of \cite{Co} or Theorem~10.1 of \cite{Ha1}. Hence, by Corollary~\ref{knot_greater},
$2 g(K)-1=\bar{\delta}_n(K) \leq \bar{\delta}_n(L)\leq 2 g(L)-1$. In the other case, we have $\bar{\delta}_0(L)\leq 2 g(L)$ so
$2 g(K)=\bar{\delta}_0(K) \leq \bar{\delta}_0(L)\leq 2 g(L)$.
\end{proof}
\section{Introduction and Set-up}
In this note, we study the behaviour of the Ricci
curvature along K\"ahler-Ricci flows over closed
K\"ahler manifolds. This relates to the general belief
(or conjecture) that the Ricci curvature should have
a uniform lower bound towards (most) finite or
infinite time singularities. A typical result, stated
informally, is the following version of Theorem \ref{th:finite}.
\begin{theorem}
For any K\"ahler-Ricci flow with finite time singularity,
if the global volume does not go to zero towards the
time of singularity, then the Ricci curvature cannot
have a uniform lower bound.
\end{theorem}
K\"ahler-Ricci flow is nothing but Ricci flow with
initial metric being K\"ahler. For this K\"ahler
condition of the initial metric, the closed smooth
manifold, $X$, would be given a complex structure,
which is fixed for the consideration. So we still
call this complex manifold by $X$. The smooth flow
metric would always be K\"ahler with respect to $X$,
as first observed by R. Hamilton. We consider $dim_
\mathbb{C} X=n\geqslant 2$.
Thus the standard form of K\"ahler-Ricci flow, a
direct transformation of Ricci flow into a flow of
metric forms, is as follows over $X\times [0, S)$ (for
some $S\in (0, \infty]$):
\begin{equation}
\label{eq:rfk}
\frac{\partial \omega(s)}{\partial s}=-2{\rm Ric}\(\omega(s)
\), ~~~~\omega(0)=\omega_0,
\end{equation}
where $\omega_0$ is the metric form of the initial
K\"ahler metric. The key advantage in our study of
this flow, compared with many earlier works, is
that we no longer force a cohomology restriction on
$[\omega_0]$ when setting up this flow (and the
equivalent scalar metric potential flow appearing
shortly). This allows more applications of this
kind of geometric flow technique and, more importantly,
makes it possible to analyze degenerate situations.
This idea first appeared in \cite{tsu} and has been
made rigorous and generalized in the works \cite{t-znote}
and \cite{song-tian} and their continuations.
We often perform the following time-metric scaling
for the above evolution equation,
$$\omega(s)=e^t\widetilde\omega_t, ~~~~s=
\frac{e^t-1}{2},$$
and arrive at an equivalent version of K\"ahler-Ricci
flow over $X\times [0, T)$ (for $T=\log(1+2S)\in (0,
\infty]$),
\begin{equation}
\label{eq:krf}
\frac{\partial \widetilde\omega_t}{\partial t}=-{\rm Ric}
(\widetilde\omega_t)-\widetilde\omega_t,
~~~~\widetilde\omega_0=\omega_0,
\end{equation}
which is somewhat more convenient as explained in the
following.
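For the reader's convenience, let us spell out this computation.
The Ricci form is invariant under scaling the metric by a constant,
so ${\rm Ric}\(\omega(s)\)={\rm Ric}\(e^t\widetilde\omega_t\)=
{\rm Ric}\(\widetilde\omega_t\)$; meanwhile $\frac{ds}{dt}=
\frac{e^t}{2}$, so the chain rule gives
$$-2{\rm Ric}\(\widetilde\omega_t\)=\frac{\partial \omega(s)}{\partial s}
=\frac{2}{e^t}\cdot\frac{\partial}{\partial t}\(e^t\widetilde\omega_t\)
=2\widetilde\omega_t+2\frac{\partial \widetilde\omega_t}{\partial t},$$
which is exactly (\ref{eq:krf}).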
To begin with, let's point out that the time scaling
ensures that these two flows either both exist up
to some finite time (with finite time singularity)
or both exist forever. In the finite time singularity
case, the metric scaling is by uniformly controlled
positive constants, and so one can say the equivalence
is stronger in that case. When they both exist forever,
the metric scaling might have a significant impact on
the flow metric. For example, in the case of $c_1(X)=0$
as studied in \cite{cao}, $\omega(s)$ converges at time
infinity to the unique Ricci-flat metric $\omega_{CY}$
in the K\"ahler class $[\omega_0]$, with the subscript
$CY$ standing for the more popular name, Calabi-Yau
metric, while $(X, \widetilde\omega_t)$ shrinks to a
metric point at time infinity. Meanwhile, in other
cases, $\omega(s)$ has volume tending to infinity while
$\widetilde\omega_t$ has uniformly controlled volume.
In principle, (\ref{eq:krf}) always has the cohomology
information $[\widetilde\omega_t]$ under uniform control,
as is clear in the following discussion.
{\it For the rest of this note, we focus on (\ref
{eq:krf}).}
Let's fix the convention that $[{\rm Ric}]=c_1(X)$; then
one can reduce (\ref{eq:krf}) to an ODE in the
cohomology space $H^{1, 1}(X, \mathbb{R}):=H^2(X;
\mathbb{R})\cap H^{1, 1}(X; \mathbb{C})$. It's easy
to solve, and we end up with
$$[\widetilde\omega_t]=-c_1(X)+e^{-t}\([\omega_0]+
c_1(X)\),$$
which traces out an interval in this vector space
with endpoints $[\omega_0]$ for $t=0$ and, formally,
$-c_1(X)$ for $t=\infty$. It's easy to see that for
(\ref{eq:rfk}), $[\omega(s)]$ evolves linearly in $s$,
which might be a simpler function, but the control is
not as uniform.
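To be explicit, taking cohomology classes in (\ref{eq:krf})
and using the convention $[{\rm Ric}]=c_1(X)$ gives the linear ODE
$$\frac{d[\widetilde\omega_t]}{dt}=-c_1(X)-[\widetilde\omega_t],
~~~~[\widetilde\omega_0]=[\omega_0],$$
and since $\frac{d}{dt}\(e^t\([\widetilde\omega_t]+c_1(X)\)\)=0$
along any solution, we get $e^t\([\widetilde\omega_t]+c_1(X)\)=
[\omega_0]+c_1(X)$, which is the formula above.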
However, for either one, the optimal existence result
in \cite{t-znote} tells us that the flow metric exists
as long as the class from the above consideration
stays inside the open cone consisting of all K\"ahler
classes (called the K\"ahler cone). In this note, we use
${\rm KC}(X)$ to denote the K\"ahler cone of $X$; its
closure (in the finite dimensional vector space
$H^{1, 1}(X, \mathbb{R})$), $\overline{{\rm KC}(X)}$,
is sometimes called the numerically effective cone, a
terminology borrowed from Algebraic Geometry. Now the
optimal existence result simply says
$$T=\sup\{t\,|\,-c_1(X)+e^{-t}\([\omega_0]+c_1(X)\)
\in {\rm KC}(X)\}$$
is the maximal time for classical solutions of
(\ref{eq:krf}). This is our definition of $T$
for the rest of this work, which takes values in $(0,
\infty]$. Also, from the result in \cite{t-znote},
we know that the singularity only happens when
the class hits the boundary of this cone, at either
finite or infinite time. In other words, $[\omega_T]
\in \overline{{\rm KC}(X)}\setminus {\rm KC}(X)$. Of course,
the interesting and in general hard question now is
to see how the non-K\"ahler feature of this class
at the boundary of ${\rm KC}(X)$ and the behaviour of the
K\"ahler-Ricci flow interact with each other.
The study of this topic, as for many other questions
regarding K\"ahler-Ricci flow, usually makes use
of the scalar version of the K\"ahler-Ricci flow.
For (\ref{eq:krf}), we define the following background
form,
$$\omega_t=-{\rm Ric}(\omega_0)+e^{-t}\(\omega_0+{\rm Ric}
(\omega_0)\)$$
compatible with the notation $\omega_0$. The whole
point is that $[\omega_t]=[\widetilde\omega_t]$, and
so $\widetilde\omega_t=\omega_t+\sqrt{-1}\partial\bar\partial u$.
It's not so hard to prove that the following scalar
evolution equation for the metric potential $u$ over
$X\times [0, T)$ is equivalent to (\ref{eq:krf}):
\begin{equation}
\label{eq:skrf}
\frac{\partial u}{\partial t}=\log\frac{\widetilde\omega^n_t}
{\omega^n_0}-u=\log\frac{(\omega_t+\sqrt{-1}\partial\bar
\partial u)^n}{\omega^n_0}-u, ~~~~u(\cdot, 0)=0.
\end{equation}
This evolution equation can be reformulated as
\begin{equation}
\label{eq:cma}
(\omega_t+\sqrt{-1}\partial\bar\partial u)^n=e^{\frac{\partial u}
{\partial t}+u}\omega^n_0,
\end{equation}
which is why sometimes we also call it the complex
Monge-Amp\`ere equation type of K\"ahler-Ricci flow.
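One direction of this equivalence is a direct computation.
Applying $\sqrt{-1}\partial\bar\partial$ to (\ref{eq:skrf}) and using
$$\sqrt{-1}\partial\bar\partial\log\frac{\widetilde\omega^n_t}{\omega^n_0}
=-{\rm Ric}(\widetilde\omega_t)+{\rm Ric}(\omega_0), ~~~~
\frac{\partial \omega_t}{\partial t}=-{\rm Ric}(\omega_0)-\omega_t,$$
we recover (\ref{eq:krf}):
$$\frac{\partial \widetilde\omega_t}{\partial t}=\frac{\partial \omega_t}
{\partial t}+\sqrt{-1}\partial\bar\partial\frac{\partial u}{\partial t}=\(-{\rm Ric}
(\omega_0)-\omega_t\)+\(-{\rm Ric}(\widetilde\omega_t)+{\rm Ric}
(\omega_0)-(\widetilde\omega_t-\omega_t)\)=-{\rm Ric}(\widetilde
\omega_t)-\widetilde\omega_t.$$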
\vspace{0.1in}
Now let's also clarify the uniform Ricci lower
bound mentioned at the beginning. It means that there
exists some constant $C$ such that
$${\rm Ric}(\widetilde\omega_t)\geqslant -C\widetilde
\omega_t$$
uniformly for $t\in [0, T)$. In general, Ricci
curvature being bounded from below provides some
control on the metric and topology of the underlying
manifold. So such a control is certainly favourable
as an assumption and interesting as a result in the
study of Ricci flow (see \cite{bing} for examples
of such assumptions).
\vspace{0.1in}
\noindent{\bf Note:} in the following, $C$ always
stands for a (positive) constant, which might
differ from place to place.
\section{Finite Time Singularity}
In this section, we consider the case of $T<\infty$.
Then clearly, $[\omega_T]\in\overline{{\rm KC}(X)}\setminus
{\rm KC}(X)$. The following is the main result.
\begin{theorem}
\label{th:finite}
Consider (\ref{eq:krf}) with finite time singularity,
i.e., $T<\infty$. If $[\omega_T]^n>0$, then the Ricci
curvature can NOT have a uniform lower bound, i.e.,
there is NO constant $D>0$ such that ${\rm Ric}(\widetilde
\omega_t)\geqslant -D\widetilde\omega_t$ uniformly
for $t\in [0, T)$.
\end{theorem}
The proof is a combination of techniques from \cite
{r-blow-up} and \cite{weak-limit}. We begin with
the general situation and finally specialize to the
case of the above theorem to prove it.
The following was observed earlier, in Remark 2.3 of
\cite{weak-limit}. It's clear that $[\omega_t]^n=
[\widetilde\omega_t]^n>0$ for $t\in [0, T)$, and we
also have $[\widetilde\omega_t]^n=[\omega_t]^n\to[\omega
_T]^n$ as $t\to T$. So $[\omega_T]^n\geqslant 0$. In
exactly the same manner, we see $[\omega_T]^{n-k}\cdot
[\omega_0]^k\geqslant 0$ for $k=1, \cdots, n-1$. Now
rewrite $\omega_t$ as follows,
$$\omega_t=\(\frac{1-e^{-t}}{1-e^{-T}}\)\omega_T+\(
\frac{e^{-t}-e^{-T}}{1-e^{-T}}\)\omega_0$$
and it is then obvious that for $t\in [0, T]$,
$$[\omega_t]^n\thicksim (T-t)^K$$
where $K$ is defined as follows,
$$n\geqslant K:=\min\{k\in\{0, 1, 2, \cdots, n\}\,|\,
[\omega_T]^{n-k}\cdot[\omega_0]^k>0\},$$
which is well-defined since $[\omega_0]^n>0$. Here
$A\thicksim B$ for $A$ and $B$ both non-negative
means $\frac{1}{C}B\leqslant A\leqslant CB$ for
some positive constant $C$.
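To see this asymptotic, write $a(t)=\frac{1-e^{-t}}{1-e^{-T}}$
and $b(t)=\frac{e^{-t}-e^{-T}}{1-e^{-T}}$, so that $b(t)
\thicksim T-t$ near $t=T$, and expand
$$[\omega_t]^n=\sum^n_{k=0}{n \choose k}\,a(t)^{n-k}b(t)^k\,
[\omega_T]^{n-k}\cdot[\omega_0]^k.$$
Every term is non-negative, the terms with $k<K$ vanish since
$[\omega_T]^{n-k}\cdot[\omega_0]^k$ is then zero (being
non-negative but not positive), and the $k=K$ term is positive,
so the sum is comparable to $(T-t)^K$ near $t=T$ and bounded
between positive constants on any $[0, T-\epsilon]$.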
\vspace{0.1in}
\noindent{\bf Note:} when $[\omega_T]^n>0$, $K=0$
and $\frac{1}{C}\leqslant [\omega_t]^n\leqslant C$
for constant $C>0$.
\vspace{0.1in}
As in Subsection 2.3 of \cite{weak-limit}, after
assuming ${\rm Ric}(\widetilde\omega_t)\geqslant -C
\widetilde\omega_t$ for some constant $C>0$,
plugging it into (\ref{eq:krf}) gives $\frac{\partial
\widetilde\omega_t}{\partial t}\leqslant C\widetilde\omega_t$.
Then noticing $T<\infty$, we arrive at $\widetilde
\omega_t\leqslant C\omega_0$ and so ${\rm Ric}(\widetilde
\omega_t)\geqslant -C\omega_0$.
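Here the metric upper bound is the standard ODE comparison:
for any fixed tangent vector $v$,
$$\frac{\partial}{\partial t}\(e^{-Ct}\widetilde\omega_t(v, \bar{v})\)
\leqslant 0,$$
so $\widetilde\omega_t\leqslant e^{Ct}\omega_0\leqslant
e^{CT}\omega_0$, where $T<\infty$ is used in the last
inequality.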
The equivalent equations (\ref{eq:krf}) and (\ref
{eq:skrf}) give
$${\rm Ric}\(\widetilde\omega_t\)=-\frac{\partial \widetilde\omega_
t}{\partial t}-\widetilde\omega_t={\rm Ric}(\omega_0)-\sqrt{-1}
\partial\bar\partial \(\frac{\partial u}{\partial t}+u\),$$
and so one arrives at
$$C\omega_0+\sqrt{-1}\partial\bar\partial\(-\frac{\partial u}{\partial t}-u\)
\geqslant 0.$$
Thus we can apply the classic result in \cite{tian-87}
to obtain a constant $\alpha>0$ depending only on $(X,
\omega_0)$ such that for $t\in [0, T)$,
$$\int_X e^{\alpha\(\sup_X(-\frac{\partial u}{\partial t}-u)+
(\frac{\partial u}{\partial t}+u)\)}\omega^n_0\leqslant C.$$
Of course, we could make sure $\alpha\leqslant 1$.
This gives
$$\inf_X\(\frac{\partial u}{\partial t}+u\)\geqslant\frac{1}
{\alpha}\log\(\frac{1}{C}\int_X e^{\alpha(\frac
{\partial u}{\partial t}+u)}\omega^n_0\).$$
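Let us spell this step out: $\sup_X\(-\frac{\partial u}{\partial t}
-u\)=-\inf_X\(\frac{\partial u}{\partial t}+u\)$ is a constant for
each fixed $t$ and so can be pulled out of the integral,
giving
$$e^{-\alpha\inf_X(\frac{\partial u}{\partial t}+u)}\int_X e^{\alpha
(\frac{\partial u}{\partial t}+u)}\omega^n_0\leqslant C,$$
and taking logarithms and dividing by $\alpha$ yields the
stated lower bound for the infimum.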
As summarized in \cite{weak-limit}, we have $\frac
{\partial u}{\partial t}+u\leqslant C$, and so
\begin{equation}
\begin{split}
\int_X e^{\alpha(\frac{\partial u}{\partial t}+u)}\omega^n_0
&= e^{\alpha C}\int_X e^{\alpha(\frac{\partial u}
{\partial t}+u-C)}\omega^n_0 \\
&\geqslant e^{\alpha C}\int_X e^{\frac{\partial u}{\partial t}+
u-C}\omega^n_0 \\
&\geqslant C\int_X e^{\frac{\partial u}{\partial t}+u}\omega^n_
0 \\
&= C[\widetilde\omega_t]^n=C[\omega_t]^n\geqslant
C(T-t)^K \nonumber
\end{split}
\end{equation}
where $\alpha\leqslant 1$ is applied for the second
step. So we conclude that for $t\in [0, T)$,
$$\inf_X\(\frac{\partial u}{\partial t}+u\)\geqslant -C+\frac
{K}{\alpha}\log(T-t)$$
and so
$$\frac{\partial u}{\partial t}+u\geqslant -C+\frac{K}{\alpha}
\log(T-t)$$
for $\alpha\in (0, 1]$ depending only on $(X,
\omega_0)$. Directly applying Maximum Principle
to (\ref{eq:skrf}), we have $u\leqslant C$ and so
\begin{equation}
\label{ieq:volume-lower}
\frac{\partial u}{\partial t}\geqslant -C+\frac{K}{\alpha}
\log(T-t).
\end{equation}
So in a way slightly different from that in \cite
{weak-limit}, we have $u\geqslant -C$.
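Indeed, since $u(\cdot, 0)=0$, integrating (\ref
{ieq:volume-lower}) in time gives, for $t\in [0, T)$,
$$u(\cdot, t)=\int^t_0\frac{\partial u}{\partial \tau}\,d\tau
\geqslant -CT-\frac{K}{\alpha}\int^T_0\bigl|\log(T-\tau)
\bigr|\,d\tau\geqslant -C,$$
using that $|\log(T-\tau)|$ is integrable over the finite
interval $[0, T]$.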
The above estimate provides a pointwise lower
bound of the volume form, $\widetilde\omega^
n_t=e^{\frac{\partial u}{\partial t}+u}\omega^n_0$.
Combining with the metric upper bound, we arrive
at the following proposition.
\begin{prop}
\label{prop:metric-finite}
Consider (\ref{eq:krf}) with singularity at some
finite time $T$. If ${\rm Ric}(\widetilde\omega_t)
\geqslant -D\widetilde\omega_t$ for some constant
$D>0$ and $t\in [0, T)$, then
$$(T-t)^\beta\omega_0\leqslant \widetilde\omega_t
\leqslant C\omega_0$$
for positive constants $\beta$ and $C$ depending
on $X$, $\omega_0$, $T$ and $D$.
\end{prop}
Now we restrict to the case of Theorem
\ref{th:finite}. In this case, $K=0$
and so the above lower bound (\ref{ieq:volume-lower})
for $\frac{\partial u}{\partial t}$ is uniform.
Hence, the metric control in
Proposition \ref{prop:metric-finite}
is also uniform. The argument in \cite
{r-blow-up} can then be used to derive a
contradiction with the existence of a finite
time singularity at $T$. Theorem \ref{th:finite}
is thus proven.
\begin{remark}
This theorem indicates that for the problem in
\cite{weak-limit} on general weak limit, when
Ricci curvature has uniform lower bound, the
discussion there is actually only for the global
volume collapsed case. This again stresses the
point that the discussion of collapsed case is
the core for the topic of weak limit in general.
It's worth pointing out that there are numerous
examples satisfying the assumption of this theorem.
For instance, we have the case discussed in \cite
{s-w}. In fact, such a manifold $X$ belongs to
the class of so-called manifolds of general type,
which in some sense form the ``majority".
The problem of finite time singularity has been
studied extensively since R. Hamilton's original
work \cite{ham4}. Combined with N. Sesum's result
in \cite{sesum} on the blow-up of Ricci curvature
at a finite time singularity of Ricci flow over a
closed manifold, the Ricci lower bound assumption
automatically gives the blow-up of the scalar
curvature, which is conjectured in general and
proven in the K\"ahler case in \cite{r-blow-up}.
Our theorem here actually shows that the
situation has to be more complicated, at least
in the global volume non-collapsed case.
\end{remark}
\section{Infinite Time Singularity}
Now we consider the infinite time singularity case,
i.e., $T=\infty$ and $[-{\rm Ric}(\omega_0)]=-c_1(X)\in
\overline{{\rm KC}(X)}\setminus {\rm KC}(X)$. When $X$ is
projective, it is then a minimal manifold. Again
we assume ${\rm Ric}(\widetilde\omega_t)\geqslant -D
\widetilde\omega_t$, which is clearly a weaker
assumption for larger $D$. Our discussion below is
separated into cases of increasing $D$, with
correspondingly weaker conclusions.
\begin{itemize}
\item $D<1$
In this case, (\ref{eq:krf}) gives $\frac{\partial \widetilde
\omega_t}{\partial t}\leqslant (D-1)\widetilde\omega_t$, and
so $\widetilde\omega_t\leqslant e^{(D-1)t}\omega_0$.
So clearly $-c_1(X)=[-{\rm Ric}(\omega_0)]=0$, and the result
in \cite{cao} can be scaled to provide a very satisfying
description of the flow metric as follows. Using the
notations in Section 1, $\omega(s)=e^t\widetilde\omega_t$
converges exponentially fast (for example, Section 9.3 in
\cite{thesis}) to the Ricci-flat K\"ahler metric $\omega_
{CY}$, where this exponentially fast convergence is with
respect to the parameter $s=\frac{e^t-1}{2}$. So as smooth
forms, ${\rm Ric}(\widetilde\omega_t)$ is $e^{-s}$-small while
$\widetilde\omega_t$ is $e^{-t}$-positive, and so the above
Ricci lower bound is obviously true for large time.
\item $D=1$
In this case, (\ref{eq:krf}) gives $\frac{\partial \widetilde
\omega_t}{\partial t}\leqslant 0$, and so $\widetilde\omega_t
\leqslant\omega_0$.
${\rm Ric}(\widetilde\omega_t)+\widetilde\omega_t\geqslant 0$
also tells us that the corresponding cohomology class
$$c_1(X)+\(-c_1(X)+e^{-t}\bigl([\omega_0]+c_1(X)\bigr)
\)=e^{-t}([\omega_0]+c_1(X))\in\overline{\rm KC}(X),$$
and so $[\omega_0]+c_1(X)\in\overline{\rm KC}(X)$, providing
a topological restriction.
The above uniform metric upper bound allows most of
the discussion in Section 2 to be carried through.
Together with ${\rm Ric}(\widetilde\omega_t)\geqslant-
\widetilde\omega_t\geqslant -\omega_0$,
$${\rm Ric}\(\widetilde\omega_t\)=-\frac{\partial \widetilde\omega_
t}{\partial t}-\widetilde\omega_t={\rm Ric}(\omega_0)-\sqrt{-1}
\partial\bar\partial \(\frac{\partial u}{\partial t}+u\),$$
will give us
$$C\omega_0+\sqrt{-1}\partial\bar\partial\(-\frac{\partial u}{\partial t}-u\)
\geqslant 0.$$
Again apply the classic result in \cite{tian-87} to get
a constant $\alpha>0$ depending only on $(X, \omega_0)$
such that for $t\in [0, \infty)$,
$$\int_X e^{\alpha\(\sup_X(-\frac{\partial u}{\partial t}-u)+
(\frac{\partial u}{\partial t}+u)\)}\omega^n_0\leqslant C.$$
Of course, we could make sure $\alpha\leqslant 1$.
This gives
$$\inf_X\(\frac{\partial u}{\partial t}+u\)\geqslant\frac{1}
{\alpha}\log\(\frac{1}{C}\int_X e^{\alpha(\frac
{\partial u}{\partial t}+u)}\omega^n_0\).$$
As summarized in \cite{weak-limit}, we still have
$\frac{\partial u}{\partial t}\leqslant C$ and $u\leqslant C$,
and so in the same way as in Section 2, we arrive
at
$$\int_X e^{\alpha(\frac{\partial u}{\partial t}+u)}\omega^n_0
\geqslant C[\omega_t]^n.$$
Repeating the same discussion at the beginning of
Section 2, we have $[-{\rm Ric}(\omega_0)]^{n-k}
\cdot [\omega_0]^k\geqslant 0$ for $k\in\{0, 1,
\cdots, n\}$, where $-{\rm Ric}(\omega_0)$ can be viewed
as $\omega_T$ for $T=\infty$. Furthermore, $[\omega_
t]^n\thicksim e^{-Kt}$ with
$$n\geqslant K:=\min\{k\in\{0, 1, 2, \cdots, n\}\,|
\, [-{\rm Ric}(\omega_0)]^{n-k}\cdot[\omega_0]^k>0\},$$
which is well-defined since $[\omega_0]^n>0$. So we
conclude that for $t\in [0, \infty)$,
$$\inf_X\(\frac{\partial u}{\partial t}+u\)\geqslant -\frac{K}
{\alpha}t-C,$$
and so
$$\frac{\partial u}{\partial t}+u\geqslant -\frac{K}{\alpha}t-
C$$
for $\alpha\in (0, 1]$ depending only on $(X, \omega_
0)$. This provides a pointwise lower bound of the
volume form, $\widetilde\omega^n_t=e^{\frac{\partial u}
{\partial t}+u}\omega^n_0$. Combining with the metric
upper bound, we arrive at the following proposition.
\begin{prop}
\label{prop:metric-infinite}
Consider (\ref{eq:krf}) with the solution existing
forever but having infinite time singularity. If
${\rm Ric}(\widetilde\omega_t)\geqslant -\widetilde\omega_
t$ for $t\in [0, \infty)$, then $[\omega_0]+c_1(X)
\in\overline{\rm KC}(X)$ and
$$e^{-\beta t}\omega_0\leqslant \widetilde\omega_t
\leqslant \omega_0$$
for some positive constant $\beta$ depending on $(X,
\omega_0)$.
\end{prop}
If we further assume $K=0$, i.e., $[-{\rm Ric}(\omega_0)
]^n>0$, then it's the global volume non-collapsed
case and the metric bound from the above proposition
is uniform. As in \cite{r-blow-up}, this implies
$[-{\rm Ric}(\omega_0)]=-c_1(X)\in {\rm KC}(X)$, which
contradicts the infinite time singularity assumption,
i.e., $[-{\rm Ric}(\omega_0)]\in \overline{{\rm KC}
(X)}\setminus {\rm KC}(X)$. Let's summarize this in the
following corollary, which is similar to Theorem
\ref{th:finite} but not as neat.
\begin{corollary}
\label{infinite}
Consider (\ref{eq:krf}) with the solution existing
forever but having infinite time singularity. If
${\rm Ric}(\widetilde\omega_t)\geqslant -\widetilde
\omega_t$ for $t\in [0, \infty)$, then $c_1(X)^n
=0$.
\end{corollary}
\item $D>1$
This is the general case. We can only have
$\widetilde\omega_t\leqslant e^{Ct}\omega_0$
for some $C>0$. Then
$$-Ce^{Ct}\omega_0\leqslant {\rm Ric}(\widetilde
\omega_t)={\rm Ric}(\omega_0)-\sqrt{-1}\partial\bar\partial\(
\frac{\partial u}{\partial t}+u\),$$
and we could only have
$$C\omega_0-\sqrt{-1}\partial\bar\partial \(e^{-Ct}\(
\frac{\partial u}{\partial t}+u\)\)\geqslant 0.$$
Applying the same discussion as before for
$e^{-Ct}(\frac{\partial u}{\partial t}+u)$ only gives
$$\frac{\partial u}{\partial t}+u\geqslant -Ce^{Ct}.$$
Hence, the metric bound corresponding to
Proposition \ref{prop:metric-infinite} is
$$e^{-Ce^{Ct}}\omega_0\leqslant\widetilde
\omega_t\leqslant e^{Ct}\omega_0,$$
which is not enough to draw any decent
conclusion.
\begin{remark}
With this general lower bound of Ricci curvature
for infinite time singularity case, when $X$ is
a projective manifold of general type, i.e.,
$(-c_1(X))^n>0$, by the results in \cite{bound-r},
one has the Ricci curvature being bounded from
both sides and $\frac{\partial u}{\partial t}+u\geqslant -C$.
Thus the metric bound can be improved to $Ce^{-Ct}
\omega_0\leqslant\widetilde\omega_t\leqslant e^{Ct}
\omega_0$, not yet good enough. Notice that by
Corollary \ref{infinite}, $D$ has to be strictly
bigger than one to have a reasonable Ricci lower
bound assumption for this case.
\end{remark}
\end{itemize}
\def\subsubsection{\@startsection{subsubsection}{3}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\bf}}
\def\paragraph{\@startsection{paragraph}{4}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\textit}}
\renewcommand\@biblabel[1]{#1}
\renewcommand\@makefntext[1]%
{\noindent\makebox[0pt][r]{\@thefnmark\,}#1}
\makeatother
\renewcommand{\figurename}{\small{Fig.}~}
\sectionfont{\large}
\subsectionfont{\normalsize}
\fancyfoot{}
\fancyfoot[LO,RE]{\vspace{-7pt}\includegraphics[height=9pt]{headers/LF}}
\fancyfoot[CO]{\vspace{-7.2pt}\hspace{12.2cm}\includegraphics{headers/RF}}
\fancyfoot[CE]{\vspace{-7.5pt}\hspace{-13.5cm}\includegraphics{headers/RF}}
\fancyfoot[RO]{\footnotesize{\sffamily{1--\pageref{LastPage} ~\textbar \hspace{2pt}\thepage}}}
\fancyfoot[LE]{\footnotesize{\sffamily{\thepage~\textbar\hspace{3.45cm} 1--\pageref{LastPage}}}}
\fancyhead{}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{1pt}
\setlength{\arrayrulewidth}{1pt}
\setlength{\columnsep}{6.5mm}
\setlength\bibsep{1pt}
\twocolumn[
\begin{@twocolumnfalse}
\noindent\LARGE{\textbf{Compositional dependence of anomalous thermal expansion in perovskite-like ABX$_3$ formates$^\dag$}}
\vspace{0.6cm}
\noindent\large{\textbf{Ines E.\ Collings,\textit{$^{a,b}$} Joshua A.\ Hill,\textit{$^{a}$} Andrew B.\ Cairns,\textit{$^{a}$} Richard I.\ Cooper,\textit{$^{a}$} Amber L.\ Thompson,\textit{$^{a}$} Julia E.\ Parker,\textit{$^{c}$} Chiu C.\ Tang,\textit{$^{c}$} and Andrew L.\ Goodwin$^{\ast}$\textit{$^{a}$} }}\vspace{0.5cm}
\noindent\textit{\small{\textbf{Received Xth XXXXXXXXXX 20XX, Accepted Xth XXXXXXXXX 20XX\newline
First published on the web Xth XXXXXXXXXX 200X}}}
\noindent \textbf{\small{DOI: 10.1039/b000000x}}
\vspace{0.6cm}
\noindent \normalsize{The compositional dependence of thermal expansion behaviour in 19 different perovskite-like metal--organic frameworks (MOFs) of composition [A$^{\textrm{I}}$][M$^{\textrm{II}}$(HCOO)$_3$] (A = alkylammonium cation; M = octahedrally-coordinated divalent metal) is studied using variable-temperature X-ray powder diffraction measurements. While all systems show essentially the same type of thermomechanical response---irrespective of their particular structural details---the magnitude of this response is shown to be a function of A$^{\textrm{I}}$ and M$^{\textrm{II}}$ cation radii, as well as the molecular anisotropy of A$^{\textrm{I}}$. Flexibility is maximised for large M$^{\textrm{II}}$ and small A$^{\textrm{I}}$, while the shape of A$^{\textrm{I}}$ has implications for the direction of framework hingeing.}
\vspace{0.5cm}
\end{@twocolumnfalse}
]
\footnotetext{\dag~Electronic Supplementary Information (ESI) available: Details of Gua-Cd single-crystal X-ray diffraction, Rietveld fits, variable-temperature lattice parameters, XBU $r$ and $\theta$ equations, and A-site cation size details.}
\footnotetext{\textit{$^{a}$~Department of Chemistry, University of Oxford, Inorganic Chemistry Laboratory, South Parks Road, Oxford OX1 3QR, U.K. Fax: +44 1865 274690; Tel: +44 1865 272137; E-mail: andrew.goodwin@chem.ox.ac.uk.}}
\footnotetext{\textit{$^{b}$~Laboratory of Crystallography, University of Bayreuth, D-95440 Bayreuth, Germany.}}
\footnotetext{\textit{$^{c}$~Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot,
Oxfordshire, OX11 0DE, U.K. }}
\section{Introduction}
The discovery that many metal--organic frameworks (MOFs) respond mechanically to external stimuli in extreme and counterintuitive ways (\emph{e.g.}\ negative thermal expansion (NTE),\cite{Zhou_2008,Wu_2008} negative compressibility,\cite{Li_2012,Ogborn:2012,Cairns_2015} breathing transitions,\cite{Serre_2007,Ferey:2009} and amorphisation\cite{Chapman_2009,Bennett_2010}) has focussed attention within the field on establishing composition--property relationships. The hope---as in much of MOF science---is to develop design strategies that will allow the targeted synthesis of MOFs with specific and optimised physical properties.\cite{Allendorf:2008,Horcajada:2008} Given the enormous structural and compositional diversity of this family, the task of establishing design rules has focussed on two separate areas: namely, establishing in turn the roles of network \emph{topology/geometry} and network \emph{composition} in governing elastic response.
With regard to the former, it is now well established that certain network connectivities intrinsically favour counterintuitive mechanical responses.\cite{Sarkisov:2014,Collings:2013,Bouessel:2014,Bennett:2015,Ortiz:2013} Perhaps the best known example is the wine-rack topology, which is associated with anomalous elastic behaviour across a large variety of chemically-distinct MOF families.\cite{Nanthamathee:2014,Hunt:2015,Cai:2014,Henke:2013,Zhou:2013,DeVries:2011,Yang:2009} For a given network topology, changes in geometry (\emph{e.g.}, network angles) can invert the type of anisotropy but the basic mechanical response remains the same.\cite{Collings:2014,Henke:2014}
With regard to network composition, variation of the chemical makeup of MOFs with the same topology will usually influence only the magnitude of mechanical response.\cite{Dubbeldam_2007,Bennett:2010,Henke:2013,Li:2014} For instance, nanoindentation studies of the perovskite-structured [(CH$_3$)$_2$NH$_2$][M(HCOO)$_3$] family (M = Mn, Co, Ni, Zn) showed a strong relationship between ligand field stabilisation energy of the transition metal dication and the mechanical stiffness of the corresponding MOF.\cite{Tan:2012} In the related systems [C(NH$_2$)$_3$][Mn(HCOO)$_3$] and [(CH$_2$)$_3$NH$_2$][Mn(HCOO)$_3$], the hydrogen-bonding strength of the extra-framework alkylammonium cation was found to direct the stiffness and flexibility of the two structures.\cite{Li:2014} In all cases, the fundamental mechanism of elastic response is unchanged by chemical substitution, with the degree of flexibility scaling inversely with strength of interaction.\cite{Ogborn:2012} Precisely the same conclusion has been reached in similar studies of inorganic frameworks. For example, an investigation of the thermal expansion response of Prussian Blue analogues and related lanthanide hexacyanocobaltates showed that cation size correlates with the magnitude of NTE.\cite{Chapman:2006b,Adak:2011,Duyker:2013} Likewise, cation size also plays an important role in the magnitudes of thermal expansion behaviour in the family of frameworks related to NaZr$_2$(PO$_4$)$_3$.\cite{Petkov:2003} Perhaps the only exceptions to these general rules are in instances where chemical substitution alters the nature of the dominant chemical interaction within a particular MOF.\cite{Nanthamathee:2014,Millange:2008}
One particularly important way in which dense MOFs can differ from ``conventional'' ceramic frameworks is in their capacity to accommodate extra-framework ions that are molecular rather than monatomic in nature. The added complexity of cation \emph{asphericity} is conceptually related to the symmetry-lowering effect of second-order Jahn-Teller instabilities and may play a key role in the lattice dynamics of high-profile hybrid frameworks such as [CH$_3$NH$_3$]PbI$_3$.\cite{Lee_2015} Yet, to the best of our knowledge, there are no systematic studies of the relationship between counterion shape and mechanical response in MOF-type systems.
In this paper, we use thermal expansion measurements to explore the relationship between framework flexibility and chemical composition across 19 MOFs drawn from the widely-studied family of alkylammonium transition-metal (``ABX$_3$'') formates. These compounds have the same general formula [A$^{\textrm{I}}$][M$^{\textrm{II}}$(HCOO)$_3$] and are structurally related to the perovskites [Fig.~\ref{fig1}]. The larger M$\ldots$M separation in formates relative to perovskites ($\sim$6\,\AA\ \emph{vs}.\ 4\,\AA) allows incorporation of molecular cations within the framework cavities. As in the perovskites there is scope for substitution on both the ``twelve-coordinate'' A-site and on the transition-metal B-site. While the cation charges on each of these sites are fixed at 1+ (A) and 2+ (B) in the formates (unlike perovskites), there is now an additional degree of freedom in terms of the molecular shape at the A-site.
\begin{figure}
\centering
\includegraphics{fig1.png}
\caption{ABX$_3$ frameworks of [C(NH$_2$)$_3$][Co(HCOO)$_3$] and BaTiO$_3$. The hydrogen bonding in [C(NH$_2$)$_3$][Co(HCOO)$_3$] is shown with dotted red lines. The Co$^{2+}$ and Ti$^{4+}$ coordination environments are represented by polyhedra.}
\label{fig1}
\end{figure}
Their relative structural simplicity makes these compounds ideal candidates for a composition/property study such as ours. But the broader family is also of significance from a functional materials viewpoint. ABX$_3$ formates exhibit a variety of useful properties, including ferroelectricity,\cite{Jain:2009, Sanchez-Andujar:2010,Pato-Doldan:2012} ferroelasticity,\cite{Li:2013} and even multiferroic behaviour.\cite{Wang_W:2013,Stroppa:2011,Stroppa:2013,Tian:2014b} Composition/property studies of the ferroelectric response in the A$^{\textrm{I}} = \textrm{N}$H$_4^{+}$ or (CH$_3$)$_2$NH$_2$$^{+}$ members point to the different possible effects of substitution on the alkylammonium site and the transition-metal site.\cite{Xu:2010,Xu:2011,Sanchez-Andujar:2010} Substitution at the A-site allows tuning of the ferroelectric polarisation,\cite{DiSante:2013} whereas variation in the transition metal affects the ferroelectric transition temperature $T_{\rm c}$.\cite{Pato-Doldan:2012,Shang:2014} In this context, framework flexibility may have important implications for A-site orientational ordering and/or mechanisms of accommodating the large strains induced during ferroelectric/paraelectric switching. So, beyond establishing composition--property relationships, this study has the additional relevance of exploring the role of flexibility in ferroelectric MOFs.
Our paper is arranged as follows. We begin by summarising the synthesis and characterisation techniques used in our study. In the results section that follows, we report the thermal expansion properties of the various ABX$_3$ formates we study. Using the mechanical building unit (XBU) abstraction developed in Ref.~\citenum{Ogborn:2012}, we reduce the experimental lattice parameter data we measure to two characteristic values for each system: namely, the expansivities of the framework struts and of the intra-framework angles. By comparing the magnitudes of these two parameters as a function of framework composition, we are able to establish composition/flexibility relationships for this family.
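For clarity, we note the convention assumed when expansivities are quoted below (this is the standard definition; the precise XBU $r$ and $\theta$ parameterisations in terms of the lattice parameters are given in the ESI$^\dag$): for a framework parameter $\ell$, such as an XBU strut length $r$, the linear coefficient of thermal expansion is
$$\alpha_\ell=\frac{1}{\ell}\left(\frac{\partial \ell}{\partial T}\right)_p,$$
so that NTE corresponds to $\alpha_\ell<0$.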
\section{Experimental Methods}
\subsection{Sample preparation}
The 19 structures we investigated share the composition [A$^{\textrm{I}}$][M$^{\textrm{II}}$(HCOO)$_3$], where A$^{\textrm{I}} = \textrm{C}$H$_3$NH$_3$, (CH$_3$)$_2$NH$_2$, CH$_3$CH$_2$NH$_3$, (CH$_2$)$_3$NH$_2$, C(NH$_2$)$_3$ and M$^{\textrm{II}} = \textrm{M}$g, Mn, Fe, Co, Ni, Cu, Zn, Cd [Fig.~\ref{A-cations}]. For simplicity, the [CH$_3$NH$_3$][M(HCOO)$_3$] structures will be referred to as MeNH$_3$--M, where M is the metal cation. Likewise we refer to [CH$_3$CH$_2$NH$_3$][M(HCOO)$_3$] as EtNH$_3$--M, [(CH$_3$)$_2$NH$_2$][M(HCOO)$_3$] as Me$_2$NH$_2$--M, [(CH$_2$)$_3$NH$_2$][M(HCOO)$_3$] as Aze--M (\emph{i.e.}, Aze = azetidinium), and [C(NH$_2$)$_3$][M(HCOO)$_3$] as Gua--M (\emph{i.e.}, Gua = guanidinium).
Our general strategy for preparing [A][M(HCOO)$_3$] samples was as follows. Methanolic solutions of HCOOH (0.5\,M, 5\,mL) and of the (usually) neutral A-site amine (0.5\,M, 5\,mL) were mixed at the bottom of a glass vial. Onto this solution methanol (2\,mL) was carefully added, followed by a methanolic solution of the transition metal nitrate (0.1\,M, 8\,mL).\cite{Wang_Z:2004} The tube was then sealed and kept undisturbed. Following precipitation of the product (often as single crystals), the solution was filtered off, and the sample was washed with methanol, dried in air, and ground. The precise species used in the methanolic amine preparation were: methylamine (Acros Organics, 2\,M solution in methanol), dimethylamine (Sigma Aldrich, 2\,M solution in methanol), ethylamine (Sigma Aldrich, 2\,M solution in methanol), azetidine (Sigma Aldrich, 98\%), and guanidine carbonate (Aldrich, 99\%).
\begin{figure}
\centering
\includegraphics{figx.png}
\caption{Compositions of the various ABX$_3$ formates used in our study. For the structural representations of the different A-site cations given at the top of the table, N atoms are shown in blue, C atoms in black, and H atoms in pink. The shaded regions of the table show the four compositional studies possible in which one or other of the A and B components is kept constant.}
\label{A-cations}
\end{figure}
\subsection{X-ray powder diffraction}
Synchrotron X-ray powder diffraction data were collected using the I11 beamline\cite{Thompson:2009} at the Diamond Light Source ($\lambda = 0.82715$\,\AA) for each of the 19 different ABX$_3$ compositions given above. Finely-ground powder samples were loaded into 0.5\,mm diameter borosilicate capillaries and mounted on the diffractometer. The Cryostream Plus from Oxford Cryosystems was used to vary the temperature between 110 and 300\,K, and diffraction patterns were collected using the Mythen2 position sensitive detector (PSD). For each sample, data collection started at 300\,K, before cooling to 110\,K at a rate of 5\,K\,min$^{-1}$. To minimise the effects of beam damage, data were not collected continuously but rather at intervals of 10\,K during this cooling process. Once the minimum temperature was achieved, the goniometer was translated to allow a fresh part of the sample to be illuminated during heating. The sample temperature was then increased to 300\,K at a rate of 6\,K\,min$^{-1}$, with data collected at intervals of 10\,K. For all data collected, diffraction patterns were obtained using two separate measurements (5\,s each) at different angular orientations offset by 0.25$^\circ$ that were subsequently merged.\cite{Thompson:2011}
Structural models were determined from the powder diffraction patterns using
Rietveld refinement as implemented in the {\sc{topas}} software (academic
version 4.1).\cite{Coelho:2007} For most of the systems studied, the
corresponding crystal structures are already well known; the literature
values were used as a starting model for Rietveld
refinement.\cite{Wang_Z:2004,Hu:2009,Jain:2008,Sanchez-Andujar:2010,Jain:2009,Sletten:1973,Kong:2006}
To the best of our knowledge, the structures of MeNH$_3$--M (M = Mg, Fe, Co,
Zn, Cd) have not been determined previously; we found these to be
isostructural to MeNH$_3$--Mn and we used the known structure of this
compound as a starting point for Rietveld refinement. In the case of
Gua--Cd---which is also previously uncharacterised---we made use of both
single-crystal X-ray diffraction$^{\ddag}$ and our synchrotron X-ray powder
diffraction measurements to determine a relevant structural model (see SI for
further details).
Our Rietveld refinements made use of a number of chemically-informed restraints and constraints. Bonding restraints were applied to the metal--formate bonds, while rigid bodies were used to model the organic molecular units (\emph{i.e.}~the A-site cations and the formate ligands). The bond lengths and internal angles defining the rigid bodies were refined for the initial temperature point. After refinement of the ambient-temperature structure for all [A][M(HCOO)$_3$] compounds, subsequent diffraction patterns were refined sequentially using the refinement model from the previous temperature as a starting model. In these sequential refinements, the free variables were: a polynomial background function, the lattice parameters, the atomic coordinates, a scale factor, and peak shape parameters. The molecular geometries of the formate anions and the A-site cations were modelled as rigid bodies with translational and rotational degrees of freedom. The Rietveld fits to the ambient and 110\,K temperature points are given as SI.
\section{Results and Discussion}
\subsection{Thermal expansivities and XBU analysis}
\begin{figure*}
\centering
\includegraphics{figy.png}
\caption{Relationship between unit cell and perovskite-like cubic structural unit for each of the crystallographically-distinct phases in our study. In each case, the transition-metal coordination polyhedra are represented as filled octahedra. C atoms are shown in black, N atoms in blue, O atoms in red, and H atoms in pink.}
\label{figy}
\end{figure*}
Our determination of the thermal expansion characteristics of ABX$_3$ formates is based on the interpretation of temperature-dependent lattice parameters extracted \emph{via} Rietveld refinement of synchrotron X-ray powder diffraction data. While the various phases we study are isostructural in the sense that their topologies are identical, they adopt a variety of different space group symmetries. The various crystal phases included in our study are shown in Fig.~\ref{figy}. What is immediately clear is that direct comparison of lattice expansivities from one system to another is not physically meaningful, since the various structures involve different relationships between the perovskite lattice and the unit cell geometry.
Our solution to this problem is to interpret the lattice parameter changes for a given system in terms of two fundamental XBUs:\cite{Ogborn:2012} the framework strut length $r$ and the intra-framework angle $\theta$. For systems of sufficiently high symmetry (\emph{e.g.}\ the rhombohedral structures of Me$_2$NH$_2$--Mn/Co/Zn and Gua--Cd) there is a one-to-one mapping between the lattice parameters and XBU coordinates $r,\theta$. In other words, the elastic behaviour of the framework is completely described by the propensity for hingeing (changes in $\theta$) and network deformation (changes in $r$). For lower-symmetry structures, there will be more than one crystallographically-distinct value of $r$ and/or $\theta$; we use an average value in order to facilitate comparison between different systems (see SI for further details). This means that the relationship between lattice parameters and $r,\theta$ is approximate rather than exact, but we find that in practice the degree of approximation is $\lesssim10\%$, which is sufficient for our analysis.
\begin{figure}[h!]
\centering
\includegraphics{fig34.png}
\caption{Thermal expansion behaviour of MeNH$_3$--Mn, represented in terms of the temperature dependence of lattice parameters (top panel) and XBUs (bottom panel). Data collected during cooling are shown as open circles; those collected during heating are shown as filled circles. The discrepancy between cooling and heating runs likely reflects a combination of beam damage and thermal lag.}
\label{fig34}
\end{figure}
We demonstrate this approach using MeNH$_3$--Mn as a representative example. This compound has the $Pnma$ structure shown in the leftmost panel of Fig.~\ref{figy}. The relative changes in lattice parameters determined using Rietveld refinement against our experimental X-ray powder diffraction are shown in Fig.~\ref{fig34}. These reveal that, on heating, the framework expands rapidly along the $a$-axis and contracts almost as rapidly along the $c$-axis; this collective behaviour corresponds to hingeing of the framework as discussed elsewhere.\cite{Li_2012} The $b$-axis length is unaffected by framework hingeing, and its modest expansion with temperature reflects the intrinsic positive thermal expansion characteristic of M--formate--M linkages.\cite{Ogborn:2012} We quantify the magnitude of thermal response in terms of the coefficients of thermal expansion
\begin{equation}
\alpha=\frac{1}{\ell}\frac{{\rm d}\ell}{{\rm d}T},
\end{equation}
to give $\alpha_a=+88(3)$\,MK$^{-1}$, $\alpha_b=+19.5(4)$\,MK$^{-1}$ and $\alpha_c=-49(2)$\,MK$^{-1}$.\cite{Cliffe:2012}
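In practice, such expansivities follow from a linear fit of the refined lattice parameter against temperature, normalised to a reference value. The sketch below illustrates this with a purely hypothetical temperature grid and lattice parameter (chosen only to mimic $\alpha_a\approx+88$\,MK$^{-1}$; these are not our measured data):

```python
import numpy as np

# Hypothetical temperature grid (K) and lattice parameter (angstrom)
# varying linearly with T; values chosen only for illustration.
T = np.arange(110.0, 310.0, 10.0)
a = 8.70 * (1.0 + 88e-6 * (T - 110.0))

# alpha = (1/l) dl/dT: slope from a linear fit, normalised to the
# lowest-temperature value of the lattice parameter.
slope, intercept = np.polyfit(T, a, 1)
alpha_MK = 1e6 * slope / a[0]   # coefficient in MK^-1
print(round(alpha_MK, 1))       # ~88.0 for this synthetic input
```

For real data the fit would be weighted by the Rietveld uncertainties, which is where the quoted error bars on $\alpha$ come from.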
\begin{table*}[tb]
\centering
\caption{Lattice and XBU coefficients of thermal expansion for the 110--300\,K temperature range, given in units of MK$^{-1}$. For orthorhombic and hexagonal structures $\alpha_1 = \alpha_a$, $\alpha_2 = \alpha_b$, $\alpha_3 = \alpha_c$; for monoclinic structures the $\alpha_i$ index represents the principal coefficients of thermal expansion.\cite{Cliffe:2012} The expansivities for Aze--Mn given here were determined for the orthorhombic cell characteristic of the ambient-temperature phase; see SI for the principal-axis expansivities of the low-temperature monoclinic phase. $^a$ For this phase, values of $\alpha$ were calculated using cooling data only.}
\label{expansivities_Mformates}
\begin{tabular}{c|c|c|ccc|cc}
A$^{+}$ & M$^{2+}$ & Space group & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_r$ & $\alpha_\theta$ \\
\hline \hline
MeNH$_3$ & Mg & $Pnma$ & 54.5(1.3) & 22.1(5) & $-$20.0(5) & 20.1(5) & 46.0(1.1) \\
& Mn & $Pnma$ & 88(3) & 19.5(4) & $-$49(2) & 21.8(6) & 84(3) \\
& Fe & $Pnma$ & 74(2) & 14.7(4) & $-$25.1(1.2) & 22.5(6) & 61(2) \\
& Co & $Pnma$ & 68.7(1.3) & 22.3(4) & $-$28.8(6) & 21.6(4) & 61.1(1.2) \\
& Zn & $Pnma$ & 69(3) & 18.9(9) & $-$34.6(1.0) & 18.9(1.2) & 65(3) \\
& Cd & $Pnma$ & 102(7) & 15.0(1.4) & $-$61(7) & 21.5(7) & 100(8) \\
\hline
EtNH$_3$& Mn$^{a}$ & $Pn2_1a$ & 44.2(4) & 33.2(4) & 4.14(13) & 28.4(3) & 23.88(15) \\
\hline
Me$_2$NH$_2$ & Mn & $R\bar3c$ & 4.5(5) & 4.5(5) & 60.4(1.3) & 25.9(6) & 24.2(6) \\
& Co & $R\bar3c$ & 10.6(2) & 10.6(2) & 46.5(1.0) & 24.2(5) & 15.5(3) \\
& Cu & $C2/c$ & $-$14.3(1.0) & 45.7(1.7)& 57(3) & 31.7(1.6) & 24.7(6) \\
& Zn & $R\bar3c$ & 8.3(9) & 8.3(9) & 36.4(4) & 18.9(1.9)& 12.2(1.2) \\
\hline
Aze & Mn & $Pnma$ & 104(8) & 15.3(1.2) & $-$21(6) & 31.6(1.1) & $-$78(9) \\
\hline
Gua & Mn & $Pnna$ & 42.2(6) & 30.0(4) & $-$10.6(3) & 19.2(3) & $-$32.2(5) \\
& Fe & $Pnna$ & 33.1(4) & 31.6(3) & $-$1.5(2) & 20.2(2) & $-$21.2(3) \\
& Co & $Pnna$ & 36.6(6) & 25.8(5) & $-$6.7(2) & 17.5(3) & $-$26.3(4) \\
& Ni & $Pnna$ & 27.1(8) & 22.4(6) & 3.59(16) & 17.1(5) & $-$14.3(4) \\
& Cu & $Pn2_1a$ & 45.9(1.8) & 21.4(9) & 1.4(8) & 22.0(1.1) & $-$27.2(7) \\
& Zn &$Pnna$ & 30.7(1.9) & 21.9(9) & $-$5.3(6) & 14.9(7) & $-$21.9(1.4)\\
& Cd & $R\bar3c$ & $-$16.8(9) & $-$16.8(9) & 106(3) & 16.3(3) & $-$43.7(1.3)\\ \hline
\end{tabular}
\end{table*}
For this particular structure, the projection from lattice parameter coordinates onto XBUs is given by the pair of equations
\begin{eqnarray}
r&=&\frac{1}{3}\left(\frac{b}{2}+\sqrt{a^2+c^2}\right),\label{xbu1}\\
\theta&=&2\tan^{-1}\left(\frac{a}{c}\right).\label{xbu2}
\end{eqnarray}
The corresponding change in XBU magnitude with temperature is shown in the bottom panel of Fig.~\ref{fig34}, from which it is clear that the thermomechanical response is dominated by hingeing (changes in $\theta$) rather than network deformation. The XBU coefficients of thermal expansion are $\alpha_r=+21.8(6)$\,MK$^{-1}$ and $\alpha_\theta=+84(3)$\,MK$^{-1}$. XBU projection equations for the various crystal structures of Fig.~\ref{figy} are given as SI.
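As a concrete illustration of the projection equations above, the following sketch evaluates $r$ and $\theta$ for a hypothetical $Pnma$ cell (the lattice parameters are illustrative, chosen only to give a plausible geometry):

```python
import math

# Hypothetical Pnma lattice parameters (angstrom), for illustration only.
a, b, c = 8.70, 11.95, 8.15

# Strut length r and framework angle theta from the projection equations.
r = (b / 2 + math.hypot(a, c)) / 3     # hypot(a, c) = sqrt(a^2 + c^2)
theta = 2 * math.atan(a / c)           # radians
theta_deg = math.degrees(theta)

print(round(r, 3), round(theta_deg, 1))
```

Because $a>c$ here, $\theta$ comes out obtuse ($\approx93.7^\circ$), consistent with the convention of quoting the obtuse angle of the rhombic face.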
The extent of approximation in reducing the three lattice degrees of freedom to the two XBU degrees of freedom can be assessed by calculating values of $\alpha$ for the XBU-derived lattice parameters
\begin{eqnarray}
a_{\rm{XBU}}&=&\frac{2r\tan(\theta/2)}{\sqrt{1+\tan^2(\theta/2)}}\\
b_{\rm{XBU}}&=&2r\\
c_{\rm{XBU}}&=&\frac{2r}{\sqrt{1+\tan^2(\theta/2)}}
\end{eqnarray}
obtained \emph{via} inversion of Eqs.~\eqref{xbu1} and \eqref{xbu2}: we find $\alpha_{a_{\rm{XBU}}}=+87(3)$\,MK$^{-1}$, $\alpha_{b_{\rm{XBU}}}=+21.8(6)$\,MK$^{-1}$ and $\alpha_{c_{\rm{XBU}}}=-50(2)$\,MK$^{-1}$. These are all equal within error to the original lattice coefficients of thermal expansion.
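The fidelity of this two-parameter reduction can be checked numerically by projecting a hypothetical $Pnma$ cell onto $(r,\theta)$ and then rebuilding the XBU-derived lattice parameters; in the sketch below (illustrative numbers, not refined values) the round-trip deviations are small:

```python
import math

# Hypothetical Pnma lattice parameters (angstrom), for illustration only.
a, b, c = 8.70, 11.95, 8.15

# Forward projection onto the XBUs.
r = (b / 2 + math.hypot(a, c)) / 3
theta = 2 * math.atan(a / c)

# Inverse map back to the XBU-derived lattice parameters.
t = math.tan(theta / 2)
norm = math.sqrt(1 + t * t)
a_xbu = 2 * r * t / norm
b_xbu = 2 * r
c_xbu = 2 * r / norm

# Relative deviations (%) between input and reconstructed parameters.
for orig, rebuilt in ((a, a_xbu), (b, b_xbu), (c, c_xbu)):
    print(round(100 * (rebuilt - orig) / orig, 2))
```

Note that the ratio $a_{\rm XBU}/c_{\rm XBU}$ is preserved exactly by construction; the approximation only redistributes the overall scale between the three axes.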
The ABX$_3$ lattice and XBU expansivities determined from our entire ensemble of X-ray diffraction data and corresponding Rietveld fits are summarised in Table~\ref{expansivities_Mformates}; the raw lattice parameter data from which these values were derived are given as SI. While the magnitudes of the lattice expansivities vary over two orders of magnitude, we find essentially the same basic thermomechanical response for all ABX$_3$ structures. For example, nearly all of the orthorhombic structures exhibit large positive thermal expansion (PTE) along the $a$-axis (27--104\,MK$^{-1}$), moderate PTE along the $b$-axis (14.7--33\,MK$^{-1}$), and negative thermal expansion (NTE) along the $c$-axis ($-$61 to $-$1.5\,MK$^{-1}$).
\subsection{Effect of variation in M$^{2+}$}
\begin{figure*}
\centering
\includegraphics{fig4.png}
\caption{Lattice expansivities as a function of metal cation size for (a) MeNH$_3$--M and (b) Gua--M families (only data for the orthorhombic members are shown). The dotted ellipse in (b) indicates the $\alpha_a$ value for Gua--Cu, which is not included in the linear fit shown here as a guide-to-the-eye. XBU expansivities calculated for complete (c) MeNH$_3$--M and (d) Gua--M families. Ionic radii are taken from Ref.~\citenum{Shannon:1976} for octahedrally coordinated (high-spin) M$^{2+}$ cations.}
\label{CTE_lp_MeNH3_Gua}
\end{figure*}
In order to investigate the effect of M$^{2+}$ variation on the mechanics of [A][M(HCOO)$_3$] frameworks, we compare our results for the MeNH$_3$--M and Gua--M families---these are the two systems for which we have the greatest diversity of M$^{2+}$ substitution. Fig.~\ref{CTE_lp_MeNH3_Gua} presents graphically the relevant data from Table~\ref{expansivities_Mformates}, organised according to the (Shannon) radii of the M$^{2+}$ ions.\cite{Shannon:1976} Two results are immediately obvious from consideration of the dependence of XBU expansivity on ionic radius. The first is that the value of $\alpha_r$ is essentially composition-independent; the second is that the magnitude of $\alpha_\theta$ increases linearly with increasing M$^{2+}$ radius.
The magnitude of $\alpha_r$ reflects the effect of increased thermal motion on the M$\ldots$M separation across connected M--formate--M links. Our Rietveld refinements do not allow us to comment with certainty on the temperature-dependence of the position, orientation, and/or thermal displacement of the formate anions in the various compounds in the way that has allowed detailed analysis of thermal expansion effects in other dense MOFs.\cite{Collings:2013} Nevertheless a likely origin of the composition-independence of $\alpha_r$ is the balance between two competing effects: on the one hand, larger M$^{2+}$ ions would allow greater vibrational motion of bridging formates (\emph{i.e.}, lower vibrational frequencies); on the other hand, the increased M$\ldots$M separation for larger ions means that formate displacements have less effect on this separation \emph{in relative terms}. So while it is likely that the thermal motion of formate ions increases with increasing M$^{2+}$ radius, the relative effect on $r$ as an XBU remains roughly constant.
By contrast, the value of $\alpha_\theta$ represents the degree of hingeing flexibility in a given framework. The increase in flexibility we observe with increasing cation size could arise from several effects. First, the strength of the metal--formate coordination bond will decrease as larger metal cations are used, which in turn facilitates the movement of this bond. This causes an enhancement of the framework flexibility towards framework hingeing, especially as the neighbouring formate linkers are now further apart within the coordination sphere of the metal cation. In previous composition-dependent studies of Prussian Blue analogues, the strength of the coordination bonds was important for tuning the transverse vibrations of the M--ligand--M strut.\cite{Chapman:2006b} Second, the larger unit cells obtained with larger metal cations may allow greater structural freedom, especially as the hydrogen bonds from the A-site cations to the anionic formates of the framework are formed at greater distances. These two factors mean that small metal cations, such as Ni$^{2+}$, will have (\emph{i}) short and strong M--O bonds, restricting M--formate movement, and (\emph{ii}) a small unit cell, thus forming shorter intermolecular contacts between formate linkers, and between the framework itself and the A-site cation; all of these factors will result in a restriction of framework hingeing.
While the values of $\alpha_r$ are essentially identical for both MeNH$_3$--M and Gua--M families ($\sim20$\,MK$^{-1}$), the magnitudes of $\alpha_\theta$ \emph{and their sensitivity to cation radius} are substantially smaller for the latter family than for the former. We discuss this dependence on alkylammonium cation in more detail below, but note here that this difference has the consequence for Gua--M systems with small M$^{2+}$ that $|\alpha_\theta|\sim|\alpha_r|$ and so the NTE effect that would ordinarily arise from framework hingeing can be masked by the network deformation (expansion in $r$). This is why Gua--Ni and Gua--Cu do not exhibit NTE along any crystal axis.
\subsection{Effect of variation in A$^{+}$}
Our results for the MeNH$_3$--M and Gua--M families show clearly that variation in A-site cation can affect the extent of framework hingeing observed. To investigate this in more detail, we consider the family of Mn-containing ABX$_3$ formates with A = CH$_3$NH$_3$$^{+}$, CH$_3$CH$_2$NH$_3$$^{+}$, (CH$_3$)$_2$NH$_2$$^{+}$, (CH$_2$)$_3$NH$_2$$^{+}$, and C(NH$_2$)$_3$$^{+}$. The sizes of the different A-site cations were estimated by fitting the atomic coordinates refined from the powder diffraction data to a shape tensor, $L$, using the program {\sc{crystals}}.\cite{Betteridge:2003} This shape tensor represents an anisotropic ellipsoid that contains all atoms within the A-site cation.\cite{Cooper:2004} The smallest, median, and largest components of this ellipsoid are shown in Table~\ref{Acation_size}. We initially use the maximum effective length of the A-site cation ($L_{\rm max}$) as our metric by which to compare the mechanical properties of our different systems. In another study, a different method was used to calculate the sizes of the A-site cations mentioned here.\cite{Kieslich:2014} The two approaches are consistent with one another except that the order of the Me$_2$NH$_2$$^+$ and Gua$^+$ cations is reversed (see SI for further discussion).
\begin{table}
\centering
\caption[Effective lengths of the A-site cation molecular ellipsoids]{Computed principal axis effective lengths (given in units of \AA$^2$) for each of the A-site cation centroids in the host manganese formate, calculated within {\sc{crystals}}.\cite{Betteridge:2003,Cooper:2004} The asphericity $b$ (also given in \AA$^2$) is defined by Eq.~\eqref{asphericity}.}
\label{Acation_size}
\begin{tabular}{c|ccc|r}
A$^{+}$& $L_{\textrm{min}}$ & $L_{\textrm{med}}$ & $L_{\textrm{max}}$ & \multicolumn{1}{c}{$b$} \\ \hline\hline
MeNH$_3$ & 0.54 & 0.57 & 1.88 & $-$0.64 \\
EtNH$_3$ & 0.65 & 0.88 & 2.95 & $-$0.92 \\
Me$_2$NH$_2$ & 0.71 & 1.21 & 3.25 & $-$0.77 \\
Aze & 0.82 & 1.92 & 1.98 & 0.52 \\
Gua & 0.48 & 2.39 & 2.52 & 0.89 \\\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics{fig5_iec.png}
\caption{XBU expansivities determined for MeNH$_3$--Mn, Aze--Mn, Gua--Mn, EtNH$_3$--Mn, and Me$_2$NH$_2$--Mn, given as a function of maximum effective A-site cation length. The cation shape tensor generated using {\sc{crystals}} is represented for each compound below the graph.\cite{Betteridge:2003,Cooper:2004} The expansivities for Aze--Mn have an anomalously large uncertainty as a result of a low-temperature phase transition (see SI for further discussion).}
\label{fig5}
\end{figure}
Figure~\ref{fig5} shows the XBU expansivities for the five Mn-containing formate frameworks, arranged according to the value of $L_{\rm max}$ for their A-site cation. As observed in the previous section, there is essentially no meaningful compositional dependence of $\alpha_r$. By contrast, the magnitude of $\alpha_\theta$ is much reduced for those systems with larger A-site cations. Our use of the absolute value of $\alpha_\theta$ reflects the fact that its sign is not conserved amongst the various compounds we study, a point we will expand upon below. So, at face value our results suggest that bulkier A-site cations inhibit framework flexibility. This may be due simply to steric interactions between the A-site cation and the host framework, with bulkier cations preventing large changes in framework angles. From a chemical viewpoint it is likely that the different H-bonding strengths of the various A-site cations will also impact the framework flexibility;\cite{Li:2014} however, our data do not allow us to probe changes in hydrogen-bond characteristics with any certainty. Even in these simplistic terms, and taken together with the results of the previous section, our analysis already suggests that the most flexible formate frameworks will be those with large M$^{2+}$ but small A$^{+}$---precisely the combination that (rightly) identifies MeNH$_3$--Cd as the most flexible of the systems we study here [Table~\ref{expansivities_Mformates}].
One of the key motivations for our study was to understand the relationship between A-cation \emph{shape} and framework mechanics. There are a large number of metrics to quantify shape, but we constrain ourselves here (given that we have but five data points to compare) to the straightforward notion of \emph{asphericity}.\cite{Theodorou_1985} The asphericity parameter
\begin{equation}
b=L_{\rm med}-\frac{1}{2}\left(L_{\rm min}+L_{\rm max}\right)\label{asphericity}
\end{equation}
takes into account both the size of an object and the anisotropy of the shape tensor $\mathbf L$. The parameter $b$ assumes negative values for objects with prolate asphericity, positive values for oblate objects, and is zero for isotropic shapes. The various A-site cations we consider span a range of asphericities $-1\lesssim b\lesssim1$\,\AA$^2$ [Table~\ref{Acation_size}]. Re-ordering the hingeing XBU expansivities according to these values, we find a clear distinction between prolate and oblate A-cations, for which $\alpha_\theta>0$ and $\alpha_\theta<0$, respectively.\footnote{Note that in order for the sign of $\alpha_\theta$ to have any physical meaning, we are careful to define the value of $\theta$ in a consistent way for all frameworks we study. In particular, it is given as the obtuse angle in the rhombic face of the perovskite-like cube. Hence $\alpha_\theta>0$ implies that the cube geometry becomes more distorted with increasing temperature, and $\alpha_\theta<0$ reflects a tendency to adopt an increasingly cubic geometry on heating.} This switching in sign of anisotropic mechanical response as a consequence of cation asphericity is strongly reminiscent of the critical ratios that demarcate linear and area negative thermal expansion in a range of molecular framework materials.\cite{Collings:2014} As in those systems, the magnitude of hingeing response is actually larger for systems closer to the critical geometry (here, $b=0$). Our data show an approximately-linear relationship between $b$ and the inverse expansivity $\alpha_\theta^{-1}$ [Fig.~\ref{fig5_alg}].
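The asphericity values of Table~\ref{Acation_size} follow directly from Eq.~\eqref{asphericity}; a short sketch reproducing them from the tabulated effective lengths:

```python
# Effective lengths (L_min, L_med, L_max) as tabulated (A^2).
lengths = {
    "MeNH3":  (0.54, 0.57, 1.88),
    "EtNH3":  (0.65, 0.88, 2.95),
    "Me2NH2": (0.71, 1.21, 3.25),
    "Aze":    (0.82, 1.92, 1.98),
    "Gua":    (0.48, 2.39, 2.52),
}

# b = L_med - (L_min + L_max)/2: negative -> prolate, positive -> oblate.
asphericity = {
    name: lmed - 0.5 * (lmin + lmax)
    for name, (lmin, lmed, lmax) in lengths.items()
}

for name, b in asphericity.items():
    print(name, round(b, 2), "prolate" if b < 0 else "oblate")
```

The computed values match the final column of the table, with the three alkylammonium cations coming out prolate and Aze/Gua oblate.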
\begin{figure}
\centering
\includegraphics{fig5_alg.png}
\caption{Relationship between asphericity $b$ and inverse XBU expansivity for the five Mn-containing ABX$_3$ formates in our study. The direction of framework flexing switches for prolate/oblate A-site cations; the extent of flexibility is largest for small, spherical cations ($b\rightarrow0$).}
\label{fig5_alg}
\end{figure}
So, from a design perspective, our analysis suggests that it is not only the A-cation size that affects framework flexibility, but also its anisotropy. Hence we can explain why EtNH$_3$--Mn is less flexible than Aze--Mn, despite the fact that EtNH$_3^+$ is the smaller cation by $\sim$25\% (see SI for details). This relationship also explains why the qualitative ordering of Fig.~\ref{fig5} in terms of $L_{\rm max}$ is physically meaningful.
\section{Concluding Remarks}
The relationships we establish between cation size and shape provide a set of straightforward structure--property rules which can be implemented as part of the strategic design of functional MOFs. The basis of these relationships is the correlation between cation size and M--formate bonding strength, which in turn affects the mechanical properties. In a computational study of the [Me$_2$NH$_2$][M(HCOO)$_3$] family, it was shown that the calculated Bader charges on the metal cations exhibited the following order: Mn $>$ Fe $>$ Zn $>$ Co $>$ Ni.\cite{Kosa:2014} This order was also suggested to represent the M--formate bonding strength, where Mn--formate exhibits the weakest bond (highest ionic component) and Ni--formate the strongest (greatest covalent component), and furthermore, the ordering is very similar to that of the metal cation size. There is a small discrepancy with the Zn and Co placement between the Bader charge and metal cation size sequences; however, as there is very little difference in the sizes (0.74 \emph{vs}.~0.745\,\AA), the variation is not significant. This small discrepancy can also be seen in the coefficients of thermal expansion, which vary subtly in the order for these two metal cations [Fig.~\ref{CTE_lp_MeNH3_Gua}].
In a nanoindentation study of the [Me$_2$NH$_2$][M(HCOO)$_3$] family, it was suggested that the metal ligand field stabilisation energy (LFSE) directs the resulting mechanical properties.\cite{Tan:2012} The LFSE order of Mn $=$ Zn $<$ Co $<$ Ni was reproduced in the magnitudes of the Young's Moduli of these compounds.\cite{Tan:2012} This finding places compounds which contain either Mn$^{2+}$, Zn$^{2+}$, or Cd$^{2+}$ on the same level of structural flexibility, which is not consistent with the thermal expansivity characteristics we measure here. Likewise, a Brillouin scattering study of related [NH$_4$][Mn(HCOO)$_3$] and [NH$_4$][Zn(HCOO)$_3$] structures indicated increased stiffness for M = Zn$^{2+}$ relative to M = Mn$^{2+}$.\cite{Maczka:2014} The thermal expansion data from this study and others\cite{Chapman:2006b} also show increased stiffness for Zn$^{2+}$-containing compounds compared to Mn$^{2+}$. This discrepancy could be due to the different experimental methods used: in the case of nanoindentation experiments, a uniaxial force is exerted upon the crystal, while upon temperature variation, an isotropic external stimulus is applied. One obvious and straightforward experiment would be to carry out a comparative nanoindentation study of Zn- and Cd-containing formate frameworks, since these species correspond to very different ionic radii but identical LFSEs.
A clear result of our study has been to show that the presence of A-site cations within the pores of the metal formate framework plays an important role in controlling the structural flexibility of the material. In particular, longer A-site cations cause a significant decrease in the framework hingeing observed. Thus these structures are not expected to give rise to anomalous mechanics, such as NTE. Instead their mechanics will be dominated by the behaviour of the M--linker--M units. In addition, the direction of framework hingeing can be switched by using differently-shaped A-site cations within the framework pores. In the case of oblate A-site cations, the framework hingeing is directed towards a convergence of framework angles (such as 90$^{\circ}$ for a 2D wine-rack) upon heating; whereas there is a divergence of framework angles in the case where prolate A-site cations are used. The framework hingeing direction could have implications for ferroelectric transitions arising from reorientation of extra-framework molecular ions. Stronger host--guest interactions might be expected for pore shapes which mimic the shape of the molecular ion. Thus ferroelectric ordering may be stabilised in cases where the pores vary (\emph{i.e.}\ framework angles evolve) towards a shape that mimics that of the molecular ion.
In summary, our work has shown that metal cation size correlates well with expansivity magnitudes. For the octahedral cation site, cations with smaller radii give rise to stiffer frameworks, while the larger metal cations are associated with the more extreme framework flexibility. At the A-site, cation size is also found to affect framework flexibility, though in the opposite sense: framework hingeing is constrained when long A-site cations are present within the framework pore. Moreover, the type of asphericity of the A-site cation determines the direction of framework hingeing. This is most likely to be observed only in dense MOFs, where the vibrational motion of the anisotropic A-site cation is of greater importance. The most readily applicable rule for rational mechanical design of MOFs concerns the metal cation: simple evaluation of its size within different oxidation states or coordination environments can provide a scale of flexibility within a series of isostructural frameworks.
\subsection*{Acknowledgments}
The authors thank the EPSRC (grant no.\ EP/G004528/2) and ERC (grant no.\ 279705) for financial support. This work was carried out with the support of the Diamond Light Source.
\renewcommand\refname{}
\section*{Notes and references}
\footnotesize{ $^{\ddag}$ Single-crystal X-ray diffraction data for Gua--Cd
were collected using a Nonius KappaCCD diffractometer or an Oxford
Diffraction (Agilent) SuperNova diffractometer fitted with an Oxford
Cryosystems Cryostream 600 Series/700 Plus open flow nitrogen cooling device.
\textsc{denzo/scalepack}\cite{Otwinowski_1997} or CrysAlisPro were used for
data collection and reduction as appropriate. In general, the structures were
solved \textit{ab initio} using \textsc{sir}92,\cite{Altomare_1994} although
coordinates from 150\,K were used as a starting model for the other
temperatures. All structures were refined with full-matrix least-squares on
$F^{2}$ using \textsc{crystals}.\cite{Betteridge:2003,Parois_2015} Hydrogen
atoms were generally visible in the difference Fourier map and treated in the
usual manner.\cite{Cooper_2010} Full structural data are included in the
Supplementary Information (SI) and have been submitted to the CCDC as numbers
1420162--1420165. These data can also be obtained free of charge from The
Cambridge Crystallographic Data Centre via
http:$//$www.ccdc.cam.ac.uk$/$data\_request$/$cif.
}
\vspace{-0.8cm}
\balance \footnotesize{
2,877,628,089,617 | arxiv | \section{Introduction}
In many research and engineering areas the matched filter (MF) represents the standard tool for the detection of signals with amplitude smaller than the level of the contaminating noise \citep{kay98, tuz01, lev08, mac05}.
The MF is a linear filter that optimally filters out the Fourier frequencies where the noise is predominant,
preserving those where the searched signal gives a greater contribution. In many situations, the MF displays the best performance, providing the greatest probability of true detection subject to a constant probability of false detection or false alarm
(PFA). The widespread success of the MF is due to its reliability and resulting robustness. The main limitation of this approach lies in the implicit assumption that the position of the signal within a sequence of data
(e.g., an emission line in a spectrum) or on an image (e.g., a point source in an astronomical map) is known. In most practical applications this is not the case.
For this reason, the MF is used assuming that, if present, the position of a signal corresponds to a peak of the filtered data.
In a recent paper, \citet{vio16} have shown that, when based on the standard but wrong assumption that the probability density function (PDF) of the peaks of a Gaussian noise process is a Gaussian, this approach may lead to a severe underestimation of the PFA. The same authors have provided an alternative procedure to correctly compute this quantity.
The PFA is a useful tool to fix a preliminary detection threshold, but with this quantity alone it is not possible to assess the reliability of a specific detection. This is because it does not provide the probability that a given detection is spurious but rather
the probability that a generic peak due to the noise in the filtered signal can exceed, by chance, a fixed detection threshold. However, there are often cases in which the knowledge of the probability for a peak to be a source is crucial to plan follow-up observations to identify the source itself. For this reason, in this work we introduce the so-called specific probability of false alarm (SPFA), which is able to provide a precise (within a few percent) quantification of the reliability of a detection.
In Sect.~\ref{sec:MF1}, the main characteristics of the MF for one-dimensional signals are reviewed, as well as the reason why the standard approach underestimates the PFA. In the same section, the method suggested by \citet{vio16}
to correctly compute this quantity is briefly reconsidered.
In Sect.~\ref{sec:spfa} we address the question of the statistical meaning of the PFA and introduce the SPFA.
The arguments are extended to two-dimensional signals in Sect.~\ref{sec:twodimensional}. Finally, in Sect.~\ref{sec:experiment} the procedure is applied to a simulated map, whereas in Sects.~\ref{sec:real} and ~\ref{sec:atca} it is applied
to two interferometric maps obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) and to one map obtained with the Australia Telescope Compact Array (ATCA). The final remarks are deferred to Sect.~\ref{sec:conclusions}.
\section{Matched filter for one-dimensional signals} \label{sec:MF1}
\subsection{The basics} \label{sec:basicMF}
Given a discrete observed signal $\boldsymbol{x} = [x(0), x(1), \ldots, x(N-1)]^T$ of length $N$, the model assumed in the MF approach is
$\boldsymbol{x} = \boldsymbol{s} + \boldsymbol{n}$, where $\boldsymbol{s}$ is the deterministic signal to detect and $\boldsymbol{n}$ a zero-mean Gaussian stationary noise with known covariance matrix
\begin{equation} \label{eq:C}
\boldsymbol{C} = {\rm E}[\boldsymbol{n} \boldsymbol{n}^T].
\end{equation}
Here, symbols ${\rm E}[.]$ and $^T$ denote the expectation operator and the vector or matrix transpose, respectively.
Under these conditions, according to the Neyman-Pearson theorem \citep{kay98,vio16}, a detection is claimed when
\begin{equation} \label{eq:test1}
\mathcal{T}(\boldsymbol{x}) = \boldsymbol{x}^T \boldsymbol{f}_s > \gamma,
\end{equation}
with
\begin{equation} \label{eq:mf}
\boldsymbol{f}_s = \boldsymbol{C}^{-1} \boldsymbol{s}.
\end{equation}
Here $\boldsymbol{f}_s$, an $N \times 1$ array, represents the matched filter. The main characteristic of the MF is that it maximizes the probability of detection (PD) under the constraint of a fixed PFA.
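As an illustration, the statistic of Eqs.~\eqref{eq:test1}-\eqref{eq:mf} can be sketched in a few lines of Python. This is only a minimal sketch for a known template; the function name, the toy template, and the noise model are our own choices:

```python
import numpy as np

def matched_filter_stat(x, s, C):
    """Neyman-Pearson detection statistic T(x) = x^T C^{-1} s."""
    f = np.linalg.solve(C, s)   # matched filter f_s = C^{-1} s
    return float(x @ f)

# toy example: Gaussian template in white noise (C = I, so f_s = s)
N = 64
s = np.exp(-0.5 * ((np.arange(N) - N // 2) / 3.0) ** 2)
rng = np.random.default_rng(0)
T = matched_filter_stat(s + rng.standard_normal(N), s, np.eye(N))
```

For white noise the statistic reduces to the familiar cross-product of the data with the template, while a non-trivial $\boldsymbol{C}$ down-weights the frequencies where the noise dominates.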
\subsection{Matched filter in practical applications} \label{sec:comments}
The MF has been obtained under two assumptions: first, signal $\boldsymbol{s}$ is known and, second, $\boldsymbol{x}$ and $\boldsymbol{s}$ have the same length $N$. This last point implicitly means that the position of $\boldsymbol{s}$ within
$\boldsymbol{x}$ is known.
Very often only the shape $\boldsymbol{g}$ of the signal $\boldsymbol{s} =a \boldsymbol{g}$ is known but not its amplitude $a$.
In this case, the relaxation of the first assumption does not have important consequences given that the test in Eq.~(\ref{eq:test1}) can be rewritten in the form
\begin{equation} \label{eq:test2}
\mathcal{T}(\boldsymbol{x}) = \boldsymbol{x}^T \boldsymbol{f}_g > \gamma',
\end{equation}
where $\gamma'=\gamma/a$ and the MF becomes
\begin{equation} \label{eq:mf2}
\boldsymbol{f}_g = \boldsymbol{C}^{-1} \boldsymbol{g}.
\end{equation}
Hence, the resulting $\mathcal{T}(\boldsymbol{x})$ is a statistic that is independent of $a$. This means that the unavailability of $a$ does not affect the PFA but only the PD.
If the second assumption is loosened, the signal $\boldsymbol{s}$ has a length $N_s$ that is smaller than the length of the observed data (e.g. an emission line in an experimental spectrum).
If the amplitude $a$ is also unknown, the standard approach is to claim a detection when
\begin{equation} \label{eq:test2b}
\mathcal{T}(\boldsymbol{x}, \hat{i}_0) > u \hat{\sigma}_{\mathcal{T}},
\end{equation}
where
\begin{equation} \label{eq:dectx}
\mathcal{T}(\boldsymbol{x}, \hat{i}_0) = \underset{ i_0 \in [0, N-N_s] }{\max} \mathcal{T}(\boldsymbol{x}, i_0),
\end{equation}
where $\hat{i}_0$ is the estimate of the unknown position $i_0$ of the signal, $u$ a value typically in the range $[3, 5]$, and $\hat{\sigma}_{\mathcal{T}}$ the standard deviation of the sequence
\begin{equation} \label{eq:corrx}
\mathcal{T}(\boldsymbol{x}, i_0) = \sum_{i=i_0}^{i_0 + N_s - 1} x(i) f_g(i-i_0); \quad i_0 = 0, 1, \ldots, N-N_s.
\end{equation}
Namely, the observed signal $\boldsymbol{x}$ is cross-correlated with the MF~\eqref{eq:mf2} (see Eq.~\eqref{eq:corrx}), the greatest peak of $\mathcal{T}(\boldsymbol{x}, i_0)$ is identified (see Eq.~\eqref{eq:dectx}), and finally this peak is tested against a threshold set to $u$ times the standard deviation of the sequence $\mathcal{T}(\boldsymbol{x}, i_0)$ (see Eq.~\eqref{eq:test2b}).
In the affirmative case the peak corresponds to a detection; otherwise it is assumed to be due to the noise.
If the number of signals $\boldsymbol{s}$ present in $\boldsymbol{x}$ is also unknown, this procedure has to be applied to all the peaks in $ \mathcal{T}(\boldsymbol{x}, i_0)$.
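For white noise ($\boldsymbol{C} = \boldsymbol{I}$, hence $\boldsymbol{f}_g = \boldsymbol{g}$) the whole procedure of Eqs.~\eqref{eq:test2b}-\eqref{eq:corrx} reduces to a few lines. The following Python sketch is only illustrative; the template, the planted amplitude, and the threshold are our own choices:

```python
import numpy as np

def mf_sliding_detection(x, g, u=4.0):
    """Sliding matched-filter test for white noise (C = I, so f_g = g):
    cross-correlate x with g, take the greatest peak, and compare it
    with u times the standard deviation of the filtered sequence."""
    N, Ns = len(x), len(g)
    T = np.array([x[i0:i0 + Ns] @ g for i0 in range(N - Ns + 1)])  # T(x, i0)
    i0_hat = int(np.argmax(T))                                     # greatest peak
    return i0_hat, T[i0_hat], bool(T[i0_hat] > u * T.std())        # threshold test

# toy usage: a bright Gaussian signal planted at i0 = 100
rng = np.random.default_rng(1)
g = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)
x = rng.standard_normal(500)
x[100:121] += 10.0 * g
i0_hat, peak, detected = mf_sliding_detection(x, g)
```

With a signal well above the noise, the greatest peak of the filtered sequence falls at the planted position and exceeds the $u \hat{\sigma}_{\mathcal{T}}$ threshold.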
It is widespread practice to assume that the PFA corresponding to the statistic~\eqref{eq:test2b} is given by
\begin{equation} \label{eq:fd1}
\alpha = \Phi_c(u),
\end{equation}
where $\Phi_c(u) = 1 - \Phi(u)$ and $\Phi(u)$ is the standard Gaussian cumulative distribution function \footnote{In \citet{vio16} $\Phi_c(.)$ is denoted as $\Phi(.)$.}.
However, as shown by \citet{vio16}, such a practice can lead to a severe underestimation of this quantity. The point is that the PDF of the peaks of a stationary Gaussian random signal is not a Gaussian as
implicitly assumed in Eq.~\eqref{eq:fd1}. For this reason, the correct PFA has to be estimated by means of
\begin{equation} \label{eq:corra}
\alpha = \Psi_c(u),
\end{equation}
where
\begin{equation}
\Psi_c(u)= 1 - \Psi(u)
\end{equation}
with
\begin{equation}
\Psi(u)=\int_{-\infty}^{u} \psi(z) dz,
\end{equation}
and
\begin{equation} \label{eq:pdf_z1}
\psi(z) = \frac{\sqrt{3 - \kappa^2}}{\sqrt{6 \pi}} {\rm e}^{-\frac{3 z^2}{2(3 - \kappa^2)}} + \frac{2 \kappa z \sqrt{\pi}}{\sqrt{6}} \phi(z) \Phi\left(\frac{\kappa z}{\sqrt{3 - \kappa^2}} \right)
\end{equation}
providing the PDF \footnote{In \citet{vio16} $\Psi_c(.)$ is denoted as $\Psi(.)$ and in Eqs.~(24)-(25) the function $\Phi(.)$ has to be intended as the Gaussian cumulative distribution function and not its complementary function
as it erroneously appears.}
of the local maxima of a zero-mean, unit-variance, smooth stationary one-dimensional Gaussian random field \citep{che15a, che15b}. Here,
\begin{equation} \label{eq:kd}
\kappa = - \frac{\rho'(0)}{\sqrt{\rho''(0)}},
\end{equation}
where $\rho'(0)$ and $\rho''(0)$ are, respectively, the first and second derivative with respect to $r^2$ of the two-point correlation function $\rho(r)$ at $r=0$, where $r$ is the inter-point distance.
When $\kappa=1$, the functional form of the two-point correlation function is a Gaussian. The condition of smoothness for the random
fields requires that $\rho(r)$ be differentiable at least six times with respect to $r$. In a recent work, \citet{che16} have shown that, contrary to what is claimed in
\citet{che15a, che15b} and in \citet{vio16}, the constraint $\kappa \leq 1$ actually is not necessary for the validity of the PDF~\eqref{eq:pdf_z1}.
The main problem in using Eq.~\eqref{eq:corra} is the estimation of the parameter $\kappa$. One possibility, suggested by \citet{vio16}, is to obtain this quantity from the fit of the discrete sample two-point correlation function of
$\mathcal{T}(\boldsymbol{x}, i_0)$ with an appropriate analytical function. The reason is that the estimation of the correlation function of the noise is also required by the MF and, therefore, it is not an additional condition of the procedure.
However, a reliable estimate of $\rho'(0)$ and $\rho''(0)$ is a very delicate issue. A robust alternative is to estimate $\kappa$ through a maximum likelihood approach
\begin{equation} \label{eq:ml}
\hat{\kappa} = \underset{\kappa }{\arg\max} \sum_{i=1}^{N_p} \log{\left(\psi(z_i; \kappa)\right)},
\end{equation}
where $\{ z_i \}$, $i=1,2,\ldots, N_p$, are the local maxima of $\mathcal{T}(\boldsymbol{x}, i_0)$. This approach cannot be adopted if the number and/or the amplitude of the point sources modify the statistical characteristics
of the noise background. The case of a few bright sources is simple to deal with, since it is sufficient to mask all the sources that clearly stand out from the noise. Conversely, if the sources are too numerous, the approach is unfeasible. A comparison of the histograms $H(x)$ of
the pixel values and $H(z)$ of the peak amplitudes with the respective PDFs $\phi(x)$ and $\psi(z)$ permits us to check whether this condition is fulfilled (see more in Sects.~\ref{sec:experiment}-\ref{sec:atca}).
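For illustration, the maximum-likelihood fit of Eq.~\eqref{eq:ml} can be implemented with a simple grid search over $\kappa \in (0, \sqrt{3})$, the upper bound ensuring $3-\kappa^2 > 0$ in the PDF~\eqref{eq:pdf_z1}. The following Python sketch is our own; the grid resolution and function names are arbitrary choices:

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def Phi(a):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + _erf(np.asarray(a, float) / np.sqrt(2.0)))

def psi(z, kappa):
    """PDF of the local maxima of a smooth, stationary, zero-mean,
    unit-variance one-dimensional Gaussian random field."""
    z = np.asarray(z, float)
    k2 = kappa ** 2
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    term1 = (np.sqrt(3.0 - k2) / np.sqrt(6.0 * np.pi)
             * np.exp(-3.0 * z ** 2 / (2.0 * (3.0 - k2))))
    term2 = (2.0 * kappa * z * np.sqrt(np.pi) / np.sqrt(6.0)
             * phi * Phi(kappa * z / np.sqrt(3.0 - k2)))
    return term1 + term2

def fit_kappa(peaks, grid=np.linspace(0.05, 1.70, 332)):
    """Maximum-likelihood estimate of kappa by grid search over the
    log-likelihood of the observed peak values."""
    loglik = [np.sum(np.log(np.maximum(psi(peaks, k), 1e-300)))
              for k in grid]
    return float(grid[int(np.argmax(loglik))])
```

In practice, the peaks $\{z_i\}$ are extracted from the standardized filtered data; a grid search avoids derivative-based optimizers near the boundary of the admissible interval.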
Once $\hat{\kappa}$ and $N_p$ are fixed, an additional benefit of the maximum likelihood approach is that
by means of Eq.~\eqref{eq:kd} and of the expected number $N^*_p$ of peaks per unit length \citep{che15b},
\begin{equation} \label{eq:np}
N^*_p= \frac{\sqrt{6}}{2 \pi} \sqrt{- \frac{\rho''(0)}{\rho'(0)}},
\end{equation}
it is possible to obtain an estimate of $\rho'(0)$ and $\rho''(0)$ as
\begin{align}
\rho'(0) & = -\frac{2}{3} \pi^2 (N^*_p)^2 \hat{\kappa}^2; \\
\rho''(0) & = \frac{4}{9} \pi^4 (N^*_p)^4 \hat{\kappa}^2.
\end{align}
In turn, by means of the Taylor expansion,
\begin{equation}
\rho(r) = 1 + r^2 \rho'(0) + \frac{1}{2} r^4 \rho''(0),
\end{equation}
it is possible to evaluate the form of $\rho(r)$ for $r \in (0, 1)$, an interval that is not accessible to the classical estimators of the correlation function in the case of discrete signals, where $r$ can take only integer values.
\subsection{Morphological analysis of the peaks} \label{sec:morph}
It is a common belief that it is possible to improve the rejection of spurious detections by means of a morphological analysis of the shape of the peaks in the filtered map $\mathcal{T}(\boldsymbol{x}, i_0)$. Such a conviction is based on the assumption that a peak
due to a signal $\boldsymbol{s}$ looks different from a peak due to noise. Unfortunately, the situation is more complex.
Let us assume, for the moment, that $\boldsymbol{n}$ is a standard Gaussian white noise. In the case of no signal, $\mathcal{T}(\boldsymbol{x}, i_0)$ is obtained through the correlation of $\boldsymbol{n}$ with $\boldsymbol{f}=\boldsymbol{g}$. As a consequence, it is a Gaussian stochastic process with
autocorrelation function
$\rho(\tau)$ given by
\begin{equation}
\rho(\tau) = \boldsymbol{g}_{\star}(\tau),
\end{equation}
where $\boldsymbol{g}_{\star} = \boldsymbol{g} \otimes \boldsymbol{g}$ with symbol $\otimes$ denoting the correlation operator. Here, the point is that $ \boldsymbol{g}_{\star}$ is also the shape of the template $\boldsymbol{g}$ after the matched filtering. Moreover, the conditional expectation of
$\mathcal{T}(\boldsymbol{x}, i_p +\tau)$ given $\mathcal{T}(\boldsymbol{x}, i_p)$, with $i_p$ the position of the peak, is
\begin{equation}
{\rm E}[\mathcal{T}(\boldsymbol{x}, i_p+\tau)~|~\mathcal{T}(\boldsymbol{x}, i_p)] = a \rho(\tau),
\end{equation}
where $a = \mathcal{T}(\boldsymbol{x}, i_p)$ is the peak value. This means that in $\mathcal{T}(\boldsymbol{x}, i_0)$ the expected shape of a peak due to the noise is identical to that of the signal $\boldsymbol{s}$ after the matched filtering.
Something similar holds when the noise is of non-white type. Indeed, in the case of no signal, the quantity $\mathcal{T}(\boldsymbol{x}) = \boldsymbol{x}^T \boldsymbol{C}^{-1} \boldsymbol{g}$ in Eq.~\eqref{eq:test2} can be written in the form $\mathcal{T}(\boldsymbol{x}) = \boldsymbol{y}^T \boldsymbol{h}$ where $\boldsymbol{y}^T = \boldsymbol{x}^T \boldsymbol{C}^{-1/2}$ and
$\boldsymbol{h} = \boldsymbol{C}^{-1/2} \boldsymbol{g}$. Now, ${\rm E}[\boldsymbol{y} \boldsymbol{y}^T] = {\rm E}[\boldsymbol{C}^{-1/2} \boldsymbol{x} \boldsymbol{x}^T \boldsymbol{C}^{-1/2}] = \boldsymbol{C}^{-1/2} {\rm E}[\boldsymbol{x} \boldsymbol{x}^T] \boldsymbol{C}^{-1/2} = \boldsymbol{C}^{-1/2} \boldsymbol{C} \boldsymbol{C}^{-1/2} = \boldsymbol{I}$. This means that the non-white case can be brought back to a white case
where now the matched filter takes the form $\boldsymbol{f} = \boldsymbol{h}$.
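The argument can be checked numerically: the empirical autocorrelation of matched-filtered white noise reproduces the normalized self-correlation $\boldsymbol{g}_{\star}$ of the template, and hence the expected shape of the noise peaks. A small Python experiment of our own (template width and sample size are arbitrary):

```python
import numpy as np

# Autocorrelation of matched-filtered white noise vs. the normalized
# self-correlation g_star = g (correlated with) g of the template.
rng = np.random.default_rng(0)
t = np.arange(-15, 16)
g = np.exp(-0.5 * (t / 3.0) ** 2)          # template g
T = np.convolve(rng.standard_normal(200_000), g, mode='same')

# empirical autocorrelation of the filtered noise at lags 0..10
lags = np.arange(11)
emp = np.array([np.mean(T[:len(T) - l] * T[l:]) for l in lags])
emp /= emp[0]

# normalized self-correlation of the template at the same lags
g_star = np.correlate(g, g, mode='full')
g_star = g_star[g.size - 1 : g.size - 1 + lags.size] / g_star[g.size - 1]
```

The two curves agree to within sampling noise, confirming that a noise peak and a filtered signal share the same expected profile.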
\subsection{Previous similar works on this issue}
To the best of our knowledge, the detection procedure described above is the first that makes use of the statistical characteristics of the peaks of a Gaussian noise process. In the past, \citet{lop05a, lop05b} have proposed two
modifications of the MF, known as the scale-adaptive filter (SAF) and the biparametric scale-adaptive filter (BSAF), which are claimed to be able to outperform
the MF in the sense of maximizing the probability of detection subject to a constant probability of false detection.
Contrary to the MF, these filters work not only with the amplitude of the peaks but also with the corresponding curvature. However, such a claim is incompatible with the Neyman-Pearson theorem given that, as seen in Sect.~\ref{sec:basicMF},
in the case of stationary Gaussian noise and conditioned on the a priori knowledge of the true position of the signal, the filter that maximizes the probability of detection subject to a constant probability of false detection is the MF. This means that, under the same conditions, no other filter can outperform it.
A careful reading of the above-mentioned works shows that the optimal properties of the SAF and BSAF hold only under an additional condition with respect to the MF, namely that the true position of the source coincides with a peak of the filtered noise. It is worth noting that this is an unrealistic condition. In fact, since the arguments supporting the SAF and BSAF are developed in the framework of continuous signals, such a coincidence represents an event with probability zero.
As a consequence, the plain and uncritical extension of the SAF and BSAF to the case of discrete signals carried out by \citet{lop05a, lop05b} is questionable. The same also applies to their numerical simulations, since the comparison of the performance of the SAF, BSAF, and MF should have been based on all the sources present in the simulated signals and not only on the subset of sources whose true position, after the filtering, coincides with that of an observed peak.
\section{Specific probability of false alarm (SPFA)} \label{sec:spfa}
Contrary to what one could believe at first glance, the PFA given by Eq.~\eqref{eq:corra} does not provide the probability $\alpha$ that a specific detection is spurious but the probability that a generic peak due to the noise in $\mathcal{T}(\boldsymbol{x}, i_0)$
can exceed, by chance, the threshold $u$.
If $N_p$ peaks due to the noise are present in $ \mathcal{T}(\boldsymbol{x}, i_0)$, then a number $\alpha \times N_p$ among them is expected to exceed the prefixed detection threshold.
For example, if in $ \mathcal{T}(\boldsymbol{x}, i_0)$ there are $1000$ peaks, then there is a high probability that a detection with a PFA equal to $10^{-3}$ is spurious.
As a consequence, in spite of the low PFA, the reliability of the detection is actually small.
A possible strategy to avoid this problem is to fix a threshold $u$ such that $\alpha \times N_p \ll 1$.
However, in this way there is the concrete risk of being too conservative and missing some true detections. In the literature some procedures are available to alleviate this kind of problem. They are essentially of non-parametric type (i.e., they are not able to exploit all the available information). A popular approach is represented by the false discovery rate (FDR) \citep{mil01, hop02}. Here, we propose a parametric solution that consists in a
preselection based on the PFA and then in the computation of the probability of false detection for each specific detection. We call this quantity the specific probability of false alarm (SPFA).
This quantity can be computed by means of order statistics, in particular by exploiting the statistical characteristics of the greatest value of a finite sample of {\it independent and identically distributed} (iid) random variables from a given
PDF \citep{hog13}. Under the iid condition, the PDF $g(z_{\max})$ of the largest value among a set of $N_p$ peaks $\{ z_i \}$ is given by
\begin{equation} \label{eq:gz}
g(z_{\max}) = N_p \left[ \Psi(z_{\max}) \right]^{N_p-1} \psi(z_{\max}).
\end{equation}
Hence, the SPFA can be evaluated by means of
\begin{equation} \label{eq:intz}
\alpha = \int_{z_{\max}}^{\infty} g(z') dz'.
\end{equation}
Actually, since the peaks of a generic isotropic Gaussian random field tend to cluster, the iid condition is not necessarily valid. However, in situations where the two-point correlation function $\rho(r)$ of the noise is narrow with respect to the
area spanned by the data (a basic situation for the application of the MF), this condition can be expected to hold with good accuracy. The rationale is that two points with a distance $r$ such that $\rho(r) \approx 0$
are essentially independent. The same
holds for two generic peaks. Hence, most of the peaks can be expected to be approximately iid and Eq.~\eqref{eq:gz} is still applicable but possibly with an effective number $N^+_p < N_p$ \citep[see Sect. 6 in][]{may05}.
This last point is due to the dependence among a set of random variables, which lowers the effective number of degrees of freedom \footnote{The term degrees of freedom refers to the number of
items that can be freely varied in calculating a statistic without violating any constraints.}.
A way to measure the degree of dependence of the peaks is the two-point correlation function $\rho_p[d]$. This discrete function is computed on a
set of non-overlapping and contiguous distance bins of size $\Delta d$:
\begin{equation}
\rho_p[d]= \sum_{\substack{i,j = 1 \\ d - \Delta d/ 2 < t_i - t_j \le d + \Delta d/ 2 }}^{N_d} \frac{z[t_i] z[t_j]}{N_d} / \sum_{i=1}^{N_p} \frac{z[t_i] z[t_i]}{N_p},
\end{equation}
with $N_d$ the number of peak pairs with a distance within the range $(d - \Delta d/ 2, d + \Delta d/ 2]$. It measures the tendency of two peaks with similar value to be next to each other.
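A direct implementation of $\rho_p[d]$ for the one-dimensional case can read as follows. This is a sketch of our own; the bin edges are arbitrary, and each unordered pair is counted through the absolute distance $|t_i - t_j|$:

```python
import numpy as np

def peak_correlation(t, z, edges):
    """Discrete two-point correlation rho_p[d] of the peak values z at
    positions t, computed on contiguous distance bins given by edges."""
    norm = np.mean(z * z)                         # sum_i z_i^2 / N_p
    dist = np.abs(t[:, None] - t[None, :])        # pairwise distances
    zz = z[:, None] * z[None, :]
    rho = np.empty(len(edges) - 1)
    for b, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (dist > lo) & (dist <= hi)         # bin (lo, hi]
        rho[b] = zz[mask].mean() / norm if mask.any() else np.nan
    return rho
```

For iid peak values the function is approximately flat across the bins, which is the behavior checked against the bootstrap confidence bands discussed later.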
The numerical evaluation of integral~\eqref{eq:intz} does not present particular difficulties since
\begin{align}
\alpha & = N_p \int_{z_{\max}}^{\infty} \left[\Psi(z) \right]^{N_p-1} d\Psi(z); \\
& = \left[ \Psi(z) \right]^{N_p} \Big|_{z_{\rm max}}^{\infty}; \\
& = 1-\left[ \Psi(z_{\rm max}) \right]^{N_p}.
\end{align}
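In practice, the SPFA requires one numerical quadrature of $\psi(z)$ followed by the closed form above. A Python sketch for the one-dimensional peak PDF; the integration range and grid are our own choices:

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def psi(z, kappa=1.0):
    """One-dimensional peak PDF (same expression as in Sect. 2)."""
    z = np.asarray(z, float)
    k2 = kappa ** 2
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    Phi = 0.5 * (1.0 + _erf(kappa * z / np.sqrt(2.0 * (3.0 - k2))))
    return (np.sqrt(3.0 - k2) / np.sqrt(6.0 * np.pi)
            * np.exp(-3.0 * z ** 2 / (2.0 * (3.0 - k2)))
            + 2.0 * kappa * z * np.sqrt(np.pi) / np.sqrt(6.0) * phi * Phi)

def spfa(z_max, Np, kappa=1.0):
    """SPFA alpha = 1 - Psi(z_max)^Np, with Psi(z_max) obtained by
    trapezoidal integration of psi over (-inf, z_max]."""
    z = np.linspace(-12.0, float(z_max), 20001)
    y = psi(z, kappa)
    Psi = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))
    return 1.0 - Psi ** Np
```

For a fixed peak value, the SPFA grows with the number of noise peaks $N_p$, which is precisely the effect illustrated in Fig.~\ref{fig01}.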
If the number of signals present in $\mathcal{T}(\boldsymbol{x}, i_0)$ is unknown, the above procedure can be applied, in order of decreasing amplitude, to all peaks with a PFA smaller
than a prefixed $\alpha$, and reducing $N_p$ by one unit after any confirmed detection. The last step is based on the rationale that if a peak can be assigned to a signal $\boldsymbol{s}$ in $\boldsymbol{x}$, it can be removed from the set of the peaks related to the noise.
The importance of the SPFA is demonstrated by Fig.~\ref{fig01} where the PDF $g(z_{\max})$,
corresponding to the PDF $\psi(z)$ of the peaks of a stationary zero-mean unit-variance Gaussian random process with $\kappa=1$, is plotted for three different values of the sample size $N_p$,
i.e., $10^2$, $10^3$, and $10^4$. The color-filled areas provide the respective SPFA for a detection threshold $u$ corresponding to a PFA equal to $10^{-4}$.
It is evident that a detection threshold independent of
$N_p$ is not able to quantify the risk of a false detection. This figure also shows that
the determination of the number $N^+_p$ is not a critical operation since $g(z_{\max})$ is a slowly changing function of $N_p$ and, for weakly dependent peaks, $N_p \approx N_p^+$.
The SPFA is also useful because, by means of the Poisson-binomial distribution \footnote{We recall that the Poisson-binomial distribution is the probability distribution of the number of successes
in a sequence of $N$ independent experiments having only two possible outcomes (yes/no) with success probabilities $p_1, p_2, \ldots, p_N$. When $p_1 = p_2 = \ldots = p_N$, it coincides with the binomial distribution.},
it is possible to estimate the probability that the number of false detections $N_{\rm FD}$ is equal to or less than a given integer $k$ (see below).
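The Poisson-binomial PMF is conveniently built by repeated convolution. The following Python sketch, applied to the SPFA values quoted in Sect.~\ref{sec:experiment} for the four highest peaks, reproduces the most probable $N_{\rm FD}=2$ and ${\rm P}(N_{\rm FD} \leq 2) \approx 0.81$:

```python
import numpy as np

def poisson_binomial_pmf(p):
    """PMF of the number of successes in independent trials with
    success probabilities p, built by repeated convolution (O(N^2))."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

# SPFAs of the four highest peaks of the numerical experiment (Sect. 5)
p = [0.60, 0.42, 0.40, 0.24]
pmf = poisson_binomial_pmf(p)
most_probable = int(np.argmax(pmf))   # most probable N_FD -> 2
p_at_most_2 = float(pmf[:3].sum())    # P(N_FD <= 2) -> about 0.81
```

When all the $p_i$ are equal, the construction reduces to the ordinary binomial distribution, as noted in the footnote.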
\section{Extension of matched filter to the two-dimensional case} \label{sec:twodimensional}
In principle, the extension of MF to the two-dimensional signals $\boldsymbol{{\mathcal X}}$, $\boldsymbol{{\mathcal S}}$, and $\boldsymbol{{\mathcal N}}$ does not present particular difficulties. Indeed, it is sufficient to set
\begin{align}
\boldsymbol{s} & = {\rm VEC}[\boldsymbol{{\mathcal S}}]; \label{eq:stack1} \\
\boldsymbol{x} & = {\rm VEC}[\boldsymbol{{\mathcal X}}]; \label{eq:stack2} \\
\boldsymbol{n} & = {\rm VEC}[\boldsymbol{{\mathcal N}}], \label{eq:stack3}
\end{align}
where ${\rm VEC}[\boldsymbol{{\mathcal E}}]$ is the operator that transforms a matrix $\boldsymbol{{\mathcal E}}$ into a column array by stacking its columns one underneath the other,
to obtain a problem similar to that in Sect.~\ref{sec:MF1}. There are only two differences.
The first is the structure of the matrix $\boldsymbol{C}$. In the one-dimensional case the matrix $\boldsymbol{C}$ is of Toeplitz type, i.e., a matrix in which each descending diagonal from left to right is constant. In the two-dimensional case, however,
it becomes of block-Toeplitz with Toeplitz-blocks (BTTB) type, i.e., a matrix composed of Toeplitz blocks that repeat down its diagonals, just as the elements of a Toeplitz matrix repeat down its diagonals.
The second one concerns the PDF of the local maxima and their expected number per unit area that, in the case of a zero-mean unit-variance Gaussian isotropic noise, are given by \citep{che15a, che15b}
\begin{multline} \label{eq:pdf_z2}
\psi(z) = \sqrt{3} \kappa^2 (z^2-1) \phi(z) \Phi \left( \frac{\kappa z}{\sqrt{2 - \kappa^2}} \right) + \frac{\kappa z \sqrt{3 ( 2 - \kappa^2)}}{2 \pi} {\rm e}^{-\frac{z^2}{2 - \kappa^2}}\\
+\frac{\sqrt{6}}{\sqrt{\pi (3 - \kappa^2)}} {\rm e}^{-\frac{3 z^2}{2 (3-\kappa^2)}} \Phi\left( \frac{\kappa z}{\sqrt{(3 - \kappa^2) (2 - \kappa^2)}} \right)
\end{multline}
and
\begin{equation} \label{eq:number}
N^*_p = -\frac{\rho''(0)}{\pi \sqrt{3} \rho'(0)},
\end{equation}
respectively. However, a computational problem arises because, even for maps of moderate size, the covariance matrix $\boldsymbol{C}$
rapidly becomes huge. Hence, efficient numerical methods based on a Fourier approach have to be used, as in
\citet[][Chap.~5]{vog02}, \citet[][page $145$]{jai89}, \citet[][Appendix B]{els13}, and \citet[][Appendix A]{lag91}.
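As an illustration of the Fourier approach (not the specific implementations of the cited references), under periodic boundary conditions the stationary covariance becomes block-circulant and is diagonalized by the two-dimensional DFT, so the product $\boldsymbol{C}^{-1}\boldsymbol{g}$ reduces to a division by the noise power spectrum. A toy Python sketch of our own, with an arbitrary grid, template, and flat power spectrum:

```python
import numpy as np

def matched_filter_2d(X, g, P):
    """Fourier-domain matched filtering of a 2-D map X with a template g
    (given on the same grid, centred at pixel (0, 0) with periodic
    wrapping), for stationary noise with power spectrum P: with periodic
    boundaries C is block-circulant and diagonalized by the 2-D DFT."""
    G = np.fft.fft2(g)
    T = np.fft.ifft2(np.conj(G) * np.fft.fft2(X) / P)
    return T.real

# toy usage: white noise (flat power spectrum) and a planted source
rng = np.random.default_rng(2)
n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
r2 = np.minimum(i, n - i) ** 2 + np.minimum(j, n - j) ** 2
g = np.exp(-0.5 * r2 / 2.0 ** 2)              # Gaussian beam, sigma = 2 px
X = 0.1 * rng.standard_normal((n, n)) + 5.0 * np.roll(np.roll(g, 20, 0), 30, 1)
T = matched_filter_2d(X, g, np.ones((n, n)))
peak = np.unravel_index(np.argmax(T), T.shape)
```

The cost is that of a few FFTs instead of the direct inversion of an $N^2 \times N^2$ matrix.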
Analogously to Fig.~\ref{fig01}, Fig.~\ref{fig02} shows the importance of the SPFA in the two-dimensional case.
Here, the PDF $g(z_{\max})$, corresponding to the PDF $\psi(z)$ of the peaks of an isotropic two-dimensional zero-mean unit-variance Gaussian random field with $\kappa=1$, is plotted for three different values of the sample size $N_p$, i.e., $10^2$, $10^3$, and $10^4$. Again, the color-filled areas provide the respective SPFA for a detection threshold $u$ corresponding to a PFA equal to $10^{-4}$.
A final note concerns the condition of isotropy of the noise. There are situations where such a condition can be relaxed. In particular, this happens with correlation functions of the type
\begin{equation}
\rho(\boldsymbol{r})= f(\boldsymbol{r}^T \boldsymbol{A} \boldsymbol{r}),
\end{equation}
where $f(.)$ is a real function and $\boldsymbol{A} = \boldsymbol{B}^T \boldsymbol{B}$ with $\boldsymbol{B}$ a non-degenerate matrix. The random fields corresponding to this correlation function are not isotropic but only homogeneous (i.e., their statistical characteristics are invariant under spatial translation).
The arguments presented above can also be applied to this case given that an opportune rotation and rescaling of the axes can convert $\boldsymbol{r}^T \boldsymbol{A} \boldsymbol{r}$ into $\boldsymbol{r}^T \boldsymbol{r}$. Since these operations on the coordinate system
do not modify the values of the random field, the PDF of the peaks does not change
\footnote{The rotation and the rescaling of the coordinate system change only the number of peaks for unit area, which becomes $|{\rm Det}[ \boldsymbol{B} ]|^{1/2}$ times that of the isotropic case $\rho(\boldsymbol{r}) = f(\boldsymbol{r}^T \boldsymbol{r})$ (Cheng,
private communication). Here ${\rm Det}[.]$ denotes the determinant of a matrix.}.
\section{A numerical experiment} \label{sec:experiment}
To illustrate the usefulness of the proposed method, we apply it to a simulated $300 \times 300$ pixel map in which $30$ point sources, with a uniform random spatial distribution and a Gaussian profile with standard deviations along the horizontal and vertical directions set to $1.3$ and $1.8$ pixels, respectively (see Fig.~\ref{fig03}(a)), are
embedded in Gaussian white noise. All the point sources have the same amplitude, set equal to the standard deviation of the noise. This experiment simulates a very difficult situation in which the amplitude of the point sources is
well below the level of the noise. Indeed, in Fig.~\ref{fig03}(b), which shows the observed map, the sources are not even visible. In situations like this a matched filtering operation is unavoidable.
As seen above, the MF represents an optimal solution. However, a comparison of Figs.~\ref{fig03}(c) and ~\ref{fig03}(d), which show
the zero-mean unit-variance matched-filtered versions of the noise component and of the observed map, respectively, indicates that even after the MF operation the detection of the point sources remains problematic, since no strong peaks are evident.
Moreover, as discussed in Sect.~\ref{sec:morph}, after the matched filtering, the blob-shaped structures due to the noise have a shape similar to that of the filtered point sources, i.e.,
the morphological analysis cannot be used to detect the searched point sources.
Because of the asymmetric shape of the point sources,
the noise background of the filtered map is not isotropic but only homogeneous. This does not represent a problem given that the corresponding two-point autocorrelation function is a two-dimensional Gaussian function, i.e., of the type $\rho(\boldsymbol{r})= f(\boldsymbol{r}^T \boldsymbol{A} \boldsymbol{r})$
as in Sect.~\ref{sec:twodimensional}. A situation like this one reflects the results found in ALMA observations (see below).
The histogram of the pixel values of the matched filtered map
in Fig.~\ref{fig04}(a) is clearly compatible with a Gaussian PDF $\phi(x)$. The number of identified peaks is $1601$ and the estimated $\hat{\kappa}$ is about $1$. Figure~\ref{fig04}(b) shows that the corresponding
PDF $\psi(z)$ is in good agreement with the histogram $H(z)$ of the peak values. The iid condition for the peaks, which is necessary for the computation of the SPFA, is demonstrated in Fig.~\ref{fig05},
which shows the two-point correlation function of the peaks. As a preliminary detection threshold, a value
of $u=3.72$ was chosen. Eleven peaks exceed it. In Fig.~\ref{fig06} they are highlighted and indexed with an increasing number according to their amplitude. Among these, only five (\#4, \#6, \#8, \#10, and \#11)
correspond to a point source. In a situation like this one, an accurate quantification of the detection reliability is critical.
Since the PFA~\eqref{eq:fd1} gives $\Phi_c(3.72)\approx 10^{-4}$, according to the standard procedure all the peaks should be considered true detections with a high
confidence level. To a minor extent, the same also holds for the PFA~\eqref{eq:corra} since $\Psi_c(3.72)\approx 2.55 \times 10^{-3}$. On the other hand, with $u=5$, an often-used threshold, no peak would have been selected.
This supports the unreliability of a detection criterion based only on the threshold $u$.
The situation with SPFA is different. From Fig.~\ref{fig07}(a), which shows the Poisson-binomial distribution corresponding to the SPFAs of the selected peaks, it appears that the most probable number $N_{\rm FD}$ of false detections is
$N_{\rm FD}=7$. Moreover, Fig.~\ref{fig07}(b) shows that the probability that $N_{\rm FD} \leq 7$ is about $0.54$. This result is in good agreement with the true $N_{\rm FD} =6$.
If only the four highest peaks \#8 - \#11 are considered, with values of the SPFA equal to
$0.60$, $0.42$, $0.40$, and $0.24$, the most probable $N_{\rm FD}$ is $2$ and the probability that $N_{\rm FD} \leq 2$ is about $0.81$ (see Figs.~ \ref{fig07}(c)-(d)).
Among the four peaks, only the peak \#9 corresponds to a false detection, again in good agreement with the expected number.
It is useful to compare the proposed method with the popular approach based on the least-squares fit, on a small submap centered on a given peak, of a model constituted by the template $\boldsymbol{g}$ superimposed on a two-dimensional low-degree polynomial. According to this approach, the reliability of the detection is tested by comparing the estimated amplitude with a multiple of the standard deviation $\sigma_r$ of the residuals.
When applied to the $11$ peaks selected above, assuming a two-dimensional polynomial of first degree and using a submap of size $11 \times 15$ pixels,
in all of the cases the estimated amplitude is greater than the threshold set to $5 \sigma_r$. As a consequence, all of the peaks should be considered true detections with a high degree of reliability. Two more true point sources have been detected,
but at the cost of six false detections.
Another useful comparison is with the FDR technique mentioned in Sect.~\ref{sec:spfa}. With this approach it is possible to control the expected number of detections falsely declared significant as a proportion $\alpha_{\rm FDR}$ of the number of all tests declared significant. Figure~\ref{fig08} shows that with $\alpha_{\rm FDR}=0.2$ and $0.4$ the number of detected point sources is $9$ and $12$, respectively.
However, in the first case the number of false detections is $4$, whereas it is $7$ in the second case. In both cases more true point sources have been detected
than with the methodology based on the PDF $g(z_{\max})$, but at the cost of a larger fraction of false detections. Moreover, also here, no quantification is possible of the reliability of each specific detection.
Finally, it is worth noticing that the actual fractions of false detections are $0.44$ and $0.58$, well above the corresponding expected value $\alpha_{\rm FDR}$.
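For completeness, the FDR comparison above relies on the Benjamini-Hochberg step-up rule, on which astronomical FDR implementations are typically based. A compact Python sketch of our own:

```python
import numpy as np

def fdr_detect(pvals, alpha_fdr):
    """Benjamini-Hochberg step-up rule: flags the tests declared
    significant while controlling the expected fraction of false
    discoveries at level alpha_fdr."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    # compare the sorted p-values with the linear thresholds k/m * alpha
    below = p[order] <= alpha_fdr * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return keep
```

Unlike the SPFA, the rule only controls an expected proportion over the whole set of tests; it attaches no reliability to any individual detection.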
The conclusion is that only the SPFA is able to quantify the actual reliability of a given detection and should therefore be evaluated before planning follow-up observations to identify the source.
\section{Application to real ALMA maps} \label{sec:real}
We apply the aforementioned procedure to two interferometric maps, obtained at two frequencies with ALMA, with the aim of detecting the faint point sources in the field of the radio source \object{PKS0745-191}. These maps are characterized
by an excellent {\it uv}-plane coverage (see Fig.~\ref{fig09}). For this reason, the noise is expected to have a uniform spatial distribution over the entire area of interest, which is the basic condition for the proposed method to work.
The first map, (hereafter M1) was obtained at a frequency of $100 {\rm GHz}$ ($3 {\rm mm}$, ALMA band 3), whereas the second map (hereafter M2) was obtained at a frequency of
$340 {\rm GHz}$ ($0.87 {\rm mm}$, ALMA band 7).
Both maps, with a size of $256 \times 256$ pixels, are centered on the brightest cluster galaxy (BCG) of PKS0745-191 (RA 07:47:31.3, Dec -19:17:39.94, J2000), which was the target of the observations we used (ALMA ID = 2012.1.00837.S, PI R. McNamara). The data reduction was carried out with CASA version 4.2.2 and the ALMA reduction scripts \citep{mcm07}.
To produce the continuum maps, we selected and averaged only the channels free of the CO line emission, which was the original observation target.
The images were reconstructed via the CASA task \textit{clean} with Briggs weighting and a robust parameter of 1.5 in band 3 and 2 in band 7.
The beam size is 1.9” $\times$ 1.4” in band 3, and 0.27” $\times$ 0.19” in band 7.
In Figs.~\ref{fig10}(a) and (d) a bright source is visible in the central position of both maps, which corresponds to the BCG. Figures~\ref{fig10}(b) and (e) show that, when this source is masked, no additional sources become visible in M2, whereas a bright source appears in M1. This additional source has been identified as an IR source already known in the cluster, i.e., \object{2MASX J07473002-1917503} (RA 17:47:30.130, Dec -19:17:50.60, J2000). If this is also masked in M1, no additional sources are visible in Fig.~\ref{fig10}(c). Since we are interested in the detection of point sources with amplitudes comparable to the noise level, the pixels in the masked areas are not used in the analysis.
Most of the structures visible in Figs.~\ref{fig10}(c) and (e) are certainly not due to physical emission but to the noise. In Figs.~\ref{fig11}(a)-(b), the histograms $H(x)$ of the pixel values of these maps,
standardized to zero mean and unit variance, indicate that the noise is Gaussian but, as shown by the two-point autocorrelation functions in Figs.~\ref{fig12}(a)-(b), not white.
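The standardization and the Gaussianity/whiteness checks described above can be sketched as follows. This is a minimal stand-in for the actual ALMA maps: the map size and the smoothing kernel are illustrative assumptions used only to produce correlated (non-white) Gaussian noise.

```python
import random
import math

random.seed(0)
N = 64

# Synthetic stand-in for an interferometric map: white Gaussian noise
# smoothed with a 3x3 box kernel, which makes the noise correlated (non-white).
white = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]
smooth = [[sum(white[(i + di) % N][(j + dj) % N]
               for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
           for j in range(N)] for i in range(N)]

# Standardize to zero mean and unit variance, as done for the maps in the text.
flat = [v for row in smooth for v in row]
mean = sum(flat) / len(flat)
std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
z = [[(v - mean) / std for v in row] for row in smooth]
zf = [v for row in z for v in row]

# Sample skewness and excess kurtosis: both near 0 for Gaussian noise.
skew = sum(v ** 3 for v in zf) / len(zf)
kurt = sum(v ** 4 for v in zf) / len(zf) - 3.0

# Two-point autocorrelation along the horizontal direction: a nonzero
# value at lag 1 reveals that the noise is correlated, i.e., not white.
def autocorr(lag):
    s = sum(z[i][j] * z[i][(j + lag) % N] for i in range(N) for j in range(N))
    return s / (N * N)
```

For a 3-pixel-wide box kernel the lag-1 autocorrelation should come out near $2/3$ while skewness and excess kurtosis stay near zero, mirroring the behaviour of the histograms and autocorrelation functions in Figs.~\ref{fig11} and~\ref{fig12}.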
As a first step, the two maps were analyzed independently. As already underlined by \citet{vio16}, the use of the MF with these kinds of ALMA maps is unfeasible: the MF filters out the Fourier frequencies where the noise predominates and preserves those where the sought signals contribute more.
However, since the unresolved point sources and the blob-shaped structures due to the noise look similar, this filtering cannot work.
In this case $\mathcal{T}(\boldsymbol{x}, i_0) = \boldsymbol{x}$ and the detection test becomes a thresholding test according to which a peak in the map can be attributed to a point source
if it exceeds a given threshold. To apply the procedure introduced in Sect.~\ref{sec:twodimensional} it is necessary to test the isotropy of the noise field.
From Figs.~\ref{fig12}(a)-(b) this condition appears to be approximately satisfied. The small differences between the autocorrelation functions along the vertical and horizontal directions are due to the elliptical shape of the ALMA beam.
We identified $328$ peaks in M1 and $948$ in M2. The iid condition for the peaks is supported by the two-point correlation functions $\rho_p(r)$ in Figs.~\ref{fig13}(a) and (b), which are almost completely contained in their $95\%$ confidence band.
We obtained the confidence band by means of a bootstrap method based on the $95\%$ percentile envelopes of the two-point correlation functions computed from $1000$ resampled sets of peaks with the same spatial coordinates as in the original map but whose values have been randomly permuted.
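The permutation bootstrap used for the confidence band can be sketched as follows: peak positions are held fixed while peak values are randomly permuted, and the $95\%$ percentile envelope of the resulting two-point correlation functions provides the band. The peak coordinates, distance bins, and number of resamplings below are illustrative assumptions, not the values used for M1 and M2.

```python
import math
import random

random.seed(1)

# Hypothetical peaks: random positions in a 256 x 256 map with iid values.
npk = 200
xs = [(random.uniform(0, 256), random.uniform(0, 256)) for _ in range(npk)]
vals = [random.gauss(0.0, 1.0) for _ in range(npk)]

# Pair indices grouped into distance bins.
bins = [(0, 20), (20, 40), (40, 60), (60, 80)]
pairs = {b: [] for b in bins}
for i in range(npk):
    for j in range(i + 1, npk):
        d = math.hypot(xs[i][0] - xs[j][0], xs[i][1] - xs[j][1])
        for lo, hi in bins:
            if lo <= d < hi:
                pairs[(lo, hi)].append((i, j))

def rho(values):
    # Two-point correlation per bin: mean product of standardized pair values.
    m = sum(values) / len(values)
    s = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    z = [(v - m) / s for v in values]
    return [sum(z[i] * z[j] for i, j in pairs[b]) / len(pairs[b]) for b in bins]

observed = rho(vals)

# Permutation bootstrap: keep positions, permute values, take 95% envelopes.
nresamp = 500
samples = []
for _ in range(nresamp):
    perm = vals[:]
    random.shuffle(perm)
    samples.append(rho(perm))

band = []
for k in range(len(bins)):
    col = sorted(s[k] for s in samples)
    band.append((col[int(0.025 * nresamp)], col[int(0.975 * nresamp)]))
```

Since the peak values are iid by construction here, the observed $\rho_p(r)$ should fall inside the band in essentially all bins, which is the behaviour shown in Fig.~\ref{fig13} for the real maps.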
The reliability of the iid condition is confirmed by Fig.~\ref{fig14}, which shows the PDF of the largest peak value from a set of $256 \times 256$ pixel Gaussian random fields obtained by filtering $5000$ discrete white noise maps with a Gaussian filter with a dispersion of $3.7$ and $2.7$ pixels, respectively, in such a way as to approximately reproduce the noise in M1 and M2.
The maximum likelihood estimate~\eqref{eq:ml} provides $\hat{\kappa}\approx 1$ for both M1 and M2, a value typical of Gaussian two-point autocorrelation functions. This is an expected result for
ALMA given that, independently of the frequency, its PSF can be well approximated by bivariate Gaussian functions. The least-squares fit of the two-point correlation function $\rho(r)$
in Figs.~\ref{fig12}(a)-(b) with a Gaussian function supports this circumstance. As seen at the end of Sect.~\ref{sec:twodimensional}, this fact makes the small anisotropy of the noise background observed above even less relevant.
The PDFs $\psi(z)$ are shown in Figs.~\ref{fig15}(c) and (d). The agreement with the respective histograms $H(z)$ is good. A threshold $u \approx 3.98$, corresponding to a PFA~\eqref{eq:corra} equal to $10^{-3}$, provides two candidate point sources in M1 and four in M2. All these detections are highlighted in Figs.~\ref{fig15}(a) and (b).
They are indexed with an increasing number according to the source amplitude.
Even though the PFA is identical for all the sources, their detection reliability is different. Indeed, in M1 the SPFA is $8.8 \times 10^{-2}$ and $2.0 \times 10^{-2}$ for
the sources 1a and 2a, whereas in M2 it is $5.7 \times 10^{-1}$, $5.0 \times 10^{-1}$, $1.4 \times 10^{-1}$, and $1.3 \times 10^{-1}$ for the sources 1b-4b, respectively (see also Fig.~\ref{fig16}).
The Poisson-binomial distribution corresponding to these SPFAs indicates that for M1 the probability of $N_{\rm FD}=0$ is about $0.89$, whereas for M2 the probability that $N_{\rm FD} \leq 1$
is $0.58$ and becomes $0.92$ for $N_{\rm FD} \leq 2$.
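The Poisson-binomial computation behind these numbers can be reproduced directly from the quoted SPFAs. A minimal sketch (the probabilities below are the SPFA values quoted in the text):

```python
def poisson_binomial(ps):
    """Distribution of the number of false detections N_FD, given the
    per-source false-alarm probabilities (SPFAs)."""
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)   # this source is a true detection
            new[k + 1] += q * p       # this source is a false detection
        dist = new
    return dist

# SPFAs quoted in the text for maps M1 and M2.
m1 = poisson_binomial([8.8e-2, 2.0e-2])
m2 = poisson_binomial([0.57, 0.50, 0.14, 0.13])

p0_m1 = m1[0]                 # P(N_FD = 0) for M1
p_le1_m2 = m2[0] + m2[1]      # P(N_FD <= 1) for M2
p_le2_m2 = sum(m2[:3])        # P(N_FD <= 2) for M2
```

This reproduces $P(N_{\rm FD}=0)\approx 0.89$ for M1 and $P(N_{\rm FD}\leq 2)\approx 0.92$ for M2; the $P(N_{\rm FD}\leq 1)$ value comes out as $0.585$, i.e., the $0.58$ quoted above.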
As a final comment, we note that if the standard PFA~\eqref{eq:fd1} based on the Gaussian PDF $\phi(z)$ had been used, the chosen detection threshold $u$
would have produced $104$ detections in M1 and $294$ in M2.
Source 1a in M1 was identified as an already known object \object{USNO B1.0 0707-10151219} (RA 07:47:30.817, Dec -19:17:18.48, J2000). On the map M2, the two peaks 1b and 3b look like a single extended source or two very close sources. However, the separation of the two peaks is only $0.022'' \times 0.04''$, which is smaller than the beam size
and the two sources cannot be resolved. Therefore, these two peaks are probably due to a single extended source. Table~\ref{list} summarizes our results.
\begin{table*}
\noindent \centering{}\protect\caption{List of detected sources}\label{list}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline \hline
& & & & \\
Source ID & RA & Dec & Size (arcsec $\times$ arcsec) & Reference \\
\hline
& & & & \\
\textbf{Map M1/Band 3} & & & &
\\
\hline
1a & 07:47:30.817 & -19:17:18.48 & 1.76"$\times$1.57" & USNO B1.0 0707-10151219\\
\hline
2a & 07:47:30.520 & -19:17:23.39 & 2.01"$\times$1.85" & unknown \\
\hline
Bright source in Figure~\ref{fig10}(b) & 07:47:30.130 & -19:17:50.60 & 2.56"$\times$2.42" & 2MASX J07473002-1917503\\
\hline
& & & & \\
\textbf{Map M2/Band 7} & & & &
\\
\hline
1b & 07:47:31.206 & -19:17:37.38 & 0.32"$\times$0.26" & unknown \\
\hline
2b & 07:47:31.642 & -19:17:38.85 & 0.52"$\times$0.40" & unknown \\
\hline
3b & 07:47:31.228 & -19:17:37.35 & 0.35"$\times$0.34" & unknown\\
\hline
1b+3b (as a single source) & 07:47:31.220 & -19:17:37.35 & 0.44"$\times$0.32" & unknown \\
\hline
4b & 07:47:30.982 & -19:17:36.53 & 0.34"$\times$0.24" & unknown \\
\hline
\hline
& & & & \\
\textbf{Map M3/Band 3 + Band 7} & & & &
\\
\hline
\hline
1c & 07:47:31.618 & -19:17:35.464 & 0.39"$\times$0.18" & unknown \\
\hline
2c & 07:47:30.974 & -19:17:35.464 & 0.41"$\times$0.24" & unknown \\
\hline
3c & 07:47:31.192 & -19:17:37.33 & 0.57"$\times$0.31" & same as 1b+3b in M2 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
We computed the fluxes with the CASA statistics tool.
Since our sources are not resolved, to estimate the flux we measured the integrated flux over the PSF, which is assumed to have a Gaussian shape. For each source, we assign a
root mean square (RMS) value computed over the pixels around the source, which is a rough indication of the noise level. Results are presented in Tab.~\ref{flux1}.
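Because the sources are unresolved, the integrated flux over a Gaussian PSF reduces to $2\pi A\sigma_x\sigma_y$ for a peak amplitude $A$ and dispersions $\sigma_x$, $\sigma_y$. A small numerical check of this relation (the amplitude and widths below are arbitrary illustrative values, not fits to the actual sources):

```python
import math

# Illustrative Gaussian source: peak amplitude A with dispersions sx, sy
# (in pixels). For an unresolved source the integrated flux equals the
# volume under the PSF-shaped profile: 2 * pi * A * sx * sy.
A, sx, sy = 0.5, 3.0, 2.0
analytic = 2.0 * math.pi * A * sx * sy

# Direct summation over a pixel grid, mimicking an integrated-flux
# measurement on the map (a wide box so that truncation is negligible).
half = 20
numeric = sum(A * math.exp(-0.5 * ((i / sx) ** 2 + (j / sy) ** 2))
              for i in range(-half, half + 1)
              for j in range(-half, half + 1))
```

The pixel sum agrees with the analytic volume to high accuracy, which is why fitting a Gaussian profile and integrating it is a reasonable flux estimator for point sources.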
\begin{table*}
\noindent \centering{}\protect\caption{Source fluxes computed on the original images}\label{flux1}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline \hline
& & \\
\multicolumn{3}{|c|}{\textbf{Map M1/Band 3}} \\
\hline
& & \\
Source ID & Integrated flux (mJy) & RMS (mJy) \\
\hline
BCG PKS0745-191 & 9.00 & 0.18 \\
\hline
USNO B1.0 0707-10151219 & 0.49 & 0.06 \\
\hline
2a & 0.18 & 0.04 \\
\hline
2MASX J07473002-1917503 & 0.41 & 0.03 \\
\hline
& & \\
\multicolumn{3}{|c|}{\textbf{Map M2/Band 7}} \\
\hline
& & \\
Source ID & Integrated flux (mJy) & RMS (mJy)\\
\hline
BCG PKS0745-191 & 4.24 & 0.02 \\
\hline
1b & 0.78 & 0.06 \\
\hline
2b & 0.30 & 0.07 \\
\hline
3b & 0.37 & 0.06 \\
\hline
4b & 0.48 & 0.06 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
It is impossible to compute an integrated flux over sources 1b+3b as a single extended source because the shape of the resulting source is neither circular nor elliptical. The flux measurements are therefore subject to a large uncertainty that depends on the assumed source shape.
The correlation coefficient between maps M1 and M2 is only $2.1 \times 10^{-2}$, hence these maps can be considered statistically independent. They can therefore be combined
into a map (hereafter M3) in which the noise level is reduced, although this does not necessarily
improve the S/N of each point source. This operation was carried out with the algorithm Feather implemented in CASA \citep{mcm07}. This algorithm performs a weighted addition of M1 and M2 in the Fourier domain
in such a way as to obtain a map with the higher resolution of the two.
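Schematically, feathering performs a frequency-dependent weighted combination of the two maps' Fourier transforms, so that low spatial frequencies come from one map and high frequencies from the higher-resolution one. A toy one-dimensional sketch (the step-function weight below is an illustrative assumption; CASA's Feather uses the beams' actual Fourier responses):

```python
import cmath
import math

N = 32

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

# Toy signals: a low-frequency and a high-frequency component.
low = [math.cos(2 * math.pi * 1 * n / N) for n in range(N)]
high = [math.cos(2 * math.pi * 8 * n / N) for n in range(N)]

# The "low-resolution" map sees only the low frequency; the
# "high-resolution" map also passes the high frequency.
m_low = low
m_high = [l + h for l, h in zip(low, high)]

# Frequency-dependent weight: trust m_high above a cutoff, m_low below.
cutoff = 4
F_low, F_high = dft(m_low), dft(m_high)
combined = [F_high[k] if min(k, N - k) > cutoff else F_low[k]
            for k in range(N)]
feathered = idft(combined)

# The feathered signal recovers both components.
err = max(abs(f - (l + h)) for f, l, h in zip(feathered, low, high))
```

The combined signal carries the high-frequency (high-resolution) content of the second map while retaining the low-frequency content of the first, which is the essence of the feathered map M3.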
The zero-mean unit-variance version of M3 is shown in Fig.~\ref{fig10}(f). Given the different areas covered by M1 and M2, M3 covers an area similar to,
but slightly smaller than, that of M2. The corresponding histogram $H(x)$ of the pixel values is shown in
Fig.~\ref{fig11}(c), whereas the two-point correlation function is given
in Fig.~\ref{fig12}(c). In this map $948$ peaks are identified which, as shown by $\rho_p(r)$ in Fig.~\ref{fig13}(c), can be assumed to be independent and identically distributed with good accuracy. These peaks
provide a maximum likelihood estimate $\hat{\kappa}=0.96$. However, only three of these peaks have a PFA smaller than $10^{-3}$.
They are highlighted in Fig.~\ref{fig15}(f).
The SPFA is equal to $3.2 \times 10^{-1}$, $2.7 \times 10^{-1}$ and $4.0 \times 10^{-3}$ for the detections 1c-3c, respectively. The corresponding probability to have $N_{\rm FD} \leq 1$ is about $0.90$.
As before, if the standard PFA~\eqref{eq:fd1} had been adopted, the number of detections would have been $326$.
Two additional sources are detected in M3, while the source 3c is the same source as 3b (or 1b+3b) detected in M2.
The positions and sizes of the detected sources in M3 are reported in the bottom rows of Tab.~\ref{list}.
The outcome of our investigation implies the following: the sources detected in M2 are not visible in M1 because of the lower spatial resolution in band 3. Source 3c in M3 is the same as source 3b (1b+3b) in M2, while the sources 1c and 2c in M3 are undetected in both M1 and M2, even though faint structures are visible in M2 at the same spatial positions.
Moreover, contrary to the PFA, the SPFA permits us to quantify the real risk of a detection claim. Although the PFA of the sources 1b-2b in M2 is $\approx 8.9 \times 10^{-4} $ and $\approx 7.4 \times 10^{-4}$, respectively,
the SPFA indicates that these detections have a confidence level of $43\%$ and $50\%$ (see also Fig.~\ref{fig16}).
\section{Application to a real ATCA map} \label{sec:atca}
We apply the above method to quantify the detection reliability of point sources extracted from a $500 \times 500$ pixel map cropped from a radio image taken with the ATCA array toward the Large Magellanic Cloud at a frequency of $4.8~{\rm GHz}$ by
\citet{dic05}. This map is shown in Fig.~\ref{fig17}(a). The same map, standardized to zero mean and unit variance and with some bright sources masked, is shown in Fig.~\ref{fig17}(b).
The {\it uv}-plane coverage of this instrument is less homogeneous than that of ALMA (see Fig.~2 in \citet{dic05} for comparison with Fig.~\ref{fig09}), hence the noise background is expected to be less uniform. This is also visible in the map itself, which shows artificial structures introduced by the gridding algorithm.
In spite of this, as Fig.~\ref{fig18}(a) shows, the histogram $H(x)$ of the value of the pixels indicates that the noise is Gaussian. In the map there are $2580$ peaks whose histogram $H(z)$ is shown in Fig.~\ref{fig18}(b).
The isotropy of the noise background is supported by Fig.~\ref{fig19}, which compares the autocorrelation functions along the vertical and horizontal directions
with the two-point correlation function $\rho(r)$. The two-point correlation function $\rho_p(r)$ of the peaks is shown in Fig.~\ref{fig20}. A weak correlation, probably due to the weighting method applied to the map to fill the gaps in the {\it uv}-plane coverage, is present only for small inter-point distances.
Hence, the iid condition for the peaks is also approximately satisfied. The estimated $\hat{\kappa}$ is $\approx 1$ with the corresponding PDF $\psi(z)$ shown in Fig.~\ref{fig18}(b).
All these results, together with the good agreement of $H(z)$ with $\psi(z)$, indicate that, in spite of the poorer {\it uv}-plane coverage, the method can be applied to the ATCA map too.
With a threshold $u$ set to $\approx 4.5$, corresponding to a PFA
equal to $1.25 \times 10^{-4}$, $11$ candidate point sources are detected, the first five with an SPFA of $0.20$, $0.13$, $0.10$, $0.02$, and $0.01$, respectively, and the remaining ones with values well below $10^{-3}$. These values correspond
to a small risk of false detection given that the probability of $N_{\rm FD} = 0$ is about $0.6$, which becomes $0.93$ for $N_{\rm FD} \leq 1$. As a consequence, these peaks are good candidates for follow-up observations. The sources \#1, \#3, \#6, and \#7 can be identified with the SSTSL2 sources reported by \citet{mau03}. The corresponding identifiers are \object{SUMSS J051432-685446}, \object{SSTSL2 J051230.46-682802.0}, \object{SSTSL2 J051555.19-690746.2}, and
\object{SSTSL2 J051230.46-682802.0}.
\section{Final remarks} \label{sec:conclusions}
In this paper we have reconsidered the procedure suggested by \citet{vio16} for the correct computation of the probability of false alarm (PFA) of the matched filter (MF), extending it to the case of weak signals with unknown position. In particular,
we showed that although the PFA is useful for a preliminary selection of the candidate detections, it is not able to quantify the real risk of claiming a specific detection. For this reason we introduced a new quantity, called the
specific probability of false alarm (SPFA), which provides this kind of information.
We applied this procedure to two ALMA maps at two different frequencies and we highlighted the presence of $7$ potential new point sources (2 in M1, 3 in M2, and 2 in M3). The same procedure applied to an ATCA map provided
$11$ potential point sources.
\begin{acknowledgements}
This research has been supported by an ESO DGDF Grant 2014 and R.V. thanks ESO for hospitality. \\
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2012.1.00837.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\end{acknowledgements}
\section{Configuration of the Systems and Methodology}\label{sec:config_details}
To facilitate repeatability of our experimental results,
we describe how we configured each of the tested systems.
For BANKS, we tested both of its variants, but the figures show the
running times of the more efficient one, namely, bidirectional search.
We disabled the search of keywords in the metadata, because it is not
available in some of the other systems. The size of the heap was set
to $20$, which means that BANKS generated $120$ answers in order to output $100$.
For the sake of a fair comparison, GTF was also set to generate $20$ more
answers than it was required to output.
In addition,
BANKS produced the answers without duplicates, with respect to
undirected semantics. Therefore, we also executed GTF
until it produced $120$, $320$ or $1,020$ answers without
duplicates.\footnote{Duplicate elimination in GTF
is done by first
removing the keyword nodes so that the structure of the answers is
the same as in BANKS.} For clarity's sake,
Section~\ref{sec:experiments} omits these details and describes all
the algorithms as generating $100$, $300$ and $1,000$ answers. In
BANKS, as well as in GTF,
redundant answers (i.e.,~those having a root with a single child)
were discarded, that is, they were not included among the generated answers.
The other systems produced $100$, $300$ or $1,000$ answers according to their default
settings.
For SPARK, we used the block-pipeline algorithm, because it was their
best on our datasets. The size of candidate networks (CN)
was bounded by 5. The \emph{completeness factor} was set to 2,
because the authors reported that this value enforces the AND
semantics for almost all queries.
In BLINKS, we used a random partitioner. The minimum size of a
partition was set to 5, because a bigger size caused an out-of-memory
error. BLINKS stores its indexes in the main memory; therefore, when
running BLINKS, we reserved 20GB (instead of 10GB) for the Java heap.
BANKS and SPARK use PostgreSQL and MySQL, respectively.
In those two DBMSs, the default buffer size is tiny.
Hence, we increased it to 5GB (i.e.,~the parameters \texttt{shared\_buffers} of
PostgreSQL and \texttt{key\_buffer} of MySQL were set to 5GB).
For each dataset, we first executed a long query to warm
the system and only then did we run the experiments on that dataset.
To allow systems that use a DBMS to take full advantage of the buffer
size, we made two consecutive runs on each dataset (i.e.,~executing
once the whole sequence of queries on the dataset followed immediately
by a second run). This was done for \emph{all} the systems.
The reported times are those of the second run.
The running times do not include the parsing stage or
translating answers into a user-readable format.
For BLINKS, the time needed to construct the indexes is ignored.
\section{Proof of Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS}}\label{sec:proofLemmaSMP}
\setcounter{theorem}{2}
\lemmaSPM*
\begin{proof}
Suppose that the lemma is not true for some keyword $k\in K$.
Let $v$ be a closest node to $k$
among all those violating
the lemma with respect to $k$.
Node $v$ is different from $k$, because the path
$\anset{k}$ marks $k$ as $\textit{visited}$. We will derive a contradiction by
showing that a minimal path changes $v.\mathit{marks[k]}$
from $\textit{active}$ to $\textit{visited}$.
Let $p_s[v,k]$ be a minimal path from $v$ to $k$.
Consider the iteration $i$ of the main loop
(line~\ref{alg:gtf:mainLoop_start} in Figure~\ref{alg:gtf}) that changes
$v.\mathit{marks[k]}$ to $\textit{visited}$ (in line~\ref{alg:gtf:markVisited}).
Among all the nodes of $p_s[v,k]$
in which suffixes of some minimal paths from $v$ to $k$ are
frozen at the beginning of iteration $i$, let $z$ be the first one
when traversing $p_s[v,k]$ from $v$ to $k$
(i.e.,~on the path $p_s[v,z]$, node $z$ is the only one in which
such a suffix is frozen). Node $z$ exists for the following three reasons.
\begin{itemize}
\item The path $p_s[v,k]$ has not been exposed prior to
iteration $i$, because we assume that $v.\mathit{marks[k]}$ is
changed to $\textit{visited}$ in iteration $i$ and that change can happen only once.
\item The path $p_s[v,k]$ is acyclic (because it is minimal), so
a suffix of $p_s[v,k]$ could not have been discarded either
by the test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}.
\item The path $p_s[v,k]$ (or any suffix thereof) cannot be on the
queue at the beginning of iteration $i$, because $v$ violates the
lemma, which means that a non-minimal path from $v$ to $k$ must be
removed from the queue at the beginning of that iteration.
\end{itemize}
The above three observations imply that a proper suffix of $p_s[v,k]$
must be frozen at the beginning of iteration $i$ and, hence, node $z$ exists.
Observe that $z$ is different from $v$, because a path to $k$
can be frozen only at a node $\hat v$, such that
${\hat v}.\mathit{marks[k]}=\textit{visited}$, whereas we assume that
$v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of iteration $i$.
By the selection of $v$ and $p_s[v,k]$ (and the above fact that $z\not=v$),
node $z$ does not violate the lemma, because $p_s[z,k]$ is a proper
suffix of $p_s[v,k]$ and, hence, $z$ is closer to $k$ than $v$.
Therefore, according to the lemma, there is a minimal path $p_m[z,k]$
that changes $z.\mathit{marks[k]}$ to $\textit{visited}$. Consequently,
\begin{equation}\label{eqn:min-path}
w(p_m[z,k]) \leq w(p_s[z,k]).
\end{equation}
Now, consider the path
\begin{equation}\label{eqn:new-path}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation}
Since $p_s[v,k]$ is a minimal path from $v$ to $k$,
Equations~(\ref{eqn:min-path}) and~(\ref{eqn:new-path}) imply that
so is $\bar{p}[v,k]$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $i$. In particular,
Condition~\ref{cond:new-first} holds, because $p_s[v,k]$ is acyclic
(since it is minimal) and, hence, so is the path $p_s[v,z]$.
Condition~\ref{cond:new-second} is satisfied, because of how
$p_m[z,k]$ is defined. Condition~\ref{cond:first} holds, because we chose
$z$ to be a node where a path to $k$ is frozen.
Condition~\ref{cond:second} is satisfied, because of how $z$ was
chosen and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} is satisfied, because we have assumed
that $v.\mathit{marks[k]}$ is changed from $\textit{active}$ to $\textit{visited}$ during
iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of
$\bar p[v,k]$ must be on the queue at the beginning of iteration
$i$. This contradicts our assumption that a non-minimal path (which
has a strictly higher weight than any suffix of $\bar p[v,k]$) changes
$v.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$ in iteration $i$.
\end{proof}
\section{The Need for Essential Paths }\label{sec:looping_paths}
In this section, we give an example showing that essential cyclic paths must
be constructed by the GTF algorithm, in order not to miss some answers.
Suppose that we modify the test of line~\ref{alg:gtf:testEssential}
of Figure~\ref{alg:gtf} to be ``$v'$ is not on $p$'' (i.e.,~we omit
the second part, namely, ``$v' \rightarrow p$ is essential'').
Note that the new test means that only acyclic paths are inserted into $Q$.
\input{fig_looping_paths}
Consider the data graph of Figure~\ref{fig:freezing_looping_pahts}(a).
The weights of the nodes are 1 and the weight of each edge appears next to it.
Suppose that the query is $\{k_1,k_2\}$.
We now show that the modified GTF algorithm would miss
the answer presented in Figure~\ref{fig:freezing_looping_pahts}(c),
where the root is $r$.
The initialization step of the algorithm inserts into $Q$
the paths $k_1$ and $k_2$, each consisting of a single keyword.
Next, we describe the iterations of the main loop.
For each one, we specify the path that is removed from $Q$
and the paths that are inserted into $Q$.
We do not mention explicitly how the marks are changed,
unless it eventually causes freezing.
\begin{enumerate}
\item
The path $k_1$ is removed from $Q$, and
the paths $a \rightarrow k_1$ and $b \rightarrow k_1$ are inserted into $Q$.
\item
The path $k_2$ is removed from $Q$ and
the path $r \rightarrow k_2$ is inserted into $Q$.
\item
The path $b \rightarrow k_1$ is removed (since its weight is the
lowest on $Q$), and $r \rightarrow b \rightarrow k_1$ and
$p_l = c \rightarrow b \rightarrow k_1$ are inserted
(the latter is shown in blue in Figure~\ref{fig:freezing_looping_pahts}(a)).
\item
The path $a \rightarrow k_1$ is removed from $Q$ and the path
$p_s= c \rightarrow a \rightarrow k_1$ (shown in red) is inserted into $Q$.
\item
The path $p_l$ is removed from $Q$ and
$c.\mathit{marks}[k_1]$ is changed to $\textit{visited}$; then the path $d \rightarrow p_l$
is inserted into $Q$.
\item
The path $r \rightarrow b \rightarrow k_1$ is
removed from $Q$ and nothing is inserted into $Q$.
\item
The path $p_s$ is removed from $Q$ and freezes at node $c$.
\item
The path $d \rightarrow p_l$ is removed from $Q$.
It can only be extended further to node $b$, but that would create a cycle,
so nothing is inserted into $Q$.
\end{enumerate}
Eventually, node $r$ is discovered to be a $K$-root. However,
$c.\mathit{marks}[k_1]$ will never be changed from $\textit{visited}$ to $\textit{in-answer}$
for the following reason.
The minimal path that first visited $c$ (namely, $p_l$) must make a
cycle to reach $r$. Therefore, the path $p_s$ remains frozen at
node $c$ and the answer of Figure~\ref{fig:freezing_looping_pahts}(c)
will not be produced.
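The priority-queue mechanics used in this walkthrough can be sketched as a best-first expansion of paths from a keyword node, in which the first path to reach a node is minimal and any later (non-minimal) arrival is a candidate for freezing rather than being discarded. The graph and weights below are hypothetical, not those of Figure~\ref{fig:freezing_looping_pahts}(a).

```python
import heapq

# Hypothetical undirected graph: node -> list of (neighbor, edge weight).
graph = {
    'k1': [('a', 1), ('b', 2)],
    'a':  [('k1', 1), ('c', 1)],
    'b':  [('k1', 2), ('c', 1), ('r', 3)],
    'c':  [('a', 1), ('b', 1), ('d', 1)],
    'd':  [('c', 1)],
    'r':  [('b', 3)],
}

def expand(keyword):
    """Best-first expansion of paths ending at `keyword`, by increasing
    weight.  Returns the minimal path weight per node, plus the
    non-minimal paths that a freezing algorithm would park instead of
    dropping."""
    queue = [(0, [keyword])]      # (weight, path from node back to keyword)
    minimal = {}                  # node -> weight of its minimal path
    frozen = []                   # non-minimal arrivals (freezing candidates)
    while queue:
        w, path = heapq.heappop(queue)
        head = path[0]
        if head in minimal:
            frozen.append((w, path))   # head already visited: freeze
            continue
        minimal[head] = w
        for nxt, ew in graph[head]:
            if nxt not in path:        # extend with acyclic paths only
                heapq.heappush(queue, (w + ew, [nxt] + path))
    return minimal, frozen

minimal, frozen = expand('k1')
```

In GTF the frozen paths are re-activated only if their head node turns out to lie on a path to a $K$-root, which is what avoids constructing most non-minimal paths up front.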
\section{Conclusions}
We presented the GTF algorithm for enumerating, by increasing height,
answers to keyword search over data graphs. Our main contribution is
the freezing technique for avoiding the construction of (most if not
all) non-minimal paths until it is determined that they can reach
$K$-roots (i.e.,~potentially be parts of answers). Freezing is an
intuitive idea, but its incorporation in the GTF algorithm involves
subtle details and requires an intricate proof of correctness. In
particular, cyclic paths must be constructed (see
Appendix~\ref{sec:looping_paths}), although they are not part of any
answer. For efficiency's sake, however, it is essential to limit the
creation of cyclic paths as much as possible, which is accomplished by
lines~\ref{alg:gtf:reassignFalse} and~\ref{alg:gtf:testEssential} of
Figure~\ref{alg:gtf}.
Freezing is not merely of theoretical importance. Our extensive
experiments (described in Section~\ref{sec:experiments_sum}
and Appendix~\ref{sec:experiments}) show
that freezing increases efficiency by up to about one order of magnitude
compared with the naive approach (of Section~\ref{sec:naive}) that
does not use it.
The experiments of Section~\ref{sec:experiments_sum} and
Appendix~\ref{sec:experiments}
also show that in
comparison to other systems, GTF is almost always the best, sometimes
by several orders of magnitude.
Moreover, our algorithm is more scalable than other systems.
The efficiency of GTF is a significant achievement especially in light
of the fact that it is complete (i.e.,~does not miss answers).
Our experiments show that some of the other systems
sacrifice completeness for the sake of efficiency.
Practically, it means that they generate longer paths resulting
in answers that are likely to be less relevant than the missed ones.
The superiority of GTF over ParLMT is an indication that polynomial
delay might not be a good yard stick for measuring the practical
efficiency of an enumeration algorithm. An important topic for future
work is to develop theoretical tools that are more appropriate for
predicting the practical efficiency of those algorithms.
\section{Summary of the Experiments}\label{sec:experiments_sum}
In this section, we summarize our experiments.
The full description of the methodology and results is
given in Appendix~\ref{sec:experiments}.
We performed extensive experiments to measure the efficiency of GTF.
The experiments were done on the
Mondial\footnote{http://www.dbis.informatik.uni-goettingen.de/Mondial/}
and DBLP\footnote{http://dblp.uni-trier.de/xml/} datasets.
To test the effect of freezing, we ran the naive approach (described
in Section~\ref{sec:naive}) and GTF on both datasets. We measured the
running times of both algorithms for generating the top-$k$ answers
($k=100, 300, 1000$). We discovered that the freezing technique gives
an improvement of up to about one order of magnitude. It has a greater
effect on Mondial than on DBLP, because the former is highly cyclic
and, therefore, has more paths (on average) between a pair of nodes.
Freezing has a greater effect on long queries than short ones. This is
good, because the bigger the query, the longer it takes to produce its
answers. This phenomenon is due to the fact that the average height
of answers increases with the number of keywords. Hence, the naive
approach has to construct longer (and probably more) paths that do not
contribute to answers, whereas GTF avoids most of that work.
In addition, we compared the running times of GTF with those of
BANKS~\cite{icdeBHNCS02,vldbKPCSDK05}, BLINKS~\cite{sigmodHWYY07},
SPARK~\cite{tkdeLWLZWL11} and ParLMT~\cite{pvldbGKS11}. The last one
is a parallel implementation of~\cite{sigmodGKS08}; we used its
variant ES (early freezing with single popping) with 8 threads. BANKS
has two versions, namely, MI-BkS~\cite{icdeBHNCS02} and
BiS~\cite{vldbKPCSDK05}. The latter is faster than the former by up to one
order of magnitude and we used it for the running-time comparison.
GTF is almost always the best, except in two particular cases.
First, when generating $1,000$ answers over Mondial, SPARK is
better than GTF by a tiny margin on queries with 9 keywords, but is
slower by a factor of two when averaging over all queries. On DBLP, however,
SPARK is slower than GTF by up to two orders of magnitude. Second, when
generating $100$ answers over DBLP, BiS is slightly better than GTF on
queries with 9 keywords, but is 3.5 times slower when averaging over all
queries. On Mondial, however, BiS is slower than GTF by up to one
order of magnitude. All in all, BiS is the second best algorithm in most
of the cases. The other systems are slower than GTF by one to two
orders of magnitude.
Not only is our system faster, it is also
increasingly more efficient as either the number of generated answers
or the size of the data graph grows. This may seem counterintuitive,
because our algorithm is capable of generating all paths (between a
node and a keyword) rather than just the minimal one(s). However, our
algorithm generates non-minimal paths only when they can potentially
contribute to an answer, so it does not waste time on doing useless
work. Moreover, if only minimal paths are constructed, then longer
ones may be needed in order to produce the same number of answers,
thereby causing more work compared with an algorithm that is capable
of generating all paths.
GTF does not miss answers (i.e.,~it is capable of generating all of them).
Among the other systems we tested,
ParLMT~\cite{pvldbGKS11} has this property and is theoretically
superior to GTF, because it enumerates
answers with polynomial delay (in a 2-approximate order of increasing height),
whereas the delay of GTF could be exponential. In our experiments,
however, ParLMT was slower by two orders of magnitude, even though it
is a parallel algorithm (that employed eight cores in our tests).
Moreover, on a large dataset, ParLMT ran out of memory when the query had seven
keywords. The big practical advantage of GTF over ParLMT
is explained as follows.
The former constructs paths incrementally whereas the latter
(which is based on the Lawler-Murty procedure~\cite{mansciL72,orM68})
has to solve a new optimization problem for each produced answer,
which is costly in terms of both time and space.
A critical question is how important it is to have an algorithm that
is capable of producing all the answers. We compared our algorithm
with BANKS. Its two versions only generate answers consisting
of minimal paths and, moreover, those produced by BiS have distinct
roots. BiS (which is overall the second most efficient system in our
experiments) misses between $81\%$ (on DBLP) and $95\%$ (on Mondial) of
the answers among the top-$100$ generated by GTF. MI-BkS misses far
fewer answers, that is, between $1.8\%$ (on DBLP) and $32\%$ (on
Mondial), but it is slower than BiS by up to one order of magnitude. For
both versions the percentage of misses increases as the number of
generated answers grows. This is a valid and significant comparison,
because our algorithm generates answers in the same order as BiS and
MI-BkS, namely, by increasing height.
\section{Experiments}\label{sec:experiments}
In this appendix, we discuss the methodology and results of the experiments.
First, we start with a description of the datasets and the queries.
Then, we discuss the setup of the experiments.
Finally, we present the results.
This appendix expands the summary given in Section~\ref{sec:experiments_sum}.
\subsection{Datasets and Queries\label{sec:datasets}}
Most of the systems we tested are capable of parsing a relational
database to produce a data graph. Therefore, we selected this
approach. The experiments were done on
Mondial\footnote{http://www.dbis.informatik.uni-goettingen.de/Mondial/}
and DBLP\footnote{http://dblp.uni-trier.de/xml/} that are commonly
used for testing systems for keyword-search over data graphs.
Mondial is a highly connected graph.
We specified 65 foreign keys in Mondial,
because the downloaded version lacks such definitions.
DBLP has only an XML version that is available for download. In
addition, that version is just a bibliographical list. In order to
make it more meaningful, we modified it as follows. We replaced
the citation and cross-reference elements with IDREFS. Also, we
created a unique element for each \emph{author, editor} and
\emph{publisher}, and replaced each of their original occurrences with
an IDREF. After doing that, we transformed the XML dataset into a
relational database. To test scalability, we also created a subset of DBLP.
The two versions of DBLP are called \emph{Full DBLP} and \emph{Partial DBLP}.
Table~\ref{tbl:dataset_stats} gives the sizes of the data graphs. The
numbers of nodes and edges exclude the keyword nodes and their
incident edges. The average degree of the Mondial
data graph is the highest, due to its high connectivity.
We manually created queries for Mondial and DBLP.
The query size varies from 2 to 10 keywords, and there
are four queries of each size.
\begin{table}[t]
\centering
\caption{\label{tbl:dataset_stats}The sizes of the data graphs}
\begin{tabular}{l | c | c | c |}
Dataset & Nodes & Edges & Average degree\\ \hline
Mondial & 21K & 86K & 4.04\\
Partial DBLP & 649K & 1.6M & 2.48\\
Full DBLP & 7.5M & 21M & 2.77\\
\hline
\end{tabular}
\end{table}
In principle, answers should be ranked according to their weight.
However, answers cannot be generated efficiently by increasing weight.
Therefore, a more tractable order (e.g.,~by increasing height) is used
for generating answers, followed by sorting according to the desired ranking.
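This generate-then-rerank pattern can be sketched as follows. The tuple layout of the answers, the generator, and the buffer-size factor are assumptions made for this sketch, not part of GTF itself.

```python
def top_k_by_weight(answers_by_height, k, slack=3):
    """Buffer answers produced in a tractable order (e.g., increasing
    height), then re-rank the buffered prefix by the desired ranking.

    `answers_by_height` is assumed to be an iterable of
    (height, weight, answer) tuples already sorted by height;
    `slack` controls how many answers to buffer before sorting.
    """
    buffer = []
    for item in answers_by_height:
        buffer.append(item)
        if len(buffer) >= slack * k:
            break
    # Sort only the buffered prefix by weight and keep the top k.
    return sorted(buffer, key=lambda item: item[1])[:k]
```

The sorted prefix approximates the true weight order; the larger the slack factor, the closer the two orders agree.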
We employed the weight function described in~\cite{pvldbGKS11},
because based on our experience it is effective in practice.
The other tested systems have their own weight functions.
We have given the reference to the weight function
so that the presentation of our work is complete.
However, this paper is about the efficiency of search algorithms, rather than their effectiveness.
Therefore, measuring recall and precision is beyond the scope of this paper.
\subsection{The Setup of the Experiments}\label{sec:setup}
We ran the tests on a server with two quad-core, 2.67GHz Xeon X5550
processors and 48GB of RAM. We used Linux Debian (kernel 3.2.0-4-amd64),
Java 1.7.0\_60, PostgreSQL 9.1.13 and MySQL 5.5.35.
The Java heap was allocated 10GB of RAM.
Systems that use a DBMS were given an additional 5GB of RAM for the buffer (of the DBMS).
The tests were executed on BANKS~\cite{icdeBHNCS02,vldbKPCSDK05},
BLINKS~\cite{sigmodHWYY07}, SPARK~\cite{tkdeLWLZWL11}, GTF (our
algorithm) and ParLMT~\cite{pvldbGKS11}. The latter is a parallel
implementation of~\cite{sigmodGKS08}; we used its
variant ES (early freezing with single popping) with 8 threads.
BANKS has two versions, namely, MI-BkS~\cite{icdeBHNCS02} and
BiS~\cite{vldbKPCSDK05}. The latter is faster than the former by one
order of magnitude and we used it for the running-time comparison.
We selected these systems because their code is available.
Testing the efficiency of the systems is a bit like comparing apples
and oranges. They use different weight functions and produce answers
in dissimilar orders. Moreover, not all of them are capable of
generating all the answers. We have striven to make the tests as equitable
as possible.
The exact configuration we used for each system, as well as some other details,
are given in Appendix~\ref{sec:config_details}.
We measured the running times as a function of the query size.
The graphs show the average over the four queries of each size.
The time axis is logarithmic, because there are order-of-magnitude differences
between the fastest and slowest systems.
\subsection{The Effect of Freezing}
To test the effect of freezing, we ran the naive approach (described in Section~\ref{sec:naive})
and GTF on all the
datasets. Usually, a query has thousands of answers,
because of multiple paths between pairs of nodes. We executed the
algorithms until the first $100$, $300$ and $1,000$ answers were
produced.
Then, we measured the speedup of GTF over the naive approach.
Typical results are shown in
Figure~\ref{fig:GTFSpeedup}.
Note that smaller queries are not necessarily subsets of
larger ones. Hence, long queries may be computed faster
than short ones. Generally, the computation time depends on the
length of the query and the percentage of nodes in which the
keywords appear.
\input{runtime_charts_freezing_b}
The freezing technique gives an improvement of up to about one order of
magnitude. It has a greater effect on Mondial than on DBLP, because
the former is highly cyclic and, therefore, there are more paths (on
average) between a pair of nodes. The naive approach produces all those paths,
whereas GTF freezes the construction of most of them until they are needed.
On all datasets, freezing has a greater effect on long queries than
short ones. This is good, because the bigger the query, the longer it
takes to produce its answers. This phenomenon is due to the fact that the
average height of answers increases with the number of
keywords. Hence, the naive approach has to construct longer (and
probably more) paths that do not contribute to answers, whereas GTF
avoids most of that work. In a few cases, GTF is slightly slower than
the naive approach, due to its overhead.
However, this happens for short queries that are computed very fast anyhow;
so, this is not a problem.
\input{runtime_charts_mondial_rdb}
\subsection{Comparison with Other Systems}\label{sec:comparison}
In this section, we compare the running times of the systems we
tested. We ran each system to produce $100$, $300$ and $1,000$ answers
for all the queries and datasets.
Typical results are shown in
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeDblp100}.
GTF is almost always the best, sometimes by several orders of magnitude,
except for a few cases in which it is second by a tiny margin.
When we ran SPARK, many of its answers contained only some of the keywords,
even though we set it up for the AND semantics (to be the same as the other
systems).
On Mondial, SPARK is generally the second best.
Interestingly, its running time increases only slightly as the number of
answers grows. In particular, Figure~\ref{fig:runtimeMondial1000} shows that for
$1,000$ answers, SPARK is the second best by a wide margin, compared with
BANKS, which is third.
The reason for that is the way SPARK works. It produces expressions
(called \emph{candidate networks}) that are evaluated over a database.
Many of those expressions are \emph{empty}, that is, they produce no answers at all.
Mondial is a small dataset, so the time to compute a non-empty expression
is not much longer than computing an empty one. Hence, the running time hardly
depends on the number of generated answers.
However, on the partial and full versions of DBLP (which are large
datasets), SPARK is much worse than BANKS, even when producing only
$100$ answers (Figures~\ref{fig:runtimePdblp100}
and~\ref{fig:runtimeDblp100}). We conclude that by and large BANKS
is the second best.
In Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeMondial1000},
the advantage of GTF over BANKS grows as more answers are produced.
Figures~\ref{fig:runtimePdblp100} and~\ref{fig:runtimeDblp100} show
that the advantage of GTF increases as the dataset becomes larger. We
conclude that GTF is more scalable than other systems when either the
size of the dataset or the number of produced answers is increased.
The running time of ParLMT grows linearly with the number of answers,
causing the gap with the other systems to get larger (as more answers
are produced). This is because ParLMT solves a new optimization
problem from scratch for each answer. In comparison, BANKS and GTF
construct paths incrementally, rather than starting all over again at
the keyword nodes for each answer.
ParLMT got an out-of-memory exception when computing long queries on
Full DBLP (hence, Figure~\ref{fig:runtimeDblp100} shows the
running time of ParLMT only for queries with at most six keywords).
This means that ParLMT is memory-inefficient compared with the other systems.
BLINKS is the slowest system. On Mondial, the average running time to
produce 100 answers is 46 seconds. Since this is much worse than the
other systems,
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeMondial1000} do not
include the graph for BLINKS. On Partial DBLP,
BLINKS is still the slowest (see Figure~\ref{fig:runtimePdblp100}).
On Full DBLP, BLINKS got an out-of-memory exception
during the construction of the indexes.
This is not surprising, because BLINKS stores its indexes solely in main memory.
\input{runtime_charts_dblps_rdb}
\input{tbl_coef_var}
Table~\ref{tbl:coef_var} gives the averages of the
coefficients of variation (CV) for assessing the confidence in the
experimental results. These averages are for each system, dataset
and number of generated answers. Each average is over all query
sizes. Missing entries indicate out-of-memory exceptions. Since
the entries of Table~\ref{tbl:coef_var} are less than $1$, the
experimental results have a low variance. It is not surprising that
sometimes the CVs of GTF and ParLMT are the largest, because these
are the only two systems that can generate all the
answers. Interestingly, on the largest dataset, GTF has the best CV,
which is another testament to its scalability. SPARK has the
best CV on small datasets, because (as mentioned earlier)
its running times are hardly affected by either the query size or the
selectivity of the keywords.
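For reference, the coefficient of variation is the sample standard deviation divided by the mean, and each table entry averages one CV per query size. A minimal sketch of this computation (the data layout is an assumption of the sketch) is:

```python
import statistics

def coefficient_of_variation(times):
    """CV = sample standard deviation / mean. Being dimensionless,
    it allows comparing systems whose absolute running times differ
    by orders of magnitude."""
    return statistics.stdev(times) / statistics.mean(times)

def average_cv(times_by_query_size):
    """Average the CV over all query sizes, one CV per size,
    as done for each entry of the table. The dict-of-lists layout
    (query size -> list of per-query running times) is assumed."""
    return statistics.mean(
        coefficient_of_variation(t) for t in times_by_query_size.values()
    )
```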
\subsection{The Importance of Producing all Answers}\label{sec:all}
The efficiency of GTF is a significant achievement especially in light
of the fact that it does not miss answers (i.e.,~it is capable of
generating all of them). BANKS, the second most efficient system
overall, cannot produce all the answers. In this section, we
experimentally show the importance of not missing answers.
BANKS has two variants: Bidirectional Search (BiS)~\cite{vldbKPCSDK05}
and Multiple Iterator Backward Search (MI-BkS)~\cite{icdeBHNCS02}.
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeDblp100} show the
running times of the former, because it is the faster of the two.
MI-BkS is at least an order of magnitude slower than BiS. For example,
it takes MI-BkS an average (over all query sizes) of $2.7$
seconds to produce 100 answers on Mondial. However, as noted
in~\cite{vldbKPCSDK05}, BiS misses more answers than MI-BkS,
because it produces just a single answer for each root. MI-BkS is
capable of producing several answers for the same root, but all of
them must have a shortest path from the root to each keyword
(hence, for example, it would miss the answer $A_3$ of
Figure~\ref{fig:answers}).
To test how many answers are missed by BANKS, we ran GTF on all the queries and measured
the following in the generated results. First, the percentage of
answers that are rooted at the same node as some previously created
answer. Second, the percentage of answers that contain at least one
non-shortest path from the root to some node that contains
a keyword of the query.\footnote{We did that after removing the
keyword nodes so that answers have the same structure as in BANKS.}
The former and the latter percentages show how many answers are missed
by BiS and MI-BkS, respectively, compared with GTF. The results are
given in Table~\ref{tbl:banks_misses}.
For example, it shows that on Mondial, among the
first 100 answers of GTF, there are only $5$ different roots. So, BiS
would miss $95$ of those answers. Similarly, BiS is capable of
generating merely $20$ answers among the first $1,000$ produced by
GTF. Table~\ref{tbl:banks_misses} also shows that MI-BkS is more
effective, but it still misses answers. On Mondial, it misses $32$ and $580$ answers among
the first $100$ and $1,000$, respectively, generated by GTF.
Generally, the percentage of missed answers increases with the number
of generated answers and also depends heavily on the dataset. It is
higher for Mondial than for DBLP, because the former is highly
connected compared with the latter. Therefore, more answers rooted at
the same node or containing a non-shortest path from the root to a
keyword could be created for Mondial. Both GTF and BANKS enumerate answers by
increasing height. Since BANKS misses so many answers, it must
generate other answers having a greater height, which are likely to be
less relevant.
\input{tbl_banks_misses}
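The two percentages described above can be computed from GTF's output stream alone. A minimal sketch follows; the representation of an answer as a pair of its root and its root-to-keyword path weights is an assumption of the sketch.

```python
def bis_miss_fraction(answers, k):
    """Fraction of the first k answers that BiS would miss, because
    it produces just a single answer per root. Each answer is assumed
    to be a (root, paths) pair."""
    seen, missed = set(), 0
    for root, _paths in answers[:k]:
        if root in seen:
            missed += 1
        seen.add(root)
    return missed / k

def mibks_miss_fraction(answers, shortest, k):
    """Fraction of the first k answers containing at least one
    non-shortest path from the root to a keyword; MI-BkS would miss
    these. `shortest[root][kw]` is assumed to give the minimal path
    weight from `root` to keyword `kw`."""
    missed = sum(
        1 for root, paths in answers[:k]
        if any(w > shortest[root][kw] for kw, w in paths.items())
    )
    return missed / k
```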
\section{Correctness and Complexity of GTF} \label{correctness}
\subsection{Definitions and Observations}
Before proving correctness of the GTF algorithm,
we define some notation and terminology (in addition
to those of Section~\ref{sec:prelim}) and state a few observations.
Recall that the data graph is $G=(V,E)$.
Usually, a keyword is denoted by $k$, whereas $r$, $u$, $v$ and $z$
are any nodes of $V$.
We consider only directed paths of $G$, which are defined as usual.
If $p$ is a path from $v$ to $k$, then we write it as $p[v,k]$ when we want
to explicitly state its first and last nodes.
We say that node $u$ is \emph{reachable} from $v$ if there is a path
from $v$ to $u$.
A \emph{suffix} of $p[v,k]$ is a traversal of $p[v,k]$ that starts at
(some particular occurrence of) a node $u$ and ends at the last node of $p$.
Hence, a suffix of $p[v,k]$ is denoted by $p[u,k]$.
A \emph{prefix} of $p[v,k]$ is a traversal of $p[v,k]$ that starts at $v$
and ends at (some particular occurrence of) a node $u$. Hence, a prefix of
$p[v,k]$ is denoted by $p[v,u]$.
A suffix or prefix of $p[v,k]$ is \emph{proper} if it is different from
$p[v,k]$ itself.
Consider two paths $p_1[v,z]$ and $p_2[z,u]$; that is, the former ends
in the node where the latter starts. Their \emph{concatenation},
denoted by $p_1[v,z] \circ p_2[z,u]$, is obtained by joining them at node $z$.
As already mentioned in Section~\ref{sec:prelim},
a positive \emph{weight} function $w$ is defined on the
nodes and edges of $G$. The weight of a path $p[v,u]$, denoted by
$w(p[v,u])$, is the sum of weights over all the nodes and
edges of $p[v,u]$. A \emph{minimal} path from $v$ to $u$ has the minimum weight
among all paths from $v$ to $u$. Since the weight function is
positive, there are no zero-weight cycles. Therefore, a minimal path
is acyclic. Also observe that the weight of a proper suffix or prefix
is strictly smaller than that of the whole path.\footnote{For the proof of correctness, it is
enough for the weight function to be non-negative
(rather than positive) provided that every cycle has a positive weight.
}
Let $K$ be a query (i.e.,~a set of at least two keywords). Recall
from Section~\ref{sec:prelim} the definitions of $K$-root, $K$-subtree
and height of a subtree. The \emph{best height} of a $K$-root $r$ is
the maximum weight among all the minimal paths from $r$ to any keyword
$k\in K$. Note that the height of any $K$-subtree rooted at $r$ is at
least the best height of $r$.
Consider a nonempty set of nodes $S$ and a node $v$.
If $v$ is reachable from every node of $S$, then we say that node $u\in S$
is \emph{closest to} $v$ if a minimal path from $u$ to $v$ has the
minimum weight among all paths from any node of $S$ to $v$.
Similarly, if every node of $S$ is reachable from $v$, then we say
that node $u\in S$ is \emph{closest from} $v$ if a minimal path from
$v$ to $u$ has the minimum weight among all paths from $v$ to any node
of $S$.
In the sequel, line numbers refer to the algorithm GTF of Figure~\ref{alg:gtf},
unless explicitly stated otherwise.
We say that a node $v \in V$ is \emph{discovered as a $K$-root}
if the test of line~\ref{alg:gtf:becomesRoot} is satisfied
and $v.\mathit{isRoot}$ is assigned $\bid{true}$ in
line~\ref{alg:gtf:isRootGetsTrue}.
Observe that the test of line~\ref{alg:gtf:becomesRoot} is $\bid{true}$
if and only if for all $k\in K$, it holds that
$v.\mathit{marks[k]}$ is either $\textit{visited}$ or $\textit{in-answer}$.
Also note that line~\ref{alg:gtf:isRootGetsTrue} is executed at most once
for each node $v$ of $G$. Thus, there is at most one iteration of the main
loop (i.e.,~line~\ref{alg:gtf:mainLoop_start}) that discovers $v$ as $K$-root.
We say that a path $p$ is \emph{constructed} when it is inserted into $Q$
for the first time, which must happen in line~\ref{alg:gtf:insertQ}.
A path is \emph{exposed} when it is removed from $Q$ in
line~\ref{alg:gtf:popBestPath}. Observe that a path $p[v,k]$ may be exposed
more than once, due to freezing and unfreezing.
\theoremstyle{definition}
\newtheorem{proposition}[theorem]{Proposition}
\begin{proposition}\label{prop:twice}
A path can be exposed at most twice.
\end{proposition}
\begin{proof}
When an iteration exposes a path $p[v,k]$ for the first time,
it does exactly one of the following.
It freezes $p[v,k]$ at node $v$,
discards $p[v,k]$ due to line~\ref{alg:gtf:reassignFalse}, or
extends (i.e.,~relaxes) $p[v,k]$ in the loop of line~\ref{alg:gtf:relaxStart}
and inserts the results into $Q$ in line~\ref{alg:gtf:insertQ}.
Note that some relaxations of $p[v,k]$ are never inserted into $Q$,
due to the test of line~\ref{alg:gtf:testEssential}.
Only if $p[v,k]$ is frozen at $v$, can it be inserted a second time
into $Q$, in line~\ref{unfreeze:insert} of the procedure $\textbf{unfreeze}$
(Figure~\ref{alg:gtf:helpers}) that also sets $v.\mathit{marks[k]}$ to $\textit{in-answer}$.
But then $p[v,k]$ cannot freeze again at $v$,
because $v.\mathit{marks[k]}$ does not change after becoming $\textit{in-answer}$.
Therefore, $p[v,k]$ cannot be inserted into $Q$ a third time.
\end{proof}
In the next section, we sometimes refer to the mark of
a node $v$ of a path $p$. It should be clear from the context that we
mean the mark of $v$ for the keyword where $p$ ends.
\subsection{The Proof}
We start with an auxiliary lemma that considers the concatenation of
two paths, where the linking node is $z$, as shown in
Figure~\ref{fig:illustration_a} (note that a wavy arrow denotes a
path, rather than a single edge). Such a concatenation is used in the
proofs of subsequent lemmas.
\input{lemma1}
\begin{restatable}{lemma}{lemmaPC}
\label{LEMMA:PATH-CONCAT}
Let $k$ be a keyword of the query $K$, and let $v$ and $z$ be nodes of the
data graph. Consider two paths $p_s[v,z]$ and $p_m[z,k]$.
Let $\bar p[v,k]$ be their concatenation at node $z$, that is,
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Suppose that the following hold at the beginning of iteration $i$ of the
main loop (line~\ref{alg:gtf:mainLoop_start}).
\begin{enumerate}
\item\label{cond:new-first}
The path $p_s[v,z]$ is minimal or (at least) acyclic.
\item\label{cond:new-second}
The path $p_m[z,k]$ has changed $z.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$
in an earlier iteration.
\item\label{cond:first}
$z.\mathit{marks[k]}=\textit{visited}$.
\item\label{cond:second}
For all nodes $u\not=z$ on the path $p_s[v,z]$,
the suffix $\bar p[u,k]$ is not frozen at $u$.
\item\label{cond:third}
The path $\bar p[v,k]$ has not yet been exposed.
\end{enumerate}
Then, some suffix of $\bar p[v,k]$ must be on $Q$ at the beginning of
iteration $i$.
\end{restatable}
\begin{proof}
Suppose, by way of contradiction, that no suffix of $\bar p[v,k]$ is
on $Q$ at the beginning of iteration $i$. Since $\bar p[v,k]$
has not yet been exposed, there are two possible cases regarding its state.
We derive a contradiction by showing that none of them can happen.
\begin{description}
\item[Case 1:] Some suffix of $\bar p[v,k]$ is frozen. This cannot
happen at any node of $\bar p[z,k]$ (which is the same as
$p_m[z,k]$), because Condition~\ref{cond:first} implies that
$p_m[z,k]$ has already changed $z.\mathit{marks[k]}$ to $\textit{visited}$.
Condition~\ref{cond:second} implies that it cannot happen at the
other nodes of $\bar p[v,k]$ (i.e.,~the nodes $u$ of $p_s[v,z]$ that
are different from $z$).
\item[Case 2:] Some suffix of $\bar p[v,k]$ has already been discarded
(in an earlier iteration) either by the test of line~\ref{alg:gtf:testEssential}
or due to line~\ref{alg:gtf:reassignFalse}.
This cannot happen to any
suffix of $\bar p[z,k]$ (which is the same as $p_m[z,k]$), because
$p_m[z,k]$ has already changed $z.\mathit{marks[k]}$ to $\textit{visited}$. We
now show that it cannot happen to any other suffix $\bar p[u,k]$,
where $u$ is a node of $p_s[v,z]$ other than $z$. Note that $\bar
p[v,k]$ (and hence $\bar p[u,k]$) is not necessarily
acyclic. However, the lemma states that $p_s[v,z]$ is
acyclic. Therefore, if the suffix $\bar p[u,k]$ has a cycle that
includes $u$, then it must also include $z$. But
$z.\mathit{marks[k]}$ is $\textit{visited}$ from the moment it was changed to
that value until the beginning of iteration $i$ (because a mark
cannot be changed to $\textit{visited}$ more than once). Hence, the suffix $\bar
p[u,k]$ could not have been discarded by the test of
line~\ref{alg:gtf:testEssential}.
It is also not possible that
line~\ref{alg:gtf:reassignFalse}
has already discarded $\bar p[u,k]$ for the following reason.
If line~\ref{alg:gtf:reassignFalse} is reached
(in an iteration that removed $\bar p[u,k]$ from $Q$), then
for all nodes $x$ on $\bar p[u,k]$, line~\ref{alg:gtf:unfrAtOldRoot}
has already changed $x.\mathit{marks[k]}$ to $\textit{in-answer}$.
Therefore, $z.\mathit{marks[k]}$ cannot be $\textit{visited}$ at the beginning
of iteration $i$.
\end{description}
It thus follows that some suffix of $\bar p[v,k]$ is on $Q$
at the beginning of iteration $i$.
\end{proof}
\begin{restatable}{lemma}{lemmaSPM}
\label{LEMMA:GTF-SHORTEST-PATH-MARKS}
For all nodes $v\in V$ and keywords $k\in K$,
the mark $v.\mathit{marks[k]}$ can be changed from $\textit{active}$ to $\textit{visited}$
only by a minimal path from $v$ to $k$.
\end{restatable}
\begin{proof}
Suppose that the lemma is not true for some keyword $k\in K$.
Let $v$ be a closest node to $k$
among all those violating
the lemma with respect to $k$.
Node $v$ is different from $k$, because the path
$\anset{k}$ marks $k$ as $\textit{visited}$. We will derive a contradiction by
showing that a minimal path changes $v.\mathit{marks[k]}$
from $\textit{active}$ to $\textit{visited}$.
Let $p_s[v,k]$ be a minimal path from $v$ to $k$.
Consider the iteration $i$ of the main loop
(line~\ref{alg:gtf:mainLoop_start} in Figure~\ref{alg:gtf}) that changes
$v.\mathit{marks[k]}$ to $\textit{visited}$ (in line~\ref{alg:gtf:markVisited}).
Among all the nodes of $p_s[v,k]$
in which suffixes of some minimal paths from $v$ to $k$ are
frozen at the beginning of iteration $i$, let $z$ be the first one
when traversing $p_s[v,k]$ from $v$ to $k$
(i.e.,~on the path $p_s[v,z]$, node $z$ is the only one in which
such a suffix is frozen). Node $z$ exists for the following three reasons.
\begin{itemize}
\item The path $p_s[v,k]$ has not been exposed prior to
iteration $i$, because we assume that $v.\mathit{marks[k]}$ is
changed to $\textit{visited}$ in iteration $i$ and that change can happen only once.
\item The path $p_s[v,k]$ is acyclic (because it is minimal), so
a suffix of $p_s[v,k]$ could not have been discarded either
by the test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}.
\item The path $p_s[v,k]$ (or any suffix thereof) cannot be on the
queue at the beginning of iteration $i$, because $v$ violates the
lemma, which means that a non-minimal path from $v$ to $k$ must be
removed from the queue at the beginning of that iteration.
\end{itemize}
The above three observations imply that a proper suffix of $p_s[v,k]$
must be frozen at the beginning of iteration $i$ and, hence, node $z$ exists.
Observe that $z$ is different from $v$, because a path to $k$
can be frozen only at a node $\hat v$, such that
${\hat v}.\mathit{marks[k]}=\textit{visited}$, whereas we assume that
$v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of iteration $i$.
By the selection of $v$ and $p_s[v,k]$ (and the above fact that $z\not=v$),
node $z$ does not violate the lemma, because $p_s[z,k]$ is a proper
suffix of $p_s[v,k]$ and, hence, $z$ is closer to $k$ than $v$.
Therefore, according to the lemma, there is a minimal path $p_m[z,k]$
that changes $z.\mathit{marks[k]}$ to $\textit{visited}$. Consequently,
\begin{equation}\label{eqn:min-path}
w(p_m[z,k]) \leq w(p_s[z,k]).
\end{equation}
Now, consider the path
\begin{equation}\label{eqn:new-path}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation}
Since $p_s[v,k]$ is a minimal path from $v$ to $k$,
Equations~(\ref{eqn:min-path}) and~(\ref{eqn:new-path}) imply that
so is $\bar{p}[v,k]$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $i$. In particular,
Condition~\ref{cond:new-first} holds, because $p_s[v,k]$ is acyclic
(since it is minimal) and, hence, so is the path $p_s[v,z]$.
Condition~\ref{cond:new-second} is satisfied, because of how
$p_m[z,k]$ is defined. Condition~\ref{cond:first} holds, because we chose
$z$ to be a node where a path to $k$ is frozen.
Condition~\ref{cond:second} is satisfied, because of how $z$ was
chosen and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} is satisfied, because we have assumed
that $v.\mathit{marks[k]}$ is changed from $\textit{active}$ to $\textit{visited}$ during
iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of
$\bar p[v,k]$ must be on the queue at the beginning of iteration
$i$. This contradicts our assumption that a non-minimal path (which
has a strictly higher weight than any suffix of $\bar p[v,k]$) changes
$v.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$ in iteration $i$.
\end{proof}
\begin{restatable}{lemma}{lemmaOQ}
\label{LEMMA:ON-QUEUE}
For all nodes $v\in V$ and keywords $k\in K$, such that $k$ is reachable
from $v$, if $v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of an iteration
of the main loop (line~\ref{alg:gtf:mainLoop_start}), then $Q$ contains a suffix
(which is not necessarily proper) of a minimal path from $v$ to $k$.
\end{restatable}
\begin{proof}
The lemma is certainly true at the beginning of the first iteration,
because the path $\anset{k}$ is on $Q$.
Suppose that the lemma does not hold at the beginning of iteration $i$.
Thus, every minimal path $p[v,k]$ has a proper suffix that is frozen
at the beginning of iteration $i$.
(Note that a suffix of a minimal path cannot be discarded either by the
test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}, because it is acyclic.)
Let $z$ be the closest node from $v$ having such a frozen suffix.
Hence, $z.\mathit{marks[k]}$ is $\textit{visited}$ and $z\not= v$
(because $v.\mathit{marks[k]}$ is $\textit{active}$).
By Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS},
a minimal path $p_m[z,k]$ has changed $z.\mathit{marks[k]}$ to $\textit{visited}$.
Let $p_s[v,z]$ be a minimal path from $v$ to $z$.
Consider the path
\begin{equation*}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
The weight of $\bar{p}[v,k]$ is no more than that of a minimal path
from $v$ to $k$, because both $p_s[v,z]$ and $p_m[z,k]$ are minimal
and the choice of $z$ implies that it is on some minimal path from $v$
to $k$. Hence, $\bar{p}[v,k]$ is a minimal path from $v$ to $k$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are satisfied.
Conditions~\ref{cond:new-first}--\ref{cond:first} clearly hold.
Condition~\ref{cond:second} is satisfied because of how $z$ is chosen
and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} holds because $v.\mathit{marks[k]}$ is $\textit{active}$
at the beginning of iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar{p}[v,k]$ is on $Q$
at the beginning of iteration $i$, contradicting our initial assumption.
\end{proof}
\begin{lemma}\label{lemma:finite}
Any constructed path can have at most $2n(n+1)$ nodes, where $n=|V|$
(i.e.,~the number of nodes in the graph).
Hence, the algorithm constructs at most $(n+1)^{2n(n+1)}$ paths.
\end{lemma}
\begin{proof}
We say that $v_m \rightarrow \cdots \rightarrow v_1$ is a
\emph{repeated run} in a path $\bar p$ if some suffix (not
necessarily proper) of $\bar p$ has the form $v_m \rightarrow \cdots
\rightarrow v_1 \rightarrow p$, where each $v_i$ also appears in at least
two positions of $p$. In other words, for all $i$ ($1\le i \le m$),
the occurrence of $v_i$ in $v_m \rightarrow \cdots \rightarrow v_1$
is (at least) the third one in the suffix $v_m \rightarrow \cdots
\rightarrow v_1 \rightarrow p$. (We say that it is the third,
rather than the first, because paths are constructed backwards).
When a path $p'[v',k']$ reaches a node $v'$ for the third time, the
mark of $v'$ for the keyword $k'$ has already been changed to $\textit{in-answer}$
in a previous iteration. This follows from the following two
observations. First, the first path to reach a node $v'$ is also the
one to change its mark to $\textit{visited}$. Second, a path that reaches a node
marked as $\textit{visited}$ can be unfrozen only when that mark is changed to $\textit{in-answer}$.
Let $v_m \rightarrow \cdots \rightarrow v_1$ be a repeated run in
$\bar p$ and suppose that $m>n=|V|$. Hence, there is a node $v_i$
that appears twice in the repeated run; that is, there is a $j<i$,
such that $v_j=v_i$. If the path $v_i \rightarrow \cdots
\rightarrow v_1 \rightarrow p$ is considered in the loop of
line~\ref{alg:gtf:relaxStart}, then it would fail the test of
line~\ref{alg:gtf:testEssential} (because, as explained earlier, all
the nodes on the cycle $v_i \rightarrow \cdots \rightarrow v_j$ are already
marked as $\textit{in-answer}$). We conclude that the algorithm does not
construct paths that have a repeated run with more than $n$ nodes.
It thus follows that two disjoint repeated runs of a constructed
path $\bar p$ must be separated by a node that appears (in a
position between them) for the first or second time. A path can have
at most $2n$ positions, such that in each one a node appears for the
first or second time. Therefore, if a path $\bar p$ is constructed
by the algorithm, then it can have at most $2n(n+1)$ nodes.
Using $n$ distinct nodes, we can construct at most
$(n+1)^{2n(n+1)}$ paths with $2n(n+1)$ or fewer nodes.
\end{proof}
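The final counting step can be made explicit. A constructed path with at most $L=2n(n+1)$ nodes is a sequence of length at most $L$ over the $n$ nodes of $V$, and by the binomial expansion (in which every binomial coefficient is at least $1$),
\begin{equation*}
\sum_{i=1}^{L} n^i \;\le\; \sum_{i=0}^{L} \binom{L}{i} n^i \;=\; (n+1)^L ,
\end{equation*}
which yields the stated bound of $(n+1)^{2n(n+1)}$ paths.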
\begin{lemma}\label{lemma:all-roots}
$K$-roots have the following two properties.
\begin{enumerate}
\item\label{part:all-roots-part-one}
All the $K$-roots are discovered before the algorithm terminates.
Moreover, they are discovered in the increasing order of their best heights.
\item\label{part:all-roots-part-two}
Suppose that $r$ is a $K$-root with a best height $b$.
If $p[v,k]$ is a path (from any node $v$ to any keyword $k$)
that is exposed before the iteration that discovers $r$ as a $K$-root,
then $w(p[v,k]) \le b$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove Part~\ref{part:all-roots-part-one}.
Suppose that a keyword $k$ is reachable from node $v$.
As long as $v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning
of the main loop (line~\ref{alg:gtf:mainLoop_start}),
Lemma~\ref{LEMMA:ON-QUEUE} implies that
the queue $Q$ contains (at least) one suffix of a minimal path from $v$ to $k$.
By Lemma~\ref{lemma:finite}, the algorithm constructs a finite number of
paths. By Proposition~\ref{prop:twice}, the same path can be inserted
into the queue at most twice.
Since the algorithm does not terminate while $Q$ is not empty,
$v.\mathit{marks[k]}$ must be changed to $\textit{visited}$ after a finite time.
It thus follows that each $K$-root is discovered after a finite time.
Next, we show that the $K$-roots are discovered in the increasing order of
their best heights. Let $r_1$ and $r_2$ be two $K$-roots with best heights
$b_1$ and $b_2$, respectively, such that $b_1 < b_2$.
Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS} implies the following for
$r_i$ ($i=1,2$). For all keywords $k\in K$, a minimal path from $r_i$
to $k$ changes $r_i.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$; that is,
$r_i$ is discovered as a $K$-root by minimal paths.
Suppose, by way of contradiction, that $r_2$ is discovered first.
Hence, a path with weight $b_2$ is removed from $Q$ while
Lemma~\ref{LEMMA:ON-QUEUE} implies that
a suffix with a weight of at most $b_1$ is still on $Q$.
This contradiction completes the proof of Part~\ref{part:all-roots-part-one}.
Now, we prove Part~\ref{part:all-roots-part-two}.
As shown in the proof of Part~\ref{part:all-roots-part-one},
a $K$-root is discovered by minimal paths.
Let $r$ be a $K$-root with best height $b$.
Suppose, by way of contradiction, that a path $p[v,k]$,
such that $w(p[v,k]) > b$,
is exposed before the iteration, say $i$, that discovers $r$ as a $K$-root.
By Lemma~\ref{LEMMA:ON-QUEUE}, at the beginning of iteration $i$,
the queue $Q$ contains a suffix with weight of at most $b$. Hence,
$p[v,k]$ cannot be removed from $Q$ at the beginning of iteration $i$.
This contradiction proves Part~\ref{part:all-roots-part-two}.
\end{proof}
\begin{restatable}{lemma}{lemmaIH}
\label{LEMMA:INCREASING-HEIGHT}
Suppose that node $v$ is discovered as a $K$-root at iteration $i$.
Let $p_1[v',k']$ and $p_2[v,k]$ be paths that are exposed in iterations
$j_1$ and $j_2$, respectively. If $i < j_1 < j_2$, then
$w(p_1[v',k']) \le w(p_2[v,k])$.
Note that $k$ and $k'$ are not necessarily the same and similarly for
$v$ and $v'$; moreover, $v'$ has not necessarily been discovered as a $K$-root.
\end{restatable}
\begin{proof}
Suppose the lemma is false. In particular, consider an iteration $j_1$
of the main loop (line~\ref{alg:gtf:mainLoop_start})
that violates the lemma. That is, the following hold in iteration $j_1$.
\begin{itemize}
\item Node $v$ has already been discovered as a $K$-root in an earlier
iteration (so, there are no frozen paths at $v$).
\item A path $p_1[v',k']$ is exposed in iteration $j_1$.
\item A path $p_2[v,k]$ having a strictly lower weight than $p_1[v',k']$
(i.e.,~$w(p_2[v,k]) < w(p_1[v',k'])$) will be
exposed after iteration $j_1$. Hence, a proper suffix of this path is
frozen at some node $z$ during iteration $j_1$.
\end{itemize}
For a given $v$ and $p_1[v',k']$, there could be several paths
$p_2[v,k]$ that satisfy the third condition above. We choose one, such
that its suffix is frozen at a node $z$ that is closest to $v$.
Since $v$ has already been discovered as a $K$-root,
$z$ is different from $v$.
Clearly, $z.\mathit{marks[k]}$ is changed to $\textit{visited}$ before iteration
$j_1$. By Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS}, a minimal path
$p_m[z,k]$ does that. Let $p_s[v,z]$ be a minimal path from $v$ to
$z$.
Consider the path
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Since both $p_s[v,z]$ and $p_m[z,k]$ are minimal, the weight
of their concatenation (i.e.,~$\bar p[v,k]$) is no more than that of
$p_2[v,k]$ (which is also a path that passes through node $z$). Hence,
$w(\bar p[v,k]) < w(p_1[v',k'])$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $j_1$
(i.e.,~$j_1$ corresponds to $i$ in Lemma~\ref{LEMMA:PATH-CONCAT}).
Conditions~\ref{cond:new-first}--\ref{cond:new-second} clearly hold.
Condition~\ref{cond:first} is satisfied because a suffix of $p_2[v,k]$
is frozen at $z$.
Condition~\ref{cond:second} holds, because
of the choice of $z$ and the fact
$w(\bar p[v,k]) < w(p_1[v',k'])$ that was shown earlier.
Condition~\ref{cond:third} holds, because otherwise
$\bar p[v,k]$ would be unfrozen and $z.\mathit{marks[k]}$ would be $\textit{in-answer}$
rather than $\textit{visited}$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar p[v,k]$ is on the
queue at the beginning of iteration $j_1$. This contradicts the
assumption that the path $p_1[v',k']$ is removed from the
queue at the beginning of iteration $j_1$, because $\bar p[v,k]$ (and, hence,
any of its suffixes) has a strictly lower weight.
\end{proof}
\begin{restatable}{lemma}{lemmaNVWT}
\label{LEMMA:NO-VISIT-WHEN-TERMINATING}
For all nodes $v\in V$, such that $v$ is a $K$-root,
the following holds.
If $z$ is a node on a simple path from $v$ to some $k\in K$,
then $z.\mathit{marks[k]} \not= \textit{visited}$ when the algorithm terminates.
\end{restatable}
\begin{proof}
The algorithm terminates when the test of
line~\ref{alg:gtf:mainLoop_start} shows that $Q$ is empty. Suppose
that the lemma is not true. Consider some specific $K$-root $v$ and
keyword $k$ for which the lemma does not hold. Among all the nodes $z$
that violate the lemma with respect to $v$ and $k$, let $z$ be a
closest one to $v$. Observe that $z$ cannot be $v$, because of the
following two reasons. First, by Lemma~\ref{lemma:all-roots}, node $v$ is
discovered as a $K$-root before termination. Second, when a $K$-root is
discovered (in
lines~\ref{alg:gtf:becomesRoot}--\ref{alg:gtf:isRootGetsTrue}), all
its marks become $\textit{in-answer}$ in
lines~\ref{alg:gtf:newRootStart}--\ref{alg:gtf:unfrNewRoot}.
Suppose that $p_m[z,k]$ is the path that changes $z.\mathit{marks[k]}$
to $\textit{visited}$. Let $p_s[v,z]$ be a minimal path from $v$ to $z$. Note
that $p_s[v,z]$ exists, because $z$ is on a simple path from $v$ to $k$.
Consider the path
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Suppose that the test of line~\ref{alg:gtf:mainLoop_start} is \textbf{false}
(and, hence, the algorithm terminates) on iteration $i$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT}
are satisfied at the beginning of that iteration.
Conditions~\ref{cond:new-first}--\ref{cond:new-second} of
Lemma~\ref{LEMMA:PATH-CONCAT} clearly hold.
Conditions~\ref{cond:first}--\ref{cond:second} are satisfied
because of how $z$ is chosen. Condition~\ref{cond:third} holds, because
otherwise $z.\mathit{marks[k]}$ should have been changed to $\textit{in-answer}$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar p[v,k]$ is on $Q$
when iteration $i$ begins, contradicting our assumption that $Q$ is empty.
\end{proof}
\begin{theorem}\label{theorem:gtf-correct}
GTF is correct. In particular, it finds all and only answers
to the query $K$ by increasing height within $2(n+1)^{2n(n+1)}$ iterations
of the main loop (line~\ref{alg:gtf:mainLoop_start}), where $n=|V|$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:finite},
the algorithm constructs at most $(n+1)^{2n(n+1)}$ paths.
By Proposition~\ref{prop:twice},
a path can be inserted into the queue $Q$ at most twice.
Thus, the algorithm terminates after at most $2(n+1)^{2n(n+1)}$ iterations
of the main loop.
By Part~\ref{part:all-roots-part-one} of Lemma~\ref{lemma:all-roots},
all the $K$-roots are discovered.
By Lemma~\ref{LEMMA:NO-VISIT-WHEN-TERMINATING},
no suffix of a simple path from a $K$-root to a keyword can be frozen
upon termination. Clearly, no such suffix can be on $Q$
when the algorithm terminates. Hence, the algorithm constructs
all the simple paths from each $K$-root to every
keyword. It thus follows that the algorithm finds all the answers to $K$.
Clearly, the algorithm generates only valid answers to $K$.
Next, we prove that the answers are produced in the order of
increasing height. So, consider answers $a_1$ and $a_2$
that are produced in iterations $j'_1$ and $j_2$, respectively.
For the answer $a_i$ ($i=1,2$),
let $r_i$ and $h_i$ be its $K$-root and height, respectively.
In addition, let $b_i$ be the best height of $r_i$ ($i=1,2$).
Suppose that $j'_1 < j_2$. We have to prove that $h_1 \le h_2$.
By way of contradiction, we assume that $h_1 > h_2$.
By the definition of best height, $h_2 \ge b_2$. Hence, $h_1 > b_2$.
Let $p_2[r_2,k]$ be the path of $a_2$ that is exposed
(i.e.,~removed from $Q$) in iteration $j_2$.
Suppose that $p_1[r_1,k']$ is a path of $a_1$, such that
$w(p_1[r_1,k'])=h_1$ and $p_1[r_1,k']$ is exposed in the iteration $j_1$
that is as close to iteration $j'_1$ as possible
(among all the paths of $a_1$ from $r_1$ to a keyword with a weight
equal to $h_1$). Clearly, $j_1 \le j'_1$ and hence $j_1 < j_2$.
We now show that $w(p_1[r_1,k']) < h_1$, in contradiction to
$w(p_1[r_1,k'])=h_1$. Hence, the claim that $h_1 \le h_2$ follows.
Let $i$ be the iteration that discovers $r_2$ as a $K$-root.
There are two cases to consider as follows.
\begin{description}
\item[Case 1: $i < j_1$.] In this case, $i < j_1 < j_2$, since $j_1 <
j_2$. By Lemma~\ref{LEMMA:INCREASING-HEIGHT},
$w(p_1[r_1,k']) \le w(p_2[r_2,k])$. (Note that we
apply Lemma~\ref{LEMMA:INCREASING-HEIGHT} after replacing $v$ and
$v'$ with $r_2$ and $r_1$, respectively.) Hence,
$w(p_1[r_1,k']) < h_1$, because $w(p_2[r_2,k]) \le h_2$
follows from the definition of height and we have assumed that $h_1 > h_2$.
\item[Case 2: $j_1 \le i$.]
By Part~\ref{part:all-roots-part-two} of Lemma~\ref{lemma:all-roots},
$w(p_1[r_1,k']) \le b_2$. Hence, $w(p_1[r_1,k']) < h_1$,
because we have shown earlier that $h_1 > b_2$.
\end{description}
Thus, we have derived a contradiction and, hence,
it follows that answers are produced by increasing height.
\end{proof}
\begin{corollary}\label{cor:running}
The running time of the algorithm GTF is
$O\left(kn(n+1)^{2kn(n+1)+1}\right)$, where $n$ and $k$ are the number of
nodes in the graph and keywords in the query, respectively.
\end{corollary}
\begin{proof}
The most expensive operation is a call to $\bid{produceAnswers}(v.paths,p)$. By
Lemma~\ref{lemma:finite}, there are at most $(n+1)^{2n(n+1)}$ paths.
A call to the procedure $\bid{produceAnswers}(v.paths,p)$ considers all combinations of
$k-1$ paths plus $p$. For each combination, all its $k$ paths are
traversed in linear time. Thus, the total cost of one call to
$\bid{produceAnswers}(v.paths,p)$ is $O\left(kn(n+1)(n+1)^{(k-1)2n(n+1)}\right)$. By
Theorem~\ref{theorem:gtf-correct}, there are at most
$2(n+1)^{2n(n+1)}$ iterations. Hence, the running time is
$O\left(kn(n+1)^{2kn(n+1)+1}\right)$.
\end{proof}
\section{Preliminaries}\label{sec:prelim}
We model data as a directed graph $G$, similarly to~\cite{icdeBHNCS02}.
Data graphs can be constructed from a variety of formats (e.g.,~RDB, XML
and RDF).
Nodes represent entities and relationships, while edges correspond to
connections among them (e.g.,~foreign-key references when the data
graph is constructed from a relational database). We assume that text
appears only in the nodes. This is not a limitation, because we can
always split an edge (with text) so that it passes through a
node. Some nodes are for keywords, rather than entities and
relationships. In particular, for each keyword $k$ that appears in the
data graph, there is a dedicated node. By a slight abuse of notation,
we do not distinguish between a keyword $k$ and its node---both are
called \emph{keyword} and denoted by $k$. For all nodes $v$ of the
data graph that contain a keyword $k$, there is a directed edge from
$v$ to $k$. Thus, keywords have only incoming edges.
Figure~\ref{fig:datagraph} shows a snippet of a data graph.
The dashed part should be ignored unless explicitly stated otherwise.
Ordinary nodes are shown as
ovals. For clarity, the type of each node appears inside the oval.
Keyword nodes are depicted as rectangles. To keep the figure small,
only a few of the keywords that appear in the graph are shown as
nodes. For example, a type is also a keyword and has its own node in the full
graph. For each oval, there is an edge to every keyword that it contains.
Let $G=(V,E)$ be a directed data graph, where $V$ and $E$ are the sets
of nodes and edges, respectively.
A directed path is denoted by $\langle v_1,\ldots,v_m \rangle$.
We only consider \emph{rooted} (and, hence, directed) subtrees $T$ of
$G$. That is, $T$ has a unique node $r$, such that for all nodes $u$
of $T$, there is exactly one path in $T$ from $r$ to $u$.
Consider a query $K$, that is, a set of at least two
keywords. A \emph{$K$-subtree} is a rooted subtree of $G$, such that
its leaves are exactly the keywords of $K$. We say that a node $v\in
V$ is a \emph{$K$-root} if it is the root of some $K$-subtree of $G$. It is
observed in~\cite{icdeBHNCS02} that $v$ is a $K$-root if and only if
for all $k\in K$, there is a path in $G$ from $v$ to $k$.
An \emph{answer} to $K$ is a $K$-subtree $T$ that is \emph{non-redundant}
(or \emph{reduced}) in the sense that no proper subtree $T'$ of $T$ is
also a $K$-subtree. It is easy to show that a $K$-subtree $T$ of
$G$ is an answer
if and only if the root of $T$ has at least two
children. Even if $v$ is a $K$-root, it does not necessarily follow
that there is an answer to $K$ that is rooted at $v$ (because it is possible
that in all $K$-subtrees rooted at $v$, there is only one child of $v$).
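The observation from~\cite{icdeBHNCS02} reduces the $K$-root test to plain reachability. The following Python sketch illustrates that test (the adjacency-list representation and the function name are our own, not from the paper):

```python
from collections import deque

def is_k_root(graph, v, keywords):
    """Check whether v is a K-root, i.e., every keyword node is
    reachable from v by a directed path.

    graph: dict mapping each node to a list of its successors.
    """
    seen = {v}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in graph.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return all(k in seen for k in keywords)
```

Note that, as stated above, being a $K$-root does not by itself guarantee that an answer is rooted at $v$.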
Figure~\ref{fig:answers} shows three answers to the query
$\left\{\mathit{France},\mathit{Paris}\right\}$ over the data graph of
Figure~\ref{fig:datagraph}. The answer $A_1$ means that the city Paris is
located in a province containing the word France in its name.
The answer $A_2$ states that the city Paris is located in
the country France. Finally, the answer $A_3$ means that Paris is
located in a province which is located in France.
Now, consider also the dashed part of Figure~\ref{fig:datagraph}, that
is, the keyword $\mathit{Seine}$ and the node $\mathit{river}$ with
its outgoing edges. There is a path from $\mathit{river}$ to every keyword
of $K=\left\{\mathit{France},\mathit{Paris}\right\}$. Hence,
$\mathit{river}$ is a $K$-root. However, the $K$-subtree of
Figure~\ref{fig:redun_ans} is not an answer to $K$,
because its root has only one child.
For ranking, the nodes and edges of the data graph have positive
weights. The \emph{weight} of a path (or a tree) is the sum of
weights of all its nodes and edges. The rank of an answer is inversely
proportional to its weight. The \emph{height} of a tree is the
maximal weight over all paths from the root to any leaf
(which is a keyword of the query).
For example, suppose that the weight of each node and edge is $1$.
The heights of the answers $A_1$ and $A_3$ (of
Figure~\ref{fig:answers}) are $5$ and $7$, respectively. In $A_1$, the
path from the root to $\mathit{France}$ is a minimal
(i.e.,~shortest) one between these two nodes, in the whole graph, and
its weight is $5$. In $A_3$, however, the path from the root (which
is the same as in $A_1$) to $\mathit{France}$ has a higher weight, namely,~$7$.
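The weight and height computations above can be written out directly. A minimal sketch, assuming a single dictionary that maps nodes and edge pairs to their positive weights (an illustrative representation, not the paper's):

```python
def path_weight(weights, path):
    """Weight of a path: sum of the weights of all its nodes
    and of the edges between consecutive nodes."""
    total = sum(weights[v] for v in path)
    total += sum(weights[(path[i], path[i + 1])]
                 for i in range(len(path) - 1))
    return total

def tree_height(weights, root_to_leaf_paths):
    """Height of a rooted tree: the maximal weight over all paths
    from the root to a leaf."""
    return max(path_weight(weights, p) for p in root_to_leaf_paths)
```

With unit weights, a root-to-keyword path of three nodes has weight $3 + 2 = 5$, matching the height of $A_1$ in the example.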
\section{\label{sec:intro}Introduction}
Keyword search over data graphs is a convenient paradigm of querying
semistructured and linked data. Answers, however, are similar to those
obtained from a database system, in the sense that they are succinct
(rather than just relevant documents) and include semantics (in the
form of entities and relationships) and not merely free text. Data
graphs can be built from a variety of formats, such as XML, relational
databases, RDF and social networks. They can also be obtained from
the amalgamation of many heterogeneous sources. When it comes to
querying data graphs, keyword search alleviates their lack of
coherence and facilitates easy search for precise answers, as if
users deal with a traditional database system.
In this paper, we address the issue of efficiency. Computing keyword
queries over data graphs is much more involved than evaluation of
relational expressions. Quite a few systems have been developed
(see~\cite{tkdeCW14} for details). However, they
fall short of the degree of efficiency and scalability that is
required in practice. Some algorithms sacrifice \emph{completeness}
for the sake of efficiency; that is, they are not capable of
generating all the answers and, consequently, may miss some relevant ones.
We present a novel algorithm,
called \emph{Generating Trees with Freezing} (GTF).
We start with a straightforward generalization of Dijkstra's
shortest-path algorithm to the task of constructing all simple
(i.e.,~acyclic) paths, rather than just the shortest ones. Our main
contribution is incorporating the \emph{freezing} technique that
enhances efficiency by up to one order of magnitude, compared with the
naive generalization of Dijkstra's algorithm. The main idea is to
avoid the construction of most non-shortest paths until they are
actually needed in answers. Freezing may seem intuitively clear, but
making it work involves subtle details and requires an intricate proof
of correctness.
Our main theoretical contribution is the algorithm GTF, which
incorporates freezing, and its proof of correctness. Our main
practical contribution is showing experimentally (in
Section~\ref{sec:experiments_sum}
and Appendix~\ref{sec:experiments})
that GTF is both more efficient and more scalable than existing
systems. This contribution is especially significant in light of the
following. First, GTF is complete (i.e.,~it does not miss answers);
moreover, we show experimentally that not missing answers is important
in practice. Second, the order of generating answers is by increasing
height. This order is commonly deemed a good strategy for an initial
ranking that is likely to be in a good correlation with the final one
(i.e.,~by increasing weight).
\section{The GTF Algorithm}\label{sec:gtf}
\input{gt}
\subsection{Incorporating Freezing}
The general idea of freezing is to avoid the construction of paths that cannot contribute to the production of answers.
To achieve that, a non-minimal path $p$ is frozen
until it is certain that $p$ can reach (when constructed backwards)
a $K$-root.
In particular, the first path that reaches a node $v$ is always a minimal one.
When additional paths reach $v$, they are frozen there until
$v$ is discovered to be on a path from a $K$-root to a keyword node.
The process of answer production in the GTF algorithm remains the same as in the naive approach.
We now describe some details about the implementation of GTF. We mark
nodes of the data graph as either $\textit{active}$, $\textit{visited}$ or $\textit{in-answer}$. Since we
simultaneously construct paths to all the keywords (of the query
$K=\left\{{k_1,\ldots,k_n}\right\}$), a node has a separate mark for
each keyword. The marks of a node $v$ are stored in the array
$v.\mathit{marks}$, which has an entry for each keyword. For a
keyword $k_i$, the mark of $v$ (i.e.,~$v.\mathit{marks[k_i]}$) means
the following. Node $v$ is $\textit{active}$ if we have not yet discovered that
there is a path from $v$ to $k_i$. Node $v$ is $\textit{visited}$ if a minimal
path from $v$ to $k_i$ has been produced. And $v$ is marked as
$\textit{in-answer}$ when we discover for the first time that $v$ is on a path
from some $K$-root to $k_i$.
If $v.\mathit{marks[k_i]}$ is $\textit{visited}$ and a path $p$ from $v$ to $k_i$
is removed from the queue, then $p$ is \emph{frozen} at
$v$. Frozen paths from $v$ to $k_i$ are stored in a
dedicated list $v.\mathit{frozen}[k_i]$. The paths of
$v.\mathit{frozen}[k_i]$ are \emph{unfrozen} (i.e.,~are moved back into
the queue) when $v.\mathit{marks[k_i]}$ is changed to $\textit{in-answer}$.
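The per-node bookkeeping just described can be sketched as follows. This is an illustrative Python skeleton of the marks and frozen-path lists (the class and function names are our own, and the full GTF logic around them is omitted):

```python
ACTIVE, VISITED, IN_ANSWER = "active", "visited", "in-answer"

class NodeState:
    """Per-node state used by freezing: one mark and one list of
    frozen paths for each keyword of the query."""
    def __init__(self, keywords):
        self.marks = {k: ACTIVE for k in keywords}
        self.frozen = {k: [] for k in keywords}

def maybe_freeze(state, path, keyword):
    """Freeze the path at its first node if that node is already
    marked visited for the path's keyword; return True if frozen."""
    if state.marks[keyword] == VISITED:
        state.frozen[keyword].append(path)
        return True
    return False

def unfreeze(state, keyword, queue):
    """Move the frozen paths back to the queue and change the mark
    to in-answer."""
    queue.extend(state.frozen[keyword])
    state.frozen[keyword] = []
    state.marks[keyword] = IN_ANSWER
```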
We now describe the execution of GTF on the graph snippet of
Figure~\ref{fig:datagraph}, assuming that the query is
$K=\left\{\mathit{France}, \mathit{Paris}\right\}$.
Initially, two paths $\langle\mathit{France}\rangle$ and
$\langle\mathit{Paris}\rangle$,
each consisting of one keyword of $K$, are inserted into the queue,
where lower weight means higher priority.
Next, the top of the queue is removed; suppose that it is
$\langle\mathit{France}\rangle$.
First, we change $\mathit{France}.\mathit{marks}[\mathit{France}]$ to $\textit{visited}$.
Second, for each parent $v$ of $\mathit{France}$,
the path $v \rightarrow \mathit{France}$ is inserted into the queue;
namely, these are the paths $p_1$ and $p_2$ of Figure~\ref{fig:gt}.
We continue to iterate in this way. Suppose that now
$\langle\mathit{Paris}\rangle$ has the lowest weight.
So, it is removed from the queue,
$\mathit{Paris}.\mathit{marks}[\mathit{Paris}]$ is changed to $\textit{visited}$,
and the path $p_7$ (of Figure~\ref{fig:gt}) is inserted into the queue.
Now, let the path $p_1$ be removed from the queue.
As a result, $\mathit{province}.\mathit{marks}[\mathit{France}]$ is
changed to $\textit{visited}$, and the path $p_6= \mathit{city} \rightarrow p_1$
is inserted into the queue. Next, assume that $p_2$ is removed from
the queue. So, $\mathit{country}.\mathit{marks}[\mathit{France}]$ is
changed to $\textit{visited}$, and the paths $p_3= \mathit{province} \rightarrow
p_2$ and $p_5= \mathit{city} \rightarrow p_2$ are inserted into the queue.
Now, suppose that $p_3$ is at the top of the queue.
So, $p_3$ is removed and immediately frozen at $\mathit{province}$
(i.e.,~added to $\mathit{province}.\mathit{frozen}[\mathit{France}]$),
because $\mathit{province}.\mathit{marks}[\mathit{France}]=\textit{visited}$.
Consequently, no paths are added to the queue in this iteration.
Next, assume that $p_6$ is removed from the queue.
The value of $\mathit{city}.\mathit{marks}[\mathit{France}]$ is changed to
$\textit{visited}$ and no paths are inserted into the queue,
because $\mathit{city}$ has no incoming edges.
Now, suppose that $p_7$ is at the top of the queue. So, it is removed
and $\mathit{city}.\mathit{marks}[\mathit{Paris}]$ is changed to
$\textit{visited}$. Currently, both
$\mathit{city}.\mathit{marks}[\mathit{Paris}]$ and
$\mathit{city}.\mathit{marks}[\mathit{France}]$ are $\textit{visited}$. That is,
there is a path from $\mathit{city}$ to all the keywords of the query
$\left\{\mathit{France}, \mathit{Paris}\right\}$. Recall that the
paths that have reached $\mathit{city}$ so far are $p_6$ and $p_7$.
For each one of those paths $p$, the following is done,
assuming that $p$ ends at the keyword $k$.
For each node $v$ of $p$, we change the mark of $v$ for $k$ to $\textit{in-answer}$
and unfreeze paths to $k$ that are frozen at $v$.
Doing it for $p_6$ means that
$\mathit{city}.\mathit{marks}[\mathit{France}]$,
$\mathit{province}.\mathit{marks}[\mathit{France}]$ and
$\mathit{France}.\mathit{marks}[\mathit{France}]$ are all changed to
$\textit{in-answer}$. In addition, the path $p_3$ is removed from
$\mathit{province}.\mathit{frozen}[\mathit{France}]$ and inserted back
into the queue. We act similarly on $p_7$. That is,
$\mathit{city}.\mathit{marks}[\mathit{Paris}]$ and
$\mathit{Paris}.\mathit{marks}[\mathit{Paris}]$ are changed to
$\textit{in-answer}$. In this case, there are no paths to be unfrozen.
Now, the marks of $\mathit{city}$ for all the keywords (of the query)
are $\textit{in-answer}$. Hence, we generate answers from the paths that have
already reached $\mathit{city}$. As a result, the answer $A_1$ of
Figure~\ref{fig:answers} is produced. Moreover, from now on, when a
new path reaches $\mathit{city}$, we will try to generate more answers
by applying $\bid{produceAnswers}(\mathcal{P}, p)$.
\subsection{The Pseudocode of the GTF Algorithm}\label{sec:pseudocode}
\input{alg_gtf}
The GTF algorithm is presented in Figure~\ref{alg:gtf}
and its helper procedures---in Figure~\ref{alg:gtf:helpers}. The input is
a data graph $G=(V,E)$ and a query $K=\left\{{k_1,\ldots,k_n}\right\}$.
The algorithm uses a single priority queue $Q$ to generate,
by increasing weight, all simple paths to every keyword node of $K$.
For each node $v\in V$, there is a flag $\mathit{isKRoot}$ that
indicates whether $v$ has a path to each keyword of $K$.
Initially, that flag is \textbf{false}.
For each node $v\in V$, the set of the constructed paths from $v$ to
the keyword $k$ is stored in $v.paths[k]$, which is initially empty.
Also, for all the keywords of $K$ and nodes of $G$, we initialize
the marks to be $\textit{active}$ and the lists of frozen paths to be empty.
The paths are constructed backwards, that is, from the last node (which is always a keyword).
Therefore, for each $k\in K$, we insert the path
$\langle k \rangle$ (consisting of the single node $k$) into $Q$.
All these initializations are done in lines \ref{alg:gtf:initQ_start}--\ref{alg:gtf:initQ_end} (of Figure~\ref{alg:gtf}).
The main loop of
lines~\ref{alg:gtf:mainLoop_start}--\ref{alg:gtf:insertQ} is repeated
while $Q$ is not empty. Line~\ref{alg:gtf:popBestPath} removes the
best (i.e.,~least-weight) path $p$ from $Q$. Let $v$ and $k_i$ be the first
and last, respectively, nodes of $p$. Line~\ref{alg:gtf:testFreeze}
freezes $p$ provided that it has to be done. This is accomplished by
calling the procedure $\textbf{freeze}(p)$ of
Figure~\ref{alg:gtf:helpers} that operates as follows. If the mark of
$v$ for $k_i$ is $\textit{visited}$, then $p$ is frozen at $v$ by adding it to
$v.frozen[k_i]$ and \textbf{true} is returned; in addition, the main
loop continues (in line~\ref{alg:gtf:freezeContinue}) to the next
iteration. Otherwise, \textbf{false} is returned and $p$ is handled
as we describe next.
Line~\ref{alg:gtf:testActiveStart} checks if $p$ is the first path
from $v$ to $k_i$ that has been removed from $Q$.
If so, line~\ref{alg:gtf:testActiveEnd}
changes the mark of $v$ for $k_i$ from $\textit{active}$ to $\textit{visited}$.
Line~\ref{alg:gtf:relaxGetsTrue} assigns \textbf{true} to the flag
$\mathit{relax}$, which means that (as of now) $p$ should
spawn new paths that will be added to $Q$.
The test of line~\ref{alg:gtf:testOldRoot} splits the execution of
the algorithm into two cases.
If $v$ is a $K$-root (which must have
been discovered in a previous iteration and means that for every
$k\in K$, there is a path from $v$ to $k$), then the
following is done. First, line~\ref{alg:gtf:unfrAtOldRoot} calls the
procedure $\textbf{unfreeze}(p,Q)$ of Figure~\ref{alg:gtf:helpers}
that unfreezes (i.e.,~inserts into $Q$) all the paths to $k_i$ that
are frozen at nodes of $p$ (i.e.,~the paths of ${\bar v}.frozen[k_i]$,
where $\bar v$ is a node of $p$). In addition, for all nodes ${\bar v}$
of $p$, the procedure $\textbf{unfreeze}(p,Q)$ changes the
mark of ${\bar v}$ for $k_i$ to $\textit{in-answer}$.
Second, line~\ref{alg:gtf:testLoopness} tests whether $p$ is acyclic. If so,
line~\ref{alg:gtf:addNodeVisit1} adds $p$ to the paths of $v$ that reach $k_i$,
and line~\ref{alg:gtf:prodAns1} produces new answers that include $p$
by calling $\textbf{produceAnswers}$ of Figure~\ref{alg:gtf:helpers}.
The pseudocode of $\bid{produceAnswers}(v.paths,p)$ is just an efficient implementation
of considering every combination of paths $p_1,\ldots,p_n$,
such that $p_i$ is from $v$ to $k_i$ ($1\le i\le n$),
and checking that it is an answer to $K$.
(It should be noted that GTF generates answers by increasing height.)
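The combination step can be sketched as follows. This Python fragment is only an illustration of pairing the newly exposed path with one stored path per remaining keyword; for brevity, the non-redundancy test shown is only the root-children condition from Section~\ref{sec:prelim}, and the check that the union of the paths forms a subtree of $G$ is omitted:

```python
from itertools import product

def produce_answers(paths_by_keyword, new_path, new_keyword):
    """Combine new_path with one stored path per other keyword and
    keep combinations whose root has at least two distinct children
    (paths are node lists that all start at the same root)."""
    others = [k for k in paths_by_keyword if k != new_keyword]
    answers = []
    for combo in product(*(paths_by_keyword[k] for k in others)):
        trees = (new_path,) + combo
        children = {p[1] for p in trees if len(p) > 1}
        if len(children) >= 2:
            answers.append(trees)
    return answers
```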
If the test of line~\ref{alg:gtf:testLoopness} is \textbf{false},
then the flag $\mathit{relax}$ is changed back to \textbf{false},
thereby ending the current iteration of the main loop.
If the test of line~\ref{alg:gtf:testOldRoot} is \textbf{false}
(i.e.,~$v$ has not yet been discovered to be a $K$-root), the execution
continues in line~\ref{alg:gtf:addToPaths} that adds $p$ to the paths
of $v$ that reach $k_i$. Line~\ref{alg:gtf:becomesRoot} tests whether
$v$ is now a $K$-root and if so, the flag $\mathit{isKRoot}$ is set to
\textbf{true} and the following is done. The nested loops of
lines~\ref{alg:gtf:newRootStart}--\ref{alg:gtf:newRootEnd} iterate
over all paths $p'$ (that have already been discovered) from $v$ to
any keyword node of $K$ (i.e.,~not just $k_i$). For each $p'$, where
${k'}$ is the last node of $p'$ (and, hence, is a keyword),
line~\ref{alg:gtf:unfrNewRoot} calls $\textbf{unfreeze}(p',Q)$,
thereby inserting into $Q$ all the paths to ${k'}$ that are frozen
at nodes of $p'$ and changing the mark (for ${k'}$) of those nodes
to $\textit{in-answer}$.
Line~\ref{alg:gtf:removeCyclic} removes all the cyclic paths among
those stored at $v$. Line~\ref{alg:gtf:prodAns2} generates answers
from the paths that remain at $v$.
If the test of line~\ref{alg:gtf:relaxIsTrue} is \textbf{true},
the relaxation of $p$ is done in
lines~\ref{alg:gtf:relaxStart}--\ref{alg:gtf:relaxEnd} as follows.
For each parent $v'$ of $v$, the
path $v' \rightarrow p$ is inserted into $Q$ if either one of the
following two holds (as tested in
line~\ref{alg:gtf:testEssential}). First, $v'$ is not on $p$.
Second, $v' \rightarrow p$
is essential, according to the following definition. The path $v'
\rightarrow p$ is \emph{essential} if $v'$ appears on $p$ and the
section of $v' \rightarrow p$ from its first node (which is $v'$) to
the next occurrence of $v'$ has at least one node $u$, such that
$u.\mathit{marks[k]}=\textit{visited}$, where the keyword $k$ is the last node of $p$.
Appendix~\ref{sec:looping_paths}
gives an example
that shows why essential paths (which are cyclic)
have to be inserted into $Q$.
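The essentiality test above can be written out as a small predicate. A sketch under our own representation (a path is a node list and marks maps a node-keyword pair to its mark; the interpretation that the inspected section includes its endpoints is our reading of the definition):

```python
def is_essential(extended, marks, keyword):
    """Test whether the cyclic extension v' -> p is essential: its
    first node reappears later, and some node in the section up to
    that next occurrence is marked visited for the path's keyword."""
    first = extended[0]
    try:
        nxt = extended.index(first, 1)
    except ValueError:
        return False  # first node does not reappear: not a cyclic extension
    return any(marks.get((u, keyword)) == "visited"
               for u in extended[:nxt + 1])
```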
Note that due to line~\ref{alg:gtf:reassignFalse}, no cyclic path $p[v,k]$
is relaxed if $v$ has already been discovered to
be a $K$-root in a previous iteration. The reason is that none of the nodes
along $p[v,k]$ could have the mark $\textit{visited}$ for the keyword $k$ (hence,
no paths are frozen at those nodes).
Observe that before $v$ is known to be a $K$-root, we add cyclic paths to
the array $v.paths$. Only when discovering that $v$ is a $K$-root, do we
remove all cyclic paths from $v.paths$ (in
line~\ref{alg:gtf:removeCyclic}) and stop adding them in subsequent
iterations. This is lazy evaluation, because prior to
knowing that answers with the $K$-root $v$ should be produced, it is a
waste of time to test whether paths from $v$ are cyclic.
\subsection{The Naive Approach}\label{sec:naive}
Consider a query $K=\left\{{k_1,\ldots,k_n}\right\}$.
The algorithm of~\cite{icdeBHNCS02} uses a backward shortest-path iterator from
each keyword node $k_i$. That is, starting at each $k_i$, it applies
Dijkstra's shortest-path algorithm in the opposite direction of the
edges. If a node $v$ is reached by the backward iterators from all the $k_i$,
then $v$ is a $K$-root (and, hence, might be the root of some answers).
In this way, answers are generated by increasing height.
However, this approach can only find answers that consist of
shortest paths from the root to the keyword nodes.
Hence, it misses answers (e.g.,~it cannot produce $A_3$ of
Figure~\ref{fig:answers}).
Dijkstra's algorithm can be straightforwardly generalized to construct
all the simple (i.e.,~acyclic) paths by increasing weight.
This approach is used\footnote{They used it on a small \emph{summary} graph
to construct database queries from keywords.}
in~\cite{icdeTWRC09} and it consists of two parts:
path construction and answer production. Each constructed path is from
some node of $G$ to a keyword of $K$. Since paths are constructed
backwards, the algorithm starts simultaneously from all the keyword
nodes of $K$. It uses a single priority queue to generate, by increasing weight,
all simple paths to every keyword node of $K$.
When the algorithm discovers that a node $v$ is a
$K$-root (i.e.,~there is a path from $v$ to every $k_i$), it
starts producing answers rooted at $v$. This is done by considering
every combination of paths $p_1,\ldots,p_n$, such that $p_i$ is from
$v$ to $k_i$ ($1\le i\le n$). If the combination is a non-redundant
$K$-subtree of $G$, then it is produced as an answer.
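The backward path-construction part of this naive approach can be sketched as a priority-queue enumeration. The following Python generator is illustrative only (the representation of parents and weights is ours); it yields all simple paths ending at a keyword node by increasing weight:

```python
import heapq

def all_backward_paths(parents, weights, keywords):
    """Enumerate, by increasing weight, all simple paths that end
    at a keyword node, constructed backwards from the keywords.

    parents: dict mapping a node to the nodes with an edge into it.
    weights: dict mapping nodes and edge pairs to positive weights.
    """
    heap = []
    for k in keywords:
        heapq.heappush(heap, (weights[k], (k,)))
    while heap:
        w, path = heapq.heappop(heap)
        yield w, path
        v = path[0]
        for p in parents.get(v, []):
            if p not in path:  # keep the extension simple (acyclic)
                new_w = w + weights[(p, v)] + weights[p]
                heapq.heappush(heap, (new_w, (p,) + path))
```

Note that this sketch drops cyclic extensions entirely, whereas GTF must also relax the essential cyclic paths discussed in Section~\ref{sec:pseudocode}.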
It should be noted that in~\cite{icdeTWRC09}, answers are subgraphs;
hence, every combination of paths $p_1,\ldots,p_n$ is an answer.
We choose to produce subtrees as answers for two reasons.
First, in the experiments of Section~\ref{sec:experiments_sum}, we compare
our approach with other systems that produce subtrees.
Second, it is easier for users to understand answers that are
presented as subtrees, rather than subgraphs.
\input{fig_gt}
The drawback of the above approach is constructing a large number of
paths that are never used in any of the generated answers. To
overcome this problem, the next section introduces the technique of
\emph{freezing}, whereby most non-minimal paths are generated only if
they are actually needed to produce answers.
Section~\ref{sec:pseudocode} describes the algorithm
\emph{Generating Trees with Freezing} (GTF)
that employs this technique.
To save space (when constructing all simple paths),
we use the common technique known as \emph{tree of paths}.
In particular, a path $p$ is a linked list,
such that its first node points to the rest of $p$.
As an example, consider the graph snippet of Figure~\ref{fig:datagraph}.
The paths that lead to the keyword $\mathit{France}$ are
$p_1$,~$p_2$, $p_3$, $p_4$, $p_5$~and $p_6$, shown in Figure~\ref{fig:gt}.
Their tree of paths is presented in Figure~\ref{fig:pathsTree}.
Since we build paths backwards, a data graph is preprocessed to
produce for each node $v$ the set of its \emph{parents}, that is, the
set of nodes $v'$, such that $(v',v)$ is an edge of the data graph. We
use the following notation. Given a path $p$ that starts at a node
$v$, the extension of $p$ with a parent $v'$ of $v$ is denoted by $v'
\rightarrow p$. Note that $v'$ is the first node of $v' \rightarrow p$
and $v$ is the second one.
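The tree-of-paths representation can be sketched as follows (an illustrative Python fragment; the class and method names are ours, not from the paper). Each path stores only its first node and a pointer to the shared rest, so the extension $v' \rightarrow p$ costs constant space:

```python
class Path:
    """A path stored as a linked list: its first node plus a pointer to the
    rest of the path.  Extensions share their suffix, so all the paths that
    lead to a keyword together form a tree of paths."""
    __slots__ = ('node', 'rest')

    def __init__(self, node, rest=None):
        self.node = node   # first node of the path
        self.rest = rest   # remaining path (shared, never copied)

    def extend(self, parent):
        """Return the path parent -> self, built backwards in O(1) space."""
        return Path(parent, self)

    def nodes(self):
        """List the nodes of the path, for inspection only."""
        p, out = self, []
        while p is not None:
            out.append(p.node)
            p = p.rest
        return out
```

For instance, extending the path $\mathit{country} \rightarrow \mathit{France}$ with the parents $\mathit{province}$ and $\mathit{city}$ yields two paths that physically share their suffix rather than copying it.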
\input{fig_paths_tree}
\section{Summary of the Experiments}\label{sec:experiments_sum}
In this section, we summarize our experiments.
The full description of the methodology and results is
given in Appendix~\ref{sec:experiments}.
We performed extensive experiments to measure the efficiency of GTF.
The experiments were done on the
Mondial\footnote{http://www.dbis.informatik.uni-goettingen.de/Mondial/}
and DBLP\footnote{http://dblp.uni-trier.de/xml/} datasets.
To test the effect of freezing, we ran the naive approach (described
in Section~\ref{sec:naive}) and GTF on both datasets. We measured the
running times of both algorithms for generating the top-$k$ answers
($k=100, 300, 1000$). We discovered that the freezing technique gives
an improvement of up to about one order of magnitude. It has a greater
effect on Mondial than on DBLP, because the former is highly cyclic
and, therefore, has more paths (on average) between a pair of nodes.
Freezing has a greater effect on long queries than short ones. This is
good, because the bigger the query, the longer it takes to produce its
answers. This phenomenon is due to the fact that the average height
of answers increases with the number of keywords. Hence, the naive
approach has to construct longer (and probably more) paths that do not
contribute to answers, whereas GTF avoids most of that work.
In addition, we compared the running times of GTF with those of
BANKS~\cite{icdeBHNCS02,vldbKPCSDK05}, BLINKS~\cite{sigmodHWYY07},
SPARK~\cite{tkdeLWLZWL11} and ParLMT~\cite{pvldbGKS11}. The last one
is a parallel implementation of~\cite{sigmodGKS08}; we used its
variant ES (early freezing with single popping) with 8 threads. BANKS
has two versions, namely, MI-BkS~\cite{icdeBHNCS02} and
BiS~\cite{vldbKPCSDK05}. The latter is faster than the former by up to one
order of magnitude and we used it for the running-time comparison.
GTF is almost always the best, except in two particular cases.
First, when generating $1,000$ answers over Mondial, SPARK is
better than GTF by a tiny margin on queries with 9 keywords, but is
slower by a factor of two when averaging over all queries. On DBLP, however,
SPARK is slower than GTF by up to two orders of magnitude. Second, when
generating $100$ answers over DBLP, BiS is slightly better than GTF on
queries with 9 keywords, but is 3.5 times slower when averaging over all
queries. On Mondial, however, BiS is slower than GTF by up to one
order of magnitude. All in all, BiS is the second best algorithm in most
of the cases. The other systems are slower than GTF by one to two
orders of magnitude.
Not only is our system faster, it also becomes
increasingly efficient as either the number of generated answers
or the size of the data graph grows. This may seem counterintuitive,
because our algorithm is capable of generating all paths (between a
node and a keyword) rather than just the minimal one(s). However, our
algorithm generates non-minimal paths only when they can potentially
contribute to an answer, so it does not waste time on doing useless
work. Moreover, if only minimal paths are constructed, then longer
ones may be needed in order to produce the same number of answers,
thereby causing more work compared with an algorithm that is capable
of generating all paths.
GTF does not miss answers (i.e.,~it is capable of generating all of them).
Among the other systems we tested,
ParLMT~\cite{pvldbGKS11} has this property and is theoretically
superior to GTF, because it enumerates
answers with polynomial delay (in a 2-approximate order of increasing height),
whereas the delay of GTF could be exponential. In our experiments,
however, ParLMT was slower by two orders of magnitude, even though it
is a parallel algorithm (that employed eight cores in our tests).
Moreover, on a large dataset, ParLMT ran out of memory when the query had seven
keywords. The big practical advantage of GTF over ParLMT
is explained as follows.
The former constructs paths incrementally whereas the latter
(which is based on the Lawler-Murty procedure~\cite{mansciL72,orM68})
has to solve a new optimization problem for each produced answer,
which is costly in terms of both time and space.
A critical question is how important it is to have an algorithm that
is capable of producing all the answers. We compared our algorithm
with BANKS. Its two versions only generate answers consisting
of minimal paths and, moreover, those produced by BiS have distinct
roots. BiS (which is overall the second most efficient system in our
experiments) misses between $81\%$ (on DBLP) to $95\%$ (on Mondial) of
the answers among the top-$100$ generated by GTF. MI-BkS misses far
fewer answers, that is, between $1.8\%$ (on DBLP) and $32\%$ (on
Mondial), but it is slower than BiS by up to one order of magnitude. For
both versions the percentage of misses increases as the number of
generated answers grows. This is a valid and significant comparison,
because our algorithm generates answers in the same order as BiS and
MI-BkS, namely, by increasing height.
\section{Experiments}\label{sec:experiments}
In this appendix, we discuss the methodology and results of the experiments.
First, we start with a description of the datasets and the queries.
Then, we discuss the setup of the experiments.
Finally, we present the results.
This appendix expands the summary given in Section~\ref{sec:experiments_sum}.
\subsection{Datasets and Queries\label{sec:datasets}}
Most of the systems we tested are capable of parsing a relational
database to produce a data graph. Therefore, we selected this
approach. The experiments were done on
Mondial\footnote{http://www.dbis.informatik.uni-goettingen.de/Mondial/}
and DBLP\footnote{http://dblp.uni-trier.de/xml/} that are commonly
used for testing systems for keyword-search over data graphs.
Mondial is a highly connected graph.
We specified 65 foreign keys in Mondial,
because the downloaded version lacks such definitions.
DBLP has only an XML version that is available for download. In
addition, that version is just a bibliographical list. In order to
make it more meaningful, we modified it as follows. We replaced
the citation and cross-reference elements with IDREFS. Also, we
created a unique element for each \emph{author, editor} and
\emph{publisher}, and replaced each of their original occurrences with
an IDREF. After doing that, we transformed the XML dataset into a
relational database. To test scalability, we also created a subset of DBLP.
The two versions of DBLP are called \emph{Full DBLP} and \emph{Partial DBLP}.
Table~\ref{tbl:dataset_stats} gives the sizes of the data graphs. The
numbers of nodes and edges exclude the keyword nodes and their
incident edges. The average degree of the Mondial
data graph is the highest, due to its high connectivity.
We manually created queries for Mondial and DBLP.
The query size varies from 2 to 10 keywords, and there
are four queries of each size.
\begin{table}[t]
\centering
\caption{\label{tbl:dataset_stats}The sizes of the data graphs}
\begin{tabular}{l | c | c | c}
Dataset & Nodes & Edges & Average degree\\ \hline
Mondial & 21K & 86K & 4.04\\
Partial DBLP & 649K & 1.6M & 2.48\\
Full DBLP & 7.5M & 21M & 2.77\\
\hline
\end{tabular}
\end{table}
In principle, answers should be ranked according to their weight.
However, answers cannot be generated efficiently by increasing weight.
Therefore, a more tractable order (e.g.,~by increasing height) is used
for generating answers, followed by sorting according to the desired ranking.
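The resulting two-phase scheme can be sketched as follows (illustrative only; the `slack` over-generation factor is an assumption for the example, not a value used in the paper):

```python
from itertools import islice

def top_k_by_weight(answers_by_height, weight, k, slack=2):
    """Take the first k * slack answers from an enumerator that produces
    answers in a tractable order (increasing height), then re-sort them by
    the desired weight function and keep the top k."""
    candidates = list(islice(iter(answers_by_height), k * slack))
    return sorted(candidates, key=weight)[:k]
```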
We employed the weight function described in~\cite{pvldbGKS11},
because, based on our experience, it is effective in practice.
The other tested systems have their own weight functions.
We have given the reference to the weight function
so that the presentation of our work is complete.
However, this paper is about the efficiency of search algorithms, rather than their effectiveness.
Therefore, measuring recall and precision is beyond the scope of this paper.
\subsection{The Setup of the Experiments}\label{sec:setup}
We ran the tests on a server with two quad-core, 2.67GHz Xeon X5550
processors and 48GB of RAM. We used Linux Debian (kernel 3.2.0-4-amd64),
Java 1.7.0\_60, PostgreSQL 9.1.13 and MySQL 5.5.35.
The Java heap was allocated 10GB of RAM.
Systems that use a DBMS were given an additional 5GB of RAM for the DBMS buffer.
The tests were executed on BANKS~\cite{icdeBHNCS02,vldbKPCSDK05},
BLINKS~\cite{sigmodHWYY07}, SPARK~\cite{tkdeLWLZWL11}, GTF (our
algorithm) and ParLMT~\cite{pvldbGKS11}. The latter is a parallel
implementation of~\cite{sigmodGKS08}; we used its
variant ES (early freezing with single popping) with 8 threads.
BANKS has two versions, namely, MI-BkS~\cite{icdeBHNCS02} and
BiS~\cite{vldbKPCSDK05}. The latter is faster than the former by one
order of magnitude and we used it for the running-time comparison.
We selected these systems because their code is available.
Testing the efficiency of the systems is a bit like comparing apples
and oranges. They use different weight functions and produce answers
in dissimilar orders. Moreover, not all of them are capable of
generating all the answers. We have striven to make the tests as equitable
as possible.
The exact configuration we used for each system, as well as some other details,
are given in Appendix~\ref{sec:config_details}.
We measured the running times as a function of the query size.
The graphs show the average over the four queries of each size.
The time axis is logarithmic, because there are order-of-magnitude differences
between the fastest and slowest systems.
\subsection{The Effect of Freezing}
To test the effect of freezing, we ran the naive approach (described in Section~\ref{sec:naive})
and GTF on all the
datasets. Usually, a query has thousands of answers,
because of multiple paths between pairs of nodes. We executed the
algorithms until the first $100$, $300$ and $1,000$ answers were
produced.
Then, we measured the speedup of GTF over the naive approach.
Typical results are shown in
Figure~\ref{fig:GTFSpeedup}.
Note that smaller queries are not necessarily subsets of
larger ones. Hence, long queries may be computed faster
than short ones. Generally, the computation time depends on the
length of the query and the percentage of nodes in which the
keywords appear.
\input{runtime_charts_freezing_b}
The freezing technique gives an improvement of up to about one order of
magnitude. It has a greater effect on Mondial than on DBLP, because
the former is highly cyclic and, therefore, there are more paths (on
average) between a pair of nodes. The naive approach produces all those paths,
whereas GTF freezes the construction of most of them until they are needed.
On all datasets, freezing has a greater effect on long queries than
short ones. This is good, because the bigger the query, the longer it
takes to produce its answers. This phenomenon is due to the fact that the
average height of answers increases with the number of
keywords. Hence, the naive approach has to construct longer (and
probably more) paths that do not contribute to answers, whereas GTF
avoids most of that work. In a few cases, GTF is slightly slower than
the naive approach, due to its overhead.
However, this happens for short queries that are computed very fast anyhow;
so, this is not a problem.
\input{runtime_charts_mondial_rdb}
\subsection{Comparison with Other Systems}\label{sec:comparison}
In this section, we compare the running times of the systems we
tested. We ran each system to produce $100$, $300$ and $1,000$ answers
for all the queries and datasets.
Typical results are shown in
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeDblp100}.
GTF is almost always the best, sometimes by several orders of magnitude,
except for a few cases in which it is second by a tiny margin.
When running SPARK, many answers contained only some of the keywords,
even though we set it up for the AND semantics (to be the same as the other
systems).
On Mondial, SPARK is generally the second best.
Interestingly, its running time increases only slightly as the number of
answers grows. In particular, Figure~\ref{fig:runtimeMondial1000} shows that for
$1,000$ answers, SPARK is the second best by a wide margin, compared with
BANKS which is third.
The reason for that is the way SPARK works. It produces expressions
(called \emph{candidate networks}) that are evaluated over a database.
Many of those expressions are \emph{empty}, that is, they produce no answer at all.
Mondial is a small dataset, so the time to compute a non-empty expression
is not much longer than computing an empty one. Hence, the running time hardly
depends on the number of generated answers.
However, on the partial and full versions of DBLP (which are large
datasets), SPARK is much worse than BANKS, even when producing only
$100$ answers (Figures~\ref{fig:runtimePdblp100}
and~\ref{fig:runtimeDblp100}). We conclude that by and large BANKS
is the second best.
In Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeMondial1000},
the advantage of GTF over BANKS grows as more answers are produced.
Figures~\ref{fig:runtimePdblp100} and~\ref{fig:runtimeDblp100} show
that the advantage of GTF increases as the dataset becomes larger. We
conclude that GTF is more scalable than other systems when either the
size of the dataset or the number of produced answers is increased.
The running time of ParLMT grows linearly with the number of answers,
causing the gap with the other systems to get larger (as more answers
are produced). This is because ParLMT solves a new optimization
problem from scratch for each answer. In comparison, BANKS and GTF
construct paths incrementally, rather than starting all over again at
the keyword nodes for each answer.
ParLMT got an out-of-memory exception when computing long queries on
Full DBLP (hence, Figure~\ref{fig:runtimeDblp100} shows the
running time of ParLMT only for queries with at most six keywords).
This means that ParLMT is memory inefficient compared with the other systems.
BLINKS is the slowest system. On Mondial, the average running time to
produce 100 answers is 46 seconds. Since this is much worse than the
other systems,
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeMondial1000} do not
include the graph for BLINKS. On Partial DBLP,
BLINKS is still the slowest (see Figure~\ref{fig:runtimePdblp100}).
On Full DBLP, BLINKS got an out-of-memory exception
during the construction of the indexes.
This is not surprising, because BLINKS stores its indexes solely in main memory.
\input{runtime_charts_dblps_rdb}
\input{tbl_coef_var}
Table~\ref{tbl:coef_var} gives the averages of the
coefficients of variation (CV) for assessing the confidence in the
experimental results. These averages are for each system, dataset
and number of generated answers. Each average is over all query
sizes. Missing entries indicate out-of-memory exceptions. Since
the entries of Table~\ref{tbl:coef_var} are less than $1$, the
experimental results have a low variance. It is not surprising that
sometimes the CVs of GTF and ParLMT are the largest, because these
are the only two systems that can generate all the
answers. Interestingly, on the largest dataset, GTF has the best CV,
which is another testament to its scalability. SPARK has the
best CV on small datasets, because (as mentioned earlier)
its running times are hardly affected by either the query size or the
selectivity of the keywords.
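For completeness, the CV entries of such a table can be computed as follows (a hypothetical sketch using Python's standard library, not the scripts used in our experiments):

```python
from statistics import mean, pstdev

def coefficient_of_variation(times):
    """CV of a sample of running times: population std over the mean."""
    return pstdev(times) / mean(times)

def average_cv(times_per_query_size):
    """One table entry: the average of the CVs over all query sizes."""
    return mean(coefficient_of_variation(t) for t in times_per_query_size)
```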
\subsection{The Importance of Producing all Answers}\label{sec:all}
The efficiency of GTF is a significant achievement especially in light
of the fact that it does not miss answers (i.e.,~it is capable of
generating all of them). BANKS, the second most efficient system
overall, cannot produce all the answers. In this section, we
experimentally show the importance of not missing answers.
BANKS has two variants: Bidirectional Search (BiS)~\cite{vldbKPCSDK05}
and Multiple Iterator Backward Search (MI-BkS)~\cite{icdeBHNCS02}.
Figures~\ref{fig:runtimeMondial100}--\ref{fig:runtimeDblp100} show the
running times of the former, because it is the faster of the two.
MI-BkS is at least an order of magnitude slower than BiS. For example,
it takes MI-BkS an average (over all query sizes) of $2.7$
seconds to produce 100 answers on Mondial. However, as noted
in~\cite{vldbKPCSDK05}, BiS misses more answers than MI-BkS,
because it produces just a single answer for each root. MI-BkS is
capable of producing several answers for the same root, but all of
them must have a shortest path from the root to each keyword
(hence, for example, it would miss the answer $A_3$ of
Figure~\ref{fig:answers}).
To test how many answers are missed by BANKS, we ran GTF on all the queries and measured
the following in the generated results. First, the percentage of
answers that are rooted at the same node as some previously created
answer. Second, the percentage of answers that contain at least one
non-shortest path from the root to some node that contains
a keyword of the query.\footnote{We did that after removing the
keyword nodes so that answers have the same structure as in BANKS.}
The former and the latter percentages show how many answers are missed
by BiS and MI-BkS, respectively, compared with GTF. The results are
given in Table~\ref{tbl:banks_misses}.
For example, it shows that on Mondial, among the
first 100 answers of GTF, there are only $5$ different roots. So, BiS
would miss $95$ of those answers. Similarly, BiS is capable of
generating merely $20$ answers among the first $1,000$ produced by
GTF. Table~\ref{tbl:banks_misses} also shows that MI-BkS is more
effective, but still misses answers. On Mondial, it misses $32$ and $580$ answers among
the first $100$ and $1,000$, respectively, generated by GTF.
Generally, the percentage of missed answers increases with the number
of generated answers and also depends heavily on the dataset. It is
higher for Mondial than for DBLP, because the former is highly
connected compared with the latter. Therefore, more answers rooted at
the same node or containing a non-shortest path from the root to a
keyword could be created for Mondial. Both GTF and BANKS enumerate answers by
increasing height. Since BANKS misses so many answers, it must
generate other answers having a greater height, which are likely to be
less relevant.
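The two miss criteria described above can be sketched as follows (illustrative Python; the answer representation and the `shortest_dist` table are assumptions for the example):

```python
def miss_counts(answers, shortest_dist):
    """Count, among the answers generated by GTF, those that BiS and MI-BkS
    would miss.

    answers       -> list of (root, {leaf: distance-from-root in the answer})
                     where the leaves are the nodes containing the keywords
    shortest_dist -> shortest_dist[(root, leaf)]: distance in the data graph
    """
    seen_roots = set()
    missed_bis = missed_mibks = 0
    for root, leaf_dists in answers:
        if root in seen_roots:                  # BiS keeps one answer per root
            missed_bis += 1
        seen_roots.add(root)
        if any(d > shortest_dist[(root, leaf)]  # some non-shortest root-to-leaf path
               for leaf, d in leaf_dists.items()):
            missed_mibks += 1                   # MI-BkS cannot produce this answer
    return missed_bis, missed_mibks
```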
\input{tbl_banks_misses}
\section{Conclusions}
We presented the GTF algorithm for enumerating, by increasing height,
answers to keyword search over data graphs. Our main contribution is
the freezing technique for avoiding the construction of (most if not
all) non-minimal paths until it is determined that they can reach
$K$-roots (i.e.,~potentially be parts of answers). Freezing is an
intuitive idea, but its incorporation in the GTF algorithm involves
subtle details and requires an intricate proof of correctness. In
particular, cyclic paths must be constructed (see
Appendix~\ref{sec:looping_paths}), although they are not part of any
answer. For efficiency's sake, however, it is essential to limit the
creation of cyclic paths as much as possible, which is accomplished by
lines~\ref{alg:gtf:reassignFalse} and~\ref{alg:gtf:testEssential} of
Figure~\ref{alg:gtf}.
Freezing is not merely of theoretical importance. Our extensive
experiments (described in Section~\ref{sec:experiments_sum}
and Appendix~\ref{sec:experiments}) show
that freezing increases efficiency by up to about one order of magnitude
compared with the naive approach (of Section~\ref{sec:naive}) that
does not use it.
The experiments of Section~\ref{sec:experiments_sum} and
Appendix~\ref{sec:experiments}
also show that in
comparison to other systems, GTF is almost always the best, sometimes
by several orders of magnitude.
Moreover, our algorithm is more scalable than other systems.
The efficiency of GTF is a significant achievement especially in light
of the fact that it is complete (i.e.,~does not miss answers).
Our experiments show that some of the other systems
sacrifice completeness for the sake of efficiency.
Practically, it means that they generate longer paths resulting
in answers that are likely to be less relevant than the missed ones.
The superiority of GTF over ParLMT is an indication that polynomial
delay might not be a good yardstick for measuring the practical
efficiency of an enumeration algorithm. An important topic for future
work is to develop theoretical tools that are more appropriate for
predicting the practical efficiency of those algorithms.
\section{The Need for Essential Paths}\label{sec:looping_paths}
In this section, we give an example showing that essential cyclic paths must
be constructed by the GTF algorithm, in order not to miss some answers.
Suppose that we modify the test of line~\ref{alg:gtf:testEssential}
of Figure~\ref{alg:gtf} to be ``$v'$ is not on $p$'' (i.e.,~we omit
the second part, namely, ``$v' \rightarrow p$ is essential'').
Note that the new test means that only acyclic paths are inserted into $Q$.
\input{fig_looping_paths}
Consider the data graph of Figure~\ref{fig:freezing_looping_pahts}(a).
The weights of the nodes are 1 and the weight of each edge appears next to it.
Suppose that the query is $\{k_1,k_2\}$.
We now show that the modified GTF algorithm would miss
the answer presented in Figure~\ref{fig:freezing_looping_pahts}(c),
where the root is $r$.
The initialization step of the algorithm inserts into $Q$
the paths $k_1$ and $k_2$, each consisting of a single keyword.
Next, we describe the iterations of the main loop.
For each one, we specify the path that is removed from $Q$
and the paths that are inserted into $Q$.
We do not mention explicitly how the marks are changed,
unless it eventually causes freezing.
\begin{enumerate}
\item
The path $k_1$ is removed from $Q$, and
the paths $a \rightarrow k_1$ and $b \rightarrow k_1$ are inserted into $Q$.
\item
The path $k_2$ is removed from $Q$ and
the path $r \rightarrow k_2$ is inserted into $Q$.
\item
The path $b \rightarrow k_1$ is removed (since its weight is the
lowest on $Q$), and $r \rightarrow b \rightarrow k_1$ and
$p_l = c \rightarrow b \rightarrow k_1$ are inserted
(the latter is shown in blue in Figure~\ref{fig:freezing_looping_pahts}(a)).
\item
The path $a \rightarrow k_1$ is removed from $Q$ and the path
$p_s= c \rightarrow a \rightarrow k_1$ (shown in red) is inserted into $Q$.
\item
The path $p_l$ is removed from $Q$ and
$c.\mathit{marks}[k_1]$ is changed to $\textit{visited}$; then the path $d \rightarrow p_l$
is inserted into $Q$.
\item
The path $r \rightarrow b \rightarrow k_1$ is
removed from $Q$ and nothing is inserted into $Q$.
\item
The path $p_s$ is removed from $Q$ and is frozen at node $c$.
\item
The path $d \rightarrow p_l$ is removed from $Q$.
It can only be extended further to node $b$, but that would create a cycle,
so nothing is inserted into $Q$.
\end{enumerate}
Eventually, node $r$ is discovered to be a $K$-root. However,
$c.\mathit{marks}[k_1]$ will never be changed from $\textit{visited}$ to $\textit{in-answer}$
for the following reason.
The minimal path that first visited $c$ (namely, $p_l$) must make a
cycle to reach $r$. Therefore, the path $p_s$ remains frozen at
node $c$ and the answer of Figure~\ref{fig:freezing_looping_pahts}(c)
will not be produced.
\section{Proof of Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS}}\label{sec:proofLemmaSMP}
\setcounter{theorem}{2}
\lemmaSPM*
\begin{proof}
Suppose that the lemma is not true for some keyword $k\in K$.
Let $v$ be a closest node to $k$
among all those violating
the lemma with respect to $k$.
Node $v$ is different from $k$, because the path
$\anset{k}$ marks $k$ as $\textit{visited}$. We will derive a contradiction by
showing that a minimal path changes $v.\mathit{marks[k]}$
from $\textit{active}$ to $\textit{visited}$.
Let $p_s[v,k]$ be a minimal path from $v$ to $k$.
Consider the iteration $i$ of the main loop
(line~\ref{alg:gtf:mainLoop_start} in Figure~\ref{alg:gtf}) that changes
$v.\mathit{marks[k]}$ to $\textit{visited}$ (in line~\ref{alg:gtf:markVisited}).
Among all the nodes of $p_s[v,k]$
in which suffixes of some minimal paths from $v$ to $k$ are
frozen at the beginning of iteration $i$, let $z$ be the first one
when traversing $p_s[v,k]$ from $v$ to $k$
(i.e.,~on the path $p_s[v,z]$, node $z$ is the only one in which
such a suffix is frozen). Node $z$ exists for the following three reasons.
\begin{itemize}
\item The path $p_s[v,k]$ has not been exposed prior to
iteration $i$, because we assume that $v.\mathit{marks[k]}$ is
changed to $\textit{visited}$ in iteration $i$ and that change can happen only once.
\item The path $p_s[v,k]$ is acyclic (because it is minimal), so
a suffix of $p_s[v,k]$ could not have been discarded either
by the test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}.
\item The path $p_s[v,k]$ (or any suffix thereof) cannot be on the
queue at the beginning of iteration $i$, because $v$ violates the
lemma, which means that a non-minimal path from $v$ to $k$ must be
removed from the queue at the beginning of that iteration.
\end{itemize}
The above three observations imply that a proper suffix of $p_s[v,k]$
must be frozen at the beginning of iteration $i$ and, hence, node $z$ exists.
Observe that $z$ is different from $v$, because a path to $k$
can be frozen only at a node $\hat v$, such that
${\hat v}.\mathit{marks[k]}=\textit{visited}$, whereas we assume that
$v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of iteration $i$.
By the selection of $v$ and $p_s[v,k]$ (and the above fact that $z\not=v$),
node $z$ does not violate the lemma, because $p_s[z,k]$ is a proper
suffix of $p_s[v,k]$ and, hence, $z$ is closer to $k$ than $v$.
Therefore, according to the lemma, there is a minimal path $p_m[z,k]$
that changes $z.\mathit{marks[k]}$ to $\textit{visited}$. Consequently,
\begin{equation}\label{eqn:min-path}
w(p_m[z,k]) \leq w(p_s[z,k]).
\end{equation}
Now, consider the path
\begin{equation}\label{eqn:new-path}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation}
Since $p_s[v,k]$ is a minimal path from $v$ to $k$,
Equations~(\ref{eqn:min-path}) and~(\ref{eqn:new-path}) imply that
so is $\bar{p}[v,k]$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $i$. In particular,
Condition~\ref{cond:new-first} holds, because $p_s[v,k]$ is acyclic
(since it is minimal) and, hence, so is the path $p_s[v,z]$.
Condition~\ref{cond:new-second} is satisfied, because of how
$p_m[z,k]$ is defined. Condition~\ref{cond:first} holds, because we chose
$z$ to be a node where a path to $k$ is frozen.
Condition~\ref{cond:second} is satisfied, because of how $z$ was
chosen and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} is satisfied, because we have assumed
that $v.\mathit{marks[k]}$ is changed from $\textit{active}$ to $\textit{visited}$ during
iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of
$\bar p[v,k]$ must be on the queue at the beginning of iteration
$i$. This contradicts our assumption that a non-minimal path (which
has a strictly higher weight than any suffix of $\bar p[v,k]$) changes
$v.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$ in iteration $i$.
\end{proof}
\section{Configuration of the Systems and Methodology}\label{sec:config_details}
To facilitate repeatability of our experimental results,
we describe how we configured each of the tested systems.
For BANKS, we tested both of its variants, but the figures show the
running times of the more efficient one, namely, bidirectional search.
We disabled the search of keywords in the metadata, because it is not
available in some of the other systems. The size of the heap was set
to $20$, which means that BANKS generated $120$ answers in order to output $100$.
For the sake of a fair comparison, GTF was also set to generate $20$ more
answers than it was required to output.
In addition,
BANKS produced the answers without duplicates, with respect to
undirected semantics. Therefore, we also executed GTF
until it produced $120$, $320$ or $1,020$ answers without
duplicates.\footnote{Duplicate elimination in GTF
is done by first
removing the keyword nodes so that the structure of the answers is
the same as in BANKS.} For clarity's sake,
Section~\ref{sec:experiments} omits these details and describes all
the algorithms as generating $100$, $300$ and $1,000$ answers. In
BANKS, as well as in GTF,
redundant answers (i.e.,~those having a root with a single child)
were discarded, that is, they were not included among the generated answers.
The other systems produced $100$, $300$ or $1,000$ answers according to their default
settings.
For SPARK, we used the block-pipeline algorithm, because it was their
best on our datasets. The size of candidate networks (CN)
was bounded by 5. The \emph{completeness factor} was set to 2,
because the authors reported that this value enforces the AND
semantics for almost all queries.
In BLINKS, we used a random partitioner. The minimum size of a
partition was set to 5, because a bigger size caused an out-of-memory
error. BLINKS stores its indexes in main memory; therefore, when
running BLINKS, we reserved 20GB (instead of 10GB) for the Java heap.
BANKS and SPARK use PostgreSQL and MySQL, respectively.
In those two DBMS, the default buffer size is tiny.
Hence, we increased it to 5GB (i.e.,~the parameters \texttt{shared\_buffers} of
PostgreSQL and \texttt{key\_buffer} of MySQL were set to 5GB).
For each dataset, we first executed a long query to warm
the system and only then did we run the experiments on that dataset.
To allow systems that use a DBMS to take full advantage of the buffer
size, we made two consecutive runs on each dataset (i.e.,~executing
once the whole sequence of queries on the dataset followed immediately
by a second run). This was done for \emph{all} the systems.
The reported times are those of the second run.
The running times do not include the parsing stage or
translating answers into a user-readable format.
For BLINKS, the time needed to construct the indexes is ignored.
\section{The GTF Algorithm}\label{sec:gtf}
\input{gt}
\subsection{Incorporating Freezing}
The general idea of freezing is to avoid the construction of paths that cannot contribute to the production of answers.
To achieve that, a non-minimal path $p$ is frozen
until it is certain that $p$ can reach (when constructed backwards)
a $K$-root.
In particular, the first path that reaches a node $v$ is always a minimal one.
When additional paths reach $v$, they are frozen there until
$v$ is discovered to be on a path from a $K$-root to a keyword node.
The process of answer production in the GTF algorithm remains the same as in the naive approach.
We now describe some details about the implementation of GTF. We mark
nodes of the data graph as either $\textit{active}$, $\textit{visited}$ or $\textit{in-answer}$. Since we
simultaneously construct paths to all the keywords (of the query
$K=\left\{{k_1,\ldots,k_n}\right\}$), a node has a separate mark for
each keyword. The marks of a node $v$ are stored in the array
$v.\mathit{marks}$, which has an entry for each keyword. For a
keyword $k_i$, the mark of $v$ (i.e.,~$v.\mathit{marks[k_i]}$) means
the following. Node $v$ is $\textit{active}$ if we have not yet discovered that
there is a path from $v$ to $k_i$. Node $v$ is $\textit{visited}$ if a minimal
path from $v$ to $k_i$ has been produced. And $v$ is marked as
$\textit{in-answer}$ when we discover for the first time that $v$ is on a path
from some $K$-root to $k_i$.
If $v.\mathit{marks[k_i]}$ is $\textit{visited}$ and a path $p$ from $v$ to $k_i$
is removed from the queue, then $p$ is \emph{frozen} at
$v$. Frozen paths from $v$ to $k_i$ are stored in a
dedicated list $v.\mathit{frozen}[k_i]$. The paths of
$v.\mathit{frozen}[k_i]$ are \emph{unfrozen} (i.e.,~are moved back into
the queue) when $v.\mathit{marks[k_i]}$ is changed to $\textit{in-answer}$.
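As an illustration, the bookkeeping above can be sketched in Python as follows. This is our simplified sketch, not the paper's implementation of GTF; the names (`Node`, `freeze_if_needed`, `unfreeze`) and the queue representation are ours.

```python
import heapq

ACTIVE, VISITED, IN_ANSWER = "active", "visited", "in-answer"

class Node:
    """Per-node bookkeeping for one query K (illustrative sketch)."""
    def __init__(self, name, keywords):
        self.name = name
        # One mark per keyword; all marks start as 'active'.
        self.marks = {k: ACTIVE for k in keywords}
        # Paths frozen at this node, stored per keyword.
        self.frozen = {k: [] for k in keywords}

def freeze_if_needed(queue_entry, node, keyword):
    """Freeze a non-minimal path at `node` while the mark is 'visited'."""
    if node.marks[keyword] == VISITED:
        node.frozen[keyword].append(queue_entry)
        return True          # path is parked, not relaxed
    return False

def unfreeze(path_nodes, keyword, queue):
    """Change every node's mark to 'in-answer' and release frozen paths."""
    for node in path_nodes:
        node.marks[keyword] = IN_ANSWER
        while node.frozen[keyword]:
            heapq.heappush(queue, node.frozen[keyword].pop())
```

Note that once a node's mark becomes $\textit{in-answer}$, `freeze_if_needed` returns \textbf{false}, so no path can freeze there again, which mirrors the behavior described above.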
We now describe the execution of GTF on the graph snippet of
Figure~\ref{fig:datagraph}, assuming that the query is
$K=\left\{\mathit{France}, \mathit{Paris}\right\}$.
Initially, two paths $\langle\mathit{France}\rangle$ and
$\langle\mathit{Paris}\rangle$,
each consisting of one keyword of $K$, are inserted into the queue,
where lower weight means higher priority.
Next, the top of the queue is removed; suppose that it is
$\langle\mathit{France}\rangle$.
First, we change $\mathit{France}.\mathit{marks}[\mathit{France}]$ to $\textit{visited}$.
Second, for each parent $v$ of $\mathit{France}$,
the path $v \rightarrow \mathit{France}$ is inserted into the queue;
namely, these are the paths $p_1$ and $p_2$ of Figure~\ref{fig:gt}.
We continue to iterate in this way. Suppose that now
$\langle\mathit{Paris}\rangle$ has the lowest weight.
So, it is removed from the queue,
$\mathit{Paris}.\mathit{marks}[\mathit{Paris}]$ is changed to $\textit{visited}$,
and the path $p_7$ (of Figure~\ref{fig:gt}) is inserted into the queue.
Now, let the path $p_1$ be removed from the queue.
As a result, $\mathit{province}.\mathit{marks}[\mathit{France}]$ is
changed to $\textit{visited}$, and the path $p_6= \mathit{city} \rightarrow p_1$
is inserted into the queue. Next, assume that $p_2$ is removed from
the queue. So, $\mathit{country}.\mathit{marks}[\mathit{France}]$ is
changed to $\textit{visited}$, and the paths $p_3= \mathit{province} \rightarrow
p_2$ and $p_5= \mathit{city} \rightarrow p_2$ are inserted into the queue.
Now, suppose that $p_3$ is at the top of the queue.
So, $p_3$ is removed and immediately frozen at $\mathit{province}$
(i.e.,~added to $\mathit{province}.\mathit{frozen}[\mathit{France}]$),
because $\mathit{province}.\mathit{marks}[\mathit{France}]=\textit{visited}$.
Consequently, no paths are added to the queue in this iteration.
Next, assume that $p_6$ is removed from the queue.
The value of $\mathit{city}.\mathit{marks}[\mathit{France}]$ is changed to
$\textit{visited}$ and no paths are inserted into the queue,
because $\mathit{city}$ has no incoming edges.
Now, suppose that $p_7$ is at the top of the queue. So, it is removed
and $\mathit{city}.\mathit{marks}[\mathit{Paris}]$ is changed to
$\textit{visited}$. Currently, both
$\mathit{city}.\mathit{marks}[\mathit{Paris}]$ and
$\mathit{city}.\mathit{marks}[\mathit{France}]$ are $\textit{visited}$. That is,
there is a path from $\mathit{city}$ to all the keywords of the query
$\left\{\mathit{France}, \mathit{Paris}\right\}$. Recall that the
paths that have reached $\mathit{city}$ so far are $p_6$ and $p_7$.
For each one of those paths $p$, the following is done,
assuming that $p$ ends at the keyword $k$.
For each node $v$ of $p$, we change the mark of $v$ for $k$ to $\textit{in-answer}$
and unfreeze paths to $k$ that are frozen at $v$.
Doing it for $p_6$ means that
$\mathit{city}.\mathit{marks}[\mathit{France}]$,
$\mathit{province}.\mathit{marks}[\mathit{France}]$ and
$\mathit{France}.\mathit{marks}[\mathit{France}]$ are all changed to
$\textit{in-answer}$. In addition, the path $p_3$ is removed from
$\mathit{province}.\mathit{frozen}[\mathit{France}]$ and inserted back
into the queue. We act similarly on $p_7$. That is,
$\mathit{city}.\mathit{marks}[\mathit{Paris}]$ and
$\mathit{Paris}.\mathit{marks}[\mathit{Paris}]$ are changed to
$\textit{in-answer}$. In this case, there are no paths to be unfrozen.
Now, the marks of $\mathit{city}$ for all the keywords (of the query)
are $\textit{in-answer}$. Hence, we generate answers from the paths that have
already reached $\mathit{city}$. As a result, the answer $A_1$ of
Figure~\ref{fig:answers} is produced. Moreover, from now on, when a
new path reaches $\mathit{city}$, we will try to generate more answers
by applying $\bid{produceAnswers}(\mathcal{P}, p)$.
\subsection{The Pseudocode of the GTF Algorithm}\label{sec:pseudocode}
\input{alg_gtf}
The GTF algorithm is presented in Figure~\ref{alg:gtf}
and its helper procedures---in Figure~\ref{alg:gtf:helpers}. The input is
a data graph $G=(V,E)$ and a query $K=\left\{{k_1,\ldots,k_n}\right\}$.
The algorithm uses a single priority queue $Q$ to generate,
by increasing weight, all simple paths to every keyword node of $K$.
For each node $v\in V$, there is a flag $\mathit{isKRoot}$ that
indicates whether $v$ has a path to each keyword of $K$.
Initially, that flag is \textbf{false}.
For each node $v\in V$, the set of the constructed paths from $v$ to
the keyword $k$ is stored in $v.paths[k]$, which is initially empty.
Also, for all the keywords of $K$ and nodes of $G$, we initialize
the marks to be $\textit{active}$ and the lists of frozen paths to be empty.
The paths are constructed backwards, that is, from the last node (which is always a keyword).
Therefore, for each $k\in K$, we insert the path
$\langle k \rangle$ (consisting of the single node $k$) into $Q$.
All these initializations are done in lines \ref{alg:gtf:initQ_start}--\ref{alg:gtf:initQ_end} (of Figure~\ref{alg:gtf}).
The main loop of
lines~\ref{alg:gtf:mainLoop_start}--\ref{alg:gtf:insertQ} is repeated
while $Q$ is not empty. Line~\ref{alg:gtf:popBestPath} removes the
best (i.e.,~least-weight) path $p$ from $Q$. Let $v$ and $k_i$ be the first
and last, respectively, nodes of $p$. Line~\ref{alg:gtf:testFreeze}
freezes $p$ provided that it has to be done. This is accomplished by
calling the procedure $\textbf{freeze}(p)$ of
Figure~\ref{alg:gtf:helpers} that operates as follows. If the mark of
$v$ for $k_i$ is $\textit{visited}$, then $p$ is frozen at $v$ by adding it to
$v.frozen[k_i]$ and \textbf{true} is returned; in addition, the main
loop continues (in line~\ref{alg:gtf:freezeContinue}) to the next
iteration. Otherwise, \textbf{false} is returned and $p$ is handled
as we describe next.
Line~\ref{alg:gtf:testActiveStart} checks if $p$ is the first path
from $v$ to $k_i$ that has been removed from $Q$.
If so, line~\ref{alg:gtf:testActiveEnd}
changes the mark of $v$ for $k_i$ from $\textit{active}$ to $\textit{visited}$.
Line~\ref{alg:gtf:relaxGetsTrue} assigns \textbf{true} to the flag
$\mathit{relax}$, which means that (as of now) $p$ should
spawn new paths that will be added to $Q$.
The test of line~\ref{alg:gtf:testOldRoot} splits the execution of
the algorithm into two cases.
If $v$ is a $K$-root (which must have
been discovered in a previous iteration and means that for every
$k\in K$, there is a path from $v$ to $k$), then the
following is done. First, line~\ref{alg:gtf:unfrAtOldRoot} calls the
procedure $\textbf{unfreeze}(p,Q)$ of Figure~\ref{alg:gtf:helpers}
that unfreezes (i.e.,~inserts into $Q$) all the paths to $k_i$ that
are frozen at nodes of $p$ (i.e.,~the paths of ${\bar v}.frozen[k_i]$,
where $\bar v$ is a node of $p$). In addition, for all nodes ${\bar v}$
of $p$, the procedure $\textbf{unfreeze}(p,Q)$ changes the
mark of ${\bar v}$ for $k_i$ to $\textit{in-answer}$.
Second, line~\ref{alg:gtf:testLoopness} tests whether $p$ is acyclic. If so,
line~\ref{alg:gtf:addNodeVisit1} adds $p$ to the paths of $v$ that reach $k_i$,
and line~\ref{alg:gtf:prodAns1} produces new answers that include $p$
by calling $\textbf{produceAnswers}$ of Figure~\ref{alg:gtf:helpers}.
The pseudocode of $\bid{produceAnswers}(v.paths,p)$ is just an efficient implementation
of considering every combination of paths $p_1,\ldots,p_n$,
such that $p_i$ is from $v$ to $k_i$ ($1\le i\le n$),
and checking that it is an answer to $K$.
(It should be noted that GTF generates answers by increasing height.)
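The combination check can be made concrete by the following sketch. It is the naive enumeration described above, not the efficient implementation of $\bid{produceAnswers}$; the function names and the list-of-nodes path representation are ours, and we assume every path starts at the root and is simple.

```python
from itertools import product

def is_answer(root, paths):
    """Check that the union of root-to-keyword paths is a non-redundant
    K-subtree: a tree whose root has at least two children."""
    edges = set()
    for p in paths:                      # each p is a node list from root
        edges.update(zip(p, p[1:]))
    parents = {}
    for a, b in edges:
        if b in parents and parents[b] != a:
            return False                 # a node with two parents: not a tree
        parents[b] = a
    if root in parents:
        return False                     # simple paths never re-enter the root
    root_children = {b for a, b in edges if a == root}
    return len(root_children) >= 2       # non-redundant: root has >= 2 children

def produce_answers(root, paths_per_keyword):
    """Naively try every combination p_1,...,p_n (one path per keyword)."""
    for combo in product(*paths_per_keyword.values()):
        if is_answer(root, combo):
            yield combo
```

For the running example, the combination of $p_6$ and $p_7$ at $\mathit{city}$ passes this test, whereas the two paths through $\mathit{river}$ of Figure~\ref{fig:redun_ans} fail it, because the root has a single child.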
If the test of line~\ref{alg:gtf:testLoopness} is \textbf{false},
then the flag $\mathit{relax}$ is changed back to \textbf{false},
thereby ending the current iteration of the main loop.
If the test of line~\ref{alg:gtf:testOldRoot} is \textbf{false}
(i.e.,~$v$ has not yet been discovered to be a $K$-root), the execution
continues in line~\ref{alg:gtf:addToPaths} that adds $p$ to the paths
of $v$ that reach $k_i$. Line~\ref{alg:gtf:becomesRoot} tests whether
$v$ is now a $K$-root and if so, the flag $\mathit{isKRoot}$ is set to
\textbf{true} and the following is done. The nested loops of
lines~\ref{alg:gtf:newRootStart}--\ref{alg:gtf:newRootEnd} iterate
over all paths $p'$ (that have already been discovered) from $v$ to
any keyword node of $K$ (i.e.,~not just $k_i$). For each $p'$, where
${k'}$ is the last node of $p'$ (and, hence, is a keyword),
line~\ref{alg:gtf:unfrNewRoot} calls $\textbf{unfreeze}(p',Q)$,
thereby inserting into $Q$ all the paths to ${k'}$ that are frozen
at nodes of $p'$ and changing the mark (for ${k'}$) of those nodes
to $\textit{in-answer}$.
Line~\ref{alg:gtf:removeCyclic} removes all the cyclic paths among
those stored at $v$. Line~\ref{alg:gtf:prodAns2} generates answers
from the paths that remain at $v$.
If the test of line~\ref{alg:gtf:relaxIsTrue} is \textbf{true},
the relaxation of $p$ is done in
lines~\ref{alg:gtf:relaxStart}--\ref{alg:gtf:relaxEnd} as follows.
For each parent $v'$ of $v$, the
path $v' \rightarrow p$ is inserted into $Q$ if either one of the
following two holds (as tested in
line~\ref{alg:gtf:testEssential}). First, $v'$ is not on $p$.
Second, $v' \rightarrow p$
is essential, according to the following definition. The path $v'
\rightarrow p$ is \emph{essential} if $v'$ appears on $p$ and the
section of $v' \rightarrow p$ from its first node (which is $v'$) to
the next occurrence of $v'$ has at least one node $u$, such that
$u.\mathit{marks[k]}=\textit{visited}$, where the keyword $k$ is the last node of $p$.
Appendix~\ref{sec:looping_paths}
gives an example
that shows why essential paths (which are cyclic)
have to be inserted into $Q$.
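The relaxation test, as we read it, can be sketched as the following predicate. This is our illustrative version; `marks` maps a (node, keyword) pair to its current mark, and a path is a plain node list.

```python
def is_essential(extension, marks, keyword):
    """A cyclic extension v' -> p is essential iff the section from v'
    to the next occurrence of v' contains a node marked 'visited'
    for the keyword at which p ends."""
    v_prime = extension[0]
    nxt = extension.index(v_prime, 1)      # next occurrence of v'
    section = extension[: nxt + 1]
    return any(marks.get((u, keyword)) == "visited" for u in section)

def should_insert(extension, marks, keyword):
    """Insert v' -> p into Q unless it is cyclic and inessential."""
    v_prime = extension[0]
    if v_prime not in extension[1:]:
        return True                        # v' is not on p: acyclic extension
    return is_essential(extension, marks, keyword)
```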
Note that due to line~\ref{alg:gtf:reassignFalse}, no cyclic path $p[v,k]$
is relaxed if $v$ has already been discovered to
be a $K$-root in a previous iteration. The reason is that none of the nodes
along $p[v,k]$ could have the mark $\textit{visited}$ for the keyword $k$ (hence,
no paths are frozen at those nodes).
Observe that before $v$ is known to be a $K$-root, we add cyclic paths to
the array $v.paths$. Only when discovering that $v$ is a $K$-root, do we
remove all cyclic paths from $v.paths$ (in
line~\ref{alg:gtf:removeCyclic}) and stop adding them in subsequent
iterations. This is lazy evaluation, because prior to
knowing that answers with the $K$-root $v$ should be produced, it is a
waste of time to test whether paths from $v$ are cyclic.
\subsection{The Naive Approach}\label{sec:naive}
Consider a query $K=\left\{{k_1,\ldots,k_n}\right\}$.
In~\cite{icdeBHNCS02}, the authors use a backward shortest-path iterator from
each keyword node $k_i$. That is, starting at each $k_i$, they apply
Dijkstra's shortest-path algorithm in the opposite direction of the
edges. If a node $v$ is reached by the backward iterators from all the $k_i$,
then $v$ is a $K$-root (and, hence, might be the root of some answers).
In this way, answers are generated by increasing height.
However, this approach can only find answers that consist of
shortest paths from the root to the keyword nodes.
Hence, it misses answers (e.g.,~it cannot produce $A_3$ of
Figure~\ref{fig:answers}).
Dijkstra's algorithm can be straightforwardly generalized to construct
all the simple (i.e.,~acyclic) paths by increasing weight.
This approach is used\footnote{They used it on a small \emph{summary} graph
to construct database queries from keywords.}
in~\cite{icdeTWRC09} and it consists of two parts:
path construction and answer production. Each constructed path is from
some node of $G$ to a keyword of $K$. Since paths are constructed
backwards, the algorithm starts simultaneously from all the keyword
nodes of $K$. It uses a single priority queue to generate, by increasing weight,
all simple paths to every keyword node of $K$.
When the algorithm discovers that a node $v$ is a
$K$-root (i.e.,~there is a path from $v$ to every $k_i$), it
starts producing answers rooted at $v$. This is done by considering
every combination of paths $p_1,\ldots,p_n$, such that $p_i$ is from
$v$ to $k_i$ ($1\le i\le n$). If the combination is a non-redundant
$K$-subtree of $G$, then it is produced as an answer.
It should be noted that in~\cite{icdeTWRC09}, answers are subgraphs;
hence, every combination of paths $p_1,\ldots,p_n$ is an answer.
We choose to produce subtrees as answers for two reasons.
First, in the experiments of Section~\ref{sec:experiments_sum}, we compare
our approach with other systems that produce subtrees.
Second, it is easier for users to understand answers that are
presented as subtrees, rather than subgraphs.
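The naive backward construction can be illustrated by the following minimal sketch (ours; it omits answer production and freezing, and folds node weights into the edge weights for brevity). `parents[v]` lists the nodes with an edge into $v$, as in the preprocessing described below.

```python
import heapq

def simple_paths_to(keyword, parents, weight):
    """Generate all simple paths ending at `keyword`, by increasing
    weight, built backwards as in the naive approach."""
    queue = [(0.0, (keyword,))]
    while queue:
        w, path = heapq.heappop(queue)
        yield w, list(path)
        head = path[0]
        for u in parents.get(head, ()):
            if u not in path:                  # keep paths simple
                heapq.heappush(queue, (w + weight(u, head), (u,) + path))
```

On the edges of the snippet of Figure~\ref{fig:datagraph} (with unit weights), this generator emits the seven simple paths into $\mathit{France}$ in order of increasing weight.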
\input{fig_gt}
The drawback of the above approach is constructing a large number of
paths that are never used in any of the generated answers. To
overcome this problem, the next section introduces the technique of
\emph{freezing}, whereby most non-minimal paths are generated only if
they are actually needed to produce answers.
Section~\ref{sec:pseudocode} describes the algorithm
\emph{Generating Trees with Freezing} (GTF)
that employs this technique.
To save space (when constructing all simple paths),
we use the common technique known as \emph{tree of paths}.
In particular, a path $p$ is a linked list,
such that its first node points to the rest of $p$.
As an example, consider the graph snippet of Figure~\ref{fig:datagraph}.
The paths that lead to the keyword $\mathit{France}$ are
$p_1$,~$p_2$, $p_3$, $p_4$, $p_5$~and $p_6$, shown in Figure~\ref{fig:gt}.
Their tree of paths is presented in Figure~\ref{fig:pathsTree}.
Since we build paths backwards, a data graph is preprocessed to
produce for each node $v$ the set of its \emph{parents}, that is, the
set of nodes $v'$, such that $(v',v)$ is an edge of the data graph. We
use the following notation. Given a path $p$ that starts at a node
$v$, the extension of $p$ with a parent $v'$ of $v$ is denoted by $v'
\rightarrow p$. Note that $v'$ is the first node of $v' \rightarrow p$
and $v$ is the second one.
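The tree-of-paths representation can be sketched as follows (our illustration; the class name is ours, and node weights are folded into the `edge_weight` argument for brevity). Extending a path allocates a single new cell, so all paths sharing a suffix share memory.

```python
class Path:
    """A path as a linked list: the first node plus a pointer to the
    rest, so paths sharing a suffix share storage (a 'tree of paths')."""
    __slots__ = ("node", "rest", "weight")

    def __init__(self, node, rest=None, edge_weight=0):
        self.node = node
        self.rest = rest
        self.weight = edge_weight + (rest.weight if rest else 0)

    def extend(self, parent, edge_weight):
        """Build the path parent -> self without copying self."""
        return Path(parent, self, edge_weight)

    def nodes(self):
        p, out = self, []
        while p is not None:
            out.append(p.node)
            p = p.rest
        return out
```

For example, $p_1$ and $p_6$ of Figure~\ref{fig:gt} are built as `Path("France").extend("province", 1)` and `p1.extend("city", 1)`, and both share the same $\mathit{France}$ suffix, exactly as in Figure~\ref{fig:pathsTree}.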
\input{fig_paths_tree}
\section{\label{sec:intro}Introduction}
Keyword search over data graphs is a convenient paradigm of querying
semistructured and linked data. Answers, however, are similar to those
obtained from a database system, in the sense that they are succinct
(rather than just relevant documents) and include semantics (in the
form of entities and relationships) and not merely free text. Data
graphs can be built from a variety of formats, such as XML, relational
databases, RDF and social networks. They can also be obtained from
the amalgamation of many heterogeneous sources. When it comes to
querying data graphs, keyword search alleviates their lack of
coherence and facilitates easy search for precise answers, as if
users deal with a traditional database system.
In this paper, we address the issue of efficiency. Computing keyword
queries over data graphs is much more involved than evaluation of
relational expressions. Quite a few systems have been developed
(see~\cite{tkdeCW14} for details). However, they
fall short of the degree of efficiency and scalability that is
required in practice. Some algorithms sacrifice \emph{completeness}
for the sake of efficiency; that is, they are not capable of
generating all the answers and, consequently, may miss some relevant ones.
We present a novel algorithm,
called \emph{Generating Trees with Freezing} (GTF).
We start with a straightforward generalization of Dijkstra's
shortest-path algorithm to the task of constructing all simple
(i.e.,~acyclic) paths, rather than just the shortest ones. Our main
contribution is incorporating the \emph{freezing} technique that
enhances efficiency by up to one order of magnitude, compared with the
naive generalization of Dijkstra's algorithm. The main idea is to
avoid the construction of most non-shortest paths until they are
actually needed in answers. Freezing may seem intuitively clear, but
making it work involves subtle details and requires an intricate proof
of correctness.
Our main theoretical contribution is the algorithm GTF, which
incorporates freezing, and its proof of correctness. Our main
practical contribution is showing experimentally (in
Section~\ref{sec:experiments_sum}
and Appendix~\ref{sec:experiments})
that GTF is both more efficient and more scalable than existing
systems. This contribution is especially significant in light of the
following. First, GTF is complete (i.e.,~it does not miss answers);
moreover, we show experimentally that not missing answers is important
in practice. Second, the order of generating answers is by increasing
height. This order is commonly deemed a good strategy for an initial
ranking that is likely to be in a good correlation with the final one
(i.e.,~by increasing weight).
\section{Preliminaries}\label{sec:prelim}
We model data as a directed graph $G$, similarly to~\cite{icdeBHNCS02}.
Data graphs can be constructed from a variety of formats (e.g.,~RDB, XML
and RDF).
Nodes represent entities and relationships, while edges correspond to
connections among them (e.g.,~foreign-key references when the data
graph is constructed from a relational database). We assume that text
appears only in the nodes. This is not a limitation, because we can
always split an edge (with text) so that it passes through a
node. Some nodes are for keywords, rather than entities and
relationships. In particular, for each keyword $k$ that appears in the
data graph, there is a dedicated node. By a slight abuse of notation,
we do not distinguish between a keyword $k$ and its node---both are
called \emph{keyword} and denoted by $k$. For all nodes $v$ of the
data graph that contain a keyword $k$, there is a directed edge from
$v$ to $k$. Thus, keywords have only incoming edges.
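This construction can be sketched as follows (our illustration; `node_text` maps each ordinary node to its text, and tokenization is simplified to whitespace splitting of lower-cased text).

```python
def add_keyword_nodes(node_text, edges):
    """Add one dedicated node per keyword appearing in the graph, and
    an edge from every node containing it; keyword nodes thus have
    only incoming edges, as in the data-graph model."""
    keyword_nodes = set()
    for v, text in node_text.items():
        for k in text.lower().split():
            keyword_nodes.add(k)
            edges.add((v, k))
    return keyword_nodes
```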
Figure~\ref{fig:datagraph} shows a snippet of a data graph.
The dashed part should be ignored unless explicitly stated otherwise.
Ordinary nodes are shown as
ovals. For clarity, the type of each node appears inside the oval.
Keyword nodes are depicted as rectangles. To keep the figure small,
only a few of the keywords that appear in the graph are shown as
nodes. For example, a type is also a keyword and has its own node in the full
graph. For each oval, there is an edge to every keyword that it contains.
Let $G=(V,E)$ be a directed data graph, where $V$ and $E$ are the sets
of nodes and edges, respectively.
A directed path is denoted by $\langle v_1,\ldots,v_m \rangle$.
We only consider \emph{rooted} (and, hence, directed) subtrees $T$ of
$G$. That is, $T$ has a unique node $r$, such that for all nodes $u$
of $T$, there is exactly one path in $T$ from $r$ to $u$.
Consider a query $K$, that is, a set of at least two
keywords. A \emph{$K$-subtree} is a rooted subtree of $G$, such that
its leaves are exactly the keywords of $K$. We say that a node $v\in
V$ is a \emph{$K$-root} if it is the root of some $K$-subtree of $G$. It is
observed in~\cite{icdeBHNCS02} that $v$ is a $K$-root if and only if
for all $k\in K$, there is a path in $G$ from $v$ to $k$.
An \emph{answer} to $K$ is a $K$-subtree $T$ that is \emph{non-redundant}
(or \emph{reduced}) in the sense that no proper subtree $T'$ of $T$ is
also a $K$-subtree. It is easy to show that a $K$-subtree $T$ of
$G$ is an answer
if and only if the root of $T$ has at least two
children. Even if $v$ is a $K$-root, it does not necessarily follow
that there is an answer to $K$ that is rooted at $v$ (because it is possible
that in all $K$-subtrees rooted at $v$, there is only one child of $v$).
Figure~\ref{fig:answers} shows three answers to the query
$\left\{\mathit{France},\mathit{Paris}\right\}$ over the data graph of
Figure~\ref{fig:datagraph}. The answer $A_1$ means that the city Paris is
located in a province containing the word France in its name.
The answer $A_2$ states that the city Paris is located in
the country France. Finally, the answer $A_3$ means that Paris is
located in a province which is located in France.
Now, consider also the dashed part of Figure~\ref{fig:datagraph}, that
is, the keyword $\mathit{Seine}$ and the node $\mathit{river}$ with
its outgoing edges. There is a path from $\mathit{river}$ to every keyword
of $K=\left\{\mathit{France},\mathit{Paris}\right\}$. Hence,
$\mathit{river}$ is a $K$-root. However, the $K$-subtree of
Figure~\ref{fig:redun_ans} is not an answer to $K$,
because its root has only one child.
For ranking, the nodes and edges of the data graph have positive
weights. The \emph{weight} of a path (or a tree) is the sum of
weights of all its nodes and edges. The rank of an answer is inversely
proportional to its weight. The \emph{height} of a tree is the
maximal weight over all paths from the root to any leaf
(which is a keyword of the query).
For example, suppose that the weight of each node and edge is $1$.
The heights of the answers $A_1$ and $A_3$ (of
Figure~\ref{fig:answers}) are $5$ and $7$, respectively. In $A_1$, the
path from the root to $\mathit{France}$ is a minimal
(i.e.,~shortest) one between these two nodes, in the whole graph, and
its weight is $5$. In $A_3$, however, the path from the root (which
is the same as in $A_1$) to $\mathit{France}$ has a higher weight, namely,~$7$.
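The weight and height computations above can be written out directly (a sketch of the definitions, with our names; trees are given as a root plus a children map).

```python
def path_weight(path, node_w, edge_w):
    """Weight of a path: the sum of the weights of its nodes and edges."""
    total = sum(node_w[v] for v in path)
    total += sum(edge_w[(a, b)] for a, b in zip(path, path[1:]))
    return total

def tree_height(root, children, node_w, edge_w):
    """Height of a tree: the maximal weight over all root-to-leaf paths."""
    if not children.get(root):
        return node_w[root]
    return node_w[root] + max(
        edge_w[(root, c)] + tree_height(c, children, node_w, edge_w)
        for c in children[root])
```

With unit weights, the root-to-$\mathit{France}$ path of $A_1$ has three nodes and two edges, giving weight $5$, and the corresponding path of $A_3$ has four nodes and three edges, giving weight $7$, in agreement with the heights stated above.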
\section{Correctness and Complexity of GTF} \label{correctness}
\subsection{Definitions and Observations}
Before proving correctness of the GTF algorithm,
we define some notation and terminology (in addition
to those of Section~\ref{sec:prelim}) and state a few observations.
Recall that the data graph is $G=(V,E)$.
Usually, a keyword is denoted by $k$, whereas $r$, $u$, $v$ and $z$
are any nodes of $V$.
We only consider directed paths of $G$ that are defined as usual.
If $p$ is a path from $v$ to $k$, then we write it as $p[v,k]$ when we want
to explicitly state its first and last nodes.
We say that node $u$ is \emph{reachable} from $v$ if there is a path
from $v$ to $u$.
A \emph{suffix} of $p[v,k]$ is a traversal of $p[v,k]$ that starts at
(some particular occurrence of) a node $u$ and ends at the last node of $p$.
Hence, a suffix of $p[v,k]$ is denoted by $p[u,k]$.
A \emph{prefix} of $p[v,k]$ is a traversal of $p[v,k]$ that starts at $v$
and ends at (some particular occurrence of) a node $u$. Hence, a prefix of
$p[v,k]$ is denoted by $p[v,u]$.
A suffix or prefix of $p[v,k]$ is \emph{proper} if it is different from
$p[v,k]$ itself.
Consider two paths $p_1[v,z]$ and $p_2[z,u]$; that is, the former ends
in the node where the latter starts. Their \emph{concatenation},
denoted by $p_1[v,z] \circ p_2[z,u]$, is obtained by joining them at node $z$.
As already mentioned in Section~\ref{sec:prelim},
a positive \emph{weight} function $w$ is defined on the
nodes and edges of $G$. The weight of a path $p[v,u]$, denoted by
$w(p[v,u])$, is the sum of weights over all the nodes and
edges of $p[v,u]$. A \emph{minimal} path from $v$ to $u$ has the minimum weight
among all paths from $v$ to $u$. Since the weight function is
positive, there are no zero-weight cycles. Therefore, a minimal path
is acyclic. Also observe that the weight of a proper suffix or prefix
is strictly smaller than that of the whole path.\footnote{For the proof of correctness, it is
enough for the weight function to be non-negative
(rather than positive) provided that every cycle has a positive weight.
}
Let $K$ be a query (i.e.,~a set of at least two keywords). Recall
from Section~\ref{sec:prelim} the definitions of $K$-root, $K$-subtree
and height of a subtree. The \emph{best height} of a $K$-root $r$ is
the maximum weight among all the minimal paths from $r$ to any keyword
$k\in K$. Note that the height of any $K$-subtree rooted at $r$ is at
least the best height of $r$.
Consider a nonempty set of nodes $S$ and a node $v$.
If $v$ is reachable from every node of $S$, then we say that node $u\in S$
is \emph{closest to} $v$ if a minimal path from $u$ to $v$ has the
minimum weight among all paths from any node of $S$ to $v$.
Similarly, if every node of $S$ is reachable from $v$, then we say
that node $u\in S$ is \emph{closest from} $v$ if a minimal path from
$v$ to $u$ has the minimum weight among all paths from $v$ to any node
of $S$.
In the sequel, line numbers refer to the algorithm GTF of Figure~\ref{alg:gtf},
unless explicitly stated otherwise.
We say that a node $v \in V$ is \emph{discovered as a $K$-root}
if the test of line~\ref{alg:gtf:becomesRoot} is satisfied
and $v.\mathit{isKRoot}$ is assigned $\bid{true}$ in
line~\ref{alg:gtf:isRootGetsTrue}.
Observe that the test of line~\ref{alg:gtf:becomesRoot} is $\bid{true}$
if and only if for all $k\in K$, it holds that
$v.\mathit{marks[k]}$ is either $\textit{visited}$ or $\textit{in-answer}$.
Also note that line~\ref{alg:gtf:isRootGetsTrue} is executed at most once
for each node $v$ of $G$. Thus, there is at most one iteration of the main
loop (i.e.,~line~\ref{alg:gtf:mainLoop_start}) that discovers $v$ as $K$-root.
We say that a path $p$ is \emph{constructed} when it is inserted into $Q$
for the first time, which must happen in line~\ref{alg:gtf:insertQ}.
A path is \emph{exposed} when it is removed from $Q$ in
line~\ref{alg:gtf:popBestPath}. Observe that a path $p[v,k]$ may be exposed
more than once, due to freezing and unfreezing.
\theoremstyle{definition}
\newtheorem{proposition}[theorem]{Proposition}
\begin{proposition}\label{prop:twice}
A path can be exposed at most twice.
\end{proposition}
\begin{proof}
When an iteration exposes a path $p[v,k]$ for the first time,
it does exactly one of the following.
It freezes $p[v,k]$ at node $v$,
discards $p[v,k]$ due to line~\ref{alg:gtf:reassignFalse}, or
extends (i.e.,~relaxes) $p[v,k]$ in the loop of line~\ref{alg:gtf:relaxStart}
and inserts the results into $Q$ in line~\ref{alg:gtf:insertQ}.
Note that some relaxations of $p[v,k]$ are never inserted into $Q$,
due to the test of line~\ref{alg:gtf:testEssential}.
Only if $p[v,k]$ is frozen at $v$, can it be inserted a second time
into $Q$, in line~\ref{unfreeze:insert} of the procedure $\textbf{unfreeze}$
(Figure~\ref{alg:gtf:helpers}) that also sets $v.\mathit{marks[k]}$ to $\textit{in-answer}$.
But then $p[v,k]$ cannot freeze again at $v$,
because $v.\mathit{marks[k]}$ does not change after becoming $\textit{in-answer}$.
Therefore, $p[v,k]$ cannot be inserted into $Q$ a third time.
\end{proof}
In the next section, we sometimes refer to the mark of
a node $v$ of a path $p$. It should be clear from the context that we
mean the mark of $v$ for the keyword where $p$ ends.
\subsection{The Proof}
We start with an auxiliary lemma that considers the concatenation of
two paths, where the linking node is $z$, as shown in
Figure~\ref{fig:illustration_a} (note that a wavy arrow denotes a
path, rather than a single edge). Such a concatenation is used in the
proofs of subsequent lemmas.
\input{lemma1}
\begin{restatable}{lemma}{lemmaPC}
\label{LEMMA:PATH-CONCAT}
Let $k$ be a keyword of the query $K$, and let $v$ and $z$ be nodes of the
data graph. Consider two paths $p_s[v,z]$ and $p_m[z,k]$.
Let $\bar p[v,k]$ be their concatenation at node $z$, that is,
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Suppose that the following hold at the beginning of iteration $i$ of the
main loop (line~\ref{alg:gtf:mainLoop_start}).
\begin{enumerate}
\item\label{cond:new-first}
The path $p_s[v,z]$ is minimal or (at least) acyclic.
\item\label{cond:new-second}
The path $p_m[z,k]$ has changed $z.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$
in an earlier iteration.
\item\label{cond:first}
$z.\mathit{marks[k]}=\textit{visited}$.
\item\label{cond:second}
For all nodes $u\not=z$ on the path $p_s[v,z]$,
the suffix $\bar p[u,k]$ is not frozen at $u$.
\item\label{cond:third}
The path $\bar p[v,k]$ has not yet been exposed.
\end{enumerate}
Then, some suffix of $\bar p[v,k]$ must be on $Q$ at the beginning of
iteration $i$.
\end{restatable}
\begin{proof}
Suppose, by way of contradiction, that no suffix of $\bar p[v,k]$ is
on $Q$ at the beginning of iteration $i$. Since $\bar p[v,k]$
has not yet been exposed, there are two possible cases regarding its state.
We derive a contradiction by showing that none of them can happen.
\begin{description}
\item[Case 1:] Some suffix of $\bar p[v,k]$ is frozen. This cannot
happen at any node of $\bar p[z,k]$ (which is the same as
$p_m[z,k]$), because Condition~\ref{cond:first} implies that
$p_m[z,k]$ has already changed $z.\mathit{marks[k]}$ to $\textit{visited}$.
Condition~\ref{cond:second} implies that it cannot happen at the
other nodes of $\bar p[v,k]$ (i.e.,~the nodes $u$ of $p_s[v,z]$ that
are different from $z$).
\item[Case 2:] Some suffix of $\bar p[v,k]$ has already been discarded
(in an earlier iteration) either by the test of line~\ref{alg:gtf:testEssential}
or due to line~\ref{alg:gtf:reassignFalse}.
This cannot happen to any
suffix of $\bar p[z,k]$ (which is the same as $p_m[z,k]$), because
$p_m[z,k]$ has already changed $z.\mathit{marks[k]}$ to $\textit{visited}$. We
now show that it cannot happen to any other suffix $\bar p[u,k]$,
where $u$ is a node of $p_s[v,z]$ other than $z$. Note that $\bar
p[v,k]$ (and hence $\bar p[u,k]$) is not necessarily
acyclic. However, the lemma states that $p_s[v,z]$ is
acyclic. Therefore, if the suffix $\bar p[u,k]$ has a cycle that
includes $u$, then it must also include $z$. But
$z.\mathit{marks[k]}$ is $\textit{visited}$ from the moment it was changed to
that value until the beginning of iteration $i$ (because a mark
cannot be changed to $\textit{visited}$ more than once). Hence, the suffix $\bar
p[u,k]$ could not have been discarded by the test of
line~\ref{alg:gtf:testEssential}.
It is also not possible that
line~\ref{alg:gtf:reassignFalse}
has already discarded $\bar p[u,k]$ for the following reason.
If line~\ref{alg:gtf:reassignFalse} is reached
(in an iteration that removed $\bar p[u,k]$ from $Q$), then
for all nodes $x$ on $\bar p[u,k]$, line~\ref{alg:gtf:unfrAtOldRoot}
has already changed $x.\mathit{marks[k]}$ to $\textit{in-answer}$.
Therefore, $z.\mathit{marks[k]}$ cannot be $\textit{visited}$ at the beginning
of iteration $i$.
\end{description}
It thus follows that some suffix of $\bar p[v,k]$ is on $Q$
at the beginning of iteration $i$.
\end{proof}
\begin{restatable}{lemma}{lemmaSPM}
\label{LEMMA:GTF-SHORTEST-PATH-MARKS}
For all nodes $v\in V$ and keywords $k\in K$,
the mark $v.\mathit{marks[k]}$ can be changed from $\textit{active}$ to $\textit{visited}$
only by a minimal path from $v$ to $k$.
\end{restatable}
\begin{proof}
Suppose that the lemma is not true for some keyword $k\in K$.
Let $v$ be a closest node to $k$
among all those violating
the lemma with respect to $k$.
Node $v$ is different from $k$, because the path
$\anset{k}$ marks $k$ as $\textit{visited}$. We will derive a contradiction by
showing that a minimal path changes $v.\mathit{marks[k]}$
from $\textit{active}$ to $\textit{visited}$.
Let $p_s[v,k]$ be a minimal path from $v$ to $k$.
Consider the iteration $i$ of the main loop
(line~\ref{alg:gtf:mainLoop_start} in Figure~\ref{alg:gtf}) that changes
$v.\mathit{marks[k]}$ to $\textit{visited}$ (in line~\ref{alg:gtf:markVisited}).
Among all the nodes of $p_s[v,k]$
in which suffixes of some minimal paths from $v$ to $k$ are
frozen at the beginning of iteration $i$, let $z$ be the first one
when traversing $p_s[v,k]$ from $v$ to $k$
(i.e.,~on the path $p_s[v,z]$, node $z$ is the only one in which
such a suffix is frozen). Node $z$ exists for the following three reasons.
\begin{itemize}
\item The path $p_s[v,k]$ has not been exposed prior to
iteration $i$, because we assume that $v.\mathit{marks[k]}$ is
changed to $\textit{visited}$ in iteration $i$ and that change can happen only once.
\item The path $p_s[v,k]$ is acyclic (because it is minimal), so
a suffix of $p_s[v,k]$ could not have been discarded either
by the test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}.
\item The path $p_s[v,k]$ (or any suffix thereof) cannot be on the
queue at the beginning of iteration $i$, because $v$ violates the
lemma, which means that a non-minimal path from $v$ to $k$ must be
removed from the queue at the beginning of that iteration.
\end{itemize}
The above three observations imply that a proper suffix of $p_s[v,k]$
must be frozen at the beginning of iteration $i$ and, hence, node $z$ exists.
Observe that $z$ is different from $v$, because a path to $k$
can be frozen only at a node $\hat v$, such that
${\hat v}.\mathit{marks[k]}=\textit{visited}$, whereas we assume that
$v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of iteration $i$.
By the selection of $v$ and $p_s[v,k]$ (and the above fact that $z\not=v$),
node $z$ does not violate the lemma, because $p_s[z,k]$ is a proper
suffix of $p_s[v,k]$ and, hence, $z$ is closer to $k$ than $v$.
Therefore, according to the lemma, there is a minimal path $p_m[z,k]$
that changes $z.\mathit{marks[k]}$ to $\textit{visited}$. Consequently,
\begin{equation}\label{eqn:min-path}
w(p_m[z,k]) \leq w(p_s[z,k]).
\end{equation}
Now, consider the path
\begin{equation}\label{eqn:new-path}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation}
Since $p_s[v,k]$ is a minimal path from $v$ to $k$,
Equations~(\ref{eqn:min-path}) and~(\ref{eqn:new-path}) imply that
so is $\bar{p}[v,k]$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $i$. In particular,
Condition~\ref{cond:new-first} holds, because $p_s[v,k]$ is acyclic
(since it is minimal) and, hence, so is the path $p_s[v,z]$.
Condition~\ref{cond:new-second} is satisfied, because of how
$p_m[z,k]$ is defined. Condition~\ref{cond:first} holds, because we chose
$z$ to be a node where a path to $k$ is frozen.
Condition~\ref{cond:second} is satisfied, because of how $z$ was
chosen and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} is satisfied, because we have assumed
that $v.\mathit{marks[k]}$ is changed from $\textit{active}$ to $\textit{visited}$ during
iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of
$\bar p[v,k]$ must be on the queue at the beginning of iteration
$i$. This contradicts our assumption that a non-minimal path (which
has a strictly higher weight than any suffix of $\bar p[v,k]$) changes
$v.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$ in iteration $i$.
\end{proof}
\begin{restatable}{lemma}{lemmaOQ}
\label{LEMMA:ON-QUEUE}
For all nodes $v\in V$ and keywords $k\in K$, such that $k$ is reachable
from $v$, if $v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning of an iteration
of the main loop (line~\ref{alg:gtf:mainLoop_start}), then $Q$ contains a suffix
(which is not necessarily proper) of a minimal path from $v$ to $k$.
\end{restatable}
\begin{proof}
The lemma is certainly true at the beginning of the first iteration,
because the path $\anset{k}$ is on $Q$.
Suppose that the lemma does not hold at the beginning of iteration $i$.
Thus, every minimal path $p[v,k]$ has a proper suffix that is frozen
at the beginning of iteration $i$.
(Note that a suffix of a minimal path cannot be discarded either by the
test of line~\ref{alg:gtf:testEssential} or due to
line~\ref{alg:gtf:reassignFalse}, because it is acyclic.)
Let $z$ be the closest node from $v$ having such a frozen suffix.
Hence, $z.\mathit{marks[k]}$ is $\textit{visited}$ and $z\not= v$
(because $v.\mathit{marks[k]}$ is $\textit{active}$).
By Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS},
a minimal path $p_m[z,k]$ has changed $z.\mathit{marks[k]}$ to $\textit{visited}$.
Let $p_s[v,z]$ be a minimal path from $v$ to $z$.
Consider the path
\begin{equation*}
\bar{p}[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
The weight of $\bar{p}[v,k]$ is no more than that of a minimal path
from $v$ to $k$, because both $p_s[v,z]$ and $p_m[z,k]$ are minimal
and the choice of $z$ implies that it is on some minimal path from $v$
to $k$. Hence, $\bar{p}[v,k]$ is a minimal path from $v$ to $k$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are satisfied.
Conditions~\ref{cond:new-first}--\ref{cond:first} clearly hold.
Condition~\ref{cond:second} is satisfied because of how $z$ is chosen
and the fact that $\bar{p}[v,k]$ is minimal.
Condition~\ref{cond:third} holds because $v.\mathit{marks[k]}$ is $\textit{active}$
at the beginning of iteration $i$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar{p}[v,k]$ is on $Q$
at the beginning of iteration $i$, contradicting our initial assumption.
\end{proof}
\begin{lemma}\label{lemma:finite}
Any constructed path can have at most $2n(n+1)$ nodes, where $n=|V|$
(i.e.,~the number of nodes in the graph).
Hence, the algorithm constructs at most $(n+1)^{2n(n+1)}$ paths.
\end{lemma}
\begin{proof}
We say that $v_m \rightarrow \cdots \rightarrow v_1$ is a
\emph{repeated run} in a path $\bar p$ if some suffix (not
necessarily proper) of $\bar p$ has the form $v_m \rightarrow \cdots
\rightarrow v_1 \rightarrow p$, where each $v_i$ also appears in at least
two positions of $p$. In other words, for all $i$ ($1\le i \le m$),
the occurrence of $v_i$ in $v_m \rightarrow \cdots \rightarrow v_1$
is (at least) the third one in the suffix $v_m \rightarrow \cdots
\rightarrow v_1 \rightarrow p$. (We say that it is the third,
rather than the first, because paths are constructed backwards).
When a path $p'[v',k']$ reaches a node $v'$ for the third time, the
mark of $v'$ for the keyword $k'$ has already been changed to $\textit{in-answer}$
in a previous iteration. This follows from the following two
observations. First, the first path to reach a node $v'$ is also the
one to change its mark to $\textit{visited}$. Second, a path that reaches a node
marked as $\textit{visited}$ can be unfrozen only when that mark is changed to $\textit{in-answer}$.
Let $v_m \rightarrow \cdots \rightarrow v_1$ be a repeated run in
$\bar p$ and suppose that $m>n=|V|$. Hence, there is a node $v_i$
that appears twice in the repeated run; that is, there is a $j<i$,
such that $v_j=v_i$. If the path $v_i \rightarrow \cdots
\rightarrow v_1 \rightarrow p$ is considered in the loop of
line~\ref{alg:gtf:relaxStart}, then it would fail the test of
line~\ref{alg:gtf:testEssential} (because, as explained earlier, all
the nodes on the cycle $v_i \rightarrow \cdots \rightarrow v_j$ are already
marked as $\textit{in-answer}$). We conclude that the algorithm does not
construct paths that have a repeated run with more than $n$ nodes.
It thus follows that two disjoint repeated runs of a constructed
path $\bar p$ must be separated by a node that appears (in a
position between them) for the first or second time. A path can have
at most $2n$ positions, such that in each one a node appears for the
first or second time. Therefore, if a path $\bar p$ is constructed
by the algorithm, then it can have at most $2n(n+1)$ nodes.
Using $n$ distinct nodes, we can construct at most
$(n+1)^{2n(n+1)}$ paths with $2n(n+1)$ or fewer nodes.
\end{proof}
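As a small numerical sanity check of the counting argument (illustrative only, not part of the formal proof), the number of node sequences of length at most $L$ over $n$ distinct nodes is $\sum_{i=0}^{L} n^i$, which is bounded by $(n+1)^L$, since every binomial coefficient in the expansion of $(n+1)^L$ is at least one:

```python
# Illustrative check of the path-counting bound from the lemma:
# over n distinct nodes there are sum_{i=0}^{L} n^i sequences of
# length at most L, and this sum is bounded by (n+1)^L.

def num_sequences_up_to(n: int, length: int) -> int:
    """Count node sequences of length 0..length over n distinct nodes."""
    return sum(n ** i for i in range(length + 1))

for n in range(1, 6):
    L = 2 * n * (n + 1)  # the maximum path length established by the lemma
    assert num_sequences_up_to(n, L) <= (n + 1) ** L
```

For instance, with $n=2$ there are $1+2+4+8=15$ sequences of length at most $3$, bounded by $3^3=27$.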
\begin{lemma}\label{lemma:all-roots}
$K$-roots have the following two properties.
\begin{enumerate}
\item\label{part:all-roots-part-one}
All the $K$-roots are discovered before the algorithm terminates.
Moreover, they are discovered in the increasing order of their best heights.
\item\label{part:all-roots-part-two}
Suppose that $r$ is a $K$-root with a best height $b$.
If $p[v,k]$ is a path (from any node $v$ to any keyword $k$)
that is exposed before the iteration that discovers $r$ as a $K$-root,
then $w(p[v,k]) \le b$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove Part~\ref{part:all-roots-part-one}.
Suppose that a keyword $k$ is reachable from node $v$.
As long as $v.\mathit{marks[k]}$ is $\textit{active}$ at the beginning
of the main loop (line~\ref{alg:gtf:mainLoop_start}),
Lemma~\ref{LEMMA:ON-QUEUE} implies that
the queue $Q$ contains (at least) one suffix of a minimal path from $v$ to $k$.
By Lemma~\ref{lemma:finite}, the algorithm constructs a finite number of
paths. By Proposition~\ref{prop:twice}, the same path can be inserted
into the queue at most twice.
Since the algorithm does not terminate while $Q$ is not empty,
$v.\mathit{marks[k]}$ must be changed to $\textit{visited}$ after a finite time.
It thus follows that each $K$-root is discovered after a finite time.
Next, we show that the $K$-roots are discovered in the increasing order of
their best heights. Let $r_1$ and $r_2$ be two $K$-roots with best heights
$b_1$ and $b_2$, respectively, such that $b_1 < b_2$.
Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS} implies the following for
$r_i$ ($i=1,2$). For all keywords $k\in K$, a minimal path from $r_i$
to $k$ changes $r_i.\mathit{marks[k]}$ from $\textit{active}$ to $\textit{visited}$; that is,
$r_i$ is discovered as a $K$-root by minimal paths.
Suppose, by way of contradiction, that $r_2$ is discovered first.
Hence, a path with weight $b_2$ is removed from $Q$ while
Lemma~\ref{LEMMA:ON-QUEUE} implies that
a suffix with a weight of at most $b_1$ is still on $Q$.
This contradiction completes the proof of Part~\ref{part:all-roots-part-one}.
Now, we prove Part~\ref{part:all-roots-part-two}.
As shown in the proof of Part~\ref{part:all-roots-part-one},
a $K$-root is discovered by minimal paths.
Let $r$ be a $K$-root with best height $b$.
Suppose, by way of contradiction, that a path $p[v,k]$,
such that $w(p[v,k]) > b$,
is exposed before the iteration, say $i$, that discovers $r$ as a $K$-root.
By Lemma~\ref{LEMMA:ON-QUEUE}, at the beginning of iteration $i$,
the queue $Q$ contains a suffix with weight of at most $b$. Hence,
$p[v,k]$ cannot be removed from $Q$ at the beginning of iteration $i$.
This contradiction proves Part~\ref{part:all-roots-part-two}.
\end{proof}
\begin{restatable}{lemma}{lemmaIH}
\label{LEMMA:INCREASING-HEIGHT}
Suppose that node $v$ is discovered as a $K$-root at iteration $i$.
Let $p_1[v',k']$ and $p_2[v,k]$ be paths that are exposed in iterations
$j_1$ and $j_2$, respectively. If $i < j_1 < j_2$, then
$w(p_1[v',k']) \le w(p_2[v,k])$.
Note that $k$ and $k'$ are not necessarily the same and similarly for
$v$ and $v'$; moreover, $v'$ has not necessarily been discovered as a $K$-root.
\end{restatable}
\begin{proof}
Suppose the lemma is false. In particular, consider an iteration $j_1$
of the main loop (line~\ref{alg:gtf:mainLoop_start})
that violates the lemma. That is, the following hold in iteration $j_1$.
\begin{itemize}
\item Node $v$ has already been discovered as a $K$-root in an earlier
iteration (so, there are no frozen paths at $v$).
\item A path $p_1[v',k']$ is exposed in iteration $j_1$.
\item A path $p_2[v,k]$ having a strictly lower weight than $p_1[v',k']$
(i.e.,~$w(p_2[v,k]) < w(p_1[v',k'])$) will be
exposed after iteration $j_1$. Hence, a proper suffix of this path is
frozen at some node $z$ during iteration $j_1$.
\end{itemize}
For a given $v$ and $p_1[v',k']$, there could be several paths
$p_2[v,k]$ that satisfy the third condition above. We choose one, such
that its suffix is frozen at a node $z$ that is closest from $v$.
Since $v$ has already been discovered as a $K$-root,
$z$ is different from $v$.
Clearly, $z.\mathit{marks[k]}$ is changed to $\textit{visited}$ before iteration
$j_1$. By Lemma~\ref{LEMMA:GTF-SHORTEST-PATH-MARKS}, a minimal path
$p_m[z,k]$ does that. Let $p_s[v,z]$ be a minimal path from $v$ to
$z$.
Consider the path
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Since both $p_s[v,z]$ and $p_m[z,k]$ are minimal, the weight
of their concatenation (i.e.,~$\bar p[v,k]$) is no more than that of
$p_2[v,k]$ (which is also a path that passes through node $z$). Hence,
$w(\bar p[v,k]) < w(p_1[v',k'])$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT} are
satisfied at the beginning of iteration $j_1$
(i.e.,~$j_1$ corresponds to $i$ in Lemma~\ref{LEMMA:PATH-CONCAT}).
Conditions~\ref{cond:new-first}--\ref{cond:new-second} clearly hold.
Condition~\ref{cond:first} is satisfied because a suffix of $p_2[v,k]$
is frozen at $z$.
Condition~\ref{cond:second} holds, because
of the choice of $z$ and the fact
$w(\bar p[v,k]) < w(p_1[v',k'])$ that was shown earlier.
Condition~\ref{cond:third} holds, because otherwise
$\bar p[v,k]$ would be unfrozen and $z.\mathit{marks[k]}$ would be $\textit{in-answer}$
rather than $\textit{visited}$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar p[v,k]$ is on the
queue at the beginning of iteration $j_1$. This contradicts the
assumption that the path $p_1[v',k']$ is removed from the
queue at the beginning of iteration $j_1$, because $\bar p[v,k]$ (and, hence,
any of its suffixes) has a strictly lower weight.
\end{proof}
\begin{restatable}{lemma}{lemmaNVWT}
\label{LEMMA:NO-VISIT-WHEN-TERMINATING}
For all nodes $v\in V$, such that $v$ is a $K$-root,
the following holds.
If $z$ is a node on a simple path from $v$ to some $k\in K$,
then $z.\mathit{marks[k]} \not= \textit{visited}$ when the algorithm terminates.
\end{restatable}
\begin{proof}
The algorithm terminates when the test of
line~\ref{alg:gtf:mainLoop_start} shows that $Q$ is empty. Suppose
that the lemma is not true. Consider some specific $K$-root $v$ and
keyword $k$ for which the lemma does not hold. Among all the nodes $z$
that violate the lemma with respect to $v$ and $k$, let $z$ be a
closest one from $v$. Observe that $z$ cannot be $v$, for the
following two reasons. First, by Lemma~\ref{lemma:all-roots}, node $v$ is
discovered as a $K$-root before termination. Second, when a $K$-root is
discovered (in
lines~\ref{alg:gtf:becomesRoot}--\ref{alg:gtf:isRootGetsTrue}), all
its marks become $\textit{in-answer}$ in
lines~\ref{alg:gtf:newRootStart}--\ref{alg:gtf:unfrNewRoot}.
Suppose that $p_m[z,k]$ is the path that changes $z.\mathit{marks[k]}$
to $\textit{visited}$. Let $p_s[v,z]$ be a minimal path from $v$ to $z$. Note
that $p_s[v,z]$ exists, because $z$ is on a simple path from $v$ to $k$.
Consider the path
\begin{equation*}
\bar p[v,k] = p_s[v,z] \circ p_m[z,k].
\end{equation*}
Suppose that the test of line~\ref{alg:gtf:mainLoop_start} is \textbf{false}
(and, hence, the algorithm terminates) on iteration $i$.
We now show that the conditions of Lemma~\ref{LEMMA:PATH-CONCAT}
are satisfied at the beginning of that iteration.
Conditions~\ref{cond:new-first}--\ref{cond:new-second} of
Lemma~\ref{LEMMA:PATH-CONCAT} clearly hold.
Conditions~\ref{cond:first}--\ref{cond:second} are satisfied
because of how $z$ is chosen. Condition~\ref{cond:third} holds, because
otherwise $z.\mathit{marks[k]}$ should have been changed to $\textit{in-answer}$.
By Lemma~\ref{LEMMA:PATH-CONCAT}, a suffix of $\bar p[v,k]$ is on $Q$
when iteration $i$ begins, contradicting our assumption that $Q$ is empty.
\end{proof}
\begin{theorem}\label{theorem:gtf-correct}
GTF is correct. In particular, it finds all and only answers
to the query $K$ by increasing height within $2(n+1)^{2n(n+1)}$ iterations
of the main loop (line~\ref{alg:gtf:mainLoop_start}), where $n=|V|$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:finite},
the algorithm constructs at most $(n+1)^{2n(n+1)}$ paths.
By Proposition~\ref{prop:twice},
a path can be inserted into the queue $Q$ at most twice.
Thus, the algorithm terminates after at most $2(n+1)^{2n(n+1)}$ iterations
of the main loop.
By Part~\ref{part:all-roots-part-one} of Lemma~\ref{lemma:all-roots},
all the $K$-roots are discovered.
By Lemma~\ref{LEMMA:NO-VISIT-WHEN-TERMINATING},
no suffix of a simple path from a $K$-root to a keyword can be frozen
upon termination. Clearly, no such suffix can be on $Q$
when the algorithm terminates. Hence, the algorithm constructs
all the simple paths from each $K$-root to every
keyword. It thus follows that the algorithm finds all the answers to $K$.
Clearly, the algorithm generates only valid answers to $K$.
Next, we prove that the answers are produced in the order of
increasing height. So, consider answers $a_1$ and $a_2$
that are produced in iterations $j'_1$ and $j_2$, respectively.
For the answer $a_i$ ($i=1,2$),
let $r_i$ and $h_i$ be its $K$-root and height, respectively.
In addition, let $b_i$ be the best height of $r_i$ ($i=1,2$).
Suppose that $j'_1 < j_2$. We have to prove that $h_1 \le h_2$.
By way of contradiction, we assume that $h_1 > h_2$.
By the definition of best height, $h_2 \ge b_2$. Hence, $h_1 > b_2$.
Let $p_2[r_2,k]$ be the path of $a_2$ that is exposed
(i.e.,~removed from $Q$) in iteration $j_2$.
Suppose that $p_1[r_1,k']$ is a path of $a_1$, such that
$w(p_1[r_1,k'])=h_1$ and $p_1[r_1,k']$ is exposed in the iteration $j_1$
that is as close to iteration $j'_1$ as possible
(among all the paths of $a_1$ from $r_1$ to a keyword with a weight
equal to $h_1$). Clearly, $j_1 \le j'_1$ and hence $j_1 < j_2$.
We now show that $w(p_1[r_1,k']) < h_1$, in contradiction to
$w(p_1[r_1,k'])=h_1$. Hence, the claim that $h_1 \le h_2$ follows.
Let $i$ be the iteration that discovers $r_2$ as a $K$-root.
There are two cases to consider as follows.
\begin{description}
\item[Case 1: $i < j_1$.] In this case, $i < j_1 < j_2$, since $j_1 <
j_2$. By Lemma~\ref{LEMMA:INCREASING-HEIGHT},
$w(p_1[r_1,k']) \le w(p_2[r_2,k])$. (Note that we
apply Lemma~\ref{LEMMA:INCREASING-HEIGHT} after replacing $v$ and
$v'$ with $r_2$ and $r_1$, respectively.) Hence,
$w(p_1[r_1,k']) < h_1$, because $w(p_2[r_2,k]) \le h_2$
follows from the definition of height and we have assumed that $h_1 > h_2$.
\item[Case 2: $j_1 \le i$.]
By Part~\ref{part:all-roots-part-two} of Lemma~\ref{lemma:all-roots},
$w(p_1[r_1,k']) \le b_2$. Hence, $w(p_1[r_1,k']) < h_1$,
because we have shown earlier that $h_1 > b_2$.
\end{description}
Thus, we have derived a contradiction and, hence,
it follows that answers are produced by increasing height.
\end{proof}
\begin{corollary}\label{cor:running}
The running time of the algorithm GTF is
$O\left(kn(n+1)^{2kn(n+1)+1}\right)$, where $n$ and $k$ are the number of
nodes in the graph and keywords in the query, respectively.
\end{corollary}
\begin{proof}
The most expensive operation is a call to $\bid{produceAnswers}(v.paths,p)$. By
Lemma~\ref{lemma:finite}, there are at most $(n+1)^{2n(n+1)}$ paths.
A call to the procedure $\bid{produceAnswers}(v.paths,p)$ considers all combinations of
$k-1$ paths plus $p$. For each combination, all its $k$ paths are
traversed in linear time. Thus, the total cost of one call to
$\bid{produceAnswers}(v.paths,p)$ is $O\left(kn(n+1)(n+1)^{(k-1)2n(n+1)}\right)$. By
Theorem~\ref{theorem:gtf-correct}, there are at most
$2(n+1)^{2n(n+1)}$ iterations. Hence, the running time is
$O\left(kn(n+1)^{2kn(n+1)+1}\right)$.
\end{proof}
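The exponent bookkeeping in this proof can be double-checked mechanically (an illustrative sketch, not part of the proof): multiplying the per-call cost $O\left(kn(n+1)(n+1)^{(k-1)2n(n+1)}\right)$ by the $2(n+1)^{2n(n+1)}$ iteration bound yields an exponent of $(n+1)$ equal to $1 + (k-1)\,2n(n+1) + 2n(n+1) = 2kn(n+1)+1$:

```python
# Illustrative check that the exponents of (n+1) in the corollary add up:
# (per-call cost) * (iteration bound) gives exponent 2*k*n*(n+1) + 1.

def combined_exponent(n: int, k: int) -> int:
    per_call = 1 + (k - 1) * 2 * n * (n + 1)  # from O(k*n*(n+1)^{1+(k-1)2n(n+1)})
    iterations = 2 * n * (n + 1)              # from the iteration bound
    return per_call + iterations

for n in range(1, 8):
    for k in range(1, 8):
        assert combined_exponent(n, k) == 2 * k * n * (n + 1) + 1
```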
\section*{ACKNOWLEDGMENT}
\bibliographystyle{IEEEtran}
\subsection{IDM Variables}
\begin{table}[h!]
\centering
\begin{tabular}{l|l}
\textbf{Variable} & \textbf{Definition} \\
\hline
& \\
$v_0$ & Desired velocity \\
$T$ & Safe time headway \\
$a$ & Maximum acceleration \\
$b$ & Comfortable deceleration \\
$\delta$ & Acceleration exponent \\
$s_0$ & Minimum (jam) distance \\
$l$ & Vehicle length \\
$\Delta v_\alpha$ & Velocity difference with front vehicle
\end{tabular}
\caption{\label{tab:def}Variable descriptions for the Intelligent Driver Model.}
\end{table}
\subsection{Jacobian of the IDM}
\label{subsec:jacobian}
For ease of calculation, we represent the simulation state vector $q$ at a certain time step $t$ to be a $(1,2N)$ vector instead of an $(N,2)$ matrix, where $N$ is the number of vehicles. Thus, the simulation state will take on the form
\begin{align*}
q_t =
\begin{bmatrix}
x_1 & v_1 & ... & x_N & v_N
\end{bmatrix}
\end{align*}
Then, we can expect the Jacobian relating one state to the next to be a matrix of dimension $(2N, 2N)$. For a particular vehicle at index $\alpha$ from the front vehicle, we derive the Jacobian of the IDM forward simulation. Let
\begin{align*}
f(x_\alpha, v_\alpha) &= {\dot {x}}_{\alpha } \\
g(x_\alpha, v_\alpha) &= {\dot {v}}_{\alpha }
\end{align*}
Then the Jacobian of the IDM with respect to state values position $x$ and velocity $v$ will take on the form:
\begin{align*}
J_{idm}(q_0, q_1) &=
\begin{pmatrix}
\frac{\partial\, \mathit{vehicle}_1^{t=1}}{\partial\, \mathit{vehicle}_1^{t=0}} & ... & \frac{\partial\, \mathit{vehicle}_1^{t=1}}{\partial\, \mathit{vehicle}_N^{t=0}} \\
\vdots & \ddots & \vdots \\
\frac{\partial\, \mathit{vehicle}_N^{t=1}}{\partial\, \mathit{vehicle}_1^{t=0}} & ... & \frac{\partial\, \mathit{vehicle}_N^{t=1}}{\partial\, \mathit{vehicle}_N^{t=0}}
\end{pmatrix} \\
\end{align*}
Recall from Equation~\ref{eq:sim_state} that the state of a single agent or vehicle comprises both a position and a velocity component. For the sake of readability, the derivative below is always taken with respect to the previous time step. The Jacobian above can then be expanded:
\begin{align*}
\begin{pmatrix}
\frac{\partial f(x_1,v_1)}{\partial x_1} & \frac{\partial f(x_1,v_1)}{\partial v_1} & ... & ... & \frac{\partial f(x_1,v_1)}{\partial x_N} & \frac{\partial f(x_1,v_1)}{\partial v_N} \\
\frac{\partial g(x_1,v_1)}{\partial x_1} & \frac{\partial g(x_1,v_1)}{\partial v_1} & ... & ... & \frac{\partial g(x_1,v_1)}{\partial x_N} & \frac{\partial g(x_1,v_1)}{\partial v_N} \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
\frac{\partial f(x_N,v_N)}{\partial x_1} & \frac{\partial f(x_N,v_N)}{\partial v_1} & ... & ... & \frac{\partial f(x_N,v_N)}{\partial x_N} & \frac{\partial f(x_N,v_N)}{\partial v_N} \\
\frac{\partial g(x_N,v_N)}{\partial x_1} & \frac{\partial g(x_N,v_N)}{\partial v_1} & ... & ... & \frac{\partial g(x_N,v_N)}{\partial x_N} & \frac{\partial g(x_N,v_N)}{\partial v_N} \\
\end{pmatrix} \\
\end{align*}
This resulting Jacobian ends up being a lower triangular matrix. This is because any entry above the main 2-by-2 diagonal represents the relation between a vehicle and the vehicles behind it. In car-following models, a vehicle's position and velocity are not affected by the vehicles behind it. Thus, the upper half of the Jacobian is zeroed out. Additionally, in the context of IDM, a vehicle is only influenced by the vehicle directly in front of it. Thus, any partial derivative between a vehicle and a vehicle more than one position ahead is also zeroed out. An example of the Jacobian for a 3-vehicle simulation is shown below:
{\tiny
\begin{align*}
\begin{pmatrix}
\frac{\partial f(x_1,v_1)}{\partial x_1} & \frac{\partial f(x_1,v_1)}{\partial v_1} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}\\
\frac{\partial g(x_1,v_1)}{\partial x_1} & \frac{\partial g(x_1,v_1)}{\partial v_1} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}\\
\frac{\partial f(x_2,v_2)}{\partial x_1} & \frac{\partial f(x_2,v_2)}{\partial v_1} & \frac{\partial f(x_2,v_2)}{\partial x_2} & \frac{\partial f(x_2,v_2)}{\partial v_2} & \textbf{0} & \textbf{0} \\
\frac{\partial g(x_2,v_2)}{\partial x_1} & \frac{\partial g(x_2,v_2)}{\partial v_1} & \frac{\partial g(x_2,v_2)}{\partial x_2} & \frac{\partial g(x_2,v_2)}{\partial v_2} & \textbf{0} & \textbf{0} \\
\textbf{0} & \textbf{0} & \frac{\partial f(x_3,v_3)}{\partial x_2} & \frac{\partial f(x_3,v_3)}{\partial v_2} & \frac{\partial f(x_3,v_3)}{\partial x_3} & \frac{\partial f(x_3,v_3)}{\partial v_3} \\
\textbf{0} & \textbf{0} & \frac{\partial g(x_3,v_3)}{\partial x_2} & \frac{\partial g(x_3,v_3)}{\partial v_2} & \frac{\partial g(x_3,v_3)}{\partial x_3} & \frac{\partial g(x_3,v_3)}{\partial v_3} \\
\end{pmatrix} \\
\end{align*}
}%
From this example, we can see that the Jacobian can be found by generalizing the partial derivatives on the 2-by-2 diagonal, as well as the partial derivatives one 2-by-2 row directly below the diagonal. There are thus eight generalized terms to build the Jacobian matrix of one simulation time step. An entry on the 2-by-2 diagonal of the Jacobian matrix can be intuitively defined as the Jacobian of a vehicle's state with respect to itself from the previous time step. Thus, every 2-by-2 entry on the diagonal takes this general form for a particular vehicle at index $\alpha$ from the front:
{\small
\begin{align*}
J_{idm}[2\alpha,2\alpha] &=
\begin{pmatrix}
\frac{\partial f}{\partial x_\alpha} & \frac{\partial f}{\partial v_\alpha} \\
\frac{\partial g}{\partial x_\alpha} & \frac{\partial g}{\partial v_\alpha}
\end{pmatrix} \\
\frac{\partial f}{\partial x_\alpha} &= 0 \\
\frac{\partial f}{\partial v_\alpha} &= 1 \\
\frac{\partial g}{\partial x_\alpha} &= \frac{-2as^*(v_\alpha, \Delta v_\alpha)^2}{s_\alpha^3} \\
\frac{\partial g}{\partial v_\alpha} &= \frac{-a \delta v_\alpha^{\delta-1}}{v_0^\delta} + \frac{-2a}{s_\alpha^2}\left(T + \frac{\Delta v_\alpha + v_\alpha}{2\sqrt{ab}}\right)s^*(v_\alpha, \Delta v_\alpha)
\end{align*}
}%
Likewise, the 2-by-2 blocks one row directly beneath the diagonal take the form (excluding the first row, which represents the leading vehicle):
\begin{align*}
J_{idm}[2\alpha,2\alpha - 2] &=
\begin{pmatrix}
\frac{\partial f}{\partial x_{\alpha-1}} & \frac{\partial f}{\partial v_{\alpha-1}} \\
\frac{\partial g}{\partial x_{\alpha-1}} & \frac{\partial g}{\partial v_{\alpha-1}}
\end{pmatrix} \\
\frac{\partial f}{\partial x_{\alpha-1}} &= 0 \\
\frac{\partial f}{\partial v_{\alpha-1}} &= 0 \\
\frac{\partial g}{\partial x_{\alpha-1}} &= \frac{2as^*(v_\alpha, \Delta v_\alpha)^2}{s_\alpha^3} \\
\frac{\partial g}{\partial v_{\alpha-1}} &= \frac{2as^*(v_\alpha, \Delta v_\alpha)v_\alpha}{2s_\alpha^2\sqrt{ab}}
\end{align*}
With this analytically derived form of the Jacobian, we can now compute the Jacobian at any time step directly without the use of Autograd, given the current state as input. For $N$ vehicles, we calculate $4N$ values on the main diagonal, $4(N-1)$ values on the ``subdiagonal'', and zero out every other value to attain the $2N \times 2N$ Jacobian matrix for a particular time step.
Note that IDM can sometimes produce negative velocities in intermediate or end states. To address this issue, any 2-by-2 submatrix whose vehicle has a negative velocity at the next time step is assigned a gradient of zeros. This matches the negative-velocity clipping in the forward simulation, where negative velocities after the simulation update are clipped to zero.
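The two sets of partial derivatives above, together with the finite-difference check one might use to validate them, can be sketched as follows (an illustrative sketch with our own parameter values and function names, not the paper's implementation):

```python
import math

# Illustrative IDM parameters (our own choices, not the paper's).
V0, T, A, B, DELTA, S0, L = 30.0, 1.5, 1.0, 1.5, 4.0, 2.0, 5.0

def accel(x_prev, v_prev, x, v):
    """IDM acceleration g of a follower (x, v) behind a leader (x_prev, v_prev)."""
    s = x_prev - x - L                                       # net gap s_alpha
    dv = v - v_prev                                          # Delta v_alpha
    s_star = S0 + v * T + v * dv / (2.0 * math.sqrt(A * B))  # desired gap s*
    return A * (1.0 - (v / V0) ** DELTA - (s_star / s) ** 2)

def analytic_partials(x_prev, v_prev, x, v):
    """The four non-trivial entries: dg/dx, dg/dv (diagonal block) and
    dg/dx_prev, dg/dv_prev (subdiagonal block), as derived above."""
    s = x_prev - x - L
    dv = v - v_prev
    s_star = S0 + v * T + v * dv / (2.0 * math.sqrt(A * B))
    dg_dx = -2.0 * A * s_star ** 2 / s ** 3
    dg_dv = (-A * DELTA * v ** (DELTA - 1.0) / V0 ** DELTA
             - 2.0 * A / s ** 2
             * (T + (dv + v) / (2.0 * math.sqrt(A * B))) * s_star)
    dg_dx_prev = 2.0 * A * s_star ** 2 / s ** 3
    dg_dv_prev = 2.0 * A * s_star * v / (2.0 * s ** 2 * math.sqrt(A * B))
    return dg_dx, dg_dv, dg_dx_prev, dg_dv_prev

def numeric_partials(x_prev, v_prev, x, v, eps=1e-6):
    """Central finite differences of accel, in the same order."""
    def d(i):
        hi = [x_prev, v_prev, x, v]
        lo = list(hi)
        hi[i] += eps
        lo[i] -= eps
        return (accel(*hi) - accel(*lo)) / (2.0 * eps)
    return d(2), d(3), d(0), d(1)

# The analytical blocks should agree with finite differences.
state = (60.0, 8.0, 30.0, 10.0)  # leader at 60 m doing 8 m/s; follower at 30 m doing 10 m/s
for ana, num in zip(analytic_partials(*state), numeric_partials(*state)):
    assert abs(ana - num) < 1e-3 * max(1.0, abs(ana))
```

Under these assumed values, $\partial g/\partial x_\alpha$ is negative (moving the follower forward shrinks the gap and increases braking) while $\partial g/\partial x_{\alpha-1}$ is its exact negation, matching the derivation above.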
The boost in speed per iteration using the analytical gradient versus automatic differentiation is visualized in Figure~\ref{fig:runtime}.
\begin{figure}[htb!]
\includegraphics[width=8cm,height=6cm]{imgs/runtime.png}
\caption{\textbf{Comparison of runtime (s) over the number of iterations for analytical differentiation versus automatic differentiation of the car-following model IDM.}}
\label{fig:runtime}
\end{figure}
\section{Background}
\label{sec:difftraffic}
In this section, we establish formal notation and definitions that will be used throughout the rest of the paper.
\subsection{Simulation-related Notation and Definitions}
To integrate traffic simulation into learning and optimization frameworks for autonomous driving, forward simulation should be differentiable. Agent-based traffic simulation is typically governed by ordinary differential equations (ODEs) known as car-following models. These ODEs describe the position and velocity of each individual agent over time; in the context of traffic simulation, the agent directly in front is taken into account when calculating them. Since the positions and velocities of the individual agents together describe the current state of the entire simulation, we define the \textbf{agent state $q^t_n$} for vehicle $n$ and the \textbf{simulation state $s^t$} at time step $t$ to be
\vspace*{-1.5em}
\begin{equation}
\label{eq:sim_state}
q_n^t =
\begin{bmatrix}
x_n^t \\
v_n^t
\end{bmatrix}, \\
s^t =
\begin{bmatrix}
(q_1^t)^T \ldots (q_N^t)^T
\end{bmatrix}^T \\
\end{equation}
\subsection{The Intelligent Driver Model (IDM)}
We model traffic dynamics of human drivers in our experiments with the Intelligent Driver Model (IDM)~\cite{Treiber_Hennecke_Helbing_2000}. IDM describes how the position and velocity of each individual vehicle change over time. Let $x_\alpha$ and $v_\alpha$ be the position and velocity, at a fixed time step, of the $\alpha$-th vehicle counted from the leading vehicle. Note that $\alpha - 1$ denotes the index of the vehicle directly in front, and $\alpha = 0$ for the leading vehicle.
{\small
\begin{align*}
{\dot {x}}_{\alpha }&={\frac {\mathrm {d} x_{\alpha }}{\mathrm {d} t}}=v_{\alpha } \\
{\dot {v}}_{\alpha }&={\frac {\mathrm {d} v_{\alpha }}{\mathrm {d} t}}=a\,\left(1-\left({\frac {v_{\alpha }}{v_{0}}}\right)^{\delta }-\left({\frac {s^{*}(v_{\alpha },\Delta v_{\alpha })}{s_{\alpha }}}\right)^{2}\right) \\
s_\alpha &= x_{\alpha-1} - x_\alpha - l_{\alpha-1} \\
s^{*}(v_{\alpha },\Delta v_{\alpha })&=s_{0}+v_{\alpha }\,T+{\frac {v_{\alpha}\,\Delta v_{\alpha }}{2\,{\sqrt {a\,b}}}} \\
\Delta v_\alpha &= v_\alpha - v_{\alpha-1}
\end{align*}
}%
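As an illustrative sketch of these dynamics (our own function names and default parameter values, not taken from the original implementation), a single follower update can be written directly from the equations above:

```python
import math

def idm_acceleration(x, v, x_lead, v_lead, l_lead,
                     v0=30.0, T=1.5, a=1.0, b=1.5, delta=4.0, s0=2.0):
    """dv/dt for a follower at (x, v) behind a leader at (x_lead, v_lead)
    of length l_lead; the default parameter values are illustrative."""
    s = x_lead - x - l_lead                                  # net gap s_alpha
    dv = v - v_lead                                          # Delta v_alpha
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired gap s*
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def euler_step(x, v, x_lead, v_lead, l_lead, dt=0.1):
    """One explicit Euler step of dx/dt = v, dv/dt = idm_acceleration,
    clipping negative velocities to zero."""
    acc = idm_acceleration(x, v, x_lead, v_lead, l_lead)
    return x + dt * v, max(0.0, v + dt * acc)
```

A stationary follower far behind its leader accelerates at close to the maximum $a$, while a fast follower closing in on a stopped leader brakes hard.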
\noindent
Unlike previous works in traffic simulation, we analytically derive the gradient of the forward simulation to speed up backpropagation, rather than relying on automatic differentiation.
We leave the speedup analysis, variable definitions, and the full derivation and definition of the IDM Jacobian to the appendix, which can be found on the project website. The form of the analytical Jacobian matrix also generalizes to any car-following model in which a vehicle's actions are affected only by the vehicle directly in front.
\section{Conclusion and Discussion}
\vspace*{-0.5em}
In this paper, we present a method for traffic-aware autonomous driving which benefits both the autonomous vehicle and traffic around it. Our method complements existing work by providing additional supervision on acceleration behavior guided by differentiable traffic simulation. In addition, we improve the sample efficiency of on-policy RL algorithms using analytical gradients of car-following models and show that our method produces a driving policy that not only benefits the surrounding traffic system, but also improves driving performance of each autonomous vehicle on multiple benchmarks.
With traffic information available in both supervised and RL autonomous driving settings, this work illustrates that coupling traffic information with autonomous driving can help transfer such behaviors to real-world applications.
{\bf Limitations and Future Work: } Independent of our method, the current implementation, which integrates with the CARLA simulator for learning, imposes some limitations. For example, the shortage of difficult traffic scenarios in the available CARLA driving benchmarks limits further evaluation of our approach with FLOW on vehicle control; the same holds for the limited traffic tasks in the current driving benchmarks. Furthermore, current driving benchmarks do not consider societal goals, such as overall traffic flow and fuel consumption, and thus may downplay the potential positive impact of our method.
More driving benchmarks that consider the systematic benefits of the entire road-network system would allow for a more comprehensive evaluation of different alternatives, in both a collective and an individualistic sense.
\section{INTRODUCTION}
The ideal autonomous vehicle should be able to minimize the travel time of a route, maximize energy efficiency, and provide a smooth and safe experience for the riders. Such objectives are not only important to the passenger’s experience, but also carry greater social benefits. In the US, traffic congestion is estimated to cost 29 billion dollars annually~\cite{emissions}, transportation accounts for 27\% of carbon emissions~\cite{EPA_emissions}, and traffic accidents are the leading cause of unnatural death as of 2022~\cite{CDC_cause_of_death}. From a learning perspective, the extent to which an autonomous vehicle can improve on these objectives is often affected by the environment around it, i.e. the surrounding vehicle traffic and the road networks.
As with many other multi-agent problems, improving traffic flow and energy efficiency for all vehicles in the entire system can lead to a greater improvement in each vehicle’s individual objectives. Therefore, we define these objectives by taking into account the overall traffic flow (i.e. the average vehicle velocity in the road network that the autonomous vehicle is traveling on), fuel consumption, and variations in speed over time.
In the same fashion that the environment would affect an agent’s decision making, the vehicle’s motion also impacts the traffic around it. An autonomous vehicle can influence the flow of human-driven traffic by acting as a pace car to those behind it. A ``{\em pace car}'' is commonly used in car racing to control speeds of competing vehicles; this same notion can be applied to autonomous driving on a road in the physical world.
\begin{figure}[th]
\vspace*{-1.5em}
\includegraphics[width=9cm,height=7cm, trim=0cm 0cm 6cm 0cm,clip]{imgs/phases.pdf}
\vspace*{-1.5em}
\caption{\textbf{Training for Traffic-aware Autonomous Driving.} Our method can be adapted to most existing imitation learning frameworks for driving by just adding an additional phase of training, where we purposefully ``Learn to Accel''.
An agent whose action space only spans possible acceleration actions navigates through a co-simulated environment, and is rewarded when it improves overall traffic flow and fuel consumption.
In Phase 2, where we ``Learn to do Everything Else'', the acceleration agent is frozen and supervises speed control of an imitation learning agent in a distillation-esque manner.
}
\label{fig:method}
\vspace*{-2em}
\end{figure}
With a model of the traffic and its dynamics on a road network, the autonomous driving policy can directly optimize for {\em traffic flow, energy efficiency, and smooth acceleration}.
Decades of traffic engineering research presents sophisticated mathematical equations modelling traffic, including car-following models~\cite{Treiber_Hennecke_Helbing_2000, Newell_2002, Wiedemann_1974, krauss}, traffic flow models~\cite{lighthill1955kinematic, gazis1961nonlinear, aw2000resurrection, zhang2002non} and theory~\cite{Kerner}. These mathematical models, though often too simplified to account for the uncertainty of driver behaviors, are computationally efficient and differentiable. One possibility is to model the traffic environment with a neural network, as in some of the latest work~\cite{Olayode_Tartibu_Okwu_2021, Diehl_Brunner_Le_Knoll_2019, Jiang_Luo_2022, Do_Taherifar_Vu_2019}. In this paper, we further propose to couple a learning-based traffic control algorithm with {\em differentiable} traffic models.
With differentiability, we can use the gradients of forward traffic dynamics to guide the learning of a driving policy, so long as the policy has access to traffic information.
\begin{figure*}[ht!]
\vspace*{-0.25em}
\includegraphics[width=\textwidth,height=3cm]{imgs/vel_comparison.pdf}
\includegraphics[width=\textwidth,height=3cm]{imgs/fuel_comparison.pdf}
\caption{\textbf{Comparison of overall traffic flow and fuel consumption between our method and baseline, for each CARLA test scenario.}
Our model consistently improves overall traffic flow over measured baseline traffic flow in nearly all scenarios, showing that our single-vehicle control can influence the traffic flow around it. Our method is also able to maintain similar, if not better, fuel consumption -- in spite of a direct relationship between increased flow and increased fuel usage.}
\vspace*{-1.5em}
\label{fig:barchart}
\end{figure*}
In our paper, we present a generalizable algorithm for {\em traffic-aware autonomous driving} through the use of {\em differentiable traffic simulation}. By coupling a driving environment with traffic simulation, the self-driving car can retrieve traffic information during training and learn behaviors which are both beneficial to its individual goals and other human-driven vehicles.
In our method, we add a phase of training in addition to traditional imitation learning for driving, where the vehicle ``learns to accelerate''. This phase involves maximizing the overall traffic flow of the vehicle's local lane, minimizing the fuel consumption of all vehicles, and discouraging jerky acceleration actions. The resulting model provides distillation-esque supervision to any standard imitation learning driving method. Because our model influences imitation learning via supervision, it is generalizable to nearly any standard imitation learning framework, regardless of architecture or design. Our results show that this method, when implemented on top of existing state-of-the-art driving frameworks, improves traffic flow, minimizes energy consumption for the autonomous vehicle, and enhances the passenger's ride experience.
In summary, we present the following key contributions:
\begin{enumerate}
\item A simulated traffic-annotated driving dataset for imitation learning for self-driving cars;
\item Use of gradients from differentiable traffic simulation to improve sample efficiency for
autonomous vehicles;
\item A generalizable method for traffic-aware autonomous driving, which learns to control the vehicle via rewards based on societal traffic-based objectives.
\end{enumerate}
Additional results, materials, code, datasets, and information can be found on our project website.
\subsection{Traffic Light Control}
\begin{figure}[th]
\vspace*{-1.5em}
\includegraphics[width=9cm,height=6cm]{imgs/diff_trpo.png}
\vspace*{-2em}
\caption{\textbf{Effect of Gradients for On-Policy Learning.} We show the training curve for our sample efficiency enhancement method DiffTRPO versus the on-policy algorithm TRPO~\cite{Schulman_Levine_Abbeel_Jordan_Moritz_07}, as well as various experiments varying the perturbation threshold parameter $\delta$ discussed in section~\ref{sec:sample_enhancement}. Curves are averaged over 10 runs each to account for stochasticity. A perturbation threshold $\delta$ of 0.2 and 0.4, shown in red and green respectively, yields {\em faster learning} and {\em higher reward} than the baseline TRPO in blue. Additional results for PPO are shown in supplemental materials on the project website.}
\label{fig:grad_rl}
\vspace*{-1em}
\end{figure}
\begin{figure}[htb!]
\includegraphics[width=8cm,height=3cm]{imgs/single_linegraph.pdf}
\caption{\textbf{Comparison of Traffic Flow Over Time in the average case.} We visualize the average velocity of traffic flow over time for our method (in orange) versus the baseline (in blue).
Our method improves average traffic flow by {\bf +64.27\%}.
Plots for the worst and best cases can be found on the project website. }
\vspace*{-1.5em}
\label{fig:timeseries_leaderboard}
\end{figure}
\section{Traffic-informed Single-Vehicle Control}
Most, if not all, self-driving models focus on replicating expert human behavior as ground truth. Additionally, FLOW~\cite{wu2017flow} introduces the vision of mixed autonomy, where autonomous vehicles drive in a way that benefits all members of the simulation. This behavior is not necessarily observed in the real world, or known to humans, and thus ground truth for optimal driving for societal good is not known. Currently, there is no existing research on integrating such behavior into single-vehicle driving control.
While FLOW's traffic simulation focuses on high-level control such as vehicle acceleration, we present a method to extend this to lower-level single vehicle control using imitation learning.
Differentiable traffic simulation enables direct involvement of simulation in supervised learning due to availability of simulation gradients for backpropagation. In our work, we use traffic simulation gradients to improve sample efficiency in on-policy RL.
\textbf{Dataset Collection.}
We collect our own driving dataset in the same format as ``Learning by Cheating''~\cite{chen2019lbc}. Driving data is collected by an expert driver in the CARLA Simulator at a rate of 2 Hz, or every 10 frames at 20 FPS. Training data comprises 50 routes over towns 1, 3, 5, and 6, while test data comprises 26 routes. These towns describe a small suburban town, a large complex city, a square-grid city, and a highway environment respectively. Sensors include a top-down bird's-eye segmentation map and three RGB cameras (left, right and center dashcam views). Annotations include vehicle position, vehicle control commands, and the traffic state of the current vehicle lane in the same format as Equation~\ref{eq:sim_state}.
Our driving dataset will be made available on our project website, along with code used to generate it upon acceptance.
\textbf{Hardware specs and Software.}
All experiments are trained on a single NVIDIA RTX A5000 GPU, Intel(R) Xeon(R) W-2255 CPU (20 cores), and 16 GB RAM, with the exception of experiments on TransFuser, which was trained on eight A5000 GPUs in parallel.
\begin{table*}[ht!]
\centering
\begin{tabular}{lcccccccc}
\toprule
Method & DrivingScore$\uparrow$ & RouteCompletion$\uparrow$ & PedCollisions$\downarrow$ & VehCollisions$\downarrow$ & OtherCollisions$\downarrow$ & Timeouts$\downarrow$\\
\midrule
LBC & 2.876 & 6.292 & 0.0 & 4.148 & 25.715 & 0.830 \\
LBC+Ours & \textbf{6.256} & \textbf{19.487} & 0.0 & \textbf{3.706} & \textbf{12.766} & \textbf{0.137} \\
\midrule
TransFuser & \textbf{33.908} & 77.657 & 0.0240 & 3.366 & \textbf{0.168} & 0.144\\
TransFuser+Ours & 31.291 & \textbf{79.964} & \textbf{0.0223} & \textbf{2.604} & 0.200 & \textbf{0.044}\\
\bottomrule
\end{tabular}
\caption{\textbf{Driving Performance Benchmark}.
Arrows denote the direction of improvement. We benchmark two baselines, LBC~\cite{chen2019lbc} and TransFuser~\cite{Chitta2022PAMI, Prakash2021CVPR}, and evaluate our method on the CARLA Leaderboard Test Scenarios and Longest6 benchmarks from~\cite{Prakash2021CVPR} respectively. Our method is able to complement and enhance the driving performance of existing methods solely through distillation of acceleration behaviors, with no modifications to the design governing the learning of other controls. For TransFuser, ours improves the RouteCompletion \% while incurring only negligible infractions. See Section~\ref{sec:results} for details.
\label{table:benchmark}
\vspace*{-1.5em}
\end{table*}
\subsection{Phase 1: Learning Acceleration with Online RL}
The goal of this phase is purely to learn acceleration behavior, given all other controls are optimal with respect to the expert, through the use of on-policy RL algorithms. For our experiments, we use PPO as well as differentiable PPO, which improves sample efficiency by taking advantage of simulation gradients. We also use on-policy model-free algorithms in our method to be consistent with results from FLOW~\cite{wu2017flow}.
The action space of this phase is continuous and defined by a maximum acceleration $\alpha_{max}$ and minimum acceleration $\alpha_{min}$, in units of meters per second squared ($m/s^2$). The observation space is multi-modal and depends on the inputs of the imitation learning framework. For LBC, we use a top-down segmentation map $M$ plus the traffic state vector $s$ given by the simulator.
To account for traffic laws and discourage blind acceleration, the predicted acceleration is only used when the expert determines a non-braking state. It is also worth noting that reward functions from FLOW are not inherently differentiable due to the lack of differentiable traffic simulation; thus differentiable simulation enables differentiable rewards as well.
In our experiments, we use a weighted sum of two societal objectives, $R_{vel}$ and $R_{mpg}$, plus a jerkiness constraint related only to the autonomous vehicle itself.
The first term relates to the average velocity of the traffic state, which directly addresses overall traffic flow. The second term is a ``miles per gallon'' metric, which relates to energy efficiency. Finally, the third term is an L2 constraint that discourages the next predicted action from deviating too far from the last predicted action, encouraging ``smoother'' transitions of acceleration. This term is inspired by real-world behavior, where a passenger will feel safer if a driver slowly presses down the gas pedal to accelerate. The third term can also be viewed as a form of time-dependent regularization on sequences of actions.
The reward function $R_{comb}$ is described below, where $g^i_t$ is gallons of fuel consumed per second and $v^i_t$ represents the velocity in meters per second by vehicle $i$ at timestep $t$ after action $a_t$:
\vspace*{-1.5em}
\begin{align*}
R_{vel}(s_t) &= \frac{1}{N} \sum_{i=1}^{N}{v^i_t} \\
R_{mpg}(s_t) &= \frac{1}{1609 N} \sum_{i=1}^{N}{\frac{v^i_t}{g^i_t}} \\
R_{comb}(s_t) &= \alpha R_{vel}(s_t) + \beta R_{mpg}(s_t) - \lambda \|a_t - a_{t-1}\|_{2} \\
\end{align*}
\vspace*{-3em}
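A minimal sketch of how $R_{comb}$ could be computed from a simulated traffic state follows; the weights $\alpha$, $\beta$, $\lambda$ below are illustrative placeholders rather than the values used in our experiments, and the factor $1609$ converts meters to miles:

```python
import numpy as np

def combined_reward(v, g, a_t, a_prev, alpha=1.0, beta=1.0, lam=0.1):
    """Weighted sum of the traffic-flow and fuel-efficiency rewards minus a
    jerkiness penalty, following R_comb above.
    v: velocities (m/s) of the N vehicles in the local traffic state
    g: fuel consumption (gal/s) of each vehicle
    a_t, a_prev: current and previous predicted accelerations"""
    r_vel = np.mean(v)                 # R_vel: average velocity -> traffic flow
    r_mpg = np.mean(v / g) / 1609.0    # R_mpg: 1609 m per mile -> miles per gallon
    jerk = np.linalg.norm(np.atleast_1d(a_t) - np.atleast_1d(a_prev))
    return alpha * r_vel + beta * r_mpg - lam * jerk
```

The jerk penalty only involves the ego vehicle's actions, while the first two terms aggregate over all $N$ vehicles in the traffic state.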
\subsection{Phase 2: Learn to do Everything Else}
This phase involves the integration of the learned Phase 1 acceleration model into an autonomous driving framework for single-vehicle control. In our experiments, we use Learning by Cheating (LBC)~\cite{chen2019lbc} as the backbone for Phase 2. We first train the privileged agent that has access to a top-down ground truth map of the environment $M \in \{0,1\}^{W\times H\times7}$. The privileged agent learns to drive an autonomous vehicle in a fully-supervised manner, similar to LBC.
Our Phase 1 model, illustrated in the top half of Figure~\ref{fig:method}, directly provides ground truth for the supervision of throttle control in this step. Instead of learning the throttle based on ground-truth labels provided by the expert agent during data collection, the Phase 1 acceleration agent supervises the throttle command.
Each sample is also annotated with the local traffic state, which is provided to the Phase 1 model for supervision of Phase 2.
Intuitively, the Phase 1 model is teaching the Phase 2 model how to accelerate; all other controls are learned through the original LBC pipeline. This includes the non-cheating ``student'' model, which learns from limited single-view information. Supervision of both speed and steering of the student model is done by the trained Phase 2 privileged model, which should have learned optimized acceleration behavior from the Phase 1 acceleration model.
This implementation design provides two main advantages: (1) Phases 1 and 2 are modular, and thus can be re-used to train multiple Phase 2 models; (2) since traffic information transfers between models as a form of knowledge distillation, our method is able to complement prior works rather than compete with them.
\begin{table*}[ht!]
\centering
\begin{tabular}{lccccccccc}
\toprule
Method & TrafficFlow$\uparrow$ & FuelCons$\downarrow$ & Jerk ($\Delta m/s$)$\downarrow$ & DrivingScore$\uparrow$ & RouteCompletion$\uparrow$ & InfractionScore$\downarrow$ & VehCollisions$\downarrow$ & OtherCollisions$\downarrow$ \\
\midrule
Baseline & 1.713 & 1.0399 & 2.28622e-3 & 4.021 & 6.184 & 0.302 & 2.879 & 10.557 \\
NoVelocity & 2.637 & 1.1010 & 1.73112e-3 & 4.197 & 6.871 & 0.350 & 2.309 & 15.701 \\
NoFuel & \textbf{2.679} & 1.0866 & 0.94659e-3 & \textbf{5.507} & \textbf{7.622} & 0.25 & 0.819 & 11.05 \\
NoJerk & 2.290 & \textbf{1.0248} & 0.94660e-3 & 3.900 & 5.263 & \textbf{0.229} & \textbf{0.608} & 9.731 \\
Ours & 2.428 & 1.0594 & \textbf{0.94659e-3} & 4.094 & 6.271 & 0.320 & 2.801 & \textbf{9.801} \\
\bottomrule
\end{tabular}
\caption{\textbf{Ablation on Impact of Each Factor in Reward Function}.
Arrows denote the direction of improvement.
As expected, the model omitting the fuel consumption term is able to freely accelerate without ``worrying'' about fuel efficiency. Despite fuel efficiency and traffic flow being inversely related, our model is able to achieve a middle ground between ablation models for the best of both worlds. More details on this experiment can be found in Section~\ref{sec:results}.}
\label{table:ablation}
\vspace*{-1.5em}
\end{table*}
\subsection{Improving Sample Efficiency with Traffic Gradients}
\label{sec:sample_enhancement}
We can use traffic gradients to enhance the sample efficiency of common on-policy RL algorithms, such as PPO or TRPO. Inspired by the sampling enhancement scheme of \cite{qiao2021efficient}, which applies the scheme to a model-based RL algorithm, here we present a method that is applicable to other general on-policy RL algorithms and show its efficacy in Section~\ref{sec:results}.
To be specific, PPO and TRPO are both based on evaluating the perturbed policy $\widetilde{\pi}$ with the experience from the original policy $\pi$~\cite{schulman2015trust}. This is possible because the expected reward of the policy $\widetilde{\pi}$ can be approximated with the information from the original policy up to the first order. Our method manipulates the collected experience with our gradients so as to maximize its efficiency.
Let us denote a single experience unit as $(s_n, a_n, r_n, s_{n+1}, \frac{\partial r_n}{\partial a_n}, \frac{\partial s_{n+1}}{\partial a_n})$, where $s_n$, $a_n$, and $r_n$ refer to the state, action, and reward at the time step $n$, and $\frac{\partial r_n}{\partial a_n}$ and $\frac{\partial s_{n+1}}{\partial a_n}$ stand for the gradient of the reward and the next state with respect to the action. These gradient terms come from our differentiable traffic simulator. Then we can perturb $(a_n, r_n, s_{n+1})$ with the gradient terms as follows:
\vspace*{-1.5em}
\begin{align*}
\widetilde{a}_n = a_n + \epsilon, \quad
&\widetilde{r}_n = r_n + \epsilon \cdot \frac{\partial r_n}{\partial a_n}, \quad
\widetilde{s}_{n+1} = s_{n+1} + \epsilon \cdot \frac{\partial s_{n+1}}{\partial a_n} \\
&\text{where } \left|\epsilon \cdot \frac{\partial s_{n+1}}{\partial a_n}\right| \le \delta \text{ for some } \delta > 0.
\end{align*}
\vspace*{-1.5em}
Note that the amount of perturbation $\epsilon$ is bounded by a threshold factor $\delta$, as we do not want to perturb $s_{n+1}$ too much. This is because we still want to take advantage of commonly used advantage estimation techniques, such as GAE~\cite{schulman2015high}, which usually consider multiple time steps in a trajectory. With this constraint, we ensure that the perturbed trajectory does not deviate too much from the original one, so that we can still use advantage estimation techniques that consider the whole trajectory.
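The perturbation of a single experience unit can be sketched as follows; the choice of sampling distribution for $\epsilon$ and its scale are illustrative assumptions, and the clipping step rescales $\epsilon$ so that the induced state shift stays within $\delta$:

```python
import numpy as np

def perturb_experience(a, r, s_next, dr_da, ds_da, delta=0.2, rng=None):
    """First-order perturbation of one experience unit (a_n, r_n, s_{n+1})
    using simulator gradients dr/da and ds/da, with the induced state
    shift clipped to the threshold delta."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(scale=0.1, size=np.shape(a))  # perturbation of the action
    # bound the induced state perturbation: |eps * ds/da| <= delta
    shift = eps * ds_da
    scale = min(1.0, delta / (np.abs(shift).max() + 1e-12))
    eps = eps * scale
    return a + eps, r + float(np.sum(eps * dr_da)), s_next + eps * ds_da
```

The perturbed tuple replaces the original one in the experience buffer before the policy update.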
By perturbing the action, reward, and next state in this fashion, we can expect our experience buffer to be filled with seemingly more ``important'' experience than before. Intuitively, when an action would give a higher immediate reward than the alternatives, it is more likely to bring about meaningful results. This intuition does not hold in every case, but in many cases we observe that it does. By updating our policy with this more important experience, we can expect the policy to learn more meaningful lessons from the same amount of experience.
In experiments (Figure~\ref{fig:grad_rl}), we prefix gradient-enhanced algorithms with ``Diff''. Thus, gradient-enhanced TRPO becomes DiffTRPO, and gradient-enhanced PPO becomes DiffPPO.
\section{RELATED WORKS}
\subsection{Autonomous Driving with Traffic Information}
Zhu et al. recently proposed a method for safe, efficient, and comfortable velocity control using RL~\cite{Zhu_Wang_Pu_Hu_Wang_Ke_2020}. Similarly to one of our objectives, they aim to learn acceleration of autonomous vehicles which can exceed the safety and comfort of human-driven vehicles. One major difference is that our work complements existing end-to-end autonomous driving systems with multi-modal sensor data, and our learned acceleration behavior cooperates with control behavior learned through imitation, rather than learning acceleration in a purely traffic-simulation setting. In addition, our objective is to directly optimize on an entire traffic state, not just the objectives for the autonomous vehicle itself. The reward objectives of~\cite{Zhu_Wang_Pu_Hu_Wang_Ke_2020} are also inferred from a partially-observed point of view. Other works have considered learning driving behavior with passenger comfort and safety in mind, but many do not directly involve traffic state information beyond partially-observed settings~\cite{Shen_Zhang_Ouyang_Li_Raksincharoensak_2020, Zhu_Wang_Hu_2020, Li_Yang_Li_Qu_Lyu_Li_2022}. Wegener et al. also present a method for energy-efficient urban driving via RL~\cite{Wegener_Koch_Eisenbarth_Andert_2021} in a simplistic, partially-observed setting purely in traffic simulation; it does not address integration with current works for more complex vehicle control. In short, our method provides a broader approach for learning a policy beneficial to both individual and societal traffic objectives, while being easily integrated into existing state-of-the-art end-to-end driving control methods.
\subsection{Differentiable Microscopic Traffic Simulation}
While differentiable physics simulation has been gaining popularity in recent years, differentiable traffic simulation is under-explored.
In 2021, Andelfinger first introduced the potential of differentiable agent-based traffic simulation, as well as techniques to address discontinuities of control flow~\cite{Andelfinger_2021}. In his work, Andelfinger highlights continuous solutions for discrete or discontinuous operations such as conditional branching, iteration, time-dependent behavior, or stochasticity in forward simulation, ultimately enabling the use of automatic differentiation (autodiff) libraries for applications such as traffic light control. One key difference between our work and ~\cite{Andelfinger_2021} is that our implementation of differentiable simulation accounts for learning agents acting independently from agents following a car-following model, and is compatible with existing learning frameworks. In addition, we optimize traffic-related learning by defining analytical gradients rather than relying on auto-differentiation.
\subsection{Deep Learning with Traffic Simulation}
Deep reinforcement learning has been used to address futuristic and complex problems on control of autonomous vehicles in traffic. One survey on Deep RL for motion planning for autonomous vehicles by Aradi~\cite{9210154} delineates challenges facing the application of DRL to traffic problems, one of which is the long and potentially unsuccessful learning process. This has been addressed in several ways through curriculum learning~\cite{Qiao_Muelling_Dolan_Palanisamy_Mudalige_2018, Bouton_Nakhaei_Fujimura_Kochenderfer_2019, Kaushik_Prasad_Krishna_Ravindran_2018}, adversarial learning~\cite{Ferdowsi_Challita_Saad_Mandayam_2018, Ma_Driggs-Campbell_Kochenderfer_2018}, or model-based action choice. In our work, we address this issue via sample enhancement for on-policy deep reinforcement learning. With forward traffic gradients and modeling of general human behaviors, we can artificially generate ``helpful'' samples during learning with respect to reward.
``FLOW'' by Wu et al.~\cite{wu2017flow} presents a deep reinforcement learning (DRL) benchmarking framework, built on the popular microscopic traffic simulator SUMO~\cite{SUMO2018}. Wu et al. provide motivation for integrating traffic dynamics into autonomous driving objectives with DRL, coining the problem/task as ``mixed autonomy''. Novel objectives for driving include reducing congestion, carbon emissions, and other societal costs; these are all in futuristic anticipation of mixed autonomy traffic. Based on FLOW, Vinitsky et al. published a series of benchmarks highlighting 4 main scenarios regarding traffic light control, bottleneck throughput, optimizing intersection capacity, and controlling merge on-ramp shock waves~\cite{pmlr-v87-vinitsky18a}. We extend the environments from FLOW's DRL framework to be {\em differentiable}, and show benchmark results for enhanced DRL algorithms utilizing traffic flow gradients for optimization and control, while retaining the benefits of FLOW.
\section{Implementation and Results}
\label{sec:results}
We show results for improved traffic flow via \href{https://leaderboard.carla.org/scenarios/}{CARLA Leaderboard test scenarios}, which describe 10 categories of scenarios based on the National Highway Traffic Safety Administration (NHTSA) pre-crash typology. For experiments with TransFuser in Table~\ref{table:benchmark}, we use the Longest6 benchmark presented in~\cite{Chitta2022PAMI}.
\subsection{Integration with SOTA and Improvements}
\label{subsec:sota}
We also implement our method on a recent state of the art method for imitation learning to demonstrate its generalizability and benefits on higher-performing benchmarks. We implement our method on TransFuser~\cite{Chitta2022PAMI, Prakash2021CVPR} and show its driving performance on individual objectives. For these experiments, we re-collect the TransFuser dataset by using our co-simulation wrapper to record corresponding traffic information for each sample. We then train our method on our dataset via transfer learning, where we use pre-trained weights provided by TransFuser authors. Each TransFuser model provides driving control prediction via an ensemble of three trained models from different initialization seeds. Our model and the baseline are initialized with the same three initialization weights.
We evaluate the learned driving policy for each model on CARLA driving benchmark metrics, which quantify route completion, infractions, collisions, timeouts, among others.
We observe from the top half of the results in Table~\ref{table:benchmark} that our method implemented on Learning By Cheating (LBC)~\cite{chen2019lbc} is able to improve Route Completion (\%) of scenario route driven, by {\bf over 2x}. In addition, we achieve lower overall collisions with pedestrians, other vehicles, and other objects. Agent timeouts are also improved by a factor of 6. Ultimately, we improve the overall Driving Score by {\bf over 2x} as well, where Driving Score is defined as the product between the Route Completion and an infraction penalty.
We also evaluate our method on the most recent state-of-the-art model from 2022, TransFuser~\cite{Prakash2021CVPR}. While this method was originally published in 2021, we use the improved 2022 version of the method. Since this method achieves significantly better performance than LBC, we observe the scaling effects of our method on a driving model with higher benchmark metrics. Despite improving Route Completion and reducing collisions with pedestrians and other vehicles, we observe a slightly worse Driving Score. This is most likely due to our model incurring a marginally higher rate of traffic infractions. Since our method increases overall flow, a slightly higher rate of infractions is likely, as a vehicle may be traveling at a higher velocity when encountering red lights or stop signs. However, this slight degradation is minimal and does not actually result in more collisions overall.
\subsection{Ablation: Effect of Traffic Gradients on Performance}
We show the impact of traffic gradients and perturbation threshold values $\delta$ during training for on-policy algorithm TRPO~\cite{Schulman_Levine_Abbeel_Jordan_Moritz_07} in Figure~\ref{fig:grad_rl}. In this experiment, we compare the training reward curve of TRPO versus gradient-enhanced DiffTRPO with different perturbation threshold $\delta$ values.
We do not evaluate this experiment on CARLA benchmark scenarios, as not every scenario involves a difficult traffic-related task. If the task is not difficult enough, the benefits of using traffic gradients are less pronounced in learning. To demonstrate improvement on difficult traffic tasks, we use the Figure-Eight benchmark from~\cite{wu2017flow, pmlr-v87-vinitsky18a}.
In this environment, the ego vehicle is tasked with controlling dense traffic so as to maximize overall traffic flow up to the maximum allowed speed, i.e. {\em congestion and shockwaves are undesirable in the optimal policy}.
While all DiffTRPO iterations visibly perform better than the baseline TRPO, threshold values of $\delta=0.2$ and $\delta=0.4$ converge the fastest to the highest reward.
Each training curve is averaged over ten runs to account for stochasticity.
More results for PPO can be found in supplemental materials. In short, a threshold value of $\delta=0.1$ achieves the best results for DiffPPO.
\subsection{Ablation: Effect of each reward term}
We study the effect of each of the three terms of the reward function. In this ablation experiment, we analyze three scenarios: (1) Without velocity term, (2) Without fuel consumption term, (3) Without jerkiness constraint. We evaluate them on societal metrics of traffic flow, fuel consumption, and driving jerkiness, as well as individual driving metrics from CARLA driving benchmark similar to Section~\ref{subsec:sota}. Results for this experiment can be found in Table~\ref{table:ablation}.
Overall, we find that our model expectedly achieves a middle ground between the ablated models. In our combined reward function, we optimize for both fuel efficiency and traffic flow. However, these terms are inversely related in the real world; as overall traffic flow increases, more fuel is used in order to produce a higher velocity. Thus, we observe from the results that the model omitting the fuel term is able to achieve the highest overall traffic flow, as fuel efficiency does not constrain optimization of traffic flow. In addition, omitting the jerk constraint results in fewer infractions. For scenarios involving yielding to traffic laws, such as red lights and stop signs, restricting jerkiness may harm infraction scores as vehicles cannot suddenly decelerate to a complete stop. Overall, we observe that our method achieves the ``best of both worlds'' across {\em all} other metrics, while the ablated models achieve the best results solely on individual metrics.
The goal of MBRL is to solve a task through learning a model $f$ of the true dynamics $f_\text{real}$ of the system that is subsequently used to solve an optimal control problem. The dynamics are described through $x_{t+1} = f(x_t,u_t)$ where $x_t$ and $u_t$ are the state and action of the current time step, and $x_{t+1}$ the state at the next time step. $f$ represents the learned model of the dynamics. MBRL seeks to find a policy $u_t = \pi(x_t)$ that minimizes a cost $\mathcal{J}(x_t,u_t)$ describing the desired behavior. Policy optimization can be performed in various ways such as trajectory sampling approaches as summarized and evaluated in \citep{chua2018deep}, random shooting methods, where trajectories are randomly chosen and evaluated with the learned model, or iterative LQG approaches, as in \citep{levine2013guided}. Model learning also can be tackled with various methods. \citep{levine2014learning} proposes learning linear models of the forward dynamics. In \citep{chua2018deep} the dynamics are learned with an ensemble of neural networks. In general, the learned model of dynamics can be deterministic as in \citep{levine2014learning} or probabilistic as in \citep{deisenroth2011pilco,chua2018deep}.
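As a concrete illustration of the planning step described above, a random-shooting optimizer evaluates sampled action sequences under the learned model $f$ and executes the best first action. This is a minimal sketch under illustrative assumptions (horizon, sample count, and action bounds are placeholders), not a specific cited implementation:

```python
import numpy as np

def random_shooting_mpc(f, cost, x0, horizon=10, n_samples=256, u_dim=1, u_max=1.0):
    """Return the first action of the lowest-cost random action sequence,
    rolled out under the learned dynamics model f(x, u) -> x_next."""
    best_cost, best_u0 = np.inf, np.zeros(u_dim)
    for _ in range(n_samples):
        u_seq = np.random.uniform(-u_max, u_max, size=(horizon, u_dim))
        x, total = x0, 0.0
        for u in u_seq:
            total += cost(x, u)
            x = f(x, u)  # roll out the learned model, not the real system
        if total < best_cost:
            best_cost, best_u0 = total, u_seq[0]
    return best_u0
```

In practice the process repeats every time step (receding-horizon control), replanning from the newly observed state.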
In MBRL, the learned model is used to simulate the robot behaviour when optimizing a trajectory or control policy. The learned model and the optimizer are task independent; this independence promises sample efficiency and generalization capabilities, as an already learned model can be reused for new tasks. As a side effect, however, the learned model's quality can drastically affect the computed solution, as pointed out in \citep{schaal1997learning,atkeson1997comparison,deisenroth2010efficient}, since the policy is optimized given the current learned model and not by interacting with the robot. This effect is called model bias \citep{deisenroth2010efficient} and can lead to a policy with drastically lower performance on the real robot. We argue that exploration can alleviate this model bias. Resolving model uncertainty while optimizing for a task can encourage visiting states which resolve ambiguities in the learned model and therefore lead to both better models and control policies.
\subsection{Intrinsic motivation for RL}
The concept of curiosity has also been explored within the reinforcement learning literature from various angles. For example, a first attempt towards intrinsically motivated agents consisted in rewarding agents to minimize prediction errors of sensory events \citep{Barto2004, Singh2004, Singh2010}. This initial work was designed for low-dimensional and discrete state-and-action spaces. Recently, curiosity as a means to better explore was also investigated for high-dimensional continuous state spaces \citep{Bellemare2016, Pathak2017}. Most of this work, including recent efforts towards curiosity driven robot learning \citep{tanneberg2019intrinsic, laversanne2018curiosity}, has defined curiosity as a function of model prediction error and within a model-free reinforcement learning framework. In MBRL, \citep{shyammax} recently proposed a measure of disagreement as exploration signal. \citep{levine2016end} propose a maximum entropy exploration behaviour. Other algorithms which take uncertainty into account have been presented as well \citep{deisenroth2011pilco,williams2017information,chua2018deep,boedecker2014approximate}. They differ in their choice of policy optimization, dynamics model representation and how they incorporate uncertainty. While \citep{deisenroth2011pilco, chua2018deep} utilize model uncertainty to generate trajectory distributions, the uncertainty does not play an explicit role in the cost. Thus, these approaches do not explicitly optimize actions that resolve uncertainty in the current model of the dynamics, which is in contrast to the approach we propose in this paper.
\subsection{Risk-sensitive stochastic optimal control}\label{sec.risk_sensitive_optimal_control}
Risk-sensitive optimal control has a long history \citep{Jacobson1973, WHITTLE:1981cd}.
The central idea is to not only minimize the expectation
of the performance objective under the stochastic dynamics but to also take into account higher-order moments of the cost distribution.
The objective function takes the form of an exponential transformation of the performance criteria $J=\min_{\pi}\mathbb{E}\left\{\exp\left[\sigma \mathcal{J}\left(\pi\right)\right]\right\}$ \citep{Jacobson1973}.
Here, $\mathcal{J}(\pi)$ is the performance index, which is a random variable, and a functional of the policy $\pi$. $\mathbb{E}$ is the expected value of $\mathcal{J}$ over stochastic trajectories induced by the policy $\pi$.
$\sigma \in \mathbb{R}$ accounts for the sensitivity of the cost to higher order moments (variance, skewness, etc.).
Notably, from \citep{Farshidian}, the cost is $\frac{1}{\sigma} \log(J) = \mathbb{E}(\mathcal{J}^*)+ \frac{\sigma}{2} \mathrm{var}(\mathcal{J}^*) + \frac{\sigma^2}{6} \mathrm{sk}(\mathcal{J}^*) + \cdots$,
where $\mathrm{var}$ and $\mathrm{sk}$ stand for variance and skewness and $\mathcal{J}^*$ is the optimal task cost.
When $\sigma>0$ the optimal control is risk-averse, favoring low costs with low variance, whereas
when $\sigma<0$ the optimal control is risk-seeking, favoring low costs with high variance.
Setting $\sigma=0$ recovers the standard, risk-neutral, optimal control problem.
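As a concrete numerical illustration (our own sketch, not part of the cited works), the following snippet evaluates the risk-sensitive objective $\frac{1}{\sigma}\log\mathbb{E}\left[\exp(\sigma\mathcal{J})\right]$ on two sampled cost distributions with equal mean but different variance; $\sigma<0$ ranks the high-variance costs as cheaper, while $\sigma>0$ prefers the low-variance ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two cost distributions with the same mean but different variance.
low_var = rng.normal(loc=1.0, scale=0.1, size=100_000)
high_var = rng.normal(loc=1.0, scale=1.0, size=100_000)

def risk_sensitive_cost(costs, sigma):
    """Sample estimate of (1/sigma) * log E[exp(sigma * J)]."""
    return np.log(np.mean(np.exp(sigma * costs))) / sigma

# Risk-seeking (sigma < 0): the high-variance distribution looks cheaper.
assert risk_sensitive_cost(high_var, -0.5) < risk_sensitive_cost(low_var, -0.5)
# Risk-averse (sigma > 0): the low-variance distribution looks cheaper.
assert risk_sensitive_cost(low_var, 0.5) < risk_sensitive_cost(high_var, 0.5)
```

This matches the series expansion above: the leading correction to the mean is $\frac{\sigma}{2}\mathrm{var}(\mathcal{J})$, so the sign of $\sigma$ decides whether variance is penalized or rewarded.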
Jacobson \citep{Jacobson1973} originally demonstrated that for linear dynamics and quadratic costs the optimal control could be computed as the solution of a Riccati equation.
Leveraging this result, \citep{Farshidian} recently proposed a risk-sensitive extension of iLQR and \citep{Ponton} further extended the approach to explicitly incorporate measurement noise.
\section{MBRL via Curious iLQR}\label{section.curious-ilqr}
We present our approach to incorporate curious behaviour into a robot's learning control loop. We are interested in minimizing a performance objective $\mathcal{J}$ to achieve a desired robot behavior and approximate the true dynamics of the system with a discrete-time dynamical system
\begin{equation}
\boldsymbol{\mathrm{x}}_{t+1}=\boldsymbol{\mathrm{x}}_t+\boldsymbol{\mathrm{f}}\left( \boldsymbol{\mathrm{x}}_t,\boldsymbol{\mathrm{u}}_t \right) \Delta t
\end{equation}
where $\BR{x}_t$ denotes the state of the system at time step $t$ and $\boldsymbol{\mathrm{f}}$ represents the unknown model of the dynamics of the system and needs to be learned to achieve the desired task. The hypothesis we seek to confirm is that, by trying to explore uncertain parts of the model, our MBRL algorithm can learn a good dynamics model more quickly and find behaviors with higher performance.
Our algorithm learns a probabilistic model of the system dynamics while concurrently optimizing a desired cost objective (Figure \ref{fig:overview}).
It combines i) a risk-seeking iLQR algorithm and ii) a probabilistic model of the dynamics.
We describe the algorithm in the following.
In particular, we show how to incorporate model uncertainty in risk-sensitive optimal control.
Algorithm \ref{algo:mbrl_loop} shows the complete algorithm.
\begin{minipage}{0.41\textwidth}
\vspace{-0.4cm}
\begin{algorithm}[H]
\small{
\begin{algorithmic}[1]
\STATE{$\mathcal{D} \gets \text{motor babbling data}$}
\STATE{$\text{train model } f \text{ on } \mathcal{D}$}
\WHILE{$i < \text{iter}$}
\STATE{$\pi \gets \text{optimize policy via Alg.~\ref{algo:curious-iLQR}}$}
\STATE{$D_\text{new} \gets \text{rollout $\pi$ on system}$}
\STATE{$\mathcal{D} =\mathcal{D} \cup D_\text{new}$}
\STATE{$\text{train model } f \text{ on } \mathcal{D}$}
\ENDWHILE
\end{algorithmic}
\caption{MBRL Algorithm}
\label{algo:mbrl_loop}
}
\end{algorithm}
\vspace{-0.5cm}
\begin{algorithm}[H]
\small{
\begin{algorithmic}[1]
\STATE{$x^\text{new}_0 \gets x_0$}
\WHILE{$t < T$}
\STATE{$\tau^\text{new}_t \gets \tau_t + \alpha k_t + K_t (x_t - x^\text{new}_t)$}
\STATE{$x^\text{new}_{t+1} \gets f(x^\text{new}_t, \tau^\text{new}_t)$}
\ENDWHILE
\STATE{return $\tau^\text{new}, x^\text{new}$}
\end{algorithmic}
\caption{\small{simulate-policy($x,\tau, k, K,\alpha)$}}
}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}{0.56\textwidth}
\vspace{-0.4cm}
\begin{algorithm}[H]
\small{
\begin{algorithmic}[1]
\STATE{$\tau \gets \text{Initial random torque trajectory}$}
\STATE{$x^{*} \gets \text{unroll $\tau$ using $f$}$}
\STATE{$\mathrm{\mathbb{A}} \gets \text{Line search parameters, [0,\dots,1]}$}
\STATE{$J^* \gets \text{Optimal iLQR cost so far}$}
\WHILE{$i<\text{opt iter}$}
\STATE{$k,K \gets \text{backward pass, see \ref{sec.curious_recursions}}$}
\FOR{ $\alpha \in \mathrm{\mathbb{A}}$}
\STATE{$\tau^{new}, x^{new} \gets \text{simulate-policy}(x, \tau, k, K, \alpha)$}
\STATE{$J_{new} \gets \text{Compute cost of $\tau^\text{new}, x^\text{new}$}$}
\IF{$J_{new}<J^*$}
\STATE{$J^*, \tau, x^{*} \gets J_{new}, \tau^\text{new}, x^\text{new}$}
\ENDIF
\IF{converged}
\STATE{return $\pi(x) = \tau + K (x - x^{*})$}
\ENDIF
\ENDFOR
\ENDWHILE
\end{algorithmic}
\caption{curious-iLQR}
\label{algo:curious-iLQR}
}
\end{algorithm}
\end{minipage}
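The outer loop of Algorithm~\ref{algo:mbrl_loop} can be sketched schematically in Python; the three callables are placeholders for the model-training, rollout, and curious-iLQR routines (this is our illustrative skeleton, not the authors' implementation):

```python
def mbrl_loop(system_rollout, train_model, optimize_policy,
              babbling_data, n_iters=5):
    """Schematic MBRL loop: alternate between fitting the dynamics model
    on all data seen so far and optimizing a policy under that model."""
    data = list(babbling_data)           # D <- motor babbling data
    model = train_model(data)            # initial model fit
    policy = None
    for _ in range(n_iters):
        policy = optimize_policy(model)      # e.g. curious-iLQR
        new_data = system_rollout(policy)    # execute on the system
        data.extend(new_data)                # D <- D u D_new
        model = train_model(data)            # refit on the growing dataset
    return model, policy
```

Note that the model is always retrained on the full dataset $\mathcal{D}$, which is why (for GP models) the per-iteration training cost grows with the amount of collected data.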
\subsection{Risk-sensitive iLQR}\label{sec.risk_sensitive}
Consider the following general nonlinear stochastic difference equation
\begin{equation}\label{eq.dynamics}
\boldsymbol{\mathrm{x}}_{t+1}=\boldsymbol{\mathrm{x}}_t+\boldsymbol{\mathrm{f}}\left( \boldsymbol{\mathrm{x}}_t,\boldsymbol{\mathrm{u}}_t \right) \Delta t +\boldsymbol{\mathrm{g}}\left( \boldsymbol{\mathrm{x}}_t,\boldsymbol{\mathrm{u}}_t\right) \Delta \omega
\end{equation}
where $\boldsymbol{\mathrm{g}}$ maps a Brownian motion $\Delta \omega$, with zero mean and covariance $(\boldsymbol{\Sigma} \cdot \Delta t)$, to system states. $\Delta \omega$ and the nonlinear map $\boldsymbol{\mathrm{g}}$
typically model an unknown physical disturbance, while the model $\boldsymbol{\mathrm{f}}$ of the dynamics is assumed known.
When considering the exponentiated performance criteria $J=\min_{\pi}\mathbb{E}\left\{\exp\left[\sigma \mathcal{J}\left(\pi\right)\right]\right\}$ (see Section~\ref{section.background} for more details), it has been shown that iLQR \citep{Tassa2014} can be extended to risk-sensitive stochastic nonlinear optimal control problems \citep{Farshidian}. The algorithm begins with a nominal state and control input trajectory $\boldsymbol{\mathrm{x^{n}}}$ and $\boldsymbol{\mathrm{u^{n}}}$.
The dynamics and cost are approximated to first and second order respectively along the nominal trajectories $\boldsymbol{\mathrm{u^{n}_{t}}}$, $\boldsymbol{\mathrm{x^{n}_{t}}}$ in terms of state and control deviations
$\delta \boldsymbol{\mathrm{x_{t}}}=\boldsymbol{\mathrm{x_{t}}}-\boldsymbol{\mathrm{x^{n}_{t}}}$, $\delta \boldsymbol{\mathrm{u_{t}}}=\boldsymbol{\mathrm{u_{t}}}-\boldsymbol{\mathrm{u^{n}_{t}}}$.
Given a quadratic control cost, the locally optimal control law will be of the form $\delta\boldsymbol{\mathrm{u_t}}=\boldsymbol{\mathrm{k_t}}+\boldsymbol{\mathrm{K_t}}\delta\boldsymbol{\mathrm{x_t}}$. The underlying optimal control problem can be solved by using the Bellman equation
\begin{equation}\label{eq.recursions}
\Psi_\sigma(\delta \boldsymbol{\mathrm{x_t}},\BR{t}) = \min_{\boldsymbol{u}}\{ l(\boldsymbol{\mathrm{x}},\boldsymbol{\mathrm{u}},\BR{t}) + \mathbb{E}[ \Psi_\sigma(\delta \boldsymbol{\mathrm{x_{t+1}}},\BR{t+1})] \}
\end{equation}
where $l$ is the quadratic cost, and by making the following quadratic approximation of the value function $\Psi(\delta \boldsymbol{\mathrm{x_t}},\BR{t}) = \frac{1}{2}\delta \boldsymbol{\mathrm{x_t^T}}\boldsymbol{\mathrm{S_t}}\delta \boldsymbol{\mathrm{x_t}} + \delta \boldsymbol{\mathrm{x_t^T}}\boldsymbol{\mathrm{s_t}} + s_t$
where $\boldsymbol{\mathrm{S_t}} = \nabla_{\delta x \delta x} \Psi$ and $\boldsymbol{\mathrm{s_t}} = \nabla_{\delta x}\Psi - \boldsymbol{\mathrm{S_t}}\delta \boldsymbol{\mathrm{x_t}}$ are functions of the partial derivatives of the value function.
Using the (time-varying) linear dynamics, the quadratic cost and the quadratic approximation of $\Psi$, and solving for the optimal control, we get
\begin{equation}\delta\boldsymbol{\mathrm{u_t}}=\boldsymbol{\mathrm{k_t}}+\boldsymbol{\mathrm{K_t}}\delta\boldsymbol{\mathrm{x_t}}, \ \ \boldsymbol{\mathrm{k_t}} = -\boldsymbol{\mathrm{H_{t}^{-1}g_t}}, \ \ \textrm{and}\ \ \boldsymbol{\mathrm{K_t}} = -\boldsymbol{\mathrm{H_{t}^{-1}G_t}}
\end{equation}
where $\boldsymbol{\mathrm{H_t}}$, $\boldsymbol{\mathrm{g_t}}$, $\boldsymbol{\mathrm{G_t}}$ are given by
\begin{equation}\label{equation.recursion}
\begin{split}
& \mathbf{H_t} = \mathbf{R_t} + \mathbf{B_t^T}\mathbf{S_t}\mathbf{B_t} + \sigma \mathbf{B_t^T S_t^T C\Sigma_{t+1} C^T S_tB_t}\\
& \mathbf{g_t} = \mathbf{r_t} + \mathbf{B_t^T s_t} + \sigma \mathbf{B_t^T S_t^T C\Sigma_{t+1} C^T s_t} \\
& \mathbf{G_t} = \mathbf{P_t^T} + \mathbf{B_t^T S_t A_t} + \sigma \mathbf{B_t^T S_t^T C\Sigma_{t+1} C^T S_t A_t}
\end{split}
\end{equation}
where
$ \boldsymbol{\mathrm{A_t}} = \Delta t\frac{\partial\boldsymbol{\mathrm{f}}}{\partial\boldsymbol{\mathrm{x_t}}}$, $\boldsymbol{\mathrm{B_t}} = \Delta t\frac{\partial\boldsymbol{\mathrm{f}}}{\partial\boldsymbol{\mathrm{u_t}}}$ and $\boldsymbol{\mathrm{q_t}}$, $\boldsymbol{\mathrm{r_t}}$, $\boldsymbol{\mathrm{Q_t}}$, $\boldsymbol{\mathrm{R_t}}$ and $\boldsymbol{\mathrm{P_t}}$ are the coefficients of the Taylor expansion of the cost function around the nominal trajectory. The corresponding backward recursions are
\begin{equation}\label{equation.value_recursion}
\mathbf{s_{t}} = \mathbf{q_t} + \mathbf{A_t^T s_{t+1}} + \mathbf{G_t^T k_t} + \mathbf{K_t^T H_t k_t} + \sigma \mathbf{A_t^T S_{t+1}^T C\Sigma_{t+1} C^T s_{t+1} }
\vspace{-0.4cm}
\end{equation}
\begin{equation}\label{equation.value_recursion2}
\mathbf{S_{t}} = \mathbf{Q_t} + \mathbf{A_t^T S_{t+1} A_t} + \mathbf{K_t^T H_t K_t} + \mathbf{G_t^T K_t} + \mathbf{K_t^T G_t} + \sigma \mathbf{A_t^T S_{t+1}^T C\Sigma_{t+1} C^T S_{t+1} A_t}
\end{equation}
We note that this Riccati recursion is different from the usual iLQR recursion \citep{Tassa2014}
due to the presence of the covariance $\Sigma$: the locally optimal control law explicitly depends on the noise uncertainty.
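For reference, one step of this backward recursion, Equations~\eqref{equation.recursion}--\eqref{equation.value_recursion2}, can be transcribed into NumPy as follows. This is our schematic sketch (with $\mathbf{C}$ taken as the identity), not the authors' implementation:

```python
import numpy as np

def risk_sensitive_backward_step(A, B, Q, R, P, q, r,
                                 S_next, s_next, Sigma_next, sigma):
    """One backward step of risk-sensitive iLQR (C assumed identity).

    Returns the feedforward term k, feedback gains K, and the propagated
    quadratic value-function terms S, s.
    """
    M = sigma * S_next.T @ Sigma_next          # recurring sigma * S^T Sigma term

    H = R + B.T @ S_next @ B + B.T @ M @ S_next @ B
    g = r + B.T @ s_next + B.T @ M @ s_next
    G = P.T + B.T @ S_next @ A + B.T @ M @ S_next @ A

    k = -np.linalg.solve(H, g)                 # feedforward term
    K = -np.linalg.solve(H, G)                 # feedback gains

    s = q + A.T @ s_next + G.T @ k + K.T @ H @ k + A.T @ M @ s_next
    S = (Q + A.T @ S_next @ A + K.T @ H @ K + G.T @ K + K.T @ G
         + A.T @ M @ S_next @ A)
    return k, K, S, s
```

Setting `sigma = 0` makes `M` vanish and the step reduces to the standard risk-neutral iLQR recursion.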
\subsection{Curious iLQR: seeking out uncertainties}\label{sec.curious_recursions}
We use Gaussian Process (GP) regression to learn a probabilistic model of the dynamics in order to include the predictive variance
from the model into the risk-sensitive iLQR algorithm. This predictive variance will then capture both model as well as measurement uncertainty.
Specifically, we set $\BR{x_t} = [\boldsymbol{\mathrm{\theta_t}},\dot{\boldsymbol{\mathrm{\theta_t}}}]$ where $\boldsymbol{\mathrm{\theta_t}}$ and $\dot{\boldsymbol{\mathrm{\theta_t}}}$ are the joint position and velocity vectors respectively, and let $\BR{u_t}$ denote the vector of commanded torques. After each system rollout, we obtain a new set of tuples with states and actions $(\BR{x_t},\BR{u_t})$ as inputs and the joint accelerations at the next time step, $\BR{\ddot{\theta}_{t+1}}$, as outputs. These are added to our dataset $\mathcal{D}$, on which we re-train the probabilistic dynamics model (see Algorithm~\ref{algo:mbrl_loop}). Once trained, the model produces a one-step prediction of the joint accelerations
of the robot as a probability distribution of the form
\begin{equation}\label{eq:prob_dynamics}
p(\BR{\ddot{\theta}_{t+1}}|\BR{x_{t}},\BR{u_{t}}) = \mathcal{N}( \boldsymbol{\mathrm{\ddot{\theta}_{t+1}}} | \boldsymbol{\mathrm{h}}\left( \boldsymbol{\mathrm{x_t}},\boldsymbol{\mathrm{u_t}} \right) \Delta t, \boldsymbol{\Sigma_{t+1}})
\end{equation}
where $\BR{h}$ is the mean vector and $\BR{\Sigma_{t+1}}$ the covariance matrix of the predictive distribution evaluated at $(\BR{x_t, u_t})$.
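This model-learning step (fit one independent GP per output dimension, then query mean and variance at a state-action pair) can be sketched as follows. The use of scikit-learn and the RBF-plus-white-noise kernel are our assumptions for illustration; the paper's experiments use GPy:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_dynamics_gps(X, Y):
    """Fit one independent GP per output dimension.

    X: (N, dim_x + dim_u) array of state-action pairs.
    Y: (N, n_joints) array of next-step joint accelerations.
    """
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            .fit(X, Y[:, d]) for d in range(Y.shape[1])]

def predict_acceleration(gps, xu):
    """Predictive mean and per-dimension variance at a single query point."""
    stats = [gp.predict(xu.reshape(1, -1), return_std=True) for gp in gps]
    mean = np.array([m[0] for m, _ in stats])
    var = np.array([s[0] ** 2 for _, s in stats])
    return mean, var
```

The per-dimension variances form the diagonal of $\boldsymbol{\Sigma_{t+1}}$, which is exactly the quantity fed into the risk-sensitive recursion.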
The output is the acceleration at the next time step $\boldsymbol{\mathrm{\ddot{\theta}_{t+1}}}$, which is numerically integrated to the velocity $\boldsymbol{\mathrm{\dot{\theta}_{t+1}}} = \boldsymbol{\mathrm{\ddot{\theta}_{t+1}}}\Delta t + \boldsymbol{\mathrm{\dot{\theta}_{t}}}$
and the position $\boldsymbol{\mathrm{\theta_{t+1}}} = \boldsymbol{\mathrm{\dot{\theta}_{t+1}}}\Delta t + \boldsymbol{\mathrm{\theta_t}}$. This results in a Gaussian predictive distribution of the system dynamics $\mathbf{f}$
\begin{equation}
\boldsymbol{\mathrm{x_{t+1}}} \sim \mathcal{N}( \boldsymbol{\mathrm{x_{t+1}}} | \boldsymbol{\mathrm{x_{t}}} + \boldsymbol{\mathrm{h}}\left( \boldsymbol{\mathrm{x_t}},\boldsymbol{\mathrm{u_t}} \right) \Delta t , \boldsymbol{\Sigma_{t+1}})
\end{equation}
It is the covariance matrix $\Sigma_{t+1}$ of this distribution that is incorporated into the Riccati equations from above. Specifically, during each MBRL iteration we optimize a new local feedback policy under the current dynamics model $\mathbf{f}$, via Algorithm~\ref{algo:curious-iLQR}. Each outer loop of the optimization re-linearizes $\mathbf{f}$ with respect to the current nominal trajectories $\boldsymbol{\mathrm{u^{n}_{t}}}$, $\boldsymbol{\mathrm{x^{n}_{t}}}$ in the backward pass:
\begin{equation}\label{eq.linarize_dyn_cur_ilqr}
\delta \boldsymbol{\mathrm{x_{t+1}}} = \boldsymbol{\mathrm{A_{t}}}\delta \boldsymbol{\mathrm{x_{t}}} + \boldsymbol{\mathrm{B_t}}\delta \boldsymbol{\mathrm{u_{t}}} + \boldsymbol{\mathrm{C_{t}}}\omega_{t}
\end{equation}
with $\boldsymbol{\mathrm{A_t}}=\Delta t\frac{\partial\boldsymbol{\mathrm{f}}}{\partial\boldsymbol{\mathrm{{x_t}^n}}}$, $\boldsymbol{\mathrm{B_t}}=\Delta t\frac{\partial\boldsymbol{\mathrm{f}}}{\partial\boldsymbol{\mathrm{{u_t}^n}}}$ and $
\omega_t \sim \mathcal{N}(\omega_t|0,\BR{\Sigma_{t+1}})$,
where $\BR{A_t}$ and $\BR{B_t}$ are the analytical gradients of the probabilistic model prediction at each time step and $\BR{C_t}$ weights how the uncertainty is propagated through the system. We utilize the Riccati equations from Section~\ref{sec.risk_sensitive}, Equations~\eqref{equation.recursion} and \eqref{equation.value_recursion}, to optimize a new local feedback policy that utilizes the model's predictive covariance $\Sigma_{t+1}$.
During the shooting phase of the algorithm, we integrate the nonlinear model from the GP and,
to guarantee convergence to lower costs, we use a line search approach during the optimization.
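This shooting pass, i.e. the simulate-policy routine of Algorithm 2 combined with the Euler integration of the GP mean acceleration described above, can be sketched as follows (variable names and the single-query model interface are our assumptions):

```python
import numpy as np

def shoot(x_nom, tau_nom, k, K, alpha, predict_accel, dt):
    """Forward (shooting) pass: apply the updated feedback policy and
    Euler-integrate the GP mean acceleration into the next state."""
    n_q = len(x_nom[0]) // 2                    # x = [theta, theta_dot]
    x = np.array(x_nom[0], dtype=float)
    xs, taus = [x], []
    for t in range(len(tau_nom)):
        # updated control: nominal + alpha*feedforward + feedback on deviation
        u = tau_nom[t] + alpha * k[t] + K[t] @ (x_nom[t] - x)
        theta, theta_dot = x[:n_q], x[n_q:]
        theta_ddot = predict_accel(x, u)        # GP mean acceleration
        theta_dot = theta_ddot * dt + theta_dot # velocity update
        theta = theta_dot * dt + theta          # position update
        x = np.concatenate([theta, theta_dot])
        taus.append(u)
        xs.append(x)
    return np.array(taus), np.array(xs)
```

The line search then scans `alpha` over $[0,\dots,1]$ and keeps the rollout with the lowest cost, as in Algorithm~\ref{algo:curious-iLQR}.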
We leverage the risk-seeking capabilities of the optimization by setting $\sigma<0$.
The algorithm then favors costs with higher variance, which corresponds to exploring regions of the state space with higher uncertainty in the dynamics. As a result, the agent is encouraged to select actions that explore uncertain regions of the dynamics model while still trying to reduce the task-specific error. With $\sigma=0$ the agent ignores any uncertainty in the environment and therefore does not explore; this is equivalent to standard iLQR optimization, which ignores higher-order statistics of the cost function.
An overview of \textsl{curious iLQR } is given in Algorithm \ref{algo:curious-iLQR}.
\section{Experiments on high-dimensional problems}\label{section.experiments}
Finally, the goal of this work is to learn motor skills on a torque-controlled manipulator. Our experimental platform is the Sawyer robot from Rethink Robotics \citep{sawyer}, a 7-degree-of-freedom manipulator. We start with experiments performed in the PyBullet physics simulator \citep{pybullet}. In the next section, we present results on the Sawyer robot arm hardware. Previous work such as \citep{Farshidian} and \citep{Ponton}, which use risk-sensitive control variations of iLQR, primarily deals with simplified, low-dimensional problems. Our experiments are conducted on a 7-degree-of-freedom robot, and the higher-dimensional system adds some complexity to the approach: the gradients of the value function in Section~\ref{sec.risk_sensitive} (Equations \eqref{equation.value_recursion}, \eqref{equation.value_recursion2}) tend to suffer from numerical ill-conditioning in high dimensions. We account for this issue with Tikhonov regularization: before inversion for calculating the optimal control, we add a diagonal matrix to $\boldsymbol{\mathrm{H_t}}$ from Equation \eqref{equation.recursion}. The regularization parameter and the line search parameter $\alpha$ are adapted following the Levenberg-Marquardt heuristic \citep{Tassa2014}.
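A minimal sketch of this regularized inversion (our illustration; the factor-of-ten update is a simplified stand-in for the full Levenberg-Marquardt schedule of \citep{Tassa2014}):

```python
import numpy as np

def regularized_gains(H, g, G, mu=1e-6, mu_max=1e6):
    """Compute iLQR gains from a possibly ill-conditioned H by adding
    mu*I (Tikhonov regularization) until H is safely positive definite."""
    while mu < mu_max:
        H_reg = H + mu * np.eye(H.shape[0])
        try:
            np.linalg.cholesky(H_reg)   # raises if not positive definite
        except np.linalg.LinAlgError:
            mu *= 10.0                  # increase regularization and retry
            continue
        k = -np.linalg.solve(H_reg, g)  # feedforward term
        K = -np.linalg.solve(H_reg, G)  # feedback gains
        return k, K, mu
    raise RuntimeError("failed to regularize H")
```

Larger `mu` biases the update towards a (smaller) gradient-descent-like step, which is why it also interacts with the line search parameter $\alpha$.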
The goal of these experiments is to reach a desired target joint configuration $\theta$. We show results for dynamics learned with GP regression (GPR), as well as initial results on ensemble of probabilistic neural networks (EPNN) following the approach presented in \citep{lakshminarayanan2017simple}. When using GPs, a separate GP is trained for each output dimension.
We perform two sets of experiments, both in simulation and on hardware, to analyze the effect of using curiosity. Specifically, we believe that curiosity helps to find better solutions faster, because it guides exploration within the MBRL loop. Intuitively, curiosity helps to observe more diverse data samples during each rollout such that the model learns more about the dynamics.
We start with evaluation in simulation. Throughout all of the simulation experiments the optimization horizon was 150 time steps long at a sampling rate of $240$ Hz. Motor babbling was performed at the beginning for 0.5$s$ by commanding random torques in each joint.
\subsection{Reaching task from scratch}\label{exp.1}
During the first set of experiments, we compare the performance when learning to reach a given target configuration from scratch. We compare our MBRL loop, as before, when using our \textsl{curious iLQR } optimization, regular iLQR, a random exploration controller and a maximum entropy exploration behaviour as described previously. PILCO was not able to learn the reaching movement on the 7-DoF manipulator, so we exclude its results from the analysis. We perform this experiment for each kind of controller 5 times. Each run slightly perturbs the initial joint positions and uses a random initial torque trajectory for the optimizer. For a given target joint configuration, we perform $5$ iterations of optimizing the trajectory, running it on the system, and updating the dynamics model. We perform this experiment for $3$ different target joint configurations. The following results are averaged across the $5\times3$ runs ($5$ runs per target).
\begin{figure}[ht]
\vspace{-0.7cm}
\centering
\includegraphics[width=0.8\textwidth]{Figures/images_corl/sawyer_experiments_with_baselines_with_EPNN.png}
\caption{\small{Distance in end effector space for EPNN vs. GP in m (1). Distance in end effector space in m (2), iLQR rollout cost (3) and model prediction error (4) with the GP model, compared to our baselines.}}
\label{fig:model_performance}
\vspace{-0.66cm}
\end{figure}
The leftmost plot in Figure~\ref{fig:model_performance} compares the performance of \textsl{curious iLQR } when using EPNN vs. GPR for dynamics model learning, with and without curiosity. Our analysis shows that MBRL via \textsl{curious iLQR } improves performance over regular iLQR for both model architectures. While the EPNN is more promising for scaling our approach, it currently requires more data to train. For this reason we focus on the GP model for the remainder of our experimental section. In the second to fourth plots of Figure~\ref{fig:model_performance}, we compare the performance of \textsl{curious iLQR } against the above-mentioned exploration baselines during policy optimization, when using GPR for model learning. We compare the methods with respect to 3 metrics: final Euclidean end-effector distance (plot 2), iLQR cost (plot 3) and the predictive performance of the model on each rollout (plot 4). We consistently see that, on average, MBRL via \textsl{curious iLQR } outperforms the other approaches: the error/cost is smaller and the solutions are more consistent across trials, as the standard deviation is lower. This shows that curiosity can lead to faster learning of a new task when learning from scratch.
The results on the predictive performance of the model suggest that the quality of the model learned via \textsl{curious iLQR } might be better in terms of generalization. In the next section we present results that investigate this assumption.
\subsection{Optimizing towards new targets after model learning}\label{exp.2}
\begin{figure}[h]
\vspace{-0.5cm}
\centering
\includegraphics[width=0.77\textwidth]{Figures/images_corl/sawyer_test_new_targets_small.png}
\caption{\small{Optimizing to reach new targets with regular iLQR after models were learned. $4$ different targets (one per row) are evaluated and the final end-effector trajectories presented. Constant lines are targets for x/y/z.}}
\label{fig:reaching_new_targets}
\vspace{-0.40cm}
\end{figure}
To confirm the hypothesis that the models learned by MBRL with curious iLQR generalize better, because they have explored the state space better, we evaluate the learned dynamics models on a second set of experiments in which the robot tries to reach new, unseen targets. In this experiment we take the GP models learned in Experiment 1 of Section~\ref{exp.1} and use them to optimize trajectories to reach new targets that were not seen during training of the model. The results are shown in Figure~\ref{fig:reaching_new_targets}, where four randomly chosen targets were set and the trajectory was optimized with regular iLQR. Note that here we use regular iLQR to optimize the trajectory so that we can better compare the models learned with/without curiosity in the previous set of experiments. Figure~\ref{fig:reaching_new_targets} shows the trajectory in end-effector space for each coordinate dimension, together with the target end-effector position as a solid horizontal line.
The results are averaged across $5$ trials. The trials correspond to using one of the $5$ dynamics models at the end of Experiment 1 in Section~\ref{exp.1}. For each trial, the initial torque trajectory was initialized randomly, and the initial joint configuration slightly perturbed. The mean and the standard deviation of the optimized trajectories are computed across the $5$ models learned via MBRL with curious iLQR (first col), MBRL with normal iLQR (second col), iLQR with random exploration (third col) and iLQR with maximum entropy exploration bonus (fourth col.). We see that MBRL with curious iLQR results in a model that performs better when presented with a new target. The new targets are reached more reliably and precisely.
\section{Real hardware experiments}\label{seq.real_hardware}
\begin{minipage}{0.6\textwidth}
\vspace{-0.7cm}
\begin{table}[H]
\scalebox{0.6}{
\begin{tabular}{|c|l|l|l|l|l|l|l|l|}
\hline
\textbf{Target} & \multicolumn{8}{c|}{\textbf{Distance to Target in m (Learning Iteration)}} \\ \hline
\multicolumn{1}{|l|}{} & \multicolumn{4}{c|}{\textbf{Curious}} & \multicolumn{4}{c|}{\textbf{Normal}} \\ \hline
\textbf{1} & 0.05 (6) & 0.09 (2) & 0.09 (3) & \textbf{0.07 (3.67)} & 0.37 (8) & 0.08 (2) & 0.18 (8) & \textbf{0.21 (7.0)} \\ \hline
\textbf{2} & 0.05 (3) & 0.09 (4) & 0.09 (4) & \textbf{0.07 (3.67)} & 0.20 (8) & 0.08 (3) & 0.09 (5) & \textbf{0.12 (5.3)} \\ \hline
\textbf{3} & 0.09 (6) & 0.09 (4) & 0.09 (3) & \textbf{0.09 (4.33)} & 0.17 (8) & 0.16 (8) & 0.11 (8) & \textbf{0.15 (8.0)} \\ \hline
\textbf{4} & 0.04 (2) & 0.07 (2) & 0.07 (2) & \textbf{0.06 (2.33)} & 0.04 (3) & 0.08 (3) & 0.05 (3) & \textbf{0.06 (3.0)} \\ \hline
\multicolumn{1}{|l|}{} & & & & \textbf{0.07 (3.5)} & & & & \textbf{0.14 (5.9)} \\ \hline
\end{tabular}
}
\caption{\small{Results on a reaching task. Each task (target) was repeated three times. The mean values are reported in bold font.}}\label{table.sawyer}
\end{table}
\vspace{-0.7cm}
\end{minipage}
\hfill
\begin{minipage}{0.3\textwidth}
\vspace{-0.7cm}
\begin{table}[H]
\centering
\scalebox{0.6}{
\begin{tabular}{|l|l|l|}
\hline
& \multicolumn{2}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Reaching Precision (m)\end{tabular}}} \\ \hline
Target & \textbf{Curious} & \textbf{Normal} \\ \hline
1 & 0.20 & 0.67 \\ \hline
2 & 0.26 & 0.61 \\ \hline
3 & 0.25 & 1.06 \\ \hline
4 & 0.24 & 0.67 \\ \hline
5 & 0.37 & 0.49 \\ \hline
& \textbf{0.26} & \textbf{0.7} \\ \hline
\end{tabular}
}
\caption{\small{Reaching a new target not seen during training.}}\label{table.new.target.sawyer}
\end{table}
\vspace{-0.7cm}
\end{minipage}
The experimental platform for our hardware experiments is the Sawyer robot \citep{sawyer}. The purpose of the experiments was to demonstrate the applicability and the benefits of our algorithm on real hardware. We perform reaching experiments for 4 different target locations. Each experiment starts from scratch with no prior data, and we compare the number of hardware experiments needed to reach the target. The results are summarized in Table~\ref{table.sawyer} and show the number of learning iterations needed to reach the target together with the precision in end-effector space. If the target was reached with a precision below 10\,cm, we considered the task achieved; if the target was not reached after the eighth learning iteration, we stopped the experiment and recorded the last end-effector position. We terminated after eight iterations because running the experiment on hardware is a lengthy process: GP training and rollouts alternate, and GP training time increases with the growing amount of data. Also, the reaching precision we were able to achieve on hardware was significantly lower than in the simulation experiments.
\begin{wrapfigure}{l}{0.45\textwidth}
\vspace{-0.2cm}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/sawyer_rest_pose.png}
\caption{\small{start configuration}}
\label{fig:f1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/sawyer_target.png}
\caption{\small{target configuration}}
\label{fig:f2}
\end{subfigure}
\caption{Joint configuration of Sawyer.}
\vspace{-0.4cm}
\end{wrapfigure}
We believe this is due to the data collected from the Sawyer robot, as we could only control the robot at $100$\,Hz, which introduces inaccuracies when reading the effects of the sent torque command. We repeated each experiment three times to demonstrate the repeatability of our method, as we expected measurement noise to affect the solutions.
From the table we can see that MBRL with curious iLQR reaches a target on average after 3.5 iterations with an average precision of 7\,cm, compared to MBRL with regular iLQR, which needed 5.9 iterations (often never reaching the target within eight iterations at the desired precision) with an average precision of 14\,cm.
As in the simulation experiments of Section~\ref{exp.2}, we evaluate the quality of the learned models on new target positions. The results are summarized in Table~\ref{table.new.target.sawyer} and are similar to what we observe in simulation: the models learned with curiosity, when used to optimize for new targets, achieve higher precision than the models learned without curiosity.
\section{Illustration: Curious iLQR}\label{section.illustration}
In this section, we illustrate the advantages of using the motivation to resolve model uncertainty as an exploration tool.
The objective of this section is to give an intuitive example of the effect of our MBRL loop. In the following, and throughout the paper, we refer to the agent that tries to resolve the uncertainty in its environment as curious and to the one that does not follow the uncertainty but only optimizes the task-related cost as normal.
\begin{figure}[h]
\vspace{-0.3cm}
\centering
\includegraphics[width=\textwidth]{Figures/images_corl/End_Effector_2DARM_corl.png}
\caption{End-effector position of curious and normal agent for 4 learning iterations on 2 different targets. The targets are represented by the black dots, the starting position by the black squares.}
\label{fig:2D_Arm_states}
\vspace{-0.3cm}
\end{figure}
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-0.65cm}
\centering
\includegraphics[width=0.4\textwidth]{Figures/images_corl/2d_reacher_comparison.png}
\caption{\small{Reacher performance /10 trials.}}
\label{fig:comparison_reacher}
\vspace{-0.4cm}
\end{wrapfigure}
The experimental platform is the OpenAI Gym Reacher environment \citep{gym}, a two-degree-of-freedom arm attached at the center of the scene. The goal of the task is to reach a target placed in the environment. In the experiments presented here, actions were optimized as described in Section~\ref{section.curious-ilqr}. The probabilistic model was learned with Gaussian Process (GP) regression using the GPy library \citep{gpy2014}. The intuition behind this experiment is that, if an agent is driven to resolve uncertainty in its model, a better model of the system dynamics can be learned and therefore used to optimize a control sequence more reliably. Our hypothesis is that the model learned by the curious agent is better by the end of learning, and we therefore expect it to perform better when used to solve new reaching tasks.
\begin{figure}[h]
\vspace{-0.3cm}
\centering
\includegraphics[width=0.7\textwidth]{Figures/images_corl/2D_Pred_error_and_Uncertain_Plot_EEspace_CoRL.png}
\caption{\small{The uncertainty and the prediction error in end-effector space after training, for the normal and curious agent. The cross is the initial position. Regions that are not reachable by the arm are shown in blue.}}
\label{fig:2D_Arm_var}
\vspace{-0.4cm}
\end{figure}
In Figure~\ref{fig:2D_Arm_states} we show the resulting end-effector trajectories of 8 consecutive MBRL iterations when optimizing to reach 2 different targets in sequence. We compare the behavior of the curious and normal agent in orange and blue, respectively. The targets are represented by the black dots. The curious agent tries to resolve the uncertainty within the model; the normal agent optimizes only for the task-related cost. The normal agent seemingly reaches the first target after the second learning iteration; the curious agent only manages to reach the target during the third iteration. Interestingly, the exploration of the curious agent leads the arm to reach the second target immediately and to continue reaching it consistently thereafter. Figure~\ref{fig:2D_Arm_var} confirms the intuition that the curious agent has learned a better model than the normal agent. The figure shows the uncertainty and the prediction error (in end-effector space) of the models learned by the normal and the curious agent respectively. With curiosity, the learned model has overall lower uncertainty and prediction error over the whole state space. We also compare our MBRL loop via \textsl{curious iLQR } optimization to: normal iLQR, a random exploration controller that adds Gaussian noise to the actions with mean $0$ and variance $0.2$, a maximum entropy exploration behaviour following the approach proposed in \citep{levine2016end}, and PILCO \citep{deisenroth2011pilco}, in Figure~\ref{fig:comparison_reacher}. For these experiments, we initialize the model with only two data points collected randomly during motor babbling. We report the mean and the standard deviation across 10 trials, where each trial starts from a different initial joint configuration and is initialized with a different initial torque trajectory for optimization. In this scenario, with a very poor initial model quality, PILCO could not perform comparably to our MBRL loop.
MBRL via \textsl{curious iLQR } outperforms all the other approaches. Furthermore it converges to solutions more reliably, as the variance between trials is lowest.
\section{Introduction}
Model-based reinforcement learning holds promise for sample-efficient learning on real robots \citep{atkeson1997comparison}. The hope is that a model learned on a set of tasks can be used to learn to achieve new tasks faster. A challenge is then to ensure that the learned model generalizes beyond the specific tasks used to learn it. We believe that curiosity, as a means of exploration, can help with this challenge. Though curiosity has been defined in various ways, it is generally considered a fundamental building block of human behaviour \citep{loewenstein1994psychology} and essential for the development of autonomous behaviour \citep{white1959motivation}.
In this work, we take inspiration from \citep{Kagan1972}, which defines curiosity as motivation to resolve uncertainty in the environment. Following this definition,
we postulate that by seeking out uncertainties, a robot is able to learn a model faster and therefore achieve lower costs more quickly compared to a non-curious robot. Keeping real robot experiments in mind, our goal is to develop a model-based reinforcement learning (MBRL) algorithm that optimizes action sequences to not only minimize a task cost but also to reduce model uncertainty.
\begin{wrapfigure}{o}{0.5\textwidth}
\vspace{-1.2cm}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={4.0cm 6.0cm 4.0cm 4.0cm}, clip]{Figures/sawyer_mbrl_overview}
\vspace{-0.7cm}
\caption{\small{Approach overview: motor babbling data initializes the dynamic model, the main loop then alternates between model learning and policy updates.}}
\label{fig:overview}
\end{center}
\vspace{-1cm}
\end{wrapfigure}
Specifically, our MBRL algorithm iterates between learning a probabilistic model of the robot dynamics and using that model to optimize local control policies (i.e. desired joint trajectories and feedback gains) via a \textsl{curious} version of the iterative Linear Quadratic Regulator (iLQR) \citep{Tassa2014}. These policies are executed on the robot to gather new data to improve the dynamics model, closing the loop, as summarized in Figure~\ref{fig:overview}.
In a nutshell, our curious iLQR aims at optimizing local policies that minimize the cost \textsl{and} explore parts of the model with high uncertainty.
In order to encourage actions that explore states for which the dynamics model is uncertain, we incorporate the variance of the model predictions into the cost function evaluation.
We propose a computationally efficient approach to incorporate this uncertainty by leveraging results on risk-sensitive optimal control \citep{Jacobson1973, Farshidian}.
\citep{Jacobson1973} showed that optimizing actions with respect to the expected exponentiated cost directly takes into account higher order moments of the cost distribution while affording the explicit computation of the optimal control through Riccati equations. A risk-sensitive version of iLQR was recently proposed in \citep{Farshidian}. While in these approaches the dynamic model is typically considered known and uncertainty comes from external disturbances, we propose to instead explicitly incorporate model uncertainty in the algorithm to favor the exploration of uncertain parts of the model.
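For concreteness, the expected exponentiated cost can be sketched in its standard form (the notation below is ours and this is a generic formulation, not the exact objective of our algorithm):

```latex
% Risk-sensitive objective: \sigma is the risk-sensitivity parameter,
% L(x,u) the accumulated cost. The second line is the small-\sigma
% expansion that exposes the dependence on higher-order moments.
\begin{align}
  J_\sigma(u) &= \frac{1}{\sigma}\,\log \mathbb{E}\!\left[\exp\big(\sigma\,L(x,u)\big)\right]\\
              &\approx \mathbb{E}[L] + \frac{\sigma}{2}\,\mathrm{Var}[L] + \mathcal{O}(\sigma^{2}).
\end{align}
```

For $\sigma<0$ the variance term is rewarded rather than penalized, which is what allows model uncertainty to act as an exploration bonus while the Riccati-based computation remains tractable.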
The proposed coupling between model learning and risk-sensitive control explicitly favours actions that resolve the uncertainty in the model while minimizing a task-related cost.
The contributions of this work are as follows: 1) We present an MBRL algorithm that learns a global probabilistic model of the dynamics of the robot from data and show how to utilize the uncertainty of the model for exploration through our curious iLQR optimization. 2) We demonstrate that our MBRL algorithm can scale to a seven degree-of-freedom (DoF) manipulation platform in the real world without requiring demonstrations to initialize the MBRL loop. 3) Our results show that curiosity not only leads to learning a better model faster on the initial task, but also that the learned model generalizes to new tasks more reliably. We perform an extensive evaluation in both simulation and on hardware.
\section{Conclusion and future work}\label{section.conclusion}
In this work, we presented a model-based reinforcement learning algorithm that uses an optimal control framework to trade off between optimizing for a task-specific cost and exploring around a locally optimal trajectory. Our algorithm explicitly encourages actions that seek out uncertainties in our model by incorporating them into the cost. By doing so, we are able to learn a model of the dynamics that achieves the task faster than MBRL with standard iLQR, and also transfers well to other tasks. We present experiments on a Sawyer robot in simulation and on hardware. In both sets of experiments, MBRL with \textsl{curious iLQR } (our approach) not only learns to achieve the specified task faster, but also generalizes to new tasks and initial conditions. All this points towards the conclusion that resolving dynamics uncertainty during model-based reinforcement learning is indeed a powerful tool.
As \citep{loewenstein1994psychology} states, curiosity is a superficial affection: it can arise, diverge and end promptly. We were able to observe similar behaviour in our experiments as well, as can be seen in Figure \ref{fig:model_performance}: towards the end of learning, the exploration signal around the trajectory decreases and the robot explores, deviating from the task slightly, before going back to exploiting once it is fairly certain about the dynamics. In the future, we would like to explore this direction by considering how to maintain exploration strategies. This could be helpful if the robot is still certain about a task, even though the environment or task has changed.
\clearpage
\acknowledgments{The authors would like to thank Stefan Schaal for his advice throughout the project. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Sarah Bechtle. Part of this work was supported by the Max-Planck Society, New York University, the European Union's Horizon 2020 research and innovation program (grant agreement 780684 and European Research Council's grant 637935) and a Google Faculty Research Award.}
\section{Introduction}
\label{sec:sec1}
The Domain Name System (DNS) \cite{rfc1034} is critical to the integrity of Internet services and applications. The DNS is a distributed database for storing information on domain names, the primary namespace for hosts on the Internet. The name space is organised in a hierarchical structure to ensure domain name uniqueness. Each node in the DNS tree corresponds to a zone. Each zone belonging to a single administrative authority is served by multiple authoritative name servers. \par
The correct and error-free operation of the DNS is crucial for the reliability of most applications on the Internet. Operational guidelines \cite{rfc1912,rfc2182,wg4} require that a zone have multiple authoritative name servers, and that these be distributed across topologically and geographically diverse network locations to increase the reliability of that zone as well as improve overall network performance and access. It also makes DNS services robust against unexpected failures. Recent work \cite{deccio2010,osterweil2011} outlines the need for zone operators to understand how many inter-dependencies they may inadvertently be incurring through the deployment and sharing of DNS secondary servers. \par
The original DNS design focused mainly on system robustness against physical failures, and neglected the impact of operational errors such as misconfigurations and bad deployment choices. Several previous measurements \cite{pappas2009,kalafut2008,wessels2004} showed that zones with configuration errors suffer from reduced availability and query delays increased by up to an order of magnitude. DNS administrators have to decide on operational parameters and be aware of their implications for the DNS's overall system qualities. On the deployment level, configuring the number of redundant authoritative DNS servers for a certain zone should take into consideration the operational overhead associated with querying multiple servers in parallel. Choosing servers with names under other zones provides zone redundancy but may introduce security and resiliency threats to the zone. Deciding where to physically locate the servers should ensure a certain degree of resistance against different types of failures. Peering with external organizations for secondary server hosting should take into consideration the impact of transitive trust and administrative complexity \cite{ram2005,herzberg2013}.\par
While the original DNS design documents \cite{rfc1033,rfc1034,rfc1035,rfc1912,rfc2182} call for diverse placement of authoritative name servers for a zone, bad configurations may lead to \textit {cyclic dependencies} while bad deployment choices may lead to \textit{diminished and false server redundancy}. It was also assumed that redundant DNS servers fail independently; previous measurements \cite{ram2005,deccio2010} showed that operational deployment choices made at individual zones can introduce \textit {excessive zone influence} that severely affect the availability, security and resiliency of other zones. \par
This research is motivated by the lack of formal analysis of the DNS interdependencies stemming from the delegation-based architecture as well as operational deployment choices made by system administrators. We approached the problem from a design point of view that takes into consideration the DNS zone configuration and server deployment choices rather than from the dynamic behavioural view \cite{casalicchio2012} which includes statistical and post-deployment measurements. We propose a method to identify, specify and detect misconfigurations and bad deployment choices in the form of operational bad smells. \par
The method utilizes a set of structural metrics defined over a DNS operational model to detect the smells in early stages of the DNS deployment. It also suggests graph-based refactoring rules as correction mechanisms for the bad smells. We apply and validate the method using several representative case studies. The method will be used to build a \textit {pre-emptive diagnostic advisory} tool that detects and flags configuration changes that might decrease the robustness or security posture of a domain name, even before the changes go into production. The contributions of this research are:
\begin{enumerate}
\item Introduction of the concept of operational bad smells, i.e., recurring DNS deployment bad choices and misconfigurations that have negative impact on certain aspects of the DNS's quality.
\item Description in detail of a set of representative operational bad smells to build a DNS operational bad smells catalogue.
\item Identification of a set of structural metrics, defined over a DNS operational model, to query the dependency graph of the system to detect DNS operational bad smells.
\item Suggestion of graph-based refactoring rules as correction mechanisms to eliminate the bad smells.
\end{enumerate}
The rest of the paper is structured as follows: Section~\ref{sec:sec2} discusses relevant background. Section~\ref{sec:sec3} presents the DNS operational model. Section~\ref{sec:sec4} discusses the bad smells' identification, specification, detection and refactoring method. Section~\ref{sec:sec5} validates our method by applying it to a set of representative case studies. Section~\ref{sec:sec6} discusses some related work and Section~\ref{sec:sec7} concludes the paper and discusses future work.
\vspace{-2mm}
\vspace{-2mm}
\section{The Operation and Structure of the DNS}
\label{sec:sec2}
DNS is responsible for the mapping of human-friendly domain names to the corresponding machine-oriented IP addresses. Operators of each zone determine the number of authoritative name servers and their placement and manage all changes to the zone's data content. In spite of the fact that zone administration is autonomous, some coordination is required to maintain the consistency of the DNS hierarchy.
\vspace{-2mm}
\vspace{-2mm}
\subsection{General Operation of the DNS}
\begin{figure} [ht]
\centering
\includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.75\textwidth]{fig1.pdf}
\caption{An illustration of the DNS resolution process.}
\label{fig:f1}
\vspace{-2mm}
\vspace{-2mm}
\end{figure}
Figure~\ref{fig:f1} shows the process by which an application looks up the domain name www.le.ac.uk and how it is mapped to the DNS data, control and management operational planes.
To find the IP address of www.le.ac.uk, the client (e.g. a web browser) submits a DNS query to a recursive DNS resolver (step 1). Assuming that the corresponding IP is not in the resolver cache, it will ask one of the root name servers for the translation (step 2). The names and IP addresses of the root name servers are stored locally within each server. The root name servers respond with a ``referral'', telling the resolver to query the DNS servers of the .uk domain for an answer (step 3). The resolver then repeats this process for the .uk name servers and gets a referral to the .ac.uk name servers, which in turn answer with a referral to the le.ac.uk name servers (steps 4-7). The resolver next asks one of the le.ac.uk name servers for the translation (step 8), gets the answer in step (9), and finally forwards the answer to the requesting client (step 10), which uses this information to connect to the web server hosting the web site www.le.ac.uk. Throughout the process, resolvers may encounter name servers hosted under other zones whose names need to be resolved before contacting them about the original request.
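The referral walk above can be sketched with a toy lookup table standing in for real DNS traffic; all names, the address, and the one-server-per-zone simplification are illustrative only:

```python
# Toy referral table standing in for real DNS traffic.
REFERRALS = {
    ".":      ["uk."],        # root refers the resolver to the .uk servers
    "uk.":    ["ac.uk."],     # .uk refers to the .ac.uk servers
    "ac.uk.": ["le.ac.uk."],  # .ac.uk refers to the authoritative zone
}
ANSWERS = {"www.le.ac.uk.": "143.210.0.1"}  # made-up authoritative record

def resolve(name):
    """Follow referrals from the root down to an authoritative answer."""
    zone = "."
    trace = [zone]
    while zone in REFERRALS:       # referral loop (steps 2-7 in Figure 1)
        zone = REFERRALS[zone][0]  # descend one level of the hierarchy
        trace.append(zone)
    return ANSWERS[name], trace    # authoritative answer (steps 8-9)

addr, trace = resolve("www.le.ac.uk.")
```

A real resolver also caches intermediate referrals and may have to recursively resolve the names of out-of-bailiwick servers it encounters along the way, which is exactly where the inter-dependencies discussed next arise.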
\vspace{-2mm}
\vspace{-2mm}
\subsection{DNS Operational Inter-dependencies}
Inter-dependencies are common in the DNS and stem from the hierarchical structure of the DNS, the DNS protocol, as well as from different motivations and goals \cite{deccio2010}. A zone is said to depend on a name server if the name server could be involved in the resolution of names in that zone. The dependencies among name servers that directly or indirectly affect a zone are represented as a dependency graph.\par
\begin{figure}
\centering
\includegraphics[ trim=0cm 0cm 0cm 0cm, clip=true,width=0.75\textwidth]{le-ac-uk-dg.pdf}
\vspace{-2mm}
\vspace{-2mm}
\caption{Name Dependency Graph of (le.ac.uk).}
\label{fig:f2}
\vspace{-2mm}
\vspace{-2mm}
\end{figure}
Figure~\ref{fig:f2} shows the delegation graph of the zone (le.ac.uk), which depends on 4 authoritative name servers: three (ns0, ns1 and ns2.le.ac.uk) under the management of the University of Leicester (UoL), while the fourth name server (adns0.bath.ac.uk) is managed by the University of Bath. In order to resolve any domain under the zone (le.ac.uk), a resolver will ask the name servers of the root zone down to the set of authoritative name servers of the zone. While Leicester University directly trusts bath.ac.uk to serve its namespace, it has no control over the name servers that Bath trusts (i.e. name servers under Cambridge, Salford, Imperial College and so on). Each name server or group of name servers is administered by a different organization, which creates another layer of transitive trust dependencies amongst those organizations.
\vspace{-2mm}
\vspace{-2mm}
\subsection{Operational Planes}
The zone's data plane is the interconnected graph of all infrastructure resource records defined within the zone's configuration file. The interconnected graph of all authoritative name servers involved in the resolution process of a domain within a certain zone is called the zone's control plane and the interconnected graph of all administrative units involved is called the management plane. One reason that the DNS is so powerful is that its data plane allows administrators a great deal of flexibility: they can manage their name space however they like. However, the control and management planes' flexibility can lead to operational problems if not managed conscientiously.
\vspace{-2mm}
\vspace{-2mm}
\subsection{Dependency Graphs}
The recursive structure of inter-dependencies within and between the DNS operational planes is represented by a dependency graph. A dependency graph \cite{deccio2010} is a directed connected graph with a distinguished node (r), which is the root zone. Each node in the graph represents a zone name, and each edge signifies that its source is directly dependent on its target for proper resolution of itself and any descendant domain names. Dependency graphs capture most attributes and relationships between the various operational entities within the DNS, and they can be effectively utilized in detecting configuration weaknesses and server deployment problems. Figure~\ref{fig:f3} shows the different dependencies that occur at the different DNS operational planes.\par
\begin{figure}[ht]
\centering
\includegraphics[trim=0cm 2cm 0cm 0cm, clip=true,width=0.7\textwidth]{dg-layered.pdf}
\vspace{-2mm}
\caption{DNS Operational Planes and Their Dependencies.}
\label{fig:f3}
\vspace{-2mm}
\vspace{-2mm}
\end{figure}
Since many of the misconfigurations cannot be detected from the zone file or deployments directly, there is a need for an operational model that encompasses all information related to the zone file and the server deployments in one conceptual graph. The instance of the model (the dependency graph) will enable us to detect zone integrity violations as well as violations in the deployment of name servers and in the choice of peering organizations and management structures. The conceptual graph representation facilitates modelling at multiple levels of detail simultaneously.
\vspace{-2mm}
\vspace{-2mm}
\section{DNS Operational Model}
\label{sec:sec3}
The DNS Operational Model aims to support operational goals, such as detecting violations of the design and deployment principles, at the authoritative level. To this end we have to search for certain patterns indicating such violations in the instances of the operational model of the system, i.e., the dependency graphs. This means we have to be able to specify a problem as a pattern, and to query the dependency graph about the existence and occurrences of the specified pattern. The model is composed of the following elements:
\begin{itemize}
\item Operational Entities (e.g. resource records, zones, servers and organizations)
\item Properties of operational entities such as (in-bailiwick and out-of-bailiwick name servers)
\item Relations between the entities (e.g. access attributes such as dependability, containment, delegation and management)
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[trim=1cm 3.2cm 1cm 3.5cm, clip=true, width=0.95\textwidth]{model.pdf}
\vspace{-2mm}
\vspace{-2mm}
\caption{DNS Operational Model.}
\label{fig:f4}
\vspace{-2mm}
\vspace{-2mm}
\end{figure}
The operational DNS entities that appear in our model fall into two categories: primitive and composed entities. Composed entities have an identity and a set of properties. In addition to these, composed entities have a list of contained entities, which are primitive or composed entities. A composed entity type is one that contains other entities. The model supports the following composed entities: Organization, Server, Zone and Resource Record. In order to describe a composed entity we have to specify its properties, containment structure (i.e. the entities that it contains), relations and container entity. As an example, we can look at the server component, which can be managed (contained) by organizations. Multiple servers can be managed by one organization. The server can host many zone files and it has a name and IP address as attributes. There are many types of servers; in this context we are concerned with in-bailiwick servers, whose names fall within a zone file hosted at that particular server, and out-of-bailiwick servers, whose names belong to a zone hosted on other name servers.\par
Three specific dependencies are present within the DNS operational planes and they are:
\begin{enumerate}
\item Parent Dependency: resolving the name of a domain name is always dependent on resolving its parent name since the resolver must learn the authoritative servers for a zone from referrals from the zone’s hierarchical parent.
\item Authoritative Name Server (NS) Dependency: A zone is said to depend on a name server if the name server could be involved in the resolution of names in that zone.
\item CNAME Aliasing Dependency (Name pointing to another Name): the resolution of an alias is always dependent on the resolution of its target CNAME. If a resolver receives a response indicating that the name in question is an alias to another name, it must subsequently resolve the target of the alias, and so on until an address is returned.
\end{enumerate}
The dependency graph can be extracted from the zone file and from the chain of authoritative name servers and organizations involved in the resolving process of domains under that particular zone. This is done by analysing the zone file and the dependencies between the different resource records and their data elements, and by following the query process outlined in Fig.~\ref{fig:f1} using DNS tracing tools. All types of dependencies and recursive queries are followed to obtain the full dependency graph of the zone in the three operational planes.
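A minimal sketch of such a dependency graph, with the three dependency types encoded as edge labels, could look as follows; the zone and server names reuse the le.ac.uk example, and the class and its API are our own illustration rather than a standard tool:

```python
from collections import defaultdict

class DependencyGraph:
    """Toy zone dependency graph; edges carry one of the three
    dependency types: 'parent', 'ns', or 'cname'."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(target, dep_type)]

    def add(self, source, target, dep_type):
        self.edges[source].append((target, dep_type))

    def dependencies(self, node, dep_type=None):
        """All targets `node` depends on, optionally filtered by type."""
        return [t for t, d in self.edges[node]
                if dep_type is None or d == dep_type]

g = DependencyGraph()
g.add("le.ac.uk", "ac.uk", "parent")             # parent dependency
g.add("le.ac.uk", "ns0.le.ac.uk", "ns")          # in-bailiwick NS
g.add("le.ac.uk", "adns0.bath.ac.uk", "ns")      # out-of-bailiwick NS
g.add("www.le.ac.uk", "web.le.ac.uk", "cname")   # CNAME aliasing
```

Queries such as "which name servers does this zone depend on" then become simple edge-filtered traversals of this structure, which is the form the detection stage in Section~\ref{sec:sec4} operates on.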
\vspace{-2mm}
\vspace{-2mm}
\section {Operational Bad Smells}
\vspace{-2mm}
\label{sec:sec4}
In software engineering, bad smells in code \cite{fowler1999} identify risks to non-functional quality in a software system based on structural properties and metrics. We transfer these ideas to the realm of the DNS, where operational bad smells are configuration and deployment choices by zone administrators that are not errant or technically incorrect, and do not currently prevent the system from performing its designated function. Instead, they indicate weaknesses that may impose additional overhead on DNS queries, increase the system's vulnerability to threats, or increase the risk of failures in the future.\par
The set of identified bad smells is being formally specified in concise and reusable terms based on a template that includes the bad smell name, type, inspection plane(s), description \& occurrences, quality impacts and detection strategies. The catalogue will be expanded by including refactoring rules for each smell and how these rules have to be applied on the model instance to eliminate the concerned bad smell. Examples of catalogue entries are shown in Table~\ref{tab:t2} and Table~\ref{tab:t4} listed as part of the case studies in Section~\ref{sec:sec5}.\par
Although DNS troubleshooting techniques and problem identification methods have been proposed and several tools have been built, most of these methods and tools apply their detection techniques directly on the zone files through a predefined zone schema and integrity constraints. They do not take into account the inter-dependencies stemming from the hierarchical nature of the DNS or from zone administrators' practices. Instead, we propose a model-based approach that subsumes all the steps necessary to identify, specify and detect the DNS operational bad smells. The \textbf{\textit{ISDR}} method is composed of four stages and produces the operational bad smells catalogue:
\begin{enumerate}
\item \textbf{I}dentification, including domain analysis using DNS standards in the form of Request for Comments (RFCs), best practices and policy documents, literature review and DNS expert views.
\item \textbf{S}pecification of a set of operational bad smells using a reusable vocabulary and classification of the bad smells in a taxonomy that shows the scope of the inspection element or plane and system's external qualities affected by the smell.
\item \textbf{D}etection of bad smells in the form of general detection queries and formulas.
\item \textbf{R}efactoring as a correction mechanism to the operational bad smells. Other correction mechanisms may be formulated in the form of reports or reconfiguration recommendations.
\end{enumerate}
The following are the ISDR method stages in details:
\vspace{-2mm}
\subsection {Identification}
The first stage in our method consists of performing deep analysis of the DNS standards, Request for Comments (RFCs), best practices and policy documents to identify weaknesses in configuration and deployment choices made by administrators that may impose additional overhead on DNS queries, or increase the system vulnerability to threats, or increase the risk of cascaded failures.
\vspace{-2mm}
\subsection{Specification}
The weaknesses identified in the previous step, termed as \textit{operational bad smells}, are then defined using certain key terms, unified vocabulary and reusable concepts in this domain. We developed a taxonomy that describes the structural relationships between the various bad smells. The taxonomy has an important role in defining the scope of inspection and highlighting the metrics or structural properties related to the bad smell. It classifies the bad smells based on the following categories:
\begin{enumerate}
\item Operational plane: Data, control and management planes.
\item Affected entity types: Single type, inter-type, intra-type, or inter-zone.
\item Property of the smell: Lexical, structural or measurable.
\end{enumerate}
\begin{figure}[ht]
\centering
\includegraphics[trim=3.5cm 0.5cm 3.5cm 0.5cm, clip=true,width=0.75\textwidth]{taxonomy-1.pdf}
\vspace{-2mm}
\caption{DNS Operational Bad Smells Taxonomy.}
\label{fig:f5}
\vspace{-2mm}
\vspace{-2mm}
\end{figure}
Figure~\ref{fig:f5} shows a partial graphical representation of the DNS operational bad smells taxonomy. The taxonomy is generic and defines a bad smell in more than one category. It can easily be extended by defining new categories of bad smells based on subsequent iterations of the DNS operational domain analysis. So far we have already identified 19 bad smells that can be used as a representative set that spans the different operational planes with various detection properties.\par
In the context of metrics-based analysis techniques, the aforementioned classification of design entities (as explained in Section~\ref{sec:sec3}) has a particular relevance: it provides a pertinent explanation of why metrics are defined and computed only for some entity types (i.e., organizations, networks, servers, zones and resource records). The explanation resides basically in the distinctive aspects that exist between the two categories, i.e., the fact that a composed entity can contain other entities and that it can have relations with other entities. As direct measurements mainly ``count'' the different entities contained in, or related to, a measured entity, it becomes obvious why the object of measurement is restricted to composed design entities. Interesting examples of metrics are per-server and per-zone distributions such as:
\vspace{-2mm}
\begin{enumerate}
\item The number of zones that are served from multiple name servers in different network autonomous systems or diverse geographical locations \textit{(Server Redundancy)}.
\item The number of zones that influence the resolving of domain names within a particular zone \textit{(Zone Influence)}.
\vspace{-2mm}
\end{enumerate}
For the proper interpretation of each structural metric defined over the operational model, we give the metric definition, usability, how to measure and a formula for computing that metric. Table~\ref{tab:t0} shows the interpretation model for the metric \textit{Administrative Complexity} \cite{deccio2010}. \par
\begin{table}[ht]
\centering
\vspace{-2mm}
\caption{Interpretation of the Administrative Complexity Metric.}
\label{tab:t0}
\begin{tabular}{ | p{2.6cm} | p{12cm} |}
\hline
Metric &Administrative Complexity. \\
\hline
Definition &Describes the diversity of a zone with respect to the organisations administering its authoritative name servers. \\
\hline
Usability &The advantage of mutual hosting of zones between organizations is increased availability, but at the same time an increased potential for failure and instability of the zone resolution process. \\
\hline
How to Measure & Count the number of authoritative name servers of each organization involved in the dependency graph of zone(z). \\
\hline
Metric Notation &$O_z$: set of organizations administering authoritative name servers hosting zone ($z$); $n$: total number of authoritative name servers of zone ($z$); $NS^o_z$: the subset of name servers administered by organization $o$ in $O_z$. \\
\hline
Formula &$Ac(z) = 1 - \sum_{o \in O_z} \left(\frac{|NS^o_z|}{n}\right)^{n}$. \\
\hline
\end {tabular}
\vspace{-2mm}
\end{table}
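The formula in Table~\ref{tab:t0} can be sketched in a few lines; the input maps each authoritative name server of a zone to its administering organization, mirroring the le.ac.uk example, and the organization labels are illustrative:

```python
def administrative_complexity(server_org):
    """Ac(z) = 1 - sum over organizations o of (|NS_o| / n) ** n,
    following the interpretation table; server_org maps each
    authoritative name server of the zone to its organization."""
    n = len(server_org)                 # total authoritative servers
    orgs = set(server_org.values())     # O_z
    return 1 - sum(
        (sum(1 for o in server_org.values() if o == org) / n) ** n
        for org in orgs
    )

# le.ac.uk: three servers run by UoL, one hosted by Bath (labels ours)
ac = administrative_complexity({
    "ns0.le.ac.uk": "UoL", "ns1.le.ac.uk": "UoL",
    "ns2.le.ac.uk": "UoL", "adns0.bath.ac.uk": "Bath",
})
```

A zone whose servers are all run by one organization scores $0$, while spreading the servers across organizations pushes the score towards $1$, reflecting the increased administrative diversity.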
\vspace{-2mm}
\vspace{-2mm}
\subsection {Detection}
In order to be able to detect bad smells occurring on model instances, we need to capture deviations of the particular instance model from good and recommended operational best practices. Lexical and structural properties are used to detect some of the bad smells using direct queries on the instance model (e.g., are there any cycles in the dependency graph?). The metric-based approach combines a set of metrics and set operators to compare them against absolute or relative threshold values. Setting the absolute or relative operational metric threshold values can be done using local policy constraints or best practices from the wider DNS domain literature and expert views.
\vspace{-2mm}
\subsection{Refactoring}
In the area of object-oriented programming, refactoring \cite{opdyke1992} is the technique of choice for improving the structure of existing code without changing its external behaviour. Graph-based, general refactoring rules \cite{bisztray2009} will be suggested to remove the bad smells identified and detected in the previous stages. The general approach of refactoring \cite{mens} is to include the following steps: (1) identify the location for refactoring, (2) determine which refactoring rules should be applied and on what sequence, (3) guarantee that refactoring rules are preserving the external behaviour of the system, (4) application of selected refactoring rules, (5) assess the effect of refactoring on the system’s external qualities and (6) maintain the consistency between the refactored elements and other system artefacts.
\vspace{-2mm}
\subsection {Method Execution and Tool Support}
The ISDR method is executed on a particular instance of the DNS operational model (Dependency Graph) using the following steps:
\begin{itemize}
\item\textbf{Step 1:} Extract the dependency graph from the zone configuration file and the authoritative servers' deployment.
\item\textbf{Step 2:} Query the dependency graph for any bad smell using the methods and metrics defined in the Bad Smells Catalogue.
\item\textbf{Step 3:} Apply relevant refactoring rule(s) on all matching occurrences of the LHS of the rule on the instance model. A new dependency graph is generated as an output of this step.
\item\textbf{Step 4:} New zone file(s) and authoritative name servers deployment layout can be automatically generated from the new Dependency Graph or a set of recommendations can be presented to the system administrator with relevant quality impacts.
\end{itemize}
The method will be implemented in a pre-emptive diagnostic advisory tool that detects and flags configuration changes that might decrease the robustness, resilience or security posture of a domain name, even before the changes go into production.
\vspace{-2mm}
\vspace{-2mm}
\section{Validation}
\label{sec:sec5}
We validate our method by applying it and its associated execution technique to several bad smells where some of them have been already identified as misconfigurations in the literature \cite{pappas2009,deccio2010,kalafut2008,herzberg2013,lu2014}.
\vspace{-2mm}
\subsection{Case Study (1): Cyclic Dependency}
To achieve acceptable geographical and network diversity, zone administrators often establish mutual arrangements with peer organizations to host each other's zone files. Authoritative name servers located in other zones are normally identified by their names instead of their addresses and are called out-of-bailiwick name servers. A cyclic zone dependency \cite{pappas2009} occurs when two or more zones depend on each other in a circular way.\par
Table~\ref{tab:t1} shows that the zone (example.com) has 4 authoritative name servers responsible for resolving domain names under this zone as defined in its parent zone (.com). Two servers (ns1 and ns2.example.com) are \textit{in-bailiwick} servers, and it is mandatory to include their IP addresses in the parent zone in order to properly resolve domain names under that zone. The other two servers (dns1 and dns2.example.net) are located in another zone and there is no need to include their IP addresses in (.com), the parent zone of example.com. On the other hand, the (example.net) zone, whose parent is the (.net) zone, is served by two \textit{out-of-bailiwick} name servers located in the (example.com) zone.\par
\begin{table}[ht]
\vspace{-2mm}
\vspace{-2mm}
\caption{Content of Zone File for Case Study (1).}
\label{tab:t1}
\begin{tabular}{| p{7cm} | p{0.5cm} | p{7cm} |}
\cline{1-1}
\cline{3-3}
\$ORIGIN .com. && \$ORIGIN .net. \\
\cline{1-1}
\cline{3-3}
\end{tabular}
\begin{tabular}{ | p{2.8cm} | p{0.55cm} | p{2.8cm} | p{0.52cm} | p{2.8cm} | p{0.55cm} |p{2.8cm} |}
example.com. &NS & ns1.example.com.&&example.net.&NS&ns1.example.com. \\
example.com. &NS & ns2.example.com.&&example.net.&NS&ns2.example.com. \\
\cline{5-7}
example.com. &NS & dns1.example.net.& \multicolumn{3}{c}{} \\
example.com. &NS & dns2.example.net.& \multicolumn{3}{c}{} \\
\cline{1-3}
ns1.example.com. &A& 1.1.1.1& \multicolumn{3}{c}{} \\
ns2.example.com. &A & 1.1.1.2& \multicolumn{3}{c}{} \\
\cline{1-3}
\end{tabular}
\vspace{-2mm}
\end{table}
In this example, the two zones work nicely under normal circumstances, but if (for any reason) both in-bailiwick name servers become unavailable, both the example.com and example.net zones will become unreachable because the IP addresses of the other two authoritative name servers cannot be resolved. This example illustrates the failure dependency between zones, where the failure of some servers in one zone renders the other zone unreachable. The quality impacts of such a bad smell are a significant reduction in the availability of the zone and in its resiliency against multiple points of failure.
\begin{figure}[ht]
\vspace{-2mm}
\centering
\includegraphics[trim=1.3cm 1.7cm 1.8cm 1cm, clip=true,width=0.75\textwidth]{dg1.pdf}
\caption{Part of the Dependency Graph of Case Study (1).}
\label{fig:f7}
\vspace{-2mm}
\end{figure}
Checking each zone individually for configuration errors will not lead to the detection of this bad smell, since both zones are configured correctly in isolation. Constructing the dependency graph, on the other hand, readily reveals the two circular paths that identify the smell. \par
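The detection strategy thus amounts to a cycle query on the dependency graph. A minimal sketch in Python, where the graph is a hypothetical adjacency-list encoding of the zone-level dependencies (not the tool's actual data model):

```python
def find_cycles(graph):
    """Detect cyclic zone dependencies via depth-first search.

    graph: dict mapping a zone to the zones it depends on for name
    resolution (the zones hosting its out-of-bailiwick servers).
    Returns the list of cyclic paths found.
    """
    cycles = []

    def dfs(node, path):
        for dep in graph.get(node, []):
            if dep in path:  # back edge closes a cycle
                cycles.append(path[path.index(dep):] + [dep])
            else:
                dfs(dep, path + [dep])

    for start in graph:
        dfs(start, [start])
    return cycles

# example.com relies on servers in example.net and vice versa
deps = {"example.com": ["example.net"], "example.net": ["example.com"]}
print(find_cycles(deps))
```

Run on the two zones of this case study, the query reports one circular path starting from each zone, matching the two cycles visible in the dependency graph.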
\begin{table}[ht]
\centering
\caption{Catalogue Entry for the Cyclic Dependency Bad Smell.}
\vspace{-2mm}
\label{tab:t2}
\begin{tabular}{ | p{3cm} | p{9cm} |}
\hline
Name &Cyclic Dependency. \\
\hline
Type &Intra-Zone, Structural. \\
\hline
Inspection Planes &Data and Control Planes. \\
\hline
Occurrences & Cyclic zone dependency occurs when two or more zones depend on each other in a circular way. \\
\hline
Quality Impacts &Reduced availability and reduced resiliency. \\
\hline
Detection Strategy &Is there any cycle in the Dependency Graph? (Query on the DNS Operational Model Instance). \\
\hline
Correction Mechanism (Refactoring) &Add a glue record for the (out-of-bailiwick) authoritative name servers involved in the cycle in the zone file. \\
\hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[trim=1cm 0.5cm 0.5cm 0.5cm, clip=true, width=0.75\textwidth]{rule1.pdf}
\vspace{-2mm}
\vspace{-2mm}
\caption{Refactoring Rule: AddGlueRecord.}
\label{fig:f8}
\end{figure}
Cyclic dependencies can be eliminated by including specific resource records (RRType: A) for both out-of-bailiwick servers in the (.com) zone. This enables resolving the domain names under the (example.com) and (example.net) zones even when the two in-bailiwick servers are unreachable. We execute this correction mechanism in the form of a graph-transformation-based refactoring rule (AddGlueRecord) applied to the dependency graph, as shown in Figure~\ref{fig:f8}. Since there are two matches for the LHS of the rule on the actual instantiation of the model (the dependency graph in Figure~\ref{fig:f7}), the rule needs to be applied twice in order to remedy all occurrences of the bad smell. A new zone file can then be automatically generated from the new Dependency Graph, as shown in Table~\ref{tab:t3}, or a set of recommendations can be presented to the system administrator to eliminate the bad smell.\\
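The effect of the AddGlueRecord rule on the zone data can be illustrated as follows. The record-tuple layout and the glue addresses are hypothetical, mirroring Table 1; the actual rule operates on the dependency graph, not on record tuples:

```python
def add_glue_records(zone_records, glue_addresses):
    """Add A (glue) records for NS targets that lack one in the zone.

    zone_records: list of (owner, rrtype, value) tuples for one zone.
    glue_addresses: dict mapping server name -> IP address.
    Each LHS match (an NS target without an accompanying A record)
    triggers one rule application, i.e. one added glue record.
    """
    have_a = {owner for owner, rrtype, _ in zone_records if rrtype == "A"}
    ns_targets = [v for _, rrtype, v in zone_records if rrtype == "NS"]
    new = list(zone_records)
    for server in ns_targets:
        if server not in have_a and server in glue_addresses:
            new.append((server, "A", glue_addresses[server]))  # glue record
            have_a.add(server)
    return new

com_zone = [
    ("example.com.", "NS", "ns1.example.com."),
    ("example.com.", "NS", "dns1.example.net."),
    ("example.com.", "NS", "dns2.example.net."),
    ("ns1.example.com.", "A", "1.1.1.1"),
]
glue = {"dns1.example.net.": "1.1.1.3", "dns2.example.net.": "1.1.1.4"}
fixed = add_glue_records(com_zone, glue)
```

The two appended tuples correspond to the two rule applications: one glue record per out-of-bailiwick server involved in the cycle.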
\vspace{-2mm}
\vspace{-2mm}
\begin{table}[ht]
\vspace{-2mm}
\caption{New Zone File Generated After Executing the Refactoring Rule(s).}
\vspace{-2mm}
\label{tab:t3}
\begin{tabular}{| p{7cm} | p{0.5cm} | p{7cm} |}
\cline{1-1}
\cline{3-3}
\$ORIGIN .com. && \$ORIGIN .net. \\
\cline{1-1}
\cline{3-3}
\end{tabular}
\begin{tabular}{ | p{2.8cm} | p{0.55cm} | p{2.8cm} | p{0.52cm} | p{2.8cm} | p{0.55cm} |p{2.8cm} |}
example.com. &NS & ns1.example.com.&&example.net.&NS&ns1.example.com. \\
example.com. &NS & ns2.example.com.&&example.net.&NS&ns2.example.com. \\
\cline{5-7}
example.com. &NS & dns1.example.net.& \multicolumn{3}{c}{} \\
example.com. &NS & dns2.example.net.& \multicolumn{3}{c}{} \\
\cline{1-3}
ns1.example.com. &A& 1.1.1.1& \multicolumn{3}{c}{} \\
ns2.example.com. &A & 1.1.1.2& \multicolumn{3}{c}{} \\
dns1.example.net. &A & 1.1.1.3& \multicolumn{3}{c}{} \\
dns2.example.net. &A & 1.1.1.4& \multicolumn{3}{c}{} \\
\cline{1-3}
\end{tabular}
\vspace{-2mm}
\vspace{-2mm}
\end{table}
\vspace{-2mm}
\subsection{Case Study (2): False Redundancy}
Redundancy \cite{rfc2182} is one of two mechanisms used by DNS administrators to ensure high availability of domain names. The level of availability provided by redundant servers is a function not only of their number, but also of their physical locations and the networks they connect to. In 2001, a bad DNS deployment choice \cite{microsoft} caused many of Microsoft's web sites and email servers to be unreachable (although they were actually still operational). All authoritative name servers for the zone (microsoft.com) were placed in one location, connected to the same network, and placed behind one particular network router. When the router failed, this local bad choice had a large global impact, increasing the queries on one of the DNS root servers (the F server) from the normal 0.003\% of all queries to over 25\% \cite{pappas2009}. The catalogue entry for the False Redundancy bad smell is shown in Table~\ref{tab:t4}.\par
\begin{table}[ht]
\centering
\vspace{-2mm}
\caption{Catalogue Entry for the False Redundancy Bad Smell.}
\label{tab:t4}
\begin{tabular}{ | p{3cm} | p{11.5cm} |}
\hline
Name &False-Redundancy. \\
\hline
Type & Measurable and Inter-zone. \\
\hline
Inspection Planes &Control Plane. \\
\hline
Occurrences & When all redundant servers are located within the same physical location, connected to the same network, and placed within the same address prefix. \\
\hline
Quality Impacts & Reduced availability, decreased resilience, and susceptibility to a single point of failure at a certain granularity. \\
\hline
Detection & Queries on the dependency graph regarding the following metrics: a) the number of authoritative name servers, b) the geographical locations in which the servers are placed, c) the networks they are connected to, and d) their BGP prefixes. \\
\hline
Refactoring & Applying the MoveServerLocation refactoring rule will ensure the availability of the zone and its resilience to a single point of failure. \\
\hline
\end{tabular}
\vspace{-2mm}
\end{table}
In this example, we look into one aspect of False Redundancy: the geographical placement of the authoritative name servers. Looking at the dependency graph extracted from the zone file generated as the output of case study (1) and the deployment of the authoritative name servers, as shown in Figure~\ref{fig:f9}, it is clear that the \textit{geographical redundancy} of the zone (example.com) is one, which is much less than the server redundancy of 4 (the total number of authoritative name servers defined for the zone). Looking at the IP address associated with each of these servers, it is evident that all of them are connected to the same network, and even sit behind the same router. This deployment choice introduces a single point of failure, since all servers are located in the same geographical area, and this badly affects the resiliency and availability of the zone and its domain names. A geographical area may be a continent, country, city or even a certain building, which may be susceptible to a power outage, natural disaster or any other risk.
\begin{figure}[ht]
\centering
\includegraphics[trim=0.65cm 0.7cm 0.68cm 0.65cm, clip=true,width=0.75\textwidth]{dg2.pdf}
\vspace{-2mm}
\caption{Geographical Location Dependency Graph of Case Study (2).}
\label{fig:f9}
\vspace{-2mm}
\end{figure}
In order to detect the occurrence of the False Redundancy bad smell, a set of queries regarding the number of authoritative name servers of the particular zone and the number of distinct geographical locations in which those servers are placed can be executed against the dependency graph in Figure~\ref{fig:f9}. The resulting measurements are used to detect the bad smell as defined in its catalogue entry. The threshold values for the metrics are set based on the best practices and policies identified in the first step of the ISDR method, or can be left to the system administrator to set based on local policies and operational requirements.\par
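Such metric queries can be sketched as simple aggregations over the server attributes recorded in the dependency graph. The attribute names and default thresholds below are illustrative assumptions, not the tool's actual schema:

```python
def false_redundancy(servers, min_locations=2, min_prefixes=2):
    """Evaluate redundancy metrics for one zone's authoritative servers.

    servers: list of dicts with 'name', 'location' and 'prefix' keys,
    as extracted from the dependency graph for the zone under test.
    Returns the measurements and whether the bad smell is present.
    """
    metrics = {
        "server_count": len(servers),
        "distinct_locations": len({s["location"] for s in servers}),
        "distinct_prefixes": len({s["prefix"] for s in servers}),
    }
    metrics["smell"] = (metrics["distinct_locations"] < min_locations
                        or metrics["distinct_prefixes"] < min_prefixes)
    return metrics

servers = [  # all four servers in one location, behind one prefix
    {"name": "ns1.example.com.", "location": "NYC", "prefix": "1.1.1.0/24"},
    {"name": "ns2.example.com.", "location": "NYC", "prefix": "1.1.1.0/24"},
    {"name": "dns1.example.net.", "location": "NYC", "prefix": "1.1.1.0/24"},
    {"name": "dns2.example.net.", "location": "NYC", "prefix": "1.1.1.0/24"},
]
m = false_redundancy(servers)
```

With four servers but a geographical redundancy of one, the smell is flagged even though the nominal server redundancy looks healthy.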
\begin{figure}[ht]
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0.4cm, clip=true, width=0.75\textwidth]{rule2.pdf}
\vspace{-2mm}
\caption{Refactoring Rule: MoveServerLocation.}
\label{fig:f11}
\vspace{-2mm}
\end{figure}
\vspace{-2mm}
It should be noted that the refactoring rule shown in Figure~\ref{fig:f11} is just one option for eliminating this bad smell; for instance, another rule could create a new server in a new location rather than moving an existing one, and we could consider the network number or BGP prefix instead of the location. It can also take more than one rule application to resolve the situation, so a single rule specifies an incremental improvement, which may have to be repeated or combined with others.
\vspace{-2mm}
\section{Related Work}
\label{sec:sec6}
\subsection{DNS Interdependencies and Misconfigurations}
Ramasubramanian et al. \cite{ram2005} demonstrate the far-reaching effects of DNS dependencies. Their results show that a domain name relies on 44 name servers on average. Deccio et al. \cite{deccio2010} examine name resolution behaviors further to create a formal model of name dependencies in the DNS and quantify the significance of such dependencies. Several surveys of production DNS deployments have been conducted \cite{pappas2009,kalafut2008,wessels2004} in which various misconfigurations are analyzed. So far, the main efforts in addressing the problem have focused on informing operators about the existence of DNS configuration errors, either through Internet RFCs \cite{rfc1912,rfc2182} or with directives set by specific organizations \cite{wg4}.
\vspace{-2mm}
\vspace{-2mm}
\subsection{DNS Troubleshooting}
\vspace{-2mm}
Although several DNS troubleshooting techniques and problem identification methods \cite{Pappas2004tool,decciodnsviz} have been proposed and several tools \cite{dnschecker,intodns,zonemaster} have been built, most of these methods and tools apply their detection techniques directly to the zone files through a predefined zone schema and a specified set of integrity constraints. Chandramouli and Rose \cite{chan} considered integrity constraints for Resource Records (RRs) from single and multiple zones. They found that many integrity constraints have to be satisfied across zones. The authors of \cite{casalicchio2012} proposed a set of metrics for evaluating the health of the DNS by measuring it along three dimensions, namely vulnerability, security and resilience. Most of these studies can detect only a subset of the DNS infrastructure-related configuration errors; on the other hand, they implement diagnostic tests that can identify errors related to application-specific resource records. They do not take into account the inter-dependencies stemming from the hierarchical nature of the DNS, zone administrators' practices and deployment choices.\par
Despite all the existing efforts, DNS configuration errors are still widespread today \cite{lu2014}, and one of the main reasons is the lack of adequate tools to help DNS operators identify and correct configuration errors in their own domains. Previous studies are largely based on empirical analysis, whereas in this paper we derive a formal operational model and a methodology to systematically identify misconfigurations and bad deployment choices in the form of operational bad smells.
\vspace{-2mm}
\vspace{-2mm}
\subsection{Bad Smells and Refactoring}
There is a large body of work on the identification of problems in software, databases and networks. Several books have been written on smells \cite{fowler1999} in the context of object-oriented programming. Marinescu \cite{mari} presented a metric-based approach to detect code smells. Alikacem and Sahrouri \cite{alikacem} proposed a language to detect violations of quality principles and smells in object-oriented systems. Mens and Tourwe \cite{mens} conducted a comprehensive survey of software refactoring. While software refactoring started with the restructuring of programs, the research has since extended to model refactoring \cite{emf}. \par
Our objectives are similar to those of previous DNS operation studies but our approach differs. Our method utilizes a set of measurable, structural and lexical properties defined over a DNS operational model to detect the smells in early stages of the DNS deployment. It also suggests graph-based refactoring rules as correction mechanisms for those bad smells.
\vspace{-2mm}
\vspace{-2mm}
\section {Conclusion and Future Work}
\vspace{-2mm}
\label{sec:sec7}
Currently, there is little consensus on the right measures and acceptable performance levels for the DNS as a whole with respect to availability, security, stability and resiliency. Individual operators and independent researchers have measured various aspects of the DNS, but to date little progress has been made in defining and implementing standard, system-wide metrics or acceptable service levels. Efforts to improve risk management related to DNS security, stability and resiliency must be guided by an improved ability to measure these characteristics and to assess the utility of programs and resource investments. A key enabler of improving this situation will be ensuring that the composite parts of DNS operations are correctly configured, deployed, instrumented and measured.\par
The method presented in this paper lays the basis for developing a visual advisory tool for system administrators to analyse, discover, and remedy operational bad smells. The diagnostic tool will consider several properties and metrics from the DNS operational model presented in this research in relation to the domain name whose zone is being modified. The tool, in a systematic process, can automatically direct the zone administrator to places in the zone file that contain potential design and deployment problems that may compromise the availability, resiliency or security of a domain name before the changes go into production. Zone administrators will be able to run several scenarios and apply several refactoring rules through the tool to determine the solution that best meets their local policies.\par
The tool is being designed to cope with zones of very large size and needs to be fast enough to be practically applicable. A set of consistent refactoring steps will be applied (or recommended) as graph transformation rules using available tools and techniques. Rule-based behaviour preservation \cite{bisztray2009} will be verified to make sure the suggested rules preserve the system functionality and improve its external qualities. Execution of the refactoring rules may introduce a complex sequence of operations to transform the model changes into physical resource relocations. In order to implement some of these refactoring rules, we need to take into consideration access control permissions and physical access to external sites, as well as coordination actions such as service level agreements (SLAs). These concerns will be tackled as part of the refactoring execution steps, and available techniques and tools \cite{henshin} are currently being investigated.
\vspace{-2mm}
\vspace{-2mm}
\vspace{-2mm}
\nocite{*}
\bibliographystyle{eptcs}
The General Data Protection Regulation (GDPR) grants all natural persons the right of access to their personal data if it is being processed by data controllers, such as tech companies, governments and mobile phone providers \citep{regulation2016regulation}. Data controllers are obliged to provide a copy of this personal data in a machine-readable format, and most large data controllers currently comply with this by providing users with the option to retrieve an electronic ``Data Download Package" (DDP). These DDPs contain all data collected by public and private entities during the course of citizens' digital lives and form a new treasure trove for social scientists \citep{king2011ensuring,boeschoten2020digital}. However, depending on which data controller is used, the data collected through DDPs can be deeply private and potentially sensitive. Therefore, collecting DDPs for scientific research raises serious privacy concerns, and it would not be in line with the principles listed in the GDPR if appropriate measures to protect the privacy of research participants donating their DDPs were not taken.
To protect the privacy of research participants while using DDPs for scientific research, different types of security measures should be taken, such as using shielded (cloud) environments to store the data and using privacy-preserving algorithms when analyzing the data. One key issue here is that the privacy of the participants should be preserved while their data is investigated by researchers, and that, in case of a data breach, it should not be possible to identify research participants, even though appropriate security measures are taken to prevent such breaches. For these reasons, a thorough de-identification procedure is imperative. Many different types of software are already available for this, such as DEDUCE \citep{menger2018deduce} and `de-identify Twitter' \citep{Coppersmith2017Twitter}. However, existing methods are not able to handle the highly complex and unstructured nature of DDPs. A particular characteristic of DDPs that a de-identification procedure should consider is the fact that the primary identifier of a natural person can be different for different DDPs and is often a username. Furthermore, some DDPs store private interactions of research participants with their contacts, which should be de-identified as well. Finally, in the case of personal data protected by the GDPR, `machine readable' unfortunately means neither equally structured nor easy to parse. Due to this great variety in content and structure, a new method for de-identification of DDPs is essential.
In this research project we developed an automatic de-identification approach that can deal with the variety in DDPs. In the development we focused on DDPs from Instagram but we believe that our approach forms the basis of the de-identification of most DDPs and can easily be extended in order to de-identify DDPs from other companies.
Our contributions are the following:
\begin{itemize}
\item We give insight in the structure and content of Instagram DDPs.
\item We have developed a de-identification algorithm and provide it open source.
\item We have created an evaluation data set and provide it open source.
\item We prove that our algorithm is able to find and de-identify a substantial amount of personal data within DDPs.
\item We provide the validation algorithm and ground truth used open source.
\end{itemize}
In the Background section we describe in more detail the structure of DDPs and we discuss how privacy of research subjects can be preserved when their DDPs are used for scientific research. In the Methods section we describe our de-identification strategy and how we deal with variety in Instagram DDPs. In addition, this section contains a description of the algorithm that we developed. In the Evaluation section we describe the creation of the evaluation data set. In the Results section we describe the outcomes of this evaluation procedure.
\section{Background}\label{s2}
The aim of the software introduced in this paper is to enable researchers to use DDPs for scientific research while preserving the privacy of participants. In this section, we explain in more detail the specific type of data that can be found in DDPs, define our aims in terms of data protection in more detail and discuss relevant existing literature and software.
\subsection{Data Download Packages}\label{s2s1}
Most large data controllers currently comply with the right of data access by providing users with the option to retrieve an electronic ``Data Download Package" (DDP). This DDP typically comes as a .zip-file containing .json, .html, .csv, .txt, .JPEG and/or .MP4 files in which all the digital traces left behind by the data subject with respect to the data controller are stored. The structure and content of a DDP varies per data controller, and even within data controllers there are differences among data subjects. Data subjects may use different features provided by the data controller and this is reflected by their DDP, for example, if a data subject does not share photos on Facebook, there will be no data folder with .JPEG files in the corresponding DDP.
One particular characteristic of DDPs is that their content and structure is often subject to change. For example, if a data subject downloads the DDP at a data controller, and repeats this a month later, differences may be found in the structure of the DDP. This can have several causes. The most straightforward cause is that the data subject generated additional data throughout this month. However, other important factors also play a role. First, data controllers can develop new features by which new types of data regarding the data subject are collected. Second, other features are phased out. Third, some data (for example search history) is only saved for a limited amount of time and is destroyed by the data controller after that period; in that case, it will no longer be present in the DDP. Finally, the GDPR is still relatively new and data controllers continue to optimize the processes used to transfer the relevant data to its subjects, leading to changes in the structure of DDPs.
\subsection{Instagram DDPs}\label{s2s2}
As the software in this research project was initially developed to de-identify Instagram DDPs, the structure of these DDPs has been thoroughly investigated. Instagram DDPs come as one or multiple .zip files (depending on the amount of data available on the data subject). The .zip file contains a number of folders in which all the visual content is stored, namely ``photos", ``videos", ``profile" and ``stories". The different folders refer to the different Instagram features used by the data subject to generate the visual content. For example, in the folder ``profile", a subject's profile picture can be found, while the folder ``stories" contains visual content generated using the ``stories" feature in Instagram, a form of ephemeral sharing. All textual information is collected in a number of .json files. Some of these files have a simple list structure. For example, the file ``likes.json" lists all the `likes' given by the subject, supplemented with a timestamp and the username of the Instagram account to which the `like' was given. Files such as `connections.json', `searches.json' and `seen\_content.json' have similar structures. Other files, such as `profile.json', are typically shorter but have a more complex structure, as they typically contain different auxiliary characteristics. Other files with such a structure are, for example, `account\_history.json', `devices.json' and `settings.json'. However, a substantial number of files contain data that is less structured. Examples of such files are `comments.json', `media.json', `messages.json' and `stories\_activities.json'. Furthermore, data subjects at Instagram are not necessarily natural persons. Data subjects at Instagram are identified by a single and unique username. Typically, natural persons have individual accounts with an accompanying username, but other entities, such as retail shops or bands, can also have an individual account with an accompanying username.
To summarize, software to de-identify Instagram DDPs should be able to handle:
\begin{itemize}
\item An ever changing file structure
\item Both visual and textual content
\item Different file formats
\item Files in highly structured and highly unstructured format and different variants in between.
\item Natural persons and other users which are identified by their unique username.
\end{itemize}
\subsection{Preserving privacy of research subjects}\label{s2s3}
If DDPs are collected for research purposes, researchers are also considered data controllers and the GDPR applies to them as well \citep[p.95]{van2020general}. Among other things, they are obliged to take technical and organisational security measures aiming to minimise the risk of data abuse \citep[p.112]{van2020general}.
To determine what type of security measures are appropriate in a situation where DDPs are collected for scientific research, the content of the DDPs and the purpose of the research play an important role. DDPs can contain various types of data. The data can be structured or unstructured and can come in many different formats. Each researcher can be interested in different aspects of the DDPs, depending on their research question. One researcher might be interested in the frequency of social media use during a Covid-19 lockdown \citep{zhong2021mental} and use Instagram DDPs to investigate this. Another researcher might be interested in political opinion and electoral success \citep{jungherrAnalyzingPoliticalCommunication2015,schoenPowerPredictionSocial2013} and use Twitter DDPs. A third researcher might be interested in personality profiling using Facebook ``likes" \citep{kosinskiPrivateTraitsAttributes2013}.
As can be seen from these examples, some researchers are interested in text, while others are interested in likes or visual content. Consider in more detail the situation of a researcher interested in extracting measures of political opinion from text found in DDPs. Although political opinion is considered a category of sensitive personal data \citep[p.79]{van2020general}, such data is allowed to be collected when necessary for scientific research purposes \citep[p.85]{van2020general}. However, as discussed, the researcher collecting this data is obliged to take appropriate security measures, such as incorporating data protection measures by design and by default.
Although the sensitive personal data is typically essential for the researcher, this is not necessarily the information from which identification of research subjects can occur. Research subject identification from a DDP in case of a data breach is much more likely to occur due to the direct personal data that can be found within a DDP. However, direct personal data is less likely to be relevant for the research. Therefore, incorporating a step to remove direct personal data from DDPs in the data processing phase when collecting DDPs for research purposes reduces the probability that a research subject is identified in case of a data breach while it will not affect the quality of the data needed to answer the research question.
\subsection{Related work}\label{s2s4}
To remove direct personal data from DDPs, the software should adhere to the five key characteristics of DDPs introduced in the previous subsection. A first step is to investigate to what extent existing software and literature are able to remove direct personal data from DDPs. A well-known approach is $k$-anonymity \citep{sweeney2002k}, which requires that each record in a data set is similar to at least $k-1$ other records on the potentially identifying variables \citep{el2008protecting}. However, parts of the DDPs are highly unstructured and thereby unique per DDP, so reaching $k$-anonymity is not feasible. Much research has focused on the de-identification of electronic health records, for example to enable their use in multi-center research studies \citep{kushida2012strategies}. Scientific open-source de-identification tools are available, such as DEDUCE \citep{menger2018deduce}, as well as commercial tools, such as Amazon Comprehend \citep{simon2018amazon} and CliniDeID \citep{liu2020identifying,heider2020comparative}. Similar initiatives have taken place to de-identify personal data in other types of data, for example for human resource purposes \citep{vanevaluating}. However, textual content generated from structured databases, such as those for electronic health records or human resources, typically has a higher level of structure than DDPs, and these tools do not handle key identifying information in DDPs, such as usernames or visual content; therefore, existing software was not sufficient for our purpose. Alternatively, software has been developed focusing on the removal of usernames, for example for Twitter data \citep{Coppersmith2017Twitter}. Furthermore, many different types of both open-source and commercial software are available to identify and blur faces in images and videos, such as Microsoft Azure \citep{Microsoft2021Azure} and Facenet-PyTorch \citep{esler2019}.
However, none of the investigated software was able to handle both textual and visual content and both structured and unstructured data within one procedure.
To summarize, a de-identification procedure is required that works appropriately when file structures change rapidly over time and when there are substantive differences in the level of structure within the files; that is able to handle different file formats and both visual and textual content; and that recognizes the username as the primary identifier of natural persons, while also accounting for other types of person-identifying information (PII), such as first names, phone numbers and e-mail addresses. The developed software aims for a level of protection such that the privacy of the DDP owners (the participants) is always preserved. Importantly, the goal is not to prepare the DDPs for public sharing; however, in the unlikely event of a data breach, the individual research participants should not be directly identifiable. Therefore, the de-identification procedure introduced here should always be supplemented with other security measures, such as using a shielded (cloud) environment to store the data and using privacy-preserving algorithms when analyzing the data.
\section{Method} \label{s3}
In this section we describe the approach and implementation of our de-identification algorithm. The developmental corpus for our algorithm is a small set of DDPs downloaded by the researchers. Although this data set was small, it already showed a lot of variety in structure and content, providing a useful basis for developing and testing our de-identification approach. All software is written in Python and publicly available at \url{https://github.com/UtrechtUniversity/anonymize-ddp}.
\subsection{Approach}
To de-identify a number of Instagram DDPs, three main steps are undertaken per DDP (see also Figure \ref{fig:anonymize}):
\begin{enumerate}
\item Preprocess DDP
\item De-identify text files:
\begin{itemize}
\item Detecting PII in structured text
\item Replacing PII with corresponding de-identification codes
\end{itemize}
\item De-identify media files by detecting and blurring human faces and text
\end{enumerate}
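The replacement sub-step of step 2 can be sketched as follows. The salted-hash scheme and the code format below are illustrative assumptions, not the actual implementation of the released software:

```python
import hashlib

def pseudonymize(text, pii_keys, salt="hypothetical-study"):
    """Replace every detected PII string with a stable de-identification code.

    pii_keys: iterable of PII strings (usernames, names, e-mail addresses)
    collected in the detection step; each is mapped to a short salted hash,
    so the same identifier receives the same code throughout the DDP.
    """
    for key in sorted(pii_keys, key=len, reverse=True):  # longest match first
        digest = hashlib.sha256((salt + key).encode()).hexdigest()[:8]
        text = text.replace(key, "__person_" + digest + "__")
    return text

msg = '{"sender": "john.doe99", "text": "hi jane_doe"}'
clean = pseudonymize(msg, ["john.doe99", "jane_doe"])
```

Replacing longest keys first avoids a shorter username clobbering part of a longer one; the salt keeps the codes consistent within one study while preventing trivial dictionary reversal.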
\begin{figure}[t]
\includegraphics[width=\textwidth]{flowanonymize.png}
\caption{The software takes a zipped DDP as input. Looping over the text (.json) files, all unique instances of PII are detected in the structured part of the data using pattern and label recognition. The extracted information, together with the most common Dutch first names and, optionally, the participant file, is added to a key file. All occurrences of the keys in the DDP are replaced with the corresponding hashes. Finally, occurrences of human faces and text in media files are detected and blurred. The software returns a de-identified copy of the DDP in the output folder.}\label{fig:anonymize}
\end{figure}
\subsection{Preprocessing}
The software consists of a wrapper and de-identification algorithms. The wrapper handles the pre-processing of the DDP and contains steps specific to Instagram. It unpacks the DDP and removes all files that are not considered relevant for social science research, like ``autofill.json" and ``account history.json". The user's profile ``profile.json" is de-identified separately in this pre-processing phase, as its content and structure deviate from the other text files in the DDP. After the DDP is cleaned, the PII needs to be extracted.
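A minimal sketch of this pre-processing step (unpacking the archive and dropping a skip-list of irrelevant files) is shown below; the skip-list entries and function name are illustrative, not the actual implementation.

```python
import zipfile
from pathlib import Path

# Files assumed irrelevant for social science research (names illustrative,
# following the examples mentioned in the text).
SKIP_FILES = {"autofill.json", "account_history.json"}

def preprocess_ddp(zip_path: str, out_dir: str) -> list[Path]:
    """Unpack a zipped DDP and drop files not needed for research."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    kept = []
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            if Path(member).name in SKIP_FILES:
                continue  # skip files on the removal list
            zf.extract(member, out)
            kept.append(out / member)
    return kept
```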
\subsection{De-identify text files}
\subsubsection{Detecting PII in structured text}
All text files in an Instagram DDP contain a nested structure of keys and values (see Figure \ref{fig:struct_text}). To extract PII from these texts, we determined which key-value combinations and patterns are indicative of PII.
\begin{figure}[t]
\includegraphics[scale=0.4]{struct_text_ex.png}
\caption{Example of key-value structure in .json files with structured and unstructured text.}\label{fig:struct_text}
\end{figure}
Per .json file, the algorithm recursively parses the nested structure, each time checking whether the specific structure matches (1) a label: username value combination, (2) a username label: timestamp value combination, or (3) a list of length X with at least one timestamp value and one username value.
To illustrate the first pattern, each conversation between two or more users stored in the ``messages.json" file is a dictionary, containing multiple sub-dictionaries per sent message. Within this `smallest structure' there is always a label `sender' followed by the username. The algorithm will look for `sender' and other similar standard labels. When the corresponding value matches a username (i.e., a string of between 3 and 30 characters without special characters other than underscores or periods), it will be added to the dictionary.
The second situation can be found in the ``connections.json" file, a dictionary with multiple types of connection labels (e.g., `close\_friends'). Subsequently, each label is made up of another dictionary with all corresponding usernames as labels and timestamps (moment of connection) as values. If the label matches a username and the value a timestamp, the username labels will be saved to the dictionary.
Finally, an example of the (most occurring) third option is the ``comments.json" file. Here you have the various commenting labels (e.g., `media\_comments'), each containing a list of lists. The smallest structure in this file is a list with the time of the comment, the comment, and the username of the owner of the media. After checking if one of the items is a timestamp, the algorithm checks if one of the other items matches a username pattern. If this is the case, the username will be added to the dictionary.
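The three structural patterns above can be sketched as a single recursive walk over the parsed .json data; the label set and regular expressions below are simplified assumptions, not the exact ones used by the software.

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9._]{3,30}$", re.IGNORECASE)
# Treat any ISO-like date string as a timestamp for this sketch.
TIMESTAMP_RE = re.compile(r"^\d{4}-\d{2}-\d{2}[T ]")
SENDER_LABELS = {"sender", "author", "username"}  # illustrative label set

def collect_usernames(node, found=None):
    """Recursively walk a parsed .json structure and collect usernames
    matching the three key-value patterns described in the text."""
    if found is None:
        found = set()
    if isinstance(node, dict):
        for key, value in node.items():
            # Pattern 1: a known label whose value looks like a username.
            if key in SENDER_LABELS and isinstance(value, str) and USERNAME_RE.match(value):
                found.add(value)
            # Pattern 2: a username-shaped label paired with a timestamp value.
            elif isinstance(value, str) and TIMESTAMP_RE.match(value) and USERNAME_RE.match(key):
                found.add(key)
            else:
                collect_usernames(value, found)
    elif isinstance(node, list):
        # Pattern 3: a list containing at least one timestamp; the other
        # string items are checked against the username pattern.
        strings = [x for x in node if isinstance(x, str)]
        if any(TIMESTAMP_RE.match(s) for s in strings):
            for s in strings:
                if not TIMESTAMP_RE.match(s) and USERNAME_RE.match(s):
                    found.add(s)
        for item in node:
            collect_usernames(item, found)
    return found
```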
It should be noted that there is also a fourth way of extracting usernames. Even though most usernames found in Instagram DDPs match the above-described patterns, usernames can also be mentioned in free text. In this case, there is no standard pattern to look for. Therefore, the algorithm will search for tagged people (i.e., `@username') and shared media (i.e., `Shared username's story') using regular expressions.
Similar to usernames, the text files are checked for patterns (i.e., `label: PII') and free text indicative of email-addresses and phone numbers. Different from the extraction of usernames, the regular expressions used to find email-addresses, phone numbers, and URLs are not applied in the `PII-identifying phase', but are explicitly added to the final dictionary. This way, individual occurrences are not all added to the dictionary, which would increase its size and reduce efficiency (during the de-identification phase (see below), the algorithm needs to look for each key separately). Instead, by only adding the search patterns to the dictionary, the de-identification process remains efficient and becomes more inclusive. An important side note is that the regular expressions will only look for Instagram URLs, because most of the other URLs in the DDPs represent links to public websites. These cannot be traced to an individual person and they might be valuable for social science research. Therefore, these URLs are left unchanged.
As (first) names occur exclusively in free text and not in a structured format, it was not possible to systematically extract this type of PII. Therefore, instead of working bottom-up, we applied a top-down approach. After all text files have been checked and the key dictionary is filled, a list of the $10,000$ most common Dutch names is added to this dictionary (which we obtained from the DEDUCE software \citep{menger2018deduce}). Of course, it is also possible to add another list (e.g., for another country), making the algorithm applicable to multiple languages.
\subsubsection{De-identifying PII in text}
After the PII is extracted and added to the dictionary, a PII-specific de-identification code is assigned. Usernames and names receive a unique hexadecimal code. Note that the same name will always receive the same code; this way it is still possible to perform a network analysis after anonymization is complete. Additionally, it is possible to provide the algorithm with a list of (user)names (and/or other information) and specify their corresponding codes yourself. This might be interesting for scientific research in which the (user)names of participants need to be (clearly) distinguishable from other (user)names. In short, (user)names are pseudonymized: they all receive their own specific code and can therefore be reverted if the dictionary is saved. It is up to the user to decide whether this dictionary is saved. On the other hand, email-addresses, phone numbers and URLs are anonymized, as they are hashed with the general `\_\_emailaddress', `\_\_phonenumber', and `\_\_url' codes, respectively.
For each DDP, the algorithm will look per PII listed in the dictionary for its occurrences, and replace it with the corresponding de-identification code. The replacement extends from file content to file/folder names, resulting in an entirely de-identified DDP.
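A minimal sketch of this lookup-and-replace step, assuming a truncated SHA-256 digest as the ``unique hexadecimal code'' (the exact hashing scheme is an assumption here):

```python
import hashlib

# Category-level anonymization codes, as named in the text.
ANON_CODES = {"email": "__emailaddress", "phone": "__phonenumber", "url": "__url"}

def username_code(name: str) -> str:
    """Deterministic hexadecimal code: the same (user)name always maps to the
    same code, so network structure survives de-identification. Lower-casing
    reflects the case-insensitivity of Instagram usernames."""
    return "__" + hashlib.sha256(name.lower().encode()).hexdigest()[:12]

def deidentify(text: str, key_dict: dict[str, str]) -> str:
    """Replace every key with its code; longer keys first, so a name that is
    a substring of another key is not replaced prematurely."""
    for key in sorted(key_dict, key=len, reverse=True):
        text = text.replace(key, key_dict[key])
    return text
```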
\subsection{De-identifying PII in media}
Besides being able to link textual data to specific individuals, individuals may also be identified by their presence in the images or videos in a DDP. In addition, the images or videos can contain text which may include usernames, person names or other sensitive information. We detect faces in visual content using Multi-task Cascaded Convolutional Networks \cite{zhang2016facial} in Facenet Pytorch \cite{esler2019} and blur all occurrences using the Python Imaging Library \cite{vankemenade2020}. We detect text using a pre-trained \cite{Yadong2018} EAST text detection model \cite{zhou2017east} and blur all occurrences using the Gaussian blur option provided by OpenCV \cite{opencv2000}.
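As an illustration of the blurring step only, the sketch below blurs rectangular regions by mean pooling, assuming bounding boxes have already been produced by a detector such as MTCNN or EAST; the actual software uses Gaussian blur instead.

```python
import numpy as np

def blur_regions(img: np.ndarray, boxes, block: int = 8) -> np.ndarray:
    """Blur each detected region by mean pooling (a simple stand-in for the
    Gaussian blur used by the actual software). `boxes` are (x1, y1, x2, y2)
    pixel coordinates, e.g. from a face or text detector."""
    out = img.copy()
    for x1, y1, x2, y2 in boxes:
        region = out[y1:y2, x1:x2]
        h, w = region.shape[:2]
        for r in range(0, h, block):
            for c in range(0, w, block):
                tile = region[r:r + block, c:c + block]
                # Replace each tile by its mean, destroying fine detail.
                region[r:r + block, c:c + block] = tile.mean(axis=(0, 1))
    return out
```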
\section{Evaluation}
\subsection{Data-set}
To evaluate the performance of the software introduced in Section \ref{s3}, a group of $11$ participants generated Instagram DDPs by actively using a new Instagram account for approximately a week. The participants followed guidelines instructing them to actively generate the type of information that the software aims to de-identify.
The participants were instructed not to share any of their personal information via the Instagram accounts. Instead, participants were instructed to share either fake or publicly available information, such as URLs of news websites, images of celebrities, or likes and follows of verified Instagram accounts. As the final data-set does not contain any personal information, it is publicly available at \url{http://doi.org/10.5281/zenodo.4472606}.
\begin{table}[ht]
\begin{tabular}{llrrrrrr}
\hline \hline
Visual \\ \hline
& & Direct & Photos & Profile & Stories & Videos & Total \\ \hline
Files \\
& .JPEG & 11 & 525 & 11 & 176 & - & 723 \\
& .MP4 & - & - & - & 92 & 15 & 107 \\ \hline
Faces \\
& .JPEG & 20 &1046 & - & 290 & - & 1,356 \\
& .MP4 & - & - & - & 163 & 36 & 199 \\ \hline
Usernames \\
& .JPEG & - & 49 & - & 255 & - & 304 \\
& .MP4 & - & - & - & 105 & 21 & 126 \\ \hline \hline
Textual\\ \hline
& DDP\_id & E-mail & Name & Phone & URL & Username & Total \\ \hline
comments.json & - & 28 & 105 & 29 & 1 & 261 & 424 \\
connections.json & - & - & - & - & - & 1,222 & 1,222\\
likes.json & - & - & - & - & - & 883 & 883 \\
media.json & - & 28 & 54 & 9 & - & 43 & 134 \\
messages.json & 294 & 152 & 421 & 139 & 267 & 2,659 & 3,932\\
profile.json & 18 & 10 & - & - & 10 & 1 & 39 \\
saved.json & - & - & - & - & - & 6 & 6 \\
searches.json & - & - & - & - & - & 314 & 314 \\
seen\_content.json & - & - & - & - & - & 3,143 & 3,143\\
shopping.json & - & - & - & - & - & 1 & 1 \\
stories\_activities.json & - & - & - & - & - & 35 & 35 \\\hline
total & 312 & 218 & 580 & 177 & 278 & 8,568 &10,133\\
\hline
\end{tabular}\\
\caption{Descriptive statistics of visual and textual content in the generated Instagram DDP data-set}
\label{tab:descriptive}
\end{table}
The final data-set comprised $11$ Instagram DDPs, containing a total of $723$ .JPEG files (images), on which $1,356$ faces and $304$ usernames were identified, and $107$ videos, on which $199$ faces and $126$ usernames were identified. In addition, the .json files contain $8,866$ usernames, $904$ first names, $218$ e-mail addresses, $178$ phone numbers and $278$ URLs. See Table \ref{tab:descriptive} for more detailed descriptive statistics regarding the visual and textual content of the generated Instagram DDP data-set.
\subsection{Approach for textual content}
To evaluate the performance of the de-identification procedure in terms of textual content, we consider PII in the form of usernames, first names, e-mail addresses, phone numbers and URLs.
The first step of the evaluation procedure is establishing a \textit{ground truth}. Using the $11$ Instagram DDPs, a human rater manually labeled all PII categories per text file, per DDP\footnote{N.B. Establishing the ground truth only has to be done once. The labeling output, together with the $11$ Instagram DDPs, is made available for research.}. To make the counting of the labels more efficient and less prone to errors, the labeling was done in Label-Studio (Figure \ref{fig:labelstudio}).
\begin{figure}[t]
\includegraphics[scale=0.7]{labelstudio.PNG}
\caption{An example of how labeling a comments.json file would look like in Label-Studio.}\label{fig:labelstudio}
\end{figure}
Label-Studio returns an output file (result.json) that consists of multiple dictionaries; one per file (e.g., `messages.json'), per package (e.g., `100billionfaces\_20201021'). These dictionaries contain all the labeled text-items (e.g., `horsesarecool52') and corresponding labels (e.g., `Username') present in that specific file (Figure \ref{fig:validation}).
Based on the ground truth, the number of PII categories per text file, per DDP can be determined. Next, using the key files created in the de-identification process, the number of corresponding hashes present in the de-identified DDPs are also calculated per text file, per DDP.
\begin{figure}[t]
\includegraphics[scale=0.5]{flowvalidation.png}
\caption{The raw DDPs in which all PII categories are labeled (i.e., the ground truth) is compared with the de-identified DDPs. The software counts the number of PII categories (total), correctly hashed PII (TP), falsely hashed information (FP), and unhashed PII (FN). Subsequently, a recall-, precision-, and F1-score can be calculated.}\label{fig:validation}
\end{figure}
Comparing the PII occurrences in the raw DDPs with the PII and corresponding hash occurrences, the software can determine the number of times a type of PII was correctly de-identified (True Positive, TP), the number of times a piece of text was incorrectly de-identified (False Positive, FP), and the number of times PII was not de-identified (False Negative, FN). Finally, the recall, precision, and F1-score are calculated.
The username is the most important type of PII in DDPs; this holds not only for Instagram but also for DDPs of many other data controllers, as usernames are typically unique and can be related to the data subject directly. The software distinguishes between two types of usernames. First, the researcher can provide a list with the usernames of all research participants; these usernames should be replaced with participant numbers. The second type are all other usernames that appear in the DDPs, which should be replaced by a unique identification code. For both types it holds that they can be correctly de-identified (TP), not be de-identified (FN), or a random piece of text can be replaced by the participant number or the hash (FP). In addition, when a username of a participant is replaced by a wrong participant number or a unique identification code, this is also considered a FN. Researchers intending to use this software can decide for themselves whether they want to include a list with participants.
First names should be replaced by a unique identification code (TP). If first names are not replaced, they are flagged as false negatives. In addition, false positives can occur, for example if a hash is applied to a word that is mistaken for a first name, such as the word ``ben" in the Dutch sentence ``Ik ben vandaag jarig." In addition to the list containing the $10,000$ most frequently used Dutch first names that has been used in the EHR de-identification software DEDUCE \citep{menger2018deduce}, we added the first names of the research participants to the list. Furthermore, the software allows you to decide whether you want to hash only names that appear in the names list and start with a capital in the DDP, or whether you also want to hash names that do not start with a capital.
\subsection{Approach for visual content}
To annotate visual content, a procedure was carried out by hand, as for each file it had to be determined whether one or multiple identifiable faces were present and, for each detected face, whether it was indeed de-identified by the software. To determine whether a face was identifiable, we used a pragmatic definition: a face is identifiable if at least three out of five facial landmarks are visible (right eye, left eye, nose, right mouth corner and left mouth corner) \cite{zhang2016facial}. This definition will not hold if a person, for example, actively tries to identify individuals by combining multiple images in which a person is partly visible, but it provides sufficient quality in the sense that, in case of a data leak, the person in the images is not directly identifiable.
For each piece of visual content, each identified face is considered a single observation, which can be either appropriately de-identified (TP) or not (FN). Note that although a video consists of multiple frames in each of which a face may be identifiable, a single frame showing an identifiable face following our definition results in one FN for this face in the video. As determining whether a face is identifiable is performed by a human rater and this distinction is sometimes not straightforward, the questionable cases were independently rated by two raters and classified based on consensus. In addition, a set of $100$ .JPEG files and $20$ .MP4 files was independently annotated by two separate annotators. On the .JPEG files, $204$ faces were identified, of which $193$ were identified by both raters ($94.6\%$). On this subset, a Cohen's $\kappa$ inter-rater reliability of $1$ was calculated, so the raters fully agreed on which faces were appropriately de-identified and which were not. For the .MP4 files, $49$ faces were identified, of which $41$ were identified by both raters ($83.7\%$). On this subset, a Cohen's $\kappa$ inter-rater reliability of $0.62$ was calculated. The sample of faces was much smaller for .MP4 than for .JPEG, and it was apparently also considerably more difficult to determine whether a face was appropriately de-identified in a moving image than in a still image.
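The inter-rater statistic can be computed directly from the two raters' binary judgments; a minimal sketch:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary labels
    (1 = appropriately de-identified, 0 = not)."""
    n = len(rater_a)
    # Observed agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters constant and identical
    return (p_o - p_e) / (1 - p_e)
```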
In addition, particularly on Instagram, visual content can contain usernames. The software is not able to distinguish between usernames and other types of text; therefore, usernames in visual content can only be detected and de-identified, and no distinction between research participants and other usernames is made. Appropriately de-identified usernames are counted as true positives (TP) and usernames not de-identified are counted as false negatives (FN). False positives cannot be quantified in the current procedure.
\subsection{Evaluation criteria}
For each category of PII in each file type in the set of DDPs regarding textual content, we count the number of TP, FP and FN. For the visual content, we calculate the TP and FN. We use scikit-learn to further evaluate the performance of the procedure on the different aspects \citep{pedregosa2011scikit}. First, we calculate the recall, or sensitivity, as
\begin{equation}
\text{Recall} = \frac{TP}{TP + FN}.
\end{equation}
Here, we measure the ratio of the correctly de-identified cases to all cases that should have been de-identified (i.e., the ground truth). Each false negative potentially results in not preserving the privacy of a research participant, and therefore a high recall value is particularly important. The precision is calculated as
\begin{equation}
\text{Precision} = \frac{TP}{TP + FP}.
\end{equation}
Precision shows the ratio of correctly de-identified observations to the total number of de-identified observations; a high precision illustrates that the amount of information lost due to unnecessary de-identification is limited. Given that DDPs are typically collected to analyze aspects such as the free text or the images, losing a lot of this information in the de-identification process challenges the intended research goal. Finally, we calculate the F1-score
\begin{equation}
\text{F1-score} = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}},
\end{equation}
which combines the precision and recall and considers both false positives and false negatives. Note that we do not calculate the accuracy, as the number of true negatives cannot be determined appropriately in our data-set.
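These three scores can be computed from the confusion counts in a few lines:

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Recall, precision and F1 from the confusion counts defined above."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```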
\section{Results}
\subsection{Initial Results}
In Table \ref{tab:results1}, the results of applying the software to our Instagram DDP data-set can be found, where we chose settings including a participant file and capital sensitivity for first names. Regarding the visual content, we can conclude that a large proportion of faces in images is appropriately detected and blurred, while in videos this proportion is substantially lower. Apparently, faces are harder for the detection algorithm to find when the images are moving.
Regarding textual content, we can conclude that email addresses are appropriately detected and anonymized throughout all files within the DDPs. Regarding names, phone numbers and URLs, we can conclude that a substantial amount of this PII is not detected by the algorithm throughout the different files. The quality of the anonymization of usernames differs considerably depending on the file. Only in the file `messages.json' are false positives detected. Furthermore, relatively low recall values are measured for the files `media.json' and `saved.json', although these files have a small number of total observations.
By critically investigating the results in Table \ref{tab:results1}, and examining which coding decisions led to the most (negatively) outstanding results, improvements to the code were made.
\begin{table}
\begin{tabular}{llrrrrrrr}
\hline \hline
Visual \\ \hline
&& Total & TP & FN & FP & Recall & Precision & F1 \\ \hline
Faces \\
& .JPEG & 1,356 & 1,205 & 151 & - & 0.89 & - & - \\
& .MP4 & 199 & 131 & 68 & - & 0.66 & - & - \\ \hline
& Total & 1,555 & 1,336 & 219 & - & 0.86 & - & - \\
Usernames \\
& .JPEG & 304 & 302 & 2 & - & 0.99 & - & - \\
& .MP4 & 126 & 125 & 1 & - & 0.99 & - & - \\ \hline
& Total & 430 & 427 & 3 & - & 0.99 & - & - \\ \hline \hline
Textual \\ \hline
& file & total & TP & FN & FP & Recall & Precision & F1 \\
Email \\
& comments.json & 28 & 28 & 0 & 0 & 1 & 1 & 1 \\
& media.json & 28 & 28 & 0 & 0 & 1 & 1 & 1 \\
& messages.json & 152 & 152 & 0 & 0 & 1 & 1 & 1 \\
& profile.json & 10 & 10 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 218 & 218 & 0 & 0 & 1 & 1 & 1 \\ \hline \hline
Name \\
& comments.json & 105 & 61 & 44 & 0 & 0.5619 & 0.9365 & 0.7024 \\
& media.json & 54 & 41 & 13 & 0 & 0.7593 & 1 & 0.8530 \\
& messages.json & 427 & 386 & 41 & 0 & 0.9040 & 0.9836 & 0.9374 \\
& profile.json & 10 & 6 & 4 & 0 & 0.6 & 1 & 0.75 \\ \hline
& total & 596 & 494 & 102 & 0 & 0.8255 & 0.9798 & 0.8936 \\ \hline \hline
Phone \\
& comments.json & 29 & 26 & 3 & 0 & 0.4828 & 1 & 0.6512 \\
& media.json & 9 & 7 & 2 & 0 & 0.4444 & 1 & 0.6154 \\
& messages.json & 139 & 121 & 18 & 0 & 0.3022 & 1 & 0.4641 \\ \hline
& total & 177 & 154 & 23 & 0 & 0.3390 & 1 & 0.5063 \\ \hline \hline
URL \\
& comments.json & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
& messages.json & 267 & 168 & 99 & 0 & 0.6180 & 1 & 0.7639 \\
& profile.json & 10 & 10 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 278 & 178 & 100 & 0 & 0.6295 & 1 & 0.7726 \\ \hline \hline
Username \\
& comments.json & 261 & 252 & 9 & 0 & 0.9655 & 1 & 0.9813 \\
& connections.json & 1,222 & 1,190 & 32 & 0 & 0.9722 & 1 & 0.9858 \\
& likes.json & 883 & 823 & 60 & 0 & 0.9320 & 1 & 0.9611 \\
& media.json & 43 & 33 & 10 & 0 & 0.7674 & 0.7907 & 0.7788 \\
& messages.json & 2,947 & 2,835 & 112 & 50 & 0.9067 & 0.9500 & 0.9196 \\
& profile.json & 10 & 10 & 0 & 0 & 1 & 1 & 1 \\
& saved.json & 6 & 4 & 2 & 0 & 0.6667 & 1 & 0.8 \\
& searches.json & 314 & 305 & 9 & 0 & 0.9713 & 1 & 0.9855 \\
& seen\_content.json & 3,144 & 2,619 & 525 & 0 & 0.8330 & 0.9876 & 0.8931 \\
& shopping.json & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\
& stories\_activities.json & 35 & 34 & 1 & 0 & 0.9714 & 1 & 0.9851 \\ \hline
& total & 8,866 & 8,106 & 760 & 50 & 0.89567 & 0.9775 & 0.9324 \\ \hline \hline
\end{tabular}\\
\caption{Results in terms of TP, FP, FN, recall, precision and F1.}
\label{tab:results1}
\end{table}
\subsection{Further improvements}
The first improvement relates to the `profile.json' file. Here, the entire entry found after `name' is now added to the key file, and the same key is used for the DDP username. In this way, the participant can be recognized throughout the complete DDP by either their username or their name. A second improvement was made after further inspecting the relatively large number of false positives in the `seen\_content.json' file. Based on this, the list of labels that should be exempted from hashing was extended. Based on a more thorough inspection of the types of usernames that were not detected by the algorithm, the username format was adjusted in such a way that usernames are detected as such when they contain at least three characters; the minimum in the previous version of the code was six characters. After further inspecting the false-positive first names, the names `Van', `Door' and `Can' were removed from the list with the $10,000$ most frequently used first names, because they also represent words commonly used in free text, resulting in many FPs. Finally, the hash function for usernames was made case-insensitive, as Instagram does not distinguish between lowercase and uppercase in usernames, while the software initially generated a different hash when an uppercase letter was used somewhere in the username compared to the same username without the uppercase letter.
The improved script has fewer false negatives regarding names, phone numbers and URLs (see Table \ref{tab:results2}). Regarding usernames, both the number of false negatives and the number of false positives have decreased substantially.
\begin{table}
\begin{tabular}{llrrrrrrr}
\hline \hline
& file & total & TP & FN & FP & Recall & Precision & F1 \\ \hline
DDP\_id \\
& messages.json & 294 & 294 & 0 & 0 & 1 & 1 & 1 \\
& profile.json & 18 & 18 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 312 & 312 & 0 & 0 & 1 & 1 & 1 \\ \hline \hline
E-mail \\
& comments.json & 28 & 28 & 0 & 0 & 1 & 1 & 1 \\
& media.json & 28 & 28 & 0 & 0 & 1 & 1 & 1 \\
& messages.json & 152 & 152 & 0 & 0 & 1 & 1 & 1 \\
& profile.json & 10 & 10 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 218 & 218 & 0 & 0 & 1 & 1 & 1 \\ \hline \hline
Name \\
& comments.json & 105 & 98 & 7 & 0 & 0.9333 & 1 & 0.9654 \\
& media.json & 54 & 45 & 9 & 0 & 0.8333 & 1 & 0.9042 \\
& messages.json & 421 & 385 & 36 & 0 & 0.9145 & 1 & 0.9509 \\ \hline
& total & 580 & 528 & 52 & 0 & 0.9103 & 1 & 0.9519 \\ \hline \hline
Phone \\
& comments.json & 29 & 29 & 0 & 0 & 1 & 1 & 1 \\
& media.json & 9 & 9 & 0 & 0 & 1 & 1 & 1 \\
& messages.json & 139 & 138 & 1 & 24 & 0.9928 & 0.8519 & 0.9169 \\ \hline
& total & 177 & 176 & 1 & 24 & 0.9943 & 0.88 & 0.9337 \\ \hline \hline
URL\\
& comments.json & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\
& messages.json & 267 & 267 & 0 & 0 & 1 & 1 & 1 \\
& profile.json & 10 & 10 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 278 & 278 & 0 & 0 & 1 & 1 & 1 \\ \hline \hline
Username \\
& comments.json & 261 & 258 & 3 & 0 & 0.9885 & 1 & 0.9940 \\
& connections.json & 1,222 & 1,219 & 3 & 0 & 0.9975 & 1 & 0.9988 \\
& likes.json & 883 & 881 & 2 & 0 & 0.9977 & 1 & 0.9989 \\
& media.json & 43 & 42 & 1 & 0 & 0.9767 & 1 & 0.9881 \\
& messages.json & 2,659 & 2,658 & 1 & 2 & 0.9846 & 0.9868 & 0.9847 \\
& profile.json & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\
& saved.json & 6 & 6 & 0 & 0 & 1 & 1 & 1 \\
& searches.json & 314 & 313 & 1 & 0 & 0.9968 & 1 & 0.9984 \\
& seen\_content.json & 3,143 & 3,137 & 6 & 0 & 0.9981 & 1 & 0.9990 \\
& shopping.json & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\
& stories\_activities.json & 35 & 35 & 0 & 0 & 1 & 1 & 1 \\ \hline
& total & 8,568 & 8,551 & 17 & 3 & 0.9932 & 0.9985 & 0.9952 \\ \hline \hline
\end{tabular}\\
\caption{Results in terms of TP, FP, FN, recall, precision and F1 after improvements to the script have been made.}
\label{tab:results2}
\end{table}
\section{Conclusions and future work}
Data Download Packages (DDPs) contain all data collected by public and private entities during the course of citizens' digital lives. Although they form a treasure trove for social scientists, they contain data that can be deeply private. To protect the privacy of research participants who let their DDPs be used for scientific research, we developed de-identification software that is able to anonymize and pseudonymize data that follow typical DDP structures.
We evaluated the performance of the de-identification software on a set of Instagram DDPs. From this application we conclude that the software is particularly well suited to anonymize and/or pseudonymize usernames, e-mail addresses and phone numbers in structured and unstructured text files. In addition, it was able to appropriately anonymize faces in .JPEG files. Appropriate anonymization and/or pseudonymization of first names appeared more challenging, particularly because some first names can also appear as words in open text and vice versa. However, when applying the software, researchers can decide whether their focus is on precision or on recall and take measures to accommodate this. Furthermore, anonymizing faces in .MP4 files appeared more challenging, typically because in moving images different parts of faces can be visible at different moments, together providing sufficient information to identify a face, and because Instagram provides so-called `filters', which also make it more difficult for the software to detect a face for de-identification.
The aim of the software was to remove identifiers from DDPs in such a way that research participants cannot be identified when the data is manually investigated, or in the undesired situation that someone gains unauthorized access to the data. Appropriate safety measures to prevent this remain required, but based on the results of the validation we believe that the intended goal of this software is met.
If researchers intend to use this software for their own research projects, a number of issues should be taken into account. A first issue is that the current script has primarily been developed to de-identify Instagram DDPs. However, the software has been written in such a way that, with small adjustments, it can be applied to DDPs from other data controllers. In future work, we could provide some of these adjustments for specific other data controllers to illustrate how this works in practice, but we also encourage other researchers and software developers to develop such adjustments and share them with the community. A second issue is that different researchers might have different research intentions with the collected data, and that based on this, adjustments to the software might be desired. For example, a sociologist interested in what types of accounts are followed and liked by the research participant might not want to pseudonymize all usernames present in the DDP, but instead only the usernames of the participants. A third issue to consider is that, if a higher level of security is desired, adjustments can be made in a quite straightforward manner. For example, one can choose not to save the key file, or to use hashing and blurring algorithms with higher safety standards.
A further issue to note is that, because faces in images are blurred when this software is used, it is no longer possible to apply, for example, emotion detection algorithms to the faces in the images in the DDPs under investigation. If emotion detection of faces is a goal of the researcher, one can consider replacing the blurring part of the software with a procedure that replaces the face with a deepfake of the face \citep{korshunov2018deepfakes}. With such an algorithm, it remains possible to detect the emotions on faces while protecting the privacy of the participants. However, this will inevitably also introduce some noise.
Another remark regarding the blurring of visual content is that this part of the software could be further refined so that it distinguishes between usernames and regular text and blurs only the usernames. In addition, it could be refined in such a way that text written at, for example, a 45$^{\circ}$ or 90$^{\circ}$ angle is also evaluated as a single sequence; currently, angled text is typically evaluated in small separate pieces. A last point of attention is that sound in .MP4 files is currently removed. This might be a good thing, as it thereby also removes possibly identifying sounds, but it might be disadvantageous for certain purposes. Although the use of digital trace data for scientific purposes, and the appropriate de-identification of such data, are fields still in their infancy, our software contributes substantially to privacy-preserving analysis of digital trace data collected with DDPs.
Given a dataset $A \in \mathbb{R}^{m \times d}$, which consists of $m$ individuals with $d$-dimensional features,
methods for preprocessing or prediction from $A$ often use the covariance matrix $M:= A^\top A$ of $A$.
In many such applications one computes a rank-$k$ approximation to $M$, or finds a matrix {\em close} to $M$ with a specified set of eigenvalues $\lambda=(\lambda_1,\ldots,\lambda_d)$ \cite{shikhaliev2019low, hubert2004robust, shen2008sparse}.
Examples include the rank-$k$ covariance matrix approximation problem where one seeks to compute a rank-$k$ matrix that minimizes a given distance to $M$,
and the subspace recovery problem where the goal is to compute a rank-$k$ projection matrix $H = V_k V_k^\top$, where $V_k$ is the $d \times k$ matrix whose columns are the top-$k$ eigenvectors of $M$.
These matrix approximation problems are ubiquitous in ML and have a rich algorithmic history; see \cite{james2013introduction,vogels2019powersgd,candes2010matrix,blum2020foundations}.
In some cases, the rows of $A$ correspond to sensitive features of individuals and the release of solutions to aforementioned matrix approximation problems may reveal their private information, e.g., as in the case of the Netflix prize problem \cite{bennett2007netflix}.
Differential privacy (DP) has become a popular notion to quantify the extent to which an algorithm preserves the privacy of individuals \cite{dwork2006differential}.
Algorithms for solving low-rank matrix approximation problems have been widely studied under DP constraints \cite{kapralov2013differentially, blum2005practical, dwork2014analyze, dwork2006calibrating}.
Notions of DP studied in the literature include $(\varepsilon, \delta)$-DP \cite{dwork2006calibrating, hardt2012beating, hardt2013beyond, dwork2014analyze} which is the notion we study in this paper, as well as pure $(\varepsilon, 0)$-DP \cite{dwork2006calibrating, kapralov2013differentially, amin2019differentially, leake2020polynomial}.
To define a notion of DP in problems involving covariance matrices, following \cite{blum2005practical, dwork2006calibrating}, two matrices $M=A^\top A$ and $M' = A'^\top A'$ are said to be {\em neighbors} if they arise from $A, A'$ which differ by at most one row.
Moreover, as is oftentimes done, we assume that each row of the datasets $A, A'$ has norm at most $1$.
For any $\varepsilon, \delta \geq 0$, a randomized mechanism $\mathcal{A}$ is $(\varepsilon, \delta)$-differentially private if for all neighbors $M, M' \in \mathbb{R}^{d \times d}$, and any measurable subset $S$ of outputs of $\mathcal{A}$, we have $\mathbb{P}(\mathcal{A}(M) \in S) \leq e^\varepsilon \mathbb{P}(\mathcal{A}(M') \in S) + \delta$.
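To make the neighboring relation concrete, the following Python sketch (our illustration, not part of the paper) checks numerically that replacing one unit-norm row of $A$ changes $M = A^\top A$ by at most $2$ in Frobenius norm; this bounded sensitivity is what makes additive-noise mechanisms applicable.

```python
import numpy as np

rng = np.random.default_rng(0)

max_diff = 0.0
for _ in range(200):
    m, d = 50, 8
    A = rng.standard_normal((m, d))
    A /= np.maximum(np.linalg.norm(A, axis=1, keepdims=True), 1.0)  # rows of norm <= 1
    A_neighbor = A.copy()
    row = rng.standard_normal(d)
    A_neighbor[0] = row / max(np.linalg.norm(row), 1.0)             # neighbor: one row replaced
    diff = np.linalg.norm(A.T @ A - A_neighbor.T @ A_neighbor, "fro")
    max_diff = max(max_diff, diff)

# M - M' = a a^T - a' a'^T for the two differing rows, so with unit-norm rows
# the Frobenius sensitivity of M = A^T A is at most 2 (in fact at most sqrt(2)).
assert 0 < max_diff <= 2.0
```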
{\bf The problem.} We consider a class of problems where one wishes to compute an approximation to a symmetric $d \times d$ matrix under $(\varepsilon,\delta)$-differential privacy constraints.
Specifically, given $M = A^\top A$ for $A \in \mathbb{R}^{m \times d}$, together with a vector $\lambda$ of target eigenvalues $\lambda_1 \geq \cdots \geq \lambda_d$, the goal is to output a $d \times d$ matrix $\hat{H}$ with eigenvalues $\lambda$ which minimizes the Frobenius-norm distance
$\|\hat{H}- H\|_F$ under $(\varepsilon,\delta)$-differential privacy constraints.
Here $H$ is the matrix with eigenvalues $\lambda$ and the same eigenvectors as $M$.
This class of problems includes as a special case the subspace recovery problem if we set $\lambda_1 = \cdots = \lambda_k = 1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$.
It also includes the rank-$k$ covariance approximation problems if we set $\lambda_i = \sigma_i$ for $i \leq k$, where $\sigma_1 \geq \cdots \geq \sigma_d$ are the eigenvalues of $M$.
Since revealing $\sigma_i$s may violate privacy constraints, the eigenvalues of the output matrix $\hat{H}$ should not be the same as those of $H$.
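The (non-private) target matrix $H$ for a given spectrum $\lambda$ can be computed directly from a spectral decomposition of $M$. A minimal Python sketch of the two special cases above (ours; the helper name `target_matrix` is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def target_matrix(M, lam):
    """H: same eigenvectors as the symmetric matrix M, eigenvalues replaced by lam.
    lam is assumed nonincreasing, matched to M's eigenvalues sorted in decreasing order."""
    evals, evecs = np.linalg.eigh(M)           # np.linalg.eigh returns ascending order
    V = evecs[:, np.argsort(evals)[::-1]]      # reorder so columns match sigma_1 >= ... >= sigma_d
    return V @ np.diag(lam) @ V.T

d, k = 6, 2
A = rng.standard_normal((20, d))
M = A.T @ A

# Subspace recovery: lam = (1,...,1,0,...,0) yields the projection onto the top-k eigenspace.
lam_proj = np.array([1.0] * k + [0.0] * (d - k))
H = target_matrix(M, lam_proj)
assert np.allclose(H @ H, H)            # idempotent
assert np.isclose(np.trace(H), k)       # rank-k projection

# Rank-k covariance approximation: lam_i = sigma_i for i <= k, and 0 otherwise.
sigma = np.sort(np.linalg.eigvalsh(M))[::-1]
lam_cov = np.where(np.arange(d) < k, sigma, 0.0)
Hk = target_matrix(M, lam_cov)
assert np.linalg.matrix_rank(Hk) == k
```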
Various distance functions have been used in the literature to evaluate the utility of $(\varepsilon,\delta)$-DP mechanisms for matrix approximation problems, including the Frobenius-norm distance $\|\hat{H} - H\|_F$ (e.g. \cite{dwork2014analyze, amin2019differentially})
and the Frobenius inner product utility $\langle M, H - \hat{H}\rangle$ (e.g. \cite{chaudhuri2012near, dwork2014analyze, gilad2017smooth}).
Note that while a bound $\| H -\hat{H} \|_F \leq b$ implies an upper bound on the inner product utility of $\langle M, H -\hat{H} \rangle \leq \| M\|_F \cdot b$ (by the Cauchy-Schwarz inequality),
an upper bound on the inner product utility does not (in general) imply any upper bound on the Frobenius-norm distance.
Moreover, the Frobenius-norm distance can be a good utility metric to use if the goal is to recover a low-rank matrix $H$ from a dataset of noisy observations (see e.g. \cite{davenport2016overview}).
Hence, we use the Frobenius-norm distance to measure the utility of an $(\varepsilon,\delta)$-DP mechanism.
{\bf Related work.} The problem of approximating a matrix under differential privacy constraints has been widely studied.
In particular, prior works have provided algorithms for problems where the goal is to approximate a covariance matrix under differential privacy constraints, including rank-$k$ PCA and subspace recovery \cite{blum2005practical, kapralov2013differentially, dwork2014analyze, leake2020computability} as well as rank-$k$ covariance matrix approximation \cite{blum2005practical, dwork2014analyze, amin2019differentially}.
Another set of works has studied the problem of approximating a rectangular data matrix $A$ under DP \cite{blum2005practical, achlioptas2007fast, hardt2012beating, hardt2013beyond}.
We note that upper bounds on the utility of differentially-private mechanisms for rectangular matrix approximation problems can grow with the number of datapoints $m$, while those for covariance matrix approximation problems oftentimes depend only on the dimension $d$ of the covariance matrix and do not grow with $m$.
%
Prior works which deal with covariance matrix approximation problems such as rank-$k$ covariance matrix approximation and subspace recovery are the most relevant to our paper.
%
The notion of DP varies among the different works on differentially-private matrix approximation, with many of these works considering the notion $(\varepsilon,\delta)$-DP \cite{hardt2012beating, hardt2013beyond, dwork2014analyze}, while other works focus on (pure) $(\varepsilon, 0)$-DP \cite{kapralov2013differentially, amin2019differentially, leake2020computability}.
{\em Analysis of the Gaussian mechanism in \cite{dwork2014analyze}.}
\cite{dwork2014analyze} analyze a version of the Gaussian mechanism of \cite{dwork2006our}, in which one perturbs the entries of $M$ by adding a symmetric matrix $E$ whose entries are i.i.d. Gaussians with standard deviation $\nicefrac{\sqrt{\log(\frac{1}{\delta})}}{\varepsilon}$, to obtain an $(\varepsilon, \delta)$-differentially private mechanism which outputs a perturbed matrix $\hat{M} = M+E$.
One can then post-process this matrix $\hat{M}$ to obtain a rank-$k$ projection matrix which projects onto the subspace spanned by the top-$k$ eigenvectors of $\hat{M}$ (for the rank-$k$ PCA or subspace recovery problem), or a rank-$k$ matrix $\hat{H}$ with the same top-$k$ eigenvectors and eigenvalues as $\hat{M}$ (for the rank-$k$ covariance matrix approximation problem).
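A minimal sketch of this perturb-then-post-process pipeline (ours, not the authors' code; the noise scale follows the $\frac{\sqrt{2\log(1.25/\delta)}}{\varepsilon}$ calibration used later in this paper, and the function names are our own):

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_mechanism(M, eps, delta, rng):
    """Add symmetric Gaussian noise to a symmetric matrix M (sketch; constants
    may differ across treatments of the Gaussian mechanism)."""
    d = M.shape[0]
    G = rng.standard_normal((d, d))
    c = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return M + c * (G + G.T)

def rank_k_postprocess(M_hat, k):
    """Keep the top-k eigenpairs of the noisy matrix (rank-k approximation)."""
    evals, evecs = np.linalg.eigh(M_hat)
    top = np.argsort(evals)[::-1][:k]
    return evecs[:, top] @ np.diag(evals[top]) @ evecs[:, top].T

d, k = 8, 3
A = rng.standard_normal((100, d))
M = A.T @ A
M_hat = gaussian_mechanism(M, eps=1.0, delta=1e-6, rng=rng)
H_hat = rank_k_postprocess(M_hat, k)
assert np.allclose(H_hat, H_hat.T) and np.linalg.matrix_rank(H_hat) == k
```

Since the post-processing touches only the already-noised matrix `M_hat`, it preserves the privacy guarantee of the mechanism itself.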
\cite{dwork2014analyze} consider different notions of utility in their results, including the inner product utility (for PCA), and the Frobenius-norm and spectral-norm distances (for low-rank approximation and subspace recovery).
In one set of results, \cite{dwork2014analyze} give utility lower bounds of $\tilde{\Omega}(k\sqrt{d})$ w.h.p. for the rank-$k$ PCA problem with respect to the inner product utility $\langle M, H \rangle$, together with matching upper bounds provided by a post-processing of the Gaussian mechanism, where $\tilde{\Omega}$ hides polynomial factors of $\frac{1}{\varepsilon}$ and $\log(\frac{1}{\delta})$ (their Theorems 3 and 18).
As noted by the authors, their lower bounds are tight for matrices $M$ with the ``worst-case'' spectral profile $\sigma$, but they can obtain improved upper bounds for matrices $M$ where $\sigma_k - \sigma_{k+1} > \tilde{\Omega}(\sqrt{d})$ (Theorem 3 of \cite{dwork2014analyze}).
For the subspace recovery problem, \cite{dwork2014analyze} obtain a Frobenius-distance bound of $\|\hat{H}- H\|_F \leq \tilde{O}\left(\nicefrac{\sqrt{kd}}{(\sigma_k - \sigma_{k+1})}\right)$ w.h.p. for a post-processing of the Gaussian mechanism whenever $\sigma_k - \sigma_{k+1} > \tilde{\Omega}(\sqrt{d})$ (implied by their Theorem 6, which is stated for the spectral norm).
And for the rank-$k$ covariance matrix approximation problem, \cite{dwork2014analyze} show a utility bound of $\|\hat{H}- M\|_F - \|H- M\|_F \leq \tilde{O}(k \sqrt{d})$ w.h.p. for a post-processing of the Gaussian mechanism (Theorem 7 in \cite{dwork2014analyze}), and also give related bounds for the spectral norm.
While their Frobenius bound for the covariance matrix approximation problem is independent of the number of datapoints $m$, it may not be tight.
For instance, when $k=d$, one can easily obtain a better bound since, by the triangle inequality, $\|\hat{H}- M\|_F - \|H- M\|_F \leq \|\hat{H}- H\|_F = \|\hat{M}- M\|_F = \|E\|_F \leq O(d)$ w.h.p., since $\|E\|_F$ is just the norm of a vector of $d^2$ Gaussians with variance $\tilde{O}(1)$.
Moreover, the bound for the rank-$k$ covariance approximation problem, $\|\hat{H}- H\|_F \leq \tilde{O}(k \sqrt{d})$, is also a worst-case upper bound for any spectral profile $\sigma$, as the right-hand side of the bound does not depend on the eigenvalues $\sigma$.
{\em
Thus, a question arises of whether the Frobenius-norm utility bounds for the rank-$k$ covariance matrix approximation and subspace recovery problems
are tight for all spectral profiles $\sigma$, and whether the analysis of the Gaussian mechanism can be improved to achieve better utility bounds.
A more general question is to obtain utility bounds for the Gaussian mechanism for the matrix approximation problems for arbitrary $\lambda$.
}
{\bf Our contribution.} Our main result is a new upper bound on the Frobenius-distance utility of the Gaussian mechanism for the general matrix approximation problem for a given $M$ and $\lambda$ (Theorem \ref{thm_large_gap}).
Our bound depends on the eigenvalues of $M$ and the entries of $\lambda$.
The novel insight is to view the perturbed matrix $M+E$ as a continuous-time symmetric matrix diffusion, where each entry of the matrix $M+E$ is the value reached by a (one-dimensional) Brownian motion after some time $T = \nicefrac{\log(\frac{1}{\delta})}{\varepsilon^2}$.
This matrix-valued Brownian motion, which we denote by $\Phi(t)$, induces a stochastic process on the eigenvalues $\gamma_1(t) \geq \cdots \geq \gamma_d(t)$ and corresponding eigenvectors $u_1(t), \ldots, u_d(t)$ of $\Phi(t)$ originally discovered by Dyson and now referred to as Dyson Brownian motion, with initial values $\gamma_i(0) = \sigma_i$ and $u_i(0)$ which are the eigenvalues and eigenvectors of the initial matrix $M$ \cite{dyson1962brownian}.
We then use the stochastic differential equations \eqref{eq_DBM_eigenvalues} and \eqref{eq_DBM_eigenvectors}, which govern the evolution of the eigenvalues and eigenvectors of the Dyson Brownian motion, to track the perturbations to each eigenvector.
Roughly speaking, these equations say that, as the Dyson Brownian motion evolves over time, every pair of eigenvalues $\gamma_i(t)$ and $\gamma_j(t)$, together with the corresponding eigenvectors $u_i(t)$ and $u_j(t)$, interact with each other, with the magnitude of the interaction term proportional to $\frac{1}{\gamma_i(t) - \gamma_j(t)}$ at any given time $t$.
This allows us to bound the perturbation of the eigenvectors at every time $t$, provided that the initial gaps in the top $k+1$ eigenvalues of the input matrix are $\geq \Omega(\sqrt{d})$ (Assumption \ref{assumption_gaps}).
Empirically, we observe that Assumption \ref{assumption_gaps} is satisfied for covariance matrices of many real-world datasets (see Section \ref{appendix_data}), as well as on Wishart random matrices $W = A^\top A$, where $A$ is an $m \times d$ matrix of i.i.d. Gaussian entries, for sufficiently large $m$ (see Section \ref{appendix_wishart}).
We then derive a stochastic differential equation that tracks how the utility changes as the Dyson Brownian motion evolves over time (Lemma \ref{Lemma_projection_differntial}) and integrate this differential equation over time to obtain a bound on the (expectation of) the utility $\mathbb{E}[\|\hat{H} -H \|_F]$ (Lemma \ref{Lemma_integral}) as a function of the gaps $\gamma_i(t) - \gamma_j(t)$.
Plugging in basic estimates (Lemma \ref{lemma_gap_concentration}) for the eigenvalue gaps $\gamma_i(t) - \gamma_j(t)$ to Lemma \ref{Lemma_integral}, we obtain a bound on the expected utility $\mathbb{E}[\|\hat{H} -H \|_F]$ (Theorem \ref{thm_large_gap}) for the different matrix approximation problems as a function of the eigenvalue gaps $\sigma_i - \sigma_j$ of the input matrix $M$.
Roughly speaking, our bound is the square-root of a sum-of-squares of the ratios, $\frac{\lambda_i - \lambda_j}{\sigma_i - \max(\sigma_j, \sigma_{k+1})}$, of eigenvalue gaps of the input and output matrices.
When applied to the rank-$k$ covariance matrix approximation problem (Corollary \ref{cor_rank_k_covariance2}), Theorem \ref{thm_large_gap} implies a bound of $\mathbb{E}[\|\hat{H} -H \|_F] \leq \tilde{O}(\sqrt{k d})$ whenever the eigenvalues $\sigma$ of the input matrix $M$ satisfy $\sigma_k - \sigma_{k+1} \geq \Omega(\sigma_k)$ and the gaps in top $k+1$ eigenvalues satisfy $\sigma_i - \sigma_{i+1} \geq \tilde{\Omega}(\sqrt{d})$.
Thus, when $M$ satisfies the above condition on $\sigma$, our bound improves by a factor
of $\sqrt{k}$ on the (expectation of) the previous bound of \cite{dwork2014analyze}, which says that $\|\hat{H}- M\|_F - \|H- M\|_F \leq \tilde{O}(k \sqrt{d})$ w.h.p., since by the triangle inequality $\|\hat{H}- M\|_F - \|H- M\|_F \leq \|\hat{H}- H\|_F$.
This condition on $\sigma$ is satisfied, e.g., for matrices $M$ whose eigenvalue gaps are at least as large as those of the Wishart random covariance matrices with sufficiently many datapoints $m$ (see Section \ref{sec_results} for details).
And, if $\sigma$ is such that $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for $i \leq k$, Theorem \ref{thm_large_gap} implies a bound of $\mathbb{E}[\|\hat{H} -H \|_F] \leq \tilde{O}(\nicefrac{\sqrt{d}}{(\sigma_k- \sigma_{k+1})})$ for the subspace recovery
problem (Corollary \ref{cor_subspace_recovery}), improving by a factor of $\sqrt{k}$ (in expectation) on the previous bound of \cite{dwork2014analyze}, which implies that $\|\hat{H}- M\|_F - \|H- M\|_F \leq \tilde{O}\left(\nicefrac{\sqrt{kd}}{(\sigma_k - \sigma_{k+1})}\right)$ w.h.p.
\section{Results} \label{sec_results}
Our main result (Theorem \ref{thm_large_gap}) gives a new and unified upper bound on the Frobenius-norm utility of a post-processing of the Gaussian mechanism, for the general matrix approximation problem where one is given a symmetric matrix $M \in \mathbb{R}^{d \times d}$ and a vector $\lambda$ with $\lambda_1 \geq \cdots \geq \lambda_d$, and the goal is to compute a matrix $\hat{H}$ with eigenvalues $\lambda$ which minimizes the distance $\|\hat{H}- H\|_F$.
Here $H$ is the matrix with eigenvalues $\lambda$ and the same eigenvectors as $M$.
Plugging in different choices of $\lambda$ to Theorem \ref{thm_large_gap}, we obtain as corollaries new Frobenius-distance utility bounds for the rank-$k$ covariance matrix approximation problem (Corollary \ref{cor_rank_k_covariance2}) and the subspace recovery problem (Corollary \ref{cor_subspace_recovery}).
Our results rely on the following assumption about the eigenvalues of the input matrix $M$:
\begin{assumption}[($M,k,\lambda_1, \varepsilon, \delta$) Eigenvalue gaps]\label{assumption_gaps}
The gaps in the top $k+1$ eigenvalues $\sigma_1 \geq \cdots \geq \sigma_d$ of the matrix $M \in \mathbb{R}^{d \times d}$ satisfy $\sigma_i - \sigma_{i+1} \geq \frac{8\sqrt{\log(\frac{1.25}{\delta})}}{\varepsilon} \sqrt{d} + 3\log^{\frac{1}{2}}(\lambda_1 k)$ for every $i \in [k]$.
\end{assumption}
\noindent
We observe empirically that Assumption \ref{assumption_gaps} is satisfied on a number of real-world datasets which were previously used as benchmarks in the differentially private matrix approximation literature \cite{chaudhuri2012near, amin2019differentially} (see Section \ref{appendix_data}).
Assumption \ref{assumption_gaps} is also satisfied, for instance, by random Wishart matrices $W = A^\top A$, where $A$ is an $m \times d$ matrix of i.i.d. Gaussian entries, which are a popular model for sample covariance matrices \cite{wishart1928generalised}.
This is because the minimum gap $\sigma_i - \sigma_{i+1}$ of a Wishart matrix grows proportional to $\sqrt{m}$ with high probability; thus for large enough $m$, Assumption \ref{assumption_gaps} holds (see Section \ref{appendix_wishart} for details).
In other words, the assumption requires that the gaps in the top $k+1$ eigenvalues of $M$ be at least as large as the gaps of a random Wishart matrix.
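The growth of Wishart eigenvalue gaps with $m$ is easy to check empirically; the following sketch (ours) compares the smallest gap among the top $k+1$ eigenvalues of $W = A^\top A$ for two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

def min_top_gap(A, k):
    """Smallest gap among the top k+1 eigenvalues of M = A^T A."""
    sigma = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
    return float(np.min(sigma[:k] - sigma[1:k + 1]))

d, k, trials = 10, 3, 10
avg_gap = {}
for m in (200, 20000):
    avg_gap[m] = np.mean([min_top_gap(rng.standard_normal((m, d)), k)
                          for _ in range(trials)])

# The top eigenvalue gaps of a Wishart matrix grow with m (roughly like sqrt(m)),
# so for large enough m the eigenvalue-gap assumption is satisfied.
assert avg_gap[20000] > avg_gap[200]
```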
\begin{theorem}[\bf Main result]\label{thm_large_gap}
Let $\varepsilon, \delta>0$, and let $M \in \mathbb{R}^{d \times d}$ be a symmetric matrix with eigenvalues $\sigma_1 \geq \cdots \geq \sigma_d$ and corresponding orthonormal eigenvectors $v_1,\ldots, v_d$.
Let $G$ be a matrix with i.i.d. $N(0,1)$ entries, and consider the mechanism that outputs $\hat{M} = M+ \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$.
Then such a mechanism is $(\varepsilon,\delta)$-differentially private.
Moreover, let $\lambda_1 \geq \cdots \geq \lambda_d$ and $k \in [d]$ be any numbers such that $\lambda_i = 0$ for $i>k$, and define $\Lambda := \mathrm{diag}(\lambda_1, \ldots, \lambda_d)$ and $V = [v_1,\ldots, v_d]$, and define $\hat{\sigma}_1 \geq \cdots \geq \hat{\sigma}_d$ to be the eigenvalues of $\hat{M}$ with corresponding orthonormal eigenvectors $\hat{v}_1,\ldots, \hat{v}_d$ and $\hat{V} = [\hat{v}_1,\ldots,\hat{v}_d]$.
Then if $M$ satisfies Assumption \ref{assumption_gaps} for ($M,k,\lambda_1,\varepsilon,\delta$), we have
\begin{equation*}
\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]
%
\leq O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
%
\right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}.
%
%
\end{equation*}
\end{theorem}
\noindent
The fact that the mechanism in this theorem is $(\varepsilon,\delta)$-differentially private follows from standard results about the Gaussian mechanism \cite{dwork2014analyze}.
Given any list of eigenvalues $\lambda$, and letting $\Lambda = \mathrm{diag}(\lambda)$, one can post-process the matrix $\hat{M}$ by computing its spectral decomposition $\hat{M} = \hat{V}\hat{\Sigma} \hat{V}^\top$ and replacing its eigenvalues to obtain a matrix $\hat{V} \Lambda \hat{V}^\top$ with eigenvalues $\lambda$ and eigenvectors $\hat{V}$.
Since $\hat{V} \Lambda \hat{V}^\top$ is a post-processing of the Gaussian mechanism, the mechanism which outputs $\hat{V} \Lambda \hat{V}^\top$ is differentially private as well.
Theorem \ref{thm_large_gap} bounds the excess utility $\mathbb{E}[\|\hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top\|_F^2]$ (whenever the gaps in the eigenvalues $\sigma_1 \geq \cdots \geq \sigma_d$ of the input matrix satisfy Assumption \ref{assumption_gaps}) as a sum-of-squares of the ratio of the gaps $\lambda_i-\lambda_j$ in the given eigenvalues to the corresponding gaps $\sigma_i-\max(\sigma_j, \sigma_{k+1})$ in the eigenvalues of the input matrix (note that $\lambda_i-\lambda_j = \lambda_i-\max(\lambda_j, \lambda_{k+1})$ since $\lambda_j = 0$ for $j \geq k+1$).
While we do not know if Theorem \ref{thm_large_gap} is tight for all choices of $\lambda$ and $k$, it does give a tight bound for some problems.
Namely, when applied to the covariance matrix estimation problem, in the special case where $k=d$ Theorem \ref{thm_large_gap} implies a bound of $\mathbb{E}[\|\hat{M} - M \|_F] \leq \tilde{O}(\sqrt{kd}) = O(d)$ (see Corollary \ref{cor_rank_k_covariance2}).
Since $\hat{M} - M = \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$,
the matrix $\hat{M} - M$ has independent Gaussian entries with mean zero and variance $\tilde{O}(1)$, and we have from concentration results for Gaussian random matrices (see e.g. Theorem 2.3.6 of \cite{tao2012topics}) that $\mathbb{E}[\|\hat{M} - M \|_F] = \tilde\Omega(d)$, implying that the bound in Theorem \ref{thm_large_gap} is tight in this case.
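The $\Theta(d)$ scaling of $\|\hat{M} - M\|_F = \|E\|_F$ can be verified numerically (a sketch of ours, at unit noise scale):

```python
import numpy as np

rng = np.random.default_rng(4)

def noise_fro(d, rng, trials=50):
    """Average Frobenius norm of the symmetrized Gaussian noise E = G + G^T."""
    total = 0.0
    for _ in range(trials):
        G = rng.standard_normal((d, d))
        total += np.linalg.norm(G + G.T, "fro")
    return total / trials

# ||E||_F is the norm of ~d^2 Gaussians of variance O(1), so it grows linearly
# in d: doubling d should roughly double the average Frobenius norm.
r1, r2 = noise_fro(20, rng), noise_fro(40, rng)
assert 1.8 < r2 / r1 < 2.2
```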
The proof of Theorem \ref{thm_large_gap} differs from prior works, including that of \cite{dwork2014analyze}, which uses Davis-Kahan-type theorems \cite{davis1970rotation} and trace inequalities; instead, it relies on an interpretation of the Gaussian mechanism as a diffusion process, which may be of independent interest (see Section \ref{sec_challenges} for additional comparison to previous approaches).
This connection allows us to use sophisticated tools from stochastic differential equations and random matrix theory. We present an outline of the proof in Section \ref{sec:proof}.
\paragraph{Application to covariance matrix approximation:}
Plugging $\lambda_i = \sigma_i$ for $i \leq k$ and $\lambda_i=0$ for $i>k$ into Theorem \ref{thm_large_gap}, and plugging in concentration bounds for the perturbation to the eigenvalues $\sigma_i$, we obtain utility bounds for covariance matrix approximation:
\begin{corollary}[\bf Rank-$k$ covariance matrix approximation]\label{cor_rank_k_covariance2}
Let $\varepsilon, \delta>0$, and let $M \in \mathbb{R}^{d \times d}$ be a symmetric matrix with eigenvalues $\sigma_1 \geq \cdots \geq \sigma_d$ and corresponding orthonormal eigenvectors $v_1,\ldots, v_d$.
Let $G$ be a matrix with i.i.d. $N(0,1)$ entries, and consider the mechanism that outputs $\hat{M} = M+ \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$.
Then such a mechanism is $(\varepsilon,\delta)$-differentially private.
Moreover, for any $k \in [d]$, define $\Sigma_k := \mathrm{diag}(\sigma_1, \ldots, \sigma_k, 0 \ldots, 0)$ and $V = [v_1,\ldots, v_d]$, and define $\hat{\sigma}_1 \geq \cdots \geq \hat{\sigma}_d$ to be the eigenvalues of $\hat{M}$ with corresponding orthonormal eigenvectors $\hat{v}_1,\ldots, \hat{v}_d$, and define $\hat{\Sigma}_k := \mathrm{diag}(\hat{\sigma}_1, \ldots, \hat{\sigma}_k, 0 \ldots, 0)$ and $\hat{V} := [\hat{v}_1,\ldots,\hat{v}_d]$.
Then if $M$ satisfies Assumption \ref{assumption_gaps} for ($M,k,\sigma_1,\varepsilon,\delta$), and defining $\sigma_{d+1}:=0$, we have
$$ \mathbb{E}\left[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F \right] \leq O\left(\sqrt{kd} \times \frac{\sigma_k}{\sigma_{k}-\sigma_{k+1}
}\right) \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}.$$
\end{corollary}
\noindent
The proof appears in Section \ref{sec:covariance}.
If $\sigma_k - \sigma_{k+1} = \Omega(\sigma_k)$, then Corollary \ref{cor_rank_k_covariance2} implies that
$$\mathbb{E}\left[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F\right] \leq O\left( \sqrt{kd} \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right).$$
Thus, for matrices $M$ with eigenvalues satisfying Assumption \ref{assumption_gaps} and where $\sigma_k - \sigma_{k+1} = \Omega(\sigma_k)$, Corollary \ref{cor_rank_k_covariance2} improves by a factor of $\sqrt{k}$ on the bound in Theorem 7 of \cite{dwork2014analyze}, which says that $\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - M\|_F - \|V \Sigma_k V^\top - M\|_F = \tilde{O}(k\sqrt{d})$ w.h.p.
This is because an upper bound on $\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F$ implies an upper bound on $\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - M\|_F - \|V \Sigma_k V^\top - M\|_F$ by the triangle inequality.
On the other hand, while their result does not require a bound on the gaps in the eigenvalues of $M$ and bounds the utility w.h.p., our Corollary \ref{cor_rank_k_covariance2} requires a bound on the gaps of the top $k+1$ eigenvalues of $M$ and bounds the expected utility $\mathbb{E}[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F ]$.
\paragraph{Application to subspace recovery:} Plugging in $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$, the post-processing step in Theorem \ref{thm_large_gap} outputs a projection matrix, and we obtain utility bounds for the subspace recovery problem.
\begin{corollary}[\bf Subspace recovery]\label{cor_subspace_recovery}
Let $\varepsilon,\delta>0$, and let $M \in \mathbb{R}^{d \times d}$ be a symmetric matrix with eigenvalues $\sigma_1 \geq \cdots \geq \sigma_d$ and corresponding orthonormal eigenvectors $v_1,\ldots, v_d$.
Let $G$ be a matrix with i.i.d. $N(0,1)$ entries, and consider the mechanism that outputs $\hat{M} = M+ \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$.
Then such a mechanism is $(\varepsilon,\delta)$-differentially private.
Moreover, for any $k \in [d]$, define the $d\times k$ matrices $V_k = [v_1,\ldots,v_k]$ and $\hat{V}_k = [\hat{v}_1,\ldots,\hat{v}_k]$, where $\hat{\sigma}_1 \geq \cdots \geq \hat{\sigma}_d$ denote the eigenvalues of $\hat{M}$ with corresponding orthonormal eigenvectors $\hat{v}_1,\ldots, \hat{v}_d$.
Then if $M$ satisfies Assumption \ref{assumption_gaps} for ($M,k,2,\varepsilon,\delta$), we have
$ \mathbb{E}\left[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F \right] \leq O\left(\frac{\sqrt{kd}}{\sigma_{k}-\sigma_{k+1}
}\times \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right).$
Moreover, if we also have that $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for all $i \leq k$, then
$$\mathbb{E}\left[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F \right] \leq O\left(\frac{\sqrt{d}}{\sigma_{k}-\sigma_{k+1}
}\times \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right).$$
\end{corollary}
\noindent
The proof appears in Section \ref{sec:cor_rank_subspace}.
For matrices $M$ satisfying Assumption \ref{assumption_gaps}, the first inequality
of Corollary \ref{cor_subspace_recovery} recovers (in expectation) the Frobenius-norm utility bound implied by Theorem 6 of \cite{dwork2014analyze}, which states that $\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F \leq O\left(\frac{\sqrt{kd}}{\sigma_k - \sigma_{k+1}} \times \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right)$ w.h.p.
Moreover, for many input matrices $M$ with spectral profiles $\sigma_1 \geq \cdots \geq \sigma_d$ satisfying Assumption \ref{assumption_gaps}, Theorem \ref{thm_large_gap} implies stronger bounds than those in \cite{dwork2014analyze} for the subspace recovery problem.
For instance, if we also have that $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for all $i \leq k$, the bound given in the second inequality of Corollary \ref{cor_subspace_recovery} improves on the bound of \cite{dwork2014analyze} by a factor of $\sqrt{k}$.
On the other hand, while their result only requires that $\sigma_k - \sigma_{k+1} \geq \sqrt{d}$ and bounds the Frobenius distance $\|\hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F$ w.h.p., our Corollary \ref{cor_subspace_recovery} requires a bound on the gaps of the top $k+1$ eigenvalues of $M$ and bounds the expected Frobenius distance $\mathbb{E}[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F]$.
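To illustrate the role of the gap $\sigma_k - \sigma_{k+1}$ in these subspace recovery bounds, the following sketch (ours; the spectra are synthetic and the function name is hypothetical) runs the mechanism on two diagonal matrices that differ only in the scale of their eigenvalue gaps:

```python
import numpy as np

rng = np.random.default_rng(5)

def subspace_error(sigma, k, rng, eps=1.0, delta=1e-6, trials=20):
    """Average ||Vhat_k Vhat_k^T - V_k V_k^T||_F for the Gaussian mechanism applied
    to M = diag(sigma), so that V_k spans the first k coordinates."""
    d = len(sigma)
    M = np.diag(sigma)
    P = np.zeros((d, d))
    P[:k, :k] = np.eye(k)                       # exact projection V_k V_k^T
    c = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    errs = []
    for _ in range(trials):
        G = rng.standard_normal((d, d))
        evals, evecs = np.linalg.eigh(M + c * (G + G.T))
        Vk = evecs[:, np.argsort(evals)[::-1][:k]]
        errs.append(np.linalg.norm(Vk @ Vk.T - P, "fro"))
    return float(np.mean(errs))

d, k = 10, 3
sigma_small_gap = np.array([3000.0, 2900, 2800, 2600, 0, 0, 0, 0, 0, 0])
sigma_large_gap = np.array([30000.0, 29000, 28000, 26000, 0, 0, 0, 0, 0, 0])
err_small = subspace_error(sigma_small_gap, k, rng)
err_large = subspace_error(sigma_large_gap, k, rng)
# A 10x larger gap sigma_k - sigma_{k+1} should shrink the recovery error roughly 10x.
assert err_large < err_small
```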
\section{Preliminaries}
{\bf Brownian motion and stochastic calculus.}
A Brownian motion $W(t)$ in $\mathbb{R}$ is a continuous process that has stationary
independent increments (see e.g., \cite{morters2010brownian}).
In a multi-dimensional Brownian motion, each coordinate is an independent one-dimensional Brownian motion.
The filtration $\mathcal{F}_t$ generated by $W(t)$ is defined as $\sigma \left(\cup_{s \leq t} \sigma(W(s))\right)$, where $\sigma(\Omega)$ is the $\sigma$-algebra generated by $\Omega$.
$W(t)$ is a martingale with respect to $\mathcal{F}_t$.
\begin{definition}[\bf It\^o Integral]
Let $W(t)$ be a Brownian motion for $t \geq 0$, let $\mathcal{F}_t$ be the filtration generated by $W(t)$, and let $z(t) : \mathcal{F}_t \rightarrow \mathbb{R}$ be a stochastic process adapted to $\mathcal{F}_t$.
The It\^o integral is defined as
$\int_0^T z(t) \mathrm{d}W(t) := \lim_{\omega \rightarrow 0} \sum_{i=0}^{\frac{T}{\omega}-1} z(i\omega)\left[W((i+1)\omega) -W(i\omega)\right].$
\end{definition}
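The defining left-endpoint sums can be checked numerically against the classical identity $\int_0^T W(t)\,\mathrm{d}W(t) = \frac{1}{2}(W(T)^2 - T)$ (a standard fact, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Discretize the Ito integral of z(t) = W(t) against dW with left-endpoint sums
# and compare with the closed form (W(T)^2 - T)/2.
T, n = 1.0, 100_000
dW = rng.standard_normal(n) * np.sqrt(T / n)        # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))          # W(0) = 0, W on the grid
ito_sum = np.sum(W[:-1] * dW)                       # z evaluated at left endpoints
closed_form = (W[-1] ** 2 - T) / 2.0
assert abs(ito_sum - closed_form) < 0.05
```

Evaluating $z$ at the left endpoint of each increment (rather than the midpoint or right endpoint) is exactly what makes this the It\^o, as opposed to the Stratonovich, integral.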
\begin{lemma}[\bf It\^o's Lemma, integral form with no drift; Theorem 3.7.1 of \cite{lawler2010stochastic}] \label{lemma_ito_lemma_new}
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be any twice-differentiable function.
Let $W(t) \in \mathbb{R}^n$ be a Brownian motion, and let $X(t) \in \mathbb{R}^n$ be an It\^o diffusion process with mean zero defined by the following stochastic differential equation:
\begin{equation}
\mathrm{d}X_j(t) = \sum_{i=1}^n R_{i j }(t) \mathrm{d}W_i(t),
\end{equation}
for some It\^o diffusion $R(t) \in \mathbb{R}^{n \times n}$ adapted to the filtration generated by the Brownian motion $W(t)$.
Then for any $T\geq 0$,
\begin{align*}
f(X(T)) -f(X(0)) &= \int_0^T \sum_{i=1}^n \sum_{\ell=1}^n \left(\frac{\partial}{\partial X_\ell} f(X(t))\right) R_{i \ell}(t) \mathrm{d}W_i(t)\\
%
&\qquad + \frac{1}{2} \int_0^T \sum_{i=1}^n \sum_{j=1}^n \sum_{\ell=1}^n \left(\frac{\partial^2}{\partial X_{j} \partial X_\ell} f(X(t))\right) R_{i j }(t) R_{i \ell}(t) \mathrm{d}t.
\end{align*}
\end{lemma}
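As a one-dimensional sanity check (a standard textbook example, not from the paper): take $n=1$ and $R(t) \equiv 1$, so that $X(t) = W(t)$, and let $f(x) = x^2$. Lemma \ref{lemma_ito_lemma_new} then gives

```latex
f(X(T)) - f(X(0)) = W(T)^2 - W(0)^2
  = \int_0^T 2\,W(t)\,\mathrm{d}W(t) + \frac{1}{2}\int_0^T 2\,\mathrm{d}t
  = 2\int_0^T W(t)\,\mathrm{d}W(t) + T.
```

Rearranging recovers $\int_0^T W(t)\,\mathrm{d}W(t) = \frac{1}{2}(W(T)^2 - T)$; the extra $+T$ term is the second-order It\^o correction that a naive chain rule would miss.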
\paragraph{Dyson Brownian motion.}
Let $W(t) \in \mathbb{R}^{d \times d}$ be a matrix where each entry is an independent standard Brownian motion (so each entry has distribution $N(0, t)$ at time $t$), and let $B(t) = W(t) + W^\top(t)$.
Define the symmetric-matrix valued stochastic process $\Phi(t)$ as follows:
\begin{equation} \label{eq_DBM_matrix}
\Phi(t):= M + B(t) \qquad \forall t\geq 0.
\end{equation}
\noindent The process $\Phi(t)$ is referred to as (matrix) Dyson Brownian motion.
At every time $t>0$ the eigenvalues $\gamma_1(t), \ldots, \gamma_d(t)$ of $\Phi(t)$ are distinct with probability $1$, and \eqref{eq_DBM_matrix} induces a stochastic process on the eigenvalues and eigenvectors.
The process on the eigenvalues and eigenvectors can be expressed via the following diffusion equations.
The eigenvalue diffusion process, also referred to as (eigenvalue) ``Dyson Brownian motion'', is an It\^o diffusion and can be expressed by the following stochastic differential equation \cite{dyson1962brownian}:
\begin{equation} \label{eq_DBM_eigenvalues}
\mathrm{d} \gamma_i(t) = \mathrm{d}B_{i i}(t) + \sum_{j \neq i} \frac{1}{\gamma_i(t) - \gamma_j(t)} \mathrm{d}t \qquad \qquad \forall i \in [d], t > 0.
\end{equation}
\noindent The corresponding eigenvector process $u_1(t), \ldots, u_d(t)$, referred to as the Dyson vector flow, is also an It\^o diffusion and, conditional on the eigenvalue process \eqref{eq_DBM_eigenvalues}, can be expressed by the following stochastic differential equation (see e.g., \cite{anderson2010introduction}):
\begin{equation} \label{eq_DBM_eigenvectors}
\mathrm{d}u_i(t) = \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}u_j(t) - \frac{1}{2}\sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2}u_i(t) \qquad \qquad \forall i \in [d], t > 0.
\end{equation}
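The diffusion term of \eqref{eq_DBM_eigenvectors} mirrors classical first-order eigenvector perturbation theory: for a fixed symmetric perturbation direction, the instantaneous change of $u_i$ is $\sum_{j \neq i} \frac{u_j^\top \mathrm{d}B\, u_i}{\gamma_i - \gamma_j} u_j$ to first order. This can be checked numerically (a sketch of ours, using a synthetic well-separated spectrum):

```python
import numpy as np

rng = np.random.default_rng(7)

d, i = 6, 2
gam0 = np.arange(1.0, d + 1)                 # well-separated spectrum: gaps exactly 1
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
M = Q @ np.diag(gam0) @ Q.T                  # symmetric matrix playing the role of Phi(t)
Sp = rng.standard_normal((d, d))
dB = Sp + Sp.T                               # symmetric perturbation direction
eps = 1e-6

gam, U = np.linalg.eigh(M)                   # ascending order; matches gam0 up to rounding
_, U_p = np.linalg.eigh(M + eps * dB)

# Eigenvectors are defined only up to sign: align the perturbed one with u_i.
u_new = U_p[:, i] * np.sign(U_p[:, i] @ U[:, i])
observed = (u_new - U[:, i]) / eps

# First-order prediction matching the diffusion term of the Dyson vector flow:
# du_i = sum_{j != i} (u_j^T dB u_i) / (gamma_i - gamma_j) u_j
predicted = np.zeros(d)
for j in range(d):
    if j != i:
        predicted += (U[:, j] @ dB @ U[:, i]) / (gam[i] - gam[j]) * U[:, j]

assert np.linalg.norm(observed - predicted) < 1e-3 * max(1.0, np.linalg.norm(predicted))
```

The $\mathrm{d}t$ drift term of \eqref{eq_DBM_eigenvectors} does not appear here: it is second order in the perturbation and arises from the quadratic variation of the Brownian increments.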
\paragraph{Eigenvalue bounds.}
The following two Lemmas will help us bound the gaps in the eigenvalues of the Dyson Brownian motion:
\begin{lemma} [Theorem 4.4.5 of \cite{vershynin2018high}, special case \footnote{The theorem is stated for sub-Gaussian entries in terms of a constant $C$; this constant is $C=2$ in the special case where the entries are $N(0,1)$ Gaussian.}] \label{lemma_concentration}
Let $W \in \mathbb{R}^{d \times d}$ with i.i.d. $N(0,1)$ entries. Then
$ \mathbb{P}\left(\|W\|_2 > 2(\sqrt{d} +s)\right) < 2e^{-s^2}$
for any $s>0$.
\end{lemma}
\begin{lemma}[\bf Weyl's Inequality; \cite{bhatia2013matrix}]\label{lemma_weyl}
If $A,B \in \mathbb{R}^{d\times d}$ are two symmetric matrices, and denoting the $i$'th-largest eigenvalue of any symmetric matrix $M$ by $\sigma_i(M)$, we have
$ \sigma_i(A) + \sigma_d(B) \leq \sigma_i(A + B) \leq \sigma_i(A) + \sigma_1(B).
$
\end{lemma}
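Both lemmas are easy to sanity-check numerically; for instance, Weyl's inequality can be verified on random symmetric matrices (a check of ours):

```python
import numpy as np

rng = np.random.default_rng(8)

def eigvals_desc(M):
    """Eigenvalues of a symmetric matrix, sorted in decreasing order."""
    return np.sort(np.linalg.eigvalsh(M))[::-1]

ok = True
for _ in range(100):
    d = 7
    X, Y = rng.standard_normal((2, d, d))
    A, B = X + X.T, Y + Y.T
    sa, sb, sab = eigvals_desc(A), eigvals_desc(B), eigvals_desc(A + B)
    # sigma_i(A) + sigma_d(B) <= sigma_i(A+B) <= sigma_i(A) + sigma_1(B)
    ok &= np.all(sa + sb[-1] <= sab + 1e-9) and np.all(sab <= sa + sb[0] + 1e-9)
assert ok
```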
\section{Proof Overview of Theorem \ref{thm_large_gap} -- Main Result}\label{sec:proof}
We give an overview of the proof of Theorem \ref{thm_large_gap}, along with the main technical lemmas used to prove this result.
Section \ref{sec_outline_of_proof} outlines the different steps in our proof.
In Steps 1 and 2 we construct the matrix-valued diffusion used in our proof.
Steps 3, 4, and 5 present the main technical lemmas,
and in Step 6 we explain how to complete the proof.
The statements of the lemmas and the highlights of their proofs are given in Sections \ref{sec_computing_derivative}, \ref{sec_gap_bounds}, \ref{Sec_integration}.
In Section \ref{sec_completing_the_proof} we explain how to complete the proof.
The full proofs appear in Sections \ref{sec_proof_of_lemmas} and \ref{sec_proof_thm_large_gap}.
\subsection{Outline of proof} \label{sec_outline_of_proof}
\begin{enumerate}
[leftmargin=10pt]
\item \textbf{Step 1: Expressing the Gaussian Mechanism as a Dyson Brownian Motion.}
To obtain our utility bound, we view the Gaussian mechanism as a matrix-valued Brownian motion \eqref{eq_DBM_matrix} initialized at the input matrix $M$:
$ \Phi(t):= M + B(t) \qquad \forall t\geq 0.$
If we run this Brownian motion for time $T = \nicefrac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}$ we have that $\Phi(T) = M + (\nicefrac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon})(G +G^\top)$, recovering the output of the Gaussian mechanism.
In other words, the input to the Gaussian mechanism is $M = \Phi(0)$, and the output is $\hat{M} = \Phi(T)$.
\item \textbf{Step 2: Expressing the post-processed mechanism as a matrix diffusion $\Psi(t)$.} Our goal is to bound $\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F$, where $M = V \Sigma V^\top$ and $\hat{M} = \hat{V} \hat{\Sigma} \hat{V}^\top$ are spectral decompositions of $M$ and $\hat{M}$.
To bound the error $\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F$ we will define a stochastic process $\Psi(t)$ such that $\Psi(0) = V \Lambda V^\top$ and $\Psi(T) = \hat{V} \Lambda \hat{V}^\top$, and then bound the Frobenius distance $\|\Psi(T) - \Psi(0)\|_F$ by integrating the (stochastic) derivative of $\Psi(t)$ over the time interval $[0,T]$.
Towards this end, at every time $t$, let $\Phi(t) = U(t) \Gamma(t) U(t)^\top$ be a spectral decomposition of the symmetric matrix $\Phi(t)$, where $\Gamma(t)$ is a diagonal matrix with diagonal entries $\gamma_1(t) \geq \cdots \geq \gamma_d(t)$ that are the eigenvalues of $\Phi(t)$, and $U(t) = [u_1(t), \ldots, u_d(t)]$ is a $d\times d$ orthogonal matrix whose columns $u_1(t), \ldots, u_d(t)$ are an orthonormal basis of eigenvectors of $\Phi(t)$.
At every time $t$, define $\Psi(t)$ to be the symmetric matrix with eigenvalues $\Lambda$ and eigenvectors given by the columns of $U(t)$:
$ \Psi(t):= U(t) \Lambda U(t)^\top$ $\forall t \in [0,T].$
\item \textbf{Step 3: Computing the stochastic derivative $\mathrm{d}\Psi(t)$.} To bound the expected squared Frobenius distance $\mathbb{E}[\|\Psi(T) - \Psi(0)\|_F^2]$, we first compute the stochastic derivative $\mathrm{d}\Psi(t)$ of the matrix diffusion $\Psi(t)$ (Lemma \ref{Lemma_orbit_differntial}).
\item \textbf{Step 4: Bounding the eigenvalue gaps.}
The equation for the derivative $\mathrm{d}\Psi(t)$ includes terms with magnitude proportional to the inverse of the eigenvalue gaps $\Delta_{ij}(t):= \gamma_i(t) - \gamma_j(t)$ for each $i,j \in[d]$, which evolve over time.
In order to bound these terms, we use Weyl's inequality (Lemma \ref{lemma_weyl}) to show that w.h.p. the gaps in the top $k+1$ eigenvalues $\Delta_{ij}(t)$ satisfy $\Delta_{ij}(t) \geq \Omega(\sigma_i - \sigma_j)$ for every time $t\in [0,T]$ (Lemma \ref{lemma_gap_concentration}), provided that the initial gaps are sufficiently large (Assumption \ref{assumption_gaps}); see Section \ref{sec_necessity_of_assumption} for a discussion of why our proof requires this assumption.
\item \textbf{Step 5: Integrating the stochastic differential equation.} Next, we express the expected squared Frobenius distance $\mathbb{E}[\|\Psi(T) - \Psi(0)\|_F^2]$ as an integral
$ \mathbb{E}[\|\Psi(T) - \Psi(0)\|_F^2 ]=\mathbb{E}\left[ \left\|\int_0^T \mathrm{d}\Psi(t)\right\|_F^2\right].$
We then apply It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}) to obtain a formula for this integral.
Roughly speaking, the formula we obtain (Lemma \ref{Lemma_integral}) is
\begin{equation}\label{eq_g3} \mathbb{E}\left[\left\|\Psi(T) - \Psi(0)\right \|_F^2 \right] \approx \int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\Delta^2_{ij}(t)} \right] \mathrm{d}t
%
+ T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\Delta^2_{ij}(t)}\right)^2 \right]\mathrm{d}t
\end{equation}
\item \textbf{Step 6: Completing the proof.} Plugging the bound $\Delta_{ij}(t) \geq \Omega(\sigma_i - \sigma_j)$ into \eqref{eq_g3}, and noting that the first term on the r.h.s. of \eqref{eq_g3} is at least as large as the second term since $\sigma_i - \sigma_j \geq \sqrt{d}$,
we obtain the bound in Theorem \ref{thm_large_gap}.
\end{enumerate}
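The pipeline above can be sketched end to end numerically. This is an illustrative simulation, not the paper's algorithm: we take $\Lambda$ to be a fixed rank-$k$ diagonal matrix, use the noise convention $\hat{M} = M + T(G+G^\top)$ from Step 1 (constants vary by convention), and measure the post-processing error directly.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 6, 2
eps, delta = 1.0, 1e-5
T = np.sqrt(2 * np.log(1.25 / delta)) / eps

# Input: symmetric M with well-separated top eigenvalues (large-gap regime).
sigma = np.array([40.0, 25.0, 5.0, 4.0, 3.0, 2.0])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
M = Q @ np.diag(sigma) @ Q.T

# Fixed rank-k diagonal Lambda (lambda_i = 0 for i > k).
Lam = np.diag(np.concatenate([[2.0, 1.0], np.zeros(d - k)]))

def post_process(X):
    """Eigenvectors of X, ordered by descending eigenvalue, recombined with Lambda."""
    w, U = np.linalg.eigh(X)
    U = U[:, np.argsort(w)[::-1]]
    return U @ Lam @ U.T

G = rng.standard_normal((d, d))
M_hat = M + T * (G + G.T)  # output of the Gaussian mechanism (Step 1)
err = np.linalg.norm(post_process(M_hat) - post_process(M), 'fro')
print(f"post-processing error ||V_hat Lam V_hat^T - V Lam V^T||_F = {err:.3f}")
```

Note that $\hat{V}\Lambda\hat{V}^\top$ is invariant under eigenvector sign flips, so the error is well defined despite the sign ambiguity of `eigh`.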
\subsection{Step 3: Computing the stochastic derivative $\mathrm{d}\Psi(t)$}\label{sec_computing_derivative}
$\Psi(t)$ is itself a matrix-valued diffusion.
We use the eigenvalue and eigenvector dynamics \eqref{eq_DBM_eigenvalues} and \eqref{eq_DBM_eigenvectors} together with It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}) to compute the It\^o derivative of this diffusion.
Towards this end, we first decompose the matrix $\Psi(t)$ as a weighted sum of eigenvector projection matrices: $\Psi(t) = \sum_{i=1}^{d} \lambda_i u_i(t) u_i^\top(t)$.
Thus, we have
\begin{equation} \label{eq_g4}
\mathrm{d}\Psi(t) = \sum_{i=1}^{d} \lambda_i \mathrm{d}(u_i(t) u_i^\top(t)).
\end{equation}
We begin by computing the stochastic derivative $\mathrm{d}(u_i(t) u_i^\top(t))$ for each $i \in [d]$, by applying the formula for the derivative of $u_i(t)$ in \eqref{eq_DBM_eigenvectors}, together with It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}):
\begin{lemma}[\bf Stochastic derivative of $u_i(t) u_i^\top(t)$]\label{Lemma_projection_differntial}
For all $t \in [0,T]$,
\begin{align*}
\mathrm{d}(u_i(t) u_i^\top(t))= \sum_{j \neq i} &\frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\\
& - \sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} (u_i(t) u_i^\top(t) - u_j(t)u_j^\top(t)).
\end{align*}
\end{lemma}
\noindent
The proof is in Section \ref{sec_proof_of_lemmas}.
Plugging Lemma \ref{Lemma_projection_differntial} into \eqref{eq_g4}, we get an expression for $\mathrm{d}\Psi(t)$:
\begin{lemma}[\bf Stochastic derivative of $\Psi(t)$; see Section \ref{sec_proof_of_lemmas} for proof]\label{Lemma_orbit_differntial}
For all $t \in [0,T]$ we have that
$ \mathrm{d}\Psi(t) = \frac{1}{2}\sum_{i=1}^{d} \sum_{j \neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))
%
\textstyle - \sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t).$
\end{lemma}
\subsection{Step 4: Bounding the eigenvalue gaps} \label{sec_gap_bounds}
The derivative in Lemma \ref{Lemma_orbit_differntial} contains terms with magnitude proportional to the inverse of the eigenvalue gaps $\Delta_{ij}(t):= \gamma_i(t) - \gamma_j(t)$.
To bound these terms, we would like to show that $\inf_{t\in [0,T]} \Delta_{ij}(t) \geq \Omega(\sigma_i - \sigma_j)$ for each $i<j \leq k+1$ with high probability.
Towards this end, we first apply the spectral norm concentration bound for Gaussian random matrices (Lemma \ref{lemma_concentration}), which provides a high-probability bound for $\|B(t)\|_2$ at any time $t$, together with Doob's submartingale inequality, to show that the spectral norm of the matrix-valued Brownian motion $B(t)$ does not exceed $T\sqrt{d}$ at any time $t\in [0,T]$ w.h.p.:
\begin{lemma}[\bf Spectral norm bound] \label{lemma_spectral_martingale}
For every $T>0$, we have,
$$ \mathbb{P}\left(\sup_{t \in [0,T]}\|B(t)\|_2 > 2T\sqrt{d} + \alpha\right) \leq 2\sqrt{\pi} e^{-\frac{1}{8}\frac{\alpha^2}{T^2}}.$$
\end{lemma}
\noindent
The proof appears in Section \ref{sec_proof_of_lemmas}.
Next, we use Lemma \ref{lemma_spectral_martingale} to bound the eigenvalue gaps:
\begin{lemma}[\bf Eigenvalue gap bound]\label{lemma_gap_concentration}
Whenever $\gamma_i(0) - \gamma_{i+1}(0) \geq 8 T \sqrt{d}$ for every $i \in S$, where $T>0$ and $S \subset [d-1]$, we have $$\mathbb{P}\left( \bigcup_{i\in S} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) < \frac{1}{2}(\gamma_i(0) - \gamma_{i+1}(0)) - \alpha \right\}\right) \leq 2\sqrt{\pi} e^{-\frac{1}{32} \alpha^2}.$$
\end{lemma}
\noindent
To prove Lemma \ref{lemma_gap_concentration}, we plug Lemma \ref{lemma_spectral_martingale} into Weyl's Inequality (Lemma \ref{lemma_weyl}), to show that
\begin{equation*}
\gamma_i(t) - \gamma_{i+1}(t) \geq \sigma_i - \sigma_{i+1} - \|B(t)\|_2 \geq \Omega(\sigma_i - \sigma_{i+1} - T \sqrt{d}) \geq \Omega(\sigma_i - \sigma_{i+1}),
\end{equation*}
with high probability for each $i\leq k$ (Lemma \ref{lemma_gap_concentration}).
The last inequality holds since Assumption \ref{assumption_gaps} ensures that $\sigma_i - \sigma_{i+1} \geq \Omega(T \sqrt{d})$ for $i\leq k$.
The full proof is in Section \ref{sec_proof_of_lemmas}.
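Lemma \ref{lemma_gap_concentration} can be illustrated by simulation: when the initial gaps dominate $T\sqrt{d}$, the gaps along the whole path stay above half their initial value. A sketch with illustrative parameters (initial gaps of $100$ against $T\sqrt{d}=0.2$, using one common symmetrization convention for the matrix Brownian motion):

```python
import numpy as np

rng = np.random.default_rng(4)
d, T, n_steps, trials = 4, 0.1, 50, 20
dt = T / n_steps

# Initial spectrum with gaps far larger than T*sqrt(d), so the gap bound
# predicts inf_t (gamma_i(t) - gamma_(i+1)(t)) stays above half the initial gap.
sigma0 = np.array([300.0, 200.0, 100.0, 0.0])
init_gaps = sigma0[:-1] - sigma0[1:]  # all equal to 100

worst_ratio = np.inf
for _ in range(trials):
    Phi = np.diag(sigma0).copy()
    for _ in range(n_steps):
        G = rng.standard_normal((d, d))
        Phi += np.sqrt(dt) * (G + G.T) / np.sqrt(2.0)
        gam = np.sort(np.linalg.eigvalsh(Phi))[::-1]
        worst_ratio = min(worst_ratio, np.min((gam[:-1] - gam[1:]) / init_gaps))

print(f"smallest gap ratio inf_t (gamma_i - gamma_(i+1)) / initial gap: {worst_ratio:.3f}")
```

With these parameters $\sup_t \|B(t)\|_2$ is of order $1$, so the gaps barely move and every trial stays far above the $\frac{1}{2}$ threshold.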
\subsection{Step 5: Integrating the stochastic differential equation} \label{Sec_integration}
Next, we would like to integrate the derivative $\mathrm{d}\Psi(t)$ to obtain an expression for $\mathbb{E}[\|\Psi(T) - \Psi(0)\|_F^2]$, and to then plug in our high-probability bounds (Lemma \ref{lemma_gap_concentration}) for the gaps $\Delta_{ij}(t)$.
To allow us to later plug in these high-probability bounds after we integrate and take the expectation, we define a new diffusion process $Z_\eta(t)$ which has nearly the same stochastic differential equation as the one in Lemma \ref{Lemma_orbit_differntial}, except that each eigenvalue gap $\Delta_{ij}(t)$ is not permitted to become smaller than the value $\eta_{ij} = \frac{1}{4}(\sigma_i - \max(\sigma_{j}, \sigma_{k+1}))$ for each $i<j$.
Towards this end, fix any $\eta \in \mathbb{R}^{d\times d}$ and define the following matrix-valued It\^o diffusion $Z_\eta(t)$ via its It\^o derivative $\mathrm{d}Z_\eta(t)$:
\begin{eqnarray} \label{eq_orbit_diffusion2}
\textstyle \mathrm{d}Z_\eta(t) & := \frac{1}{2}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) \nonumber \\
%
& - \sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t),
\end{eqnarray}
with initial condition $Z_\eta(0):= \Psi(0)$.
Thus, $Z_\eta(t) = \Psi(0)+ \int_0^t \mathrm{d}Z_\eta(s)$ for all $t \geq 0$.
We then integrate $\mathrm{d}Z_\eta(t)$ over the time interval $[0,T]$, and apply It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}) to bound the Frobenius norm of this integral:
\begin{lemma}[\bf Frobenius distance integral] \label{Lemma_integral}
For any $T>0$,
$\mathbb{E}\left[\left\|Z_\eta\left(T\right) - \textstyle Z_\eta(0)\right \|_F^2 \right]\leq$
%
\begin{eqnarray*} \textstyle
%
2\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} \right] \mathrm{d}t
%
+ T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \right]\mathrm{d}t.
\end{eqnarray*}
\end{lemma}
\noindent
To prove Lemma \ref{Lemma_integral}, we write
\begin{align} \label{eq_z2}
Z_\eta\left(T\right) - Z_\eta(0)
&= \frac{1}{2} \int_0^{T}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) \nonumber\\
%
& - \int_0^{T}\sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t).
\end{align}
To compute the Frobenius norm of the first term on the r.h.s. of \eqref{eq_z2}, we use It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}), with $X(t):= \int_0^{t}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(s)}{\max(|\Delta_{ij}(s)|, \eta_{ij})}(u_i(s) u_j^\top(s) + u_j(s) u_i^\top(s))$ and the function $f(X):= \|X\|_F^2 = \sum_{i=1}^d \sum_{j=1}^d X_{ij}^2$.
By It\^o's Lemma, we have
\begin{align} \label{eq_z1}
&\mathbb{E}[\|X(T)\|_F^2 - \|X(0)\|_F^2]
%
= \mathbb{E} [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{\alpha, \beta} (\frac{\partial}{ \partial X_{\alpha \beta}} f(X(t))) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}B_{\ell r}(t) ] \nonumber \\
%
&\qquad \qquad +\mathbb{E}\left [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{i,j} \sum_{\alpha, \beta} \left(\frac{\partial^2}{\partial X_{ij} \partial X_{\alpha \beta}} f(X(t))\right) R_{(\ell r) (i j)}(t) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}t \right],
\end{align}
where $R_{(\ell r) (i j)}(t) := \left(\frac{ |\lambda_i - \lambda_j| }{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) \right)[\ell, r]$, and where we denote by either $H_{\ell r}$ or $H[\ell, r]$ the $(\ell, r)$'th entry of any matrix $H$.
The first term on the r.h.s. of \eqref{eq_z1} is equal to zero since $\mathrm{d}B_{\ell r}(s)$ is independent of both $X(t)$ and $R(t)$ for all $s \geq t$ and the time-integral of each Brownian motion increment $\mathrm{d}B_{\alpha \beta}(s)$ has zero mean.
To compute the second term on the r.h.s. of \eqref{eq_z1}, we use the fact that $\frac{\partial^2 }{\partial X_{ij} \partial X_{\alpha \beta}}f(X)$ is equal to $2$ when $(i,j) = (\alpha, \beta)$ and $0$ otherwise.
To compute the Frobenius norm of the second term on the r.h.s. of \eqref{eq_z2}, we use the Cauchy-Schwarz inequality.
The full proof appears in Section \ref{sec_proof_of_lemmas}.
\subsection{Step 6: Completing the proof} \label{sec_completing_the_proof}
To complete the proof, we plug in the high-probability bounds on the eigenvalue gaps from Section \ref{sec_gap_bounds} into Lemma \ref{Lemma_integral}.
Since by Lemma \ref{lemma_gap_concentration} $\Delta_{ij}(t) \geq \frac{1}{2}(\sigma_i - \sigma_j)$ w.h.p. for each $i,j \leq k+1$, and $\eta_{ij} = \frac{1}{4}(\sigma_i - \max(\sigma_{j}, \sigma_{k+1}))$, we must also have that $Z_\eta(t) = \Psi(t)$ for all $t \in [0, T]$ w.h.p.
Plugging in the high-probability bounds $\Delta_{ij}(t) \geq \frac{1}{2}(\sigma_i - \sigma_j)$ for each $i,j \leq k+1$, and noting that $\lambda_i - \lambda_j = 0$ for all $i,j >k$, we get that
\begin{align}\label{eq_g5} &\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right] = \mathbb{E}\left[\left\|\Psi\left(T\right) - \textstyle \Psi(0)\right \|_F^2 \right] \nonumber\\
%
&\leq 2\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i - \sigma_j)^2}\right] \mathrm{d}t
%
+ T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{(\sigma_i - \sigma_j)^2}\right)^2 \right]\mathrm{d}t \nonumber\\
%
&\leq T\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i - \max(\sigma_j, \sigma_{k+1}))^2} + T^2\sum_{i=1}^{k}\left(\sum_{j = i+1}^d \frac{\lambda_i - \lambda_j}{(\sigma_i - \max(\sigma_j, \sigma_{k+1}))^2}\right)^2.
\end{align}
Since $\sigma_i - \max(\sigma_j, \sigma_{k+1})
\geq \Omega(\sqrt{d})$ for all $i\leq k$ and $j\in [d]$, we can use the Cauchy-Schwarz inequality to show that the second term is (up to a factor of $T$) smaller than the first term: $\sum_{i=1}^{k}\left(\sum_{j = i+1}^d \frac{\lambda_i - \lambda_j}{(\sigma_i - \max(\sigma_j, \sigma_{k+1}))^2}\right)^2 \leq \sum_{i=1}^{k} \sum_{j=i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}$. Plugging $T = \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}$ into \eqref{eq_g5}, we obtain the bound in Theorem \ref{thm_large_gap}.
For the full proof of Theorem \ref{thm_large_gap}, see Section \ref{sec_proof_thm_large_gap}.
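For concreteness, the right-hand side of the final display in \eqref{eq_g5} can be evaluated numerically. A sketch (the function below mirrors that display; the constants and inputs are illustrative, not the theorem's exact constants):

```python
import numpy as np

def utility_bound(lam, sigma, k, eps, delta):
    """Evaluate the r.h.s. of the final display: a sketch of the bound on
    E||V_hat Lam V_hat^T - V Lam V^T||_F^2 (constants illustrative)."""
    T = np.sqrt(2 * np.log(1.25 / delta)) / eps
    d = len(sigma)
    first, second = 0.0, 0.0
    for i in range(k):
        inner = 0.0
        for j in range(i + 1, d):
            # sigma_{k+1} in 1-indexed notation is sigma[k] with 0-indexing.
            denom = (sigma[i] - max(sigma[j], sigma[k])) ** 2
            first += T * (lam[i] - lam[j]) ** 2 / denom
            inner += (lam[i] - lam[j]) / denom
        second += T ** 2 * inner ** 2
    return first + second

b = utility_bound(lam=[1.0, 0.0], sigma=[10.0, 2.0], k=1, eps=1.0, delta=1e-5)
print(f"bound = {b:.4f}")
```

For the $d=2$, $k=1$ example above the two sums collapse to $T/64 + (T/64)^2$ with $T = \sqrt{2\log(1.25\cdot 10^{5})}$.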
\section{Proof of Lemmas} \label{sec_proof_of_lemmas}
\begin{proof}[Proof of Lemma \ref{Lemma_projection_differntial}]
We compute the stochastic derivative $\mathrm{d}(u_i(t) u_i^\top(t))$ by applying the formula \eqref{eq_DBM_eigenvectors} for the stochastic derivative $\mathrm{d}u_i(t)$ of the eigenvector $u_i(t)$ in Dyson Brownian motion.
For any $t \in [0,T]$, we have that the stochastic derivative $\mathrm{d}(u_i(t) u_i^\top(t))$ satisfies
\begin{align}\label{eq_eq_derivative1}
%
&\mathrm{d}(u_i(t) u_i^\top(t))
%
=(u_i(t) + \mathrm{d}u_i(t))(u_i(t) + \mathrm{d}u_i(t))^\top - u_i(t)u_i(t)^\top \nonumber\\
%
&= \left(u_i(t)+ \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}u_j(t) - \frac{1}{2}\sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2}u_i(t) \right) \nonumber\\
%
&\qquad\times \left(u_i(t) + \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}u_j(t) - \frac{1}{2}\sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2}u_i(t) \right)^\top - u_i(t)u_i(t)^\top \nonumber\\
%
&= u_i(t) u_i^\top(t) + \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) - \sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t)\nonumber\\
%
& \qquad + \sum_{j \neq i} \sum_{\ell \neq i} \frac{\mathrm{d}B_{ij}(t)\mathrm{d}B_{i\ell}(t)}{(\gamma_i(t) - \gamma_j(t)) (\gamma_i(t) - \gamma_\ell(t))} u_j(t)u_{\ell}^\top(t) \nonumber\\
%
& \qquad \qquad \qquad -\varphi_1(t)\varphi_2(t)^\top -\varphi_2(t)\varphi_1(t)^\top + \varphi_2(t)\varphi_2(t)^\top - u_i(t)u_i(t)^\top,
%
\end{align}
%
where we define $\varphi_1(t):= \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}u_j(t)$ and $\varphi_2(t):= \frac{1}{2}\sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2}u_i(t)$.
The terms $\varphi_1(t) \varphi_2(t)^\top$ and $\varphi_2(t) \varphi_1(t)^\top$ have differentials $O(\mathrm{d}B_{ij} \mathrm{d}t)$, and $\varphi_2(t) \varphi_2(t)^\top$ has differentials $O(\mathrm{d}t^2)$; thus, all three terms vanish in the stochastic derivative by Lemma \ref{lemma_ito_lemma_new}.
Therefore, \eqref{eq_eq_derivative1} implies that the stochastic derivative $\mathrm{d}(u_i(t) u_i^\top(t))$ satisfies
%
\begin{align}\label{eq_eq_derivative2}
%
&\mathrm{d}(u_i(t) u_i^\top(t)) = \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) - \sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t)\nonumber\\
%
& \qquad + \sum_{j \neq i} \sum_{\ell \neq i} \frac{\mathrm{d}B_{ij}(t)\mathrm{d}B_{i\ell}(t)}{(\gamma_i(t) - \gamma_j(t)) (\gamma_i(t) - \gamma_\ell(t))} u_j(t)u_{\ell}^\top(t)\nonumber\\
%
&= \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) - \sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t)\nonumber\\
%
& \qquad + \sum_{j \neq i} \left(\frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)} \right)^2 u_j(t)u_j^\top(t)\nonumber\\
%
&= \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) - \sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t)\nonumber\\
%
& \qquad + \sum_{j \neq i} \frac{\mathrm{d}t}{(\gamma_i(t) - \gamma_j(t))^2} u_j(t)u_j^\top(t),
%
%
\end{align}
where the second-to-last equality holds since all terms $\mathrm{d}B_{ij}(t)\mathrm{d}B_{i\ell}(t)$ with $j \neq \ell$ in the sum $\sum_{j \neq i} \sum_{\ell \neq i} \frac{\mathrm{d}B_{ij}(t)\mathrm{d}B_{i\ell}(t)}{(\gamma_i(t) - \gamma_j(t)) (\gamma_i(t) - \gamma_\ell(t))} u_j(t)u_{\ell}^\top(t)$ vanish by It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}), since they have mean $0$ and are $O(\mathrm{d}B_{ij}(t) \mathrm{d}B_{i\ell}(t))$. We are therefore left only with the terms $j = \ell$ in the sum, whose differential terms $(\mathrm{d}B_{ij}(t))^2$ have mean $\mathrm{d}t$, plus higher-order terms which vanish by It\^o's Lemma.
Therefore \eqref{eq_eq_derivative2} implies that
\begin{align*}
&\mathrm{d}(u_i(t) u_i^\top(t))\\
%
& = \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) - \sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} (u_i(t) u_i^\top(t) - u_j(t)u_j^\top(t)).
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lemma_orbit_differntial}]
To compute the stochastic derivative of $\Psi(t)$, we would like to apply our formula for the stochastic derivative of the projection matrix $u_i(t) u_i^\top(t)$ for each eigenvector $u_i(t)$ (Lemma \ref{Lemma_projection_differntial}).
Towards this end, we first decompose the matrix $\Psi(t)$ as a sum of these projection matrices $u_i(t) u_i^\top(t)$:
\begin{equation}\label{eq_derivative3}
\textstyle \Psi(t) = \sum_{i=1}^{d} \lambda_i (u_i(t) u_i^\top(t)).
\end{equation}
Taking the derivative on both sides of \eqref{eq_derivative3}, we have
\begin{equation} \label{eq_derivative4}
\textstyle \mathrm{d}\Psi(t) = \sum_{i=1}^{d} \lambda_i \mathrm{d}(u_i(t) u_i^\top(t)).
\end{equation}
Thus, plugging in Lemma \ref{Lemma_projection_differntial} for each $i \in[d]$ into \eqref{eq_derivative4}, we have that
\begin{align*}
%
\mathrm{d}\Psi(t) &= \sum_{i=1}^{d} \lambda_i \mathrm{d}(u_i(t) u_i^\top(t))\\
%
&\stackrel{\textrm{Lemma \ref{Lemma_projection_differntial}}}{=} \sum_{i=1}^{d} \lambda_i \left( \sum_{j \neq i} \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right. \\
&- \left.\sum_{j\neq i} \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} (u_i(t) u_i^\top(t) - u_j(t)u_j^\top(t)) \right)\\
%
& = \frac{1}{2}\sum_{i=1}^{d} \sum_{j \neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\\
%
& - \frac{1}{2} \sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} (u_i(t) u_i^\top(t) - u_j(t)u_j^\top(t))\\
%
& = \frac{1}{2}\sum_{i=1}^{d} \sum_{j \neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\\
%
& - \sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} u_i(t) u_i^\top(t),
\end{align*}
where the second equality holds by Lemma \ref{Lemma_projection_differntial}.
To see why the third equality holds, for the first term inside the summation, note that $\mathrm{d}B(t)$ is a symmetric matrix of differentials which means that $\mathrm{d}B_{ij}(t) = \mathrm{d}B_{ji}(t)$ for all $i,j$, and hence that $\frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t) - \gamma_j(t)}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) = - \frac{\mathrm{d}B_{ji}(t)}{\gamma_j(t) - \gamma_i(t)}(u_j(t) u_i^\top(t) + u_i(t) u_j^\top(t))$ for all $i \neq j$.
For the second term inside the summation, note that $\frac{\mathrm{d}t}{(\gamma_i(t)- \gamma_j(t))^2} (u_i(t) u_i^\top(t) - u_j(t)u_j^\top(t)) = - \frac{\mathrm{d}t}{(\gamma_j(t)- \gamma_i(t))^2} (u_j(t) u_j^\top(t) - u_i(t)u_i^\top(t))$ for all $i \neq j$.
\end{proof}
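The symmetrization used in the last two equalities above is a purely algebraic identity; it can be checked numerically with arbitrary symmetric coefficients $b_{ij}$ (standing in for the symmetric increments $\mathrm{d}B_{ij}(t)$), distinct values $\gamma_i$, and orthonormal vectors $u_i$:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4
lam = rng.standard_normal(d)
gamma = np.array([4.0, 2.5, 1.0, -1.0])            # distinct "eigenvalues"
U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthonormal columns u_i
b = rng.standard_normal((d, d)); b = (b + b.T) / 2  # symmetric, like dB(t)

lhs = np.zeros((d, d))
rhs = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        if i == j:
            continue
        K = np.outer(U[:, i], U[:, j]) + np.outer(U[:, j], U[:, i])
        # Unsymmetrized sum: sum_i lam_i sum_{j != i} b_ij/(gamma_i - gamma_j) K_ij
        lhs += lam[i] * b[i, j] / (gamma[i] - gamma[j]) * K
        # Symmetrized form: (1/2) sum_{i != j} (lam_i - lam_j) b_ij/(gamma_i - gamma_j) K_ij
        rhs += 0.5 * (lam[i] - lam[j]) * b[i, j] / (gamma[i] - gamma[j]) * K

assert np.allclose(lhs, rhs)
print("symmetrization identity for the martingale term verified")
```

The identity holds because $b_{ij}$ and $K_{ij}$ are symmetric in $(i,j)$ while $\gamma_i - \gamma_j$ is antisymmetric, exactly as argued in the proof.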
\begin{proof}[Proof of Lemma \ref{lemma_spectral_martingale}]
To prove Lemma \ref{lemma_spectral_martingale} we will use Doob's submartingale inequality.
Towards this end, let $\mathcal{F}_s$ be the filtration generated by $B(s)$.
First, we note that $\exp(\|B(t)\|_2)$ is a submartingale for all $t \geq 0$; that is, $\mathbb{E}[\exp(\|B(t)\|_2) | \mathcal{F}_s] \geq \mathrm{exp}(\| B(s)\|_2 )$ for all $0\leq s \leq t$.
This is because for all $s \leq t$, we have
\begin{align*}
\mathbb{E}[\exp(\|B(t)\|_2) | \mathcal{F}_s] &= \mathbb{E}\left[\mathrm{exp}\left(\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} v^\top B(t) v\right) \, \, \bigg | \, \,\mathcal{F}_s\right]\\
&\geq \mathrm{exp}\left(\mathbb{E}\left[\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} v^\top B(t) v \, \, \bigg | \, \,\mathcal{F}_s\right]\right)\\
&\geq \mathrm{exp}\left(\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} \mathbb{E}\left[v^\top B(t) v \, \, | \, \, \mathcal{F}_s\right]\right)\\
&= \mathrm{exp}\left(\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} \mathbb{E}\left[v^\top (B(t) - B(s)) v + v^\top B(s) v \, \, | \, \, \mathcal{F}_s\right] \right)\\
&= \mathrm{exp}\left(\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} \mathbb{E}\left[v^\top (B(t) - B(s)) v | \, \, \mathcal{F}_s\right] + \mathbb{E}\left[v^\top B(s) v \, \, | \, \, \mathcal{F}_s\right] \right)\\
&= \mathrm{exp}\left( \sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} \mathbb{E}\left[v^\top B(s) v \, \, | \, \, \mathcal{F}_s\right] \right)\\
&= \mathrm{exp}\left(\sup_{v \in \mathbb{R}^d: \|v\|_2 = 1} v^\top B(s) v \right),\\
&= \mathrm{exp}(\| B(s)\|_2 ),
\end{align*}
where the first inequality holds by Jensen's inequality since $\exp(\cdot)$ is convex, and the third equality holds since $v^\top (B(t) - B(s)) v$ is independent of $\mathcal{F}_s$ and is distributed as $N(0,2(t-s))$.
Thus, by Doob's submartingale inequality, for any $\beta>0$ (we will choose the value of $\beta$ later to optimize our bound) we have,
\begin{align*}
\mathbb{P}\left(\sup_{t \in [0,T]}\|B(t)\|_2 > 2T(\sqrt{d} + \alpha)\right) & = \mathbb{P}\left(\sup_{t \in [0,T]}\frac{\beta}{2T}\|B(t)\|_2 - \beta\sqrt{d} > \beta \alpha\right) \\
%
&=\mathbb{P}\left(\sup_{t \in [0,T]}\exp\left(\frac{\beta}{2T}\|B(t)\|_2 - \beta\sqrt{d}\right) > \exp(\beta \alpha)\right)\\
%
& \leq \frac{\mathbb{E}[\exp(\frac{\beta}{2T}\|B(T)\|_2 - \beta\sqrt{d})]}{\exp(\beta\alpha)}\\
%
& = \frac{\int_{0}^{\infty} \mathbb{P}[\exp(\frac{\beta}{2T}\|B(T)\|_2 - \beta\sqrt{d})>x] \mathrm{d}x}{\exp(\beta \alpha)}\\
%
& = \frac{\int_{0}^{\infty} \mathbb{P}[\frac{1}{2T}\|B(T)\|_2 - \sqrt{d}> \beta^{-1} \log(x)] \mathrm{d}x}{\exp(\beta \alpha)}\\
%
& \leq \frac{\int_{0}^{\infty} 2 e^{-\beta^{-2}\log^2(x)} \mathrm{d}x}{\exp(\beta \alpha)}\\
& = \frac{2\sqrt{\pi} \beta e^{\frac{1}{4}\beta^2}}{\exp(\beta \alpha)}\\
%
& \leq \frac{2\sqrt{\pi}e^{\frac{1}{2}\beta^2}}{\exp(\beta \alpha)}\\
%
&= 2\sqrt{\pi} e^{\frac{1}{2}\beta^2 - \beta \alpha},
\end{align*}
where the first inequality holds by Doob's submartingale inequality, and the second inequality holds by Lemma \ref{lemma_concentration}.
Setting $\beta = \alpha$, we have
\begin{eqnarray*}
\textstyle \mathbb{P}\left(\sup_{t \in [0,T]}\|B(t)\|_2 > 2T(\sqrt{d} + \alpha)\right) \leq 2\sqrt{\pi} e^{-\frac{1}{2}\alpha^2}.
\end{eqnarray*}
Replacing $\alpha$ with $\nicefrac{\alpha}{2T}$ yields the bound in the statement of the lemma.
%
\end{proof}
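Lemma \ref{lemma_spectral_martingale} can be sanity-checked by simulating discretized paths of the symmetric matrix Brownian motion (one common symmetrization convention; parameters illustrative). With $d=20$, $T=1$, and $\alpha=10$, the lemma bounds the exceedance probability by $2\sqrt{\pi}e^{-12.5}\approx 10^{-5}$:

```python
import numpy as np

rng = np.random.default_rng(6)
d, T, n_steps, trials, alpha = 20, 1.0, 100, 20, 10.0
dt = T / n_steps

# Lemma: P(sup_t ||B(t)||_2 > 2T sqrt(d) + alpha) <= 2 sqrt(pi) e^{-alpha^2/(8T^2)},
# which is ~1.3e-5 for these parameters.
threshold = 2 * T * np.sqrt(d) + alpha
exceed = 0
for _ in range(trials):
    B = np.zeros((d, d))
    sup_norm = 0.0
    for _ in range(n_steps):
        G = rng.standard_normal((d, d))
        B += np.sqrt(dt) * (G + G.T) / np.sqrt(2.0)
        sup_norm = max(sup_norm, np.linalg.norm(B, 2))
    if sup_norm > threshold:
        exceed += 1
print(f"paths exceeding the bound: {exceed}/{trials} (threshold {threshold:.2f})")
```

Typical values of $\sup_t \|B(t)\|_2$ here are near $2\sqrt{d}\approx 8.9$, far below the threshold $\approx 18.9$.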
\begin{proof}[Proof of Lemma \ref{lemma_gap_concentration}]
To prove Lemma \ref{lemma_gap_concentration}, we plug our high-probability concentration bound for $\sup_{t\in[0,T]} \|B(t)\|_2$ (Lemma \ref{lemma_spectral_martingale}) into Weyl's Inequality (Lemma \ref{lemma_weyl}).
Since, at every time $t$, $\Phi(t)= M + B(t)$ and $\gamma_1(t) \geq \cdots \geq \gamma_d(t)$ are the eigenvalues of $\Phi(t)$, Weyl's Inequality implies that
\begin{equation} \label{eq_gap1}
\gamma_i(t) - \gamma_{i+1}(t) \geq \gamma_i(0) - \gamma_{i+1}(0) - 2\|B(t)\|_2, \qquad \forall t\in[0,T], i \in [d].
\end{equation}
%
Therefore, plugging Lemma \ref{lemma_spectral_martingale} into \eqref{eq_gap1} we have that
\begin{align*}
&\mathbb{P}\left( \bigcup_{i\in S} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) < \frac{1}{2}(\gamma_i(0) - \gamma_{i+1}(0)) - \alpha \right\}\right)\\
%
&\stackrel{\textrm{Eq. \eqref{eq_gap1}}}{\leq}\mathbb{P}\left( \bigcup_{i\in S} \left\{ \gamma_i(0) - \gamma_{i+1}(0) - 2\sup_{t \in [0,T]} \|B(t)\|_2 < \frac{1}{2}(\gamma_i(0) - \gamma_{i+1}(0)) - \alpha \right\}\right) \\
%
&= \mathbb{P}\left( \bigcup_{i\in S} \left\{ \sup_{t \in [0,T]} \|B(t)\|_2 > \frac{1}{4}(\gamma_i(0) - \gamma_{i+1}(0)) + \frac{1}{2} \alpha \right\}\right) \\
%
&\stackrel{\textrm{Assumption} \ref{assumption_gaps}}{\leq} \mathbb{P}\left( \bigcup_{i\in S} \left\{ \sup_{t \in [0,T]} \|B(t)\|_2 > 2T \sqrt{d} + \frac{1}{2} \alpha \right\}\right)\\
%
&= \mathbb{P}\left(\sup_{t \in [0,T]} \|B(t)\|_2 > 2T \sqrt{d} + \frac{1}{2} \alpha \right)\\
%
&\stackrel{\textrm{Lemma \ref{lemma_spectral_martingale}}}{\leq} 2\sqrt{\pi} e^{-\frac{1}{32} \alpha^2}.
\end{align*}
%
The first inequality holds by \eqref{eq_gap1},
and the second inequality holds by Assumption \ref{assumption_gaps} since $\gamma_i(0) = \sigma_i$ for each $i\in[d]$ because $\Phi(0) = M$.
The last inequality holds by the high-probability concentration bound for $\sup_{t\in[0,T]} \|B(t)\|_2$ (Lemma \ref{lemma_spectral_martingale}).
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lemma_integral}]
By the definition of $Z_\eta(t)$ we have that
\begin{align*}
Z_\eta\left(T\right) - Z_\eta(0) &= \int_0^{T} \mathrm{d}Z_\eta(t)\\
&= \frac{1}{2} \int_0^{T}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) \\
%
& - \int_0^{T}\sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t).
\end{align*}
Therefore, we have that
\begin{align} \label{eq_int_1}
\left\|Z_\eta(T) - Z_\eta(0)\right \|_F^2 &\leq \frac{1}{2} \left\| \int_0^{T}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right\|_F^2 \nonumber\\
%
& + \left\| \int_0^{T}\sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t) \right\|_F^2.
\end{align}
The first term on the r.h.s. of \eqref{eq_int_1} (inside its Frobenius norm) is a ``diffusion'' term -- that is, the integral has mean 0 and Brownian motion differentials $\mathrm{d}B_{ij}(t)$ inside the integral.
The second term on the r.h.s. (inside its Frobenius norm) is a ``drift'' term -- that is, the integral has non-zero mean and deterministic differentials $\mathrm{d}t$ inside the integral.
We bound the diffusion and drift terms separately.
\paragraph{Bounding the diffusion term:}
We first use It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}) to bound the diffusion term in \eqref{eq_int_1}.
Towards this end, let $f: \mathbb{R}^{d \times d} \rightarrow \mathbb{R}$ be the function which takes as input a $d \times d$ matrix and outputs the square of its Frobenius norm: $f(X):= \|X\|_F^2 = \sum_{i=1}^d \sum_{j=1}^d X_{ij}^2$ for every $X \in \mathbb{R}^{d\times d}$.
Then
\begin{equation}\label{eq_int_5}
\frac{\partial^2 }{\partial X_{ij} \partial X_{\alpha \beta}}f(X) =
\begin{cases}
2 & \textrm{if } \, \, \, (i,j)= (\alpha, \beta) \\
0 & \textrm{otherwise}.
\end{cases}
\end{equation}
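Since $f$ is quadratic, central second differences recover \eqref{eq_int_5} exactly up to floating-point error, giving a quick numerical check (illustrative, outside the proof):

```python
import numpy as np

rng = np.random.default_rng(7)
d, h = 3, 1e-3
X = rng.standard_normal((d, d))

def f(X):
    return np.sum(X ** 2)  # squared Frobenius norm

def hess(i, j, a, b):
    """Central second difference of f along coordinates (i,j) and (a,b);
    exact for a quadratic, so it recovers 2*delta_{(i,j),(a,b)}."""
    E1 = np.zeros((d, d)); E1[i, j] = h
    E2 = np.zeros((d, d)); E2[a, b] = h
    return (f(X + E1 + E2) - f(X + E1) - f(X + E2) + f(X)) / h ** 2

assert abs(hess(0, 1, 0, 1) - 2.0) < 1e-6  # diagonal Hessian entry is 2
assert abs(hess(0, 1, 2, 0) - 0.0) < 1e-6  # mixed entries vanish
print("Hessian of ||X||_F^2 is 2 on the diagonal and 0 off it")
```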
Define $X(t):= \int_0^{t}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(s)}{\max(|\Delta_{ij}(s)|, \eta_{ij})}(u_i(s) u_j^\top(s) + u_j(s) u_i^\top(s))$ for all $t \geq 0$.
Then
\begin{equation*}
\mathrm{d}X_{\ell r}(t) = \sum_{i=1}^{d} \sum_{j \neq i} R_{(\ell r) (i j)}(t) \mathrm{d}B_{ij}(t) \qquad \qquad \forall t \geq 0,
\end{equation*}
where $R_{(\ell r) (i j)}(t) := \left(\frac{ |\lambda_i - \lambda_j| }{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)) \right)[\ell, r]$, and where we denote by either $H_{\ell r}$ or $H[\ell, r]$ the $(\ell, r)$'th entry of any matrix $H$.
Then we have
\begin{align}\label{eq_int_2b}
&\mathbb{E}\left[\left\| \int_0^{T}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right\|_F^2 \right] \nonumber \\
%
&= \mathbb{E}[f(X(T))] \nonumber\\
%
&=\mathbb{E}[f(X(T)) - f(X(0))] \nonumber\\
%
&\stackrel{\textrm{It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new})}}{=} \mathbb{E}\left [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{\alpha, \beta} \left(\frac{\partial}{ \partial X_{\alpha \beta}} f(X(t))\right) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}B_{\ell r}(t) \right] \nonumber\\
%
&\qquad \qquad \qquad \qquad \qquad +\mathbb{E}\left [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{i,j} \sum_{\alpha, \beta} \left(\frac{\partial^2}{\partial X_{ij} \partial X_{\alpha \beta}} f(X(t))\right) R_{(\ell r) (i j)}(t) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}t \right] \nonumber\\
%
&= 0 +\mathbb{E}\left [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{i,j} \sum_{\alpha, \beta} \left(\frac{\partial^2}{\partial X_{ij} \partial X_{\alpha \beta}} f(X(t))\right) R_{(\ell r) (i j)}(t) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}t \right],
\end{align}
%
where the third equality is It\^o's Lemma (Lemma \ref{lemma_ito_lemma_new}), and the last equality holds since $$\mathbb{E}\left[\int_0^T \left(\frac{\partial}{ \partial X_{\alpha \beta}} f(X(t))\right) R_{(\ell r) (\alpha \beta)}(t) \mathrm{d}B_{\ell r}(t) \right] = 0$$ for each $\ell, r, \alpha, \beta \in [d]$ because $\mathrm{d}B_{\ell r}(s)$ is independent of both $X(t)$ and $R(t)$ for all $s \geq t$ and the Brownian motion increments $\mathrm{d}B_{\alpha \beta}(s)$ satisfy $\mathbb{E}[\int_t^{\tau} \mathrm{d}B_{\alpha \beta}(s)] = \mathbb{E}[B_{\alpha \beta}(\tau) - B_{\alpha \beta}(t)]= 0$ for any $\tau \geq t$.
Thus, plugging \eqref{eq_int_5} into \eqref{eq_int_2b}, we have
%
\begin{align}\label{eq_int_2}
&\mathbb{E}\left[\left\| \int_0^{T}\sum_{i=1}^{d} \sum_{j \neq i} |\lambda_i - \lambda_j| \frac{\mathrm{d}B_{ij}(t)}{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right\|_F^2 \right] \nonumber\\
%
&\stackrel{\textrm{Eq.} \eqref{eq_int_5}, \eqref{eq_int_2b}}{=} \mathbb{E}\left [\frac{1}{2} \int_0^T \sum_{\ell, r} \sum_{i,j} 2 R_{(\ell r) (i j)}^2(t) \mathrm{d}t \right]\nonumber\\
%
&= \mathbb{E}\left [ \int_0^T \sum_{\ell, r} \sum_{i,j} \left(\left( \frac{ |\lambda_i - \lambda_j| }{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right)[\ell, r]\right)^2 \mathrm{d}t \right]\nonumber\\
%
&= \mathbb{E}\left [ \int_0^T \sum_{i,j} \sum_{\ell, r} \left(\left( \frac{ |\lambda_i - \lambda_j| }{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right)[\ell, r]\right)^2 \mathrm{d}t \right]\nonumber\\
%
%
%
&= \mathbb{E}\left [ \int_0^T \sum_{i,j} \left \|\frac{ |\lambda_i - \lambda_j| }{\max(|\Delta_{ij}(t)|, \eta_{ij})}(u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t))\right\|_F^2\mathrm{d}t \right]\nonumber\\
&= 2\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} \|u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)\|_F^2 \mathrm{d}t\right] \nonumber\\
%
&= 4\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} \mathrm{d}t\right],
%
\end{align}
%
where the fifth equality holds because $\langle u_i(t) u_j^\top(t) , u_\ell(t) u_h^\top(t) \rangle = 0$ for all $(i,j) \neq (\ell,h)$, and the last equality holds because $\|u_i(t) u_j^\top(t) + u_j(t) u_i^\top(t)\|_F^2 = 2$ for all $t$ with probability $1$.
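Both facts follow from the orthonormality of the eigenvectors; for instance, the norm identity can be checked by a direct computation (using $u_i^\top u_i = u_j^\top u_j = 1$ and $u_i^\top u_j = 0$ for $i \neq j$, and suppressing the time argument $t$):

```latex
\|u_i u_j^\top + u_j u_i^\top\|_F^2
= \mathrm{tr}\!\big((u_j u_i^\top + u_i u_j^\top)(u_i u_j^\top + u_j u_i^\top)\big)
= \mathrm{tr}(u_j u_j^\top) + \mathrm{tr}(u_i u_i^\top)
= 1 + 1 = 2.
```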
\paragraph{Bounding the drift term:}
To bound the drift term in \eqref{eq_int_1}, we use the Cauchy-Schwarz inequality:
\begin{align}\label{eq_int_3}
&\left\|\int_0^{T}\sum_{i=1}^{d} \sum_{j\neq i} (\lambda_i - \lambda_j) \frac{\mathrm{d}t}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t) \right\|_F^2\nonumber \\
%
& = \left\|\int_0^{T}\sum_{i=1}^{d} \sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t) \times 1 \mathrm{d}t \right\|_F^2 \nonumber\\
%
& \stackrel{\textrm{Cauchy-Schwarz Inequality}}{\leq} \int_0^{T}\left\|\sum_{i=1}^{d} \sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t)\right\|_F^2 \mathrm{d}t\times \int_0^{T} 1^2 \mathrm{d}t \nonumber\\
%
&= T \int_0^{T}\left\|\sum_{i=1}^{d} \sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t) \right\|_F^2 \mathrm{d}t \nonumber \\
%
&= T \int_0^{T}\sum_{i=1}^{d} \left\|\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} u_i(t) u_i^\top(t) \right\|_F^2 \mathrm{d}t \nonumber\\
%
&= T \int_0^{T}\sum_{i=1}^{d} \left\|\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right) u_i(t) u_i^\top(t) \right\|_F^2 \mathrm{d}t \nonumber\\
%
&= T \int_0^{T}\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \left\| u_i(t) u_i^\top(t) \right\|_F^2 \mathrm{d}t \nonumber\\
%
&= T \int_0^{T}\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \times 1 \mathrm{d}t,
\end{align}
where the first inequality is by the Cauchy-Schwarz inequality for integrals (applied to each entry of the matrix-valued integral).
The third equality holds since $\langle u_i(t) u_i^\top(t) , u_j(t) u_j^\top(t) \rangle = 0$ for all $i \neq j$.
The last equality holds since $\| u_i(t) u_i^\top(t) \|_F^2=1$ with probability $1$.
Therefore, taking the expectation on both sides of \eqref{eq_int_1}, and plugging \eqref{eq_int_2} and \eqref{eq_int_3} into \eqref{eq_int_1}, we have
\begin{align} \label{eq_int_4}
\mathbb{E}\left[\left\|Z_\eta\left(T\right) - Z_\eta(0)\right \|_F^2\right] &\leq 2\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} \right]\mathrm{d}t \nonumber \\
%
& + T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2\right] \mathrm{d}t .
\end{align}
\end{proof}
\section{Proof of Theorem \ref{thm_large_gap} -- Main Result} \label{sec_proof_thm_large_gap}
\begin{proof}[Proof of Theorem \ref{thm_large_gap}]
To complete the proof of Theorem \ref{thm_large_gap}, we plug in the high-probability concentration bounds on the eigenvalue gaps $\Delta_{ij}(t) = \gamma_{i}(t) - \gamma_j(t)$ (Lemma \ref{lemma_gap_concentration}) into Lemma \ref{Lemma_integral}.
Since by Lemma \ref{lemma_gap_concentration} $\Delta_{ij}(t) \geq \frac{1}{2}(\sigma_i - \sigma_j)$ w.h.p. for each $i,j \leq k+1$, and $\eta_{ij} = \frac{1}{4}(\sigma_i - \max(\sigma_{j}, \sigma_{k+1}))$, by Lemma \ref{Lemma_orbit_differntial} we have that the derivative $\mathrm{d}\Psi(t)$ satisfies $\mathrm{d}\Psi(t) = \mathrm{d}Z_\eta(t)$ for all $t \in [0,T]$ w.h.p. and hence that $Z_\eta(t) = \Psi(t)$ for all $t \in [0, T]$ w.h.p.
Plugging in the high-probability bounds on the gaps $\Delta_{ij}(t)$ (Lemma \ref{lemma_gap_concentration}) into the bound on $\mathbb{E}\left[\left\|Z_\eta\left(T\right) - Z_\eta(0)\right \|_F^2\right]$ from Lemma \ref{Lemma_integral} therefore allows us to obtain a bound for $\mathbb{E}\left[\left\|\Psi\left(T\right) - \Psi(0)\right \|_F^2\right]$.
To obtain a bound for the utility $\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]$ we set $T = \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}$, in which case we have $\Psi(T) =\hat{V} \Lambda \hat{V}^\top$ and hence that $\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right] = \mathbb{E}\left[\left\|\Psi\left(T\right) - \Psi(0)\right \|_F^2\right]$.
Towards this end, for all $i\neq j$ we define $\eta_{ij}$ as follows:
Let $\eta_{ij} = \frac{1}{4}(\sigma_i - \max(\sigma_{j}, \sigma_{k+1}))$ for $0<i<j \leq d$ and $i\leq k$, $\eta_{ij}= 0$ if $0<i<j \leq d$ and $i> k$, and $\eta_{ij} = \eta_{ji}$ otherwise.
Define the event $E = \cap_{i,j \in [d], i\neq j}\{ \inf_{t\in[0,T]}\Delta_{ij}(t) \geq \eta_{ij}\}$.
And define the event $$\hat{E}:=\bigcap_{i\in [k]} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) \geq \frac{1}{4}(\sigma_i - \sigma_{i+1}) \right\}.$$
Then $\hat{E} \subseteq E$: on the event $\hat{E}$, for every $i \leq k$ and $j > i$ we have $\Delta_{ij}(t) \geq \Delta_{i,\min(j,k+1)}(t) \geq \frac{1}{4}(\sigma_i - \sigma_{\min(j,k+1)}) = \eta_{ij}$, while $\eta_{ij} = 0$ whenever $i > k$.
In particular, whenever the event $E$ occurs, by Lemma \ref{Lemma_orbit_differntial} we have that the derivative $\mathrm{d}\Psi(t)$ satisfies $\mathrm{d}\Psi(t) = \mathrm{d}Z_\eta(t)$ for all $t \in [0,T]$ and hence that
\begin{equation*}
\Psi(t) = \Psi(0) + \int_0^t \mathrm{d}\Psi(s) = Z_\eta(0) + \int_0^t \mathrm{d}Z_\eta(s) = Z_\eta(t)\qquad \qquad \forall t \in [0,T],
\end{equation*}
whenever the event $E$ occurs, since $Z_\eta(0) = \Psi(0)$ by definition.
Thus we have that, conditioning $\Psi(t)$ and $Z_\eta(t)$ on the event $E$,
\begin{equation}\label{eq_e6}
\Psi(t)|E = Z_\eta(t)|E \qquad \forall t \in [0,T].
\end{equation}
To bound the utility $\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]$, we first separate it into a sum of terms conditioned on the event $E$ and its complement $E^c$.
By Lemma \ref{Lemma_integral} we have
\begin{align}\label{eq_e1}
&\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right] \nonumber \\
%
&= \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\bigg| \, E\right]\times \mathbb{P}(E) + \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\bigg| E^c \, \right]\times \mathbb{P}(E^c)\nonumber \\
%
&\leq \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\bigg| \, E\right]\times \mathbb{P}(E) \nonumber\\
%
&+ 4\left(\mathbb{E}\left[\| (\hat{V} - V)\Lambda \hat{V}^\top\|_F^2\bigg| E^c \, \right] +\mathbb{E}\left[\| V \Lambda (\hat{V}-V)^\top \|_F^2\bigg| E^c \, \right]\right)\times \mathbb{P}(E^c)\nonumber \\
%
&\leq \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2|E\right]\times \mathbb{P}(E) + 8\mathbb{E}\left[\|\hat{V} -V\|_2^2\times \|\Lambda\|_F^2 \, \bigg| \, E^c\right]\times \mathbb{P}(E^c)\nonumber \\
%
&\leq \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\bigg| \, E\right]\times \mathbb{P}(E) + 32\|\Lambda\|_F^2\times \mathbb{P}(E^c) \nonumber \\
%
&=\mathbb{E}\left[\left\|\Psi\left(T\right) - \Psi(0)\right \|_F^2 \bigg| \, E\right] \times \mathbb{P}(E) + 32\|\Lambda\|_F^2\times \mathbb{P}(E^c) \nonumber \\
&\leq\mathbb{E}\left[\left\|\Psi\left(T\right) - \Psi(0)\right \|_F^2 \bigg| \, E\right] \times \mathbb{P}(E) + 32\lambda_1^2 k\times \mathbb{P}(\hat{E}^c),
%
\end{align}
%
where the second inequality holds by the sub-multiplicative property of the Frobenius norm, which says that $\|XY\|_F \leq \|X\|_2\times \|Y\|_F$ for any $X,Y \in \mathbb{R}^{d \times d}$.
%
The third inequality holds because $V$ and $\hat{V}$ are orthogonal matrices, so $\|V\|_2 = \|\hat{V}\|_2= 1$.
To bound the first term in \eqref{eq_e1}, we use the fact that $\Psi(t)|E = Z_\eta(t)|E$ (Equation \eqref{eq_e6}) and apply Lemma \ref{Lemma_integral} to bound $\mathbb{E}\left[\left\|Z_\eta\left(T\right) - Z_\eta(0)\right \|_F^2\right]$.
%
Thus we have,
%
\begin{align} \label{eq_e2}
&\mathbb{E}\left[\left\|\Psi\left(T\right) - \Psi(0)\right \|_F^2 \bigg| \, E\right]\times \mathbb{P}(E) \nonumber\\
%
&\stackrel{\textrm{Eq. }\eqref{eq_e6}}{=} \mathbb{E}\left[\left\|Z_\eta\left(T\right) - Z_\eta(0)\right \|_F^2 \bigg| \, E\right]\times \mathbb{P}(E) \nonumber \\
%
&\leq \mathbb{E}\left[\left\|Z_\eta\left(T\right) - Z_\eta(0)\right \|_F^2\right] \nonumber \\
%
&\stackrel{\textrm{Lemma }\ref{Lemma_integral}}{=} 2\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j \neq i} \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right] \mathrm{d}t
%
+ T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j\neq i} \frac{\lambda_i - \lambda_j}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \right]\mathrm{d}t \nonumber \\
%
&\leq 4\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{d} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right] \mathrm{d}t
%
+ 2T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{d}\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \right]\mathrm{d}t \nonumber \\
%
&=4\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right] \mathrm{d}t
%
+ 2T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{k}\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)}\right)^2 \right]\mathrm{d}t \nonumber\\
%
&\leq 64\int_0^{T} \mathbb{E}\left[ \sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2} \right] \mathrm{d}t \nonumber \\
%
& + 32T \int_0^{T}\mathbb{E}\left[\sum_{i=1}^{k}\left(\sum_{j = i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2 \right]\mathrm{d}t \nonumber \\
%
&= 64T \mathbb{E}\left[ \sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2} \right]
%
+ 32T^2 \mathbb{E}\left[\sum_{i=1}^{k}\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2 \right] \nonumber \\
%
&= 64T \sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
%
+ 32T^2 \sum_{i=1}^{k}\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2,
%
%
\end{align}
where the first equality holds since $\Psi(t)|E = Z_\eta(t)|E$ by \eqref{eq_e6}, and the second equality holds by Lemma \ref{Lemma_integral}.
The second inequality holds since, $ \frac{(\lambda_i - \lambda_j)^2}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} = \frac{(\lambda_j - \lambda_i)^2}{\max(\Delta^2_{ji}(t), \eta_{ji}^2)}$ and $\frac{|\lambda_i - \lambda_j|}{\max(\Delta^2_{ij}(t), \eta_{ij}^2)} = \frac{|\lambda_j - \lambda_i|}{\max(\Delta^2_{ji}(t), \eta_{ji}^2)}$ for all $i,j \in [d]$.
%
The third equality holds since $\lambda_{i} = 0$ for all $i \geq k+1$.
%
The third inequality holds since $\eta_{ij} = \frac{1}{4}(\sigma_i - \max(\sigma_{j}, \sigma_{k+1}))$ for all $0<i<j \leq d$.
To bound the second term in \eqref{eq_e1}, we have by Lemma \ref{lemma_gap_concentration} that
\begin{align}\label{eq_e3}
\mathbb{P}(\hat{E}^c) &= \mathbb{P}\left(\left(\bigcap_{i\in [k]} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) \geq \frac{1}{4}(\sigma_i - \sigma_{i+1}) \right\}\right)^c \, \, \right) \nonumber \\
%
&= \mathbb{P}\left(\bigcup_{i\in [k]} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) < \frac{1}{4}(\sigma_i - \sigma_{i+1}) \right\}\right) \nonumber\\
%
&\stackrel{\textrm{Assumption} \ref{assumption_gaps}}{\leq} \mathbb{P}\left(\bigcup_{i\in [k]} \left\{\inf_{t \in [0,T]} \gamma_i(t) - \gamma_{i+1}(t) < \frac{1}{2}(\sigma_i - \sigma_{i+1}) - 3\log^{\frac{1}{2}}(\lambda_1 k) \right\}\right) \nonumber\\
%
&\stackrel{\textrm{Lemma} \ref{lemma_gap_concentration}}{\leq} \min\left(e^{-\log(\lambda_1^2 k)},1\right) \nonumber\\
%
&=\min\left(\frac{1}{\lambda_1^2 k }, 1\right),
%
\end{align}
where the first inequality holds by Assumption \ref{assumption_gaps},
and the second inequality holds by Lemma \ref{lemma_gap_concentration}.
Therefore, plugging \eqref{eq_e2} and \eqref{eq_e3} into \eqref{eq_e1}, we have
\begin{align}\label{eq_e4}
&\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right] \nonumber\\
%
&\stackrel{\textrm{Eq. }\eqref{eq_e1}, \eqref{eq_e2}, \eqref{eq_e3}}{\leq} 64T \sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
%
+ 32T^2 \sum_{i=1}^{k}\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2 \nonumber\\ & + \min(32, \, \, 32\lambda_1^2 k) \nonumber\\
%
&\leq O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
%
+ \left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2\right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2},
\end{align}
where the last inequality holds since $T = \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}$.
Finally, we have by the Cauchy-Schwarz inequality and Assumption \ref{assumption_gaps} that
\begin{align}\label{eq_e5}
&\left(\sum_{j=i+1}^d \frac{|\lambda_i - \lambda_j|}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)^2 = \left(\sum_{j=i+1}^d \frac{1}{|\sigma_i-\max(\sigma_j, \sigma_{k+1})|} \times \frac{|\lambda_i - \lambda_j|}{|\sigma_i-\max(\sigma_j, \sigma_{k+1})|}\right)^2 \nonumber\\
%
& \stackrel{\textrm{Cauchy-Schwarz inequality}}{\leq} \left(\sum_{j=i+1}^d \frac{1}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right) \times \left(\sum_{j=i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)\nonumber\\
%
&\stackrel{\textrm{Assumption} \ref{assumption_gaps}}{\leq}
%
\left(\sum_{j=i+1}^d \frac{1}{(\sqrt{d})^2}\right) \times \left(\sum_{j=i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right)\nonumber\\
%
&\leq \sum_{j=i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}.
\end{align}
In other words, \eqref{eq_e5} says that the first term inside the outer summation on the r.h.s. of \eqref{eq_e4} is at least as large as the second term.
Therefore, plugging in \eqref{eq_e5} into \eqref{eq_e4}, we have that
\begin{align*}
\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right] \leq O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}\right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}.
\end{align*}
\end{proof}
\section{Proof of Corollary \ref{cor_rank_k_covariance2} -- Covariance Matrix Approximation}\label{sec:covariance}
\begin{proof}[Proof of Corollary \ref{cor_rank_k_covariance2}]
To prove Corollary \ref{cor_rank_k_covariance2},
%
we must bound the utility $\mathbb{E}[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F]$ of the post-processing of the Gaussian mechanism for the rank-$k$ covariance matrix estimation problem.
%
Towards this end, we first plug in $\lambda_i = \sigma_i$ for $i \leq k$ and $\lambda_i=0$ for $i>k$ into Theorem \ref{thm_large_gap} to obtain a bound for $\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - V \Sigma_k V^\top \|_F]$ (Inequality \eqref{eq_f6}).
%
We then apply Weyl's inequality (Lemma \ref{lemma_weyl}) together with a concentration bound for $\|B(T)\|_2$ (Lemma \ref{lemma_spectral_martingale}) to bound the perturbation to the eigenvalues of $M$ when the Gaussian noise matrix $B(T)$ is added to $M$ by the Gaussian mechanism.
%
This implies a bound on $\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - \hat{V} \hat{\Sigma}_k \hat{V}^\top \|_F]$ (Inequality \eqref{eq_f8}).
%
Combining these two bounds \eqref{eq_f6} and \eqref{eq_f8}, implies a bound on the utility $\mathbb{E}[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F]$ for the post-processing of the Gaussian mechanism (Inequality \eqref{eq_f11}).
\paragraph{Bounding the quantity $\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - V \Sigma_k V^\top \|_F]$.}
Let $\lambda_i = \sigma_i$ for $i \leq k$ and $\lambda_i=0$ for $i>k$ , and let $\Lambda:= \mathrm{diag}(\lambda_1,\cdots, \lambda_d)$.
Then by Assumption \ref{assumption_gaps} we have that $\sigma_i - \sigma_{i+1} \geq \frac{8\sqrt{\log(\frac{1.25}{\delta})}}{\varepsilon} \sqrt{d} + c\log^{\frac{1}{2}}(\sigma_1 k)$.
By Theorem \ref{thm_large_gap}, we have
\begin{equation}\label{eq_f4}
\mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]
%
\leq O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}.
\end{equation}
First, we note that $\lambda_i - \lambda_j = \sigma_i - \sigma_j$ for all $i < j \leq k$.
Then for all $i<j\leq k$, since $\max(\sigma_j, \sigma_{k+1}) = \sigma_j$, we have
\begin{align*}
\frac{\lambda_i - \lambda_j}{\sigma_i - \max(\sigma_j, \sigma_{k+1})} =
\frac{\sigma_i - \sigma_j}{\sigma_i - \sigma_j} = 1 \leq 1 + \frac{\sigma_k}{\sigma_k - \sigma_{k+1}}.
\end{align*}
And for all $i\leq k <j\leq d$ we have
\begin{align*}
&\frac{\lambda_i - \lambda_j}{\sigma_i - \max(\sigma_j, \sigma_{k+1})} =
\frac{\sigma_i}{\sigma_i - \sigma_{k+1}} = \frac{\sigma_i-\sigma_k}{\sigma_i - \sigma_{k+1}} + \frac{\sigma_k}{\sigma_i - \sigma_{k+1}}
\leq 1 + \frac{\sigma_k}{\sigma_k - \sigma_{k+1}}.
\end{align*}
Thus, since each of the at most $kd$ summands is bounded by $\left(1 + \frac{\sigma_k}{\sigma_k - \sigma_{k+1}}\right)^2$, the summation term in \eqref{eq_f4} can be bounded as
\begin{equation} \label{eq_f1}
\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
\leq kd\left( 1 + \frac{\sigma_k}{\sigma_k - \sigma_{k+1}}\right)^2 \leq 4kd\left(\frac{\sigma_k}{\sigma_k - \sigma_{k+1}}\right)^2.
\end{equation}
Here the last inequality uses that $1 + x \leq 2x$ for $x := \frac{\sigma_k}{\sigma_k - \sigma_{k+1}} \geq 1$.
Therefore, plugging \eqref{eq_f1}
into \eqref{eq_f4}, we have
%
\begin{align}\label{eq_f6}
\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - V \Sigma_k V^\top \|_F] &\leq \sqrt{\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - V \Sigma_k V^\top \|_F^2]} \nonumber\\
%
%
& \stackrel{\textrm{Eq.} \eqref{eq_f1}, \eqref{eq_f4} }{\leq} O\left(\sqrt{kd} \times \frac{\sigma_k}{\sigma_k - \sigma_{k+1}}\right) \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon},
\end{align}
%
where the first inequality holds by Jensen's inequality.
\paragraph{Bounding the perturbation to the eigenvalues.}
By Weyl's inequality (Lemma \ref{lemma_weyl}), we have that for every $i \in [d]$
\begin{align}\label{eq_f9}
\mathbb{E}[(\hat{\sigma}_i-\sigma_i)^2] &\leq \mathbb{E}[(\|B(T)\|_2)^2] \nonumber \\
%
&\leq 4(T\sqrt{d})^2 + 4\mathbb{E}\left[\left(\|B(T)\|_2 - T\sqrt{d} \right)^2\right] \nonumber\\
%
&\leq 4T^2d + 4\int_0^\infty \mathbb{P}\left(\left(\|B(T)\|_2 - T\sqrt{d} \right)^2 > \alpha\right) \mathrm{d} \alpha \nonumber \\
%
&= 4T^2d + 4\int_0^\infty \mathbb{P}\left(\left|\|B(T)\|_2 - T\sqrt{d}\right| > \sqrt{\alpha}\right) \mathrm{d} \alpha \nonumber\\
%
&\stackrel{\textrm{Lemma } \ref{lemma_spectral_martingale}}{\leq} 4T^2d + 8\sqrt{\pi} \int_0^\infty e^{-\frac{\alpha}{8T^2}}\mathrm{d} \alpha \nonumber\\
%
&= 4T^2d - 64\sqrt{\pi} T^2 e^{-\frac{\alpha}{8T^2}} \bigg|_{\alpha=0}^\infty \nonumber\\
%
&= 4T^2d + 64\sqrt{\pi} T^2\nonumber\\
%
&\leq 64\sqrt{\pi} \frac{\log(\frac{1}{\delta})}{\varepsilon^2} d.
\end{align}
%
The first inequality holds by Weyl's inequality (Lemma \ref{lemma_weyl}), and the fourth inequality holds by Lemma \ref{lemma_spectral_martingale}.
Therefore, \eqref{eq_f9} implies that,
\begin{align}\label{eq_f8}
\mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - \hat{V} \hat{\Sigma}_k \hat{V}^\top \|_F] &= \mathbb{E}[\| \Sigma_k- \hat{\Sigma}_k \|_F] \nonumber \\
%
&\leq \sqrt{\mathbb{E}[\| \Sigma_k- \hat{\Sigma}_k \|_F^2]} \nonumber \\
%
&\stackrel{\textrm{Eq.} \eqref{eq_f9} }{\leq} O\left(\sqrt{kd}\frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right),
\end{align}
where the first inequality holds by Jensen's inequality, and the second inequality holds by \eqref{eq_f9}.
Thus, combining \eqref{eq_f6} and \eqref{eq_f8} via the triangle inequality, we have that
\begin{align}\label{eq_f11}
\mathbb{E}[\| \hat{V} \hat{\Sigma}_k \hat{V}^\top - V \Sigma_k V^\top \|_F] &\leq \mathbb{E}[ \| \hat{V} \Sigma_k \hat{V}^\top - V \Sigma_k V^\top \|_F] + \mathbb{E}[\| \hat{V} \Sigma_k \hat{V}^\top - \hat{V} \hat{\Sigma}_k \hat{V}^\top \|_F] \nonumber\\
%
&\stackrel{\textrm{Eq.} \eqref{eq_f6}, \eqref{eq_f8} }{\leq} O\left(\sqrt{kd} \times\frac{\sigma_k}{\sigma_{k}-\sigma_{k+1}
}\right) \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}.
\end{align}
\paragraph{Privacy:}
{\em Privacy of perturbed covariance matrix $\hat{M}$:}
Recall that two matrices $M=A^\top A$ and $M' = A'^\top A'$ are said to be {\em neighbors} if they arise from $A, A' \in \mathbb{R}^{d\times n}$ which differ by at most one row, and that each row of the datasets $A, A'$ has norm at most $1$.
In other words, we have that $M-M' = xx^\top$ for some $x\in \mathbb{R}^d$ such that $\|x\| \leq 1$.
Define the sensitivity $S:= \max_{M,M' \textrm{ neighbors}} \|M - M'\|_{\ell_2}$, where $\|X\|_{\ell_2}$ denotes the Euclidean norm of the upper triangular entries of $X$ (including the diagonal entries).
Then we have
\begin{equation*}
S = \max_{M,M' \textrm{ neighbors}} \|M - M'\|_{\ell_2} \leq \max_{M,M' \textrm{ neighbors}} \|M - M'\|_F \leq \max_{\|x\|\leq 1} \|xx^\top \|_F = 1.
\end{equation*}
Then by standard results for the Gaussian Mechanism (e.g., by Theorem A.1 of \cite{dwork2014algorithmic}), we have that the Gaussian mechanism which outputs the upper triangular matrix $\hat{M}_{\mathrm{upper}}$ with the same upper triangular entries as $\hat{M} = M+ \frac{S\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top) = M+ \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$, where $G$ has i.i.d. $N(0,1)$ entries, is $(\varepsilon, \delta)$-differentially private.
However, since the perturbed matrix $\hat{M}$ is symmetric, it can be obtained from its upper triangular entries $\hat{M}_{\mathrm{upper}}$ without accessing the original matrix $M$.
Thus, the mechanism which outputs $\hat{M} = M+ \frac{\sqrt{2\log(\frac{1.25}{\delta})}}{\varepsilon}(G +G^\top)$ must also be $(\varepsilon, \delta)$-differentially private.
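For concreteness, the mechanism just described can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the function name, the use of NumPy, and the `rng` argument are our assumptions; the noise scale $\frac{\sqrt{2\log(1.25/\delta)}}{\varepsilon}$ and the symmetric perturbation $G + G^\top$ follow the expression above, assuming sensitivity $S \leq 1$.

```python
import numpy as np

def gaussian_mechanism(M, eps, delta, rng=None):
    """Return M + (sqrt(2*log(1.25/delta))/eps) * (G + G^T), with G having i.i.d. N(0,1) entries.

    Assumes sensitivity S <= 1, i.e. neighboring inputs differ by x x^T with ||x|| <= 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = M.shape[0]
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # noise scale from the text
    G = rng.standard_normal((d, d))                    # i.i.d. standard Gaussian entries
    return M + sigma * (G + G.T)                       # symmetric perturbation of M
```

Since $G + G^\top$ is symmetric, the output is determined by its upper-triangular entries, matching the post-processing argument above.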
{\em Privacy of rank-$k$ approximation $ \hat{V} \hat{\Sigma}_k \hat{V}^\top$:} The mechanism which outputs the rank-$k$ approximation $\hat{V} \hat{\Sigma}_k \hat{V}^\top$ is $(\varepsilon, \delta)$-differentially private, since $\hat{V} \hat{\Sigma}_k \hat{V}^\top$ is obtained by post-processing the perturbed matrix $\hat{M}$ without any additional access to the matrix $M$.
Namely, to obtain $\hat{V} \hat{\Sigma}_k \hat{V}^\top$, we first (i) compute the spectral decomposition $\hat{M}= \hat{V} \hat{\Sigma} \hat{V}^\top$.
Next, (ii) we take the top-$k$ eigenvalues $ \hat{\sigma}_1, \ldots, \hat{\sigma}_k$ of $\hat{M}$, and set $\hat{\Sigma}_k = \mathrm{diag}(\hat{\sigma}_1, \ldots, \hat{\sigma}_k, 0, \ldots, 0)$.
Finally, we output $\hat{M}_k := \hat{V}\hat{\Sigma}_k \hat{V}^\top$.
Both of these steps (i) and (ii) are post-processing of $\hat{M}$ and do not require additional access to the matrix $M$.
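The post-processing steps (i) and (ii) above amount to a short routine operating only on $\hat{M}$. The following is a hypothetical NumPy sketch (the function name is our choice; we rely on the documented fact that `np.linalg.eigh` returns eigenvalues in ascending order, so the top-$k$ eigenvalues are the last $k$):

```python
import numpy as np

def rank_k_approximation(M_hat, k):
    """Post-process a symmetric matrix: spectral decomposition, keep only the top-k eigenvalues."""
    evals, evecs = np.linalg.eigh(M_hat)  # step (i): spectral decomposition, ascending order
    evals_k = np.zeros_like(evals)
    evals_k[-k:] = evals[-k:]             # step (ii): keep the top-k eigenvalues, zero the rest
    return (evecs * evals_k) @ evecs.T    # output V_hat Sigma_k V_hat^T
```

Note that only $\hat{M}$ is accessed, so this is post-processing and preserves differential privacy.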
In particular, the eigenvalues $ \hat{\sigma}_1, \ldots, \hat{\sigma}_k$ of $\hat{M}_k$ are obtained from the perturbed matrix $\hat{M}$, and thus do not compromise privacy.
Therefore, the mechanism which outputs the rank-$k$ approximation $\hat{M}_k :=\hat{V} \hat{\Sigma}_k \hat{V}^\top$ must also be $(\varepsilon, \delta)$-differentially private.
\end{proof}
\section{Proof of Corollary \ref{cor_subspace_recovery} -- Subspace Recovery}\label{sec:cor_rank_subspace}
\begin{proof}[Proof of Corollary \ref{cor_subspace_recovery}]
To prove Corollary \ref{cor_subspace_recovery}, we plug $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$ into Theorem \ref{thm_large_gap}.
Corollary \ref{cor_subspace_recovery} considers two cases.
In the first case (referred to here as Case I), the eigenvalues $\sigma$ of the input matrix $M$ satisfy Assumption \ref{assumption_gaps}.
In the second case (referred to here as Case II), the eigenvalues $\sigma$ of $M$ satisfy Assumption \ref{assumption_gaps} and, in addition, the lower bound $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for all $i \leq k$.
We derive a bound on the utility $\mathbb{E}[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F]$ in each case separately.
\paragraph{Case I: $M$ satisfies Assumption \ref{assumption_gaps}.}
Plugging $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$ into Theorem \ref{thm_large_gap}, we get that, since $M$ satisfies Assumption \ref{assumption_gaps} for ($M,k,2,\varepsilon,\delta$),
\begin{align}\label{eq_h1}
\mathbb{E}\left[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F^2\right] &= \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]\nonumber\\
%
&\stackrel{\textrm{Theorem }\ref{thm_large_gap}}{\leq} O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
\right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2} \nonumber\\
%
&= O\left(\sum_{i=1}^{k} \sum_{j = k+1}^d \frac{1}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\nonumber\\
&\leq O\left(\sum_{i=1}^{k} \sum_{j = k+1}^d \frac{1}{(\sigma_k- \sigma_{k+1})^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\nonumber\\
%
%
&= O\left(\frac{k d}{(\sigma_k- \sigma_{k+1})^2} \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\right),
%
\end{align}
where the first inequality holds by Theorem \ref{thm_large_gap}, and the second equality holds since $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$.
Thus, applying Jensen's Inequality to Inequality \eqref{eq_h1}, we have that
\begin{equation*}
\mathbb{E}[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F] \leq O\left(\frac{\sqrt{kd}}{(\sigma_k- \sigma_{k+1})} \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right).
\end{equation*}
\paragraph{Case II: $M$ satisfies Assumption \ref{assumption_gaps} and $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for all $i \leq k$.}
Plugging $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$ into Theorem \ref{thm_large_gap}, we get that, since $M$ satisfies Assumption \ref{assumption_gaps} for ($M,k,2,\varepsilon,\delta$),
\begin{align}\label{eq_h2}
\mathbb{E}\left[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F^2\right] &= \mathbb{E}\left[\| \hat{V} \Lambda \hat{V}^\top - V \Lambda V^\top \|_F^2\right]\nonumber\\
%
&\stackrel{\textrm{Theorem }\ref{thm_large_gap}}{\leq} O\left(\sum_{i=1}^{k} \sum_{j = i+1}^d \frac{(\lambda_i - \lambda_j)^2}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2}
\right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2} \nonumber\\
%
&= O\left(\sum_{i=1}^{k} \sum_{j = k+1}^d \frac{1}{(\sigma_i-\max(\sigma_j, \sigma_{k+1}))^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\nonumber\\
&\leq O\left(\sum_{i=1}^{k} \sum_{j = k+1}^d \frac{1}{(i-k-1)^2(\sigma_k- \sigma_{k+1})^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\nonumber\\
%
&\leq O\left(\sum_{i=1}^{k} \frac{d}{(i-k-1)^2(\sigma_k- \sigma_{k+1})^2} \right) \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\nonumber\\
%
&\leq O\left(\frac{d}{(\sigma_k- \sigma_{k+1})^2} \frac{\log(\frac{1}{\delta})}{\varepsilon^2} \sum_{i=1}^{k} \frac{1}{i^2} \right)\nonumber\\
%
&\leq O\left(\frac{d}{(\sigma_k- \sigma_{k+1})^2} \frac{\log(\frac{1}{\delta})}{\varepsilon^2}\right),
%
\end{align}
where the first inequality holds by Theorem \ref{thm_large_gap}, the second equality holds since $\lambda_1 = \cdots = \lambda_k =1$ and $\lambda_{k+1} = \cdots = \lambda_d = 0$,
the second inequality holds since $\sigma_i - \sigma_{i+1} \geq \Omega(\sigma_k - \sigma_{k+1})$ for all $i \leq k$,
and the last inequality holds since $\sum_{i=1}^{k} \frac{1}{i^2} \leq \sum_{i=1}^{\infty} \frac{1}{i^2} = O(1)$.
Thus, applying Jensen's Inequality to Inequality \eqref{eq_h2}, we have that
\begin{equation*}
\mathbb{E}[\| \hat{V}_k \hat{V}_k^\top - V_k V_k^\top \|_F] \leq O\left(\frac{\sqrt{d}}{(\sigma_k- \sigma_{k+1})} \frac{\log^{\frac{1}{2}}(\frac{1}{\delta})}{\varepsilon}\right).
\end{equation*}
\end{proof}
\section{Conclusion and Future Work}\label{sec_conclusion}
We present a new analysis of the Gaussian mechanism for a large class of symmetric matrix approximation problems, by viewing this mechanism as a Dyson Brownian motion initialized at the input matrix $M$.
This viewpoint allows us to leverage the stochastic differential equations which govern the evolution of the eigenvalues and eigenvectors of Dyson Brownian motion to obtain new utility bounds for the Gaussian mechanism.
To obtain our utility bounds, we show that the gaps $\Delta_{ij}(t)$ in the eigenvalues of the Dyson Brownian motion stay at least as large as the initial gap sizes (up to a constant factor), as long as the initial gaps in the top $k+1$ eigenvalues of the input matrix are $\geq \Omega(\sqrt{d})$ (Assumption \ref{assumption_gaps}).
While we observe that our assumption on the top-$(k+1)$ eigenvalue gaps holds on multiple real-world datasets, in practice one may need to apply differentially private matrix approximation to any matrix whose ``effective rank'' is $k$, that is, any matrix where the $k$'th eigenvalue gap $\sigma_k - \sigma_{k+1}$ is large, including matrices where the gaps between the other eigenvalues may not be large and may even be zero.
Unfortunately, for matrices with initial gaps in the top-$k$ eigenvalues smaller than $O(\sqrt{d})$, the gaps $\Delta_{ij}(t)$ in the eigenvalues of the Dyson Brownian motion become small enough that the expectation of the (inverse) second-moment term $\frac{1}{\Delta^2_{ij}(t)}$ appearing in the It\^o integral (Lemma \ref{Lemma_integral}) in our analysis may be very large or even infinite.
Thus, the main question that remains open is whether one can obtain similar utility bounds for differentially private matrix approximation for any initial matrix $M$ where the $k$-th gap $\sigma_k - \sigma_{k+1}$ is large, without any assumption on the gaps between the other eigenvalues of $M$.
Finally, this paper analyzes a mechanism in differential privacy, which has many implications for preserving sensitive information of individuals. Thus, we believe our work will have positive societal impacts and do not foresee any negative impacts on society.
\section*{Acknowledgements}
This research was supported in part by NSF CCF-2104528 and CCF-2112665 awards.
\section{Conclusions}
\label{sec:conclusions}
With this paper we introduced a fast and unsupervised method addressing the problem of finding semantic categories by detecting consistent visual pattern repetitions at a given scale. The proposed pipeline hierarchically detects self-similar regions represented by a segmentation mask.
As demonstrated in the experimental evaluation, our approach retrieves more than one pattern and achieves better performance than competing methods. We also introduced the concept of \emph{semantic levels}, endowed with a dedicated dataset and a new metric, to provide other researchers with tools to evaluate the consistency of their approaches.
\subsection{Acknowledgments}
We would like to express our gratitude to Alessandro Torcinovich and Filippo Bergamasco for their suggestions to improve the work. We also thank Mattia Mantoan for his work to produce the dataset labeling.
\section{Experiments}
\label{sec:experimental}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/superpixels_distributions.jpg}
\caption{(top) Analysis of the measures as the number of retrieved superpixels $|\mathcal{P}|$ varies. The rightmost figure shows the running time of the algorithm. We repeated the experiments with the noisy version of the dataset but report only the mean, since the variation is almost identical to that of the original. (bottom) Distributions of the measures for the two semantic levels, obtained by varying the two main parameters $r$ and $|\mathcal{P}|$.}
\label{fig:superpixels}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/qualitative.jpg}
\caption{Qualitative comparison between \cite{DBLP:conf/cvpr/LiuL13}, \cite{DBLP:conf/wacv/LettryPVG17} and our algorithm. Our method detects and segments more than one pattern and does not constrain itself to a particular geometrical disposition.}
\label{fig:distributions}
\end{figure}
\subsubsection{Dataset}
As introduced in Section \ref{sec:introduction}, one of the aims of this work is to provide a better comparative framework for visual pattern detection. To this end we created a public dataset by taking 104 pictures of store shelves. Each picture was taken with a 5mpx camera under approximately the same visual conditions. We also rectified the images to eliminate visual distortions.
We manually segmented and labeled each repeating product at two different semantic levels. In the \textbf{first semantic level}, \emph{products made by the same company} share the same label. In the \textbf{second semantic level}, visual repetitions consist of \emph{exactly identical products}. In total the dataset is composed of 208 ground truth images, half for the first level and half for the second.
\subsubsection{$\mu$-consistency}
We devised a new measure that captures the semantic consistency of a detected pattern and acts as a proxy for the average precision of detection.
In fact, we want to be sure that all pattern instances fall on similar ground truth objects. First we introduce the concept of semantic consistency for a particular pattern $\vec{p}$. Let $\vec{P}$ be the set of patterns discovered by the algorithm; each pattern $\vec{p}$ contains several instances $\vec{p}_{i}$. Let $\vec{L}$ be the set of ground truth categories, where each category $\vec{l}$ contains several object instances $\vec{l}_{i}$. We define $\vec{t}_{p}$ as the vector of ground truth labels touched by the instances of $\vec{p}$. We say that $\vec{p}$ is consistent if all its instances $\vec{p}_{i}, i=0\dots |\vec{p}|$ fall on ground truth regions sharing the same label. In this case $\vec{t}_{p}$ is uniform and we consider $\vec{p}$ a good detection. The worst scenario occurs when every instance $\vec{p}_{i}$ of a pattern $\vec{p}$ falls on objects with a different label $\vec{l}$, i.e. all the values in $\vec{t}_{p}$ are different.
To get an estimate of the overall consistency of the proposed detection, we average the consistency for each $\vec{p} \in \vec{P}$ giving us:
\begin{equation}
\text{$\mu$-consistency} = \frac{1}{\left | \vec{P} \right |} \sum_{\vec{p} \in \vec{P}} \frac{\left| \operatorname{mode}\left(\vec{t}_{p}\right)\right|}{\left|\vec{t}_{p}\right|}
\end{equation}
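Concretely, the metric admits a few-line sketch in Python. This is our own illustrative implementation, not the authors' code: each detected pattern is assumed to be summarized by the list $\vec{t}_{p}$ of ground-truth labels touched by its instances.

```python
from collections import Counter

def mu_consistency(patterns):
    """patterns maps each detected pattern id to t_p, the list of
    ground-truth labels touched by that pattern's instances."""
    score = 0.0
    for t_p in patterns.values():
        # |mode(t_p)| / |t_p|: fraction of instances agreeing on the
        # most frequent ground-truth label touched by the pattern
        score += Counter(t_p).most_common(1)[0][1] / len(t_p)
    return score / len(patterns)

# a fully consistent pattern scores 1; mixed patterns score lower
print(mu_consistency({"p1": ["a", "a", "b"], "p2": ["c", "c"]}))  # ≈ 0.833
```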
\subsubsection{recall}
The second measure is the classical recall over the objects retrieved by the algorithm. Since our object detector outputs more than one pattern, we average the recall for each ground truth label by taking the best-fitting pattern:
\begin{equation}
\frac{1}{\left | \vec{L} \right |} \sum_{\vec{l} \in \vec{L}} \operatorname{max}_{\vec{p} \in \vec{P}} \text { recall }(\vec{p}, \vec{l})
\end{equation}
The last measure is the \textbf{total recall}: here we count a hit if any of the patterns falls in a labeled region. In general we expect this to be higher than the recall.
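Both recall variants admit a compact set-based sketch (again ours, with hypothetical names): patterns and ground-truth labels are represented as sets of covered pixels (or object instances), and recall$(\vec{p}, \vec{l})$ is taken as the fraction of $\vec{l}$ covered by $\vec{p}$.

```python
def label_recall(patterns, labels):
    """Average, over ground-truth labels, of the best single-pattern recall."""
    best = [max(len(p & l) / len(l) for p in patterns.values())
            for l in labels.values()]
    return sum(best) / len(labels)

def total_recall(patterns, labels):
    """A hit is counted if *any* pattern touches the labeled region."""
    covered = set().union(*patterns.values())
    return sum(1 for l in labels.values() if covered & l) / len(labels)
```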
We report the summary performances in Figure \ref{fig:distributions}. As can be seen, the algorithm achieves a very high $\mu$-consistency while still retrieving the majority of the ground truth patterns at both levels.
One can observe in Figure \ref{fig:superpixels} an inverse behaviour between recall and consistency as the number of retrieved superpixels grows. This is expected, since fewer superpixels mean bigger patterns, and it is therefore more likely to retrieve more ground truth patterns.
In order to study the robustness we repeated the same experiments with an altered version of our dataset. In particular for each image we applied one of the following corruptions: Additive Gaussian Noise ($scale=0.1*255$), Gaussian Blur ($\sigma = 3$), Spline Distortions (grid affine), Brightness ($+100$), and Linear Contrast ($1.5$).
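The pointwise corruptions can be reproduced with NumPy alone; blur and spline distortions require an image-processing library, so only the former are sketched here (our own sketch, with illustrative function names, operating on float images in $[0, 255]$):

```python
import numpy as np

def corrupt(img, kind, rng=None):
    """Apply one of the pointwise robustness corruptions used in the paper."""
    if rng is None:
        rng = np.random.default_rng(0)
    img = img.astype(np.float64)
    if kind == "gaussian_noise":        # additive noise, scale = 0.1 * 255
        img = img + rng.normal(0.0, 0.1 * 255, img.shape)
    elif kind == "brightness":          # brightness shift, +100
        img = img + 100.0
    elif kind == "contrast":            # linear contrast, factor 1.5
        img = 127.5 + 1.5 * (img - 127.5)
    return np.clip(img, 0.0, 255.0)
```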
\subsubsection{Qualitative Validation}
We begin the comparison by commenting on \cite{DBLP:conf/cvpr/LiuL13}. One can observe that our approach has a significant advantage in terms of how the visual pattern is modeled. While the authors model visual repetitions as geometrical artifacts associating points, we output a higher-order representation of the visual pattern. Indeed, the capability to provide a segmentation mask of the repeated instance region, together with the ability to span different levels, unlocks a wider range of use cases and applications.
As a qualitative comparison we also added the latest (and only) deep learning based methodology we found \cite{DBLP:conf/wacv/LettryPVG17}. This methodology is only able to find a single visual pattern, namely the most frequent and most significant with respect to the filter weights. This means that the detection strongly depends on the training set of the CNN backbone, while our algorithm is fully unsupervised and data agnostic.
\subsubsection{Quantitative Validation}
We quantitatively compared our method against \cite{DBLP:conf/cvpr/LiuL13}, which constitutes, to the best of our knowledge, the only existing work able to detect more than one visual pattern. We recreated the experimental settings of the authors by using the Face dataset \cite{DBLP:journals/cviu/Fei-FeiFP07} as benchmark, achieving $1.00$ precision vs. $0.98$ of \cite{DBLP:conf/cvpr/LiuL13}, and $0.77$ recall vs. $0.63$. We counted a miss on the object retrieval task if more than 20\% of a pattern's total area falls outside the ground truth. The parameters used were $|\mathcal{C}|=9000$, $k=15$, $r=30$, $\tau=5$, $| \mathcal{P} |=150$. We also fixed the window of the gaussian vote to $11 \times 11$ pixels throughout all the experiments.
\section{Introduction}
\label{sec:introduction}
While the vast majority of supervised object detection and segmentation approaches leverage rich datasets with semantically labelled categories, unsupervised methods cannot rely on such a luxury.
Indeed they are expected to infer from the image content itself what is a relevant object and which are its boundaries. This is a daunting task, as relevance is totally domain-specific and also highly subjective, especially when taking in account human judgement, which exploits a lot of out-of-band information that cannot be found in the sheer image data.
As a matter of fact, little effort has been put into investigating unsupervised automatic approaches to detect and segment semantically relevant objects without any information beyond the image itself or any \textit{a priori} knowledge of the context. This is due to the fact that a unique definition of what a relevant object is (or, as we prefer to call it, a \emph{visual category}) does not actually exist.
This is especially true if we are seeking to set a formal definition that can be adopted across all the domains in a consistent manner with respect to human judgement.
Within this paper, we try to address this problem by considering as a visual category each pattern whose appearance is consistent enough across the image. In other words, we consider something to be a relevant object if it appears more than once, exhibiting consistent visual features in different parts of the scene.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/frontpage.jpg}
\caption{A real world example of unsupervised segmentation of a grocery shelf. Our method can automatically discover both low-level coherent patterns (brands, flavor images and logos) and high-level compound objects (multi-packs and bricks) by controlling the semantical level of the detection and segmentation process.}
\label{fig:frontpage}
\end{figure}
From a cognitive and perceptual point of view this makes a lot of sense. In fact, it is easy to observe that if a human is presented with images representing several different but recurring objects, even in a cluttered scene, he does not need to know what the objects actually represent in order to be able to assign semantically-consistent labels to each of them. He would even be able to label each pixel, defining the boundaries of the objects.
As an example, if someone takes a look at a large bin of different (but to some extent repeated) mechanical parts he never saw before, he is still able to tell one part from the other by exploiting their coherent visual and structural appearance.
This ability is also preserved with slight changes in scale, orientation or partial occlusion of the objects.
Since this automatic assignment to a visual category of recurrent object is both well-defined and quite natural in humans, it is a very good candidate as a rule for automatically detecting relevant objects in an unsupervised manner that has good chances of being coherent with human judgement applied to the same image.
To be fair, we must also underline the fact that, in order to define the boundaries of a visual category and thus obtain a meaningful segmentation, also the level of detail must be taken into account.
As an example, if we present to a human an image of a crowded road captured from the side, and we ask him to segment visual categories according to recurrent patterns, we could get slightly different results from different people depending on their attention to details. Some people will segment cars and trees. Others could consider the car body to be a different object from the wheels, and the branches different from the tree trunk. The most picky could even separate tires from wheel rims and segment out each single leaf. In practice, semantic consistency can happen at different scales when dealing with compound objects that present internal self-repetitions or that are made up of single parts also present in other objects.
To address this aspect we also have to design a proper strategy to perform visual category detection and interpretation at a particular scale, according to the level of detail we want to express during the segmentation process. We define this level of detail as \emph{semantical level}. Semantical levels, of course, do not map directly on specific high level concepts, such as whole objects, large parts or minute components. Rather the semantic level will act as a coarse degree of granularity of the segmentation process that will result in a hierarchical split of segments as it changes.
These two definitions of \emph{visual categories} and \emph{semantical levels}, that will be developed throughout the remainder of the paper, are the two key concepts driving our novel segmentation method.
The ability of our approach to leverage repetitions to capture the internal representation of the real world, and then to extrapolate visual categories at a specific semantical level, is actually achieved through the combination of a couple of standard techniques, slightly modified for the specific task, and of a few key steps specifically crafted to make the process work consistently with the cognitive process adopted by humans. This happens, for instance, by seeking highly relevant repetitive structural patterns, called \emph{semantical hotspots}, characterized by a novel feature descriptor, called \emph{splash}. We do this through a scale-invariant method and with no continuous geometrical constraints on the visual pattern disposition.
We also do not constrain ourselves to find only one visual pattern, which is another very common assumption with other approaches in literature. Rather our technique is designed from the start to be able to detect more patterns at once, being able to assign to each of them a different visual category label, corresponding to a different real world object or object part, according to the selected semantical level.
\pagebreak
Overall, with this paper, we are offering to the community the following contributions:
\begin{itemize}
\item A new pipeline, including the definition of a specially crafted feature descriptor, to capture semantical categories with the ability to hierarchically span over semantical levels;
\item A specially crafted conceptual framework to evaluate unsupervised semantic-driven segmentation methods through the introduction of the semantical levels notion along with a new metric;
\item A new dataset consisting of a few hundred labelled images that can be used as a benchmark for visual repetition detection in general.
\end{itemize}
The remainder of the paper is organized as follows. Section \ref{sec:relatedworks} describes the related works with respect to feature extraction and automatic visual patterns detection. Section \ref{sec:methoddescription} introduces our method, giving details on the overall pipeline and on the implementation details. Section \ref{sec:experimental} presents an experimental evaluation and comparison with similar approaches. Finally, the conclusions are found in Section \ref{sec:conclusions}.
Code, dataset and notebooks used in this paper will be made available for public use.
\section{Introduction}
\label{sec:introduction}
The extraction of semantic categories from images is a fundamental task in image understanding \cite{mottaghi2014role,cordts2016cityscapes,zhou2017scene}. While the task is one that has been widely investigated in the community, most approaches are supervised, making use of labels to detect semantic categories \cite{chen2018encoder}. Comparatively less effort has been put to investigate automatic procedures which enable an intelligent system to learn autonomously extrapolating visual semantic categories without any \textit{a priori} knowledge of the context.
We observe that in order to define what a visual pattern is, we need to define a scale of analysis (objects, parts of objects, etc.). We call these scales the \textit{semantic levels} of the real world. Unfortunately, most influential models arising from deep learning approaches still show limited scale invariance \cite{singh2018analysis,li2019scale}, which instead is common in nature. In fact, we don't really care much about scale, orientation or partial observability in the semantic world. For us, it is way more important to preserve an ``internal representation'' that matches reality \cite{DiCarlo2012HowDT,logothetis1996visual}.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/levels.jpg}
\caption{A real world example of unsupervised segmentation of a grocery shelf. Our method can automatically discover both low-level coherent patterns (brands and logos) and high-level compound objects (detergents) by controlling the semantic level of the detection and segmentation process.}
\label{fig:levels}
\end{figure}
Our method leverages repetitions (Figure \ref{fig:levels}) to capture the internal representation of the real world and then extrapolates categories at a specific semantic level. We do this without continuous geometrical constraints on the visual pattern disposition, which are common among other methodologies \cite{DBLP:conf/cvpr/PrittsCM14,DBLP:conf/wacv/LettryPVG17,DBLP:journals/cg/Rodriguez-Pardo19,DBLP:conf/cvpr/HeZRS16}.
We also do not constrain ourselves to find only one visual pattern, which is another very common assumption. Indeed, what if the image has more than one visual pattern? One can observe that this is \textit{always} the case. Each visual repetition can be hierarchically decomposed into its smaller parts which, in turn, repeat over different semantic levels. This peculiar observation allows our work to contribute to the community as follows:
\begin{itemize}
\item A new pipeline able to capture semantic categories with the ability to hierarchically span over semantic levels.
\item A better conceptual framework to evaluate analogous works through the introduction of the semantic levels notion along with a new metric.
\item A new benchmark dataset of 208 labelled images for visual repetition detection.
\end{itemize}
Code, dataset and notebooks are public and available at: \texttt{\url{https://git.io/JT6UZ}}.
\section{Method Description}
\label{sec:methoddescription}
\subsection{Features Localization and Extraction}
\label{sec:featureslocalizationextraction}
We observe that any visual pattern is delimited by its contours. The first step of our algorithm therefore consists in the extraction of a set $\mathcal{C}$ of contour \emph{keypoints}, each indicating a position $\vec{c}_{j}$ in the image. To extract keypoints, we opted for the Canny algorithm for its simplicity and efficiency, although more recent and better edge extractors could be used \cite{liu2019richer} to improve the overall procedure.
A descriptor $\vec{d}_{j}$ is then computed for each selected $\vec{c}_{j} \in \mathcal{C}$, thus obtaining a \emph{descriptor set} $\mathcal{D}$. In particular, we adopted the DAISY algorithm because of its appealing dense matching properties, which nicely fit our scenario. Again, this module of the pipeline could be replaced with something more advanced such as \cite{DBLP:conf/nips/OnoTFY18}, at the cost of some computational time.
\subsection{Semantic Hot Spots Detection}
\label{sec:semantichotspotdetection}
\begin{figure*}[t!]
\centering
\includegraphics[width=1\linewidth]{figures/accumulator.jpg}
\caption{(a) A splash in the image space with center in the keypoint $\vec{c}_j$. (b) $\mathcal{H}$, with the superimposed splash at the center, you can note the different levels of the vote ordered by endpoint importance i.e. descriptor similarity. (c) 3D projection showing the gaussian-like formations and the thresholding procedure of $\mathcal{H}$. (d) Backprojection through the set $\mathcal{S}$.}
\label{fig:houghaccumulator}
\end{figure*}
In order to detect self-similar patterns in the image, we start by associating to each descriptor $\vec{d}_j$ its $k$ most similar descriptors. We can visualize this data structure as a star subgraph with $k$ endpoints, called a \emph{splash}, ``centered'' on descriptor $\vec{d}_{j}$; Figure \ref{fig:houghaccumulator} (a) shows one.
Splashes potentially encode repeated patterns in the image, and similar patterns are then represented by similar splashes.
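As an illustration of the splash construction, the following NumPy sketch (our own simplification, using brute-force Euclidean matching rather than the actual DAISY machinery) links each keypoint to its $k$ most similar descriptors and stores the endpoint offsets in image space; keypoints lying on repeated patterns produce repeated offset sets:

```python
import numpy as np

def build_splashes(keypoints, descriptors, k):
    """keypoints: (n, 2) image coordinates; descriptors: (n, d) array.
    Returns an (n, k, 2) array of splash endpoint offsets per keypoint."""
    # pairwise squared descriptor distances (brute force, O(n^2 d))
    d2 = ((descriptors[:, None, :] - descriptors[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)        # a point is not its own neighbour
    nn = np.argsort(d2, axis=1)[:, :k]  # k most similar descriptors
    # endpoint offsets: similar visual patterns yield similar splashes
    return keypoints[nn] - keypoints[:, None, :]
```

On a toy input with two repeated pairs, the splashes of corresponding keypoints come out identical, which is exactly what the accumulator voting of the next step exploits.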
The next step consists in separating these splashes from those that encode only noise; this is accomplished through an accumulator space.
In particular, we consider a $2$-D \emph{accumulator space} $\mathcal{H}$ of twice the image size. We then superimpose each splash on the space $\mathcal{H}$ and cast $k$ votes as shown in Figure \ref{fig:houghaccumulator} (b). In order to take into account the noise present in the splashes, we adopt a gaussian vote-casting procedure $g(\cdot)$. Similar superimposed splashes contribute to similar locations on the accumulator space, resulting in peak formations (Figure \ref{fig:houghaccumulator} (c)). We summarize the voting procedure as follows:
\begin{equation}
\mathcal{H}_{\vec{w}} = \mathcal{H}_{\vec{w}} + g(\vec{w}, \vec{h}^{(j)}_{i})
\end{equation}
where $\vec{h}^{(j)}_{i}$ is the $i$-th splash endpoint of descriptor $\vec{d}_j$ in accumulator coordinates and $\vec{w}$ is a location in the accumulator. We filter all the regions in $\mathcal{H}$ which are above a certain \emph{threshold} $\tau$, obtaining a set $\mathcal{S}$ of locations corresponding to the peaks in $\mathcal{H}$. The $\tau$ parameter acts as a coarse filter and is not a critical parameter of the overall pipeline; a sufficient value is $\tau = 0.05 \cdot \max(\mathcal{H})$.
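A minimal version of the voting procedure is sketched below, assuming splash endpoints are expressed as offsets from their origin so that a zero offset maps to the accumulator centre (hence the doubled size). The window size, $\sigma$, and bound handling are our illustrative choices, not the paper's exact values:

```python
import numpy as np

def cast_votes(offsets, acc_shape, sigma=1.0, radius=5):
    """Accumulate Gaussian votes for (dy, dx) splash endpoint offsets.
    Assumes every offset fits inside the accumulator minus the window."""
    H = np.zeros(acc_shape)
    cy, cx = acc_shape[0] // 2, acc_shape[1] // 2   # (0, 0) offset -> centre
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))  # gaussian window
    for dy, dx in offsets:
        y, x = cy + int(dy), cx + int(dx)
        H[y - radius:y + radius + 1, x - radius:x + radius + 1] += g
    return H

def peaks(H, frac=0.05):
    """Coarse thresholding: tau = frac * max(H)."""
    return np.argwhere(H > frac * H.max())
```

Repeated offsets pile up into Gaussian peaks, while isolated (noisy) offsets stay below the threshold.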
Lastly, in order to visualize the semantic hotspots in the image plane we map splash locations between $\mathcal{H}$ and the image plane by means of a \emph{backtracking structure} $\mathcal{V}$.
In summary, the key insight here is that similar visual regions share similar splashes. We discern noisy splashes from representative ones through an auxiliary structure, namely an accumulator. We then identify, and backtrack to the image plane, the semantic hotspots that are candidate points belonging to a visual repetition.
\subsection{Semantic Categories Definition and Extraction}
\label{sec:graph}
While the first part described above acts as a filter for noisy keypoints, yielding a good pool of candidates, we now transform the problem of finding visual categories into a problem of dense subgraph extraction.
We enclose semantic hotspots in superpixels; this extends the semantic significance of the identified points to a broader, but coherent, area. To do so we use the SLIC \cite{DBLP:journals/pami/AchantaSSLFS12} algorithm, which is simple and one of the fastest approaches to extract superpixels, as pointed out in a recent survey \cite{DBLP:journals/cviu/StutzHL18}. We then choose the cardinality $|\mathcal{P}|$ of the set of \emph{superpixels} to extract. This is the second and most fundamental parameter, which allows us to span different semantic levels.
Once the superpixels have been extracted, let $\mathcal{G}$ be an \emph{undirected weighted graph} where each node corresponds to a superpixel $p \in \mathcal{P}$. In order to place edges between graph nodes (i.e. two superpixels), we exploit the splash origins and endpoints. In particular, the strength of the connection between two vertices in $\mathcal{G}$ is given by the number of splash endpoints falling between the two in a mutually coherent way: to put a weight of 1 between two nodes we need exactly 2 splashes whose origins and endpoints fall in the two candidate superpixels, one in each direction.
With this construction scheme, the graph exhibits clear dense subgraph formations. Therefore, the last part simply computes a partition of $\mathcal{G}$ where each connected component corresponds to a cluster of similar superpixels. In order to achieve this objective we optimize a function that is maximized when the partition reflects these clusters. To this end we define the following \textit{density score} which, given $G$ and a set $K$ of connected components, captures the optimality of the clustering:
\begin{equation}
s(G, K) = \sum_{k \in K} \mu(k) - \alpha \left | K \right |
\end{equation}
where $\mu(k)$ is a function that computes the average edge weight in an undirected weighted graph.
\begin{algorithm}[t]
\caption{Semantic categories extraction algorithm}
\vspace{2mm}
\begin{algorithmic}
\REQUIRE $G$ weighted undirected graph
\STATE $i=0$
\STATE $s^{*}=-\infty$
\STATE $K^{*}= \emptyset$
\WHILE{$G_{i}$ is not fully disconnected}
\STATE $i = i + 1$
\STATE Compute $G_{i}$ by corroding each edge with the minimum edge weight
\STATE Extract the set $K_{i}$ of all connected components in $G_{i}$
\STATE $s(G_{i}, K_{i}) = \sum_{k \in K_{i}} \mu(k) - \alpha \left | K_{i} \right |$
\IF{$s(G_{i}, K_{i}) > s^{*}$}
\STATE $s^{*} = s(G_{i}, K_{i})$
\STATE $K^{*} = K_{i}$
\ENDIF
\ENDWHILE
\RETURN $s^{*}, K^{*}$
\end{algorithmic}
\label{alg:graphcorrosion}
\end{algorithm}
The first term in the score function assigns a high value if each connected component is dense, while the second term acts as a regularizer on the number of connected components. We also added a weighting factor $\alpha$ to better adjust the procedure. As a proxy to maximize this function we devised an \emph{iterative algorithm}, reported in Algorithm \ref{alg:graphcorrosion}, based on graph corrosion and with time complexity $O(\left | E \right |^{2} + \left | E \right | \left | V \right |)$. At each step the procedure corrodes the graph edges by the minimum edge weight of $G$. For each corroded version of the graph, which we call a \emph{partition}, we compute $s$ to capture the density. Finally the algorithm selects the corroded graph partition which maximizes $s$ and subsequently extracts the node groups.
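The corrode-score-keep-best loop of Algorithm \ref{alg:graphcorrosion} can be prototyped directly on a dictionary-based graph. The following sketch is ours (function names and the union-find helper are illustrative), with $\mu(k)$ taken as the average intra-component edge weight as in the score definition:

```python
def connected_components(nodes, edges):
    """Components of an undirected graph given as {(u, v): weight}."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), []).append(n)
    return list(comps.values())

def corrode(nodes, edges, alpha=1.0):
    """Repeatedly subtract the minimum edge weight, score each partition
    by s(G, K) = sum of average intra-component weights - alpha * |K|,
    and keep the best-scoring partition."""
    best_score, best_parts = float("-inf"), None
    while edges:                                 # until fully disconnected
        m = min(edges.values())
        edges = {e: w - m for e, w in edges.items() if w - m > 0}
        comps = connected_components(nodes, edges)
        score = -alpha * len(comps)
        for comp in comps:
            cs = set(comp)
            w = [wt for (u, v), wt in edges.items() if u in cs and v in cs]
            if w:
                score += sum(w) / len(w)         # mu(k) for this component
        if score > best_score:
            best_score, best_parts = score, comps
    return best_score, best_parts
```

On two dense triangles joined by a weak bridge, the first corrosion step removes the bridge and the best partition recovers the two triangles.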
In brief, we first enclose semantic hotspots in superpixels and consider each one as a node of a weighted graph. We then place edges with weight proportional to the number of splashes falling between two superpixels. This results in a graph with clear dense subgraph formations that correspond to superpixel clusters, i.e. \textit{semantic categories}. Semantic category detection thus translates into the extraction of dense subgraphs. To this end we devised an iterative algorithm based on graph corrosion, in which the procedure selects the corroded graph partition that filters out noisy edges and lets dense subgraphs emerge. We do so by maximizing a score that captures the density of each connected component.
\section{Related Works}
\label{sec:relatedworks}
Several works have been proposed to tackle visual pattern discovery and detection. While the paper by Leung and Malik \cite{DBLP:conf/eccv/LeungM96} could be considered seminal, many other works build on their basic approach, working by detecting contiguous structures of similar patches by knowing the window size enclosing the distinctive pattern.
One common procedure to describe what a pattern is consists in first extracting descriptive features such as SIFT, performing a clustering in the feature space, and then modeling the group disposition over the image either by exploiting geometrical constraints, as in \cite{DBLP:conf/cvpr/PrittsCM14} and \cite{DBLP:conf/cvpr/ChumM10}, or by relying only on appearance, as in \cite{DBLP:conf/icpr/DoubekMPC10,DBLP:conf/cvpr/LiuL13,DBLP:journals/pami/ToriiSOP15}.
The geometrical modeling of the repetitions is usually done by fitting a planar 2-D lattice, or a deformation of it \cite{DBLP:journals/pami/ParkBCL09}, through RANSAC procedures as in \cite{DBLP:conf/bmvc/SchaffalitzkyZ98,DBLP:conf/cvpr/PrittsCM14}, or even by exploiting the mathematical theory of crystallographic groups as in \cite{DBLP:journals/pami/LiuCT03}. Shechtman and Irani \cite{DBLP:conf/cvpr/ShechtmanI07} also exploited an active learning environment to detect visual patterns in a semi-supervised fashion. For example, Cheng et al. \cite{DBLP:journals/tog/ChengZMHH10} use input scribbles performed by a human to guide the detection and extraction of such repeated elements, while Huberman and Fattal \cite{DBLP:conf/cvpr/HubermanF16} ask the user to detect an object instance, after which detection is performed by exploiting the correlation of patches near the input area.
Recently, as a result of the new wave of AI-driven Computer Vision, a number of Deep Learning based approaches have emerged. In particular, Lettry et al. \cite{DBLP:conf/wacv/LettryPVG17} argued that filter activations in a model such as AlexNet can be exploited to find regions of repeated elements over the image, thanks to the fact that filters across different layers show regularity in their activations when convolved with the repeated elements of the image. On top of the latter work, Rodríguez-Pardo et al. \cite{DBLP:journals/cg/Rodriguez-Pardo19} proposed a modification to perform the texture synthesis step.
A brief survey of visual pattern discovery in both video and image data, up to 2013, is given by Wang et al. \cite{DBLP:journals/widm/WangZY14}; unfortunately, after that the computer vision community seems to have lost interest in this challenging problem. We point out that all the aforementioned methods look for \textit{only one} particular visual repetition, except for \cite{DBLP:conf/cvpr/LiuL13}, which can be considered the most direct competitor and the main benchmark against which to compare our results.
\section{Introduction}
Although the uncertainty principle is usually considered to be a fundamental principle of quantum mechanics, its precise theoretical formulation is not always clear. A breakthrough in the investigation of measurement uncertainties was achieved when Ozawa demonstrated in 2003 that the uncertainty trade-off between measurement error and disturbance may be much lower than the uncertainty trade-off between non-commuting properties in a quantum state \cite{Ozawa2003}. Recently, the definitions of measurement uncertainties introduced by Ozawa have been evaluated experimentally using two-level systems such as neutron spins \cite{Erhart2012} and photon polarizations \cite{Baek2013,Rozema2012}. These experimental tests have confirmed the lower uncertainty limits predicted by Ozawa and resulted in the formulation and confirmation of even tighter bounds \cite{Hall2013,Branciard2013,Ringbauer2014,Kaneda2014}. However, there has also been some controversy concerning the role of the initial state in this definition of measurement uncertainties \cite{Watanabe2011,Busch2013,Dressel2014}. It may therefore be useful to take a closer look at the definition of measurement errors and their experimental evaluation.
In principle, it is natural to define the error of a measurement as the statistical average of the squared difference between the measurement outcome and the actual value of the target observable. However, quantum theory makes it difficult to assign a value to an observable when neither the initial state nor the final measurement is represented by an eigenstate of the observable. Nevertheless, the operator formalism defines correlations between the measurement outcome and the operator $\hat{A}$ that represents the target observable, and this correlation between operators can be evaluated by weak measurements \cite{Lund2010} or by statistical reconstruction using variations of the input state \cite{Ozawa2004}. Essentially, the experimental evaluation of Ozawa uncertainties is therefore based on an evaluation of non-classical correlations between the measurement outcome and the target observable in the initial quantum state $\mid \psi \rangle$.
In the following, we investigate the role of non-classical correlations in quantum measurements by applying a sequential measurement to the polarization of a single photon, such that the initial measurement commutes with the target polarization, while the final measurement selects a complementary polarization. In this scenario, the initial measurement can be described by classical error statistics, and the evaluation of the measurement errors corresponds to conventional statistical methods. However, the final measurement introduces non-classical correlations that provide additional information on the target observable. By varying the strength of the initial measurement, we can control the balance between classical and non-classical effects in the correlations. In addition, we obtain two separate measurement outcomes, one of which refers directly to the target observable, and another one which can only relate to the target observable via correlations in the input state. Our measurement results thus provide a detailed characterization of non-classical effects in the relation between measurement outcomes and target observable. In particular, our results show that the intermediate measurement outcome modifies the non-classical correlations between the final outcome and the target observable, which can result in a counter-intuitive assignment of measurement values, where the intermediate measurement outcome and the estimated values seem to be anti-correlated. Our results thus illustrate that the combination of classical and non-classical correlations can be highly non-trivial and should be investigated in detail to achieve a more complete understanding of the experimental analysis of quantum systems.
The rest of the paper is organized as follows. In Sec. \ref{sec:ncorr}, we point out the role of non-classical correlations in the definition of measurement errors and discuss the experimental evaluation using variations of the input state. In Sec. \ref{sec:twolevel}, we derive the evaluation procedure for two level systems and discuss the evaluation of the experimental data. In Sec. \ref{sec:exp}, we introduce the experimental setup and discuss the sequential measurement of two non-commuting polarization components. In Sec. \ref{sec:data}, we discuss the measurement results obtained at different measurement strengths and analyze the role of non-classical correlations in the different measurement regimes. In Sec. \ref{sec:error}, we discuss the effects of non-classical correlations on the statistical error of the measurement. In Sec. \ref{sec:conclusions}, we conclude the paper by summarizing the insights gained from our detailed study of the non-classical aspects of measurement statistics.
\section{Measurement errors and non-classical correlations}
\label{sec:ncorr}
Measurement errors can be quantified by taking the average of the squared difference between the measurement outcomes $A_{\mathrm{out}}(m)$ and the target observable $\hat{A}$. As shown by Ozawa \cite{Ozawa2003}, this definition of errors can be applied directly to the operator statistics of quantum theory, even if the observable $\hat{A}$ does not commute with the measurement outcomes $m$. If the probability of the measurement outcome $m$ is represented by the positive valued operator $\hat{E}_m$, the measurement error for an input state $\mid \psi \rangle$ is given by
\begin{eqnarray}
\label{eq:ozawa}
\varepsilon^2(A)
& = &
\sum_{m} \langle \psi \mid (A_m -\hat{A}) \hat{E}_m (A_m -\hat{A}) \mid \psi \rangle
\nonumber \\
& = &
\langle \psi \mid \hat{A}^{2} \mid \psi \rangle + \sum_m A_m^{2} \langle \psi \mid \hat{E}_m \mid \psi \rangle - 2 \sum_{m} A_m \; \Re \left[ \langle \psi \mid \hat{E}_m \hat{A} \mid \psi \rangle \right].
\end{eqnarray}
The last term in Eq. (\ref{eq:ozawa}) evaluates the correlation between the target observable $\hat{A}$ and the measurement outcome $A_m$.
If $\hat{A}$ and $\hat{E}_m$ commute, the correlation in Eq. (\ref{eq:ozawa}) can be explained in terms of the joint measurement statistics of the outcomes $m$ and the eigenstate projections $a$, where the eigenvalues of $\hat{E}_m$ determine the conditional probabilities $P(m|a)$ of obtaining the result $m$ for an eigenstate input of $a$. However, the situation is not so simple if $\hat{A}$ and $\hat{E}_m$ do not commute. In this case, an experimental evaluation of the measurement error $\varepsilon(A)^2$ requires the reconstruction of a genuine quantum correlation represented by operator products. Perhaps the most direct method of obtaining the appropriate data is to vary the input state \cite{Ozawa2004}. To obtain the correlation between the measurement outcome $m$ and the observable $\hat{A}$, it is sufficient to use two superposition states as input,
\begin{eqnarray}
\label{eq:pm}
\mid + \rangle = \frac{1}{\sqrt{1+2 \lambda \langle \hat{A} \rangle + \lambda^2 \langle \hat{A}^2 \rangle}} \left(1 + \lambda \hat{A}\right) \mid \psi \rangle
\nonumber \\
\mid - \rangle = \frac{1}{\sqrt{1-2 \lambda \langle \hat{A} \rangle + \lambda^2 \langle \hat{A}^2 \rangle}} \left(1 - \lambda \hat{A}\right) \mid \psi \rangle,
\end{eqnarray}
where the expectation values in the normalization factors refer to the statistics of the original state $\mid \psi \rangle$. It is now possible to determine the correlation between the measurement outcome and the target observable from the weighted difference between the probabilities $P(m|+)$ and $P(m|-)$ obtained with these two superposition states. Specifically,
\begin{equation}
\label{eq:evaluate}
\Re \left[ \langle \psi \mid \hat{E}_m \hat{A} \mid \psi \rangle \right] = \frac{1}{4 \lambda} \left((1+2 \lambda \langle \hat{A} \rangle + \lambda^2 \langle \hat{A}^2 \rangle) P(m|+) - (1 - 2 \lambda \langle \hat{A} \rangle + \lambda^2 \langle \hat{A}^2 \rangle) P(m|-)\right).
\end{equation}
For $\lambda \ll 1$, the two states correspond to the outputs of a weak measurement with a two level probe state \cite{Hofmann2010}. The variation of input states is therefore closely related to the alternative method of evaluating measurement errors using weak measurements \cite{Lund2010}.
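As a consistency check, the identity in Eq. (\ref{eq:evaluate}) can be verified numerically for a qubit. In the following Python sketch, the state $\mid \psi \rangle$, the target observable $\hat{A}=\hat{\sigma}_x$, the POVM element $\hat{E}_m$ and the parameter $\lambda$ are arbitrary illustrative choices, not values taken from the experiment.

```python
import numpy as np

# Numerical check of Eq. (evaluate) for a qubit. All states, operators
# and the parameter lambda below are arbitrary illustrative choices.
A = np.array([[0, 1], [1, 0]], dtype=complex)          # target observable (sigma_x)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)

v = np.array([0.8, 0.6j])                 # an arbitrary POVM element 0 <= E <= 1
E = 0.5 * np.outer(v, v.conj())

lam = 0.2
exA = np.real(psi.conj() @ A @ psi)
exA2 = np.real(psi.conj() @ (A @ A) @ psi)

Np = 1 + 2 * lam * exA + lam**2 * exA2    # normalizations of |+> and |->
Nm = 1 - 2 * lam * exA + lam**2 * exA2
plus = (psi + lam * A @ psi) / np.sqrt(Np)
minus = (psi - lam * A @ psi) / np.sqrt(Nm)

P_plus = np.real(plus.conj() @ E @ plus)     # P(m|+)
P_minus = np.real(minus.conj() @ E @ minus)  # P(m|-)

lhs = np.real(psi.conj() @ E @ A @ psi)      # Re <psi|E_m A|psi>
rhs = (Np * P_plus - Nm * P_minus) / (4 * lam)
assert np.isclose(lhs, rhs)
```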
Eq. (\ref{eq:evaluate}) expresses the correlations between the outcome values $A_m$ and the target observable $\hat{A}$ in terms of conditional expectation values of $\hat{A}$ which correspond to optimal estimates of the target observables,
\begin{equation}
\label{eq:condav}
A_{\mathrm{opt.}}(m) = \frac{\Re \left[ \langle \psi \mid \hat{E}_m \hat{A} \mid \psi \rangle \right]}{ \langle \psi \mid \hat{E}_m \mid \psi \rangle}.
\end{equation}
As pointed out by Hall, this optimal estimate is equal to the real part of the weak value conditioned by the post-selection of the measurement outcome $m$ \cite{Hall2004}. If the non-classical correlation in Eq.(\ref{eq:ozawa}) is expressed using the conditional average in Eq.(\ref{eq:condav}), the result reads
\begin{equation}
\varepsilon^2(A) = \langle \hat{A}^2 \rangle - \sum_m \left( A_{\mathrm{opt.}}(m) \right)^2 P(m|\psi) + \sum_m \left( A_m-A_{\mathrm{opt.}}(m) \right)^2 P(m|\psi).
\end{equation}
It is then obvious that the minimal error $\varepsilon^2_{\mathrm{opt.}}(A)$ is obtained for $A_m=A_{\mathrm{opt.}}(m)$, and that this minimal error is given by the difference between the original variance of $\hat{A}$ in the quantum state $\psi$ and the variance of the conditional averages $A_{\mathrm{opt.}}(m)$,
\begin{equation}
\label{eq:varsum}
\varepsilon^2_{\mathrm{opt.}}(A) = \langle \hat{A}^2 \rangle - \sum_m \left(A_{\mathrm{opt.}}(m)\right)^2 P(m|\psi).
\end{equation}
Importantly, all of the necessary information can be obtained experimentally using the superposition input states $\mid+ \rangle$ and $\mid - \rangle$. As will be shown in the following, this means that for two level systems, the non-classical correlations can actually be derived from measurements performed on eigenstates of $\hat{A}$.
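The decomposition of the error and the optimality of $A_{\mathrm{opt.}}(m)$ can likewise be checked numerically. The sketch below uses an illustrative projective HV-type POVM and $\hat{\sigma}_x$ as target; it confirms that Eq. (\ref{eq:varsum}) coincides with Eq. (\ref{eq:ozawa}) evaluated at $A_m = A_{\mathrm{opt.}}(m)$ and that any other assignment increases the error.

```python
import numpy as np

# Illustrative check of the error decomposition: for any assignment A_m,
# eps^2(A) = <A^2> - sum_m P*A_opt^2 + sum_m P*(A_m - A_opt)^2,
# so the minimum is reached at A_m = A_opt(m). The target sigma_x and
# the projective POVM are arbitrary illustrative choices.
A = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([np.cos(0.6), np.sin(0.6)], dtype=complex)
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

P = np.array([np.real(psi.conj() @ E @ psi) for E in povm])      # P(m|psi)
C = np.array([np.real(psi.conj() @ E @ A @ psi) for E in povm])  # Re <E_m A>
A_opt = C / P                                                    # Eq. (condav)
exA2 = np.real(psi.conj() @ (A @ A) @ psi)

def eps2(A_m):                                                   # Eq. (ozawa)
    return exA2 + np.sum(A_m**2 * P) - 2 * np.sum(A_m * C)

eps2_opt = exA2 - np.sum(A_opt**2 * P)                           # Eq. (varsum)
assert np.isclose(eps2(A_opt), eps2_opt)
assert eps2(np.array([1.0, -1.0])) >= eps2_opt                   # other A_m do worse
```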
\section{Evaluation of two level systems}
\label{sec:twolevel}
In a two level system, all physical properties can be expressed in terms of operators with eigenvalues of $\pm 1$. This results in a significant simplification of the formalism. In particular, it is possible to define the states $\mid + \rangle$ and $\mid - \rangle$ used for the experimental evaluation of non-classical correlations in the measurement errors by setting $\lambda=1$ in Eq. (\ref{eq:pm}). The result is a projection onto eigenstates of $\hat{A}$, so that $\mid + \rangle$ and $\mid - \rangle$ are eigenstates of the target observable $\hat{A}$ with eigenvalues of $+1$ and $-1$, respectively. Surprisingly, this means that the non-classical correlations between measurement outcomes and target observables can be evaluated without applying the measurement of $m$ to the actual input state $\mid \psi \rangle$. According to Eq. (\ref{eq:evaluate}), the relation for the two-level system with eigenvalues of $A_{a}=\pm 1$ and $\lambda=1$ is
\begin{equation}
\label{eq:eval2L}
\Re \left[ \langle \psi \mid \hat{E}_{m} \hat{A} \mid \psi \rangle \right]
= P(m|+) P(+|\psi) - P(m|-) P(-|\psi).
\end{equation}
Note that this looks like a fully projective measurement sequence, where a measurement of $\hat{A}$ is followed by a measurement of $m$.
However, such a projective measurement of $\hat{A}$ actually changes the probabilities of the final outcomes $m$. It is therefore quite strange that the correlation between an undetected observable $\hat{A}$ and the measurement result $m$ obtained from an initial state $\psi$ can be derived from a sequential projective measurement, as if the measurement disturbance of a projective measurement of $\hat{A}$ had no effect on the final probabilities of $m$.
The non-classical features of the correlation in Eq. (\ref{eq:eval2L}) emerge when the conditional average is determined according to Eq. (\ref{eq:condav}),
\begin{equation}
\label{eq:cav2L}
A_{\mathrm{opt.}}(m) = \frac{P(m|+) P(+|\psi) - P(m|-) P(-|\psi)}{P(m|\psi)}.
\end{equation}
Although this equation looks almost like a classical conditional average, it is important to note that the probabilities are actually obtained from two different measurements. As a result, the denominator is not given by the sum of the probabilities in the numerator. In fact, it is quite possible that $P(m|\psi)$ is much lower than the sum of $P(m|+) P(+|\psi)$ and $P(m|-) P(-|\psi)$, so that the conditional average $A_{\mathrm{opt.}}(m)$ is much larger than $+1$ (or much lower than $-1$). In fact, we should expect such anomalous enhancements of the conditional average, since Eq. (\ref{eq:condav}) shows that $A_{\mathrm{opt.}}(m)$ is equal to the weak value of $\hat{A}$ conditioned by $\psi$ and $m$.
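A minimal numerical sketch of Eq. (\ref{eq:cav2L}) makes the anomaly explicit. The input state polarized at 67.5$^{\circ}$ and the final projection onto H are the same illustrative configuration as in the experiment described below; the probabilities come from two separate experiments, yet their combination lies outside the eigenvalue range.

```python
import numpy as np

# Sketch of Eq. (cav2L): probabilities from two separate experiments
# (eigenstate inputs vs. the actual input) combine into a conditional
# average outside [-1, +1].
sx = np.array([[0, 1], [1, 0]])                    # S_PM in the HV basis
plus = np.array([1, 1]) / np.sqrt(2)               # eigenstates of S_PM
minus = np.array([1, -1]) / np.sqrt(2)
psi = np.array([np.cos(np.deg2rad(67.5)), np.sin(np.deg2rad(67.5))])
E_H = np.diag([1.0, 0.0])                          # final outcome m = H

def prob(E, state):
    return np.real(state @ E @ state)

P_plus_psi = prob(np.outer(plus, plus), psi)       # P(+|psi)
P_minus_psi = prob(np.outer(minus, minus), psi)    # P(-|psi)
A_opt = (prob(E_H, plus) * P_plus_psi
         - prob(E_H, minus) * P_minus_psi) / prob(E_H, psi)

# equal to the real part of the weak value of S_PM, here sqrt(2)+1:
weak = np.real((psi @ E_H @ sx @ psi) / (psi @ E_H @ psi))
assert np.isclose(A_opt, weak)
assert np.isclose(A_opt, np.sqrt(2) + 1)           # anomalous value > 1
```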
It may seem confusing that the combination of statistical results obtained in two perfectly normal experiments results in the definition of a seemingly paradoxical conditional average. However, this is precisely why quantum statistics have no classical explanation. In fact, the present two level paradox is simply a reformulation of the violation of Leggett-Garg inequalities \cite{LGI,Knee2012,Suzuki2012}, where it is shown that it is impossible to explain the probabilities $P(m|\psi)$, $P(m|\pm)$ and $P(\pm|\psi)$ as marginal probabilities of the same positive valued joint probability $P(m,\pm|\psi)$. Effectively, the evaluation of measurement errors proposed by Ozawa \cite{Ozawa2004} and applied in the first experimental demonstration \cite{Erhart2012} is identical to the verification of Leggett-Garg inequality violation by parallel measurements proposed in \cite{LGI} and applied in \cite{Knee2012}.
We can now look at the evaluation of the measurement errors in more detail. Using the previous results to express Eq. (\ref{eq:ozawa}) in terms of experimental probabilities, the measurement error is given by
\begin{equation}
\varepsilon^2(A) = 1 + \sum_m A_m^2 P(m|\psi) - 2 \sum_m A_m \left(P(m|+) P(+|\psi) - P(m|-) P(-|\psi) \right).
\end{equation}
Although this is already a great simplification, it is interesting to note that the evaluation used in the first experimental demonstration \cite{Erhart2012} is even simpler, because of an additional assumption: if only the assignments $A_m=\pm1$ are allowed, so that $m$ can be labeled $+$ or $-$ and $A_m^2=1$, the error reduces to
\begin{equation}
\varepsilon^2(A) = 2 - 2 \left(P(+|+) P(+|\psi) + P(-|-) P(-|\psi) - P(+|-) P(-|\psi) - P(-|+) P(+|\psi)\right).
\end{equation}
In many cases, errors are symmetric, so that $P(+|+)=P(-|-)=1-P_{\mathrm{error}}$ and $P(+|-)=P(-|+)=P_{\mathrm{error}}$. Under this assumption, the evaluation of measurement errors is completely independent of the input state, since the probabilities $P(+|\psi)$ and $P(-|\psi)$ add up to one, and the error is simply given by the error observed for eigenstate inputs,
\begin{equation}
\label{eq:error}
\varepsilon^2(A) = 4 P_{\mathrm{error}}.
\end{equation}
Importantly, this result is just a special case where the measurement error appears to be state independent because of a specific choice of $A_m$ for the evaluation of the measurement. In the following, we will consider a setup that explores the optimization of $A_m$ and the role of the non-classical correlations between measurement outcomes and target observable using the evaluation of experimental data developed above.
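The state independence claimed in Eq. (\ref{eq:error}) is easily confirmed numerically; the short sketch below scans arbitrary error probabilities and input statistics.

```python
import numpy as np

# Check that the eigenvalue assignment A_m = +/-1 with symmetric errors
# yields the state-independent result eps^2 = 4*P_error, for arbitrary
# error probabilities and input statistics.
for p_err in np.linspace(0.0, 0.5, 6):
    for P_plus in np.linspace(0.0, 1.0, 5):        # P(+|psi), arbitrary
        P_minus = 1.0 - P_plus
        eps2 = 2 - 2 * ((1 - p_err) * P_plus + (1 - p_err) * P_minus
                        - p_err * P_minus - p_err * P_plus)
        assert np.isclose(eps2, 4 * p_err)
```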
\section{Sequential measurement of photon polarization}
\label{sec:exp}
As mentioned in the previous section, the anomalous values of the conditional averages $A_{\mathrm{opt.}}(m)$ that also provide the optimal assignments of measurement outcomes $A_m$ originate from the same experimental statistics that are used to violate Leggett-Garg inequalities. We are therefore particularly interested in the correlations between Bloch vector components in the equatorial plane of the Bloch sphere. In the case of photon polarization, these are the linear polarizations, where the horizontal (H) and vertical (V) polarizations define one axis and the diagonal polarizations corresponding to positive (P) and negative (M) superposition of H and V define the orthogonal axis. In terms of operators with eigenvalues of $+1$ and $-1$, these polarizations can be expressed by $\hat{S}_{HV}$ and $\hat{S}_{PM}$.
If our target observable is $\hat{A}=\hat{S}_{PM}$, any measurement that commutes with $\hat{S}_{PM}$ can be explained in terms of classical statistics. We therefore use a setup that implements a variable strength measurement of diagonal polarization similar to the one we previously used to study Leggett-Garg inequality violations and weak measurements \cite{Suzuki2012,Iinuma2011}. In the output, we then perform a measurement of HV-polarization, so that the total measurement does not commute with the target observable. By dividing the measurement into two parts, we can vary the strength of the non-classical effects and study the transition between classical correlations and quantum correlations in detail.
\begin{figure}[h]
\centerline{\includegraphics[width=70mm]{setup.eps}}
\caption{Experimental setup of the sequential measurement of $\hat{S}_{\mathrm{PM}}$ followed by the projective measurement of $\hat{S}_{HV}$. This interferometer was realized by using a hybrid cube of a Polarizing Beam Splitter (PBS) and a Beam Splitter (BS), where the input beam is split by the PBS part and the outputs interfere at the BS part of the cube. The variable strength measurement of the positive (P) and negative (M) superposition of horizontal (H) and vertical (V) polarizations is realized by path interference between the H and the V polarized component. The measurement strength of the PM measurement is controlled by the angle $\theta$ of one of two half-wave plates (HWPs) inside the interferometer, which can be changed from zero for no measurement to 22.5$^{\circ}$ for a fully projective measurement. The other HWP is used for a phase compensation between H and V components.}
\label{fig:setup}
\end{figure}
The experimental setup is shown in Fig. \ref{fig:setup}. As explained in \cite{Iinuma2011}, a variable strength measurement is implemented by separating the horizontal and vertical polarizations at a polarizing beam splitter (PBS), rotating the polarizations towards each other using a half-wave plate (HWP), and interfering them at a beam splitter (BS). The interference distinguishes P-polarization from M-polarization, where the visibility of the interference and hence the strength of the measurement is controlled by the rotation angle $\theta$ of the HWP, which can be changed from zero for no measurement to 22.5$^{\circ}$ for a fully projective measurement. As shown in Fig. \ref{fig:setup}, the interferometer is of the Sagnac type, where the difference between input and output beam splitter is realized by a hybrid cube that acts as either a PBS or a BS, depending on the part of the cube on which the beam is incident. Input states were prepared from the weak coherent light of a CW TiS laser ($\lambda=830$ nm) using a Glan-Thompson prism (GT) and another HWP located just before the hybrid cube. The photon numbers in the output path $b1$ (measurement outcome P or $m_1=+1$) and in the path $b2$ (measurement outcome M or $m_1=-1$) are counted by the single photon detectors D1 and D2, respectively. Polarizers were inserted to realize the final measurement of $\hat{S}_{HV}$, corresponding to $m_2=+1$ for H-polarization and $m_2=-1$ for V-polarization. The number of input photons was monitored with the single photon detector D3 in order to compensate for intensity fluctuations of the weak coherent input light. In the actual setup, we also detected a systematic difference between the reflectivity and the transmissivity of the final BS, resulting in a slight rotation of the measurement basis away from the directions of PM-polarization.
The cancellation of this systematic effect is achieved by exchanging the roles of path b1 and path b2 using the settings of the HWP, which effectively restores the proper alignment of the polarization axes with the measurement \cite{Suzuki2012}.
The measurement has four outcomes $m=(m_1,m_2)$ given by the combinations of $\hat{S}_{PM}$ eigenvalues ($m_1=\pm1$) and $\hat{S}_{HV}$ eigenvalues ($m_2=\pm 1$). In the absence of experimental errors, the measurement outcomes can be described by pure state projections,
\begin{eqnarray}
\mid +1,+1 \rangle &=& \frac{1}{\sqrt{2}} \left(\cos(2 \theta) \mid H \rangle + \sin(2 \theta) \mid V \rangle \right),
\nonumber \\
\mid +1,-1 \rangle &=& \frac{1}{\sqrt{2}} \left(\sin(2 \theta) \mid H \rangle + \cos(2 \theta) \mid V \rangle \right),
\nonumber \\
\mid -1,+1 \rangle &=& \frac{1}{\sqrt{2}} \left(\cos(2 \theta) \mid H \rangle - \sin(2 \theta) \mid V \rangle \right),
\nonumber \\
\mid -1,-1 \rangle &=& \frac{1}{\sqrt{2}} \left(\sin(2 \theta) \mid H \rangle - \cos(2 \theta) \mid V \rangle \right).
\end{eqnarray}
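Assuming ideal optics ($V_{PM}=1$), these four measurement vectors can be checked for POVM completeness at every measurement strength, together with the projective limit at $\theta=22.5^{\circ}$. The following is a sketch of the ideal model, not of the actual device.

```python
import numpy as np

# Ideal description of the four measurement outcomes (V_PM = 1): the
# vectors above form a complete POVM for every theta, interpolating
# between no measurement (theta = 0) and a projective PM-measurement
# (theta = 22.5 deg).
def states(theta):
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return {(+1, +1): np.array([c,  s]) / np.sqrt(2),
            (+1, -1): np.array([s,  c]) / np.sqrt(2),
            (-1, +1): np.array([c, -s]) / np.sqrt(2),
            (-1, -1): np.array([s, -c]) / np.sqrt(2)}

for theta in np.linspace(0, np.deg2rad(22.5), 7):
    E = {m: np.outer(v, v) for m, v in states(theta).items()}
    assert np.allclose(sum(E.values()), np.eye(2))   # completeness

# projective limit: a P-polarized input always yields m1 = +1
P_pol = np.array([1, 1]) / np.sqrt(2)
E = {m: np.outer(v, v) for m, v in states(np.deg2rad(22.5)).items()}
p_m1_plus = sum(P_pol @ E[m] @ P_pol for m in E if m[0] == +1)
assert np.isclose(p_m1_plus, 1.0)
```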
\begin{figure}[h]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=70mm]{PMdata.eps}
\caption{Experimental probabilities $P(m_1|a)$ of the PM-measurement obtained with P polarization ($a=+1$) as the initial state. The solid lines indicate the theoretically expected result for $V_{PM}=1$ and the broken line shows the theoretical expectation for $V_{PM}=0.93$.}
\label{fig:PMdata}
\end{center}
\end{figure}
The actual measurement is limited by the visibility of the interferometer, which was independently evaluated as $V_{PM}=0.93$ at $\theta=22.5^{\circ}$. It is possible to characterize the measurement error of the PM-measurement by preparing P-polarized and M-polarized input photons. If $A_m=+1$ is assigned to the $m_1=+1$ outcomes, and $A_m=-1$ is assigned to the $m_1=-1$ outcomes, this corresponds to a measurement of the error probability $P_{\mathrm{error}}$ in Eq.(\ref{eq:error}),
\begin{equation}
P_{\mathrm{error}} = P(m_1=-a|a) = \frac{1}{2}\left(1 - V_{PM} \sin(4 \theta) \right).
\end{equation}
Fig. \ref{fig:PMdata} shows the experimental results obtained with our setup. Note that this graph also provides all of the data needed to determine the probabilities $P(m_1,m_2|a)$ for the analysis of the conditional averages $A_{\mathrm{opt.}}(m)$ in the following section, since $P(m_1,m_2|a)=P(m_1|a)/2$.
For completeness, we have also evaluated the experimental errors in the final measurement of HV-polarization. We obtain a visibility of $V_{HV}=0.9976$ for the corresponding eigenstate inputs. With this set of data, we can fully characterize the performance of the measurement setup, as shown in the analysis of the following experimental results.
\section{Experimental evaluation of non-classical correlations}
\label{sec:data}
To obtain non-classical correlations between $\hat{S}_{PM}$ and $\hat{S}_{HV}$, we chose an input state $\psi$ with a linear polarization of 67.5$^{\circ}$, halfway between the P-polarization and the V-polarization. For this state, the initial expectation value of the target observable is
\begin{equation}
\langle \hat{S}_{PM} \rangle = \frac{1}{\sqrt{2}}.
\end{equation}
We can now start the analysis of measurement errors by considering only the outcome $m_1$, in which case the measurement operators $\hat{E}_m$ commute with the target observable and the problem could also be analyzed using classical statistics. Specifically, commutativity means that the probability $P(m_1|\psi)$ is unchanged if a projective measurement of $\hat{S}_{PM}$ is performed before the measurement of $m_1$. It is therefore possible to determine $P(m_1|\psi)$ from the conditional probabilities $P(m_1|a)$ and $P(a|\psi)$, which results in a classical conditional average for $\hat{A}=\hat{S}_{PM}$ given by
\begin{eqnarray}
\label{eq:Bayes}
A_{\mathrm{opt.}}(m_1) &=& \frac{P(m_1|+) P(+|\psi) - P(m_1|-) P(-|\psi)}{P(m_1|+) P(+|\psi) + P(m_1|-) P(-|\psi)}.
\nonumber \\
&=& \frac{(1-2 P_{\mathrm{error}}) m_1 + \langle \hat{S}_{PM} \rangle}{m_1 +(1-2 P_{\mathrm{error}}) \langle \hat{S}_{PM} \rangle} \; m_1.
\end{eqnarray}
Eq. (\ref{eq:Bayes}) shows that the conditional averages are found somewhere between the original expectation value $\langle \hat{S}_{PM} \rangle$ for $P_{\mathrm{error}}=1/2$ and the measurement result $m_1$ for $P_{\mathrm{error}}=0$. In the experiment, the error probability is controlled by the measurement strength $\theta$ as shown in Fig. \ref{fig:PMdata}. The corresponding dependence of $A_{\mathrm{opt.}}(m_1)$ on $\theta$ is shown in Fig. \ref{fig:Bayes}.
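The limiting behavior of Eq. (\ref{eq:Bayes}) can be confirmed with a short sketch, assuming ideal visibility; it also cross-checks the closed form of the second line against the explicit probability ratio of the first line.

```python
import numpy as np

# Sketch of the classical update rule, Eq. (Bayes), assuming ideal
# visibility and the input state polarized at 67.5 deg.
s = 1 / np.sqrt(2)                                  # <S_PM>

def A_opt_m1(m1, p_err):                            # closed form, Eq. (Bayes)
    return m1 * ((1 - 2 * p_err) * m1 + s) / (m1 + (1 - 2 * p_err) * s)

def A_opt_ratio(m1, p_err):                         # explicit probability ratio
    k = 1 - 2 * p_err
    Pp, Pm = (1 + s) / 2, (1 - s) / 2               # P(+|psi), P(-|psi)
    num = (1 + m1 * k) / 2 * Pp - (1 - m1 * k) / 2 * Pm
    den = (1 + m1 * k) / 2 * Pp + (1 - m1 * k) / 2 * Pm
    return num / den

for m1 in (+1, -1):
    for p in (0.0, 0.1, 0.3, 0.5):
        assert np.isclose(A_opt_m1(m1, p), A_opt_ratio(m1, p))

assert np.isclose(A_opt_m1(+1, 0.5), s)             # no measurement: estimate = <S_PM>
assert np.isclose(A_opt_m1(+1, 0.0), +1)            # projective limit: estimate = m1
assert np.isclose(A_opt_m1(-1, 0.0), -1)
```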
\begin{figure}[h]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=70mm]{PMaverage.eps}
\caption{Conditional average $A_{\mathrm{opt.}}(m_1)$ of the PM-polarization $\hat{S}_{PM}$ obtained after a measurement of $m_1=+1$ (P-polarization) or $m_1=-1$ (M-polarization) at different measurement strengths $\theta$. At $\theta=0$, the measurement outcome is random ($P_{\mathrm{error}}=1/2$) and the conditional average is simply given by the original expectation value of the input state. As the likelihood of measurement errors decreases, the conditional average approaches the value given by the measurement outcome $m_1$.}
\label{fig:Bayes}
\end{center}
\end{figure}
It should be noted that the result does not change if it is based on the joint probabilities $P(m_1,m_2|\psi)$ shown in Fig. \ref{fig:ALLdata}, since the marginal probabilities $P(m_1|\psi)$ of this joint probability distribution are equal to the sums of the sequential measurement probabilities $P(m_1|a) P(a|\psi)$. This is an important fact, since the actual value of $a$ is fundamentally inaccessible once the final measurement of $m_2$ is performed, regardless of whether the data obtained from $m_2$ is used or not. Even though the correlation between $\hat{S}_{PM}$ and $m_1$ can be explained using classical statistics, this possibility does not imply that we can safely assign a physical reality $a$ to the observable. The distinction between classical and non-classical correlations is therefore more subtle than the choice of measurement strategy.
Up to now, the analysis does not include any non-classical correlations, since the measurement is only sensitive to the target observable $\hat{S}_{PM}$. This situation changes if we include the outcome $m_2$ of the final HV-measurement in the evaluation of the experimental data. Importantly, we intend to use the information gained from the outcome of the HV-measurement to update and improve our estimate of the PM-polarization in the input. For that purpose, we need to evaluate the non-classical correlations between $\hat{S}_{PM}$ and $\hat{S}_{HV}$, which can be done using the method developed in Sec. \ref{sec:twolevel}. In addition to the known probabilities $P(a|\psi)$ and $P(m_1,m_2|a)$, we now need to include the measured probabilities $P(m_1,m_2|\psi)$, which provide the essential information on the non-classical correlations. The experimental results for $P(m_1,m_2|\psi)$ obtained at variable measurement strengths $\theta$ are shown in Fig. \ref{fig:ALLdata}. The question is how the final result $m_2$ changes our estimate of $\hat{S}_{PM}$. According to Eq. (\ref{eq:cav2L}), we can find the answer by dividing the difference between the probabilities of a measurement sequence of $a$ followed by $(m_1,m_2)$ by the probabilities obtained by directly measuring $(m_1,m_2)$,
\begin{eqnarray}
\label{eq:seq}
A_{\mathrm{opt.}}(m_1,m_2) &=& \frac{P(m_1,m_2|+) P(+|\psi) - P(m_1,m_2|-) P(-|\psi)}{P(m_1,m_2|\psi)}
\nonumber \\ &=&
\frac{m_1 (1-2 P_{\mathrm{error}}) + \langle \hat{S}_{PM} \rangle}{4 P(m_1,m_2|\psi)}.
\end{eqnarray}
Note that the simplification of this relation is possible because the result $m_2$ of the HV-measurement is completely random when the input states are eigenstates of PM-polarization, so that $P(m_1,m_2|\pm) = P(m_1|\pm)/2$. Thus the $m_2$-dependence of the conditional average only appears in the denominator. Specifically, the difference in the probability of finding H-polarization ($m_2=+1$) or V-polarization ($m_2=-1$) in the final measurement translates directly into a difference in the conditional probabilities, where a lower probability of $m_2$ enhances the estimated value $A_{\mathrm{opt.}}(m_1,m_2)$.
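The anomalous estimates can be reproduced numerically from Eq. (\ref{eq:seq}) together with the ideal measurement vectors of Sec. \ref{sec:exp} ($V_{PM}=1$ assumed); the value $\theta=5^{\circ}$ is an arbitrary illustrative choice of a weak measurement.

```python
import numpy as np

# Sketch of Eq. (seq) for the ideal setup (V_PM = 1), with joint
# probabilities P(m1,m2|psi) computed from the ideal measurement vectors.
psi = np.array([np.cos(np.deg2rad(67.5)), np.sin(np.deg2rad(67.5))])
s = 1 / np.sqrt(2)                                   # <S_PM>

def A_opt(m1, m2, theta):
    c, sn = np.cos(2 * theta), np.sin(2 * theta)
    vec = np.array({(+1, +1): [c, sn], (+1, -1): [sn, c],
                    (-1, +1): [c, -sn], (-1, -1): [sn, -c]}[(m1, m2)])
    P_joint = (vec @ psi) ** 2 / 2                   # P(m1,m2|psi)
    k = np.sin(4 * theta)                            # 1 - 2*P_error
    return (m1 * k + s) / (4 * P_joint)              # Eq. (seq)

weak = np.deg2rad(5)                                 # an arbitrary weak measurement
assert A_opt(+1, +1, weak) > 1                       # anomalous estimate for rare m2 = H
assert -1 <= A_opt(+1, -1, weak) <= 1                # ordinary estimate for likely m2 = V
```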
\begin{figure}[h]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=70mm]{allData.eps}
\caption{Probabilities $P(m_1,m_2|\psi)$ for the outcomes of the sequential measurement of $m_1$ (PM-polarization) and $m_2$ (HV-polarization) on an input state polarized at 67.5$^{\circ}$, halfway between $P$ and $V$.}
\label{fig:ALLdata}
\end{center}
\end{figure}
Fig. \ref{fig:Aopt} shows the dependence of the conditional averages of $\hat{S}_{PM}$ on the measurement strength $\theta$. Significantly, the low probabilities of finding H-polarization ($m_2=+1$) result in estimates of $A_{\mathrm{opt.}}(m_1,m_2)$ that lie outside of the range of eigenvalues. The difference between $A_{\mathrm{opt.}}(+1,+1)$ and $A_{\mathrm{opt.}}(+1,-1)$ corresponds to the contribution of the non-classical correlation between $\hat{S}_{PM}$ and $m_2$, whereas the difference between $A_{\mathrm{opt.}}(+1,-1)$ and $A_{\mathrm{opt.}}(-1,-1)$ corresponds to the contribution of the correlation between $\hat{S}_{PM}$ and $m_1$, which is closely related to the classical correlation that determines the behavior of $A_{\mathrm{opt.}}(m_1)$ in Fig. \ref{fig:Bayes}. As the measurement strength increases, the correlation between $\hat{S}_{PM}$ and $m_2$ drops towards zero and the correlation between $\hat{S}_{PM}$ and $m_1$ increases, approaching the ideal identification of the measurement outcome $m_1$ with the eigenvalue of $\hat{S}_{PM}$. For intermediate measurement strengths, it is important to consider the correlations between the measurement outcomes as well, indicating that the non-classical correlations associated with $m_2$ are modified by the results of $m_1$ and vice versa. The adjustment of measurement strength is therefore a powerful tool for the analysis of measurement statistics that may give us important new insights into the way that classical and non-classical correlations complement each other.
\begin{figure}[h]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=70mm]{Aopt.eps}
\caption{Conditional averages $A_{\mathrm{opt.}}(m_1,m_2)$ as a function of measurement strength $\theta$. The solid curve represents the theoretical prediction for a measurement without experimental imperfections; the broken line was calculated for an interferometer visibility of $V_{PM}=0.93$.}
\label{fig:Aopt}
\end{center}
\end{figure}
The conditional average $A_{\mathrm{opt.}}(m_1,m_2)$ is obtained from the correlations between $\hat{S}_{PM}$ and the two measurement results $m_1$ and $m_2$ that originate from the statistics of the initial state $\psi$. Specifically, the estimate is obtained by updating the initial statistics of $\psi$ based on the outcomes $m_1$ and $m_2$, where the measurement strength controls the relative statistical weights of the information obtained from $m_1$ and $m_2$. At a maximal measurement strength of $\theta=22.5^\circ$, the PM-measurement completely randomizes the HV-polarization, so that the conditional average $A_{\mathrm{opt.}}(m_1,m_2)$ is independent of $m_2$ and the estimation procedure is based on the classical correlations between $m_1$ and $\hat{S}_{PM}$. As the measurement strength is weakened, a small contribution of non-classical correlations emerges as the conditional averages for $m_2=+1$ and for $m_2=-1$ split, with the estimates for the more likely $m_2$-outcomes dropping towards zero and the estimates for the less likely $m_2$-outcomes diverging to values greater than $+1$ for $m_1=+1$ and more negative than $-1$ for $m_1=-1$. Even small contributions of non-classical correlations therefore result in estimates that cannot be reproduced by classical statistics. Due to experimental imperfections, the anomalous values of $A_{\mathrm{opt.}}(+1,+1)>1$ are easier to observe than the anomalous values of $A_{\mathrm{opt.}}(-1,+1)<-1$. Specifically, the small probabilities of the result $(-1,+1)$ are significantly enlarged by the noise background associated with limited visibilities. As the measurement strength drops, the initial bias in favor of P-polarization in the input state $\psi$ begins to outweigh the effect of the measurement result of $m_1=-1$ that would indicate M-polarization.
Of particular interest is the crossing point around $\theta=12.3^\circ$, where the initial information provided by $\psi$ and the measurement information $m_1$ become equivalent and the estimate is $A_{\mathrm{opt.}}(-1,m_2)=0$ for both $m_2=+1$ and $m_2=-1$. For measurement strengths below this crossing point, the initial bias provided by the initial state towards P-polarization clearly dominates the estimate, resulting in positive values of $A_{\mathrm{opt.}}(-1,m_2)$. Significantly, the increase of the estimate with reduction in measurement strength is much faster for $m_2=+1$ than for $m_2=-1$, since the lower probability of the outcome $m_2=+1$ effectively enhances the statistical weight of the information. For $\theta \approx 11^\circ$, this enhancement of the estimate even results in a crossing between $A_{\mathrm{opt.}}(-1,+1)$ and $A_{\mathrm{opt.}}(+1,+1)$, so that the value estimated for an outcome of $m_1=-1$ actually exceeds the value estimated for an outcome of $m_1=+1$ at measurement strengths of $\theta < 11^\circ$. This counter-intuitive difference between the outcome of the PM-measurement and the estimated value of PM-polarization appears due to the effects of the measurement outcome $m_1$ on the quantum correlations between $m_2$ and the target observable $\hat{S}_{PM}$ in the initial state. Specifically, low probability outcomes always enhance the correlations between measurement results and target observable. Therefore, the low probability outcome $m_1=-1$ enhances the correlation between $m_2=+1$ and $\hat{S}_{PM}$, which favors the P-polarization. On the other hand, the much higher probability of $m_1=+1$ does not result in a comparative enhancement of this correlation, so that the estimated value $A_{\mathrm{opt.}}(+1,+1)$ for an outcome of $m_1=+1$ is actually lower than the estimated value $A_{\mathrm{opt.}}(-1,+1)$ for an outcome of $m_1=-1$.
These non-classical aspects of correlations between measurement results and target observable highlight the importance of the relation between the two measurement outcomes: it is impossible to isolate the measurement result $m_1$ from the context established by both $\psi$ and $m_2$. Since the estimated values $A_{\mathrm{opt.}}(m_1,m_2)$ correspond to weak values, this observation may also provide a practical example of the relation between weak values and contextuality \cite{Pusey2014}.
In the limit of zero measurement strength ($\theta=0$), the estimated values depend only on $m_2$, with the unlikely measurement outcome of $m_2=+1$ resulting in an anomalous weak value of $A_{\mathrm{opt.}}(m_1,+1)=\sqrt{2}+1$ and the likely outcome of $m_2=-1$ resulting in a weak value estimate of $A_{\mathrm{opt.}}(m_1,-1)=\sqrt{2}-1$. Since these estimates are based only on the outcomes of precise measurements of HV-polarization, they provide a direct illustration of the non-classical correlation between $\hat{S}_{PM}$ and $\hat{S}_{HV}$ in $\psi$. Due to the specific choice of initial state, $A_{\mathrm{opt.}}(m_1,+1)$ is larger than $A_{\mathrm{opt.}}(m_1,-1)$, which means that the detection of H-polarization makes P-polarization more likely, while the detection of V-polarization increases the likelihood of M-polarization. If we disregard for a moment that the estimated values for $m_2=+1$ lie outside the range of possible eigenvalues, we can give a fairly intuitive characterization of this non-classical correlation. Clearly, the lowest likelihood is assigned to the combination of H-polarization and M-polarization, which are the least likely polarization results obtained in separate measurements of HV-polarization and PM-polarization for the input state $\psi$. We can therefore summarize the result by observing that quantum correlations between Bloch vector components
strongly suppress the joint contributions of the least likely results, to the point where the correlation can exceed positive probability boundaries, corresponding to an implicit assignment of negative values to the combination of the two least likely outcomes \cite{Suzuki2012}.
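The weak values quoted above can be reproduced with a few lines of code. The following is a hedged sketch (the exact parametrization of the initial state is specified earlier in the paper and not restated in this section): we assume a real state $|\psi\rangle = \cos(3\pi/8)|H\rangle + \sin(3\pi/8)|V\rangle$, which is consistent with the quoted weak values, and evaluate $A_w(m_2) = \langle m_2|\hat{S}_{PM}|\psi\rangle / \langle m_2|\psi\rangle$ for the two HV outcomes.

```python
import math

# Hypothetical reconstruction (assumption, not from this excerpt): a real
# qubit state whose HV-conditioned weak values of S_PM reproduce the
# values sqrt(2)+1 and sqrt(2)-1 quoted in the text.
c = math.cos(3 * math.pi / 8)  # <H|psi>, small -> H is the unlikely outcome
s = math.sin(3 * math.pi / 8)  # <V|psi>

# S_PM acts like a Pauli-x operator in the HV basis, swapping |H> and |V>,
# so <H|S_PM|psi> = s and <V|S_PM|psi> = c.
A_w_plus = s / c   # weak value conditioned on m2 = +1 (H detected)
A_w_minus = c / s  # weak value conditioned on m2 = -1 (V detected)

# The low-probability outcome yields an anomalous value outside [-1, 1].
assert abs(A_w_plus - (math.sqrt(2) + 1)) < 1e-12
assert abs(A_w_minus - (math.sqrt(2) - 1)) < 1e-12
```

The anomalous value $\sqrt{2}+1>1$ appears precisely for the low-probability outcome, illustrating the suppression of the least likely joint results described above.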
The results presented in this section clearly show that the final HV-measurement provides additional information about the target observable $\hat{A}=\hat{S}_{PM}$. We can therefore expect that the measurement error will be reduced significantly if we use $A_{m_1,m_2}=A_{\mathrm{opt.}}(m_1,m_2)$ as the measurement result assigned to the joint outcome $(m_1,m_2)$. In the final section of our discussion, we will take a look at the measurement errors obtained at different measurement strengths $\theta$ and identify the amount of PM-information obtained from the measurement of HV-polarization.
\section{Evaluation of measurement errors}
\label{sec:error}
According to Eq. (\ref{eq:varsum}), the measurement errors for optimized measurement outcomes $A_{m}=A_{\mathrm{opt.}}(m)$ can be evaluated directly by subtracting the statistical fluctuations of $A_{m}$ from the initial fluctuations of the target observable $\hat{A}$ in the initial state $\psi$. We can therefore use the results of the previous sections to obtain the measurement errors $\varepsilon^2(A)$ for the measurement outcomes $m_1$ and for the combined measurement outcomes $(m_1,m_2)$. The results are shown in Fig. \ref{fig:results}, together with the measurement error given by Eq. (\ref{eq:error}), which is obtained by assigning values of $A_{m_1}=\pm 1$ to the measurement outcomes $m_1$.
\begin{figure}[h]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=70mm]{results.eps}
\caption{Measurement errors for different measurement strategies. The highest errors are obtained by assigning eigenvalues of $A_{m_1}=\pm 1$ to the outcomes $m_1$ of the PM-measurement. Optimization of the estimate based on $m_1$ results in an error that decreases with increasing measurement strength. By basing the estimate on the combined outcomes $(m_1,m_2)$, it is possible to achieve errors close to zero for low measurement strength $\theta$, since the undisturbed HV-measurement provides maximal information on the PM-polarization through the non-classical correlations between $\hat{S}_{PM}$ and $\hat{S}_{HV}$ in the initial state $\psi$.}
\label{fig:results}
\end{center}
\end{figure}
Not surprisingly, the sub-optimal assignment of eigenvalues to the measurement outcomes results in much avoidable extra noise. In fact, the error for this assignment exceeds the uncertainty of $\Delta A^2=0.5$ for the initial state $\psi$ at measurement strengths of $\theta < 13.5^\circ$, indicating that one can obtain a better estimate of PM-polarization from the expectation value of the input state.
This never happens in the case of the errors $\varepsilon_{\mathrm{opt.}}$ associated with the optimal estimates of the target observable, since the optimized estimates based on the conditional averages for the different measurement outcomes include the information of the initial state. In the case of the classical estimate $A_{\mathrm{opt.}}(m_1)$ obtained from the variable strength PM-measurement, the measurement error drops gradually from the variance of the initial state at $\theta=0$ to a residual error caused by the limited visibility $V_{PM}$ at $\theta=22.5^\circ$. By including the information of the final HV-measurement, the estimate can be improved to $A_{\mathrm{opt.}}(m_1,m_2)$, resulting in a reduction of the error that is particularly significant when the measurement strength approaches $\theta=0$.
The most interesting experimental result is definitely the error obtained for the optimal estimate $A_{\mathrm{opt.}}(m_1,m_2)$, which summarizes all of the available information in the estimates shown in Fig. \ref{fig:Aopt}. Theoretically, the error of this estimate would be zero if the measurements could be performed without any experimental imperfections, as indicated by the red solid line in Fig. \ref{fig:results}. The actual results are close to zero error in the limit of low measurement strength. In this limit, the high visibility of the final HV-measurement for $m_2$ dominates the estimate, with a much lower impact of the less reliable PM-measurement
for $m_1$. The errors then start to rise as the experimental values of $A_{\mathrm{opt.}}(-1,+1)$ in Fig. \ref{fig:Aopt} reach their maximal values near $\theta=8^\circ$.
The value of the error continues to rise beyond the maximum of $A_{\mathrm{opt.}}(-1,+1)$ and reaches its maximal value near the $\theta=12.3^\circ$ crossing point where $A_{\mathrm{opt.}}(-1,+1)=A_{\mathrm{opt.}}(-1,-1)=0$. At this point, the estimate is particularly sensitive to measurement noise, since the extremely low probability of the outcome $(-1,+1)$ is strongly affected by experimental noise backgrounds. For measurement strengths greater than this crossing point ($\theta>12.3^\circ$), the error of $A_{\mathrm{opt.}}(m_1,m_2)$ is not much lower than the error of $A_{\mathrm{opt.}}(m_1)$, indicating that the final measurement result $m_2$ provides very little additional measurement information on $\hat{S}_{PM}$. This appears to be a result of the experimental noise in the PM-measurement, which limits the error to $\varepsilon^2=0.12$ at a maximal measurement strength of $\theta=22.5^\circ$.
In summary, the analysis of the measurement errors shows that the non-classical correlation between $m_2$ and $\hat{S}_{PM}$ used to obtain the estimate $A_{\mathrm{opt.}}(m_1,m_2)$ in the limit of weak measurement interactions results in much lower errors than the use of the classical correlations between $m_1$ and $\hat{S}_{PM}$ that dominate in the strong measurement regime. This is a result of the fact that the errors in the limit of weak measurement are dominated by the HV-visibility of the setup, while the errors in the strong measurement regime mostly originate from the PM visibility, which happens to be much lower than the HV-visibility in the present setup.
Our setup is therefore ideally suited to illustrate the importance of non-classical correlations in the evaluation of measurement errors when the initial state is taken into account. The optimal estimate $A_{\mathrm{opt.}}(m_1,m_2)$ is obtained by
considering the specific relation between the measurement outcomes and the target observable in the given input state, which may result in counter-intuitive assignments of values to the different measurement outcomes. In the present case, the lowest errors are obtained as a consequence of this counter-intuitive assignment, since the experimental setup is particularly robust against experimental imperfections
in the regime of low measurement strength which is most sensitive to the effects of non-classical correlations. Our results thus provide a particularly clear experimental demonstration of the reduction of measurement errors by non-classical correlations between measurement result and target observable in the initial quantum state.
\section{Conclusions}
\label{sec:conclusions}
We have investigated the non-classical correlations between the outcomes of a quantum measurement and the target observable of the measurement by studying the statistics of measurement errors in a sequential measurement. In the initial measurement, the measurement operator commutes with the target observable and the measurement outcome $m_1$ relates directly to the target observable, while the final measurement of a complementary observable introduces the effect of non-classical correlations between the outcome $m_2$ and the target observable. To evaluate the errors, we applied the operator formalism introduced by Ozawa and showed that the evaluation of two-level statistics can be performed by combining the measurement statistics of the input state $\psi$ with the statistics obtained from eigenstate inputs of the target observable. By combining the statistics of separate measurements according to the rules obtained from the operator formalism, it is possible to identify the optimal estimate of the target observable using only the available experimental data. Due to the specific combination of the statistical results, this estimate can exceed the limits of classical statistics by obtaining values that lie outside the range of possible eigenvalues. Typically, the least likely outcomes are associated with extreme values of the target observable. In the present experiment, we find extremely high estimates of the target observable when the strength of the initial measurement is weak and the measurement result is dominated by the non-classical correlations between the target observable and the complementary observable detected in the final measurement.
In this limit, the initial measurement outcome that refers directly to the target observable mainly enhances or reduces the effects of the non-classical correlations, which results in the counter-intuitive anti-correlation between the actual measurement result and the associated estimate of the target observable for a final outcome of $m_2=+1$.
Our discussion provides a more detailed insight into the experimental analysis of measurement errors that has recently been used to evaluate the uncertainty limits of quantum measurements derived by Ozawa \cite{Ozawa2003,Erhart2012,Baek2013,Rozema2012,Hall2013,Ringbauer2014,Kaneda2014}. It is important to note that the estimation procedure associated with this kind of error analysis also reveals important details of the non-classical statistics originating from the correlations between physical properties in the initial state. In the present work, we have taken a closer look at the experimental analysis of measurement errors and clarified its non-classical features. The results show that some of the effects involved in the optimal evaluation of the experimental data are rather counter-intuitive and exhibit features that exceed the possibilities of classical statistics in significant ways. For a complete understanding of measurement statistics in quantum mechanics, it is therefore necessary to explore the effects of non-classical correlations in more detail, and the present study may be a helpful starting point for a deeper understanding of the role such correlations can play in various measurement contexts.
\section*{Acknowledgments}
This work was supported by JSPS KAKENHI Grant Number 24540428. One of the authors (Y.S.) is supported by Grant-in-Aid for JSPS Fellows 265259.
The Volterra model, also known as the KM system is a well-known integrable system defined by
\begin{equation} \label{a1}
\dot x_i = x_i(x_{i+1}-x_{i-1}) \qquad i=1,2, \dots,n,
\end{equation}
where $x_0 = x_{n+1}=0$. It was studied by
Lotka in \cite{lotka} to model oscillating chemical reactions and by
Volterra in \cite{volterra} to describe population evolution in a
hierarchical system of competing species. It was first solved by
Kac and van Moerbeke in \cite{kac}, using a discrete version of
inverse scattering due to Flaschka \cite{flaschka}. In
\cite{moser} Moser gave a solution of the system using the method
of continued fractions and in the process he constructed
action-angle coordinates. Equations \Ref{a1} can be considered
as a finite-dimensional approximation of the Korteweg-de Vries
(KdV) equation. The Poisson bracket for this system can
be thought as a lattice generalization of the Virasoro algebra
\cite{fadeev2}.
The Volterra system is associated with a simple Lie algebra of type $A_n$. Bogoyavlensky
generalized this system for each simple Lie algebra and showed
that the corresponding systems are also integrable. See
\cite{bog1,bog2} for more details. The generalization in this paper is different from the one of Bogoyavlensky.
The KM-system given by equation \Ref{a1} is Hamiltonian (see \cite{fadeev}, \cite{damianou91}) and can be written in Lax pair form in various ways.
The Lax pair in \cite{damianou91} is given by
\begin{equation*}
\dot{L}=[B, L],
\end{equation*}
where
\begin{equation*}
L= \begin{pmatrix} x_1 & 0 & \sqrt{x_1 x_2} & 0 & \cdots & & 0 \cr
0 & x_1 +x_2 & 0& \sqrt{x_2 x_3} & & & \vdots \cr
\sqrt{x_1 x_2} & 0 & x_2 +x_3 & & \ddots & & \cr
0 & \sqrt{x_2 x_3} & & & & & \cr
\vdots & & \ddots & & & & \sqrt{x_{n-1} x_n} \cr
& & & & & x_{n-1}+x_n & 0 \cr
0 & & \cdots & & \sqrt{x_{n-1} x_n} &0& x_n \end{pmatrix}
\end{equation*}
and
\begin{equation*}
B=\begin{pmatrix} 0 & 0 & \frac{1}{2} \sqrt{x_1 x_2} & 0 & \dots &
\ & 0 \cr
0 & 0 & 0&\frac{1}{2} \sqrt{x_2 x_3} & & & \vdots \cr
-\frac{1}{2} \sqrt{x_1 x_2} & 0 & 0 & & \ddots & & \cr
0 & -\frac{1}{2} \sqrt{x_2 x_3} & & & & & \cr
\vdots & & \ddots & & & & \frac{1}{2} \sqrt{x_{n-1} x_n} \cr
& & & & & 0 & 0 \cr
0& & \cdots & &-\frac{1}{2}\sqrt{x_{n-1} x_n} &0& 0 \end{pmatrix} \ .
\end{equation*}
From the Lax pair it follows that the functions $H_i=\frac{1}{i}\, \tr \, L^i$ are constants of motion.
Following \cite{damianou91} we define the following quadratic
Poisson bracket,
\begin{displaymath} \{x_i, x_{i+1} \}=x_i x_{i+1}, \end{displaymath}
and all
other brackets equal to zero.
This bracket has a single Casimir, $\det L$, and the functions $H_i$ are all in involution.
Taking the function $\sum_{i=1} ^{n} x_i $ as the
Hamiltonian we obtain equations (\ref{a1}). This bracket can be
realized from the second Poisson bracket of the Toda lattice by
setting the momentum variables equal to zero \cite{fadeev}.
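As a quick numerical sanity check (an illustrative sketch, not part of the original derivation), one can verify that this quadratic bracket together with the Hamiltonian $H=\sum_i x_i$ reproduces the KM equations $\dot x_i = x_i(x_{i+1}-x_{i-1})$:

```python
import random

# Sketch: the quadratic bracket {x_i, x_{i+1}} = x_i x_{i+1} with
# H = x_1 + ... + x_n yields the KM system (0-indexed below).
n = 6
x = [random.uniform(0.5, 2.0) for _ in range(n)]

def bracket(i, j):
    # {x_i, x_j} for the nearest-neighbour quadratic bracket, zero otherwise
    if j == i + 1:
        return x[i] * x[j]
    if j == i - 1:
        return -x[i] * x[j]
    return 0.0

# Since dH/dx_j = 1 for all j, xdot_i = {x_i, H} = sum_j {x_i, x_j}
xdot = [sum(bracket(i, j) for j in range(n)) for i in range(n)]

# Right-hand side of the KM system, with boundary terms x_0 = x_{n+1} = 0
def km_rhs(i):
    right = x[i + 1] if i + 1 < n else 0.0
    left = x[i - 1] if i - 1 >= 0 else 0.0
    return x[i] * (right - left)

assert all(abs(xdot[i] - km_rhs(i)) < 1e-12 for i in range(n))
```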
There is another Lax pair where $L$ is in the nilpotent subalgebra corresponding to the negative
roots. The Lax pair is of the form $ \dot{L}=[L, B] $ where
\begin{equation}
\label{lax-volterra}
L= \begin{pmatrix} 0 & 1 & 0 & \cdots & \cdots & 0 \cr
x_1 & 0 & 1 & \ddots & & \vdots \cr
0 & x_2 & 0 & \ddots & & \vdots \cr
\vdots & \ddots & \ddots & \ddots &\ddots & 0 \cr
\vdots & & & \ddots & \ddots & 1 \cr
0 & \cdots & \cdots & 0 & x_{n} & 0 \end{pmatrix},
\end{equation}
and
\begin{equation*}
B=\begin{pmatrix} 0 & 1 & 0 & \cdots & \cdots & 0 \cr
0 & 0 & 1 & \ddots & & \vdots \cr
x_1 x_2 & 0 & 0 & \ddots & & \vdots \cr
\vdots & x_2 x_3 & \ddots & \ddots & & 0 \cr
\vdots & & & \ddots & \ddots & 1 \cr
0 & \cdots & \cdots & x_{n-1} x_n & 0 & 0 \end{pmatrix}.
\end{equation*}
Finally, there is a symmetric version due to Moser where
\begin{equation} \label{lax-symmetric}
L= \begin{pmatrix} 0 & a_1 & 0 & \cdots & \cdots & 0 \cr
a_1 & 0 & a_2 & \ddots & & \vdots \cr
0 & a_2 & 0 & \ddots & & \vdots \cr
\vdots & \ddots & \ddots & \ddots & & 0 \cr
\vdots & & & \ddots & \ddots & a_n \cr
0 & \cdots & \cdots & 0 & a_{n} & 0 \end{pmatrix},
\end{equation}
and
\begin{equation*}
B=\begin{pmatrix} 0 & 0 & a_1 a_2 & \cdots & \cdots & 0 \cr
0 & 0 & 0 & \ddots & & \vdots \cr
-a_1 a_2 & 0 & 0 & \ddots & a_3 a_4 & \vdots \cr
\vdots & -a_2 a_3 & \ddots & \ddots & & a_{n-1} a_n \cr
\vdots & & & \ddots & \ddots & 0 \cr
0 & \cdots & \cdots & -a_{n-1} a_n & 0 & 0 \end{pmatrix}.
\end{equation*}
\noindent
The change of variables $x_i=2a_i^2$ gives equations \Ref{a1}.
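This symmetric Lax pair is easy to check numerically. The following sketch (plain Python, 0-indexed, using the convention $\dot L=[B,L]$) verifies that $[B,L]$ is again tridiagonal with entries $\dot a_i = a_i(a_{i+1}^2-a_{i-1}^2)$, which the substitution $x_i=2a_i^2$ turns into the KM equations:

```python
import random

# Sketch: numerical check of Moser's symmetric Lax pair, Ldot = BL - LB.
# Variables a_1..a_m sit on the off-diagonal of the (m+1) x (m+1) matrix L.
m = 5
a = [random.uniform(0.5, 2.0) for _ in range(m)]
N = m + 1

L = [[0.0] * N for _ in range(N)]
B = [[0.0] * N for _ in range(N)]
for i in range(m):
    L[i][i + 1] = L[i + 1][i] = a[i]
for i in range(m - 1):
    B[i][i + 2] = a[i] * a[i + 1]
    B[i + 2][i] = -a[i] * a[i + 1]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

BL, LB = matmul(B, L), matmul(L, B)
Ldot = [[BL[i][j] - LB[i][j] for j in range(N)] for i in range(N)]

# Ldot is tridiagonal with adot_i = a_i (a_{i+1}^2 - a_{i-1}^2);
# substituting x_i = 2 a_i^2 then gives xdot_i = x_i (x_{i+1} - x_{i-1}).
for i in range(m):
    right = a[i + 1] ** 2 if i + 1 < m else 0.0
    left = a[i - 1] ** 2 if i - 1 >= 0 else 0.0
    assert abs(Ldot[i][i + 1] - a[i] * (right - left)) < 1e-10
```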
It is evident from the form of $L$ in the various Lax pairs that the position of the variables $a_i$ corresponds to the simple root vectors of a root system of type $A_n$. On the other hand, a non-zero entry of the matrix $B$ occurs at a position corresponding to the sum of two simple roots $\alpha_i$ and $\alpha_j$. In this paper we generalize the Lax pair of Moser \Ref{lax-symmetric} as follows.
Instead of considering the set of simple roots $\Pi$, we begin with a subset $\Phi$ of the positive roots $\Delta^{+}$ which contains $\Pi$, i.e. $\Pi \subseteq \Phi \subseteq \Delta^{+}$. For each such choice of a set $\Phi$ we produce (almost always) a Lax pair and thus a new Hamiltonian system. In this paper we consider some specific examples for the case of $A_n$. In dimension 3 this procedure produces only two systems, the KM system and the periodic KM system. In dimensions 4 and 5 we introduce and study some new systems. We show that all such systems are Liouville integrable. To establish integrability we use standard techniques of Lax pairs and Poisson geometry, the method of chopping and also a particular technique of Moser which uses the square of the Lax matrix. A number of these systems also exist in the form (\ref{lax-volterra}). In that case one can use the method of chopping to show integrability.
\section{Lotka-Volterra Systems}\label{lotka}
The KM-system belongs to a large class of the so called Lotka-Volterra systems. The most
general form of the Lotka-Volterra equations is
$$\dot x_i = \varepsilon_i x_i + \sum_{j=1}^n a_{ij} x_i x_j, \ \ i=1,2, \dots , n.$$
We may assume that there are no linear terms ($\varepsilon_i=0$). We also assume
that the matrix $A= (a_{ij})$ is skew-symmetric. All these systems can be written in Hamiltonian form using the Hamiltonian function
\begin{displaymath} H=x_1+ x_2 +\cdots + x_n \, .\end{displaymath}
Hamilton's equations take the form $\dot x_i = \{x_i, H\}=\sum_{j=1}^n\ensuremath{\pi}_{ij}$ with quadratic functions \begin{equation} \label{quad} \ensuremath{\pi}_{ij}=\{ x_i,x_j\} = a_{ij} x_i x_j, \ \ i,j = 1,2, \dots , n. \end{equation}
From the skew symmetry of the matrix $A=(a_{ij})$ it follows that the Schouten-Nijenhuis bracket $[\ensuremath{\pi},\ensuremath{\pi}]$ vanishes:
\[
\begin{split}
[\ensuremath{\pi},\ensuremath{\pi}]_{ijk}&=
2\left(a_{ij}\{x_ix_j,x_k\}+a_{jk}\{x_jx_k,x_i\}+a_{ki}\{x_kx_i,x_j\}\right)\\
&=2\left(a_{ij}(a_{jk}+a_{ik})+a_{jk}(a_{ki}+a_{ji})+a_{ki}(a_{ij}+a_{kj})\right)x_ix_jx_k=0\,.\\
\end{split}
\]
The bivector field $\ensuremath{\pi}$ is an example of a \textit{diagonal Poisson structure}.
The Poisson tensor \Ref{quad} is Poisson isomorphic to the constant Poisson structure defined by the constant matrix $A$, see \cite{fairen}. If $\mathbf{k}=(k_1, k_2 \cdots, k_n)$ is a vector in the kernel of $A$ then the function
\begin{equation*}
f=x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}
\end{equation*}
is a Casimir. Indeed for an arbitrary function $g$ the Poisson bracket $\{f,g\}$ is
\[
\{f,g\}=\sum_{i,j=1}^n\{x_i,x_j\}\frac{\partial f}{\partial x_i}\frac{\partial g}{\partial x_j}=\sum_{j=1}^n\left(\sum_{i=1}^na_{ij}k_i\right)x_jf\frac{\partial g}{\partial x_j}=0\,.
\]
If the matrix $A$ has rank $r$ then there are $n-r$ functionally independent Casimirs. This type of integral can be traced back to Volterra \cite{volterra}; see also \cite{plank}, \cite{fairen}, \cite{bogo3}.
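A minimal illustration (an $n=3$ sketch, not taken from the paper): the classical three-species matrix $A$ below is skew-symmetric with $k=(1,1,1)$ spanning its kernel, so $f=x_1x_2x_3$ is a Casimir; a short integration confirms that $f$ (and $H=\sum_i x_i$) is conserved along the flow.

```python
# Sketch: for skew-symmetric A with A k = 0, f = prod x_i^{k_i} is a
# Casimir of the diagonal bracket and hence a constant of the LV flow.
A = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
k = [1, 1, 1]
assert all(sum(A[i][j] * k[j] for j in range(3)) == 0 for i in range(3))

def vf(x):
    # xdot_i = x_i * sum_j a_ij x_j
    return [x[i] * sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def rk4_step(x, h):
    # one classical Runge-Kutta step
    k1 = vf(x)
    k2 = vf([x[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = vf([x[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = vf([x[i] + h * k3[i] for i in range(3)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

x = [1.0, 0.7, 1.3]
f0 = x[0] * x[1] * x[2]
for _ in range(1000):
    x = rk4_step(x, 0.01)

# f and H are conserved up to the integrator's discretization error
assert abs(x[0] * x[1] * x[2] - f0) < 1e-5
assert abs(sum(x) - 3.0) < 1e-5
```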
\section{Simple Lie algebras} \label{liealgebras}
\label{Procedure}
We recall the following procedure from \cite{damianou12}.
Let $\mathfrak{g}$ be any simple Lie algebra equipped with its Killing form $\langle\cdot\,\vert\,\cdot\rangle$. One chooses
a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$, and a basis $\Pi$ of simple roots for the root system $\Delta$ of $\mathfrak{h}$ in
$\mathfrak{g}$. The corresponding set of positive roots is denoted by $\Delta^+$. To each positive root $\alpha$ one can
associate a triple $(X_\alpha,X_{-\alpha},H_{\alpha})$ of vectors in $\mathfrak{g}$ which generate a Lie subalgebra
isomorphic to $sl_2(\mathbf{C})$. The set $(X_\alpha, X_{-\alpha})_{\alpha \in \Delta^+}\cup (H_\alpha)_{\alpha \in
\Pi}$ is a basis of $\mathfrak{g}$, called a root basis.
Let $\Pi=\{ \alpha_1, \dots, \alpha_{\ell} \}$ and let $X_{\alpha_1}, \ldots, X_{\alpha_\ell}$ be the corresponding root vectors in $\mathfrak{g}$. Define
\begin{displaymath} L=\sum_{\alpha_i \in \Pi} a_i (X_{\alpha_i}+X_{-\alpha_i}) \ . \end{displaymath}
To find the matrix $B$ we use the following procedure. For each $i,j$ form the vectors
$\left[X_{\alpha_i},X_{\alpha_j}\right]$. If $\alpha_i+\alpha_j $ is a root then
include a term of the form $a_i a_j \left[X_{\alpha_i},X_{\alpha_j}\right]$ in $B$.
We make $B$ skew-symmetric by including the corresponding negative root vectors $a_i a_j [X_{-\alpha_i},X_{-\alpha_j}]$. Finally, we define the system using the Lax pair
\begin{displaymath} \dot{L}=[L, B] \ . \end{displaymath}
For a root system of type $A_n$ we obtain the KM system.
In this paper we generalize this algorithm as follows. Consider a subset $\Phi$ of
$\Delta^{+}$ such that
\bd
\Pi \subset \Phi \subset \Delta^{+} \ .
\ed
The Lax matrix is easy to construct
\begin{displaymath}
L=\sum_{\alpha_i \in \Phi} a_i (X_{\alpha_i}+X_{-\alpha_i}) \ .
\end{displaymath}
Here we use the following enumeration of $\Phi$ which we assume to have $m$ elements. The variables $a_j$ correspond to the simple roots $\alpha_j$ for $j=1,2, \dots, \ell$. We assign the variables $a_j$ for $j=\ell+1, \ell+2, \dots, m $ to the remaining roots in $\Phi$.
To construct the matrix $B$ we use the following algorithm. Consider the set $\Phi \cup \Phi^{-}$ which consists of all the roots in $\Phi$ together with their negatives. Let
\bd \Psi =\left\{ \alpha+\beta \ | \ \alpha, \beta \in \Phi \cup \Phi^{-}, \alpha+\beta \in \Delta^{+} \right\} \ . \ed
Define
\bd B=\sum c_{ij} a_i a_j (X_{\alpha_i+\alpha_j}+X_{-\alpha_i - \alpha_j} ) \ed
where $c_{ij}=\pm 1$ if $\alpha_i+\alpha_j \in \Psi$ with $\alpha_i,\alpha_j\in\Phi\cup\Phi^-$ and $0$ otherwise. In almost all cases we are able to make the proper choices of the sign of the $c_{ij}$ so that we can produce a Lax pair. For example we are able to do this in all eight cases in $A_3$ and in all but five of the sixty four cases in $A_4$. In this paper we restrict our attention to the $A_n$ case. Examples from other Lie algebras will be presented in a future publication.
\section{Examples in $A_3$ and $A_4$} \label{examples34}
\begin{example}\label{example A_3 root system} ($A_3$ root system)\\*
Let $E$ be the hyperplane of $\mathbb{R}^4$ for which the coordinates sum to $0$ (i.e. vectors orthogonal to $(1,1,1,1)$). Let $\Delta$ be the set of vectors in $E$ of length $\sqrt{2}$ with integer coordinates. There are $12$ such vectors in all. We use the standard inner product in $\mathbb{R}^4$ and the standard orthonormal basis $\{ \epsilon_1, \epsilon_2, \epsilon_3, \epsilon_4 \}$. Then, it is easy to see that $\Delta = \{ \epsilon_i-\epsilon_j \ | \ i \not= j \}$. The vectors
\begin{displaymath}
\begin{array}{lcl}
\alpha_1 & =& \epsilon_1 -\epsilon_2 \\
\alpha_2 & =& \epsilon_2 -\epsilon_3 \\
\alpha_3 & =& \epsilon_3 -\epsilon_4 \
\end{array}
\end{displaymath}
form a basis of the root system in the sense that each vector in $\Delta$ is a linear combination of these three vectors with integer coefficients, either all nonnegative or all nonpositive. For example, $\epsilon_1 -\epsilon_3=\alpha_1+\alpha_2$, $ \epsilon_2 -\epsilon_4 =\alpha_2+\alpha_3$ and $\epsilon_1-\epsilon_4=\alpha_1+\alpha_2+\alpha_3$. Therefore $\Pi=\{\alpha_1, \alpha_2, \alpha_3 \}$, and the set of positive roots $\Delta^{+}$ is given by
\begin{displaymath}
\Delta^{+}= \{ \alpha_1, \alpha_2, \alpha_3, \alpha_1+\alpha_2, \alpha_2+\alpha_3, \alpha_1+\alpha_2+\alpha_3 \} \ .
\end{displaymath}
If we take $\Phi=\{ \alpha_1, \alpha_2, \alpha_3, \alpha_1 +\alpha_2 \} $
then
$$\Phi \cup \Phi^-=\{ \alpha_1, \alpha_2, \alpha_3, \alpha_1 +\alpha_2 , -\alpha_1, -\alpha_2, -\alpha_3, -\alpha_1 -\alpha_2 \}
$$
and
$\Psi =\{ \alpha_1, \alpha_2, \alpha_1 +\alpha_2 , \alpha_2+\alpha_3, \alpha_1+\alpha_2+\alpha_3 \}$. In this example the variables $a_i$ for $i=1,2,3$ correspond to the three simple roots $\alpha_1, \alpha_2, \alpha_3$. We associate the variable $a_4$ to the root $\alpha_1 +\alpha_2$.
We obtain the following Lax pair:
\label{ex1}
\[
L= \begin {pmatrix}
0 &a_{{1}}&a_{{4}}&0 \\\noalign{\medskip}
a_1 &0 &a_{{2}}&0 \\\noalign{\medskip}
a_{{4}}&a_{{2}}& 0 &a_{{3}}\\\noalign{\medskip}
0 &0 &a_{{3}}& 0
\end {pmatrix}
\]
\[
B=\begin {pmatrix}
0 &-a_4a_2&a_1a_2 &-a_4a_3\\\noalign{\medskip}
a_4a_2 &0 &-a_1a_4&a_2a_3 \\\noalign{\medskip}
-a_1a_2&a_1a_4 &0 &0 \\\noalign{\medskip}
a_4a_3 &-a_2a_3&0 &0
\end {pmatrix}.
\]
The Lax pair is equivalent to the following equations of motion:
\[
\begin{split}
\dot{a_1} & = a_1a^2_2-a_1a^2_4, \\
\dot{a_2} & =-a_2a^2_1+a_2a^2_3+a_2a^2_4,\\
\dot{a_3} & =-a_3a^2_2+a_3a^2_4, \\
\dot{a_4} & = a_4a^2_1-a_4a^2_2-a_4a^2_3 \, .
\end{split}
\]
With the substitution $x_i=a_i^2$ followed by scaling we obtain the following Lotka-Volterra system.
\[
\begin{split}
\dot{x_1} &=x_1x_2-x_1x_4,\\
\dot{x_2}&=-x_2x_1+x_2x_3+x_2x_4,\\
\dot{x_3}&=-x_3x_2+x_3x_4,\\
\dot{x_4}&=x_4x_1-x_4x_2-x_4x_3 \,.
\end{split}
\]
The system is integrable. There exist two functionally independent Casimir functions
$F_1=x_1 x_3={\rm det} \, L $ and $F_2=x_1 x_2 x_4$.
The additional integral is the Hamiltonian $H=x_1+x_2+x_3+x_4=\tr L^2$.
The standard quadratic Poisson bracket is given by
\[\pi=
\begin {pmatrix}
0 & x_1x_2 & 0 & -x_1x_4 \\\noalign{\medskip}
-x_2x_1 & 0 & x_2x_3 & x_2x_4 \\\noalign{\medskip}
0 & -x_3x_2 & 0 & x_3x_4 \\\noalign{\medskip}
x_4x_1 & -x_4x_2 & -x_4x_3 & 0
\end {pmatrix}.
\]
We can find the Casimirs by computing the kernel of the matrix
\begin{displaymath}
A=\begin{pmatrix}
0 & 1& 0 & -1 \cr
-1& 0& 1& 1 \cr
0&-1&0&1 \cr
1&-1&-1&0
\end{pmatrix}.
\end{displaymath}
The two eigenvectors with eigenvalue $0$ are $(1,0,1,0)$ and $(1,1, 0, 1)$. We obtain the two Casimirs $F_1=x_1^1 x_2^0 x_3^1 x_4^0=x_1 x_3$ and $F_2=x_1^1 x_2^1 x_3^0 x_4^1=x_1 x_2 x_4$.
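This kernel computation can be checked mechanically; the sketch below verifies, with exact integer arithmetic, that both vectors annihilate $A$:

```python
# Check that the two kernel vectors of A indeed satisfy A k = 0.
A = [[0, 1, 0, -1],
     [-1, 0, 1, 1],
     [0, -1, 0, 1],
     [1, -1, -1, 0]]
for k in ([1, 0, 1, 0],    # gives the Casimir F1 = x1 x3
          [1, 1, 0, 1]):   # gives the Casimir F2 = x1 x2 x4
    assert all(sum(A[i][j] * k[j] for j in range(4)) == 0 for i in range(4))
```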
\end{example}
There is a similar Lax pair defined by the matrix
\[
L= \left( \begin {array}{cccc}
0&a_{{1}}&0&0\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&a_{{4}}\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}\\\noalign{\medskip}
0&a_{{4}}&a_{{3}}&0
\end {array} \right)
\]
but the resulting system is isomorphic to the previous example.
The Lax pair $L,B$ corresponding to $\Phi=\{\alpha_1,\alpha_2,\alpha_3,\alpha_1+\alpha_2+\alpha_3\}$ is
\[ L=
\left( \begin {array}{cccc}
0&a_{{1}}&0&a_{{4}}\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&0\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}\\\noalign{\medskip}
a_{{4}}&0&a_{{3}}&0
\end {array} \right),
\]
\[B=\left( \begin {array}{cccc}
0&0&a_{{1}}a_{{2}}-a_{{4}}a_{{3}}&0\\\noalign{\medskip}
0&0&0&-a_{{1}}a_{{4}}+a_{{2}}a_{{3}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}+a_{{4}}a_{{3}}&0&0&0\\\noalign{\medskip}
0&a_{{1}}a_{{4}}-a_{{2}}a_{{3}}&0&0
\end {array} \right).
\]
Using the substitution $x_i=2a_i^2$ we obtain the periodic KM-system
\[
\begin{split}
\dot{x_1} &=x_1x_2-x_1x_4,\\
\dot{x_2}&=-x_2x_1+x_2x_3,\\
\dot{x_3}&=x_3x_4-x_3x_2,\\
\dot{x_4}&=x_4x_1-x_4x_3\,.
\end{split}
\]
The Poisson matrix is
\[\pi= \left( \begin {array}{cccc}
0&x_{{1}}x_{{2}}&0&-x_{{1}}x_{{4}}\\\noalign{\medskip}
-x_{{1}}x_{{2}}&0&x_{{2}}x_{{3}}&0\\\noalign{\medskip}
0&-x_{{2}}x_{{3}}&0&x_{{3}}x_{{4}}\\\noalign{\medskip}
x_{{1}}x_{{4}}&0&-x_{{3}}x_{{4}}&0
\end {array} \right)
\]
with
$\Rank(\pi)=2$.
\noindent
In addition to the Hamiltonian
\bd
H=x_1+x_2+x_3+x_4
\ed
it possesses two Casimirs $C_1=x_1 x_3$ and $C_2=x_2 x_4$.
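For the periodic KM system above, the conservation of $C_1=x_1x_3$ and $C_2=x_2x_4$ can be verified directly from the equations of motion; the following sketch checks that their derivatives along the flow (and that of $H$) vanish at an arbitrary point:

```python
# Direct check that H, C1 = x1 x3 and C2 = x2 x4 are first integrals of
# the periodic KM flow: their time derivatives vanish identically.
x = [1.2, 0.8, 0.5, 1.7]           # arbitrary point, x = (x1, x2, x3, x4)
xdot = [x[0] * (x[1] - x[3]),
        x[1] * (-x[0] + x[2]),
        x[2] * (x[3] - x[1]),
        x[3] * (x[0] - x[2])]

dH = sum(xdot)
dC1 = xdot[0] * x[2] + x[0] * xdot[2]
dC2 = xdot[1] * x[3] + x[1] * xdot[3]
assert max(abs(dH), abs(dC1), abs(dC2)) < 1e-12
```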
\begin{example}\mbox{}\\*
The Lax equation $\dot{L}=[B,L]$, corresponding to
$\Phi=\{\alpha_1,\alpha_2,\alpha_3,\alpha_1+\alpha_2,\alpha_2+\alpha_3\}$
with
\[ L=
\left( \begin {array}{cccc}
0&a_{{1}}&a_{{4}}&0\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&a_{{5}}\\\noalign{\medskip}
a_{{4}}&a_{{2}}&0&a_{{3}}\\\noalign{\medskip}
0&a_{{5}}&a_{{3}}&0\end {array} \right)
\]
and
\[B=\left( \begin {array}{cccc}
0&-a_{{4}}a_{{2}}&a_{{1}}a_{{2}}&-a_{{1}}a_{{5}}-a_{{4}}a_{{3}}\\\noalign{\medskip}
a_{{4}}a_{{2}}&0&-a_{{1}}a_{{4}}-a_{{5}}a_{{3}}&a_{{2}}a_{{3}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}&a_{{1}}a_{{4}}+a_{{5}}a_{{3}}&0&-a_{{2}}a_{{5}}\\\noalign{\medskip}
a_{{1}}a_{{5}}+a_{{4}}a_{{3}}&-a_{{2}}a_{{3}}&a_{{2}}a_{{5}}&0
\end {array} \right)
\]
is equivalent to the following equations of motion
\[
\begin{split}
\dot{a_1} &=a_1a^2_2-a_1a^2_5-a_1a^2_4-2a_3a_4a_5,\\
\dot{a_2}&=a_2a^2_4+a_2a^2_3-a_2a^2_1-a_2a^2_5,\\
\dot{a_3}&=a_3a^2_5+a_3a^2_4-a_3a^2_2+2a_1a_4a_5,\\
\dot{a_4}&=a_4a^2_1-a_4a^2_2-a_4a^2_3,\\
\dot{a_5}&=a_5a^2_1-a_5a^2_3+a_5a^2_2.
\end{split}
\]
Note that the system is not Lotka-Volterra.
It is Hamiltonian with Hamiltonian function
$H=\frac{1}{2}\left(a_1^2+a_2^2+a_3^2+a_4^2+a_5^2\right)$.
The system has Poisson matrix
\[
\ensuremath{\pi}=\left( \begin {array}{ccccc}
0&a_{{1}}a_{{2}}&-2\,a_{{4}}a_{{5}}&-a_{{1}}a_{{4}}&-a_{{1}}a_{{5}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}&0&a_{{2}}a_{{3}}&a_{{2}}a_{{4}}&-a_{{2}}a_{{5}}\\\noalign{\medskip}
2\,a_{{4}}a_{{5}}&-a_{{2}}a_{{3}}&0&a_{{3}}a_{{4}}&a_{{3}}a_{{5}}\\\noalign{\medskip}
a_{{1}}a_{{4}}&-a_{{2}}a_{{4}}&-a_{{3}}a_{{4}}&0&0\\\noalign{\medskip}
a_{{1}}a_{{5}}&a_{{2}}a_{{5}}&-a_{{3}}a_{{5}}&0&0
\end {array} \right)
\]
of rank 4. The determinant $C=(a_1a_3-a_4a_5)^2$ of $L$ is the Casimir of the system.
The trace of $L^3$ gives the additional constant of motion
$$F=\frac{1}{6}\tr\left(L^3\right)=a_1 a_2 a_4+ a_2 a_3 a_5 $$
and therefore the system is Liouville integrable.
\end{example}
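The constants of motion of the preceding example are easy to check numerically; the sketch below verifies $\frac{1}{6}\tr(L^3)=a_1a_2a_4+a_2a_3a_5$ and $\det L=(a_1a_3-a_4a_5)^2$ for random values of the $a_i$:

```python
import random

# Numerical check of the constants of motion for the Lax matrix L above.
a1, a2, a3, a4, a5 = [random.uniform(0.5, 2.0) for _ in range(5)]
L = [[0, a1, a4, 0],
     [a1, 0, a2, a5],
     [a4, a2, 0, a3],
     [0, a5, a3, 0]]

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

L3 = matmul(matmul(L, L), L)
F = sum(L3[i][i] for i in range(4)) / 6.0
assert abs(F - (a1 * a2 * a4 + a2 * a3 * a5)) < 1e-10
assert abs(det(L) - (a1 * a3 - a4 * a5) ** 2) < 1e-10
```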
\begin{example}\mbox{}\\*
Let
\[ L= \left( \begin {array}{cccc}
0&a_{{1}}&0&a_{{5}}\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&a_{{4}}\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}\\\noalign{\medskip}
a_{{5}}&a_{{4}}&a_{{3}}&0
\end {array} \right)
\]
and
\[B=\left( \begin {array}{cccc}
0&a_{{4}}a_{{5}}&a_{{1}}a_{{2}}+a_{{5}}a_{{3}}&-a_{{1}}a_{{4}}\\\noalign{\medskip}
-a_{{4}}a_{{5}}&0&-a_{{4}}a_{{3}}&a_{{1}}a_{{5}}+a_{{2}}a_{{3}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}-a_{{5}}a_{{3}}&a_{{4}}a_{{3}}&0&-a_{{4}}a_{{2}}\\\noalign{\medskip}
a_{{1}}a_{{4}}&-a_{{1}}a_{{5}}-a_{{2}}a_{{3}}&a_{{4}}a_{{2}}&0
\end {array} \right) \ .
\]
The Lax pair is equivalent to the following equations of motion:
\[
\begin{split}
\dot{a_1} &=a_1a^2_5+a_1a^2_2-a_1a^2_4+2a_2a_3a_5,\\
\dot{a_2}&=a_2a^2_3-a_2a^2_1-a_2a^2_4,\\
\dot{a_3}&=-a_3a^2_5+a_3a^2_4-a_3a^2_2-2a_1a_2a_5,\\
\dot{a_4}&=-a_4a^2_5-a_4a^2_3+a_4a^2_1+a_4a^2_2,\\
\dot{a_5}&=-a_5a^2_1+a_5a^2_4+a_5a^2_3.
\end{split}
\]
The Poisson matrix is
\[
\ensuremath{\pi}= \left( \begin {array}{ccccc}
0&a_{{1}}a_{{2}}&2\,a_{{2}}a_{{5}}&-a_{{1}}a_{{4}}&a_{{1}}a_{{5}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}&0&a_{{2}}a_{{3}}&-a_{{2}}a_{{4}}&0\\\noalign{\medskip}
-2\,a_{{2}}a_{{5}}&-a_{{2}}a_{{3}}&0&a_{{3}}a_{{4}}&-a_{{3}}a_{{5}}\\\noalign{\medskip}
a_{{1}}a_{{4}}&a_{{2}}a_{{4}}&-a_{{3}}a_{{4}}&0&-a_{{4}}a_{{5}}\\\noalign{\medskip}
-a_{{1}}a_{{5}}&0&a_{{3}}a_{{5}}&a_{{4}}a_{{5}}&0
\end {array} \right)
\]
with
$ \Rank(\ensuremath{\pi})=4 $.
\noindent
The constants of motion are
\[
\begin{split}
H&=\frac{1}{2}\left(a_1^2+a_2^2+a_3^2+a_4^2+a_5^2\right)\;\text{(Hamiltonian)},\\
F&=\frac{1}{6}\tr\left(L^3\right)=a_{{1}}a_{{4}}a_{{5}}+a_{{2}}a_{{3}}a_{{4}},\\
C&=\det(L)=\left(a_1a_3-a_2a_5\right)^2\;\text{(Casimir)}.
\end{split}
\]
\end{example}
\begin{example}\mbox{}\\*
For the root system of type $A_4$ the Lax pair corresponding to
$$\Phi=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_2+\alpha_3\}$$
is given by the matrices
\[ L= \left( \begin {array}{ccccc} 0&a_{{1}}&0&0&0\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&a_{{5}}&0\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}&0\\\noalign{\medskip}
0&a_{{5}}&a_{{3}}&0&a_{{4}}\\\noalign{\medskip}
0&0&0&a_{{4}}&0
\end {array} \right)
\]
and
\[B=\left( \begin {array}{ccccc}
0&0&a_{{1}}a_{{2}}&-a_{{1}}a_{{5}}&0\\\noalign{\medskip}
0&0&-a_{{5}}a_{{3}}&a_{{2}}a_{{3}}&-a_{{5}}a_{{4}}\\\noalign{\medskip}
-a_{{1}}a_{{2}}&a_{{5}}a_{{3}}&0&-a_{{2}}a_{{5}}&a_{{3}}a_{{4}}\\\noalign{\medskip}
a_{{1}}a_{{5}}&-a_{{2}}a_{{3}}&a_{{2}}a_{{5}}&0&0\\\noalign{\medskip}
0&a_{{5}}a_{{4}}&-a_{{3}}a_{{4}}&0&0
\end {array} \right).
\]
Using the change of variables $x_i=2a_i^2$ the corresponding system becomes
\[
\begin{split}
\dot{x_1} &=x_1x_2-x_1x_5,\\
\dot{x_2}&=-x_2x_5+x_2x_3-x_2x_1,\\
\dot{x_3}&=x_3x_5+x_3x_4-x_3x_2,\\
\dot{x_4}&=x_4x_5-x_4x_3,\\
\dot{x_5}&=-x_5x_4-x_5x_3+x_5x_1+x_5x_2 \,.
\end{split}
\]
The Poisson matrix is
\[
\ensuremath{\pi}= \left( \begin {array}{ccccc} 0&x_{{1}}x_{{2}}&0&0&-x_{{1}}x_{{5}}\\\noalign{\medskip}
-x_{{1}}x_{{2}}&0&x_{{2}}x_{{3}}&0&-x_{{2}}x_{{5}}\\\noalign{\medskip}
0&-x_{{2}}x_{{3}}&0&x_{{3}}x_{{4}}&x_{{3}}x_{{5}}\\\noalign{\medskip}
0&0&-x_{{3}}x_{{4}}&0&x_{{4}}x_{{5}}\\\noalign{\medskip}
x_{{1}}x_{{5}}&x_{{2}}x_{{5}}&-x_{{3}}x_{{5}}&-x_{{
4}}x_{{5}}&0\end {array} \right)
\]
with
$\Rank(\pi)=4 $.
The constants of motion are
\[
\begin{split}
H&=x_1+x_2+x_3+x_4+x_5\;\text{(Hamiltonian)},\\
F&=x_1x_3+x_1x_4+x_2x_4,\\
C&=x_2x_3x_5\;\text{(Casimir)}\,.
\end{split}
\]
These functions are obtained from the coefficients of the characteristic polynomial of $L$
\[
f(\lambda)=\lambda^5- \left(x_1+x_2+x_3+x_4+x_5\right)\, \lambda^3-2\sqrt{x_2x_3x_5}\, \lambda^2+\left(x_1x_3+x_1x_4+x_2x_4\right) \, \lambda \,.
\]
\end{example}
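As an illustrative numerical sanity check (a sketch, not part of the argument), one can verify at a random point that the time derivatives of $H$, $F$ and $C$ vanish along the vector field of the $A_4$ example above; the cancellations are exact algebraic identities, so they hold to machine precision.

```python
# Illustrative check: dH/dt, dF/dt, dC/dt vanish along the A_4 flow above.
import random

def rhs(x):
    x1, x2, x3, x4, x5 = x
    return (x1*x2 - x1*x5,
            -x2*x5 + x2*x3 - x2*x1,
            x3*x5 + x3*x4 - x3*x2,
            x4*x5 - x4*x3,
            -x5*x4 - x5*x3 + x5*x1 + x5*x2)

random.seed(0)
x = [random.uniform(0.5, 2.0) for _ in range(5)]
dx = rhs(x)
x1, x2, x3, x4, x5 = x
d1, d2, d3, d4, d5 = dx

dH = sum(dx)                                   # H = x1 + ... + x5
dF = d1*x3 + x1*d3 + d1*x4 + x1*d4 + d2*x4 + x2*d4   # F = x1x3 + x1x4 + x2x4
dC = d2*x3*x5 + x2*d3*x5 + x2*x3*d5            # C = x2 x3 x5

assert abs(dH) < 1e-12 and abs(dF) < 1e-12 and abs(dC) < 1e-12
```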
\section{Families of Lotka-Volterra systems} \label{families}
In the previous section we saw several examples of cubic systems which (after a simple change of variables)
are equivalent to Lotka-Volterra systems. In this section we will describe all subsets
$\Phi$ of the positive roots of $A_n$ which produce, after a suitable change of variables, a
Lotka-Volterra system.
It can be verified that for a root system of type $A_n$ the only choice for the subset $\Phi$ of $\Delta^+$ which transforms into a Lotka-Volterra system using the
substitution $x_i=2a_i^2$ is one of the following five.
\begin{enumerate}
\item $\Phi=\Pi,$
\item $\Phi=\Pi\cup\{\alpha_2+\alpha_3+\cdots+\alpha_{n-1}\},$
\item $\Phi=\Pi\cup\{\alpha_1+\alpha_2+\cdots+\alpha_{n-1} \},$
\item $\Phi=\Pi\cup\{\alpha_2+\alpha_3+\cdots+\alpha_n \},$
\item $\Phi=\Pi\cup\{\alpha_1+\alpha_2+\cdots+\alpha_n \}\,.$
\end{enumerate}
Case (1) gives rise to the KM system while case (5) gives rise to the periodic KM system.
Case (2) corresponds to the Lax equation $\dot{L}=[B,L]$, where the matrix $L$ is given by
\[L=\begin{pmatrix}
0 &a_1 &0 &\cdots & 0 &0 &0 &0 \\
a_1 &0 &a_2 &0 & & 0 &a_{n+1}&0 \\
0 &a_2 &0 &a_3 &\ddots & & 0 &0 \\
\vdots &0 &a_3 &\ddots &\ddots & & &0 \\
0 & &\ddots &\ddots & 0 &a_{n-2}& 0 &\vdots \\
0 & 0 & & &a_{n-2}& 0 &a_{n-1}& 0 \\
0 &a_{n+1}& 0 & & 0 &a_{n-1}& 0 &a_n \\
0 &0 & 0 & 0 & \cdots& 0 &a_n &0
\end{pmatrix}.
\]
The matrix $B$ is defined using the method described in section \Ref{Procedure} as
\[B=\begin{pmatrix}
0 &0 &a_1a_2 &0 &\cdots & 0&0 &-a_1a_{n+1} &0 \\
0 &0 &0 &a_2a_3 & & 0 &-a_{n-1}a_{n+1}&0 &-a_na_{n+1}\\
-a_1a_2 & 0 &0 &0 &\ddots & & 0 &-a_2a_{n+1}&0 \\
0 &-a_2a_3 &0 &\ddots & & & & 0 &0 \\
\vdots & &\ddots & & & \ddots &\ddots & & \vdots \\
0 &0 & & & \ddots & \ddots & 0 &a_{n-1}a_{n-2} & 0 \\
0 & a_{n-1}a_{n+1} &0 & & \ddots & 0 & 0 &0 & a_{n-1}a_n \\
a_1a_{n+1}&0 &a_2a_{n+1}& 0 &\ddots &-a_{n-1}a_{n-2}& 0 & 0 & 0 \\
0 &a_na_{n+1}& 0 & 0 & \cdots & 0 & -a_{n-1}a_n &0 &0
\end{pmatrix}.
\]
After substituting $x_i=2a_i^2$ for $i = 1,\dots,n+1$, the Lax pair $(B,L)$ becomes equivalent to the following equations of motion:
$$
\begin{array}{rcll}
\dot{x}_1&=&x_1(x_2-x_{n+1}),&\\
\dot{x}_2&=&x_2(x_3-x_1-x_{n+1}),&\\
\dot{x}_i&=&x_i(x_{i+1}-x_{i-1}),& i=3,4,\ldots ,n-2,n\\
\dot{x}_{n-1}&=&x_{n-1}(x_n-x_{n-2}+x_{n+1}),&\\
\dot{x}_{n+1}&=&x_{n+1}(x_1+x_2-x_{n-1}-x_n).&
\end{array}
$$
It is easily verified that for $n$ even, the rank of the Poisson matrix is $n$ and the function
$f=x_2x_3\cdots x_{n-1}x_{n+1}$ is the Casimir of the system,
while for $n$ odd, the rank of the Poisson matrix is $n-1$ and the functions
$f_1=x_1x_3\cdots x_n=\sqrt{\det L}$ and
$f_2=x_2x_3\cdots x_{n-1}x_{n+1}$ are the Casimirs.
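These conservation statements can be spot-checked numerically. The sketch below (illustrative only, assuming the indexing of the equations above) takes the $n=6$ instance of case (2) and verifies that $H=x_1+\cdots+x_7$ and the Casimir $f=x_2x_3x_4x_5x_7$ have vanishing derivative along the flow.

```python
# Illustrative n = 6 instance of case (2); checks dH/dt = 0 and dlog(f)/dt = 0.
import random

def rhs(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (x1*(x2 - x7),
            x2*(x3 - x1 - x7),
            x3*(x4 - x2),
            x4*(x5 - x3),
            x5*(x6 - x4 + x7),      # x_{n-1} equation
            x6*(x7 - x5),           # i = n equation
            x7*(x1 + x2 - x5 - x6))

random.seed(1)
x = [random.uniform(0.5, 2.0) for _ in range(7)]
dx = rhs(x)

dH = sum(dx)
# logarithmic derivative of f = x2 x3 x4 x5 x7
dlogf = sum(dx[j] / x[j] for j in (1, 2, 3, 4, 6))

assert abs(dH) < 1e-12 and abs(dlogf) < 1e-12
```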
Case (3) corresponds to the Lax pair $(L,B)$, where the matrices $L$ and $B$ are given by
\[L=\begin{pmatrix}
0 &a_1 &0 &\cdots & &0 &a_{n+1}&0 \\
a_1 &0 &a_2 &0 & & &0 &0 \\
0 &a_2 &0 &a_3 &\ddots & & &0 \\
\vdots &0 &a_3 &\ddots &\ddots & & &\vdots \\
& &\ddots &\ddots & 0 &a_{n-2}& 0 & \\
0 & & & &a_{n-2}& 0 &a_{n-1}& 0 \\
a_{n+1}& 0 & & & 0 &a_{n-1}& 0 &a_n \\
0 &0 & 0 & \cdots& & 0 &a_n &0
\end{pmatrix},
\]
\[B=\begin{pmatrix}
0 &0 &a_1a_2 &0 &\cdots &-a_{n-1}a_{n+1}&0 &-a_na_{n+1}\\
0 &0 &0 &a_2a_3 & & 0 &-a_1a_{n+1} &0 \\
-a_1a_2&0 &0 &0 &\ddots & & 0 &0 \\
0 &-a_2a_3&0 &\ddots &\ddots & & &\vdots \\
\vdots & &\ddots &\ddots & & & & \\
0 & & & & & &a_{n-1}a_{n-2} & 0 \\
a_{n-1}a_{n} & 0 & & & & 0 &0 & a_{n-1}a_n \\
0&a_{1}a_{n+1} & 0 & &-a_{n-1}a_{n-2}& 0 & 0 & 0 \\
a_na_{n+1} &0 & 0 & \cdots & 0 & -a_{n-1}a_n &0 &0
\end{pmatrix}.
\]
After substituting $x_i=2a_i^2$ for $i=1,\dots, n+1$, the Lax pair $(B,L)$ becomes equivalent to the following equations of motion:
\begin{eqnarray*}
\dot{x}_1&=&x_1(x_2-x_{n+1})\\
\dot{x}_i&=&x_i(x_{i+1}-x_{i-1}), \ i=2,3,4,\ldots ,n-2,n\\
\dot{x}_{n-1}&=&x_{n-1}(x_n-x_{n-2}+x_{n+1})\\
\dot{x}_{n+1}&=&x_{n+1}(x_1-x_n-x_{n-1}).
\end{eqnarray*}
For $n$ even, the rank of the Poisson matrix is $n$ and
the function $f=x_1x_2\cdots x_{n-1}x_{n+1}$ is the Casimir,
while for $n$ odd, the rank of the Poisson matrix is $n-1$ and the functions
$f_1=x_1x_3x_5\cdots x_n=\sqrt{\det L}$ and
$f_2=x_1x_2\cdots x_{n-1}x_{n+1}$ are Casimirs.
The system obtained in case (4) turns out to be isomorphic to the one in case (3). In fact, the change of variables $u_{n+1-i}=-x_i$ for $i=1,2,\ldots,n$ and $u_{n+1}=-x_{n+1}$ in case (3) gives the corresponding system of case (4).
\section{Two Lax pair techniques} \label{2tech}
In this section we present two techniques that we use to prove the integrability of the generalized Lotka-Volterra systems. The first one is due to Deift, Li, Nanda and Tomei \cite{DLNT}. It was used to establish the complete integrability of the full Kostant Toda lattice. The traces of powers of $L$ were not enough to prove integrability; therefore, the method of chopping was used to obtain additional integrals.
First we describe the method:
For $k=0, \dots , \left\lfloor { n-1\over 2}\right\rfloor$,\, denote by $( L-
\lambda \, { \rm Id})_{ (k)}$ the result of removing the first $k$
rows and last $k$ columns from $L- \lambda \,{\rm Id}$, and let
\bd {\rm det} \ ( L- \lambda \, { \rm Id})_{ (k)} = E_{0k} \lambda
^{n- 2k} + \dots + E_{n-2k,k} \ . \ed
Set \bd { {\rm det} \ ( L- \lambda \, { \rm Id})_{ (k)} \over
E_{0k}} = \lambda^{ n-2k} + I_{1k} \lambda ^ {n-2k-1} + \dots +
I_{n-2k,k} \ . \ed The functions $I_{rk}$, $r=1, \dots, n-2k$, are
constants of motion for the FKT lattice.
\bigskip
\begin{example} We consider in detail the $gl(3,{\bf C})$
case of the full Toda. Let
\bd L= \begin{pmatrix} f_1&1&0\cr g_1&f_2&1 \cr h_1&g_2 &f_3
\end{pmatrix} \ ,
\ed and take $B$ to be the strictly lower part of $L$. The function $H_2={1 \over 2} \tr L^2$ is the Hamiltonian, and using a suitable linear Poisson bracket the equations \bd \dot x=\{H_2, x\} \ed are equivalent to \bd
\begin{array}{lcl}
\dot f_1 &=& -g_1 \cr \dot f_2 &=& g_1-g_2 \cr \dot f_3 &=& g_2
\cr \dot g_1 &=&g_1(f_{1}-f_2) -h_1\cr \dot g_2 &=&g_2(f_{2}-f_3)
+h_1\cr \dot h_1 &=& h_1 (f_{1}-f_3) \ .
\end{array}
\ed
Note that $H_1=f_1+f_2+f_3$ while $H_2= {1 \over 2}
(f_1^2+f_2^2+f_3^2)+g_1+g_2$.
The chopped matrix is given by \bd
\begin{pmatrix}
g_1& f_2-\lambda \cr h_1 & g_2
\end{pmatrix} \ .
\ed
The determinant of this matrix is $h_1 \lambda +g_1 g_2-h_1 f_2$
and one obtains the rational integral \be I_{11}={ g_1 g_2 -h_1 f_2
\over h_1} \ . \label{a8} \ee
Note that the phase space is six-dimensional; we have two Casimirs, $H_1$ and $I_{11}$, and the functions $H_2$, $H_3$ are enough to
ensure integrability.
\end{example}
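These conservation laws can be confirmed numerically; the following sketch (illustrative, not part of the argument) evaluates the time derivatives of $H_1$ and of the rational integral $I_{11}$ along the vector field of the example at a random point.

```python
# Illustrative check for the gl(3) full Toda example above:
# H1 = f1 + f2 + f3 and I11 = (g1 g2 - h1 f2)/h1 are conserved.
import random

def rhs(v):
    f1, f2, f3, g1, g2, h1 = v
    return (-g1, g1 - g2, g2,
            g1*(f1 - f2) - h1,
            g2*(f2 - f3) + h1,
            h1*(f1 - f3))

random.seed(2)
v = [random.uniform(0.5, 2.0) for _ in range(6)]
f1, f2, f3, g1, g2, h1 = v
df1, df2, df3, dg1, dg2, dh1 = rhs(v)

# d/dt of the numerator N = g1 g2 - h1 f2, then of I11 = N / h1
N = g1*g2 - h1*f2
dN = dg1*g2 + g1*dg2 - dh1*f2 - h1*df2
dI11 = (dN*h1 - N*dh1) / h1**2

assert abs(df1 + df2 + df3) < 1e-11   # H1 conserved
assert abs(dI11) < 1e-11              # I11 conserved
```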
In the next example we use this technique to obtain the Casimir of a generalized Lotka-Volterra system.
\begin{example}
\noindent
Consider the generalized Lotka-Volterra system defined by the Lax matrix
$$ L=
\begin{pmatrix}
0 &a_1&0&a_5&0\\
a_1 &0&a_2&0&0\\
0 &a_2&0&a_3&0\\
a_5 &0&a_3&0&a_4\\
0&0&0&a_4&0
\end{pmatrix}
$$
which corresponds to the subset $\Phi=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_1+\alpha_2+\alpha_3\}$.
According to section \ref{liealgebras} a suitable choice of signs for the entries of $B$ gives rise to a Lotka-Volterra system.
However, there is a second choice of signs which results in a different system. Define the matrix $B$ to be
$$
\begin{pmatrix}
0& 0 &a_1a_2+a_3a_5&0&a_4a_5\\
0 &0&0&a_2a_3+a_1a_5&0\\
-a_1a_2-a_3a_5&0 &0&0&a_3a_4\\
0 &-a_2a_3-a_1a_5&0&0&0\\
-a_4a_5 &0&-a_3a_4&0&0\\
\end{pmatrix} \ .
$$
In this case the Lax equation $\dot{L}=[B,L]$ corresponds to the following system
\begin{align*}
\dot{a_1}&=a_1a_2^2+a_1a_5^2+2a_2a_3a_5\\
\dot{a_2}&=a_2a_3^2-a_2a_1^2\\
\dot{a_3}&=a_3a_4^2-a_3a_2^2-a_3a_5^2-2a_1a_2a_5\\
\dot{a_4}&=-a_4a_5^2-a_4a_3^2\\
\dot{a_5}&=-a_5a_1^2+a_5a_3^2+a_5a_4^2 \ .
\end{align*}
The Hamiltonian of the system is $H=\dfrac{1}{2}\left(a_1^2+a_2^2+a_3^2+a_4^2+a_5^2\right)$ and the Poisson matrix (of rank $4$) is
$$
\begin{pmatrix}
0&a_1a_2&2a_2a_5&0&a_1a_5\\
-a_1a_2&0&a_2a_3&0&0\\
-2a_2a_5&-a_2a_3&0&a_3a_4&-a_3a_5\\
0&0&-a_3a_4&0&-a_4a_5\\
-a_1a_5&0&a_3a_5&a_4a_5&0
\end{pmatrix} \ .
$$
The system is integrable with constants of motion $H=\dfrac{1}{2}\left(a_1^2+a_2^2+a_3^2+a_4^2+a_5^2\right)$ and
$$
F=\tr\left(\frac{L^4}{4}\right)=
\frac{1}{2}a_1^4+a_1^2a_5^2+\frac{1}{2}a_5^4+a_1^2a_2^2+2a_1a_5a_2a_3+a_3^2a_5^2+a_4^2a_5^2+\frac{1}{2}a_2^4+a_2^2a_3^2+\frac{1}{2}a_3^4+a_4^2a_3^2+\frac{1}{2}a_4^4.
$$
The Casimir of the system is
$
C=a_2^2-\dfrac{a_1a_2a_3}{a_5}
$
and may be obtained by the method of chopping as follows.
We have
$$
x\cdot I_5-L=
\begin{pmatrix}
x &-a_1&0&-a_5&0\\
-a_1 &x&-a_2&0&0\\
0 &-a_2&x&-a_3&0\\
-a_5 &0&-a_3&x&-a_4\\
0&0&0&-a_4&x
\end{pmatrix}
$$
and the one-chopped matrix is
$$
\begin{pmatrix}
-a_1 &x&-a_2&0\\
0 &-a_2&x&-a_3\\
-a_5 &0&-a_3&x\\
0&0&0&-a_4
\end{pmatrix}
$$
with determinant
$
a_4a_5x^2+a_1a_2a_3a_4-a_2^2a_4a_5.
$
Dividing the constant term of this polynomial by the leading coefficient $a_4a_5$ we obtain, up to sign, the Casimir $C$.
\end{example}
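The claim that $C$ is a constant of motion can be checked numerically at a random point; the sketch below (illustrative only) evaluates $\dot H$ and $\dot C$ along the equations of motion of the example.

```python
# Illustrative check: H and C = a2^2 - a1 a2 a3 / a5 are conserved
# by the system of the example above.
import random

def rhs(a):
    a1, a2, a3, a4, a5 = a
    return (a1*a2**2 + a1*a5**2 + 2*a2*a3*a5,
            a2*a3**2 - a2*a1**2,
            a3*a4**2 - a3*a2**2 - a3*a5**2 - 2*a1*a2*a5,
            -a4*a5**2 - a4*a3**2,
            -a5*a1**2 + a5*a3**2 + a5*a4**2)

random.seed(3)
a = [random.uniform(0.5, 1.5) for _ in range(5)]
a1, a2, a3, a4, a5 = a
d1, d2, d3, d4, d5 = rhs(a)

dH = a1*d1 + a2*d2 + a3*d3 + a4*d4 + a5*d5
# d/dt of C via the product/quotient rule, with Q = a1 a2 a3 / a5
Q = a1*a2*a3/a5
dQ = Q*(d1/a1 + d2/a2 + d3/a3 - d5/a5)
dC = 2*a2*d2 - dQ

assert abs(dH) < 1e-10 and abs(dC) < 1e-10
```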
The second method that we use is an old recipe of Moser.
Moser in \cite{moser} describes a relation between the KM system and the non--periodic Toda lattice. The procedure is
the following: Form $L^2$, which is no longer tridiagonal but is similar to a tridiagonal matrix. Let $\{e_1, e_2, \dots, e_n \}$ be the standard
basis of ${\bf R}^n$, and let $E_o= {\rm span}\,\{ e_{2i-1}, \, i=1,2, \dots \}$, $E_e= {\rm span}\,\{ e_{2i}, \, i=1,2, \dots \}$. Then $L^2$ leaves
$E_o$, $E_e$ invariant and reduces in each of these spaces to a tridiagonal symmetric Jacobi matrix.
For example, if we omit all even columns and all even rows we
obtain a tridiagonal Jacobi matrix and the entries of this new matrix define the transformation from the KM--system
to the Toda lattice. We illustrate with a simple example where $n=5$.
We use the symmetric version of the KM system Lax pair given by
\bd
L=\begin{pmatrix}
0 & a_1 & 0 & 0 & 0 \cr
a_1 & 0 & a_2 & 0 & 0 \cr
0 & a_2 & 0 & a_3 & 0 \cr
0 & 0 & a_3 & 0 & a_4 \cr
0 & 0 & 0 & a_4 & 0
\end{pmatrix} \ .
\ed
It is simple to calculate that $L^2$ is the matrix
\bd
\begin{pmatrix}
a_1^2 & 0 & a_1 a_2 & 0 & 0 \cr
0 & a_1^2+a_2^2 & 0 & a_2 a_3 & 0 \cr
a_1 a_2 & 0 & a_2^2+a_3^2 & 0 & a_3 a_4 \cr
0 & a_2 a_3 & 0 & a_3^2+a_4^2 & 0 \cr
0 & 0 & a_3 a_4 & 0 & a_4^2
\end{pmatrix} \ .
\ed
Omitting even columns and even rows of $L^2$ we obtain the matrix
\bd
\begin{pmatrix}
a_1^2 & a_1 a_2 & 0 \cr
a_1 a_2 & a_2^2+a_3^2 & a_3 a_4 \cr
0 & a_3 a_4 & a_4^2
\end{pmatrix} \ .
\ed
This is a tridiagonal Jacobi matrix. It is natural to define new variables $A_1=a_1 a_2$, $A_2=a_3 a_4$, $B_1=a_1^2$, $B_2=a_2^2+a_3^2$, $B_3=a_4^2$. The new
variables $A_1,A_2, B_1,B_2, B_3$ satisfy the Toda lattice equations.
This procedure shows that the KM system and the Toda lattice are closely related: the explicit transformation, due to H\'enon,
maps one system to the other. The mapping in the general case is given by
\be
A_i=-{ 1 \over 2} \sqrt {a_{2i} a_{2i-1}} \ , \qquad B_i= { 1 \over 2}\left( a_{2i-1}+a_{2i-2} \right) \label{a25} \ .
\ee
The equations satisfied by the new variables $A_i$, $B_i$ are given by:
\begin{displaymath}
\begin{array}{lcl}
\dot A _i& = & A_i \, (B_{i+1} -B_i ) \\
\dot B _i &= & 2 \, ( A_i^2 - A_{i-1}^2 ) \ .
\end{array}
\end{displaymath}
These are precisely the Toda equations in Flaschka's form.
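H\'enon's map can be tested numerically for $n=5$. The sketch below (illustrative, assuming the KM equations in the form $\dot a_i = a_i(a_{i+1}-a_{i-1})$ with $a_0=a_6=0$) applies the transformation above at a random point and checks that the mapped variables satisfy Flaschka's Toda equations.

```python
# Illustrative check of the Henon map from the KM system (n = 5)
# to the Toda lattice in Flaschka's form.
import math, random

def km_rhs(a):
    # KM system: da_i/dt = a_i (a_{i+1} - a_{i-1}), with a_0 = a_6 = 0
    ext = [0.0] + list(a) + [0.0]
    return [ext[i]*(ext[i+1] - ext[i-1]) for i in range(1, len(a)+1)]

random.seed(4)
a = [random.uniform(0.5, 2.0) for _ in range(5)]
da = km_rhs(a)

# Henon variables: A_i = -1/2 sqrt(a_{2i} a_{2i-1}), B_i = 1/2 (a_{2i-1} + a_{2i-2})
A = [-0.5*math.sqrt(a[2*i-1]*a[2*i-2]) for i in (1, 2)]    # A_1, A_2
B = [0.5*a[0], 0.5*(a[2]+a[1]), 0.5*(a[4]+a[3])]           # B_1, B_2, B_3
dA = [-0.25*(da[2*i-1]*a[2*i-2] + a[2*i-1]*da[2*i-2]) /
      math.sqrt(a[2*i-1]*a[2*i-2]) for i in (1, 2)]
dB = [0.5*da[0], 0.5*(da[2]+da[1]), 0.5*(da[4]+da[3])]

# Flaschka's Toda equations, with the boundary convention A_0 = A_3 = 0
Aext = [0.0] + A + [0.0]
for i in range(3):
    assert abs(dB[i] - 2*(Aext[i+1]**2 - Aext[i]**2)) < 1e-12
for i in range(2):
    assert abs(dA[i] - A[i]*(B[i+1] - B[i])) < 1e-12
```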
This idea of Moser was applied with success to establish transformations from the generalized Volterra lattices of Bogoyavlensky \cite{bog1, bog2} to generalized Toda systems.
The relation between the Volterra systems of type $B_n$ and $C_n$ and the corresponding Toda systems is in \cite{damianou02}. The similar construction of the Volterra lattice of type $D_n$ and the generalized
Toda lattice of type $D_n$ is in \cite{damianou04}. We use this method in the next section to obtain a missing integral for some generalized Lotka-Volterra systems.
\section{$2$-diagonal systems} \label{2diag}
We define a family of systems with a cubic Hamiltonian vector field. We present each such system in Lax pair form
$\dot{L}=[B,L]$ which allows us to obtain a large family of first integrals, $H_i=\tr(L^i)$. Additional integrals are obtained by the method of Moser described in the previous section. In the examples we present, these integrals are enough to ensure
the Liouville integrability of the systems. We believe that all these systems are Liouville integrable.
We begin with the definition of the matrices $L$ and $B$.
For convenience we let $d_i$ denote the $i^{th}$ diagonal starting from the upper right corner and moving towards the main diagonal.
We take $L$ to be an $n\times n$ symmetric matrix whose only non-zero entries lie on the two diagonals $d_m$ and $d_{n-1}$, where $n \geqslant 2m$ and $m \geqslant 2$. Note that for $m=1$ we obtain the periodic KM system.
The matrix $L$ is given by
\[
L=\begin{pmatrix}
0 &a_1 &0 &\cdots &0 & a_n &0 &\cdots &0 \\
a_1 &0 &a_2 &0 & & 0 &a_{n+1}&\ddots &\vdots \\
0 &a_2 &0 &a_3 &\ddots & &\ddots &\ddots &0 \\
\vdots &0 &a_3 &\ddots &\ddots & & &0 &a_{n+m-1}\\
0 & & &\ddots & & & & & 0 \\
a_n & 0 & & & & &a_{n-2}& 0 &\vdots \\
0 &a_{n+1}&\ddots & & &a_{n-3}& 0 &a_{n-2}& 0 \\
\vdots&\ddots & \ddots&0 & & 0 &a_{n-2}& 0 &a_{n-1} \\
0 &\cdots & 0 &a_{n+m-1}& 0 & \cdots& 0 &a_{n-1}&0
\end{pmatrix}.
\]
That is, $L$ is a symmetric $n \times n$ matrix whose non-zero upper diagonals are:
\[\displaystyle
\begin{array}{ccl}
d_{n-1} &=& (a_{1},a_{2}, \dots, a_{n-1})\\\noalign{\medskip}
d_m &=& (a_{n}, a_{n+1},\dots, a_{n+m-1})
\end{array} \]
To put it in the terminology of section \Ref{Procedure} this matrix has variables in the
positions corresponding to the simple roots and the positive roots of length $n-m$, i.e.
$$L=\displaystyle \sum_{\alpha_i\in\Phi}a_i(X_{\alpha_i}+X_{-\alpha_i}),$$
where
$$
\Phi=\{\alpha_1,\alpha_2,\ldots,\alpha_{n-1},\alpha_1+\alpha_2+\ldots+\alpha_{n-m},
\ldots,\alpha_{m}+\alpha_{m+1}+\ldots+\alpha_{n-1}\}.
$$
By considering the set
$
\Psi =
\left\{
\alpha+\beta \ | \ \alpha, \beta \in \Phi \cup \Phi^{-}, \alpha+\beta \in \Delta^{+}
\right\}
$
we define $B$ to be the matrix
\begin{equation}
\label{B_Matrix}
B=
\sum c_{ij} a_i a_j \left(\left[X_{\alpha_i},X_{\alpha_j}\right]+\left[X_{-\alpha_i},X_{-\alpha_j}\right]\right),
\end{equation}
where the non-zero terms are taken over all
$\alpha_i+\alpha_j \in \Psi$
with
$\alpha_i,\alpha_j\in\Phi\cup\Phi^-$
and
$c_{ij}=\pm 1$.
We compute the signs $c_{ij}$ in a way that leads to a consistent Lax pair. It turns out that $B$ is the $n \times n$ skew-symmetric matrix with non-zero upper diagonals:
\begin{equation}\label{B_Matrix_diagonals}
\begin{array}{{r@{\hspace{3pt}}c@{\hspace{3pt}}l@{\hspace{3pt}}}}
d_{n-2} &=& (a_{1}a_{2}, a_{2}a_{3},\dots , a_{n-2}a_{n-1}),\\
d_{m+1} &=& (-a_{n-m}a_{n}, -a_{n-m+1}a_{n+1}-a_{1}a_{n},\dots ,-a_{n-1}a_{n+m-1}-a_{m}a_{n+m-2}, -a_{m}a_{n+m-1}),\\
d_{m-1} &=& (a_{n-m+1}a_{n}+a_{1}a_{n+1}, a_{n-m+2}a_{n+1}+a_{2}a_{n+2},\dots, a_{n-1}a_{n+m-2}+a_{m-1}a_{n+m-1} ).
\end{array}
\end{equation}
The Poisson bracket $\{\,,\}$ is determined by the $N \times N$ Poisson matrix $\ensuremath{\pi}=q-q^t$, where $N=n+m-1$, and the non-zero entries of $q$ are given by:
\begin{equation}\label{Poisson_Matrix}
\displaystyle
\begin{array}{lcll}
q_{i,i+n} &=& a_{i}a_{i+n} &\text{for } 1 \leqslant i \leqslant m-1,\\\noalign{\medskip}
q_{i,i+n-1} &=& -a_{i}a_{i+n-1} &\text{for } 1 \leqslant i \leqslant m,\\\noalign{\medskip}
q_{i+n-m-1,i+n-1} &=& a_{i+n-1}a_{i+n-m-1} &\text{for } 1 \leqslant i \leqslant m,\\\noalign{\medskip}
q_{i+n-m,i+n-1} &=& -a_{i+n-1}a_{i+n-m} &\text{for } 1 \leqslant i \leqslant m-1,\\\noalign{\medskip}
q_{i,i+1} &=& a_{i}a_{i+1} &\text{for } 1 \leqslant i \leqslant n-2,\\\noalign{\medskip}
q_{i+n-1,i+n} &=& 2a_{i}a_{i+n-m} &\text{for } 1 \leqslant i \leqslant m-1\,.
\end{array}
\end{equation}
\subsection{Example.}
We illustrate in detail the results with a specific example when $m=4$, $n=10$ and $N=13$.
Here $L$ is a $10\times10$ matrix with two diagonals $d_4$ and $d_9$, i.e.
\[
\displaystyle
L=
\begin{pmatrix}
0&a_{{1}}&0&0&0&0&a_{{10}}&0&0&0\\
a_{{1}}&0&a_{{2}}&0&0&0&0&a_{{11}}&0&0\\
0&a_{{2}}&0&a_{{3}}&0&0&0&0&a_{{12}}&0\\
0&0&a_{{3}}&0&a_{{4}}&0&0&0&0&a_{{13}}\\
0&0&0&a_{{4}}&0&a_{{5}}&0&0&0&0\\
0&0&0&0&a_{{5}}&0&a_{{6}}&0&0&0\\
a_{{10}}&0&0&0&0&a_{{6}}&0&a_{{7}}&0&0\\
0&a_{{11}}&0&0&0&0&a_{{7}}&0&a_{{8}}&0\\
0&0&a_{{12}}&0&0&0&0&a_{{8}}&0&a_{{9}}\\
0&0&0&a_{{13}}&0&0&0&0&a_{{9}}&0
\end{pmatrix}.
\]
The matrix $B$ is defined by equation \Ref{B_Matrix}
and the Poisson bracket $\{\,,\}$ is determined by equations \Ref{Poisson_Matrix}.
The resulting Lax pair, $(L,B)$, is equivalent to the following equations of motion:
\begin{eqnarray*}
\dot{a _{1}} &=& a_{1}a_{2}^{2}+a_{1}a_{11}^{2}-a_{1}a_{10}^{2}, \\\noalign{\medskip}
\dot{a _{2}} &=& a_{2}a_{3}^{2}-a_{1}^{2}a_{2}+a_{2}a_{12}^{2}-a_{2}a_{11}^{2}, \\\noalign{\medskip}
\dot{a _{3}} &=& a_{3}a_{4}^{2}-a_{2}^{2}a_{3}+a_{3}a_{13}^{2}-a_{3}a_{12}^{2}, \\\noalign{\medskip}
\dot{a _{4}} &=& a_{4}a_{5}^{2}-a_{3}^{2}a_{4}-a_{4}a_{13}^{2}, \\\noalign{\medskip}
\dot{a _{5}} &=& a_{5}a_{6}^{2}-a_{4}^{2}a_{5}, \\\noalign{\medskip}
\dot{a _{6}} &=& a_{6}a_{7}^{2}-a_{5}^{2}a_{6}+a_{6}a_{10}^{2}, \\\noalign{\medskip}
\dot{a _{7}} &=& a_{7}a_{8}^{2}-a_{6}^{2}a_{7}+a_{7}a_{11}^{2}-a_{7}a_{10}^{2}, \\\noalign{\medskip}
\dot{a _{8}} &=& a_{8}a_{9}^{2}-a_{7}^{2}a_{8}+a_{8}a_{12}^{2}-a_{8}a_{11}^{2}, \\\noalign{\medskip}
\dot{a _{9}} &=& -a_{8}^{2}a_{9}+a_{9}a_{13}^{2}-a_{9}a_{12}^{2}, \\\noalign{\medskip}
\dot{a}_{10} &=& a_{7}^{2}a_{10}-a_{6}^{2}a_{10}+a_{1}^{2}a_{10}+2a_{1}a_{7}a_{11}, \\\noalign{\medskip}
\dot{a}_{11} &=& a_8^2a_{11}-a_7^2a_{11}+a_2^2a_{11}-a_1^2a_{11}+2a_2a_8a_{12}-2a_1a_7a_{10}, \\\noalign{\medskip}
\dot{a}_{12} &=& a_9^2a_{12}-a_8^2a_{12}+a_3^2a_{12}-a_2^2a_{12}+2a_3a_9a_{13}-2a_2a_8a_{11}, \\\noalign{\medskip}
\dot{a}_{13} &=& -a_{9}^{2}a_{13}+a_{4}^{2}a_{13}-a_{3}^{2}a_{13}-2a_{3}a_{9}a_{12}.
\end{eqnarray*}
The Hamiltonian of the system is $H_2 = \frac{1}{2}\left(a_1^2 + a_2^2 + \cdots + a_{13}^2\right)$ and the Poisson matrix has rank $12$.
The following constants of motion
$$H_i=\tr L^i, \ \ i=2,4,6,7,8,9$$
together with the Casimir, $C=\det L$, ensure the integrability of the system.
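Since $L$ depends linearly on the $a_i$, the Lax form can be exploited for an exact pointwise check: with $\dot L$ built from the right-hand sides above, $\frac{d}{dt}\tr L^4 = 4\tr(L^3\dot L)$ must vanish, as must $\dot H_2=\sum a_i\dot a_i$. An illustrative sketch (not part of the argument):

```python
# Illustrative check for the m = 4, n = 10 example: H2 and tr(L^4) conserved.
import random

def rhs(a):
    a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13 = a
    return [
        a1*a2**2 + a1*a11**2 - a1*a10**2,
        a2*a3**2 - a1**2*a2 + a2*a12**2 - a2*a11**2,
        a3*a4**2 - a2**2*a3 + a3*a13**2 - a3*a12**2,
        a4*a5**2 - a3**2*a4 - a4*a13**2,
        a5*a6**2 - a4**2*a5,
        a6*a7**2 - a5**2*a6 + a6*a10**2,
        a7*a8**2 - a6**2*a7 + a7*a11**2 - a7*a10**2,
        a8*a9**2 - a7**2*a8 + a8*a12**2 - a8*a11**2,
        -a8**2*a9 + a9*a13**2 - a9*a12**2,
        a7**2*a10 - a6**2*a10 + a1**2*a10 + 2*a1*a7*a11,
        a8**2*a11 - a7**2*a11 + a2**2*a11 - a1**2*a11 + 2*a2*a8*a12 - 2*a1*a7*a10,
        a9**2*a12 - a8**2*a12 + a3**2*a12 - a2**2*a12 + 2*a3*a9*a13 - 2*a2*a8*a11,
        -a9**2*a13 + a4**2*a13 - a3**2*a13 - 2*a3*a9*a12,
    ]

def lax(a):
    # L is linear in the a_i: a_1..a_9 on the diagonal d_9,
    # a_10..a_13 on d_4 at positions (1,7), (2,8), (3,9), (4,10)
    L = [[0.0]*10 for _ in range(10)]
    for i in range(9):
        L[i][i+1] = L[i+1][i] = a[i]
    for i in range(4):
        L[i][i+6] = L[i+6][i] = a[9+i]
    return L

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(10)) for j in range(10)]
            for i in range(10)]

random.seed(5)
a = [random.uniform(0.5, 1.5) for _ in range(13)]
da = rhs(a)

dH2 = sum(ai*dai for ai, dai in zip(a, da))

L, Ldot = lax(a), lax(da)          # Ldot = L(da) because L is linear in a
L3 = matmul(matmul(L, L), L)
dH4 = 4*sum(matmul(L3, Ldot)[i][i] for i in range(10))

assert abs(dH2) < 1e-9 and abs(dH4) < 1e-7
```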
\subsection{Special case with two diagonals, $m=2$}
In this subsection we consider the case where $m=2$.
The matrix $L$ is defined by
\[L=\left(
\begin {array}{ccccccc}
0 & a_1 & 0 &\cdots& 0 & a_n & 0 \\\noalign{\bigskip}
a_1 & 0 & a_2 &\ddots& & 0 &a_{n+1} \\\noalign{\bigskip}
0 & a_2 & 0 &\ddots& & &0 \\\noalign{\bigskip}
\vdots&\ddots &\ddots&\ddots& & 0 &\vdots \\\noalign{\bigskip}
0 & & & & &a_{n-2}&0 \\\noalign{\bigskip}
a_n & 0 & & 0 &a_{n-2}& 0 &a_{n-1} \\\noalign{\bigskip}
0 &a_{n+1}& 0 &\cdots& 0 &a_{n-1}&0
\end {array} \right)
\]
and corresponds to the subset $\Phi$ of the positive roots containing the simple roots and the roots of length $n-2$.
The matrix $B$ is defined by equation \Ref{B_Matrix} and its upper triangular part is
\[\begin{pmatrix}
0 &0 &a_1a_2 &0 &\cdots& 0 & -a_{n-2}a_{n} &0 &a_1a_{n+1}+a_{n-1}a_n \\
0 &0 &0 &a_2a_3 & & \; & 0 &-a_1a_{n}-a_{n-1}a_{n+1}&0 \\
\vdots & \ddots&0 &0 &\ddots& \; & & 0 &-a_2a_{n+1}\\
& & &\ddots &\ddots& \; & & & 0 \\
& & & & & \; & \ddots & 0 & \vdots \\
& & & & & \; & 0 &a_{n-3}a_{n-2} & 0 \\
& & & & & \; & 0 &0 & a_{n-2}a_{n-1}\\
\vdots & & & & & & \ddots & 0 &0 \\
0 &\cdots & & & & & \cdots & 0 &0 \end{pmatrix}.
\]
The Lax equation $\dot{L} =[B,L]$ is equivalent to the following system:
\begin{eqnarray*}
\dot{a}_1 &=& a_1a^2_2+a_1a^2_{n+1}-a_1a^2_n, \\\noalign{\medskip}
\dot{a}_2 &=& a_2a^2_3-a^2_1a_2-a_2a^2_{n+1}, \\
\vdots \ & & \quad\quad\quad\quad \vdots \\
\dot{a}_i &=& a_ia^2_{i+1}-a^2_{i-1}a_i, \quad \quad \quad i = 3, 4, \dots, n-3\\
\vdots \ & & \quad\quad\quad\quad \vdots \\
\dot{a}_{n-2} &=& a_{n-2}a^2_n-a^2_{n-3}a_{n-2}+a_{n-2}a^2_{n-1},\\\noalign{\medskip}
\dot{a}_{n-1} &=& a_{n-1}a^2_{n+1}-a^2_{n-2}a_{n-1}-a_{n-1}a^2_n,\\\noalign{\medskip}
\dot{a}_n &=& a^2_1a_n+a^2_{n-1}a_n-a^2_{n-2}a_n+2a_{1}a_{n-1}a_{n+1},\\\noalign{\medskip}
\dot{a}_{n+1} &=& a^2_2a_{n+1}-a^2_1a_{n+1}-a_{n+1}a^2_{n-1}-2a_1a_{n-1}a_n\,.
\end{eqnarray*}
\noindent
The Poisson matrix $\ensuremath{\pi}$ is defined by equations \Ref{Poisson_Matrix} and its upper triangular part is
\[
\begin{pmatrix}
0 &a_1a_2 & 0 & \cdots & & & 0 & -a_1a_n & a_1a_{n+1} \\
\vdots &0 &a_2a_3 & 0 & & \; & & 0 & -a_2a_{n+1} \\
& &0 & a_3a_4 & & \; & & & 0 \\
& & & \ddots & & \ddots& 0 & & \\
& & & &\ddots& \ddots& 0 & 0 & \vdots \\
& & & & & 0 & a_{n-2}a_{n-1}& a_{n-2}a_{n} & 0 \\
& & & & & \; & 0 & -a_{n-1}a_n & a_{n-1}a_{n+1} \\
\vdots & & & & & \; & & 0 & 2a_1a_{n-1} \\
0 &\cdots & & & & \; & & \cdots & 0
\end{pmatrix}.
\]
For $n=9$ the corresponding system is
\begin{eqnarray*}
\dot{a}_1 &=& a_1a^2_2+a_1a^2_{10}-a_1a^2_9,\\\noalign{\medskip}
\dot{a}_2 &=& a_2a^2_3-a^2_1a_2-a_2a^2_{10},\\\noalign{\medskip}
\dot{a}_3 &=& a_3a^2_4-a^2_2a_3,\\\noalign{\medskip}
\dot{a}_4 &=& a_4a^2_5-a^2_3a_4,\\\noalign{\medskip}
\dot{a}_5 &=& a_5a^2_6-a^2_4a_5,\\\noalign{\medskip}
\dot{a}_6 &=& a_6a^2_7-a^2_5a_6,\\\noalign{\medskip}
\dot{a}_7 &=& a_7a^2_8-a^2_6a_7+a_7a^2_9,\\\noalign{\medskip}
\dot{a}_8 &=& -a^2_7a_8+a_8a^2_{10}-a_8a^2_9,\\\noalign{\medskip}
\dot{a}_9 &=& a^2_1a_9+a^2_8a_9-a^2_7a_9+2a_1a_8a_{10},\\\noalign{\medskip}
\dot{a}_{10} &=& a^2_2a_{10}-a^2_1a_{10}-a^2_8a_{10}-2a_1a_8a_9.
\end{eqnarray*}
It has Lax representation $\dot{L}=[B,L]$ with
\[
L=
\begin {pmatrix}
0&a_{{1}}&0&0&0&0&0&a_{{9}}&0 \\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&0&0&0&0&0&a_{{10}} \\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}&0&0&0&0&0 \\\noalign{\medskip}
0&0&a_{{3}}&0&a_{{4}}&0&0&0&0 \\\noalign{\medskip}
0&0&0&a_{{4}}&0&a_{{5}}&0&0&0 \\\noalign{\medskip}
0&0&0&0&a_{{5}}&0&a_{{6}}&0&0 \\\noalign{\medskip}
0&0&0&0&0&a_{{6}}&0&a_{{7}}&0 \\\noalign{\medskip}
a_{{9}}&0&0&0&0&0&a_{{7}}&0&a_{{8}} \\\noalign{\medskip}
0&a_{{10}}&0&0&0&0&0&a_{{8}}&0
\end{pmatrix}
\]
and the matrix $B$ is defined by relation \Ref{B_Matrix} and given by equations \Ref{B_Matrix_diagonals}.
We conjecture that for any $n$ the corresponding system is integrable.
For $n$ even, the system has $n+1$ variables and the Poisson matrix has rank $n$ and thus the Poisson structure has one Casimir.
The traces of the powers of $L$ give $n/2$ functionally independent first integrals in involution. Hence the system is integrable
in the sense of Liouville. The Casimir is $$C=\det{L}=(a_3a_5\ldots a_{n-5}a_{n-3}a_na_{n+1}-a_1a_3\ldots a_{n-3}a_{n-1})^2.$$
For $n$ odd, the system has $n+1$ variables and the Poisson matrix has rank $n+1$. Therefore the
Poisson structure is non-degenerate with no Casimirs. The traces $\tr(L^i)$ give only $\frac{n+1}{2}-1$
functionally independent first integrals in involution.
For the integrability of the system we need one more
constant of motion, which we obtain using the procedure due to Moser described in section \Ref{2tech}.
We give two examples, for $n=7$ and $n=9$.
\begin{example}
Consider the following matrices
\[
L=
\begin{pmatrix}
0&a_{{1}}&0&0&0&a_{{7}}&0\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&0&0&0&a_{{8}}\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}&0&0&0\\\noalign{\medskip}
0&0&a_{{3}}&0&a_{{4}}&0&0\\\noalign{\medskip}
0&0&0&a_{{4}}&0&a_{{5}}&0\\\noalign{\medskip}
a_{{7}}&0&0&0&a_{{5}}&0&a_{{6}}\\\noalign{\medskip}
0&a_{{8}}&0&0&0&a_{{6}}&0
\end{pmatrix},
\]
\[
\La_o(L^2)=
\begin{pmatrix}
a_1^2+a_2^2+a_8^2 & a_2a_3 & a_1a_7+a_6a_8 \\\noalign{\medskip}
a_2a_3 & a_3^2+a_4^2 & a_4a_5 \\\noalign{\medskip}
a_1a_7+a_6a_8 & a_4a_5 & a_5^2+a_6^2+a_7^2
\end{pmatrix}
\]
We define a new set of variables $A_1=a_2a_3,A_2=a_4a_5,A_3=a_1a_7+a_6a_8,B_1=a_1^2+a_2^2+a_8^2,B_2=a_3^2+a_4^2\text{ and } B_3=a_5^2+a_6^2+a_7^2$.
These variables satisfy the periodic Toda equations which are equivalent to the Lax equation $\dot{\La}_o(L^2)=[C,\La_o(L^2)]$ where
\[
\La_o(L^2)=
\begin{pmatrix}
B_{{1}}&A_{{1}}&A_{{3}}\\\noalign{\medskip}
A_{{1}}&B_{{2}}&A_{{2}}\\\noalign{\medskip}
A_{{3}}&A_{{2}}&B_{{3}}
\end{pmatrix}
\]
and
\[
C=
\begin{pmatrix}
0 & A_{{1}} &-A_{{3}}\\\noalign{\medskip}
-A_{{1}} & 0 & A_{{2}}\\\noalign{\medskip}
A_{{3}} & -A_{{2}} & 0
\end{pmatrix}.
\]
This system has two Casimirs $B_1+B_2+B_3$ and $A_1A_2A_3$. The Casimir $B_1+B_2+B_3$ expressed as a function of the original variables gives the Hamiltonian while the Casimir $A_1A_2A_3$ gives the extra integral
\[
A_1A_2A_3=a_2a_3a_4a_5 \left( a_1a_7+a_6a_8\right).
\]
We could also obtain this integral from the system $\dot{\La}_e(L^2)=\left[C,\La_e(L^2)\right]$ where
\[
\La_e(L^2)=
\begin{pmatrix}
a_1^2+a_7^2 & a_1a_2 & a_5a_7 & a_1a_8 + a_6a_7 \\\noalign{\medskip}
a_1a_2 & a_2^2+a_3^2 & a_3a_4 & a_2a_8 \\\noalign{\medskip}
a_5a_7 & a_3a_4 & a_4^2+a_5^2 & a_5a_6 \\\noalign{\medskip}
a_1a_8+a_7a_6 & a_2a_8 & a_5a_6 & a_6^2+a_8^2
\end{pmatrix}
=
\begin{pmatrix}
B_{{1}}&A_{{1}}&A_{{4}}&A_{{6}}\\\noalign{\medskip}
A_{{1}}&B_{{2}}&A_{{2}}&A_{{5}}\\\noalign{\medskip}
A_{{4}}&A_{{2}}&B_{{3}}&A_{{3}}\\\noalign{\medskip}
A_{{6}}&A_{{5}}&A_{{3}}&B_{{4}}
\end{pmatrix}
\]
and
\[
C=
\begin{pmatrix}
0 & A_{{1}} & -A_{{4}} & A_{{6}}\\\noalign{\medskip}
-A_{{1}} & 0 & A_{{2}} & -A_{{5}}\\\noalign{\medskip}
A_{{4}} & -A_{{2}} & 0 & A_{{3}}\\\noalign{\medskip}
-A_{{6}} & A_{{5}} & -A_{{3}} & 0
\end{pmatrix}.
\]
This system is not the full symmetric Toda lattice of Deift, Li, Nanda and Tomei \cite{DLNT}. Although the $L$ matrix is the same, the $C$ matrix is different. This system has two polynomial Casimirs,
$B_1+B_2+B_3+B_4$ and $A_{{1}}A_{{2}}A_{{4}}+A_{{2}}A_{{3}}A_{{5}}$, with
\[
A_{{1}}A_{{2}}A_{{4}}+A_{{2}}A_{{3}}A_{{5}}=a_{{2}}a_{{3}}a_{{4}}a_{{5}} \left( a_{{1}}a_{{7}}+a_{{6}}a_{{8}} \right).
\]
\end{example}
\begin{example}
We take $L$ to be
\[
\begin{pmatrix}
0&a_{{1}}&0&0&0&0&0&a_{{9}}&0\\\noalign{\medskip}
a_{{1}}&0&a_{{2}}&0&0&0&0&0&a_{{10}}\\\noalign{\medskip}
0&a_{{2}}&0&a_{{3}}&0&0&0&0&0\\\noalign{\medskip}
0&0&a_{{3}}&0&a_{{4}}&0&0&0&0\\\noalign{\medskip}
0&0&0&a_{{4}}&0&a_{{5}}&0&0&0\\\noalign{\medskip}
0&0&0&0&a_{{5}}&0&a_{{6}}&0&0\\\noalign{\medskip}
0&0&0&0&0&a_{{6}}&0&a_{{7}}&0\\\noalign{\medskip}
a_{{9}}&0&0&0&0&0&a_{{7}}&0&a_{{8}}\\\noalign{\medskip}
0&a_{{10}}&0&0&0&0&0&a_{{8}}&0
\end{pmatrix}.
\]
The matrix
\[
\La_o(L^2)=
\begin{pmatrix}
a_1^2+a_2^2+a_{10}^2&a_2a_3&0&a_1a_9+a_8a_{10}\\\noalign{\medskip}
a_2a_3&a_3^2+a_4^2&a_4a_5&0\\\noalign{\medskip}
0&a_4a_5&a_5^2+a_6^2&a_6a_7\\\noalign{\medskip}
a_1a_9+a_8a_{10}&0&a_6a_7&a_7^2+a_8^2+a_9^2
\end{pmatrix}
=
\begin{pmatrix}
B_{{1}}&A_{{1}}& 0&A_{{4}}\\\noalign{\medskip}
A_{{1}}&B_{{2}}&A_{{2}}&0\\\noalign{\medskip}
0&A_{{2}}&B_{{3}}&A_{{3}}\\\noalign{\medskip}
A_{{4}}& 0&A_{{3}}&B_{{4}}
\end{pmatrix}
\]
produces the periodic Toda lattice which can be written in Lax pair form
$\dot{\La}_o(L^2)=\left[C,\La_o(L^2)\right]$ with
\[
C=
\begin{pmatrix}
0 & A_{{1}} & 0 & -A_{{4}} \\\noalign{\medskip}
-A_{{1}} & 0 & A_{{2}} & 0 \\\noalign{\medskip}
0 & -A_{{2}} & 0 & A_{{3}} \\\noalign{\medskip}
A_{{4}} & 0 & -A_{{3}} & 0
\end{pmatrix}.
\]
This system also has two polynomial Casimirs $B_1+B_2+B_3+B_4$ and $A_1A_2A_3A_4$. By writing the latter one in the original variables
we obtain the extra integral, namely
\[
A_1A_2A_3A_4=a_2a_3a_4a_5a_6a_7 \left( a_1a_9+a_{10}a_8 \right).
\]
The intermediate Toda system $\dot{\La}_e(L^2)=\left[C,\La_e(L^2)\right]$ with
\[\begin{split}
\La_e(L^2)&=
\begin{pmatrix}
a_1^2+a_9^2&a_1a_2&0&a_7a_9&a_1a_{10}+a_8a_9\\\noalign{\medskip}
a_1a_2&a_2^2+a_3^2&a_3a_4&0&a_2a_{10}\\\noalign{\medskip}
0&a_3a_4&a_4^2+a_5^2&a_5a_6&0\\\noalign{\medskip}
a_7a_9&0&a_5a_6&a_6^2+a_7^2&a_7a_8\\\noalign{\medskip}
a_1a_{10}+a_8a_9&a_2a_{10}&0&a_7a_8&a_8^2+a_{10}^2
\end{pmatrix}
\\
&=
\begin{pmatrix}
B_1&A_1&0&A_5&A_7\\\noalign{\medskip}
A_1&B_2&A_2&0&A_6\\\noalign{\medskip}
0&A_2&B_3&A_3&0\\\noalign{\medskip}
A_5&0&A_3&B_4&A_4\\\noalign{\medskip}
A_7&A_6&0&A_4&B_5
\end{pmatrix}
\end{split}
\]
and
\[
C=
\begin{pmatrix}
0 & A_1 & 0 &-A_5 & A_7 \\\noalign{\medskip}
-A_1 & 0 & A_2 & 0 &-A_6 \\\noalign{\medskip}
0 &-A_2 & 0 & A_3 & 0 \\\noalign{\medskip}
A_5 & 0 &-A_3 & 0 & A_4 \\\noalign{\medskip}
-A_7 & A_6 & 0 &-A_4 & 0
\end{pmatrix}.
\]
has two Casimirs
$B_1+B_2+B_3+B_4+B_5$ and $A_1A_2A_3A_5+A_2A_3A_4A_6$, with
\[
A_1A_2A_3A_5+A_2A_3A_4A_6=a_2a_3a_4a_5a_6a_7\left(a_1a_9+a_8a_{10} \right).
\]
Note that this intermediate Toda system is not of the type considered in \cite{damianou11}.
\end{example}
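The conservation of the extra integral for $n=9$ can be spot-checked numerically against the explicit equations of motion written earlier for this system (an illustrative sketch, not a proof):

```python
# Illustrative check for n = 9, m = 2: H and the extra integral
# F = a2 a3 a4 a5 a6 a7 (a1 a9 + a8 a10) are conserved.
import random

def rhs(a):
    a1,a2,a3,a4,a5,a6,a7,a8,a9,a10 = a
    return [
        a1*a2**2 + a1*a10**2 - a1*a9**2,
        a2*a3**2 - a1**2*a2 - a2*a10**2,
        a3*a4**2 - a2**2*a3,
        a4*a5**2 - a3**2*a4,
        a5*a6**2 - a4**2*a5,
        a6*a7**2 - a5**2*a6,
        a7*a8**2 - a6**2*a7 + a7*a9**2,
        -a7**2*a8 + a8*a10**2 - a8*a9**2,
        a1**2*a9 + a8**2*a9 - a7**2*a9 + 2*a1*a8*a10,
        a2**2*a10 - a1**2*a10 - a8**2*a10 - 2*a1*a8*a9,
    ]

random.seed(6)
a = [random.uniform(0.5, 1.5) for _ in range(10)]
da = rhs(a)
a1,a2,a3,a4,a5,a6,a7,a8,a9,a10 = a
d1,d2,d3,d4,d5,d6,d7,d8,d9,d10 = da

dH = sum(x*dx for x, dx in zip(a, da))
# F = P * S with P = a2 a3 a4 a5 a6 a7 and S = a1 a9 + a8 a10
P = a2*a3*a4*a5*a6*a7
S = a1*a9 + a8*a10
dP = P*(d2/a2 + d3/a3 + d4/a4 + d5/a5 + d6/a6 + d7/a7)
dS = d1*a9 + a1*d9 + d8*a10 + a8*d10
dF = dP*S + P*dS

assert abs(dH) < 1e-10 and abs(dF) < 1e-8
```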
\subsection{Special case with two diagonals, $m=3$}
In this subsection we consider the case where $m=3$.
The matrix $L$ is given by
\[
L=
\begin {pmatrix}
0 & a_1 & 0 &\ldots& 0 & a_n & 0 & 0 \\\noalign{\bigskip}
a_1 & 0 & a_2 &\ddots& & 0 &a_{n+1}&0 \\\noalign{\bigskip}
0 & a_2 & 0 &\ddots& & &0 &a_{n+2} \\\noalign{\bigskip}
\vdots&\ddots &\ddots &\ddots& & & & 0 \\\noalign{\bigskip}
0 &\ddots & & & &\ddots & \ddots& \vdots \\\noalign{\bigskip}
a_n & 0 & & & \ddots&\ddots &a_{n-2}&0 \\\noalign{\bigskip}
0 &a_{n+1}& 0 & & \ddots&a_{n-2}& 0 &a_{n-1} \\\noalign{\bigskip}
0 &0 &a_{n+2}& 0 &\cdots & 0 &a_{n-1}&0
\end {pmatrix}
\]
That is, $L$ is a symmetric $n \times n$ matrix whose non-zero upper diagonals are:
\[\displaystyle
\begin{array}{ccl}
d_{n-1} &=& (a_{1},a_{2},a_{3},a_{4},\dots, a_{n-1}),\\\noalign{\medskip}
d_3 &=& (a_{n}, a_{n+1}, a_{n+2}).
\end{array} \]
It corresponds to the subset $\Phi$ of the positive roots of the root system of type $A_{n-1}$ containing the simple roots
and the roots of length $n-3$. The matrix $B$, defined by equation \Ref{B_Matrix}, is an $n \times n$
skew-symmetric matrix whose non-zero upper diagonals are:
\[\displaystyle
\begin{array}{ccl}
d_{n-2} &=& (a_{1}a_{2}, a_{2}a_{3}, a_{3}a_{4},\dots, a_{n-2}a_{n-1}),\\\noalign{\medskip}
d_4 &=& (-a_{n-3}a_{n}, -a_{n-2}a_{n+1}-a_{1}a_{n}, -a_{n-1}a_{n+2}-a_{2}a_{n+1}, -a_{3}a_{n+2}),\\\noalign{\medskip}
d_2 &=& (a_{n-2}a_{n}+a_{1}a_{n+1}, a_{n-1}a_{n+1}+a_{2}a_{n+2}).
\end{array}\]
We believe that all of these systems are integrable. They are Hamiltonian systems with a Poisson matrix determined by equations \Ref{Poisson_Matrix}. For $n$ even, the Poisson structure has two Casimirs, and the traces of the $L^i$ together with an extra constant of motion obtained by Moser's technique establish the integrability of the system. For $n$ odd, the Poisson structure has one Casimir and the traces of the $L^i$ give enough first integrals to ensure the integrability of the system.
We illustrate this with two examples, one for $n=7$ and one for $n=8$.
\begin{example}
For $n=7$ the matrix $L$ is given by
\[\displaystyle
\begin{pmatrix}
0&a_{1}&0&0&a_{7}&0&0\\\noalign{\medskip}
a_{1}&0&a_{2}&0&0&a_{8}&0\\\noalign{\medskip}
0&a_{2}&0&a_{3}&0&0&a_{9}\\\noalign{\medskip}
0&0&a_{3}&0&a_{4}&0&0\\\noalign{\medskip}
a_{7}&0&0&a_{4}&0&a_{5}&0\\\noalign{\medskip}
0&a_{8}&0&0&a_{5}&0&a_{6}\\\noalign{\medskip}
0&0&a_{9}&0&0&a_{6}&0
\end{pmatrix},
\]
while $B$ is given by
\[
\displaystyle
\begin{pmatrix}
0&0&a_{1}a_{2}&-a_{4}a_{7}&0&a_{1}a_{8}+a_{5}a_{7}&0\\\noalign{\medskip}
0&0&0&a_{2}a_{3}&-a_{1}a_{7}-a_{5}a_{8}&0&a_{2}a_{9}+a_{6}a_{8}\\\noalign{\medskip}
-a_{1}a_{2}&0&0&0&a_{3}a_{4}&-a_{2}a_{8}-a_{6}a_{9}&0\\\noalign{\medskip}
a_{4}a_{7}&-a_{2}a_{3}&0&0&0&a_{4}a_{5}&-a_{3}a_{9}\\\noalign{\medskip}
0&a_{1}a_{7}+a_{5}a_{8}&-a_{3}a_{4}&0&0&0&a_{5}a_{6}\\\noalign{\medskip}
-a_{1}a_{8}-a_{5}a_{7}&0&a_{2}a_{8}+a_{6}a_{9}&-a_{4}a_{5}&0&0&0\\\noalign{\medskip}
0&-a_{2}a_{9}-a_{6}a_{8}&0&a_{3}a_{9}&-a_{5}a_{6}&0&0
\end{pmatrix}.
\]
The Poisson bracket $\{\,,\}$ is determined by the following Poisson matrix.
\[\displaystyle \ensuremath{\pi}=
\begin{pmatrix}
0&a_{1}a_{2}&0&0&0&0&-a_{1}a_{7}&a_{1}a_{8}&0\\\noalign{\medskip}
-a_{1}a_{2}&0&a_{2}a_{3}&0&0&0&0&-a_{2}a_{8}&a_{2}a_{9}\\\noalign{\medskip}
0&-a_{2}a_{3}&0&a_{3}a_{4}&0&0&0&0&-a_{3}a_{9}\\\noalign{\medskip}
0&0&-a_{3}a_{4}&0&a_{4}a_{5}&0&a_{4}a_{7}&0&0\\\noalign{\medskip}
0&0&0&-a_{4}a_{5}&0&a_{5}a_{6}&-a_{5}a_{7}&a_{5}a_{8}&0\\\noalign{\medskip}
0&0&0&0&-a_{5}a_{6}&0&0&-a_{6}a_{8}&a_{6}a_{9}\\\noalign{\medskip}
a_{1}a_{7}&0&0&-a_{4}a_{7}&a_{5}a_{7}&0&0&2\,a_{1}a_{5}&0\\\noalign{\medskip}
-a_{1}a_{8}&a_{2}a_{8}&0&0&-a_{5}a_{8}&a_{6}a_{8}&-2\,a_{1}a_{5}&0&2\,a_{2}a_{6}\\\noalign{\medskip}
0&-a_{2}a_{9}&a_{3}a_{9}&0&0&-a_{6}a_{9}&0&-2\,a_{2}a_{6}&0
\end{pmatrix}.
\]
The above system is equivalent to the following equations of motion.
\begin{eqnarray*}
\dot{a}_{1} &=&a_{{1}}a_{{2}}^{2}+a_{{1}}a_{{8}}^{2}-a_{{1}}a_{{7}}^{2},\\\noalign{\medskip}
\dot{a}_{2} &=&a_{{2}}a_{{3}}^{2}-a_{{1}}^{2}a_{{2}}+a_{{2}}a_{{9}}^{2}-a_{{2}}a_{{8}}^{2},\\\noalign{\medskip}
\dot{a}_{3} &=&a_{{3}}a_{{4}}^{2}-a_{{2}}^{2}a_{{3}}-a_{{3}}a_{{9}}^{2},\\\noalign{\medskip}
\dot{a}_{4} &=&a_{{4}}a_{{5}}^{2}-a_{{3}}^{2}a_{{4}}+a_{{4}}a_{{7}}^{2},\\\noalign{\medskip}
\dot{a}_{5} &=&a_{{5}}a_{{6}}^{2}-a_{{4}}^{2}a_{{5}}+a_{{5}}a_{{8}}^{2}-a_{{5}}a_{{7}}^{2},\\\noalign{\medskip}
\dot{a}_{6} &=&-a_{{5}}^{2}a_{{6}}+a_{{6}}a_{{9}}^{2}-a_{{6}}a_{{8}}^{2},\\\noalign{\medskip}
\dot{a}_{7} &=&a_{{1}}^{2}a_{{7}}+a_{{5}}^{2}a_{{7}}-a_{{4}}^{2}a_{{7}}+2\,a_{{1}}a_{{5}}a_{{8}},\\\noalign{\medskip}
\dot{a}_{8} &=&a_2^2a_8-a_1^2a_8+a_6^2a_8-a_5^2a_8+2\,a_2a_6a_9-2\,a_1a_5a_7,\\\noalign{\medskip}
\dot{a}_{9} &=&a_3^2a_9-a_2^2a_9-a_6^2a_9-2\,a_2a_6a_8.
\end{eqnarray*}
The Casimir for the Poisson bracket is given by
$$\det L = -2a_1a_3a_4a_6 (a_1a_5a_9 + a_2a_6a_7 - a_7a_8a_9).$$
Note that the constants of motion, $H_i=\tr L^i$ for $i=4,5,6$, together with the Hamiltonian $H_2 = \frac{1}{2}\left(a_1^2 + a_2^2 + \cdots + a_9^2\right)$ are functionally independent and in involution. Therefore the system is integrable.
\end{example}
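The conserved quantities in the $n=7$ example can be checked mechanically. The following SymPy sketch (our addition, not part of the original argument) verifies that $\dot{H}_2=\sum_i a_i\dot{a}_i$ vanishes identically along the flow:

```python
import sympy as sp

# The nine variables a1, ..., a9 of the n = 7 example.
a = sp.symbols('a1:10')
a1, a2, a3, a4, a5, a6, a7, a8, a9 = a

# Right-hand sides of the equations of motion, transcribed from the example above.
adot = [
    a1*a2**2 + a1*a8**2 - a1*a7**2,
    a2*a3**2 - a1**2*a2 + a2*a9**2 - a2*a8**2,
    a3*a4**2 - a2**2*a3 - a3*a9**2,
    a4*a5**2 - a3**2*a4 + a4*a7**2,
    a5*a6**2 - a4**2*a5 + a5*a8**2 - a5*a7**2,
    -a5**2*a6 + a6*a9**2 - a6*a8**2,
    a1**2*a7 + a5**2*a7 - a4**2*a7 + 2*a1*a5*a8,
    a2**2*a8 - a1**2*a8 + a6**2*a8 - a5**2*a8 + 2*a2*a6*a9 - 2*a1*a5*a7,
    a3**2*a9 - a2**2*a9 - a6**2*a9 - 2*a2*a6*a8,
]

# For H2 = (1/2) * sum a_i^2 the time derivative is sum a_i * adot_i.
dH2 = sp.expand(sum(ai*di for ai, di in zip(a, adot)))
print(dH2)  # expected: 0
```

The quadratic terms cancel in pairs, as do the two cross terms $2a_1a_5a_7a_8$ and $2a_2a_6a_8a_9$.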
\begin{example}
For $n=8$ the matrix $L$ is given by
\[\displaystyle L= \left(
\begin{array}{cccccccc}
0&a_{1}&0&0&0&a_{8}&0&0\\\noalign{\medskip}
a_{1}&0&a_{2}&0&0&0&a_{9}&0\\\noalign{\medskip}
0&a_{2}&0&a_{3}&0&0&0&a_{10}\\\noalign{\medskip}
0&0&a_{3}&0&a_{4}&0&0&0\\\noalign{\medskip}
0&0&0&a_{4}&0&a_{5}&0&0\\\noalign{\medskip}
a_{8}&0&0&0&a_{5}&0&a_{6}&0\\\noalign{\medskip}
0&a_{9}&0&0&0&a_{6}&0&a_{7}\\\noalign{\medskip}
0&0&a_{10}&0&0&0&a_{7}&0
\end{array}
\right),
\]
and the matrix $B$ is determined by the relations \Ref{B_Matrix_diagonals}.
The corresponding system is given by:
\begin{eqnarray*}
\dot{a}_1 &=&a_{1}a_{2}^{2}+a_{1}a_{9}^{2}-a_{1}a_{8}^{2},\\\noalign{\medskip}
\dot{a}_2 &=&a_{2}a_{3}^{2}-a_{1}^{2}a_{2}+a_{2}a_{10}^{2}-a_{2}a_{9}^{2},\\\noalign{\medskip}
\dot{a}_3 &=&a_{3}a_{4}^{2}-a_{2}^{2}a_{3}-a_{3}a_{10}^{2},\\\noalign{\medskip}
\dot{a}_4 &=&a_{4}a_{5}^{2}-a_{3}^{2}a_{4},\\\noalign{\medskip}
\dot{a}_5 &=&a_{5}a_{6}^{2}-a_{4}^{2}a_{5}+a_{5}a_{8}^{2},\\\noalign{\medskip}
\dot{a}_6 &=&a_{6}a_{7}^{2}-a_{5}^{2}a_{6}+a_{6}a_{9}^{2}-a_{6}a_{8}^{2},\\\noalign{\medskip}
\dot{a}_7 &=&-a_{6}^{2}a_{7}+a_{7}a_{10}^{2}-a_{7}a_{9}^{2},\\\noalign{\medskip}
\dot{a}_8 &=&a_{1}^{2}a_{8}+a_{6}^{2}a_{8}-a_{5}^{2}a_{8}+2\,a_{1}a_{6}a_{9},\\\noalign{\medskip}
\dot{a}_9 &=&a_2^2a_9-a_1^2a_9+a_7^2a_9-a_6^2a_9+2\,a_2a_7a_{10}-2\,a_1a_6a_8,\\\noalign{\medskip}
\dot{a}_{10} &=&a_3^2a_{10}-a_2^2a_{10}-a_{7}^{2}a_{10}-2\,a_2a_7a_9.
\end{eqnarray*}
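As in the previous example, conservation of $H_2$, and of the Casimir $C_1=a_1a_3a_5a_7$ appearing below, can be verified symbolically from these equations; this check is our illustration and not part of the text:

```python
import sympy as sp

# The ten variables a1, ..., a10 of the n = 8 example.
a = sp.symbols('a1:11')
a1, a2, a3, a4, a5, a6, a7, a8, a9, a10 = a

# Right-hand sides of the equations of motion, transcribed from above.
adot = [
    a1*a2**2 + a1*a9**2 - a1*a8**2,
    a2*a3**2 - a1**2*a2 + a2*a10**2 - a2*a9**2,
    a3*a4**2 - a2**2*a3 - a3*a10**2,
    a4*a5**2 - a3**2*a4,
    a5*a6**2 - a4**2*a5 + a5*a8**2,
    a6*a7**2 - a5**2*a6 + a6*a9**2 - a6*a8**2,
    -a6**2*a7 + a7*a10**2 - a7*a9**2,
    a1**2*a8 + a6**2*a8 - a5**2*a8 + 2*a1*a6*a9,
    a2**2*a9 - a1**2*a9 + a7**2*a9 - a6**2*a9 + 2*a2*a7*a10 - 2*a1*a6*a8,
    a3**2*a10 - a2**2*a10 - a7**2*a10 - 2*a2*a7*a9,
]

def ddt(f):
    """Time derivative of f along the flow, by the chain rule."""
    return sp.expand(sum(sp.diff(f, ai)*di for ai, di in zip(a, adot)))

dH2 = ddt(sum(ai**2 for ai in a)/2)
dC1 = ddt(a1*a3*a5*a7)
print(dH2, dC1)  # expected: 0 0
```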
It is a Hamiltonian system with Poisson structure determined by the Poisson matrix
\[
\begin{pmatrix}
0 &a_{1}a_{2} & 0 & 0 & 0 & 0 & 0 & -a_{1}a_{8} & a_{1}a_{9} & 0 \\\noalign{\medskip}
-a_{1}a_{2} &0&a_{2}a_{3} & 0 & 0 & 0 & 0 & 0 & -a_{2}a_{9} & a_{2}a_{10} \\\noalign{\medskip}
0 &-a_{2}a_{3} & 0 & a_{3}a_{4} & 0 & 0 & 0 & 0 & 0 & -a_{3}a_{10} \\\noalign{\medskip}
0 &0&-a_{3}a_{4} & 0 & a_{4}a_{5} & 0 & 0 & 0 & 0 & 0 \\\noalign{\medskip}
0 & 0 & 0 & -a_{4}a_{5} & 0 & a_{5}a_{6} & 0 & a_{5}a_{8} & 0 & 0 \\\noalign{\medskip}
0 & 0 & 0 & 0 & -a_{5}a_{6} & 0 & a_{6}a_{7} & -a_{6}a_{8} & a_{6}a_{9} & 0 \\\noalign{\medskip}
0 & 0 & 0 & 0 & 0 & -a_{6}a_{7} & 0 & 0 & -a_{7}a_{9} & a_{7}a_{10} \\\noalign{\medskip}
a_{1}a_{8} & 0 & 0 & 0 & -a_{5}a_{8} & a_{6}a_{8} & 0 & 0 & 2a_{1}a_{6} & 0 \\\noalign{\medskip}
-a_1a_9 & a_{2}a_{9} & 0 & 0 & 0 & -a_{6}a_{9} & a_{7}a_{9} & -2a_{1}a_{6} & 0 & 2a_{2}a_{7} \\\noalign{\medskip}
0 & -a_{2}a_{10} & a_{3}a_{10} & 0 & 0 & 0 & -a_{7}a_{10} & 0 & -2a_{2}a_{7} & 0
\end{pmatrix}
\]
which has rank $8$.
The Hamiltonian of the system is $H_2 = \frac{1}{2}\left(a_1^2 + a_2^2 + \cdots + a_{10}^2\right)$. An additional
constant of motion is obtained using Moser's technique.
\noindent
If we delete the odd-numbered rows and columns of $L^2$ we get the matrix
$$
\La_o(L^2)=\begin{pmatrix}
a_1^{2}+a_2^{2}+a_9^{2}&a_2a_3&a_1a_8+a_6a_9&a_7a_9+a_2a_{10}\\
a_2a_3&a_3^2+a_4^2&a_4a_5&a_3a_{10}\\
a_1a_8+a_6a_9&a_4a_5&a_5^{2}+a_6^{2}+a_8^{2}&a_6a_7\\
a_7a_9+a_2a_{10}&a_3a_{10}&a_6a_7&a_7^2+a_{10}^2
\end{pmatrix}
=
\begin{pmatrix}
B_{{1}}&A_{{1}}&A_{{4}}&A_{{6}}\\
A_{{1}}&B_{{2}}&A_{{2}}&A_{{5}}\\
A_{{4}}&A_{{2}}&B_{{3}}&A_{{3}}\\
A_{{6}}&A_{{5}}&A_{{3}}&B_{{4}}
\end{pmatrix}.
$$
We have
\begin{eqnarray*}
\dot{A_1}&=&\dot{(a_2a_3)}\\
&=&\dot{a_2}a_3+a_2\dot{a_3}=(a_2a_3^2+a_2a_{10}^2-a_1^2a_2-a_2a_9^2)a_3+a_2(a_3a_4^2-a_2^2a_3-a_3a_{10}^2)\\
&=&a_2a_3(a_3^2+a_4^2-a_1^2-a_2^2-a_9^2)\\
&=&A_1(B_2-B_1)
\end{eqnarray*}
and similarly the new variables $B_i,A_i$ satisfy the system
\begin{equation}
\label{equations}
\begin{array}{rcl}
\dot{B_1} &=& 2(A_1^2+A_6^2-A_4^2)\,,\\
\dot{B_2} &=& 2(A_2^2-A_1^2-A_5^2)\,,\\
\dot{B_3} &=& 2(A_3^2+A_4^2-A_2^2)\,,\\
\dot{B_4} &=& 2(A_5^2-A_3^2-A_6^2)\,,\\
\dot{A_1} &=& A_1(B_2-B_1)\,,\\
\dot{A_2} &=& A_2(B_3-B_2)\,,\\
\dot{A_3} &=& A_3(B_4-B_3)\,,\\
\dot{A_4} &=& A_4(B_1-B_3)+2A_3A_6\,,\\
\dot{A_5} &=& A_5(B_2-B_4)-2A_1A_6\,,\\
\dot{A_6} &=& A_6(B_4-B_1)-2A_3A_4+2A_1A_5\,.
\end{array}
\end{equation}
\noindent
This system can be written in Lax pair form $\dot{\La_o(L^2)}=\left[C,\La_o(L^2)\right]$ with
$$
C=
\begin{pmatrix}
0 & A_{{1}} & -A_{{4}} & A_{{6}}\\\noalign{\medskip}
-A_{{1}} & 0 & A_{{2}} & -A_{{5}}\\\noalign{\medskip}
A_{{4}} & -A_{{2}} & 0 & A_{{3}}\\\noalign{\medskip}
-A_{{6}} & A_{{5}} & -A_{{3}} & 0
\end{pmatrix}.
$$
It is Hamiltonian with Hamiltonian function
$$
H=\tr\left(\dfrac{\La_o\left(L^2\right)^2}{2}\right)=\dfrac{1}{2}(B_1^2+B_2^2+B_3^2+B_4^2)+A_1^2+A_2^2+A_3^2+A_4^2+A_5^2+A_6^2
$$ and Poisson matrix
$$
\begin{pmatrix}
0&0&0&0&A_1&0&0&-A_4&0&A_6\\
0&0&0&0&-A_1&A_2&0&0&-A_5&0\\
0&0&0&0&0&-A_2&A_3&A_4&0&0\\
0&0&0&0&0&0&-A_3&0&A_5&-A_6\\
-A_1&A_1&0&0&0&0&0&0&0&0\\
0&-A_2&A_2&0&0&0&0&0&0&0\\
0&0&-A_3&A_3&0&0&0&0&0&0\\
A_4&0&-A_4&0&0&0&0&0&0&A_3\\
0&A_5&0&-A_5&0&0&0&0&0&-A_1\\
-A_6&0&0&A_6&0&0&0&-A_3&A_1&0
\end{pmatrix}.
$$
It has two Casimir functions, $B_1+B_2+B_3+B_4$ and $A_{{1}}A_{{2}}A_{{4}}+A_{{2}}A_{{3}}A_{{5}}$. The function
\[
\begin{split}
F&=A_{{1}}A_{{2}}A_{{4}}+A_{{2}}A_{{3}}A_{{5}}=
a_2a_3a_4a_5 \left( a_1a_8+a_6a_9\right) +a_3a_4a_5a_6a_7a_{10}=\\
&a_1a_2a_3a_4a_5a_8+a_2a_3a_4a_5a_6a_9+a_3a_4a_5a_6a_7a_{10}
\end{split}
\]
is a constant of motion for the original system.
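Both Casimirs of the reduced system \Ref{equations} can be checked directly against the displayed flow; the following SymPy sketch is our addition:

```python
import sympy as sp

B1, B2, B3, B4 = sp.symbols('B1:5')
A1, A2, A3, A4, A5, A6 = sp.symbols('A1:7')

# The reduced flow in the variables B_i, A_i, transcribed from the system above.
flow = {
    B1: 2*(A1**2 + A6**2 - A4**2),
    B2: 2*(A2**2 - A1**2 - A5**2),
    B3: 2*(A3**2 + A4**2 - A2**2),
    B4: 2*(A5**2 - A3**2 - A6**2),
    A1: A1*(B2 - B1),
    A2: A2*(B3 - B2),
    A3: A3*(B4 - B3),
    A4: A4*(B1 - B3) + 2*A3*A6,
    A5: A5*(B2 - B4) - 2*A1*A6,
    A6: A6*(B4 - B1) - 2*A3*A4 + 2*A1*A5,
}

def ddt(f):
    """Time derivative of f along the reduced flow."""
    return sp.expand(sum(sp.diff(f, x)*rhs for x, rhs in flow.items()))

dCas1 = ddt(B1 + B2 + B3 + B4)
dCas2 = ddt(A1*A2*A4 + A2*A3*A5)
print(dCas1, dCas2)  # expected: 0 0
```

For the second Casimir the two triple products contribute $2A_1A_2A_3A_6$ and $-2A_1A_2A_3A_6$ respectively, which cancel.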
The integrals $H_2,H_4,H_6,F$ together with the two Casimirs given by
\[\displaystyle
\begin{array}{ccl}
C_1 &=& a_1a_3a_5a_7,\\
C_2 &=& \sqrt{\det L}-C_1 =a_{1}a_{4}a_{6}a_{10} + a_{2}a_{4}a_{7}a_{8} - a_{4}a_{8}a_{9}a_{10}
\end{array}\]
ensure the integrability of the system.
\end{example}
In general, for $n$ even, Moser's technique gives the following additional constant of motion.
\renewcommand{\arraystretch}{1.5}
\begin{table}[h]
$$\begin{array}{|c|l|}
\hline
n & F=a_2a_3\ldots a_{n-3}(a_1a_n+a_{n-2}a_{n+1})+a_3a_4\ldots a_{n-1}a_{n+2}\\
\hline
6 & a_1a_2a_3a_6+a_2a_3a_4a_7+a_3a_4a_5a_8\\
8 & a_1a_2a_3a_4a_5a_8+a_2a_3a_4a_5a_6a_9+a_3a_4a_5a_6a_7a_{10}\\
10 & a_1a_2a_3a_4a_5a_6a_7a_{10}+a_2a_3a_4a_5a_6a_7a_8a_{11}+a_3a_4a_5a_6a_7a_8a_9a_{12}\\
\hline
\end{array}$$
\caption{Additional constant of motion obtained using Moser's technique}
\end{table}
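The general formula in the table header can be expanded mechanically and compared with the listed rows; a small sketch (our addition; the helper name `F` is ours):

```python
import sympy as sp

def F(n):
    """Expand F = a2...a_{n-3} (a1 a_n + a_{n-2} a_{n+1}) + a3...a_{n-1} a_{n+2}."""
    a = sp.symbols('a1:%d' % (n + 3))   # a[i] stands for a_{i+1}
    head = sp.Mul(*a[1:n-3])            # a2 a3 ... a_{n-3}
    tail = sp.Mul(*a[2:n-1])            # a3 a4 ... a_{n-1}
    return sp.expand(head*(a[0]*a[n-1] + a[n-3]*a[n]) + tail*a[n+1])

a1, a2, a3, a4, a5, a6, a7, a8, a9, a10 = sp.symbols('a1:11')
d6 = sp.expand(F(6) - (a1*a2*a3*a6 + a2*a3*a4*a7 + a3*a4*a5*a8))
d8 = sp.expand(F(8) - (a1*a2*a3*a4*a5*a8 + a2*a3*a4*a5*a6*a9 + a3*a4*a5*a6*a7*a10))
print(d6, d8)  # expected: 0 0
```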
The following two tables contain the Casimirs of the Poisson structure for $m=3$.
\begin{table}[h]
$$\begin{array}{|c|l|}
\hline
n & C = -\frac{1}{2}\det L\\
\hline
7 & a_1a_3a_4a_6 (a_1a_5a_9 +a_2a_6a_7 - a_7a_8a_9)\\
9 & a_1a_3a_4a_5a_6a_8 (a_1a_7a_{11}+a_2a_8a_9 -a_9a_{10}a_{11})\\
11 & a_1a_3a_4a_5a_6a_7a_8a_{10} (a_1a_9a_{13}+a_2a_{10}a_{11}-a_{11}a_{12}a_{13})\\
13 & a_1a_3a_4a_5a_6a_7a_8a_9a_{10}a_{12} (a_1a_{11}a_{15}+a_2a_{12}a_{13}-a_{13}a_{14}a_{15})\\
\hline
n & a_1 a_3 a_4\cdots a_{n-3}a_{n-1} (a_1a_{n-2}a_{n+2} + a_2a_{n-1}a_{n} - a_{n}a_{n+1}a_{n+2})\\
\hline
\end{array}$$
\caption{Casimirs for $m=3$ and $n$ odd}
\end{table}
\begin{table}[h]
$$\begin{array}{|c|l|l|}
\hline
n & C_1 & C_2 = \sqrt{|\det{L}|} - C_1\\
\hline
6 & a_1a_3a_5 & -(a_{1}a_{4}a_{8} + a_{2}a_{5}a_{6} - a_{6}a_{7}a_{8})\\
8 & a_1a_3a_5a_7 & a_{4} (a_{1}a_{6}a_{10} + a_{2}a_{7}a_{8} - a_{8}a_{9}a_{10})\\
10 & a_1a_3a_5a_7a_9 & -a_{4}a_{6} (a_{1}a_{8}a_{12} + a_{2}a_{9}a_{10} - a_{10}a_{11}a_{12})\\
12 & a_1a_3a_5a_7a_9a_{11} & a_{4}a_{6}a_{8} (a_{1}a_{10}a_{14} + a_{2}a_{11}a_{12} - a_{12}a_{13}a_{14})\\
14 & a_1a_3a_5a_7a_9a_{11}a_{13}& -a_{4}a_{6}a_{8}a_{10} (a_{1}a_{12}a_{16} + a_{2}a_{13}a_{14} - a_{14}a_{15}a_{16})\\
\hline
n & a_1 a_3 \cdots a_{n-3} a_{n-1}& a_4 a_6 \cdots a_{n-6} a_{n-4} (a_1a_{n-2}a_{n+2} + a_2a_{n-1}a_{n} - a_{n}a_{n+1}a_{n+2})\\
\hline
\end{array} $$
\caption{Casimirs for $m=3$ and $n$ even}
\end{table}
{\bf Acknowledgments}. The first author was supported by a University of Cyprus Postdoctoral fellowship.
The work of the third author was co-funded by the European Regional Development Fund and the Republic of
Cyprus through the Research Promotion Foundation (Project: PENEK/0311/30).
\section{Introduction}
\label{sec-intro}
Given a smooth projective variety $X$, choose a normal crossings compactification
$\overline{X}=X\cup D$ and define a simplicial set called the {\em dual boundary complex}
${{\mathbb D} \partial} X$, containing the combinatorial information about multiple intersections of
divisor components of $D$. Danilov, Stepanov and Thuillier have shown
that the homotopy type of ${{\mathbb D} \partial} X$ is
independent of the choice of compactification, and this structure has been the subject of
much study.
We consider the case when $X={\rm M}_B(S; C_1,\ldots , C_k)$ is the character variety of
local systems on a punctured sphere $S\sim {\mathbb P} ^1-\{ y_1,\ldots , y_k\}$ such that
the conjugacy classes of the monodromies around the punctures are given by $C_1,\ldots , C_k$
respectively \cite{Letellier}. If these conjugacy classes satisfy a natural genericity condition
then the character variety is a smooth affine variety. We prove, for local systems of
rank $2$, that its dual boundary complex is a sphere of the appropriate dimension
(see Conjecture \ref{geopw}).
\begin{theorem}
\label{main}
Suppose $C_1,\ldots , C_k$ are conjugacy classes in $SL_2({\mathbb C} )$
satisfying the genericity Condition \ref{verygen}. Then the dual boundary complex
of the character variety is homotopy equivalent to a sphere:
$$
{{\mathbb D} \partial} {\rm M}_B(S; C_1,\ldots , C_k) \sim S^{2(k-3)-1}.
$$
\end{theorem}
This statement is a part of a general conjecture about the boundaries of moduli spaces of
local systems \cite{KNPS}. The conjecture says that the
dual boundary complex of the character variety or ``Betti moduli space'' should be a
sphere, and that it should furthermore be naturally identified with the sphere at infinity
in the ``Dolbeault'' or Hitchin moduli space of Higgs bundles.
We will discuss this topic in further detail in Section \ref{rel-hitch} at the
end of the paper.
The case $k=4$ of our theorem is a consequence of the
Fricke-Klein expression for the character variety, which was indeed the motivation for the
conjecture. The case $k=5$ of Theorem \ref{main} has been proven by Komyo \cite{Komyo}.
\subsection{Strategy of the proof}
Here is the strategy of our proof. We first notice that it is
possible to make some reductions, based on the following observation
(Lemma \ref{negligeable}):
if $Z\subset X$ is a smooth closed subvariety of a smooth quasiprojective variety, such that
the boundary dual complex is contractible ${{\mathbb D} \partial} Z\sim \ast$,
then the natural map ${{\mathbb D} \partial} X\rightarrow {{\mathbb D} \partial} (X-Z)$ is a homotopy
equivalence. This allows us to remove some subvarieties which will be ``negligeable'' for the
dual boundary complex. The main criterion is that
if $Z= {\mathbb A}^1\times Y$ then ${{\mathbb D} \partial} Z\sim \ast$
(Corollary \ref{affine}). Together, these two statements allow us successively to remove
a whole sequence of subvarieties (Proposition \ref{decomp}).
The main technique is to express the moduli space
${\rm M}_B(S; C_1,\ldots , C_k)$ in terms of a decomposition of $S$ into a sequence of
``pairs of pants'' $S_i$ which are three-holed spheres.
The decomposition is obtained by cutting $S$ along $(k-3)$ circles denoted $\rho _i$.
In each $S_i$, there is one
boundary circle corresponding to a loop $\xi _i$
around the puncture $y_i$, and two other boundary circles $\rho _{i-1}$ and $\rho _i$
along which $S$ was cut. At the start and the end of the sequence,
two of the circles correspond to $\xi _1,\xi _2$ or $\xi _{k-1},\xi _k$ and only one to a cut.
One may say that $\rho _1$ and $\rho _{k-1}$ are confused with the original
boundary circles $\xi _1$ and $\xi _k$ respectively.
We would like to use this decomposition
to express a local system $V$ on $S$ as being the result of ``glueing'' together
local systems $V|_{S_i}$ on each of the pieces, glueing across the circles $\rho _i$.
A basic intuition, which one learns from the
elementary theory of classical hypergeometric functions, is that a local system of
rank $2$ on a three-holed sphere is determined by the conjugacy classes of its three
monodromy transformations. This is true generically, but one needs to take some care in
degenerate cases involving potentially irreducible local systems, as will be discussed below.
The conjugacy classes of the monodromy transformations around $\rho _i$ are determined, except in
some special cases, by their traces. The special cases are when the traces are $2$ or $-2$.
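The fact that traces suffice here ultimately rests on $SL_2$ trace identities such as $\mathrm{tr}(AB)+\mathrm{tr}(AB^{-1})=\mathrm{tr}(A)\,\mathrm{tr}(B)$, an instance of Cayley-Hamilton. A quick symbolic check (our illustration, not part of the argument):

```python
import sympy as sp

a, b, c, d, p, q, r, s = sp.symbols('a b c d p q r s')
A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[p, q], [r, s]])
Badj = sp.Matrix([[s, -q], [-r, p]])  # adjugate of B; equals B^{-1} when det B = 1

# tr(AB) + tr(A adj(B)) = tr(A) tr(B) holds for all 2x2 matrices, hence
# tr(AB) + tr(A B^{-1}) = tr(A) tr(B) whenever B lies in SL2.
diff = sp.expand((A*B).trace() + (A*Badj).trace() - A.trace()*B.trace())
print(diff)  # expected: 0
```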
If we assume for the moment the uniqueness of $V|_{S_i}$ as a function of $C_i$ and the
traces $t_{i-1}$ and $t_i$ of the monodromies around $\rho _{i-1}$ and $\rho _i$
respectively, then the local system $V$ is roughly speaking determined by specifying
the values of these traces $t_2,\ldots , t_{k-2}$, plus the glueing parameters.
The glueing parameters should respect the monodromy transformations, and are defined modulo
central scalars, so each parameter is an element of $\gggg _m$. In this rough picture then,
the moduli space could be viewed as fibering over $({\mathbb A}^1)^{k-3}$ with fibers
$\gggg _m ^{k-3}$.
The resulting coordinates are classically known as {\em Fenchel-Nielsen coordinates}.
Originally introduced to parametrize $PGL_2({\mathbb R} )$ local systems corresponding to
points in Teichm\"uller space, they have been extended to the complex character variety
by Tan \cite{Tan}.
In the above discussion we have taken several shortcuts. We assumed that the traces $t_i$ determined
the monodromy representations, and in saying that the glueing parameters would
be in $\gggg _m$ we implicitly assumed that these monodromy transformations were diagonal with
distinct eigenvalues. These conditions correspond to saying $t_i \neq 2,-2$.
We also assumed that the local system $V|_{S_i}$ was determined by $C_i$,
$t_{i-1}$ and $t_i$. This is not in general true if it can be reducible, which is to say
if there is a non-genericity relation between the conjugacy classes. The locus where that
happens is somewhat difficult to specify explicitly since there are several possible choices
of non-genericity relation (the different choices of $\epsilon _i$ in Condition \ref{Kgen}).
We would therefore like a good way of obtaining such a rigidity even over the non-generic
cases.
Such a property is provided by the notion of {\em stability}. One may envision assigning parabolic
weights to the two eigenvalues of $C_i$ and assigning parabolic weights zero over $\rho _j$.
The parabolic weights induce a notion of {\em stable} local system over $S_i$. But in fact
we don't need
to discuss parabolic weights themselves since the notion of stability can also be defined
directly: a local system $V_i$ on $S_i$ is {\em unstable} if it admits a rank $1$ subsystem
$L$ such that the monodromy matrix in $C_i$ acts on $L$ by $c_i^{-1}$ (a previously chosen
one of the two
eigenvalues of $C_i$). It is {\em stable} otherwise. Now, it becomes true that a stable
local system is uniquely determined by $C_i$,
$t_{i-1}$ and $t_i$. This will be the basis of our calculations in Section \ref{sec-main},
see Corollary \ref{hyperge}.
The first phase of our proof is to use the possibility for reductions given by
Proposition \ref{decomp} to reduce to the case of the open subset
$$
M'\subset {\rm M}_B(S; C_1,\ldots , C_k)
$$
consisting of local systems $V$ such that $t_i\in {\mathbb A}^1-\{ 2,-2\}$ and
such that $V|_{S_i}$ is stable. In order to make these reductions, we show in Sections
\ref{sec-splitting} and \ref{sec-decomp-unstable} that the strata where some $t_i$ is $2$ or $-2$, or
where some
$V|_{S_i}$ is unstable, have a structure of product with ${\mathbb A}^1$, hence by Lemma \ref{affine}
these strata are negligeable in the sense that Lemma \ref{negligeable} applies.
For the open set $M'$, there is still one more difficulty. The glueing parameters depend {\em a priori}
on all of the traces, so we don't immediately get a decomposition of $M'$ as a product. A calculation
with matrices and a change of coordinates allow us to remedy this and we show in
Theorem \ref{fenchelnielsen} that $M' \cong {\bf Q}^{k-3}$ where ${\bf Q}$ is a space of choices of
a trace $t$ together with a point $[p,q]$ in a copy of $\gggg _m$.
It turns out that this family of multiplicative groups over ${\mathbb A}^1-\{ 2,-2\}$ is twisted:
the two endpoints of the fibers $\gggg _m$ get permuted as $t$ goes around $2$ and $-2$. This
twisting property is what makes it so that
$$
{{\mathbb D} \partial} {\bf Q} \sim S^1,
$$
and therefore by \cite[Lemma 6.2]{Payne},
${{\mathbb D} \partial} ({\bf Q}^{k-3})\sim S^{2(k-3)-1}$. This calculates ${{\mathbb D} \partial} M'$ and hence
also ${{\mathbb D} \partial} {\rm M}_B(S; C_1,\ldots , C_k)$ to prove Theorem \ref{main}.
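The last step rests on the behaviour of dual boundary complexes under products (via \cite[Lemma 6.2]{Payne}), ${{\mathbb D} \partial} (X\times Y)\sim {{\mathbb D} \partial} X \ast {{\mathbb D} \partial} Y$, together with the fact that a join of spheres satisfies $S^a\ast S^b\cong S^{a+b+1}$. The dimension count can be sketched as follows (our illustration):

```python
from functools import reduce

def join_dim(a, b):
    """Dimension of a join of spheres: S^a * S^b is homeomorphic to S^(a+b+1)."""
    return a + b + 1

def boundary_sphere_dim(k):
    """Iterate the join over the (k - 3) factors Q, each contributing an S^1."""
    return reduce(join_dim, [1]*(k - 3))

dims = {k: boundary_sphere_dim(k) for k in range(4, 9)}
print(dims)  # expected: {4: 1, 5: 3, 6: 5, 7: 7, 8: 9}
```

Each value agrees with $2(k-3)-1$, matching the dimension in Theorem \ref{main}.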
We should consider the open subset $M'$ as the natural domain of definition of the Fenchel-Nielsen
coordinate system, and the components in the
expression $M' \cong {\bf Q}^{k-3}$ are the Fenchel-Nielsen coordinates.
\subsection{Relation with other work}
What we are doing here is closely related to a number of things. Firstly, as pointed out above,
our calculation relies on the Fenchel-Nielsen coordinate system coming from a pair of pants decomposition,
and this is a well-known technique. Our only contribution is to keep track
of the things which must be removed from the domain of definition, and of the precise form of the
coordinate system, so as to be able to conclude the structure up to homotopy of the dual boundary complex.
A few references about Fenchel-Nielsen coordinates include
\cite{FenchelNielsen} \cite{Goldman} \cite{ParkerPlatis} \cite{Wolpert},
and for the complex case Tan's paper \cite{Tan}.
Nekrasov, Rosly and Shatashvili's work on bend parameters
\cite{NekrasovRoslyShatashvili} involves similar coordinates
and is related to the context of polygon spaces \cite{GodinhoMandini}.
The work of Hollands and Neitzke \cite{HollandsNeitzke}
gives a comparison between Fenchel-Nielsen and Fock-Goncharov coordinates
within the theory of spectral networks \cite{GMN}.
Jeffrey and Weitsman \cite{JeffreyWeitsman} consider what is the effect of
a decomposition, in arbitrary genus, on the space of representations into a compact group.
Recently, Kabaya uses these decompositions to give algebraic coordinate systems
and furthermore goes on to study the mapping class group action \cite{Kabaya}.
These are only a few
elements of a vast literature.
Conjecture \ref{geopw} relating the dual boundary complex of the character variety and
the sphere at infinity of the Hitchin moduli space, should be viewed as a geometric statement
reflecting the first weight-graded piece of the $P=W$ conjecture of de Cataldo, Hausel and
Migliorini \cite{deCataldoHauselMigliorini} \cite{Hausel}. This will be discussed a little bit more in
Section \ref{rel-hitch} but should also be the subject of further study.
Komyo gave the first proof of the theorem that the dual boundary complex was a sphere,
for rank $2$ systems on the projective line minus $5$ points \cite{Komyo}. He did this by
constructing an explicit compactification and writing down the dual complex.
This provides more information than what we get in our proof of Theorem \ref{main},
because we use a large number of reduction steps iteratively replacing the character variety
by smaller open subsets.
I first heard from Mark Gross in Miami in 2012 about a statement, which he attributed
to Kontsevich, that if
$X$ is a log-Calabi-Yau variety (meaning that it has a compactification $\overline{X}=X\cup D$
such that $K_{\overline{X}}+D$ is trivial), then ${{\mathbb D} \partial} X$ should be a sphere.
Sam Payne points out that
this idea may be traced back at least to \cite[Remark 4]{KontsevichSoibelmanHMS}
in the situation of a degeneration.
Gross also stated that this property
should apply to character varieties, that is to say that some or all character varieties
should be log-CY. That has apparently been known folklorically in many instances, cf.\
\cite{GHKK}.
Recently, much progress
has been made. Notably, Koll\'ar and Xu have
proven that the dual boundary of a log-CY variety is a sphere
in dimension $4$, and they go a long way towards the proof in general
\cite{KollarXu}.
They note that the correct statement, for general log-CY varieties, seems
to be that ${{\mathbb D} \partial} X$ should be a quotient of a sphere by a finite group.
In our situation of character varieties, part of the statement of
Conjecture \ref{geopw} posits that this
finite quotienting doesn't happen. This is supported by our theorem, but it is hard to
say what should be expected in general.
De Fernex, Koll\'ar and Xu have introduced a refined dual boundary complex \cite{deFernexKollarXu},
which is expected to be a sphere in the category of PL manifolds. That is much stronger than
just the statement about homotopy equivalence. See also Nicaise and Xu \cite{NicaiseXu}.
For character varieties, as well as for
more general cluster varieties and quiver moduli spaces, the Kontsevich-Soibelman wallcrossing
picture could be expected to be closely related to this PL sphere, more precisely the Kontsevich-Soibelman
chambers in the base of the Hitchin fibration should correspond to cells
in the PL sphere. One may witness this phenomenon by explicit calculation for $SL_2$ character varieties
of the projective line minus $4$ points, under certain special choices of conjugacy classes where the character
variety is the Cayley cubic.
Recently, Gross, Hacking, Keel and Kontsevich \cite{GHKK} building on work of Gross, Hacking and Keel
\cite{GHK}, have given an explicit combinatorial description of a boundary divisor for log-Calabi-Yau
cluster varieties. Their description depends on a choice
of toroidal cluster coordinate patches, and the combinatorics involve
toric geometry. It should in principle be possible to conclude from their construction that
${{\mathbb D} \partial} {\rm M}_B(S; C_1,\ldots , C_k)$ is a sphere, as is mentioned in
\cite[Remark 9.12]{GHKK}. Their technique, based in essence on the Fock-Goncharov coordinate systems,
should probably lead to a proof
in much greater generality than our Theorem \ref{main}.
\subsection{Varying the conjugacy classes}
In the present paper, we have been considering the conjugacy classes $C_1,\ldots , C_k$ as
fixed. As Deligne pointed out, it is certainly an interesting next question to ask what happens as they vary.
Nakajima discussed it long ago \cite{Nakajima}. This has many different aspects
and it would go beyond our current scope to enter into a detailed discussion.
I would just like to point out that the natural domain on which everything is defined
is the space of choices of $C_1,\ldots , C_k$ which satisfy the Kostov genericity Condition \ref{Kgen}.
This is an open subset of $\gggg _m ^k$, the complement of a divisor $K$ whose components
are defined by multiplicative monomial equalities. It therefore looks like a natural
multiplicative analogue
of the hyperplane arrangement complements which enter into the theory of higher dimensional
hypergeometric functions \cite{SchechtmanVarchenko}. The variation with parameters of the
moduli spaces ${\rm M}_B(S; C_1,\ldots , C_k)$ leads, at the very least, to some variations
of mixed Hodge structure over $\gggg _m ^k-K$ which undoubtedly have interesting properties.
\subsection{Acknowledgements}
I would like to thank the Fund for Mathematics at the Institute
for Advanced Study for support.
This work was also supported in part by the
ANR grant 933R03/13ANR002SRAR (Tofigrou).
It is a great pleasure to thank L. Katzarkov, A. Noll and P. Pandit for all of the discussions
surrounding our recent projects, which have provided a major motivation for the present work.
I would specially like to thank D. Halpern-Leistner, L. Migliorini and S. Payne for some
very helpful and productive discussions about this work at the Institute for Advanced Study.
They have notably been suggesting several approaches to making the reasoning more canonical,
and we hope to be able to say more about that in the future.
I would also like to thank G. Brunerie, G. Gousin, A. Ducros, M. Gross, J. Koll\'ar, A. Komyo,
F. Loray, N. Nekrasov, M.-H. Saito, J. Weitsman, and R. Wentworth
for interesting and informative
discussions and suggestions.
\subsection{Dedication}
It is a great honor to dedicate this work to Vadim Schechtman. Vadim's interests and work
have illuminated many
aspects of the intricate interplay between topology and geometry
in the de Rham theory of algebraic varieties. His work on hypergeometric functions
\cite{SchechtmanVarchenko} motivates our consideration of moduli spaces of local systems on
higher-dimensional varieties. His work with Hinich on dga's in several papers such as
\cite{HinichSchechtman} was one of the first instances of
homotopy methods for algebraic varieties. His
many works on
the chiral de Rham complex have motivated wide developments in the theory of
${\mathcal D}$-modules and local systems. The ideas generated by these threads have been suffused
throughout my own research for a long time.
\section{Dual boundary complexes}
\label{sec-dualdel}
Suppose $X$ is a smooth quasiprojective variety over ${\mathbb C}$. By resolution of singularities
we may choose a normal crossings compactification $X\subset \overline{X}$ whose complementary
divisor $D:= \overline{X}-X$ has simple normal crossings. In fact, we may assume that
it satisfies a condition which might be called {\em very simple normal crossings}: if $D=\bigcup _{i=1}^m
D_i$ is the decomposition into irreducible components, then we can ask that any multiple
intersection $D_{i_1}\cap \cdots \cap D_{i_k}$ be either empty or connected. If the compactification
satisfies this condition, then we obtain a simplicial complex denoted ${{\mathbb D} \partial} X$,
the dual complex ${\mathbb D}(D)$ of the divisor $D$,
defined as follows: there are $m$ vertices $e_1,\ldots , e_m$
of ${{\mathbb D} \partial} X$, in one-to-one correspondence
with the irreducible components $D_1,\ldots , D_m$ of $D$; and a simplex spanned by $e_{i_1},\ldots , e_{i_k}$
is contained in ${{\mathbb D} \partial} X$ if and only if $D_{i_1}\cap \cdots \cap D_{i_k}$ is nonempty.
This defines a simplicial complex, which could be considered as a simplicial set, but which for
the present purposes we shall identify with its topological realization which is the union of
the span of those simplices in ${\mathbb R}^m$ with $e_i$ being the standard basis vectors.
The simplicial complex ${{\mathbb D} \partial} X$ goes under several different terminologies and notations. We shall
call it the {\em dual boundary complex} of $X$. It contains the purely combinatorial information about
the divisor compactifying $X$. The main theorem about it is due to Danilov \cite{Danilov}:
\begin{theorem}[Danilov]
The homotopy type of ${{\mathbb D} \partial} X$ is independent of the choice of compactification.
\end{theorem}
The papers of Stepanov \cite{Stepanov1} \cite{Stepanov2}, concerning the analogous question
for singularities, started a lot of renewed activity.
Following these, a very instructive proof, which I first learned about from A. Ducros,
was given by Thuillier \cite{Thuillier}. He interpreted
the homotopy type of ${{\mathbb D} \partial} X$ as being equivalent to the homotopy type of the {\em Berkovich boundary} of
$X$, namely the set of points in the Berkovich analytic space
\cite{Berkovich}
associated to $X$ (over the trivially valued ground field),
which are valuations centered at points outside of $X$ itself.
Further refinements were given by Payne \cite{Payne} and de Fernex, Koll\'ar and Xu
\cite{deFernexKollarXu}. Payne showed that
the simple homotopy type of ${{\mathbb D} \partial} X$ was invariant, and proved
several properties crucial to our arguments below. De Fernex, Koll\'ar and Xu defined in
some cases a special choice of compactification leading to a boundary complex ${{\mathbb D} \partial} X$ whose PL
homeomorphism type is invariant. Nicaise and Xu show in parallel, in the case of a degeneration
at least, that the
essential skeleton of the Berkovich space is a pseudo-manifold \cite{NicaiseXu}.
Manon considers an embedding of ``outer space'' for character varieties,
into the Berkovich boundary \cite{Manon}.
These refined versions provide very interesting objects of study
but for the present paper we just use the homotopy type of ${{\mathbb D} \partial} X$.
Our goal will be to calculate the homotopy type of the dual boundary complex of some character varieties.
To this end, we describe here a few important reduction steps allowing us to modify a variety
while keeping its dual boundary complex the same.
One should be fairly careful when manipulating these
objects, as some seemingly insignificant changes in a variety could result in quite different boundary
complexes. For example, in the case of an isotrivial fibration it isn't enough to know the homotopy types
of the base and the fiber---essentially, the fibration should be
locally trivial in the Zariski rather than etale
topology in order for that kind of reasoning to work.
The space ${\bf Q}$ to be considered at the end of the paper provides an example
of this phenomenon.
In a similar vein, I don't know of a good notion of dual boundary complex for an Artin stack.
It is possible that a theory of Berkovich stacks could remedy this problem, but that seems
difficult. Payne has suggested, in answer to the problem of etale isotrivial fibrations,
to look at an equivariant notion of isotrivial fibration
which could give a natural group action on a dual boundary complex such that the
quotient would be meaningful. This type of theory might give an alternate approach to some of
our problems.
Let us get now to the basic properties of dual boundary complexes.
The first step is to note that if $U\subset X$ is an open subset of a smooth quasiprojective variety, then
we obtain a map ${{\mathbb D} \partial} X \rightarrow {{\mathbb D} \partial} U$.
\begin{lemma}[Payne]
If $X$ is an irreducible smooth projective variety and $Z\rightarrow X$ is obtained by blowing up a smooth
center, then it induces a homotopy equivalence on dual boundary complexes ${{\mathbb D} \partial} Z \sim {{\mathbb D} \partial} X$.
\end{lemma}
\begin{proof}
See \cite{Payne}, where more generally boundary complexes of singular varieties
are considered but we only need the smooth case.
\end{proof}
It follows from this lemma that
if $U\subset X$ is an open subset of a smooth quasiprojective variety, it induces a natural
map of dual boundary complexes ${{\mathbb D} \partial} X\rightarrow {{\mathbb D} \partial} U$.
Indeed, by resolving singularities and applying the previous lemma, we may assume
that $U$ is the complement of a divisor $B\subset X$, and furthermore that there is a very simple
normal crossings compactification $\overline{X}=X\cup D$ such that $B\cup D$ also has very simple
normal crossings. Then ${{\mathbb D} \partial} U$ is the dual complex of the divisor $B\cup D$, which
contains ${{\mathbb D} \partial} X$, the dual complex of $D$, as a subcomplex.
Following up on this idea, here is our main reduction lemma:
\begin{lemma}
\label{negligeable}
Suppose $U\subset X$ is an open subset of an irreducible smooth quasiprojective variety, obtained by removing
a smooth irreducible closed subvariety of smaller dimension $Y=X-U\subset X$. Suppose that ${{\mathbb D} \partial} Y \sim \ast$ is contractible.
Then the map ${{\mathbb D} \partial} X\rightarrow {{\mathbb D} \partial} U$ is a homotopy equivalence.
\end{lemma}
\begin{proof}
Let $X^{{\rm Bl}Y}$ be obtained by blowing up $Y$. From the previous lemma,
${{\mathbb D} \partial} X^{{\rm Bl}Y} \sim {{\mathbb D} \partial} X$.
Let ${\rm Bl}(Y)\subset X^{{\rm Bl}Y}$ be the inverse image of $Y$.
It is an irreducible smooth divisor, and $U$ is also the complement of
this divisor in $X^{{\rm Bl}Y}$. By resolution
of singularities we may choose a compactification
$\overline{X^{{\rm Bl}Y}}$ such that the boundary divisor $D$, plus the
closure $B:= \overline{{\rm Bl}(Y)}$,
form a very simple normal crossings divisor. This combined divisor is therefore a boundary
divisor for $U$, so
$$
{{\mathbb D} \partial} U \sim {\mathbb D}( D \cup B ).
$$
Now this bigger dual complex ${\mathbb D}( D \cup B)$ has one
more vertex than ${\mathbb D}(D)$, corresponding to the irreducible component
$B$. The star of this vertex is the cone over ${{\mathbb D} \partial} {\rm Bl}(Y) = {\mathbb D}(B\cap D)$.
The cone is attached to ${\mathbb D}(D)$ via its base ${\mathbb D}(B\cap D)$, to give
${\mathbb D}(B\cup D)$.
We would like to show that ${{\mathbb D} \partial} {\rm Bl}(Y)\sim \ast$. The first step is to notice that
${\rm Bl}(Y)\rightarrow Y$ is the projective space bundle associated to the vector bundle
$N_{Y/X}$ over $Y$.
We claim in general that if $V$ is a vector bundle over a smooth
quasiprojective variety $Y$, then ${{\mathbb D} \partial} ({\mathbb P}(V))\sim {{\mathbb D} \partial} (Y)$.
The proof of this claim is that there exists a normal crossings compactification
$\overline{Y}$ of $Y$ such that the vector bundle $V$ extends to a vector bundle
on $\overline{Y}$. That may be seen by choosing a surjection from the dual
of a direct sum of very ample line bundles to $V$, getting $V$ as the pullback of
a tautological bundle under a map from $Y$ to a Grassmannian. The compactification
may be chosen so that the map to the Grassmannian extends.
We obtain a compactification of ${\mathbb P}(V)$ wherein the boundary divisor is
a projective space bundle over the boundary divisor of $Y$, and with these choices
${{\mathbb D} \partial} {\mathbb P}(V)={{\mathbb D} \partial} Y$. It follows from Danilov's theorem that
for any other choice, there is a homotopy equivalence.
Back to our situation where ${\rm Bl}(Y)={\mathbb P}(N_{Y/X})$, and assuming that
${{\mathbb D} \partial} Y\sim \ast$, we conclude that ${{\mathbb D} \partial} {\rm Bl}(Y)\sim \ast$ too.
Therefore the dual complex ${\mathbb D}(B\cap D)$ is contractible.
Now ${{\mathbb D} \partial} U = {\mathbb D} (B\cup D)$ is obtained by attaching to ${\mathbb D}(D)$
the cone over ${\mathbb D}(B\cap D)$. As we have seen above ${\mathbb D}(B\cap D)$
is contractible, so coning it off doesn't change the homotopy type. This shows that
the map
$$
{{\mathbb D} \partial} X = {\mathbb D}(D) \rightarrow {\mathbb D} (B\cup D)= {{\mathbb D} \partial} U
$$
is a homotopy equivalence.
\end{proof}
In order to use this reduction, we need a criterion for the condition ${{\mathbb D} \partial} Y \sim \ast$.
Note first the following general property of compatibility with products.
\begin{lemma}[Payne]
\label{join}
Suppose $X$ and $Y$ are smooth quasiprojective varieties. Then ${{\mathbb D} \partial} (X\times Y)$ is the
{\em join} of ${{\mathbb D} \partial} (X)$ and ${{\mathbb D} \partial} (Y)$, in other words we have a homotopy cocartesian
diagram of spaces
$$
\begin{array}{ccc}
{{\mathbb D} \partial} (X)\times {{\mathbb D} \partial} (Y) & \rightarrow & {{\mathbb D} \partial} (Y) \\
\downarrow && \downarrow \\
{{\mathbb D} \partial} (X) & \rightarrow & {{\mathbb D} \partial} (X\times Y)\, .
\end{array}
$$
\end{lemma}
\begin{proof}
This is \cite[Lemma 6.2]{Payne}.
\end{proof}
\begin{corollary}
\label{affine}
Suppose $Y$ is a smooth quasiprojective variety. Then ${{\mathbb D} \partial} ({\mathbb A} ^1\times Y) \sim \ast$.
\end{corollary}
\begin{proof}
Setting $X:= {\mathbb A}^1$ in the previous lemma, we have ${{\mathbb D} \partial} (X)\sim \ast$, so in the
homotopy cocartesian diagram the top arrow is an equivalence and the left vertical arrow
is the projection to $\ast$; therefore the homotopy pushout is also $\ast$.
\end{proof}
\begin{proposition}
\label{decomp}
Suppose $U\subset X$ is a nonempty open subset of a smooth irreducible
quasiprojective variety, and suppose
the complement $Z:= X-U$ has a decomposition into locally closed subsets $Z_j$ such that
$Z_j\cong {\mathbb A}^1\times Y_j$. Suppose that this decomposition can be ordered into
a stratification, that is to say there is a total order on the indices such that
$\bigcup _{j\leq a}Z_j$ is closed for any $a$. Then ${{\mathbb D} \partial} (X)\sim {{\mathbb D} \partial} (U)$.
\end{proposition}
\begin{proof}
We first prove the proposition under the additional hypothesis that the $Y_j$ are smooth.
Proceed by induction on the number of pieces in the decomposition. Let $Z_0$ be the
lowest piece in the ordering. The ordering hypothesis says that $Z_0$ is closed in $X$.
Let $X':= X-Z_0$. Now $U\subset X'$ is the complement
of a subset $Z' = \bigcup _{j>0}Z_j$ decomposing in the same way, with a smaller number of
pieces, so by induction we know that
${{\mathbb D} \partial} (X')\sim {{\mathbb D} \partial} (U)$.
By hypothesis $Z_0\cong {\mathbb A} ^1\times Y_0$.
Lemma \ref{affine} tells us that ${{\mathbb D} \partial} (Z_0)\sim \ast$ and now Lemma \ref{negligeable}
tells us that ${{\mathbb D} \partial} (X)\sim {{\mathbb D} \partial} (X')$, so ${{\mathbb D} \partial} (X)\sim {{\mathbb D} \partial} (U)$.
This completes the proof of the proposition under the hypothesis that $Y_j$ are smooth.
Now we prove the proposition in general. Proceed as in the first paragraph of the proof with
the same notations: by induction we may assume that ${{\mathbb D} \partial} (X')\sim {{\mathbb D} \partial} (U)$
where $X'=X-Z_0$ such that $Z_0$ is closed and isomorphic to ${\mathbb A}^1\times Y_0$.
Choose a totally ordered stratification of $Y_0$ by smooth locally closed subvarieties
$Y_{0,i}$. Set $Z_{0,i}:= {\mathbb A}^1\times Y_{0,i}$. This collection of subvarieties
of $X$ now satisfies the hypotheses of the proposition and the pieces are smooth.
Their union is $Z_0$ and its complement in $X$ is the open subset $X'$. Thus,
the first case of the proposition treated above tells us that ${{\mathbb D} \partial} (X)\sim {{\mathbb D} \partial} (X')$.
It follows that ${{\mathbb D} \partial} (X) \sim {{\mathbb D} \partial} (U)$, completing the proof.
\end{proof}
\noindent
{\em Caution:} A simple example shows that the ordering condition in the statement of the
proposition is necessary. Suppose $X$ is a smooth
projective surface containing two projective lines $D_1,D_2\subset X$ such that their intersection
$D_1\cap D_2 = \{ p_1,p_2\}$ consists of two distinct points. Then we could look at
$Z_1=D_1-\{ p_1\}$ and $Z_2=D_2-\{ p_2\}$. Both $Z_1$ and $Z_2$ are affine lines.
Setting $U:= X-(D_1\cup D_2)=X-(Z_1\cup Z_2)$ we get an open set which is the complement
of a subset $Z=Z_1\sqcup Z_2$ decomposing into two affine lines; but ${{\mathbb D} \partial} X=\emptyset$ whereas
${{\mathbb D} \partial} U \sim S^1$, since the dual complex of $D_1\cup D_2$ consists of two vertices
joined by two edges, one for each intersection point, forming a circle.
\section{Hybrid moduli stacks of local systems}
\label{sec-hybrid}
The moduli space of local systems is different from the moduli stack, even at the points
corresponding to irreducible local systems. Indeed, the open substack of the moduli stack
parametrizing irreducible $GL_r$-local systems is a $\gggg _m$-gerbe over the corresponding open subset
of the moduli space. Even by considering $SL_r$-local systems we can only reduce this to being a
$\mu _r$-gerbe.
However, it is usual and convenient to consider the moduli space instead.
In this section, we mention a construction allowing us to define what we
call\footnote{This is not new but I don't remember where it is from, so
no claim is made to originality.}
a {\em hybrid moduli stack}
in which the central action is divided out, so that at irreducible points it agrees with
the moduli space.
Our initial discussion will use some simple $2$-stacks; however, the
reader wishing to avoid these may refer to Proposition \ref{alternate},
which gives an equivalent definition in
more concrete terms.
Consider a reductive group $G$ with center $Z$.
The fibration sequence of $1$-stacks
$$
BZ \rightarrow BG \rightarrow B(G/Z)
$$
may be transformed into the cartesian diagram
\begin{equation}
\label{kzdiag}
\begin{array}{ccc}
BG &\rightarrow & B(G/Z) \\
\downarrow & & \downarrow\\
\ast & \rightarrow & K(Z,2)
\end{array}
\end{equation}
of Artin $2$-stacks on the site ${\rm Aff}^{ft, et}_{{\mathbb C}}$ of affine schemes
of finite type over ${\mathbb C}$ with the etale topology.
Suppose now that $S$ is a space or higher stack. Then we may consider the relative
mapping stack
$$
M(S, G):= \underline{Hom}(S, B(G/Z)/K(Z,2)) \rightarrow K(Z,2).
$$
It may be defined as the fiber product forming the middle arrow in the following
diagram where both squares are cartesian:
$$
\begin{array}{ccccc}
\underline{Hom}(S,BG)& \rightarrow & M(S,G) &\rightarrow & \underline{Hom}(S,B(G/Z)) \\
\downarrow & & \downarrow & & \downarrow\\
\ast & \rightarrow & K(Z,2) & \rightarrow & \underline{Hom}(S,K(Z,2))
\end{array} .
$$
Here the bottom right map is the ``constant along $S$'' construction induced by pullback
along $S\rightarrow \ast$.
The bottom left arrow $\ast \rightarrow K(Z,2)$ is the universal $Z$-gerbe, so its pullback
on the upper right is again a $Z$-gerbe. We have thus constructed a stack
$M(S,G)$ over which $\underline{Hom}(S,BG)$ is a $Z$-gerbe. From the definition it is
{\em a priori} a $2$-stack, and indeed $M(\emptyset , G)=K(Z,2)$, but the following
alternate characterization tells us that $M(S,G)$ is usually an ordinary $1$-stack.
\begin{proposition}
\label{alternate}
Suppose $S$ is a nonempty connected CW-complex with basepoint $x$. Then
the hybrid moduli stack may be expressed as the stack-theoretical quotient
$$
M(S,G) = {\rm Rep}(\pi _1(S,x), G) {/\!\! /} (G/Z) .
$$
In particular, it is an Artin $1$-stack.
\end{proposition}
\begin{proof}
The representation space may be viewed as a mapping stack
$$
{\rm Rep}(\pi _1(S,x), G) = \underline{Hom}((S,x), (BG,o)).
$$
Consider the big diagram
$$
\begin{array}{ccccc}
\underline{Hom}((S,x), (BG,o)) & \rightarrow & \underline{Hom}((S,x), (B(G/Z),o)) & & \\
\downarrow & & \downarrow & & \\
\ast & \rightarrow & \underline{Hom}((S,x), (K(Z,2),o)) & \rightarrow & \ast \\
\downarrow & & \downarrow & & \downarrow \\
K(Z,2) & \rightarrow & \underline{Hom}(S, K(Z,2)) & \rightarrow & K(Z,2)
\end{array}
$$
where the bottom right map is evaluation at $x$. The pointed mapping $2$-stack in the
middle is defined by the condition that the bottom right square is homotopy cartesian.
The composition along the bottom is the identity, so if we take the homotopy fiber
product on the bottom left, the full bottom rectangle is a pullback too so that homotopy
fiber product would be $\ast$ as is written in the diagram. In other words, the bottom left
square is also homotopy cartesian. The middle horizontal map on the left sends the
point to the map $S\rightarrow o \hookrightarrow K(Z,2)$, indeed it is constant along $S$ because
it comes from pullback of the bottom left map, and its value at $x$ is $o$ because of the
right vertical map. Now, the upper left square is homotopy cartesian, being just the result
of applying the pointed mapping stack to the diagram \eqref{kzdiag}. It follows that the
whole left rectangle is homotopy cartesian.
Consider, on the other hand, the diagram
$$
\begin{array}{ccccc}
\underline{Hom}((S,x), (BG,o)) & \rightarrow & \underline{Hom}((S,x), (B(G/Z),o)) &
\rightarrow & \ast \\
\downarrow & & \downarrow & & \downarrow \\
M(S,G) & \rightarrow & \underline{Hom}(S, B(G/Z)) & \rightarrow & B(G/Z) \\
\downarrow & & \downarrow & & \\
K(Z,2) & \rightarrow & \underline{Hom}(S, K(Z,2)) & & .
\end{array}
$$
The bottom square is homotopy-cartesian by the definition of $M(S,G)$.
We proved in the previous paragraph that
the full left rectangle is homotopy cartesian. In this $2$-stack situation note that
a commutative rectangle constitutes a piece of data rather than just a property. In this case,
these data for the left squares are obtained by just considering the equivalence found in
the previous paragraph, from $\underline{Hom}((S,x), (BG,o))$ to the homotopy pullback
in the full left rectangle which is the same as the composition of the homotopy pullbacks
in the two left squares. In particular, the upper left square is homotopy-cartesian.
It now follows that the upper full rectangle is homotopy-cartesian. That exactly says
that we have an action of $G/Z$ on $\underline{Hom}((S,x), (BG,o))= {\rm Rep}(\pi _1(S,x), G)$
and $M(S,G)$ is the quotient.
\end{proof}
The hybrid moduli stacks also satisfy the same glueing or factorization property as the
usual ones.
\begin{lemma}
\label{glueing}
Suppose $S=S_1\cup S_2$ with $S_{12}:= S_1\cap S_2$ excisive. Then
$$
M(S,G) \cong M(S_1,G)\times _{M(S_{12},G)} M(S_2,G).
$$
\end{lemma}
\begin{proof}
The mapping stacks entering into the definition of $M(S,G)$ as a homotopy
pullback, satisfy this
glueing property. Notice that this is true even for the constant functor which
associates to any $S$ the stack $K(Z,2)$. The homotopy pullback therefore also satisfies
the glueing property since fiber products commute with other fiber products.
\end{proof}
Suppose $G=GL_r$
so $Z=\gggg _m$ and $G/Z = PGL_r$, and suppose $S$ is a connected CW-complex.
Let
$$
\underline{Hom}(S,BGL_r)^{\rm irr} \subset \underline{Hom}(S,BGL_r)
$$
denote the open substack of irreducible local systems. It is classical that
the stack $\underline{Hom}(S,BGL_r)$ has a {\em coarse moduli space} ${\rm M}_B (S,GL_r)$,
and that the open substack
$\underline{Hom}(S,BGL_r)^{\rm irr}$ is a $\gggg _m$-gerbe over the corresponding open subset
of the coarse moduli space ${\rm M}_B(S, G)^{\rm irr}$.
\begin{proposition}
In the situation of the previous paragraph,
we have a map
$$
M(S,GL_r)\rightarrow {\rm M}_B (S,GL_r)
$$
which restricts to an isomorphism
$$
M(S,GL_r)^{\rm irr}\cong {\rm M}_B (S,GL_r)^{\rm irr}
$$
between
the open subsets parametrizing
irreducible local systems.
\end{proposition}
The same holds for $G=SL_r$.
\begin{lemma}
The determinant map $GL_r \stackrel{{\rm det}}{\rightarrow} \gggg _m$ induces a cartesian diagram
$$
\begin{array}{ccc}
M(S, SL_r) &\rightarrow & M(S,GL_r) \\
\downarrow & & \downarrow\\
\ast & \rightarrow & M(S, \gggg _m )
\end{array}
$$
which essentially says that $M(S,SL_r)$ is the substack of $M(S,GL_r)$ parametrizing local systems
of trivial determinant. Note that $M(S, \gggg _m )$ is isomorphic to the quasiprojective variety
${\rm Hom}(H_1(S), \gggg _m )$.
\end{lemma}
In what follows, we shall use these stacks $M(S,GL_r)$, which we
call
{\em hybrid moduli stacks}, as convenient intermediaries between
the moduli stacks of local systems and their coarse moduli spaces.
\section{Boundary conditions}
\label{sec-local}
Let $S$ denote a $2$-sphere with $k$ open disks removed. It has $k$ boundary
circles denoted $\xi _1,\ldots , \xi _k\subset S$ and
$$
\partial S = \xi _1\sqcup \cdots \sqcup \xi _k .
$$
From now on we consider rank $2$ local systems on this surface $S$.
Fix complex numbers $c_1,\ldots , c_k$ all different from $0$, $1$ or $-1$. Let
$$
C_i := \left\{ P \left(
\begin{array}{cc}
c_i & 0 \\
0 & c_i ^{-1}
\end{array}
\right) P^{-1} \right\}
$$
denote the conjugacy class of matrices with eigenvalues $c_i, c_i^{-1}$.
Consider the hybrid moduli stack $M(S, GL_2)$ constructed above,
and let
$$
M(S;C_{\cdot}) \subset M(S,GL_2)
$$
denote the closed substack consisting of local systems such that the monodromy transformation
around $\xi _i$ is in the conjugacy class $C_i$. See \cite{Letellier}.
If we choose a basepoint $x\in S$ and paths $\gamma _i$ going from $x$ out to the boundary
circles, around once, and then back to $x$, then $\pi _1(S,x)$ is generated by the $\gamma _i$
subject to the relation that their product is the identity.
Therefore, the moduli stack of framed
local systems is the affine variety
$$
\underline{Hom}((S,x), (BGL_2, o)) =
{\rm Rep}(\pi _1(S,x), GL_2)
$$
$$
= \{ (A_1,\ldots , A_k)\in (GL_2)^k \mbox{ s.t. } A_1\cdots A_k =1\} .
$$
The unframed moduli stack is the stack-theoretical quotient
$$
\underline{Hom}(S, BGL_2) = {\rm Rep}(\pi _1(S,x), GL_2) {/\!\! /} GL_2
$$
by the action of simultaneous conjugation.
The center $\gggg _m \subset GL_2$ acts trivially on ${\rm Rep}(\pi _1(S,x), GL_2)$ so
the action of $GL_2$ there
factors through an action of $PGL_2$.
Proposition \ref{alternate} may be restated as
\begin{lemma}
The hybrid moduli stack $M(S,GL_2)$ may be described as
the stack-theoretical quotient
$$
M(S,GL_2) =
{\rm Rep}(\pi _1(S,x), GL_2) {/\!\! /} PGL_2 .
$$
\end{lemma}
Let ${\rm Rep}(\pi _1(S,x), GL_2; C_{\cdot})\subset {\rm Rep}(\pi _1(S,x), GL_2)$
denote the closed subscheme of representations which send $\gamma _i$ to the conjugacy
class $C_i$. These conditions are equivalent to the equations ${\rm Tr}(\rho (\gamma _i))=c_i+c_i^{-1}$.
We have
$$
{\rm Rep}(\pi _1(S,x), GL_2; C_{\cdot}) =
\{ (A_1,\ldots , A_k)\, \mbox{s.t.}\, A_i\in C_i \mbox{ and } A_1\cdots A_k =1\} .
$$
\begin{corollary}
\label{hybquot}
The hybrid moduli stack with fixed conjugacy classes is given by
$$
M(S; C_{\cdot} ) = {\rm Rep}(\pi _1(S,x), GL_2; C_{\cdot}) {/\!\! /} PGL_2
$$
$$
=\{ (A_1,\ldots , A_k) \mbox{ s.t. } A_i\in C_i \mbox{ and } A_1\cdots A_k =1\} {/\!\! /} PGL_2.
$$
It is also isomorphic to the stack one would have gotten by using the group $SL_2$ rather than
$GL_2$.
\end{corollary}
\begin{proof}
Our conjugacy classes have been defined as having determinant one. Since the $\gamma _i$ generate
the fundamental group, if the $\rho (\gamma _i)$ have determinant one then the representation
$\rho$ goes into $SL_2$. As $PGL_2=PSL_2$, the hybrid moduli stack for $GL_2$ is the same as
for $SL_2$.
\end{proof}
Recall the following Kostov-genericity condition \cite{Kostov} on the choice of the numbers $c_i$.
\begin{condition}
\label{Kgen}
For any choice of $\epsilon _1,\ldots , \epsilon _k\in \{ 1,-1\}$ the product
$$
c_1^{\epsilon _1} \cdots c_k^{\epsilon _k}
$$
is not equal to $1$.
\end{condition}
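To illustrate Condition \ref{Kgen} (this example is included only for orientation): for $k=3$ it excludes the eight relations
$$
c_1^{\epsilon _1}c_2^{\epsilon _2}c_3^{\epsilon _3}=1, \;\;\; \epsilon _i \in \{ 1,-1\} ,
$$
which amounts to requiring
$$
c_3\not\in \{ c_1c_2,\; c_1c_2^{-1},\; c_1^{-1}c_2,\; c_1^{-1}c_2^{-1}\} ,
$$
so for fixed $c_1,c_2$ only finitely many values of $c_3$ are excluded.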
The following basic lemma has been observed by Kostov and others.
\begin{lemma}
\label{Kstab}
If Condition \ref{Kgen} is satisfied then any
representation in
${\rm Rep}(\pi _1(S,x), GL_2; C_{\cdot})$ is irreducible.
In particular, the automorphism group of the corresponding $GL_2$ local system is the central
$\gggg _m$.
\end{lemma}
The set of $(c_1,\ldots , c_k)$ satisfying this condition is a
nonempty open subset of $(\gggg _m - \{ 1,-1\} )^k$. We also speak of the same condition for the
sequence of conjugacy classes $C_{\cdot}$.
\begin{proposition}
\label{variety}
Suppose $C_{\cdot}$ satisfy Condition \ref{Kgen}. The hybrid moduli stack
$M(S; C_{\cdot} )$ is an irreducible smooth affine variety. It is
equal to the coarse moduli space, which is in fact a fine moduli space,
${\rm M}_B(S; C_1,\ldots , C_k)$
of local systems with our given conjugacy classes.
\end{proposition}
\begin{proof}
The representation space ${\rm Rep}(\pi _1(S,x), GL_2; C_{\cdot})$ is an affine variety,
call it ${\rm Spec}(A)$,
on which the group $PGL_2$ acts.
The moduli space is by definition
$$
{\rm M}_B(S; C_1,\ldots , C_k):= {\rm Spec}(A^{PGL_2}).
$$
By Lemma \ref{Kstab} and using the hypothesis \ref{Kgen}
it follows that the stabilizers of the action are trivial.
Luna's etale slice theorem
(see \cite{Drezet}) implies that the quotient map
$$
{\rm Spec}(A)\rightarrow {\rm Spec}(A^{PGL_2})
$$
is an etale fiber bundle with fiber $PGL_2$. Therefore this quotient is also the
stack-theoretical quotient:
$$
{\rm Spec}(A^{PGL_2})= {\rm Spec}(A){/\!\! /} PGL_2 .
$$
By Corollary \ref{hybquot} that stack-theoretical quotient is $M(S;C_{\cdot})$, completing
the identification between the hybrid moduli stack and the moduli space
required for the proposition.
Smoothness of the moduli space has
been noted in many places, see for example \cite{HauselLetellierRodriguez} \cite{Letellier}.
Irreducibility is proven in a general context in \cite{HauselLetellierRodriguez2} \cite{Letellier}
as a consequence of computations of $E$-polynomials, and a different proof is given in
\cite{ASoibelman} using moduli stacks of parabolic bundles. In our case
irreducibility could also be obtained by including dimension estimates for the subvarieties which will be
removed in the course of our overall discussion.
\end{proof}
This proposition says that our hybrid moduli stack $M(S; C_{\cdot} )$ is the same as the
usual moduli space. A word of caution is necessary: we shall also be using
$M(S',C_{\cdot})$ for subsets $S'\subset S$, and those are in general
stacks rather than schemes, for example when Condition \ref{Kgen} doesn't hold over $S'$.
\section{Interior conditions and factorization}
\label{sec-interior}
We now define some conditions concerning what happens in the interior of the surface $S$.
These conditions will serve to define a stratification of $M(S; C_{\cdot} )$.
The biggest open stratum denoted $M'$, treated in detail in Section \ref{sec-main},
turns out to be the main piece, contributing the essential structure of the
dual boundary complex. The smaller strata will be negligible for the dual boundary complex,
in view of Lemmas \ref{negligeable} and \ref{affine} as combined in Proposition \ref{decomp}.
Divide $S$ into closed regions denoted $S_2,\ldots , S_{k-1}$
such that $S_{i}\cap S_{i+1} = \rho _i$ is a circle for $2\leq i \leq k-2$, and
the regions are otherwise disjoint.
We assume that $S_i$ encloses the boundary circle $\xi _i$, so it is a $3$-holed
sphere with boundary circles $\rho _{i-1}$, $\xi _i$ and $\rho _i$.
The orientation of $\rho _{i-1}$ is reversed when it is viewed from inside
$S_i$. The end piece $S_2$ has boundary circles $\xi _1$, $\xi _2$ and $\rho _2$ while
the end piece $S_{k-1}$ has boundary circles $\rho _{k-2}$, $\xi _{k-1}$ and $\xi _k$.
This is a ``pair of pants'' decomposition.
Factorization properties, related to
chiral algebras, cf.\ \cite{FrancisGaitsgory} \cite{FrenkelBenZvi}, are a kind of descent.
We will be applying the factorization property of Theorem \ref{factorization} to the
decomposition of our surface into pieces $S_i$. This classical technique in geometric topology
was also used extensively in the study of the Verlinde formula. The factorization
is often viewed as coming from
a degeneration of the curve into a union of rational lines with three marked points.
For our argument it will be important to consider strata of the moduli space defined by
fixing additional combinatorial data with respect to our decomposition.
To this end, let us consider some nonempty subsets $\sigma _i \subset \{ 0,1\}$
for $i=2,\ldots , k-1$, and conjugacy-invariant subsets $G_2,\ldots , G_{k-2}\subset SL_2$.
We denote by $\alpha = (\sigma _2,\ldots , \sigma _{k-1}; G_2,\ldots , G_{k-2})$
this collection of data. The subsets $G_i$ will impose conditions on
the monodromy around the circles $\rho _i$, while the $\sigma _i$ will correspond to
the following {\em stability condition} on the restrictions of our local system to
$S_i$. Recall that a local system $V\in M(S;C_{\cdot})$ is required to have monodromy
around $\xi _i$ with eigenvalues $c_i$ and $c_i^{-1}$. We are making a choice of orientation
of these boundary circles, and $c_i\neq c_i^{-1}$ by hypothesis,
so the $c_i^{-1}$ eigenspace is a well-defined rank $1$ subspace of $V|_{\xi _i}$.
\begin{definition}
\label{stab-def}
We say that a local system $V|_{S_i}$ on $S_i$, satisfying the conjugacy class
condition, is {\em unstable} if there exists a rank $1$ subsystem
$L\subset V|_{S_i}$ such that the monodromy of $L$ around $\xi _i$ is $c_i^{-1}$. Say that
$V|_{S_i}$ is {\em stable}
otherwise.
\end{definition}
An irreducible local system $V|_{S_i}$
is automatically stable; one which decomposes as a direct sum is automatically unstable.
If $V|_{S_i}$ is a nontrivial extension with a unique rank $1$ subsystem $L$, then
$V|_{S_i}$ is unstable if $L|_{\xi _i}$ is the $c_i^{-1}$-eigenspace of the monodromy,
whereas it is stable if $L|_{\xi _i}$ is the $c_i$-eigenspace. We will later express these
conditions more concretely in terms of vanishing or nonvanishing of a certain matrix
coefficient.
\begin{definition}
Let $M^{\alpha}(S; C_{\cdot})\subset M(S; C_{\cdot})$ denote the locally closed substack
of local systems $V$ satisfying the following conditions:
\begin{itemize}
\item
if $\sigma _i = \{ 0\}$ then $V|_{S_i}$ is required to be unstable; if $\sigma _i = \{ 1\}$ then
it is required to be stable; and if $\sigma _i = \{ 0,1\}$ then there is no condition; and
\item
the monodromy of $V$ around $\rho _i$ should lie in $G_i$.
\end{itemize}
Consider a subset $S'\subset S$ made up of some or all of the $S_i$ or the circles.
Let $M ^{\alpha} (S';C_{\cdot})$ denote the moduli stack of local systems on $S'$ satisfying the above
conditions where they make sense (that is, for the restrictions to those subsets which
are in $S'$).
\end{definition}
In the case of the inner boundary circles we may just use the notation
$M^{\alpha} (\rho _i)$ since the choices of conjugacy classes $C_i$, corresponding to the
circles $\xi _i$, don't intervene.
In the case of $S_i$, only the conjugacy class $C_i$ matters so we may use the notation
$M ^{\alpha} (S_i;C_i)$.
Suppose $S'\subset S$ is connected and $x\in S'$. Let
$$
{\rm Rep}^{\alpha}(\pi _1(S',x), GL_2; C_{\cdot}) \subset
{\rm Rep}(\pi _1(S',x), GL_2)
$$
denote the locally closed subscheme of representations which
satisfy conjugacy class conditions corresponding to $C_{\cdot}$
and the conditions corresponding to $\alpha$, that is
to say whose corresponding local systems are in $M ^{\alpha} (S';C_{\cdot})$.
Proposition \ref{alternate} says:
\begin{lemma}
The simultaneous conjugation action of $GL_2$ on the space of representations
${\rm Rep}^{\alpha}(\pi _1(S',x), GL_2; C_{\cdot})$ factors through an action of $PGL_2$ and
$$
M ^{\alpha} (S';C_{\cdot})={\rm Rep}^{\alpha}(\pi _1(S',x), GL_2; C_{\cdot}){/\!\! /} PGL_2
$$
is the stack-theoretical quotient.
\end{lemma}
The hybrid moduli stacks allow us to state a glueing or {\em factorization property},
expressing the fact that a local system $L$ on $S$ may be viewed as being obtained
by glueing together its pieces $L|_{S_i}$ along the circles $\rho _i$.
\begin{theorem}
\label{factorization}
We have the following expression using homotopy fiber products of stacks:
$$
M ^{\alpha} (S;C_{\cdot})= M^{\alpha} (S_2;C_{\cdot})\times _{M^{\alpha} (\rho _i)} M^{\alpha} (S_3
;C_{\cdot})
\cdots \times _{M^{\alpha} (\rho _{k-2})}
M^{\alpha} (S_{k-1};C_{\cdot}).
$$
\end{theorem}
\begin{proof}
Apply Lemma \ref{glueing}.
\end{proof}
\begin{corollary}
Suppose the requirements given for the boundary pieces of $\partial S'$ (which are
circles either of the form $\xi _i$ or $\rho _i$) satisfy Condition \ref{Kgen}
for $S'$. Then
the moduli stack $M^{\alpha} (S';C_{\cdot})$ is in fact a quasiprojective variety.
\end{corollary}
\begin{proof}
This follows from Proposition \ref{variety} applied to $S'$.
\end{proof}
\section{Universal objects}
\label{sec-universal}
Let us return for the moment to the general situation of Section \ref{sec-hybrid},
of a space $S$ and a group $G$.
If $x\in S$ is a basepoint, then we obtain a principal $(G/Z)$-bundle over
$\underline{Hom}(S,B(G/Z))$, and this pulls back to a principal $(G/Z)$-bundle
denoted $F(S,x)\rightarrow M(S,G)$. It may be viewed as the bundle of frames
for the local systems, up to action of the center $Z$.
If $y\in S$ is another point, and $\gamma$ is a path from $x$ to $y$ then it
gives an isomorphism of principal bundles $F(S,x)\cong F(S,y)$ over $M(S,G)$.
In particular, $\pi _1(S,x)$ acts on $F(S,x)$ in a tautological representation.
Suppose $S=S_1\cup S_2$ such that the intersection $S_{12}=S_1\cap S_2$ is connected.
Choose a basepoint $x\in S_{12}$. This yields principal $(G/Z)$-bundles
$F(S_1,x)$ and $F(S_2,x)$ over $M(S_1,G)$ and $M(S_2,G)$ respectively.
The fundamental group $\pi _1(S_{12},x)$ acts on both of these.
We may restate the glueing property of Lemma \ref{glueing}
in the following way.
\begin{proposition}
\label{principal}
We have an isomorphism of stacks lying over the product
$M(S_1,G)\times M(S_2,G)$,
$$
M(S,G) \cong {\rm Iso}_{\pi _1(S_{12},x)-G/Z}(p_1^{\ast} F(S_1,x),
p_2^{\ast}F(S_2,x))
$$
where on the right is the stack of isomorphisms, relative to $M(S_1,G)\times M(S_2,G)$,
of principal $G/Z$-bundles provided with
actions of $\pi _1(S_{12},x)$.
\end{proposition}
Return now to the notation from the immediately preceding sections. There are several ways of
dividing our surface $S$ into two or more pieces, several of which shall be used in the
next section.
Choose basepoints $x_i$ in the interior of $S_i$, and $s_i$ on the boundary circles
$\rho _i$. Connect them by paths, nicely arranged with respect to the other paths
$\gamma _i$. Then, over any subset $S'$ containing a basepoint $x_i$, we obtain
a principal $PGL_2$-bundle $F(S',x_i)\rightarrow M(S',C_{\cdot})$, and the same for $s_i$. Our paths,
when in $S'$, give isomorphisms between these principal bundles.
It will be helpful to think of the description of glueing given by Proposition \ref{principal},
using these basepoints and paths. The following local triviality property is useful.
\begin{lemma}
\label{lem62}
Suppose $S'$ has at most one boundary circle of the form $\rho_i$,
and suppose that the conjugacy classes determining the moduli problem on $M^{\alpha}(S',C_{\cdot})$
satisfy Condition \ref{Kgen},
and suppose that $x\in S'$ is one of our basepoints.
Then the principal $PGL_2$-bundle $F(S',x)\rightarrow
M^{\alpha}(S',C_{\cdot})$
is locally trivial in the Zariski topology of the moduli space $M^{\alpha}(S',C_{\cdot})$, and Zariski
locally
$F(S',x)$
may be viewed as the projective frame bundle of a rank $2$ vector bundle.
\end{lemma}
\begin{proof}
Consider a choice of three loops $(\gamma _{j_1},\gamma _{j_2},\gamma _{j_3})$
and a choice of one of the two eigenvalues of the corresponding
conjugacy class $C_{j_1}$, $C_{j_2}$, or $C_{j_3}$
for each of them. This gives three rank $1$ eigenspaces in $V_x$
for any local system $V$. Over the Zariski open subset of the moduli space where
these three subspaces are distinct, they provide the required projective frame.
Notice that the eigenspaces of the $\gamma _j$ cannot all be aligned since
these loops generate the fundamental group of $S'$, by the hypothesis that there is at most
one other boundary circle $\rho _i$. Therefore, as our choices
of triples of loops and eigenvalues range over the possible ones, these Zariski open subsets
cover the moduli space. We get the required frames. A framed $PGL_2$-bundle comes from a vector bundle
so $F(S',x)$ locally comes from a $GL_2$-bundle.
\end{proof}
\section{Splitting along the circle $\rho _i$}
\label{sec-splitting}
In this section we consider one of the circles $\rho_i$ which divides $S$ into two pieces.
Let
$$
S_{<i}:=\bigcup _{j<i}S_j\; , \;\;\;\;
S_{>i}:=\bigcup _{j>i}S_j ,
$$
and similarly define $S_{\leq i}$ and $S_{\geq i}$.
We have the decomposition
$$
S=S_{\leq i} \cup S_{>i}
$$
into two pieces intersecting along the circle $\rho _i$.
Thus,
$$
M^{\alpha}(S;C_{\cdot}) = M^{\alpha}(S_{\leq i};C_{\cdot})
\times _{M^{\alpha}(\rho _i)} M^{\alpha}(S_{>i};C_{\cdot}).
$$
This factorization will allow us to analyze strata where $G_i$ is
a unipotent or trivial conjugacy class. The following condition will be in effect:
\begin{condition}
\label{verygen}
We assume that the sequence of conjugacy classes $C_1,\ldots , C_k$ is {\em very
generic}, meaning that for any $i$ the partial sequences $C_1,\ldots , C_i$ and
$C_i,\ldots , C_k$ satisfy Condition \ref{Kgen}, and they also satisfy that condition
if we add the scalar matrix $-1$.
That is to say, no product of eigenvalues or their inverses should be
equal to either $1$ or $-1$.
\end{condition}
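In symbols, writing $c_j, c_j^{-1}$ for the eigenvalues of $C_j$, the relations ruled out by
this condition for the partial sequence $C_1,\ldots , C_i$ may be written schematically
(the precise formulation being that of Condition \ref{Kgen}) as
$$
c_1^{\epsilon _1}\cdots c_i^{\epsilon _i} \neq 1, -1
\;\;\;\; \mbox{ for all choices of signs } \epsilon _j\in \{ +1,-1\} ,
$$
and similarly for the partial sequences $C_i,\ldots , C_k$.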
Suppose that $G_i=\{ 1\}$. Then $M^{\alpha} (\rho _i)=B(PGL_2)$.
On the other hand, Condition \ref{verygen} means that
the sequences of conjugacy classes defining the
moduli problems on $S_{\leq i}$ and $S_{>i}$ themselves satisfy Condition \ref{Kgen}.
Therefore Proposition \ref{variety} applies saying that the moduli stacks
$M^{\alpha}(S_{\leq i};C_{\cdot})$ and $M^{\alpha}(S_{>i};C_{\cdot})$ exist as quasiprojective
varieties.
The projective frame bundles over a basepoint
of $\rho _i$ are principal $PGL_2$-bundles denoted
$$
F_{\leq i}\rightarrow M^{\alpha} (S_{\leq i};C_{\cdot})
$$
and
$$
F_{>i}\rightarrow M^{\alpha} (S_{>i};C_{\cdot}).
$$
These principal bundles may be viewed as given by the maps
$$
M^{\alpha} (S_{\leq i};C_{\cdot}) \rightarrow M^{\alpha} (\rho _i)=
B(PGL_2) \leftarrow M^{\alpha} (S_{>i};C_{\cdot}).
$$
These principal bundles are locally trivial in the Zariski topology
by Lemma \ref{lem62}.
The principal bundle
description of the moduli space in Proposition \ref{principal} now says
$$
M^{\alpha} (S;C_{\cdot}) =
{\rm Iso} _{M^{\alpha} (S_{\leq i};C_{\cdot})\times M^{\alpha} (S_{>i};C_{\cdot})}
(p_1^{\ast} (F_{\leq i}), p_2^{\ast} (F_{>i}) ) .
$$
The bundle of isomorphisms between our two principal bundles is a fiber bundle with
fiber $PGL_2$, locally trivial in the Zariski topology because the two principal bundles are
Zariski-locally trivial. We may sum up this conclusion with the following lemma, noting that
the argument also works the same way if $G_i = \{ -1\}$.
\begin{lemma}
\label{idcase}
Under the assumption that $G_i = \{ 1\}$, the moduli space $M^{\alpha} (S;C_{\cdot})$ is a
fiber bundle over $M^{\alpha}(S_{\leq i};C_{\cdot})\times M^{\alpha}(S_{>i};C_{\cdot})$,
locally trivial in the Zariski topology,
with fiber $PGL_2$. The same holds true if $G_i = \{ -1\}$.
\end{lemma}
Consider the next case: suppose that $G_i$ is the conjugacy class of matrices
conjugate to a nontrivial unipotent
matrix
$$
{\mathcal U} =
\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}
\right) .
$$
In that case, $M^{\alpha} (\rho _i)= B{\mathbb G} _a$. The situation is the same as before: the
moduli spaces $M^{\alpha} (S_{\leq i};C_{\cdot})$
and $M^{\alpha} (S_{>i};C_{\cdot}) $ are quasiprojective varieties, and we have
principal bundles $F_{\leq i}$ and $F_{>i}$. This time, these principal bundles have
unipotent automorphisms denoted $R'$ and $R$ respectively, in the conjugacy class of ${\mathcal U}$.
We have
$$
M^{\alpha} (S;C_{\cdot})=
{\rm Iso}
_{M^{\alpha} (S_{\leq i};C_{\cdot})\times M^{\alpha} (S_{>i};C_{\cdot})}
(p_1^{\ast} (F_{\leq i}, R'), p_2^{\ast} (F_{>i},R) ) .
$$
This means the relative isomorphism bundle of the principal bundles together with their
automorphisms.
We claim that these principal bundles together with their automorphisms may be trivialized locally
in the Zariski topology. For the principal bundles themselves this is Lemma \ref{lem62}. The unipotent
endomorphisms then correspond, with respect to these local trivializations, to maps
into $PGL_2/{\mathbb G}_a$. One can write down explicit sections of the projection
$PGL_2\rightarrow PGL_2/{\mathbb G}_a$ locally in the Zariski topology of the base, and these give the
claimed local trivializations. One might alternatively notice here that a ${\mathbb G}_a$-torsor
for the etale topology is automatically locally trivial in the Zariski topology by
``Hilbert's theorem 90''.
From the result of the previous paragraph, $M^{\alpha} (S;C_{\cdot})$ is a fiber bundle over
$M^{\alpha} (S_{\leq i};C_{\cdot})\times
M^{\alpha} (S_{>i};C_{\cdot})$, locally trivial in the Zariski topology, with fiber the
centralizer $Z(R)\subset PGL_2$ of a unipotent element $R\in PGL_2$.
This centralizer is ${\mathbb G} _a\cong {\mathbb A}^1$.
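Indeed, this identification may be checked directly. For
$g=\left( \begin{array}{cc} a & b \\ c & d \end{array}\right)$,
the equation $g{\mathcal U} = {\mathcal U}g$ reads
$$
\left( \begin{array}{cc}
a & a+b \\
c & c+d
\end{array}
\right)
=
\left( \begin{array}{cc}
a+c & b+d \\
c & d
\end{array}
\right) ,
$$
forcing $c=0$ and $a=d$; and comparing traces in $g{\mathcal U}g^{-1}=\lambda {\mathcal U}$
gives $\lambda = 1$, so no further elements commute with ${\mathcal U}$ only up to scalar.
Modulo scalars, the matrices
$\left( \begin{array}{cc} a & b \\ 0 & a \end{array}\right)$
form the group of matrices
$\left( \begin{array}{cc} 1 & v \\ 0 & 1 \end{array}\right)$,
isomorphic to ${\mathbb G}_a$.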
We obtain the following statement.
\begin{lemma}
\label{unicase}
Under the assumption that $G_i$ is the unipotent conjugacy class, the moduli space
$M^{\alpha}(S;C_{\cdot})$ is a
fiber bundle over $M^{\alpha} (S_{\leq i};C_{\cdot})\times
M^{\alpha} (S_{>i};C_{\cdot})$, locally trivial in the Zariski topology,
with fiber ${\mathbb A}^1$. The same holds true if $G_i$ is the conjugacy class of matrices
conjugate to $-{\mathcal U}$.
\end{lemma}
We may sum up the conclusion of this section as follows.
\begin{proposition}
With the hypothesis of Condition \ref{verygen} in effect,
suppose that the datum $\alpha$ is chosen such that for some $i$,
$G_i$ is one of the following four conjugacy classes
$$
\{ 1\} , \;\;
\{ -1\} , \;\;
\{ P{\mathcal U} P^{-1}\} , \mbox{ or }
\{ - P {\mathcal U} P^{-1} \} ,
$$
that is to say the conjugacy classes whose traces are $2$ or $-2$.
Then the dual boundary complex of the $\alpha$-stratum is contractible:
$$
{{\mathbb D} \partial} M^{\alpha}(S, C_{\cdot}) \sim \ast .
$$
\end{proposition}
\begin{proof}
In all four cases, covered by Lemmas \ref{idcase} and \ref{unicase} above, the space
$M^{\alpha}(S, C_{\cdot})$ admits a further decomposition into locally closed
pieces all of which have the form ${\mathbb A}^1\times Y$: in the unipotent cases the fiber is
${\mathbb A}^1$ itself, while in the central cases the fiber $PGL_2$ may be decomposed into
locally closed pieces each containing an ${\mathbb A}^1$ factor. This implies
that the dual boundary complex is contractible.
\end{proof}
\section{Decomposition at $S_i$ in the unstable case}
\label{sec-decomp-unstable}
Define the function $t_i : M(S,C_{\cdot})\rightarrow {\mathbb A}^1$ sending a local system to
the trace of its monodromy around the circle $\rho _i$. In the previous section, we treated
all strata defined in such a way that at least one of the $G_i$ is a conjugacy
class with $t_i$ equal to $2$ or $-2$. Therefore, we may now assume that all of our subsets $G_i$
consist entirely of matrices with trace different from $2,-2$. In particular, these matrices are
semisimple with distinct eigenvalues.
If $G_i$ consists of a single conjugacy class, it is
possible to choose one of the two eigenvalues. But in general, this is not possible. However, in the
situation considered in the present section, where one of the $\sigma _i$ indicates an unstable
local system, the destabilizing subsystem serves to pick out a choice of eigenvalue.
In the case where one of the $\sigma _i$ is $\{ 0\}$ stating that $V|_{S_i}$ should be unstable,
we will again obtain a structure of decomposition into a product with ${\mathbb A}^1$ locally over a
stratification, essentially by considering the extension class of the unstable local system.
Some arguments are needed in order to show that this leads to direct product decompositions.
\subsection{Some cases with $G_{i-1}$ and $G_i$ fixed}
\label{sec-fixed}
We suppose in this subsection that $G_{i-1}$ and $G_i$ are single conjugacy classes,
with traces different from $2,-2$, and furthermore
chosen so that the moduli problem for $M^{\alpha} (S_{> i};C_{\cdot})$ on one side
is Kostov-generic. Hence, that moduli stack is a quasiprojective variety. Furthermore we assume that
$\sigma _i = \{ 0\}$. Therefore, $M^{\alpha} (S_i;C_i)$ is the moduli stack of
unstable local systems on $S_i$.
The elements here are local systems $V$ fitting into an exact sequence
$$
0\rightarrow L \rightarrow V \rightarrow L' \rightarrow 0
$$
such that the monodromy of $L$ on $\xi _i$ has eigenvalue $c_i^{-1}$.
We assume that $M^{\alpha}(S_i;C_i)$ is nonempty.
\begin{remark}
If we are given the conjugacy classes $G_{i-1}$ and $G_i$ such that there
exists an unstable local system $V$ on $S_i$, then the eigenvalues
$b_{i-1}$ of $L$ on $\rho _{i-1}$, and $b_i$ of $L$ on $\rho_i$, are
uniquely determined.
\end{remark}
\begin{proof}
The conjugacy classes $G_{i-1}$, $G_i$ determine the pairs $(b_{i-1}, b_{i-1}^{-1})$
and $(b_{i}, b_{i}^{-1})$ respectively. The instability condition says that $L$ has
eigenvalue $c_i^{-1}$ along $\xi_i$. Suppose that $b_{i-1}c_i^{-1}b_i=1$ so there
exists a local system $L$ with eigenvalues $b_{i-1}$ and $b_i$. We show that
the other products with either $b_{i-1}^{-1}$ or $b_i^{-1}$ or both, are different from $1$.
For example, $b_{i-1}c_i^{-1}b_i^{-1} = b_i^{-2}$, but $b_i^2\neq 1$ since we are assuming
that $G_i$ is a conjugacy class with distinct eigenvalues. Thus $b_{i-1}c_i^{-1}b_i^{-1}\neq 1$.
Similarly, $b_{i-1}^{-1}c_i^{-1}b_i\neq 1$. Also, $b_{i-1}^{-1}c_i^{-1}b_i^{-1}=c_i^{-2}\neq 1$.
This shows that if there is one possible combination of eigenvalues for a sub-local system,
then it is unique.
\end{proof}
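To illustrate the remark with hypothetical numerical values: take $c_i=6$, $b_{i-1}=2$, $b_i=3$,
so that $b_{i-1}c_i^{-1}b_i=1$. Then
$$
b_{i-1}c_i^{-1}b_i^{-1} = \frac{1}{9} = b_i^{-2}, \;\;\;\;
b_{i-1}^{-1}c_i^{-1}b_i = \frac{1}{4} = b_{i-1}^{-2}, \;\;\;\;
b_{i-1}^{-1}c_i^{-1}b_i^{-1} = \frac{1}{36} = c_i^{-2},
$$
none of which is equal to $1$, in accordance with the uniqueness statement.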
From the assumption that $M^{\alpha} (S_i; C_i)$ is nonempty and the previous remark, we may
denote by $b_{i-1}$ and $b_i$ the eigenvalues of $L$ on $\rho _{i-1}$ and $\rho _i$
respectively.
We are assuming a genericity condition implying
that $M^{\alpha} (S_{> i}; C_{\cdot})$ is a quasiprojective variety. It has a universal
principal bundle $F_{>i}$ over it, and this has an automorphism $R$ corresponding
to the monodromy transformation around $\rho _i$. The eigenvalues of $R$ are $b_i$ and $b_i^{-1}$.
Restrict to a finer stratification of $M^{\alpha} (S_{> i}; C_{\cdot})$
into strata denoted $M^{\alpha} (S_{> i}; C_{\cdot})^a$ on
which $(F_{>i}, R)$ is trivial. Let $M^{\alpha} (S; C_{\cdot})^a$ be the
inverse image of $M^{\alpha} (S_{>i}; C_{\cdot})^a$ under the map
$M^{\alpha} (S; C_{\cdot})\rightarrow M^{\alpha} (S_{>i}; C_{\cdot})$.
\begin{proposition}
We have
$$
M^{\alpha} (S; C_{\cdot})^a = M^{\alpha} (S_{>i}; C_{\cdot})^a
\times M^{\alpha} (S_{\leq i}; C_{\cdot})^{{\rm fr},R}
$$
where
$M^{\alpha} (S_{\leq i}; C_{\cdot})^{{\rm fr},R}$ is the moduli space of {\em framed} local systems,
that is to say local systems with a projective framing along $\rho _i$ compatible
with the monodromy and having the specified eigenvalues $(b_i,b_i^{-1})$.
\end{proposition}
\begin{proof}
Use Proposition \ref{principal}.
\end{proof}
Without the conditions $\alpha = (\sigma _{\cdot}, G_{\cdot})$,
the framed moduli space is just the space of
sequences of group elements $A_1,\ldots , A_i$,
in conjugacy classes $C_1,\ldots , C_i$ respectively,
such that $A_1\cdots A_iR=1$.
Denote this space by
$$
{\rm Rep} (C_1,\ldots , C_i; R).
$$
The moduli space $M^{\alpha}(S_{\leq i},C_{\cdot})^{{\rm fr},R}$ is the subspace of
${\rm Rep} (C_1,\ldots , C_i; R)$ given by the conditions $\sigma _{\cdot}$ and $G_{\cdot}$.
Notice here that, since we do not know a genericity condition for
$(C_1,\ldots, C_i, G_i)$, the moduli space might not be smooth. Even though we are
considering framed representations, at a reducible representation the space
will in general have a singularity. Furthermore, the conditions $G_j$ might,
in principle, introduce
other singularities.
\begin{theorem}
\label{abelian}
With the above notations,
let $R'$ be an element in the conjugacy class $G_{i-1}$.
We have
$$
M^{\alpha} (S_{\leq i}; C_{\cdot})^{{\rm fr},R}
\cong {\mathbb A} ^1 \times M^{\alpha} (S_{\leq i-1}; C_{\cdot})^{{\rm fr},R'}.
$$
\end{theorem}
\begin{proof}
It isn't too hard to see that the moduli space is an ${\mathbb A}^1$-bundle over
the second term on the right hand side, where the ${\mathbb A}^1$-coordinate is the extension
class. The statement that we would like to show,
saying that there is a natural decomposition as a direct product, is a
sort of commutativity property.
Let ${\rm Rep} (C_1,\ldots , C_i; R)^u$ denote the subspace of
${\rm Rep} (C_1,\ldots , C_i; R)$ consisting of representations which are unstable on $S_i$.
This is equivalent to saying that $A_i$ fixes
the eigenvector of $R$ of eigenvalue $b_i$, acting on it by $c_i^{-1}$.
We will show an isomorphism
$$
{\rm Rep} (C_1,\ldots , C_i; R)^u\cong {\mathbb A}^1\times {\rm Rep} (C_1,\ldots , C_{i-1}; R'),
$$
and this isomorphism will preserve the conditions $(\sigma _{\cdot}, G_{\cdot})$ over
$S_{i-1}$ so it restricts to an isomorphism between the moduli spaces as claimed in the
theorem.
Write
$$
R =
\left( \begin{array}{cc}
b_i^{-1} & 0 \\
0 & b_i
\end{array}
\right) .
$$
Then
${\rm Rep} (C_1,\ldots , C_i; R)^u$ is the space of sequences $(A_1,\ldots , A_i)$
such that
$$
A_1\cdots A_iR=1
$$
and
\begin{equation}
\label{aidef}
A_i =
\left( \begin{array}{cc}
c_i & 0 \\
y & c_i^{-1}
\end{array}
\right)
\end{equation}
for some $y\in {\mathbb A}^1$.
Similarly, write
$$
R' =
\left( \begin{array}{cc}
b_{i-1}^{-1} & 0 \\
0 & b_{i-1}
\end{array}
\right) ,
$$
and ${\rm Rep} (C_1,\ldots , C_{i-1}; R')$ is the space of sequences $(A'_1,\ldots , A'_{i-1})$
such that
$$
A'_1\cdots A'_{i-1} R' =1.
$$
Suppose $(A_1,\ldots , A_i)$ is a point in ${\rm Rep} (C_1,\ldots , C_i; R)^u$
and let $y\in {\mathbb A}^1$ be the lower left coefficient of $A_i$ from \eqref{aidef}.
Note that $c_i^{-1}b_i =b_{i-1}$ so
$$
A_i R = \left( \begin{array}{cc}
b_i^{-1}c_i & 0 \\
b_i^{-1}y & c_i^{-1}b_i
\end{array}
\right)
=
\left( \begin{array}{cc}
b_{i-1}^{-1} & 0 \\
b_i^{-1}y & b_{i-1}
\end{array}
\right) .
$$
Let
$$
U:= \left( \begin{array}{cc}
1 & 0 \\
u & 1
\end{array}
\right)
$$
be chosen so that $UA_i R U^{-1} = R'$, which happens if and only if
$$
b_{i-1}^{-1} u + b_i ^{-1} y - b_{i-1} u = 0,
$$
in other words
$$
u:= \frac{-b_i^{-1}y}{b_{i-1}^{-1}-b_{i-1}}.
$$
The denominator is nonzero because we are assuming the trace of $G_{i-1}$ is different
from $2$ or $-2$, which is equivalent to asking $b_{i-1}\neq b_{i-1}^{-1}$.
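To make the conjugation explicit, one may multiply out the matrices:
$$
U\, (A_iR)\, U^{-1} =
\left( \begin{array}{cc}
1 & 0 \\
u & 1
\end{array}
\right)
\left( \begin{array}{cc}
b_{i-1}^{-1} & 0 \\
b_i^{-1}y & b_{i-1}
\end{array}
\right)
\left( \begin{array}{cc}
1 & 0 \\
-u & 1
\end{array}
\right)
=
\left( \begin{array}{cc}
b_{i-1}^{-1} & 0 \\
b_{i-1}^{-1}u + b_i^{-1}y - b_{i-1}u & b_{i-1}
\end{array}
\right) ,
$$
so the condition $UA_iRU^{-1}=R'$ is precisely the vanishing of the lower left entry.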
Then put $A'_j:= UA_jU^{-1}$. From the equation $UA_i R U^{-1} = R'$ we get
$$
A'_1\cdots A'_{i-1}R' = U (A_1\cdots A_{i-1})U^{-1}(UA_i R U^{-1}) = 1.
$$
Hence, $(y, (A'_1,\ldots , A'_{i-1}))$ is a point in
${\mathbb A}^1\times {\rm Rep} (C_1,\ldots , C_{i-1}; R')$.
This defines the map
$$
{\rm Rep} (C_1,\ldots , C_i; R)^u\rightarrow {\mathbb A}^1\times {\rm Rep} (C_1,\ldots , C_{i-1}; R') .
$$
Its inverse is obtained by mapping $(y, (A'_1,\ldots , A'_{i-1}))$
to $(A_1,\ldots , A_i)$ where for $1\leq j \leq i-1$ we put
$A_j = U^{-1} A'_j U$ with $U$ defined as above using $y$,
and $A_i$ is the upper triangular matrix \eqref{aidef}. We obtain the claimed isomorphism.
\end{proof}
By symmetry the same holds in case of Kostov-genericity on the other side, giving
a statement written as
$$
M^{\alpha} (S_{\geq i-1}; C_{\cdot})^{{\rm fr},R'}
\cong {\mathbb A} ^1 \times M^{\alpha} (S_{\geq i}; C_{\cdot})^{{\rm fr},R}.
$$
\subsection{Open $G_{i-1}$ and $G_i$}
\label{sec-variable}
If $\sigma _i=\{ 0\}$ and the moduli space is nonempty, then we cannot have both sides
being Kostov-nongeneric at once.
Therefore, the remaining case is when $G_{i-1}$ and
$G_i$ are open sets which are
unions of all but finitely many conjugacy classes (that is to say, allowing all traces but a finite number),
such that the moduli problems on both $S_{<i}$ and $S_{>i}$ are Kostov-generic.
In this situation, which we now assume,
the moduli spaces $M^{\alpha} (S_{<i}; C_{\cdot})$ and
$M^{\alpha} (S_{>i}; C_{\cdot})$ exist and have principal bundles
$F_{<i}$ and $F_{>i}$ respectively.
We have a map
$$
M^{\alpha} (S_{<i}; C_{\cdot})\times M^{\alpha} (S_{>i}; C_{\cdot})
\rightarrow
G_{i-1}\times G_i.
$$
Consider the etale covering space $\widetilde{G_i}$
which parametrizes matrices with a choice of one of the two eigenspaces.
This was considered extensively by Kabaya \cite{Kabaya}.
Let
$$
\widetilde{M}^{\alpha} (S_{>i}; C_{\cdot}):=
M^{\alpha} (S_{>i}; C_{\cdot})\times _{G_i}\widetilde{G_i}
$$
and similarly for $\widetilde{M}^{\alpha} (S_{<i}; C_{\cdot})$.
Our hypothesis that $\sigma _i = \{ 0\}$, in other words that for any
local system $V$ in $M^{\alpha}(S;C_{\cdot})$ the restriction to $S_i$ is unstable,
provides a lift of the projection map to a map
$$
M^{\alpha}(S;C_{\cdot})\rightarrow
\widetilde{M}^{\alpha} (S_{<i}; C_{\cdot})\times
\widetilde{M}^{\alpha} (S_{>i}; C_{\cdot}).
$$
Indeed the destabilizing rank one subsystem is uniquely determined by the condition that
the monodromy around $\xi_i$ have eigenvalue $c_i^{-1}$, and this rank one subsystem
serves to pick out the eigenvalues of the matrices for $\rho _{i-1}$ and $\rho _i$.
Now the same argument as before goes through.
We may choose a stratification such that on each stratum the principal bundles have
framings such that the automorphisms $R'$ and $R$ are diagonal (note, however, that the
eigenvalues are now variable).
We reduce to the following situation: $Z$ is a quasiprojective variety with
invertible functions $b_{i-1}$ and $b_i$ such that $b_{i-1}^{-1}c_i^{-1}b_i=1$,
and we look at the moduli space of quadruples
$(z,V_i,\beta ',\beta )$ such that $z\in Z$, $V_i$ is an unstable local system on $S_i$,
and
$$
\beta ': V_i|_{\rho _{i-1}} \cong (V,R'(z)),
$$
$$
\beta : V_i|_{\rho _{i}} \cong (V,R(z))
$$
where
$$
R'(z)= \left( \begin{array}{cc}
b_{i-1}(z)^{-1} & 0 \\
0 & b_{i-1}(z)
\end{array}
\right)
\;\;\;\;
R(z)= \left( \begin{array}{cc}
b_{i}(z)^{-1} & 0 \\
0 & b_{i}(z)
\end{array}
\right) .
$$
The map $Y=\beta ' \beta ^{-1}$ is an automorphism of $V$ (defined up to scalars, so it is a group element in $PGL_2$) and it preserves the marked subspace, so it is a lower-triangular matrix. It uniquely
determines the data $(V_i, \beta , \beta ')$ up to isomorphism. Indeed we may consider
$V_i\cong V$ using for example $\beta '$, then our local system is
$(R', A_i, YRY^{-1})$ where $A_i$ is specified by the condition $(R')^{-1}A_iYRY^{-1}=1$.
As the group of lower triangular matrices in $PGL_2$ is isomorphic to $\gggg _m \times {\mathbb G} _a$
we obtain an isomorphism between our stratum and $Z\times \gggg _m \times {\mathbb G} _a$.
Alternatively, one could just do a parametrized version of the proof of Theorem \ref{abelian}.
\subsection{Synthesis}
\label{sec-synth}
We may gather together the various cases that have been treated in this section so far.
\begin{theorem}
Suppose $\alpha$ is any datum such that for some $i$ we have $\sigma _i = \{ 0\}$.
If $M^{\alpha}(S; C_{\cdot})$ is nonempty, then
${{\mathbb D} \partial} M^{\alpha}(S; C_{\cdot})\sim \ast$.
\end{theorem}
\begin{proof}
Let $G^v$ be the set of matrices $R$ with ${\rm Tr}(R)\neq 2,-2$. In the previous section
we treated the cases where some $G_i$ is one of the four conjugacy classes of trace $2$ or $-2$.
Therefore we may assume that $G_{i-1}, G_i\subset G^v$.
Suppose that $G_{i-1}$ and $G_i$ are
conjugacy classes chosen so that the sequences $(C_1,\ldots , C_{i-1}, G_{i-1})$
and $(G_i, C_{i+1}, \ldots , C_k)$ are both Kostov-nongeneric.
Under the hypothesis $\sigma _i=\{ 0\}$ and supposing $M^{\alpha}(S; C_{\cdot})$ nonempty,
containing say a local system $V$,
then an eigenvalue of $G_{i-1}$ is the product of an eigenvalue of $C_i$ and an eigenvalue
of $G_i$, since there exists a rank one subsystem of $V|_{S_i}$. The same holds for the other
eigenvalue of $G_{i-1}$. Combining with the nongenericity relations among eigenvalues of
$(C_1,\ldots , C_{i-1}, G_{i-1})$
and $(G_i, C_{i+1}, \ldots , C_k)$, we obtain a nongenericity relation for $(C_1,\ldots , C_k)$.
This contradicts the hypothesis of Condition \ref{Kgen} for $C_{\cdot}$. Therefore, we conclude that
if $M^{\alpha}(S; C_{\cdot})$ is nonempty, then for any specific choice of conjugacy classes
$G_{i-1}$ and $G_i$, at least one of the moduli problems over $S_{<i}$ or $S_{>i}$
has to satisfy Condition \ref{Kgen}.
These cases are then covered by Theorem \ref{abelian} above.
There are finitely many choices of single conjugacy classes $G_{i-1}$ (resp. $G_i$) such that
$(C_1,\ldots , C_{i-1}, G_{i-1})$
(resp. $(G_i, C_{i+1}, \ldots , C_k)$) is Kostov-nongeneric. We may therefore isolate these choices
and treat them by Theorem \ref{abelian} according to the previous paragraph. Now let $G_{i-1}$ and $G_i$ be
the complements in $G^v$ of these nongeneric conjugacy classes. These are open subsets such that
for any conjugacy classes therein, the moduli problems on $S_{<i}$ and $S_{>i}$ satisfy
Condition \ref{Kgen}. The discussion of subsection \ref{sec-variable} now applies to give the
conclusion that this part of $M^{\alpha}(S, C_{\cdot})$ has contractible dual boundary complex.
\end{proof}
\section{Reduction to $M'$}
\label{sec-red}
In this section, we put together the results of the previous sections to obtain a reduction
to the main open stratum. Recall from Condition \ref{verygen}
that we are assuming that $C_{\cdot}$ is very
generic.
Let the datum $\alpha '$ consist of the following choices:
for all $i$, $\sigma '_i=\{ 1\}$
and $G_i$ is the set $G^v$ of matrices with trace $\neq 2,-2$.
Then we put
$$
M':= M^{\alpha '} (S, C_{\cdot}) .
$$
It is an open subset of $M(S,C_{\cdot})$ since stability, and the conditions on the traces, are open
conditions.
\begin{theorem}
There exists a collection of data denoted $\alpha ^j$ such that
$$
M(S, C_{\cdot}) = M' \sqcup \coprod _j M^{\alpha ^j} (S, C_{\cdot})
$$
is a stratification, i.e. a decomposition into locally closed subsets admitting a total order
satisfying the closedness condition of Proposition \ref{decomp}. Furthermore, this admits a
further refinement into a stratification with $M'$ together with pieces denoted
$Z^{j,a}\subset M^{\alpha ^j} (S, C_{\cdot})$,
such that all of the pieces $Z^{j,a}$ have the form
$$
Z^{j,a} = Y^{j,a} \times {\mathbb A} ^1.
$$
\end{theorem}
\begin{proof}
Let $G^v$ be the set of matrices of trace $\neq 2,-2$ and let $G^u$ be the set of matrices
of trace $2$ or $-2$. Let $\alpha ^j$ run over the $2^{2k-5}$ choices of
$(\sigma _2,\ldots , \sigma _{k-1}; G_2,\ldots , G_{k-2})$ with $\sigma _i$ either
$\{ 0\}$ or $\{ 1\}$, and $G_i$ either $G^u$ or $G^v$.
The locally closed pieces $M^{\alpha ^j} (S, C_{\cdot})$
are disjoint and their union is $M(S, C_{\cdot})$. Furthermore, the set of indices is partially
ordered with the product order induced by setting $\{ 0\} < \{ 1\}$ and $G^u < G^v$:
namely $j_1\leq j_2$ if each component of $\alpha ^{j_1}$ is $\leq$ the corresponding component of
$\alpha ^{j_2}$.
If $J$ is a downward cone in this partial ordering then
$\bigcup _{j\in J} M^{\alpha ^j} (S, C_{\cdot})$ is closed, because specialization
decreases the indices (stable specializes to unstable and $G^v$ specializes to $G^u$).
Choosing a compatible total ordering
we obtain the required closedness property.
The highest element in the partial ordering is the datum
$\alpha '$ considered above, so $M'$ is the open stratum of the stratification.
The discussion of the previous two sections allows us to further decompose all of the
other strata $M^{\alpha ^j} (S, C_{\cdot})$,
in a way which again preserves the ordered closedness condition, into pieces
of the form $Z^{j,a} = Y^{j,a} \times {\mathbb A} ^1$.
\end{proof}
\begin{corollary}
\label{cor92}
The natural map ${{\mathbb D} \partial} M(S,C_{\cdot}) \rightarrow {{\mathbb D} \partial} M'$
is a homotopy equivalence.
\end{corollary}
\begin{proof}
Apply Proposition \ref{decomp} to the stratification given by the theorem.
Note that $M'$ is nonempty and the full moduli space
is irreducible, so the other strata are subvarieties of strictly smaller dimension.
\end{proof}
\section{Fenchel-Nielsen coordinates}
\label{sec-main}
We are now reduced to the main case
$M'= M^{\alpha '}(S; C_{\cdot})$ for the datum $\alpha '$ with all $\sigma _i = \{ 1\}$ and all $G_i=G^v$.
We would like to get
an expression for $M'$ allowing us to understand its dual boundary complex
by inspection. We will show $M'\cong {\bf Q}^{k-3}$, where ${\bf Q}$, defined near the
end of this section, satisfies
${{\mathbb D} \partial} ({\bf Q}) \sim S^1$. The conclusion ${{\mathbb D} \partial} M' \sim S^{2(k-3)-1}$
then follows from Lemma \ref{join}.
This product decomposition is a system of Fenchel-Nielsen coordinates for the
open subset $M'$ of the moduli space.
One of the main things we learn from the basic theory of the classical hypergeometric function
is that a rank two local system
on ${\mathbb P}^1 - \{ 0,1,\infty\}$ is heuristically determined by the three conjugacy classes of the
monodromy transformations at the punctures. This general principle is not actually true in cases
where there might be a reducible local system. But, imposing the condition of stability
provides a context in which this rigidity holds precisely. This is the statement of
Corollary \ref{hyperge} below.
Let $t_{i-1}$ and $t_i$ be points in ${\mathbb A} ^1- \{ 2,-2\}$. We will write down a stable
local system $V_i(t_{i-1}, t_i)$ on $S_i$, whose monodromy traces around $\rho _{i-1}$ and $\rho _i$
are $t_{i-1}$ and $t_i$ respectively, and whose monodromy around $\xi _i$ is in the
conjugacy class $C_i$. Furthermore, any stable local system with these traces
is isomorphic to $V_i(t_{i-1}, t_i)$ in a unique way up to scalars.
Construct $V_i(t_{i-1}, t_i)$ together with a basis at the basepoint $x_i$, by exhibiting monodromy
matrices $R'_{i-1}$, $R_i$ and $A_i$ in $SL_2$. Set
$$
A_i := \left( \begin{array}{cc}
c_i & 0 \\
0 & c_i^{-1}
\end{array}
\right)
\mbox{ and }
R_i := \left( \begin{array}{cc}
u_i & 1 \\
w_i & (t_i-u_i)
\end{array}
\right)
$$
with $u_i$ given by the formula \eqref{uiform} to be determined below, and $w_i:= u_i(t_i-u_i)-1$
because of the determinant one condition.
We could just write down the formula for $u_i$ but in order to motivate it let us first calculate
$$
R'_{i-1}= A_iR_i =
\left( \begin{array}{cc}
c_iu_i & c_i \\
c_i^{-1}w_i & c_i^{-1}(t_i-u_i)
\end{array}
\right) .
$$
We need to choose $u_i$ such that
$$
t_{i-1} = {\rm Tr}(R'_{i-1}) = c_i u_i + c_i^{-1}(t_i-u_i).
$$
This gives the formula
\begin{equation}
\label{uiform}
u_i= \frac{t_{i-1}-c_i^{-1}t_i}{c_i-c_i^{-1}} .
\end{equation}
The denominator is nonzero since by hypothesis $c_i\neq c_i^{-1}$.
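As a consistency check on the determinants: by the choice of $w_i$ we have
$$
\det R_i = u_i(t_i-u_i) - w_i = 1 , \;\;\;\;
\det R'_{i-1} = \det A_i \cdot \det R_i = 1 ,
$$
so all three monodromy matrices indeed lie in $SL_2$.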
\begin{lemma}
Suppose $V_i$ is an $SL_2$ local system on $S_i$ with traces $t_{i-1}$ and $t_i$ around $\rho _{i-1}$ and $\rho _i$.
Suppose $V_i$ is given a frame at the base point $x_i$,
such that the monodromy matrix around the loop $\gamma _i$ is
diagonal with $c_i$ in the upper left, and such that the monodromy matrix
around $\rho _i$ (via the path going from $x_i$ to $s_i\in \rho _i$)
has a $1$ in the upper right corner. Then the three monodromy
matrices of $V_i$ are the matrices $R'_{i-1}$, $R_i$ and $A_i$ defined above.
\end{lemma}
\begin{proof}
The matrix $A_i$ is as given, by hypothesis. The matrix $R_i$ has trace $t_i$
and upper right entry $1$ by
hypothesis, so it too has to look as given. Now the calculation of the trace $t_{i-1}$
as a function of $u_i$ has a unique inversion: the value of $u_i$ must be given by
\eqref{uiform} as a function of $t_{i-1}$, $t_i$ and $c_i$. This determines the matrices.
\end{proof}
\begin{lemma}
Suppose $V_i$ is an $SL_2$ local system with traces $t_{i-1}$ and $t_i$
different from $2$ or $-2$, and
suppose $V_i$ is stable. Then, up to a scalar multiple,
there is a unique frame for $V_i$ over the basepoint $x_i$ satisfying the conditions of
the previous lemma.
\end{lemma}
\begin{proof}
Let $e_1$ and $e_2$ be eigenvectors for the monodromy around $\gamma _i$, with eigenvalues
$c_i$ and $c_i^{-1}$ respectively. They are uniquely determined up to a separate scalar for
each one. We claim that the upper right entry of the monodromy around $\rho _i$ is nonzero.
If it were zero, then the subspace generated by $e_2$ would be fixed, with the monodromy
around $\xi _i$ being $c_i^{-1}$; that would contradict the assumption of stability.
Now since the upper right entry of the monodromy around $\rho _i$ is nonzero, we may
adjust the vectors $e_1$ and $e_2$ by scalars such that this entry is equal to $1$.
Once that condition is imposed, the only further allowable change of basis vectors
is by multiplying $e_1$ and $e_2$ by the same scalar.
\end{proof}
\begin{corollary}
\label{hyperge}
Suppose $V_i$ is a local system on $S_i$, with conjugacy class $C_i$ around $\xi _i$,
stable, and whose traces around $\rho _{i-1}$ and $\rho _i$ are
$t_{i-1}$ and $t_i$ respectively. Then there is, up to a scalar, a unique isomorphism
$V_i\cong V_i(t_{i-1},t_i)$ with the system constructed above.
\end{corollary}
Suppose $V$ is a point in $M'$, and let $t_i$ denote the traces of the monodromies
of $V$ around the loops $\rho_i$. Then by the definition of the datum $\alpha '$,
$t_i\neq 2,-2$ and
the restriction to each $S_i$ is stable, so by the corollary there
is up to scalars a unique isomorphism $h _i: V|_{S_i} \cong V_i(t_{i-1},t_i)$.
Recall that $x_i$ is a basepoint in $S_i$, and that we have chosen a path in $S_i$
from $x_i$ to a basepoint $s_i$ in $\rho_i$, and then a path in $S_{i+1}$ from $s_i$ to $x_{i+1}$.
Let $\psi _i$ denote the composed path from $x_i$ to $x_{i+1}$, and use the same symbol to denote
the transport along this path which is an isomorphism $\psi _i:V_{x_i}\cong V_{x_{i+1}}$.
The stalk of the local system $V_i(t_{i-1},t_i)$ at $x_i$ is by construction ${\mathbb C}^2$,
and similarly for the stalk of $V_{i+1}(t_{i},t_{i+1})$ at $x_{i+1}$, so the map
$$
P_i:= h_{i+1}\psi _i h_i^{-1} : V_i(t_{i-1},t_i)_{x_i} \rightarrow V_{i+1}(t_{i},t_{i+1})_{x_{i+1}}
$$
is a matrix $P_i: {\mathbb C}^2 \rightarrow {\mathbb C}^2$ well-defined up to scalars, that is $P_i\in PGL_2$.
By the factorization property of $M'$, the local system $V$ is
determined by these glueing isomorphisms $P_i$, subject to the constraint that
they should intertwine the monodromies around the circle $\rho_i$
for $V_i$ and $V_{i+1}$. We have used the notation $R'_i$ for the monodromy
of the local system $V_{i+1}$ around the circle $\rho _i$, whereas
$R_i$ denotes the monodromy of $V_i$ around the same circle. These monodromy matrices
are defined using the same paths from $x_i$ and $x_{i+1}$ to the basepoint $s_i\in \rho _i$
as were composed to make the path $\psi _i$. Therefore, the compatibility
condition for $P_i$ says
\begin{equation}
\label{Pcond}
R'_i \circ P_i = P_i \circ R_i .
\end{equation}
The frames for $V_{x_i}$ are only well-defined up to scalars, so the matrices $P_i$ are
only well-defined up to scalars; conversely, changing them by scalars does not change
the isomorphism class of the local system.
Putting together all of these discussions, we obtain the following preliminary description of $M'$.
\begin{lemma}
The moduli space $M'$ is isomorphic to the space of
$(t_2,\ldots , t_{k-2})\in ({\mathbb A} ^1 - \{ 2,-2\} )^{k-3}$ and
$(P_2,\ldots , P_{k-2})\in (PGL_2)^{k-3}$ subject to the equations \eqref{Pcond},
where $R'_i$ and $R_i$ are given by the previous formulas in terms of the $t_j$.
For the end pieces, one should formally set
$t_1:= c_1+c_1^{-1}$ and $t_{k-1}:= c_k + c_k^{-1}$.
\end{lemma}
At this point, we have not yet obtained a good ``Fenchel-Nielsen'' style coordinate system,
because the equation \eqref{Pcond} for $P_i$ contains $R'_i$ which depends on
$t_{i+1}$ as well as $t_i$, and $R_i$ which depends on $t_{i-1}$ as well as $t_i$.
We now proceed to decouple the equations. The strategy is to introduce the matrices
$$
T_i := \left( \begin{array}{cc}
0 & 1 \\
-1 & t_i
\end{array}
\right)
$$
which serve as a canonical normal form for matrices with given traces $t_i$, not
requiring the marking of one of the two eigenvalues. Notice that if we set
$$
U_i:=
\left( \begin{array}{cc}
1 & 0 \\
u_i & 1
\end{array}
\right)
$$
then
$$
U_i^{-1}T_iU_i=
\left( \begin{array}{cc}
1 & 0 \\
-u_i & 1
\end{array}
\right)
\left( \begin{array}{cc}
0 & 1 \\
-1 & t_i
\end{array}
\right)
\left( \begin{array}{cc}
1 & 0 \\
u_i & 1
\end{array}
\right)
=
\left( \begin{array}{cc}
u_i & 1 \\
w_i & t_i-u_i
\end{array}
\right)
$$
with $w_i$ as before. Therefore, using the formula \eqref{uiform} for $u_i$ we may
write
$$
R_i(t_{i-1}, t_i) = U_i^{-1}T_iU_i.
$$
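This conjugation identity is easy to confirm symbolically. Here is a minimal sympy check, with generic symbols $t,u$ in place of $t_i,u_i$; the entry $w = u(t-u)-1$ stands in for $w_i$, being the value forced by $\det(U^{-1}TU)=\det T=1$:

```python
import sympy as sp

# Generic stand-ins for t_i and u_i.
t, u = sp.symbols('t u')
T = sp.Matrix([[0, 1], [-1, t]])   # the canonical normal form T_i
U = sp.Matrix([[1, 0], [u, 1]])    # the unipotent matrix U_i
w = u*(t - u) - 1                  # forced by det(U^{-1} T U) = det(T) = 1

conj = (U.inv() * T * U).applyfunc(sp.expand)
target = sp.Matrix([[u, 1], [w, t - u]]).applyfunc(sp.expand)
assert conj == target
assert sp.simplify(conj.det()) == 1        # determinant preserved
assert sp.simplify(conj.trace() - t) == 0  # trace preserved
print("conjugation formula verified")
```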
Now
$$
R'_{i-1}= A_i R_i = A_i U_i^{-1}T_iU_i = U_i^{-1} (U_i A_i U_i^{-1} T_i )U_i .
$$
Furthermore, $U_iA_i U_i^{-1} $ is lower triangular with $c_i$ and $c_i^{-1}$ along
the diagonal, and when we multiply with $T_i$ it gives a matrix of the form
$$
U_i A_i U_i^{-1} T_i =
\left( \begin{array}{cc}
c_i & 0 \\
\ast & c_i^{-1}
\end{array}
\right)
\left( \begin{array}{cc}
0 & 1 \\
-1 & t_i
\end{array}
\right)
=
\left( \begin{array}{cc}
0 & c_i \\
-c_i^{-1} & \ast
\end{array}
\right) .
$$
However, we know that $u_i$ was chosen so that this matrix has trace $t_{i-1}$
(it is conjugate to $R'_{i-1}$), therefore in fact
$$
U_i A_i U_i^{-1} T_i =
\left( \begin{array}{cc}
0 & c_i \\
-c_i^{-1} & t_{i-1}
\end{array}
\right)
$$
as can alternatively be seen by direct computation.
By inspection this matrix is conjugate to $T_{i-1}$ as it should be from its trace.
Interestingly enough, the conjugation is by the matrix
$$
A^{\frac{1}{2}}_i:=
\left( \begin{array}{cc}
c_i^{\frac{1}{2}} & 0 \\
0 & c_i^{-\frac{1}{2}}
\end{array}
\right) ,
$$
with
$$
U_i A_i U_i^{-1} T_i = A^{\frac{1}{2}}_iT_{i-1} A^{-\frac{1}{2}}_i .
$$
This half-power seems also to occur
somewhere in the classical treatments of the Fenchel-Nielsen coordinates.
We obtain
$$
R'_{i-1} = U_i^{-1} (U_i A_i U_i^{-1} T_i )U_i =
U_i^{-1} A^{\frac{1}{2}}_iT_{i-1} A^{-\frac{1}{2}}_i U_i .
$$
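The half-power conjugation can likewise be verified symbolically; a small sympy sketch, writing $c$ for $c_i$ and $tp$ for $t_{i-1}$ (positivity of the symbols is assumed only so that the square roots simplify):

```python
import sympy as sp

c, tp = sp.symbols('c tp', positive=True)   # c_i and t_{i-1}
Ahalf = sp.diag(sp.sqrt(c), 1/sp.sqrt(c))   # the matrix A_i^{1/2}
Tprev = sp.Matrix([[0, 1], [-1, tp]])       # T_{i-1}
M = sp.Matrix([[0, c], [-1/c, tp]])         # the matrix of trace t_{i-1} obtained above

diff = (M - Ahalf * Tprev * Ahalf.inv()).applyfunc(sp.simplify)
assert diff == sp.zeros(2, 2)
print("half-power conjugation verified")
```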
Recall that the equation \eqref{Pcond} for $P_{i-1}$ reads
$$
R'_{i-1} \circ P_{i-1} = P_{i-1} \circ R_{i-1},
$$
and using the above formula for $R'_{i-1} $ as well
as $R_{i-1} = U_{i-1}^{-1}T_{i-1}U_{i-1}$, this equation reads
\begin{equation}
\label{intermediatecond}
U_i^{-1} A^{\frac{1}{2}}_iT_{i-1} A^{-\frac{1}{2}}_i U_i \circ P_{i-1}
=P_{i-1} \circ U_{i-1}^{-1}T_{i-1}U_{i-1}.
\end{equation}
Set
$$
Q_{i-1}:= A^{-\frac{1}{2}}_i U_i P_{i-1} U_{i-1}^{-1}.
$$
This is a simple change of variables of the matrix $P_{i-1}$, with the matrices
entering into the change of variables depending however on $t_{i-2}$, $t_{i-1}$ and $t_i$.
Notice that the coefficients of $Q_{i-1}$ are linear functions of the coefficients of
$P_{i-1}$, in particular the action of scalars is the same on both.
Our equation which was previously \eqref{Pcond} (but for $i-1$ instead of $i$),
has become \eqref{intermediatecond}
which, after multiplying on the left by $U_i$ then by $A^{-\frac{1}{2}}_i$ and on the
right by $U_{i-1}^{-1}$ and substituting $Q_{i-1}$,
becomes:
\begin{equation}
\label{Qcond}
T_{i-1}\circ Q_{i-1} = Q_{i-1} T_{i-1}.
\end{equation}
A sequence of matrices $Q_i$ satisfying these equations leads back to a sequence of matrices
$P_i$ satisfying \eqref{Pcond} and vice-versa. Recall that the glueing for the local system
depended on these matrices modulo scalars, that is to say in $PGL_2$. We may sum up with the
following proposition:
\begin{proposition}
\label{feniprelim}
The moduli space $M'$ is isomorphic to the space of
choices of
$$
(t_2,\ldots , t_{k-2})\in ({\mathbb A} ^1 - \{ 2,-2\} )^{k-3}\mbox{ and }
(Q_2,\ldots , Q_{k-2})\in (PGL_2)^{k-3}
$$
subject to the equations $T_iQ_i=Q_iT_i$.
\end{proposition}
This expression for the moduli space is now decoupled; furthermore, the equations are
in a nice and simple form.
\begin{theorem}
\label{fenchelnielsen}
Let ${\bf Q}$ be the space of pairs $(t,[p:q]) \in {\mathbb A}^1\times {\mathbb P}^1$ such that
$t\neq 2,-2$ and
\begin{equation}
\label{Qdef}
p^2+tpq + q^2 \neq 0.
\end{equation}
Then we have
$$
M'\cong {\bf Q}^{k-3}.
$$
\end{theorem}
\begin{proof}
This will follow from the previous proposition, once we calculate that the
space of matrices $Q_i$ in $PGL_2$ commuting with $T_i$ is equal to the
space of points $[p:q]\in {\mathbb P}^1$ such that $p^2+t_ipq + q^2 \neq 0$.
Write
$$
Q_i = \left( \begin{array}{cc}
p & q \\
p' & q'
\end{array}
\right)
$$
then
$$
Q_iT_i =
\left( \begin{array}{cc}
p & q \\
p' & q'
\end{array}
\right)
\left( \begin{array}{cc}
0 & 1 \\
-1 & t_i
\end{array}
\right)
=
\left( \begin{array}{cc}
-q & p+t_iq \\
-q' & p' + t_i q'
\end{array}
\right)
$$
whereas
$$
T_iQ_i =
\left( \begin{array}{cc}
0 & 1 \\
-1 & t_i
\end{array}
\right)
\left( \begin{array}{cc}
p & q \\
p' & q'
\end{array}
\right)
=
\left( \begin{array}{cc}
p' & q' \\
t_ip'-p & t_i q'-q
\end{array}
\right) .
$$
The equation $Q_iT_i=T_iQ_i$ thus gives from the top row
$$
p'=-q,\;\;\;\; q' = p+t_iq
$$
and these make the other two equations hold automatically.
Therefore a solution $Q_i$ may be written
$$
Q_i = \left( \begin{array}{cc}
p & q \\
-q & p+t_iq
\end{array}
\right) .
$$
The statement $Q_i\in PGL_2$ means that $Q_i$ is taken up to multiplication by scalars,
in other words $[p:q]$ is a point in ${\mathbb P}^1$ (clearly those coordinates are not both zero);
and
$$
{\rm det}(Q_i) = p^2 + t_i pq + q^2 \neq 0.
$$
We conclude that the space of $(t_i,Q_i)\in ({\mathbb A}^1- \{ 2,-2\}) \times PGL_2$
such that $T_iQ_i=Q_iT_i$ is isomorphic to ${\bf Q}$. Therefore
Proposition \ref{feniprelim} now says $M'\cong {\bf Q}^{k-3}$.
\end{proof}
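The commutant computation in this proof can be confirmed with a short sympy check, with generic symbols $t,p,q$ in place of $t_i$ and the entries of $Q_i$:

```python
import sympy as sp

t, p, q = sp.symbols('t p q')
T = sp.Matrix([[0, 1], [-1, t]])        # the normal form T_i
Q = sp.Matrix([[p, q], [-q, p + t*q]])  # general solution of Q T = T Q

assert (Q*T - T*Q).applyfunc(sp.expand) == sp.zeros(2, 2)   # Q commutes with T
assert sp.expand(Q.det() - (p**2 + t*p*q + q**2)) == 0      # det matches the inequation
print("commutant and determinant verified")
```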
\begin{lemma}
\label{ddQ}
The dual boundary complex of ${\bf Q}$ is
$$
{{\mathbb D} \partial} {\bf Q} \sim S^1.
$$
Therefore
$$
{{\mathbb D} \partial} {\bf Q}^{k-3} \sim S^{2(k-3)-1}.
$$
\end{lemma}
\begin{proof}
Let $\Phi \subset {\mathbb P}^1\times {\mathbb A}^1$ be the open subset defined by the same inequation
\eqref{Qdef}. Then ${\bf Q}\subset \Phi$ is an open subset, whose complement is the disjoint union
of two affine lines. Furthermore, $\overline{\Phi}:= {\mathbb P}^1\times {\mathbb P}^1$ is a
(non simple) normal crossings
compactification of $\Phi$. The divisor at infinity is the union of two copies of ${\mathbb P}^1$,
namely the fiber over $t=\infty$ and the conic defined by $p^2+tpq+q^2=0$. These intersect
transversally in two points. Therefore, the incidence complex of $\Phi\subset \overline{\Phi}$
at infinity is a graph with
two vertices and two edges joining them.
It follows that the incidence complex at infinity for ${\bf Q}$ is a circle. That may also be seen directly
by blowing up two times over each ramification point of the conic lying over $t=\pm 2$.
Now applying Lemma \ref{join} successively, and noting that the successive join of $k-3$ times
the circle is $S^{2(k-3)-1}$, we obtain the second statement.
\end{proof}
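Explicitly, the second statement follows from the standard join formula $S^a \ast S^b \cong S^{a+b+1}$ by induction:
$$
\underbrace{S^1 \ast \cdots \ast S^1}_{m}
\;\cong\; S^{2(m-1)-1} \ast S^1
\;\cong\; S^{(2m-3)+1+1} \;=\; S^{2m-1},
$$
so taking $m=k-3$ gives $S^{2(k-3)-1}$.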
\begin{corollary}
\label{cor-mainthm}
Let $C_{\cdot}$ be a collection of conjugacy classes satisfying Condition \ref{verygen}.
Then the moduli space
${\rm M}_B(S; C_1,\ldots , C_k)$ of rank
$2$ local systems with those prescribed conjugacy classes, has dual boundary
complex homotopy equivalent to a sphere
$$
{{\mathbb D} \partial} {\rm M}_B(S; C_1,\ldots , C_k) \sim S^{2(k-3)-1}.
$$
\end{corollary}
\begin{proof}
We have been working with the hybrid moduli stack $M(S;C_{\cdot})$ above, but
Proposition \ref{variety} says that this is the same as the moduli space
${\rm M}_B(S; C_1,\ldots , C_k)$. By Corollary \ref{cor92},
${{\mathbb D} \partial} M(S;C_{\cdot})\sim {{\mathbb D} \partial} M'$. By Theorem \ref{fenchelnielsen},
$M'\cong {\bf Q}^{k-3}$, and by Lemma \ref{ddQ} ${{\mathbb D} \partial} {\bf Q}^{k-3}\sim
S^{2(k-3)-1}$. Putting these all together we obtain the desired conclusion.
\end{proof}
This completes the proof of Theorem \ref{main}.
\begin{remark}
The space $\Phi ^{k-3}$ itself has a modular interpretation: it is $M^{\alpha}(S,C_{\cdot})$
for $\alpha$ given by setting all $\sigma _i$ to $\{ 1\}$ (requiring stability of each $V|_{S_i}$),
but having $G_i=GL_2$ for all $i$, that is no longer constraining the traces.
\end{remark}
\section{A geometric $P=W$ conjecture}
\label{rel-hitch}
In this section we discuss briefly the relationship between the theorem proven above,
and the Hitchin fibration. For this discussion, let us suppose that the eigenvalues
$c_i$ are $n_i$-th roots of unity, so the conjugacy classes $C_i$ have finite order $n_i$.
Fix points $y_1,\ldots , y_k\in {\mathbb P}^1$ and
let
$$
X:= {\mathbb P}^1[ \frac{1}{n_1}y_1, \ldots , \frac{1}{n_k}y_k]
$$
be the {\em root stack} with denominators $n_i$ at the points $y_i$ respectively. It is
a smooth proper Deligne-Mumford stack. The fundamental group of its topological
realization \cite{Noohi} is generated by the
paths $\gamma _1,\ldots , \gamma _k$ subject to the relations that $\gamma _1\cdots \gamma _k=1$ and
$\gamma _i^{n_i}=1$. We may also let $S$ be a punctured sphere such as considered above,
the complement of a collection of small discs in
${\mathbb P} ^1$ centered at the points $y_i$. Therefore, a local system on $X^{\rm top}$ is the same thing
as a local system on $S$ such that the monodromies around the boundary loops $\xi _i$ have order $n_i$ respectively. We have
$$
{\rm M}_B(X^{\rm top}, GL_r) = \coprod _{(C_1,\ldots , C_k)} {\rm M}_B(S, C_{\cdot})
$$
where the disjoint union runs over the sequences of conjugacy classes such that $C_i$ has order
$n_i$.
Recall that if we assume that $C_{\cdot}$ satisfies the Kostov-genericity condition
then the
character variety with fixed conjugacy classes ${\rm M}_B(S, C_{\cdot})$
is the same as the hybrid moduli stack $M(S, C_{\cdot})$.
It may be seen as a connected component of the character variety ${\rm M}_B(X^{\rm top},GL_r)$.
Now we recall that there is a homeomorphism between the character variety
${\rm M}_B(X^{\rm top},GL_r)$ and the Hitchin-Nitsure moduli space
${\rm M}_{Dol}(X^{\rm top},GL_r)$ of Higgs bundles. One may consult for example
\cite{lspavm}, \cite{Konno}, \cite{Nakajima} for the general theory in the open or orbifold setting.
We denote by ${\rm M}_{Dol}(S, C_{\cdot})$ the connected component of
${\rm M}_{Dol}(X^{\rm top},GL_r)$ corresponding to the choice of conjugacy classes, which it may be
recalled corresponds to fixing appropriate parabolic weights for the parabolic Higgs bundles.
Hitchin's equations give a homeomorphism, the ``nonabelian Hodge correspondence''
\begin{equation}
\label{nahc}
{\rm M}_{Dol}(S, C_{\cdot})^{\rm top}\cong {\rm M}_B(S, C_{\cdot})^{\rm top}.
\end{equation}
Recall that the resulting two complex structures on the same underlying moduli space,
form a part of a hyperk\"ahler triple \cite{Hitchin}.
In the smooth proper orbifold setting we have the same theory of the Hitchin map
$$
{\rm M}_{Dol}(S, C_{\cdot}) \rightarrow {\mathbb A}^n
$$
which is a Lagrangian fibration to the space of integrals of Hitchin's Hamiltonian system
\cite{HitchinDuke}. In particular, $n$ is one-half of the complex dimension of the moduli
space, that dimension being even because of the hyperk\"ahler structure.
Fix a neighborhood of infinity in the Hitchin base ${\mathbb B}^{\ast}\subset {\mathbb A} ^n$,
and let $N^{\ast}_{Dol}$ denote its preimage in ${\rm M}_{Dol}(S, C_{\cdot})$. Similarly,
let $N^{\ast}_B$ denote a neighborhood of infinity in ${\rm M}_B(S, C_{\cdot})$.
The homeomorphism \eqref{nahc} gives a natural homotopy equivalence
$N^{\ast}_{Dol}\sim N^{\ast}_B$.
The neighborhood at infinity ${\mathbb B}^{\ast}\subset {\mathbb A} ^n$ has the homotopy
type of the sphere $S^{2n-1}$, and indeed we may view
$S^{2n-1}$ as the quotient of ${\mathbb B}^{\ast}$ by radial scaling,
so the Hitchin map provides a natural map
$$
N^{\ast}_{Dol}\rightarrow S^{2n-1}.
$$
On the other hand, there is a natural projection $N^{\ast}_B\rightarrow {{\mathbb D} \partial} {\rm M}_B(S, C_{\cdot})$.
This is a general phenomenon, indeed if we have chosen a very simple normal crossings compactification
with divisor components $D_1,\ldots , D_m$ then we may choose an open covering of
$N^{\ast}_B$ by open subsets $U_1,\ldots , U_m$, punctured neighborhoods of the $D_i$, such that
$U_{i_1}\cap \cdots \cap U_{i_r}$ is nonempty if and only if
$D_{i_1}\cap \cdots \cap D_{i_r}$ is nonempty. Then, any partition of unity for this covering
provides a map $N^{\ast}_B\rightarrow {\mathbb R}^m$ which just goes into the subspace
${{\mathbb D} \partial} {\rm M}_B(S, C_{\cdot})$.
Recall the following conjecture \cite{KNPS}, which was motivated by consideration of
the case ${\mathbb P} ^1-\{ y_1,y_2,y_3,y_4\}$.
\begin{conjecture}
\label{geopw}
There is a homotopy-commutative square
$$
\begin{array}{ccc}
N^{\ast}_{Dol}& \stackrel{\sim}{\rightarrow} & N^{\ast}_B\\
\downarrow & & \downarrow \\
S^{2n-1}& \stackrel{\sim}{\rightarrow} & {{\mathbb D} \partial} {\rm M}_B(S, C_{\cdot})
\end{array}
$$
where the top and side maps are those described above, such that the bottom map is a
homotopy equivalence.
\end{conjecture}
Our main theorem provides a homotopy equivalence such as the one which is conjectured to
exist on the bottom of the square, for the group $GL_2$ on ${\mathbb P}^1-\{ y_1,\ldots , y_k\}$.
This was our motivation, and it was also the motivation for Komyo's proof in the case $k=5$
\cite{Komyo}.
We haven't shown anything about commutativity of the diagram. This is one of the motivations for
looking at the geometric theory of harmonic maps to buildings developed in \cite{KNPS} \cite{KNPS2}.
A result in this direction is shown by Daskalopoulos, Dostoglou and Wentworth \cite{DDW}.
The Kontsevich-Soibelman wallcrossing picture
\cite{KontsevichSoibelmanWallcrossing}
should provide a global framework for this question.
Conjecture \ref{geopw} may be viewed as a geometrical analogue of the first weight-graded piece of the
$P=W$ conjecture
\cite{deCataldoHauselMigliorini}
\cite{Hausel}. That conjecture states that the weight filtration $W$
of the mixed Hodge structure on the
cohomology of the character variety $M_B$ should be naturally identified with the
perverse Leray filtration $P$ induced by the Hitchin fibration.
For the case of rank two character varieties on a compact Riemann surface, it
was in fact proved by de Cataldo, Hausel and Migliorini \cite{deCataldoHauselMigliorini}.
Davison treats a twisted version \cite{Davison}.
It is known \cite{Payne} that the cohomology of the dual boundary
complex is the first weight-graded piece of the cohomology of $M_B$. Conjecture
\ref{geopw} states that this should come from the cohomology of the sphere at infinity in the
Hitchin fibration, which looks very much like a Leray piece.
Furthermore, from discussions with L. Migliorini and S. Payne
it seems to be the case that the characterization of the cohomology of the dual boundary
complex in \cite{Payne}, and the computations
\cite{HauselThaddeus1} \cite{HauselThaddeus2} \cite{HauselLetellierRodriguez}
\cite{HauselLetellierRodriguez2}
of the cohomology ring of
${\rm M}_{Dol}$ used to prove the $P=W$ conjecture for $SL_2$
in \cite{deCataldoHauselMigliorini}, should serve to show commutativity of the diagram in rational
cohomology.
The question of proving the analogue of our Theorem \ref{main} for a compact Riemann surface,
even in the rank $2$ case, is an interesting problem for further study. One may also envision
the case of a punctured curve of higher genus. The techniques used here involved a choice of
stability condition on each of the pieces of the decomposition, which in the higher genus
case would require having at least a certain number of punctures. Weitsman suggests,
following \cite{Weitsman} and \cite{JeffreyWeitsman},
that it might be possible to obtain a similar argument with
as few as one puncture. The compact case would seem to be more difficult to handle.
Let us note that Kabaya \cite{Kabaya} gives a general discussion of coordinate systems
which can be obtained using decompositions, and he treats the problems of indeterminacy of
choices of eigenspaces up to permutations.
The other direction which needs to be considered is local systems of higher rank.
Here, the first essential case is ${\mathbb P}^1-\{ 0,1,\infty \}$, where there is no useful decomposition
of the surface into simpler pieces. We could hope that if this basic case could be treated in all ranks,
then the reduction techniques we have used above could allow for an extension to the case of
many punctures.
\bibliographystyle{amsplain}
\section{Preamble}
The study of the individual contributions to the nucleon spin provides a critical window
for testing detailed predictions of QCD for the internal quark and gluon structure of hadrons.
Fundamental spin predictions can be tested experimentally to high precision,
particularly in measurements of deep inelastic scattering (DIS) of polarized
leptons on polarized proton and nuclear targets.
The spin of the nucleons was initially thought to originate simply from the spin
of the constituent quarks, based on intuition from the parton model.
However, experiments have shown that this expectation was incorrect.
It is now clear that nucleon spin physics is much more complex, involving
quark and gluon orbital angular momenta (OAM) as well as gluon spin and sea-quark contributions.
Contributions to the nucleon spin, in fact, originate from the
nonperturbative dynamics associated with color confinement as well as from perturbative QCD (pQCD) evolution.
Thus, nucleon spin structure has become an active aspect of QCD
research, incorporating important theoretical advances such as the development of
generalized parton distributions (GPDs) and transverse momentum-dependent distributions (TMDs).
Fundamental sum rules, such as the Bjorken sum rule for polarized DIS or the Drell-Hearn-Gerasimov sum rule for polarized photoabsorption cross-sections, critically constrain the
spin structure. In addition, elastic lepton-nucleon scattering and other
exclusive processes, e.g. Deeply Virtual Compton Scattering (DVCS),
also determine important aspects of nucleon spin dynamics.
This has led to a vigorous theoretical and experimental program to obtain an
effective hadronic description of the strong force in terms of the
basic quark and gluon fields of QCD.
Furthermore, the theoretical program for determining the spin structure of hadrons has
benefited from advances in lattice gauge theory simulations of QCD
and the recent development of light-front holographic QCD ideas based on the AdS/CFT
correspondence, an approach to hadron structure based on the
holographic embedding of light-front dynamics in a higher dimensional gravity theory, together with
the constraints imposed by the underlying superconformal algebraic structure.
This novel approach to nonperturbative QCD and color confinement has provided
a well-founded semiclassical approximation to QCD.
QCD-based models
of the nucleon spin and dynamics must also successfully account
for the observed spectroscopy of hadrons. Analytic calculations of
the hadron spectrum, a long-sought goal,
are now being carried out using Lorentz frame-independent light-front holographic
methods.
We begin this review by discussing why nucleon
spin structure has become a central topic of hadron physics (Section~\ref{overview}).
The goal of this introduction is to engage the non-specialist reader by providing a phenomenological
description of nucleon structure in general and its spin structure
in particular.
We then discuss the scattering reactions (Section~\ref{Scattering spectrum})
which constrain nucleon spin structure, and the
theoretical methods (Section~\ref{Computation methods}) used
for perturbative or nonperturbative QCD calculations.
A fundamental tool is Dirac's front form (\emph{light-front quantization})
which, while keeping a direct connection to the QCD Lagrangian,
provides a frame-independent, relativistic description of hadron
structure and dynamics, as well as a rigorous physical formalism
that can be used to derive spin sum rules (Section ~\ref{sum rules}).
Next, in Section~\ref{sec:data}, we discuss the existing spin structure
data, focusing on the inclusive lepton-nucleon scattering results, as well as
other types of data, such as semi-inclusive deep
inelastic scattering (SIDIS) and proton-proton scattering.
Section~\ref{sec:perspectives} provides an example of the
knowledge gained from nucleon spin studies which illuminates fundamental features of hadron dynamics and structure.
Finally, we summarize in Section~\ref{cha:Futur-results} our present understanding of
the nucleon spin structure and its impact on testing nonperturbative aspects of QCD.
A lexicon of terms specific to the nucleon spin
structure and related topics is provided at the end of this review to assist non-specialists.
Words from this list are italicized throughout the review.
Also included is a list of acronyms used in this review.
Studying the spin of the nucleon is a complex subject because light
quarks move relativistically within hadrons; one needs special care
in defining angular momenta beyond conventional nonrelativistic treatments~\cite{Chiu:2017ycx}.
Furthermore, the concept of gluon spin is gauge dependent; there is no gauge-invariant definition of the spin of gluons
-- or gauge particles in general~\cite{Ji:2014lra, Kinoshita:1990nb}; the definition
of the spin content of the nucleon is thus dependent on the choice of gauge.
In the light-front form one usually takes the light-cone gauge~\cite{Chiu:2017ycx} where the spin is well defined: there are no ghosts or
negative metric states in this transverse gauge (See Sec. \ref{LC dominance and LF quantization}).
Since nucleon structure is nonperturbative, calculations based solely on first principles of QCD are difficult.
These features make the nucleon spin structure an active and challenging field of study.
There are several excellent previous reviews which discuss the high-energy
aspects of proton spin dynamics~\cite{Bass:2004xa, Chen:2005tda, Burkardt:2008jw, Kuhn:2008sy, Chen:2010qc, Aidala:2012mv, Blumlein:2012bf}.
This review will also cover less conventional topics, such as how studies of
spin structure illuminate aspects of the strong force in its nonperturbative domain,
the consequences of color confinement, the origin of the QCD mass scale,
and the emergence of hadronic degrees of freedom from its partonic ones.
It is clearly important to know how the quark and gluon
spins combine with their OAM to form the total nucleon spin.
A larger purpose is to use empirical information on the spin structure of hadrons
to illuminate features of the strong force -- arguably the least understood fundamental force
in the experimentally accessible domain. For example, the
parton distribution functions (PDFs) are themselves nonperturbative quantities.
Quark and gluon OAMs -- which significantly
contribute to the nucleon spin -- are directly connected to color confinement.
We will only briefly discuss
some high-energy topics such as GPDs, TMDs, and the nucleon spin observables
sensitive to final-state interactions such as the Sivers effect. These topics
are well covered in the reviews mentioned above. A recent review on the transverse
spin in the nucleon is given in Ref.~\cite{Perdekamp:2015vwa}.
These topics are needed to understand the details of nucleon spin structure at
high energy, but they only provide qualitative information on our main topic, the
nucleon spin~\cite{Jaffe:1991kp}.
For example, the large transverse spin asymmetries measured in
singly-polarized lepton-proton and proton-proton
collisions hint at significant transverse-spin--orbit coupling in
the nucleon. This provides an important motivation for the TMD and GPD studies which
constrain OAM contributions to nucleon spin.
\section{Overview of QCD and the nucleon structure \label{overview}}
The description of phenomena given by the Standard Model is
based on a small number of basic elements: the fundamental particles (the six
quarks and six leptons, divided into three families), the four fundamental interactions
(the electromagnetic, gravitational, strong and weak nuclear forces) through
which these particles interact, and
the Higgs field which is at the origin of the masses of the fundamental particles.
Among the four interactions, the strong force is the least
understood in the presently accessible experimental domains.
QCD, its gauge theory, describes the interaction of quarks
via the exchange of vector gluons, the gauge bosons
associated with the color fields. Each quark carries a ``color" charge
labeled blue, green or red, and they interact by the exchange of colored gluons
belonging to a color octet.
QCD is best understood and well
tested at small distances thanks to the property of \emph{asymptotic freedom}~\cite{Gross:1973id}:
the strength of the interaction between color charges effectively
decreases as they get closer. The formalism of pQCD can therefore
be applied at small distances; {\it i.e.}, at high momentum transfer, and it has met with remarkable success.
This important feature allows one to validate QCD as the correct fundamental theory
of the strong force. However, most natural phenomena involving hadrons, including color confinement,
are governed by nonperturbative aspects of QCD.
\emph{Asymptotic freedom} also implies that the binding of quarks becomes stronger
as their mutual separation increases. Accordingly, the quarks
confined in a hadron react increasingly coherently as the
characteristic distance scale at which the hadron is probed becomes
larger: The nonperturbative distributions of all quarks and gluons within
the nucleon can participate in the reaction.
In fact, even in the perturbative domain, the nonperturbative
dynamics which underlies hadronic bound-state structure is nearly always
involved and is incorporated in distribution amplitudes, structure functions, and quark and gluon jet fragmentation
functions. This is why, as a general rule,
pQCD cannot predict the analytic form and magnitude of such distributions, but only their evolution
with a change of scale, such as the momentum transfer of the probe.
For a complete understanding of the strong force
and of the hadronic and nuclear matter surrounding us (of which $\approx95\%$ of the
mass comes from the strong force), it is essential
to understand QCD in its nonperturbative domain. The key
example of a nonperturbative mechanism
which is still not clearly understood is color confinement.
At large distances, where the internal structure cannot be resolved, effective degrees of freedom emerge;
thus the fundamental degrees of freedom of QCD, quarks and gluons, are effectively replaced by baryons and mesons.
The emergence of relevant degrees of freedom associated with
an effective theory is a standard occurrence in physics; {\it e.g.}, Fermi's theory for
the weak interaction at large distances, molecular physics with its
effective Van der Waals force acting on effective degrees of
freedom (atoms), or geometrical optics whose essential degree of freedom
is the light ray. Even outside of the field of physics, a science based
on natural processes often leads to an effective theory in which the complexity
of the basic phenomena is simplified by the introduction of effective
degrees of freedom, sublimating the underlying effects that become
irrelevant at the larger scale. For example, biology takes root from
chemistry, itself based on atomic and molecular physics which in part are based on effective
degrees of freedom such as nuclei. Hence the importance of understanding
the connections between fundamental theories and effective theories,
in order to unify knowledge on a single
theoretical foundation. An important avenue of research in
QCD belongs to this context: to understand the connection
between the fundamental description of nuclear matter in terms
of quarks and gluons and its effective description in terms of the
baryons and mesons. A part of this review will discuss how spin
helps with this endeavor.
QCD is most easily studied with the nucleon, since it is
stable and its structure is determined by the strong force.
As a first step, one studies its structure without accounting for the
spin degrees of freedom. This simplifies both theoretical and
experimental aspects. Accounting for spin then tests QCD in detail.
This has been made possible due to continual technological
advances such as polarized beams and polarized targets.
A primary way to study the nucleon is to scatter beams of particles
-- leptons or hadrons -- on a fixed target. The interaction
between the beam and target typically occurs by the exchange of
a photon or a $W$ or $Z$ vector boson.
The momentum of the exchanged quantum
controls the time and distance scales of the probe.
Alternatively, one can collide two beams. Hadrons either
constitute one or both beams (lepton-hadron or hadron-hadron colliders)
or are generated during the collision ($e^+$--$e^-$ colliders).
The main facilities where nucleon spin structure has been studied are
SLAC in California, USA (tens of GeV electrons impinging on fixed proton or nuclear targets),
CERN in France/Switzerland (hundreds of GeV muons colliding with fixed targets),
DESY in Germany (tens of GeV electrons in a ring scattering off an internal gas target),
Jefferson Laboratory (JLab) in Virginia, USA (electrons with energy up to 11 GeV with fixed targets),
the Relativistic Heavy Ion Collider (RHIC) at Brookhaven Laboratory in New York,
USA (colliding beams of protons or nuclei with energies about 10 GeV per nucleon), and
MAMI (electrons of up to 1.6 GeV on fixed targets) in Germany.
We will now survey the formalism describing the
various reactions just described.
\subsection{Charged lepton-nucleon scattering}
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-1.5cm}
\hspace{0.6cm}
\includegraphics[width=6.0cm]{el_scat2}
\vspace{-0.5cm}
\caption{\label{Flo:electron scattering}\small Inclusive electron scattering off
a nucleon, in the first Born approximation.
The blob represents the nonperturbative response of the target
to the photon.}
\vspace{-0.5cm}
\end{wrapfigure}
We start our discussion with experiments where charged leptons scatter
off a fixed target. We focus on the ``inclusive" case
where only the scattered lepton is detected. The interactions involved
in the reaction are the electromagnetic force (controlling the scattering of the lepton)
and the strong force (governing the nuclear or nucleon structures).
Neutrino scattering,
although it is another important probe of nucleon structure, will not
be discussed in detail here because the small weak interaction cross-sections, and the unavailability of large
polarized targets, have so far prohibited its use for detailed spin structure studies.
Nonetheless, as we shall discuss, neutrino scattering off unpolarized targets and parity-violating electron
scattering yield constraints on nucleon spin~\cite{Alberico:2001sd}.
The formalism for inelastic lepton scattering, including the weak interaction,
can be found {\it e.g.} in Ref.~\cite{Anselmino:1994gn}.
\subsubsection{The first Born approximation {\label{sub:born1}}}
The electromagnetic interaction of a lepton with a hadronic or nuclear target proceeds
by the exchange of a virtual photon. The first-order amplitude, known as the first
Born approximation, corresponds to a single photon exchange, see Fig.~\ref{Flo:electron scattering}.
In the case of electron scattering, where the lepton mass is small,
higher orders in perturbative quantum electrodynamics (QED)
are needed to account for bremsstrahlung (real photons emitted by the incident or the scattered
electron), vertex corrections (virtual photons emitted by the incident
electron and re-absorbed by the scattered electron) and ``vacuum polarization" diagrams
(the exchanged photon temporarily turning into pairs of charged
particles). In some cases, such as high-$Z$ nuclear targets, it is also necessary to account for the cases
where the interaction between the electron and the target is transmitted
by the exchange of multiple photons (see {\it e.g.}~\cite{Guichon:2003qm}).
This correction will be negligible for the reactions and kinematics discussed here. Perturbative techniques
can be applied to the electromagnetic probe, since the QED coupling $\alpha \approx 1/137$, but not to the target structure
whose reaction to the absorption of the photon is governed by the strong force at large distances where
the QCD coupling $\alpha_s$ can be large.
\subsubsection{Kinematics }
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-1.95cm}
\includegraphics[scale=0.65]{polkine2}
\vspace{-0.5cm}\caption{\label{fig:spinangles}\small Definitions of the polar angle $\theta^{*}$ and azimuthal angle $\phi^{*}$
of the target spin $\vec{S}$. The scattering plane is defined by $x\otimes z$.}
\vspace{-0.5cm}
\end{wrapfigure}
In inclusive reactions the final state system
$X$ is not detected. In the case of an ``elastic" reaction, the target particle emerges without structure modification.
Alternatively, the target nucleon or nucleus can emerge as excited states which promptly decay by emitting new particles (the resonance region),
or the target can fragment, with additional particles produced in the final state as in DIS.
We first consider measurements in the laboratory frame where the nucleon or nuclear target
is at rest (Figs.~\ref{Flo:electron scattering} and \ref{fig:spinangles}).
The laboratory energy of the virtual photon is $\nu\equiv E-E'$.
The direction of the momentum $\overrightarrow{q}\equiv\overrightarrow{k}-\overrightarrow{k'}$ of the virtual photon
defines the $\overrightarrow{z}$ axis,
while $\overrightarrow{x}$ is in the $(\overrightarrow{k},\overrightarrow{k'})$
plane. $\overrightarrow{S}$ is the target
spin, with $\theta^{*}$ and $\phi^{*}$ its polar and azimuthal angles,
respectively. In inclusive reactions, two variables
suffice to characterize the kinematics; in the elastic case, they are related, and one variable is enough.
During an experiment, the transferred energy $\nu$ and the scattering angle $\theta$ are typically varied. Two of the following
relativistic invariants are used to characterize the kinematics:
$\bullet$ The exchanged 4-momentum squared $Q^2 \equiv -(k-k')^2 =4EE'\sin^2\frac{\theta}{2}$
for ultra-relativistic leptons. For a real photon, $Q^2=0$.
$\bullet$ The invariant mass squared $W^2\equiv(p+q)^2=M_t^2+2M_t\nu-Q^2$,
where $M_t$ is the mass of the target nucleus. $W$
is the mass of the system formed after the lepton-nucleus collision; {\it e.g.}, a nuclear excited state.
$\bullet$ The Bjorken variable $x_{Bj} \equiv \frac{Q^2}{2p\cdot q}$, where $p$ is the nucleon 4-momentum; for a nucleon at rest, $x_{Bj} = \frac{Q^2}{2M\nu}$ with $M$ the nucleon mass.
This variable was introduced by Bjorken in the context of scale invariance in
DIS; see Section~\ref{DISscaling}.
One has $0<x_{Bj}\leq M_t/M$, where $M$ is the nucleon mass, since $W \geq M_t$, $Q^2>0$ and $\nu>0$.
$\bullet$ The laboratory energy transfer relative to the incoming lepton energy $y=\nu/E$.
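As a numerical illustration of these definitions, the invariants can be computed from the measured beam energy $E$, scattered-lepton energy $E'$ and angle $\theta$. The sketch below assumes a proton target at rest; the kinematic values are hypothetical, chosen only for illustration:

```python
import math

M = 0.9383  # proton mass in GeV (target assumed at rest)

def invariants(E, Eprime, theta):
    """Invariants for ultra-relativistic lepton scattering off a
    nucleon at rest; energies in GeV, angle in radians."""
    nu = E - Eprime                                      # energy transfer
    Q2 = 4.0 * E * Eprime * math.sin(theta / 2.0) ** 2   # photon virtuality
    W2 = M**2 + 2.0 * M * nu - Q2                        # final-state mass squared
    x = Q2 / (2.0 * M * nu)                              # Bjorken variable
    y = nu / E                                           # relative energy transfer
    return nu, Q2, W2, x, y

# Hypothetical setting: 10 GeV beam, 4 GeV scattered electron at 30 degrees.
nu, Q2, W2, x, y = invariants(10.0, 4.0, math.radians(30.0))
```

The identity $W^2 = M^2 + Q^2(1-x_{Bj})/x_{Bj}$ follows directly from the definitions and provides a useful consistency check between the invariants.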
\noindent Depending on the values of $Q^2$ and $\nu$, the target
can emerge in different excited states. It is advantageous to study
the excitation spectrum in terms of $W$ since each excited state
corresponds to a specific value of $W$ rather than of $\nu$, see Fig.~\ref{fig:gross}.
\subsubsection{General expression of the reaction cross-section \label{sub:general XS}}
In what follows, ``hadron'' can refer to either a nucleon or a nucleus.
The reaction cross-section is obtained from the scattering amplitude $T_{fi}$ for an initial state $i$
and final state $f$. $T_{fi}$ is computed from the photon propagator and the leptonic current contracted with the
electromagnetic current of the hadron for the exclusive reaction $\ell H \to \ell^\prime H^\prime$,
or a tensor in the case of an incompletely known final state.
Both currents are conserved at the leptonic and hadronic vertices (gauge invariance).
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-0.8cm}
\includegraphics[scale=0.55]{kine3d}
\vspace{-0.5cm}
\caption{{\label{fig:gross}\small Response of the nucleon to the electromagnetic probe
as a function of $Q^2$ and $\nu$. The $\nu$-positions of the
peaks (N, $\Delta$,...) change as $Q^2$ varies.
(After~\cite{Gross:1985dn}.)} }
\end{wrapfigure}
In the first Born approximation:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
T_{fi}=\left\langle k'\right|j^{\mu}(0)\left|k\right\rangle \frac{1}{Q^2}\left\langle P_X\right|J_{\mu}(0)\left|P\right\rangle,
\label{eq:amplitran}
\end{equation}
where the leptonic current is $j^{\mu}=e\overline{\psi_l}\gamma^{\mu}\psi_l$
with $\psi_l$ the lepton spinor, $e$ its electric charge and $J^\mu$ the quark current. The exact expression of the hadron's current matrix element
$\left\langle P_X\right|J_{\mu}(0)\left|P\right\rangle$ is unknown because of our
ignorance of the nonperturbative hadronic structure and, for non-exclusive experiments, that of the
final state. However, symmetries (parity, time reversal, hermiticity, and current conservation) constrain the matrix elements of $J^\mu$
to a generic form written in terms of the vectors and tensors pertinent to the reaction. Our ignorance of
the hadronic structure is thus parameterized by functions which can be either measured,
computed numerically, or modeled. These are called either
``form factors" (elastic scattering, see Section~\ref{elastic scatt}),
``response functions" (quasi-elastic reaction, see Section~\ref{qel})
or ``structure functions" (DIS case, see Section~\ref{DIS}).
A significant advance of the late 1990s and early 2000s is the unification of
form factors and structure functions under the concept of GPDs.
The differential cross-section $d\sigma$ is obtained from the absolute square of the amplitude
(\ref{eq:amplitran}) times the lepton flux and a phase space factor, given {\it e.g.}, in Ref.~\cite{Olive:2016xmw}.
\subsubsection{Leptonic and hadronic tensors, and cross-section parameterization\label{tensors}}
The leptonic tensor $\eta^{\mu\nu}$ and the hadronic tensor $W^{\mu\nu}$ are defined such that
$d\sigma\propto\left|T_{fi}\right|^2=\eta^{\mu\nu}\frac{1}{Q^{4}}W_{\mu\nu}$. That is,
$\eta^{\mu\nu}\equiv\frac{1}{2}\sum j^{\mu *} j^{\nu}$, where all the possible final states of the lepton
have been summed over ({\it e.g.}, all of the lepton final spin states for the unpolarized experiments), and the tensor
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
W^{\mu\nu}=\frac{1}{4\pi} \int d^4 \xi \, e^{iq_\alpha \xi^\alpha} \left\langle P \right|\left[J^{\mu \dagger} (\xi),J^\nu (0)\right] \left| P \right\rangle,
\label{hadronic tensor}
\end{equation}
follows from the optical theorem by computing the forward matrix element of a product of currents in the proton state.
The contribution to $W^{\mu\nu}$ which is symmetric in ${\mu , \nu}$ -- thus constructed from the hadronic vector current --
contributes to the unpolarized cross-section, whereas its antisymmetric part
-- constructed from the pseudo-vector (axial) current -- yields the spin-dependent contribution.
In the unpolarized case; {\it i.e.}, with summation over all spin states,
the cross-section can be parameterized with six photoabsorption terms.
Three terms originate from the three possible polarization states of the virtual
photon. (The photon polarization is described by a 4-vector, but for a virtual photon
only three components are independent because of the constraint from gauge invariance.
The unphysical fourth component is called a \emph{ghost} photon.)
The other three terms stem from the multiplication of the two
tensors. They depend in particular on the azimuthal scattering angle, which is integrated over
for inclusive experiments. Thus, these three terms disappear and
\vspace{-0.35cm}
\begin{equation}
\vspace{-0.35cm}
\left|T_{fi}\right|^2=\frac{e^2}{Q^2(1-\epsilon)}\left[(w_{RR}+w_{LL})+2\epsilon w_{ll}\right],
\label{eq:sep}
\end{equation}
where $R$, $L$ and $l$ label the photon helicity state (they are not Lorentz indices) and \\
$\epsilon\equiv1/\left[1+2\left(\nu^2/Q^2+1\right)\tan^2
(\theta/2)\right]$ is the virtual photon degree of polarization in the $m_e=0$ approximation.
The right and left helicity terms are $w_{RR}$ and $w_{LL}$, respectively.
The longitudinal term $w_{ll}$ is non-zero only for virtual photons.
It can be isolated by varying $\epsilon$~\cite{Rosenbluth:1950yq}, but $w_{RR}$ and $w_{LL}$
cannot be separated. Thus, writing $w_T=w_{RR}+w_{LL}$ and $w_L=w_{ll}$,
the cross-section takes the form:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
d\sigma\propto\left|T_{fi}\right|^2=\frac{e^2}{Q^2(1-\epsilon)}\left[w_T+2\epsilon w_{L}\right].
\label{eq:tensor}
\end{equation}
The total unpolarized inclusive cross-section is expressed
in terms of two photoabsorption partial cross-sections, $\sigma_{L}$ and $\sigma_T$.
The parameterization in terms of virtual photoabsorption quantities
is convenient because the leptons create the virtual photon flux probing the target.
For doubly-polarized inclusive inelastic scattering, where both the beam and target are polarized,
two additional parameters are required: $\sigma_{TT}$ and $\sigma_{LT}'$. (The reason for the prime $'$
is explained below). The $\sigma_{TT}$ term stems from the interference of the amplitude involving one of
the two possible transverse photon helicities with the amplitude involving the other
transverse photon helicity. Likewise, $\sigma_{LT}'$ originates from the
imaginary part of the longitudinal-transverse interference amplitude. The real part, which produces
$\sigma_{LT}$, vanishes in inclusive experiments because the angles characterizing the hadrons produced in
the reaction are averaged over. This term, however, appears in exclusive or semi-exclusive reactions,
see {\it e.g.}, the review~\cite{Burkert:2004sk}.
\subsubsection{Asymmetries}
The basic observable for studying nucleon spin structure in doubly polarized lepton scattering
is the cross-section asymmetry with respect to the lepton and nucleon spin directions.
Asymmetries can be absolute: $A=\sigma^{\downarrow\Uparrow}-\sigma^{\uparrow\Uparrow}$, or relative:
$A=(\sigma^{\downarrow\Uparrow}-\sigma^{\uparrow\Uparrow})/(\sigma^{\downarrow\Uparrow}+\sigma^{\uparrow\Uparrow})$.
The $\downarrow$ and $\uparrow$ represent the leptonic beam helicity in the laboratory frame
whereas $\Downarrow$ and $\Uparrow$ define the direction of the target polarization (here, along the beam direction).
Relative asymmetries convey less information, the absolute magnitude of the process
being lost in the ratio, but are easier to measure than absolute asymmetries or cross-sections
since the absolute normalization ({\it e.g.}, detector acceptance, target density, or inefficiencies) cancels in the ratio.
Measurements of absolute asymmetries can also be advantageous, since the contribution
from any unpolarized material present in the target cancels out. The optimal choice between relative
and absolute asymmetries thus depends on the experimental conditions;
see Section~\ref{Structure functions extraction}.
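The cancellation of the overall normalization in relative asymmetries can be made explicit with a small sketch; the cross-section values and the normalization factor below are hypothetical:

```python
def asymmetries(sig_anti, sig_para, norm=1.0):
    """Absolute and relative asymmetries from the two doubly polarized
    cross-sections; `norm` models an overall acceptance/luminosity factor."""
    a_abs = norm * sig_anti - norm * sig_para
    a_rel = (norm * sig_anti - norm * sig_para) / (norm * sig_anti + norm * sig_para)
    return a_abs, a_rel

a_abs1, a_rel1 = asymmetries(1.2, 1.0, norm=1.0)
a_abs2, a_rel2 = asymmetries(1.2, 1.0, norm=0.5)  # e.g., halved acceptance
# The relative asymmetry is unaffected by `norm`; the absolute one is rescaled.
```

This is the quantitative content of the statement that detector acceptance, target density, and inefficiencies cancel in the ratio but not in the difference.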
One can readily understand why the asymmetries appear physically,
and why they are related to the spin distributions of the quarks in the nucleon.
Helicity is defined as the projection of spin in the direction of motion.
In the Breit frame, where the momenta of the massless lepton and of the struck quark are reversed
by the interaction (so that helicity conservation forces their spins to flip), the polarization of the incident
relativistic lepton sets the polarization of the probing photon because of angular momentum conservation; {\it i.e.},
the photon must be transversely polarized, with helicity $\pm 1$.
Helicity conservation requires that a photon of a given helicity couples only to quarks of
opposite helicities, thereby probing the quark helicity (spin) distributions in the nucleon.
Thus the difference of scattering probability between leptons of $\pm 1$ helicities (asymmetry)
is proportional to the difference of the population of quarks of different helicity. This is the basic physics
of quark longitudinal polarization as characterized by the target hadron's longitudinal spin structure function.
Note that virtual photons can also be longitudinally polarized, {\it i.e.}, with
helicity 0, which will also contribute to the lepton asymmetry at finite $Q^2$.
\subsection{Nucleon-Nucleon scattering \label{n-n scattering}}
Polarized proton--(anti)proton scattering, as done at RHIC (Brookhaven, USA),
is another way to access the nucleon spin structure.
Since hadron structure is independent of the measurement, the PDFs measured in lepton-nucleon and
nucleon-nucleon scattering should be the same.
This postulate of pQCD factorization underlies the ansatz that PDFs are universal.
Several processes in nucleon-nucleon scattering are available to access PDFs, see Fig. \ref{Flo:Drell-Yan}.
Since the various
PDFs contribute with different weights in different processes, investigating all of these reactions
allows one to disentangle the contributing PDFs.
The effects of pQCD evolution are known analytically at least to
next-to-leading order (NLO) in $\alpha_s$ for these processes, which permits
the extraction of the PDFs to high precision. The most studied processes which access nucleon spin structure are:
\begin{figure}[ht]
\centering
\vspace{-0.cm}
\includegraphics[width=9.0cm]{Drell_yan}
\vspace{-0.3cm}
\caption{\label{Flo:Drell-Yan}\small Various $\protect\overrightarrow{p} \protect\overrightarrow{p}$ reactions
probing the proton spin structure.
Panel A: Drell-Yan process and its underlying LO diagram.
Panel B: Direct diphoton production at LO.
Panel C: $W^{+ / -}$ production at LO.
Panel D: LO process dominating photon, pion and/or Jet production
in $\protect\overrightarrow{p} \protect\overrightarrow{p}$ scattering.
Panel E: heavy-flavor meson production at LO.}
\end{figure}
\noindent \textbf{A) The Drell-Yan process}
A lepton pair detected in the final state corresponds to
the Drell-Yan process, see Fig.~\ref{Flo:Drell-Yan}, panel A.
In the high-energy limit, this process is described as the annihilation of a
quark from a proton with an antiquark from the other (anti)proton; the
resulting timelike photon then converts into a lepton-antilepton pair.
Hence, the process is sensitive to the convolution of the quark
and antiquark polarized PDFs $\Delta q(x_{Bj})$ and $\Delta \overline{q}(x_{Bj})$. (They will be
properly defined by Eq.~(\ref{eq:pdf def.}).)
Another process that leads to the same final state is lepton-antilepton pair creation from a virtual photon
emitted by a single quark. However, this process
requires a large virtuality to produce a high-energy lepton-antilepton pair, and it is thus
kinematically suppressed compared to the panel A case.
An important complication is that the Drell-Yan process is sensitive to double initial-state
corrections, where both the quark and antiquark before annihilation interact
with the spectator quarks of the other projectile. Such corrections are
``leading \emph{twist}"; {\it i.e.}, they are not power-suppressed at high lepton pair virtuality.
They induce strong modifications of
the lepton-pair angular distribution and violate the Lam-Tung relation~\cite{Boer:2002ju}.
A fundamental QCD prediction is that a naively time-reversal-odd
distribution function, measured \emph{via} Drell-Yan should change sign compared to a
SIDIS measurement~\cite{Brodsky:2002cx,Collins:2002kn,Brodsky:2002rv,Brodsky:2013oya}.
An example is the Sivers function~\cite{Sivers:1989cc}, a transverse-momentum dependent
distribution function sensitive to spin-orbit effects inside the polarized proton.
\noindent \textbf{B) Direct diphoton production}
Inclusive diphoton production $\overrightarrow{p}\overrightarrow{p} \rightarrow \gamma \gamma+X$ is
another process sensitive to $\Delta q(x_{Bj})$ and $\Delta \overline{q}(x_{Bj})$.
The underlying leading order (LO) diagram is shown on panel B of Fig.~\ref{Flo:Drell-Yan}.
\noindent \textbf{C) $W^{+ / -}$ production}
The structure functions probed in lepton scattering involve the quark charge squared
(see Eqs.~(\ref{eq:eqf1parton}) and (\ref{eq:eqg1parton})):
They are thus only sensitive to $\Delta q + \Delta \overline{q}$.
$W^{+ / -}$ production is sensitive to $\Delta q(x_{Bj})$ and $\Delta \overline{q}(x_{Bj})$ separately.
Panel C in Fig.~\ref{Flo:Drell-Yan}
shows how $W^{+ / -}$ production allows the measurement of both mixed $\Delta u \Delta \overline{d}$ and
$\Delta d \Delta \overline{u}$ combinations;
thus combining $W^{+ / -}$ production data and data providing
$\Delta q + \Delta \overline{q}$ ({\it e.g.}, from lepton scattering) permits
individual quark and antiquark contributions to be separated. The produced $W$
is typically identified \emph{via} its leptonic decay to $\nu l$, with the $\nu$ escaping detection.
\noindent \textbf{D) Photon, Pion and/or Jet production}
These processes are
$\overrightarrow{p}\overrightarrow{p} \rightarrow \gamma+X$,
$\overrightarrow{p}\overrightarrow{p} \rightarrow \pi+X$,
$\overrightarrow{p}\overrightarrow{p} \rightarrow jet(s)+X$ and
$\overrightarrow{p}\overrightarrow{p} \rightarrow \gamma+jet+X$.
At high momenta, such reactions are dominated by either
gluon fusion or gluon-quark Compton scattering with a gluon or photon in the final state;
see panel D in Fig.~\ref{Flo:Drell-Yan}. These processes are sensitive to the polarized gluon distribution
$\Delta g(x,Q^2)$.
\noindent \textbf{E) Heavy-flavor meson production}
Another process which is sensitive to $\Delta g(x,Q^2)$ is $D$ or $B$ heavy meson production
\emph{via} gluon fusion $\overrightarrow{p}\overrightarrow{p} \rightarrow D+X$ or
$\overrightarrow{p}\overrightarrow{p} \rightarrow B+X$. See panel E in Fig.~\ref{Flo:Drell-Yan}. The
heavy mesons subsequently decay into charged leptons which are detected.
\subsection{$e^+ ~ e^-$ annihilation \label{e^+e^- annihilation}}
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-1.35cm}
\centering
\includegraphics[width=5.0cm]{ep_annihilation}
\vspace{-0.45cm}
\caption{\label{Flo:ep_annihilation}\small Annihilation of $e^+ ~ e^-$ with only one detected hadron from the final state.}
\vspace{-0.5cm}
\end{wrapfigure}
The $e^+ ~ e^-$ annihilation process where only one hadron is detected in the final state (Fig.~\ref{Flo:ep_annihilation})
is the timelike version of DIS if the final state hadron is a nucleon. The nucleon structure is parameterized
by \emph{fragmentation functions}, whose analytic form is constrained -- as in the spacelike case -- by fundamental symmetries.
\section{Constraints on spin dynamics from scattering processes \label{Scattering spectrum}}
We now discuss the set of inclusive scattering processes which are sensitive to
the polarized parton distributions and provide the cross-sections for each type of reaction.
We start with DIS where the nucleon structure is best understood.
DIS was also historically the first \emph{hard}-scattering reaction which provided an understanding of fundamental hadron dynamics.
Thus, DIS is the prototype -- and it remains the archetype -- of tests of QCD.
We will then survey other inclusive reactions and explore their connection to
exclusive reactions such as elastic lepton-nucleon scattering.
\subsection{Deep inelastic scattering \label{DIS}}
\subsubsection{Mechanism\label{sub:MecaDIS}}
The kinematic domain of DIS where leading-twist Bjorken scaling is valid requires
$W \gtrsim 2$ GeV
and $Q^2 \gtrsim 1$~GeV$^2$.
Due to \emph{asymptotic freedom}, QCD
can be treated perturbatively in this domain, and standard gauge theory calculations are possible.
In the Bjorken limit where
$\nu\to\infty$ and $Q^2\to\infty$, with $x_{Bj}=Q^2/(2M\nu)$ fixed,
DIS can be represented in the first approximation by a lepton scattering elastically off
a fundamental quark or antiquark constituent of the target nucleon,
as in Feynman's parton model. The momentum distributions of the quarks (and gluons)
in the nucleon, which determine the DIS cross-section, reflect its nonperturbative bound-state structure.
The ability to separate, at high lepton momentum transfer, perturbative photon-quark interactions from
the nonperturbative nucleon structure is known as the \emph{factorization
theorem}~\cite{factorization theorem} -- a direct consequence of \emph{asymptotic freedom}. It is
an important ingredient in establishing the validity of QCD as a description of the strong interactions.
The momentum distributions of quarks and gluons are parameterized by the structure functions.
These distributions
are universal;
{\it i.e.}, they are properties of the hadrons themselves, and thus are independent of the
particular high-energy reaction used to probe the nucleon.
In fact, all of the interactions within the nucleon which occur before the
lepton-quark interaction, including the dynamics, are contained in the frame-independent
light-front (LF) wave functions (LFWF) of the nucleon -- the eigenstates of the QCD LF Hamiltonian. They thus reflect
the nonperturbative underlying confinement dynamics of QCD; we discuss
how this is assessed in models and confining theories such as Light Front Holographic QCD (LFHQCD) in Section~\ref{sec:LFHQCD}.
Final-state interactions -- processes happening after the lepton interacts
with the struck quark -- also exist. They lead to novel phenomena such as diffractive DIS (DDIS),
$\ell p \to \ell^\prime p^\prime X$,
or the pseudo-T-odd Sivers single-spin asymmetry $\vec S_p \cdot \vec q \times \vec p_q$ which is observed in
polarized SIDIS. These processes also contribute at ``leading twist"; {\it i.e.}, they contribute to the
Bjorken-scaling DIS cross-section.
\subsubsection{Bjorken scaling\label{DISscaling}}
DIS is effectively represented by the elastic scattering of leptons on the pointlike
quark constituents of the nucleon in the Bjorken limit.
Bjorken predicted that the hadron structure functions would depend only on the dimensionless
ratio $x_{Bj}$,
and that the structure functions reflect \emph{conformal} invariance; {\it i.e.}, they will be $Q^2$-invariant.
This is in fact the prediction of ``conformal" theory -- a quantum field theory of pointlike quarks with no
fundamental mass scale. Bjorken's expectation was verified by the first measurements at
SLAC~\cite{Breidenbach:1969kd} in the domain $x_{Bj} \sim 0.25$.
However, in a gauge theory such as QCD, Bjorken scaling is
broken by logarithmic corrections from pQCD processes, such as gluon radiation -- see
Section~\ref{sub:pQCD}. One also predicts deviations from Bjorken scaling due to
power-suppressed $M^2/Q^2$ corrections called
\emph{higher-twist} processes. They reflect finite mass corrections and \emph{hard} scattering
involving two or more quarks. The effects become particularly evident at low $Q^2$
($\lesssim1$~GeV$^2$), see Section~\ref{OPE}.
The underlying \emph{conformal} features of chiral QCD (the massless-quark limit) also have important consequences for color confinement and hadron dynamics at low $Q^2$. This
perspective will be discussed in Section~\ref{sec:LFHQCD}.
\subsubsection{DIS: QCD on the light-front \label{LC dominance and LF quantization}}
An essential point of DIS is that the lepton interacts via the exchange of a virtual photon
with the quarks of the proton -- not at the same \emph{instant time} $t$ (the ``\emph{instant form}" as defined by Dirac),
but at the time along the LF, in analogy to a flash photograph.
In effect DIS provides a measurement of hadron structure at fixed LF time $\tau = x^+ = t + z/c$.
The LF coordinate system in position space is based on the LF variables
$x^{\pm} = (t \pm z) $. The choice of the $\hat z = {\hat x}^3$ direction is arbitrary. The two other orthogonal vectors
defining the LF coordinate system are written as $x_{\bot}=(x,y)$.
They are perpendicular to the $(x^+,x^-)$ plane. Thus $x^2 = x^+ x^- - x_\perp^2$.
Similar definitions are applicable to momentum space: $p^{\pm} = (p^0 \pm p^3)$,
$p_{\bot}=(p_1,p_2)$.
The product of two vectors $a^{\mu}$ and $b^{\mu}$ in LF coordinates is
\vspace{-0.15cm}
\begin{equation}
\vspace{-0.15cm}
a^{\mu} b_{\mu} = \frac{1}{2} (a^+b^- + a^-b^+ ) - a_{\bot} b_{\bot}.
\label{lc vector product}
\end{equation}
The relation between covariant and contravariant components is $a^+=2a_-$, $a^-=2a_+$, and the covariant metric $g_{\mu\nu}$ in the $(+,-,1,2)$ basis is:
\vspace{-0.1cm}
\footnotesize
\begin{equation}
\left(
\begin{array}{cccc}
0 & \frac{1}{2} & 0 & 0\\
\frac{1}{2} & 0 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1
\end{array}
\right).
\nonumber
\vspace{-0.1cm}
\end{equation}
\normalsize
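With the conventions above ($a^\pm = a^0 \pm a^3$), the LF product of Eq.~(\ref{lc vector product}) is identical to the usual Minkowski product. A quick numerical check, with arbitrary sample vectors:

```python
def lf_dot(a, b):
    """LF inner product: a.b = (a+ b- + a- b+)/2 - a_perp . b_perp,
    with a± = a0 ± a3 and a, b given as (a0, a1, a2, a3)."""
    ap, am = a[0] + a[3], a[0] - a[3]
    bp, bm = b[0] + b[3], b[0] - b[3]
    return 0.5 * (ap * bm + am * bp) - (a[1] * b[1] + a[2] * b[2])

def minkowski_dot(a, b):
    # standard (+,-,-,-) metric
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

a = (5.0, 1.0, -2.0, 3.0)   # arbitrary sample 4-vectors
b = (2.0, 0.5, 1.5, -1.0)
```

Expanding $\frac{1}{2}[(a^0+a^3)(b^0-b^3)+(a^0-a^3)(b^0+b^3)] = a^0b^0 - a^3b^3$ shows the equality analytically; the code merely confirms it numerically.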
Dirac matrices $\gamma^\mu$ adapted to the LF coordinates can also be defined~\cite{Kogut:1969xa}.
The LF coordinates
provide the natural coordinate system for DIS and other \emph{hard} reactions.
The LF formalism, called the ``Front Form" by Dirac, is Poincar\'{e} invariant
(independent of the observer's Lorentz frame) and
``causal" (correlated information is only possible as allowed by the finite speed of light).
The momentum and spin distributions of the quarks which are probed in DIS experiments
are in fact determined by the LFWFs of the target hadron -- the eigenstates
of the QCD LF Hamiltonian $H_{LF}$ with the Hamiltonian defined at fixed $\tau$.
$H_{LF}$ can be computed directly from the QCD Lagrangian.
This explains why quantum field theory quantized at fixed $\tau $ (\emph{LF quantization}) is the
natural formalism underlying DIS experiments. The LFWFs being independent of
the proton momentum, one obtains the same predictions for DIS at an electron-proton collider
as for a fixed target experiment where the struck proton is at rest.
Since important nucleon spin structure information is derived from DIS experiments,
it is relevant to outline the basic elements of the LF formalism here.
The evolution operator in LF time is $P^- = P^0 - P^3$, while $P^+ = P^0 + P^3$ and $P_\perp$ are kinematical.
This leads to the definition of the Lorentz invariant LF Hamiltonian
$H_{LF} = P^\mu P_\mu = P^- P^+ - P^2_\perp$. The LF Heisenberg equation derived from the QCD LF Hamiltonian is
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
H_{LF} |\Psi_H \rangle = M^2_H |\Psi_H \rangle,
\end{equation}
%
where the eigenvalues $M_H^2$ are the squares of the masses of the hadronic eigenstates.
The eigensolutions $|\Psi_H \rangle$ projected on the free parton eigenstates $|n\rangle$
(the Fock expansion) are the boost-invariant hadronic LFWFs,
$ \langle n | \Psi_H \rangle = \Psi_n(x_i, \vec k_{\perp i}, \lambda_i)$, which underlie
the DIS structure functions. Here $x_i= k^+_i/P^+$, with
$\sum_i x_i = 1$, are the LF momentum fractions of the quark and gluon constituents
of the hadron eigenstate in the $n$-particle Fock state, the $\vec k_{\perp i}$ are the
transverse momenta of the $n$ constituents where $\sum_i\vec k_{\perp i} = 0_\perp $;
the variable $\lambda_i$ is the spin projection of constituent $i$ in the $\hat z$ direction.
A critical point is that \emph{LF quantization} provides the LFWFs
describing relativistic bound systems, independent of the observer's Lorentz frame;
{\it i.e.}, they are boost invariant. In fact, the LF provides an exact and rigorous framework to
study nucleon structure in both the perturbative and nonperturbative domains of QCD~\cite{Brodsky:1997de}.
Just as the energy $P^0$ is the conjugate of the standard time $x^0$ in the \emph{instant form},
the conjugate to the LF time $x^+$ is the operator $P^- = i \frac{d}{d x^+} $. It represents the LF time evolution operator
\vspace{-0.1cm}
\begin{equation}
\vspace{-0.1cm}
P^- \Psi =\frac{(M^2+P_{\bot}^2)}{P^+} \Psi,
\label{LF Hamiltonian}
\end{equation}
and generates the translations normal to the LF.
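Two defining features of the formalism can be checked numerically: with $p^\pm = p^0 \pm p^3$, the mass-shell condition reads $p^+p^- - p_\perp^2 = M^2$, and the LF momentum fractions $x = k^+/P^+$ are invariant under boosts along $\hat z$. A sketch with hypothetical momenta:

```python
import math

def boost_z(p, eta):
    """Boost the 4-vector (E, px, py, pz) along z with rapidity eta."""
    E, px, py, pz = p
    ch, sh = math.cosh(eta), math.sinh(eta)
    return (ch * E + sh * pz, px, py, sh * E + ch * pz)

def plus(p):
    # p+ = p0 + p3
    return p[0] + p[3]

def on_shell(mass, px, py, pz):
    # build an on-shell 4-momentum (E, px, py, pz)
    return (math.sqrt(mass**2 + px**2 + py**2 + pz**2), px, py, pz)

P = on_shell(0.9383, 0.0, 0.0, 3.0)   # parent hadron (proton-like, hypothetical)
k = on_shell(0.3, 0.1, 0.2, 0.8)      # one constituent (hypothetical)

# mass-shell condition in LF variables: p+ p- - p_perp^2 = M^2
msq = plus(P) * (P[0] - P[3]) - (P[1]**2 + P[2]**2)

# LF momentum fraction before and after a z-boost:
x_before = plus(k) / plus(P)
x_after = plus(boost_z(k, 1.3)) / plus(boost_z(P, 1.3))
```

The invariance follows because a $z$-boost simply rescales every $k^+$ by $e^{\eta}$, which cancels in the ratio $k^+/P^+$.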
The structure functions measured in DIS are computed from integrals of the square of the LFWFs,
while the hadron form factors measured in elastic lepton-hadron scattering are given by the
overlap of LFWFs. The power-law fall-off of the form factors at high $Q^2$ is predicted from first principles by simple
counting rules which reflect the composition of the hadron~\cite{Brodsky:1973kr, Matveev:1973ra}.
One also can predict observables such as the DIS spin asymmetries for polarized targets~\cite{Brodsky:1994kg}.
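As a numerical sketch of such counting rules, a dipole shape (used here only as a stand-in, with an assumed scale) falls at large $Q^2$ with an effective power $d\ln F/d\ln Q^2 \to -2$, the value $-(n-1)$ expected for an $n=3$ quark bound state:

```python
import math

def dipole(Q2, L2=0.71):
    """Dipole shape with assumed scale L2 (GeV^2), a stand-in for a
    nucleon form factor; counting rules predict a (Q^2)^-(n-1) fall-off."""
    return 1.0 / (1.0 + Q2 / L2) ** 2

def log_slope(f, Q2, h=1e-6):
    """Effective power d ln f / d ln Q2 by finite differences."""
    return (math.log(f(Q2 * (1.0 + h))) - math.log(f(Q2))) / math.log(1.0 + h)

slopes = [log_slope(dipole, Q2) for Q2 in (1.0, 10.0, 100.0, 1000.0)]
# The effective power decreases monotonically toward -2 as Q2 grows.
```
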
\emph{LF quantization} differs from the traditional equal-time quantization at fixed $t$~\cite{Weinberg:1966jm}
in that eigensolutions of the Hamiltonian defined at a fixed time $t$ depend on the hadron's momentum $\vec P$.
The boost of the \emph{instant form} wave function is then a complicated dynamical problem;
even the Fock state structure depends on $P^\mu$. Also, interactions
of the lepton with quark pairs (connected time-ordered diagrams) created from the \emph{instant form} vacuum
must be accounted for.
Such complications are absent in the LF formalism.
The LF vacuum is defined as the state with zero $P^-$; {\it i.e.},
invariant mass zero and thus $P^\mu=0$. Vacuum loops do not
appear in the LF vacuum since $P^+$ is conserved at
every vertex; one thus cannot create particles with $k^+ > 0$ from the LF vacuum.
It is sometimes useful to simulate \emph{LF quantization} by using \emph{instant time}
in a Lorentz frame where the observer has ``infinite momentum" $ P^z \to - \infty.$
However, it should be stressed that the LF formalism is frame-independent; it is
valid in any frame, including the hadron rest frame. It reduces to standard nonrelativistic
Schr\"odinger theory if one takes $c \to \infty$.
The \emph{LF quantization} is thus the natural, physical, formalism for QCD.
As we shall discuss below, the study of dynamics
with the LF holographic approach, which incorporates the exact \emph{conformal} symmetry of the classical QCD Lagrangian in the chiral limit,
provides a successful description of color confinement and nucleon structure at low $Q^2$~\cite{Brodsky:2014yha}.
An example is given in Section~\ref{unpo cross-section} where
nucleon form factors emerge naturally from the LF framework
and are computed in LFHQCD.
\noindent \textbf{Light-cone gauge}
The gauge condition often chosen in the LF framework is the ``light-cone" (LC) gauge
defined as $A^+ = A^0 + A^3= 0$; it is an
axial gauge condition in the LF frame. The LC gauge is analogous to the usual
Coulomb or radiation gauge since there are no longitudinally polarized nor
\emph{ghost} (negative-metric) gluons. Thus, Faddeev--Popov \emph{ghosts}~\cite{Faddeev:1967fc} are
also not required. In LC gauge one can show that $A^-$ is a function of $A_\bot$.
Therefore, this physical gauge simplifies the study of hadron structure since the
transverse degrees of freedom of the gluon field $A_\bot$ are the only independent
dynamical variables.
The LC gauge also ensures that at LO, \emph{twist}-2 expressions do not explicitly involve the gluon
field, although the results retain color-gauge
invariance~\cite{Jaffe:1996zw}. Instead, a LF-instantaneous interaction proportional to
$1\over {k^+}^2$ appears in the LF Hamiltonian, analogous to the \emph{instant time} instantaneous
${1}\over {\vec k}^2$ interaction which appears in Coulomb (radiation) gauge in QED.
\noindent \textbf{Light-cone dominance}
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-1.2cm}
\centering
\includegraphics[width=6.0cm]{Handbag}
\vspace{-0.4cm}
\caption{\label{Handbag+HT}\small Forward virtual Compton scattering with a Wilson line.}
\vspace{-0.3cm}
\end{wrapfigure}
Using unitarity, the hadronic tensor $W^{\mu \nu}$, Eq.~(\ref{hadronic tensor}), can be computed
from the imaginary part of the forward virtual Compton scattering amplitude
$\gamma^*(q) N(p) \to \gamma^* (q) N(p)$, see Fig.~\ref{Handbag+HT}. At large $Q^2$, the quark propagator
which connects the two currents in the DVCS amplitude goes far off-shell; as a result, the invariant
spatial separation $x^2 = x_\mu x^\mu$ between the currents
$J^\mu(x)$ and $ J^\nu (0) $ acting on the quark line vanishes as
$x^2 \propto {1\over Q^2}$. Since $x^2 = x^+ x^- - x^2_\perp \to 0$,
this domain is referred to as ``light-cone dominance".
The interactions of gluons with this quark propagator are referred to as the \emph{Wilson line}.
It represents the final-state interactions between the struck quark and the target spectators
(``final-state", since the imaginary part of the amplitude in Fig.~\ref{Handbag+HT}
is related by the \emph{Optical Theorem} to the DIS cross-section
with the \emph{Wilson line} connecting the outgoing quark to the nucleon remnants).
These can contribute at leading \emph{twist} -- {\it e.g.}, the Sivers effect~\cite{Sivers:1989cc} or DDIS -- or can
generate \emph{higher-twist} corrections.
In QED such final-state interactions are related to the ``Coulomb phase".
More explicitly, one can choose coordinates such that $ q^+ = -M x_{Bj} $ and
$q^- = (2\nu + M x_{Bj}) $ with $q_{\bot}=0$. Then
$q^\mu \xi_\mu = \big[ (2\nu+M x_{Bj})\xi^+ -M x_{Bj} \xi^- \big] $, with
$\xi$ the integration variable in Eq.~(\ref{hadronic tensor}). In the Bjorken limit,
$\nu \to \infty$ and $x_{Bj}$ is finite. One then verifies that the cross-section
is dominated by $\xi^+ \to 0$, $\xi^- \propto 1/(M x_{Bj})$ in the Bjorken limit, that is $\xi^+ \xi^- \approx 0$,
and the reaction happens on the LC specified by $\xi^+ \xi^- = \xi^2 = 0$. Excursions out of the LC generate
$M^2/Q^2$ \emph{twist}-4 and higher corrections ($M^{2n}/Q^{2n}$ power corrections), see Section~\ref{OPE}.
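The scales in this argument can be made concrete with a short numerical sketch (the kinematic values below are illustrative assumptions, not measurements): taking $\xi^+ \sim 1/(2\nu)$ and $\xi^- \sim 1/(M x_{Bj})$ as the representative supports, their product equals $1/Q^2$ at fixed $x_{Bj}$, while $\xi^-$ itself stays finite.

```python
# Illustrative check of light-cone dominance (all values are assumptions):
# with x_Bj = Q^2/(2 M nu), the support of the integrand is xi^+ ~ 1/(2 nu)
# and xi^- ~ 1/(M x_Bj), so xi^+ xi^- ~ 1/Q^2 -> 0 in the Bjorken limit
# while xi^- itself stays finite.
M = 0.939                                  # nucleon mass [GeV]
x_bj = 0.3                                 # fixed Bjorken x
for Q2 in (2.0, 10.0, 100.0):              # photon virtuality [GeV^2]
    nu = Q2 / (2 * M * x_bj)               # energy transfer
    xi_plus = 1 / (2 * nu)                 # shrinks in the Bjorken limit
    xi_minus = 1 / (M * x_bj)              # stays finite
    print(f"Q2={Q2:6.1f}  xi+ * xi- = {xi_plus * xi_minus:.4f}  1/Q2 = {1/Q2:.4f}")
```

Each printed line shows $\xi^+\xi^-$ coinciding with $1/Q^2$, the same scale that governs the \emph{twist}-4 corrections above.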
It can be shown that LC kinematics also
dominates Drell-Yan lepton-pair reactions (Section~\ref{n-n scattering})
and inclusive hadron production in $e^+ ~ e^-$ annihilation
(Section~\ref{e^+e^- annihilation}).
\noindent \textbf{Light-front quantization }
The two currents appearing in DVCS (Fig.~\ref{Handbag+HT}) effectively couple to the nucleon
as a local operator at a single LF time in the Bjorken limit.
The nucleon is thus described as distributions of partons along $x^-$ at a fixed LF time $x^+$
with $x_\bot =0$. At finite $Q^2$ and $\nu$ one becomes sensitive to
distributions with nonzero $x_\bot$. It is often convenient to expand the operator
product appearing in DVCS as a sum of ``good" operators,
such as $\gamma^+ = \gamma^0 + \gamma^3$, which have simple interactions
with the quark field. In contrast, ``bad" operators such as $\gamma^-$ have a complicated
physical interpretation since they can connect the electromagnetic current to more than one
quark in the hadron Fock state via LF instantaneous interactions.
The equal LF time condition, $x^+ = $ constant, defines a plane, rather than a cone, tangent to the LC,
thus the name ``Light-Front".
In high-energy scattering, the leptons and partons being ultrarelativistic,
it is often useful for purposes of intuition to interpret the DIS kinematics
in the Breit frame, or to use the \emph{instant form} in the infinite momentum frame (IMF).
However, since a change of frames requires Lorentz boosts in the \emph{instant form},
it mixes the dynamics
and kinematics of the bound system, complicating the study of the hadron dynamics and structure.
In contrast, the LF description of the nucleon structure is frame independent.
The LF momentum fraction carried by a quark $i$ is $x_i = k^+_i/P^+$, with $P^+=\sum_i k_i^+$;
for the struck quark it identifies with the scaling variable, $x_i=x_{Bj}$.
Likewise, the hadron LFWF is the sum
of individual Fock-state wave functions, \emph{viz} the states corresponding to a specific number of partons in
the hadron.
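As a minimal sketch of this Fock-state bookkeeping (the probabilities and momentum fractions below are invented for illustration), the LF momentum fractions $x_i = k_i^+/P^+$ must sum to one in every component, and the Fock probabilities must sum to one:

```python
# Toy light-front Fock expansion of a hadron (all numbers are illustrative
# assumptions, not fitted values). Each entry: probability of the Fock
# component and the LF momentum fractions x_i = k_i^+/P^+ of its partons.
fock_states = {
    "|qqq>":       {"prob": 0.65, "x": [0.50, 0.30, 0.20]},
    "|qqq g>":     {"prob": 0.25, "x": [0.40, 0.25, 0.20, 0.15]},
    "|qqq qqbar>": {"prob": 0.10, "x": [0.30, 0.25, 0.20, 0.15, 0.10]},
}
# Normalization of the Fock expansion: probabilities sum to 1.
assert abs(sum(s["prob"] for s in fock_states.values()) - 1.0) < 1e-12
# P^+ conservation: momentum fractions sum to 1 in every Fock component.
for name, s in fock_states.items():
    assert abs(sum(s["x"]) - 1.0) < 1e-12
print("Fock probabilities and momentum fractions are consistently normalized")
```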
One can use the QCD LF equations to reduce the 4-component Dirac spinors
appearing in LF quark wave functions to a description based
on two-component Pauli spinors by using the LC gauge.
The upper two components of the quark field are the dynamical quark field proper; they yield the
\emph{leading-twist} description, understood on the LF as the quark
probability density in the hadron eigenstate.
This procedure allows an interpretation in terms of a transverse confinement
force~\cite{Burkardt:2008ps, Abdallah:2016xfk}; it is thus of prime
interest for this review.
The lower two components of the quark spinor are linked to a field depending on
both the upper components and the independent dynamical gluon field $A_\bot$;
they are thus interpreted as quark--gluon correlations, {\it i.e.}, \emph{higher twists}.
These are further discussed in Sections~\ref{OPE} and \ref{sub:HT Extraction}.
Thus, the LF formalism allows for a frame-independent description of the nucleon structure
with clear interpretation of the parton wave functions, of the Bjorken scaling variable
and of the meaning of \emph{twists}.
\noindent There are other advantages for studying QCD on the LF:
\noindent $\bullet$ As we have noted, the vacuum eigenstate in the LF formalism is the eigenstate
of the LF Hamiltonian with $P^\mu =0$; it thus has zero invariant mass $M^2 = P^\mu P_\mu = 0.$
Since $P^+ = 0$ for the LF vacuum, and $P^+ $ is conserved at every vertex, all disconnected
diagrams vanish.
The LF vacuum structure is thus simple, without the complication of vacuum loops of
particle-antiparticle pairs. The dynamical effects normally associated with the \emph{instant form}
vacuum, including quark and gluon condensates, are replaced by the nonperturbative dynamics
internal to the hadronic eigenstates in the front form.
\noindent $\bullet$ The LFWFs are universal objects which describe hadron structure at all scales.
In analogy to parton model structure functions, LFWFs have a probabilistic interpretation:
their projection on an $n$-particle Fock state is the probability amplitude
that the hadron has that number of partons at a fixed LF time $x^+$ -- the probability to be in a
specific Fock state.
This probabilistic interpretation remains valid
regardless of the level of analysis performed on the data; this contrasts with standard
analyses of PDFs which can only be interpreted
as parton densities at lowest pQCD order ({\it i.e.}, LO in $\alpha_s$), see Section~\ref{parton model}.
The probabilistic interpretation implies that PDFs, \emph{viz} structure functions, are
identified with sums of the squared LFWFs.
In principle it allows for an exact nonperturbative
treatment of confined constituents. One thus can approach the challenging problems
of understanding the role of color confinement in hadron structure
and the transition between physics at short and long distances.
Elastic form factors also emerge naturally from LF QCD: they are
overlaps of the LFWFs based on matrix elements of the local operator $J^+ = \bar \psi \gamma^+ \psi $. %
In practice, approximations and additional constraints are required to carry out
calculations in 3+1 dimensions, such as the \emph{conformal} symmetry
of the chiral QCD Lagrangian. This will be discussed in Section~\ref{sec:LFHQCD}.
Phenomenological LFWFs can also be constructed using quark models; see {\it e.g.},
Refs.~\cite{Ma:1997gy}-\cite{Chabysheva:2012fe}.
Such models can provide predictions for polarized PDFs due to contributions to
nucleon spin from the \emph{valence quarks}. While higher Fock states are typically not present in these models,
some do account for gluons or $q\bar{q}$ pairs~\cite{Braun:2011aw,Pasquini:2007iz}.
Knowledge of the effective LFWFs
is relevant for the computation of form factors,
PDFs, GPDs, TMDs and parton distribution amplitudes~\cite{Chabysheva:2012fe},
for both unpolarized and polarized parton distributions~\cite{Maji:2017ill}-\cite{Nikkhoo:2017won}.
LFWFs also allow the study of the GPDs skewness
dependence~\cite{Traini:2016jko}, and to
compute other parton distributions, {\it e.g.}, the Wigner distribution
functions~\cite{Gutsche:2016gcd, Chakrabarti:2016yuw}, which encode
the correlations between the nucleon spin and the spins or OAM of its quarks~\cite{She:2009jq, Lorce:2011ni, Lorce:2011kd}.
Phenomenological models of parton distribution functions based on the LFHQCD
framework~\cite{Gutsche:2013zia, Maji:2015vsa, Abidin:2008sb}
use as a starting point the convenient analytic form of GPDs found in Ref.~\cite{Brodsky:2007hb}.
\noindent $\bullet$ A third benefit of QCD on the LF is its rigorous formalism to implement the DIS parton model,
alleviating the need to choose a specific frame, such as the IMF. QCD evolution
equations (\emph{DGLAP}~\cite{Gribov:1972ri}, \emph{BFKL}~\cite{BFKL} and \emph{ERBL}~\cite{ERBL};
see Sec.~\ref{sub:pQCD}) can be derived using the LF framework.
\noindent $\bullet$ A fourth advantage of LF QCD is that in the LC gauge,
gluon quanta only have transverse polarization. The difficulty of defining physically
meaningful gluon spin and angular momenta~\cite{Ji:2012gc, Ji:2013fga, Hatta:2013gta} is thus circumvented;
furthermore, negative-metric degrees of freedom (\emph{ghosts}) and
Faddeev--Popov \emph{ghosts}~\cite{Faddeev:1967fc} are unnecessary.
\noindent $\bullet$ A fifth advantage of LF QCD is that
the LC gauge allows one to identify the sum of gluon
spins with
$\Delta G$~\cite{Anselmino:1994gn}
in the longitudinal spin sum rule, Eq.~(\ref{eq:spin SR}). This will be discussed further in Section~\ref{SSR components}.
The LFWFs fulfill conservation of total angular momentum:
$J^z=\sum_{i=1}^n s^z_i + \sum_{j=1}^{n-1} l^z_j , $ Fock state by Fock state.
Here $s^z_i$ labels each constituent spin, and the $l^z_j$ are the $n-1$ independent
OAM of each $n$-particle Fock state projection.
Since $[H_{LF}, J^z] =0$, each Fock component of
the LFWF eigensolution has fixed angular momentum $ J^z $ for any choice of the 3-direction $\hat z$.
$J^z$ is also conserved at every vertex in LF time-ordered perturbation theory.
The OAM can only change by zero or one unit at any vertex in a
renormalizable theory. This provides a useful constraint on the spin structure of amplitudes in pQCD~\cite{Chiu:2017ycx}.
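The Fock-state-by-Fock-state conservation of $J^z$ can be illustrated with a toy check (the spin and OAM assignments below are invented for illustration, not extracted from any model): for a $J^z=+1/2$ eigenstate, $\sum_i s^z_i + \sum_j l^z_j$ must equal $+1/2$ in every Fock component.

```python
# Toy check of J^z conservation Fock state by Fock state (assignments are
# illustrative assumptions): the constituent spins s^z plus the n-1
# independent relative OAM l^z must give the same J^z in every component.
from fractions import Fraction as F

J_z = F(1, 2)
fock_components = [
    {"s_z": [F(1, 2), F(1, 2), F(-1, 2)],    "l_z": [0, 0]},      # 3 quarks, L = 0
    {"s_z": [F(1, 2), F(-1, 2), F(-1, 2)],   "l_z": [1, 0]},      # one unit of OAM
    {"s_z": [F(1, 2), F(1, 2), F(-1, 2), 1], "l_z": [-1, 0, 0]},  # extra gluon (s^z = 1)
]
for c in fock_components:
    assert sum(c["s_z"]) + sum(c["l_z"]) == J_z
print("J^z = 1/2 in every Fock component")
```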
While the definition of spin is unambiguous for non-relativistic objects,
several definitions exist for relativistic spin~\cite{Chiu:2017ycx}.
In the case of the front form, LF ``helicity" is the spin projected on the same $\hat z$
direction used to define LF time. Thus, by definition, LF
helicity is the projection $S^z$ of the particle spin which
contributes to the sum rule for $J^z$ conservation.
%
This is in contrast to the usual ``Jacob--Wick" helicity, defined as the projection of each
particle's spin vector along the particle's 3-momentum; the Jacob--Wick helicity is thus not conserved.
In that definition, after a Lorentz boost from
the particle's rest frame -- in which the spin is defined --
to the frame of interest, the particle momentum does not in general coincide with the $z$-direction.
Although helicity is a Lorentz invariant quantity regardless of its definition, the spin $z$-projection is not Lorentz invariant
unless it is defined on the LF~\cite{Chiu:2017ycx}.
In the LF analysis the OAM $L^z_i $ of each particle in a
composite state~\cite{Chiu:2017ycx,Brodsky:2000ii} is also defined as the projection on
the $\hat z$ direction; thus the total $J^z$ is conserved and is the same for each Fock projection of the eigenstate.
Furthermore, the LF spin of each fermion is conserved at each vertex in QCD if $m_q=0.$
One does not need to choose a specific frame, such as the Breit frame, nor require high
momentum transfer (other than $Q \gg m_q$). Furthermore, the LF definition preserves
the LF gauge $A^+=0$.
We conclude with an important prediction of LF QCD for nucleon spin structure: a non-zero anomalous
magnetic moment for a hadron requires a non-zero quark transverse OAM
$\mbox{L}_\bot$ of its components~\cite{Brodsky:1980zm, Burkardt:2005km}.
Thus the discovery of the proton anomalous magnetic moment in the 1930s by Stern and
Frisch~\cite{Stern:1933} actually gave the
first evidence for the proton's composite structure, although this was not recognized at that time.
\subsubsection{Formalism and structure functions \label{Formalism}}
Two structure functions are measured in unpolarized DIS:
$F_1(Q^2,\nu)$ and $F_2(Q^2,\nu)$\footnote{Not to be confused with the elastic
Dirac and Pauli form factors, see Section~\ref{unpo cross-section}},
where $F_1$ is proportional to the
photoabsorption cross-section of a transversely polarized virtual photon, {\it i.e.}, $F_1 \propto \sigma_T$.
Alternatively, instead of $F_1$ or $F_2$, one can define $F_L = F_2 /(2x_{Bj}) - F_1$, a structure function proportional
to the photoabsorption of a purely longitudinal virtual photon.
Each of these structure functions can be related to the imaginary part of the corresponding forward double
virtual Compton scattering amplitude $\gamma^* p \to \gamma^* p$ through the \emph{Optical Theorem}.
The inclusive DIS cross-section for the scattering of polarized leptons off a polarized nucleon
requires four structure functions (see Section~\ref{tensors}).
The additional two polarized structure functions are denoted by $g_1(Q^2,\nu)$ and $g_2(Q^2,\nu)$:
The function $g_1$ is proportional to the transverse photon scattering asymmetry. Its
first moment in the Bjorken scaling limit is related to the nucleon axial-vector current
$\langle P | \overline{\psi}(0) \gamma^{\nu}\gamma_5 \psi(\xi) | P \rangle$, which provides
a direct probe of the nucleon's spin content (see Eq.~(\ref{hadronic tensor}) and below).
The second function, $g_2$, has no simple interpretation, but $g_t=g_1+g_2$ is proportional
to the scattering amplitude of a virtual photon which has transverse polarization in its initial state
and longitudinal polarization in its final state~\cite{Jaffe:1996zw}.
If one considers all possible Lorentz invariant combinations formed with the available vectors and tensors,
three spin structure functions emerge after applying the usual
symmetries (see Section \ref{sub:general XS}).
One ($g_1$, \emph{twist}-2) is associated with the $P^+$ LC vector.
Another one ($g_3$, \emph{twist}-4, see Eq.~(\ref{eq:g3})) is associated with the $P^-$ direction. The third one,
($g_t$, \emph{twist}-3) is associated with the transverse direction; {\it i.e.}, it represents effects arising
from the nucleon spin polarized transversely to the LC.
Only $g_1$ and $g_2$ are typically considered in polarized DIS analyses because $g_t$ and $g_3$ are suppressed as
$1/Q$ and $1/Q^2$, respectively.
The DIS cross-section involves the contraction of the hadronic and leptonic tensors. If the
target is polarized in the beam direction one has~\cite{Roberts}:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\left(\frac{d^2\sigma}{d\Omega dE'}\right)_{\parallel} = \sigma_{Mott}
\bigg\{ \frac{F_2(Q^2,\nu)}{\nu} + \frac{2F_1(Q^2,\nu)}{M} \tan^2\frac{\theta}{2} \nonumber \\
\pm \frac{2}{M}\tan^2\frac{\theta}{2}
\bigg[ \frac{E+E'\cos\theta}{\nu}g_1(Q^2,\nu) - \gamma^2 g_2(Q^2,\nu)\bigg]\bigg\} ,
\label{eq:sigmapar}
\end{eqnarray}
where $\pm$ indicates that the initial lepton is polarized
parallel {\it vs.} antiparallel to the beam direction. Here $\gamma^2\equiv Q^2/\nu^2$. At fixed
$ x_{Bj} = Q^2/(2M\nu)$, the contribution from $g_2$ is suppressed as $\approx1/E$ in the target rest frame.
It is useful to define
$\sigma_{Mott}$, the photoabsorption cross-section for a point-like, infinitely heavy, target in its rest frame:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\sigma_{Mott} \equiv \frac{\alpha^2\cos^2(\theta/2)}{4E^2\sin^{4}(\theta/2)}.
\label{eq:mott}
\end{equation}
The $\sigma_{Mott}$ factorization thus isolates the effects of the hadron structure.
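A short numerical sketch of Eq.~(\ref{eq:mott}) (the beam energies and angles below are arbitrary illustrative choices): the $1/\sin^4(\theta/2)$ factor makes the rate steeply forward-peaked, and the $1/E^2$ prefactor sets the overall scale.

```python
from math import sin, cos, radians

ALPHA = 1 / 137.036  # fine-structure constant (assumed value)

def sigma_mott(E, theta_deg):
    """Mott cross-section, Eq. (eq:mott): point-like, infinitely heavy target.
    E in GeV; returns GeV^-2 per steradian (natural units)."""
    half = radians(theta_deg) / 2
    return ALPHA**2 * cos(half)**2 / (4 * E**2 * sin(half)**4)

# Forward peaking: reducing the angle from 15 to 5 degrees raises the rate
# by roughly two orders of magnitude.
print(sigma_mott(10.0, 5.0) / sigma_mott(10.0, 15.0))
# 1/E^2 scaling at fixed angle: doubling E quarters the cross-section.
print(sigma_mott(20.0, 10.0) / sigma_mott(10.0, 10.0))  # -> 0.25
```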
If the target polarization is perpendicular to both the beam direction and the
lepton scattering plane, then:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\left(\frac{d^2\sigma}{d\Omega dE'}\right)_{\perp} = \sigma_{Mott}
\bigg\{\frac{F_2(Q^2,\nu)}{\nu}+\frac{2F_1(Q^2,\nu)}{M}\tan^2\frac{\theta}{2} \nonumber \\
\pm\frac{2}{M}\tan^2\frac{\theta}{2}E'\sin\theta \bigg[\frac{1}{\nu}g_1(Q^2,\nu)+\frac{2E}{\nu^2}g_2(Q^2,\nu) \bigg] \bigg\}.
\label{eq:sigmaperp}
\end{eqnarray}
In this case $g_2$ is not suppressed compared to $g_1$, since typically $\nu \approx E$ in DIS in the nucleon target rest frame.
The unpolarized contribution is evidently identical in Eqs.~(\ref{eq:sigmapar}) and~(\ref{eq:sigmaperp}).
Combining them provides the cross-section for any target polarization direction within the plane of the lepton scattering.
The general formula for any polarization direction, including nucleon spin normal to the lepton plane, is given in Ref.~\cite{Jaffe:1989xx}.
From Eqs.~(\ref{eq:sigmapar}) and~(\ref{eq:sigmaperp}), the cross-section relative asymmetries are:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
A_{\Vert}\equiv\frac{\sigma^{\downarrow\Uparrow}-\sigma^{\uparrow\Uparrow}}{\sigma^{\downarrow\Uparrow}+\sigma^{\uparrow\Uparrow}}=\frac{2\tan^2\frac{\theta}{2}\left[\frac{E+E'\cos\theta}{\nu}g_1(Q^2,\nu)- \gamma^2 g_2(Q^2,\nu)\right]}{\frac{MF_2(Q^2,\nu)}{\nu}+2F_1(Q^2,\nu)\tan^2\frac{\theta}{2}},
\label{Apar}
\end{equation}
\begin{equation}
\vspace{-0.2cm}
A_{\bot}\equiv\frac{\sigma^{\downarrow\Rightarrow}-\sigma^{\uparrow\Rightarrow}}{\sigma^{\downarrow\Rightarrow}+\sigma^{\uparrow\Rightarrow}}=\frac{2\tan^2\frac{\theta}{2}E'\sin\theta\big[\frac{1}{\nu}g_1(Q^2,\nu)+\frac{2E}{\nu^2}g_2(Q^2,\nu)\big]}{\frac{MF_2(Q^2,\nu)}{\nu}+2F_1(Q^2,\nu)\tan^2\frac{\theta}{2}}.
\label{Aperp}
\end{equation}
\subsubsection{Single-spin asymmetries}
The beam and target must both be polarized to produce non-zero asymmetries in an inclusive cross-section.
The derivation of these asymmetries typically assumes the ``first Born approximation",
a purely electromagnetic interaction, and the standard symmetries -- in particular C, P and T invariances.
In contrast, single-spin asymmetries (SSA) arise when one of these assumptions is invalidated; {\it e.g.}, in
SIDIS by the selection
of a particular direction corresponding to the 3-momentum of a produced hadron.
Note that genuine T-violation should be distinguished from ``pseudo T-odd" asymmetries.
For example, the final-state interaction in single-spin SIDIS $ \ell p_\updownarrow \to \ell' H X$
with a polarized proton target produces correlations such as
$i \vec S_p \cdot \vec q \times \vec p_H$.
Here $\vec S_p$ is the proton spin vector
and $\vec p_H$ is the 3-vector of the tagged final-state hadron.
This triple product changes sign under time reversal $T \to -T$; however, the factor $i$,
which arises from the struck quark FSI on-shell cut diagram,
provides a signal which retains time-reversal invariance.
The single-spin asymmetry measured in SIDIS thus can access effects beyond the naive
parton model described in Section~\ref{parton model}~\cite{DAlesio:2007bjf} such as rescattering
or ``lensing" corrections~\cite{Brodsky:2002cx}.
Measurements of SSA have in fact become a vigorous research area of QCD called ``Transversity".
The observation of parity violating (PV) SSA in DIS can test fundamental symmetries
of the Standard Model~\cite{Wang:2014bba}. When one allows for $Z^0$ exchange, the
PV effects are enhanced by the interference between the $Z^0$ and virtual photon interactions.
Parity-violating interactions in the elastic and resonance region of DIS can also reveal novel aspects
of nucleon structure~\cite{Armstrong:2012bi}.
Other SSA phenomena, {\it e.g.}, correlations arising \emph{via} two-photon exchange,
have been investigated both theoretically~\cite{DeRujula:1973pr} and
experimentally~\cite{Zhang:2015kna}.
In the inclusive quasi-elastic experiment reported in Ref.~\cite{Zhang:2015kna}, for which the target
was polarized vertically ({\it i.e.}, perpendicular to the scattering plane), the SSA is
sensitive to departures from the single photon time-reversal conserving contribution.
\subsubsection{Photo-absorption asymmetries\label{opt. theo. and photo-abs}}
In electromagnetic photo-absorption reactions, the probe is the photon. Thus, instead of
lepton asymmetries, $A_{\Vert}$ and $A_{\bot}$, one can also consider the physics of
photoabsorption with polarized photons. The effect of polarized photons can be deduced from combining
$A_{\Vert}$ and $A_{\bot}$ (Eq.~(\ref{eq:asyp}) below).
The photo-absorption cross-section is related to the
imaginary part of the forward virtual Compton scattering amplitude
by the \emph{Optical Theorem}.
Of the ten angular momentum-conserving Compton amplitudes, only
four are independent because of parity and time-reversal symmetries.
The following ``partial cross-sections" are typically used~\cite{Roberts}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\sigma_{T,3/2} = \frac{4\pi^2\alpha} {M \kappa_{\gamma^*}} \left[F_1(Q^2,\nu) - g_1(Q^2, \nu) + \gamma^2 g_2(Q^2,\nu) \right] ,
\label{eq:sigma3/2}
\end{equation}
\begin{equation}
\vspace{-0.1cm}
\sigma_{T,1/2}=\frac{4\pi^2\alpha}{M\kappa_{\gamma^*}}\left[F_1(Q^2,\nu)+g_1(Q^2,\nu)-\gamma^2g_2(Q^2,\nu)\right],
\label{eq:sigma1/2}
\end{equation}
\begin{equation}
\vspace{-0.1cm}
\sigma_{L,1/2}=\frac{4\pi^2\alpha}{M\kappa_{\gamma^*}}\left[-F_1(Q^2,\nu)+\frac{M}{\nu}(1+\frac{1}{\gamma^2})F_2(Q^2,\nu)\right],
\label{eq:sigma1/2L}
\end{equation}
\begin{equation}
\vspace{-0.1cm}
{\sigma}_{LT,3/2}'=\frac{4\pi^2\alpha}{\kappa_{\gamma^*}}\frac{\gamma}{\nu}\left[g_1(Q^2,\nu)+g_2(Q^2,\nu)\right],
\label{eq:sigma3/2TL}
\end{equation}
where {\scriptsize \emph{T,1/2} } and {\scriptsize \emph{T,3/2}} refer to the absorption
of a photon with its spin antiparallel or parallel,
respectively, to the spin of the longitudinally polarized target. As a result,
{\scriptsize1/2} and {\scriptsize 3/2} are the total spin projections along
the direction of the photon momentum. The notation {\scriptsize \emph{L}} refers to longitudinal virtual photon absorption and
{\scriptsize \emph{LT}} defines the contribution from the transverse-longitudinal interference.
The effective cross-sections can be negative and depend on the convention chosen for
the flux factor of the virtual photon, which is proportional to
the ``equivalent energy of the virtual photon'' $\kappa_{\gamma^*}$.
(Thus, the nomenclature of ``cross-section" can be misleading.)
The expression for $\kappa_{\gamma^*}$ is arbitrary
but must match the real photon energy $\kappa_{\gamma}=\nu$
when $Q^2\to0$. In the Gilman convention, $\kappa_{\gamma^*}=\sqrt{\nu^2+Q^2}$~\cite{Gilman:1967sn}.
The Hand convention~\cite{Hand:1963bb} $\kappa_{\gamma^*}=\nu-Q^2/(2M)$ has also been
widely used. Partial cross-sections
must be normalized by $\kappa_{\gamma^*}$ since the total cross-section, which is
proportional to the virtual photon flux times a sum of partial cross-sections, is an observable and thus
convention-independent. We define:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\sigma_T\equiv \frac{\sigma_{T,1/2}+\sigma_{T,3/2}}{2}=\frac{4\pi^2\alpha}{M\kappa_{\gamma^*}}F_1 , \\\
~~~~~\sigma_{L}\equiv\sigma_{L,1/2},
\nonumber
\end{equation}
\begin{equation}
\vspace{-0.1cm}
\sigma_{TT} \equiv \frac{\sigma_{T,1/2} - \sigma_{T,3/2}}{2} \equiv -\sigma_{TT}' = \frac{4\pi^2\alpha}{M\kappa_{\gamma^*}}(g_1-\gamma^2g_2), \label{sigmaTT} \\\
~~~~ \sigma_{LT}'\equiv\sigma_{LT,3/2}' ,
\end{equation}
\begin{equation}
\vspace{-0.1cm}
R\equiv\frac{\sigma_{L}}{\sigma_{T}}=\frac{1+\gamma^2}{2x}\frac{F_2}{F_1}-1,
\label{R(Q2)}
\end{equation}
as well as the two asymmetries
$
A_1\equiv \sigma_{TT} / \sigma_{T},
A_2\equiv \sigma_{LT} / \left(\sqrt{2}\sigma_{T}\right),
$
with $\left|A_2\right|\leq \sqrt{R}$, since
$\left|\sigma_{LT}\right|<\sqrt{\sigma_{T}\sigma_{L}}$.
A tighter constraint, the ``Soffer bound"~\cite{Soffer:1994ww}, which is also based on
\emph{positivity constraints}, can be derived as well. These constraints can be used to improve PDF determinations~\cite{Leader:2005kw}.
Positivity also constrains the other structure functions and their moments, {\it e.g.}, $\left|g_1\right|\leq F_1$. This is readily
understood when structure functions are interpreted in terms of PDFs, as discussed in the next section.
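These algebraic relations can be verified numerically. In the sketch below all structure-function values and kinematics are illustrative assumptions; the check confirms that the combinations of Eqs.~(\ref{eq:sigma3/2})--(\ref{eq:sigma1/2L}) isolate $F_1$ and $g_1-\gamma^2 g_2$, that $\sigma_L/\sigma_T$ reproduces the closed form of Eq.~(\ref{R(Q2)}), and that both flux conventions reduce to $\nu$ at the real-photon point.

```python
from math import pi, sqrt

# Illustrative kinematics and structure-function values (assumptions):
alpha, M = 1 / 137.036, 0.939
Q2, nu = 2.5, 9.0                     # GeV^2, GeV
gamma2 = Q2 / nu**2                   # gamma^2 = Q^2/nu^2
x = Q2 / (2 * M * nu)                 # Bjorken x
F1, F2, g1, g2 = 0.40, 0.30, 0.15, -0.03
kappa = sqrt(nu**2 + Q2)              # Gilman flux convention
pref = 4 * pi**2 * alpha / (M * kappa)

sigma_T32 = pref * (F1 - g1 + gamma2 * g2)                    # Eq. (eq:sigma3/2)
sigma_T12 = pref * (F1 + g1 - gamma2 * g2)                    # Eq. (eq:sigma1/2)
sigma_L   = pref * (-F1 + (M / nu) * (1 + 1 / gamma2) * F2)   # Eq. (eq:sigma1/2L)

sigma_T  = (sigma_T12 + sigma_T32) / 2
sigma_TT = (sigma_T12 - sigma_T32) / 2
assert abs(sigma_T - pref * F1) < 1e-12                       # sigma_T isolates F1
assert abs(sigma_TT - pref * (g1 - gamma2 * g2)) < 1e-12      # sigma_TT isolates g1 - gamma^2 g2
# R = sigma_L/sigma_T agrees with the closed form (1+gamma^2)/(2x) F2/F1 - 1:
assert abs(sigma_L / sigma_T - ((1 + gamma2) / (2 * x) * F2 / F1 - 1)) < 1e-12
# Both flux conventions match the real-photon energy when Q^2 -> 0:
assert abs(sqrt(nu**2 + 0.0) - nu) < 1e-12                    # Gilman
assert abs((nu - 0.0 / (2 * M)) - nu) < 1e-12                 # Hand
print("partial cross-section identities verified")
```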
The $A_1$ and $A_2$ asymmetries are related to those defined by:
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.2cm}
A_{\Vert}=D(A_1+\eta A_2), \label{eq:asyp} \\\
~~~~~A_{\bot}=d(A_2-\zeta A_1),
\end{equation}
where $D\equiv\frac{1-\epsilon E'/E}{1+\epsilon R}$ , $d\equiv D\sqrt{\frac{2\epsilon}{1+\epsilon}}$,
$\eta\equiv\frac{\epsilon\sqrt{Q^2}}{E-\epsilon E'}$, $\zeta \equiv\eta\frac{1+\epsilon}{2\epsilon}$, and
$\epsilon$ is given below Eq.~(\ref{eq:sep}).
\subsubsection{Structure function extraction}\label{Structure functions extraction}
One can use the relative asymmetries $A_1$ and $A_2$,
or the cross-section differences $\Delta \sigma_{\parallel}$ and $\Delta \sigma_{\perp}$,
in order to extract $g_1$ and $g_2$.
The SLAC, CERN and DESY experiments used the asymmetry method, whereas the
JLab experiments have used both techniques.
\noindent \textbf{Extraction using relative asymmetries\label{par:Extraction asy}}
This is the simplest method: only relative measurements are necessary and
normalization factors (detector acceptance and inefficiencies,
incident lepton flux, target density, and data acquisition inefficiency) cancel out with high accuracy.
Systematic uncertainties are therefore minimized.
However, measurements of the unpolarized structure functions $F_1$ and $F_2$ (or
equivalently $F_1$ and the ratio $R$, Eq.~(\ref{R(Q2)})) must be used as input. In addition,
the measurements must be corrected for any unpolarized materials present in and around the target.
These two contributions increase the total systematic uncertainty.
Eqs.~(\ref{Apar}), (\ref{Aperp}) and (\ref{eq:asyp}) yield
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
A_1=\frac{g_1-\gamma^2g_2}{F_1}, \label{eq:A1} \\\
~~~~~A_2=\frac{\gamma\left(g_1+g_2\right)}{F_1},
\end{equation}
and thus
\begin{equation}
\vspace{-0.1cm}
g_1=\frac{F_1}{1+\gamma^2}\left[A_1+\gamma A_2\right] =
\frac{y(1+\epsilon R)F_1}{(1-\epsilon)(2-y)}\left[A_{\parallel}+\tan(\theta/2) A_{\perp}\right],
\nonumber
\end{equation}
\begin{equation}
g_2=\frac{F_1}{\gamma\left(1+\gamma^{2}\right)}\left[A_2-\gamma A_1\right]=\frac{y^2(1+\epsilon R)F_1}{2(1-\epsilon)(2-y)}\left[\frac{E+E'\cos\theta}{E'\sin\theta}A_{\perp}-A_{\parallel}\right].
\nonumber
\end{equation}
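The inversion can be checked by a round trip with illustrative numbers (all inputs below are assumptions): construct $A_1$ and $A_2$ from chosen $g_1$ and $g_2$ via Eq.~(\ref{eq:A1}), then apply the extraction formulas, with the $\gamma(1+\gamma^2)$ normalization for $g_2$, and recover the inputs exactly.

```python
# Round-trip check of the asymmetry-based extraction (inputs are illustrative
# assumptions): build A1, A2 from chosen g1, g2 via Eq. (eq:A1), then invert.
F1, g1, g2 = 0.40, 0.15, -0.03
gamma = 0.18                       # gamma = sqrt(Q^2)/nu, assumed value
gamma2 = gamma**2

A1 = (g1 - gamma2 * g2) / F1       # Eq. (eq:A1)
A2 = gamma * (g1 + g2) / F1

g1_rec = F1 / (1 + gamma2) * (A1 + gamma * A2)
g2_rec = F1 / (gamma * (1 + gamma2)) * (A2 - gamma * A1)
assert abs(g1_rec - g1) < 1e-12
assert abs(g2_rec - g2) < 1e-12
print("recovered g1, g2 =", round(g1_rec, 4), round(g2_rec, 4))
```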
\noindent \textbf{Extraction from cross-section differences \label{par:Extraction-par-Xs}}
The advantage of this method is that it eliminates all unpolarized material contributions.
In addition, measurements of $F_1$ and $F_2$ are not needed. However, measuring absolute quantities is
usually more involved, which may lead
to a larger systematic error. According to Eqs.~(\ref{eq:sigmapar})
and (\ref{eq:sigmaperp}),
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\Delta\sigma_{\parallel}\equiv\frac{d^2\sigma^{\downarrow\Uparrow}}{dE'd\Omega}-\frac{d^2\sigma^{\uparrow\Uparrow}}{dE'd\Omega}=\frac{4\alpha^2}{MQ^2}\frac{E'}{E\nu}\left[g_1(E+E'\cos\theta)-Q^2\frac{g_2}{\nu}\right],
\nonumber
\end{equation}
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.1cm}
\Delta\sigma_{\perp}\equiv\frac{d^2\sigma^{\downarrow\Rightarrow}}{dE'd\Omega}-\frac{d^2\sigma^{\uparrow\Rightarrow}}{dE'd\Omega}=\frac{4\alpha^2}{MQ^2}\frac{E'^2}{E\nu}\sin\theta\left[g_1+2E\frac{g_2}{\nu}\right],
\nonumber
\end{equation}
which yields
\vspace{-0.3cm}
\small
\begin{equation}
\vspace{-0.3cm}
g_1=\frac{2ME\nu Q^2}{8\alpha^2E'(E+E')}\left[\Delta \sigma_{\parallel}+\tan(\theta/2)\Delta \sigma_{\perp}\right],
\\
~
g_2=\frac{M\nu^2Q^2}{8\alpha^2E'(E+E')}\left[\frac{E+E'\cos\theta}{E'\sin\theta}\Delta \sigma_{\perp}-\Delta \sigma_{\parallel}\right].
\nonumber
\end{equation}
\normalsize
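The same round-trip test can be done for the cross-section-difference method (all kinematics and structure-function values below are illustrative assumptions): the combination $\Delta\sigma_{\parallel}+\tan(\theta/2)\Delta\sigma_{\perp}$ cancels $g_2$ exactly, and the orthogonal combination cancels $g_1$, so the extraction formulas recover the inputs.

```python
from math import sin, cos, tan, radians

# Round-trip check of the cross-section-difference extraction (all inputs
# are illustrative assumptions, in GeV-based natural units).
alpha, M = 1 / 137.036, 0.939
E, Ep, theta = 11.0, 8.0, radians(12.0)    # beam energy, scattered energy, angle
nu = E - Ep                                # energy transfer
Q2 = 4 * E * Ep * sin(theta / 2) ** 2      # photon virtuality
g1, g2 = 0.15, -0.03

K = 4 * alpha**2 * Ep / (M * Q2 * E * nu)  # common kinematic prefactor
dsig_par  = K * (g1 * (E + Ep * cos(theta)) - Q2 * g2 / nu)
dsig_perp = K * Ep * sin(theta) * (g1 + 2 * E * g2 / nu)

g1_rec = 2 * M * E * nu * Q2 / (8 * alpha**2 * Ep * (E + Ep)) * (
    dsig_par + tan(theta / 2) * dsig_perp)
g2_rec = M * nu**2 * Q2 / (8 * alpha**2 * Ep * (E + Ep)) * (
    (E + Ep * cos(theta)) / (Ep * sin(theta)) * dsig_perp - dsig_par)
assert abs(g1_rec - g1) < 1e-9
assert abs(g2_rec - g2) < 1e-9
print("recovered g1, g2 =", round(g1_rec, 4), round(g2_rec, 4))
```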
\subsubsection{The Parton Model \label{parton model}}
\noindent \textbf{DIS in the Bjorken limit}
In the Bjorken limit, the moving nucleon is effectively described as a bound state of
nearly collinear partons. The underlying dynamics manifests itself in the partons'
position and momentum distributions. The partons are assumed to be loosely bound, and the lepton scatters incoherently
only on the point-like quark or antiquark constituents since gluons are electrically neutral. In this simplified description
the hadronic tensor takes a form similar to that of the leptonic tensor.
This simplified model, the ``Parton Model", was introduced by Feynman~\cite{Feynman:1969wa}
and applied to DIS by Bjorken and Paschos~\cite{Bjorken:1969ja}.
Color confinement, quark and nucleon masses, transverse momenta
and transverse quark spins are neglected and Bjorken scaling is satisfied. Thus, in this approximation,
studying the spin structure of the nucleon is reduced to studying its helicity structure. It
is a valid description only in the IMF~\cite{Weinberg:1966jm}, or equivalently, the frame-independent
Fock state picture of the LF. After integration over the quark momenta
and the summation over quark flavors, the measured hadronic tensor can be matched to the hadronic tensor parameterized by the structure functions to obtain:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
F_1(Q^2,\nu) \to F_1(x)=\sum_i\frac{e_i^2}{2}\left[q_i^\uparrow(x)+q_i^\downarrow(x)+\overline{q}_{i}^\uparrow(x)+\overline{q}_i^\downarrow(x)\right],
\label{eq:eqf1parton}
\end{equation}
\vspace{-0.3cm}
\begin{equation}
F_2(Q^2,\nu) \to F_2(x)=2xF_1(x),
\label{eq:Callan-gross}
\end{equation}
\vspace{-0.3cm}
\begin{equation}
g_1(Q^2,\nu) \to g_1(x)=\sum_i\frac{e_i^2}{2}\left[q_i^\uparrow(x)-q_i^\downarrow(x)+\overline{q}_i^\uparrow(x)-\overline{q}_i^\downarrow(x)\right],
\label{eq:eqg1parton}
\end{equation}
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
g_2(Q^2,\nu) \to g_2(x)=0,
\end{equation}
where $i$ is the quark flavor, $e_i$ its charge and $q^\uparrow(x)$
($q^\downarrow(x)$) the probability that
its spin is aligned (antialigned) with the nucleon spin at a given
$x$.
Electric charges are squared in Eqs.~(\ref{eq:eqf1parton}) and~(\ref{eq:eqg1parton}), thus the inclusive DIS
cross-section in the parton model is unable to distinguish antiquarks from quarks.
The unpolarized and polarized PDFs are respectively
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
q_i(x)\equiv q_i^\uparrow(x)+q_i^\downarrow(x), \\\
~~~~~\Delta q_i(x)\equiv q_i^\uparrow(x)-q_i^\downarrow(x).
\label{eq:pdf def.}
\end{equation}
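These definitions can be exercised with toy distributions (the PDF shapes below are invented for illustration, not fits): the Callan-Gross relation $F_2 = 2xF_1$ and the positivity constraint $|g_1| \leq F_1$ follow directly from Eqs.~(\ref{eq:eqf1parton})--(\ref{eq:pdf def.}).

```python
# Toy-model check of the parton-model relations (the PDF shapes are invented
# for illustration): F2 = 2 x F1 (Callan-Gross) and |g1| <= F1 (positivity).
def q_up(x):   return 2.0 * x**0.5 * (1 - x)**3   # q^(spin aligned), toy shape
def q_down(x): return 1.2 * x**0.5 * (1 - x)**3   # q^(spin anti-aligned)

e2 = 4.0 / 9.0                                    # single up-quark flavor, e_i^2
for x in (0.1, 0.3, 0.6):
    F1 = e2 / 2 * (q_up(x) + q_down(x))           # Eq. (eq:eqf1parton), one flavor
    g1 = e2 / 2 * (q_up(x) - q_down(x))           # Eq. (eq:eqg1parton)
    F2 = e2 * x * (q_up(x) + q_down(x))           # F2 = x sum_i e_i^2 q_i(x)
    assert abs(F2 - 2 * x * F1) < 1e-12           # Callan-Gross, Eq. (eq:Callan-gross)
    assert abs(g1) <= F1                          # positivity: |Delta q| <= q
print("Callan-Gross and positivity hold for the toy PDFs")
```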
\begin{figure}
\center
\includegraphics[scale=0.49]{pdf_npol_pol}
\vspace{-0.3cm}
\caption{\label{fig:pdf_dist}\small{Left: Unpolarized PDFs as a function
of $x$ for the proton from NNPDF~\cite{Ball:2013lla,Ball:2017nwa}. The
\emph{valence quarks} are denoted $u_{v}$ and
$d_{v}$, with $q_v(x) = q(x) - \bar q(x)$ normalized to the valence content of the proton: $\int_0^1 dx \, u_v(x)= 2$ and $\int_0^1 dx \, d_v(x) = 1$.
The gluon distribution $g$ is divided by 10 in the figure.
Right: Polarized PDFs for the proton. The $\mu^2$ values refer to the scale at which the
PDFs are calculated.}}
\vspace{-0.6cm}
\end{figure}
These distributions can be extracted from inclusive DIS (see e.g. Fig.~\ref{fig:pdf_dist}).
The gluon distribution, also shown in Fig.~\ref{fig:pdf_dist},
can be inferred from sum rules and global fits of the DIS data. However,
the identification of the specific contribution of quark and gluon OAM to the nucleon spin
(Fig.~\ref{fig:Lq_dist}) is beyond the parton model analysis.
Note that Eq.~(\ref{eq:pdf def.}) imposes the constraint $\left|\Delta q_i(x)\right| \leq q_i(x)$, which together with
Eqs.~(\ref{eq:eqf1parton}) and~(\ref{eq:eqg1parton}) yields the
\emph{positivity constraint} $\left|g_1\right|\leq F_1$.
Eqs.~(\ref{eq:eqf1parton}) and~(\ref{eq:eqg1parton}) are derived assuming that there is no interference between
amplitudes for lepton scattering at high momentum transfer on different types of quarks: the final states
in the parton model are distinguishable, depending on which quark participates in the scattering and is
ejected from the nucleon target. Likewise, the derivation assumes that quantum-mechanical coherence
between different quark scattering amplitudes is absent, since the quarks are taken to be quasi-free.
Such interference and coherence effects can arise at lower momentum transfer, where quarks can coalesce
into specific hadrons and thus participate together in the scattering amplitude. In such a case, the specific
quark which scatters cannot be identified as the struck quark. This resonance regime is discussed in Sections~\ref{resonance region}
and~\ref{elastic scatt}.
The parton model naturally predicts
1) Bjorken scaling: the structure functions depend only on $x = x_{Bj}$;
2) the Callan-Gross relation~\cite{Callan:1969uq}, $F_2=2xF_1$, reflecting the spin-$1/2$ nature of quarks;
{\it i.e.}, $F_L=0$ (no absorption of longitudinal photons in DIS due to helicity conservation);
3) the interpretation of $x_{Bj}$ as the momentum fraction carried by the struck quark
in the IMF~\cite{Weinberg:1966jm}, or equivalently, the quarks' LF momentum fraction $x= k^+ / P^+$; and
4) a probabilistic interpretation of the structure functions: they are
the square of the parton wave functions and can be constructed
from individual quark distributions and polarizations in momentum space.
The parton model interpretations of $x_{Bj}$ and of the structure functions are only valid in the DIS limit
and at LO in $\alpha_s$. For example, unpolarized PDFs extracted at NLO may be
negative~\cite{Ball:2013lla, Leader:2005ci}, see also~\cite{Brodsky:2002ue}.
In the parton model, only two structure functions are needed to describe the nucleon.
The vanishing of $g_2$ in the parton model does not mean it is zero
in pQCD. In fact, pQCD predicts a non-zero value for $g_2$, see Eq.~(\ref{eq:g2ww}).
The structure function $g_2$ appears when $Q^2$ is finite due to
1) quark interactions, and
2) transverse momenta and spins (see {\it e.g.},~\cite{Anselmino:1994gn}).
It should also be noted that the parton model cannot account for DDIS
events $\ell p \to \ell' p X$, in which the proton remains intact in the final state.
Such events contribute roughly 10\% of the total DIS rate.
DIS experiments are typically performed at beam energies for which at most the three or four
lightest quark flavors can appear in the final state.
Thus, for the proton and the neutron, with three active quark flavors:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
F_1^p(x) & = & \frac{1}{2} \bigg(\frac{4}{9} \big(u(x)+ \overline{u} (x)\big)+
\frac{1}{9}\big(d(x)+\overline{d}(x)\big)+\frac{1}{9}\big(s(x)+\overline{s}(x)\big)\bigg),
\nonumber
\\
\vspace{-0.2cm}
g_1^p(x) & = &\frac{1}{2}\bigg(\frac{4}{9}\big(\Delta u(x)+\Delta\overline{u}(x)\big)+
\frac{1}{9}\big(\Delta d(x)+\Delta\overline{d}(x)\big)+\frac{1}{9}\big(\Delta s(x)+\Delta\overline{s}(x)\big)\bigg),
\nonumber
\\
\vspace{-0.1cm}
F_1^n(x) & = & \frac{1}{2}\bigg(\frac{1}{9}\big(u(x)+\overline{u}(x)\big)+
\frac{4}{9}\big(d(x)+\overline{d}(x)\big)+\frac{1}{9}\big(s(x)+\overline{s}(x)\big)\bigg),
\nonumber
\\
\vspace{-0.0cm}
g_1^n(x) & = & \frac{1}{2}\bigg(\frac{1}{9}\big(\Delta u(x)+\Delta\overline{u}(x)\big)+
\frac{4}{9}\big(\Delta d(x)+\Delta\overline{d}(x)\big)+\frac{1}{9}\big(\Delta s(x)+\Delta\overline{s}(x)\big)\bigg),
\nonumber
\end{eqnarray}
where the PDFs $q(x), ~\overline q(x), ~ \Delta q(x)$, and $ \Delta \overline q(x)$ correspond to the longitudinal light-front momentum fraction distributions of the quarks inside the nucleon.
This analysis assumes SU(2)$_f$ charge symmetry, which is typically believed to hold at the 1\% level~\cite{Miller:2006tv, Cloet:2012db}.
In the Bjorken limit, this description provides spin information in terms of $x$ (or $x$ and $Q^2$ at lower energies, as discussed below).
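As a worked example, taking the difference of the proton and neutron expressions above shows that the strange contributions cancel:

```latex
\begin{eqnarray}
g_1^p(x) - g_1^n(x) = \frac{1}{6}\Big(\Delta u(x)+\Delta\overline{u}(x)
-\Delta d(x)-\Delta\overline{d}(x)\Big) , \nonumber
\end{eqnarray}
```

whose first moment underlies the Bjorken sum rule discussed in Section~\ref{DIS SR}.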
The spatial spin distribution is also accessible, \emph{via} the nucleon axial form factors. This is analogous to the fact that
the nucleon's electric charge and current distributions are accessible through the electromagnetic form factors
measured in elastic lepton-nucleon scattering (see Sec. \ref{elastic scatt}).
Form factors and parton distribution functions are linked by GPDs and Wigner functions, which correlate the spatial and
longitudinal momentum
information~\cite{Burkardt:2000za}, including that of OAM~\cite{Lorce:2017wkb}.
\subsubsection{Perturbative QCD at finite $Q^2$ \label{sub:pQCD}}
In pQCD, the struck quarks in DIS can radiate gluons;
the simplicity of Bjorken scaling is then broken by computable logarithmic corrections.
The lowest-order $\alpha_s$ corrections arise from
1) the vertex correction, where a gluon links the incoming and outgoing quark lines;
2) gluon bremsstrahlung from either the incoming or the outgoing quark line;
3) $q$-$\overline{q}$ pair creation or annihilation. The latter leads to the axial anomaly and allows gluons
to contribute to the nucleon spin (see Sec.~\ref{DIS SR}).
These corrections introduce a power of $\alpha_s$ at each order, which
leads to logarithmic dependence in $Q^2$,
corresponding to the behavior of the strong coupling $\alpha_s(Q^2)$ at high $Q^2$~\cite{Deur:2016tte}.
Amplitude calculations, including gluon radiation, exist up to
next-to-next-to leading order (NNLO) in $\alpha_s$
\cite{Gorishnii:1990vf}. In some particular cases, calculations or assessments exist up to fourth order,
{\it e.g.}, for the Bjorken sum rule; see Section~\ref{DIS SR}.
These gluonic corrections are similar to the effects derived from photon emissions (radiative corrections)
in QED; they are therefore called pQCD radiative corrections.
As in QED, infrared and ultraviolet divergences appear, and calculations must
be regularized and then renormalized. Dimensional regularization
is often used for pQCD (minimal subtraction scheme,
$\overline{MS}$)~\cite{Bardeen:1978yd}, although several other schemes are also commonly used.
The pQCD radiative corrections are described to first approximation by the
\emph{DGLAP evolution equations}~\cite{Gribov:1972ri}.
This formalism correctly predicts the $Q^2$-dependence of structure functions
in DIS.
The pQCD radiative corrections are renormalization scheme-independent at any order if one applies the
\emph{BLM/PMC}~\cite{the:BLM, the:PMC} scale-setting procedure.
The small-$x_{Bj}$ power-law Regge behavior of structure functions can be
related to the exchange of the Pomeron trajectory using
the \emph{BFKL equations}~\cite{BFKL}.
Similarly the $t$-channel exchange of the isospin $I=1$ Reggeon trajectory with
$\alpha_R = 1/2$ in DVCS can explain the observed behavior
$F_{2p}(x_{Bj},Q^2) - F_{2n}(x_{Bj},Q^2) \propto \sqrt {x_{Bj}}$,
as shown by Kuti and Weisskopf~\cite{Kuti:1971ph}.
This small-$x$ Regge behavior is incorporated in the LFHQCD structure for the $t$-vector meson exchange~\cite{deTeramond:2018ecg}.
A general discussion of the application of Regge dynamics to DIS structure functions is given in
Ref.~\cite{Landshoff:1970ff}.
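To make the link explicit (a standard Regge-theory step, spelled out here for the reader): the $t$-channel exchange of a trajectory with intercept $\alpha_R(0)$ contributes to the structure function at small $x_{Bj}$ as

```latex
\begin{eqnarray}
F_2(x_{Bj}) \sim x_{Bj}^{1-\alpha_R(0)} \quad (x_{Bj} \to 0) , \nonumber
\end{eqnarray}
```

so the intercept $\alpha_R = 1/2$ of the $I=1$ Reggeon immediately yields the $\sqrt{x_{Bj}}$ behavior quoted above.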
The evolution of $g_1(x_{Bj},Q^2$) at low-$x_{Bj}$ has been investigated by
Kirschner and Lipatov, and Blumlein and Vogt~\cite{Kirschner:1983di},
by Bartels, Ermolaev and Ryskin~\cite{Bartels:1995iu};
and more recently by Kovchegov, Pitonyak and Sievert~\cite{Kovchegov:2015pbl}.
See~\cite{Blumlein:2012bf} for a summary of the small-$x_{Bj}$ behavior of the PDFs.
The distribution and evolution at low-$x_{Bj}$ of the gluon spin contributions
$\Delta g(x_{Bj})$ and $\mbox{L}_g(x_{Bj})$ is discussed
in~\cite{Hatta:2016aoc}, with the suggestion that in this domain, $\mbox{L}_g(x_{Bj}) \approx -\Delta g(x_{Bj})$.
In addition to structure functions, the evolution of the \emph{distribution amplitudes} in $\ln (Q^2)$
defined from the valence LF Fock state is also known and given
by the \emph{ERBL} equations~\cite{ERBL}.
Although the evolution of the $g_1$ structure function is
known to NNLO~\cite{Moch:2014sna}, we will focus here on the leading order (LO) analysis
in order to demonstrate the general formalism. At \emph{leading-twist} one finds
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
g_1(x_{Bj},Q^2)=\frac{1}{2} \sum_i e_i^2 \Delta q_i(x_{Bj},Q^2),
\label{g_1 LT evol}
\end{equation}
where the polarized quark distribution functions $\Delta q$ obey the evolution equation
%
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\frac{\partial \Delta q_i(x,t)}{\partial t}= \frac{\alpha_s(t)}{2\pi} \int_{x}^1 \frac{dy}{y}
\left[ \Delta q_i(y,t) P_{qq}\left(\frac{x}{y}\right) +
\Delta g(y,t) P_{qg}\left(\frac{x}{y}\right) \right] ,
\label{quark LO evol}
\end{equation}
with $t=\ln(Q^2/\mu^2)$. Likewise, the evolution equation for the polarized gluon distribution function $\Delta g$ is
%
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\frac{\partial\Delta g(x,t)}{\partial t}= \frac{\alpha_s(t)}{2\pi} \int_{x}^1 \frac{dy}{y}
\bigg[\sum_{i=1}^{2f} \Delta q_i(y,t) P_{gq}\left(\frac{x}{y}\right) +
\Delta g(y,t) P_{gg}\left(\frac{x}{y}\right) \bigg] .
\label{gluon LO evol}
\end{equation}
At LO the splitting functions $P_{\alpha \beta}$ appearing in Eqs. (\ref{quark LO evol}) and (\ref{gluon LO evol}) are given by
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
P_{qq}(z) &=& \frac{4}{3}\frac{1+z^2}{(1-z)_+}+2\delta(z-1), \nonumber \\
P_{qg}(z) &=& \frac{1}{2}\big(z^2-(1-z)^2 \big), \nonumber \\
P_{gq}(z) &=& \frac{4}{3}\frac{1-(1-z)^2}{z}, \nonumber \\
P_{gg}(z) &=& 3\bigg[\big(1+z^4\big)\bigg(\frac{1}{z}+\frac{1}{(1-z)_+} \bigg) -
\frac{(1-z)^3}{z}\bigg]+\bigg[\frac{11}{2}-\frac{f}{3} \bigg]\delta(z-1). \nonumber
\end{eqnarray}
\vspace{-0.6cm}
These functions are related to Wilson coefficients defined in the operator product expansion (OPE), see Section~\ref{OPE}.
They can be interpreted as the probability that:
\noindent$P_{qq}$: a quark emits a gluon and retains $z=x_{Bj}/y$ of its initial momentum;
\noindent$P_{qg}$: a gluon splits into $q$-$\overline{q}$, with the quark having a fraction $z$ of the gluon momentum;
\noindent$P_{gq}$: a quark emits a gluon with a fraction $z$ of the initial quark momentum;
\noindent$P_{gg}$: a gluon splits into two gluons, with one having the fraction $z$ of the initial momentum.
The presence of $P_{qg}$ allows inclusive polarized DIS to access the polarized gluon
distribution $\Delta g(x_{Bj},Q^2)$, and thus its moment $\Delta G \equiv \int_0^1 \Delta g~dx$,
albeit with limited accuracy. The evolution of $g_2$ at LO in $\alpha_s$ is obtained from the
above equations applied to the Wandzura-Wilczek relation, Eq.~(\ref{eq:g2ww}).
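As an aside, the $Q^2$-independence of $\Delta\Sigma$ at LO (see the evolution equations below) traces back to the vanishing first moment of $P_{qq}$. This can be checked numerically with a short script (an illustrative sketch, not part of the original analysis; it assumes the customary plus prescription regularizing the $z\to 1$ singularity of $P_{qq}$):

```python
# Numerically check that the first moment of the LO polarized splitting
# function P_qq vanishes, which is why DeltaSigma does not evolve at LO.
# The 1/(1-z)_+ plus prescription is applied explicitly:
#   int_0^1 f(z)/(1-z)_+ dz = int_0^1 (f(z) - f(1))/(1-z) dz.
def first_moment_Pqq(n=200000):
    f = lambda z: (4.0 / 3.0) * (1.0 + z * z)   # numerator of P_qq
    total = 0.0
    for i in range(n):                          # midpoint rule on (0, 1)
        z = (i + 0.5) / n
        total += (f(z) - f(1.0)) / (1.0 - z) / n
    return total + 2.0   # + 2 from the 2*delta(z-1) term

assert abs(first_moment_Pqq()) < 1e-6
```

The regularized integrand reduces to $-\frac{4}{3}(1+z)$, whose integral $-2$ is exactly compensated by the $\delta$-function term.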
In general, pQCD can predict the $Q^2$-dependence, but not the $x_{Bj}$-dependence, of the parton
distributions; the latter is determined by nonperturbative dynamics (see Section~\ref{DIS}).
The high-$x_{Bj}$ domain is an exception (see Section~\ref{sec: high-x}).
The intuitive \emph{DGLAP} results are recovered more formally using the OPE,
see Section~\ref{OPE}.
\subsubsection{The nucleon spin sum rule and the ``spin crisis" \label{spin crisis}}
The success of modeling the nucleon with quasi-free \emph{valence quarks} and with
\emph{constituent quark} models (see Section~\ref{CQM})
suggests that only quarks contribute to the nucleon spin:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.2cm}
J= \frac{1}{2}\Delta\Sigma+\mbox{L}_q =\frac{1}{2},
\end{equation}
where $ \Delta \Sigma $ is the quark spin contribution to the nucleon spin $J$;
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\Delta \Sigma = \sum_q \int_0^1 dx \, \Delta q(x),
\end{equation}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-0.7cm}
\includegraphics[scale=0.37]{oam_ex}
\vspace{-0.5cm}
\caption{\label{fig:Lq_dist}\small{Model predictions
for the quark kinematical OAM $L_z$, from~Refs.~\cite{GonzalezHernandez:2012jv} (dot-dashed line),
\cite{Chakrabarti:2016yuw} (dots), and
\cite{Lu:2010dt} (dashes).
}}
\vspace{-0.5cm}
\end{wrapfigure}
and $ \mbox{L}_q$ is the quark OAM contribution. Extracted
polarized PDFs and modeled quark OAM distributions are shown in Figs.~\ref{fig:pdf_dist} and \ref{fig:Lq_dist}.
It should be emphasized that the existence of the proton's anomalous magnetic moment requires nonzero
quark OAM~\cite{Brodsky:1980zm}. For instance, in the Skyrme model, chiral symmetry implies a
dominant nonperturbative contribution to the proton spin from quark OAM~\cite{Brodsky:1988ip}.
It is interesting to quote the conclusion from Ref.~\cite{Sehgal:1974rz}: ``Nearly 40\% of the angular
momentum of a polarized proton arises from the orbital motion of its constituents.
In the geometrical picture of hadron structure, this implies
that \emph{a polarized proton possesses a significant amount of rotation contribution
to $S_z$ and} $\mbox{L}_z$ \emph{comes from the valence quarks}.'' (emphasis by the author).
QCD radiative effects introduce corrections to the spin dynamics from gluon emission and
absorption which evolve in $\ln Q^2$.
It was generally expected that the radiated gluons would contribute to the nucleon spin, but only as a small correction
(beside their effect of introducing a $Q^2$-dependence to the different contributions to the nucleon spin).
The speculation that polarized gluons contribute significantly to the nucleon spin,
whereas their sources -- the quarks -- do not, is unintuitive, although it is a scenario that
was considered (and still is by some); see, {\it e.g.}, the bottom left panel of Fig.~\ref{spin history}
on page~\pageref{spin history}.
A small contribution to the nucleon spin from gluons would also imply a small role of the
\emph{sea quarks}, so that $ \Delta \Sigma $ and the quark OAM
would then be understood as coming mostly from \emph{valence quarks}.
In this framework, it was determined that the quark OAM contributes about 20\%~\cite{Sehgal:1974rz, Ellis:1973kp}
based on the values for $F$ and $D$, the
weak hyperon decay constants (see Section~\ref{DIS SR}), SU(3)$_f$ flavor symmetry and
$\Delta s=0$~\cite{Isgur:1978xj, Jaffe:1989jz, Brodsky:1994fz}.
This prediction was made in 1974 and predates the first spin structure measurements by SLAC E80~\cite{Alguard:1976bm},
E130~\cite{Alguard:1978gf} and CERN EMC~\cite{Ashman:1987hv}.
The origin of the quark OAM was later understood as due to
relativistic kinematics~\cite{Jaffe:1989jz, Brodsky:1994fz}, whereas $ \Delta \Sigma $ comes from the quark axial currents
(see discussion below Eq.~(\ref{hadronic tensor})).
For a nonrelativistic quark, the lower component of the Dirac spinor is
negligible; only the upper component contributes to the axial current.
In hadrons, however, quarks are confined in a small volume and are thus relativistic. The
lower component, which is in a $p$-wave, with its spin anti-aligned to that of the nucleon,
contributes and reduces $ \Delta \Sigma $.
At that time, it seemed reasonable to neglect gluons, thus predicting
a nonzero contribution to $J$ from the quark OAM.
The result was the initial expectation $\Delta\Sigma\approx$ 0.65, implying a quark OAM contribution of about 18\%.
Since this review is also concerned with
spin composition of the nucleon at low energy, it is interesting to remark that a large quark OAM contribution would essentially be a confinement effect.
The first high-energy measurements of $ g_1(x_{Bj},Q^2) $ were performed at
SLAC in the E80~\cite{Alguard:1976bm} and E130~\cite{Alguard:1978gf} experiments.
The data covered a limited $x_{Bj}$ range and agreed with the naive model described above.
However, the later EMC experiment at CERN~\cite{Ashman:1987hv} measured $g_1(x_{Bj},Q^2)$
over a range of $x_{Bj}$ sufficiently large to evaluate moments. It showed the conclusions
based on the SLAC measurements to be incorrect.
The EMC measurement suggests instead that $ \Delta \Sigma \approx 0$, with large
uncertainty. This contradiction with the naive model became known as the ``spin crisis".
Although more recent measurements at COMPASS, HERMES and JLab are consistent
with a value of $ \Delta \Sigma \approx 0.3$, the EMC indication still stands that
gluons and/or gluon and quark OAM are more important than had been foreseen;
see {\it e.g.}, Ref.~\cite{Myhrer:2007cf}.
Since gluons may be important, they must be included in
the total angular momentum conservation law
known as the ``nucleon spin sum rule"
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
J = \frac{1}{2}\Delta\Sigma(Q^2)+\mbox{L}_q(Q^2) + \Delta G(Q^2) + \mbox{L}_g(Q^2) = \frac{1}{2},
\label{eq:spin SR}
\end{equation}
at any scale $Q$. The gluon spin $\Delta G$ and the gluon OAM $\mbox{L}_g$ form a single term, $ \Delta G+\mbox{L}_g$,
since the individual $\Delta G$ and $\mbox{L}_g$
contributions are not separately gauge-invariant. (This is discussed in more detail
in the next section.)
The terms in Eq.~(\ref{eq:spin SR}) are obtained by utilizing \emph{LF-quantization}
or the IMF and the LC gauge, writing
the hadronic angular momentum tensor in terms of the quark and gluon fields~\cite{Jaffe:1989jz}.
In the gauge and frame-dependent partonic formulation, in which $\Delta G$ and $\mbox{L}_g$ can be separated,
Eq.~(\ref{eq:spin SR}) is referred to as the Jaffe-Manohar decomposition. An alternative formulation is given by Ji's
decomposition. It is gauge/frame independent, but its partonic interpretation is not as direct as for
the Jaffe-Manohar decomposition~\cite{Ji:2012vj}.
The quantities in Eq.~(\ref{eq:spin SR})
are integrated over $x_{Bj}$. They
have been determined at a moderate value of $Q^2$, typically 3 or 5 GeV$^2$.
Eq.~(\ref{eq:spin SR}) does not separate \emph{sea}
and \emph{valence quark} contributions. Although DIS observables do not
distinguish them, separating them is an important task.
In fact, recent data and theoretical developments
indicate that the \emph{valence quarks} are dominant contributors to $ \Delta \Sigma$.
We also note that the strange and anti-strange sea quarks can contribute differently
to the nucleon spin~\cite{Brodsky:1996hc}.
Finally, a separate analysis of spin-parallel and antiparallel PDFs
is clearly valuable since they have different nonperturbative inputs.
A transverse spin sum rule similar to Eq.~(\ref{eq:spin SR}) has also been derived~\cite{Bakker:2004ib, Leader:2011za}.
Likewise, transverse versions of the Ji sum rule (see next section)
exist~\cite{Leader:2011cr, Ji:2012ba}, together with debates on which version is correct.
Transverse spin not being the focus of this review, we will not discuss this issue further.
The $Q^2$-evolution of quark and gluon spins discussed in Section~\ref{sub:pQCD} provides
the $Q^2$-evolution of $\Delta\Sigma$ and $\Delta G$. The evolution equations are known to at least NNLO and are
discussed in Section~\ref{DIS SR}. The evolution of the quark and gluon OAM is known to
NLO~\cite{Ji:1995cu, Gluck:1994uf, Gluck:1995yr, Wakamatsu:2007ar, Altenbuchinger:2010sz}.
The evolution of the nucleon spin sum rule components at LO is given in Ref.~\cite{Ji:1995cu}:
\vspace{-0.3cm}
\begin{equation}
\Delta\Sigma(Q^2) = \mbox{constant}, \nonumber
\vspace{-0.1cm}
\end{equation}
\begin{equation}
\mbox{L}_q(Q^2) = \frac{-\Delta \Sigma(Q^2)}{2} + \frac{3n_f}{32+6n_f} +
\bigg(\mbox{L}_q(Q_0^2) + \frac{\Delta \Sigma(Q^2_0)}{2} - \frac{3n_f}{32+6n_f}\bigg)(t/t_0)^{-\frac{32+6n_f}{9\beta_0}}, \nonumber
\end{equation}
\begin{equation}
\Delta G(Q^2) = \frac{-4\Delta \Sigma(Q^2)}{\beta_0} + \bigg(\Delta G(Q_0^2)+ \frac{4\Delta \Sigma(Q^2_0)}{\beta_0}\bigg)\frac{t}{t_0}, \nonumber
\end{equation}
\begin{equation}
\vspace{-0.1cm}
\mbox{L}_g(Q^2) = -\Delta G(Q^2) + \frac{8}{16+3n_f} +
\bigg(\mbox{L}_g(Q_0^2) + \Delta G(Q_0^2) - \frac{8}{16+3n_f} \bigg)(t/t_0)^{-\frac{32+6n_f}{9\beta_0}}
\label{eq:LO evol. of spin SR}
\end{equation}
with $t=\ln(Q^2/\Lambda_s^2)$ and $Q_0^2$ the starting scale of the evolution. The \emph{QCD $\beta$-series}
is defined here such that $\beta_0=11-\frac{2}{3}n_f$. The NLO equations can be found in Ref.~\cite{Altenbuchinger:2010sz}.
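The asymptotic ($t \to \infty$) partition of the nucleon spin implied by these equations can be read off from their fixed points: $J_q \to \frac{3n_f}{2(16+3n_f)}$ and $J_g \to \frac{8}{16+3n_f}$. A short numerical check (illustrative Python; the function name is ours):

```python
# Asymptotic (t -> infinity) LO partition of the nucleon spin, read off
# from the fixed points of the evolution equations above:
#   J_q = DeltaSigma/2 + L_q -> 3 n_f / (2 (16 + 3 n_f))
#   J_g = DeltaG + L_g       -> 8 / (16 + 3 n_f)
def asymptotic_spin_partition(n_f):
    """Return (J_q, J_g), in units of hbar, for n_f active flavors."""
    J_q = 3 * n_f / (2 * (16 + 3 * n_f))
    J_g = 8 / (16 + 3 * n_f)
    return J_q, J_g

for n_f in (3, 4, 5, 6):
    J_q, J_g = asymptotic_spin_partition(n_f)
    assert abs(J_q + J_g - 0.5) < 1e-12   # spin sum rule J = 1/2
```

For $n_f = 3$ this gives $J_q = 0.18$ and $J_g = 0.32$, {\it i.e.}, quarks would asymptotically carry 36\% of the nucleon spin.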
\subsubsection{Definitions of the spin sum rule components \label{SSR components}}
Values for the components of Eq.~(\ref{eq:spin SR}) obtained from experiments, Lattice Gauge
Theory or models are given in Section~\ref{nucleon spin structure at high energy} and in the Appendix.
It is important to recall that these values are convention-dependent for several reasons.
One is that the axial anomaly shifts contributions between $\Delta \Sigma$
and $\Delta G$, depending on the choice of renormalization scheme,
\emph{even at arbitrary high $Q^2$} (see Section~\ref{DIS SR}).
This effect was suggested as a cause for the smallness of $\Delta \Sigma$ compared to the naive quark model
expectation: a large value $\Delta G \approx 2.5$ would increase the measured $\Delta \Sigma$ to about 0.65. Such a large value
of $\Delta G$ is now excluded. Furthermore, it is unintuitive to use a specific renormalization scheme in
which the axial anomaly contributes, to match quark models that do not need renormalization.
Another reason is that the definitions of $\Delta G$, $\mbox{L}_q$, $\mbox{L}_g$
are also conventional.
This was known before the spin crisis~\cite{Jaffe:1989jz} but
the discussion on what the best operators are
has been renewed by the derivation of the Ji sum rule~\cite{Ji:1996ek}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
J_{q,g}= \frac{1}{2} \int_{-1}^{1} x \big[E_{q,g}(x,0,0)+H_{q,g}(x,0,0)\big] dx,
\label{Eq. Ji SR}
\end{equation}
with $\sum_q J_q + J_g = \frac{1}{2}$ being frame and gauge invariant, and where
$J_{q,g}$ and the GPDs $ E_{q,g}$ and $H_{q,g}$ stand either for quarks or for gluons.
For quarks, $J_q \equiv \Delta \Sigma/2 + \mbox{L}_q$. For gluons, $J_g$ cannot be separated
into spin and OAM parts in a frame or gauge invariant way. (However,
it can be separated in the IMF, with an additional ``potential" angular momentum term~\cite{Ji:2012sj}.)
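Equivalently, since the second moments of $H$ and $E$ give the gravitational form factors at zero momentum transfer, Eq.~(\ref{Eq. Ji SR}) can be written in the standard notation (used here only for illustration) as

```latex
\begin{eqnarray}
J_{q,g}= \frac{1}{2}\big[A_{q,g}(0)+B_{q,g}(0)\big] , \nonumber
\end{eqnarray}
```

where $A_{q,g}(0)$ is the momentum fraction carried by quarks or gluons and $B_{q,g}(0)$ their anomalous gravitomagnetic moment, the latter summing to zero for the nucleon.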
Importantly, the Ji sum rule provides a model-independent access to $ \mbox{L}_q$, whose measurability
had been until then uncertain. Except for Lattice Gauge Theory (see Section~\ref{Ji's LGT method}), theoretical
assessments of the quark OAM are model-dependent. We already mentioned the
relativistic quark model prediction of about 20\%, made even before the occurrence of the spin crisis.
More recently, investigation within an unquenched quark model suggested that the unpolarized
\emph{sea} asymmetry $\overline{u} - \overline{d}$ is proportional to the
nucleon OAM:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\mbox{L}(Q^2) \equiv \mbox{L}_q(Q^2)+\mbox{L}_g(Q^2) \propto \big(\overline{u}(Q^2) - \overline{d}(Q^2)\big),
\label{Eq. L propto sea}
\end{equation}
where $\overline{q}(Q^2)=\int_0^1 \overline{q}(x,Q^2)dx$.
The non-zero $\overline{u} - \overline{d}$ distribution is well measured~\cite{Baldit:1994jk}
and causes the violation of the Gottfried sum rule~\cite{Gottfried:1967kk, Amaudruz:1991at}.
The initial derivation of Eq.~(\ref{Eq. L propto sea}) by Garvey~\cite{Garvey:2010fi} indicates a strict equality,
$L = (\overline{u} - \overline{d}) = 0.147 \pm 0.027$, while a derivation in a chiral quark model~\cite{Bijker:2014ila}
suggests $L= 1.5(\overline{u} - \overline{d}) = 0.221 \pm 0.041$.
The lack of precise polarized PDFs at low-$x_{Bj}$ does not yet allow
a verification of this remarkable prediction~\cite{Nocera:2016zyg}.
Another quark OAM prediction is from LFHQCD: $\mbox{L}_q(Q^2 \leq Q_0^2)=1$ in the strong regime of QCD, evolving to
$\mbox{L}_q=0.35 \pm 0.05$ at $Q^2=5$~GeV$^2$, see Section~\ref{sec:LFHQCD}.
Besides Eq.~(\ref{Eq. Ji SR}) and possibly Eq.~(\ref{Eq. L propto sea}), the quark OAM can also be accessed from
the two-parton \emph{twist}-3 GPD $G_2$~\cite{Penttinen:2000dg}:
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.2cm}
\mbox{L}_q=-\int\ G^q_2(x,0,0) dx,
\label{Eq. quark OAM from twist-3}
\end{equation}
or from generalized TMDs (GTMDs)~\cite{Lorce:2011ni, Lorce:2011kd, Hatta:2011ku}.
TMDs allow one to infer $\mbox{L}_q$ in a model-dependent way~\cite{Gutsche:2016gcd}.
Jaffe and Manohar set the original convention to define the angular momenta~\cite{Jaffe:1989jz}. They expressed
Eq.~(\ref{eq:spin SR}) using the canonical angular momentum and momentum tensors.
This choice is natural since it follows from Noether's theorem~\cite{Leader:2011za}.
For angular momenta, the relevant symmetry is the
rotational invariance of QCD's Lagrangian. The ensuing conserved quantity ({\it i.e.}, that commutes with the Hamiltonian)
is the generator of the rotations. This definition provides the four angular momenta of the longitudinal spin sum rule,
Eq.~(\ref{eq:spin SR}). A similar transverse spin sum rule was also derived~\cite{Bakker:2004ib, Leader:2011za}.
A caveat of the canonical definition is that in Eq.~(\ref{eq:spin SR}),
only $J$ and $\Delta\Sigma$ are gauge invariant, {\it i.e.}, are measurable. In the
light-cone gauge, however, the gluon spin term coincides with the measured observable $\Delta G$.
(This is true also in the $A^0=0$ gauge~\cite{Anselmino:1994gn}.)
The fundamental reason for the gauge dependence of the other components of Eq.~(\ref{eq:spin SR}) is their
derivation in the IMF.
What triggered the re-inspection of the Jaffe-Manohar decomposition and subsequent discussions
was that Ji proposed another decomposition using the Belinfante-Rosenfeld energy-momentum
tensor~\cite{Belinfante tensor}, which led to the Ji sum rule~\cite{Ji:1996ek}, Eq.~(\ref{Eq. Ji SR}).
The Belinfante-Rosenfeld tensor originates from General Relativity in which the canonical momentum
tensor is modified so that it becomes symmetric and conserved (commuting with the Hamiltonian):
in a world without angular momentum, the canonical momentum tensor would be symmetric.
However, adding spins breaks its symmetry. An appropriate combination of canonical momentum
tensor and spin tensor yields the Belinfante-Rosenfeld tensor, which is symmetric and thus natural for General Relativity
where it is identified with its field source ({\it i.e.}, the Hilbert tensor). The advantages of such a definition are
1) its naturalness even in presence of spin;
2) that it leads to a longitudinal spin sum rule in which all individual terms are gauge invariant;
and 3) that there is a known method to measure $\mbox{L}_q$ (Eq.~(\ref{Eq. Ji SR})), or to
compute it using Lattice Gauge Theory (see Section~\ref{Ji's LGT method}).
Its caveat is that the nucleon spin decomposition contains only three terms: $\Delta \Sigma$,
$\mbox{L}_q$ and a global gluon term, thus without a clear interpretation of the experimentally measured $\Delta G$.
While $\Delta \Sigma$ in the Ji and Jaffe-Manohar decompositions are identical,
the $\mbox{L}_q$ terms are different.
That several definitions of $\mbox{L}_q$ are possible stems from gauge invariance: to satisfy it,
the quark fields alone do not suffice; gluons must be included, which allows for choices in the separation of $\mbox{L}_q$ and
$\mbox{L}_g$~\cite{Leader:2013jra, Liu:2015xha}.
The difference between the two definitions of $\mbox{L}_q$ can be interpreted as the
torque acting on the struck quark during the polarized DIS process~\cite{Abdallah:2016xfk, Burkardt:2012sd}:
Ji's $\mbox{L}_q$ is the OAM before the probing photon is absorbed by the quark, while the Jaffe-Manohar
$\mbox{L}_q$ is the OAM after the photon absorption, with the absorbing quark kicked out to infinity.
These two definitions of $\mbox{L}_q$ have been investigated with several
models, {\it e.g.},~\cite{Liu:2015xha,Courtoy:2016des}, whose results are shown in
Section~\ref{Individual contributions to the nucleon spin}.
Other definitions of angular momenta and gluon fields have been proposed to eliminate the gauge-dependence
problem~\cite{Chen:2008ag}, leading to a spin decomposition
Eq.~(\ref{eq:spin SR}) with four gauge-invariant terms. The complication is that the
corresponding operators use non-local fields, \emph{viz} fields depending on several
space-time variables or, more generally, a field $A$ for which $A(x) \neq e^{-ipx} A(0) e^{ipx}$.
A recent review on angular momentum definition and separation is given in Ref.~\cite{Leader:2013jra}.
It remains to be added that in practice, to obtain $\mbox{L}_q$ in a \emph{leading-twist} (twist~2) analysis,
$\Delta \Sigma/2$ must be subtracted, see Eq.~(\ref{Eq. Ji SR}).
Thus, since $\Delta \Sigma$ is renormalization scheme dependent due to the axial anomaly, $\mbox{L}_q$ is too
(but not their sum $J_q$). A \emph{higher-twist} analysis of the nucleon spin sum rule allows one to separate the quark and gluon
spin contributions (twist~2 PDFs/GPDs) from their OAM
(twist~3 GPD $G_2$)~\cite{Penttinen:2000dg, Ji:2012sj, Ji:2012ba, Hatta:2011ku, Kanazawa:2014nha}.
OAM is expected to be a \emph{twist}-3 quantity since it involves the partons' transverse motion.
However, the quark OAM, as defined in Eq.~(\ref{Eq. Ji SR}) can be related to \emph{twist}-2 GPDs.
Beside GPDs, OAM can also be accessed with
GTMDs~\cite{Gutsche:2016gcd, Lorce:2011kd, Rajan:2016tlg, Burkardt:2008ua}.
It is now traditional to call the Jaffe-Manohar OAM the \emph{canonical} expression and to denote it by $l_z$, while the Ji OAM is called
\emph{kinematical} and denoted by $L_z$.
We will use this convention for the rest of the review.
In summary, the components of Eq.~(\ref{eq:spin SR}) are scheme and definition (or gauge) dependent.
Thus, when discussing the origin of the nucleon spin, schemes and definitions must be specified.
This is not a setback since, as emphasized in the preamble, the main object of spin physics is not to provide
the pie chart of the nucleon spin but rather to use it to verify QCD's consistency
and understand complex mechanisms involving it, {\it e.g.}, confinement.
That can be done consistently in fixed schemes and definitions.
This leads us to the next section where such complex mechanisms start to arise.
\subsection{The resonance region \label{resonance region}}
At smaller values of $W$ and $Q^2$, namely below the DIS scaling region, the nucleon
reacts increasingly coherently to the photon until it eventually
responds as a rigid whole. Before reaching this
elastic reaction on the nucleon ground state, scattering may
excite nucleon states of higher masses in which no specific
quark can unambiguously be said to have been struck,
thus causing interference and coherence effects.
One thus leaves the DIS domain to enter the resonance region characterized
by bumps in the scattering cross-section, see Fig.~\ref{fig:gross}.
These resonances are orbitally and/or radially excited nucleon states.
They decay by meson and/or photon emission and can be classified
into two groups: isospin 1/2 (N$^*$ resonances) and
isospin 3/2 ($\Delta^*$ resonances).
The resonance domain is important for this review since it covers the transition
from pQCD to nonperturbative QCD. It also illustrates how spin information can illuminate QCD phenomena.
Since the resonances are numerous, overlapping
and of different origins, spin degrees of freedom are needed to identify and characterize them.
Modern hadron spectroscopy experiments typically involve polarized beams
and targets. However, inclusive reactions are ill-suited to disentangling resonances:
final hadronic states must be partly or fully identified. Thus, we will cover this
extensive subject only superficially.
The nomenclature classifying nucleon resonances originates from $\pi N$ scattering.
Resonances are labelled by L$_{2I~2J}$, where L is the OAM \emph{in the $\pi N$ channel}
(not the hadron wavefunction OAM), $I$=1/2 or 3/2 is the isospin, and $J$ is the total angular momentum.
L is labeled by S (for L=0), P (L=1), D (L=2) or F (L=3).
An important tool to classify resonances and predict their
masses is the \emph{constituent quark} model, which is discussed next. Lattice gauge theory
(Section~\ref{LGT}) is now the main technique to predict and characterize resonances, with
the advantage of being a first-principle QCD approach. Another successful approach based on QCD's
basic principles and symmetries is LF Holographic QCD (Section~\ref{sec:LFHQCD}), an effective theory which uses
the gauge/gravity duality on the LF, rather than ordinary spacetime, to capture essential aspects of QCD dynamics
in its nonperturbative domain.
\subsubsection{Constituent quark models \label{CQM}}
The basic classification of the hadron mass spectra was motivated by the development of
\emph{constituent quark} models obeying an $SU(6) \supset SU(3)_{flavor} \otimes SU(2)_{spin}$
internal symmetry~\cite{Isgur:1978xj, Eichmann:2016yit}. Baryons are modeled as composites of three
\emph{constituent quarks} of mass $M/3$ (modulo binding energy corrections which
depend on the specific model) which provides the $J^{PC}$ quantum numbers. The
\emph{constituent quark} model predates QCD but is now interpreted and developed in its framework.
\emph{Constituent quarks} differ from \emph{valence quarks}
-- which also determine the correct quantum numbers of hadrons -- in that they are
not physical (their mass is larger) and are understood as \emph{valence quarks} dressed by virtual partons.
The large \emph{constituent quark} masses explicitly break both the \emph{conformal} and chiral symmetries
that are nearly exact for QCD at the classical level; see Section~\ref{sub:Chipt}.
\emph{Constituent quarks} are assumed to be bound at LO by phenomenological potentials such as the
Cornell potential~\cite{the:Eichten. Cornell pot.}, an approach interpreted after the advent of QCD
in terms of gluonic flux tubes acting between quarks.
The LO spin-independent potential is supplemented by a spin-dependent potential,
{\it e.g.}, by adding meson~\cite{Matsuyama:2006rp} or instanton exchanges, or by
including the interaction of a spin-1 gluon exchanged between the quarks
(``hyperfine correction"~\cite{Close:1979bt, Godfrey:1985xj}).
``Constituent gluons" have also been used to characterize mesons that may exhibit
explicit gluonic degrees of freedom (``hybrid mesons"). The constituent quark models, which have been built
to explain hadron mass spectroscopy, can reproduce it well. In particular, they historically led to the
discovery of color charge.
Of particular interest to this review, such an approach can also account for
baryon magnetic moments, which differ from the pointlike ({\it i.e.}, Dirac) magnetic
moments of the \emph{constituent quarks}. Another feature of these models relevant to this review is that the
physical mechanisms that account for hyperfine corrections are also needed to explain polarized
PDFs at large-$x_{Bj}$, see Section~\ref{pqcd high-x}. Hyperfine corrections
can effectively transfer some of the quark spin contribution to quark OAM~\cite{Myhrer:1988ap}, consistent with
the need for non-zero quark OAM in order to describe the PDFs within pQCD~\cite{Avakian:2007xa}.
In non-relativistic \emph{constituent quark} models, the quark OAM is zero and there are no gluons: the nucleon spin
comes from the quark spins. SU(6) symmetry and requiring that the non-color
part of the proton wavefunction is symmetric yield~\cite{Close:1979bt, Close:1973xw}:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\label{SU(6) p wavefunction}
\left|p\uparrow\right\rangle =
\frac{1}{\sqrt{2}}\left|u\uparrow(ud)_{\mathbf s=0,s=0}\right\rangle +
\frac{1}{\sqrt{18}}\left|u\uparrow(ud)_{\mathbf s=1,s=0}\right\rangle - \\ \nonumber
\frac{1}{3}\left(\left|u\downarrow(ud)_{\mathbf s=1,s=1}\right\rangle -
\left|d\uparrow(uu)_{\mathbf s=1,s=0}\right\rangle +
\sqrt{2}\left|d\downarrow(uu)_{\mathbf s=1,s=1}\right\rangle \right),
\end{eqnarray}
where the arrows indicate the projection of the 1/2 spins along the quantization axis, while the subscripts $\mathbf s$
and $s$ denote the total and projected spins of the diquark system, respectively.
The neutron wavefunction is obtained from the proton wavefunction {\it via} isospin $u \leftrightarrow d$ interchange.
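As a numerical cross-check (not part of the original derivation), the flavor-spin content of Eq.~(\ref{SU(6) p wavefunction}) can be verified by expanding each quark-diquark term in the three-quark basis, keeping the relative signs (whose interference between the $\mathbf s=0$ and $\mathbf s=1$ $(ud)$ terms matters), and squaring the resulting amplitudes. This reproduces the textbook SU(6) values $\Delta u = 4/3$, $\Delta d = -1/3$, and hence the naive $\Delta \Sigma = 1$ quoted further below. A minimal sketch, with ad hoc state labels:

```python
from collections import defaultdict
from math import sqrt, isclose

# Amplitudes of the SU(6) proton wavefunction, expanded in the three-quark
# basis.  A state is labeled (spectator quark, diquark pair), e.g.
# ("u+", "u+d-") for |u(up); u(up) d(down)>.  The diquark spin states are
# expanded explicitly so that singlet/triplet interference is retained.
amp = defaultdict(float)
r2 = sqrt(2.0)

# (1/sqrt2)|u+ (ud)_{0,0}>, with (ud)_{0,0} = (u+d- - u-d+)/sqrt2
amp[("u+", "u+d-")] += (1 / r2) * (1 / r2)
amp[("u+", "u-d+")] += -(1 / r2) * (1 / r2)
# (1/sqrt18)|u+ (ud)_{1,0}>, with (ud)_{1,0} = (u+d- + u-d+)/sqrt2
amp[("u+", "u+d-")] += (1 / sqrt(18.0)) * (1 / r2)
amp[("u+", "u-d+")] += (1 / sqrt(18.0)) * (1 / r2)
# -(1/3)|u- (ud)_{1,1}>
amp[("u-", "u+d+")] += -1 / 3
# +(1/3)|d+ (uu)_{1,0}>, with (uu)_{1,0} = (u+u- + u-u+)/sqrt2
amp[("d+", "u+u-")] += (1 / 3) * (1 / r2)
amp[("d+", "u-u+")] += (1 / 3) * (1 / r2)
# -(sqrt2/3)|d- (uu)_{1,1}>
amp[("d-", "u+u+")] += -r2 / 3

def spin(state, flavor):
    """Net spin projection (in units of hbar/2) carried by `flavor` quarks."""
    quarks = [state[0], state[1][:2], state[1][2:]]
    return sum(+1 if q == flavor + "+" else -1 if q == flavor + "-" else 0
               for q in quarks)

norm = sum(a * a for a in amp.values())
du = sum(a * a * spin(s, "u") for s, a in amp.items())
dd = sum(a * a * spin(s, "d") for s, a in amp.items())
assert isclose(norm, 1.0) and isclose(du, 4 / 3) and isclose(dd, -1 / 3)
```

Note that treating the diquark spin projections as independent averages would miss the singlet-triplet interference and yield the wrong flavor decomposition; only the full expansion gives $\Delta u = 4/3$ and $\Delta d = -1/3$.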
The spectroscopy of the excited states varies between models, depending in detail on the choice of the quark potential.
As mentioned in Section~\ref{spin crisis}, the disagreement between the EMC experimental results~\cite{Ashman:1987hv},
and the naive $\Delta \Sigma =1$ expectation from the
simplest \emph{constituent quark} models has led to the ``spin crisis".
Myhrer, Bass, and Thomas have interpreted the ``spin crisis" in the \emph{constituent quark} model framework
as a pion cloud effect~\cite{Myhrer:2007cf, Bass:2009ed}, which
together with relativistic corrections and one-gluon exchange, can transfer part of $\Delta \Sigma$ to
the quark OAM (mostly to $l^u_q$)~\cite{Tsushima:1988xv}.
Once these corrections have been applied, the \emph{constituent quark}
picture -- which has had success in describing other aspects of the strong force --
also becomes consistent with the spin structure data. Relativistic effects, one-gluon exchange and the pion cloud
reduce the naive $\Delta \Sigma=1$ expectation by 35\%, 25\% and 20\%,
respectively. The quark spin contribution is transferred to quark OAM,
resulting in $\Delta \Sigma /2 \approx 0.2$
and $l_q \approx +0.3$.
These predictions apply at the low momentum scale where \emph{DGLAP} evolution starts, estimated to be
$Q_0^2 \approx 0.16$~GeV$^2$~\cite{Thomas:2008ga}, which could be relevant to the \emph{constituent quark}
degrees of freedom. Evolving these numbers from $Q_0^2$
to the typical DIS scale of 4~GeV$^2$ using Eqs.~(\ref{eq:LO evol. of spin SR})
decreases the quark OAM to 0 ($l_q^d \approx - l_q^u \approx 0.1$), transferring it to $\Delta G +\mbox{L}_g$.
Thus, the Myhrer-Bass-Thomas model yields
$\Delta \Sigma/2 \approx 0.18$, $l_q \approx 0$ and $\Delta G +\mbox{L}_g \approx 0.32$, with
strange and heavier quarks not directly contributing to $J$.
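For orientation, the numbers quoted above can be checked with elementary arithmetic. Assuming the three reductions act as multiplicative suppression factors (our assumption; it reproduces the quoted values), and using the spin sum rule $J = \Delta\Sigma/2 + l_q + \Delta G + \mbox{L}_g = 1/2$:

```python
# Myhrer-Bass-Thomas spin budget, treating the quoted reductions
# (relativistic effects, one-gluon exchange, pion cloud) as multiplicative
# suppression factors -- our assumption, which reproduces the text's numbers.
delta_sigma = 1.0 * (1 - 0.35) * (1 - 0.25) * (1 - 0.20)  # ~0.39
quark_spin = delta_sigma / 2                               # ~0.195, i.e. ~0.2
l_q_low_scale = 0.5 - quark_spin                           # ~0.305, i.e. ~+0.3

# After DGLAP evolution to 4 GeV^2 the quark OAM is quoted to vanish,
# its share being transferred to Delta_G + L_g:
quark_spin_dis = 0.18
l_q_dis = 0.0
gluon_total = 0.5 - quark_spin_dis - l_q_dis               # ~0.32
assert abs(quark_spin - 0.2) < 0.01 and abs(l_q_low_scale - 0.3) < 0.01
assert abs(gluon_total - 0.32) < 1e-9
```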
This result is at odds with those of Refs.~\cite{Altenbuchinger:2010sz, Wakamatsu:2009gx}, which assessed the value of
$L_q$ at low scales by evolving down large scale LGT estimates of the spin sum rule components.
A cause of the disagreement might be that Refs.~\cite{Altenbuchinger:2010sz, Wakamatsu:2009gx} use LGT
input, {\it i.e.}, with the quark OAM kinematical definition, while it is unclear which definition applies to the quark OAM
in constituent quark models, such as that used in Ref.~\cite{Thomas:2008ga}.
Furthermore, the high scale $L_q$ input of Refs.~\cite{Altenbuchinger:2010sz, Wakamatsu:2009gx} stems
from early LGT calculations which do not include disconnected diagrams. Those are now known to contribute
importantly to the quark OAM, which makes the $L_q$ input of Refs.~\cite{Altenbuchinger:2010sz, Wakamatsu:2009gx}
questionable. Finally, the scale evolutions are performed in~\cite{Altenbuchinger:2010sz, Thomas:2008ga, Wakamatsu:2009gx}
at leading \emph{twist}, which is known to be insufficient for scales below $Q_0\approx 1$ GeV~\cite{Deur:2014qfa, Deur:2016cxb}.
(We remark that some \emph{higher-twists} are effectively included when a non-perturbative $\alpha_s$ is employed).
The limitation of these evolutions in the very low scale region characterizing bag models (0.1-0.3 GeV$^2$)
is in particular studied in Ref.~\cite{Altenbuchinger:2010sz}.
The authors improved the cloudy bag model calculation of Ref.~\cite{Thomas:2008ga}
by using the gauge-invariant (kinematical) definition of the spin contributions.
It yields $Q_0^2 \approx 0.2$~GeV$^2$, $\Delta \Sigma/2=0.23\pm 0.01$,
$L_q=0.53\pm 0.09$ and $\Delta G+\mbox{L}_g=-0.26\pm 0.10$.
The importance of the pion cloud to $J$ has also been discussed in Refs.~\cite{Nocera:2016zyg, Speth:1996pz}.
\subsubsection{The resonance spectrum of nucleons}
The first nucleon excited state is the P$_{33}$, also called the $\Delta(1232)~3/2^+$ ($M_{\Delta}$=1232 MeV)
in which the three \emph{constituent quark} spins are aligned while in an S-wave.
Thus, the $\Delta(1232)~3/2^+$ has spin $J=3/2$ and isospin $I=3/2$.
The $\Delta(1232)~3/2^+$ resonance is the
only one clearly identifiable in an inclusive reaction spectrum.
It has the largest cross-section and thus contributes dominantly to
\emph{sum rules} (Section~\ref{sum rules}) and moments of spin structure functions at moderate $Q^2$.
The nucleon-to-$\Delta$ transition is thus, in this SU(6)-based view, a spin (and isospin) flip;
{\it i.e.}, a magnetic dipole transition quantified by the $M_{1+}$ multipole amplitude.
Experiments have shown that there is also a small electric quadrupole component
$E_{1+}$ ($E_{1+}/M_{1+} < 0.01$ at $Q^2=0$) which violates SU(6) isospin-spin symmetry. This effect
can be interpreted as the deformation of the $\Delta(1232)~3/2^+$ charge and current
distributions in comparison to a spherical distribution.
The nomenclature for multipole longitudinal (also called scalar) amplitudes
$S_{l\pm}$, as well as the transverse $E_{l\pm}$ and $M_{l\pm}$ amplitudes
is given in Ref.~\cite{Burkert:2004sk}. The small $E_{1+}$ and $S_{1+}$
components are predicted by \emph{constituent quark} models improved with an $M_1$ dipole-type one-gluon exchange
(see Section~\ref{sec: high-x}).
Due to their similar masses and short lifetimes ({\it i.e.,} large widths in excitation energy $W$), the higher mass
resonances overlap, and thus cannot be readily isolated as distinct contributions to inclusive cross-sections.
Their contributions can be grouped into four regions whose shapes and mean-$W$ vary with
$Q^2$, due to the different $Q^2$-behavior of the amplitudes of the individual resonances.
The second resonance region (the first is the $\Delta(1232)~3/2^+$) is located
at $W\approx1.5$~GeV and contains the N(1440)~$1/2^+$ P$_{11}$ (Roper resonance), the
N(1520)~$3/2^-$ D$_{13}$ and the N(1535)~$1/2^-$ S$_{11}$ which usually dominates over
the first two. The third region, at $W\approx1.7$~GeV,
includes the $\Delta$(1600)~$3/2^+$ P$_{33}$, N(1680)~$5/2^+$ F$_{15}$, N(1710)~$1/2^+$ P$_{11}$,
N(1720)~$3/2^+$ P$_{13}$, $\Delta$(1620)~$1/2^-$ S$_{31}$, N(1675)~$5/2^-$ D$_{15}$, $\Delta$(1700)~$3/2^-$ D$_{33}$,
and N(1650)~$1/2^-$ S$_{11}$. The fourth region is located
around $W\approx1.9$~GeV and contains the $\Delta$(1905)~$5/2^+$ F$_{35}$,
$\Delta$(1920)~$3/2^+$ P$_{33}$, $\Delta$(1910)~$1/2^+$ P$_{31}$, $\Delta$(1930)~$5/2^-$ D$_{35}$ and
$\Delta$(1950)~$7/2^+$ F$_{37}$. Other resonances have been identified
beyond $W=2$~GeV~\cite{Olive:2016xmw}, but their structure cannot be distinguished in an
inclusive experiment not only because of the overlap of their widths, but also because of the dominance
of the ``non-resonant background" -- incoherent scattering similar to DIS at higher $Q^2$. Its
presence is necessary to satisfy the unitarity of the $S$ matrix in the resonance region.
The DIS cross-section formulae remain valid in the resonance domain.
Although the interpretation of structure functions in terms of PDFs no longer applies, the DIS cross-sections can nevertheless be related to
overlaps of LFWFs, as shall be discussed below.
\subsubsection{A link between DIS and resonances: hadron-parton duality}
Bloom and Gilman observed~\cite{Bloom:1970xb} that the unpolarized structure function
$F_2(x_{Bj},Q^2)$ measured in DIS matches $F_2(x_{Bj},Q^2)$ measured in the resonance domain
if the resonance peaks are suitably smoothed and if the
$Q^2$-dependence of $F_2$ -- due to pQCD radiative corrections and the non-zero nucleon
mass -- is corrected for. This correspondence is known as {\it hadron-parton} duality.
It implies that Bjorken scaling, corrected for
\emph{DGLAP} evolution and non-zero mass terms (kinematic \emph{twists}, see Section~\ref{OPE}), is effectively valid in the
resonance region if the resonant structures can be averaged over.
This indicates that the effect of the third source of
$Q^2$-dependence, the parton correlations (dynamical \emph{twists}, see Section~\ref{OPE}),
can be neglected. Thus the resonance region can be described
in dual languages -- either hadronic or partonic~\cite{Melnitchouk:2005zr}. The understanding of
hadron-parton duality for spin structure functions has also progressed and is discussed in Section~\ref{sec:duality}.
\subsection{Elastic and quasi-elastic scatterings\label{elastic scatt}}
When a leptonic scattering reaction occurs at low energy transfer
$\nu = {p \cdot q/ M} $ and/or low photon virtuality $Q^2$,
nucleon excited states cannot form. Coherent elastic scattering occurs, leaving the target in its
ground state. The transferred momentum is shared by
the target's constituents, the target stays intact and its
structure undisrupted. The 4-momentum of the virtual photon is spent entirely
as target recoil. The energy transferred is $\nu_{el}=Q^2/(2M)$.
For a nuclear target, elastic scattering may occur on the nucleus itself
or on an individual nucleon. If the nuclear structure is disrupted,
the reaction is called quasi-elastic (not to
be confused with the ``quasi-elastic'' scattering of neutrinos, which
is charge-exchange elastic scattering; {\it i.e.,} involving $W^{+ / -}$ rather than $Z^0$).
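The elastic kinematics above can be verified directly: imposing that the final-state invariant mass equal the target mass, $W^2 = M^2 + 2M\nu - Q^2 = M^2$, recovers $\nu_{el} = Q^2/(2M)$. A short numerical sketch (the proton mass value is our input):

```python
M_P = 0.9383  # proton mass in GeV (our input value)

def elastic_nu(Q2, M=M_P):
    """Energy transfer (GeV) in elastic scattering at photon virtuality Q2 (GeV^2)."""
    return Q2 / (2 * M)

def W2(Q2, nu, M=M_P):
    """Squared invariant mass of the final hadronic state."""
    return M * M + 2 * M * nu - Q2

Q2 = 1.0
nu = elastic_nu(Q2)                      # ~0.533 GeV
assert abs(W2(Q2, nu) - M_P**2) < 1e-12  # target left in its ground state
```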
For elastic scattering, there is no need for
``polarized form factors'': the unpolarized and polarized parts
of the cross-section contain the same form factors. This is
because in elastic scattering, the final hadronic state is known, from current and angular
momentum conservation. Thus, a hadronic current (a vector) can be constructed, which requires two parameters.
In contrast, in the inclusive inelastic case, such a current cannot be constructed since the final state is
by definition undetermined. Only the hadronic tensor can be constructed, which requires four parameters.
That the same form factors describe both unpolarized and polarized elastic scattering allows for accurate
form factor measurements~\cite{Pacetti:2015iqa}, which illustrates how
spin is used as a complementary tool for exploring nucleon structure.
The elastic reaction is important for doubly-polarized inclusive scattering experiments.
Since the same form factors control the unpolarized and polarized elastic cross-sections,
the elastic asymmetry is calculable from the well-measured unpolarized elastic scattering.
This asymmetry can be used to obtain or check beam and target polarizations.
Likewise, the unpolarized elastic cross-section can be used to set or to verify the
normalization of the polarized inelastic cross-section.
Furthermore, some spin sum rules, {\it e.g.}, Burkhardt-Cottingham sum rule (see
Section \ref{BCSR}), include the elastic contribution. Such sum rules are valid
for nuclei. Therefore, alongside the nucleon, we provide below the formalism
of doubly-polarized elastic and quasi-elastic scatterings
for the deuteron and $^3$He nuclei, which are commonly used in doubly-polarized inclusive experiments.
\subsubsection{Elastic cross-section \label{unpo cross-section}}
The doubly polarized elastic cross-section is:
\vspace{-0.1cm}
\footnotesize
\begin{equation}
\vspace{-0.1cm}
\frac{d\sigma}{d\Omega}=\frac{\sigma_{Mott} E' Z^2}{E}\bigg[\bigg(\frac{Q^2}{\overrightarrow{q}^2}\bigg)^2R_{L}(Q^2,\nu)+\big(\tan^2(\theta/2)-\frac{1}{2}\frac{Q^2}{\overrightarrow{q}^2}\big)R_T(Q^2,\nu)\pm\Delta(\theta^{*},\phi^{*},E,\theta,Q^2)\bigg] ,
\label{eq:unpocross}
\end{equation}
\normalsize
\noindent
where $Z$ is the target atomic number and the angles are defined in Fig.~\ref{fig:spinangles}. $R_{L}$ and $R_T$ are
the longitudinal and transverse response functions associated with the
corresponding polarizations of the virtual photon.
The cross-section asymmetry $\Delta$, where $\pm$ refers to the beam helicity sign~\cite{Donnelly:1985ry}, is:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\Delta=-\left(\tan\frac{\theta}{2}\sqrt{\frac{Q^2}{\overrightarrow{q}^2}+\tan^2\frac{\theta}{2}}R_{T'}(Q^2)\cos\theta^{*}-\frac{\sqrt{2}Q^2}{\overrightarrow{q}^2}\tan\frac{\theta}{2}R_{TL'}(Q^2)\sin\theta^{*}\cos\phi^{*}\right)\label{eq:delasy}.
\nonumber
\end{equation}
Cross-sections for the targets used in nucleon spin structure experiments are given below:
\noindent \textbf{Nucleon case}
The cross-section for scattering on a longitudinally polarized nucleon is:
\vspace{-0.4cm}
\begin{eqnarray}
\frac{d\sigma}{d\Omega} & = & \sigma_{Mott}\frac{E'}{E}\left(W_2+2W_1\tan^2(\theta/2)\right)\times\label{eq:hadroncross}\\
& & \left(1\pm\sqrt{\frac{\tau_r W_1}{(1+\tau_r)W_2-\tau_r W_1}}\frac{\frac{2M}{\nu}+\sqrt{\frac{W_1}{\tau_r\left((1+\tau_r)W_2-\tau_r W_1\right)}}\frac{2\tau_r M}{\nu}+2(1+\tau_r)\tan^2(\theta/2)}{1+\tau_r\frac{W_1}{\tau_r\left((1+\tau_r)W_2-\tau_r W_1\right)}\left(1+2(1+\tau_r)\tan^2(\theta/2)\right)}\right),
\nonumber
\end{eqnarray}
%
\noindent with the recoil term $\tau_r\equiv Q^2/(4M^2)$.
The hadronic current is usually parameterized by the Sachs form factors,
$G_E(Q^2)$ and $G_M(Q^2)$, rather than $W_1$ and $W_2$:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
W_1(Q^2)=\tau_r G_M(Q^2)^2, \\\
~~~~~W_2(Q^2)=\frac{G_E(Q^2)^2+\tau_r G_M(Q^2)^2}{1+\tau_r}.
\nonumber
\end{equation}
In the nonrelativistic domain the form factors $G_E$ and $G_M$ can be thought of as
Fourier transforms of the nucleon charge
and magnetization spatial densities, respectively. A rigorous interpretation in terms of LF
charge densities is given in Refs.~\cite{Miller:2007uy} (nucleon) and
\cite{Carlson:2008zc} (deuteron, see next section).
The Dirac and Pauli form factors $F_1(Q^2)$
and $F_2(Q^2)$ can also be used (not to be confused with the DIS structure functions in Section~\ref{DIS}):
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
G_E(Q^2)=F_1(Q^2)-\tau_r\kappa_n F_2(Q^2), \\\
~~~~~G_M(Q^2)=F_1(Q^2)+\kappa_n F_2(Q^2),
\nonumber
\end{equation}
where $\kappa_n$ is the nucleon anomalous magnetic moment.
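Since the two parameterizations are linearly related, one can convert between them and check the round trip numerically. In the sketch below the input values of $G_E$, $G_M$ and $\tau_r$ are arbitrary illustrative numbers, not fits to data:

```python
def sachs_from_dirac_pauli(F1, kF2, tau_r):
    """G_E and G_M from F1 and kappa_n*F2, per the relations above."""
    return F1 - tau_r * kF2, F1 + kF2

def dirac_pauli_from_sachs(GE, GM, tau_r):
    """Inverted relations: F1 = (GE + tau_r*GM)/(1+tau_r),
    kappa_n*F2 = (GM - GE)/(1+tau_r)."""
    return (GE + tau_r * GM) / (1 + tau_r), (GM - GE) / (1 + tau_r)

# Arbitrary illustrative inputs (not measured values):
GE, GM, tau_r = 0.30, 0.85, 0.28
F1, kF2 = dirac_pauli_from_sachs(GE, GM, tau_r)
GE2, GM2 = sachs_from_dirac_pauli(F1, kF2, tau_r)
assert abs(GE2 - GE) < 1e-12 and abs(GM2 - GM) < 1e-12  # round trip closes
```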
The helicity conserving current matrix element generates $F_1(Q^2)$.
$F_2(Q^2)$ stems from the helicity-flip matrix element.
LF quantization of QCD provides an interpretation of $F_1(Q^2)$ and $F_2(Q^2)$ which can then be modeled
using the structural forms for arbitrary twist inherent to the LFHQCD formalism~\cite{Sufian:2016hwn},
see Section~\ref{sec:LFHQCD}. In LF QCD, form factors are obtained from the Drell-Yan-West
formula~\cite{Drell:1969km, West:1970av} as overlaps of the hadronic LFWF solutions
of the LF Hamiltonian $P^-$, Eq.~(\ref{LF Hamiltonian})~\cite{Brodsky:1980zm}.
In particular, $F_2(Q^2)$ stems from the overlap
of $L = 0$ and $L = 1$ LFWFs. For a ground state system, the \emph{leading-twist} of a reaction,
that is, its power behavior in $Q^2$ (or in the LF impact parameter $\zeta$, see Section~\ref{OPE}),
reflects the \emph{leading-twist} $\tau $ of the target wavefunction,
which is equal to the number of constituents in the LF valence Fock state with zero
internal orbital angular momentum.
This result is intuitively clear, since in order to keep the target intact after elastic scattering,
a number $\tau-1$ of gluons of virtuality $\propto Q^2$ must be exchanged between the $\tau$ constituents.
For example, at high-$Q^2$, all nucleon components are resolved and the \emph{twist} is $\tau=3$.
Higher Fock states including additional $q \overline{q}$, $q \overline{q} q \overline{q}$,\ldots components
generated by gluons are responsible for the \emph{higher-twists} corrections.
These constraints are inherent to LFHQCD which can be used to model the LFWFs and thus obtain
predictions for the form factors.
Alternatively, one can parameterize the general form expected from the
\emph{twist} analysis in terms of weights reflecting the ratio of the higher Fock state
probabilities with respect to the leading Fock state wavefunction.
These weights provide the probabilities of finding the nucleon in a higher Fock state, computed from the square of the
higher Fock state LFWFs. Two parameters suffice to describe the world data for
the four spacelike nucleon form factors~\cite{Sufian:2016hwn}.
\noindent \textbf{Deuteron case}
The deuteron is a spin-1 nucleus. Three elastic form factors are
necessary to describe doubly polarized elastic cross-sections:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\frac{d\sigma}{d\Omega} = \sigma_{Mott}\frac{E'}{E}\big(A(Q^2)+B(Q^2)\tan^2(\theta/2)\big)\big(1+A^V + A^T \big),
\label{deuteron pol XS}
\end{equation}
where $A^V$ and $A^T$, the asymmetries stemming respectively from the vector
and tensor polarizations of the deuteron, are
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
A^V=\frac{3P_b P_z}{\sqrt2} \bigg( \frac{1}{\sqrt2}\cos \theta^* T_{10} - \sin \theta^*T_{11}\bigg),
\nonumber
\end{equation}
where $P_b$ is the beam polarization and $P_z$ the deuteron vector polarization, $P_z = (n_+ - n_-)/n_{tot}$. The
$n_i$ are the populations for the spin values $i$, and $n_{tot}=n_+ + n_- + n_0$,
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
A^T=\frac{P_{zz}}{\sqrt{2}}\bigg(\frac{3\cos^2 \theta^{*} -1}{2}T_{20}-\sqrt{\frac{3}{2}}\sin(2\theta^{*})\cos \phi^{*} T_{21}+\sqrt{\frac{3}{2}}\sin^2 \theta^{*} \cos(2\phi^{*})T_{22}\bigg),
\nonumber
\end{equation}
with the deuteron tensor polarization $P_{zz}=(n_+ + n_- - 2n_0)/n_{tot}$.
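The vector and tensor polarizations defined above are simple functions of the three magnetic-substate populations; a few limiting cases make the definitions concrete (the population numbers are illustrative):

```python
def deuteron_polarizations(n_plus, n_zero, n_minus):
    """Vector (P_z) and tensor (P_zz) polarizations from substate populations."""
    n_tot = n_plus + n_zero + n_minus
    P_z = (n_plus - n_minus) / n_tot
    P_zz = (n_plus + n_minus - 2 * n_zero) / n_tot
    return P_z, P_zz

assert deuteron_polarizations(1, 0, 0) == (1.0, 1.0)   # pure m=+1 state
assert deuteron_polarizations(1, 1, 1) == (0.0, 0.0)   # unpolarized
assert deuteron_polarizations(0, 1, 0) == (0.0, -2.0)  # pure m=0: tensor only
```

The last case shows that a target can carry a large tensor polarization with no vector polarization at all, which is what makes $A^T$ an independent observable.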
The seven factors in Eq.~(\ref{deuteron pol XS}), $A$, $B$, $T_{10}$, $T_{11}$, $T_{20}$, $T_{21}$
and $T_{22}$, are combinations of
three form factors (monopole $G_{C}$, quadrupole $G_Q$ and magnetic dipole $G_M$):
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
A & = & G_{C}^2+\frac{8}{9}\tau_r^2G_{Q}^2+\frac{2}{3}\tau_r G_M^2, \\\
B & = & \frac{4}{3}\tau_r(1+\tau_r)G_M^2,
\nonumber
\\
T_{10} & = & -\sqrt{\frac{2}{3}}\tau_r(1+\tau_r)\tan(\theta/2)\sqrt{\frac{1}{1+\tau_r}+\tan^2(\theta/2)}\,G_M^2,
\nonumber
\\
T_{11} & = & \frac{2}{3}\sqrt{\tau_r(1+\tau_r)}\tan(\theta/2)G_M\big(G_C+\frac{\tau_r}{3}G_Q\big),
\nonumber
\\
T_{20} & = & -\frac{1}{\sqrt{2}}\left[\frac{8}{3}\tau_r G_{C}G_{Q}+\frac{8}{9}\tau_r^2G_{Q}^2+\frac{1}{3}\tau_r\left[1+2(1+\tau_r)\tan^2(\theta/2)\right]G_M^2\right],
\nonumber
\\
T_{21} & = & \frac{2}{\sqrt{3}\left[A(Q^2)+B(Q^2)\tan^2(\theta/2)\right]\cos(\theta/2)}\tau_r\left[\tau_r+\tau_r^2\sin^2(\theta/2)\right]^{1/2}G_MG_{C},
\nonumber
\\
\vspace{-0.3cm}
T_{22} & = & \frac{1}{\sqrt{3}\left[A(Q^2)+B(Q^2)\tan^2(\theta/2)\right]}\tau_r G_M^2.
\nonumber
\end{eqnarray}
$P_{zz}$ produces additional quantities in other reactions too:
in DIS, it yields the $b_1(x_{Bj},Q^2)$ and $b_2(x_{Bj},Q^2)$ spin structure functions~\cite{Hoodbhoy:1988am}. The first one,
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
b_1(x_{Bj},Q^2)= \sum_i \frac{e_i^2}{2} \big[2q^0_{\uparrow}(x_{Bj},Q^2)-\big(q^1_{\downarrow}(x_{Bj},Q^2)+q^{-1}_{\downarrow}(x_{Bj},Q^2) \big) \big],
\label{eq:b1 SF}
\end{equation}
has been predicted to be small but measured to be significant by the HERMES experiment~\cite{Airapetian:2005cb}.
For the PDFs $q^{-1,0,1}_{\uparrow,\downarrow}$, the superscript $0$
or $\pm1$ indicates the deuteron helicity and the arrow
the quark polarization direction, all of them referring to the beam axis.
The six quarks of the deuteron eigenstate can be projected onto five different color-singlet Fock states,
only one of which corresponds to a proton-neutron bound state.
The other four ``hidden color" Fock states lead to new QCD phenomena at high $Q^2$~\cite{Brodsky:1976mn}.
\noindent \textbf{Helium 3 case}
The doubly polarized cross-section for elastic lepton-$^3$He scattering is
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\frac{d\sigma}{d\Omega}=\sigma_{Mott}\frac{E'}{E}\left(\frac{G_E^2+\tau_r G_M^2}{1+\tau_r}+2\tau_r G_M^2\tan^2(\theta_{}/2)\right) \bigg(1 \pm \nonumber \\
\frac{1}{\left(\frac{Q^2}{2M\nu+\nu^2}\right)^2(1+\tau_r)G_E^2+\big(\frac{Q^2}{2M\nu+\nu^2}+2\tan^2(\theta/2)\big)\tau_r G_M^2} \times \nonumber \\
\bigg[2\tau_r G_M^2\cos \theta^{*} \tan(\theta/2)\sqrt{\tan^2(\theta/2)+\frac{Q^2}{2M\nu+\nu^2}}+ \nonumber \\
2\sqrt{2\tau_r(1+\tau_r)}G_MG_E\sin \theta^{*} \cos \varphi^{*} \frac{Q^2}{\sqrt{2}\left(2M\nu+\nu^2\right)}\tan(\theta/2)\bigg]\bigg),
\nonumber
\end{eqnarray}
\normalsize
\noindent where the form factors are normalized to the $^3$He electric charge.
The magnetic and Coulomb form factors $F_{m}$ and $F_{c}$ are sometimes
used~\cite{Amroun:1994qj}. They are related to the response functions of a nucleus ($A$, $Z$) by
$F_c = Z G_E$ and $F_m = \mu_A G_M$ where $\mu_A$ is the
nucleus magnetic moment.
\subsubsection{Quasi-elastic scattering \label{qel}}
If the target is a composite nucleus and the transferred energy $\nu$ is greater than the
nuclear binding energy, but still small enough to not resolve
the quarks or excite a nucleon, the scattering loses nuclear coherence.
For example, the lepton may scatter elastically on one of the
nucleons, and the target nucleus breaks up. This is quasi-elastic scattering. Its
threshold with respect to the elastic peak equals the nuclear binding
energy (2.224 MeV for the deuteron, 5.49 MeV for the $^3$He
two-body breakup and 7.72 MeV for its three-body breakup).
Unlike elastic scattering, the nucleons are not
at rest in the laboratory frame since they are
restricted to the nuclear volume.
This Fermi motion causes a Doppler-type broadening of the quasi-elastic peak
around the breakup energy plus $Q^2/(2M)$, the energy transfer in elastic scattering off
a free nucleon. The cross-section shape is nearly Gaussian with
a width of about 115 MeV (deuteron) or 136 MeV ($^3$He)~\cite{Martinez-Consentino:2017ryk}.
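In this picture the quasi-elastic peak position follows directly from the elastic kinematics of a bound nucleon. A minimal sketch (the nucleon mass value and the Gaussian form are our modeling inputs; the binding energy and width are the values quoted above):

```python
from math import exp, pi, sqrt

M_N = 0.9383  # nucleon mass in GeV (our input value)

def qe_peak_position(Q2, E_bind):
    """Quasi-elastic peak center: breakup energy plus free-nucleon elastic transfer."""
    return E_bind + Q2 / (2 * M_N)

def qe_shape(nu, Q2, E_bind, sigma):
    """Schematic Gaussian quasi-elastic peak (normalized to unit area)."""
    mu = qe_peak_position(Q2, E_bind)
    return exp(-0.5 * ((nu - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Deuteron at Q^2 = 1 GeV^2: binding energy 2.224 MeV, quoted width ~115 MeV.
nu_peak = qe_peak_position(1.0, 0.002224)  # ~0.535 GeV
assert abs(nu_peak - 0.535) < 0.001
```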
This model where the nucleon is assumed to be virtually free
(Fermi gas model) provides a qualitative description of the cross-section, but it does not predict the
transverse and longitudinal components of the cross-section, nor the
distortions of its Gaussian shape. To
account for this, the approximation of free nucleons is abandoned and
a model for the nucleon-nucleon interaction is introduced. The
simplest implementation is \emph{via} the ``Plane Wave Impulse Approximation"
(PWIA), where the initial and final particles (the lepton and nucleons)
are described by plane waves in a mean field. In this approach, all nucleons are quasi-free and therefore
on their mass-shell, including the nucleon absorbing the virtual photon whose momentum
is not changed by the mean field. The other nucleons are passive spectators of the reaction.
%
The nucleon momentum distribution is given by the spectral function $P(k,E)$.
Thus, the PWIA hypothesis enables the nuclear tensor to be expressed in terms of the hadronic ones.
The PWIA model can be improved by accounting for
1) Coulomb corrections on the lepton lines which distort the lepton plane waves.
This corrects for the long distance electromagnetic interactions between the lepton
and the nucleus whose interaction is no longer approximated by a single hard photon exchange;
2) Final state interactions between the nucleon absorbing the hard photon
and the nuclear debris;
3) Exchange of mesons between the
nucleons (meson exchange currents) which is dominated by one pion exchange; and
4) Intermediate excited nucleon configurations such as the Delta-isobar contribution.
\subsection{Summary}
We have described the phenomenology for spin-dependent inclusive lepton
scattering off a nucleus. By probing the nucleon structure governed by QCD, these reactions help
to elucidate QCD's nonperturbative aspects. The spin degrees of freedom allow for additional observables
which can address more complicated effects.
To interpret the observables and understand what they tell us about QCD,
a more fundamental theoretical framework is needed.
We now outline the most important theoretical approaches connected to perturbative and
nonperturbative spin structure studies.
\section{Computation methods \label{Computation methods}}
The strong non-linearity inherent to the QCD Lagrangian makes
traditional perturbation theory inadequate to study the nucleon structure.
In this Section, four important approaches are presented.
Other fruitful approaches to strong-QCD exist, such as the Dyson-Schwinger equations,
the functional renormalization group method, or the stochastic quantization method.
Since they have been less used in the nucleon spin structure context, they will not be discussed here.
An overview is given in~\cite{Deur:2016tte}, and an example of a Dyson-Schwinger equation
calculation predicting nucleon spin observables can be found in~\cite{Roberts:2011wy}.
Many other models also exist, some will be briefly described when we compare their
predictions to experimental results.
The approaches discussed here are the Operator
Product Expansion (OPE), Lattice Gauge Theory (LGT), Chiral Perturbation Theory ($\chi$PT)
and LF Holographic QCD (LFHQCD).
They cover different QCD domains and are thus complementary:
$\bullet$ The OPE covers the pQCD domain (Section~\ref{DIS}), including nonperturbative
\emph{twist} corrections to the parton model plus the \emph{DGLAP} framework.
The OPE breaks down at low $Q^2$ due to 1) the
magnitude of the nonperturbative corrections; 2) the precision to which
$\alpha_s(Q^2)$ is known; and 3) the poor convergence of the $1/Q^n$ series.
The technique is thus typically valid for $Q^2\gtrsim1$~GeV$^2$.
$\bullet$ LGT covers both the nonperturbative and perturbative regimes.
It is limited at high $Q^2$ by the lattice mesh size $a$ (typically 1/$a \sim$ 2 GeV) and at
low $Q^2$ by 1) the total lattice size; 2) the large value of the pion mass used in LGT simulations (up to 0.5 GeV);
and 3) the difficulty of treating nonlocal operators.
$\bullet$ $\chi$PT, unlike OPE and LGT, uses effective degrees of freedom.
However, calculations are limited to small $Q^2$ (a few tenths of GeV$^2$)
because the momenta involved must be smaller than the pion mass (0.14 GeV).
\noindent The forward Compton scattering amplitude is calculable with the above
techniques. It can also be parameterized at any
$Q^2$ using \emph{sum rules}, see Section~\ref{sum rules}.
This is important for nucleon structure studies since
it allows one to connect the different QCD regimes.
$\bullet$ LFHQCD is typically restricted to $Q^2 \lesssim1$~GeV$^2$, a domain characterized by the hadronic mass scale
$\kappa$, and with a higher reach than $\chi$PT. The restriction comes from
ignoring short-distance effects and working in the strong-coupling regime.
However, in cases involving soft observables, LFHQCD
may extend to quite large $Q^2$~\cite{Sufian:2016hwn}. For example,
it describes well the nucleon form factors up to $Q^2 \sim 30$ GeV$^2$~\cite{Brodsky:2014yha}.
\noindent Although forward Compton scattering amplitudes in the nonperturbative regime have not yet been calculated
with the LFHQCD approach (they are available in the perturbative regime, see~\cite{Brodsky:2000xy}),
LFHQCD plays an important role in connecting the low and high momentum regimes of QCD:
the QCD effective charge~\cite{Grunberg:1980ja} can be computed in LFHQCD
and then be used in pQCD spin \emph{sum rules}
to extend it to the strong QCD domain, thereby linking the hadronic and partonic descriptions of QCD
(see Section~\ref{sec:perspectives}).
\subsection{The Operator Product Expansion \label{OPE}}
The OPE technique illuminates the features of matrix elements of the product of local operators. It is used to compute the
$Q^2$-dependence of structure functions and other quantities in the
DIS domain, as well as to isolate nonperturbative contributions
that arise at small $Q^2$. It also allows the derivation of
relations constraining physical observables, such as the Callan-Gross and Wandzura-Wilczek relations,
Eqs.~(\ref{eq:Callan-gross}) and (\ref{eq:g2ww}), respectively, as well as \emph{sum rules}
together with their $Q^2$-dependence. Due to the behavior of the structure functions under
crossing symmetry, odd-moment sum rules
are derived from the OPE for $g_1$ and $g_2$, whereas even-moment sum rules are predicted
for $F_1$ and $F_2$~\cite{Manohar:1992tz}.
The OPE was developed as an alternative to the Lagrangian approach of quantum field theory
in order to carry out nonperturbative calculations~\cite{Wilson:1969zs}.
The OPE separates the perturbative contributions of a product of local
operators from its nonperturbative contributions by
focussing on distances ({\it i.e.}, inverse momentum scales) that are much smaller than the confinement scale.
Although DIS is LC dominated,
not short-distance dominated (Section~\ref{LC dominance and LF quantization}), the LC and
short-distance criteria are effectively equivalent for DIS in the IMF. However, there are
instances of LC dominated reactions; { \it e.g.}, inclusive hadron production in $e^+ e^-$ annihilation,
for which LC dominance and the short-distance limit are not equivalent~\cite{Jaffe:1996zw}.
In those cases, the OPE does not apply.
\noindent In the small-distance limit, the product of two local operators can be expanded as:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\lim_{d\to0}\sigma_a(d)\sigma_{b}(0)=\lim_{d\to0} \sum_{k}C_{abk}(d)\sigma_{k}(0)
\label{eq:OPE}.
\end{equation}
The Wilson coefficients $C_{abk}$ are singular functions containing
perturbative information and are therefore perturbatively calculable. The
$\sigma_{k}$ are regular operators containing the nonperturbative contributions. In DIS
this formalism is used to relate the
product of currents -- such as those needed to calculate
Compton scattering amplitudes -- to a basis of local operators. Such a basis
is given, {\it e.g.}, in Ref.~\cite{Manohar:1992tz}.
An operator $\sigma_{k}$ contributes to the cross-section by a
factor of $x_{Bj}^{-n}(M/Q)^{D-2-n}$ where $n$ is the spin and $D$ is the
energy dimension of the operator. This defines the \emph{twist}
$\tau \equiv D-n$. Eq.~(\ref{eq:OPE}) provides a $Q^{2-\tau}$ power series in which
the lowest \emph{twist} $C_{abk}$ functions are the most singular and thus are the most dominant at short
distances (large $Q$). Contrary to what Eq.~(\ref{eq:OPE}) might suggest, the $Q^2$-dependence
of a \emph{twist} term coefficient ({\it i.e.}, from pQCD radiative corrections) comes mainly
from the renormalization of the operator $\sigma_{k}$ rather than from the Wilson coefficient $C_{abk}$.
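The power counting can be made explicit with a toy term: since an operator contributes with a factor $(M/Q)^{\tau-2}$, a twist-$\tau$ contribution is suppressed relative to leading twist by $(\mu^2/Q^2)^{(\tau-2)/2}$, so twist-4 falls as $1/Q^2$, twist-6 as $1/Q^4$, etc. A schematic illustration ($\mu^2$ and the coefficient are arbitrary illustrative numbers):

```python
def twist_term(Q2, tau, mu2=0.5, coeff=1.0):
    """Relative size of a twist-tau contribution: coeff * (mu2/Q2)**((tau-2)/2).
    mu2 and coeff are arbitrary illustrative values, not fitted parameters."""
    return coeff * (mu2 / Q2) ** ((tau - 2) / 2)

# Leading twist (tau = 2) is Q2-independent in this counting; twist-4 falls as
# 1/Q2 and twist-6 as 1/Q2**2 when Q2 is doubled:
assert twist_term(10.0, 2) == twist_term(1.0, 2) == 1.0
assert abs(twist_term(2.0, 4) / twist_term(1.0, 4) - 0.5) < 1e-12
assert abs(twist_term(2.0, 6) / twist_term(1.0, 6) - 0.25) < 1e-12
```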
The twist of an operator has a simple origin in the \emph{LF-quantization} formalism: it measures the
excursion out of the LC. That is, it is related to the transverse
vector $x_\bot$, or equivalently to the invariant impact parameter $\zeta = x_\bot \sqrt{x_{Bj}(1-x_{Bj})}$.
The \emph{higher-twist} operators correspond to the number of ``bad" spinor components
(see Section~\ref{LC dominance and LF quantization}) entering
the expression of the distribution functions, which gives the $\zeta^\tau$ power behavior of the LFWFs.
At high-$Q^2$, \emph{twist} $\tau = 2$ dominates: it is at this order that
the parton model, with its \emph{DGLAP} corrections, is applicable.
When $Q^2$ becomes small (typically a few GeV$^2$)
the \emph{higher-twist} operators must be accounted for. These nonperturbative corrections
are of two kinds:
\noindent $\bullet$ {\bf Dynamical twist corrections.} They are typically due to amplitudes involving \emph{hard} gluon exchange
between the struck quark and the rest of the nucleon, effectively a nonperturbative object.
Since these \emph{twists} characterize the nucleon structure, they are
relevant to this review. Dynamical \emph{twist} contributions reflect the fact that
the effects of the binding and confinement of the quarks become
apparent as $Q^2$ decreases. Ultimately, quarks react coherently when one of them is struck by the virtual photon.
The 4-momentum transfer is effectively distributed among the quarks
by the \emph{hard} gluons whose propagators and couplings generate $1/Q$ \emph{power corrections}.
This is also the origin of the QCD \emph{counting rules}~\cite{Brodsky:1973kr}; see Section~\ref{unpo cross-section}.
\noindent $\bullet$ {\bf Kinematical finite-mass corrections.}
The existence of this additional correction to scale invariance can be understood by
recalling the argument leading to the invariance: At $Q^2\to\infty$, masses
are negligible compared to $Q$ and no specific distance scale exists since quarks are pointlike.
At $Q$ values of a few GeV, however, $M/Q$ is no longer negligible, a scale appears,
and the consequent scaling corrections must be functions of $M/Q$. Formally, these corrections arise
from the requirement that the local operators $\sigma_{k}$ are traceless~\cite{Jaffe:1996zw}.
These kinematical \emph{higher-twists} are systematically calculable~\cite{Blumlein:1998nv}.
For an observable $A$ expressed as a power series \label{twist-mix}
$A=\sum_\tau\frac{\mu_\tau}{Q^{\tau-2}}$, the parameters $\mu_\tau$
are themselves sums of kinematical \emph{twists} $\tau' \leq \tau$, each of them being
a perturbative series in $\alpha_{s}$ due to pQCD radiative corrections.
Since $\alpha_{s}$ is itself a series in the QCD \emph{$\beta$-function}~\cite{Deur:2016tte}, the
approximant of $A$ is a four-fold sum.
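To make the nesting of these expansions explicit, the structure can be sketched as follows (the coefficients $a^{(i)}_{\tau,\tau'}$ are schematic placeholders, not the actual process-dependent coefficients):
\begin{equation}
A(Q^2)=\sum_{\tau}\frac{\mu_\tau(Q^2)}{Q^{\tau-2}}, \qquad
\mu_\tau(Q^2)=\sum_{\tau'\leq\tau}\,\sum_{i\geq0} a^{(i)}_{\tau,\tau'}\,\alpha_s^{\,i}(Q^2),
\end{equation}
with $\alpha_s(Q^2)$ itself given by the \emph{$\beta$-function} series, which supplies the fourth summation.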
The nonperturbative nature of \emph{twists} implies that they can only be calculated using
models or nonperturbative approaches such as Lattice Gauge Theory,
LFHQCD or Sum Rule techniques. They are also obtainable
from experimental data (see Section~\ref{sub:HT Extraction}).
The construction and evaluation of \emph{higher-twist} contributions using LFWFs,
in particular for the twist~3 $g_2$, are given in Ref.~\cite{Braun:2011aw}.
\subsection{Lattice gauge theory \label{LGT}}
LGT employs the path integral formalism~\cite{Dirac:1933xn}. It
provides the evolution amplitude from an initial state $\left|x_i \right \rangle $
to a final state $\left|x_f \right \rangle$ by summing over all
spacetime trajectories linking $x_i$ to $x_f$. In this sum, a path is weighted according to
its action $S$. For instance, the propagator of a one-dimensional system is
$\left\langle x_f\right |e^{-iHt/\hbar}\left|x_i\right\rangle =\int e^{-iS\left[x(t)\right]/\hbar} Dx(t)$
where $\int Dx$ sums over all possible trajectories with $x(t_f)=x_f$ and $x(t_i)=x_i$.
Here $\hbar$ is explicitly shown so that the relation between path integrals and the
principle of least action is manifest; the classical path ($\hbar\to 0$)
corresponds to the smallest $S$ value. The fact that $\hbar\neq 0$ allows for deviations
from the classical path due to quantum effects.
Path integrals are difficult to evaluate analytically, or even numerically, because for a 4-dimensional space
an $n$-dimensional integration is required, where $n = 4 \times$(number of possible paths).
The ensemble of possible paths being infinite, it must be restricted to a representative sample over which the integration
can be done.
The standard numerical integration method for path integrals is the Monte Carlo technique in Euclidean
space: a Wick rotation
$it\rightarrow t$~\cite{Wick:1954eu} provides a weighting factor $e^{-S_E}$, which makes the integration
tractable, contrary to the oscillating factor $e^{-iS}$ which appears in Minkowski space.
Here, $S_E$ is the Euclidean action. Such an approach allows the computation of correlation functions
$\left\langle A_1\ldots A_n\right\rangle = \int \mbox{ }A_1\ldots A_ne^{-S_E}Dx/\int e^{-S_E} Dx$,
where $A_i$ is the gauge field value at $x_i$.
In particular, the two-point correlation function $\left\langle x_1x_2\right\rangle $
provides the boson propagator. No analytical method is known to compute $\left\langle A_1\ldots A_n\right\rangle$
when $S_E$ involves interacting fields, except when the interactions are weak. In that case, the
integral can be evaluated analytically by expanding the exponential involving the interaction term, effectively a
perturbative calculation. If the interactions are too strong, the integration must be performed numerically.
In LGT, the space is discretized as a lattice of sites, and paths linking the sites are generated.
In the numerical integration program, the path generation probability follows
its $e^{-S_E}$ weight, with $S_E$ calculated for that specific path.
This is done using the Metropolis sampling method~\cite{metropolis}.
The computational time is reduced by using the previous path to produce the next one.
A path of action $S_1$ is randomly varied into a new path of
action $S_2$. If $S_2<S_1$ the new path is added to the sample. Otherwise, it is accepted
with probability $e^{-(S_2-S_1)}$ or rejected. However,
intermediate paths must be generated to provide a path sufficiently decorrelated
from the previously used path.
Correlation functions are then obtained by averaging the integrand over the sampled paths:
since the paths are generated with probability $e^{-S_E}$, the unweighted
sum $\sum_{paths} x_1 \ldots x_n$ approximates the weighted integral $\int x_1 \ldots x_n e^{-S_E} Dx$, up to normalization.
The statistical uncertainty of the procedure decreases as the square root of the number of generated paths.
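As an illustration of the sampling procedure just described, the following sketch applies Metropolis sampling to the Euclidean path integral of a one-dimensional harmonic oscillator, a standard toy model; the lattice parameters, update step, and number of sweeps are arbitrary illustrative choices.

```python
import numpy as np

# Metropolis sampling of the Euclidean path integral for a 1D harmonic
# oscillator, S_E = sum_j [ (x_{j+1}-x_j)^2/(2a) + a*x_j^2/2 ] (m = omega = 1).
# A site is varied at random; the move is kept if it lowers S_E, otherwise
# it is accepted with probability exp(-(S2 - S1)), as described in the text.
rng = np.random.default_rng(1)
a, n_sites = 0.5, 20            # lattice spacing and number of time slices
x = np.zeros(n_sites)           # initial ("cold") path

def action_change(x, j, new):
    """Change in S_E when site j is moved to 'new' (periodic path)."""
    jp, jm = (j + 1) % n_sites, (j - 1) % n_sites
    old = x[j]
    kin = ((x[jp] - new)**2 + (new - x[jm])**2
           - (x[jp] - old)**2 - (old - x[jm])**2) / (2 * a)
    pot = a * (new**2 - old**2) / 2
    return kin + pot

def sweep(x):
    for j in range(n_sites):
        new = x[j] + rng.uniform(-1.4, 1.4)
        dS = action_change(x, j, new)
        if dS < 0 or rng.random() < np.exp(-dS):
            x[j] = new

for _ in range(500):            # thermalization
    sweep(x)

x2, n_meas = 0.0, 4000
for _ in range(n_meas):
    for _ in range(5):          # intermediate sweeps to decorrelate paths
        sweep(x)
    x2 += np.mean(x**2)
x2 /= n_meas
print(f"<x^2> = {x2:.3f}")      # continuum value is 1/(2*omega) = 0.5
```

The unweighted average of $x^2$ over the sampled paths directly estimates the weighted correlator, exactly as in the importance-sampling sum above.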
Gauge invariance in lattice gauge theory is enforced by the introduction
of \emph{gauge links} between the lattice sites~\cite{Wilson:1974sk}.
The \emph{link variable} is $U_{\overrightarrow{\mu}}=\mbox{exp}(-i\int_{x}^{x+a\overrightarrow{\mu}}gA\mbox{ }dy)$,
where $\overrightarrow{\mu}$ is an elementary vector of the Euclidean
space, $x$ is a lattice site, $a$ is the lattice spacing and $g$ the bare coupling.
The link $U_{\overrightarrow{\mu}}$ transforms covariantly under gauge transformations; explicitly
gauge-invariant quantities are obtained from the traces of closed paths of links (``\emph{Wilson loops}")
$U_1\ldots U_n$~\cite{Wilson:1974sk}.
In the continuum limit ($a\to 0$), the simplest
loop, a square of side $a$ (the \emph{plaquette}), dominates. However, for discretized space, $a\neq 0$,
corrections from larger loops must be included.
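The gauge invariance of such closed loops can be checked numerically in a minimal setting. The sketch below builds a single SU(2) plaquette from random link matrices and verifies that the trace of the loop is unchanged by a random gauge transformation; the SU(2) choice and the single-plaquette setup are simplifications for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix from a unit quaternion (a0, a1, a2, a3)."""
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a0, a1, a2, a3 = v
    return np.array([[a0 + 1j * a3, a2 + 1j * a1],
                     [-a2 + 1j * a1, a0 - 1j * a3]])

# Links around one plaquette with corners x, x+mu, x+mu+nu, x+nu:
# U[0] = U_mu(x), U[1] = U_nu(x+mu), U[2] = U_mu(x+nu), U[3] = U_nu(x)
U = [random_su2() for _ in range(4)]
# independent gauge transformations g at the four corners
g = [random_su2() for _ in range(4)]

def plaquette_trace(U):
    # path-ordered product around the closed square (Wilson loop)
    loop = U[0] @ U[1] @ U[2].conj().T @ U[3].conj().T
    return np.trace(loop).real

before = plaquette_trace(U)
# gauge transform each link: U(x -> y) -> g(x) U g(y)^dagger
Ut = [g[0] @ U[0] @ g[1].conj().T,   # x -> x+mu
      g[1] @ U[1] @ g[2].conj().T,   # x+mu -> x+mu+nu
      g[3] @ U[2] @ g[2].conj().T,   # x+nu -> x+mu+nu
      g[0] @ U[3] @ g[3].conj().T]   # x -> x+nu
after = plaquette_trace(Ut)
print(abs(before - after))           # the loop trace is gauge invariant
```

The individual links change under the transformation, but the trace of the closed loop does not: the $g$ factors cancel pairwise around the square.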
High momenta, $p\gtrsim1/a$, are eliminated by the discretization process, but if $a$ can be made
small enough, LGT results can be matched to pQCD results. The domain where
LGT and pQCD are both valid provides the renormalization procedure for LGT.
The discussion above describes the case of a \emph{pure gauge} field.
Including non-static quarks is not simple due to their fermionic nature.
The introduction of quark fields leads to the ``fermion doubling problem" which
multiplies the number of fermionic degrees of freedom and creates spurious particles. Several methods exist
to avoid this problem, {\it e.g.}, the Ginsparg-Wilson~\cite{Ginsparg:1981bj} method, which preserves a lattice-modified chiral
symmetry, or the ``staggered fermions" method, which retains a remnant chiral symmetry by distributing the spinor components over neighboring sites, effectively using nonlocal
operators~\cite{Kogut:1974ag}. These fixes significantly increase the computation time. When the
quarks are included, the action becomes $S_E=S_A-\mbox{ln}\left(\mbox{Det}(K)\right)$
with $S_A$ the pure field action and $K$ is related to the
Dirac equation operator. Simplifying the computation by ignoring dynamical quarks corresponds to $\mbox{Det}(K)=1$
(\emph{quenched approximation}). In particular, it eliminates
the effects of quark anti-quark pair creation from the \emph{instant time} vacuum.
LGT has become the leading method for nonperturbative studies, but it still has serious limitations~\cite{Lin:2017snn}:
\noindent 1) ``Critical slowing down" limits the statistical precision.
It stems from the need for $a$ to be smaller than the studied phenomena's
characteristic scales, such that errors from discretization are small.
The relevant scale is the correlation length $L_{c}$ defined by
$\left\langle x_1x_2\right\rangle \sim e^{-x/L_{c}}$.
$L_{c}$ is typically small, except near critical points. Thus, calculations
must be done near such points, but long $L_{c}$ makes the technique used to
generate decorrelated paths inefficient. For QCD the computational cost
scales as $\left(\frac{L_{R}}{a}\right)^{4}\left(\frac{1}{a}\frac{1}{m_{\pi}^2a}\right)$,
where $m_\pi$ is the pion mass and $L_{R}$ is the lattice size~\cite{Lepage:1998dt}. The first factor comes from the
number of sites and the second factor from the critical slowing down.
\noindent 2) Another limitation is the extrapolation to the physical pion mass. LGT calculations are
often performed where $m_\pi$ is greater
than its physical value in order to reduce the critical slowing down, but a new uncertainty arises from
the extrapolation of the LGT results to the physical $m_\pi$ value. This uncertainty can be
minimized by using $\chi$PT~\cite{Bernard:2006gx} to guide the extrapolation. Some
LGT calculations can currently be performed at the physical $m_{\pi}$,
although this possibility depends on the observable.
A recent calculation of the quark and gluon contributions to the proton spin,
at the physical $m_{\pi}$, is reported in~\cite{Alexandrou:2016tuo}.
\noindent 3) Finite lattice-size systematic uncertainties arise from the need for $a$ to be
small enough that high momenta reach the pQCD domain, while keeping the
number of sites small enough for practical calculations.
This constrains the total lattice
size, which must remain large enough to contain the physical system and minimize boundary effects.
\noindent 4) Local operators are convenient for LGT
calculations since the selection or rejection of a given path entails calculating
the difference between the two actions, $S_2-S_1$. For local actions, $S_2-S_1$ involves
only one site and its neighbors (since $S$ contains derivatives). In four dimensions this implies only 9
operations whereas a nonlocal action necessitates calculations at all sites. The quark OAM
in the Ji expansion of Eq.~(\ref{eq:spin SR}) involves local operators
and is thus suitable for lattice calculations. In contrast,
calculations of nonlocal operators, such as those required to compute structure
functions, are impractical. Furthermore, quantities
such as PDFs are time-dependent in the \emph{instant form}, and thus cannot be computed directly since
the lattice time is the Euclidean time $ix^0$. (They are, however, pure spatial correlation functions, {\it i.e.}, time-independent, when using the LF form.)
As discussed below, structure functions can still be calculated in LGT by computing their moments, or
by using a matching procedure that interpolates the high-momentum LGT calculations and LFQCD distributions.
\subsubsection{Calculations of structure functions}
An example of a structure function defined through a nonlocal operator is $g_3$,
Eq.~(\ref{eq:g3}). It depends on
the quark field $\psi$ evaluated at the 0 and $\lambda n$ loci. As discussed, the OPE provides
a local operator basis. Calculable quantities involve currents such as the
quark axial current $\overline{\psi}\gamma_{\mu}\gamma_{5}\psi$\label{axial current}. These currents
correspond to moments of structure functions. To recover a structure function, {\it e.g.}, $g_1$,
the moments $\Gamma_1^n\equiv\int x^{n-1}g_1dx$
can be calculated and then \emph{inverse-Mellin-transformed} from moment space
to $x_{Bj}$-space.
However, the larger the value of $n$, the higher the degree of the derivatives in the moments (see e.g.
Eqs.~(\ref{eq:a2}) and (\ref{eq:a2op})), which increases their non-locality.
Thus, in practice, only moments up to $n=3$ have been calculated in LGT,
which is insufficient to accurately obtain structure functions (see {\it e.g.},
Refs.~\cite{Gockeler:1995wg, Gockeler:2000ja, Negele:2002vs, Hagler:2003jd} for
calculations of $\Gamma_{1,2}^n$ and discussions).
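The limitation of knowing only a few low moments can be illustrated numerically: two different toy functions can share their first three moments exactly, so those moments alone cannot determine the $x_{Bj}$-dependence. The functions below are arbitrary illustrations, not realistic structure functions.

```python
import numpy as np

def integrate(f, x):
    # trapezoidal rule on a uniform grid
    return float(np.sum((f[1:] + f[:-1]) / 2) * (x[1] - x[0]))

def p3_shifted(x):
    # shifted Legendre polynomial P3(2x-1): orthogonal to 1, x, x^2 on [0,1]
    z = 2 * x - 1
    return (5 * z**3 - 3 * z) / 2

def g1_a(x):
    return x * (1 - x)**3        # arbitrary toy "structure function"

def g1_b(x):
    # same first three moments as g1_a, but a different function of x
    return g1_a(x) + 0.05 * p3_shifted(x)

x = np.linspace(0.0, 1.0, 100001)
moments_a = [integrate(x**(n - 1) * g1_a(x), x) for n in (1, 2, 3)]
moments_b = [integrate(x**(n - 1) * g1_b(x), x) for n in (1, 2, 3)]
print(moments_a)                  # equal to moments_b within integration error
print(moments_b)
print(g1_a(0.2) - g1_b(0.2))      # yet the functions differ pointwise
```

Since any polynomial orthogonal to $1, x, x^2$ can be added without changing $\Gamma^{1,2,3}$, reconstructing the $x_{Bj}$-dependence requires many more moments than are currently tractable on the lattice.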
The \emph{higher-twist} terms discussed in Section~\ref{OPE} have the same problem, with an
additional one coming from the twist mixing discussed on page~\pageref{twist-mix}.
The mixing brings additional $1/a^2$ terms which diverge when $a\to0$.
This problem can be alleviated in particular cases by using \emph{sum rules} which relate a moment of a structure function,
whatever its twist content, to a quantity calculable on the lattice.
\subsubsection{Direct calculation of hadronic PDFs:
Matching LFQCD to LGT \label{Ji's LGT method}}
A method to avoid LGT's non-locality difficulty and compute directly the $x$-dependence of parton distributions
has recently been proposed by X.~Ji~\cite{Ji:2013dva}.
A direct application of LGT in the IMF is impractical because the $P \rightarrow \infty$ limit using ordinary time implies that $a \to 0$.
Since LFQCD is boost invariant (see Section~\ref{LC dominance and LF quantization}) calculating
LC observables using \emph{LF quantization} would fix this problem. However, direct LC calculations are not
possible on the lattice since it is based on Euclidean -- rather than real -- instant time and because the LC gauge
$A^+=0$ cannot be implemented on the lattice.
To avoid these problems, an operator $O(P,a)$ related to the
desired nonperturbative PDF is introduced and computed as usual using LGT; it is then evaluated at a large
3-momentum oriented, {\it e.g.}, toward the $x^3$ direction. The momentum-dependent
result (in the \emph{instant form}, except that the time is Euclidean: $ix^0$) is called a quasi-distribution, since it is not
the usual PDF as defined on the LC or IMF. In particular, the range of $x_{Bj}$ is not
constrained by $0<x_{Bj}<1$. The quasi-distribution computed on the lattice is then related to its LC counterpart
$o(\mu)$ through a matching condition $O(P,a) = Z(\mu/P)\, o(\mu) + \sum_{n} C_n/P^{2n}$, where the sum
represents higher-order power-law contributions. This matching is possible since the operators $O(P,a)$ and
$o(\mu)$ encompass the same nonperturbative physics. The matching coefficient $Z(\mu/P)$ can be computed
perturbatively~\cite{Ji:2014lra, Xiong:2013bka}. It contains the effects arising from:
1) the particular gauge choice made in the LGT calculation, although it cannot be the LC gauge $A^+=0$;
and
2) choosing a different frame and quantization time when computing quantities using \emph{LF quantization} and
Euclidean \emph{instant time} quantization in the IMF.
A special lattice with finer spacing $a$ along $ix^0$ and $x^3$ is needed in order to compensate for the Lorentz
contraction at large $P^3$. Each of these two directions requires a discretization enhanced by a factor $\gamma$
(the Lorentz factor of the boost), which becomes large for small-$x_{Bj}$ physics.
The computed PDFs, {\it i.e.}, the leading \emph{twist} structure functions, can be calculated for high and moderate $x_{Bj}$, as well
as the kinematical and dynamical \emph{higher-twist} contributions.
How to compute $\Delta G$ and $L_q$
with this method is discussed in Refs.~\cite{Ji:2013fga, Hatta:2013gta, Zhao:2015kca},
and Ref.~\cite{Lin:2017snn} reviews the method and prospects. Improvements of Ji's method
have been proposed, such as {\it e.g.}, the use of pseudo-distributions~\cite{Orginos:2017kos}
instead of quasi-distributions.
The quark OAM definition using either the Jaffe-Manohar or Ji decomposition,
see Section~\ref{SSR components}, corresponds to different choices of the \emph{gauge
links}~\cite{Ji:2012sj, Rajan:2016tlg, Hatta:2011ku, Burkardt:2012sd, Engelhardt:2017miy}.
Results of calculations related to nucleon spin structure are given in
Refs.~\cite{Gamberg:2014zwa, Lin:2014zya, Alexandrou:2015rja, Chen:2016utp}.
In particular, Ji's method was applied recently to computing $\Delta G$~\cite{Yang:2016plb}.
Although the validity of the matching
obtained in this first computation is not certain, these efforts represent
an important new development in the nucleon spin structure studies.
More generally, the PDFs, GPDs, TMDs and Wigner distributions are in principle calculable
with the innovative approaches described here, which are designed to circumvent the inherent
difficulties in the lattice computation of parton distributions.
\subsection{Chiral perturbation theory \label{sub:Chipt}}
$\chi$PT is an effective low-energy field theory consistent with the chiral symmetry of QCD, in which
the quark masses, the pion mass and the particle momenta can be taken small compared to
the nucleon mass. Since $M_n \approx 1$ GeV, $\chi$PT is typically restricted to the domain
$Q^2 \lesssim 0.1$~GeV$^2$. The chiral approach is valuable for nucleon spin studies
since it allows the extension of photoproduction spin \emph{sum rules} to non-zero $Q^2$, such as
the Gerasimov-Drell-Hearn sum rule~\cite{Gerasimov:1965et} as well as polarization
sum rules~\cite{Drechsel:2002ar, Lensky:2017dlc},
as first done in Ref.~\cite{Bernard:1992nz}.
Several chiral-based calculations using different approximations are available~\cite{Bernard:2002bs}-\cite{Kao:2002cp}.
For the most recent applications, see Refs.~\cite{Bernard:2012hb}-\cite{Lensky:2016nui}.
\subsubsection{Chiral symmetry in QCD \label{chiral_conformal_sym}}
The Lagrangian for a free spin 1/2 particle is $\mathcal{L}$= $\overline{\psi}(i\gamma_{\mu}\partial^{\mu}-m)\psi$.
The left-hand Dirac spinor is defined as $P_{l}\psi=\psi_{l}$, with $P_{l}=(1-\gamma_{5})/2$ the
left-hand helicity state projection operator. Likewise, $\psi_{r}$ is defined with
$P_{r}=(1 + \gamma_{5})/2$. If $m=0$ then $\mathcal{L}\mathcal{=L}_l + \mathcal{L}_r$ where
$\psi_{l}$ and $\psi_{r}$ are the eigenvectors of $P_{l}$ and $ P_{r}$, respectively:
the resulting Lagrangian decouples into two independent contributions. Thus, two classes of
symmetrical particles with right-handed or left-handed helicities can be distinguished.
Chiral symmetry is assumed to hold approximately for light quarks.
If quarks were exactly massless, then
$\mathcal{L}_{QCD} = \mathcal{L}^l _{quarks} + \mathcal{L}^r_{quarks} + \mathcal{L}_{int} + \mathcal{L}_{gluons}$.
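The projector algebra underlying this decoupling can be verified directly with explicit Dirac matrices (Dirac representation); this is a numerical check for illustration, not part of the original derivation.

```python
import numpy as np

# Dirac matrices (Dirac representation) and the chirality projectors
# P_l = (1 - gamma5)/2 and P_r = (1 + gamma5)/2 defined in the text.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gk = [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
g5 = 1j * g0 @ gk[0] @ gk[1] @ gk[2]
I4, Z4 = np.eye(4), np.zeros((4, 4))
Pl, Pr = (I4 - g5) / 2, (I4 + g5) / 2

# P_l and P_r are complementary orthogonal projectors
assert np.allclose(Pl @ Pl, Pl) and np.allclose(Pr @ Pr, Pr)
assert np.allclose(Pl @ Pr, Z4) and np.allclose(Pl + Pr, I4)

# gamma^mu P_l = P_r gamma^mu: the kinetic term psi-bar gamma^mu d_mu psi
# splits into independent left- and right-handed pieces, while a mass term
# psi-bar psi mixes the chiralities (P_r P_l = 0 removes its diagonal
# left-left and right-right pieces).
for g in [g0] + gk:
    assert np.allclose(g @ Pl, Pr @ g)
print("chirality projector identities verified")
```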
Massless Goldstone bosons can be generated by spontaneous symmetry breaking.
The pion spin-parity and mass, the latter being much smaller than that of other hadrons,
allow the identification of the pion with the Goldstone boson. Non-zero quark masses -- which explicitly
break chiral symmetry -- then lead to the non-zero pion mass. The $\chi$PT calculations can be
extended to massive quarks by adding a perturbative term $\overline{\psi}m\psi$ which explicitly
breaks the chiral symmetry. The much larger masses of other hadrons
are assumed to come from spontaneous symmetry breaking caused by
quantum effects; {\it i.e.}, dynamical symmetry breaking. Calculations of observables at small
$Q^2$ use an ``effective" Lagrangian expressed in terms of hadronic fields, which incorporates chiral symmetry.
The resulting perturbative series is a function of $m_{\pi}/M_n$ and the momenta of the on-shell particles involved
in the reaction.
\subsubsection{Connection to conformal symmetry \label{conformal_sym}}
Once the quark masses are neglected, the classical QCD Lagrangian
$\mathcal{L}_{QCD}$ contains no dimensionful parameter: QCD is apparently scaleless
and effectively \emph{conformal}.
This observation allows one to apply the
AdS/CFT duality~\cite{Maldacena:1997re} to semi-classical QCD, which is
the basis for LFHQCD discussed next.
The strong force is effectively \emph{conformal} at high-$Q^2$ (Bjorken scaling), and at low $Q^2$, one observes the
\emph{freezing} of $\alpha_s(Q^2)$~\cite{Deur:2016tte}. The observation of
\emph{conformal} symmetry at high-$Q^2$ is a key feature of QCD.
More recently, studying the \emph{conformal} symmetry of QCD at low $Q^2$ has provided
new insights into hadron structure, as will be discussed in the next section.
However, these signals for \emph{conformal} scaling fail at moderate $Q^2$
because of quantum corrections -- the QCD coupling $ \alpha_s$ varies strongly near
$\Lambda_s$, the scale arising from quantum effects and the \emph{dimensional transmutation}
property arising from renormalization. The QCD mass scale can also be expressed as
$\sigma_{str}$ (the string tension appearing in heavy quark phenomenology)
and as $\kappa$ (LFHQCD's universal scale) which controls the slope of Regge trajectories.
The pion decay constant $f_\pi$, characterizing the dynamical breaking of chiral symmetry, can also be related to
these mass scales~\cite{Kneur:2011vi}. Other characteristic mass scales exist, see~\cite{Deur:2016tte}.
\subsection{The light-front holographic QCD approach \label{sec:LFHQCD}}
\emph{LF quantization} allows for a rigorous and exact formulation of QCD, in particular in its nonperturbative domain.
Hadrons, {\it i.e.}, bound states of quarks, are described on the LF by a relativistic Schr\"{o}dinger-like equation, see
Section~\ref{LC dominance and LF quantization}.
All components of this equation can in principle be obtained from the QCD Lagrangian; in
practice, the effective confining potential entering the equation has been obtained
only in (1+1) dimensions~\cite{Hornbostel:1988fb}. The complexity of such computations grows quickly with
dimensions, and in (3+1) dimensions the confining potential must be obtained by other means than first-principles calculations.
An important possibility is to use the LFHQCD framework~\cite{Brodsky:2014yha}.
LFHQCD is based on the isomorphism between the
group of isometries of a 5-dimensional \emph{anti-de-Sitter space} (AdS$_5$) and the $SO(4,2)$
group of \emph{conformal} transformations in physical spacetime.
The isomorphism generates a correspondence between a strongly interacting
\emph{conformal field theory} (CFT) in $d$--dimensions
and a weakly interacting, classical gravity-type theory in $d+1$-dimensional AdS space~\cite{Maldacena:1997re}.
Since the strong interaction is approximately \emph{conformal} and
strongly coupled at low $Q^2$, gravity calculations
can be mapped onto the
boundary of AdS space -- representing the physical
Minkowski spacetime -- to create an approximation for QCD.
This approach based on the ``gauge/gravity correspondence", {\it i.e.}, the mapping of
a gravity theory in a 5-dimensional AdS space onto its 4-dimensional boundary, explains the nomenclature ``holographic".
In this approach, the fifth-dimension coordinate $z$ of AdS$_5$ space corresponds to the LF
variable $\zeta_\bot = x_\bot \sqrt{{x}(1-x)}$,
the invariant transverse separation between the $q \bar q$ constituents of a meson.
Here $x$ is the LF fraction $k^+/P^+$. The holographic correspondence~\cite{deTeramond:2005su}
relating $z$ to $\zeta$ can be deduced from the fact that the formulae for hadronic
electromagnetic~\cite{Polchinski:2002jw} and gravitational~\cite{Abidin:2008ku} form factors in AdS space
match~\cite{Brodsky:2006uqa} their corresponding expressions for form factors of
composite hadrons in the LF~\cite{Drell:1969km, West:1970av}.
LFHQCD also provides a correspondence between hadron eigenstates and nonperturbative bound-state
amplitudes in AdS space, form factors and quark distributions: the analytic structure of the amplitudes leads to a
nontrivial connection with Regge theory and the hadron spectrum~\cite{deTeramond:2018ecg, Sufian:2018cpj}. It was
shown in Refs.~\cite{deTeramond:2014asa,Dosch:2015nwa,Brodsky:2016yod} how
implementing superconformal symmetry completely fixes the distortion of AdS space,
thereby fixing the confining potential of the boundary theory.
The distortion can be expressed in terms of a specific ``dilaton" profile in the AdS action.
This specific profile is uniquely recovered by the procedure of Ref.~\cite{deAlfaro:1976vlx} which shows
how a mass scale can be introduced in the Hamiltonian without affecting
the \emph{conformal invariance} of the action~\cite{Brodsky:2013ar}.
This uniquely determines the LF bound-state potential for mesons and
baryons, thereby making LFHQCD a fully determined approximation to QCD. ``Fully determined"
signifies here that in the chiral limit LFHQCD has a single free parameter, the minimal number that
dimensionful theories expressed in conventional (human-chosen) units, such as GeV, must have; see {\it e.g.},
the discussion in Chapter VII.3 of Ref.~\cite{Zee:2003mt}.
For LFHQCD this parameter is $\kappa$; for perturbative conventional QCD, it is $\Lambda_s$~\cite{Deur:2014qfa}.
In fact, chiral QCD being independent of
conventional units such as GeV, a theory or model of the strong force can
only provide dimensionless ratios, {\it e.g.}, $M_p / \Lambda_s$ or the proton to $\rho$-meson mass ratio $M_p / M_{\rho}$.
The derived confining potential has the form of a harmonic oscillator
$\kappa^4 \zeta^2$, where $\kappa^2 = \lambda$: it effectively accounts for the
gluonic string connecting the
quark and
antiquark in a meson. It leads to a massless pion bound state in the chiral limit and
explains the mass symmetry between mesons and baryons~\cite{Dosch:2015nwa}.
The LF harmonic oscillator potential transforms to the well-known nonrelativistic
confining potential $\sigma_{str} r$ of heavy quarkonia in the \emph{instant form}
of dynamics~\cite{Trawinski:2014msa} (with $r$, the quark separation).
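The Regge-like spectrum generated by this potential can be illustrated with a quick numerical diagonalization of $-d^2/d\zeta^2+\kappa^4\zeta^2$ on $\zeta>0$, with the wave function vanishing at the origin. This is a deliberately simplified equation that omits the LF angular-momentum and spin terms of the full LFHQCD wave equation; it suffices to exhibit the equal spacing $4\kappa^2$ in $M^2$.

```python
import numpy as np

# Finite-difference diagonalization of (-d^2/dzeta^2 + kappa^4 zeta^2)
# on zeta > 0 with phi(0) = phi(zeta_max) = 0 (schematic illustration;
# the angular-momentum and spin terms of the full equation are omitted).
kappa = 1.0
n, zeta_max = 1000, 10.0
h = zeta_max / (n + 1)
zeta = h * np.arange(1, n + 1)
H = (np.diag(2.0 / h**2 + kappa**4 * zeta**2)
     - np.diag(np.ones(n - 1) / h**2, 1)
     - np.diag(np.ones(n - 1) / h**2, -1))
m2 = np.linalg.eigvalsh(H)[:4]
print(m2)                      # approximately [3, 7, 11, 15] * kappa^2
print(np.diff(m2))             # equally spaced in M^2: spacing ~ 4 kappa^2
```

The linear spacing in $M^2$, controlled entirely by $\kappa$, is the lattice analogue of the linear Regge trajectories mentioned above.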
Quantum fluctuations are not included in the semiclassical LFHQCD computations.
Although heavy quark masses break \emph{conformal} symmetry, the introduction of a
heavy mass does not necessarily lead to supersymmetry breaking, since it can
stem from the underlying dynamics of color confinement~\cite{deTeramond:2016bre}. Indeed, it was shown in
Ref.~\cite{Dosch:2015bca} that supersymmetric relations between
the meson and baryon masses still hold to a good approximation even for heavy-light
({\it i.e.,} charm and bottom) hadrons, leading to remarkable connections between meson,
baryon and tetraquark states~\cite{Nielsen:2018uyn}.
A prediction of chiral LFHQCD for the nucleon spin is that the eigensolutions for the LF wave equation
for spin 1/2 (plus and minus components) associated with $L_z = 0$ and
$L_z = 1$ have equal normalization, see Eq. 5.41 of Ref.~\cite{Brodsky:2014yha}. Since there are no gluon quanta,
the gluons being sublimated into the effective
potential~\cite{Brodsky:2014yha}, the nucleon spin comes from quark OAM
in the effective quark-diquark two-body Hamiltonian approximation. This agrees with the (pre-EMC)
chiral symmetry prediction obtained in a Skyrme approach, namely, that the nucleon spin is carried by quark
OAM in the nonperturbative domain~\cite{Brodsky:1988ip}.
\subsection{Summary}
We have outlined the theoretical approaches that are used to interpret spin-dependent observables.
Simplifications, both for theory and experiments, arise when inclusive reactions are considered,
\emph{viz.}, reactions in which all hadronic final states are summed over.
Likewise, summing over all reactions, {\it i.e.}, integrating over $W$ or equivalently over $x_{Bj}$,
to form moments of structure functions yields further simplifications. These moments can
be linked to observables characterizing the nucleon by relations called \emph{sum rules}. They offer
unique opportunities for studying QCD because they are often
valid at any $Q^2$. Thus, they allow tests of the various calculation
methods applicable at low ($\chi$PT, LFHQCD), intermediate (Lattice QCD, LFHQCD), and high $Q^2$ (OPE).
Spin sum rules will now be discussed following the formalism of Refs.~\cite{Drechsel:2002ar, Drechsel:2004ki}.
\section{Sum rules\label{sum rules}}
Nucleon spin sum rules offer an important opportunity to study QCD. In the last 20 years, the Bjorken sum
rule~\cite{Bjorken:1966jh}, derived at high-$Q^2$, and the
Gerasimov-Drell-Hearn (GDH) sum rule~\cite{Gerasimov:1965et},
derived at $Q^2=0$, have been studied in detail, both experimentally and theoretically.
This primary set of sum rules links the moments of structure functions
(or equivalently of photoabsorption cross-sections) to the static properties of the nucleon.
Another class of sum rules relates the moments of structure functions to Doubly
Virtual Compton Scattering (VVCS) amplitudes rather than to static properties. This class includes the generalized
GDH sum rule~\cite{Ji:1999mr, Drechsel:2004ki, Anselmino:1988hn} and
spin polarizability sum rules~\cite{Drechsel:2002ar, Lensky:2016nui, Drechsel:2004ki}. The VVCS amplitudes are
calculable at any $Q^2$ using the techniques described in Section~\ref{Computation methods}. They can then
be compared to the measured moments. Thus, these sum rules are particularly well suited for
exploring the transition between fundamental and effective descriptions of QCD.
\subsection{General formalism}
Sum rules are generally derived by combining dispersion relations with
the \emph{Optical Theorem}~\cite{Pasquini:2018wbl}.
Many sum rules can also be
derived using the OPE or QCD on the LC. In fact, the Bjorken and Ellis-Jaffe~\cite{Ellis:1973kp}
sum rules were originally derived using quark LC current algebra.
Furthermore, a few years after its original derivation \emph{via} dispersion relations, the GDH sum rule was
rederived using LF current algebra~\cite{Dicus:1972vp}.
A convenient formalism for deriving the sum rules
relevant to this review is given in ~\cite{Drechsel:2002ar, Drechsel:2004ki}.
The central principle is to apply the \emph{Optical Theorem} to the VVCS amplitude, thereby linking virtual
photoabsorption to the inclusive lepton scattering cross-section. Assuming causality, the VVCS amplitudes
can be analytically continued in the complex plane. The Cauchy relation
-- together with the assumption that the VVCS amplitude converges faster than
$1/\nu$ as $\nu \to \infty$ so that it fulfills Jordan's lemma --
yields the widely used Kramers-Kronig relation~\cite{KKR}:
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.2cm}
\Re e\left(A_{VVCS}(\nu,Q^2)\right)=\frac{1}{\pi}{P\int}_{-\infty}^{+\infty}\frac{\Im m\left(A_{VVCS}(\nu',Q^2)\right)}{\nu'-\nu}d\nu'.
\label{Eq:KKR}
\end{equation}
The crossing symmetry of the VVCS amplitude allows one to restrict the integration range from 0 to $\infty$.
The \emph{Optical Theorem} then allows $\Im m\left(A_{VVCS}\right)$ to be expressed
in term of its corresponding photoabsorption cross-section.
Finally, after subtracting the target particle pole contribution (the elastic reaction), $\Re e\left(A_{VVCS}\right)$
is expanded in powers of $\nu$ using a low energy theorem~\cite{Low:1954kd}.
Qualitatively, the integrand at LO represents the electromagnetic current spatial distribution and at
NLO reflects the deformation of this spatial distribution due to the probing photon (polarizabilities).
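A minimal numerical check of the dispersion relation Eq.~(\ref{Eq:KKR}) can be made with a toy amplitude of known analytic form, a damped-oscillator response (not a physical VVCS amplitude): its imaginary part is odd in $\nu$, so the crossing-restricted principal-value integral over $0<\nu'<\infty$ must reproduce its real part.

```python
import numpy as np

# Toy amplitude A(nu) = 1/(nu0^2 - nu^2 - i*gamma*nu): analytic in the
# upper half nu-plane and vanishing at large nu, so Jordan's lemma applies.
nu0, gamma = 1.0, 0.5

def im_A(nu):
    return gamma * nu / ((nu0**2 - nu**2)**2 + (gamma * nu)**2)

def re_A_exact(nu):
    return (nu0**2 - nu**2) / ((nu0**2 - nu**2)**2 + (gamma * nu)**2)

# Crossing symmetry (Im A odd in nu) restricts the dispersion integral to
# Re A(nu) = (2/pi) P int_0^inf nu' Im A(nu') / (nu'^2 - nu^2) dnu'.
h = 1e-3
nup = h * (np.arange(200_000) + 0.5)     # midpoint grid up to nu' = 200
nu = 0.5                                 # nu falls midway between grid points,
                                         # so the principal value is handled by
                                         # the symmetric cancellation of the pole
integrand = nup * im_A(nup) / (nup**2 - nu**2)
re_A_num = (2 / np.pi) * np.sum(integrand) * h
print(re_A_num, re_A_exact(nu))          # the two agree at the per-mille level
```

The same machinery, applied to the measured photoabsorption cross-sections via the \emph{Optical Theorem}, is what turns Eq.~(\ref{Eq:KKR}) into the spin sum rules discussed next.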
The applicability of Jordan's lemma
has been discussed extensively.
It has been pointed out~\cite{Abarbanel:1967wk} that an amplitude
may not vanish as $\nu \rightarrow \infty$ due to fixed $J=0$ or $J=1$ poles of $\Re e\left(A_{VVCS}\right)$,
leading to sum rule modifications.
Here, we shall assume the validity of Jordan's lemma.
\vspace{-0.1cm}
\subsection{GDH and forward spin polarizability sum rules \label{sec:Spin polarizabilities}}
The methodology just discussed applied to the spin-flip VVCS amplitude yields the generalized GDH sum
rule when the first term of the $\nu$ expansion is considered:
\vspace{-0.2cm}
\begin{eqnarray}
\vspace{-0.2cm}
I_{TT}(Q^2) & = & \frac{M_t^2}{4\pi^2\alpha}\int_{\nu_0}^{\infty}\frac{\kappa_{\gamma^*}(\nu,Q^2)}{\nu}\frac{\sigma_{TT}(\nu,Q^2)}{\nu}d\nu\nonumber \\
& = & \frac{2M_t^2}{Q^2}\int_0^{x_0}\Bigl[g_1(x,Q^2)-\frac{4M_t^2}{Q^2}x^2g_2(x,Q^2)\Bigr]dx,
\label{eq:gdhsum_def1}
\end{eqnarray}
where Eq.~(\ref{sigmaTT}) was used for the second equality. $I_{TT}(Q^2)$
is the spin-flip VVCS amplitude in the low $\nu$ limit.
The limits $\nu_0$ and $x_0=Q^2/(2M_t\nu_0)$
correspond to the inelastic reaction threshold, and $M_t$ is the target mass.
For $Q^2 \rightarrow 0$, the low energy theorem relates $I_{TT}(0)$ to the anomalous magnetic moment $\kappa_t$,
and Eq.~(\ref{eq:gdhsum_def1}) becomes the GDH sum rule:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.2cm}
I_{TT}(0)=\int_{\nu_0}^{\infty}\frac{\sigma_{T,1/2}(\nu)-\sigma_{T,3/2}(\nu)}{\nu}d\nu=-\frac{2\pi^2\alpha\kappa_t^2}{M_t^2}.
\label{eq:gdh}
\end{equation}
Experiments at MAMI, ELSA and LEGS~\cite{Ahrens:2001qt} have
verified the validity of the proton GDH sum rule within an accuracy of about 10\%.
The low $Q^2$ JLab $\overline{I_{TT}^n}(Q^2)$ measurement extrapolated to $Q^2=0$ is compatible with
the GDH expectation for the neutron within the 20\% experimental uncertainty~\cite{Adhikari:2017wox}.
A recent phenomenological assessment of the sum rule also concludes its validity~\cite{Gryniuk:2015eza}.
The original and generalized GDH sum rules apply to any target, including nuclei, leptons, photons or gluons.
For these latter massless particles, the sum rule predicts $I_{TT}^{\gamma,~g}(0)=0$~\cite{Bass:1998bw}.
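For orientation, the right-hand side of Eq.~(\ref{eq:gdh}) is easily evaluated. The sketch below, with the anomalous magnetic moments and masses hard-coded as assumed inputs (PDG values), reproduces the familiar magnitudes of roughly $-205~\mu$b for the proton and $-232~\mu$b for the neutron.

```python
import math

ALPHA = 1.0 / 137.036     # fine-structure constant
GEV2_TO_MUB = 389.379     # (hbar c)^2 conversion, GeV^-2 -> microbarn

def gdh_rhs(kappa, mass):
    """Right-hand side of the GDH sum rule, -2 pi^2 alpha kappa^2 / M_t^2,
    returned in microbarns."""
    return -2 * math.pi**2 * ALPHA * kappa**2 / mass**2 * GEV2_TO_MUB

# anomalous magnetic moments (nuclear magnetons) and masses (GeV), PDG values
print(f"proton : {gdh_rhs(1.793, 0.93827):.1f} microbarn")
print(f"neutron: {gdh_rhs(-1.913, 0.93957):.1f} microbarn")
```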
The NLO term of the $\nu$ expansion of the left-hand side of Eq.~(\ref{Eq:KKR}) yields the forward spin
polarizability~\cite{GellMann:1954db}:
\vspace{-0.7cm}
\begin{eqnarray}
\vspace{-0.3cm}
\gamma_0(Q^2) & = & \frac{1}{2\pi^2}\int_{\nu_0}^{\infty}\frac{\kappa_{\gamma^*}(\nu,Q^2)}{\nu}\frac{\sigma_{TT}(\nu,Q^2)}{\nu^{3}}d\nu\nonumber \\
& = & \frac{16\alpha M_t^2}{Q^{6}}\int_0^{x_0}x^2\Bigl[g_1(x,Q^2)-\frac{4M_t^2}{Q^2}x^2g_2(x,Q^2)\Bigr]dx.
\label{eq:gamma_0}
\end{eqnarray}
Alternatively, the polarized covariant VVCS amplitude $S_1$ can be considered. It is connected to the spin-flip
and longitudinal-transverse interference VVCS amplitudes, $g_{TT}$ and $g_{LT}$ respectively, by:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
S_1(\nu,Q^2)=\frac{\nu M_t}{\nu^2+Q^2}\Bigl[g_{TT}(\nu,Q^2)+\frac{Q}{\nu}g_{LT}(\nu,Q^2)\Bigr].
\nonumber
\end{equation}
%
Under the same assumptions, the dispersion relation yields:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\Re e[S_1(\nu,Q^2)-S_1^{pole}(\nu,Q^2)]=\frac{4\alpha}{M_t}I_1(Q^2)+\gamma_{g_1}(Q^2)\nu^2+O(\nu^{4}),
\nonumber
\end{equation}
where the LO term yields a generalized GDH sum rule differing from the one in Eq.~(\ref{eq:gdhsum_def1}):
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
I_1(Q^2)=\frac{2M_t^2}{Q^2}\int_0^{x_0}g_1(x,Q^2)dx.
\label{eq:gdhsum_def2}
\end{equation}
The original GDH sum rule is recovered for
$Q^2=0$ where $I_1(0)=-\frac{1}{4}\kappa_t^2$. The NLO term
defines the generalized polarizability $\gamma_{g_1}$:
\vspace{-0.5cm}
\begin{eqnarray}
\vspace{-0.5cm}
\gamma_{g_1}(Q^2) & = & \frac{16\pi\alpha M_t}{Q^{6}}\int_0^{x_0}x^2g_1(x,Q^2)dx.
\nonumber
\end{eqnarray}
\subsection{$\delta_{LT}$ sum rule}
Similarly, the longitudinal-transverse interference VVCS amplitude yields a sum rule for the $I_{LT}$
amplitude~\cite{Drechsel:2002ar, Drechsel:2004ki, Drechsel:1998hk} :
\vspace{-0.5cm}
\begin{eqnarray}
\vspace{-0.5cm}
I_{LT}(Q^2) & = & \frac{M_t^2}{4\pi^2\alpha}\int_{\nu_0}^{\infty}\frac{\kappa_{\gamma^*}(\nu,Q^2)}{\nu}\frac{\sigma_{LT}'(\nu,Q^2)}{Q}d\nu\nonumber \\
& = & \frac{2M_t^2}{Q^2}\int_0^{x_0}\Bigl[g_1(x,Q^2)+g_2(x,Q^2)\Bigr]dx,
\nonumber
\end{eqnarray}
and defines the generalized LT-interference polarizability:
%
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\delta_{LT}(Q^2) & = & \frac{1}{2\pi^2}\int_{\nu_0}^{\infty}\frac{\kappa_{\gamma^*}(\nu,Q^2)}{\nu}\frac{\sigma_{LT}'(\nu,Q^2)}{Q\nu^2}d\nu\nonumber \\
& = & \frac{16\alpha M_t^2}{Q^{6}}\int_0^{x_0}x^2\Bigl[g_1(x,Q^2)+g_2(x,Q^2)\Bigr]dx.
\label{eq:delta_LT SR}
\end{eqnarray}
The quantities $\delta_{LT}$, $\gamma_{g_1}$, $I_{TT}$ and $I_1$ are related by:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
M_t \delta_{LT} (Q^2) = \gamma_{g_1}(Q^2)- \frac{2\alpha}{M_t Q^2}\Bigl(I_{TT}(Q^2)-I_1(Q^2)\Bigr).
\nonumber
\end{eqnarray}
It was shown recently that the sum rules of Eqs.~(\ref{eq:gdhsum_def2}) and (\ref{eq:delta_LT SR})
are also related to several other generalized polarizabilities, which are experimentally poorly known,
but can be constrained by these additional relations~\cite{Pascalutsa:2014zna}.
\subsection{The Burkhardt-Cottingham sum rule \label{BCSR}}
We now consider the second VVCS amplitude $S_2$:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
S_2(\nu,Q^2)=-\frac{M_t^2}{\nu^2+Q^2}\Bigl[g_{TT}(\nu,Q^2)-\frac{\nu}{Q}g_{LT}(\nu,Q^2)\Bigr].
\nonumber
\end{equation}
Assuming a Regge behavior $S_2\to\nu^{-\alpha_2}$ as
$\nu\to\infty$, with $\alpha_2 > 1$, the dispersion relation for $S_2$ and
$\nu S_2$, including the elastic contribution, requires no subtraction. It thus leads to a
``super-convergence relation" -- the Burkhardt-Cottingham (BC) sum rule~\cite{Burkhardt:1970ti}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1g_2(x,Q^2)dx=0.
\label{eq:bc}
\end{equation}
Excluding the elastic reaction, the sum rule becomes:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
I_2(Q^2)=\frac{2M_t^2}{Q^2}\int_0^{x_0}g_2(x,Q^2)dx=\frac{1}{4}F_2(Q^2)\Bigl(F_1(Q^2)+F_2(Q^2)\Bigr),
\label{eq:bc_noel}
\end{equation}
where $F_1$ and $F_2$ are the Dirac and Pauli form factors, respectively, see Section~\ref{elastic scatt}.
The low energy expansion of the dispersion relation leads to:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\Re e \bigl[\nu \bigl( S_2(\nu,Q^2) - S_2^{pole}(\nu,Q^2)\bigr)\bigr] = \nonumber \\
\hspace{-1cm} {2\alpha I_2(Q^2)-\frac{2\alpha}{Q^2}\bigl(I_{TT}(Q^2)-I_1(Q^2)\bigr)\nu^2+\frac{M_t^2}{Q^2}\gamma_{g_2}(Q^2)\nu^{4}+O(\nu^{6}),}
\nonumber
\end{eqnarray}
where the term in $\nu^{4}$ provides the generalized polarizability $\gamma_{g_2}$:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\gamma_{g_2}(Q^2) = \frac{16\pi\alpha M_t^2}{Q^{6}}\int_0^{x_0}x^2g_2(x,Q^2)dx
= \delta_{LT}(Q^2)-\gamma_0(Q^2)+\frac{2\alpha}{M_t^2Q^2}\Bigl(I_{TT}(Q^2)-I_1(Q^2)\Bigr).
\nonumber
\end{eqnarray}
\subsection{Sum rules for deep inelastic scattering \label{DIS SR}}
At high-$Q^2$, the OPE
used on the VVCS amplitude leads to the \emph{twist} expansion:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\Gamma_1(Q^2)\equiv\int_0^1g_1(x,Q^2)dx=\sum_{\tau=2,4,...}\frac{\mu_{\tau}(Q^2)}{Q^{\tau-2}} ,
\label{eq:Gamma1}
\end{equation}
where the $\mu_{\tau}$ coefficients correspond to the matrix elements of operators of \emph{twist} $\leq \tau$.
The dominant \emph{twist} term (twist~2) $\mu_2$ is given by
the matrix elements of the axial-vector operator $\overline{\psi}\gamma_{\mu}\gamma_{5}\lambda^i\psi/2$
summed over quark flavors. $\lambda^i$ are the Gell-Mann matrices for $1 \leq i \leq 8$ and $\lambda^0 \equiv 2$.
Only $i=0$, $3$ and $8$ contribute,
with matrix elements
\vspace{-0.6cm}
\begin{eqnarray}
\vspace{-0.3cm}
\langle P,S|\overline{\psi}\gamma_{\mu}\gamma_{5}\lambda^0\psi|P,S \rangle =4Ma_0S_{\mu} ,\nonumber \\
\langle P,S|\overline{\psi}\gamma_{\mu}\gamma_{5}\lambda^3\psi|P,S \rangle =2Ma_3S_{\mu} ,\nonumber \\
\langle P,S|\overline{\psi}\gamma_{\mu}\gamma_{5}\lambda^8\psi|P,S \rangle =2Ma_8S_{\mu}, \nonumber
\end{eqnarray}
defining the triplet ($a_3$), octet ($a_8$) and singlet ($a_0$) axial charges. Then,
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
\mu_2(Q^2) & = & \left(\pm\frac{1}{12}a_3\ +\ \frac{1}{36}a_8\right)+\frac{1}{9}a_0\ +O\big(\alpha_{s}(Q^2)\big),
\label{eq:mu2}
\end{eqnarray}
where $+(-)$ is for the proton (neutron) and $O(\alpha_{s})$
reflects the $Q^2$-dependence derived from pQCD radiation.
The axial charges can be expressed in the parton model as combinations of quark polarizations:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
a_3 & = & (\Delta u + \Delta \overline{u})-(\Delta d + \Delta \overline{d}), \nonumber \\
a_8 & = &(\Delta u + \Delta \overline{u})+(\Delta d + \Delta \overline{d}) -2(\Delta s + \Delta \overline{s}), \nonumber \\
a_0 & = & (\Delta u + \Delta \overline{u})+(\Delta d + \Delta \overline{d}) +(\Delta s + \Delta \overline{s}).
\nonumber
\end{eqnarray}
The charges $a_3$ and $a_8$ are $Q^2$-independent; the axial charge $a_0$,
which is identified with the quark spin contribution to $J$, namely
$\Delta \Sigma$, see Eq.~(\ref{eq:spin SR}), is $Q^2$-independent only at LO in $\alpha_s$.
At NLO, $a_0$ becomes $Q^2$-dependent because the singlet current is not
renormalization-group invariant and needs to be
renormalized. (That $a_3$ and $a_8$ remain $Q^2$-independent assumes the validity of $SU(3)_f$.)
In addition $a_0$ may also depend on the gluon spin contribution $\Delta G$ through the gluon axial
anomaly~\cite{Efremov:1988zh}. Such a contribution depends on the chosen renormalization scheme.
In the AB~\cite{Adler:1969er}, CI~\cite{Cheng:1996jr} and JET~\cite{Leader:1998qv, Leader:1998nh} schemes,
$a_0 = \Delta \Sigma - \frac{f}{2 \pi} \alpha_s(Q^2)\Delta G(Q^2)$, where $f$ is the number of active flavors.
In the case of the $\overline{\mbox{MS}}$ scheme, $\alpha_s(Q^2)\Delta G(Q^2)$ is absorbed in the definition of
$\Delta \Sigma$ and $a_0 = \Delta \Sigma$.
At first order, $\Delta G$ evolves as $1/\alpha_s$~\cite{Efremov:1988zh} and
$\alpha_s(Q^2)\Delta G(Q^2)$ is constant at high $Q^2$. Hence, contrary to the usual case where the scheme
dependence of a quantity disappears at large $Q^2$ due to the dominance of the scheme-independent LO,
$\Delta\Sigma$ remains scheme-dependent at arbitrarily high $Q^2$.
The $\alpha_s \Delta G$ term stems from the $g_1$ NLO evolution equations,
Eqs.~(\ref{g_1 LT evol})-(\ref{gluon LO evol}). In the $\overline{\mbox{MS}}$ scheme, the contribution of the
gluon evolution to the $g_1$ moment cancels at any order in perturbation theory. In the AB scheme the Wilson coefficient
controlling the gluon contribution is non-zero, $\Delta C_g=- \frac{f}{2 \pi} \alpha_s$.
This scheme-dependence and the presence of $1/\alpha_s$, which is
not an observable, emphasize that $\Delta \Sigma$ and $\Delta G$ are also not observables but depend on the convention
used for the renormalization procedure; {\it e.g.}, how high order ultraviolet divergent diagrams are arranged and regularized.
The logarithmic increase of $\Delta G$ originates from the fact that, overall, the subprocess in which a gluon splits into
two gluons of helicity $+1$, thereby increasing $\Delta G$, is more probable than the subprocesses that
decrease the total gluon helicity, in which a gluon splits into a quark-antiquark pair or into two gluons of
opposite helicities~\cite{Ji:1995cu}. Since gluon splitting increases with the probe resolution, $\Delta G$ grows
logarithmically with $Q^2$.
Assuming SU(3)$_f$ quark mass symmetry, the axial charges can be related to the weak decay constants $F$ and $D$:
$a_3=F+D=g_A$ and $a_8=3F-D$, where
$g_A$ is well measured from neutron $\beta$-decay: $g_A=1.2723(23)$~\cite{Olive:2016xmw}.
$a_8$ is extracted from the weak decay of hyperons, assuming SU(3)$_f$:
$a_8=0.588(33)$~\cite{Close:1993mv}. SU(3)$_f$ neglects the $\approx 0.1$~GeV strange quark mass,
but the resulting symmetry breaking is expected to affect $a_8$ only at the level of a few \%. However,
other effects may alter $a_8$: models based on the
one-gluon exchange hyperfine interaction as well as meson cloud effects
yield {\it e.g.}, a smaller value, $a_8=0.46(5)$~\cite{Bass:2009ed}.
If one expresses the axial charges in terms of quark polarizations and assumes that the strange and higher mass quarks do
not contribute to $\Delta \Sigma$, Eqs. (\ref{eq:Gamma1}) and (\ref{eq:mu2}) lead, at \emph{leading-twist},
to the Ellis-Jaffe sum rule. For the proton this sum rule is:
\vspace{-0.1cm}
\begin{equation}
\vspace{-0.1cm}
\Gamma_1^p(Q^2)\equiv\int_0^1g_1^p(x,Q^2)dx \xrightarrow[Q^2 \to \infty] {} \frac{1}{2}\left(\frac{4}{9}\Delta u+\frac{1}{9}\Delta d\right).
\label{eq:Ellis-Jaffe p}
\end{equation}
The neutron sum rule is obtained by assuming isospin symmetry, {\it i.e.}, $u \leftrightarrow d$ interchange.
The expected asymptotic values are $\Gamma_1^p=0.185\pm0.005$ and $\Gamma_1^n=-0.024\pm0.005$.
After the order $\alpha_s^3$ evolution to $Q^2=5$~GeV$^2$ they become $\Gamma_1^p=0.163$ and
$\Gamma_1^n=-0.019$. Measurements at this $Q^2$ disagree with the sum rule. The most precise ones are from
E154 and E155. E154 measured $\Gamma_1^n=-0.041 \pm 0.004 \pm 0.006$~\cite{Abe:1997cx} and
E155 measured $\Gamma_1^p=0.118 \pm 0.004 \pm 0.007$ and
$\Gamma_1^n=-0.058 \pm 0.005 \pm 0.008$~\cite{Anthony:1999rm}.
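The asymptotic Ellis-Jaffe values quoted above follow directly from Eq.~(\ref{eq:mu2}) with $a_0=a_8$, i.e., no strange-quark polarization. A minimal numerical check, taking $g_A$ and $a_8$ at the values given earlier (the leading-order expression, with pQCD corrections neglected):

```python
GA = 1.2723     # a3 = g_A from neutron beta-decay
A8 = 0.588      # octet axial charge from hyperon decays
A0 = A8         # Ellis-Jaffe assumption: no strange-quark polarization

def gamma1_ej(sign):
    """Leading-order mu_2 of Eq. (eq:mu2): sign = +1 for proton, -1 for neutron."""
    return sign * GA / 12 + A8 / 36 + A0 / 9

print(f"Gamma_1^p = {gamma1_ej(+1):.3f}")   # close to the 0.185(5) quoted above
print(f"Gamma_1^n = {gamma1_ej(-1):.3f}")   # close to -0.024(5)
```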
The proton-neutron difference for Eqs.~(\ref{eq:Gamma1}) and (\ref{eq:mu2}) gives the non-singlet relation:
\vspace{-0.25cm}
\begin{equation}
\vspace{-0.25cm}
\Gamma_1^p(Q^2)-\Gamma_1^n(Q^2) \equiv \Gamma_1^{p-n}(Q^2) =\frac{1}{6} g_A + O(\alpha_{s})+O(1/Q^2)
\xrightarrow[Q^2 \to \infty]{} \frac{\Delta u - \Delta d}{6}, \nonumber
\end{equation}
which is the Bjorken sum rule for $Q^2\to\infty$~\cite{Bjorken:1966jh}. Charge symmetry corrections to the Ellis-Jaffe and
Bjorken sum rules are at the 1\% level~\cite{Cloet:2012db}.
\emph{DGLAP} corrections yield~\cite{Kataev:1994gd}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.2cm}
\Gamma_1^{p-n}(Q^2)=\frac{g_A}{6}\bigg[1-\frac{\alpha_{{\rm {s}}}}{\pi}-3.58\left(\frac{\alpha_{{\rm {s}}}}{\pi}\right)^2
-20.21\left(\frac{\alpha_{{\rm {s}}}}{\pi}\right)^{3} -175.7\left(\frac{\alpha_{{\rm {s}}}}{\pi}\right)^{4}+... \bigg]+O(1/Q^2),
\label{eq:genBj}
\end{equation}
where the series coefficients are given for $n_f=3$.
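The behavior of the pQCD series in Eq.~(\ref{eq:genBj}) can be examined term by term. The sketch below evaluates the successive truncations, taking an illustrative $\alpha_s=0.3$ (an assumed value, roughly appropriate for $Q^2$ of a few GeV$^2$) and ignoring the $O(1/Q^2)$ term:

```python
import math

GA = 1.2723                                   # nucleon axial charge g_A
COEFFS = [1.0, -1.0, -3.58, -20.21, -175.7]   # n_f = 3 coefficients of Eq. (genBj)

def bjorken_sum(alpha_s, order=5):
    """Leading-twist Bjorken sum, truncated after `order` series terms."""
    a = alpha_s / math.pi
    return GA / 6 * sum(c * a**k for k, c in enumerate(COEFFS[:order]))

# successive truncations at the illustrative alpha_s = 0.3
for n in range(1, 6):
    print(f"through alpha_s^{n-1}: {bjorken_sum(0.3, n):.4f}")
```

Each additional order lowers the prediction, and the corrections shrink slowly at this coupling, illustrating why \emph{higher-twist} and higher-order terms matter at moderate $Q^2$.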
Eq.~(\ref{eq:genBj}) exemplifies the power of sum rules: these relations connect moments
integrated over high-energy quantities to low-energy, static characteristics of the nucleon itself.
\label{Bjorken and g_a} It is clear why
$g_A \equiv g_A(Q^2=0)$ is involved in the $Q^2 \to \infty$ Bjorken sum rule. The spin-dependent
part of the cross-section comes from the matrix elements of $\bar \psi \gamma^\mu \gamma^5 \psi$,
the conserved axial-current
associated with chiral symmetry: $\psi \to e^{i\phi\gamma^5} \psi$, where the nucleon state
$\psi$ is projected to its right and left components as defined by the chiral projectors
$(1\pm\gamma^5$), respectively.
In elastic scattering, $\bar \psi \gamma^\mu \gamma^5 \psi$ generates the axial form factor
$g_A(Q^2),$ just as the electromagnetic current $\bar \psi \gamma^\mu \psi$
generates the electromagnetic form factors. And just as $G_E^N$ provides the
charge spatial distribution, the Fourier transform of $g_A(Q^2)$
maps the spatial distribution of the nucleon spin; {\it i.e.}, how the net parton polarization evolves from
the center of the nucleon to its boundary. Thus $g_A(Q^2)$ provides the isovector
component of the spatial parton polarizations: $g_A(Q^2=0)$ is the parton
polarizations without spatial resolution; {\it i.e.} its spatial average, which is directly connected to
the mean momentum-space parton polarization $\int g_1dx$.
Comparing Eqs.~(\ref{g_1 LT evol})--(\ref{gluon LO evol})
with Eq.~(\ref{eq:genBj}) shows that the $Q^2$-evolution is much simpler for
moments ({\it i.e.}, \emph{Mellin-transforms}) than for structure functions.
Thus it is beneficial to transform to \emph{Mellin-space} ($N,Q^2$), where $N$ is the moment's order,
to perform the $Q^2$-evolution and then transform back to $(x_{Bj},Q^2)$ space.
The coefficient $\mu_{\tau}$
in Eq.~(\ref{eq:Gamma1}) would comprise only
a twist~$\tau$ operator, if not for the effect discussed on page~\pageref{twist-mix}, which adds operators
of \emph{twists} $\varsigma\leq\tau$. Thus, the twist~4 term,
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.2cm}
\mu_4(Q^2)=M^2\left(a_2(Q^2)+4d_2(Q^2)+4f_2(Q^2)\right)/9,
\label{eq:mu4}
\end{equation}
comprises a twist~2 contribution ($a_2$) and a twist~3 one ($d_2$) in addition to the genuine
twist~4 contribution $f_2$~\cite{Shuryak:1981kj,Ji:1993sv,Stein:1995si,Ji:1997gs}.
The twist~2 matrix element is:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
a_2\ S^{\{\mu}P^{\nu}P^{\lambda\}} & = & \frac{1}{2}\sum_{f}e_{f}^2\
\langle P,S|\overline{\psi}_{f}\ \gamma^{\{\mu}iD^{\nu}iD^{\lambda\}}\psi_{f}|P,S\rangle,
\label{eq:a2op}
\vspace{-0.3cm}
\end{eqnarray}
where $f$ are the quark flavors and $\{\cdots\}$ signals index symmetrization.
The third moment of $g_1$ at \emph{leading-twist} gives $a_2 $:
\vspace{-0.5cm}
\begin{equation}
\vspace{-0.1cm}
a_2(Q^2)=2\int_0^1 x^2\ g_1^{twist~2}(x,Q^2) dx,
\label{eq:a2}
\end{equation}
which is thus twist~2. The twist~3 contribution $d_2$ is defined from the matrix element:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
d_2S^{[\mu}P^{\{\nu]}P^{\lambda\}}=\frac{\sqrt{4\pi}}{8}\sum_{q}\langle P,S|\overline{\psi}_{q}\
\sqrt{\alpha_{s}} \widetilde{f}^{\{\mu\nu}\gamma^{\lambda\}}\psi_{q}|P,S\rangle ,
\label{eq:d2op}
\end{equation}
where $\widetilde{f}^{\mu\nu}$ is the dual tensor of the gluon field:
$\widetilde{f}_{\mu\nu}=(1/2)\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}$.
The third moments of $g_1$ and $g_2$ at \emph{leading-twist} give $d_2$:
\vspace{-0.3cm}
\begin{eqnarray}
\vspace{-0.3cm}
d_2(Q^2) = \int_0^1 x^2\Bigl(2g_1(x,Q^2)+3g_2(x,Q^2)\Bigr) dx
= 3\int_0^1 x^2\Bigl(g_2(x,Q^2)-g_2^{WW}(x,Q^2)\Bigr) dx,
\label{eq:d2mon}
\end{eqnarray}
where $g_2^{WW}$ is the twist~2 component of $g_2$:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
g_2^{WW}(x_{Bj},Q^2)={-g}_1(x_{Bj},Q^2)+\int_{x_{Bj}}^1\frac{g_1(y,Q^2)}{y}dy.
\label{eq:g2ww}
\end{equation}
This relation is derived from the Wandzura-Wilczek (WW) sum rule~\cite{Wandzura:1977qf}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1 x^{n-1}\bigg(\frac{n-1}{n}g_1(x,Q^2)+g_2(x,Q^2)\bigg) dx = 0,
\label{eq:g2ww general}
\end{equation}
where $n$ is odd. The Wandzura-Wilczek sum rule assumes the validity of the BC sum rule and
neglects \emph{higher-twist} contributions to $g_1$ and $g_2$. Eq.~(\ref{eq:g2ww}) furthermore assumes that
the sum rule also holds for even $n$, as discussed further in Section~\ref{sec:g2-g2ww}.
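The consistency between Eqs.~(\ref{eq:g2ww}) and (\ref{eq:g2ww general}) can be exercised numerically: for any smooth $g_1$, the $g_2^{WW}$ constructed from Eq.~(\ref{eq:g2ww}) satisfies the moment relation of Eq.~(\ref{eq:g2ww general}) identically. A sketch with the toy choice $g_1(x)=1-x$ (an arbitrary assumption, used only to exercise the integrals, not a realistic structure function):

```python
import math

def g1(x):
    return 1.0 - x            # toy twist-2 g1, for illustration only

def g2_ww(x):
    """Wandzura-Wilczek g2 of Eq. (eq:g2ww): -g1(x) + int_x^1 g1(y)/y dy.
    For g1 = 1 - x the inner integral is -ln(x) - (1 - x), done analytically."""
    return -g1(x) + (-math.log(x) - (1.0 - x))

def ww_moment(n, steps=100_000):
    """Midpoint evaluation of int_0^1 x^(n-1) [ (n-1)/n g1 + g2_ww ] dx."""
    h = 1.0 / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        s += x**(n - 1) * ((n - 1) / n * g1(x) + g2_ww(x)) * h
    return s

for n in (1, 3, 5):
    print(f"n = {n}: {ww_moment(n):+.2e}")   # all vanish, up to the grid error
```

The $n=1$ case is just the BC integral for the twist-2 part of $g_2$, which vanishes by construction of $g_2^{WW}$.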
Eqs.~(\ref{eq:d2mon})-(\ref{eq:g2ww general}) originate from the OPE-derived expressions
valid at twist~3 and for $n$ odd~\cite{Jaffe:1996zw}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1 x^{n-1}g_1(x,Q^2)\, dx = \frac{a_{n-1}}{4}, \qquad
\int_0^1 x^{n+1}g_2(x,Q^2)\, dx = \frac{(n+1)\left(d_{n+1} - a_{n+1}\right)}{4(n+2)}.
\nonumber
\end{equation}
The twist~4 component of $\mu_4$ is defined by the matrix element:
\vspace{-0.2cm}
\begin{eqnarray}
\vspace{-0.3cm}
f_2\ M^2S^{\mu} = \frac{1}{2}\sum_{q}e_{q}^2\
\langle N|g\ \overline{\psi}_q\ \widetilde{f}^{\mu\nu}\gamma_{\nu}\ \psi_q|N\rangle ,
\label{eq:f2op}
\end{eqnarray}
and, in terms of moments:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
f_2(Q^2)=\frac{1}{2}\int_0^1 x^2\Bigl(7g_1(x,Q^2)+12g_2(x,Q^2)-9g_3(x,Q^2)\Bigr) dx,
\label{eq:f2}
\end{equation}
where $g_3$ (not to be confused with a spin structure function also denoted $g_3$ and
appearing in neutrino scattering off a polarized target~\cite{Anselmino:1994gn}) is the twist~4 function:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
g_3(x_{Bj})=\frac{1}{2\pi \Lambda_s^2}\int e^{i\lambda x_{Bj}}\left\langle PS\right|\overline{\psi}(0)\gamma_{5}{\displaystyle {\not}p}\psi(\lambda n)\left|PS\right\rangle d\lambda
\label{eq:g3}
\end{equation}
with $p=\frac{1}{2}\left(\sqrt{M^2+P^2}+P\right)(1,0,0,1)$ and
$n=\frac{1}{M^2}\left(\sqrt{M^2+P^2}-P\right)(1,0,0,-1)$.
Since only $g_1$ and $g_2$ are measured, $f _{2}$ must be
extracted using Eqs. (\ref{eq:Gamma1}) and (\ref{eq:mu4}).
This is discussed in Section~\ref{HT}.
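Schematically, the extraction inverts Eqs.~(\ref{eq:Gamma1}) and (\ref{eq:mu4}): $\mu_4$ is obtained as the $1/Q^2$ coefficient of a fit to $\Gamma_1(Q^2)$, after which $f_2=(9\mu_4/M^2-a_2-4d_2)/4$. A sketch of the algebraic step, with made-up illustrative numbers that are not actual data:

```python
M = 0.93827                     # nucleon mass in GeV

def f2_from_moments(mu4, a2, d2, mass=M):
    """Invert Eq. (eq:mu4): mu4 = M^2 (a2 + 4 d2 + 4 f2) / 9, solved for f2."""
    return (9.0 * mu4 / mass**2 - a2 - 4.0 * d2) / 4.0

# illustrative (made-up) inputs: mu4 from a 1/Q^2 fit of Gamma_1(Q^2),
# a2 and d2 from the third moments of g1 and g2
mu4_fit, a2, d2 = -0.02, 0.03, 0.008
f2 = f2_from_moments(mu4_fit, a2, d2)
print(f"f2 = {f2:.3f}")
```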
As mentioned in Section~\ref{OPE}, the OPE provides only odd moment sum rules
for $g_1$ and $g_2$ (and even moment sum rules for $F_1$ and $F_2$) due to their positive parity under
crossing symmetry. DIS spin sum rules involving even moments do exist for inclusive observables,
such as the Efremov-Leader-Teryaev (ELT) sum rule~\cite{Efremov:1996hd}:
\vspace{-0.4cm}
\begin{equation}
\vspace{-0.2cm}
\int_0^1 x \big(g_1^V(x,Q^2) + 2g_2^V(x,Q^2) \big) dx = 0,
\nonumber
\end{equation}
where the superscript $V$ indicates \emph{valence} distributions.
Like the BC sum rule, the ELT prediction is a superconvergent relation.
The fact that \emph{sea quarks} do not contribute
minimizes complications from the low-$x_{Bj}$ domain that hinders the experimental checks of sum rules.
The ELT sum rule is not derived from the OPE, but instead follows from gauge invariance
or, more generally, from the structure and gauge properties of
hadronic matrix elements involved in $g_1$ and $g_2$.
It is an exact sum rule, but with the caveat that it
neglects \emph{higher-twist} contributions as OPE-derived sum rules do
(although \emph{higher-twists} can be subsequently added,
see {\it e.g.}, the twist~4 contribution to the Bjorken sum rule given by Eq.~(\ref{eq:mu4})).
Assuming that the \emph{sea} is isospin invariant leads to an isovector DIS sum rule,
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1 x (g_1^p+2g_2^p - g_1^n-2g_2^n ) dx = 0,
\nonumber
\end{equation}
which agrees with its experimental value at $\langle Q^2\rangle = 5$~GeV$^2$, 0.011(8). It can be re-expressed as:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1 x \big(g_2^p(x,Q^2) - g_2^n(x,Q^2) \big) dx =
\frac{-1}{12}\int_0^1 x \big( \Delta u_V(x,Q^2) -\Delta d_V(x,Q^2) \big) dx,
\label{Eq:ELT SR}
\end{equation}
which can be verified by comparing $g_2$ measurements for the l.h.s. to PDF global fits for the r.h.s.
Neglecting twist~3 leads to a sum rule similar to the Wandzura-Wilczek sum rule,
Eq.~(\ref{eq:g2ww general}), but for $n$ even ($n=2$):
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\int_0^1 x \big(g_1 + 2g_2 \big) dx = 0.
\nonumber
\end{equation}
\subsection{Color polarizabilities}
The twist~3 and 4 operators discussed in the previous section describe the response of the
electric and magnetic-like components of the color field to the nucleon spin. They are therefore akin to polarizabilities, but
for the strong force rather than electromagnetism. Expressing the twist~3 and 4 matrix elements
as functions of the components of $\widetilde{f}^{\mu\nu}$
in the nucleon rest frame, $d_2$ and $f_2$ can be related to the electric
and magnetic color polarizabilities defined as~\cite{Shuryak:1981kj,Stein:1995si,Ji:1993sv,Ji:1997gs}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\chi_E\ 2M_t^2\vec{J}=\langle N|\ \vec{j}_a\times\vec{E}_a\ |N\rangle\ ,\ \ \
\chi_{B}\ 2M_t^2\vec{J}=\langle N|\ j_a^0\ \vec{B}_a\ |N\rangle\ ,
\nonumber
\end{equation}
where $\vec{J}$ is the nucleon spin, $j_a^{\mu}$ is the
quark current, $\vec{E}_a$ and $\vec{B}_a$ are the color electric
and magnetic fields, respectively. They relate to $d_2$ and $f_2$ as:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\chi_E(Q^2)=\frac{2}{3}\left(2d_2(Q^2)\ +\ f_2(Q^2)\right),\ \ \
\chi_{B}(Q^2)=\frac{1}{3}\left(4d_2(Q^2)\ -\ f_2(Q^2)\right).
\label{eq:chi}
\end{equation}
\section{World data and global analyses \label{sec:data}}
\subsection {Experiments and world data \label{sec: world data}}
As mentioned in Section~\ref{LC dominance and LF quantization}, a hadron's non-zero anomalous
magnetic moment requires a non-zero quark transverse OAM~\cite{Brodsky:1980zm, Burkardt:2005km};
thus, information on the nucleon's internal angular momenta can be traced back at least as far as the
1930s, with Stern and Frisch's discovery of the proton anomalous magnetic moment~\cite{Stern:1933}. However,
the first direct experimental information on the internal components making up the nucleon spin
came from doubly-polarized DIS experiments.
They took place at SLAC, CERN, DESY, and are continuing at JLab and CERN.
The development of polarized beams~\cite{Sinclair:2007ez} and targets~\cite{Goertz:2002vv} has enabled this program.
It started at SLAC in the late 1970s and early 1980s with the pioneering E80 and
E130 experiments~\cite{Alguard:1976bm, Alguard:1978gf}. It continued in the 1990s with
E142~\cite{Anthony:1996mw}, E143~\cite{Abe:1994cp} -- which also forayed into the resonance region --
E154~\cite{Abe:1997cx,Abe:1997qk}, E155~\cite{Anthony:1999rm} and E155x~\cite{Anthony:1999py}
(an extension of E155 focused on $g_2$ and $A_2$).
The CERN experiments started in 1984 with EMC~\cite{Ashman:1987hv}
-- whose results triggered the ``spin crisis" -- continued with
SMC~\cite{Adeva:1993km}, and are ongoing with COMPASS~\cite{Alexakhin:2006oza}.
At the DESY accelerator, the HERMES experiment~\cite{Ackerstaff:1997ws, Airapetian:2006vy} ran from 1995 to 2007.
The inclusive program of these experiments focused on the Bjorken sum rule (Eq.~(\ref{eq:genBj})) and the longitudinal
nucleon spin structure, although $g_2$ or $A_2$ data, as well as resonance data, were also taken. HERMES and COMPASS
also provided important SIDIS and GPD data.
The JLab doubly polarized inclusive program started in 1998 with a first
set of experiments in the resonance region: E94-010~\cite{Amarian:2002ar} and
EG1a~\cite{Yun:2002td} measured the generalized GDH sum (Eqs.~(\ref{eq:gdhsum_def1}) or (\ref{eq:gdhsum_def2})),
$g_1$ and $g_2$ and their moments for $0.1<Q^2< 1$~GeV$^2$.
Then, the RSS experiment~\cite{Wesselmann:2006mw, Slifer:2008xu}
covered the resonance domain at $\left\langle Q^2\right\rangle = 1.3$~GeV$^2$.
In the early 2000s, another set of experiments was performed: EG1b~\cite{Dharmawardane:2006zd, Prok:2008ev, Bosted:2006gp, Fersch:2017qrq}
extended EG1a up to $Q^2= 4.2$~GeV$^2$ with improved statistics, E99-117~\cite{Zheng:2003un}
covered the high-$x_{Bj}$ region at $Q^2=5$~GeV$^2$,
E97-103~\cite{Kramer:2005qe} measured $g_2^n$ in the DIS region,
and E01-012~\cite{Solvignon:2013yun, Solvignon:2008hk}
covered the resonance region at $Q^2>1$~GeV$^2$.
Furthermore, E97-110~\cite{E97110} and EG4~\cite{Adhikari:2017wox}
investigated $\Gamma_1$, $\Gamma_2$, $g_1$ and $g_2$ in the $Q^2 \to 0$ limit.
EG1dvcs~\cite{Prok:2014ltt} extended EG1
to $Q^2= 5.8$~GeV$^2$ with another large improvement in statistics,
and the SANE experiment~\cite{Rondon:2015mya} focused on $g_2$ and
the twist~3 moment $d_2$ up to $Q^2= 6.5$~GeV$^2$ and $0.3<x_{Bj}<0.85$.
Finally, E06-014 precisely measured $d_2^n$
at $Q^2= 3.2$ and 4.3 GeV$^2$~\cite{Posik:2014usi, Parno:2014xzb}.
These JLab experiments are inclusive, although EG1a~\cite{DeVita:2001ue},
EG1b~\cite{Chen:2006na}, EG4~\cite{Zheng:2016ezf} and EG1dvcs~\cite{Seder:2014cdc}
also provided semi-inclusive, exclusive and DVCS data.
The JLab polarized $^3$He SIDIS program
comprised E06-010/E06-011~\cite{Qian:2011py}, while E07-013~\cite{Katich:2013atq}
used spin degrees of freedom to study
the effect of the exchange of two \emph{hard} photons in DIS.
(Experiments using a polarized beam on unpolarized protons and measuring the proton recoil polarization
had already revealed the importance of such reactions for the proton electric form factor~\cite{Jones:1999rz}.)
Data at $Q^2=0$ or low $Q^2$ from MIT-Bates, LEGS, MAMI and TUNL
also exist.
These experiments, their observables and kinematics are listed in Table~\ref{table_exp}.
The world data for $g_1^p$, as of 2017, is shown in Fig.~\ref{fig:g1p}.
Not included in Table~\ref{table_exp} because they are not discussed in this review, are the doubly or singly
polarized inclusive experiments measuring nucleon form factors~\cite{Pacetti:2015iqa}, including
the strange ones~\cite{Armstrong:2012bi}, or probing the resonance and DIS~\cite{Armstrong:2012bi}
or the Standard Model~\cite{Wang:2014bba} using parity violation.
\begin{table}
\footnotesize
\caption{\label{table_exp}\small Lepton scattering experiments on the nucleon spin structure and their kinematics.
The column ``Analysis" indicates whether the analysis was primarily
conducted in terms of asymmetries ($A_{1,2}$, or single spin asymmetry) or of cross-sections
($g_{1,2}$), and if transverse data were taken in addition to the longitudinal data.}
\hspace{-0.2cm}
\begin{tabular}{|p{3.2cm}|p{2.8cm}|p{0.9cm}|p{2.1cm}|p{1.6cm}|p{2.8cm}|p{1.8cm}|}
\hline
Experiment & Ref. & Target & Analysis & $W$ (GeV)& $\hspace{1cm}x_{Bj}$ & $Q^{2}$ (GeV$^{2}$)\tabularnewline
\hline
\hline
E80 (SLAC) & \cite{Alguard:1976bm} & p & $A_{1}$ & 2.1 to 2.6 & 0.2 to 0.33 & 1.4 to 2.7 \tabularnewline
\hline
E130 (SLAC) & \cite{Alguard:1978gf} & p & $A_{1}$ & 2.1 to 4.0 & 0.1 to 0.5 & 1.0 to 4.1 \tabularnewline
\hline
EMC (CERN) & \cite{Ashman:1987hv} & p & $A_{1}$ & 5.9 to 15.2 & $1.5\times10^{-2}\mbox{ to }0.47$ & 3.5 to 29.5 \tabularnewline
\hline
SMC (CERN) & \cite{Adeva:1993km} & p, d & $A_{1}$ & 7.7 to 16.1 & $10^{-4}\mbox{ to }0.482$ & 0.02 to 57 \tabularnewline
\hline
E142 (SLAC) & \cite{Anthony:1996mw} & $^{3}$He & $A_{1}$, $A_{2}$ & 2.7 to 5.5 & $3.6\times10^{-2}\mbox{ to }0.47$ & 1.1 to 5.5 \tabularnewline
\hline
E143 (SLAC) & \cite{Abe:1994cp} & p, d & $A_{1}$, $A_{2}$ & 1.1 to 6.4 & $3.1\times10^{-2}\mbox{ to }0.75$ & 0.45 to 9.5 \tabularnewline
\hline
E154 (SLAC) & \cite{Abe:1997cx,Abe:1997qk} & $^{3}$He & $A_{1}$, $A_{2}$ & 3.5 to 8.4 & $1.7\times10^{-2}\mbox{ to }0.57$ & 1.2 to 15.0 \tabularnewline
\hline
E155/E155x (SLAC) & \cite{Anthony:1999rm,Anthony:1999py} & p, d & $A_{1}$, $A_{2}$ & 3.5 to 9.0 & $1.5\times10^{-2}\mbox{ to }0.75$ & 1.2 to 34.7 \tabularnewline
\hline
{\scriptsize HERMES (DESY)} & \cite{Ackerstaff:1997ws, Airapetian:2006vy} & p, $^{3}$He & $A_{1}$ & 2.1 to 6.2 & $2.1\times10^{-2}\mbox{ to }0.85$ & 0.8 to 20 \tabularnewline
\hline
E94010 (JLab)& \cite{Amarian:2002ar} & $^{3}$He & $g_{1}$, $g_{2}$ & 1.0 to 2.4 & $1.9\times10^{-2}\mbox{ to }1.0$ & 0.019 to 1.2 \tabularnewline
\hline
EG1a (JLab) & \cite{Yun:2002td} & p, d & $A_{1}$ & 1.0 to 2.1 & $5.9\times10^{-2}\mbox{ to }1.0$ & 0.15 to 1.8 \tabularnewline
\hline
RSS (JLab) & \cite{Wesselmann:2006mw, Slifer:2008xu} & p, d & $A_{1}$, $A_{2}$ & 1.0 to 1.9 & $0.3\mbox{ to }1.0$ & 0.8 to 1.4 \tabularnewline
\hline
{\scriptsize COMPASS (CERN) DIS} & \cite{Alexakhin:2006oza} & p, d & $A_{1}$ & 7.0 to 15.5 & $4.6\times10^{-3}\mbox{ to }0.6$ & 1.1 to 62.1 \tabularnewline
\hline
{\scriptsize COMPASS low-$Q^2$}& \cite{Nunes:2016otf} & p, d & $A_{1}$ & 5.2 to 19.1 & $4\times10^{-5}\mbox{ to }4\times10^{-2}$ & 0.001 to 1.0 \tabularnewline
\hline
EG1b (JLab) &~\cite{Dharmawardane:2006zd, Prok:2008ev, Bosted:2006gp, Fersch:2017qrq} & p, d & $A_{1}$ & 1.0 to 3.1 & $2.5\times10^{-2}\mbox{ to }1.0$ & 0.05 to 4.2 \tabularnewline
\hline
E99-117 (JLab) & \cite{Zheng:2003un} & $^{3}$He & $A_{1}$, $A_{2}$ & 2.0 to 2.5 & $0.33\mbox{ to }0.60$ & 2.7 to 4.8 \tabularnewline
\hline
E97-103 (JLab) & \cite{Kramer:2005qe} & $^{3}$He & $g_{1}$, $g_{2}$ & 2.0 to 2.5 & $0.16\mbox{ to }0.20$ & 0.57 to 1.34 \tabularnewline
\hline
E01-012 (JLab) & \cite{Solvignon:2013yun, Solvignon:2008hk} & $^{3}$He & $g_{1}$, $g_{2}$ & 1.0 to 1.8 & $0.33\mbox{ to }1.0$ & 1.2 to 3.3 \tabularnewline
\hline
E97-110 (JLab) & \cite{E97110} & $^{3}$He & $g_{1}$, $g_{2}$ & 1.0 to 2.6 & $2.8\times10^{-3}\mbox{ to }1.0$ & 0.006 to 0.3 \tabularnewline
\hline
EG4 (JLab) & \cite{Adhikari:2017wox} & p, n & $g_{1}$ & 1.0 to 2.4 & $7.0\times10^{-3}\mbox{ to }1.0$ & 0.003 to 0.84 \tabularnewline
\hline
SANE (JLab) & \cite{Rondon:2015mya} & p & $A_{1}$, $A_{2}$ & 1.4 to 2.8 & $0.3\mbox{ to }0.85$ & 2.5 to 6.5 \tabularnewline
\hline
EG1dvcs (JLab) & \cite{Prok:2014ltt} & p & $A_{1}$ & 1.0 to 3.1 & $6.9\times10^{-2}\mbox{ to }0.63$ & 0.61 to 5.8 \tabularnewline
\hline
E06-014 (JLab) & \cite{Posik:2014usi, Parno:2014xzb} & $^{3}$He & $g_{1}$, $g_{2}$ & 1.0 to 2.9 & $0.25\mbox{ to }1.0$ & 1.9 to 6.9 \tabularnewline
\hline
E06-010/011 (JLab) & \cite{Qian:2011py} & $^{3}$He & single spin asy. & 2.4 to 2.9 & $0.16\mbox{ to }0.35$ & 1.4 to 2.7 \tabularnewline
\hline
E07-013 (JLab) & \cite{Katich:2013atq} & $^{3}$He & single spin asy. & 1.7 to 2.9 & $0.16\mbox{ to }0.65$ & 1.1 to 4.0 \tabularnewline
\hline
E08-027 (JLab) & \cite{g2p} & p & $g_{1}$, $g_{2}$ & 1.0 to 2.1 & $3.0\times10^{-3}\mbox{ to }1.0$ & 0.02 to 0.4 \tabularnewline
\hline
\end{tabular}
\end{table}
\normalsize
\begin{figure}
\center
\protect\includegraphics[scale=0.35]{g1p_world}
\protect\includegraphics[scale=0.35]{g1p_world_dis}
\vspace{-1.6cm}
\caption{\label{fig:g1p} \small{Left: Available world data on $g_1^p$ as of 2017. An offset $C(x_{Bj})$ is added
to $g_1^p$ for visual clarity. Only two of the four energies of
experiment EG1b are shown. The dotted lines mark a particular $x_{Bj}$ bin and do not represent
the $Q^2$-evolution.
Right: Same as left but for DIS data only.
Despite the modest beam energy, part of JLab's data reaches the DIS region and, thanks to
JLab's high luminosity, contributes significantly to the global data set.
}}
\end{figure}
Global DIS data analyses~\cite{Ball:1995td}-\cite{Shahri:2016uzl} are discussed next.
Their primary goal is to provide the polarized
PDFs $\Delta q(x_{Bj})$ and $\Delta g(x_{Bj})$, as well as their integrals $\Delta \Sigma$ and $\Delta G$, which
enter the spin sum rule, Eq.~(\ref{eq:spin SR}).
Then, we present the specialized DIS experiments focusing on large $x_{Bj}$.
Next, we review the information on the nucleon spin structure emerging
from experiments with kinematics below the DIS.
Afterward, we review the parton correlations (\emph{higher-twists}) information obtained with
these low energy data together with the DIS ones and the closely related phenomenon of hadron-parton duality.
Finally, we conclude this section with our present knowledge of the nucleon spin at high energy,
in particular the components of the spin sum rule, Eq.~(\ref{eq:spin SR}), discuss the origin of their values,
and comment on the consistency of the data and the remaining open questions.
\subsection{Global analyses }
DIS experiments are analyzed in the pQCD framework.
Their initial goal was to test QCD using the Bjorken sum rule, Eq.~(\ref{eq:mu4}).
After 25 years of study, it has now been verified at close to the
5\% level~\cite{Alekseev:2009ac, Alekseev:2010ub, Adolph:2015saz, Alexakhin:2005iw}.
Meanwhile, the nucleon spin structure started to be uncovered. Among the main results
of these efforts is the determination of the small contribution of the quark spins $\Delta\Sigma$, Eq.~(\ref{eq:mu2}),
which implies that the quark OAM and/or the gluon contribution $\Delta G+\mbox{L}_g$ must be important.
Global analyses, which now include not only DIS but SIDIS, p-p and $e^+$-$e^-$
collisions provide fits of PDFs and are the main avenue of interpreting the
data~\cite{Ball:1995td, Leader:2010rb, deFlorian:2009vb, Jimenez-Delgado:2013boa, Nocera:2014gqa}.
These analyses are typically at NLO in $\alpha_s$, although NNLO has recently become available~\cite{Moch:2014sna}.
Several groups have carried out such analyses. Beside data, the analyses are constrained by general principles,
including \emph{positivity constraints} (see Section~\ref{parton model}) and often other constraints such as
SU(2)$_f$ and SU(3)$_f$ symmetries (see Section~\ref{DIS SR}),
\emph{counting rules}~\cite{Brodsky:1973kr} and integrability ({\it i.e.}, the matrix elements of the axial current
are always finite).
A crucial difference between the various analyses
is the choice of initial PDF ansatz, particularly for $\Delta g(x_{Bj})$, and of methods
to minimize the bias stemming from such choice, which is the leading contribution
to the systematic uncertainty. Two methods are used to optimize the PDFs starting from
the original ansatz. One is to start from polynomial PDFs and optimize them with respect to the data and general
constraints using Lagrange multipliers or Hessian techniques.
The other approach determines the best PDFs using neural networks.
Other differences between analyses are the choice of renormalization schemes
(recent analyses typically use $\overline{MS}$), of factorization schemes and of
\emph{factorization scale}. Observables are in principle independent of these arbitrary choices but not
in practice because of the necessary truncation of the pQCD series: calculating perturbative
coefficients at high orders quickly becomes prohibitive. Furthermore, pQCD series are \emph{Poincar\'{e} series}
that diverge beyond an order approximately given by $\pi/\alpha_{s}$. Thus, they
must be truncated at or before this order. However, at the typical scale $\mu^2=5$~GeV$^2$,
$\pi/\alpha_{s} \approx 11$ so this is currently not a limitation. The truncations cause the perturbative approximant of an
observable to retain a dependence on the arbitrary choices made by the DIS analysts. In principle, this dependence
decreases with $Q^2$: at high enough $Q^2$ where the observable is close to the LO value of its perturbative
approximant, unphysical dependencies should disappear since LO is renormalization
scheme independent (with some exceptions,
such as non-zero renormalons~\cite{Deur:2016tte}; another notable example is $\Delta\Sigma$'s perturbative
approximant which contains a non-vanishing contribution at $Q^2 \to \infty$ from the gluon anomaly, see
Section~\ref{DIS SR}).
Evidently, at finite $Q^2$, observables also depend on the $\alpha_s$ order at which the analysis is carried out.
DIS analysis accuracy is limited by these unphysical dependencies.
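The truncation argument above can be illustrated with a toy numerical sketch. The factorial growth of the coefficients ($t_n = n!\,a^n$) and the value $\alpha_s(5~\mbox{GeV}^2)\approx0.29$ used below are illustrative assumptions, not fitted numbers:

```python
import math

# Toy illustration of why a pQCD (asymptotic/Poincare) series must be
# truncated: for factorially growing coefficients, t_n = n! * a^n, the
# terms shrink only up to n ~ 1/a = pi/alpha_s, then grow without bound.
# alpha_s(5 GeV^2) ~ 0.29 is an illustrative value, not a fit result.
alpha_s = 0.29
a = alpha_s / math.pi                     # expansion parameter alpha_s/pi
terms = [math.factorial(n) * a ** n for n in range(25)]
n_min = min(range(25), key=lambda n: terms[n])
print(f"pi/alpha_s = {math.pi / alpha_s:.1f}; smallest term at order n = {n_min}")
```

The smallest term occurs near order $n\approx\pi/\alpha_s\approx11$, well beyond the NLO--NNLO orders of current analyses.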
Optimization methods exist to minimize them. For instance, the \emph{factorization scale} $\mu$
can be determined by comparing nonperturbative calculations to their corresponding perturbative
approximant, see { \it e.g.}, Refs.~\cite{Deur:2014qfa, Deur:2016cxb}.
That $\mu$ depends on the renormalization scheme (and on the pQCD order)
illustrates the discussion: at
N$^3$LO $\mu = 0.87 \pm 0.04$~GeV in the $\overline{MS}$ scheme, $\mu = 1.15 \pm 0.06$~GeV
in the $MOM$ scheme and $\mu = 1.00 \pm 0.05$~GeV in the $V$ scheme.
Another example of an optimization procedure is implementing the renormalization group criterion
that an observable cannot depend on conventions such as the
renormalization scheme choice. Optimizing a pQCD series is then achieved by minimizing the
renormalization scheme dependence. One such approach is the \emph{BLM} procedure~\cite{the:BLM}.
The \emph{Principle of Maximum Conformality} (PMC)~\cite{the:PMC} generalizes it and
sets unambiguously order-by-order in pQCD the \emph{renormalization scale},
{\it i.e.}, the scale at which the renormalization procedure subtracts the ultraviolet divergences
(often also denoted $\mu$ but not to be confused with the \emph{factorization scale} just discussed).
By fulfilling renormalization group invariance the PMC provides approximants
independent of the choice of renormalization scheme.
While polarized DIS directly probes $\Delta q(x_{Bj},Q^2)$,
$\Delta g(x_{Bj},Q^2)$ is also accessed through the pQCD evolution
equations, Eq.~(\ref{quark LO evol}). However, the present precision and kinematic coverage of the data do not
constrain it well. The constraint will be significantly improved by the
12 GeV spin program at JLab that will cover the largely unconstrained $x_{Bj} >0.6$ region, and
by the polarized EIC (electron-ion collider) that will cover the low-$x_{Bj}$ domain~\cite{Accardi:2012qut}.
(The EIC may also constrain the gluon OAM~\cite{Ji:2016jgn}). But $\Delta g(x_{Bj},Q^2)$ is best accessed \emph{via}
semi-exclusive DIS involving photon-gluon fusion, $\gamma^{*}g \to q \overline{q}$.
This was evaluated by the SMC, HERMES and COMPASS experiments.
Polarized p-p (RHIC-spin) provides other channels that
efficiently access $\Delta g(x_{Bj},Q^2)$, see Section~\ref{n-n scattering}.
Global analysis results are discussed in Section~\ref{nucleon spin structure at high energy}
which gives the current picture of the nucleon spin structure at high energy.
They are listed in Tables~\ref{table Delta Sigma 1}-\ref{table Delta G 1} in the Appendix.
\vspace{-0.3cm}
\subsection{PQCD in the high-$x_{Bj}$ domain\label{sec: high-x}}
The high-$x_{Bj}$ region should be relatively simple: as $x_{Bj}$ grows, the \emph{valence quark}
distribution starts prevailing over the ones of gluons and of
$q$-$\overline{q}$ pairs materializing from gluons, see Fig.~\ref{fig:pdf_dist}.
This prevalence allows the use of \emph{constituent quark} models (see page \pageref{CQM})~\cite{Isgur:1978xj}.
Thus the high-$x_{Bj}$ region is particularly interesting.
It has been studied with precision by the JLab collaborations E99-117, EG1b, E06-014 and EG1dvcs,
and by the CERN's COMPASS collaboration.
This region has been precisely studied only recently because the unpolarized PDFs there (Fig.~\ref{fig:pdf_dist}) are small, which
entails small cross-sections whose kinematic factors, furthermore,
vary at first order as $1/x_{Bj}$. Thus, early high-$x_{Bj}$ data lacked the
precision necessary to extract polarized PDFs. The high polarized luminosity
of JLab has allowed this region to be explored more precisely.
\subsubsection{$A_1$ in the DIS at high-$x_{Bj}$}\label{pqcd high-x}
Assuming that quarks are in an $S$ state, {\it i.e.}, they have no OAM, a quark carrying all the nucleon momentum
($x_{Bj}\to 1$) must carry the nucleon helicity~\cite{Farrar:1975yb}.
This implies $A_1 \xrightarrow[x_{Bj} \to 1] {}1$. This is a rare example of absolute prediction from QCD: generally
pQCD predicts only the $Q^2$-dependence of observables, see
Sections~\ref{sub:MecaDIS} and~\ref{sub:pQCD}.
(Other examples are the processes involving the chiral anomaly, such as $\pi^0 \to \gamma \gamma$.)
Furthermore, the \emph{valence quarks}
dominance makes known the nucleon wavefunction, see Eq.~(\ref{SU(6) p wavefunction}).
The BBS~\cite{Brodsky:1994kg} and LSS~\cite{Leader:2001kh} global fits
include these two constraints. The $x_{Bj}$-range where the $S$-state dominates
is the only significant assumption of these fits, which have been improved to include
the $\left|\mbox{L}_z(x_{Bj})\right|=1$ wavefunction components~\cite{Avakian:2007xa}.
The phenomenological predictions~\cite{Brodsky:1994kg, Avakian:2007xa, Leader:2001kh} for
$A_1 (x_{Bj} \to 1)$ are thus based on solid premises. Model predictions also exist and
are discussed next.
\subsubsection{Quark models and other predictions of $A_1$ for high-$x_{Bj}$ DIS}
\begin{wrapfigure}{r}{0.44\textwidth}
\vspace{-0.7cm}
\includegraphics[width=0.44\columnwidth]{f2nof2p}
\vspace{-1.cm}
\caption{\label{fig:f2n/f2p}\small{$F_2^n/F_2^p$ SLAC data~\cite{Bodek:1973dy}. SU(6) predicts
$F_2^n/F_2^p=2/3$.}
\vspace{-0.6cm}
}
\end{wrapfigure}
Modeling the nucleon as made of three \emph{constituent quarks} is justified in the high-$x_{Bj}$
DIS domain since there, \emph{valence quarks} dominate. This finite number of partons and
the SU(6) flavor-spin symmetry allow one to construct a simple nucleon wavefunction,
see Eq.~(\ref{SU(6) p wavefunction}), leading to $A_1^p=5/9$ and $A_1^n=0$.
However SU(6) is broken, as clearly indicated {\it e.g.}, by
the nucleon-$\Delta$ mass difference of 0.3 GeV or the failure of the
SU(6) prediction that $F_2^n/F_2^p=2/3$, see Fig.~\ref{fig:f2n/f2p}.
The one-gluon exchange (pQCD ``hyperfine interaction'', see page \pageref{CQM})
breaks SU(6) and can account for the nucleon-$\Delta$ mass difference.
It predicts the same $x_{Bj}\to1$ limits as for pQCD:
$A_1^p= A_1^n=1$.
A prediction
of the \emph{constituent quark} model improved with the hyperfine interaction~\cite{Isgur:1998yb}
is shown in Fig.~\ref{fig:A1}.
Another approach to refine the \emph{constituent quark} model is using
chiral \emph{constituent quark} models~\cite{Manohar:1983md}.
Such models assume a $\approx 1$~GeV scale for chiral symmetry breaking,
significantly higher than $\Lambda_s$ ($0.33$~GeV in the $\overline{MS}$ scheme)
and use an effective Lagrangian~\cite{Weinberg:1978kz} with
\emph{valence quarks} interacting \emph{via} Goldstone bosons as
effective degrees of freedom. The models include \emph{sea quarks}.
$x_{Bj}$-dependence is included phenomenologically in recent models, {\it e.g.}, in the
prediction~\cite{Dahiya:2016wjf}.
Augmenting quark models with meson clouds provides another possible SU(6) breaking
mechanism~\cite{Myhrer:2007cf, Myhrer:1988ap}.
Ref.~\cite{Signal:2017cds} compares $A_1$ predictions with this approach
and that of the ``hyperfine" mechanism.
Other predictions for $A_1$ at high-$x_{Bj}$ exist and are shown in Fig.~\ref{fig:A1}. They are:
\noindent ~$\bullet$ The statistical model of Ref.~\cite{Bourrely:2001du}. It describes the nucleon as fermionic and bosonic gases
in equilibrium at an empirically determined temperature;
\noindent ~$\bullet$ The hadron-parton duality (Section~\ref{sec:duality}). It relates
well-measured baryon form factors (elastic or $\Delta(1232)~3/2^+$ reactions, all at high-$x_{Bj}$) to DIS
structure functions at the same $x_{Bj}$~\cite{Close:2003wz}. Predictions depend on the mechanism
chosen to break SU(6), with two examples shown in Fig.~\ref{fig:A1};
\noindent ~$\bullet$ Dyson-Schwinger Equations with contact or realistic interaction.
They predict $A_1(1)$ values significantly smaller than pQCD~\cite{Roberts:2011wy};
\noindent ~$\bullet$ The bag model of Boros and Thomas, in which three free quarks are confined in a sphere
of nucleon diameter. Confinement is provided by the boundary conditions
requiring that the quark vector current vanishes on the sphere surface~\cite{Boros:1999tb};
\noindent ~$\bullet$ The quark model of Kochelev~\cite{Kochelev:1997ux} in which the quark polarization
is affected by instantons representing non-perturbative fluctuations of gluons;
\noindent ~$\bullet$ The chiral soliton models of Wakamatsu~\cite{Wakamatsu:2003wg} and
Weigel \emph{et al.}~\cite{Weigel:1996kw} in which the quark degrees of freedom explicitly generate
the hadronic chiral soliton properties of the Skyrme nucleon model;
\noindent ~$\bullet$ The quark-diquark model of Cloet \emph{et al.}~\cite{Cloet:2005pp}.
\subsubsection{ $A_1$ results }
\begin{figure}
\center
\includegraphics[scale=0.4]{A1p}\includegraphics[scale=0.4]{A1n}
\vspace{-0.6cm}
\caption{\label{fig:A1}\small{$A_1$ DIS data on the proton (left) and neutron (right).
The $Q^2$ values of the various results are not necessarily the same, but
$A_1$'s $Q^2$-dependence is weak.}}
\vspace{-0.6cm}
\end{figure}
Experimental results on $A_1$~\cite{Anthony:1999rm, Anthony:1996mw, Dharmawardane:2006zd,
Prok:2008ev, Fersch:2017qrq, Zheng:2003un, Prok:2014ltt, Parno:2014xzb}
are shown in Fig.~\ref{fig:A1}. They confirm that SU(6), whose prediction is
shown by the flat lines in Fig.~\ref{fig:A1}, is broken.
The $x_{Bj}$-dependence of $A_1$ is well reproduced by the \emph{constituent quark} model
with ``hyperfine" corrections. The systematic shift for $A_1^n$ at
$x_{Bj}<0.4$ may be a \emph{sea quark} effect.
The BBS/LSS fits to pre-JLab data disagree with these data. The fits are constrained by pQCD but
assume no quark OAM. Fits including it~\cite{Avakian:2007xa} agree with the
data, which suggests the importance of the quark OAM.
However, the relation between the effect of states $\left|\mbox{L}_z(x_{Bj})\right|=1$
at high $x_{Bj}$ and $\Delta L$ in Eq.~(\ref{eq:spin SR}) remains to be elucidated.
To solve this issue, the nucleon wavefunction at low $x_{Bj}$ must be known.
While the data have excluded some of the models (bag model~\cite{Boros:1999tb}, or specific SU(6) breaking mechanisms
in the duality approach), high-precision data at higher $x_{Bj}$ are needed to test the remaining predictions.
Such data will be taken at JLab in 2019~\cite{large-x A_1 12 GeV exps}.
\subsection{Results on the polarized partial cross-sections $\sigma_{TT}$ and $\sigma_{LT}'$\label{sec:g1, g2, stt, slt}}
The pairs of observables ($g_1$, $g_2$), ($A_1$, $A_2$), or ($\sigma_{TT}$, $\sigma_{LT}'$)
all contain identical spin information.
$A_1$ at high-$x_{Bj}$ was discussed in the previous section.
The $g_1$ DIS data at smaller $x_{Bj}$ are discussed in the Section~\ref{nucleon spin structure at high energy},
and the $g_2$ data are discussed in Section~\ref{sec:g2-g2ww}.
Here, $\sigma_{TT}$ and $\sigma_{LT}'$, Eq.~(\ref{sigmaTT}), are discussed.
Data on $\sigma_{TT}$ and ${\sigma}_{LT}'$ on $^3$He are available
in the strong-coupling QCD region~\cite{Amarian:2002ar, E97110}
for $0.04<Q^2<0.90$~GeV$^2$ and $0.9<W<2$~GeV. Neutron data are unavailable
since for $x_{Bj}$-dependent quantities such as $g_1$ or $\sigma_{TT}$,
there is no known accurate method to extract the neutron from $^3$He.
Yet, since in $^3$He, protons contribute little to polarized observables, the results of
Refs.~\cite{Amarian:2002ar, E97110} suggest how neutron data may look. Neutron information
can be extracted for moments, see Sections~\ref{GDHsum} and \ref{sec:Spin polarizabilities}.
$\sigma_{TT}$ displays a large trough at the $\Delta(1232)~3/2^+$ resonance.
Troughs are also present at the other resonances, but are less marked.
The $\Delta(1232)~3/2^+$ dominates because it is the lightest resonance (see Eq.~(\ref{sigmaTT}))
and because its spin 3/2 makes the nucleon-$\Delta$ transition largely
transverse. \label{sub:sigmaTT} Since $\sigma_{TT} = (\sigma_{T,1/2} - \sigma_{T,3/2}) /2$,
where {\scriptsize 1/2} and {\scriptsize 3/2} refer to the spin of the intermediate state, here the $\Delta(1232)~3/2^+$,
$\sigma _{TT}$ is maximum and negative. At large $Q^2$, chiral symmetry is restored, which forbids
spin-flips and makes $\sigma _{T,1/2}$ dominant.
This shrinkage of the $\Delta(1232)~3/2^+$ trough is seen in the
$1 \leq Q^2 \leq 3.5$~GeV$^2$ data used to study duality, Section~\ref{sec:duality}.
All this implies that at low $Q^2$
the $\Delta(1232)~3/2^+$ contribution dominates the generalized GDH integral ($\propto\int\sigma_{TT}/\nu\, d\nu$),
a dominance further amplified by the $1 / \nu$ factor in the integral.
This latter effect is magnified in higher moments, such as those of
generalized polarizabilities, Eqs.~(\ref{eq:gamma_0}) and (\ref{eq:delta_LT SR}).
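The amplification of the $\Delta(1232)$ dominance by the $\nu$ weighting can be sketched numerically. The Lorentzian peak, flat continuum, and all parameters below are illustrative assumptions, not a model of the actual $\sigma_{TT}$ data:

```python
import numpy as np

# Toy illustration of how the 1/nu weighting amplifies the Delta(1232)
# dominance in moments: model |sigma_TT| as a Lorentzian peak in the Delta
# region plus a flat continuum at higher nu (all numbers illustrative),
# then compare the Delta fraction of a GDH-like integral (weight 1/nu)
# with that of a polarizability-like integral (weight 1/nu^3).
nu = np.linspace(0.3, 10.0, 20001)                  # photon energy, GeV
peak = 1.0 / (1.0 + ((nu - 0.34) / 0.1) ** 2)       # Delta-region peak
continuum = 0.2 * (nu > 1.0)                        # flat high-nu tail

def delta_fraction(power):
    # fraction of the 1/nu^power-weighted integral coming from the peak
    w = nu ** (-float(power))
    dn = nu[1] - nu[0]
    total = np.sum((peak + continuum) * w) * dn
    return float(np.sum(peak * w) * dn / total)

print(delta_fraction(1), delta_fraction(3))         # second is larger
```

The peak's share of the integral grows markedly when going from the $1/\nu$ to the $1/\nu^3$ weighting, mirroring the statement above.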
${\sigma}_{LT}'$ is rather featureless
compared to $\sigma_{TT}$ and in particular shows no structure at the
$\Delta(1232)~3/2^+$ location. It confirms that the nucleon-to-$\Delta$ transition occurs mostly via spin-flip
(magnetic dipole transition) induced by
transversely polarized photons. Since longitudinal photons contribute little, the
longitudinal-transverse interference cross-section ${\sigma}_{LT}'$ is almost zero.
At higher $W$, ${\sigma}_{LT}'$ becomes distinctly positive.
\subsection{Results on the generalized GDH integral}
\label{GDHsum} The generalized GDH integral $I_{TT}(Q^2)$, Eq.~(\ref{eq:gdhsum_def1}),
was measured for the neutron and proton at DESY (HERMES)~\cite{Airapetian:2000yk} and
JLab~\cite{Adhikari:2017wox, Amarian:2002ar, E97110}.
The measurements cover the energy range from the pion production threshold up to typically $ W \approx 2.0$~GeV.
The higher-$W$ contribution is estimated with parameterizations, {\it e.g.}, that of Ref.~\cite{Bianchi:1999qs}.
At low $ Q^2$, $I_{TT}$ can be computed using $\chi$PT~\cite{Bernard:2002bs,
Bernard:2002pw, Ji:1999pd, Ji:1999mr, Kao:2002cp, Bernard:2012hb, Lensky:2014dda}.
The Ji-Kao-Lensky \emph{et al.} calculations~\cite{Ji:1999pd, Kao:2002cp, Lensky:2014dda}
and data agree, up to about $Q^2=0.2$~GeV$^2$. Beyond this, the calculation uncertainties become too large for a
meaningful comparison. The Bernard \emph{et al.} calculations and
data~\cite{Bernard:2002pw, Bernard:2002bs, Bernard:2012hb} also agree, although marginally.
The MAID model underestimates the data~\cite{Drechsel:1998hk}.
($I_{TT}(Q^2)$ constructed with MAID is integrated only up to $W\le2$~GeV
and thus must be compared to data without the large-$W$ extrapolation.)
The extrapolation of the p+n data~\cite{Adhikari:2017wox} together with the proton GDH sum rule
world data~\cite{Ahrens:2001qt} yield $I_{TT}^{n}(0)= -0.955\pm 0.040\,(stat) \pm 0.113 \,(syst)$, which agrees
with the sum rule expectation.
\subsection{Moments of $g_1$ and $g_2$ \label{sec:Gamma1s} }
\subsubsection{Extractions of the $g_1$ first moments}
\noindent \textbf{$\Gamma_1^p$ and $\Gamma_1^n$ moments:}
The measured $ \Gamma_1(Q^2)$ is constructed by integrating $g_1$
from $x_{Bj,min}$ up to the pion production threshold. $x_{Bj,min}$, the minimum $x_{Bj}$ reached,
depends on the beam energy and minimum detected scattering angle for a given $Q^2$ point.
Table~\ref{table_exp} on page \pageref{table_exp} provides these limits.
When needed, contributions below $x_{Bj,min}$ are estimated using low-$x_{Bj}$
models~\cite{Bianchi:1999qs, Bass:1997fh}.
For the lowest $Q^2$, typically below the GeV$^2$ scale, the large-$x_{Bj}$
contribution (excluding elastic) is also added when it is not measured.
The data for $ \Gamma_1$, shown in Fig.~\ref{fig:gamma1pn},
are from SLAC~\cite{Anthony:1996mw}-\cite{Anthony:1999py},
CERN~\cite{Ashman:1987hv, Adeva:1993km}-\cite{Ageev:2007du, Alekseev:2010hc,
Alexakhin:2005iw, Alekseev:2009ac}-\cite{Adolph:2015saz},
DESY~\cite{Airapetian:2000yk} and
JLab~\cite{Amarian:2002ar}-\cite{Prok:2008ev, Fersch:2017qrq,
Solvignon:2013yun, Solvignon:2008hk}-\cite{Prok:2014ltt, Deur:2004ti, Deur:2008ej}.
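The assembly of a measured $\Gamma_1(Q^2)$ described above can be sketched as follows. The pion-production-threshold kinematics are standard ($W^2=M^2+Q^2(1-x)/x$ with $W=M+m_\pi$), but the $g_1$ function, the $x_{Bj,min}=0.05$ cutoff, and $Q^2=1$~GeV$^2$ are purely illustrative, not real data:

```python
import numpy as np

# Sketch of how a measured Gamma_1(Q^2) is assembled: integrate g1 over
# the measured region, from x_min up to the pion-production threshold
# x_th = Q^2 / (Q^2 + (M + m_pi)^2 - M^2), which follows from
# W^2 = M^2 + Q^2 (1 - x)/x with W = M + m_pi. The g1 below is a toy
# function and x_min = 0.05 an illustrative cutoff, not actual data.
M, m_pi = 0.9383, 0.1350                     # nucleon and pion masses, GeV

def x_threshold(Q2):
    return Q2 / (Q2 + (M + m_pi) ** 2 - M ** 2)

Q2 = 1.0                                     # GeV^2
x = np.linspace(0.05, x_threshold(Q2), 501)
g1 = 0.3 * x ** 0.7 * (1 - x) ** 3           # toy g1(x, Q^2)
gamma1_measured = float(np.sum((g1[1:] + g1[:-1]) * np.diff(x) / 2.0))
print(x_threshold(Q2), gamma1_measured)
```

The low-$x_{Bj}$ and, at low $Q^2$, large-$x_{Bj}$ pieces would then be added from the models cited above.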
\noindent \textbf{Bjorken sum $\Gamma_1^{p-n}$:}
The proton and neutron (or deuteron) data can be combined to form the isovector moment $ \Gamma_1^ {p -n}$.
The Bjorken sum rule predicts that
{\vspace{-0.1cm}{$ \Gamma_1^ {p -n} \xrightarrow[Q^2 \to \infty] {} g_A/6$}}~\cite{Bjorken:1966jh}.
The prediction is generalized to finite $Q^2$ using OPE, resulting in a relatively
simple leading--twist $Q^2$-evolution in which only non-singlet coefficients remain, see Eq.~(\ref{eq:genBj}).
The sum rule has been experimentally validated, most precisely by E155~\cite{Anthony:1999rm}:
$\Gamma_1^{p-n}=0.176 \pm 0.003 \pm 0.007$ at $Q^2$ = 5 GeV$^2$, while the sum rule prediction at the same
$Q^2$ is $\Gamma_1^{p-n} = 0.183 \pm 0.002$. $\Gamma_1^{p-n}$ was first measured by
SMC~\cite{Adeva:1993km} and then E143~\cite{Abe:1994cp}, E154~\cite{Abe:1997cx},
E155~\cite{Anthony:1999rm} and HERMES~\cite{Airapetian:2000yk}.
Its $Q^2$-evolution was mapped at JLab~\cite{Fersch:2017qrq, Deur:2004ti, Deur:2008ej}.
The latest measurement (COMPASS) yields
$\Gamma_1^{p-n}= 0.192 \pm 0.007$ (stat) $\pm 0.015$
(syst)~\cite{Alekseev:2009ac, Alekseev:2010ub, Adolph:2015saz, Alexakhin:2005iw, Alekseev:2010hc}.
\\
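The level of agreement quoted above can be checked with the leading-twist series of the generalized Bjorken sum rule. The $\overline{MS}$, $n_f=3$ coefficients ($1$, $-1$, $-3.58$, $-20.22$ in powers of $\alpha_s/\pi$) are the known values; $\alpha_s(5~\mbox{GeV}^2)\approx0.29$ and $g_A=1.2723$ are illustrative inputs rather than the exact values of any one analysis:

```python
import math

# Leading-twist numerical check of the Bjorken sum rule at Q^2 = 5 GeV^2.
# Coefficients (1, -1, -3.58, -20.22 in powers of alpha_s/pi) are the
# known MS-bar values for n_f = 3; alpha_s(5 GeV^2) ~ 0.29 and
# g_A = 1.2723 are illustrative inputs.
g_A = 1.2723
alpha_s = 0.29
a = alpha_s / math.pi
gamma1_pn = (g_A / 6.0) * (1 - a - 3.58 * a ** 2 - 20.22 * a ** 3)
print(f"Gamma_1^(p-n)(5 GeV^2) ~ {gamma1_pn:.3f}")  # compares well with 0.183
```

The result reproduces the quoted prediction $\Gamma_1^{p-n}=0.183\pm0.002$ and sits within the E155 and COMPASS uncertainties.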
As an isovector quantity, $ \Gamma_1^ {p -n}$ has no $\Delta(1232)~3/2^+$ resonance contribution.
This simplifies $\chi$PT calculations, which may remain valid to higher $Q^2$ than typical for
$\chi$PT~\cite{Burkert:2000qm}.
In addition, a non-singlet moment is simpler to calculate with LGT since the CPU-expensive
disconnected diagrams (quark loops) do not contribute. (Yet, the axial charge $g_A$ and the axial form factor $g_A(Q^2)$
remain a challenge for LGT~\cite{Capitani:2012gj} because
of their strong dependence to the lattice volume. Although the calculations are
improving~\cite{Liang:2016fgy}, the LGT situation for $g_A$ is still unsatisfactory.)
Thus, $ \Gamma_1^ {p -n}$ is especially convenient to test the techniques discussed in Section~\ref{Computation methods}.
As for all moments, a limitation is the impossibility of measuring the $x_{Bj} \to 0$ contribution, which would require
infinite beam energy. The Regge behavior $g_1^{p-n}(x_{Bj}) = (x_0/x_{Bj})^{0.22}$ may provide
an adequate low-$x_{Bj}$ extrapolation~\cite{Bass:1997fh} (see also~\cite{Kirschner:1983di,Bartels:1995iu,Kovchegov:2015pbl}).
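For the Regge form above, the unmeasured low-$x_{Bj}$ contribution has a closed form since the integrand behaves as $x^{-0.22}$. In the sketch below, the normalization $C$, the reference point $x_0$, and the cutoff $x_{min}$ are illustrative placeholders:

```python
# Closed-form low-x extrapolation of the Bjorken integrand using the Regge
# form g1^{p-n}(x) = C (x0/x)^0.22 quoted above. The normalization C, the
# reference point x0 and the cutoff x_min are illustrative placeholders.
def low_x_contribution(C, x0, x_min, alpha=0.22):
    # integral of C*(x0/x)^alpha from 0 to x_min; converges since alpha < 1
    return C * x0 ** alpha * x_min ** (1.0 - alpha) / (1.0 - alpha)

# example: with C = 0.5 and x0 = x_min = 0.01, the missing piece is small
print(low_x_contribution(0.5, x0=0.01, x_min=0.01))
```

Since $\alpha=0.22<1$, the extrapolated piece vanishes as $x_{min}^{0.78}$ and stays a modest correction to the measured moment.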
\begin{figure}[ht!]
\center
\includegraphics[scale=0.4]{Gamma1pn}
\vspace{-0.3cm}
\caption{\small{The moments $\Gamma_1^p$ (top left), $\Gamma_1^n$
(top right) and the Bjorken integral (bottom left), all without the elastic contribution. The
derivatives at $Q^2 = 0$ are predicted by the GDH sum rule.
In the DIS, the \emph{leading-twist} pQCD evolution is shown by the gray band.
Continuous lines and bands at low $Q^2$ are $\chi$PT predictions.
$\Gamma_2^n$, with and without the elastic contribution, is shown on the
lower right panel, wherein the upper bands are experimental
systematic uncertainties. The lower bands in the figure are the systematic
uncertainties from the unmeasured part below $x_{Bj,min}$.
($\Gamma_2^p$ is not shown since only two points, from E155x and RSS, are presently available.)
The Soffer-Teryaev~\cite{Soffer:2004ip}, Burkert-Ioffe~\cite{Burkert:1992tg}, Pasechnik
\emph{et al.}~\cite{Pasechnik:2010fg} and MAID~\cite{Drechsel:1998hk}
models are phenomenological parameterizations.
}}
\label{fig:gamma1pn}
\end{figure}
\subsubsection{Data and theory comparisons}
At $Q^2 = 0$, the GDH sum rule, Eq.~(\ref{eq:gdh}), predicts $d \Gamma_1/d Q^2 $
(see Fig.~\ref{fig:gamma1pn}). At small $Q^2$, $ \Gamma_1(Q^2) $ can be computed using
$\chi$PT. The comparison between data and $\chi$PT results on moments is given in Table~\ref{xpt-comp}
in which one sees that in most instances, tensions exist between data and calculations of $\Gamma_1$.
\begin{table}
{\small
\caption{Comparison between $\chi$PT results and data for moments. The bold symbols denote moments for which
$\chi$PT was expected to provide robust predictions. ``{\color{blue}{\bf{A}}}" means that data and calculations agree up to at least
$Q^2=0.1$ GeV$^2$, ``{\color{red}{\bf{X}}}" that they disagree and ``-" that no calculation is available.
The $p + n$ superscript indicates either deuteron data without deuteron break-up channel,
or proton+neutron moments added together with neutron information either from D or $^3$He.
\label{xpt-comp}}
\vspace{0.30cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Ref. & $\Gamma_1^p$ & $\Gamma_1^n$ & $\pmb{\Gamma_1^{p-n}}$ & $\Gamma_1^{p+n}$ & $\gamma_0^p$ & $\gamma_0^n$ & $\pmb{\gamma_0^{p-n}}$ & $\gamma_0^{p+n}$ & $\pmb{\delta_{LT}^n}$ & $d_2^n$ \tabularnewline
\hline
\hline
Ji 1999 \cite{Ji:1999pd,Ji:1999mr} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & - & - & - & - & - & - \tabularnewline
\hline
Bernard 2002 \cite{Bernard:2002bs,Bernard:2002pw} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}}\tabularnewline
\hline
Kao 2002 \cite{Kao:2002cp} & - & - & - & - & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}}\tabularnewline
\hline
Bernard 2012 \cite{Bernard:2012hb} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & -\tabularnewline
\hline
Lensky 2014 \cite{Lensky:2014dda} & {\color{red}\bf{X}} & {\color{blue}\bf{A}} & {\color{blue}\bf{A}} & {\color{blue}\bf{A}} & {\color{blue}\bf{A}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{red}\bf{X}} & {\color{blue}\bf{$\sim$ A}} & {\color{blue}\bf{A}}\tabularnewline
\hline
\end{tabular}
}
\vspace{-0.6cm}
\end{table}
The models of Soffer-Teryaev~\cite{Soffer:2004ip}, Burkert-Ioffe~\cite{Burkert:1992tg} and Pasechnik
\emph{et al.}~\cite{Pasechnik:2010fg} agree well with the data, as does the
LFHQCD calculation~\cite{Brodsky:2010ur}.
The Soffer-Teryaev model uses
the weak $Q^2$-dependence of $\Gamma_T=\Gamma_1+\Gamma_2$
to robustly interpolate $\Gamma_T$ between its zero value and known derivative at $Q^2 = 0$
and its known values at large $Q^2$. $\Gamma_1$ is
obtained from $\Gamma_T$ using the BC sum rule,
Eq.~(\ref{eq:bc_noel}), where pQCD radiative corrections and \emph{higher-twists} are accounted for.
%
Pasechnik~\emph{et al.} improved this model by
using for the pQCD and \emph{higher-twist} corrections a strong coupling $\alpha_s$ analytically continued
at low-$Q^2$, which removes the unphysical Landau-pole divergence at $Q^2=\Lambda_s^2$,
and minimizes {\emph{higher-twist} effects}~\cite{Deur:2016tte}.
This extends pQCD calculations to lower $Q^2$ than is typical.
The improved $\Gamma_1$ is continued to $Q^2=0$ by
using $\Gamma_1(0)=0$ and $d\Gamma_1(0)/dQ^2$ from the GDH sum rule.
The Burkert-Ioffe model is based on a parameterization of the resonant
and non-resonant amplitudes~\cite{Burkert:1992yk},
complemented with a DIS parameterization~\cite{Anselmino:1988hn} based on vector dominance.
In LFHQCD, the effective charge $\alpha_{g_1}$ (\emph{viz} the coupling $\alpha_s$ that includes
the pQCD gluon radiations and \emph{higher-twist} effects of $\Gamma_1^{p-n}$~\cite{Deur:2016tte})
is computed and used in the leading order expression of the Bjorken sum to obtain $ \Gamma_1^{p-n}$.
The \emph{leading-twist} $Q^2$-evolution is shown in Fig.~\ref{fig:gamma1pn} (gray bands).
The values $ a_8 $ = 0.579, $g_A $ = 1.267 and $ \Delta \Sigma^p $ = 0.15 ($ \Delta \Sigma^n $ = 0.35)
are used to anchor the $\Gamma_1^{p (n)} $ evolutions,
see Eq.~(\ref{eq:mu2}). For $ \Gamma_1^{p-n} $, $g_A$ suffices to fix the absolute scale. In all
cases, \emph{leading-twist} pQCD follows the data down to surprisingly low $Q^2$,
exhibiting hadron-parton global duality {\it i.e.}, an overall suppression of \emph{higher-twists},
see Sections~\ref{sub:HT Extraction} and~\ref{sec:duality}.
\subsubsection{Results on $\Gamma_2$ and on the BC and ELT sum rules}
\noindent \textbf{Neutron results:}
$ \Gamma_2^n(Q^2)$ from E155x~\cite{Anthony:1999py}, E94-010~\cite{Amarian:2002ar},
E01-012~\cite{Solvignon:2013yun}, RSS~\cite{Slifer:2008xu} and
E97-110~\cite{E97110} is shown
in Fig.~\ref{fig:gamma1pn}. Except for
E155x for which the resonance
contribution is negligible, measurements comprise essentially
the whole resonance region. This region contributes positively and significantly, yielding
$ \Gamma_2^{n,res.} \approx - \Gamma_1^{n,res.}$,
as expected since there, $g_2 \approx -g_1$ (see Section~\ref{sec:g2-g2ww}).
The MAID parameterization (continuous line) agrees well with these data.
The elastic contribution, estimated from the parameterization in Ref.~\cite{Mergell:1995bf},
is of opposite sign and nearly cancels the resonance contribution,
as expected from the BC sum rule $\Gamma_2(Q^2)=0$.
The unmeasured part below $x_{Bj,min}$ is estimated assuming
$ g_2 = g_2^{WW}$, see Eq.~(\ref{eq:g2ww}). (While at \emph{leading-twist} $g_2^{WW}$
satisfies the BC sum rule, $ \int g_2^{WW} dx = 0 $, the low-$x_{Bj}$
contribution is the non-zero partial integral
$ \int_0^{x_{Bj,min}} g_2(Q^2,y) dy = x_{Bj,min}\big[g_2^{WW}(Q^2,x_{Bj,min}) +
g_1(Q^2,x_{Bj,min})\big]$.)
The resulting $ \Gamma_2^n $ fulfills the BC sum rule.
The interesting fact that the elastic contribution nearly cancels that of the resonances
accounts for the sum rule validity at low and moderate $Q^2$.
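The Wandzura-Wilczek bookkeeping used above can be verified numerically with a toy input: at leading twist $\int g_2^{WW}\,dx=0$, and the unmeasured low-$x_{Bj}$ piece equals $x_{Bj,min}\big[g_2^{WW}(x_{Bj,min})+g_1(x_{Bj,min})\big]$. Here $g_1(x)=x(1-x)^3$ is an arbitrary smooth toy function, not a fit to data:

```python
import numpy as np

# Numerical check, with a toy g1, that g2WW satisfies the BC sum rule
# (its full x-integral vanishes) and that the low-x piece equals
# x_min*[g2WW(x_min) + g1(x_min)], the expression used in the text.
# g1(x) = x*(1-x)^3 is an arbitrary smooth toy function.
x = np.linspace(0.0, 1.0, 200001)
g1 = x * (1 - x) ** 3
# g2WW(x) = -g1(x) + int_x^1 g1(y)/y dy; for this toy, g1(y)/y = (1-y)^3,
# so the inner integral is (1-x)^4/4 in closed form.
g2ww = -g1 + (1 - x) ** 4 / 4.0

def trap(y, t):
    # composite trapezoid rule (avoids version-dependent np.trapz/trapezoid)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

full = trap(g2ww, x)                 # BC sum rule: should be ~0
i = np.searchsorted(x, 0.1)          # index of x_min ~ 0.1
partial = trap(g2ww[: i + 1], x[: i + 1])
identity = x[i] * (g2ww[i] + g1[i])  # closed-form low-x contribution
print(full, partial, identity)
```

Both checks hold to numerical precision, consistent with the exact relation $\frac{d}{dx}\big[x\,(g_2^{WW}+g_1)\big]=g_2^{WW}$.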
\noindent \textbf{Proton results:}
The E155x proton result ($Q^2 = 5$~GeV$^2$)~\cite{Anthony:1999py} agrees with the BC sum rule:
$ \Gamma_2^p = -0.022 \pm 0.022$ where, as for the JLab data, a 100\% uncertainty is assumed
on the unmeasured low-$x_{Bj}$ contribution estimated to be 0.020 using Eq.~(\ref{eq:g2ww}).
Neglecting \emph{higher-twists} for the low-$x_{Bj}$ extrapolation, RSS yields
$\Gamma_2^p = \big(-6 \pm 8\,(\mbox{stat}) \pm 20\,(\mbox{syst})\big)\times10^{-4}$ at
$Q^2 = 1.28$~GeV$^2$~\cite{Slifer:2008xu}, which agrees with the BC sum rule.
Finally $g_2^p$ has been measured at very low $Q^2$~\cite{g2p},
from which $\Gamma_2^p$ should be available soon.
\noindent \textbf{Conclusion:}
Two conditions for the BC sum rule validity are that
1) $ g_2 $ is well-behaved, so that $ \Gamma_2 $ is finite, and
2) $ g_2 $ is not singular at $x_{Bj}= 0 $.
The sum rule validation implies that these conditions
are satisfied. Moreover, since $ g_2^{WW}$ fulfills the sum rule at large $Q^2$,
the same conclusions apply to the twist~3 contribution describing the quark-gluon correlations. Finally, since
the sum rule seems verified from $Q^2\sim0$ to 5 GeV$^2$
and since the contributions of \emph{twist}~$\tau$ are suppressed as $Q^{2-\tau}$,
the conclusion that $g_2$ is regular should hold for every
term of the twist series that represents $g_2$.
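Schematically, writing the moment as a twist series with generic coefficients $c_\tau$,
\begin{equation}
\Gamma_2(Q^2)=\sum_{\tau\geq2}\frac{c_\tau}{Q^{\tau-2}} ,
\nonumber
\end{equation}
the vanishing of $\Gamma_2$ over a continuous range of $Q^2$ forces each $c_\tau$ to vanish separately, since different powers of $Q^2$ are linearly independent; hence the regularity requirement on $g_2$ applies twist by twist.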
\noindent \textbf{The Efremov-Leader-Teryaev sum rule:}
The ELT sum rule, Eq.~(\ref{Eq:ELT SR}), is compatible with the current world data. However,
the recent global PDF fit KTA17~\cite{Shahri:2016uzl} indicates that the sum rule for $n=2$
and twist~2 contribution only is violated at $Q^2=5$~GeV$^2$,
finding $\int_0^1 x \big(g_1 + 2g_2 \big) dx = 0.0063(3) $ rather
than the expected null sum. If confirmed, this would indicate a \emph{higher-twist} contribution even at $Q^2=5$~GeV$^2$.
\vspace{-0.1cm}
\subsection{Generalized spin polarizabilities $\gamma_0$, $\delta_{LT}$}
\vspace{-0.1cm}
Generalized spin polarizabilities offer another test of strong QCD calculations.
Contrary to $\Gamma_1$ or $\Gamma_2$, the kernels of the polarizability integrals,
Eqs.~(\ref{eq:gamma_0}) and (\ref{eq:delta_LT SR}), have a $1 / \nu^2$ factor that
suppresses the low-$x_{Bj}$ contribution.
Hence, polarizability integrals converge faster and have smaller low-$x_{Bj}$ uncertainties.
At low $Q^2$, generalized polarizabilities have been calculated
using $\chi$PT, see Table~\ref{xpt-comp}.
It is difficult to include resonances, in particular the $\Delta(1232)~3/2^+$, in these calculations.
It was however noticed that this excitation is suppressed in $\delta_{LT}$, making $\delta_{LT}$ ideal
for testing $\chi$PT calculations in which the $\Delta(1232)~3/2^+$ is absent or included only
phenomenologically~\cite{Kao:2002cp, Bernard:2002bs}.
Measurements of $\gamma_0$ and $\delta_{LT}$ are available for the neutron (E94-010 and E97-110)
for $0.04<Q^2<0.9$~GeV$^2$~\cite{Amarian:2002ar, E97110}.
JLab CLAS results are also available for $\gamma_0$ for the proton, neutron and
deuteron~\cite{Prok:2008ev, Dharmawardane:2006zd, Fersch:2017qrq, Adhikari:2017wox}
for approximately $0.02<Q^2<3$~GeV$^2$.
\vspace{-0.1cm}
\subsubsection{Results on $\gamma_0$}
\vspace{-0.1cm}
The values of $ \gamma_0^n$ extracted from $^3$He~\cite{Amarian:2002ar}
and from D~\cite{Dharmawardane:2006zd} agree well with each other.
The MAID phenomenological model~\cite{Drechsel:1998hk} agrees with the
$\gamma_0^n$ data, and so do the $\chi$PT results (Table~\ref{xpt-comp}), except
the recent Lensky \emph{et al.} calculation~\cite{Lensky:2014dda}.
For $ \gamma_0^p$, the situation is reversed: only Ref.~\cite{Lensky:2014dda} agrees well
with the data, while the others (including MAID) do not.
This problem motivated an isospin analysis of $ \gamma_0$~\cite{Deur:2008ej} since, {\it e.g.},
axial-vector meson exchanges in the $t$-channel (a short-range interaction) that are not included
in the computations could be important for only one of the isospin components of $\gamma_0$.
$\chi$PT calculations disagree with $ \gamma_0^{p +n} $ but MAID agrees.
Although the $\Delta(1232)~3/2^+$ is suppressed in $ \gamma_0^{p-n}$, $\chi$PT disagrees with the data.
Thus, the disagreement on $ \gamma_0^p $ and $ \gamma_0^n $ cannot be assigned to the $\Delta(1232)~3/2^+$.
MAID also disagrees with $ \gamma_0^{p-n} $.
\vspace{-0.1cm}
\subsubsection{The $\delta_{LT}$ puzzle}
\vspace{-0.1cm}
Since the $\Delta(1232)~3/2^+$ is suppressed in $\delta_{LT}$, it was
expected that its $\chi$PT calculation would be robust.
However, the $\delta_{LT}^n$ data~\cite{Amarian:2002ar} disagreed
with the then available $\chi$PT results.
This discrepancy is known as the ``$\delta_{LT}$ puzzle''.
As for $\gamma_0$, an isospin analysis of $ \delta_{LT}$ may help resolve this puzzle.
The needed $ \delta_{LT}^p $ data are becoming available~\cite{g2p}.
The second generation of $\chi$PT calculations on
$ \delta_{LT}^n$~\cite{Bernard:2012hb, Lensky:2014dda} agrees better with the data.
At larger $Q^2$ (5~GeV$^2 $), the E155x data~\cite{Anthony:1999py} agree with a quenched LGT
calculation~\cite{Gockeler:1995wg, Gockeler:2000ja}.
At large $Q^2$, generalized spin polarisabilities are expected to scale as
$1/ Q^{6}$, with the usual additional softer dependence from pQCD radiative
corrections~\cite{Drechsel:2002ar, Drechsel:2004ki}.
Furthermore, the Wandzura-Wilczek relation, Eq.~(\ref{eq:g2ww}), relates $\delta_{LT}$ to $\gamma_0$:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
\delta_{LT}(Q^2)\to{\frac{1}{3}}\gamma_0(Q^2)\ \ \ {\rm if~} g_2 \approx g_2^{WW}
\label{eq:relation gam0 deltaLT}
\end{equation}
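This limit can be checked schematically: up to a common kinematic factor,
$\delta_{LT}\propto\int_0^1 x^2(g_1+g_2)\,dx$ while, at large $Q^2$,
$\gamma_0\propto\int_0^1 x^2 g_1\,dx$ (target-mass terms neglected).
Interchanging the order of integration in Eq.~(\ref{eq:g2ww}) gives
\begin{equation}
\int_0^1 x^2 g_2^{WW}dx=-\int_0^1 x^2 g_1 dx+\int_0^1\frac{g_1(y)}{y}\Big(\int_0^y x^2 dx\Big)dy=-\frac{2}{3}\int_0^1 x^2 g_1 dx ,
\nonumber
\end{equation}
so that $\int x^2(g_1+g_2^{WW})dx=\frac{1}{3}\int x^2 g_1 dx$, {\it i.e.}, $\delta_{LT}\to\gamma_0/3$ when $g_2\approx g_2^{WW}$.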
Since the available data are mostly at
$Q^2 <1 $~GeV$^2$, this relation and the scaling law have not been tested yet.
Furthermore, the signs of the $\gamma_0 $ and $\delta_{LT}$ data disagree with
Eq.~(\ref{eq:relation gam0 deltaLT}). These facts are not worrisome:
for $ \Gamma_1 $ and $ \Gamma_2 $,
scaling is observed for $Q^2 \gtrsim1 $~GeV$^2$, when the overall effect of \emph{higher-twists}
decreases. For higher moments, resonances contribute more, so scaling should begin at larger $Q^2$.
The violation of Eq.~(\ref{eq:relation gam0 deltaLT}) is consistent with the fact that
$ g_2 \neq g_2^{WW}$ in the resonance domain, see Section~\ref{sub:g2 in res}.
\subsection{$d_2$ results}
Another combination of second moments, $ d_2$ (Eqs.~(\ref{eq:d2op}) and (\ref{eq:d2mon})),
is particularly interesting because it is interpreted as part of the transverse confining force acting on
quarks~\cite{Burkardt:2008ps, Abdallah:2016xfk}, see Section~\ref{color pol.}.
Furthermore, $d_2$ offers another possibility to study the nucleon spin structure at
large $Q^2$ since it has been calculated by
LGT~\cite{Gockeler:1995wg, Gockeler:2000ja, Gockeler:2005vw} and modeled with
LC wave functions~\cite{Braun:2011aw}.
$d_2$ can also be used
to study the transition between large and small $Q^2$. $ \overline{d_2}(Q^2)$ is shown in Fig.~\ref{fig:d2}
(the bar over $ d_2 $ indicates that the elastic contribution is excluded).
The experimental results are from JLab (neutron from
$^3$He~\cite{Amarian:2002ar, Zheng:2003un, Solvignon:2013yun, Posik:2014usi}
and from D~\cite{Slifer:2008xu}, and proton~\cite{Wesselmann:2006mw}), from
SLAC (neutron from D and proton)~\cite{Anthony:1999py}, and from global analyses
(JAM~\cite{Jimenez-Delgado:2013boa, Sato:2016tuz}, KTA17~\cite{Shahri:2016uzl}), which
contain only DIS contributions.
\subsubsection{Results on the neutron}
At moderate $Q^2$, $ \overline{d_2}^n$ is positive and reaches a maximum at
$Q^2 \gtrsim0.4 $~GeV$^2$. Its sign is uncertain at large $Q^2$.
At low $Q^2$ the comparison with $\chi$PT is summarized in Table~\ref{xpt-comp}.
MAID agrees with the data.
That MAID and the RSS datum (both covering only the resonance region) match
the DIS-only global fits and E155x datum suggests
that hadron-parton duality is valid for $d_2^n$, albeit with large uncertainties.
The LGT~\cite{Gockeler:1995wg, Gockeler:2000ja, Gockeler:2005vw}, Sum Rule
approach~\cite{Balitsky:1989jb}, Center-of-Mass bag model~\cite{Song:1996ea}
and Chiral Soliton model~\cite{Weigel:1996jh} all yield a small $d_2^n$ at $Q^2 > 1$~GeV$^2$, which agrees
with data. At these large $Q^2$, the data precision is still insufficient to discriminate between these predictions.
The negative $d_2^n$ predicted with a LC model~\cite{Braun:2011aw} disagrees with the data.
\subsubsection{Results on the proton}
Proton data are scarce, with a datum from RSS~\cite{Wesselmann:2006mw} and
one from E155x~\cite{Anthony:1999py}. In Fig.~\ref{fig:d2}, the RSS point was evolved to the E155x $Q^2$
assuming the $ 1/Q $-dependence expected for a twist~3 dominated quantity (neglecting
the weak log dependence from pQCD radiation). The E155x and RSS results agree although
RSS measured only the resonance contribution. As for $d_2^n$, this suggests that hadron-parton duality
is valid for $d_2^p$. However, this conclusion is at odds with the mismatch between the (DIS-only) JAM global PDF
fit~\cite{Sato:2016tuz} and the (resonance-only) result from RSS.
\subsubsection{Discussion}
Overall, $ \overline{d_2}$ is small compared to the twist~2 term ($| \Gamma_1| \approx 0.1$
typically at $Q^2 = 1 $~GeV$^2$, see Fig.~\ref{fig:gamma1pn}) or to the twist~4 term ($f_2 \approx 0.1 $, see Fig.~\ref{fig:f2}).
This smallness was predicted by several models.
The high-precision JLab experiments measured a clearly non-zero $ \overline{d_2} $.
More data for $ \overline{d_2^p}$ are needed and will be provided shortly at low $Q^2$~\cite{g2p}
and in the DIS~\cite{Rondon:2015mya}, see Table~\ref{table_exp}.
Then, the 12 GeV upgrade of JLab will provide $ \overline{d_2}$ in the DIS
with refined precision, in particular with the SoLID detector~\cite{SoLID}.
\begin{figure}[ht!]
\center
\includegraphics[scale=0.36]{d2n.eps}
\hspace{0.5cm}
\includegraphics[scale=0.36]{d2p.eps}
\vspace{-0.5cm}
\caption{\small{$\overline{d_2}$ data from SLAC, JLab and PDF global analyses, compared to
LGT~\cite{Gockeler:1995wg, Gockeler:2000ja, Gockeler:2005vw},
$\chi$PT~\cite{Bernard:2002pw, Lensky:2014dda}
and models~\cite{Braun:2011aw, Balitsky:1989jb,Song:1996ea,Weigel:1996jh}.
Left panel: neutron data (the inner error bars are statistical; the outer ones give the
systematic and statistical uncertainties added in quadrature). Right panel: proton data. }}
\label{fig:d2}
\end{figure}
\subsection{ \emph{Higher-twist} contributions to $\Gamma_1$, $g_1$ and $g_2$ \label{sub:HT Extraction}}
Knowledge of \emph{higher-twists} is important
since, for inclusive lepton scattering, they are the
next nonperturbative distributions beyond the PDFs, which they correlate.
\emph{Higher-twists} thus underlie the parton-hadron transition, {\it i.e.}, the process of
strengthening the quark binding as the probed distance increases. In fact, some
\emph{higher-twists} are interpreted as confinement forces~\cite{Burkardt:2008ps, Abdallah:2016xfk}.
Furthermore, knowing \emph{higher-twists} permits one to set the limit of applicability of pQCD and
to extend it to lower $Q^2$, see {\it e.g.}, Massive Perturbation Theory~\cite{Pasechnik:2010fg, Natale:2010zz}.
Despite their phenomenological importance, \emph{higher-twists} have
been hard to measure accurately because they are often surprisingly small.
\subsubsection{Leading and higher-twist analysis of $\Gamma_1$ \label{HT} }
%
\begin{wrapfigure}{r}{0.5\textwidth}
\center
\vspace{-1.65cm}
\includegraphics[scale=0.4]{f2.eps}
\vspace{-0.45cm}
\caption{\label{fig:f2}\small{Top: twist coefficients $\mu_i$ {\it vs}. $i$.
The lines linking the points show the oscillatory behavior.
Bottom: twist~4 $f_2$. Newer results ({\it e.g.}, EG1dvcs) include the older data ({\it e.g.}, EG1a).}}
\vspace{-0.5cm}
\end{wrapfigure}
The \emph{higher-twist} contribution to $\Gamma_1$ can be obtained by fitting its data with
a function conforming to Eqs.~(\ref{eq:Gamma1})-(\ref{eq:mu2}) and (\ref{eq:genBj})-(\ref{eq:mu4}).
The perturbative series is truncated to an order relevant to the data accuracy.
Once $\mu_4 $ is extracted, the pure twist~4 matrix element $ f_2 $ is obtained by subtracting $ a_2 $ (twist~2)
and $d_2 $ (twist~3) from Eq.~(\ref{eq:mu4}).
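If Eq.~(\ref{eq:mu4}) is written in the commonly used form $\mu_4=\frac{M^2}{9}\big(a_2+4d_2+4f_2\big)$ (consistent with the $M^2/9$ factor discussed below), this subtraction reads
\begin{equation}
f_2=\frac{9}{4M^2}\,\mu_4-\frac{a_2}{4}-d_2 .
\nonumber
\end{equation}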
For $ \Gamma_1^{p,n} $, $\mu_2^{p,n} $ is set by fitting high-$Q^2$ data, {\it e.g.}, $Q^2 \ge 5 $~GeV$^2$,
and assuming that \emph{higher-twists} are negligible there. For
$ \Gamma_1^{p-n} $, $\mu_2^{p-n} $ is set by $g_A =1.2723(23)$~\cite{Olive:2016xmw}.
The resulting $\mu_2^{p,n} $, together with $ a_8 $ from the hyperons $\beta$-decay, yield
$ \Delta \Sigma = 0.169 \pm 0.084 $ for the proton and $\Delta \Sigma = 0.35 \pm 0.08 $
for the neutron~\cite{Fersch:2017qrq, Meziani:2004ne, Deur:2005jt}. The discrepancy
may come from the low-$x_{Bj }$ part of
$ \Gamma_1^n $, which is still poorly constrained, as the COMPASS deuteron data~\cite{Alexakhin:2006oza}
suggest. Specifically, it may be the low-$x_{Bj }$ contribution to the isoscalar quantity $ \Gamma_1^{n+p}$,
since $ \Gamma_1^{p-n} $ agrees well with the Bjorken sum rule.
Another possibility is a SU(3)$_f$ violation.
The $ \Delta \Sigma$ obtained from global analyses (see Section~\ref{nucleon spin structure at high energy})
mix the proton and neutron data and agree with the averaged value of $ \Delta \Sigma^p $ and $ \Delta \Sigma^n $.
Fit results~\cite{Fersch:2017qrq, Posik:2014usi, Deur:2004ti, Deur:2008ej, Meziani:2004ne, Deur:2005jt}
are shown and compared to available
calculations~\cite{ Ji:1997gs, Balitsky:1989jb, Ioffe:1981kw, Lee:2001ug, Sidorov:2006vu}
in Fig.~\ref{fig:f2}. There are no predictions yet for \emph{twists} higher than $ f_2 $.
We note the sign alternation between $ \mu_2 $, $ \mu_4 $ and $ \mu_6 $.
All higher power corrections are folded into
$\mu_8$, which is thus not a clean term and does not follow the alternation.
This alternation reduces the overall \emph{higher-twist} effect and could explain the global
quark-hadron spin duality (see Section~\ref{sec:duality}). The sign alternation is opposite for proton and neutron,
as expected from isospin symmetry, see Eq.~(\ref{eq:mu2}) in which the non-singlet $ g_A/ 12 \approx 0.1 $ dominates the
singlet terms $ \Delta \Sigma / 9 \approx 0.03 $ and $ a_8 / 36 \approx 0.008 $.
The discrepancy between $ \Delta \Sigma^p$ and $\Delta \Sigma^n $ explains why the value of $ f_2 $
extracted from $ \Gamma_1^{p-n}$ differs from the $f_2$ values extracted individually.
Indeed, $ \Delta \Sigma $ cancels out in the Bjorken sum, whose derivation does not assume SU(3)$_f$ symmetry.
Although the overall effect of \emph{higher-twists} is small at $Q^2 > 1 $~GeV$^2$, $ f_2 $ itself is large:
$|f_2^p| \approx 0.1$, compared to $\mu_2^p=0.105(5)$;
$f_2^n \approx 0.05$ for $|\mu_2^n|=0.023(5)$;
$|f_2^{p-n}| \approx 0.1$ for $\mu_2^{p-n} = 0.141(11)$.
These large values conform to the intuition that nonperturbative effects should be important at moderate $Q^2$.
The smallness of the total \emph{higher-twist} effect is due to the factor $M^2/9 \approx 0.1$ in Eq.~(\ref{eq:mu4}),
and to the $\mu_i$ alternating signs. Such oscillation can be understood with vector meson
dominance~\cite{Weiss_pc}.
\subsubsection{Color polarizabilities and confinement force \label{color pol.}}
Electric and magnetic color polarizabilities can be determined using Eq.~(\ref{eq:chi}).
For the proton,
$\chi_E^p = -0.045(44)$ and $\chi_B^p = 0.031(22)$~\cite{Fersch:2017qrq}.
For the neutron, $\chi_E^n =0.030(17) $, $\chi_B^n =-0.023(9) $. The Bjorken sum data yield
$\chi_E^{p-n} = 0.072(78)$, $\chi_B^{p-n} = -0.020(49)$.
These values are small, and those of the proton and neutron have opposite signs. Since $ f_2 \gg d_2 $, this
reflects the dominance of the non-singlet term $g_A$.
The electric and magnetic Lorentz transverse confinement forces are proportional
to the color polarizabilities~\cite{Burkardt:2008ps, Abdallah:2016xfk}:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
F^y_E=-\frac{M^2}{4}\chi_E , ~~~F_B=-\frac{M^2}{2}\chi_B.
\label{eq:color forces}
\end{equation}
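For illustration (taking $M\approx0.94$~GeV, {\it i.e.}, $M^2\approx0.88$~GeV$^2$, and the proton values quoted above),
\begin{equation}
F^y_E\approx-\frac{0.88}{4}\times(-0.045)\approx+1.0\times10^{-2}~\mbox{GeV}^2 ,~~~
F_B\approx-\frac{0.88}{2}\times0.031\approx-1.4\times10^{-2}~\mbox{GeV}^2 .
\nonumber
\end{equation}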
Their size of a few $10^{-2}$~GeV$^2$ can be compared to the string tension
$\sigma_{str} = 0.18$~GeV$^2$ obtained from heavy quarkonia. Several coherent processes
prominent for the proton and neutron, {\it e.g.}, the $\Delta(1232)~3/2^+$, are nearly nonexistent for
$ \Gamma_1^{p-n}$~\cite{Burkert:2000qm}. This may explain why $ \Gamma_1^{p-n}$ is suited to extract
$\alpha_s$ at low $Q^2$~\cite{Deur:2016tte, Deur:2005cf}.
\subsubsection{\emph{Higher-twist} studies for $g_1$, $A_1$, $g_2$ and $A_2$\label{sec:g2-g2ww}}
\begin{wrapfigure}{r}{0.52\textwidth}
\vspace{-2.1cm}
\center
\includegraphics[scale=0.39]{E97103_g1_g2.eps}
\vspace{-0.6cm}
\caption{\label{fig:E97-103 g2}\footnotesize{Top: $g_1^n(Q^2)$ from E97103
(symbols). The inner error bars give the
statistical uncertainty while outer bars are the systematic and statistical uncertainties added in quadrature.
The continuous line is a global fit of the world data on $g_1^n$~\cite{Bluemlein:2002be},
with its uncertainty given by the hatched band.
Bottom: Corresponding $g_2^n$ data with various models and $g_2^{WW}$ computed from the global fit on $g_1^n$.
The data are at $x_{Bj} \approx 0.2$.
}}
\vspace{-1.8cm}
\end{wrapfigure}
\vspace{-0.2cm}
\emph{Higher-twists} and their $x_{Bj}$-dependence have been extracted from spin structure
data~\cite{Anthony:1999py, Zheng:2003un,Kramer:2005qe},
in particular by global fits~\cite{Leader:2006xc, Blumlein:2010rn, Leader:2002ni}.
More \emph{higher-twist} data are expected soon~\cite{Rondon:2015mya}.
\noindent \textbf{Study of $g_2$ in the DIS:}
We consider first $g_2$ data in the DIS. Lower $W$ or $Q^2$ data are discussed afterwards.
The Wandzura-Wilczek term $g_2^{WW}$, Eq.~(\ref{eq:g2ww}), is the twist~2 part of $ g_2$.
Nevertheless, due to the asymmetric part of the axial matrix element entering the
OPE~\cite{Manohar:1992tz, Jaffe:1990qh},
it contributes alongside the twist~3 part of $ g_2$, similarly to {\it e.g.}, the twist-2 term $a_2$ and
twist-3 term $d_2$ contributing alongside the twist-4 term $f_2$ in Eq.~(\ref{eq:mu4}).
Indeed, in Eq.~(\ref{eq:sigmapar}), $g_2$ is suppressed as $Q/(2E) = 2Mx_{Bj}/Q$ compared to $g_1$.
Just as there is no reason in $\mu_4$ to have
$ a_2 \gg d_2 \gg f_2 $ (which is indeed
not the case), there is no obvious reason to have
$g_2^{WW}\gg g_2^{twist~3}$ and thus $g_2\approx g_2^{WW}$.
This is, however, the empirical observation: all the $g_2^{p,n}$
DIS data (SMC~\cite{Adeva:1993km}, E143~\cite{Abe:1995dc},
E154~\cite{Abe:1997qk} and E155x~\cite{Anthony:1999py},
E99-117~\cite{Zheng:2003un}, E97-103~\cite{Kramer:2005qe}, E06-104 \cite{Posik:2014usi} and
HERMES~\cite{Airapetian:2011wu}) are compatible with $g_2^{WW}$.
Below $Q^2=1$~GeV$^2$, E97-103~\cite{Kramer:2005qe} did observe that
$g_2^n>g_2^{WW,n}$, see Fig.~\ref{fig:E97-103 g2}.
Its data cover $0.55<Q^2<1.35$~GeV$^2$, at a fixed $x_{Bj} \approx 0.2$ to isolate
the $Q^2$-dependence. The deviation seems
to decrease with $ Q^2 $ as expected for \emph{higher-twists}.
Models~\cite{Braun:2011aw, Weigel:1996jh, Wakamatsu:2000ex, Stratmann:1993aw} predict a
negative contribution from \emph{higher-twists}
while the data indicate none, or a positive one for $Q^2 \lesssim 1$~GeV$^2$.
The \emph{leading-twist} part of $g_1$, \emph{viz.}\ $g_1^{LT}$, is needed to form $ g_2^{WW}$. To check the
PDF~\cite{Bluemlein:2002be} used to compute $g_1^{LT}$, $ g_1$ was measured by
E97-103, see Fig.~\ref{fig:E97-103 g2}. No \emph{higher-twist} is
seen: $g_1 \approx g_1^{LT}$. However,
at such $x_{Bj}$ and $Q^2$, the LSS global fit~\cite{Leader:2006xc}
saw a twist-4 contribution $h/Q^2=0.047(29)$ at $Q^2=1$~GeV$^2$, which E97-103 should have seen.
While the large uncertainties preclude firm conclusions, this implies
either a tension between LSS and E97-103, or that kinematical and dynamical
\emph{higher-twists} compensate each other.
The BC sum rule, Eq.~(\ref{eq:bc}), implies a zero-crossing of $g_2(x_{Bj})$.
The E99-117~\cite{Zheng:2003un} and E06-104~\cite{Posik:2014usi} DIS
data suggest it is near $x_{Bj} \approx 0.6$ for the neutron.
E143~\cite{Abe:1995dc}, E155x~\cite{Anthony:1999py} and
HERMES~\cite{Airapetian:2011wu} indicate it is between $0.07<x_{Bj}<0.2$ for the proton.
\vspace{-0.5cm}
\paragraph{Study of $g_2$ in the resonance domain} \label{sub:g2 in res}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-1.cm}
\center
\includegraphics[scale=0.43]{E94010_g1_g2_2}
\vspace{-0.6cm}
\caption{\label{fig:g1g2 E94010}\small{The symmetry between $g_1$ and $g_2$.
(JLab $^3$He data from E94-010~\cite{Amarian:2002ar}.)
}}
\end{wrapfigure}
So far, $g_2$ DIS data have been discussed. Many data at $W<2$~GeV and
$6 \times 10^{-3} <Q^2< 3.3$~GeV$^2$ also exist.
Being derived using the OPE, the Wandzura-Wilczek relation, Eq.~(\ref{eq:g2ww}), need
not apply there. Yet, it is
instructive to compare $ g_2 $ and $ g_2^{WW}$ in this region. (In fact, such a comparison was already
implicit in the discussion of $d_2(Q^2)$, since $d_2=\int x^2[g_2-g_2^{WW}]dx$.)
Proton and deuteron $ g_2$ data are available from the RSS experiment at
$\left\langle Q^2\right\rangle =1.3$~GeV$^2$ and for $0.3 \leq x_{Bj} \leq 0.8$~\cite{Wesselmann:2006mw, Slifer:2008xu}.
The $x_{Bj}$-dependences of $g_2$ and $g_2^{WW*}$ (the $^*$ means it is formed using $g_1$
measured by RSS and thus is not leading \emph{twist}) are similar except that generally
$ |g_2^p| < |g_2^{WW*,p}|$, while $ |g_2^n| > |g_2^{WW*,n}|$.
These inequalities indicate either \emph{higher-twist} effects or coherent resonance
contributions. The ranks and types of the \emph{higher-twists} are unclear since $ g_2^{WW*}$
itself contains \emph{higher-twists}, whereas its OPE expression is twist~2.
A similar study on $ g_2^{^3He}$ from E97-110~\cite{E97110} was done
for $6 \times 10^{-3} <Q^2< 0.3$~GeV$^2$.
Again, $ g_2^{^3He}$
is close to $ g_2^{WW*,^3He}$.
Their difference may come from \emph{higher-twists} or
coherence effects, but now also possibly from nuclear effects.
Resonance data on $g_2^{^3He}$ are also available from E01-012~\cite{Solvignon:2013yun} and were
compared to $g_2^{WW,^3He}$ computed at \emph{leading-twist}.
The result is that $g_2^{WW,^3He}$ provides an accurate approximation of
$ g_2^{^3He}$, maybe facilitated by the smearing of resonances in nuclei. Such an analysis amounts
to assessing the size of the twist~3 and higher contributions to $g_2$, neglecting structures due to resonances. It also tests hadron-parton
spin-duality for $^3$He, see Section~\ref{sec:duality}.
A feature of the $g_1$ and $g_2$ resonance data is the symmetry around 0 of their
$x_{Bj}$-behavior, see Fig.~\ref{fig:g1g2 E94010}.
It is observed for the proton~\cite{Wesselmann:2006mw} and for
$^3$He~\cite{Amarian:2002ar, Kramer:2005qe, Solvignon:2013yun, E97110}.
DIS data do not display the symmetry. It arises from the smallness of ${\sigma}_{LT}'$:
since ${\sigma}_{LT}'\propto(g_1+g_2)$, then $g_1 \approx -g_2$. In particular,
for the $\Delta(1232)~3/2^+$, ${\sigma}_{LT}'\approx0$ because the dipole component $M_{1+}$
dominates the nucleon-$\Delta$ transition.
This holds at low $Q^2$ where $M_{1+} \gg E_{1+}$ and $S_{1+}$. At larger $Q^2$,
another reason arises: resonances being at high $x_{Bj} $,
$\int_{x_{Bj}}^1(g_1/y )dy$
in Eq.~(\ref{eq:g2ww}) is negligible and since $ g_2^{WW} \approx g_2$, then $g_2 \approx -g_1$.
\subsection{Study of the hadron-parton spin duality \label{sec:duality}}%
Hadron-parton duality is the observation that the DIS measurement of a structure function appears as a precise
average of its measurement in the resonance domain.
This coincidence can be understood as a dearth of dynamical \emph{higher-twists}.
Duality is thus related to the study of parton correlations.
In the last two decades precise data were gathered to test duality on $g_1$.
Duality on $ g_1^p $, $ g_1^n $, $ g_1^d $ and $ g_1^{^3He}$ has been studied using the SLAC and JLab data from
E143~\cite{Abe:1994cp},
E154~\cite{Abe:1997cx,Abe:1997qk},
E155~\cite{Anthony:1999rm,Anthony:1999py},
E94010~\cite{Amarian:2002ar},
E97-103~\cite{Kramer:2005qe},
E99-117~\cite{Zheng:2003un},
E01-012~\cite{Solvignon:2013yun, Solvignon:2008hk} (which was dedicated to studying spin-duality),
EG1b~\cite{Dharmawardane:2006zd, Prok:2008ev, Bosted:2006gp, Fersch:2017qrq} and
RSS~\cite{Wesselmann:2006mw, Slifer:2008xu}.
The $x_{Bj} $-value at which duality appears depends on $Q^2$. At low $Q^2$, duality is
violated around the $\Delta(1232)~3/2^+$. This is expected since there, $ g_1 < 0$ due to
the $M_{1+}$ transition dominance; see the discussions about $\sigma_{TT}$ on page~\pageref{sub:sigmaTT}
and about the $g_1$ and $g_2$ symmetry on page~\pageref{fig:g1g2 E94010}.
(The discussion applies to $ g_1 $ because
$\sigma_{TT}\propto(g_1-\gamma^2g_2)\approx g_1(1+\gamma^2)\propto g_1$
%
at the $\Delta(1232)~3/2^+$.) Above $Q^2 = 1.2$~GeV$^2$ duality seems to be valid at all $x_{Bj}$.
Duality's onset for $ g_1^d $ and $ g_1^{^3He}$ appears at smaller $Q^2$
than for $ g_1^p $ as expected, since duality
is aided by the nucleon Fermi motion inside a composite nucleus.
Spin-duality was also studied with the $ A_1 $ and $ A_2 $ asymmetries, using the SLAC, HERMES~
\cite{Airapetian:2002rw} and JLab data. Duality in $ A_1 $ arises for $Q^2 \gtrsim2.6 $~GeV$^2$.
At lower $Q^2$, it is invalidated by the $\Delta(1232)~3/2^+$.
The $A_1^{^3 He}$ $Q^2$-dependence is weak for both DIS
and resonances (except near the $\Delta(1232)~3/2^+$).
This is expected in DIS since $ A_1 \approx g_1 / F_1 $ and $ g_1 $ and $ F_1 $
have the same $Q^2$-dependence at LO of \emph{DGLAP} and \emph{leading-twist}.
The weak $Q^2$-dependence in the resonances signals duality.
Duality in $ A_1 $ seems to arise at greater $Q^2$
than for $ g_1 $. Duality in $ A_2 $ arises at lower $Q^2$ than
for $ A_1 $ because the $\Delta(1232)~3/2^+$ is suppressed in $ A_2 $, since $ A_2 \propto \sigma_{LT}' $.
The similar $Q^2$- and $x_{Bj}$-dependences of the DIS and
resonance structure functions discussed so far are called ``local duality''. ``Global duality''
considers the moments. It is tested by forming the partial moments
$\widetilde{\Gamma}^{res}$ integrated only over the resonances.
They are compared to $\widetilde{\Gamma}^{DIS}$ moments covering the
same $x_{Bj}$ interval and formed using \emph{leading-twist} structure functions.
$\widetilde{\Gamma}^{DIS}$ is corrected for pQCD radiation and kinematical \emph{twists}.
Global duality has been tested on $\widetilde{\Gamma}_1^p$
and $\widetilde{\Gamma}_1^d$~\cite{Bosted:2006gp}, and
on $\widetilde{\Gamma}_1^n$ and $\widetilde{\Gamma}_1^{^{3}\textnormal{He}}$~\cite{Solvignon:2013yun}.
For the proton, duality arises for $Q^2 \gtrsim1.8 $~GeV$^2$ or $Q^2 \gtrsim1.0 $~GeV$^2$ if the elastic reaction
is included. For the deuteron, $^3$He and neutron (extracted from the previous nuclei), duality arises earlier,
as expected from Fermi motion.
\subsection{Nucleon spin structure at high energy \label{nucleon spin structure at high energy}}
In this section we will discuss the picture of the nucleon spin structure painted by both high-energy experiments and theory.
The PDFs quoted here are for the proton. The neutron PDFs should be nearly identical after SU(2) isospin symmetry rotation.
\subsubsection{General conclusions}
The polarized inclusive DIS experiments from SLAC, CERN and DESY laid the foundation
for our understanding of the nucleon spin structure and showed that:
\noindent$\bullet$ The strong force is well described by pQCD,
even when spin degrees of freedom are accounted for. Since QCD is the accepted paradigm,
inclusive, doubly polarized DIS experiments provided an important
test of the theory. For example, the verification of the Bjorken sum rule, Eq.~(\ref{eq:genBj}), has played a central role.
To emphasize this, one can recall the oft-quoted statement of Bjorken~\cite{Bjorken:1996dc}:
``Polarization data has often been the graveyard of fashionable theories. If theorists had their way,
they might well ban such measurements altogether out of self-protection.''
\noindent$\bullet$ QCD's fundamental quanta, the quarks and gluons,
and their OAM should generate the nucleon spin,
see Eq.~(\ref{eq:spin SR}):
\vspace{-0.4cm}
\begin{equation}
\vspace{-0.15cm}
J=\frac{1}{2}=\frac{1}{2}\Delta\Sigma+\mbox{L}_q+\Delta G+\mbox{L}_g
\nonumber
\end{equation}
Estimates for each of the components are discussed in the next Section. Recent determinations suggest
$\Delta \Sigma \approx 0.30(5)$,
$\mbox{L}_q \approx 0.2(1)$, and
$\Delta G+\mbox{L}_g \approx 0.15(10)$ at $Q^2=4$~GeV$^2$. Thus the nucleon spin is shared
between the three components, with the quark OAM possibly the largest contribution.
This result includes the PDF evolution effects from the low $Q^2$ nonperturbative domain to the
experimental resolution at $Q^2=4$~GeV$^2$.
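With these central values, the budget indeed saturates Eq.~(\ref{eq:spin SR}):
\begin{equation}
\frac{1}{2}\Delta\Sigma+\mbox{L}_q+\Delta G+\mbox{L}_g \approx \frac{0.30}{2}+0.2+0.15=0.50=J .
\nonumber
\end{equation}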
\noindent$\bullet$ The PDFs extracted from diverse DIS data and evolved to the
same $Q^2$ are generally consistent. Global analyses
show that the up quark polarization in the proton is large and positive,
$ \Delta \Sigma_u \approx 0.85$, whereas the down quark one is smaller and negative,
$\Delta \Sigma_d \approx -0.43$. The $x_{Bj}$-dependences of $\Delta u+\Delta\overline{u}$ and $\Delta d+\Delta\overline{d}$
are well determined in the kinematical domains of the experiments.
\noindent$\bullet$ The gluon axial anomaly~\cite{Efremov:1988zh} is small and cannot
explain the ``spin crisis".
\noindent$\bullet$ The contribution of the gluon spin,
which is only indirectly accessible in inclusive experiments, seems to be moderate.
\noindent$\bullet$ Quark OAM, which is required in the baryon LFWF to have
nonzero Pauli form factor and anomalous magnetic moment~\cite{Brodsky:1980zm},
is the most difficult component to measure from DIS;
however, analyses of high-$x_{Bj}$ DIS data and of GPD data, as well as LGT calculations, suggest it is a major contribution to $J$.
\noindent$\bullet$ The Ellis-Jaffe sum rule, Eq.~(\ref{eq:Ellis-Jaffe p}), is violated for both nucleons.
This either implies a large $\Delta s$, large SU(3)$_f$ breaking effects,
or an inaccurate value of $a_8$~\cite{Bass:2009ed}. Global fits indicate
$ \Delta s\approx-0.05(5)$, which is too small to fully explain the violation.
\noindent$\bullet$ \emph{Higher-twist} power-suppressed contributions are small at $Q^2 > 1$~GeV$^2$.
\subsubsection{Individual contributions to the nucleon spin \label{Individual contributions to the nucleon spin}}
\paragraph{Total quark spin contribution}
The most precise determinations of $\Delta \Sigma$ are from global fits, see
Table~\ref{table Delta Sigma 1} in the Appendix. On average, $\Delta \Sigma \approx 0.30(5)$.
A selection of LGT results is shown in Table~\ref{table Delta Sigma 2}.
The early calculations typically did not include the disconnected diagrams
that are responsible for the \emph{sea quark} contribution. They account for the
larger uncertainty in some recent LGT analyses and reduce the predicted
$\Delta\Sigma$ by about 30\%~\cite{Alexandrou:2016tuo, Alexandrou:2018xnp}.
(An earlier result indicated only a 5\% reduction, but this was evaluated with $m_{\pi} = 0.47$~GeV~\cite{Chambers:2015bka}).
The determination of $\Delta \Sigma$ from SIDIS at COMPASS~\cite{Alexakhin:2005iw}
agrees with the inclusive data~\cite{Alexakhin:2006oza}. The two analyses have similar statistical precision.
\paragraph{Individual quark spin contributions}
Inclusive DIS data on proton, neutron, and deuteron targets can be used to
separate the contributions from different quark polarizations assuming SU(3)$_f$ validity.
SIDIS, which tags the struck
quark, allows the identification of individual quark spin contributions without this
assumption. However, the domain where PDFs can be safely
extracted assuming factorization demands a larger momentum scale than untagged DIS. It is presently
unclear whether the kinematical range of the available data has reached this domain.
Tables~\ref{table Delta q 1} and~\ref{table Delta q 2} list $\Delta q$ from experiments,
models and results from LGT. Overall, $ \Delta \Sigma_u \approx 0.85$ and $\Delta \Sigma_d \approx -0.43$.
$\Delta s$ is of special interest since it could explain
the violation of the Ellis-Jaffe sum rule (Eq.~(\ref{eq:Ellis-Jaffe p})), and also underlines
the limitations of \emph{constituent quark} models. Ref.~\cite{Chang:2014jba} recently reviewed
the nucleon \emph{sea}, including $\Delta s$. The currently favored value for $\Delta s$, approximately $-0.05(5)$, is
barely enough to reconcile the Ellis-Jaffe sum rule, which predicts $\Delta \Sigma^{EJ} = 0.58(12)$ without $\Delta s,$
with the measured $\Delta \Sigma \approx 0.30(5)$. Recent LGT data yield an even smaller $\Delta s$
value, about $-0.03(1)$. (Early quenched LGT data yielded a larger $\Delta s=0.2(1)$, agreeing with the EMC
initial determination.) Thus, this suggests that SU(3)$_f$ breaking also contributes to
the Ellis-Jaffe sum rule violation~\cite{Bass:2009ed}.
This conclusion is supported by recent global analyses from
DSSV~\cite{deFlorian:2009vb}, NNPDF14~\cite{Nocera:2014gqa} and in particular JAM~\cite{Ethier:2017zbq}.
Nevertheless, this question remains open since
for example, LGT investigations of hyperon axial couplings show no evidence
of SU(3)$_f$ violation~\cite{Lin:2007ap}.
There is also tension between the values of $\Delta s$ derived from DIS and from kaon SIDIS data.
The SIDIS data suggest that the $x_{Bj}$-dependence of $\Delta s + \Delta \overline{s}$
flips sign and thus contributes less to $J$ than indicated by DIS data.
For example, COMPASS obtains $\Delta s + \Delta \overline{s}= -0.01 \pm 0.01$(stat) $\pm 0.01$(syst) from SIDIS
whereas a PDF fit of inclusive asymmetries yields
$\Delta s + \Delta \overline{s}= -0.08 \pm 0.01$(stat) $\pm 0.02$(syst), in clear disagreement. This
suggests that even at the large CERN energies, we may not yet be in the factorization domain for SIDIS.
Furthermore, a LSS analysis showed that the SIDIS $\Delta s$ is very sensitive to the
parameterization of the fragmentation functions and that the lack of their precise
knowledge may cause the tension~\cite{Leader:2011tm}.
However, the JAM analysis recently suggested~\cite{Ethier:2017zbq} that the tension
comes from imposing SU(3)$_f$, which is consistent
with the likely explanation of the Ellis-Jaffe sum rule violation~\cite{Bass:2009ed, Leader:2000dw}.
The JAM analysis, done at NLO and in the $\overline{MS}$ scheme, was
aimed at determining $\Delta s + \Delta \overline{s} (x_{Bj})$ with minimal bias. It used
DIS, SIDIS and $e^+ e^-$ annihilation data without imposing SU(3)$_f$, and allowed for \emph{higher-twist} contributions.
It finds $\Delta s + \Delta \overline{s} = -0.03 \pm 0.10$ at $Q^2=5$~GeV$^2$.
Fragmentation function data from LHC, COMPASS, HERMES, BELLE and BaBar may clarify the
situation. Measurements of $\overrightarrow{p} p \to \overrightarrow{\Lambda}X$ may also help since the $\Lambda$
polarization depends on $\Delta s$.
Reactions utilizing parity violation are also useful: proton
strange form factor data, together with neutrino scattering data yield
$g_A^s=\Delta s + \Delta \overline{s}= -0.30 \pm 0.42$~\cite{Pate:2008va}.
New parity violation data on $g_A^s$ should be available soon~\cite{Woodruff:2017kex} and can be complemented
with measurements using the future SoLID detector at JLab~\cite{SoLID}.
A polarized $^3$He target and unpolarized electron beam can
provide $g_1^{\gamma Z,n}$ and $g_5^{\gamma Z,n}$ from $Z^0$--$\gamma$ parity-violating interference.
These measurements, combined with the existing $g_1^p$ and $g_1^n$ data, can determine $\Delta s$
without assuming SU(3)$_f$~\cite{LOI12-16-007}.
The $x_{Bj}$-dependence of $\Delta u$ and $\Delta d$ can be obtained
from $A_1\approx g_1/F_1$ at high $x_{Bj}$ (see Section~\ref{pqcd high-x})
and from SIDIS at lower $x_{Bj}$. At high $x_{Bj}$, \emph{sea quarks} contribute little so
$F_1$ and $g_1$ mostly depend on $u^+$, $u^-$, $d^+$ and $d^-$
(see Eqs.~(\ref{eq:eqf1parton}) and (\ref{eq:eqg1parton})). They can thus be extracted from
$F_1^p$, $F_1^n$, $g_1^p$ and $g_1^n$ assuming isospin symmetry.
The results for $\Delta u/u$ and $\Delta d/d$ extracted
from $A_1$~\cite{Dharmawardane:2006zd, Zheng:2003un, Parno:2014xzb, Airapetian:2004zf}
are shown in Fig.~\ref{fig:partons_polar_vs_x}.
For clarity, only the most precise data are plotted.
Smaller $x_{Bj}$ points are from SIDIS data~\cite{Ackerstaff:1999ey}.
Global fits are also shown~\cite{Leader:2001kh, Jimenez-Delgado:2014xza, deFlorian:2009vb, Nocera:2014gqa}.
The latter reference used the high-$x_{Bj}$ pQCD constraints
discussed in Section~\ref{pqcd high-x} and assumed no quark OAM.
OAM is included in the results from Refs.~\cite{Avakian:2007xa, Jimenez-Delgado:2014xza}.
The $\Delta d/d$ data are negative, agreeing with most models but not with pQCD evolution
which predicts that $\Delta d/d>0$ for $x_{Bj} \gtrsim 0.5$ without quark OAM.
Including OAM pushes the zero crossing to $x_{Bj} \approx 0.75$,
which agrees with the data. Since pQCD's validity is well established, this suggests that quark OAM is important.
Integrating $\Delta u(x_{Bj})$ and $\Delta d(x_{Bj})$ over $x_{Bj}$ yields a large positive $\Delta u$ and a
moderately negative $\Delta d$.
\begin{figure}
\center
\vspace{-0.5cm}
\includegraphics[scale=0.39]{dqoq}\includegraphics[scale=0.4]{dgog}
\vspace{-0.6cm}
\caption{\label{fig:partons_polar_vs_x} \small{Data and global fits for
$\Delta q/q$ vs quark momentum fraction $x_{Bj}$ (left), and for
$\Delta g/g$ vs the gluon momentum fraction $x_g$ (right).
}}
\vspace{-0.5cm}
\end{figure}
\noindent
First results on $\Delta u - \Delta d$ from LGT are becoming available~\cite{Chen:2016utp, Alexandrou:2018pbm}.
\paragraph{The $ \Delta \overline{u} - \Delta \overline{d}$ difference}
Global fits and LGT calculations indicate a nonzero total polarized \emph{sea} difference
$ \Delta \overline{u} - \Delta \overline{d}$.
(We use the term ``sea difference'' rather than the conventional ``\emph{sea} asymmetry''
in order to avoid confusion with spin asymmetry, a central object of this review.)
Ref.~\cite{Chang:2014jba} recently reviewed
the nucleon \emph{sea} content, including its polarization. An unpolarized
non-zero \emph{sea} difference $\overline{u} - \overline{d} \approx -0.12$
has been known since the early 1990s~\cite{Baldit:1994jk, Amaudruz:1991at}.
Such a phenomenon must be nonperturbative since the perturbative process $g \to q \bar{q}$
generating \emph{sea quarks} is nearly symmetric, and Pauli blocking for
$g \to u \bar{u}$ in the proton ($g \to d \bar{d}$ in the neutron) is expected to be very small.
Many of the nonperturbative processes proposed for $ \overline{u} - \overline{d} \neq 0$ also predict
$ \Delta \overline{u} - \Delta \overline{d} \neq 0$.
As mentioned, $\overline{u} - \overline{d}$ may be related to the total OAM, see Eq.~(\ref{Eq. L propto sea}).
Table~\ref{table sea asy} provides data and predictions for $ \Delta \overline{u} - \Delta \overline{d}$.
Other predictions are provided in Refs.~\cite{Kumano:2001cu}.
\paragraph{Spin from intrinsic heavy-quarks}
More generally, the nonperturbative contribution to the nucleon spin
arising from its ``intrinsic'' heavy quark Fock states -- intrinsic strangeness, charm, and
bottom~\cite{Brodsky:1984nx} -- is an interesting question.
Such contributions arise from $Q \bar Q$ pairs which are
multiply connected to the valence quarks. One can show from the OPE that the probability of heavy
quark Fock states such as $|uud Q\bar Q \rangle$ in the proton scales as
$1/M^2_Q$ ~\cite{Brodsky:1984nx, Franz:2000ee}.
In the case of an Abelian theory, a Fock state such as $|e^+ e^- L \bar L \rangle$ in positronium
arises from the light-by-light insertion of a heavy lepton loop in the self-energy of the positronium;
in this Abelian case the probability scales as $1/M^4_L$.
The proton spin $J^z$ can receive contributions from the spin $S^z$ of the heavy quarks
in the $|uud Q \bar Q\rangle$ Fock state. For example, the least off-shell hadronic contribution
to the $|uud s \bar s\rangle$ Fock state has a dual representation as a
$|K^+(u\bar s) \Lambda(uds) \rangle$ fluctuation where the polarization of the $\Lambda$ hyperon is
opposite to the proton spin $J^z$~\cite{Brodsky:1996hc}. Since the spin of the $s$ quark is
aligned with $\Lambda$ spin, the $s$ quark will have spin $S^z_s$ opposite to the proton $J^z$.
The $\bar s$ in the $K^+$ is unaligned. Similarly, the spin $S^z_c$ of the intrinsic charm quark
from the $|D^+(u\bar c) \Lambda_c(udc) \rangle$ fluctuation of the proton will also be anti-aligned to
the proton spin. The magnitude of the spin correlation of the intrinsic Q quark with the proton is thus
bounded by the $|uudQ\bar Q\rangle$ Fock state probability. The net spin correlation of
the intrinsic heavy quarks can be bounded using the OPE~\cite{Polyakov:1998rb}.
It is also of interest to consider the intrinsic heavy quark distributions of nuclei.
For example, as shown in Ref.~\cite{Brodsky:2018zdh}, the gluon and intrinsic heavy quark
content of the deuteron will be enhanced due to its ``hidden-color'' degrees of
freedom~\cite{Brodsky:1983vf, Bashkanov:2013cla}, such as $|(uud)_{8_C} (ddu)_{8_C} \rangle$.
\paragraph{The gluon contribution to the proton spin}
$\Delta g/g (x_{Bj})$ and $\Delta g (x_{Bj})$
have been determined from either global fits to $g_1$ data
\emph{via} the sensitivity introduced by the \emph{DGLAP} equations, or from more direct semi-exclusive processes.
Tables~\ref{table Delta G 1} and~\ref{table Delta G 2} summarize the current information on $\Delta G$ and
$\Delta G+\mbox{L}_g$. Results on $\Delta g/g$ are shown in Fig.~\ref{fig:partons_polar_vs_x}.
The average value is $\Delta g/g = 0.113 \pm 0.038$(stat)$\pm0.035$(syst).
\paragraph{Orbital angular momenta \label{OAM}}
Of all the nucleon spin components, the OAMs are the hardest to measure.
Quark OAM can be extracted \emph{via} the GPDs $E$ and $H$, see Eq.~(\ref{Eq. Ji SR}),
the two-parton twist~3 GPD $G_2$, see Eq.~(\ref{Eq. quark OAM from twist-3}),
or GTMDs. It can also be assessed using TMDs
combined with nucleon structure models~\cite{Gutsche:2016gcd}.
While GPDs yield the kinematical OAM,
GTMDs provide the canonical definition, see Section~\ref{SSR components}.
GPD and GTMD measurements are difficult and, in order to obtain the OAM,
must be extensive since sum rule analyses are required.
The present dearth of data can be alleviated by models if
the data are sufficiently constraining so that the model dependence
is minimal. See Refs.~\cite{Bacchetta:2011gx, Courtoy:2016des} for examples of such work.
In Ref.~\cite{Bacchetta:2011gx},
a model is used to connect $E$ and the Sivers TMD.
The fit to the single-spin transverse asymmetries allows one to extract the TMD, to which $E$ is connected and then used to extract $J_q$.
Thus $L_q=J_q-\Delta q/2$ can be obtained.
In Ref.~\cite{Courtoy:2016des}, the quark OAM is computed within a
bag model using Eq.~(\ref{Eq. quark OAM from twist-3}).
A LF analysis of the deuteron single-spin transverse asymmetry~\cite{Alexakhin:2005iw} also constrains OAM and
suggests a small value for $\mbox{L}_g$~\cite{Brodsky:2006ha}. Similar conclusions are reached
using measurements of the $pp^\uparrow \to \pi^0X$ single spin transverse asymmetry \cite{Anselmino:2006yq}.
LGT can predict $L_q$ by calculating $J_q$ and
subtracting the computed or experimentally known $\Delta q/2 $. $\mbox{L}_g$ is obtained
likewise.
Alternatively, a first direct LGT calculation of quark OAMs obtained from the cross-product of
position and momentum is outlined in~\cite{Engelhardt:2017miy}. Quark OAMs are obtained
from GTMDs~\cite{Lorce:2011kd, Hatta:2011ku, Rajan:2016tlg} and can be set to
follow the canonical $l_q$ or kinematical $L_q$ OAM definition, or any definition in between
by varying the shape of the Wilson link chosen for the calculation. $l_q$ and $L_q$ can be
compared, as well as how they transform into each other, and it is found that $l_q > L_q$.
Early LGT calculations, which indicated small $L_q$ values, did not include the contributions of disconnected diagrams.
More recent calculations including the disconnected diagrams yield
larger values for the quark OAM, in agreement with several observations:
A) the predictions from LFHQCD at first order and from the Skyrme model that, in the nonperturbative domain,
the spin of the nucleon comes entirely from the quark OAM, see Section~\ref{sec:LFHQCD};
B) the relativistic quark model, which predicts $l_q\approx 0.2$~\cite{Jaffe:1989jz, Brodsky:1994fz};
C) the $\Delta d/d$ high-$x_{Bj}$ data, which are understood within pQCD only
if the quark OAM is sizeable~\cite{Avakian:2007xa}, see Section~\ref{pqcd high-x};
and
D) the non-zero nucleon anomalous magnetic moment, which implies a non-zero quark OAM~\cite{Brodsky:1980zm,Burkardt:2005km}.
Although $L_q$ is dominated by disconnected diagrams in LGT, they are absent in the LF and quark models,
and highly suppressed for the large-$x_{Bj}$ data.
Thus, although the various approaches agree that the quark OAM is important, the underlying mechanisms are evidently different.
Tables~\ref{table OAM 1} and~\ref{table OAM 2} provide the LGT
results and the indirect phenomenological determinations from single spin asymmetries.
If only the quark OAM or quark total angular momenta are provided in a reference, we have computed the other one assuming
$\Delta u/2 = 0.41(2)$, $\Delta d/2 = -0.22(2)$ and $\Delta s/2 = -0.05(5)$. One notices in the tables
that the strange quark OAM seems to be of opposite sign to $\Delta s$, effectively suppressing the total strange quark contribution to the nucleon
spin.
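The conversion used for the tables can be sketched numerically. This is a minimal sketch in Python: the $\Delta q/2$ values are those quoted above, but the $J_q$ inputs below are illustrative placeholders only, not values taken from any cited reference.

```python
# Convert total quark angular momenta J_q into OAM via L_q = J_q - Delta_q/2.
# The Delta_q/2 values are those quoted in the text; the J_q inputs below are
# illustrative placeholders only, not results from any cited reference.

HALF_DELTA_Q = {"u": 0.41, "d": -0.22, "s": -0.05}

def quark_oam(j_q: dict) -> dict:
    """Return L_q = J_q - Delta_q/2 for each flavor in j_q."""
    return {f: j_q[f] - HALF_DELTA_Q[f] for f in j_q}

# Hypothetical example inputs:
J_q = {"u": 0.25, "d": 0.00, "s": 0.02}
for f, L in quark_oam(J_q).items():
    print(f"L_{f} = {L:+.2f}")
```

With these illustrative inputs, $L_s$ comes out positive while $\Delta s/2$ is negative, the sign pattern noted above.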
\subsubsection{High-energy picture of the nucleon spin structure}
The contributions to $J$ listed in Tables~\ref{table Delta Sigma 1}-\ref{table OAM 2}
are shown in Fig.~\ref{spin history}, allowing one to visualize the evolution of our knowledge.
\begin{figure}
\center
\includegraphics[scale=0.52]{history}
\vspace{-0.45cm}
\caption{\label{spin history}\small{History of the measurements, models and LGT results on
$\Delta \Sigma/2$ (top left panel);
$\mbox{L}$ (top right panel);
$(\Delta q+ \Delta\bar{q})/2$ (middle left panel);
quark OAM for light flavors (middle right panel);
$\Delta G$ (bottom left panel);
and $\Delta G+\mbox{L}_g$ (bottom right panel).
The results shown, from Tables~\ref{table Delta Sigma 1}-\ref{table OAM 2}, are not comprehensive.
The determinations of $\mbox{L}$ use
different definitions, and may thus not be directly comparable, see Section~\ref{SSR components}.
The data points are significantly correlated since
they use the same data set and/or related assumptions and/or similar approximations, {\it e.g.}, the quenched approximation
or neglecting disconnected diagrams for the earlier LGT results. Values were LO-evolved to $Q^2=4$ GeV$^2$.
The uncertainties, when available, were not evolved.}}
\end{figure}
While the measured $\Delta u+\Delta \overline{u}$ agrees with the relativistic quark model,
its prediction for $\Delta d+\Delta \overline{d}$ is 50\% smaller than the data. Thus the
failure of the relativistic quark model stems in part from neglecting the \emph{sea quarks},
chiefly $\Delta \overline{d}$ and, to a lesser extent, $\Delta s$.
The situation for the quark OAM is still unclear due to the scarcity of data.
The indication that $\mbox{L}_s$ and $\Delta s$ have opposite signs reduces the overall strange
quark contribution to $J$ to a second-order effect.
Finally, $\Delta G+\mbox{L}_g$ appears to be of moderate size and thus not as important as initially thought.
The picture of the nucleon spin structure arising from these high-energy results is as follows:
The nucleon appears as a mixture of quasi-free quarks and bremsstrahlung-created gluons,
which in turn generate \emph{sea quarks}. At $Q^2 \sim 4$~GeV$^2$,
the \emph{valence quarks} carry between 30\% and 40\% of $J$.
The \emph{sea} quarks contribute a smaller amount of opposite
sign, about $-10\%$, dominated by $\Delta \bar{d}$.
The gluons carry about 20\% to 40\% of $J$.
The remainder, up to $\sim$50\%, comes from the quark OAM.
This agrees with the asymptotic prediction $L_q \to \Delta\Sigma(Q_0) + \frac{3n_f}{32+6n_f}$,
assuming $Q_0 \approx 1$~GeV for the \emph{DGLAP} evolution starting scale.
This, together with the LFHQCD first-order prediction that the spin of the nucleon comes entirely from the quark OAM, and hence
$\Delta\Sigma(Q_0)=0$, yields $L_q \xrightarrow[Q^2 \to \infty]{} 0.52\,J$ at LO.
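The asymptotic limit quoted here is easy to check numerically. A minimal sketch, taking $\Delta\Sigma(Q_0)=0$ as above; the choice of $n_f=6$ active flavors at asymptotic scales is our assumption:

```python
# Asymptotic quark OAM, L_q -> 3*nf/(32 + 6*nf) for Delta_Sigma(Q_0) = 0,
# expressed in units of the nucleon spin J = 1/2.
# Assumption (ours): nf = 6 active quark flavors at asymptotic Q^2.

def asymptotic_lq(nf: int) -> float:
    """Asymptotic quark orbital angular momentum (absolute units)."""
    return 3.0 * nf / (32.0 + 6.0 * nf)

J = 0.5  # nucleon spin
for nf in (3, 4, 5, 6):
    print(f"nf = {nf}: L_q = {asymptotic_lq(nf):.3f} = {asymptotic_lq(nf)/J:.2f} J")
```

For $n_f=6$ this gives $L_q \approx 0.53\,J$, consistent at the rounding level with the $0.52\,J$ quoted above.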
Part of this physics can be understood as a relativistic effect, the consequence of the Dirac equation
for light quarks in a confining potential. In the \emph{constituent quark} model, this effect
is about $0.3 \,J$.
Finally, DIS experiments indicate small \emph{higher-twist} contributions,
{\it i.e.}, power-law suppressed contributions from parton correlations such as quark-quark interactions, even though
the lower $Q$ values of the SLAC or HERMES experiments are of the GeV order, close to the
$\kappa \approx 0.5 $~GeV confinement scale~\cite{Brodsky:2016yod}.
This is surprising since such correlations are related to quark confinement.
(We refer to $\kappa$ rather than $\Lambda_s$, which is renormalization scheme dependent and hence ambiguous.
Typically $0.3 < \Lambda_s <1$~GeV~\cite{Deur:2016tte}.)
\subsubsection{Pending Questions}
\noindent The polarized DIS experiments leave several important questions open:
\noindent$\bullet$ Why is scale invariance precocious ({\it i.e.}, why are \emph{higher-twist} effects small)?
\noindent$\bullet$ What are precisely the values of $\Delta G$, $\mbox{L}_q$ and $\mbox{L}_g$?
\noindent$\bullet$ What are the values and roles of parton correlations (\emph{higher-twists}), and their connection
to strong-QCD phenomena such as confinement and hadronic degrees of freedom?
\noindent$\bullet$ Is the nucleon simpler to understand at high $x_{Bj} $?
\noindent$\bullet$ How does the transverse momentum influence the nucleon spin structure?
\noindent$\bullet$ What is the behavior of the polarized PDFs at small $x_{Bj}$?
Except for the last two points, recent inclusive data at lower energy have partially addressed these questions, as will be discussed below.
Experiments which measure GPDs and GTMDs are relevant to all of these questions,
except for the last point,
which can be addressed by future polarized EIC experiments.
\subsubsection{Contributions from lower energy data}
The information gained from low energy experiments includes parton correlations, the high-$x_{Bj}$
domain of structure functions, the various contributions to the nucleon spin, the transition between the hadronic
and partonic degrees of freedom, and tests of nucleon structure models.
\noindent$\bullet$ Parton correlations:
Overall, \emph{higher-twist} effects lead only to small deviations from Bjorken scaling, even at $Q^2 \approx 1$~GeV$^2$.
In fact, the low-$Q^2$ data allow us to quantify the characteristic scale $Q_0$ at which
\emph{leading-twist} pQCD fails, see Section~\ref{sec:perspectives}.
In the $\overline{MS}$ scheme and N$^4$LO, $ Q_0\approx0.75$~GeV.
Individual \emph{higher-twist} contributions, however, can be significant.
For example, for $\Gamma_1(Q^2 = 1$~GeV$^2)$,
the $f_2$ (twist~4) term has a strength similar to that of $\Gamma_1^{\mbox{\scriptsize{twist~2}}}$.
The overall smallness of the total \emph{higher-twist} effect comes from the sign
alternation of the $Q^{2-{\rm twist}}$ series
and the similar magnitudes of its coefficients near $Q^2 = 1$~GeV$^2$.
\noindent$\bullet$ The $x_{Bj}$-dependence of the effect of parton correlations has
been determined for $g_1$;
the dynamical \emph{higher-twist} contribution was found to be significant at moderate $x_{Bj}$ but becomes
less important at high and low $x_{Bj}$. Since $g_1$ is itself small at high $x_{Bj}$,
\emph{higher-twists} remain relatively important there.
This conclusion can be reconciled with the absence of a large \emph{higher-twist} contribution in $g_1^n$
for $Q^2 \sim 1$~GeV$^2$ (Fig.~\ref{fig:E97-103 g2}) if
the kinematical \emph{higher-twist} contribution cancels the dynamical one.
\noindent$\bullet$ The verification of the Burkhardt-Cottingham sum rule,
Eq.~(\ref{eq:bc_noel}), implies that $ g_2$ is not singular.
This should apply to each term of the $ g_2 $ \emph{twist} series.
\noindent$\bullet$ At $Q^2 <1 $~GeV$^2$, \emph{higher-twist} effects become noticeable:
For example, at $Q^2 = 0.6 $~GeV$^2$, their contribution to $g_2^n $ appears to be similar to the \emph{twist}-2 term contributing to
$ g_2^{WW}$ (Fig.~\ref{fig:E97-103 g2}), although uncertainties remain important.
The indications that the overall \emph{higher-twist} contributions are under control
allow one to extend the database used to extract the polarized
PDFs~\cite{Leader:2006xc, Jimenez-Delgado:2013boa, Shahri:2016uzl}.
\noindent \textbf{High-$x_{Bj}$ data}
Measurements from JLab experiments have provided the first significant constraints on polarized PDFs at
high $x_{Bj}$.
\emph{Valence quark} dominance is confirmed.
\noindent \textbf{Information on the nucleon spin components}
The data at high $x_{Bj}$ have constrained
$ \Delta \Sigma $, the quark OAM and $ \Delta G $. For example, in the global analysis of
Ref.~\cite{Leader:2006xc}, the uncertainty on $ \Delta G $
has decreased by a factor of 2 at $x_{Bj} = 0.3 $ and by a factor of 4 at $x_{Bj } = 0.5$.
Furthermore, these data have revealed the importance of the quark OAM.
However, to reliably obtain its value, the
quark wave functions of the nucleon have to be known for all $x_{Bj}$, rather
than only at high $x_{Bj} $.
Fits of the $ \Gamma_1$ data at $Q^2> 1$~GeV$^2$ indicate $ \Delta \Sigma^p = 0.15 \pm 0.07 $
and $ \Delta \Sigma^n = 0.35 \pm 0.08 $.
This difference suggests insufficient knowledge of $g_1$ at low $x_{Bj}$, rather than a breaking of isospin
symmetry.
\noindent \textbf{The transition between partonic and hadronic descriptions}
At large $Q^2$, data and pQCD predictions agree well without the need to account for parton correlations;
this is at first surprising, but it can be understood in terms of \emph{higher-twist} contributions of alternating signs.
At intermediate $Q^2$, the transition between partonic
and hadronic descriptions of the strong force, such as the $\chi$PT approach, is
characterized by a marked $Q^2$-evolution for most moments. However, the evolution
is smooth, {\it i.e.}, without indication of a phase transition, an important fact in the context
of Section~\ref{sec:perspectives}.
At lower $Q^2$, $\chi$PT predictions initially disagreed with most of
the data for structure function moments.
Recent calculations agree better, but some challenges remain for $\chi$PT.
New LGT methods are being developed
which should allow tractable, reliable first principle calculations of PDFs.
\noindent \textbf{Neutron information}
Constraints on neutron structure extracted from experiments using deuteron and
$^3$He targets appear to be consistent;
this validates the use of light nuclei as effective polarized neutron targets in the $Q^2$ range of the data.
These results provide
complementary checks on nuclear effects: such effects are small ($\approx10\%$) for $^3$He due
to the near cancellation of the two proton spins,
but nuclear corrections are difficult to compute since the $^3$He nucleus
is tightly bound. Conversely, the corrections are large ($\approx50\%$) for the deuteron
but more computationally tractable because the deuteron is a weakly bound $n-p$ object.
\section{Perspectives: Unexpected connections \label{sec:perspectives}}
Studying nucleon structure is fundamental since nucleons represent most of the known matter.
It provides primary information on the strong force and the confinement of quarks and gluons.
We provide here an example of what has been learned from doubly polarized inclusive
experiments at moderate $Q^2$ from JLab.
These experiments determined the $Q^2$-dependence of spin observables and
thus constrained the connections between partonic and hadronic degrees of freedom.
A goal of these experiments was to motivate new
nonperturbative theoretical approaches and insights into understanding nonperturbative QCD.
We discuss here how this goal was achieved.
As discussed at the end of the previous Section, the data at the transition
between the perturbative and nonperturbative-QCD domains evolve smoothly.
A dramatic behavior could have been expected from the pole structure of the perturbative running coupling;
$\alpha_s \xrightarrow[Q \to \Lambda_s]{}\infty$. However, this \emph{Landau pole}
is unphysical and only signals the breakdown of pQCD~\cite{Deur:2016tte} rather than the
actual behavior of $\alpha_s$. In contrast, a smooth behavior is observed, {\it e.g.}, for the Bjorken
sum $\Gamma_1^{p-n}$, see Fig.~\ref{fig:gamma1pn}.
At low $Q^2$, $\Gamma_1^{p-n}$ is effectively $Q^2$-independent, {\it i.e.}, QCD's
approximate \emph{conformal} behavior seen at large $Q^2$ (Bjorken scaling) is recovered
at low $Q^2$ (see Section~\ref{conformal_sym}).
This permits us to use the AdS/CFT correspondence~\cite{Maldacena:1997re}, an incarnation of which is
the LFHQCD framework~\cite{Brodsky:2014yha}, see Section~\ref{sec:LFHQCD}, which predicts that
$\Gamma_1^{p-n} (Q^2) = \big(1- e^{-\frac{Q^2}{4\kappa^2}} \big)/6 $~\cite{Brodsky:2010ur}.
Data~\cite{Deur:2005cf} and LFHQCD prediction agree well; see
Fig.~\ref{fig:gamma1pn}. Remarkably, the prediction
has no adjustable parameters since $\kappa$ is fixed by hadron masses (in Fig.~\ref{fig:gamma1pn},
$\kappa=M_{\rho}/\sqrt{2}$).
The LFHQCD prediction is valid up to $Q^2 \approx 1$~GeV$^2$.
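The parameter-free prediction is simple to evaluate. A minimal sketch: the only input is $\kappa = M_\rho/\sqrt{2}$, where we take the standard PDG value $M_\rho \approx 0.775$~GeV.

```python
import math

# LFHQCD prediction for the Bjorken sum:
#   Gamma_1^{p-n}(Q^2) = (1 - exp(-Q^2 / (4*kappa^2))) / 6,
# with kappa = M_rho / sqrt(2) fixed by hadron masses (no free parameters).
M_RHO = 0.775                    # GeV, PDG rho(770) mass
KAPPA = M_RHO / math.sqrt(2.0)   # GeV

def gamma1_pn(q2: float) -> float:
    """Bjorken sum Gamma_1^{p-n} at momentum transfer q2 (in GeV^2)."""
    return (1.0 - math.exp(-q2 / (4.0 * KAPPA**2))) / 6.0

for q2 in (0.1, 0.5, 1.0):
    print(f"Q^2 = {q2:.1f} GeV^2: Gamma_1^(p-n) = {gamma1_pn(q2):.4f}")
# The large-Q^2 limit is 1/6, i.e. Bjorken scaling (pQCD corrections neglected).
```

The prediction vanishes at the real-photon point and saturates at $1/6$ at large $Q^2$, the conformal behavior discussed above.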
At higher $Q^2$, gluonic corrections not included in LFHQCD become important. However, there
pQCD's Eq.~(\ref{eq:mu4}) may be applied. The validity domains of LFHQCD and pQCD overlap
around $Q^2 \approx 1$~GeV$^2$; matching the magnitude and the first derivative of their predictions
allows one to relate the pQCD parameter $\Lambda_s$ to the LFHQCD
parameter $\kappa$ or equivalently to hadronic masses~\cite{Deur:2016cxb}.
For example, in the ${\overline{MS}}$ scheme at LO,
\begin{equation}
\Lambda_{\overline{MS}}=M_\rho \, e^{-a}/\sqrt{a},
\label{eq: Lambda LO analytical relation}
\end{equation}
where $a=4\big(\sqrt{\ln(2)^{2}+1+\beta_0/4}-\ln(2)\big)/\beta_0$. For $n_f = 3$ quark flavors, $a\approx 0.55$.
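Numerically, the matching relation can be evaluated as follows. A minimal sketch; we assume the standard LO convention $\beta_0 = 11 - 2n_f/3$ and the PDG value $M_\rho \approx 0.775$~GeV:

```python
import math

# LO matching relation between the pQCD scale and the rho mass:
#   Lambda_MSbar = M_rho * exp(-a) / sqrt(a),
#   a = 4*(sqrt(ln(2)^2 + 1 + beta0/4) - ln(2)) / beta0.
# Assumptions (ours): beta0 = 11 - 2*nf/3 (standard LO convention),
# M_rho ~ 0.775 GeV (PDG value).

def matching_a(nf: int) -> float:
    beta0 = 11.0 - 2.0 * nf / 3.0
    ln2 = math.log(2.0)
    return 4.0 * (math.sqrt(ln2**2 + 1.0 + beta0 / 4.0) - ln2) / beta0

def lambda_msbar(nf: int, m_rho: float = 0.775) -> float:
    a = matching_a(nf)
    return m_rho * math.exp(-a) / math.sqrt(a)

print(f"a (nf=3)          = {matching_a(3):.2f}")   # ~0.55, as quoted above
print(f"Lambda_MSbar (LO) = {lambda_msbar(3):.2f} GeV")
```

The resulting scale falls within the typical range $0.3 < \Lambda_s < 1$~GeV quoted earlier.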
The $\rho$ meson is the ground-state solution of the quark-antiquark LFHQCD Schr\"{o}dinger
equation including the spin-spin interaction~\cite{Brodsky:2016yod, deTeramond:2008ht}, {\it i.e.}, the solution with radial
excitation $n=0$ and internal OAM $L=0$ and $S = 1$. Higher mass mesons are
described with $n > 0$ or/and $L > 0$. They are shown in Fig.~\ref{Fig:masses}.
The baryon spectrum can be obtained similarly or \emph{via} the mass symmetry between baryons and mesons
using superconformal algebra~\cite{Dosch:2015nwa}.
\begin{figure}
\centerline{\includegraphics[width=.4\textwidth]{VMlambda}
\includegraphics[width=.4\textwidth]{Kslambda}}
\vspace{-0.45cm}
\caption{\label{Fig:masses}\small The mass spectrum for unflavored (a) and
strange light vector mesons (b) predicted by LFHQCD
using only $\Lambda_s$ as input~\cite{Deur:2014qfa}.
The gray bands provide the uncertainty. The points indicate the experimental values.}
\vspace{-0.7cm}
\end{figure}
Computing the hadron spectrum from $\Lambda_s$, as shown in Fig.~\ref{Fig:masses},
has been a long-sought goal of strong force studies.
LFHQCD is not QCD but it represents a semiclassical approximation that successfully incorporates basic aspects of QCD's
nonperturbative dynamics that are not explicit from its Lagrangian.
Those include confinement and the emergence of a related mass scale,
universal Regge trajectories, and a massless pion in the chiral limit~\cite{deTeramond:2016htp}.
The confinement potential is determined by implementing QCD's \emph{conformal
symmetry}, following de Alfaro, Fubini and Furlan who showed how a
mass scale can be introduced in the Hamiltonian without affecting the
\emph{conformal} invariance of the action~\cite{deAlfaro:1976vlx, Brodsky:2013ar}.
The potential is also related by LFHQCD to a dilaton-modified representation
of the \emph{conformal} group in AdS$_5$ space.
Thus, the connection of the hadron mass spectrum~\cite{Deur:2014qfa} to key
results derived from the QCD Bjorken sum rule represents exciting progress
toward long-sought goals of physics, and it provides an example of how spin studies
foster progress in our understanding of fundamental physics. Another profound connection
relates the holographic structure of form factors (and unpolarized quark distributions), which
depends on the number of components of a bound state, to the properties of the Regge
trajectory of the vector meson that couples to the quark current in a given
hadron~\cite{deTeramond:2018ecg, Sufian:2018cpj}. This procedure
incorporates axial currents and the axial-vector meson spectrum to describe
axial form factors and the structure of polarized quark distributions in the LFHQCD
approach~\cite{HLFHS:2019}.
\section{Outlook\label{cha:Futur-results}}
We reviewed in Section~\ref{sec:data} the constraints on the composition of nucleon spin
which have been obtained from existing doubly polarized inclusive data.
In Section~\ref{sec:perspectives}, we gave an example of the exciting
advances obtained from these data. In this section we will discuss constraints which can be
obtained from presently scheduled future spin experiments.
Most of these experiments are dedicated to measurements of
GPDs and TMDs, which now provide the main avenue for spin structure studies.
JLab's upcoming experimental studies will utilize the upgrade of the electron beam energy from 6 to 12
GeV.\footnote{Halls A, B and C, the halls involved in nucleon spin structure studies, are
limited to 11 GeV, the 12 GeV beam being deliverable only to Hall D.}
The upgraded JLab retains its high polarized luminosity (several $10^{36}$ cm$^{-2}$s$^{-1}$)
which will allow larger kinematic coverage of the DIS region.
In particular, higher values of $x_{Bj}$ will be reached, allowing for
$\Delta u / u $ and $\Delta d / d$ measurements up to $x_{Bj} \approx 0.8$ for $W> 2$~GeV.
The quark OAM analysis discussed in Section~\ref{OAM} will thus be improved.
Three such experiments have been approved for running:
one on neutron utilizing a $^3$He target in JLab Hall A, one in Hall B on
proton and neutron (deuteron) targets, and
the third one, planned in Hall C with a neutron ($^3$He) target~\cite{large-x A_1 12 GeV exps},
is scheduled to run very soon (2019).
The large solid angle detector CLAS12~\cite{Burkert:2008rj} in Hall B is well suited to measure $\Gamma_1$
up to $Q^2 = $ 6 GeV$^2$ and to minimize the low-$x_{Bj}$ uncertainties
at the values of $Q^2$ reached at 6 GeV.
These data will also refine the determination of \emph{higher twists}.
In addition, inclusive data from CLAS12 will significantly constrain the
polarized PDFs of the nucleons~\cite{Leader:2006xc}:
the precision on $\Delta G$ extracted from lepton DIS \emph{via} \emph{DGLAP} analysis
is expected to improve by a factor of 3 at moderate and low $x_{Bj}$. It will complement the $\Delta G$
measurements from p-p reactions at RHIC. The precision on $\Delta u$
and $\Delta d$ will improve by a factor of 2.
Knowledge of $\Delta s$ will be less improved since the inclusive data only give weak constraints.
Constraints on $\Delta s$ can be obtained in Hall A using the SoLID~\cite{SoLID}
experiment without assuming SU(3)$_f$ symmetry~\cite{LOI12-16-007}.
Measurements of $\Delta G$ at RHIC are expected to continue for another decade
using the upgraded STAR and sPHENIX detectors~\cite{Aschenauer:2016our},
until the advent of the electron-ion collider (EIC)~\cite{Accardi:2012qut}.
The GPDs are among the most important quantities to be measured at the upgraded
JLab~\cite{12-06-114, Hall B DVCS 12 GeV prog.}. A first experiment
has already taken most of its data~\cite{12-06-114}. Since at $Q^2$ of a few GeV$^2$,
$L_q$ appears to be the largest contribution to the nucleon spin,
the JLab GPD program is clearly crucial. Information on the quark OAM will also be provided by measurements of the nucleon GTMDs
on polarized H, D and $^3$He targets~\cite{12 GeV prog. SIDIS program} utilizing
the Hall A and B SIDIS experimental programs.
The ongoing SIDIS and Drell-Yan measurements which access TMDs
are expected to continue at CERN using the COMPASS phase-III upgrade.
TMDs can also be measured with the upgraded STAR and sPHENIX detectors at
RHIC~\cite{Aschenauer:2016our}. Spin experiments are also possible at the LHC
with polarized nucleon and nuclear targets using the proposed fixed-target facility AFTER@LHC~\cite{Brodsky:2012vg}.
Precise DIS data are lacking at $x_{Bj} \lesssim 0.05$ (see {\it e.g.}, the DSSV14 global
fit~\cite{deFlorian:2014yva}). The proposed EIC can access this domain
with a luminosity of up to $10^{34}$ cm$^{-2}$s$^{-1}$. It will
allow for traditional polarized DIS, DDIS, SIDIS, exclusive
and charged current ($W^{+/-}$) DIS measurements.
Precise inclusive data over a much extended $x_{Bj}$ range will yield
$\Delta G$ with increased precision from \emph{DGLAP} global fits.
The discrepancy between $ \Delta \Sigma^p $ and $ \Delta \Sigma^n $ (Section~\ref{HT}),
which is most likely due to the paucity of low-$x_{Bj}$ data, should thus be clarified. Furthermore, the
tension between the $\Delta s$ values extracted from DIS and from SIDIS can be resolved by charged current
charm production in DIS at a high-luminosity collider such as the EIC. Charged current DIS will
allow for flavor separation at high $Q^2$ and a first glance at
the $g_5^{\gamma Z,n}$ structure function~\cite{Anselmino:1994gn}.
Other future facilities for nucleon spin structure studies are NICA (Nuclotron-based Ion Collider Facilities)
at JINR in Dubna~\cite{Savin:2014sva}, and possibly an EIC in China (EIC@HIAF).
The NICA collider at Dubna was approved in 2008; it will provide
polarized proton and deuteron beams up to $\sqrt s =27$~GeV.
These beams will allow polarized Drell-Yan studies of TMDs and direct photon
production which can access $\Delta G$.
China's HIAF (High Intensity Heavy Ion Accelerator Facility) was approved in 2015. EIC@HIAF, the
facility relevant to nucleon spin studies, is not yet approved as of 2018. The EIC@HIAF collider would provide a 3 GeV
polarized electron beam colliding with 15 GeV polarized protons. It would measure $\Delta s$,
$\overline{\Delta u}-\overline{\Delta d}$, GPDs and TMDs over
$0.01\leq x_{Bj}\leq0.2$ with a luminosity of about $5\times10^{32}$ cm$^{-2}$s$^{-1}$.
Improvements of the polarized sources, beams, and targets
are proceeding at these facilities.
The success of the \emph{constituent quark} model in the early days of QCD suggested a simple picture for the origin
of the nucleon spin: it was expected to come entirely from the quark spins, $\Delta \Sigma = 1$. However,
the first nucleon spin structure experiments, in particular EMC, showed that the nucleon spin
composition is far from trivial. This complexity means that spin degrees of freedom reveal valuable
information on the nucleon structure and on the nonperturbative mechanisms of the strong force.
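The decomposition underlying this discussion, and the quantities tabulated in the Appendix, is the nucleon spin sum rule; the form below is a standard (Jaffe-Manohar-type) sketch, keeping in mind that the precise meaning of the individual terms depends on the chosen decomposition scheme:

```latex
% Nucleon spin sum rule: the nucleon spin of 1/2 is shared between
% quark spin, gluon spin, and quark and gluon orbital angular momenta.
\begin{equation}
J = \frac{1}{2} = \frac{1}{2}\,\Delta\Sigma + \Delta G + L_q + L_g .
\end{equation}
```

In this language, the constituent quark model expectation corresponds to $\Delta\Sigma=1$ with all other terms vanishing.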
The next experimental step was the test of the Bjorken sum rule, which verified
that QCD remains valid when spin degrees of freedom are involved.
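For reference, the Bjorken sum rule relates the isovector first moment of $g_1$ to the nucleon axial charge $g_A$; a leading-twist sketch, with the pQCD series truncated at first order in $\alpha_s$, reads:

```latex
% Bjorken sum rule at leading twist, pQCD series truncated at O(alpha_s).
\begin{equation}
\Gamma_1^{p-n}(Q^2) \equiv \int_0^1 \left[ g_1^p(x,Q^2) - g_1^n(x,Q^2) \right] \mathrm{d}x
= \frac{g_A}{6}\left[ 1 - \frac{\alpha_s(Q^2)}{\pi} + \mathcal{O}(\alpha_s^2) \right].
\end{equation}
```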
The inclusive programs of SLAC, CERN and DESY also provided a mapping of the $x_{Bj}$ and $Q^2$
dependences of the $g_1$ structure function, yielding knowledge on the quark polarized distributions $\Delta q (x_{Bj})$ and some constraints
on the gluon spin distribution $\Delta G$ and on \emph{higher twists}.
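At leading twist and leading order in $\alpha_s$, the measured first moments decompose onto the axial charges, which is how the $\Delta\Sigma$ values listed in the Appendix tables are extracted; schematically (upper sign for the proton, lower for the neutron):

```latex
% First moment of g_1 in terms of the triplet (a_3), octet (a_8) and
% singlet (a_0) axial charges; a_3 = g_A from neutron beta decay and
% a_8 from hyperon decays assuming SU(3)_f, leaving a_0 = Delta Sigma
% to be determined from the data.
\begin{equation}
\Gamma_1^{p(n)}(Q^2) = \pm\frac{1}{12}\,a_3 + \frac{1}{36}\,a_8 + \frac{1}{9}\,a_0(Q^2),
\qquad a_0 = \Delta\Sigma .
\end{equation}
```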
The main goal of the subsequent JLab program was to study how partonic degrees
of freedom merge to produce hadronic systems. These data have led to
advances that permit an analytic computation of the
hadron mass spectrum with $\Lambda_s$ as the sole input.
Such a calculation represents exciting progress toward reaching the long-sought and primary goals of strong force studies.
The measurements and theoretical understanding discussed in this review, which has focused
on doubly-polarized inclusive observables, attest to the importance and vitality of studies
of the spin structure of the nucleon. The
future prospects discussed here show that this research remains as dynamic as it was in the
aftermath of the historic EMC measurement.
\paragraph{Acknowledgments} The authors thank S. \v{S}irca and R. S. Sufian for their useful comments on the manuscript.
This review is based in part upon work supported by the U.S. Department of Energy, the Office of Science,
and the Office of Nuclear Physics under contract DE-AC05-06OR23177. This work is also supported by the
Department of Energy contract DE--AC02--76SF00515.
\noindent
\begin{table}
{\Large\bf Appendix.}\normalsize~Tables of the contributions to the nucleon spin
~
\scriptsize{
\center
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & $Q^{2}$ (GeV$^{2}$) & $\Delta\Sigma$ & Remarks\tabularnewline
\hline
- & - & 1 & naive quark model \tabularnewline
\hline
\cite{Jaffe:1989jz} & - & 0.75$\pm$0.05 & relativistic quark model \tabularnewline
\hline
\cite{Ellis:1973kp} & - & 0.58$\pm0.12$ & Ellis-Jaffe SR \tabularnewline
\hline
\cite{Sehgal:1974rz} & - & 0.60 & quark parton model \tabularnewline
\hline
\textbf{\cite{Ashman:1987hv}} & \textbf{10.7} & \textbf{0.14$\pm$0.23} & \textbf{EMC} \tabularnewline
\hline
\textbf{\cite{Jaffe:1989jz}} & \textbf{10.7} & \textbf{0.01$\pm$0.29} & \textbf{EMC (Jaffe-Manohar analysis)} \tabularnewline
\hline
\cite{Li:1990xr} & - & 0.30 & Skyrme model \tabularnewline
\hline
\cite{Dorokhov:1993fc} & - & $0.09 $ & Instanton model \tabularnewline
\hline
\textbf{\cite{Adeva:1993km}} & \textbf{10} & \textbf{0.28$\pm$0.16} & \textbf{SMC} \tabularnewline
\hline
\textbf{\cite{Close:1993mv}} & \textbf{-} & \textbf{0.41$\pm0.05$} & \textbf{global analysis} \tabularnewline
\hline
\textbf{\cite{Abe:1994cp}} & \textbf{3} & \textbf{0.33$\pm$0.06} & \textbf{E143} \tabularnewline
\hline
\textbf{\cite{Brodsky:1994kg}} & \textbf{10} & \textbf{0.31$\pm$0.07} & \textbf{BBS} \tabularnewline
\hline
\cite{Cheng:1994zn}& - & 0.37 & $\chi$ quark model \tabularnewline
\hline
\textbf{\cite{Ball:1995td}} & \textbf{1} & \textbf{0.5$\pm$0.1} & \textbf{global fit} \tabularnewline
\hline
\textbf{\cite{Gluck:1995yr}}& \textbf{4} & \textbf{0.168} & \textbf{GRSV 1995} \tabularnewline
\hline
\textbf{\cite{Anthony:1996mw}} & \textbf{2} & \textbf{0.39$\pm$0.11} & \textbf{E142} \tabularnewline
\hline
\textbf{\cite{Abe:1997cx}} & \textbf{5} & \textbf{0.20$\pm$0.08} & \textbf{E154} \tabularnewline
\hline
\textbf{\cite{Leader:1997kw}} & \textbf{4} & \textbf{0.342} & \textbf{ LSS 1997} \tabularnewline
\hline
\cite{Qing:1998at} & - & $0.4$ & relativistic quark model \tabularnewline
\hline
\textbf{\cite{Altarelli:1998nb}} & \textbf{1} & \textbf{0.45$\pm$0.10} & \textbf{ABFR 1998} \tabularnewline
\hline
\textbf{\cite{Goto:1999by}} & \textbf{5} & \textbf{0.26$\pm$0.02} & \textbf{AAC 2000} \tabularnewline
\hline
\textbf{\cite{Anthony:1999rm}} & \textbf{5} & \textbf{0.23$\pm$0.07} & \textbf{E155} \tabularnewline
\hline
\textbf{\cite{Gluck:2000dy}} & \textbf{5} & \begin{tabular}{@{}c@{}} \textbf{0.197} \\ \textbf{0.273} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{Standard GRSV 2000} \\ \textbf{SU(3)$_f$ breaking}\end{tabular} \tabularnewline
\hline
\cite{Bourrely:2001du} & 4 & 0.282 & Stat. model \tabularnewline
\hline
\textbf{\cite{Leader:2001kh}} & \textbf{1} & \textbf{0.21$\pm$0.10} & \textbf{LSS 2001} \tabularnewline
\hline
\textbf{\cite{Forte:2001ph}} & \textbf{4} & \textbf{0.198} & \textbf{ABFR 2001} \tabularnewline
\hline
\textbf{\cite{Filippone:2001ux}} & \textbf{5} & \textbf{0.16$\pm$0.08} & \textbf{Global analysis} \tabularnewline
\hline
\textbf{\cite{Bluemlein:2002be}} & \textbf{4} & \textbf{0.298} & \textbf{BB 2002} \tabularnewline
\hline
\textbf{\cite{Hirai:2003pm}} & \textbf{5} & \textbf{0.213$\pm$0.138} & \textbf{AAC 2003} \tabularnewline
\hline
\textbf{\cite{Meziani:2004ne}} & \textbf{5} & \textbf{0.35$\pm$0.08} & \textbf{Neutron ($^3$He) data (Section~\ref{HT})} \tabularnewline
\hline
\textbf{\cite{Fersch:2017qrq}} & \textbf{5} & \textbf{0.169$\pm$0.084} & \textbf{Proton data (Section~\ref{HT})} \tabularnewline
\hline
\cite{Silva:2005fa} & - & 0.366 & $\chi$ quark soliton model \tabularnewline
\hline
\cite{Wakamatsu:2006dy, Wakamatsu:2007ar} & $\infty$ & $0.33 $&chiral quark soliton model. $n_f=6$ \tabularnewline
\hline
\textbf{\cite{Hirai:2006sr}} & \textbf{5} & \textbf{0.26$\pm$0.09} & \textbf{AAC 2006} \tabularnewline
\hline
\textbf{\cite{Airapetian:2006vy}} & \textbf{5} & \textbf{0.330$\pm$0.039}& \textbf{HERMES Glob. fit} \tabularnewline
\hline
\textbf{\cite{Alexakhin:2006oza}}& \textbf{10} & \textbf{0.35$\pm$0.06} & \textbf{COMPASS} \tabularnewline
\hline
\textbf{\cite{Hirai:2008aj}} & \textbf{5} & \textbf{0.245$\pm$0.06} & \textbf{AAC 2008} \tabularnewline
\hline
\cite{Bass:2009ed} & $\approx0.2$ & 0.39 & cloudy bag model w/ SU(3)$_f$ breaking\tabularnewline
\hline
\textbf{\cite{deFlorian:2009vb}} & \textbf{4} & \textbf{0.245} & \textbf{DSSV08} \tabularnewline
\hline
\textbf{\cite{Leader:2010rb}} & \textbf{4} & \textbf{0.231$\pm$0.065} & \textbf{LSS 2010} \tabularnewline
\hline
\textbf{\cite{Blumlein:2010rn}} & \textbf{4} & \textbf{0.193$\pm$0.075} & \textbf{BB 2010} \tabularnewline
\hline
\cite{Altenbuchinger:2010sz} & $\approx0.2$ & $0.23\pm0.01$ & Gauge-invariant cloudy bag model \tabularnewline
\hline
\textbf{\cite{Ball:2013lla}} & \textbf{4} & \textbf{0.18$\pm$0.20} & \textbf{NNPDF 2013} \tabularnewline
\hline
\textbf{\cite{Nocera:2014gqa}}& \textbf{10} & \textbf{0.18$\pm$0.21} & \textbf{NNPDF 2014} \tabularnewline
\hline
\cite{Bijker:2014ila} & - & 0.72$\pm$0.04 & unquenched quark mod. \tabularnewline
\hline
\cite{Brodsky:2014yha} & 5 & 0.30 & LFHQCD \tabularnewline
\hline
\cite{Li:2015exr} & 3 & $0.31\pm0.08$ & $\chi$ effective $\mathcal{L}$ model\tabularnewline
\hline
\cite{Liu:2015jna} & - & 0.308 & LFHQCD \tabularnewline
\hline
\textbf{\cite{Adolph:2015saz}} & \textbf{3} & \textbf{0.32$\pm$0.07} & \textbf{COMPASS 2017 deuteron data} \tabularnewline
\hline
\textbf{\cite{Sato:2016tuz} } & \textbf{5} & \textbf{0.28$\pm$0.04} & \textbf{JAM 2016}\tabularnewline
\hline
\cite{Dahiya:2016wjf} & $\approx 1$ & 0.602 & chiral quark model \tabularnewline
\hline
\textbf{\cite{Shahri:2016uzl}}& \textbf{5} & \textbf{0.285} & \textbf{KTA17 global fit} \tabularnewline
\hline
\cite{Maji:2017ill} & 1 & 0.17 & AdS/QCD q-qq model \tabularnewline
\hline
\textbf{\cite{Ethier:2017zbq}} & \textbf{5} & \textbf{0.36$\pm$0.09} & \textbf{JAM 2017} \tabularnewline\hline
\end{tabular}
\caption{\label{table Delta Sigma 1} \small Determinations of $\Delta\Sigma$ from
experiments and models. Experimental results, including global fits, are in bold
and are given in the $\overline{\rm MS}$ scheme. The model list is indicative rather than comprehensive.
}
}
\end{table}
\noindent
\begin{table}
\scriptsize{
\center
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & $Q^{2}$ (GeV$^{2}$) & $\Delta\Sigma$ & Remarks\tabularnewline
\hline
\cite{Altmeyer:1992nt} & - & $0.18\pm0.02$& Altmeyer, Gockeler \emph{et al.} Quenched calc. \tabularnewline
\hline
\cite{Fukugita:1994fh} & - & $0.18 \pm0.10$ & Fukugita \emph{et al.} Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Dong:1995rx} & - & $0.25 \pm0.12$ & U. Kentucky group. Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Gockeler:1995wg} & 2 & $0.59 \pm0.07$ & Gockeler \emph{et al.} u,d only. Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Mathur:1999uf}& 3 & $0.26 \pm 0.12$ & U. Kentucky group. Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Gusken:1999xy} & 5 & $0.20 \pm 0.12$ & SESAM 1999. $\chi$ extrap. Unspecified RS \tabularnewline
\hline
\cite{Hagler:2003jd} & 4 & $0.682 \pm 0.18$ & LHPC 2003. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Gockeler:2003jfa} & 4 & $0.60 \pm 0.02$ & QCDSF 2003. u, d only. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Brommel:2007sb} & 4 & $0.402 \pm 0.048$ & QCDSF-UKQCD 2007. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Bratt:2010jn} & 5 & $0.42 \pm 0.02$ & LHPC. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{QCDSF:2011aa} & 7.4 & $0.448 \pm 0.037$ & QCDSF 2011. $m_{\pi}$=285 MeV. Partly quenched calc. \tabularnewline
\hline
\cite{Alexandrou:2011nr} & 4 & $0.296 \pm 0.010$ & Twisted-Mass 2011 u, d only. W/ $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2013joa} & 4 & 0.606$\pm$0.052 & Twisted-Mass 2013 u, d only. $m_{\pi}$=213 MeV \tabularnewline
\hline
\cite{Abdel-Rehim:2013wlz} & 4& 0.507$\pm$0.008 & Twisted-Mass 2013. Phys. q masses \tabularnewline
\hline
\cite{Deka:2013zha} & 4 & $0.25 \pm 0.12$ & $\chi$QCD 2013. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2016mni} & 4 & $0.400 \pm 0.035$ & Twisted-Mass 2016. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Alexandrou:2016tuo} & 4 & 0.398$\pm0.031$ & Twisted-Mass 2017. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Green:2017keo} & 4 & $0.494 \pm 0.019$ & Partly quenched calc. $m_\pi = 317$ MeV \tabularnewline
\hline
\end{tabular}
\caption{\label{table Delta Sigma 2} \small Continuation of Table~\ref{table Delta Sigma 1}, for LGT results. They are given in the $\overline{\rm MS}$ scheme unless stated otherwise. The list is not comprehensive.
}
}
\end{table}
\noindent
\begin{table}
\scriptsize{
\center
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Ref. & \begin{tabular}{@{}c@{}} $Q^{2}$ \\ (GeV$^{2}$) \end{tabular} & $\Delta u + \Delta \overline{u}$ & $\Delta d + \Delta \overline{d}$ & $\Delta s + \Delta \overline{s}$ & Remarks\tabularnewline
\hline
- & - & 4/3 & -1/3 & 0 & quark model \tabularnewline
\hline
\cite{Jaffe:1989jz} & - & 0.86 & -0.22 & 0 & relat. q. mod. \tabularnewline
\hline
\textbf{\cite{Ashman:1987hv}} & \textbf{10} & \textbf{0.74(10)} & \textbf{-0.54(10)} & \textbf{-0.20(11)} & \textbf{EMC} \tabularnewline
\hline
\cite{Li:1990xr} & - & 0.78 & -0.48 & 0 & Skyrme model \tabularnewline
\hline
\cite{Park:1991fb}& - & - & -& -0.03 & $g_a^s$ SU(3) Skyrme model \tabularnewline
\hline
\cite{Dorokhov:1993fc} & - & 0.867 & -0.216 & - & Instanton model \tabularnewline
\hline
\textbf{\cite{Adeva:1993km}} & \textbf{10} & \textbf{0.82(5)} &\textbf{-0.44(5)} & \textbf{-0.10(5)} & \textbf{SMC} \tabularnewline
\hline
\textbf{\cite{Abe:1994cp}} & \textbf{3} & \textbf{0.84(2)} & \textbf{-0.42(2)} & \textbf{-0.09(5)} & \textbf{E143} \tabularnewline
\hline
\textbf{\cite{Brodsky:1994kg}} & \textbf{10} &\textbf{0.83(3)} & \textbf{-0.43(3)} & \textbf{-0.10(3)} & \textbf{BBS} \tabularnewline
\hline
\cite{Cheng:1994zn}& - & 0.79 & -0.32 & -0.10 & $\chi$ quark model \tabularnewline
\hline
\textbf{\cite{Gluck:1995yr}}& \textbf{4} & \textbf{0.914} & \textbf{-0.338} & \textbf{-0.068} & \textbf{GRSV 1995} \tabularnewline
\hline
\textbf{\cite{Anthony:1996mw}} & \textbf{2} & - & -& \textbf{-0.06(6)} & \textbf{E142} \tabularnewline
\hline
\textbf{\cite{Abe:1997cx}} & \textbf{5} & \textbf{0.69$^{(15)}_{(5)}$} & \textbf{-0.40$^{(8)}_{(5)}$} & \textbf{ -0.02$^{(1)}_{(4)}$} & \textbf{E154} \tabularnewline
\hline
\textbf{\cite{Leader:1997kw}} & \textbf{4} & \textbf{0.839} & \textbf{-0.405} & \textbf{-0.079} & \textbf{LSS 1997} \tabularnewline
\hline
\textbf{\cite{Ackerstaff:1997ws}} & \textbf{5}& \textbf{0.842(13)} & \textbf{-0.427(13)} & \textbf{-0.085(18)} & \textbf{HERMES (1997)} \tabularnewline
\hline
\cite{Qing:1998at} & - & 0.75 & -0.48 & -0.07 & relat. quark model \tabularnewline
\hline
\textbf{\cite{Goto:1999by}} & \textbf{5} & \textbf{0.812} & \textbf{-0.462} & \textbf{-0.118(74)} & \textbf{AAC 2000 global fit} \tabularnewline
\hline
\textbf{\cite{Anthony:1999rm}} & \textbf{5} & \textbf{0.95} & \textbf{-0.42} & \textbf{0.01} & \textbf{E155} \tabularnewline
\hline
\textbf{\cite{Gluck:2000dy}} & 5 &
\textbf{ \begin{tabular}{@{}c@{}} 0.795 \\ 0.774 \end{tabular}} &
\textbf{ \begin{tabular}{@{}c@{}} -0.470 \\ -0.493 \end{tabular}} &
\textbf{ \begin{tabular}{@{}c@{}} -0.128 \\ -0.006 \end{tabular}} &
\textbf{ \begin{tabular}{@{}c@{}} \textbf{Standard GRSV 2000} \\ \textbf{SU(3)$_f$ breaking}\end{tabular}} \tabularnewline
\hline
\cite{Bourrely:2001du} & 4 & 0.714 & -0.344 & -0.088 & Stat. model \tabularnewline
\hline
\textbf{\cite{Leader:2001kh}} & \textbf{1} & \textbf{0.80(3)} & \textbf{-0.47(5)} & \textbf{-0.13(4)} & \textbf{LSS 2001} \tabularnewline
\hline
\textbf{\cite{Forte:2001ph}} & \textbf{4} & \textbf{0.692} & \textbf{-0.418} & \textbf{-0.081} & \textbf{ABFR 2001} \tabularnewline
\hline
\textbf{\cite{Bluemlein:2002be}} & \textbf{4} & \textbf{0.854(66)} & \textbf{-0.413(104)} & \textbf{-0.143(34)} & \textbf{BB 2002} \tabularnewline
\hline
\cite{Lyubovitskij:2002ng}& - & - & -& -0.0052(15) & $g_a^s$ chiral quark model \tabularnewline
\hline
\textbf{\cite{Hirai:2003pm}} & \textbf{5} & \textbf{-} & \textbf{-} & \textbf{-0.124(46)} & \textbf{AAC 2003} \tabularnewline
\hline
\cite{An:2005cj}& - & - & -& -0.05(2) & $g_a^s$ pentaquark model \tabularnewline
\hline
\cite{Silva:2005fa} & - & 0.814 & -0.362 & -0.086 & $\chi$ quark soliton model \tabularnewline
\hline
\textbf{\cite{Hirai:2006sr}} & \textbf{5} & \textbf{-} & \textbf{-} & \textbf{-0.12(4)} & \textbf{AAC 2006} \tabularnewline
\hline
\textbf{\cite{Alexakhin:2006oza}}& \textbf{10} & \textbf{-} & \textbf{-}& \textbf{-0.08(3)} & \textbf{COMPASS} \tabularnewline
\hline
\textbf{\cite{Airapetian:2006vy}} & \textbf{5} & \textbf{0.842(13)} & \textbf{-0.427(13)} & \textbf{-0.085(18)} & \textbf{HERMES Glob. fit} \tabularnewline
\hline
\textbf{\cite{Pate:2008va}} & \textbf{-} & \textbf{-} & \textbf{-} & \textbf{-0.30(42)} & \textbf{PV + $\nu$ data} \tabularnewline
\hline
\cite{Bass:2009ed} & - & 0.84(2) & -0.43(2) & -0.02(2) & cloudy bag model w/ SU(3)$_f$ breaking\tabularnewline
\hline
\textbf{\cite{deFlorian:2009vb}} & \textbf{4} & \textbf{0.814} & \textbf{-0.456} & \textbf{-0.056} & \textbf{DSSV08} \tabularnewline
\hline
\textbf{\cite{Leader:2010rb}} & \textbf{4} & \textbf{-} & \textbf{-} & \textbf{-0.118(20)} & \textbf{LSS 2010} \tabularnewline
\hline
\textbf{\cite{Blumlein:2010rn}} & \textbf{4} & \textbf{0.866(0)} & \textbf{-0.404(0)} & \textbf{-0.118(20)} & \textbf{BB 2010} \tabularnewline
\hline
\cite{Lorce:2011kd} &- & 0.996 & -0.248 & - & LC const. quark mod. \tabularnewline
\hline
\cite{Lorce:2011kd} &- & 1.148 & -0.286 & - & LC $\chi$ qu. solit. mod. \tabularnewline
\hline
\cite{Altenbuchinger:2010sz} & $\approx0.2$ & $0.38\pm0.01$ & $-0.15\pm0.01$ & - & Gauge-invariant cloudy bag model \tabularnewline
\hline
\textbf{\cite{Ball:2013lla}} & \textbf{1} & \textbf{0.80(8)} & \textbf{-0.46(8)} & \textbf{-0.13(9)} & \textbf{NNPDF 2013 } \tabularnewline
\hline
\textbf{\cite{Nocera:2014gqa}}& \textbf{10} & \textbf{0.79(7)} & \textbf{-0.47(7)} & \textbf{-0.07(7)} & \textbf{NNPDF (2014)} \tabularnewline
\hline
\cite{Bijker:2014ila} & - & 1.10(3) & -0.38(1) & 0 & unquenched quark mod. \tabularnewline
\hline
\cite{Li:2015exr} & $\approx0.5$ & 0.90(3) & -0.38(3) & -0.07($^4_7$) & $\chi$ effective $\mathcal{L}$ model\tabularnewline
\hline
\textbf{\cite{Adolph:2015saz}} & \textbf{3} & \textbf{0.84(2)} & \textbf{-0.44(2)} & \textbf{-0.10(2)} & \textbf{D COMPASS} \tabularnewline
\hline
\cite{Gutsche:2016gcd}& 1 & 0.606 & -0.002 & - & LF quark mod. \tabularnewline
\hline
\textbf{\cite{Sato:2016tuz}} & \textbf{1} & \textbf{0.83(1)} & \textbf{-0.44(1)} & \textbf{-0.10(1)} & \textbf{JAM16} \tabularnewline
\hline
\textbf{\cite{Shahri:2016uzl}}& \textbf{5} & \textbf{0.926} & \textbf{-0.341} & \textbf{-} & \textbf{KTA17 global fit} \tabularnewline
\hline
\cite{Dahiya:2016wjf} & $\approx 1$ & 1.024 & -0.398 & -0.023 & chiral quark model \tabularnewline
\hline
\cite{Chakrabarti:2016yuw} & - & 1.892 & 0.792 & - & AdS/QCD q-qq model \tabularnewline
\hline
\cite{Maji:2017ill} & 1 & 0.71(9) & -0.54$^{19}_{13}$ & - & AdS/QCD q-qq model \tabularnewline
\hline
\textbf{\cite{Ethier:2017zbq}} & \textbf{5} & \textbf{-} & \textbf{-} & \textbf{-0.03(10)} & \textbf{JAM17} \tabularnewline
\hline
\end{tabular}
\caption{\label{table Delta q 1}\small Same as Table~\ref{table Delta Sigma 1} but for $\Delta q$. Results are ordered chronologically. The list for models is indicative rather than comprehensive.
}
}
\end{table}
\noindent
\begin{table}
\scriptsize{
\center
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Ref. & \begin{tabular}{@{}c@{}} $Q^{2}$ \\ (GeV$^{2}$) \end{tabular} & $\Delta u + \Delta \overline{u}$ & $\Delta d + \Delta \overline{d}$ & $\Delta s + \Delta \overline{s}$ & Remarks\tabularnewline
\hline
\hline
\cite{Fukugita:1994fh} & - & 0.638(54) & -0.347(46) & -0.0109(30) & Fukugita \emph{et al.} Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Dong:1995rx} & - & 0.79(11) & -0.42(11) & -0.12(1) & U. Kentucky group. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Gockeler:1995wg} & 2 & 0.830(70) & -0.244(22) & -& Gockeler \emph{et al.} u,d only. Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Mathur:1999uf} & 3 & - & - & -0.116(12) & U. Kentucky group. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Gusken:1999xy} & 5 & 0.62(7)& -0.29(6)& -0.12(7) & SESAM 1999. $\chi$ extrap. Unspecified RS \tabularnewline
\hline
\cite{Gockeler:2003jfa} & 4 & 0.84(2)& -0.24(2)& - & QCDSF 2003. u, d only. Quenched calc. w/ $\chi$ extrap.\tabularnewline
\hline
\cite{Babich:2010at} & - & - & - & -0.019(11) & Unrenormalized result. W/ $\chi$ extrap. \tabularnewline
\hline
\cite{Bratt:2010jn} & 5 & 0.822(72) & -0.406(70) & - & LHPC 2010. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{QCDSF:2011aa} &7.4 & 0.787(18) & -0.319(15) & -0.020(10) & QCDSF 2011. $m_{\pi}$=285 MeV. Partly quenched calc. \tabularnewline
\hline
\cite{Alexandrou:2011nr} & 4 & 0.610(14) & -0.314(10) & - & Twisted-Mass 2011 u, d only. W/ $\chi$ extrap. \tabularnewline
\hline
\cite{Abdel-Rehim:2013wlz} & 4& 0.820(11) &-0.313(11) & -0.023(34) & Twisted-Mass 2013. Phys. q masses \tabularnewline
\hline
\cite{Alexandrou:2013joa} & 4 & 0.886(48) & -0.280(32) & - & Twisted-Mass 2013 u, d only. $m_{\pi}$=213 MeV \tabularnewline
\hline
\cite{Deka:2013zha} & 4 & 0.79(16) & -0.36(15) & -0.12(1) & $\chi$QCD 2013. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2016mni} & 4 & 0.828(32) & -0.387(20) & -0.042(10) & Twisted-Mass 2016. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Alexandrou:2016tuo}& 4 & 0.826(26) & -0.386(14) & -0.042(10) & Twisted-Mass 2017. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Green:2017keo} & 4 & 0.863(17) & -0.345(11) & -0.0240(24) & Partly quenched calc. $m_\pi = 317$ MeV \tabularnewline
\hline
\end{tabular}
\caption{\label{table Delta q 2}\small Continuation of Table~\ref{table Delta q 1}, for LGT results. They are given in the $\overline{\rm MS}$ scheme unless stated otherwise. The list is not comprehensive.
}
}
\end{table}
\noindent
\begin{table}
\center
\scriptsize{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Ref. & \begin{tabular}{@{}c@{}} $Q^{2}$ \\ (GeV$^{2}$) \end{tabular}& $ \Delta \overline{u} - \Delta \overline{d}$ & $ \Delta \overline{u}$ & $\Delta \overline{d}$ & Remarks\tabularnewline
\hline
\hline
\cite{Thomas:1983fh} & - & 0 & 0 & 0 & $\pi$-cloud model \tabularnewline
\hline
\cite{Dorokhov:1993fc} & 4 & 0.215 & - & - & Instanton model \tabularnewline
\hline
\cite{Fries:1998at} & 2 & 0.014(13) & - & - & $\rho$-cloud model \tabularnewline
\hline
\textbf{\cite{Adeva:1997qz}} & \textbf{10} & \textbf{0.00(19)} & \textbf{0.01(6)}& \textbf{0.01(18)} & \textbf{SMC} \tabularnewline
\hline
\cite{Boreskov:1998hp} & 4 & 0.76(1) & -& -& cloud model, $\rho$-$\pi$ interf. \tabularnewline
\hline
\cite{Dressler:1998zi} & - & 0.31 & - & - & $\chi$ soliton model \tabularnewline
\hline
\textbf{\cite{Ackerstaff:1999ey}} & \textbf{2.5} & \textbf{0.01(6)} & \textbf{-0.01(4)} & \textbf{-0.02(5)} & \textbf{HERMES} \tabularnewline
\hline
\textbf{\cite{Gluck:2000dy}} & \textbf{5} &
\begin{tabular}{@{}c@{}} \textbf{0} \\ \textbf{0.32} \end{tabular} &
\begin{tabular}{@{}c@{}} \textbf{-0.064} \\ \textbf{0.085} \end{tabular} &
\begin{tabular}{@{}c@{}} \textbf{-0.064} \\ \textbf{-0.235} \end{tabular} &
\begin{tabular}{@{}c@{}} \textbf{Standard GRSV 2000} \\ \textbf{SU(3)$_f$ breaking}\end{tabular} \tabularnewline
\hline
\cite{Cao:2001nu} & 4 & 0.023(31) & - & - & meson cloud bag model \tabularnewline
\hline
\cite{Dorokhov:2001pz} & - & 0.2 & - &- & Instanton model \tabularnewline
\hline
\cite{Bourrely:2001du} & 4 & 0.12 & 0.046 & -0.087 & Stat. model \tabularnewline
\hline
\cite{Steffens:2002zn}& - & 0.2 & -& - & sea model with Pauli-blocking\tabularnewline
\hline
\cite{Fries:2002um}& 1 & 0.12 & - & - & cloud model $\sigma$-$\pi$ interf. \tabularnewline
\hline
\textbf{\cite{Airapetian:2004zf}} & \textbf{2.5} & \textbf{0.048(64)} & \textbf{-0.002(23)} & \textbf{-0.054(35)} & \textbf{HERMES} \tabularnewline
\hline
\textbf{\cite{Alekseev:2007vi}} & \textbf{10} & \textbf{0.00(5)} & - & - & \textbf{COMPASS} \tabularnewline
\hline
\textbf{\cite{deFlorian:2009vb}} & \textbf{5} & \textbf{0.15 }& \textbf{0.036} & \textbf{-0.114} &\textbf{DSSV08} \tabularnewline
\hline
\textbf{\cite{Alekseev:2009ac}} & \textbf{3} & \textbf{-0.04(3)} & - & - & \textbf{COMPASS} \tabularnewline
\hline
\textbf{\cite{Alekseev:2010ub}} & \textbf{3} & \textbf{0.06(5)} &\textbf{ 0.02(2)} & \textbf{-0.05(4)} & \textbf{COMPASS} \tabularnewline
\hline
\textbf{\cite{Nocera:2014gqa}} & \textbf{10} & \textbf{0.17(8)} & \textbf{0.06(6)} & \textbf{-0.11(6)} & \textbf{NNPDF (2014)} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \textbf{\cite{Adamczyk:2014xyw}} \\ \textbf{\cite{Adare:2015gsd}} \end{tabular} & - &
- &
\textbf{$>0$} & - & \begin{tabular}{@{}c@{}} \textbf{0.05$<x_{Bj}<$0.2. STAR} \\ \textbf{and PHENIX $W^{\pm}$, $Z$ prod.} \end{tabular} \tabularnewline
\hline
\textbf{\cite{Ethier:2017zbq}} & \textbf{5} & \textbf{0.05(8)} & - & - & \textbf{global fit (JAM 2017)} \tabularnewline
\hline
\textbf{\cite{Lin:2014zya}} & \textbf{4} & \textbf{0.24(6)} & - & - & \textbf{$m_{\pi}$=310 MeV} \tabularnewline
\hline
\end{tabular}
\caption{\label{table sea asy} \small Phenomenological (top) and LGT (bottom) results on the sea asymmetry $ \Delta \overline{u} - \Delta \overline{d}$. Results are in the $\overline{\rm MS}$ scheme.
The lists for models and LGT are ordered chronologically and are not comprehensive.
}}
\end{table}
\noindent
\begin{table}
\center
\scriptsize{
\vspace{-1.5cm}
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & $Q^{2}$ (GeV$^{2}$) & Contribution & Remarks\tabularnewline
\hline
\hline
\textbf{\cite{Adeva:1993km}} & \textbf{5} & \textbf{$\Delta G$=0.9(6)} & \textbf{SMC incl. \emph{DGLAP}}\tabularnewline
\hline
\textbf{\cite{Brodsky:1994kg}} & \textbf{1} & \textbf{$\Delta G$=0.5} & \textbf{BBS global fit} \tabularnewline
\hline
\textbf{\cite{Ball:1995td}} & \textbf{1} & \textbf{$\Delta G$=1.5(8)} & \textbf{Ball \emph{et al.} global fit} \tabularnewline
\hline
\textbf{\cite{Gluck:1995yr}}& \textbf{4} & \textbf{$\Delta G$=1.44} & \textbf{GRSV 1995} \tabularnewline
\hline
\textbf{\cite{Abe:1997cx}} & \textbf{5} & \textbf{$\Delta G$=0.9(5)} & \textbf{E154 incl. \emph{DGLAP}}\tabularnewline
\hline
\textbf{\cite{Altarelli:1998nb}} & \textbf{1} & \textbf{$\Delta G$=1.5(9)} & \textbf{ABFR 1998} \tabularnewline
\hline
\textbf{\cite{Goto:1999by}} & \textbf{5} & \textbf{$\Delta G$=0.920(2334)} & \textbf{AAC 2000} \tabularnewline
\hline
\textbf{\cite{Airapetian:1999ib}} & \textbf{2} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=0.41(18)} \\ at \textbf{$\left\langle x_{g} \right\rangle$= 0.17} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{HERMES DIS+high-$p_T$} \\ \textbf{hadron pairs} \end{tabular} \tabularnewline
\hline
\textbf{\cite{Anthony:1999rm}} & \textbf{5} & \textbf{$\Delta G$=0.8(7)} & \textbf{E155 incl. \emph{DGLAP}} \tabularnewline
\hline
\textbf{\cite{Gluck:2000dy}} & \textbf{5} & \begin{tabular}{@{}c@{}} \textbf{$\Delta G$=0.708} \\ \textbf{$\Delta G=0.974$} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{Standard GRSV 2000} \\ \textbf{SU(3)$_f$ breaking}\end{tabular} \tabularnewline
\hline
\textbf{\cite{Leader:2001kh}} & \textbf{1} & \textbf{$\Delta G$=0.68(32)} & \textbf{LSS 2001} \tabularnewline
\hline
\textbf{\cite{Forte:2001ph}} & \textbf{4} & \textbf{$\Delta G=1.262$} & \textbf{ABFR 2001} \tabularnewline
\hline
\textbf{\cite{Bluemlein:2002be}} & \textbf{4} & \textbf{$\Delta G=0.931(669)$} & \textbf{BB2002} \tabularnewline
\hline
\textbf{\cite{Hirai:2003pm}} & \textbf{5} & \textbf{$\Delta G=0.861(2185)$} & \textbf{AAC 2003} \tabularnewline
\hline
\textbf{\cite{Adeva:2004dh}} & \textbf{13} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=-0.20(30)} \\ \textbf{at $\left\langle x_{g} \right\rangle = 0.07$} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{SMC DIS+high-$p_T$} \\ \textbf{hadron pairs} \end{tabular} \tabularnewline
\hline
\cite{Diehl:2004cx} & 4 & $\Delta G+\mbox{L}_g=0.40(5) $ &Valence only. GPD constrained w/ nucl. form factors \tabularnewline
\hline
\cite{Guidal:2004nd} & 2 & $\Delta G+\mbox{L}_g= 0.22$ & GPD model \tabularnewline
\hline
\textbf{\cite{Procureur:2006sg}} & \textbf{3} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=0.016(79)} \\ \textbf{at $\left\langle x_{g} \right\rangle = 0.09$} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{COMPASS quasi-real high-$p_T$} \\ \textbf{hadron pairs prod.}\end{tabular} \tabularnewline
\hline
\textbf{\cite{Hirai:2006sr}} & \textbf{5} & \textbf{$\Delta G=0.67(186)$} & \textbf{AAC 2006} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \textbf{\cite{Ellinghaus:2005uc}} \\ \textbf{\cite{Mazouz:2007aa}} \end{tabular} & \textbf{1.9} & \textbf{$\Delta G+\mbox{L}_g$=0.23(27)} & \begin{tabular}{@{}c@{}} \textbf{JLab and HERMES} \\ \textbf{DVCS data} \end{tabular} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \cite{Wakamatsu:2006dy} \\ \cite{Wakamatsu:2007ar} \end{tabular} &
$\infty$ & $\Delta G+\mbox{L}_g=0.264 $ &
\begin{tabular}{@{}c@{}} $\chi$ quark solit. \\ mod. $n_f=6$ \end{tabular}\tabularnewline
\hline
\textbf{\cite{Hirai:2008aj}} & \textbf{5} & \textbf{$\Delta G=1.07(104)$} & \textbf{AAC 2008} \tabularnewline
\hline
\begin{tabular}{@{}c@{}}\cite{Myhrer:2007cf}, \\ \cite{Thomas:2008ga} \end{tabular}& 4 & $\Delta G+\mbox{L}_g=0.208(63) $ & \begin{tabular}{@{}c@{}} quark model \\ w/ pion cloud \end{tabular} \tabularnewline
\hline
\cite{Goloskokov:2008ib} & 4 & $\Delta G + \mbox{L}_g = 0.20(7)$ & GPD model \tabularnewline
\hline
\textbf{\cite{Alekseev:2009ad}} & \textbf{13} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=-0.49(29)} \\ \textbf{at $\left\langle x_{g} \right\rangle$ = 0.11} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{COMPASS Open} \\ \textbf{Charm} \end{tabular} \tabularnewline
\hline
\textbf{\cite{deFlorian:2009vb}} & \textbf{5} & \textbf{$\Delta G$=-0.073} &\textbf{DSSV08} \tabularnewline
\hline
\textbf{\cite{Airapetian:2010ac}} & \textbf{1.35} & \begin{tabular}{@{}c@{}} $\Delta g / g =0.049(35)(^{126}_{~99})$ \\ \textbf{at $\left\langle x_{g} \right\rangle$ = 0.22} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{HERMES DIS +} \\ \textbf{high-$p_T$ incl. hadron production} \end{tabular} \tabularnewline
\hline
\cite{Garvey:2010fi} & - & $\Delta G+\mbox{L}_g=0.163(28) $ & \begin{tabular}{@{}c@{}} quark model+unpol. sea \\ asym. (Garvey relation) \end{tabular} \tabularnewline
\hline
\textbf{\cite{Leader:2010rb}} & \textbf{4} & \textbf{$\Delta G=-0.02(34)$} & \textbf{LSS 2010} \tabularnewline
\hline
\textbf{\cite{Blumlein:2010rn}} & \textbf{4} & \textbf{$\Delta G=0.462(430)$} & \textbf{BB 2010} \tabularnewline
\hline
\cite{Altenbuchinger:2010sz} & $\approx0.2$ & $\Delta G+\mbox{L}_g= -0.26(10)$ & Gauge-invariant cloudy bag model \tabularnewline
\hline
\cite{Bacchetta:2011gx} & 4 & $\Delta G+\mbox{L}_g= 0.23(3)$ & single spin trans. asy. \tabularnewline
\hline
\cite{Bass:2011zn} & 5 & $\Delta G \lesssim 0.4$ & c-quark axial-charge constraint \tabularnewline
\hline
\textbf{\cite{Adolph:2012ca}} & \textbf{13} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=-0.13(21)} \\ \textbf{at $\left\langle x_{g} \right\rangle$= 0.2} \end{tabular} & \textbf{COMPASS open charm} \tabularnewline
\hline
\textbf{\cite{Adolph:2012ca}} & \textbf{3} & \textbf{$\Delta G$=0.24(9)} & \textbf{Global fit+COMPASS open charm} \tabularnewline
\hline
\cite{GonzalezHernandez:2012jv} & 4 & $\Delta G+\mbox{L}_g=0.263(107) $ & GPD constrained w/ nucl. form factors \tabularnewline
\hline
\textbf{\cite{Adolph:2012vj}} & \textbf{3} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=0.125(87)} \\ \textbf{at $\left\langle x_{g} \right\rangle$=0.09} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{COMPASS DIS +} \\\textbf{high-$p_T$ hadron pairs} \end{tabular} \tabularnewline
\hline
\textbf{\cite{Ball:2013lla}} & \textbf{4} & \textbf{$\Delta G=-0.9(39)$} & \textbf{NNPDF 2013} \tabularnewline
\hline
\cite{Diehl:2013xca} & 4 & $\Delta G+\mbox{L}_g=0.274(29) $ & \begin{tabular}{@{}c@{}} GPD constrained w/ \\ nucl. form factors \end{tabular} \tabularnewline
\hline
\cite{Bijker:2014ila} & - & $\Delta G+\mbox{L}_g=0.14(7) $ & \begin{tabular}{@{}c@{}} unquenched \\ quark model \end{tabular} \tabularnewline
\hline
\cite{Brodsky:2014yha} & 5 & $\Delta G + \mbox{L}_g =0.09$ & LFHQCD \tabularnewline
\hline
\textbf{\cite{deFlorian:2014yva}} & \textbf{10} & \textbf{$\int_{0.001}^1 \Delta g dx$=0.37(59)} & \textbf{DSSV14} \tabularnewline
\hline
\textbf{\cite{Adamczyk:2014ozi}} & \textbf{10} & \textbf{$\Delta G$=0.21(10)} & \textbf{NNPDF~\cite{Nocera:2014gqa} including STAR data} \tabularnewline
\hline
\textbf{\cite{Adolph:2015cvj}}& \textbf{3} & \begin{tabular}{@{}c@{}} \textbf{$\Delta g / g$=0.113(52)} \\ \textbf{at $\left\langle x_{g} \right\rangle$= 0.1} \end{tabular} & \begin{tabular}{@{}c@{}} \textbf{COMPASS SIDIS} \\ \textbf{deuteron data} \end{tabular} \tabularnewline
\hline
\cite{Gutsche:2016gcd} & 1 & $\Delta G+\mbox{L}_g=0.152$ & LF quark model \tabularnewline
\hline
\textbf{\cite{Shahri:2016uzl}}& \textbf{5} & \textbf{$\Delta G$=0.391} & \textbf{KTA17 global fit} \tabularnewline
\hline
\cite{Dahiya:2016wjf} & $\approx 1$ & $ \Delta G +\mbox{L}_g= 0$ & chiral quark model \tabularnewline
\hline
\cite{Chakrabarti:2016yuw} & - & $\Delta G+\mbox{L}_g=-0.035$ & AdS/QCD scalar quark-diquark model \tabularnewline
\hline
\end{tabular}
\caption{\label{table Delta G 1}\small Same as Table~\ref{table Delta Sigma 1} but for gluon contributions. $x_g$ is the gluon momentum fraction. Results are in the $\overline{MS}$ scheme. The lists for models and LGT are ordered chronologically and are not comprehensive.
}
}
\end{table}
\noindent
\begin{table}
\vspace{-1.5cm}
\center
\scriptsize{
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & $Q^{2}$ (GeV$^{2}$) & $\Delta G+\mbox{L}_g$ & Remarks\tabularnewline
\hline
\hline
\cite{Mathur:1999uf}& 3 & 0.20(7) & U. Kentucky group. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Gockeler:2003jfa} & 4 & 0.17(7) & QCDSF 2003. u, d only. Quenched calc. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Dorati:2007bk} & 4 & 0.249(12) & CC$\chi$PT. u, d only. W/ $\chi$ extrap. \tabularnewline
\hline
\cite{Brommel:2007sb} & 4 & 0.274(11) & QCDSF-UKQCD. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Bratt:2010jn} & 5 & 0.262(18) & LHPC 2010. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2011nr} & 4 & 0.358(40) & Twisted-Mass 2011 u, d only. W/ $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2013joa} & 4 & 0.289(32) & Twisted-Mass 2013 u, d only. $m_{\pi}$=0.213 GeV\tabularnewline
\hline
\cite{Abdel-Rehim:2013wlz} & 4 & 0.220(110) & Twisted-Mass 2013. Phys. q masses \tabularnewline
\hline
\cite{Deka:2013zha} & 4 & 0.14(4) & $\chi$QCD col. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2016mni} & 4 & 0.325(25) & Twisted-Mass 2016. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Yang:2016plb} & 10 & 0.251(47) & $\chi$QCD 2017. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Alexandrou:2016tuo} & 4 & 0.09(6) & Twisted-Mass 2017. Phys. $\pi$ mass \tabularnewline
\hline
\end{tabular}
\vspace{-0.3cm}
\caption{\label{table Delta G 2} \small Continuation of Table~\ref{table Delta G 1}, for LGT results. They are given in the $\overline{MS}$ scheme.
}
}
\end{table}
\noindent
\begin{table}
\center
\scriptsize{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Ref. & \begin{tabular}{@{}c@{}} $Q^{2}$ \\ (GeV$^{2}$) \end{tabular}&
\begin{tabular}{@{}c@{}} $\mbox{L}_{u}$ \\ $J_{u}$ \end{tabular} &
\begin{tabular}{@{}c@{}} $\mbox{L}_{d}$ \\ $J_{d}$ \end{tabular} &
\begin{tabular}{@{}c@{}} $\mbox{L}_{s}$ \\ $J_{s}$ \end{tabular} &
\begin{tabular}{@{}c@{}} disc. \\ diag.? \end{tabular} & Remarks\tabularnewline
\hline
\hline
\cite{Sehgal:1974rz} & - & \multicolumn{3}{|l|}{$ \hspace{3cm} \mbox{L}_q=0.20$} & N/A & quark parton model \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \cite{Jaffe:1989jz} \\ \cite{Thomas:2008ga} \end{tabular}& - & \begin{tabular}{@{}c@{}} 0.46 \\ 0.89 \end{tabular} &
\begin{tabular}{@{}c@{}} -0.11 \\ -0.22 \end{tabular} &
\begin{tabular}{@{}c@{}} 0 \\ 0 \end{tabular} &
N/A &
\begin{tabular}{@{}c@{}} relat. quark model \\ Canonical def. \end{tabular} \tabularnewline
\hline
\cite{Cheng:1994zn}& - & \multicolumn{3}{|l|}{$ \hspace{3cm} \mbox{L}_q=0.32$} & N/A & $\chi$ quark model \tabularnewline
\hline
\cite{Gluck:2000dy} & 5 & \multicolumn{4}{|l|}{\begin{tabular}{@{}c@{}} \hspace{3cm}$\mbox{L}_{q+g}=0.18$ \\ \hspace{3cm}$\mbox{L}_{q+g}=0.08$ \end{tabular} }& \begin{tabular}{@{}c@{}} Standard GRSV 2000 \\ SU(3)$_f$ breaking\end{tabular} \tabularnewline
\hline
\cite{Guidal:2004nd} & 2 &
\begin{tabular}{@{}c@{}} -0.12(2) \\ 0.29 \end{tabular} &
\begin{tabular}{@{}c@{}} 0.20(2) \\ -0.03 \end{tabular} &
\begin{tabular}{@{}c@{}} 0.07(5) \\ 0.02 \end{tabular} &
N/A & GPD model \tabularnewline
\hline
\cite{Diehl:2004cx} & 4 &
\begin{tabular}{@{}c@{}} -0.26(1) \\ 0.15(3) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.17(3) \\ -0.05(4) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
\begin{tabular}{@{}c@{}} Valence \\ contr. only \end{tabular} &
\begin{tabular}{@{}c@{}} GPD constrained w/ \\ nucl. form factors \end{tabular} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \cite{Wakamatsu:2006dy} \\ \cite{Wakamatsu:2007ar} \end{tabular} &
$\infty$ & \multicolumn{3}{|l|}{\begin{tabular}{@{}c@{}} $L_{u+d}=0.050$ \\ $J_{u+d}=0.236$ \end{tabular} }&
\begin{tabular}{@{}c@{}} Valence \\ contr. only \end{tabular} &
\begin{tabular}{@{}c@{}} $\chi$ quark solit. \\ mod. $n_f=6$ \end{tabular}\tabularnewline
\hline
\begin{tabular}{@{}c@{}}\cite{Myhrer:2007cf}, \\ \cite{Thomas:2008ga} \end{tabular}& 4 &
\begin{tabular}{@{}c@{}} -0.005(60) \\ 0.405(57) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.107(33) \\ -0.113(26) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} quark model \\ w/ pion cloud \end{tabular} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \textbf{\cite{Ellinghaus:2005uc}} \\ \textbf{\cite{Mazouz:2007aa}} \end{tabular} & \textbf{1.9} &
\begin{tabular}{@{}c@{}} \textbf{-0.03(23)} \\ \textbf{0.38(23)} \end{tabular} &
\begin{tabular}{@{}c@{}} \textbf{0.11(15)} \\ \textbf{-0.11(15)} \end{tabular} &
\begin{tabular}{@{}c@{}} \textbf{-} \\ \textbf{-} \end{tabular} &
\textbf{N/A} & \begin{tabular}{@{}c@{}} \textbf{JLab and HERMES} \\ \textbf{DVCS data} \end{tabular} \tabularnewline
\hline
\cite{Goloskokov:2008ib} & 4 &
\begin{tabular}{@{}c@{}} -0.17(4) \\ 0.24(3) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.24(3) \\ 0.02(3) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.07(6) \\ 0.02(3) \end{tabular} &
N/A & GPD model \tabularnewline
\hline
\cite{Garvey:2010fi} & - & \multicolumn{3}{|l|}{\begin{tabular}{@{}c@{}} $l_{u+d+s}=0.147(27)$ \\ $J_{u+d+s}=0.337(28)$ \end{tabular}} &
N/A & \begin{tabular}{@{}c@{}} quark model+unpol. sea \\ asym. (Garvey relation) \end{tabular} \tabularnewline
\hline
\begin{tabular}{@{}c@{}} \cite{Altenbuchinger:2010sz} \end{tabular}& $\approx0.2$ & \begin{tabular}{@{}c@{}} 0.34(13) \\ 0.72(14) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.19(13) \\ 0.04(14) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A &
\begin{tabular}{@{}c@{}}Gauge-invariant \\ cloudy bag model \end{tabular} \tabularnewline
\hline
\cite{Bacchetta:2011gx} & 4 &
\begin{tabular}{@{}c@{}} -0.166(15) \\ 0.244(11) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.235(12) \\ 0.015(6)$(^{20}_{~5})$ \end{tabular} &
\begin{tabular}{@{}c@{}} 0.062($^5_9$) \\ 0.012 $(^{2}_{8})$ \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} single spin \\ trans. asy. \end{tabular} \tabularnewline
\hline
\cite{Lorce:2011kd} & - &
\begin{tabular}{@{}c@{}} 0.071 \\ 0.569 \end{tabular} &
\begin{tabular}{@{}c@{}} 0.055 \\ -0.069 \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} LC constituent \\ quark model \end{tabular} \tabularnewline
\hline
\cite{Lorce:2011kd} & - &
\begin{tabular}{@{}c@{}} -0.008 \\ 0.566 \end{tabular} &
\begin{tabular}{@{}c@{}} 0.077 \\ -0.066 \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} $\chi$ quark \\ soliton model \end{tabular} \tabularnewline
\hline
\cite{GonzalezHernandez:2012jv} & 4 &
\begin{tabular}{@{}c@{}} -0.12(11) \\ 0.286(107) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.17(2) \\ -0.049(7) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} GPD constrained w/ \\ nucl. form factors \end{tabular} \tabularnewline
\hline
\cite{Diehl:2013xca} & 4 &
\begin{tabular}{@{}c@{}} -0.18(3) \\ $0.230(^9_{24})$ \end{tabular} &
\begin{tabular}{@{}c@{}} 0.21(3) \\ $-0.004(^{10}_{16})$ \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} GPD constrained w/ \\ nucl. form factors \end{tabular} \tabularnewline
\hline
\cite{Brodsky:2014yha} & 5 & \multicolumn{3}{|l|}{
\begin{tabular}{@{}c@{}} $\mbox{L}_{u+d+s}=0.25$ \\ $J_{u+d+s}=0.31$\end{tabular}} &
N/A & LFHQCD. \tabularnewline
\hline
\cite{Bijker:2014ila} & - & \multicolumn{3}{|l|}{$l_{u+d+s}=0.221(41)$, $J_{u+d+s}=0.36(7)$} &
N/A & \begin{tabular}{@{}c@{}} unquenched \\ quark model \end{tabular} \tabularnewline
\hline
\cite{Gutsche:2016gcd} & 1 &
\begin{tabular}{@{}c@{}} 0.055 \\ 0.358 \end{tabular} &
\begin{tabular}{@{}c@{}} -0.001 \\ -0.010 \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & LF quark model \tabularnewline
\hline
\cite{Dahiya:2016wjf} & $\approx 1$ &
\begin{tabular}{@{}c@{}} 0.265 \\ 0.777 \end{tabular} &
\begin{tabular}{@{}c@{}} -0.066 \\ -0.265 \end{tabular} &
\begin{tabular}{@{}c@{}} 0 \\ -0.012 \end{tabular} &
N/A & chiral quark model \tabularnewline
\hline
\cite{Chakrabarti:2016yuw} & - &
\begin{tabular}{@{}c@{}} -0.3812 \\ 0.565 \end{tabular} &
\begin{tabular}{@{}c@{}} -0.4258 \\ -0.030 \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
N/A & \begin{tabular}{@{}c@{}} AdS/QCD scalar \\ quark-diquark model \end{tabular} \tabularnewline
\hline
\end{tabular}
\vspace{-0.3cm}
\caption{\label{table OAM 1}\small Phenomenological results on quark
$\mbox{L}_q=\mbox{L}_u+\mbox{L}_d+\mbox{L}_s$ and total angular momenta
$J_q=\mbox{L}_q+\Delta \Sigma_q/2$. Results are in the $\overline{MS}$ scheme. They use
different definitions of $\mbox{L}_q$, and may thus not be directly comparable, see Section~\ref{SSR components}.
The list is ordered chronologically and is not comprehensive.
}}
\end{table}
\noindent
\begin{table}
\center
\scriptsize{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Ref. & \begin{tabular}{@{}c@{}} $Q^{2}$ \\ (GeV$^{2}$) \end{tabular}&
\begin{tabular}{@{}c@{}} $L_{u}$ \\ $J_{u}$ \end{tabular} &
\begin{tabular}{@{}c@{}} $L_{d}$ \\ $J_{d}$ \end{tabular} &
\begin{tabular}{@{}c@{}} $L_{s}$ \\ $J_{s}$ \end{tabular} &
\begin{tabular}{@{}c@{}} disc. \\ diag.? \end{tabular} & Remarks\tabularnewline
\hline
\hline
\cite{Mathur:1999uf} & 3 & \multicolumn{3}{|l|}{$L_{u+d+s}=0.17(6)$, $J_{u+d+s}=0.30(7)$} &
yes & \begin{tabular}{@{}c@{}} U. Kentucky group. Quenched \\ calc. w/ $\chi$ extrap. \end{tabular} \tabularnewline
\hline
\cite{Hagler:2003jd} & 4 & \multicolumn{3}{|l|}{$ \hspace{3cm} J_{q}=0.338(4)$} & No & LHPC 2003. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Gockeler:2003jfa} & 4 &
\begin{tabular}{@{}c@{}} -0.05(6) \\ 0.37(6) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.08(4) \\ -0.04(4) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & \begin{tabular}{@{}c@{}} QCDSF. u, d only. Quenched \\ calc. w/ $\chi$ extrap. \end{tabular} \tabularnewline
\hline
\cite{Dorati:2007bk} & 4 &
\begin{tabular}{@{}c@{}} -0.14(2) \\ 0.266(9) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.21(2) \\ -0.015(8) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & \begin{tabular}{@{}c@{}} CC$\chi$PT. u, d only. \\ W/ $\chi$ extrap.\end{tabular} \tabularnewline
\hline
\cite{Brommel:2007sb} & 4 &
\begin{tabular}{@{}c@{}} -0.18(2) \\ 0.230(8) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.22(2) \\ -0.004(8) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & QCDSF-UKQCD. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Bratt:2010jn} & 5 &
\begin{tabular}{@{}c@{}} -0.175(40) \\ 0.236(18) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.205(35) \\ 0.002(4) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & LHPC 2010. u, d only. $\chi$ extrap. \tabularnewline
\hline
\cite{Alexandrou:2011nr} & 4 &
\begin{tabular}{@{}c@{}} -0.141(30) \\ 0.189(29) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.116(30) \\ -0.047(28) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & \begin{tabular}{@{}c@{}} Twisted-Mass 2011 u, d only. \\ W/ $\chi$ extrap.\end{tabular} \tabularnewline
\hline
\cite{Alexandrou:2013joa} & 4 &
\begin{tabular}{@{}c@{}} -0.229(30) \\ 0.214(27) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.137(30) \\ -0.003(17) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
no & \begin{tabular}{@{}c@{}} Twisted-Mass 2013 u, d only. \\ $m_{\pi}$=0.213 GeV\end{tabular} \tabularnewline
\hline
\cite{Deka:2013zha} & 4 &
\begin{tabular}{@{}c@{}} -0.003(8) \\ 0.37(6) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.195(8) \\ -0.02(4) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.07(1) \\ 0.012(4) \end{tabular} &
yes & $\chi$QCD col. w/ $\chi$ extrap. \tabularnewline
\hline
\cite{Abdel-Rehim:2013wlz} & 4 &
\begin{tabular}{@{}c@{}} -0.208(95) \\ 0.202(78) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.078 (95)\\ 0.078(78) \end{tabular} &
\begin{tabular}{@{}c@{}} - \\ - \end{tabular} &
yes & Twisted-Mass 2013. Phys. q masses \tabularnewline
\hline
\cite{Alexandrou:2016mni} & 4 &
\begin{tabular}{@{}c@{}} -0.118(43) \\ 0.296(40) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.252(41) \\ 0.058(40) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.067(21) \\ 0.046(20)\end{tabular} &
yes & Twisted-Mass 2016. Phys. $\pi$ mass \tabularnewline
\hline
\cite{Alexandrou:2016tuo} & 4 &
\begin{tabular}{@{}c@{}} -0.104(29) \\ 0.310(26) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.249(27) \\ 0.056(26) \end{tabular} &
\begin{tabular}{@{}c@{}} 0.067(21) \\ 0.046(21) \end{tabular} &
yes & Twisted-Mass 2017. Phys. $\pi$ mass \tabularnewline
\hline
\end{tabular}
\caption{\label{table OAM 2} \small Same as Table~\ref{table OAM 1} but for LGT results.
}}
\end{table}
\FloatBarrier
\pagebreak
\huge{\bf{Lexicon and acronyms}}\normalsize
\vspace{10pt}
To make this review more accessible to non-specialists,
we provide here specific terms associated with the nucleon structure, with short explanations
and links to where they are first discussed in the review. For convenience, we also provide
the definitions of the acronyms used in this review.
\begin{itemize}
\item AdS/CFT: \emph{anti-de-Sitter/conformal field theory}.
\item AdS/QCD: \emph{anti-de-Sitter/quantum chromodynamics}.
\item anti-de-Sitter (AdS) space: a maximally symmetric space endowed with a constant negative curvature.
\item Asymptotic freedom: QCD's property that its strength decreases at short distances.
\item Asymptotic series: see Poincar\'{e} series.
\item $\beta$-function: the logarithmic derivative of $\alpha_{s}$: $\beta\left(\mu^{2}\right)=\frac{d\alpha_{s}\left(\mu^{2}\right)}{d\ln\left(\mu^{2}\right)}$
where $\mu$ is the \emph{subtraction point}. In the perturbative
domain, $\beta$ can be expressed as a perturbative series $\beta=-\frac{\alpha_{s}^{2}}{4\pi}\sum_{n=0}\left(\frac{\alpha_{s}}{4\pi}\right)^{n}\beta_{n}$.
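At lowest order, retaining only the first coefficient (the standard value $\beta_0 = 11 - 2n_f/3$, quoted here for completeness rather than taken from this entry), the evolution equation integrates to the familiar one-loop running coupling:

```latex
\begin{equation*}
\alpha_s\!\left(\mu^2\right) \;=\; \frac{4\pi}{\beta_0 \,\ln\!\left(\mu^2/\Lambda_s^2\right)},
\qquad \beta_0 = 11 - \tfrac{2}{3}\, n_f ,
\end{equation*}
```

which exhibits the divergence at $\mu = \Lambda_s$ discussed under the \emph{Landau pole} entry below.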
\item Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equations: the equations controlling the low-$x_{Bj}$ behavior of structure functions.
\item BBS: Brodsky-Burkardt-Schmidt.
\item BC: Burkhardt-Cottingham.
\item BLM: Brodsky-Lepage-Mackenzie. See Principle of Maximal Conformality (PMC).
\item CERN: Conseil Europ\'een pour la Recherche Nucl\'eaire.
\item $\chi$PT: chiral perturbation theory.
\item CEBAF: continuous electron beam accelerator facility.
\item CLAS: CEBAF large acceptance spectrometer.
\item COMPASS: common muon and proton apparatus for structure and spectroscopy.
\item Condensate (or Vacuum Expectation Value, VEV): the vacuum expectation value of a given local operator.
Condensates allow one to parameterize the nonperturbative \emph{OPE}'s power
corrections. Condensates and vacuum loop diagrams do not appear in the frame-independent
light-front Hamiltonian since all lines have $k^+ = k^0 + k^3 \ge 0$
and the sum of $+$ momenta is conserved at every vertex.
In the light-front formalism condensates are associated with physics of the hadron
wavefunction and are called ``in-hadron" condensates, which refers to physics
possibly contained in the higher LF Fock states of the hadrons \cite{Casher:1974xd}. In the case of the Higgs theory,
the usual Higgs VEV of the \emph{instant form} Hamiltonian is replaced by a ``zero mode", a background field with $k^+=0$ \cite{Brodsky:2012ku}.
\item Conformal behavior/theory: the behavior of a quantity or a theory
that is scale invariant. In a conformal theory the \emph{$\beta$-function} vanishes.
More rigorously, a conformal theory is invariant under both
dilatation and the special conformal transformations which involve coordinate inversion.
\item Cornwall-Norton moment: the moment $\int_0^1 x^N g(x,Q^2) dx$
of a structure function $g(x_{Bj},Q^2)$. See Mellin transform.
\item Constituent quarks: unphysical particles with a mass of approximately a third of the nucleon mass, the ingredients
of \emph{constituent quark} models. They provide the $J^{PC}$ quantum numbers describing the hadron.
Constituent quarks can be viewed as \emph{valence quarks} dressed by virtual pairs of partons.
\item DDIS: diffractive deep inelastic scattering.
\item DESY: Deutsches Elektronen-Synchrotron.
\item Dimensional transmutation: the emergence of a mass or momentum scale in a quantum theory with
a classical Lagrangian devoid of explicit mass or energy parameters \cite{Coleman:1973sx}.
\item DIS: deep inelastic scattering.
\item Distribution amplitudes: universal quantities describing the \emph{valence quark} structure of hadrons and nuclei.
\item Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations:
the equations controlling the $Q^2$ behavior of structure functions, except at extreme $x_{Bj}$ (low- and large-$x_{Bj}$).
The DGLAP equations are used in global determinations of parton distributions by evolving the distribution functions from an initial to a final scale.
\item DVCS: deeply virtual Compton scattering.
\item Effective charge: an effective coupling defined from a perturbatively calculable observable. It includes
all perturbative and relevant nonperturbative effects~\cite{Grunberg:1980ja}.
\item Effective coupling: the renormalized (running) coupling, in contrast
with the constant unphysical bare coupling.
\item EFL: Efremov-Leader-Teryaev.
\item Efremov-Radyushkin-Brodsky-Lepage (ERBL) evolution equations: the equations controlling the evolution of the \emph{Distribution amplitudes} in $\ln (Q^2)$.
\item EIC: electron-ion collider.
\item EMC: european muon collaboration.
\item Factorization scale: the scale at which nonperturbative effects become negligible.
\item Factorization theorem: the ability to separate at short distance the perturbative coupling of the probe to the nucleon, from
the nonperturbative nucleon structure~\cite{factorization theorem}.
\item Freezing: the loss of scale dependence of finite $\alpha_{s}$ in the infrared. See
also conformal behavior.
\item Gauge link or link variable: in Lattice QCD, the segment(s) linking two lattice sites to which a unitary matrix is
associated to implement gauge invariance. While quarks reside at the lattice sites, gauge links effectively represent the gluon field.
Closed links are \emph{Wilson loops} used to construct the LGT Lagrangian.
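To make the role of link variables concrete, a toy compact U(1) model in two dimensions suffices: the phase attached to each link transforms at its two endpoints, so any closed loop of links is gauge invariant. The lattice size and random phases below are arbitrary illustrative choices (actual LGT uses SU(3) matrices rather than U(1) phases):

```python
import cmath, random

random.seed(1)
L = 4  # small 2D periodic lattice of U(1) link phases (a toy, not an SU(3) simulation)
link = {(x, y, mu): cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
        for x in range(L) for y in range(L) for mu in (0, 1)}

def plaquette(x, y):
    """Smallest Wilson loop: ordered product of the four links around one square."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return (link[(x, y, 0)] * link[(xp, y, 1)]
            * link[(x, yp, 0)].conjugate() * link[(x, y, 1)].conjugate())

before = plaquette(0, 0)

# Gauge transformation: rotate every site by a random phase g(n); links map as
# U_mu(n) -> g(n) U_mu(n) g(n+mu)^dagger.  Closed loops must be unchanged.
g = {(x, y): cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
     for x in range(L) for y in range(L)}
for (x, y, mu) in list(link):
    nxt = ((x + 1) % L, y) if mu == 0 else (x, (y + 1) % L)
    link[(x, y, mu)] = g[(x, y)] * link[(x, y, mu)] * g[nxt].conjugate()

after = plaquette(0, 0)
print(abs(after - before) < 1e-12)  # True: Wilson loops are gauge invariant
```

The same cancellation of site phases is what makes the trace of any Wilson loop a legitimate (gauge-invariant) building block for the lattice action.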
\item GDH: Gerasimov-Drell-Hearn.
\item Ghosts: unphysical fields. For example, in certain gauges
in QED and QCD, such as the Feynman gauge, there are four vector-boson fields:
two transversely polarized bosons (photons and gluons, respectively),
a longitudinally polarized one, and
a scalar one with a negative metric. The latter is referred to as a ghost photon/gluon and is unphysical since it does not
represent an independent degree of freedom: while vector bosons have in principle four spin degrees of freedom,
only three are independent due to the additional constraint from gauge invariance.
In Yang--Mills theories, Faddeev--Popov ghosts are fictitious particles of spin zero that obey
Fermi--Dirac statistics (negative-metric particles). These characteristics are chosen so that the
ghost propagator complements the non-transverse term in the gluon propagator to
make it transverse, and thus ensure current conservation.
In radiation or Coulomb gauge, the scalar and longitudinally polarized vector bosons
are replaced by the Coulomb interaction.
Axial gauges in which vector bosons are always transverse, in particular the LC gauge $A^+=0$, can alternatively
be used to avoid introducing ghosts.
\item GPD: generalized parton distributions.
\item GTMD: generalized transverse momentum distributions.
\item Hard reactions or hard scattering: high-energy processes, in particular in which the quarks are resolved.
\item HIAF: high intensity heavy ion accelerator facility.
\item Higher-twist: see \emph{Twist}.
\item HLFHS: holographic light-front hadron structure collaboration.
\item IMF: infinite momentum frame.
\item Instant form, or instant time quantization: the traditional second quantization of a field theory,
done at instant time $t$; one of the forms of relativistic dynamics introduced by Dirac.
See \emph{Light-front quantization} and Sec.~\ref{LC dominance and LF quantization}.
\item JAM: JLab angular momentum collaboration.
\item JINR: Joint Institute for Nuclear Research.
\item JLab: Jefferson Laboratory.
\item Landau pole, Landau singularity or Landau ghost: the point where a perturbative coupling
diverges. At first order (1-loop) in pQCD, this occurs at the \emph{scale parameter}
$\Lambda_s$. The value can depend on the choice of renormalization scheme, the order $\beta_i$
at which the coupling series is estimated, the number of flavors $n_f$
and the approximation chosen to solve the QCD $\beta$ equation. The Landau pole is unphysical.
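A short numerical illustration of this entry at one loop: the one-loop formula, the coefficient $\beta_0 = 11 - 2n_f/3$ and the reference value $\alpha_s(M_Z)\approx 0.118$ are standard inputs, not specific to this review, and higher orders would shift $\Lambda_s$ considerably:

```python
import math

def alpha_s_one_loop(mu2, lam2, nf):
    """One-loop running coupling alpha_s(mu^2) = 4*pi / (beta0 * ln(mu^2/Lambda^2))."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(mu2 / lam2))

def lambda_from_alpha(alpha_ref, mu_ref, nf):
    """Invert the one-loop formula for the scale where alpha_s diverges (the Landau pole):
    Lambda = mu * exp(-2*pi / (beta0 * alpha_s(mu^2)))."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return mu_ref * math.exp(-2.0 * math.pi / (beta0 * alpha_ref))

# Illustrative inputs: alpha_s(M_Z) ~ 0.118 at mu = 91.19 GeV with nf = 5 active flavors.
lam = lambda_from_alpha(0.118, 91.19, nf=5)
print(f"one-loop Lambda_s ~ {lam * 1000:.0f} MeV")
```

At this order the inversion gives a $\Lambda_s$ of order 100 MeV; the precise value depends on the scheme, loop order and $n_f$, exactly as the entry states.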
\item LC: light cone.
\item LEGS: laser electron gamma source.
\item LF: light-front.
\item LFHQCD: light-front holographic QCD.
\item Light-front quantization: second quantization of a field theory done at fixed LF-time $\tau$,
rather than at \emph{instant time} $t$; one of the relativistic forms introduced by Dirac.
The equal LF-time condition defines a plane, rather than a cone, tangent to the light-cone;
hence the name ``light-front''.
See \emph{Instant form} and Sec.~\ref{LC dominance and LF quantization}.
\item LFWF: light-front wave function.
\item LGT: lattice gauge theory.
\item LO: leading order.
\item LSS: Leader-Sidorov-Stamenov.
\item LT: longitudinal-transverse.
\item MAMI: Mainz Microtron.
\item Mellin transform: the moment $\int _0^1 x^N g(x,Q^2) dx$, typically of a structure function $g(x_{Bj},Q^2)$.
It transforms $g(x_{Bj},Q^2)$ to Mellin space ($N,Q^2$), with $N$ the moment's order.
Its advantages are 1) that the $Q^2$-evolution of moments is simpler than that
of the structure functions themselves, since the nonperturbative $x_{Bj}$-dependence is integrated over.
Furthermore, convolutions of PDFs with splitting functions (see
Eqs.~(\ref{g_1 LT evol})--(\ref{gluon LO evol})) become simple products in Mellin space.
The structure functions are then recovered by inverse transforming back to ($x_{Bj},Q^2$) space; and 2)
that low-$N$ moments are computable on the lattice with smaller noise than (non-local) structure functions.
Structure functions can be obtained by inverse transforming the moments of order 1 to $N$, if $N$ is large enough.
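As a numerical illustration of the moment definition: the toy shape $g(x)=x(1-x)^3$ below is arbitrary, chosen only so that every moment has a closed form via the Euler Beta function, which the quadrature reproduces:

```python
import math

def mellin_moment(N, a=1.0, b=3.0, steps=2000):
    """Cornwall--Norton moment int_0^1 x^N g(x) dx of a toy shape g(x) = x^a (1-x)^b,
    evaluated with composite Simpson's rule (steps must be even)."""
    f = lambda x: x ** (N + a) * (1.0 - x) ** b
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def mellin_exact(N, a=1.0, b=3.0):
    """The same moment in closed form: the Euler Beta function B(N+a+1, b+1)."""
    return math.gamma(N + a + 1) * math.gamma(b + 1) / math.gamma(N + a + b + 2)

for N in range(4):
    print(N, mellin_moment(N), mellin_exact(N))
```

In a real analysis the numerical moments would then be evolved in $Q^2$ (a simple multiplicative operation in Mellin space) before inverse transforming.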
\item NICA: nuclotron-based ion collider facilities.
\item NLO: next-to-leading order.
\item NNLO: next-to-next-to-leading order.
\item OAM: orbital angular momentum.
\item Operator Product Expansion (OPE). See also higher-twist:
the \emph{OPE} uses the \emph{twist} of effective operators to predict the power-law fall-off of an amplitude.
It thus can be used to distinguish
logarithmic leading \emph{twist} perturbative corrections from the $1/Q^{n}$ \emph{power corrections}. The \emph{OPE}
typically does not provide values for the nonperturbative \emph{power correction} coefficients.
\item Optical Theorem: the relation between a cross-section and its corresponding photo-absorption amplitude.
Generally speaking, the dispersion of a beam is related to the transition amplitude.
This results from the \emph{unitarity} of a reaction. The theorem expresses
the fact that the dispersive part of a process (the cross-section) is proportional
to the imaginary part of the transition amplitude. The
case is similar to classical optics, where complex refraction indices
are introduced to express the dispersion of a beam of light in an imperfectly
transparent medium. This explains the name of the theorem.
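For a single elastic partial wave the theorem reduces to $\mathrm{Im}\,f = k|f|^2$, so that $\sigma_{tot}=(4\pi/k)\,\mathrm{Im}\,f$ coincides with $4\pi|f|^2$; a few lines verify this numerically (the phase shifts and momentum below are arbitrary illustrative numbers):

```python
import cmath, math

def partial_wave_amplitude(delta, k):
    """Elastic amplitude of a single partial wave with phase shift delta:
    f = e^{i delta} sin(delta) / k  (standard unitary parameterization)."""
    return cmath.exp(1j * delta) * math.sin(delta) / k

# Elastic unitarity (the optical theorem for this amplitude): Im f = k |f|^2.
k = 1.7
for delta in (0.3, 1.0, 2.2):
    f = partial_wave_amplitude(delta, k)
    print(f"delta={delta}: Im f = {f.imag:.6f}, k|f|^2 = {k * abs(f) ** 2:.6f}")
```

The two columns agree for any phase shift, which is exactly the statement that the absorptive part of the forward amplitude fixes the total cross-section.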
\item PDF: parton distribution functions.
\item Poincar\'{e} series (also Asymptotic series). See also ``renormalons".
A series that converges up to a certain order and then diverges. The series reaches its best convergence at
order $N_{b}$ and then diverges for orders $N\gtrsim N_{b}+\sqrt{N_{b}}$.
Quantum Field Theory series are typically asymptotic and converge
up to an order $N_b \simeq 1/a$, with $a$ the expansion parameter.
IR \emph{renormalons} generate an $n!\beta^{n}$ factorial growth
of the $n$th coefficient in \emph{nonconformal} ($\beta \neq 0$) theories. Perturbative calculations
to high order ($\alpha_{s}^{20}$) have been performed on the lattice
\cite{the:Renormalon growth} to check
the asymptotic behavior of QCD series. Factorial growth is seen up
to the 20th order of the calculated series.
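The behavior described here can be reproduced with the classic Euler example $f(x)=\int_0^\infty e^{-t}/(1+xt)\,dt$, whose Poincar\'e series is $\sum_n (-1)^n n!\,x^n$ (a textbook case, not QCD itself); the partial sums improve until $N_b\simeq 1/x$ and then diverge:

```python
import math

def euler_integral(x, tmax=60.0, steps=60000):
    """f(x) = int_0^inf e^{-t}/(1+x t) dt, whose asymptotic (Poincare) series is
    sum_n (-1)^n n! x^n.  Composite Simpson quadrature on a truncated range."""
    g = lambda t: math.exp(-t) / (1.0 + x * t)
    h = tmax / steps
    s = g(0.0) + g(tmax)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

def partial_sum(x, order):
    """Truncation of the asymptotic series at the given order."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(order + 1))

x = 0.1
exact = euler_integral(x)
errors = [abs(partial_sum(x, n) - exact) for n in range(25)]
best = min(range(25), key=lambda n: errors[n])
print("best truncation order:", best)  # near 1/x = 10; beyond it the error grows
```

The minimal error occurs near order $1/x$, and by order 24 the factorially growing terms have destroyed the approximation, mirroring the $N_b \simeq 1/a$ estimate above.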
\item Positivity constraint: the requirement on PDFs that scattering cross-sections must be positive.
\item Power corrections. See ``Higher-twist" and ``Renormalons".
\item pQCD: perturbative quantum chromodynamics.
\item Principle of Maximal Conformality (PMC): a method used to set the \emph{renormalization scale},
order-by-order in perturbation theory, by shifting all $\beta$ terms in the pQCD series into the
\emph{renormalization scale} of the running QCD coupling at each order. The resulting coefficients of the
series then match the coefficients of the corresponding \emph{conformal} theory with $\beta=0$. The PMC
generalizes the Brodsky--Lepage--Mackenzie (BLM) method to all orders. In the Abelian $N_C\to 0$ limit, the
PMC reduces to the standard Gell-Mann--Low method used for scale setting in QED \cite{Brodsky:1997jk}.
\item Pure gauge sector, pure Yang--Mills or pure field: a non-Abelian field theory
without fermions. See also the \emph{quenched} approximation.
\item PV: parity violating.
\item PWIA: plane wave impulse approximation.
\item QCD: quantum chromodynamics.
\item QCD counting rules: the asymptotic constraints imposed on form factors and transition amplitudes by the minimum number of partons involved in the elastic scattering.
\item QCD scale parameter $\Lambda_s$: the UV scale ruling the energy-dependence
of $\alpha_{s}$. It also provides the
scale at which $\alpha_{s}$ is expected to be large, and nonperturbative
treatment of QCD is required~\cite{Deur:2016tte}.
\item QED: quantum electrodynamics.
\item Quenched approximation: calculations where the fermion loops are neglected.
It differs from the \emph{pure gauge}, pure Yang--Mills case in that heavy (static)
quarks are present.
\item Renormalization scale: the argument of the running coupling. See also ``Subtraction point".
\item Renormalon: the residual between the physical value of an observable
and the \emph{Asymptotic series} of the observable at its best convergence
order $n \simeq 1/\alpha_{s}$. The terms of a pQCD calculation which involve the \emph{$\beta$-function} typically diverge as $n!$: {\it i.e.}, as a renormalon.
Borel summation techniques indicate that IR renormalons can often be interpreted as \emph{power
corrections}. Thus, IR renormalons should be related to the \emph{higher
twist} corrections of the \emph{OPE} formalism \cite{the:Renormalons}.
The existence of IR renormalons in \emph{pure gauge} QCD is supported by lattice
QCD \cite{the:Renormalon growth}. See also ``Asymptotic series".
\item RHIC: relativistic heavy ion collider.
\item RSS: resonance spin structure.
\item Sea quarks: quarks stemming from gluon splitting $g \to q \bar{q}$ and from QCD's vacuum fluctuations.
This second contribution is frame dependent and avoided in the light-front formalism.
Evidence for \emph{sea quarks} making up the nucleon structure in addition to the \emph{valence quarks}
came from DIS data yielding PDFs that strongly rise at low-$x_{Bj}$.
\item SIDIS: semi-inclusive deep inelastic scattering.
\item SLAC: Stanford Linear Accelerator Center.
\item SMC: spin muon collaboration.
\item SoLID: solenoidal large intensity device.
\item SSA: single-spin asymmetry.
\item Subtraction point $\mu$: the scale at which the renormalization
procedure subtracts the UV divergences.
\item Sum rules: relations between a moment of a structure function, a form factor or a photoabsorption cross-section, and static properties of the nucleon. A more general definition includes
relations of moments to double deeply virtual Compton scattering amplitudes rather than to a static property.
\item Tadpole corrections: in the context of lattice QCD, tadpole terms
are unphysical contributions to the lattice action which arise from
the discretization of space-time. They contribute at NLO of the bare
coupling $g^{bare}=\sqrt{4\pi\alpha_{s}^{bare}}$ to the expression
of the \emph{gauge link} variable $U_{\overrightarrow{\mu}}$. (The LO corresponds
to the continuum limit.) To suppress these contributions, one can redefine
the lattice action by adding larger \emph{Wilson loops} or by rescaling the
\emph{link variable}.
\item TMD: transverse momentum distributions.
\item TT: transverse-transverse.
\item TUNL: Triangle Universities Nuclear Laboratory.
\item Twist: the twist $\tau$ of an elementary operator is given by its dimension minus its spin.
For example, the quark operator $\psi$ has dimension $3/2$, spin $1/2$ and thus $\tau=1$.
For elastic scattering at high $Q^2$, LF QCD gives $ \tau=n-1$, where $n$ is the number of effective constituents of a hadron.
For DIS, structure functions are dominated by $ \tau=2$, the \emph{leading-twist}.
\emph{Higher-twists} are $Q^{2-\tau}$ \emph{power corrections} to those, typically
derived from the \emph{OPE} analysis of the nonperturbative effects of multiparton interactions.
\emph{Higher-twist} is sometimes interpretable as a kinematical phenomenon, {\it e.g.}, the mass $M$ of a
nucleon introduces a \emph{power correction} beyond the pQCD scaling violations, or as a dynamical
phenomenon, {\it e.g.}, the intermediate-distance transverse forces that confine
quarks \cite{Burkardt:2008ps, Abdallah:2016xfk}.
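The bookkeeping in this entry can be checked against the canonical dimensions of the elementary QCD fields (standard field-theory values, quoted here only for illustration); in particular the quark bilinear reproduces the leading twist $\tau=2$ of DIS:

```python
# Twist tau = (mass) dimension - spin, using the canonical 4D dimensions
# of the field operators (standard textbook values, not from this review).
fields = {
    "quark field psi":                  {"dimension": 1.5, "spin": 0.5},  # tau = 1
    "gluon field strength F":           {"dimension": 2.0, "spin": 1.0},  # tau = 1
    "quark bilinear psi-bar gamma psi": {"dimension": 3.0, "spin": 1.0},  # tau = 2: DIS leading twist
}

def twist(op):
    return op["dimension"] - op["spin"]

for name, op in fields.items():
    print(f"{name}: tau = {twist(op)}")
```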
\item Unitarity: conservation of probability:
the sum of the probabilities that a scattering occurs via any reaction, or does not occur, must be 1.
\item Unquenched QCD: see \emph{pure gauge} sector and \emph{quenched} approximation.
\item Valence quarks: the nucleon quark content once all quark-antiquark pairs (\emph{sea quarks}) are excluded.
Valence quarks determine the correct quantum numbers of hadrons.
\item VEV: vacuum expectation value.
\item VVCS: doubly virtual Compton scattering.
\item Wilson line: a Wilson line represents all of the final-state interactions between the
struck quark in DIS and the target spectators. It generates both leading and higher \emph{twist} effects:
for example the exchange of a gluon between the struck quark and the proton's spectators after the
quark has been struck yields the Sivers effect~\cite{Sivers:1989cc}. It also contributes to DDIS at leading twist.
\item Wilson Loops: closed paths linking various sites in a lattice \cite{Wilson:1974sk}.
They are used to define the lattice action and \emph{Tadpole corrections}.
(See Section \ref{LGT}.)
\end{itemize}
\newpage
\section{Introduction}
By describing the competition between kinetically induced electron delocalization and electron localization due to the Coulomb repulsion, the non-trivial Hubbard model remains one of the most challenging systems in condensed matter physics~\cite{hubbard_electron_1963}.
Indeed, despite its simplicity, no general and analytic solution exists.
Besides exact results at certain limits such as the Nagaoka theorem~\cite{nagaoka_ground_1965} close to half-band filling or the Bethe Ansatz~\cite{lieb_absence_1968} in one dimension, different approximations, strategies and numerical algorithms have been designed to solve this cornerstone problem on classical computers.
More precisely, one could mention density functional~\cite{lopez-sandoval_density-matrix_2002,lima_density_2003} or Green's functions~\cite{georges_dynamical_1996,senechal_spectral_2000,potthoff_self-energy-functional_2003} based theories, renormalization methods~\cite{white_density_1992} or more recently divide and conquer strategies~\cite{knizia_density_2012,sekaran_householder_2021}, to cite but a few.
In that context, the emergence of quantum computers has revived the hope of obtaining accurate physically relevant quantities for any dimension, size, regime and filling.
Indeed, a growing interest in developing quantum algorithms to solve the Hubbard model has emerged in the literature~\cite{wecker_solving_2015,wecker_towards_2015,kivlichan_quantum_2018,reiner_finding_2019,montanaro_compressed_2020,cai_resource_2020,cade_strategies_2020,mineh_solving_2022,martin_simulating_2022,stanisic_observing_2022,dallaire-demers_low-depth_2018,dallaire-demers_application_2020,suchsland_simulating_2022,gard_classically_2022,kivlichan_improved_2020,campbell_early_2022,clinton_hamiltonian_2021}.
On the one hand, most of the proposed algorithms target Noisy Intermediate Scale Quantum (NISQ) devices and rely mainly on hybrid classical/quantum strategies such as the Variational Quantum Eigensolver (VQE)~\cite{peruzzo_variational_2014,bharti_noisy_2022}. Roughly speaking, VQE consists in applying a parameterized unitary transformation to an
easy-to-prepare initial state, generally the Hartree--Fock state, on the quantum device, while the variational parameters are optimized on a classical computer.
Several types of Ansatz have been proposed to design this unitary transformation, either physically motivated, such as the variational Hamiltonian Ansatz~\cite{wecker_towards_2015,kivlichan_quantum_2018,reiner_finding_2019,montanaro_compressed_2020,cai_resource_2020,cade_strategies_2020,mineh_solving_2022,martin_simulating_2022,stanisic_observing_2022} and the unitary coupled cluster Ansatz~\cite{dallaire-demers_low-depth_2018}, or hardware efficient ones~\cite{dallaire-demers_application_2020,suchsland_simulating_2022,gard_classically_2022}. Most of these approaches, as they are based on an initial Hartree--Fock state, are particularly relevant for the weakly correlated regime~\cite{cade_strategies_2020,martin_simulating_2022}. In any case, a compromise between the desired accuracy and the computational cost has to be reached.
It depends in particular on the Ansatz circuit depth, the number of CNOT gates and the number of variational parameters, for which the development of improved or new types of Ansatz is needed.
On the other hand, some algorithms target the expected long-term fault-tolerant devices~\cite{kivlichan_improved_2020,campbell_early_2022,clinton_hamiltonian_2021},
and rely for instance on Hamiltonian propagation for which the associated quantum circuits are much deeper than those devoted to the NISQ era.
Concerning the application of a unitary transformation onto an easy-to-prepare known state, the unitary Van--Vleck (VV) similarity transformation, developed in the framework of many-body perturbation theory~\cite{van_vleck__1929,jordahl_effect_1934,foldy_on_1950,primas_generalized_1963,brandow_formal_1979,shavitt_quasidegenerate_1980,bravyi_schriefferwolff_2011}, appears relevant to serve as a basis for new quantum algorithms.
In a few words, given a Hamiltonian $\hat{H} = \hat{H}_0 + \hat{V}$, where $\hat{H}_0$ is called the unperturbed Hamiltonian, whose eigenstates are known, and $\hat{V}$ is a perturbation, the VV similarity transformation aims to design perturbatively a unitary transformation $\hat{U} = e^{\hat{S}^{\rm VV}}$, where $\hat{S}^{\rm VV}$ is called the generator, that leads to an effective Hamiltonian $\bar{H}_{\rm eff}$ in the low-energy subspace of $\hat{H}_0$.
Ultimately, at infinite order of perturbation, the transformed Hamiltonian $\bar{H}=\hat{U}\hat{H}\hat{U}^{\dagger}$ is block-diagonal
and is reduced to $\bar{H}_{\rm eff}$ in the low-energy subspace of $\hat{H}_0$, such that the eigenvalues of $\bar{H}_{\rm eff}$ strictly match the lowest eigenvalues of $\hat{H}$.
A straightforward quantum algorithm follows, in which the ground state (or excited states) of a given Hamiltonian can be prepared on a quantum computer by applying $e^{\hat{S}^{\rm VV}}$ to the known ground state (or excited states) of the unperturbed Hamiltonian $\hat{H}_0$.
However, an explicit expression for $\hat{S}^{\rm VV}$ is in general unknown
and truncation of the perturbative order or approximations are mandatory.
Considering the non-interacting Hamiltonian as $\hat{H}_0$ and the electron-electron Coulomb repulsion as the perturbation, the VV similarity transformation is closely related to the unitary coupled cluster Ansatz~\cite{shavitt_many-body_2009}.
In the opposite limit, where $\hat{H}_0$ is the Coulomb repulsion operator and $\hat{V}$ is the non-interacting Hamiltonian, Schrieffer and Wolff (SW) derived an analytic form of $\hat{S}^{\rm VV}$ such that $\bar{H}$ is block-diagonal at the first order of perturbation~\cite{schrieffer_relation_1966}.
Moreover, they showed that at the limit of small perturbation, the Kondo model corresponds to the effective low-energy approximation of the Anderson model.
Following the work of SW, the Heisenberg model was also shown to be the effective Hamiltonian of the Hubbard model at half-band filling for large Coulomb repulsion strength~\cite{harris_single-particle_nodate,chao_kinetic_1977}.
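This large-$U$ mapping is easy to check numerically. The sketch below is a numerical aside that we add for illustration (not taken from the paper): it assumes the standard reduction of the homogeneous half-filled dimer to its singlet sector, in the basis of the low-energy singlet and the symmetric ionic state, and verifies that the exact ground-state energy approaches the Heisenberg value $-J = -4t^2/U$ as $U/t$ grows.

```python
import numpy as np

def hubbard_dimer_singlet(t, U):
    """Singlet-sector Hamiltonian of the homogeneous half-filled dimer
    (assumed 2x2 reduction; basis: low-energy singlet, symmetric ionic state)."""
    return np.array([[0.0, -2.0 * t],
                     [-2.0 * t, U]])

t = 1.0
for U in (10.0, 50.0, 250.0):
    E0 = np.linalg.eigvalsh(hubbard_dimer_singlet(t, U))[0]
    J = 4.0 * t**2 / U          # Heisenberg exchange coupling at large U/t
    print(U, E0, -J)            # E0 approaches -J as U/t grows
```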
Yet, improvements of the SW approximation can serve as a basis to approximate $\hat{S}^{\rm VV}$ and to construct an efficient and, hopefully, accurate quantum algorithm for the Hubbard model.
In that context, Zhang {\it et al.} proposed two quantum algorithms devoted to finding the VV unitary transformation in the context of spin chains~\cite{zhang_quantum_2022}.
The first one is a quantum phase estimation based algorithm that provides the exact transformation,
but which is only realizable in the fault-tolerant era.
The second one, more adapted to the NISQ era, is a hybrid quantum-classical algorithm
based on a variational approach where the unitary transformation (Ansatz) is built from the exponentiation of the commutator $[\hat{H}_0, \hat{V}]$, expressed as a linear
combination of Pauli operators.
In this contribution, we derive recursive relations for the perturbative expansion of $\bar{H}$ within the standard SW generator for the Hubbard dimer.
Building on these relations, we propose two modifications of this generator: a variational one with a single parameter, made possible by the recursive relations, and an iterative one in the spirit of the Foldy--Wouthuysen transformation~\cite{foldy_on_1950}.
Both modified SW transformations are shown to approximate, or even achieve exactly in the homogeneous case, the desired block-diagonalization at infinite order of perturbation, as the VV generator would provide.
As a proof of concept, we introduce two quantum algorithms associated with the modified SW transformations on the Hubbard dimer.
In particular, we show that, in contrast to most of the currently proposed Ansatz circuits, our strategies are relevant close to the strongly interacting regime.
Finally, in light of our findings, we discuss the perspective of generalizing our approach to larger Hubbard systems, which is left for future investigations.
\section{Van--Vleck similarity and standard Schrieffer--Wolff transformations}
Let us first recall the Van--Vleck
canonical perturbation theory following Shavitt and Redmon~\cite{shavitt_quasidegenerate_1980}.
Consider a Hamiltonian $\hat{H}$ with (unknown) orthonormal eigenvectors $\{\ket{\Psi_i}\}$ such that
\begin{equation}
\hat{H} |\Psi_i\rangle = E_i |\Psi_i\rangle,
\label{eq:eigenvaluepb}
\end{equation}
and another orthonormal basis set $\{\ket{\Phi_i}\}$, eigenvectors of another Hamiltonian $\hat{H}^0$ with the same dimension as $\hat{H}$, that is related
to $\{\ket{\Psi_i}\}$ by a unitary transformation,
\begin{equation}
| \Psi_i \rangle = \hat{U}^{\dagger} | \Phi_i \rangle = \sum_j | \Phi_j \rangle \langle \Phi_j | \hat{U}^{\dagger} | \Phi_i \rangle = \sum_j | \Phi_j \rangle U_{ij}^{\dagger}.
\end{equation}
The eigenvalues of $\hat{H}$
can be inferred as the elements of the diagonal representation of the similar Hamiltonian,
\begin{equation}\label{eq:UHU}
\bar{H}^{\rm VV} = \hat{U} \hat{H}\hat{U}^{\dagger},
\end{equation}
in the orthonormal basis $\{\ket{\Phi_i}\}$.
Thus, solving the eigenvalue problem in Eq.~(\ref{eq:eigenvaluepb}) is equivalent to searching for a unitary transformation $\hat{U}$ such that $\bar{H}^{\rm VV} = \hat{U} \hat{H}\hat{U}^{\dagger}$ is diagonal in a given basis set $\{\ket{\Phi_i}\}$.
The reasoning remains equivalent, though less restrictive, if solely a block-diagonalization in a target subspace is desired.
In other words, we are looking for an unknown Hamiltonian $\bar{H}^{\rm VV}$ with eigenvectors $ | \Phi_i \rangle$ that shares the same eigenvalues as $\hat{H}$.
If one focuses on the ground state $\ket{\Psi_0}$, it is enough to only block-diagonalize $\bar{H}$,
\begin{align}
\langle \Phi_i | \hat{U} \hat{H} \hat{U}^{\dagger} | \Phi_0\rangle & = \langle \Phi_0 | \hat{U} \hat{H} \hat{U}^{\dagger} | \Phi_i\rangle^{*} = 0\quad \forall i\neq 0, \\
\langle \Phi_0 | \hat{U} \hat{H} \hat{U}^{\dagger} | \Phi_0\rangle &= E_0.
\end{align}
Many $\hat{U}$ fulfill these conditions up to a unitary transformation acting only on the subspace of $\lbrace \ket{\Phi_i}\rbrace$ with $i \neq 0 $.\\
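These conditions are easy to visualize numerically. The following sketch (illustrative only, for a generic real symmetric matrix rather than the Hubbard Hamiltonian) builds $\hat{U}$ from the exact eigenvectors, checks that the couplings $\langle\Phi_i|\hat{U}\hat{H}\hat{U}^{\dagger}|\Phi_0\rangle$ vanish, and confirms that an extra rotation acting only on the complement of $|\Phi_0\rangle$ leaves both conditions untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0                  # generic real symmetric "Hamiltonian"

evals, W = np.linalg.eigh(H)         # columns of W: eigenvectors |Psi_i>
U = W.T                              # U maps each |Psi_i> onto the basis state |Phi_i>
Hbar = U @ H @ U.T                   # similarity-transformed Hamiltonian

couplings = Hbar[1:, 0]              # <Phi_i|Hbar|Phi_0>, i != 0
assert np.allclose(couplings, 0.0, atol=1e-10)
assert np.isclose(Hbar[0, 0], evals[0])   # ground-state energy recovered

# An additional unitary acting only on span{|Phi_i>, i != 0}
# preserves both conditions.
R = np.eye(n)
th = 0.3
R[1:3, 1:3] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
assert np.allclose((R @ Hbar @ R.T)[1:, 0], 0.0, atol=1e-10)
print(np.abs(couplings).max())
```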
Let us now consider the following decomposition of the Hamiltonian,
\begin{equation}\label{eq:H0_V}
\hat{H} = \hat{H}^0 + \hat{V},
\end{equation}
where $ \hat{H}^0 $ is diagonal in the $\{\ket{\Phi_i}\}$ basis set, i.e. $\langle \Phi_i | \hat{H}^0 | \Phi_j \rangle = E_i^{0} \delta_{ij}$.
If one wants to block-diagonalize $\bar{H}$ with respect to a given subspace $\Omega$, for instance the one that contains all degenerate ground states of $\hat{H}^0$, one can define the operator
\begin{equation}
\hat{P} = \sum_{i \in \Omega} |\Phi_i \rangle\langle \Phi_i |
\end{equation}
that projects onto $\Omega$, and its complementary projector
\begin{equation}
\hat{Q} = \hat{1} - \hat{P} = \sum_{i \notin \Omega} |\Phi_i \rangle\langle \Phi_i |.
\end{equation}
We note $\hat{O}_D = \hat{P}\hat{O}\hat{P} + \hat{Q}\hat{O}\hat{Q}$ the block-diagonal projection of an operator $\hat{O}$ and its complementary off-block-diagonal part $\hat{O}_X = \hat{P}\hat{O}\hat{Q} + \hat{Q}\hat{O}\hat{P}$.
Adopting the exponential form of the unitary transformation $\hat{U} = e^{\hat{G}}$, $\hat{G}$ being an anti-Hermitian generator with $\hat{G} = \hat{G}_X$ and $\hat{G}_D = 0$, we seek conditions for $\hat{G}$ such that $\bar{H}^{\rm VV} $ is block-diagonal, i.e. $\bar{H}_X^{\rm VV} = \hat{0}$.
Within the super-operator formalism~\cite{primas_generalized_1963},
$\bar{H}^{\rm VV}$ reads:
\begin{align}
\bar{H}^{\rm VV} & = e^{\hat{G}}\hat{H} e^{-\hat{G}} = \hat{H} + [\hat{G}, \hat{H}] + \frac{1}{2}[\hat{G}, [\hat{G}, \hat{H}] ] + \dots \nonumber \\
& = \sum_{n=0}^{\infty} \frac{1}{n!} \mathcal{G}^n(\hat{H}) = e^{\mathcal{G}}(\hat{H}),
\end{align}
where $\mathcal{G}(\hat{X}) = [\hat{G},\hat{X}]$.
By decomposing $e^{\mathcal{G}}(\hat{H}) = \cosh{\mathcal{G}}\,(\hat{H}) + \sinh{\mathcal{G}}\,(\hat{H})$, it follows that the condition $\bar{H}_X^{\rm VV} = \hat{0}$ is fulfilled for
\begin{equation}
[\hat{G}, \hat{H}^0] = -[\hat{G}, \hat{V}_D] - \sum_{n=0}^{\infty}c_n \mathcal{G}^{2n}(\hat{V}_X), \label{eq:VV_MBPT}
\end{equation}
where $c_n = 2^{2n}B_{2n}/(2n)!$ are functions of Bernoulli numbers $B_{2n}$. Eq.~(\ref{eq:VV_MBPT}) is the central equation of the VV canonical perturbation theory that defines the generator $\hat{G}$ such that $\bar{H}^{\rm VV} $ is block-diagonal,
thus expressing the eigenstates of $\hat{H}$ in terms of eigenstates of $\hat{H}^0$ through $\hat{G}$.
Using an order-by-order expansion of $\hat{G}$, i.e. $\hat{G} = \sum_{n = 1}^{\infty} \hat{G}^{(n)}$, conditions to cancel $\bar{H}_X^{\rm VV}$ can be obtained at each order as,
\begin{align}
& [\hat{G}^{(1)}, \hat{H}^0] = -\hat{V}_X, \label{eq:1storder} \\
& [\hat{G}^{(2)}, \hat{H}^0] = -[\hat{G}^{(1)}, \hat{V}_D],\\
& [\hat{G}^{(3)}, \hat{H}^0] = -[\hat{G}^{(2)}, \hat{V}_D] - \frac{1}{3}[\hat{G}^{(1)}, [\hat{G}^{(1)}, \hat{V}_X]], \\
& \dots \nonumber
\end{align}
It follows that $\bar{H}^{\rm VV} $ can also be expressed order by order as
\begin{equation}
\bar{H}^{\rm VV} = \hat{H}^0 + \hat{V}_D + \sum_{n=0}^{\infty}t_n\mathcal{G}^{2n+1}(\hat{V}_X),\label{eq:VV_barV}
\end{equation}
with $t_n = 2(2^{2n+2} -1) B_{2n+2}/(2n + 2)!$.
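These coefficients are simple to generate. The following check (our own aside, not part of the original derivation) computes the Bernoulli numbers from their standard recursion and verifies the first few values, e.g. $c_1 = 1/3$, which is precisely the factor appearing in the third-order condition above, and $t_0 = 1/2$; it also confirms numerically that the two series resum to $x\coth x$ and $\tanh(x/2)$, respectively.

```python
from fractions import Fraction
from math import comb, factorial, tanh

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n))
                 / (n + 1))
    return B

B = bernoulli(12)
c = [Fraction(4**n) * B[2*n] / factorial(2*n) for n in range(6)]
t_ = [2 * Fraction(4**(n+1) - 1) * B[2*n + 2] / factorial(2*n + 2)
      for n in range(5)]
print(c[:3], t_[:2])    # values: c = 1, 1/3, -1/45 ; t = 1/2, -1/24

# Generating-function consistency check for small x:
x = 0.3
assert abs(sum(float(cn) * x**(2*n) for n, cn in enumerate(c))
           - x / tanh(x)) < 1e-9
assert abs(sum(float(tn) * x**(2*n+1) for n, tn in enumerate(t_))
           - tanh(x / 2)) < 1e-9
```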
As mentioned in Ref.~[\onlinecite{shavitt_quasidegenerate_1980}], the Van--Vleck perturbation theory equations (\ref{eq:VV_MBPT}) and (\ref{eq:VV_barV}) are expressed in the domain of a Lie algebra, thus allowing an equivalent diagrammatic expansion.
Note that the convergence of perturbative series and the diagrammatic expansion has been thoroughly investigated in Ref.~[\onlinecite{bravyi_schriefferwolff_2011}], which also provides recursive relations to obtain the $n$-th order term
$\hat{G}^{(n)}$ of the VV generator $\hat{G}$ as a function of the previous $n-1$ terms.
In practice, finding both an analytic and a numerical form of $\hat{G}$ for a given $\hat{H}^0$ and $\hat{V}$ remains challenging, at least as demanding as the explicit diagonalization of $\hat{H}$.
From the perspective of developing quantum algorithms based on the VV formalism, one realizes that the number of terms in the generator drastically increases order by order,
thus leading to deeper circuits
and, consequently, to an increased complexity and
sensitivity to noise of the quantum algorithms.
To overcome this issue, we explore an alternative approach
which consists in using a truncated generator, the Schrieffer--Wolff generator, that is later modified by adding a variational parameter or by using an iterative process to compensate the resulting truncation error.
First of all, following Ref.~[\onlinecite{schrieffer_relation_1966}], let us recall the Schrieffer--Wolff transformation in the context of
the half-filled Hubbard model that we decompose as in Eq.~(\ref{eq:H0_V})
into a local part,
\begin{eqnarray}
\hat{H}^0 &=& \sum_{i \sigma}\mu_i \hat{n}_{i\sigma} + \sum_{i}U_i \hat{n}_{i\uparrow}\hat{n}_{i\downarrow},
\end{eqnarray}
and a non-local (kinetic) part,
\begin{eqnarray}
\hat{V} &=& -\dfrac{1}{2} \sum_{ i \neq j,\sigma} t_{ij} \left( \hat{\gamma}_{ij\sigma} + \hat{\gamma}_{ji\sigma}\right),
\end{eqnarray}
with $\hat{n}_{i\sigma} = \hat{c}_{i\sigma}^{\dagger}\hat{c}_{i\sigma}$ and $ \hat{\gamma}_{ij\sigma} = \hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}$,
and $\hat{c}^\dagger_{i\sigma}$ ($\hat{c}_{i\sigma}$) the
creation (annihilation) operator of an electron of spin $\sigma = \lbrace \uparrow, \downarrow \rbrace$ in site $i$.
This decomposition contrasts with the usual decomposition between the non-interacting part for which the solution is easily accessible and the non-trivial canonical (interacting) part.
Indeed, the ground state of $\hat{H}^0$ is degenerate at half filling for $U>0$, the ground manifold being spanned by all states having no double occupation (the so-called Heisenberg subspace in this paper).
Starting from the atomic limit ($U/t \rightarrow \infty$), Schrieffer and Wolff have proposed, in the original context of an Anderson Hamiltonian, to use the unitary transformation $\hat{U} = e^{\hat{S}}$
such that
\begin{equation}
\bar{H}^{\rm SW} = e^{\hat{S}}He^{-\hat{S}},
\end{equation}
that we denote simply $\bar{H}$ in the following to simplify notations, is block-diagonalized at first order of perturbation, i.e.
\begin{equation}
[\hat{S}, \hat{H}^0 ] = -\hat{V}.\label{eq:1storderS}
\end{equation}
We highlight that the above equation corresponds to the first order of perturbation of the VV relations, Eq.~(\ref{eq:1storder}), i.e. that the SW generator $\hat{S}$ block-diagonalizes the Hamiltonian only at first order, contrary to the VV generator $\hat{G}$.\\
Note that $\hat{V}_X = \hat{V}$ when the operator $\hat{P}$ projects onto the Heisenberg subspace.
It can be shown that under the SW condition (\ref{eq:1storderS}), $\hat{S}$ takes the following form,
\begin{equation}
\label{eq:SWS}
\hat{S} = \dfrac{1}{2}\sum_{i\neq j, \sigma}\hat{p}_{ij\bar{\sigma}}\left(\hat{\gamma}_{ij\sigma} - \hat{\gamma}_{ji\sigma}\right),
\end{equation}
with $\hat{p}_{ij\sigma}$ defined as
\begin{align}
\hat{p}_{ij \sigma} = \sum_{x=0}^3 \lambda_{ij\sigma,x}\,\hat{p}_{ij \sigma,x},
\end{align}
where $\hat{p}_{ij \sigma,0} = \left(1-\hat{n}_{i\sigma}\right)\left(1-\hat{n}_{j\sigma}\right) $, $\hat{p}_{ij \sigma,1} = \hat{n}_{i\sigma}\left(1-\hat{n}_{j\sigma}\right)$, $\hat{p}_{ij \sigma,2} = \left(1-\hat{n}_{i\sigma}\right)\hat{n}_{j\sigma}$, $\hat{p}_{ij \sigma,3} = \hat{n}_{i\sigma}\hat{n}_{j\sigma}$ are projectors, i.e. $\sum_{x=0}^3 \hat{p}_{ij \sigma,x} = \hat{1}$,
and
\begin{align}
\lambda_{ij\sigma,0} & = -\frac{t_{ij}}{\Delta \mu_{ij}}\;\; {\rm if}\; \Delta \mu_{ij}\neq 0;\;\; \lambda_{ij\sigma,0} = 0 \;\; {\rm else}, \label{eq:lamb0}\\
\lambda_{ij \sigma,1} & = -\frac{t_{ij}}{\Delta \mu_{ij}+U_i}\;\; {\rm if}\; \Delta \mu_{ij} + U_i\neq 0;\;\; \lambda_{ij\sigma,1} = 0 \;\; {\rm else}, \label{eq:lamb1}\\
\lambda_{ij \sigma,2} & = -\frac{t_{ij}}{\Delta \mu_{ij}-U_j}\;\; {\rm if}\; \Delta \mu_{ij} - U_j \neq 0;\;\; \lambda_{ij\sigma,2} = 0 \;\; {\rm else}, \label{eq:lamb2}\\
\lambda_{ij \sigma,3} & =-\frac{t_{ij}}{\Delta \mu_{ij} + \Delta U_{ij}}\;\; {\rm if}\; \Delta \mu_{ij} + \Delta U_{ij}\neq 0;\;\; \lambda_{ij\sigma,3} = 0 \;\; {\rm else}, \label{eq:lamb3}
\end{align}
with $\Delta \mu_{ij} = \mu_i - \mu_j$ and $\Delta U_{ij} = U_i - U_j$.
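For the homogeneous dimer ($\Delta\mu_{ij} = \Delta U_{ij} = 0$), these coefficients reduce to $\lambda_{ij\sigma,0} = \lambda_{ij\sigma,3} = 0$ and $\lambda_{ij\sigma,1} = -\lambda_{ij\sigma,2} = -t/U$, so that $\hat{p}_{ij\bar{\sigma}} = (t/U)(\hat{n}_{j\bar{\sigma}} - \hat{n}_{i\bar{\sigma}})$. The sketch below is our own numerical check (not part of the paper): it builds the corresponding operators as Jordan--Wigner matrices and verifies the SW condition $[\hat{S}, \hat{H}^0] = -\hat{V}$ within the half-filled $S_z = 0$ sector, where $\hat{V}_X = \hat{V}$.

```python
import numpy as np
from functools import reduce

# Spin-orbital ordering (1up, 1dn, 2up, 2dn); Jordan--Wigner matrices.
I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # annihilator on a single mode

def annih(p, n=4):
    return reduce(np.kron, [Z]*p + [sm] + [I2]*(n - p - 1))

c = [annih(p) for p in range(4)]
cd = [op.T for op in c]                          # real matrices: c^dag = c^T
num = [cd[p] @ c[p] for p in range(4)]

t, U = 1.0, 8.0
H0 = U * (num[0] @ num[1] + num[2] @ num[3])     # on-site repulsion (mu = 0)
V = -t * sum(cd[p] @ c[q] + cd[q] @ c[p] for p, q in [(0, 2), (1, 3)])

# Homogeneous SW generator: lambda_1 = -lambda_2 = -t/U, lambda_0 = lambda_3 = 0.
S = (t / U) * ((num[3] - num[1]) @ (cd[0] @ c[2] - cd[2] @ c[0])
               + (num[2] - num[0]) @ (cd[1] @ c[3] - cd[3] @ c[1]))
assert np.allclose(S, -S.T)                      # anti-Hermitian generator

# Restrict to the half-filled Sz = 0 sector (4 states for the dimer).
Ntot = np.diag(sum(num)).round().astype(int)
twoSz = np.diag(num[0] - num[1] + num[2] - num[3]).round().astype(int)
sec = np.where((Ntot == 2) & (twoSz == 0))[0]
P = np.ix_(sec, sec)

comm = S @ H0 - H0 @ S
assert np.allclose(comm[P], -V[P])               # SW condition in the sector

E0 = np.linalg.eigvalsh((H0 + V)[P])[0]
print(len(sec), E0)                              # 4 states; exact singlet energy
```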
Within the SW transformation, we obtain
\begin{equation}
\bar{H} = \hat{H}^0 + \sum_{n=2}^{\infty} \frac{n-1}{n!} \mathcal{S}^{n-1}(\hat{V}) = \hat{H}^0 + \sum_{n=2}^{\infty} \bar{H}^{(n)} , \label{eq:barH_SW}
\end{equation}
where $\mathcal{S}$ is the super-operator defined as $\mathcal{S}(\hat{X}) = [\hat{S},\hat{X}]$, and $\bar{H}^{(n)} = \dfrac{n-1}{n!}\mathcal{S}^{n-1}(\hat{V})$.
Since $\hat{S}$ consists in a truncated form of $\hat{G}$, $\bar{H}$ is not expected to be block-diagonal anymore, i.e. $\bar{H}_X \neq \hat{0}$.
In the following, we propose recursive relations between $\mathcal{S}^{n}(\hat{V})$ and $\mathcal{S}^{n-1}(\hat{V})$,
derived for the Hubbard dimer, that provide an explicit expression for $\bar{H}$ and in particular for $\bar{H}_X$ in terms of two- and three-body operators.
These relations are further exploited to develop two modifications of $\hat{S}$, one variational and the other iterative, designed to minimize or even cancel $\bar{H}_X$ while conserving the same complexity as $\hat{S}$.
\section{Modified Schrieffer--Wolff transformations}
\subsection{Recursive relations}
We establish recursive relations for each order of Eq.~(\ref{eq:barH_SW}), whose details are provided in Appendix~\ref{app:Recurence}. More precisely, we find that even orders are block-diagonal, i.e. $\bar{H}^{(2n)}_D = \bar{H}^{(2n)}$, and read
\begin{eqnarray}
\bar{H}^{(2n)} &= &\dfrac{2n - 1}{2(2n)!}\sum_{\substack{i \neq j \\ \sigma}}\sum_{x=0}^3K^{(2n-1)}_{ij\sigma,x}\hat{p}_{ij \bar{\sigma},x}\left(\hat{n}_{i\sigma} - \hat{n}_{j\sigma}\right) \nonumber \\
& &+ \dfrac{2n - 1}{2(2n)!}\sum_{\substack{i \neq j\\\sigma}}J^{(2n-1)}_{ij\sigma}\left(\hat{\gamma}_{ij\sigma}\hat{\gamma}_{ji\bar{\sigma}} + \hat{\gamma}_{ji\sigma}\hat{\gamma}_{ij\bar{\sigma}}\right) \nonumber \\
& &+ \dfrac{2n - 1}{2(2n)!}\sum_{\substack{i \neq j\\\sigma}}L^{(2n-1)}_{ij\sigma}\left(\hat{\gamma}_{ij\sigma}\hat{\gamma}_{ij\bar{\sigma}} + \hat{\gamma}_{ji\sigma}\hat{\gamma}_{ji\bar{\sigma}}\right), \label{eq:MainS2n+1}
\end{eqnarray}
while odd orders are found to be off-block-diagonal, i.e. $\bar{H}^{(2n+1)}_X = \bar{H}^{(2n+1)}$ and take the following expression,
\begin{equation}
\bar{H}^{(2n + 1)} = \dfrac{2n}{2(2n + 1)!}\sum_{i\neq j\sigma} \sum_{x=0}^3T_{ij\sigma,x}^{(2n)} \hat{p}_{ij \bar{\sigma},x}\left(\hat{\gamma}_{ij\sigma} + \hat{\gamma}_{ji\sigma}\right). \label{eq:MainS2n}
\end{equation}
In both expressions, only the interaction integrals $I_{ij\sigma,x}^{(k)}$ (with $I = J, K, L$ or $T$) depend on the order $k$; their explicit formulas are given in Appendix~\ref{app:Recurence}.
By summing over all orders, $\bar{H}$ takes exactly the following form,
\begin{align}
\label{eq:Htrans}
\bar{H} & = H^0 + \sum_{n=2}^{\infty} \bar{H}^{(n)} = \bar{H}_D + \bar{H}_X,
\end{align}
with $\bar{H}_D = \hat{H}^0 + \sum_{n = 1}^{\infty}\bar{H}^{(2n)}$ and $\bar{H}_X = \sum_{n = 1}^{\infty} \bar{H}^{(2n +1)}$. Specifically, the non-block-diagonal contribution $\bar{H}_X$ couples states from the Heisenberg subspace to states from its complementary subspace, and reads explicitly
\begin{align}
\label{eq:Hcouple}
\bar{H}_X =\dfrac{1}{2}\sum_{\substack{i \neq j \\ \sigma}}\sum_{x=0}^3T_{ij\sigma,x}\hat{p}_{ij \bar{\sigma},x}\left(\hat{\gamma}_{ij\sigma} + \hat{\gamma}_{ji\sigma}\right),
\end{align}
where the interaction integrals $T_{ij\sigma,x}$ are obtained by summing over all orders as
\begin{equation}
T_{ij\sigma,x} = \sum_{n=1}^{\infty} \dfrac{2n}{(2n + 1)!} T_{ij\sigma,x}^{(2n)}.
\end{equation}
For the homogeneous case, the associated integrals are simply given by
\begin{align}
\label{eq:Jcpl}
T_{ij\sigma,0} = T_{ij\sigma,3} = 0,
\end{align}
and
\begin{align}
T_{ij\sigma,1} = T_{ij\sigma,2} = -t_{ij}\left(\cos{(4t/U)} - {\rm sinc}\,(4t/U)\right).
\end{align}
Explicit form of the block-diagonal contributions and relations for the inhomogeneous Hubbard dimer are derived in Appendix~\ref{app:Recurence}.
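This closed form can be checked numerically. In the singlet sector of the homogeneous dimer, the problem reduces to a $2\times 2$ matrix (a reduction we assume here for illustration, spanned by the low-energy singlet and the symmetric ionic state, with off-diagonal element $-2t$); the matrix element of $\bar{H}$ between the two singlet basis states, obtained from exact matrix exponentials, then reproduces twice the per-bond amplitude $-t(\cos(4t/U) - {\rm sinc}\,(4t/U))$.

```python
import numpy as np
from scipy.linalg import expm

t, U = 1.0, 4.0
H0 = np.diag([0.0, U])                      # assumed singlet-sector matrices
V = np.array([[0.0, -2.0*t], [-2.0*t, 0.0]])
S = (-2.0*t/U) * np.array([[0.0, -1.0], [1.0, 0.0]])   # solves [S, H0] = -V
assert np.allclose(S @ H0 - H0 @ S, -V)

Hbar = expm(S) @ (H0 + V) @ expm(-S)
# np.sinc(x) = sin(pi x)/(pi x), hence the 1/pi rescaling of the argument
T = -t * (np.cos(4*t/U) - np.sinc(4*t/U / np.pi))
print(Hbar[0, 1], 2*T)                      # residual SWT coupling vs formula
assert np.isclose(Hbar[0, 1], 2*T)
```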
\begin{figure}[h]
\vspace{0.2cm}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=1]{fig1.pdf}
}
\caption{Schematic representation of the action of the different operators $\bar{H}$ in the Hilbert space of the half-filled Hubbard dimer. }
\label{fig:fig1}
\end{figure}
At this stage, we have established recursive relations to obtain the similar Hamiltonian $\bar{H}$ within the standard SW transformation (SWT) at infinite order of perturbation.
However, given the definition of $\hat{S}$ in Eq.~(\ref{eq:1storderS}), the standard SW transformation at infinite order does not lead to a block-diagonal representation of $\bar{H}$ with respect to the Heisenberg subspace since $\bar{H}_X \neq \hat{0}$, see Fig~\ref{fig:fig1}.
Based on the previous recursive relations, in the following subsections we present two strategies, denoted as modified SW (MSW) transformations, one variational and the other iterative, to fully perform the desired block-diagonalization.
\subsection{Variational Schrieffer--Wolff transformation }
\label{sec:variational}
We propose to introduce a variational scaling parameter $\theta$ to the unitary transformation,
\begin{equation}
\label{eq:Var_SWT_U}
\hat{U}(\theta) = e^{\theta \hat{S}},
\end{equation}
such that for $\theta = 0$, $\hat{U} = \hat{1}$, and for $\theta = 1$ one recovers the standard SW transformation $\hat{U} = e^{\hat{S}}$.
Within this unitary transformation, the similar Hamiltonian $\bar{H}(\theta)$ reads
\begin{align}
\bar{H}(\theta) &= e^{\theta \hat{S}}\hat{H} e^{-\theta \hat{S}}, \nonumber \\
& = \hat{H}^0 +\hat{V} \left( 1 - \theta \right) + \sum_{n = 2}^{\infty}\frac{\theta^{n-1}(n - \theta)}{n!}\mathcal{S}^{n-1}(\hat{V}).
\end{align}
Using the previously established recursive relations and after summation till the infinite order, see Appendix~\ref{app:Var}, it can be decomposed
as follows, similarly as in Eq.~(\ref{eq:Htrans}),
\begin{eqnarray}
\label{eq:Htheta}
\bar{H}(\theta) = \bar{H}_D(\theta) + \bar{H}_X(\theta),
\end{eqnarray}
where the $\theta$-dependence lies in the renormalized interaction integrals
that read for the non block-diagonal contribution in the homogeneous case,
\begin{align}
T_{ij\sigma,0}(\theta) = T_{ij\sigma,3}(\theta) = 0,
\end{align}
and
\begin{align}
T_{ij\sigma,1}(\theta) = T_{ij\sigma,2}(\theta) = -t\left(\cos{(4t\theta/U)} - \theta\,{\rm sinc}\,(4t\theta/U)\right).
\end{align}
The scaling parameter $\theta$ can be optimized to minimize the contributions from the coupling operator $\bar{H}_X(\theta)$, which is shown to cancel out for the homogeneous case at
\begin{align}\label{eq:theta_analytic}
\theta = (U/4t)\tan^{-1}{(4t/U)},
\end{align}
leading to an exact block-diagonalization of $\bar{H}(\theta)$.
More precisely, we demonstrate in Appendix~\ref{app:Var} that, in this case and at the saddle point, the variational generator fulfills the VV condition in Eq.~(\ref{eq:VV_MBPT}).
Relations become more complex for the inhomogeneous cases and the variational process has to be done numerically, see Appendix~\ref{app:Var}.
In this case, the cancellation of $\bar{H}_X(\theta)$ cannot always be reached.
Alternatively, one can minimize the energy of $\bar{H}(\theta)$ restricted to the Heisenberg subspace, which is equivalent to maximizing the overlap between the minimizing state
and the exact ground state $\ket{\Psi_0}$.
The difference between the two optimization schemes is discussed in Appendix~\ref{app:min}.
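As an illustration (again a sketch on an assumed $2\times 2$ singlet-sector reduction of the homogeneous dimer), one can verify that the analytic angle of Eq.~(\ref{eq:theta_analytic}) cancels the residual coupling exactly and that the corresponding diagonal element reproduces the exact ground-state energy, while the standard SWT ($\theta = 1$) leaves a finite coupling.

```python
import numpy as np
from scipy.linalg import expm

t, U = 1.0, 4.0
H0 = np.diag([0.0, U])                                  # assumed singlet-sector matrices
V = np.array([[0.0, -2.0*t], [-2.0*t, 0.0]])
S = (-2.0*t/U) * np.array([[0.0, -1.0], [1.0, 0.0]])    # SW generator, [S, H0] = -V

def Hbar(theta):
    return expm(theta * S) @ (H0 + V) @ expm(-theta * S)

theta_star = (U / (4*t)) * np.arctan(4*t/U)             # Eq. (theta_analytic)
E0 = np.linalg.eigvalsh(H0 + V)[0]

print(theta_star, abs(Hbar(1.0)[0, 1]), abs(Hbar(theta_star)[0, 1]))
assert abs(Hbar(theta_star)[0, 1]) < 1e-12              # exact decoupling
assert np.isclose(Hbar(theta_star)[0, 0], E0)           # exact ground energy
assert abs(Hbar(1.0)[0, 1]) > 0.1                       # standard SWT: residual coupling
```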
\subsection{Iterative Schrieffer--Wolff transformation}
\label{sec:iterative}
Alternatively to the variational approach, one can take advantage of the similarity between the coupling terms in Eq.~(\ref{eq:Jcpl}) and
the perturbation $\hat{V}$
in Eq.~(\ref{eq:Vdecomp}).
In the spirit of the Foldy--Wouthuysen transformation~\cite{foldy_on_1950}, we propose the following iterative scheme:
\begin{enumerate}
\item Initialize the iterative process by applying the standard SW transformation on the original problem to obtain $\bar{H}^{(s=0)}$.
\item At iteration $s$, define the new perturbation $\bar{V}^{(s)} = \bar{H}_X^{(s-1)}$ and the new unperturbed problem $\bar{H}^{0(s)} = \bar{H}^{(s-1)} - \bar{V}^{(s)}$.
\item Find the corresponding generator $\hat{S}^{(s)}$ such that $[\bar{H}^{0(s)}, \hat{S}^{(s)}] = \bar{V}^{(s)}$.
\item Use the recursive relations derived in Appendix~\ref{app:Itgen} to obtain the new $\bar{H}^{(s)}$.
\item Repeat steps 2 to 4 until convergence is reached, i.e. $\bar{H}_X^{(s)} \rightarrow \hat{0}$.\\
\end{enumerate}
After $N_s$ iterations, the iterative unitary transformation
and the similar Hamiltonian are given by
\begin{equation}
\label{eq:Ite_SWT_U}
\hat{U}^{(N_s)} = \left(\prod_{s = 0}^{N_s-1}e^{\hat{S}^{(s)}}\right),
\end{equation}
and
\begin{equation}
\bar{H}^{(N_s)} = \hat{U}^{(N_s)} \hat{H}\hat{U}^{(N_s)^{\dagger}},
\end{equation}
respectively.
For large $U/t$, the amplitudes of the resulting coupling terms behave asymptotically as $(t^2/U)^{N_s}$ after $N_s$ iterations,
such that the iterative algorithm converges exponentially fast towards a precise decoupling.
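The scheme can be mimicked on a minimal example (our own sketch, on an assumed $2\times 2$ singlet-sector reduction of the homogeneous dimer, where each iteration solves $[\bar{H}^{0(s)}, \hat{S}^{(s)}] = \bar{V}^{(s)}$ exactly): the residual coupling collapses within a few iterations and the decoupled diagonal element converges to the exact ground-state energy.

```python
import numpy as np
from scipy.linalg import expm

t, U = 1.0, 4.0
H = np.array([[0.0, -2.0*t], [-2.0*t, U]])       # assumed singlet-sector dimer

Hs = H.copy()
residuals = []
for s in range(4):
    v = Hs[0, 1]                                 # current off-block coupling V^(s)
    gap = Hs[1, 1] - Hs[0, 0]                    # gap of the unperturbed part H0^(s)
    S = (v / gap) * np.array([[0.0, -1.0], [1.0, 0.0]])   # [S, H0^(s)] = -V^(s)
    Hs = expm(S) @ Hs @ expm(-S)
    residuals.append(abs(Hs[0, 1]))

print(residuals)                                 # rapid decay of the coupling
assert all(r2 < r1 for r1, r2 in zip(residuals, residuals[1:]))
assert np.isclose(Hs[0, 0], np.linalg.eigvalsh(H)[0], atol=1e-10)
```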
\subsection{Perspectives for larger Hubbard rings}
The iterative and variational MSW transformations are shown to perform an exact (quasi-exact) block-diagonalization for the homogeneous (inhomogeneous) Hubbard dimer, thanks to the recursive properties in Eqs.~(\ref{eq:MainS2n+1}) and (\ref{eq:MainS2n}).
Before investigating the quantum algorithms associated with the presented MSW transformations applied to the Hubbard dimer, we discuss possible extensions to larger systems.
First of all, note that the recursive relations obtained in Eqs.~(\ref{eq:MainS2n+1}) and (\ref{eq:MainS2n}) are not valid for larger systems, where additional terms giving rise to interactions among more than two sites emerge.
Nonetheless, for the purposes of this study, we neglect these terms, meaning that the VV perturbation condition in Eq.~(\ref{eq:VV_MBPT}) is not satisfied, while the SW condition in Eq.~(\ref{eq:1storderS}) (i.e. first order) still is.
In this section, the truncation error is assessed on a classical computer for homogeneous nearest neighbour (NN) Hubbard rings of up to $N=10$ sites.
To do so, we apply the unitary transformation $\hat{U}^\dagger(\theta) = e^{-\theta \hat{S}}$
to the ground state $\ket{\Phi_{\rm Heis}}$ of the NN antiferromagnetic Heisenberg model $J\sum_{ij} \hat{s}_i\hat{s}_j$, where $\hat{s}_i$ denotes the spin operator on site $i$ and $J>0$ is the spin-coupling element, which corresponds to the strongly correlated limit of the NN Hubbard model~\cite{schrieffer_relation_1966}.
We follow the variational scheme presented in Sec.~\ref{sec:variational}, where $\theta$ is optimized to minimize the expectation value $E(\theta) = \langle \Phi_{\rm Heis}|\bar{H}(\theta)|\Phi_{\rm Heis}\rangle$ with $\bar{H}(\theta)=U(\theta)\hat{H}U^\dagger(\theta)$,
which is equivalent to maximizing the overlap of $U^\dagger(\theta)|\Phi_{\rm Heis}\rangle$ with the exact ground state $|\Psi_{0}\rangle$.
\begin{figure}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=1]{fig2.pdf}
}
\caption{Relative errors in the ground-state energy calculated for homogeneous half-filled Hubbard rings with respect to the number of sites. Results are given for the variational MSW transformation (solid lines) and the standard SWT ($\theta=1$, dashed lines). Lines are guides for the eye.}
\label{fig:swt_nsites}
\end{figure}
In Fig.~\ref{fig:swt_nsites} we show, as a function of the number of sites $N$ and for different values of the Coulomb repulsion strength $U/t$, the relative error (in $\%$) of $E(\theta)$ with respect to the exact ground-state energy.
Results are provided for $\theta = 1$, corresponding to the standard SWT, and for the optimal value $\theta^*$.
As $N$ increases, the relative error grows and appears to converge towards what corresponds to the truncation error.
As expected, the truncation error increases as $U/t$ decreases, i.e. $\sim 1\%\, (1\%)$, $\sim 5\%\, (7\%)$ and $\sim 11\%\, (30\%)$ for $U/t = 20$, 8, 4 and $\theta = \theta^*$ ($\theta = 1$), respectively.
The introduction of a single and variational parameter systematically and drastically improves over the standard SWT.
Consequently, the variational extension to the SW approximation, that is exact for the homogeneous half-filled Hubbard dimer, remains a good approximation for
larger system sizes in the intermediate to strongly correlated regime.
Straightforward improvements can be envisioned by considering higher-order contributions from the generator,
following the recursive relations proposed in Ref.~[\onlinecite{bravyi_schriefferwolff_2011}], for instance.
However, the implementation of the MSW transformations
on classical computers is computationally intractable for systems beyond $\sim 16$ orbitals, in analogy with the unitary coupled cluster ansatz~\cite{romero2018strategies}.
This is also the case for the construction of the trial state $|\Phi_{\rm Heis} \rangle$ for large system sizes, an issue we sidestep in the following by investigating quantum algorithms applied to the Hubbard dimer, for which $|\Phi_{\rm Heis} \rangle$ is easy to prepare.
\section{Modified SW transformations applied on quantum computers}
At this stage,
we investigate the relevance of the aforementioned MSW transformations, $\bar{H}^{\rm MSW}=\bar{H}(\theta)$ and $\bar{H}^{\rm MSW}=\bar{H}^{(N_s)}$ in Secs.~\ref{sec:variational} and \ref{sec:iterative}, respectively, for the design of new quantum algorithms.
In both cases, $\bar{H}^{\rm MSW}$ is block-diagonal for the homogeneous case, so that ground or excited states can easily be constructed as a linear combination of two basis vectors for the Hubbard dimer, see Fig.~\ref{fig:fig1}.
For the homogeneous half-filled Hubbard dimer,
relevant trial eigenstates of $\bar{H}^{\rm MSW}$
consist in the Heisenberg state $|\Phi_{\rm Heis} \rangle = (1/\sqrt{2})\left(|\uparrow \; \downarrow \,\rangle + | \downarrow \; \uparrow \,\rangle \right)$
and the ionic state $|\Phi_{\rm Ionic}^\alpha \rangle = \cos(\alpha)|\uparrow \downarrow\; \cdot \,\rangle + \sin(\alpha)|\, \cdot \; \uparrow \downarrow\,\rangle$.
Indeed, since $e^{\hat{S}}$ preserves the spin symmetry, the triplet states $|\uparrow \; \uparrow \,\rangle$ and $| \downarrow \; \downarrow \,\rangle$ are discarded.
The eigenstates of $\hat{H}$ can then be constructed from
the trial eigenstates
of $\bar{H}^{\rm MSW}$ by applying the transformation $\hat{U}^{\rm MSW}$, which refers to the variational [see Eq.~(\ref{eq:Var_SWT_U})] or to the iterative [see Eq.~(\ref{eq:Ite_SWT_U})] MSW transformation.
Both the variational and the iterative MSW approaches are thus well adapted to the design of quantum algorithms, as each is formulated as a unitary transformation applied to an easy-to-prepare initial state.
As a proof of concept, we have implemented both quantum algorithms to treat the homogeneous and inhomogeneous Hubbard dimer, using Qiskit~\cite{Qiskit} to construct the quantum circuits.
We use the one-to-one correspondence between the states of the qubits and the occupation of the spin-orbitals of the Hubbard dimer to map our states onto qubits, with even-numbered qubits corresponding to spin-up orbitals and odd-numbered qubits to spin-down orbitals.
The fermionic creation and annihilation operators are
mapped onto Pauli strings $\hat{\mathcal{P}}_i$
using the Jordan--Wigner (JW) transformation~\cite{jordan1928p}.
To implement the unitary transformation on quantum circuits, the first-order Trotter--Suzuki approximation is used, i.e. the exponential of a sum of Pauli strings is decomposed into
a product of exponentials of single Pauli strings,
\begin{eqnarray}\label{eq:trotter}
e^{\theta \hat{S}} \xrightarrow{\rm JW}
e^{\theta \sum_i \xi_i \hat{\mathcal{P}}_i} \approx \prod_i e^{\theta \xi_i \hat{\mathcal{P}}_i},
\end{eqnarray}
for which the associated circuit is known (see panel (c) of Fig.~\ref{fig:circuit}),
and $\lbrace\xi_i\rbrace$ are the coefficients that are functions of the SW generator parameters $\{\lambda\}$ obtained after the JW transformation.
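As a minimal pure-Python sketch of the approximation in Eq.~(\ref{eq:trotter}), one can verify on two non-commuting Pauli strings that the first-order Trotter error shrinks roughly quadratically with the prefactor $\theta$. The coefficients $\xi_1 = 0.3$ and $\xi_2 = 0.7$ and the strings $P_1 = X\otimes Z$, $P_2 = Y\otimes \mathbb{1}$ are illustrative placeholders, not the actual JW coefficients of the generator:

```python
# Single-qubit Pauli matrices as nested lists of complex numbers.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product of two square matrices."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * a for a in row] for row in A]

def expm(A, order=30):
    """Taylor-series matrix exponential; accurate for small-norm A."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = result
    for k in range(1, order):
        term = scal(1.0 / k, matmul(term, A))
        result = add(result, term)
    return result

def maxdiff(A, B):
    return max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Two non-commuting Pauli strings with illustrative coefficients.
P1, P2 = kron(X, Z), kron(Y, I2)
XI1, XI2 = 0.3, 0.7

def trotter_error(theta):
    """Max-entry difference between exp(i*theta*(xi1*P1 + xi2*P2))
    and the first-order Trotter product of the two exponentials."""
    exact = expm(scal(1j * theta, add(scal(XI1, P1), scal(XI2, P2))))
    split = matmul(expm(scal(1j * theta * XI1, P1)),
                   expm(scal(1j * theta * XI2, P2)))
    return maxdiff(exact, split)
```

Halving $\theta$ reduces the error by roughly a factor of four, the expected first-order Trotter behaviour.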
The trial Heisenberg and ionic states can be easily prepared on the quantum computer, as shown in panels (a) and (b) of Fig.~\ref{fig:circuit}.
Finally, we simulate our variational MSW transformation
using a noise model built on Qiskit.
This noise model consists of a depolarizing quantum error channel applied to every one- and two-qubit gate, with depolarizing error parameters of $\lambda_1 = 0.0001$ and $\lambda_2 = 0.001$, respectively.
Note that the 4-qubit circuit resulting from the variational MSW transformation is composed of 32 one-qubit gates and 35 CNOT gates, and that no readout error is considered.
Sampling noise is also added to this noise model by considering $n_{\rm shots} = 8192$ for the estimation of the expectation value of each Pauli string.
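As a sanity check of what the depolarizing channel does to a state, a textbook single-qubit version can be sketched in plain Python. This uses one common parameterization, $\rho \rightarrow (1-\lambda)\rho + \lambda\,\mathbb{1}/2$; the exact internal convention of the Qiskit error channel may differ:

```python
def depolarize(rho, lam):
    """Textbook depolarizing channel: rho -> (1 - lam) * rho + lam * I/d.
    Trace-preserving, with the maximally mixed state as fixed point."""
    d = len(rho)
    return [[(1 - lam) * rho[i][j] + (lam / d if i == j else 0.0)
             for j in range(d)] for i in range(d)]

def trace(rho):
    return sum(rho[i][i] for i in range(len(rho)))

def purity(rho):
    """Tr(rho^2); equals 1 only for pure states."""
    d = len(rho)
    return sum(rho[i][j] * rho[j][i] for i in range(d) for j in range(d))

rho0 = [[1.0, 0.0], [0.0, 0.0]]   # pure state |0><0|
out = depolarize(rho0, 0.0001)    # lambda_1 used for one-qubit gates
```

Each noisy gate slightly mixes the state, which is consistent with the small increase of $\langle \hat{S}^2 \rangle$ reported below.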
The variational parameter was optimized by using the SPSA optimizer with a maximum of 1000 iterations.
Then, the optimal parameter $\theta^*$ is calculated as the mean of the last 25 iterations, and the expectation values of $\bar{H}(\theta^*)$
with respect to the Heisenberg and ionic states are estimated as the mean of another 100 noisy simulations (with fixed parameter $\theta^*$).
The noisy results are then compared to the exact references
obtained by exact diagonalization, as well as to the noiseless state-vector simulation, without considering any quantum or sampling noise and for which the L-BFGS-B optimizer was used to update the variational parameter.
For the iterative MSW transformation, only state-vector simulations are performed.
\begin{figure*}
\resizebox{2\columnwidth}{!}{
\includegraphics[scale=1]{fig3.pdf}
}
\caption{a) Quantum circuit corresponding to the Heisenberg state $\ket{\Phi_{\rm Heis}} = \left(\ket{\,\uparrow \; \downarrow\,} - \ket{\,\downarrow \; \uparrow\,}\right)/\sqrt{2} =\left(\ket{1001} - \ket{0110}\right)/\sqrt{2} $, b) Quantum circuit corresponding to the linear combination of the ionic states $\ket{\Phi_{\rm Ionic}^\alpha} = \cos(\alpha) \ket{\,\uparrow \downarrow \; \cdot\,} + \sin(\alpha) \ket{\,\cdot \; \uparrow \downarrow\,} = \cos(\alpha) \ket{1100} + \sin(\alpha)\ket{0011}$, c) Quantum circuit corresponding to the implementation of $e^{\xi X_0 Z_1 Y_2 Z_3}$.}
\label{fig:circuit}
\end{figure*}
\subsection{Variational approach}
The variational approach described in Sec.~\ref{sec:variational} consists in finding the optimal parameter $\theta_X$ of the unitary in Eq.~(\ref{eq:Var_SWT_U}) such that the couplings $\bar{H}_X(\theta_X)$ are minimized, thus enforcing the block-diagonalization of $\bar{H}(\theta_X)$.
Compared to the strategy of Zhang and coworkers~\cite{zhang_quantum_2022},
our minimization process involves only a single variational parameter, rather than a number of parameters corresponding to the number of Pauli strings composing the generator [i.e. each $\xi_i$ in Eq.~(\ref{eq:trotter})].
Minimizing the couplings $\bar{H}_X(\theta_X)$ requires the estimation of the off-diagonal matrix elements of $\bar{H}$ such as done in Ref.~[\onlinecite{zhang_quantum_2022}].
However, in contrast to the measurement of expectation values, this is not straightforward on quantum computers, although
some progress has been reported in the literature~\cite{huggins2020non,stair2021simulating}.
Consequently, in this work, in analogy with the VQE algorithm, we minimize the energy $\langle \Phi | \hat{U}(\theta) \hat{H} \hat{U}^{\dagger}(\theta) | \Phi \rangle$ measured on the quantum device, rather than minimizing the norm of $\bar{H}_X$. Note that the two strategies are equivalent when the initial trial state $|\Phi\rangle$ is indeed the ground state of $\bar{H}(\theta)$. Otherwise, it does not lead to the expected block-diagonalization of the Hamiltonian, as discussed in more detail in Appendix~\ref{app:min}.
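This single-parameter minimization can be illustrated on the smallest possible example. In the singlet sector of the homogeneous half-filled Hubbard dimer the Hamiltonian reduces to the $2\times2$ block $[[0,-2t],[-2t,U]]$ (in the basis of the covalent and symmetric ionic states), and the variational unitary reduces to a rotation by a single angle. The $2\times2$ restriction and the brute-force grid search below are illustrative simplifications, not the actual 4-qubit circuit or the SPSA optimizer used in this work:

```python
import math

def energy(theta, t, U):
    """<phi(theta)|H|phi(theta)> with |phi(theta)> = R(theta)|covalent>,
    H restricted to the 2x2 singlet block [[0, -2t], [-2t, U]]."""
    c, s = math.cos(theta), math.sin(theta)
    return (s * s) * U - 4.0 * t * s * c

def minimize(t, U, steps=20001):
    """Brute-force grid search over the single variational angle."""
    grid = [math.pi * k / steps - math.pi / 2 for k in range(steps)]
    theta = min(grid, key=lambda x: energy(x, t, U))
    return theta, energy(theta, t, U)

t, U = 1.0, 4.0
theta_opt, e_min = minimize(t, U)
# Exact ground-state energy of the half-filled homogeneous Hubbard dimer:
e_exact = (U - math.sqrt(U * U + 16.0 * t * t)) / 2.0
```

In this toy setting the single angle reproduces the exact ground-state energy, mirroring the exact block-diagonalization obtained for the homogeneous dimer.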
\begin{figure}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=1]{fig4.pdf}
}
\caption{Energies of the half-filled Hubbard dimer for $\Delta \mu = 0$ with respect to the repulsion strength, using the variational SW method
to minimize the energy $\bra{\Phi_{\rm Heis}}\bar{H}(\theta)\ket{\Phi_{\rm Heis}}$.}
\label{fig:var_deltav0}
\end{figure}
Let us start with the homogeneous Hubbard dimer ($\Delta \mu = 0$).
In this case, analytical expressions for the variational SW transformation can be derived
as shown in Sec.~\ref{sec:variational}.
As readily seen in Fig.~\ref{fig:var_deltav0},
the energy obtained by minimizing
$\bra{\Phi_{\rm Heis}}\bar{H}(\theta)\ket{\Phi_{\rm Heis}}$
matches exactly the ground-state energy of $\hat{H}$, and the minimizing parameter, denoted by $\theta_{\rm Heis}$, is exactly the same as the analytical expression in Eq.~(\ref{eq:theta_analytic}) (not shown).
In addition, using the exact same unitary $\hat{U}(\theta_{\rm Heis})$ but on the
equi-weighted ionic state $\vert \Phi_{\rm Ionic}^{\alpha = \pi/4} \rangle$, one recovers the first-excited singlet energy of $\hat{H}$.
Thus, our variationally optimized SW transformation has indeed block-diagonalized $\hat{H}$ exactly for any repulsion strength $U/t$,
with the Heisenberg subspace containing the singlet ground state and the triplet states.
The energies obtained from the noisy simulation closely follow the noiseless results, especially for the first-excited-state energy, for which the relative error does not exceed 1.5\%.
We also note an increase of around 0.03 in the expectation value of the spin operator $\hat{S}^2$ due to the noise, showing that the
final state is no longer a pure singlet state.
\begin{figure}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=0.2]{fig5.pdf}
}
\caption{Energies of the half-filled Hubbard dimer for $\Delta \mu/t = 2$ (top panel) with respect to the repulsion strength, using the variational SW method
to minimize the energies $\bra{\Phi_{\rm Heis}}\bar{H}(\theta)\ket{\Phi_{\rm Heis}}$ (orange markers, shown for $U > \Delta \mu$) and $\bra{\Phi_{\rm Ionic}^{\alpha=0}}\bar{H}(\theta)\ket{\Phi_{\rm Ionic}^{\alpha = 0}}$ (blue markers, shown for $U < \Delta \mu$).
The associated minimizing parameters $\theta_{\rm Heis}$ and
$\theta_{\rm Ionic}$ are shown in the bottom panel.
The vertical dotted line corresponds to $U = \Delta \mu$.}
\label{fig:var_deltav2}
\end{figure}
Turning to the inhomogeneous Hubbard dimer
with $\Delta \mu /t = 2$,
no analytical expressions are known
for the optimal parameter $\theta$.
In contrast to the homogeneous case,
minimizing the energy $\bra{\Phi_{\rm Heis}}\bar{H}(\theta)\ket{\Phi_{\rm Heis}}$ does not lead to a block-diagonal
$\bar{H}(\theta_{\rm Heis})$ over the entire range of interaction strengths,
but only for $U \gg \Delta \mu$, as shown in Fig.~\ref{fig:var_deltav2}.
Otherwise, the ground state does not belong to the Heisenberg subspace, so that $\bar{H}(\theta_{\rm Heis})$ is not block-diagonal.
Hence, the Heisenberg state is not an eigenstate of $\bar{H}(\theta_{\rm Heis})$, and neither is the ionic state (see Appendix~\ref{app:min} for more details).
However, rather than minimizing the energy with respect to the Heisenberg state, one can prepare a different initial trial state corresponding to the ground state (or a good approximation of it)
that belongs to the other subspace.
In the case of the Hubbard dimer, this is the ionic subspace, whose ground state is
a linear combination of the ionic states (see Fig.~\ref{fig:fig1} and panel (b) of Fig.~\ref{fig:circuit}).
Since $\Delta \mu/t = 2$, the optimal value of $\alpha$
in $\ket{\Phi_{\rm Ionic}^\alpha}$ is not trivial, and
we approximate it by $\alpha = 0$, i.e. $\vert \Phi_{\rm Ionic}^{\alpha=0}\rangle = \ket{\, \uparrow \downarrow \; \cdot \,}$.
Minimizing $\langle\Phi_{\rm Ionic}^{\alpha=0}\vert\bar{H}(\theta)\vert\Phi_{\rm Ionic}^{\alpha=0}\rangle$
now leads to an optimal $\theta_{\rm Ionic}$ that approximately block-diagonalizes $\bar{H}(\theta_{\rm Ionic})$.
More precisely, one recovers the correct ground-state and first-excited-state singlet energies
for $U \ll \Delta \mu$ by measuring the expectation values
$\langle\Phi_{\rm Ionic}^{\alpha=0}\vert\bar{H}(\theta_{\rm Ionic})\vert\Phi_{\rm Ionic}^{\alpha=0}\rangle$
and
$\bra{\Phi_{\rm Heis}}\bar{H}(\theta_{\rm Ionic})\ket{\Phi_{\rm Heis}}$, where $\theta_{\rm Ionic}$ is defined as the optimal parameter that minimizes $\langle\Phi_{\rm Ionic}^{\alpha=0}\vert\bar{H}(\theta)\vert\Phi_{\rm Ionic}^{\alpha=0}\rangle$.
Interestingly, the Heisenberg state now belongs to the subspace which contains the first-excited singlet state, in contrast to the correlated regime $U \gg \Delta \mu$.
Note that in the strictly correlated (or atomic) limit $U/t \rightarrow \infty$, $\theta_{\rm Heis}$ tends to 1
(see bottom panel of Fig.~\ref{fig:var_deltav2}), which is expected as the variational SW transformation tends to the standard SW transformation that is exact in this limit.
Moving away from this limit, the value of the optimal parameter $\theta_{\rm Heis}$ decreases to compensate for the error from applying the MSW transformation in the non-atomic limit.
Finally, the noisy simulations show a relatively good agreement with the noiseless results.
As for the homogeneous model in Fig.~\ref{fig:var_deltav0}, the expectation value of $\hat{S}^2$ increases from 0 to around 0.05,
and the deviation in energy is more significant for the ground-state energy and when $U/t$ increases.
According to the bottom panel of Fig.~\ref{fig:var_deltav2}, the optimized parameter obtained from the noisy simulation deviates significantly from the exact one at large $U/t$ values (last blue circle on the curve), indicating that the classical optimization becomes more challenging
in the noisy environment.
This could be mitigated by employing error mitigation strategies that are outside of the scope of this manuscript~\cite{cai2022quantum}.
\subsection{Iterative approach}
\begin{figure}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=1]{fig6.pdf}
}
\caption{Energies of the half-filled Hubbard dimer for $\Delta \mu = 0$ with respect to the repulsion strength, using the iterative SW transformation applied on the Heisenberg state (orange triangles) and the equi-weighted ionic state (blue crosses). The exchange integrals $J_{01\sigma}^{(N_s)}$ obtained at convergence are also represented.}
\label{fig:it_deltav0}
\end{figure}
Steps 1 to 5 of the iterative approach described in Sec.~\ref{sec:iterative}
can all be performed on a classical computer by using the recursive relations
derived in Appendix~\ref{app:Var}, such that only the preparation of the final state $U^{(N_s)\dagger}\ket{\Phi}$
and the measurement of $\hat{H}$ are done on the quantum device.
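The structure of this classical loop can be conveyed with a Jacobi-like toy iteration on a $2\times2$ matrix: at each step a rotation with the first-order SW angle $\theta_k = -J_k/(b_k - a_k)$, built only from the current couplings, is applied, and the residual off-diagonal coupling shrinks rapidly. This is an analogy with hypothetical toy parameters, not the paper's actual recursion for the dimer:

```python
import math

def sw_step(a, b, J):
    """Rotate H = [[a, J], [J, b]] by the first-order SW angle
    theta = -J / (b - a); the similarity transform suppresses J."""
    th = -J / (b - a)
    c, s = math.cos(th), math.sin(th)
    a2 = a * c * c + b * s * s + 2.0 * J * s * c
    b2 = a * s * s + b * c * c - 2.0 * J * s * c
    J2 = J * (c * c - s * s) + (b - a) * s * c
    return a2, b2, J2

# Toy parameters (hypothetical): diagonal gap 2, initial coupling 0.5.
a, b, J = 0.0, 2.0, 0.5
for _ in range(3):  # N_s = 3 iterations, as mostly used in this work
    a, b, J = sw_step(a, b, J)
```

After three iterations the coupling is negligible and the diagonal entries have converged to the exact eigenvalues, each step using only the couplings produced by the previous one.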
Let us start with the homogeneous dimer in Fig.~\ref{fig:it_deltav0}.
Interestingly, and in contrast to the variational approach,
the ground state does not always belong to the Heisenberg subspace.
Indeed, the ground and second-excited singlet states of $\bar{H}^{(N_s)}$
oscillate between the Heisenberg and the equi-weighted ($\alpha = \pi/4$) ionic states.
This can be rationalized by analyzing the behaviour of the exchange integrals.
Indeed, the analytical function in Eq.~(\ref{eq:Jana}) at iteration 0 (corresponding to the standard SW transformation) shows that the exchange integrals oscillate and change sign for different correlation strengths (not shown).
The iterative process strongly sharpens these oscillations (though the function remains continuous and infinitely differentiable for all $U/t > 0$), as shown by the solid blue lines in Fig.~\ref{fig:it_deltav0}.
The change of sign of the exchange integrals indicates a change in the ground state
of $\bar{H}^{(N_s)}$.
If they are negative, the ground state belongs to the Heisenberg subspace, while it belongs to the ionic subspace if they are positive.
One can also verify that the energy required to go from the Heisenberg state to the equi-weighted ionic state is $4J_{01\sigma}^{(N_s)}$, where $J_{01\sigma}^{(N_s)}$ are the coupling terms obtained after $N_s$ iterations.
Finally, note that the first-excited singlet state energy
of $\hat{H}$
is actually exactly recovered from the $\vert\Phi_{\rm Ionic}^{\alpha = - \pi/4}\rangle$ state that is an eigenstate of $\bar{H}^{(N_s)}$ (not shown).
\begin{figure}
\resizebox{\columnwidth}{!}{
\includegraphics[scale=0.2]{fig7.pdf}
}
\caption{Energies of the half-filled Hubbard dimer for $\Delta \mu/t = 2$ with respect to the repulsion strength.
The iterative SW transformation is applied on the Heisenberg state (orange triangles) and on the pure ionic state (blue crosses), with (top panel) and without (bottom panel) trotterization error.
The vertical dotted line corresponds to $U = \Delta \mu$.}
\label{fig:it_deltav2}
\end{figure}
Turning to the inhomogeneous dimer with $\Delta \mu/t = 2$
in Fig.~\ref{fig:it_deltav2},
one observes a behaviour similar to that of the variational approach in Fig.~\ref{fig:var_deltav2},
i.e. the ground state belongs to the Heisenberg subspace for $U \gg \Delta \mu$ and to the ionic subspace for $U \ll \Delta \mu$.
However, the values around the transition $U \sim \Delta \mu$ (top panel of Fig.~\ref{fig:it_deltav2}) are much less accurate than for the variational method.
This can be rationalized by comparing the energies obtained with and without trotterizing the SW transformation (top and bottom panels of Fig.~\ref{fig:it_deltav2}, respectively).
Indeed, $\bar{H}^{(N_s)}$ obtained without trotterization is block-diagonal
(apart from small deviations for a few points),
although the nature of the ground state interchanges around the transition $U \sim \Delta \mu$, in contrast to the variational approach.
Trotterizing the iterative SW transformation does lead to significant errors and to a non-block-diagonalized $\bar{H}^{(N_s)}$, such that the Heisenberg and the ionic states are not eigenstates of $\bar{H}^{(N_s)}$ anymore.
Such trotterization errors are much more pronounced within the
iterative method than the variational one for two reasons.
On the one hand, the successive application of several unitary transformations (most of the time, 3 iterations in this work) multiplies the number of operators that must be trotterized.
On the other hand,
it is known that the variational optimization of the parameters in VQE-based algorithms compensates for the
Trotter errors~\cite{grimsley2019trotterized}.\\
\subsection{Variational versus iterative approach: numerical efficiency}
In contrast to the variational approach, the iterative approach leads to a fully quantum (parameter-free) algorithm as it simply consists in applying the unitary transformation of Eq.~(\ref{eq:Ite_SWT_U}) on a prepared eigenstate of $\bar{H}^{\rm MSW} = \bar{H}^{(N_s)}$.
However, the associated quantum circuit is much deeper than for the variational approach, which applies a single unitary transformation only.
In terms of gate complexity, the number of CNOTs required to implement a single SW unitary transformation scales
with the number of Pauli terms in the SW generator, as well as with the number of qubits, as shown by the cascade of CNOTs in panel (c) of Fig.~\ref{fig:circuit}.
To evaluate the relevance of our approach to more complex systems, we extrapolate the computational scaling for an $N$-site Hubbard model.
As the operators of the SW generator only act on nearest-neighbor sites, the number of Pauli terms scales as $\mathcal{O}(N)$.
For the iterative approach, one has to multiply by the number of iterations $N_s$.
Hence, the number of CNOTs scales as $\mathcal{O}(N^2)$ and $\mathcal{O}(N_s N^2)$ for the variational and the iterative approach, respectively.
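The per-string CNOT cost underlying these estimates can be sketched directly: in the standard staircase compilation, exponentiating a Pauli string of weight $w$ (number of non-identity factors) costs $2(w-1)$ CNOTs, and the total for a generator is the sum over its strings (multiplied by $N_s$ for the iterative approach):

```python
def cnot_count(pauli_string):
    """CNOTs in the standard staircase circuit for exp(xi * P):
    2 * (w - 1), with w the number of non-identity factors of P."""
    w = sum(1 for p in pauli_string if p != "I")
    return 2 * (w - 1) if w > 1 else 0

def total_cnots(strings):
    """Aggregate CNOT cost of trotterizing a sum of Pauli strings."""
    return sum(cnot_count(s) for s in strings)

# The weight-4 string X0 Z1 Y2 Z3 of panel (c) of the circuit figure:
cost_fig = cnot_count("XZYZ")
```

Consistently with panel (c) of Fig.~\ref{fig:circuit}, the weight-4 string $X_0 Z_1 Y_2 Z_3$ costs 6 CNOTs.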
Although the variational approach is more attractive in the NISQ era due to its shallower circuit depth,
this comes at the expense of many more measurements,
since their number must be multiplied at least by the number of optimization iterations,
which is dictated by the type of cost function and by the method used for the classical optimization of the circuit parameters.
Note that while the iterative approach appears less adapted to NISQ computers, its associated circuit depth still remains far shallower than that of quantum-phase-estimation-based approaches, such as the fault-tolerant one proposed in Ref.~[\onlinecite{zhang_quantum_2022}].
Which method is more efficient will depend on the ability of the quantum computer at hand to afford deep quantum circuits.
On noisy quantum computers, the variational approach appears better suited, while the iterative method can be used on fault-tolerant devices.
\section{Conclusions and perspectives}
In this paper, we derived recursive relations for the Schrieffer--Wolff transformation applied to the half-filled Hubbard dimer.
Based on these findings, we proposed a variational and an iterative modification of the standard SW transformation to approach,
or even to achieve exactly in the homogeneous case, a block-diagonalization of the Hamiltonian.
These modified Schrieffer--Wolff transformations have been used to design two quantum algorithms that have been implemented and compared on the half-filled Hubbard dimer.
Regarding the extension of this work to design efficient and alternative quantum algorithms for the general Hubbard model, or even for other models or {\it ab-initio} Hamiltonians, several challenges have to be addressed.
At this stage, one could directly, and without modification, use the variational SW Ansatz [Eq.~(\ref{eq:Var_SWT_U})], the iterative Ansatz [Eq.~(\ref{eq:Ite_SWT_U})] or a combination of both
to evaluate the ground-state energy of a given Hubbard model.
Besides the fact that this constitutes a serious approximation, as additional terms in the perturbative expansion will implicitly be neglected for Hubbard models larger than two sites, it also requires preparing a relevant trial state $\ket{\Phi}$ that generalizes the Heisenberg state used for the Hubbard dimer.
If no trivial and easy-to-prepare trial eigenstates of $\hat{H}^0$ are known, this step could be performed variationally
using the VQE algorithm, for instance.
Within this strategy, one can expect valuable results for the regime of large $U/t$ values and close to half-filling.
Alternatively, one could apply the modified SW transformations
on a few relevant and easy-to-prepare states that
we know belong to the
low-energy subspace we are interested in,
thus forming a basis in which all the
Hamiltonian matrix elements are measured on the quantum computer,
followed by a classical diagonalization, in the same spirit as quantum subspace diagonalization methods~\cite{mcclean2017hybrid,
motta2020determining,stair2020multireference}.
Finally,
the generalization of this work to the Hubbard model at arbitrary filling or to the quantum chemistry Hamiltonian probably requires improving the generator.
This could be done, for instance, in the spirit of coupled-cluster approaches, by introducing more complex terms or more variational parameters.
All the aforementioned developments are beyond the scope of this manuscript and are left for future work.
\begin{acknowledgments}
The authors would like to thank the ANR (Grant No. ANR-19-CE29-0002 DESCARTES
project) for funding.
\end{acknowledgments}
Low-dimensional semiconductor structures exhibit different characteristics, governed
by the size and morphology of the system. Recently, there has been increasing interest
in nanowhiskers (NWs), also known as nanowires. These are one-dimensional nanostructures
grown perpendicular to the surface of the substrate, usually by the vapor-liquid-crystal
(VLC) method. NWs find technological applications in a large variety of fields, including biological and chemical
nanosensors \cite{science-293-1289, nanomedicine-1-51, nbt-23-1294}, lasers
\cite{nature-421-241}, light-emitting diodes \cite{nature-409-66} and field-effect
transistors \cite{nl-4-1247}.
The first report of whiskers in the literature was made by Wagner and Ellis \cite{apl-4-89}
in 1964. This classic study demonstrated the vertical growth of a Si whisker in
the [111] direction, activated by Au droplets, using the VLC method. The radius of
the structure is approximately the same as that of the Au catalyst droplet, and the vertical
size depends on the rate and duration of the compound's deposition on the substrate. Although
the VLC method is the most common, other methods such as vapor-phase epitaxy (VPE),
molecular-beam epitaxy (MBE) and magnetron deposition (MD) are also employed for NW growth.
In III-V compound NWs (e.g., arsenides and phosphides), a surprising characteristic is
the predominance of the wurtzite (WZ) phase in the structure. Except for the nitrides,
the stable crystalline structure of III-V compounds in bulk form is zincblende (ZB).
Although the difference between the formation energies of the two phases is small,
approximately $20\;\text{meV}$ per pair of atoms at zero pressure, high pressures would
be necessary to obtain the WZ phase in bulk form. However, when the dimensions of
the system are reduced to the nanoscale, as in these NWs, the WZ phase becomes more stable.
This stability is due to the smaller surface energy of the lateral facets compared to
those of the cubic crystal. An extensive review of NW growth, properties and applications was given by
Dubrovskii {\it et al.} \cite{semiconductors-43-1539}.
By controlling the growth conditions, such as the temperature and the NW diameter, it is
possible to create different regions composed of ZB and WZ structures \cite{nl-10-1699,
naturemat-5-574, naturenano-4-50, nanoIEEE-6-384, am-21-3654, nano-22-265606, nl-11-2424,
sst-25-024009}. The mixture of both crystalline phases in the same nanostructure is
called polytypism. This characteristic directly affects the electronic and optical
fundamental to the development of novel functional nanodevices with a variety of
features.
The theoretical tool used in this paper to calculate the electronic band structure of polytypical NWs
is the k$\cdot$p method. Although this method has
already been formulated for ZB and WZ crystal structures in bulk form \cite{pr-100-580,
book-kane, spjetp-14-898, book-birpikus, prb-54-2491} and in superlattices and heterostructures
\cite{IEEEjqe-22-1625,prb-53-9930,apl-76-1015,sst-12-252,jcg-246-347}, it has never been applied
to the polytypical case.
By studying in depth the core of the k$\cdot$p formulation for both crystal structures and
relying on the symmetry connection at the polytypical interface presented in the paper
of Murayama and Nakayama \cite{prb-49-4710}, it was possible to describe the top valence
bands and the lowest conduction bands of ZB and WZ with the same Hamiltonian matrix. The
envelope function scheme was then applied to obtain the variation of the parameters
along the growth direction, describing the different regions in a NW, thus completing
the polytypical model. The effects of strain, spontaneous polarization and piezoelectric
polarization are also included in the model.
In order to test the model, we applied it to a polytypical WZ/ZB/WZ quantum well of InP.
Although a real NW is composed of several WZ and ZB regions, the physical trends of the
polytypical interface can be extracted from a single-quantum-well system. We chose the InP
compound for two reasons: first, the small spin-orbit energy makes it easier to fit the matrix
parameters to the effective-mass values given in the paper of De and Pryor \cite{prb-81-155210}
for the WZ polytype; second, InP NWs can be found in a great number of studies in
the current literature \cite{nl-9-648, nano-20-225606, nl-10-1699, prb-82-125327, nanolet-10-4055,
nanotec-21-505709, ssc-151-781, jap-104-044313}.
The present paper is organized as follows: In section II we discuss the symmetry of the ZB
and WZ crystal structures and analyze how the irreducible representations of the energy
bands are connected at the polytypical interface. Section III describes the Hamiltonian
terms of the polytypical model. The results for the InP WZ/ZB/WZ single well, and their
discussion, are found in section IV. Finally, in section V, we draw our conclusions.
\section{II. Symmetry analysis}
\subsection{A. Zincblende and wurtzite structures}
Our formulation relies on group theory concepts and therefore it is necessary to
understand the symmetry of the two crystal structures considered in the polytypical
NWs. The ZB structure belongs to the class of symmorphic space groups and has the
$T_{d}$ symmetry as its factor group. The number of atoms in the primitive unit
cell is two. Unlike ZB, the WZ structure belongs to the class of nonsymmorphic
space groups and its factor group is isomorphic to $C_{6v}$. The classes of symmetry
operations $C_{2}$, $C_{6}$ and $\sigma_{v}$ are followed by a translation of $c/2$
in the [0001] direction. WZ has four atoms in the primitive unit cell. Comparing
the factor groups one can notice that $C_{6v}$ is less symmetric than $T_{d}$.
In the k$\cdot$p framework, this lower symmetry decreases the number of irreducible
representations (IRs) in the group, consequently increasing the number of interactions
in the Hamiltonian. A good description of the concepts of space group symmetry can
be found in Ref. \cite{dresselhaus-jorio}.
In polytypical NWs, the common growth direction is the ZB [111], which exhibits a
noticeable similarity to WZ [0001]. Indeed, analyzing both crystal structures along these
directions, one can describe them as stacked hexagonal layers. ZB has three
layers in the stacking sequence (ABCABC) while WZ has only two (ABAB), as shown
in Figure \ref{fig:zb_wz}. The crystal structure alternates when a stacking
fault occurs in WZ, leading to a single ZB segment, or when two twin planes appear
in ZB, giving rise to a single WZ segment \cite{naturenano-4-50}.
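The local character of each layer can be read off directly from the stacking sequence: a layer whose two neighbouring layers carry the same letter is locally hexagonal (WZ-like), while a layer whose neighbours differ is locally cubic (ZB-like). A minimal sketch of this standard criterion:

```python
def classify(stacking):
    """Label each interior layer of a stacking sequence as cubic ('c')
    or hexagonal ('h'): hexagonal when its two neighbours match
    (... A B A ...), cubic when they differ (... A B C ...)."""
    return "".join(
        "h" if stacking[i - 1] == stacking[i + 1] else "c"
        for i in range(1, len(stacking) - 1)
    )
```

For instance, a WZ-like segment embedded in an otherwise cubic sequence, such as ABCABABC, shows up as a run of 'h' labels between 'c' labels.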
\begin{figure}[h]
\includegraphics{zb_wz_nanowhisker}
\caption{ZB (left) and WZ (right) structures and their stacking sequence. The ZB
is presented in the [111] direction. In this direction, the unit cell is twice as
large as the usual ZB unit cell \cite{prb-49-4710}.}
\label{fig:zb_wz}
\end{figure}
\subsection{B. Irreducible representations at the polytypical interface}
An important issue of the model is how to connect the energy levels at the
polytypical interface, depending on the symmetry they assume. The symmetry of
the energy bands can be chosen based on the scheme of single-group IRs at the WZ/ZB
interface presented by Murayama and Nakayama \cite{prb-49-4710}. The same scheme was constructed
by De and Pryor \cite{prb-81-155210} for the double-group IRs, with the inclusion of the
spin-orbit coupling.
\begin{figure}[h]
\includegraphics{brillouin_zones}
\caption{The usual (a) ZB and (b) WZ first Brillouin zones and their respective high-symmetry
points. The arrows (red line) represent the growth directions in NWs. For ZB, the
[111] direction is directed towards the L-point.}
\label{fig:brillouin_zones}
\end{figure}
Since WZ has twice as many atoms in the primitive unit cell as ZB, the number of
energy bands at the $\Gamma$-point is also twice as large. Considering the $sp^3$
hybridization, without spin, ZB has 8 energy bands while WZ has 16. However, in the
[111] direction, the ZB unit cell is twice as large as the usual face-centered
cubic (FCC) unit cell \cite{prb-49-4710}. In the IRs scheme mentioned above,
the presence of energy bands with $L$ symmetry accounts for the mismatch in
the number of atoms between the usual unit cells. The $L$ symmetry appears
because the ZB [111] growth direction points towards the $L$-point,
as displayed in Figure \ref{fig:brillouin_zones}a, hence this point is mapped onto
the $\Gamma$-point. Figure \ref{fig:brillouin_zones}b displays the first Brillouin zone
(FBZ) of the WZ structure.
\begin{figure}[h]
\includegraphics{gamma_symmetry}
\caption{The subset of IRs considered in this formulation, with and without the
spin-orbit (SO) coupling. The numbers in parentheses are the degeneracies of the IRs.
The notation for the IRs follows Refs. \cite{prb-49-4710,prb-81-155210}.}
\label{fig:gamma_symmetry}
\end{figure}
Among all the IRs presented in Refs. \cite{prb-49-4710,prb-81-155210} we
considered only a small subset. Displayed in Figure \ref{fig:gamma_symmetry}, this subset
comprises the lower conduction band and the top three valence bands, which belong to the
$\Gamma$-point in both structures. The price paid for considering only this small subset
is that the Hamiltonian is accurate only over a fraction of the FBZ, approximately
10-20\%. The basis states for the IRs of the considered bands are presented in equations
(\ref{eq:ZB_bands_sym}) and (\ref{eq:WZ_bands_sym}) for ZB and WZ, respectively.
\begin{eqnarray}
\Gamma_{1c}^{ZB} & \sim & x^{2}+y^{2}+z^{2}\nonumber \\
\Gamma_{15v}^{ZB} & \sim & \left(x,y,z\right)
\label{eq:ZB_bands_sym}
\end{eqnarray}
\begin{eqnarray}
\Gamma_{1c}^{WZ} & \sim & x^{2}+y^{2}+z^{2}\nonumber \\
\Gamma_{6v}^{WZ} & \sim & \left(x,y\right)\nonumber \\
\Gamma_{1v}^{WZ} & \sim & z
\label{eq:WZ_bands_sym}
\end{eqnarray}
Although the IRs belong to different symmetry groups ($T_d$ and $C_{6v}$), the
basis states transform as the usual $x,y,z$ cartesian coordinates for the valence
bands and the scalar $x^{2}+y^{2}+z^{2}$ for the conduction band in both crystal
structures. This information is crucial to represent WZ and ZB with the same
Hamiltonian and is the essential insight of our formulation.
\section{III. Theoretical model}
\subsection{A. k$\cdot$p Hamiltonian}
In order to develop our k$\cdot$p Hamiltonian \cite{arxiv} it is convenient to
describe the ZB structure in a coordinate system that has the $z$ axis parallel
to the growth direction. This coordinate system is the primed one presented in
Figure \ref{fig:coordinate_systems}. Although the choice of the coordinate
system is arbitrary, it alters the form of the k$\cdot$p Hamiltonian. For example, in the
unprimed coordinate system the $k_z$ direction points towards the $X$-point,
whereas in the primed coordinate system it points towards the $L$-point. We therefore
expect an anisotropy in the ZB Hamiltonian between the $k_x$ and $k_z$ directions,
since they no longer reach equivalent points in the reciprocal space.
\begin{figure}[h]
\includegraphics{coordinate_systems}
\caption{(a) ZB conventional unit cell with two different coordinate systems.
(b) WZ conventional unit cell with its common coordinate system. The [111]
growth direction for ZB structure passes along the main diagonal of the cube
and is represented in the primed coordinate system.}
\label{fig:coordinate_systems}
\end{figure}
Considering the single group formulation of the k$\cdot$p method, the choice of
the coordinate system defines the symmetry operation matrices used to derive
the momentum matrix elements. It is necessary to recalculate the Hamiltonian
terms for the ZB structure. However, the energy bands we consider here are
exactly the ones customarily used in the ZB [001] k$\cdot$p Hamiltonian. Instead
of recalculating the terms for ZB [111] k$\cdot$p Hamiltonian it is possible,
and also useful, to apply a basis rotation to the ZB [001] matrix. This rotation
procedure is well described in the paper of Park and Chuang \cite{jap-87-353}.
The basis set for both crystal structures in the primed coordinate system (the
prime will be dropped from the notation from now on and used only
when it is necessary) is given by
\begin{eqnarray}
\left|c_{1}\right\rangle & = & -\frac{1}{\sqrt{2}}\left|(X+iY)\uparrow\right\rangle \nonumber \\
\left|c_{2}\right\rangle & = & \frac{1}{\sqrt{2}}\left|(X-iY)\uparrow\right\rangle \nonumber \\
\left|c_{3}\right\rangle & = & \left|Z\uparrow\right\rangle \nonumber \\
\left|c_{4}\right\rangle & = & \frac{1}{\sqrt{2}}\left|(X-iY)\downarrow\right\rangle \nonumber \\
\left|c_{5}\right\rangle & = & -\frac{1}{\sqrt{2}}\left|(X+iY)\downarrow\right\rangle \nonumber \\
\left|c_{6}\right\rangle & = & \left|Z\downarrow\right\rangle \nonumber \\
\left|c_{7}\right\rangle & = & i\left|S\uparrow\right\rangle \nonumber \\
\left|c_{8}\right\rangle & = & i\left|S\downarrow\right\rangle
\end{eqnarray}
In a first approximation, the interband interaction is not taken into account explicitly
here; the conduction band is thus described by a single-band model, identical for spin-up
and spin-down, which reads
\begin{equation}
E_{C}(\vec{k}) = E_{g}+E_{0}+\frac{\hbar^{2}}{2m_{e}^{\parallel}}k_{z}^{2}+\frac{\hbar^{2}}{2m_{e}^{\perp}}\left(k_{x}^{2}+k_{y}^{2}\right)
\end{equation}
\\
where $E_{g}$ is the band gap, $E_{0}$ is the energy reference at $\vec{k}=0$ and
$m_{e}^{\parallel}$, $m_{e}^{\perp}$ are the electron effective masses parallel
and perpendicular to the $z$ axis, respectively. For the ZB structure, however,
the electron effective masses are equal.
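As an illustration, the parabolic dispersion above can be evaluated directly. The short Python sketch below is our own helper (not part of any established library); it uses $\hbar^{2}/2m_{0}\approx 3.81$\,eV\,\AA$^{2}$ and the WZ InP masses of Table \ref{tab:kp_par}, returning energies in eV for $k$ in \AA$^{-1}$:

```python
# Parabolic conduction-band dispersion E_C(k).
HB2_2M0 = 3.81  # approximate hbar^2/(2 m0) in eV * Angstrom^2

def E_C(kx, ky, kz, Eg, E0, me_par, me_perp):
    """Conduction band energy (eV); k in 1/Angstrom, masses in units of m0."""
    return (Eg + E0
            + HB2_2M0 / me_par * kz**2
            + HB2_2M0 / me_perp * (kx**2 + ky**2))

# WZ InP values from the parameter table: Eg = 1.474 eV, me_par = 0.105, me_perp = 0.088
E_gamma = E_C(0.0, 0.0, 0.0, 1.474, 0.0, 0.105, 0.088)
```

At $\vec{k}=0$ the expression reduces to $E_g+E_0$, as expected; for ZB one simply sets both masses to the same value.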
The Hamiltonian for WZ and ZB valence band is given by
\begin{equation}
H_{V}(\vec{k}) = \left[\begin{array}{cccccc}
F & -K^{*} & -H^{*} & 0 & 0 & 0\\
-K & G & H & 0 & 0 & \Delta\\
-H & H^{*} & \lambda & 0 & \Delta & 0\\
0 & 0 & 0 & F & -K & H\\
0 & 0 & \Delta & -K^{*} & G & -H^{*}\\
0 & \Delta & 0 & H^{*} & -H & \lambda
\end{array}\right]
\label{eq:Hv_kp}
\end{equation}
\\
and the matrix terms are defined as
\begin{eqnarray}
F & = & \Delta_{1}+\Delta_{2}+\lambda+\theta\nonumber \\
G & = & \Delta_{1}-\Delta_{2}+\lambda+\theta\nonumber \\
\lambda & = & A_{1}k_{z}^{2}+A_{2}\left(k_{x}^{2}+k_{y}^{2}\right)\nonumber \\
\theta & = & A_{3}k_{z}^{2}+A_{4}\left(k_{x}^{2}+k_{y}^{2}\right)\nonumber \\
K & = & A_{5}k_{+}^{2}+2\sqrt{2}A_{z}k_{-}k_{z}\nonumber \\
H & = & A_{6}k_{+}k_{z}+A_{z}k_{-}^{2}\nonumber \\
\Delta & = & \sqrt{2}\Delta_{3}
\end{eqnarray}
\\
where $k_{\alpha}(\alpha=x,y,z)$ are the wave vectors in the primed coordinate
system, $A_{i}(i=1,...,6,z)$ are the hole effective mass parameters, $\Delta_1$
is the crystal field splitting energy in WZ, $\Delta_{2,3}$ are the spin-orbit
coupling splitting energies and $k_{\pm} = k_x \pm i k_y$.
It is important to notice that the parameter $A_z$ appears in the matrix elements
to regain the original isotropic symmetry of the ZB band structure in the new
coordinate system. In regions of WZ crystal structure this parameter is
zero and the matrix reduces exactly to the canonical one used for WZ crystals.
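The matrix (\ref{eq:Hv_kp}) and its $k$-dependent terms can be assembled numerically. The sketch below (Python with NumPy; the helper name \texttt{Hv} and the unit conventions are ours) builds the $6\times 6$ matrix for the WZ InP parameters of Table \ref{tab:kp_par}, and can be used to check its hermiticity and the two-fold spin degeneracy of the levels at $\vec{k}=0$:

```python
import numpy as np

def Hv(kx, ky, kz, A, Az, d1, d2, d3):
    """Assemble the 6x6 valence-band Hamiltonian H_V(k) defined in the text.
    A = (A1, ..., A6); energies in eV, A_i in units of hbar^2/(2 m0)."""
    A1, A2, A3, A4, A5, A6 = A
    kp, km = kx + 1j*ky, kx - 1j*ky
    lam = A1*kz**2 + A2*(kx**2 + ky**2)
    th = A3*kz**2 + A4*(kx**2 + ky**2)
    F = d1 + d2 + lam + th
    G = d1 - d2 + lam + th
    K = A5*kp**2 + 2*np.sqrt(2)*Az*km*kz
    H = A6*kp*kz + Az*km**2
    D = np.sqrt(2)*d3
    Kc, Hc = np.conj(K), np.conj(H)
    return np.array([
        [F,  -Kc, -Hc,  0,   0,   0],
        [-K,  G,   H,   0,   0,   D],
        [-H,  Hc,  lam, 0,   D,   0],
        [0,   0,   0,   F,  -K,   H],
        [0,   0,   D,  -Kc,  G,  -Hc],
        [0,   D,   0,   Hc, -H,  lam]], dtype=complex)

# WZ InP parameters from the table (Az = 0 in WZ regions)
A_wz = (-10.7156, -0.8299, 9.9301, -5.2933, 5.0, 1.5)
H0 = Hv(0.0, 0.0, 0.0, A_wz, 0.0, 0.303, 0.036, 0.036)
Hk = Hv(0.02, 0.01, 0.03, A_wz, 0.0, 0.303, 0.036, 0.036)
```

Because the only coupling between the two spin blocks at $\vec{k}=0$ is the real constant $\Delta$, the six eigenvalues of \texttt{H0} come in three two-fold degenerate pairs.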
Although this is not the usual way to describe ZB crystals, all the parameters
in the matrix can be related to the familiar $\gamma_{i}(i=1,2,3)$ and
$\Delta_{SO}$ as shown below
\begin{eqnarray}
\Delta_{1} & = & 0\nonumber \\
\Delta_{2} & = & \Delta_{3}=\frac{\Delta_{SO}}{3}\nonumber \\
A_{1} & = & -\gamma_{1}-4\gamma_{3}\nonumber \\
A_{2} & = & -\gamma_{1}+2\gamma_{3}\nonumber \\
A_{3} & = & 6\gamma_{3}\nonumber \\
A_{4} & = & -3\gamma_{3}\nonumber \\
A_{5} & = & -\gamma_{2}-2\gamma_{3}\nonumber \\
A_{6} & = & -\sqrt{2}\left(2\gamma_{2}+\gamma_{3}\right)\nonumber \\
A_{z} & = & \gamma_{2}-\gamma_{3}
\end{eqnarray}
This formulation is, in essence, a WZ description: the insight is to treat
ZB as a WZ structure without the crystal field splitting energy. Since the
WZ structure is less symmetric than ZB, as mentioned in section II.A, the
ZB parameters can be represented in terms of the WZ ones.
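The mapping above is straightforward to verify numerically. In the sketch below, the Luttinger values $\gamma_{1}=5.08$, $\gamma_{2}=1.60$, $\gamma_{3}=2.10$ and $\Delta_{SO}=0.108$\,eV are our assumption, chosen to be consistent with the ZB column of Table \ref{tab:kp_par}:

```python
import numpy as np

# ZB InP Luttinger parameters (assumed, consistent with the parameter table)
g1, g2, g3 = 5.08, 1.60, 2.10
dso = 0.108  # spin-orbit splitting Delta_SO in eV (assumed)

# WZ-like parameters obtained from the ZB ones via the relations in the text
A1 = -g1 - 4*g3
A2 = -g1 + 2*g3
A3 = 6*g3
A4 = -3*g3
A5 = -g2 - 2*g3
A6 = -np.sqrt(2)*(2*g2 + g3)
Az = g2 - g3
Delta1 = 0.0
Delta2 = Delta3 = dso/3
```

Evaluating these expressions reproduces the ZB column of the table, including the nonzero $A_z$ that restores the isotropy of the ZB band structure in the primed frame.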
The resulting valence band structures for bulk WZ and ZB InP using matrix
(\ref{eq:Hv_kp}) are shown in Figure \ref{fig:valence_band_bulk}. The presence of the
crystal field in the WZ structure creates three distinct two-fold degenerate
bands whereas in ZB there is a four-fold and a two-fold degenerate set of
bands. Additionally, the anisotropy between $k_x$ and $k_z$ is evident in
both crystal structures. In WZ it is due to the different symmetry properties
of the $xy$-plane and the $z$ axis but in ZB it is because $k_x$ and $k_z$
directions do not reach equivalent points in the reciprocal space. Since the
conduction band is a parabolic model, we did not present its dispersion
relation. The ZB parameters were obtained from Ref. \cite{jap-89-5815} and the
WZ parameters were derived using the effective masses presented in Ref.
\cite{prb-81-155210}. These parameters can be found in Table \ref{tab:kp_par}.
\begin{figure}[h]
\begin{center}
\includegraphics{valence_band_bulk}
\caption{Valence band structure for bulk (a) WZ and (b) ZB in the primed coordinate
system. The usual identification of the bands was used for ZB while in WZ it was
necessary to analyze the composition of the states in the $\Gamma$-point. We can
see the anisotropy between $k_z$ and $k_x$ in WZ and also in ZB because in the
new coordinate system the $x$ and $z$ axes do not reach equivalent points in
the reciprocal space. The top of the valence band in both crystal structures was
chosen to be at zero.}
\label{fig:valence_band_bulk}
\end{center}
\end{figure}
Following the notation of Chuang and Chang \cite{apl-68-1657}, the valence energy bands
for WZ are named after the composition of the states at $\vec{k}=0$. HH (heavy
hole) states are composed only of $\left|c_{1}\right\rangle$ or
$\left|c_{4}\right\rangle$, LH (light hole) states are composed mainly of
$\left|c_{2}\right\rangle$ or $\left|c_{5}\right\rangle$ and CH (crystal-field
split-off hole) states are composed mainly of $\left|c_{3}\right\rangle$ or
$\left|c_{6}\right\rangle$. For the ZB structure the common identification of
the valence energy bands was used. The four-fold degenerate bands at $\vec{k}=0$
are HH and LH and the lower two-fold degenerate band is SO (split-off hole).
\begin{table}[h]
\begin{center}
\caption{InP parameters used in the calculations.}
\begin{tabular}{ccc}
\hline
\hline
Parameter & ZB InP & WZ InP\tabularnewline
\hline
Lattice constant ($\textrm{\AA}$) & & \tabularnewline
$a$ & 5.8697 & 4.1505 \tabularnewline
$c$ & - & 6.7777 \tabularnewline
Energy parameters (eV) & & \tabularnewline
$E_{g}$ & 1.4236 & 1.474\tabularnewline
$\Delta_{1}$ & 0 & 0.303\tabularnewline
$\Delta_{2}=\Delta_{3}$ & 0.036 & 0.036\tabularnewline
Conduction band effective masses & & \tabularnewline
$m_{e}^{\parallel}/m_{0}$ & 0.0795 & 0.105\tabularnewline
$m_{e}^{\perp}/m_{0}$ & 0.0795 & 0.088\tabularnewline
Valence band effective mass parameters (units of $\frac{\hbar^{2}}{2m_{0}}$) & & \tabularnewline
$A_{1}$ & -13.4800 & -10.7156 \tabularnewline
$A_{2}$ & -0.8800 & -0.8299 \tabularnewline
$A_{3}$ & 12.6000 & 9.9301 \tabularnewline
$A_{4}$ & -6.3000 & -5.2933 \tabularnewline
$A_{5}$ & -5.8000 & 5.0000 \tabularnewline
$A_{6}$ & -7.4953 & 1.5000 \tabularnewline
$A_{z}$ & -0.5000 & 0 \tabularnewline
\hline
\hline
\label{tab:kp_par}
\end{tabular}
\end{center}
\end{table}
\subsection{B. Strain}
The strain Hamiltonian can be obtained using the same basis rotation applied to
the k$\cdot$p matrix \cite{jap-87-353}. Similarly, the conduction and valence
band are decoupled.
For the conduction band, the strain effect is given by
\begin{equation}
E_{C\varepsilon} = a_{c\parallel}\varepsilon_{zz}+a_{c\perp}\left(\varepsilon_{xx}+\varepsilon_{yy}\right)
\end{equation}
\\
where $a_{c\parallel}$ and $a_{c\perp}$ are the conduction band deformation potentials parallel and perpendicular to
the $z$ axis, respectively. In the ZB structure they have the same value.
The valence band strain Hamiltonian is
\begin{equation}
H_{V\varepsilon} = \left[\begin{array}{cccccc}
F_{\varepsilon} & -K_{\varepsilon}^{*} & -H_{\varepsilon}^{*} & 0 & 0 & 0\\
-K_{\varepsilon} & F_{\varepsilon} & H_{\varepsilon} & 0 & 0 & 0\\
-H_{\varepsilon} & H_{\varepsilon}^{*} & \lambda_{\varepsilon} & 0 & 0 & 0\\
0 & 0 & 0 & F_{\varepsilon} & -K_{\varepsilon} & H_{\varepsilon}\\
0 & 0 & 0 & -K_{\varepsilon}^{*} & F_{\varepsilon} & -H_{\varepsilon}^{*}\\
0 & 0 & 0 & H_{\varepsilon}^{*} & -H_{\varepsilon} & \lambda_{\varepsilon}
\end{array}\right]
\label{eq:Hv_strain}
\end{equation}
\\
and the matrix terms are
\begin{eqnarray}
F_{\varepsilon} & = & \left(D_{1}+D_{3}\right)\varepsilon_{zz}+\left(D_{2}+D_{4}\right)\left(\varepsilon_{xx}+\varepsilon_{yy}\right)\nonumber \\
\lambda_{\varepsilon} & = & D_{1}\varepsilon_{zz}+D_{2}\left(\varepsilon_{xx}+\varepsilon_{yy}\right)\nonumber \\
K_{\varepsilon} & = & D_{5}^{(1)}\left(\varepsilon_{xx}-\varepsilon_{yy}\right)+D_{5}^{(2)}2i\varepsilon_{xy}\nonumber \\
H_{\varepsilon} & = & D_{6}\left(\varepsilon_{xz}+i\varepsilon_{yz}\right)+D_{z}\left(\varepsilon_{xx}-\varepsilon_{yy}\right)
\end{eqnarray}
\\
where the $D_i$ are the valence band deformation potentials and $\varepsilon_{ij}$
are the strain tensor components.
In the same way as the $A_{z}$ parameter appears in the k$\cdot$p matrix, some extra
deformation potential terms appear so that the same strain Hamiltonian can be used
for both crystal structures. The deformation potential $D_{5}$ was split into two
parts because the strain tensor component $\varepsilon_{xy}$ is not present in the ZB
structure. For WZ, $D^{(1)}_{5}=D^{(2)}_{5}$. Also, the $D_{z}$ deformation
potential takes into account the term $\varepsilon_{xx}-\varepsilon_{yy}$,
which does not exist in the WZ structure.
The deformation potentials $D_i$ are related to the usual ZB ones by
\begin{eqnarray}
D_{1} & = & a_{v}+\frac{2d}{\sqrt{3}}\nonumber \\
D_{2} & = & a_{v}-\frac{d}{\sqrt{3}}\nonumber \\
D_{3} & = & -\sqrt{3}d\nonumber \\
D_{4} & = & \frac{3d}{2\sqrt{3}}\nonumber \\
D_{5}^{(1)} & = & -\frac{b}{2}-\frac{d}{\sqrt{3}}\nonumber \\
D_{5}^{(2)} & = & 0\nonumber \\
D_{6} & = & 0\nonumber \\
D_{z} & = & -\frac{b}{2}+\frac{d}{2\sqrt{3}}\nonumber \\
a_{c\parallel} & = & a_{c\perp}=a_{c}
\end{eqnarray}
Considering biaxial strain, the elements of the strain tensor can be obtained in
both coordinate systems for ZB and WZ. Although it is convenient to describe the
Hamiltonian terms in the primed coordinate system, it is also useful to describe
the strain tensor elements in the unprimed coordinate system. They will be used
to construct the piezoelectric polarization in section III.C. The prime will be
reinstated in the notation to avoid confusion, and the superscripts $z$ and $w$
denote the ZB and WZ structures, respectively.
For the primed coordinate system, the elements of the strain tensor are given by
\begin{equation}
\varepsilon_{xx}^{\prime(z,w)} = \varepsilon_{yy}^{\prime(z,w)}=\frac{a_{0}-a^{(z,w)}}{a^{(z,w)}}
\end{equation}
\begin{equation}
\varepsilon_{zz}^{\prime(z)} = -\frac{1}{\sigma^{(111)}} \varepsilon_{xx}^{\prime(z)}
\label{eq:prime_z}
\end{equation}
\begin{equation}
\varepsilon_{zz}^{\prime(w)} = -\frac{2C^{(w)}_{13}}{C^{(w)}_{33}}\varepsilon_{xx}^{\prime(w)}
\label{eq:prime_w}
\end{equation}
\begin{equation}
\varepsilon_{yz}^{\prime(z,w)} = \varepsilon_{zx}^{\prime(z,w)}=\varepsilon_{xy}^{\prime(z,w)} = 0
\end{equation}
\\
where $a_{0}$ is the lattice constant of the substrate.
In the unprimed coordinate system, the strain tensor elements assume the form
\begin{equation}
\varepsilon_{xx}^{(z)} = \varepsilon_{yy}^{(z)} = \varepsilon_{zz}^{(z)} = \frac{1}{3}\left(2-\frac{1}{\sigma^{(111)}}\right)\varepsilon_{xx}^{\prime(z)}
\end{equation}
\begin{equation}
\varepsilon_{yz}^{(z)} = \varepsilon_{zx}^{(z)} = \varepsilon_{xy}^{(z)} = -\frac{1}{3}\left(1+\frac{1}{\sigma^{(111)}}\right)\varepsilon_{xx}^{\prime(z)}
\end{equation}
The quantity $\sigma^{(111)}$ is given by
\begin{equation}
\sigma^{(111)} = \frac{C^{(z)}_{11} + 2C^{(z)}_{12} + 4C^{(z)}_{44}}{2C^{(z)}_{11} + 4C^{(z)}_{12} - 4C^{(z)}_{44}}
\end{equation}
Comparing the expression (\ref{eq:prime_z}) with (\ref{eq:prime_w}) it is possible
to obtain effective values for $C^{(z)}_{13}$ and $C^{(z)}_{33}$ for ZB in the
primed coordinate system. The effective values are:
\begin{equation}
C^{(z)}_{13}=C^{(z)}_{11}+2C^{(z)}_{12}-2C^{(z)}_{44}
\end{equation}
\begin{equation}
C^{(z)}_{33}=C^{(z)}_{11}+2C^{(z)}_{12}+4C^{(z)}_{44}
\end{equation}
Thus, we have a single set of expressions to describe biaxial strain in the
primed coordinate system for ZB and WZ crystal structures:
\begin{equation}
\varepsilon_{xx}^{\prime} = \varepsilon_{yy}^{\prime}=\frac{a_{0}-a}{a}
\end{equation}
\begin{equation}
\varepsilon_{zz}^{\prime} = -\frac{2C^{(z,w)}_{13}}{C^{(z,w)}_{33}}\varepsilon_{xx}^{\prime}
\end{equation}
\begin{equation}
\varepsilon_{yz}^{\prime} = \varepsilon_{zx}^{\prime} = \varepsilon_{xy}^{\prime} = 0
\end{equation}
Since deformation potentials and elastic stiffness constants for WZ InP are not
yet available in the literature, we will consider here that the strain effect
appears only in the ZB structure. This assumption is not totally unrealistic
because WZ is the dominant phase in the NW.
\begin{figure}[h]
\begin{center}
\includegraphics{strain_zb_edges}
\caption{Strain effect on the band edges of ZB InP as a function of the strain
percentage. EL and HH show a linear variation while LH and SO show a
nonlinear behavior. The strain effect removes the HH and LH degeneracy.}
\label{fig:strain_zb_edges}
\end{center}
\end{figure}
Figure \ref{fig:strain_zb_edges} shows the effect of strain at $\vec{k}=0$ for
the diagonalized Hamiltonian (k$\cdot$p and strain terms) as a function of the
strain percentage. A linear variation for the conduction band and the
HH band is observed; however, the LH and SO bands have a non-linear behavior. The
order of the HH and LH bands changes when the strain is tensile. Table
\ref{tab:strain_par} lists the ZB parameters used in the calculations.
\begin{table}[h]
\begin{center}
\caption{ZB InP strain parameters.}
\begin{tabular}{cc}
\hline
\hline
Parameter & ZB InP\tabularnewline
\hline
Deformation potentials (eV) & \tabularnewline
$D_{1}$ & -6.3735 \tabularnewline
$D_{2}$ & 2.2868 \tabularnewline
$D_{3}$ & 8.6603 \tabularnewline
$D_{4}$ & -4.3301 \tabularnewline
$D^{(1)}_{5}$ & 3.8868 \tabularnewline
$D^{(2)}_{5}$ & 0 \tabularnewline
$D_{6}$ & 0 \tabularnewline
$D_{z}$ & -0.4434 \tabularnewline
$a_{c\parallel}$ & -6.0 \tabularnewline
$a_{c\perp}$ & -6.0 \tabularnewline
Elastic stiffness constant (GPa) & \tabularnewline
$C_{11}$ & 1011 \tabularnewline
$C_{12}$ & 561 \tabularnewline
$C_{44}$ & 456 \tabularnewline
\hline
\hline
\label{tab:strain_par}
\end{tabular}
\end{center}
\end{table}
\subsection{C. Spontaneous and Piezoelectric Polarization}
Piezoelectric polarization appears when a crystal is subjected to strain. In ZB
semiconductors grown along the [111] direction, the magnitude of the piezoelectric
polarization, in the unprimed coordinate system of Fig. \ref{fig:coordinate_systems},
is given by \cite{prb-35-1242}:
\begin{equation}
P_{i} = 2e_{14}\varepsilon_{jk}
\end{equation}
\\
where $e_{14}$ is the piezoelectric constant for ZB materials, $(i,j,k)$ are the
cartesian coordinates $(x,y,z)$ in a cyclic order and $\varepsilon_{jk}$ are the
strain tensor components.
Applying the coordinate system rotation in the piezoelectric polarization vector
components in order to describe them in the primed coordinate system we obtain:
\begin{eqnarray}
P_{x}^{\prime} & = & \frac{1}{\sqrt{6}}\left(P_{x}+P_{y}-2P_{z}\right)=0\nonumber \\
P_{y}^{\prime} & = & \frac{1}{\sqrt{2}}\left(-P_{x}+P_{y}\right)=0\nonumber \\
P_{z}^{\prime} & = & \frac{1}{\sqrt{3}}\left(P_{x}+P_{y}+P_{z}\right)=\sqrt{3}P
\end{eqnarray}
The resulting piezoelectric polarization along the growth direction is then:
\begin{equation}
P_{z}^{\prime} = -\frac{2}{\sqrt{3}}e_{14}\left(1+\frac{1}{\sigma^{(111)}}\right)\varepsilon_{xx}^{\prime}
\end{equation}
The spontaneous polarization in the WZ structure arises from the relative
displacement between cations and anions when the ratio $c/a$ differs from
its ideal value.
In a heterostructure, the effect of the different polarizations in each region
creates an electric field through the whole structure. The net electric field in
a determined layer, $i$, due to spontaneous and piezoelectric polarizations in
the system is given by \cite{paul-harrison}:
\begin{equation}
E_{i}=\frac{\overset{N}{\underset{j=1}{\sum}}\left(P_{j}-P_{i}\right)\frac{l_j}{\varepsilon_j}}{\varepsilon_{i}\overset{N}{\underset{j=1}{\sum}}\frac{l_j}{\varepsilon_j}}
\end{equation}
\\
where the sum over $j$ runs over all $N$ layers of the heterostructure, each
with polarization $P_j$, dielectric constant $\varepsilon_j$ and length $l_j$.
\subsection{D. Effective mass equation in reciprocal space}
The envelope function approximation \cite{book-bastard, IEEEjqe-22-1625} is applied
to couple the different crystal structures along the growth direction in the NW.
In each region the wave function is expanded in terms of the Bloch functions of the
corresponding polytype. Thus, the wave function of the whole system is given by:
\begin{equation}
\psi(\vec{r}) = \sum_{l}e^{i(\vec{k}\cdot\vec{r})}g_{l}(\vec{r})u_{l}^{(WZ,ZB)}(\vec{r})
\label{eq:eva}
\end{equation}
\\
where $g_{l}(\vec{r})$ are the envelope functions of the $l$-th basis state.
Considering different Bloch functions for each region, the Hamiltonian parameters
vary along the growth direction, making it possible to use the common k$\cdot$p
and strain matrices, (\ref{eq:Hv_kp}) and (\ref{eq:Hv_strain}), for both crystal
structures. Moreover, since each crystal structure dictates its symmetry to their
respective Bloch functions, some matrix elements can be forbidden by symmetry in the
region of a certain crystalline phase. For example, the $A_z$ parameter is zero in
WZ regions whereas the $\Delta_1$ parameter is zero in ZB regions.
To represent the growth-direction dependence of the Hamiltonian parameters and envelope
functions, the plane wave expansion is used. This formalism considers the periodicity of
the whole system, allowing growth-dependent functions to be expanded in Fourier coefficients:
\begin{equation}
U(\vec{r}) = \sum_{\vec{K}} U_{\vec{K}} e^{i \vec{K} \cdot \vec{r}}
\label{eq:pwe}
\end{equation}
\\
where $U_{\vec{K}}$ are the Fourier coefficients of the function $U(\vec{r})$ and
$\vec{K}$ is a reciprocal lattice vector. The Fourier expansion also induces the
change $\vec{k} \rightarrow \vec{k} + \vec{K}$ in the k$\cdot$p matrix.
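As a minimal illustration of Eq. (\ref{eq:pwe}), consider a periodic square profile $U(z)$ (e.g. a band-edge step of height $U_0$ over a well of width $l$ in a supercell of length $L$). Its Fourier coefficients have a simple closed form, and truncating the sum at $n_{\max}$ reconstructs $U$ away from the interfaces. The helper below and the example numbers ($l=200$\,\AA, $L=500$\,\AA) are our own illustration:

```python
import numpy as np

def fourier_coeffs(U0, l, L, nmax):
    """Fourier coefficients U_Kn, K_n = 2*pi*n/L, of a periodic profile that
    equals U0 inside a well of width l centered in a supercell of length L."""
    c = {0: U0 * l / L}
    for n in range(1, nmax + 1):
        cn = U0 * np.sin(np.pi * n * l / L) / (np.pi * n)
        c[n] = cn
        c[-n] = cn
    return c

# Reconstruct U(z) at the well center (z = 0) and compare with U0
c = fourier_coeffs(1.0, 200.0, 500.0, 4000)
U_center = sum(cn * np.exp(2j * np.pi * n * 0.0 / 500.0)
               for n, cn in c.items()).real
```

The $n=0$ coefficient is simply the spatial average $U_0 l/L$, and the truncated series converges to $U_0$ at interior points of the well (with the usual Gibbs oscillations at the interfaces).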
\section{IV. Results and discussion}
The NW system chosen to apply our model is a WZ/ZB/WZ single well structure. Although
a real NW is composed of multiple polytypical quantum wells with different sizes, the
analysis of just a single well can bring out the physics of the polytypical interface.
The effects of lateral confinement are neglected in a first approach, assuming NWs with
large lateral dimensions. Strain, piezoelectric and spontaneous polarization are also
included in the single well system.
When both crystal structures are put side by side, a band-offset is created at the
interface, originating a confinement profile. The band mismatch is also taken from
reference \cite{prb-81-155210}. Figure \ref{fig:kp_interface} exhibits the WZ/ZB
interface for InP in two different schemes: on the left, the energies at $\vec{k}=0$
for the diagonalized Hamiltonian and, on the right, the diagonal terms of the Hamiltonian.
Although the composition of the states in the diagonalized energy bands is the same
at $\vec{k}=0$, the matrix is not constructed in this basis. Since the variation of the
matrix elements along the growth direction is well defined, the scheme on the right
is more convenient to analyze the potential profile of the system.
\begin{figure}[h]
\begin{center}
\includegraphics{kp_interface}
\caption{Left side: Band edge energies at $\vec{k}=0$ for the diagonalized Hamiltonian.
Right side: Diagonal terms of the Hamiltonian at $\vec{k}=0$.}
\label{fig:kp_interface}
\end{center}
\end{figure}
In all performed calculations, the entire length of the system is set to
$500\,\textrm{\AA}$ and the width of the ZB region, $l$, is variable. Figure
\ref{fig:single_well_and_strain_pots} shows the single well potentials with and
without the effect of strain. Since the potential profiles exhibit a type-II behavior,
we expect a spatial separation of the carriers: electrons are more likely to be in the
ZB region and the holes in the WZ region.
The strain considered here, $-0.8\%$, is an intermediate value between two results
available in the literature from \textit{ab initio} calculations. Reference
\cite{ssc-151-781} shows that the deviation between the lattice constants of ZB[111]
and WZ[0001] is $-1.3\%$ and reference \cite{nanotec-21-505709} shows $-0.3\%$. Also,
reference \cite{nanolet-10-4055} suggests a difference smaller than $0.5\%$ between
the lattice constants of the two polytypes. The effect of strain makes the potential
wells in the conduction and valence bands shallower, reducing the confinement of the
carriers. We thus expect fewer confined states for the strained potential compared to
the unstrained one.
\begin{figure}[h]
\begin{center}
\includegraphics{single_well_and_strain_pots}
\caption{Diagonal potential profile of the Hamiltonian for the polytypical InP system
with and without strain. The width of the ZB region, $l$, can change but the whole
system's dimension remains constant with $500\,\textrm{\AA}$.}
\label{fig:single_well_and_strain_pots}
\end{center}
\end{figure}
The conduction and valence band structures for the potential profile without strain
are presented in Fig. \ref{fig:single_well_bs} for three different widths of the ZB region.
The calculations were performed up to 10\% in the $\Gamma-T$ direction and 100\% in
the $\Gamma-A$ direction. For the valence band 64 energy states are presented while
only 18 are presented for the conduction band. Since the system has no asymmetric
potential, the energy bands are two-fold degenerate in spin, therefore 32 states are
visible in the valence band and 9 in the conduction band. For the three different
values of $l$, the conduction energy bands are labeled, from bottom to top, as
EL1-EL9, composed of $\left|c_{7,8}\right\rangle$ states. The valence bands, from
top to bottom, are labeled as HH1-19, LH1-4, HH20-21, LH5-7, HH22-23, LH8-9 for
$l = 100\,\textrm{\AA}$; HH1-16, LH1-2, HH17, LH3, HH18, LH4, HH19, LH5-7, HH20-21,
LH8-9, HH22-23 for $l = 160\,\textrm{\AA}$ and HH1-14, LH1-2, HH15, LH3, HH16-17,
LH4-5, HH18, LH6, HH19, LH7, HH20, LH8, HH21, LH9, HH22, LH10 for $l = 200\,\textrm{\AA}$.
Since the highest valence band states are HH there is no significant anticrossing among the
energy bands in the $\Gamma-T$ direction. The anticrossing is characteristic of interactions
between HH and LH bands in ZB and WZ quantum well structures. A slight anticrossing, however,
can be seen in the energy region just above $-75\,\text{meV}$, which is next to the interaction
region of the $\left|c_{1,4}\right\rangle$ and $\left|c_{2,5}\right\rangle$ profiles.
Increasing the value of $l$ we find that the number of confined states in the conduction band
increases. On the other hand, for the valence band the number of confined states decreases
because the WZ region's width also decreases.
\begin{figure}[h]
\begin{center}
\includegraphics{single_well_bs}
\caption{Band structure for the unstrained profile in Figure \ref{fig:single_well_and_strain_pots}.
The solid horizontal line is the top energy of the conduction band well and the dashed horizontal line
is the bottom energy of the conduction band well. The $\Gamma-T$ direction refers to $k_x$ and
$\Gamma-A$ to $k_z$.}
\label{fig:single_well_bs}
\end{center}
\end{figure}
For the strained potential profile, the band structures for three different widths of
the ZB region are displayed in Figure \ref{fig:single_well_strain_bs}. The calculations were
performed considering the same extension for the FBZ of the unstrained band structure.
For the three different values of $l$, the conduction energy bands are nominated, from
bottom to top, as EL1-EL9, composed of $\left|c_{7,8}\right\rangle$ states. The valence
bands, from top to bottom, are nominated as HH1-22, LH1, HH23, LH2-7, HH24-25 for
$l = 100\,\textrm{\AA}$; HH1-21, LH1-3, HH22-23, LH4-8, HH24 for $l = 160\,\textrm{\AA}$
and HH1-21, LH1-4, HH22-23, LH5-9 for $l = 200\,\textrm{\AA}$.
As expected, the number of confined states for the strained profile is smaller than
for the unstrained one. Nonetheless, a similar confinement trend is visible here
as the value of $l$ increases: the number of confined states in the conduction band
increases while in the valence band it decreases.
An interesting feature presented in the strained band structure is the presence of some
confined states below the top region, around $-62\,\text{meV}$. This suggests a confinement
in the intermediate region of the $\left|c_{2,5}\right\rangle$ and $\left|c_{3,6}\right\rangle$
profiles. Note that the coupling of these two profiles at $\vec{k}=0$ happens because of
the off-diagonal spin-orbit term.
The composition of the energy states at $\vec{k}=0$ in the band structure is similar for
the strained and unstrained cases: they are just HH or LH states. There are no CH states
in the energy range considered here. The major contribution to CH states comes from
the $\left|c_{3,6}\right\rangle$ profile, which is the lowest one in both cases.
The composition of the energy states can reveal important trends in the
luminescence spectra for this kind of system. For example, at $\vec{k}=0$ the dominant
symmetry of the energy states belongs to $\left(x,y\right)$, which means that the
luminescence is more intense perpendicular to the growth direction. However,
experimental measurements \cite{prb-82-125327} indicate that the intensities
perpendicular and parallel to the growth direction are comparable. Therefore, we expect
that the contribution to the parallel luminescence comes from states at $\vec{k} \neq 0$.
For $\vec{k}$ points away from the $\Gamma$-point, there is a stronger mixing among all
the basis states.
\begin{figure}[h]
\begin{center}
\includegraphics{single_well_strain_bs}
\caption{Band structure for the strained profile in Fig. \ref{fig:single_well_and_strain_pots}.
The solid and dashed lines have the same meaning as in Figure \ref{fig:single_well_bs}. The
$\Gamma-T$ direction refers to $k_x$ and $\Gamma-A$ to $k_z$.}
\label{fig:single_well_strain_bs}
\end{center}
\end{figure}
The effect of the ZB region width on the conduction and valence band states at
$\vec{k}=0$, for both strained and unstrained potential profiles, is presented in Figure
\ref{fig:single_well_and_strain_gamma_states}. It is possible to observe that the number of
confined states in the conduction band increases as the value of $l$ increases. On the other hand,
the number of confined states in the valence band decreases. Nevertheless, the effect of the
variation of $l$ is more significant for the conduction band since the electron effective mass in
ZB ($m_e^*/m_0=0.0795$) is smaller than the heavy hole mass of WZ ($m_{HH}^\parallel/m_0=1.273$ and
$m_{HH}^\perp/m_0=0.158$). The same trend is also observed in the strained case. Also, since the
bottom of the well in the conduction band has a higher value in the strained case, we can expect the
interband transition energies to be blueshifted with the inclusion of strain effects.
\begin{figure}[h]
\begin{center}
\includegraphics{single_well_and_strain_gamma_states}
\caption{The first 5 states of the conduction and valence bands at $\vec{k}=0$ as
a function of the ZB region width $l$. The solid line indicates the top of the conduction
band well and the dashed line indicates the bottom.}
\label{fig:single_well_and_strain_gamma_states}
\end{center}
\end{figure}
The presence of strain effects gives rise to piezoelectric polarization. For ZB InP,
the value for the piezoelectric constant used was $e_{14}=0.035\,\text{C/m}^2$, taken from Ref.
\cite{jap-92-932}. For both crystal structures, the value used for static dielectric constant
was $12.5$. The unknown parameter is the spontaneous polarization for WZ InP. Ref. \cite{nanolet-10-4055}
suggests that this value is smaller than that of InN ($-0.03\,C/m^2$). In an attempt to
estimate this value for InP, we performed the band structure calculations considering
a range of values for $P_{sp}$.
The energy of the first 5 conduction and valence band states at $\vec{k}=0$ as a function
of spontaneous polarization in WZ InP for three different ZB region widths is presented
in Figure \ref{fig:sp_gamma_states}. The considered values for spontaneous polarization
are $-0.02\,C/m^2$, $-0.015\,C/m^2$, $-0.01\,C/m^2$, $-0.005\,C/m^2$ and $-0.001\,C/m^2$. For
$l=160\,\textrm{\AA}$ and $l=200\,\textrm{\AA}$ there is a crossing between the conduction
and valence band states. This is not observed experimentally, therefore we consider this
region \textit{forbidden}. The \textit{allowed} values for spontaneous polarization
considered here are then $-0.01\,C/m^2$, $-0.005\,C/m^2$ and $-0.001\,C/m^2$.
\begin{figure}[h]
\begin{center}
\includegraphics{sp_gamma_states}
\caption{The first 5 states of the conduction and valence bands at $\vec{k}=0$ as
a function of WZ spontaneous polarization $P_{sp}$. Notice the crossing of valence
and conduction bands.}
\label{fig:sp_gamma_states}
\end{center}
\end{figure}
The diagonal potential profile including effects of piezoelectric and spontaneous polarization
is presented in Figure \ref{fig:sppz_well_200_pots}. The ZB region is fixed at $200\,\textrm{\AA}$.
Analyzing these profiles, we expect strong coupling in the band structure for higher values
of $P_{sp}$, since the profiles are closer to each other. This induces mixing of states
because a given energy can lie within more than one profile.
\begin{figure}[h]
\begin{center}
\includegraphics{sppz_well_200_pots}
\caption{Diagonal potential profile of the Hamiltonian for the polytypical InP system
considering strain, piezoelectric polarization in the ZB region and spontaneous
polarization in the WZ region for $l=200\,\textrm{\AA}$. The values for the spontaneous
polarization were chosen from Figure \ref{fig:sp_gamma_states}.}
\label{fig:sppz_well_200_pots}
\end{center}
\end{figure}
The resulting band structures for the three different potential profiles of Figure
\ref{fig:sppz_well_200_pots} are shown in Figure \ref{fig:sppz_well_200_bs}.
For the three different values of $P_{sp}$, the conduction energy bands are labeled,
from bottom to top, as EL1-EL9, composed of $\left|c_{7,8}\right\rangle$ states.
The valence bands, from top to bottom, are labeled as HH1-2, LH1, HH3, LH2, HH4-5,
LH3, HH6, LH4, HH7, LH5, HH8, LH6, HH9, LH7, HH10, LH8, HH11, LH9, HH12, LH10, HH13,
LH11, HH14, LH12, HH15, LH13, HH16, LH14, HH17, LH15 for $P_{sp} = -0.01\,C/m^2$;
HH1-3, LH1, HH4, LH2, HH5-6, LH3, HH7, LH4, HH8, LH5, HH9-10, LH6, HH11, LH7, HH12,
LH8, HH13, LH9, HH14, LH10, HH15-16, LH11, HH17, LH12-13, HH18, LH14 for
$P_{sp} = -0.005\,C/m^2$ and HH1-7, LH1, HH8-9, LH2, HH10-11, LH3, HH12-13,
LH4, HH14-15, LH5, HH16, LH6, HH17-18, LH7, HH19, LH8, HH20, LH9, HH21-22, LH10
for $P_{sp} = -0.001\,C/m^2$. The number of HH states increases as the value of
$P_{sp}$ decreases.
The anticrossings and also the spin splitting in the valence subbands are more visible
for higher values of $P_{sp}$. The strength of the resulting electric field not only
increases the mixing of HH and LH states but also increases the value of the spin splitting
in each subband. This spin splitting is known as the Rashba effect \cite{jpc-17-6039} and is
due to the potential inversion asymmetry, even though the term $\alpha(\vec{\sigma}\times\vec{k})\cdot\vec{E}$
does not appear explicitly in the Hamiltonian \cite{book-winkler}.
The number of confined states decreases as the spontaneous polarization decreases.
On the other hand, the energy difference between the conduction and valence band ground
states increases as the spontaneous polarization decreases, blueshifting the interband energy
transitions.
\begin{figure}[h]
\begin{center}
\includegraphics{sppz_well_200_bs}
\caption{Band structure for the profiles presented in Figure \ref{fig:sppz_well_200_pots}.
The spin-splitting of the energy bands is due to the field induced asymmetry.}
\label{fig:sppz_well_200_bs}
\end{center}
\end{figure}
The piezoelectric and spontaneous polarizations also induce a spatial separation of the
carriers. This effect can be seen in the probability densities at $\vec{k}=0$ shown in Fig.
\ref{fig:sppz_well_200_probdens}. The lowest four states of the conduction band and the
highest four states of the valence band are presented. At $\vec{k}=0$ the spin-up and
spin-down wave functions are degenerate. We can see that the overlap increases for more excited
states, also blueshifting the energy peak in the interband transitions. Since the potential
profile is not completely even or odd, the envelope functions no longer have well-defined parities.
\begin{figure}[h]
\begin{center}
\includegraphics{sppz_well_200_probdens}
\caption{Probability densities at $\vec{k}=0$ for the lowest four states of the conduction band and the
highest four states of the valence band of Fig. \ref{fig:sppz_well_200_bs}.
The solid lines are the HH states and dashed lines represent the LH states.}
\label{fig:sppz_well_200_probdens}
\end{center}
\end{figure}
\section{V. Conclusions}
The basic result of this study is the theoretical model based on the k$\cdot$p
method and group theory concepts to calculate band structures of WZ/ZB polytypical
systems in the vicinity of the band edge. The method allows us to describe in the
same matrix Hamiltonian the ZB and WZ structures, with $k_z$ along the [111] and
[0001] directions, respectively. Since the WZ structure is less symmetric, the ZB
parameters are assigned to the WZ ones. Our method not only describes
the k$\cdot$p terms of the Hamiltonian but also includes the strain and polarization
(spontaneous and piezoelectric) effects.
Extracting the parameters of WZ InP from Ref. \cite{prb-81-155210} we applied our
model to a WZ/ZB/WZ single well in order to understand the physics of the polytypical
interface. The potential profile at the WZ/ZB interface is type-II, whose characteristic feature is
the spatial separation of carriers. The calculations performed in this study preserve this
characteristic.
Due to the lack of parameters in the literature for WZ InP, only the strain effect in
the ZB region was considered here. This seems to be a reasonable consideration since
the WZ structure is the dominant phase in NW structures. However, such strain parameters
would be fundamental in a system whose stable lattice constant is an intermediate
value between the WZ and ZB InP lattice parameters.
Within the limitation of strain, the piezoelectric polarization was also considered
in the ZB region. For the WZ region, only the spontaneous polarization appears. Since
there is no value in the literature for the spontaneous polarization of WZ InP, a
range of values was considered in the simulations. Some of these values, however,
induce a negative gap in the system. There are no data in the literature that corroborate
this effect.
The proposed model, jointly with the obtained results, proved to be useful in the
study of electronic band structures of WZ/ZB polytypical systems, such as NWs.
Exploring the opportunities of band gap engineering considering not only different
compounds, but also different crystal structures, could lead to the development of
novel nanodevices.
\section{Acknowledgements}
The authors acknowledge financial support from the Brazilian funding agencies
CAPES and CNPq.
\section{Introduction}\label{introduce1}
As an important research area of quantum chromodynamics (QCD), the phase transitions of strongly interacting matter still present many puzzles. Lattice simulations show evidence that QCD matter exhibits a crossover transition with increasing temperature $T$ at zero baryon chemical potential \cite{Aoki:2006we,Bazavov:2011nk,Bhattacharya:2014ara}. It is generally believed that there is a critical end point (CEP) at some finite chemical potential in the QCD phase diagram, and that the crossover transition converts into a first-order phase transition as the baryon chemical potential $\mu_B$ increases beyond the CEP \cite{Fodor:2001pe,Fodor:2004nz,Gavai:2004sd}. The physics of the CEP and the search for it are crucial for our understanding of the QCD phase diagram, and they have attracted extensive studies for decades, both theoretically and experimentally \cite{Stephanov:2007fk,Luo:2017faz}. However, no final conclusion has been reached on these matters. Lattice QCD, as a first-principle calculation, is hindered by the sign problem at finite chemical potential, though some tentative methods have been proposed to address this issue \cite{Fodor:2001au,Allton:2002zi,deForcrand:2002hgr,Ding:2017giu}. The QCD phase diagram and the relevant properties of the CEP have also been studied in various effective models; see Ref. \cite{Stephanov:2007fk} for a review.
In this work, we continue the holographic QCD program aimed at studying the low-energy physics of the strong interaction in terms of the anti-de Sitter/conformal field theory (AdS/CFT) correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}. There has been a large body of holographic studies of low-energy hadron physics over the past decades, including the hadron spectrum, QCD thermodynamics and many other properties \cite{DaRold:2005mxj,Erlich:2005qh,Karch:2006pv,deTeramond:2005su,Brodsky:2014yha,Babington:2003vm,Kruczenski:2003uq,Sakai:2004cn,Sakai:2005yt,Csaki:2006ji,Cherman:2008eh,Gherghetta:2009ac,Kelley:2010mu,Sui:2009xe,Sui:2010ay,Cui:2013xva,Cui:2014oba, Li:2012ay,Li:2013oda,Policastro:2001yc,Cai:2009zv,Cai:2008ph,Sin:2004yx,Shuryak:2005ia,Nastase:2005rp, Nakamura:2006ih,Sin:2006pv,Janik:2005zt, Herzog:2006gh,Gubser:2008yx,Gubser:2008ny,Gubser:2008sz,Noronha:2009ud,DeWolfe:2010he,Finazzo:2013efa,Finazzo:2014zga,Yaresko:2013tia,Li:2011hp,Li:2014hja,Li:2014dsa,Fang:2016uer,Fang:2016dqm,Chelabi:2015cwn,Chelabi:2015gpc,Fang:2015ytf,Evans:2016jzo,Mamo:2016xco,Fang:2016cnt,Dudal:2016joz,Dudal:2018rki,Ballon-Bayona:2017dvv,Critelli:2017oub,Critelli:2017euk,Critelli:2018osu,Rougemont:2018ivt,ChenXun:2019zjc}. Although AdS/CFT has been taken as a powerful tool in the description of low-energy QCD, a caveat must be given here on the validity of this method, in view of the large-$N$ limit and the supergravity approximation, which may be invalid in the ultraviolet (UV) region of QCD with asymptotic freedom. String loop corrections are indeed required to provide an adequate account of QCD thermodynamics in the holographic framework. Nevertheless, at the current stage, we still need a down-to-earth treatment in order to understand the holographic picture of low-energy QCD.
This work focuses on chiral transition in AdS/QCD with $2+1$ flavors, especially on the CEP-related properties of chiral phase diagram. We remark that a more realistic description for QCD phase diagram demands a detailed study on the equation of state, which is intimately related to the dual gravity background \cite{Gubser:2008yx,Gubser:2008ny,Gubser:2008sz,Noronha:2009ud,DeWolfe:2010he,Finazzo:2013efa,Finazzo:2014zga,Yaresko:2013tia,Li:2011hp,Li:2014hja}. Following the previous studies \cite{Colangelo:2011sr,Jarvinen:2011qe,Alho:2012mh,Alho:2013hsa,Jarvinen:2015ofa,Li:2016smq,Bartz:2016ufc,Bartz:2017jku,Li:2017ple,Chen:2018msc}, we just adopt a fixed black hole background to study the thermodynamical behaviors of chiral transition that comes from the flavor sector of AdS/QCD. We expect that the sensible chiral transition behaviors obtained from our model can still be generated somehow from AdS/QCD with dynamical background.
In Ref. \cite{Fang:2016nfj}, we proposed an improved soft-wall AdS/QCD model with linear confinement and spontaneous chiral symmetry breaking in the two-flavor case, where the light meson spectra and the properties of chiral transition have been studied in detail. We then generalized this two-flavor AdS/QCD model to the $2+1$ flavor case, where a 't Hooft determinant term of the bulk scalar field turns out to be crucial for the description of quark-mass phase diagram \cite{Fang:2018vkp}. The chemical potential effects on chiral transition have also been studied in this model, where the chiral phase diagram in the $\mu -T$ plane can be obtained with a CEP linking the crossover transition with the first-order phase transition \cite{Fang:2018axm}. However, the model parameters in the aforementioned works are all tuned artificially to generate correct chiral transition behaviors, without any constraint from low-energy hadron properties. In this work, we try to compute the octet meson spectra and the related decay constants in the improved soft-wall AdS/QCD model with $2+1$ flavors, by which the model parameters can be constrained and the relevant properties of chiral transition at finite baryon chemical potential can be investigated.
The paper is organized as follows. In Sec. \ref{model}, we outline the improved soft-wall AdS/QCD model with $2+1$ flavors. In Sec. \ref{mass-octet}, we first derive the equation of motion (EOM) of octet pseudoscalar, vector and axial-vector mesons, and then fit the model parameters by the octet meson spectra and the related decay constants. In Sec. \ref{chiraltran}, we investigate the chiral transition behaviors at finite $\mu_B$, from which the chiral phase diagram in the $\mu-T$ plane can be obtained. In Sec. \ref{conclution}, we come to a brief summary of our work and conclude with some remarks.
\section{The improved soft-wall AdS/QCD model with $2+1$ flavors}\label{model}
The improved soft-wall AdS/QCD model is constructed on the background of AdS$_5$ spacetime with the metric ansatz
\begin{equation}\label{metric}
ds^2=e^{2A(z)}\left(\eta_{\mu\nu}dx^{\mu}dx^{\nu}-dz^2\right) ,
\end{equation}
where $\eta_{\mu\nu}=(+1,-1,-1,-1)$ and $A(z)=-\mathrm{log}\frac{z}{L}$ (the AdS radius is set to be $L=1$ for simplicity).
The bulk action of the improved soft-wall AdS/QCD model with $2+1$ flavors can be written as
\begin{equation}\label{2+1-act}
S =\int d^{5}x\,\sqrt{g}\,e^{-\Phi(z)}\left[\mathrm{Tr}\{|DX|^{2}-m_5^2(z)|X|^{2}
-\lambda |X|^{4}-\frac{1}{4g_{5}^2}(F_{L}^2+F_{R}^2)\} -\gamma\,\mathrm{Re}\{\det X\}\right] ,
\end{equation}
where the covariant derivative of bulk scalar field has the form $D^MX=\partial^MX-i A_L^MX+i X A_R^M$, and the field strength $F_{L,R}^{MN}=\partial^MA_{L,R}^N-\partial^NA_{L,R}^M-i[A_{L,R}^M,A_{L,R}^N]$ with the chiral gauge field $A_{L,R}^M=A_{L,R}^{a,M}T^a =\frac{1}{2}A_{L,R}^{a,M}\lambda^a$, where $\mathrm{Tr}(T^aT^b)=\frac{1}{2}\delta^{ab}$ and $\lambda^a$ denote Gell-Mann matrices. The gauge coupling is $g_5^2=12\pi^2/N_c$ with $N_c$ being the color number \cite{Erlich:2005qh}. The running mass of bulk scalar field takes the form $m_5^2(z)=-3-\mu_c^2\,z^2$ with the constant term $-3$ determined by the mass-dimension relation in AdS/CFT \cite{Erlich:2005qh} and the infrared (IR) asymptotics of $m_5^2(z)$ empirically related to the low-energy hadron properties \cite{Fang:2016nfj}. To give a better fitting for the $\rho$ meson spectrum, we will use a modified dilaton field in this work,
\begin{equation}\label{Phi}
\Phi(z) =\mu_{g}^2\,z^2\left(1-e^{-\frac{1}{4}\mu_{g}^2z^2}\right),
\end{equation}
which has the IR limit $\Phi(z\to\infty)\sim z^2$ to reproduce the Regge spectra of highly excited mesons. Here we remark that the ultraviolet (UV) asymptotics of $\Phi(z)$ has little effect on the properties of the chiral phase diagram and the other meson spectra in our framework. The 't Hooft determinant term $\mathrm{Re}\{\det X\}$ has been introduced into the bulk action for the correct realization of chiral transition in the $2+1$ flavor case \cite{Chelabi:2015gpc}.
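The asymptotics quoted above can be made explicit by expanding Eq. (\ref{Phi}): using $1-e^{-x}\simeq x$ for small $x$,
\begin{equation}
\Phi(z\to 0) \simeq \frac{1}{4}\,\mu_{g}^4\,z^4 , \qquad \Phi(z\to\infty) \simeq \mu_{g}^2\,z^2 ,
\end{equation}
so the modification only softens the UV behavior of the dilaton, while the quadratic IR form that fixes the Regge slope is left intact.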
The vacuum expectation value (VEV) of bulk scalar field takes the form
\begin{equation}\label{VEVs}
\langle X \rangle=\frac{1}{\sqrt{2}}
\begin{pmatrix}
\chi_u(z) & 0 & 0 \\
0 & \chi_d(z) & 0 \\
0 & 0 & \chi_s(z)
\end{pmatrix}
\end{equation}
with $\chi_u=\chi_d$ for the $2+1$ flavor case. The action of the scalar VEV $\VEV{X}$ can be obtained from the bulk action (\ref{2+1-act}) as
\begin{equation}\label{2+1-vev-act}
S_{\chi} =\int d^{5}x\,\sqrt{g}\,e^{-\Phi(z)}\left[\mathrm{Tr}\{\partial^{z}\VEV{X}\partial_{z}\VEV{X} -m_5^2(z)\VEV{X}^{2}
-\lambda\VEV{X}^{4}\} -\gamma\,\det\VEV{X}\right],
\end{equation}
from which the EOMs of $\chi_{u}$ and $\chi_{s}$ can be derived as
\begin{align}
\chi_{u}'' +\left(3A'-\Phi'\right)\chi'_{u} -e^{2A}\left(m_5^2\chi_u +\lambda\chi_u^3 +\frac{\gamma}{2\sqrt{2}}\chi_u\chi_s \right) &=0, \label{vevX-eom1} \\
\chi_{s}'' +\left(3A'-\Phi'\right)\chi'_{s} -e^{2A}\left(m_5^2\chi_s +\lambda\chi_s^3 +\frac{\gamma}{2\sqrt{2}}\chi_{u}^{2}\right) &=0, \label{vevX-eom2}
\end{align}
which are coupled with each other as a result of the 't Hooft determinant term of the bulk scalar field.
The UV asymptotic forms of the scalar VEV $\chi_{u,s}$ near the boundary can be obtained from Eqs. (\ref{vevX-eom1}) and (\ref{vevX-eom2}) as
\begin{align}
\chi_u(z \sim 0) =\frac{1}{\sqrt{2}}&\left[ m_u\,\zeta\,z-\frac{1}{4}\,m_u\, m_s\,\gamma \, \zeta^2\,z^2+\frac{\sigma_u}{\zeta}z^3 +\frac{1}{16} m_{u}\zeta\left(-\frac{1}{2}\, m_{s}^{2}\, \gamma^{2}\,\zeta^{2} \right.\right. \nonumber \\
&\quad\left.\left.-\frac{1}{2}\,m_{u}^{2}\,\gamma^{2}\,\zeta^{2}+4\,m_{u}^{2}\,\zeta^{2}\,\lambda-8\,\mu_{c}^{2} \right)\,z^{3}\, \log{z}+\cdots\right] , \label{asy-chiu1} \\
\chi_s(z \sim 0) =\frac{1}{\sqrt{2}}&\left[m_s\,\zeta\,z-\frac{1}{4}\,m_u^2\,\gamma \, \zeta^2\,z^2+\frac{\sigma_s}{\zeta}z^3 +\frac{1}{16} m_{s}\zeta\left(-\,m_{u}^{2}\, \gamma^{2}\,\zeta^{2}\right.\right. \nonumber \\
&\quad \left.\left.+4\,m_{s}^{2}\,\zeta^{2}\,\lambda-8\,\mu_{c}^{2}\right)\,z^{3}\, \log{z} +\cdots\right] , \label{asy-chis1}
\end{align}
where $m_{u,s}$ denote current quark masses and $\sigma_{u,s}$ denote chiral condensates, and the normalization constant $\zeta$ is fixed as $\zeta=\frac{\sqrt{N_c}}{2\pi}$ \cite{Cherman:2008eh}. The coefficient $\frac{1}{\sqrt{2}}$ is necessary to attain the Gell-Mann--Oakes--Renner (GOR) relation $m_{\pi}^2f_\pi^2 =2m_u\sigma_u$ for the two-flavor case \cite{Erlich:2005qh}. The IR asymptotics of the scalar VEV should take the linear form $\chi_{u,s}(z\to\infty) \sim z$ to generate the mass split of chiral partners \cite{Fang:2016nfj}. With the above UV asymptotic forms and the IR boundary conditions, the scalar VEV $\chi_{u,s}$ can be solved numerically from Eqs. (\ref{vevX-eom1}) and (\ref{vevX-eom2}), and the chiral condensates $\sigma_{u,s}$ can be extracted.
\section{Octet meson spectra and decay constants}\label{mass-octet}
\subsection{Input parameters}
Now we consider the octet meson spectra and the related decay constants, which will be used to constrain the model parameters $\mu_{g}$, $\gamma$, $\lambda$ and $\mu_{c}$. The parameter $\mu_{g}$ can be fixed by the $\rho$ meson spectrum, while the parameters $\gamma$ and $\lambda$ can be determined by the $\pi$ meson spectrum and the pion decay constant. The last parameter $\mu_c$ is strongly correlated with the chemical potential value of the CEP, and it also affects the global fitting of the octet meson spectra. The physical quark masses $m_{u,s}$ are slightly tuned within the error range of experimental data in order to attain the best fit to the ground-state masses of the octet pseudoscalar mesons.
We first derive the EOMs of octet pseudoscalar, vector and axial-vector mesons from the linearized action of relevant bulk fields, and then we compute the mass spectra of these octet mesons and related decay constants. The fixed parameter values are listed in Table \ref{parameter-fit}.
\begin{table}
\begin{center}
\begin{tabular}{cccccc}
\hline\hline
$m_u$(MeV) & $m_s$(MeV) & $\mu_{g}$(MeV) & $\mu_c$(MeV) & $\lambda$ & $\gamma$ \\
\hline
3.24 & 98 & 480 & 877.8 & 130 & -69.53 \\
\hline\hline
\end{tabular}
\caption{The input values of model parameters in the numerical calculation. The quark masses $m_{u,s}$ are taken from Ref. \cite{Tanabashi:2018oca}.}
\label{parameter-fit}
\end{center}
\end{table}
\subsection{Octet pseudoscalar mesons}
Following the usual procedure, the bulk scalar field $X$ can be decomposed into
\begin{equation}\label{X-decomp}
X=\xi(X_{0}+S^{a}T^a +S^0T^0)\xi, \quad \xi=\mathrm{exp}(iT^a\pi^a),
\end{equation}
where $\pi^a$ are pseudoscalar fields and $S^a$ ($S^0$) are $SU(3)$ octet (singlet) scalar fields. We will neglect the scalar part in our work. In the axial gauge $A_z=0$, the axial gauge fields can be written as $A_{\mu}^a=A_{\mu \bot}^a+\partial_{\mu}\phi^a$ to eliminate the cross terms of the pseudoscalar and axial-vector fields. With the Kaluza-Klein (KK) decomposition $\pi^a(x,z)=\sum_n\varphi_n(x)\pi^a_n(z)$, the EOMs of octet pseudoscalar mesons can be derived as
\begin{align}
\partial_z\left(e^{A-\Phi}\partial_z\phi^a_n\right) +2g_{5}^2e^{3A-\Phi}(M^2_A)_{ab} \left(\pi^b_n -\phi^b_n\right) &=0 , \label{3f-PS-eom2-2} \\
m_n^2\partial_z\phi^a_n -2g_{5}^2 e^{2A}(M^2_A)_{ab}\partial_z\pi^b_n &=0 \label{3f-PS-eom4-2}
\end{align}
with
\begin{align}\label{M-A2}
M^2_A =\begin{pmatrix} \chi_u^2 \mathbf{1}_{3\times3} & 0 & 0 \\ 0 & \frac{1}{4}(\chi_u+\chi_s)^2\mathbf{1}_{4\times4} & 0 \\ 0 & 0 & \frac{1}{3}(\chi_u^2+2\chi_s^2) \end{pmatrix},
\end{align}
which can be solved numerically with the boundary condition $\pi_n^a(z\to 0) =\phi_n^a(z\to 0) =\partial_z\phi_n^a(z\to\infty) =0$.
The computed mass spectra of octet pseudoscalar mesons are presented in Table \ref{pi-spectrum1}, where the experimental data are also shown for comparison. The data selection is based on the suggested quark-model assignments for the observed light mesons \cite{Tanabashi:2018oca}.
\begin{table}
\renewcommand\tabcolsep{8.0pt}
\begin{center}
\begin{tabular}{ccccccc}
\hline\hline
$n$ & $\pi$ exp. (MeV) & Model & $K$ exp. (MeV) & Model & $\eta$ exp. (MeV) & Model \\
\hline\hline
0 & $139.57$ & $139.57$ & $493.677\pm0.016$ & $492.85$ & $547.862\pm0.017$ & $585.19$ \\
\hline
1 & $1300\pm100$ & $1447$ & $1460$ & $1472$ & $1476\pm4$ & $1485$ \\
\hline
2 & $1812\pm12$ & $1817$ & $1874\pm43$ & $1836$ & $1751\pm15$ & $1846$ \\
\hline\hline
\end{tabular}
\caption{The model results of the mass spectra of octet pseudoscalar mesons, which are compared with experimental data taken from the suggested quark-model assignment for the observed light mesons \cite{Tanabashi:2018oca}.}
\label{pi-spectrum1}
\end{center}
\end{table}
\subsection{Octet vector and axial-vector mesons}
The chiral gauge fields are recombined into the vector field $V^M=\frac{1}{2}(A_L^M+A_R^M)$ and the axial-vector field $A^M=\frac{1}{2}(A_L^M-A_R^M)$. In the axial gauge $V_z= A_z=0$ and with the KK decomposition, the EOMs of octet vector and axial-vector mesons can be derived from the bulk action (\ref{2+1-act}) as
\begin{align}
\partial_z\left(e^{A-\Phi}\partial_zV_n^a\right) -2g_{5}^2e^{3A-\Phi}(M^2_V)_{ab}V_n^b +m_{V^a_n}^2e^{A-\Phi}V_n^a &=0, \label{V-eom1} \\
\partial_z\left(e^{A-\Phi}\partial_zA_n^a\right) -2g_{5}^2e^{3A-\Phi}(M^2_A)_{ab}A_n^b +m_{A^a_n}^2e^{A-\Phi}A_n^a &=0, \label{A-eom1}
\end{align}
where the matrix $M^2_A$ is given in (\ref{M-A2}) and $M^2_V$ has the form
\begin{align}\label{M-V1}
M^2_V =\begin{pmatrix} \mathbf{0}_{3\times3} & 0 & 0 \\ 0 & \frac{1}{4}(\chi_u-\chi_s)^2\mathbf{1}_{4\times4} & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\end{align}
Note that the octet axial-vector mesons are incorporated in the transverse part of axial gauge fields $A_{\mu \bot}^a$.
In terms of the redefinitions $V^a_n=e^{\omega/2} v^a_n$ and $A^a_n=e^{\omega/2} a^a_n$ with $\omega=\Phi -A$, Eqs. (\ref{V-eom1}) and (\ref{A-eom1}) can be transformed into the Schr\"{o}dinger form
\begin{align}
\partial_z^2v_n^a+\left(\frac{1}{2}\omega'' -\frac{1}{4}\omega'^2\right)v^a_n -2g_{5}^2e^{2A}(M^2_V)_{ab}v_n^b +m_{V^a_n}^2 v^a_n &=0, \label{V-eom2} \\
\partial_z^2a_n^a+\left(\frac{1}{2}\omega'' -\frac{1}{4}\omega'^2\right)a^a_n -2g_{5}^2e^{2A}(M^2_A)_{ab}a_n^b +m_{A^a_n}^2 a^a_n &=0. \label{A-eom2}
\end{align}
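For completeness, we note how the potential term arises: writing $e^{A-\Phi}=e^{-\omega}$, the substitution $V^a_n=e^{\omega/2}v^a_n$ gives
\begin{equation}
\partial_z\left(e^{-\omega}\partial_zV^a_n\right) =e^{-\omega/2}\left[\partial_z^2v^a_n +\left(\frac{1}{2}\omega'' -\frac{1}{4}\omega'^2\right)v^a_n\right] ,
\end{equation}
so that multiplying Eq. (\ref{V-eom1}) by $e^{\omega/2}$ yields Eq. (\ref{V-eom2}); the axial-vector case follows in the same way.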
The mass spectra of octet vector and axial-vector mesons can be obtained by solving the eigenvalue problem of Eqs. (\ref{V-eom2}) and (\ref{A-eom2}) with the following boundary condition
\begin{align}
& v_n^a(z\to 0)=0, \quad \partial_zv_n^a(z\to \infty) =0; \label{V-bound} \\
& a_n^a(z\to 0)=0, \quad \partial_za_n^a(z\to \infty) =0. \label{A-bound}
\end{align}
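As a numerical cross-check of Eq. (\ref{V-eom2}), the sketch below solves it by finite differences in the pure soft-wall limit, i.e. assuming the purely quadratic dilaton $\Phi=\mu_g^2z^2$ (the IR form of Eq. (\ref{Phi})) and dropping $M_V^2$; there the Schr\"{o}dinger potential reduces to $\mu_g^4z^2+3/(4z^2)$, whose exact spectrum is the linear trajectory $m_n^2=4\mu_g^2(n+1)$. The grid parameters are illustrative.

```python
import numpy as np

# Finite-difference check in the pure soft-wall limit (Phi = mu_g^2 z^2, M_V^2 = 0):
# with omega = Phi - A and A = -log z, the Schrodinger equation is
#   -v'' + (mu_g^4 z^2 + 3/(4 z^2)) v = m^2 v,
# whose exact eigenvalues are m_n^2 = 4 mu_g^2 (n + 1).
mu_g = 1.0                                  # work in units where mu_g = 1
n_grid, z_max = 1500, 10.0                  # illustrative grid parameters
z = np.linspace(z_max / n_grid, z_max, n_grid)
h = z[1] - z[0]
V = mu_g**4 * z**2 + 0.75 / z**2            # potential omega'^2/4 - omega''/2

# Discretize -v'' + V v = m^2 v with central differences, Dirichlet boundaries
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(n_grid - 1) / h**2, 1)
     - np.diag(np.ones(n_grid - 1) / h**2, -1))
m2 = np.linalg.eigvalsh(H)[:4]              # lowest four m_n^2, approximately 4, 8, 12, 16
```

With the fitted $\mu_g=480$ MeV, the pure quadratic limit would give $m_0=2\mu_g=960$ MeV; it is the modified UV behavior of Eq. (\ref{Phi}) that lowers the ground state toward the fitted $\rho$ mass of Table \ref{rho-spectrum1}.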
The numerical results are presented in Tables \ref{rho-spectrum1} and \ref{a1-spectrum1}, where only the data of the ground-state mesons in the axial-vector octet are shown, in consideration of the experimental uncertainties \cite{Tanabashi:2018oca}. We find that the mass spectra of $\phi$ and $f_1$ computed from the model show large discrepancies with the experimental values. This is due to the fact that the EOMs of the isovector and isosinglet states in the vector octet are of the same form as a result of the specific $M_V^2$, while the flavor symmetry breaking terms built into $M_A^2$ differ so little from each other that they cannot distinguish the masses of the different axial-vector mesons. One might introduce higher-order terms into the bulk action to improve the model results for the vector and axial-vector meson spectra without affecting the pseudoscalar part, e.g., $D_{[M}X D_{N]}X^{\dagger}F_L^{MN}$, $D_{[M}X^{\dagger}D_{N]}XF_R^{MN}$, $XX^{\dagger}F_L^{MN}F_{LMN}$, $X^{\dagger}XF_R^{MN}F_{RMN}$, $XF_R^{MN}X^{\dagger}F_{LMN}$. However, in this way, the predictive power would be decreased with more free parameters introduced. Thus we do not consider them in this work. One can refer to Ref. \cite{Sui:2010ay} for the effects of higher-order terms on the octet meson spectra.
\begin{table*}
\renewcommand\tabcolsep{8.0pt}
\begin{center}
\begin{tabular}{cccccccc}
\hline\hline
$n$ & $\rho$ exp. (MeV) & Model & $K^{*}$ exp. (MeV) & Model & $\phi$ exp. (MeV) & Model \\
\hline\hline
0 & $775.26\pm0.25$ & $775.1$ & $891.76\pm0.25$ & $775.2$ & $1019.461\pm0.016$ & $775.1$ \\
\hline
1 & $1465\pm25$ & $1335$ & $1421\pm9$ & $1336$ & $1680\pm20$ & $1335$ \\
\hline
2 & $1720\pm20$ & $1714$ & $1718\pm18$ & $1714$ & $2188\pm10$ & $1714$ \\
\hline\hline
\end{tabular}
\caption{The model results of the mass spectra of octet vector mesons, which are compared with experimental data taken from the suggested quark-model assignment for the observed light mesons \cite{Tanabashi:2018oca}.}
\label{rho-spectrum1}
\end{center}
\end{table*}
\begin{table*}
\renewcommand\tabcolsep{8.0pt}
\begin{center}
\begin{tabular}{cccccccc}
\hline\hline
$n$ & $a_1$ exp.(MeV) & Model & $K_1$ exp.(MeV) & Model & $f_1$ exp.(MeV) & Model \\
\hline\hline
0 & $1230\pm40$ & $1115$ & $1272\pm7$ & $1121$ & $1426.4\pm0.9$ & $1122.8$ \\
\hline
1 & $\cdots$ & $1525$ & $\cdots$ & $1528$ & $\cdots$ & $1530$ \\
\hline
2 & $\cdots$ & $1854$ & $\cdots$ & $1856$ & $\cdots$ & $1857$ \\
\hline\hline
\end{tabular}
\caption{The model results of the mass spectra of octet axial-vector mesons, which are compared with experimental data taken from the suggested quark-model assignment for the observed light mesons \cite{Tanabashi:2018oca}.}
\label{a1-spectrum1}
\end{center}
\end{table*}
\subsection{Decay constants of the pseudoscalar and (axial-)vector mesons}
According to AdS/CFT \cite{Erlich:2005qh}, the decay constants of octet pseudoscalar mesons can be extracted from the two-point correlation functions of axial-vector currents,
\begin{align}\label{pi-decay}
f_{\pi^a}^2 &=-\frac{1}{g_5^2}e^{A-\Phi}\partial_z A^a(0,z)|_{z\to0},
\end{align}
where $A^a(0,z)$ is the solution of Eq. (\ref{A-eom1}) with $m_{A^a_n}=0$ and the boundary conditions $A^a(0,0)=1$ and $\partial_z A^a(0,\infty)=0$. It should be noted that we have taken the limit $m_{\pi^a}\to 0$ in the derivation of the decay constants $f_{\pi^a}$, which is only a suitable approximation for the $\pi$ meson \cite{Erlich:2005qh}.
The decay constants of $\rho$ and $a_1$ mesons are given by
\begin{align}\label{rho-decay}
F_{\rho}^2 &=\frac{1}{g_5^2}\left(e^{A-\Phi}\partial_z V_{\rho}(z)|_{z\to0}\right)^2,\\
F_{a_1}^2 &=\frac{1}{g_5^2}\left(e^{A-\Phi}\partial_z A_{a_1}(z)|_{z\to0}\right)^2,
\end{align}
where $V_{\rho}(z)$ and $A_{a_1}(z)$ are the ground-state wave functions of $\rho$ and $a_1$ normalized by $\int dz\,e^{A-\Phi}V_{\rho}^2 =\int dz\,e^{A-\Phi}A_{a_1}^2 =1$.
We show the model results of the decay constants $f_{\pi}$, $f_{K}$, $F_{\rho}$ and $F_{a_1}$ along with the experimental values in Table \ref{decay-const}, where $f_{\pi}$ is taken as an input to determine the values of $\gamma$ and $\lambda$.
\begin{table}
\begin{center}
\begin{tabular}{ccccc}
\hline\hline
& $f_{\pi}$(MeV) & $f_{K}$(MeV) & $F_{\rho}^{1/2}$(MeV) & $F_{a_1}^{1/2}$(MeV) \\
\hline
Exp. & $92.4$ & $110$ & $346.2\pm1.4$ & $433\pm13$ \\
\hline
Model & $92.4$ & 100 & 307.1 & 350 \\
\hline\hline
\end{tabular}
\caption{The model results and the experimental values of related decay constants \cite{Erlich:2005qh,Tanabashi:2018oca}.}
\label{decay-const}
\end{center}
\end{table}
\section{Chiral transition and phase diagram}\label{chiraltran}
With the model parameters fixed by the octet meson spectra and the relevant decay constants, we now study the chemical potential effects on chiral transition in the improved soft-wall AdS/QCD model, following Ref. \cite{Fang:2018axm}. To introduce the temperature $T$ and the baryon chemical potential $\mu_B$, we adopt the AdS/Reissner-Nordstr\"{o}m (AdS/RN) black hole as the bulk background in terms of the metric ansatz
\begin{equation}\label{AdSRN1}
ds^2=e^{2A(z)}\left(f(z)dt^2-dx^{i\,2}-\frac{dz^2}{f(z)}\right)
\end{equation}
with
\begin{align}\label{AdSRN2}
f(z) &=1-(1+Q^2)\left(\frac{z}{z_h}\right)^4 +Q^2\left(\frac{z}{z_h}\right)^6,
\end{align}
where the charge $Q=\mu_q z_h$ with $z_h$ being the event horizon of the black hole and $\mu_q$ being the quark chemical potential, which is related to the baryon chemical potential by $\mu_B=3\mu_q$ \cite{Colangelo:2011sr}. The Hawking temperature is given by the formula
\begin{align}\label{T1}
T =\frac{1}{4\pi}\left|\frac{df}{dz}\right|_{z_{h}} =\frac{1}{\pi z_h}\left(1-\frac{Q^2}{2}\right)
\end{align}
with $0<Q<\sqrt{2}$.
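For reference, this follows directly from Eq. (\ref{AdSRN2}): $f'(z_h) =\left[-4(1+Q^2)+6Q^2\right]/z_h =(2Q^2-4)/z_h$. Eliminating $Q =\mu_q z_h =\mu_B z_h/3$ expresses the temperature at fixed baryon chemical potential as
\begin{equation}
T =\frac{1}{\pi z_h}\left(1-\frac{\mu_B^2\,z_h^2}{18}\right) ,
\end{equation}
and the extremal limit $Q\to\sqrt{2}$ corresponds to $T\to 0$.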
In terms of the metric ansatz (\ref{AdSRN1}) of the AdS/RN black hole, the EOMs of the scalar VEV $\chi_{u,s}$ can be derived as
\begin{align}
\chi_{u}'' +\left(\frac{f'}{f}+3A'-\Phi'\right)\chi'_{u} -\frac{e^{2A}}{f}\left(m_5^2\chi_u +\lambda\chi_u^3 +\frac{\gamma}{2\sqrt{2}}\chi_u\chi_s \right) &=0, \label{vevX-eomT1} \\
\chi_{s}'' +\left(\frac{f'}{f}+3A'-\Phi'\right)\chi'_{s} -\frac{e^{2A}}{f}\left(m_5^2\chi_s +\lambda\chi_s^3 +\frac{\gamma}{2\sqrt{2}}\chi_{u}^{2}\right) &=0. \label{vevX-eomT2}
\end{align}
By the prescription of AdS/CFT \cite{Erlich:2005qh}, the UV asymptotic forms of $\chi_{u,s}$ can be obtained from the above EOMs, with the chiral condensates $\sigma_{u,s}$ now depending on $\mu_B$ and $T$. The differences from those in Eqs. (\ref{asy-chiu1}) and (\ref{asy-chis1}) are only incorporated in the higher-order terms of $z$. At finite $\mu_B$ and $T$, Eqs. (\ref{vevX-eomT1}) and (\ref{vevX-eomT2}) admit natural boundary conditions at the horizon $z=z_h$, imposed by the regularity of $\chi_{u,s}$: multiplying the equations by $f(z)$ and evaluating them at $z=z_h$, where $f$ vanishes, leaves
\begin{align}
\left.f'\chi'_{u} -e^{2A}\left(m_5^2\chi_u +\lambda\chi_u^3 +\frac{\gamma}{2\sqrt{2}}\chi_u\chi_s \right)\right|_{z=z_{h}} &=0, \label{BC1} \\
\left.f'\chi'_{s} -e^{2A}\left(m_5^2\chi_s +\lambda\chi_s^3 +\frac{\gamma}{2\sqrt{2}}\chi_{u}^{2}\right)\right|_{z=z_{h}} &=0. \label{BC2}
\end{align}
As in Ref. \cite{Fang:2018vkp}, we can solve Eqs. (\ref{vevX-eomT1}) and (\ref{vevX-eomT2}) numerically with the given boundary conditions to obtain the profiles of the scalar VEV $\chi_{u,s}$, and thus the chiral condensates $\sigma_{u,s}$ as functions of $\mu_B$ and $T$. The thermal transitions of $\sigma_{u,s}$ with temperature $T$ at four different chemical potentials are shown in Fig. \ref{sigma-T-mu}, where the distinction between the behaviors of $\sigma_{u}(T)$ and $\sigma_{s}(T)$ is due to the mass difference between $m_u$ and $m_s$. We can see that the chiral transition is a crossover at small $\mu_B$ and eventually turns into a first-order phase transition with the increase of $\mu_B$; in between there is a second-order one located at $\mu_B\simeq 390 \;\text{MeV}$, which signifies the existence of a CEP in the chiral phase diagram of our model.
\begin{figure}
\begin{center}
\includegraphics[width=68mm,clip=true,keepaspectratio=true]{sigma-T-mu1.pdf}
\hspace*{0.6cm}
\includegraphics[width=68mm,clip=true,keepaspectratio=true]{sigma-T-mu2.pdf}
\vspace{0.35cm} \\
\includegraphics[width=68mm,clip=true,keepaspectratio=true]{sigma-T-mu3.pdf}
\hspace*{0.6cm}
\includegraphics[width=68mm,clip=true,keepaspectratio=true]{sigma-T-mu4.pdf}
\vskip -1cm \hskip 0.7 cm
\end{center}
\caption{The chiral transition behaviors of the condensates $\sigma_{u}$ and $\sigma_{s}$ with the temperature $T$ at $\mu_B=0, 0.39, 0.9, 1.5\,\mathrm{GeV}$.}
\label{sigma-T-mu}
\end{figure}
We compute the chiral transition temperature $T_c$ at each baryon chemical potential $\mu_B$ using the prescription given in Ref. \cite{Fang:2018axm}. The pseudocritical temperature for the crossover transition is defined as the maximum of $|\frac{\partial\sigma_q}{\partial T}|$, while the critical temperature for the first-order phase transition is restricted to the region between the two inflections of the transition curve. The model calculations for the chiral phase diagram in the $\mu-T$ plane are shown in Fig. \ref{mu-T-diagram}, where the CEP linking the crossover transition with the first-order phase transition is highlighted by the red point located at $(\mu_B, T_c) \simeq (390 \;\text{MeV}, 145 \;\text{MeV})$. We also show the freeze-out data extracted from experiments on relativistic heavy-ion collisions and the crossover line obtained from analytic continuation of lattice data with imaginary chemical potential \cite{Becattini:2005xt,Cleymans:2004pp,Andronic:2008gu,Becattini:2012xb,Alba:2014eba,Bellwied:2015rza}. We find that the crossover line and the location of the CEP obtained from the improved soft-wall model are consistent with the experimental analysis and the lattice results. However, the model predicts too slow a descent of $T_c$ with the increase of $\mu_B$ at large chemical potentials. Thus we need to remain cautious about the reliability of the model results at large $\mu_B$.
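The crossover part of this prescription can be sketched as follows; the condensate curve used here is synthetic mock data (a smooth crossover placed at $T=150$ MeV purely for illustration), not the output of the model.

```python
import numpy as np

# Sketch of the crossover T_c extraction: locate the maximum of |d sigma / d T|
# on a temperature grid, applied to a mock condensate curve.
T = np.linspace(0.05, 0.30, 501)                   # temperature grid (GeV)
sigma = 0.5 * (1.0 - np.tanh((T - 0.150) / 0.02))  # mock condensate sigma_q(T)

dsig = np.gradient(sigma, T)                       # d sigma_q / d T
T_c = T[np.argmax(np.abs(dsig))]                   # pseudocritical temperature
```

For the mock curve above, the extracted $T_c$ coincides with the placed crossover point to within the grid spacing.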
\begin{figure}
\centering
\includegraphics[width=75mm,clip=true,keepaspectratio=true]{mu-T-diagram.pdf}
\caption{The chiral phase diagram in the $\mu-T$ plane obtained from the improved soft-wall AdS/QCD model. The colored points with error bars are freeze-out data from experimental analysis. The black-triangle data are taken from Refs. \cite{Becattini:2005xt,Cleymans:2004pp}, the green-triangle ones are taken from Ref. \cite{Andronic:2008gu}, the purple squares are taken from Ref. \cite{Becattini:2012xb}, and the magenta squares are taken from Ref. \cite{Alba:2014eba}. The shaded black region denotes the crossover line obtained from lattice simulation at small $\mu_B$, and the light blue band indicates the uncertainty width of the lattice simulation \cite{Bellwied:2015rza}.}
\label{mu-T-diagram}
\end{figure}
\section{Conclusion and remarks}\label{conclution}
We have presented a further study of the improved soft-wall AdS/QCD model with $2+1$ flavors, which reproduces the standard scenario of the quark-mass phase diagram in terms of chiral transition \cite{Fang:2018vkp}. The EOMs of the octet pseudoscalar, vector and axial-vector mesons have been derived from the action of the improved soft-wall model, and the octet meson spectra were computed and compared with experimental data. The model parameters are fixed by a global fit to the experimental data of the octet spectra. We find that the improved soft-wall model describes the pseudoscalar octet spectrum better than the vector and axial-vector octet spectra, for which the model results for the isosinglet states are much smaller than the experimental values, owing to the identical form of the EOMs for the isosinglet and isovector states in the vector octet and the tiny flavor-symmetry breaking in the axial-vector octet. A possible way to address this issue is to consider higher-order terms in the bulk action \cite{Sui:2010ay}. We have also computed the decay constants $f_{\pi}$, $f_{K}$, $F_{\rho}$ and $F_{a_1}$, which are compared with experimental results.
We then studied the chemical potential effects on the chiral transition and obtained the chiral phase diagram in the $\mu -T$ plane, where the CEP still exists and is located at $(\mu_B, T_c) \simeq (390 \;\text{MeV}, 145 \;\text{MeV})$. The crossover line and the location of the CEP are consistent with lattice results and experimental analysis from relativistic heavy-ion collisions. However, the transition temperature $T_c$ at large chemical potential declines too slowly with increasing $\mu_B$. In our work, the baryon chemical potential $\mu_B$ and the temperature $T$ are generated by a fixed AdS/RN black hole; a more sensible way to introduce these effects is to consider a dynamical background solved from an Einstein-Maxwell-dilaton system, which would also enable us to study the chiral and deconfining phase transitions simultaneously in a single holographic framework. As a preliminary attempt, we have shown that some main features of the chiral phase diagram can be captured by a simple improved soft-wall AdS/QCD model, which might serve as guidance for further research.
As an important aspect of the QCD phase transition, we need to investigate the behavior of the equation of state, which can be handled by the Einstein-Maxwell-dilaton system at finite chemical potential \cite{DeWolfe:2010he,Critelli:2017oub}. The back-reaction of the flavor sector on the background also needs to be considered in order to clarify the phase structure of AdS/QCD. However, we remark that the QCD phase transition cannot be completely described by the semiclassical gauge/gravity duality even when all these effects are considered. From the large-N analysis we know that the pressure of the hadron gas in the confined phase is of order one, while the pressure in the deconfined phase, dominated by color degrees of freedom, is of order $N^2$. An adequate account of the thermal transition between these two phases, whose pressures differ by a factor of $N^2$, must therefore entail string-loop corrections, which are suppressed by $1/N^2$ in the large-N limit. Moreover, there is no reason for the QCD matter produced in heavy-ion collisions to be close to thermodynamic equilibrium; it is therefore natural to expect nonequilibrium effects to come into play in the phase transition, with possible experimental signatures of the CEP. Indeed, there have been indications that equilibrium lattice QCD susceptibilities cannot account for the moments of net-proton multiplicity distributions at low energy \cite{Borsanyi:2014ewa}, which may be a hint that low-energy heavy-ion collisions at higher chemical potential are not really thermodynamic equilibrium processes. Nevertheless, we are content with a phenomenological description of low-energy QCD in terms of the bottom-up approach, which has been shown to roughly grasp the picture of the chiral phase transition.
\section*{Acknowledgements}
This work is partly supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11851302, 11851303, 11747601 and 11690022, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB23030100 as well as the CAS Center for Excellence in Particle Physics (CCEPP). Z. F. is also supported by the NSFC under Grant No. 11905055 and the Fundamental Research Funds for the Central Universities under Grant No. 531118010198.
\section{Introduction and Results}
\setcounter{equation}{0}
In this paper we discuss the dynamics of a relativistic string in
constant
curvature spacetimes, using a combination of geometrical methods and
physical
insight. The kind of problems we are interested in here and the way
of
reasoning, historically had the origin in investigations of the
motion of
vortices in a superfluid \cite{lun,lun1}. Interestingly enough, the
latter
problem, which is equivalent to a theory of dual strings interacting
in a
particular way through a scalar field \cite{lun,lun1}, reduces to
solving two
coupled non-linear partial differential equations, one of which being
a
generalized Sine-Gordon equation. It was soon realized that exactly
the same
equations appear when considering a two-dimensional sigma model
corresponding
to $O(4)$ \cite{poh,lun2,lun3}.
This model, on the other hand, describes a relativistic string in a
(Euclidean
signature)
constant curvature space.
For the theory of fundamental strings, it is important to consider
formulations
in curved spacetimes also, essentially as descriptions of one string
in the
background created by the others. The string equations of motion in
curved
spacetimes are generally highly non-linear in any gauge, which in
most cases
means that the system is non-integrable. Exceptional cases are, among
others,
strings in maximally symmetric spacetimes \cite{zak,eic}. From the
physical
point of view, de Sitter spacetime plays a particular role in this
family of
spacetimes,
since it describes an inflationary universe.
String theory in de Sitter spacetime is therefore also of interest
from the
point of view of cosmic strings and cosmology and for the open
question of
string self-sustained inflation \cite{ven,ven1}. Specific problems
concerning
the integrability of the equations
describing the dynamics of classical strings in de Sitter spacetime
were
discussed in \cite{nes,san}. The present work is a completion
and generalization of the results presented in those papers.
In Section 2 we set up the general
formalism for classical strings in de Sitter and anti de
Sitter spacetimes, and we derive the equations fulfilled by the
fundamental
quadratic form for a generic string configuration.
The fundamental quadratic form $\alpha(\tau,\sigma)$ is a measure of
the
invariant string size. We show that it solves the Sinh-Gordon
equation, the
Cosh-Gordon equation or the Liouville equation. We find that in order
to cover
the generic string dynamics, {\it all} three equations must be taken
into
account. Associated potentials ($\pm 2\cosh\alpha,\;\pm
2\sinh\alpha,\;\pm
e^\alpha$) to these equations can be respectively defined ($(+)$-sign
for anti
de Sitter spacetime and
$(-)$-sign for de Sitter spacetime). Generic properties of the string
dynamics
are then directly extracted at the level of the equations of motion
from the
properties of these potentials (irrespective of any solution). The
three
equations correspond to three different sectors of the string
dynamics (until
now only the Sinh-Gordon sector (corresponding to the $\cosh\alpha$
potential)
in de Sitter spacetime was known). The differences between the three
sectors in
each spacetime appear mainly for small strings (strings with proper
size
$<1/(\sqrt{2}H)).$
In de Sitter spacetime,
the
Sinh-Gordon sector characterizes the evolution in which small strings
necessarily collapse into a point, while in the Cosh-Gordon sector,
strings
never collapse but reach a minimal size. In anti de Sitter spacetime,
the
situation is exactly the opposite: the Cosh-Gordon sector
characterizes the
evolution in which strings necessarily collapse into a point, while
in the
Sinh-Gordon sector, strings never collapse but reach a minimal size.
On the
other hand, the dynamics of large strings is rather similar in the
three
sectors
in each
spacetime (see Figs. 1a, 1b, for instance).
The dynamics of small strings is rather similar in de Sitter and anti
de Sitter
spacetimes, while for large strings (strings with proper size
$>1/(\sqrt{2}H))$
the dynamics is drastically different in the two spacetimes. In de
Sitter
spacetime, the presence of potentials unbounded from below for
positive
$\alpha,$ in all three sectors, makes string instability
(indefinitely growing
strings) unavoidable (in anti de Sitter spacetime, the positive
potential barriers for positive $\alpha$ prevent the strings from
growing indefinitely).
In Section 3 we present
new classes of explicit solutions in both de
Sitter and anti de Sitter spacetimes, which cover all the three
sectors. These
solutions exhibit the multi-string property
\cite{mik,com,dev,all2}, namely one single world-sheet describes a
finite or
infinite number of different and independent strings. The presence of
multi-strings is a characteristic feature in spacetimes with a
cosmological
constant (constant curvature or asymptotically constant curvature
spacetimes).
In
Section 4, we show that our results also hold for the $2+1$
dimensional black
hole anti de Sitter spacetime \cite{ban}, and we complete earlier
investigation on the dynamics of circular string configurations in
this
spacetime \cite{all1}.
Finally, in Section 5 we give our conclusions.
\section{General Formalism}
\setcounter{equation}{0}
For simplicity, the following analysis is performed for $2+1$
dimensional
spacetimes. However, it is straightforward to generalize the results
to
arbitrary dimensions, following the lines of \cite{san}.
It is well known that $2+1$ dimensional de Sitter spacetime can be
obtained
from flat $R^{1,3}$ spacetime:
\begin{equation}
ds^2(R^{1,3})=-dt^2+du^2+dx^2+dy^2,
\end{equation}
by restricting to the submanifold:
\begin{equation}
\eta_{\mu\nu}q^\mu q^\nu=1,
\end{equation}
where:
\begin{equation}
\eta_{\mu\nu}={\mbox{diag}}(-1,1,1,1),
\end{equation}
\begin{equation}
q^\mu=H(t,u,x,y),
\end{equation}
and $H$ is the Hubble constant of de Sitter spacetime.
Similarly, we can obtain $2+1$ dimensional anti
de Sitter spacetime
from flat $R^{2,2}$ spacetime:
\begin{equation}
ds^2(R^{2,2})=-dt^2-du^2+dx^2+dy^2,
\end{equation}
by restricting to the submanifold:
\begin{equation}
\eta_{\mu\nu}q^\mu q^\nu=-1,
\end{equation}
where:
\begin{equation}
\eta_{\mu\nu}={\mbox{diag}}(-1,-1,1,1),
\end{equation}
\begin{equation}
q^\mu=H(t,u,x,y),
\end{equation}
and $H$ is the Hubble constant of anti de Sitter spacetime.
We can thus treat de Sitter and anti de Sitter spacetimes
simultaneously by
introducing the following notation. We consider a flat spacetime with
line
element $ds^2_{(\epsilon)}$ where $\epsilon=\pm 1:$
\begin{equation}
ds^2_{(\epsilon)}=-dt^2+\epsilon du^2+dx^2+dy^2,
\end{equation}
and restrict to the submanifold:
\begin{equation}
\eta^{(\epsilon)}_{\mu\nu}q^\mu q^\nu=\epsilon,
\end{equation}
where:
\begin{equation}
\eta^{(\epsilon)}_{\mu\nu}={\mbox{diag}}(-1,\epsilon,1,1),
\end{equation}
and $q^\mu$ is in the form of equations
(2.4), (2.8). That is, $\epsilon=+1$ corresponds
to de Sitter spacetime while $\epsilon=-1$ corresponds
to anti de Sitter spacetime.
Let us now consider a bosonic string embedded in the spacetime (2.9).
In the conformal gauge, where the string world-sheet metric is
diagonal,
the Lagrangian is given by:
\begin{equation}
{\cal L}^{(\epsilon)}=\eta^{(\epsilon)}_{\mu\nu}(
\dot{q}^\mu\dot{q}^\nu-
q'^\mu q'^\nu)-\lambda(\eta^{(\epsilon)}_{\mu\nu}q^\mu
q^\nu-\epsilon),
\end{equation}
where the restriction (2.10) has been taken into account
consistently,
through the Lagrange multiplier $\lambda,$ and dot and prime denote
differentiation
with respect to the world-sheet coordinates $\tau$ and $\sigma$,
respectively.
The classical string equations of motion and constraints
take then the form:
\begin{equation}
\ddot{q}^\mu-q''^\mu+\epsilon\eta^{(\epsilon)}_{\rho\sigma}
(\dot{q}^\rho
\dot{q}^\sigma-q'^\rho q'^\sigma) q^\mu=0,
\end{equation}
\begin{equation}
\eta^{(\epsilon)}_{\mu\nu}\dot{q}^\mu
q'^\nu=\eta^{(\epsilon)}_{\mu\nu}
( \dot{q}^\mu\dot{q}^\nu+q'^\mu q'^\nu)=0.
\end{equation}
The induced line element on the string world-sheet is given by:
\begin{equation}
dS^2_{(\epsilon)}=\frac{1}{H^2}\eta^{(\epsilon)}_{\mu\nu}dq^\mu
dq^\nu=
-\frac{1}{2H^2}
\eta^{(\epsilon)}_{\mu\nu}( \dot{q}^\mu\dot{q}^\nu-
q'^\mu q'^\nu)(-d\tau^2+d\sigma^2).
\end{equation}
Since we consider only timelike world-sheets, we can define a real
function
$\alpha^{(\epsilon)}$ by:
\begin{equation}
e^{\alpha^{(\epsilon)}}\equiv
-\eta^{(\epsilon)}_{\mu\nu}( \dot{q}^\mu\dot{q}^\nu-
q'^\mu q'^\nu)=-\eta^{(\epsilon)}_{\mu\nu}q^\mu_+ q^\nu_-,
\end{equation}
where we have introduced the world-sheet light-cone coordinates
$\sigma_\pm=
(\tau\pm\sigma)/2,$ so that $q^\mu_\pm=\dot{q}^\mu\pm
q'^\mu,$ etc.
The fundamental quadratic form, $\alpha^{(\epsilon)},$
is a measure of the invariant string size $S_{(\epsilon)},$
as follows from equations (2.15)-(2.16):
\begin{equation}
S_{(\epsilon)}=\frac{1}{\sqrt{2}H}e^{\alpha^{(\epsilon)}/2}.
\end{equation}
The
string equations of motion and constraints, equations (2.13) and
(2.14),
can now be
written in the more compact form:
\begin{equation}
q^\mu_{+-}=\epsilon e^{\alpha^{(\epsilon)}} q^\mu,
\end{equation}
\begin{equation}
\eta^{(\epsilon)}_{\mu\nu} q^\mu_\pm q^\nu_\pm=0.
\end{equation}
{}From now on we follow the procedure of Refs.\cite{nes,san} (see
also
[1-5,18-20]).
Let us consider the set of vectors:
\begin{equation}
{\cal U}_{(\epsilon)}=
\{ q^\mu, q^\mu_+,q^\mu_-,l^\mu_{(\epsilon)}\},
\end{equation}
where $l^\mu_{(\epsilon)}$ is defined by:
\begin{equation}
l^\mu_{(\epsilon)}\equiv
e^{-\alpha^{(\epsilon)}} e^\mu_{(\epsilon)\rho\sigma\delta}
q^\rho q^\sigma_+ q^\delta_-,
\end{equation}
and $e^\mu_{(\epsilon)\rho\sigma\delta}$ is the completely
anti-symmetric
four-tensor in the spacetime (2.9). It is easy to verify that the
vectors
in ${\cal U}_{(\epsilon)}$
are linearly independent, although not orthonormal. The vector
$l^\mu_{(\epsilon)}$ is normalized according to:
\begin{equation}
\eta_{\mu\nu}^{(\epsilon)}l^\mu_{(\epsilon)}l^\nu_{(\epsilon)}=1.
\end{equation}
The second derivatives of $q^\mu,$ when expressed in the basis
${\cal U}_{(\epsilon)},$ are
given by:
\begin{equation}
q^\mu_{++}=\alpha^{(\epsilon)}_+ q^\mu_+ +u^{(\epsilon)}
l^\mu_{(\epsilon)},
\end{equation}
\begin{equation}
q^\mu_{--}=\alpha^{(\epsilon)}_- q^\mu_- +v^{(\epsilon)}
l^\mu_{(\epsilon)},
\end{equation}
\begin{equation}
q^\mu_{+-}=\epsilon e^{\alpha^{(\epsilon)}} q^\mu,
\end{equation}
where the functions $u^{(\epsilon)}$ and $v^{(\epsilon)}$ are
implicitly
defined by:
\begin{equation}
u^{(\epsilon)}\equiv
\eta^{(\epsilon)}_{\mu\nu}q^{\mu}_{++}l^\nu_{(\epsilon)},
\;\;\;\;\;\;\;\;\;\;
v^{(\epsilon)}\equiv
\eta^{(\epsilon)}_{\mu\nu}q^{\mu}_{--}l^\nu_{(\epsilon)}.
\end{equation}
{}From these expressions we compute the quantities
$\eta^{(\epsilon)}_{\mu\nu}q^{\mu}_{++-}l^\nu_{(\epsilon)}$ and
$\eta^{(\epsilon)}_{\mu\nu}q^{\mu}_{+--}l^\nu_{(\epsilon)}$ in two
different
ways (using $(q^\mu_{++})_-$ from (2.23) and $(q^\mu_{+-})_+$ from
(2.25), as
well as $(q^\mu_{+-})_-$ from (2.25) and $(q^\mu_{--})_+$ from
(2.24)), and
it then follows that:
\begin{equation}
u^{(\epsilon)}_-=v^{(\epsilon)}_+=0.
\end{equation}
Then, by differentiating equation (2.16) twice, we get:
\begin{equation}
\alpha^{(\epsilon)}_{+-}-\epsilon
e^{\alpha^{(\epsilon)}}+u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-) e^{-\alpha^{(\epsilon)}}=0.
\end{equation}
In the previous discussions \cite{nes,san}, it was implicitly assumed
that the
product $u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ is positive definite. In that
case the
conformal transformation on the world-sheet metric (2.15):
\begin{eqnarray}
&\alpha^{(\epsilon)}(\sigma_+,\sigma_-)=\hat{\alpha}^{(\epsilon)}
(\hat{\sigma}_+,\hat{\sigma}_-)+\frac{1}{2}\log\left(|u^{(\epsilon)}
(\sigma_+)||v^{(\epsilon)}(\sigma_-)|\right),&\nonumber\\
&\hat{\sigma}_+=\int\sqrt{|u^{(\epsilon)}(\sigma_+)|}\;d\sigma_+,
\;\;\;\;\;\;\;\;
\hat{\sigma}_-=\int\sqrt{|v^{(\epsilon)}(\sigma_-)|}\;d\sigma_-,&
\end{eqnarray}
which transforms $dS^2_{(\epsilon)}\rightarrow
d\hat{S}^2_{(\epsilon)}:$
\begin{equation}
dS^2_{(\epsilon)}=\frac{-2}{H^2}e^{\alpha^{(\epsilon)}}d\sigma_+
d\sigma_-\;
\rightarrow\;\frac{-2}{H^2}e^{\hat{\alpha}^{(\epsilon)}}
d\hat{\sigma}_+
d\hat{\sigma}_-=d\hat{S}^2_{(\epsilon)},
\end{equation}
reduces equation (2.28) to the equation:
\begin{equation}
\alpha^{(\epsilon)}_{+-}-\epsilon e^{\alpha^{(\epsilon)}}+
e^{-\alpha^{(\epsilon)}}=0.
\end{equation}
This equation is the Sinh-Gordon equation in the case of de Sitter
spacetime
$(\epsilon=+1),$ and the Cosh-Gordon equation in the case of
anti de Sitter spacetime
$(\epsilon=-1).$
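For completeness, this reduction can be verified explicitly. Under the
transformation (2.29) one has $\partial_+=\sqrt{|u^{(\epsilon)}|}\,
\hat{\partial}_+$ and $\partial_-=\sqrt{|v^{(\epsilon)}|}\,
\hat{\partial}_-,$ while the logarithmic term in (2.29) is annihilated
by the mixed derivative, so that:
\begin{eqnarray*}
&\alpha^{(\epsilon)}_{+-}=\sqrt{|u^{(\epsilon)}||v^{(\epsilon)}|}\;
\hat{\alpha}^{(\epsilon)}_{+-},\;\;\;\;\;\;
e^{\alpha^{(\epsilon)}}=\sqrt{|u^{(\epsilon)}||v^{(\epsilon)}|}\;
e^{\hat{\alpha}^{(\epsilon)}},&\\
&u^{(\epsilon)}v^{(\epsilon)}e^{-\alpha^{(\epsilon)}}=
\mbox{sign}(u^{(\epsilon)}v^{(\epsilon)})\sqrt{|u^{(\epsilon)}|
|v^{(\epsilon)}|}\;e^{-\hat{\alpha}^{(\epsilon)}},&
\end{eqnarray*}
and dividing equation (2.28) by
$\sqrt{|u^{(\epsilon)}||v^{(\epsilon)}|}$ indeed yields equation
(2.31) for $u^{(\epsilon)}(\sigma_+)v^{(\epsilon)}(\sigma_-)>0,$ with
a factor $\mbox{sign}(u^{(\epsilon)}v^{(\epsilon)})$ multiplying
$e^{-\hat{\alpha}^{(\epsilon)}}$ in the general case.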
It must be noticed, however, that for a generic string world-sheet,
the product
$u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ is neither positive nor negative
definite.
In fact, in the next section we shall construct explicit solutions
to the string equations of motion and constraints (2.18)-(2.19)
corresponding
to $u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ positive, $u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ negative and
$u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ identically zero, in {\it both}
de Sitter and anti de Sitter spacetimes.
In the case that
$u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)$ is negative, the conformal
transformation
(2.29) reduces equation (2.28) to:
\begin{equation}
\alpha^{(\epsilon)}_{+-}-\epsilon e^{\alpha^{(\epsilon)}}-
e^{-\alpha^{(\epsilon)}}=0,
\end{equation}
and including also the case when
$u^{(\epsilon)}
(\sigma_+) v^{(\epsilon)}(\sigma_-)=0,$ we conclude that the most
general
equation fulfilled by the fundamental quadratic form
$\alpha^{(\epsilon)}$
is:
\begin{equation}
\alpha^{(\epsilon)}_{+-}-\epsilon e^{\alpha^{(\epsilon)}}+
Ke^{-\alpha^{(\epsilon)}}=0,
\end{equation}
where:
\begin{equation}
K=\left\{ \begin{array}{l}
+1,\;\;\;\;\;\;u^{(\epsilon)}(\sigma_+) v^{(\epsilon)}(\sigma_-)>0 \\
-1,\;\;\;\;\;\;u^{(\epsilon)}(\sigma_+) v^{(\epsilon)}(\sigma_-)<0 \\
\;0,\;\;\;\;\;\;\;\;u^{(\epsilon)}(\sigma_+)
v^{(\epsilon)}(\sigma_-)=0
\end{array}\right.
\end{equation}
and:
\begin{equation}
\epsilon=\left\{ \begin{array}{l}
+1,\;\;\;\;\;\;{\mbox{de Sitter}} \\ -1,\;\;\;\;\;\;{\mbox{anti de
Sitter}}
\end{array}\right.
\end{equation}
Equation (2.33) is either the Sinh-Gordon equation ($\epsilon=K=\pm
1$),
the Cosh-Gordon equation ($\epsilon=-K=\pm 1$) or the
Liouville equation ($K=0$), and all three equations appear in both de
Sitter
and
anti de Sitter spacetimes. This does not mean, of course, that the
string
dynamics is the same in de Sitter and anti de Sitter spacetimes.
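The case structure (2.33)-(2.35) can be encoded in a small Python sketch (purely illustrative, not part of the analysis itself), which records the map $(\epsilon,K)\mapsto$ equation:

```python
def equation_type(eps, K):
    """Equation obeyed by the fundamental quadratic form, eq. (2.33):
    alpha_{+-} - eps*exp(alpha) + K*exp(-alpha) = 0,
    with eps = +1 (de Sitter) or -1 (anti de Sitter) and K in {+1, -1, 0}."""
    if K == 0:
        return "Liouville"
    return "Sinh-Gordon" if eps == K else "Cosh-Gordon"

# epsilon from eq. (2.35), K from eq. (2.34):
ds_bounce_sector = equation_type(+1, -1)   # Cosh-Gordon in de Sitter
ads_collapse_sector = equation_type(-1, +1)  # Cosh-Gordon in anti de Sitter
```

Note that the Sinh-Gordon equation arises for $\epsilon=K$ and the Cosh-Gordon equation for $\epsilon=-K$, in either spacetime.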
Let us
define a potential $V^{(\epsilon)}(\alpha^{(\epsilon)})$ by:
\begin{equation}
\alpha^{(\epsilon)}_{+-}+\frac{dV^{(\epsilon)}(\alpha^{(\epsilon)})}
{d\alpha^{(\epsilon)}}=0,
\end{equation}
(so that if $\alpha^{(\epsilon)}=\alpha^{(\epsilon)}(\tau),$ then
$\;\frac{1}{2}(\dot{\alpha}^{(\epsilon)})^2+V^{(\epsilon)}
(\alpha^{(\epsilon)})=$ const.). Then, it
follows that in the case of de Sitter spacetime:
\begin{equation}
V^{(+1)}(\alpha)=\left\{ \begin{array}{l}
-2\cosh\alpha,\;\;\;\;\;\;K=+1 \\
-2\sinh\alpha,\;\;\;\;\;\;K=-1\\
\;\;\;\;\;-e^{\alpha},\;\;\;\;\;\;\;\;\;\;\;K=0 \end{array}\right.
\end{equation}
while in the case of anti de Sitter spacetime:
\begin{equation}
V^{(-1)}(\alpha)=\left\{ \begin{array}{l}
2\sinh\alpha,\;\;\;\;\;\;K=+1
\\ 2\cosh\alpha,\;\;\;\;\;\;K=-1\\
\;\;\;\;\;e^{\alpha},\;\;\;\;\;\;\;\;\;\;\;K=0 \end{array}\right.
\end{equation}
and we have skipped the $(\pm)$-index on $\alpha.$ Notice that to the
Sinh-Gordon
equation corresponds the $\cosh\alpha$ potential and vice-versa.
The results (2.36)-(2.38) are represented in Fig. 1, showing the
different
potentials in
the
cases of de Sitter and anti de Sitter spacetimes, respectively.
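These qualitative features can be illustrated by a small numerical experiment. The Python sketch below (illustrative only; the integrator, step size and initial data are arbitrary choices of ours) integrates $\ddot{\alpha}=\epsilon e^{\alpha}-Ke^{-\alpha}$, the $\tau$-dependent form of equation (2.33), for a small contracting string in de Sitter spacetime: in the $K=+1$ sector $\alpha$ decreases monotonically (collapse), while in the $K=-1$ sector it bounces at a minimal size:

```python
import math

def evolve(eps, K, alpha0, dalpha0, dt=1e-3, steps=500):
    """Symplectic-Euler integration of alpha'' = eps*e^alpha - K*e^(-alpha),
    i.e. alpha'' = -dV/dalpha with V from eqs. (2.37)-(2.38)."""
    a, v = alpha0, dalpha0
    traj = [a]
    for _ in range(steps):
        v += (eps * math.exp(a) - K * math.exp(-a)) * dt
        a += v * dt
        traj.append(a)
    return traj

# de Sitter (eps = +1), small contracting string: alpha < 0, dalpha/dtau < 0.
sinh_sector = evolve(+1, +1, alpha0=-1.0, dalpha0=-1.0)  # V = -2cosh(alpha)
cosh_sector = evolve(+1, -1, alpha0=-1.0, dalpha0=-1.0)  # V = -2sinh(alpha)
```

With these initial data, the $K=+1$ trajectory contracts throughout the integration window, while the $K=-1$ trajectory turns around at a minimal value of $\alpha$, in accordance with Fig. 1a.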
Until now only the $K=+1$ sector in de Sitter spacetime was known.
The new
features introduced by the new sectors $K=0$ (corresponding to the
Liouville
equation) and $K=-1$ (corresponding to the Cosh-Gordon equation in
the case of
de Sitter spacetime and to the Sinh-Gordon equation in the case of
anti
de Sitter
spacetime) appear for negative $\alpha$ ("small" strings). Small
strings with
proper size $<1/(\sqrt{2}H)$ in the $K=-1$ sector (inside the horizon
in the case of de Sitter spacetime) do not collapse into a point (as
is the case in the $K=+1$ sector) but have a minimal size.
The main differences between
de Sitter
and anti de Sitter potentials are for positive $\alpha$ (strings with
proper
size $>1/(\sqrt{2}H)).$
In the case of de Sitter spacetime (Fig. 1a),
the potentials are unbounded from below
for large strings (large positive $\alpha$), while for small strings
(large negative $\alpha$) they are either growing indefinitely, flat
or unbounded from below. In the case of anti de
Sitter spacetime (Fig. 1b), on the other hand, the potentials grow
indefinitely
for large strings (large positive $\alpha$), while for small strings
(large negative $\alpha$) they are either growing indefinitely, flat
or unbounded from below.
{}From these results we can deduce the generic features of strings
propagating in de Sitter and anti de Sitter spacetimes: Large strings
(large positive $\alpha$) in de Sitter spacetime generically expand
indefinitely, while small strings (large negative $\alpha$) either
bounce or
collapse. In anti de Sitter spacetime, large strings generically
contract, while small strings either bounce or collapse.
For small strings (large negative $\alpha$) the
dynamics is similar in de Sitter and anti de Sitter spacetimes, while
for large
strings (large positive $\alpha$)
it is completely different in the two spacetimes.
Notice that the $\epsilon$ in equation (2.33), which distinguishes
between
de Sitter and anti de Sitter spacetimes, corresponds to the
$"K"$ in the notation of
Ref.\cite{nes}. Our $K$ in equation (2.33) was missed in
Refs.\cite{nes,san}; only
the solutions corresponding to $K=+1$ were found there.
\section{Explicit Examples}
\setcounter{equation}{0}
The exact ("global", i.e. the whole world-sheet)
solutions to the string equations of motion and constraints in de
Sitter and anti de Sitter spacetimes considered in the literature
until now [12-15, 17, 21-23], describe different classes of string
solutions of
generic shape, circular strings and stationary strings. These
solutions exhibit
the multi-string property
\cite{mik,com,dev,all2}, namely one single world-sheet describes a
finite or
infinite number of different and independent strings. The presence of
multi-strings is a characteristic feature in spacetimes with a
cosmological
constant (constant curvature or asymptotically constant curvature
spacetimes).
All these solutions fall in the $K=+1$ sector, i.e. are solutions to
the
Sinh-Gordon equation in
the case of de Sitter spacetime and to the Cosh-Gordon equation in
the case
of anti de Sitter Spacetime. We shall now
construct larger families of exact solutions which fall into
{\it all} three sectors $K=\pm 1,\;0.$
Consider first the following algebraic problem:
What is the most general ansatz which reduces the string equations of
motion
and constraints to {\it ordinary} differential equations, in
spacetimes of
the form:
\begin{equation}
ds^2=-a(r)dt^2+\frac{dr^2}{a(r)}+r^2 d\phi^2.
\end{equation}
The string equations of motion
are given by:
\begin{eqnarray}
\ddot{t}\hspace*{-2mm}&-&\hspace*{-2mm}t''+\frac{a_{,r}}{a}
(\dot{t}\dot{r}-
t'r')=0,\nonumber\\
\ddot{r}\hspace*{-2mm}&-&\hspace*{-2mm}r''-\frac{a_{,r}}{2a}
(\dot{r}^2-r'^2)+
\frac{aa_{,r}}{2}(\dot{t}^2-t'^2)-ar
(\dot{\phi}^2-\phi'^2)=0,\nonumber\\
\ddot{\phi}\hspace*{-2mm}&-&\hspace*{-2mm}\phi''+\frac{2}{r}
(\dot{\phi}
\dot{r}-\phi' r')=0,
\end{eqnarray}
while the constraints take the form:
\begin{eqnarray}
\hspace*{-2mm}&-&\hspace*{-2mm}a(\dot{t}^2+t'^2)+\frac{1}{a}
(\dot{r}^2+r'^2)
+r^2(\dot{\phi}^2+\phi'^2)=0,\nonumber\\
\hspace*{-2mm}&-&\hspace*{-2mm}a\dot{t}t'+\frac{1}{a}\dot{r}r'+
r^2\dot{\phi}\phi'=0.
\end{eqnarray}
Since the Christoffel symbols depend only on $r,$ the desired ansatz
is:
\begin{equation}
r=r(\xi^1),\;\;\;\;t=t(\xi^1)+c_1 \xi^2,\;\;\;\;\phi=\phi(\xi^1)+c_2
\xi^2,
\end{equation}
where $(\xi^1, \xi^2)$ are the two world-sheet coordinates (one of
which is
timelike, the other spacelike), and $(c_1, c_2)$ are two arbitrary
constants.
With this ansatz, equations (3.2) are solved by:
\begin{equation}
\frac{dt}{d\xi^1}=\frac{k_1}{a(r)},\;\;\;\;\;\;\;\;\;\;
\frac{d\phi}{d\xi^1}=\frac{k_2}{r^2},
\end{equation}
\begin{equation}
\left( \frac{dr}{d\xi^1}\right)^2=-a(r)
r^2c_2^2-\frac{a(r)}{r^2}k_2^2+k_1^2+
a^2(r) c_1^2,
\end{equation}
where $(k_1, k_2)$ are two integration constants. For the
constraints,
equations (3.3), to be
fulfilled, we must have:
\begin{equation}
k_1 c_1=k_2 c_2.
\end{equation}
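As a quick consistency check, one may verify that the first constraint in (3.3) vanishes identically on the ansatz (3.4) once (3.5)-(3.6) are inserted, while (3.7) takes care of the second constraint. A Python sketch (the parameter values are arbitrary, chosen here to satisfy (3.7)):

```python
def constraint_residual(r, eps, H, c1, c2, k1, k2):
    """First constraint of eq. (3.3), evaluated on the ansatz (3.4)
    with (3.5)-(3.6) inserted and a(r) = 1 - eps*H^2*r^2, eq. (3.10).
    Under the ansatz: t' = c1, phi' = c2, r' = 0."""
    a = 1.0 - eps * H * H * r * r
    dr2 = -a * r * r * c2 * c2 - (a / (r * r)) * k2 * k2 \
          + k1 * k1 + a * a * c1 * c1          # (dr/dxi1)^2, eq. (3.6)
    dt = k1 / a                                # dt/dxi1, eq. (3.5)
    dphi = k2 / (r * r)                        # dphi/dxi1, eq. (3.5)
    return -a * (dt * dt + c1 * c1) + dr2 / a + r * r * (dphi * dphi + c2 * c2)

# de Sitter (eps = +1) inside the horizon, with k1*c1 = k2*c2, eq. (3.7):
res = constraint_residual(r=0.5, eps=+1, H=1.0, c1=1.0, c2=2.0, k1=2.0, k2=1.0)
```

The residual vanishes (up to rounding) for any parameter values, confirming that the ansatz solves the first constraint identically.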
In particular, circular string dynamics as considered in
[12-15, 17, 21, 23] corresponds to
$c_1=k_2=0$ and $(\xi^1,\xi^2)=(\tau,\sigma),$ while the infinitely
long
stationary strings considered in
[15] correspond to the "dual" choice
$c_2=k_1=0$ and $(\xi^1,\xi^2)=(\sigma,\tau).$
The induced line element on the string world-sheet is:
\begin{equation}
dS^2=(r^2 c_2^2-a(r)c_1^2)[-(d\xi^1)^2+(d\xi^2)^2],
\end{equation}
such that the fundamental quadratic form is given by:
\begin{equation}
e^\alpha=2|r^2 c_2^2-a(r)c_1^2|.
\end{equation}
Let us now return to our main interest here: strings in de Sitter and
anti de
Sitter spacetimes. In this case, the function $a(r)$ is given by:
\begin{equation}
a_{(\epsilon)}=1-\epsilon H^2 r^2.
\end{equation}
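The static coordinates are related to the embedding description of Section 2 by the standard static-patch parametrization of the hyperboloid (2.2) (not written out above). The following Python sketch (with an arbitrary illustrative value of $H$) checks numerically that this embedding lies on the hyperboloid and induces the metric (3.1) with (3.10) for $\epsilon=+1$:

```python
import math

H = 0.7  # an arbitrary illustrative value of the Hubble constant

def q(t, r, phi):
    """Standard static-patch embedding of de Sitter spacetime into R^{1,3}:
    q^mu = H*(t_flat, u, x, y), valid inside the horizon r < 1/H."""
    w = math.sqrt(1.0 / H**2 - r * r)
    return (H * w * math.sinh(H * t), H * w * math.cosh(H * t),
            H * r * math.cos(phi), H * r * math.sin(phi))

def dot(p, s):
    """Minkowski product with eta = diag(-1,+1,+1,+1), eq. (2.3)."""
    return -p[0] * s[0] + p[1] * s[1] + p[2] * s[2] + p[3] * s[3]

def induced_metric(t, r, phi, h=1e-5):
    """Finite-difference check of ds^2 = (1/H^2) eta_mn dq^m dq^n."""
    def deriv(i):
        hi = [t, r, phi]
        lo = [t, r, phi]
        hi[i] += h
        lo[i] -= h
        return [(x - y) / (2.0 * h) for x, y in zip(q(*hi), q(*lo))]
    e = [deriv(i) for i in range(3)]
    return [[dot(e[i], e[j]) / H**2 for j in range(3)] for i in range(3)]
```

Evaluating at a point inside the horizon reproduces $g_{tt}=-a(r)$, $g_{rr}=1/a(r)$ and $g_{\phi\phi}=r^2$ to finite-difference accuracy.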
In the case of anti de Sitter spacetime $(\epsilon=-1),$
the static coordinates $(t,r,\phi)$ cover the complete
manifold, while for de Sitter spacetime $(\epsilon=+1),$ they cover
only the
region inside the horizon; the complete de Sitter manifold can
however be
covered by four coordinate patches of the form (3.1), (3.10), see for
instance
\cite{rin}. Notice that the
equation (3.6) for the radial coordinate can be solved explicitly in
terms of the Weierstrass elliptic $\wp$-function \cite{abr}.
The other two equations (3.5)
can then be integrated; the results being expressed in terms of
the Weierstrass elliptic $\sigma$ and $\zeta$-functions \cite{abr}.
We have thus solved completely the string equations of motion and
constraints using the ansatz (3.4) in both de Sitter and anti de
Sitter spacetimes, but the explicit expressions of the solutions are
not important here. It should also be stressed that in general the
ansatz (3.4) does not lead to solutions automatically fulfilling the
standard closed or open string boundary conditions, see for instance
\cite{gsw}. However, imposing the boundary conditions does not raise
any problems. In some cases the ansatz
(3.4) actually {\it does} lead to solutions fulfilling the standard
boundary
conditions; an example is $c_1=k_2=0,$ in which case the solution
describes
dynamical circular strings [12-15, 17, 21, 23]. Finally, we are often
interested
in
string
solutions that do not fulfill the standard closed or open string
boundary
conditions; this is for instance the case for infinitely
long strings \cite{all2,zel} or finite open
strings with external forces acting on the endpoints
of the strings \cite{vil2,fro}.
Let us consider the spacetime
region where $(c_2^2+\epsilon H^2 c_1^2)r^2\geq c_1^2$
(similar conclusions are reached in the other region). In this case
$\xi^1$ is
the timelike world-sheet coordinate, $\xi^1\equiv\tau/H.$
Then, equations (3.6), (3.9)
lead to:
\begin{eqnarray}
\left( \frac{d\alpha^{(\epsilon)}}{d\tau}\right) ^2-2\epsilon
e^{\alpha^{(\epsilon)}}+\frac{8}{H^2}[c_1^2 c_2^2
\hspace*{-2mm}&-&\hspace*{-2mm}(c_2^2+\epsilon H^2 c_1^2)
(k_1^2+\epsilon H^2 k_2^2)]
e^{-\alpha^{(\epsilon)}}\nonumber\\
=\hspace*{-2mm}&-&\hspace*{-2mm}\frac{4}{H^2}(c_2^2-\epsilon H^2
c_1^2).
\end{eqnarray}
Now, by tuning the
constants of motion to fix the sign
of the square bracket, and by performing conformal transformations
of the form (2.29), we can, after differentiation with respect to
$\tau$, reduce
this equation to either
the Sinh-Gordon equation, the Cosh-Gordon equation or the Liouville
equation:
\begin{eqnarray}
&\epsilon[c_1^2 c_2^2-(c_2^2+\epsilon H^2 c_1^2)
(k_1^2+\epsilon H^2
k_2^2)]<0&\;\;\;\;\Rightarrow\;\;\;\;\;\;\mbox{Sinh-Gordon}
\nonumber\\
&\epsilon[c_1^2 c_2^2-(c_2^2+\epsilon H^2 c_1^2)
(k_1^2+\epsilon H^2
k_2^2)]>0&\;\;\;\;\Rightarrow\;\;\;\;\;\;\mbox{Cosh-Gordon}
\nonumber\\
&\;[c_1^2 c_2^2-(c_2^2+\epsilon H^2 c_1^2)
(k_1^2+\epsilon H^2 k_2^2)]=0\;&\;\;\;\;\Rightarrow\;\;\;\;\;\;
\mbox{Liouville}\nonumber
\end{eqnarray}
Thus, we have constructed explicit solutions to the string equations
of
motion and constraints associated to
the Sinh-Gordon equation, the Cosh-Gordon equation or the Liouville
equation
and all three equations appear in both de Sitter and anti de Sitter
spacetimes.
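This classification can be encoded compactly. The Python sketch below (illustrative only) reads off the sector directly from the constants of motion, and reproduces the fact that circular strings ($c_1=k_2=0$) belong to the Sinh-Gordon sector in de Sitter spacetime and to the Cosh-Gordon sector in anti de Sitter spacetime:

```python
def sector(eps, H, c1, c2, k1, k2):
    """Sector of the alpha-equation for a solution of the form (3.5)-(3.7),
    read off from the sign structure displayed below eq. (3.11)."""
    bracket = c1**2 * c2**2 \
        - (c2**2 + eps * H**2 * c1**2) * (k1**2 + eps * H**2 * k2**2)
    s = eps * bracket
    if s < 0:
        return "Sinh-Gordon"
    if s > 0:
        return "Cosh-Gordon"
    return "Liouville"
```

Stationary strings ($c_2=k_1=0$) in de Sitter spacetime also land in the Sinh-Gordon sector, consistent with the remark that all previously known solutions have $K=+1$.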
We close this section with the following remark: The ansatz (3.4) is
a
generalization of both the circular string ansatz
($c_1=0,\;\;\phi(\xi^1)=\mbox{const.},\;\;\xi^1$ timelike) and the
stationary
string ansatz ($c_2=0,\;\;t(\xi^1)=\mbox{const.},\;\;\xi^1$
spacelike). In both
these cases, it was shown in Refs. [12-15] that the resulting
solutions in de Sitter and anti de Sitter spacetimes should be
interpreted as
multi-string solutions, that is to say, string solutions where one
single
world-sheet
describes finitely or infinitely many different and independent
strings. The
existence of such
multi-string solutions appears to be a quite general feature in
constant
curvature (and asymptotically constant curvature) spacetimes.
\section{The 2+1 BH-ADS Spacetime}
\setcounter{equation}{0}
As another example to illustrate our general results of Section 2, we
now
consider the $2+1$ dimensional black hole anti de Sitter spacetime
(BH-ADS).
The metric of the $2+1$ dimensional BH-ADS spacetime in its standard
form
is given by \cite{ban}:
\begin{equation}
ds^2=(\frac{J^2}{4r^2}-\Delta)dt^2+\frac{dr^2}{\Delta}-Jdt dr +r^2
d\phi^2,
\end{equation}
where:
\begin{equation}
\Delta=\frac{r^2}{l^2}-M+\frac{J^2}{4r^2}.
\end{equation}
Here $M$ represents the mass, $J$ is the angular momentum and the
cosmological
constant is $\Lambda=-1/l^2.$ For $M^2 l^2\geq J^2,$ there are two
horizons
$(g_{rr}=\infty):$
\begin{equation}
r^2_\pm=\frac{Ml^2}{2}\left( 1\pm\sqrt{1-\frac{J^2}{M^2
l^2}}\;\right),
\end{equation}
and a static limit $(g_{tt}=0):$
\begin{equation}
r^2_{\mbox{st}}=M l^2.
\end{equation}
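For concreteness, the horizons (4.3) and the static limit (4.4) are easily evaluated numerically. The sketch below (with arbitrary illustrative values of $M$, $J$ and $l$) also checks that $\Delta$ vanishes at $r_\pm$:

```python
import math

def bhads_radii(M, J, l):
    """Horizons r_pm, eq. (4.3), and static limit r_st, eq. (4.4),
    assuming M^2 l^2 >= J^2 (otherwise there is no horizon)."""
    root = math.sqrt(1.0 - J * J / (M * M * l * l))
    r_plus = math.sqrt(0.5 * M * l * l * (1.0 + root))
    r_minus = math.sqrt(0.5 * M * l * l * (1.0 - root))
    return r_plus, r_minus, math.sqrt(M) * l

def Delta(r, M, J, l):
    """The function Delta of eq. (4.2)."""
    return r * r / (l * l) - M + J * J / (4.0 * r * r)
```

For $J=0$ the inner horizon disappears and the outer horizon coincides with the static limit.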
This spacetime has attracted a lot of interest recently (for a
review, see
for instance \cite{car}), since the causal structure is similar to
that of the
four dimensional Kerr spacetime. However, notice that there is no
strong curvature singularity at $r=0,$ in fact:
\begin{equation}
R_{\mu\nu}=2\Lambda g_{\mu\nu},
\end{equation}
that is to say, the curvature is constant everywhere and the
spacetime
is locally and asymptotically isometric to $2+1$ dimensional
anti de Sitter spacetime; this is of course why it is also relevant
for our
purposes here. For more details on the local and global geometry of
the BH-ADS
spacetime, see for instance Refs. [16, 30-32].
The problem of the string propagation in the BH-ADS spacetime
was completely analyzed and the circular string motion was exactly
solved, in
terms of elliptic functions,
by the present authors in \cite{all1}. The
equation determining the string loop radius as a function of time is:
\begin{equation}
\left( \frac{dr}{d\tau}\right)^2+r^2\left( \frac{ r^2}{l^2}-M\right)
+
\frac{J^2}{4}-E^2=0,
\end{equation}
where $E^2$ is a non-negative integration constant,
while the fundamental quadratic form $\alpha,$ which
determines the invariant size
of the string, is given by:
\begin{equation}
e^\alpha=2 r^2/l^2.
\end{equation}
It is then straightforward to show that Eq.~(4.6)
becomes:
It is then straightforward to show that Eq.~(4.6)
\begin{equation}
\left( \frac{d\alpha}{d\tau}\right)^2+2e^\alpha-\frac{8}{l^2}
\left( E^2-\frac{J^2}{4}\right) e^{-\alpha}= 4M.
\end{equation}
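This reduction is easy to verify numerically (an illustrative check, not part of the original derivation): substituting $(d\alpha/d\tau)^2=4(dr/d\tau)^2/r^2$ and $e^\alpha=2r^2/l^2$ into Eq. (4.6) must reproduce Eq. (4.8) identically, for any parameter values.

```python
import math
import random

def alpha_equation_lhs(M, J, E, l, r):
    """Left-hand side of Eq. (4.8) evaluated via Eq. (4.6):
    (dalpha/dtau)^2 + 2 e^alpha - (8/l^2)(E^2 - J^2/4) e^(-alpha),
    using e^alpha = 2 r^2/l^2, so that dalpha/dtau = 2 (dr/dtau)/r."""
    dr_dtau_sq = E**2 - J**2 / 4.0 - r**2 * (r**2 / l**2 - M)  # Eq. (4.6)
    dalpha_dtau_sq = 4.0 * dr_dtau_sq / r**2
    e_alpha = 2.0 * r**2 / l**2
    return dalpha_dtau_sq + 2.0 * e_alpha \
        - (8.0 / l**2) * (E**2 - J**2 / 4.0) / e_alpha

# The identity lhs == 4M holds for arbitrary (randomly drawn) values:
random.seed(0)
for _ in range(100):
    M, J, E, l, r = (random.uniform(0.5, 2.0) for _ in range(5))
    assert math.isclose(alpha_equation_lhs(M, J, E, l, r), 4.0 * M,
                        rel_tol=1e-9)
```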
After performing a conformal transformation of the form (2.28) and
differentiating with respect to $\tau$,
this equation reduces to the (i) Sinh-Gordon equation if $E^2 <
J^2/4,$ (ii) to
the
Cosh-Gordon equation if $E^2 > J^2/4$ and (iii) to the Liouville
equation if
$E^2=J^2/4,$
thus all three equations are present. Notice finally that the three
different types of allowed dynamics as reported in \cite{all1},
essentially
whether the
circular string collapses into $r=0$ (case (ii)) or not (case (i)),
precisely
correspond
to these different equations (in the limiting case (iii), the string
contracts
from the static limit to $r=0$).
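The classification above reduces to the sign of $E^2-J^2/4$; a minimal sketch (the function name is ours, but the thresholds follow directly from the text):

```python
def circular_string_regime(E2, J2):
    """Classify the reduced equation governing the circular string in the
    2+1 BH-ADS spacetime by the sign of E^2 - J^2/4 (cases (i)-(iii))."""
    d = E2 - J2 / 4.0
    if d < 0:
        return "sinh-Gordon"   # case (i): the string does not collapse to r = 0
    if d > 0:
        return "cosh-Gordon"   # case (ii): the string collapses into r = 0
    return "Liouville"         # case (iii): contraction from the static limit

assert circular_string_regime(1.0, 9.0) == "sinh-Gordon"   # E^2 < J^2/4
assert circular_string_regime(4.0, 1.0) == "cosh-Gordon"   # E^2 > J^2/4
assert circular_string_regime(1.0, 4.0) == "Liouville"     # E^2 = J^2/4
```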
\section{Concluding Remarks}
\setcounter{equation}{0}
In conclusion, we have shown that the fundamental quadratic form of
classical
string propagation in $2+1$ dimensional constant curvature spacetimes
solves
the Sinh-Gordon equation, the Cosh-Gordon equation or the Liouville
equation.
We have shown that in both de Sitter and anti de Sitter spacetimes
(as well as
in the $2+1$ BH-ADS spacetime), all three equations must be included
to cover
the generic string dynamics. This is particularly enlightening,
since generic
features of the string propagation in these spacetimes can be read
off directly
at the level of the equations of motion from the properties of the
Sinh,
Cosh and Liouville potentials, without need of solving the
equations.
We also constructed new classes of explicit solutions to
{\it all} three equations in
both de Sitter and anti de Sitter spacetimes, exhibiting the
multi-string
property.
Finally, it is worth observing that our results suggest the existence
of
various kinds of dualities relating the different string solutions in
de Sitter
and anti de Sitter spacetimes. From the potentials, Eqs.
(2.36)-(2.38), it
follows, in particular, that small strings are dual
($\alpha\rightarrow
-\alpha$) to large strings in the $K=+1$ ($K=-1$) sector of de Sitter
(anti de Sitter)
spacetime. Furthermore, small (large) strings in the $K=-1$ sector in
de Sitter
spacetime are dual ($\alpha\rightarrow
-\alpha,\;\;\epsilon\rightarrow
-\epsilon$) to large (small) strings in the $K=+1$ sector in anti de
Sitter
spacetime.
\vskip 12pt
\hspace*{-6mm}{\bf Acknowledgements:}\\
The work by A.L. Larsen was supported by NSERC (Natural
Sciences and Engineering Research Council of Canada). We also
acknowledge
support from NATO
Collaboration Grant CRG 941287.
\newpage
\section{Introduction} \label{sec:intro}
Gamma-ray observations have played a central role in multi-messenger astronomy. The detection of the gamma-ray burst GRB 170817A by \textit{Fermi}-GBM shortly after the gravitational wave signal GW170817 \citep{grbgw170817}, and the gamma-ray flare from the blazar TXS 0506+056 seen by \textit{Fermi}-LAT in coincidence with the high-energy neutrino IceCube-170922A \citep{TXSmulti2018} resulted in unique insights which would not have been possible otherwise. In the near future, gamma-ray transients will also be critical for identifying counterparts to gravitational wave signals and understanding the details of neutron star mergers \citep{2019BAAS...51c.260B}. Gamma-ray observations also provide critical information needed to understand the physics of cosmic ray acceleration and the sources that produce these energetic charged particles \citep{10.1093/mnras/stx3280, 10.1093/mnras/stx498,2019BAAS...51c.431O,2019BAAS...51c.151O,2019BAAS...51c.396V,2019BAAS...51c.485V}.
Despite this success, there is a portion of the gamma-ray spectrum that remains relatively unexplored. The so-called ``MeV gap'', from approximately 0.1 MeV to 100 MeV, has not been studied with the same sensitivity as the neighboring energies, lagging behind by more than an order of magnitude in flux. There is also great discovery potential in this energy range, in part due to the fact that some classes of AGN \citep[e.g.,][]{1998MNRAS.299..433F,2019arXiv190306106P,2019BAAS...51c.485V,2019BAAS...51c.348R,2019BAAS...51c.291M}, magnetars in quiescence \citep[e.g.,][]{2008A&ARv..15..225M,Turolla2015,2019BAAS...51c.292W}, rotation-powered pulsars \citep[e.g.,][]{2015MNRAS.449.3827K,2019BAAS...51c.379H} and topical systems involving pulsar winds and leptonic pevatrons \citep{1996ApJ...457..253D,2013A&ARv..21...64D,2013arXiv1305.2552K,2015SSRv..191..391K,2017hsn..book.2159S,2017ApJ...850..100A,2019BAAS...51c.513G,2019BAAS...51c.183D,2020ApJ...904...91V,2020ApJ...897...52A,2021MNRAS.502..915C,2021arXiv210801705W,2021arXiv211102575A} have a peak energy output in the 0.1-100 MeV band. Moreover, MeV observations can constrain some models and sectors of particle dark matter \citep[e.g.,][]{2019BAAS...51c..78C,2021JCAP...06..036F}.
AMEGO-X\footnote{\url{https://asd.gsfc.nasa.gov/amego-x}} is a proposed mission capable of surveying the sky with unprecedented sensitivity at these energies \citep{2021arXiv210802860F}, more than an order of magnitude better compared to its predecessors \citep{comptel1985sensi}. As explained in Section \ref{sec:mission}, AMEGO-X is equipped with a silicon pixel tracker and a cesium iodide scintillator calorimeter, tuned to detect gamma-ray photons both through Compton scattering interactions ($\sim$100 keV to $\sim$10 MeV) and electron-positron pair production ($\sim$10 MeV to $\sim$1 GeV). Two or more hits on either the tracker or the calorimeter allow for distinguishing between these two types of events, and for the reconstruction of the direction and energy of the incident photon. AMEGO-X builds upon the AMEGO concept \citep{McEnery2019All}, with a reduced energy threshold, improved sensitivity at $\sim$GeV energies, and a smaller and lighter detector.
The science enabled by AMEGO-X is vast and varied. While the measurements in the MeV band will be crucial, some of the sources of interest have a spectral distribution that extends below the energy threshold for Compton events. As discussed in Section \ref{sec:science}, the energy output of some of these sources peaks below 200 keV. This is the case for some gamma-ray bursts (GRBs), magnetar giant flare (MGF) tails, and recurrent magnetar short bursts ---such as those associated with a fast radio burst in SGR 1935+2154 \citep{gbm_catalog_spectra_2014,2015ApJS..218...11C,2020ApJ...898L..29M,Bochenek_2020,2020Natur.587...54C,Burns2021,2021NatureSvinkin,hurley2005exceptionally, 2021NatAs...5..378L,2021NatAs...5..408Y,1999Natur.397...41H,1999AstL...25..635M,1999ApJ...515L...9F}. Extending the energy range of AMEGO-X to lower energies will significantly enhance its capacity to study these different source classes.
In this work we demonstrate that it is possible to extend the energy range of AMEGO-X down to 25 keV using \textit{single-site events} (SSE). These are events that produce a single hit, in most cases due to photoelectric absorption. The cross section for this type of interaction increases rapidly below $\sim$200 keV.
We show in Section \ref{sec:performance} how the large effective area between 25 keV and 100 keV allows us to achieve a competitive performance for transient events of up to 100 s duration. Moreover, despite the fact that, by definition, SSE leave no tracks and therefore event-by-event direction reconstruction is not possible, we show how the spatial distribution of the SSE in the detector can be used to constrain the location of a burst like GRB 170817A to within a $<$2$^{\circ}$ radius.
Leveraging the information from SSE will enhance the AMEGO-X capabilities overall. It will enlarge the observable volume and increase the rate of detected GRBs and other transients, supporting the multi-wavelength and multi-messenger efforts of the astronomy and astroparticle communities. We will be able to measure energy spectra down to tens of keV, improving our ability to reject or support theoretical models. This spectral extension will assist AMEGO-X in its goal of significantly improving our knowledge of transient sources in the MeV band.
\section{Summary and conclusions}
In this paper it has been shown that the performance of AMEGO-X at low energies is significantly improved by utilizing photons that deposit all their energy in a single tracker pixel, further advancing its science goals. Using single-site events (SSE) we lower the AMEGO-X detection energy threshold and achieve a continuous sensitivity to transient events over five orders of magnitude, from 25 keV to $\sim$1 GeV.
While SSE leave no tracks, we have demonstrated that we can use the aggregate signal from a transient source to provide a competitive sky localization and support the multi-wavelength and multi-messenger efforts of the community. An event similar to GRB 170817A would be localized to within a $<$2$^\circ$ radius.
We find an integrated flux sensitivity (6.5$\sigma$) between 25 keV and 100 keV of $\sim0.5 \ \mathrm{ph \ cm^{-2} \ s^{-1}}$. Employing SSE gives a significant benefit for studying transient sources with peak energies below a few hundred keV or with an otherwise soft spectrum. It more than doubles the number of gamma-ray bursts that AMEGO-X will detect and allows a better understanding of the source spectral and time-dependent properties.
SSE capability will open the possibility for many interesting studies relating to magnetar giant flares (MGFs). The tails of MGFs can be detected up to a distance of $\sim 700$ kpc with a time resolution of 0.5 s, enough to observe the periodicity due to the star rotation. Integrating for $\sim$10 s, AMEGO-X could detect a bright MGF tail like the one in 2004 from SGR 1806-20 as far away as the Andromeda Galaxy. The achieved sensitivity also enables the detection of soft gamma-ray bursts associated with FRBs that are about 100 times dimmer than the one associated with SGR 1935+2154 in April 2020.
Using SSE allows us to exploit the full potential of AMEGO-X and contribute to an exciting and successful mission.
\section{The AMEGO-X mission}
\label{sec:mission}
The AMEGO-X Gamma-Ray Telescope (GRT) consists of a silicon pixel Tracker and a hodoscopic Cesium Iodide (CsI) Calorimeter, surrounded by a plastic anti-coincidence detector (ACD), as shown in Figure~\ref{fig:amego-xschematic}~(a).
The GRT consists of four identical detection towers each with a Tracker and Calorimeter Module.
The Tracker Module has 40 layers, each measuring $40\times40$~cm$^2$ and separated by 1.5~cm, composed of monolithic complementary metal-oxide-semiconductor (CMOS) active pixel sensors (APS). The Calorimeter consists of 4 layers, each with 25 $1.5\times1.5\times38$~cm$^3$ CsI bars, modelled after the \textit{Fermi}-LAT calorimeter.
To operate as a telescope sensitive to Compton scattering and pair production interactions across four decades of energy, AMEGO-X requires a large instrumented area, with high stopping power, and good position and energy resolution in the sensitive detector volumes.
The GRT design is similar to \textit{Fermi}-LAT, although it is optimized for the low MeV range with a 3D position-sensitive Tracker and does not have conversion foils~\citep{2009ApJ...697.1071A}.
Specifically, the main functionality of the Tracker is to measure the energy and position of gamma-ray Compton scatter and pair-conversion charged-particle interactions with high precision. As seen in Figure~\ref{fig:amego-xschematic}, these types of interactions are denoted \textit{Compton events} ---either tracked (TC) or untracked (UC), depending on whether or not the direction of the scattered electron can be determined--- and \textit{pair events} (P), respectively.
The AMEGO-X instrument is further described in \cite{2021arXiv210802860F}.
\begin{figure*}[tb]
\centering
\gridline{\fig{figures/AMEGO-X_CAD.png}{0.45\textwidth}{(a)}
\fig{figures/AMEGO-X_Event_Detection.png}{0.35\textwidth}{(b)}}
\caption{(a) The AMEGO-X Gamma-Ray Telescope has four identical Detection Towers, each composed of a 40-layer silicon pixel Tracker and 4-layer CsI Calorimeter. The Detection Towers are surrounded by an ACD and Micrometeoroid Shield. (b) AMEGO-X is designed to detect Compton scattering and pair production events, where the position and energy deposited in multiple interactions within the large detector volume are used to reconstruct the direction of the incoming gamma ray. Single-site events, which result mainly from photoelectric absorption within a single pixel, cannot be used for single-photon imaging, but allow for increased low-energy sensitivity to transient events.}
\label{fig:amego-xschematic}
\end{figure*}
AMEGO-X leverages more than 10 years of development in CMOS monolithic APS technology from ground-based particle physics experiments~\citep{peric2021}.
The Tracker detectors are based on the AstroPix project \citep{BREWER2021165795} which has optimized the ATLASPix detector design \citep{peric2012} for gamma-ray applications.
Each AMEGO-X Tracker layer contains an array of 380 AstroPix $2\times2$~cm$^2$ APS chips, each with $19\times17$ pixels, where the pixel size is $1\times1$~mm$^2$.
The APS are 0.5 mm thick and operate at full depletion.
Each interaction within the Tracker is recorded as the energy deposited (25-700 keV) and the pixel location (0.5~mm$^3$ resolution). The low detector threshold of 25 keV allowed by the Tracker APS and the ability to obtain three-dimensional spatial information are two key features of the AstroPix detector that enabled this work.
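A quick bookkeeping cross-check of the Tracker geometry quoted above (illustrative arithmetic only; variable names are ours):

```python
# Tracker geometry from the text
towers, layers = 4, 40
chips_per_layer = 380          # AstroPix chips per layer, 2 cm x 2 cm each
pixels_per_chip = 19 * 17      # 1 x 1 mm^2 pixels

chip_area_cm2 = 2 * 2
layer_area_cm2 = chips_per_layer * chip_area_cm2   # per tower layer
total_pixels = towers * layers * chips_per_layer * pixels_per_chip

# The instrumented chip area fits within the 40 x 40 cm^2 layer footprint,
# and four towers together are consistent with the ~6400 cm^2 per-layer
# physical area quoted in the text.
assert layer_area_cm2 <= 40 * 40
assert towers * 40 * 40 == 6400
print(total_pixels)   # roughly 2 x 10^7 pixels in the full Tracker
```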
The main trigger criterion requires at least two detected hits in either the Tracker or the Calorimeter, resulting from Compton or pair-production interactions. At energies below the Compton regime ($\lesssim$100 keV), photons predominantly undergo photoelectric absorption and their energy is deposited in a single pixel.
With the high gamma-ray backgrounds at these energies and no imaging capabilities, single-site events (SSE) are dominated by background for most observations and are not continuously recorded due to prohibitively high data rates (see Section~\ref{sec:background}). SSE are instead saved into a temporary buffer, and short-duration ($\sim$100 seconds) readouts are initiated when the on-board Transient Alert (TA) logic identifies a significant rate increase above the background.
The AMEGO-X on-board TA logic is based on \textit{Fermi}-GBM algorithms~\citep{paciesas2012}. This capability allows AMEGO-X to measure emission down to 25 keV and expands the portfolio of astrophysical phenomena beyond the previously conceived sensitivity to include lower-energy transients, especially GRBs, magnetar giant flares, and magnetar short bursts, as discussed in Section~\ref{sec:science}.
AMEGO-X will send rapid alerts via the Tracking and Data Relay Satellite System (TDRSS) Demand Access within 30 seconds to the Gamma-ray Coordinates Network (GCN) enabling multi-wavelength follow-up.
\section{Science enabled by utilizing single-site events}
\label{sec:science}
\subsection{Gamma-Ray Bursts}
\label{sec:grbs}
Gamma-Ray Bursts (GRBs) are a class of transient events characterized by a highly variable pulse-like signal in gamma-rays, known as the prompt phase, followed by a smoothly decaying emission at various wavelengths, called the afterglow. The prompt emission can last from $<2$ s in the case of bursts classified as short GRBs (SGRBs), up to hundreds of seconds for long GRBs (LGRBs). It is generally believed that most LGRBs result from core-collapse supernovae, while the progenitors of SGRBs are compact object mergers, such as binary neutron stars (BNS) and neutron star--black hole binaries \citep{levan2016gamma}.
In recent years, short GRBs have played a key role in the development of multi-messenger astronomy after the simultaneous detection of GRB 170817A and the gravitational wave (GW) event GW170817 \citep{grbgw170817}, confirming BNS mergers as a type of progenitor. Questions still remain about the physics of neutron stars, jet formation and structure, acceleration mechanisms, and the existence of other progenitor types. In order to answer them we need a larger number of such joint GRB-GW detections, followed by extensive observations of the afterglow.
AMEGO-X can make use of its large field of view and on-board Transient Alert system to send rapid alerts with precise localization information to other detectors. Since the emission of GRBs typically peaks at hundreds of keV, with a median energy peak of $\sim$200 keV \citep{gbm_catalog_spectra_2014}, it is expected that SSE will play a major role in the detection and analysis of the majority of this type of transient phenomena by AMEGO-X.
In order to investigate this we first calculate the sensitivity using SSE alone. We use a fiducial model for SGRBs based on the average properties of the GRBs detected by GBM~\citep{2020ApJ...893...46V}. The model consists of a Comptonized spectrum ---described in Section \ref{sec:performance}--- with $\gamma = -0.37$ and $E_{peak}=$ 636 keV. We use a flat lightcurve lasting 1 s. The choice of a Comptonized spectral form with two free parameters is made for simplicity. Yet we note that it is preferred over the well-known Band model \citep[a modified broken power law with 4 parameters:][]{Band-1993-ApJ} for many of the bursts in the most recent {\it Fermi}-GBM spectral catalog in \cite{Poolakkil-2021-ApJ}. Thus the Comptonized form is suitable for developing a general sense of the utility of SSE in enhancing the AMEGO-X low energy sensitivity.
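For reference, a minimal sketch of a Comptonized photon spectrum (the pivot energy of 100 keV and the normalization convention are assumptions, following the common GBM-style parameterization by the $\nu F_\nu$ peak energy; they are not specified in this text):

```python
import math

def comptonized(E_keV, amp, gamma, E_peak_keV, E_piv_keV=100.0):
    """Comptonized (exponentially cutoff power-law) photon spectrum in
    ph / cm^2 / s / keV, parameterized by the nuFnu peak energy:
        N(E) = amp * (E/E_piv)^gamma * exp(-(gamma + 2) * E / E_peak)
    Valid for gamma > -2, where the nuFnu peak exists."""
    return (amp * (E_keV / E_piv_keV)**gamma
            * math.exp(-(gamma + 2.0) * E_keV / E_peak_keV))

# Fiducial SGRB model from the text: gamma = -0.37, E_peak = 636 keV.
# E^2 N(E) peaks at E_peak by construction:
vFv = lambda E: E**2 * comptonized(E, 1.0, -0.37, 636.0)
assert vFv(636.0) > vFv(500.0) and vFv(636.0) > vFv(800.0)
```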
As shown in Figure \ref{fig:SGRB_SNR}, we vary the flux and find where $SN=6.5$, defining our detection threshold. The estimated sensitivity is $6.5 \times 10^{-1} \ \mathrm{ph \ cm^{-2} \ s^{-1}}$ between 100 keV and 1 MeV (corresponding to $5.2 \times 10^{-1} \ \mathrm{ph \ cm^{-2} \ s^{-1}}$ between 50 keV and 300 keV). We also simulated GRB 170817A using the best-fit parameters $\gamma =-0.85$ and $E_{peak}=$ 229 keV~\citep{2017ApJ...848L..14G}. A similar event would be detected with high SN over the majority of the field of view. Figure \ref{fig:SGRB_Sensi} also puts the SSE sensitivity of GRBs in perspective with the Compton-only sensitivity by taking the SGRB fiducial model and varying the peak energy.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Signal_Noise_1s_All_sims.pdf}
\caption{Signal-to-noise ratio (defined in the text), using SSE only, versus flux for the fiducial SGRB model described in the text. We include counts between 25-100 keV. The curves are for the representative models, and the squares are for GRB 170817A. The source is simulated at different off-axis angles. Note that the on-axis case is mostly behind the 30$^\circ$ off-axis case.}
\label{fig:SGRB_SNR}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sensi_ix-0.37_1.0s_2}
\caption{Predicted sensitivity between 10 keV and 10 MeV as a function of the peak energy for a Comptonized spectrum with a fiducial low-energy spectral index $\gamma = -0.37$ and 1 s duration. Single-site events (SSE) dominate the sensitivity for peak energies below hundreds of keV over tracked (TC) and untracked (UC) Compton events.}
\label{fig:SGRB_Sensi}
\end{figure}
The increase in sensitivity for peak energies below hundreds of keV will be reflected in the number of detections. We quantify this by using empirical distributions of GRB peak energies, spectral indices and flux values based on \textit{Fermi}-GBM observations \citep{2020ApJ...893...46V}, extended below the threshold based on \textit{Swift}-BAT observations \citep{2016ApJ...829....7L}. We assumed that the observed logN-logS curve reflects the intrinsic GRB flux distribution, and considered a power law with an exponential cutoff spectrum. Based on these assumptions, we synthesized more than $10^5$ events, convolved them with the response of the instrument, and computed the SN for each simulated event for both single-site and Compton events. As opposed to BATSE and \textit{Fermi}-GBM, which use scintillators, AMEGO-X does not require a minimum of two detectors to trigger in order to avoid false detections, mostly from phosphorescence events. This allows us to use the full effective area of the detector to trigger on-board.
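A toy version of such a synthesis can be sketched as follows (all distributions, parameter values, and the function name are hypothetical simplifications of the procedure described above, which uses the full instrument response):

```python
import random

def toy_detection_rate(n_sim, rate_all_sky, flux_min, slope, flux_thresh):
    """Toy Monte Carlo: draw burst fluxes from a power-law logN-logS,
    N(>F) proportional to F^slope (slope < 0), count the fraction above
    a detection threshold, and scale it to an all-sky annual rate."""
    random.seed(1)  # deterministic for reproducibility
    detected = 0
    for _ in range(n_sim):
        # Inverse-CDF sampling: P(F > f) = (f / flux_min)^slope
        u = random.random()
        flux = flux_min * u**(1.0 / slope)
        if flux >= flux_thresh:
            detected += 1
    return rate_all_sky * detected / n_sim

# With slope = -1.5 and a threshold 4x flux_min, the detected fraction
# should be close to 4^(-1.5) = 0.125 of the all-sky rate.
rate = toy_detection_rate(20000, 200.0, 1.0, -1.5, 4.0)
assert 20.0 < rate < 30.0
```

Lowering `flux_thresh` (the effect of adding SSE) directly raises the detected fraction, which is the mechanism behind the rate increase quoted below.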
We find that by utilizing SSE, AMEGO-X will be able to detect $\sim$200 short GRBs yr$^{-1}$, more than doubling the number of detections compared to using only Compton events. As shown in Figure \ref{fig:SGRB_rates}, not only will the overall number of detections increase, but the use of SSE will also improve the signal-to-noise ratio, allowing for more detailed spectral and time-dependent analyses.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{shortGRBrates_log}
\caption{Expected rates of short GRBs seen by \mbox{AMEGO-X} above various detection thresholds. The filled bands correspond to the theoretical uncertainty in the modeling. Including single-site events significantly increases the rate of short GRB detections (nominal threshold 6.5$\sigma$ shown by the dashed line) as well as the rates of short GRBs detected with high enough significance to enable precise localization and detailed studies of the spectrum and lightcurve.}
\label{fig:SGRB_rates}
\end{figure}
\subsection{Magnetar giant flares and short bursts}
\label{sec:mgf}
Magnetars are highly magnetized ($>10^{13}$ G), young neutron stars (NSs) that display a wide range of radiative activity \citep{2008A&ARv..15..225M,Turolla2015}. Among the different dramatic variability exhibitions, magnetar giant flares (MGFs) are the most energetic. They are characterized by a short ($\lesssim 100$ ms), bright ($10^{44}-10^{46}$ erg), hard spike immediately followed by a $10^{44}$ erg/s minutes-long quasi-thermal ($3 k_b T \sim 50-100$ keV) pulsating tail modulated by the spin-period of the magnetar: these signatures have been observed for all three nearby events reported in ~\cite{Mazets1979}, \cite{Mazets1999} and \cite{Hurley2005,2005Natur.434.1107P}.
For distant events, only the prompt spike is currently detectable by wide-field GRB instruments and thus MGFs resemble cosmological short GRBs. As opposed to Galactic MGFs, which saturate all viewing detectors during the prompt emission, characterization is possible for extragalactic ones. The recently determined intrinsic volumetric rate of MGFs places MGFs as the dominant channel for extragalactic gamma-ray transients \citep{Burns2021}. Moreover, the 2020 MGF from the Sculptor galaxy \citep{2021NatureSvinkin, Roberts2021} revealed a peak energy of $\sim$1.5 MeV and a spectral index $\sim$0, resulting in a harder spectrum compared to typical cosmological short GRBs: this would explain the small number (only 4 in $\sim$27 years) of extragalactic MGFs detected so far by GRB monitors, which have trigger systems optimized to detect the latter kind of short GRBs \citep[see Figure 3 of ][]{Negro2021S3}. Given these spectral characteristics, the majority of the AMEGO-X sensitivity to MGFs comes from UC events, as shown in Figure \ref{fig:sensi_ratio_2D} (yellow upward-pointing triangles).
Considering the intrinsic rate of MGFs found by \cite{Burns2021}, we estimate that AMEGO-X will detect 1 to 19 local MGFs within 25 Mpc for a 3-year mission. The detection of just six MGFs, either Galactic or extragalactic, would double the current statistics of the known population \citep[we exclude the first MGF from the LMC in 1979 which was not accounted for in the intrinsic rate computation of][]{Burns2021}. This increase would improve the current statistical uncertainty on the average index of the power-law spectral fits for MGF initial spikes, thereby better securing the intrinsic energetics distribution for MGFs, and consequently reducing the volumetric rate uncertainty of MGFs by $\sim$40\%. Such an improvement is crucial to determining the favoured formation channel: only progenitors ---e.g. core-collapse supernovae--- with comparable or higher rates could form magnetars. Furthermore, a more stringent constraint on the cosmic MGF rate would help determine whether or not there is more than one MGF per magnetar during its active lifetime: this could then help identify the first class of repeating short GRBs ever observed.
For nearby MGFs within a few Mpc, the observation and characterization of MeV-GeV afterglows of MGFs enables the determination of the outflow/jet bulk Lorentz factor via pair creation transparency arguments \citep{Roberts2021}. Assuming an event like the 2004 MGF from SGR 1806-20 (GRB 041227), we estimate the maximum distance at which AMEGO-X would have detected the tail. We rely on the spectral and time evolution study reported in \cite{hurley2005exceptionally}, which gives a blackbody (BB) spectral shape with temperatures decreasing with time between $\sim$9 keV and $\sim$4 keV (see Figure 1(b) of that paper). In Figure \ref{fig:sensi_magnetarTail} we show, for both SSE and UC events, the maximum distance, for a given exposure time, at which AMEGO-X would have detected the tail of GRB 041227 at 5$\sigma$ significance: the advantage of SSE is clear. For the brightest part of the tail (between 30 and 200 seconds after the main peak), considering an exposure of 0.5 seconds, which would be enough to detect the periodicity due to the star rotation, AMEGO-X would detect the emission from the tail at a distance of $\sim$700 kpc with SSE; with UC events only, this distance drops to 20 kpc, which is about the distance of SGR 1806-20. During the fainter, latter half of the tail, more than 270 s after the initial spike, the temperature drops to an average of 5.1 keV, and SSE would contribute to observations of similar tail portions only out to a distance of about 160 kpc. With an integration time of 10 seconds, over which we can assume a stable background, AMEGO-X would recover the tail of a MGF from a magnetar in the Andromeda Galaxy (about 765 kpc away), which would be a first.
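The scaling behind these distance estimates can be illustrated with a simple background-limited approximation (an assumption for illustration; the figure itself uses the full simulated instrument response): the minimum detectable flux falls as $1/\sqrt{t_{\rm exp}}$ while the received flux falls as $1/D^2$, so the detection horizon grows as $t_{\rm exp}^{1/4}$.

```python
def horizon_kpc(D0_kpc, t_exp_s, t0_s=0.5):
    """Idealized background-limited scaling: minimum detectable flux
    ~ 1/sqrt(t_exp) and received flux ~ 1/D^2 imply a detection horizon
    growing as t_exp^(1/4). D0_kpc is the horizon at exposure t0_s."""
    return D0_kpc * (t_exp_s / t0_s) ** 0.25

# Anchored to the numbers in the text: a ~700 kpc SSE horizon for a
# 0.5 s exposure implies that a 10 s integration reaches past the
# Andromeda Galaxy (~765 kpc).
assert horizon_kpc(700.0, 10.0) > 765.0
```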
The blackbody spectral fit in \cite{hurley2005exceptionally} for the GRB 041227 tail, however, significantly underestimates the measured spectrum above 30 keV, where an additional power-law (PL) component would better represent the data. This additional component would further improve the AMEGO-X SSE sensitivities, resulting in an extension of the distance range for detectability of MGF tails.
Such an additional PL component was even more evident in the MGF tail from SGR 1900+14 in 1998 \citep{Feroci1999}: the BB component peaked at higher temperatures and the additional PL component was dominant above 100-200 keV, with a spectral index steepening with time and disappearing after about 200 seconds. Assuming the spectrum measured in the first 67 seconds of the event by \cite{Feroci1999}, namely an optically thin thermal bremsstrahlung (OTTB) component with kT$\sim$25 keV plus a power law with index $\sim$1.5, the 5$\sigma$ detectability as a function of the source distance and exposure time is reported in the left panel of Figure \ref{fig:sensi_magnetarTail} (dotted lines). AMEGO-X SSE events would allow the detection of such a tail emission out to $\sim 2$ Mpc with an exposure of 0.5 seconds, and with an exposure of 10 seconds it would detect it out to $\sim 8$ Mpc. In the right panel of Figure \ref{fig:sensi_magnetarTail}, for reference, the cumulative distribution of galaxies is presented as a function of the distance from Earth. Clearly, the enhancement of AMEGO-X sensitivity enables the extension of observations of MGF tails to more host galaxies. MGF tails may also appear unassociated with the initial hard spike in the particular case that the latter is due to a collimated outflow \citep{Roberts2021} directed outside our field of view. Accordingly, population studies of potential ``orphan'' tails enable independent constraints on outflow collimation, thereby significantly enhancing the SSE contribution to AMEGO-X extragalactic MGF science.
In addition to the MGFs, magnetars also produce much more numerous short duration (0.05-0.5 s) bursts with much lower luminosity (peak luminosity ranging between $10^{36}-10^{43}$ erg/s, with a power-law event energy distribution). In April 2020, the magnetar SGR 1935+2154 entered an active state, producing a ``burst storm" with more than 200 bursts in about a thousand seconds (see e.g. \cite{Younes2020_Storm}). A few hours after the peak of the storm, a hard X-ray burst (FRB-X) was observed contemporaneously with a fast radio burst \citep[FRB,][]{2020ApJ...898L..29M,2021NatAs...5..378L,Bochenek_2020,2020Natur.587...54C}. \cite{2021NatAs...5..378L} reported the time integrated ($T_0$-$0.2s-T_0$+$1s$) spectrum of the FRB-associated burst to be well represented by a soft cutoff-power-law (CPL) with photon index of 1.5 and energy cutoff around 84 keV (corresponding to a peak energy of 37 keV). This spectrum is somewhat steeper and extends to high energies relative to spectra of other bursts in the same epoch \citep{Younes2020_FRBSGR}. Note that despite the temporal association, it is unclear whether FRB-X and the FRB are causally connected.
Figure \ref{fig:sensi_ratio_2D} highlights the clear benefit SSE will bring to the AMEGO-X mission for such a soft, fast (1 s integration time) transient. Assuming the best-fit CPL model reported in \cite{2021NatAs...5..378L}, the energy-integrated flux between 10 and 100 keV is about 77.5 ph/cm$^2$/s, while the minimum flux AMEGO-X can detect at 6.5$\sigma$ (considering 1 s exposure) is 50 ph/cm$^2$/s for UC events and $\sim$0.5 ph/cm$^2$/s for SSE. This means that AMEGO-X could detect bursts which are a factor $\sim 100$ dimmer than the FRB-associated burst from SGR 1935+2154. This is particularly interesting since several additional, dimmer FRBs have been observed from SGR 1935+2154 \citep{moreFRBnoSGR, FASTfrb}. Identifying and collecting a much larger sample of magnetar short bursts, afforded by the sensitive wide-field monitoring capability of AMEGO-X SSEs, in concert with wide-field radio facilities in the 2020s, will help solve the mystery of why not all magnetar short bursts produce FRBs. Moreover, millisecond time-offsets between the radio and hard X-rays, as observed in SGR 1935+2154 by INTEGRAL \citep{2020ApJ...898L..29M}, will also help inform on the burst mechanism.
Magnetar bursts have a rate of about a few thousand per decade in \textit{Fermi}-GBM \citep{2015ApJS..218...11C,2020ApJ...902L..43L}, but are highly clustered in time in specific sources and exhibit statistical behavior and unpredictability similar to earthquakes \citep{1996Natur.382..518C,1999ApJ...526L..93G}. The short bursts are typically characterized by spectral fits employing double blackbodies of similar luminosities, with a hot component at $k T_h \sim 20-50$ keV and a cooler component at $k T_c\sim 5-10$ keV \citep{2008ApJ...685.1114I,2012ApJ...749..122V,2014ApJ...785...52Y,2020ApJ...893..156L}. Thus, magnetar short bursts are expected to be readily observable in the SSE channel with AMEGO-X, with a higher rate than GBM. Short bursts have also exhibited quasi-periodic oscillations (QPOs) \citep[e.g.,][]{2014ApJ...787..128H} that will be important to identify in SSEs.
\begin{figure*}
\centering
\gridline{\fig{sensi_magnetarTail_new_2.pdf}{0.48\textwidth}{(a)}
\fig{Gal_distrib_insert.pdf}{0.46\textwidth}{(b)}}
\caption{(a) Minimum exposure time necessary for AMEGO-X to detect average MGF tail emission as a function of the distance to the magnetar, using SSE (teal) and Compton only (blue). Results are shown for three different spectral models: A surface blackbody spectrum with kT = 5.1 keV (solid lines) representative of the late-tail emission of the MGF from SGR 1806-20 between $\sim$272 and 400 seconds, a blackbody spectrum with kT = 9 keV (dashed line) representative of the first 200 seconds of the tail emission from SGR 1806-20, and an OTTB+PL model spectrum with kT = 25 keV, and $\Gamma = 1.47$ (dotted lines) representative of the tail emission from SGR 1900+14. The spectra of the MGF tail from SGR 1806-20 are from \cite{Hurley2005}, while the spectrum for the MGF from SGR 1900+14 is from \cite{Feroci1999}. The grey vertical lines indicate the measured distances to the two magnetars and to the Andromeda galaxy. With SSE, AMEGO-X would be able to detect emission from magnetar tails similar to the ones already observed on second timescales even if they were located in the Andromeda galaxy or other nearby galaxies. With Compton events only, AMEGO-X could still expect to detect tail emission from some galactic MGFs. (b) Cumulative distribution of nearby galaxies.}
\label{fig:sensi_magnetarTail}
\end{figure*}
\section{Single-site events performance}
\label{sec:performance}
The response of the AMEGO-X instrument has been simulated using the Medium-Energy Gamma-ray Astronomy library (MEGAlib) software package\footnote{Available at \url{https://megalibtoolkit.com/home.html}}, a standard tool in MeV astronomy~\citep{2006NewAR..50..629Z}. MEGAlib is based on GEANT4 \citep{geant4_2003} and is able to simulate the emission from a source, the passage of particles through the spacecraft and their energy deposits in the detector. It also performs event reconstruction and imaging analysis, and allows one to estimate the background rates from the different expected components.
Knowledge of the effective area and the expected background is critical to understanding the advantages of utilizing SSE over considering only Compton and pair events. We estimate the sensitivity ratio between these two cases for various spectral hypotheses, which shows that some types of studies are only possible when SSE are considered, as described in Section \ref{sec:science}.
In this section we also describe the expected localization performance for burst-like events using the aggregate signal from SSE. This will allow AMEGO-X to provide sky localization information ---critical for multi-messenger and multi-wavelength detection--- even for soft-spectrum sources with a low signal in the Compton channel.
\subsection{Effective area}
The ability to perform imaging analysis reduces the background and increases the sensitivity of an instrument. Although this is not an option for SSE, the increase in background with respect to Compton and pair events is well compensated by the large effective area for this type of event. Each tracker layer has a physical area of approximately 6400 cm$^{2}$, and together they efficiently absorb these low-energy photons.
Figures \ref{fig:effectiveAreaVsEnergy} and \ref{fig:effectiveAreaVsZenith} show the effective area for the various types of events AMEGO-X will be able to detect (see Section \ref{sec:mission}), as a function of energy and off-axis angle, respectively. The effective area for SSE peaks at about 40 keV with $\sim$3700 cm$^{2}$. This is about an order of magnitude greater than the effective area for UC events, the lowest-energy events for which imaging is possible. In the 30-200 keV range, AMEGO-X's effective area is more than an order of magnitude greater than that of a single NaI detector of \emph{Fermi}-GBM. Although \emph{Fermi}-GBM utilizes 12 such detectors, they have different pointings, and therefore the combined effective area for any given direction always falls below that of AMEGO-X. This exemplifies why AMEGO-X will be able to detect fainter sources in this energy range compared to current detectors.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{figures/scale0.8_Aeff_log_onAxis}.pdf}
\caption{Simulated effective area for different event types recorded by AMEGO-X as a function of the photon energy, for incident photons parallel to the boresight (on-axis). The grey lines show the effective areas of other instruments: \emph{Fermi}-GBM (single detector only, \cite{GBM2009}), COMPTEL \citep{1993ApJS...86..657S}, \emph{Fermi}-LAT \citep{FermiLATPerformanceURL}, and \emph{Swift}-BAT \citep{2005SSRv..120..143B} (for photons with a 30\textdegree{} angle to boresight).}
\label{fig:effectiveAreaVsEnergy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{effarea_vs_zenith}
\caption{Effective area as a function of the off-axis angle for various energies where each type of event dominates. The sudden decrease at a $\sim90^\circ$ angle is caused by the minimal projected area of the tracker layers as seen from the sides. Upwards-going events are severely attenuated by the calorimeter.}
\label{fig:effectiveAreaVsZenith}
\end{figure}
\subsection{Background}
\label{sec:background}
We use MEGAlib to predict and simulate the average background AMEGO-X will observe during flight, including both instrumental and astrophysical components. The background simulations included prompt and delayed (activation-induced radioactive decays inside the instrument) emission from primary cosmic-ray electrons and positrons \citep{Mizuno_2004}, protons and helium nuclei \citep{SPENVIS}, extra-galactic diffuse gamma-ray emission \citep{1999ApJ...520..124G}, and albedo emission, i.e., secondary particles produced in the Earth's atmosphere, including neutrons \citep{Kole2015}, electrons and positrons \citep{200010, Mizuno_2004}, X-ray and gamma-ray photons \citep{tuerler2010, Mizuno_2004, 2009PhRvD..80l2004A, Harris2003}, and protons \citep{Mizuno_2004}. Cosmic-ray electrons and protons trapped in the Earth's geomagnetic field \citep{SPENVIS} only contribute via delayed emission, as their prompt emission is negligible outside the SAA, inside of which data-taking will be paused. The current AMEGO-X background model does not yet include the Galactic diffuse component. We assumed a 575 km orbit with an inclination of 6$^\circ$.
Figure~\ref{fig:BG} shows the expected measured background spectra for the four different event types. The statistical fluctuations are due to the short simulation time. In general, the calculation of the SSE background is very computationally expensive, and so we use a simulation time of 60 seconds for this event type (a long enough exposure to cover short and long GRBs) and 1 hr for the rest. For SSE the background count rate between 25 - 100 keV is $\sim$ 23 kHz. For comparison, the background count rate for UC events in the energy range 25 - 100 keV (100 keV - 1 MeV) is $\sim$ 2 Hz ($\sim$ 330 Hz).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{AMEGO_X_Backgrounds.pdf}
\caption{AMEGO-X background rates including both astrophysical and instrumental contributions. The background is comprised of (from left to right) single-site events (cyan), untracked Compton events (blue), tracked Compton events (green), and pair events (red).}
\label{fig:BG}
\end{figure}
\subsection{Sensitivity}
In order to estimate the sensitivity we define the signal-to-noise ratio (SN) as
\begin{equation}
SN = \frac{S}{\sqrt{S+B}} \ ,
\end{equation}
where $S$ is the number of signal counts and $B$ is the number of background counts. The SN is computed for each event type and then summed in quadrature if needed. We chose a conservative detection threshold of $SN > 6.5$, corresponding to a false alarm rate of $\sim1$ yr$^{-1}$. For a point-like source, we have $S=F\cdot A \cdot T$ and $B= b \cdot T$, where $F$ is the energy-integrated gamma-ray particle flux, averaged over the observing time $T$, $A$ is the spectrum-averaged effective area determined from simulations, and $b$ is the background rate, also determined from simulations. $A$ and $b$ depend on the event type in question (SSE or UC). For a given signal-to-noise ratio $SN$, event type $x$, and fixed spectral shape, we can solve for $F$ to determine the minimum flux $F_{SN}^{x}$ that would produce a detection.
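Solving for the minimum flux amounts to inverting the signal-to-noise relation, which reduces to a quadratic in $S$. The sketch below illustrates the inversion only; the effective area and background rate used in the usage check are placeholder values, not the simulated AMEGO-X response.

```python
import math

def min_flux(sn, a_eff, bkg_rate, t_obs):
    """Minimum flux F (ph cm^-2 s^-1) such that S / sqrt(S + B) = sn,
    with S = F * a_eff * t_obs and B = bkg_rate * t_obs."""
    b = bkg_rate * t_obs
    # S^2 - sn^2 * S - sn^2 * B = 0  ->  keep the positive root.
    s = 0.5 * (sn ** 2 + math.sqrt(sn ** 4 + 4.0 * sn ** 2 * b))
    return s / (a_eff * t_obs)

def snr(flux, a_eff, bkg_rate, t_obs):
    """Signal-to-noise ratio for a given flux (same definitions as above)."""
    s = flux * a_eff * t_obs
    b = bkg_rate * t_obs
    return s / math.sqrt(s + b)
```

Re-evaluating the SN at the returned flux should reproduce the chosen threshold exactly, which is a convenient self-consistency check.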
We estimate the sensitivity for a fiducial 1 s burst at 30$^{\circ}$ off-axis as $F_{6.5}^{SSE}$ $\sim0.5 \ \mathrm{ph \ cm^{-2} \ s^{-1}}$ using only SSE between 25 keV and 100 keV. In Section \ref{sec:science} we discuss in detail the sensitivity for various realistic models of GRBs and magnetars.
The gain in sensitivity by including SSE events in a given analysis depends on the spectral characteristics of the source. Figure \ref{fig:sensi_ratio_2D} shows the ratio $F_{6.5}^{SSE}/F_{6.5}^{UC}$ between SSE and UC sensitivity for bursts with characteristic Comptonized energy spectra modeled as
\begin{equation}
\frac{dN}{dE} \propto \left( \frac{E}{E_0}\right)^{\gamma} \exp\left( - \frac{E \left( \gamma + 2 \right)}{E_{peak}}\right), \,
\label{eq:Comptonize_spec}
\end{equation}
where $E_{peak}$ is the peak energy, $\gamma$ is the low-energy spectral index and $E_0$ is an arbitrary pivot energy. In general, the analysis of sources with either $E_{peak} \lesssim 300$ keV or $\gamma \lesssim -1.15$, as well as some other parameter combinations, benefits significantly from the inclusion of SSE. Observe that the Comptonized form in Eq.~\ref{eq:Comptonize_spec} is well suited to describing many GRB and magnetar burst spectra.
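The Comptonized model of Eq.~\ref{eq:Comptonize_spec} is straightforward to evaluate, and a useful consistency check is that the $\nu F_\nu$ spectrum, $E^2\,dN/dE$, peaks at $E_{peak}$ (for $\gamma > -2$). A minimal sketch with arbitrary normalization and an assumed pivot energy of 100 keV:

```python
import math

def comptonized(e, e_peak, gamma, e0=100.0):
    """Unnormalized dN/dE for the Comptonized (cutoff power-law) model."""
    return (e / e0) ** gamma * math.exp(-e * (gamma + 2.0) / e_peak)

def nu_f_nu_peak(e_peak, gamma, e_lo=1.0, e_hi=10000.0, n=20000):
    """Locate the maximum of E^2 dN/dE on a log grid; should be ~E_peak."""
    best_e, best_v = e_lo, -1.0
    for i in range(n + 1):
        e = e_lo * (e_hi / e_lo) ** (i / n)
        v = e * e * comptonized(e, e_peak, gamma)
        if v > best_v:
            best_e, best_v = e, v
    return best_e
```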
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/sensi_2D_1.0s_nofill2.pdf}
\caption{Ratio of the integral flux sensitivity (minimum flux between 10 keV and 10 MeV needed for a 6.5$\sigma$ detection in 1 s) achieved by SSE alone to the sensitivity achieved in UC events alone, as a function of the spectral parameters: peak energy $E_{peak}$ and low-energy spectral index $\gamma$. We assume a Comptonized spectral model, described in the text. A ratio of $\leq$1 implies that the addition of SSE significantly enhances the sensitivity of the instrument. SSE dominate the sensitivity for peak energies below hundreds of keV and for soft spectral indices. The purple star represents the ``nominal'' population of short GRBs considered in Section \ref{sec:grbs}, while the green square shows the case of GRB 170817A. The yellow upward pointing triangles mark the main peak emission of magnetar giant flares (discussed in Sec.~\ref{sec:mgf}), while the downward pointing triangle represents the SGR1935+2154 burst associated with an FRB reported in 2020 \citep{2020ApJ...898L..29M,2021NatAs...5..378L}.}
\label{fig:sensi_ratio_2D}
\end{figure}
\input{sections/localization}
\subsection{Localization}
AMEGO-X is capable of reconstructing the direction of a single photon if its passage through the instrument leaves at least two energy deposits $>$25 keV, whether on two or more tracker layers, or on a single tracker layer and the calorimeter. These photons typically have an energy $>$100 keV and interact via either Compton scattering or pair production. Using this information AMEGO-X will localize a transient source (5 $\mathrm{ph \ cm^{-2} \ s^{-1}}$ between 120 keV and 1 MeV, 1 s in duration) within a $<2^\circ$ radius (90\% containment).
Single-site events, on the other hand, are primarily due to photoelectric interactions and by definition lack any track. The only information recorded is their deposited energy, time, and location in the tracker. While an event-by-event direction reconstruction is therefore impossible, we can use the aggregate information of multiple SSE to estimate the sky location of the source that produced them.
The method is similar to those employed by other count-based detectors, such as BATSE \citep{Briggs1999-BATSELoc} and \emph{Fermi}-GBM \citep{Connaughton2015-GBMLoc}, and relies on the change in relative acceptance of each detector as a function of the incoming direction. BATSE and GBM relied primarily on the different pointing of each detector --- with an approximately cosine angular response. These missions also took into account the scattering and attenuation by the different spacecraft structures, although this was not the dominant source of leverage when fitting the source location. The opposite is true for AMEGO-X. Even though all layers in the tracker point in the same direction, their mutual shadowing leads to abundant directional information, as shown in Figure \ref{fig:locdist}. A similar principle was followed by POLAR \citep{WANG2021164866}.
\begin{figure*}
\centering
\gridline{\fig{sse-locdist-top}{0.48\textwidth}{(a)}
\fig{sse-locdist-side}{0.48\textwidth}{(b)}}
\caption{Expected SSE position probability distribution from an on-axis burst (a) and one located 70$^\circ$ off-axis and along the $xz$-plane (indicated by the arrows). The insets show the projection along each axis. These distributions vary as a function of the incoming direction due to the partial attenuation by the different tracker layers and provide the leverage to estimate the source location. The null count at $x=0$ and $y=0$ is caused by the spacing between the four towers that compose the tracker.}
\label{fig:locdist}
\end{figure*}
The localization sky maps are obtained through a maximum likelihood analysis. It compares the number of detected events at various spatial coordinates in the tracker to the expectation from a source located at a given sky coordinate, and finds the best match. We construct the test statistic ---called a \textit{C}-stat estimator in other contexts \citep{1979ApJ...228..939C}---
\begin{eqnarray}
TS &=& 2\left[\max_f \left(\sum_i \log P(d_i; f) \right) - \sum_i \log P(d_i; f = 0)\right] \nonumber\\
P(d_i; f) &=& \frac{\displaystyle (b_i + e_i f)^{d_i}\, e^{-(b_i + e_i f)}}{\displaystyle d_i!} ,
\end{eqnarray}
where
\begin{itemize}
\item $P(d_i; f)$ is the Poisson probability of observing $d_i$ events given a source with a flux $f$;
\item $b_i$ is the estimated number of background events;
\item $e_i$ is the expected excess given a source spectral hypothesis, per flux unit. It is a function of the sky coordinate of the hypothetical source.
\end{itemize}
The index $i$ runs through all the different tracker locations where a hit can be found. Each pixel of each tracker layer can be considered as an individual detector. Although the tracker pixel size is 1 mm, it was sufficient to divide each of the 40 tracker layers (80 cm $\times$ 80 cm) into a $32\times32$ grid, uniformly spaced along each dimension (as in Figure \ref{fig:locdist}).
The unitary expected excess values $e_i$ are obtained from look-up tables previously filled using simulations. The $i$-th element of each table contains the differential effective area for events arriving from a given sky location. The differential effective area equals the total effective area of the instrument multiplied by the fraction of the events detected in the $i$-th tracker location. We use a HEALPix sky pixelization \citep{healpix2005} to describe the incoming direction, although other schemes are also possible.
The $TS$ value is computed for each sky location ---the flux is considered a nuisance parameter for this purpose. A sky location confidence interval is then obtained based on Wilks' theorem \citep{10.1214/aoms/1177732360} and considering two degrees of freedom ---i.e. a 90\% containment corresponds to $\Delta$TS$<4.60$ with respect to the maximum, which we confirmed for our case through simulations. This calculation was done for simulated sources injected at various sky locations, using the estimated background presented in Section \ref{sec:background}. The expected performance is shown in Figure \ref{fig:loc_uncertainty}.
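The likelihood maximization over the nuisance flux can be sketched numerically. In the toy example below, the per-pixel backgrounds $b_i$ and expected-excess values $e_i$ are invented stand-ins for the simulated look-up tables, the flux is profiled over a simple grid, and the data are a noise-free ("Asimov") realization of the true hypothesis:

```python
import math

def loglike(d, b, e, f):
    """Poisson log-likelihood up to the d_i! term, which cancels in the TS."""
    return sum(di * math.log(bi + ei * f) - (bi + ei * f)
               for di, bi, ei in zip(d, b, e))

def ts(d, b, e, fluxes):
    """TS = 2 [max_f sum_i log P(d_i; f) - sum_i log P(d_i; f = 0)]."""
    ll0 = loglike(d, b, e, 0.0)
    return 2.0 * (max(loglike(d, b, e, f) for f in fluxes) - ll0)

# Toy example: 4 "pixels", two sky hypotheses with different acceptances.
b = [10.0, 10.0, 10.0, 10.0]
e_true = [5.0, 1.0, 1.0, 5.0]   # hypothetical acceptance, true direction
e_wrong = [1.0, 5.0, 5.0, 1.0]  # hypothetical acceptance, wrong direction
f_true = 3.0
d = [bi + ei * f_true for bi, ei in zip(b, e_true)]  # noise-free data

fluxes = [0.1 * k for k in range(1, 101)]            # profiled flux grid
```

For this noise-free data set the TS is maximal at the true direction, which is the property the sky-map scan exploits.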
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{figures/loc/loc_uncertainty}
\caption{Expected localization uncertainty from SSE only for a fiducial 1 s burst as a function of the flux between 25 keV and 100 keV, and the off-axis angle. While the localization error region is not well-described by a disc in general, this represents the radius containing the equivalent solid angle area. As described in the text, systematic errors will need to be considered in addition to the statistical-only uncertainty presented here. Compton and pair events can further reduce the localization error.}
\label{fig:loc_uncertainty}
\end{figure}
In addition to the expected statistical errors shown in Figure \ref{fig:loc_uncertainty}, the main source of systematics that can contribute to the localization uncertainty consists of inaccuracies in the look-up tables used to obtain the expected excess counts as a function of the incoming direction. This can be mitigated by a meticulous calibration campaign on the ground. Other sources of systematics include the mismodeling of the source spectrum using a parametric shape, as well as possible errors in simulating the effect of photons scattered by the Earth's atmosphere \citep{Connaughton2015-GBMLoc, palit2021revisiting}. During flight, these systematic errors can be studied and mitigated by performing both a cross-calibration with other instruments and a self-calibration with higher-energy photons that do produce tracks in the detector.
\section{Introduction}
The authentication of public messages is a fundamental problem
nowadays for bipartite and network communications.
The scenario is the following: Alice sends a (classical)
message to Bob through a public channel, together with an
authentication tag through a private or public channel. The tag will allow
Bob to verify if the message he received via the public channel
has been tampered with or if it is indeed the authentic message,
originally sent by Alice. A third character, Eve, wants to
sabotage this scheme by intercepting Alice's message and sending
her own message to Bob, together with a false tag which will
convince Bob he is receiving the authentic message. For instance,
one could imagine that Alice is sending to Bob her bank account
number, to which Bob will transfer some money, and Eve wants to
interfere in the communication in such a way that Bob will receive
her bank account number believing it is Alice's one, thus giving
his money to Eve. The use of authentication tags allows one to
separate the secrecy problem in message transmission from the authentication problem,
and it is useful even if a secure communication channel is
available~\cite{WeCa:81}.
In 1983, G. Brassard proposed a computationally secure scheme of
classical authentication tags based on the sharing of short secret
keys~\cite{Bras:83}. Brassard's scheme is itself an
improvement of the Wegman-Carter protocol~\cite{WeCa:81}. Brassard showed
that a relatively short seed of a PRG can be used as a secret key shared
between Alice and Bob which will allow the exchange of computationally secure
authentication tags. This method yields a much more practical protocol, where
the requirements on the seed length grow reasonably with the number of messages
we want to authenticate, as opposed to the Wegman-Carter proposal.
The security of PRGs is based on the alleged hardness of some problems
of number theory, e.g., the factorization of a large number with classical
computers. However, several of these problems are provably solvable if
quantum computers are available.
Consequently, the security of these PRGs might be compromised.
Assuming Alice and Bob can communicate through a quantum channel, can Eve still threaten
the security of the PRG? This question is the main motivation for this article.
In this work, we extend Brassard's protocol to include quantum-encoded
authentication tags, which we prove will offer, under certain conditions,
information-theoretical security for the authentication of classical messages.
We observe that our scheme can authenticate the quantum channel itself, which
is an important part of the quantum cryptography: in fact, it is the crucial first step of quantum key distribution
protocols.
\section{\label{sec:preliminares} Preliminaries}
In this section we set up basic notation, briefly review the description
of Brassard's protocol and describe our new proposal. We conclude the section
with a negative result on the robustness of an attackable PRG when its output is
hidden by a specific quantum coding.
We denote by $\mathcal{M} $ the set of messages and by $\mathcal{T} $ the set of tags,
where $\log |\mathcal{M} | \gg \log |\mathcal{T}| $.
As hash functions are an important ingredient for all protocols described here
we start by presenting their formal definition~\cite{larsson08.1}:
\begin{definition}[$\varepsilon-\textrm{almost}$ strongly universal-2 hash functions]
\ \ Let $\mathcal{M} $ and $\mathcal{T} $ be finite sets and call functions
from $\mathcal{M} $ to $\mathcal{T} $ \textit{hash functions}. Let $\varepsilon $ be a
positive real number. A set $\mathcal{H} $ of hash functions is $\varepsilon-$almost strongly
universal-2 if the following two conditions are satisfied
\begin{itemize}
\item[1) ] The number of hash functions in $\mathcal{H} $ that take an arbitrary $m\in
\mathcal{M} $ to an arbitrary $t \in \mathcal{T} $ is exactly $|\mathcal{H}| /|\mathcal{T}|.$
\item[2) ] The fraction of those functions that also take an arbitrary $m^{\prime} \neq m $ in
$\mathcal{M} $ to an arbitrary $t^{\prime} \in \mathcal{T} $ (possibly equal to $t$) is no
more than $\varepsilon .$
\end{itemize}
\end{definition}
The number $\varepsilon $ is related to the probability of guessing the correct tag for an
arbitrary message. Notice that the smaller $\varepsilon$ is, the
larger is $|\mathcal{H} | $.
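For intuition, the affine family $h_{a,b}(m) = am + b \bmod p$ over a prime field, with $\mathcal{M} = \mathcal{T} = \mathbb{Z}_p$, is strongly universal-2 with $\varepsilon = 1/p$; practical constructions additionally compress long messages, which this toy sketch ignores. Both defining conditions can be verified by brute force for a small prime:

```python
P = 7  # small prime so both conditions can be checked exhaustively

def h(a, b, m):
    """Affine hash h_{a,b}(m) = a*m + b mod P."""
    return (a * m + b) % P

family = [(a, b) for a in range(P) for b in range(P)]  # |H| = p^2

def count_hitting(m, t):
    """Number of functions in the family taking m to t (condition 1)."""
    return sum(1 for a, b in family if h(a, b, m) == t)

def fraction_also_hitting(m, t, m2, t2):
    """Among functions taking m to t, fraction also taking m2 to t2 (condition 2)."""
    hits = [(a, b) for a, b in family if h(a, b, m) == t]
    return sum(1 for a, b in hits if h(a, b, m2) == t2) / len(hits)
```

Condition 1 gives exactly $|\mathcal{H}|/|\mathcal{T}| = p$ functions per $(m,t)$ pair, and condition 2 gives a fraction of exactly $1/p$, so here $\varepsilon = 1/p$.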
\begin{figure}[htbp]
\centerline{\epsfig{file=qauthbrassard.eps,scale=0.5}}
\caption{Brassard's classical authentication protocol~\cite{Bras:83} }
\label{f:qauthbrassard}
\end{figure}
For additional details on universal-2 functions we point the reader to~\cite{WeCa:81}.
Brassard's protocol (see Figure~\ref{f:qauthbrassard}) makes use of two
secret keys. The first one, $U^{(l)},$
specifies a fixed universal-2 hash function $h \in {\cal H},$ where
$l = \lceil \log |\mathcal{H} | \rceil .$
The second specifies the seed $X^{(n)}\in \mathbb{Z}_2^n $ of a PRG,
a sequence of $n$ bits.
The main ingredient of the first quantum-enhanced protocol proposed here
(see Figure~\ref{f:qauthproposed}) is replacing the classical \texttt{XOR} gate of
Brassard's protocol by a quantum encoder
similar to that used in the BB84 protocol~\cite{BB84:84}. After some
developments, we shall verify that the key $U^{(l)} $ is no longer necessary.
Assume that Alice and Bob agree on two orthonormal bases $B_0$ and $B_1$ for the
2-dimensional Hilbert space,
$$
B_0 = \left\{\qu{0},\qu{1} \right\} \ \ \text{ and }
B_1 = \left\{\qu{+} = \frac{1}{\sqrt{2}} (\qu{0}+\qu{1}),
\qu{-} = \frac{1}{\sqrt{2}} (\qu{0}-\qu{1}) \right\}
$$
These bases will be used to prepare four quantum states. We shall refer
to this preparation process as \textit{quantum coding}. For each bit of the
$ k = \lceil \log |\mathcal{T} | \rceil $-bit-long tag $ T_Y = h(Y) $,
Alice prepares a quantum state $\qu{\psi} = \qu{\psi }\left(X_i, (T_Y)_i\right) $
determined by
the bit $X_i$ from the PRG and the corresponding bit $(T_Y)_i $ of the 2-radix representation
of the tag $T_Y. $ Then, if the bit $X_i = 0, $ Alice prepares $\qu{\psi}$ using
basis $B_0$, such that
\begin{equation} \label{eq:B0}
\qu{\psi}=
\begin{cases}
\qu{0} & \quad \text{ if } (T_Y)_i = 0 \\
\qu{1} & \quad \text{if } (T_Y)_i = 1.
\end{cases}
\end{equation}
Similarly, if the bit $X_i = 1 $, Alice prepares $\qu{\psi}$ using basis $B_1$,
such that
\begin{equation} \label{eq:B1}
\qu{\psi}=
\begin{cases}
\qu{+} & \quad \text{ if } (T_Y)_i = 0 \\
\qu{-} & \quad \text{ if } (T_Y)_i = 1
\end{cases}
\end{equation}
After generating the qubits, Alice sends the separable state
$\qu{\psi_{Y}}^{\otimes k}$ to Bob through a noiseless quantum channel and the message
$Y$ through an unauthenticated classical channel. At the reception, Bob performs measurements
to obtain a sequence of $k$ bits from the quantum-encoded version of $h(Y).$ Bob measures the
$i$-th received qubit in the basis $ B_0$ or $B_1$, depending on whether the
$i$-th bit of $X$ is 0 or 1, respectively, recovering a $k$-bit-long string $T^{\prime}$.
Because the quantum channel is assumed to be perfect, Bob recognizes that the message is
authentic if $T^{\prime} = h(Y_B)$, where $Y_B$ is the message received from the classical
channel. Otherwise, Bob assumes that Eve tried to send him an unauthentic message. This concludes the authentication protocol for one message. Throughout this article it is always assumed that the above
coding rule is public.
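The honest run of the protocol can be simulated classically, since each qubit is prepared and measured in one of two fixed bases. In the sketch below a qubit is represented by its (basis, bit) pair, measuring in the wrong basis yields a uniformly random outcome as in BB84, and all function names are illustrative:

```python
import random

def encode(tag_bits, x_bits):
    """Alice: qubit i carries tag bit i, prepared in basis B_{x_i}."""
    return [(x, t) for x, t in zip(x_bits, tag_bits)]

def measure(qubit, basis, rng):
    """Same basis: deterministic outcome; wrong basis: 50/50 outcome."""
    enc_basis, bit = qubit
    if basis == enc_basis:
        return bit
    return rng.randrange(2)

def bob_decode(qubits, x_bits, rng):
    """Bob measures qubit i in basis B_{x_i}, set by the shared PRG output."""
    return [measure(q, x, rng) for q, x in zip(qubits, x_bits)]
```

With the shared pseudo-random string $X$, Bob's measured string equals the tag bit-for-bit on a noiseless channel, as required for verification.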
Even though we assume a noise-free quantum channel, we observe that if the
quantum channel is noisy, the only piece of information requiring
error-protecting coding is the block of bits $(T_Y)_i $ of the tag $T_Y $.
The sequence of bases to be prepared by Alice and Bob is known a priori, determined
locally by the sequence of bits from the PRG.
A future task is to evaluate the effect of applying error-correcting codes
to the bits of $T_Y$.
\begin{figure}[htbp]
\centerline{\epsfig{file=qauthproposed.eps,scale=0.5}}
\caption{First proposal of quantum-enhanced authentication scheme }
\label{f:qauthproposed}
\end{figure}
As a caveat regarding collective attacks, we notice that our analysis allows
Eve to perform general procedures (suggested in Figure~\ref{f:qauthproposed} by the block
labeled POVM) without being detected. Our results are robust even under such a powerful and
arguably unrealistic assumption about the attacker. Note that our quantum scheme aims at
minimizing the key length for one-way transmission; another example of such an approach is
given in~\cite{Damgaard:04}. Next we focus on crucial aspects of PRGs.
\subsection*{Weak pseudo-random generators }
Clearly, it is important to understand how secure the authentication code described
above is. As we shall see, the security of the authentication code is deeply related
to the quality of the pseudo-random generator. The quality of a pseudo-random generator
is evaluated by the hardness of discriminating its pseudo-random output sequence from
a truly random sequence, or by the hardness of finding its seed. The first criterion
relates to the PRG's robustness against \textit{distinguishing attacks}; the
second relates to the so-called \textit{state recovery attacks}. In~\cite{sidok05.1}
it is shown that state recovery attacks are a subclass of distinguishing attacks.
As a matter of fact, if the pseudo-random generator can be attacked by a quantum computer,
so can the authentication code. To establish this result we
refer to Figure~\ref{f:qauth3}, which describes a simple scheme to assist us in the proof.
In this scheme, we simply allow Eve to compare a sequence $\{Y_i \} $ of classical bits
with the corresponding sequence $\{ Z_i \} $ obtained from the measurement apparatus
POVM.
Recall that a pseudo-random generator is a polynomial-time family
of functions $G=\{G_n:\mathbb{Z}_2^n\times\ensuremath{\mathbb{N}}\to \mathbb{Z}_2\}_{n\in \ensuremath{\mathbb{N}}}$ where
$\mathbb{Z}_2$ is the set $\{0,1\}$ and $G_n$ is the pseudo-generator for seeds with size
$n$, that is, $G_n(X^{(n)}, i)$ returns the $i$-th bit generated from $n$ bits long seed
$X^{(n)}$. Pseudo-random generators are expected to fulfill an indistinguishability property
that we will not detail here for the sake of simplicity (more details in~\cite{goldreich99}).
In the following definition we write
$
X^{p(n)} = \left( G(X^{(n)},i_1), G(X^{(n)}, i_2), \ldots , G(X^{(n)}, i_{p(n)}) \right)
$
to denote a subsequence of $p(n)$ (not necessarily contiguous) bits generated
by $G.$
\begin{definition}\em We say that a pseudo-random generator $G$ is
{\em attackable in (quan\-tum/proba\-bi\-listic) polynomial time} if there exists a
(quantum/probabilistic) polynomial time algorithm $P$ and polynomial $p$ such that if
$P$ is fed with a subsequence of $p(n)$ (not necessarily contiguous) generated bits
$X^{p(n)}$ of $G$ we have that:
\[ H(X^{(n)}|P(X^{p(n)}))\in O(2^{-n}).\]
\end{definition}
For a pseudo-random generator to be attackable, there must exist an algorithm (quantum
or probabilistic) that receives a subsequence of $p(n)$ generated bits (not necessarily
contiguous) and is able to compute the seed up to a negligible uncertainty. We observe
that the security/randomness of a pseudo-random generator cannot be grounded in the
requirement that the attack only works on a contiguous subsequence of generated bits,
since the generator could always hide some bits if the attack
required this type of sequence.
A simple example of pseudo-random generators that are attackable in polynomial time
is given by the generators based on linear congruences~\cite{sidok05.1}.
\begin{figure}[htbp]
\centerline{\epsfig{file=qauth3.eps,scale=0.5}}
\caption{Auxiliary scheme for Theorem~\ref{thm:negativo} proof }
\label{f:qauth3}
\end{figure}
\begin{theorem}\label{thm:negativo}\em If a pseudo-random generator $G$ is
{\em attackable in (quantum/proba\-bilistic) polynomial time} then the scheme presented
in Figure~\ref{f:qauth3} is not secure in polynomial-time for a quantum adversary that
has access to $Y = \{ Y_i\} $.
\end{theorem}
\begin{proof} Since $G$ is attackable there exists a quantum polynomial time algorithm
$P$ and a polynomial $p$ such that if $P$ is fed with $p(n)$ bits of the string $X$
generated by $G$ then $P$ computes (up to negligible uncertainty) the seed $X^{(n)}$ of
$G$. So it is enough to show that Eve, upon capturing the qubits generated by
$\texttt{QC}$, is able to recover (with non-negligible probability) $p(n)$ bits of $X$.
Indeed, assume that Eve has captured $8 p(n)$ qubits $\qu{\psi}_i,\ i=1,\dots, 8 p(n)$, and
has measured each of them in a random basis (that is, either the computational or the
diagonal basis). Eve can now verify whether $Z_i = Y_i . $ If this is the case, Eve does
not know whether the basis chosen to encode the bit $Y_i$ was the one she measured in, or
whether she obtained the correct bit with probability $\frac{1}{2}$ because the encoding
used the other basis. However, if the outcomes differ (that is, $Y_i \neq Z_i $), then she
knows that the encoding basis for the $i$-th bit is the one she did not measure in, because
no mismatch would be possible had the encoding been performed in the same basis. In the
latter case, she knows that $X_i $ is either $0$ or $1$, depending on whether she measured
in the diagonal or the computational basis, respectively. Moreover, this happens
with probability $1/4$. So the probability of Eve not obtaining $p(n)$ elements of $X$
by measuring $8p(n)$ qubits is given by the cumulative distribution function of a binomial
distribution with success probability $1/4$, $8p(n)$ trials, and at most $p(n)$ successes.
By Hoeffding's inequality this probability is upper-bounded by
$\exp\left(-2\,\frac{(8p(n)/4-p(n))^2}{8p(n)}\right)=\exp(-p(n)/4)$, which decreases
exponentially with $p(n)$; in other words, Eve obtains $p(n)$ bits of $X$ from $8p(n)$
qubit measurements except with exponentially small probability. Since $G$
is attackable from the knowledge of $p(n)$ bits of $X$, Eve is able to recover the seed up
to a negligible failure probability. \hfill$ \square $
\end{proof}
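The basis-inference step in the proof can be checked numerically. Using the same toy (basis, bit) representation of a qubit as above, redefined here so the block is self-contained, Eve measures in random bases and keeps only the positions where her outcome disagrees with the public bit; those positions reveal $X_i$ with certainty:

```python
import random

rng = random.Random(1)
n_qubits = 8000
x = [rng.randrange(2) for _ in range(n_qubits)]  # PRG output (encoding bases)
y = [rng.randrange(2) for _ in range(n_qubits)]  # public bits carried by qubits

def eve_measure(enc_basis, bit, meas_basis):
    """Correct basis: deterministic bit; wrong basis: random outcome."""
    return bit if meas_basis == enc_basis else rng.randrange(2)

learned = {}
for i in range(n_qubits):
    basis_e = rng.randrange(2)
    z = eve_measure(x[i], y[i], basis_e)
    if z != y[i]:
        # A mismatch is only possible in the wrong basis,
        # so the encoding basis X_i must be the other one.
        learned[i] = 1 - basis_e
```

Every inferred bit is correct, and roughly one quarter of the positions are learned, matching the $1/4$ probability used in the proof.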
\begin{corollary}
\label{corollary:1}\em
If a pseudo-random generator $G$ is attackable then the scheme presented in
Figure~\ref{f:qauthproposed} is not secure in polynomial-time for a quantum
adversary that has access to hash function $h.$
\end{corollary}
\begin{proof}
Eve is able to calculate $h(Y)$ from $Y$, which is public. Therefore she can
apply Theorem~\ref{thm:negativo} by observing a number $N$ of tags such that
$N \log|\mathcal{T} | \geq 8p(n) .$ \hfill $\square $
\end{proof}
Although Theorem~\ref{thm:negativo} shows that the quantum coding of
Figure~\ref{f:qauth3} is asymptotically no better than the classical coding (where
we simply replace the quantum coder $\texttt{QC}$ by an $\texttt{XOR} $ gate), it seems
harder to attack the quantum scheme. We will now show that this is true for the
simple case where the encoder is fed by an independent and identically distributed
(i.i.d.) Bernoulli sequence. The following example illustrates this point even for
a very simple generator.
\begin{example}[State Recovery Attack for Linear Congruential Generator(LCG)] \em
Let $A$ be a positive integer and $\mathbb{Z}_A $
the set of integers modulo $A.$ The seed of the LCG is the vector
$X^{(n)} = (A, s_0, a, b )$, where $s_0, a, b \in \mathbb{Z}_A $. The length of the
seed is $n = 4\lceil \log A \rceil $.
A binary pseudorandom sequence with length $N\times \lceil \log A \rceil $ bits is
obtained from the 2-radix expansion of the sequence
$\mathbf{s} = \{ s_1,\ s_2, \ldots , s_N \} $ created by the following recursion:
\begin{equation}\label{e:weakLCG}
s_i = a s_{i-1} + b \mod A, \ \ i=1,2, \ldots , N.
\end{equation}
It is well known (see~\cite{sidok05.1}) that for all $i, i=1,2, \ldots, N-3$, the numbers
\[
\delta_i = \mathrm{det} \left[\begin{array}{lll} s_i & s_{i+1} & 1 \\
s_{i+1} & s_{i+2} & 1 \\ s_{i+2} & s_{i+3} & 1
\end{array}\right]
\]
are multiples of $A$. As a consequence, the greatest common divisor (GCD) of a few
$\delta_i$'s gives the value of $A$. The rest of the seed, that is, $a, b $ and $s_0 $,
follow then from a system of linear equations. In practice five values of $\delta_i $
are enough.
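To make the attack concrete, the sketch below carries it out for a toy LCG (the modulus, multiplier, increment, and seed values are illustrative; recovering $a$ and $b$ from two consecutive difference equations modulo $A$ is one standard way to finish the attack):

```python
from math import gcd
from functools import reduce

def lcg(A, s0, a, b, N):
    """Generate s_1, ..., s_N from the recursion s_i = a*s_{i-1} + b mod A."""
    s, seq = s0, []
    for _ in range(N):
        s = (a * s + b) % A
        seq.append(s)
    return seq

def delta(s, i):
    """Determinant of [[s_i, s_{i+1}, 1], [s_{i+1}, s_{i+2}, 1], [s_{i+2}, s_{i+3}, 1]]."""
    return (s[i] * (s[i + 2] - s[i + 3])
            - s[i + 1] * (s[i + 1] - s[i + 2])
            + s[i + 1] * s[i + 3] - s[i + 2] ** 2)

def recover_seed(seq):
    # every delta_i is a multiple of A, so the GCD of a few of them reveals A
    A = reduce(gcd, (abs(delta(seq, i)) for i in range(5)))
    # a and b then follow from consecutive difference equations mod A:
    # s_{i+1} - s_{i+2} = a * (s_i - s_{i+1})  and  s_{i+1} = a*s_i + b
    a = (seq[2] - seq[3]) * pow(seq[1] - seq[2], -1, A) % A
    b = (seq[2] - a * seq[1]) % A
    return A, a, b

seq = lcg(A=101, s0=5, a=7, b=3, N=10)
print(recover_seed(seq))  # -> (101, 7, 3)
```

The three-argument `pow` with exponent `-1` (Python 3.8+) computes the modular inverse needed to solve for $a$.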
Figure~\ref{f:cqbits}(right) displays a simplified version of the scheme shown in
Figure~\ref{f:qauth3}, where $X$ stands for the pseudo-random sequence from the output of
the PRG. The left side of Figure~\ref{f:cqbits} displays the situation when a
gate \texttt{XOR} is utilized. We notice that the state recovery attack is applicable
without change to the \texttt{XOR}-based scheme. It is enough to compute
$X = Z \oplus Y $ before applying the algorithm.
In contrast, for the quantum scheme, Eve is subject to an irreducible uncertainty
about the $X$ values due to quantum coding. In particular, if she employs the procedure
described in the proof of Theorem~\ref{thm:negativo}, only one fourth
of the $X$'s are expected to be correct. The problem from Eve's point of view is then how to recover the
seed from a degraded version of the algorithm input $X$.
\end{example}
\section{\label{sec:comparing} Comparing \texttt{XOR} with quantum coding}
In the last section we considered the problem of the state recovery attack and defined
the weakness of a PRG. In this section we make a rigorous comparison between the \texttt{XOR} and the quantum coding performance using information-theoretic measures.
\begin{figure}[htbp]
\centerline{\epsfig{file=cqbits.eps,scale=0.5}}
\caption{\texttt{XOR} (left) and quantum coding (right) }\label{f:cqbits}
\end{figure}
To this end consider Figure~\ref{f:cqbits} where both classical and quantum encodings are displayed. The
$\texttt{QC}$ denotes the
quantum encoder defined before, in \eqref{eq:B0} and \eqref{eq:B1}, where $X$ is the variable that sets the basis.
The block POVM stands for the measurement apparatus defined by the positive operator-valued measure
$$Z=\{E_m(Y)\}_{m\in O}$$ where $O$ is the set of outcomes. Observe that the measurement may depend on
the message $Y$, which is public.
The goal of Eve is to maximize the knowledge of $X$, that is, minimize the entropy $H(X|Y,Z)$.
We consider the classical and quantum scheme presented in Figure~\ref{f:cqbits} in two ways:
Firstly, we will assume that $X$ is a sequence of fair and independent Bernoulli random variables,
that is, the PRG describing $X$ is perfect. Secondly, we consider a biased PRG (unfair) to describe $X$ and introduce blocks of random variables into the analysis.
\subsection*{Fair input, single-bit block}
We start with the simple case of a single-bit block, where $X\sim\textrm{Ber}\left(\frac{1}{2}\right)$. In the classical \texttt{XOR} encoding case we have $Z=X\oplus Y$ and thus $H(X|Y,Z)=0$, so Eve has no doubt about $X$.
In the quantum encoding case, the Holevo bound states that
\begin{equation}\label{ineq:1}
I(X;Z|Y)\leq S(\rho(Y)) -\sum_{i=0}^1\frac{1}{2}S(\qu{\phi_i(Y)}\dqu{\phi_i(Y)})
\end{equation}
where $\rho(Y)$ is the density operator describing the encoding by $\textrm{QC}$, that is,
\begin{equation}
\rho(Y)= \frac{1}{2}\qu{\phi_0(Y)}\dqu{\phi_0(Y)} + \frac{1}{2}\qu{\phi_1(Y)}\dqu{\phi_1(Y)},
\end{equation}
where $\qu{\phi_0(0)} = \qu{0}$, $\qu{\phi_1(0)} = \qu{+}$,
$\qu{\phi_0(1)} = \qu{1}$ and $\qu{\phi_1(1)}=\qu{-}$.
We shall need a well known property of the von Neumann entropy~ (see \cite{NC:2000} for more details).
\begin{proposition}\label{zeroS}\em
Let $\mathbf{\rho} $ be a quantum state and $S(\mathbf{\rho} ) $ its entropy, then
$ S(\mathbf{\rho} ) \geq 0 $, and the
equality holds iff $\rho $ is a pure state.
\end{proposition}
Thus, thanks to Proposition~\ref{zeroS} we can simplify \eqref{ineq:1} to
\begin{equation}\label{ineq:2}
I(X;Z|Y)\leq S(\rho(Y)).
\end{equation}
Moreover, one can easily compute the von Neumann entropy $S(\rho(Y))=S(\rho(0))=S(\rho(1))$, which is
\begin{eqnarray}\label{e:xyztheta1}
S^{\ast}=S(\rho(Y))
& = & -2 \cos^2\left(\frac{\pi }{8}\right) \log\left(\cos\left(\frac{\pi }{8}\right)\right)-2 \log\left(\sin\left(\frac{\pi }{8}\right)\right) \sin^2\left(\frac{\pi }{8}\right).
\end{eqnarray}
And so, since $H(X|Z,Y)=H(X|Y)-I(X;Z|Y)$ and $H(X|Y)=1$, the minimum uncertainty that Eve may attain about
$X$ is given by
\begin{eqnarray}\label{e:xyztheta}
H\left(X| Y,Z \right)
& = & 1-S(\rho(Y)).
\end{eqnarray}
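As a sanity check, these values can be reproduced numerically (a short sketch, not part of the paper's derivation; the state $\rho(0)=\frac{1}{2}\qu{0}\dqu{0}+\frac{1}{2}\qu{+}\dqu{+}$ is diagonalized directly, and its eigenvalues are $\cos^2(\pi/8)$ and $\sin^2(\pi/8)$):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)          # |+>

# rho(0) = 1/2 |0><0| + 1/2 |+><+|
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)
lam = np.linalg.eigvalsh(rho)                     # sin^2(pi/8), cos^2(pi/8)
S_star = float(-sum(l * np.log2(l) for l in lam if l > 1e-12))

# closed form of Eq. (e:xyztheta1)
c, s = np.cos(np.pi / 8), np.sin(np.pi / 8)
S_closed = float(-2 * c**2 * np.log2(c) - 2 * s**2 * np.log2(s))

print(round(S_star, 3), round(1 - S_star, 3))     # -> 0.601 0.399
```

So Eve's residual uncertainty about a single fair bit is roughly $0.4$ bits.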
The Holevo bound can be achieved by a simple von Neumann measurement~\cite[p.~421]{Paris:04} described
by the Hermitian
\begin{equation}
A=\mathbf{0}\qu{\psi_\theta}\dqu{\psi_\theta} + \mathbf{1}\qu{\psi^\bot_\theta}
\dqu{\psi^\bot_\theta}
\end{equation}
with $\psi_\theta=\cos(\theta)\qu{0}+\sin(\theta)\qu{1}$,
$\psi^\bot_\theta=\sin(\theta)\qu{0}-\cos(\theta)\qu{1}$ and
$\theta=-\frac{\pi}{8}$.
\subsection*{Fair input $k$-blocks}
First consider the classical setup: then $H(X^k| Y^k, Z^k ) = 0$, since the block $X^{k}$ is
completely determined from the knowledge of $Y^k$ and $Z^k$.
For the quantum setup, the subsystem that Eve owns is
described by
\begin{equation}\label{fairk}
\rho_{Y^k} = \bigotimes_{i=1}^k \left(\frac{1}{2}\qu{\phi_0(Y_i)}\dqu{\phi_0(Y_i)} + \frac{1}{2}\qu{\phi_1(Y_i)}\dqu{\phi_1(Y_i)}\right).
\end{equation}
By the Holevo bound we get that
\begin{equation}\label{e:hs2}
H\left( X^k|Y^k, Z^k \right) \geq H(X^k)- S(\rho_{Y^k} ).
\end{equation}
\begin{example}\em Table~\ref{tab:t1} illustrates the scenario for $k=2.$ Rows are indexed by the
four possible values of $Y^2$ and columns are indexed by the bases
corresponding to the four values of $X^2.$ Notice that Eve is not able
to distinguish which column is being used. Then, her uncertainty is lower bounded by the
von Neumann entropy of the quantum system formed by the states listed in
the row indexed by the value of $Y^2$ that she can access.
\begin{table}[hbtp]
\caption{Encoding for blocks of length 2 \label{tab:t1}}
\begin{center}
\begin{tabular}{l|l|l|l|l}
\hline \hline
& \multicolumn{4}{c}{Bases } \\ \cline{2-5}
\multicolumn{1}{c|}{$Y^2$} & $B_0B_0$ & $B_0B_1$ & $B_1B_0$ & $B_1B_1$ \\ \hline
$00$ & $\qu{00} $ & $ \qu{0+}$ & $ \qu{+0} $ & $ \qu{++} $ \\ \hline
$01$ & $\qu{01} $ & $ \qu{0-} $ & $ \qu{+1} $ & $ \qu{+-} $ \\ \hline
$10$ & $\qu{10} $ & $ \qu{1+} $ & $ \qu{-0} $ & $ \qu{-+} $ \\ \hline
$11$ & $\qu{11} $ & $ \qu{1-} $ & $ \qu{-1} $ & $ \qu{--} $ \\ \hline
\end{tabular}
\end{center}
\end{table}
\end{example}
Recall the following property concerning the von Neumann entropy.
\begin{proposition}\label{2prop}\em Let $\rho$ and $\sigma$ be quantum states, then
$S\left( \mathbf{\rho} \otimes \mathbf{\sigma} \right) = S(\mathbf{\rho} )
+ S(\mathbf{\sigma} ).$
\end{proposition}
As a consequence of Equation \eqref{fairk} and Proposition~\ref{2prop}, for a sequence of fair Bernoullis we have
\begin{equation}\label{e:somasmax}
S(\rho_{Y^k} ) = kS^{\ast},
\end{equation}
where $S^{\ast}$ is given by \eqref{e:xyztheta1}. So we have that
\begin{equation}\label{e:hs2b}
H\left( X^k|Y^k, Z^k \right) \geq k - kS^{\ast}.
\end{equation}
Again, the equality can be achieved by a simple von Neumann measurement, namely the one defined by $A^{\otimes k}$. This is the best scenario one can imagine to defeat Eve. However, for the protocol to be practical, the $X$'s should be generated by a PRG, which is the case we examine next.
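The additivity used in \eqref{e:somasmax} is easy to check numerically for $k=2$ by building the product state of \eqref{fairk} with a Kronecker product (a quick sketch; any value of $Y^2$ yields the same spectrum, so only one case is shown):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    return float(-sum(l * np.log2(l) for l in lam if l > 1e-12))

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)

rho1 = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)  # one fair symbol
rho2 = np.kron(rho1, rho1)                                       # fair 2-block, Eq. (fairk)

print(round(vn_entropy(rho2), 3))  # -> 1.202, i.e. 2 * S*
```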
\subsection*{Unfair input $k$-blocks}
The results above were obtained assuming that $\{ X_i \} $
is a sequence of i.i.d.\ fair Bernoulli random variables. In this section we study the general case,
with the purpose of clarifying how the use of a real PRG affects the uncertainty about $X$.
Consider $k$-length blocks $X^{k}, Y^{k}$ and $Z^{k}$, where
$X^{k} = X_{i+1}, X_{i+2}, \ldots , X_{i + k} $ is a
contiguous subsequence of $ \{X_i\} $, and similarly for $Y^{k} $ and $Z^{k}$.
Note that, to ease notation, we omit the index $i$ when defining $X^k$. However, it is
crucial to remark that the probability distribution of $X^k$ is, in general, dependent
on $i$. As a matter of fact, $\mathbf{p}_{X^k} = \left( p_0, p_1,\ldots , p_{2^k-1} \right) $
can even degenerate to a distribution with a single component equal to 1, depending on the
robustness of the PRG. We simplify the notation by denoting $\mathbf{p}_{X^k}$ by
$\mathbf{p} $.
Concerning the unfairness of $\{X_i\}$, the best strategy for Eve to extract information about
$X^k$ is to prepare a measurement (POVM) over all $k$ qubits sent, given that she knows $Y^k$. Again, the Holevo bound gives us
\begin{equation}\label{e:hs4}
H\left( X^k|Y^k, Z^k \right) \geq H(X^k)- S(\rho_{Y^k} ) =
H(X^k)-H(\lambda )
\end{equation}
where $\lambda=(\lambda_1\dots\lambda_{2^k})$ is the spectrum of $\rho_{Y^k}$ and
\begin{equation}\label{e:rho}
\rho_{Y^k} = \sum_{j=0}^{2^k-1} p_j \qu{\phi_j}\dqu{\phi_j}
\end{equation}
where the states $\qu{\phi_j}=\otimes_{i=1}^k\qu{\phi_{j_i}(Y_i)}$ and $j_i$ is the $i$-th bit of the binary representation of $j$. Note that $\rho_{Y^k} $ is a mixture of pure states weighted by the
probabilities $p_j, \ j \in \{ 0, \ldots , 2^k-1 \}.$ Accordingly, we write
$p_j = \Pr[X^k = j] $ where $j$ is seen in its binary representation
(e.g., for $k=2, \ \ p_0 = \Pr[X^2 = 00], \ p_1 = \Pr[X^2= 01], \ldots $).
Observe that $S(\rho_{Y^k_1})=S(\rho_{Y^k_2})$ since there exists a unitary transformation $U$ such that $U\rho_{Y^k_1}U^{-1}=\rho_{Y^k_2}$.
We now establish a relationship between the probability vectors
$\mathbf{p}_{X^k} $ and the lower bound given in Equation~(\ref{e:hs4}).
Denote by $\rho$ the uniform distribution, that is,
$q_j = 1/2^k , \ j=0,\ldots , 2^k-1. $ In this section we shall verify that if
the probability distribution $\mathbf{p}$ of a $k$-length block $X^k$ from the PRG
is near enough the distribution $\rho $, then the lower bound of
\eqref{e:hs4} will remain significantly close to $k-kS^{\ast}$, which is the best one can achieve.
Let $\mathbf{\sigma}_{Y^k} $ be the density operator corresponding to a $k-$length block
$X^k$ generated by a fair Bernoulli sequence given that the $k$-length block $Y^k$ is
known, that is
\begin{equation}\label{e:sigma}
\mathbf{\sigma}_{Y^k} = \sum_{j=0}^{2^k - 1} q_j \qu{\phi_j}\dqu{\phi_j}.
\end{equation}
We now establish some results relating the von Neumann entropy to the trace
distance $D\left(\rho_{Y^k}, \mathbf{\sigma}_{Y^k} \right) $ between
$\rho_{Y^k} $ and $\mathbf{\sigma}_{Y^k}$.
Recall that the trace distance between two quantum states
$\mathbf{\rho} $ and $\mathbf{\sigma} $ is
defined by
\[
D(\mathbf{\rho}, \mathbf{\sigma} ) =
\frac{1}{2}\mathrm{tr}\left| \mathbf{\rho} - \mathbf{\sigma} \right|
\]
where $|A| = \sqrt{A^{\dagger}A} $. We shall also need the trace distance between
probability vectors, say $\mathbf{a} $ and $\mathbf{b}, $ defined by
\[
D(\mathbf{a}, \mathbf{b} ) = \frac{1}{2} \sum_{j} |a_j - b_j|.
\]
The trace distance can be used to measure how biased a probability distribution is
compared to a fair Bernoulli sampling. Given a probability distribution $\mathbf{p}$, we
call the {\em bias} of $\mathbf{p}$ the value $B(\mathbf{p})=D(\mathbf{p},\rho)$
where $\rho$ is the uniform distribution.
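Both quantities are straightforward to compute for probability vectors (a small sketch; the example distributions are arbitrary illustrations, not outputs of any particular PRG):

```python
import numpy as np

def trace_dist(a, b):
    """Trace (total-variation) distance between probability vectors."""
    return 0.5 * float(np.abs(np.asarray(a) - np.asarray(b)).sum())

def bias(p):
    """B(p) = D(p, uniform): how far a k-block distribution is from fair coin flips."""
    return trace_dist(p, np.full(len(p), 1.0 / len(p)))

print(round(bias([0.25, 0.25, 0.25, 0.25]), 6))  # -> 0.0
print(round(bias([0.4, 0.3, 0.2, 0.1]), 6))      # -> 0.2
```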
\begin{proposition}\label{prop:2}\em
Let $\varepsilon > 0 $ be an arbitrary real number. If
\begin{equation}
B\left( \mathbf{p} \right) \leq \varepsilon
\end{equation}
then
\begin{equation}
D\left(\rho_{Y^k} , \mathbf{\sigma}_{Y^k} \right) \leq \varepsilon
\end{equation}
where $\rho_{Y^k} $ is the state defined in Equation~\eqref{e:rho}.
\end{proposition}
\begin{proof}
Denote $\mathbf{\gamma}_j = \qu{\phi_j}\dqu{\phi_j} .$ From the strong convexity of the
trace distance we have:
\begin{eqnarray}
D\left(\sum_{j=0}^{2^k - 1} p_j\mathbf{\gamma}_j,
\sum_{j=0}^{2^k -1}\frac{1}{2^k} \mathbf{\gamma}_j \right)
& \leq &
D\left( \mathbf{p}, \rho \right)
+\sum_{j=0}^{2^k-1} p_j D\left(\mathbf{\gamma}_j,\mathbf{\gamma}_j\right)\\
& = & D\left(\mathbf{p},\rho \right)
\end{eqnarray}
which concludes the proof. \hfill $\square$
\end{proof}
In the proof of the next proposition we shall apply Fannes'
inequality (see \cite{bengtsson06} for more details about this inequality):
\begin{equation}\label{e:fannes}
\left| S(\rho) - S(\sigma) \right| \leq 2 D\left(\rho , \sigma \right)
\textrm{ln}\left(\frac{N}{2D(\rho , \sigma)} \right)
\end{equation}
where it is assumed that $D(\rho, \sigma ) \leq 1/(2\mathrm{e}) $ and $N$ is the
dimension of the Hilbert space where the states live.
\begin{theorem}\label{thm:positivo}\em
If the conditions in Proposition~\ref{prop:2} hold, that is, if
$B(\mathbf{p} ) \leq \varepsilon $, then
\begin{eqnarray}
|H(X^k)-S\left(\rho_{Y^k}\right)-(H(X^k)-S\left(\mathbf{\sigma}_k\right))|&=&|S\left(\rho_{Y^k}\right) - S\left(\mathbf{\sigma}_k\right)|\\ & \leq &
2\mathrm{ln}2 (k-1)\varepsilon
+ 2\varepsilon \mathrm{ln}\frac{1}{\varepsilon}. \label{e:thm:positivo}
\end{eqnarray}
\end{theorem}
\begin{proof}
Observe that the function $- x\,\mathrm{ln}\,x $ is monotone in the interval $(0,1/\mathrm{e})$. Therefore,
assuming $0 \leq \varepsilon \leq 1/\mathrm{e} $ and for $N=2^k $, we have:
\begin{eqnarray}
| S\left(\rho_{Y^k}\right) - S\left(\mathbf{\sigma}_k\right) |
& \overset{(a)}{\leq} & 2D( \rho_{Y^k}, \mathbf{\sigma}_k )\textrm{ln}
\frac{2^k}{2D(\rho_{Y^k}, \mathbf{\sigma}_k ) } \nonumber\\
& \overset{(b)}{=} &
2\mathrm{ln}2 (k-1)D(\rho_{Y^k}, \sigma_k )
+ 2 D(\rho_{Y^k} , \mathbf{\sigma}_k )\textrm{ln}
\frac{1}{D(\rho_{Y^k}, \mathbf{\sigma}_k )} \\
& \overset{(c)}{\leq} &
2\mathrm{ln}2\ (k-1)\varepsilon + 2 \varepsilon \mathrm{ln}\frac{1}{\varepsilon}
\end{eqnarray}
where $(a)$ results from Fannes' inequality, $(b)$ is due to logarithm properties, and
$(c) $ is due to Proposition~\ref{prop:2}. $\square $
\end{proof}
This result states that if a PRG is such that the probability distribution of
its output $X^k$, say $\mathbf{p} $ (possibly conditioned on the past), is near enough
the fair distribution $\rho$, then Eve's uncertainty is kept near the
maximum $H\left(X^k|Y^k,Z^k\right) = k-kS^{\ast} $ (see Equation~\eqref{e:hs2b}).
Note that the distribution $\mathbf{p} $ is induced by
the random secret seed of the PRG, $X^{(n)} $, which is chosen with uniform distribution. Consequently, any
practical use of Equation~\eqref{e:thm:positivo} will depend on Eve's capability to
estimate that distribution and, clearly, on the PRG being used.
For instance, suppose we want to upper bound the right side of~\eqref{e:thm:positivo} with
a given \textit{tolerance} defined by a positive real number $\delta $. After some
simple algebraic manipulation we obtain that
\begin{equation}\label{e:limitek}
k < 1 + \frac{1}{2\mathrm{ln} 2}\left(\frac{\delta}{\varepsilon}\right) -
\frac{\mathrm{ln}\left(\frac{1}{\varepsilon}\right)}{\mathrm{ln} 2} .
\end{equation}
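For concreteness, the bound in \eqref{e:limitek} can be evaluated directly (a small sketch; the values of $\delta$ and $\varepsilon$ are illustrative):

```python
import math

def max_block_len(delta, eps):
    """Right-hand side of Eq. (e:limitek): the bound on k obtained by rearranging
    2*ln2*(k-1)*eps + 2*eps*ln(1/eps) <= delta."""
    ln2 = math.log(2)
    return 1 + delta / (2 * ln2 * eps) - math.log(1 / eps) / ln2

# e.g. tolerance delta = 0.1 with bias bound eps = 1e-3
print(round(max_block_len(0.1, 1e-3), 1))  # -> 63.2
```

So, with a bias bound of $10^{-3}$ and a tolerance of $0.1$, blocks of up to $63$ bits keep the uncertainty within the tolerance.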
For the case of $\varepsilon=\delta$ we get the simple bound
\begin{equation}\label{e:limitek2}
k < 1 + \frac{1}{2\mathrm{ln} 2} -
\frac{\mathrm{ln}\left(\frac{1}{\varepsilon}\right)}{\mathrm{ln} 2} \leq
1 + \frac{1}{2\mathrm{ln} 2} +
\frac{\left(\frac{1}{\varepsilon}\right)-1}{\mathrm{ln} 2}\leq 1+ \frac{1}{\varepsilon \mathrm{ln} 4}.
\end{equation}
Additionally, when the conditions of Proposition~\ref{prop:2} hold, that is, for
bias $B(\mathbf{p}) < \varepsilon $, we can rewrite~(\ref{e:limitek}) as
\begin{equation}\label{e:limitekB}
k < 1 + \frac{1}{2\mathrm{ln}2}\left(\frac{\delta }{B(\mathbf{p})}\right) -
\frac{\mathrm{ln}\left(1/B(\mathbf{p})\right)}{\mathrm{ln} 2}.
\end{equation}
Note that the right-hand side of \eqref{e:limitekB} approximates
$\frac{1}{2\mathrm{ln}2}\left(\frac{\delta }{B\left(\mathbf{p}\right)}\right) $ as
$B(\mathbf{p}) $ tends to zero.
In detail, Equation~\eqref{e:limitekB}, in the light of Theorem~\ref{thm:positivo}, provides a way to compute the largest block whose uncertainty remains near $k-kS^{\ast}$ (up to $\varepsilon$), given an upper bound on the bias of $\mathbf{p}$. However,
a word of advice is necessary: from its very definition, $\mathbf{p} $
depends on $k$ and also on the position $i$ at which the block starts, because $X^k = X_{i+1},X_{i+2},\ldots , X_{i+k} $.
So, the use of Equation~\eqref{e:limitekB} to establish a bound on a secure block size relies on a bias that is difficult to compute for standard PRGs.
The following corollary clarifies the meaning of Theorem~\ref{thm:positivo} from
an asymptotic point of view.
\begin{corollary}\label{corol:thmpositivo}\em
Given a PRG, let $\mathbf{p}$ be the probability distribution of a
$k$-length generated block, and let $f(k,n)$ and $g(n)$ be positive functions such that:
\begin{itemize}
\item $\lim_{n\to \infty} g(n)=+\infty$
\item $\lim_{n\to \infty} g(n) f(g(n),n)=0$.
\end{itemize}
Then, if $B(\mathbf{p}_{PRG})\leq f(k,n)$ and $k\leq g(n)$,
$$
\lim_{n\to \infty}
|S\left(\rho_{Y^{g(n)}}\right) - S\left(\mathbf{\sigma}_{g(n)}\right) |=0.
$$
\end{corollary}
We now discuss the results above. The idea is that $n$ is the size of the seed of the PRG and $k$ is the size of the block. If one chooses $k\leq g(n)$ for some $g$, and the bias of the PRG is smaller than $f(k,n)$ for some $f$ fulfilling the conditions of Corollary~\ref{corol:thmpositivo}, then the information Eve can retrieve from blocks of size $g(n)$ is as close to the ideal case as desired, just by choosing a larger $n$. A good PRG is one for which $n\ll g(n)$, so that the block size can be larger than the seed while still revealing little information about the seed.
In the next section we compare the classical $\mathtt{XOR}$ and quantum
$\mathtt{QC}$ versions of Brassard's scheme for authentication of classical messages.
\section{\label{sec:comparison} Improving Key-Tag Secrecy }
In the last section we compared Eve's equivocation on $X$ for the $\texttt{XOR}$
and $\texttt{QC}$ schemes when she has access both to the message $Y$ and its quantum
encoded version, which she observes from the quantum channel. We concluded that the
equivocation is kept above some lower bound depending on the quality of the PRG.
In this section we include a hash function $h$ in the scheme
(see Figure~\ref{f:qauthitsugestion}) in such a way that Eve only accesses the public
message $Y$ and the quantum encoded version of the tag $T=h(Y)$.
Thanks to that modification we shall demonstrate that it is feasible
to improve the secrecy of the key and of the tag simultaneously.
By information-theoretic secrecy we mean, as usual, $I(W; V) = 0 +O(2^{-n}) $ or,
equivalently, an equivocation $H(W|V) = H(W)- O(2^{-n}) $, where $W$ is the secret to be protected and $V$
is the piece of data available to the eavesdropper. Our derivations will focus on the equivocation $H(W\mid V)$ to measure the quality of the scheme.
Then, the information to be protected is $W = \left( T, X^k \right) $ and the information available, from Eve's viewpoint, is $V = \left( Y, Z \right)$. We
investigate the uncertainty of the tag $H(T\mid Y,Z) $ and the uncertainty of the
key $H(X^k\mid Y,Z) $.
\begin{figure}[htbp]
\centerline{\epsfig{file=qauthitsugestion.eps,scale=0.5}}
\caption{Authentication scheme with a single key $X^{(n)}$ }
\label{f:qauthitsugestion}
\end{figure}
We assume that $X^k$ is independent of the message $Y$ and that the hash function is selected
from the $\varepsilon$-almost universal-2 class of hash functions,
which we refer to in the following simply as \textit{hash functions}.
\subsection*{Modified classical case}
Consider a simple modified setup
where a $\mathtt{XOR}$ gate is taken in place of the $\texttt{QC} $ block in the scheme displayed in
Figure~\ref{f:qauthitsugestion}.
If $\{X_i\} $ is a fair Bernoulli sequence and a $k$-block of bits with
$ k = \max \{ \lceil \log |\mathcal{T} | \rceil , \lceil \log |\mathcal{H}| \rceil \} $
is utilized per message, then the scheme turns out to be equivalent to the Wegman-Carter
one-time pad scheme. Indeed, in this situation $h$ is in fact drawn uniformly from $\mathcal{H}$; then
\begin{eqnarray}
H\left(T, X^k| Y, Z^k\right) & \overset{(a)}{=} & H(T|Y, Z^k) + H(X^k| T,Y,Z^k) \\
& \overset{(b)}{=} & H(T| Y, Z^k ) \\
& \overset{(c)}{=} & H\left( T |Y \right) \\
& \overset{(d)}{=} & \log |\mathcal{T}| .
\end{eqnarray}
where equality $(a)$ is due to the chain rule for Shannon entropy and
$(b)$ is due to the fact that in the classical setup $X^k = T\oplus Z^k $.
Equality $(c)$ is harder to obtain; indeed it follows from the properties of the $\varepsilon$-almost universal-2 class of hash functions. Note that $T=h_{X^k}(Y)$ has a uniform distribution. Moreover, $T|_{x^k}=h_{x^k}(Y)$ also has a uniform distribution, and therefore $T$ is independent of $X^k$. Since $Z^k=f(X^k,Y)$ we have that $ H(T| Y, Z^k )= H(T| Y)$. Equality $(d)$ is also due to the properties of hash functions.
On the other hand, if $\{ X_i \} $ comes from a PRG, Eve's uncertainty about
the tag may decrease by observing the random variable $Z^k$. Indeed,
in general, $ H\left( T |Y, Z^k \right) < H\left(T | Y\right) $.
Consequently, unconditional secrecy relative to $T$, $H(T|Y,Z^k) = \log{|\mathcal{T}|}$,
cannot be assured.
\subsection*{Uncertainty of the tag in the quantum case}
In this subsection we introduce a condition to attain unconditional security of the tag in terms
of the conditional mutual information between $T$ and the $k$-block of bits of the key.
\begin{proposition}\label{prop:sufficient}\em
If $ I\left(T; X^k | Y, Z^k \right) = H(T) $ then the tag is
secure in the information-theoretic sense, that is, $H(T|Y, Z^k )=H(T)$.
\end{proposition}
\begin{proof}
From the standard chain rule of Shannon entropy we have:
\begin{eqnarray}
H\left(T, X^k \mid Y, Z^k \right) & = &
H\left(X^k \mid Y, Z^k\right) + H\left(T \mid X^k, Y, Z^k\right) \label{e:1}\\
& = &
H\left(T | Y, Z^k \right) + H\left(X^k | T, Y, Z^k \right). ~\label{e:2}
\end{eqnarray}
Then, comparing~(\ref{e:1}) and ~(\ref{e:2}) we obtain
\begin{eqnarray}
H\left( T | Y, Z^k \right) & \overset{(a)}{=} &
H\left(T | X^k ,Y, Z^k \right) + \nonumber \\
& &
H\left(X^k|Y, Z^k \right)
- H\left(X^k|T, Y, Z^k\right) ~\label{e:3} \\
& \overset{(b)}{=} &
H\left(T | X^k ,Y, Z^k \right) +
I\left(T; X^k | Y, Z^k\right) ~\label{e:4} \\
& \overset{(c)}{=} &
I\left(T; X^k | Y, Z^k \right) ~\label{e:5}
\end{eqnarray}
where $(a)$ is due to a simple manipulation of~\eqref{e:1} and~\eqref{e:2}, $(b)$ is the
definition of mutual information, and $(c)$ follows because the hash function is determined
by $X^k$, so that $T = h(Y) $ is immediately calculated. That is,
$H(T|X^k, Y, Z^k) = H(T|X^k,Y,T = h(Y)) = 0.$
The result follows from~\eqref{e:5}. $\square $
\end{proof}
Eq.~\eqref{e:5} clearly indicates that in order to increase Eve's uncertainty
about $T$ we must maximize the mutual information between the block $X^k$
and the tag $T.$ This is the \textit{information-theoretical} hint that motivates
the scheme presented in Figure~\ref{f:qauthitsugestion}. Note that in this case we make the tag $T$ depend on $X^k$, thus increasing their mutual information. In Brassard's scheme (see Figure~\ref{f:qauthbrassard}) the hash function is fixed at the beginning, and therefore $I\left(T; X^k |V' \right)=0$, where $V'$ is the observation that Eve can perform in Brassard's scheme.
It is remarkable that it is possible to attain unconditional security of the tag
using a non-fair Bernoulli sequence for $X$ with the proposed scheme of Figure~\ref{f:qauthitsugestion}. This fact is in sharp
contrast with the classical setup, for which only fair Bernoulli sequences can assure that requirement.
Thus, a good approximation is to use a PRG for the sequence of $X$'s, and the mutual information $I(T;X^k|Y,Z^k)$ is as high as the PRG is unbiased, since that mutual information is mediated by
the random variable $Z^k$.
It is clear that, if we are dealing with real PRGs (that do not generate a sequence of fair Bernoullis), then the conditions of
Theorem~\ref{thm:positivo} should be considered in order to evaluate the number
of messages that can be authenticated before leaking too much information.
Another possibility to apply the scheme of Figure~\ref{f:qauthitsugestion} is to spend just
$k=\log |\mathcal{T}|$ key bits per message to protect the current tag. This approach is similar
to Brassard's scheme, but improves it since the tag is protected by the quantum coding. Observe that as
$ \log | \mathcal{T} | < \log | \mathcal{H}| $, this scheme is less costly in terms of key
consumption.
\subsection*{Uncertainty of the key in the quantum case}
In this case, the bounds derived in Section~\ref{sec:comparing} remain valid, namely inequality~\eqref{e:hs4}, which we recall:
\begin{equation}\label{e:hs4b}
H\left( X^k|Y^k, Z^k \right) \geq H(X^k)- S(\rho_{Y^k} ) =
H(X^k)-H(\lambda ).
\end{equation}
In this case, since the measurement $Z^k$ is on the quantum encoding of the tag, and not on the quantum encoding of $Y$, the uncertainty is greater than that of the case discussed in Section~\ref{sec:comparing}.
So, with the scheme of Figure~\ref{f:qauthitsugestion}, not only do we obtain a high equivocation about the tag, but we also increase the uncertainty about the sequence $X^k$ and, therefore, also about the seed $X^{(n)}$ of the PRG. Observe that Theorem~\ref{thm:positivo} and inequality~\eqref{e:limitek} are also valid for this scheme, and can be used to obtain bounds on the block size $k$ beyond which a given amount of information is leaked to Eve.
\section{Summary}
In this work we have investigated how quantum resources can improve the security of Brassard's classical
message authentication protocol. We started by showing that a quantum coding of secret bits offers
more security than the classical \texttt{XOR} function introduced by Brassard. Then, we used this
quantum coding to propose a quantum-enhanced protocol to authenticate classical messages, with improved
security with respect to the classical scheme introduced by Brassard in 1983. Our protocol is also more
practical in the sense that it requires a shorter key than the classical scheme, by using the pseudorandom
bits to choose the hash function. We then established the relationship between the bias of a PRG and
the amount of information about the key that the attacker can retrieve from a block of authenticated
messages. Finally, we proved that quantum resources can improve both the secrecy of the key generated by
the PRG and the secrecy of the tag obtained with a hidden hash function.
\section*{Acknowledgments}
F. M. Assis acknowledges partial support from Brazilian National
Council for Scientific and Technological Development (CNPq) under Grants No.~302499/2003-2
and CAPES-GRICES No.~160.
P. Mateus and Y. Omar thank the support from project IT-QuantTel, as well as from Funda\c{c}\~{a}o para a Ci\^{e}ncia e a
Tecnologia (Portugal), na\-mely through pro\-grams POC\-TI/PO\-CI/PT\-DC and proj\-ects
PTDC/EIA/\-67661/2006 QSec and PTDC/EEA-TEL/103402/2008 QuantPrivTel, partially funded by FEDER (EU).
\section{Introduction}
With the development of wireless communication and machine learning, a huge number of intelligent applications have appeared in networks \cite{IoTData}. To support massive connectivity for these applications over limited wireless resources, conventional communication systems face critical challenges. To address this issue, semantic communications have been considered a promising technology to achieve better performance \cite{Principle, SemanMaga}. Different from conventional communications, semantic communications take into account only the semantic features relevant to the task, which enables the system to recover information from the received semantic features.
According to the task type at the receiver, the existing works on semantic communications can be mainly divided into two categories: data reconstruction \cite{JSCC,DeepSC, device,wit} and task execution \cite{ImagRetri, TransIoT, vqvae, Task-oriented, JSCCf}. For data reconstruction, the semantic system extracts the global semantic information of the source data. Specifically, the authors in \cite{JSCC} proposed a joint source-channel coding (JSCC) system for image transmission. For text transmission, a framework called DeepSC has been proposed to encode text into variable-length representations by employing sentence information \cite{DeepSC}. In \cite{wit}, an attention-based JSCC has been proposed to operate at different signal-to-noise ratio (SNR) levels during image transmission.
For task execution applications, only the task-specific semantic information is extracted and encoded at the transmitter \cite{ImagRetri, TransIoT, vqvae, Task-oriented,JSCCf}. In particular, the authors of \cite{ImagRetri} proposed a model for the image retrieval task under power and bandwidth constraints. In \cite{TransIoT}, an image classification-oriented semantic communication system has been developed. The authors of \cite{vqvae} proposed a vector quantization-variational autoencoder (VQ-VAE) based robust semantic communication system for image classification.
\begin{figure*}[!htbp]
\begin{centering}
\includegraphics[width=0.96\textwidth]{Fig2.pdf}
\par\end{centering}
\caption{The framework of the proposed unified deep learning enabled semantic communication system.}
\label{FrameArchitect}
\end{figure*}
Compared with data reconstruction applications, the transmission overhead can be further reduced in task execution applications. However, existing systems can only handle one task with a single modality of data. This makes it difficult to serve various tasks in practice, for two reasons: (i) the model has to be updated once the task changes, which leads to a lot of gradient transmission for retraining, since the models at the transmitter and receiver are jointly trained; (ii) multiple models must be stored for serving different tasks, which is difficult for devices with limited storage resources. In \cite{Task-oriented}, a Transformer-based framework has been proposed to address this issue initially. It is able to share the same transmitter structure across the considered tasks. However, the model in \cite{Task-oriented} still needs to be retrained separately for different tasks, and the architecture of the receiver has not been unified for different tasks yet. Inspired by multi-task learning \cite{Unit}, we propose a unified deep learning enabled semantic communication system (U-DeepSC) to further address this issue by unifying the transmitter and receiver simultaneously.
To the best of our knowledge, this is the first work on a unified semantic communication system serving various tasks. In this paper, we propose U-DeepSC, an encoder-decoder semantic communication architecture, to serve multiple tasks with different modalities. Our proposed model is able to simultaneously handle a number of tasks consisting of image-only and text-only tasks, and even image-and-text reasoning tasks involving two modalities of data. In order to extract and transmit only the task-specific information in U-DeepSC, we divide the encoded features into different parts according to the tasks, where each part corresponds to the semantic information of one specific task. To further specify the semantic information, we employ domain adaptation \cite{domain_survery} to project the features of different tasks into task-specific feature domains. In particular, we propose a domain adaptation loss for the joint training procedure. Moreover, since each task is of different difficulty and requires a different number of layers to achieve satisfactory performance, a multi-exit architecture is developed by inserting early-exit modules after intermediate layers of the decoder to provide early-exit results for relatively simple tasks \cite{MultiExit}. Simulation results show that our proposed method achieves comparable performance to task-oriented semantic communication systems designed for a specific task, with much reduced transmission overhead and fewer model parameters.
The rest of this paper is structured as follows. Section II introduces the framework of U-DeepSC. In Section III, the detailed architecture of the proposed U-DeepSC is presented. Simulation results are presented in Section IV. Finally, Section V concludes this paper.
\section{Framework of U-DeepSC} \label{System}
In this section, we present the framework of U-DeepSC and describe the considered tasks.
\subsection{System Model}
As shown in Fig. \ref{FrameArchitect}, the proposed U-DeepSC is able to handle a number of tasks with two modalities, i.e., image and text. The proposed framework mainly consists of three parts: image transmitter, text transmitter, and unified receiver. Deep neural networks (DNNs) are employed to implement the transmitters and the unified receiver. In particular, the image transmitter consists of the image semantic encoder and the image channel encoder, while the text transmitter consists of the text semantic encoder and the text channel encoder. Moreover, the receiver consists of the unified channel decoder and the unified semantic decoder.
We consider a communication system equipped with $N_t$ transmit antennas and $N_r$ receive antennas. The inputs of the system are image, $\mathbf{x}^{I}$, and text, $\mathbf{x}^{T}$. The image semantic encoder learns to map $\mathbf{x}^{I}$ into the encoded image features, while $\mathbf{x}^{T}$ is processed by the text semantic encoder to obtain the encoded text features. Thus, the encoded features of image and text can be represented by
\begin{equation}
\hat{\mathbf{x}}^{I}=\mathcal{F}^{I}_{C} \big( \mathcal{F}^{I}_{S}(\mathbf{x}^{I};\bm{\theta}^{I}_{S}); \bm{\theta}^{I}_{C}\big),
\end{equation}
and
\begin{equation}
\hat{\mathbf{x}}^{T}=\mathcal{F}^{T}_{C} \big( \mathcal{F}^{T}_{S}(\mathbf{x}^{T};\bm{\theta}^{T}_{S}); \bm{\theta}^{T}_{C}\big),
\end{equation}
respectively, where $\hat{\mathbf{x}}^{I}\in \mathbb{C}^{N_{t}\times 1}$ and $\hat{\mathbf{x}}^{T}\in \mathbb{C}^{N_{t}\times 1}$, $\bm{\theta}^{I}_{S}$ and $\bm{\theta}^{I}_{C}$ denote the trainable parameters of the image semantic encoder, $\mathcal{F}^{I}_{S}$, and the image channel encoder, $\mathcal{F}^{I}_{C}$, respectively, and $\bm{\theta}^{T}_{S}$ and $\bm{\theta}^{T}_{C}$ denote the trainable parameters of the text semantic encoder, $\mathcal{F}^{T}_{S}$, and the text channel encoder, $\mathcal{F}^{T}_{C}$, respectively. We concatenate the encoded features to obtain the transmitted symbol streams, expressed as $\mathbf{x} = \big[\hat{\mathbf{x}}^{I}, \hat{\mathbf{x}}^{T} \big ].$
Then, the received signal at the receiver is given by
\begin{equation}
\mathbf{Y} = \mathbf{H} \mathbf{x} + \mathbf{n},
\end{equation}
where $\mathbf{H} \in \mathbb{C}^{N_{r}\times N_{t}}$ represents the channel gain matrix and $\mathbf{n}\sim \mathcal{CN}(\bm{0}, \sigma^{2}\mathbf{I})$ is the additive white Gaussian noise (AWGN).
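To make the channel model concrete, the following minimal numerical sketch simulates $\mathbf{Y} = \mathbf{H}\mathbf{x} + \mathbf{n}$; the antenna counts, Rayleigh-fading assumption, and noise power are illustrative choices, not the paper's settings.

```python
import numpy as np

# Minimal sketch of Y = Hx + n with complex AWGN; dimensions and
# fading model below are illustrative assumptions.
rng = np.random.default_rng(0)
N_t, N_r = 4, 4

# Transmitted symbol vector x in C^{N_t}
x = rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t)

# Rayleigh-fading channel gain matrix H in C^{N_r x N_t}
H = (rng.standard_normal((N_r, N_t))
     + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

# n ~ CN(0, sigma^2 I): independent real/imaginary parts, each with variance sigma^2/2
sigma2 = 0.1
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_r) + 1j * rng.standard_normal(N_r))

y = H @ x + n  # received signal
print(y.shape)
```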
At the receiver, the decoded signal can be represented as
\begin{equation}
\hat{\mathbf{Y}}=\mathcal{G}_{S} \big(\mathcal{G}_{C}(\mathbf{Y};\bm{\phi}_{C}); \bm{\phi}_{S}\big),
\end{equation}
where $\bm{\phi}_{C}$ and $\bm{\phi}_{S}$ denote the trainable parameters of the channel decoder, $\mathcal{G}_{C}$, and the semantic decoder, $\mathcal{G}_{S}$, respectively. Finally, the obtained features are further processed by light-weight task-specific heads to execute the downstream tasks. In particular, a task-specific head consists of a few simple layers that reshape the decoded features into the intended output dimension, e.g., the number of classes for a classification task.
\subsection{Task Description}
To provide a thorough analysis of U-DeepSC and sufficient results to demonstrate its effectiveness, we experiment with jointly handling prominent tasks from different domains, including sentiment analysis, visual question answering (VQA), image retrieval, image data reconstruction, and text data reconstruction. These tasks have been widely considered in existing semantic communication systems \cite{DeepSC,Task-oriented,ImagRetri}.
\subsubsection{Sentiment analysis}
The purpose of the sentiment analysis task is to classify whether the sentiment of a given sentence is positive or negative. It is essentially a binary classification problem. Thus, we take classification accuracy as the performance metric for sentiment analysis and VQA, and the cross entropy as the loss function to train the model.
\subsubsection{VQA}
In the VQA task, the images and the questions in text are jointly processed by the model to predict the correct answer. Thus, we take answer accuracy as the performance metric and the cross entropy as the loss function.
\subsubsection{Image retrieval}
The image retrieval task aims at finding images similar to a query image among the images stored on a large server. To evaluate the performance of the image retrieval task, Recall@1 is adopted as the evaluation metric, which is the ratio of queries for which the first retrieved image is a correct match. To learn this task, we adopt the triplet loss, which is given by
\begin{equation}
\mathcal{L}=\max \big(d(\mathbf{s}_a,\mathbf{s}_p)-d(\mathbf{s}_a,\mathbf{s}_n)+m,\, 0\big),
\end{equation}
where $m$ is a constant margin, $\mathbf{s}_p$ denotes a positive sample with the same class as the anchor sample $\mathbf{s}_a$, $\mathbf{s}_n$ is a negative sample with a different class from $\mathbf{s}_a$, and $d$ is the distance metric. The triplet loss pulls the features of two similar samples closer together and pushes the features of two different samples farther apart. Thus, the model becomes able to find the samples similar to a given input according to their encoded features.
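A hedged numerical sketch of the triplet loss $\max\big(d(\mathbf{s}_a,\mathbf{s}_p)-d(\mathbf{s}_a,\mathbf{s}_n)+m,\,0\big)$, using Euclidean distance and toy feature vectors (not the paper's implementation):

```python
import numpy as np

# Toy triplet loss: anchor, positive (same class), negative (different class).
def triplet_loss(s_a, s_p, s_n, m=0.2):
    d_ap = np.linalg.norm(s_a - s_p)  # anchor-positive distance
    d_an = np.linalg.norm(s_a - s_n)  # anchor-negative distance
    return max(d_ap - d_an + m, 0.0)

s_a = np.array([1.0, 0.0])
s_p = np.array([0.9, 0.1])   # close to the anchor -> loss should vanish
s_n = np.array([-1.0, 0.0])  # far from the anchor

print(triplet_loss(s_a, s_p, s_n))  # 0.0: positive already much closer than negative
```

Swapping the roles of positive and negative yields a large loss, since the model would then be penalized for placing a dissimilar sample close to the anchor.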
\subsubsection{Image reconstruction}
The performance of the image reconstruction task is quantified by the peak signal-to-noise ratio (PSNR). The PSNR measures the ratio between the maximum possible power and the noise, which is given by
\begin{equation}
\textrm{PSNR}=10 \log_{10}{\frac{\textrm{MAX}^{2}}{\textrm{MSE}}}(\textrm{dB}),
\end{equation}
where $\textrm{MSE}=d(\mathbf{x},\hat{\mathbf{x}})$ denotes the mean squared error (MSE) between the source image, $\mathbf{x}$, and the reconstructed image, $\hat{\mathbf{x}}$, and $\textrm{MAX}$ is the maximum possible pixel value. Moreover, the MSE is adopted as the training loss.
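A short sketch of the PSNR computation for 8-bit images ($\textrm{MAX}=255$); the toy images are illustrative:

```python
import numpy as np

# PSNR = 10 log10(MAX^2 / MSE) for 8-bit images (MAX = 255).
def psnr(x, x_hat, max_val=255.0):
    mse = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

x = np.full((8, 8), 100.0)
x_hat = x + 5.0  # uniform error of 5 per pixel -> MSE = 25
print(round(psnr(x, x_hat), 2))  # 34.15
```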
\subsubsection{Text reconstruction}
As for the text reconstruction task, the bi-lingual evaluation understudy (BLEU) score is adopted to measure the performance. The BLEU score is a scalar between $0$ and $1$ that evaluates the similarity between the reconstructed text and the source text, with $1$ representing the highest similarity. We take the cross entropy as the loss function since the BLEU score is non-differentiable.
\section{Architecture of the Proposed U-DeepSC} \label{Architect}
In this section, we present the architecture of U-DeepSC. The U-DeepSC is built upon the unified Transformer structure \cite{Unit}, which consists of separate semantic/channel encoders for each modality and a unified semantic/channel decoder with light-weight task-specific heads and a multi-exit module.
\subsection{Semantic Encoder}
Since the data from different modalities have totally different statistical characteristics, semantic information, and encoded features, we design an image semantic encoder and a text semantic encoder for image and text, respectively.
\subsubsection{Image semantic encoder}
The image-only and multi-modal tasks take an image $\mathbf{I}$ as input, and the extracted features $\mathbf{x}^{I}$ are given by $\mathbf{x}^{I} = f(\mathbf{I})$, where $f$ denotes the preprocessing module. Then, a Transformer encoder is employed as the image semantic encoder to encode $\mathbf{x}^{I}$ into the encoded image feature matrix, $\mathbf{U}^{I}$. Moreover, since different tasks may require the semantic encoder to extract different features, a task embedding vector, $\mathbf{w}_{task}^{I}$, is appended to the input, giving $[\mathbf{x}^{I}, \mathbf{w}_{task}^{I}]$, to indicate to the model which task to perform with the given sample, $\mathbf{I}$, and to allow it to extract task-specific information. Thus, we obtain the encoded image feature matrix $\mathbf{U}^{I} = \{\mathbf{u}_{1}^{I},...,\mathbf{u}_{L}^{I}\}$, where $\mathbf{u}_{l}^{I}$ denotes the $l$-th encoded image feature vector, and $L$ is the number of image feature vectors.
\subsubsection{Text semantic encoder}
As for text input, we preprocess the input text into a sequence of $S$ features, $\mathbf{x}^{T}=\{ \mathbf{w}_{1}^{T},...,\mathbf{w}_{S}^{T} \}$. Subsequently, $\mathbf{x}^{T}$ is encoded by the text semantic encoder, which consists of a Transformer encoder. Similar to the image semantic encoder, we also add a learned task embedding vector, $\mathbf{w}_{task}^{T}$, by concatenating it at the beginning of $\mathbf{x}^{T}$. Then, the concatenated sequence, $[\mathbf{x}^{T}, \mathbf{w}_{task}^{T} ]$, is input into the text semantic encoder, which outputs the encoded text features $\mathbf{U}^{T} = \{ \mathbf{u}_{1}^{T},...,\mathbf{u}_{S}^{T}\} $.
\begin{figure*}[!htbp]
\begin{centering}
\includegraphics[width=0.82\textwidth]{adaptation.pdf}
\par\end{centering}
\caption{The details of domain adaptation module.}
\label{dodo}
\end{figure*}
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.42\textwidth]{multi_exit.pdf}
\par\end{centering}
\caption{The proposed structure of semantic decoder with multi-exit architecture.}
\label{Multi-exit}
\end{figure}
\subsection{Domain Adaptation Module}
Note that the overall encoded features contain the global semantic information, and they are utilized for the aforementioned text and image reconstruction tasks. However, the other tasks, e.g., VQA and sentiment analysis, only require task-specific semantic information. Thus, we need to specify and transmit these task-specific features for different tasks, rather than sending the overall encoded features. In order to specify the task-specific semantic information in U-DeepSC, we divide the encoded features into several parts for different tasks, as shown on the left of Fig. \ref{dodo}. Moreover, since the semantic information of different tasks may overlap, tasks may share part of the encoded features. Thus, the encoded features of each task can be further divided into private features and shared features. In order to enable the unified decoder to better distinguish the features of different tasks, the domain adaptation module is introduced in the training procedure to make the shared features of different tasks similar to each other.
As shown in Fig. \ref{dodo}, we denote the shared feature matrices of task A and task B by $\mathbf{E}^{s}_{A}$ and $\mathbf{E}^{s}_{B}$, respectively, which consist of the shared encoded features. Similarly, the private feature matrices of task A and task B are denoted by $\mathbf{E}^{p}_{A}$ and $\mathbf{E}^{p}_{B}$, respectively. We define the similarity of two feature matrices as the Frobenius norm of their product, e.g., the similarity between the shared features is $\Vert {\mathbf{E}^{s}_{A}}^{T}\mathbf{E}^{s}_{B} \Vert_F$, with a larger value indicating higher similarity. Then, the adaptation loss is designed to increase the similarity between the shared features of task A and task B, while decreasing the similarity between their private features, and is given by
\begin{equation}
\label{ada}
\mathcal{L}_{a}=\Vert {\mathbf{E}^{p}_{A}}^{T}\mathbf{E}^{p}_{B} \Vert_F -\Vert {\mathbf{E}^{s}_{A}}^{T}\mathbf{E}^{s}_{B} \Vert_F.
\end{equation}
This enables the model to project the private features of different tasks into different domains, while the shared features are projected into the same domain. Thus, the task-specific semantic information can be better segmented and aggregated into the task-specific features. In this way, different tasks rely on different encoded features, so the model can be trained more easily and the performance of these tasks can be improved. It also significantly reduces the transmission overhead, since only the task-specific encoded features are transmitted. For example, as shown in Fig. \ref{dodo}, task A requires the red and orange features, while task B requires the orange and yellow features. In addition, by training two tasks at a time and mapping their features to different domains, the network can avoid forgetting previously learned tasks, which is beneficial for learning multiple tasks.
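A hedged toy sketch of the adaptation loss $\mathcal{L}_a = \Vert {\mathbf{E}^{p}_{A}}^{T}\mathbf{E}^{p}_{B} \Vert_F - \Vert {\mathbf{E}^{s}_{A}}^{T}\mathbf{E}^{s}_{B} \Vert_F$; the feature matrices below are hand-picked illustrations, not learned features:

```python
import numpy as np

# L_a = ||E_pA^T E_pB||_F - ||E_sA^T E_sB||_F:
# private-feature overlap is penalized, shared-feature alignment is rewarded.
def adaptation_loss(E_pA, E_pB, E_sA, E_sB):
    private_sim = np.linalg.norm(E_pA.T @ E_pB, ord="fro")
    shared_sim = np.linalg.norm(E_sA.T @ E_sB, ord="fro")
    return private_sim - shared_sim

E_p = np.eye(2)                      # identical private features for both tasks
E_s_same = np.array([[1.0], [0.0]])  # shared feature of task A
E_s_orth = np.array([[0.0], [1.0]])  # an orthogonal (misaligned) shared feature

loss_aligned = adaptation_loss(E_p, E_p, E_s_same, E_s_same)
loss_misaligned = adaptation_loss(E_p, E_p, E_s_same, E_s_orth)
print(loss_aligned < loss_misaligned)  # True: aligning shared features lowers the loss
```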
\subsection{Unified Semantic Decoder with Multi-Exit Task Heads}
\subsubsection{Unified semantic decoder}
\begin{figure*}[t]
\begin{centering}
\subfloat[Sentiment Analysis]{\label{fig:a}\includegraphics[width=5.2cm]{sst2.pdf}}\quad
\subfloat[VQA]{\label{fig:b}\includegraphics[width=5.2cm]{vqa.pdf}}\quad
\subfloat[Image Retrieval]{\label{fig:a}\includegraphics[width=5.2cm]{imageretri2.pdf}}\quad
\subfloat[Text Reconstruction]{\label{fig:b}\includegraphics[width=5.2cm]{eur.pdf}}\quad
\subfloat[Image Reconstruction]{\label{fig:b}\includegraphics[width=5.2cm]{image_recons.pdf}}
\caption{The performance of five tasks versus SNR.}
\label{results}
\end{centering}
\end{figure*}
The received encoded features are first processed by the channel decoder, and we denote its output by $\mathbf{U}^{enc}$. For image-only and text-only tasks, the input to the decoder is $\mathbf{U}^{enc}=\hat{\mathbf{U}}^{I}$ or $\mathbf{U}^{enc}=\hat{\mathbf{U}}^{T}$, where $\hat{\mathbf{U}}^{I}$ and $\hat{\mathbf{U}}^{T}$ denote the decoded image and text features, respectively. For multi-modal tasks, we concatenate the decoded semantic features of image and text into a sequence as $\mathbf{U}^{enc}= [\hat{\mathbf{U}}^{I}, \hat{\mathbf{U}}^{T}]$.
Unlike the separate design at the transmitter, the semantic decoder is built upon a unified Transformer decoder structure for the five tasks, as shown in Fig. \ref{FrameArchitect}. The semantic decoder takes the output of the channel decoder, $\mathbf{U}^{enc}$, and the task-specific query vector, $\mathbf{q}_{task}$, as input. The task-specific query vector indicates which task the semantic decoder is to handle. Then, we obtain the output of the $i$-th decoder layer, $\mathbf{U}_{i}^{dec}$, which is processed by the multi-exit task-specific heads to output early-exit results.
\subsubsection{Multi-exit task-specific heads}
Different tasks require different numbers of layers for the reasons below: (i) each task is of different difficulty and requires a different number of layers to achieve satisfactory performance, and the multi-exit architecture can provide early-exit results for simple tasks; (ii) different tasks require different levels of semantic information from different layers. As shown in Fig. \ref{Multi-exit}, we attach task-specific heads to these intermediate layers, so the inference time of simple tasks can be significantly reduced. The difficulty of a task can be estimated by the size of its dataset. As for the aforementioned five tasks, VQA is the most difficult task and requires the maximum number of layers, while sentiment analysis is the easiest one and exits the earliest.
\subsection{Channel Encoder and Decoder}
The encoded features are compressed by the channel encoder and decompressed by the channel decoder, which also mitigates the signal distortion caused by the wireless channel. The channel encoders and the channel decoder are modeled as fully-connected networks with ReLU as the activation function.
\subsection{Training Method}
To jointly learn the five tasks, we propose an efficient method to train the modules of the U-DeepSC system with domain adaptation. We randomly take samples from two tasks at a time, and tasks with larger datasets are assigned higher sampling probabilities, since they are generally more difficult to learn. Besides, the domain adaptation loss is adopted in the training procedure to specify the task-specific features. It is added to the task-specific losses computed on the current two tasks, and stochastic gradient descent is performed on the sum to learn the two tasks simultaneously. The detailed training procedure is summarized in Algorithm \ref{Training}.
\begin{algorithm}[t]
\begin{small}
\caption{Training procedures of the U-DeepSC}
\label{Training}
\DontPrintSemicolon
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwInOut{Initialize}{Initialize}
\Input{Five training datasets consisting of input images and texts with their labels, and the batch size $B$.}
\Output{The trained U-DeepSC model with encoders and decoders.}
\For{$m\leftarrow 1$ \KwTo $M$}{
Randomly choose two tasks and generate two mini-batches of samples, A and B, from the corresponding datasets. \\
Compute the task-specific loss $\mathcal{L}_{A}$ and $\mathcal{L}_{B}$ based on the loss function of each task.\\
Compute the domain adaptation loss $\mathcal{L}_{a}$ based on (\ref{ada}).\\
Compute the total loss $\mathcal{L}=\mathcal{L}_{A}+\mathcal{L}_{B}+\mathcal{L}_{a}$.\\
Train the model based on $\mathcal{L}$.
}
\end{small}
\end{algorithm}
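The sampling-based loop of Algorithm \ref{Training} can be sketched as follows. The loss values and the gradient step are placeholders, and the dataset sizes are rough illustrative figures, not the exact sizes used in the paper.

```python
import random

# Hedged sketch of the multi-task training loop: sample two tasks per
# iteration, with probability proportional to (illustrative) dataset size.
sizes = {"vqa": 440_000, "image_retrieval": 16_000, "text_rec": 2_000_000,
         "image_rec": 50_000, "sentiment": 67_000}
tasks = list(sizes)
total = sum(sizes.values())
probs = [sizes[t] / total for t in tasks]  # larger dataset -> higher probability

random.seed(0)
for m in range(3):                          # a few illustrative iterations
    task_a, task_b = random.choices(tasks, weights=probs, k=2)
    loss_a, loss_b = 1.0, 1.0               # task-specific losses (placeholders)
    loss_ada = 0.1                          # domain adaptation loss (placeholder)
    loss = loss_a + loss_b + loss_ada       # total loss L = L_A + L_B + L_a
    # ...one SGD/AdamW step on `loss` would go here...
print(round(loss, 1))
```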
\section{Simulation Results} \label{Simulation}
In this section, we test our U-DeepSC on the aforementioned five tasks, for which five datasets are considered. In particular, the Cars196 and CIFAR-10 datasets are adopted for the image retrieval and image reconstruction tasks, respectively. As for text reconstruction and sentiment analysis, the proceedings of the European Parliament and the SST-2 dataset are used, respectively. Moreover, the VQAv2 dataset is used for the VQA task. The image semantic encoder of U-DeepSC is built with eight Transformer encoder layers, and the text semantic encoder is likewise designed with eight Transformer encoder layers. The unified semantic decoder consists of eight Transformer decoder layers. The training uses the AdamW optimizer with learning rate $1\times10^{-4}$, batch size $32$, and weight decay $5\times10^{-3}$. According to the experimental results of these tasks at different layers, we choose reasonable exit layers: the numbers of layers for VQA, image retrieval, image reconstruction, text reconstruction, and sentiment analysis are set as $8$, $6$, $4$, $3$, and $2$, respectively.
For comparison, three benchmarks are considered.
\begin{itemize}
\item Conventional methods: conventional separate source-channel coding. For the image data, the joint photographic experts group (JPEG) standard and the low-density parity-check (LDPC) code are adopted as image source coding and image channel coding, respectively. For the text data, the 8-bit unicode transformation format (UTF-8) encoding and Turbo coding are adopted as text source coding and text channel coding, respectively. The channel coding rate is set to 1/2.
\item T-DeepSC: the task-oriented deep learning enabled semantic communication system (T-DeepSC) designed for a specific task, with the same architecture as U-DeepSC; it is implemented by training U-DeepSC separately for each task.
\item Upper bound: Results obtained via delivering noiseless image and text features to the receiver based on the T-DeepSC.
\end{itemize}
Fig. \ref{results} illustrates the performance of the investigated schemes versus the SNR for the different tasks. The proposed U-DeepSC is trained at SNR $=0$ dB and tested at SNRs from $-6$ dB to $18$ dB. It is readily seen that both U-DeepSC and T-DeepSC outperform the conventional schemes, and U-DeepSC approaches the upper bound at high SNR. Moreover, the proposed U-DeepSC achieves performance approaching that of T-DeepSC in all considered tasks. This shows that our proposed U-DeepSC is able to simultaneously handle five tasks with performance comparable to task-oriented models designed for a specific task. Since only a specific part of the overall features is transmitted in U-DeepSC for different tasks, the satisfactory performance of U-DeepSC shows that the task-specific semantic information can be well segmented and specified by U-DeepSC.
\begin{table}[!hbt]
\caption{Performance of U-DeepSC trained with or without domain adaptation loss.}
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabular}{|c|c|c|}
\hline
Task & \makecell{without domain \\ adaptation loss} & \makecell{with domain\\ adaptation loss} \\ \hline
Image Retrieval & 70.0 & 73.9 \\ \hline
VQA & 57.8 & 60.9 \\ \hline
Text Reconstruction & 0.94 & 0.96 \\ \hline
Image Reconstruction & 31.9 & 32.0 \\ \hline
Sentiment Analysis & 80.1 & 84.2 \\ \hline
\end{tabular}
\label{table1}
\end{table}
Table \ref{table1} compares the performance of U-DeepSC trained with and without the domain adaptation loss. The results show that the domain adaptation loss significantly improves the performance of image retrieval, VQA, and sentiment analysis. However, the performance of text reconstruction and image reconstruction is almost unchanged, mainly because all of the encoded features are transmitted for these two tasks, i.e., the semantic information of the overall encoded features remains the same in both cases.
\begin{table}[!hbt]
\caption{Number of parameters.}
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{|c|c|c|}
\hline
Task & T-DeepSC & U-DeepSC \\ \hline
Image Retrieval & 46.9M & 95.6M \\ \hline
VQA & 92.3M & 95.6M \\ \hline
Text Reconstruction & 52.9M & 95.6M \\ \hline
Image Reconstruction & 42.5M & 95.6M \\ \hline
Sentiment Analysis & 55.6M & 95.6M \\ \hline
Stored Parameters & 290.2M & 95.6M \\ \hline
\end{tabular}
\label{table2}
\end{table}
As shown in Table \ref{table2}, the total number of stored parameters of T-DeepSC is 290.2M, obtained by adding the parameters required for each task. For our proposed U-DeepSC, the number of stored model parameters is only 95.6M for all five tasks, which is 67.1\% less than that of T-DeepSC. The U-DeepSC is thus able to provide satisfactory performance with much-reduced model parameters, which is of great significance for practical semantic communication systems in scenarios with limited spectrum and storage resources.
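The reported 67.1\% saving follows directly from the figures in Table \ref{table2}, as the short check below shows:

```python
# Sum the per-task T-DeepSC model sizes from Table II (in millions of
# parameters) and compare with the single 95.6M U-DeepSC model.
t_deepsc_total = 46.9 + 92.3 + 52.9 + 42.5 + 55.6
u_deepsc_total = 95.6
saving = 100 * (t_deepsc_total - u_deepsc_total) / t_deepsc_total
print(round(t_deepsc_total, 1), round(saving, 1))  # 290.2 67.1
```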
\section{Conclusion} \label{Conclusion}
In this paper, we first proposed a general framework for U-DeepSC. In particular, we considered five popular tasks and jointly trained them with a unified model. To learn a unified model serving various tasks while keeping the transmission overhead task-specific, we employed domain adaptation in the training procedure to specify the task-specific features for each task; thus, only the task-specific features were transmitted in U-DeepSC. We then developed a multi-exit architecture to provide early-exit results for relatively simple tasks, which reduces the inference time. Simulation results showed that our proposed model performs satisfactorily in the low-SNR regime and achieves performance comparable to task-oriented models designed for a specific task.
\bibliographystyle{IEEEtran}
Throughout the paper, by an algebra we mean an artin algebra over
a fixed commutative artin ring $R$, that is, an $R$-algebra
(associative, with identity) which is finitely generated as an
$R$-module. For an algebra $A$, we denote by $\mod A$ the category
of finitely generated right $A$-modules, by $\ind A$ the full
subcategory of $\mod A$ formed by the indecomposable modules, by
$K_0(A)$ the Grothendieck group of $A$, and by $[M]$ the image of
a module $M$ from $\mod A$ in $K_0(A)$. Then $[M]=[N]$ for two
modules $M$ and $N$ in $\mod A$ if and only if $M$ and $N$ have
the same (simple) composition factors including the
multiplicities. A module $M$ in $\mod A$ is called sincere if
every simple right $A$-module occurs as a composition factor of
$M$. Further, we denote by $D$ the standard duality $\Hom_R(-,E)$
on $\mod A$,
where $E$ is a minimal injective cogenerator in $\mod R$. Moreover, for a module $X$ in $\mod A$ and its minimal projective
presentation $\xymatrix@C=13pt{P_1 \ar[r]^f & P_0 \ar[r] &X \ar[r] &0}$ in $\mod A$, the transpose $\Tr X$ of $X$ is the
cokernel of the homomorphism $\Hom_A(f, A)$ in $\mod A^{\op}$,
where $A^{\op}$ is the opposite algebra of $A$. Then we obtain the
homological operator $\tau_A=D\Tr$ on modules in $\mod A$, called
the Auslander-Reiten translation, playing a fundamental role in
the modern representation theory of artin algebras.
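For the reader's convenience, we recall the standard exact sequence (a well-known fact, stated here only to make the construction of the transpose explicit): applying $\Hom_A(-,A)$ to the minimal projective presentation of $X$ yields the exact sequence
$$\xymatrix@C=13pt{0 \ar[r] & \Hom_A(X,A) \ar[r] & \Hom_A(P_0,A) \ar[rr]^-{\Hom_A(f,A)} && \Hom_A(P_1,A) \ar[r] & \Tr X \ar[r] & 0}$$
in $\mod A^{\op}$, so that $\tau_A X = D\Tr X$.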
The aim of this article is to provide a complete description of all modules $M$ in $\mod A$ satisfying the condition:
for any module $X$ in $\ind A$, we have $\Hom_A(X,M)=0$ or $\Hom_A(M, \tau_AX)=0$. We note that, by \cite{AR},
\cite{RSS}, a sequence $\xymatrix@C=13pt{X \ar[r] & M \ar[r] & \tau_AX}$ of nonzero
homomorphisms in $\mod A$ with $X$ being indecomposable is called
a short chain, and $M$ the middle of this short chain. Therefore,
we are concerned with the classification of all modules in $\mod
A$ which are not the middle of a short chain. We also mention
that, if $M$ is a module in $\mod A$ which is not the middle of a
short chain, then $\Hom_A(M, \tau_AM)=0$, and hence the number of
pairwise nonisomorphic indecomposable direct summands of $M$ is
less than or equal to the rank of $K_0(A)$, by \cite[Lemma 2]{S3}.
Further, by \cite[Theorem 1.6]{RSS} and \cite[Lemma 1]{HL}, an
indecomposable module $X$ in $\mod A$ is not the middle of a short
chain if and only if $X$ does not lie on a short cycle
$\xymatrix@C=13pt{Y \ar[r] & X \ar[r] & Y}$ of nonzero
nonisomorphisms in $\ind A$. Hence, every indecomposable direct
summand $Z$ of a module $M$ in $\mod A$ which is not the middle of
a short chain is uniquely determined (up to isomorphism) by the
composition factors (see \cite[Corollary 2.2]{RSS}). Finally, we
point out that the class of modules which are not the middle of a
short chain contains the class of directing modules investigated
in \cite{Bak}, \cite{HRi2}, \cite{Ri1}, \cite{S3}, \cite{S4},
\cite{SW}.
Following \cite{Bon}, \cite{HRi1}, by a tilted algebra we mean an
algebra of the form $\End_H(T)$, where $H$ is a hereditary algebra
and $T$ is a tilting module in $\mod H$, that is,
$\Ext^1_H(T,T)=0$ and the number of pairwise nonisomorphic
indecomposable direct summands of $T$ is equal to the rank of
$K_0(H)$. The tilted algebras play a prominent role in the
representation theory of algebras and have attracted much
attention
(see \cite{ASS}, \cite{Ha}, \cite{MS}, \cite{Ri1}, \cite{Ri2}, \cite{SS1}, \cite{SS2} and their cited papers).\\
The following theorem is the main result of the paper.
\begin{thm} \label{thm 1.1}
Let $A$ be an algebra and $M$ a module in $\mod A$ which is not
the middle of a short chain. Then there exists a hereditary
algebra $H$, a tilting module $T$ in $\mod H$, and an injective
module $I$ in $\mod H$ such that the following statements hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item the tilted algebra $B=\End_H(T)$ is a quotient algebra of $A$;
\item $M$ is isomorphic to the right $B$-module $\Hom_H(T,I)$.
\end{enumerate}
\end{thm}
We note that for a hereditary algebra $H$, $T$ a tilting module
in $\mod H$, $I$ an injective module in $\mod H$, and
$B=\End_H(T)$, the right $B$-module $\Hom_H(T,I)$ is not the
middle of a short chain in $\mod B$ (see Lemma \ref{lem 3.1}). An
important role in the proof of the main theorem plays the
following characterization of tilted algebras established recently
in the authors' paper \cite{JMS1}: an algebra $B$ is a tilted
algebra if and only if $\mod B$ admits a sincere module $M$ which
is not the middle of a short chain.\\
The following fact is a consequence of Theorem \ref{thm 1.1}.
\begin{cor} \label{cor 1.2}
Let $A$ be an algebra and $M$ a module in $\mod A$ which is not
the middle of a short chain. Then $\End_A(M)$ is a hereditary
algebra.
\end{cor}
In Sections 2 and 3, after recalling some background on module
categories and tilted algebras, we prove preliminary facts
playing an essential role in the proof of Theorem \ref{thm 1.1}.
Section 4 is devoted to the proofs of Theorem \ref{thm 1.1} and
Corollary \ref{cor 1.2}. In the final Section 5 we present
examples illustrating the main
theorem.\\
For background on the representation theory applied here we refer
to \cite{ASS}, \cite{ARS}, \cite{Ri1}, \cite{SS1}, \cite{SS2}.
\section{\normalsize Preliminaries on module categories}
Let $A$ be an algebra. We denote by $\Gamma_A$ the
Auslander-Reiten quiver of $A$. Recall that $\Gamma_A$ is a valued
translation quiver whose vertices are the isomorphism classes
$\{X\}$ of modules $X$ in $\ind A$, the valued arrows of
$\Gamma_A$ correspond to irreducible homomorphisms between
indecomposable modules (and describe minimal left almost split
homomorphisms with indecomposable domains and minimal right almost
split homomorphisms with indecomposable codomains) and the
translation is given by the Auslander-Reiten translations
$\tau_A=D\Tr$ and $\tau^-_A=\Tr D$. We shall not distinguish
between a module $X$ in $\ind A$ and the corresponding vertex
$\{X\}$ of $\Gamma_A$. By a component of $\Gamma_A$ we mean a
connected component of the quiver $\Gamma_A$. Following \cite{S2},
a component $\mathcal{C}$ of $\Gamma_A$ is said to be generalized
standard if $\rad^{\infty}_A(X,Y)=0$
for all modules $X$ and $Y$ in $\mathcal{C}$, where
$\rad^{\infty}_A$ is the infinite Jacobson radical of $\mod A$.
Moreover, two components $\mathcal{C}$ and $\mathcal{D}$ of
$\Gamma_A$ are said to be orthogonal if $\Hom_A(X,Y)=0$ and
$\Hom_A(Y,X)=0$ for all modules $X$ in $\mathcal{C}$ and $Y$ in
$\mathcal{D}$. A family $\mathcal{C}=(\mathcal{C}_i)_{i \in I}$
of components of $\Gamma_A$ is said to be (strongly) separating
if the components in $\Gamma_A$ split into three disjoint
families $\mathcal{P}^A$, $\mathcal{C}^A= \mathcal{C}$ and
$\mathcal{Q}^A$ such that the following conditions are
satisfied:
\begin{itemize}
\item[(S1)] $ \mathcal{C}^A$ is a sincere family of pairwise orthogonal
generalized standard components;
\item[(S2)] $\Hom_A(\mathcal{Q}^A,
\mathcal{P}^A)=0$, $\Hom_A(\mathcal{Q}^A, \mathcal{C}^A)=0$,
$\Hom_A(\mathcal{C}^A,
\mathcal{P}^A)=0$;
\item [(S3)] any homomorphism from $\mathcal{P}^A$ to $\mathcal{Q}^A$ in
$\mod A$ factors through $\add (\mathcal{C}_i)$ for any
$i \in I$.
\end{itemize}
We then say that $\mathcal{C}^A$ separates $\mathcal{P}^A$ from
$\mathcal{Q}^A$ and write
$$\Gamma_A=\mathcal{P}^A \vee
\mathcal{C}^A \vee \mathcal{Q}^A.$$
A component $\mathcal{C}$ of
$\Gamma_A$ is said to be preprojective if $\mathcal{C}$ is acyclic
(without oriented cycles) and each module in $\mathcal{C}$ belongs
to the $\tau_A$-orbit of a projective module. Dually,
$\mathcal{C}$ is said to be preinjective if $\mathcal{C}$ is
acyclic and each module in $\mathcal{C}$ belongs to the
$\tau_A$-orbit of an injective module. Further, $\mathcal{C}$ is
called regular if $\mathcal{C}$ contains neither a projective
module nor an injective module. Finally, $\mathcal{C}$ is called
semiregular if $\mathcal{C}$ does not contain both a projective
module and an injective module. By a general result of S. Liu
\cite{L0} and Y. Zhang \cite{Z}, a regular component $\mathcal{C}$
contains an oriented cycle if and only if $\mathcal{C}$ is a
stable tube, that is, an orbit quiver
$\mathbb{ZA}_{\infty}/(\tau^r)$, for some integer $r \geq 1$.
Important classes of semiregular components with oriented cycles
are formed by the ray tubes, obtained from stable tubes by a
finite number (possibly empty) of ray insertions, and the coray
tubes obtained from stable tubes by a finite number (possibly
empty) of coray insertions (see \cite{Ri1}, \cite{SS2}).
The following characterizations of ray and coray tubes of
Auslander-Reiten quivers of algebras have been established by S.
Liu in \cite{L1a}.
\begin{thm}
Let $A$ be an algebra and $\mathcal{C}$ be a semiregular component
of $\Gamma_A$. The following equivalences hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item $\mathcal{C}$ contains an oriented cycle but no injective
module if and only if $\mathcal{C}$ is a ray tube;
\item $\mathcal{C}$ contains an oriented cycle but no projective
module if and only if $\mathcal{C}$ is a coray tube.
\end{enumerate}
\end{thm}
The following lemma from \cite[Lemma 1.2]{JMS1} will play an
important role in the proof of our main theorem.
\begin{lem}\label{lem 2.2}
Let $A$ be an algebra and $M$ a sincere module in $\mod A$ which
is not the middle of a short chain. Then the following statements
hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item $\Hom_A(M,X)=0$ for any $A$-module $X$ in $\mathcal{T}$, where $\mathcal{T}$ is an arbitrary ray tube of $\Gamma_A$ containing a projective
module;
\item $\Hom_A(X,M)=0$ for any $A$-module $X$ in $\mathcal{T}$, where $\mathcal{T}$ is an arbitrary coray tube of $\Gamma_A$ containing an injective module.
\end{enumerate}
\end{lem}
\begin{lem} \label{lem 2.3}
Let $A$ be an algebra, $\mathcal{C}=(\mathcal{C}_i)_{i \in I}$ a
separating family of stable tubes of $\Gamma_A$, and $\Gamma_A =
\mathcal{P}^A \vee \mathcal{C}^A \vee \mathcal{Q}^A$ the
associated decomposition of $\Gamma_A$ with
$\mathcal{C}^A=\mathcal{C}$. Then for arbitrary modules $M \in
\mathcal{P}^A$, $N \in \mathcal{Q}^A$, and $i \in I$, the
following statements hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item $\Hom_A(M,X) \neq 0$ for all but finitely many modules $X \in
\mathcal{C}_i$;
\item $\Hom_A(X, N) \neq 0$ for all but finitely many modules $X \in
\mathcal{C}_i$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $M$ be a module in $\mathcal{P}^A$, $N$ a module in
$\mathcal{Q}^A$, $i \in I$, and $r_i$ the rank of the stable
tube $\mathcal{C}_i$. Consider an injective hull $M \rightarrow
E_A(M)$ of $M$ in $\mod A$ and a projective cover
$P_A(N)\rightarrow N$ of $N$ in $\mod A$. Applying the separating
property of $\mathcal{C}$, we conclude that there exist
indecomposable modules $U$ and $V$ in $\mathcal{C}_i$ such that
$\Hom_A(M,U) \neq 0$ and $\Hom_A(V,N) \neq 0$. Then $\Hom_A(M,X)
\neq 0$ and $\Hom_A(X,N)\neq 0$ for all indecomposable modules $X$
in $\mathcal{C}_i$ of quasi-length greater than or equal to $r_i$,
by \cite[Lemma 3.9]{S2a}. Since such modules $X$ exhaust all but
finitely many modules in $\mathcal{C}_i$, the claims (i) and (ii)
hold.
\end{proof}
We also have the following known fact.
\begin{lem} \label{lem 2.4}
Let $A$ be an algebra and $\mathcal{T}$ a stable tube of
$\Gamma_A$. Then every indecomposable module $X$ in $\mathcal{T}$
is the middle of a short chain in $\mod A$.
\end{lem}
A path $\xymatrix@C=13pt{X_0 \ar[r] & X_1 \ar[r] & ... \ar[r]
&X_{t-1} \ar[r] & X_t}$ in the Auslander-Reiten quiver $\Gamma_A$
of an algebra $A$ is called sectional if $\tau_AX_i \ncong
X_{i-2}$ for all $i \in \{2,...,t\}$. Then we have the following
result proved by R. Bautista and S. O. Smal\o \, \cite{BS}.
\begin{lem} \label{lem 2.5}
Let $A$ be an algebra and
\[\xymatrix@C=13pt{X_0 \ar[r]^{f_1} & X_1
\ar[r]^{f_2} & ... \ar[r] &X_{t-1} \ar[r]^{f_t} & X_t}\] be a path
of irreducible homomorphisms $f_1,f_2,..., f_t$ corresponding to a
sectional path of $\Gamma_A$. Then $f_t\cdots f_2f_1 \neq 0$.
\end{lem}
Let $A$ be an algebra, $\mathcal{C}$ a component of $\Gamma_A$ and
$V$, $W$ be $A$-modules in $\mathcal{C}$ such that $V$ is a
predecessor of $W$ (respectively, a successor of $W$). If there is
a sectional path in $\mathcal{C}$ from $V$ to $W$ (respectively, from $W$ to
$V$), then we say that $V$ is a sectional predecessor of $W$
(respectively, a sectional successor of $W$). Otherwise, we say
that $V$ is a nonsectional predecessor of $W$ (respectively, a
nonsectional successor of $W$). Moreover, denote by
$\mathcal{S}_W$ the set of all indecomposable modules $X$ in
$\mathcal{C}$ such that there is a sectional path in $\mathcal{C}$
(possibly of length zero) from $X$ to $W$, and by
$\mathcal{S}^*_W$ the set of all indecomposable modules $Y$ in
$\mathcal{C}$ such that there is a sectional path in $\mathcal{C}$
(possibly of length zero) from $W$ to $Y$.
\begin{prop}\label{prop 2.6}
Let $A$ be an algebra and $\mathcal{C}$ be an acyclic component of
$\Gamma_A$ with finitely many $\tau_A$-orbits. Then the following
statements hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item if $V$ and $W$ are modules in $\mathcal{C}$ such that $V$ is a predecessor of $W$, $V$ does not belong to $\mathcal{S}_W$, and $W$ has no injective
nonsectional predecessors in $\mathcal{C}$, then we have
$\Hom_A(V,\tau_AU)\neq 0$ for some module $U$ in $\mathcal{S}_W$;
\item if $V$ and $W$ are modules in $\mathcal{C}$ such that $V$ is a successor of $W$, $V$ does not belong to $\mathcal{S}^*_W$, and $W$ has no projective
nonsectional successors in $\mathcal{C}$, then we have
$\Hom_A(\tau^-_AU,V)\neq 0$ for some module $U$ in
$\mathcal{S}^*_W$.
\end{enumerate}
\end{prop}
\begin{proof}
We shall prove only (i), because the proof of (ii) is dual. Let
$V$ and $W$ be modules in $\mathcal{C}$ such that $V$ is a
predecessor of $W$, $V$ does not belong to $\mathcal{S}_W$, and
$W$ has no injective nonsectional predecessors in $\mathcal{C}$.
Moreover, let $n(V)$ be the length of a shortest path in
$\mathcal{C}$ from $V$ to $W$. We first prove, by induction on
$n(V)$, that every path in $\mathcal{C}$ of sufficiently large
length starting at $V$ passes through a module in
$\tau_A\mathcal{S}_W$.
We may assume that $V$ does not belong to $\tau_A\mathcal{S}_W$
and hence $n(V)\geq 3$. Because $W$ has no injective nonsectional
predecessors in $\mathcal{C}$ and $\mathcal{S}_W$ does not contain
the module $V$, we conclude that $\tau_A^-V$ exists and is
a predecessor of $W$ in $\mathcal{C}$. Moreover,
$n(\tau_A^-V)=n(V)-2$; otherwise we would obtain
a contradiction with the minimality of $n(V)$. Let $\{U_1, U_2,
\ldots, U_t\}$ be the set of all direct predecessors of
$\tau_A^-V$ in $\mathcal{C}$. Then, for any $i\in\{1,\ldots,t\}$,
$U_i$ is a predecessor of $W$ in $\mathcal{C}$ and
$n(U_i)=n(V)-1$. Hence, by the induction hypothesis, every path of
sufficiently large length starting at $U_i$ passes through a
module in $\tau_A\mathcal{S}_W$. Since $\{U_1, U_2, \ldots,
U_t\}$ is also the set of all direct successors of $V$, we have
that every path in $\mathcal{C}$ of nonzero length starting at $V$
passes through $U_i$ for some $i\in\{1,\ldots,t\}$. Therefore,
the required property holds.
Let now $u: V\to E_A(V)$ be an injective hull of $V$ in $\mod A$.
Then there exists an indecomposable injective $A$-module $I$ such
that $\Hom_A(V,I)\neq 0$. Since $W$ has no injective nonsectional
predecessors in $\mathcal{C}$, applying \cite[Chapter IV, Lemma
5.1]{ASS}, we conclude that there exists a path of irreducible
homomorphisms
$$\xymatrix{V=V_0 \ar[r]^(.55){g_1}& V_1 \ar[r]^(.45){g_2} & V_2 \ar[r]^(.45){}&\cdots\ar[r]^(.45){}& V_{r-1} \ar[r]^(.5){g_{r}} & V_r} $$
with $V_r=\tau_AU$ for some $U\in\mathcal{S}_W$ and a homomorphism
$h_r: V_r\to I$ such that $h_rg_r\cdots g_1\neq 0$. Hence, we
conclude that $\Hom_A(V,\tau_AU)=\Hom_A(V,V_r)\neq 0$.
\end{proof}
\section{\normalsize Preliminaries on tilted algebras}
Let $H$ be an indecomposable hereditary algebra and $Q_H$ the
valued quiver of $H$. Recall that the vertices of $Q_H$ are the
numbers $1, 2, \ldots, n$ corresponding to a complete set $S_1,
S_2, \ldots, S_n$ of pairwise nonisomorphic simple modules in
$\mod H$ and there is an arrow from $i$ to $j$ in $Q_H$ if
$\Ext^1_H(S_i,S_j)\neq 0$, and then to this arrow is assigned the
valuation
$(\dim_{\End_H(S_j)}\Ext^1_H(S_i,S_j),\dim_{\End_H(S_i)}\Ext^1_H(S_i,S_j))$.
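To fix ideas, consider the standard example of the Kronecker algebra, that is, the path algebra over a field $K$ of the quiver with two vertices joined by two parallel arrows. Both simple modules have endomorphism algebra $K$ and the nonzero extension space between them is $2$-dimensional over $K$, so $Q_H$ is of the form
\[\xymatrix{1 \ar[r]^{(2,2)} & 2}.\]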
Recall that the Auslander-Reiten quiver $\Gamma_H$ of $H$ has a
disjoint union decomposition of the form
\[\Gamma_H = \mathcal{P}(H) \vee \mathcal{R}(H) \vee \mathcal{Q}(H),\]
where $\mathcal{P}(H)$ is the preprojective component containing
all indecomposable projective $H$-modules, $\mathcal{Q}(H)$ is the
preinjective component containing all indecomposable injective
$H$-modules, and $\mathcal{R}(H)$ is the family of all regular
components of $\Gamma_H$. More precisely, we have:
\begin{itemize}
\item[$\bullet$] if $Q_H$ is a Dynkin quiver, then $\mathcal{R}(H)$ is empty and
$\mathcal{P}(H)=\mathcal{Q}(H)$;
\item[$\bullet$] if $Q_H$ is a Euclidean quiver, then $\mathcal{P}(H)\cong (-\mathbb{N})Q^{\op}_H$, $\mathcal{Q}(H)\cong \mathbb{N}Q^{\op}_H$ and
$\mathcal{R}(H)$ is a separating infinite family of stable tubes;
\item[$\bullet$] if $Q_H$ is a wild quiver, then $\mathcal{P}(H) \cong (-\mathbb{N})Q^{\op}_H$, $\mathcal{Q}(H)\cong \mathbb{N}Q^{\op}_H$ and
$\mathcal{R}(H)$ is an infinite family of components of type
$\mathbb{ZA}_{\infty}$.
\end{itemize}
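For example, for the Kronecker algebra over an algebraically closed field $K$, which is hereditary of Euclidean type, $\mathcal{R}(H)$ is the family $(\mathcal{T}_{\lambda})_{\lambda \in \mathbb{P}_1(K)}$ of pairwise orthogonal stable tubes of rank $1$, separating $\mathcal{P}(H)$ from $\mathcal{Q}(H)$.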
Let $T$ be a tilting module in $\mod H$ and $B=\End_H(T)$ the
associated tilted algebra. Then the tilting $H$-module $T$
determines the torsion pair $(\mathcal{F}(T), \mathcal{T}(T))$ in
$\mod H$, with the torsion-free part $\mathcal{F}(T)=\{X \in \mod
H | \Hom_H(T,X)=0\}$ and the torsion part $\mathcal{T}(T)=\{X \in
\mod H | \Ext^1_H(T,X)=0\}$, and the splitting torsion pair
$(\mathcal{Y}(T), \mathcal{X}(T))$ in $\mod B$, with the
torsion-free part $\mathcal{Y}(T)=\{Y \in \mod B|
\Tor^B_1(Y,T)=0\}$ and the torsion part $\mathcal{X}(T)=\{Y \in
\mod B| Y \otimes_B T=0\}$. Then, by the Brenner-Butler theorem,
the functor $\Hom_H(T,-): \mod H \to \mod B$ induces an
equivalence of $\mathcal{T}(T)$ with $\mathcal{Y}(T)$, and the
functor $\Ext^1_H(T,-): \mod H \to \mod B$ induces an equivalence
of $\mathcal{F}(T)$ with $\mathcal{X}(T)$ (see \cite{BB},
\cite{HRi1}). Further, the images $\Hom_H(T,I)$ of the
indecomposable injective modules $I$ in $\mod H$ via the functor
$\Hom_H(T,-)$ belong to one component $\mathcal{C}_T$ of
$\Gamma_B$, called the connecting component of $\Gamma_B$
determined by $T$, and form a faithful section $\Delta_T$ of
$\mathcal{C}_T$, with $\Delta_T$ the opposite valued quiver
$Q^{\op}_H$ of $Q_H$. Recall that a full connected valued
subquiver $\Sigma$ of a component $\mathcal{C}$ of $\Gamma_B$ is
called a section if $\Sigma$ has no oriented cycles, is convex in
$\mathcal{C}$, and intersects each $\tau_B$-orbit of $\mathcal{C}$
exactly once. Moreover, the section $\Sigma$ is faithful provided
the direct sum of all modules lying on $\Sigma$ is a faithful
$B$-module. The section $\Delta_T$ of the connecting component
$\mathcal{C}_T$ of $\Gamma_B$ has a distinguished property: it
connects the torsion-free part $\mathcal{Y}(T)$ with the torsion
part $\mathcal{X}(T)$, because every predecessor in $\ind B$ of a
module $\Hom_H(T,I)$ from $\Delta_T$ lies in $\mathcal{Y}(T)$ and
every successor of $\tau^-_B \Hom_H(T,I)$ in $\ind B$ lies in
$\mathcal{X}(T)$.
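As a simple illustration of these notions, take $T=H$, for $H$ of infinite representation type. Then $B=\End_H(T)\cong H$, $\mathcal{F}(T)=0$, $\mathcal{T}(T)=\mod H$, and $\Hom_H(T,-)$ is naturally isomorphic to the identity functor. In this case
\[\Gamma_B=\Gamma_H=\mathcal{P}(H) \vee \mathcal{R}(H) \vee \mathcal{Q}(H), \qquad \mathcal{C}_T=\mathcal{Q}(H),\]
and $\Delta_T$ is the section of $\mathcal{Q}(H)$ formed by the indecomposable injective $H$-modules, with underlying valued quiver $Q^{\op}_H$.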
\begin{lem} \label{lem 3.1}
Let $H$ be an indecomposable hereditary algebra, $T$ a tilting module in
$\mod H$, and $B=\End_H(T)$ the associated tilted algebra. Then
for any injective module $I$ in $\mod H$, $M_I=\Hom_H(T,I)$ is a
module in $\mod B$ which is not the middle of a short chain.
\end{lem}
\begin{proof}
Consider the connecting component $\mathcal{C}_T$ of $\Gamma_B$
determined by $T$ and its canonical section $\Delta_T$ given by
the images of a complete set of pairwise nonisomorphic injective
$H$-modules via the functor $\Hom_H(T, -): \mod H \rightarrow \mod
B$. Then $M_I$ is isomorphic to a direct sum of indecomposable
modules lying on $\Delta_T$. Suppose $M_I$ is the middle of a
short chain $\xymatrix@C=13pt{X \ar[r] & M_I \ar[r] & \tau_BX}$
in $\mod B$. Then $X$ is a predecessor in $\ind B$ of an
indecomposable module $Y$ lying on $\Delta_T$, and consequently $Y
\in \mathcal{Y}(T)$ forces $X \in \mathcal{Y}(T)$. Hence $\tau_BX$
also belongs to $\mathcal{Y}(T)$ since $\mathcal{Y}(T)$ is closed
under predecessors in $\ind B$. In particular, $\tau_BX$ does not
lie on $\Delta_T$. Then $\Hom_B(M_I,\tau_BX) \neq 0$ implies that
there is an indecomposable module $Z$ on $\Delta_T$ such that
$\tau_BX$ is a successor of $\tau_B^{-1}Z$ in $\ind B$. But then
$\tau^{-1}_BZ \in \mathcal{X}(T)$ forces $\tau_BX \in
\mathcal{X}(T)$, because $\mathcal{X}(T)$ is closed under
successors in $\ind B$. Hence the indecomposable $B$-module
$\tau_BX$ is simultaneously in $\mathcal{Y}(T)$ and
$\mathcal{X}(T)$, a contradiction. Therefore, $M_I$ is indeed a
module in $\mod B$ which is not the middle of a short chain.
\end{proof}
Recently, the authors established in \cite{JMS1} the following
characterization of tilted algebras.
\begin{thm} \label{thm jms}
An algebra $B$ is a tilted algebra if and only if $\mod B$ admits
a sincere module $M$ which is not the middle of a short chain.
\end{thm}
We now exhibit a handy criterion for an indecomposable algebra to
be a tilted algebra established independently in \cite{L2} and
\cite{S1}.
\begin{thm} \label{thm 3.3}
Let $B$ be an indecomposable algebra. Then $B$ is a tilted algebra
if and only if the Auslander-Reiten quiver $\Gamma_B$ of $B$
admits a component $\mathcal{C}$ with a faithful section $\Delta$
such that $\Hom_B(X, \tau_BY)=0$ for all modules $X$ and $Y$ in
$\Delta$. Moreover, if this is the case and $T^{\ast}_{\Delta}$ is
the direct sum of all indecomposable modules lying on $\Delta$,
then $H_{\Delta}=\End_B(T^{\ast}_{\Delta})$ is an indecomposable
hereditary algebra, $T_{\Delta}=D(T^{\ast}_{\Delta})$ is a tilting
module in $\mod H_{\Delta}$, and the tilted algebra
$B_{\Delta}=\End_{H_{\Delta}}(T_{\Delta})$ is the basic algebra of
$B$.
\end{thm}
Let $H$ be an indecomposable hereditary algebra not of Dynkin
type, that is, the valued quiver $Q_H$ of $H$ is a Euclidean or
wild quiver. Then by a concealed algebra of type $Q_H$ we mean an
algebra $B=\End_H(T)$ for a tilting module $T$ in
$\add(\mathcal{P}(H))$ (equivalently, in $\add(\mathcal{Q}(H))$).
If $Q_H$ is a Euclidean quiver, $B$ is said to be a tame concealed
algebra. Similarly, if $Q_H$ is a wild quiver, $B$ is said to be a
wild concealed algebra. Recall that the Auslander-Reiten quiver
$\Gamma_B$ of a concealed algebra $B$ is of the form:
\[\Gamma_B=\mathcal{P}(B) \vee \mathcal{R}(B) \vee \mathcal{Q}(B),\]
where $\mathcal{P}(B) $ is a preprojective component containing
all indecomposable projective $B$-modules, $\mathcal{Q}(B)$ is a
preinjective component containing all indecomposable injective
$B$-modules and $\mathcal{R}(B)$ is either an infinite family of
stable tubes separating $\mathcal{P}(B)$ from $\mathcal{Q}(B)$ or
an infinite family of components of type $\mathbb{ZA}_{\infty}$.
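In particular, taking $T=H \in \add(\mathcal{P}(H))$, we see that every indecomposable hereditary algebra of Euclidean (respectively, wild) type is itself a tame (respectively, wild) concealed algebra, with
\[\mathcal{P}(B)=\mathcal{P}(H), \qquad \mathcal{R}(B)=\mathcal{R}(H), \qquad \mathcal{Q}(B)=\mathcal{Q}(H).\]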
\begin{prop}\label{prop 3.4}
Let $B$ be a wild concealed algebra, $\mathcal{C}$ a regular
component of $\Gamma_B$, $M$ a module in $\mathcal{P}(B)$ and $N$
a module in $\mathcal{Q}(B)$. Then the following statements hold:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\roman{enumi})}
\item $\Hom_B(M,X)\neq 0$ for all but finitely many modules $X$ in
$\mathcal{C}$;
\item $\Hom_B(X,N)\neq 0$ for all but finitely many modules $X$ in $\mathcal{C}$.
\end{enumerate}
In particular, all but finitely many modules in $\mathcal{C}$ are
sincere.
\end{prop}
\begin{proof} (i) Let $H$ be a wild hereditary algebra and $T$ a tilting module in $\add(\mathcal{P}(H))$ such that $B=\End_H(T)$. Recall that the
functor $\Hom_H(T,-): \mod H \rightarrow \mod B$ induces an
equivalence of the torsion part $\mathcal{T}(T)$ of $\mod H$ and
the torsion-free part $\mathcal{Y}(T)$ of $\mod B$. Moreover, we
have the following facts:
\begin{itemize}
\item[{(a)}] the images under the functor $\Hom_H(T,-)$ of the regular components from $\mathcal{R}(H)$ form the family $\mathcal{R}(B)$ of all regular components
of $\Gamma_{B}$;
\item[{(b)}] the images under the functor $\Hom_H(T,-)$ of all indecomposable modules in $\mathcal{P}(H)\cap\mathcal{T}(T)$ form the unique preprojective
component $\mathcal{P}(B)$ of $\Gamma_B$.
\end{itemize}
\noindent Since $\mathcal{C}$ is in $\mathcal{R}(B)$, there exists
a component $\mathcal{D}$ in $\mathcal{R}(H)$ such that
$\mathcal{C}=\Hom_H(T,\mathcal{D})$. We note that $\mathcal{C}$
and $\mathcal{D}$ are of the form $\mathbb{ZA}_{\infty}$. It
follows from \cite{Bae} (see also \cite[Corollary XVIII.2.4]{SS2})
that all but finitely many modules in $\mathcal{D}$ are sincere
$H$-modules. We may choose an indecomposable module $U$ in
$\mathcal{P}(H)\cap\mathcal{T}(T)$ such that $M=\Hom_H(T,U)$.
Further, there exists an indecomposable projective module $P$ in
$\mathcal{P}(H)$ such that $U=\tau_H^{-m}P$ for some integer
$m\geq 0$. Take now an indecomposable module $Z$ in $\mathcal{D}$.
Then we obtain isomorphisms of $R$-modules
\[ \Hom_H(U,Z)\cong\Hom_H(\tau_H^{-m}P,Z)\cong\Hom_H(P,\tau_H^{m}Z), \]
because $H$ is hereditary (see \cite[Corollary IV.2.15]{ASS}).
Since $\Hom_H(P,R)\neq 0$ for all but finitely many modules $R$ in
$\mathcal{D}$, we conclude that $\Hom_H(U,Z)\neq 0$ for all but
finitely many modules $Z$ in $\mathcal{D}$. Applying now the
equivalence of categories $\Hom_H(T,-): \mathcal{T}(T)\to
\mathcal{Y}(T)$ and the equalities
$\mathcal{P}(B)=\Hom_H(T,\mathcal{P}(H)\cap\mathcal{T}(T))$,
$\mathcal{C}=\Hom_H(T,\mathcal{D})$, and $M=\Hom_H(T,U)$, we
obtain that $\Hom_B(M,X)\neq 0$ for all but finitely many modules
$X$ in $\mathcal{C}$.
(ii) We note that the preinjective component $\mathcal{Q}(B)$ is
the connecting component $\mathcal{C}_T$ of $\Gamma_B$ determined
by $T$, and is obtained by gluing the image
$\Hom_H(T,\mathcal{Q}(H))$ of the preinjective component
$\mathcal{Q}(H)$ of $\Gamma_H$ with a finite part consisting of
all indecomposable modules of the torsion part
$\mathcal{X}(T)=\Ext_H^1(T,\mathcal{F}(T))$ of $\mod B$ (see
\cite[Theorem VIII.4.5]{ASS}). But the wild concealed algebra $B$
is also of the form $B=\End_{H^*}(T^*)$, where $H^*$ is a wild
hereditary algebra and $T^*$ is a tilting module in
$\add(\mathcal{Q}(H^*))$. Then the functor $\Ext_{H^*}^1(T^*,-):
\mod H^*\to\mod B$ induces an equivalence of the torsion-free part
$\mathcal{F}(T^*)$ of $\mod H^*$ and the torsion part
$\mathcal{X}(T^*)$ of $\mod B$. Moreover, we have the following
facts:
\begin{itemize}
\item[{(a$^*$)}] the images under the functor $\Ext_{H^*}^1(T^*,-)$ of the regular components from $\mathcal{R}(H^*)$ form the family $\mathcal{R}(B)$ of all regular
components of $\Gamma_B$;
\item[{(b$^*$)}] the images under the functor $\Ext_{H^*}^1(T^*,-)$ of all indecomposable modules in $\mathcal{Q}(H^*)\cap\mathcal{F}(T^*)$ form the
unique preinjective component $\mathcal{Q}(B)$ of $\Gamma_B$.
\end{itemize}
In particular, we have that
$\mathcal{C}=\Ext_{H^*}^1(T^*,\mathcal{D}^*)$ for a component
$\mathcal{D}^*$ in $\mathcal{R}(H^*)$. We then conclude that
$\Hom_B(X,N)\neq 0$ for all but finitely many modules $X$ in
$\mathcal{C}$, applying arguments dual to those used in the proof
of (i).
The fact that all but finitely many modules in $\mathcal{C}$ are
sincere follows from (i) (equivalently (ii)), because
$\mathcal{P}(B)$ contains all indecomposable projective
$B$-modules and $\mathcal{Q}(B)$ contains all indecomposable
injective $B$-modules.
\end{proof}
A prominent role in our considerations will be played by the
following consequence of a result of D. Baer \cite{Bae} (see
\cite[Theorem XVIII.5.2]{SS2}).
\begin{thm} \label{thm 3.5}
Let $B$ be a wild concealed algebra, and $M,N$ indecomposable
$B$-modules lying in regular components of $\Gamma_B$. Then there
exists a positive integer $m_0$ such that $\Hom_B(M,
\tau_B^mN)\neq 0$ for all integers $m \geq m_0$.
\end{thm}
\begin{lem} \label{lem 3.6}
Let $B$ be a wild concealed algebra and $\mathcal{C}$ a regular
component of $\Gamma_B$. Then any indecomposable module $N$ in
$\mathcal{C}$ is the middle of a short chain in $\mod B$.
\end{lem}
\begin{proof}
Let $N$ be an indecomposable module in $\mathcal{C}$.
Obviously $\mathcal{C}$ is of the form $\mathbb{ZA}_{\infty}$.
Applying Theorem \ref{thm 3.5}, we conclude that there is a
positive integer $m_0$ such that $\Hom_B(N, \tau^m_BN) \neq 0$ for
all integers $m \geq m_0$. Then we may take an indecomposable
module $X$ in $\mathcal{C}$ such that there are a sectional path
$\Omega$ from $X$ to $N$ and a sectional path $\Sigma$ from
$\tau^m_BN$ to $\tau_BX$ for some integer $m \geq m_0$. Observe
that all irreducible homomorphisms corresponding to arrows of
$\Sigma$ are monomorphisms whereas all irreducible homomorphisms
corresponding to arrows of $\Omega$ are epimorphisms. Hence there
are a monomorphism $f: \tau^m_BN \rightarrow \tau_BX$ and an
epimorphism $g: X \rightarrow N$. Since $\Hom_B(N, \tau_B^mN) \neq
0$, we conclude that $\Hom_B(N, \tau_BX) \neq0$. Therefore, we
obtain a short chain $\xymatrix@C=13pt{X \ar[r] & N \ar[r] &
\tau_BX}$.
\end{proof}
\section{\normalsize Proofs of Theorem \ref{thm 1.1} and Corollary \ref{cor 1.2}}
Let $A$ be an algebra and $M$ a module in $\mod A$ which is not
the middle of a short chain. By $\ann_A(M)$ we shall denote the
annihilator of $M$ in $A$, that is, the ideal $\{a \in A| Ma=0\}$.
Then $M$ is a sincere module over the algebra $B=A/ \ann_A(M)$.
Moreover, by \cite[Proposition 2.3]{RSS}, $M$ is not the middle of
a short chain in $\mod B$, since $M$ is not the middle of a short
chain in $\mod A$. Let $B=B_1\times\ldots\times B_m$ be a
decomposition of $B$ into a product of indecomposable algebras and
$M=M_1\oplus\ldots\oplus M_m$ the associated decomposition of $M$
in $\mod B$ with $M_i$ a module in $\mod B_i$ for any $i\in \{1,
\ldots, m\}$. Observe that, for each $i\in \{1, \ldots, m\}$,
$B_i=A/ \ann_A(M_i)$, $M_i$ is a sincere $B_i$-module which is not
the middle of a short chain in $\mod B_i$, and hence $B_i$
is a tilted algebra, by Theorem \ref{thm jms}. Therefore, we may assume that $B$ is an indecomposable algebra. \\
We will start our considerations by showing that for a tilted
algebra $B$ and a sincere $B$-module $M$ which is not the middle
of a short chain, all indecomposable direct summands of $M$ belong
to the same component, which is in fact a connecting component of
$\Gamma_B$. According to a result of C. M. Ringel \cite[p.46]{Ri2},
$\Gamma_B$ admits at most two components containing sincere
sections (slices), and exactly two if and only if $B$ is a
concealed algebra. We shall discuss this case in the following
proposition.
\begin{prop}\label{prop 4.1}
Let $B$ be a concealed algebra and $M$ a sincere $B$-module which is not the middle of a short chain. Then $M \in
\add (\mathcal{C})$ for a connecting component $\mathcal{C}$ of $\Gamma_B$.
\end{prop}
\begin{proof}
Observe that $M$ has no indecomposable direct summands in $\mathcal{R}(B)$, by Lemmas \ref{lem 2.4} and \ref{lem 3.6}. Hence we may write $M=M_P \oplus M_Q$, where $M_P$ is a direct summand of $M$ contained in $\add(\mathcal{P}(B))$,
whereas $M_Q$ is a direct summand of $M$ which belongs to $\add(\mathcal{Q}(B))$.
We claim that $M_P=0$ or $M_Q=0$. Suppose $M_P \neq 0$ and $M_Q \neq 0$.
Let $M'$ be an indecomposable
direct summand of $M_P$ and $M''$ an indecomposable direct summand of $M_Q$.
Consider the case when $B$ is a concealed algebra of Euclidean type, that is, $\mathcal{R}(B)$ is a family of stable tubes.
Then it follows from Lemma \ref{lem 2.3}
that there is a module $Z$ in $\mathcal{R}(B)$ such that
$\Hom_B(M', \tau_BZ)\neq 0$ and $\Hom_B(Z, M'')\neq 0$. This contradicts the assumption that $M$ is not the middle
of a short chain. Hence $M_P=0$ or $M_Q=0$.
Assume now that $B$ is a wild concealed algebra. Fix a regular component $\mathcal{D}$ of $\Gamma_{B}$. Invoking
Proposition \ref{prop 3.4} we conclude that there exists a module $X \in \mathcal{D}$ such that $\Hom_B(M',\tau_BX)\neq 0$
and $\Hom_B(X, M'') \neq 0$. Thus $M$ is the middle of a short chain $X \rightarrow M \rightarrow \tau_BX$ in $\mod B$, a contradiction.
Hence $M_P=0$ or $M_Q=0$.
Therefore, we obtain that $M$ belongs to $\add(\mathcal{C})$ for a connecting component $\mathcal{C} =\mathcal{P}(B)$
or $\mathcal{C}=\mathcal{Q}(B)$.
\end{proof}
We shall now be concerned with the situation of exactly one
connecting component in the Auslander-Reiten quiver of a tilted
algebra. Let $H$ be an indecomposable hereditary algebra of
infinite representation type, $T$ a tilting module in $\mod H$
and $B=\End_H(T)$ the associated tilted algebra. By
$\mathcal{C}_T$ we denote the connecting component in $\Gamma_B$
determined by $T$. We keep this notation in the formulation and proof of
the following statement.
\begin{prop}\label{prop 4.2}
Let $B=\End_H(T)$ be an indecomposable tilted algebra which is not
concealed. If $M$ is a sincere $B$-module which is not the
middle of a short chain in $\mod B$, then $M \in
\add(\mathcal{C}_T)$.
\end{prop}
\begin{proof}
We start with a general description of the module category $\mod B$, due
to results established in \cite{K1}, \cite{K2}, \cite{K3},
\cite{L1}, \cite{St}. Let $\Delta= \Delta_T$ be the canonical
section of the connecting component $\mathcal{C}_T$ of $\Gamma_B$
determined by $T$; thus $\Delta= Q^{\op}$ for $Q=Q_H$. Then
$\mathcal{C}_T$ admits a finite (possibly empty) family of
pairwise disjoint full translation (valued) subquivers
\[\mathcal{D}^{(l)}_1, ..., \mathcal{D}^{(l)}_m, \mathcal{D}^{(r)}_1, ..., \mathcal{D}^{(r)}_n\]
such that the following statements hold:
\begin{itemize}
\item[(a)] for each $i \in \{1,...,m\}$, there is an isomorphism of translation quivers $\mathcal{D}^{(l)}_i
\cong \mathbb{N} \Delta^{(l)}_i$, where $\Delta^{(l)}_i$ is a
connected full valued subquiver of $\Delta$, and
$\mathcal{D}^{(l)}_i$ is closed under predecessors in
$\mathcal{C}_T$;
\item[(b)] for each $j \in \{1,...,n\}$, there is an isomorphism of translation quivers $\mathcal{D}^{(r)}_j
\cong (-\mathbb{N}) \Delta^{(r)}_j$, where $\Delta^{(r)}_j$ is a
connected full valued subquiver of $\Delta$, and
$\mathcal{D}^{(r)}_j$ is closed under successors in
$\mathcal{C}_T$;
\item[(c)] all but finitely many indecomposable modules of $\mathcal{C}_T$ lie in
\[\mathcal{D}^{(l)}_1 \cup ...\cup \mathcal{D}^{(l)}_m \cup \mathcal{D}^{(r)}_1 \cup ...\cup \mathcal{D}^{(r)}_n;\]
\item[(d)] for each $i \in \{1,...,m\}$, there exists a tilted algebra $B^{(l)}_i=\End_{H^{(l)}_i}(T^{(l)}_i)$,
where $H^{(l)}_i$ is a hereditary algebra of type $(\Delta^{(l)}_i)^{\op}$ and $T^{(l)}_i$ is a
tilting $H^{(l)}_i$-module without preinjective indecomposable direct summands such that
\begin{itemize}
\item[$\bullet$] $B^{(l)}_i$ is a quotient algebra of $B$, and hence there is a fully faithful embedding
$\mod B^{(l)}_i \hookrightarrow \mod B$,
\item[$\bullet$] $\mathcal{D}^{(l)}_i$ coincides with the torsion-free part $\mathcal{Y}(T^{(l)}_i) \cap
\mathcal{C}_{T^{(l)}_i}$ of the connecting component
$\mathcal{C}_{T^{(l)}_i}$ of $\Gamma_{B^{(l)}_i}$ determined by
$T^{(l)}_i$;
\end{itemize}
\item[(e)] for each $j \in \{1,...,n\}$, there exists a tilted algebra $B^{(r)}_j=\End_{H^{(r)}_j}(T^{(r)}_j)$,
where $H^{(r)}_j$ is a hereditary algebra of type $(\Delta^{(r)}_j)^{\op}$ and $T^{(r)}_j$ is a
tilting $H^{(r)}_j$-module without preprojective indecomposable direct summands such that
\begin{itemize}
\item[$\bullet$] $B^{(r)}_j$ is a quotient algebra of $B$, and hence there is a fully faithful embedding
$\mod B^{(r)}_j \hookrightarrow \mod B$,
\item[$\bullet$] $\mathcal{D}^{(r)}_j$ coincides with the torsion part $\mathcal{X}(T^{(r)}_j) \cap
\mathcal{C}_{T^{(r)}_j}$ of the connecting component
$\mathcal{C}_{T^{(r)}_j}$ of $\Gamma_{B^{(r)}_j}$ determined by
$T^{(r)}_j$;
\end{itemize}
\item[(f)] $\mathcal{Y}(T)=\add (\mathcal{Y}(T^{(l)}_1) \cup ... \cup \mathcal{Y}(T^{(l)}_m)\cup (\mathcal{Y}(T)
\cap \mathcal{C}_T))$;
\item[(g)] $\mathcal{X}(T)=\add ((\mathcal{X}(T) \cap \mathcal{C}_T)\cup \mathcal{X}(T^{(r)}_1) \cup ... \cup
\mathcal{X}(T^{(r)}_n))$;
\item[(h)] the Auslander-Reiten quiver $\Gamma_B$ has the disjoint union form
\[\Gamma_B = (\bigcup_{i=1}^m \mathcal{Y}\Gamma_{B^{(l)}_i}) \cup \mathcal{C}_T \cup (\bigcup_{j=1}^n
\mathcal{X}\Gamma_{B^{(r)}_j}),\]
where
\begin{itemize}
\item[$\bullet$] for each $i \in \{1,...,m\}$, $\mathcal{Y}\Gamma_{B^{(l)}_i}$ is the union of all components
of $\Gamma_{B^{(l)}_i}$ contained entirely in $\mathcal{Y}(T^{(l)}_i)$,
\item[$\bullet$] for each $j \in \{1,...,n\}$, $\mathcal{X}\Gamma_{B^{(r)}_j}$ is the union of all components
of $\Gamma_{B^{(r)}_j}$ contained entirely in
$\mathcal{X}(T^{(r)}_j)$.
\end{itemize}
\end{itemize}
Moreover, we have the following description of the components of
$\Gamma_B$ contained in the parts $\mathcal{Y}\Gamma_{B^{(l)}_i}$
and $\mathcal{X}\Gamma_{B^{(r)}_j}$.
\begin{itemize}
\item[(1)] If $\Delta^{(l)}_i$ is a Euclidean quiver, then $\mathcal{Y}\Gamma_{B^{(l)}_i}$ consists
of a unique preprojective component $\mathcal{P}({B^{(l)}_i})$ of
$\Gamma_{{B^{(l)}_i}}$ and an infinite family
$\mathcal{T}^{B^{(l)}_i}$ of pairwise orthogonal generalized
standard ray tubes. Further, $\mathcal{P}({B^{(l)}_i})$ coincides
with the preprojective component $\mathcal{P}({C^{(l)}_i})$ of a
tame concealed quotient algebra $C^{(l)}_i$ of $B^{(l)}_i$.
\item[(2)] If $\Delta^{(l)}_i$ is a wild quiver, then $\mathcal{Y}\Gamma_{B^{(l)}_i}$ consists of
a unique preprojective component $\mathcal{P}(B^{(l)}_i)$ of
$\Gamma_{B^{(l)}_i}$ and an infinite family of components obtained
from the components of the form $\mathbb{ZA}_{\infty}$ by a finite
number (possibly empty) of ray insertions. Further,
$\mathcal{P}(B^{(l)}_i)$ coincides with the preprojective
component $\mathcal{P}(C^{(l)}_i)$ of a wild concealed quotient
algebra $C^{(l)}_i$ of $B^{(l)}_i$.
\item[(3)] If $\Delta^{(r)}_j$ is a Euclidean quiver, then $\mathcal{X}\Gamma_{B^{(r)}_j}$ consists
of a unique preinjective component $\mathcal{Q}(B^{(r)}_j)$ of
$\Gamma_{B^{(r)}_j}$ and an infinite family
$\mathcal{T}^{B^{(r)}_j}$ of pairwise orthogonal generalized
standard coray tubes. Further, $\mathcal{Q}(B^{(r)}_j)$ coincides
with the preinjective component $\mathcal{Q}(C^{(r)}_j)$ of a tame
concealed quotient algebra $C^{(r)}_j$ of $B^{(r)}_j$.
\item[(4)] If $\Delta^{(r)}_j$ is a wild quiver, then $\mathcal{X}\Gamma_{B^{(r)}_j}$ consists of
a unique preinjective component $\mathcal{Q}(B^{(r)}_j)$ of $\Gamma_{B^{(r)}_j}$ and an infinite family of
components obtained from the components of the form $\mathbb{ZA}_{\infty}$ by a finite number (possibly
empty) of coray insertions. Further, $\mathcal{Q}(B^{(r)}_j)$ coincides with the preinjective component
$\mathcal{Q}(C^{(r)}_j)$ of a wild concealed quotient algebra $C^{(r)}_j$ of $B^{(r)}_j$.
\end{itemize}
Observe that each indecomposable $B$-module belongs either to
$\mathcal{Y}(T)$ or $\mathcal{X}(T)$ ($T$ is a splitting tilting
module). Let $M'$ be an indecomposable direct summand of $M$,
which is contained in $\mathcal{Y}(T)$. We claim that $M'$
belongs to $\mathcal{Y}(T) \cap \mathcal{C}_T$. Suppose to the contrary
that $M' \in \mathcal{Y}(T)\backslash \mathcal{C}_T$. Then there
exists $i \in \{1,..., m\}$ such that $M' \in
\mathcal{Y}\Gamma_{B^{(l)}_i} \backslash \mathcal{C}_T$,
equivalently $M' \in \mathcal{Y}\Gamma_{B^{(l)}_i} \backslash
\mathcal{C}_{T^{(l)}_i}$. Without loss of generality we may assume
that $i=1$. Since $T^{(l)}_1$ does not contain indecomposable
preinjective direct summands,
we may distinguish two cases.\\
Assume first that $T^{(l)}_1$ contains an indecomposable direct
summand from $\mathcal{R}(H^{(l)}_1)$. This implies that there is
a projective module $P$ in $\mathcal{Y}\Gamma_{B^{(l)}_1}$ which
does not belong to $\mathcal{P}(B^{(l)}_1)$. If $B_{1}^{(l)}$ is a
tilted algebra of Euclidean type, then $P$ is a module from some
ray tube $\mathcal{T}$. Then, according to Lemma \ref{lem 2.2},
$\Hom_{B^{(l)}_1}(M',X)=0$ for any $X \in \mathcal{T}$, which
leads to the conclusion that $M' \notin \mathcal{P}(B^{(l)}_1)$,
because $\mathcal{T}$ belongs to the family
$\mathcal{T}^{B^{(l)}_1}$ of ray tubes separating
$\mathcal{P}(B^{(l)}_1)$ from the preinjective component
$\mathcal{Q}(B^{(l)}_1)$ of $\Gamma_{B^{(l)}_1}$. Since $M'$ does
not belong to any of these ray tubes (Lemmas \ref{lem
2.2} and \ref{lem 2.4}), by (1) we conclude that $M'=0$, a
contradiction. This shows that any indecomposable direct summand
of $M$ from $\mathcal{Y}(T)$ is contained in $\mathcal{Y}(T) \cap
\mathcal{C}_T$. If $B^{(l)}_1$ is a tilted algebra of wild type,
then $P$ belongs to a component obtained from a component of type
$\mathbb{ZA}_{\infty}$, say $\mathcal{D}$, by a positive number of
ray insertions. Then there is a left cone $(\rightarrow N)$ in
$\mathcal{D}$ which consists only of $C^{(l)}_1$-modules
\cite[Theorem 1]{K2}. Moreover,
$\tau_{C^{(l)}_1}V=\tau_{B^{(l)}_1}V$ for any module $V \in
(\rightarrow N)$ and there is an indecomposable module $Y \in
\mathcal{R}(H^{(l)}_1)$ such that $N=\Hom_{H^{(l)}_1}(T^{(l)}_1,
Y)$. Because $M' \in \mathcal{Y}\Gamma_{B^{(l)}_1} \backslash
\mathcal{C}_{T^{(l)}_1}$, we have also that $M' =
\Hom_{H^{(l)}_1}(T^{(l)}_1, X)$ for some $X \in
\mathcal{P}(H^{(l)}_1) \cup \mathcal{R}(H^{(l)}_1)$.
Suppose that $X \in \mathcal{R}(H^{(l)}_1)$. Invoking Theorem
\ref{thm 3.5}, we have that there exists a positive integer $t$
such that $\Hom_{H^{(l)}_1}(X, \tau^p_{H^{(l)}_1}Y)\neq 0$ for all
integers $p \geq t$. This implies that also
$\Hom_{B^{(l)}_1}(M',\tau_{B^{(l)}_1}^pN) \neq 0$ for all integers
$p \geq t$. Moreover, if $X \in \mathcal{P}(H^{(l)}_1)$, then
$M'=\Hom_{H^{(l)}_1}(T^{(l)}_1,X)$ belongs to
$\mathcal{P}(B^{(l)}_1)$, which is equal to
$\mathcal{P}(C^{(l)}_1)$. From Proposition \ref{prop 3.4} we
obtain now that there exists a positive integer $t$ such that
$\Hom_{B^{(l)}_1} (M', \tau_{B^{(l)}_1}^pN) \neq 0$ for all
integers $p \geq t$. Thus we have, independently of the position
of $X$ in $\mathcal{P}(H^{(l)}_1) \cup \mathcal{R}(H^{(l)}_1)$, a
nonzero homomorphism $g: M' \rightarrow \tau^p_{B^{(l)}_1}N$, for
any integer $p \geq t$. Observe also that $M$ is a faithful
$B$-module because $M$ is sincere and not the middle of a short
chain (see \cite[Corollary 3.2]{RSS}). Hence there is a
monomorphism $B_B\to M^r$, for some positive integer $r$, so we
have a monomorphism $P\to M^r$, because $P$ is a direct summand of
$B_B$. Further, since $\mathcal{D}$ contains a finite number of
projective modules, we may assume, without loss of generality,
that $P$ is the one whose radical has an indecomposable direct
summand $L$ such that $\tau^s_{B^{(l)}_1}L \neq 0$ for any integer
$s \geq 1$. Consider the infinite sectional path $\Sigma$ in
$\mathcal{D}$ which terminates at $L$. Then there exists an
integer $p \geq t$ such that the infinite sectional path $\Omega$
which starts at $\tau_{B^{(l)}_1}^p N$ contains a module
$\tau_{B^{(l)}_1}Z$ with $Z$ lying on $\Sigma$. Then,
$\Hom_{B^{(l)}_1}(Z, L) \neq 0$, by Lemma \ref{lem 2.5}, and hence
$\Hom_{B^{(l)}_1}(Z, M) \neq 0$, since there are monomorphisms $L
\rightarrow P$ and $P \rightarrow M^r$ for some integer $r \geq
1$. Similarly, we obtain $\Hom_{B^{(l)}_1}(M', \tau_{B^{(l)}_1}Z)
\neq 0$, because there are a nonzero homomorphism $g: M'\to
\tau^p_{B^{(l)}_1}N$ and a monomorphism from $\tau_{B^{(l)}_1}^pN$
to $\tau_{B^{(l)}_1}Z$ being a composition of irreducible
monomorphisms. Finally, we get a short chain $Z \rightarrow M
\rightarrow \tau_{B^{(l)}_1}Z$ in $\mod B$, which contradicts
the assumption imposed on $M$. \\
Assume now that $T^{(l)}_1$ belongs to
$\add(\mathcal{P}(H^{(l)}_1))$. Then $B^{(l)}_1$ is a concealed
algebra and $B \neq B^{(l)}_1$, since $B$ is not concealed by the
assumption. Therefore, since $B$ is indecomposable, there exists a
module $W \in \mathcal{Q}(B^{(l)}_1)$, more precisely, a module
$W\in \mathcal{Q}(B^{(l)}_1) \cap \mathcal{C}_{T}$ such that $W$
is a direct summand of the radical $\rad P$ of some projective $B$-module $P$.
Moreover, by Lemmas \ref{lem 2.4} and \ref{lem 3.6}, we obtain
that $M' \in \mathcal{Y}\Gamma_{B^{(l)}_1}\backslash
\mathcal{C}_{T^{(l)}_1}$ implies $M' \in \mathcal{P}(B^{(l)}_1)$.
We claim that there exists $Z \in \mathcal{R}(B^{(l)}_1)$ such
that $\Hom_{B^{(l)}_1}(Z, W) \neq 0$ and $\Hom_{B^{(l)}_1}(M',
\tau_{B^{(l)}_1}Z) \neq 0$.
If $B^{(l)}_1$ is of Euclidean type, then the claim follows from
Lemma \ref{lem 2.3}. Suppose now that $B^{(l)}_1$ is of wild type.
Let $\mathcal{D}$ be a fixed component in
$\mathcal{R}(B^{(l)}_1)$. From Proposition \ref{prop 3.4} we know
that there are nonzero homomorphisms $M' \rightarrow U$ for almost
all $U \in \mathcal{D}$. Also by this proposition, for almost all
$U \in \mathcal{D}$, there is a nonzero homomorphism $U
\rightarrow W$. Thus, we conclude that there exists a regular
$B^{(l)}_1$-module $Z$ such that $\Hom_{B^{(l)}_1}(Z, W) \neq 0$
and $\Hom_{B^{(l)}_1}(M', \tau_{B^{(l)}_1}Z) \neq 0$. Combining
now a nonzero homomorphism from $Z$ to $W$ with the composition of
monomorphisms $W \rightarrow P$ and $P \rightarrow M^r$, for some
integer $r\geq 1$, we obtain that $\Hom_{B^{(l)}_1}(Z, M) \neq 0$.
Consequently,
there is a short chain $Z \rightarrow M \rightarrow \tau_{B^{(l)}_1}Z$ in $\mod B$, a contradiction.\\
We use dual arguments to show that any indecomposable direct summand $M''$ of $M$, which is contained in
$\mathcal{X}(T)$, belongs in fact to $\mathcal{X}(T) \cap \mathcal{C}_T$.
\end{proof}
\vspace{0.2cm}
\begin{prop}\label{prop 4.3}
Let $B=\End_H(T)$ be an indecomposable tilted algebra,
$\mathcal{C}_T$ the connecting component of $\Gamma_B$ determined
by $T$, and $M$ a sincere module in $\add(\mathcal{C}_T)$ which is
not the middle of a short chain. Then there is a section $\Delta$
in $\mathcal{C}_T$ such that every indecomposable direct summand
of $M$ belongs to $\Delta$.
\end{prop}
\begin{proof}
We divide the proof into several steps.
(1) Let $M'$ be an indecomposable direct summand of $M$ and $R$ be
an immediate predecessor of some projective module $P$ in
$\mathcal{C}_T$ (if $\mathcal{C}_T$ contains a projective module).
We prove that, if $M'$ is a predecessor of $R$ in $\mathcal{C}_T$,
then $M'$ belongs to $\mathcal{S}_R$. Assume that $M'$ is a
predecessor of $R$ in $\mathcal{C}_T$ and $M'$ does not belong to
$\mathcal{S}_R$. Since $R$ has no injective nonsectional
predecessors in $\mathcal{C}_T$, we have from Proposition
\ref{prop 2.6} (i) that $\Hom_B(M',\tau_BU)\neq 0$ for some module
$U\in\mathcal{S}_R$. Moreover, $\Hom_B(U,R)\neq 0$, because there
is a sectional path from $U$ to $R$ in $\mathcal{C}_T$. Since $M$
is faithful, there is a monomorphism $B_B\to M^r$, for some
positive integer $r$, so we have a monomorphism $P\to M^r$,
because $P$ is a direct summand of $B_B$. Combining now a nonzero
homomorphism from $U$ to $R$ with the composition of monomorphisms
$R\to P$ and $P\to M^r$, we obtain $\Hom_B(U,M^r)\neq 0$, and
hence $\Hom_B(U,M)\neq 0$. Summing up, we have in $\mod B$ a short
chain $U\to M\to \tau_BU$, a contradiction.
Dually, using Proposition \ref{prop 2.6}(ii) we show that, if an
indecomposable direct summand $M''$ of $M$ is a successor of an
immediate successor $J$ of some injective module $I$ in
$\mathcal{C}_T$,
then $M''$ belongs to $\mathcal{S}^*_J$. \\
(2) Let $M'$ and $M''$ be nonisomorphic indecomposable direct
summands of $M$ such that $M'$ is a predecessor of $M''$ in
$\mathcal{C}_T$. We show that every path from $M'$ to $M''$ in
$\mathcal{C}_T$ is sectional. Assume for the contrary that there
exists a nonsectional path from $M'$ to $M''$ in $\mathcal{C}_T$.
For each nonsectional path $\sigma$ in $\mathcal{C}_T$ from $M'$
to $M''$, we denote by $n(\sigma)$ the length of the maximal
sectional subpath of $\sigma$ ending in $M''$. Among the
nonsectional paths in $\mathcal{C}_T$ from $M'$ to $M''$ we may
choose a path $\gamma$ with maximal $n(\gamma)$. Let $Y_0 \to Y_1
\to\cdots\to Y_{n-1} \to Y_n=M''$ be the maximal sectional subpath
of $\gamma$ ending in $M''$. Observe that then $\gamma$ admits a
subpath of the form $\tau_BY_1 \to Y_0 \to Y_1$, and so $Y_1$ is
not projective.
We show first that there is no sectional path in $\mathcal{C}_T$
from $M'$ to $Y_0$. Note that there is no sectional path in
$\mathcal{C}_T$ from $M'$ to $\tau_BY_1$. Indeed, otherwise
$\Hom_B(M',\tau_BY_1)\neq 0$ and clearly $\Hom_B(Y_1,M'')\neq 0$,
since there is a sectional path from $Y_1$ to $Y_n=M''$, and
consequently $M$ is the middle of a short chain $Y_1 \to M \to
\tau_BY_1$, a contradiction. Moreover, applying (1), we conclude
that $Y_0$ and $\tau_BY_1$ are not projective. We claim that
$\tau_BY_1$ is a unique immediate predecessor of $Y_0$ in
$\mathcal{C}_T$. Suppose that $Y_0$ admits an immediate
predecessor $L$ in $\mathcal{C}_T$ different from $\tau_BY_1$.
Since there is no sectional path in $\mathcal{C}_T$ from $M'$ to
$\tau_BY_1$, we conclude that $\gamma$ contains a subpath of the
form
\[ M'=N_0\to N_1\to\cdots\to N_s=\tau_BZ_1\to Z_0\to Z_1\to\cdots\to Z_{t-1}\to Z_t=\tau_BY_1. \]
Assume first that all modules $Z_2, \ldots, Z_{t-1}$ are nonprojective. Then there is in $\mathcal{C}_T$ a nonsectional path $\beta$ from $M'$ to $M''$
of the form
\[ M'=N_0\to N_1\to\cdots\to \tau_BZ_1\to \tau_BZ_2 \to\cdots\to \tau_BZ_t\to \tau_BY_0\to L\to \]
\[\to Y_0\to Y_1\to\cdots\to Y_n=M'' \]
with $n(\beta)=n(\gamma)+1$, a contradiction with the choice of $\gamma$. Assume now that one of the modules $Z_2, \ldots, Z_{t-1}$ is projective.
Choose $k\in\{2,\ldots,t-1\}$ such that $Z_k$ is projective but $Z_{k+1},\ldots, Z_{t-1}, Z_t$ are nonprojective. Then $\tau_BZ_{k+1}$ is an immediate
predecessor of $Z_k$ in $\mathcal{C}_T$ and hence, applying (1), we infer that there is a sectional path in $\mathcal{C}_T$ from $M'$ to $\tau_BZ_{k+1}$.
We obtain then a nonsectional path $\alpha$ in $\mathcal{C}_T$ of the form
\[ M'\to\cdots\to \tau_BZ_{k+1}\to\cdots\to \tau_BZ_t\to \tau_BY_0\to L\to Y_0\to Y_1\to\cdots\to Y_n=M'' \]
with $n(\alpha)=n(\gamma)+1$, again a contradiction with the
choice of $\gamma$. Summing up, we proved that $Y_0$, $Y_1$ are
nonprojective and $\tau_BY_1$ is a unique immediate predecessor of
$Y_0$ in $\mathcal{C}_T$. Hence every sectional path in
$\mathcal{C}_T$ from $M'$ to $Y_0$ passes through $\tau_BY_1$.
This proves our claim, because there is no sectional path in
$\mathcal{C}_T$ from $M'$ to $\tau_BY_1$.
Observe that $\Hom_B(Y_0,M)\neq 0$, since we have a sectional path
in $\mathcal{C}_T$ from $Y_0$ to the direct summand $M''$ of $M$.
Denote by $f$ a nonzero homomorphism in $\mod B$ from $Y_0$ to $M$
and consider a projective cover $g: P_B(Y_0) \to Y_0$ of $Y_0$ in
$\mod B$. Then $fg\neq 0$ and hence there exist an indecomposable
projective $B$-module $P$ and nonzero homomorphism $h: P\to Y_0$
such that $fh\neq 0$. Applying (1) and Proposition \ref{prop 2.6}
(ii), we conclude that $h$ factorizes through a module in
$\add(\tau^-_B\mathcal{S}^*_{M'})$. Then there exists a module $U$
in $\mathcal{S}^*_{M'}$ and a nonzero homomorphism $j: \tau^-_BU
\to Y_0$ such that $fj\neq 0$. Moreover, $\Hom_B(M',U)\neq 0$
because there is a sectional path from $M'$ to $U$ in
$\mathcal{C}_T$. Therefore, $M$ is the middle of a short chain
$\tau^-_BU \to M\to U$, with $U=\tau_B (\tau^-_BU)$, a contradiction. \\
(3) Let $M'$ be an indecomposable direct summand of $M$ which is a
predecessor of an indecomposable projective module $P$ in
$\mathcal{C}_T$. Then every path in $\mathcal{C}_T$ from $M'$ to
$P$ is sectional. Indeed, since $M$ is a faithful module in $\mod
B$, there is a monomorphism $B_B\to M^r$ in $\mod B$ for some
positive integer $r$, and hence $\Hom_B(P,M'')\neq 0$ for an
indecomposable direct summand $M''$ of $M$. Since $\mathcal{C}_T$
is a generalized standard component, we infer that then there is
in $\mathcal{C}_T$ a path from $P$ to $M''$. Therefore, any path
in $\mathcal{C}_T$ from $M'$ to $P$ is
a subpath of a path in $\mathcal{C}_T$ from $M'$ to $M''$, and so is sectional, by (2). \\
(4) Let $M''$ be an indecomposable direct summand of $M$ which is
a successor of an indecomposable injective module $I$ in
$\mathcal{C}_T$. Then
every path in $\mathcal{C}_T$ from $I$ to $M''$ is sectional. This follows by arguments dual to those applied in (3). \\
We denote by $\Delta_T$ the section of $\mathcal{C}_T$ given by
the images of a complete set of pairwise nonisomorphic
indecomposable injective $H$-modules via the functor
$\Hom_H(T,-): \mod H \to \mod B$. \\
(5) Let $M_1, M_2, \ldots, M_t$ be a complete set of pairwise
nonisomorphic indecomposable direct summands of $M$. We know that
for a given module $N$ in $\mathcal{C}_T$ there exists a unique
integer $r$ such that $\tau^r_BN\in\Delta_T$. Let $r_1, r_2,
\ldots, r_t$ be the unique integers such that
$\tau^{r_i}_BM_i\in\Delta_T$, for any $i\in\{1,\ldots,t\}$.
Observe that the modules $\tau^{r_1}_BM_1, \tau^{r_2}_BM_2,
\ldots, \tau^{r_t}_BM_t$ are pairwise different because, by (2),
every path in $\mathcal{C}_T$ from $M_i$ to $M_j$, with $i\neq j$
in $\{1,\ldots,t\}$, is sectional. We shall prove our claim by
induction on the number $s(\Delta_T)=\sum_{i=1}^t|r_i|$.
Assume $s(\Delta_T)=0$. Then, for any $i\in\{1,\ldots,t\}$, $M_i\in \Delta_T$ and there is nothing to show.
Assume $s(\Delta_T)\geq 1$. Fix $i\in\{1,\ldots,t\}$ with
$|r_i|\neq 0$. Assume that $r_i>0$, or equivalently,
$M_i\in\mathcal{C}_T\cap\mathcal{X}(T)$. Denote by
$\Sigma^{(i)}_T$ the set of all modules $X$ in $\Delta_T$ such
that there is a path in $\mathcal{C}_T$ of length greater than or
equal to zero from $X$ to $\tau_B^{r_i}M_i$. We note that every
path from a module $X$ in $\Sigma^{(i)}_T$ to $\tau_B^{r_i}M_i$ is
sectional, because $\Delta_T$ is convex in $\mathcal{C}_T$ and
intersects every $\tau_B$-orbit in $\mathcal{C}_T$ exactly once.
Further, by (2) and (4), no module in $\Sigma^{(i)}_T$ is a
successor of a module $M_j$ with $j\in\{1,\ldots,t\}\setminus
\{i\}$ nor an indecomposable injective module, because there is a
nonsectional path in $\mathcal{C}_T$ from $\tau_B^{r_i}M_i$ to
$M_i$. Consider now the full subquiver $\Delta^{(i)}_T$ of
$\mathcal{C}_T$ given by the modules from
$\tau^-_B(\Sigma^{(i)}_T)$ and $\Delta_T\setminus \Sigma^{(i)}_T$.
Then $\Delta^{(i)}_T$ is a section of $\mathcal{C}_T$ and
$s(\Delta^{(i)}_T)\leq s(\Delta_T)-1$.
Assume now $r_i<0$, or equivalently,
$M_i\in\mathcal{C}_T\cap\mathcal{Y}(T)$. Denote by
$\Omega^{(i)}_T$ the set of all modules $Y$ in $\Delta_T$ such
that there is a path in $\mathcal{C}_T$ of length greater than or
equal to zero from $\tau_B^{r_i}M_i$ to $Y$. It follows from (2)
and (3) that no module in $\Omega^{(i)}_T$ is a predecessor of a
module $M_j$ with $j\in\{1,\ldots,t\}\setminus \{i\}$ nor an
indecomposable projective module, because there is a nonsectional
path in $\mathcal{C}_T$ from $M_i$ to $\tau_B^{r_i}M_i$. Consider
now the full subquiver $\Delta^{(i)}_T$ of $\mathcal{C}_T$ given
by the modules from $\tau_B(\Omega^{(i)}_T)$ and
$\Delta_T\setminus \Omega^{(i)}_T$. Then $\Delta^{(i)}_T$ is a
section of $\mathcal{C}_T$ and $s(\Delta^{(i)}_T)\leq
s(\Delta_T)-1$.
Summing up, we obtain that there is a section $\Delta$ in
$\mathcal{C}_T$ containing all modules $M_1, M_2, \ldots, M_t$.
\end{proof}
We complete now the proof of Theorem \ref{thm 1.1}.\\
Let $B$ be an indecomposable tilted algebra and $M$ a sincere
module in $\mod B$ which is not the middle of a short chain in
$\mod B$. Applying Propositions \ref{prop 4.1} and \ref{prop 4.2},
we conclude that there exists a hereditary algebra $\overline{H}$
and a tilting module $\overline{T}$ in $\mod \overline{H}$ such
that $B=\End_{\overline{H}}(\overline{T})$ and $M$ is isomorphic
to a $B$-module $M_1^{n_1} \oplus ... \oplus M_t^{n_t}$ with
$M_1,..., M_t$ indecomposable modules in
$\mathcal{C}_{\overline{T}}$, for some positive integers
$n_1,\ldots,n_t$. Further, it follows from Proposition \ref{prop 4.3}
that there is a section $\Delta$ in $\mathcal{C}_{\overline{T}}$
containing the modules $M_1, ..., M_t$. Denote by
$T^{\ast}_{\Delta}$ the direct sum of all indecomposable
$B$-modules lying on $\Delta$. Then it follows from Theorem
\ref{thm 3.3} that $H_{\Delta}=\End_B(T^{\ast}_{\Delta})$ is an
indecomposable hereditary algebra, $T_{\Delta}=D
(T^{\ast}_{\Delta})$ is a tilting module in $\mod H_{\Delta}$, and
the tilted algebra $B_{\Delta} =\End_{H_{\Delta}}(T_{\Delta})$ is
the basic algebra of $B$. Let $H=H_{\Delta}$. Then there exists a
tilting module $T$ in the additive category $\add(T_{\Delta})$ of
$T_{\Delta}$ in $\mod H =\mod H_{\Delta}$ such that
$B=\End_{H}(T)$, $\mathcal{C}_{\overline{T}}$ is the connecting
component $\mathcal{C}_T$ of $\Gamma_B$ determined by $T$, and
$\Delta$ is the section $\Delta_T$ of $\mathcal{C}_T$ given by the
images of a complete set of pairwise nonisomorphic indecomposable
injective $H$-modules via the functor $\Hom_{H}(T,-): \mod H
\rightarrow \mod B$. Since $M_1,...,M_t$ lie on
$\Delta=\Delta_{T}$, we conclude that there is an injective module
$I$ in $\mod H$ such that the right $B$-modules $M=M_1^{n_1}
\oplus ... \oplus M_t^{n_t}$ and $\Hom_{H}(T,I)$ are isomorphic.
This finishes the proof of Theorem \ref{thm 1.1}.\\
We provide now the proof of Corollary \ref{cor 1.2}.\\
Let $A$ be an algebra and $M$ a module in $\mod A$ which is not
the middle of a short chain. It follows from Theorem \ref{thm 1.1}
that there exists a hereditary algebra $H$ and a tilting module
$T$ in $\mod H$ such that the tilted algebra $B=\End_H(T)$ is a
quotient algebra of $A$ and $M$ is isomorphic to the right
$B$-module $\Hom_H(T,I)$ for some injective module $I$ in $\mod H$. Further, the functor $\Hom_H(T,-): \mod
H \rightarrow \mod B$ induces an equivalence of the torsion part
$\mathcal{T}(T)$ of $\mod H$ with the torsion-free part
$\mathcal{Y}(T)$ of $\mod B$, and obviously $I$ belongs to
$\mathcal{T}(T)$. Then we obtain isomorphisms of algebras
\[\End_A(M) \cong \End_B(M) \cong \End_B(\Hom_H(T,I)) \cong \End_H(I).\]
Thus Corollary \ref{cor 1.2} follows from the following known
characterization of hereditary algebras (see \cite{KSZ} for more
general results in this direction).
\begin{prop}
Let $\Lambda$ be an algebra. The following conditions are
equivalent:
\begin{itemize}
\item[$(i)$] $\Lambda$ is a hereditary algebra;
\item[$(ii)$] $\End_{\Lambda}(P)$ is a hereditary algebra for any
projective module $P$ in $\mod \Lambda$;
\item[$(iii)$] $\End_{\Lambda}(I)$ is a hereditary algebra for any
injective module $I$ in $\mod \Lambda$.
\end{itemize}
\end{prop}
\section{\normalsize Examples}
In this section we exhibit examples of modules which are not the middle of short chains, illustrating Theorem \ref{thm 1.1}.\\
\noindent {\sc Example 5.1 \,} Let $K$ be a field, $n$ a positive
integer, $Q$ the quiver
\[
\xymatrix@C=2pc@R=2pc{
1\ar[rrd]_{\alpha_1} & 2\ar[rd]^{\alpha_2} & \cdots & n-1\ar[ld]_{\alpha_{n-1}} & n\ar[lld]^{\alpha_n} \\
&&0
}
\]
and $A=KQ$ the path algebra of $Q$ over $K$. Then the Auslander-Reiten quiver $\Gamma_A$ admits a unique preinjective component $\mathcal{Q}(A)$ whose
right part is of the form
\[
\xymatrix@C=2pc@R=1pc{
{\phantom{III}}\ar[rdd]&&\tau_AI(1)\ar[rdd]&&I(1) \\
{\phantom{III}}\ar[rd]&&\tau_AI(2)\ar[rd]&&I(2) \\
\cdots&\tau_AI(0)\ar[ruu]\ar[ru]\ar[rdd]\ar[rd]&\vdots&I(0)\ar[ruu]\ar[ru]\ar[rdd]\ar[rd]&\vdots \\
{\phantom{III}}\ar[ru]&&\tau_AI(n-1)\ar[ru]&&I(n-1) \\
{\phantom{III}}\ar[ruu]&&\tau_AI(n)\ar[ruu]&&I(n)
}
\]
where $I(0), I(1), I(2), \ldots, I(n-1), I(n)$ are the
indecomposable injective right $A$-modules at the vertices $0, 1,
2, \ldots, n-1, n$, respectively. Consider the semisimple module
$M=I(1)\oplus I(2)\oplus\ldots\oplus I(n-1)\oplus I(n)$ in $\mod
A$. Then $M$ is not the middle of a short chain in $\mod A$ and
$B=A/\ann_A(M)$ is the path algebra $K\Delta$ of the subquiver
$\Delta$ of $Q$ given by the vertices $1, 2, \ldots, n-1, n$,
which is isomorphic to the product of $n$ copies of $K$.
Observe also that the injective modules $I(0), I(1),..., I(n)$ form a section of $\mathcal{Q}(A)$.\\
\noindent {\sc Example 5.2 \,} Let $K$ be a field and $n$ be a
positive integer. For each $i\in\{1,\ldots,n\}$, choose a basic
indecomposable finite-dimensional hereditary $K$-algebra $H_i$, a
multiplicity-free tilting module $T_i$ in $\mod H_i$, and consider
the associated tilted algebra $B_i=\End_{H_i}(T_i)$, the
connecting component $\mathcal{C}_{T_i}$ of $\Gamma_{B_i}$
determined by $T_i$, and the module
$M_{T_i}=\Hom_{H_i}(T_i,D(H_i))$ whose indecomposable direct
summands form the canonical section $\Delta_{T_i}$ of
$\mathcal{C}_{T_i}$. It follows from general theory (\cite{K1},
\cite{St}) that the Auslander-Reiten quiver $\Gamma_{B_i}$
contains at least one preinjective component. Therefore, we may
choose, for any $i\in\{1,\ldots,n\}$, a simple injective right
$B_i$-module $S_i$ lying in a preinjective component
$\mathcal{Q}_i$ of $\Gamma_{B_i}$. Let $B=B_1\times\ldots\times
B_n$ and $S=S_1\oplus\ldots\oplus S_n$. Then $S$ is a
finite-dimensional $K$-$B$-bimodule, and we may consider the
one-point extension algebra
\[ A=\left[\begin{matrix}K&S\\ 0 & B\end{matrix}\right] = \left\{\left[\begin{matrix}\lambda &s\\0&b
\end{matrix}\right]\mid \,\,\lambda\in K,\,\, s\in S,\,\,b\in B\right\}. \]
Since $S$ is a semisimple injective module in $\mod B$, it follows
from general theory (see \cite[(2.5)]{Ri1} or \cite[(XV.1)]{SS2})
that, for any indecomposable module $X$ in $\mod A$ which is not
in $\mod B$, its radical $\rad X$ coincides with the largest right
$B$-submodule of $X$ and belongs to the additive category
$\add(S)$ of $S$. In particular, for any indecomposable module $Z$
in $\mod B$, the almost split sequence in $\mod B$ with the right
term $Z$ is an almost split sequence in $\mod A$. Therefore, the
Auslander-Reiten quiver $\Gamma_A$ of $A$ is obtained from the
disjoint union of the Auslander-Reiten quivers $\Gamma_{B_1}, \ldots,
\Gamma_{B_n}$ by glueing the preinjective components
$\mathcal{Q}_1, \ldots, \mathcal{Q}_n$ into one component via the
new indecomposable projective $A$-module $P$ with $\rad P=S$ (and
possibly adding new components). This implies that the right
$B$-module
\[ M=M_{T_1}\oplus\ldots\oplus M_{T_n} \]
is not the middle of a short chain in $\mod A$. Moreover, since $M_{T_i}$ is a faithful right $B_i$-module for any $i\in\{1,\ldots,n\}$, we
conclude that $B=A/\ann_A(M)$.\\
\noindent {\sc Example 5.3 \,} Let $K$ be a field, $H$ be a basic
indecomposable finite-dimensional hereditary $K$-algebra, $T$ a
multiplicity-free tilting module in $\mod H$, and $B=\End_H(T)$
the associated tilted algebra. For a positive integer $r\geq 2$,
consider the $r$-fold trivial extension algebra
\[
T(B)^{(r)} =
\left\{\begin{array}{c}
\left[\begin{array}{cccccc}
b_1 & 0 & 0 & & & \\
f_2 & b_2 & 0 & & 0 & \\
0 & f_3 & b_3 & & & \\
& & \ddots & \ddots & & \\
& 0 & & f_{r-1} & b_{r-1} & 0 \\
& & & 0 & f_1 & b_1 \\
\end{array}\right] \\
b_1,\dots,b_{r-1} \in B, \ f_1,\dots,f_{r-1} \in D(B) \\
\end{array}\right\}
\]
of $B$. Then $T(B)^{(r)}$ is a basic indecomposable
finite-dimensional selfinjective $K$-algebra which is isomorphic
to the orbit algebra $\widehat{B} / (\nu_{\widehat{B}}^r)$ of the
repetitive algebra $\widehat{B}$ of $B$ with respect to the
infinite cyclic group $(\nu_{\widehat{B}}^r)$ of automorphisms of
$\widehat{B}$ generated by the $r$-th power of the Nakayama
automorphism $\nu_{\widehat{B}}$ of $\widehat{B}$. Moreover, we
have the canonical Galois covering $F^{(r)}: \widehat{B}\to
\widehat{B} / (\nu_{\widehat{B}}^r) = T(B)^{(r)}$ and the
associated push-down functor $F_{\lambda}^{(r)}: \mod\widehat{B}
\to \mod T(B)^{(r)}$ is dense (see \cite[Sections 6 and 7]{SY} for
more details). We also note that $T(B)^{(r)}$ admits a quotient
algebra $B_1\times B_2\times\ldots\times B_{r-1}$ with $B_i=B$ for
any $i\in\{1,2,\ldots,r-1\}$.
Fix a positive integer $m$ and consider the selfinjective algebra $A_m=T(B)^{(4(m+1))}$. For each $i\in\{1,2,\ldots,m\}$, consider the quotient algebra
$B_{4i}=B$ of $A_m$ and the right $B_{4i}$-module $M_{4i}=\Hom_{H}(T,D(B))$, being the direct sum of all indecomposable modules lying on the canonical
section $\Delta_{4i}=\Delta_T$ of the connecting component $\mathcal{C}_{4i}=\mathcal{C}_{T}$ of $\Gamma_{B_{4i}}$ determined by $T$. Then, applying
arguments as in \cite[Section 2]{RSS1}, we conclude that
\[ M=\bigoplus_{i=1}^mM_{4i} \]
is a module in $\mod A_m$ which is not the middle of a short chain and $A_m/\ann_{A_m}(M)$ is isomorphic to the product
\[ \prod_{i=1}^mB_{4i} \]
of $m$ copies of the tilted algebra $B$.
\bigskip
Many real-world problems involve the cooperation of multiple agents, such as unmanned aerial vehicles~\citep{pham2018cooperative, zhao2018multi} and sensor networks~\citep{stranders2009decentralised}. As in single-agent settings, learning control policies for multi-agent teams largely relies on the estimation of action-value functions, whether in value-based~\citep{sunehag2018value, rashid2018qmix, rashid2020weighted} or policy-based~\citep{lowe2017multi, foerster2018counterfactual, wang2021off} approaches. However, learning action-value functions for complex multi-agent tasks remains a major challenge. Learning individual action-value functions~\citep{tan1993multi} is scalable but suffers from learning non-stationarity, because each agent treats the other learning agents as part of its environment. Joint action-value learning~\citep{claus1998dynamics} is free from learning non-stationarity but requires access to global information that is often unavailable during execution due to partial observability and communication constraints.
Factored Q-learning~\citep{guestrin2002multiagent} combines the advantages of these two methods. By learning the global action-value function as a combination of local utilities, factored Q functions maintain scalability while avoiding non-stationarity. Enjoying these advantages, fully decomposed Q functions have contributed significantly to the recent progress of multi-agent reinforcement learning~\citep{samvelyan2019starcraft, wang2021rode}. However, when fully decomposed, local utility functions only depend on local observations and actions, which may lead to miscoordination problems in partially observable environments with stochastic transition functions~\citep{wang2020learning, wang2020qplex} and a game-theoretical pathology called relative overgeneralization~\citep{panait2006biasing, bohmer2020deep}. Relative overgeneralization renders optimal decentralized policies unlearnable when the employed value function does not have enough representational capacity to distinguish other agents' effects on local utility functions.
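As a toy illustration of relative overgeneralization (not taken from the paper), consider the classic climbing game. The sketch below uses uniform averaging over the other agent's actions as a crude stand-in for what a fully decomposed Q-learner converges to under uniform exploration; this averaging rule and the payoff table are assumptions for illustration only.

```python
# Climbing game: rows index agent 1's action, columns index agent 2's action.
R = [[11, -30, 0],
     [-30, 7, 6],
     [0,   0, 5]]

# Fully decomposed utilities: each agent scores its own action by averaging
# the reward over the other agent's actions (a stand-in for a factored
# learner under uniform exploration).
q1 = [sum(row) / 3 for row in R]                              # agent 1
q2 = [sum(R[i][j] for i in range(3)) / 3 for j in range(3)]   # agent 2

greedy = (max(range(3), key=q1.__getitem__),
          max(range(3), key=q2.__getitem__))
optimal = max(((i, j) for i in range(3) for j in range(3)),
              key=lambda a: R[a[0]][a[1]])

print(greedy, R[greedy[0]][greedy[1]])     # (2, 2) 5  <- relative overgeneralization
print(optimal, R[optimal[0]][optimal[1]])  # (0, 0) 11 <- invisible to the factored view
```

The high penalties surrounding the joint optimum $(0,0)$ drag down each agent's marginal utility for its optimal action, so both agents settle on the safe but suboptimal joint action $(2,2)$.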
Coordination graphs~\citep{guestrin2002coordinated} provide a promising approach to solving these problems. Using vertices to represent agents and (hyper-) edges to represent payoff functions defined over the joint action-observation space of the connected agents, a coordination graph expresses a higher-order value decomposition among agents. Finding actions with the maximum value in a coordination graph can be achieved by distributed constraint optimization (DCOP) algorithms~\citep{cheng2012coordinating}, which consists of multiple rounds of message passing along the edges. Recently, DCG~\citep{bohmer2020deep} scales coordination graphs to large state-action spaces, shows its ability to solve the problem of relative overgeneralization, and obtains competitive results on StarCraft II micromanagement tasks. However, DCG focuses on predefined static and dense topologies, which largely lack flexibility for dynamic environments and induce intensive and inefficient message passing.
The question is how to learn dynamic and sparse coordination graphs sufficient for coordinated action selection. This is a long-standing problem in multi-agent learning. Sparse cooperative Q-learning~\citep{kok2006collaborative} learns value functions for sparse coordination graphs, but the graph topology is static and predefined by prior knowledge. \citet{zhang2013coordinating} propose to learn minimized dynamic coordination sets for each agent, but the computational complexity grows exponentially with the neighborhood size of an agent. Recently,~\citet{castellini2019representational} study the representational capability of several sparse graphs but focus on random topologies and stateless games. In this paper, we push these previous works further by proposing a novel deep method that learns context-aware sparse coordination graphs adaptive to the dynamic coordination requirements.
For learning sparse coordination graphs, we propose to use the variance of pairwise payoff functions as an indicator to select edges. Sparse graphs are used both when selecting greedy joint actions for execution and when updating the Q-function. We provide a theoretical insight into our method by proving that the probability of greedy action selection changing after an edge is removed decreases with the variance of the corresponding payoff function. Despite the advantages of sparse topologies, they raise the concern of learning instability. To solve this problem, we further equip our method with network structures based on action representations for utility and payoff learning, which reduce the influence of estimation errors on the learning of sparse topologies. We call the overall learning framework Context-Aware SparsE Coordination graphs (CASEC).
For evaluation, we present the Multi-Agent COordination (MACO) benchmark. This benchmark collects classic coordination problems raised in the literature of multi-agent learning, increases their difficulty, and classifies them into 6 classes. Each task in the benchmark represents a type of problem. We carry out a case study on the MACO~benchmark to show that~CASEC~can discover the coordination dependence among agents under different situations and to analyze how the graph sparsity influences action coordination. We further show that CASEC~can largely reduce the communication cost (typically by 50\%) and perform significantly better than dense, static graphs and several alternative methods for building sparse graphs. We then test CASEC~on the StarCraft II micromanagement benchmark~\citep{samvelyan2019starcraft} to demonstrate its scalability and effectiveness.
\section{Background}
In this paper, we focus on fully cooperative multi-agent tasks that can be modelled as a \textbf{Dec-POMDP}~\citep{oliehoek2016concise} consisting of a tuple $G\textup{\texttt{=}}\langle I, S, A, P, R, \Omega, O, n, \gamma\rangle$, where $I$ is the finite set of $n$ agents, $\gamma\in[0, 1)$ is the discount factor, and $s\in S$ is the true state of the environment. At each timestep, each agent $i$ receives an observation $o_i\in \Omega$ drawn according to the observation function $O(s, i)$ and selects an action $a_i\in A$. Individual actions form a joint action ${\bm{a}}\in A^n$, which leads to a next state $s'$ according to the transition function $P(s'|s, {\bm{a}})$ and a reward $r=R(s,{\bm{a}})$ shared by all agents. Each agent has local action-observation history $\tau_i\in \mathrm{T}\equiv(\Omega\times A)^*$. Agents learn to collectively maximize the global return $Q_{tot}(s, {\bm{a}})=\mathbb{E}_{s_{0:\infty}, a_{0:\infty}}[\sum_{t=0}^{\infty} \gamma^t R(s_t, {\bm{a}}_t) | s_0=s, {\bm{a}}_0={\bm{a}}]$.
In a \textbf{coordination graph}~\citep{guestrin2002coordinated} $\mathcal{G}=\langle\mathcal{V}, \mathcal{E}\rangle$, each vertex $v_i \in \mathcal{V}$ represents an agent $i$, and (hyper-) edges in $\mathcal{E}$ represent coordination dependencies among agents. In this paper, we consider pairwise edges, and such a coordination graph induces a factorization of the global Q function:
\begin{equation}
Q_{tot}(\bm\tau, {\bm{a}}) = \frac{1}{|\mathcal{V}|}\sum_i{q_i(\tau_i, a_i)} + \frac{1}{|\mathcal{E}|}\sum_{\{i,j\}\in\mathcal{E}} q_{ij}(\bm\tau_{ij},{\bm{a}}_{ij}),
\label{equ:q_tot}
\end{equation}
where $q_i$ and $q_{ij}$ are \emph{utility} functions for individual agents and pairwise \emph{payoff} functions, respectively. $\bm\tau_{ij}=\langle\tau_i,\tau_j\rangle$ and ${\bm{a}}_{ij}=\langle a_i,a_j\rangle$ are the joint action-observation history and the joint action of agents $i$ and $j$, respectively.
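To make the factorization above concrete, here is a tiny hand-crafted instance: three agents with two actions each and edges $\{(0,1),(1,2)\}$. All value tables below are hypothetical; in practice the utilities and payoffs are networks conditioned on action-observation histories.

```python
import itertools

# Hypothetical coordination graph: 3 agents, 2 actions, edges E = {(0,1), (1,2)}.
n, edges = 3, [(0, 1), (1, 2)]
q_i = [[0.0, 1.0], [2.0, 0.0], [0.5, 0.5]]        # q_i[agent][a_i]
q_ij = {(0, 1): [[0.0, 5.0], [5.0, 0.0]],         # q_ij[(i,j)][a_i][a_j]
        (1, 2): [[1.0, 0.0], [0.0, 1.0]]}

def q_tot(a):
    # Averaged utilities plus averaged pairwise payoffs, as in the equation above.
    util = sum(q_i[i][a[i]] for i in range(n)) / n
    pay = sum(q_ij[i, j][a[i]][a[j]] for i, j in edges) / len(edges)
    return util + pay

# Brute force over all 2^3 joint actions; only feasible for tiny teams,
# which is why DCOP methods such as Max-Sum replace it with message passing.
best = max(itertools.product(range(2), repeat=n), key=q_tot)
print(best, round(q_tot(best), 3))   # (1, 0, 0) 4.167
```

Note that the greedy joint action cannot be found by maximizing each table independently: the payoff on edge $(0,1)$ rewards agents 0 and 1 for disagreeing, while their utilities alone would suggest otherwise.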
Within a coordination graph, the greedy action selection required by Q-learning cannot be completed by simply computing the maximum of individual utility and payoff functions. Instead, distributed constraint optimization (DCOP)~\citep{cheng2012coordinating} techniques can be used. \textbf{Max-Sum}~\citep{stranders2009decentralised} is a popular implementation of DCOP, which finds optimal actions on a coordination graph $\mathcal{G}=\langle\mathcal{V}, \mathcal{E}\rangle$ via multi-round message passing on a bipartite graph $\mathcal{G}_m=\langle \mathcal{V}_a, \mathcal{V}_q, \mathcal{E}_m \rangle$. Each node $i\in\mathcal{V}_a$ represents an agent, and each node $g\in\mathcal{V}_q$ represents a utility ($q_i$) or payoff ($q_{ij}$) function. Edges in $\mathcal{E}_m$ connect $g$ with the corresponding agent node(s). Message passing on this bipartite graph starts with sending messages from node $i\in\mathcal{V}_a$ to node $g\in\mathcal{V}_q$:
\begin{equation}
m_{i \rightarrow g}\left(a_{i}\right)=\sum_{h \in \mathcal{F}_{i} \backslash g} m_{h \rightarrow i}\left(a_{i}\right)+c_{i g},
\label{equ:3}
\end{equation}
where $\mathcal{F}_{i}$ is the set of nodes connected to node $i$ in $\mathcal{V}_q$, and $c_{i g}$ is a normalizing factor preventing the value of messages from growing arbitrarily large. The message from node $g$ to node $i$ is:
\begin{equation}
m_{g \rightarrow i}\left(a_{i}\right)=\max _{{\bm{a}}_{g} \backslash a_{i}}\left[q\left({\bm{a}}_{g}\right)+\sum_{h \in \mathcal{V}_{g} \backslash i} m_{h \rightarrow g}\left(a_{h}\right)\right],
\label{equ:4}
\end{equation}
where $\mathcal{V}_{g}$ is the set of nodes connected to node $g$ in $\mathcal{V}_a$, ${\bm{a}}_{g}\textup{\texttt{=}}\left\{a_{h}| h \in \mathcal{V}_{g}\right\}$, ${\bm{a}}_{g} \backslash a_{i}\textup{\texttt{=}}\left\{a_{h}| h \in \mathcal{V}_{g} \backslash \{i\}\right\}$, and $q$ represents utility or payoff functions. After several iterations of message passing, each agent $i$ can find its optimal action by calculating $a_{i}^{*}=\operatorname{argmax}_{a_{i}}\sum_{h \in \mathcal{F}_{i}} m_{h \rightarrow i}\left(a_{i}\right)$.
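To make the message-passing scheme concrete, the following sketch runs Max-Sum with pairwise payoff factors only (individual utilities can be folded in as singleton factors). It is an illustrative implementation of Eqs.~\ref{equ:3} and \ref{equ:4}, not the exact parallelized version used in our framework:

```python
import numpy as np

def max_sum(n_agents, n_actions, payoffs, iters=10):
    """Greedy joint-action selection via Max-Sum message passing (Eqs. 3-4).

    payoffs : dict {(i, j): (n_actions, n_actions) payoff table q_ij},
              rows indexed by a_i, columns by a_j.
    """
    edges = list(payoffs)
    zeros = np.zeros(n_actions)
    m_af = {(i, e): zeros.copy() for e in edges for i in e}  # agent -> factor
    m_fa = {(e, i): zeros.copy() for e in edges for i in e}  # factor -> agent
    for _ in range(iters):
        # Eq. 3: agent i sums messages from all its factors except the target;
        # subtracting the mean plays the role of the normalizer c_ig.
        for e in edges:
            for i in e:
                msg = sum((m_fa[(f, i)] for f in edges if i in f and f != e),
                          zeros.copy())
                m_af[(i, e)] = msg - msg.mean()
        # Eq. 4: factor (i, j) maximizes its payoff plus the other agent's
        # message over the other agent's action.
        for (i, j) in edges:
            q = payoffs[(i, j)]
            m_fa[((i, j), i)] = (q + m_af[(j, (i, j))][None, :]).max(axis=1)
            m_fa[((i, j), j)] = (q + m_af[(i, (i, j))][:, None]).max(axis=0)
    # Each agent picks the action maximizing its summed incoming messages.
    return [int(sum((m_fa[(e, i)] for e in edges if i in e), zeros.copy()).argmax())
            for i in range(n_agents)]
```

On acyclic graphs this procedure converges to the optimal joint action; on cyclic graphs it is a well-performing heuristic.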
A drawback of Max-Sum and other message passing methods (e.g., max-plus~\citep{pearl2014probabilistic}) is that running them over the whole system for each action selection results in intensive computation and communication among agents, which is impractical for most applications with limited computational resources and communication bandwidth. In the following sections, we discuss how to address this problem by learning sparse coordination graphs.
\section{Learning Context-Aware Sparse Graphs}\label{sec:graph}
In this section, we introduce our methods for learning context-aware sparse graphs. We first introduce how we construct a sparse graph for effective action selection in Sec.~\ref{sec:q-based_graph}. After that, we introduce our learning framework in Sec.~\ref{sec:framework}. Although sparse graphs can reduce communication overhead, they raise the concern of learning instability. We discuss this problem and how to alleviate it in Sec.~\ref{sec:method-ar}.
\subsection{Constructing Sparse Graphs}\label{sec:q-based_graph}
Action values, especially the pairwise payoff functions, contain much information about the mutual influence between agents. Consider two agents $i$ and $j$. Intuitively, agent $i$ needs to coordinate its action selection with agent $j$ if agent $j$'s action exerts significant influence on the expected utility of agent $i$. For a fixed action $a_i$, $\text{Var}_{a_j}\left[q_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right]$ measures the influence of agent $j$ on the expected payoff. This intuition motivates us to use the \textbf{variance of payoff functions}
\begin{equation}
\zeta_{ij}^{q_{\text{var}}} = \max_{a_i} \text{Var}_{a_j}\left[q_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right],
\end{equation}
as an indicator to construct sparse graphs. The maximization operator guarantees that the most affected action is considered. When $\zeta_{ij}^{q_{\text{var}}}$ is large, the expected utility of agent $i$ fluctuates dramatically with the action of agent $j$, and the two agents need to coordinate their actions. Therefore, with this measurement, we can construct sparse coordination graphs by setting a sparseness-controlling constant $\lambda\in(0,1)$ and selecting the $\lambda n(n-1)$ edges with the largest $\zeta_{ij}$ values.
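Constructing the sparse graph from $\zeta_{ij}^{q_{\text{var}}}$ is then a top-$k$ selection over agent pairs, as in the hypothetical sketch below (assuming payoff tables of shape $|A|\times|A|$ indexed by $(a_i, a_j)$):

```python
import numpy as np

def build_sparse_graph(payoffs, lam):
    """Select the lam * n(n-1) ordered edges with the largest
    zeta_ij = max_{a_i} Var_{a_j}[ q_ij(a_i, a_j) ].

    payoffs : dict {(i, j): np.ndarray of shape (|A|, |A|)}, rows over a_i.
    """
    # Variance over a_j for each fixed a_i (axis=1), then max over a_i.
    zeta = {e: q.var(axis=1).max() for e, q in payoffs.items()}
    k = int(lam * len(payoffs))
    return sorted(zeta, key=zeta.get, reverse=True)[:k]
```

A flat payoff table yields $\zeta_{ij}=0$ and is dropped first, so only edges whose payoff genuinely depends on the other agent's action survive.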
To justify this approach, we theoretically prove that the smaller the value of $\zeta_{ij}$ is, the more likely the Max-Sum algorithm is to select the same actions after the edge $(i,j)$ is removed.
\begin{prop}\label{prop}
For any two agents $i$, $j$ and the edge $e_{ij}$ connecting them in the coordination graph, after removing edge $e_{ij}$, the greedy actions of agents $i$ and $j$ selected by the Max-Sum algorithm remain unchanged with a probability larger than
\begin{equation}
\frac{2}{n}\left[\frac{(\bar{m}-\min_{a_j}m(a_j))(\max_{a_j}m(a_j)-\bar{m})}{\left[\zeta_{ij} \left[q(a_i,a_j)\right] + 2A^2 + 2\sqrt{A^2\left(A^2 + \zeta_{ij} \left[q(a_i,a_j)\right]\right)}\right]^2} - 1\right] ,
\end{equation}
where $m(a_j) = m_{e_{ij}\rightarrow j}(a_j)$, $\bar m$ is the average of $m(a_j)$, $A = \max_{a_j}\left[\max_{a_i} r\left(a_i, a_j\right) - r\left(a_i, a_j\right)\right]$, and $r(a_i, a_j) = q(a_i, a_j) + m_{i\rightarrow e_{ij}}(a_j)$.
\end{prop}
The lower bound in Proposition~\ref{prop} increases with a decreasing $\zeta_{ij}$. Therefore, edges with a smaller $\zeta_{ij}$ are less likely to influence the results of Max-Sum, justifying the way we construct sparse graphs.
\subsection{Learning Framework}\label{sec:framework}
Like conventional Q-learning, CASEC~consists of two main components -- learning value functions and selecting greedy actions. The difference is that these two steps are now carried out on dynamic and sparse coordination graphs.
In CASEC, agents learn a shared utility function $q_{\xi_u}(\cdot | \tau_i)$, parameterized by $\xi_u$, and a shared pairwise payoff function $q_{\xi_p}(\cdot | \bm\tau_{ij})$, parameterized by $\xi_p$. The global Q value function is estimated as:
\begin{equation}
Q_{tot}(\bm\tau,{\bm{a}}) = \frac{1}{|\mathcal{V}|}\sum_i q_{\xi_u}(a_i | \tau_i) + \frac{1}{|\mathcal{V}|(|\mathcal{V}|-1)} \sum_{i\neq j} q_{\xi_p}({\bm{a}}_{ij}|\bm\tau_{ij}),
\end{equation}
which is updated by the TD loss:
\begin{equation}
\mathcal{L}_{TD}(\xi_u, \xi_p) = \mathbb{E}_{\mathcal{D}}\left[\left(r + \gamma \hat{Q}_{tot}(\bm\tau',\text{Max-Sum}(q_{\hat{\xi}_u}, q_{\hat{\xi}_p}))-Q_{tot}(\bm\tau,{\bm{a}})\right)^2\right].\label{eq:q-learning}
\end{equation}
$\text{Max-Sum}(\cdot,\cdot)$ is the greedy joint action selected by Max-Sum, $\hat{Q}_{tot}$ is a target network with parameters $\hat{\xi}_u$, $\hat{\xi}_p$ periodically copied from $Q_{tot}$, and the expectation is estimated with uniform samples from a replay buffer $\mathcal{D}$. Meanwhile, we also minimize a sparseness loss
\begin{equation}\label{equ:sparse_loss}
\mathcal{L}_{\text{sparse}}(\xi_p) = \frac{1}{n(n-1) |A|} \sum_{i\neq j} \sum_{a_i} \text{Var}_{a_j}\left[q_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right],
\end{equation}
which is a regularization on $\zeta_{ij}^{q_{\text{var}}}$. Introducing a scaling factor $\lambda_{\text{sparse}} \in (0,1]$ and minimizing $\mathcal{L}_{TD}(\xi_u, \xi_p)$ $+\lambda_{\text{sparse}}\mathcal{L}_{\text{sparse}}(\xi_p)$ builds in an inductive bias favoring sparse coordination graphs that do not sacrifice global returns.
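As an illustration, the sparseness regularizer in Eq.~\ref{equ:sparse_loss} averages the payoff variance over all ordered agent pairs and actions; the following is a minimal NumPy sketch (in practice this term is computed on network outputs and differentiated):

```python
import numpy as np

def sparseness_loss(payoffs):
    """L_sparse (Eq. 6): mean over ordered agent pairs and actions a_i of
    Var_{a_j}[q_ij(a_i, a_j)], i.e., a regularizer on zeta_ij^{q_var}.

    payoffs : dict over the n(n-1) ordered pairs, each an (|A|, |A|) table.
    """
    tables = list(payoffs.values())
    n_actions = tables[0].shape[0]
    return sum(q.var(axis=1).sum() for q in tables) / (len(tables) * n_actions)
```

Driving this term toward zero flattens the payoff tables along $a_j$, so edges carrying no real coordination dependency are pushed out of the top-$\lambda n(n-1)$ selection.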
Actions with the maximal value are selected for Q-learning and execution. In our framework, such action selections are performed by running Max-Sum on the sparse graphs induced by $\zeta_{ij}^{q_{\text{var}}}$. Running Max-Sum requires passing messages through each node and edge. To speed up action selection, similar to previous work~\citep{bohmer2020deep}, we use parameter-free multi-layer graph neural networks in the message passing module to process messages in parallel.
\subsection{Stabilizing Learning}\label{sec:method-ar}
One issue with estimating $q_{ij}$ is that there are $|A|\times|A|$ action pairs, each of which requires an output head in conventional deep Q networks. As only executed action pairs are updated during Q-learning, the parameters of many output heads remain unchanged for long stretches of time, resulting in estimation errors. Previous work~\citep{bohmer2020deep} uses a low-rank approximation to reduce the number of output heads. However, it is largely unclear how to choose the best rank $K$ for different tasks, and still only $\frac{1}{|A|}$ of the output heads are updated in one Q-learning step.
These estimation errors become especially problematic in CASEC, where building coordination graphs relies on the estimated $q_{ij}$. A negative feedback loop is created because the constructed coordination graphs in turn affect the learning of $q_{ij}$. This loop renders learning unstable (Fig.~\ref{fig:lc-smac}).
We propose to address this issue and stabilize training by 1) periodically fixing the way we construct graphs, using the target payoff function to build them; and 2) accelerating the training of the payoff function between target network updates to reduce estimation errors.
Specifically, for 2), we propose to condition the utility and payoff functions on action representations to improve sample efficiency. We train an action encoder $f_{\xi_a}(a)$, whose input is the one-hot encoding of an action $a$ and whose output is its representation $z_a$. We adopt the technique introduced by~\citet{wang2021rode} to train an effect-based action encoder. Specifically, the action representation $z_a$, together with the current local observations, is used to predict the reward and the observations at the next timestep. The prediction loss is minimized to update the action encoder $f_{\xi_a}(a)$. For more details, we refer readers to Appendix~\ref{appx:action_repr}. The action encoder is trained with a small number of samples when learning begins and remains unchanged for the rest of the training process.
Using action representations, the utility and payoff functions can now be estimated as:
\begin{equation}
\begin{aligned}
& q_{\xi_u}(\tau_i,a_i) = h_{\xi_u}(\tau_i)^{\mathrm{T}}z_{a_i}; \\
& q_{\xi_p}(\bm\tau_{ij},{\bm{a}}_{ij}) = h_{\xi_p}(\bm\tau_{ij})^{\mathrm{T}}[z_{a_i}, z_{a_j}], \label{equ:q_action_repr}\\
\end{aligned}
\end{equation}
where $h$ includes a GRU~\citep{cho2014learning} to process sequential input and outputs a vector with the same dimension as the corresponding action representation. $[\cdot,\cdot]$ denotes vector concatenation. Using Eq.~\ref{equ:q_action_repr}, no matter which action is selected for execution, all parameters in the framework ($\xi_u$ and $\xi_p$) are updated. The detailed network structure can be found in Appendix~\ref{appx:hyper}.
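The bilinear forms in Eq.~\ref{equ:q_action_repr} can be evaluated for all actions at once. The sketch below assumes the history embeddings $h_{\xi_u}(\tau_i)$ and $h_{\xi_p}(\bm\tau_{ij})$ have already been computed (in the full model they are produced by GRU networks; the function names here are ours):

```python
import numpy as np

def utility(h_tau, z):
    """q_i(tau_i, a_i) = h(tau_i)^T z_{a_i}, evaluated for every action.
    h_tau : (d,) history embedding;  z : (|A|, d) action representations."""
    return z @ h_tau

def payoff(h_tau_ij, z):
    """q_ij(tau_ij, (a_i, a_j)) = h(tau_ij)^T [z_{a_i}, z_{a_j}].
    h_tau_ij : (2d,) pairwise embedding; returns an (|A|, |A|) payoff table.
    The inner product with the concatenation splits into two additive halves."""
    n_act, d = z.shape
    left, right = h_tau_ij[:d], h_tau_ij[d:]
    return (z @ left)[:, None] + (z @ right)[None, :]
```

Because every action value shares the parameters producing $h$, a gradient step on any executed action pair updates the entire head, which is the stabilizing property discussed above.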
\section{Multi-Agent Coordination Benchmark}
To evaluate our sparse graph learning algorithm, we collect classic coordination problems from the cooperative multi-agent learning literature, increase their difficulty, and classify them into different types. Then, 6 representative problems are selected and presented as a new benchmark called the Multi-Agent COordination (MACO) challenge (Table~\ref{tab:maco}).
\begin{wraptable}[14]{r}{0.6\linewidth}
\vspace{-1em}
\caption{Multi-agent coordination benchmark.}
\label{tab:maco}
\centering
\begin{tabular}{CRCRCRCRCR}
\toprule
\multicolumn{2}{c}{Task} &
\multicolumn{2}{l}{Factored} &
\multicolumn{2}{l}{\makecell{Pairwise}} &
\multicolumn{2}{l}{\makecell{Dynamic}} &
\multicolumn{2}{l}{\# Agents}\\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\multicolumn{2}{c}{Aloha} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{10} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\multicolumn{2}{c}{Pursuit} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{10} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\multicolumn{2}{c}{Hallway} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{12} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\multicolumn{2}{c}{Sensor} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{15} \\
\midrule
\multicolumn{2}{c}{Gather} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{--} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{5} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\multicolumn{2}{c}{Disperse} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{--} & \multicolumn{2}{c}{$\surd$} & \multicolumn{2}{c}{12} \\
\bottomrule
\end{tabular}
\end{wraptable}
At the first level, tasks are classified as factored and non-factored games, where factored games present an explicit decomposition of global rewards. Factored games are further categorized according to two properties -- whether the task requires pairwise or higher-order coordination, and whether the underlying coordination relationships change over time. For non-factored games, we divide them into two classes according to whether the task exhibits static coordination relationships among agents. To better test the performance of different methods, we increase the difficulty of the included problems by extending stateless games to temporally extended settings ($\mathtt{Gather}$ and $\mathtt{Disperse}$), complicating the reward function ($\mathtt{Pursuit}$), or increasing the number of agents ($\mathtt{Aloha}$ and $\mathtt{Hallway}$). We now briefly describe the setting of each game. For detailed descriptions, we refer readers to Appendix~\ref{appx:maco}.
$\mathtt{Aloha}$~\citep{hansen2004dynamic,oliehoek2010value} consists of $10$ islands in a $2\times 5$ array. Each island has a backlog of messages to send, and at each timestep it either sends one message or stays silent. When two neighboring islands send simultaneously, the messages collide and have to be resent. Islands start with $1$ package in the backlog. At each timestep, with probability $0.6$ a new packet arrives if the maximum backlog ($5$) has not been reached. Each agent observes its position and the number of packages in its backlog. Agents receive a global reward of $0.1$ for each successful transmission, and $-10$ for a collision.
$\mathtt{Pursuit}$, also called $\mathtt{Predator}$ $\mathtt{and}$ $\mathtt{Prey}$, is a classic coordination problem~\citep{benda1986optimal, stone2000multiagent, son2019qtran}. Ten agents (predators) roam a $10\times 10$ map populated with 5 random walking preys for 50 environment steps. One prey is captured if two agents catch it simultaneously, after which the catching agents and the prey are removed from the map, resulting in a team reward of $1$. If only one agent tries to catch the prey, the prey would not be captured and the agents will be punished. We consider a challenging version of $\mathtt{Pursuit}$ by setting the punishment to $1$, which is the same as the reward obtained by a successful catch.
$\mathtt{Hallway}$~\citep{wang2020learning} is a multi-chain Dec-POMDP. We increase the difficulty of $\mathtt{Hallway}$ by introducing more agents and grouping them (Fig.~\ref{fig:hallway-env}). One agent randomly spawns at a state in each chain. Each agent observes its own position and chooses to move left, move right, or keep still at each timestep. Agents win with a global reward of $1$ if they arrive at state $g$ simultaneously with the other agents in the same group. If $n_g > 1$ groups attempt to move to $g$ at the same timestep, they remain motionless and agents receive a global punishment of $-0.5 * n_g$.
$\mathtt{Sensor}$ has been extensively studied~\citep{lesser2012distributed, zhang2011coordinated}. We consider 15 sensors in a $3\times 5$ matrix. Sensors can scan the eight nearby points. Each scan incurs a cost of $-1$, and agents can take a $\mathtt{noop}$ action to avoid this cost. Three targets wander randomly in the grid. If $k>2$ sensors scan a target simultaneously, the system gets a reward of $1.5*k$. Agents can observe the id and position of nearby targets.
$\mathtt{Gather}$ is an extension of the $\mathtt{Climb}$ Game~\citep{wei2016lenient}. In $\mathtt{Climb}$ Game, each agent has three actions: $A_i=\{a_0, a_1, a_2\}$. Action $a_0$ yields no reward (0) if only some agents choose it, but a high reward (10) if all choose it. Otherwise, if no agent chooses action $a_0$, a reward $5$ is obtained. We increase the difficulty of this game by making it temporally extended and introducing stochasticity. Actions are no longer atomic, and agents need to learn policies to realize these actions by navigating to goals $g_1$, $g_2$ and $g_3$ (Fig.~\ref{fig:gather-env}). Moreover, for each episode, one of $g_1$, $g_2$ and $g_3$ is randomly selected as the optimal goal (corresponding to $a_0$ in the original game).
$\mathtt{Disperse}$ consists of $12$ agents. At each timestep, agents choose to work at one of 4 hospitals by selecting an action in $A_i=\{a_0, a_1, a_2,a_3\}$. At timestep $t$, hospital $j$ needs $x^j_t$ agents for the next timestep. One hospital is randomly selected and its $x^j_t$ is a positive number, while the needs of the other hospitals are $0$. If $y^j_{t+1}<x^j_t$ agents go to the selected hospital, the whole team receives a punishment of $y^j_{t+1}-x^j_t$. Agents observe the local hospital's id and its need for the next timestep.
\section{Case Study: Learning Sparse Graphs on Sensors}
We are particularly interested in the dynamics and results of sparse graph learning. Therefore, we carry out a case study on $\mathtt{Sensor}$. When training CASEC~on this task, we select the $10\%$ of edges with the largest $\zeta_{ij}^{q_\text{var}}$ values to construct sparse graphs.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig-case_study/three.pdf}
\caption{Left: Learning curves (the number of successfully scanned targets) of CASEC~and DCG on $\mathtt{Sensors}$. Middle: The influence of graph sparseness on performance (reward and the number of scanned targets). Right: An example coordination graph learned by our method on $\mathtt{Sensor}$.}
\label{fig:three}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig-case_study/case-study.pdf}
\caption{Coordination graphs learned by our method on $\mathtt{Sensor}$.}
\label{fig:case}
\end{figure}
\textbf{Interpretable sparse coordination graphs.} In Fig.~\ref{fig:three} right, we show a screenshot of the game with the coordination graph learned at this timestep. We can observe that all edges in the learned graph involve agents around the targets. The action proposed by the individual utility function of $\mathtt{agent\ 8}$ is to scan $\mathtt{target\ 1}$. After coordinating its action with other agents, $\mathtt{agent\ 8}$ changes its action selection and scans $\mathtt{target\ 2}$, resulting in an optimal solution for the given configuration. This result is in line with our theoretical analysis in Sec.~\ref{sec:q-based_graph}: the most important edges are characterized by large $\zeta$ values.
\textbf{Influence of graph sparseness on performance.} It is worth noting that, with fewer edges in the coordination graph, CASEC~achieves better performance than DCG on $\mathtt{Sensor}$ (Fig.~\ref{fig:three} left). This observation may be counter-intuitive at first glance. To study this problem, we load the converged models learned by CASEC~and DCG, gradually remove edges from the full coordination graph, and track the change in the number of successfully scanned targets and the obtained reward. Edges are removed in descending order of $\zeta_{ij}^{q_{\text{var}}}$. Results are shown in Fig.~\ref{fig:three} middle.
It can be observed that the performance of DCG (both the number of scanned targets and the return) does not change with the number of edges in the coordination graph. In other words, only the individual utility function contributes to action selection. Screenshots shown in Fig.~\ref{fig:case} (right column) are in line with this observation. With more edges, DCG makes a less optimal decision: agent 14 no longer scans target 2.
In contrast, the performance of CASEC~grows with more edges in the coordination graph. Another interesting observation is that the number of scanned targets grows faster than the return. By referring to Fig.~\ref{fig:case} (left column), we can conclude that CASEC~first selects edges that help agents get more reward by scanning targets, and then selects edges that eliminate useless scan actions. These results show that our method can distinguish the most important edges.
We also study the influence of the sparseness loss (Eq.~\ref{equ:sparse_loss}). As shown in Fig.~\ref{fig:three} middle, the gap between return and the number of scanned targets is larger without the sparseness loss. The lower return when the number of edges is small indicates that CASEC~without the sparseness loss cannot effectively select edges that can bring more rewards.
\section{Experiments}\label{sec:exp}
In this section, we design experiments to answer the following questions: (1) How much communication can be saved by our method? How does communication threshold influence performance on factored and non-factored games? (Sec.~\ref{sec:exp-comm}) (2) How does our method compare to state-of-the-art multi-agent learning methods? (Sec.~\ref{sec:exp-maco},~\ref{sec:exp-smac}) (3) Is our method efficient in settings with larger action-observation spaces? (Sec.~\ref{sec:exp-smac}) For results in this section, we show the median performance with 8 random seeds as well as the 25-75\% percentiles.
\subsection{Graph Sparseness}\label{sec:exp-comm}
\begin{wraptable}[9]{r}{0.32\linewidth}
\vspace{-1em}
\caption{Percentage of communication saved for each task.}
\label{tab:comm}
\centering
\begin{tabular}{CRCRCR}
\toprule
\multicolumn{2}{c}{Aloha} &
\multicolumn{2}{l}{Pursuit} &
\multicolumn{2}{l}{Hallway} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\multicolumn{2}{c}{80.0\%} & \multicolumn{2}{c}{70.0\%} & \multicolumn{2}{c}{50.0\%} \\
\midrule
\multicolumn{2}{l}{Sensor} &
\multicolumn{2}{l}{Gather} &
\multicolumn{2}{l}{Disperse} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\multicolumn{2}{c}{90.0\%} & \multicolumn{2}{c}{30.0\%} & \multicolumn{2}{c}{60.0\%}\\
\bottomrule
\end{tabular}
\vspace{-1em}
\end{wraptable}
An important advantage of learning sparse coordination graphs is reduced communication costs. The complexity of running Max-Sum for each action selection is $O(k(|\mathcal{V}|+|\mathcal{E}|))$, where $k$ is the number of iterations of message passing. Sparse graphs cut down communication costs by reducing the number of edges.
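To see the scale of the savings, one can count messages under the $O(k(|\mathcal{V}|+|\mathcal{E}|))$ bound. The sketch below assumes, for illustration, one message per vertex and edge per iteration; the exact constants depend on the message-passing schedule, but the linear dependence on $|\mathcal{E}|$ is what sparsification exploits:

```python
def messages_per_selection(n_agents, n_edges, k):
    """Messages exchanged per greedy action selection under the
    O(k(|V| + |E|)) bound, assuming one message per vertex/edge/iteration."""
    return k * (n_agents + n_edges)

# Hypothetical Sensor-like setting with 15 agents and k = 4 iterations:
full = messages_per_selection(15, 15 * 14 // 2, 4)    # complete graph
sparse = messages_per_selection(15, 10, 4)            # ~90% of edges cut
```

Under these illustrative assumptions the sparse graph needs roughly $100/480 \approx 21\%$ of the messages of the complete graph, in line with the cut rates in Table~\ref{tab:comm}.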
We carry out a grid search to find the communication threshold under which sparse graphs have the best performance. We find that most implementations of dynamically sparse graphs require similar numbers of edges to prevent performance from dropping significantly. In Table~\ref{tab:comm}, we show the communication cut rates used when benchmarking our method. Generally speaking, non-factored games typically require more messages than factored games, while, for most tasks, at least $50\%$ of messages can be saved without sacrificing learning performance.
\begin{wrapfigure}[13]{r}{0.5\linewidth}
\vspace{-1em}
\includegraphics[width=\linewidth]{fig-lc/threshold_test.pdf}
\caption{The influence of graph sparseness (1.0 represents complete graphs) on the performance on factored games ($\mathtt{Sensor}$, left) and non-factored games ($\mathtt{Gather}$, right).}
\label{fig:threshold}
\end{wrapfigure}
\textbf{Communication threshold vs. performance} In Fig.~\ref{fig:threshold}, we show the performance of our method under different communication thresholds. The threshold controls the sparseness of edges. We can observe that, on the factored game $\mathtt{Sensor}$, performance first grows and then drops when more edges are included in the coordination graphs. These observations are in line with the fact that sparse coordination graphs can outperform complete graphs and fully-decomposed value functions on this task. In contrast, for the non-factored game $\mathtt{Gather}$, performance stabilizes beyond a certain threshold. Non-factored games usually involve complex coordination relationships, and denser topologies are more suitable for this type of task.
\subsection{MACO: Multi-Agent Coordination Benchmark}\label{sec:exp-maco}
We compare our method with state-of-the-art fully-decomposed multi-agent value-based methods (VDN \citep{sunehag2018value}, QMIX \citep{rashid2018qmix}, and Weighted QMIX \citep{rashid2020weighted}) and a coordination graph learning method (DCG \citep{bohmer2020deep}) on the MACO~benchmark (Fig.~\ref{fig:lc-maco}). Since the number of actions is not very large in the MACO~benchmark, we do not use action representations when estimating the utility difference function for CASEC.
We can see that our method significantly outperforms fully-decomposed value-based methods. The reason is that fully-decomposed methods suffer from the \emph{relative overgeneralization} issue and miscoordination problems in partially observable environments with stochasticity. For example, on task $\mathtt{Pursuit}$~\citep{benda1986optimal}, if more than one agent catches a prey simultaneously, these agents are rewarded $1$. However, if only one agent catches a prey, it fails and gets a punishment of $-1$. For an agent with a limited sight range, the reward it obtains when taking the same action (catching a prey) under the same local observation depends on the actions of other agents and changes dramatically. This is the relative overgeneralization problem. Another example is $\mathtt{Hallway}$~\citep{wang2020learning}, where several agents need to reach a goal state simultaneously but without knowledge of each other's locations. Fully-decomposed methods cannot solve this problem if the initial positions of agents are stochastic.
For DCG, we use its default settings of complete graphs and no low-rank approximation. We observe that DCG outperforms CASEC~on $\mathtt{Hallway}$ but is less effective on tasks characterized by sparse coordination interdependence like $\mathtt{Sensor}$. We hypothesize that this is because coordinating actions with all other agents requires the shared estimator to express the payoff functions of most agent pairs accurately enough, which is beyond the representational capacity of the network or requires more samples to learn, hurting the performance of DCG on loosely coupled tasks.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{fig-lc/smac.pdf}
\caption{Performance and TD errors compared to baselines and ablations on the SMAC benchmark.}
\label{fig:lc-smac}
\vspace{-1.5em}
\end{figure}
\subsection{StarCraft II Micromanagement Benchmark}\label{sec:exp-smac}
We compare our method against the state-of-the-art coordination graph learning method (DCG~\citep{bohmer2020deep}) and fully decomposed value-based MARL algorithms (VDN \citep{sunehag2018value}, QMIX \citep{rashid2018qmix}). For CASEC, we use action representations to estimate the payoff function. We train the action encoder for 50$k$ samples and keep action representations unchanged afterward. In Fig.~\ref{fig:lc-smac}, we show results on $\mathtt{5m\_vs\_6m}$ and $\mathtt{MMM2}$. Detailed hyperparameter settings of our method can be found in Appendix~\ref{appx:hyper}.
For DCG, we use its default settings, including low-rank approximation for learning the payoff function. We can see that CASEC~outperforms DCG by a large margin. This result suggests that sparse coordination graphs provide better scalability to large action-observation spaces than dense and static graphs. However, the low-rank approximation still induces large estimation errors for DCG. We replace the low-rank approximation with action representations and find that DCG (\emph{Full (action repr.)}) achieves similar performance to CASEC~after 5M steps, but CASEC~is still more sample-efficient. Moreover, taking advantage of higher-order value decomposition, CASEC~can represent more complex coordination dynamics than fully decomposed value functions and thus performs better.
\textbf{Ablation study} Our method is characterized by two contributions: context-aware sparse topologies and action representations for learning the utility difference function. In this section, we design ablations to show their contributions.
The effect of sparse topologies can be observed by comparing CASEC~to \emph{Full (action repr.)}, which is the same as CASEC~other than using complete coordination graphs. We observe that sparse graphs enjoy better sample efficiency than full graphs, and the advantage becomes less obvious as more samples are collected. This observation indicates that sparse graphs introduce inductive biases that can accelerate training, and their representational capacity is similar to that of full graphs.
From the comparison between CASEC~and CASEC~using conventional Q networks (\emph{w/o action repr.}), we can see that using action representations significantly stabilizes learning. For example, learning diverges on $\mathtt{5m\_vs\_6m}$ without action representations. As analyzed before, this is because a negative feedback loop is created between the inaccurate utility difference function and the coordination graphs.
To further confirm that action representations reduce estimation errors and thus alleviate the learning oscillation discussed in Sec.~\ref{sec:method-ar}, we visualize the TD errors of CASEC~and its ablations during training in Fig.~\ref{fig:lc-smac} right. We can see that action representations dramatically reduce the TD errors. For comparison, the low-rank approximation also reduces the TD errors, but much less significantly. Smaller TD errors indicate that action representations provide better estimations of the value function, and learning with sparse graphs can thus be stabilized (Fig.~\ref{fig:lc-smac} left).
\section{Conclusion}
We study how to learn dynamic sparse coordination graphs, which is a long-standing problem in cooperative MARL. We propose a specific implementation and theoretically justify it. Empirically, we evaluate the proposed method on a new multi-agent coordination benchmark. Moreover, we equip our method with action representations to improve the sample efficiency of payoff learning and stabilize training. We show that sparse and adaptive topologies can largely reduce communication overhead as well as improve the performance of coordination graphs. We expect our work to extend MARL to more realistic tasks with complex coordination dynamics.
One limitation of our method is that the learned sparse graphs are not always cycle-free. Since the Max-Sum algorithm guarantees optimality only on acyclic graphs, our method may select sub-optimal actions. We plan to study how to solve this problem in future work.
Another limitation is that we set a fixed communication threshold and then train and test our method. For future work, we plan to study how to adaptively determine the communication threshold during learning, automatically and accurately finding the minimum threshold that guarantees learning performance.
\paragraph{Reproducibility} The source code for all the experiments along with a README file with instructions on how to run these experiments is attached in the supplementary material. In addition, the settings and parameters for all models and algorithms mentioned in the experiment section are detailed in Appendix~\ref{appx:hyper}.
\section{MACO: Multi-Agent Coordination Benchmark}\label{appx:maco}
In this paper, we study how to learn context-aware sparse coordination graphs. For this purpose, we propose a new Multi-Agent COordination (MACO) benchmark (Table~\ref{tab:maco}) to evaluate different implementations and benchmark our method. This benchmark collects classic coordination problems in the literature of cooperative multi-agent learning, increases their difficulty, and classifies them into different types. We now describe the detailed settings of tasks in the MACO~benchmark.
\subsection{Task Settings}
\textbf{Factored Games} are characterized by a clear factorization of global rewards. We further classify factored games into 4 categories according to whether coordination dependency is pairwise and whether the underlying coordination graph is dynamic (Table~\ref{tab:maco}).
$\mathtt{Aloha}$ (\citet{oliehoek2010value}, also similar to the Broadcast Channel benchmark problem proposed by \citet{hansen2004dynamic}) consists of $10$ islands, each equipped with a radio tower to transmit messages to its residents. Each island has a backlog of messages that it needs to send, and agents can choose to send one message or not at each timestep. Due to the proximity of islands, communications from adjacent islands interfere. This means that when two neighboring agents attempt to send simultaneously, a collision occurs and the messages have to be resent. Each island starts with $1$ package in its backlog. At each timestep, with probability $0.6$ a new packet arrives if the maximum backlog (set to $5$) has not been reached. Each agent observes its position and the number of packages in its backlog. A global reward of $0.1$ is received by the system for each successful transmission, while a punishment of $-10$ is incurred if the transmission leads to a collision.
$\mathtt{Pursuit}$, also called $\mathtt{Predator}$ $\mathtt{and}$ $\mathtt{Prey}$, is a classic coordination problem~\citep{benda1986optimal, stone2000multiagent, son2019qtran}. In this game, ten agents (predators) roam a $10\times 10$ map populated with 5 randomly walking prey for 50 environment steps. Based on partial observations of any adjacent prey and other predators, agents choose to move in four directions, stay motionless, or catch a prey (specified by its id). A prey is captured if two agents catch it simultaneously, after which the catching agents and the prey are removed from the map, resulting in a team reward of $1$. However, if only one agent tries to catch a prey, the prey is not captured and the agent is punished. The difficulty of $\mathtt{Pursuit}$ is largely decided by the relative scale of the punishment compared to the catching reward~\citep{bohmer2020deep}, because a large punishment exacerbates the relative overgeneralization pathology. In the MACO~benchmark, we consider a challenging version of $\mathtt{Pursuit}$ by setting the magnitude of the punishment to $1$, the same as the reward obtained by a successful catch.
$\mathtt{Hallway}$~\citep{wang2020learning} is a multi-chain Dec-POMDP whose stochasticity and partial observability lead to fully-decomposed value functions learning sub-optimal strategies. In the MACO~benchmark, we increase the difficulty of $\mathtt{Hallway}$ by introducing more agents and grouping them (Fig.~\ref{fig:hallway-env}). One agent randomly spawns at a state in each chain. Each agent observes its own position and chooses to move left, move right, or keep still at each timestep. A group wins if all of its agents arrive at state $g$ simultaneously. In Fig.~\ref{fig:hallway-env}, different groups are drawn in different colors.
\begin{wrapfigure}[11]{r}{0.4\linewidth}
\vspace{-1em}
\centering
\includegraphics[width=\linewidth]{fig-maco/hallway-env.pdf}
\caption{Task $\mathtt{Hallway}$~\citep{wang2020learning}. To increase the difficulty of the game, we consider a multi-group version. Different colors represent different groups.} \label{fig:hallway-env}
\end{wrapfigure}
Each winning group induces a global reward of 1. Otherwise, if any agent arrives at $g$ earlier than the others, the system receives no reward and all agents in that group are removed from the game. If $n_g > 1$ groups attempt to move to $g$ at the same timestep, they remain in place and agents receive a global punishment of $-0.5 \times n_g$. The horizon is set to $\max_i l_i + 10$ to avoid an infinite loop, where $l_i$ is the length of chain $i$.
$\mathtt{Sensor}$ (Fig. 3 in the main text) has been extensively studied in cooperative multi-agent learning~\citep{lesser2012distributed, zhang2011coordinated}. We consider 15 sensors arranged in a 3 by 5 matrix. Each sensor is controlled by an agent and can scan the eight nearby points. Each scan induces a cost of $-1$, and agents can choose $\mathtt{noop}$ to save the cost. Three targets wander randomly in the grid. If two sensors scan a target simultaneously, the system gets a reward of 3. The reward increases linearly with the number of agents who scan a target simultaneously -- all agents are jointly rewarded by $4.5$ and $6$ if three and four agents scan a target, respectively. Agents can observe the id and position of nearby targets.
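The scanning reward rule can be written as a tiny function. Our reading of "increases linearly" is $1.5k$ for $k \ge 2$ scanners, which matches the quoted values; the function name is ours:

```python
def sensor_scan_reward(n_scanning):
    """Joint reward when n_scanning agents scan the same target at once.

    A single scanner earns nothing; from two scanners on, the reward grows
    linearly (3, 4.5, 6 for 2, 3, 4 agents), i.e. 1.5 per scanning agent.
    The per-scan cost of -1 is accounted for separately.
    """
    if n_scanning < 2:
        return 0.0
    return 1.5 * n_scanning
```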
\textbf{Non-factored games} do not present an explicit decomposition of global rewards. We classify non-factored games according to whether the game can be solved by a static (sparse) coordination graph in a single episode.
$\mathtt{Gather}$ is an extension of the $\mathtt{Climb}$ Game~\citep{wei2016lenient}. In the $\mathtt{Climb}$ Game, each agent has three actions: $A_i=\{a_0, a_1, a_2\}$. Action $a_0$ yields no reward if only some agents choose it, but a high reward if all agents choose it. The other two actions are sub-optimal but induce a positive reward without requiring precise coordination:
\begin{equation}
r({\bm{a}}) = \begin{cases}
10 & \# a_0 = n,\\
0 & 0< \# a_0< n, \\
5 & \text{otherwise}.
\end{cases}\label{equ:climb_env}
\end{equation}
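Eq.~(\ref{equ:climb_env}) translates directly into code; a minimal sketch:

```python
def climb_reward(actions):
    """Global reward of the Climb Game, following the case analysis above."""
    n = len(actions)
    n_a0 = sum(a == 0 for a in actions)   # how many agents chose a_0
    if n_a0 == n:
        return 10   # perfect coordination on the optimal action
    if 0 < n_a0 < n:
        return 0    # mis-coordination: some, but not all, chose a_0
    return 5        # everyone settled on sub-optimal actions
```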
We increase the difficulty of this game by making it temporally extended and introducing stochasticity. We consider three actions. Actions are no longer atomic, and agents need to learn policies to realize these actions by navigating to goals $g_1$, $g_2$ and $g_3$ (Fig.~\ref{fig:gather-env}).
\begin{wrapfigure}[13]{r}{0.35\linewidth}
\vspace{-1em}
\centering
\includegraphics[width=\linewidth]{fig-maco/gather-env.pdf}
\caption{Task $\mathtt{Gather}$. To increase the difficulty of this game, we consider a temporally extended version and introduce stochasticity.}
\label{fig:gather-env}
\end{wrapfigure}
Moreover, for each episode, one of $g_1$, $g_2$ and $g_3$ is randomly selected as the optimal goal (corresponding to $a_0$ in Eq.~\ref{equ:climb_env}). Agents spawn randomly, and only agents initialized near the optimal goal know which goal is optimal. Agents need to simultaneously arrive at a goal state to get any reward. If all agents are at the optimal goal state, they get a high reward of $10$. If all of them are at other goal states, they would be rewarded $5$. The minimum reward would be received if only some agents gather at the optimal goal. We further increase the difficulty by setting this reward to $-5$. It is worth noting that, for any single episode, $\mathtt{Gather}$ can be solved using a static and sparse coordination graph -- for example, agents can collectively coordinate with an agent who knows the optimal goal.
\begin{table*} [ht]
\caption{Temporally average performance on the multi-agent coordination benchmark.}
\label{tab:maco_avg_perf}
\centering\scriptsize
\begin{tabular}{c| P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c | c}
\Xhline{2\arrayrulewidth}
Topology& \multicolumn{3}{c|}{Full} & \multicolumn{3}{c|}{Rand.} & \multicolumn{3}{c|}{$\delta_{\max}$} & \multicolumn{3}{c|}{$q_{\text{var}}$} & \multicolumn{3}{c|}{$\delta_{\text{var}}$} & Attn. \\
\hline
Loss & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & -- \\
\hline
Aloha & 11.7& 36.8& 40.8& 39.1& 36.0& 37.2& 34.6& 32.0& 41.4& \textbf{44.8}& \textbf{41.9}& 32.2& 33.8& 29.7& \textbf{42.1} & 15.9\\
Pursuit & 3.93& 3.96& 3.98& 3.89& 3.88& 3.85& 4.05& \textbf{4.11}& \textbf{4.12}& 4.05& 4.08& 4.08& 4.02& 4.07& \textbf{4.09} & 3.87 \\
Hallway & \textbf{0.51}& 0.46& \textbf{0.51}& 0.47& 0.44& 0.47& 0.29& 0.38& 0.41& 0.33& 0.47& 0.42& \textbf{0.54}& 0.50& 0.46 & 0.00 \\
Sensor & 6.32& 0.00& 0.00& 6.80& 8.96& 6.77& \textbf{21.0}& \textbf{20.6}& 20.4& 20.4& 19.9& \textbf{20.7}& 20.1& 20.3& 19.7& 5.67\\
Gather & 0.52& \textbf{0.73}& 0.72& 0.54& \textbf{0.73}& 0.61& 0.67& \textbf{0.77}& \textbf{0.73}& 0.58& 0.72& 0.67& 0.64& 0.68& 0.65& 0.34\\
Disperse & 6.94& 6.67& 6.68& 7.06& 7.07& 7.00& 7.60& 7.68& 7.86& 7.56& 7.77& 7.70& \textbf{8.05}& \textbf{8.20}& 7.88& \textbf{8.49}\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{table*}
\begin{table*} [t]
\caption{Stability of each implementation. The averaged distance between raw and highly smoothed curves is shown. The smaller the values are, the more stable the learning curves are. Stability scores shown in the main text have been scaled reversely to $[0,1]$.}
\label{tab:maco_fluc}
\centering\scriptsize
\begin{tabular}{c| P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c | c}
\Xhline{2\arrayrulewidth}
Topology& \multicolumn{3}{c|}{Full} & \multicolumn{3}{c|}{Rand.} & \multicolumn{3}{c|}{$\delta_{\max}$} & \multicolumn{3}{c|}{$q_{\text{var}}$} & \multicolumn{3}{c|}{$\delta_{\text{var}}$} & Attn. \\
\hline
Loss & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & -- \\
\hline
Aloha & 2.94& 1.18& 0.96& 0.25& 0.22& 0.22& 1.28& 0.93& 1.03& 0.66& 0.81& 2.20& 0.85& 1.79& 0.75& 1.33 \\
Pursuit & 0.35& 0.31& 0.41& 0.23& 0.21& 0.25& 0.20& 0.20& 0.26& 0.20& 0.13& 0.21& 0.19& 0.18& 0.18& 0.19\\
Hallway & 1.78& 1.85& 0.65& 0.42& 0.81& 1.19& 0.92& 0.84& 0.72& 0.67& 0.92& 0.78& 0.47& 1.79& 2.13& 1.63\\
Sensor & 0.76& 0.63& 0.89& 0.49& 0.18& 0.44& 0.11& 0.15& 0.18& 0.14& 0.11& 0.12& 0.12& 0.11& 0.11& 0.23\\
Gather & 0.29& 0.21& 0.21& 0.31& 0.27& 0.28& 0.14& 0.19& 0.21& 0.34& 0.14& 0.21& 0.21& 0.19& 0.21& 0.59\\
Disperse & 0.35& 0.53& 0.41& 0.26& 0.28& 0.39& 0.16& 0.23& 0.19& 0.25& 0.46& 0.33& 0.17& 0.16& 0.24& 0.06\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig-maco/aloha.pdf}
\caption{Performance of different implementations on $\mathtt{Aloha}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:aloha-lc}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/pursuit.pdf}
\caption{Performance of different implementations on $\mathtt{Pursuit}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:prey-lc}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/hallway.pdf}
\caption{Performance of different implementations on $\mathtt{Hallway}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:hallway-lc}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/sensor.pdf}
\caption{Performance of different implementations on $\mathtt{Sensor}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:sensor-lc}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/gather.pdf}
\caption{Performance of different implementations on $\mathtt{Gather}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:gather-lc}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/disperse.pdf}
\caption{Performance of different implementations on $\mathtt{Disperse}$. Different colors indicate different topologies. Performance of different losses is shown in different sub-figures.}
\label{fig:disperse-lc}
\end{figure*}
$\mathtt{Disperse}$ consists of $12$ agents. At each timestep, agents can choose to work at one of 4 hospitals by selecting an action in $A_i=\{a_0, a_1, a_2, a_3\}$. At timestep $t$, hospital $j$ needs $x^j_t$ agents for the next timestep. One hospital is randomly selected and its $x^j_t$ is a positive number, while the need of the other hospitals is $0$. If only $y^j_{t+1}<x^j_t$ agents go to the selected hospital, the whole team is punished by $y^j_{t+1}-x^j_t$. Each agent observes the id of its current hospital and that hospital's need for the next timestep.
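The Disperse reward rule above can be sketched as follows; the function signature and names are our own illustration:

```python
def disperse_reward(actions, selected_hospital, need):
    """Global reward for one Disperse timestep, as described above.

    actions:           per-agent hospital choices in {0, 1, 2, 3}
    selected_hospital: index j of the hospital whose need x^j_t is positive
    need:              the demand x^j_t of that hospital
    """
    arrived = sum(a == selected_hospital for a in actions)  # y^j_{t+1}
    # the team is punished by the shortfall y - x when too few agents arrive
    return min(arrived - need, 0)
```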
\subsection{Performance}
With this benchmark in hand, we are now able to evaluate our method for constructing sparse graphs. We compare our method with the following approaches.
\textbf{Maximum utility difference}\ \ $q_i$ (or $q_j$) is the expected utility that agent $i$ (or $j$) can get without knowledge of the other agents' actions. Once the actions of both agents are specified, the joint expected utility becomes $q_{ij}$. Thus the measurement
\begin{align}
\zeta_{ij}^{\delta_{\max}} & = \max_{{\bm{a}}_{ij}} |\delta_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})|
\end{align}
can describe the mutual influence between agent $i$ and $j$. Here
\begin{equation}
\delta_{ij}(\bm\tau_{ij},{\bm{a}}_{ij}) = q_{ij}(\bm\tau_{ij},{\bm{a}}_{ij}) - q_i(\tau_i, a_i) - q_j(\tau_j, a_j)
\label{equ:2}
\end{equation}
is the \textbf{\emph{utility difference function}}.
We use a maximization operator here because two agents need to coordinate with each other if such coordination significantly affects the probability of selecting at least one action pair.
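With tabular utilities and payoffs, this measurement is a few lines of numpy. In the actual method these tables would come from learned networks; this is only an illustrative sketch:

```python
import numpy as np

def zeta_delta_max(q_ij, q_i, q_j):
    """zeta^{delta_max}_{ij}: max over joint actions of |delta_ij|.

    q_ij: (|A|, |A|) pairwise payoff table; q_i, q_j: (|A|,) utilities.
    """
    delta = q_ij - q_i[:, None] - q_j[None, :]  # utility difference function
    return np.abs(delta).max()

# if the payoff is exactly additive, the agents are independent
q_i, q_j = np.array([1.0, 2.0]), np.array([0.0, 1.0])
additive = q_i[:, None] + q_j[None, :]
print(zeta_delta_max(additive, q_i, q_j))   # 0: no edge needed
coupled = additive.copy()
coupled[0, 0] += 3.0                        # one interacting action pair
print(zeta_delta_max(coupled, q_i, q_j))    # 3: the edge matters
```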
\textbf{Variance of utility difference}\ \ As discussed before, both the value of the utility difference $\delta_{ij}$ and the variance of the payoff functions can measure the mutual influence between agents $i$ and $j$. The variance of $\delta_{ij}$ therefore serves as a second-order measurement, and we can use
\begin{equation}
\zeta_{ij}^{\delta_{\text{var}}} = \max_{a_i} \text{Var}_{a_j}\left[\delta_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right]\label{equ:zeta_delta_var}
\end{equation}
to rank the necessity of coordination relationships between agents. Again we use the maximization operation to base the measurement on the most influenced action.
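A numpy sketch of Eq.~(\ref{equ:zeta_delta_var}) under the same illustrative tabular setting as before:

```python
import numpy as np

def zeta_delta_var(q_ij, q_i, q_j):
    """zeta^{delta_var}_{ij}: max over a_i of the variance over a_j of delta_ij."""
    delta = q_ij - q_i[:, None] - q_j[None, :]
    # rows index a_i, columns a_j; the variance is taken along each row
    return delta.var(axis=1).max()
```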
For these three measurements, the larger the value of $\zeta_{ij}$ is, the more important the edge $\{i,j\}$ is. For example, when $\max_{a_i}\text{Var}_{a_j}\left[q_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right]$ is large, the expected utility of agent $i$ fluctuates dramatically with the action of agent $j$, and they need to coordinate their actions. Therefore, with these measurements, to learn sparse coordination graphs, we can set a sparseness controlling constant $\lambda\in(0,1)$ and select $\lambda n(n-1)$ edges with the largest $\zeta_{ij}$ values. To make the measurements more accurate in edge selection, we minimize the following losses for the two measurements, respectively:
\begin{align}
\mathcal{L}^{|\delta|}_{\text{sparse}} &= \frac{1}{n(n-1) |A|^2} \sum_{i\neq j}\sum_{a_i, a_j} |\delta_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})|; \nonumber \\
\mathcal{L}^{\delta_{\text{var}}}_{\text{sparse}} &= \frac{1}{n(n-1) |A|} \sum_{i\neq j} \sum_{a_i} \text{Var}_{a_j}\left[\delta_{ij}(\bm\tau_{ij}, {\bm{a}}_{ij})\right].
\end{align}
We scale these losses with a factor $\lambda_{\text{sparse}}$ and optimize them together with the TD loss. It is worth noting that these measurements and losses are not independent. For example, minimizing $\mathcal{L}^{\delta_{\text{var}}}_{\text{sparse}}$ would also reduce the variance of $q_{ij}$. Thus, in the next section, we consider all possible combinations between these measurements and losses.
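The per-edge loss terms and the top-$\lambda n(n-1)$ edge selection described above can be sketched jointly in numpy. Tabular $\delta_{ij}$ values stand in for network outputs, and all function names are ours:

```python
import numpy as np

def edge_loss_terms(delta):
    """Per-edge contributions to the two sparseness losses.

    delta: (|A|, |A|) table of delta_ij(a_i, a_j) for one agent pair;
    the full losses average these terms over all ordered pairs i != j
    and are scaled by lambda_sparse before being added to the TD loss.
    """
    loss_abs = np.abs(delta).mean()        # term of L^{|delta|}
    loss_var = delta.var(axis=1).mean()    # term of L^{delta_var}
    return loss_abs, loss_var

def select_edges(zeta, lam):
    """Keep the lam * n * (n - 1) directed edges with the largest scores."""
    n = zeta.shape[0]
    k = int(lam * n * (n - 1))
    scores = zeta.astype(float).copy()
    np.fill_diagonal(scores, -np.inf)      # exclude self-loops
    top = np.argsort(scores.ravel())[::-1][:k]
    adj = np.zeros(n * n, dtype=bool)
    adj[top] = True
    return adj.reshape(n, n)
```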
\textbf{Observation-Based Approaches}
In partially observable environments, agents sometimes need to coordinate with each other to share their observations and reduce their uncertainty about the true state~\citep{wang2020learning}. We can build coordination graphs according to this intuition.
Agents use an attention model~\citep{vaswani2017attention} to select the information they need. Specifically, we train fully connected networks $f_k$ and $f_q$ and estimate the importance of agent $j$’s observations to agent $i$ by:
\begin{equation}
\alpha_{ij} = f_k(\tau_i)^{\mathrm{T}} f_q(\tau_j).
\end{equation}
Then we calculate the global Q function as:
\begin{equation}
Q_{tot}(s, {\bm{a}}) = \frac{1}{|\mathcal{V}|}\sum_i{q_i(\tau_i, a_i)} + \sum_{i\neq j} \bar{\alpha}_{ij}q_{ij}(\bm\tau_{ij},{\bm{a}}_{ij}),
\end{equation}
where $\bar{\alpha}_{ij} = e^{\alpha_{ij}} / \sum_{i\neq j}e^{\alpha_{ij}}$. Both $f_k$ and $f_q$ can then be trained end-to-end with the standard TD loss. When building coordination graphs, given a sparseness controlling factor $\lambda$, we select the $\lambda n(n-1)$ edges with the largest $\bar{\alpha}_{ij}$ values.
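A numpy sketch of the attention-weighted mixing above, with the utilities and payoffs already evaluated at the chosen actions; the array layout and names are our own:

```python
import numpy as np

def attention_qtot(q_i, q_ij, alpha):
    """Attention-weighted global Q following the equations above.

    q_i:   (n,) utilities q_i(tau_i, a_i) at the chosen actions
    q_ij:  (n, n) payoffs q_ij at the chosen actions (diagonal unused)
    alpha: (n, n) raw attention logits alpha_ij = f_k(tau_i)^T f_q(tau_j)
    """
    n = len(q_i)
    off_diag = ~np.eye(n, dtype=bool)
    logits = alpha[off_diag]
    w = np.exp(logits - logits.max())
    w /= w.sum()                       # softmax over all ordered pairs i != j
    return q_i.mean() + (w * q_ij[off_diag]).sum()
```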
\subsection{Which method is better for learning dynamically sparse coordination graphs?}\label{sec:whichone}
We test all combinations of measurements and losses described above and our method on the MACO~benchmark and use the following three metrics to quantify their performance. (1) \emph{Performance after convergence}. This value reflects the representational capacity of each candidate. (2) \emph{Temporally average performance}. Final performance alone cannot reflect sample efficiency. To make up for this drawback, we use a metric called \emph{temporally average performance}, which is the area between a learning curve and the $x$-axis. (3) \emph{Stability}. As described in Sec.~\ref{sec:graph}, alternating between utility/payoff function learning and graph structure learning can lead to oscillations. To measure the stability of learning processes, we use the following metric. For a learning curve ${\bm{x}}=[x_1, x_2, \dots, x_T]$, we use Kalman filtering~\citep{musoff2009fundamentals} to get a highly smoothed curve $\hat{{\bm{x}}}=[\hat{x}_1, \hat{x}_2, \dots, \hat{x}_T]$. Then we compute the distance $d=\sqrt{\sum_{t=1}^T (x_t - \hat{x}_t)^2}$ and average $d$ across different random seeds. The averaged distance is the stability measure. In Appendix~\ref{appx:stability_measure}, we discuss the influence of different choices of stability measurement.
\begin{table*} [t]
\caption{Performance after convergence on the multi-agent coordination benchmark.}
\vspace{-0.2em}
\label{tab:maco_perf}
\centering\scriptsize
\begin{tabular}{c| P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c | c}
\Xhline{2\arrayrulewidth}
Topology& \multicolumn{3}{c|}{Full} & \multicolumn{3}{c|}{Rand.} & \multicolumn{3}{c|}{$\delta_{\max}$} & \multicolumn{3}{c|}{$q_{\text{var}}$} & \multicolumn{3}{c|}{$\delta_{\text{var}}$} & Attn.\\
\hline
Loss & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & --\\
\hline
Aloha & 35.7& 49.9& 50.0& 45.8& 37.1& 44.0& 51.1& 41.1& \textbf{60.7}& \textbf{52.5}& 49.4& 49.7& 41.8& 35.2& \textbf{55.1} & 25.3 \\
Pursuit & 4.74& 4.76& 4.8& 4.71& 4.72& 4.74& 4.77& \textbf{4.83}& 4.78& 4.74& \textbf{4.83}& 4.79& 4.74& 4.81& \textbf{4.89} & 4.75 \\
Hallway & \textbf{1.00}& \textbf{1.00}& \textbf{1.00}& 0.98& 0.95& 0.92& 0.97& \textbf{1.00}& 0.99& 0.96& \textbf{1.00}& 0.99& \textbf{1.00}& 0.99& 0.99 & 0.01\\
Sensor & 7.38& 0.0& 0.0& 7.51& 15.0& 7.39& 24.0& 25.2& 24.6& \textbf{26.2}& 25.8& \textbf{26.5}& 25.7& 26.0& \textbf{26.1} & 7.42\\
Gather & 0.80& 0.99& 0.99& 0.84& 0.99& 0.81& 0.99& 0.99& 0.99& 0.83& 0.99& 0.98& \textbf{1.00}& 0.99& 0.99& 0.52 \\
Disperse & 8.07& 8.02& 8.14& 8.29& 7.71& 7.97& 8.32& 8.28& 8.38& 8.50& \textbf{8.57}& \textbf{8.58}& 8.74& 8.51& 8.53& \textbf{9.45}\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-2.5em}
\end{table*}
\textbf{Performance after Convergence}\ \ Table~\ref{tab:maco_perf} shows the average final performance of different implementations over 8 random seeds. For each task, we highlight the top 3 scores. Generally speaking, the performance of two implementations is impressive across different types of problems: (1) building graphs using the variance of payoff functions and minimizing $\mathcal{L}_{\text{sparse}}^{q_{\text{var}}}$; (2) constructing graphs according to the variance of utility difference functions and minimizing $\mathcal{L}_{\text{sparse}}^{\delta_{\text{var}}}$. We denote these two implementations by $q_{\text{var}}$ \& $\mathcal{L}_{\text{sparse}}^{q_{\text{var}}}$ and $\delta_{\text{var}}$ \& $\mathcal{L}_{\text{sparse}}^{\delta_{\text{var}}}$, respectively. In contrast, using the maximum absolute value of the utility difference function yields the worst performance among value-based methods. We hypothesize that this is because the maximum value, compared to the variance, is more vulnerable to noise and estimation errors in utility and payoff functions.
We also observe that most dynamically sparse coordination graphs perform much better than complete coordination graphs, demonstrating the power of the adaptability and flexibility provided by sparse graphs. One exception is $\mathtt{Hallway}$, which is characterized by static and high-order coordination relationships among agents. Such relationships can be captured more easily by dense and static graphs.
\textbf{Temporally Average Performance}\ \ We show the temporally average performance in Table~\ref{tab:maco_avg_perf}. Again, we observe a performance gap between complete coordination graphs and most implementations of context-aware sparse graphs. \citet{castellini2019representational} find that (randomly) sparse coordination graphs perform much worse than full graphs, which is in line with our experimental results. Importantly, we show that carefully learned sparse graphs can significantly outperform complete graphs. We expect these observations to be a useful supplement to \citet{castellini2019representational} and to help clear up possible misunderstandings about sparse coordination graphs.
\textbf{Stability and Recommendation}\ \ Table~\ref{tab:maco_fluc} shows the stability of each implementation on all tasks across the MACO~benchmark. Generally speaking, random graphs are the most stable while sparse graphs oscillate on some tasks like $\mathtt{Aloha}$, which is in line with our analyses in Sec.~\ref{sec:graph}.
In Fig.~\ref{fig:radar}, for the two implementations with outstanding final performance, we scale each score to $[0,1]$ on each task and show the averaged scaled scores over tasks. We can see that $q_{\text{var}}$ \& $\mathcal{L}_{\text{sparse}}^{q_{\text{var}}}$ has the best temporally average performance and stability. We thus recommend it for learning dynamically sparse graphs.
\begin{wrapfigure}[16]{r}{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{fig-maco/maco.png}
\caption{Normalized scores of two implementations with outstanding \emph{performance after convergence}.}
\label{fig:radar}
\end{wrapfigure}
Table~\ref{tab:maco_perf} shows the \emph{performance after convergence} of different implementations on the MACO~benchmark. This measurement reflects the representational capacity of different methods, but conveys less information about their sample efficiency and stability. To this end, we also evaluate the \emph{temporally average performance} and \emph{stability}. Tables~\ref{tab:maco_avg_perf} and~\ref{tab:maco_fluc} show the scores of all the considered implementations under these two measurements.
The three measurements we use are summary statistics of the learning curves. We further show the complete learning curves of value-based implementations in Fig.~\ref{fig:aloha-lc}-\ref{fig:disperse-lc} and compare the best value-based method ($\delta_{\text{var}}$ \& $\mathcal{L}_{\text{sparse}}^{\delta_{\text{var}}}$) against the (attentional) observation-based method in Fig.~\ref{fig:attn-lc}. We can see that value-based methods generally perform better than the observation-based method, except on the task $\mathtt{Disperse}$. Unlike in the other games, local observations in $\mathtt{Disperse}$ reveal all the information of the game. In this case, the observation-based method can make better use of local observations and can easily learn an accurate coordination graph.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{fig-maco/attention.pdf}
\caption{Performance comparison between the best value-based method and the (attentional) observation-based approach on the MACO~benchmark.}
\label{fig:attn-lc}
\end{figure*}
\section{Mathematical Proof}
In this section, we provide a proof of Proposition~\ref{prop}.
Without loss of generality, we consider two agents $1$ and $2$ and the edge between them, $(1, 2)$.
We prove our idea by comparing the action selection of agent $2$ before and after removing edge $(1, 2)$. In the following proof, we use $i$ to denote agent $i$ and $e$ to denote edge $(1, 2)$.
The action of agent $2$ is determined by
\begin{align}
a_2^* &= \arg\max_{a_2} \sum_{h\in \mathcal{F}_2} m_{h\rightarrow 2}(a_2) \\
& = \arg\max_{a_2} \left[m_{e\rightarrow 2}(a_2) + \sum_{h\in \mathcal{F}_2 / \{e\}} m_{h\rightarrow 2}(a_2)\right],\label{equ:1}
\end{align}
and we first examine the influence of $m_{e\rightarrow 2}(a_2)$ on $a_2^*$. For clarity, we use $m(a_2)$ to denote $m_{e\rightarrow 2}(a_2)$ and $l(a_2)$ to denote $\sum_{h\in \mathcal{F}_2 / \{e\}} m_{h\rightarrow 2}(a_2)$. We are interested in whether $\arg\max_{a_2} l(a_2) = \arg\max_{a_2} \left[m(a_2)+l(a_2)\right]$. Let $\text{Range}({\bm{x}})$ denote the largest value of vector ${\bm{x}}$ minus the smallest one, and let $a_2^j = \arg\max_{a_2} l(a_2)$.
This event is guaranteed to hold if the following inequality holds:
\begin{equation}
\text{Range}\left[m(a_2)\right] \le \min_{a_2\neq a_2^j}(l(a_2^j)-l(a_2)).
\end{equation}
We use $\bar{m}$ to denote the average of $m(a_2)$. We rewrite this condition and obtain
\begin{align}
&Pr\left(\text{Range}\left[m(a_2)\right] \le \min_{a_2\neq a_2^j}(l(a_2^j)-l(a_2))\right) \\
=& Pr\left(\min_{a_2}m(a_2) < m(a_2) < \min_{a_2}m(a_2) + \min_{a_2\neq a_2^j}(l(a_2^j)-l(a_2))\right)
\end{align}
According to the Asymmetric two-sided Chebyshev's inequality, we get a lower bound of this probability:
\begin{equation}
4\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})-\sigma^2}{\left[\min_{a_2\neq a_2^j}(l(a_2^j)-l(a_2))\right]^2}
\end{equation}
where $\sigma^2$ is the variance of $m(a_2)$, and $\bar m$ is the average of $m(a_2)$.
Suppose that we take $n$ actions independently. According to the von Szokefalvi Nagy inequality, we can further get the lower bound as follows:
\begin{equation}
\begin{aligned}
4\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})-\sigma^2}{\left[\min_{a_2\neq a_2^j}(l(a_2^j)-l(a_2))\right]^2} &\geq 4\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})-\sigma^2}{2n\sigma^2} \\
&= \frac{2}{n}\left[\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})}{\sigma^2} - 1\right] \label{equ:lower1}
\end{aligned}
\end{equation}
Note that
\begin{align}
m_{e\rightarrow 2}(a_2) = \max_{a_1} \left[q(a_1, a_2) + m_{1\rightarrow e}(a_1) \right]
\end{align}
and we are interested in $q(a_1, a_2)$. We now study the relationship between $m_{e\rightarrow 2}(a_2)$ and $\max_{a_1} \left[q(a_1, a_2) \right]$. For clarity, we use $r(a_1,a_2)$ to denote $q(a_1,a_2) + m_{1\rightarrow e}(a_1)$, and $r(a_1^{i_2}, a_2)$ to denote $\max_{a_1} r(a_1, a_2)$. Then we have
\begin{equation}
\text{Var}_{a_2} \max_{a_1} r(a_1, a_2) = \text{Var}_{a_2} r(a_1^{i_2}, a_2)
\end{equation}
Since $a_1^{i_2}=\arg\max_{a_1} r(a_1,a_2)$ for a given $a_2$, for any fixed $a_1$ we have
\begin{equation}
\text{Var}_{a_2} r(a_1,a_2) = \text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2) - s_2 \right],
\end{equation}
where $s_2 = r(a^{i_2}_1,a_2) - r(a_1,a_2) \ge 0$ depends on $a_2$.
Since
\begin{align}
& \text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2) - s_2 \right] \\
=& \text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right] + \text{Var} \left[ s_2\right] - 2\text{Cov}( r(a^{i_2}_1,a_2), s_2)
\end{align}
and
\begin{align}
\text{Cov}( r(a^{i_2}_1,a_2), s_2) \le \sqrt{\text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right]\text{Var} \left[s_2\right]},
\end{align}
it follows that
\begin{align}
& \text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2) - s_2 \right] \\
\ge& \text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right] - 2\sqrt{\text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right]\text{Var} \left[s_2\right]}
\end{align}
Thus,
\begin{align}
\zeta_{12} \left[r(a_1,a_2)\right] = \max_{a_1}\text{Var}_{a_2} \left[r(a_1,a_2)\right] \ge \text{Var}_{a_2}\max_{a_1}\left[r(a_1,a_2)\right] - 2\sqrt{\text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right]\text{Var} \left[ s_2\right]}.
\end{align}
Observing that $\zeta_{12} \left[r(a_1,a_2)\right] = \max_{a_1}\text{Var}_{a_2} \left[r(a_1,a_2)\right] = \max_{a_1}\text{Var}_{a_2} \left[q(a_1,a_2)\right] = \zeta_{12} \left[q(a_1,a_2)\right]$, we have
\begin{equation}
\begin{aligned}
\sigma &\le \zeta_{12} \left[q(a_1,a_2)\right] + 2\sqrt{\text{Var}_{a_2} \left[ r(a^{i_2}_1,a_2)\right]\text{Var} \left[ s_2\right]}\\
&= \zeta_{12} \left[q(a_1,a_2)\right] + 2\sqrt{\sigma S}\\
\end{aligned}
\end{equation}
where $\sigma=\text{Var}_{a_2}\max_{a_1}\left[r(a_1,a_2)\right]$ and $\text{Var} \left[ s_2\right] = S$. According to the fixed-point theorem, the term $\sigma$ satisfies $\zeta_{12} \left[q(a_1,a_2)\right] + 2\sqrt{\sigma S} = \sigma$. Solving this quadratic equation gives $\sigma = \zeta_{12} \left[q(a_1,a_2)\right] + 2S \pm 2\sqrt{S\left(S + \zeta_{12} \left[q(a_1,a_2)\right]\right)}$. Because $\sigma$ is larger than $\zeta_{12} \left[q(a_1,a_2)\right] + 2S$, we take $\sigma = \zeta_{12} \left[q(a_1,a_2)\right] + 2S + 2\sqrt{S\left(S + \zeta_{12} \left[q(a_1,a_2)\right]\right)}$.
By inserting this bound into the lower bound (Eq.~\ref{equ:lower1}), we get a lower bound related to $q(a_1,a_2)$:
\begin{equation}
\frac{2}{n}\left[\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})}{\left[\zeta_{12} \left[q(a_1,a_2)\right] + 2S + 2\sqrt{S\left(S + \zeta_{12} \left[q(a_1,a_2)\right]\right)}\right]^2} - 1\right]
\end{equation}
If every entry of the vector ${\bm{x}}$ is non-negative, we have $\text{Var}({\bm{x}}) = \frac{1}{n} \sum_i (x_i - \bar{x})^2 \leq \frac{1}{n} \sum_i x_i^2 \leq \max_i x_i^2$. Thus we can further bound $S$:
\begin{equation}
\begin{aligned}
S &= \text{Var}_{a_2}\left[ \max_{a_1} r\left(a_1, a_2\right) - r\left(a_1, a_2\right) \right] \\
& \leq \max_{a_2} \left[ \max_{a_1} r\left(a_1, a_2\right) - r\left(a_1, a_2\right) \right]^2 \\
\end{aligned}
\end{equation}
Letting $A = \max_{a_2}\left[\max_{a_1} r\left(a_1, a_2\right) - r\left(a_1, a_2\right)\right]$, we have $S \leq A^2$ and obtain the final lower bound:
\begin{equation}
\frac{2}{n}\left[\frac{(\bar{m}-\min_{a_2}m(a_2))(\max_{a_2}m(a_2)-\bar{m})}{\left[\zeta_{12} \left[q(a_1,a_2)\right] + 2A^2 + 2\sqrt{A^2\left(A^2 + \zeta_{12} \left[q(a_1,a_2)\right]\right)}\right]^2} - 1\right]
\end{equation}
\section{Stability Measurement}\label{appx:stability_measure}
To evaluate the stability of different implementations, we use the following stability measurement in our paper. For a learning curve ${\bm{x}}=[x_1, x_2, \dots, x_T]$, we use Kalman filtering~\citep{musoff2009fundamentals} to get a highly smoothed curve $\hat{{\bm{x}}}=[\hat{x}_1, \hat{x}_2, \dots, \hat{x}_T]$. Then we compute the distance $d=\sqrt{\sum_{t=1}^T (x_t - \hat{x}_t)^2}$ and average $d$ across learning curves obtained using different random seeds. The averaged distance is the stability measure. However, different stability measurements may lead to different conclusions. In this section, we discuss the influence of the choice of stability measurement.
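A minimal version of this metric is sketched below. The one-dimensional Kalman filter uses an arbitrary simple parameterization of our own; the noise settings actually used are not specified here:

```python
import numpy as np

def kalman_smooth(x, process_var=1e-4, obs_var=1.0):
    """Heavily smooth a curve with a basic 1-D Kalman filter."""
    est, p = float(x[0]), 1.0
    out = []
    for z in x:
        p += process_var                # predict: uncertainty grows slightly
        k = p / (p + obs_var)           # Kalman gain
        est += k * (z - est)            # correct with the new observation
        p *= (1 - k)
        out.append(est)
    return np.array(out)

def stability_distance(x):
    """d = sqrt(sum_t (x_t - xhat_t)^2); smaller means a more stable curve."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(((x - kalman_smooth(x)) ** 2).sum()))
```

A perfectly flat learning curve gives $d = 0$, and adding oscillations to a curve increases $d$, matching the intended ranking.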
We propose several other stability measurements and compare the induced rankings. Specifically, we test the following methods for quantifying the volatility of a learning curve ${\bm{x}}=[x_1, x_2, \dots, x_T]$. We use (1) the Exponential Moving Average (EMA) introduced by \citet{klinker2011exponential} and (2) the MidPoint over a period, which averages the maximum and minimum within a rolling window, to get the smoothed curve, and calculate the distance between the original and smoothed curves. Moreover, we use (3) the Double Exponential Moving Average (DEMA) introduced by \citet{stanley1988digital} to smooth the curve and then calculate the distance similarly. In Table~\ref{tab:maco_fluc_measure}, we highlight the implementations that are the most stable on $\mathtt{Pursuit}$ as measured by each method. It can be observed that the results are similar. We thus conclude that the stability ranking reported in the paper is robust to different measurements.
\begin{table*} [h!]
\caption{Influence of stability measurements. On $\mathtt{Pursuit}$, we compare the stability rankings of different implementations under four different measurements. For each measurement, we highlight the three stablest observation- or value-based methods. The values are multiplied by $10$.}
\label{tab:maco_fluc_measure}
\centering\scriptsize
\begin{tabular}{c| P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c|P{0.25cm} P{0.25cm} c | c}
\Xhline{2\arrayrulewidth}
Topology& \multicolumn{3}{c|}{Full} & \multicolumn{3}{c|}{Rand.} & \multicolumn{3}{c|}{$\delta_{\max}$} & \multicolumn{3}{c|}{$q_{\text{var}}$} & \multicolumn{3}{c|}{$\delta_{\text{var}}$} & Attn. \\
\hline
Loss & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & $|\delta|$ & $q_{\text{var}}$ & $\delta_{\text{var}}$ & -- \\
\hline
Kalman & 3.55& 3.11& 4.07& 2.32& 2.08& 2.49& 2.04& 1.96& 2.58& 1.99& \textbf{1.28}& 2.07& 1.86& \textbf{1.82}& \textbf{1.77}& 1.87 \\
EMA & 5.20& 4.87& 5.27& 3.59& 3.46& 3.76& 3.42& 3.70& 4.06& 3.48& \textbf{2.46}& 3.78& 3.51& \textbf{3.33}& \textbf{3.04}& 3.55 \\
MidPoint & 4.49& 4.16& 4.46& 3.04& 2.81& 3.30& \textbf{2.69}& 2.95& 3.37& 2.77& \textbf{1.91}& 2.96& 2.73& 2.72& \textbf{2.40}& 2.55\\
DEMA & 2.08& 1.90& 2.20& 1.43& 1.33& 1.50& 1.29& 1.35& 1.59& 1.32& \textbf{0.92}& 1.39& 1.30& 1.22& \textbf{1.19}& \textbf{1.20}\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{table*}
\section{The SMAC benchmark}
On the SMAC benchmark, we compare our method against fully decomposed value function learning methods (VDN~\citep{sunehag2018value} \& QMIX~\citep{rashid2018qmix}) and a deep coordination graph learning method (DCG~\citep{bohmer2020deep}). Experiments are carried out on a hard map, $\mathtt{5m\_vs\_6m}$, and a super hard map, $\mathtt{MMM2}$. For the baselines, we use the code provided by the authors and their default hyperparameter settings, which have been fine-tuned on the SMAC benchmark. Both our method and all the considered baselines are implemented on top of the open-sourced codebase PyMARL\footnote{https://github.com/oxwhirl/pymarl}, which further guarantees the fairness of the comparisons.
\section{Action Representation Learning}\label{appx:action_repr}
As discussed in Sec. 5.1 of the main text, we use action representations to reduce the influence of estimation errors in the utility difference function on graph structure learning. In this section, we describe the details of action representation learning (the related network structure is shown in Fig.~\ref{fig:action_repr_learner}). We
\begin{wrapfigure}[18]{r}{0.35\linewidth}
\vspace{-0.5em}
\includegraphics[width=\linewidth]{fig-demo/action_repr_learner.pdf}
\caption{Framework for learning action representations, reproduced from~\citet{wang2021rode}.}
\label{fig:action_repr_learner}
\end{wrapfigure}
use the technique proposed by~\citet{wang2021rode} and learn an action encoder $f_e(\cdot; \theta_e)$: $\mathbb{R}^{|A|}\rightarrow\mathbb{R}^d$, parameterized by $\theta_e$, to map one-hot actions to a $d$-dimensional representation space. With the encoder, each action $a$ has a latent representation ${\bm{z}}_{a}$, \textit{i}.\textit{e}., ${\bm{z}}_{a}=f_e(a; \theta_e)$. The representation ${\bm{z}}_{a}$ is then used to predict the next observation $o_i'$ and the global reward $r$, given the current observation $o_i$ of an agent $i$, and the one-hot actions of other agents, ${\bm{a}}_{\textup{\texttt{-}} i}$. This model is a forward model, which is trained by minimizing the following loss:
\begin{equation}\label{equ:ar_learning}
\begin{aligned}
\mathcal{L}_e(\theta_e, \xi_e) & = \mathbb{E}_{({\bm{o}}, {\bm{a}}, r, {\bm{o}}')\sim \mathcal{D}}\big[\sum_i\|p_o({\bm{z}}_{a_i},o_i, {\bm{a}}_{\textup{\texttt{-}} i}) - o_i'\|^2_2 \\
& + \lambda_e\sum_i\left(p_r({\bm{z}}_{a_i},o_i, {\bm{a}}_{\textup{\texttt{-}} i}) - r\right)^2\big],
\end{aligned}
\end{equation}
where $p_o$ and $p_r$ are the predictors for observations and rewards, respectively. We use $\xi_e$ to denote the parameters of $p_o$ and $p_r$. $\lambda_e$ is a scaling factor, $\mathcal{D}$ is a replay buffer, and the sum is carried out over all agents.
At the beginning of training, we collect samples and train the predictive model shown in Fig.~\ref{fig:action_repr_learner} for 50$K$ timesteps. Policy learning then begins, and the action representations are kept fixed for the rest of training. Since tasks in the MACO~benchmark typically do not involve many actions, we do not use action representations when benchmarking our method. In contrast, StarCraft II micromanagement tasks usually have a large action space. For example, the map $\mathtt{MMM2}$ involves $16$ actions, and a conventional deep Q-network would require $16\times 16=256$ output heads for learning the utility difference function. Therefore, we equip our method with action representations to estimate the utility difference function when testing it on the SMAC benchmark.
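The objective in Eq.\ (\ref{equ:ar_learning}) can be sketched in a few lines of numpy. This is a toy version in which $f_e$, $p_o$, and $p_r$ are linear maps; the paper uses neural networks, and all shapes and parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, d_repr, d_obs = 3, 5, 4, 6   # hypothetical sizes

# Linear stand-ins for the action encoder f_e and the predictors p_o, p_r.
W_e = rng.standard_normal((n_actions, d_repr)) * 0.1
d_in = d_repr + d_obs + (n_agents - 1) * n_actions
W_o = rng.standard_normal((d_in, d_obs)) * 0.1
W_r = rng.standard_normal((d_in,)) * 0.1

def forward_model_loss(obs, acts, reward, next_obs, lam=0.1):
    """L_e of Eq. (ar_learning): for each agent i, predict o_i' and the
    global reward r from z_{a_i}, o_i, and the one-hot actions a_{-i}."""
    loss = 0.0
    for i in range(n_agents):
        z = np.eye(n_actions)[acts[i]] @ W_e              # z_{a_i} = f_e(a_i)
        others = np.eye(n_actions)[np.delete(acts, i)].ravel()  # one-hot a_{-i}
        inp = np.concatenate([z, obs[i], others])
        loss += np.sum((inp @ W_o - next_obs[i]) ** 2)    # observation term
        loss += lam * (inp @ W_r - reward) ** 2           # reward term
    return loss
```

In the full method, this loss is minimized over $\theta_e$ and $\xi_e$ by gradient descent on samples from the replay buffer $\mathcal{D}$.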
\section{Architecture, Hyperparameters, and Infrastructure}\label{appx:hyper}
\subsection{CASEC}
In CASEC, each agent has a neural network to estimate its local utility. The local utility network consists of three layers---a fully-connected layer, a GRU with a 64-dimensional hidden state, and another fully-connected layer---and outputs an estimated utility for each action. The utility difference function is also a 3-layer network, whose first two layers are shared with the local utility function to process the local action-observation history. The input to the third layer (a fully-connected layer) is the concatenation of the two agents' GRU outputs. The local utilities and pairwise utility differences are summed to estimate the global action value (Eq. 11 in the paper).
For all experiments, optimization is conducted using RMSprop with a learning rate of $5\times10^{-4}$, $\alpha$ of 0.99, an epsilon of 0.00001, and no momentum or weight decay. For exploration, we use $\epsilon$-greedy with $\epsilon$ annealed linearly from 1.0 to 0.05 over 50$K$ time steps and kept constant for the rest of training. Batches of 32 episodes are sampled from the replay buffer. The default iteration number of the Max-Sum algorithm is set to 5. The communication threshold depends on the number of agents and the task; we set it to $0.3$ on the maps $\mathtt{5m\_vs\_6m}$ and $\mathtt{MMM2}$. We test the performance with different values ($1e\textup{\texttt{-}} 3$, $1e\textup{\texttt{-}} 4$, and $1e\textup{\texttt{-}} 5$) of the scaling weight of the sparseness loss $\mathcal{L}_{\text{sparse}}^{\delta_{\text{var}}}$ on $\mathtt{Pursuit}$, and set it to $1e\textup{\texttt{-}} 4$ for both the MACO~and SMAC benchmarks. The whole framework is trained end-to-end on fully unrolled episodes. All experiments on StarCraft II use the default reward and observation settings of the SMAC benchmark.
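The exploration schedule described above is a simple linear anneal, which can be sketched as:

```python
def epsilon(step, start=1.0, end=0.05, anneal_steps=50_000):
    """Linearly anneal epsilon from `start` to `end` over `anneal_steps`
    environment steps, then hold it constant for the rest of training."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)
```

At action-selection time, a random action is taken with probability `epsilon(step)` and the greedy (Max-Sum) joint action otherwise.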
All experiments are carried out on an NVIDIA Tesla P100 GPU. We show the estimated running time of our method on different tasks in Tables~\ref{tab:time-maco} and~\ref{tab:time-smac}. Typically, CASEC~can finish 1M training steps within 8 hours on MACO~tasks and in about 10 hours on SMAC tasks. In Table~\ref{tab:time-comparision}, we compare the computational complexity of action selection for CASEC and DCG, which is the bottleneck of both algorithms. CASEC is slightly faster than DCG by virtue of graph sparsity.
\begin{table*}[h!]
\caption{Approximate running time of CASEC~on tasks from the MACO~benchmark.}
\centering
\begin{tabular}{CRCRCRCRCRCR}
\toprule
\multicolumn{2}{c}{Aloha} &
\multicolumn{2}{c}{Pursuit} &
\multicolumn{2}{c}{Hallway} &
\multicolumn{2}{c}{Sensor} &
\multicolumn{2}{c}{Gather} &
\multicolumn{2}{c}{Disperse} \\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-8}
\cmidrule(lr){9-10}
\cmidrule(lr){11-12}
\multicolumn{2}{c}{13h (2M)} & \multicolumn{2}{c}{17h (2M)} & \multicolumn{2}{c}{7h (1M)} & \multicolumn{2}{c}{4.5h (0.5M)} & \multicolumn{2}{c}{6.5h (1M)} & \multicolumn{2}{c}{8h (1M)}\\
\bottomrule
\end{tabular}
\label{tab:time-maco}
\caption{Approximate running time of CASEC~on tasks from the SMAC benchmark.}
\centering
\begin{tabular}{CRCRCRCRCR}
\toprule
\multicolumn{2}{c}{5m\_vs\_6m} &
\multicolumn{2}{c}{MMM2}\\
\cmidrule(lr){1-2}
\cmidrule(lr){3-4}
\multicolumn{2}{c}{18h (2M)} & \multicolumn{2}{c}{21h (2M)} \\
\bottomrule
\end{tabular}
\label{tab:time-smac}
\end{table*}
\begin{table}[h!]
\centering
\caption{Average time (milliseconds) for 1000 action selection phases of CASEC/DCG. CASEC uses a graph with sparseness 0.2 while DCG uses the full graph. To ensure a fair comparison, both Max-Sum/Max-Plus algorithms pass messages for 8 iterations. The batch size is set to 10.}
\vspace{0.5em}
\begin{tabular}{|c|c|c|c|}
\hline
& 5 actions & 10 actions & 15 actions \\ \hline
5 agents & 2.90/3.11 & 3.15/3.39 & 3.42/3.67\\ \hline
10 agents & 3.17/3.45 & 3.82/4.20 & 5.05/5.27 \\ \hline
15 agents & 3.41/3.67 & 5.14/5.4 & 7.75/8.02 \\ \hline
\end{tabular}
\label{tab:time-comparision}
\end{table}
\section{Introduction}
It is observationally known that
there exists a net excess of baryons over antibaryons in
the Universe.
In order to generate the baryon asymmetry of the Universe (BAU)
from a state in which baryons and antibaryons have the same abundance,
Sakharov's three conditions \cite{Sakharov1} must be satisfied:
(1) baryon number nonconservation, (2) $C$ and $CP$ violation,
and (3) a departure from thermal equilibrium.
Many cosmological scenarios in which the above three conditions
could be satisfied have been proposed
(for reviews of the BAU, see
Refs.~\cite{Dolgov92, DK04}).
The origin of the BAU,
however, is not well established yet.
It has been pointed out
\cite{JS97, GS98-1, GS98-2, Thompson98, Vachaspati94} that
hypercharge electromagnetic fields could play a significant role
in the electroweak (EW) scenario
\cite{Cohen93, Rubakov96, Funakubo96, Trodden99} for baryogenesis.
Giovannini and Shaposhnikov \cite{GS98-1, GS98-2} have shown that
the Chern-Simons number stored
in the hypercharge electromagnetic
fields, i.e., the hypermagnetic helicity, is converted into
fermions at the electroweak phase transition (EWPT) owing to
the Abelian anomaly \cite{Kuzmin85},
and at the same time
the hypermagnetic fields are replaced by the ordinary magnetic fields,
which survive after the EWPT.
The hypermagnetic helicity corresponds to topologically non-trivial
hypercharge configurations, where ``topologically non-trivial'' refers
to the topology of the magnetic flux lines.
The physical interpretation of these hypercharge configurations is
given in terms of hypermagnetic knots.
The generated fermions will not
be destroyed by sphaleron processes \cite{Sphaleron} if the EWPT is
strongly first order. This condition could be met
in the extensions of the minimal standard model (MSM) \cite{Funakubo05}.
The most natural origin of large-scale hypermagnetic fields before
the EWPT is hypercharge electromagnetic quantum fluctuations generated
in the inflationary stage
\cite{Turner} (for reviews of inflation, see
Refs.~\cite{Linde1, Kolb}).
This is because inflation naturally produces effects on very large scales,
larger than the Hubble horizon, starting from microphysical processes
operating within a causally connected volume.
If the conformal invariance of the Maxwell theory is broken by some
mechanism in the inflationary stage,
hypercharge electromagnetic quantum fluctuations could be generated
even in the Friedmann-Robertson-Walker (FRW) spacetime, which
is conformally flat.
Hence several breaking mechanisms have been proposed in
Refs.~\cite{Turner, Ratra, Scalar, Garretson92, Field00,
Dolgov93, Bamba1, Bamba2}
(for other mechanisms, see reviews of the origin of cosmic magnetic fields
\cite{Grasso01, Dolgov01, Widrow02, Giovannini04, Semikoz05}
and references therein).
In particular, it follows from indications
in higher-dimensional theories including string theory that
there can exist the dilaton field coupled to the hypercharge
electromagnetic fields. This coupling also breaks the conformal invariance of
the Maxwell theory \cite{Ratra, Scalar, Bamba1, Bamba2}
and hence the large-scale hypercharge magnetic fields could be generated.
Furthermore, it has been noticed \cite{BO99, Giovannini00} that
when a pseudoscalar field $\phi$ with an axion-like coupling to
the $U(1)_Y$ hypercharge field strength $Y_{\mu\nu}$
in the form $\phi Y_{\mu\nu}\tilde{Y}^{\mu\nu}$, where
$\tilde{Y}^{\mu\nu}$ is the dual of $Y_{\mu\nu}$ and
$Y_{\mu\nu}\tilde{Y}^{\mu\nu}$
corresponds to the hypercharge topological number density,
coherently rolls or oscillates before the EWPT,
Sakharov's three conditions can be satisfied.
In this case, the motion of the pseudoscalar field generates
a time-dependent hypercharge topological number condensate
which violates fermion number conservation through the Abelian
anomalous coupling \cite{Kuzmin85}, and realizes a departure from
equilibrium. Moreover, $C$ symmetry is violated by the chiral coupling
of the hypercharge gauge boson to fermions. Furthermore, since
the hypercharge topological number is odd under $CP$,
$CP$ symmetry is spontaneously broken if the hypercharge topological number
is coupled to a field with a time-dependent expectation value.
This mechanism is able to generate a net Chern-Simons number which can
survive until the EWPT and then be converted into a baryon asymmetry.
Pseudoscalar fields with the above axion-like coupling appear in
several possible extensions of the standard model \cite{BO01}.
Incidentally, the generation of large-scale magnetic fields owing to
the breaking of the conformal invariance of the Maxwell theory through such an
axion-like coupling between a pseudoscalar field and
electromagnetic fields has been considered
in Refs.~\cite{Turner, Garretson92, Field00}.
Moreover,
baryogenesis due to the above coupling has been discussed in
Ref.~\cite{Guendelman92}.
Furthermore, the generation of the magnetic helicity
owing to the above coupling has been considered in Ref.~\cite{Campanelli05}.
In the present paper,
in addition to the existence of the dilaton coupled to hypercharge
electromagnetic fields,
we assume the existence of a pseudoscalar field with an
axion-like coupling to hypercharge electromagnetic fields,
and consider the generation of the BAU in slow-roll exponential
inflation models.
In particular, we consider the generation of the
Chern-Simons number, i.e., the hypermagnetic helicity,
through the coupling between
a pseudoscalar field and hypercharge electromagnetic fields
in the inflationary stage.
The generated Chern-Simons number is converted into
fermions at the EWPT owing to the Abelian anomaly,
and the generated fermions can survive after the EWPT
if the EWPT is strongly first order.
The reason why we consider the generation of the
Chern-Simons number during inflation is as follows:
In this scenario,
the BAU is induced by the helicity of
the hypermagnetic fields. Hence
the large-scale hypermagnetic fields whose present scale is
larger than the present horizon scale have to be generated
in order that the resultant BAU could be
homogeneous over the present horizon.
This paper is organized as follows.
In Sec.\ II we describe our model and derive equations of motion
from its action.
In Sec.\ III we consider the evolution of the $U(1)_Y$ gauge field
and investigate the generated Chern-Simons number density,
and then estimate the resultant baryon asymmetry.
Finally, Sec.\ IV is devoted to a conclusion.
We use units in which $k_\mathrm{B} = c = \hbar = 1$ and denote the
gravitational constant $8 \pi G$ by ${\kappa}^2$ so that
${\kappa}^2 \equiv 8\pi/{M_{\mathrm{Pl}}}^2$ where
$M_{\mathrm{Pl}} = G^{-1/2} = 1.2 \times 10^{19}$GeV is the Planck mass.
Moreover, in terms of electromagnetism we adopt Heaviside-Lorentz units.
Throughout the present paper,
the subscripts `1', `R', and `0' represent the quantities at
the time $t_1$ when a given mode of the $U(1)_Y$ gauge field
first crosses outside the horizon during inflation,
the end of inflation (namely, the instantaneous reheating stage)
$t_\mathrm{R}$, and the present time $t_0$, respectively.
\section{MODEL}
\subsection{Action}
We introduce the dilaton field $\Phi$ and
a pseudoscalar field $\phi$.
Moreover, we introduce the coupling of the dilaton to hypercharge
electromagnetic fields and that of the pseudoscalar to those fields.
Our model action is the following:
\begin{eqnarray}
S
\Eqn{=}
\int d^{4}x \sqrt{-g}
\left[ \hspace{1mm}
{\mathcal{L}}_{\mathrm{inflaton}}
+
{\mathcal{L}}_{\mathrm{dilaton}}
+
{\mathcal{L}}_{\mathrm{ps}}
+
{\mathcal{L}}_{\mathrm{HEM}}
\hspace{1mm} \right],
\label{eq:2.1} \\[3mm]
{\mathcal{L}}_{\mathrm{inflaton}}
\Eqn{=}
-\frac{1}{2}g^{\mu\nu}{\partial}_{\mu}{\varphi}{\partial}_{\nu}{\varphi}
- U[\varphi],
\label{eq:2.2} \\[3mm]
{\mathcal{L}}_{\mathrm{dilaton}}
\Eqn{=}
-\frac{1}{2}g^{\mu\nu}{\partial}_{\mu}{\Phi}{\partial}_{\nu}{\Phi}
- V[\Phi],
\label{eq:2.3}
\end{eqnarray}
\begin{eqnarray}
{\mathcal{L}}_{\mathrm{ps}}
\Eqn{=}
-\frac{1}{2}g^{\mu\nu}{\partial}_{\mu}{\phi}{\partial}_{\nu}{\phi}
- W[\phi],
\label{eq:2.4} \\[3mm]
{\mathcal{L}}_{\mathrm{HEM}} \Eqn{=}
-\frac{1}{4}
f(\Phi) \left(
Y_{\mu\nu}Y^{\mu\nu} + g_{\mathrm{ps}} \frac{\phi}{M}
Y_{\mu\nu}\tilde{Y}^{\mu\nu} \right),
\label{eq:2.5} \\[3mm]
f(\Phi) \Eqn{=} \exp \left(-\lambda \kappa \Phi \right),
\label{eq:2.6}
\\[3mm]
V[\Phi] \Eqn{=} \bar{V} \exp \left( -\tilde{\lambda} \kappa \Phi \right),
\label{eq:2.7} \\[3mm]
W[\phi] \Eqn{=} \frac{1}{2} m^2 \phi^2,
\label{eq:2.8}
\end{eqnarray}
where $g$ is the determinant of the metric tensor $g_{\mu\nu}$,
$U[\varphi]$, $V[\Phi]$, and $W[\phi]$ are the inflaton, the dilaton, and
the pseudoscalar potentials, respectively,
$\bar{V}$ is a constant, $m$ is the mass of the pseudoscalar field,
and
$f$ is the coupling between the dilaton and hypercharge
electromagnetic fields with
$\lambda$\footnote{
The sign of $\lambda$ in the present paper is opposite to that in
Refs.~\cite{Bamba1, Bamba2}.
}
and $\tilde{\lambda} \hspace{0.5mm}
(\hspace{0.5mm} > 0 \hspace{0.5mm})$ being
dimensionless constants.
Moreover,
$g_{\mathrm{ps}} = \bar{g}_{\mathrm{ps}} \alpha^{\prime}/(2\pi)$,
where $\bar{g}_{\mathrm{ps}}$ is a numerical factor and
$\alpha^{\prime} = g^{\prime \hspace{0.3mm} 2}/(4\pi)$. Here, $g^{\prime}$ is
the $U(1)_Y$ gauge coupling constant.
$M$ denotes a mass scale.
Furthermore, the $U(1)_Y$ hypercharge field strength is given by
$Y_{\mu\nu} = \nabla_{\mu} Y_{\nu} - \nabla_{\nu} Y_{\mu}$, where
$\tilde{Y}^{\mu\nu} = (1/2) \epsilon^{\mu\nu\rho\sigma} Y_{\rho\sigma}$.
Here, $Y_{\mu}$ is the $U(1)_Y$ gauge field, $\nabla_{\mu}$ denotes
the covariant derivative, and $\epsilon^{\mu\nu\rho\sigma}$ is the
Levi-Civita tensor.
Moreover, we note that
for the antisymmetric tensor $Y_{\mu\nu}$ in the
metric of Eq.\ (\ref{eq:2.13}), given in the next subsection,
covariant derivatives reduce to ordinary derivatives,
$\nabla_{\mu}=\partial_{\mu}$,
as appears later in the equations of motion
(\ref{eq:2.12}) and (\ref{eq:2.22}) derived from
the hypercharge electromagnetic part of our model
Lagrangian in Eq.\ (\ref{eq:2.5}).
\subsection{Equations of motion}
From the above action in Eq.\ (\ref{eq:2.1}), the equations of motion for
the inflaton, the dilaton, the pseudoscalar field, and hypercharge
electromagnetic fields can be derived
as follows:
\begin{eqnarray}
-\frac{1}{\sqrt{-g}}{\partial}_{\mu}
\left( \sqrt{-g}g^{\mu\nu}{\partial}_{\nu} \varphi \right)
+ \frac{d U[\varphi]}{d \varphi} \Eqn{=} 0,
\label{eq:2.9} \\[3mm]
-\frac{1}{\sqrt{-g}}{\partial}_{\mu}
\left( \sqrt{-g}g^{\mu\nu}{\partial}_{\nu} \Phi \right)
+ \frac{dV[\Phi]}{d\Phi} \Eqn{=}
-\frac{1}{4} \frac{d f(\Phi)}{d \Phi}
\left( Y_{\mu\nu}Y^{\mu\nu} + g_{\mathrm{ps}} \frac{\phi}{M}
Y_{\mu\nu} \tilde{Y}^{\mu\nu} \right),
\label{eq:2.10} \\[3mm]
-\frac{1}{\sqrt{-g}}{\partial}_{\mu}
\left( \sqrt{-g}g^{\mu\nu}{\partial}_{\nu} \phi \right)
+ \frac{d W[\phi]}{d \phi} \Eqn{=}
-\frac{1}{4} \frac{g_{\mathrm{ps}}}{M}
f(\Phi) Y_{\mu\nu}\tilde{Y}^{\mu\nu},
\label{eq:2.11} \\[3mm]
\frac{1}{\sqrt{-g}}{\partial}_{\mu}
\left( \sqrt{-g} f(\Phi) Y^{\mu\nu} \right)
\Eqn{=} - \frac{g_{\mathrm{ps}}}{M}
{\partial}_{\mu} \left[ f(\Phi) \phi \right]
\tilde{Y}^{\mu\nu}.
\label{eq:2.12}
\end{eqnarray}
We now assume the spatially flat FRW space-time with the metric
\begin{eqnarray}
{ds}^2 = g_{\mu\nu}dx^{\mu}dx^{\nu}
\Eqn{=} -{dt}^2 + a^2(t)d{\Vec{x}}^2 \nonumber \\[3mm]
\Eqn{=} a^2(\eta) ( -{d \eta}^2 + d{\Vec{x}}^2 ),
\label{eq:2.13}
\end{eqnarray}
where $a$ is the scale factor, and $\eta$ is the conformal time.
Since we are interested in the specific case in which
the background space-time is inflating, we assume that the spatial
derivatives of $\varphi$, $\phi$, and $\Phi$ are negligible
compared to the other terms
(if this is not the case at the beginning of inflation, any spatial
inhomogeneities will quickly be inflated away and this assumption will
quickly become very accurate).
Hence the equations of motion for the background homogeneous scalar fields
read
\begin{eqnarray}
\ddot{\varphi} + 3H\dot{\varphi} + \frac{dU[\varphi]}{d\varphi} \Eqn{=} 0,
\label{eq:2.14}
\end{eqnarray}
\begin{eqnarray}
\ddot{\Phi} + 3H\dot{\Phi} + \frac{dV[\Phi]}{d\Phi} \Eqn{=} 0,
\label{eq:2.15} \\[3mm]
\ddot{\phi} + 3H\dot{\phi} + \frac{dW[\phi]}{d\phi} \Eqn{=} 0,
\label{eq:2.16}
\end{eqnarray}
together with the background Friedmann equation
\begin{eqnarray}
H^2 \Eqn{=} \left( \frac{\dot{a}}{a} \right)^2 = \frac{{\kappa}^2}{3}
\left( {\rho}_{\varphi} + {\rho}_{\Phi} + {\rho}_{\phi} \right),
\label{eq:2.17} \\[3mm]
{\rho}_{\varphi} \Eqn{=} \frac{1}{2}{\dot{\varphi}}^2 + U[\varphi],
\label{eq:2.18} \\[3mm]
{\rho}_{\Phi} \Eqn{=} \frac{1}{2}{\dot{\Phi}}^2 + V[\Phi],
\label{eq:2.19} \\[3mm]
{\rho}_{\phi} \Eqn{=} \frac{1}{2}{\dot{\phi}}^2 + W[\phi],
\label{eq:2.20}
\end{eqnarray}
where a dot denotes a time derivative.
Here, ${\rho}_{\varphi}$, ${\rho}_{\Phi}$, and ${\rho}_{\phi}$
are the energy densities of the inflaton, the dilaton, and the
pseudoscalar field, respectively.
We here consider the case in which slow-roll exponential inflation
is driven by the potential energy of the inflaton
and during inflation the energy densities of the dilaton and
the pseudoscalar field are much smaller than that of the inflaton,
${\rho}_{\varphi} \gg {\rho}_{\Phi}$, ${\rho}_{\varphi} \gg {\rho}_{\phi}$.
Hence, during inflation $H$ reads
\begin{eqnarray}
H^2 \approx \frac{{\kappa}^2}{3} {\rho}_{\varphi} \equiv {H_{\mathrm{inf}}}^2,
\label{eq:2.21}
\end{eqnarray}
where $H_{\mathrm{inf}}$ is the Hubble constant in the inflationary stage.
We consider the evolution of the $U(1)_Y$ gauge field in this background.
Its equation of motion in the Coulomb gauge,
$Y_0(t,\Vec{x}) = 0$ and ${\partial}_j Y^j (t,\Vec{x}) =0$, becomes
\begin{eqnarray}
\ddot{Y_i}(t,\Vec{x})
+ \left( H + \frac{\dot{f}}{f}
\right) \dot{Y_i}(t,\Vec{x})
- \frac{1}{a^2}{\partial}_j {\partial}_j Y_i(t,\Vec{x})
- \frac{g_{\mathrm{ps}}}{M} \frac{1}{af} \frac{d \left(f \phi \right)}{d t}
\epsilon^{ijk} {\partial}_j Y_k(t,\Vec{x}) = 0.
\label{eq:2.22}
\end{eqnarray}
\section{Generation of the BAU}
In this section, we consider the generation of the BAU.
First, we consider
the evolution of the dilaton, that of the pseudoscalar field, and
that of the $U(1)_Y$ gauge field. Next, we investigate the generation of the
Chern-Simons number density in the inflationary stage, and then estimate
the ratio of the baryonic number density to the entropy density.
\subsection{Evolution of the dilaton and that of the pseudoscalar field}
In this subsection,
we consider the evolution of the dilaton and the pseudoscalar field.
Here we consider the case in which slow-roll exponential inflation is
realized and the scale factor $a(t)$ is given by
\begin{eqnarray}
a(t) = a_1 \exp \left[ \hspace{0.5mm} H_{\mathrm{inf}}(t-t_1) \right],
\label{eq:3.1}
\end{eqnarray}
where $a_1$ is the scale factor at the time $t_1$ when a given
comoving wavelength $2\pi/k$ of the $U(1)_Y$ gauge field
first crosses outside the horizon during
inflation, $k/(a_1 H_{\mathrm{inf}}) = 1$.
First, we investigate the evolution of the dilaton.
We consider the case in which
we can apply slow-roll approximation to the dilaton, that is,
\begin{eqnarray}
\left| \frac{\ddot{\Phi}}{H_{\mathrm{inf}}\dot{\Phi}} \right| \ll 1,
\label{eq:3.2}
\end{eqnarray}
and then Eq.\ (\ref{eq:2.15}) is reduced to
\begin{eqnarray}
3H_{\mathrm{inf}} \dot{\Phi} + \frac{dV[\Phi]}{d\Phi} = 0.
\label{eq:3.3}
\end{eqnarray}
The solution of this equation is given by
\begin{eqnarray}
\Phi \Eqn{=} \frac{1}{\tilde{\lambda}\kappa}
\ln \left[
\tilde{\lambda}^2 w H_{\mathrm{inf}} \left( t-t_\mathrm{R} \right)
+ \exp \left( \tilde{\lambda} \kappa {\Phi}_\mathrm{R} \right)
\right],
\label{eq:3.4} \\[3mm]
w \Eqn{\equiv} \frac{\bar{V}}{3 H_{\mathrm{inf}}^2/ \kappa^2}
\approx \frac{\bar{V}}{\rho_{\varphi}},
\label{eq:3.5}
\end{eqnarray}
where $\Phi_{\mathrm{R}}$ is
the dilaton field amplitude at the end of inflation.
Here we consider the case in which after inflation, the dilaton is finally
stabilized when it feels other contributions to its potential, e.g.,
from gaugino condensation \cite{GC} that generates a potential minimum
\cite{Barreiro, Seto}.
Once the dilaton reaches the minimum, it starts to oscillate and
finally decays into radiation at $t=t_\mathrm{R}$.
Hence we assume that the potential minimum is generated at
$\Phi = \Phi_{\mathrm{R}} = 0$, so that
the coupling $f$ between the dilaton and hypercharge electromagnetic fields is
set to unity and thus the standard Maxwell theory is recovered.
Moreover, in deriving the second approximate equality in Eq.\ (\ref{eq:3.5}),
we have used Eq.\ (\ref{eq:2.21}).
Since we have ${\rho}_{\varphi} \gg {\rho}_{\Phi}$ by assumption, $w \ll 1$.
It follows from Eq.\ (\ref{eq:3.4}) that
the slow-roll condition to the dilaton, Eq.\ (\ref{eq:3.2}),
is equivalent to the following relation:
\begin{eqnarray}
\left| \frac{\ddot{\Phi}}{H_{\mathrm{inf}}\dot{\Phi}} \right| \ll 1
\hspace{2mm} \Longleftrightarrow \hspace{2mm}
\tilde{\lambda}^2 \frac{V[\Phi]}{{\rho}_{\varphi}} \ll 1.
\label{eq:3.6}
\end{eqnarray}
In deriving this relation, we have used Eq.\ (\ref{eq:2.21}).
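Explicitly, differentiating the slow-roll equation (\ref{eq:3.3}) with respect to time and using $d^2V/d\Phi^2 = {\tilde{\lambda}}^2 {\kappa}^2 V[\Phi]$, which follows from Eq.\ (\ref{eq:2.7}), gives

```latex
% Differentiating \dot{\Phi} = -(3H_{\mathrm{inf}})^{-1} dV/d\Phi:
\begin{eqnarray*}
\ddot{\Phi} = -\frac{1}{3H_{\mathrm{inf}}}
\frac{d^2V}{d\Phi^2}\,\dot{\Phi}
\hspace{3mm} \Longrightarrow \hspace{3mm}
\left| \frac{\ddot{\Phi}}{H_{\mathrm{inf}}\dot{\Phi}} \right|
= \frac{1}{3{H_{\mathrm{inf}}}^2}\frac{d^2V}{d\Phi^2}
= \frac{{\tilde{\lambda}}^2 {\kappa}^2 V[\Phi]}{3{H_{\mathrm{inf}}}^2}
= {\tilde{\lambda}}^2 \frac{V[\Phi]}{{\rho}_{\varphi}},
\end{eqnarray*}
```

where the last equality uses Eq.\ (\ref{eq:2.21}), reproducing the right-hand side of the relation (\ref{eq:3.6}).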
If we assume that ${\tilde{\lambda}} \sim \mathcal{O}(1)$,
the second relation in (\ref{eq:3.6}),
$\tilde{\lambda}^2 V[\Phi]/{\rho}_{\varphi} \ll 1$,
is satisfied during inflation
because ${\rho}_{\varphi} \gg {\rho}_{\Phi}$.
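As a quick numerical sanity check (with illustrative parameter values and units in which $\kappa = 1$), one can verify that the solution in Eq.\ (\ref{eq:3.4}) indeed satisfies the slow-roll equation (\ref{eq:3.3}):

```python
import numpy as np

kappa, H_inf, lam_tilde = 1.0, 1.0, 1.0   # illustrative units
V_bar = 0.01                              # so that w << 1 (rho_Phi << rho_varphi)
w = V_bar / (3.0 * H_inf**2 / kappa**2)   # Eq. (3.5)
Phi_R, t_R = 0.0, 10.0                    # dilaton stabilized at Phi_R = 0

def Phi(t):
    """Slow-roll solution of Eq. (3.4)."""
    return np.log(lam_tilde**2 * w * H_inf * (t - t_R)
                  + np.exp(lam_tilde * kappa * Phi_R)) / (lam_tilde * kappa)

def dV_dPhi(p):
    # V[Phi] = V_bar * exp(-lam_tilde * kappa * Phi), Eq. (2.7)
    return -lam_tilde * kappa * V_bar * np.exp(-lam_tilde * kappa * p)

def residual(t, h=1e-6):
    """3 H_inf dPhi/dt + dV/dPhi, with the derivative taken numerically;
    it should vanish if Eq. (3.4) solves Eq. (3.3)."""
    dPhi_dt = (Phi(t + h) - Phi(t - h)) / (2.0 * h)
    return 3.0 * H_inf * dPhi_dt + dV_dPhi(Phi(t))
```

The residual vanishes to finite-difference accuracy throughout the inflationary interval $0 \le t \le t_\mathrm{R}$.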
Next, we consider the evolution of the pseudoscalar field.
The solution of Eq.\ (\ref{eq:2.16}) with Eq.\ (\ref{eq:2.8}) is
given by \cite{Garretson92}
\begin{eqnarray}
\phi = \phi_1 \exp \left\{
\frac{3}{2} \left[-1 \pm \sqrt{1- \left( \frac{2m}{3H_{\mathrm{inf}}}
\right)^2} \right] H_{\mathrm{inf}} \left( t-t_1 \right)
\right\}.
\label{eq:3.7}
\end{eqnarray}
For $m \gg H_{\mathrm{inf}}$,
we find that the approximate solution
of Eq.\ (\ref{eq:2.16}) with Eq.\ (\ref{eq:2.8}) is given
by \cite{Garretson92}
\begin{eqnarray}
\phi
\Eqn{\approx}
\phi_1 \exp \left[ -\frac{3}{2} H_{\mathrm{inf}} \left( t-t_1 \right)
\right] \sin \left[ m \left( t-t_1 \right) + \frac{\pi}{2} \right].
\label{eq:3.8}
\end{eqnarray}
In deriving Eq.\ (\ref{eq:3.8}), we have used Eq.\ (\ref{eq:3.1}).
\iffalse
Moreover, in deriving Eq.\ (\ref{eq:3.9}), we have solved
Eq.\ (\ref{eq:2.16}) by neglecting $\ddot{\phi}$ because
$\left| \ddot{\phi} /
\left( 3 H_{\mathrm{inf}} \dot{\phi} \right) \right| \ll 1$ for
$m \ll H_{\mathrm{inf}}$.
In fact, using the solution in Eq.\ (\ref{eq:3.9}), we find that
$\left| \ddot{\phi}/ \left( 3 H_{\mathrm{inf}} \dot{\phi} \right)
\right| = \left[ m / \left( 3H_{\mathrm{inf}} \right) \right]^2 \ll 1$.
\fi
Moreover, we consider the case in which after inflation,
the pseudoscalar field $\phi$ decays in the radiation-dominated stage
and hence the entropy per comoving volume remains practically constant.
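The accuracy of the approximation in Eq.\ (\ref{eq:3.8}) for $m \gg H_{\mathrm{inf}}$ can be checked by integrating Eq.\ (\ref{eq:2.16}) directly (illustrative values $H_{\mathrm{inf}}=1$, $m=50$ in arbitrary units, with initial conditions matched to Eq.\ (\ref{eq:3.8}) at $t_1=0$):

```python
import numpy as np

H, m, phi1 = 1.0, 50.0, 1.0        # m >> H_inf, illustrative units

def rhs(y):
    # phi'' + 3 H phi' + m^2 phi = 0, Eq. (2.16) with W[phi] = m^2 phi^2 / 2
    phi, dphi = y
    return np.array([dphi, -3.0 * H * dphi - m**2 * phi])

def integrate(t_end=0.5, dt=1e-4):
    """Fixed-step RK4 integration of the damped oscillator."""
    y = np.array([phi1, -1.5 * H * phi1])   # ICs matched to Eq. (3.8)
    ts, phis, t = [0.0], [y[0]], 0.0
    while t < t_end - 1e-12:
        k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt; ts.append(t); phis.append(y[0])
    return np.array(ts), np.array(phis)

ts, phis = integrate()
# Eq. (3.8): phi ~ phi1 exp(-3Ht/2) sin(mt + pi/2)
approx = phi1 * np.exp(-1.5 * H * ts) * np.sin(m * ts + 0.5 * np.pi)
```

For $m/H_{\mathrm{inf}}=50$ the two curves agree to within a few percent of the initial amplitude, with the small discrepancy set by the neglected $\mathcal{O}(H_{\mathrm{inf}}/m)$ frequency shift.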
\subsection{Evolution of the $U(1)_Y$ gauge field}
Next, we consider the evolution of the $U(1)_Y$ gauge field.
To begin with, we shall quantize the $U(1)_Y$ gauge field
$Y_{\mu}(t,\Vec{x})$.
It follows from the hypercharge electromagnetic part of our model Lagrangian
in Eq.\ (\ref{eq:2.5}) that the canonical momenta conjugate to
$Y_{\mu}(t,\Vec{x})$ are given by
\begin{eqnarray}
{\pi}_0 = 0, \hspace{5mm} {\pi}_{i} = f(\Phi) a(t) \dot{Y_i}(t,\Vec{x}).
\label{eq:3.9}
\end{eqnarray}
We impose the canonical commutation relation
between $Y_i(t,\Vec{x})$ and ${\pi}_{j}(t,\Vec{x})$,
\begin{eqnarray}
\left[ \hspace{0.5mm} Y_i(t,\Vec{x}), {\pi}_{j}(t,\Vec{y})
\hspace{0.5mm} \right] = i
\int \frac{d^3 k}{{(2\pi)}^{3}}
e^{i \Vecs{k} \cdot \left( \Vecs{x} - \Vecs{y} \right)}
\left( {\delta}_{ij} - \frac{k_i k_j}{k^2 } \right),
\label{eq:3.10}
\end{eqnarray}
where $\Vec{k}$ is the comoving wave vector, and $k$ denotes its amplitude
$|\Vec{k}|$.
From this relation, we obtain the expression for $Y_i(t,\Vec{x})$ as
\begin{eqnarray}
Y_i(t,\Vec{x}) = \int \frac{d^3 k}{{(2\pi)}^{3/2}}
\left[ \hspace{0.5mm} \hat{b}(\Vec{k})
Y_i(t,\Vec{k})e^{i \Vecs{k} \cdot \Vecs{x} }
+ {\hat{b}}^{\dagger}(\Vec{k})
{Y_i^*}(t,\Vec{k})e^{-i \Vecs{k} \cdot \Vecs{x}} \hspace{0.5mm} \right],
\label{eq:3.11}
\end{eqnarray}
where $\hat{b}(\Vec{k})$ and ${\hat{b}}^{\dagger}(\Vec{k})$
are the annihilation and creation operators which satisfy
\begin{eqnarray}
\left[ \hspace{0.5mm} \hat{b}(\Vec{k}), {\hat{b}}^{\dagger}({\Vec{k}}^{\prime}) \hspace{0.5mm} \right] =
{\delta}^3 (\Vec{k}-{\Vec{k}}^{\prime}), \hspace{5mm}
\left[ \hspace{0.5mm} \hat{b}(\Vec{k}), \hat{b}({\Vec{k}}^{\prime})
\hspace{0.5mm} \right] =
\left[ \hspace{0.5mm}
{\hat{b}}^{\dagger}(\Vec{k}), {\hat{b}}^{\dagger}({\Vec{k}}^{\prime})
\hspace{0.5mm} \right] = 0.
\label{eq:3.12}
\end{eqnarray}
It follows from Eqs.\ (\ref{eq:3.10}) and (\ref{eq:3.11}) that
the normalization condition for $Y_i(k,t)$ reads
\begin{eqnarray}
Y_i(k,t){\dot{Y}}_j^{*}(k,t) - {\dot{Y}}_j(k,t){Y_i^{*}}(k,t)
= \frac{i}{f a} \left( {\delta}_{ij} - \frac{k_i k_j}{k^2 } \right).
\label{eq:3.13}
\end{eqnarray}
From now on we choose the $x^3$ axis to lie along the spatial momentum
direction $\Vec{k}$ and denote the transverse directions by $x^{I}$ with
$I=1, 2$. From Eq.\ (\ref{eq:2.22}),
we find that the Fourier modes $Y_I(k,t)$ of
the $U(1)_Y$ gauge field satisfy the following equations:
\begin{eqnarray}
\ddot{Y}_1(k,t)
+ \left( H_{\mathrm{inf}} + \frac{\dot{f}}{f}
\right) \dot{Y}_1(k,t)
+ \frac{k^2}{a^2} Y_1(k,t)
+ik \frac{g_{\mathrm{ps}}}{M}
\frac{1}{af} \frac{d \left(f \phi \right)}{d t}
Y_2 (k,t) \Eqn{=} 0,
\label{eq:3.14} \\[3mm]
\ddot{Y}_2(k,t)
+ \left( H_{\mathrm{inf}} + \frac{\dot{f}}{f}
\right) \dot{Y}_2(k,t)
+ \frac{k^2}{a^2} Y_2(k,t)
-ik \frac{g_{\mathrm{ps}}}{M}
\frac{1}{af} \frac{d \left(f \phi \right)}{d t}
Y_1(k,t) \Eqn{=} 0.
\label{eq:3.15}
\end{eqnarray}
In order to decouple the system of Eqs.\ (\ref{eq:3.14}) and (\ref{eq:3.15}),
we consider circular polarizations expressed by
the combination of linear polarizations as
$Y_{\pm}(k,t) \equiv Y_1(k,t) \pm i Y_2(k,t)$.
From Eqs.\ (\ref{eq:3.14}) and (\ref{eq:3.15}), we find that
$Y_{\pm}(k,t)$ satisfies the following equation:
\begin{eqnarray}
\ddot{Y}_{\pm}(k,t)
+ \left( H_{\mathrm{inf}} + \frac{\dot{f}}{f}
\right) \dot{Y}_{\pm}(k,t)
+ \left[
\left( \frac{k}{a} \right)^2
\pm \frac{g_{\mathrm{ps}}}{M}
\left( \frac{\dot{f}}{f} \phi + \dot{\phi} \right)
\left( \frac{k}{a} \right)
\right] Y_{\pm}(k,t) = 0.
\label{eq:3.16}
\end{eqnarray}
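For completeness, the decoupling can be checked directly:
forming the combination of Eq.\ (\ref{eq:3.14}) plus or minus $i$ times
Eq.\ (\ref{eq:3.15}) and using
$Y_2(k,t) \mp i Y_1(k,t) = \mp i \hspace{0.5mm} Y_{\pm}(k,t)$,
the pseudoscalar coupling terms combine as
\begin{eqnarray}
ik \frac{g_{\mathrm{ps}}}{M}
\frac{1}{af} \frac{d \left(f \phi \right)}{d t}
\left[ \hspace{0.5mm} Y_2(k,t) \mp i Y_1(k,t) \hspace{0.5mm} \right]
= \pm \frac{g_{\mathrm{ps}}}{M}
\left( \frac{\dot{f}}{f} \phi + \dot{\phi} \right)
\left( \frac{k}{a} \right) Y_{\pm}(k,t),
\nonumber
\end{eqnarray}
which reproduces the helicity-dependent term linear in $k$
in Eq.\ (\ref{eq:3.16}).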
Since it is difficult to obtain the analytic solution of Eq.\ (\ref{eq:3.16}),
we numerically solve this equation during inflation.
Here we assume that the initial amplitudes of $Y_+(k,t)$ and $Y_-(k,t)$
have the same value, and we take the time $t_1$ as the initial time.
In the inflationary stage,
the coupling between the pseudoscalar field and
the hypercharge electromagnetic fields makes the evolution of $Y_+(k,t)$
differ from that of $Y_-(k,t)$, and thus the hypermagnetic helicity is
generated \cite{Field00, BO99, Giovannini00, Campanelli05}.
During inflation ($t_1 \leq t \leq t_{\mathrm{R}}$),
the amplitude of $Y_{\pm}(k,t)$ is expressed as
\begin{eqnarray}
Y_{\pm}(k,t) = C_{\pm}(t) Y_{\pm}(k,t_1),
\label{eq:3.17}
\end{eqnarray}
where $C_{\pm}(t)$ is a numerical factor
obtained by solving Eq.\ (\ref{eq:3.16}) numerically, and we take $C_{\pm}(t_1)=1$.
In order to obtain the initial amplitude of $Y_\pm(k,t)$,
we here consider the solution of Eq.\ (\ref{eq:3.16}) inside the
horizon, i.e., on subhorizon scales, $k/(aH) \gg 1$.
Replacing the independent variable $t$ by $\eta$, we find that
in the short-wavelength
limit, $k \rightarrow \infty$,
Eq.\ (\ref{eq:3.16}) is approximately given by
\begin{eqnarray}
Y_{\pm}^{\prime \prime}(k,\eta) +
\frac{f^{\prime}}{f} Y_{\pm}^{\prime}(k,\eta)
+ k^2 Y_{\pm}(k,\eta) = 0,
\label{eq:3.18}
\end{eqnarray}
where
the prime denotes differentiation with respect to the conformal time $\eta$.
Here, in deriving Eq.\ (\ref{eq:3.18}), we have kept only the term
proportional to $k^2$ in the square brackets of Eq.\ (\ref{eq:3.16}) and
neglected the one proportional to $k$.
The inside solution is given by
\begin{eqnarray}
Y_{\pm}^{\mathrm{in}} (k,\eta) =
\frac{1}{\sqrt{2k}} f^{-1/2} e^{-ik\eta},
\label{eq:3.19}
\end{eqnarray}
where we have determined the coefficient of this solution by requiring that
the vacuum reduces to the one in Minkowski spacetime in the short-wavelength
limit.
In fact, using Eq.\ (\ref{eq:3.19}), we find
\begin{eqnarray}
{Y_{\pm}^{\mathrm{in}}}^{\prime \prime}(k,\eta) +
\frac{f^{\prime}}{f} {Y_{\pm}^{\mathrm{in}}}^{\prime}(k,\eta)
\Eqn{=}
\left\{ - k^2
- \frac{1}{2} \left[ -\frac{1}{2} \left( \frac{f^{\prime}}{f} \right)^2 +
\frac{f^{\prime \prime}}{f}
\right] \right\} Y_{\pm}^{\mathrm{in}}(k,\eta)
\label{eq:3.20} \\[3mm]
\Eqn{\approx}
- k^2 Y_{\pm}^{\mathrm{in}} (k,\eta),
\label{eq:3.21}
\end{eqnarray}
where the approximate equality in Eq.\ (\ref{eq:3.21}) follows from
$-k \eta \gg 1$. Hence we see that
the inside solution in Eq.\ (\ref{eq:3.19}) approximately satisfies
Eq.\ (\ref{eq:3.18}).
Here we assume that the initial amplitude of $Y_{\pm}(k,t)$
is approximately given by the inside solution in Eq.\ (\ref{eq:3.19}).
Hence the initial amplitude of $Y_{\pm}(k,t)$ at the time $t_1$ is given by
\begin{eqnarray}
|Y_{\pm}(k,t_1)| \approx \frac{1}{\sqrt{2k f(t_1)}}.
\label{eq:3.22}
\end{eqnarray}
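The numerical integration described above can be sketched as follows. This is a minimal sketch, not the actual computation of the paper: we work in units $H_{\mathrm{inf}}=1$, take $k/a$ from Eq.\ (3.1) with $k/(a_1 H_{\mathrm{inf}})=1$, and use the initial data of Eqs.\ (3.19) and (3.22) up to normalization; the coefficient functions `dot_f_over_f` and `coupling` are hypothetical placeholders standing in for the dilaton and pseudoscalar evolution of Eqs.\ (3.7) and (3.8).

```python
import math

# Integrate Eq. (3.16) for Y_pm(k, t) during inflation with RK4.
# Units: H_inf = 1. The coefficient functions below are illustrative
# placeholders (hypothetical); only the structure of Eq. (3.16) is
# taken from the text.
H_INF = 1.0

def dot_f_over_f(t):
    return 0.5 * math.exp(-0.1 * t)  # placeholder for f'/f

def coupling(t):
    # placeholder for (g_ps / M) (f'/f phi + phidot)
    return 0.1 * math.exp(-0.1 * t)

def k_over_a(t, t1=1.0):
    # k/a = H_inf exp(-H_inf (t - t1)), with k/(a1 H_inf) = 1
    return H_INF * math.exp(-H_INF * (t - t1))

def rhs(t, y, ydot, sign):
    # Eq. (3.16): ddot(Y) = -(H + f'/f) dot(Y) - [(k/a)^2 +- coupling (k/a)] Y
    ka = k_over_a(t)
    return (-(H_INF + dot_f_over_f(t)) * ydot
            - (ka * ka + sign * coupling(t) * ka) * y)

def evolve(sign, t1=1.0, t_end=10.0, n=4000):
    # WKB-like initial data of Eq. (3.19), normalized so |Y(t1)| = 1
    y, ydot = 1.0 + 0.0j, -1.0j * k_over_a(t1)
    h = (t_end - t1) / n
    t = t1
    for _ in range(n):
        # classical RK4 step for the first-order system (y, ydot)
        k1y, k1v = ydot, rhs(t, y, ydot, sign)
        k2y, k2v = ydot + 0.5 * h * k1v, rhs(t + 0.5 * h, y + 0.5 * h * k1y, ydot + 0.5 * h * k1v, sign)
        k3y, k3v = ydot + 0.5 * h * k2v, rhs(t + 0.5 * h, y + 0.5 * h * k2y, ydot + 0.5 * h * k2v, sign)
        k4y, k4v = ydot + h * k3v, rhs(t + h, y + h * k3y, ydot + h * k3v, sign)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        ydot += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += h
    return abs(y)  # = C_pm(t_end), since |Y_pm(k, t1)| = 1 here

c_plus, c_minus = evolve(+1), evolve(-1)
```

The helicity asymmetry appears as $C_+ \neq C_-$ once the coupling is switched on; both freeze to nearly constant values after several Hubble times, as in Figs.~1 and 2.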
The numerical results of the evolution of $C_{\pm}(t)$ are shown in
Figs.~1 and 2.
Fig.~1 depicts the case in which
$H_{\mathrm{inf}} = 10^{10}$GeV, $m = 10^9$GeV, $\lambda = - 133$,
$\tilde{\lambda}=1.0$,
$w=1/(75)$, $\phi_1 = 10^9$GeV, $M = 10^9$GeV, and
$g_{\mathrm{ps}} = 1.0$
(the case (ii) in Table \ref{table:1} shown in the next subsection).
On the other hand,
Fig.~2 depicts the case in which
$H_{\mathrm{inf}} = 10^{10}$GeV, $m = 10^{12}$GeV,
$\lambda = - 135$,
$\tilde{\lambda}=1.0$,
$w=1/(75)$, $\phi_1 = 10^{12}$GeV, $M = 10^{12}$GeV, and
$g_{\mathrm{ps}} = 1.0$ (the case (iii) in Table \ref{table:1}).
In Figs.~1 and 2,
we have used the evolution of $\phi$ in the positive sign equation in
(\ref{eq:3.7}) and that in Eq.\ (\ref{eq:3.8}), respectively.
Moreover, we have used
$k/a = \exp \left[ - H_{\mathrm{inf}} \left( t-t_1\right)\right]
H_{\mathrm{inf}}$, which follows from Eq.\ (\ref{eq:3.1}) and
$k/\left( a_1 H_{\mathrm{inf}}\right)=1$.
The solid curves represent $C_+(t)$ and the dotted curves
represent $C_-(t)$.
Here we have started to calculate Eq.\ (\ref{eq:3.16}) numerically at
$t=t_1= H_{\mathrm{inf}}^{-1}$ and we assume that the initial value
of $C_\pm(t)$ at $t=t_1$ is $C_\pm(t_1) = 1.0$.
From Figs.\ 1 and 2, we see that
after several Hubble expansion times,
both $C_+(t)$ and $C_-(t)$ become almost constant.
This qualitative behavior is common to the case $ m < H_{\mathrm{inf}}$
and the case $m \gg H_{\mathrm{inf}}$.
Finally, we note that if we take a larger value of $g_{\mathrm{ps}}$
and/or that of $\phi_1/M$ (in all the cases in Table \ref{table:1}
we have taken $\phi_1/M = 1.0$), the difference between
the evolution and amplitude of $C_+(t)$ and those of $C_-(t)$ becomes larger.
\subsection{Chern-Simons number density}
In this subsection, we consider the generation of the
Chern-Simons number density in the inflationary stage and
estimate the resultant baryon asymmetry.
The proper hyperelectric and hypermagnetic fields are given by \cite{Ratra}
\begin{eqnarray}
\Eqn{} {E_Y}_i^{\mathrm{proper}}(t,\Vec{x})
= a^{-1}{E_Y}_i(t,\Vec{x}) = -a^{-1}\dot{Y}_i(t,\Vec{x}),
\label{eq:3.23} \\[3mm]
\Eqn{} {B_Y}_i^{\mathrm{proper}}(t,\Vec{x})
= a^{-1}{B_Y}_i(t,\Vec{x}) =
a^{-2}{\epsilon}_{ijk}{\partial}_j Y_k(t,\Vec{x}),
\label{eq:3.24}
\end{eqnarray}
where ${E_Y}_i(t,\Vec{x})$ and ${B_Y}_i(t,\Vec{x})$ are
the comoving hyperelectric and hypermagnetic fields,
and ${\epsilon}_{ijk}$ is the totally antisymmetric tensor
(\hspace{0.5mm}${\epsilon}_{123}=1$\hspace{0.5mm}).
The density of the baryonic number $n_\mathrm{B}$ is given by \cite{GS98-2}
\begin{eqnarray}
n_\mathrm{B} \Eqn{=} -\frac{n_\mathrm{f}}{2} \Delta n_\mathrm{CS},
\label{eq:3.25} \\[3mm]
\Delta n_\mathrm{CS} \Eqn{=} -
\frac{g^{\prime \hspace{0.3mm} 2}}{4 \pi^2}
\int^{t} {\Vec{E}}_Y \cdot {\Vec{B}}_Y d \tilde{t}.
\label{eq:3.26}
\end{eqnarray}
Here, $n_\mathrm{f}$ is the number of fermionic generations
(throughout this paper we use $n_\mathrm{f} = 3$)
and
$\Delta n_\mathrm{CS}$ is the Chern-Simons number density.
It follows from Eqs.\ (\ref{eq:3.23}) and (\ref{eq:3.24}) that
the Fourier modes ${E_Y}_\pm^{\mathrm{proper}}(k,t) =
{E_Y}_1^{\mathrm{proper}}(k,t) \pm i {E_Y}_2^{\mathrm{proper}}(k,t)$
and ${B_Y}_\pm^{\mathrm{proper}}(k,t) =
{B_Y}_1^{\mathrm{proper}}(k,t) \pm i {B_Y}_2^{\mathrm{proper}}(k,t)$
satisfy the following relations:
\begin{eqnarray}
{E_Y}_\pm^{\mathrm{proper}}(k,t) \Eqn{=}
\pm \frac{1}{k} \frac{\partial {B_Y}_\pm^{\mathrm{proper}}(k,t)}{\partial t},
\label{eq:3.27} \\[3mm]
{E_Y}_\pm^{\mathrm{proper}}(k,t) {B_Y}_\pm^{\mathrm{proper}}(k,t) \Eqn{=}
\pm \frac{1}{2}\frac{1}{k} \frac{\partial
\left[ {{B_Y}_\pm^{\mathrm{proper}}}(k,t) \right]^2}{\partial t}.
\label{eq:3.28}
\end{eqnarray}
On the other hand,
the energy density of the proper hypermagnetic field
in Fourier space is given by
\begin{eqnarray}
{\rho}_{B_Y}(k,t)
=
\frac{1}{2}
\left[
\left| {B_Y}_+^{\mathrm{proper}}(k,t)
\right|^2
+
\left| {B_Y}_-^{\mathrm{proper}}(k,t)
\right|^2 \right] f,
\label{eq:3.29}
\end{eqnarray}
\begin{eqnarray}
\left| {B_Y}_\pm^{\mathrm{proper}}(k,t)
\right|^2
=
\frac{1}{a^2} \left( \frac{k}{a} \right)^2
|Y_\pm(k,t)|^2.
\label{eq:3.30}
\end{eqnarray}
In deriving Eq.\ (\ref{eq:3.30}), we have used Eq.\ (\ref{eq:3.24}).
Multiplying ${\rho}_{B_Y}(k,t)$ by the phase-space density $4\pi k^3/(2\pi)^3$,
we obtain the energy density of the proper hypermagnetic field
in position space
\begin{eqnarray}
{\rho}_{B_Y}(L,t) =
\frac{k^3}{4{\pi}^2}
\left[
\left| {B_Y}_+^{\mathrm{proper}}(k,t)
\right|^2
+
\left| {B_Y}_-^{\mathrm{proper}}(k,t)
\right|^2 \right] f,
\label{eq:3.31}
\end{eqnarray}
on a comoving scale $L=2\pi/k$.
Using Eqs.\ (\ref{eq:3.17}), (\ref{eq:3.26}), (\ref{eq:3.28}),
(\ref{eq:3.30}), and (\ref{eq:3.31}), we find that
the Chern-Simons number density in the inflationary stage is given by
\begin{eqnarray}
\Delta n_\mathrm{CS} \Eqn{=}
- \frac{g^{\prime \hspace{0.3mm} 2}}{4 \pi^2}
\frac{1}{k} \frac{1}{f} {\rho}_{B_Y}(L,t) \mathcal{A}(t),
\label{eq:3.32} \\[3mm]
\mathcal{A}(t) \Eqn{=}
\frac{|C_+(t)|^2 - |C_-(t)|^2}{|C_+(t)|^2 + |C_-(t)|^2}.
\label{eq:3.33}
\end{eqnarray}
Here we consider the case in which
after inflation the Universe is reheated immediately at $t=t_\mathrm{R}$.
Moreover, we assume that the instantaneous reheating stage
is much before the EWPT
(the background temperature at the EWPT is $T_\mathrm{EW} \sim 100$GeV).
The conductivity of the Universe ${\sigma}_\mathrm{c}$
is negligibly small during inflation, because there are few charged particles
at that time.
After reheating, however, a large number of charged particles are produced,
so that the conductivity immediately jumps to a large value:\
${\sigma}_\mathrm{c} \gg H \hspace{1.5mm}
(\hspace{0.5mm}t \geq t_\mathrm{R}\hspace{0.5mm})$.
Consequently,
for a large enough conductivity at the instantaneous reheating stage,
hyperelectric fields accelerate charged particles and dissipate.
On the other hand, the proper hypermagnetic fields evolve in
proportion to $a^{-2}(t)$ in the radiation-dominated stage
and the subsequent matter-dominated stage
(\hspace{0.5mm}$t \geq t_\mathrm{R}$\hspace{0.5mm}) \cite{Ratra, Bamba1}.
Furthermore, the hypermagnetic helicity, i.e., the Chern-Simons number, is
conserved \cite{GS98-2, EParker, Biskamp}.
The Chern-Simons number will be released at the EWPT in the form of fermions,
which will not be destroyed by the
sphaleron processes \cite{Sphaleron} if the EWPT is
strongly first order \cite{GS98-1, GS98-2}. Although the EWPT is
not strongly first order in the MSM,
the EWPT could be strongly first order
in the extensions of the MSM \cite{Funakubo05}.
Moreover, at the EWPT the hypermagnetic fields are replaced by the ordinary
magnetic fields \cite{GS98-1, GS98-2}.
We now discuss finite-conductivity effects.
Suppose that the conductivity already has a finite value
in the last stage of inflation. In this case,
a conductivity term ${\sigma}_\mathrm{c}$ is added inside
the parentheses in the second term on the right-hand side of
Eq.\ (\ref{eq:3.16}), i.e., in the coefficient of
$\dot{Y}_{\pm}(k,t)$. This coefficient, however, already contains
the term $\dot{f}/f$, which has the same physical effect as a finite
conductivity: it makes both $C_+(t)$ and $C_-(t)$ in Eq.\ (\ref{eq:3.17})
become almost constant after several Hubble expansion times.
In fact, we have numerically calculated the evolution of
$C_+(t)$ and $C_-(t)$ for a finite conductivity in the last stage of
inflation, e.g., ${\sigma}_\mathrm{c} = 100 H_{\mathrm{inf}}$,
and found the results to be almost the same as those obtained
without the finite-conductivity effects, precisely because the
$\dot{f}/f$ term in the coefficient of $\dot{Y}_{\pm}(k,t)$
already plays the role of ${\sigma}_\mathrm{c}$.
In the models in Refs.~\cite{BO99, Giovannini00}, finite-conductivity
effects do influence the results, because in these models the coupling
$f(\Phi)$ between the dilaton and the hypercharge electromagnetic fields
is not considered and hence the term $\dot{f}/f$ is absent from
the coefficient of $\dot{Y}_{\pm}(k,t)$. In our model, by contrast,
this term is present, and the finite-conductivity effects have
little influence on the results.
We therefore consider the assumption that
after reheating the conductivity immediately jumps to a large value,
${\sigma}_\mathrm{c}\gg H \hspace{1.5mm}
(\hspace{0.5mm}t \geq t_\mathrm{R}\hspace{0.5mm})$, to be appropriate,
and the results obtained under this assumption to be reliable.
It follows from Eqs.\ (\ref{eq:3.25}), (\ref{eq:3.32}),
and (\ref{eq:3.33}) that after the EWPT,
the ratio of the density of the baryonic number $n_\mathrm{B}$
to the entropy density $s$ is given by
\begin{eqnarray}
\frac{n_\mathrm{B}}{s} \Eqn{=}
n_\mathrm{f} \frac{g^{\prime \hspace{0.3mm} 2}}{8\pi^2}
\frac{1}{k} \frac{\rho_B (L,t)}{s} \mathcal{A}(t_\mathrm{R}),
\label{eq:3.34}
\end{eqnarray}
with
\begin{eqnarray}
\rho_B (L,t) \Eqn{=}
\frac{1}{8\pi^2} \frac{1}{f(t_1)} \left( \frac{k}{a} \right)^4
\left[ |C_+(t_\mathrm{R})|^2 + |C_-(t_\mathrm{R})|^2 \right],
\label{eq:3.35} \\[3mm]
f(t_1) \Eqn{=} \left( 1 - \tilde{\lambda}^2 w N \right)^{-X},
\label{eq:3.36} \\[3mm]
N \Eqn{=} H_{\mathrm{inf}} \left( t_\mathrm{R} - t_1 \right),
\label{eq:3.37} \\[3mm]
X \Eqn{\equiv} \frac{\lambda}{\tilde{\lambda}},
\label{eq:3.38}
\end{eqnarray}
where
we have used the fact that when $t \geq t_\mathrm{R}$,
$f = 1$, and that after the EWPT,
$\rho_{B_Y} (L,t) \rightarrow \rho_B (L,t)$, where
$\rho_B (L,t)$ is the energy density of the ordinary magnetic fields.
Moreover, we have taken into account the fact that
for a large enough conductivity after the instantaneous reheating stage,
the proper hypermagnetic fields evolve in proportion to $a^{-2}(t)$
as stated above.
Here, $N$ is the number of $e$-folds between the time $t_1$
and the end of inflation $t_{\mathrm{R}}$.
Furthermore, in deriving Eq.\ (\ref{eq:3.36}), we have used
Eqs.\ (\ref{eq:2.6}) and (\ref{eq:3.4}) with $\Phi_{\mathrm{R}} = 0$.
In order to estimate the value of $n_\mathrm{B}/s$,
we use the following relations \cite{Kolb}:
\begin{eqnarray}
H_{0} \Eqn{=} 100 h
\hspace{1mm} \mathrm{km} \hspace{1mm} {\mathrm{s}}^{-1} \hspace{1mm}
{\mathrm{Mpc}}^{-1}
= 2.1 h \times 10^{-42} {\mathrm{GeV}},
\label{eq:3.39} \\[3mm]
{\rho}_{\varphi} \left( t_{\mathrm{R}} \right) \Eqn{=}
\frac{{\pi}^2}{30} g_\mathrm{R} {T_\mathrm{R}}^4 \hspace{3mm}
\left( g_{\mathrm{R}} \approx 200 \right),
\label{eq:3.40} \\[3mm]
N \Eqn{=} 45 + \ln \left( \frac{L}{\mathrm{[Mpc]}} \right) +
\ln \left\{ \frac{ \left[ 30/({\pi}^2 g_\mathrm{R} ) \right]^{1/12}
{{\rho}_{\varphi}}^{1/4} }
{10^{38/3} \hspace{1mm} \mathrm{[GeV]}} \right\},
\label{eq:3.41} \\[3mm]
\frac{a_{\mathrm{R}}}{a_0}
\Eqn{=}
\left( \frac{ g_{\mathrm{R}} }{3.91} \right)^{-1/3}
\frac{T_{ \gamma 0} }{ T_{\mathrm{R}} }
\hspace{0mm} \approx \hspace{0mm}
\frac{2.35\times10^{-13} \hspace{1mm} [\mathrm{GeV}]}{ 3.71 T_{\mathrm{R}}}
\hspace{3mm} \left(
T_{ \gamma 0} \approx 2.73 \hspace{1mm} \mathrm{K} \right),
\label{eq:3.42} \\[3mm]
s_0 \Eqn{=} 2.97 \times 10^{3}
\left( \frac{T_{ \gamma 0}}{2.75 \hspace{1mm} [\mathrm{K}]} \right)^3
{\mathrm{cm}}^{-3},
\label{eq:3.43}
\end{eqnarray}
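As a quick consistency check of the numerical prefactors in Eq.\ (\ref{eq:3.42}), the factors can be evaluated directly; the only outside input is the conversion $1~\mathrm{K} \approx 8.617 \times 10^{-14}$ GeV, and the reheating temperature below is a sample value, not one of the cases of Table \ref{table:1}.

```python
# Verify the numerical factors of Eq. (3.42).
K_IN_GEV = 8.617e-14          # Boltzmann conversion: 1 K in GeV
T_GAMMA0 = 2.73 * K_IN_GEV    # present CMB temperature, ~2.35e-13 GeV
G_R = 200.0                   # relativistic degrees of freedom at reheating

# (g_R / 3.91)^(-1/3) is the 1/3.71 factor quoted in Eq. (3.42)
factor = (G_R / 3.91) ** (-1.0 / 3.0)

# a_R / a_0 for a sample reheating temperature T_R = 1e10 GeV
T_R = 1.0e10
a_ratio = factor * T_GAMMA0 / T_R   # ~ 6.3e-24
```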
where $H_{0}$
is the Hubble constant at
the present time (throughout this paper we use $h=0.70$ \cite{HST}),
$g_{\mathrm{R}}$ is the total number of degrees of freedom for
relativistic particles at the reheating epoch,
$T_{\mathrm{R}}$ is the reheating temperature,
$T_{\gamma 0}$ is the present temperature of
the cosmic microwave background (CMB) radiation,
and $s_0$ is the entropy density at the present time.
Moreover, we use $g^{\prime \hspace{0.3mm} 2}/(4\pi) =
\alpha_{\mathrm{EM}}/ \cos^2 \theta_{\mathrm{w}}$, where
$\alpha_{\mathrm{EM}} = 1/(137)$ is the fine-structure constant and
$\theta_{\mathrm{w}}$ is the weak mixing angle. The experimentally measured
value of $\theta_{\mathrm{w}}$
is given by $\sin^2 \theta_{\mathrm{w}} \simeq 0.23$
\cite{Trodden99}.
\begin{table}[tbp]
\caption{
Estimates of the value of $n_\mathrm{B}/s$
for $w=1/(75)$, $\phi_1 = M =m$, $g_{\mathrm{ps}} = 1.0$, and
$\tilde{\lambda}=1.0$.
Here we have used the evolution of $\phi$
in the positive sign equation in (\ref{eq:3.7})
for the cases (i), (ii), and (iv),
and that in Eq.\ (\ref{eq:3.8}) for those (iii) and (v).
}
\begin{center}
\tabcolsep = 2mm
\begin{tabular}
{cccccccc}
\hline
\hline
& $\left| n_\mathrm{B}/s \right |$
& $B(H_0^{-1},t_0) \hspace{1mm} [\mathrm{G}]$
& $H_{\mathrm{inf}} \hspace{1mm} [\mathrm{GeV}]$
& $m \hspace{1mm} [\mathrm{GeV}]$
& $C_+(t_{\mathrm{R}})$
& $C_-(t_{\mathrm{R}})$
& $X$
\\[0mm]
\hline
(i)
& $3.6 \times 10^{-10}$
& $2.7 \times 10^{-24}$
& $1.0 \times 10^{14}$
& $1.0 \times 10^{12}$
& $0.367$
& $2.23$
& $-1.11 \times 10^2$
\\[0mm]
(ii)
& $1.0 \times 10^{-10}$
& $1.4 \times 10^{-24}$
& $1.0 \times 10^{10}$
& $1.0 \times 10^{9}$
& $0.369$
& $2.18$
& $-1.33 \times 10^2$
\\[0mm]
(iii)
& $1.5 \times 10^{-10}$
& $3.5 \times 10^{-24}$
& $1.0 \times 10^{10}$
& $1.0 \times 10^{12}$
& $1.04$
& $0.816$
& $-1.35 \times 10^2$
\\[0mm]
(iv)
& $0.96 \times 10^{-10}$
& $1.4 \times 10^{-24}$
& $1.0 \times 10^{5}$
& $1.0 \times 10^{3}$
& $0.368$
& $2.17$
& $-1.65 \times 10^2$
\\[0mm]
(v)
& $2.8 \times 10^{-10}$
& $4.7 \times 10^{-24}$
& $1.0 \times 10^{5}$
& $1.0 \times 10^{7}$
& $1.04$
& $0.808$
& $-1.68 \times 10^2$
\\[1mm]
\hline
\hline
\end{tabular}
\end{center}
\label{table:1}
\end{table}
Consequently, from Eq.\ (\ref{eq:3.34}), we can estimate the
ratio of the density of the baryonic number $n_\mathrm{B}$
to the entropy density $s$, which is observationally estimated as
$n_\mathrm{B}/s = 0.92 \times 10^{-10}$
from the first year Wilkinson Microwave Anisotropy Probe
(WMAP) data on the anisotropy of the CMB radiation \cite{Spergel}.
Table \ref{table:1} displays the estimate of the value of $n_\mathrm{B}/s$
for $w=1/(75)$, $\phi_1 = M =m$, $g_{\mathrm{ps}} = 1.0$, and
$\tilde{\lambda}=1.0$.
Here we have used the evolution of $\phi$
in the positive sign equation in (\ref{eq:3.7})
for the cases (i), (ii), and (iv),
and that in Eq.\ (\ref{eq:3.8}) for those (iii) and (v).
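The asymmetry factor $\mathcal{A}(t_\mathrm{R})$ of Eq.\ (\ref{eq:3.33}) entering these estimates follows directly from the tabulated values of $C_\pm(t_\mathrm{R})$; as a short check (cases (i) and (iii) of Table \ref{table:1}):

```python
def asymmetry(c_plus, c_minus):
    # Eq. (3.33): A = (|C+|^2 - |C-|^2) / (|C+|^2 + |C-|^2)
    p, m = abs(c_plus) ** 2, abs(c_minus) ** 2
    return (p - m) / (p + m)

# C_pm(t_R) from Table 1
a_i   = asymmetry(0.367, 2.23)    # case (i):   A ~ -0.947
a_iii = asymmetry(1.04, 0.816)    # case (iii): A ~ +0.238
```

The sign of $\mathcal{A}(t_\mathrm{R})$ fixes the sign of the generated Chern-Simons number, while only its magnitude enters $|n_\mathrm{B}/s|$.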
In Table \ref{table:1}, we have considered the BAU
induced by the helicity of
the hypermagnetic fields whose present scale is the present horizon scale,
$L = H_0^{-1}$. Hence the resultant BAU is
homogeneous over the present horizon.
Moreover,
in estimating the value of the energy density of the magnetic fields
in Eq.\ (\ref{eq:3.35}), we have used $a_{\mathrm{R}}/a_1 = \exp (N)$ and
$k / \left( a_1 H_{\mathrm{inf}} \right) = 1$.
Furthermore,
the WMAP three year data on temperature fluctuations \cite{Spergel06}
yield a constraint on $H_\mathrm{inf}$
from tensor perturbations \cite{Abbott1, Rubakov}:
$H_\mathrm{inf} < 5.9 \times 10^{14}$GeV.
From Table \ref{table:1}, we see that if the magnetic fields with
the field strength $\sim 10^{-24}$G on the horizon scale at the
present time are generated, the resultant value of $n_\mathrm{B}/s$
can be as large as $10^{-10}$, which is consistent with the above
observational estimation of WMAP.
If the generated magnetic fields with field
strength $\sim 10^{-24}$G on the horizon scale at the present time
play the role of
seed fields for galactic magnetic fields,
their strength on galactic scales can be estimated as follows.
Assuming magnetic flux conservation, $B r^2 = \mathrm{constant}$,
where $r$ is a scale, and using the scale ratio
$r_\mathrm{gal}/H_0^{-1} \sim 10^{-5}$,
where $r_\mathrm{gal}$ is the scale of galaxies,
we obtain a seed field strength
$B_\mathrm{gal}^{(\mathrm{seed})} \sim 10^{-14}~\mathrm{G}$, which
is sufficient for subsequent dynamo amplification.
Thus the main problem of cosmological magnetic fields
can be solved: the generated fields have both the large spatial scale and
the large amplitude
required of relic seed fields in dynamo theories.
Here we state the reason why in this model
the amplitude of the generated magnetic fields can be as large as
$\sim 10^{-24}$G on the horizon scale at the present time.
The reason is that the conformal invariance of the
hypercharge electromagnetic fields is strongly broken
through the coupling between the dilaton and
the hypercharge electromagnetic fields, by introducing
a huge hierarchy
between the coupling constant $\lambda$ of the dilaton to
the hypercharge electromagnetic fields
and the coupling constant $\tilde\lambda$ of the dilaton potential,
$|X| = \left|\lambda/\tilde{\lambda}\right| \gg 1$.
From Table \ref{table:1}, we see that
in order to generate
magnetic fields with
field strength $\sim 10^{-24}$G on the horizon scale at the
present time,
we have to introduce the huge hierarchy $|X| \gg 1$.
It follows from Eqs.\ (\ref{eq:3.22}) and
(\ref{eq:3.36}) that if $|X| \gg 1$, the value of $f(t_1)$ is very small
and hence the amplitude of
$|Y_{\pm}(k,t_1)|$ is sufficiently large.
Moreover, it follows from Figs. 1 and 2 that
after several Hubble expansion times,
both $C_+(t)$ and $C_-(t)$ become almost constant.
Hence it follows from Eq.\ (\ref{eq:3.17}) that
the amplitude of $|Y_{\pm}(k,t)|$ is sufficiently large.
Consequently, it follows from Eqs.\ (\ref{eq:3.30}), (\ref{eq:3.31}), and
(\ref{eq:3.35}) that the energy density of the generated magnetic fields,
i.e., the resultant magnetic field amplitude can be as large as
$\sim 10^{-24}$G on the horizon scale at the present time.
This is the reason why the present amplitude of the generated magnetic fields
in this model is larger than that in other inflation scenarios
\cite{Scalar}.
The physical reason why the coupling of the dilaton to the
hypercharge electromagnetic fields has to be much stronger than
the coupling in the dilaton potential, in order that the amplitude of the
generated magnetic fields could be as large as $\sim 10^{-24}$G on the horizon
scale at the present time, is as follows.
To generate magnetic fields of sufficient strength,
the conformal invariance of the hypercharge electromagnetic fields has to be
strongly broken through the coupling $f(\Phi)$ between the dilaton and
the hypercharge electromagnetic fields. For this condition to
be met, the change of $f(\Phi)$ during the inflationary stage
must be much larger than
the change of the dilaton potential $V[\Phi]$ at that stage;
in other words, $f(\Phi)$ must evolve more rapidly than the dilaton
potential $V[\Phi]$.
Thus the coupling of the dilaton to the hypercharge electromagnetic
fields has to be much stronger than the coupling in the dilaton potential,
i.e., the value of $|\lambda|$ has to be much larger than that of
$\tilde\lambda$.
In this case, the value of $f(\Phi)$ can change from $f(t_1) \ll 1$ to
$f(t_\mathrm{R}) = 1$ in the inflationary stage
($t_1 \leq t \leq t_{\mathrm{R}}$).
Consequently, from Eqs.\ (\ref{eq:3.35}) and (\ref{eq:3.36}), we see that
the amplitude of the generated magnetic fields can be sufficiently large
because $f(t_1)$ is much smaller than unity.
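To see how the hierarchy $|X| \gg 1$ suppresses $f(t_1)$, Eq.\ (\ref{eq:3.36}) can be evaluated directly; the number of $e$-folds $N$ below is an assumed illustrative value (the formula requires $\tilde{\lambda}^2 w N < 1$), and $X=-133$ corresponds to case (ii) of Table \ref{table:1}.

```python
def f_t1(N, w=1.0 / 75.0, lam_tilde=1.0, X=-133.0):
    # Eq. (3.36): f(t_1) = (1 - lam_tilde^2 w N)^(-X), with X = lambda / lam_tilde
    base = 1.0 - lam_tilde ** 2 * w * N
    assert base > 0.0, "requires lam_tilde^2 * w * N < 1"
    return base ** (-X)

# N = 70 is an assumed value close to the bound N < 75 set by w = 1/75
f1 = f_t1(N=70.0)   # roughly 4e-157: an enormous suppression
```

With $|X| \gg 1$ one gets $f(t_1) \ll 1$, and by Eq.\ (\ref{eq:3.22}) the initial amplitude $|Y_{\pm}(k,t_1)| \propto f(t_1)^{-1/2}$ is correspondingly enhanced.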
\section{Conclusion}
In the present paper we have studied
the generation of the BAU from the helicity of hypermagnetic fields
in inflationary cosmology,
taking into account the breaking of the conformal invariance of
the hypercharge electromagnetic fields by introducing
both a coupling with the dilaton and that with a pseudoscalar field.
Owing to the coupling between the pseudoscalar field and
the hypercharge electromagnetic fields, the hypermagnetic helicity,
which corresponds to the Chern-Simons number, is
generated in the inflationary stage.
The Chern-Simons number
stored in the hypercharge electromagnetic fields
is converted into fermions at the EWPT due to the Abelian anomaly,
and at the same time
the hypermagnetic fields are replaced by the ordinary magnetic fields,
which survive after the EWPT.
The generated fermions can survive after the EWPT
if the EWPT is strongly first order.
In the extensions of the MSM, the above condition could be satisfied.
As a result, we have found that
if the magnetic fields with
sufficient strength on the horizon scale at the
present time are generated, the resultant value of the ratio of
the density of the baryonic number to the entropy density,
$n_\mathrm{B}/s$,
can be as large as $10^{-10}$, which is consistent with the magnitude
of the BAU suggested by observations obtained from WMAP.
\section*{Acknowledgements}
The author is deeply grateful to Mikhail Shaposhnikov,
Jun'ichi Yokoyama, and Yasunari Kurita for helpful discussions.
The author's work was partially supported by
the Monbu-Kagaku Sho 21st century COE Program
``Center for Diversity and Universality in Physics"
and was also supported by
a Grant-in-Aid provided by the
Japan Society for the Promotion of Science.
\section{Introduction}
Federated learning is a machine learning methodology where the model updates happen in many clients (mobile devices, silos, etc.), as opposed to a more traditional setting, where training happens on a centralized server. This approach has many benefits, such as avoiding the upkeep of a centralized server, minimizing the network traffic between many clients and servers, as well as keeping sensitive user data anonymous and private.
For cross-device Federated Learning, the most common architecture pattern is a centralized topology \cite{2016arXiv160205629B}. This topology has been a popular area of research, and has seen much success when applied to commercial products. However, the centralized topology introduces some challenges.
Comparing such topologies reveals their tradeoffs and helps us focus on which tradeoffs to measure in this work.
In a centralized network topology, a central server coordinates training and aggregates all model updates. However, in some scenarios, a central server may not always be desirable, or the central server may not be powerful enough \cite{2016arXiv161005202V}. The central server could also become a bottleneck under heavy network traffic and a large number of connections. This bottleneck is exacerbated when the federated networks are composed of a massive number of devices \cite{2017arXiv170509056L}. Furthermore, current Federated Learning algorithms, such as Federated Averaging, can only efficiently utilize hundreds of devices in each training round, while many more are available \cite{2019arXiv190201046B}.
One approach that attempted to address these challenges is decentralized training \cite{2016arXiv160205629B,2017arXiv170509056L,2018arXiv180307068T,2016arXiv161005202V}. However, in large scale FL systems, a fully decentralized FL topology is inefficient, since the convergence time could be long, and the traffic between devices could be too intensive. Well-connected or denser networks encourage faster consensus and give better theoretical convergence rates, which depend on the spectral gap of the network graph. However, when data is IID, sparser topologies do not necessarily hurt the convergence in practice. Denser networks typically incur communication delays which increase with the node degrees \cite{2019arXiv191204977K}.
In this paper, we propose asynchronous hierarchical federated learning, in which the central server uses either the network topology or a clustering algorithm to assign workers (i.e., client devices) to clusters. In each cluster, a special aggregator device is selected to enable hierarchical learning, which leads to efficient communication between the server and workers, so that the burden on the server can be significantly reduced. In addition, an asynchronous federated learning scheme is used to tolerate system heterogeneity and achieve fast convergence: the server aggregates the gradients from the workers weighted by a staleness parameter to update the global model, and regularized stochastic gradient descent is performed on the workers, so that the instability of asynchronous learning can be alleviated.
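The staleness-weighted server update can be sketched as follows. This is a model-mixing variant in the style of FedAsync, not the exact update of our algorithm; the polynomial staleness function and the mixing constant `alpha` are common but hypothetical hyperparameter choices.

```python
def staleness_weight(current_round, model_round, a=0.5):
    # polynomial staleness function: s(tau) = (tau + 1)^(-a)
    tau = current_round - model_round
    return (tau + 1) ** (-a)

def server_update(global_model, worker_model, current_round, model_round, alpha=0.6):
    # Mix a (possibly stale) worker/aggregator model into the global model,
    # down-weighting updates computed against an old global model.
    alpha_t = alpha * staleness_weight(current_round, model_round)
    return [(1 - alpha_t) * g + alpha_t * w
            for g, w in zip(global_model, worker_model)]

new_global = server_update([0.0, 0.0], [1.0, 1.0], current_round=10, model_round=10)
# fresh update (tau = 0): alpha_t = 0.6, so new_global == [0.6, 0.6]
```

A stale update (large `tau`) contributes proportionally less, which is what keeps asynchronous aggregation stable.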
The rest of this paper is organized as follows. Section \ref{sec:liter} discusses several recently developed federated learning works that are related to ours. In Section \ref{sec:approach}, we describe our proposed algorithm in detail, along with its theoretical analysis. We conduct experiments and show the evaluation results in Section \ref{sec:res}. Finally, we summarize our findings and discuss future work directions in Section \ref{sec:conc}.
\section{Related Work} \label{sec:liter}
The most widely used and straightforward algorithm to aggregate the local models is to take the average, proposed in \cite{mcmahan2017communication} and known as Federated Averaging (\textit{FedAvg}).
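A minimal sketch of this averaging step, with models represented as flat parameter lists and weights proportional to the clients' local dataset sizes (the standard FedAvg weighting):

```python
def fed_avg(client_models, client_sizes):
    # weighted average of client parameters, weight n_k / n per client
    total = sum(client_sizes)
    dim = len(client_models[0])
    avg = [0.0] * dim
    for model, n_k in zip(client_models, client_sizes):
        for i, p in enumerate(model):
            avg[i] += (n_k / total) * p
    return avg

global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# -> [2.5, 3.5]
```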
In FL, the communication cost often dominates the computation cost \cite{mcmahan2017communication}, and is thus one of the key issues we need to resolve for implementing FL systems at scale. In particular, state-of-the-art deep learning models are designed to achieve higher prediction performance at the cost of increasing model complexity, with millions or even billions of parameters \cite{devlin2018bert, brown2020language}. On the other hand, FL requires frequent communication of the models between the server and workers. As such, \textit{FedAvg} \cite{mcmahan2017communication} encourages each worker to perform more iterations of local updates before communicating during global aggregation; this results in significantly fewer communication rounds, and also eventually increases the accuracy, as model averaging produces a regularization effect. Another way to decrease the communication cost is to reduce the size of the model information that needs to be sent, either through model compression techniques such as sparsification \cite{stich2018sparsified} and quantization \cite{caldas2018expanding}, or by selecting only a small portion of important gradients to be communicated \cite{tao2018esgd}, based on the observation that most deep learning model parameters are close to zero \cite{strom2015scalable}. However, these methods may deteriorate model accuracy or incur high computation cost \cite{lim2020federated}. Alternatively, \cite{liu2020client} proposed client-edge-cloud hierarchical federated learning (\textit{HierFAVG}), an edge computing paradigm in which edge servers play the role of intermediate parameter aggregators. The hierarchical FL algorithm leverages the proximity of edge servers and significantly relieves the burden on the central server in the remote cloud.
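The two-level aggregation in HierFAVG can be sketched as follows: each edge server averages its clients (every $k_1$ local steps in the full protocol), and the cloud averages the edge models (every $k_2$ edge rounds); here a single combined round with FedAvg-style weighted means is shown.

```python
def weighted_mean(models, sizes):
    # data-size-weighted mean of flat parameter lists
    total = sum(sizes)
    return [sum(n * m[i] for m, n in zip(models, sizes)) / total
            for i in range(len(models[0]))]

def hierarchical_round(edges):
    # edges: list of (client_models, client_sizes) per edge server
    edge_models, edge_sizes = [], []
    for client_models, client_sizes in edges:
        edge_models.append(weighted_mean(client_models, client_sizes))  # edge aggregation
        edge_sizes.append(sum(client_sizes))
    return weighted_mean(edge_models, edge_sizes)                       # cloud aggregation

cloud = hierarchical_round([
    ([[1.0], [3.0]], [1, 1]),  # edge A -> [2.0]
    ([[5.0]], [2]),            # edge B -> [5.0]
])
# cloud model: (2 * 2.0 + 2 * 5.0) / 4 = [3.5]
```

With data-size weights at both levels, one combined round gives the same result as flat FedAvg over all clients; the benefit of the hierarchy lies in communication, not in a different fixed point.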
\cite{yuan2020hierarchical} introduces a hierarchical federated learning protocol through LAN-WAN orchestration, which involves a hierarchical aggregation mechanism in the local-area network (LAN), owing to its abundant bandwidth and almost negligible monetary cost compared to the WAN. The protocol incorporates a cloud-device aggregation architecture and intra-LAN peer-to-peer (p2p) topology generation, and it handles inter-LAN bandwidth capacity heterogeneity.
While the hierarchical learning pattern is promising to reduce communication, it is not applicable to all networks, as the physical hierarchy may not exist or be known \textit{a priori} \cite{li2020federated}.
\cite{sattler2020clustered} designs and implements Clustered Federated Learning (CFL) using a cosine-similarity-based clustering method that creates a bi-partitioning to group client devices with the same data-generating distribution into the same cluster. Client devices are clustered into different groups according to their properties. CFL performs better on client networks with severely non-IID data, without accessing the local data.
\cite{briggs2020federated} implemented a hierarchical clustering step (FL+HC) to separate clusters of clients by the similarity of their local updates to the global joint model. Once separated, the clusters are trained independently and in parallel on specialised models.
In \cite{nguyen2020self}, a self-organizing hierarchical structured FL mechanism is implemented based on democratized learning, agglomerative clustering, and hierarchical generalization.
Most current FL systems are implemented with synchronous updates, which are susceptible to the straggler effect. To address this problem, \cite{xie2019asynchronous} proposed an asynchronous algorithm for federated optimization, \textit{FedAsync}, which solves regularized local problems and then uses a weighted average with respect to the staleness to update the global model. \cite{chen2019asynchronous} presented \textit{ASO-Fed} for asynchronous online FL, which uses the same surrogate objective for local updates, while the local learning rate is adapted to the average time cost of past iterations. This type of surrogate objective, obtained by adding a proximal term to the local updates, was introduced in \textit{FedProx} \cite{li2020fedhetero}, which mainly aimed to address system heterogeneity; it turns out that the same idea carries over well to asynchronous federated learning.
Several works have also focused on decentralized solutions. Gossip learning is a decentralized alternative to federated learning. \cite{hu2019decentralized} adopted gossip learning without aggregation servers or a central component. Knowing that the peer-to-peer bandwidth is much smaller than a worker's maximum network capacity, the system could fully utilize the bandwidth by saturating the network with segmented gossip aggregation, and the experiments showed that the training time can be reduced significantly with great convergence performance.
\cite{hegedHus2019gossip} presents a thorough comparison of centralized federated learning and gossip learning, examining the aggregated cost of machine learning in both cases and also considering a compression technique applicable to both approaches.
\cite{lalitha2019peer} presents peer-to-peer federated learning on graphs, a distributed learning algorithm in which nodes update their beliefs by judiciously aggregating information from their local observational data with the models of their one-hop neighbors, to collectively learn a model that best fits the observations over the entire network.
Coral is a peer-to-peer self-organizing content distribution system introduced by \cite{freedman2003sloppy}. Coral creates self-organizing clusters of nodes that fetch information from each other to avoid communicating with more distant or heavily-loaded servers.
As a peer-to-peer FL framework particularly targeted towards medical applications, BrainTorrent introduced in \cite{roy2019braintorrent} presents a highly dynamic peer-to-peer environment, where all centers directly interact with each other without depending on a central body.
\section{Approach} \label{sec:approach}
\subsection{Problem Formulation}
We consider the supervised federated learning problem, which involves learning a single global statistical model owned by the central server, while each of $N$ devices owns a private dataset and works on training a local model. Let $\mathbf{w}$ parameterize the model, and let $\mathcal{D}^i = \{\mathbf{x}_j, y_j\}$ denote the training dataset owned by the $i$-th device, where $i\in\{1, \cdots, N\}$, $\mathbf{x}_j$ is the $j$-th input sample from $\mathcal{D}^i$, and $y_j$ is the corresponding label. Let $\ell (\mathbf{x}_j, y_j | \mathbf{w})$ denote the loss function measuring the prediction error. Our overall goal is to minimize the empirical loss $\mathcal{L}(\mathbf{w})$ over all distributed training data $\mathcal{D} = \bigcup_{i=1}^N \mathcal{D}^i$, i.e., we aim at solving the following optimization problem:
\begin{equation}
\min\limits_\mathbf{w} \mathcal{L}(\mathbf{w}) = \frac{ \sum_{i=1}^N \sum_{j\in \mathcal{D}^i} \ell (\mathbf{x}_j, y_j | \mathbf{w}) }{|\mathcal{D}|}.
\end{equation}
The problem is often solved by mini-batch stochastic gradient descent (SGD), in which during each step, the model is updated as
\begin{equation}
\mathbf{w} \leftarrow \mathbf{w} - \alpha \dfrac{\partial \mathcal{L}}{\partial \mathbf{w}} ,
\end{equation}
where $\alpha$ denotes the learning rate, and the average gradient
\begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \mathbf{w}}= \dfrac{1}{m} \sum_{j \in B} \dfrac{\partial \ell_j }{\partial \mathbf{w}}
\end{equation}
is derived through back-propagation from a mini-batch $B$ of $m$ input samples. In the typical FL setting, each device $i$ performs SGD with data sampled from its own private training dataset $\mathcal{D}^i$ and trains a local model
\begin{equation}
\mathbf{w}_i = \text{arg}\min\limits_{\mathbf{w}_i} \mathcal{L}_i(\mathbf{w}_i) = \frac{ \sum_{j\in \mathcal{D}^i} \ell (\mathbf{x}_j, y_j | \mathbf{w}_i) }{|\mathcal{D}^i|} ,
\end{equation}
and the server aggregates all local models collected from the workers and updates the global model, which is then sent back to the workers for the next iteration.
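To make the local step concrete, the following sketch runs mini-batch SGD on a synthetic linear regression problem standing in for a client's private dataset $\mathcal{D}^i$; the model, data, and hyperparameters here are illustrative placeholders, not the ones used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a client's private dataset D^i.
X = rng.normal(size=(256, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.01 * rng.normal(size=256)

w = np.zeros(5)        # local model parameters
alpha, m = 0.1, 32     # learning rate and mini-batch size

for _ in range(200):
    idx = rng.choice(len(X), size=m, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = (2.0 / m) * Xb.T @ (Xb @ w - yb)  # gradient of the mean squared error
    w -= alpha * grad                        # SGD update: w <- w - alpha * dL/dw

print(np.linalg.norm(w - true_w))            # small residual after training
```

In the FL setting, each device runs a loop of this shape on its own shard and only the resulting $\mathbf{w}_i$ (or its gradient) leaves the device.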
\subsection{Proposed Method} \label{sec:our_method}
\subsubsection{Initialization at Central Server}
We consider a hierarchical FL system with one central server on the cloud. The central server owns the global model $\mathbf{w}$ and maintains a timestamp $t$ for the model parameters.
Therefore, as learning begins, the central server initializes the global model parameters $\mathbf{w}$ and their timestamp $t=0$, as well as several hyperparameters required for learning.
In addition, given the network information of all client devices, we allow the central server to be responsible for the knowledge of the hierarchical communication topology.
In the case of mobile edge computing, the partition and hierarchy can be formed naturally by the communication edges, as the links between the central server and the edge servers or base stations form a star topology, and so do the links between each edge server and its devices. We extend this architecture to a more general case by allowing the central server to run a clustering algorithm that assigns each device to a cluster, and to designate a special device in each cluster, which we call the ``\textit{aggregator}'', to play the role of an edge server: it provides inter-hierarchy communication, i.e.\ downlink transmission from the central server to the client devices and uplink transmission from the clients to the central server, and it aggregates information to reduce the necessary communication. In FL, this aggregation work specifically means aggregating the clients' updated weights/gradients and sending them to the central server. The downlink transmission is straightforward: the server periodically sends the global model with its timestamp, as well as the hyperparameters for the learning task, to the aggregators; each aggregator serves as the parent node of the client devices in its cluster and forwards the information it receives from the central server to its children nodes.
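The cluster and aggregator assignment can be sketched as follows. The IP addresses and the subnet-based proximity feature are hypothetical placeholders for whatever network information the server actually holds, and while the paper selects aggregators at random, the sketch takes the first cluster member for determinism.

```python
# Hypothetical client IPs; in practice clusters come from network information
# (e.g. IP proximity), with one device per cluster acting as the aggregator.
client_ips = [
    "10.0.1.2", "10.0.1.7", "10.0.1.9",     # clients behind one edge network
    "10.0.2.4", "10.0.2.8",                  # another edge network
    "192.168.5.3", "192.168.5.10",           # a third network
]

def subnet(ip):
    """Crude proximity feature: the /24 prefix of the address."""
    return ".".join(ip.split(".")[:3])

# Group clients sharing a subnet into the same cluster.
clusters = {}
for ip in client_ips:
    clusters.setdefault(subnet(ip), []).append(ip)

# Pick one aggregator per cluster (randomly in the paper; first member here
# for reproducibility).
aggregators = {net: members[0] for net, members in clusters.items()}

print(len(clusters), aggregators)
```

Any richer clustering (e.g.\ agglomerative clustering on latency or compute features) slots into the same interface: a map from cluster id to members plus one designated aggregator per cluster.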
\subsubsection{Learning on Local Clients}
Upon receiving the global model parameters $\mathbf{w}$ with their timestamp (according to the central server clock) from the central server, the worker client performs a local update.
In order to mitigate the deviation of the local model on an arbitrary device $j$ from the global model, following \textit{FedAsync} \cite{xie2019asynchronous}, instead of minimizing the original local loss function $\ell_j$, client $j$ locally solves a regularized optimization problem, i.e., performs SGD updates for one or multiple iterations on the following surrogate objective:
\begin{equation} \label{eqn:local}
\min\limits_{\mathbf{w}_j} g_j(\mathbf{w}_j) = \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}^j} \Big[\ell_j(\mathbf{w}_j) + \frac{\lambda}{2} || \mathbf{w}_j - \mathbf{w} ||^2 \Big],
\end{equation}
in which the regularization term $\frac{\lambda}{2} || \mathbf{w}_j - \mathbf{w} ||^2$ controls the deviation of the local model. After local learning, the client sends its updated parameters, together with the original model timestamp, to its parent node, the corresponding aggregator. The next local learning iteration will be based on the newest received global model and the corresponding timestamp.
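A minimal sketch of this regularized local update, on synthetic data whose local optimum deliberately differs from the received global model; all quantities (data, $\lambda$, learning rate) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

w_global = rng.normal(size=4)          # model received from the server
w_local = w_global.copy()              # client starts from the global model
lam, alpha = 0.5, 0.05                 # proximal coefficient and learning rate

# Hypothetical local data whose optimum is w_global + 1 in every coordinate.
X = rng.normal(size=(128, 4))
y = X @ (w_global + 1.0)

for _ in range(300):
    idx = rng.choice(len(X), size=32, replace=False)
    grad_loss = (2.0 / 32) * X[idx].T @ (X[idx] @ w_local - y[idx])
    grad_prox = lam * (w_local - w_global)   # gradient of (lam/2)||w_j - w||^2
    w_local -= alpha * (grad_loss + grad_prox)

# The proximal term pulls the local model back toward the global one, so it
# settles between the global model and the purely local optimum.
print(np.linalg.norm(w_local - w_global))
```

Without the proximal term the local model would drift all the way to its private optimum; with it, the drift is damped by a factor controlled by $\lambda$.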
\subsubsection{Learning on Cluster Aggregators}
On an aggregator, the rate of receiving updates from the clients may vary for several reasons, such as heterogeneity of the computation power among devices, network delay, etc. We propose to perform asynchronous federated learning, that is, the aggregator immediately aggregates the updates from the clients and reports to the central server. In a real implementation, we can use a thread-safe FIFO queue inside each aggregator to store the updates from the clients, and periodically aggregate the results in the queue without waiting for updates from potential stragglers. This differs from the synchronous FL paradigm, and the uplink communication is then non-blocking. Again, following \textit{FedAsync} \cite{xie2019asynchronous}, we use a function of staleness to mitigate the error caused by obsolete models; intuitively, more staleness results in larger error. On an aggregator device, assume the latest global model it received had timestamp $t'$ (according to the central server clock) at the moment it is about to aggregate the updates,
and the local model from a client had timestamp $t$; then it must be true that $t' \ge t$. We modify the learning rate to be weighted by the staleness:
\begin{equation}
\alpha_{t'} = \alpha \times \sigma(t'-t),
\end{equation}
in which $\sigma(z)$ is the staleness function. Different forms of $\sigma(z)$ were defined in \cite{xie2019asynchronous}, such as:
\begin{itemize}
\item the polynomial form: \begin{equation} \label{eqn:poly_stale}
\sigma (t'-t) = (t'-t+1)^{-\beta} ,
\end{equation}
\item the hinge form:
\begin{equation}
\sigma (t'-t) = \begin{cases}
1 & \quad \text{if } t'-t \le b \\
\dfrac{1}{a(t'-t-b)+1} & \quad \text{otherwise}
\end{cases} \end{equation}
\end{itemize}
Note that $\sigma(t'-t)=1$ if $t'=t$, and it monotonically decreases as $t'$ and $t$ deviate more, so that an obsolete update affects the model less, since the staleness function shrinks the learning rate.
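Both staleness functions are straightforward to implement; the constants $\beta$, $a$, and $b$ below are illustrative choices rather than tuned values.

```python
def sigma_poly(delta, beta=2.0):
    """Polynomial staleness: (t' - t + 1)^(-beta)."""
    return (delta + 1) ** (-beta)

def sigma_hinge(delta, a=0.5, b=4):
    """Hinge staleness: 1 up to threshold b, then decays as 1/(a(delta-b)+1)."""
    return 1.0 if delta <= b else 1.0 / (a * (delta - b) + 1.0)

# A fresh update (t' = t, i.e. delta = 0) keeps the full learning rate;
# stale updates get a shrunken effective rate.
alpha = 0.1
for delta in [0, 1, 5, 10]:
    print(delta, alpha * sigma_poly(delta), alpha * sigma_hinge(delta))
```

The polynomial form penalizes every unit of staleness, while the hinge form tolerates staleness up to $b$ before decaying, which is useful when small delays are the norm.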
Therefore, upon receiving local update from client device $j$, the model can then be updated on the cluster aggregator $k$ as:
\begin{equation}
\mathbf{w}_k^{(t')} \leftarrow (1-\alpha_{t'}) \mathbf{w}^{(t')} + \alpha_{t'} \mathbf{w}^{(t)}_{j}.
\end{equation}
Or equivalently, if we aggregate the gradients collected from the clients, we have:
\begin{equation} \mathbf{dw}_k^{(t')} = \sum \alpha \sigma(t'-t) \mathbf{dw}_j^{(t)}, \end{equation}
where $\mathbf{dw}_j^{(t)} $ is the gradient collected by device $j$ in the $k$-th cluster.
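The aggregator-side step can be sketched as follows, with a plain list standing in for the thread-safe FIFO queue and hypothetical timestamps and gradients.

```python
import numpy as np

t_prime = 10          # timestamp of the newest global model at the aggregator
alpha, beta = 0.1, 2.0

def sigma(delta):
    return (delta + 1) ** (-beta)   # polynomial staleness function

# Stand-in for the FIFO queue of client updates: (gradient, model timestamp).
queue = [
    (np.ones(3), 10),     # fresh update, weight sigma(0) = 1
    (np.ones(3), 8),      # slightly stale, weight sigma(2) = 1/9
    (np.ones(3), 5),      # very stale, weight sigma(5) = 1/36
]

# dw_k = sum over clients of alpha * sigma(t' - t) * dw_j
dw_k = sum(alpha * sigma(t_prime - t) * dw for dw, t in queue)
n_k = len(queue)          # number of client updates folded into dw_k

print(n_k, dw_k)
```

The aggregator then forwards $(\mathbf{dw}_k, t', n_k)$ upstream; stale gradients are not discarded, only down-weighted.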
\subsubsection{Learning on Central Server}
The central server aggregates the results from the cluster aggregators to update the global model. Similar to the learning procedure on the cluster aggregators, the central server can use a queue to store the updates from the aggregators. Since learning is asynchronous, the numbers of updates gathered from the aggregators can be imbalanced. As such, we let aggregator $k$ send the number of updates $n_k$ along with the aggregated results (i.e., $\mathbf{w}_k^{(t')}$ and timestamp $t'$) to the central server. Assume the newest global model was updated at timestamp $t''$ according to the central server clock, so that it must hold that $t'' > t'$. Combining the update-count information with the staleness scheme, upon collecting the update from aggregator $k$, the learning rate is weighted and modified as $$\alpha_{t''} = \dfrac{n_k}{N} \sigma(t''-t') \alpha, $$
where $N$ is the total number of devices in the FL system which is known to the central server.
Note that this weighting mechanism makes sense if the data are i.i.d.\ over the clients. However, the bias could be severe with non-i.i.d.\ data, as the mechanism favors devices that compute and communicate faster; in that case we would need to carefully design and tune a more sophisticated weighting mechanism. The central server updates the global model as follows:
\begin{equation}
\mathbf{w}^{(t''+1)} \leftarrow (1-\alpha_{t''}) \mathbf{w}^{(t'')} + \alpha_{t''} \mathbf{w}^{(t')}_{k}.
\end{equation}
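The server-side update combines both weights (staleness and update count) in one effective learning rate; the numbers below are hypothetical.

```python
import numpy as np

N = 20                 # total number of devices in the system
alpha, beta = 0.1, 2.0

def sigma(delta):
    return (delta + 1) ** (-beta)   # polynomial staleness function

w_global = np.zeros(3)   # current global model, timestamp t''
t_pp = 12

# An aggregated model arrives from cluster k: (model, its timestamp, #updates).
w_k, t_p, n_k = np.ones(3), 10, 5

# Learning rate weighted by both the staleness and the cluster's update count.
alpha_eff = (n_k / N) * sigma(t_pp - t_p) * alpha

# Convex combination of the old global model and the cluster model.
w_global = (1 - alpha_eff) * w_global + alpha_eff * w_k
print(alpha_eff, w_global)
```

A cluster that contributed many client updates ($n_k$ large) and reported promptly ($t'' - t'$ small) moves the global model the most.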
The detailed algorithm is illustrated in Algorithm \ref{alg:fedah}.
\begin{algorithm}[htb!]
\caption{Asynchronous Hierarchical Federated Learning (FedAH)}
\label{alg:fedah}
\begin{algorithmic}[1]
\footnotesize
\STATE \underline{\textbf{Central Server:}}
\STATE Assign clusters and aggregators according to network topology or by running clustering algorithm.
\STATE Initialize global model $\mathbf{w}$ and time clock $t$.
\FOR{$t=0, \cdots, T-1$ \textbf{until} end of learning}
\STATE Broadcast $(\mathbf{w}, t)$ to its direct children aggregators.
\STATE Receive triples $\big(\mathbf{dw}_k^{(t')}, t', n_k\big)$ from any direct child aggregator $k$.
\STATE Update global model
$\mathbf{w}^{(t+1)} \leftarrow \mathbf{w}^{(t)} - \alpha_{t} \mathbf{dw}^{(t')}_{k}$ where $\alpha_{t} = \alpha \sigma(t-t') n_k / N$.
\ENDFOR
\STATE \underline{\textbf{Middle Layer Aggregator:}}
\STATE Receive $\big(\mathbf{w}, t''\big)$ from its parent, broadcast to its direct children.
\STATE Receive $\big(\mathbf{w}_j^{(t')}, t'\big)$ from any of its direct child $j$.
\STATE Aggregate the collected gradients: $ \mathbf{dw}_k^{(t')} = \sum \alpha \sigma(t''-t') \mathbf{dw}_j^{(t')}$.
\STATE Send triples $\big(\mathbf{dw}_k^{(t')}, t', n_k\big)$ to its parent.
\STATE \underline{\textbf{Bottom Layer Client Device:}}
\STATE Receive $\big(\mathbf{w}, t''\big)$ from its parent.
\STATE Define $g_j(\mathbf{w}_j) = \ell_j(\mathbf{w}_j) + \frac{\lambda}{2} || \mathbf{w}_j - \mathbf{w} ||^2 $
\FOR{local iteration}
\STATE Randomly sample $(\mathbf{x}, y) \sim \mathcal{D}^i$
\STATE Local update $\mathbf{w}_j \leftarrow \mathbf{w}_j - \alpha \nabla g_j$
\ENDFOR
\STATE Send updated model $\big(\mathbf{w}_j, t''\big)$ to its parent.
\end{algorithmic}
\end{algorithm}
\section{Experiments and Results} \label{sec:res}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/10.png}
\caption{10 client devices, with 2 cluster aggregators for hierarchical learning.}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/20.png}
\caption{20 client devices, with 4 cluster aggregators for hierarchical learning.}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/50.png}
\caption{50 client devices, with 5 cluster aggregators for hierarchical learning.}
\end{subfigure}
\caption{Comparison of test accuracy w.r.t. training clock in central server.}
\label{fig:cmp}
\end{figure*}
In our experiments, we consider an asynchronous hierarchical FL system with $N_c$ client devices, $N_a$ cluster aggregators, and a single central server. The non-server machines are grouped into $N_a$ clusters using hierarchical agglomerative clustering based on their IP addresses; in reality, computational power and network conditions could be integrated as clustering features as well. For simplicity, the aggregator in each cluster is randomly selected. We conduct our initial experiments on a standard image classification task, using the well-known CIFAR-10 dataset. We set up the model as a convolutional neural network (CNN) with 3 convolutional blocks, which has 5,852,170 parameters and achieves 90\% test accuracy in centralized training. For our FL system, we randomly partition the CIFAR-10 dataset among the $N_c$ local learning devices, so that each of the 10 class labels is roughly balanced across all clients. For local training, the SGD optimizer is employed with a batch size of 128 and an initial learning rate of 0.001.
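The random, roughly class-balanced partition can be sketched as follows; the label array is a synthetic stand-in for the actual CIFAR-10 labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the CIFAR-10 training labels: 50,000 samples over 10 classes.
labels = rng.integers(0, 10, size=50_000)

N_c = 20                                  # number of client devices
perm = rng.permutation(len(labels))       # random shuffle => roughly IID shards
shards = np.array_split(perm, N_c)

# Each client receives ~2500 samples with all 10 classes roughly balanced.
counts = np.bincount(labels[shards[0]], minlength=10)
print(len(shards[0]), counts)
```

A non-IID variant would instead sort by label before sharding, which is the setting where the weighting-mechanism caveat from Section \ref{sec:our_method} becomes important.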
Our models and learning procedures are implemented using PyTorch, and we compare the performance of several different algorithms with our proposed approach.
Figure \ref{fig:cmp} shows the comparison of the global models' accuracy, evaluated on the central server's validation dataset, versus the training time. In \textit{FedAvg} and \textit{FedAsync}, the worker devices communicate directly with the central server without a hierarchical structure, while \textit{FedAsync} allows the server and workers to update the models at any time without synchronization. Our hierarchical learning involves the simplest Client-Aggregator-Server 3-layer hierarchical structure, where \textit{HierFedAvg} performs synchronous learning, while \textit{HierFedAsync} follows the learning schema we described in Section \ref{sec:our_method}.
In each setting, we let the learning last for 2500 learning epochs with a clock in the central server. Specifically,
each learning epoch is synchronized among all devices. We simulate an asynchronous learning system by assuming that faults (e.g.\ device down, communication loss, straggler effect, etc.) occur uniformly with probability 0.1 among all non-server devices in each learning epoch. This probability distribution can be further investigated by tuning the parameters and consulting empirical studies. Note that these faults are not introduced into the synchronous learning systems, since a faulty device or a network partition could otherwise cause those systems to wait forever by their synchronous nature. This in turn demonstrates the fault-tolerance advantage of asynchronous learning.
According to Figure \ref{fig:cmp}, the hierarchical setting greatly increases the complexity of the learning system; as a result, the \textit{HierFedAvg} algorithm not only converges the slowest, but its learning is also unstable. Conversely, our \textit{HierFedAsync} algorithm clearly overcomes the issues brought by the hierarchical setting.
Although \textit{FedAvg} seems to perform best when the number of client devices is small (e.g., when there are 10 clients in the system), we emphasize again that we did not include faulty devices or the straggler effect in the synchronous settings of our experiments, while those effects are included in the asynchronous settings. Even so, our \textit{HierFedAsync} algorithm performs close to the best in all cases, and as the system gets larger, the advantages of \textit{HierFedAsync} become more obvious: it not only converges faster, especially early on, but also reaches higher test accuracy. We further expect the advantage to grow as the number of devices increases.
Table \ref{tab:num_packets} presents the total numbers of gradients sent or received by each type of device, from which we can see that the communication burden on the central server is greatly alleviated in a hierarchical topology, not to mention the potential benefits of more local computation and faster overall convergence. In a large network topology, this could also lead to less packet loss and more effective communication and computation.
\begin{table}[]
\centering
\caption{Comparison of the numbers of gradients sent/received, with 20 client devices in the system and 2500 training epochs in the central server.}
\label{tab:num_packets}
\resizebox{.475\textwidth}{!}{
\begin{tabular}{l|cccc}
\hline
& \textit{FedAvg} & \textit{FedAsync} & \textit{HierFedAvg} &\textit{HierFedAsync} \\ \hline
Central Server & 50000 & 44769 & 10000 & 8842 \\
Cluster Aggregators & - & - & 60000 & 52904 \\
Local Clients & 50000 & 45033 & 50000 & 44977 \\ \hline
\end{tabular}
}
\end{table}
Our \textit{HierFedAsync} algorithm involves several hyperparameters to tune. We conduct comparative analysis with different values of $\beta$ in the polynomial form of staleness function as described in Equation (\ref{eqn:poly_stale}),
and show the effect of staleness on learning convergence in Figure \ref{fig:beta}. We see that when $\beta=1$, the learning curve is close to that without staleness (i.e.\ $\beta=0$), but the validation accuracy cannot exceed 55\% as training proceeds; moreover, the learning is unstable, as the curve oscillates severely. In general, larger staleness alleviates the instability at the cost of slower convergence. From Figure \ref{fig:beta}, we can easily see that with $\beta=2$ or $3$, the performance is significantly improved, as the validation accuracy exceeds 60\% after convergence while the convergence rate remains acceptable. We also note that the learning effect is not sensitive to $\beta$ between 2 and 3, which indicates that $\beta$ is quite easy to tune.
Similar comparative analysis is conducted for the regularization coefficient $\lambda$ in Equation (\ref{eqn:local}) on the local clients. Although the results in Figure
show little effect when $\lambda$ changes, we note that our current experimental setting does not emphasize simulation of the stragglers.
We expect that the regularization on local clients plays a more important role during training in an asynchronous system as the straggler effect gets more common.
\begin{figure}[ht!]
\centering
\includegraphics[width=.375\textwidth]{figs/beta.png}
\caption{Test accuracy with different $\beta$ values in \textit{HierFedAsync} with 20 client devices and 2500 training epochs in the central server. The polynomial staleness function $\sigma (t'-t) = (t'-t+1)^{-\beta}$ is used on central server as well as cluster aggregators.}
\label{fig:beta}
\end{figure}
\section{Conclusions and Future Work} \label{sec:conc}
In this paper, we propose asynchronous hierarchical federated learning.
As federated networks are composed of a massive number of devices, communication is a critical challenge in FL. We tackle this problem by exploring different architectural patterns for the design of FL systems. The tradeoff between centralized and fully decentralized learning, in terms of system complexity as well as learning effectiveness and computational and communication cost, is obvious. We deploy an FL system with a central server, but with a hierarchical topology. In this paper, we combine asynchronous FL and hierarchical FL into our \textit{FedAH} algorithm. In addition, we blur the concept of network topological edges to form the clusters as well as the hierarchical structures. We aim at reducing the communication load between devices and the server in an FL system, while also improving flexibility and scalability. Our initial experiments demonstrated that combining asynchronous FL and hierarchical FL not only leads to faster convergence and tolerates system heterogeneity such as faulty devices and the straggler effect, but also significantly alleviates the communication burden on the central server. However, the asynchronous and hierarchical nature greatly increases the complexity of the system, especially of the communication topology, which could lead to unstable learning.
We explored the literature and recent research advances and combined them into our proposed method, \textit{FedAH}, which inherits the merits of both asynchronous and hierarchical FL, while significantly mitigating the instability through the use of a staleness function and cluster weighting on the central server and edge devices, as well as by adding regularization for local updates on client devices. We implemented the system and evaluated it on the CIFAR-10 image classification task; the results verify the effectiveness of our design and meet our expectations.
There are several interesting directions to pursue in future work. First, as mentioned above, our weighting mechanism for aggregation favors devices that compute and communicate faster, which works fine if the data are i.i.d.\ over the clients. We need to design a more sophisticated weighting mechanism for asynchronous FL when non-i.i.d.\ data are involved.
The second interesting direction in which to take this paper would be modification of the simple $L^2$-regularization on local clients' learning.
We also need to finish the derivation and proof of the theoretical analysis.
In addition, more experiments need to be conducted on different datasets and tasks.
Furthermore, we need to bring more sophisticated experimental settings for the simulation, for instance, the straggler effect should be emphasized.
{\small
\bibliographystyle{abbrvnat}
\section{Related work}
More recently, in February 2021, Heged\H{u}s et al.\ \cite{hegedHus2021decentralized} contributed to the field of FL with a comparison of the efficiency of decentralized and centralized solutions that are based on keeping the data local, starting from the assumption that gossip learning would hurt performance. They recognize many areas of tradeoffs to measure, and that many algorithms can outperform each other depending on which metric is measured. However, in their testing, they observed that federated learning converges faster, since it can mix information more efficiently, and that gossip learning is clearly competitive with centralized federated learning. They believe future work could be improved by applying even more sophisticated peer sampling methods optimized for other tradeoffs.
Their experimental scenarios include a real churn trace collected over phones, both continuous and bursty communication patterns, different network sizes, and different distributions of the training data over the devices. They also evaluate a number of additional techniques, including a compression technique based on sampling and token-account-based flow control for gossip learning. After evaluating the average cost of both approaches, they found that the best gossip variants perform comparably to the best federated learning variants overall. [We take a hard look at their gossip learning, one of these state-of-the-art decentralized machine learning protocols. Because this paper identifies the conditions in which gossip learning can and cannot be applied, we carry the lessons learned into our evaluation of other topologies, including gossip learning.]
The authors Hu et al.\ \cite{hu2019decentralized} initiated their study with the logical hypothesis that gossip learning without an aggregation server or a central component will be strictly less efficient than federated learning, due to its reliance on a more basic infrastructure: just message passing and no cloud resources. They state that ``one of the most challenging problems of federated learning is the poor network connection, as the workers are geo-distributed and connected with slow WAN.'' They asked whether workers can even send full model updates (the size of BERT-Large can be up to 1360\,MB), and whether ``it is possible for workers to synchronize the model partially, from/to only a part of the workers, and still achieve good training results.'' They explore this area with a decentralized FL solution called Combo. Knowing that the peer-to-peer bandwidth is much smaller than a worker's maximum network capacity, their system could fully utilize the bandwidth by saturating the network with segmented gossip aggregation. Their experiments show that the training time can be reduced significantly with great convergence performance. [We plan to look into segmented network capacity and will focus on utilizing the bandwidth given the constraints of mobile networks.]
Jameel et al.\ \cite{jameel2019ring} focus on the design choices for a sparse model averaging strategy in decentralized parallel SGD. They attempt to design an optimal communication topology that is both quick and efficient. They propose a superpeer topology in which the superpeers form a ring and some number of regular peers are connected to each of them. The hierarchical two-layer sparse communication topology allows a principled trade-off between convergence speed and communication overhead and is well suited to loosely coupled distributed systems. Using an image classification task and a batch stochastic gradient descent (SGD) learning algorithm, they demonstrate that their proposed method shows similar convergence behavior to Allreduce while having lower communication costs. [We plan to test a superpeer topology similar to this one, and we are also likely to test on image classification with CIFAR-10.]
Giaretta and Girdzijauskas \cite{giaretta2019gossip} examine the conditions in which gossip learning can and cannot be applied. Our team takes a hard look at the extensions that their research mentions to mitigate some of the limitations of FL. They present a thorough analysis of the applicability of gossip learning; their work includes scenarios ranging from the effect of certain topologies to the correlation between communication speed and data distribution. [We learn from the tradeoffs presented by their comparison of certain topologies, which helps us focus on which trade-offs we want to measure specifically.]
Although tested in an Industrial IoT (IIoT) setting, Savazzi et al.\ \cite{savazzi2020federated} study a handful of gossip-based decentralized machine learning methods in the context of IIoT applications. Their focus is on the case where the data distribution is not identical over the nodes (similar to private data on cell phones). They do not consider compression techniques or other algorithmic enhancements such as token-based flow control.
Their paper proposes a serverless learning approach, where the proposed FL algorithms use device-to-device cooperation to perform data operations on the network by iterating local computations using consensus-based methods. [We plan to evaluate their approach, which lays the groundwork for the integration of FL within 5G-and-beyond networks characterized by decentralized connectivity and computing.]
In this work, we study the uniform computational strength of theorems arising in classical descriptive set theory related to perfect subsets of Polish spaces. In general, if $P$ is a subset of a topological space $\mathcal{X}$, a point $x \in P$ is a \emph{limit point of $P$} if for every open set $U$ with $x \in U$ there is a distinct point $y \in P \cap U$: otherwise, we call $x$ \emph{isolated in $P$}. A subset of a topological space is \emph{perfect} if it is closed and has no isolated points. An equivalent formulation is that $P\subseteq \mathcal{X}$ is perfect if $P=P'$, where $P'$ is the Cantor-Bendixson derivative, i.e.\ the set of all limit points of $P$. Notice that every nonempty perfect subset of a Polish space has the cardinality of the continuum.
A classical theorem in this context is the \emph{Cantor-Bendixson theorem}.
\begin{theorem}[Cantor-Bendixson theorem]
\thlabel{Initialtheorem}
Every closed subset $C$ of a Polish space $\mathcal{X}$ can be uniquely written as the disjoint union of a perfect set $P$ and a countable set $S$. We call $P$ the \emph{perfect kernel} of $C$ and $S$ the \emph{scattered part} of $C$.
\end{theorem}
When $\mathcal{X}$ is either Cantor space $2^\mathbb{N}$ or Baire space $\mathbb{N}^\mathbb{N}$, there is a well-known correspondence between closed sets and sets of \emph{paths} through \emph{trees}. In these settings, the Cantor-Bendixson theorem states that all but countably many paths through a tree $T$ on $\mathbb{N}$ belong to a perfect subtree $S$ of $T$ (i.e.\ a tree such that every node has two incomparable extensions). Again, the largest such $S$ (which is unique) is called the perfect kernel of $T$ and the set of missing paths is the scattered part of $T$.\smallskip
Many theorems in \lq\lq classical\rq\rq\ mathematics, including \thref{Initialtheorem} and its tree version, can be written in the form
\[(\forall x \in X)(\varphi(x)\implies (\exists y \in Y)\psi(x,y)),\]
and this formulation has a natural translation as a computational problem: given an instance $x \in X$ satisfying $\varphi(x)$, the task is to find a solution $y \in Y$ such that $\psi(x,y)$ (notice that such a $y$ is in general not unique). A computational problem can be naturally rephrased as a \emph{(partial) multi-valued function} $\partialmultifunction{f}{X}{Y}$ where $f(x):=\{y \in Y:\psi(x,y)\}$, for every $x \in X$ such that $\varphi(x)$. The interpretation of theorems as multi-valued functions/problems allows us to compare their uniform computational content using the framework of \emph{Weihrauch reducibility}.
This work continues the program initiated by Gherardi and the second author \cite{hanhbanach} that aims to provide a bridge between computable analysis and \emph{reverse mathematics}, the discipline which establishes equivalences between mathematical statements and the axioms required to prove them. A well-known empirical fact in this field is the so-called \emph{big-five phenomenon}. Namely, many theorems of \lq\lq classical\rq\rq\ mathematics happen to be equivalent to one of five subsystems of second order arithmetic, namely $\mathsf{RCA}_0,\mathsf{WKL}_0,\mathsf{ACA}_0,\mathsf{ATR}_0$ and $\boldfacePi^1_1\mathsf{-CA}_0$. Some analogues of the big-five have been identified in the Weihrauch context. For example, $\mathsf{RCA}_0$ roughly corresponds to computable problems, $\mathsf{WKL}_0$ to the task of choosing an element from a nonempty closed subset of the Cantor space (denoted by $\codedChoice{}{}{2^\mathbb{N}}$) and $\mathsf{ACA}_0$ to iterations of $\mathsf{lim}$, where $\mathsf{lim}$ is the problem that takes as input a convergent sequence in Baire space and outputs its limit.
The second author in \cite{daghstul} raised the question \lq\lq What do the Weihrauch hierarchies look like once we go to very high levels of reverse mathematics strength?\rq\rq. Here we continue to answer this question by focusing on theorems that, in reverse mathematics, are either equivalent to $\mathsf{ATR}_0$ or $\boldfacePi^1_1\mathsf{-CA}_0$. In this direction it has been shown that statements that in reverse mathematics are equivalent to $\mathsf{ATR}_0$, when considered as problems, fall into different Weihrauch degrees (see for example \cite{kihara_marcone_pauly_2020, choiceprinciples, openRamsey, computabilitytheoretic, GOH2020102789}). On the other hand, $\boldfacePi^1_1\mathsf{-CA}_0$ has a natural counterpart in the Weihrauch lattice, namely the problem $\parallelization{\mathsf{WF}}$ that, given as input a sequence of trees on $\mathbb{N}$, outputs the sequence $p \in 2^\mathbb{N}$ such that $p(i)=1$ iff the $i$-th tree of the sequence is well-founded. This problem has been briefly considered by Jeff Hirst in \cite{leafmanaegement} and by Goh, Pauly and the third author in \cite{goh_pauly_valenti_2021}, but ---to the best of our knowledge--- this is the first paper carrying out a systematic study of a theorem equivalent to $\boldfacePi^1_1\mathsf{-CA}_0$.
In \cite{leafmanaegement}, Hirst showed that $\parallelization{\mathsf{WF}}$ is Weihrauch equivalent to the problem of finding the perfect kernel of a tree. We show that, for any uncountable computable Polish space $\mathcal{X}$, the problem of finding the perfect kernel of a closed set is strictly below $\parallelization{\mathsf{WF}}$ and its degree does not depend on $\mathcal{X}$. Notice that in reverse mathematics all these problems are equivalent to $\boldfacePi^1_1\mathsf{-CA}_0$ (see \cite[\S VI.1]{simpson_2009}). We consider the full Cantor-Bendixson problem, which, given a tree or a closed set as input, outputs its perfect kernel and its scattered part. Again, the tree version is strictly stronger than the closed set version; however, in this case the topological properties of $\mathcal{X}$ affect the precise degree of the problem. We also study the problem of just listing the scattered part of a tree or a closed set.
In \cite{kihara_marcone_pauly_2020} Kihara, Pauly and the second author studied different problems related to the perfect tree theorem, i.e.\ the statement \lq\lq a tree on $\mathbb{N}$ has countably many paths or contains a perfect subtree\rq\rq. The disjunctive nature of the theorem gives rise to two different groups of problems: either the problem takes as input a tree with uncountably many paths and outputs a perfect subtree of it, or it takes a tree with countably many paths and outputs a list of them. Here we study the same two kinds of problems for the perfect set theorem, i.e.\ the statement that a closed subset of a Polish space is either countable or contains a perfect subset. We obtain a number of results for different computable Polish spaces.
Figures \ref{SummaryAtThebeginning} and \ref{Figureslist} summarize some of our results. The precise definitions of the various functions are given in due time.
\begin{figure}[H]
\centering
\tikzstyle{every picture}=[tikzfig]
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=box] (UCBaire) at (-6.5,0) {$\mathsf{UC}_{\Baire}$};
\node [style=box] (ScListCantor) at (-12,0) {$\ScList[2^\mathbb{N}]$};
\node [style=box] (PST) at (-6.5,3) {$\PST[2^\mathbb{N}]\equiv_{\mathrm{W}}\PST[\mathbb{N}^\mathbb{N}]$};
\node [style=box] (CBaire) at (2,6) {$\mathsf{C}_{\Baire}\equiv_{\mathrm{W}} \PTT[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PTT[2^\mathbb{N}]$};
\node [style=box] (PK) at (-12,7) {$\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\wCB[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\CB[2^\mathbb{N}]$};
\node [style=box] (ScListBaire) at (-6.5,10) {$\ScList[\mathbb{N}^\mathbb{N}]$};
\node [style=box] (CBBaire) at (-6.5,13) {$\CB[\mathbb{N}^\mathbb{N}]$};
\node [style=box] (PKTree) at (-6.5,16) {$\parallelization{\mathsf{WF}}\equiv_{\mathrm{W}}\PK[]\equiv_{\mathrm{W}}\wCB[]\equiv_{\mathrm{W}}\CB[]$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=strictreducible] (UCBaire) to (PST);
\draw [style=strictreducible] (PST.north) to (CBaire);
\draw [style=strictreducible] (PST.north) to (PK);
\draw [style=strictreducible] (PK) to (ScListBaire);
\draw [style=strictreducible] (CBBaire) to (PKTree);
\draw [style=strictreducible] (CBaire) to (PKTree);
\draw [style=strictreducible] (ScListCantor) to (PK);
\begin{scope}[transform canvas={xshift=-.5em}]
\draw [style=nonreducible] (CBaire) to (CBBaire.south east);
\draw [style=nonreducible] (CBaire) to (ScListBaire);
\end{scope}
\draw [style=nonreducible] (CBBaire.290) to (ScListBaire.70);
\draw [strictreducible] (ScListBaire.110) to (CBBaire.250);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Some multi-valued functions studied in this paper. Black arrows represent Weihrauch reducibility in the direction of the arrow. Red arrows mean that the existence of a reduction is still open. If a function cannot be reached from another one by following a path of arrows, then there is no reduction between the two functions.}
\label{SummaryAtThebeginning}
\end{figure}
\begin{figure}[H]
\begin{center}
\tikzstyle{every picture}=[tikzfig]
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=box] (1) at (0,0) {$\List[2^\mathbb{N},<\omega]$};
\node [style=box] (2) at (12,0) {$\wList[2^\mathbb{N}]\equiv_{\mathrm{W}}\wList[2^\mathbb{N},\leq \omega]$};
\node [style=box] (3) at (0,4) {$\List[2^\mathbb{N}]$};
\node [style=box] (4) at (12,4) {$\wScList[2^\mathbb{N}]$};
\node [style=box] (5) at (0,8) {$\mathsf{UC}_{\Baire} \equiv_{\mathrm{W}}\wList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\List[\mathbb{N}^\mathbb{N}]$};
\node [style=box] (6) at (12,8) {$\ScList[2^\mathbb{N}]$};
\node [style=box] (7) at (0,12) {$\mathsf{C}_{\Baire}$};
\node [style=box] (8) at (12,12) {$\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\PK[\mathbb{N}^\mathbb{N}]$};
\node [style=box] (9) at (12,16) {$\ScList[\mathbb{N}^\mathbb{N}]$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=strictreducible] (1) to (3);
\draw [style=strictreducible] (3) to (5);
\draw [style=strictreducible] (5) to (7);
\draw [style=strictreducible] (2) to (3);
\draw [style=strictreducible] (2) to (4);
\draw [style=strictreducible] (4) to (6);
\draw [style=strictreducible] (6) to (8);
\draw [style=strictreducible] (8) to (9);
\draw [style=strictreducible] (5) to (8);
\draw [style=strictreducible] (1) to (4);
\draw [style=strictreducible] (3) to (6);
\draw [style=nonreducible] (7) to (9);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Multi-valued functions related to listing problems in the Weihrauch lattice. The arrows have the same meaning as in Figure \ref{SummaryAtThebeginning}.}
\label{Figureslist}
\end{center}
\end{figure}
The paper is organized as follows. In \S \ref{Background} we give the necessary preliminaries: namely, in the first part we provide definitions and notations about trees together with some useful lemmas, while in the second part we deal with represented spaces and Weihrauch reducibility. In \S \ref{perfectsetsgeneral} we study multi-valued functions related to the perfect set and perfect tree theorem in Baire and Cantor space, while in \S \ref{cantorbendixson} we consider problems related to the Cantor-Bendixson theorem in the same setting. In \S \ref{otherspaces} we study the problems considered in \S \ref{perfectsetsgeneral} and \S \ref{cantorbendixson} for arbitrary computable metric spaces, while \S\ref{Openquestions} lists some open problems that remain to be solved.
\section{Background}
\label{Background}
\subsection{Sequences and trees}
\label{Sequencessandtrees}
Let $\mathbb{N}^n$ denote the set of finite sequences of natural numbers of length $n$, where the length is denoted by $\length{\cdot}$. If $n=0$, $\mathbb{N}^0=\{\str{}\}$, where $\str{}$ is the empty sequence: in general, given $i_0,\dots,i_{n-1} \in \mathbb{N}$, we denote by $\str{i_0,\dots,i_{n-1}}$ the finite sequence in $\mathbb{N}^n$ having digits $i_0,\dots,i_{n-1}$. The set of all finite sequences of natural numbers is denoted by $\mathbb{N}^{<\mathbb{N}}$, while we write $2^{<\mathbb{N}}$ for the set of all finite sequences of $0$ and $1$. For $\sigma \in \mathbb{N}^{<\mathbb{N}}$ and $m\leq\length{\sigma}$, let $\sigma[m]:=\str{\sigma(0),\dots,\sigma(m-1)}$. Given $\sigma, \tau \in \mathbb{N}^{<\mathbb{N}}$, we use $\sigma \sqsubseteq \tau$ to say that $\sigma$ is an \emph{initial segment} of $\tau$ (equivalently, $\tau$ is an \emph{extension} of $\sigma$), i.e. $\sigma=\tau[m]$ for some $m \leq \length{\tau}$. We use the symbol $\sqsubset$ in case $\sigma \sqsubseteq \tau$ and $\length{\sigma}<\length{\tau}$, and in case $\sigma \not\sqsubseteq \tau$ and $\tau \not\sqsubseteq \sigma$ we say that $\sigma$ and $\tau$ are \emph{incomparable} ($\sigma ~|~ \tau$).
The concatenation of two finite sequences $\sigma,\tau$ is denoted by $\sigma^\smallfrown\tau$, but often we just write $\sigma\tau$. The same symbol is also used for the concatenation of a finite and an infinite sequence. For $n,k \in \mathbb{N}$, we denote by $n^k$ the sequence made of $k$ many $n$'s: in case $k=1$ we just write $n$ and we use $n^\mathbb{N}$ to denote the infinite sequence with constant value $n$. For $\sigma \in \mathbb{N}^{<\mathbb{N}}$ and $p \in \mathbb{N}^\mathbb{N}$ we denote by $\sigma^-$ and $p^-$ the result of deleting the first digit of the sequence.
\begin{remark}
\thlabel{Bijections}
We fix a bijection between $\mathbb{N}^{<\mathbb{N}}$ and $\mathbb{N}$: to avoid too much notation we do not introduce a specific symbol for this bijection, but we identify a sequence with the number representing it: it should be clear from the context whether we are referring to a finite sequence or to the number representing it. We need the bijection to enjoy all the usual properties such as $\sigma \mapsto \length{\sigma}$ being computable: moreover we require that if $\sigma \sqsubset \tau$ then $\sigma<\tau$.
\end{remark}
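For concreteness, one bijection with these properties can be sketched computationally. The following is an illustrative Python sketch, not the specific bijection the paper fixes, and all names are ours: finite sequences are enumerated by increasing \lq\lq measure\rq\rq\ $\length{\sigma}+\sum_{i<\length{\sigma}}\sigma(i)$, which strictly increases under proper extension, so $\sigma \sqsubset \tau$ implies that the code of $\sigma$ is smaller than the code of $\tau$.

```python
# Illustrative sketch (not from the paper): a bijection between finite
# sequences of naturals and N such that a proper initial segment always
# receives a smaller code. A proper extension adds at least 1 to
# len(sigma) + sum(sigma), so ordering by this measure respects ⊏.

def compositions(total, k):
    """All length-k tuples of nonnegative ints summing to `total`, lexicographically."""
    if k == 0:
        if total == 0:
            yield ()
        return
    for first in range(total + 1):
        for rest in compositions(total - first, k - 1):
            yield (first,) + rest

def sequences_up_to_measure(n):
    """Enumerate every finite sequence with len + sum <= n, measure by measure."""
    for m in range(n + 1):
        if m == 0:
            yield ()
            continue
        for k in range(1, m + 1):  # length k, digits summing to m - k
            yield from compositions(m - k, k)

def code(sigma, bound=20):
    """Position of `sigma` in the enumeration; a bijection onto an initial segment of N."""
    for i, tau in enumerate(sequences_up_to_measure(bound)):
        if tau == sigma:
            return i
    raise ValueError("increase `bound`")
```

For instance, the enumeration starts $\str{}, \str{0}, \str{1}, \str{0,0}, \dots$, and since each measure class is finite, every sequence receives a code.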
A tree $T$ is a nonempty subset of $\mathbb{N}^{<\mathbb{N}}$ closed under initial segments. In case the tree $T$ is a subset of $2^{<\mathbb{N}}$, we call $T$ a binary tree. We say that $f \in \mathbb{N}^\mathbb{N}$ is a path through $T$ if for all $n \in \mathbb{N}$, $f[n]\in T$ where, as for finite sequences, $f[n]=\str{f(0),\dots,f(n-1)}$. We denote by $\body{T}$ the \emph{body} of $T$, that is the set of paths through $T$. We say that a tree $T$ is \textit{ill-founded} iff there exists at least one path in $\body{T}$ and \textit{well-founded} otherwise. Given $\sigma \in T$ we define the tree of extensions of $\sigma$ in $T$ as $T_{\sigma}:=\{\tau \in T:\tau \sqsubseteq \sigma \lor \tau \sqsupseteq \sigma\}$. Notice that $T_\sigma$ is ill-founded iff there exists a path through $T$ that extends $\sigma$. We say that $T$ is \emph{perfect} if every element of $T$ has (at least) two incomparable extensions in $T$, that is, $(\forall \sigma \in T)(\exists \tau,\tau' \in T)(\sigma \sqsubset \tau \land \sigma \sqsubset \tau' \land \tau ~|~ \tau')$. It is straightforward that the body of a nonempty perfect tree is uncountable. Given a tree $T$, the largest perfect subtree $S$ of $T$ is called the \emph{perfect kernel} of $T$ while $\body{T} \setminus \body{S} \subseteq \mathbb{N}^\mathbb{N}$ is called the \emph{scattered part} of $T$. We call $T$ \emph{pruned} if every $\sigma \in T$ has a proper extension. Notice that every perfect tree is pruned. Moreover, if $\body{T}$ is perfect and $T$ is pruned then $T$ is a perfect tree.
\begin{remark}
\thlabel{Perfectnessincantor}
It is useful to notice that, for a binary tree $T$, if $\length{\body{T}}>\aleph_0$ then there must be uncountably many paths with infinitely many ones. In other words, it cannot be the case that all the paths in $\body{T}$ are eventually zero, as such paths are only countably many.
\end{remark}
Given trees $T$ and $S$, we define the \textit{disjoint union} of $T$ and $S$ as $T\sqcup S=\{\str{}\} \cup \{\str{0} \tau: \tau \in T \} \cup \{\str{1} \tau: \tau \in S \}$. Of course, this is still a tree and it has the property that $T\sqcup S$ is ill-founded iff at least one of $T$ and $S$ is ill-founded. The construction can be easily generalized to countably many trees letting $\disjointunion{i\in \mathbb{N}}{T^i}:=\{\str{}\} \cup \{\str{i} \tau:\tau \in T^i \land i \in \mathbb{N}\}$ and we still have that $\disjointunion{i\in \mathbb{N}}{T^i}$ is ill-founded iff there exists $i$ such that $T^i$ is ill-founded. We also define the \emph{binary disjoint union} as $\binarydisjointunion{i\in \mathbb{N}}{T^i}:=\{0^n:n \in \mathbb{N}\} \cup \{0^i\str{1}\tau: \tau \in T^i \land i \in \mathbb{N}\}$.
\begin{remark}
\thlabel{Disjoint_union}
Notice that if all the $T^i$'s are binary trees, then so is $\binarydisjointunion{i\in \mathbb{N}}{T^i}$ and, regardless of the well-foundedness of the $T^i$'s, $0^\mathbb{N}\in\body{\binarydisjointunion{i\in \mathbb{N}}{T^i}}$. Moreover $\length{\body{\binarydisjointunion{i\in \mathbb{N}}{T^i}}}=1+\sum\limits_{i \in \mathbb{N}} \length{\body{T^i}}$. In particular, $\length{\body{\binarydisjointunion{i\in \mathbb{N}}{T^i}}}>1$ iff there exists an $i \in \mathbb{N}$ such that $T^i$ is ill-founded.
\end{remark}
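The disjoint-union constructions are concrete enough to sketch on finite truncations. In the following illustrative Python sketch (not part of the paper; all names are ours) a tree is modelled as a finite set of tuples closed under initial segments, and only finitely many trees are combined:

```python
# Sketch (illustrative): finite truncations of trees as sets of tuples.

def disjoint_union(trees):
    """Disjoint union, restricted to finitely many trees: the root plus
    a copy of the i-th tree grafted below the node <i>."""
    result = {()}
    for i, t in enumerate(trees):
        result |= {(i,) + sigma for sigma in t}
    return result

def binary_disjoint_union(trees):
    """Binary version: the i-th tree is grafted below 0^i 1, and the spine
    of all-zero nodes is included, so 0^N is always a path in the limit."""
    result = {(0,) * n for n in range(len(trees) + 1)}
    for i, t in enumerate(trees):
        result |= {(0,) * i + (1,) + sigma for sigma in t}
    return result

def is_tree(t):
    """Closure under initial segments (nonemptiness is checked separately)."""
    return all(sigma[:k] in t for sigma in t for k in range(len(sigma)))
```

On actual (infinite) trees the same formulas apply node by node, which is why both operations are computable on characteristic functions.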
We now turn our attention to another operation on trees, namely \emph{interleaving}. Given $\sigma,\tau \in \mathbb{N}^{n}$, we define $\sigma*\tau:=\str{ \sigma(0),\tau(0),\dots ,\sigma(n-1),\tau(n-1) }$. The same definition applies to infinite sequences. Then given trees $T$ and $S$, the {interleaving} between $T$ and $S$ is $T*S:=\{ \sigma*\tau: \length{\sigma}=\length{\tau} \land \sigma \in T \land \tau \in S\}$. Clearly, $T*S$ is a tree and it is ill-founded iff both $T$ and $S$ are ill-founded. This construction can be generalized to countably many trees in a straightforward way and we use a notation such as $\underset{i \in \mathbb{N}}{*}T^i$.
We often use the interleaving $\exploded{T}:= T*2^{<\mathbb{N}}$, which we call the \emph{explosion} of $T$.
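Interleaving and explosion can be sketched in the same finite-truncation style (an illustrative Python sketch with our own names, not the paper's formalism; the explosion is necessarily truncated at a finite depth):

```python
# Sketch (illustrative): interleaving of sequences and of finite tree
# truncations, and the (truncated) explosion T * 2^{<N}.

from itertools import product

def interleave(sigma, tau):
    """sigma * tau = <sigma(0), tau(0), ..., sigma(n-1), tau(n-1)>."""
    assert len(sigma) == len(tau)
    out = []
    for a, b in zip(sigma, tau):
        out.extend((a, b))
    return tuple(out)

def tree_interleave(t, s):
    """T * S on finite truncations: interleave all same-length pairs."""
    return {interleave(sigma, tau)
            for sigma in t for tau in s if len(sigma) == len(tau)}

def explode(t, depth):
    """T * 2^{<N}, truncated: interleave T with all binary sequences of length <= depth."""
    binary = {b for n in range(depth + 1) for b in product((0, 1), repeat=n)}
    return tree_interleave(t, binary)
```

Since each node of $T$ of length $n$ is paired with all $2^n$ binary sequences of that length, the explosion of an ill-founded tree contains a perfect subtree, which is what makes it useful for completeness arguments later on.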
Sometimes it is useful to be able to \lq\lq translate\rq\rq\ back and forth between sequences of natural numbers and binary sequences.
\begin{definition}
\thlabel{Translationfinitesequences}
We define:
\begin{itemize}
\item $\translateCantor\colon\mathbb{N}^{<\mathbb{N}} \rightarrow 2^{<\mathbb{N}}$ by $\translateCantor(\sigma):=0^{\sigma(0)}10^{\sigma(1)}1\dots10^{\sigma(\length{\sigma}-1)}1$; in particular, $\translateCantor(\str{}):=\str{}$;
\item $\translateBaire\colon2^{<\mathbb{N}}\rightarrow \mathbb{N}^{<\mathbb{N}}$ by
\[\translateBaire(\tau):=
\begin{cases}
\translateCantor^{-1}({\tau[n_\tau+1]})&\text{if } (\exists i)(\tau(i)=1) \text{ where } n_\tau:=\max \{i:\tau(i)=1\};
\\
\str{} & \text{if } (\forall i)(\tau(i)=0).
\end{cases}\]
\end{itemize}
\end{definition}
The two functions defined above have the following properties:
\begin{itemize}
\item $\translateCantor$ is injective;
\item $\translateBaire(\translateCantor(\sigma))=\sigma$;
\item $\sigma \sqsubset \sigma'$ iff $\translateCantor(\sigma) \sqsubset \translateCantor(\sigma')$;
\item if $\tau \sqsubset \tau'$ then $\translateBaire(\tau) \sqsubseteq \translateBaire(\tau')$.
\end{itemize}
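These block codings are directly executable; the following illustrative Python sketch (the function names `to_cantor` and `to_baire` are ours) implements the two maps on finite sequences and exhibits the listed properties:

```python
# Sketch (illustrative): the finite-sequence translations. to_cantor maps a
# sequence of naturals to the block code 0^{sigma(0)} 1 0^{sigma(1)} 1 ...,
# and to_baire decodes a binary sequence up to its last 1 (trailing zeros
# are discarded, matching the definition in the paper).

def to_cantor(sigma):
    out = []
    for digit in sigma:
        out.extend([0] * digit)
        out.append(1)
    return tuple(out)

def to_baire(tau):
    if 1 not in tau:
        return ()
    last_one = max(i for i, bit in enumerate(tau) if bit == 1)
    blocks, current = [], 0
    for bit in tau[:last_one + 1]:
        if bit == 0:
            current += 1
        else:
            blocks.append(current)
            current = 0
    return tuple(blocks)
```

For example, $\str{2,0,1}$ is encoded as $001101$, and decoding ignores any trailing zeros, which is exactly why $\translateBaire(\tau) \sqsubseteq \translateBaire(\tau')$ may fail to be strict for $\tau \sqsubset \tau'$.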
We are now able to \lq\lq translate\rq\rq\ back and forth between trees on $\mathbb{N}$ and binary trees. We use the same symbols $\translateCantor$ and $\translateBaire$ as the context explains which function we are using.
\begin{definition}
\thlabel{Translatetrees}
Let $T\subseteq 2^{<\mathbb{N}}$ and $S\subseteq \mathbb{N}^{<\mathbb{N}}$ be trees. We define:
\begin{itemize}
\item $\translateBaire(T):=\{\sigma \in \mathbb{N}^{<\mathbb{N}}: \translateCantor(\sigma) \in T\}$;
\item $\translateCantor(S):=\{\tau \in 2^{<\mathbb{N}}:\translateBaire(\tau)\in S\}$.
\end{itemize}
\end{definition}
\begin{remark}
\thlabel{Translation}
Notice that, since $\translateCantor(\sigma0^n)=\translateCantor(\sigma)$ for every $n$, if $\sigma \in \translateBaire(T)$ then $\sigma0^\mathbb{N} \in \body{\translateBaire(T)}$. It is straightforward to check that $\translateBaire(T)=\{\translateBaire(\tau) \in \mathbb{N}^{<\mathbb{N}}: \tau \in T\}$. On the other hand, for most trees $S\subseteq \mathbb{N}^{<\mathbb{N}}$, $\translateCantor(S) \neq \{\translateCantor(\tau) \in 2^{<\mathbb{N}}: \tau \in S\}$ as the latter is not even a tree.
\end{remark}
The back and forth translations between sequences in $\mathbb{N}^\mathbb{N}$ and $2^\mathbb{N}$ are also denoted by the same function symbols used for finite sequences and for trees: again the context clarifies which one we are using.
\begin{definition}
\thlabel{translateinfinitesequence}
\begin{itemize}
\item $\translateCantor\colon\mathbb{N}^\mathbb{N} \rightarrow 2^\mathbb{N}$ is defined by $\translateCantor(p):=\underset{n \in \mathbb{N}}{\bigcup}\translateCantor(p[n])=0^{p(0)}10^{p(1)}1\dots 0^{p(n)}1\dots$;
\item $\partialfunction{\translateBaire}{2^\mathbb{N}}{\mathbb{N}^\mathbb{N}}$ has domain $\{q:(\exists^\infty i)(q(i)=1)\}$ and is defined by $\translateBaire(q):=\underset{n \in \mathbb{N}}{\bigcup}\translateBaire(q[n])$.
\end{itemize}
In both definitions the union makes sense because the finite sequences are comparable.
\end{definition}
Notice that all the functions $\translateBaire$ and $\translateCantor$ we defined are computable. For the functions on finite sequences this means the usual Turing computability. For the functions on trees and infinite sequences, computability is to be understood in the sense of \S \ref{representedspaces}.
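The computability claim can be made tangible: a realizer must produce every finite prefix of the output from some finite prefix of the input. The following illustrative Python sketch (names ours) renders the infinite-sequence translations as generators, which have exactly this prefix-to-prefix behaviour:

```python
# Sketch (illustrative): streaming versions of the translations. Each output
# item is emitted after reading only finitely much of the input stream,
# mirroring computability on Baire/Cantor space.

from itertools import islice, cycle  # used in the examples below

def to_cantor_stream(p):
    """p: iterable of naturals; yields the bits of 0^{p(0)} 1 0^{p(1)} 1 ..."""
    for digit in p:
        for _ in range(digit):
            yield 0
        yield 1

def to_baire_stream(q):
    """q: iterable of bits with infinitely many 1s; yields the 0-block lengths."""
    current = 0
    for bit in q:
        if bit == 0:
            current += 1
        else:
            yield current
            current = 0
```

Note that `to_baire_stream` never emits anything on an input that is eventually zero, which reflects the fact that $\translateBaire$ is only defined on sequences with infinitely many ones.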
\begin{lemma}
\thlabel{AllPropertiesOfTranslation}
The functions $\translateCantor$ and $\translateBaire$ on infinite sequences and trees enjoy the following fundamental properties.
\begin{enumerate}
\item The range of $\translateCantor$ is $\{q \in 2^\mathbb{N}:(\exists^\infty i)(q(i)=1)\}$;
\item $\translateBaire(\translateCantor(p))=p$ for every $p \in \mathbb{N}^\mathbb{N}$.
\item $\translateCantor(\translateBaire(q))=q$ for every $q \in \operatorname{dom}(\translateBaire)$.
\item If $S\subseteq \mathbb{N}^{<\mathbb{N}}$ is a tree, then for every $p \in \mathbb{N}^\mathbb{N}$ we have $p \in \body{S} \iff \translateCantor(p)\in \body{\translateCantor(S)}$; hence $\body{\translateCantor(S)}\subseteq \{\translateCantor(p): p \in \body{S}\}\cup \{q:(\forall^{\infty} i)(q(i)=0)\}$, so that $\length{\body{\translateCantor(S)}}\leq \aleph_0 \iff \length{\body{S}} \leq \aleph_0$.
\item If $T\subseteq 2^{<\mathbb{N}}$ and $q \in \operatorname{dom}(\translateBaire)$ we have that $q \in \body{T} \iff \translateBaire(q) \in \body{\translateBaire(T)}$.
\item If $T\subseteq 2^{<\mathbb{N}}$ and $p \in \mathbb{N}^\mathbb{N}$ then $p \in \body{\translateBaire(T)} \iff \translateCantor(p) \in \body{T}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proofs are straightforward from the definitions above.
\end{proof}
\begin{lemma}
\thlabel{Perfecttreesinbaire}
If $T$ is a binary tree such that $\body{T}$ is perfect then $\body{\translateBaire(T)}$ is perfect as well. Furthermore, if $T$ is a perfect tree then $\translateBaire(T)$ is a perfect tree as well.
\end{lemma}
\begin{proof}
Let $T$ be a binary tree such that $\body{T}$ is perfect. We show that no $f \in \body{\translateBaire(T)}$ is isolated, i.e.\ $(\forall n)(\exists g \in \body{\translateBaire(T)})(f[n] \sqsubset g \land f \neq g)$.
Fix $n$: by \thref{AllPropertiesOfTranslation}(6) we get that $\translateCantor(f) \in \body{T}$ and, in particular, $\sigma := \translateCantor(f[n]) \in T$. Since $\body{T}$ is perfect, by \thref{Perfectnessincantor}, there exists $h \in \body{T}$ with infinitely many ones such that $\sigma \sqsubset h$ and $\translateCantor(f) \neq h$. By \thref{AllPropertiesOfTranslation}(5) $\translateBaire(h) \in \body{\translateBaire(T)}$ and letting $g:= \translateBaire(h)$ we reach the conclusion.
In case $T$ is a perfect tree, it suffices to show that $\translateBaire(T)$ is pruned. Suppose there exists $\sigma \in \translateBaire(T)$ with no proper extensions in $\translateBaire(T)$. Then $\tau:= \translateCantor(\sigma)$ belongs to $T$ and the only path in $T$ extending $\tau$ is of the form $\tau 0^\mathbb{N}$, contradicting the perfectness of $T$.
\end{proof}
Notice that if $S\subseteq \mathbb{N}^{<\mathbb{N}}$ is a perfect tree it may be the case that $\body{\translateCantor(S)}$ is not perfect, e.g.\ let $S=\{\sigma \in \mathbb{N}^{<\mathbb{N}}:\sigma(0)=0\}$ and notice that $0^\mathbb{N}$ is isolated in $\body{\translateCantor(S)}$.
\subsection{Represented spaces and Weihrauch reducibility}
\label{representedspaces}
We give a brief introduction to computable analysis and the theory of represented spaces. We refer the reader to \cite{Weihrauch} and \cite{brattka2021weihrauch} for more on these topics. In particular, we assume the notion of partial computable function from $\mathbb{N}^\mathbb{N}$ to $\mathbb{N}^\mathbb{N}$, as formulated in the so-called TTE theory of computation. We denote by $\Phi_e$ the partial function computed by the Turing machine of index $e \in \mathbb{N}$.
A \emph{represented space} $\textbf{X}$ is a pair $(X,\repmap{X})$ where $X$ is a set and $\partialfunction{\repmap{X}}{\mathbb{N}^\mathbb{N}}{X}$ is a (possibly partial) surjection. For each $x \in X$ we say that $p$ is a $\repmap{X}$-name for $x$ if $\repmap{X}(p)=x$.
A computational problem $f$ between represented spaces $\textbf{X}$ and $\textbf{Y}$ is formalized as a \emph{partial multi-valued function} $\partialmultifunction{f}{\textbf{X}}{\textbf{Y}}$. A (possibly partial) function $\partialfunction{F}{\mathbb{N}^\mathbb{N}}{\mathbb{N}^\mathbb{N}}$ is a \emph{realizer} for $\partialmultifunction{f}{\textbf{X}}{\textbf{Y}}$ if for every $p \in \operatorname{dom}(f\circ \delta_X)$, $\delta_Y(F(p)) \in f(\delta_X(p))$. The notion of realizer allows us to transfer properties of functions on the Baire space (such as computability or continuity) to multi-valued functions defined on represented spaces in general. Whenever we say that a multi-valued function between represented spaces is computable we mean that it has a computable realizer.
We fix a computable enumeration $(q_i)_{i \in \mathbb{N}}$ of $\mathbb{Q}$ and we represent the space $\mathbb{R}$ by the so-called Cauchy representation $\delta_\mathbb{R}$ where $\operatorname{dom}(\delta_\mathbb{R}):=\{p \in \mathbb{N}^\mathbb{N}:(\forall j)(\forall i>j)(|q_{p(i)}-q_{p(j)}|<2^{-j})\}$ and $\delta_\mathbb{R}(p) := \lim q_{p(n)}$.
We can now represent any \emph{computable metric space} $\mathcal{X}=(X,d,\alpha)$, that is a separable metric space $(X,d)$ and a dense sequence $\function{\alpha}{\mathbb{N}}{X}$ such that $\function{d\circ (\alpha \times \alpha)}{\mathbb{N}^2}{\mathbb{R}}$ is a computable double sequence of real numbers. The Cauchy representation $\partialfunction{\delta_X}{\mathbb{N}^\mathbb{N}}{X}$ of such a space has domain $\{p \in \mathbb{N}^\mathbb{N}:(\forall j)(\forall i>j)(d(\alpha(p(i)),\alpha(p(j)))<2^{-j})\}$ and is defined by $\delta_X(p)=\lim \alpha(p(n))$. We always assume that computable metric spaces are represented by this representation. For convenience, we fix a computable enumeration $(B_i)_{i \in \mathbb{N}}$ of all basic open sets of $\mathcal{X}$, where the ball $B_{\pairing{n,m}}$ is centered in $\alpha(n)$ and has radius $q_m$.
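To illustrate how a Cauchy name is used, here is a Python sketch under our own conventions, not the paper's formalism: a name is modelled as a function $n \mapsto$ rational satisfying the stated modulus $|q_{p(i)}-q_{p(j)}|<2^{-j}$ for $i>j$, and `sqrt2_name` is a hypothetical example name for $\sqrt{2}$.

```python
# Sketch (illustrative): a Cauchy name of a real, modelled as a function
# n -> rational, and how to read a 2**-n approximation off it.

from fractions import Fraction

def sqrt2_name(n):
    """A rational within 2**-(n+1) of sqrt(2), found by exact bisection;
    this precision guarantees the Cauchy modulus |name(i)-name(j)| < 2**-j."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** (n + 1)):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def approximate(name, n):
    """Read a 2**-n approximation of the represented real from its name."""
    return name(n + 1)
```

The point of the sketch is that a name gives effective access to arbitrarily good rational approximations, which is all that the representation $\delta_X$ promises.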
A computable Polish space is a computable metric space $\mathcal{X}=(X,d,\alpha)$ such that the metric $d$ is complete.
Given a computable metric space $\mathcal{X}$ and $k>0$, using the inductive definition of Borel sets, we can define the represented spaces $\boldsymbol{\Sigma}_k^0(\mathcal{X})$, $\boldsymbol{\Pi}_k^0(\mathcal{X})$ and $\boldsymbol{\Delta}_k^0(\mathcal{X})$: this shows that the Borel classes can be naturally considered as represented spaces.
\begin{definition}[{\cite[Definition 3.1]{effectiveborelmeasurability}}]
\thlabel{Borelrepspaces}
For any computable metric space $\mathcal{X}=(X,d,\alpha)$ and for any $k>0$, we define the represented spaces $(\boldsymbol{\Sigma}_k^0(\mathcal{X}),\delta_{\boldsymbol{\Sigma}_k^0(\mathcal{X})})$, $(\boldsymbol{\Pi}_k^0(\mathcal{X}),\delta_{\boldsymbol{\Pi}_k^0(\mathcal{X})})$ and $(\boldsymbol{\Delta}_k^0(\mathcal{X}),\delta_{\boldsymbol{\Delta}_k^0(\mathcal{X})})$ inductively as follows:
\begin{itemize}
\item $\delta_{\boldsymbol{\Sigma}_1^0(\mathcal{X})}(p):=\underset{i \in \mathbb{N}}{\bigcup}B_{p(i)}$;
\item $\delta_{\boldsymbol{\Pi}_k^0(\mathcal{X})}(p):= X\setminus\delta_{\boldsymbol{\Sigma}_k^0(\mathcal{X})}(p)$;
\item $\delta_{\boldsymbol{\Sigma}_{k+1}^0(\mathcal{X})}(p_0*p_1*\dots):=\underset{i \in \mathbb{N}}{\bigcup}\delta_{\boldsymbol{\Pi}_k^0(\mathcal{X})}(p_i)$.
\end{itemize}
\end{definition}
Notice that the representation of $\boldsymbol{\Pi}_1^0(\mathcal{X})$ is the standard negative representation of closed sets of $\mathcal{X}$: as usual in the literature we denote this represented space by $\negrepr{\mathcal{X}}$. We denote by $\Pi_1^0(\mathcal{X})$ the collection of closed sets of $\mathcal{X}$ having a computable name.
Let $\mathbf{Tr}$ and $\mathbf{Tr}_2$ be the represented spaces of trees on $\mathbb{N}$ and binary trees respectively: for both, the representation map is the characteristic function and hence we identify $\mathbf{Tr}$ and $\mathbf{Tr}_2$ with closed subsets of $2^\mathbb{N}$. The surjective function $\function{[\cdot]}{\mathbf{Tr}}{\negrepr{\mathbb{N}^\mathbb{N}}}$ defined by $T\mapsto \body{T}$ is computable with multi-valued computable inverse and the same holds for its restriction to $\mathbf{Tr}_2$, which is onto $\negrepr{2^\mathbb{N}}$. This means that the negative representation of a closed subset $C$ of $\mathbb{N}^\mathbb{N}$ (resp.\ $2^\mathbb{N}$) is equivalent (in the sense of \cite[Definition 2.3.2]{Weihrauch}) to the one given by the characteristic function of a (resp.\ binary) tree $T$ such that $\body{T}=C$. We refer to the latter representation as the \emph{tree representation}.
We can represent the class $\boldsymbol{\Sigma}_1^1(\mathcal{X})$ of analytic subsets of $\mathcal{X}$ by defining a name for $S$ as a name for a closed set $C\subseteq \mathcal{X}\times \mathbb{N}^\mathbb{N}$ such that $S$ is the projection on the first coordinate of $C$. Then, a name for a coanalytic set $R \in \boldsymbol{\Pi}_1^1(\mathcal{X})$ is just a name for its complement.
The next theorem summarizes some well-known results about the position of certain sets of trees in the \emph{Kleene arithmetical and analytical hierarchies} (or lightface hierarchies), the effective counterparts of the Borel and projective hierarchies (or boldface hierarchies). Here, completeness is defined with respect to effective Wadge reducibility. These notions can be found, sometimes with different terminology, for example in \cite{Moschovakis, kechris2012classical}.
\begin{theorem}
\thlabel{Complexityresults}
The following classification results hold:
\begin{enumerate}[(i)]
\item The set $\mathcal{IF}:= \{T \in \mathbf{Tr} : T \text{ is ill-founded}\}$ is $\Sigma_1^1$-complete, while $\mathcal{WF}:= \{T \in \mathbf{Tr} : T \text{ is well-founded}\}$ is $\Pi_1^1$-complete. In contrast, $\mathcal{IF}_2:= \mathcal{IF} \cap \mathbf{Tr}_2$ is $\Pi_1^0$-complete and $\mathcal{WF}_2:= \mathcal{WF} \cap \mathbf{Tr}_2$ is $\Sigma_1^0$-complete.
\item The set $\mathcal{T}^{>\aleph_0}:= \{T \in \mathbf{Tr}: \length{\body{T}} > \aleph_0\}$ is $\Sigma_1^1$-complete, while $\mathcal{T}^{\leq\aleph_0} := \{T \in \mathbf{Tr}: \length{\body{T}} \leq \aleph_0\}$ is $\Pi_1^1$-complete. In this case, $\mathcal{T}^{>\aleph_0}_2 := \mathcal{T}^{>\aleph_0} \cap \mathbf{Tr}_2$ is $\Sigma_1^1$-complete as well and $\mathcal{T}^{\leq\aleph_0}_2 := \mathcal{T}^{\leq\aleph_0} \cap \mathbf{Tr}_2$ is also $\Pi_1^1$-complete.
\item The set $\mathcal{UB}:= \{T \in \mathbf{Tr} : \length{\body{T}}=1\}$ is $\Pi_1^1$-complete. In contrast, $\mathcal{UB}_2 := \mathcal{UB}\cap\mathbf{Tr}_2$ is $\Pi_2^0$-complete.
\end{enumerate}
\end{theorem}
\begin{proof}
To show that $\mathcal{IF}$ is $\Sigma_1^1$-complete see \cite[Theorem 27.1]{kechris2012classical}: the theorem states the boldface case, but its proof works also in the lightface one. If $T\in \mathbf{Tr}_2$, notice that, by K\"{o}nig's lemma, $T \in \mathcal{IF}_2$ iff $(\forall n)(\exists \tau \in 2^{n})(\tau \in T)$; hence $\mathcal{IF}_2$ is $\Pi_1^0$ and completeness is straightforward. It follows immediately that $\mathcal{WF}$ is $\Pi_1^1$-complete and $\mathcal{WF}_2$ is $\Sigma_1^0$-complete.
To prove that $\mathcal{T}^{>\aleph_0}$ is $\Sigma_1^1$-complete notice that, by the Cantor-Bendixson theorem for trees, $T \in \mathcal{T}^{>\aleph_0} $ iff $(\exists S \subseteq T)(S \text{ is nonempty and perfect})$: the latter is a $\Sigma_1^1$ formula and it remains to show that $\mathcal{T}^{>\aleph_0}$ is complete for $\Sigma_1^1$ sets. This is immediate as $T \in \mathcal{IF} \iff \exploded{T} \in \mathcal{T}^{>\aleph_0}$. The proof for $\mathcal{T}^{>\aleph_0}_2$ is similar and it follows immediately that $\mathcal{T}^{\leq\aleph_0}$ and $\mathcal{T}^{\leq\aleph_0}_2$ are $\Pi_1^1$-complete.
To prove that $\mathcal{UB}$ is $\Pi_1^1$-complete notice that, by the effective perfect set theorem (see \cite[Theorem 4F.1]{Moschovakis}), $T \in \mathcal{UB}$ iff
\[(\exists p \in \mathsf{HYP}(\mathbb{N}^\mathbb{N}))(p \in \body{T}) \land (\forall \tau,\tau')(\tau ~|~ \tau' \implies T_\tau \in \mathcal{WF} \lor T_{\tau'} \in \mathcal{WF}),\]
where $\mathsf{HYP}(\mathbb{N}^\mathbb{N})$ is the set of hyperarithmetical elements in $\mathbb{N}^\mathbb{N}$.
Notice that the formula is $\Pi_1^1$: indeed, the second conjunct is clearly $\Pi_1^1$ and by Kleene's quantification theorem (see \cite[Theorem 4D.3]{Moschovakis}), the first conjunct is $\Pi_1^1$ as well. It remains to show that $\mathcal{UB}$ is complete for $\Pi_1^1$ sets. To do so, it suffices to notice that $T \in \mathcal{WF}$ iff $S \in \mathcal{UB}$, where $S:= \{0^n:n \in \mathbb{N}\} \sqcup T$ (indeed, $\body{S}=\{0^\mathbb{N}\} \cup \{1p:p \in \body{T}\}$). If $T \in \mathbf{Tr}_2$, notice that $T \in \mathcal{UB}_2$ iff
\[T \in \mathcal{IF}_2 \land (\forall \tau,\tau')(\tau ~|~ \tau' \implies T_\tau \in \mathcal{WF}_2 \lor T_{\tau'} \in \mathcal{WF}_2).\]
The formula is clearly $\Pi_2^0$ and proving completeness is straightforward.
\end{proof}
To compare the uniform computational content of different problems, we use the framework of \emph{Weihrauch reducibility}. We say that a problem $f$ is \emph{Weihrauch reducible} to a problem $g$, written $f \le_{\mathrm{W}} g$, if there are computable maps $\partialfunction{\Phi,\Psi}{\mathbb{N}^\mathbb{N}}{\mathbb{N}^\mathbb{N}}$ such that if $p$ is a name for some $x \in \operatorname{dom}(f)$, then:
\begin{enumerate}[(i)]
\item $\Phi(p)$ is a name for some $y \in \operatorname{dom}(g)$;
\item for every name $q$ for some element of $g(y)$, $\Psi(p*q)$ is a name for some element of $f(x)$.
\end{enumerate}
Informally, if $f\le_{\mathrm{W}} g$ we are claiming the existence of a procedure for solving $f$ which is computable modulo a single invocation of $g$ as an oracle (in other words, this procedure transforms realizers for $g$ into realizers for $f$). If $\Phi$ is as above but $\Psi$ is not allowed to use $p$ in its computation, we say that $f$ is \emph{strongly Weihrauch reducible} to $g$, written $f \le_{\mathrm{sW}} g$.
Weihrauch reducibility and strong Weihrauch reducibility are reflexive and transitive; hence they induce the equivalence relations
$\equiv_{\mathrm{W}}$ and $\equiv_{\mathrm{sW}}$: that is, $f\equiv_{\mathrm{W}} g$ iff $f \le_{\mathrm{W}} g$ and $g\le_{\mathrm{W}} f$ (and similarly for $\le_{\mathrm{sW}}$).
The $\equiv_{\mathrm{W}}$-equivalence classes are called \emph{Weihrauch degrees} (similarly the $\equiv_{\mathrm{sW}}$-equivalence classes are called \emph{strong Weihrauch degrees}). Both the Weihrauch degrees and the strong Weihrauch degrees form lattices (see \cite[Theorem\ 3.9\ and\ Theorem\ 3.10]{brattka2021weihrauch}).
There are several natural operations on problems which also lift to the $\equiv_{\mathrm{W}}$-degrees and the $\equiv_{\mathrm{sW}}$-degrees: we mention below the ones we need.
\begin{itemize}
\item The \emph{parallel product} $f \times g$ is defined by $(f \times g)(x,y) := f(x) \times g(y)$.
\item The \emph{finite parallelization} is defined as $f^*((x_i)_{i<n}):=\{(y_i)_{i<n}:(\forall i<n)(y_i \in f(x_i))\}$.
\item The \emph{infinite parallelization} is defined as $\parallelization{f}((x_i)_{i \in \mathbb{N}}):=\{(y_i)_{i \in \mathbb{N}}:(\forall i)(y_i \in f(x_i))\}$.
\end{itemize}
Informally, the three operators defined above capture, respectively, the idea of using $f$ and $g$ in parallel, using $f$ in parallel a finite number of times (specified by the input), and using $f$ countably many times in parallel.
The following definition, with a slightly different notation, was recently given by Sold\`a and the third author.
\begin{definition}[\cite{valentisolda}]
\thlabel{ustar}
For every $\partialmultifunction{f}{\mathbf{X}}{\mathbf{Y}}$, define the finite unbounded parallelization $\partialmultifunction{\ustar{f}}{\mathbb{N} \times \mathbb{N}^\mathbb{N} \times \mathbf{X}}{(\mathbb{N}^\mathbb{N})^{<\mathbb{N}}}$ as follows:
\begin{itemize}
\item instances are triples $(e,w,(x_n)_{n \in \mathbb{N}})$ such that $(x_n)_{n \in \mathbb{N}} \in \operatorname{dom}(\parallelization{f})$ and for each sequence $(q_n)_{n \in \mathbb{N}}$ with $\repmap{Y}(q_n) \in f(x_n)$, there is a $k\in \mathbb{N}$ such that $\Phi_e(w,q_0*\dots * q_{k-1})(0)\downarrow$ in $k$ steps;
\item a solution for $(e,w,(x_n)_{n \in \mathbb{N}})$ is a finite sequence $(q_n)_{n<k}$ such that for every $n <k$, $\repmap{Y}(q_n)\in f(x_n)$ and $\Phi_e(w,q_0*\dots * q_{k-1})(0)\downarrow$ in $k$ steps.
\end{itemize}
\end{definition}
Informally, $\ustar{f}$ takes as input a Turing functional with a parameter and an input for $\parallelization{f}$, and outputs \lq\lq sufficiently many\rq\rq\ names for solutions, where \lq\lq sufficiently many\rq\rq\ is determined by the convergence of the Turing functional given in the input.
We call $f$ a \emph{cylinder} if $f \equiv_{\mathrm{sW}} f \times \operatorname{id}$. If $f$ is a cylinder, then $g \le_{\mathrm{W}} f$ iff $g \le_{\mathrm{sW}} f$ (\cite[Cor.\ 3.6]{BG09}). This is useful for establishing nonreductions because, if $f$ is a cylinder, then it suffices to diagonalize against all strong Weihrauch reductions from $g$ to $f$ in order to show that $g \not\le_{\mathrm{W}} f$. Cylinders are also useful when working with compositional products (discussed below). Observe that for every problem $f$, $f \times \operatorname{id}$ is a cylinder which is Weihrauch equivalent to $f$.
The \emph{compositional product} $f * g$ captures the idea of what can be achieved by first applying $g$, possibly followed by some computation, and then applying $f$. Formally, $f*g$ is any function satisfying
\[ f * g \equiv_{\mathrm{W}} \max_{\le_{\mathrm{W}}} \{f_1 \circ g_1 {}\,:\,{} f_1 \le_{\mathrm{W}} f \land g_1 \le_{\mathrm{W}} g\}. \]
This operator was first introduced in \cite{BolWei11}, and proven to be well-defined in \cite{BP16}. For each problem $f$, we denote by $f^{[n]}$ the $n$-fold iteration of the compositional product of $f$ with itself, i.e., $f^{[1]} = f$, $f^{[2]} = f * f$, and so on.
Many (non-)reductions in this paper follow from the characterization of the \emph{first-order} part $\firstOrderPart{f}$ and the \emph{deterministic part} $\mathsf{Det}(f)$ of a problem $f$. The first was introduced in \cite{dzafarovsolomonyokoyama} and extensively studied in \cite{valentisolda}, while the second was defined in \cite{goh_pauly_valenti_2021}.
We say that a computational problem $f$ is \emph{first-order} if its codomain is $\mathbb{N}$. As we need only the following characterization of $\firstOrderPart{f}$ we omit the technical definition (see e.g.\ \cite[Definition 2.2]{goh_pauly_valenti_2021}).
\begin{theorem}[\cite{dzafarovsolomonyokoyama}]
For every problem $f$, $\firstOrderPart{f} \equiv_{\mathrm{W}} \max_{\le_{\mathrm{W}}}\{ g{}\,:\,{} g \text{ is first-order and } g \le_{\mathrm{W}} f\}$.
\end{theorem}
The following theorem relates the first-order part with the unbounded finite parallelization.
\begin{theorem}[{\cite[Theorem 5.7]{valentisolda}}]
\thlabel{Summaryfopustar}
For every first-order $f$, $\firstOrderPart{(\parallelization{f}) }\equiv_{\mathrm{W}} \ustar{f}$.
\end{theorem}
Similarly, we only need the following characterization of $\mathsf{Det}(f)$ and we omit the formal definition (see \cite[Definition 3.1]{goh_pauly_valenti_2021}).
\begin{theorem}[{\cite[Theorem 3.2]{goh_pauly_valenti_2021}}]
For every problem $f$,
\[ \mathsf{Det}(f) \equiv_{\mathrm{W}} \max_{\le_{\mathrm{W}}}\{g: \partialfunction{g}{X}{\mathbb{N}^\mathbb{N}} \land g \le_{\mathrm{W}} f\}.\]
\end{theorem}
Hence, $\mathsf{Det}(f)$ is the strongest single-valued function which Weihrauch reduces to $f$.
Other useful operations on problems do not lift to Weihrauch degrees (i.e.\ applying the operation to equivalent problems does not always produce equivalent problems).
The first such operation is the \emph{jump}. First we define the jump of a represented space $\mathbf{X}=(X,\repmap{X})$. This is $\mathbf{X}'=(X,\repmap{X}')$ where $\repmap{X}'$ takes as input a sequence of elements of $\mathbb{N}^\mathbb{N}$ converging to some $p \in \operatorname{dom}(\repmap{X})$ and returns $\repmap{X}(p)$. Then, for a problem $\partialmultifunction{f}{\textbf{X}}{\textbf{Y}}$, its jump $ \partialmultifunction{f'}{\textbf{X}'}{\textbf{Y}}$ is defined as $f'(x) := f(x)$. In other words, $f'$ is the following task: given a sequence which converges to a name for an instance of $f$, produce a solution for that instance. The jump lifts to strong Weihrauch degrees but not to Weihrauch degrees (see \cite[\S 5]{BolWei11}). We use $f^{(n)}$ to denote the $n$-th iterate of the jump applied to $f$.
Let $ \partialfunction{\mathsf{lim}}{(\mathbb{N}^\mathbb{N})^{\mathbb{N}}}{\mathbb{N}^\mathbb{N}}, \ (p_n)_{n \in \mathbb{N}}\mapsto \lim p_n$ be the single-valued function
whose domain consists of all converging sequences in $\mathbb{N}^\mathbb{N}$: we have $f^{(n)}\le_{\mathrm{W}} f* \mathsf{lim}^{[n]}$, but the converse reduction does not hold in general.
We now introduce the \emph{totalization of a problem} and the \emph{completion of a problem}. These two operators are different ways of making a partial multi-valued function total; neither of them lifts to Weihrauch degrees. Given a partial multi-valued function $\partialmultifunction{f}{\mathbf{X}}{\mathbf{Y}}$, the totalization of $f$ is the total multi-valued function $\totalization{f}$ defined as
\[\totalization{f}(x):=\begin{cases}
f(x) & \text{ if } x \in \operatorname{dom}(f),\\
Y & \text{otherwise.}
\end{cases}
\]
For more details on the totalization we refer the reader to \cite{CompletionOfChoice}.
To define the completion of a problem $f$ we need to first introduce the completion of a represented space. We adopt the following notation: given $p \in \mathbb{N}^\mathbb{N}$ we define $\hat{p}_n$ to be $\str{}$ if $p(n)=0$, $\str{p(n)-1}$ otherwise; then $p-1$ is the concatenation of all the $\hat{p}_n$'s. For a represented space $\mathbf{X}=(X, \delta_X)$ we define its completion as $\overline{\mathbf{X}}=(\overline{X}, \delta_{\overline{X}})$ where $\overline{X}=X\cup \{\bot\}$ with $\bot \notin X$ and $\function{\delta_{\completion{X}}}{\mathbb{N}^\mathbb{N}}{\completion{X}}$ is the total function defined by
\[\delta_{\completion{X}}(p):=\begin{cases}
\delta_X(p-1)& \text{if } p-1 \in \operatorname{dom}(\delta_X)\\
\bot &\text{otherwise.}
\end{cases}\]
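Concretely, the decoding $p \mapsto p-1$ deletes the zero entries of $p$ and decrements the remaining ones. The following Python sketch (with a function name of our choosing) illustrates this on finite prefixes:

```python
def decode_prefix(p):
    """Compute a finite prefix of p-1 from a finite prefix of p.

    Each entry p(n) contributes the empty word when p(n) == 0 and the
    one-element word <p(n) - 1> otherwise; p-1 is the concatenation of
    these contributions.
    """
    out = []
    for v in p:
        if v != 0:          # zero entries contribute nothing
            out.append(v - 1)
        # a zero entry is skipped: it contributes the empty word
    return out
```

For instance, the prefix $3,0,1,2,0$ of $p$ contributes the prefix $2,0,1$ of $p-1$. Note that if $p$ has only finitely many nonzero entries, then $p-1$ is a finite sequence, hence not in $\operatorname{dom}(\delta_X)$, and such $p$ are names for $\bot$.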
Let $\partialmultifunction{f}{\mathbf{X}}{\mathbf{Y}}$ be a multi-valued function. We define the completion of $f$ as the total multi-valued function $\multifunction{\completion{f}}{\completion{\mathbf{X}}}{\completion{\mathbf{Y}}}$ such that
\[\completion{f}(x)=
\begin{cases}
f(x) & \text{ if } x \in \operatorname{dom}(f),\\
\overline{Y} & \text{ otherwise}.
\end{cases}
\]
We now introduce some major benchmarks in the Weihrauch lattice.
The well-known problem $\function{ \mathsf{LPO} }{2^\mathbb{N}}{\{0,1\}}$ is defined as $ \mathsf{LPO} (p)=1$ iff $(\forall n)(p(n)=0)$. It is convenient to think of $ \mathsf{LPO} $ as the function answering yes or no to questions which are $\Pi_1^{0}$ or $\Sigma_1^{0}$ in the input. Similarly, $ \mathsf{LPO} ^{(n)}$ can be seen as the function answering yes or no to questions which are $\Pi_{n+1}^0$ or $\Sigma_{n+1}^0$ in the input. It is well-known that $\mathsf{lim} \equiv_{\mathrm{sW}} \parallelization{ \mathsf{LPO} }$. Moreover, using \cite[\S 5]{BolWei11}, we obtain that for every $n$, $\mathsf{lim}^{(n)} \equiv_{\mathrm{sW}} \parallelization{ \mathsf{LPO} ^{(n)}}$.
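The finite-stage behaviour of $ \mathsf{LPO} $ can be made concrete: from a finite prefix of the input one can definitively answer $0$ (a nonzero entry has been seen), while the answer $1$ is never certified at any finite stage. A small illustrative Python sketch (ours, not part of the formal development):

```python
def lpo_on_prefix(prefix):
    """Approximate LPO on a finite prefix of some p in 2^N.

    Returns 0 as soon as a nonzero entry appears (this answer is final),
    and None otherwise: the answer 1, i.e. "every entry of p is zero",
    can never be confirmed from finitely much of p.
    """
    for bit in prefix:
        if bit != 0:
            return 0
    return None  # consistent with LPO(p) = 1, but not yet certified
```

This asymmetry is exactly why $ \mathsf{LPO} $ decides $\Sigma_1^0$/$\Pi_1^0$ questions without being computable.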
We define also $\function{\mathsf{WF}}{\mathbf{Tr}}{\{0,1\}}$ as $\mathsf{WF}(T)=1$ iff $T \in \mathcal{WF}$. Analogously to $ \mathsf{LPO} $, we can think of $\mathsf{WF}$ as the problem answering yes or no to questions which are $\Pi_1^{1}$ or $\Sigma_1^{1}$ in the input. In the literature, $\mathsf{WF}$ was introduced under different names: the same notation appears in \cite{leafmanaegement,valentisolda}, while \cite{CompletionOfChoice} uses $\mathsf{WFT}$ and \cite{kihara_marcone_pauly_2020,openRamsey} use $\chi_{\Pi_1^1}$.
We now move our attention to \emph{choice problems}, which have emerged as very significant milestones in the Weihrauch lattice. For a computable metric space $\mathcal{X}$ and a class $\boldsymbol{\Gamma}$ as the ones in \thref{Borelrepspaces}, let $\partialmultifunction{\codedChoice{\boldsymbol{\Gamma}}{}{\mathcal{X}}}{\boldsymbol{\Gamma}(\mathcal{X})}{\mathcal{X}}$ be the problem that, given as input a nonempty set $A \in \boldsymbol{\Gamma}(\mathcal{X})$, outputs a member of $A$. When $\boldsymbol{\Gamma}=\boldsymbol{\Pi}_1^0$ we just write $\codedChoice{}{}{\mathcal{X}}$. The same problem with domain restricted to singletons is denoted by $\codedUChoice{\boldsymbol{\Gamma}}{}{\mathcal{X}}$. It is well-known that for every $n>0$, $(\codedChoice{\boldsymbol{\Pi}_n^0}{}{\mathbb{N}})'\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_{n+1}^0}{}{\mathbb{N}}$.
Using the tree representation of closed sets, $\mathsf{C}_{\Baire}$ can be formulated as the problem of computing a path through some $T \in \mathcal{IF}$; $\mathsf{UC}_{\Baire}$ is the same problem with domain restricted to $\mathcal{UB}$. Notice that both problems are closed under compositional product by \cite[Theorem 7.3]{closedChoice}.
As noticed in \cite{kihara_marcone_pauly_2020} and mentioned in the introduction, $\mathsf{C}_{\Baire}$ and $\mathsf{UC}_{\Baire}$ are among the problems that correspond to $\mathsf{ATR}_0$. We need the following proposition.
\begin{proposition}
\thlabel{UCbaireisparallelization}
$\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}} \equiv_{\mathrm{W}} \parallelization{ \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}}\equiv_{\mathrm{W}}\mathsf{UC}_{\Baire}<_\mathrm{W} \parallelization{\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}}<_\mathrm{W}\mathsf{C}_{\Baire}$.
\end{proposition}
\begin{proof}
The reduction $\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}} \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$ is trivial; by the effective version of the Novikov-Kondo-Addison uniformization theorem \cite[Theorem 4E.4]{Moschovakis}, we obtain $\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}\le_{\mathrm{W}} \codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$. Hence, $\parallelization{\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}} \equiv_{\mathrm{W}} \parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}}$.
By \cite[Theorem 3.11]{kihara_marcone_pauly_2020} $\mathsf{UC}_{\Baire}\equiv_{\mathrm{W}}\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}$ (in \cite{kihara_marcone_pauly_2020} $\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}$ is denoted by $\boldsymbol{\Delta}_1^1\text{-}\mathsf{CA}$). Since $\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}\le_{\mathrm{W}}\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}}$ is trivial, we obtain $\mathsf{UC}_{\Baire} \le_{\mathrm{W}} \parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}}$.
To complete the proof of the equivalence between the first three problems, it suffices to show that $\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}} \le_{\mathrm{W}} \parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}$ and then notice that $\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}$ is parallelizable. We can think of an input for $\codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$ as a sequence $(T^i)_{i \in \mathbb{N}} \in \mathbf{Tr}^\mathbb{N}$ such that exactly one $T^i$ belongs to $\mathcal{WF}$. For every $i$, we compute the pair of trees $(S^i,R^i)$ where $S^i:=\underset{j\leq i}{*}T^j$ and $R^i:=\underset{j> i}{*}T^j$: notice that exactly one of $S^i$ and $R^i$ belongs to $\mathcal{WF}$. We can view the sequence $(S^i,R^i)_{i \in \mathbb{N}}$ as an instance of $\parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}$. Finally, let $n:=\min \big\{m: \parallelization{\codedUChoice{\boldsymbol{\Pi}_1^1}{}{2}}((S^i,R^i)_{i \in \mathbb{N}})(m)=0 \big\}$. Clearly $T^n \in \mathcal{WF}$.
The last two (strict) reductions are \cite[Theorem 4.3]{kihara_marcone_pauly_2020} and \cite[Theorem 3.34]{choiceprinciples}.
\end{proof}
For further reference, we collect here some facts which are implicit in the literature.
\begin{proposition}
\thlabel{Fopcbaire}
If $\boldsymbol{\Gamma} \in \{\boldsymbol{\Sigma},\boldsymbol{\Pi},\boldsymbol{\Delta}\}$ and $\boldsymbol{\Lambda}\in \{\boldsymbol{\Pi},\boldsymbol{\Delta}\}$ then
\[\firstOrderPart{\mathsf{UC}_{\Baire}}\equiv_{\mathrm{W}} \codedUChoice{\boldsymbol{\Gamma}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}}\ustar{(\codedUChoice{\boldsymbol{\Gamma}_1^1}{}{\mathbb{N}})}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Lambda}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}}\ustar{(\codedChoice{\boldsymbol{\Lambda}_1^1}{}{\mathbb{N}})}<_\mathrm{W}\firstOrderPart{\mathsf{C}_{\Baire}}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}.\]
\end{proposition}
\begin{proof}
The equivalence $\firstOrderPart{\mathsf{C}_{\Baire}}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}$ is proved in \cite[Proposition 2.4]{goh_pauly_valenti_2021}; essentially the same proof shows that $\firstOrderPart{\mathsf{UC}_{\Baire}}\equiv_{\mathrm{W}} \codedUChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}$.
If $A\subseteq \mathbb{N}$ is a singleton then $n \in A$ iff $(\forall m\neq n)(m \notin A)$. This implies that $A$ is $\Sigma_1^1$ iff $A$ is $\Pi_1^1$ iff $A$ is $\Delta_1^1$ and this shows that $\codedUChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}} \codedUChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}}\codedUChoice{\boldsymbol{\Delta}_1^1}{}{\mathbb{N}}$. These equivalences, together with \thref{UCbaireisparallelization} and \thref{Summaryfopustar} allow us to derive all the stated equivalent characterizations of $\firstOrderPart{\mathsf{UC}_{\Baire}}$.
By \thref{UCbaireisparallelization}, $\mathsf{UC}_{\Baire} <_\mathrm{W} \parallelization{\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}}$: since $\mathsf{UC}_{\Baire}$ is parallelizable, this implies $\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}} \not\le_{\mathrm{W}} \mathsf{UC}_{\Baire}$ and hence $\firstOrderPart{\mathsf{UC}_{\Baire}} <_\mathrm{W} \firstOrderPart{\mathsf{C}_{\Baire}}$.
\end{proof}
\begin{proposition}
\thlabel{limdoesnotreachucbaire}
For every $n$,
\begin{enumerate}[(i)]
\item $\firstOrderPart{(\mathsf{lim}^{(n)})} \equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_{n+1}^0}{}{\mathbb{N}} \equiv_{\mathrm{W}} \ustar{( \mathsf{LPO} ^{(n)})} <_\mathrm{W} \firstOrderPart{(\mathsf{lim}^{(n+1)})}$;
\item $\codedChoice{\boldsymbol{\Pi}_{n+1}^0}{}{\mathbb{N}} \not\le_{\mathrm{W}} \mathsf{LPO} ^{(n+1)}$;
\item $\firstOrderPart{(\mathsf{lim}^{(n)})} <_\mathrm{W} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}} \equiv_{\mathrm{W}} \firstOrderPart{\mathsf{UC}_{\Baire}}$.
\end{enumerate}
\end{proposition}
\begin{proof}
The equivalences in (i) are from {\cite[Theorem 7.2]{valentisolda}}. It follows from \cite[Theorem 5.10(5)]{valentisolda} that $\parallelization{\ustar{( \mathsf{LPO} ^{(n)})}} \equiv_{\mathrm{W}} \parallelization{ \mathsf{LPO} ^{(n)}} \equiv_{\mathrm{W}} \mathsf{lim}^{(n)}$; since $\mathsf{lim}^{(n)} <_{\mathrm{sW}} \mathsf{lim}^{(n+1)}$ this implies the non-reductions in (i) and (ii).
The equivalence in (iii) is from \thref{Fopcbaire}, while the strictness follows from (i).
\end{proof}
\section{The perfect set theorem in $\mathbb{N}^\mathbb{N}$ and $2^\mathbb{N}$}
\label{perfectsetsgeneral}
\subsection{Perfect sets}
\label{perfectsets}
The following multi-valued function was introduced and studied in \cite{kihara_marcone_pauly_2020}.
\begin{definition}
The multi-valued function
$\partialmultifunction{\PTT[1]}{\mathbf{Tr}}{\mathbf{Tr}}$ has domain $\{T \in \mathbf{Tr}: T \in \mathcal{T}^{>\aleph_0}\}$ and is defined by
$$\PTT[1](T):=\{S\in \mathbf{Tr}: S\subseteq T \land S \text{ is perfect}\}. $$
\end{definition}
We also study $\PTT[1]{\restriction \mathbf{Tr}_2}$, the restriction of $\PTT[1]$ to $\mathbf{Tr}_2$. We now define the same problem for closed sets.
\begin{definition}
Let $\mathcal{X}$ be a computable Polish space. The multi-valued function $\partialmultifunction{\PST[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{X}}}$ has domain $\{A \in \negrepr{\mathcal{X}}: \length{A}>\aleph_0\}$ and is defined as
$$\PST[\mathcal{X}](A):=\{P \in \negrepr{\mathcal{X}}: P\subseteq A \land P \text{ is perfect}\}.$$
\end{definition}
Using the tree representation of closed sets in $\mathbb{N}^\mathbb{N}$, we can think of a name for an input of $\PST[\mathbb{N}^\mathbb{N}]$ as a tree $T \in \mathcal{T}^{>\aleph_0}$, and of a name for a solution of $\PST[\mathbb{N}^\mathbb{N}](\body{T})$ as a tree $S\in\mathbf{Tr}$ such that $\body{S}\subseteq \body{T}$ and $\body{S}$ is perfect. Notice that $\PST[\mathbb{N}^\mathbb{N}](\body{T})$ contains $\PTT[1](T)$, but also includes every tree with perfect body contained in $\body{T}$.
\begin{theorem}
\thlabel{cantor_and_baire_same}
$\PTT[1]{\restriction}\mathbf{Tr}_2 \equiv_{\mathrm{sW}} \PTT[1]$ and $\PST[2^\mathbb{N}]\equiv_{\mathrm{sW}} \PST[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
The reduction $\PTT[1]{\restriction}\mathbf{Tr}_2\le_{\mathrm{sW}} \PTT[1]$ is trivial.
We now prove that $\PST[2^\mathbb{N}] \le_{\mathrm{sW}} \PST[\mathbb{N}^\mathbb{N}]$. Let $T \in \mathbf{Tr}_2$ and let the forward functional be the identity. Let $P$ be a name for an element of $\PST[\mathbb{N}^\mathbb{N}](\body{T})$: even though $P\in\mathbf{Tr}$, notice that $\body{P}$ is perfect and $\body{P}\subseteq 2^\mathbb{N}$. Let $\Psi(P)=\{\sigma \in P: \sigma \in 2^{<\mathbb{N}}\}$ and notice that $\Psi(P)$ is a name for an element of $\PST[2^\mathbb{N}](\body{T})$.
For the other direction, the reduction $\PTT[1]\le_{\mathrm{sW}}\PTT[1]{\restriction}\mathbf{Tr}_2 $ is witnessed by the maps $\translateCantor$ (forward) and $\translateBaire$ (backward) from \thref{Translatetrees}. Let $T\in \mathcal{T}^{>\aleph_0}$ and let $P \in (\PTT[1]\restriction \mathbf{Tr}_2)(\translateCantor(T))$. By \thref{Perfecttreesinbaire}, $\translateBaire(P)$ is a perfect tree. To show that $\translateBaire(P) \subseteq T$, it suffices to prove that $\body{\translateBaire(P)}\subseteq \body{T}$. Let $f \in \body{\translateBaire(P)}$: by \thref{AllPropertiesOfTranslation}(6) we have that $\translateCantor(f) \in \body{P}\subseteq \body{\translateCantor(T)}$ and by \thref{AllPropertiesOfTranslation}(4) we conclude that $f \in \body{T}$.
The proof that $\PST[\mathbb{N}^\mathbb{N}] \le_{\mathrm{sW}} \PST[2^\mathbb{N}]$ is similar.
\end{proof}
\begin{lemma}
\thlabel{UCBaireReducesToPST}
$\mathsf{UC}_{\Baire}<_{\mathrm{sW}}\PST[\mathbb{N}^\mathbb{N}]$ and $\PST[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$.
\end{lemma}
\begin{proof}
Since $\PST[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{sW}} \PST[2^\mathbb{N}]$ (\thref{cantor_and_baire_same}), we prove the lemma with $\PST[2^\mathbb{N}]$ in place of $\PST[\mathbb{N}^\mathbb{N}]$.
To show $\mathsf{UC}_{\Baire}\le_{\mathrm{sW}}\PST[2^\mathbb{N}]$, fix $T \in \mathcal{UB}$ and let $p_0$ be the unique element of $\body{T}$. Let $S:= \translateCantor(\exploded{T})$ and notice that, by definition of $\exploded{\cdot}$ and by \thref{AllPropertiesOfTranslation}(4), all paths in $ \body{S}$ are either eventually zero or of the form $\translateCantor(p_0*q)$ for some $q \in 2^\mathbb{N}$.
Fix a name $P$ for an element of $\PST[2^\mathbb{N}](\body{S})$. We claim that all the paths in $\body{P}$ are of the form $\translateCantor(p_0*q)$ for some $q \in 2^\mathbb{N}$. To prove this, we need to rule out that some eventually zero path belongs to $\body{P}$. Let $r \in 2^\mathbb{N}$ be of the form $\sigma0^\mathbb{N}$, where $\sigma=\str{}$ or $\sigma(\length{\sigma}-1)=1$. Notice that $\translateCantor(\translateBaire(\sigma))=\sigma$. Let
\[k:= \begin{cases}
p_0( {\length{\translateBaire(\sigma)}}/2) & \text{if } \length{\translateBaire(\sigma)} \text{ is even},\\
1 & \text{otherwise,}
\end{cases}\]
and set $m= \length{\sigma}+ k +1$. It suffices to prove that
$$(\forall q \in 2^\mathbb{N})(r[m]\not\sqsubset \translateCantor(p_0*q)),$$
so that all paths in $\body{S}$ which extend $r[m]$ are eventually zero and $S_{r[m]} \in \mathcal{T}^{\leq\aleph_0}$, which implies $r \notin \body{P}$.
Fix $q \in 2^\mathbb{N}$:
\begin{itemize}
\item if $\sigma\not\sqsubset \translateCantor(p_0*q)$ then $r[m]\not\sqsubset \translateCantor(p_0*q)$;
\item if $\sigma\sqsubset \translateCantor(p_0*q)$ then either $\sigma 0^k 1$ or $\sigma 0^{k-1} 1$ (in case $\length{\translateBaire(\sigma)}$ is odd and $q((\length{\translateBaire(\sigma)} -1)/2) =0$) is a prefix of $\translateCantor(p_0*q)$ which is incomparable with $r[m] = \sigma 0^{k+1}$; hence $r[m] \not\sqsubset \translateCantor(p_0*q)$ also in this case.
\end{itemize}
We show how to computably retrieve $p_0$ from ${P}$. To find $p_0(0)$ we search for $n$ such that
$$(\forall \tau \in 2^{n+1})( P_\tau \in \mathcal{IF}_2 \implies \tau = 0^{n}1 ).$$
Indeed, the previous claim implies that the unique $n$ satisfying this condition is $p_0(0)$. Since $\mathcal{IF}_2$ is a $\Pi_1^0$ set (\thref{Complexityresults}(i)), the above condition is $\Sigma_1^0$ and at some finite stage we find $p_0(0)$.
Suppose now that we have computed the first $i$ coordinates of $p_0$, i.e.\ $p_0[i]$. We generalize the previous strategy to compute $p_0(i)$. Let $$A_i:= \{0^{p_0(0)} 1 \xi_0 0^{p_0(1)} 1 \xi_1 \dots \xi_{i-2} 0^{p_0(i-1)} 1 \xi_{i-1} \in P: (\forall j<i) (\xi_j \in \{1,01\})\}
$$
(recall that $\translateCantor(0)=1$ and $\translateCantor(1)=01$). Informally, the $\xi_j$'s come from the interleaving of sequences in $T$ with sequences in $2^{<\mathbb{N}}$. Notice that $A_i$ is finite and nonempty; moreover, there exists $\sigma \in A_i$ which is a prefix of some path in $\body{P}$. We search for $n$ satisfying the $\Sigma_1^0$ property
$$(\forall \sigma \in A_i)(\forall \tau \in 2^{n+1})( P_{\sigma\tau} \in \mathcal{IF}_2 \implies \tau = 0^{n}1 ).$$
As before, the claim implies that the unique $n$ satisfying this condition is $p_0(i)$. The main difference with the case $i=0$ is that different sequences in $A_i$ may be prefixes of paths in $\body{P}$ that come from the interleaving of $p_0$ with different elements of $2^\mathbb{N}$. However, any such $\sigma$ provides the correct $n = p_0(i)$.
This ends the proof of the reduction.\smallskip
To show that $\PST[2^\mathbb{N}]\not\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$, recall from \S \ref{representedspaces} that $\mathsf{lim}*\mathsf{UC}_{\Baire}\equiv_{\mathrm{W}} \mathsf{UC}_{\Baire}$, so it suffices to show that $\mathsf{C}_{\Baire} \le_{\mathrm{W}} \mathsf{lim}*\PST[2^\mathbb{N}]$ (since $\mathsf{C}_{\Baire} \not\le_{\mathrm{W}} \mathsf{UC}_{\Baire}$). From \cite[Proposition 6.3]{kihara_marcone_pauly_2020} and \thref{cantor_and_baire_same}, it follows that $\mathsf{C}_{\Baire}\equiv_{\mathrm{W}} \PTT[1]\equiv_{\mathrm{W}} \PTT[1]\restriction \mathbf{Tr}_2$ and hence it is enough to show that $\PTT[1]\restriction \mathbf{Tr}_2\le_{\mathrm{W}} \mathsf{lim}*\PST[2^\mathbb{N}]$. From \cite{Nobrega2017GameCA}, we know that $\mathsf{lim}$ is equivalent to the function that prunes trees in $\mathbf{Tr}_2$. So let $T\in \mathcal{T}^{>\aleph_0}_2$ be the input of $\PTT[1]\restriction \mathbf{Tr}_2$ and let $P$ be a name for an element of $\PST[2^\mathbb{N}](\body{T})$: pruning $P$ via $\mathsf{lim}$ is enough to obtain a perfect subtree of $T$.
\end{proof}
Recall that trees are represented via their characteristic functions and notice that $s \in 2^{<\mathbb{N}}$ is a prefix of a (name for a) tree iff $\{\tau: s(\tau)=1\}$ is a tree, which is a computable property.
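This check can be made concrete. Restricting for simplicity to trees on $\{0,1\}$ and fixing one length-lexicographic coding of binary strings (both choices are ours; any coding in which prefixes receive smaller codes works the same way), a Python sketch of the prefix test is:

```python
def decode(i):
    """Decode the code i into a binary string, under the
    length-lexicographic coding: eps -> 0, <0> -> 1, <1> -> 2,
    <00> -> 3, and so on."""
    n = 0
    while 2 ** (n + 1) - 1 <= i:
        n += 1
    v = i - (2 ** n - 1)
    return tuple((v >> (n - 1 - k)) & 1 for k in range(n))

def encode(sigma):
    """Inverse of decode; proper prefixes always receive smaller codes."""
    v = 0
    for b in sigma:
        v = 2 * v + b
    return 2 ** len(sigma) - 1 + v

def is_tree_prefix(s):
    """Decide whether the finite 0/1 string s is a prefix of (a name
    for) a tree on {0,1}: the set of strings marked 1 must be closed
    under taking prefixes."""
    for i, bit in enumerate(s):
        if bit == 1:
            sigma = decode(i)
            for m in range(len(sigma)):    # every proper prefix of sigma
                if s[encode(sigma[:m])] == 0:
                    return False
    return True
```

For example, marking $\str{}$, $\str{0}$ and $\str{00}$ yields a tree prefix, while marking $\str{}$ and $\str{00}$ without $\str{0}$ does not.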
\begin{lemma}
\thlabel{fop_pst}
$\firstOrderPart{\PST[\mathbb{N}^\mathbb{N}]} \equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}^1_1}{}{\mathbb{N}}$.
\end{lemma}
\begin{proof}
The fact that $\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}\le_{\mathrm{W}}\firstOrderPart{\PST[\mathbb{N}^\mathbb{N}]}$ follows from the fact that $\firstOrderPart{\mathsf{UC}_{\Baire}}\equiv_{\mathrm{W}}\codedChoice{\boldsymbol{\Pi}^1_1}{}{\mathbb{N}}$ by \thref{Fopcbaire} and $\mathsf{UC}_{\Baire}<_{\mathrm{sW}}\PST[\mathbb{N}^\mathbb{N}]$ by \thref{UCBaireReducesToPST}.
For the opposite direction, suppose that $f$ is a first-order problem such that $f \le_{\mathrm{W}} \PST[\mathbb{N}^\mathbb{N}]$ as witnessed by the computable maps $\Phi$ and $\Psi$. Let $p$ be a name for an input $x$ of $f$. Then $\Phi(p)=T$ where $T \in \mathcal{T}^{>\aleph_0}$. Consider the set
$$\mathbf{Prefixes}:=\{s: s \text{ is a prefix of a tree } \land \Psi(p[\length{s}],s)(0){\downarrow} \land (\forall \tau)(s(\tau)=0\implies T_\tau \in \mathcal{T}^{\leq\aleph_0})\}.$$
Since $\mathcal{T}^{\leq\aleph_0}$ is a $\Pi_1^1$ set (see \thref{Complexityresults}(ii)), $\mathbf{Prefixes}$ is a $\Pi_1^{1,T}$ subset of $\mathbb{N}$.
We prove that $\mathbf{Prefixes}$ is nonempty. Let $q$ be a name for the perfect kernel of $\body{T}$, which belongs to $\PST[\mathbb{N}^\mathbb{N}](\body{T})$. Let $t$ be the least stage such that $\Psi(p[t],q[t])(0)\downarrow$. Moreover, if $q(\tau)=0$ then $ T_\tau \in \mathcal{T}^{\leq\aleph_0}$ (otherwise $\body{T_{\tau}}$ would contain a nonempty perfect subset of $\body{T}$, contradicting the fact that $q$ is a name for the perfect kernel of $\body{T}$). This proves that $q[t] \in \mathbf{Prefixes}$.
Thus, $\mathbf{Prefixes}$ is a valid input for $\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$.
The argument above shows that every $s\in \mathbf{Prefixes}$ is a prefix of a name for the perfect kernel of $\body{T}$, which belongs to $\PST[\mathbb{N}^\mathbb{N}](\body{T})$. Since $f$ is first-order, for any such $s$, $\Psi(p[\length{s}],s)(0) \in f(x)$. This shows that $f \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$.
\end{proof}
Our results about $\PST[2^\mathbb{N}]$ and $\PST[\mathbb{N}^\mathbb{N}]$ are summarized in the following theorem.
\begin{theorem}
\thlabel{pstucbairesummary}
$\mathsf{UC}_{\Baire}<_{\mathrm{sW}} \PST[2^\mathbb{N}] \equiv_{\mathrm{sW}} \PST[\mathbb{N}^\mathbb{N}] <_{\mathrm{sW}} \mathsf{C}_{\Baire}$.
\end{theorem}
\begin{proof}
The first strict reduction and the first equivalence were proven in \thref{UCBaireReducesToPST} and \thref{cantor_and_baire_same} respectively. The last reduction follows from the fact that $\PTT[1]\equiv_{\mathrm{W}} \mathsf{C}_{\Baire}$ (\cite[Proposition 6.3]{kihara_marcone_pauly_2020}) and that a solution for $\PTT[1]$ is also a solution for $\PST[\mathbb{N}^\mathbb{N}]$. Strictness follows by \thref{fop_pst} since, by \thref{Fopcbaire}, $\codedChoice{\boldsymbol{\Pi}^1_1}{}{\mathbb{N}}<_\mathrm{W}\firstOrderPart{\mathsf{C}_{\Baire}}$.
\end{proof}
\subsection{Listing problems}
\label{listingproblems}
We now move our attention to the functions that, given as input a countable closed subset of a computable metric space, output a list of all its elements. There are different possible meanings of the word \lq list\rq, and these correspond to different functions. For Baire and Cantor space, some of these functions were already introduced and studied in \cite{kihara_marcone_pauly_2020}. For trees and closed sets we made a distinction between the perfect tree and the perfect set theorem; on the other hand, if $T \in \mathbf{Tr}$ and $A\in \negrepr{\mathbb{N}^\mathbb{N}}$ are such that $A=\body{T}$, then listing the elements of $\body{T}$ and listing the elements of $A$ are the same problem.
We generalize \cite[Definition 6.1]{kihara_marcone_pauly_2020} from $\mathbb{N}^\mathbb{N}$ to an arbitrary computable metric space.
\begin{definition}
\thlabel{definitionsList}
Let $\mathcal{X}$ be a computable metric space. The two multi-valued functions $\partialmultifunction{\wList[\mathcal{X}]}{\negrepr{\mathcal{X}}}{(2\times \mathcal{X})^\mathbb{N}}$ and $\partialmultifunction{\List[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\mathbb{N} \times (2\times \mathcal{X})^\mathbb{N}}$, with common domain $\{A \in \negrepr{\mathcal{X}}:\length{A}\leq \aleph_0\}$, are defined by
$$\wList[\mathcal{X}](A):=\{(b_i,x_i)_{i \in \mathbb{N}}: A=\{x_i:b_i=1\}\},$$
$$\List[\mathcal{X}](A):=\{(n,(b_i,x_i)_{i \in \mathbb{N}}): A=\{x_i:b_i=1\} \land ((n=0 \land \length{A}=\aleph_0) \lor (n>0 \land \length{A}=n-1)) \}.$$
\end{definition}
\begin{definition}
\thlabel{ellesigmadefinition}
Given $\sigma \in 2^{<\mathbb{N}}$, we define $\ell_\sigma:=\min\{i:\pairing{i,0}\geq \length{\sigma}\}$. Then, for every $i<\ell_\sigma$, we define $\pi_i(\sigma):=\str{\sigma(\pairing{i,j}): \pairing{i,j}<\length{\sigma}}$ where $\length{\pi_i(\sigma)}=\max\{j: \pairing{i,j}<\length{\sigma}\}$. These definitions are related to the notation $\sigma:=\mathsf{dvt}(\tau_0,\dots,\tau_m)$ in \cite[Theorem 4.11]{openRamsey} that was used to denote the prefix of an infinite sequence $f$ obtained by joining countably many infinite sequences $g_i$.
\end{definition}
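To illustrate the combinatorics of \thref{ellesigmadefinition}, the following small sketch computes $\ell_\sigma$ and $\pi_i(\sigma)$. The paper does not fix the pairing $\pairing{i,j}$, so the standard Cantor pairing is assumed here; the code is only an illustration of the definition.

```python
# Sketch of ell_sigma and pi_i(sigma): sigma is a finite string (a list),
# and <i,j> is assumed to be the standard Cantor pairing (the paper leaves
# the pairing unspecified).

def pair(i, j):
    # Cantor pairing <i,j>, monotone in each argument
    return (i + j) * (i + j + 1) // 2 + j

def ell(sigma):
    # ell_sigma = min{ i : <i,0> >= |sigma| }
    i = 0
    while pair(i, 0) < len(sigma):
        i += 1
    return i

def pi(i, sigma):
    # pi_i(sigma) = < sigma(<i,j>) : <i,j> < |sigma| >, the i-th "column"
    out, j = [], 0
    while pair(i, j) < len(sigma):
        out.append(sigma[pair(i, j)])
        j += 1
    return out
```

Since the pairing is monotone in $j$, the indices $\pairing{i,j}$ below $\length{\sigma}$ form an initial segment in $j$, so each $\pi_i(\sigma)$ is a genuine finite string: the $i$-th \lq\lq column\rq\rq\ of $\sigma$.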
\begin{remark}
\thlabel{equivalentcersionslists}
Notice that there is a slight difference between $\List$ (there was no subscript there, because only $\mathbb{N}^\mathbb{N}$ was considered) in \cite[Definition 6.1]{kihara_marcone_pauly_2020} and our definition of $\List[\mathbb{N}^\mathbb{N}]$. Indeed, $\partialmultifunction{\List}{\negrepr{\mathbb{N}^\mathbb{N}}}{(\mathbb{N}^\mathbb{N})^\mathbb{N}}$ is defined by stipulating that $(n,(p_i)_{i \in \mathbb{N}}) \in \List(A)$ iff either $n=0$, $A=\{p_i :i \in \mathbb{N}\}$ and $p_i \neq p_j$ for every $i \neq j$, or else $n>0$, $\length{A}=n-1$ and $A=\{p_i :i< n-1\}$.
In particular, the output of $\List$ is always injective: this version is apparently stronger than $\List[\mathbb{N}^\mathbb{N}]$ because the latter allows repeating elements of the form $(1,p_i)$ for $p_i \in A$. We briefly discuss why $\List \equiv_{\mathrm{sW}} \List[\mathbb{N}^\mathbb{N}]$.
We claim that given $(n,(b_i,p_i)_{i \in \mathbb{N}}) \in \List[\mathbb{N}^\mathbb{N}](A)$ we can compute some $I$ such that $(n,I) \in \List(A)$. Let $L := (b_ip_i)_{i \in \mathbb{N}}$ and recall that $(b_ip_i)^-=p_i$. At any finite stage $s$ we inspect the finite prefix $L[s]$ of $L$; notice that $\pi_i(L[s])^-\sqsubset p_i$. We start listing $\pi_i(L[s])^-$ in $I$ when we see that $\pi_i(L[s])(0)=1$ and $\pi_i(L[s])^-\not\sqsubseteq \pi_j(L[s])^-$ for every $j<i$ such that $\pi_j(L[s])(0)=1$.
If $n>0$ (i.e.\ $A$ is finite), after we have listed $n-1$ elements we can add to $I$ any element of $\mathbb{N}^\mathbb{N}$ having first digit $0$.
If $n=0$ (i.e.\ $A$ is infinite) then we always find new elements to list in $I$ and we continue forever.
Since we are listing each $p_i$ only if $p_i \neq p_j$ for every $j<i$ with $b_j=1$, we have that $I$ lists injectively all the elements of $A$.
\end{remark}
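The finite-stage commitment test of \thref{equivalentcersionslists} can be made concrete. The sketch below (an illustration under the assumption that the current prefixes are given as finite lists, not the formal construction) returns the indices $i$ committed to the injective list $I$ at a stage: those with $b_i=1$ whose prefix is not an initial segment of an earlier flagged prefix.

```python
# Commitment test from the remark: at a finite stage we are given flags b
# (b[i] approximates b_i) and finite prefixes pre (pre[i] approximates p_i);
# index i is committed to the injective list I when b[i] = 1 and pre[i] is
# not an initial segment of pre[j] for any j < i with b[j] = 1.

def is_prefix(u, v):
    # u is an initial segment of v
    return len(u) <= len(v) and list(v[:len(u)]) == list(u)

def committed(b, pre):
    out = []
    for i in range(len(pre)):
        if b[i] != 1:
            continue
        if any(b[j] == 1 and is_prefix(pre[i], pre[j]) for j in range(i)):
            continue  # p_i may still coincide with some earlier p_j
        out.append(i)
    return out
```

For a monotone pairing, the prefixes of earlier columns are at least as long, so a committed prefix is actually incomparable with each earlier flagged one: this is what guarantees that the resulting list is injective.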
We now focus on listing problems in Cantor space, comparing $\wList[2^\mathbb{N}]$ and $\List[2^\mathbb{N}]$ with the analogous problems in Baire space and with the functions $\List[2^\mathbb{N},<\omega]$ and $\wList[2^\mathbb{N}, \leq \omega]$ considered in \cite[\S 6.1]{kihara_marcone_pauly_2020}.
\begin{definition} The multi-valued functions $\partialmultifunction{\List[2^\mathbb{N},<\omega]}{\negrepr{2^\mathbb{N}}}{(2^\mathbb{N})^{<\mathbb{N}}}$ and $\partialmultifunction{\wList[2^\mathbb{N}, \leq \omega]}{\negrepr{2^\mathbb{N}}}{(2^\mathbb{N})^\mathbb{N}}$ have domains $\{A \in \negrepr{2^\mathbb{N}}:\length{A}<\aleph_0\}$ and $\{A \in \negrepr{2^\mathbb{N}}:\length{A}\leq \aleph_0\land A\neq \emptyset\}$ respectively and are defined by
\begin{align*}
\List[2^\mathbb{N},<\omega](A) & :=\{(p_i)_{i <n}: A=\{p_i:i<n\}\};\\
\wList[2^\mathbb{N}, \leq \omega](A) & :=\{(p_i)_{i \in \mathbb{N}}:A=\{p_i:i \in \mathbb{N}\}\}.
\end{align*}
\end{definition}
The following theorem establishes the relations between the listing problems defined so far: the results stated in this theorem are collected with other ones in Figure \ref{Figureslist}.
\begin{theorem}
\thlabel{Summary_list_cantor}
$\List[2^\mathbb{N},< \omega] ~|_{\mathrm{W}~} \wList[2^\mathbb{N}, \leq \omega] \equiv_{\mathrm{W}}\wList[2^\mathbb{N}]$ and $\wList[2^\mathbb{N}, \leq \omega],\List[2^\mathbb{N}, < \omega]<_\mathrm{W} \List[2^\mathbb{N}] <_\mathrm{W} \mathsf{UC}_{\Baire} \equiv_{\mathrm{W}} \wList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \List[\mathbb{N}^\mathbb{N}]$.
Since all the problems involved are cylinders, all the reductions mentioned above are strong.
\end{theorem}
\begin{proof}
The fact that $\List[2^\mathbb{N},< \omega] ~|_{\mathrm{W}~} \wList[2^\mathbb{N}, \leq \omega]$ is \cite[Corollary 6.15]{kihara_marcone_pauly_2020}.
To prove that $\wList[2^\mathbb{N}, \leq \omega] \le_{\mathrm{W}}\wList[2^\mathbb{N}]$, let $A \in \negrepr{2^\mathbb{N}}$ be countable and nonempty and let $(b_i,p_i)_{i \in \mathbb{N}} \in \wList[2^\mathbb{N}](A)$. Then the elements $p_i$ with $b_i=1$ can be computably rearranged, possibly with repetitions, into an element of $\wList[2^\mathbb{N}, \leq \omega](A)$.
For the opposite direction, let $A \in \negrepr{2^\mathbb{N}}$ be countable and possibly empty. Define $A':=\{0^\mathbb{N}\} \cup \{1 x: x \in A\}$: $A'$ is still countable and nonempty, i.e.\ a suitable input for $\wList[2^\mathbb{N}, \leq \omega]$. Let $(p_i)_{i \in \mathbb{N}}\in \wList[2^\mathbb{N}, \leq \omega](A')$: then $(p_i(0),p_i^-)_{i \in \mathbb{N}} \in \wList[2^\mathbb{N}](A)$.
The reductions $\wList[2^\mathbb{N}] \le_{\mathrm{W}} \List[2^\mathbb{N}]$ and $\List[2^\mathbb{N},<\omega]\le_{\mathrm{W}} \List[2^\mathbb{N}]$ are immediate. Strictness follows from the incomparability of $\wList[2^\mathbb{N}]$ and $\List[2^\mathbb{N}, < \omega]$.
By \thref{equivalentcersionslists} and \cite[Theorem 6.4]{kihara_marcone_pauly_2020} we obtain that $\mathsf{UC}_{\Baire} \equiv_{\mathrm{W}} \wList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \List[\mathbb{N}^\mathbb{N}]$.
The reduction $\List[2^\mathbb{N}]\le_{\mathrm{W}} \List[\mathbb{N}^\mathbb{N}]$ is obvious.
To prove that $\List[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}} \List[2^\mathbb{N}]$ we recall that by \thref{limdoesnotreachucbaire}(i) and (iii) $\codedChoice{\boldsymbol{\Pi}_3^0}{}{\mathbb{N}}<_\mathrm{W} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}} \firstOrderPart{\mathsf{UC}_{\Baire}}$. It thus suffices to show that $\firstOrderPart{\List[2^\mathbb{N}]}\le_{\mathrm{W}}\codedChoice{\boldsymbol{\Pi}_3^0}{}{\mathbb{N}}$.
Let $f$ be a first-order function and suppose that $f\le_{\mathrm{W}} \List[2^\mathbb{N}]$ as witnessed by the maps $\Phi$ and $\Psi$. Let $p$ be a name for an input $x$ of $f$. Then $\Phi(p)=T$ where $T\in \mathcal{T}^{\leq\aleph_0}_2$.
Let $\varphi(n,T)$ be the formula $(\exists \sigma_0,\dots,\sigma_{n-1})(\forall i \neq j< n)(\sigma_i ~|~ \sigma_j \land T_{\sigma_i} \in \mathcal{IF}_2)$. Notice that $\mathcal{IF}_2$ is a $\Pi_1^0$ set (see \thref{Complexityresults}(i)) and hence $\varphi$ is $\Sigma_2^0$.
Let $\mathbf{Prefixes}$ be the set of all $(n,(\tau,\sigma)) \in \mathbb{N} \times 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ such that
\begin{itemize}
\item $\length{\tau}=\ell_\sigma$;
\item $\Psi\big(p[\sigma],\pairing{n,(\tau(i),\pi_i(\sigma))_{i<\ell_\sigma}}\big)(0){\downarrow}$;
\item $(\forall i < \ell_\sigma)(\tau(i)=1 \implies T_{\pi_i(\sigma)} \in \mathcal{IF}_2)$;
\item $(n=0 \land (\forall k)(\varphi(k,T))) \lor (n>0 \land \varphi(n-1,T) \land \lnot \varphi(n,T))$.
\end{itemize}
Elements in $\mathbb{N} \times 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ can be coded as natural numbers, hence $\mathbf{Prefixes}$ is a $\Pi_3^{0,T}$ subset of $\mathbb{N}$.
We claim that $\mathbf{Prefixes} \neq \emptyset$, so that $\mathbf{Prefixes}$ is a valid input for $\codedChoice{\boldsymbol{\Pi}_3^0}{}{\mathbb{N}}$.
Let $(n,L)$ be an element of $\List[2^\mathbb{N}](\body{T})$ and let $s$ be the least stage such that $\Psi\big(p[s],\pairing{n,L[s]}\big)(0)\downarrow$. Then $n=0$ implies that $\length{\body{T}}=\aleph_0$, while if $n>0$ then $\length{\body{T}}=n-1$. Furthermore, $L[s]$ is of the form $(\tau(i),\pi_i(\sigma))_{i<\ell_\sigma}$ for some $(\tau, \sigma) \in 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ such that $\length{\tau}=\ell_\sigma$. It is immediate that if $\tau(i)=1$ then $\pi_i(\sigma)$ is an initial segment of a path in $T$. This implies that $(n,(\tau, \sigma)) \in \mathbf{Prefixes}$.
The same argument shows that every $(n,(\tau, \sigma)) \in \mathbf{Prefixes}$ yields a prefix of a name for an element of $\List[2^\mathbb{N}](\body{T})$. Since $f$ is first-order, $\Psi\big(p[\sigma],\pairing{n,(\tau(i),\pi_i(\sigma))_{i<\ell_\sigma}}\big)(0) \in f(x)$. This shows that $f \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_3^0}{}{\mathbb{N}}$.
\end{proof}
\begin{lemma}
\thlabel{wsclistcantorparallelizable}
$\wList[2^\mathbb{N}]$ is parallelizable.
\end{lemma}
\begin{proof}
To prove the claim it suffices to show that $\parallelization{\wList[2^\mathbb{N}]}\le_{\mathrm{W}} \wList[2^\mathbb{N}]$. Given $(T^n)_{n \in \mathbb{N}}$, an input for $\parallelization{\wList[2^\mathbb{N}]}$, i.e.\ a sequence of binary trees with countable body, compute $T:=\binarydisjointunion{n \in \mathbb{N}}{T^n}$ and notice that $T \in \mathbf{Tr}_2$ (see \thref{Disjoint_union}). Given $(b_i,p_i)_{i \in \mathbb{N}} \in \wList[2^\mathbb{N}](\body{T})$, it is straightforward to check that from the pairs $(b_i,p_i)$ with $b_i=1$ and $0^n1\sqsubset p_i$ we can uniformly compute an element of $\wList[2^\mathbb{N}](\body{T^n})$.
\end{proof}
We conclude this section characterizing the first-order part of $\wList[2^\mathbb{N}]$.
\begin{lemma}
\thlabel{fop_wlist}
$\firstOrderPart{\wList[2^\mathbb{N}]}\equiv_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}$.
\end{lemma}
\begin{proof}
We first show that $\firstOrderPart{\wList[2^\mathbb{N}]}\le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}$. Let $f$ be a first-order function such that $f\le_{\mathrm{W}} \wList[2^\mathbb{N}]$ as witnessed by the maps $\Phi$ and $\Psi$. Let $p$ be a name for an input $x$ of $f$. Then $\Phi(p)=T$ where $T\in \mathcal{T}^{\leq\aleph_0}_2$. Let $\mathbf{Prefixes}$ be the set of all $(\tau,\sigma) \in 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ such that
\begin{itemize}
\item $\length{\tau}=\ell_\sigma$;
\item $ \Psi\big(p[\sigma],\pairing{\tau(i),\pi_i(\sigma)}_{i<\ell_\sigma}\big)(0){\downarrow}$ in $\length{\sigma}$ steps;
\item $(\forall i<\ell_\sigma)(\tau(i)=1 \implies T_{\pi_i(\sigma)} \in \mathcal{IF}_2)$.
\end{itemize}
Elements in $2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ can be coded as natural numbers and since $\mathcal{IF}_2$ is a $\Pi_1^{0}$ set (\thref{Complexityresults}(i)), $\mathbf{Prefixes}$ is a $\Pi_1^{0,T}$ subset of $\mathbb{N}$.
We claim that $\mathbf{Prefixes} \neq \emptyset$, so that $\mathbf{Prefixes}$ is a valid input for $\codedChoice{}{}{\mathbb{N}}$.
Let $S$ be an element of $\wList[2^\mathbb{N}](\body{T})$ and let $n$ be the least stage such that $\Psi(p[n],S[n])(0)\downarrow$. Then $S[n]$ is of the form $\pairing{\tau(i),\pi_i(\sigma)}_{i<\ell_\sigma}$ for some $(\tau, \sigma) \in 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ such that $\length{\tau}=\ell_\sigma$. It is immediate that if $\tau(i)=1$ then $\pi_i(\sigma)$ is the initial segment of a path in $T$. This implies that $ (\tau, \sigma) \in \mathbf{Prefixes}$.
The same argument shows that every $(\tau,\sigma) \in \mathbf{Prefixes}$ yields a prefix of a name for an element of $\wList[2^\mathbb{N}](\body{T})$. Since $f$ is first-order, $\Psi\big(p[\sigma],\pairing{\tau(i),\pi_i(\sigma)}_{i<\ell_\sigma}\big)(0) \in f(x)$. This shows that $f \le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}$.
We now show that $\codedChoice{}{}{\mathbb{N}}\le_{\mathrm{W}} \wList[2^\mathbb{N}]$. Let $A \in \negrepr{\mathbb{N}}$ be nonempty, and let $A^c[s]$ denote the enumeration of the complement of $A$ up to stage $s$. We compute the tree
$$T:= \{\str{}\} \cup \{0^n10^{s}: n \notin A^c[s]\}.$$
Notice that for every $n$, $n \in A$ iff $0^n10^\mathbb{N} \in \body{T}$. Given $(b_i,p_i)_{i \in \mathbb{N}} \in \wList[2^\mathbb{N}](\body{T})$, we computably search for some $i$ such that $b_i=1$ and $0^n1\sqsubset p_i$ for some $n \in \mathbb{N}$ (such an $i$ exists because $A$ is nonempty). By construction, $n \in \codedChoice{}{}{\mathbb{N}}(A)$.
\end{proof}
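The tree used in the reduction $\codedChoice{}{}{\mathbb{N}}\le_{\mathrm{W}} \wList[2^\mathbb{N}]$ admits a simple finite-stage membership test. The sketch below checks membership in the downward closure of $\{0^n10^s: n \notin A^c[s]\}$; the stagewise enumeration of the complement used in the test is a toy assumption for illustration, not part of the proof.

```python
# Finite-stage membership test for the tree T of the reduction above:
# T is the downward closure of { 0^n 1 0^s : n not in A^c[s] }, where
# complement_at(s) = A^c[s] is the finite part of the complement of A
# enumerated by stage s (monotone in s).

def in_tree(sigma, complement_at):
    if "1" not in sigma:
        return True  # a string 0^k, kept as a prefix of candidate branches
    n = sigma.index("1")
    rest = sigma[n + 1:]
    if "1" in rest:
        return False  # not of the shape 0^n 1 0^s
    s = len(rest)
    # 0^n 1 0^s survives as long as n has not entered the complement by stage s
    return n not in complement_at(s)
```

For instance, with $A=\{2,5\}$ and any monotone enumeration of the complement, the node $0^210^s$ stays in the tree for every $s$, so $0^210^\mathbb{N}$ is a path coding $2 \in A$, while every branch $0^n10^\mathbb{N}$ with $n \notin A$ dies at some finite stage.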
\section{Functions arising from the Cantor-Bendixson Theorem}
\label{cantorbendixson}
\subsection{Perfect kernels}
We now move to the study of functions related to the perfect kernel.
\begin{definition}
\thlabel{pkdefinition}
Let $\function{\PK}{\mathbf{Tr}}{\mathbf{Tr}}$ be the total single-valued function defined as $\PK(T):= S$ where $S$ is the perfect kernel of $T$. We denote by $\PK{\restriction}\mathbf{Tr}_2$ the restriction of $\PK$ to $\mathbf{Tr}_2$.
Similarly, for a computable Polish space $\mathcal{X}$, let $\function{\PK[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{X}}}$ be the total single-valued function defined as $\PK[\mathcal{X}](A):= P$ where $P$ is the perfect kernel of $A$.
\end{definition}
Notice that $\PK$ was already introduced by Hirst in \cite{leafmanaegement}, where he also proved the following theorem.
\begin{theorem}
\thlabel{Chipistrongpktree}
$\parallelization{\mathsf{WF}}\equiv_{\mathrm{sW}} \PK$.
\end{theorem}
The following proposition summarizes some well-known facts about the relationship between $\mathsf{C}_{\Baire}$ and (the parallelization of) $\mathsf{WF}$.
\begin{proposition}
\thlabel{cbairepica}
$\mathsf{WF}\not\le_{\mathrm{W}} \mathsf{C}_{\Baire}$ and $\mathsf{C}_{\Baire}<_{\mathrm{sW}} \parallelization{\mathsf{WF}}$.
\end{proposition}
\begin{proof}
The fact that $\mathsf{WF}\not\le_{\mathrm{W}} \mathsf{C}_{\Baire}$ was already noticed in \cite[page 1033]{kihara_marcone_pauly_2020}.
To show that $\mathsf{C}_{\Baire}\le_{\mathrm{W}} \parallelization{\mathsf{WF}}$ it suffices to notice that $\PTT[1]\equiv_{\mathrm{W}} \mathsf{C}_{\Baire}$ (\cite[Proposition 6.3]{kihara_marcone_pauly_2020}) and $\PK$ clearly computes $\PTT[1]$. Strictness of the reduction is immediate by the first part of this proposition and the fact that $\mathsf{C}_{\Baire}$ is parallelizable.
\end{proof}
Let $\function{\mathsf{J}}{\mathbb{N}^\mathbb{N}}{\mathbb{N}^\mathbb{N}}$, $p \mapsto p'$ denote the Turing jump operator. As $\mathsf{lim} \equiv_{\mathrm{sW}} \mathsf{J}$ (\cite[Lemma 8.9]{closedChoice}), $\parallelization{\mathsf{WF}}$ is strongly Weihrauch equivalent to the function computing the \emph{hyperjump} of a set.
The hyperjump of $A \subseteq \mathbb{N}$ can be defined, following \cite[Definition 4.12]{Rogers}, as
\[ \mathsf{HJ}(A):=\{z:\varphi_z^A\text{ is the characteristic function of a well-founded tree}\}.\]
The well-known fact that $\mathsf{HJ}(A)$ is a $\Pi_1^{1,A}$-complete subset of the natural numbers allows us to prove the following proposition.
\begin{proposition}
\thlabel{Wfcompositionalproduct}
$\mathsf{lim}*\parallelization{\mathsf{WF}} \not\le_{\mathrm{W}} \parallelization{\mathsf{WF}}$ and hence $\parallelization{\mathsf{WF}}$ is not closed under compositional product.
\end{proposition}
\begin{proof}
Towards a contradiction, suppose that $\mathsf{lim}*\parallelization{\mathsf{WF}} \le_{\mathrm{W}} \parallelization{\mathsf{WF}}$. By the definition of compositional product (see \S \ref{representedspaces}) and the facts that $\mathsf{lim} \equiv_{\mathrm{sW}} \mathsf{J}$ and $\mathsf{J} \circ \parallelization{\mathsf{WF}}$ is defined, let $\Phi$ and $\Psi$ witness $\mathsf{J}\circ \parallelization{\mathsf{WF}} \le_{\mathrm{W}} \parallelization{\mathsf{WF}}$.
Let $(T^i)_{i \in \mathbb{N}}$ be a computable list of all computable elements of $\mathbf{Tr}$ and notice that $\parallelization{\mathsf{WF}}((T^i)_{i \in \mathbb{N}}) \equiv_T \mathsf{HJ}(\emptyset)$. Then, $\Phi((T^i)_{i \in \mathbb{N}})$ is a computable list of trees and $\parallelization{\mathsf{WF}}(\Phi((T^i)_{i \in \mathbb{N}}))$ is Turing reducible to $\mathsf{HJ}(\emptyset) \equiv_T \mathsf{HJ}({\Phi((T^i)_{i \in \mathbb{N}})})$. Therefore, $\Psi\big((T^i)_{i \in \mathbb{N}}, \parallelization{\mathsf{WF}}(\Phi((T^i)_{i \in \mathbb{N}}))\big) \leq_T \mathsf{HJ}(\emptyset)$ as well. On the other hand, $(\mathsf{J}\circ \parallelization{\mathsf{WF}}) ((T^i)_{i \in \mathbb{N}})$ computes the Turing jump of $\mathsf{HJ}(\emptyset)$, which is not Turing reducible to $\mathsf{HJ}(\emptyset)$.
\end{proof}
As we did when dealing with $\PST[\mathbb{N}^\mathbb{N}]$ and $\PST[2^\mathbb{N}]$, to study $\PK[\mathbb{N}^\mathbb{N}]$ and $\PK[2^\mathbb{N}]$ we use the tree representation of $\negrepr{\mathbb{N}^\mathbb{N}}$ and $\negrepr{2^\mathbb{N}}$. The following is the analogue of \thref{cantor_and_baire_same}.
\begin{proposition}
\thlabel{Cantor_and_baire_same2}
$\PK{\restriction}\mathbf{Tr}_2\equiv_{\mathrm{sW}}\PK$ and $\PK[2^\mathbb{N}]\equiv_{\mathrm{sW}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{proposition}
\begin{proof}
We follow the pattern of the proof of \thref{cantor_and_baire_same}.
$\PK{\restriction}\mathbf{Tr}_2\le_{\mathrm{sW}}\PK$ is trivial and $\PK[2^\mathbb{N}] \le_{\mathrm{sW}} \PK[\mathbb{N}^\mathbb{N}]$ is witnessed by the same functionals as in the proof of $\PST[2^\mathbb{N}]\le_{\mathrm{sW}}\PST[\mathbb{N}^\mathbb{N}]$ in \thref{cantor_and_baire_same}. Given $T \in \mathbf{Tr}_2$ and a name $P$ for $\PK[\mathbb{N}^\mathbb{N}](\body{T})$, we get $\body{\Psi(P)}=\body{P}$. Hence, $\Psi(P)$ is a name for $\PK[2^\mathbb{N}](T)$.
For the opposite directions, we only deal with $\PK \le_{\mathrm{sW}} \PK{\restriction}\mathbf{Tr}_2$, as the proof of $\PK[\mathbb{N}^\mathbb{N}] \le_{\mathrm{sW}} \PK[2^\mathbb{N}]$ follows the same pattern. Again, the reduction is witnessed by the same functionals as in the analogous proof in \thref{cantor_and_baire_same}. Fix $T\in \mathbf{Tr}$ and let $P:= \PK(\translateCantor(T))$. As before, set $\Psi(P):=\translateBaire(P)$. To prove the reduction, it suffices to show that $\length{\body{T}\setminus \body{\translateBaire(P)}}\leq \aleph_0$. We claim that $\body{T}\setminus \body{\translateBaire(P)}\subseteq \{\translateBaire(q): q \in \body{\translateCantor(T)}\setminus \body{P}\land (\exists^\infty i)(q(i)=1)\}$, which completes the proof as the set on the right-hand side is countable. Indeed, if $p \in \body{T}\setminus \body{\translateBaire(P)}$, then by \thref{AllPropertiesOfTranslation}(4) and (6) we have that $\translateCantor(p) \in \body{\translateCantor(T)} \setminus \body{P}$. Moreover, $q:=\translateCantor(p)$ has infinitely many ones and, by \thref{AllPropertiesOfTranslation}(2), $p=\translateBaire(q)$.
\end{proof}
\begin{proposition}
\thlabel{Pkbaire_parallelizable}
$\PK[2^\mathbb{N}]$ and $\PK[\mathbb{N}^\mathbb{N}]$ are (strongly) parallelizable.
\end{proposition}
\begin{proof}
To prove the statement, by \thref{Cantor_and_baire_same2}, it is enough to show that $\parallelization{\PK[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{sW}}\PK[\mathbb{N}^\mathbb{N}]$. Given $(T^i)_{i \in \mathbb{N}}$ an input for $\parallelization{\PK[\mathbb{N}^\mathbb{N}]}$, let $P$ be a name for $\PK[\mathbb{N}^\mathbb{N}](\body{\disjointunion{i \in \mathbb{N}}{T^i}})$. By the definitions of perfect kernel and disjoint union of trees we have that for every $i$, $\{\sigma : i^\smallfrown \sigma \in P\}$ is a name for $\PK[\mathbb{N}^\mathbb{N}](\body{T^i})$.
\end{proof}
\begin{definition}
Let $\mathbb{S}$ be the Sierpi\'nski space, which is the space $\{0_{\mathbb{S}},1_{\mathbb{S}}\}$ with the representation
\[\repmap{\mathbb{S}}(p):=
\begin{cases}
1_{\mathbb{S}} & \text{if } (\exists i)(p(i)\neq 0),\\
0_{\mathbb{S}} & \text{if } p=0^\mathbb{N}.
\end{cases}\]
We define the function $\function{\mathsf{WF}_{\sierpinski}}{\mathbf{Tr}}{\mathbb{S}}$ as
\[\mathsf{WF}_{\sierpinski}(T):=
\begin{cases}
1_{\mathbb{S}} &\text{if } T \in \mathcal{WF},\\
0_{\mathbb{S}} &\text{if } T \in \mathcal{IF}.
\end{cases}
\]
\end{definition}
The next proposition shows that the main functions we consider in this section are cylinders, which implies that most reductions we obtain in this section are strong.
\begin{proposition}
$\parallelization{\mathsf{WF}_{\sierpinski}}$, $\parallelization{\mathsf{WF}}$, $\PK$, $\PK{\restriction}\mathbf{Tr}_2$, $\PK[2^\mathbb{N}]$ and $\PK[\mathbb{N}^\mathbb{N}]$ are cylinders.
\end{proposition}
\begin{proof}
All six functions are parallelizable (this is either obvious or follows from \thref{Pkbaire_parallelizable,Chipistrongpktree}) and hence it is enough to show that $\operatorname{id}$ strongly Weihrauch reduces to each of them. As $\parallelization{\mathsf{WF}_{\sierpinski}} \le_{\mathrm{sW}} \parallelization{\mathsf{WF}} \equiv_{\mathrm{sW}} \PK \equiv_{\mathrm{sW}} \PK{\restriction}\mathbf{Tr}_2$ (\thref{Cantor_and_baire_same2,Chipistrongpktree}) and $\PK[2^\mathbb{N}] \equiv_{\mathrm{sW}} \PK[\mathbb{N}^\mathbb{N}]$ (\thref{Cantor_and_baire_same2}), it suffices to show that $\operatorname{id} \le_{\mathrm{sW}} \parallelization{\mathsf{WF}_{\sierpinski}}$ and $\operatorname{id} \le_{\mathrm{sW}} \PK[2^\mathbb{N}]$.
For the first reduction let $p$ be an input for $\operatorname{id}$. For any $i,j \in \mathbb{N}$ let
\[
T^{\pairing{i,j}}:=\begin{cases}
\emptyset & \text{if } p(i)=j,\\
2^{<\mathbb{N}} & \text{if } p(i)\neq j.
\end{cases}
\]
Let $\parallelization{\mathsf{WF}_{\sierpinski}}((T^{\pairing{i,j}})_{i,j \in \mathbb{N}})=(a_{\pairing{i,j}})_{i,j \in \mathbb{N}}$. To compute $p(i)$ we search for the unique $j$ such that $a_{\pairing{i,j}}=1_\mathbb{S}$ (recall that the set of names for $1_\mathbb{S}$ is $\boldsymbol{\Sigma}_1^0$).
For the second reduction, recall that by \thref{UCBaireReducesToPST}, $\mathsf{UC}_{\Baire}\le_{\mathrm{sW}}\PST[2^\mathbb{N}]$, clearly $\PST[2^\mathbb{N}]\le_{\mathrm{sW}}\PK[2^\mathbb{N}]$ and $\operatorname{id} \le_{\mathrm{sW}}\mathsf{UC}_{\Baire}$.
\end{proof}
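The first reduction in the proof above is a simple search, which the following toy sketch simulates; the well-foundedness answers are computed directly from $p$ here (an assumption for illustration only), whereas in the actual reduction they are provided by the oracle $\parallelization{\mathsf{WF}_{\sierpinski}}$.

```python
# Toy illustration of id <= parallelization of WF_S: from p we build the
# trees T^{<i,j>} (empty when p(i) = j, the full binary tree otherwise)
# and read p(i) back off the well-foundedness answers.

def make_trees(p_prefix):
    # tree(i, j) is the membership predicate of T^{<i,j>} on binary strings
    def tree(i, j):
        if p_prefix[i] == j:
            return lambda sigma: False                    # the empty tree
        return lambda sigma: set(sigma) <= {"0", "1"}     # all of 2^{<N}
    return tree

def recover(p_prefix):
    tree = make_trees(p_prefix)
    # WF_S answers 1_S exactly on the empty tree among these two options:
    # the empty tree is well-founded, the full binary tree is not
    def wf(i, j):
        return not tree(i, j)("")   # empty tree iff the root is absent
    return [next(j for j in range(10**6) if wf(i, j))
            for i in range(len(p_prefix))]
```

The search for the unique $j$ with answer $1_\mathbb{S}$ always terminates because for each $i$ exactly one of the trees $T^{\pairing{i,j}}$ is empty, namely the one with $j=p(i)$.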
We now give a useful characterization of $\PK[\mathbb{N}^\mathbb{N}]$.
\begin{theorem}
\thlabel{Pkcantor_idPiSigma}
$\PK[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}}$.
\end{theorem}
\begin{proof}
Recall that by \thref{Cantor_and_baire_same2}, $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[2^\mathbb{N}]$ so that it suffices to show that $\PK[2^\mathbb{N}] \equiv_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}}$.
Let $T\in \mathbf{Tr}_2$ be a name for an input of $\PK[2^\mathbb{N}]$. Notice that $\{\sigma: T_\sigma \in \mathcal{T}^{\leq\aleph_0}_2\}$ is $\Pi_1^{1,T}$ (see \thref{Complexityresults}(ii)) and hence, using \thref{Complexityresults}(i), we can compute from $T$ a sequence $(S(\sigma))_{\sigma \in 2^{<\mathbb{N}}} \in \mathbf{Tr}^\mathbb{N}$ such that $S(\sigma) \in \mathcal{WF}$ iff $T_\sigma \in \mathcal{T}^{\leq\aleph_0}_2$. Let $A:=\{p \in 2^\mathbb{N}: (\forall n)(\mathsf{WF}_{\sierpinski}(S(p[n]))=0_{\mathbb{S}})\}$: since the set of names for $0_\mathbb{S}$ is $\boldsymbol{\Pi}_1^0$, $A \in \negrepr{2^\mathbb{N}}$ and we can compute $U \in \mathbf{Tr}_2$ with $\body{U}=A$. Notice that for any $\tau \in U$, $\tau$ is a prefix of an element of $\body{U}$ iff $T_\tau \in \mathcal{T}^{>\aleph_0}$. Therefore, $U$ is a name for $\PK[2^\mathbb{N}](\body{T})$.
To show that $\parallelization{\mathsf{WF}_{\sierpinski}} \le_{\mathrm{W}} \PK[2^\mathbb{N}]$, as $\PK[2^\mathbb{N}]$ is parallelizable it suffices to prove that $\mathsf{WF}_{\sierpinski} \le_{\mathrm{W}} \PK[2^\mathbb{N}]$. Let $T \in \mathbf{Tr}$ be an input for $\mathsf{WF}_{\sierpinski}$ and notice that $T \in \mathcal{WF}$ iff $\exploded{T} \in \mathcal{WF}$ iff $\translateCantor(\exploded{T}) \in \mathcal{T}^{\leq\aleph_0}$. Hence, if $S$ is a name for $ \PK[2^\mathbb{N}](\body{\translateCantor(\exploded{T})})$, $T \in \mathcal{WF}$ iff $S \in \mathcal{WF}_2$.
Since $\mathcal{WF}_2$ is a $\Sigma_1^0$ set (see \thref{Complexityresults}(i)), given $S$ we can uniformly compute a name for $\mathsf{WF}_{\sierpinski}(T)$.
\end{proof}
\begin{proposition}
\thlabel{fop_pi11}
$\firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]} \equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}<_\mathrm{W} \firstOrderPart{\mathsf{C}_{\Baire}}\equiv_{\mathrm{W}}\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}<_\mathrm{W}\firstOrderPart{\parallelization{\mathsf{WF}}}\equiv_{\mathrm{W}}\ustar{\mathsf{WF}}$.
\end{proposition}
\begin{proof}
Since clearly $\PST[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\PK[\mathbb{N}^\mathbb{N}]$, by \thref{fop_pst} we obtain $\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}} \equiv_{\mathrm{W}} \firstOrderPart{\PST[\mathbb{N}^\mathbb{N}]} \le_{\mathrm{W}} \firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}$. For the opposite direction, notice that the proof of $\firstOrderPart{\PST[\mathbb{N}^\mathbb{N}]} \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$ in \thref{fop_pst} actually shows that $\firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]} \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$. In fact, the definition of $\mathbf{Prefixes}$ works also if $T \in \mathcal{T}^{\leq\aleph_0}$, and we already considered only prefixes of names for the perfect kernel of $T$.
\thref{Fopcbaire} tells us that $\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}<_\mathrm{W}\firstOrderPart{\mathsf{C}_{\Baire}}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}$. Moreover, $\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}<_\mathrm{W} \ustar{\mathsf{WF}}$ as $\parallelization{\codedChoice{\boldsymbol{\Sigma}_1^1}{}{\mathbb{N}}} \le_{\mathrm{W}} \mathsf{C}_{\Baire} <_\mathrm{W} \parallelization{\mathsf{WF}}$ by \thref{Fopcbaire,cbairepica} and the fact that $\mathsf{C}_{\Baire}$ is parallelizable. On the other hand, $\firstOrderPart{\parallelization{\mathsf{WF}}}\equiv_{\mathrm{W}} \ustar{\mathsf{WF}}$ is an instance of \thref{Summaryfopustar}.
\end{proof}
\begin{theorem}
\thlabel{Pk_below__chipi}
$\PK[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\parallelization{\mathsf{WF}}\equiv_{\mathrm{W}} \PK \le_{\mathrm{W}} \mathsf{lim} * \PK[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
Given $T \in \mathbf{Tr}$, $\PK(T)$ is a name for $\PK[\mathbb{N}^\mathbb{N}](\body{T})$: therefore $\PK[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\PK$; strictness follows from \thref{fop_pi11}. By \thref{Chipistrongpktree}, $\parallelization{\mathsf{WF}}\equiv_{\mathrm{W}} \PK$.
To prove the last reduction, by \thref{Cantor_and_baire_same2} and \thref{Chipistrongpktree} we have that $\PK{\restriction}\mathbf{Tr}_2\equiv_{\mathrm{W}} \parallelization{\mathsf{WF}}$ and $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[2^\mathbb{N}]$, hence, to finish the proof, it suffices to show that $\PK{\restriction}\mathbf{Tr}_2 \le_{\mathrm{W}} \mathsf{lim}*\PK[2^\mathbb{N}]$. From \cite{Nobrega2017GameCA}, we know that $\mathsf{lim}$ is equivalent to the function that prunes a binary tree. So let $T\in \mathbf{Tr}_2$ and let $P$ be a name for $\PK[2^\mathbb{N}](\body{T})$: pruning $P$ with $\mathsf{lim}$ is enough to obtain $\PK{\restriction}\mathbf{Tr}_2(T)$.
\end{proof}
We do not know whether $\PK \equiv_{\mathrm{W}} \mathsf{lim} * \PK[\mathbb{N}^\mathbb{N}]$ (see \thref{PKandLimQuestion}).
\begin{proposition}
\thlabel{Ucbaire_below_pk}
$\PST[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \PK[\mathbb{N}^\mathbb{N}]~|_{\mathrm{W}~} \mathsf{C}_{\Baire} $.
\end{proposition}
\begin{proof}
The fact that $\PST[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ is trivial. By \thref{Pk_below__chipi}, $\parallelization{\mathsf{WF}}\le_{\mathrm{W}}\mathsf{lim}*\PK[\mathbb{N}^\mathbb{N}]$ while, by the closure of $\mathsf{C}_{\Baire}$ under compositional product, we get $\mathsf{lim}*\mathsf{C}_{\Baire} \equiv_{\mathrm{W}} \mathsf{C}_{\Baire}$: hence by \thref{cbairepica} $ \PK[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}}\mathsf{C}_{\Baire}$ and a fortiori $\PK[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}}\PST[\mathbb{N}^\mathbb{N}]$. For the opposite non-reduction, just notice that by \thref{fop_pi11} we have that $\firstOrderPart{\mathsf{C}_{\Baire}} \not\le_{\mathrm{W}} \firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}$.
\end{proof}
\begin{proposition}
\thlabel{Pktcbaire}
$\PK[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\parallelization{\totalization{\mathsf{C}_{\Baire}}}$ and hence the reduction $\mathsf{WF}_{\sierpinski}\le_{\mathrm{W}}\totalization{\mathsf{C}_{\Baire}}$ in \cite[Proposition 11.4(1)]{CompletionOfChoice} is actually strict.
\end{proposition}
\begin{proof}
From $\mathsf{WF}_{\sierpinski}\le_{\mathrm{W}}\totalization{\mathsf{C}_{\Baire}}$ using \thref{Pkcantor_idPiSigma}, we obtain $\PK[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \parallelization{\totalization{\mathsf{C}_{\Baire}}}$. Strictness follows from \thref{Ucbaire_below_pk} as $\mathsf{C}_{\Baire} \not\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ but clearly $\mathsf{C}_{\Baire} \le_{\mathrm{W}} \totalization{\mathsf{C}_{\Baire}}$.
\end{proof}
We end the subsection by characterizing the deterministic part of $\PK[\mathbb{N}^\mathbb{N}]$.
\begin{proposition}
\thlabel{Detpart_pkbaire}
$\mathsf{Det}(\PK[\mathbb{N}^\mathbb{N}])\equiv_{\mathrm{W}} \mathsf{UC}_{\Baire}$.
\end{proposition}
\begin{proof}
For the right to left direction, notice that $\mathsf{UC}_{\Baire}$ is single-valued and, by \thref{UCBaireReducesToPST} and \thref{Ucbaire_below_pk} we have that $\mathsf{UC}_{\Baire} <_\mathrm{W}\PK[\mathbb{N}^\mathbb{N}]$. For the converse, observe that $\firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}$ (\thref{fop_pi11}) and $\parallelization{\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}}\equiv_{\mathrm{W}}\mathsf{UC}_{\Baire}$
(\thref{UCbaireisparallelization}). This, together with $\mathsf{Det}(\PK[\mathbb{N}^\mathbb{N}])\le_{\mathrm{W}} \parallelization{\firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}}$ (\cite[Corollary 3.7]{goh_pauly_valenti_2021}), concludes the proof.
\end{proof}
\subsection{Scattered lists}
\label{scatteredlist}
We now introduce the problems of listing the scattered part of a closed subset of a computable Polish space. Their definitions are similar to those in \thref{definitionsList}: the crucial difference is that the domain includes all closed subsets $A$ of the computable Polish space $\mathcal{X}$ (not only the countable ones, as in \thref{definitionsList}) and we ask for a list of the elements of the scattered part of $A$.
\begin{definition}
Let $\mathcal{X}$ be a computable Polish space. We define three multi-valued functions $\multifunction{\wScList[\mathcal{X}]}{\negrepr{\mathcal{X}}}{(2\times \mathcal{X})^\mathbb{N}}$, $\function{\ScCount[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\mathbb{N}}$ and $\multifunction{\ScList[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\mathbb{N} \times (2\times \mathcal{X})^\mathbb{N}}$ by
\begin{align*}
\wScList[\mathcal{X}](A) & := \{(b_i,x_i)_{i \in \mathbb{N}}: A\setminus\PK[\mathcal{X}](A) = \{x_i:b_i=1\}\},\\
\ScCount[\mathcal{X}](A) & :=
\begin{cases}
0 & \text{if } A\setminus \PK[\mathcal{X}](A) \text{ is infinite},\\
\length{A\setminus \PK[\mathcal{X}](A)}+1 & \text{if } A\setminus \PK[\mathcal{X}](A) \text{ is finite}.
\end{cases}\\
\ScList[\mathcal{X}](A) & := \{\ScCount[\mathcal{X}](A)\} \times \wScList[\mathcal{X}](A).
\end{align*}
\end{definition}
\begin{remark}
\thlabel{Isolated_paths}
Notice that if a closed set $A$ of some $T_1$ topological space has a finite set $F$ of isolated points, then $A \setminus F$ is perfect and hence the scattered part of $A$ is $F$. Equivalently, if the scattered part of $A$ is infinite then it contains infinitely many isolated points. Moreover, the set of isolated points is always dense in the scattered part.
\end{remark}
With a proof similar to that of \thref{Pkbaire_parallelizable}, we obtain the following.
\begin{proposition}
\thlabel{Wsclist_parallelizable}
$\wScList[\mathbb{N}^\mathbb{N}]$ and $\wScList[2^\mathbb{N}]$ are parallelizable.
\end{proposition}
One of the main results of this subsection is that $\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$. We first prove the easier direction.
\begin{lemma}
\thlabel{Pkreduciblewsclist}
$\PK[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
By \thref{Pkcantor_idPiSigma} we get $\PK[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}}$ and, by \thref{Wsclist_parallelizable}, $\wScList[\mathbb{N}^\mathbb{N}]$ is parallelizable. So it suffices to show that $\mathsf{WF}_{\sierpinski}\le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$.
Given an input $T \in \mathbf{Tr}$ for $\mathsf{WF}_{\sierpinski}$, let $S:=\binarydisjointunion{i \in \mathbb{N}}{\exploded{T}}\in\mathbf{Tr}$. If $T \in \mathcal{WF}$ then $\exploded{T} \in \mathcal{WF}$, so that by \thref{Disjoint_union}, $\body{S}=\{0^\mathbb{N}\}$. If instead $T \in \mathcal{IF}$, $\body{\exploded{T}}$ is perfect and therefore $0^\mathbb{N} \in \PK[\mathbb{N}^\mathbb{N}](\body{S})$ and $\body{S}$ is perfect, so that the scattered part of $\body{S}$ is empty.
Hence, for every $(b_i,x_i)_{i \in \mathbb{N}} \in \wScList[\mathbb{N}^\mathbb{N}](\body{S})$ we obtain $\mathsf{WF}_{\sierpinski}(T)=1_\mathbb{S}$ iff $(\exists i)( b_i=1)$. Hence, a name for $\mathsf{WF}_{\sierpinski}(T)$ can be uniformly computed from $(b_i,x_i)_{i \in \mathbb{N}}$.
\end{proof}
We split the proof of $\wScList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ into several lemmas. Before stating them, we discuss some results from \cite[\S 6.1]{kihara_marcone_pauly_2020} and correct an error there.
\begin{remark}
\thlabel{Remarknnmcb}
Theorem 6.4 of \cite{kihara_marcone_pauly_2020} states (in our notation) that $\mathsf{UC}_{\Baire}\equiv_{\mathrm{W}} \wList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \List[\mathbb{N}^\mathbb{N}]$. The main ingredient of the proof of $\wList[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \mathsf{UC}_{\Baire}$ is a variant of the Cantor-Bendixson derivative that allows one to carry out the derivation process in a Borel way for countable closed sets. A single step of this process is called a \emph{one-step $\mathsf{mCB}$-certificate} (\cite[Definition 6.5]{kihara_marcone_pauly_2020}) and all steps are then \lq\lq collected\rq\rq\ in a \emph{global $\mathsf{mCB}$-certificate} (\cite[Definition 6.6]{kihara_marcone_pauly_2020}).
\end{remark}
\begin{definition}[{\cite[Definition 6.5]{kihara_marcone_pauly_2020}}]
Let $A \in \negrepr{\mathbb{N}^\mathbb{N}}$. A \emph{one-step $\mathsf{mCB}$-certificate} for $A$ is some $c=((\sigma_i^c)_{i \in \mathbb{N}},(b_i^c)_{i \in \mathbb{N}},(p_i^c)_{i \in \mathbb{N}}) \in (\mathbb{N}^{<\mathbb{N}}\times 2\times \mathbb{N}^\mathbb{N})^\mathbb{N}$ where
\begin{itemize}
\item for all $i\neq j$, $\sigma_i^c \not\sqsubset \sigma_j^c$ and if $i<j$ then $\sigma_i^c<\sigma_j^c$;
\item there exists $i$ such that $b_i^c=1$;
\item let $\mathsf{HYP}(A)$ denote the set of hyperarithmetical elements of $A$. For all $i$:
\begin{itemize}
\item if $b_i^c=1$, then $p_i^c \in A$ and $\sigma_i^c \sqsubset p_i^c$,
\item if $b_i^c=0$, then $(\forall p \in \mathsf{HYP}(A))(\sigma_i^c \not\sqsubset p)$ and $p_i^c=0^\mathbb{N}$,
\item $(\forall p,q \in \mathsf{HYP}(A))(p,q \in A \land \sigma_i^c \sqsubset p,q \implies p=q)$,
\end{itemize}
\item for every $\sigma \in \mathbb{N}^{<\mathbb{N}}$, if $(\forall i \in \mathbb{N})(\sigma_i^c \not\sqsubseteq \sigma)$ then $(\exists p,q \in A)(p \neq q \land \sigma \sqsubset p,q)$.
\end{itemize}
For a one-step $\mathsf{mCB}$-certificate $c$ for $A$, the residue of $c$ is $\{p \in A : (\forall i \in \mathbb{N}) (\sigma_i^c \not\sqsubseteq p)\}$.
\end{definition}
By \cite[Lemma 6.8]{kihara_marcone_pauly_2020}, every nonempty non-perfect $A \in \negrepr{\mathbb{N}^\mathbb{N}}$ has a one-step $\mathsf{mCB}$-certificate $c$; moreover, the residue of $c$ is the Cantor-Bendixson derivative of $A$. Furthermore, if $\length{A} \leq \aleph_0$ then the one-step $\mathsf{mCB}$-certificate of $A$ is unique.
\begin{definition}[{\cite[Definition 6.6]{kihara_marcone_pauly_2020}}, corrected]
\thlabel{Globalmcb}
A \emph{global $\mathsf{mCB}$-certificate} for $A \in \negrepr{\mathbb{N}^\mathbb{N}}$ is indexed by some initial segment $I \subseteq \mathbb{N}$ and consists of a sequence $(c_n)_{n \in I}$ and a strict linear ordering $\lhd$ on $I$ with minimum $n_0$ (if $I$ is nonempty) such that, denoting by $A_i$ the residue of $c_i$:
\begin{itemize}
\item $c_{n_0}$ is a one-step $\mathsf{mCB}$-certificate for $A$;
\item for every $n \in I \setminus \{n_0\}$, $c_n$ is a one-step $\mathsf{mCB}$-certificate for $\underset{i \lhd n}{\bigcap} A_i$;
\item for all $p \in \mathsf{HYP}(A)$, if $p \in A$ then $(\exists i \in I)(p \in A_i)$;
\item for every $n,m \in I$, if $n<m$ then $\sigma_{h(n)}^{c_n}< \sigma_{h(m)}^{c_m}$ where $h(n):= \min\{i: b_{i}^{c_n}=1\}$.
\end{itemize}
\end{definition}
Observe that \thref{Globalmcb} differs from \cite[Definition 6.6]{kihara_marcone_pauly_2020} by the addition of the last requirement, which ensures that \cite[Corollary 6.9]{kihara_marcone_pauly_2020} holds: if $\length{A}\leq \aleph_0$, then $A$ has a unique global $\mathsf{mCB}$-certificate. Indeed, the last condition forces a specific ordering on the sequence of one-step $\mathsf{mCB}$-certificates (determined by the code of the first finite sequence that is a prefix of an isolated path), ruling out the multiple codings of the sequence that \cite[Definition 6.6]{kihara_marcone_pauly_2020} allowed, since there the one-step certificates could be permuted.
Notice that, assuming $\length{A}\leq \aleph_0$, the global $\mathsf{mCB}$-certificate of $A$ computes a list of the paths in $A$. Since the global $\mathsf{mCB}$-certificate of a countable closed set $A$ is a $\Sigma_1^{1,A}$ singleton, we obtain, as in \cite[Theorem 6.4]{kihara_marcone_pauly_2020}, that $\wList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$.
The following lemmas involve the completion of a problem (see \S \ref{representedspaces} for its definition).
\begin{lemma}
\thlabel{Lemmacompletion}
The function $\function{F}{\mathbb{N}\times(\mathbb{N}^\mathbb{N}\cup \mathbb{N}^{<\mathbb{N}})}{\completion{\mathbb{N}^\mathbb{N}}}$ such that for all $e \in \mathbb{N}$ and $p \in \mathbb{N}^\mathbb{N}\cup \mathbb{N}^{<\mathbb{N}}$
$$F(e,p)=\begin{cases}
\Phi_e(p)& \text{ if } p \in \mathbb{N}^\mathbb{N} \text{ and } p \in \operatorname{dom}(\Phi_e),\\
\bot & \text{ otherwise,}
\end{cases}$$
is computable.
\end{lemma}
\begin{proof}
A computable realizer $\Phi'$ for $F$ can be defined recursively as follows. Suppose we have defined $\Phi'(e,p)[s]$ and let $t_s=\length{\{t<s:\Phi'(e,p)(t)>0\}}$. Then set
\[
\Phi'(e,p)(s)=
\begin{cases}
\Phi_e(p)(t_{s})+1 & \text{ if } \Phi_e(p)(t_{s})\downarrow \text{ in less than } s \text{ steps},\\
0& \text{ otherwise. }
\end{cases}\]
It is easy to check that this works.
\end{proof}
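The padding trick behind the realizer $\Phi'$ can be checked concretely. The following sketch (with hypothetical names; it is an illustration, not part of the formal development) encodes a possibly-partial computation as a total stream, emitting $v+1$ when the next value $v$ has converged within the current step budget and $0$ otherwise, so that the original output is recovered by discarding the padding.

```python
def encode(values_at_stage, stages):
    """values_at_stage(s): the (monotone) list of outputs of Phi_e(p)
    that have converged within s computation steps."""
    out = []
    emitted = 0  # plays the role of t_s: number of real values emitted so far
    for s in range(stages):
        available = values_at_stage(s)
        if emitted < len(available):
            out.append(available[emitted] + 1)  # shift by +1; 0 is reserved
            emitted += 1
        else:
            out.append(0)  # padding: no new value has converged yet
    return out

def decode(encoded):
    """Drop the 0-padding and undo the +1 shift."""
    return [v - 1 for v in encoded if v > 0]

# Toy computation: the n-th output value converges at step 3*(n+1).
phi = lambda s: [n * n for n in range(s // 3)]
assert decode(encode(phi, 12)) == [0, 1, 4]
# A nowhere-converging computation yields pure padding.
assert decode(encode(lambda s: [], 12)) == []
```

If $\Phi_e(p)$ is total, the padded stream carries every output value shifted by one; if the computation stalls forever, the stream is eventually $0$, i.e.\ a name for $\bot$ in $\completion{\mathbb{N}^\mathbb{N}}$.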
\begin{lemma}
\thlabel{completionofwlist_below_pk}
$\completion{\wList[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
Since $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[2^\mathbb{N}]$ is parallelizable (\thref{Cantor_and_baire_same2,Pkbaire_parallelizable}), it suffices to show that $\completion{\wList[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{W}} \parallelization{\PK[2^\mathbb{N}]}$.
By \cite[Lemma 5.1]{CompletionOfChoice}, $\negrepr{\mathbb{N}^\mathbb{N}}$ is multi-retraceable, i.e.\ there is a computable multi-valued function $\multifunction{r}{\completion{\negrepr{\mathbb{N}^\mathbb{N}}}}{{\negrepr{\mathbb{N}^\mathbb{N}}}}$ such that its restriction to $\negrepr{\mathbb{N}^\mathbb{N}}$ is the identity. Given $A\in \completion{\negrepr{\mathbb{N}^\mathbb{N}}}$, let $G$ be the set of global $\mathsf{mCB}$-certificates of $r(A)\in \negrepr{\mathbb{N}^\mathbb{N}}$. By \cite[Lemma 6.7]{kihara_marcone_pauly_2020}, $G$ is a $\Sigma_1^{1,A}$ subset of $\mathbb{N}^\mathbb{N}$ and, in case $\length{r(A)}\leq \aleph_0$, by \cite[Corollary 6.9]{kihara_marcone_pauly_2020}, $G$ is a singleton.
Let $T$ be a name for $G$ as described in \S \ref{Background}, i.e.\ $T\in \mathbf{Tr}$ is such that $G=\{x:(\exists y)(\forall n)(x[n]*y[n]\in T)\}$. Then, for every $m$, compute the tree
$$T^{*m}:=\mathbb{N}^m \cup \{\sigma:\str{\sigma(m),\dots,\sigma({\length{\sigma}-1})}\in T \land (\forall i<m)(m+2i<\length{\sigma}\implies \sigma(i)=\sigma(m+2i))\}.$$
Notice that $\body{T^{*m}}=\{\str{p(0),p(2),\dots,p(2m-2)}p:p \in \body{T}\}$; therefore, every path in $\body{T^{*m}}$ begins with the first $m$ coordinates of some element of the analytic set $G$. In particular, if $G=\{p_0\}$ then every path in $\body{T^{*m}}$ extends $p_0[m]$. Let $U^m$ be a name for $\PK[2^\mathbb{N}](\translateCantor(\exploded{T^{*m}}))$. If $G=\{p_0\}$ then every path in $\body{U^m}$ extends $\translateCantor(p_0[m]*\sigma)$ for some $\sigma \in 2^m$.
We now describe how to compute an element $x \in \mathbb{N}^\mathbb{N} \cup \mathbb{N}^{<\mathbb{N}}$ from the sequence $(U^m)_{m \in \mathbb{N}}$ such that $x=p_0$ when $G=\{p_0\}$.
The procedure is similar to the one used in the proof of \thref{UCBaireReducesToPST}. Looking first at $U^1$, we search for $n_0$ such that
\begin{equation*}
(\forall \tau \in 2^{n_0+1})(U_\tau^1 \in \mathcal{IF}_2 \implies \tau=0^{n_0}1).
\end{equation*}
Since $\mathcal{IF}_2$ is $\Pi_1^0$ (\thref{Complexityresults}(i)), the above condition is $\Sigma_1^{0}$. If we find such an $n_0$, we set $x(0)=n_0$ and we move to the next step. Notice that, in case $G=\{p_0\}$, the unique $n_0$ satisfying the above condition is $p_0(0)$.
Suppose we have computed $x[m-1]:= n_0n_1\dots n_{m-1}$. We generalize the previous strategy to compute $n_m$. Let $$A_m=\{0^{n_0} 1 \xi_0 0^{n_1} 1 \xi_1 \dots \xi_{m-2} 0^{n_{m-1}} 1 \xi_{m-1} \in U^m: (\forall j<m) (\xi_j \in \{1,01\})\}.$$
We search for $n_m$ satisfying the $\Sigma_1^{0}$ property
$$(\forall \sigma \in A_m)(\forall \tau \in 2^{n_m+1})(U_{\sigma\tau}^m \in \mathcal{IF}_2 \implies \tau=0^{n_m}1 ).$$
As before, if we find such an $n_m$ we let $x(m):= n_m$ and move to the next step. Again, if $G=\{p_0\}$, the unique $n_m$ satisfying the above condition is $p_0(m)$.
The proof of \cite[Theorem 6.4]{kihara_marcone_pauly_2020} gives us a computable function $\Phi_e$ such that, in case $\length{r(A)}\leq \aleph_0$, $\Phi_e(x)$ is a name for a member of $\wList[\mathbb{N}^\mathbb{N}](r(A))$. If $F$ is the function of \thref{Lemmacompletion} then, identifying the completion of the codomain of $\wList[\mathbb{N}^\mathbb{N}]$ with $\completion{\mathbb{N}^\mathbb{N}}$, we obtain $F(e,x) \in \completion{\wList[\mathbb{N}^\mathbb{N}]}(r(A))$.
Summing up, if $A \in \negrepr{\mathbb{N}^\mathbb{N}}$ is countable then $F(e,x)$ is a name in $\completion{\mathbb{N}^\mathbb{N}}$ for an element of $\wList[\mathbb{N}^\mathbb{N}](A)$. If instead $A \in \completion{\negrepr{\mathbb{N}^\mathbb{N}}}$ does not belong to $\operatorname{dom}(\wList[\mathbb{N}^\mathbb{N}])$, then $F(e,x)$ is still a name for some member of $\completion{\mathbb{N}^\mathbb{N}}$.
\end{proof}
\begin{lemma}
\thlabel{wsclistreduciblepkbaire}
$\wScList[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}} \times \parallelization{\completion{\wList[\mathbb{N}^\mathbb{N}]}}.$
\end{lemma}
\begin{proof}
Let $T \in \mathbf{Tr}$ be a name for an element of $\negrepr{\mathbb{N}^\mathbb{N}}$ and recall that, by \thref{Complexityresults}(ii), $\{\sigma: T_\sigma \in \mathcal{T}^{\leq\aleph_0}\}$ is $\Pi_1^{1,T}$. Hence, using \thref{Complexityresults}(i), we can compute $(S(\sigma))_{\sigma \in \mathbb{N}^{<\mathbb{N}}} \in \mathbf{Tr}^\mathbb{N}$ such that $ S(\sigma) \in \mathcal{WF}$ iff $T_\sigma \in \mathcal{T}^{\leq\aleph_0}$. For any $\sigma \in T$, let $S(\sigma)$ and $\body{T_{\sigma}}$ be the inputs for the $\sigma$-th instance of $\mathsf{WF}_{\sierpinski}$ and $\completion{\wList[\mathbb{N}^\mathbb{N}]}$ respectively. Let $L_\sigma \in \completion{\wList[\mathbb{N}^\mathbb{N}]}(\body{T_\sigma})$.
For any $\sigma$, when we see that $\mathsf{WF}_{\sierpinski}(S(\sigma))=1_{\mathbb{S}}$ then we know that $S(\sigma) \in \mathcal{WF}$ and hence $T_\sigma \in \mathcal{T}^{\leq\aleph_0}$: we need to include a list of $\body{T_\sigma}$ in $\wScList[\mathbb{N}^\mathbb{N}](\body{T})$. We compute a name for $L \in \wScList[\mathbb{N}^\mathbb{N}](\body{T})$ combining all $L_\sigma$ such that $\mathsf{WF}_{\sierpinski}(S(\sigma))=1_{\mathbb{S}}$.
\end{proof}
\begin{theorem}
\thlabel{Pkbaire_equiv_wsclistbaire}
$\wScList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
The right-to-left direction is \thref{Pkreduciblewsclist}. For the opposite direction,
notice that, by \thref{Pkcantor_idPiSigma} and \thref{completionofwlist_below_pk}, we have that $\mathsf{WF}_{\sierpinski},\completion{\wList[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$. By \thref{wsclistreduciblepkbaire} and the fact that $\PK[\mathbb{N}^\mathbb{N}]$ is parallelizable (\thref{Pkbaire_parallelizable}), we conclude that $\wScList[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{proof}
We now study $\ScList[\mathbb{N}^\mathbb{N}]$ and show that it lies strictly between $\PK[\mathbb{N}^\mathbb{N}]$ and $\parallelization{\mathsf{WF}}$.
\begin{lemma}
\thlabel{wfustarsccountbaire}
$\mathsf{WF}^*\le_{\mathrm{W}} \ScCount[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
Let $(T^m)_{m\leq n} \in \mathbf{Tr}^{n+1}$ be an input for $\mathsf{WF}^*$. For every $T^m$ compute the tree $S^m := \{j^s\tau:j<2^m \land s \in \mathbb{N} \land \tau \in \exploded{T^m}\}$.
Notice that if $T^m\in \mathcal{WF}$ then $\exploded{T^m} \in \mathcal{WF}$, and, in this case, $\length{\body{S^m}}=2^m$. On the other hand, if $T^m \in \mathcal{IF}$ then $\body{S^m}$ is perfect. Now let $k := \ScCount[\mathbb{N}^\mathbb{N}]\left(\body{\disjointunion{m\leq n}{S^m}}\right)$. Notice that $k>0$ (as the scattered part of $\body{\disjointunion{m\leq n}{S^m}}$ is always finite) and
\[
k-1 = \sum_{T^m \in \mathcal{WF}} 2^m.
\]
Hence, the binary expansion of $k-1$ contains the information about which $T^m$'s are well-founded.
\end{proof}
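The coding by binary expansion used in this proof is easy to verify concretely. The sketch below (hypothetical names; an illustration, not part of the proof) encodes the $n+1$ answers to $\mathsf{WF}$ into the single value $k$ and reads them back off the bits of $k-1$.

```python
def encode_count(wellfounded):
    """k = 1 + sum of 2^m over the indices m with T^m well-founded,
    i.e. the value ScCount returns on the disjoint union of the S^m."""
    return 1 + sum(2 ** m for m, wf in enumerate(wellfounded) if wf)

def decode_count(k, n):
    """Recover the n answers from the binary expansion of k - 1."""
    return [bool((k - 1) >> m & 1) for m in range(n)]

answers = [True, False, True, True]   # which T^m are well-founded
k = encode_count(answers)             # 1 + (1 + 4 + 8) = 14
assert decode_count(k, len(answers)) == answers
```

The scaling by $2^m$ copies is exactly what makes the single natural number $k$ carry all $n+1$ Boolean answers at once.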
We do not know more about the Weihrauch degree of $\ScCount[\mathbb{N}^\mathbb{N}]$ (see \thref{question:sccount}).
\begin{theorem}
\thlabel{Pkbaire_below_sclistbaire}
$\PK[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\ScList[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
Since $\wScList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\ScList[\mathbb{N}^\mathbb{N}]$ is trivial, \thref{Pkbaire_equiv_wsclistbaire} immediately implies the reduction.
For strictness, first notice that $\mathsf{WF} \not\le_{\mathrm{W}} \firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}$: indeed, by \thref{cbairepica} and the fact that $\mathsf{WF}$ is first-order, we obtain that $\mathsf{WF} \not\le_{\mathrm{W}} \firstOrderPart{\mathsf{C}_{\Baire}}$ and, by \thref{fop_pi11}, we get that $\firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{W}} \firstOrderPart{\mathsf{C}_{\Baire}}$. Hence, $\mathsf{WF}\not\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$. On the other hand, clearly $\ScCount[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$ and $\mathsf{WF}^* \le_{\mathrm{W}} \ScCount[\mathbb{N}^\mathbb{N}]$ by \thref{wfustarsccountbaire}, so that $\mathsf{WF}\le_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$.
\end{proof}
Combining this with the fact that $\PK[\mathbb{N}^\mathbb{N}]~|_{\mathrm{W}~} \mathsf{C}_{\Baire}$ (\thref{Ucbaire_below_pk}), we immediately obtain that $\ScList[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}} \mathsf{C}_{\Baire}$. On the other hand, we do not know if $\mathsf{C}_{\Baire} <_\mathrm{W} \ScList[\mathbb{N}^\mathbb{N}]$ or $\mathsf{C}_{\Baire} ~|_{\mathrm{W}~} \ScList[\mathbb{N}^\mathbb{N}]$ (see \thref{questioncbaire}).
\begin{remark}
\thlabel{Pi11sets2}
By \thref{Complexityresults}(iii), given a tree $T$, $\{\sigma : \length{\body{T_\sigma}}=1\}$ is $\Pi_1^{1,T}$ if $T \in \mathbf{Tr}$ and $\Pi_2^{0,T}$ if $T \in \mathbf{Tr}_2$. Let $\varphi(n,T):=(\exists \sigma_0,\dots,\sigma_{n-1})(\forall i \neq j< n)(\sigma_i ~|~ \sigma_j \land \length{\body{T_{\sigma_i}}}=1)$: this formula, asserting that $\body{T}$ has at least $n$ isolated points, is $\Pi_1^{1}$ if $T \in \mathbf{Tr}$, and $\Sigma_3^0$ if $T \in \mathbf{Tr}_2$. Notice that, by \thref{Isolated_paths}, the scattered part of $\body{T}$ has at least $n$ elements iff $\varphi(n,T)$ holds. Therefore, the scattered part of $\body{T}$ is infinite iff $(\forall n)(\varphi(n,T))$, which is $\Pi_1^{1}$ if $T \in \mathbf{Tr}$ and $\Pi_4^0$ if $T \in \mathbf{Tr}_2$.
\end{remark}
\begin{theorem}
\thlabel{sclist_below_pica}
$\ScList[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \parallelization{{\mathsf{WF}}}$.
\end{theorem}
\begin{proof}
To prove the reduction notice that $\ScList[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \ScCount[\mathbb{N}^\mathbb{N}] \times \wScList[\mathbb{N}^\mathbb{N}]$. By \thref{Pkbaire_equiv_wsclistbaire} and \thref{Pk_below__chipi}, $\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\PK[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\parallelization{\mathsf{WF}}$. As $\parallelization{\mathsf{WF}}$ is clearly parallelizable, it suffices to prove that $\ScCount[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\parallelization{\mathsf{WF}}$. Let $T \in \mathbf{Tr}$ be a name for an input of $\ScCount[\mathbb{N}^\mathbb{N}]$: by \thref{Pi11sets2} we can compute $(S^n)_{n \in \mathbb{N}} \in \mathbf{Tr}^\mathbb{N}$ such that $S^0 \in \mathcal{WF}$ iff the scattered part of $\body{T}$ has infinitely many elements and, for $n>0$, $S^n \in \mathcal{WF}$ iff the scattered part of $\body{T}$ has at least $n$ elements. Then,
$$\ScCount[\mathbb{N}^\mathbb{N}](\body{T})=\begin{cases}
0 & \text{if } \mathsf{WF}(S^0)=1,\\
\min\{n>0:\mathsf{WF}(S^{n})=0\} & \text{if } \mathsf{WF}(S^0)=0.
\end{cases}$$
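The case distinction above amounts to a simple search through the oracle answers; a minimal sketch (the names `wf` and `sc_count` are illustrative, not from the paper):

```python
def sc_count(wf):
    """Compute ScCount from the oracle wf(n) = 'is S^n well-founded?'."""
    if wf(0):          # S^0 well-founded: scattered part is infinite
        return 0
    n = 1
    while wf(n):       # scattered part has at least n elements
        n += 1
    return n           # first n > 0 with fewer than n elements, i.e. |sc| + 1

# Scattered part with exactly 3 elements: S^0 ill-founded and S^n
# well-founded exactly for 1 <= n <= 3.
assert sc_count(lambda n: n != 0 and n <= 3) == 4
# Infinite scattered part: S^0 well-founded.
assert sc_count(lambda n: True) == 0
```

Note that the sketch queries the oracle sequentially for readability, while $\parallelization{\mathsf{WF}}$ answers all the queries $(S^n)_{n \in \mathbb{N}}$ in parallel.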
For strictness, notice that $\ScList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]*\ScCount[\mathbb{N}^\mathbb{N}]$. By Theorem 3.9 of \cite{goh_pauly_valenti_2021}, $\mathsf{Det}(f*g)\le_{\mathrm{W}}\mathsf{Det}(f)*g$ and so
$$\mathsf{Det}(\wScList[\mathbb{N}^\mathbb{N}]*\ScCount[\mathbb{N}^\mathbb{N}])\le_{\mathrm{W}}\mathsf{Det}(\wScList[\mathbb{N}^\mathbb{N}])*\ScCount[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \mathsf{UC}_{\Baire}*\ScCount[\mathbb{N}^\mathbb{N}],$$
where the equivalence follows from \thref{Pkbaire_equiv_wsclistbaire} and \thref{Detpart_pkbaire}. Since the output of $\ScCount[\mathbb{N}^\mathbb{N}]$ is a natural number and the solution of $\mathsf{UC}_{\Baire}$ is always hyperarithmetical relative to the input (\cite[Corollary 3.4]{kihara_marcone_pauly_2020}), while $\parallelization{\mathsf{WF}}$ has instances with no solutions hyperarithmetical in the input, we conclude that $\mathsf{UC}_{\Baire}*\ScCount[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \mathsf{Det}(\parallelization{\mathsf{WF}})\equiv_{\mathrm{W}}\parallelization{\mathsf{WF}}$ (the equivalence is immediate as $\parallelization{\mathsf{WF}}$ is single-valued). Therefore, $\parallelization{\mathsf{WF}} \not\le_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]*\ScCount[\mathbb{N}^\mathbb{N}]$ and, a fortiori, $\parallelization{\mathsf{WF}} \not \le_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$.
\end{proof}
We now move our attention to listing problems of the scattered part of closed subsets of Cantor space. From \thref{Pk_below__chipi} and \thref{Pkbaire_equiv_wsclistbaire} notice that $\mathsf{WF}\le_{\mathrm{W}} \mathsf{LPO} *\wScList[\mathbb{N}^\mathbb{N}]$. As the next lemma shows, to compute $\mathsf{WF}$ it suffices to compose $\wScList[2^\mathbb{N}]$ with a function slightly stronger than $ \mathsf{LPO} $.
\begin{lemma}
\thlabel{Wsclistreacheswf}
$\mathsf{WF} \le_{\mathrm{W}} \mathsf{LPO}'*\wScList[2^\mathbb{N}]$.
\end{lemma}
\begin{proof}
Given an input $T\in \mathbf{Tr}$ for $\mathsf{WF}$, we can compute $S:=\binarydisjointunion{n \in \mathbb{N}}{(\translateCantor(\exploded{T}))}$. Let $(b_i,p_i)_{i \in \mathbb{N}} \in \wScList[2^\mathbb{N}](S)$. Then
\begin{align*}
T \in \mathcal{WF} & \iff \exploded{T}\in \mathcal{WF}\\
& \iff \translateCantor(\exploded{T}) \in \mathcal{T}^{\leq\aleph_0}_2\\
& \iff (\exists i)(b_i=1 \land p_i=0^\mathbb{N}).
\end{align*}
The last condition is $\Sigma_2^{0}$ and so $ \mathsf{LPO} '$ suffices to establish from $(b_i,p_i)_{i \in \mathbb{N}}$ whether $T \in \mathcal{WF}$.
\end{proof}
\begin{lemma}
\thlabel{Fop_wsclistcantor}
$\codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}}\equiv_{\mathrm{W}} \firstOrderPart{\wScList[2^\mathbb{N}]}$.
\end{lemma}
\begin{proof}
For the left-to-right direction, observe that $\wScList[2^\mathbb{N}]$ is parallelizable (\thref{Wsclist_parallelizable}), hence it suffices to show that $\codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}} \le_{\mathrm{W}}\parallelization{\wScList[2^\mathbb{N}]}$. An input for $\codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}}$ is a nonempty set $A \in \boldsymbol{\Pi}_2^0(\mathbb{N})$. We can uniformly find a sequence $(p_n)_{n \in \mathbb{N}}$ of elements of $2^\mathbb{N}$ such that $n \in A \iff (\exists^{\infty} i)(p_n(i)=0)$.
For every $n$, let
\[
T^n:= \{\sigma \in 2^{<\mathbb{N}}: (\forall i <\length{\sigma})(p_n(i)=0\implies (\forall j<i)( \sigma(j)=0 ))\}.
\]
Notice that if $(\exists^{\infty} i)(p_n(i)=0)$ then $\body{T^n}=\{0^\mathbb{N}\}$, while $\body{T^n}$ is perfect otherwise.
Given $((b_{i,n},p_{i,n})_{i \in \mathbb{N}})_{n \in \mathbb{N}}\in \parallelization{\wScList[2^\mathbb{N}]}((T^n)_{n \in \mathbb{N}})$ notice that, for every $n$, $n \in A$ iff there exists $i$ such that $b_{i,n}=1$. Hence, we can find $n \in A$ simply by searching for a pair $i,n$ such that $b_{i,n}=1$.
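Membership in the trees $T^n$ is a simple finite check; the following sketch (illustrative names, not part of the proof) verifies the defining condition on a few strings.

```python
def in_tree(p, sigma):
    """sigma is in T^n iff, for every i < |sigma| with p(i) = 0,
    all coordinates of sigma below i are 0."""
    return all(all(s == 0 for s in sigma[:i])
               for i, x in enumerate(p[:len(sigma)]) if x == 0)

p = [1, 0, 1, 0, 1]  # a prefix of p_n with zeros at positions 1 and 3
assert in_tree(p, [0, 1, 1])         # only the constraint at i = 1 applies
assert not in_tree(p, [1, 1])        # violates the constraint at i = 1
assert not in_tree(p, [0, 1, 1, 1])  # position 3 forces sigma[:3] = 000
assert in_tree(p, [0, 0, 0, 1])      # allowed: unconstrained after i = 3
```

When $p_n$ has infinitely many zeros, longer and longer prefixes of any member of $\body{T^n}$ are forced to be zero, so $\body{T^n}=\{0^\mathbb{N}\}$; with only finitely many zeros the tail is unconstrained and $\body{T^n}$ is perfect.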
For the other direction, let $f$ be a first-order function and suppose that $f\le_{\mathrm{W}} \wScList[2^\mathbb{N}]$ is witnessed by the maps $\Phi$ and $\Psi$. Let $p$ be a name for an input of $f$ and let $\Phi(p)=T\in \mathbf{Tr}_2$. Recall that \thref{ellesigmadefinition} introduced $\ell_\sigma$ and $\pi_i(\sigma)$ for $\sigma \in 2^{<\mathbb{N}}$. Let $\mathbf{Prefixes}$ be the set of all $(\tau,\sigma) \in 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ such that
\[
\length{\tau}=\ell_\sigma \land \Psi\big(p[\length{\sigma}],\str{\tau(i),\pi_i(\sigma)}_{i<\ell_\sigma}\big)(0){\downarrow} \land (\forall i<\ell_\sigma)(\tau(i)=1 \implies T_{\pi_i(\sigma)} \in \mathcal{UB}_2).
\]
Elements in $2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$ can be coded as natural numbers. Since $\mathcal{UB}_2$ is a $\Pi_2^0$ set (\thref{Complexityresults}(iii)), $\mathbf{Prefixes}$ is a $\Pi_2^{0,T}$ subset of $\mathbb{N}$. It is immediate that every $(\tau,\sigma) \in \mathbf{Prefixes}$ is a prefix of a name for $\wScList[2^\mathbb{N}](\body{T})$, and hence $f \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}}$ follows immediately if $\mathbf{Prefixes} \neq \emptyset$.
To show this, let $L\in\wScList[2^\mathbb{N}](\body{T})$ and $n$ be such that $\Psi(p[n],L[n])(0)\downarrow$. Then $L[n]=\str{\tau(i),\pi_i(\sigma)}_{i<\ell_\sigma}$ for some $(\tau, \sigma) \in 2^{<\mathbb{N}} \times 2^{<\mathbb{N}}$. If $\tau(i)=1$ then $\pi_i(\sigma)$ is a prefix of a member of the scattered part of $\body{T}$ and, by \thref{Isolated_paths}, there exists $\xi_i \sqsupseteq \pi_{i}(\sigma)$ such that $T_{\xi_i} \in \mathcal{UB}_2$. Let $(\tau',\sigma')$ be such that $\tau'\sqsupseteq \tau $, $\sigma'\sqsupseteq \sigma$, $(\forall i<\ell_\sigma)(\tau(i)=1 \implies \pi_i(\sigma') \sqsupseteq \xi_i)$ and $(\forall i<\ell_{\sigma'})(i \geq \ell_\sigma \implies \tau'(i)=0)$. Then $(\tau',\sigma') \in \mathbf{Prefixes}$.
\end{proof}
We collect in the next theorem our results about the problems of listing the scattered part of a closed set; these results are also summarized in Figure \ref{Figureslist}.
\begin{theorem}
\thlabel{sclistcantor_summary}
The following relations hold:
\begin{enumerate}[(i)]
\item $ \List[2^\mathbb{N},< \omega],\wList[2^\mathbb{N}] <_\mathrm{W} \wScList[2^\mathbb{N}]$, while $\List[2^\mathbb{N}]$ and $\mathsf{UC}_{\Baire}$ are both Weihrauch incomparable with $\wScList[2^\mathbb{N}]$;
\item $\List[2^\mathbb{N}],\wScList[2^\mathbb{N}]<_\mathrm{W}\ScList[2^\mathbb{N}]$ and $\mathsf{UC}_{\Baire}~|_{\mathrm{W}~}\ScList[2^\mathbb{N}]$;
\item $\mathsf{UC}_{\Baire},\ScList[2^\mathbb{N}]<_\mathrm{W}\wScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\ScList[\mathbb{N}^\mathbb{N}]$ while $\mathsf{C}_{\Baire}~|_{\mathrm{W}~} \wScList[\mathbb{N}^\mathbb{N}]$ and $\ScList[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}} \mathsf{C}_{\Baire}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item By \cite[Proposition 6.11]{kihara_marcone_pauly_2020} and \thref{Fop_wsclistcantor}, $\List[2^\mathbb{N},< \omega]\equiv_{\mathrm{W}}\codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}}\equiv_{\mathrm{W}} \firstOrderPart{\wScList[2^\mathbb{N}]}$. The reduction $\wList[2^\mathbb{N}]\le_{\mathrm{W}}\wScList[2^\mathbb{N}]$ is obvious. From \thref{Summary_list_cantor} we know that $\List[2^\mathbb{N},< \omega] ~|_{\mathrm{W}~} \wList[2^\mathbb{N}]$ and hence $\wScList[2^\mathbb{N}] \not\le_{\mathrm{W}}\List[2^\mathbb{N},< \omega]$ and $\wScList[2^\mathbb{N}] \not\le_{\mathrm{W}} \wList[2^\mathbb{N}]$.
For the incomparabilities, recall that, by \thref{Summary_list_cantor}, $\List[2^\mathbb{N}]\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$: therefore it suffices to show that $\wScList[2^\mathbb{N}]\not\le_{\mathrm{W}} \mathsf{UC}_{\Baire}$ and $\List[2^\mathbb{N}]\not\le_{\mathrm{W}}\wScList[2^\mathbb{N}]$.
By \thref{Wsclistreacheswf}, $\mathsf{WF}\le_{\mathrm{W}} \mathsf{LPO} '* \wScList[2^\mathbb{N}]$ while, since $\mathsf{WF}\not\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$ (\thref{cbairepica}) and $\mathsf{UC}_{\Baire}$ is closed under compositional product, we obtain that $\mathsf{WF}\not\le_{\mathrm{W}} \mathsf{LPO} '*\mathsf{UC}_{\Baire}$. Hence, $\wScList[2^\mathbb{N}]\not\le_{\mathrm{W}} \mathsf{UC}_{\Baire}$.
To show that $\List[2^\mathbb{N}]\not\le_{\mathrm{W}} \wScList[2^\mathbb{N}]$ we prove that $\firstOrderPart{\List[2^\mathbb{N}]}\not\le_{\mathrm{W}} \firstOrderPart{\wScList[2^\mathbb{N}]}$. Since, by \thref{Fop_wsclistcantor} and \thref{limdoesnotreachucbaire}(ii), $\firstOrderPart{\wScList[2^\mathbb{N}]}\equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}} \not\le_{\mathrm{W}} \mathsf{LPO} ''$, it suffices to show $ \mathsf{LPO} '' \le_{\mathrm{W}} \List[2^\mathbb{N}]$. We can view an input for $ \mathsf{LPO} ''$ as a sequence $(q_n)_{n \in \mathbb{N}}$ of elements of $2^\mathbb{N}$ so that
$$ \mathsf{LPO} ''((q_n)_{n \in \mathbb{N}})=1 \iff (\exists^\infty n)(\forall i)(q_n(i)=0).$$
Given $(q_n)_{n \in \mathbb{N}}$, we compute $(T^n)_{n \in \mathbb{N}} \in \mathbf{Tr}^\mathbb{N}$ defined as $T^n:= \{0^s:(\forall i<s)(q_n(i)=0)\}$. Notice that $T^n \in \mathcal{IF}_2 \iff q_n=0^\mathbb{N}$ and given $T':= \binarydisjointunion{n \in \mathbb{N}}{T^n}$ it is easy to check that $\length{\body{T'}}=\aleph_0 \iff \mathsf{LPO} ''((q_n)_{n \in \mathbb{N}})=1$. Since the information about the cardinality of $\body{T'}$ is included in $\List[2^\mathbb{N}](\body{T'})$, this concludes the reduction $ \mathsf{LPO} ''\le_{\mathrm{W}} \List[2^\mathbb{N}]$.
\item The reductions $\List[2^\mathbb{N}],\wScList[2^\mathbb{N}]\le_{\mathrm{W}}\ScList[2^\mathbb{N}]$ are immediate and, since we just showed that $\List[2^\mathbb{N}]~|_{\mathrm{W}~}\wScList[2^\mathbb{N}]$, they are strict.
Combining the facts that $\wScList[2^\mathbb{N}]<_\mathrm{W} \ScList[2^\mathbb{N}]$ and $\wScList[2^\mathbb{N}]\not\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$, we conclude that $\ScList[2^\mathbb{N}]\not\le_{\mathrm{W}}\mathsf{UC}_{\Baire}$.
To show that $\mathsf{UC}_{\Baire}\not\le_{\mathrm{W}}\ScList[2^\mathbb{N}]$, we first prove that $\ScCount[2^\mathbb{N}]\le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}$. Given $T \in \mathbf{Tr}_2$, let
$$A:=\{n: (n>0 \implies \varphi(n-1,T) \land \lnot \varphi(n,T)) \land (n=0\implies (\forall k)(\varphi(k,T)))\}$$
where $\varphi(n,T):=(\exists \sigma_0,\dots,\sigma_{n-1})(\forall i \neq j< n)(\sigma_i ~|~ \sigma_j \land T_{\sigma_i} \in \mathcal{UB}_2)$. Using \thref{Pi11sets2}, it is easy to check that $\varphi(n,T)$ is $\Sigma_3^0$ and hence $A$ is a $\Pi_4^{0,T}$ subset of $\mathbb{N}$. Notice that $A$ is a singleton and, by \thref{Isolated_paths}, the unique $n \in A$ is the correct answer for $\ScCount[2^\mathbb{N}]$.
As $\ScList[2^\mathbb{N}]\le_{\mathrm{W}} \wScList[2^\mathbb{N}] \times \ScCount[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[2^\mathbb{N}]*\ScCount[2^\mathbb{N}]$ we have that
\[\firstOrderPart{\ScList[2^\mathbb{N}]} \le_{\mathrm{W}} \firstOrderPart{(\wScList[2^\mathbb{N}]*\ScCount[2^\mathbb{N}])} \le_{\mathrm{W}} \firstOrderPart{\wScList[2^\mathbb{N}]}*\ScCount[2^\mathbb{N}],\]
where the second reduction follows from \cite[Proposition 4.1(4)]{valentisolda} which states that $\firstOrderPart{(f*g)}\le_{\mathrm{W}}\firstOrderPart{f}*g$ for any $f$ and $g$. By \thref{Fop_wsclistcantor} and the fact that $\ScCount[2^\mathbb{N}]\le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}$ we get that $\firstOrderPart{\ScList[2^\mathbb{N}]}\le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_2^0}{}{\mathbb{N}}*\codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}} \equiv_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}$ (the last equivalence follows from \cite[Theorem 7.2]{valentisolda}).
By \thref{limdoesnotreachucbaire}(i) and (iii) and \thref{Fopcbaire}, we know that $\codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}<_\mathrm{W}\codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}}\equiv_{\mathrm{W}}\firstOrderPart{\mathsf{UC}_{\Baire}}$, hence $\mathsf{UC}_{\Baire}\not\le_{\mathrm{W}}\ScList[2^\mathbb{N}]$.
\item
By \thref{Ucbaire_below_pk} and \thref{Pkbaire_equiv_wsclistbaire}, we have that $\mathsf{UC}_{\Baire}<_\mathrm{W}\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]$. Moreover, in the proof of (ii), we showed $\ScCount[2^\mathbb{N}]\le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}$, which by \thref{fop_pi11} implies $\ScCount[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$. Since $\ScList[2^\mathbb{N}] \le_{\mathrm{W}} \ScCount[2^\mathbb{N}] \times \wScList[2^\mathbb{N}]$ and $\wScList[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$ is immediate, we have that $\ScList[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}] \times \wScList[\mathbb{N}^\mathbb{N}]$. Recalling that by \thref{Wsclist_parallelizable} $\wScList[\mathbb{N}^\mathbb{N}]$ is parallelizable, we obtain that $\ScList[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$. Since $\mathsf{UC}_{\Baire}~|_{\mathrm{W}~}\ScList[2^\mathbb{N}]$ by the previous item, we also deduce that the reduction is strict.
By the fact that $\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ (\thref{Pkbaire_equiv_wsclistbaire}), we have that \thref{Pkbaire_below_sclistbaire} and \thref{Ucbaire_below_pk} imply respectively $\wScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \ScList[\mathbb{N}^\mathbb{N}]$ and $\mathsf{C}_{\Baire}~|_{\mathrm{W}~}\wScList[\mathbb{N}^\mathbb{N}]$. These two relationships imply that $\ScList[\mathbb{N}^\mathbb{N}]\not\le_{\mathrm{W}} \mathsf{C}_{\Baire}$.\qedhere
\end{enumerate}
\end{proof}
\subsection{The full Cantor-Bendixson theorem}
\label{Fullcantorbendixson}
The following multi-valued functions formulate the Cantor-Bendixson theorem as a computational problem.
\begin{definition}
Let $\mathcal{X}$ be a computable Polish space. We define two multi-valued functions $\multifunction{\wCB[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{X}} \times (2\times \mathcal{X})^\mathbb{N}}$ and $\multifunction{\CB[\mathcal{X}]}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{X}} \times (\mathbb{N} \times (2\times \mathcal{X})^\mathbb{N})}$ by
$$\wCB[\mathcal{X}](A) := \PK[\mathcal{X}](A) \times \wScList[\mathcal{X}](A) \text{ and } \CB[\mathcal{X}](A):= \PK[\mathcal{X}](A) \times \ScList[\mathcal{X}](A).$$
The multi-valued functions $\multifunction{\wCB}{\mathbf{Tr}}{\mathbf{Tr} \times(2\times \mathbb{N}^\mathbb{N})^\mathbb{N}}$ and $\multifunction{\CB}{\mathbf{Tr}}{\mathbf{Tr} \times (\mathbb{N}\times(2\times \mathbb{N}^\mathbb{N})^\mathbb{N})}$ are defined similarly, substituting $\PK[\mathcal{X}]$ with $\PK$ and $(\mathsf{w})\ScList[\mathcal{X}]$ with $(\mathsf{w})\ScList[\mathbb{N}^\mathbb{N}]$ in the definitions above.
\end{definition}
\begin{proposition}
\thlabel{CBtree}
$\wCB\equiv_{\mathrm{W}}\CB\equiv_{\mathrm{W}}\parallelization{\mathsf{WF}}$.
\end{proposition}
\begin{proof}
Clearly $\PK\le_{\mathrm{W}}\wCB[]\le_{\mathrm{W}}\CB[]$ and since, by \thref{Chipistrongpktree}, $\parallelization{\mathsf{WF}}\equiv_{\mathrm{W}}\PK$, we have that $\parallelization{\mathsf{WF}}\le_{\mathrm{W}}\wCB[]\le_{\mathrm{W}}\CB[]$. For the opposite directions, notice that $\wCB[]\le_{\mathrm{W}}\PK\times\wScList[\mathbb{N}^\mathbb{N}]$ and $\CB[]\le_{\mathrm{W}}\PK\times\ScList[\mathbb{N}^\mathbb{N}]$. By \thref{Pkbaire_equiv_wsclistbaire,sclist_below_pica}, we have that $\wScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W}\ScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \parallelization{\mathsf{WF}}$. As $\parallelization{\mathsf{WF}}$ is clearly parallelizable, this concludes the proof.
\end{proof}
\begin{theorem}
\thlabel{equivalencesCBbairecantor}
$\wCB[2^\mathbb{N}]\equiv_{\mathrm{W}} \CB[2^\mathbb{N}] \equiv_{\mathrm{W}} \PK[2^\mathbb{N}] \equiv_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \wCB[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \ScList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \CB[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB$.
\end{theorem}
\begin{proof}
By \thref{Pkbaire_equiv_wsclistbaire,Cantor_and_baire_same2}, $\wScList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[2^\mathbb{N}]$. By \thref{sclistcantor_summary}, $\wScList[2^\mathbb{N}]<_\mathrm{W} \ScList[2^\mathbb{N}]<_\mathrm{W} \wScList[\mathbb{N}^\mathbb{N}]$. Since, by \thref{Pkbaire_parallelizable}, $\PK[\mathbb{N}^\mathbb{N}]$ is parallelizable, we obtain all the equivalences. Also, $\ScList[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \CB[\mathbb{N}^\mathbb{N}]$ is immediate, and $\PK[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \ScList[\mathbb{N}^\mathbb{N}]$ was already proven in \thref{Pkbaire_below_sclistbaire}. To prove $\CB[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB$, notice that the reduction is straightforward and $\CB[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \wCB[\mathbb{N}^\mathbb{N}]* \ScCount[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]*\ScCount[\mathbb{N}^\mathbb{N}]$. To conclude the proof, observe that $\parallelization{\mathsf{WF}}\equiv_{\mathrm{W}} \CB[]$ (\thref{CBtree}) and that $\parallelization{\mathsf{WF}} \not\le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]*\ScCount[\mathbb{N}^\mathbb{N}]$ (see the proof of \thref{sclist_below_pica}), hence $\CB\not \le_{\mathrm{W}} \CB[\mathbb{N}^\mathbb{N}] $.
\end{proof}
The situation here is similar to the one discussed above for $\ScList[\mathbb{N}^\mathbb{N}]$: we do not know if $\mathsf{C}_{\Baire}<_\mathrm{W} \CB[\mathbb{N}^\mathbb{N}]$ or $\mathsf{C}_{\Baire}~|_{\mathrm{W}~} \CB[\mathbb{N}^\mathbb{N}]$. It is also open whether $\ScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \CB[\mathbb{N}^\mathbb{N}]$ (see \thref{questioncbaire,question:cbbaireandlist}).
In the next theorem we explore what can be added to $\PK[\mathbb{N}^\mathbb{N}]$ in order to compute $\CB[\mathbb{N}^\mathbb{N}]$.
\begin{theorem}
\thlabel{AttemptCBaireCN}
$\CB[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \ustar{\mathsf{WF}}\times \PK[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}*\PK[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
For the first reduction, by \thref{equivalencesCBbairecantor} we have $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\wCB[\mathbb{N}^\mathbb{N}]$, and clearly $\CB[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}}\ScCount[\mathbb{N}^\mathbb{N}]\times \wCB[\mathbb{N}^\mathbb{N}]$; moreover, by \thref{sclist_below_pica,Summaryfopustar}, $\ScCount[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \ustar{\mathsf{WF}}$.
For the second reduction, notice that ${\ustar{\Choice{\mathbb{N}}}}\equiv_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}$ (\cite[Theorem 7.2]{valentisolda}). Furthermore, since $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\wCB[\mathbb{N}^\mathbb{N}]$, $\PK[\mathbb{N}^\mathbb{N}]$ is parallelizable (\thref{Pkbaire_parallelizable}) and $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}}\parallelization{\mathsf{WF}_{\sierpinski}}$ (\thref{Pkcantor_idPiSigma}), it suffices to show that $ \ustar{\mathsf{WF}} \le_{\mathrm{W}} {\ustar{\Choice{\mathbb{N}}}} *\parallelization{\mathsf{WF}_{\sierpinski}}$.
The input for $\ustar{\mathsf{WF}}$ is a sequence $(T^i)_{i \in \mathbb{N}} \in \mathbf{Tr}^\mathbb{N}$. Let $(p_i)_{i \in \mathbb{N}}$ be a name for a solution of $\parallelization{\mathsf{WF}_{\sierpinski}}((T^i)_{i \in \mathbb{N}})$ (recall that the only name for $0_\mathbb{S}$ is $0^\mathbb{N}$). For every $i\in \mathbb{N}$, the input for the $i$-th instance of $\codedChoice{}{}{\mathbb{N}}$ is
\[A_i := \{n:(n=0\land p_i=0^\mathbb{N}) \lor (n>0 \land (\exists m<n)(p_i(m)=1))\}.\]
To conclude the proof it suffices to notice that for any $i$ and $n_i \in \codedChoice{}{}{\mathbb{N}}(A_i)$, $T^i \in \mathcal{WF}$ iff $n_i\neq0$.
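The verification of the last claim amounts to the following case analysis, immediate from the definition of $A_i$ (here $m_0$ denotes the least $m$ with $p_i(m)=1$, when it exists):
\[
A_i=
\begin{cases}
\{0\} & \text{if } p_i=0^\mathbb{N}, \text{ i.e.\ if } T^i \notin \mathcal{WF};\\
\{n \in \mathbb{N} : n>m_0\} & \text{otherwise, i.e.\ if } T^i \in \mathcal{WF}.
\end{cases}
\]
In both cases $A_i \neq \emptyset$, and every $n_i \in \codedChoice{}{}{\mathbb{N}}(A_i)$ is nonzero exactly when $T^i \in \mathcal{WF}$.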
\end{proof}
\section{What happens in arbitrary computable metric spaces}
\label{otherspaces}
In this section we study the functions connected to the perfect set and Cantor-Bendixson theorems in arbitrary computable metric spaces. We start by collecting some facts about maps between spaces of closed sets.
\begin{proposition}[{\cite[proof of Proposition 3.7]{closedChoice}}]
\thlabel{surjections}
Let $\mathcal{X}$ and $\mathcal{Y}$ be computable metric spaces and $\partialfunction{s}{\mathcal{X}}{\mathcal{Y}}$ be a computable function with $\operatorname{dom}(s) \in \Pi_1^0(\mathcal{X})$. Then the function $\function{S}{\negrepr{\mathcal{Y}}}{\negrepr{\mathcal{X}}}$, $M\mapsto s^{-1}(M)$, is computable as well.
\end{proposition}
\begin{definition}[{\cite{BorelComplexity}}]
\thlabel{richness}
Given two represented spaces $\mathcal{X}$ and $\mathcal{Y}$, we say that $\function{\iota}{\mathcal{X}}{\mathcal{Y}}$ is a \emph{computable embedding} if $\iota$ is injective and $\iota$ as well as its partial inverse $\iota^{-1}$ are computable.
If $\mathcal{X}$ is a computable metric space, we say that $\mathcal{X}$ is \emph{rich} if there exists a computable embedding of $2^\mathbb{N}$ into $\mathcal{X}$.
\end{definition}
As observed in \cite{BorelComplexity}, any computable embedding $\function{\iota}{2^\mathbb{N}}{\mathcal{X}}$ is such that $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{X})$. Moreover, by \cite[Theorem 6.2]{BorelComplexity}, any perfect computable metric space $\mathcal{X}$ is rich.
\begin{theorem}[{\cite[Theorem 3.7]{BorelComplexity}}]
\thlabel{embeddingtheorem}
Let $\mathcal{X}$ and $\mathcal{Y}$ be computable metric spaces and $\function{\iota}{\mathcal{X}}{\mathcal{Y}}$ be a computable embedding with $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{Y})$. Then the map $\function{J}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{Y}}}, A \mapsto \iota(A)$ is computable and admits a partial computable right inverse.
\end{theorem}
The following is an analogue of \cite[Corollaries 4.3 and 4.4]{closedChoice}.
\begin{lemma}
\thlabel{richspacesPKGeneral}
Let $\mathcal{X}$ and $\mathcal{Y}$ be computable Polish spaces and $\function{\iota}{\mathcal{X}}{\mathcal{Y}}$ be a computable embedding with $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{Y})$. Let $f$ be any of the following: $\PST$, $(\mathsf{w})\List$, $\PK$, $(\mathsf{w})\ScList$, $(\mathsf{w})\CB$. Then $f_{\mathcal{X}}\le_{\mathrm{W}} f_{\mathcal{Y}}$. In particular, $f_{2^\mathbb{N}} \le_{\mathrm{W}} f_{\mathcal{Y}}$ for every rich computable metric space $\mathcal{Y}$.
\end{lemma}
\begin{proof}
By \thref{embeddingtheorem}, the map $\function{J}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{Y}}}$ and its partial inverse are computable. Given $A \in \operatorname{dom}(f_\mathcal{X})$, we have that $J(A) \in \operatorname{dom}(f_\mathcal{Y})$, as cardinality is preserved by $J$. Moreover, $J(A)$ is homeomorphic to $A$. Depending on $f$, we use combinations of copies of $J^{-1}$ and $\iota^{-1}$ to compute from a solution for $f_\mathcal{Y}(J(A))$ a solution for $f_\mathcal{X}(A)$. For example, considering the case $f = \CB$, we have that if $(B, (n,(b_i,y_i)_{i \in \mathbb{N}})) \in \CB[\mathcal{Y}](J(A))$ then $(J^{-1}(B), (n, (b_i,\iota^{-1}(y_i))_{i \in \mathbb{N}})) \in \CB[\mathcal{X}](A)$.
\end{proof}
The following lemma is immediate using \thref{richspacesPKGeneral} and either \thref{cantor_and_baire_same} or \thref{Cantor_and_baire_same2} or \thref{equivalencesCBbairecantor}.
\begin{lemma}
\thlabel{richspacesPK}
Let $f$ be any of the following: $\PST$, $\PK$, $\wCB$. For every rich computable Polish space $\mathcal{X}$, $f_{\mathbb{N}^\mathbb{N}} \le_{\mathrm{W}} f_\mathcal{X}$. If moreover there exists a computable embedding $\function{\iota}{\mathcal{X}}{\mathbb{N}^\mathbb{N}}$ with $\mathsf{range}(\iota)\in \Pi_1^0(\mathbb{N}^\mathbb{N})$, then $f_\mathcal{X}\equiv_{\mathrm{W}} f_{\mathbb{N}^\mathbb{N}}$.
\end{lemma}
\subsection{Perfect sets}
\thref{richspacesPK} implies that $\PST[\mathcal{X}]\equiv_{\mathrm{W}} \PST[\mathbb{N}^\mathbb{N}]$ whenever $\mathcal{X}$ is a rich $0$-dimensional computable Polish space. We can however obtain this result also for the unit interval.
\begin{theorem}
\thlabel{PST01}
$\PST[{[0,1]}]\equiv_{\mathrm{W}} \PST[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
The right-to-left direction follows from \thref{richspacesPK}.
For the opposite direction, since $\PST[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PST[2^\mathbb{N}]$ by \thref{cantor_and_baire_same}, we show instead that $\PST[{[0,1]}]\le_{\mathrm{W}}\PST[2^\mathbb{N}]$.
Let $\function{s_{\mathsf{b}}}{2^\mathbb{N}}{[0,1]}$ be the computable function that computes a real from its binary expansion:
\[
s_{\mathsf{b}}(p) = \sum_{i \in \mathbb{N}} \frac{p(i)}{2^{i+1}}.
\]
Notice that $s_{\mathsf{b}}$ is not injective (and hence not an embedding), as $s_{\mathsf{b}}(\sigma01^\mathbb{N}) = s_{\mathsf{b}}(\sigma10^\mathbb{N})$ for any $\sigma \in 2^{<\mathbb{N}}$; however, these are the only counterexamples to injectivity. In particular, for every $x \in [0,1]$, $\length{s_{\mathsf{b}}^{-1}(x)}\leq 2$.
For $\sigma \in 2^{<\mathbb{N}}$ we let $I^\sigma:=\{x \in [0,1]:(\forall p \in 2^\mathbb{N})(s_{\mathsf{b}}(p)=x \implies \sigma\sqsubset p)\}$. Notice that if $\sigma$ is not constant then $I^\sigma=(s_{\mathsf{b}}(\sigma0^\mathbb{N}), s_{\mathsf{b}}(\sigma1^\mathbb{N}))$, while $I^{0^n}=[0, s_{\mathsf{b}}(0^n1^\mathbb{N}))$ and $I^{1^n}=(s_{\mathsf{b}}(1^n0^\mathbb{N}), 1]$: all these intervals are open subsets of $[0,1]$.
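To illustrate these definitions with a concrete computation (immediate from the series defining $s_{\mathsf{b}}$): taking $\sigma$ to be the empty string in the first observation and $\sigma=01$ in the second, we get
\[
s_{\mathsf{b}}(01^\mathbb{N}) = \sum_{i \geq 1} 2^{-(i+1)} = \frac{1}{2} = s_{\mathsf{b}}(10^\mathbb{N}), \qquad I^{01} = (s_{\mathsf{b}}(010^\mathbb{N}), s_{\mathsf{b}}(011^\mathbb{N})) = \left(\frac{1}{4},\frac{1}{2}\right).
\]
Indeed, the reals in $(1/4,1/2)$ are exactly those all of whose binary expansions begin with $01$: the endpoint $1/4$ is excluded because $s_{\mathsf{b}}(001^\mathbb{N})=1/4$, and $1/2$ is excluded because $s_{\mathsf{b}}(10^\mathbb{N})=1/2$.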
By \thref{surjections} given $A \in \negrepr{[0,1]}$ we can compute $s_{\mathsf{b}}^{-1}(A)$.
Although \thref{embeddingtheorem} does not apply, we claim that $\function{J}{\negrepr{2^\mathbb{N}}}{\negrepr{[0,1]}}$, $C\mapsto s_{\mathsf{b}}(C)$, is computable (notice that, as $2^\mathbb{N}$ is compact and $s_{\mathsf{b}}$ is continuous, the image of a closed set is closed). To prove that $J$ is computable, we proceed as follows: let $S \in \mathbf{Tr}_2$ be a name for $C \in \negrepr{2^\mathbb{N}}$, i.e.\ a tree such that $\body{S}=C$. Recall that, by \thref{Complexityresults}(i), $\mathcal{WF}_2$ is a $\Sigma_1^0$ set. We compute $B \in \negrepr{[0,1]}$ as follows:
\begin{enumerate}[(i)]
\item whenever we witness that ${S_\sigma} \in \mathcal{WF}_2$, we list $I^\sigma$ in the complement of $B$;
\item whenever we witness that ${S_{\sigma01^i}}$ and ${ S_{\sigma10^i}}$ are in $\mathcal{WF}_2$ for some $i \in \mathbb{N}$, we list in the complement of $B$ the open interval $(s_{\mathsf{b}}(\sigma01^i0^\mathbb{N}),s_{\mathsf{b}}(\sigma10^i1^\mathbb{N}))$ which coincides with $I^{\sigma01^i} \cup I^{\sigma10^i} \cup \{s_{\mathsf{b}}(\sigma01^\mathbb{N})\}$.
\end{enumerate}
We need to check that $B=J(C)$, i.e.\ for every $x \in [0,1]$, $x \notin B$ iff $s_{\mathsf{b}}^{-1}(x)\cap C = \emptyset$.
If $x \notin B$ because $x \in I^\sigma$ for some $\sigma$ with ${S_\sigma} \in \mathcal{WF}_2$, then $\sigma$ is a prefix of every element of $s_{\mathsf{b}}^{-1}(x)$ and hence $s_{\mathsf{b}}^{-1}(x)\cap C = \emptyset$. If $x \notin B$ because $x \in (s_{\mathsf{b}}(\sigma01^i0^\mathbb{N}),s_{\mathsf{b}}(\sigma10^i1^\mathbb{N}))$ for some $\sigma$ and $i$, then either $x \in I^{\sigma01^i} \cup I^{\sigma10^i}$, in which case we can apply the previous argument to one of $\sigma01^i$ and $\sigma10^i$, or $x = s_{\mathsf{b}}(\sigma01^\mathbb{N}) = s_{\mathsf{b}}(\sigma10^\mathbb{N})$; in this case we know that both $\sigma01^\mathbb{N}$ and $\sigma10^\mathbb{N}$ do not belong to $C$.
For the converse, consider first the case where $s_{\mathsf{b}}^{-1}(x) = \{q\}$ and $x \notin \{0,1\}$: then $q$ is not eventually constant. Since $q \notin C$, there exists $\sigma\sqsubset q$ such that $\sigma \notin S$ and hence $I^\sigma$ is listed in the complement of $B$. As $q \notin \{\sigma0^\mathbb{N}, \sigma1^\mathbb{N}\}$, we obtain that $x \in I^\sigma$ and hence $x \notin B$. The case in which $x \in \{0,1\}$ is analogous.
If ${s_{\mathsf{b}}^{-1}(x)} = \{q_0,q_1\}$ then, as noticed above, there exists $\tau$ such that $q_0 = \tau01^\mathbb{N}$ and $q_1=\tau10^\mathbb{N}$. Since $q_0,q_1 \notin C$ we have $\tau01^i, \tau10^i \notin S$ for some $i$. Then $x \in (s_{\mathsf{b}}(\tau01^i0^\mathbb{N}),s_{\mathsf{b}}(\tau10^i1^\mathbb{N}))$, and this interval is listed in the complement of $B$ by condition (ii). Therefore, $x \notin B$.
We now describe the reduction. Given an uncountable $A\in \negrepr{[0,1]}$, we can compute $s_{\mathsf{b}}^{-1}(A) \in \negrepr{2^\mathbb{N}}$ which is uncountable as well. Let $P \in \PST[2^\mathbb{N}](s_{\mathsf{b}}^{-1}(A))$ and $B = J(P)$. It suffices to show that $B \subseteq A$ and that $B$ is perfect.
If $x \in B$ then there exists $q \in s_{\mathsf{b}}^{-1}(x) \cap P$. Since $P \subseteq s_{\mathsf{b}}^{-1}(A)$ we get that $s_{\mathsf{b}}(q)=x \in A$. This shows that $B \subseteq A$.
It remains to show that $B$ is perfect. Suppose not: then there exists $x \in B$ and some open interval $I \subseteq [0,1]$ such that $I \cap B=\{x\}$. By continuity of $s_{\mathsf{b}}$, $s_{\mathsf{b}}^{-1}(I \cap B)$ is an open set in $P$ which has at most two members; these points are isolated in $P$, contradicting the perfectness of $P$.
\end{proof}
\begin{remark}
Following the ideas of the previous proof, and with some extra care, it is possible to prove that $\PST[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PST[\mathbb{R}]$: replace $s_{\mathsf{b}}$ with $\function{s_{\mathsf{b}}'}{\mathbb{N}\times2^\mathbb{N}}{\mathbb{R}}$ defined by
\[
s_{\mathsf{b}}'(n,p) = (-1)^{n} \cdot \left\lceil\frac{n}{2}\right\rceil + s_{\mathsf{b}}(p).
\]
\end{remark}
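To check that the function $s_{\mathsf{b}}'$ of the previous remark is surjective, it suffices to compute its range on each fiber $\{n\}\times 2^\mathbb{N}$ directly from the definition:
\[
s_{\mathsf{b}}'(0,2^\mathbb{N})=[0,1], \quad s_{\mathsf{b}}'(1,2^\mathbb{N})=[-1,0], \quad s_{\mathsf{b}}'(2,2^\mathbb{N})=[1,2], \quad s_{\mathsf{b}}'(3,2^\mathbb{N})=[-2,-1], \quad \dots
\]
so the images of the fibers are unit intervals alternating around $0$ and covering all of $\mathbb{R}$.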
We do not know whether there exist rich computable Polish spaces $\mathcal{X}$ such that $\PST[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \PST[\mathcal{X}]$ (see \thref{question:richspacesBaire}).
\subsection{(Weak) lists}
The following classical fact is useful in the next proofs.
\begin{theorem}[{\cite[Theorem 3E.6]{Moschovakis}}]
\thlabel{theorem3e6moschovakis}
For every computable metric space $\mathcal{X}$ there is a computable surjection $\function{s}{\mathbb{N}^\mathbb{N}}{\mathcal{X}}$ and $A \in \Pi_1^0(\mathbb{N}^\mathbb{N})$ such that $s$ is one-to-one on $A$ and $s(A)=\mathcal{X}$.
\end{theorem}
\begin{lemma}
\thlabel{listupperbound}
Let $\mathcal{X}$ be a computable metric space. Then $(\mathsf{w})\List[\mathcal{X}]\le_{\mathrm{W}}( \mathsf{w})\List[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
We prove only $\List[\mathcal{X}]\le_{\mathrm{W}}\List[\mathbb{N}^\mathbb{N}]$ as the other reduction is similar. Let $s$ and $A$ be as in \thref{theorem3e6moschovakis} and $s_A$ be the restriction of $s$ to $A$. By \thref{surjections}, the function $\function{S_A}{\negrepr{\mathcal{X}}}{\negrepr{\mathbb{N}^\mathbb{N}}}$ such that $S_A(M)=s_A^{-1}(M)$ is computable: hence given $C \in \negrepr{\mathcal{X}}$ and $(n,(b_i,p_i)_{i \in \mathbb{N}}) \in \List[\mathbb{N}^\mathbb{N}](S_A(C))$ we have that $(n,(b_i,s(p_i))_{i \in \mathbb{N}}) \in \List[\mathcal{X}](C)$.
\end{proof}
\begin{lemma}
\thlabel{richspaceslist}
Let $\mathcal{X},\mathcal{Y}$ be computable metric spaces and $\function{\iota}{\mathcal{X}}{\mathcal{Y}}$ be a computable embedding with $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{Y})$. Then $(\mathsf{w})\List[\mathcal{X}] \le_{\mathrm{W}} (\mathsf{w})\List[\mathcal{Y}]$. In particular, $(\mathsf{w})\List[2^\mathbb{N}] \le_{\mathrm{W}} (\mathsf{w})\List[\mathcal{Y}]$ for every rich computable metric space $\mathcal{Y}$.
\end{lemma}
\begin{proof}
We only prove that $\List[\mathcal{X}]\le_{\mathrm{W}}\List[\mathcal{Y}]$, as the other reduction is similar. By \thref{embeddingtheorem} the map $\function{J}{\negrepr{\mathcal{X}}}{\negrepr{\mathcal{Y}}}$ is computable. Given $A \in \operatorname{dom}(\List[\mathcal{X}])$ we have that $J(A) \in \operatorname{dom}(\List[\mathcal{Y}])$: moreover, given $(n,(b_i,p_i)_{i \in \mathbb{N}}) \in \List[\mathcal{Y}](J(A))$ we have that $(n,(b_i,\iota^{-1}(p_i))_{i \in \mathbb{N}}) \in \List[\mathcal{X}](A)$.
\end{proof}
\begin{lemma}
\thlabel{wlistequivelenceotherspaces}
$\wList[\mathbb{R}]\equiv_{\mathrm{W}} \wList[{[0,1]}]\equiv_{\mathrm{W}} \wList[2^\mathbb{N}]$.
\end{lemma}
\begin{proof}
The fact that $\wList[{[0,1]}]\le_{\mathrm{W}}\wList[\mathbb{R}]$ is immediate and $\wList[2^\mathbb{N}]\le_{\mathrm{W}} \wList[{[0,1]}]$ follows from \thref{richspaceslist}.
Recall that $\wList[2^\mathbb{N}]$ is parallelizable (\thref{wsclistcantorparallelizable}) and notice that it is straightforward to check that $\wList[\mathbb{R}] \le_{\mathrm{W}} \parallelization{\wList[{[0,1]}]}$. Hence, it suffices to show that $\wList[{[0,1]}]\le_{\mathrm{W}}\wList[2^\mathbb{N}]$. The function $s_{\mathsf{b}}$ from the proof of \thref{PST01} is useful here as well. Given an input $A \in \negrepr{[0,1]}$, let $(b_i,p_i)_{i \in \mathbb{N}}\in \wList[2^\mathbb{N}](s_{\mathsf{b}}^{-1}(A))$: then $(b_i,s_{\mathsf{b}}(p_i))_{i \in \mathbb{N}}$ is a solution of $\wList[{[0,1]}](A)$.
\end{proof}
Notice that the argument above shows that $\wList[\mathcal{X}]\le_{\mathrm{W}}\wList[2^\mathbb{N}]$ for every computable metric space $\mathcal{X}$ such that there exists an admissible representation $\partialfunction{\delta}{2^\mathbb{N}}{\mathcal{X}}$ with $\operatorname{dom} (\delta) \in \Pi_1^0(2^\mathbb{N})$ and such that $\length{\delta^{-1}(x)}\leq \aleph_0$ for every $x \in \mathcal{X}$. In particular $\wList[ {[0,1]^d}] \equiv_{\mathrm{W}} \wList[2^\mathbb{N}]$ for any $d \in \mathbb{N}$.
Notice that the situation for $\List[\mathcal{X}]$ is less clear: for example, we do not know if, in contrast to what happens for $\wList[\mathcal{X}]$, $\List[2^\mathbb{N}] <_\mathrm{W} \List[\mathbb{R}]$ (see \thref{question:list}).
We now consider listing problems on countable spaces. Let us start with finite spaces: for $n>0$, we denote by $\mathbf{n}$ the space $\{0,\dots,n-1\}$ with the discrete topology and an arbitrary computable metric, which is obviously a computable metric space.
\begin{proposition}
\thlabel{FiniteListSame}
For every $n >0$, $\wList[\mathbf{n}]\equiv_{\mathrm{W}} \List[\mathbf{n}] \equiv_{\mathrm{W}} \mathsf{LPO}^{n}$ and therefore $\List[\mathbf{n}]<_\mathrm{W} \List[\mathbf{n+1}]$.
\end{proposition}
\begin{proof}
The fact that $\wList[\mathbf{n}]\le_{\mathrm{W}} \List[\mathbf{n}]$ is trivial. For the converse, let $A\in \negrepr{\mathbf{n}}$ and let $(b_i,m_i)_{i \in \mathbb{N}} \in \wList[\mathbf{n}](A)$. Notice that for every $m<n$ exactly one of $m \notin A$ and $(\exists i) (b_i=1 \land m_i=m)$ holds: since both conditions are $\Sigma_1^{0}$, we can compute whether $m \in A$ or not. This allows us to compute $\length{A}$, which, together with $(b_i,m_i)_{i \in \mathbb{N}}$, yields a name for a solution of $\List[\mathbf{n}](A)$ (see \thref{equivalentcersionslists}).
To show that $\wList[\mathbf{n}]\le_{\mathrm{W}} \mathsf{LPO} ^{n}$, let $A \in \negrepr{\mathbf{n}}$ and fix a computable formula $\varphi$ such that $i \in A$ iff $(\forall k) \varphi(i,k,A)$. The input $p_i \in 2^\mathbb{N}$ for the $i$-th instance of $ \mathsf{LPO} $ is defined by $p_i(k)=1$ iff $\lnot \varphi(i,k,A)$, so that $ \mathsf{LPO} (p_i)=1 \iff i \in A$. For all $i \in \mathbb{N}$ define
\[
b_i := \begin{cases}
1 & \text{if } i<n \text{ and } \mathsf{LPO} (p_i)=1;\\
0 & \text{otherwise.}
\end{cases}
\qquad
x_i := \begin{cases}
i & \text{if } i<n;\\
0 & \text{otherwise.}
\end{cases}
\]
Then, $(b_i,x_i)_{i \in \mathbb{N}} \in \wList[\mathbf{n}](A)$.
For the opposite direction, we show that $\mathsf{LPO}^{n}\le_{\mathrm{W}} \wList[\mathbf{n}]$. Let $(p_j)_{j < n}$ be an input for $\mathsf{LPO}^{n}$. Consider $A := \{j < n: p_j=0^\mathbb{N}\} \in \negrepr{\mathbf{n}}$ and let $(b_i,m_i)_{i \in \mathbb{N}} \in \wList[\mathbf{n}](A)$. Notice that, for every $j<n$, $p_j = 0^\mathbb{N}$ iff $(\exists i) (b_i=1 \land m_i = j)$. We can thus compute $\mathsf{LPO}(p_j)$ by searching for $i$ such that either $p_j(i)=1$ or $b_i=1$ and $m_i=j$.
The fact that $\List[\mathbf{n}]<_\mathrm{W} \List[\mathbf{n+1}]$ follows from $\mathsf{LPO}^{n}<_\mathrm{W} \mathsf{LPO}^{n+1}$ (\cite[Corollary 6.7]{BG09}).
\end{proof}
We say that a computable metric space $\mathcal{X}$ is \emph{effectively countable} if there exists a computable surjection $\function{f}{\mathbb{N}}{\mathcal{X}}$.
\begin{lemma}
\thlabel{effectivelycountable}
For any computable metric space $\mathcal{X}$ which is effectively countable, $\wList[\mathcal{X}]\le_{\mathrm{W}}\mathsf{lim}$.
\end{lemma}
\begin{proof}
Fix a computable surjection $\function{f}{\mathbb{N}}{\mathcal{X}}$. Recalling that $\mathsf{lim} \equiv_{\mathrm{W}} \parallelization{ \mathsf{LPO} }$, the proof is a straightforward generalization of the proof of $\wList[\mathbf{n}]\le_{\mathrm{W}} \mathsf{LPO} ^{n}$ in \thref{FiniteListSame}.
\end{proof}
We say that a computable metric space $\mathcal{X}$ is \emph{effectively infinite} if there exists a computable sequence $(U_i)_{i \in \mathbb{N}}$ of open sets in $\mathcal{X}$ such that $(\forall i)(U_i \not\subseteq \bigcup_{j\neq i} U_j)$.
\begin{lemma}
\thlabel{effectivelyinfinite}
For every countable computable metric space $\mathcal{X}$ which is effectively infinite, $\mathsf{lim} \le_{\mathrm{W}} \wList[\mathcal{X}]$.
\end{lemma}
\begin{proof}
Fix a sequence $(U_i)_{i \in \mathbb{N}}$ witnessing that $\mathcal{X}$ is effectively infinite. Recall that $\mathsf{lim} \equiv_{\mathrm{W}} \parallelization{ \mathsf{LPO} }$ so that it suffices to show $\parallelization{ \mathsf{LPO} }\le_{\mathrm{W}}\wList[\mathcal{X}]$.
Given an input $(p_i)_{i \in \mathbb{N}}$ for $\parallelization{ \mathsf{LPO} }$, let $A:=\{x \in \mathcal{X}: (\forall i)(x \in U_i \implies p_i=0^\mathbb{N})\} \in \negrepr{\mathcal{X}}$ and notice that $\length{A} \leq \aleph_0$ because $\mathcal{X}$ is countable. Notice that $p_i = 0^\mathbb{N}$ iff $A \cap U_i \neq \emptyset$: for the forward direction, use the existence of some $y_i \in U_i$ with $y_i \notin \bigcup_{j\neq i} U_j$, which is guaranteed by the definition of effective infiniteness.
Fix $(b_i,x_i)_{i \in \mathbb{N}} \in \wList[\mathcal{X}](A)$. By the above observation, for every $i \in \mathbb{N}$ we get
$$p_i=0^\mathbb{N} \iff (\exists k)(b_k=1\land x_k \in U_i).$$
Since we showed the equivalence of the $\Pi_1^0$ condition $p_i=0^\mathbb{N}$ with a $\Sigma_1^0$ condition, we can compute $ \mathsf{LPO} (p_i)$ for every $i \in \mathbb{N}$.
\end{proof}
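As a concrete illustration of the last two definitions (a direct check): the discrete space $\mathbb{N}$ is effectively countable via the identity surjection, and effectively infinite via the computable sequence of open singletons
\[
U_i := \{i\} \qquad (i \in \mathbb{N}),
\]
which are nonempty and pairwise disjoint, so that $U_i \not\subseteq \bigcup_{j\neq i} U_j$ for every $i$. The same argument, with $U_i := \{2^{-i}\}$, shows that the space $\mathcal{K}$ of \thref{corollarywListnats} is effectively infinite.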
Many natural countable computable metric spaces, not necessarily Polish, are easily seen to be both effectively countable and effectively infinite. We can thus combine \thref{effectivelycountable,effectivelyinfinite} to obtain $\wList[\mathcal{X}] \equiv_{\mathrm{W}} \mathsf{lim}$ for several countable spaces, both compact and non-compact:
\begin{corollary}
\thlabel{corollarywListnats}
$\wList[\mathbb{N}] \equiv_{\mathrm{W}} \wList[\mathcal{K}] \equiv_{\mathrm{W}} \wList[\mathbb{Q}] \equiv_{\mathrm{W}} \mathsf{lim}$, where $\mathcal{K} = \{0\} \cup \{2^{-n}: n \in \mathbb{N}\} $.
\end{corollary}
\begin{proposition}
\thlabel{wlistNparallelizable}
$\wList[\mathbb{N}]<_\mathrm{W} \List[\mathbb{N}]$.
\end{proposition}
\begin{proof}
The fact that $\wList[\mathbb{N}]\le_{\mathrm{W}}\List[\mathbb{N}]$ is trivial. For strictness, we show that $\mathsf{LPO}'\le_{\mathrm{W}} \List[\mathbb{N}]$: since $\mathsf{LPO}'~|_{\mathrm{W}~}\mathsf{lim}$ and, by \thref{corollarywListnats}, $\wList[\mathbb{N}]\equiv_{\mathrm{W}}\mathsf{lim}$, this suffices to conclude the proof.
We can think of $ \mathsf{LPO} '$ as the function that, given in input $p \in 2^\mathbb{N}$, is such that $ \mathsf{LPO} '(p)=1 \iff (\exists^\infty i)(p(i)=0)$. For any $p \in 2^\mathbb{N}$, let $A:= \{i:p(i)= 0\}$: given $(n,(b_i,p_i)_{i \in \mathbb{N}}) \in \List[\mathbb{N}](A)$ it is clear that $ \mathsf{LPO} '(p)=1$ iff $n=0$.
\end{proof}
\subsection{The Cantor-Bendixson theorem}
Notice that it makes sense to study $\PK[\mathcal{X}]$ only when $\mathcal{X}$ is an uncountable computable Polish space: indeed, if $\mathcal{X}$ is countable, then $\PK[\mathcal{X}]$ is the function with constant value $\emptyset$.
\begin{lemma}
\thlabel{pi11countabilityinotherspaces}
For any computable Polish space $\mathcal{X}$ the set $\{C \in \negrepr{\mathcal{X}}:\length{C} \leq \aleph_0\}$ is $\Pi_1^1$.
\end{lemma}
\begin{proof}
Let $s$ and $A$ be as in \thref{theorem3e6moschovakis} and denote by $s_A$ the restriction of $s$ to $A$. By \thref{surjections}, the function $\function{S}{\negrepr{\mathcal{X}}}{\negrepr{\mathbb{N}^\mathbb{N}}}$ defined by $S(C)=s_A^{-1}(C)$ is computable. Since $s_A$ is a bijection, we obtain that $\length{C}=\length{S(C)}$. Recall from \S \ref{representedspaces} that we can represent $S(C)$ via some $T \in \mathbf{Tr}$ such that $S(C)=\body{T}$. To conclude the proof notice that $\length{S(C)} \leq \aleph_0$ iff $T \in \mathcal{T}^{\leq\aleph_0}$ and, by \thref{Complexityresults}(ii), $\mathcal{T}^{\leq\aleph_0}$ is a $\Pi_1^1$ set.
\end{proof}
Recall that in \S \ref{representedspaces} we fixed an enumeration $(B_i)_{i \in \mathbb{N}}$ of all basic open sets of $\mathcal{X}$, where the ball $B_{\pairing{n,m}}$ is centered at $\alpha(n)$ and has radius $q_m$.
\begin{theorem}
\thlabel{perfectkernelforallx}
For every rich computable Polish space $\mathcal{X}$, $\PK[\mathcal{X}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
The right-to-left direction is \thref{richspacesPK}. For the converse reduction, by \thref{Pkcantor_idPiSigma} we have that $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}}$, hence it suffices to show that $\PK[\mathcal{X}]\le_{\mathrm{W}}\parallelization{\mathsf{WF}_{\sierpinski}}$. Let $C \in \negrepr{\mathcal{X}}$ be an input for $\PK[\mathcal{X}]$.
Notice that $\length{B_{\str{n,m}} \cap C}\leq \aleph_0$ iff $(\forall \epsilon>0)(\length{ \{x \in \mathcal{X}:d(x,\alpha(n))\leq q_m-\epsilon\}\cap C}\leq \aleph_0)$: as $\{x \in \mathcal{X}:d(x,\alpha(n))\leq q_m - \epsilon\}\cap C$ is a closed set that can be uniformly computed from $C$, $n$, $m$ and $\epsilon$, by \thref{pi11countabilityinotherspaces} we get that $\length{B_{\str{n,m}}\cap C}\leq \aleph_0$ is a $\Pi_1^1$ condition.
We can therefore compute a sequence $(T^{\pairing{n,m}})_{n,m\in \mathbb{N}}$ of trees such that $T^{\pairing{n,m}} \in \mathcal{WF}$ iff $\length{B_{\str{n,m}} \cap C}\leq \aleph_0$. Hence, searching the output of $\parallelization{\mathsf{WF}_{\sierpinski}}((T^{\pairing{n,m}})_{n,m \in \mathbb{N}})$ for the $\pairing{n,m}$'s such that $\mathsf{WF}_{\sierpinski}(T^{\str{n,m}})=1$, we eventually enumerate all the $B_{\str{n,m}}$ such that $B_{\str{n,m}} \cap \PK[\mathcal{X}](C)=\emptyset$, thus obtaining a name for $\PK[\mathcal{X}](C)\in \negrepr{\mathcal{X}}$.
\end{proof}
The proof of the next Lemma combines ideas from the proofs of \thref{wsclistreduciblepkbaire} and \thref{listupperbound}.
\begin{lemma}
\thlabel{sclistupperbound}
For every computable Polish space $\mathcal{X}$, $\wScList[\mathcal{X}]\le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
We show that $\wScList[\mathcal{X}]\le_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}} \times \parallelization{\completion{\wList[\mathcal{X}]}}\le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$.
The first reduction is obtained by generalizing the proof of \thref{wsclistreduciblepkbaire} (which is the case $\mathcal{X}= \mathbb{N}^\mathbb{N}$): given $C \in \negrepr{\mathcal{X}}$, it suffices to use as inputs for the $\pairing{n,m}$-th instances of $\mathsf{WF}_{\sierpinski}$ and $\completion{\wList[\mathcal{X}]}$ respectively a tree $T^{\str{n,m}}$ such that $T^{\str{n,m}} \in \mathcal{WF}$ iff $(\forall \epsilon>0) (\length{ \{x \in \mathcal{X} : d(x,\alpha(n))\leq q_m - \epsilon\}\cap C}\leq \aleph_0)$ (see the proof of \thref{perfectkernelforallx}) and the set $\{x \in \mathcal{X}:d(x,\alpha(n))\leq q_m-\epsilon\}\cap C$.
For the second reduction, notice that $\completion{\wList[\mathcal{X}]}\le_{\mathrm{W}} \completion{\wList[\mathbb{N}^\mathbb{N}]}$ by essentially the same proof of \thref{listupperbound}. As $\wScList[\mathbb{N}^\mathbb{N}]$ is parallelizable (\thref{Wsclist_parallelizable}) and $\completion{\wList[\mathbb{N}^\mathbb{N}]}\le_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \parallelization{\mathsf{WF}_{\sierpinski}}$ (\thref{Pkcantor_idPiSigma,Pkbaire_equiv_wsclistbaire} and \thref{completionofwlist_below_pk}) we obtain the reduction.
\end{proof}
The proof of \thref{richspaceslist} also yields the following Lemma.
\begin{lemma}
\thlabel{richspacessclist}
Let $\mathcal{X}$ and $\mathcal{Y}$ be computable metric spaces and $\function{\iota}{\mathcal{X}}{\mathcal{Y}}$ be a computable embedding with $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{Y})$. Then $(\mathsf{w})\ScList[\mathcal{X}] \le_{\mathrm{W}} (\mathsf{w})\ScList[\mathcal{Y}]$. In particular, $(\mathsf{w})\ScList[2^\mathbb{N}] \le_{\mathrm{W}} (\mathsf{w})\ScList[\mathcal{Y}]$ for every rich computable metric space $\mathcal{Y}$.
\end{lemma}
Recall that the problems $\PK[\mathcal{X}]$, where $\mathcal{X}$ is a rich computable Polish space, are all Weihrauch equivalent (\thref{perfectkernelforallx}). Combining \thref{richspacessclist,sclistupperbound}, we obtain $\wScList[2^\mathbb{N}] \le_{\mathrm{W}} \wScList[\mathcal{X}] \le_{\mathrm{W}} \wScList[\mathbb{N}^\mathbb{N}]$, but we do not know whether for some rich computable Polish space $\mathcal{X}$ both reductions are strict (see \thref{question:sclistcantor}).
\begin{theorem}
\thlabel{wsclistxequivwsclistbaire}
For any rich computable Polish space $\mathcal{X}$, $\wCB[\mathcal{X}]\equiv_{\mathrm{W}}\PK[\mathbb{N}^\mathbb{N}]$.
\end{theorem}
\begin{proof}
For the left-to-right reduction notice that $\wCB[\mathcal{X}]\le_{\mathrm{W}} \PK[\mathcal{X}] \times \wScList[\mathcal{X}]$. By \thref{perfectkernelforallx} and \thref{sclistupperbound} we know that $\PK[\mathcal{X}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ and $\wScList[\mathcal{X}]\le_{\mathrm{W}}\wScList[\mathbb{N}^\mathbb{N}]$. Since $\wScList[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$ (\thref{Pkbaire_equiv_wsclistbaire}) and $\PK[\mathbb{N}^\mathbb{N}]$ is parallelizable (\thref{Pkbaire_parallelizable}), this concludes the reduction. The other direction follows from the combination of \thref{perfectkernelforallx} and the fact that $\PK[\mathcal{X}]\le_{\mathrm{W}} \wCB[\mathcal{X}]$.
\end{proof}
In the literature there are many equivalent definitions of computably compact represented spaces. The following is the most convenient for our purposes.
\begin{definition}[{\cite[\S 5]{topaspects}}]
\thlabel{compactDefinition}
A subset $K$ of a represented space $\mathcal{X}$ is \emph{computably compact} if $\{A \in \negrepr{\mathcal{X}} : A \cap K = \emptyset\}$ is $\Sigma_1^0$.
\end{definition}
\begin{definition}
A computable metric space $\mathcal{X}$ is \emph{computably $K_\sigma$} if there exists a computable sequence $(K_i)_{i \in \mathbb{N}}$ of nonempty computably compact sets with $\mathcal{X}=\bigcup_{i \in \mathbb{N}} K_i$.
\end{definition}
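For instance, with the usual computable presentation, $\mathbb{R}$ is computably $K_\sigma$, as witnessed by $K_i=[-i,i]$: each interval is nonempty and computably compact, uniformly in $i$. On the other hand, $\mathbb{N}^\mathbb{N}$ is not even classically $K_\sigma$, since its compact subsets have empty interior and hence a $K_\sigma$ decomposition would contradict the Baire category theorem.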
The following remark extends \thref{Pi11sets2} to computably $K_\sigma$ spaces.
\begin{remark}
\thlabel{ComplexityresultsKsigmaX}
Let $\mathcal{X}$ be a computably $K_\sigma$ space and let $(K_i)_{i \in \mathbb{N}}$ witness this property.
Notice that for $C\in \negrepr{\mathcal{X}}$, $C=\emptyset$ iff $(\forall i)(K_i \cap C=\emptyset)$, i.e.\ a $\Pi_2^0$ condition. Moreover, $C\cap B_{\pairing{n,m}}=\emptyset $ iff
$(\forall k) (\{x \in \mathcal{X}:d(x,\alpha(n))\leq q_m-2^{-k}\}\cap C=\emptyset)$, so that this condition is $\Pi_2^0$ as well.
Now, $\length{C}=1$ iff $C \neq \emptyset$ and
\[(\forall n,n',m,m')(d(\alpha(n),\alpha(n'))\geq q_m+q_{m'} \implies B_{\str{n,m}} \cap C=\emptyset \lor B_{\str{n',m'}} \cap C =\emptyset)\]
is $\Pi_2^0$. Now $\length{C\cap B_{\str{n,m}}}=1$ is the conjunction of a $\Sigma_2^0$ and a $\Pi_2^0$ formula because it is equivalent to $C\cap B_{\str{n,m}} \neq \emptyset$ and
\begin{align*}
(\forall n',n'',m',m'')&(d(\alpha(n),\alpha(n')) \leq q_m-q_{m'} \land d(\alpha(n),\alpha(n'')) \leq q_m-q_{m''} \land \\
& \land d(\alpha(n'),\alpha(n''))\geq q_{m'}+q_{m''} \implies B_{\str{n',m'}} \cap C=\emptyset \lor B_{\str{n'',m''}} \cap C =\emptyset).
\end{align*}
\end{remark}
\begin{lemma}
\thlabel{Ksigma}
For every rich computable Polish computably $K_\sigma$ space $\mathcal{X}$, $\CB[\mathcal{X}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
\end{lemma}
\begin{proof}
The right to left direction follows from the facts that $\PK[\mathcal{X}]\le_{\mathrm{W}}\CB[\mathcal{X}]$ and that, by \thref{perfectkernelforallx}, $\PK[\mathcal{X}]\equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}]$.
For the opposite direction, notice that $\CB[\mathcal{X}]\le_{\mathrm{W}} \wCB[\mathcal{X}]\times \ScCount[\mathcal{X}]$. Since $\PK[\mathbb{N}^\mathbb{N}]$ is parallelizable (\thref{Pkbaire_parallelizable}) and $\PK[\mathbb{N}^\mathbb{N}]\equiv_{\mathrm{W}} \wCB[\mathcal{X}]$ (\thref{wsclistxequivwsclistbaire}), it suffices to show that $\ScCount[\mathcal{X}]\le_{\mathrm{W}}\PK[\mathbb{N}^\mathbb{N}]$. To do so, we now adapt the proof of \thref{sclistcantor_summary}(ii) to show that $\ScCount[\mathcal{X}]\le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}}$: this concludes the proof as $\codedChoice{\boldsymbol{\Pi}_4^0}{}{\mathbb{N}} \le_{\mathrm{W}} \codedChoice{\boldsymbol{\Pi}_1^1}{}{\mathbb{N}} \equiv_{\mathrm{W}} \firstOrderPart{\PK[\mathbb{N}^\mathbb{N}]}$ (\thref{fop_pi11}). Given as input $C \in \negrepr{\mathcal{X}}$, let
\[ A:= \{k: (k>0 \implies \varphi(k-1,C) \land \lnot \varphi(k,C)) \land (k=0\implies (\forall m)(\varphi(m,C)))\},\]
where $\varphi(k,C)$ says that there exists a finite string $\sigma = (\pairing{n_0,q_0},\hdots,\pairing{n_{k-1},q_{k-1}})\in \mathbb{N}^k$ such that for every $i\neq j <k$,
\[d(\alpha(n_i),\alpha(n_j))\geq q_{i}+q_{j} \land \length{C\cap B_{\pairing{n_i,q_i}}}=1. \]
By \thref{ComplexityresultsKsigmaX}, it is easy to check that each $\varphi(k,C)$ is $\Sigma_3^0$ and hence $A$ is $\Pi_4^{0}$. By \thref{Isolated_paths} the unique $k \in A$ is the correct answer for $\ScCount[\mathcal{X}](C)$.
\end{proof}
The final part of this section is devoted to spaces which are not $K_\sigma$.
Recall the following consequence of Hurewicz's theorem from classical descriptive set theory.
\begin{theorem}[{\cite[Theorem 7.10]{kechris2012classical}}]
\thlabel{hurewicz}
Let $\mathcal{X}$ be a Polish space. Then there is an embedding $\function{\iota}{\mathbb{N}^\mathbb{N}}{\mathcal{X}}$ such that $\mathsf{range}(\iota)$ is closed iff $\mathcal{X}$ is not $K_\sigma$.
\end{theorem}
\begin{definition}
We say that a computable Polish space is computably non-$K_\sigma$ if there exists a computable embedding $\function{\iota}{\mathbb{N}^\mathbb{N}}{\mathcal{X}}$ such that $\mathsf{range}(\iota) \in \Pi_1^0(\mathcal{X})$.
\end{definition}
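For instance, $\mathbb{N}^\mathbb{N}$ itself is computably non-$K_\sigma$, as witnessed by the identity embedding, whose range is the whole space and hence trivially $\Pi_1^0(\mathbb{N}^\mathbb{N})$.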
The following theorem is a corollary of \thref{richspacesPKGeneral}.
\begin{theorem}
For any rich computable Polish space $\mathcal{X}$ which is computably non-$K_\sigma$, $\CB[\mathbb{N}^\mathbb{N}]\le_{\mathrm{W}} \CB[\mathcal{X}]$.
\end{theorem}
We leave open the question of whether there is a rich computable Polish space $\mathcal{X}$ such that $\CB[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB[\mathcal{X}]$ (see \thref{question:sclistcbbaire}).
\section{Open problems}
\label{Openquestions}
In this paper we studied problems related to the Cantor-Bendixson theorem. In contrast with reverse mathematics, we showed that many such problems lie in different Weihrauch degrees; some of these problems still lack a complete classification.
In \thref{Pk_below__chipi}, we showed that $\PK[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \PK \le_{\mathrm{W}} \mathsf{lim} * \PK[\mathbb{N}^\mathbb{N}]$. Upon hearing about this result, Linda Westrick asked the following question.
\begin{open}
\thlabel{PKandLimQuestion}
Is it true that $\PK \equiv_{\mathrm{W}} \mathsf{lim} * \PK[\mathbb{N}^\mathbb{N}]$?
\end{open}
By \thref{Pkbaire_below_sclistbaire,sclist_below_pica,Summaryfopustar}, and the fact that $\ScCount[\mathbb{N}^\mathbb{N}]$ is a first-order problem, we obtain $\mathsf{WF}^* \le_{\mathrm{W}} \ScCount[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \firstOrderPart{\parallelization{\mathsf{WF}}} \equiv_{\mathrm{W}} \ustar{\mathsf{WF}}$. By \cite[Corollary 7.8]{valentisolda}, $\mathsf{WF}^* <_\mathrm{W} \ustar{\mathsf{WF}}$ and therefore at least one of the inequalities is strict.
\begin{open}
\thlabel{question:sccount}
Characterize the Weihrauch degree of $\ScCount[\mathbb{N}^\mathbb{N}]$.
\end{open}
In particular, if $\ScCount[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \firstOrderPart{\parallelization{\mathsf{WF}}}$ we would obtain a nice characterization of the first-order part of $\parallelization{\mathsf{WF}}$.
A related question is the following.
\begin{open}
\thlabel{question:cbbaireandlist}
Is it true that $\CB[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$ (and hence $\CB[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$)?
\end{open}
Notice that we proved that $\ScList[2^\mathbb{N}] <_\mathrm{W} \CB[2^\mathbb{N}]$ and $\ScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \CB$ (\thref{equivalencesCBbairecantor,sclistcantor_summary}(iii)). A negative answer to \thref{question:cbbaireandlist} would confirm this pattern. However, we have $\PK[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \ScList[\mathbb{N}^\mathbb{N}]$, while $\ScList[2^\mathbb{N}] <_\mathrm{W} \PK[2^\mathbb{N}]\equiv_{\mathrm{W}} \CB[2^\mathbb{N}]$ and $\ScList[\mathbb{N}^\mathbb{N}]<_\mathrm{W} \PK \equiv_{\mathrm{W}} \CB$ (\thref{equivalencesCBbairecantor,CBtree,sclistcantor_summary}(iii)): therefore the situation in $\mathbb{N}^\mathbb{N}$ differs from those in $2^\mathbb{N}$ and $\mathbf{Tr}$ and a positive answer is possible. In this case we would obtain an unexpected result: namely, that the gap between $\PK[\mathbb{N}^\mathbb{N}]$ and $\CB[\mathbb{N}^\mathbb{N}]$ is due entirely to the scattered part and its cardinality, rather than to the perfect kernel. If this is the case, the cardinality of the scattered part (i.e.\ $\ScCount[\mathbb{N}^\mathbb{N}]$) would be of crucial importance because the scattered part on its own is not enough as witnessed by the fact that $\wScList[\mathbb{N}^\mathbb{N}] \equiv_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB[\mathbb{N}^\mathbb{N}]$ (\thref{Pkbaire_equiv_wsclistbaire,equivalencesCBbairecantor}).
The following questions are closely related and concern the relationship of two of our problems with $\mathsf{C}_{\Baire}$, which plays a major role in the Weihrauch lattice. Choice principles have a convenient definition, and hence it is quite natural to compare any problem with them. In particular, $\mathsf{C}_{\Baire}$ plays a pivotal role among the problems that, from the point of view of reverse mathematics, are equivalent to $\mathsf{ATR}_0$.
\begin{open}
\thlabel{questioncbaire}
Is it true that $\mathsf{C}_{\Baire} \le_{\mathrm{W}}\CB[\mathbb{N}^\mathbb{N}]$? Even more, does $\mathsf{C}_{\Baire} \le_{\mathrm{W}} \ScList[\mathbb{N}^\mathbb{N}]$ hold?
\end{open}
By \thref{Ucbaire_below_pk,CBtree,AttemptCBaireCN} we obtain $\mathsf{C}_{\Baire} \not\le_{\mathrm{W}} \PK[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}* \PK[\mathbb{N}^\mathbb{N}]$. Since $\ScList[\mathbb{N}^\mathbb{N}] \le_{\mathrm{W}} \CB[\mathbb{N}^\mathbb{N}]$, to answer both questions negatively it suffices to show that $\mathsf{C}_{\Baire} \not\le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}} * \PK[\mathbb{N}^\mathbb{N}]$. By \cite[Theorem 7.11]{closedChoice} we know that $f \le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}$ iff $f$ is computable with finitely many mind-changes. In other words, $\mathsf{C}_{\Baire} \le_{\mathrm{W}} \codedChoice{}{}{\mathbb{N}}* \PK[\mathbb{N}^\mathbb{N}]$ iff $\mathsf{C}_{\Baire}$ can be reduced to $\PK[\mathbb{N}^\mathbb{N}]$ employing a backward functional which is computable with finitely many mind-changes: intuitively, this seems unlikely to hold.
The last section left open some interesting questions. First of all, by \thref{richspacesPK} we have that $\PST[\mathbb{N}^\mathbb{N}]$ is a lower bound for $\PST[\mathcal{X}]$ whenever $\mathcal{X}$ is a rich computable Polish space. In \thref{PST01} we showed that the equivalence holds when $\mathcal{X} = [0,1]$ or $\mathcal{X} = \mathbb{R}$.
\begin{open}
\thlabel{question:richspacesBaire}
Is there a rich computable Polish space $\mathcal{X}$ such that $\PST[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \PST[\mathcal{X}]$?
\end{open}
Concerning the listing problems for countable closed sets, the situation for the so-called weak lists is quite clear, while we do not have a satisfactory result for problems that also require the cardinality of the set. An open question is the following:
\begin{open}
\thlabel{question:list}
Does $\List[2^\mathbb{N}] <_\mathrm{W} \List[\mathbb{R}]$ hold?
\end{open}
By \thref{perfectkernelforallx,wsclistxequivwsclistbaire} all problems of the form $\PK[\mathcal{X}]$ and $\wCB[\mathcal{X}]$ belong to the same Weihrauch degree as long as $\mathcal{X}$ is a rich computable Polish space. In contrast, we do not know if the same happens with $\CB[\mathcal{X}]$.
\begin{open}
\thlabel{question:sclistcbbaire}
Is there a rich computable Polish space $\mathcal{X}$ such that $\CB[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \CB[\mathcal{X}]$? By \thref{Ksigma}, if such an $\mathcal{X}$ exists, it must be computably non-$K_\sigma$. This problem is closely related to the existence of a rich computable Polish space $\mathcal{X}$ such that $\ScList[\mathbb{N}^\mathbb{N}] <_\mathrm{W} \ScList[\mathcal{X}]$.
\end{open}
The last problem concerns the weak form of listing the scattered part of a set.
\begin{open}
\thlabel{question:sclistcantor}
Is there a rich computable Polish space $\mathcal{X}$ such that $\wScList[2^\mathbb{N}] <_\mathrm{W} \wScList[\mathcal{X}] <_\mathrm{W} \wScList[\mathbb{N}^\mathbb{N}]$?
\end{open}
\bibliographystyle{plain}
One-dimensional driven-diffusive systems have been a subject of study in recent years because they exhibit interesting properties such as non-equilibrium phase transitions~\cite{Schmittmann}. These systems have many applications in different fields of physics and biology~\cite{Schutz,MacdonaldGibbsPipkin}. A well-known example is the Asymmetric Simple Exclusion Process (ASEP), which has been studied experimentally using optical tweezers~\cite{Optical tweezers1,Optical tweezers2}.
Various approaches have been developed in order to solve such systems exactly, including for example the matrix product method. With the matrix product method, the steady-state weight of a configuration is written as the trace of a product of operators corresponding to the local state of each lattice site. The operators obey certain algebraic rules which are derived from the dynamics of the model~\cite{DEHP}. The algebraic relations among these operators might have finite or infinite dimensional matrix representations~\cite{EsslerRittenberg,BlytheMPM}. Recently the matrix product method with quadratic algebras attracted renewed attention as it can also be applied to dissipative quantum systems~\cite{Prosen,Karevski}.
It is well known that one-dimensional systems with open boundary conditions, in which the particle number is not conserved at the boundaries, can exhibit a phase transition~\cite{EvansNonconserving}. On the other hand a phase transition may also take place in systems with non-conserving dynamics in the bulk~\cite{Hinrichsen,EvansKafri}. For example, in Ref.~\cite{EvansKafri} the authors have studied a three-states model on a lattice with periodic boundary conditions with two particle species which evolve by diffusion, creation and annihilation. By changing the annihilation rate of the particles, this model displays a transition from a maximal current phase to a fluid phase.
As shown in~\cite{EvansZRP1} it is possible to map a one-dimensional driven-diffusive system defined on a periodic lattice onto a so-called zero-range process (ZRP). Recently this mapping was used to study various models which have an exact solution in the steady state~\cite{EvansZRP,KafriZRP}. It was shown that a phase transition in the original model corresponds to a condensation transition in the corresponding ZRP.
In the present work, we introduce and study an exactly solvable one-dimensional driven-diffusive model with non-conserved dynamics which exhibits an interesting type of phase transition. The model is defined on a ring of $L$ sites which can be either empty (denoted by a vacancy $\emptyset$) or occupied by a single particle of type $A$ or type $B$. The system evolves random-sequentially according to a set of two-site processes which can be written in the most general form as
\begin{equation}
\label{DYNAMICAL RULE}
I\emptyset\mathop{\rightarrow}\limits^{\alpha_{I}} \emptyset I\,,\qquad
IK\mathop{\rightleftharpoons}\limits^{\beta_{IJ}}_{\beta_{JI}} JK\,,\qquad
IJ\mathop{\rightleftharpoons}\limits^{\omega_{IJ}}_{\omega_{JI}} J\emptyset\,,
\end{equation}
where $I, J, K\in \{A,B\} $. In what follows we study a special case of this model defined by the processes
\begin{equation}
\label{DYNAMICALRULES}
\begin{array}{cc}
A\emptyset\mathop{\rightarrow}\limits^{\alpha_{+}} \emptyset A,&
B\emptyset\mathop{\rightarrow}\limits^{\alpha_{-}} \emptyset B\\
AB\mathop{\rightleftharpoons}\limits^{p_{+}}_{p_{-}} BB,&
AA\mathop{\rightleftharpoons}\limits^{p_{+}}_{p_{-}} BA\\
AB\mathop{\rightleftharpoons}\limits^{\alpha_{+}}_{\alpha_{-}} B\emptyset,&A\emptyset\mathop{\rightleftharpoons}\limits^{p_{+}}_{p_{-}} BA\\
A\emptyset\mathop{\rightleftharpoons}\limits^{1}_{1} AA,&
B\emptyset\mathop{\rightleftharpoons}\limits^{p}_{\alpha} BB\\
\end{array}
\end{equation}
where the rates $\alpha$ and $p$ are given by the ratios
\begin{equation}
\label{ratios}
\alpha=\frac{\alpha_+}{\alpha_-}\,,\qquad p = \frac{p_+}{p_-}\,.
\end{equation}
As we will see below, for this particular choice the model turns out to be exactly solvable. Obviously, this defines a non-conserved dynamics, allowing the number of particles ($N_{A}$ and $N_{B}$) and vacancies ($N_{\emptyset}$) to fluctuate under the constraint $L=N_{A}+N_{B}+N_{\emptyset}$. Moreover, the model is a driven system since diffusion and reaction processes are not left-right symmetric. The dynamical rules~(\ref{DYNAMICAL RULE}) can be extended to an exactly solvable model with several species of particles in which a phase transition is accessible. A generalized model consisting of three species of particles is presented in Appendix A.
In this paper we demonstrate that the model defined in~(\ref{DYNAMICALRULES}) exhibits a phase transition and that its stationary state can be determined exactly by means of the matrix product method. In Sect.~\ref{sec:zrp} we show that our model can be mapped onto a ZRP and that the phase transition corresponds to a condensation transition in the ZRP. In Sect.~\ref{sec:numerics} we study the dynamical behavior, which is not part of the exact solution, by numerical simulations. It turns out that the dynamical behavior near the critical point is plagued by unusually persistent corrections to scaling, which are explained from a phenomenological point of view in Sect.~\ref{sec:toymodel}.
\section{Phase diagram and phenomenological properties}
\begin{figure}
\centering\includegraphics[width=\linewidth]{rho.pdf}
\vspace*{-5mm}
\caption{Stationary density of the reaction-diffusion process investigated in the present work. Left: At the lower boundary $p=0$ the model exhibits two different phases, namely a high-density phase for $\alpha<2$ (red) and a low-density phase for $\alpha > 2$ (violet), separated by a discontinuous phase transition when moving along the bottom line in the left panel. For $p>0$ the order parameter $\rho_A$ changes continuously without exhibiting a phase transition. Right: The order parameter $\rho_B$ displays instead a continuous phase transition. }
\label{fig:rho}
\end{figure}
\label{sec:Phase diagram}
The model defined above is controlled by four parameters $\alpha_+$, $\alpha_-$, $p_+$, and $p_-$. As we will see below, the essential quantities which determine the matrix algebra are the ratios $\alpha=\frac{\alpha_+}{\alpha_-}$ and $p = \frac{p_+}{p_-}$ in Eq.~(\ref{ratios}), and therefore it is useful to study the phase diagram of the model in terms of these ratios. For the remaining two degrees of freedom we choose $\alpha_+\alpha_-=p_-=1$ throughout this paper, i.e. we use the definition
\begin{equation}
\alpha_+=\sqrt{\alpha}\,,\quad \alpha_-=\frac{1}{\sqrt{\alpha}}\,,\qquad p_+=p\,,\quad p_-=1.
\end{equation}
This selects a 2D subspace in the 4D parameter space which is believed to capture the essential phase structure of the system.
The phase diagram for the particle densities $\rho_A$ and $\rho_B$ in terms of $\alpha$ and $p$ is shown in Fig.~\ref{fig:rho}. As can be seen, these densities vary continuously everywhere except for the point $\alpha=2, p=0$, where the model exhibits a phase transition. Moving along the horizontal axis at $p=0$, the order parameter $\rho_A$ jumps discontinuously from $1/2$ to $0$, indicating first-order behavior, while $\rho_B$ changes continuously as in a second-order phase transition.
To give a first impression of how the process behaves in different parts of the phase diagram, we show various typical snapshots of the space-time evolution in Fig.~\ref{fig:demo}. For $p=0$ the density of $B$-particles (blue pixels) is very low while the $A$-particles (red pixels) form fluctuating domains with a high density. As we will see in the last section, these sharply bounded domains are important for a qualitative understanding of the phase transition.
For $\alpha<2$ the $A$-particles eventually fill the entire system while for $\alpha>2$ the $A$-domains almost disappear, leaving diffusing $B$-particles behind. For $p>0$ one can see that $B$-particles are continuously generated. Thus the parameter $\alpha$ controls the domain size of $A$-particles while the parameter $p$ controls the creation and therewith the density of $B$-particles.
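To complement these snapshots, the dynamics~(\ref{DYNAMICALRULES}) is easy to simulate. The following minimal random-sequential Monte Carlo sketch is not part of the original analysis; it uses plain Python, encodes a vacancy as \texttt{'0'}, adopts the parametrization $\alpha_+=\sqrt{\alpha}$, $\alpha_-=1/\sqrt{\alpha}$, $p_+=p$, $p_-=1$ introduced above, and implements all two-site transitions by rejection sampling:

```python
import random

def simulate(alpha, p, L=120, sweeps=6000, seed=1):
    """Random-sequential update of the two-site rules, with
    alpha_+ = sqrt(alpha), alpha_- = 1/sqrt(alpha), p_+ = p, p_- = 1."""
    ap, am, pp, pm = alpha**0.5, alpha**-0.5, p, 1.0
    # transition table: (left, right) -> list of ((left', right'), rate)
    T = {
        ('A', '0'): [(('0', 'A'), ap), (('B', 'A'), pp), (('A', 'A'), 1.0)],
        ('A', 'A'): [(('B', 'A'), pp), (('A', '0'), 1.0)],
        ('A', 'B'): [(('B', 'B'), pp), (('B', '0'), ap)],
        ('B', 'A'): [(('A', 'A'), pm), (('A', '0'), pm)],
        ('B', 'B'): [(('A', 'B'), pm), (('B', '0'), alpha)],
        ('B', '0'): [(('0', 'B'), am), (('B', 'B'), p), (('A', 'B'), am)],
    }
    R = max(sum(r for _, r in moves) for moves in T.values())  # rejection bound
    rng = random.Random(seed)
    s = [rng.choice('AB0') for _ in range(L)]
    rho_A = []
    for t in range(sweeps):
        for _ in range(L):
            i = rng.randrange(L)
            j = (i + 1) % L
            u = rng.uniform(0.0, R)
            for (a, b), r in T.get((s[i], s[j]), []):
                if u < r:                 # accept this transition
                    s[i], s[j] = a, b
                    break
                u -= r
        if t >= sweeps // 2:              # measure after relaxation
            rho_A.append(s.count('A') / L)
    return sum(rho_A) / len(rho_A), s
```

At the point $\alpha=p=1$, where all rates equal one, the measured $A$-density fluctuates around the exact stationary value $\rho_A^{stat}=1/3$ obtained below.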
\begin{figure*}
\centering\includegraphics[width=0.9\linewidth]{demo.pdf}
\caption{Snapshots of typical space-time evolutions starting with random initial conditions. Particles of type $B$ are represented by blue pixels while $A$-particles are plotted in red color. The figure shows snapshots for four different choices of the parameters, corresponding to the points in the phase diagram shown on the right.}
\label{fig:demo}
\end{figure*}
\section{Exact results}
\label{sec:exact}
The matrix product method is an important analytical tool developed in the 1990s to compute the steady state of driven-diffusive systems exactly~\cite{DEHP,BlytheMPM}. Let us now investigate the stationary state of the model by using this method. We consider a configuration $C=\{\tau_{1},\cdots,\tau_{L}\}$ with $\tau_{i}\in \{ \emptyset,A,B\}$ on a discrete lattice of length $L$ with periodic boundary conditions. According to this method, the stationary state weight of a configuration $C$ is given by the trace of a product of non-commuting operators~$X_{i}$:
\begin{equation}
\label{Weights MPM}
W(C)={\rm Tr}\bigl[\prod_{i=1}^{L}{X_{i}}\bigr]\,.
\end{equation}
Note that this method differs from the well-known transfer matrix method in so far as different matrices are used depending on the actual configuration of the lattice sites, i.e. the choice of the operator $X_{i}$ at site $i$ depends on its local state. In our model, the operator $X_{i}=\mathbf E$ stands for a vacancy while $X_{i}=\mathbf A (\mathbf B)$ represents a particle of type $A$ ($B$). Depending on the dynamical rules, these operators should satisfy a certain set of algebraic relations. For the dynamical rules listed in~(\ref{DYNAMICALRULES}) one obtains a quadratic algebra of the form
\begin{equation}
\label{ALGEBRA}
\begin{array}{l}
p_{-}\mathbf B \mathbf A+\mathbf A \mathbf E=(1+p_{+})\mathbf A\mathbf A\\
p_{-}\mathbf B \mathbf B+\alpha_{-}\mathbf B \mathbf E=(\alpha_{+}+p_{+})\mathbf A\mathbf B\\
p_{+}\mathbf A \mathbf A+p_{+}\mathbf A \mathbf E=2p_{-}\mathbf B \mathbf A\\
p_{+}\mathbf A\mathbf B+p\mathbf B\mathbf E=(p_{-}+\alpha)\mathbf B\mathbf B\\
p_{-}\mathbf B\mathbf A+\mathbf A\mathbf A-(\alpha_{+}+p_{+})\mathbf A\mathbf E=\mathbf A\overline{\mathbf E}\\
\alpha_{+}\mathbf A\mathbf B+\alpha \mathbf B\mathbf B-(p+2\alpha_{-}-1)\mathbf B\mathbf E=\mathbf B\overline{\mathbf E}\\
\alpha_{+}\mathbf A\mathbf E-\mathbf E\mathbf A=- \overline{\mathbf E} \mathbf A\\
\alpha_{-}\mathbf B\mathbf E-\mathbf E\mathbf B=-\overline{\mathbf E}\mathbf B\\
\overline{\mathbf E}\mathbf E-\mathbf E\overline{\mathbf E}=0\end{array}
\end{equation}
where $\overline{\mathbf E}$ is an auxiliary matrix which is expected to cancel out in the final result. We find that the algebra~(\ref{ALGEBRA}) has a two-dimensional matrix representation given by the following matrices
\begin{equation}
\label{MATRIX A,B,E}
\mathbf A=\left(
\begin{array}{cc}
1 & 0 \\
1 & 0
\end{array} \right),\;
\mathbf B=p\left(
\begin{array}{cc}
0 & 1 \\
0 & 1
\end{array} \right),\;
\mathbf E=\left(
\begin{array}{cc}
1 & 0 \\
0 & \alpha
\end{array} \right)
\end{equation}
and $\overline{\mathbf E}={\mathbf E}-\alpha_{+}{\mathbf I}$, where ${\mathbf I}$ is an identity $2\times2$ matrix.
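As a quick numerical consistency check, which is not part of the derivation, one can verify that these matrices satisfy all nine relations of the algebra~(\ref{ALGEBRA}); the sketch below assumes NumPy and uses arbitrary test values of $\alpha$ and $p$ together with the parametrization $\alpha_+=\sqrt{\alpha}$, $\alpha_-=1/\sqrt{\alpha}$, $p_+=p$, $p_-=1$:

```python
import numpy as np

alpha, p = 1.3, 0.7                              # arbitrary test values
ap, am = alpha**0.5, alpha**-0.5                 # alpha_+, alpha_-
pp, pm = p, 1.0                                  # p_+, p_-

A = np.array([[1.0, 0.0], [1.0, 0.0]])
B = p * np.array([[0.0, 1.0], [0.0, 1.0]])
E = np.array([[1.0, 0.0], [0.0, alpha]])
Eb = E - ap * np.eye(2)                          # auxiliary matrix E-bar

relations = [                                    # each entry must vanish
    pm * B @ A + A @ E - (1 + pp) * A @ A,
    pm * B @ B + am * B @ E - (ap + pp) * A @ B,
    pp * A @ A + pp * A @ E - 2 * pm * B @ A,
    pp * A @ B + p * B @ E - (pm + alpha) * B @ B,
    pm * B @ A + A @ A - (ap + pp) * A @ E - A @ Eb,
    ap * A @ B + alpha * B @ B - (p + 2 * am - 1) * B @ E - B @ Eb,
    ap * A @ E - E @ A + Eb @ A,
    am * B @ E - E @ B + Eb @ B,
    Eb @ E - E @ Eb,
]
assert all(np.allclose(R, 0) for R in relations)
```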
We note that the algebra (\ref{ALGEBRA}) and its representation (\ref{MATRIX A,B,E}) were studied for the first time by Basu and Mohanty in Ref.~\cite{Basu} in the context of a different model. Their model differs from ours insofar as it evolves only according to the processes in the first two lines of (\ref{DYNAMICALRULES}), where the $A$ and $B$-particles hop with different rates and can also transform into each other, meaning that the total number of particles is conserved. The authors calculated the spatial correlations exactly and mapped their model to a ZRP. However, as the particle number is conserved in their model, a phase transition does not occur upon changing the rates. In other words, although the matrix algebra already contains information about the phase transition, their model could not access the part of the phase diagram where the transition takes place. The model presented here is an extension of their model with the same matrix representation but with a non-conserved dynamics and an extended parameter space, in which the phase transition becomes accessible.
To compute the partition sum of the system, we first note that according to (\ref{DYNAMICALRULES}) a configuration without a particle of type $A$ or $B$ is not dynamically accessible. Therefore, the partition function, defined as a sum of the weights of all available configurations with at least one particle, is given by
\begin{equation}
\label{PartitionFunction}
Z_{L}={\rm Tr}\bigl[(\mathbf A+\mathbf B+\mathbf E)^L-\mathbf E^L\bigr]\,.
\end{equation}
With this partition sum the stationary density of the $A$~and $B$-particles can be written as
\begin{equation}
\label{A particlesDensity}
\rho_A^{stat}=\frac{{\rm Tr}\bigl[\mathbf A(\mathbf A+\mathbf B+\mathbf E)^{L-1}\bigr]}{Z_{L}},
\end{equation}
\begin{equation}
\label{B particlesDensity}
\rho_B^{stat}=\frac{{\rm Tr}\bigl[\mathbf B(\mathbf A+\mathbf B+\mathbf E)^{L-1}\bigr]}{Z_{L}}\,.
\end{equation}
We can also compute the density of the vacancies using $\rho_{\emptyset}^{stat}=1-(\rho_A^{stat}+\rho_B^{stat})$. Using the representation~(\ref{MATRIX A,B,E}) the equations~(\ref{PartitionFunction})-(\ref{B particlesDensity}) can be calculated exactly. In the thermodynamic limit $L\mathop{\rightarrow}\infty$, where high powers of matrices are dominated by their largest eigenvalue, the density of the $A$ and $B$-particles is given by (see Fig.~\ref{fig:rho})
\begin{equation}
\label{A particlesDensityTrmo}
\rho_A^{stat}= \frac{(2-\alpha)(\alpha+p)+\alpha \sqrt{4-4\alpha+(p+\alpha)^2} }{2(2\alpha+p)\sqrt{4-4\alpha+(p+\alpha)^2}},
\end{equation}
\begin{equation}
\label{B particlesDensityTrmo}
\rho_B^{stat}=\frac{p(p+3\alpha-2+\sqrt{4-4\alpha+(p+\alpha)^2})}{2(2\alpha+p)\sqrt{4-4\alpha+(p+\alpha)^2}}\,.
\end{equation}
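The finite-size trace formulas~(\ref{PartitionFunction})-(\ref{B particlesDensity}) and the limiting expressions~(\ref{A particlesDensityTrmo})-(\ref{B particlesDensityTrmo}) can be cross-checked numerically with the two-dimensional representation. The following sketch is a sanity check only (NumPy assumed), not part of the exact solution:

```python
import numpy as np

def densities(alpha, p, L):
    """Finite-L densities from the trace formulas with the 2x2 matrices."""
    A = np.array([[1.0, 0.0], [1.0, 0.0]])
    B = p * np.array([[0.0, 1.0], [0.0, 1.0]])
    E = np.array([[1.0, 0.0], [0.0, alpha]])
    C = A + B + E
    CL1 = np.linalg.matrix_power(C, L - 1)
    Z = np.trace(C @ CL1) - np.trace(np.linalg.matrix_power(E, L))
    return np.trace(A @ CL1) / Z, np.trace(B @ CL1) / Z

def densities_thermo(alpha, p):
    """Thermodynamic-limit expressions for rho_A and rho_B."""
    s = np.sqrt(4 - 4 * alpha + (p + alpha) ** 2)
    rho_a = ((2 - alpha) * (alpha + p) + alpha * s) / (2 * (2 * alpha + p) * s)
    rho_b = p * (p + 3 * alpha - 2 + s) / (2 * (2 * alpha + p) * s)
    return rho_a, rho_b
```

Away from the transition point the two agree to machine precision already for moderate $L$, since the subleading eigenvalues of $\mathbf A+\mathbf B+\mathbf E$ are exponentially suppressed.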
Approaching the critical point at $p=0$ and $\alpha_{c}=2$, we find a discontinuous behavior
\begin{equation}
\label{A particlesDensityP=0}
\rho_A^{stat}=
\left\{
\begin{array}{ll}
\frac{1}{2}
& \mbox{for} \; \alpha <\alpha_{c}\\ & \\
0
& \mbox{for} \; \alpha >\alpha_{c}
\end{array}
\right.
\end{equation}
\begin{equation}
\label{VacanciesDensityP=0}
\rho_{\emptyset}^{stat}=
\left\{
\begin{array}{ll}
\frac{1}{2}
& \mbox{for} \; \alpha <\alpha_{c}\\ & \\
1
& \mbox{for} \; \alpha >\alpha_{c}
\end{array}
\right.
\end{equation}
while $\rho_B^{stat}=0$. In fact, it is clear from (\ref{DYNAMICALRULES}) that for $p=0$, the $B$-particles can only transform into $A$-particles or vacancies but they are not created. Hence, in the steady state in the thermodynamic limit, the $B$-particles will disappear.
We also observe that the density of the $B$-particles in the vicinity of the critical point changes discontinuously in a particular limit. This can be seen already in the snapshots of Figs.~\ref{fig:demo}a and~\ref{fig:demo}c: For $\alpha<2$ and $p=0$ the density of $B$-particles vanishes rapidly on an exponentially short time scale, while for $\alpha>2$ one observes some kind of annihilating random walk with a slow algebraic decay. Therefore, for a small value of $p>0$, i.e.\ when switching on the creation of $B$-particles at a small rate, it is plausible that the system will respond differently in the two cases. In fact, expanding (\ref{B particlesDensity}) around $p=0$ to first order in $p$ in the two phases $\alpha>\alpha_{c}$, i.e.\ $\alpha=\alpha_{c}+\epsilon$, and $\alpha<\alpha_{c}$, i.e.\ $\alpha=\alpha_{c}-\epsilon$, where $\epsilon$ is very small, we find a gap
\begin{equation}
\label{BandGap}
\Delta =\rho_B^{stat ,\alpha>\alpha_{c}}-\rho_B^{stat ,\alpha<\alpha_{c}}\approx \frac{L^{2}p \epsilon}{8}
\end{equation}
which is valid for $1 \ll L\ll L_{max}$ where $L_{max}=(p\epsilon)^{-1/2}$.
\section{Relation to a zero-range process}
\label{sec:zrp}
A zero range process (ZRP) is defined as a system of $L$ boxes where each box can be empty or occupied by an arbitrary number of particles. The particles hop between neighboring boxes with a rate that can depend on the number of particles in the box of departure \cite{EvansZRP}. The stationary state of the ZRP factorizes, meaning that the steady-state weight of any configuration is given by a product of factors associated with each of the boxes.
It is known that various driven-diffusive systems can be mapped onto a ZRP \cite{EvansZRP}. This is usually done by interpreting the vacancies (particles) in the driven-diffusive systems as particles (boxes) in the ZRP. Following the same line we find that our model can be mapped onto a non-conserving ZRP with two different types of boxes. More specifically, the $n$ vacancies to the right of an $A$($B$)-particle are regarded as an $A$($B$)-box containing $n$ particles in the ZRP, denoted as $A_n$ ($B_n$). The total number of particles distributed among the boxes is denoted as~$N_\emptyset$ while the number of boxes of type $A$ ($B$) is denoted as $N_A$ ($N_B$). By definition, the sum $N_{A}+N_{B}+N_{\emptyset}=L$ is conserved. However, the individual numbers are not conserved and change according to the following dynamical rules:
\begin{enumerate}
\item[(i)]
Particles from an $A$($B$)-box hop to the neighboring left box with rate $\alpha_{+}$ ($\alpha_{-}$):
\begin{align}
\label{RulesZRP}
X_mA_n &\mathop{\longrightarrow}\limits^{\alpha_+} X_{m+1}A_{n-1}\\
X_mB_n &\mathop{\longrightarrow}\limits^{\alpha_-} X_{m+1}B_{n-1} \qquad (X=A,B) \notag
\end{align}
\item[(ii)]
An empty $A$($B$)-box transforms into an empty $B$($A$)-box with rate $p_{+}$ ($p_{-}$):
\begin{equation}
A_0 \mathop{\rightleftharpoons}\limits^{p_+}_{p_-} B_0
\end{equation}
\item[(iii)]
An $A$($B$)-box with $n$ particles together with an adjacent empty $B$($A$)-box on the left side transforms into a single $A$($B$)-box containing $n+1$ particles with rate $p_{-}$ ($\alpha_{+}$). The reversed process is also possible and takes place with rate $p_{+}$ ($\alpha_{-}$):
\begin{equation}
B_0A_n \mathop{\rightleftharpoons}\limits^{p_-}_{p_+} A_{n+1}\,,\qquad
A_0B_n \mathop{\rightleftharpoons}\limits^{\alpha_+}_{\alpha_-} B_{n+1}
\end{equation}
\item[(iv)]
An $A$($B$)-box containing $n$ particles and a neighboring empty $A$($B$)-box on the left side transform into an $A$($B$)-box with $n+1$ particles with the rate $1$ ($\alpha$). The reversed process is also possible and takes place with rate $1$ ($p$):
\begin{equation}
\label{EndRulesZRP}
A_0 A_n \mathop{\rightleftharpoons}\limits^{1}_{1} A_{n+1}\,,\qquad
B_0 B_n \mathop{\rightleftharpoons}\limits^{\alpha}_{p} B_{n+1}
\end{equation}
\end{enumerate}
With these dynamical rules, we can show that the weights of configurations in the ZRP can be expressed in factorized form. We consider a configuration consisting of $\delta=N_{A}+N_{B}$ boxes with $N_{\emptyset}$ particles distributed among the boxes. Defining $n_k$ as the number of particles in the $k^{\rm th}$ box of type $\tau_{k}\in \{A,B\} $, where $\sum_{k=1}^\delta n_k=N_{\emptyset}$, the weight of the configuration can be written as
\begin{equation}
\label{Weights ZRP}
W_{ZRP}\bigl(\{n_{1}\tau_{1},\cdots,n_{\delta}\tau_{\delta}\}\bigr)
=\prod_{k=1}^{\delta} f_{\tau_{k}}(n_{k})\,,
\end{equation}
where $f_{A}(n)$ ($f_{B}(n)$) is the weight of an $A$($B$)-box containing $n$ particles. In order to compute $f_{A}(n)$ and $f_{B}(n)$, let us define the vectors $\vert a_{1} \rangle$, $\langle a_{2} \vert$, $\vert b_{1} \rangle$ and $\langle b_{2} \vert$ by
\begin{equation}
\label{VECTOR}
\vert a_{1} \rangle= \vert b_{1} \rangle= \vert 1 \rangle+ \vert 2 \rangle,\quad \langle a_{2} \vert=\langle 1\vert,\quad \langle b_{2} \vert=p\langle 2\vert\,,
\end{equation}
where we used the basis vectors
\begin{equation}
\label{BaseVectors}
\vert 1 \rangle=\left(\begin{array}{c}
1\\ 0
\end{array}\right) \; , \;
\vert 2\rangle=\left(\begin{array}{c}
0 \\ 1
\end{array}\right)\,.
\end{equation}
Then the operators $\mathbf A$ and $\mathbf B$ in the matrix representation~(\ref{MATRIX A,B,E}) can be rewritten as
\begin{equation}
\label{REWITEA,B}
\mathbf A= \vert a_{1} \rangle\langle a_{2} \vert,\qquad
\mathbf B= \vert b_{1} \rangle\langle b_{2} \vert.
\end{equation}
Using Eqs.~(\ref{VECTOR})-(\ref{REWITEA,B}) and~(\ref{MATRIX A,B,E}) we obtain
\begin{align}
\label{WeightA}
f_{A}(n)&=\langle a_{2} \vert \mathbf E^{n}\vert a_{1} \rangle=\langle a_{2} \vert \mathbf E^{n}\vert b_{1} \rangle=1\\
\label{WeightB}
f_{B}(n)&=\langle b_{2} \vert \mathbf E^{n}\vert b_{1} \rangle=\langle b_{2} \vert \mathbf E^{n}\vert a_{1} \rangle=p\alpha^{n}.
\end{align}
One can show that the weights in Eq.~(\ref{Weights ZRP}) satisfy the pairwise balance condition \cite{Pairwise Balance}; therefore Eq.~(\ref{Weights ZRP}) gives the stationary state for the dynamics specified by~(\ref{RulesZRP})-(\ref{EndRulesZRP}).
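This factorization can also be checked directly against the matrix-product weights: a box of type $\tau$ containing $n$ particles corresponds to the block $\tau\emptyset^n$ on the ring, so the trace of the corresponding matrix product must equal the product of the single-box weights. The following brute-force check over all rings of three boxes is a sanity check only (NumPy assumed), not part of the proof:

```python
import numpy as np
from itertools import product

alpha, p = 1.7, 0.6                         # arbitrary test values
A = np.array([[1.0, 0.0], [1.0, 0.0]])
B = p * np.array([[0.0, 1.0], [0.0, 1.0]])
E = np.array([[1.0, 0.0], [0.0, alpha]])
X = {'A': A, 'B': B}
f = {'A': lambda n: 1.0, 'B': lambda n: p * alpha**n}   # single-box weights

for taus in product('AB', repeat=3):        # box types around the ring
    for ns in product(range(3), repeat=3):  # occupation numbers of the boxes
        M = np.eye(2)
        for tau, n in zip(taus, ns):
            M = M @ X[tau] @ np.linalg.matrix_power(E, n)
        w_mpm = np.trace(M)                 # matrix-product weight W(C)
        w_zrp = 1.0
        for tau, n in zip(taus, ns):
            w_zrp *= f[tau](n)              # factorized ZRP weight
        assert abs(w_mpm - w_zrp) < 1e-12
```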
Let us finally turn to the case $p=0$. It is clear from Eqs.~(\ref{Weights ZRP}),~(\ref{WeightA}) and~(\ref{WeightB}) that the stationary state weight of the ZRP consists only of the weights of $A$-boxes containing particles. Defining $\langle N_{A}\rangle$ as the average number of $A$-boxes and $\langle n\rangle$ as the average number of particles in an $A$-box, and recalling the dynamical rules of the non-conserving ZRP, (\ref{A particlesDensityP=0}) and (\ref{VacanciesDensityP=0}), we observe different behaviors for $\langle N_{A}\rangle$ and $\langle n\rangle$, namely
\begin{itemize}
\item
for $p=0,\, \alpha<\alpha_{c}$, $\langle N_{A}\rangle$ and $\langle n\rangle$ are finite.
\item
for $p=0,\, \alpha>\alpha_{c}$, $\langle N_{A}\rangle={\mathcal O}(1)$ and $\langle n\rangle={\mathcal O}(L)$.
\end{itemize}
Therefore, we have a condensation transition where a large number of particles accumulate in a single $A$-box.
\color{black}
\section{Numerical results}
\label{sec:numerics}
Since all stationary properties of the model defined in (\ref{DYNAMICALRULES}) can be computed exactly, our numerical simulations focus on its dynamical evolution. As we will see, the dynamical behavior is affected by strong scaling corrections which will be explained heuristically in Sect.~\ref{sec:toymodel}.
\begin{figure}
\centering\includegraphics[width=\linewidth]{num-decay.pdf}
\vspace*{-6mm}
\caption{Decay of the order parameters $\rho_{A,B}$ at the critical point in a very large system with $10^5$ sites with random initial conditions (see text).}
\label{fig:num-decay}
\end{figure}
\subsection{Decay of $\rho_A$ and $\rho_B$ at the critical point}
At the critical point $p=0$, $\alpha=2$ we have $p_+/p_-=p=0$ and $\alpha_+/\alpha_-=2$, implying $p_+=0$, meaning that at this point the model is controlled by the two parameters $\alpha_+$ and $p_-$. In Fig.~\ref{fig:num-decay} we measured the time dependence of both order parameters for $p_-=1$ and various values of $\alpha_+$, starting with a random initial state with $\rho_A(0)=\rho_B(0)=1/3$. The behavior turns out to be qualitatively similar in all cases: While the density $\rho_A(t)$ seems to increase slightly, the density $\rho_B(t)$ shows a decay reminiscent of a power law $\rho_B(t)\sim t^{-\delta}$. However, if we first estimate the exponent $\delta\approx 0.57$ and then divide the data by $t^{-\delta}$, one observes a significant curvature of the data: The effective exponent $\delta_{\rm eff}$ decreases from 0.6 down to 0.57 without having reached a stable value in the numerically accessible regime, indicating strong scaling corrections.
It turns out that the effective exponent depends strongly on the particle densities in the initial state. This freedom can be used to reduce the influence of the scaling corrections. Choosing for example a random initial configuration with $\rho_A(0)=0.9$ and $\rho_B(0)=0.1$ one obtains a less pronounced curvature of $\rho_B(t)$ with an effective exponent of only $\delta\approx 0.51$. This suggests that the asymptotic exponent is $\delta=1/2$.
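Effective exponents of this kind can be extracted from the measured decay by means of local slopes in a double-logarithmic plot. The following Python sketch illustrates such a local-slope estimator on synthetic data with a logarithmic correction; the functional form of the correction and the factor-of-two window are illustrative assumptions, not the analysis pipeline used for Fig.~\ref{fig:num-decay}:

```python
import math

def local_slopes(ts, rhos, window=2.0):
    """Effective exponent delta_eff(t) = -d ln(rho)/d ln(t), estimated from
    pairs of points separated by roughly a factor `window` in time.
    (The window size is an illustrative choice.)"""
    out = []
    for i, t in enumerate(ts):
        target = window * t
        j = min(range(len(ts)), key=lambda k: abs(ts[k] - target))
        if j == i:
            continue  # no later point available
        slope = (math.log(rhos[j]) - math.log(rhos[i])) / \
                (math.log(ts[j]) - math.log(ts[i]))
        out.append((t, -slope))
    return out

# synthetic decay with a logarithmic correction, rho ~ t^{-1/2} (1 + a/ln t):
# the effective exponent exceeds 1/2 and drifts slowly towards it
ts = [10.0 * 2 ** k for k in range(20)]
rhos = [t ** -0.5 * (1.0 + 2.0 / math.log(t)) for t in ts]
slopes = local_slopes(ts, rhos)
print(slopes[0], slopes[-1])
```

On such data the estimator produces an effective exponent that drifts slowly towards $1/2$ without reaching it in any finite time window, mimicking the behavior observed in the simulations.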
\subsection{Finite-size scaling}
Using the initial condition $\rho_A(0)=0.9$ and $\rho_B(0)=0.1$ we repeated the simulation in finite systems. The results are plotted in the left panel of Fig.~\ref{fig:num-fs}, where we divided $\rho_B(t)$ by the expected power law $t^{-1/2}$ so that an infinite system should produce an asymptotically horizontal line. As can be seen, a finite system size leads to a sudden breakdown of $\rho_B(t)$ while there is no change in $\rho_A(t)$. Plotting the same data against $t/L^z$ (right panel), where $z=\nu_\parallel/\nu_\perp$ is the dynamical exponent, the best data collapse is obtained for $z=2$. This is plausible since so far all systems which have been solved by means of matrix product methods are essentially diffusive with a dynamical exponent $z=2$.
\begin{figure}
\centering\includegraphics[width=\linewidth]{num-fs.pdf}
\vspace*{-6mm}
\caption{Finite-size scaling at the critical point (see text).}
\label{fig:num-fs}
\end{figure}
\subsection{Off-critical simulations}
Finally we investigate the two-dimensional vicinity of the critical point where
\begin{equation}
\Delta\alpha \;=\; \alpha-\alpha_c \;=\; \alpha-2
\end{equation}
as well as $p$ are small. First we choose $\Delta\alpha=0$ and study the model for $p>0$. In this case the order parameter $\rho_B(t)$ first decays as if the system was critical until it saturates at a constant value, as shown in the inset of Fig.~\ref{fig:num-offcrit}.
Surprisingly, $\rho_B(t)$ first goes through a local minimum and then increases again before it reaches the plateau. This phenomenon of \textit{undershooting} has also been observed in conserved sandpile models~\cite{SandpilesDP} and may indicate that the system has a long-time memory for specific correlations in the initial state. Plotting $\rho_B(t) t^{1/2}$ against $t p^{\nu_\parallel}$ one finds an excellent data collapse for $\nu_\parallel=1.00(5)$, indicating that $\nu_\parallel=1$.
Next we keep $p=0$ fixed and vary $\Delta\alpha$. For $\Delta\alpha<0$ one finds that the density $\rho_B(t)$ crosses over to an exponential decay. For $\Delta\alpha>0$, where one expects supercritical behavior, $\rho_B(t)$ does \textit{not} saturate at a constant; instead it first decreases as $t^{-1/2}$, followed by a short period of decelerated decay, until it continues to decay as $t^{-1/2}$. This means that $\alpha>2$ causes an increase of the amplitude but not a crossover to a different type of decay. To our knowledge this is the first example of a crossover from a power law to the same power law with a different amplitude.
Plotting $\rho_B(t) t^{1/2}$ against $t (\Delta\alpha)^{\eta_\parallel}$ the data collapse is unsatisfactory due to the scaling corrections discussed above. However, the best compromise is obtained for $\eta_\parallel=1.9(2)$, which is compatible with $\eta_\parallel=2$.
\begin{figure}
\centering\includegraphics[width=\linewidth]{num-offcrit.pdf}
\vspace*{-6mm}
\caption{Data collapses for off-critical simulations. Left: Variation of $p$ in the range $0.0001,0.0002,\ldots,0.4096$. The inset shows the corresponding raw data. Right: Variation of $\Delta \alpha=\alpha-2$ in the range $\pm 0.001$, $\pm 0.002$, $\ldots \pm 0.512$. }
\label{fig:num-offcrit}
\end{figure}
\subsection{Phenomenological scaling properties}
Apart from the scaling corrections which will be discussed in the following section, the collected numerical results suggest that the process in the vicinity of the critical point is invariant under scale transformations of the form
\begin{align}
\label{ScalingScheme}
&t \to \Lambda^{\nu_\parallel} t\,, \qquad L \to \Lambda^{\nu_\perp} L\,, \qquad \rho_B \to \Lambda^{\beta}\rho_B \notag\\
&p \to \Lambda p \,, \qquad \Delta \alpha \to \Lambda^\theta \Delta\alpha\,,
\end{align}
where $\theta=\nu_\parallel/\eta_\parallel$ is the crossover exponent between the two control parameters.
Assuming that the critical behavior is described by simple rational exponents, our findings suggest that the universality class of the process is characterized by four exponents $\beta=1/2\,,\quad \nu_\parallel=1\,,\quad \nu_\perp=1/2 \,,\quad \theta=1/2$ together with the scaling relations
\begin{align}
&\delta = \frac{\beta}{\nu_\parallel}=\frac12 \\
&z = \frac{\nu_\parallel}{\nu_\perp} = \frac{\eta_\parallel}{\eta_\perp}=2\\
&\theta=\frac{\nu_\parallel}{\eta_\parallel}=1/2\,.
\end{align}
The values of the exponents are listed in Table~\ref{tab:exponents}. Regarding the stationary properties for $p>0$, these exponents are in full agreement with the exact solution in Sect.~\ref{sec:exact}.
The scaling scheme (\ref{ScalingScheme}) implies various scaling relations. For example, it allows us to predict that the stationary density of $B$-particles in the vicinity of the critical point should scale as
\begin{equation}
\label{StationaryScaling}
\rho_B^{\rm stat} \;=\; p^\beta F\Bigl(\frac{(\Delta\alpha)^2}{p}\Bigr)\,,
\end{equation}
where $F$ is a universal scaling function. Comparing this form with the exact result (\ref{B particlesDensityTrmo}) we find that
\begin{equation}
F(\xi)=\frac{1}{2 \sqrt{4+\xi}}\,.
\end{equation}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$\quad\beta\quad$ & $\quad\nu_\perp\quad$ & $\quad\nu_\parallel\quad$ & $\quad z\quad$ & $\quad\eta_\perp\quad$ & $\quad\eta_\parallel\quad$ & $\quad\theta\quad$ & $\quad\delta\quad$\\
\hline
$1/2$ & $1/2$ & $1$ & $2$ & $1$ & $2$ & $1/2$ & $1/2$\\
\hline
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{Expected values of the critical exponents. \label{tab:exponents}}
\end{table}
\section{Heuristic explanation of the critical behavior}
\label{sec:toymodel}
\subsection{Reduction to an effective model}
The model investigated above can be related to an effective process of pair-creating and annihilating random walks. As we will see below, this effective model captures the phase structure and the essential critical properties of the full model.
\begin{figure*}[t]
\centering\includegraphics[width=0.9\linewidth]{toymodel.pdf}
\vspace*{-3mm}
\caption{Motivation of the reduced model (see text). (a) Temporal evolution of the original model for $\alpha=2$ and $p=0.01$ in a vertically compressed representation with 20 Monte-Carlo sweeps per pixel. As before, particles of type $A$ and $B$ are marked by red and blue pixels, respectively. (b) Illustration of compactified $A$-domains. (c)~Kink representation, interpreted as a pair-creating and annihilating diffusion process. (d) Removal of the overall bias. }
\label{fig:toymodel}
\end{figure*}
The starting point is the observation that the original model, especially close to the critical point, tends to form dense and sharply bounded domains of $A$-particles, while the $B$-particles are sparsely distributed. The $A$-domains are not compact; rather, they are interspersed with little patches of empty sites. As can be seen in Fig.~\ref{fig:toymodel}a, these small voids inside the $A$-domains do not exceed a certain typical size. This suggests that they can be regarded as some kind of local noise which is irrelevant for the critical behavior on large scales, meaning that we may disregard them and consider the $A$-domains effectively as compact objects, as shown schematically in Fig.~\ref{fig:toymodel}b.
Secondly we note that the $B$-particles in the full model are predominantly located at the right boundary of the $A$-domains. This suggests that the dynamics can be encoded effectively in terms of the left and right boundaries of the $A$ domains, interpreted as charges $-$ and~$+$ (see Fig.~\ref{fig:toymodel}c). In this kink representation, the negative charges can be identified with the $B$-particles in the original model, while the positive charges can be understood as marking the left boundary of $A$-domains.
Thirdly, we observe that the dynamics of the original model is biased to the right. In the kink representation, an overall bias does not change the critical properties of the model and can be eliminated in a co-moving frame, as sketched schematically in Fig.~\ref{fig:toymodel}d.
Having completed this sequence of simplifications, the original process can be interpreted as an effective pair-creating and annihilating random walk of $+$ and $-$ charges according to the reaction-diffusion scheme
\begin{align}
\label{toyrules}
+\emptyset \stackrel{\lambda}\longrightarrow \emptyset + \qquad \emptyset + \stackrel{1/\lambda}\longrightarrow +\emptyset\notag\\
-\emptyset \stackrel{1/\lambda}\longrightarrow \emptyset - \qquad \emptyset - \stackrel{\lambda}\longrightarrow -\emptyset\\
-+ \stackrel{1}\longrightarrow \emptyset\emptyset\qquad\;\;
\emptyset\emptyset \stackrel{q}\longrightarrow -+\notag
\end{align}
Here the parameter $\lambda$ controls the relative bias between the two particle species and thus it is expected to play the same role as $\alpha$ in the full model, although with a different critical value $\lambda_c=1$. The other parameter $q$ controls the rate of spontaneous pair creation and therefore plays a similar role as $p$ in the original model.
The reduced process starts with an alternating initial configuration $+-+-+-...$, where $\rho_+(0)=\rho_-(0)=1/2$. As time evolves, particles are created and annihilated in pairs, meaning that the two densities
\begin{equation}
\rho_+(t) = \rho_-(t)\,
\end{equation}
are exactly equal. These densities are expected to play the same role as the order parameter $\rho_B(t)$ in the original model.
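Since the reduced dynamics (\ref{toyrules}) is fully specified, it is straightforward to simulate. The Python sketch below uses plain random-sequential updates on a periodic lattice with all rates normalized by the largest one; this simple scheme is chosen for clarity, not for the efficient list-based implementation described below:

```python
import random

def simulate_reduced_model(L=1000, steps=100, lam=1.0, q=0.0, seed=2):
    """Random-sequential-update sketch of the charge model: +/- charges hop
    with biases lam and 1/lam, adjacent -+ pairs annihilate at rate 1 and
    empty pairs create -+ at rate q."""
    random.seed(seed)
    s = [(+1 if i % 2 == 0 else -1) for i in range(L)]  # alternating +-+-...
    rmax = max(lam, 1.0 / lam, 1.0, q)
    densities = []
    for _ in range(steps):
        for _ in range(L):                 # one Monte-Carlo sweep = L attempts
            i = random.randrange(L)
            j = (i + 1) % L                # periodic boundary conditions
            a, b = s[i], s[j]
            r = None
            if a == -1 and b == +1: r, new = 1.0, (0, 0)          # -+ -> 00
            elif a == 0 and b == 0: r, new = q, (-1, +1)          # 00 -> -+
            elif a == +1 and b == 0: r, new = lam, (0, +1)        # + hops right
            elif a == 0 and b == +1: r, new = 1.0 / lam, (+1, 0)  # + hops left
            elif a == -1 and b == 0: r, new = 1.0 / lam, (0, -1)  # - hops right
            elif a == 0 and b == -1: r, new = lam, (-1, 0)        # - hops left
            if r is not None and random.random() < r / rmax:
                s[i], s[j] = new
        densities.append(sum(1 for x in s if x == +1) / L)
    return densities

rho = simulate_reduced_model()
print(rho[0], rho[-1])   # density of + charges decays at the critical point
```

Note that a $+-$ configuration has no allowed move, so adjacent $+-$ pairs block each other, in line with the effective force discussed in Sect.~\ref{sec:toymodel}.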
\subsection{Numerical results for the reduced model}
The reduced model has the advantage that it can be implemented very efficiently on a computer by storing the coordinates of the kinks in a dynamically generated list. Simulating the model we find the following results:
\begin{itemize}
\item
$q>0$: The model evolves into a stationary state with a constant density $\rho_{+}=\rho_-$, qualitatively reproducing the corresponding results for the full model shown in the right panel of Fig.~\ref{fig:rho}.
\item
$q=0, \,\lambda>1$: Positive charges move to the right and negative charges move to the left until they form bound $+-$ pairs which perform a slow unbiased random walk. If two such pairs collide they coagulate into a single one by the effective reaction $+-+-\to +-$. Therefore, one expects the density of particles to decay as $t^{-1/2}$ in the same way as in a coagulation-diffusion process~\cite{Coagulation}.
\item
$q=0,\, \lambda=1$: At the critical point the particle density seems to decay somewhat faster than $t^{-1/2}$. The origin of these scaling corrections will be discussed below.
\item
$q=0, \,\lambda<1$: In this case the negative charges diffuse to the right while positive charges diffuse to the left. When they meet they quickly annihilate in pairs, reaching an empty absorbing state in exponentially short time.
\end{itemize}
\begin{figure}
\centering\includegraphics[width=\linewidth]{decay.pdf}
\vspace{-7mm}
\caption{Numerical simulation of the reduced model with $L=10^7$ sites simulated at the critical point. Left panel: Decay of the particle density $\rho(t)$. The green dashed straight line visualizes the slow curvature of the data, indicating persistent scaling corrections. Right panel: Corresponding local slopes plotted against $1/\ln(t)$, interpreted as an effective critical exponent $-\delta_{\rm eff}(t)$. A visual extrapolation along the red dashed line to $t\to \infty$ is consistent with the expected asymptotic exponent $\delta=0.5$.}
\label{fig:decay}
\end{figure}
Therefore, the reduced model exhibits the same type of critical behavior as the full model. Moreover, repeating the standard simulations of Sect.~\ref{sec:numerics} (not shown here) we obtain similar estimates of the critical exponents.
\subsection{Explaining the scaling corrections heuristically}
Performing extensive numerical simulations of the reduced model at the critical point over seven decades in time (see Fig.~\ref{fig:decay}) one can see a clear curvature in the double-logarithmic plot. Unlike initial transients in other models, this curvature seems to persist over the whole temporal range. To confirm this observation, we plotted the corresponding local exponent $\delta_{\rm eff}$ against $1/\ln(t)$ in the right panel of the figure. If the curve is extrapolated visually to $t \to \infty$, the most likely extrapolation limit is indeed $\delta=1/2$, confirming our previous conjecture in the case of the full model.
Where do the slow scaling corrections come from? This question is of general interest because various other nonequilibrium phase transitions, where the universal properties are not yet fully understood, show similar corrections. For example, the diffusive pair contact process~\cite{PCPD} and fixed-energy sandpiles~\cite{MannaDP,SandpilesDP} both exhibit a similar slow curvature of the particle decay at the critical point. Here we have a particularly simple system with an exactly known critical point, where the origin of the slow scaling corrections can be identified much more easily.
To explain the scaling corrections heuristically, let us consider the pair annihilation process defined in (\ref{toyrules}) at the critical point starting with an alternating initial configuration ($+-+-+-...$). We first note that this process has the special property that pairs of particles which eventually annihilate must have been nearest neighbors in the initial configuration. In so far this process differs significantly from the usual annihilation process $2A\to \emptyset$, where in principle any pair can annihilate.
If the process had started with only a single $-+$ pair, both particles would perform a simple random walk until they collide and annihilate. In this case the annihilation probability would be related to the first-return probability of a random walk~\cite{SidRedner}. Since the first-return probability is known to scale as $t^{-3/2}$ in one spatial dimension, the life time of the pair, which is obtained by integration over time, would decay as $t^{-1/2}$. However, in the present case the $-+$ pair is interacting with other pairs to the left and to the right. These neighboring pairs impose a kind of non-reactive fluctuating boundary, limiting the space in which the random walk of the two particles can expand. In other words, the neighboring pairs lead to a small effective force, pushing the two charges towards each other. This in turn enhances the frequency of annihilation events, explaining qualitatively why the particle density first decays faster than $t^{-1/2}$.
However, as time proceeds the accelerated decay of the particle density leads to a corresponding increase of the average distance between the particles, which grows faster than $t^{1/2}$. Since the average distance between the $-$ and $+$ charges within a pair cannot grow faster than $t^{1/2}$, this implies that the average distance between a $+$ charge and the subsequent $-$ charge, i.e., between adjacent pairs, has to grow faster than $t^{1/2}$, as we could confirm by numerical measurements in Fig.~\ref{fig:expl}. This in turn implies that the effective force mentioned above decreases with time.
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{scaledpic.pdf}
\centering\includegraphics[width=\linewidth]{expl}
\vspace{-5mm}
\caption{Explanation of the accelerated decay in the reduced charge model. The upper panel shows a typical snapshot of the process at the critical point monitored over long time. As can be seen, the process preferentially forms $-+$ pairs separated by large empty intervals. This impression is confirmed by a measurement of the average distance between neighboring charges shown in the lower left panel. Likewise, the average number of adjacent $+-$ and $-+$ pairs evolves differently.}
\label{fig:expl}
\end{figure}
To find out how fast the effective force decreases with time, we first note that the force is caused by adjacent $+-$ pairs which cannot penetrate each other. A numerical measurement shows that the number of $+-$ pairs decays in the same way as the squared particle density, i.e. like in a mean-field approximation (see right panel of Fig.~\ref{fig:expl}), while the number of $-+$ pairs is -- as expected -- proportional to the particle loss:
\begin{equation}
n_{+-}(t) \sim \rho^2(t) \,, \qquad n_{-+}(t) \sim \dot\rho(t).
\end{equation}
Therefore, we expect the effective force to be proportional to $\rho^2(t)$ which roughly scales as $t^{-1}$. Thus we conclude that the particle density of the pair annihilation process at the critical point (and similarly in the full model) decays in the same way as the survival probability of a one-dimensional random walk starting at the origin subjected to a time-dependent bias proportional to $1/t$ towards the origin, terminating upon the first passage of the origin. In fact, simulating such a random walk we find slowly-decaying logarithmic corrections of the same type, confirming the heuristic arguments given above. To our knowledge an exact solution of a first-passage random walk with time-dependent bias is not yet known.
\section{Conclusions}
\label{sec:conclusions}
In this work we have introduced and studied a two-species reaction-diffusion process on a one-dimensional periodic lattice which exhibits a nonequilibrium phase transition. Its stationary state can be determined exactly by means of the matrix product method. Together with numerical studies of the dynamics we have identified the critical exponents which are listed in Table~\ref{tab:exponents}. The transition can be explained qualitatively by relating the model to a reduced process (see Sect.~\ref{sec:toymodel}). This relation also provides a heuristic explanation of the unusual corrections to scaling observed in this model.
Our findings seem to be in contradiction with a previous claim by one of the authors~\cite{Comment,Book} that first-order phase transitions in non-conserving systems with fluctuating domains should be impossible in one dimension. In \cite{Comment} it was argued that a first-order transition needs a robust mechanism in order to eliminate spontaneously generated minority islands of the opposite phase, but this would be impossible in 1D because in this case the minority islands do not have surface tension. Although this claim was originally restricted to two-state models, the question arises why we find the contrary in the present case.
Again the caricature of the reduced process sketched in Fig.~\ref{fig:toymodel}a provides a possible explanation: As can be seen there are two types of white patches, namely, large islands with a blue $B$-particle at the left boundary, and small islands without. This means that the $B$-particles are used for marking two different types of vacant islands, giving them different dynamical properties. Only the large islands containing a $B$-particle are minority islands in the sense discussed in~\cite{Comment}, while the small islands without $B$-particles inside the $A$-domains are biased to shrink by themselves.
Therefore, we arrive at the conclusion that first-order phase transitions in non-conserving 1D systems with fluctuating domains are indeed possible in certain models with several particle species if one of the species is used for marking different types of minority islands.
\section{Introduction}
The theory of complex networks \cite{Dorogovtsev:2003, Newman:2003, Boccaletti:2006, Caldarelli:2007, Doro_review,Barrat:2008} has flourished thanks to the availability of new datasets on large complex systems, such as the Internet or the interaction networks inside the cell.
In the last ten years attention has focused mainly on static or growing complex networks, with little emphasis on the rewiring of the links. The topology of these networks and their modular structure \cite{Fortunato, Palla:2007,Lehmann, Bianconi:2009} are able to affect the dynamics taking place on them \cite{Doro_review, Barrat:2008,Ising,Ising_spatial}.
Only recently temporal networks \cite{Holme:2005,Latora:2009,Havlin:2009,Cattuto:2010, Isella:2011,Holme:2012}, dominated by the dynamics of rewirings, are starting to attract the attention of quantitative scientists working on complexity.
Social interaction networks are among the most beautiful examples of temporal networks.
Indeed, social networks \cite{Granovetter:1973, Wasserman:1994} are intrinsically dynamical and social interactions are continuously formed and dissolved.
Recently we are gaining new insights into the structure and dynamics of these temporal social networks, thanks to the availability of a new generation of datasets recording social interactions on the fast time scale. In fact, on one side we have data on face-to-face interactions coming from mobile user device technology \cite{Eagle:2006,Hui:2005} or Radio-Frequency-Identification-Devices \cite{Cattuto:2010,Isella:2011}; on the other side, we have extensive datasets on mobile-phone calls \cite{Onnela:2007} and agent mobility \cite{Brockmann:2006, Gonzalez:2008}.
This new generation of data has changed drastically the way we look at social networks. In fact, the adaptability of social networks is well known and several models have been suggested for the dynamical formation of social ties and the emergence of connected societies \cite{Bornholdt:2002,Marsili:2004,Holme:2006,MaxiSanMiguel:2008}. Nevertheless, the strength and nature of a social tie remained difficult to quantify for several years despite the careful sociological description by Granovetter \cite{Granovetter:1973}. Only recently, with the availability of data on social interactions and their dynamics on the fast time scale, it has become possible to assign to each acquaintance the strength or weight of the social interaction quantified as the total amount of time spent together by two agents in a given time window \cite{Cattuto:2010}.
The recent data revolution in social sciences is not restricted to data on social interaction but concerns all human activities \cite{Barabasi:2005,Vazquez:2006,Rybski:2009, Amaral:2009}, from financial transactions to mobility. From these new data on human dynamics evidence is emerging that human activity is bursty and is not described by Poisson processes \cite{Barabasi:2005, Vazquez:2006}. Indeed, a universal pattern of bursty activities has been observed in human dynamics such as broker activity, library loans or email correspondence. Social interactions are not an exception, and there is evidence that face-to-face interactions have a distribution of duration well approximated by a power law \cite{Cattuto:2010, Scherrer:2008,Stehle:2010, Zhao:2011, Karsai:2012}, while they remain modulated by circadian rhythms \cite{Karsai:2011b}.
The bursty activity of social networks has a significant impact on dynamical processes defined on networks \cite{Vazquez:2007, Karsai:2011a}.
Here we compare these observations with data coming from a large dataset of mobile-phone communication \cite{PlosOne, Frontiers} and show that human social interactions, when mediated by a technology such as mobile-phone communication, demonstrate the adaptability of human behavior. Indeed, the distribution of call durations no longer follows a power law but has a characteristic scale determined by the weights of the links, and is described by a Weibull distribution. At the same time, however, this distribution remains bursty and strongly deviates from a Poisson distribution.
We will show that both the power-law distribution of durations of social interactions observed in face-to-face interaction datasets and the Weibull distribution of durations observed in mobile-phone communication activity can be explained phenomenologically by a model with a reinforcement dynamics \cite{Stehle:2010, Zhao:2011,PlosOne, Frontiers} responsible for the deviation from a pure Poisson process.
In this model, the longer two agents interact, the smaller is the probability that they split apart, and the longer an agent is non interacting, the less likely it is that he/she will start a new social interaction.
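A minimal illustration of such a reinforcement mechanism, assuming a termination probability $b/\tau$ for an interaction of current duration $\tau$ (an illustrative choice, not the exact rates of the models cited above), already produces a fat-tailed distribution of contact durations:

```python
import random

def interaction_durations(n_events=3000, b=0.6, seed=7):
    """Toy reinforcement dynamics: an ongoing interaction of current
    duration tau terminates with probability b/tau at each step, so the
    longer the contact has lasted, the less likely it is to stop.
    The form b/tau yields a power-law tail P(tau) ~ tau^{-(1+b)}."""
    random.seed(seed)
    durations = []
    for _ in range(n_events):
        tau = 1
        while random.random() >= b / tau:
            tau += 1
            if tau > 10 ** 5:  # safety cutoff for the sketch
                break
        durations.append(tau)
    return durations

d = interaction_durations()
print(min(d), max(d), sum(1 for x in d if x > 100) / len(d))
```

Most contacts are short, but a non-negligible fraction of them lasts orders of magnitude longer, which is the qualitative signature of the reinforcement dynamics.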
We observe here that this framework is also necessary to explain the group formation in simple animals \cite{Bisson}. This suggests that the reinforcement dynamics of social interactions, much like the Hebbian dynamics, might have a neurobiological foundation. Furthermore, this is supported by the results on the bursty mobility of rodents \cite{Chialvo} and on the recurrence patterns of words encountered in online conversations \cite{Motter}.
We have therefore found ways to quantify the adaptability of human behavior to different technologies.
We observe here that this change of behavior corresponds to the very fast time dynamics of social interactions and it is not related to macroscopic change of personality consistently with the results of \cite{Lambiotte2} on online social networks.
Moreover, temporal social networks encode information \cite{Cover:2006} in their structure and dynamics.
This information is necessary for efficiently navigating \cite{Kleinberg,WS} the network, and to build collaboration networks \cite{Newman:2001} that are able to enhance the performance of a society.
Recently, several authors have focused on measures of entropy and information for networks.
The entropy of network ensembles is able to quantify the information encoded in structural features of networks such as the degree sequence, the community structure, and the physical embedding of the network in a geometric space \cite{Bianconi:2008,Anand:2009, Bianconi:2009}. The entropy rate of a dynamical process on a network, such as a biased random walk, is also able to characterize the interplay between the structure of the network and the dynamics occurring on it \cite{Latora_biased}. Finally, the mutual information for the data of email correspondence was shown to be fruitful in characterizing the community structure of networks \cite{Eckmann:2004}, and the entropy of human mobility was able to set the limit of predictability of human movements \cite{Song:2010}.
Here we will characterize the entropy of temporal social networks as a proxy to characterize the predictability of the dynamical nature of social interaction networks.
This entropy will quantify how many typical configurations of social interactions we expect at any given time, given the history of the network dynamical process.
We will evaluate this entropy on a typical day of mobile-phone communication directly from data, showing a modulation of the dynamical entropy during the circadian rhythm. Moreover we will show that when the distribution of duration of contacts changes from a power-law distribution to a Weibull distribution, the level of information and the value of the dynamical entropy change significantly, indicating that human adaptability to new technology is a further way to modulate the information content of dynamical social networks.
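To fix ideas, the notion of an entropy counting typical configurations can be illustrated with a plain Shannon entropy over an empirical sequence of interaction patterns; the labels below are invented for the toy example and this is not the estimator applied to the mobile-phone data later in the chapter:

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy (in bits) of an empirical sequence of network
    configurations; 2**H estimates the number of typical configurations."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# toy sequence of hourly interaction patterns (invented labels):
# fewer distinct patterns at night than at midday -> lower entropy
night = ["idle"] * 9 + ["pair-AB"]
midday = ["idle", "pair-AB", "pair-AC", "triangle-ABC", "pair-BC"] * 2
print(shannon_entropy(night), shannon_entropy(midday))
```

A lower entropy corresponds to a more predictable network, which is precisely the circadian modulation we will observe in the data.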
\section{Temporal social networks and the distribution of duration of contacts}
\label{sec:1}
Human social dynamics is bursty, and the distribution of inter-event times follows a universal trend showing power-law tails. This is true for e-mail correspondence events, library loans, and broker activity.
Social interactions are not an exception to this rule, and the distribution of inter-event times between face-to-face social interactions has power-law tails \cite{Barabasi:2005,Vazquez:2006}. Interestingly enough, social interactions have an additional ingredient with respect to other human activities. While sending an email can be considered an instantaneous event characterized by the instant in which the email is sent, social interactions have an intrinsic duration which is a proxy of the strength of a social tie.
In fact, social interactions are the microscopic structure of social ties and a tie can be quantified as the total time two agents interact in a given time-window.
New data on the fast time scale of social interactions have now been gathered with different methods, which range from Bluetooth sensors \cite{Eagle:2006} to the new generation of Radio-Frequency-Identification-Devices \cite{Cattuto:2010,Isella:2011}.
In all these data there is evidence that face-to-face interactions have a duration that follows a distribution with a power-law tail.
Moreover, there is also evidence that the inter-contact times have a distribution with fat tails.
In this chapter we report a figure of Ref. \cite{Cattuto:2010} (Fig. \ref{barrat} of this chapter) in which the duration of contacts in the Radio-Frequency-Device experiments conducted by the SocioPatterns collaboration is clearly fat-tailed and well approximated by a power law (straight line on the log-log plot).
In this figure the authors of Ref. \cite{Cattuto:2010} report the distribution of the duration of binary interactions and the distribution of the duration of triangles of interacting agents. Moreover they report data for the distribution of inter-event times.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.27in ,height=80mm]{barrat}
\end{center}
\caption{ \small{Probability distribution of human social interaction. Figure from \cite{Cattuto:2010}
A) Probability distribution of duration of contacts between any two given persons. Strikingly, the distributions show a similar long-tail behavior independently of the setting or context where the experiment took place or the detection range considered. The data correspond to respectively 8700, 17000 and 600000 contact events registered at the ISI, SFHH and 25C3 deployments. B) Probability distribution of the duration of a triangle. The number of triangles registered are 89, 1700 and 600000 for the ISI, SFHH and 25C3 deployments. C) Probability distribution of the time intervals between the beginning of consecutive contacts AB and AC. Some distributions show spikes (i.e., characteristic timescales) in addition to the broad tail; for instance, the 1 h spike in the 25C3 data may be related to a time structure to fix appointments for discussions.}
}
\label{barrat}
\end{figure}
How do these distributions change when human agents are interfaced with a new technology? This is a major question that arises if we want to characterize the universality of these distributions.
In this book chapter we report an analysis of mobile-phone data and we show evidence of human adaptability to a new technology.
We have analysed the call sequence of subscribers of a major European mobile service provider. In the dataset the users were anonymized and impossible to track. We considered calls between users who called each other mutually at least once during the examined period of $6$ months, in order to examine only calls reflecting trusted social interactions. The resulting event list consists of $633,986,311$ calls between $6,243,322$ users. We have measured the distribution of call durations and of non-interaction times of all the users for the entire 6-month period. The distribution of phone call durations strongly deviates from a fat-tail distribution.
In Fig. \ref{interaction} we report these distributions and show that they depend on the strength $w$ of the interactions (the total duration of contacts in the observed period) but do not depend on the age, gender or type of contract in a significant way.
The distribution $P^w(\Delta t_{in})$ of the duration of contacts between agents with strength $w$ is well fitted by a Weibull distribution
\begin{equation}
\tau^{\star}(w) P^w(\Delta t_{in})=W_{\beta}\left(x=\frac{\Delta t_{in}}{\tau^{\star}(w)}\right)= \frac{1}{x^{\beta}} e^{-\frac{1}{1-\beta}x^{1-\beta}},
\end{equation}
with $\beta\simeq 0.47$.
The typical times of interaction between users, $\tau^{\star}(w)$, depend on the weight $w$ of the social tie. In particular, the values used for the data collapse of Fig. \ref{interaction} are listed in Table \ref{tauw}.
These values are broadly distributed, and there is evidence that such heterogeneity might depend on the geographical distance between the users \cite{Lambiotte}.
The Weibull distribution strongly deviates from a power-law distribution, to the extent that it is characterized by a typical time scale $\tau^{\star}(w)$, while a power-law distribution has no associated characteristic scale.
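As a small numerical sketch of this functional form (with the reported $\beta=0.47$; everything else here is illustrative), note that the survival function implied by $W_{\beta}$ is $S(x)=\exp[-x^{1-\beta}/(1-\beta)]$, so rescaled durations can be drawn by inverse transform:

```python
import math
import random

BETA = 0.47  # exponent reported for the call-duration data

def sample_rescaled_duration(beta=BETA, rng=random):
    """Sample x = dt_in / tau*(w) by inverse transform from the Weibull form
    W_beta(x) = x^(-beta) * exp(-x^(1-beta)/(1-beta)),
    whose survival function is S(x) = exp(-x^(1-beta)/(1-beta))."""
    u = 1.0 - rng.random()  # u in (0, 1]
    return ((1.0 - beta) * (-math.log(u))) ** (1.0 / (1.0 - beta))

def sample_call_duration(tau_star, beta=BETA, rng=random):
    """Call duration (in seconds) for a tie with typical time tau*(w)."""
    return tau_star * sample_rescaled_duration(beta, rng)

if __name__ == "__main__":
    random.seed(1)
    # typical times from Table 1 (weakest and strongest weight classes):
    # after rescaling by tau*(w), the empirical survival at x = 1 should
    # collapse onto S(1) = exp(-1/(1-beta)) for every weight class
    for tau_star in (111.6, 718.8):
        xs = [sample_call_duration(tau_star) / tau_star for _ in range(100_000)]
        frac = sum(x > 1.0 for x in xs) / len(xs)
        print(tau_star, round(frac, 3), round(math.exp(-1.0 / (1.0 - BETA)), 3))
```

The collapse of the rescaled samples onto a single curve mirrors the data collapse of Fig. \ref{interaction}.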
The origin of this significant change in the behavior of human interactions might lie in the cost of the interactions, although we are not in a position to draw this conclusion (see Fig. \ref{pay}, in which we compare the distribution of call durations for people with different types of contract), or it might depend on the different nature of the communication.
The duration of a phone call is quite short and is not affected significantly by the circadian rhythms of the population.
On the contrary, the duration of non-interaction periods is strongly affected by the periodic daily and weekly rhythms.
In Fig. \ref{non-interaction} we report the distribution of the duration of non-interaction periods during the day, between 7AM and 2AM the next day. The typical times $\tau^{\star}(k)$ used in Fig. \ref{non-interaction} are listed in Table \ref{tauk}.
The distribution of non-interaction times is difficult to fit due to the noise arising from the dependence on circadian rhythms. In any case, the non-interaction time distribution is clearly fat-tailed.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=70mm, height=65mm]{proof2}
\end{center}
\caption{ \small{(A) Distribution of duration of phone-calls between two users with weight $w$. The data depend on the typical scale $\tau^{\star}(w)$ of duration of the phone-call.
(B) Distribution of duration of phone-calls for people of different age. (C) Distribution of duration of phone-calls for users of different gender. The distributions shown in the panel (B) and (C) do not significantly depend on the attributes of the nodes. Figure from \cite{PlosOne}.}}
\label{interaction}
\end{figure}
\begin{table}
\caption{Typical times $\tau^{\star}(w)$ used in the data collapse of Fig. \ref{interaction}.}
\label{tauw}
\begin{tabular}{p{5.5cm}p{5.5cm}}
\hline\noalign{\smallskip}
Weight of the link & Typical time $\tau^{\star}(w)$ in seconds (s)\\
(0-2\%) \ \ \ $w_{max}$ &111.6 \\
(2-4\%) \ \ \ $w_{max}$ & 237.8 \\
(4-8\%) \ \ \ $w_{max}$ & 334.4 \\
(8-16\%)\ \ $w_{max}$ & 492.0 \\
(16-32\%) $w_{max}$ & 718.8 \\
\end{tabular}
\end{table}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=70mm, height=60mm]{pay}
\end{center}
\caption{ \small{Distribution of duration of phone-calls for people with different types of contract. No significant change is observed that modifies the functional form of the distribution. Figure from \cite{PlosOne}.}}
\label{pay}
\end{figure}
\begin{table}
\caption{Typical times $\tau^{\star}(k)$ used in the data collapse of Fig. \ref{non-interaction}.}
\label{tauk}
\begin{tabular}{p{5.5cm}p{5.5cm}}
\hline\noalign{\smallskip}
Connectivity & Typical time $\tau^{\star}(k)$ in seconds (s) \\
k=1 &158,594 \\
k=2 &118,047 \\
k=4 & 69,741 \\
k=8 & 39,082 \\
k=16 & 22,824 \\
k=32 & 13,451 \\
\end{tabular}
\end{table}
\section{Model of social interaction}
It has been recognized that human dynamics is not Poissonian. Several models have been proposed to explain a fundamental case study of this dynamics: the data on email correspondence.
The two possible explanations of bursty email correspondence are described in the following.
\begin{itemize}
\item
A queueing model of tasks with different priorities has been suggested to explain the bursty inter-event times. This model implies rational decision making and correlated activity patterns \cite{Barabasi:2005,Vazquez:2006}, and gives rise to a power-law distribution of inter-event times.
\item
A convolution of Poisson processes, due to the different activities during circadian rhythms and weekly cycles, has been suggested to explain the bursty inter-event times. These multiple Poisson processes introduce a set of distinct characteristic time scales in human dynamics, giving rise to fat tails of inter-event times \cite{Malmgren}.
\end{itemize}
In the previous section we have shown evidence that the duration of social interactions is generally non-Poissonian.
Indeed, both the power-law distribution observed for duration of face-to-face interactions and the Weibull distribution observed for duration of mobile-phone communication
strongly deviate from an exponential.
The same can be stated for the distribution of duration of non-interaction times, which strongly deviates from an exponential distribution both for face-to-face interactions and for mobile-phone communication.
In order to explain the data on the duration of contacts we cannot use any of the models proposed for the bursty inter-event times in email correspondence.
In fact, on one side, it is unlikely that the decision to continue a conversation depends on rational decision making; moreover, the queueing model \cite{Barabasi:2005, Vazquez:2006} cannot explain the observed stretched exponential distribution of call durations. On the other side, the duration of contacts is not affected by the circadian rhythms and weekly cycles which are responsible for the bursty behavior in the model of \cite{Malmgren}.
This implies that a new theoretical framework is needed to explain social interaction data.
Therefore, in order to model temporal social networks we have to abandon the commonly adopted assumption that social interactions are generated by a Poisson process.
Under this assumption the probability for two agents to start or to end an interaction is constant in time and not affected by the duration of the social interaction.
Instead, to build a model for human social interactions we have to consider a reinforcement dynamics, in which the probability to start an interaction depends on how long an individual has been non-interacting, and the probability to end an interaction depends on the duration of the interaction itself.
Generally, to model human social interactions, we can consider an agent-based system consisting of $N$ agents that can dynamically interact with each other and give rise to interacting agent groups. In the following subsections we give more details on the dynamics of the models. We denote by the state $n$ of an agent the number of agents in his/her group (including himself/herself); in particular, the state $n=1$ denotes a non-interacting agent. A reinforcement dynamics for such a system is defined in the following frame.
\begin{framed}
\hspace{-.25in} {\bf Reinforcement dynamics in temporal social networks}\\
The longer an agent is interacting in a group the smaller is the probability that he/she will leave the group.\\
The longer an agent is non-interacting the smaller is the probability that he/she will form or join a new group.\\
The probability that an agent $i$ changes his/her state (value of $n$) is given by
\begin{equation}
f_n(t,t_i)=\frac{h(t)}{(\tau+1)^{\beta}}
\label{f}
\end{equation}
where $\tau:=(t-t_i)/N$, $N$ is the total number of agents in the model and $t_i$ is the last time the agent $i$ has changed his/her state, and $\beta$ is a parameter of the model.
The reinforcement mechanism is satisfied by any function $f_n(t,t_i)$ that is decreasing with $\tau$, but the social-interaction data currently available are reproduced only for this particular choice of $f_n(t,t_i)$.
\end{framed}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=110mm, height=45mm]{nointeraction}
\end{center}
\caption{ \small{Distribution of non-interaction times in the phone-call data. The distribution strongly depends on circadian rhythms. The distribution of rescaled time depends strongly on the connectivity of each node. Nodes with higher connectivity $k$ are typically non-interacting for a shorter typical time scale $\tau^{\star}(k)$. Figure from \cite{PlosOne}.}}
\label{non-interaction}
\end{figure}
The function $h(t)$ only depends on the actual time at which the decision is made, and is able to modulate the activity during the day and throughout the weekly rhythms. For the modelling of the interaction data we will first assume that the function $h(t)$ is constant in time.
Moreover, in the following subsections we will show that in order to obtain power-law distributions of the duration of contacts and of non-interaction times (as observed in face-to-face interaction data) we have to take $\beta=1$, while in order to obtain a Weibull distribution of the duration of contacts we have to take $\beta<1$.
Therefore, summarizing the results of the following two sections, we can conclude with the following statement on the adaptability of human social interactions.
\begin{framed}
\hspace{-.25in} {\bf The adaptability of human social interactions}\\
The adaptability of human social interactions to technology can be seen as an effective way to modulate the parameter $\beta$ in Eq. $(\ref{f})$ parametrizing the probability to start or to end the social interactions.
\end{framed}
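The role of $\beta$ can be made concrete with a short sketch (with $h(t)$ constant and an illustrative value $b=1.5$, not fitted to data): for the hazard $f(\tau)=b/(1+\tau)^{\beta}$ the implied survival probability of a state is $(1+\tau)^{-b}$ for $\beta=1$ (power law) and a stretched exponential for $\beta<1$:

```python
import math

def survival(tau, b=1.5, beta=1.0):
    """P(state lasts longer than tau) for the hazard f(tau) = b/(1+tau)^beta.
    b = 1.5 is an illustrative value, not one fitted to data."""
    if beta == 1.0:
        # exp(-b * log(1+tau)): power-law tail, as in face-to-face data
        return (1.0 + tau) ** (-b)
    # exp(-b * ((1+tau)^(1-beta) - 1) / (1-beta)): stretched exponential
    # tail, as in the phone-call data (beta < 1)
    return math.exp(-b * ((1.0 + tau) ** (1.0 - beta) - 1.0) / (1.0 - beta))

if __name__ == "__main__":
    for tau in (1.0, 10.0, 100.0, 1000.0):
        print(tau, survival(tau, beta=1.0), survival(tau, beta=0.47))
```

At large $\tau$ the power-law tail ($\beta=1$) dominates the stretched exponential ($\beta<1$), which is precisely the qualitative difference between the face-to-face and the phone-call data.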
\subsection{Model of face-to-face interactions}
Here we recall the model of face-to-face interactions presented in \cite{Stehle:2010,Zhao:2011} and we delineate the main characteristics and outcomes.
A simple stochastic dynamics is imposed on the agent-based system in order to model face-to-face interactions. Starting from given initial conditions, the dynamics of face-to-face interactions at each time step $t$ is implemented by the following algorithm.
\begin{itemize}
\item[(1)] An agent $i$ is chosen randomly.
\item[(2)] The agent $i$ updates his/her state $n_i=n$ with probability $f_n(t,t_i)$.
If the state $n_i$ is updated, the subsequent action of the agent proceeds with the following rules.
\begin{itemize}
\item[(i)] If the agent $i$ is non-interacting ($n_i=1$), he/she starts an interaction with another non-interacting agent $j$ chosen with probability proportional to $f_1(t, t_j)$. Therefore the coordination number of the agent $i$ and of the agent $j$ are updated ($n_i \rightarrow 2$ and $n_j \rightarrow 2$).
\item[(ii)] If the agent $i$ is interacting in a group ($n_i=n>1$), with probability $\lambda$ the agent leaves the group and with probability $1-\lambda$ he/she introduces a non-interacting agent to the group.
If the agent $i$ leaves the group, his/her coordination number is updated ($n_i \rightarrow 1$) and also the coordination numbers of all the agents in the original group are updated ($n_r \rightarrow n-1$, where $r$ represents a generic agent in the original group). On the contrary, if the agent $i$ introduces another isolated agent $j$ to the group, the agent $j$ is chosen with probability proportional to $f_1(t,t_j)$ and the coordination numbers of all the interacting agents are updated ($n_i \rightarrow n+1$, $n_j \rightarrow n+1$ and $n_r \rightarrow n+1$, where $r$ represents a generic agent in the group).
\end{itemize}
\item[(3)] Time $t$ is updated as $t \rightarrow t+1/N$ (initially $t=0$). The algorithm is repeated from (1) until $t=T_{max}$.
\end{itemize}
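The algorithm above can be sketched in a few lines of code. The following is a minimal illustration (parameters $b_1=b_2=0.7$ and $\lambda=0.8$ are illustrative; group bookkeeping is simplified, time is measured in sweeps, and the $1/N$ rescaling of $\tau$ is absorbed into the sweep time, so that $f_n(\tau)=b_n/(1+\tau)$):

```python
import random

def simulate_f2f(N=100, b1=0.7, b2=0.7, lam=0.8, T=500.0, seed=0):
    """Minimal sketch of the face-to-face model (illustrative parameters).
    Returns a dict mapping each state n to the list of completed durations
    spent in that state."""
    rng = random.Random(seed)
    group = {i: {i} for i in range(N)}   # group[i]: set of agents in i's group
    last = [0.0] * N                     # t_i: last time agent i changed state
    durations = {}
    t = 0.0

    def f(n, tau):                       # transition probability f_n(tau)
        return (b1 if n == 1 else b2) / (1.0 + tau)

    def record(a, old_n):                # agent a leaves state old_n at time t
        durations.setdefault(old_n, []).append(t - last[a])
        last[a] = t

    def pick_isolated(exclude):          # isolated agent chosen with prob ~ f_1
        pool = [j for j in range(N) if len(group[j]) == 1 and j != exclude]
        if not pool:
            return None
        return rng.choices(pool, weights=[f(1, t - last[j]) for j in pool])[0]

    while t < T:
        i = rng.randrange(N)             # step (1): pick a random agent
        n = len(group[i])
        if rng.random() < f(n, t - last[i]):   # step (2): update with f_n
            if n == 1:                   # (i) start a pairwise interaction
                j = pick_isolated(i)
                if j is not None:
                    record(i, 1); record(j, 1)
                    group[i] = group[j] = {i, j}
            elif rng.random() < lam:     # (ii) agent i leaves the group
                g = group[i]
                g.discard(i)
                group[i] = {i}
                record(i, n)
                for r in g:              # remaining members go to state n-1
                    record(r, n)
            else:                        # (ii) i introduces an isolated agent
                j = pick_isolated(i)
                if j is not None:
                    g = group[i]
                    record(j, 1)
                    for r in list(g):    # members go to state n+1
                        record(r, n)
                    g.add(j)
                    group[j] = g
        t += 1.0 / N                     # step (3): time in sweeps
    return durations
```

With $\beta=1$ the completed durations collected in `durations[n]` display power-law tails whose exponent grows with $n$, in line with the mean-field analysis below.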
We consider the reinforcement dynamics with parameter $\beta=1$, such that
\begin{equation}
f_n(t, t')=\frac{b_n}{1+(t-t')/N}.
\label{p}
\end{equation}
In Eq. $(\ref{p})$, for simplicity, we take $b_n=b_2$ for every $n\geq2$, indicating that interacting agents change their state independently of the coordination number $n$.
We note that in this model we assume that everybody can interact with everybody, so that the underlying network is fully connected. This seems to be a very reasonable assumption if we want to model face-to-face interactions in small conferences, which are venues designed to stimulate interactions between the participants. Nevertheless, the model can easily be modified by embedding the agents in a social network, so that interactions occur only between social acquaintances.
In the following we review the mean-field solution of this model. For a detailed description of the solution of this non-equilibrium dynamics the interested reader can see \cite{Stehle:2010, Zhao:2011}. We denote by $N_n(t,t')$ the number of agents in state $n=1,2,\ldots,N$ at time $t$ who have not changed state since time $t'$. In the mean-field approximation, the evolution equations for $N_n(t,t')$ are given by
\begin{eqnarray}
\frac{\partial N_1(t,t')}{\partial t}&=&-2\frac{N_1(t,t')}{N}f_1(t-t')-(1-\lambda)\epsilon(t) \frac{N_1(t,t')}{N}f_1(t-t')+\sum_{i > 1}\pi_{i,2}(t)\delta_{tt'}\nonumber\\
\frac{\partial N_2(t,t')}{\partial t}&=&-2\frac{N_2(t,t')}{N}f_2(t-t')+[\pi_{1,2}(t)+\pi_{3,2}(t)]\delta_{tt'} \nonumber \\
\frac{\partial N_n(t,t')}{\partial t}&=&-n \frac{N_n(t,t')}{N}f_n(t-t') +[\pi_{n-1,n}(t)+\pi_{n+1,n}(t)+\pi_{1,n}(t)]\delta_{tt'},~n> 2.
\label{dNiB}
\end{eqnarray}
In these equations, the parameter $\epsilon(t)$ indicates the rate at which isolated nodes are introduced by another agent into already existing groups of interacting agents. Moreover, $\pi_{mn}(t)$ indicates the transition rate at which agents change their state from $m$ to $n$ (i.e. $m\to n$) at time $t$. In the mean-field approximation the value of $\epsilon(t)$ can be expressed in terms of $N_n(t,t')$ as
\begin{equation} \epsilon(t)=\frac{\sum_{n >
1}\sum_{t'=1}^tN_n(t,t')f_n(t-t')}{\sum_{t'=1}^tN_1(t,t')f_1(t-t')}.
\label{epsilon}
\end{equation}
Assuming that asymptotically in time $\epsilon(t)$ converges to a time
independent variable, i.e. $\lim_{t\to
\infty}\epsilon(t)=\hat{\epsilon}$, the solution to the rate
equations (\ref{dNiB}) in the large time limit is given by
\begin{eqnarray}
\label{NiB}
N_1(t,t')&=&N_1(t',t')\bigg(1+\frac{t-t'}{N}\bigg)^{-b_1[2+(1-\lambda)\hat{\epsilon}]} \nonumber\\
N_2(t,t')&=&N_2(t',t')\bigg(1+\frac{t-t'}{N}\bigg)^{-2b_2} \\
N_n(t,t')&=&N_n(t',t')\bigg(1+\frac{t-t'}{N}\bigg)^{-nb_2} \ \mbox{for} \ n> 2,\nonumber
\end{eqnarray}
with
\begin{eqnarray}
N_1(t',t')&=&\sum_{n > 1}\pi_{n,1}(t')\nonumber \\
N_2(t',t')&=&\pi_{1,2}(t')+\pi_{3,2}(t') \\
N_n(t',t')&=&\pi_{n-1,n}(t')+\pi_{n+1,n}(t')+\pi_{1,n}(t') \ \mbox{for} \ n> 2.\nonumber
\label{pig}
\end{eqnarray}
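The power-law ansatz can be checked directly against the rate equations. A quick numerical sketch (finite differences, with illustrative values $b_2=0.7$, $N=1000$) verifies that $N_2(t,t')=N_2(t',t')\left(1+\frac{t-t'}{N}\right)^{-2b_2}$ solves $\partial_t N_2=-2\frac{N_2(t,t')}{N}f_2(t-t')$ with $f_2(t-t')=b_2/(1+(t-t')/N)$:

```python
N, b2, tprime = 1000.0, 0.7, 0.0   # illustrative values

def f2(tau):
    return b2 / (1.0 + tau / N)

def N2(t):
    # power-law ansatz, normalized so that N2(t', t') = 1
    return (1.0 + (t - tprime) / N) ** (-2.0 * b2)

def residual(t, h=1e-4):
    """Relative mismatch between dN2/dt (central difference) and the
    right-hand side -2 (N2/N) f2(t - t') of the rate equation."""
    lhs = (N2(t + h) - N2(t - h)) / (2.0 * h)
    rhs = -2.0 * N2(t) / N * f2(t - tprime)
    return abs(lhs - rhs) / abs(rhs)

if __name__ == "__main__":
    for t in (10.0, 100.0, 1000.0):
        print(t, residual(t))   # residuals at the level of roundoff
```

The same check works term by term for $N_1$ and $N_n$, $n>2$, with the corresponding exponents.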
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{multigroup_stationary}
\end{center}
\caption{ \small{Distribution $P_n(\tau) $ of durations of
groups of size $n$ in the stationary region. The simulation is
performed with $N=1000$ agents for a number of time steps
$T_{max}=N\times 10^5$. The parameters used are $b_1=b_2=0.7$,
$\lambda=0.8$. The data is averaged over $10$ realizations.}}
\label{Groups_stationary}
\end{figure}
We denote by $P_n(\tau)$ the distribution of the duration of states with coordination number $n$, which satisfies the relation
\begin{equation}
P_n(\tau)=\int_{t'=0}^{t-\tau}f_n(t-t')N_n(t,t')dt'.
\end{equation}
Using Eq. (\ref{p}) and Eqs. (\ref{NiB}) we find that $P_n(\tau)$ satisfies
\begin{eqnarray}
P_1(\tau) &\propto& (1+\tau)^{-b_1[2+(1-\lambda)\hat{\epsilon}]-1} \nonumber \\
P_n(\tau) &\propto& (1+\tau)^{-nb_2-1}.
\label{Pn}
\end{eqnarray}
As shown in Fig. \ref{Groups_stationary}, the analytic predictions of Eqs. (\ref{Pn}) are in good agreement with the computer simulations.
\begin{figure}[h!]
\begin{center}
$\begin{array}{ccc}
\includegraphics[width=30mm, height=30mm]{phase1}&
\includegraphics[width=30mm, height=30mm]{phase2}&
\includegraphics[width=30mm, height=30mm]{phase3}
\end{array}$
\end{center}
\caption{\small{Phase diagram of arbitrary state number $n$: The red area indicates the regime where a large group is formed and the solution is divergent. The blue area indicates the non-stationary regime. The white area indicates the stationary regime. }}
\label{Fig_phase}
\end{figure}
Despite the simplicity of this model, the non-equilibrium dynamics of this system is characterized by a non trivial phase diagram.
The phase-diagram of the model is summarized in Fig.\ref{Fig_phase}. We can distinguish between three phases:
\begin{itemize}
\item{\em Region I - the stationary region: $b_2>0.5$, $b_1>(2\lambda-1)/(3\lambda-3)$ and $\lambda>0.5$ -} This region corresponds to the white area in Fig. \ref{Fig_phase}. The dynamics is stationary and the transition rates between different states are constant in time.
\item{\em Region II - the non-stationary region: $b_2<0.5$ or $b_1<(2\lambda-1)/(3\lambda-3)$, and $\lambda>0.5$ -} This region corresponds to the blue area in Fig. \ref{Fig_phase}. The dynamics is non-stationary and the transition rates between different states decay with time as power-laws.
\item {\em Region III - formation of a big group: $\lambda<0.5$ -} In this region there is an instability towards the formation of a large group of size ${\cal O}(N)$.
\end{itemize}
In both region I and region II the distribution of the duration of groups of size $n$ follows a power-law with an exponent that grows with the group size $n$.
This fact is well reproduced by the face-to-face data \cite{Zhao:2011} and implies the following principle on the stability of groups in face-to-face interactions.
\begin{framed}
\hspace{-.25in} {\bf Stability of groups in face-to-face interactions}
In face-to-face interactions, groups of larger size are less stable than groups of smaller size.
In fact the stability of a group depends on the independent decisions of the $n$ agents in the group to remain in contact.
\end{framed}
\subsection{Model of phone-call communication}
\label{sec3.2}
To model cell-phone communication, we consider once again a system of $N$ agents representing the mobile phone users. Moreover, we introduce a static weighted network $G$, of which the nodes are the agents in the system, the edges represent the social ties between the agents, such as friendships, collaborations or acquaintances, and the weights of the edges indicate the strengths of the social ties. Therefore the interactions between agents can only take place along the network $G$ (an agent can only interact with his/her neighbors on the network $G$).
Here we propose a model for mobile-phone communication constructed with the use of the reinforcement dynamics mechanism. This model shares significant similarities with the previously discussed model for face-to-face interactions, but has two major differences. Firstly, only pairwise interactions are allowed in the case of cell-phone communication; therefore, the state $n$ of an agent only takes the values $1$ (non-interacting) or $2$ (interacting). Secondly, the probability that an agent ends his/her interaction depends on the weight of the network $G$. The dynamics of cell-phone communication at each time step $t$ is then implemented as the following algorithm.
\begin{itemize}
\item[(1)] An agent $i$ is chosen randomly at time $t$.
\item[(2)]
The subsequent action of agent $i$ depends on his/her current state (i.e. $n_i$):
\begin{itemize}
\item[(i)] If $n_i=1$, he/she starts an interaction with one of his/her non-interacting neighbors $j$ of $G$ with probability $f_1(t_i,t)$ where $t_i$ denotes the last time at which agent $i$ has changed his/her state. If the interaction is started, agent $j$ is chosen randomly with probability proportional to $f_1(t_j,t)$ and the coordination numbers of agent $i$ and $j$ are then updated ($n_i \rightarrow 2$ and $n_j \rightarrow 2$).
\item[(ii)] If $n_i=2$, he/she ends his/her current interaction with probability $f_2(t_i,t|w_{ij})$ where $w_{ij}$ is the weight of the edge between $i$ and the neighbor $j$
that is interacting with $i$. If the interaction is ended, the coordination numbers of agent $i$ and $j$ are then updated ($n_i \rightarrow 1$ and $n_j \rightarrow 1$).
\end{itemize}
\item[(3)] Time $t$ is updated as $t \rightarrow t+1/N$ (initially $t=0$). The algorithm is repeated from (1) until $t=T_{max}$.
\end{itemize}
Here we take the probabilities $f_1(t,t^{\prime}), f_2(t,t^{\prime}|w)$ according to the following functional dependence
\begin{eqnarray}
f_1(t,t')&=&f_1(\tau)=\frac{b_1}{(1+\tau)^{\beta}}\nonumber \\
f_2(t,t'|w)&=&f_2(\tau|w)=\frac{b_2g(w)}{(1+\tau)^{\beta}}
\label{f2t}
\end{eqnarray}
where the parameters are chosen in the range $b_1>0$, $b_2>0$, $0 \le\beta\le 1$, $g(w)$ is a positive decreasing function of its argument, and $\tau$ is given by $\tau=(t-t')/N$.
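As for the face-to-face case, this algorithm can be sketched compactly in code. The following minimal implementation is illustrative only: an Erd\H{o}s--R\'enyi graph with random tie strengths stands in for $G$, and $g(w)=1/\sqrt{w}$ is one admissible decreasing choice for $g$.

```python
import random

def simulate_calls(N=200, p_edge=0.05, b1=0.5, b2=0.5, beta=0.5,
                   T=1000.0, seed=0):
    """Sketch of the pairwise phone-call model (illustrative parameters;
    g(w) = w**-0.5 is one admissible decreasing function of the weight).
    Returns (weight, duration) pairs for every completed call."""
    rng = random.Random(seed)
    # static weighted network G: Erdos-Renyi edges with random tie strengths
    nbrs = {i: {} for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p_edge:
                nbrs[i][j] = nbrs[j][i] = rng.choice([1, 2, 4, 8])

    def f1(tau):
        return b1 / (1.0 + tau) ** beta

    def f2(tau, w):
        return b2 * w ** -0.5 / (1.0 + tau) ** beta   # g(w) = 1/sqrt(w)

    partner = [None] * N      # current call partner (None means state n = 1)
    last = [0.0] * N          # t_i: last time agent i changed state
    calls = []
    t = 0.0
    while t < T:
        i = rng.randrange(N)
        if partner[i] is None:                      # (i) try to start a call
            if rng.random() < f1(t - last[i]):
                free = [j for j in nbrs[i] if partner[j] is None]
                if free:
                    j = rng.choices(free,
                                    weights=[f1(t - last[j]) for j in free])[0]
                    partner[i], partner[j] = j, i
                    last[i] = last[j] = t
        else:                                       # (ii) try to end the call
            j = partner[i]
            if rng.random() < f2(t - last[i], nbrs[i][j]):
                calls.append((nbrs[i][j], t - last[i]))
                partner[i] = partner[j] = None
                last[i] = last[j] = t
        t += 1.0 / N
    return calls
```

For $0<\beta<1$ the collected call durations display the stretched-exponential behavior derived below; since $g$ is decreasing, stronger ties (larger $w$) yield longer calls.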
In order to solve the model analytically, we assume the quenched network $G$ to be annealed and uncorrelated. Here we outline the main results of this approach and refer the interested reader to \cite{PlosOne, Frontiers} for the details of the calculations. We therefore assume that the network is rewired while the degree distribution $p(k)$ and the weight distribution $p(w)$ remain constant. We denote by $N_1^k(t,t')$ the number of non-interacting agents with degree $k$ at time $t$ who have not changed their state since time $t'$. Similarly, we denote by $N_2^{k,k',w}(t,t')$ the number of interacting agent pairs (with degrees $k$ and $k'$ respectively, and weight of the edge $w$) at time $t$ who have not changed their states since time $t'$. In the annealed approximation the probability that an agent with degree $k$ is called by another agent is proportional to its degree. Therefore the evolution equations of the model are given by
\begin{eqnarray}
\frac{\partial N_1^k(t,t')}{\partial t}&=&-\frac{N_1^k(t,t')}{N}f_1(t-t')-ck\frac{N_1^k(t,t')}{N}f_1(t-t')+\pi_{21}^k(t)\delta_{tt'} \nonumber\\
\frac{\partial N_2^{k,k',w}(t,t')}{\partial t}&=&-2\frac{N_2^{k,k',w}(t,t')}{N}f_2(t-t'|w)+\pi_{12}^{k,k',w}(t)\delta_{tt'}
\label{dN1w}
\end{eqnarray}
where the constant $c$ is given by
\begin{equation}
c=\frac{\sum_{k'}\int_0^t dt' N_1^{k'}(t,t')f_1(t-t')}{\sum_{k'}k'\int_0^t dt'N^{k'}_1(t,t')f_1(t-t')}.
\label{c_sum}
\end{equation}
In Eqs. $(\ref{dN1w})$ the rates $\pi_{pq}(t)$ indicate the average number of agents changing from state $p=1,2$ to state $q=1,2$ at time $t$.
The solution of the dynamics must of course satisfy the conservation equation
\begin{equation}
\int dt' \big[N_1^k(t,t')+\sum_{k',w} N_2^{k,k',w}(t,t')\big]=Np(k).
\label{N_conserve}
\end{equation}
In the following we will denote by $P^k_1(t,t')$ the probability that an agent with degree $k$ is non-interacting in the period between time $t'$ and time $t$, and by $P^w_2(t,t')$ the probability that an interaction of weight $w$ lasts from time $t'$ to time $t$; these satisfy
\begin{eqnarray}
P_1^k(t,t')&=&(1+ck)f_1(t,t')N_1^k(t,t')\nonumber \\
P_2^w(t,t')&=&2f_2(t,t'|w)\sum_{k,k'}N_2^{k,k',w}(t,t').
\label{P12}
\end{eqnarray}
As a function of the value of the parameter $\beta$ of the model we find different distributions of the duration of contacts and of non-interaction times.
\begin{itemize}
\item {\em Case $0<\beta<1$.}
The system always allows for a stationary solution with $N_1^k (t,t')=N_1^k(\tau)$ and $N_2^{k,k',w}(t,t')=N_2^{k,k',w}(\tau)$.
The distribution of the duration of non-interaction times $P_1^k(\tau)$ for agents of degree $k$ in the network and the distribution of interaction times $P_2^w(\tau)$ for links of weight $w$ are given by
\begin{eqnarray}
P_1^k(\tau)&\propto &\frac{b_1(1+ck)}{(1+\tau)^{\beta}}e^{-\frac{b_1(1+ck)}{1-\beta}(1+\tau)^{1-\beta}}\nonumber \\
P_2^w(\tau)&\propto &\frac{2b_2g(w)}{(1+\tau)^{\beta}}e^{-\frac{2b_2g(w)}{1-\beta}(1+\tau)^{1-\beta}}.
\label{P2kt1}
\end{eqnarray}
Rescaling Eqs. (\ref{P2kt1}), we obtain the Weibull distribution, in good agreement with the results observed in the mobile-phone dataset.
\item {\em Case $\beta=1$.}
Another interesting limiting case of the mobile-phone communication model is $\beta=1$, such that $f_1(\tau)\propto(1+\tau)^{-1}$ and $f_2(\tau|w)\propto(1+\tau)^{-1}$.
In this case the model is very similar to the model used to mimic face-to-face interactions described in the previous subsection \cite{Stehle:2010,Zhao:2011}, but the interactions are binary and occur on a weighted network. In this case we obtain the solution
\begin{eqnarray}
N_1^k(\tau)& = & N\pi_{21}^k(1+\tau)^{-b_1(1+ck)}\nonumber \\
N_2^{k,k',w}(\tau) & = & N\pi_{12}^{k,k',w}(1+\tau)^{-2b_2g(w)}.
\end{eqnarray}
Consequently the distributions of the duration of given states, Eqs. $(\ref{P12})$, are given by
\begin{eqnarray}
P_1^k(\tau) & \propto & \pi_{21}^k(1+\tau)^{-b_1(1+ck)-1}\nonumber \\
P_2^w(\tau) & \propto & \pi_{12}^{k,k',w}(1+\tau)^{-2b_2g(w)-1}.
\end{eqnarray}
The probability distributions are power-laws. This result remains valid for every value of the parameters $b_1$, $b_2$, $g(w)$; nevertheless the stationary condition is only valid for
\begin{eqnarray}
b_1(1+ck)>1\nonumber \\
2b_2g(w)>1.
\end{eqnarray}
Indeed this condition ensures that the self-consistent constraint Eq. (\ref{c_sum}) and the conservation law Eq. (\ref{N_conserve}) have a stationary solution.
\item {\em Case $\beta=0$.}
This is the case in which the process described by the model is a Poisson process and there is no reinforcement dynamics in the system.
Therefore the distributions of durations are exponential.
In fact, for $\beta =0$ the functions $f_1(\tau)$ and $f_2(\tau|w)$ given by Eqs. $(\ref{f2t})$ reduce to constants, and the process of creation of an interaction is a Poisson process; the social interactions do not follow the reinforcement dynamics. The solution that we obtain for the number of non-interacting agents of degree $k$, $N_1^k(\tau)$, and the number of interacting pairs, $N_2^{k,k',w}(\tau)$, is given by
\begin{eqnarray}
N_1^k(\tau)&=&N\pi_{21}^ke^{-b_1(1+ck)\tau}\nonumber \\
N_2^{k,k',w}(\tau)&=&N\pi_{12}^{k,k',w}e^{-2b_2g(w)\tau}.
\end{eqnarray}
Consequently the distributions of duration of given states Eqs. $(\ref{P12})$ are given by
\begin{eqnarray}
P_1^k(\tau) &\propto& e^{-b_1(1+ck)\tau}\nonumber \\
P_2^w(\tau) &\propto& e^{-2b_2g(w)\tau}.
\end{eqnarray}
Therefore the probability distributions $P_1^k(\tau)$ and $P_2^w(\tau)$ are exponentials as expected in a Poisson process.
\end{itemize}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.27in]{group_new-1}
\end{center}
\caption{ \small{The dynamical social networks are composed of different dynamically changing groups of interacting agents. (A) Only groups of size one or two are allowed, as in phone-call communication. (B) Groups of any size are allowed, as in face-to-face interactions. }}
\label{fig1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.27in]{entropy_vs_t_new2}
\end{center}
\caption{ \small{Evaluation of the entropy of the dynamical social networks of phone-call communication during a typical week-day. During the night the social dynamical network is more predictable. Figure from \cite{PlosOne}.}}
\label{entropyt}
\end{figure}
\section{Entropy of temporal social networks}
In this section we introduce the entropy of temporal social networks as a measure of the information encoded in their dynamics. We can assume that the following stochastic dynamics takes place in the network: at each time step $t$, different interacting groups can be formed and dissolved, giving rise to the temporal social network. The agents are embedded in a social network $G$ such that interactions can occur only between first neighbors in $G$. This is a good approximation if we want to model social interactions on the fast time scale. In the case of a small conference, where each participant is likely to discuss with any other participant, we can consider a fully connected network as the underlying network $G$ of social interactions. In the network $G$ each set of interacting agents can be seen as a connected subgraph ${\cal G}$ of $G$, as shown in Fig. \ref{fig1}.
We use an indicator function $g_{i_1,i_2,\ldots , i_n}(t)$ to denote whether, at time $t$, the maximal set of interacting agents in a group is $i_1$, $i_2$,\ldots, $i_n$. If $(i_1,i_2,\ldots, i_n)$ is the maximal set of interacting agents in a group, we let $g_{i_1,i_2,\ldots , i_n}(t)=1$; otherwise we set $g_{i_1,i_2,\ldots, i_n}(t)=0$. Therefore at any given time the following relation is satisfied
\begin{equation}
\sum_{{\cal G}=(i,i_2,\ldots,i_n )|i\in {\cal G}}g_{i,i_2,\ldots, i_n}(t)=1,
\end{equation}
where ${\cal G}$ is an arbitrary connected subgraph of $G$.
Then we denote by ${\cal S}_t=\{g_{i_1,i_2,\ldots, i_n}(t^{\prime})\, \forall t^{\prime}<t\}$ the history of the dynamical social network, and by $p(g_{i_1,i_2,\ldots, i_n}(t)=1|{\cal S}_t)$ the probability that $g_{i_1,i_2,\ldots, i_n}(t)=1$ given the history ${\cal S}_t$.
Therefore the likelihood that at time $t$ the dynamical social network has a group configuration $g_{i_1,i_2,\ldots,i_n}(t)$ is given by
\begin{equation}
{\cal L}=\prod_{{\cal G}} p(g_{i_1,i_2,\ldots, i_n}(t)=1|{\cal S}_t)^{g_{i_1,i_2,\ldots, i_{n}}(t)}.
\label{Likelihood}
\end{equation}
We define the entropy of the dynamical networks as $S=-\Avg{\log{\cal L}}_{|{\cal S}_t}$, measuring the logarithm of the typical number of possible group configurations at time $t$, which can be explicitly written as
\begin{equation}
S=- \sum_{{\cal G}}p(g_{i_1,i_2,\ldots, i_n}(t)=1|{\cal S}_t)\log p(g_{i_1,i_2,\ldots, i_n}(t)=1|{\cal S}_t).
\end{equation}
The value of the entropy can be interpreted as follows: if the entropy is larger, the dynamical network is less predictable, and several possible dynamic configurations of groups are expected in the system at time $t$. On the other hand, a smaller entropy indicates a smaller number of possible future configurations and a temporal network state which is more predictable.
\subsection{Entropy of phone-call communication}
In this subsection we discuss the evaluation of the entropy of phone-call communication. For phone-call communication, only pairwise interactions are allowed in the system, so that the product in Eq.~(\ref{Likelihood}) is taken over all single nodes and edges of the quenched network $G$, which yields
\begin{equation}
{\cal L}=\prod_i p(g_i(t)=1|{\cal S}_t)^{g_i(t)}\prod_{ij|a_{ij}=1} p(g_{ij}(t)=1|{\cal S}_t)^{g_{ij}(t)}
\end{equation}
with
\begin{equation}
g_i(t)+\sum_{j} a_{ij} g_{ij}(t)=1,
\end{equation}
where $a_{ij}$ is the adjacency matrix of $G$.
The entropy then takes a simple form
\begin{eqnarray}
S&=&- \sum_i p(g_{i}(t)=1|{\cal S}_t)\log p(g_{i}(t)=1|{\cal S}_t)\nonumber \\
&&-\sum_{ij}a_{ij} p(g_{ij}(t)=1|{\cal S}_t)\log p(g_{ij}(t)=1|{\cal S}_t).
\label{s_pair2}
\end{eqnarray}
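Equation (\ref{s_pair2}) is straightforward to evaluate numerically. The following sketch (our own illustration; the probability values are hypothetical) computes $S$ from the node probabilities $p(g_i(t)=1|{\cal S}_t)$, the edge probabilities $p(g_{ij}(t)=1|{\cal S}_t)$, and the adjacency matrix $a_{ij}$:

```python
import math

def pairwise_entropy(p_node, p_edge, adjacency):
    """Eq. (s_pair2): S = -sum_i p_i log p_i - sum_{ij} a_ij p_ij log p_ij,
    where p_i is the probability that agent i is non-interacting and
    p_ij the probability that the pair (i, j) is in a call.  The double
    sum over ordered pairs (i, j) follows the formula in the text."""
    s = -sum(p * math.log(p) for p in p_node if p > 0)
    n = len(p_node)
    for i in range(n):
        for j in range(n):
            if adjacency[i][j] and p_edge[i][j] > 0:
                s -= p_edge[i][j] * math.log(p_edge[i][j])
    return s

# Two connected agents: each is alone with probability 0.6,
# in a call with probability 0.4 (hypothetical values).
adjacency = [[0, 1], [1, 0]]
p_node = [0.6, 0.6]
p_edge = [[0.0, 0.4], [0.4, 0.0]]
S_pair = pairwise_entropy(p_node, p_edge, adjacency)
```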
\subsection{Analysis of the entropy of a large dataset of mobile phone communication}
\label{sec:3}
In this subsection we use the entropy of temporal social networks to analyze the information encoded in the phone-call data of a major European mobile service provider, making use of the same dataset that we have used to measure the distribution of call duration in Section 2. Here we evaluate the entropy of the temporal networks formed by the phone-call communication in a typical weekday in order to study how the entropy of temporal social networks is affected by the circadian rhythms of human behavior.
For the evaluation of the entropy of temporal social networks we consider a subset of the large dataset of mobile-phone communication. We selected $562,337$ users who made at least one call a day during a week-long period. We denote by $f_n(t,t^{\prime})$ the transition probability that an agent in state $n$ ($n=1,2$) changes its state at time $t$ given that he/she has been in the current state for a duration $\tau=t-t^{\prime}$. The probability $f_n(t,t^{\prime})$ can be estimated directly from the data. Therefore, we evaluate the entropy in a typical weekday of the dataset by using the transition probabilities $f_n(t,t^{\prime})$ and the definition of the entropy of temporal social networks (readers should refer to the supplementary material of Ref.~\cite{PlosOne} for the details). In Fig.~\ref{entropyt} we show the resulting evaluation of the entropy in a typical day of our phone-call communication dataset. The entropy of the temporal social network is plotted as a function of time during one typical day. The figure shows that the entropy of temporal social networks changes significantly during the day, reflecting the circadian rhythms of human behavior.
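The transition probabilities $f_n(t,t^{\prime})$ are estimated empirically as hazard rates. A simplified, duration-only sketch of this estimate (our own illustration; the real analysis also resolves the dependence on the time of day $t$, and the spell lengths below are hypothetical) is:

```python
from collections import Counter

def hazard_from_durations(durations):
    """Empirical hazard f(tau): the probability that a state (interacting
    or non-interacting) of current duration tau ends in the next step,
    estimated as (# spells ending at tau) / (# spells lasting >= tau)."""
    end_counts = Counter(durations)
    at_risk = len(durations)
    hazard = {}
    for tau in sorted(end_counts):
        hazard[tau] = end_counts[tau] / at_risk
        at_risk -= end_counts[tau]
    return hazard

# Hypothetical spell lengths (in minutes) of a single state.
h = hazard_from_durations([1, 1, 2, 3, 3, 3, 5])
```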
\subsection{Entropy modulated by the adaptability of human behavior}
The adaptability of human behavior is evident when comparing the distribution of the duration of phone-calls with the duration of face-to-face interactions.
In the framework of the model for mobile-phone interactions described in Sec.~\ref{sec3.2}, this adaptability can be understood as the possibility to change the exponent $\beta$ in Eqs.~(\ref{f}) and (\ref{f2t}) regulating the duration of social interactions.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.27in ,height=50mm]{entropy_network}
\end{center}
\caption{ \small{Entropy $S$ of the social dynamical network model of pairwise communication, normalized by the entropy $S_R$ of a null model in which the expected average duration of phone-calls is the same but the durations of phone-calls and of non-interaction times are Poisson distributed. The network size is $N=2000$, the degree distribution of the network is exponential with average $\avg{k}=6$, the weight distribution is $p(w)=Cw^{-2}$, and $g(w)$ is taken to be $g(w)=b_2/w$ with $b_2=0.05$.
The value of $S/S_R$ depends on the two parameters $\beta$ and $b_1$. For every value of $b_1$ the normalized entropy is smaller for $\beta\to 1$. Figure from \cite{PlosOne}.}
}
\label{entropy_network}
\end{figure}
Changes in the parameter $\beta$ correspond to different values of the entropy of the dynamical social networks. Therefore, by modulating the exponent $\beta$, human behavior is able to modulate the information encoded in temporal social networks.
In order to show the effect on the entropy of a variation of the exponent $\beta$ in the dynamics of social interaction networks, we considered the entropy corresponding to the model described in Sec.~\ref{sec3.2} as a function of the parameters $\beta$ and $b_1$ modulating the probabilities $f_1(t,t^{\prime})$ and $f_2(t,t'|w)$ of Eqs.~(\ref{f2t}).
In Fig.~\ref{entropy_network} we report the entropy $S$ of the proposed model as a function of $\beta$ and $b_1$. The entropy $S$, given by Eq.~(\ref{s_pair2}), is calculated using the annealed approximation for the solution of the model and assuming the large network limit. In the calculation of the entropy $S$ we have taken a network of size $N=2000$ with exponential degree distribution of average degree $\avg{k}=6$, weight distribution $P(w)=Cw^{-2}$, and function $g(w)=b_2/w$ with $b_2=0.05$. Our aim in Fig.~\ref{entropy_network} is to show only the effects on the entropy due to the different distributions of duration of contacts and non-interaction periods. Therefore we have normalized the entropy $S$ by the entropy $S_R$ of a null model of social interactions in which the durations of groups are Poisson distributed but the average interaction and non-interaction times are the same as in the model of cell-phone communication (readers should refer to the supplementary material of Ref.~\cite{PlosOne} for more details).
From Fig.~\ref{entropy_network} we observe that if we keep $b_1$ constant, the ratio $S/S_R$ is a decreasing function of the parameter $\beta$. This indicates that the broader the distribution of the duration of contacts, the higher the information encoded in the dynamics of the network. Therefore the heterogeneity in the distribution of durations of contacts and non-interaction periods implies a higher level of information in the social network. Human adaptive behavior, by changing the exponent $\beta$ in face-to-face interactions and mobile-phone communication, effectively changes the entropy of the dynamical network.
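The reinforcement principle behind these results (the longer an agent has been in a state, the less likely it is to leave it) can be illustrated with a toy simulation. This is our own sketch: the decaying hazard $p(\tau)=b/(1+\tau)$ is a simplified stand-in for the transition probabilities $f_n(t,t^{\prime})$ of the model, and the parameter values are chosen for illustration only:

```python
import random

def simulate_spell(b=0.5, max_steps=10**6, rng=random.Random(42)):
    """Duration of one spell under reinforcement dynamics: at each step
    the agent changes state with probability b / (1 + tau), which decays
    with the time tau already spent in the current state."""
    tau = 0
    while tau < max_steps:
        tau += 1
        if rng.random() < b / (1 + tau):
            return tau
    return max_steps

# The shared seeded generator makes the run reproducible.
spells = [simulate_spell() for _ in range(2000)]
typical = sorted(spells)[len(spells) // 2]   # median spell length
longest = max(spells)
```

With a constant hazard the durations would be geometrically (i.e.\ exponentially) distributed and `longest` would stay within a few multiples of `typical`; the decaying hazard instead produces a heavy tail, with `longest` far exceeding `typical`.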
\section{Conclusions}
The goal of network science is to model, characterize, and predict the behavior of complex networks.
Here, in this chapter, we have delineated a first step in the characterization of the information encoded in temporal social networks. In particular we have focused on phenomenologically modelling social interactions on the fast time scale, such as face-to-face interactions and mobile-phone communication activity. Moreover, we have defined the entropy of dynamical social networks, which is able to quantify the information present in social network dynamics.
We have found that human social interactions are bursty and adaptive. Indeed, the duration of social contacts can be modulated by the adaptive behavior of humans: while in the face-to-face interaction datasets a power-law distribution of the duration of contacts has been observed, we have found, from the analysis of a large dataset of mobile-phone communication, that the durations of mobile-phone calls are distributed according to a Weibull distribution.
We have modeled this adaptive behavior by assuming that the dynamics underlying the formation of social contacts implements a reinforcement dynamics according to which the longer an agent has been in a state (interacting or non-interacting), the less likely it is that he/she will change state.
We have used the entropy of dynamical social networks to evaluate the information present in the temporal network of mobile-phone communication, during a typical weekday of activity, showing that the information content encoded in the dynamics of the network changes during a typical day.
Moreover, we have compared the entropy in a social network with the duration of contacts following a Weibull distribution, and with the duration of contacts following a power-law in the framework of the stochastic model proposed for mobile-phone communication.
We have found that a modulation of the statistics of duration of contacts strongly modifies the information contents present in the dynamics of temporal networks.
Finally, we conclude that the duration of social contacts in humans has a distribution that strongly deviates from an exponential. Moreover, the data show that human behavior is able to modify the information encoded in social network dynamics during the day and when facing a new technology such as mobile-phone communication.
\subsection{Acknowledgement}
We thank A. Barrat and J. Stehl\'e for a collaboration that started our research on face-to-face interactions. Moreover we especially thank A.-L. Barab\'asi for his useful comments and for the mobile call data used in this research.
MK acknowledges the financial support from the EU's 7th Framework Program's FET-Open to the ICTeCollective project no.\ 238597.
In 1976, a new metric--affine theory of gravitation was published
\cite{MAG}. In this model, the {\em metric} $g_{ij}$ and the linear
(sometimes also called affine) {\em connection} $\Gamma_{ij}{}^k$ were
considered to be independent gravitational field variables. The metric
carries 10 and the connection 64 independent components. Although
nowadays more general Lagrangians are considered, like the one in
Eq.(\ref{6}), the original Lagrangian density of metric--affine
gravity reads
\begin{equation}
{\cal V}_{{\rm GR}^\prime} = \frac{\sqrt{-g}}{2\kappa}\, g^{ij}
\Bigl[{\rm Ric}_{\,ij} (\Gamma, \partial \Gamma) + \beta\, Q_i Q_j
\Bigr]
\label{1}\, .
\end{equation}
The Ricci tensor ${\rm Ric}_{\,ij}$ depends only on the connection but
not on the metric, whereas the Weyl covector $Q_i := -g^{kl}\,
\nabla_i\, g_{kl}/4$ depends on both. Here $\nabla_i$ represents the
covariant derivative with respect to the connection $\Gamma_{ij}{}^k$,
furthermore $g=\det\,g_{kl}$, $\kappa$ is Einstein's gravitational
constant, and $\beta$ a dimensionless coupling constant. With $i,j,k,
\cdots=0,1,2,3$ we denote coordinate indices.
This model leads back to general relativity, as soon as the material
current coupled to the connection, namely $\sqrt{-g}\,
\Delta^{ij}{}_k:= \delta {\cal L}_{mat}/\delta \Gamma_{ij}{}^k$, the
so--called hypermomentum, vanishes. Thus, in such a model, the
post--Riemannian pieces of the connection and the corresponding new
interactions are tied to matter; they do not propagate.
As we know from the weak interaction, a contact interaction appears to
be suspicious for causality reasons, and one wants to make it {\em
propagating}, even if the carrier of the interaction, the
intermediate gauge boson, may become very heavy as compared to the
mass of the proton, for example. However, before we report on the more
general gauge Lagrangians that have been used, we turn back to the
geometry of spacetime.
MAG represents a gauge theory of the 4--dimensional affine group
enriched by the existence of a metric. As a gauge theory, it finds its
appropriate form if expressed with respect to arbitrary frames or {\em
coframes}. Therefore, the apparatus of MAG was reformulated in the
calculus of exterior differential forms, the result of which can be
found in the review paper \cite{PartI}, see also \cite{PRs} and
\cite{Erice95}. Of course, MAG could have been alternatively
reformulated in tensor calculus by employing an arbitrary
(anholonomic) frame (tetrad or vierbein formalism), but exterior
calculus, basically in a version which was advanced by Trautman
\cite{Trautman} and others \cite{Jim,Chen+Jim}, seems to be more
compact.
In the new formalism, we have then the metric $g_{\alpha\beta}$, the
coframe $\vartheta^\alpha$, and the connection 1--form
$\Gamma_\alpha{}^\beta$ (with values in the Lie algebra of the
4--dimensional linear group $GL(4,R)$) as new independent field
variables. Here $\alpha,\beta,\gamma,\cdots = 0,1,2,3$ denote
(anholonomic) frame indices. For the formalism, including the
conventions, which we will be using in this paper, we refer to
\cite{PRs}.
A first order Lagrangian formalism for a matter field $\Psi$ minimally
coupled to the gravitational {\em potentials} $g_{\alpha\beta}$,
$\vartheta^\alpha$, $\Gamma_\alpha{}^\beta$ has been set up in
\cite{PRs}. Spacetime is described by a metric--affine geometry with
the gravitational {\em field strengths} nonmetricity $Q_{\alpha\beta}
:=-Dg_{\alpha\beta}$, torsion $T^\alpha:=D\vartheta^\alpha$, and
curvature $R_\alpha{}^\beta:= d\Gamma_\alpha{}^\beta
-\Gamma_\alpha{}^\gamma\wedge\Gamma_\gamma{}^\beta$. The
gravitational field equations
\begin{eqnarray}
DH_{\alpha}- E_{\alpha}&=&\Sigma_{\alpha}\,,\label{first}\\
DH^{\alpha}{}_{\beta}-
E^{\alpha}{}_{\beta}&=&\Delta^{\alpha}{}_{\beta}\,,
\label{second}
\end{eqnarray}
link the {\em material sources}, the material energy--momentum current
$\Sigma_\alpha$ and the material hypermomentum current
$\Delta^\alpha{}_\beta$, to the gauge field {\em excitations}
$H_\alpha$ and $H^\alpha{}_\beta$ in a Yang--Mills like manner. In
\cite{PRs} it is shown that the field equation corresponding to the
variable $g_{\alpha\beta}$ is redundant if (\ref{first}) as well as
(\ref{second}) are fulfilled.
If
the gauge Lagrangian 4--form
\begin{equation}
V= V\left(g_{\alpha\beta}, \vartheta^\alpha, Q_{\alpha\beta},
T^\alpha, R_\alpha{}^\beta \right)
\label{2}\,
\end{equation}
is given, then the excitations can be calculated by partial
differentiation,
\begin{equation} H_\alpha = - \frac{\partial V}{\partial T^\alpha}\, , \quad
H^\alpha{}_\beta= -\frac{ \partial V}{\partial
R_\alpha{}^\beta}\,,\quad M^{\alpha\beta} = - 2 \frac{\partial
V}{\partial Q_{\alpha\beta}}\, ,
\label{3}\,
\end{equation}
whereas the gauge field currents of energy--momentum and
hypermomentum, respectively, turn out to be linear in the Lagrangian
and in the excitations,
\begin{eqnarray}
E_{\alpha} & := & \frac{\partial V}{\partial\vartheta^\alpha}
=e_{\alpha}\rfloor V + (e_{\alpha}\rfloor T^{\beta}) \wedge
H_{\beta} + (e_{\alpha}\rfloor R_{\beta}{}^{\gamma})\wedge
H^{\beta}{}_{\gamma} + {1\over 2}(e_{\alpha}\rfloor Q_{\beta\gamma})
M^{\beta\gamma}\,,\\ E^{\alpha}{}_{\beta} & := &\frac{\partial
V}{\partial\Gamma_\alpha{}^\beta}= - \vartheta^{\alpha}\wedge
H_{\beta} - g_{\beta\gamma}M^{\alpha\gamma}\,.
\end{eqnarray}
Here $e_\alpha$ represents the frame and $\rfloor$ the interior
product sign, for details see \cite{PRs}.
\section{The quadratic gauge Lagrangian of MAG}
The gauge Lagrangian (\ref{1}), in the new formalism, is a 4--form and
reads \cite{Mapping}
\begin{equation}
V_{{\rm GR}^\prime} = \frac{1}{2\kappa} \left( - R^{\alpha\beta}
\wedge \eta_{\alpha\beta} + \beta Q \wedge {}^*Q\right)
\label{4}\, .
\end{equation}
Here $\eta_{\alpha\beta} := {}^*(\vartheta_\alpha \wedge
\vartheta_\beta)$, $*$ denotes the Hodge star. Besides Einstein
gravity, it encompasses additionally {\em contact} interactions.
It is obvious how to make $Q$ a propagating field: one adds, to the
massive $\beta$--term, a kinetic term \cite{HLordSmalley,PonoObukhov}
$-\alpha\,dQ\wedge {}^*dQ/2$. Since $dQ= R_\gamma{}^\gamma/2$, the
kinetic term can alternatively be written as
\begin{equation}
-\frac{\alpha}{8} \,R_\beta{}^\beta \wedge {}^*R_\gamma{}^\gamma
\label{5}\, .
\end{equation}
This term, with the appearance of one Hodge star, displays a typical
{\em Yang--Mills structure}. More generally, propagating
post--Riemannian gauge interactions in MAG can be consistently
constructed by adding terms quadratic in $Q_{\alpha\beta}$,
$T^\alpha$, $R_\alpha{}^\beta$ to the Hilbert-Einstein type Lagrangian
and the term with the cosmological constant.
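For completeness, the relation $dQ=R_\gamma{}^\gamma/2$ quoted above can be checked directly from the definitions. This short verification is ours, carried out under the conventions stated in this paper ($Q_{\alpha\beta}:=-Dg_{\alpha\beta}$, $Q=\frac{1}{4}g^{\alpha\beta}Q_{\alpha\beta}$, $R_\alpha{}^\beta=d\Gamma_\alpha{}^\beta-\Gamma_\alpha{}^\gamma\wedge\Gamma_\gamma{}^\beta$):

```latex
% Tracing the nonmetricity Q_{ab} = -Dg_{ab} gives
%   g^{ab} Q_{ab} = -g^{ab} dg_{ab} + 2 \Gamma_c{}^c ,
% so the Weyl covector Q = (1/4) g^{ab} Q_{ab} reads
Q \;=\; -\frac{1}{4}\, g^{\alpha\beta}\, dg_{\alpha\beta}
        \;+\;\frac{1}{2}\,\Gamma_\gamma{}^\gamma .
% Since g^{ab} dg_{ab} = d\ln(-g) is exact, and since the trace of the
% quadratic term of the curvature vanishes for 1-forms,
%   \Gamma_\gamma{}^\alpha \wedge \Gamma_\alpha{}^\gamma = 0 ,
% taking the exterior derivative yields
dQ \;=\; \frac{1}{2}\, d\Gamma_\gamma{}^\gamma
   \;=\; \frac{1}{2}\, R_\gamma{}^\gamma .
```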
In the first order formalism we are using, higher order terms, i.e.\
cubic and quartic ones etc.\ would preserve the second order of the
field equations. However, the {\em quasilinearity} of the gauge field
equations would be destroyed and, in turn, the Cauchy problem would be
expected to be ill--posed. Therefore we do not go beyond a gauge
Lagrangian which is quadratic in the gauge field strengths
$Q_{\alpha\beta}$, $T^\alpha$, $R_\alpha{}^\beta$. Incidentally, a
quadratic Lagrangian is already so messy that it would be hard to
handle a still more complex one anyway.
Different groups have already added, within a metric--affine
framework, different quadratic pieces to the Hilbert--Einstein--type
Lagrangian, see
\cite{Yasskin,Grossmann4,Duan,irredDermott,TresguerresShear1,TuckerWang,TuckWar,Teyssandier,Yuritheorem,collwavesMAG},
e.g., and references given there. The end result of all these
deliberations is the {\em most general parity conserving quadratic}
Lagrangian which is expressed in terms of the $4+3+11$ irreducible
pieces (see \cite{PRs}) of $Q_{\alpha\beta}$, $T^\alpha$,
$R_\alpha{}^\beta$, respectively:
\begin{eqnarray}
\label{QMA} V_{\rm MAG}&=&
\frac{1}{2\kappa}\,\left[-a_0\,R^{\alpha\beta}\wedge\eta_{\alpha\beta}
-2\lambda\,\eta+T^\alpha\wedge{}^*\!\left(\sum_{I=1}^{3}a_{I}\,^{(I)}
T_\alpha\right)\right.\nonumber\\ &+&\left.
2\left(\sum_{I=2}^{4}c_{I}\,^{(I)}Q_{\alpha\beta}\right)
\wedge\vartheta^\alpha\wedge{}^*\!\, T^\beta + Q_{\alpha\beta}
\wedge{}^*\!\left(\sum_{I=1}^{4}b_{I}\,^{(I)}Q^{\alpha\beta}\right)\right.
\nonumber \\&+&
b_5\bigg.\left(^{(3)}Q_{\alpha\gamma}\wedge\vartheta^\alpha\right)\wedge
{}^*\!\left(^{(4)}Q^{\beta\gamma}\wedge\vartheta_\beta \right)\bigg]
\nonumber\\&- &\frac{1}{2\rho}\,R^{\alpha\beta} \wedge{}^*\!
\left(\sum_{I=1}^{6}w_{I}\,^{(I)}W_{\alpha\beta}
+w_7\,\vartheta_\alpha\wedge(e_\gamma\rfloor
^{(5)}W^\gamma{}_{\beta} ) \nonumber\right.\\&+& \left.
\sum_{I=1}^{5}{z}_{I}\,^{(I)}Z_{\alpha\beta}+z_6\,\vartheta_\gamma\wedge
(e_\alpha\rfloor ^{(2)}Z^\gamma{}_{\beta}
)+\sum_{I=7}^{9}z_I\,\vartheta_\alpha\wedge(e_\gamma\rfloor
^{(I-4)}Z^\gamma{}_{\beta} )\right)
\label{6}\,.
\end{eqnarray}
The constant $\lambda$ is the cosmological constant, $\rho$ the strong
gravity coupling constant, the constants $ a_0, \ldots a_3$, $b_1,
\ldots b_5$, $c_2, c_3,c_4$, $w_1, \ldots w_7$, $z_1, \ldots z_9$ are
dimensionless. We have introduced in the curvature square term the
irreducible pieces of the antisymmetric part $W_{\alpha\beta}:=
R_{[\alpha\beta]}$ and the symmetric part $Z_{\alpha\beta}:=
R_{(\alpha\beta)}$ of the curvature 2--form. In $Z_{\alpha\beta}$, we
have the purely {\em post}--Riemannian part of the curvature. Note the
peculiar cross terms with $c_I$ and $b_5$.
Esser \cite{Esser}, in the component formalism, has carefully
enumerated all different pieces of a quadratic MAG Lagrangian, for the
corresponding nonmetricity and torsion pieces, see also Duan et al.\
\cite{Duan}. Accordingly, Eq.(\ref{6}) represents the most general
quadratic parity--conserving MAG--Lagrangian. All previously published
quadratic parity--conserving Lagrangians are subcases of (\ref{6}).
Hence (\ref{6}) is a safe starting point for our future
considerations.
We concentrate here on Yang--Mills type Lagrangians. Since $V_{\rm
MAG}$ is required to be an {\em odd} 4--form, if parity conservation
is assumed, we have to build it up according to the scheme $F\wedge
{}^*F$, i.e.\ with one Hodge star, since the star itself is an odd
operator. Also the Hilbert--Einstein type term is of this type, namely
$\sim R^{\alpha\beta} \wedge {}^*(\vartheta_\alpha \wedge
\vartheta_\beta)$, as well as the cosmological term $\sim \eta =
{}^*1$. Thus $V_{\rm MAG}$ is homogeneous of order one in the star
operator. It is conceivable that in the future one may also want to consider
parity violating terms with no star appearing (or an even number of
them) of the (Pontrjagin) type $F \wedge F$. Typical terms of this
kind in four dimensions would be
\begin{equation}
R^{\alpha\beta} \wedge (\vartheta_\alpha \wedge \vartheta_\beta)\,
,\quad 1\, , \quad T^\alpha\wedge T_\alpha\, ,\quad Q_{\alpha\beta}
\wedge\vartheta^\alpha \wedge T^\beta\,,\quad R^{\alpha\beta} \wedge
R_{\alpha\beta}
\label{7} \, .
\end{equation}
The first term of (\ref{7}), e.g., represents the totally
antisymmetric piece of the curvature $R^{[\gamma\delta\alpha\beta]}\,
\vartheta_\gamma\wedge\vartheta_\delta \wedge\vartheta_\alpha\wedge
\vartheta_\beta$, which is purely post--Riemannian. Such
parity--violating Lagrangians have been studied in the past, see,
e.g., \cite{Mukku,Nelson} and \cite{Bianchi,O+H}, but, for simplicity,
we will restrict ourselves in this article to parity preserving
Lagrangians.
\section{On the possible physics of MAG}
Here we are, with a Lagrangian $V_{\rm MAG}$ encompassing more than
two dozens of unknown dimensionless constants. But the situation is
not as bad as it may look at first. For the Newton--Einstein type of
{\em weak gravity} --- the corresponding terms are collected in
(\ref{6}) within two square brackets $[ \quad ]$ --- we have the
gravitational constant $\kappa$, with dimension of $\kappa=length^2$,
and the cosmological constant $\lambda$, with dimension of $\lambda=
length^{-2}$. For {\em strong gravity} of the Yang--Mills type, the
basic newly postulated interaction within the MAG framework, the
strength of the coupling is determined by the dimensionless strong
coupling constant $\rho$. Thus, the three constants
$\kappa,\lambda,\rho$ are fundamental, whereas the rest of the
constants, 12 for weak and 16 for strong gravity, are expected to be
of order unity or should partially vanish.
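As a simple bookkeeping check (ours, not from the original paper), the constants enumerated above can be counted programmatically; the counts are read off from the parameter lists of Eq.~(\ref{6}):

```python
# Dimensionless constants of the quadratic MAG Lagrangian (6).
weak_gravity = {
    "a": 4,   # a_0, ..., a_3
    "b": 5,   # b_1, ..., b_5
    "c": 3,   # c_2, c_3, c_4
}
strong_gravity = {
    "w": 7,   # w_1, ..., w_7
    "z": 9,   # z_1, ..., z_9
}
# kappa and lambda carry dimensions (length^2 and length^-2),
# rho is the dimensionless strong coupling constant.
fundamental = ("kappa", "lambda", "rho")

n_weak = sum(weak_gravity.values())      # 12 weak-gravity constants
n_strong = sum(strong_gravity.values())  # 16 strong-gravity constants
```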
As was argued elsewhere \cite{PRs}, we do not believe that at the
present state of the universe the geometry of spacetime is described
by a metric--affine one. We rather think, and there is good
experimental evidence, that the present-day geometry is
metric-compatible, i.e., its nonmetricity vanishes. In earlier epochs
of the universe, however, when the energies of the cosmic ``fluid''
were much higher than today, we expect scale invariance to prevail ---
and the canonical dilation (or scale) current of matter, the trace of
the hypermomentum current $\Delta^\gamma{}_\gamma$, is coupled,
according to MAG, to the Weyl covector $Q^\gamma{}_\gamma$. By the
same token, shear type excitations of the material multispinors (Regge
trajectory type of constructs) are expected to arise, thereby
liberating the (metric-compatible) Riemann-Cartan spacetime from its
constraint of vanishing nonmetricity $Q_{\alpha\beta}=0$. Tresguerres
\cite{Tres3} has proposed a simple cosmological model of Friedmann
type which carries a metric-affine geometry at the beginning of the
universe, the nonmetricity of which dies out exponentially in time.
That is the kind of thing we expect.
If one keeps the differential manifold structure of spacetime intact,
i.e., doesn't turn to discrete structures or non-commutative geometry,
then MAG appears to be the most natural extension of Einstein's
gravitational theory. The {\em rigid} metric-affine structure
underlying the Minkowski space of special relativity, see Kopczy\'nski
and Trautman \cite{K+T}, makes us believe that this structure should
be gauged according to recipes known from gauge theory. Also the
existence, besides the energy-momentum current, of the {\em external}
material currents of spin and dilation (and, perhaps, of shear) does
point in the same direction.
\section{Exact MAG solutions of Tresguerres and Tucker \& Wang}
For getting a deeper understanding of the meaning and the possible
consequences of MAG, a search for exact solutions appears
indispensable. Tresguerres, after finding exact solutions
\cite{Tresguerres3D,Tres3Da} for specific $(1+2)$--dimensional models
of MAG, turned his attention to $1+3$ dimensions and, in 1994, for a
fairly general subclass of the Lagrangian (\ref{6}), found the first
static spherically symmetric solutions with a non--vanishing {\em
shear charge} \cite{TresguerresShear1,TresguerresShear2}, i.e., the
solution is endowed with a traceless part
${\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}:=
Q_{\alpha\beta}-Qg_{\alpha\beta}$ of the nonmetricity. This
constituted a breakthrough. Since that time,
${\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}$ lost its somewhat elusive
and abstract character. Even an operational interpretation has been
attempted in the meantime \cite{Test}.
The metric of Tresguerres' solution is the {\em Reissner--Nordstr\"om
metric} of general relativity with cosmological constant but the
place of the electric charge is taken by the {\em dilation charge}
which is related to the trace of the nonmetricity, the Weyl covector.
Furthermore, the Tresguerres solutions carry, besides the
above-mentioned shear charge (related to the
\begin{table}[h]
\caption{Irreducible decomposition of the nonmetricity$^*$ {\tt nom}
$Q_{\alpha\beta}$}
\begin{center}
\leavevmode
\begin{tabular}{|l|c|l|}
\hline
name& number of indep. comp.& \hfil piece \hfil\\
\hline
\hline
{\tt nom} & 40 & \hfil $Q_{\alpha\beta}$ \hfil\\
\hline
{\tt trinom} & 16 & $^{(1)}Q_{\alpha\beta}:=Q_{\alpha\beta}
-{}^{(2)}Q_{\alpha\beta}
-{}^{(3)}Q_{\alpha\beta}
-{}^{(4)}Q_{\alpha\beta}$\\
{\tt binom} & 16 & ${}^{(2)}Q_{\alpha\beta}:={2\over3}
\,{}^*\!(\vartheta_{(\alpha}\wedge
\Omega_{\beta)})$\\
{\tt vecnom} & 4 & ${}^{(3)}Q_{\alpha\beta}:={4\over 9}
\left(\vartheta_{(\alpha}e_{\beta)}
\rfloor\Lambda - {1\over
4}g_{\alpha\beta}\Lambda\right)$\\
{\tt conom} & 4 & ${}^{(4)}Q_{\alpha\beta}:=g_{\alpha\beta}Q$\\
\hline
\end{tabular}
\end{center}
\end{table}
\footnotesize
\noindent $^*)$ First
the nonmetricity is split into its trace, the Weyl covector
$Q:={1\over 4}g^{\alpha\beta}Q_{\alpha\beta}$, and its traceless piece
${\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}:=Q_{\alpha\beta}-
Qg_{\alpha\beta}$. The traceless piece yields the shear covector
$\Lambda:=\vartheta^{\alpha}e^{\beta}\rfloor
{\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}$ and the shear 2-form
$\Omega_{\alpha}:=\Theta_{\alpha} - {1\over 3}e_{\alpha}\rfloor
(\vartheta^{\beta}\wedge\Theta_{\beta})$, with $\Theta_{\alpha}:=
{}^*({\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}\wedge\vartheta^{\beta})$.
The 2-form $\Omega^{\alpha}$ describes ${}^{(2)}Q_{\alpha\beta}$ and
has precisely the same symmetry properties as the 2-form
${}^{(1)}T^{\alpha}$ (see below). In particular, we can prove that
$e_{\alpha}\rfloor\Omega^{\alpha}=0$ and
$\vartheta_{\alpha}\wedge\Omega^{\alpha}=0$.
\normalsize
\begin{table}[h]
\caption{Irreducible decomposition of the torsion$^{**}$
{\tt tor} $T^\alpha$}
\begin{center}
\leavevmode
\begin{tabular}{|l|c|l|}
\hline
name & number of indep. comp. & \hfil piece \hfil\\
\hline
\hline
{\tt tor} & 24 & \hfil $T^\alpha$ \hfil \\
\hline
{\tt tentor} & 16 & ${}^{(1)}T^{\alpha}:=T^{\alpha}
-{}^{(2)}T^{\alpha} - {}^{(3)}T^{\alpha}$\\
{\tt trator} & 4 & ${}^{(2)}T^{\alpha}:= {1\over 3}
\vartheta^{\alpha}\wedge T$\\
{\tt axitor} & 4 & ${}^{(3)}T^{\alpha}:=-\,{1\over 3}
{}^*(\vartheta^{\alpha}\wedge A)$\\
\hline
\end{tabular}
\end{center}
\end{table}
\footnotesize
\noindent $^{**})$ The 1-forms $T$ (torsion trace or covector) and $A$ (axial
covector) are defined by $T:=e_{\alpha}\rfloor T^{\alpha}$ and
$A:={}^*(\vartheta_{\alpha}\wedge T^{\alpha})$, respectively.
\normalsize
\clearpage
\noindent traceless part of the
nonmetricity) a {\em spin charge} related to the torsion of spacetime.
Thus, beyond the Reissner--Nordstr\"om metric, the following
post--Riemannian degrees of freedom are excited in the Tresguerres
solutions (see Tables I and II): {\em two} pieces of the nonmetricity,
namely ${}^{(4)}Q_{\alpha\beta}$ ({\tt conom}, which is equivalent to
the Weyl covector) and the traceless piece ${}^{(2)}Q_{\alpha\beta}$
({\tt binom}), and all {\em three} pieces of the torsion
$^{(1)}T^\alpha$ ({\tt tentor}), $^{(2)}T^\alpha$ ({\tt trator}),
$^{(3)}T^\alpha$ ({\tt axitor}). The names in the parentheses are
taken from our computer programs \cite{PRs,CPC}. The first solution
\cite{TresguerresShear1} requires weak gravity terms in the Lagrangian
and, for strong gravity, the curvature square pieces with $z_4
\neq 0$, $w_3 \neq 0$, $w_5 \neq 0$, i.e., with Weyl's segmental
curvature ({\tt dilcurv}), the curvature pseudoscalar ({\tt pscalar}),
and the antisymmetric Ricci ({\tt ricanti}). In his second solution
\cite{TresguerresShear2}, the torsion is independent of the
nonmetricity; otherwise the situation is similar, yet not as clear-cut.
The price Tresguerres had to pay in order to find exact solutions at
all was to impose {\em constraints} on the dimensionless {\em coupling
constants} of MAG. In other words, the Lagrangian $V_{\rm MAG}$ was
engineered such that exact solutions emerged. This is, of course, not
exactly what one really wants. Rather one would like to prescribe a
Lagrangian and then to find an exact solution. But, with the methods
then available, one could not do better. And one was happy to find
exact solutions at all for such complicated Lagrangians.
As to the methods applied, one fact should be stressed. To handle
Lagrangians like (\ref{6}), it is practically indispensable to use
{\em computer algebra} tools. This is also what Tresguerres did. He
took Schr\"ufer's {\em Excalc} package of Hearn's computer algebra
system {\em Reduce;} for introductory lectures on Reduce and Excalc
see \cite{Stauffer}. More recently, we described the corresponding
computer routines within MAG in some detail \cite{CPC} and showed
how to build up Excalc programs for finding exact solutions of MAG.
What one basically does with these programs, is to make a clever
ansatz for the coframe, the torsion, and the nonmetricity, then to
substitute this into the field equations, as programmed in Excalc, and
subsequently to inspect these expressions in order to get an idea of
how to solve them. One way of reducing them to a manageable size, is
to constrain the dimensionless coupling constants or to solve, also by
computer algebra methods, some of the partial differential equations
emerging. If \clearpage
\begin{table}[h]
\caption{Solutions for insular objects$^{*}$}
\bigskip
\begin{tabular}{|p{8cm}|c|p{5cm}|}
\hline
solution & references & post-Riemannian structures\\
\hline
\hline
{\bf Monopoles} with strong gravito--electric and strong gravito--magnetic
charge (and combinations of them) plus triplet (degenerate case
of Reissner-Nordstr\"om solution with triplet)
& \cite{Soliton2,M+S} & {\tt conom} $\sim$ {\tt vecnom} $\sim$
{\tt trator}\\\hline\hline
{\bf Reissner--Nordstr\"om} metric with strong gravito-electric charge plus
{\tt nom} and {\tt tor}
&&\\\hline
--- dilation type solution &\cite{TresguerresShear1,TuckerWang,Ho}&
{\tt conom} $\sim$ {\tt trator}, {\tt axitor} \cite{Ho}\\\hline
--- triplet type solution &\cite{OVETH,Dereli}& {\tt conom} $\sim$
{\tt vecnom} $\sim$ {\tt trator} \\\hline
--- dilation--shear type solution &\cite{TresguerresShear1} &{\tt
conom} $\sim$ {\tt binom}, {\tt tentor} $\sim$ {\tt binom}, {\tt
trator} $\sim$ {\tt conom}, {\tt axitor} $\sim$ {\tt binom}\\\hline
--- dilation--shear--torsion type solution&\cite{TresguerresShear2}&
{\tt conom} $\sim$ {\tt binom}, {\tt tentor}, {\tt trator} $\sim$
{\tt conom}, {\tt axitor}\\\hline\hline
{\bf Kerr--Newman} metric with strong gravito-electric charge plus
{\tt nom} and {\tt tor}&&\\\hline
--- triplet type solution& \cite{VTOH} &{\tt conom} $\sim$ {\tt
vecnom} $\sim$ {\tt trator}\\\hline\hline
{\bf Pleba\'nski--Demia\'nski} metric with strong gravito-electric and
magnetic charge plus {\tt nom} and {\tt tor}&&\\\hline
--- triplet type solution &\cite{PDMAG}& {\tt conom} $\sim$ {\tt
vecnom} $\sim$ {\tt trator}\\\hline\hline
{\bf Electrically} (and magnetically) charged versions of
all of the triplet solutions & \cite{Puntigam,HSocorro,electrovacMAG,M+S}&
{\tt conom} $\sim$ {\tt
vecnom} $\sim$ {\tt trator}\\
\hline
\end{tabular}
\end{table}
\bigskip
\footnotesize
\noindent $^{*})$ Those pieces of the nonmetricity and the torsion vanish
identically which are not mentioned in the description of a solution.
\normalsize
\clearpage
\noindent one is stuck, one changes the ansatz etc.
Besides the two dilation--shear solutions, Tresguerres
\cite{TresguerresShear1} and Tucker and Wang \cite{TuckerWang} found
Reissner--Nordstr\"om metrics together with a non--vanishing Weyl
covector, $^{(4)}Q^{\alpha\beta}\neq 0$, and a vector part of the
torsion, $^{(2)}T^\alpha\neq 0$, i.e., these solutions carry a {\em
dilation} charge (in the words of Tucker and Wang, a Weyl charge)
and a {\em spin} charge, but are devoid of any other post--Riemannian
``excitations'', in particular, they have no tracefree pieces
${\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}$ of the nonmetricity. As
shown by Tucker and Wang, the corresponding Lagrangian needs only a
Hilbert--Einstein piece $(a_0=1)$ and a segmental curvature squared
with $z_4 \neq 0$. The same has been proved for the Tresguerres
dilation solution, see footnote 4 of \cite{OVETH}.
Ho et al.\ \cite{Ho} found four spherically symmetric exact solutions
in a pure Weyl--Cartan spacetime which are similar to the dilation
type solutions. However, they include an additional axial part of the
torsion, $^{(3)}T^\alpha\neq 0$, see Table III.
\section{The triplet of post--Riemannian 1--forms and Obukhov's
equivalence theorem}
The next step consisted in an attempt to understand the emergence of
the dilation--shear and the dilation--shear--torsion solutions of
Tresguerres. However, as it so happened, it shifted the attention to
other types of solutions. In both Tresguerres shear solutions, the
nonmetricity, besides the Weyl covector part {\tt conom}, was
represented by {\tt binom}, essentially a quantity with 16 components.
However, {\tt conom} and {\tt trator} each have only 4 components, as
has {\tt vecnom}. Accordingly, to create a simpler solution with
shear than the two Tresguerres dilation--shear solutions, it seemed
suggestive to require
\begin{equation}\label{triplet3} {\tt conom}\sim {\tt vecnom}\sim
{\tt trator}\,.
\end{equation}
This amounts to the presence of one 1--form $\phi$ which creates the
three post--Riemannian pieces (\ref{triplet3}). If $k_0$, $k_1$, $k_2$
are some constants (see below), then we have
\begin{equation}\label{triplet4} Q=k_0\phi\,,\quad \Lambda=k_1\phi\,,\quad T=
k_2\phi\,,
\end{equation} with $\Lambda:=\vartheta^\alpha e^\beta \rfloor
{\nearrow\!\!\!\!\!\!\!Q}_{\alpha\beta}$ and $T:=e_\alpha\rfloor
T^\alpha$. This 1--form triplet was first proposed in
\cite{OVETH,VTOH} and also used in \cite{Dereli}.
Again, in the context of the triplet ansatz (\ref{triplet4}), a
Reissner--Nordstr\"om metric with a strong gravito--electric charge
could successfully be used \cite{OVETH} and a constraint on the
coupling constants had to be imposed. Thus this ``triplet'' solution
is reminiscent of the Tresguerres dilation--shear solutions. However,
its structure is simpler and, instead of {\tt binom}, it is {\tt
vecnom} which enters the solution. Moreover, of the curvature square
pieces in the gauge Lagrangian $V_{\rm MAG}$ only the piece with
$z_4\neq 0$ is required. All others do not contribute.
Soon this result was generalized to an {\em axially symmetric}
solution \cite{VTOH} based on the {\em Kerr--Newman} metric and, a bit
later, to the whole {\em Pleba\'nski--Demia\'nski} class of metrics
\cite{PDMAG}. Already earlier, however, it became clear that the
triplet represents a general structure in the context of the
Einstein--Maxwellian ``seed'' metrics. In \cite{Dereli} it was pointed
out that for {\em each} Einstein--Maxwell solution (metric plus
electromagnetic potential 1--form), if the electric charge is replaced
by the strong gravito--electric charge and if a suitable constraint on
the coupling constants is postulated, an exact solution of MAG can be
created by means of the triplet (\ref{triplet4}). Moreover, if one
starts from an {\em Einstein--Proca} solution instead, one can
abandon the constraint on the coupling constants. This was first shown
for a certain 3-parameter Lagrangian by Dereli et al.\ \cite{Dereli}
and extended to a 6-parameter Lagrangian by Tucker \& Wang
\cite{TuckWar}. The situation was eventually clarified for a fairly
general 11-parameter Lagrangian by the
\begin{itemize}
\item{} {\em Equivalence theorem of Obukhov \cite{Yuritheorem}:} Let
be given the gauge Lagrangian $V_{\rm MAG}$ of (\ref{QMA}) with all
$w_I=0$, $z_I=0$, except $z_4\neq 0$, i.e., the {\em segmental}
curvature squared
\begin{equation}
-\frac{z_4}{8\rho}\, R_\alpha{}^\alpha\wedge\,^\ast R_\beta{}^\beta=
-\frac{z_4}{2\rho}\,dQ\wedge{}^\ast dQ\,\label{seg^2}
\end{equation}
is the only surviving strong gravity piece in $V_{\rm MAG}$. Solve
the Einstein--Proca equations\footnote{For the $\eta$--basis we have
$\eta_{\alpha\beta\gamma}={}^\ast\left(\vartheta_\alpha\wedge
\vartheta_\beta\wedge\vartheta_\gamma\right)$ and
$\eta_\alpha={}^\ast\vartheta_\alpha$.}
\begin{eqnarray}\label{fieldob}
\frac{a_0}{2}\,\eta_{\alpha\beta\gamma}\wedge\tilde{R}^{\beta\gamma}
+ \lambda \,\eta_\alpha &=& \kappa\,\Sigma_\alpha^{(\phi)}\,,\\
\left(\Box+m^2\right)\phi &=& 0\,,\\
d^\dagger\phi &=& 0\,,
\end{eqnarray}
with respect to the metric $g$ and the Proca 1-form $\phi$. Here the
tilde $\tilde{\null}$ denotes the Riemannian part of the curvature,
\begin{eqnarray}
\Sigma_\alpha^{(\phi)}&:=& \frac{z_4k_0^2}{2\rho} \left\{ \left(
e_\alpha\rfloor d\phi \right)\wedge{}^\ast d\phi- \left(
e_\alpha\rfloor\,^\ast d\phi \right)\wedge\, d\phi \right.
\nonumber \\ &&\quad\left. +\;m^2\,\left[ (e_\alpha\rfloor
\phi)\wedge{}^\ast \phi\;+\;
(e_\alpha\rfloor\,^\ast\phi)\wedge{}\phi\right] \right\}\,
\label{ProcaEM}\end{eqnarray}
is the energy--momentum current of the Proca field and $d^\dagger$ the
exterior {\em co}derivative. Then the {\em general vacuum solution} of
MAG with the stated parameter restrictions is represented by the
metric and the post--Riemannian triplet
\begin{equation}\left(g\,,\>\; Q=k_0\phi\,,\> \Lambda=k_1\phi\,,\> T=
k_2\phi\right)\,, \label{MAGsolution}
\end{equation} where $k_0,k_1,k_2$ are elementary functions of the
weak gravity coupling constants, $a_I,b_I,c_I$, and $m^2$ depends,
additionally, on $\kappa$ and the strong coupling constant $z_4/\rho$
(the details can be found in \cite{Yuritheorem}).
\end{itemize}
\noindent The results of \cite{Dereli,TuckWar} and of the Obukhov theorem
lead to an understanding of the meaning of the constraint between the
different coupling constants: If we put $m^2=0$, then the
Einstein--Proca system becomes an Einstein--{\em Maxwell} system --
and such metrics, e.g.\ the Kerr--Newman metric, are more readily
available for our purposes. In fact, we are not aware of any known
Einstein--Proca metrics which we could use for the construction of
exact MAG solutions. One should consult, however, the early work on
the Einstein-Proca system by Buchdahl \cite{Buch}, Ponomariov \&
Obukhov \cite{PonoObukhov}, and Gottlieb et al.\ \cite{Gott}.
The reason for the more general character of the Tresguerres
shear solutions is also apparent: he allowed gauge Lagrangians with
additional strong gravity pieces. In \cite{TresguerresShear1} he
added, to the segmental curvature piece, the strong gravity pieces
$w_3\times ({\tt pscalar})^2+w_5\times ({\tt ricanti})^2$. Here $(\;)^2$
is an abbreviation of $(\;)\wedge{}\! ^*(\;)$. In this way he
circumvented the Obukhov theorem and found the spin 2 piece of the
nonmetricity, {\tt binom}, inter alia. On the other hand, the dilation
type solution in \cite{TresguerresShear1,TuckerWang} can be recovered
{}from the triplet solution \cite{OVETH} by means of a certain limiting
procedure, see \cite{OVETH}.
\section{Strong gravito--electric monopole, electrically charged
versions of the triplet solutions}
In Table III, we gave an overview of the solutions describing insular
objects. However, we have not yet explained the first and the last
entries of the table.
The monopole type solution was found in \cite{Soliton2}, see also
\cite{Soliton1}, in terms of isotropic coordinates. In the Appendix we
translated the solution into Schwarzschild coordinates. Then, in these
coordinates, the orthonormal coframe, the metric, and the triplet
read, respectively,
\begin{equation}
\vartheta ^{\hat{0}} =\,\left(1-\frac{q}{r}\right)\, d\,t \,,\quad
\vartheta ^{\hat{1}} =\, \frac{d\, r}{1-\frac{q}{r}}\, , \quad
\vartheta ^{\hat{2}} =\, r\, d\,\theta\,,\quad \vartheta ^{\hat{3}}
=\, r\, \sin\theta \, d\,\varphi
\label{frame3}\,,
\end{equation}
\begin{eqnarray} g&=& \vartheta ^{\hat{0}}\otimes \vartheta ^{\hat{0}}-
\vartheta ^{\hat{1}}\otimes \vartheta ^{\hat{1}}-
\vartheta ^{\hat{2}}\otimes \vartheta ^{\hat{2}}-
\vartheta ^{\hat{3}}\otimes \vartheta ^{\hat{3}}\nonumber\\&=&
\left(1-\frac{q}{r}\right)^2dt^2-\frac{d\,r^2}
{\left(1-\frac{q}{r}\right)^2}
-r^2\left(d\,\theta^2+\sin^2\theta\,d\,\varphi^2 \right)\,,
\label{metric3}\end{eqnarray}
\begin{equation}\phi=\frac{Q}{k_0}=\,\frac{\Lambda}{k_1}=\,\frac{T}{k_2}=\,
\frac{N_{\rm e}}{r\left(1-\frac{q}{r}\right)}\,\vartheta^{\hat{0}}=
\frac{N_{\rm e}}{r}\,d\,t\,,
\label{monotrip1}
\end{equation}with $q=\sqrt{\frac{z_4\kappa}{2a_0\rho}}
\,k_0 N_{\rm e}$, i.e., it is again a triplet solution.
Note that the metric is {\em not} of the Schwarzschild form; the Weyl
covector, however, behaves as one expects for a strong gravito--{\em
electric} charge. We recognize in this example in a particularly
transparent way that the strong gravito--electric charge $N_{\rm e}$
creates the post--Riemannian potentials {\tt conom}, {\tt vecnom},
{\tt trator} in (\ref{monotrip1}) in a quasi--Maxwellian fashion but
also emerges, in (\ref{metric3}), in the components of the metric.
However, in the metric, $N_{\rm e}$ enters neither in a Schwarzschild
manner (the metric is different) nor in a Reissner--Nordstr\"om manner
(the charge term falls off as $r^{-1}$ instead of $r^{-2}$).
We can construct this metric by a specific choice of the {\em mass} of
the Reissner-Nordstr\"om metric.\footnote{Private communications by
D.\ Kramer (Jena) and M.\ Toussaint (Cologne).} In other words,
the metric of this solution represents a {\em subcase of the
Reissner-Nordstr\"om metric}. Then it is immediately clear that this
solution is covered by the Obukhov theorem: One starts from an
Einstein--Maxwell solution, namely the Reissner--Nordstr\"om metric,
supplements the corresponding triplet, and chooses the mass such that
the Reissner-Nordstr\"om function $1-2m/r +q^2/r^2$ becomes a pure
square.
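Explicitly, equating the mass and charge parameters of the
Reissner--Nordstr\"om metric, $m=q$, turns its metric function into a
perfect square,
\begin{equation}
1-\frac{2m}{r}+\frac{q^2}{r^2}\,\stackrel{m=q}{=}\,
1-\frac{2q}{r}+\frac{q^2}{r^2}=\left(1-\frac{q}{r}\right)^2\,,
\end{equation}
which is exactly the coefficient appearing in (\ref{metric3}).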
In the meantime, a strong gravito-magnetic monopole has also been
found \cite{M+S}. The mechanism is analogous to the gravito--electric
case and does not seem to bring new insight.
The last entry of Table III indicates that we are always able to find
electrically charged versions of a MAG solution as long as we confine
ourselves to the triplet type solutions. This is evident from
Obukhov's theorem: We take an electrically uncharged MAG solution with
the triplet $\sim\phi$. Then we choose the electromagnetic potential
$A$ proportional to the 1--form $\phi$. The energy--momentum currents
of the two 1--forms then have the same structure and differ only by a
constant factor.
Accordingly, they just add up, on the right hand side of the Einstein
equation, to a total energy--momentum current carrying a modified
constant in front of it. Clearly, this structure breaks down as soon
as one turns to the full Einstein-Proca system, i.e., as soon as the
Proca mass becomes non--vanishing. Nevertheless, it is quite useful to
have found these electrically charged solutions explicitly. It helps
to illustrate the coupling of the electromagnetic field to the
post-Riemannian structures of a metric-affine spacetime, see
\cite{Puntigam}.
\section{Wave solutions}
{\em Plane--fronted} metric--Weyl covector--torsion {\em waves} have
been constructed by Tucker and Wang \cite{TuckerWang}. Their source is
a semi--classical Dirac spinor field $\psi(x)$. Let $\gamma^\alpha$ be
the Dirac matrices. Then the Dirac spin current
$\sim\overline{\psi}\gamma\gamma\gamma\psi$ generates the torsion
according to $\overline{\psi}\gamma\gamma\gamma\psi\sim$ {\tt tentor}
+ {\tt axitor}, whereas the Weyl covector and the torsion trace are
proportional to each other and are induced by the segmental curvature
square piece in the Lagrangian: {\tt conom} $\sim$ {\tt trator}. Thus
we have in this model an underlying Weyl--Cartan spacetime since the
tracefree part of the nonmetricity vanishes. In other words, the
solution is of the dilation type. Accordingly, the vacuum part of the
(weak and strong) gravitational field can be understood as a
degenerate triplet solution and again, as remarked in
\cite{TuckerWang}, it is straightforward to include a Maxwell field
with electric (and possibly magnetic) charge.
In view of \cite{TuckWar} and the Obukhov theorem, it is clear that
one may start with any solution of the Einstein--Maxwell equations.
Then one replaces, after imposing a suitable constraint on the
coupling constants, the electric charge by the strong gravity charge
thereby arriving at the post--Riemannian triplet which was mainly
discussed in Sec.V. The procedure is fairly straightforward.
Nevertheless, it is useful to have a couple of worked--out examples at
one's disposal. Explicit solutions may convey a better understanding
of the structures involved.
Garcia et al.\ \cite{collwavesMAG} studied {\em colliding waves} with
the corresponding metric and an excited post--Riemannian triplet in
the framework of a Lagrangian of the Obukhov theorem. Usually, in
general relativity, the colliding waves are generated by {\em
quadratic polynomials} in the appropriate coordinates. And these
polynomials were also used in the paper referred to. Recently,
however, Bret\'on et al.\ \cite{collwavesGR} were able, within general
relativity, to extend this procedure by using also {\em quartic}
polynomials. Again, this procedure can be mimicked in metric--affine
spacetime and Garcia et al.\ \cite{shock5,Oscht} constructed
corresponding colliding gravity waves with triplet excitation. For the
quadratic as well as for the quartic case it is also possible to
generalize to the {\em electrovac} case, as has been shown in
\cite{electrovacMAG}.
\section{Cosmological solutions}
As we argued above, we expect more noticeable deviations from
metric--compatibility the further we go back in time. Therefore it is
natural to investigate cosmological models in the framework of
metric--affine gravity. And the standard {\em Friedmann} model is a
good starting point. Tresguerres \cite{Tres3} proposed such a model
with torsion and a Weyl covector, i.e., spacetime is described therein
by means of a Weyl--Cartan geometry. The matter he used to support the
model is a fluid carrying an energy--momentum and a dilation current.
The field equations of the model stayed within a manageable size since
the Lagrangian, by assumption, carries only the segmental curvature
square piece of the {\em symmetric} part of the curvature 2--form.
However, the square of all 6 irreducible pieces of the {\em
anti}symmetric part of the curvature are allowed in the
gravitational Lagrangian even if only the tracefree symmetric Ricci
turns out to be relevant in the end. A somewhat similar model has been
investigated by Minkevich and Nemenmann \cite{Minkevich}.
Using the much more refined model of a hyperfluid \cite{Ob2}, Obukhov
et al.\ \cite{Yuritheorem} derived, within the framework of the
equivalence theorem, but with some additional simplifying assumptions,
a Friedmann cosmos with a time varying Weyl covector. This is
analogous to the Tresguerres model.
Similar structures have been suggested by Tucker and Wang
\cite{TuckerWang2}. They proposed a metric--affine geometry of
spacetime for the purpose of taking care of the supposedly unseen dark
matter which, as they suggest, interacts with the strong gravity
potential of the Proca type as described by means of a gravitational
Lagrangian carrying a segmental curvature square. Thus the Obukhov
theorem applies to their scenario, and a Friedmann solution with a
post--Riemannian triplet is expected to emerge. And this is exactly
what happens. Ordinary matter and dark matter both supply their own
material energy--momentum current to the right hand side of the
Einstein equation and, additionally, a Proca energy-momentum comes up,
see (\ref{fieldob},\ref{ProcaEM}). The material current that couples
to the Proca field can be identified with the trace of the material
hypermomentum current, the material dilation current, see the trace of
the right hand side of (\ref{second}). The model is worked out in
considerable detail, galactic dynamics and the cosmological evolution
are studied, inter alia, and numerical results are presented.
\section{The minimal dilation--shear Lagrangian, ansatz with a Proca
`mass'}
Taking the triplet (\ref{triplet4}) as a guide, it is certainly helpful
for model building not to take the whole weak part of (\ref{QMA}) but
only some sort of essential nucleus of it. Putting (\ref{4}) and
(\ref{5}) together, one certainly gets a propagating Weyl covector.
{}From the Obukhov theorem we know that we only need a further weak
gravity piece in order to allow for shear. In view of the triplet, the
addition of a {\tt trator} square piece is suggested. In this way we
recover the minimal dilation--shear Lagrangian \cite{OVETH,Dereli}
\begin{equation}
V_{\rm dil-sh} = \frac{1}{2\kappa} \left( - R^{\alpha\beta} \wedge
\eta_{\alpha\beta} + \beta\, Q \wedge {}^*Q+ \gamma\, T \wedge
{}^*T\right)- \frac{\alpha}{8}\,R_\beta{}^\beta \wedge
{}^*R_\gamma{}^\gamma
\label{dilsh}\, .
\end{equation}
And indeed, our Reissner--Nordstr\"om, Kerr--Newman, and
Pleba\'nski--Demia\'nski metrics, together with the post--Riemannian
triplet (\ref{triplet4}), with the constants
\begin{equation}k_0=-\frac{3}{2}\,\gamma-4\,,\qquad
k_1=\frac{27}{2}\,\gamma\,,\qquad k_2=6\,,
\end{equation}and with the 1--form ($N$ is an integration constant)
\begin{equation}\phi=\frac{N}{r}\,d\,t\,,\end{equation}
are solutions of the field equations belonging to the $V_{\rm
dil-sh}$ Lagrangian. However, a constraint on the weak coupling
constants has to be imposed:
\begin{equation}\label{conx1}\gamma=-\frac{8}{3}\, \frac{\beta}{\beta+6}\,.
\end{equation}
Accordingly, the Lagrangian (\ref{dilsh}) may be considered as the
generic Lagrangian of the Obukhov theorem.
Let us now try to get rid of the constraint (\ref{conx1}). The
corresponding procedure runs as follows: According to
\cite{Yuritheorem} Eq.\ (6.8), we can define the Proca mass
\begin{equation}m_{\rm Proca}^2=\frac{1}{2\kappa\alpha}\,\left(2\beta+
\frac{36\gamma}{3\gamma+8}\right)\,.
\end{equation}
If we put it to zero, we recover the constraint (\ref{conx1}):
\begin{equation}m_{\rm Proca}^2=0\quad\longrightarrow\quad
\gamma=-\frac{8}{3}\, \frac{\beta}{\beta+6}\,.
\end{equation} Thus the dropping
of the constraint (\ref{conx1}) is equivalent to the emergence of a
Proca mass, i.e., we now have to turn to the Einstein--Proca system
instead of to the Einstein--Maxwell system.
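The algebra behind this equivalence is elementary. Setting
$m_{\rm Proca}^2=0$ and solving for $\gamma$ yields
\begin{equation}
2\beta+\frac{36\gamma}{3\gamma+8}=0\quad\Longrightarrow\quad
6\beta\gamma+16\beta+36\gamma=0\quad\Longrightarrow\quad
\gamma=-\frac{8}{3}\,\frac{\beta}{\beta+6}\,,
\end{equation}
i.e., precisely the constraint (\ref{conx1}).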
Then, in {\em flat} spacetime, after dropping the constraint
(\ref{conx1}), instead of a Coulomb potential, we expect a Yukawa
potential to arise as a solution of the Proca equation:
\begin{equation}\phi\sim N\,\frac{e^{-m_{\rm Proca}r}}{r}\,d\,t\,.
\end{equation}In the corresponding metric--affine spacetime, the
Reissner-Nordstr\"om metric has also to be modified. If done in a
suitable way, this should lead to an exact solution of the {\em
unconstrained} dilation--shear Lagrangian (\ref{dilsh}).
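That the Yukawa form solves the flat-spacetime Proca equation is
readily checked: for the static, spherically symmetric time component
$\phi_0(r)=N\,e^{-m_{\rm Proca}r}/r$ one finds, for $r\neq 0$,
\begin{equation}
\nabla^2\phi_0=\frac{1}{r^2}\,\partial_r\!\left(r^2\,\partial_r
\phi_0\right)=m_{\rm Proca}^2\,\phi_0\,,
\end{equation}
so that the static Proca equation
$\left(-\nabla^2+m_{\rm Proca}^2\right)\phi_0=0$ is fulfilled.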
\section{Discussion}
In the last section we have already seen how we can hope to extend
our work. But also a generalization in another direction is
desirable. If we want to include the shear solutions of Tresguerres,
then the dilation--shear Lagrangian is too narrow. To go beyond the
triplet solution requires a generalization of (\ref{dilsh}). A `soft'
change, by switching on only the post-Riemannian pieces of the {\em
antisymmetric} piece of the curvature 2--form, seems worth a try:
\begin{equation}V_{\rm dil-sh-tor}\sim V_{\rm dil-sh} -\frac{1}{2\rho}\left[
w_2\times ({\tt paircom})^2+w_3\times({\tt
pscalar})^2+w_5\times({\tt ricanti})^2 \right]\,.\end{equation}
A related model was discussed in \cite{Bianchi} Sec.5.3. In this way
we can hope to `excite', besides {\tt conom} and {\tt vecnom}, also
{\tt binom}. Of course, in this case too one should try to
remove the constraint. However, it will not be sufficient in this
case, as is clear from \cite{Yuritheorem} and the Obukhov equivalence
theorem, to turn only to the Einstein--Proca system --- rather a more
general procedure will be necessary.
\section{Appendix: Strong gravito-electric monopole in
Schwarzschild coordinates}
In \cite{Soliton2}, the MAG solution of the soliton type was given in
terms of isotropic coordinates. This makes it more difficult to
compare it with the Reissner--Nordstr\"om type solution. Therefore we
will perform a coordinate transformation. We will denote the isotropic
polar coordinates by $(t,\rho,\theta,\varphi)$ and the Schwarzschild
coordinates by $(t,r,\theta,\varphi)$. In \cite{Soliton2}, the
following monopole solution has been found: The orthonormal coframe
reads
\begin{equation}
\vartheta ^{\hat{0}} =\,{1\over f}\, d\,t \,,\quad
\vartheta ^{\hat{1}} =\, f\, d\, \rho\, , \quad
\vartheta ^{\hat{2}} =\, f\, \rho\, d\,\theta\,,\quad
\vartheta ^{\hat{3}} =\, f\, \rho\, \sin\theta \, d\,\varphi
\label{frame2}\, ,
\end{equation}
with the function
\begin{equation} f(\rho)= 1+\frac{q}{\rho}\,,
\label{monopole}\end{equation}
and the one--form triplet is specified by (in this Appendix, $\rho_{\rm
c}$ denotes the strong gravity coupling constant)
\begin{equation}\phi=
\frac{Q}{k_0}=\,\frac{\Lambda}{k_1}=\,\frac{T}{k_2}=\, \frac{N_{\rm
e}}{\rho}\,\vartheta^{\hat{0}}\,,\qquad{\rm with}\qquad
{q}^2=\frac{z_4\kappa}{2a_0\rho_{\rm c}}\,\left(k_0N_{\rm
e}\right)^2
\label{monotrip}\, .
\end{equation}
For the transition to Schwarzschild coordinates, the
$\theta$--component of the coframe has to obey
\begin{equation} \vartheta^{\hat{2}}=\left(1+\frac{q}{\rho}\right)\rho
\,d\theta=r\,d\theta\,.\end{equation} Thus
\begin{equation} r=\left(1+\frac{q}{\rho}\right)\rho=\rho+q\,,\qquad
dr=d\rho\,.\end{equation}
Substitution into (\ref{monopole}) yields
\begin{equation}f=1+\frac{q}{\rho}=\frac{r}{\rho}=\frac{r}{r-q}=
\frac{1}{1-\frac{q}{r}}\,.\end{equation} Accordingly, the monopole
solution can be rewritten in the form as displayed in
(\ref{frame3},\ref{metric3},\ref{monotrip1}).
\acknowledgments
This research was supported by the joint German--Mexican project
DLR--Conacyt E130--2924 and MXI 009/98 INF and by the Conacyt grant
No.\ 28339E. FWH would like to thank Alfredo Mac\'{\i}as and the
Physics Department of the UAM--I for hospitality. Furthermore, we
appreciate helpful remarks by Tekin Dereli (Ankara), Dietrich Kramer
(Jena), Jim Nester (Chung-li), Yuri Obukhov (Moscow), Jos\'e Socorro
(Le\'on), Marc Toussaint (Cologne), Robin Tucker (Lancaster), and
Charles Wang (Lancaster).
\section{Introduction}
The physics program of super B-factories requires the best possible reconstruction of all final state particles in a given event. The differentiation between charged particles species (electrons, muons, pions, kaons, protons) is accomplished by dedicated particle identification (PID) subdetectors and further assisted by their ionization energy loss ($dE/dx$) as measured in the tracking system as well as their energy deposition in the electromagnetic calorimeter and the muon system.
The Belle\,II barrel region is instrumented with the Time of Propagation (TOP) particle identification system, which measures the detection time of Cherenkov photons generated in sixteen \SI{2625 x 450 x 20}{\mm} quartz radiator bars arranged around the interaction point (IP). The photons are trapped inside the bars by total internal reflection and transmitted to an array of pixelated photo-sensors (MCP-PMTs). The measured time difference between the particle collision and the photon arrival time is the sum of two contributions: first, the particle's time of flight from the IP to the quartz bar, which is inversely proportional to the particle velocity; second, the photon propagation time inside the quartz bar, which is a function of the Cherenkov angle \cite{Belle2, topfee} and thus of the particle velocity. The particle crossing the bar is identified by comparing the time distribution of the photons detected in each photo-sensor pixel with the expected ones for different mass hypotheses.
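As a numerical illustration of the underlying kinematics (a sketch, not Belle\,II analysis code): the Cherenkov angle $\cos\theta_c=1/(n\beta)$ differs measurably between pions and kaons of equal momentum. The refractive index $n\approx 1.47$ is an assumed representative value for fused silica at Cherenkov wavelengths; the masses are PDG values.

```python
import math

MASS = {"pi": 0.13957, "K": 0.49368}  # GeV/c^2 (PDG values)
N_QUARTZ = 1.47  # assumed phase refractive index of fused silica

def beta(p, m):
    """Velocity beta = p/E for momentum p and mass m (natural units)."""
    return p / math.sqrt(p * p + m * m)

def cherenkov_angle_deg(p, particle):
    """Cherenkov angle in degrees; raises ValueError below threshold."""
    cos_theta = 1.0 / (N_QUARTZ * beta(p, MASS[particle]))
    if cos_theta > 1.0:
        raise ValueError("below Cherenkov threshold")
    return math.degrees(math.acos(cos_theta))

for p in (1.0, 2.0, 4.0):  # GeV/c
    d = cherenkov_angle_deg(p, "pi") - cherenkov_angle_deg(p, "K")
    print(f"p = {p:.1f} GeV/c: theta_pi - theta_K = {d:.2f} deg")
```

The shrinking angular separation at high momentum is one reason why the TOP complements the angle information with precise photon timing.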
The TOP is responsible for most of the hadron identification capability in the barrel region, which corresponds to $\approx 80\%$ of the total Belle\,II acceptance.
The mechanical design of the TOP modules leaves gaps in the azimuthal direction ($\phi$) of about \SI{2}{\cm} width between the individual quartz bars, accounting for about \SI{6}{\percent} of missing geometric coverage in the nominal TOP acceptance. Tracks passing through the outermost edges of a TOP bar also have reduced particle identification performance, effectively widening this gap to around \SI{16}{\percent} of tracks with no or degraded TOP PID. Many of the analyses foreseen in the Belle\,II physics program require the positive identification of several final state particles, multiplying the impact of the TOP quartz gap. A particularly important case is represented by the analyses based on the Full Event Interpretation (FEI) with the tagging of the opposite B meson, such as semileptonic B decays with undetected neutrinos. Such decays are of particular interest for searches of phenomena beyond the standard model and, in some cases, can be uniquely explored at Belle\,II. Flavor tagging, used for CP violation measurements, also depends on the efficiency and quality of lepton and hadron identification.
The TOP not only contributes to particle identification, but also plays a central role in the global Belle\,II reconstruction. A critical input for both particle identification and tracking in the silicon vertex detector is the event time $T_0$. The TOP detector is the most precise timing instrument in the Belle\,II apparatus and thus plays an important role in its determination. However, the TOP contribution to the $T_0$ measurement strongly depends on the number of tracks in the TOP acceptance. Especially with the expected increase in background rates, the impact of a precise $T_0$ determination will continue to grow in importance. Thus, not only the particle identification, but the overall quality of the Belle\,II event reconstruction would be improved by extending the TOP coverage.
A possible solution to remedy the gaps in the TOP acceptance is to install a supplemental time-of-flight detector that covers the non-instrumented areas between adjacent quartz bars. Since the available space around the installed TOP modules is quite limited, one or multiple layers of fast-timing silicon detectors would serve this purpose. The Supplemental TOP Gap Instrumentation (STOPGAP) has become feasible with the advent of modern silicon sensor types with very short signal collection times and excellent time resolutions in the range of a few tens of picoseconds, which we will show to be sufficient to provide particle identification in the momentum range \SIrange{0.05}{5.0}{\GeV\per c} and for path lengths of \SIrange{1}{2}{\m} at super B-factory experiments such as Belle\,II.
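The sufficiency of a few tens of picoseconds can be checked with a back-of-the-envelope time-of-flight estimate. In this sketch the \SI{1.2}{\m} flight path is an assumed representative barrel value (the text quotes \SIrange{1}{2}{\m}) and the masses are PDG values:

```python
import math

C = 0.299792458  # speed of light in m/ns
M_PI, M_K = 0.13957, 0.49368  # GeV/c^2 (PDG values)

def tof_ns(p, m, path_m):
    """Time of flight in ns: t = (L/c) * E/p = (L/c) * sqrt(1 + m^2/p^2)."""
    return path_m / C * math.sqrt(1.0 + (m / p) ** 2)

def pi_k_separation_ps(p, path_m=1.2):
    """pi/K time-of-flight difference in ps over an assumed 1.2 m path."""
    return 1e3 * (tof_ns(p, M_K, path_m) - tof_ns(p, M_PI, path_m))

for p in (0.5, 1.0, 2.0, 3.0, 4.0):  # GeV/c
    print(f"p = {p:.1f} GeV/c: dt(K - pi) = {pi_k_separation_ps(p):6.0f} ps")
```

Over a \SI{1.2}{\m} path the $\pi/K$ separation stays above roughly \SI{50}{\ps} up to about \SI{3}{\GeV\per c}, consistent with the resolution targets quoted above.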
There is around \SI{45}{\mm} of free space between the outer shell of the Belle\,II Central Drift Chamber (CDC) and the inner wall of the TOP module enclosures. This is sufficient to fit two layers of silicon sensors \SI{40}{\mm} wide, plus the required additional services, to cover the gap between the TOP quartz bars along their whole length. Fig.\ref{fig:schematic_view} shows a sketch of the geometry around the gap between two TOP module enclosures and a possible STOPGAP module geometry.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{fig/stopgap_concept}
\caption[]{Conceptual sketch of supplemental TOP gap instrumentation module in the cross section of two adjacent TOP modules as seen from the backward side. The solid light blue lines show the outlines of the readout side of the TOP quartz bar box. The dashed light blue lines show the outline of the rest of the TOP quartz bar box. The TOP quartz cross section is shown in dark blue. The dashed black lines roughly show the dimensions of a possible STOPGAP module, with the two dashed circles indicating possible cooling lines or cabling channels and a layer of silicon sensors shown in pink.}
\label{fig:schematic_view}
\end{figure}
\section{MC Performance Study}
We performed a Monte Carlo (MC) study of the STOPGAP performance using simulated $\Upsilon(4S) \to B\bar{B}$ events. The existing Belle\,II detector structures are simulated using Geant4, and the charged particles' trajectories are reconstructed using the publicly available Belle\,II analysis software framework \cite{basf2, basf2_github}.
The STOPGAP response is calculated with a parametric MC approach, using the extrapolated impact point of the track on the inner TOP surface, which approximately corresponds to the planned radial position of the STOPGAP modules.
To simulate the STOPGAP response, several contributions to the total time resolution have been considered, both reducible and irreducible: sensor readout resolutions from \SIrange{20}{100}{\ps}, the global clock distribution jitter (\SI{10}{\ps}), the SuperKEKB bunch overlap time (\SI{15}{\ps}), and the track length uncertainty, parameterized assuming a \SI{5}{\mm} resolution on the impact point of the track in the Z direction. The particle identification is then performed by evaluating a likelihood constructed from these time resolution components, all assumed to be Gaussian. The resulting selection and mis-identification fractions for charged pions and kaons are shown for a sensor resolution of \SI{50}{\ps} in Fig.\ref{fig:tof_perf}. Detailed plots for both pion and kaon (mis-)identification are shown in Figs.\ref{fig:tof_perf_kaon},\ref{fig:tof_perf_pion} in the appendix. A STOPGAP system with a combined time resolution for sensor and readout of \SI{50}{\ps} would perform significantly better than the TOP system in the momentum range $p<\SI{2}{\GeV}$. A \SI{30}{\ps} system would outperform the TOP in the whole momentum range.
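The resolution model just described can be sketched as a quadratic sum of its Gaussian components feeding a two-hypothesis log-likelihood difference. The function names and default values below are illustrative; in the actual reconstruction the expected arrival times come from the track extrapolation.

```python
import math

def combined_sigma_ns(sensor_ps, clock_ps=10.0, bunch_ps=15.0,
                      track_len_mm=5.0, beta=1.0):
    """Total Gaussian time resolution in ns, components added in
    quadrature; the track-length term converts the assumed 5 mm
    path-length uncertainty into a time uncertainty."""
    c_mm_per_ns = 299.792458
    track_ps = 1e3 * track_len_mm / (beta * c_mm_per_ns)
    return 1e-3 * math.sqrt(sensor_ps**2 + clock_ps**2
                            + bunch_ps**2 + track_ps**2)

def delta_loglik(t_meas, t_exp_k, t_exp_pi, sigma_ns):
    """LL_K - LL_pi for one measured hit time, equal-sigma Gaussians
    (normalization terms cancel in the difference)."""
    return (-0.5 * ((t_meas - t_exp_k) / sigma_ns) ** 2
            + 0.5 * ((t_meas - t_exp_pi) / sigma_ns) ** 2)
```

With these assumptions a \SI{50}{\ps} sensor yields a combined resolution of about \SI{56}{\ps}, i.e.\ the irreducible terms add only a modest amount in quadrature.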
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig/stopgap_Keff_simplified}
\caption[]{Selection and mis-identification probabilities at $LL_{K}-LL_{\pi}>0$ of a simulated STOPGAP PID system for charged kaons as a function of particle momentum, shown for a simulated sensor with \SI{50}{\ps} MIP timing resolution with additional timing contributions as explained in the text. Solid curves are the kaon selection efficiencies, while the dashed curves show the mis-identification probability for $\pi\rightarrow K$. Shaded areas indicate the statistical uncertainties of the data points.}
\label{fig:tof_perf}
\end{figure}
Fig.\ref{fig:phi_KK} shows the reconstructed invariant mass of $\phi\rightarrow K^+ K^-$ decays selected from the simulated dataset. A \SI{50}{\ps} STOPGAP system achieves a more than two times better signal-to-noise ratio near the $\phi$ mass peak. The efficiency improvement is driven entirely by the difference in kaon selection efficiency, which is only \SI{90}{\percent} for the TOP but is determined by the MIP detection efficiency for STOPGAP.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig/tof_phi_to_KK_2}
\caption[]{Invariant mass distribution from charged kaons selected at $LL_{K}>LL_{\pi}$ for both tracks, with both tracks in the TOP acceptance. The black line shows the unselected spectrum. The shaded area indicates the fraction corresponding to true $\phi$ decays. The TOP system achieves a S/N of 0.37 ($\mu\pm2\sigma$), while the STOPGAP reconstruction achieves a S/N of 0.79 in the same invariant mass window.}
\label{fig:phi_KK}
\end{figure}
The next step will be the implementation of a realistic STOPGAP geometry into the Belle\,II simulation, which will allow us to study the combined performance of TOP+STOPGAP as well as its impact on specific physics channels of interest. This would also allow realistic studies of the resilience of a time-of-flight system against beam backgrounds. The expected improvement in the combined particle identification has to be understood in order to optimize the number of sensor layers and their ideal dimensions and arrangement.
\subsection{Backgrounds}
We identified two main sources of background affecting STOPGAP: backsplash from showers in the electromagnetic calorimeter located a few centimeters behind the TOP, and beam-induced backgrounds. We expect the former to be easily identifiable in the majority of cases, since backsplash hits are necessarily later in time than the initial particle hit and in addition should deposit significant energy in the corresponding ECL crystal.
The beam backgrounds are discussed in \autoref{sec:hitrates}, where in lieu of a true GEANT4-based simulation study, we use the occupancy in the vertex detector as a proxy for the background level and perform some extrapolations for a few conservative sensor parameters to show that the expected impact is minimal.
\section{Timing Layers at Lower Radii}
Several of the currently proposed and discussed upgrades of the Belle\,II vertexing system plan to increase the inner radius of the current drift chamber, replacing its inner layers with silicon sensors. The thin silicon sensors proposed for the vertexing upgrade are unlikely to provide enough $dE/dx$ discrimination for low-momentum particles as the CDC currently does. Additionally, the CDC currently provides track triggering for transverse particle momenta $p_T$ down to around \SI{100}{\MeV}, which a full silicon inner tracking system might not be able to provide. We thus explore here the potential of recovering the PID and track triggering capabilities for low $p_T$ particles by analytical modelling of dedicated timing layers at \SI{250}{\mm} and/or \SI{450}{\mm} radius.
For pion/kaon separation purposes, the region of $p_T<\SI{500}{\MeV}$ is most relevant, as particles with higher $p_T$ will reach the dedicated PID subdetectors TOP and ARICH, and also generally provide a $dE/dx$ measurement from the remaining part of the CDC or a potential TPC replacement. We calculate the time of flight from the IP until the first crossing of the timing layer radius as a function of particle momentum and mass, including the path length of the circular segment that describes its trajectory. In order to keep the calculation simple and conservative, we assume $\left|p\right| = p_T$ here.
The resulting time of flight differences between charged kaons and pions are then divided by the assumed time resolution of a timing layer, resulting in a measure of particle identification power in standard deviations of separation. These numbers are then compared to an estimate of the $dE/dx$ particle separation of the CDC as shown in Fig.\ref{fig:tof_lowpt}. This study indicates that a \SI{50}{\ps} MIP timing resolution layer at \SI{250}{\mm} radius yields consistently better pion/kaon separation than the CDC, while a \SI{450}{\mm} radius timing layer leads to at least double the separation power between pions and kaons over the CDC in the momentum region inspected here.
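The time-of-flight model described above can be sketched in a few lines. The \SI{1.5}{T} solenoid field value, the circular-arc path-length formula, and all function names are our own illustrative assumptions, not details taken from the study itself:

```python
import math

C = 299792458.0                      # speed of light in m/s
M_PI, M_K = 0.1396, 0.4937           # charged pion/kaon masses in GeV

def tof_ps(p_gev, m_gev, radius_m, b_tesla=1.5):
    """Time of flight from the IP to the first crossing of a cylinder at
    radius_m, using the circular-arc path length of the track and the
    conservative assumption |p| = p_T (assumed 1.5 T solenoid field)."""
    r_curv = p_gev / (0.3 * b_tesla)      # helix radius in m for p in GeV
    if 2 * r_curv < radius_m:
        return None                       # track curls up before reaching the layer
    arc = 2 * r_curv * math.asin(radius_m / (2 * r_curv))  # circular segment length
    beta = p_gev / math.hypot(p_gev, m_gev)                # v/c
    return arc / (beta * C) * 1e12                         # seconds -> ps

def separation_sigma(p_gev, radius_m, sigma_t_ps=50.0):
    """pi/K separation in standard deviations for a given timing resolution."""
    t_k, t_pi = tof_ps(p_gev, M_K, radius_m), tof_ps(p_gev, M_PI, radius_m)
    if t_k is None or t_pi is None:
        return None
    return (t_k - t_pi) / sigma_t_ps
```

In this simple model, a \SI{50}{\ps} layer at \SI{250}{\mm} radius already yields several standard deviations of pion/kaon separation at $p_T = \SI{500}{\MeV}$.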
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig/tof_pid_lin2}
\caption[]{Pion/kaon separation performance of a MIP timing layer compared to an estimated $dE/dx$ PID at different radii. The arrows indicate the lower $p_T$ cutoff for each system due to its radius.}
\label{fig:tof_lowpt}
\end{figure}
To estimate the general feasibility of timing layers to provide track triggers for Belle\,II, a simplified trigger model is combined with extrapolations from the beam background estimates given in \autoref{tab:backgrounds} below. The individual layer geometry is assumed to be a cylinder with a length of three times its radius, leading to a covered area of \SI{1.2}{\meter\squared} at \SI{250}{mm} radius or \SI{3.8}{\meter\squared} at \SI{450}{\mm} radius.
For each examined timing layer radius, the expected background hit rates at full luminosity are extrapolated from all given radii, and their ensemble mean and min-max envelope are used for further calculations. An additional simplifying assumption made here is that background hits are spatially uncorrelated.
A timing system at the trigger level would likely not operate at the full granularity of the sensor technology used, but instead form coincidences over larger areas. We thus conservatively assume here that the effective ``trigger region'' of coincidence of such a system is \SI{10}{\cm\squared}. Such regions would be formed by OR'ing the outputs of all individual sensor channels in that region. For the purposes of this study a ``track'' consists of $N$ stacked (or geometrically associated) trigger regions firing coincidentally within a conservative window of \SI{1}{\ns} (corresponding to a $\pm5\sigma$ range of an assumed \SI{100}{\ps} MIP timing resolution at the trigger level). Individual tracks are considered coincident when $M$ tracks occur within a time window of \SI{5}{\ns}, which roughly covers the time of flight differences between electrons and protons at the relevant radii and momenta.
Based on these assumptions, the trigger rates from pure beam backgrounds can be calculated. The event $T_0$ reconstruction at the trigger level is not expected to reach the resolution of the timing layers discussed here, so a single timing layer can only form a coincidence with itself. The expected noise rates make this option prohibitive, so we only investigate cases of different radial configurations of two timing layers forming a coincidence.
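A minimal sketch of how such background trigger rates can be estimated from uncorrelated hits is given below, using the textbook accidental-coincidence rate $R_\mathrm{acc} \approx n\,\tau^{n-1} R^n$ for $n$ legs of equal singles rate $R$ within a window $\tau$. The parameter values and function names are illustrative assumptions; the study's actual model may differ in detail:

```python
def nfold_accidental_rate(singles_hz, n, window_s):
    """Rate of accidental n-fold coincidences of uncorrelated hits,
    each leg firing at singles_hz: R_acc ~ n * tau^(n-1) * R^n."""
    return n * window_s ** (n - 1) * singles_hz ** n

def trigger_rate(bg_hz_per_cm2, layer_area_m2, n_layers=2,
                 region_cm2=10.0, track_window_s=1e-9,
                 m_tracks=2, event_window_s=5e-9):
    """Accidental trigger rate from uncorrelated background hits."""
    region_rate = bg_hz_per_cm2 * region_cm2          # OR of all pixels in a region
    # fake "track": n_layers stacked regions coincident within 1 ns
    fake_track_per_tower = nfold_accidental_rate(region_rate, n_layers,
                                                 track_window_s)
    towers = layer_area_m2 * 1e4 / region_cm2          # stacked-region towers
    track_rate = towers * fake_track_per_tower
    # fake "event": m_tracks fake tracks within 5 ns
    return nfold_accidental_rate(track_rate, m_tracks, event_window_s)
```

With the extrapolated rate at \SI{250}{\mm} radius, each additional required track suppresses the accidental rate by orders of magnitude, in line with the behaviour described in the text.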
Fig.\ref{fig:timing_trig_twolayer} shows the results of the model described above for the beam backgrounds expected at full design luminosity of SuperKEKB. Depending on the radius, each additional track required for a trigger decreases the background trigger rate by 2-4 orders of magnitude. Increasing the dual timing layer radius from \SI{250}{\mm} to \SI{450}{\mm} decreases the background trigger rates by at least one to two orders of magnitude.
Notably, a configuration with two single timing layers at \SI{250}{\mm} and \SI{450}{\mm} radius, respectively, is estimated to yield background trigger rates right in between the double-layer options. However, the base assumption of spatially uncorrelated background hits is much more likely to be accurate in this configuration. Additionally, a distanced two-layer setup would enable a coarse estimation of the Z-origin of a triggered track, enabling a selection of tracks from the IP, as well as a coarse estimation of the track momentum (or at least its charge) at the trigger level, at the cost of increased complexity in the required trigger reconstruction logic.
The uncertainties from the beam background extrapolations generally span at least an order of magnitude, and the systematic uncertainties from the assumptions described above are not included in the figure at all. The given numbers nevertheless show the fundamental feasibility of a timing track trigger system, at least for triggering events with two or more tracks in its acceptance.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig/timing_track_trigger_rate_twolayer}
\caption[]{Estimated background trigger rate of double layers of a fast timing coincidence track trigger at \SI{250}{\mm} and \SI{450}{\mm} radii, as well as for two single track trigger timing layers at \SI{250}{\mm} and \SI{450}{\mm} radius.}
\label{fig:timing_trig_twolayer}
\end{figure}
\section{Requirements}
We now review the requirements for a STOPGAP upgrade to the Belle\,II TOP detector system or even its full replacement. The technological requirements for timing layers could be extracted in the same general way, but are not discussed in further detail here.
Depending on the details of the chosen STOPGAP geometry, \SIrange{1}{3}{\m\squared} of active area will be needed to fill the gaps in the TOP acceptance with a single sensor layer. In case a double-layer design is necessary, that area would double accordingly. For reference, a full replacement of the TOP system would require around \SI{18}{\m\squared} of timing sensitive sensor.
As was described earlier, the particle identification performance of a time-of-flight system is fundamentally limited by the single MIP detection efficiency of the used sensor technology. A MIP efficiency as close as possible to \SI{100}{\percent} is thus a strong requirement for such a system.
The allowable material budget for a STOPGAP module should generally not exceed the material budget of a full TOP module consisting of several \si{\mm} of aluminium and around \SI{2}{\cm} of quartz (about \SI{0.2}{X_0}). Since there will be some effective overlap between the two structures, minimising the STOPGAP material budget will be helpful in reducing potential impacts on the photon reconstruction from conversions in the material in front of the ECL crystals.
In the following, the expected background hit rates and radiation dosages are discussed. This discussion is based on extrapolations of the expected beam background rates and radiation dosages, without additional safety factors, at full SuperKEKB luminosities at the radii of the currently installed Belle\,II vertex detector \cite{baudot}. The given numbers are extrapolated to the radius of the TOP system and potential radii of timing trigger layers. The original values as well as our radial extrapolations are given in \autoref{tab:backgrounds}.
\begin{table}
\centering
\caption{Estimated hit rates and radiation dosages at full SuperKEKB luminosity of \SI{8E35}{\per\cm\squared\per\second} for the existing Belle\,II VXD layer radii \cite{baudot}. The given numbers for a STOPGAP system or timing trigger layers at \SIrange{250}{450}{\mm} are extrapolated from the given ensemble of VXD estimates. The given range for each radius is the minimum and maximum value obtained from the ensemble.}
\ra{1.1}
\begin{tabular}{lrrrr}
\toprule
Layer & Radius & Hit Rate & NIEL & Total Ionising Dose\\
& in \si{\mm} & in \si{\Hz\per\cm\squared} & in \si{n_{eq} \per \cm \squared} & in \si{Rad} \\
\midrule
1 (PXD) & \num{14} & \num{22.6E6} & \num{10E12} & \num{2E6} \\
2 (PXD) & \num{22} & \num{11.3E6} & \num{5E12} & \num{600E3} \\
3 (PXD) & \num{39} & \num{1.41E6} & \num{0.2E12} & \num{100E3} \\
4 (SVD) & \num{80} & \num{290E3} & \num{0.1E12} & \num{20E3} \\
5 (SVD) & \num{115} & \num{220E3} & \num{0.1E12} & \num{10E3} \\
6 (SVD) & \num{135} & \num{150E3} & \num{0.1E12} & \num{10E3} \\
\midrule
STOPGAP & \num{1167} & \makecell[r]{2416 \\ (\numrange{1362}{4015}) } & \makecell[r]{\num{1.05E9} \\ (\numrange{0.22E9}{1.78E9}) } & \makecell[r]{158 \\ (\numrange{94}{288}) } \\
Trigger 1 & \num{450} & \makecell[r]{\num{16.2E3} \\ (\numrange{9.2E3}{27E3}) } & \makecell[r]{\num{7.08E9} \\ (\numrange{1.50E9}{12.0E9}) } & \makecell[r]{1062 \\ (\numrange{632}{1936}) } \\
Trigger 2 & \num{250} & \makecell[r]{\num{52.6E3} \\ (\numrange{29.7E3}{87.5E3}) } & \makecell[r]{\num{23.0E9} \\ (\numrange{4.87E9}{38.7E9}) } & \makecell[r]{3442 \\ (\numrange{2048}{6272}) } \\
\bottomrule
\end{tabular}
\label{tab:backgrounds}
\end{table}
\subsection{Estimation of STOPGAP hit rates} \label{sec:hitrates}
Based on \autoref{tab:backgrounds}, we calculate a STOPGAP background rate of \SI{2.4+-0.9}{\kHz\per\cm\squared} at the SuperKEKB design luminosity of \SI{8E35}{\per\cm\squared\per\second}. In order to be conservative, we assume an additional safety factor of 5, and an extra luminosity factor of 5 for an assumed maximum background hit rate of \SI{60}{\kHz\per\cm\squared}. Further assuming a conservative readout open time window of \SI{100}{\ns} (to broadly cover an assumed trigger time jitter of \SI{10}{\ns} and a maximum time of flight of individual particles of \SI{20}{\ns}), we arrive at a background occupancy of $\SI{0.60+-0.23}{\percent\per\cm\squared}$. Finding the hit on the STOPGAP sensor that corresponds to a given track needs to consider around \SI{1}{\cm\squared} of sensor area due to the limited Z-pointing resolution of the CDC tracking. We thus arrive at a very conservatively estimated chance of \SI{0.6}{\percent} that there is a background hit in the same spatial and temporal region of interest on the STOPGAP surface for a given charged particle track.
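The arithmetic of this estimate can be reproduced directly from the numbers quoted above (the variable names are ours):

```python
base_rate = 2.4e3            # Hz/cm^2, extrapolated background rate at design luminosity
safety, lumi_margin = 5, 5   # conservative safety factor and extra luminosity factor
max_rate = base_rate * safety * lumi_margin   # assumed maximum: 60 kHz/cm^2

window = 100e-9              # s, conservative readout open time window
occupancy = max_rate * window                 # expected hits per cm^2 per window

roi_area = 1.0               # cm^2 region of interest from the CDC Z-pointing resolution
p_background = occupancy * roi_area           # chance of a background hit near a track
```

This reproduces the quoted \SI{60}{\kHz\per\cm\squared} maximum rate and the \SI{0.6}{\percent} per-track background chance.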
\subsection{Radiation Hardness of STOPGAP}
As given in \autoref{tab:backgrounds}, the expected non-ionizing energy loss (NIEL) damage at the STOPGAP radius is on the order of \SI{1E9}{n_{eq}\per\cm\squared}. This value is compatible with the NIEL estimates obtained as part of the original TOP irradiation expectation studies. Similarly, the total ionizing radiation dose at the STOPGAP radius is estimated to be \SI{0.16+-0.07}{kRad}.
These values are several orders of magnitude lower than those expected and observed in the (HL-)LHC environments. Radiation hardness is thus not expected to be of significant importance in the selection of a suitable technology option.
\subsection{Granularity}
Based on the occupancy estimates and the tracking extrapolation resolution presented above, a sensor granularity of around \SI{1}{\cm\squared} would be sufficient. The time resolution contribution from the uncertainty in the trajectory path length due to the Z-pointing resolution is generally small. With sufficient granularity in the Z-direction, the inclusion of STOPGAP hits into the track fit would further eliminate this uncertainty. \SI{1}{\mm} of STOPGAP sensor granularity in Z corresponds to around \SI{1}{\ps} of additional time resolution contribution from the track path length uncertainty.
\subsection{Summary}
Especially in comparison to the detector upgrades under discussion and construction for the HL-LHC, the requirements on hit rates, radiation hardness and granularity are entirely negligible for a STOPGAP application. The driving factors of its performance are the single MIP timing resolution and a MIP efficiency of \SI{>99}{\percent}. As discussed above, STOPGAP modules are expected to yield useful particle identification capabilities for time resolutions \SI{<100}{ps}.
For timing layers at lower radii, the requirements on hit rate and radiation hardness are stronger than for the STOPGAP case, but still tame compared to HL-LHC developments. As part of the inner tracking detector, finer granularity and a much reduced material budget would be required compared to STOPGAP.
\section{Existing Sensor Options and Readout Technologies}
There are several options for sensors and readout technologies. We briefly review some of the existing technologies planned for use in the HL-LHC upgrades and under consideration for, e.g., the EIC detector concepts.
\paragraph{Low Gain Avalanche Diodes}(LGAD) are planned for the end cap timing layers of the CMS and ATLAS high luminosity upgrades \cite{atlastdr, cmstdr}. Due to their doping profiles, LGAD sensors are fabricated in specialised non-standard processes and are thus rather expensive. They generally require hybridized readout chips bump-bonded to them, further increasing their cost and the difficulty of assembly and handling. Current LGAD designs have a large dead zone between neighbouring readout pads and hence require double layers for greater than \SI{99}{\percent} MIP efficiency. Recently, first working prototypes of AC-coupled LGADs have been shown to remedy the efficiency gaps between pixels. Integrated readouts for LGAD sensors are under development, e.g. for the ATLAS upgrade by OMEGA (ALTIROC \cite{altiroc}) and for the CMS upgrade by FNAL (ETROC \cite{cmstdr, etroc_tdc}). The target time resolution for LGAD layers is \SI{30}{ps} for a double layer. Under ideal conditions, unirradiated single LGAD layers have demonstrated timing resolutions down to around \SI{20}{ps} for single MIPs.
\paragraph{LYSO+SiPM} systems were originally considered for time-of-flight positron emission tomography applications \cite{tofpet}, but are now also used in HEP detectors, specifically the CMS barrel timing layer (BTL) \cite{cmstdr}. The CMS BTL uses small staves of scintillating LYSO:Ce crystals of around \SI{5x5x50}{\mm}, read out by one silicon photomultiplier (SiPM) on each of the smaller faces. Timing resolutions down to \SI{30}{ps} for single MIPs have been demonstrated. The possible granularity of this approach is fundamentally limited by the minimum feasible size of the LYSO crystals to around \SI{1}{\cm\squared}. The material budget is driven by the required thickness of the LYSO crystals of a few \si{mm} to yield enough scintillation photons for a timing measurement of the required precision. Since SiPMs generally have large internal amplification, the noise requirements on the readout electronics are less demanding, enabling high readout channel densities per chip (see e.g. the CMS TOFHiR \cite{tofhir}).
In summary, all existing fast timing sensor technologies are potentially feasible for a STOPGAP system. In fact, the LYSO+SiPM CMS BTL modules would almost fit into the available space between TOP modules as they have been designed and constructed for the CMS upgrade. However, neither the achievable granularity nor the material budget of (at the very least) a few \si{\mm} of LYSO crystals is suitable for an application at lower radii. LGADs are a potential option for both STOPGAP and track timing layers, with drawbacks in granularity, MIP efficiency and cost per area. AC-LGADs can potentially solve the first two points, but they do not address the cost issue and have also not yet been demonstrated in full-scale systems.
\section{Timing with fast MAPS}
As discussed in great detail in Ref.~\cite{riegler}, the fundamental limitation in improving the timing resolution of silicon sensors is the achievable signal-to-noise ratio (SNR) of the output signal. LGADs address this via amplification of the ionisation charge signal in a thin avalanche amplification layer. A feasible alternative is to improve the SNR by means of a lower-noise and thus higher-power electronic amplification of the pixel output signal. This has been demonstrated in principle by the NA62 Gigatracker \cite{gigatracker}, which achieves \SI{130}{ps} time resolution on single MIP hits per layer with a hybridized fast timing frontend bonded to \SI{200}{\um} thick silicon sensors without internal avalanche amplification.
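The SNR limitation can be summarised by the usual noise-to-slope relation for a leading-edge time measurement, $\sigma_t \approx \sigma_\mathrm{noise}/(\mathrm{d}V/\mathrm{d}t) \approx t_\mathrm{rise}/\mathrm{SNR}$. The following sketch only illustrates this scaling and is not a detailed sensor model:

```python
def time_jitter_ps(rise_time_ns, snr):
    """Electronic-noise contribution to a leading-edge time measurement:
    sigma_t ~ t_rise / SNR (noise-to-slope relation)."""
    return rise_time_ns / snr * 1e3   # ns -> ps
```

Halving the noise (doubling the SNR) at fixed rise time halves the jitter, which is why higher-power, lower-noise amplification directly buys time resolution.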
The next step towards monolithic fast timing sensors is to integrate high-power, low-noise preamplifiers and discriminators directly into the sensor. This is ideally done in a CMOS process, leveraging the cost benefits and scalability of industrial CMOS fabrication. Such a chip would fully integrate the readout electronics and sparsification logic into the sensor wafer, greatly simplifying the required complexity of the electronics downstream of the sensor layer: an intelligent readout logic ideally only requires power and a reference clock as inputs, and delivers lists of hits and their timings via a fast serial output line. This would be realised in a single, thin layer of silicon with a minimum of material budget. \autoref{fig:powerbox} shows a comparison of existing fast timing technologies with the current state of MAPS and the potential for fast MAPS.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig/powerbox_3.pdf}
\caption[]{Comparison of existing fast MIP timing technologies and the potential for fast MAPS. The granularity of a LYSO+SiPM is limited in practice and the material budget is relatively high compared to silicon sensors. LGADs have shown good timing resolutions, but are expensive and somewhat limited in granularity (AC-LGAD might solve the granularity issue in the near future). Current MAPS are cost effective, highly granular and radiation hard, but their time resolution falls short by orders of magnitude compared to LYSO+SiPM and LGADs.}
\label{fig:powerbox}
\end{figure}
At least two groups are actively pursuing first steps towards MAPS with single MIP time resolutions of \SI{<100}{ps}. The University of Geneva is working on small pixels produced in \SI{130}{\nm} IHP SG13G2 SiGe BiCMOS \cite{geneva1, geneva2}. They have been shown to achieve single MIP time resolutions down to \SI{60}{ps} in laboratory conditions for small hexagonal pixels of \SI{65}{\um} side length. In the current prototype, the power consumption necessary to achieve this time resolution is outright prohibitive at around \SI{50}{\kilo\watt\per\meter\squared}.
The IRFU Saclay group has produced a prototype of their CACTUS concept in a \SI{150}{\nm} LFoundry HVCMOS process \cite{cactus}. The sensor is based on their experience with and the layout of the ATLAS monopix family, modified for the best possible time resolution. Their test structure includes \SI{1x1}{\mm} and \SI{1x0.5}{\mm} pads. The single MIP time resolutions achieved with the first available iteration are on the order of hundreds of \si{ps}. The reason for this worse than expected performance has been traced to mis-estimated parasitic capacitances of the metal layer power rails running across the pixels. A future iteration of the same concept is expected to perform better. \autoref{fig:fmaps_prototypes} shows a picture of the Geneva prototype chip as well as a diagram of the CACTUS test structures.
\begin{figure}[ht]
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig/geneva_sige.png}
\caption{Die photograph of the University of Geneva fast monolithic pixel prototype based on SiGe HBT amplifiers. The hexagonal structures are individual test pixels of \SI{130}{\um} side length (left) and \SI{65}{\um} side length (right). \cite{geneva1, geneva2}}
\label{fig:geneva_die}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{fig/cactus_diagram.png}
\caption{Schematic of the CACTUS HVCMOS prototype chip consisting of arrays of \SI{1x1}{\mm} and \SI{1x0.5}{\mm} test pixels with additional digital logic and contact pads on the top and bottom. \cite{cactus}}
\label{fig:cactus_diagram}
\end{subfigure}
\caption{Early fast CMOS sensor prototypes.}
\label{fig:fmaps_prototypes}
\end{figure}
These projects show that it is indeed possible to achieve sub-\SI{100}{ps} timing with monolithic CMOS detectors, even though none of the projects has produced a sensor prototype that is currently feasible for practical use.
The relatively tame requirements of a STOPGAP sensor on hit rates, radiation hardness and (in principle) readout granularity, at least compared to the requirements for HL-LHC applications, make STOPGAP an ideal test bed for emerging fast timing CMOS MAPS technologies.
The primary goal of such a dedicated project would be to develop a prototype pixel structure with an integrated preamplifier close to the thermodynamic limit in noise per power consumption. \autoref{fig:preamp_enc} shows the required effective noise charge of an in-pixel amplifier for different target time resolutions and silicon sensor thicknesses.
While it is unlikely for any realistic fast MAPS sensor to reach the timing performance of (AC-)LGADs, it should be possible to come within a factor of two of it. While LGADs or other future sensors might remain the technology of choice for applications demanding the highest possible MIP timing resolution, the prospect of achieving almost comparable resolutions at significantly reduced cost is extremely appealing. Opening up the fast timing frontier for MAPS also enables future MAPS tracker projects to include as much time resolution in their designs as needed for the application. Establishing fast MAPS is thus of prime importance not only for HEP detectors, but for technologically related instrumentation as a whole.
Fast MAPS have the potential to fulfill all requirements for a STOPGAP system or for timing layers in an upgraded inner tracking system. Since the technical requirements, especially for STOPGAP, are fairly tame apart from the MIP time resolution, it poses an ideal prototype project to establish fast MAPS for HEP instrumentation.
\begin{figure}[ht]
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.0\linewidth, trim=1cm 1cm 1cm 0.5cm]{fig/preamp_contours_200um.pdf}
\label{fig:preamp_enc_200}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.48\textwidth}
\centering
\includegraphics[width=1.0\linewidth, trim=1cm 1cm 1cm 0.5cm]{fig/preamp_contours_50um.pdf}
\label{fig:preamp_enc_50}
\end{subfigure}
\caption{Preamplifier requirements on effective noise charge (ENC) and peaking time to achieve a given time resolution with a silicon sensor without internal amplification. The shaded area indicates the signal collection time for a sensor of the given thickness. Calculation based on \cite{riegler}.}
\label{fig:preamp_enc}
\end{figure}
\section{Resource estimates and timeline}
The resource estimate for the whole project is divided into two parts: the development of the MAPS technology, and the actual construction of the detector modules. The given numbers come with great uncertainties and should be considered as rough guidelines.
\subsection{R\&D for fast MAPS}
Making significant steps towards very fast CMOS sensors with timing resolution around \SI{50}{ps} is the most important fundamental building block of this proposal. In that regard the goal of building a STOPGAP system is a vehicle to motivate the extensive R\&D needed towards such fast CMOS sensors. On the way, the current practical limitations of this technology choice will be discovered and, eventually, overcome.
The current efforts towards achieving the best time resolution with CMOS sensors use fabrication processes of \SI{130}{\nm} and \SI{150}{\nm} feature size. Since the noise figure is correlated with the process feature size, an immediate improvement in time resolution (or alternatively in the power draw at a fixed time resolution) can be achieved by moving the development to the next smaller node of \SI{65}{\nm}. To this end, a suitable \SI{65}{\nm} process with sufficiently high substrate resistivity to support high-voltage biasing needs to be identified and qualified first. A general drawback of \SI{65}{\nm} processes with respect to \SI{130}{\nm} or \SI{150}{\nm} ones is the higher cost per submission and the lower availability of multi-project wafer shuttle runs, increasing the cost of such developments.
At least two engineering submissions and extensive laboratory test bench and testbeam campaigns will be necessary to develop and validate a sensor layout that achieves the necessary time resolutions at an acceptable power budget.
The second step is to integrate TDC electronics and the readout logic onto the same chip. Recent developments in \SI{65}{\nm} processes have demonstrated TDCs with bin widths \SI{<10}{ps} \cite{etroc_tdc, picotdc}. We thus do not expect fundamental issues in designing an on-chip TDC that reaches the resolution range of \SIrange{20}{30}{ps} needed to read out our sensors.
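As a simple cross-check, an ideal TDC with bin width $b$ contributes a quantisation resolution of $b/\sqrt{12}$, so sub-\SI{10}{ps} bins sit comfortably below the \SIrange{20}{30}{ps} target (an illustrative calculation, ignoring differential non-linearity and other real-TDC effects):

```python
import math

def tdc_quantisation_ps(bin_width_ps):
    """RMS quantisation error of an ideal TDC with uniform bins: b / sqrt(12)."""
    return bin_width_ps / math.sqrt(12)
```

A \SI{10}{ps} bin thus contributes only about \SI{3}{ps} to the overall time resolution budget.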
The intermediate goal of this proposed R\&D program is the construction of a small-sized prototype detector with an active area of a few \si{\cm\squared} in order to validate the sensor technology in a realistic environment. This prototype will be installed inside Belle\,II and operated synchronously with the other detectors while recording physics collisions. The readout of the prototype does not necessarily need to be constructed using a final chip with fully integrated electronics, but could e.g. be read out using a picoTDC \cite{picotdc} integrated into the prototype module. Demonstrating the time-of-flight resolution for single MIPs in the environment of an active HEP experiment will not only validate the sensor technology, but also uncover the practical issues with operating a sensor in realistic conditions compared to test beam or bench tests. The ultimate capstone of the program would be a working prototype of a fully integrated monolithic detector module with integrated sparsified readout that demonstrates the required timing and hit rate capabilities at least in a test beam environment.
We believe that such a program is not feasible without significant funding for at least three to five years. Of that time, we coarsely estimate that around one to two years will be needed to finish the sensor design, at least one year is needed for the integrated timing frontend and readout logic, and an additional year will be needed for integration onto one common mixed signal chip. Depending on available person power and expertise, parts of this design and test effort can run in parallel. To acquire the necessary momentum, the strong support of an existing local work group with extensive expertise in CMOS sensor design and a profound interest in pushing the possibilities of fast MAPS to its limits is required at the very least for the first one to two years.
\subsection{STOPGAP construction}
Ultimately, 16 STOPGAP modules with around \SIrange{1}{2}{\m\squared} of total active sensor area will have to be built and installed. Apart from the significant funding requirements for the sensor production itself, this will need detailed concepts for the mechanics, cooling, service routing and installation procedure for these modules. None of these tasks have even been considered so far. First steps towards a STOPGAP module design in the form of finite element simulations to estimate the required stiffness and eventual mechanical mock-ups could in principle start in parallel with the sensor R\&D program outlined above.
\section{Summary and Outlook}
Around \SI{6}{\percent} of the charged tracks in the nominal coverage of the TOP particle identification system in the Belle\,II barrel region pass near or through the gaps between the TOP quartz radiator bars and thus do not yield usable PID information. An additional \SI{10}{\percent} of tracks suffer from degraded PID performance from passing too close to the quartz edges. The STOPGAP project aims to fill these gaps in coverage with fast-timing silicon sensors that measure the time of flight of such particles, improving the Belle\,II particle identification coverage. Examining the possible high precision timing technologies, CMOS timing detectors appear to be the most promising for the requirements of a Belle\,II upgrade. We propose to start a significant CMOS timing sensor R\&D program with a series of prototypes and beam tests. The intermediate goal would be to integrate a small sensor prototype into Belle\,II and to operate it in physics collisions.
\section{Introduction}
Cognates are words that are known to have descended from a common ancestral language. In historical
linguistics, identification of cognates is an important step for positing relationships between
languages, which is done by applying the comparative method \cite{trask1996historical}.
In NLP, automatic identification of cognates is associated with the task of determining whether two
words are descended from a common ancestor or not. There are at least two ways to achieve this.
One way is to modify a well-known string alignment technique such as
Longest Common Subsequence or the Needleman-Wunsch algorithm \cite{needleman1970general} to weigh the
alignments differentially \cite{kondrak2001identifying,list2012sca}. The weights are determined
from linguistic knowledge of the sound changes that occurred in the
language family.
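A minimal sketch of such a weighted alignment is a plain Needleman-Wunsch recursion with a pluggable substitution scorer and a linear gap penalty; the actual weighting schemes of the cited works differ:

```python
def needleman_wunsch(a, b, score, gap=-1.0):
    """Global alignment score of two words under a substitution
    scoring function score(x, y) and a linear gap penalty."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + score(ca, cb),  # substitution or match
                           prev[j] + gap,                # gap in b
                           cur[-1] + gap))               # gap in a
        prev = cur
    return prev[-1]
```

With score(x, y) = 1 for a match and -1 otherwise this reduces to a simple similarity score; linguistically informed weights for sound correspondences replace that function.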
The second approach employs a machine learning perspective that is widely used
in NLP: cognate identification is achieved by training a linear classifier or a sequence
labeler on a set of labeled positive and negative examples, and then employing the trained classifier
to classify new word pairs. The features for such a classifier consist of word similarity measures
based on the number of shared bigrams, edit distance, and longest common subsequence
\cite{hauer-kondrak:2011:IJCNLP-2011,inkpen2005automatic}.
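The similarity measures named above can be computed as in the following generic sketch (not the exact feature set of the cited works):

```python
def bigrams(w):
    return {w[i:i + 2] for i in range(len(w) - 1)}

def dice(a, b):
    """Shared-bigram similarity (Dice coefficient)."""
    A, B = bigrams(a), bigrams(b)
    return 2 * len(A & B) / (len(A) + len(B)) if A or B else 0.0

def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lcs_len(a, b):
    """Length of the longest common subsequence."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def features(a, b):
    """Word-pair feature vector for a cognate classifier."""
    n = max(len(a), len(b))
    return [dice(a, b), 1 - edit_distance(a, b) / n, lcs_len(a, b) / n]
```

Each feature is normalized to [0, 1], so the vector can be fed directly to any linear classifier.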
The above procedures provide an estimate of the similarity between a pair of words and cannot
directly be used to infer a phylogeny based on models of trait evolution. The
pairwise judgments have to be converted into multiple
cognate judgments so that the multiple judgments can be supplied to an automatic tree-building
program for inferring a phylogeny for the languages under study.
Note that the Indo-European dating studies
\cite{bouckaert2012mapping,chang2015ancestry} employ human expert
cognacy judgments for inferring the phylogeny and dates of a very well-studied language family.
Hence, there is a need for developing automated cognate
identification methods that can be applied to under-studied languages of the world.
\section{Related work}
The earlier computational efforts of \newcite{jager2013phylogenetic} and \newcite{rama2013two}
employ Pointwise Mutual Information (PMI) to compute transition matrices between sounds.
Both works use an undirected, sound-correspondence-based scorer to compute word similarity.
The general approach is to align word pairs
using vanilla edit distance and impose a cutoff to extract potential cognate pairs. The aligned
sound symbols are then used to compute the PMI scoring matrix, which is in turn used to realign
the pairs. The PMI scoring matrix is re-estimated from the realigned pairs, and this procedure is
repeated until convergence.
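One re-estimation step of this procedure can be sketched as follows; the aligned symbol pairs here are toy counts for illustration, not data from either cited study:

```python
import math
from collections import Counter

def pmi_matrix(aligned_pairs):
    """Estimate PMI(x, y) = log [ p(x, y) / (p(x) p(y)) ] from aligned symbol pairs."""
    joint = Counter(aligned_pairs)
    left = Counter(x for x, _ in aligned_pairs)
    right = Counter(y for _, y in aligned_pairs)
    n = len(aligned_pairs)
    return {
        (x, y): math.log((c / n) / ((left[x] / n) * (right[y] / n)))
        for (x, y), c in joint.items()
    }

# Toy alignments: 't' aligns with 'd' far more often than chance predicts.
pairs = [("t", "d")] * 8 + [("t", "s")] * 2 + [("s", "s")] * 10
pmi = pmi_matrix(pairs)
# pmi[("t", "d")] > 0: the pair co-occurs more often than chance.
# pmi[("t", "s")] < 0: the pair co-occurs less often than chance.
```

A positive PMI score thus flags a recurrent sound correspondence, which is what the iterative realignment procedure amplifies.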
\newcite{jager2013phylogenetic} imposes an additional cutoff based on the PMI scoring matrix.
Further, \newcite{jager2013phylogenetic} also employs the PMI scoring matrix to infer family trees
for new language families and compares those trees with the \emph{expert} trees given in
\emph{Glottolog} \cite{nordhoff2011glottolog}. \newcite{rama2013two} take a slightly different
approach: they compute a PMI matrix independently for each language family and
evaluate its performance on the task of pairwise cognate identification. In this work, we also
compare the convolutional networks against a PMI-based binary classifier.
Previous works of cognate identification such as \cite{Bergsma:07,inkpen2005automatic} supply
string similarity measures as features for training different classifiers such as decision trees,
maximum-entropy, and SVMs for the purpose of determining if a given word pair is cognate or not.
In another line of work, \newcite{list2012sca} employs a transition matrix derived from
historical linguistic knowledge to align and score word pairs. This approach is algorithmically
similar to that of \newcite{Kondrak:00} who employs articulation motivated weights to score a sound
transition matrix. The weighted sound transition matrix is used to score a word pair.
The work of \newcite{list2012sca}, known as the Sound-Class Phonetic Alignment (SCA) approach,
reduces the phonemes to historically motivated sound classes such that transitions between some
classes are penalized less than transitions between the rest. For example, the change of velars
to palatals is well attested across the world's languages.
The SCA approach employs a weighted directed graph to model the directionality and proportionality
of sound changes between sound classes. For example, a direct change between velars and dentals is
unattested and would receive a zero weight. Both \newcite{Kondrak:00} and
\newcite{list2012sca} set the weights and directions in the sound transition graph to reflect the
reality of sound change.
All the approaches outlined above either employ a scoring matrix that is derived automatically or
manually, or train an SVM on form-similarity-based features for the purpose of cognate identification.
\section{Convolutional networks}
This article is the first to apply convolutional networks (ConvNets) to phonemes by
treating each phoneme as a vector of binary-valued phonetic features. This approach has the
advantage that it does not require explicit feature engineering, alignments,
or a sound transition matrix. The approach requires only cognacy statements and phonetic
descriptions of the sounds used to transcribe the words. The cognacy statements can be obtained
from etymological dictionaries, and the phonetic descriptions of the sounds can be obtained from
\newcite{ladefoged1998sounds}.
\newcite{collobert2011natural} proposed ConvNets for NLP tasks in 2011, and ConvNets have since
been applied to sentence classification
\cite{kim:2014:EMNLP2014,DBLP:conf/naacl/Johnson015,KalchbrennerACL2014,NIPS2015_5782},
part-of-speech tagging \cite{santos2014learning}, and information retrieval \cite{shen2014latent}.
\newcite{kim:2014:EMNLP2014} applied convolutional networks to pre-trained word embeddings in a
sentence for the task of sentence classification. \newcite{DBLP:conf/naacl/Johnson015} train their
convolutional network from scratch using a one-hot vector for each word, and show that it
performs better than an SVM classifier trained on bag-of-words features.
\newcite{santos2014learning} use character embeddings to train their POS tagger, which achieves
higher accuracies than those reported in \cite{manning2011part}.
In a recent work, \newcite{NIPS2015_5782} treat documents as a sequence of characters and transform
each document into a sequence of one-hot character vectors. The authors designed and trained two
9-layer convolutional networks for the purpose of sentiment classification. The authors report
competitive or state-of-the art performance on a wide range of benchmark sentiment classification
datasets.
\section{Character convolutional networks}
\newcite{chopra2005learning} extended traditional ConvNets
to classify whether two images belong to the same person. These ConvNets are known as
Siamese networks (named after Siamese twins) and share weights between independent but identical
convolutional branches. Siamese networks and their variants have been employed for
identifying if two images are from the same person or different persons
\cite{Zagoruyko_2015_CVPR}; and for recognizing if two speech segments belong to the same word
class \cite{DBLP:journals/corr/KamperWL15}.
\subsection{Word as image}
Historical linguists perform cognate identification based on regular correspondences which are
described as changes in phonetic features of phonemes. For instance, Grimm's law $b^h \sim b$ is
described as loss of aspiration; $p \sim f$ is described as a change from plosives to fricatives;
and $d \sim t$ as devoicing, as in English \emph{ten} $\sim$ Latin \emph{decem}.
Learning criteria for cognacy through phonetic features
from a set of training examples implies that there is no need for explicit alignment and
design/learning of sound scoring matrices. In this article, we
represent each phoneme as a binary-valued vector of phonetic features and then perform convolution
on the two-dimensional matrix.
\subsection{Siamese network}
Intuitively, a network should learn a similarity function such that words that
diverged through regular sound shifts are placed closer to one another than two
words that are not cognates. Siamese networks are suitable for this task since they learn a
similarity function that assigns higher similarity to cognate pairs than to non-cognate pairs. The
weight tying ensures that two cognate words sharing similar phonetic features in a local context
tend to get higher scores than words that are not cognate.
\subsection{Phoneme vectorization}
In this article, we work with the ASJP alphabet \cite{brown2013sound}. The ASJP alphabet is coarser
than IPA but is designed to capture the most frequent sounds in the world's languages. The ASJP
database has word lists for 60\% of the world's languages but has cognate judgments only for some
selected families \cite{wichmann2013languages}.
\begin{table}[!ht]
\small
\begin{tabular}{p{7cm}}
\hline
\hline
p = voiceless bilabial stop and fricative [IPA: p, \textipa{F}]\\
b = voiced bilabial stop and fricative [IPA: b, \textipa{B}] \\
m = bilabial nasal [IPA: m] \\
f = voiceless labiodental fricative [IPA: f] \\
v = voiced labiodental fricative [IPA: v] \\
8 = voiceless and voiced dental fricative [IPA: \textipa{T}, \textipa{D}] \\
4 = dental nasal [IPA: \textipa{\|[n}] \\
t = voiceless alveolar stop [IPA: t] \\
d = voiced alveolar stop [IPA: d] \\
s = voiceless alveolar fricative [IPA: s] \\
z = voiced alveolar fricative [IPA: z] \\
c = voiceless and voiced alveolar affricate [IPA: ts, dz] \\
n = voiceless and voiced alveolar nasal [IPA: n] \\
S = voiceless postalveolar fricative [IPA: \textipa{S}] \\
Z = voiced postalveolar fricative [IPA: \textipa{Z}] \\
C = voiceless palato-alveolar affricate [IPA: t\textipa{S}] \\
j = voiced palato-alveolar affricate [IPA: d\textipa{Z}] \\
T = voiceless and voiced palatal stop [IPA: c, \textipa{\textbardotlessj}] \\
5 = palatal nasal [IPA: \textipa{\textltailn}]\\
k = voiceless velar stop [IPA: k]\\
g = voiced velar stop [IPA: g]\\
x = voiceless and voiced velar fricative [IPA: x, \textipa{G}]\\
N = velar nasal [IPA: \textipa{N}]\\
q = voiceless uvular stop [IPA: q]\\
G = voiced uvular stop [IPA: \textipa{\;G}]\\
X = voiceless and voiced uvular fricative, voiceless and voiced pharyngeal fricative [IPA:
\textipa{X}, \textipa{K}, \textipa{\textcrh}, \textipa{Q}]\\
7 = voiceless glottal stop [IPA: \textipa{P}]\\
h = voiceless and voiced glottal fricative [IPA: h, \textipa{H}]\\
l = voiced alveolar lateral approximate [IPA: l]\\
L = all other laterals [IPA: L, \textipa{L}]\\
w = voiced bilabial-velar approximant [IPA: w]\\
y = palatal approximant [IPA: j]\\
r = voiced apico-alveolar trill and all varieties of ``r-sounds'' [IPA: r, R, etc.]\\
! = all varieties of ``click-sounds'' [IPA: !, \textipa{\!o}, \textipa{||},
\textipa{\textdoublebarpipe}]\\
\hline
\end{tabular}
\caption{ASJP consonants. ASJP has 6 vowels which we collapsed to a single vowel
\emph{V}.}
\label{tab:asjpconso}
\end{table}
We composed a binary vector for each phoneme based on the descriptions given in table
\ref{tab:asjpconso}. In total, there are 16 binary-valued features. We also reduced all vowels to a
single vowel that has a value of $1$ for the voicing feature and $0$ for the rest of the features.
The main motivation for this decision is that vowels are diachronically less stable than consonants
\cite{kessler2007word}. A word such as ``fat'' would then be represented as a $3 \times 16$ matrix
where each column provides a binary value for one phonetic feature (cf. table \ref{tab:binary}).
\begin{table}[!ht]
\small
\begin{tabular}{p{1.5cm}||p{4cm}}
\hline
\hline
p& 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0\\
b& 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0\\
f &0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0\\
v &1 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0\\
m &1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0\\
8 &1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0\\
4 &1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0\\
t &0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0\\
d &1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0\\
s &0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0\\
z &1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0\\
c &1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0\\
n &1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0\\
S &0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0\\
Z &1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0\\
C &0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0\\
j &1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0\\
T &1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0\\
5 &0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0\\
k &0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0\\
g &1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0\\
x &1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0\\
N &1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0\\
q &0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0\\
G &1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0\\
X &1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0\\
7 &0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0\\
h &1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0\\
l &1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0\\
L &1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0\\
w &1 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0\\
y &1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0\\
r &1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1\\
! &1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0\\
V &1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\\\hline
\end{tabular}
\caption{Binarized ASJP alphabet used in our experiments. Each column corresponds to
the following features: Voiced, Labial, Dental, Alveolar, Palatal/Post-alveolar, Velar, Uvular,
Glottal, Stop, Fricative, Affricate, Nasal, Click, Approximant, Lateral, and Rhotic.}
\label{tab:binary}
\end{table}
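The vectorization in table \ref{tab:binary} can be sketched as follows for the three phonemes needed to encode ``fat'' (with its vowel collapsed to \emph{V}); the feature dictionary shown is an excerpt of the full 35-row table:

```python
# Feature order follows the table caption: Voiced, Labial, Dental, Alveolar,
# Palatal/Post-alveolar, Velar, Uvular, Glottal, Stop, Fricative, Affricate,
# Nasal, Click, Approximant, Lateral, Rhotic.
FEATURES = ["voiced", "labial", "dental", "alveolar", "palatal", "velar",
            "uvular", "glottal", "stop", "fricative", "affricate", "nasal",
            "click", "approximant", "lateral", "rhotic"]

# Excerpt of the binarized ASJP alphabet (the full table has 35 symbols).
PHONEME_FEATURES = {
    "f": {"labial", "dental", "fricative"},   # voiceless labiodental fricative
    "t": {"alveolar", "stop"},                # voiceless alveolar stop
    "V": {"voiced"},                          # all vowels collapsed to V
}

def vectorize(word):
    """Map an ASJP string to a len(word) x 16 binary matrix."""
    return [[1 if feat in PHONEME_FEATURES[ph] else 0 for feat in FEATURES]
            for ph in word]

matrix = vectorize("fVt")  # "fat" with its vowel collapsed to V: a 3 x 16 matrix
```

Each word thus becomes a small binary ``image'' over which two-dimensional convolutions can be applied.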
\subsection{ConvNet Models}
In this subsection, we describe the ConvNet models used in our experiments.
\begin{figure}
\centering
\includegraphics[scale=0.5]{siamese}
\caption{Siamese network with fully connected layer. The weights are shared between the two
convolutional networks.}
\label{fig:siam}
\end{figure}
\textbf{Siamese ConvNet} A Siamese
network takes a pair of inputs and minimizes the distance between their
output representations. Each branch of the Siamese network is composed of a convolutional
network. The Euclidean distance $D$ between the representations of the two branches is then used to
train a contrastive loss function $y D + (1-y) \max\{0, m-D\}$, where $m$ is the margin and $y$ is
the true label. We only describe this architecture here since it forms the basis for the rest of our
experiments with Siamese architectures.\footnote{Its results were only slightly better than those of
a majority-class classifier and are not reported in this article.}
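A minimal per-pair sketch of this loss (batching, and the squared variant sometimes used in the literature, are omitted):

```python
def contrastive_loss(d, y, margin=1.0):
    """y = 1 for cognate pairs, 0 for non-cognates; d is the Euclidean
    distance between the two branch representations."""
    return y * d + (1 - y) * max(0.0, margin - d)

loss_cognate = contrastive_loss(0.8, 1)  # → 0.8: cognates are penalized for being far apart
loss_near = contrastive_loss(0.2, 0)     # → 0.8: non-cognates are penalized inside the margin
loss_far = contrastive_loss(1.5, 0)      # → 0.0: non-cognates beyond the margin cost nothing
```

Minimizing this loss pulls cognate pairs together and pushes non-cognate pairs apart, up to the margin $m$.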
\textbf{Manhattan Siamese ConvNet} The second ConvNet is also a Siamese network, where the
Euclidean distance is replaced by an element-wise absolute-difference layer followed by a fully
connected layer (cf. figure \ref{fig:siam}). To the best of our knowledge,
only \newcite{Zagoruyko_2015_CVPR} added two fully connected layers to
the concatenated outputs of a Siamese network and trained a system that predicts whether two image
patches belong
to the same image or to different images. We refer to this architecture as the Manhattan Siamese
ConvNet because the difference layer resembles the Manhattan distance.
\textbf{2-channel ConvNet} Until now, each word has been treated as a separate
input. \newcite{Zagoruyko_2015_CVPR} introduced a 2-channel architecture that treats a pair of
image patches as a single 2-channel image; the same idea can be applied to word pairs. The
2-channel ConvNet has two convolutional layers, a max-pooling layer, and a fully
connected layer with 8 units.
The number of feature maps in each convolutional layer is fixed at $10$ with a kernel size of
$2\times 3$. The max-pooling layer halves the output of the previous convolutional layer. We
also inserted a dropout layer with $0.5$ probability \cite{srivastava2014dropout} after the
fully connected layer to
avoid over-fitting. The convolutional layers were trained with ReLU non-linearity.
We zero-padded each word to a length of $10$ so that the filters apply uniformly across all words.
We used the Adadelta optimizer \cite{zeiler2012adadelta} with a learning
rate of $1.0$, $\rho = 0.95$, and $\epsilon = 10^{-6}$. We fixed the mini-batch size to
$128$ in all our experiments. We experimented with different batch sizes ($[32, 64, 128, 256]$) and
did not observe any significant deviation in the validation loss. Both the Manhattan and 2-channel
ConvNets were trained using the log-loss function. Both our architectures are relatively
shallow (3 layers) compared to the text classification architecture of \newcite{NIPS2015_5782}. We
trained all our networks using Keras \cite{chollet2015keras} and Theano \cite{bergstra2010theano}.
\section{Comparison methods}
We compare the ConvNet architectures with SVM classifiers trained with different string
similarities as features.
\textbf{Other sound classes/alphabets} Apart from the ASJP alphabet, there are two other alphabets
designed by historical linguists for the purpose of modeling sound change. As
mentioned before, the main idea behind the design of sound classes is to discourage transitions
between particular classes of sounds but allow transitions within a sound class.
\newcite{dolgopolsky1986probabilistic} proposed a ten-class system based on empirical data
from $140$ languages. The SCA alphabet \cite{list2012sca} has a size of $25$; it addresses
some issues with the ASJP alphabet (such as the lack of tones) and extends Dolgopolsky's sound
classes based on evidence from a larger number of languages.
\textbf{Orthographic measures as features} We converted all the datasets into all the three sound
classes and computed the following string similarity scores:
\begin{compactitem}
\item Edit distance.
\item Common number of bigrams.
\item Length of the longest common subsequence.
\item Length of longest common prefix.
\item Common number of trigrams.
\item Global alignment score based on the Needleman-Wunsch algorithm \cite{needleman1970general}.
\item Local alignment score based on Smith-Waterman algorithm \cite{smith1981identification}.
\item Semi-global alignment score is a compromise between global and local alignments
\cite{durbin1998biological}.\footnote{The global, local, and alignment scores were computed using
LingPy library \cite{list-moran:2013:SystemDemo}.}
\item Common number of skipped bigrams (XDICE).
\item A positional extension of XDICE known as XXDICE \cite{brew1996word}.
\end{compactitem}
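A few of the measures above can be sketched in plain Python as follows (the alignment-based scores, which we computed with the LingPy library, are omitted from this sketch):

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution / match
        prev = cur
    return prev[-1]

def common_bigrams(a, b):
    """Number of bigram types shared by the two words."""
    grams = lambda w: {w[i:i + 2] for i in range(len(w) - 1)}
    return len(grams(a) & grams(b))

def lcp_length(a, b):
    """Length of the longest common prefix."""
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k
```

Each measure is computed once per sound-class transcription of a word pair, and the resulting scores form the feature vector fed to the SVM.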
\textbf{Pointwise Mutual Information (PMI)} We also computed a PMI score for each pair of
ASJP-transcribed words using the PMI scoring matrix developed by \newcite{jager2013phylogenetic}.
This system is referred to as the PMI system.
We included the length of each word and the absolute difference in length between the two words as
features for both the orthographic and PMI systems. The sound-class orthographic system
attempts to combine the previous cognate identification systems developed by
\cite{inkpen2005automatic,hauer-kondrak:2011:IJCNLP-2011} with the insights from applying
string similarities to sound classes for language comparison \cite{kessler2007word}.
\section{Datasets}
In this section, we describe the datasets used in our experiments.
\textbf{IELex database} The Indo-European Lexical database was created by
\newcite{dyen1992indoeuropean} and is curated by Michael Dunn.\footnote{\url{ielex.mpi.nl}} The
transcription in the IELex database is not uniformly IPA and retains many forms transcribed in
the Romanized IPA format of \newcite{dyen1992indoeuropean}. We cleaned the IELex database of any
non-IPA-like transcriptions and converted part of the database into ASJP
format.
\textbf{Austronesian vocabulary database} The Austronesian Vocabulary
Database \cite{greenhill2009austronesian} has word lists for 210 Swadesh concepts and 378
languages.\footnote{\url{http://language.psy.auckland.ac.nz/austronesian/}} The database
does not have transcriptions in a uniform IPA format. We
removed all symbols that do not appear in the standard IPA and converted the lexical items to ASJP
format.\footnote{For computational reasons, we work with a subset of 100 languages.}
\begin{table}[!ht]
\footnotesize
\centering
\begin{tabular}{p{1.6cm}p{1.cm}p{1.cm}p{1.cm}p{1.2cm}}
\hline
Family & Concepts & Languages & Training & Testing\\\hline
Austronesian & $210$ & $100$ & $334807$ & $140697$ \\
Mayan & $100$ & $30$ & $28222$ & $12344$\\
Indo-European & $206$ & $50$ & $117740$ & $49205$\\
Mixed dataset & -- & -- & $176889$ & -- \\
\hline
\end{tabular}
\caption{The number of languages, concepts, training, and test examples in our datasets. We do not
test on the mixed database and only use it for training purpose.}
\label{tab:data}
\end{table}
\textbf{Short word lists with cognacy judgments} \newcite{wichmann2013languages} and
\newcite{List2014d} compiled cognacy wordlists for subsets of families from
various scholarly sources such as comparative handbooks and historical linguistics' articles.
The details of this compilation are given below. For each dataset, we give the number of
languages/the number of concepts in parentheses. This dataset is henceforth referred to as the
``Mixed dataset''.
\begin{itemize}
\item \newcite{wichmann2013languages}: Afrasian (21/40), Kadai (12/40), Kamasau (8/36),
Lolo-Burmese (15/40), Mayan (30/100), Miao-Yao (6/36), Mixe-Zoque (10/100), Mon-Khmer (16/100).
\item \newcite{List2014d}: Bai dialects (9/110), Chinese dialects (18/180), Huon (14/84), Japanese
(10/200), ObUgrian (21/110; Hungarian excluded from Ugric sub-family), Tujia (5/107; Sino-Tibetan).
\end{itemize}
We performed two experiments with these datasets. In the first experiment, we randomly selected
70\% of the concepts from the IELex, ABVD, and Mayan datasets for training and used the remaining
30\% of concepts for testing. The motivation behind this experiment is to test whether ConvNets
can learn phonetic feature patterns across concepts. In the second experiment, we trained on the Mixed
phonetic feature patterns across concepts. In the second experiment, we trained on the Mixed
dataset but tested on the Indo-European and Austronesian datasets. The motivation behind this
experiment is to test if ConvNets can learn general patterns of sound change across language
families. The number of training and testing examples in each dataset is given in table
\ref{tab:data}.
\begin{table*}[!t]
\centering
\begin{tabular}{lccccc}\hline
Language family & Orthographic & PMI & Manhattan ConvNet & 2-Channel ConvNet
\\\hline
Austronesian & $77.92\%$ & $78\%$ & $79.04\%$ & $76.1\%$ \\
Indo-European & $80\%$ & $78.58\%$ & $83.43\%$ & $81.7\%$ \\
Mayan & $83.66\%$ & $85.25\%$ & $87.1\%$ & $82.1\%$ \\\hline
\hline
\multirow{3}{*}{Austronesian} & $0.833$ & $0.836$ & $0.861$ & $0.830$\\
& $0.675$ & $0.665$ & $0.576$ & $0.595$\\
& $0.783$ & $0.782$ & $0.776$ & $0.760$ \\\hline
\multirow{3}{*}{Indo-European} & $0.863$ & $0.854$ & $0.894$ & $0.883$ \\
& $0.628$ & $0.598$ & $0.618$ & $0.585$\\
& $0.808$ & $0.794$ & $0.830$ & $0.813$\\\hline
\multirow{3}{*}{Mayan} & $0.866$ & $0.885$ &$0.888$ & $0.865$\\
&$0.791$ & $0.795$&$0.756$ & $0.734$\\
& $0.84$& $0.853$ & $0.842$ & $0.819$\\\hline
\hline
Austronesian & $0.749$ & $0.74$ & $0.683$ & $0.643$ \\
Indo-European & $0.729$ & $0.678$ & $0.681$ & $0.64$ \\
Mayan & $0.88$ & $0.892$ & $0.871$ & $0.805$ \\\hline
\hline
\end{tabular}
\caption{Each system is trained on cognate and non-cognate pairs on 145 concepts in Indo-European
and Austronesian families; and tested on the rest of the concepts. For Mayan family, the
number of training concepts is 70 and the number of concepts in testing data is 30. For each
family, numbers correspond to the following metrics: Accuracies, F-scores
(negative, positive, combined), Average precision score.}
\label{tab:scores}
\end{table*}
\section{Results}
In this section, we report the results of our cross-concepts and cross-family experiments.
\textbf{SVM training and evaluation metrics} We used a linear kernel and optimized the SVM
hyperparameter ($C$) through ten-fold cross-validation and grid search on the training data. We
report accuracies, class-wise F-scores (positive and negative), combined F-score, and average
precision score for each system on the cross-concept datasets in table \ref{tab:scores}. The
average precision score corresponds to the area under the precision-recall curve and is an
indicator of the robustness of the model to classification thresholds.
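For concreteness, the class-wise and combined F-scores can be computed as below; the support-weighted averaging used for the combined score is our assumption about how the combination is done:

```python
def f_scores(y_true, y_pred):
    """Class-wise F-scores and a support-weighted combined F-score
    for binary labels (1 = cognate, 0 = non-cognate)."""
    scores, combined = {}, 0.0
    for cls in (0, 1):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[cls] = f1
        # Weight each class by its share of the true labels.
        combined += f1 * sum(t == cls for t in y_true) / len(y_true)
    return scores, combined
```

Reporting both class-wise scores matters here because the non-cognate class heavily outnumbers the cognate class in all three families.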
\subsection{Cross-Concept experiments}
\textbf{Effect of size and width of fully connected layers} We observed that increasing the depth
of the fully connected part does not affect the performance of the ConvNet models, while increasing
its width hurts it: increasing the number of neurons from $8$ to $64$ in multiples of two degraded
the performance of the system. We therefore used a fully connected layer with 8 units in all our
experiments.
\textbf{Effect of filter size} \newcite{zhang2015sensitivity} observed that the size of the filter
patch can affect the performance of the system. We experimented with different filter sizes of
dimensions $m \times k$ where, $m \in [1,2]$ and $k \in [1,3]$. We did not find any change in the
performance in concepts experiments. We report the results for $m=2, k=3$ filter size for
cross-concept experiments.
\subsection{Cross-Family experiments}
\textbf{Effect of filter size} Unlike in the previous experiment, the filter size has an effect on
the performance of the ConvNet system. We observed that the best results were obtained with a
filter size of $1\times 3$.
We did not include the results of the 2-channel ConvNet because of its poor performance on the
task of cross-family cognate identification. The results of our experiments are given in table
\ref{tab:lngscores}.
\begin{table}[ht!]
\small
\centering
\begin{tabular}{lccp{1.7cm}}\hline
Dataset & Orthographic & PMI & Manhattan ConvNet\\\hline
Austronesian & $0.766$ & $0.78$ & $0.746$ \\
Indo-European & $0.815$ & $0.804$ & $0.804$ \\\hline
\hline
\multirow{3}{*}{Austronesian} & $0.821$ & $0.837$ & $0.820$\\
& $0.661$ & $0.656$ & $0.570$\\
& $0.759$ & $0.768$ & $0.728$ \\\hline
\multirow{3}{*}{Indo-European} & $0.876$ & $0.873$ & $0.871$ \\
& $0.631$ & $0.569$ & $0.590$ \\
& $0.806$ & $0.786$ & $0.791$ \\\hline\hline
Austronesian & $0.771$ & $0.795$ & $0.707$ \\
Indo-European & $0.731$ & $0.692$ & $0.691$ \\\hline
\end{tabular}
\caption{Testing accuracies, class-wise and combined F-scores, average precision score of
each system on Indo-European and Austronesian families.}
\label{tab:lngscores}
\end{table}
\section{Discussion}
The Manhattan ConvNet competes with the PMI and orthographic models on the cross-concept cognate
identification task. It performs better than the PMI and orthographic models in
terms of overall accuracy on all three language families. In terms of averaged F-scores, the
Manhattan ConvNet performs slightly better than the orthographic model and performs worse than the
other models only on the Austronesian language family.
The Manhattan ConvNet shows mixed performance on the task of cross-family cognate identification.
It does not emerge as the best system across all evaluation metrics in any
single language family. The ConvNet performs better than PMI but not as well as the orthographic
measures on the Indo-European language family. In terms of accuracy, the ConvNet comes closer to
the PMI system than to the orthographic system.
These experiments suggest that ConvNets can compete with a classifier trained on
different orthographic measures over different sound classes. ConvNets can also compete with a
data-driven method like PMI, which was trained in an EM-like fashion on millions of word pairs.
ConvNets certainly perform better than a classifier trained on word similarity scores in the
cross-concept experiments.
The orthographic and PMI systems show similar performance on the Austronesian cross-concept
task, where ConvNets do not perform as well; the reason could be the inconsistent transcriptions
in the database.
\section{Conclusion}
In this article, we explored the use of phonetic feature convolutional networks for
the task of pairwise cognate identification. Our experiments with convolutional networks
show that phonetic features can be directly used for classifying if two words are related or not.
In the future, we intend to work directly with speech recordings and include language relatedness
information into ConvNets to improve the performance. We are currently working towards building a
larger database of word lists in IPA transcription.
\section*{Acknowledgments}
I thank Aparna Subhakari, Vijayaditya Peddinti, Johann-Mattis List, Johannes Dellert, Armin Buch,
Çağrı Çöltekin, Gerhard Jäger, and Daniël de Kok for all the useful comments. The data for the
experiments was processed by Johann-Mattis List and Pavel Sofroniev.
\section{Introduction}
Recognizing emotions from speech is useful in many areas such as psychology, medicine, and the design of human-computer interaction systems \cite{el2011survey}. The fact that speech is easy to collect, unlike physiological signals, has made it a popular candidate for building models for such tasks. Typically, designing a classification system entails extracting feature vectors $\mathbf{x} \in \mathbb{R}^d$ from the speech signal that could carry information about the emotional state of the speaker. A classifier is then trained to estimate the conditional probability $p(y|\mathbf{x})$ ($y \in$ the set of categorical emotion labels) using the ground-truth annotations. Feature vectors are usually high dimensional, which leads the joint distribution $p(\mathbf{x},y)$ to lie on complex manifolds. Understanding their distribution could be the key to building robust classifiers.
In the past, researchers have exploited the generative ability of models for tasks such as building emotion classifiers \cite{chandrakala2009combination}. In this paper, we focus on analyzing one specific category of generative models, Generative Adversarial Networks (GANs), when applied to speech-based emotion recognition. Using multiple GAN variants, we present insights on model training as well as an analysis of the quality of the generated feature vectors. We also propose multiple metrics to evaluate the quality of generated samples and discuss their interpretations. With these contributions, we aim to advance the application of GANs to speech emotion recognition for tasks such as training emotion classifiers and synthetic feature generation.
Deep generative models such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} and variational auto-encoders (VAEs) \cite{kingma2013auto} are popular generative models, owing to their ability to capture the complexities of real-world data distributions and generate realistic examples from them. GANs have been successful in tasks such as image generation \cite{radford2015unsupervised}, style transfer \cite{zhang2017style}, and speech enhancement \cite{pascual2017segan}. The objective of GAN training is to obtain a generator which, when fed samples $\mathbf{z}$ from a lower-dimensional simple distribution $p_{\mathbf{z}}$, can generate higher-dimensional realistic-looking data points. VAEs are probabilistic graphical models implemented using deep networks to learn an efficient lower-dimensional encoding of higher-dimensional feature vectors. Synthetic feature vectors can then be generated by passing samples from the lower-dimensional space through the VAE decoder. VAEs can be used for data generation as well as for matching the lower-dimensional latent space to a desired distribution, typically by minimizing the distance of the latent space from a pre-defined prior such as a Gaussian distribution.
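Concretely, the generator $G$ and discriminator $D$ are trained with the standard minimax objective of \cite{goodfellow2014generative},
\[
\min_G \max_D \;\; \mathbb{E}_{\mathbf{x} \sim p_{\text{data}}}\left[\log D(\mathbf{x})\right] + \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\left[\log\left(1 - D(G(\mathbf{z}))\right)\right],
\]
so that at convergence $G$ maps samples $\mathbf{z} \sim p_{\mathbf{z}}$ to points that $D$ cannot distinguish from real feature vectors.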
Applications of VAE include blurry image generation \cite{arjovsky2017wasserstein}, anomaly detection \cite{an2015variational} and text generation \cite{semeniuta2017hybrid}.
Makhzani et al.\ \cite{makhzani2015adversarial} propose using GAN-based adversarial losses to match the distribution of the latent space to the prior distribution, which further allows matching the latent space to distributions more complex than a Gaussian. In \cite{sahu2018adversarial}, we enforce the latent space to resemble a mixture of four Gaussians, each mixture component spanning the latent codes obtained from samples belonging to one of the four emotion classes: angry, sad, neutral, and happy. Synthetic samples belonging to a particular class can then be generated by sampling from the corresponding mixture component and passing the sample through the decoder of a trained adversarial auto-encoder. Note that we imposed the condition that the generated data should lie in four clusters by choosing a prior with four mixture components. Another way to enforce the clustering would be to feed the generator an additional label vector along with points sampled from the prior and then maximize the mutual dependence between the generated synthetic samples and the input label vectors, as done in the infoGAN \cite{chen2016infogan} framework. We borrow ideas from these GAN- and auto-encoder-based frameworks to build generative models that can synthesize feature vectors when fed samples from the prior distribution.
In Section~\ref{sec:related}, we review previous attempts to benefit from generative models when building emotion classification systems. In Section~\ref{sec:background}, we give a brief overview of the GAN frameworks that have been used by the vision community for synthetic image generation. Section~\ref{sec:exp} explains our experimental setup: we describe the datasets used in our experiments, the auto-encoder and GAN-based architectures and their training methodology for synthetic data generation, and the proposed metrics. In Section~\ref{sec:resss}, we discuss the results based on the proposed metrics and perform a qualitative analysis comparing the real and synthetic data distributions. Following a speaker-independent cross-validation study, we perform a cross-corpus experiment in which the two corpora differ in speakers, recording conditions, and annotators; our aim was to observe the transferability of synthetic samples generated by a model trained on an external corpus to another corpus of interest. Finally, we summarize our findings and mention some future avenues worth exploring in Section~\ref{sec:conclusion}.
\section{Related work}
\label{sec:related}
Speech emotion recognition is a widely researched topic, and researchers have leveraged the ability of generative models to learn rich, informative representations to build discriminative classifiers such as Gaussian mixture models (GMMs) and hidden Markov models (HMMs) \cite{el2011survey}. Chandrakala and Sekhar \cite{chandrakala2009combination} used GMMs to model the distribution of feature sets extracted from utterances. They tried two different set-ups: (i) each training sample is represented as a time-series of 34 dimensional feature vectors which are used to fit a GMM, resulting in M different GMMs for M training samples. Once the GMMs are trained, an M-dimensional score vector is computed for each utterance, where each entry is the log-likelihood score obtained when the utterance is applied to one of the M GMMs. The score vectors are then used to train a support vector machine (SVM) which is evaluated on test samples. This approach is a simple way to obtain a fixed length representation for variable length utterances. Moreover, similar utterances generate similar log-likelihood scores, resulting in similar score vectors being fed to the SVM. (ii) Since the previous model fails to capture the temporal dynamics of the utterance, the authors also tried a segment based approach. Each utterance was divided into a fixed number of segments, and the set of feature vectors belonging to each segment was modeled with a multivariate Gaussian distribution. The segment-wise feature vector is obtained by concatenating the entries of the mean vector and the covariance matrix of the multivariate Gaussian. The final feature vector for an utterance, used to train and evaluate the SVM classifier, was obtained by concatenating the segment-wise feature vectors. The authors showed that the segment based approach outperformed the score-vector based approach, validating the importance of modeling temporal dynamics.
Amer et al. \cite{amer2014emotion} proposed hybrid networks combining generative models, which draw rich representations of short term temporal dynamics, with discriminative models, which classify long range temporal dynamics using those representations. They experimented with restricted Boltzmann machines (RBMs) or conditional RBMs (CRBMs) as generative models, and an SVM or conditional random fields (CRFs) as discriminative models. They evaluated their models on three datasets and observed that the discriminative models trained with intermediate CRBM representations outperformed those trained on raw features. Latif et al. \cite{latif2017variational} leveraged the modeling capability of variational auto-encoders (VAEs) and conditional variational auto-encoders (CVAEs) to extract salient representations from log Mel filterbank coefficients for speech emotion recognition. VAEs are a class of auto-encoder based generative models that, like GANs, can generate synthetic data samples when provided with data points sampled from a simpler prior distribution $p_{\mathbf{z}}$. Along with minimizing the reconstruction error, the encoder output is made to resemble $p_{\mathbf{z}}$ by including a term in the loss function that minimizes the Kullback-Leibler divergence between them. In the case of a CVAE, label information is provided during training and the latent representations are learned conditioned on both the input data and the labels.
GAN based models have also been utilized to obtain meaningful representations of raw feature vectors for speech emotion recognition and related tasks. Since the discriminators in GANs are trained to discriminate between real and fake samples, one can use their intermediate layer representations obtained from raw feature vectors to train a classifier \cite{radford2015unsupervised}. Towards that end, Deng et al. \cite{deng2017speech} trained a GAN whose generator outputs synthetic feature vectors that mimic the distribution of real acoustic feature vectors extracted from speech waveforms using the openSMILE toolkit \cite{eyben2010opensmile}. The intermediate layers of the discriminator were used to extract non-linear representations of the acoustic feature vectors, which were then used to train an SVM to perform a 4-way classification of autism spectrum disorders. They showed an improvement in performance on their test set when they used the intermediate layer representations rather than the raw acoustic features, indicating that the discriminator of a well trained GAN can extract meaningful representations from raw feature vectors. Chang and Scherer \cite{chang2017learning} followed a similar approach to obtain meaningful representations from spectrograms for valence level classification. They implemented a deep convolutional GAN architecture and used the activations from an intermediate layer of the discriminator for the final classification task, reporting better performance over a baseline model performing direct classification on spectrograms. Eskimez et al. \cite{eskimez2018unsupervised} used latent representations obtained from auto-encoder based architectures and compared their performance for speech emotion recognition.
Along with a denoising auto-encoder and a variational auto-encoder, they implemented an adversarial auto-encoder \cite{makhzani2015adversarial}, in which the output of the encoder is made to resemble a prior distribution using adversarial loss terms. They also implemented an adversarial variational Bayes network, which combines the advantages of variational auto-encoders and GANs, and reported that its representations performed the best.
Besides these, there have also been a few other works utilizing GANs for speech emotion recognition that are not concerned with extracting salient representations from raw feature vectors. Latif et al. \cite{latif2018adversarial} utilized the speech enhancement capability of GANs to build more robust speech emotion recognition systems. They generated adversarial examples by adding noise (cafe, meeting or station noise) to actual training data, which in most cases could not be distinguished from real examples by human listeners. However, a classifier trained on real data could not classify the emotional class of the adversarial samples correctly. They trained a GAN to generate cleaner utterances from the corrupted utterances and showed that the classifier made fewer misclassifications when trained and evaluated on the cleaner data obtained from the GAN than on the perturbed data. Hence, the inclusion of a GAN in their pipeline led to a more robust emotion recognition model. Han et al. \cite{han2018towards} proposed a conditional GAN based affect recognition framework in which the machine predicted labels were made to mimic the distribution of real labels. Their framework consisted of a neural network classifier $NN1$ which, given the acoustic feature vector, outputs the emotional class label. Another neural network $NN2$ was trained to distinguish between the ground truth labels and the output of $NN1$. Hence, $NN1$ can be seen as a generator producing ``fake'' labels from the feature vectors, while $NN2$ acts as a discriminator trying to differentiate between real and fake labels conditioned on the acoustic feature vector. The parameters of $NN1$ are updated based on a loss function combining the supervised cross-entropy term with a GAN based term that tries to confuse the discriminator $NN2$ between predicted and ground truth labels.
Note that both of these components work towards making the predicted labels resemble the ground truth labels. As in any GAN framework, the discriminator $NN2$ is updated so that it gets better at distinguishing between predicted and ground truth labels. The authors report that conditional adversarial training improves speech emotion recognition, showing that GANs can be used to learn the label space distribution.
None of these works, however, have studied the generative capability of these models in greater detail. We have previously made attempts to investigate the ability of GAN based models to generate realistic feature vectors for speech emotion recognition \cite{sahu2018adversarial, sahu2018enhancing}. In this paper we explore this aspect in more detail by training three GAN based models to generate synthetic feature vectors mimicking the distribution of real acoustic feature vectors. We define metrics to evaluate and compare the quality of the synthetic feature vectors generated by the three models, provide visualizations comparing the distributions of real and synthetic data, and discuss the applicability of these synthetically generated feature vectors for speech emotion recognition in low resource conditions. Note that since we are learning the distribution of feature vectors and not of raw speech, the generated data cannot be used for qualitative listening evaluation.
\section{Background on adversarial training}
\label{sec:background}
The purpose of a generative adversarial network (GAN) \cite{goodfellow2014generative} is to learn a complex distribution from a simpler one. Once trained, the generator can be used to produce points from the complex distribution when fed with points from the simpler distribution. In this work, we attempt to train generators that can produce synthetic high dimensional feature vectors (the 1582 dimensional vectors used for speech emotion recognition) from points belonging to simpler distributions. A GAN consists of two modules, a generator $G$ and a discriminator $D$, each with a specific function. The purpose of $G$ is to generate realistic data-points $G(\mathbf{z})$ when fed with samples $\mathbf{z}$ from a simpler distribution $p_{\mathbf{z}}$, generally chosen to be a Gaussian or a uniform distribution. Simultaneously, the discriminator is trained to classify between the generated data-points $G(\mathbf{z})$ and real data-points $\mathbf{x}$. The final objective is to obtain a generator that mimics the real data distribution so well that the discriminator is unable to differentiate between generated and real data. The objective used to update the parameters of $D$ and $G$ is given by:
\begin{equation}\label{eq:gan_loss}
\begin{aligned}
\min_{G} \max_{D} V_\text{GAN}(D,G) = \mathbb{E}_{\mathbf{x} \sim p_{\text{data}}} [\log D(\mathbf{x})] + \\
\mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}} [\log (1-D(G(\mathbf{z})))]
\end{aligned}
\end{equation}
Note that in the equation above, $D(\mathbf{x})$ and $D(G(\mathbf{z}))$ denote the probabilities that $\mathbf{x}$ and $G(\mathbf{z})$, respectively, are recognized as real samples by the discriminator. The parameters of $G$, on the other hand, should be updated such that $G$ fools the discriminator into thinking that $G(\mathbf{z})$ comes from the real data distribution. Hence, minimizing the objective with respect to the parameters of $G$ pushes the value of $D(G(\mathbf{z}))$ closer to 1.
In practice, the parameters of $D$ and $G$ are updated iteratively, with the parameters of one module kept frozen while the other is updated. In each iteration, the number of updates to $D$ and $G$ can differ.
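As a toy numerical illustration of Equation~\ref{eq:gan_loss} (a sketch, not the implementation used in this paper), the value function $V_\text{GAN}$ can be evaluated from the discriminator's output probabilities; the discriminator ascends $V$, so its loss is $-V$, while the generator minimizes the $\log(1-D(G(\mathbf{z})))$ term:

```python
import numpy as np

# Toy sketch of the value function in Eq. (1). The arrays of probabilities
# below are illustrative stand-ins for real discriminator outputs.
def gan_losses(d_real, d_fake):
    """d_real = D(x) on real samples, d_fake = D(G(z)) on generated ones."""
    v = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
    return -v, np.mean(np.log(1.0 - d_fake))  # (D loss, G loss)

# A confident discriminator incurs a lower loss than one fooled by G;
# at the fooled equilibrium D(.) = 0.5 everywhere and the D loss is 2 log 2.
d_sharp, _ = gan_losses(np.array([0.9, 0.95]), np.array([0.05, 0.1]))
d_fooled, _ = gan_losses(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

At convergence the discriminator loss approaches $2\log 2$, the value obtained when $D$ outputs $0.5$ for every input.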
In our experiments we use variations of the adversarial auto-encoder network proposed by Makhzani et al. \cite{makhzani2015adversarial} for image classification and generation; we previously explored its utility mainly for feature vector compression in \cite{sahu2018adversarial}. In an adversarial auto-encoder, the lower dimensional output of the bottleneck layer of an auto-encoder is mapped to a distribution $p_{\mathbf{z}}$. For an $N$-class classification problem, we can take $p_{\mathbf{z}}$ to be a mixture of $N$ Gaussians, with each mixture component corresponding to a particular class. Once trained, points can be sampled from a particular mixture component and passed through the decoder to generate a synthetic data-point belonging to the corresponding class.
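A class-conditional mixture prior of this kind can be sketched as follows (an illustrative sketch: the component means, scale 3.0 and standard deviation 0.5 are hypothetical choices, not the exact parameterization used in our models):

```python
import numpy as np

# Hedged sketch of p_z: a mixture of four 2D Gaussians with orthogonal
# (axis-aligned) means, one component per emotion class.
MEANS = 3.0 * np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
STD = 0.5

def sample_prior(labels, rng):
    """Draw one z per label from that label's mixture component."""
    labels = np.asarray(labels)
    return MEANS[labels] + STD * rng.standard_normal((labels.size, 2))

rng = np.random.default_rng(0)
z = sample_prior([0, 1, 2, 3, 0], rng)  # z.shape == (5, 2)
```

Feeding such class-specific samples through a trained decoder yields synthetic feature vectors with the corresponding class label.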
Another way to enforce clustering in the generated data, instead of specifying that $p_{\mathbf{z}}$ have $N$ components, is to use the infoGAN framework proposed by Chen et al. \cite{chen2016infogan}. In an infoGAN, the generator is fed a conditional label vector $\mathbf{c} \sim p_{\mathbf{c}}$ along with $\mathbf{z} \sim p_{\mathbf{z}}$ to generate synthetic data-points $G(\mathbf{z},\mathbf{c})$. If we wish the generated data to have $N$ clusters, then $\mathbf{c}$ can be an $N$ dimensional one-hot label vector. Along with the vanilla GAN loss term, an infoGAN additionally tries to maximize the mutual information between the generated sample distribution and the label distribution, denoted by $I(\mathbf{c},G(\mathbf{z},\mathbf{c}))$ in the equation below.
\begin{equation}\label{eq:infogan_loss}
\min_{G} \max_{D} V_\text{infoGAN}(D,G) = V_\text{GAN}(D,G) - \lambda I(\mathbf{c},G(\mathbf{z},\mathbf{c}))
\end{equation}
Note that computing $I(\mathbf{c},G(\mathbf{z},\mathbf{c}))$ requires the posterior $p(\mathbf{c}|G(\mathbf{z},\mathbf{c}))$, which can be difficult to compute. Chen et al. proposed a workaround by adding an auxiliary layer to the discriminator which classifies the generated synthetic vector; in other words, it estimates the label $\mathbf{c'}$ given $G(\mathbf{z},\mathbf{c})$. The auxiliary layer is updated so that $\mathbf{c'}$ is as close to $\mathbf{c}$ as possible. Hence, they approximated the posterior $p(\mathbf{c}|G(\mathbf{z},\mathbf{c}))$ with the auxiliary layer output $q(\mathbf{c}|G(\mathbf{z},\mathbf{c}))$; more theoretical details can be found in \cite{chen2016infogan}. Denoting the resulting approximation of $I(\mathbf{c},G(\mathbf{z},\mathbf{c}))$ by $Q(\mathbf{c},G(\mathbf{z},\mathbf{c}))$, the loss function to optimize becomes
\begin{equation}\label{eq:infogan_app_loss}
\min_{G} \max_{D} V_\text{infoGAN}(D,G) = V_\text{GAN}(D,G) - \lambda Q(\mathbf{c},G(\mathbf{z},\mathbf{c}))
\end{equation}
This is the objective we used in our models utilizing the infoGAN framework. In our experiments, the value of $\lambda$ was kept at 1.
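In practice, maximizing $Q(\mathbf{c},G(\mathbf{z},\mathbf{c}))$ amounts to minimizing the cross-entropy between the fed-in labels $\mathbf{c}$ and the auxiliary layer's softmax output, up to the constant entropy $H(\mathbf{c})$. A minimal Monte-Carlo sketch of this term (illustrative logits, not our actual network) is:

```python
import numpy as np

def info_term(c_onehot, q_logits):
    """Estimate E[log q(c | G(z, c))] from auxiliary-layer logits.

    Maximizing this quantity (i.e., minimizing the cross-entropy between
    c and the auxiliary softmax) maximizes the lower bound Q on I(c, G).
    """
    q_logits = q_logits - q_logits.max(axis=1, keepdims=True)  # stable softmax
    log_q = q_logits - np.log(np.exp(q_logits).sum(axis=1, keepdims=True))
    return np.mean(np.sum(c_onehot * log_q, axis=1))

c = np.eye(4)[[0, 1, 2, 3]]            # a toy batch, one sample per class
good = info_term(c, 10.0 * c)          # auxiliary layer recovers c: near 0
bad = info_term(c, np.zeros((4, 4)))   # uninformative layer: log(1/4)
```

An uninformative auxiliary layer gives $\log(1/4)$ for four classes, while a layer that recovers $\mathbf{c}$ pushes the term towards its maximum of 0.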
\subsection{InfoGAN vs vanilla GAN}
To visualize how data generation differs between a vanilla GAN and an infoGAN, we ran a simple experiment in which we trained the two GAN models (for an equal number of epochs) to learn a target probability density function (PDF) from a source PDF. Our source PDF was a standard Gaussian distribution, while the target PDF was a mixture of 4 Gaussians with orthogonal means. Once trained, we sampled points from the source PDF and fed them to the two GAN models. Figure~\ref{fig:info_vs_normal} shows the differences in the distribution of the generated data-points. We note that the data generated by the infoGAN has high inter-cluster separability, with no overlap between samples belonging to different clusters. However, the intra-cluster variability is low, as all the samples lie along a straight line. A vanilla GAN, on the other hand, does not exhibit these properties. Hence, the mutual information based loss term focuses more on inter-cluster separability than on intra-cluster variance. This is something to keep in mind, as we will return to it in later sections.
\begin{figure}[t]
\includegraphics[height=6cm, width=9cm]{graphics/info.png}
\caption{Comparison of data generated using a trained vanilla GAN (c) and infoGAN (d). The source PDF is a 2D standard Gaussian distribution, shown in (a), while the target PDF is a mixture of 4 Gaussians, shown in (b). Note that the four clusters are well separated in (d).}
\label{fig:info_vs_normal}
\end{figure}
We now discuss our experimental set up explaining the architectures of the models used and the training methodology in more detail.
\section{Experimental setup}
\label{sec:exp}
In this section we describe the datasets, the architectures and the training methodology of the three GAN based models we trained to generate synthetic samples. We only used data from four emotion classes, namely angry, sad, neutral and happy, to train the models. We used the openSMILE toolkit to extract the 1582 dimensional `emobase' feature set from the raw speech waveforms; this feature set consists of various functionals computed over spectral, prosody and energy based features, and similar features have previously been used for emotion classification \cite{el2011survey}. We then define the metrics used to compare the synthetic data generation capability of the three models. We perform an in-domain cross-validation analysis in which we train the GANs on samples from the training split and use the proposed metrics to compare the synthetic data distribution with the data distributions of the training and validation splits. This is followed by a cross-corpus analysis in which the synthetic data generated by a GAN trained on one corpus is compared with data belonging to a different corpus.
\subsection{Datasets}
We use the IEMOCAP and MSP-IMPROV datasets for our analysis; these are among the larger datasets used by the emotion recognition community \cite{lotfian2017building}. Another desirable property of IEMOCAP, which is used to train the GAN models, is its more balanced distribution of emotion labels compared to other datasets.
\subsubsection{IEMOCAP}
The Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset \cite{busso2008iemocap} consists of five sessions of dyadic affective interactions. In each session, two actors act out scenarios which are either scripted or improvised, and no two sessions share an actor. This enabled us to perform a five fold leave-one-session-out cross-validation analysis on IEMOCAP. The conversations have been segmented into utterances which are then labeled by three annotators with emotions such as happy, sad, angry, excitement and neutral. For our experiments, we only use utterances for which a majority vote could be obtained and assign that as the ground truth label. We used approximately 7 hours of data from the dataset, amounting to 5530 utterances: neutral (1708), angry (1103), sad (1083), and happy (1636).
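The majority-vote filtering described above can be sketched as follows (an illustrative sketch only; the label strings and tie-handling here stand in for the actual annotation processing):

```python
from collections import Counter

# Sketch of the majority-vote rule: an utterance keeps a ground-truth label
# only when its annotators agree on a single most frequent emotion; ties
# (no clear majority) are discarded.
def majority_label(annotations):
    """Return the majority label, or None when there is no clear majority."""
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie between the top labels: discard the utterance
    return counts[0][0]

kept = majority_label(["happy", "happy", "neutral"])   # "happy"
dropped = majority_label(["happy", "sad", "neutral"])  # None: no majority
```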
\subsubsection{MSP-IMPROV}
MSP-IMPROV \cite{busso2017msp} has actors participating in dyadic conversations across six sessions and, like IEMOCAP, the conversations have been segmented into utterances. Unlike IEMOCAP, it also includes a set of 20 pre-defined target sentences that are spoken with different emotions depending on the context of the conversation. There are 7798 utterances belonging to the same four emotion classes. The class distribution is unbalanced, with the number of utterances in the happy and neutral classes (2644 and 3477 respectively) more than three times that of the angry and sad classes (792 and 885 respectively). We used MSP-IMPROV as the test set in a cross-corpus study, with IEMOCAP as the training set.
\subsection{GAN architectures employed}
\label{sec:archi}
We used auto-encoder and GAN based models in which the bottleneck layer of the auto-encoder learns lower dimensional code vectors from the higher dimensional feature vectors. The lower dimensional encoding space is made to resemble a simple prior $p_{\mathbf{z}}$ using a GAN based training framework. Points from this lower dimensional subspace are then sampled and fed to the decoder of the auto-encoder to obtain synthetic feature vectors. The size of the bottleneck layer in our architectures was chosen so that we could obtain lower dimensional encodings without sacrificing much of the discriminability present in the actual higher dimensional feature vectors. To quantify this, we ran a cross-validation experiment on IEMOCAP for 4-way emotion classification, training an SVM on the raw features as well as on the lower dimensional encodings obtained by feeding the trained models with raw features, similar to the set-up described in \cite{sahu2018adversarial}. The drop in accuracy gives an idea of the amount of discriminability retained by the lower dimensional encodings. Using the architectures described in this paper, we noticed that the classification accuracy dropped by only a couple of percentage points when the lower dimensional code-vectors were used instead of the raw feature vectors, suggesting they indeed retain most of the discriminability. We employ three models, $M1$, $M2$ and $M3$, for the task of synthetic data generation from a simpler distribution $p_{\mathbf{z}}$. All of them consist of fully connected layers. We describe them in more detail below.
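The discriminability check above can be sketched at a small scale as follows; this is a hedged illustration only, with a nearest-centroid classifier standing in for the SVM we actually used and randomly generated, well-separated data standing in for the real features and encodings:

```python
import numpy as np

# Hedged sketch: compare classification accuracy on feature vectors.
# In the paper, the same comparison is run on raw 1582-d features vs.
# low-dimensional code vectors; here the data is purely synthetic.
def nearest_centroid_acc(train_x, train_y, test_x, test_y, n_classes=4):
    centroids = np.stack([train_x[train_y == k].mean(axis=0)
                          for k in range(n_classes)])
    dists = ((test_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((dists.argmin(axis=1) == test_y).mean())

rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 50)                     # 4 classes, 50 samples each
class_means = 3.0 * rng.standard_normal((4, 1582))  # separated class centers
x = class_means[y] + rng.standard_normal((200, 1582))
acc = nearest_centroid_acc(x[::2], y[::2], x[1::2], y[1::2])
```

Running the same evaluation on raw features and on code vectors, and comparing the two accuracies, quantifies how much discriminability the bottleneck retains.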
\begin{figure*}[t]
\includegraphics[height=5cm, width=18cm]{graphics/architecture.png}
\caption{Architectures for $M1$ (left), $M2$ (center) and $M3$ (right). Note that there are two discriminators in $M2$ and $M3$, one to learn the encoding space and one to generate data samples. While in $M1$ and $M2$ the encoding space is pre-defined to be a mixture of 4 maximally separated Gaussians, in $M3$ it is learned from the training data using a code generator block.}
\label{fig:AAEdgan}
\end{figure*}
\begin{table*}[t]
\caption{Architecture of various components of models $M1$, $M2$ and $M3$. Except the bottleneck layer and output layer in auto-encoders (linear activations), output layer of discriminators $D_1$ and $D_2$ (sigmoid activations) and output layer of code-generator in $M3$ (linear activation) all the other layers had ReLU activations.}
\begin{tabular}{l|l|l} \hline
\textbf{Component} & \textbf{Model} & \textbf{Architecture} \\ \hline
Auto-encoder & $M1$ and $M2$ & $1582\rightarrow{}1000\rightarrow{}500\rightarrow{}100\rightarrow{}\textbf{2}\rightarrow{}100\rightarrow{}500\rightarrow{}1000\rightarrow{}1582$ \\
(Enc/Dec with bottle-neck layer in bold) &$M3$&$1582\rightarrow{}1000\rightarrow{}700\rightarrow{}300\rightarrow{}\textbf{256}\rightarrow{}300\rightarrow{}700\rightarrow{}1000\rightarrow{}1582$\\\hline
Discriminator $D\_1$& $M1$ and $M2$ & $6\rightarrow{}1000\rightarrow{}500\rightarrow{}100\rightarrow{}1$\\
&$M3$ & $260\rightarrow{}1000\rightarrow{}500\rightarrow{}100\rightarrow{}1$\\\hline
$p_{\mathbf{z}}$& $M1$ and $M2$ & Mixture of four 2D Gaussians with orthogonal means\\
&$M3$&20 dimensional normal distribution \\\hline
Code generator (CG) & $M3$ & $24\rightarrow{}140\rightarrow{}256$\\\hline
Discriminator $D\_2$& $M2$ and $M3$ & $1586\rightarrow{}1000\rightarrow{}500\rightarrow{}100\rightarrow{}1$\\\hline
Auxiliary layer (AUX)&$M3$&$[1586\rightarrow{}1000\rightarrow{}500\rightarrow{}100]\rightarrow{}128\rightarrow{}4$\\
(Layers within brackets are shared with $D\_2$)&&\\\hline
\end{tabular}
\label{tab:archi_m}
\end{table*}
\begin{itemize}
\item $M1$: $M1$ is an adversarial auto-encoder with $p_{\mathbf{z}}$ a mixture of four 2D Gaussians with orthogonal means and equal mixture weights. This also meant the bottleneck layer could have just two neurons. We trained the model to match the distribution of the code vectors (the output of the bottleneck layer) to $p_{\mathbf{z}}$. While training, we used the label information in the form of one-hot vectors to match each emotion category to a particular mixture component. We match the distributions using a GAN framework with a discriminator $D\_1$ trying to distinguish between code-vectors and samples obtained from $p_{\mathbf{z}}$. In this framework, the encoder can be viewed as a generator which produces lower dimensional encodings when provided with real feature vectors. Once trained, the encodings match the distribution $p_{\mathbf{z}}$, with each emotion category mapped to one mixture component. We can also sample points from $p_{\mathbf{z}}$ and feed them to the decoder, thereby generating synthetic feature vectors.
\item $M2$: Note that the decoder in $M1$ does not receive any feedback from a GAN framework trying to match its output to real feature vectors. In model $M2$, we added a second discriminator $D\_2$ to the $M1$ architecture, which distinguishes between decoder samples and real feature vectors. The decoder parameters can now be updated using the information provided by $D\_2$. Label information was used to match the distribution of real feature vectors belonging to a specific emotion class to the distribution of synthetic vectors generated from the corresponding component of $p_{\mathbf{z}}$. In this case, the decoder acts as the generator, producing synthetic feature vectors when provided with samples from $p_{\mathbf{z}}$.
\item $M3$: Note that in both architectures described above, the clustering of the synthetic data into four emotion classes is ensured by specifying a $p_{\mathbf{z}}$ with four components with orthogonal means. In model $M3$ we instead enforce this clustering with an infoGAN framework, while the clustering of the code vectors into four classes is data-driven. Wang et al. \cite{wang2018learning} explored similar models for synthetic image generation. Since the code-space clustering is data-driven, we had to increase the dimension of the bottleneck layer to 256 neurons to retain the discriminability present in the raw features, as described before. $p_{\mathbf{z}}$ is modeled as a 20 dimensional normal distribution; when fed to a code generator network, its output spans a 256 dimensional space which is matched to the code-vector distribution. Once trained, points are sampled from $p_{\mathbf{z}}$ and provided as input to the code generator followed by the decoder network, generating synthetic feature vectors. The architecture is shown in Figure~\ref{fig:AAEdgan}. Note that since we are using an infoGAN framework to generate synthetic data, conditional information $\mathbf{c}$ is provided in the form of one-hot label vectors to the code generator along with points from $p_{\mathbf{z}}$. We take $\mathbf{c}$ to be sampled from a discrete uniform distribution, implying the synthetic feature vectors are equally likely to belong to any of the four classes. Discriminator $D\_2$ has an auxiliary layer predicting the class of the synthetically generated samples; this output is used to approximate the conditional distribution of the labels given the synthetic feature vectors.
\end{itemize}
The three architectures are shown in Figure~\ref{fig:AAEdgan} and more details for various components of the models are provided in Table~\ref{tab:archi_m}.
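The $M3$ generation path can be sketched at the level of tensor shapes from Table~\ref{tab:archi_m}: a 20 dimensional prior sample concatenated with a 4 dimensional one-hot label passes through the code generator ($24\rightarrow140\rightarrow256$) and the decoder ($256\rightarrow300\rightarrow700\rightarrow1000\rightarrow1582$). In the sketch below the weights are random stand-ins; only the layer sizes follow the table:

```python
import numpy as np

# Shape-level sketch of the M3 generation path; weights are untrained
# random matrices, so the outputs are meaningless except for their shapes.
def mlp(dims, rng):
    ws = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(dims, dims[1:])]
    def forward(x):
        for w in ws[:-1]:
            x = np.maximum(x @ w, 0.0)  # ReLU hidden layers (as in Table 1)
        return x @ ws[-1]               # linear output layer
    return forward

rng = np.random.default_rng(0)
code_gen = mlp([24, 140, 256], rng)
decoder = mlp([256, 300, 700, 1000, 1582], rng)

z = rng.standard_normal((8, 20))            # batch from the 20-d normal p_z
c = np.eye(4)[rng.integers(0, 4, size=8)]   # one-hot class labels c
x_synth = decoder(code_gen(np.concatenate([z, c], axis=1)))
# x_synth.shape == (8, 1582): one synthetic feature vector per sample
```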
\subsection{Training methodology}
\label{sec:trainingm}
In this section we outline the methodology used to train our models. As mentioned before, a GAN has a discriminator and a generator playing a min-max game, each trying to fool the other. At the end of training, the loss curves obtained from the discriminator and generator networks should converge, implying that the GAN has reached an equilibrium in which the generator produces samples realistic enough to confuse the discriminator.
Usually the generator's job of learning a complex distribution is harder than the discriminator's job of simply classifying between real and generated data. Hence, updating the generator more often than the discriminator in each iteration can be helpful. However, a weak discriminator that does a poor job of separating real and generated data is also undesirable, as the discriminator's output drives the generator to produce more realistic samples. Hence, careful tuning of the number of updates to each module is required.
Below we describe the training steps for our GAN based models. We used stochastic gradient descent as the optimizer to update the model parameters. The learning rates and other training parameters of the various components were tuned so that the discriminator and generator errors converge as training progresses. Note that while $M1$ has one GAN framework matching the coding space to $p_{\mathbf{z}}$, $M2$ and $M3$ have an additional GAN framework matching the distribution of the synthetic samples coming out of the decoder to that of the real feature vectors. The former task is easier because the feature vectors lie in a higher dimensional space than the code-vectors. For all three models we start by updating the auto-encoder's weights to minimize the reconstruction error, followed by training the GAN framework to match the code-vector distribution with that of samples obtained from $p_{\mathbf{z}}$. Since $M2$ and $M3$ have an extra component in the form of discriminator $D\_2$, additional steps were implemented to train the GAN framework matching the decoder's output with the real feature vectors. For each batch of training samples, the components of $M1$, $M2$ and $M3$ were updated in the following steps:
\begin{itemize}
\item Step 1 - Update auto-encoder weights: The weights of the auto-encoder (encoder and decoder) are updated based on a reconstruction loss; we use the mean squared error. Hence, for a real data sample $\mathbf{x}$ and its reconstruction $\mathbf{x'}$, the auto-encoder weights are updated to minimize $\|\mathbf{x}-\mathbf{x'}\|_2^2$. We used a learning rate (LR) of 0.001 with a momentum of 0.9 for $M1$ and $M2$; the LR was kept the same for $M3$ but no momentum was used.
\item Step 2 - Update $D\_1$ weights: Real data-points are transformed by the encoder, and an equal number of points are sampled from $p_{\mathbf{z}}$. In the case of $M3$, the sampled points are also passed through the code generator (CG) along with the conditional labels $\mathbf{c}$. The weights of the discriminator ($D\_1$ in the figures) are updated to minimize the cross-entropy for distinguishing between encoded samples and samples obtained/derived from $p_{\mathbf{z}}$; label information is also provided to the discriminator. Let $\mathbf{x}$ be a real sample and $\mathbf{c_x}$ the one-hot vector denoting its class. Let $\mathbf{z}$ be a sample obtained from $p_{\mathbf{z}}$ and, in the case of $M1$ and $M2$, let $\mathbf{c_z}$ be the one-hot vector denoting the mixture component it was sampled from. Assuming the ground truth label is 1 when $D\_1$ receives the code-vectors $\text{enc}(\mathbf{x})$ as input and 0 when $D\_1$ is provided with samples obtained/derived from $p_{\mathbf{z}}$, the loss function minimized to update the discriminator's parameters is given by:
\begin{equation}\label{eq:gan_loss2}
\text{\bf M1, M2:}\; -\log(D(\text{enc}(\mathbf{x}),\mathbf{c_x}))-\log(1-D(\mathbf{z},\mathbf{c_z}))
\end{equation}
\begin{equation}\label{eq:gan_loss3}
\text{\bf M3:}\; -\log(D(\text{enc}(\mathbf{x}),\mathbf{c_x}))-\log(1-D(\text{CG}(\mathbf{z},\mathbf{c}),\mathbf{c}))
\end{equation}
LR of 0.1 was used for $M1$ and $M2$ while it was 0.01 for $M3$.
\item Step 3 - Update encoder weights: We then freeze the discriminator ($D\_1$) weights. The weights of the encoder are updated based on its ability to fool the discriminator: for a real sample $\mathbf{x}$, the ground truth label when $D\_1$ receives the code-vector $\text{enc}(\mathbf{x})$ is now 0, and the loss function minimized to update the encoder's parameters is $-\log(1-D(\text{enc}(\mathbf{x}),\mathbf{c_x}))$. An LR of 0.1 was used for $M1$ and $M2$, and 0.01 for $M3$.
\end{itemize}
For $M2$ and $M3$ there were two additional steps as mentioned below to match the decoder's output with the distribution of real feature vectors.
\begin{itemize}
\item Step 4 (only for $M2$/$M3$): Points $\mathbf{z}$ are sampled from $p_{\mathbf{z}}$ and fed to the decoder in the case of $M2$, or to the code generator followed by the decoder along with a class label in the case of $M3$. The weights of the discriminator ($D\_2$ in the figures) are updated to minimize the cross-entropy for classifying between synthetic and real samples. Assuming the ground truth label is 1 when $D\_2$ is fed the decoder's outputs and 0 when $D\_2$ is provided with real samples $\mathbf{x}$, the loss function minimized to update the discriminator's parameters is
\begin{equation}\label{eq:gan_loss4}
\text{\bf M2:}\; -\log(D(\text{dec}(\mathbf{z}),\mathbf{c_z}))-\log(1-D(\mathbf{x},\mathbf{c_x}))
\end{equation}
\begin{equation}\label{eq:gan_loss5}
\text{\bf M3:}\; -\log(D(\text{dec}(\text{CG}(\mathbf{z},\mathbf{c})),\mathbf{c}))-\log(1-D(\mathbf{x},\mathbf{c_x}))
\end{equation}
An LR of 0.0001 was used for both $M2$ and $M3$.
\item Step 5 (Only for M2/M3): We then freeze the discriminator ($D\_2$) weights. The weights of the decoder (in case of $M2$) or code generator + decoder (in case of $M3$) are updated based on their ability to fool the discriminator. In case of $M3$, an additional loss term based on the mutual information $Q$ is also considered while updating the parameters.
\begin{equation}\label{eq:gan_loss6}
\text{\bf M2:}\; -\log(1-D(\text{dec}(\mathbf{z}),\mathbf{c_z}))
\end{equation}
\begin{equation}\label{eq:gan_loss7}
\begin{aligned}
\text{\bf M3:}\; -\log(1-D(\text{dec}(\text{CG}(\mathbf{z},\mathbf{c})),\mathbf{c})) \\
-Q(\mathbf{c},\text{dec}(\text{CG}(\mathbf{z},\mathbf{c})))
\end{aligned}
\end{equation}
Furthermore, the auxiliary layer (AUX) weights are also updated using the mutual-information-based loss term. An LR of 0.001 was used for both $M2$ and $M3$.
\end{itemize}
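The mutual-information term $Q$ for $M3$ is realized, InfoGAN-style, through the auxiliary layer (AUX), which tries to recover the conditioning label from the generated sample. A hedged NumPy sketch of the usual surrogate (the mean log-likelihood of the label under the auxiliary posterior; names are illustrative) is:

```python
import numpy as np

def mutual_info_term(q_probs, c_onehot):
    # InfoGAN-style lower bound: mean log-likelihood of the conditioning
    # label c under the auxiliary network's posterior Q(c | generated sample)
    q = np.clip(q_probs, 1e-7, 1.0)
    return np.mean(np.sum(c_onehot * np.log(q), axis=1))
```

This quantity is maximized (its negative is added to the generator loss), rewarding generated samples from which the conditioning label can be recovered.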
Note that the learning rate used to update the $D\_2$ parameters is smaller than that used to update the decoder/code generator + decoder, so as to balance the learning abilities of the generator and discriminator in this GAN framework. Also, the generators were trained for two epochs for every single epoch of training $D\_2$.
\subsection{Evaluation metrics}
Once the models are trained until the discriminator and generator errors converge, we can sample points from $p_{\mathbf{z}}$ and feed them to the decoder (in case of $M1$ and $M2$) or to the code generator followed by the decoder (in case of $M3$) to generate synthetic feature vectors. For $M1$ and $M2$, the label of a generated feature vector is the mixture id of the Gaussian component from which $\mathbf{z}$ was sampled. For $M3$, the synthetic feature vectors are assigned to the class denoted by the label vector $\mathbf{c}$ fed to the code generator, with respect to which we maximize the mutual information of the generated feature vectors. Hence, using our trained GAN models, we are able to sample points from the distribution $p(\mathbf{x_{synth}},y_{synth})$, where $\mathbf{x_{synth}}$ represents the synthetic feature vectors and $y_{synth}$ represents their labels. To evaluate the effectiveness of our models, we need to compare the distribution $p(\mathbf{x_{synth}},y_{synth})$ with the real distribution $p(\mathbf{x_{real}},y_{real})$. So far, a standardized set of metrics that can quantify the similarity between real and fake samples is not available. To address this, we suggest three metrics and evaluate the models on them. We define these evaluation metrics below.
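For concreteness, sampling labeled points from the four-component mixture prior used by $M1$ and $M2$ (where the mixture id doubles as the emotion label) can be sketched in NumPy as follows; the component means below are illustrative placeholders, not the values used in our experiments.

```python
import numpy as np

def sample_prior(n, means, std=1.0, rng=None):
    # draw n samples from an equally weighted Gaussian mixture prior p_z;
    # the sampled mixture id serves as the emotion label for M1/M2
    rng = rng or np.random.default_rng(0)
    labels = rng.integers(0, len(means), size=n)
    z = np.asarray(means)[labels] + std * rng.standard_normal((n, len(means[0])))
    return z, labels

means = [(-3, -3), (-3, 3), (3, -3), (3, 3)]  # illustrative 2-D component means
z, y = sample_prior(6000, means)
```

The resulting 6000 labeled codes can then be passed through the decoder to obtain a balanced synthetic dataset.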
\subsubsection{Metric 1: Testing accuracy on synthetic data with a classifier trained on real data}\label{sec:synth_test}
The objective of this experiment is to assess the similarity between real and synthetic data by using a model trained on real data to classify synthetic data. This gives us an idea of the quality of the synthetic data. A higher accuracy suggests the generated distribution $p(\mathbf{x_{synth}},y_{synth})$ closely matches the real distribution $p(\mathbf{x_{real}},y_{real})$. However, it may happen that the variance within samples belonging to the same class is low, i.e., they do not capture the full distribution of the modeled class. On the other hand, a lower accuracy suggests the real and synthetic data samples come from relatively different distributions. It does not necessarily imply that the synthetic data is of bad quality: the model may be generating meaningful samples not represented in the real dataset.
\subsubsection{Metric 2: Testing accuracy on real data with a classifier trained on synthetic data}\label{sec:synth_train}
In this experiment we evaluate the performance of a model trained on synthetic data to classify a test set consisting of real data. A high accuracy indicates that the generative models produce samples that are good representations of all the classes. This measure reflects the diversity of the synthetic data. For example, a classifier trained using synthetic samples from a GAN model that is liable to mode collapse \cite{arjovsky2017wasserstein} would perform poorly because the training set will have samples from only a few classes. This metric also lets us verify whether a GAN-based model generates meaningful samples: if it does, those samples can be used to train a classifier that correctly classifies real data points even when they are not explicitly represented in the real dataset.
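Metrics 1 and 2 share one protocol with the roles of the real and synthetic sets swapped. The sketch below illustrates this protocol with a nearest-centroid stand-in for the SVM classifiers actually used in our experiments; all names are illustrative.

```python
import numpy as np

def fit_centroids(X, y):
    # stand-in classifier: per-class mean vectors (our experiments use SVMs)
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

def unweighted_accuracy(y_true, y_pred):
    # accuracy averaged over classes (UWA)
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)])

def metric1(real_X, real_y, synth_X, synth_y):
    # Metric 1: train on real data, test on synthetic data
    model = fit_centroids(real_X, real_y)
    return unweighted_accuracy(synth_y, predict(model, synth_X))

def metric2(real_X, real_y, synth_X, synth_y):
    # Metric 2: train on synthetic data, test on real data
    model = fit_centroids(synth_X, synth_y)
    return unweighted_accuracy(real_y, predict(model, real_X))
```

When the synthetic distribution matches the real one, both metrics should approach the accuracy achievable with real data alone.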
\subsubsection{Metric 3: Using the Fréchet Inception Distance (FID) metric}\label{sec:FID}
This metric derives its name from the Inception network \cite{szegedy2015going}, a deep convolutional neural network model trained on millions of images for the purpose of image classification. Inception net reported state-of-the-art results for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014. Since it has been trained on a huge dataset, it is assumed that the network is generalizable enough and that its intermediate layers produce meaningful activations helpful for image classification. Researchers have used this for evaluating the quality of synthetic samples generated by GANs \cite{heusel2017gans}. To compute FID, the Inception network is used to get intermediate-layer activations for the real and synthetic datasets. We then compute the statistics, mean $\mu$ and covariance $\Sigma$, of those activations over all the samples present in the corresponding real and synthetic datasets. The FID between the real images $x$ and generated images $g$ is computed as:
\begin{equation}\label{eq:fid}
\mathrm{FID}(x,g) = \|\mu_x-\mu_g\|_2^2 + \mathrm{Tr}\left(\Sigma_x + \Sigma_g - 2(\Sigma_x\Sigma_g)^{1/2}\right)
\end{equation}
Hence, if the generated and real images come from similar distributions, FID will be lower. Also note that while in previous two metrics we are comparing the joint distributions $p(\mathbf{x_{synth}},y_{synth})$ and $p(\mathbf{x_{real}},y_{real})$, FID compares the marginals $p(\mathbf{x_{synth}})$ and $p(\mathbf{x_{real}})$.
Unfortunately, there is no deep network analogous to Inception net for speech emotion recognition that has been trained on enough data to be generalizable and extract meaningful activations from raw feature vectors. Hence, we used a neural network with fully connected layers trained on IEMOCAP and derive the activations from its intermediate layer for FID computation. Note that GAN models generating data similar to IEMOCAP will have a lower FID and will thus be judged "better" models by this metric. However, as mentioned before, a higher score does not necessarily mean a worse model, because the synthetic dataset can have meaningful samples not represented in the limited IEMOCAP dataset.
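Given the intermediate-layer activations, the FID of Equation~\ref{eq:fid} reduces to a few lines of linear algebra. The sketch below is NumPy-only: `scipy.linalg.sqrtm` is the usual tool for the matrix square root, and the eigendecomposition used here instead relies on the fact that the product of two covariance matrices has real, non-negative eigenvalues (and is generically diagonalizable).

```python
import numpy as np

def frechet_distance(act_real, act_synth):
    # Fréchet distance between two sets of intermediate-layer activations
    # (rows are samples)
    mu_r, mu_s = act_real.mean(axis=0), act_synth.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_s = np.cov(act_synth, rowvar=False)
    # matrix square root of cov_r @ cov_s via eigendecomposition
    vals, vecs = np.linalg.eig(cov_r @ cov_s)
    sqrt_prod = (vecs * np.sqrt(np.maximum(vals.real, 0))) @ np.linalg.inv(vecs)
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(cov_r + cov_s - 2 * sqrt_prod).real)
```

Identical activation sets give a distance of (numerically) zero, and the distance grows with the separation between the two distributions.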
\section{Results}
\label{sec:resss}
In this section we evaluate the GAN-based models based on the characteristics of the generated synthetic data. We first execute an in-domain 5-fold cross-validation analysis on IEMOCAP. As mentioned before, each of the 5 sessions in the dataset has different speakers, so by doing a leave-one-session-out cross-validation we ensure a speaker-independent analysis. We also perform a visual analysis to compare the distributions of the real and synthetic datasets for one of the cross-validation splits. Finally, we do a cross-corpus study where we train a GAN model using IEMOCAP and compare the synthetic data generated to real feature vectors obtained from utterances in MSP-IMPROV. Along with computing the above three metrics, we simulate a low-resource condition to find out if using synthetic data along with limited real data to train a model aids emotion classification. The purpose of this experiment was to judge the transferability of the knowledge provided by synthetic vectors across two corpora. If such transfer occurs, appending the real dataset from a source (IEMOCAP) distribution with synthetic data obtained from GANs trained on the source distribution should aid the classification of test samples belonging to the target (MSP-IMPROV) distribution. We have previously seen in \cite{sahu2018adversarial} and \cite{sahu2018enhancing} that appending the real dataset with a synthetic dataset improves in-domain classification only by a small margin (approximately 1\%). Here we undertake a more in-depth analysis of how cross-corpus emotion recognition is affected.
\subsection{In-domain cross-validation experiments}
Figure~\ref{fig:errors} shows the reconstruction loss curves for models $M1$, $M2$ and $M3$ trained on four sessions for a given cross-validation iteration. We also show how the discriminator's ($D\_2$) and generator's losses of the data-generating part of the models change as the training progresses. Note that the errors converge for both training and validation splits, indicating that the GAN errors converge for both of them. Once trained, we sample points from $p_{\mathbf{z}}$, pass them through the respective decoders and get the points $(\mathbf{x_{synth}},y_{synth})$. In case of $M1$ and $M2$, we are equally likely to sample points from any of the four modes of the mixture PDF $p_{\mathbf{z}}$. For $M3$, the conditional label vectors $\mathbf{c}$ are sampled from a uniform distribution. Since there is almost equal representation from all classes, the synthetic dataset will be balanced. For each CV split, the generated data can either be compared to the training set (set-1) or the validation set (set-2). After the GAN model training, we compute metrics 1 and 2 with the real data-points sourced either from the training set (set-1) or the validation set (set-2). We train separate SVM classifiers on the real and synthetic datasets to compute metrics 1 and 2, respectively. The model hyperparameters are tuned on the respective training set. We report the average unweighted accuracy (UWA) over the five cross-validation splits. We generate 6000 synthetic data-points, approximately the same number of data-points as the IEMOCAP set used in experiments. We show the results for the different train and test conditions in Table~\ref{tab:synth_res} (chance accuracy = $\frac{1}{4} = 25\%$).
\begin{table}[t]
\centering
\caption {Cross-validation accuracies (\%) obtained using different combinations of data-sets for training and evaluating an SVM classifier. Datasets used to train and test the SVM classifier are denoted as Tr. and Te. in the table respectively. Set-1 refers to the training set of a cross-validation split used to train the GAN model while set-2 refers to the validation set.}
\begin{tabular}{@{}l|c|c|c|c@{}}
Models&Metric 1 & Metric 2 & Metric 1 & Metric 2 \\
&Tr. : Set-1 & Te. : Set-1 & Tr. : Set-2 & Te. : Set-2 \\ \hline
$M1$ &85.60& 48.58 & 72.52 &45.89 \\
\hline
$M2$ &\textbf{88.41}& 49.91 & \textbf{74.75} &46.96 \\
\hline
$M3$ &55.20& \textbf{52.24} & 45.63&\textbf{51.58} \\
\hline
\end{tabular}
\label{tab:synth_res}
\end{table}
\begin{figure*}[t]
\includegraphics[height=13cm, width=17cm]{graphics/errors.png}
\caption{Reconstruction or adversarial errors (discriminator's (blue) and generator's (red) errors) for one of the cross-validation splits (a) $M1$ (b) $M2$ (c) $M3$. a(i), b(i,iii), c(i,iii) belong to the training set while a(ii), b(ii,iv), c(ii,iv) belong to the validation set. Note how the discriminator's and generator's errors converge, indicating the GANs have reached an equilibrium state. Also, the trends are similar for the training and validation splits, indicating the models generalize well.}
\label{fig:errors}
\end{figure*}
It can be seen that results obtained using set-1 to train/test the SVM classifier are better than if set-2 were used instead. This is expected, as set-1 was used to train the GAN models and hence the generated data is expected to be similar to set-1. Note that set-2 contains a different set of speakers, adding to the mismatch. It can be observed that the accuracies obtained for $M2$ are better than those obtained for $M1$. This indicates that synthetic samples generated by $M2$ are more similar to samples obtained from the real data distribution than those generated by $M1$. Note that the decoders are trained differently for the two models. While in case of $M1$ the decoder parameters are updated based only on the reconstruction loss, in case of $M2$ the parameters are updated based on an additional adversarial loss that determines how close the output is to real data. We hypothesize that this extra update is what leads to better synthetic sample generation by $M2$, which also generalizes better to unseen speakers. Another interesting thing to note is the characteristic of synthetic data generated using $M3$. When used as a test set, classifiers trained on real data are unable to classify these samples as well as they classify samples generated from $M1$ and $M2$. This indicates that, unlike $M1$ and $M2$, $M3$ produces synthetic data samples that are not represented in the real data distribution, and hence a classifier trained on real data fails to recognize their classes accurately. However, a classifier trained on synthetic data obtained from $M3$ performs better at classifying real data-points than a classifier trained on samples generated from $M1$ and $M2$. This indicates that data generated using the model $M3$ has more diverse samples than data generated using $M1$ and $M2$. These differences arise due to the difference in training procedures and the prior $p_{\mathbf{z}}$ from which points are sampled to be fed into the decoder to generate synthetic samples.
While in the case of $M1$ and $M2$ it is pre-defined to be a mixture of four Gaussians, in the case of $M3$ the GAN model learns it during training by maximizing the mutual information between generated samples and the conditional label vector used to generate them. Also, the coding space of $M3$ has more dimensions (256) than $M1$ and $M2$ (2), which could provide the decoder with a wider range of input samples, probably leading to more diverse synthetic data-points. It is interesting to note that even though the data generated using $M3$ does not resemble the samples in the real dataset used to train it, it still contains enough meaningful samples to help recognize the classes of real data points. Next, we evaluate the generated data from the three different models using the FID metric.
The FID metric uses the intermediate-layer outputs obtained from a trained neural network for the real and synthetic datasets. For our purposes we used a fully connected neural network with 4 hidden layers of 64 neurons each and rectified linear unit (ReLU) activations, followed by an output layer of 4 neurons (each neuron corresponding to an emotion class) with softmax activation. The input layer had 1582 neurons, corresponding to the dimension of the feature vectors used to train it for emotion recognition. The network was trained for 30 epochs on the entire IEMOCAP dataset containing samples from the four emotional classes of interest. Once trained, the weights were frozen and the output of the third hidden layer was obtained for real and synthetic samples. Then the statistics $\mu$ and $\Sigma$ were computed for the real and synthetic datasets and used to calculate the FID metric according to Equation~\ref{eq:fid}. The metric averaged over the 5 cross-validation splits is shown in Table~\ref{tab:fid_res}. As can be seen, the scores are lowest for $M2$, closely followed by $M1$, indicating the synthetic data generated by these models is more similar to the real IEMOCAP data than that generated by $M3$. This confirms our findings presented in Table~\ref{tab:synth_res}, where we observed that an SVM classifier trained on real data does a better job of classifying synthetic samples generated from $M1$ and $M2$. This does not necessarily mean that $M3$ is worse than the other two models, as we saw from Table~\ref{tab:synth_res} that it generates more diverse and meaningful samples that might not be represented in the limited IEMOCAP dataset. Next, we show some visualizations to qualitatively analyze the quality of synthetic data generated from the three models.
\begin{table}[t]
\centering
\caption {FID metric when synthetic data from the three models was compared to real data distribution. Note that lower FID means that the distributions are more similar.}
\begin{tabular}{@{}l|c|c|c@{}} \hline
Models&$M1$ & $M2$ & $M3$ \\ \hline
FID &14.52& \textbf{13.24} & 33.99 \\
\hline
\end{tabular}
\label{tab:fid_res}
\end{table}
\subsubsection{t-SNE analysis of generated data}\label{sec:tsne}
In Figure~\ref{fig:tsne_synth}(a), we compare the 2D t-distributed stochastic neighborhood embedding (t-SNE) plots of 1582-D synthetic feature vectors generated using models $M1$, $M2$ and $M3$ with each other and with that of real data. We see that for all three models, the majority of the synthetic data embeddings lie in the space defined by the real data, which indicates that the GAN models are indeed capturing the underlying feature vector distribution to some extent. Additionally, it can be seen that the distributions of synthetic data generated from models $M1$ and $M2$ resemble each other. This is because the prior $p_{\mathbf{z}}$ used to generate the synthetic data is the same in both these models and different from that used in $M3$. In fact, data generated using $M3$ forms four separate clusters, each corresponding to an emotion (Figure~\ref{fig:tsne_synth}(b)). This again points to the observation made in Section~\ref{sec:background} that InfoGAN based models try to increase the inter-class variability while not giving as much attention to intra-class variability. This can explain the results in Table~\ref{tab:synth_res}, where we saw an SVM trained on synthetic data generated using $M3$ giving us better accuracies in classifying real data than those trained on data generated from $M1$ and $M2$. The greater inter-class variability in synthetic training data leads to the formation of more separable hyper-planes when training an SVM. On the other hand, since $M1$ and $M2$ do not focus on maximizing inter-class variability, fewer restrictions are imposed on them when they try to capture the real underlying data distribution. This leads to GAN models that generate synthetic data samples closer to the real data samples used to train the corresponding models. Such synthetic samples can be classified with greater accuracy using a classifier trained on real data. Also, we can see points lying outside of the space spanned by the real data. They could be meaningful feature vectors that are not in our limited IEMOCAP data.
\begin{figure}[t]
\includegraphics[height=9cm, width=9cm]{graphics/real_synth_tsne.png}
\caption{(a) Comparison of t-SNE embeddings of synthetic feature vectors generated using $M1$, $M2$ and $M3$ with the embeddings of real IEMOCAP data (b) Class-wise clustering of synthetic data generated using the three models.}
\label{fig:tsne_synth}
\end{figure}
\subsubsection{Analysis of 2D projections obtained from $M1$}
In this section, we compare the 2D encodings of synthetic data and real data obtained from the trained encoder of $M1$. As mentioned before, $M1$ has been trained so that its code space resembles a mixture of four Gaussians. Figure~\ref{fig:enc_synth} shows the scatter plot of 2D encodings obtained for one of the cross-validation splits when the corresponding datasets were passed through the encoder of $M1$. As expected, the encodings obtained from the training split strongly resemble a mixture of four Gaussians, with each mixture component corresponding to an emotion. The resemblance for encodings obtained from the validation split is not as strong, but they are still separable. The scatter plots of encodings obtained from $M1$ and $M2$ look more similar to those of the training and validation splits than those obtained from $M3$. The encodings of points generated by $M3$ for a particular emotion lie in clusters which are subsets of the mixture component they are supposed to lie in. However, the clusters are farther away from each other with less overlap because of the mutual-information-based loss function trying to maximize inter-class variability. This further indicates that the distributions of synthetic data generated from $M1$ and $M2$ are more similar to real IEMOCAP data than the data generated using $M3$.
\begin{figure*}[t]
\includegraphics[height= 5cm, width=18cm]{graphics/synth_scatter.png}
\caption{Scatter plot of 2D encodings obtained from code space of $M1$ for (a) training set (b) validation set and synthetic data obtained from (c) $M1$ (d) $M2$ (e) $M3$. Note that the scatter plots of synthetic data encodings obtained from $M1$ and $M2$ have a closer resemblance to 2D encodings of training and validation than that of $M3$.}
\label{fig:enc_synth}
\end{figure*}
\subsection{Cross-corpus experiments}
The objective of the cross-corpus evaluations is to investigate the generalization capability added by synthetically generated samples for classification on an external corpus (as opposed to being applicable only to in-domain tasks). To do this we compare the three metrics defined above. We also perform a low-resource classification experiment, explained below. We generate the synthetic samples from GAN models trained using the entire IEMOCAP dataset.
\subsubsection{Evaluating the three metrics}
As before, we conduct two experiments (Table~\ref{tab:synth_cc}). First, we use MSP-IMPROV to train an SVM classifier and evaluate it on synthetic data to compute metric 1. Note that since MSP-IMPROV was unbalanced, we balanced it by selecting an equal number of audio samples from each class before training the SVM classifier. This was followed by computing metric 2, where we used the synthetic dataset as the training set and MSP-IMPROV, left unbalanced, as the test set. We then computed the FID (metric 3) by comparing the feature vectors obtained from the MSP-IMPROV dataset with the synthetic feature vectors. A neural network with an architecture similar to the one described earlier was trained with the MSP-IMPROV dataset, and the third layer's activations were used to compute the FID.
We observe that a classifier trained on MSP-IMPROV gives almost similar accuracies for the synthetic sets from the three models. The slightly higher accuracies obtained for $M2$ and $M3$ could be due to the decoder receiving an extra adversarial error to update its parameters, thereby producing more generalizable samples. On the other hand, classifiers trained on synthetic samples generated from the different GAN-based models perform almost similarly in classifying samples from MSP-IMPROV. The FID metric shows a similar trend as the in-domain cross-validation experiment, suggesting that the synthetic data generated by $M1$ and $M2$ is more similar to the real MSP-IMPROV data than that generated by $M3$. However, the difference between the FID obtained for the $M3$-generated dataset and the other two models is much lower compared to what we observed in the cross-validation experiment. These experiments indicate that synthetic data generated from the three different GAN architectures trained on a particular dataset compare similarly to data from a different corpus.
\begin{table}[t]
\centering
\caption {Cross-corpus accuracies obtained on MSP-IMPROV. Synthetic data is generated from GAN-based models trained on IEMOCAP.}
\vspace{4mm}
\begin{tabular}{@{}l|c|c|c@{}}
& Metric 1 & Metric 2 & FID \\ \hline
$M1$ & 48.52 &38.3& 15.91 \\
\hline
$M2$ & 49.6 & 38.61& 15.6 \\
\hline
$M3$ & 49.98& 37.72&18.93 \\
\hline
\end{tabular}
\label{tab:synth_cc}
\end{table}
\subsubsection{Low resource classification experiments}
One interesting thing to note is that all the accuracies obtained are higher than the chance accuracy of 25\%. This indicates that synthetic data generated by training a GAN on a specific dataset does carry relevant information which can possibly be leveraged while classifying emotions for unseen data. To validate our hypothesis, we simulated a low-resource scenario where we use only a portion of the IEMOCAP data ($P\%$ of the entire dataset) to train a neural network based classifier and evaluate its performance on MSP-IMPROV. This is our baseline model. Next we append the limited training data with $N_{synth}$ synthetic data samples and repeat the experiment. Figure~\ref{fig:low_1}(a) shows how the accuracies change for different values of $P$ (10\%, 25\%, 50\%, 80\% and 100\%) and $N_{synth}$ (600, 2000 and 6000) when we use synthetic samples generated from $M1$. It can be observed from the figure that using synthetic data along with real data performs better than using just real data for training. Furthermore, the absolute improvement in accuracy is larger when less real data is used, compared to when the entirety of the real data is used for training the neural network based classifier. Note that the synthetic data has been generated from a GAN based model trained on the whole IEMOCAP database. Hence, the synthetic dataset tries to capture the characteristics of the distribution defined by the entirety of the IEMOCAP set. So, it provides more useful information to the classifier during training when only a portion of IEMOCAP is used, as opposed to when the whole IEMOCAP dataset is used. We also note more improvement in accuracy when more synthetic samples are used. For lower values of $P$, accuracies keep increasing as we increase $N_{synth}$. For higher values of $P$, they saturate, and it seems increasing $N_{synth}$ will not lead to any further improvement in performance.
Next we fix $N_{synth}$ at 6000 and compare the performances when we use synthetic data obtained from the three different GAN-based models. From Figure~\ref{fig:low_1}(b), we see that the classifiers trained on real data along with samples generated using $M1$ or $M2$ perform similarly while outperforming the baseline. However, samples generated from $M3$ are only beneficial for lower values of $P$. With larger amounts of real data available for training the classifier, appending them with synthetic data from $M3$ does not lead to any improvement. This can be attributed to the low intra-class variability in the generated samples obtained from $M3$ (as explained in Section~\ref{sec:tsne}), which would lead to the classifier overfitting on the specific samples present in the training set. Real data with more intra-class variability (as seen in the real world) provides more information and leads to a more generalizable classifier. Nevertheless, they are still helpful when we have limited real data available for training ($P$ less than 40\% in Figure~\ref{fig:low_1}(b)). Therefore, our experiments simulating low-resource conditions have shown that synthetic data does carry relevant information and can be used for training classifiers when real training data is available in a limited quantity.
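The low-resource simulation above boils down to subsampling the real training set and appending synthetic samples before training the classifier. A NumPy-only sketch of the set construction (names are illustrative):

```python
import numpy as np

def low_resource_train_set(real_X, real_y, synth_X, synth_y, p, n_synth, rng=None):
    # keep a fraction p of the real training data and append n_synth
    # synthetic samples drawn from the GAN's output
    rng = rng or np.random.default_rng(0)
    keep = rng.choice(len(real_X), size=int(p * len(real_X)), replace=False)
    add = rng.choice(len(synth_X), size=n_synth, replace=False)
    X = np.vstack([real_X[keep], synth_X[add]])
    y = np.concatenate([real_y[keep], synth_y[add]])
    return X, y
```

Sweeping $p$ over $\{0.1, 0.25, 0.5, 0.8, 1.0\}$ and $N_{synth}$ over $\{0, 600, 2000, 6000\}$ reproduces the grid of conditions compared in Figure~\ref{fig:low_1}, with $N_{synth} = 0$ giving the baseline.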
\begin{figure}[t]
\includegraphics[height=5.5cm, width=9cm]{graphics/plots_combined.png}
\caption{Cross-corpus classification accuracy vs the percentage $P$ of real dataset used for training a neural network classifier with or without synthetic data samples obtained from a trained GAN model. Note that the GANs have been trained on the entire IEMOCAP data. (a) Synthetic data from only $M1$ is used while the number of synthetic samples $N_{synth}$ is varied. (b) Synthetic data from all three models $M1$, $M2$ and $M3$ is used with $N_{synth}$ = 6000. Note that for baseline systems $N_{synth} = 0$}
\label{fig:low_1}
\end{figure}
\section{Summary and future directions}
\label{sec:conclusion}
In this paper we implemented three auto-encoder and GAN based models to synthetically generate the high-dimensional feature vectors useful for speech emotion recognition. The models were trained to generate such data-points given a sample from a prior distribution $p_{\mathbf{z}}$. We considered generating synthetic samples for four emotion classes, namely angry, sad, neutral and happy. We explored two ways of enforcing a 4-way clustering of the generated data: (a) in models $M1$ and $M2$, where $p_{\mathbf{z}}$ was chosen to be a mixture of four Gaussians with each mixture component corresponding to an emotion class; (b) in model $M3$, where $p_{\mathbf{z}}$ was Gaussian but the generator received an additional label vector as input, and the mutual information between the generated sample and the label vector was maximized. In our cross-validation experiments, the FID metric and the experiments classifying synthetic data using an SVM trained on real data showed that the distributions $p(\mathbf{x_{synth}},y_{synth})$ generated using $M1$ and $M2$ are closer to the real distribution $p(\mathbf{x_{real}},y_{real})$ than that of $M3$. Between $M1$ and $M2$, the latter seemed to produce more realistic samples. This can be attributed to the fact that the decoder of $M2$ received an extra update based on a GAN adversarial error, where a discriminator was used to distinguish between its output and real samples. However, an SVM trained using samples generated from $M3$ did a better job of classifying real data-points than one trained on samples from $M1$ and $M2$. This was probably because the mutual-information-based loss function results in a GAN that generates samples with more inter-class variability. This leads to an SVM trained with better/more efficient hyper-planes separating the emotional classes. It would be an interesting experiment to explore models where we can possibly take advantage of both these phenomena.
In such an experiment, we can define the prior $p_{\mathbf{z}}$ to be a mixture of four Gaussians along with an additional term in the loss function to maximize the mutual information between the generated samples and the mixture component id from which $\mathbf{z} \sim p_{\mathbf{z}}$ was sampled. We can also vary the weight assigned to the mutual-information-based loss term relative to the vanilla GAN loss term. The lower-dimensional visualizations show that while most of the points lie in the space spanned by real data, a good number of points lie outside of it. In the future, we plan to focus more on these data-points to identify the meaningful samples that are not represented in the limited real dataset (IEMOCAP) used to train the GAN-based models. The cross-corpus experiments further pointed out that such meaningful samples might exist after all. While samples from $M3$ were only useful when less than 40\% of the IEMOCAP data was used for training, samples generated from $M1$ and $M2$ were useful even when all of the IEMOCAP data was used for training. This leads us to believe that such GAN-based models, even though trained with limited real data, have the ability to produce meaningful samples which are not present in the real dataset, thereby helping us in cross-corpus emotion recognition. One thing to keep in mind is that more synthetic samples is not always better, as seen from Figure~\ref{fig:low_1}(a): there is not much difference in the accuracies obtained when 2000 and 6000 synthetic data-points are used to train a classifier along with the real data.
Additionally, these synthetic feature vectors cannot be converted back to audio waveforms. Hence, we plan to investigate the utility of such architectures for generating audio waveforms corresponding to different emotions. Such samples could be evaluated by having humans listen to them, giving us further insights into how these models behave.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:Introduction}
When pre-trained machine learning models are used, it is a known issue that the target population in which the model is being deployed may not have been reflected in the data with which the model was trained. There are many reasons why a training set would not match the target population, including sampling bias \cite{suresh2021FrameworkUnderstandingSources}, concept drift \cite{DBLP:journals/kais/GoldenbergW19}, and domain shift \cite{DBLP:journals/kais/GoldenbergW19}. This situation can lead to a reduction in performance in the target domain. One risk is that, as the demographic distribution of the population changes, certain demographic populations will be under-served by model performance, even as they become more represented in the target population: a type of representation bias \cite{suresh2021FrameworkUnderstandingSources}. Lack of representation, or invisibility, of this kind can be unfair, and adequate visibility can be a prerequisite for fairness \cite[Chapter~4]{dignazio2020DataFeminism}. A classic example is that of demographic groups, such as women and darker-skinned people, being underrepresented in computer-vision datasets, hence scarcely visible to the learning algorithm, with consequences such as high error rates in facial recognition and consequent denials of benefits (such as authentication) or imposition of harms (such as arrests) \cite{buolamwiniGenderShadesIntersectional2018}. One, often advisable, approach for dealing with this is to train a new model with updated or improved training data. However, in the case of supervised learning, this may not be possible, as label information for these additional members of the target population may not yet exist. Additionally, while collection of representative data is very important, it does come at a cost, including a time cost, so that some shift in the target is likely to occur before updated data is collected or a shift is even identified.
The field of domain adaptation proposes techniques for addressing these situations \cite{redkoSurveyDomainAdaptation2020}.
In this paper we contribute to the domain adaptation literature by introducing \textit{domain-adaptive decision trees} (DADT). With DADT we aim to improve the accuracy of decision tree models trained on a source domain (or training data) that differs from the target domain (or test data), as may occur when we do not have labeled instances for the target domain. We do this by proposing an in-processing step that adjusts the information gain (IG) split criterion with outside information (non-label data) corresponding to the distribution of the target population we aim for. The approach works by adapting probabilities of child nodes to the target domain and, in this sense, by making parts of the feature space (such as certain demographics) more visible to the learning algorithm. We investigate the conditions in which this strategy can lead to increases in performance, in fairness, or in both.
As an illustrative example, consider the case of a sports retail store looking to target new clients ($D_T$) using what it knows about its current clients ($D_S$) based on geographical regions. The store has information on the purchasing habits ($Y$) of $D_S$ as they have purchased at the store before; this is not the same for $D_T$.
Imagine, in particular, that the store wants to use a classifier to inform its inventory of women's football shoes. If the two client populations differ by region, which is likely, the classifier trained on $D_S$ and intended to predict purchasing patterns ($\hat{Y}$) on $D_T$ could lead to biased predictions when used. For instance, if there are few women athletes and little demand for women's football shoes in the source region relative to the target region, the classifier could underestimate the stock of women's football shoes needed, under-serving the potential new clients. This could lead to lower service or higher prices for some social groups, and a lost opportunity for the store to gain or even retain customers in the target region. To break such feedback loops, the store could improve the classifier by amplifying some of the knowledge about football-shoe purchases in the source domain. It could, for instance, use knowledge about the demographics in the target domain to better approximate the target region's potential demand for football-shoe purchases by women.
In the experiments reported in Section~\ref{sec:Experiments} we utilize the \textit{ACSPublicCoverage} dataset---an excerpt of US Census data \cite{dingRetiringAdultNew2021}, with the prediction task of whether or not a low income individual is covered by public health coverage. The dataset provides the same feature sets for each of the US states. This design allows us to set up an experimental scenario that mirrors our retail example of having no labeled data for the target domain, but some knowledge of the distribution of the attributes in the target domain.
We aim not only to improve overall accuracy, but also to produce sufficient accuracy for different demographic groups. This is important because the distribution of these groups may differ in the target population and even shift over time within that population. \citet{dingRetiringAdultNew2021}, for example, found that naively taking a model trained on data from one US state and using it in each of the other states resulted in unpredictable performance in both overall accuracy and in \emph{demographic parity} (a statistically defined metric of model fairness based on the treatment of members of a selected demographic group compared to members of another demographic group). We therefore also test the impact of our intervention on the results of a post-processing fairness intervention \cite{DBLP:conf/nips/HardtPNS16}, which we measure using two common fairness metrics: \textit{demographic parity} and \textit{equal opportunity} \cite[Chapter~3]{barocas-hardt-narayanan2019}.
We focus on decision trees under domain shift because decision trees are accessible, interpretable, and well-performing classification models that are commonly used. We study decision trees rather than more complex classifiers for tabular data for three reasons. First, these models are widely available across programming languages and are standard in industry and academic communities \cite{Rudin2016_KeyNotePapis}. Second, these models are inherently transparent \cite{Rudin2019_StopExplainingML}, meaning they are white-box rather than black-box models, which may facilitate the inclusion of stakeholders in understanding and assessing model behaviour. Third, ensembles of these models still outperform deep learning models on tabular data \cite{Grinsztajn2022_DTs}. For these reasons, and as proposed AI regulations include calls for explainable model behaviour \cite{EU_AIAct2, USA_AIBill}, decision trees are a relevant choice when training a classifier, and it is therefore important to address issues specific to them.
There are different types of domain shift \cite{Quinonero2009} and
they have different implications for suitable interventions. We focus on the \textit{covariate shift} case of domain shift, in which only the distribution of the attributes changes between source and target, not the relationship between the attributes and the label. In the next section, we introduce the problem setting as a domain adaptation problem, focusing specifically on the covariate shift type of this problem. Before presenting our proposed intervention on information gain in Section \ref{sec:DADT}, we present the necessary background on decision trees, entropy and information gain, and measuring covariate shift. In Section \ref{sec:Experiments} we present the results of our experiments. We then examine those results in relation to the covariate shift assumption between source and target populations, and see that our intervention leads to an increase in accuracy when the covariate shift assumption holds. Section \ref{sec:RelatedWork} situates our approach in the literature on domain adaptation for decision trees and on adjusting the information gain of decision trees. Section \ref{sec:Closing} concludes and gives an outlook on future work.
{\bf Research Ethics:} We use data from an existing benchmark designed for public use and work with aggregated or anonymized data only, thus complying with applicable legal and ethical rules, and we disclose all details of our method in line with the transparency mandates of the ACM Code of Ethics.
\section{Related Work}
\label{sec:RelatedWork}
Domain adaptation (DA) studies how to achieve a robust model when the training (source domain) and test (target domain) data do not follow the same distribution \cite{Redko2020_DASurvey}. Here, we focus on the covariate shift type, which occurs when the attribute space $\mathbf{X}$ is distributed differently across domains \cite{Quinonero2009,DBLP:journals/pr/Moreno-TorresRACH12, DBLP:conf/icml/ZhangSMW13}. To the best of our knowledge, DADT is the first framework to address DA as an in-processing problem specific to decision tree classifiers.
Previous work on adjusting entropy estimation has been conducted largely outside of machine learning, as well as in the context of information gain (IG) in decision trees. Here, too, DADT is the first work to look at entropy estimation under DA. \cite{guiasu_weighted_1971} proposes a general form of a weighted entropy equation for adjusting the likelihood of the information being estimated. Other works study the estimation properties behind using frequency counts for estimating the entropy \cite{Schurmann_2004, DBLP:conf/icml/Nowozin12, DBLP:conf/nips/NemenmanSB01, DBLP:journals/jmlr/ArcherPP14}. In relation to decision trees, \cite{DBLP:journals/eswa/SingerAB20} proposes a weighted IG based on the risk of the portfolio of financial products that the decision tree is trying to predict.
Similarly, \cite{DBLP:conf/ijcai/ZhangN19, DBLP:conf/dis/ZhangB20} re-weight IG with the fairness metric of statistical parity.
\cite{DBLP:conf/csemws/VieiraA14} adjust the IG calculation with a gain ratio calculation for the purpose of correcting a bias against attributes that represent higher levels of abstraction in an ontology.
Recent work has started to examine the relationship between DA and fairness. \cite{mukherjeeDomainAdaptationMeets2022} show that domain adaptation techniques can enforce individual fairness notions.
\cite{maityDoesEnforcingFairness} show that enforcing risk-based fairness minimization notions can have an ambiguous effect under covariate shift for the target population, arguing that practitioners should check on a per-context basis whether fairness is improved or harmed. This is in line with the findings of \cite{dingRetiringAdultNew2021}, who test both standard and fairness-adjusted gradient boosting machines across numerous shifted domains and find that both accuracy and fairness metrics are highly variable across target domains. These works call for further research to understand the impact of domain drifts and shifts.
\section{Problem Setting}
\label{sec:ProblemSetting}
Let $\mathbf{X}$ denote the set of discrete/continuous \textit{predictive attributes}, $Y$ the \textit{class attribute}, and $f$ the \textit{decision tree classifier} such that $\hat{Y}=f(\mathbf{X})$ with $\hat{Y}$ denoting the \textit{predicted class attribute}.
We assume a scenario where the population used for training $f$ (the \textit{source domain} $D_S$) is not representative of the population intended for $f$ (the \textit{target domain} $D_T$). Formally, we write it as $P_S(\mathbf{X}, Y) \neq P_T(\mathbf{X}, Y)$, where $P_S(\mathbf{X}, Y)$ and $P_T(\mathbf{X}, Y)$, respectively, denote the source and target domain joint probability distributions.
We tackle this scenario as a \textit{domain adaptation} (DA) problem \cite{Redko2020_DASurvey} as it allows us to formalize the difference between distributions in terms of distribution shifts.
There are three types of distribution shifts in DA: covariate, prior probability, and dataset shift. Here, we focus on \textit{covariate shift} \cite{Quinonero2009, DBLP:journals/pr/Moreno-TorresRACH12, DBLP:conf/icml/ZhangSMW13} in which the conditional distribution of the class, $P(Y|\mathbf{X})$, remains constant but the marginal distribution of the attributes, $P(\mathbf{X})$, changes across the two domains:
\begin{equation}
\label{eq:CovariateShiftDefinition}
P_S(Y | \mathbf{X}) = P_T(Y | \mathbf{X}) \; \; \text{but} \; \;
P_S(\mathbf{X}) \neq P_T(\mathbf{X})
\end{equation}
We focus on covariate shift because we assume, realistically, to have some access only to the predictive attributes $\mathbf{X}$ of the target domain.\footnote{The other two settings require information on $Y \in D_T$, with \textit{prior probability shift} referring to cases where the marginal distribution of the class attribute changes, $P_S(\mathbf{X}|Y) = P_T(\mathbf{X}|Y)$ but $P_S(Y) \neq P_T(Y)$, and \textit{dataset shift} referring to cases where neither covariate nor prior probability shifts apply but the joint distributions still differ, $P_S(\mathbf{X}, Y) \neq P_T(\mathbf{X}, Y)$.}
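To make the covariate shift setting concrete, the following sketch (illustrative only, not part of the paper's experiments; all parameter values are our own) generates a source and a target sample that share the same $P(Y|\mathbf{X})$ but differ in $P(\mathbf{X})$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(p_x1, n):
    """Sample (X, Y) with a fixed P(Y|X) but a domain-specific P(X).

    X is binary with P(X=1) = p_x1; Y|X follows the same Bernoulli
    rule in every domain, so only the covariates shift.
    """
    x = rng.binomial(1, p_x1, size=n)
    # P(Y=1|X=0) = 0.2 and P(Y=1|X=1) = 0.8 in *both* domains
    y = rng.binomial(1, np.where(x == 1, 0.8, 0.2))
    return x, y

x_s, y_s = sample_domain(p_x1=0.3, n=100_000)  # source: few X=1
x_t, y_t = sample_domain(p_x1=0.7, n=100_000)  # target: many X=1

# P(Y|X) matches across domains while P(X) does not
print(y_s[x_s == 1].mean(), y_t[x_t == 1].mean())  # both close to 0.8
print(x_s.mean(), x_t.mean())                      # close to 0.3 vs 0.7
```

A classifier fit on the source sample here sees mostly $X=0$ instances, even though the target is dominated by $X=1$: the estimated split probabilities are biased relative to the target.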
Under this \textit{unsupervised setting}, we picture a scenario where a practitioner needs to train $f$ on $D_S$ to be deployed on $D_T$. Aware of the potential covariate shift, the practitioner wants to avoid training a biased model relative to the target domain that could result in poor performance on $\hat{Y}$.
What can be done here to address the DA problem depends on what is known about $P_T(\mathbf{X})$.
In the ideal case in which we know the whole covariate distribution $P_T(\mathbf{X})$, the covariate shift assumption \eqref{eq:CovariateShiftDefinition} allows us to compute the full joint distribution via the multiplication rule of probabilities:
\begin{equation}
\label{eq:chain}
P_T(Y, \mathbf{X}) = P_T(Y | \mathbf{X}) \cdot P_T(\mathbf{X}) = P_S(Y | \mathbf{X}) \cdot P_T(\mathbf{X})
\end{equation}
where we can exchange $P_T(Y|\mathbf{X})$ for $P_S(Y|\mathbf{X})$, which is convenient as we know both $Y$ and $\mathbf{X}$ in $D_S$. In reality, however, the right-hand side of \eqref{eq:chain} can only be known to some extent due to three potential issues:
\begin{enumerate}
\item[(P1)] $P_T(\mathbf{X})$ is not fully available, meaning only the marginal distributions of some of the attributes $X \in \mathbf{X}$ are known;
\item[(P2)] $P_T(Y | \mathbf{X}) \approx P_S(Y | \mathbf{X})$ but not equal, meaning the covariate shift holds in a relaxed form; and
\item[(P3)] $P_T(\mathbf{X})$ and $P_S(Y | \mathbf{X})$ are estimated given a sample data from the respective populations, and, as such, the estimation can have some variability.
\end{enumerate}
Issue P3 is pervasive in statistical inference and machine learning. We do not explicitly\footnote{We tackle it implicitly through the Law of Large Numbers by restricting the estimation of probabilities to contexts with a minimum number of instances. This is managed in decision tree learning by a parameter that stops splitting a node if the number of instances at the node is below a minimum threshold.} consider it in our problem statement. Therefore, the main research question that we intend to address in this paper is:
\smallskip
\textit{RQ1. With reference to the decision tree classifier, which type and amount of target domain knowledge (issue P1) can help reduce the loss in accuracy at the variation of relaxations of covariate shift (issue P2)?}
\smallskip
In the interest of fairness, as domain shift can have a detrimental impact on performance of the model for some demographic groups over others, a subsequent question to address in this paper is:
\smallskip
\textit{RQ2. How does the loss in accuracy by the decision tree classifier, based on the issues P1 and P2, affect a fairness metric used for protected groups of interest in the target domain?}
\smallskip
The range of cases to consider in \eqref{eq:chain} for both RQ1 and RQ2 falls between two bordering cases.
\textbf{\textit{No target domain knowledge:}} this consists of training $f$ on the source data and using it on the target data without any change or correction. Formally, we estimate $P_T(\mathbf{X})$ as $P_S(\mathbf{X})$ and $P_T(Y | \mathbf{X})$ as $P_S(Y | \mathbf{X})$.
\textbf{\textit{Full target domain knowledge:}} this consists of training $f$ on the source data and using it on the target data, but exploiting full knowledge of $P_T(\mathbf{X})$ in the learning algorithm to replace $P_S(\mathbf{X})$.
\textbf{\textit{Partial target domain knowledge:}} consequently, the in-between case consists of training a decision tree on the source data and using it on the target data, but exploiting partial knowledge of $P_T(\mathbf{X})$ in the learning algorithm and complementing it with knowledge of $P_S(\mathbf{X})$.
The form of partial knowledge depends on the information available on $\mathbf{X}$, or subsets of it. Here, we consider a scenario where for $\mathbf{X}' \subseteq \mathbf{X}$, an estimate of $P(\mathbf{X}')$ is known only for $|\mathbf{X}'| \leq 2$ (or $|\mathbf{X}'| \leq 3$), namely we assume to know bi-variate (resp., tri-variate) distributions only, but not the full joint distribution. This scenario occurs, for example, when using cross-tabulation data from official statistics.
We specify how to exploit the knowledge of $P_T(\mathbf{X})$ for a decision tree classifier in Section~\ref{sec:DADT}, introducing what we refer to as a \textit{domain-adaptive decision tree} (DADT) classifier. But first, we introduce the required technical background on decision tree learning in the context of domain adaptation in the remainder of this section.
\subsection{Decision Tree Learning}
\label{sec:ProblemSetting.DecisionTreeLearning}
Top-down induction algorithms grow a decision tree classifier \cite{Hastie2009_ElementsSL} from the root to the leaves. At each node, either the growth stops, producing a leaf, or a split condition determines child nodes that are recursively grown.
Common stopping criteria include node purity (all instances have the same class value), data size (the number of instances is lower than a threshold), and tree depth (under a maximum depth allowed). Split conditions are evaluated based on a split criterion, which selects one of them or possibly none (in this case the node becomes a leaf).
We assume binary splits of the form:\footnote{There are other forms of binary splits, as well as multi-way and multi-attribute split conditions \cite{Kumar2006}.} $X=t$ for the left child and $X\neq t$ for the right child, when $X$ is a discrete attribute; or $X\leq t$ for the left child and $X > t$ for the right child, when $X$ is a continuous attribute.
We call $X$ the \textit{splitting attribute}, and $t \in X$ the \textit{threshold value}. Together they form the \textit{split condition}.
Instances of the training set are passed from a node to its children by partitioning them based on the split condition.
The conjunction of split conditions from the root to the current node being grown is called the \textit{current path} $\varphi$. It determines the instances of the training dataset being considered at the current node.
The predicted probability of class $y$ at a leaf node is an estimation of $P(Y=y|\varphi)$ obtained by the relative frequency of $y$ among the instances of the training set reaching the leaf or, equivalently, satisfying $\varphi$.
\subsection{Entropy and the Information Gain Split Criterion}
\label{sec:ProblemSetting.Entropy}
We focus on the information gain split criterion. It is, along with Gini, one of the standard split criteria used. It is also based on information theory via entropy \cite{Cover1999ElementsIT}, which links a random variable's distribution to its information content. The \textit{entropy} ($H$) measures the information contained within a random variable based on the uncertainty of its events. The standard definition used is \textit{Shannon's entropy} \cite{DBLP:conf/icaisc/MaszczykD08} where we define $H$ for the class random variable $Y$ at $\varphi$ as:
\begin{equation}
\label{eq:ShannonEntropy}
H(Y|\varphi) = \sum_{y \in Y} - P(Y=y|\varphi) \log_2(P(Y=y|\varphi))
\end{equation}
where $-\log_2(P(Y=y|\varphi))=I(y|\varphi)$ represents the \textit{information} ($I$) of $Y=y$ at current path $\varphi$. Therefore, entropy is the expected information of the class distribution at the current path.
Intuitively, the information of class value $y$ is inversely proportional to its probability $P(Y=y|\varphi)$. The more certain $y$ is, reflected by a higher $P(Y=y|\varphi)$, the lower its information as $I(y|\varphi)$ (along with its contribution to $H(Y|\varphi)$). The general idea behind this is that there is little new information to be learned from an event that is certain to occur.
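The entropy in \eqref{eq:ShannonEntropy} is straightforward to compute from a class distribution. A minimal sketch (the function name is ours), with the extremes illustrating the intuition above:

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy H = -sum p * log2(p), skipping zero-probability
    events (their contribution tends to 0 in the limit)."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy([0.5, 0.5]))  # 1.0: maximal uncertainty for two classes
print(shannon_entropy([1.0, 0.0]))  # 0.0: a certain event carries no information
print(shannon_entropy([0.9, 0.1]))  # low uncertainty, hence low entropy
```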
The \textit{information gain} ($IG$) for a split condition is the difference between the entropy at a node and the weighted entropy at the child nodes determined by the split condition $X$ and $t$ under consideration.
$IG$ uses \eqref{eq:ShannonEntropy} to measure how much information is contained under the current path.
For a discrete splitting attribute $X$ and threshold $t$, we have:
\begin{align}
\label{eq:BasicIGd}
IG(X, t | \varphi) = H(Y|\varphi) - P(X=t|\varphi) H(Y|\varphi, X=t) - P(X\neq t|\varphi) H(Y|\varphi, X \neq t)
\end{align}
and for a continuous splitting attribute $X$ and threshold $t$:
\begin{align}
\label{eq:BasicIGc}
IG(X, t | \varphi) = H(Y|\varphi) - P(X\leq t|\varphi) H(Y|\varphi, X\leq t) - P(X> t|\varphi) H(Y|\varphi, X > t)
\end{align}
where the last two terms in \eqref{eq:BasicIGd}--\eqref{eq:BasicIGc} represent the total entropy obtained from adding the split condition $X$ and $t$ to $\varphi$.\footnote{
Formally, together these last two terms represent the conditional entropy $H(Y|X,\varphi)$, written for the binary split case we are considering such that:
\begin{equation*}
\label{eq:CondShannonEntropy}
H(Y|X, \varphi) =
\sum_{x \in X} P(X=x|\varphi) \sum_{y \in Y} - P(Y=y|\varphi, X=x) \log_2(P(Y=y|\varphi, X=x))
\end{equation*}
}
The selected split attribute and threshold are those with maximum $IG$, namely $\mathit{arg max}_{X, t} IG(X, t | \varphi)$.
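The split selection just described can be sketched as follows; the helper names are ours, and continuous thresholds are scanned over observed attribute values only (a common simplification, not the paper's implementation):

```python
import numpy as np

def entropy(y):
    # class entropy from label counts, as in Eq. (ShannonEntropy)
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(y, mask):
    """IG of a binary split: parent entropy minus weighted child entropies."""
    p_left = mask.mean()
    if p_left in (0.0, 1.0):  # degenerate split carries no information
        return 0.0
    return entropy(y) - p_left * entropy(y[mask]) - (1 - p_left) * entropy(y[~mask])

# pick the best (attribute, threshold) split by maximum IG
X = np.array([[0, 1.0], [0, 2.0], [1, 3.0], [1, 4.0]])
y = np.array([0, 0, 1, 1])
best = max(
    ((j, t, info_gain(y, X[:, j] <= t))
     for j in range(X.shape[1]) for t in np.unique(X[:, j])),
    key=lambda s: s[2],
)
print(best)  # a perfect split reaches IG = 1.0 on this toy data
```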
\subsection{On Estimating Probabilities}
\label{sec:ProblemSetting.EstProb}
Probabilities and, thus, $H$~\eqref{eq:ShannonEntropy} and $IG$~\eqref{eq:BasicIGd}--\eqref{eq:BasicIGc} are defined for random variables.
As the decision tree grows, the probabilities are estimated by frequency counting on the subset of the training set $D$ reaching the current node, i.e., satisfying $\varphi$:
\begin{equation}
\label{eq:EstProb}
\hat{P}(X=t \, | \, \varphi) = \frac{|\{w \in D \mid \varphi(w) \wedge w[X] = t\}| }{|\{w \in D \mid \varphi(w)\}| }
\, ,\;\;\;
\hat{P}(Y=y \, | \, \varphi) = \frac{|\{w \in D \mid \varphi(w) \wedge w[Y] = y\}| }{|\{w \in D \mid \varphi(w)\}| }
\end{equation}
which gives the probability estimators for $P(X=t \,|\, \varphi)$ and $P(Y=y \,|\, \varphi)$ for all $t \in X$ and $y \in Y$. The denominator represents the number of instances $w$ in $D$ that satisfy the condition $\varphi$ (written $\varphi(w)$) and the numerator the number of those instances that further satisfy $X=t$ (respectively, $Y=y$). We use the estimated probabilities \eqref{eq:EstProb} to estimate $H$ and $IG$.
The hat in \eqref{eq:EstProb} differentiates the estimated probability, $\hat{P}$, from the population probability, $P$. Frequency counting is supported by the Law of Large Numbers (LLN). Assuming that the training set $D$ is an \textit{i.i.d.} sample from the probability distribution $P$, we expect $\hat{P}(X=t|\varphi) \approx P(X=t \, | \, \varphi)$ and $\hat{P}(Y=y|\varphi) \approx P(Y=y \, | \, \varphi)$ as long as we have enough training observations in $D$, which is often the case when training $f$.
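Frequency counting as in \eqref{eq:EstProb} can be sketched directly; `est_prob`, `phi`, and `cond` are hypothetical names for the current path and the event being estimated:

```python
import numpy as np

def est_prob(D, phi, cond):
    """Frequency-count estimate of P(cond | phi) on dataset D.

    `phi` and `cond` are boolean functions over rows (the current path
    and the event of interest); returns
    |{w : phi(w) and cond(w)}| / |{w : phi(w)}|.
    """
    reach = np.array([phi(w) for w in D])
    hit = np.array([cond(w) for w in D])
    return (reach & hit).sum() / reach.sum()

# rows: (X1, X2, Y) -- estimate P(Y=1 | X1=1) by counting
D = [(1, 0, 1), (1, 1, 1), (1, 1, 0), (0, 0, 0)]
print(est_prob(D, phi=lambda w: w[0] == 1, cond=lambda w: w[2] == 1))  # 2/3
```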
A key issue is whether $D$ is representative of the population of interest. This is important as $\hat{P}$ will approximate the $P$ behind $D$.
When training any classifier, the key assumption is that the $P$ probability distribution is the same for the data used for growing the decision tree (the training dataset) and for the data on which the decision tree makes predictions (the test dataset).
Under covariate shift \eqref{eq:CovariateShiftDefinition} this assumption does not hold. Instead, the training dataset belongs to the source domain $D_S$ with probability distribution $P_S$ and the test dataset belongs to the target domain $D_T$ with probability distribution $P_T$, such that $P_T \neq P_S$. To stress this point, we use the name \textit{source data} for data sampled from $D_S$ and \textit{target data} for data sampled from $D_T$.
The estimated probabilities \eqref{eq:EstProb}, and the subsequent estimations for $H$ \eqref{eq:ShannonEntropy} and $IG$ \eqref{eq:BasicIGd}--\eqref{eq:BasicIGc} based on source data alone can be biased, in statistical terms, relative to the intended target domain (recall issue P1). This can result, among other issues, in poor model performance from the classifier. This is why we propose extending \eqref{eq:EstProb} by embedding them with target domain knowledge.
\subsection{Distance between Probability Distributions}
\label{sec:sec:ProblemSetting.DistBtwProbs}
Measuring the distance between the probability distributions $P_S$ and $P_T$ is relevant for detecting distribution shifts.
We resort to the \textit{Wasserstein distance} $W$ between two probability distributions to quantify the amount of covariate shift and the robustness of target domain knowledge. In the former case, we quantify the distance between $P_S(Y|\mathbf{X})$ and $P_T(Y|\mathbf{X})$. In the latter case, the distance between $P_S(X|\varphi)$ and $P_T(X|\varphi)$.
We define $W$ between $P_S$ and $P_T$ as:
\begin{equation}
\label{eq:WDistance}
W(P_S, P_T) = \int_{-\infty}^{+\infty} | \mathcal{P}_S(x) - \mathcal{P}_T(x) | \, dx
\end{equation}
where $\mathcal{P}_S$ and $\mathcal{P}_T$ are the cumulative distribution functions (CDFs) of $P_S$ and $P_T$.
We can estimate $\mathcal{P}_S$ and $\mathcal{P}_T$ from the data using \eqref{eq:EstProb}. The smaller $W$ is, the closer are the two distributions, indicating similar informational content.
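A minimal empirical version of \eqref{eq:WDistance} for one-dimensional samples, computing the area between the two empirical CDFs on the merged grid of observed values (an illustrative sketch, not the paper's implementation; the function name is ours):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two empirical 1-D samples: the area between
    their empirical CDFs, integrated piecewise over the merged grid."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    grid.sort()
    # empirical CDF values on each interval [grid[i], grid[i+1])
    cdf_a = np.searchsorted(a, grid[:-1], side="right") / len(a)
    cdf_b = np.searchsorted(b, grid[:-1], side="right") / len(b)
    return float(np.sum(np.abs(cdf_a - cdf_b) * np.diff(grid)))

# identical samples are at distance 0; shifting a sample by c gives W = c
x = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein_1d(x, x))        # 0.0
print(wasserstein_1d(x, x + 0.5))  # 0.5
```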
Under covariate shift \eqref{eq:CovariateShiftDefinition}, it is assumed that $P_S(Y|\mathbf{X}) = P_T(Y|\mathbf{X})$, which allows us to focus on the issue of $P_S(\mathbf{X}) \neq P_T(\mathbf{X})$. This equality is often not verified in practice. We plan to use $W$, along with an approximation to $P_T(Y|\mathbf{X})$ (as, recall, $Y \notin D_T$), to measure the distance between these two conditional probabilities to ensure that our proposed embedding of target domain knowledge is impactful (recall issue P2).
Measuring this distance will allow us to evaluate how relaxations of $P_S(Y|\mathbf{X}) = P_T(Y|\mathbf{X})$ affect the impact of our proposed target domain embedding.
\subsection{Post-Processing Fairness}
\label{sec:ProblemSetting.FairnessRegu}
Together Sections~\ref{sec:ProblemSetting.EstProb}~and~\ref{sec:sec:ProblemSetting.DistBtwProbs} address RQ1. To address RQ2, we rely on the known link between a model's performance as measured under some accuracy metric and its fairness as measured under some fairness metric \cite{DBLP:conf/icml/DuttaWYC0V20, DBLP:journals/ijis/ValdiviaSC21, DBLP:conf/sigecom/LiangLM22}.
In line with our scenario, the practitioner, now concerned beyond model performance, wants to understand the performance of the trained classifier w.r.t.\ certain demographic groups in the target domain. The practitioner resorts to applying a post-processing method around $f$ that adjusts the predictions under a chosen fairness metric; we do this for demographic parity and separately for equal opportunity. In practice, this comes down to using a wrapper function based on \citet{DBLP:conf/nips/HardtPNS16}. The implementation details are covered in Appendix~\ref{sec:Expiriments.Fairness}.
We focus on this model-agnostic post-processing fairness intervention so that we can measure the impact of DADT both on accuracy and on a fairness intervention. Post-processing methods rely on the non-DA setting, meaning $P_S(Y, \mathbf{X}) = P_T(Y, \mathbf{X})$. Classifiers are a statement on the joint probability distribution of the training data. Each leaf in a decision tree, e.g., answers how its current path relates to the class, $P(Y=y|\varphi)$, which in turn relates to model accuracy through the predicted class: for an instance $\mathbf{X}=\mathbf{x}$ that follows $\varphi$ we predict $f(\mathbf{x})=\hat{y}$ hoping that $\hat{y}=y$.
Under DA, post-processing methods essentially modify only $P_S(Y, \mathbf{X})$. Even if the practitioner trained an oracle-like standard decision tree, the post-processing intervention would only address fairness issues on the population for which the classifier is not intended. The issue is that under covariate shift and other DA settings, the source of bias we are addressing lies in the source domain relative to the target domain. We study how the domain-adaptive decision tree compares to the standard decision tree under the same post-processing fairness step. We hypothesize that under a DA scenario, said step is more impactful for a domain-adaptive than for a standard decision tree, as the former accounts for the target domain information during training.
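As a rough, self-contained sketch of the kind of group-dependent thresholding such a post-processing step performs (our simplification for demographic parity, with illustrative names and data; it is not the implementation of \citet{DBLP:conf/nips/HardtPNS16}):

```python
import numpy as np

def group_thresholds_for_parity(scores, groups, grid=None):
    """Toy sketch of per-group score thresholds chosen so that
    positive-prediction rates match across groups (demographic parity),
    here by matching each group to the unadjusted overall rate."""
    if grid is None:
        grid = np.unique(scores)
    target_rate = (scores >= 0.5).mean()  # rate of the unadjusted classifier
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # choose the threshold whose group rate is closest to the target
        rates = np.array([(s >= t).mean() for t in grid])
        thresholds[g] = grid[int(np.argmin(np.abs(rates - target_rate)))]
    return thresholds

scores = np.array([0.1, 0.4, 0.6, 0.9, 0.2, 0.3, 0.7, 0.8])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_thresholds_for_parity(scores, groups))
```

With the per-group thresholds returned here, both groups receive the same positive-prediction rate, which is exactly the parity condition the wrapper enforces.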
\section{Domain-Adaptive Decision Trees}
\label{sec:DADT}
We present our approach for addressing covariate shift by embedding target domain knowledge when learning the decision tree classifier.
We propose an \textit{in-processing step} under the information gain split criterion, motivating what we refer to as \textit{domain-adaptive decision trees} (DADT) learning.
As discussed in Section~\ref{sec:ProblemSetting}, when growing the decision tree, the estimated probabilities \eqref{eq:EstProb} used for calculating $H$ \eqref{eq:ShannonEntropy} and thus $IG$ \eqref{eq:BasicIGd}--\eqref{eq:BasicIGc} at the current path $\varphi$ are derived over a training dataset, which is normally a dataset over the source domain $D_S$. For the split condition $X=t$, it follows that $\hat{P}(X=t | \varphi) \approx P_S(X=t | \varphi)$, which is an issue under covariate shift. We instead want that $\hat{P}(X=t | \varphi) \approx P_T(X=t | \varphi)$. We propose to embed in the learning process knowledge from the target domain $D_T$, reducing the potential bias in the estimation of the probabilities and, in turn, reducing the bias of the trained decision classifier.
\subsection{Embedding Target Domain Knowledge}
\label{sec:DADT:embedding}
There are two probability forms that are to be considered when growing a decision tree for the current path $\varphi$: $P(X=t|\varphi)$ in \eqref{eq:BasicIGd} (and, respectively, $P(X\leq t|\varphi)$ in \eqref{eq:BasicIGc}) and $P(Y=y|\varphi)$ in \eqref{eq:ShannonEntropy}--\eqref{eq:CondShannonEntropy}. In fact, the formulas of entropy and information gain only rely on those two probability forms, and on the trivial relation $P(X\neq t|\varphi) = 1 - P(X=t|\varphi)$ for discrete attributes (and, respectively, $P(X> t|\varphi) = 1 - P(X\leq t|\varphi)$ for continuous attributes).
It follows that we can easily estimate $\hat{P}_S(X=t|\varphi)$ and $\hat{P}_S(Y|\varphi)$ using the available source domain knowledge.
\subsubsection{Estimating $P(X=t|\varphi)$}
We assume that some target domain knowledge
is available, from which we can estimate $\hat{P}_T(X|\varphi)$,
in the following cases:\footnote{
Actually, since we consider $x$ in the (finite) domain of $X$ the two forms are equivalent, due to basic identities $P_T(X\leq x|\varphi) = \sum_{X \leq x} P_T(X = x|\varphi)$ and $P_T(X = x|\varphi) = P_T(X\leq x|\varphi) - P_T(X\leq x'|\varphi)$, where $x'$ is the element preceding $x$ in the domain of $X$. Moreover, by definition of conditional probability, we have $P(X=t|\varphi) = P(X=t, \varphi)/P(\varphi)$ and then target domain knowledge boils down to estimates of probabilities of conjunction of equality conditions. Such form of knowledge is, for example, provided by cross-tables in official statistics data.
}
\begin{align}
\label{eq:tdk}
\hat{P}_T(X=x|\varphi) \approx P_T(X=x|\varphi) \mbox{\rm\ for $X$ discrete,} \quad \hat{P}_T(X\leq x|\varphi) \approx P_T(X\leq x|\varphi) \mbox{\rm\ for $X$ continuous}
\end{align}
We extend $\hat{P}(X=t|\varphi)$ to the case in which $\hat{P}_T(X=t|\varphi)$ is not directly available in the target domain knowledge $D_T$, by convex combinations for discrete and continuous attributes using the source domain knowledge $D_S$:
\begin{eqnarray}
\label{eq:aff}
& \hat{P}(X=t|\varphi) = \alpha \cdot \hat{P}_S(X=t|\varphi) + (1-\alpha) \cdot \hat{P}_T(X=t|\varphi') \\
\label{eq:aff2}
& \hat{P}(X\leq t|\varphi) = \alpha \cdot \hat{P}_S(X\leq t|\varphi) + (1-\alpha) \cdot \hat{P}_T(X\leq t|\varphi')
\end{eqnarray}
where $\varphi'$ is a maximal subset of split conditions in $\varphi$ for which $\hat{P}_T(X=t|\varphi')$ is in the target domain knowledge, and $\alpha \in [0, 1]$ is a tuning parameter to be set. In particular, setting $\alpha=1$ boils down to estimating probabilities based on the source data only.
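The combination in \eqref{eq:aff}--\eqref{eq:aff2} is a one-liner; the sketch below (the function name is ours) shows how $\alpha$ trades off the two estimates, with $\alpha=1$ recovering the source-only estimate per the formula:

```python
def blend_estimate(p_source, p_target, alpha):
    """Convex combination of source- and target-based probability
    estimates: alpha * P_S + (1 - alpha) * P_T, with alpha in [0, 1]."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * p_source + (1.0 - alpha) * p_target

# alpha = 1 recovers the source-only estimate; alpha = 0 the target-only one
print(blend_estimate(0.30, 0.70, alpha=1.0))  # 0.30
print(blend_estimate(0.30, 0.70, alpha=0.0))  # 0.70
print(blend_estimate(0.30, 0.70, alpha=0.5))  # midway between the two
```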
With such assumptions, $P(X=t|\varphi)$ in \eqref{eq:BasicIGd} (resp., $P(X\leq t|\varphi)$ in \eqref{eq:BasicIGc}) can be approximated from the target domain knowledge as $P(X=t|\varphi) \approx \hat{P}(X=t|\varphi)$ (respectively, $P(X\leq t|\varphi) \approx \hat{P}(X \leq t|\varphi)$) to derive $IG$.
\subsubsection{Estimating $P(Y=y|\varphi)$}
Let us consider the estimation of $P(Y=y|\varphi)$ in \eqref{eq:ShannonEntropy} over the target domain.
Since $Y \notin D_T$, it is legitimate to ask whether
$P(Y=y|\varphi)$ is the same probability in the target as in the source domain when growing the decision tree classifier.
If it were, then we could simply estimate $P_T(Y=y|\varphi) \approx \hat{P}_S(Y=y|\varphi)$. Unfortunately, the answer is no.
Recall that the covariate shift assumption \eqref{eq:CovariateShiftDefinition} states that $P_S(Y|\mathbf{X}) = P_T(Y|\mathbf{X})$, namely that the probability mass function of $Y$ conditional on fixing \textit{all of the} variables in $\mathbf{X}$ is the same in the source and target domains:
\begin{align}
\label{eq:covshift}
\forall \mathbf{x} \in \mathbf{X}, \forall y \in Y, \ P_S(Y=y|\mathbf{X=x}) = P_T(Y=y|\mathbf{X=x})
\end{align}
However, this equality may not hold when growing the tree, as the current path $\varphi$ does not necessarily fix all of the $\mathbf{X}$'s, i.e., \eqref{eq:covshift} does not necessarily imply $\forall \varphi \; P_T(Y=y|\varphi) = P_S(Y=y|\varphi)$. This situation is, in fact, an instance of Simpson's paradox \cite{Simpson1951_Interpretation}. We show this point with Example~\ref{example:EstYUnderCovShift} in Appendix~\ref{Appendix:SuppMaterial}. Therefore, we rewrite $P_T(Y=y|\varphi)$ using the law of total probability as follows:
\begin{align}
\label{eq:rewry}
P_T(Y=y|\varphi) = \sum_{\mathbf{x} \in \mathbf{X}} P_T(Y=y|\mathbf{X}=\mathbf{x}, \varphi) \cdot P_T(\mathbf{X}=\mathbf{x}|\varphi) = \sum_{\mathbf{x} \in \mathbf{X}} P_S(Y=y|\mathbf{X}=\mathbf{x}, \varphi) \cdot P_T(\mathbf{X}=\mathbf{x}|\varphi)
\end{align}
where the final equality exploits the covariate shift assumption \eqref{eq:covshift} when it holds for a current path $\varphi$.
Now, instead of taking $P_S(Y|\varphi)=P_T(Y|\varphi)$ for granted, which we should not under DADT learning,
we rewrite $P_T(Y=y|\varphi)$ in terms of probabilities over source domain, $P_S(Y=y|\mathbf{X}=\mathbf{x}, \varphi)$, and target domain, $P_T(\mathbf{X}=\mathbf{x}|\varphi)$, knowledge.
Notice that \eqref{eq:rewry} allows us to approximate $P_T(Y | \mathbf{X})$ under $\varphi$ without having access to $P_T(Y)$. On top of embedding target domain knowledge, this is useful for addressing issue P2 (Section~\ref{sec:ProblemSetting}, in particular, \ref{sec:sec:ProblemSetting.DistBtwProbs}).
Varying $\mathbf{x} \in \mathbf{X}$ over all possible combinations, as stipulated in \eqref{eq:rewry}, is however not feasible in practice, as it would require extensive target domain knowledge to estimate $P_T(\mathbf{X}=\mathbf{x}|\varphi)$ for all $\mathbf{x} \in \mathbf{X}$. This would still be a practical issue even in the ideal case in which we had a full sample of the target domain, as it would require the sample to be large enough to observe each value $\mathbf{x} \in \mathbf{X}$ in $D_T$ w.r.t. $D_S$. Therefore, to calculate \eqref{eq:rewry} we choose to vary values with respect to a \textit{single attribute} $X_w \in \mathbf{X}$ and define the estimate of $P$ as:
\begin{align}
\label{eq:wpyphi}
\hat{P}_{T}(Y=y|\varphi) = \sum_{x \in X_w} \hat{P}_S(Y=y|X_w=x, \varphi) \cdot \hat{P}_T(X_w=x|\varphi)
\end{align}
where we now use only target domain knowledge about $P_T(X_w=x|\varphi)$ instead of spanning the entire attribute space $\mathbf{X}$.
The attribute $X_w$ is chosen such that the summation \eqref{eq:wpyphi} is close to the estimand $P_T(Y=y|\varphi)$ of \eqref{eq:rewry}.
Such an attribute $X_w$, however, may depend on the current path $\varphi$, which would require target domain knowledge on the conditional distribution of $Y$. Hence, we only consider the empty $\varphi$, and then choose $X_w$ based on:
\begin{align}
\label{eq:wpy}
X_w = \arg\min_{X} W( \hat{P}^X(Y), \hat{P}_T(Y) ) \quad \mbox{\rm where\ } \hat{P}^X(Y=y) = \sum_{x \in X} \hat{P}_S(Y=y|X=x) \cdot \hat{P}_T(X=x)
\end{align}
namely, such that the $W$ distance \eqref{eq:WDistance} between the estimated and the target domain marginal probability of $Y$ is minimal. In terms of target domain knowledge $D_T$, this requires knowing an approximation $\hat{P}_T(Y)$ of the marginal distribution of the class in the target domain. We slightly depart here from our assumption of no knowledge of $Y \in D_T$ by requiring an estimate of its marginal (unconditional) distribution on the target population. If this is not feasible, then we assume some expert input on an attribute $X_w$ such that $\hat{P}_S(Y=y|X=x) \approx \hat{P}_T(Y=y|X=x)$, as a way to minimize the first term of the summation \eqref{eq:wpy}.
To summarize, we use the optimal $X_w$ as from \eqref{eq:wpy} to derive \eqref{eq:wpyphi} as an empirical approximation to \eqref{eq:rewry}. This is how we estimate $P(Y|\varphi)$ over the target domain.
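The two-step procedure can be sketched as follows for the empty path $\varphi$; the toy probability tables and all names are illustrative, and the Wasserstein helper assumes a common ordered support with unit spacing (our simplification):

```python
def wasserstein_1d(p, q):
    """1-Wasserstein distance between two discrete distributions given
    as probability vectors over a common ordered support with unit
    spacing: the sum of absolute CDF differences."""
    cdf_p = cdf_q = dist = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        dist += abs(cdf_p - cdf_q)
    return dist

def estimate_p_y(p_y_given_x_source, p_x_target):
    """Inner sum of Eq. (wpy): P^X(Y=y) = sum_x P_S(Y=y|X=x) * P_T(X=x).
    `p_y_given_x_source` maps each value x to a class-probability list."""
    n_classes = len(next(iter(p_y_given_x_source.values())))
    est = [0.0] * n_classes
    for x, p_x in p_x_target.items():
        for y in range(n_classes):
            est[y] += p_y_given_x_source[x][y] * p_x
    return est

def choose_x_w(cond_tables, target_marginals, p_y_target):
    """Eq. (wpy): pick the attribute whose induced estimate of the
    class marginal is closest, in W distance, to the known P_T(Y)."""
    return min(
        cond_tables,
        key=lambda a: wasserstein_1d(
            estimate_p_y(cond_tables[a], target_marginals[a]), p_y_target
        ),
    )
```

In this sketch, an attribute whose source-domain conditional table, reweighted by the target marginal, reproduces the target class distribution well is preferred as $X_w$.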
\subsection{How Much Target Domain Knowledge?}
\label{sec:DADT:knowledge}
We can now formalize the range of cases based on the availability of $P_T(\mathbf{X})$ described in Section \ref{sec:ProblemSetting}. Under
\textit{\textbf{no target domain knowledge}}, we have no information available on $D_T$, which means that $\hat{P}(X=x|\varphi) = P_S(X=x|\varphi)$. This amounts to setting $\alpha=1$ in \eqref{eq:aff}--\eqref{eq:aff2}, and, whatever $X_w$ is, \eqref{eq:wpyphi} boils down to $\hat{P}(Y=y|\varphi) = P_S(Y=y|\varphi)$.
In short, both probability estimations boil down to growing the DADT classifier using the source data $D_S$ without any modification or, simply, growing a standard decision tree classifier.
Similarly, under \textit{\textbf{full target domain knowledge}} we have target domain knowledge for all attributes in $D_T$, along with enough instances to estimate both probabilities. This amounts to setting $\alpha=0$ in \eqref{eq:aff}--\eqref{eq:aff2}, and to knowing which attribute $X_w$ minimizes \eqref{eq:wpy}. Full target domain knowledge is the strongest possible assumption within our DADT approach, but not in general. For validation purposes (Section~\ref{sec:Experiments}), we move away from our unsupervised setting and assume $Y \in D_T$ to set up an \textit{additional baseline} under full knowledge of $D_T$:
\textit{\textbf{target-to-target baseline.}} In this scenario, the decision tree is grown \textit{and} tested exclusively on the target data. Such a scenario does not require covariate shift, since probabilities $P(X=t|\varphi)$ and $P(Y=y|\varphi)$ are estimated directly over the target domain. This is \textit{the ideal case} as we train the classifier on the intended population.
Finally, under \textit{\textbf{partial target domain knowledge}} we consider cases where we have access to estimates of $P(\mathbf{X}')$ only for some subsets $\mathbf{X}' \subseteq \mathbf{X}$. This allows estimating $P(X=x|\varphi)$ only if $X$ and the variables in $\varphi$ are in one of those subsets $\mathbf{X}'$. When the target domain information is insufficient, DADT resorts to the source domain information in \eqref{eq:aff}--\eqref{eq:aff2} through a linear combination of both. The weight $\alpha$ in such a linear combination should be set proportionally to the contribution of the source domain information. We refer to Section \ref{sec:AccuracyResults} for our experimental setting of $\alpha$.
\section{Experiments: State Public Coverage}
\label{sec:Experiments}
\subsection{The ACSPublicCoverage dataset}
We consider the \textit{ACSPublicCoverage} dataset---an excerpt of the 2017 U.S. Census data \cite{dingRetiringAdultNew2021}---that provides the same feature sets for different geographical regions based on each of the US states, which may have different distributions. This allows us to examine the impact of our method under a wide range of distribution shifts. We utilize the prediction task, constructed by the dataset creators, of whether or not a low-income individual is covered by public health coverage. Inspired by this experimental setting, we imagine a task where a public administrator wants to identify individuals who are not receiving the public benefits to which they are entitled; however, information about who does and does not receive these benefits is only available for a population different from the target population, say another state. This administrator is nevertheless likely to have some information about the target population distribution, such as a population breakdown by demographics like age, race, and gender. To address RQ1, we test whether, with DADT, we can utilize that information to train an improved model for the new state, compared to blindly applying a model trained in the source state. Additionally, we address RQ2 by testing the impact of DADT compared to the baseline on two fairness metrics.
\subsection{Experimental Setup}
The design of the ACSPublicCoverage dataset allows us to set up a scenario that mirrors our example of the retail store in Section~\ref{sec:Introduction}: having no labeled data for the target domain, but some knowledge of the (unconditional) distribution of the target domain. In this case we do in fact have labeled data for each state; however, we implement DADT without utilizing it. In Section~\ref{sec:Experiments.CovariateShift} we utilize our access to the labeled data to test our assumption that DADTs are suitable for addressing covariate shift. Given the dataset design, we are able to utilize the distribution of the predictive attributes in the target domain as our source of outside knowledge to adjust the information gain calculation. Unless otherwise stated, we consider the attributes SCHL (educational attainment), MAR (marital status), AGEP (age), SEX (male or female), CIT (citizenship status), and RAC1P (race), with AGEP being continuous and all others discrete. Data was accessed through the Python package Folktables\footnote{\url{https://github.com/zykls/folktables}}.
We consider pairs of source and target datasets consisting of data from different US states, with a model trained in each of the fifty states being tested on every state, for a total of 2500 train/test pairs. The decision trees are all trained on 75\% of the source data $D_S$, and tested on 25\% of the target data $D_T$. The stopping criteria are as follows: a node must contain at least 5\% of the training data, splitting stops when all instances have the same class value (purity level set to 100\%), and the maximum tree depth is 8.
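The stopping criteria above can be summarized as a single predicate; this helper and its parameter names are hypothetical, mirroring the thresholds stated in the text:

```python
def should_stop(n_node, n_train, depth, purity,
                min_frac=0.05, max_depth=8):
    """A node is not split further if it holds less than 5% of the
    training data, is fully pure (all instances share the same class),
    or has reached the maximum depth of 8."""
    return (n_node < min_frac * n_train
            or purity >= 1.0
            or depth >= max_depth)
```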
\begin{figure}[t]
\centering
\includegraphics[width=0.51\linewidth]{img/fig1.png}\hspace{0.4cm}
\includegraphics[width=0.40\linewidth]{img/fig2acc.png}
\caption{The scatter-plot (left) relates the Wasserstein distances for each attribute and source-target US state pair. The x-axis shows the distance between the source $P_S(X)$ and target $P_T(X)$ marginal distributions of each attribute, while the y-axis shows the distance between the target class distribution $P_T(Y)$ and its estimate $\hat{P}^{X}(Y)$ given $X$.
The heat-map (right) shows the difference in accuracy between the no target domain knowledge case $ACC_{ntdk}$ and the target-to-target baseline $ACC_{tt}$ for each source-target US state pair. Both figures show a lack of an overall pattern across all states in \textit{ACSPublicCoverage}. The dataset does not in general satisfy the covariate shift assumption (left).}
\label{fig:srtg}
\end{figure}
\subsection{Results: On Model Performance}
\label{sec:Experiments.CovariateShift}
\subsubsection{Accuracy Results}\label{sec:AccuracyResults}
We now address RQ1 (Section~\ref{sec:ProblemSetting}).
The scatter-plot in Fig. \ref{fig:srtg} (left) relates the Wasserstein distances for each attribute and source-target pair. On the x-axis, there is the distance between the marginal attribute distributions, i.e., $W(P_S(X), P_T(X))$. On the y-axis, there is the distance between the target and estimated marginal class distributions, i.e., $W( \hat{P}^X(Y), P_T(Y) )$ from \eqref{eq:wpy}. The distances between the marginal attribute distributions are rather small, with the exception of CIT and RAC1P.
The distances between the target and estimated class distributions are instead much larger, for all attributes. The plot shows that the \textit{ACSPublicCoverage} dataset does not in general satisfy the covariate shift assumption (at least for marginal distributions), but rather the opposite: close attribute distributions and distant conditional class distributions. This fact will help us explore how much our approach relies on the covariate shift assumption. Below we report accuracy at varying levels of target domain knowledge (issue P1, Section~\ref{sec:ProblemSetting}), as defined in Section~\ref{sec:DADT:knowledge}.
\textit{\textbf{Case 1: no target domain knowledge (ntdk) vs target-to-target baseline (tt).}}
Let us consider the scenario of no target domain knowledge, i.e., training a decision tree on the source training data and testing it on the target test data. We compare the decision tree accuracy in this scenario (let us call it $ACC_{ntdk}$) to the accuracy of training a decision tree on the target training data and testing on the target test data ($ACC_{tt}$), a.k.a. the target-to-target baseline. Recall that accuracy estimates, on a test set (of the target domain), the probability that the classifier prediction $\hat{Y}$ is correct w.r.t. the ground truth $Y$:
\[ ACC = P_T(\hat{Y}=Y) \]
The heat-map plot in Fig. \ref{fig:srtg} (right) shows, for each source-target pair of states, the difference in accuracy $(ACC_{ntdk}-ACC_{tt}) \cdot 100$ between the no target domain knowledge scenario and the target-to-target baseline. In most of the cases the difference is negative, meaning that there is an accuracy loss in the no target domain knowledge scenario.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.51\linewidth]{img/fig3rACC.png}\hspace{0.4cm}
\includegraphics[width=0.43\linewidth]{img/fig4racc.png}
\caption{The scatter-plot (left) shows the relative gain in accuracy $rACC$, with a greener dot indicating a greater gain derived from the full target domain knowledge (\textit{ftdk}) relative to the no target domain knowledge (\textit{ntdk}). The x- and y-axes, respectively, show the covariate shift measured by the Wasserstein distance between the source-target domain pairs used for a decision tree grown in the \textit{ntdk}, $W(T_{ntdk})$, and in the \textit{ftdk}, $W(T_{ftdk})$, scenarios. It shows that a greater gain in accuracy from access to the full target domain knowledge is achieved when the covariate shift assumption is (strictly) met.
The plot (right), similarly, shows how model performance (mean $rACC$) deteriorates as the covariate shift assumption is relaxed (shown by a larger Wasserstein distance).}
\label{fig:res}
\end{figure}
\textit{\textbf{Case 2: full target domain knowledge (ftdk) vs no target domain knowledge (ntdk).}} The decision tree in this scenario is grown on the source (training) data, but probabilities are estimated from full target domain knowledge using \eqref{eq:tdk} and \eqref{eq:aff}--\eqref{eq:aff2}, with $X_w$ minimizing \eqref{eq:wpy}. In the experiments, $\hat{P}_T(X=t|\varphi)$ and
$\hat{P}_T(X\leq t|\varphi)$ are calculated from the target training data, for each $X$, $t$, and $\varphi$.
Let us compare the accuracy of the decision tree grown using full target domain knowledge (let us call it $ACC_{ftdk}$) to the one with no target domain knowledge ($ACC_{ntdk}$).
In 48\% of the source-target pairs,
the accuracy of the full target domain knowledge scenario is better than the one of the no target domain knowledge scenario ($ACC_{ftdk} > ACC_{ntdk}$), and in 26\% of the pairs they are equal ($ACC_{ftdk} = ACC_{ntdk}$).
This gross comparison needs to be investigated more in depth. Let us define the relative gain in accuracy as:
\[ rACC = \frac{ACC_{ftdk}-\mathit{min}(ACC_{ntdk},ACC_{tt})}{|ACC_{tt}-ACC_{ntdk}|} \cdot 100 \]
where $ACC_{tt}$ is the accuracy of the target-to-target baseline. The relative gain quantifies how much of the loss in accuracy in the no target domain knowledge scenario has been recovered in the full target domain knowledge scenario. The definition quantifies the recovered loss in accuracy also in the case that $ACC_{ntdk} > ACC_{tt}$, which may occur by chance. Moreover, to prevent outliers due to very small denominators, we cap $rACC$ to the $-100$ and $+100$ boundaries. The mean value of $rACC$ over all source-target pairs is $16.6$, i.e., on average our approach recovers 16.6\% of the loss in accuracy. However, there is a large variability, which we examine further in the next section.
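The relative gain can be sketched as follows; the capping follows the text, while the fallback for a zero denominator is our own assumption (the paper does not specify that corner case):

```python
def relative_gain_acc(acc_ftdk, acc_ntdk, acc_tt, cap=100.0):
    """rACC: share of the accuracy loss (w.r.t. the target-to-target
    baseline) recovered by full target domain knowledge, capped to
    [-100, 100] to guard against very small denominators."""
    denom = abs(acc_tt - acc_ntdk)
    if denom == 0.0:
        return 0.0  # assumption: no loss to recover, report zero gain
    r = (acc_ftdk - min(acc_ntdk, acc_tt)) / denom * 100.0
    return max(-cap, min(cap, r))
```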
\begin{figure}[t]
\centering
\includegraphics[width=0.40\linewidth]{img/fig4racc3.png}
\hspace{0.4cm}
\includegraphics[width=0.40\linewidth]{img/fig4rrr.png}
\caption{The plot on the left shows results of DADT, across all state pairs, with partial target domain knowledge; we show the mean $rACC$ for pairs with maximum distance $W(\hat{P}^{X_w}(Y), P_T(Y))$ for the cases of having knowledge of $na=8$ attributes (i.e., \textit{ftdk}), $na=3$, and $na=2$. The plot on the right shows the change in relative demographic parity, relative equal opportunity, and accuracy over increasing $W(\hat{P}^{X_w}(Y), P_T(Y))$.}
\label{fig:partial}
\end{figure}
\textit{\textbf{Case 3: partial target domain knowledge.}}
We reason on partial target domain knowledge under the assumption that we only know an estimate of the distribution of some subsets of $\mathbf{X}$'s but not of the full joint probability distribution $P_T(\mathbf{X})$.
We experiment assuming knowledge of $\hat{P}_T(\mathbf{X}')$ for $\mathbf{X}' \subseteq \mathbf{X}$ only if $|\mathbf{X}'| \leq 2$ (resp., $|\mathbf{X}'| \leq 3$). Equivalently, we assume knowledge of $\hat{P}(X=x|\varphi')$ only if $\varphi'$ contains at most one (resp., two) variables.
Formulas \eqref{eq:aff}--\eqref{eq:aff2} mix such a form of target domain knowledge with the estimates on the source: for $\hat{P}(X=x|\varphi)$, we compute $\varphi'$ as the subset of split conditions in $\varphi$ regarding at most the first (resp., the first two) attributes in $\varphi$ -- namely, the attributes used in the split condition at the root (resp., at the first two levels) of the decision tree, which are the most critical ones.
The weight $\alpha$ in \eqref{eq:aff}--\eqref{eq:aff2} is set dynamically as the proportion of attributes in $\varphi$ which are not in $\varphi'$. This value is $0$ when $\varphi$ tests at most one variable (resp., two variables), and greater than $0$ otherwise. We consider the proportion of attributes rather than the number of split conditions, since continuous attributes may be used in more than one split along a decision tree path.
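This dynamic setting of $\alpha$ can be sketched as follows (a hypothetical helper; attributes are compared as sets, so repeated splits on the same continuous attribute count once, as the text requires):

```python
def dynamic_alpha(phi_attrs, phi_prime_attrs):
    """Weight for the source estimate in the affine combination:
    the proportion of distinct attributes tested along the path phi
    that are NOT covered by the target-knowledge subset phi'."""
    attrs = set(phi_attrs)
    if not attrs:
        return 0.0  # empty path: nothing to weigh
    uncovered = attrs - set(phi_prime_attrs)
    return len(uncovered) / len(attrs)
```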
\subsubsection{Covariate Shift and Accuracy}
We test whether the difference in model performance is due to the fact that different pairs match or do not match the covariate shift assumption. In order to quantify the covariate shift (issue P2), we define specifically for a decision tree $T$:
\begin{align}
\label{eq:wtree}
W(T) = \sum_{\varphi\ \mbox{\rm path of a leaf of\ } T} W( \hat{P}_T(Y|\varphi), P_T(Y|\varphi)) \cdot P_T(\varphi)
\end{align}
as the average Wasserstein distance between the estimated (through \eqref{eq:wpyphi}) and target domain class distributions at the leaves of the decision tree, weighted by the leaf probability in the target domain. Notice that, since $P_T$ is unknown, we estimate the probabilities in the above formula on the test set of the target domain. We write $W(T_{ntdk})$ and $W(T_{ftdk})$ to denote the amount of covariate shift for the decision trees grown in the no target domain knowledge and full target domain knowledge scenarios, respectively.
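A sketch of how $W(T)$ in \eqref{eq:wtree} can be computed, assuming each leaf is summarized by its target-domain probability and the two class distributions; the unit-spacing Wasserstein helper is our simplification for distributions on a common ordered support:

```python
def w1(p, q):
    # 1-Wasserstein distance on a common ordered support with unit
    # spacing: the sum of absolute CDF differences.
    cp = cq = d = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        d += abs(cp - cq)
    return d

def tree_shift(leaves):
    """Covariate-shift score W(T) of a grown tree: the W distance
    between estimated and target class distributions at each leaf,
    weighted by the leaf probability in the target domain.
    `leaves` is a list of (p_leaf_target, est_dist, target_dist)."""
    return sum(p * w1(est, tgt) for p, est, tgt in leaves)
```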
The scatter plot Fig. \ref{fig:res} (left) shows the relative accuracy (in color) at the variation of $W(T_{ntdk})$ and $W(T_{ftdk})$\footnote{$W(T_{ntdk})$ and $W(T_{ftdk})$ appear to be correlated. While they are specific of their respective decision trees, they both depend on the distribution shift between the source and target domain.}.
We make the following qualitative observations:
\begin{itemize}
\item when $W(T_{ftdk})$ is small, say smaller than $0.05$, i.e., when the covariate shift assumption holds, the relative accuracy is high, i.e., using target domain knowledge allows for recovering the accuracy loss;
\item when $W(T_{ftdk})$ is larger, and, in particular, larger than $W(T_{ntdk})$,
then the gain is modest or even negative.
\end{itemize}
Let us now consider how to determine quantitatively for which pairs the relative accuracy gain is large.
Fig. \ref{fig:res} (right) reports the mean $rACC$ for source-target pairs sorted by two different distances. Ordering by $W(T_{ftdk})$ identifies more source-target pairs for which our approach works best than ordering by the marginal distance between $P_T(Y)$ and its estimate $\hat{P}^{X_w}(Y)$ using the attribute $X_w$ from \eqref{eq:wpy}. However:
\begin{itemize}
\item $W(T_{ftdk})$ requires target domain knowledge $P_T(Y|\varphi)$ for each
leaf in $T_{ftdk}$, which is impractical to obtain.
\item $W(\hat{P}^{X_w}(Y), P_T(Y))$ is easier to calculate/estimate, as it regards only the marginal distribution $P_T(Y)$. The exact knowledge of which attribute is $X_w$ is not required, as, by definition of $X_w$, using any other attribute instead of $X_w$ provides an upper bound to $W(\hat{P}^{X_w}(Y), P_T(Y))$.
\end{itemize}
In summary, Fig. \ref{fig:res} (right) shows that DADT is able to recover a good proportion of the loss in accuracy, and it provides general guidance on how much the covariate shift assumption can be relaxed. Finally, Fig. \ref{fig:partial} (left) contrasts the $rACC$ metric of the full target domain knowledge scenario with the two cases of the partial target domain knowledge scenario, in which we have knowledge of only pairs or triples of variables. There is, naturally, a degradation in the recovery of accuracy loss in the latter scenarios; e.g., for a distance of $0.03$ the mean $rACC$ is equal to $25.3\%$ for full target domain knowledge, $21.6\%$ when using triples, and $17.5\%$ when using pairs of variables\footnote{The extension of $rACC$ to partial target domain knowledge is immediate by replacing $ACC_{ftdk}$ in its definition with the accuracy $ACC_{ptdk}$ of the decision tree grown by using partial target domain knowledge.}.
Even with cross-tables, we can achieve a moderate recovery of the loss in accuracy.
\subsubsection{Fairness Results}\label{sec:fairnessresults}
Similarly, in Fig.~\ref{fig:partial} (right) we observe analogous results for DADT concerning the fairness metrics of demographic parity (DP) and equal opportunity (EOP). The implementation of DADT across its cases of no, partial, and full target domain knowledge shows that fairness recovers in the target domain under covariate shift even better than accuracy does. Due to lack of space, we explore the fairness implications of DADT in Appendix~\ref{Appendix:FairResults}.
\section{Discussion, Conclusions, Limitations and Outlook}
\label{sec:Closing}
In answer to RQ1, we see that DADTs result in both increased accuracy and better performance on fairness metrics over our baseline standard decision tree trained on $D_S$ and tested on $D_T$. Looking closer at our experimental results, we see that improvements are largest when the covariate shift assumption holds in at least a relaxed form (P2). We also see this increase when we only have partial domain knowledge (P1), though a greater amount of domain knowledge, as we define it, results in greater improvements in those metrics. Interestingly, our intervention does not perform worse than a standard decision tree even when the covariate shift assumption does not hold. Returning to the example inspired by the experimental setting, we have demonstrated that DADTs are an effective method for using existing information about a target state. We can also think back to our retail example, wherein we identified a potential feedback loop leading to a lack of stock of women's football shoes. We propose that DADTs are a method for intervening on this feedback loop: if the store identified a pool of potential customers (such as the population living near the store) with a higher rate of women than its existing customer base, DADT provides an accessible, interpretable, and performant classification model which can incorporate this additional information. In future work, different definitions of outside information should be explored, as the outside information may not have the same structure as the source and target datasets.
While we see that the benefits are clear, we want to be clear about the limitations of our method. Firstly, we show that it is most effective when the covariate shift assumption holds. We consider it a strength of our work that we specify and test this assumption, and we encourage future work on domain adaptation methods to similarly specify the conditions under which a method is suitable to be used. Secondly, we emphatically acknowledge that DADTs are not intended as a replacement for collecting updated and improved datasets. However, they are a low-cost improvement over blindly applying a model to a new or changing context. Additionally, there are cases where labelled data simply does not exist yet. We recall the retail example from the introduction, where continued use of a model trained on customer data may produce a feedback loop wherein the changed potential customer base goes undetected, as marketing decisions are never made with them in mind. Finally, DADTs are not a complete solution for achieving or ensuring fair algorithmic decision making; rather, they are an easy-to-use method for improving accuracy and fairness metric performance in the commonly occurring case of distribution shift between source and target data.
\section{Supplementary Theoretical Discussion}
\label{Appendix:SuppMaterial}
Recall the equality \eqref{eq:covshift}, which is central to covariate shift. Under a decision tree learning setting, it does not necessarily imply $P_T(Y=y|\varphi) = P_S(Y=y|\varphi)$ for a current path $\varphi$. Consider the example below:
\begin{example}
\label{example:EstYUnderCovShift}
Let $\mathbf{X} = X_1, X_2$ and $Y$ be binary variables, and $\varphi$ be $X_1=0$. Since $P(X_1, X_2, Y) = P(Y|X_1, X_2) \cdot P(X_1, X_2)$, the full distribution can be specified by stating $P(Y|X_1, X_2)$ and $P(X_1, X_2)$. Let us consider any distribution such that:
\[P_S(X_1, X_2) = P_S(X_1) \cdot P_S(X_2) \quad P_T(X_1=X_2)=1 \quad Y = I_{X_1=X_2} \]
i.e., $X_1$ and $X_2$ are independent in the source domain, while they are almost surely equal in the target domain. Notice that $Y = I_{X_1=X_2}$ readily implies that $P_S(Y|X_1, X_2) = P_T(Y|X_1, X_2)$, i.e., the covariate shift condition \eqref{eq:covshift} holds. Using the law of total probability, we calculate:
\[ P_S(Y|\varphi) = P_S(Y|X_1=0) = P_S(Y|X_1=0, X_2=0) \cdot P_S(X_2=0|X_1=0) + P_S(Y|X_1=0, X_2=1) \cdot P_S(X_2=1|X_1=0) =\]
\[ = P_S(Y|X_1=0, X_2=0) \cdot P_S(X_2=0) + P_S(Y|X_1=0, X_2=1) \cdot P_S(X_2=1) \]
where we exploited the independence of $X_1$ and $X_2$ in the source domain, and
\[ P_T(Y|\varphi) = P_T(Y|X_1=0) = P_T(Y|X_1=0, X_2=0) \cdot P_T(X_2=0|X_1=0) + P_T(Y|X_1=0, X_2=1) \cdot P_T(X_2=1|X_1=0) =\]
\[ = P_T(Y|X_1=0, X_2=0)\]
where we exploited the equality of $X_1$ and $X_2$ in the target domain.
$P_S(Y|\varphi)$ and $P_T(Y|\varphi)$ are readily different when setting $X_1, X_2 \sim Ber(0.5)$ because $P_S(Y=1|\varphi) = 1 \cdot 0.5 + 0 \cdot 0.5 \neq 1 = P_T(Y=1|\varphi)$.
\end{example}
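The example can also be checked numerically by enumerating the two joint distributions; this illustrative verification is ours, not part of the original appendix:

```python
# Source: X1, X2 independent Ber(0.5); target: X2 = X1 almost surely.
# Y is the deterministic indicator of X1 == X2 in both domains, so the
# covariate shift condition P_S(Y|X1,X2) = P_T(Y|X1,X2) holds.
source = {(x1, x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
target = {(x1, x2): (0.5 if x1 == x2 else 0.0)
          for x1 in (0, 1) for x2 in (0, 1)}

def p_y1_given_x1_0(joint):
    """P(Y=1 | X1=0) with Y = I[X1 == X2], by conditioning on X1=0."""
    mass = sum(p for (x1, _), p in joint.items() if x1 == 0)
    hit = sum(p for (x1, x2), p in joint.items() if x1 == 0 and x1 == x2)
    return hit / mass
```

Evaluating the helper on the two joints reproduces $P_S(Y=1|X_1=0) = 0.5 \neq 1 = P_T(Y=1|X_1=0)$.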
\section{Results: Impacts on Fairness}
\label{Appendix:FairResults}
\label{sec:Expiriments.Fairness}
We now address experiments on RQ2 (Section~\ref{sec:ProblemSetting}). Quality metrics beyond accuracy can also degrade in the presence of covariate shift. There is a risk that certain demographic groups are more impacted by drops in accuracy than others. This can occur even if the overall drop in accuracy is minimal. In order to test the impact of DADT on specific groups, and answer RQ2, we utilize two \emph{fairness metrics} commonly used in the fair machine learning literature, \emph{demographic parity} and \emph{equal opportunity}. We consider here the fairness metrics in reference to the protected attribute SEX.
\textit{Demographic parity} (DP) quantifies the disparity between predicted positive rate for men and women:
\[ DP = |P(\hat{Y}=1|\mbox{SEX=women})-P(\hat{Y}=1|\mbox{SEX=men})| \]
The lower the $DP$ the better is the fairness performance.
We consider this metric in the context of our women's football shoes example, where one measure of whether a model is addressing our identified feedback loop is whether the positive rate for the question of ``will buy football shoes'' moves towards parity for men and women.
\textit{Equal opportunity} (EOP) quantifies the disparity between true positive rate for men and women:
\[EOP = |P(\hat{Y}=1|\mbox{SEX=women}, Y=1)-P(\hat{Y}=1|\mbox{SEX=men}, Y=1)| \]
We consider this metric in the context of the example of the public administrator who is identifying people who do not receive benefits to which they are entitled. Here, our concern is that the model is equally performant for all groups defined by SEX.
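For concreteness, the two metrics can be computed from predictions as follows; a minimal sketch with illustrative function names, assuming binary 0/1 predictions and labels and string group values:

```python
def demographic_parity(y_pred, sex):
    """DP = |P(Yhat=1 | SEX=women) - P(Yhat=1 | SEX=men)|;
    lower values mean better fairness performance."""
    def rate(group):
        preds = [p for p, s in zip(y_pred, sex) if s == group]
        return sum(preds) / len(preds)
    return abs(rate("women") - rate("men"))

def equal_opportunity(y_pred, y_true, sex):
    """EOP = |TPR_women - TPR_men|: the disparity between the true
    positive rates of the two groups."""
    def tpr(group):
        preds = [p for p, t, s in zip(y_pred, y_true, sex)
                 if s == group and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr("women") - tpr("men"))
```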
Fairness-aware classifiers control for these metrics. We use here a classifier-agnostic post-processing method, described in Section~\ref{sec:ProblemSetting.FairnessRegu}, that specializes the decision threshold for each protected group \cite{DBLP:conf/nips/HardtPNS16}. The correction is applied after the decision tree is trained. Fig. \ref{fig:resdp} (left)
confirms a degradation of the DP metric from the target-to-target scenario to the no target domain knowledge scenario. Fig. \ref{fig:reseop} (left) shows a less marked degradation for the EOP metric.
\begin{figure}[t]
\centering
\includegraphics[width=0.43\linewidth]{img/fig2dp.png}\hspace{0.8cm}
\includegraphics[width=0.43\linewidth]
{img/fig4rdp3.png}
\caption{Left: difference in DP between the cases target-to-target baseline $DP_{tt}$ and no target domain knowledge $DP_{ntdk}$ for each source-target US state pair. Right: mean $rDP$ for the cases of having knowledge of $na=8$ attributes (i.e., \textit{ftdk}), $na=3$, and $na=2$, over increasing $W(\hat{P}^{X_w}(Y), P_T(Y))$.}
\label{fig:resdp}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.43\linewidth]{img/fig2eop.png}\hspace{0.8cm}
\includegraphics[width=0.43\linewidth]
{img/fig4reop3.png}
\caption{Left: difference in EOP between the cases target-to-target baseline $EOP_{tt}$ and no target domain knowledge $EOP_{ntdk}$ for each source-target US state pair. Right: mean $rEOP$ for the cases of having knowledge of $na=8$ attributes (i.e., \textit{ftdk}), $na=3$, and $na=2$, over increasing $W(\hat{P}^{X_w}(Y), P_T(Y))$.}
\label{fig:reseop}
\end{figure}
We mimic the reasoning done for the accuracy metric in Section \ref{sec:AccuracyResults}, and introduce the relative gain in demographic parity ($rDP$) and the relative gain in equal opportunity ($rEOP$):
\[ rDP = \frac{\mathit{max}(DP_{ntdk}, DP_{tt})-DP_{ftdk}}{|DP_{tt}-DP_{ntdk}|} \cdot 100 \quad \quad \quad \quad
rEOP = \frac{\mathit{max}(EOP_{ntdk}, EOP_{tt})-EOP_{ftdk}}{|EOP_{tt}-EOP_{ntdk}|} \cdot 100
\]
Since DP and EOP improve when they become smaller, the definitions of relative gain are mirrored compared to that of $rACC$.
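A sketch of the mirrored gain (shown for $rDP$; $rEOP$ is computed identically); the capping to $[-100, 100]$ and the zero-denominator fallback are our assumptions, carried over from the treatment of $rACC$:

```python
def relative_gain_parity(m_ftdk, m_ntdk, m_tt, cap=100.0):
    """rDP / rEOP: share of the fairness degradation recovered by full
    target domain knowledge. Since smaller DP/EOP is better, the
    numerator is mirrored with respect to rACC."""
    denom = abs(m_tt - m_ntdk)
    if denom == 0.0:
        return 0.0  # assumption: no degradation to recover
    r = (max(m_ntdk, m_tt) - m_ftdk) / denom * 100.0
    return max(-cap, min(cap, r))
```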
We have already mentioned in Section~\ref{sec:fairnessresults} that Fig. \ref{fig:partial} (right) substantiates the conclusions for $rACC$ also for $rDP$ and $rEOP$. The distance $W(\hat{P}^{X_w}(Y), P_T(Y))$ turns out to provide guidance on when DADT works best. For DP and EOP, however, for large values of such distance, we do not observe a degradation as in the case of ACC. In other words, when the assumption of covariate shift is strictly met, DADT works best, but when it is not, the recovery of DP and EOP does not degrade.
Finally, Fig. \ref{fig:resdp} (right) confirms the degradation of DADT performance in the case of partial target domain knowledge. E.g., for a distance of $0.03$ we have a mean $rDP$ equal to $41.8\%$ for full target domain knowledge, $42.2\%$ when using triples, and $40.8\%$ when using pairs of variables. This is much less marked for $rEOP$, for which DADT performs very well also with knowledge of pairs of variables, as shown in Fig. \ref{fig:reseop} (right).
\section[Introduction]{Introduction} \label{Introduction}
Best practices can be slow to propagate between disciplines. This
paper attempts to address this problem between the fields of psychology and
software engineering. In particular, we look at the state of practice for the
development of statistical software meant to be used in psychology%
\footnote{For brevity, we will abbreviate this to SSP for the rest of this paper.}.
Developers of SSP, as
in other scientific domains, frequently develop their own software because
domain-specific knowledge is critical for the success of their applications
\citep{wilson-best-practices}. However, these scientists are often self-taught
programmers and thus potentially unaware of software development best practices. To
help remedy this situation, \citet{wilson-best-practices} provide general advice
for scientific software development. We look at how well this advice is applied
in the specific scientific domain of SSP. Our goal is to first understand what the
state of the practice is in SSP, and then provide advice as to what software
engineering practices would likely provide the biggest gains in perceived
software quality, \emph{as measured by end-user perception}.
A first look at the state of practice for software in a specific scientific
community is provided by \citet{gewaltig-neuroscience}, for the domain of
computational neuroscience. (A newer version of their paper is available
\citep{newneuro}, but we reference the original version, since its simpler
software classification system better matches our needs.)
\citet{gewaltig-neuroscience} provide a high level comparison of existing
neuroscience software, but little data is given on the specific metrics for
their comparison. We build on the idea of studying software created and used by
a specific scientific community, while also incorporating detailed measures of
software qualities. In this paper we use the term \emph{software qualities} as
used by software engineers to refer to properties of software such as
installability, reliability, maintainability, portability etc. When we speak of
\emph{software qualities}, we mean the union of these properties.
Here we target 30 SSP packages, developed by different
communities using different models. The packages were selected from a
combination of three external lists: a Wikipedia list \citep{pswiki:Online}, the
list of the National Council on Measurement in Education (NCME)
\citep{psdb:Online}, and the Comprehensive \proglang{R} Archive Network (CRAN)
\citep{pscran:Online}.
Combining our own ideas with suggestions from \citet{wilson-best-practices} and
\citet{gewaltig-neuroscience}, we created a grading sheet to systematically
measure each package's qualities. We used the Analytic Hierarchy Process (AHP)
\citep{AHP} to quantify the ranking between packages via pair-wise comparisons.
Our grading assumes that the software is intended to be user ready, as defined
by \citet{gewaltig-neuroscience}. That is, the grading assumes that the
intention is for new users to be able to undertake their work without requiring
communication with the original developers. In many cases a low ``grade''
should not be attributed to a deficiency in the software, but rather to the fact
that the overall goal was not user readiness, but rather research readiness
\citep{gewaltig-neuroscience}. The overall quality target should be taken into
account when interpreting the final rankings.
Unlike \citet{gewaltig-neuroscience}, the authors of this study are not domain
experts. Our aim is to analyze SSP with respect to software
engineering aspects only. Due to our lack of domain knowledge, algorithms and
background theory will not be discussed, and packages will not be judged
according to the different functionalities they provide. We reach our
conclusions through a systematic and objective grading process, which
incorporates some experimentation with each package.
Others have looked at issues surrounding the engineering of scientific software.
Of particular relevance is \citet{Heaton2015207}, which looks at $12$ different
software engineering practices across $43$ papers that examine software
development as performed by scientists. The software engineering practices are
grouped as \emph{development workflow}, consisting of design issues, lifecycle
model, documentation, refactoring, requirements, testing, and verification and
validation; and \emph{infrastructure}, consisting of issue tracking, reuse,
third-party issues and version control. These, naturally, have significant
overlap with software qualities, as these practices are \emph{supposed} to
improve these qualities. Even though this is a survey of practices, and one
would expect it to be biased towards success stories and thus fairly good
practices, what emerges is different. In other words, even among success
stories, the state of the practice is rather mixed. This further motivates us
to look at the state of the practice of SSP projects ``from the outside'', and
thus picking from a (hopefully) wider cross-section of projects. Another
relevant study is \citet{Kanewala20141219}, where the authors systematically
reviewed $62$ studies of relevance to software testing, motivated by the
increasing number of paper retractions traceable to software faults. The main
conclusion is that the cultural difference between scientist developers and
software engineers, coupled with issues specific to scientific software, makes
testing very difficult. We are gratified that this independent study justifies
both our choice to not do our own independent testing, as well as the idea that
investigating the ``software engineering maturity level'' of particular domains
is likely to find non-trivial variations.
Background information is provided in the first section below. This is followed
by the experimental results and basic comparisons between packages, along with
information on how the software developed by the CRAN community compares to
SSP developed by other communities.
\section[Background]{Background} \label{Sec_Background}
This section covers the process used for our study and the rationale behind it.
We also introduce the terms and definitions used to construct the software
quality grading sheet and the AHP technique, which we used to make the
comparisons. The process was,
\begin{enumerate}
\item Choose a domain where scientific computing is important. Here we chose
SSP because of its active software community, as evidenced by the
list of open source software summarized by \citet{psdb:Online}, and a large
number of \proglang{R} packages hosted by \citet{pscran:Online}.
\item Pick 30 packages from authoritative lists. For SSP, this is a
combination of NCME, CRAN and Wikipedia lists mentioned previously. We picked
15 \proglang{R} packages hosted by CRAN, and another 15 developed by other
communities. Packages common to at least two of the lists (15 out of 30) were
selected first, with the remaining selected randomly.
\item Build our grading sheet. The template used is given in
Appendix~\ref{app:full-template} and is available at:
\url{https://github.com/adamlazz/DomainX}.
\item Grade each software package. Our main goal here is to remain objective. To
ensure that our process is reproducible, we asked different people to grade
the same package. The result showed that different people's standards for
grading vary, but the overall ranking of software remained the same, since the
overall ranking is based on relative comparisons, and not absolute grades.
\item Apply AHP on the grading sheet to reduce the impact of absolute grade
differences.
\item Analyze the AHP results, using a series of different weightings, so that
conclusions and recommendations can be made.
\end{enumerate}
\subsection[Types of software]{Categories and Status} \label{typeofsoftware}
The development models of each package fell into three categories:
\begin{enumerate}
\item Open source: ``Computer software with its source code made available and
licensed so the copyright holder provides the rights to study, change and
distribute the software to anyone for any purpose'' \citep{license}.
\item Freeware: ``Software that is available for use at no monetary
cost, but with one or more restricted usage rights such as source code being
withheld or redistribution prohibited'' \citep{freeware}.
\item Commercial: ``Computer software that is produced for sale or that
serves commercial purposes'' \citep{Dictionary.com2014}.
\end{enumerate}
\noindent The \emph{status} of each project is said to be \textbf{Alive} if the
package, related documentation, or web site has been updated within the last 18
months; \textbf{Dead} if the last update was $18$ months ago or longer; and
\textbf{Unclear} if the last release information could not be easily derived
(denoted ? in tables).
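The status rule can be sketched as a small date check (a hypothetical helper of our own, not part of the study's tooling; the reference date is an assumption):

```python
from datetime import date

def project_status(last_update, today=date(2014, 6, 1), threshold_months=18):
    """Classify a package as Alive, Dead, or Unclear per the 18-month rule."""
    if last_update is None:  # last release information could not be derived
        return "Unclear"
    months_since = (today.year - last_update.year) * 12 \
                   + (today.month - last_update.month)
    return "Alive" if months_since < threshold_months else "Dead"

print(project_status(date(2014, 1, 15)))  # Alive
print(project_status(date(2012, 1, 15)))  # Dead
print(project_status(None))               # Unclear
```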
\subsection[Software qualities]{Software qualities} \label{sec:qual}
We use the software engineering terminology from \citet{ghezzi-se} and best
practices from \citet{wilson-best-practices} to derive our terms and measures.
We measure items of concern to both end users and developers. Since the
terminology is not entirely fixed across the software engineering literature, we
provide the definitions we will use throughout. The qualities are presented in
(roughly) the order we measured them. Where relevant, information on how each
quality was measured is given.
\begin{itemize}
\item \textit{Installability} is a measure of the ease of software installation.
This is largely determined by the quantity and quality of installation
information provided by developers. Good installability means detailed and
well organized installation instructions, with less work to be done by users
and automation whenever possible.
\item \textit{Correctness and Verifiability} are related to how much a user can
trust the software. Software that makes use of trustworthy libraries (those
that have been used by other packages and tested through time) can bring more
confidence to users and developers than self-developed
libraries \citep{Dubois2005}. Carefully documented specifications should also
be provided. Specifications allow users to understand the background theory
for the software, and its required functionality. Well explained examples
(with input, expected output and instructions) are helpful too, so that users
can verify for themselves that the software produces the same result as
expected by the developers.
\item \textit{Reliability} is based on the dependability of the software.
Reliable software has a high probability of meeting its stated requirements
under a given usage profile over a given span of time.
\item \textit{Robustness} is defined as whether a package can handle unexpected
input. A robust package should recover well when faced with unexpected input.
\item \textit{Performance} is a measure of how quickly a solution can be found
and the resources required for its computation. Given the constraint that we
are not domain experts, in the current context we are simply looking to see
whether there is evidence that performance is considered. Potential evidence
includes signs of use of a profiler, or other performance data.
\item \textit{Usability} is a measure of how easy the software is to use. This
quality is related to the quality and accessibility of information provided by
the software and its developers. Good documentation helps with usability.
Some important documents include a user manual, a getting started tutorial,
and standard examples (with input, expected output and instructions). The
documentation should help a user quickly become familiar with
the software. The GUI should have a consistent look and feel for its
platform. Good visibility \citep{norman} can allow the user to find the
functionality they are looking for more easily. A good user support model
(e.g.\ forum) is beneficial as well.
\item \textit{Maintainability} is a measure of the ease of correcting and
updating the software. The benefits of maintainability are felt by future
contributors (developers), as opposed to end users. Keeping track of version
history and change logs facilitates developers planning for the future and
diagnosing future problems. A developer's guide is necessary, since it
facilitates new developers doing their job in an organized and consistent
manner. Use of issue tracking tools and a version control system is a good
practice for developing and maintaining software
\citep{wilson-best-practices}.
\item \textit{Reusability} is a measure of the ease with which software code can
be used by other packages. In our project, we consider a software package to
have good reusability when part of the software is used by another package and
when the API (Application Program Interface) is documented.
\item \textit{Portability} is the ability of software to run on different
platforms. We examine a package's portability through the developers'
statements on their web site or in their documents, and through the success of
running the software on different platforms.
\item \textit{Understandability (of the code)} measures the quality of
information provided to help future developers with understanding the
behavior of the source code. We surface check the understandability by
looking at whether the code uses consistent indentation and formatting style,
if constants are not hard coded, if the code is modularized, etc. Providing a
code standard, or design document, helps people become familiar with the code.
The quality of the algorithms used in the code is not considered here.
\item \textit{Interoperability} is defined as whether a package is designed to
work with other software, or external systems. We checked whether that kind
of software or system exists and if an external API document is provided.
\item \textit{Visibility/Transparency} is a measure of the ease of examining the
status of the development of a package. We checked whether the development
process is defined in any document. We also record the examiner's overall
feeling about the ease of accessing information about the package. Good
visibility allows new developers to quickly make contributions to the project.
\item \textit{Reproducibility} is a measure of whether related information, or
instructions, are given to help verify a product's results
\citep{davison-reproducibility}. Documentation of the process of verification
and validation is required, including details of the development and testing
environment, operating system and version number. If possible, test data and
automated tools for capturing the experimental context should be provided.
\end{itemize}
Our grading sheet, as shown in Appendix~\ref{app:full-template}, is derived from
these qualities. Installability, for example was determined by asking the
questions shown in Table~\ref{table:example}, where \textit{Unavail} means that
uninstallation is not available, and the superscript $^*$ means that this
response should be accompanied by explanatory text.
\begin{table}
\caption{Measuring installability}
\label{table:example}
\begin{tabular}{l}
\toprule
Question (Allowed responses)\\
\midrule
Are there installation instructions? (Yes/ No)\\
Are the installation instructions linear? (Yes/ No)\\
Is there something in place to automate the installation? (Yes$^*$/ No)\\
Is there a specified way to validate the installation, such as a test suite? (Yes$^*$/ No)\\
How many steps were involved in the installation? (Number)\\
How many packages need to be installed before or during installation?
(Number)\\
Run uninstall, if available. Were any obvious problems caused? (Unavail/Yes$^*$/ No)\\
\bottomrule
\end{tabular}
\end{table}
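One way to encode this part of the grading sheet as data (our own illustrative encoding, not the study's actual tooling) is a mapping from each question to its allowed responses, with a small validator:

```python
# Allowed responses per installability question; "*" marks answers that
# must be accompanied by explanatory text, per the grading template.
INSTALLABILITY_SHEET = {
    "installation_instructions":       {"Yes", "No"},
    "instructions_linear":             {"Yes", "No"},
    "automated_installation":          {"Yes*", "No"},
    "installation_validation":         {"Yes*", "No"},
    "number_of_steps":                 int,
    "number_of_prerequisite_packages": int,
    "uninstall_problems":              {"Unavail", "Yes*", "No"},
}

def validate(responses):
    """Check that a filled-in sheet only uses allowed responses."""
    for question, allowed in INSTALLABILITY_SHEET.items():
        answer = responses[question]
        ok = isinstance(answer, allowed) if allowed is int else answer in allowed
        if not ok:
            raise ValueError(f"{question}: {answer!r} not allowed")
    return True

print(validate({
    "installation_instructions": "Yes",
    "instructions_linear": "Yes",
    "automated_installation": "Yes*",
    "installation_validation": "No",
    "number_of_steps": 3,
    "number_of_prerequisite_packages": 2,
    "uninstall_problems": "Unavail",
}))  # True
```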
\subsection[Analytic Hierarchy Process]{Analytic Hierarchy Process
(AHP)} \label{AHP}
``The AHP is a decision support tool which can be used to solve complex decision
problems. It uses a multi-level hierarchical structure of objectives, criteria,
subcriteria, and alternatives. The pertinent data are derived by using a set of
pairwise comparisons. These comparisons are used to obtain the weights of
importance of the decision criteria, and the relative performance measures of
the alternatives in terms of each individual decision criterion''
\citep{trianta-ahp}. By using AHP, we can compare between qualities and
packages without worrying about different scales, or units of
measurement.
Generally, by using AHP, people can evaluate $n$ options with respect to $m$
criteria. The criteria can be prioritized, depending on the weight given to
them. An $m \times m$ decision matrix is formed where each entry is the
pair-wise weighting between two criteria. Then, for each criterion, a pairwise
analysis is performed on each of the options, in the form of an $n\times n$
matrix $a$ (there is one matrix $a$ per criterion), formed by using a
pair-wise score between options as each entry. Each entry of the upper
triangle of $a$ is scaled between one and nine, as defined in \citet{Saaty1990}.
Here we compared 30 packages ($n = 30$) with respect to the 13 qualities
($m = 13$) mentioned previously. Overall quality judgements will depend on the
context in which each package is meant to be used. To approximate this, we
experimented with different weights for each property.
We capture a subjective score, from one to ten, for each package for each
criterion through our grading process. To turn these into pair-wise scores, one
starts with two scores $A$ and $B$ (one for each package), and the result for
$A$ versus $B$ is:
\[
\begin{cases}
\min\{9, A - B + 1\} & A \geq B \\
1 / \min\{9, B - A + 1\} & A < B
\end{cases}
\]
For example, if installability is measured as an $8$ for package $A$ and a $2$
for package $B$, then the entry in $a$ corresponding to $A$ versus $B$ is
$7$, while that of $B$ versus $A$ is $1/7$. The implication
is that installing $A$ is much simpler than installing $B$.
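The score-to-pairwise conversion, together with a standard way of extracting an AHP priority vector (here the common row-geometric-mean approximation of the principal eigenvector, not necessarily the exact computation used in this study), can be sketched as:

```python
import math

def pairwise(a, b):
    """AHP comparison entry for package A vs. B from two 1-10 grades."""
    if a >= b:
        return min(9, a - b + 1)
    return 1 / min(9, b - a + 1)

def priorities(scores):
    """Approximate AHP priority vector via row geometric means."""
    n = len(scores)
    matrix = [[pairwise(scores[i], scores[j]) for j in range(n)]
              for i in range(n)]
    geo_means = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical installability grades for three packages.
print(pairwise(8, 2))  # 7  (and pairwise(2, 8) == 1/7)
print([round(p, 3) for p in priorities([8, 2, 5])])
```

The resulting weights sum to one and preserve the ordering of the original grades.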
\section[Experimental results and discussion]{Experimental
Results} \label{result}
We briefly introduce the $30$ packages. Next we present the AHP results by
discussing trends for each quality and then looking at the final rankings,
assuming both equal and non-equal weights. The detailed results are in
Appendix~\ref{app:SummaryMeasurements} and available on-line at
\url{https://github.com/adamlazz/DomainX}.
\subsection[Selection of Software]{Packages}
Summary information on the $30$ packages is presented in Tables
\ref{table:r}--\ref{table:commercial}. In particular,
\begin{itemize}
\item 19 packages are open source; 8 are freeware; and 3 are commercial.
\item 15 are developed using \proglang{R} (Table~\ref{table:r}) and maintained
by the CRAN community; 12 are from university projects, or developed by
research groups (Table~\ref{table:research}); and 3 are developed by companies
for commercial use (Table~\ref{table:commercial}).
\item 3 projects use \proglang{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\xspace}; 2 use \proglang{Java}; 2 use
\proglang{Fortran}, and 1 uses \proglang{BASIC}. The programming language of
the remaining 7 is not mentioned by the developers.
\item All the packages from CRAN are alive, using the definition given in the
background section. Two of the commercial software products are alive and one
is unclear. For the rest of the software packages (mostly from university
projects or research groups), 6 are alive, 5 are dead and one is unclear.
\end{itemize}
\begin{table}[ht]
\caption{CRAN (\proglang{R}) packages}
\label{table:r}
\begin{tabular}{lccccc}
\toprule
Name & Released & Updated & Status & Source & Lang.\\[0.5ex]
\midrule
\pkg{eRm} \citep{erm} & 2007 & 2014 & Alive & Available & \proglang{R}\\
\pkg{Psych} \citep{Psych} & 2007 & 2014& Alive& Available& \proglang{R}\\
\pkg{mixRasch} \citep{mixRasch} & 2009 & 2014& Alive& Available& \proglang{R}\\
\pkg{irr} \citep{irr} & 2005 & 2014& Alive& Available& \proglang{R}\\
\pkg{nFactors} \citep{nFactors} & 2006 & 2014& Alive& Available& \proglang{R}\\
\pkg{coda} \citep{coda} & 1999 & 2014& Alive& Available& \proglang{R}\\
\pkg{VGAM} \citep{vgam} & 2006 & 2013& Alive& Available& \proglang{R}\\
\pkg{TAM} \citep{tam} & 2013 & 2014& Alive& Available& \proglang{R}\\
\pkg{psychometric} \citep{psychometricp} & 2006 & 2013& Alive& Available& \proglang{R}\\
\pkg{ltm} \citep{ltm} & 2005 & 2014& Alive& Available& \proglang{R}\\
\pkg{anacor} \citep{anacor} & 2007 & 2014& Alive& Available& \proglang{R}\\
\pkg{FAiR} \citep{FAiR} & 2008 & 2014& Alive& Available& \proglang{R}\\
\pkg{lavaan} \citep{lavaan} & 2011 & 2014& Alive& Available& \proglang{R}\\
\pkg{lme4} \citep{lme4} & 2003 & 2014& Alive& Available& \proglang{R}\\
\pkg{mokken} \citep{mokken} & 2007 & 2013& Alive& Available& \proglang{R}\\
[1ex]
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\caption{Other research group projects}
\label{table:research}
\begin{tabular}{l c c c c c}
\toprule
Name & Released & Updated & Status & Source & Lang.\\[0.5ex]
\midrule
\pkg{ETIRM} \citep{ETIRM} & 2000 & 2008 & Dead & Available & \proglang{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\xspace}\\
\pkg{SCPPNT} \citep{SCPPNT} & 2001 & 2007& Dead& Available& \proglang{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\xspace}\\
\pkg{jMetrik} \citep{jMetrik} & 1999 & 2014 & Alive & Not&
\proglang{Java}\\
\pkg{ConstructMap} \citep{ConstructMap} & 2005 & 2012& Dead& Not&
\proglang{Java}\\
\pkg{TAP} \citep{TAP} & ? & ?& ?& Not& ?\\
\pkg{DIF-Pack} \citep{DIF} & ? & 2012& Alive& Available&
\proglang{Fortran}\\
\pkg{DIM-Pack} \citep{DIM} & ? & 2012& Alive& Available&
\proglang{Fortran}\\
\pkg{ResidPlots-2} \citep{ResidPlots-2} & ? & 2008& Dead& Not&
?\\
\pkg{WinGen3} \citep{WinGen3} & ? & 2013& Alive& Not& ?\\
\pkg{IRTEQ} \citep{IRTEQ} & ? & 2011& Dead& Not& ?\\
\pkg{PARAM} \citep{PARAM} & ? & 2012& Alive& Not&
\proglang{BASIC}\\
\pkg{IATA} \citep{IATA} & ? & 2014& Alive& Not & ?\\
[1ex]
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\caption{Commercial packages}
\label{table:commercial}
\begin{tabular}{l c c c c c}
\toprule
Name & Released & Updated & Status & Source & Lang.\\[0.5ex]
\midrule
\pkg{MINISTEP} \citep{ministep} & 1977 & 2014 & Alive & Not & ?\\
\pkg{MINIFAC} \citep{minifac} & 1987 & 2014& Alive& Not& ?\\
\pkg{flexMIRT} \citep{flexMIRT} & ? & ? & ? & Not& \proglang{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\xspace}\\
[1ex]
\bottomrule
\end{tabular}
\end{table}
\subsection[AHP results]{AHP results}
We explain the AHP results for each quality. In the charts, blue bars (slashes)
are for packages hosted by CRAN\footnote{As all relevant \proglang{R} packages
are hosted by CRAN, we will drop this qualifier for the rest of this
paper.}, green bars (horizontal lines) for research groups projects and red
bars (vertical lines) for commercial software.
\subsubsection{Installability} \label{Installability} We
installed each package on a clean virtual machine. We did this to ensure we
used a clean environment for each installation, to not create bias for software
tested later. We also checked for installation instructions, and whether these
instructions are organized linearly. Automated tools for installation and test
cases for validation are preferred. The number of steps during installation and
the number of external libraries required was counted. At the end of the
installation and testing, if an uninstaller is available, we ran it to see if it
completed successfully. The AHP results are in Figure~\ref{fig:installability}.
Some key findings are as follows:
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{image003.png}
\caption{AHP result for installability}
\label{fig:installability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: The CRAN community provides general installation
instructions \citep{craninstall} for all packages it maintains. However, in
some cases links to the general instructions are not given on the package page,
which may cause confusion for beginner users. The following packages
addressed this problem by providing detailed installation information on their
own web site: \pkg{TAM}, \pkg{anacor}, \pkg{FAiR}, \pkg{lavaan}, \pkg{lme4}
and \pkg{mokken}. All the packages installed easily and automatically. Most of
the packages have dependencies on other \proglang{R} packages, but the
required packages can be found and installed automatically. The
uninstallation process is as easy as the installation process. A drawback of
\proglang{R} packages for installability is that none of the packages provide
a standard suite of test cases specifically for the verification of
installation. \pkg{lavaan} provides a simple test example, without output
data, which does show some consideration toward verification of the
installation process.
\item Research group projects: The results of installability in the research
group projects are uneven. Several are similar to \proglang{R} packages, but
some rank lower because no installation instructions are given (5 out of 12),
or the given instructions are not linearly organized (one). The developers may
think that the installation process is simple enough that there is no need for
installation instructions. However, even for a simple installation, it is good
practice to have documentation to prevent trouble for new users. Another
common problem is the lack of a standard suite of test cases; only
\pkg{ConstructMap} and \pkg{PARAM} showed consideration of this issue.
\item Commercial software: They have well organized installation instructions,
automated installers and only a few tasks needed to be done manually. However,
commercial software tends to have the same problem as the \proglang{R}
packages -- in no instance is a standard test suite provided specifically for
the verification of the installation.
\end{itemize}
\subsubsection{Correctness and Verifiability}
We checked for evidence of trustworthy libraries and a requirements
specification. With respect to the requirements specification, we did not
require the document to be written in a strict software engineering style -- it
was considered adequate if it described the relevant mathematics. We tried the
given examples, if any, to see if the results matched the expected output. The
results are shown in Figure~\ref{fig:correctness}. In particular, we observe
that
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{image005.png}
\caption{AHP result for correctness and verifiability (closed source indicated by *)}
\label{fig:correctness}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: For $10$ of the $15$, there are either technical
reports or published papers about the mathematics used by the package. For
instance, \pkg{lme4}, \pkg{eRm} and \pkg{mokken} are covered in a special
volume (number 20) on SSP from the Journal of
Statistical Software (JSS) \citep{deLeeuwAndMair2007}. A later special volume
(number 48) of JSS covers further \proglang{R} extensions,
including \pkg{lavaan} \citep{Rosseel2012}. The software has consistent
documentation because \proglang{R} extensions must satisfy the CRAN Repository
policy, which includes standardized interface documentation through
\proglang{Rd} (\proglang{R} documentation) files. \proglang{Rd} is a markup
language that can be processed to create documentation in \proglang{\LaTeX},
\proglang{HTML} and text formats \citep{RCoreTeam2014}. Although not required
by the CRAN policy, some of the \proglang{R} extensions also include
vignettes, which provide additional information in a more free format than the
\proglang{Rd} documentation allows. Vignettes can act as user manuals,
tutorials, and extended examples. Many vignettes are written with
\proglang{Sweave} \citep{Leisch2002}, which is a Literate Programming (LP)
\citep{Knuth1984} tool for \proglang{R}. All the \proglang{R} packages have
examples about how to use the software, but $9$ did not provide expected
output data; we encountered a small precision difference when we ran \pkg{ltm}
and compared with the expected results. Many packages rely on other packages
(which can be seen as CRAN provides package dependencies as well as reverse
dependencies); such reuse not only eases development burden, but reused
packages tend to be more trustworthy than newly developed libraries.
\item Research group projects: None provide requirements specifications or
reference manuals. Examples are given in most cases ($11$ of $12$), but $9$ of
those did not provide input or output data. Only two mentioned that standard
libraries were used.
\item Commercial: Much less information (especially as compared to \proglang{R})
was provided for building confidence in correctness and verifiability.
They provided neither requirements specification documents nor reference
manuals. As this is commercial software, it may be the case that the
companies believe that their proprietary algorithms give them added value; we
should not prematurely conclude that these packages are not correct or
verifiable. On the plus side, all these packages provide standard examples
with relevant output and all the calculated results produced from our machine
matched the expected results. None of the selected packages mentioned whether
they used existing popular libraries.
\end{itemize}
Little evidence of verification via testing was found among the three classes of
software. The notable exception to this is the diagnostic checks done by CRAN
on each of the \proglang{R} extensions. To verify that the extensions will
work, the \proglang{R} package checker, \code{R CMD check}, is used. The tests
done by \code{R CMD check} include \proglang{R} syntax checks, \proglang{Rd}
syntax and completeness checks and, if available, example checks, to verify that
the examples produce the expected output \citep{RCoreTeam2014}. These checks
are valuable, but they focus more on syntax than on semantics. The \proglang{R}
community has not seemed to fully embrace automated testing, since common
development practices of \proglang{R} programmers do not usually include
automatically re-running test cases \citep{Wickham2011}.
Of the three classes of software, \proglang{R} packages provides the most
complete and consistent documentation, but there also seems to be a missed
opportunity here. If there is a drawback to the \proglang{R} documentation, it
is that the documents only assist with the use of the tools, not their
verification. In other words, although LP is used for user documentation, it is
not used for the implementation itself, so that documented code can be more
easily verified. \citet{Nedialkov2010} is a good example of the improved
verifiability one can achieve by maintaining the code and documentation together
in one source; the CRAN community does not appear to follow this approach.
\subsubsection{Reliability (Surface)}
We checked rudimentary reliability by attempting to run the package after
installing it. In particular, if there was a tutorial for the package, we tried
to run through its examples to see if we obtained the same results. This can
only be considered a surface measure of reliability, as we did not conduct any
domain-specific checks.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{image007.png}
\caption{AHP result for reliability}
\label{fig:reliability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: For $11$ packages, there were no problems during
installation and initial tutorial testing. There were small installation
problems for \pkg{eRm}, \pkg{Psych}, \pkg{anacor}, \pkg{FAiR} because the
instructions were not up to date with the software. The problems included
dead URLs and a lack of required packages. Our suggestion is that developers
should maintain their install instructions, or point users to the general
instruction given by CRAN.
\item Research group projects: For half of these, we found no problems. The
other half suffered from problems like Makefile errors (\pkg{SCPPNT}), old
instructions (\pkg{WinGen3}), or an absence of instructions.
\item Commercial software: There were no problems during installation and initial
tutorial testing when using the given instructions.
\end{itemize}
\subsubsection{Robustness (Surface)}
We checked robustness by providing the software with ``bad'' input. We wanted
to know how well they deal with situations that the developers may not have
anticipated. For instance, we checked whether they handle garbage input (a
reasonable response may be an appropriate error message) and whether they
gracefully handle text input files where the end of line character follows a
different convention than expected. Like reliability, this is only a surface
check. With reference to Figure~\ref{fig:robustness}, we have the following
remarks:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image009.png}
\caption{AHP result for robustness}
\label{fig:robustness}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: $10$ handle unexpected input reasonably; they
provide information or warnings when the user enters invalid
data. \proglang{R} packages do not use text files as input files; therefore, a
change of format in a text file is not an issue.
\item Research group projects: $7$ performed well. The rest had problems such
  as an inability to handle a missing input file, or one that did not have
  the designated name.
\item Commercial software: two did well, but \pkg{MINISTEP} did not issue a
warning when using an invalid format for an input -- and the software crashed.
\end{itemize}
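As a concrete illustration of the graceful handling we looked for, consider the following sketch. It is a hypothetical input reader of our own, not code from any surveyed package: it rejects garbage input with an informative error message and accepts both Unix and Windows line-ending conventions.

```python
def read_scores(text):
    """Parse whitespace-separated numeric scores, one record per line.

    Accepts Unix (\n), Windows (\r\n) and old Mac (\r) line endings,
    and raises an informative error instead of crashing on garbage input.
    """
    scores = []
    # splitlines() handles all common line-ending conventions uniformly
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line:
            continue  # tolerate blank lines rather than failing
        try:
            scores.append([float(tok) for tok in line.split()])
        except ValueError as err:
            raise ValueError(
                f"line {lineno}: expected numeric scores, got {line!r}"
            ) from err
    return scores
```

A package whose input layer behaves this way would pass both of our surface robustness checks (garbage input and unexpected line endings).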
\subsubsection{Performance (Surface)}
No useful evidence or trends were obtained for performance; therefore, no
comparison can be made.
\subsubsection{Usability (Surface)}
We checked usability mainly by looking at the documentation. Better usability
means the users get more help from the developers. We checked for a getting
started tutorial, for a fully worked example and for a user manual. Also, we
looked for a reasonably well designed GUI, a clear declaration of the expected
user characteristics, and for information about the user support model. The
results are shown in Figure~\ref{fig:usability}. We observed the following:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image013.png}
\caption{AHP result for usability}
\label{fig:usability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: Thanks to the CRAN repository policy, \proglang{Rd}
files and \proglang{Sweave} vignettes, users are provided with complete and
consistent documentation. With respect to usability, two notably good
examples are \pkg{mokken} and \pkg{lavaan}. They give detailed explanations
for their examples and provide well-organized user manuals. However,
\proglang{R} packages have a common drawback -- only a few (4 out of 15)
provide getting started tutorials. The common user support model is an email
address, with only two (\pkg{eRm} and \pkg{anacor}) providing a discussion
forum as well.
\item Research group projects: There was great inconsistency here; a few
packages (\pkg{IATA}, \pkg{ConstructMap}, \pkg{TAP: Test Analysis Program}
come to mind) provided the best examples for others to follow.
\item Commercial software: Commercial software did better than the \proglang{R}
packages here. They all have getting started tutorials, explanations for their
examples and good user manuals. The main drawback is that their GUIs do not
have the usual look and feel for the platforms they are on. For the user
support model, \pkg{MINISTEP} and \pkg{MINIFAC} provide user forums and a
feedback section, in addition to email support.
\end{itemize}
\subsubsection{Maintainability}
We looked for evidence that the software has actually been maintained and that
consideration was given to assisting developers with maintaining the software.
We looked for a history of multiple versions, documentation on how to contribute
or review code, and a change log. In cases where there was a change log, we
looked for the presence of the most common types of maintenance (corrective,
adaptive or perfective).
We were also concerned about the tools that the developers used. For instance,
what issue tracking tool was employed and does it show when major bugs were
fixed? Since it is important for all scientific software
\citep{wilson-best-practices}, we also looked to see which versioning system is
in use. Effort toward clear, non-repetitive code is considered as evidence for
maintainability. The results are shown in Figure~\ref{fig:maintainability}, and
we can highlight the following:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image015.png}
\caption{AHP result for maintainability (closed source indicated by *)}
\label{fig:maintainability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: All of them provide a history of multiple versions
and give information about how the packages were checked. A few of them, like
\pkg{lavaan} and \pkg{lme4}, also give information about how to
contribute. $9$ provide change logs, $4$ indicate use of an issue tracking
tool (Tracker and GitHub) and versioning systems (SVN and GitHub). All of
them consider maintainability in the code, with no obvious evidence of code
clones.
\item Research group projects: Research group projects did not show much
evidence that they pay attention to maintainability. Only $5$ provide the
version history of their software; two give information about how to
contribute and three provide change logs. None of them showed evidence of
using issue tracking tools, or of using a versioning system in their project.
\item Commercial software: Because of the nature of commercial software, they
did not show much \emph{externally visible} evidence of maintainability. Two
did provide history of multiple versions of the software and change logs, but
other information is usually not provided by the vendors. In this case, our
measurements may not be an accurate reflection of the maintainability of these
packages.
\end{itemize}
\subsubsection{Reusability}
We checked to see if the given software is itself used by another package, or if
there is evidence that reusability was considered in the design. With reference
to Figure~\ref{fig:reusability}, our observations are as follows:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image017.png}
\caption{AHP result for reusability (closed source indicated by *)}
\label{fig:reusability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: There is clear evidence that 12 out of 15 software
packages have been reused by other packages. Also, the \proglang{Rd}
reference manual required by CRAN can serve as an API document (of sorts),
which helps others to reuse the package.
\item Research group projects: No evidence was found that their packages have
been reused. Two of them (\pkg{ETIRM} and \pkg{SCPPNT}) provide API
documentation.
\item Commercial software: Due to their nature, no accurate result can be
presented here.
\end{itemize}
\subsubsection{Portability}
We checked which platforms the software is advertised to work on, how people are
expected to handle portability, and whether there is evidence in the
documentation which shows that portability has been achieved. Since the results
are so uniform by category, we omit the figure.
\begin{itemize}
\item \proglang{R} packages: These are consistently portable. There are
  different versions of the packages for different platforms. Also, the
  software checking results from CRAN show that they work well on different
  platforms.
\item Research group projects: All claim to work on Windows, with little
  evidence of working on other platforms. No real attempt at portability
  appears to have been made.
\item Commercial software: The results are similar to the research group
projects.
\end{itemize}
\subsubsection{Understandability (Surface)}
We checked how easy it is for people to understand the code. We focused on code
formatting style, use of coding standards, presence of comments, identifier and
parameter naming style, use of constants, meaningful file names, whether the
code is modularized and whether there is a design document. The results are
summarized in Figure~\ref{fig:understandability}. We can point out the
following:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image021.png}
\caption{AHP result for understandability (closed source indicated by *)}
\label{fig:understandability}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: These generally do quite well, with some specific
stand-outs: (\pkg{FAiR}, \pkg{lavaan}, \pkg{lme4}). One definite issue is
that no coding standard is provided (for \proglang{R} in general) and there
are few comments in the code. If LP were used to maintain the code and its
documentation together, understandability could be greatly improved.
\item Research group projects: 8 out of 12 projects are closed source. For the
closed source projects, no measure of understandability is given. The common
problem for the remaining open source projects is not specifying any coding
standard.
\item Commercial software: Due to commercial software being closed source, no
accurate result can be presented here.
\end{itemize}
\subsubsection{Interoperability}
We looked for evidence for any external systems the software communicates or
interoperates with, for a workflow that uses other software, and for external
interactions (API) being clearly defined. The results for interoperability of
the SSP follow. As the results are similar within each
category, we omit the figure.
\begin{itemize}
\item \proglang{R} packages: These do quite well, largely because of the
documentation requirements from CRAN. While there is no obvious evidence that
  these packages communicate with external systems, it is very clear that
  existing workflows use other \proglang{R} packages.
\item No sign of interoperability is found in commercial software and research
group projects.
\end{itemize}
\subsubsection{Visibility/Transparency}
We looked for a description of the development process. We also give a
subjective score about the ease of external examination of the product, relative
to the average of the other products being considered. This score is intended to
reflect how easy it is for other users to get useful information from the
software web site or documentation. The AHP results are shown in
Figure~\ref{fig:visibility}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{image025.png}
\caption{AHP result for visibility/transparency}
\label{fig:visibility}
\end{figure}
\begin{itemize}
\item \proglang{R} packages: No information about the development process is
given, but packages like \pkg{lavaan} provide web sites outside of CRAN with
detailed information about the software, including examples and instructions.
The benefit of the external web sites is that they provide more information to
users; the downside is that maintaining information in two different places
can be difficult. If an external web site is used, developers need to make
sure that all the web sites are up to date, and that links exist between them.
\item Research group projects and commercial software: None provide information
about their development process for either of these categories, but
information on the software is easily obtainable through web sites and
documentation.
\end{itemize}
\subsubsection{Reproducibility}
We looked for a record of the environment used for development and testing, test
data for verification and automated tools used to capture experimental context.
Only \proglang{R} packages provide a record of the environment for testing (all
of them); the other products do not explicitly address reproducibility.
\proglang{R} packages also benefit from the use of \proglang{Sweave} for
vignettes (if present).
\section{Discussion}
It should be clear that different development groups have different priorities;
research projects typically aim to advance the state-of-the-art, while commercial
projects wish to be profitable. This tends to mean that commercial projects have
more of an incentive to provide user-friendly software, with well written manuals,
and so on. To a certain extent, the \proglang{R} community tends to fall in the
middle of these extremes. On the other hand, it is not clear that such priorities
are always reflected in the actual development process. The authors are well aware
of research software where very careful attention was paid to usability, and of
commercial software which is barely usable (but succeeds because it provides a
service for which there is no effective competition).
In other words, while it may appear to be unfair to compare commercial and research
software, we disagree: assuming reasonable prices, users do not care so much
about these details, but they do care about using software tools to accomplish their
tasks. Being scientists, it is fair for us to measure these tools, even if it turns
out that the answer simply agrees with conventional wisdom.
Below, we provide some rankings, based on the measurements reported in the previous
section.
For our first analysis (see Figure~\ref{fig:nonweight}), we use the same weight
for each quality. In this case, closed source packages ranked significantly
lower, since their open source counterparts provide more information for
qualities like maintainability, understandability, etc. Thus, it is not
surprising that \proglang{R} packages fare dramatically better. With respect to
research projects, the \proglang{R} packages may outperform them because the
\proglang{R} community explicitly targets user readiness. As discussed in the
introductory section, some software developers implicitly target the ``lower''
goal of research readiness, since the burdens on design and documentation are
lower in this case.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{image001.png}
\caption {Ranking with equal weight between all qualities}
\label{fig:nonweight}
\end{figure}
To minimize the influence of open/closed source, we gave correctness \&
verifiability, maintainability, reusability, understandability and
reproducibility low weights (a weight of $1$ for these qualities, while all
others use $9$) -- See Figure~\ref{fig:weight}. Interestingly, \proglang{R}
packages still fare best, but not to the same extent as in the previous
analysis.
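The final aggregation step behind these rankings can be sketched in a few lines. This is a simplified weighted-sum sketch, not the full AHP pairwise-comparison machinery, and the per-quality scores below are made-up placeholders rather than our measured values; the weight vector mirrors the $1$-versus-$9$ split used for Figure~\ref{fig:weight}.

```python
def weighted_ranking(scores, weights):
    """Rank products by the weighted sum of their per-quality scores.

    `scores` maps product name -> list of per-quality scores;
    `weights` gives one weight per quality (e.g. 1 for de-emphasized
    qualities and 9 for the rest), normalized here to sum to 1.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    ranked = sorted(
        scores.items(),
        key=lambda kv: sum(w * s for w, s in zip(norm, kv[1])),
        reverse=True,
    )
    return [name for name, _ in ranked]

# Hypothetical scores over three qualities
# (say usability, maintainability, robustness)
scores = {"pkgA": [0.8, 0.9, 0.7], "pkgB": [0.9, 0.2, 0.8]}
# De-emphasize the second quality (weight 1), keep the others at 9
print(weighted_ranking(scores, [9, 1, 9]))
```

Changing the weight vector is all it takes to move between the two analyses above, which is also how a reader can rerun the ranking with their own priorities using our published data.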
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{weighted.png}
\caption{Ranking with minimal weight for correctness \& verifiability,
maintainability, reusability, understandability and reproducibility}
\label{fig:weight}
\end{figure}
One could argue that the results shown in Figure~\ref{fig:nonweight} are unfairly
biased against commercial software. We disagree. They are (unfairly?) biased
towards \emph{users} being able to ascertain that the product they have is of
high quality. Most commercial software vendors make choices in how they package
their products which make this very difficult. We believe that users should be
able to have confidence in the quality of their tools -- and to obtain such
confidence through direct measurement. This does mean that closed source is a
serious impediment; but this is not a barrier to commercialization, just to
certain business models.
At the end of the day, every user must decide the relative weight that they
want to put on each quality, which is why our full data is available at
\url{https://github.com/adamlazz/DomainX}.
\section[Conclusion]{Conclusion and Recommendations} \label{conclusion}
For the surveyed software, \proglang{R} packages clearly performed far better
than the other categories for qualities related to development, such as
maintainability, reusability, understandability and visibility. As we
expected, commercial software provided better usability but could not be
easily verified. In terms of quality measurements, research
projects usually ranked lower and showed a larger variation in the quality
measures between products.
There is much to learn from the success of CRAN. The overall high ranking of
\proglang{R} packages stems largely from their use of \proglang{Rd},
\proglang{Sweave}, \code{R CMD check} and the CRAN Repository Policy. The
policy, together with the support tools, means that even a single-developer
project can be perceived externally as being sufficiently well developed to be
worth using. A small research project usually does not have the resources to
set up an extensive development infrastructure and process, even though the end
results would benefit greatly from doing so.
As strong as the \proglang{R} community is, there is still room for improvement.
In particular, the documentation of an \proglang{R} extension seems to put too
much trust in the development team. The documentation is solely aimed at
teaching the use of the code to a new user, not to convincing other developers
that the implementation is correct. So while we applaud the use of the LP tool
\proglang{Sweave} for the user documentation, we are frankly puzzled that this
was not broadened to the code as well. Another area for improvement would be an
increased usage of regression testing, supported by automated testing tools.
Regression testing can be used to ensure that updates have not broken something
that previously worked.
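As a minimal illustration of the kind of automated regression test we have in mind, the sketch below pins the current output of a routine against a stored reference value, so a later update that silently changes the result is caught. The routine and its reference value are hypothetical, not drawn from any surveyed package.

```python
import unittest

def item_difficulty(correct, attempts):
    """Toy scoring routine, standing in for a real psychometric computation."""
    return 1.0 - correct / attempts

class RegressionTest(unittest.TestCase):
    # Reference value recorded from a release known to give correct results
    REFERENCE = 0.25

    def test_item_difficulty_unchanged(self):
        # Fails if an update changes the computed value
        self.assertAlmostEqual(item_difficulty(75, 100), self.REFERENCE)

# Run the suite programmatically (in practice, from an automated build)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Rerunning such a suite after every change provides exactly the automated check described above, at essentially no ongoing cost to the developer.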
For each of the qualities, we can make further specific recommendations. Some
of these sound incredibly obvious, but we are nevertheless forced to make them,
as we found many instances of packages that did not follow them.
\begin{itemize}
\item Installability: Systematically provide detailed installation instructions
and a standard suite of test cases for verification of the installation.
\item Correctness and Verifiability: Developers should consider following the
example provided by CRAN, in terms of the organization of the reference
manual, requirement specification, and information about the libraries used.
With respect to user examples, commercial software, such as \pkg{flexMIRT},
provides a wide variety of examples. Although CRAN facilitates the inclusion of
  examples, the repository policy does not generally require them -- CRAN should
  consider increasing the number of required examples, possibly through
  vignettes, to match what is being done by commercial software.
\item Reliability: The user documentation, including installation instructions,
  needs to be kept in sync with the software as it is updated.
\item Robustness: It was disappointing to see that several programs did not
gracefully handle simple incorrect input. Additional testing should be
  performed (and automated!), and the issues uncovered, as shown in
  Appendix~\ref{app:SummaryMeasurements}, should be fixed.
\item Usability: CRAN should require a detailed getting started tutorial in
addition to, or as part of, the user manual. Commercial software should
put more effort in designing a platform-friendly GUI.
\item Maintainability: For open source projects, a versioning system and issue
tracking tool are strongly suggested. Information like a change log, or how
to contribute, should also be presented.
\item Reusability: For programs to be reusable, a well-documented API should be
provided. If the generation of the documentation can be automated, there is a
better chance that this will be done, and that it will be in sync with the
code.
\item Understandability: Where not currently given, coding standards and design
documents should be provided. Developers should consider using LP for code
development, so that the code can be written with the goal of human
understanding, as opposed to computer understanding. While the \proglang{R}
community already uses such tools, for others, they can look to tools such as
\proglang{cweb}, \proglang{noweb}, etc.
\item Visibility and Transparency: Projects that anticipate the involvement of
future developers should provide details about the development process
adopted.
\item Reproducibility: Developers should explicitly track the environment used
for development and testing, to make the software results more reproducible.
Through the use of the \proglang{R} environment and \proglang{Sweave}, CRAN
provides good examples for how to do this. However, the benefit of
\proglang{Sweave} (or similar tools) would be improved if all CRAN developers
were required to write vignettes for the package documentation.
\end{itemize}
\bibliographystyle{spbasic}
\subsection{Visibility/Transparency}
The columns of the table below should be read as follows:\\
Dev process defined?: Development
process defined, Ease of ext exam:
Ease of examination relative to other software (out of 10).
\label {TblVisibility}
\renewcommand{\arraystretch}{0.6}
\begin{longtable} {l c c}
\toprule
Name & Dev process defined? & Ease of ext exam \\
\midrule
\endhead
eRm & no & 9 \\
Psych & no & 8 \\
mixRasch & no & 9 \\
irr & no & 9 \\
nFactors & no & 9\\
coda & no & 10 \\
VGAM & no & 10 \\
TAM & no & 10 \\
psychometric & no & 9 \\
ltm & no & 9 \\
anacor & no & 8 \\
FAiR & no & 7 \\
lavaan & no & 10 \\
lme4 & no & 9 \\
mokken & no & 9 \\
ETIRM & no & 6 \\
SCPPNT & no & 6 \\
jMetrik & no & 7 \\
ConstructMap & no & 7 \\
TAP & no & 6 \\
DIF-Pack & no & 6 \\
DIM-Pack & no & 6 \\
ResidPlots-2 & no & 5 \\
WinGen3 & no & 7 \\
IRTEQ & no & 7 \\
PARAM & no & 6 \\
IATA & no & 7 \\
MINISTEP & no & 8 \\
MINIFAC & no & 8\\
flexMIRT & no & 8\\
\bottomrule
\end{longtable}
\ifdefined\NONEWPAGE
{}
\else
\newpage
\fi
\subsection {Reproducibility}
The columns of the table below should be read as follows:\\
Dev env record: Record of development environment, Test data for
verification: Availability of test data for verification, Auto tools: Automated
tools (like Madagascar) used to capture experimental data.
\label {TblReproducibility}
\renewcommand{\arraystretch}{0.6}
\begin{longtable} {l c c l}
\toprule
Name & Dev env record & Test data for verification & Auto tools \\
\midrule
\endhead
eRm & only for testing & no & no \\
Psych & only for testing & no & no\\
mixRasch & only for testing & no & no\\
irr & only for testing & no & no\\
nFactors & only for testing & no & no\\
coda & only for testing & no & no\\
VGAM & only for testing & no & no\\
TAM & only for testing & no & no\\
psychometric & only for testing & no & no\\
ltm & only for testing & no & no\\
anacor & only for testing & no & no\\
FAiR & only for testing & no & no\\
lavaan & only for testing & no & no\\
lme4 & only for testing & no & no\\
mokken & only for testing & no & no\\
ETIRM & no & no & no \\
SCPPNT & no & no & no\\
jMetrik & no & no & no\\
ConstructMap & no & no & no\\
TAP & no & no & no\\
DIF-Pack & no & no & no\\
DIM-Pack & no & no & no\\
ResidPlots-2 & no & no & no\\
WinGen3 & no & no & no \\
IRTEQ & no & no & no \\
PARAM & no & no & no \\
IATA & no & no & no\\
MINISTEP & no & no & no\\
MINIFAC & no & no & no\\
flexMIRT & no & no & no\\
\bottomrule
\end{longtable}
\section{Introduction}
Communications research has almost exclusively focused on
systems based on electromagnetic propagation. However, at the scales
considered in nanotechnology, it is not clear that these methods
are viable.
Inspired by the chemical-exchange communication performed by
biological cells,
this paper considers {\em molecular communication}~\cite{hiy05},
in which information is
transmitted by an exchange of molecules. Specifically we consider the
propagation of individual molecules between closely spaced transmitters
and receivers, both immersed in a fluid medium. The transmitter encodes
a message in the pattern of release of the molecules into the medium;
these molecules then propagate to the receiver where they are sensed.
The receiver then attempts to recover the message by observing the
pattern of the received molecules.
It is well known that microorganisms exchange information by molecular
communication, with {\em quorum sensing}~\cite{bro01} as but one
example, where bacteria exchange chemical messages to estimate the
local population of their species. The biological literature on
molecular communication is vast, but there has been much recent work
concerning these systems as engineered forms of communication. Several
recent papers have described the design and implementation of
engineered molecular communication systems, using methods such as:
exchanging arbitrary molecules using Brownian motion in free
space~\cite{cav06}; exploiting gap junctions between cells to exchange
calcium ions~\cite{nak05,nak07}; and using microtubules and molecular
motors to actively drive molecules to their
destination~\cite{eno06,hiy08}. A comprehensive overview of the
molecular communication system is also given by~\cite{aky08,HiyamaMoritani} and the
references therein.
Given this engineering interest, it is useful to explore the
theoretical capabilities of molecular communication systems. To the
authors' knowledge, the earliest effort towards information-theoretic
analysis of these channels was given in~\cite{tho03}, which examined
information flow in continuous diffusion. In~\cite{eck07,eck-arxiv},
physical models and achievable bounds on information rate were provided
for diffusion-based systems. Information rates were provided
in~\cite{ata07,ata08} for the case where the receiver chemically
``reacts'' with the molecules and forms ``complexes''. In \cite{GarveyThesis},
it was shown that an additive white Gaussian noise (AWGN) model is appropriate
for diffusion-based counting channels.
Information-theoretic results have also been obtained for specific
systems, such as propagation along microtubules~\cite{eck09, moo09b}
and continuous diffusion \cite{PierobonAkyildiz}.
All these studies indicate that useful information rates can be
obtained, although much lower per unit time than in electrical
communication; this is not surprising, since chemical processes are far
slower than electrical processes.
It is worth pointing out that these results build on theoretical work in Poisson and queue-timing channels
\cite{Kabanov,BitsThroughQueues}, which is an active area of research in information theory.
In any communication system, the potential rate of communication is
determined by the characteristics of the channel. We consider molecular
propagation in a fluid medium, governed by Brownian motion and,
potentially, a mean drift velocity. Our model is therefore applicable
to communications in, e.g., a blood vessel. This drift velocity is a
key difference between our work and \cite{eck07, eck-arxiv}, which
considered a purely diffusive channel. Furthermore, we
consider two cases: the release of a single molecule, and of two molecules. In this
regard, it is worth emphasizing the preliminary and conceptual nature
of this work. The long-term goal of this work is to understand the role
of both timing and the number of molecules (`amplitude'). Thus, the
contributions of this paper include:
\begin{itemize}
\item Calculation and optimization, under some simplifying
assumptions, of mutual information in Brownian motion with
drift, where the transmitter uses pulse-position modulation;
\item Optimization of the degree distributions related to two transmit
molecules; and
\item Demonstration (via theoretical results) that our simplified mutual
information calculation is an {\em upper bound} on the true
mutual information of any practical implementation of this system.
\end{itemize}
Our optimized degree distributions reveal interesting features of these
channels, suggesting transmission strategies for system designers.
The paper is organized as follows. In Section~\ref{sec:sysmodel} we
describe the system under consideration, in which the propagation of
the molecule is analyzed and the probability distribution function of
the absorption time is derived. In Section~\ref{mutinf}, we
characterize the maximum information transfer per molecule, for the
case where information is encoded in the time of release of the
molecule, and the case of two molecules. In Section~\ref{sec:sim},
numerical and theoretical results arising from these models (including
optimized degree distributions) are presented. The paper wraps up with
some conclusions and suggestions for future work.
\section{System Model}\label{sec:sysmodel}
\subsection{Communication model} \label{CommModel}
The communication model we consider is shown in Figure~\ref{sysmodel}.
The subsystems which make up the molecular communication system are:
\begin{enumerate}
\item {\bfseries Transmitter.} The transmitter is a source of
identical molecules. It encodes a message in the time of
dispersal of these molecules. We will assume that the
transmitter can control precisely the time of dispersal of
these molecules but does not influence the propagation of these
molecules once dispersed.
\item {\bfseries Propagation medium.} The molecules propagate
between transmitter and receiver in a fluid medium. The
propagation is modeled as Brownian motion, and is characterized by two
parameters: the drift velocity and the diffusion constant.
These in turn depend on the physical properties of the fluid
medium. The trajectories of different molecules are assumed to
be independent of one another.
\item {\bfseries Receiver.} In this paper, the propagation of the
molecule is assumed to be one dimensional. When it arrives at the receiver, the dispersed
molecule is \emph{absorbed} by
the receiver and \emph{is removed} from the medium. The
receiver makes an accurate measurement of the time when it
absorbs the molecule. It uses this information to determine the
message sent by the transmitter.
\item {\bfseries Transmission of information.} The transmitter can
encode information in either the time of dispersal of the
molecules, or the number of molecules it disperses, or both.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width = 3.5in]{figs/sysmodel.eps}
\caption{An abstract model of the molecular communication system. One or more molecules are released by the transmitter. These molecules then travel through the fluid medium to the receiver, which absorbs them upon reception. If all the molecules are identical, then information is conveyed from transmitter to the receiver only through the times at which the molecules are released.}
\label{sysmodel}
\end{figure}
\begin{figure}[ht]
\centering
\subfigure[Modeling the motion of the particle as a one dimensional random walk]{
\includegraphics[width=0.5\columnwidth] {figs/markovchain.eps}
\label{fig:markovchain}
}
\subfigure[Sample paths of six particles in the same fluid medium, three released at $t=0$, three at $t=400$.]{
\includegraphics[width=0.5\columnwidth] {figs/trajectories.eps}
\label{fig:trajectories}
}
\label{fig:sysmodel2}
\caption{If the size of the receiver is several orders greater than the size of the molecule, and if the velocity of the fluid in the `y axis' is negligible compared to the velocity of the fluid along `x axis' (Refer Fig. \ref{sysmodel}), then, one can ignore the y-coordinate of the position of the molecule and consider only the x-coordinate. The position of the molecule along the x axis is modeled as a Markov chain, specifically, as a one dimensional random walk. The bias of the walk (the values of $p$ and $q$) depend on the velocity of the fluid medium along the x-axis.}
\end{figure}
The motion of the dispersed molecule is affected by Brownian motion;
the diffusion process is therefore probabilistic and, in turn, the
propagation time to the receiver is random. Even in the absence of any
imperfection in the implementation of a molecular communication system,
this uncertainty in the propagation time limits the maximum information
rate per molecule. In this paper, we study the maximum information per
molecule that the transmitter can convey to the receiver, for a certain
velocity and diffusion in the fluid medium. Before proceeding to do so,
we need to characterize the propagation of the molecule in the medium.
\subsection{Diffusion via Brownian motion}\label{derivation}
Consider the discrete-time, discrete-space propagation model in
Figure~\ref{fig:markovchain}. Let $X(n)$ denote the position of the
particle at time $n$. Let $P_X(x,n;x_o,n_o)$ denote the probability
mass function (pmf) of the position of the particle at time $n$, given
that it was dispersed in the fluid medium at position $x_o$ at time
$n_o$. Assume that the fluid medium is static, so the particle
disperses in either direction with equal probability. If $p$ is the probability that the particle moves from position $x$ to position $x+l$ in one time unit, and $q$ is the probability that it moves from position $x$ to $x-l$, then the static medium corresponds to
$p=q=0.5$. It is easy to see that $P_X(x,n;x_o,n_o)$ obeys the equation
\begin{eqnarray}
P_X(x,n+1;x_o,n_o) = \frac{1}{2}P_X(x-l,n;x_o,n_o) + \frac{1}{2}P_X(x+l,n;x_o,n_o),
\label{eqn:balance}
\end{eqnarray}
which states that if a particle at time $n+1$ is at position $x$, then
at time $n$, it should have been at position $x-l$ or $x+l$, where $l$
is the distance between two slices of space. This formulation of
Brownian motion is analogous to a Wiener process, in which increments
over disjoint time intervals are independent of each other.
Equation (\ref{eqn:balance}) can be re-written as
\begin{eqnarray}
\nonumber
\lefteqn{P_X(x,n+1;x_o,n_o)-P_X(x,n;x_o,n_o)} & & \\
\nonumber
& = & \frac{1}{2}(P_X(x-l,n;x_o,n_o)-P_X(x,n;x_o,n_o)) + \frac{1}{2}(P_X(x+l,n;x_o,n_o)-
P_X(x,n;x_o,n_o)) \\
\label{eqn:diff_derivation_1}
& = & \frac{l^2}{2} \cdot \frac{1}{l}
\left( \frac{P_X(x+l,n;x_o,n_o)-P_X(x,n;x_o,n_o)}{l} - \frac{P_X(x,n;x_o,n_o)-P_X(x-l,n;x_o,n_o)}{l} \right) .
\end{eqnarray}
When $n\gg 1$ and $x \gg l$, the difference equation tends to a partial differential equation for the probability density function (pdf) of the position of the particle, given by
\begin{equation}
\frac{\partial}{\partial n}P_X(x,n;x_o,n_o)
= \frac{l^2}{2} \frac{\partial^2}{\partial x^2} P_X(x,n;x_o,n_o) .
\label{eqn:diff_derivation}
\end{equation}
Now, considering a continuous time Brownian motion $X(t)$,
the probability density function of the position of the particle can be
modeled by the diffusion equation
\begin{equation}
\frac{\partial }{\partial t}P_X(x,t;x_o,t_o) =
D \frac{\partial^2}{\partial x^2}P_X(x,t;x_o,t_o),
\label{eqn:diff}
\end{equation}
where $D=l^2/2$ is the diffusion constant, whose value is dependent on
the viscosity of the fluid medium. Note that the above equation
characterizes only the `$x$-coordinate' of the position of the
molecule. Solutions to this equation are well known.
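As a sanity check on the random-walk picture behind Equation~(\ref{eqn:diff}), the diffusive scaling $\mathrm{Var}[X(n)] = 2Dn$ with $D = l^2/2$ can be verified by direct simulation of the unbiased walk. A minimal pure-Python sketch (all parameter values are illustrative):

```python
import random

def random_walk_variance(n_steps, l=1.0, trials=20000, seed=1):
    """Empirical variance of an unbiased 1-D random walk after n_steps."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        x = 0.0
        for _ in range(n_steps):
            x += l if rng.random() < 0.5 else -l  # p = q = 0.5
        finals.append(x)
    mean = sum(finals) / trials
    return sum((x - mean) ** 2 for x in finals) / trials

l, n = 1.0, 100
D = l * l / 2.0                  # diffusion constant, as in the text
var = random_walk_variance(n, l)
print(var, 2 * D * n)            # empirical variance vs. 2*D*n = 100
```

With 20{,}000 trials the empirical variance lands within a few percent of $2Dn$, consistent with the continuum limit above.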
Equation~(\ref{eqn:diff}) characterizes the motion $X(t)$ of the
particle in a macroscopically static medium. The more general and
useful case is that of a fluid medium in motion with a mean drift
velocity $v$. Consider a frame of reference which is moving with the
same velocity. In this frame, the fluid medium is static and hence the
diffusion of the particle should obey Equation~(\ref{eqn:diff}). Let
$$x^{'}=x-vt, \quad t^{'}=t$$ be the new coordinate system, and without
loss of generality, assume $t_o=0$. Let
$$P_X(x,t;x_o,0)=P^{'}_{X^{'}}(x^{'},t^{'};x_o,0);$$ then, in the moving frame,
$$\frac{\partial }{\partial t^{'}}P^{'}_{X^{'}}(x^{'},t^{'};x_o,0) = D \frac{\partial^2}{\partial
x^{'2}}P^{'}_{X^{'}}(x^{'},t^{'};x_o,0).$$
In the static frame of reference, applying the chain rule with $x=x^{'}+vt^{'}$ and $t=t^{'}$,
\begin{eqnarray}
\frac{\partial }{\partial t^{'}}P^{'}_{X^{'}}(x^{'},t^{'};x_o,0)
&=& \frac{\partial }{\partial t}P_{X}(x,t;x_o,0)\frac{\partial t}{\partial t^{'}}
+\frac{\partial }{\partial x}P_{X}(x,t;x_o,0)\frac{\partial x}{\partial t^{'}} \nonumber \\
&=& \frac{\partial }{\partial t}P_{X}(x,t;x_o,0)
+ v\frac{\partial }{\partial x}P_{X}(x,t;x_o,0) ,
\end{eqnarray}
while $\partial^2/\partial x^{'2}$ equals $\partial^2/\partial x^{2}$. Equating this to $D\,\partial^2 P_X/\partial x^2$ and rearranging yields
\begin{equation}
\frac{\partial }{\partial t}P_{X}(x,t;x_o,0) =
\left(D \frac{\partial^2}{\partial x^{2}}-
v\frac{\partial }{\partial x} \right) P_{X}(x,t;x_o,0) .
\label{eqn:diffwithdrift}
\end{equation}
Assume that there is no absorbing boundary (receiver) and that the fluid medium extends from $-\infty$ to $+\infty$. The probability density function of the location of the particle can be obtained by solving the differential Equation (\ref{eqn:diffwithdrift}) with the initial condition $P_X(x,0;x_o,0)=\delta(x-x_o)$ and boundary conditions $P_X(\pm \infty,t;x_o,0)=0.$ The
solution to (\ref{eqn:diffwithdrift}) is given by~\cite{karatzas-book}
\begin{equation}
P_X(x,t;0,0)
= \frac{1}{\sqrt{4\pi Dt}} \mathrm{exp}\left(-\frac{(x-vt)^{2}}{4Dt}\right) .
\label{eqn:gaussdiff}
\end{equation}
Equation (\ref{eqn:gaussdiff}) states that, for every $t$, the
probability density function (pdf) is a Gaussian centered at $vt$ with
variance $2Dt$. As expected, the mean location of the particle drifts along the direction of flow of the fluid medium with velocity $v$. Figure \ref{fig:pdf} plots $P(x,t)$ for the case when $v=3$ and $D=0.3$. Furthermore, for any transmitter point $\zeta$ and transmit time $t_0$,
we have that
\begin{equation}
P_X(x,t;\zeta,t_0)
= \frac{1}{\sqrt{4\pi D (t-t_0)}} \mathrm{exp}\left(-\frac{((x-\zeta)-v(t-t_0))^{2}}
{4D(t-t_0)}\right) .
\label{eqn:gaussdiffint}
\end{equation}
As expected, Brownian motion $X(t)$ satisfying
(\ref{eqn:gaussdiff})-(\ref{eqn:gaussdiffint}) is a Wiener process with
drift.
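The statistics implied by (\ref{eqn:gaussdiff}), mean $vt$ and variance $2Dt$, can be checked by simulating the drifted Wiener process with small Euler steps. A pure-Python sketch, with illustrative parameter values matching those of Figure \ref{fig:pdf}:

```python
import math
import random

def simulate_drift_diffusion(t, v, D, n_steps=200, trials=5000, seed=2):
    """Euler simulation of dX = v dt + sqrt(2 D) dW, started at X(0) = 0."""
    rng = random.Random(seed)
    dt = t / n_steps
    finals = []
    for _ in range(trials):
        x = 0.0
        for _ in range(n_steps):
            # Gaussian increment: mean v*dt, variance 2*D*dt
            x += v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

v, D, t = 3.0, 0.3, 2.0
xs = simulate_drift_diffusion(t, v, D)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)                 # expect ~ v*t = 6.0 and ~ 2*D*t = 1.2
```

Because the increments are exactly Gaussian, the Euler scheme introduces no discretization bias here; the residual error is purely Monte Carlo.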
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/pdfplot.eps}%
\caption{The pdf of the position of the molecule, $P(x,t)$, for different values of $t$, when the molecule is released at time $t=0$ at position $x=0$. Because of the positive drift velocity, the mean of the pdf travels in the positive direction, and because of diffusion, the variance of the pdf grows with time.}
\label{fig:pdf}
\end{figure}
Now, consider the case when there is an absorbing surface (receiver) at $x=0$. The particle is absorbed and removed from the system when it hits the absorbing surface. For such a system, to solve for $P_X(x,t;-\zeta,0)$, we need to solve the differential equation in (\ref{eqn:diffwithdrift}) with the following conditions.
\begin{itemize}
\item For $x<0, \quad P_X(x,0;-\zeta,0) = \delta(x+\zeta)$. The probability density function has a physical interpretation only for $x<0$. In this region, we require it to be a delta function at $t=0$ centered at $x=-\zeta$.
\item $P_X(-\infty,t;-\zeta,0) = 0, \quad \forall t$.
\item $P_X(0,t;-\zeta,0) = 0, \quad \forall t$. Condition imposed by the absorbing surface.
\end{itemize}
The solution to the differential equation can be computed using the \emph{method of images}; it is given by
\begin{equation}
P_X(x,t;-\zeta,0) = \frac{1}{\sqrt{4\pi Dt}} \mathrm{exp}\left(-\frac{(x+\zeta-vt)^{2}}{4Dt}\right)- \frac{1}{\sqrt{4\pi Dt}} \mathrm{exp}\left(-\frac{(x-\zeta-vt)^{2}}{4Dt}\right) \mathrm{exp}\left(\frac{v\zeta}{D}\right)
\label{eqn:eqnwithabs}
\end{equation}
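Two properties of (\ref{eqn:eqnwithabs}) can be verified numerically: the density vanishes at the absorbing boundary $x=0$ for all $t$, and it remains nonnegative in the physical region $x<0$. A quick sketch with illustrative parameter values:

```python
import math

def p_absorbing(x, t, zeta, v, D):
    """Image-method solution with an absorber at x = 0, release at x = -zeta."""
    norm = 1.0 / math.sqrt(4.0 * math.pi * D * t)
    g = lambda u: norm * math.exp(-u * u / (4.0 * D * t))
    # free-space Gaussian minus a weighted image charge beyond the boundary
    return g(x + zeta - v * t) - math.exp(v * zeta / D) * g(x - zeta - v * t)

zeta, v, D = 1.0, 1.0, 0.5
boundary = [p_absorbing(0.0, t, zeta, v, D) for t in (0.1, 1.0, 5.0)]
interior = min(p_absorbing(-5.0 + 0.1 * k, t, zeta, v, D)
               for t in (0.1, 1.0, 5.0) for k in range(50))
print(boundary, interior)        # boundary values ~0; interior minimum >= 0
```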
\subsection{Distribution of Absorption Time}
Recall from Section \ref{sec:sysmodel} that the receiver senses the
particles only when they arrive, at which time they are absorbed and
removed from the system. Thus, for the purposes of this paper, the most
important feature of the Brownian motion $X(t)$ expressed in
(\ref{eqn:diffwithdrift})-(\ref{eqn:gaussdiffint}) is the {\em first
passage time} at the destination. For a Brownian motion $X(t)$, and an
absorbing boundary located at position $\zeta$, the first passage time
$\tau(\zeta)$ at the barrier is defined as
\begin{equation}
\label{eqn:firstpassage}
\tau(\zeta) = \min \{ t \: : \: X(t) = \zeta \} .
\end{equation}
In Figure \ref{fig:trajectories}, simulated trajectories of six particles, each modeled as a random walk through the medium, are plotted. The particles were all released at $x=0$, three at time $0$ and three at time $400$, into a fluid medium with a positive drift velocity. The receiver is located at $x=50$. Notice the large variation in the absorption times. Among the particles released at $t=0$, one is absorbed at $t\approx 360$, another at $t\approx 500$, and the third is not absorbed even by $t=1000$. Furthermore, this plot shows how particles can be absorbed in an order different from the order in which they were released. It is therefore important to understand the variation in the propagation times of the particles.
The derivation of the first passage time for our case is given in \cite{chh-book}. Here, we repeat briefly the steps involved. At a given time $t$, the probability that the particle has not yet been absorbed is given by
\begin{eqnarray}
\bar{F}(t) &=& \int_{-\infty}^{0} P_X(x,t;-\zeta,0) dx \nonumber \\
&=& \int_{-\infty}^{0} \frac{1}{\sqrt{4\pi Dt}} \mathrm{exp}\left(-\frac{(x+\zeta-vt)^{2}}{4Dt}\right) dx \nonumber \\
&& - \mathrm{exp}\left(\frac{v\zeta}{D}\right) \int_{-\infty}^{0} \frac{1}{\sqrt{4\pi Dt}} \mathrm{exp}\left(-\frac{(x-\zeta-vt)^{2}}{4Dt}\right) dx \nonumber \\
&=& \left(1-\int_{\frac{-(vt-\zeta)}{\sqrt{2Dt}}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}} dx \right) - \mathrm{exp}\left(\frac{v\zeta}{D}\right) \left(1- \int_{\frac{-(vt+\zeta)}{\sqrt{2Dt}}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}} dx\right) \nonumber
\end{eqnarray}
$\bar{F}(t)$ is the probability that the particle has not been absorbed by time $t$. The probability that the particle has been absorbed before $t$ is $F(t)=1-\bar{F}(t)$. Hence, the probability density function of the absorption time is $f(t)=F^{'}(t)=-\bar{F}^{'}(t)$.
\begin{eqnarray}
f(t)&=& -\frac{d\bar{F}}{dt} \nonumber\\
&=& \left( \frac{d}{dt}\int_{\frac{-(vt-\zeta)}{\sqrt{2Dt}}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}} dx \right) - \mathrm{exp}\left(\frac{v\zeta}{D}\right) \left(\frac{d}{dt} \int_{\frac{-(vt+\zeta)}{\sqrt{2Dt}}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}} dx\right) \nonumber \\
&=& - \frac{1}{\sqrt{2\pi}} \mathrm{exp}\left(\frac{-(vt-\zeta)^2}{4Dt}\right) \left( \frac{-v}{\sqrt{2Dt}} + \frac{(vt-\zeta)}{2\sqrt{2Dt^3}} \right) + \nonumber\\
&&\mathrm{exp}\left(\frac{v\zeta}{D}\right) \frac{1}{\sqrt{2\pi}} \mathrm{exp}\left(\frac{-(vt+\zeta)^2}{4Dt}\right) \left( \frac{-v}{\sqrt{2Dt}} + \frac{(vt+\zeta)}{2\sqrt{2Dt^3}} \right) \nonumber\\
&=& \frac{\zeta}{\sqrt{4\pi Dt^3}} \mathrm{exp}\left(\frac{-(vt-\zeta)^2}{4Dt}\right)
\label{eqn:abstime}
\end{eqnarray}
To summarize, (\ref{eqn:abstime}) gives the probability density function of the absorption time of a particle released in a fluid medium with diffusion constant $D$, at a distance $\zeta$ from the receiver, when the fluid has a constant velocity $v$. Note that this equation is valid only for positive drift velocities, i.e., when the receiver is downstream from the transmitter. Since our
communication is based largely on the time of transmission (and
reception), this pdf characterizes the \emph{uncertainty} in the
channel, and plays a role similar to that of the noise distribution
in an additive noise channel. Some example plots of this function are given in
Figure~\ref{fig:veldif}.
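The density in (\ref{eqn:abstime}) is that of an inverse Gaussian random variable with mean $\zeta/v$; for $v>0$ it integrates to one, so absorption is certain. Both facts can be checked by simple quadrature (pure Python; the parameter values are illustrative):

```python
import math

def f_abs(t, zeta, v, D):
    """Absorption-time density of eqn (abstime)."""
    if t <= 0.0:
        return 0.0
    return (zeta / math.sqrt(4.0 * math.pi * D * t ** 3)
            * math.exp(-(v * t - zeta) ** 2 / (4.0 * D * t)))

def trapezoid(fn, a, b, n=100000):
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b))
    for k in range(1, n):
        s += fn(a + k * h)
    return s * h

zeta, v, D = 1.0, 1.0, 0.5
total = trapezoid(lambda t: f_abs(t, zeta, v, D), 0.0, 60.0)
mean = trapezoid(lambda t: t * f_abs(t, zeta, v, D), 0.0, 60.0)
print(total, mean)               # expect ~1 and ~zeta/v = 1
```

The upper integration limit of 60 truncates only a negligible tail for these parameters; for small $v$ the tail decays far more slowly, which is exactly the low-velocity difficulty discussed below.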
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/veldif.eps}%
\caption{The time at which the molecule is absorbed by the receiver, given that it was released at time 0, is a random variable; this is a result of diffusion in the fluid medium. Here, we plot the probability density function of the absorption time for
different sets of velocity and diffusion values. For this plot, the distance between the transmitter and the receiver is set at 1 unit.}
\label{fig:veldif}
\end{figure}
\section{Mutual Information}\label{mutinf}
The transmitter encodes the message in the time of release of molecules
and possibly the number of molecules. Based on the number and the time
of absorption of the molecules, the receiver decodes the transmitted
information. This section develops the mutual information between the
transmitter and receiver for two cases: with a single transmitted
molecule and two molecules whose release times can be chosen
independently. For a given information transmission strategy at the transmitter
(called the {\em input distribution} in the information theoretic literature),
the mutual information is also the maximum rate at which
information may be conveyed using that strategy. (Mutual information
is related to but distinct from the {\em capacity}, which
is the maximum mutual information over all possible input distributions.)
\subsection{Overview}
In a traditional wireline communication system,
receiver noise causes uncertainty in the reception, limiting the rate
at which information can be conveyed.
However, as
discussed before, the uncertainty in the propagation time is a major
bottleneck to the information transfer in molecular communication.
This uncertainty in the
propagation time also means that the order in which molecules are
received at the receiver need not be the order in which they were
transmitted. This will result in ``inter-block interference''. This is a
serious impairment in the low velocity regime, where the pdf of the
absorption time decays very slowly, making inter-block interference more likely.
In this paper, we ignore inter-block interference and assume that the
clocks are synchronized. Developing techniques to handle both issues is
significant work in itself and outside the scope of this paper. Our
results are therefore most relevant to systems in a
fluid with some significant drift; further, as we show in Section \ref{sec:cap},
our results can be used to obtain upper bounds on both mutual information and capacity
for any drift velocity.
The channel here falls under the class of timing channels, in which communication takes place through the timings of various events. The capacity of such channels is usually difficult to characterize. A celebrated result in this field is the computation of the capacity of a single-server queuing system \cite{BitsThroughQueues}. The molecular communication channel can be modeled as a $\cdot/G/\infty$ queuing system, i.e., an infinite-server queuing system in which the service time of each server is a random variable distributed as the absorption time. To our knowledge, the exact capacity of such a channel has not been computed to date.
\subsection{Single molecule: Pulse position modulation}
We first analyze the case of the transmitter releasing just a single
molecule. In such a scenario, it can encode information only in the
time of release of the molecule. The transmitter either releases the
molecule at the beginning of one of $N$ time slots, each of unit
duration (i.e., $T_s = 1$ in arbitrary units), or does not release it at all; this
action on the part of the transmitter is called a {\em channel use}.
This molecule then propagates through the
medium and is absorbed by the receiver in a later time slot. The
receiver then guesses the time slot in which the molecule was released.
This is a form of \emph{pulse-position modulation} (PPM).
Given that it has $(N+1)$ choices, the transmitter can encode a maximum
of $\textrm{log}_2 (N+1)$ bits of information per channel use, though in practice
much less due to the uncertain arrival times of the molecules.
For instance, suppose the velocity of
the fluid medium is high enough that the particle is absorbed by
the receiver within $M \ll N$ time slots with very high probability.
In this case, one transmission strategy would be to emit a molecule in one of
$N/M$ time slots (each separated by $M$ slots), since inter-block interference
would thus occur with very low probability, and the transmitted information would
arrive without distortion.
For
such an ideal system, we can transmit information at a rate of
$\textrm{log}_2 (N/M + 1)$ bits per channel use. However, the less than
ideal case of lower velocities is more practical and interesting.
In this paper we neglect inter-block interference, i.e., we assume that
the receiver waits for enough time slots $M$, to ensure that the
molecule propagates to the receiver with high probability. Here $M$ is
chosen such that this probability is 0.999. Further, we assume that the
receiver sampling interval is $T_r = T_s/5$. This provides a digital
input/output system while maintaining fairly high accuracy of the
received time. Both these parameters could be changed as required.
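For instance, the waiting window $M$ can be computed by accumulating the mass of the density in (\ref{eqn:abstime}) slot by slot until the 0.999 target is met. A sketch using midpoint quadrature (the parameter values are illustrative):

```python
import math

def f_abs(t, zeta, v, D):
    """Absorption-time density of eqn (abstime)."""
    if t <= 0.0:
        return 0.0
    return (zeta / math.sqrt(4.0 * math.pi * D * t ** 3)
            * math.exp(-(v * t - zeta) ** 2 / (4.0 * D * t)))

def slots_needed(zeta, v, D, T_s=1.0, target=0.999, dt=1e-3, max_slots=10000):
    """Smallest M with F(M*T_s) >= target, by running midpoint quadrature."""
    cdf, t0 = 0.0, 0.0
    for M in range(1, max_slots + 1):
        steps = int(round(T_s / dt))
        for k in range(steps):
            cdf += f_abs(t0 + (k + 0.5) * dt, zeta, v, D) * dt
        t0 += T_s
        if cdf >= target:
            return M
    raise RuntimeError("drift too small: target never reached within max_slots")

M = slots_needed(zeta=1.0, v=1.0, D=0.5)
print(M)                         # number of unit slots the receiver waits
```

For $\zeta=1$, $v=1$, $D=0.5$ (an inverse Gaussian with mean 1), this gives $M=9$; as $v$ decreases the required window grows rapidly.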
\subsection{Mutual information as an optimization problem}
Having dealt with preliminaries, we now pose the maximization of the
mutual information as an optimization problem. Define a random
variable $X$ to denote the time slot in which the transmitter releases
the molecule. Assume that the transmitter releases the particle at the
beginning of the $i^{th}$ slot ($1\leq i \leq N$) with probability
$p_i$. With probability $p_{0}=1-\sum_{i=1}^{N}p_i $, the transmitter does not
release the particle. Let $Y$ denote the time
slot in which the receiver absorbs the molecule. For the time being, we
allow $Y$ to range between $1$ and $\infty$; we will see shortly that
this is not required. Also, let $Y=0$ denote the event that the
molecule is never received. Since, in our idealized case, the receiver
waits for a sufficiently long time, the event $Y=0$ coincides with the
event that the molecule is not transmitted. Assume that the duration of
each receiver time slot is $T_r$.
Let $F(t)$ denote the probability that the particle gets absorbed
before time $t$ given that it was released at time 0, i.e., $F(t)$ is
the integral of the pdf in \eqref{eqn:abstime}. Denote by $\alpha_j$
the probability that the particle arrives in the $j^{th}$ time slot,
given that it was released at time 0, which is equal to
$F(jT_r)-F((j-1)T_r)$; we set $\alpha_j = 0$ for $j\leq 0$. Let $H(X)$ denote the
entropy of random variable $X$ and let $\mathrm{entr}(x)$ represent
the function
\begin{equation}
\mathrm{entr}(x) = \begin{cases} -x\mathrm{log}_2 x & x>0 \\
0 & x=0 .
\end{cases}
\end{equation}
We now proceed to
calculate the mutual information between the random variables $X$ and
$Y$.
\begin{eqnarray}
H(Y|X)&=& H(Y|X=0)p_0+\sum_{i=1}^{N} H(Y|X=i) p_i \nonumber\\
&=& 0\times p_0+ \sum_{i=1}^{N} p_i \sum_{j=i+1}^{\infty} \mathrm{entr}\left(P(Y=j|X=i)\right)
= \sum_{i=1}^{N} p_i \sum_{j=i+1}^{\infty} \mathrm{entr}\left(\alpha_{j-i}\right)\nonumber\\
&=& (1-p_0)\sum_{k=1}^{\infty} \mathrm{entr}\left(\alpha_{k}\right), \quad \\
H(Y)&=& \hspace*{-0.1in}\mathrm{entr}(P(Y=0))+\sum_{j=1}^{\infty}\mathrm{entr}(P(Y=j))\nonumber\\
&=& \hspace*{-0.1in}\mathrm{entr}(p_0)+
\sum_{j=1}^{\infty}\mathrm{entr}\left(\sum_{i=1}^{N}P(Y=j|X=i)p_i\right)\nonumber\\
&=& \hspace*{-0.1in} \mathrm{entr}(p_0)+
\sum_{j=1}^{\infty}\mathrm{entr}\left(\sum_{i=1}^{N}\left(\alpha_{j-i}\right)p_i\right)
\end{eqnarray}
\begin{eqnarray}
I(X;Y) &=&H(Y)-H(Y|X) \nonumber \\
&=& \hspace*{-0.1in}\mathrm{entr}(p_0)+
\sum_{j=1}^{\infty}\mathrm{entr}\left(\sum_{i=1}^{N} p_i \alpha_{j-i}\right)
-(1-p_0)\sum_{k=1}^{\infty} \mathrm{entr}\left(\alpha_{k}\right)
\label{eqn:mut_inf}
\end{eqnarray}
As seen in Figure \ref{fig:veldif}, the sequence $\{\alpha_j\}$ is an eventually decreasing sequence. The rate of decay
depends on the values of the drift velocity $v$ and the diffusion
coefficient $D$. The summations in (\ref{eqn:mut_inf}) can, therefore,
be terminated for some large enough $M$.
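With the sums truncated at $M$, the expression in (\ref{eqn:mut_inf}) is straightforward to evaluate for any input distribution. A sketch (the arrival probabilities $\alpha_j$ below are placeholders; in practice they come from the absorption-time cdf); in the noiseless limit $\alpha_1=1$ the expression reduces to $H(X)$, which provides a quick correctness check:

```python
import math

def entr(x):
    return -x * math.log2(x) if x > 0.0 else 0.0

def mutual_information(p, alpha, M):
    """Evaluate eqn (mut_inf). p = [p_1, ..., p_N]; p_0 is implied.
    alpha[k] = P(arrival k receiver slots after release); alpha_k = 0 for k <= 0."""
    N = len(p)
    p0 = 1.0 - sum(p)
    a = lambda k: alpha[k] if 0 < k < len(alpha) else 0.0
    h_y = entr(p0) + sum(
        entr(sum(p[i - 1] * a(j - i) for i in range(1, N + 1)))
        for j in range(1, M + 1))
    h_y_given_x = (1.0 - p0) * sum(entr(a(k)) for k in range(1, M + 1))
    return h_y - h_y_given_x

N = 4
p = [1.0 / (N + 1)] * N          # uniform over N slots plus "no release"
I = mutual_information(p, alpha=[0.0, 1.0], M=N + 1)
print(I)                         # noiseless channel: log2(N+1) ~ 2.3219 bits
```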
The expression for mutual information is a non-negative weighted sum of
concave functions plus a constant. Hence, the mutual information is a
concave function of the input distribution $\{p_i, i=1,\ldots,N\}$.
Finding the values of the $p_i$s which maximize
the mutual information is therefore a concave maximization problem. Standard
convex optimization techniques can thus be used to solve efficiently for the
input probability distribution which maximizes the mutual information;
in particular, the Blahut-Arimoto algorithm \cite{Blahutarimoto,ArimotoBlahut}.
As a special case, suppose that we were to convey information only in
the time of release of the molecule, i.e., we require the molecule to
be transmitted. The derivation of mutual information is very similar to
the derivation above. Mutual information can then be expressed as
\begin{equation}\label{eqn:withoutp0}
I(X;Y)= \sum_{j=1}^{M}\mathrm{entr}\left(\sum_{i=1}^{N} p_i \alpha_{j-i}\right) -
\sum_{j=1}^{M} \mathrm{entr}\left(\alpha_{j}\right),
\end{equation}
and, again, the optimal input distribution can be obtained through
concave maximization.
\subsection{Two molecules}\label{multiplemolecules}
In the work so far we have considered only the propagation of a single
molecule and the focus was on PPM-based communication. We now take a
step toward amplitude modulation, wherein the transmitter can release two identical
molecules. The analysis is simplified by assuming that the propagation
paths of these two molecules are independent. The transmitter releases
each of these molecules in one of the $N$ time slots or chooses not to
release it. Based on the arrival times of these molecules at the
receiver, the receiver estimates their release times. However,
because of the nature of the diffusion medium, different molecules can
take different times to propagate to the receiver. Hence, the molecules
can be absorbed in a different order than in which they were released:
a key difference between this channel and traditional additive noise channels. As a result, the amount of information that can be conveyed through the medium with two indistinguishable molecules, as we will shortly see, is less than twice the amount of the information that can be conveyed using a single molecule.
To obtain the maximum mutual information, let $X_1\in\{1,2,\ldots,N\}$
be the time slot in which the first particle is released,
$X_2\in\{X_1,X_1+1,\ldots,N\}$ be the time slot in which the second
particle is released. Let $Y_1$ and $Y_2$ be the time slots in which
the first and second particles are received. For notational
convenience, if a particle is not released, we denote it by a release
in slot 0. Likewise, if a particle is not received at the receiver, we
denote it by a reception in time slot 0.
The probability mass function of the reception times
$(P(Y_1,Y_2))$, and the conditional probability mass function
of the reception times given the transmission times
$(P(Y_1,Y_2|X_1,X_2))$ can be expressed in terms of the conditional
probability mass function of the reception time of one
molecule, given its transmission time
$(P(Y_1=y_1|X_1=x_1)=\alpha_{y_1-x_1})$. Let $p_{x_1x_2}$ represent
$P(X_1=x_1,X_2=x_2)$.
\begin{eqnarray}
P(Y_1=y_1,Y_2=0|X_1=x_1,X_2=0)&=&\alpha_{y_1-x_1}, \quad x_1,y_1>0\nonumber \\
P(Y_1=y,Y_2=y|X_1=x_1,X_2=x_2)&=&\alpha_{y-x_1}\alpha_{y-x_2}, \quad x_2\geq x_1,y>0 \nonumber\\
\lefteqn{P(Y_1=y_1,Y_2=y_2|X_1=x_1,X_2=x_2)=} \nonumber \\
&& \alpha_{y_1-x_1}\alpha_{y_2-x_2}+ \alpha_{y_1-x_2}
\alpha_{y_2-x_1}, \quad x_2 \geq x_1, \ y_1 \neq y_2>0 \nonumber\\
P(Y_1=0,Y_2=0)&=&p_{00}\nonumber \\
P(Y_1=y_1,Y_2=0)&=&\sum_{x_1=1}^{N}p_{x_10}\big( \alpha_{y_1-x_1} \big)\nonumber\\
P(Y_1=y,Y_2=y)&=&\sum_{x_1=1}^{N} \sum_{x_2=x_1}^{N} p_{x_1x_2}
\big(\alpha_{y-x_1}\alpha_{y-x_2}\big) \nonumber \\
P(Y_1=y_1,Y_2=y_2\ne y_1)&=&\sum_{x_1=1}^{N} \sum_{x_2=x_1}^{N} p_{x_1x_2} \big(\alpha_{y_1-x_1}
\alpha_{y_2-x_2}+\alpha_{y_1-x_2}\alpha_{y_2-x_1}\big)\nonumber
\end{eqnarray}
The term $\alpha_{y_1-x_2}\alpha_{y_2-x_1}$ in the above equations
accounts for the event that the molecule released later gets absorbed
before the molecule which is released earlier. The mutual information
between the variables $(X_1,X_2)$ and $(Y_1,Y_2)$ can now be written in
terms of these probability mass functions. Note that in the
above derivation, we have assumed that $\alpha_{k}$ for $k\leq0$ is
defined as zero.
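The case analysis above can be checked mechanically: reading $(Y_1,Y_2)$ as an unordered pair ($y_1 \le y_2$) and taking $\sum_k \alpha_k = 1$, the conditional probabilities must sum to one for every release pair $(x_1,x_2)$. A sketch with a toy (hypothetical) arrival-delay pmf:

```python
def joint_cond_pmf(x1, x2, alpha, M):
    """P(Y1=y1, Y2=y2 | X1=x1, X2=x2) over unordered pairs y1 <= y2."""
    a = lambda k: alpha[k] if 0 < k < len(alpha) else 0.0
    pmf = {}
    for y in range(1, M + 1):
        pmf[(y, y)] = a(y - x1) * a(y - x2)          # simultaneous absorption
    for y1 in range(1, M + 1):
        for y2 in range(y1 + 1, M + 1):
            # either molecule may account for either arrival
            pmf[(y1, y2)] = a(y1 - x1) * a(y2 - x2) + a(y1 - x2) * a(y2 - x1)
    return pmf

alpha = [0.0, 0.5, 0.3, 0.15, 0.05]                  # toy delays, sums to 1
totals = [sum(joint_cond_pmf(x1, x2, alpha, M=12).values())
          for (x1, x2) in [(1, 1), (1, 3), (2, 4)]]
print(totals)                                        # each should be ~1.0
```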
Using these equations, we can frame the mutual information maximization
as another optimization problem. The optimization is to be done over
the upper triangular $N\times N$ matrix $P_{X_1,X_2}(x_1,x_2)$, where
each entry in the matrix is a nonnegative number and all the entries sum
to one. The mutual information is a concave function of the
optimization variables $\{ p_{x_1x_2}:
x_1\in\{1,2,\ldots,N\},x_2\in\{x_1,x_1+1,\ldots,N\} \}$. The exact
expression is tedious to write, and is omitted here.
\section{Results}
\label{sec:sim}
The well known Blahut-Arimoto algorithm \cite{Blahutarimoto,ArimotoBlahut} is used to compute, numerically, the input distribution that maximizes the mutual information in each of the different scenarios. The distance from the sender to the receiver, $\zeta$, is set to one unit in all the results presented here.
\subsection{Release of a single molecule}
When one molecule is to be released, information can be conveyed in whether it is released or not, and if released, the slot number in which it is released.
\paragraph*{Case when the molecule can be released in one of the $N$ slots, or not at all}
In Figure~\ref{ratevsvel}, we plot the mutual information as a function of velocity, for two
different diffusion coefficients: 0.05, representing the low
diffusion scenario, and 0.2, representing high diffusion. We have two sets
of plots in the figure, one for the case where we have two slots in
which we can release the molecule, or choose not to release it, and
another, where we have four time slots. Also, we give the input
distribution $(p_0,p_1,p_2,p_3,p_4)$ at which the mutual information is
maximized at the two extreme values of velocity.
From the figure, it is evident that the mutual information increases
with an increase in velocity and saturates to a maximum of
$\mathrm{log}_2(N+1)$ bits. This trend is as expected. At high
velocities, the optimal, information maximizing, distribution is
uniform. This is because the receiver can detect, without error, the
slot in which the transmitter disperses the molecule. Also, because the
receiver waits for a sufficiently long time, we can detect, without
error, if a molecule was transmitted or not. Therefore, a lower limit
on the mutual information is one bit. At lower velocities, timing
information is completely lost and the mutual information is marginally
greater than one bit.
The diffusion constant is a measure of the uncertainty in the
propagation time. Hence, we would expect the mutual information to be
lower when the diffusion constant is high. This is indeed the case at
high velocities. However, it is surprising that a higher diffusion
constant results in higher mutual information at low velocities (see also Figure \ref{fig:grid}). This
is because, at low velocities, it is the diffusion in the medium which
aids the propagation of the molecule from the transmitter towards the
receiver. This is illustrated in the pdf of the absorption time, shown
in Figure \ref{fig:veldif}. Compared to the case when the diffusion in
the medium is low, the probability distribution function is more
``concentrated'' (lower uncertainty) when the diffusion in the medium
is higher. Unfortunately, there does not seem to be a single parameter
that characterizes the resulting interplay between velocity and
diffusion.
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/ratevsvel.eps}%
\caption{Variation of mutual information (which measures, in bits, the information that can be conveyed from the transmitter to the receiver) with velocity. There are two sets of curves corresponding to the number of slots in which the molecule may be released, $N=2$ and $N=4$. For $N=4$, we also list the p.m.f. of the release times which maximizes the mutual information.}%
\label{ratevsvel}%
\end{figure}
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/gridplotlarge.eps}%
\caption{A grid plot denoting the mutual information for a range of different velocities and diffusion constants, for the case when $N=4$. Observe that at lower velocities, more information can be transferred in a medium with higher diffusion constant.}
\label{fig:grid}%
\end{figure}
\paragraph*{Case when the transmitter is required to release the molecule}
The information, in this scenario, is conveyed only in the time of release of the molecule. We find the input distribution which maximizes
(\ref{eqn:withoutp0}). The mutual information in this case is plotted
in Figure \ref{withoutp0}. The maximum mutual information is now
$\mathrm{log}_2(N)$ bits, which is achieved at high velocities.
However, it is in the low velocity regime where the mutual information
is significantly lower than the case where the transmitter is allowed
to not transmit the molecule. Figure \ref{comp} compares the two
scenarios.
From the results, we see that the velocity-diffusion region can be
roughly classified into three regimes:
\begin{itemize}
\item A diffusion dominated region, where mutual information is
relatively insensitive to the velocity; this corresponds to
$v<10^{-1}$ in Figure~\ref{ratevsvel}.
\item A high-velocity region where the mutual information is
insensitive to the diffusion constant; this corresponds to $v
> 3$ in Figure~\ref{ratevsvel}.
\item An intermediate regime, where the mutual information is
highly sensitive to the velocity and diffusion constant of the
medium, $10^{-1}<v<3$ in Figure \ref{ratevsvel}.
\end{itemize}
In the low velocity regime, we see no significant improvement in the
mutual information when we increase the number of time slots in which
we can release the molecules. As expected, very little information can
be conveyed in the time of release of the molecule when there is high
uncertainty in the propagation time. Hence, we need to explore
alternative ways of encoding the message in this regime.
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/ratevsvel_withoutp0.eps}%
\caption{Variation of mutual information with velocity when the transmitter
must disperse the molecule ($p_0 = 0$). The scenario is similar to the one used in plotting Figure \ref{ratevsvel}, with the difference that here the transmitter is required to release a molecule. }
\label{withoutp0}%
\end{figure}
\begin{figure}%
\centering
\includegraphics[scale=0.4]{figs/comparison.eps}%
\caption{A comparison of the information bits conveyed in the scenarios where the transmitter must release the molecule ($p_0 = 0$) and where it may choose not to release it. Plots from Figures \ref{ratevsvel} and \ref{withoutp0} are compared here.}%
\label{comp}%
\end{figure}
\subsection{Release of multiple molecules}
Here, we present the results for the scenario in which the transmitter is allowed to release at most two molecules. The results are presented in Figure \ref{two}. We have two sets of plots: one where the transmitter can
release the molecules in one of two time slots, and another where the
transmitter can release them in one of four time slots. At low
velocities, the mutual information is close to $\mathrm{log}_2 3$ bits.
This is because, at low velocities, any information encoded in the time
of release of the molecule is lost. The receiver can however accurately
estimate the number of molecules transmitted. With two molecules, the
receiver can decode whether the number of molecules transmitted was
zero, one, or two. This is possible, however, only because the receiver
waits for a sufficiently long time. The probability distribution which attains the
maximum mutual information at low velocities assigns, roughly, a
probability of $\frac{1}{3}$ to each of the events of releasing zero, one, or
two molecules.
At very high velocities, information encoded in both the time and
number of molecules released is retained through the propagation.
Hence, a maximum of $\mathrm{log}_2 \frac{(N+1)(N+2)}{2}$ bits can be
conveyed at high velocities.
In Tables \ref{t2low}, \ref{t2high} and \ref{t2high_upp} we list the
mutual information maximizing input distributions for the case of
release of two molecules in \emph{two} time slots. Tables \ref{t4low},
\ref{t4high} and \ref{t4high_upp} list the input distributions for the
case of release of two molecules in \emph{four} time slots. As
expected, at low velocities, the total probability of releasing one,
two or zero molecules is roughly one third each. The molecules, to
minimize uncertainty, are transmitted `far apart'.
It is, however, surprising to note that at moderate velocities, when
two molecules are released, the optimal input distribution releases both in the
same time slot. This may be explained by the fact that, due to diffusion,
molecules can arrive out of order and the timing information is lost.
Transmitting both molecules at once avoids this confusion. This is also
an important result; if this trend is to hold true for the release of
multiple molecules, then we could consider only those schemes wherein
all the molecules are released in one of the time slots, and where
information is encoded only in the time slot in which all the molecules
are released.
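A quick comparison (ours; whether ``no release'' counts as an extra symbol is our assumption) shows how much smaller the alphabet of such a restricted scheme is:

```python
# Quick comparison (ours): if all molecules must be released in a single
# slot (or not at all), only N + 1 symbols remain, versus the full
# (N+1)(N+2)/2 multiset alphabet for at most two molecules in N slots.
from math import log2

for N in (2, 4):
    full = (N + 1) * (N + 2) // 2       # multisets of size <= 2 from N slots
    restricted = N + 1                  # chosen slot, plus a "no release" symbol
    print(N, full, restricted, round(log2(full / restricted), 3))
```

The restriction costs $\mathrm{log}_2 2 = 1$ bit per symbol for two slots and $\mathrm{log}_2 3 \approx 1.585$ bits for four slots, in exchange for a much simpler encoder.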
\begin{table}
\centering
\caption{Mutual information maximizing input distribution when two molecules are released in one of
the two possible slots or not released at all, $v=10^{-2}$, $d=0.05$}
\label{t2low}
\begin{tabular}{|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.1424 & 0 & 0.1412 \\
$P(X_1=2)$ & 0 & 0.1939 & 0.1921 \\
$P(X_1=0)$ & 0 & 0 & 0.3303 \\
\hline
\end{tabular}
\caption{$v=10^{-2}$, $d=0.2$}
\label{t2high}
\begin{tabular}{|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.1382 & 0 & 0.1299 \\
$P(X_1=2)$ & 0 & 0.2113 & 0.2035 \\
$P(X_1=0)$ & 0 & 0 & 0.3171 \\
\hline
\end{tabular}
\caption{$v=10$, $d=0.2$ or $0.05$}
\label{t2high_upp}
\begin{tabular}{|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.1667 & 0.1667 & 0.1667 \\
$P(X_1=2)$ & 0 & 0.1667 & 0.1667 \\
$P(X_1=0)$ & 0 & 0 & 0.1667 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Mutual information maximizing input distribution when two molecules are
released in one of the four possible slots or not released at all, $v=10^{-2}$, $d=0.05$}
\label{t4low}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=3)$ &$P(X_2=4)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.1395 & 0 & 0 & 0 & 0.1313 \\
$P(X_1=2)$ & 0 & 0 & 0 & 0 & 0 \\
$P(X_1=3)$ & 0 & 0 & 0 & 0 & 0 \\
$P(X_1=4)$ & 0 & 0 & 0 & 0.2094 & 0.2022 \\
$P(X_1=0)$ & 0 & 0 & 0 & 0 & 0.3176 \\
\hline
\end{tabular}
\caption{$v=10^{-2}$, $d=0.2$}
\label{t4high}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=3)$ &$P(X_2=4)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.1345 & 0 & 0 & 0 & 0.1256 \\
$P(X_1=2)$ & 0 & 0.0305 & 0 & 0 & 0.0122 \\
$P(X_1=3)$ & 0 & 0 & 0.0129 & 0 & 0 \\
$P(X_1=4)$ & 0 & 0 & 0 & 0.2052 & 0.1964 \\
$P(X_1=0)$ & 0 & 0 & 0 & 0 & 0.2827 \\
\hline
\end{tabular}
\caption{$v=10$, $d=0.2$ or $0.05$}
\label{t4high_upp}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $P(X_2=1)$ & $P(X_2=2)$ & $P(X_2=3)$ &$P(X_2=4)$ & $P(X_2=0)$ \\
\hline
$P(X_1=1)$ & 0.0667 & 0.0667 & 0.0667 & 0.0667 & 0.0667 \\
$P(X_1=2)$ & 0 & 0.0667 & 0.0667 & 0.0667 & 0.0667 \\
$P(X_1=3)$ & 0 & 0 & 0.0667 & 0.0667 & 0.0667 \\
$P(X_1=4)$ & 0 & 0 & 0 & 0.0667 & 0.0667 \\
$P(X_1=0)$ & 0 & 0 & 0 & 0 & 0.0667 \\
\hline
\end{tabular}
\end{table}
\begin{figure}%
\centering
\includegraphics[scale=0.6]{figs/twofour}%
\caption{Variation of mutual information with velocity for the case when the transmitter is allowed to release at most two molecules. The mutual information maximizing input distributions at the extreme points of the graph are given in Tables \ref{t2low}, \ref{t2high}, \ref{t2high_upp}, \ref{t4low}, \ref{t4high} and \ref{t4high_upp}. }%
\label{two}%
\end{figure}
\section{Relationship to achievable information rates and capacity}
\label{sec:cap}
When pulse-position modulation is used, symbols are normally
transmitted consecutively. That is, if the duration of a symbol is
$T$, then the first symbol is transmitted on the interval $[0,T)$, the
second on the interval $[T,2T)$, and so on. However, for the Brownian
motions considered in Section \ref{sec:sysmodel}, molecules transmitted
during a given interval may arrive during a later interval, causing
inter-block interference. In this paper, we have avoided this problem
by only considering symbols transmitted in isolation, disregarding inter-block interference.
In fact, for a fixed input distribution,
our information results lead to an {\em upper bound} on the
mutual information under consecutive symbol transmission. To show this,
let $\mathcal{X}$ represent the alphabet of allowed symbols. For
simplicity, suppose that a symbol is composed of the release of a
single molecule, although this assumption can be relaxed without
changing the argument. Then we will assume that $\mathcal{X}$ is a
finite, discrete list of allowed molecule release times on the interval $[0,T)$;
the cardinality $|\mathcal{X}|$ gives the number of allowed release times.
Further, there exists a discrete input distribution, with pmf $p_X(x)$, over $\mathcal{X}$.
Let $\mathcal{Y}$ represent the
corresponding set of channel outputs, given a {\em single} symbol input
to the channel, and disregarding inter-block interference.
Since the channel output is the arrival time of a single
molecule transmitted on the interval $[0,T)$, clearly $\mathcal{Y}= [0,\infty)$; nothing in the argument
changes if $\mathcal{Y}$ is quantized.
Let $\mathbf{x} = [x_1, x_2, \ldots, x_n] \in \mathcal{X}^n$ and
$\mathbf{y} = [y_1, y_2, \ldots, y_n] \in \mathcal{Y}^n$ represent
vectors of channel inputs and outputs, respectively, for $n$ uses of
the channel in isolation; throughout this section, we will assume that $x_i$ is independent and
identically distributed (IID) for each $i$.
Suppose the symbols in $\mathbf{x}$ are
transmitted consecutively. Then the resulting sequence of molecule
release times can be written $\mathbf{r} = [r_1,r_2,\ldots]$, where
\begin{equation}
\label{eqn:r}
r_i = x_i + (i-1)T .
\end{equation}
Since $x_i \in [0,T)$, clearly $r_i \in [(i-1)T,iT)$.
Note that the mutual information per unit time of the
channel is given by $I(X;Y)/T$, which is calculated for given $\mathcal{X}$ and $p_X(x)$.
Let $u_i$ represent the arrival corresponding to $r_i$. Since $r_i$ is
a time-delayed version of $x_i$, and $y_i$ is the arrival corresponding
to $x_i$, from (\ref{eqn:r}) we have
\begin{equation}
\label{eqn:u}
u_i = y_i + (i-1)T .
\end{equation}
The corresponding vector is $\mathbf{u} =
[u_1,u_2,\ldots,u_n]$.
However, the receiver does not observe $\mathbf{u}$ directly -- instead, it observes
$\mathbf{w}$, where
\begin{equation}
\label{eqn:sort}
\mathbf{w} = \mathrm{sort}(\mathbf{u}) ,
\end{equation}
and where the function $\mathrm{sort}(\cdot)$ sorts the argument vector in increasing order.
That is, while information symbol $x_i$ corresponds to arrival time $u_i$, it is potentially unclear
which element of $\mathbf{x}$ corresponds to arrival time $w_i$.
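The sorting channel can be illustrated with a short Monte Carlo sketch (ours, with arbitrary parameter values): the first-passage delay of a Brownian motion with drift $v$ and diffusion coefficient $D$ over distance $d$ is inverse-Gaussian (Wald) distributed with mean $d/v$ and shape $d^2/(2D)$, and the receiver only ever sees the sorted arrivals.

```python
# Monte Carlo sketch (ours) of the sorting channel: one molecule per symbol,
# released at offset x_i within slot i, with an inverse-Gaussian first-passage
# delay; the receiver observes only the sorted arrival times w.
import numpy as np

rng = np.random.default_rng(0)
d, v, D, T = 1.0, 0.5, 0.2, 4.0          # distance, drift, diffusion, slot length
ig_mean, ig_shape = d / v, d**2 / (2 * D)  # inverse-Gaussian parameters

x = rng.uniform(0.0, T, size=5)          # release offsets within each slot
r = x + np.arange(5) * T                 # consecutive release times r_i = x_i + (i-1)T
u = r + rng.wald(ig_mean, ig_shape, size=5)  # arrival times u_i
w = np.sort(u)                           # what the receiver actually observes
print("arrivals out of order:", not np.array_equal(u, w))
```

Whenever two delays differ by more than a slot length, `u` and `w` disagree and the receiver can no longer tell which release produced which arrival.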
Since the length-$n$ vectors of consecutive input symbols
$\mathbf{r}$ and sorted outputs $\mathbf{w}$ are random variables, we
can write the mutual information between them as
$I(\mathbf{R};\mathbf{W})$. However, we are more interested in
the mutual information per unit time $I(R;W)$, which is given by
\begin{eqnarray}
\nonumber
I(R;W) & = & \lim_{n \rightarrow \infty} \frac{1}{nT + \delta} I(\mathbf{R};\mathbf{W}) \\
\label{eqn:truemutualinfo}
& = & \lim_{n \rightarrow \infty} \frac{1}{nT} I(\mathbf{R};\mathbf{W}) - \epsilon ,
\end{eqnarray}
where $nT$ represents the total time to transmit $n$ symbols, $\delta$ is the extra time after $nT$
required to wait for all remaining molecules to arrive, and
\begin{equation}
\label{eqn:epsilon}
\epsilon = \left( \frac{1}{nT} - \frac{1}{nT + \delta} \right) I(\mathbf{R};\mathbf{W}).
\end{equation}
We let $\delta = \log n$, so that $\lim_{n \rightarrow \infty} \delta = \infty$
(which is a long enough time for all molecules to arrive with probability 1).
With this in mind, we have the following result:
\begin{theorem}
\label{thm:main}
\begin{equation}
\frac{1}{T}I(X;Y) \geq I(R;W) .
\end{equation}
\end{theorem}
\begin{proof}
From (\ref{eqn:truemutualinfo}), since $\epsilon$ is positive,
%
\begin{displaymath}
I(R;W) \leq \lim_{n \rightarrow \infty} \frac{1}{nT} I(\mathbf{R};\mathbf{W}) .
\end{displaymath}
%
Then we have
that
%
\begin{eqnarray}
I(\mathbf{X};\mathbf{Y}) & = & I(\mathbf{R};\mathbf{U}) \\
& \geq & I(\mathbf{R};\mathbf{W}) ,
\end{eqnarray}
%
where the first equality follows from (\ref{eqn:r})-(\ref{eqn:u}),
since $\mathbf{r}$ and $\mathbf{u}$ are bijective functions of
$\mathbf{x}$ and $\mathbf{y}$, respectively; and the second inequality
follows from the data processing inequality (e.g., see \cite{cov-book})
and (\ref{eqn:sort}). Finally, since $x_i,y_i$ and $x_j,y_j$ are
independent for any $i \neq j$, $I(\mathbf{X};\mathbf{Y}) = nI(X;Y)$,
and the theorem follows.
\end{proof}
Note that the result in Theorem \ref{thm:main} bounds the mutual information, and thus applies to {\em each}
set $\mathcal{X}$ and
input pmf $p_X(x)$; however, we can also
show that the result applies to the capacity. Let $C_m$ represent the maximum of $I(X;Y)$ where $|\mathcal{X}| = m$,
i.e.,
\begin{equation}
C_m = \max_{p_X(x) : |\mathcal{X}| = m} I(X;Y).
\end{equation}
The capacity of the channel, with uses in isolation, is then given by
\begin{equation}
C = \lim_{m \rightarrow \infty} C_m .
\end{equation}
It remains to show that this limit exists, which we do in the following result.
\begin{theorem}
\label{thm:limit}
$C$ exists, and is finite, if $0 < v,D,\zeta,T < \infty$.
Furthermore, if $\max_{p_X(x)} I(R;W)$ represents the maximum of $I(R;W)$ under
IID inputs, then
%
\begin{equation}
\max_{p_X(x)} I(R;W) \leq \frac{C}{T} .
\end{equation}
%
\end{theorem}
\begin{proof}
To prove the first part of the theorem, we proceed in two steps.
\begin{enumerate}
\item {\em $C_m$ is a nondecreasing sequence.}
For each $m$, either: the
maximizing distribution $p_X(x)$
(or every maximizing distribution, if not unique)
satisfies $p_X(x) > 0$ for all $x \in \mathcal{X}$;
or $p_X(x) = 0$ for at least one $x \in \mathcal{X}$ (in at least one maximizing distribution, if not unique).
If the former is true, then $C_m > C_j$ for all $j < m$; if the
latter is true, then $C_m = C_{m-1}$. Thus, $C_m$ is nondecreasing in $m$.
\item {\em $C_m$ is upper bounded.}
We can write $I(X;Y) = H(Y) - H(Y|X) = H(Y) - H(N)$, where $H(N)$ is the entropy of the first arrival time.
We can upper bound $H(Y)$ independently of $m$ as follows. If the pdf of $y$ is $f_Y(y)$, then
$H(Y) = E[\log_2 1/f_Y(y)]$, where $E[\cdot]$ is expectation. If $g(y)$ is any valid pdf of $y$, then by a well-known
property of entropy, $H(Y) \leq E[\log_2 1/g(y)]$ (with equality when $g(y) = f_Y(y)$).
Pick $g(y) = e^{-y}$ (supported on $y \in [0,\infty)$), the exponential distribution with unit mean.
Then $H(Y) \leq E[y] \log_2 e$, which is finite if $E[y]$ is finite.
Finally, $E[y] = E[x] + E[n] \leq T + E[n]$, and $E[n]$ is known to be finite
if $0 < v,D,\zeta < \infty$ \cite{chh-book}.
\end{enumerate}
Since $C_m$ is a nondecreasing, upper bounded sequence, it must have a finite limit.
To prove the second part of the theorem, note that Theorem \ref{thm:main} applies to all
input distributions $p_X(x)$; thus, it also applies to the one maximizing $I(R;W)$.
As a result, since $C$ exists and is finite (from the first part of the theorem), it
is a nontrivial upper bound on $\max_{p_X(x)} I(R;W)$.
\end{proof}
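The exponential-comparison bound used in step 2 of the proof, $H(Y) \leq E[Y]\log_2 e$, can be sanity-checked numerically (our illustration, with exponential test densities; equality holds when the test density is the unit-mean exponential itself):

```python
# Numerical sanity check (ours) of the step-2 bound H(Y) <= E[Y] log2(e):
# the cross-entropy of any nonnegative-support density f against the
# unit-mean exponential g(y) = e^{-y} upper-bounds the entropy of f.
import numpy as np

y = np.linspace(1e-6, 60.0, 400_000)
dy = y[1] - y[0]
for mean in (0.5, 1.0, 3.0):
    f = np.exp(-y / mean) / mean                  # test pdf: exponential, given mean
    h = -np.sum(f * np.log2(f)) * dy              # differential entropy H(Y), bits
    bound = np.sum(f * y) * dy * np.log2(np.e)    # E[Y] log2(e)
    assert h <= bound + 1e-6                      # Gibbs' inequality vs g(y) = e^{-y}
```

For an exponential with mean $m$ the entropy is $\log_2(me)$ bits, which is below $m\log_2 e$ for every $m \neq 1$ and equal at $m=1$.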
In \cite{eck-arxiv}, it was shown that the mutual information cannot be tractably computed
in general for ``sorting'' channels, i.e., those with outputs given by (\ref{eqn:sort}).
Since $I(X;Y)$ can be calculated relatively easily, the results from Theorems \ref{thm:main}-\ref{thm:limit}
give us useful information about the capacity of a practical system.
\section{Conclusions and Future Work}\label{sec:futwork}
In this paper, a framework was constructed to study data rates that can
be achieved in a molecular communication system. A simple model was
considered for the communication system, consisting of a transmitter
and receiver separated in space, immersed in a fluid medium. The rates
achieved by a simple pulse-position modulation protocol were analyzed,
where information is encoded in the time of release of the molecule.
These results were extended to two molecules, for which the optimal
distribution reverted to the PPM protocol. While preliminary, the
results of this work suggest practical data transmission strategies
depending on the value of the drift velocity.
Given the preliminary nature of this work, there are many interesting
related problems. For example, it would be useful to consider the limitations
inherent in molecular production and detection: precise control over release times
and amounts, and precise measurement of arrival times, may not be possible;
more realistic communication models could be produced. Furthermore,
the communication architecture of molecular communication systems may be considered;
for instance, in order to achieve the mutual information results given in this paper,
error-correcting codes must be
used; an appropriate modulation and
coding strategy for molecular communication needs to be identified. Finally, channel
estimation techniques need to be derived in order to cope with unknown parameters, such
as an unknown drift velocity. Much work remains to be done to understand molecular
communication from a theoretical perspective, which presents many interesting and exciting
challenges to communication researchers.
\renewcommand{\baselinestretch}{1}
\normalsize
\bibliographystyle{IEEEtran}
Among the variants of the seesaw mechanism, the inverse seesaw \cite{Mohapatra:1986bd,Bernabeu:1987gr,Mohapatra:1986aw,Schechter:1981cv,Palcu:2014aza,Schechter:1980gr
,Fraser:2014yha,Hettmansperger:2011bt,Adhikary:2013mfa,Law:2013gma,Dev:2012sg,Malinsky:2009dh,Hirsch:2009mx} stands out as an attractive one, since its characteristic feature is the generation of small neutrino masses without invoking a high energy scale in the theory. Although one has
to pay the price of incorporating additional singlet fermions to realize this feature, in different GUT models the accommodation of such neutral fermions is natural. Furthermore, the mechanism can be tested at foreseeable collider experiments owing to its unique signature. The $9\times9$ neutrino mass matrix in this mechanism is written as
\begin{eqnarray}
m_\nu &=& \begin{pmatrix}
0&m_D&0\\m_D^T&0&M_{RS}\\
0&M_{RS}^T&\mu\\
\end{pmatrix}
\end{eqnarray}
with the choice of basis $(\nu_L,\nu_R^c,S_L)$.
The three matrices appearing in $m_\nu$ are $m_D$, $M_{RS}$ and $\mu$; among them, $m_D$ and $M_{RS}$ are Dirac-type whereas $\mu$ is a Majorana-type mass matrix. After diagonalization, the low energy effective neutrino mass matrix comes out as
\begin{eqnarray}
m_\nu &=& m_D M_{RS}^{-1}\mu ( m_D M_{RS}^{-1})^T \nonumber \\
&=&F\mu F^T\label{1}
\end{eqnarray}
where $F=m_D M_{RS}^{-1}$. With this definition, the above formula resembles a conventional type-I seesaw expression for $m_\nu$. However, $m_{\nu}$ contains a large number of parameters, and it is possible to fit them with the neutrino oscillation experimental data \cite{Forero:2014bxa,GonzalezGarcia:2012sz,Tortola:2012te} (but the predictability is low). Our goal in this work is to find a phenomenologically viable texture of $m_D$ and $\mu$ with a minimum number of parameters or, equivalently, a maximum number of zeros. We bring together two theoretical ideas to find such a minimal texture: \\
i) Scaling ansatz\cite{
sc1,sc2,Blum:2007qm,Obara:2007nb,Damanik:2007yg,Goswami:2008rt,Grimus:2004cj,Berger:2006zw,sc3,Dev:2013esa,
Adhikary:2012kb}, \\
ii) Texture Zeros\cite{Frampton:2002yf,Whisnant:2014yca,Ludl:2014axa,Grimus:2013qea,Liao:2013saa,Fritzsch:2011qv,Merle:2006du,Wang:2013woa,Wang:2014dka,Lavoura:2004tu,Kageyama:2002zw,Wang:2014vua,4zero1,Choubey:2008tb,Chakraborty:2014hfa,4zero2,4zero3,4zero4}.\\
\vskip 0.1in
\noindent
At the outset of the analysis, we choose a basis in which the charged lepton mass matrix ($m_E$) and $M_{RS}$ are diagonal, with texture zeros in the $m_D$ and $\mu$ matrices. We also assume the scaling property in the elements of $m_D$ and $\mu$ to reduce the number of relevant matrices. Although we do not address the explicit origin of this choice of matrices, qualitatively we can assume that it arises from some flavour symmetry \cite{Grimus:2004hf}, which is required to ensure that the texture zeros of $m_D$ and $\mu$ appear in the same basis in which $m_E$ and $M_{RS}$ are diagonal. We restrict ourselves to the framework of the $SU(2)_L \times U(1)_Y$ gauge group; an explicit realization of this scheme is obviously more elusive and will be studied elsewhere.
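As a toy numerical cross-check of Eqn.(\ref{1}) (ours; the values and the hierarchy $\mu \ll m_D \ll M_{RS}$ are arbitrary), the light eigenvalues of the full $9\times9$ mass matrix can be compared against those of $F\mu F^T$:

```python
# Toy check (ours) of the inverse seesaw formula: the three smallest
# eigenvalues (in magnitude) of the full 9x9 matrix should match those of
# m_nu = F mu F^T with F = m_D M_RS^{-1}, up to O((m_D/M_RS)^2) corrections.
import numpy as np

rng = np.random.default_rng(1)
mD = 1e-2 * rng.standard_normal((3, 3))        # Dirac block, scale ~1e-2
MRS = np.diag([1.0, 1.3, 1.7])                 # heavy Dirac block (diagonal, ~1)
mu_mat = 1e-4 * rng.standard_normal((3, 3))
mu_mat = 0.5 * (mu_mat + mu_mat.T)             # Majorana block, symmetric
Z = np.zeros((3, 3))

M9 = np.block([[Z, mD, Z], [mD.T, Z, MRS], [Z, MRS.T, mu_mat]])
light = np.sort(np.abs(np.linalg.eigvalsh(M9)))[:3]   # three light eigenvalues

F = mD @ np.linalg.inv(MRS)
seesaw = np.sort(np.abs(np.linalg.eigvalsh(F @ mu_mat @ F.T)))
assert np.allclose(light, seesaw, rtol=0.05, atol=1e-12)
```

The light masses come out of order $m_D^2\mu/M_{RS}^2$ while the six heavy states form pseudo-Dirac pairs near $\pm M_{RS}$, as expected for the inverse seesaw.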
\section{Scaling property and texture zeros}
We consider scaling property between the second and third row of $m_D$ matrix and the same for $\mu$ matrix also. Explicitly the relationships are written as
\begin{eqnarray}
\frac{(m_D)_{2i}}{(m_D)_{3i}} &=& k_1\label{2.1}\\
\frac{(\mu)_{2i}}{(\mu)_{3i}} &=& k_2\label{2.2}
\end{eqnarray}
where $i=1,2,3$ is the column index. We would like to mention that although we have considered different scale factors for the $m_D$ and $\mu$ matrices, the effective $m_\nu$ is still scale invariant and leads to $\theta_{13}=0$. It is therefore necessary to break the scaling ansatz. To generate nonzero $\theta_{13}$ the ansatz must be broken in $m_D$, since breaking it in $\mu$ does not affect the generation of nonzero $\theta_{13}$, although in some cases it yields $m_3\neq 0$. In our scheme the texture zero pattern is robust: it remains intact while the scaling ansatz is explicitly broken. Such a scenario can be realized by considering the scaling ansatz and the texture zeros to have different origins.\\
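The statement that an unbroken scaling ansatz forces $\theta_{13}=0$ can be checked numerically (our sketch, with random entries): scaling in $m_D$ together with a diagonal $M_{RS}$ makes the second and third columns of $m_\nu = F\mu F^T$ proportional, so $(0,1,-k_1)$ is a null eigenvector whose vanishing first component means the massless state carries no $\nu_e$ admixture, i.e. $\theta_{13}=0$ in the basis where $m_E$ is diagonal.

```python
# Sketch (ours) that the scaling ansatz of Eqn. (2.1), with diagonal M_RS,
# gives a massless state whose eigenvector has zero first component.
import numpy as np

rng = np.random.default_rng(2)
k1 = 0.7
row1, row3 = rng.standard_normal(3), rng.standard_normal(3)
mD = np.vstack([row1, k1 * row3, row3])            # scaling ansatz: row2 = k1 * row3
F = mD @ np.diag(1.0 / np.array([1.1, 1.4, 1.9]))  # F = m_D M_RS^{-1}, M_RS diagonal
mu_mat = rng.standard_normal((3, 3))
mu_mat = 0.5 * (mu_mat + mu_mat.T)                 # symmetric Majorana block
mnu = F @ mu_mat @ F.T

null = np.array([0.0, 1.0, -k1]) / np.hypot(1.0, k1)
assert np.allclose(mnu @ null, 0.0)                # (0, 1, -k1) is a null direction
vals, vecs = np.linalg.eigh(mnu)
i = int(np.argmin(np.abs(vals)))
assert abs(vecs[0, i]) < 1e-10                     # U_{e3} = 0  =>  theta_13 = 0
```

Breaking the scaling only in $\mu$ leaves the column proportionality of $m_\nu$ intact (it is fixed by the rows of $F$), which is why the ansatz must be broken in the Dirac sector.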
\noindent
Another point to note is that, since the $\mu$ matrix is complex symmetric whereas $m_D$ is asymmetric, the scale factor for the $\mu$ matrix is taken to be different from that of $m_D$, so as to keep the row-wise invariance dictated by Eqn.(\ref{2.1}) (for $m_D$) and Eqn.(\ref{2.2}) (for $\mu$). Finally, since the texture of the $M_{RS}$ matrix is diagonal, it cannot accommodate the scaling ansatz considered in the present scheme.\\
\noindent
Let us now further constrain the matrices by assuming zeros in different entries. Since in our present scheme the matrix $M_{RS}$ is diagonal, we constrain the other two matrices. We start with the maximal zero textures of general $3\times 3$ matrices consistent with the scaling ansatz and list the different cases systematically in \textbf{Table \ref{t1}}.\\
\begin{table}[!h]
\caption{Texture zeros with scaling ansatz of a general $3\times3$ matrix} \label{t1}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{{\bf $7$ zero texture}}\\
\hline
$m_1^7=\begin{pmatrix}
0&0&0\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_2^7=\begin{pmatrix}
0&0&0\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_3^7=\begin{pmatrix}
0&0&0\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{{\bf $6$ zero texture}}\\
\hline
$m_1^6=\begin{pmatrix}
d_1&0&0\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_2^6=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_3^6=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$\\
\hline
$m_4^6=\begin{pmatrix}
d_1&0&0\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_5^6=\begin{pmatrix}
0&d_2&0\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_6^6=\begin{pmatrix}
0&0&d_3\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$\\
\hline
$m_7^6=\begin{pmatrix}
d_1&0&0\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ &
$m_8^6=\begin{pmatrix}
0&d_2&0\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ &
$m_9^6=\begin{pmatrix}
0&0&d_3\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ \\
\hline
\multicolumn{3}{|c|}{{\bf $5$ zero texture}}\\
\hline
$m_1^5=\begin{pmatrix}
0&0&0\\
k_1 c_1&k_1c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$ &
$m_2^5=\begin{pmatrix}
0&0&0\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$&
$m_3^5=\begin{pmatrix}
0&0&0\\
0&k_1 c_1&k_1c_3\\
0&c_1&c_3\\
\end{pmatrix}$\\
\hline
$m_4^5=\begin{pmatrix}
d_1&d_2&0\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_5^5=\begin{pmatrix}
0&d_2&d_3\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_6^5=\begin{pmatrix}
d_1&0&d_3\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$\\
\hline
$m_7^5=\begin{pmatrix}
d_1&d_2&0\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_8^5=\begin{pmatrix}
0&d_2&d_3\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_9^5=\begin{pmatrix}
d_1&0&d_3\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$\\
\hline
$m_{10}^5=\begin{pmatrix}
d_1&d_2&0\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ &
$m_{11}^5=\begin{pmatrix}
0&d_2&d_3\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ &
$m_{12}^5=\begin{pmatrix}
d_1&0&d_3\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$ \\
\hline
\multicolumn{3}{|c|}{{\bf $4$ zero texture}}\\
\hline
$m_1^4=\begin{pmatrix}
d_1&0&0\\
0&k _1c_2&k_1 c_3\\
0&c_2&c_3\\
\end{pmatrix}$ &
$m_2^4=\begin{pmatrix}
0&d_2&0\\
0&k_1 c_2&k_1 c_3\\
0&c_2&c_3\\
\end{pmatrix}$ &
$m_3^4=\begin{pmatrix}
0&0&d_3\\
0&k_1 c_2&k_1 c_3\\
0&c_2&c_3\\
\end{pmatrix}$\\
\hline
$m_4^4=\begin{pmatrix}
d_1&0&0\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$ &
$m_5^4=\begin{pmatrix}
0&d_2&0\\
k_1c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$ &
$m_6^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$\\
\hline
$m_7^4=\begin{pmatrix}
d_1&0&0\\
k_1 c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$ &
$m_8^4=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$ &
$m_9^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$\\
\hline
$m_{10}^4=\begin{pmatrix}
d_1&d_2&d_3\\
k_1 c_1&0&0\\
c_1&0&0\\
\end{pmatrix}$ &
$m_{11}^4=\begin{pmatrix}
d_1&d_2&d_3\\
0&k_1 c_2&0\\
0&c_2&0\\
\end{pmatrix}$ &
$m_{12}^4=\begin{pmatrix}
d_1&d_2&d_3\\
0&0&k_1 c_3\\
0&0&c_3\\
\end{pmatrix}$\\
\hline
\end{tabular}
\end{center}
\noindent
We consider all the matrices\footnote{From now on we use $m^n$
to denote a mass matrix, where $n(=4,5,6,7)$ is the number of zeros in that matrix.}
listed in \textbf{Table \ref{t1}} as Dirac-type matrices ($m_D$). As
the lepton number violating mass matrix $\mu$ is complex symmetric,
the maximal number of zeros compatible with scaling invariance is 5. Therefore, only $m_3^5$ and $m_5^5$ type matrices can be made complex symmetric with the scaling property; they are shown in \textbf{Table \ref{t2}}, renamed as $\mu_1^5$ and $\mu_2^5$ with a different scale factor $k_2$.\\
\begin{table}[!h]
\caption{ Maximal zero texture of $\mu$ matrix}\label{t2}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$\mu_1^5=\begin{pmatrix}
0&0&0\\
0&k_2^2s_3&k_2 s_3\\
0&k_2 s_3&s_3\\
\end{pmatrix}$ &
$\mu_2^5=\begin{pmatrix}
0&k_2s_3&s_3\\
k_2s_3&0&0\\
s_3&0&0\\
\end{pmatrix}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Now using Eqn.(\ref{1}) we can construct $m_\nu$, and it is found that none of the mass matrices constructed out of these combinations is able to satisfy the neutrino oscillation data. The reasons are as follows:\\
\noindent
\textbf{Case A:} $m_D$ (7, 6 zero) + $\mu_1^5$, $\mu_2^5$ (5 zero):\\
\noindent
We cannot generate nonzero $\theta_{13}$ by breaking the scaling ansatz,
because in this case all the structures of $m_D$ remain scaling ansatz
invariant. This can be understood in the following way: if we
incorporate scaling ansatz breaking through $k_1\rightarrow k_1(1+\epsilon)$,
all the structures of
$m_D$ retain the scaling form, and the $m_\nu$ matrix will still give $\theta_{13}=0$, since breaking of scaling in $\mu_1^5$ and $\mu_2^5$ plays no role in the generation of a nonzero value of $\theta_{13}$. To generate nonzero $\theta_{13}$ it is necessary to break the scaling ansatz in the Dirac sector.\\
\noindent
\textbf{Case B:} $m_D$ (5 zero) + $\mu_1^5$, $\mu_2^5$ (5 zero):\\
\noindent
The matrices in the last three rows ($m_4^5$ to $m_{12}^5$) of the
`\textbf{5 zero texture}' part of \textbf{Table \ref{t1}} are ruled out for
the same reason as mentioned in \textbf{Case A}, while the matrices in the first row, i.e. $m_1^5$, $m_2^5$ and $m_3^5$, give rise to the structure of $m_\nu$ as \\
\begin{equation}
A_1=\begin{pmatrix}
0&0&0\\
0&\ast&\ast\\
0&\ast&\ast\\
\end{pmatrix}\label{A1}
\end{equation}
where `$\ast$' represents a nonzero entry in $m_\nu$. This structure leads to the complete decoupling of one generation. Moreover, it has been shown in Ref. \cite{Frampton:2002yf} that if the number of independent zeros in an effective neutrino mass matrix ($m_\nu$) is $\geq 3$, it does not fit the oscillation data; hence the `$A_1$' type mass matrix is ruled out.\\
\noindent
\textbf{Case C:} $m_D$ (4 zero) + $\mu_1^5$ (5 zero):\\
\noindent
There are 12 $m_D$ matrices with 4 zero texture, designated as $m_1^4$,...,$m_{12}^4$ in \textbf{Table \ref{t1}}. For the same reason as discussed in \textbf{Case A}, $m_{10}^4$, $m_{11}^4$ and $m_{12}^4$ are not considered. Furthermore, the $m_\nu$ arising through $m_{1}^4$, $m_{4}^4$ and $m_{7}^4$ also corresponds to the `$A_1$' type matrix (shown in Eqn.(\ref{A1})), and hence these are also discarded. Finally, the remaining six $m_D$ matrices $m_{2}^4$, $m_{3}^4$, $m_{5}^4$, $m_{6}^4$, $m_{8}^4$ and $m_{9}^4$ lead to a structure of $m_\nu$ with two zero eigenvalues, and obviously they too are discarded.\\
\noindent
\textbf{Case D:} $m_D$ (4 zero) + $\mu_2^5$ (5 zero):\\
\noindent
In this case, for $m_2^4$ and $m_3^4$ the low energy mass matrix $m_\nu$ comes out as a null matrix while for $m_1^4$ the structure of $m_\nu$ is given by \begin{equation}
A_2=\begin{pmatrix}
0&\ast & \ast \\
\ast & 0&0\\
\ast &0 &0\\
\end{pmatrix}
\end{equation}
which is also discarded since the number of independent zeros is $\geq 3$.\\
On the other hand, the rest of the $m_D$ matrices ($m_4^4$ to $m_9^4$) correspond to the structure of $m_\nu$ as
\begin{equation}
A_3=\begin{pmatrix}
0&\ast & \ast \\
\ast & \ast &\ast \\
\ast & \ast &\ast\\
\end{pmatrix}.\label{rldout}
\end{equation}
Interestingly, a priori we cannot rule out matrices of type $A_3$; however, it is observed that $m_\nu$ of this type fails to generate $\theta_{13}$ within the present experimental bound (details are given in section (\ref{s1})). It is also observed that in this scheme, to reproduce the neutrino oscillation data, four zero textures of both the $m_D$ and $\mu$ matrices are necessary. Therefore, from now on we discuss extensively the four zero textures in both sectors (Dirac as well as Majorana).
\section{4 zero texture}
There are 126 ways to choose 4 zeros out of the 9 elements of a general $3\times3$ matrix; hence there are $126$ such textures. Incorporation of the scaling ansatz leads to a drastic reduction to only the 12 textures given in \textbf{Table \ref{t1}}. Since in our chosen basis $M_{RS}$ is taken as diagonal, the structure of $m_D$ leads to the same structure of $F$. On the other hand, the lepton number violating mass matrix $\mu$ is complex symmetric, and therefore, of the matrices listed in \textbf{Table \ref{t1}}, only $m_1^4$ and $m_{10}^4$ type matrices are acceptable. We rename these matrices $\mu_1^4$ and $\mu_2^4$; their explicit structures are presented in \textbf{Table \ref{t3}}.\\
\begin{table}[!h]
\caption{Four zero texture of $\mu$ matrix}\label{t3}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$\mu_1^4=\begin{pmatrix}
r_1&0&0\\
0&k_2^2s_3&k_2 s_3\\
0&k_2 s_3&s_3\\
\end{pmatrix}$ &
$\mu_2^4=\begin{pmatrix}
r_1&k_2s_3&s_3\\
k_2s_3&0&0\\
s_3&0&0\\
\end{pmatrix}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
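The counting at the start of this section can be verified by brute force (a check of ours): among the $\binom{9}{4}=126$ four-zero patterns, the scaling ansatz forces rows 2 and 3 to share the same zero pattern (a zero in one implies a zero in the other for $k_1\neq 0$), leaving exactly 12 textures.

```python
# Counting check (ours): of the C(9,4) = 126 ways to place four zeros in a
# 3x3 matrix, only those whose rows 2 and 3 have identical zero patterns
# are compatible with the scaling ansatz row2 = k1 * row3.
from itertools import combinations
from math import comb

patterns = list(combinations(range(9), 4))   # entries indexed 0..8, row-major
assert len(patterns) == comb(9, 4) == 126

def scaling_compatible(zeros):
    grid = [[(3 * r + c) in zeros for c in range(3)] for r in range(3)]
    return grid[1] == grid[2]                # columnwise-matching zeros in rows 2, 3

print(sum(scaling_compatible(set(p)) for p in patterns))   # -> 12
```

The 12 survivors split as $\binom{3}{1}\binom{3}{2}=9$ patterns with one shared zero column plus two zeros in row 1, and $\binom{3}{2}=3$ with two shared zero columns, matching \textbf{Table \ref{t1}}.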
\noindent
There are now $2\times12=24$ types of $m_\nu$, given the two choices of
the $\mu$ matrix. We discriminate between the different types of $m_D$ matrices in the following way:\\
i) First of all, the textures $m_{10}^4$, $m_{11}^4$ and $m_{12}^4$ are always scaling ansatz invariant, for the same reason mentioned earlier in \textbf{Case A}, and hence are all discarded.\\
ii) Next, the matrices $m_1^4$, $m_2^4$ and $m_3^4$ are also ruled out, for the following reasons:\\
a) When the $\mu_1^4$ matrix is taken to generate $m_\nu$ along with
$m_1^4$, $m_2^4$ or $m_3^4$ as the Dirac matrix, the structure of the
effective $m_\nu$ is such that one generation is completely decoupled;
this leads to two vanishing mixing angles for the matrix $m_1^4$, and to
two zero eigenvalues when we consider the $m_2^4$ and $m_3^4$ matrices.\\
b) In the case of the $\mu_2^4$ matrix, the form of $m_\nu$ for $m_1^4$ comes out as
\begin{eqnarray}
A_4&=&
\begin{pmatrix}
\ast & \ast & \ast\\
\ast &0&0\\
\ast &0 &0\\
\end{pmatrix}
\end{eqnarray}
which is phenomenologically ruled out, while for the other two matrices ($m_2^4$ and $m_3^4$) $m_\nu$ becomes a null matrix. For a compact view of the above analysis we present the ruled-out and surviving structures of $m_\nu$ symbolically in \textbf{Table \ref{sym}}.\\
\newpage
\begin{table}[!h]
\caption{Compositions of the discarded and surviving structures of $m_\nu$}\label{sym}
\begin{center}
\begin{tabular}{|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|}
\cline{2-13}
\multicolumn{1}{c|}{} & \multicolumn{12}{c|}{$m_D$} \\ \hline
$\mu$ & $m_1^4$ &$m_2^4$ & $m_3^4$ & $m_4^4$ & $m_5^4$ & $m_6^4$ & $m_7^4$ & $m_8^4$ & $m_9^4$ & $m_{10}^4$ & $m_{11}^4$& $m_{12}^4$\\
\hline
$ \mu_1^4 $ & $\times$ & $\times$ &$\times$ & $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$ & $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$ & $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$& $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$ & $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$ & $\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$ & $\times$& $\times$ & $\times$\\
\hline
$\mu_2^4$&$\times$&$\times$&$\times$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;$&$\times$&$\times$&$\times$\\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
Thus we are left with the same six textures of $m_D$ for both choices of $\mu$; they are renamed in \textbf{Table \ref{t4}} as $m_{D1}^4$, $m_{D2}^4$, ...,
$m_{D6}^4$.
\begin{table}[!h]
\caption{Four zero textures of the Dirac mass matrices} \label{t4}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$m_{D1}^4=\begin{pmatrix}
d_1&0&0\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$ &
$m_{D2}^4=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$ &
$m_{D3}^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}$\\
\hline
$m_{D4}^4=\begin{pmatrix}
d_1&0&0\\
k_1c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$ &
$m_{D5}^4=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$ &
$m_{D6}^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
Obviously, it is clear that the above analysis leads to altogether 12 effective $m_\nu$ matrices arising due to six $m_D$ ($m_{D1}^4$ to $m_{D6}^4$) and two $\mu$ ($\mu_1^4$ and $\mu_2^4$) matrices.
\section{ Parametrization}
Depending upon the composition of $m_D$ and $\mu$, we subdivide the 12
$m_\nu$ matrices into four broad categories, each of which is further
separated into a few cases; the decomposition is presented in
\textbf{Table \ref{t5}} and \textbf{Table \ref{t6}}.\\
\begin{table}[!h]
\caption{Different compositions of the $m_D$ and $\mu_1$ matrices used to generate $m_\nu$.}\label{t5}
\begin{center}
\begin{tabular}{|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{Category A}} & \multicolumn{4}{c|}{\textbf{Category B}}\\ \hline
$\textbf{Matrices}$ &$I_A$&$II_A$&$I_B$&$II_B$&$III_B$&$IV_B$\\
\hline
$ m_D $ & $m_{D2}^4$ & $m_{D6}^4$ & $m_{D1}^4$ & $m_{D3}^4$ & $m_{D4}^4$ & $m_{D5}^4$\\
\hline
$\mu$ &$\mu_1^4$&$\mu_1^4$&$\mu_1^4$&$\mu_1^4$&$\mu_1^4$&$\mu_1^4$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!h]
\caption{Different compositions of the $m_D$ and $\mu_2$ matrices used to generate
$m_\nu$.}\label{t6}
\begin{center}
\begin{tabular}{|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{Category C}} & \multicolumn{4}{c|}{\textbf{Category D}}\\ \hline
$\textbf{Matrices}$ &$I_C$&$II_C$&$I_D$&$II_D$&$III_D$&$IV_D$\\
\hline
$ m_D $ & $m_{D1}^4$ & $m_{D4}^4$ & $m_{D2}^4$ & $m_{D3}^4$ & $m_{D5}^4$ & $m_{D6}^4$\\
\hline
$\mu$ &$\mu_2^4$&$\mu_2^4$&$\mu_2^4$&$\mu_2^4$&$\mu_2^4$&$\mu_2^4$\\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
Throughout our analysis we consider the matrix $M_{RS}$ as
\begin{eqnarray}
M_{RS}&=&\begin{pmatrix}
p_1&0&0\\
0&p_2&0\\
0&0&p_3\\
\end{pmatrix}.
\end{eqnarray}
Following Eqn.(\ref{1}), the $m_\nu$ matrix arising in Category A and Category B can be written in a generic way as
\begin{eqnarray}
m_\nu^{AB} &= &m_0\begin{pmatrix}
1& k_1p & p\\
k_1p & k_1^2(q^2+p^2)& k_1(q^2+p^2)\\p& k_1(q^2+p^2)&(q^2+p^2)
\end{pmatrix}\label{4}
\end{eqnarray}
with the parameters defined as follows\\
\begin{eqnarray}
\textbf{$Set$ $I_A:\;$} m_0^\prime =\frac{d_3^2s_3}{p_3^2},p^\prime = \frac{p_3 c_2}{p_2 d_3},q^\prime =\frac{c_1p_3}{d_3p_1}\sqrt{\frac{r_1}{s_3}},m_0 = m_0^\prime ,p=k_2p^\prime ,q=q^\prime \nonumber \\
\textbf{$Set$ $II_A:\;$} m_0^\prime = \frac{d_2^2s_3}{p_2^2}, p^\prime =\frac{p_2 c_2}{p_3 d_2},q^\prime =\frac{c_1p_2}{d_2p_1}\sqrt{\frac{r_1}{s_1}},m_0=m_0^\prime k_2^2,p=\frac{p^\prime}{k_2},q=\frac{q^\prime}{k_2}\nonumber \\
\textbf{$Set$ $I_B:\;$} m_0^\prime = \frac{d_1^2r_1}{p_1^2},p^\prime =\frac{ c_1}{ d_1},q^\prime = \frac{c_3p_1}{d_1p_3}\sqrt{\frac{s_3}{r_1}},m_0=m_0^\prime , p=p^\prime ,q=q^\prime \nonumber \\
\textbf{$Set$ $II_B:\;$} m_0^\prime = \frac{d_3^2s_3}{p_3^2},p^\prime =\frac{ c_3}{ d_3},q^\prime =\frac{c_1p_3}{d_3p_1}\sqrt{\frac{r_1}{s_1}},m_0=m_0^\prime ,p=p^\prime ,q=q^\prime \nonumber \\
\textbf{$Set$ $III_B:\;$} m_0^\prime =\frac{d_1^2r_1}{p_1^2},p^\prime = \frac{ c_1}{ d_1},q^\prime =\frac{c_2p_1}{d_1p_2}\sqrt{\frac{s_3}{r_1}},m_0=m_0^\prime ,p = p^\prime ,q=k_2q^\prime \nonumber \\
\textbf{$Set$ $IV_B:\;$} m_0^\prime =\frac{d_2^2s_3}{p_2^2}, p^\prime =\frac{ c_2}{d_2},q^\prime =\frac{c_1p_2}{d_2p_1}\sqrt{\frac{r_1}{s_1}},m_0=m_0^\prime k_2^2,p={p^\prime},q=\frac{q^\prime}{k_2}.\label{p1}
\end{eqnarray}
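The rank deficiency of Eqn.(\ref{4}) can be verified numerically: the second row of $m_\nu^{AB}$ is $k_1$ times the third row for any parameter choice, so its determinant vanishes identically, which is the origin of the $m_3=0$ prediction discussed below. A short sketch (the sample parameter values are purely illustrative, not fitted):

```python
import cmath

def m_nu_AB(m0, k1, p, q):
    """Category A/B mass matrix of Eqn. (4); parameters may be complex."""
    s = q * q + p * p
    return [[m0,          m0 * k1 * p,    m0 * p],
            [m0 * k1 * p, m0 * k1**2 * s, m0 * k1 * s],
            [m0 * p,      m0 * k1 * s,    m0 * s]]

def det3(m):
    """Determinant of a 3x3 (complex) matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# an arbitrary complex sample point (values purely illustrative)
m = m_nu_AB(0.05 * cmath.exp(0.3j), 1.1 * cmath.exp(0.7j),
            2.5 * cmath.exp(-0.2j), 2.1 * cmath.exp(0.9j))
print(abs(det3(m)))  # ~0: row 2 equals k1 times row 3, so one mass vanishes
```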
Similarly, the $m_\nu$ matrix arising in Category C can be written as
\begin{eqnarray}
m_\nu^{C}& =& m_0 \begin{pmatrix}
1&k_1(p+q)&p+q\\
k_1(p+q)&k_1^2(2pq+p^2)&k_1(2pq+p^2)\\
p+q&k_1(2pq+p^2)&(2pq+p^2)\\
\end{pmatrix}\label{7}
\end{eqnarray}
with the following choice of parameters
\begin{eqnarray}
\textbf{$Set$ $I_C:\;$} m_0^\prime =\frac{d_1^2r_1}{p_1^2},p^\prime = \frac{c_1}{ d_1},q^\prime =\frac{c_2p_1}{d_1p_2}\sqrt{\frac{s_3}{r_1}},m_0=m_0^\prime ,p=p^\prime ,q=k_2q^\prime \nonumber \\
\textbf{$Set$ $II_C:\;$} m_0^\prime =\frac{d_1^2r_1}{p_1^2}, p^\prime =\frac{c_1}{ d_1},q^\prime =\frac{c_3p_1}{d_1p_3}\sqrt{\frac{s_3}{r_1}},m_0=m_0^\prime ,p=p^\prime ,q=q^\prime .
\end{eqnarray}
For Category D the effective $m_\nu$ comes out as
\begin{eqnarray}
m_\nu^{D} &=&m_0\begin{pmatrix}
0&k_1p&p\\
k_1p&k_1^2(q^2+2rp)&k_1(q^2+2rp)\\
p&k_1(q^2+2rp)&(q^2+2rp)
\end{pmatrix}\end{eqnarray}
with the definition of parameters as
\begin{eqnarray}
\textbf{$Set$ $I_D:\;$} m_0^\prime =\frac{d_2^2r_1}{p_1^2},p^\prime = \frac{ c_1p_1s_3}{ d_2p_2r_1}, q^\prime =\frac{c_1}{d_2},r^\prime =\frac{c_3}{d_2},m_0=m_0^\prime ,p=k_2p^\prime ,q=q^\prime ,r=r^\prime \nonumber \\
\textbf{$Set$ $II_D:\;$} m_0^\prime =\frac{d_3^2r_1}{p_1^2},p^\prime = \frac{ c_1p_1s_3}{ d_3p_3r_1},q^\prime =\frac{c_1}{d_3},r^\prime =\frac{c_2}{d_3},m_0=m_0^\prime ,p = p^\prime ,q=q^\prime ,r=k_2 r^\prime \nonumber \\
\textbf{$Set$ $III_D:\;$} m_0^\prime =\frac{ c_1p_1s_3}{ d_3p_3r_1},p^\prime = \frac{ c_1}{ d_1}, q^\prime =\frac{c_1}{d_3}, r^\prime =\frac{c_3}{d_3},m_0=m_0^\prime ,p =p^\prime ,q=k_2q^\prime ,r=r^\prime \nonumber\\
\textbf{$Set$ $IV_D:\;$} m_0^\prime =\frac{d_2^2r_1}{p_1^2},p^\prime = \frac{ c_1p_1s_3}{ d_2p_2r_1}, q^\prime =\frac{c_1}{d_2},r^\prime =\frac{c_2}{d_2},m_0=m_0^\prime ,p=k_2{p^\prime} ,q=q^\prime ,r =r^\prime
\end{eqnarray}
and in general, we consider all the parameters $m_0$, $k_1$, $p$, $r$ and $q$ to be complex.
\section{Phase Rotation}
As mentioned earlier, all the parameters of $m_\nu$ are complex and therefore we can rephase $m_\nu$ by a phase rotation to remove the redundant phases. Here, we systematically study the phase rotation for each category.\\
\noindent
\textbf{Category A,B}\\
\noindent
The Majorana type mass matrix $m_\nu$ can be rotated in phase space through
\begin{eqnarray}
m_\nu^{\prime AB}= P^T m_\nu^{AB} P\label{pp}
\end{eqnarray}
where $P$ is a diagonal phase matrix and is given by $P=diag(e^{i\Phi_1},e^{i\Phi_2},e^{i\Phi_3})$.\\
Redefining the parameters of $m_\nu$ as \begin{eqnarray}
m_0\rightarrow m_0e^{i\alpha_m}, p\rightarrow pe^{i\theta_p}, q\rightarrow qe^{i\theta_q}, k_1\rightarrow k_1e^{i\theta_1}\label{prmtr}\end{eqnarray}
with
\begin{eqnarray} \Phi_1=-\frac{\alpha_m}{2},\Phi_2=-(\theta_1+\theta_p+\frac{\alpha_m}{2}), \Phi_3=-(\theta_p+\frac{\alpha_m}{2})\label{phi}\end{eqnarray}
the phase rotated $m_\nu^{\prime AB}$ appears as \begin{eqnarray}
m_\nu^{\prime AB}&=& m_0\begin{pmatrix}
1& k_1p & p\\
k_1p & k_1^2(q^2e^{i\theta}+p^2)& k_1(q^2e^{i\theta}+p^2)\\p& k_1(q^2e^{i\theta}+p^2)&(q^2e^{i\theta}+p^2)
\end{pmatrix}\label{8}
\end{eqnarray}
where $\theta=2(\theta_q-\theta_p)$ and all the parameters $m_0,p,q$ and $k_1$ are real. Thus there is only a single phase parameter in $ m_\nu^{\prime AB}$.\\
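The rephasing of Eqn.(\ref{pp}) with the phases of Eqn.(\ref{phi}) can be checked numerically. The sketch below (magnitudes and phases are illustrative sample values, not fitted outputs) applies $P^T m_\nu P$ and confirms that the first row becomes real and that the `$33$' entry reduces to $m_0(q^2e^{i\theta}+p^2)$ with $\theta=2(\theta_q-\theta_p)$, as in Eqn.(\ref{8}):

```python
import cmath

# illustrative magnitudes and phases of eq. (\ref{prmtr}) -- NOT fitted values
m0, p, q, k1 = 0.05, 2.5, 2.1, 1.1
am, tp, tq, t1 = 0.3, -0.2, 0.9, 0.7   # alpha_m, theta_p, theta_q, theta_1

M0 = m0 * cmath.exp(1j * am)
P_ = p * cmath.exp(1j * tp)
Q_ = q * cmath.exp(1j * tq)
K1 = k1 * cmath.exp(1j * t1)
s = Q_**2 + P_**2
m = [[M0,           M0 * K1 * P_,   M0 * P_],
     [M0 * K1 * P_, M0 * K1**2 * s, M0 * K1 * s],
     [M0 * P_,      M0 * K1 * s,    M0 * s]]

# diagonal phase matrix of eq. (\ref{phi}): m' = P^T m P
phis = (-am / 2, -(t1 + tp + am / 2), -(tp + am / 2))
mp = [[m[i][j] * cmath.exp(1j * (phis[i] + phis[j]))
       for j in range(3)] for i in range(3)]

theta = 2 * (tq - tp)
target22 = m0 * (q**2 * cmath.exp(1j * theta) + p**2)
# first row real, and the 33 entry takes the form of eq. (\ref{8}):
print(abs(mp[0][0] - m0), abs(mp[0][1] - m0 * k1 * p), abs(mp[2][2] - target22))
```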
\noindent
\textbf{Category C}\\
\noindent
In a similar way, the mass matrix of Category C can be rephased as
\begin{eqnarray}
m_\nu^{\prime C} &=& m_0 \begin{pmatrix}
1&k_1(p+qe^{i\theta})&p+qe^{i\theta}\\
k_1(p+qe^{i\theta})&k_1^2(2pqe^{i\theta}+p^2)&k_1(2pqe^{i\theta}+p^2)\\
p+qe^{i\theta}&k_1(2pqe^{i\theta}+p^2)&(2pqe^{i\theta}+p^2)\\
\end{pmatrix}\label{9}
\end{eqnarray}
with the same set of redefined parameters as mentioned in Eqn.(\ref{prmtr}) and (\ref{phi}) and the diagonal phase matrix mentioned in the previous case, now with $\theta=\theta_q-\theta_p$.\\
\noindent
\textbf{Category D} \\
For this category the rephased mass matrix comes out as
\begin{eqnarray}
m_\nu^{\prime D} &=&m_0\begin{pmatrix}
0&k_1p&p\\
k_1p&k_1^2(q^2e^{i\alpha}+2rpe^{i\beta})&k_1(q^2e^{i\alpha}+2rpe^{i\beta})\\
p&k_1(q^2e^{i\alpha}+2rpe^{i\beta})&(q^2e^{i\alpha}+2rpe^{i\beta})
\end{pmatrix}\label{10}
\end{eqnarray}
with $r\rightarrow r e^{i\theta_r}$, $\alpha =2(\theta_q-\theta_p)$, $\beta =(\theta_r-\theta_p)$ and the rest of the parameters are already defined in Eqn.(\ref{prmtr}) and Eqn.(\ref{phi}).
\section{Breaking of the scaling ansatz}
The neutrino mass matrices obtained in Eqn.(\ref{8}), (\ref{9}) and (\ref{10}) are all invariant under the scaling ansatz and thereby give rise to
$\theta_{13}=0$ as well as $m_3=0$. Although a vanishing value of $m_3$ is not yet ruled out, the former prediction, $\theta_{13}=0$, is refuted by the reactor experimental results. A popular paradigm is to consider $\theta_{13}=0$ at leading order and to generate a nonzero value of $\theta_{13}$ through a further perturbation. We follow the same route and produce nonzero $\theta_{13}$ through a small breaking of the scaling ansatz. It is to be noted that in our scheme the generation of nonzero $\theta_{13}$ necessarily requires breaking in $m_D$. To generate nonzero $m_3$, breaking in the $\mu$ matrix is also necessary along with that in $m_D$; however, in Category B, since $\det(m_D)=0$, even after breaking in the $\mu$ matrix $m_\nu$ still has one vanishing eigenvalue. Similarly, for Category C and Category D, $\mu_2^4$ always has zero determinant because it remains scaling-ansatz invariant, and therefore leads to one zero eigenvalue, as in Category B.
It is only Category A for which we obtain nonzero $\theta_{13}$ as well as nonzero $m_3$ after breaking the scaling ansatz in both matrices ($m_D$ and $\mu$).
\vskip 0.1in
\noindent
In the following, we invoke breaking of scaling ansatz in all four categories through\\
i) breaking in the Dirac sector ($\theta_{13}\neq0$, $m_3=0$)\\
ii) breaking in the Dirac sector as well as Majorana sector ($\theta_{13}\neq0$, $m_3\neq0$) and later we discuss separately both the cases.
\subsection{Breaking in the Dirac sector}
\subsubsection{Category A,B}
We consider minimal breaking of the scaling ansatz through a dimensionless real parameter $\epsilon$ in a single term of different $m_D$ matrices of those categories as
\begin{eqnarray}
m_{D2}^4=\begin{pmatrix}
0&d_2&0\\
k_1(1+\epsilon) c_1&0&k_1 c_3\\
c_1&0&c_3
\end{pmatrix}
,m_{D6}^4=\begin{pmatrix}
0&0&d_3\\
k_1(1+\epsilon) c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}\label{11}
\end{eqnarray}
for Category A and
\begin{eqnarray}
m_{D1}^4=\begin{pmatrix}
d_1&0&0\\
k_1 c_1&0&k_1(1+\epsilon) c_3\\
c_1&0&c_3\\
\end{pmatrix} ,
m_{D3}^4=\begin{pmatrix}
0&0&d_3\\
k_1(1+\epsilon) c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix} \nonumber \\
m_{D4}^4=\begin{pmatrix}
d_1&0&0\\
k _1c_1&k_1(1+\epsilon) c_2&0\\
c_1&c_2&0\\
\end{pmatrix} ,
m_{D5}^4=\begin{pmatrix}
0&d_2&0\\
k_1(1+\epsilon) c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix}
\end{eqnarray}
for Category B. We further mention that breaking considered in any
element of the second row is equivalent. For example, breaking in the `$23$' element of $m_{D2}^4$ is equivalent to that considered in
Eqn.(\ref{11}). Neglecting $\epsilon^2$ and higher order terms, the effective $m_\nu$ matrix comes out as
\begin{eqnarray}
m_\nu^{\prime AB\epsilon}&=& m_0\begin{pmatrix}
1& k_1p & p \\
k_1p & k_1^2(q^2e^{i\theta}+p^2)& k_1(q^2e^{i\theta}+p^2)\\p & k_1(q^2e^{i\theta}+p^2)&(q^2e^{i\theta}+p^2)
\end{pmatrix}+m_0 \epsilon \begin{pmatrix}
0&0&0\\0&2k_1^2q^2e^{i\theta}&k_1q^2e^{i\theta}\\0&k_1q^2e^{i\theta}&0\\
\end{pmatrix}.\label{mAB}
\end{eqnarray}
As mentioned earlier, for Category B $\det(m_D)=0$ and it is not possible to generate $m_3\neq0$ even if we consider breaking in the $\mu$ matrices. On the other hand, the matrices in Category A possess $\det(m_D)\neq0$ and thereby give rise to $m_3\neq0.$
Now, to calculate the eigenvalues, mixing angles, $J_{CP}$, and the Dirac and Majorana phases, we utilize the results obtained in ref.\cite{Adhikary:2013bma} for a general complex matrix. We should mention that the formula for the Majorana phases obtained in ref.\cite{Adhikary:2013bma} is valid when all three eigenvalues are nonzero. However, when one of the eigenvalues is zero (in this case $m_3=0$) one has to utilize the methodology given in ref.\cite{sc2}, which shows that a general Majorana type mass matrix $m_\nu$ can be diagonalized as
\begin{eqnarray}
U^\dagger m_\nu U^*&=& diag(m_1,m_2,m_3)\end{eqnarray}
or alternately, \begin{eqnarray} m_\nu &=& U diag(m_1,m_2,m_3)U^T \label{m1}\end{eqnarray} \\ where \begin{eqnarray} U&=&U_{CKM} P_M.\end{eqnarray}\\
The mixing matrix $U_{CKM}$ is given by (following the PDG\cite{Beringer:1900zz} convention)
\begin{eqnarray}
U_{CKM}=
\begin{pmatrix}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta_{CP}}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13} e^{i\delta_{CP}}& c_{12}c_{23}-s_{12}s_{13} s_{23} e^{i\delta_{CP}} & c_{13}s_{23}\\
s_{12}s_{23}-c_{12}s_{13}c_{23}e^{i\delta_{CP}} & -c_{12}s_{23}-s_{12}s_{13}c_{23}e^{i\delta_{CP}} & c_{13}c_{23}
\end{pmatrix}
\end{eqnarray}
with $c_{ij}=\cos \theta_{ij}$, $s_{ij} = \sin \theta_{ij}$ and $\delta_{CP}$ is the Dirac CP phase. The diagonal phase matrix $P_M$ is parametrized as
\begin{eqnarray} P_M& =& diag(1,e^{i\alpha_M},e^{i(\beta_M +\delta_{CP})})\end{eqnarray}
where $\alpha_M$ and $\beta_M+\delta_{CP}$ are the Majorana phases.
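As a quick consistency check, the PDG parametrization above can be verified to be unitary for arbitrary angles. The sample angle values below are illustrative points inside the $3\sigma$ ranges of \textbf{Table \ref{tl}}, not fit outputs:

```python
import cmath, math

def U_CKM(t12, t23, t13, dcp):
    """PDG-convention mixing matrix; angles and delta_CP in radians."""
    c12, s12 = math.cos(t12), math.sin(t12)
    c23, s23 = math.cos(t23), math.sin(t23)
    c13, s13 = math.cos(t13), math.sin(t13)
    e = cmath.exp(1j * dcp)
    return [[c12 * c13, s12 * c13, s13 / e],
            [-s12 * c23 - c12 * s23 * s13 * e,
              c12 * c23 - s12 * s13 * s23 * e, c13 * s23],
            [ s12 * s23 - c12 * s13 * c23 * e,
             -c12 * s23 - s12 * s13 * c23 * e, c13 * c23]]

def max_dev_from_unitarity(U):
    """Largest entry of |U U^dagger - 1|."""
    return max(abs(sum(U[i][k] * U[j][k].conjugate() for k in range(3))
                   - (1 if i == j else 0))
               for i in range(3) for j in range(3))

U = U_CKM(math.radians(34), math.radians(45), math.radians(9), 1.2)
print(max_dev_from_unitarity(U))  # ~1e-16, i.e. unitary to machine precision
```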
Writing Eqn.(\ref{m1}) explicitly with $m_3=0$ we can have expressions for six independent elements of $m_\nu$ in terms of the mixing angles, two eigenvalues and the Dirac CP phase, from which the $m_{11}$ element can be expressed as
\begin{eqnarray}
m_{11} & =& c_{12}^2 c_{13}^2m_1 +s_{12}^2 c_{13}^2m_2 e^{2i\alpha_M}\label{m2}
\end{eqnarray}
and therefore the Majorana phase $\alpha_M$ comes out as
\begin{eqnarray}
\alpha_M &=& \frac{1}{2}\cos^{-1}\left\lbrace \frac{|m_{11}|^2}{2c_{12}^2s_{12}^2c_{13}^4m_1m_2}-\frac{(c_{12}^4m_1^2+s_{12}^4m_2^2)}{2c_{12}^2s_{12}^2m_1m_2}\right\rbrace. \label{m3}\end{eqnarray}
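Eqn.(\ref{m3}) simply inverts Eqn.(\ref{m2}) for $\cos 2\alpha_M$. A short round-trip check (the masses and angles below are illustrative values, not fit results) confirms the inversion:

```python
import cmath, math

def alpha_M(abs_m11, m1, m2, t12, t13):
    """Majorana phase from eq. (m3); angles in radians."""
    c12, s12, c13 = math.cos(t12), math.sin(t12), math.cos(t13)
    x = (abs_m11**2 / (2 * c12**2 * s12**2 * c13**4 * m1 * m2)
         - (c12**4 * m1**2 + s12**4 * m2**2) / (2 * c12**2 * s12**2 * m1 * m2))
    return 0.5 * math.acos(x)

# forward direction, eq. (m2) -- illustrative masses (eV) and angles
m1, m2 = 0.049, 0.050
t12, t13, a = math.radians(34), math.radians(9), math.radians(80)
m11 = (math.cos(t12)**2 * math.cos(t13)**2 * m1
       + math.sin(t12)**2 * math.cos(t13)**2 * m2 * cmath.exp(2j * a))
print(math.degrees(alpha_M(abs(m11), m1, m2, t12, t13)))  # recovers ~80 degrees
```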
The Jarlskog measure of CP violation, $J_{CP}$, is defined in the usual way as
\begin{eqnarray}
J_{CP}&=& \frac{Im(h_{12}h_{23}h_{31})}{(\Delta m_{21}^2)(\Delta m_{32}^2)(\Delta m_{31}^2)}
\end{eqnarray}
where $h$ is a hermitian matrix constructed out of $m_\nu$ as $h=m_\nu m_\nu^{\dagger}$.
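Under the phase rotation of Eqn.(\ref{pp}), $h\rightarrow P^T h P^*$, so the phases cancel in the product $h_{12}h_{23}h_{31}$; together with the rephasing-invariant $\Delta m^2$ factors this makes $J_{CP}$ basis independent. A numerical sketch with an arbitrary complex symmetric matrix (entries purely illustrative):

```python
import cmath

def dagger(A):
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def jarlskog_numerator(m):
    """Im(h12 h23 h31) with h = m m^dagger."""
    h = matmul(m, dagger(m))
    return (h[0][1] * h[1][2] * h[2][0]).imag

# arbitrary complex symmetric mass matrix (entries purely illustrative)
m = [[0.01 + 0.02j, 0.03 - 0.01j, 0.02 + 0.05j],
     [0.03 - 0.01j, 0.04 + 0.01j, 0.01 + 0.03j],
     [0.02 + 0.05j, 0.01 + 0.03j, 0.05 - 0.02j]]
phis = (0.4, -1.1, 2.2)
mp = [[m[i][j] * cmath.exp(1j * (phis[i] + phis[j]))
       for j in range(3)] for i in range(3)]
print(abs(jarlskog_numerator(m) - jarlskog_numerator(mp)))  # ~0: invariant
```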
\subsubsection{Category C}
In this case breaking is considered in $m_D$ as
\begin{eqnarray}
m_{D1}^4=\begin{pmatrix}
d_1&0&0\\
k_1(1+\epsilon) c_1&k_1 c_2&0\\
c_1&c_2&0\\
\end{pmatrix},
m_{D4}^4 =\begin{pmatrix}
d_1&0&0\\
k_1(1+\epsilon) c_1&0&k_1 c_3\\
c_1&0&c_3\\
\end{pmatrix}
\end{eqnarray}
and the scaling ansatz broken $m_\nu$ appears as \begin{eqnarray}
m_\nu^{\prime C \epsilon} = m_0 \begin{pmatrix}
1&k_1(p+qe^{i\theta})&p+qe^{i\theta}\\
k_1(p+qe^{i\theta})&k_1^2(2pqe^{i\theta}+p^2)&k_1(2pqe^{i\theta}+p^2)\\
p+qe^{i\theta}&k_1(2pqe^{i\theta}+p^2)&(2pqe^{i\theta}+p^2)\\
\end{pmatrix} \nonumber \\+m_0 \epsilon \begin{pmatrix}
0&k_1qe^{i\theta}&0\\k_1qe^{i\theta}&2k_1^2pqe^{i\theta}&k_1pqe^{i\theta}\\
0&k_1pqe^{i\theta}&0\\
\end{pmatrix}.
\end{eqnarray}
\subsubsection{Category D}
Breaking in $m_D$ in this case is incorporated through
\begin{eqnarray}
m_{D2}^4=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&0&k_1(1+\epsilon) c_3\\
c_1&0&c_3\\
\end{pmatrix},
m_{D3}^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&0&k_1(1+\epsilon) c_3\\
c_1&0&c_3\\
\end{pmatrix} \nonumber \\
m_{D5}^4=\begin{pmatrix}
0&d_2&0\\
k_1 c_1&k_1(1+\epsilon) c_2&0\\
c_1&c_2&0\\
\end{pmatrix},
m_{D6}^4=\begin{pmatrix}
0&0&d_3\\
k_1 c_1&k_1(1+\epsilon) c_2&0\\
c_1&c_2&0\\
\end{pmatrix}
\end{eqnarray}
and the corresponding $m_\nu$ comes out as
\begin{eqnarray}
m_\nu^{\prime D \epsilon} =m_0\begin{pmatrix}
0&k_1p&p\\
k_1p&k_1^2(q^2e^{i\alpha}+2rpe^{i\beta})&k_1(q^2e^{i\alpha}+2rpe^{i\beta})\\
p&k_1(q^2e^{i\alpha}+2rpe^{i\beta})&(q^2e^{i\alpha}+2rpe^{i\beta})
\end{pmatrix} \nonumber \\ +m_0\epsilon \begin{pmatrix}
0&0&0\\0&2k_1^2rpe^{i\beta}&k_1rpe^{i\beta}\\0&k_1rpe^{i\beta}&0\\
\end{pmatrix}.
\end{eqnarray}
\subsection{Numerical Analysis}
In order to perform the numerical analysis to obtain allowed parameter space
we utilize the neutrino oscillation data obtained from global fit shown in \textbf{Table \ref{tl}}.
\begin{table}[!h]
\caption{Input experimental values\cite{Tortola:2012te}}\label{tl}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Quantity & 3$\sigma$ ranges \\
\hline
$|\Delta m_{31}^2|$ (N) & $2.31< |\Delta m_{31}^2|/(10^{-3}\;{\rm eV}^2)<2.74$ \\
\hline
$|\Delta m_{31}^2|$ (I) & $2.21< |\Delta m_{31}^2|/(10^{-3}\;{\rm eV}^2)<2.64$ \\
\hline
$\Delta m_{21}^2$& $7.21< \Delta m_{21}^2/(10^{-5}\;{\rm eV}^2)<8.20$ \\
\hline
$\theta_{12}$ & $31.3^o<\theta_{12}<37.46^o$ \\
\hline
$\theta_{23}$ & $36.86^o < \theta_{23}<55.55^o$ \\
\hline
$\theta_{13}$ & $7.49^o < \theta_{13}< 10.46^o$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Category A,B}
We first consider Category A,B for which the neutrino mass matrix is given in Eqn.(\ref{mAB}). The parameter $\epsilon$ is varied freely to fit the extant data and it is constrained as $0.04<\epsilon<0.7$. However, to keep the ansatz breaking effect small we restrict the value of $\epsilon$ only up to 0.1. For this range of $\epsilon$ ($0<\epsilon<0.1$) the parameter spaces are obtained as $1.78<p<3.40$, $1.76<q<3.42$ and $0.66<k_1<1.3$. It is interesting to note that a typical feature of this category is that the Dirac CP phase $\delta_{CP}$ comes out to be tiny, thereby generating an almost vanishing value of $J_{CP}$ ($\approx 10^{-6}$), while the range of the only Majorana phase in this category is obtained as $77^o<\alpha_M<90^o$. \\
\begin{figure}[h!]
\includegraphics[scale=.6]{catApvskjh.png} \includegraphics[scale=.6]{catAqvskjh.png}\\
\caption{Plot of $p$ vs $ k_1$ (left), $q$ vs $k_1$ (right) for the Category A,B with $\epsilon=0.1.$}\label{fig1}
\end{figure}
\noindent
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=.6]{summassjh.png}\\
\caption{ Plot of $|m_{11}|$ vs $\Sigma_i m_i$ for Category A,B with $\epsilon=0.1$.}\label{fig2}
\end{center}
\end{figure}
\noindent
As one of the eigenvalues, $m_3$, vanishes, the hierarchy of the masses is clearly inverted in this category. The sum of the three neutrino masses $\Sigma_i m_i(=m_1+m_2+m_3)$ and $|m_{11}|$ are obtained as 0.088 eV $<\Sigma_i m_i<$ 0.104 eV and 0.0102 eV $<|m_{11}|<$ 0.0181 eV, which places both quantities below the present experimental upper bounds. To illustrate the nature of the variation, in figure \ref{fig1} we plot $p$ vs $k_1$ and $q$ vs $k_1$, while in figure \ref{fig2} a correlation plot of $\Sigma_i m_i$ with $|m_{11}|$ is shown for $\epsilon=0.1$; it is also seen from figures \ref{fig1} and \ref{fig2} that the ranges of the parameters do not differ much compared to the values obtained for the whole range of the $\epsilon$ parameter. \\
\noindent
In brief, distinguishable characteristics of this category are i) tiny $J_{CP}$ and $\delta_{CP}$ ii) inverted hierarchy of the neutrino masses. At the end of this section we will further discuss the experimental testability of these quantities for all the categories.
\subsubsection{Category C}
In this case it is found that a small breaking parameter $\epsilon$ ($0.02<\epsilon<0.09$) is sufficient to accommodate all the oscillation data. We explore the parameter space and the ranges are obtained as $3.42<p<6.07$, $1.68<q<3.02$ and $0.7<k_1<1.32$. The hierarchy obtained in this case is also inverted due to the vanishing value of $m_3$.
\begin{figure}[!h]
\includegraphics[scale=.6]{catBpvskjh.png} \includegraphics[scale=.6]{catBqvskjh.png} \\
\caption{ Plot of $p$ vs $ k_1$ (left), $q$ vs $k_1$ (right) for the Category C with $\epsilon=0.09.$}\label{fig3}
\end{figure}
The other two quantities $\Sigma_i m_i$ and $|m_{11}|$ come out as 0.0118 eV $<|m_{11}|<$ 0.019 eV and 0.088 eV $<\Sigma_i m_i<$ 0.105 eV. Similar to the previous category, $J_{CP}$ is vanishingly small due to the low value of $\delta_{CP}$. The range of the Majorana phase $\alpha_M$ is obtained as $81^o<\alpha_M<89^o$. In figure \ref{fig3} we plot $k_1$ vs $p$ and $k_1$ vs $q$ for $\epsilon=0.09$, which predicts almost the same ranges of the parameters ($p$, $q$ and $k_1$) and all other quantities ($|m_{11}|$, $\Sigma_i m_i$, $\alpha_M$ and $J_{CP}$) as obtained from the whole range of $\epsilon$. We present a correlation plot of $\Sigma_i m_i$ with $|m_{11}|$ in figure \ref{fig4}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.6]{m_11jh.png}\\
\caption{Plot of $|m_{11}|$ vs $\Sigma_i m_i$ for Category C with $\epsilon=0.09.$}\label{fig4}
\end{center}
\end{figure}
\subsubsection{Category D}\label{s1}
In the case of Category D, although a priori it is not possible to rule out $m_\nu^{\prime D\epsilon}$ without a detailed numerical analysis, it turns out that even with $\epsilon=1$ it is not possible to accommodate the neutrino oscillation data. Specifically, the value of $\theta_{13}$ always remains beyond the reach of the parameter space. For exactly the same reason the $m_\nu$ matrix of type $A_3$ in Eqn.(\ref{rldout}) is phenomenologically ruled out.
\subsection{Breaking in Dirac+Majorana sector}
In this section we focus on the phenomenology of the neutrino mass matrix where the scaling ansatz is broken in both the sectors. This type of breaking is only relevant for Category A since in this case $m_D$ is nonsingular after breaking of the ansatz and the resultant $m_\nu$ gives rise to nonzero $\theta_{13}$ along with $m_3\neq0$. In all the other categories due to the singular nature of $m_D$, inclusion of symmetry breaking in the Majorana sector will not generate $m_3\neq0$. Thus we consider only Category A under this scheme.
\subparagraph{}
We consider the breaking in $m_D$ as mentioned in Eqn.(\ref{11}) and the ansatz broken texture of $\mu_1^4$ matrix is given by
\begin{eqnarray}
\mu_1^4 &=& \begin{pmatrix}
r_1&0&0\\
0&k_2^2s_3&k_2 (1+\epsilon^\prime)s_3\\
0&k_2(1+\epsilon^\prime) s_3&s_3\\
\end{pmatrix}
\end{eqnarray}
where $\epsilon^\prime$ is a dimensionless real parameter. The effective neutrino mass matrix $m_\nu$ comes out as
\begin{eqnarray}
m_{\nu \epsilon ^ \prime}^{\prime A \epsilon}= m_0\begin{pmatrix}
1& k_1p & p \\
k_1p & k_1^2(q^2e^{i\theta}+p^2)& k_1(q^2e^{i\theta}+p^2)\\p & k_1(q^2e^{i\theta}+p^2)&(q^2e^{i\theta}+p^2)
\end{pmatrix}+m_0 \epsilon \begin{pmatrix}
0&0&0\\0&2k_1^2q^2e^{i\theta}&k_1q^2e^{i\theta}\\0&k_1q^2e^{i\theta}&0\\
\end{pmatrix} \nonumber \\ +m_0 \epsilon^\prime \begin{pmatrix}
0&k_1p&p\\k_1p&0&0\\p&0&0
\end{pmatrix}.
\end{eqnarray}
\subsubsection{Numerical results}
As mentioned above, $\epsilon^{\prime}=0$ leads to inverted hierarchy with $m_3=0$, and thus to generate nonzero $m_3$ a small value of $\epsilon^{\prime}$ is needed. Similar to the previous cases, the two breaking parameters $\epsilon$ and $\epsilon^{\prime}$ can be varied freely; the ranges compatible with the oscillation data are obtained as $0.06<\epsilon<0.68$ and $0<\epsilon^{\prime}<1$. It is to be noted that although the $\epsilon$ parameter is restricted by the $\theta_{13}$ value, $\epsilon^{\prime}$ is almost insensitive to $\theta_{13}$ and can vary within the wide range $0<\epsilon^{\prime}<1$. A correlation plot of $\epsilon$ with $\epsilon^{\prime}$ is shown in figure \ref{fig5}. However, as mentioned earlier, the effect of the breaking term should be smaller than the unbroken one; therefore, to obtain the parameter space for this category we consider breaking of the scaling ansatz in both sectors only up to 10\%, and consequently for all combinatorial values of $\epsilon$ and $\epsilon^{\prime}$ the parameters $p$, $q$ and $k_1$ vary within the ranges $1.07<p<3.10$, $1.03<q<3.12$ and $0.67<k_1<1.31$. Interestingly, although all the eigenvalues are nonzero in this case, the hierarchy is still inverted. $J_{CP}$ is again found to be tiny ($\approx 10^{-6}$) due to the small value of $\delta_{CP}$. The Majorana phases are obtained as $-96^o<\alpha_M<74^o$ and $-100^o<\beta_M+\delta_{CP}<102^o$, followed by the bounds on $\Sigma_im_i$ and $|m_{11}|$ as 0.088 eV $<\Sigma_i m_i<$ 0.11 eV and 0.010 eV $<|m_{11}|<$ 0.022 eV, which are well below the present experimental upper bounds. In figure \ref{fig6} we demonstrate the above predictions for $\epsilon=\epsilon^{\prime}=0.1$: the left panel shows the inverted hierarchical nature and the right panel the variation of the Majorana phases.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.6]{epslnpvsepsln.png}
\caption{Correlated plot of $\epsilon$ with $\epsilon^{\prime}$.}\label{fig5}
\end{center}
\end{figure}\\
\begin{figure}[h!]
\includegraphics[scale=.6]{hierarchyjh.png} \includegraphics[scale=.6]{pap1majoph.png}\\
\caption{ Plot of $(m_1/m_3)$ vs $ (m_2/m_1)$ (left) and $\beta_M+\delta_{CP}$ vs $\alpha_M$ (right) after breaking of the scaling ansatz in both the sectors of Category A for a representative value of $\epsilon=\epsilon^{\prime}=0.1$.}\label{fig6}
\end{figure}
\\
\noindent
Some comments are in order regarding predictions of the present scheme:\\
\noindent
1. After the precise determination of $\theta_{13}$, taking full account of the reactor neutrino experimental data, it has been shown that the hierarchy of the light neutrino masses can be probed in the near future through a combined utilization of the NO$\nu$A and T2K\cite{Ieki:2014bca} neutrino oscillation experimental results. Thus the prediction of the hierarchy in the present scheme will be clearly verified. Moreover, taking the difference between the probabilities $P(\nu_\mu \rightarrow \nu_e)$ and $P(\bar{\nu}_\mu\rightarrow \bar{\nu}_e)$, information on the value of $J_{CP}$ can be obtained using neutrino and antineutrino beams.\\
\noindent
2. A more precise estimation of the sum of the three light neutrino masses will be obtained from a combined analysis of PLANCK data\cite{Ade:2013zuv} and other cosmological and astrophysical experiments\cite{Lesgourgues:2014zoa}, such as the Baryon Oscillation Spectroscopic Survey, the Dark Energy Survey, the Large Synoptic Survey Telescope or the Euclid satellite\cite{1475-7516-2013-01-026}. Such an analysis will push the sensitivity to $\Sigma_i m_i\sim 0.1$ eV (at the $4\sigma$ level for inverted ordering) and $\Sigma_i m_i\sim 0.05$ eV (at the $2\sigma$ level for normal ordering). Thus the predicted values of $\Sigma_i m_i$ in the different categories discussed in the present work will also be tested in the near future. Furthermore, NEXT-100\cite{DavidLorcafortheNEXT:2014fga} will probe the value of $|m_{11}|$ down to $0.1$ eV, which is more sensitive than the EXO-200\cite{Auger:2012ar} experimental range (0.14-0.38 eV).
\section{Summary and conclusion}
In this work we explore the phenomenology of the neutrino mass matrix obtained through the inverse seesaw mechanism, adhering to
i) the scaling ansatz and ii) texture zeros, within the framework of the $SU(2)_L \times U(1)_Y$ model with three right handed neutrinos and three left chiral singlet
fermions.
Throughout our analysis we choose a basis in which the charged lepton mass matrix ($m_E$) and the $M_{RS}$ matrix (which appears in the inverse seesaw mechanism through the coupling of $\nu_R$ and $S_L$) are diagonal. It is found that four is the maximum number of zeros that can be allowed in the $m_D$ and $\mu$ matrices to obtain viable phenomenology. We classify the different four zero textures into four categories depending upon their generic form. Since scaling ansatz invariance always gives rise to $\theta_{13}=0$, we have to break this ansatz. We consider breaking in the $m_D$ and also in the $\mu$ matrices. We explore the parameter space and it is seen that one category (Category D) is ruled out phenomenologically. The hierarchy obtained in all the cases is inverted, and it is interesting to note that all the viable categories give rise to a tiny $CP$ violation measure $J_{CP}$ due to the small value of $\delta_{CP}$. In conclusion, further observation of the hierarchy of neutrino masses and of CP violation in the leptonic sector in the forthcoming experiments will conclusively refute or admit the categories obtained in the present scheme.\\\\\\
\noindent
\textbf{ Acknowledgement}\\
We thank Mainak Chakraborty for discussion and computational help.
\newpage
\section{Introduction}
A gas of massive fermions which interact only gravitationally
has interesting thermal properties which may have important
consequences for the early universe.
The canonical and grand canonical ensembles
for such a system
have been shown to
have a nontrivial thermodynamical limit~\cite{Thir,Hert}.
Under certain conditions these systems
will undergo a phase transition
that is accompanied by gravitational collapse~\cite{Mess}.
This phase transition
occurs uniquely for the attractive gravitational interaction of
neutral fermions. There is indeed no such phase transition in
the case of charged fermions~\cite{Feyn}. Of course, gravitational
condensation will also take place if the fermions have an
additional weak interaction, as neutrinos, neutralinos,
axions and other
weakly interacting massive particles generally do.
Moreover, it is important to
note that the phase transition will occur, irrespective of the
magnitude of the initial density fluctuations in the fermion gas,
and, as we shall shortly demonstrate, irrespective of the
amount of background radiation.
To be specific, we
henceforth assume that this neutral fermion is the heaviest
neutrino $\nu_{\tau}$, although this is not essential for most of the
subsequent discussion.
The ground state of a gravitationally condensed neutrino cloud,
with mass below the Chandrasekhar limit, is a cold neutrino
star~\cite{Luri,RDV6,RDV7,RDV8}, in which the degeneracy pressure
balances the gravitational attraction of the neutrinos.
Degenerate stars of neutrinos in the mass range between $m_{\nu}
= 10$ and 25 keV are particularly
interesting~\cite{RDV8}, as they could explain, without resorting
to the black hole hypothesis, at least some of the features that
are observed around supermassive compact dark objects, which are
reported to exist at the centers of a number of
galaxies~\cite{Tonr,Dres10,Dres11,Korm12,Korm13,Korm14} including
our own~\cite{Lacy15,Lacy16} and quasi-stellar objects (QSO)
\cite{Bell,Zeld,Blan,Bege}. Indeed, there is little difference
between a supermassive black hole and a neutrino star of the same
mass near the Chandrasekhar limit, a few Schwarzschild radii away
from the object.
The existence of a quasi-stable neutrino in this mass range is
neither ruled out by particle and nuclear physics experiments nor by
direct astrophysical observations~\cite{RDV8}. In the
early universe, however, it would lead to an early neutrino matter
dominated phase some time after nucleosynthesis and prior to
recombination. In such a universe, the microwave background
temperature would be reached much too early to accommodate the
oldest stars in globular clusters, cosmochronology and the
Hubble expansion age, if the Standard Model of Cosmology is
correct. However, the early universe might have evolved quite
differently in the presence of such a heavy neutrino. In
particular, it is conceivable that primordial neutrino stars have
been formed in local condensation processes during a
gravitational phase transition that must have occurred some time
between nucleosynthesis and recombination. Aside from
reheating the gaseous phase of heavy neutrinos,
the latent heat
produced by the
condensed phase might have contributed partly to reheating the
radiation as well. Moreover, the bulk part of the heavy neutrinos
(and antineutrinos) will have annihilated efficiently into light
neutrinos via the $Z^{0}$ in the interior of these supermassive
neutrino
stars~\cite{Luri,RDV6,RDV7,RDV8}.
Since both these processes will
increase the age of the universe, or the time when the universe
reaches today's microwave background temperature~\cite{RDV8},
it does not seem
excluded that a quasi-stable massive neutrino in the mass range
between 10 and 25
keV is compatible with the cosmological
observations~\cite{Luri,RDV8}.
\newpage
The purpose of
this paper is to study the formation of such a neutrino star
during a gravitational phase transition in an expanding
universe at the time when
the energy densities of neutrino matter and radiation
are of comparable magnitude.
We assume here that the
equilibrium distribution of the $\nu_{\tau}$ gas
is spherically symmetric and
the energy density $\rho_{\gamma}$ of the radiation background
homogeneous.
At this stage
$\rho_{\gamma}$, which consists of
photons and the two remaining relativistic
neutrino species
$\nu_{\mu}$ and
$\nu_e$, is given by
\begin{equation}\label{eq01}
\rho_{\gamma} = \frac{a}{2} g_2 T^4_{\gamma},
\end{equation}
with
\begin{equation}\label{eq02}
g_N=2+\frac{7}{4}\left(\frac{4}{11}\right)^{4/3} N .
\end{equation}
The gravitational potential $V(r)$ satisfies the Poisson equation
\begin{equation}\label{eq00}
\Delta V = 4\pi G m_{\nu} (m_{\nu}n_{\nu}+\rho_{\gamma}),
\end{equation}
where the number density of $\tau$
neutrinos (including antineutrinos) of mass $m_{\nu}$ can be expressed
in terms of the Fermi-Dirac distribution at a finite temperature
$T$ as
\begin{eqnarray}\label{eq10}
n_{\nu}(r) & = & \frac{g_{\nu}}{4 \pi^{2}} \left(
2m_{\nu} T \right)^{3/2} I_{\frac{1}{2}} \left( \frac{\mu - V(r)}{T}
\right),
\end{eqnarray}
with
\begin{eqnarray}\label{eq20}
I_{n} (\eta) & = & \int^{\infty}_{0} \frac{\xi^{n} d \xi}{1 +
e^{\xi - \eta}}.
\end{eqnarray}
$g_{\nu}$ denotes the combined spin degeneracy factors of
neutrinos and antineutrinos (i.e.\ $g_{\nu}$ is 2 or 4 for Majorana
or Dirac neutrinos respectively), and $\mu$ is the chemical
potential.
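The Fermi-Dirac integral $I_{1/2}$ of eq.(\ref{eq20}) has no elementary closed form for general $\eta$, but it is straightforward to evaluate numerically; at $\eta=0$ it reduces to $(1-2^{-1/2})\Gamma(3/2)\zeta(3/2)\approx0.678$, which the simple quadrature sketch below reproduces (the cutoff and step count are arbitrary choices):

```python
import math

def I_half(eta, upper=60.0, n=200000):
    """Trapezoidal estimate of I_{1/2}(eta); cutoff and step count arbitrary."""
    h = upper / n
    total = 0.0
    for k in range(n + 1):
        xi = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.sqrt(xi) / (1.0 + math.exp(xi - eta))
    return total * h

closed_form = (1 - 2**-0.5) * math.gamma(1.5) * 2.612375  # zeta(3/2) ~ 2.612375
print(I_half(0.0), closed_form)  # both ~ 0.678
```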
It is convenient to
introduce the normalized reduced potential
\begin{equation}\label{21}
v = \frac{r}{m_{\nu}GM_{\odot}}(\mu-V),
\end{equation}
$M_{\odot}$
being the solar mass,
and the dimensionless variable $x=r/R_0$
with the scale factor
\begin{equation}\label{eq40}
R_0 = \left( \frac{3 \pi } {4 \sqrt{2} m_{\nu}^{4}
g_{\nu} G^{3/2} M_{\odot}^{1/2}} \right)^{2/3} =
2.1377\;\;{\rm lyr} \left( \frac{17.2\;\;{\rm keV}}{m_{\nu}}
\right)^{8/3} g_{\nu}^{- 2/3}.
\end{equation}
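The quoted numerical value of $R_0$ in eq.~(\ref{eq40}) can be reproduced directly. The check below (our own, in natural units $\hbar=c=1$ with present-day values of $G$ and $M_{\odot}$; the constants used in the original may differ in the last digits) recovers $R_0 \approx 2.14$ lyr for $m_\nu = 17.2$ keV and $g_\nu = 1$:

```python
import math

# Natural units: hbar = c = 1, energies in GeV, lengths in GeV^-1.
HBARC_GEV_M = 1.97327e-16            # hbar*c in GeV*m
M_PLANCK_GEV = 1.22089e19            # Planck mass in GeV
G_NEWTON = 1.0 / M_PLANCK_GEV ** 2   # Newton's constant in GeV^-2
M_SUN_GEV = 1.98892e30 * 5.60959e26  # solar mass: kg -> GeV
LYR_M = 9.4607e15                    # light-year in metres

def R0_lyr(m_nu_kev, g_nu=1.0):
    """Scale factor R_0 = (3 pi / (4 sqrt2 m^4 g_nu G^{3/2} M_sun^{1/2}))^{2/3}."""
    m = m_nu_kev * 1e-6              # keV -> GeV
    r0 = (3.0 * math.pi /
          (4.0 * math.sqrt(2.0) * m ** 4 * g_nu
           * G_NEWTON ** 1.5 * math.sqrt(M_SUN_GEV))) ** (2.0 / 3.0)
    return r0 * HBARC_GEV_M / LYR_M  # GeV^-1 -> m -> lyr

print(R0_lyr(17.2))  # compare with the 2.1377 lyr quoted in the text
```

Note also the scaling $R_0 \propto m_\nu^{-8/3}$, visible in the second form of eq.~(\ref{eq40}).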
Eq.(\ref{eq00}) then takes the simple form
\begin{equation}\label{eq50}
\frac{1}{x}
\frac{d^{2} v}{dx^{2}} = - \frac{3}{2} \beta^{-
3/2} I_{\frac{1}{2}} \left( \beta \frac{v}{x} \right)
-4\pi\frac{R_0^3\rho_{\gamma}}{M_{\odot}},
\end{equation}
where we have introduced the
normalized inverse temperature
defined as
\begin{equation}\label{51}
\beta = T_{0}/T ;\;\;\;\;\; T_0=m_{\nu} GM_{\odot} / R_0 .
\end{equation}
At zero temperature
we recover from eq.(\ref{eq50}) the
well-known Lane-Emden differential equation [6,8]
\begin{eqnarray}\label{eq60}
\frac{d^{2} v}{dx^{2}} & = & - \frac{v^{3/2}}{\sqrt{x}}.
\end{eqnarray}
The solution of the differential equation (\ref{eq50}) requires
boundary conditions. We assume here that
the neutrino gas is enclosed in a spherical
cavity of radius $R$ corresponding to $x_1=R/R_0$.
We further require the total
neutrino mass to be $M_{\nu}$, and the total radiation mass
within the cavity
to be $M_{\gamma}$, and we allow for the possibility
of a pointlike mass $M_{C}$ at the origin, which could be e.g.\ a
compact seed of other exotic matter. $v(x)$ is then related to
its derivative
at $x = x_1$ by
\begin{eqnarray}
v' (x_1) & = & \frac{1}{x_1} \left( v (x_1) -
\frac{M_C+M_{\gamma} + M_{\nu}}{M_{\odot}} \right),
\label{eq100}
\end{eqnarray}
which in turn is related to
the chemical potential by $\mu = T_{0} v' (x_1)$.
$v(x)$ at $x=0$ is related to
the point mass at the origin by
$M_{C}/M_{\odot} = v(0)$.
Similar to the case of the Lane-Emden equation,
it is easy to show that eq.(\ref{eq50}) has a scaling property:
if $v(x)$
is a solution of eq.(\ref{eq50}) at a temperature $T$
and a cavity radius $R$, then
$\tilde{v} (x) = A^{3} v (Ax)$ with $A > 0$ is also a solution
at the temperatures $\tilde{T} = A^{4} T$,
$\tilde{T}_{\gamma} = A^{4} T_{\gamma}$
and the cavity radius $\tilde{R}=R/A$.
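At $T=0$ this scaling can be checked directly on eq.~(\ref{eq60}): if $v(x)$ solves it, so does $A^3 v(Ax)$, whose initial slope is $A^4 v'(0)$. A numerical sketch (our own, fixed-step RK4 integration from $v(0)=0$):

```python
import math

def integrate(slope, x_end, n_steps):
    """Integrate v'' = -v^{3/2}/sqrt(x) with v(0)=0, v'(0)=slope (eq. 60).
    Returns the list of v values on the uniform grid x_k = k*h."""
    h = x_end / n_steps
    def acc(x, v):
        # the apparent singularity at x=0 is harmless since v ~ slope*x there
        return 0.0 if x <= 0.0 else -max(v, 0.0) ** 1.5 / math.sqrt(x)
    v, w, vals = 0.0, slope, [0.0]   # w = dv/dx
    for i in range(n_steps):
        x = i * h
        k1v, k1w = w, acc(x, v)
        k2v, k2w = w + 0.5 * h * k1w, acc(x + 0.5 * h, v + 0.5 * h * k1v)
        k3v, k3w = w + 0.5 * h * k2w, acc(x + 0.5 * h, v + 0.5 * h * k2v)
        k4v, k4w = w + h * k3w, acc(x + h, v + h * k3v)
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        w += h * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0
        vals.append(v)
    return vals

A = 0.5
v = integrate(1.0, 1.0, 20000)      # v with v'(0) = 1
vt = integrate(A ** 4, 1.0, 20000)  # tilde-v with tilde-v'(0) = A^4
# scaling property: tilde-v(x) = A^3 v(A x); compare at x = 1
print(vt[20000], A ** 3 * v[10000])
```

The two printed numbers agree to the accuracy of the integrator, confirming the scaling at zero temperature.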
It is important to note that only those
solutions that
minimize the free energy are physical.
The free energy functional is defined as~\cite{Hert}
\begin{eqnarray}
F[n] & = & \mu[n] N_{\nu}-W[n]
\nonumber \\
& - &
Tg_{\nu}\int\frac{d^3rd^3p}{(2\pi)^3}
\ln( 1+\exp \left(-\frac{p^2}{2m_{\nu}T}-
\frac{V[n]}{T}+\frac{\mu[n]}{T}\right)),
\label{eq120}
\end{eqnarray}
where
\begin{equation}\label{eq130}
V[n] = -Gm_{\nu}
\int d^3r'\frac{m_{\nu}n(r')+\rho_{\gamma}}{|{\bf r}-{\bf r}'|},
\end{equation}
and
\begin{equation}\label{eq131}
W[n] = -\frac{1}{2}Gm_{\nu}^2
\int d^3rd^3r'\frac{n(r)n(r')}{|{\bf r}-{\bf r}'|}.
\end{equation}
The chemical potential in eq. (\ref{eq120}) varies with
density so that
the number of neutrinos $N_{\nu}=M_{\nu}/m_{\nu}$
is kept fixed.
All the relevant thermodynamical quantities
such as
number density, pressure,
free energy, energy and entropy
can be expressed in terms of $v/x$
\begin{eqnarray}
n_{\nu}(x)
\!&\!=\!&\!
\frac{M_{\odot}}{m_{\nu} R_0^3}
\frac{3}{8\pi}
\beta^{-3/2} I_{\frac{1}{2}}
\left( \beta \frac{v}{x} \right),
\label{eq140}\\[.2cm]
P_{\nu}(x)
\!&\!=\!&\!
\frac{M_{\odot} T_0}{m_{\nu} R_0^3}
\frac{3}{8\pi}
\beta^{-5/2} I_{\frac{3}{2}}
\left( \beta \frac{v}{x} \right) =
\frac{2}{3} \varepsilon_{\rm kin} (x),
\label{eq150}
\end{eqnarray}
\newpage
\begin{eqnarray}
F
\!&\!=\!&\!
\frac{1}{2}\mu(N_{\nu}+N_{\gamma})
+\frac{3}{5}M_{\gamma}^2\frac{1}{R}
\nonumber \\
\!&\!+\!&\!
\frac{1}{2}T_0R_0^3\int d^3x(n_{\nu}-n_{\gamma})\frac{v(x)-v(0)}{x}
-R_0^3\int d^3x P_{\nu}(x),
\label{eq160}\\[.2cm]
E
\!&\!=\!&\!
\frac{1}{2}\mu(N_{\nu}+N_{\gamma})
+\frac{3}{5}M_{\gamma}^2\frac{1}{R}
\nonumber \\
\!&\!-\!&\!
\frac{1}{2}T_0R_0^3\int d^3x\left[(n_{\nu}+n_{\gamma})\frac{v(x)}{x}
+(n_{\nu}-n_{\gamma})\frac{v(0)}{x}\right]
+R_0^3\int d^3x
\varepsilon_{\rm kin}(x),
\label{eq170}\\[.2cm]
S
\!&\!=\!&\!
\frac{1}{T}(E-F).
\label{eq180}
\end{eqnarray}
In eqs. (\ref{eq160}) and (\ref{eq170}) we have introduced
the effective radiation number
$N_{\gamma}=M_{\gamma}/m_{\nu}$,
and the effective radiation number density
$n_{\gamma}=\rho_{\gamma}/m_{\nu}$.
We now turn to the numerical study of a
system
of self-gravitating massive neutrinos
with arbitrarily chosen
total mass $M=10 M_{\odot}$ varying the cavity radius $R$
and the radiation mass $M_{\gamma}$.
Owing to the scaling
properties, the system may be rescaled to any physically
interesting mass.
For definiteness, the $\nu_{\tau}$ mass
is chosen as
$m_{\nu}=17.2$ keV,
which is about the central value of the mass
region between 10 and 25 keV \cite{RDV8}
that is interesting for our scenario.
In fig. 1 we present our results for a
gas of neutrinos
in a cavity of
radius $R=100 R_0$,
and with no
background radiation,
i.e.\ with $M_{\gamma}=0$.
We find the
three distinct solutions
in the temperature interval
$T=(0.049-0.311)T_0$
of which only two are physical,
namely those
for which the free-energy
assumes a minimum.
The density distributions
corresponding to
these two solutions are shown in the first plot
in fig. 1.
We refer to the solution
that exists above the
mentioned interval at arbitrarily high temperatures
as the ``gas'', while the solution which
exists at low
temperatures and eventually becomes a degenerate
Fermi gas at $T=0$ we refer to as the ``condensate''.
In fig. 1 we also plot
various extensive
thermodynamical quantities
(per neutrino)
as functions of neutrino temperature.
The phase transition takes place at the point
$T_t$ where the free energy of the gas and
condensate become equal.
The transition temperature
$T_t=0.19442T_0$ is
indicated by the dotted line
in the free energy plot.
The top dashed curve in the same plot
corresponds to the unphysical solution.
At $T=T_t$ the energy and the entropy have a discontinuity.
Next we study a system of massive neutrino gas in
a radiation background.
In the course of universe expansion,
the heavy neutrino becomes nonrelativistic
at a time $t_{NR}$ corresponding to the temperature
$T_{NR}=m_{\nu}$. At that time the radiation dominates
the matter by about a factor of 15.
A neutrino cloud with a total mass of e.g.\
$10^9 M_{\odot}$,
which is about a factor of 10 below the
Chandrasekhar limit for
$m_{\nu}=17.2$ keV and
$g_{\nu}=4$,
would have a radius
\begin{equation}
R_{NR}=2[G(M_{\nu}+M_{\gamma})t_{NR}^2]^{1/3},
\label{310}
\end{equation}
yielding $R_{NR}=0.265$ light days.
As we shall shortly see, this
is well below the critical value, and therefore
there exists
only one solution
at any fixed temperature.
As the universe expands and cools down
the relative amount of radiation mass
decreases as
\begin{equation}
M_{\gamma}=15M_{\nu}\frac{R_{NR}}{R}.
\label{320}
\end{equation}
Concerning the $R$ dependence of the temperature
we assume that during the
radiation domination the temperature $T$ of the
neutrino gas is fixed by the radiation heat bath
so that it continues to decrease as $1/R$ until
the neutrino matter starts to dominate.
At that point the system being in gaseous
phase will have the entropy per particle
$S/N_{\nu}=7.60$.
From then on,
$T$ is no longer coupled to the radiation
heat bath and decreases
with $R$ according to the adiabatic expansion
of the neutrino matter itself.
Shortly after the matter starts to dominate,
we reach the region of instability and
the first order phase transition takes place.
Fig.~2 shows how the entropy behaves around the critical
region.
We find as we decrease the radius
that the first order phase transition becomes weaker
and eventually disappears at $R=R_c=5.34$ light days
corresponding to the critical temperature
$T_c=0.0043m_{\nu}$ and the radiation mass
relative to the neutrino mass
$M_{\gamma}=0.67M_{\nu}$.
This is the critical point of a second order
phase transition. This behavior is typical for
a mean-field type of models \cite{bil}.
The initial entropy of 7.6 per neutrino
drops to 0.75, so that the latent heat per particle
is
$\Delta E= T \Delta S= 0.016\,m_{\nu}$.
Thus,
the condensate formation is accompanied by
a release of a considerable amount of
energy which will reheat
the radiation environment.
\vspace{0.2in}
{\large \bf Acknowledgement}
We acknowledge useful discussions with
D. Tsiklauri and G.J. Kyrchev.
\vspace{0.4in}
\section{Introduction}
\label{sec:introduction}
\subsection{Orientation}
In the last several years there has been a revolution
in string theory. There are two major developments responsible for
this revolution.
\vspace{0.05in}
\noindent
{\sl i}. It has been found that
all five string theories, as well as 11-dimensional supergravity,
are related by duality symmetries and
seem to be aspects of one underlying theory whose fundamental
principles have not yet been elucidated.
\noindent
{\sl ii}. String theories contain Dirichlet $p$-branes, also known as
``D-branes''. These objects have been shown to play a fundamental
role in nonperturbative string theory.
\vspace{0.05in}
Dirichlet $p$-branes are dynamical objects which are extended in $p$ spatial
dimensions. Their low-energy physics can be described by
supersymmetric gauge theory. The goal of these lectures is to
describe the physical properties of D-branes
which can be understood from this Yang-Mills theory description. There is a
two-fold motivation for taking this point of view. At the superficial
level, super Yang-Mills theory describes much of the interesting
physics of D-branes, so it is a nice way of learning something about
these objects without having to know any sophisticated string theory
technology. At a deeper level, there is a growing body of evidence
that super Yang-Mills theory contains far more information about
string theory than one might reasonably expect. In fact, the recent
Matrix theory conjecture \cite{BFSS} essentially states that the
simplest possible super Yang-Mills theory with 16 supersymmetries,
namely ${\cal N} = 16$ super Yang-Mills theory in 0 + 1 dimensions,
completely reproduces the physics of eleven-dimensional supergravity
in light-front gauge.
The point of view taken in these lectures is that many interesting
aspects of string theory can be derived from Yang-Mills theory. This
is a theme which has been developed in a number of contexts in recent
research. Conversely,
one of the other major themes of recent
developments in formal high-energy theory has been the idea that
string theory can tell us remarkable things about low-energy field
theories such as super Yang-Mills theory, particularly in a
nonperturbative context. In these lectures we will not discuss any
results of the latter variety; however, it is useful to keep in mind the
two-way nature of the relationship between string theory and
Yang-Mills theory.
The body of knowledge related to D-branes and Yang-Mills theory is by
now quite enormous and is growing steadily. Due to limitations on
time, space and the author's knowledge there are many interesting
developments which cannot be covered here. As always in an effort of
this sort, the choice of topics covered largely reflects the prejudices of the
author. An attempt has been made, however, to concentrate on a
somewhat systematic development of those concepts which are useful in
understanding recent progress in Matrix theory. For a comprehensive
review of D-branes in string theory, the reader is referred to the
reviews of Polchinski et al.\ \cite{pcj,Polchinski-TASI}.
These lectures begin with a review of how the low-energy Yang-Mills
description of D-branes arises in the context of string theory. After
this introduction, we take super Yang-Mills theory as our starting
point and we proceed to discuss a number of aspects of D-brane and string
theory physics from this point of view. In the last
lecture we use the technology developed in the first four
lectures to discuss the recently developed Matrix theory.
\subsection{D-branes from string theory}
We now give a brief review of the manner in which D-branes appear in
string theory. In particular, we give a heuristic description of how
supersymmetric Yang-Mills theory arises as a low-energy description of
parallel D-branes. The discussion here is rather abbreviated; the
reader interested in further details is referred to the reviews of
Polchinski et al.\ \cite{pcj,Polchinski-TASI} or to the original papers
mentioned below.
\begin{figure}
\vspace{-0.3in}
\psfig{figure=D-brane.eps,height=1.5in}
\vspace{-0.3in}
\caption[x]{\footnotesize D-branes (gray) are
extended objects on which strings (black) can end}
\label{f:D-branes}
\end{figure}
In string theory, Dirichlet $p$-branes are defined as $(p+1)$-dimensional
hypersurfaces in space-time on which strings are allowed to end (see
Figure~\ref{f:D-branes}). From the point of view of perturbative
string theory, the positions of the D-branes are fixed, corresponding
to a particular string theory background. The massless modes
of the open strings connected to the D-branes can be associated with
fluctuation modes of the D-branes themselves, however, so that in a full
nonperturbative context the D-branes are expected to become dynamical
$p$-dimensional membranes. This picture is analogous to
the way in which, in a particular metric background for perturbative
string theory, the quantized closed string has massless graviton modes
which provide a mechanism for fluctuations in the metric itself.
The spectrum of low-energy fields in a given string background can be
simply computed from the string world-sheet field theory \cite{GSW}.
Let us briefly review the analyses of the spectra for the string
theories in which we will be interested. We consider two types of
strings: open strings, with endpoints which are free to move
independently, and closed strings, with no endpoints. A superstring
theory is defined by a conformal field theory on the $(1+1)$-dimensional
string world-sheet, with free bosonic fields $X^\mu$ corresponding to
the position of the string in 10 space-time coordinates, and fermionic
fields $\psi^\mu$ which are partners of the fields $X^\mu$ under
supersymmetry. Just as for the classical string studied in beginning
physics courses, the degrees of freedom on the open string correspond
to standing wave modes of the fields; there are twice as many
modes on the closed string, corresponding to right-moving and
left-moving waves. The open string boundary conditions on the bosonic fields
$X^\mu$ can be Neumann or Dirichlet for each field separately. When
all boundary conditions are Neumann the string endpoints move freely
in space. When $9-p$ of the fields have
Dirichlet boundary conditions, the string endpoints are constrained to
lie on a $p$-dimensional hypersurface which corresponds to a D-brane.
Different boundary conditions can also be chosen for the fermion
fields on the string. On the open string, boundary conditions
corresponding to integer and half-integer modes are referred to as
Ramond (R) and Neveu-Schwarz (NS) respectively. For the closed
string, we can separately choose periodic or antiperiodic boundary
conditions for the left- and right-moving fermions. These give rise to
four distinct sectors for the closed string: NS-NS, R-R, NS-R and
R-NS.
Straightforward quantization of either the open or closed superstring
theory leads to two difficulties: the theory seems to contain a
tachyon with $M^2 < 0$, and the theory is not supersymmetric from the
point of view of ten-dimensional space-time. It turns out that both of
these difficulties can be solved by projecting out half of the states
of the theory. For the open string theory, there are two choices of
how this GSO projection operation can be realized. These two
projections are equivalent, however, so that there is a unique spectrum for the
open superstring. For the closed string, on the other hand, one can
either choose the same projection in the left and right sectors, or
opposite projections. These two choices lead to the physically
distinct IIA and IIB closed superstring theories, respectively.
From the point of view of 10D space-time, the massless fields arising
from quantizing the string theory and incorporating the GSO projection
can be characterized by their transformation properties under ${\rm
spin}(8)$ (this is the covering group of the group $SO(8)$ which
leaves a lightlike momentum vector invariant). We will now simply
quote these results from
\cite{GSW}. For the
open string, in the NS sector there is a vector field $A_\mu$,
transforming under the $8_v$ representation of ${\rm spin} (8)$ and in
the R sector there is a fermion $\psi$ in the $8_s$ representation.
The massless fields for the IIA and IIB closed strings in the NS-NS
and R-R sectors are given in the following table:
\vspace{0.08in}
\begin{center}
\begin{tabular}{| l | c | c |} \hline
& NS-NS& R-R \\\hline
IIA & $g_{\mu \nu}, \phi, B_{\mu \nu}$ & $A^{(1)}_\mu, A^{(3)}_{\mu
\nu \rho}$\\\hline
IIB & $g_{\mu \nu}, \phi, B_{\mu \nu}$ & $A^{(0)},
A^{(2)}_{\mu \nu}, A^{(4)}_{\mu
\nu \rho \sigma}$\\\hline
\end{tabular}
\end{center}
\vspace{0.08in}
The IIA and IIB strings have the same fields in the NS-NS sector,
corresponding to the space-time metric $g_{\mu \nu}$, dilaton $\phi$
and antisymmetric tensor field $B_{\mu \nu}$. In addition, each
closed string theory has a set of R-R fields. For the IIA theory
there are 1-form and 3-form fields. For the IIB theory there is a
second scalar field (the axion), a second 2-form field, and a 4-form
field $A^{(4)}$ which is self-dual. The NS-NS and R-R fields all
correspond to space-time bosonic fields. In both the IIA and IIB theories
there are also fields in the NS-R and R-NS sectors corresponding to
space-time fermionic fields.
Until recently, the role of the R-R fields in string theory was rather
unclear. In one of the most important papers in the recent
string revolution \cite{Polchinski}, however, it was pointed out by Polchinski
that D-branes are charge carriers for these fields. Generally, a
Dirichlet $p$-brane couples to the R-R $(p + 1)$-form field through a
term of the form
\begin{equation}
\mu_p \int_{\Sigma_{(p + 1)}}A^{(p + 1)}
\end{equation}
where the integral is taken over the $(p + 1)$-dimensional
world-volume of the $p$-brane.
In type IIA theory there are Dirichlet $p$-branes with $p= 0, 2, 4, 6,
8$ and in type IIB there can be Dirichlet $p$-branes with $p = -1, 1,
3, 5,7, 9$. The D-branes with $p > 3$ couple to the duals of the R-R
fields, and are thus magnetically charged under the corresponding R-R
fields. For example, a Dirichlet 6-brane, with a 7-dimensional
world-volume, couples to the 7-form whose 8-form field strength is the
dual of the 2-form field strength of $A^{(1)}$. Thus, the Dirichlet
6-brane is magnetically charged under the R-R vector field in IIA
theory. The story is slightly more complicated for the Dirichlet
8-brane and 9-brane \cite{Polchinski-TASI}; however, 8-branes and
9-branes will not appear in these lectures in any significant way.
In addition to the Dirichlet $p$-branes which appear in type IIA and
IIB string theory, there are also solitonic NS-NS 5-branes which
appear in both theories, which are magnetically charged under the
NS-NS two-form field $B_{\mu \nu}$. In the remainder of these notes
$p$-branes which are not explicitly stated to be Dirichlet or NS-NS
are understood to be Dirichlet $p$-branes; we will also sometimes
use the notation D$p$-brane to denote a $p$-brane of a particular dimension.
It is interesting to see how the dynamical degrees of freedom of a
D-brane arise from the massless string spectrum in a fixed D-brane
background \cite{dlp}. In the presence of a D-brane, the open string
vector field $A_\mu$ decomposes into components parallel to and
transverse to the D-brane world-volume.
Because the endpoints of the strings are tied to the world-volume of
the brane, we can interpret these massless fields in terms of a
low-energy field theory on the D-brane world-volume. The $p + 1$ parallel
components of $A_{\mu}$ turn into a $U(1)$ gauge field $A_{\alpha}$ on
the world-volume, while the remaining $9-p$ components appear as
scalar fields $X^a$. The fields $X^a$ describe fluctuations of the
D-brane world-volume in transverse directions. In general throughout
these notes we will use $\mu, \nu, \ldots$ to denote 10D indices,
$\alpha, \beta, \ldots$ to denote $(p + 1)$-D indices on a D-brane
world-volume, and $a, b, \ldots$ to denote $(9-p)$-D transverse indices.
One way to learn about the low-energy dynamics of a D-brane is to find
the equations of motion for the D-brane which must be satisfied for
the open string theory in the D-brane background to be conformally
invariant. Such an analysis was carried out by Leigh \cite{Leigh}.
He showed that in a purely bosonic theory, the equations of motion
for a D-brane are precisely those of the action
\begin{equation}
S = - T_p \int d^{p + 1} \xi
\;e^{-\phi} \;\sqrt{-\det (G_{\alpha \beta} + B_{\alpha \beta} + 2 \pi \alpha'
F_{\alpha \beta}) }
\label{eq:DBI}
\end{equation}
where $G$, $B$ and $\phi$ are the pullbacks of the 10D metric,
antisymmetric tensor and dilaton to the D-brane world-volume, while $F$ is the
field strength of the world-volume $U(1)$ gauge field $A_{\alpha}$.
This action can be verified by a perturbative string
calculation \cite{Polchinski-TASI}, which also gives a precise
expression for the brane tension
\begin{equation}
\tau_p =\frac{T_p}{g} =
\frac{1}{g\sqrt{\alpha'}} \frac{1}{ (2 \pi \sqrt{\alpha'})^{p}}
\end{equation}
where $g = e^{\langle \phi \rangle}$ is the string coupling, equal to
the exponential
of the dilaton expectation value, and $\alpha'$ is related to the
string tension through
\begin{equation}
\frac{1}{2 \pi \alpha'} = T_{{\rm string}}.
\end{equation}
The inverse string coupling appears because the leading string diagram
which contributes to the action (\ref{eq:DBI}) is a disk diagram.
In the full supersymmetric string theory, the action (\ref{eq:DBI})
must be extended to a supersymmetric Born-Infeld type action. In
addition, there are Chern-Simons type terms coupling the D-brane gauge
field to the R-R fields, of which the leading term is the $\int A^{(p
+ 1)}$ term discussed above; we will discuss these terms in more
detail later in these notes.
If we make a number of simplifying assumptions, the form of the action
(\ref{eq:DBI}) simplifies considerably. First, let us assume that the
background ten-dimensional space-time is flat, so that $g_{\mu \nu} =
\eta_{\mu \nu}$ (we use a metric with signature $-++ \cdots ++$).
Further, let us assume that the D-brane is approximately flat and that
we can identify the world-volume coordinates on the D-brane with $p+1$
of the ten-dimensional coordinates (the static gauge assumption).
Then, the pullback of the metric to the D-brane world-volume becomes
\begin{equation}
G_{\alpha \beta} \approx \eta_{\alpha \beta} + \partial_\alpha X^a
\partial_\beta X^a+{\cal O} \left((\partial X)^4 \right)
\end{equation}
If we make the further assumptions that $B_{\mu \nu}$ vanishes, and
that $2 \pi \alpha' F_{\alpha \beta}$ and $\partial_\alpha X^a$ are
small and of the same order, then we see that the low-energy D-brane
world-volume action becomes
\begin{equation}
S =-\tau_pV_p -\frac{1}{4g_{{\rm YM}}^2}
\int d^{p + 1} \xi
\left(F_{\alpha \beta} F^{\alpha \beta} +\frac{2}{(2 \pi \alpha')^2}
\partial_\alpha X^a \partial^\alpha X^a\right) +{\cal O} (F^4)
\label{eq:action-expansion}
\end{equation}
where $V_p$ is the $p$-brane world-volume and the coupling $g_{\rm YM}$ is given
by
\begin{equation}
g_{\rm YM}^2 = \frac{1}{4 \pi^2 \alpha'^2 \tau_p}
= \frac{g}{\sqrt{\alpha'}} (2 \pi \sqrt{\alpha'})^{p-2}
\label{eq:ym-coupling}
\end{equation}
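Two consistency checks on these formulas can be scripted (our own illustration, not from the original notes). For $p=1$ in static gauge, with one transverse scalar and $a = \partial_0 X$, $b = \partial_1 X$, $c = 2\pi\alpha' F_{01}$, the DBI determinant can be evaluated exactly and compared with the quadratic truncation leading to (\ref{eq:action-expansion}); and the two expressions for $g_{\rm YM}^2$ in (\ref{eq:ym-coupling}) can be compared numerically:

```python
import math

# (1) For p = 1, eta = diag(-1, 1):
#   -det(eta + dX dX + f) = 1 - a^2 + b^2 - c^2  (exact in this case),
# so sqrt(-det) = 1 + (-a^2 + b^2 - c^2)/2 + O(fields^4),
# reproducing the quadratic scalar and gauge-field terms of the action.
def sqrt_minus_det(a, b, c):
    m00, m01 = -1.0 + a * a, a * b + c
    m10, m11 = a * b - c, 1.0 + b * b
    return math.sqrt(-(m00 * m11 - m01 * m10))

def quadratic(a, b, c):
    return 1.0 + (-a * a + b * b - c * c) / 2.0

for eps in (0.1, 0.05):
    diff = abs(sqrt_minus_det(0.3 * eps, 0.5 * eps, 0.2 * eps)
               - quadratic(0.3 * eps, 0.5 * eps, 0.2 * eps))
    print(eps, diff)  # shrinks like eps^4

# (2) g_YM^2 = 1/(4 pi^2 alpha'^2 tau_p) = (g/sqrt(alpha'))(2 pi sqrt(alpha'))^{p-2}
g, alpha = 0.31, 0.73  # arbitrary sample values
for p in range(7):
    tau_p = 1.0 / (g * math.sqrt(alpha) * (2.0 * math.pi * math.sqrt(alpha)) ** p)
    lhs = 1.0 / (4.0 * math.pi ** 2 * alpha ** 2 * tau_p)
    rhs = (g / math.sqrt(alpha)) * (2.0 * math.pi * math.sqrt(alpha)) ** (p - 2)
    assert abs(lhs / rhs - 1.0) < 1e-12
```

The quartic falloff of the difference in (1) confirms that the discarded terms are ${\cal O}(F^4)$, as indicated in (\ref{eq:action-expansion}).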
The second term in (\ref{eq:action-expansion}) is essentially just the
action for a $U(1)$ gauge theory in $p + 1$ dimensions with $9-p$
scalar fields. In fact, after including fermionic fields $\psi$
the low-energy action for a D-brane becomes precisely the
supersymmetric $U(1)$ Yang-Mills theory in $p + 1$ dimensions which
arises from dimensional reduction of the $U(1)$ Yang-Mills theory in
10 dimensions with ${\cal N} = 1$ supersymmetry. The action of this
10D theory is
\begin{equation}
S = \frac{1}{g_{\rm YM}^2} \int d^{10}\xi \; \left(
-\frac{1}{4} F_{\mu \nu}F^{\mu \nu}
+ \frac{i}{2} \bar{\psi} \Gamma^\mu \partial_{\mu} \psi \right)
\label{eq:SYM1}
\end{equation}
In the next section we will discuss supersymmetric Yang-Mills theories
of this type in more detail. To conclude this introductory
discussion let us consider briefly the situation where we have a number of
distinct D-branes.
\begin{figure}
\vspace{-0.3in}
\psfig{figure=parallel.eps,height=1.5in}
\vspace{-0.3in}
\caption[x]{\footnotesize $U(N)$ fields $A_{ij}$ arise from strings stretched
between multiple D-branes}
\label{f:parallel-branes}
\end{figure}
In particular, let us imagine that we have $N$ parallel D-branes of
the same dimension, as depicted in Figure~\ref{f:parallel-branes}. We
label the branes by an index $i$ running from $1$ to $N$. There are
massless fields living on each D-brane world-volume, corresponding to
a gauge theory with total gauge group $U(1)^N$. In addition, however,
we expect fields to arise corresponding to strings stretching between
each pair of branes. These fields $A^{\mu}_{ij}$ carry 10D indices
$\mu$ as well as a pair of indices $i, j$ indicating which branes are
at the endpoints of the strings. Because the strings are oriented,
there are $N^2 -N$ such fields (counting a vector $A_{\mu} =
(A_{\alpha}, X^a)$ as a single field). The mass of a field
corresponding to a string
connecting branes $i$ and $j$ is proportional to the distance between
these branes. It was pointed out by Witten \cite{Witten-bound} that
as the D-branes approach each other and the stretched strings become
massless, the fields arrange themselves precisely into the
gauge field components and adjoint scalars of a supersymmetric $U(N)$ gauge
theory in $p + 1$ dimensions. Generally, such a super Yang-Mills
theory is described by the reduction to $p + 1$ dimensions of a 10D
non-abelian Yang-Mills theory where all fields are in
the adjoint representation of $U(N)$.
Thus, we see that with a number of simplifying assumptions, the
low-energy field theory describing a system of parallel D-branes is
simply a supersymmetric Yang-Mills (SYM) field theory. In the
following we will use SYM theory as the starting point from which to
analyze aspects of D-brane physics.
\section{D-branes and Super Yang-Mills Theory}
The previous section contained a fairly abbreviated discussion of the
string theory description of D-branes. The most significant
part of this description for the purposes of these lectures is the
following statement, which we will treat as axiomatic in most of the sequel:
\vspace{0.07in}
\noindent {\bf Starting point:}
The low-energy physics of $N$ Dirichlet $p$-branes living in
flat space is described in static gauge by the dimensional reduction to
$p + 1$ dimensions of
${\cal N} = 1$ SYM in 10D.
\vspace{0.05in}
In this section we fill in some of the details of this theory in ten
dimensions, and describe explicitly the dimensionally
reduced theory in the case of 0-branes, $p = 0$.
\subsection{10D super Yang-Mills}
Ten-dimensional $U(N)$ super Yang-Mills theory has the action
\begin{equation}
S =\int d^{10}\xi \; \left( -\frac{1}{4} {\rm Tr}\; F_{\mu \nu}F^{\mu \nu}
+\frac{i}{2} {\rm Tr}\; \bar{\psi} \Gamma^\mu D_{\mu} \psi \right)
\label{eq:SYM}
\end{equation}
where the field strength
\begin{equation}
F_{\mu \nu} = \partial_\mu A_\nu -\partial_\nu A_\mu -i g_{\rm YM}[A_\mu, A_\nu]
\end{equation}
is the curvature of a $U(N)$ hermitian gauge field $A_\mu$.
The fields $A_\mu$ and $\psi$ are both in the adjoint representation
of $U(N)$ and carry adjoint indices which we will generally suppress.
The covariant derivative $D_\mu$ of $\psi$ is given by
\begin{equation}
D_\mu \psi = \partial_\mu \psi -i g_{\rm YM}[A_\mu, \psi]
\end{equation}
where $g_{{\rm YM}}$ is the Yang-Mills coupling constant.
$\psi$ is a 16-component Majorana-Weyl spinor of $SO(9,1)$.
The action (\ref{eq:SYM}) is invariant under the supersymmetry transformation
\begin{eqnarray}
\delta A_\mu & = & \frac{i}{2} \bar{\epsilon} \Gamma_\mu \psi \\
\delta \psi & = & -\frac{1}{4} F_{\mu \nu} \Gamma^{\mu \nu} \epsilon
\label{eq:Yang-Mills-SUSY}
\nonumber
\end{eqnarray}
where $\epsilon$ is a Majorana-Weyl spinor. Thus, this
theory has 16 independent supercharges. There are 8 on-shell bosonic
degrees of freedom and 8 fermionic degrees of freedom after imposition
of the Dirac equation.
Classically, this ten-dimensional super Yang-Mills action gives a well-defined
field theory. The theory is anomalous, however, and therefore
problematic quantum mechanically.
It is often convenient to rescale the fields of the Yang-Mills
theory so that the coupling constant only appears as an overall
multiplicative factor in the action. By absorbing a factor of $g_{\rm
YM}$ in $A$ and $\psi$, we find that the action is
\begin{equation}
S =\frac{1}{4g_{\rm YM}^2} \int d^{10}\xi \; \left( - {\rm Tr}\;
F_{\mu \nu}F^{\mu \nu} +2i {\rm Tr}\; \bar{\psi} \Gamma^\mu
D_{\mu} \psi \right)
\end{equation}
where the covariant derivative is given by
\begin{equation}
D_\mu = \partial_\mu -iA_\mu.
\end{equation}
\subsection{Dimensional reduction of super Yang-Mills}
The ten-dimensional super Yang-Mills theory described in the previous
subsection can be used to construct a super Yang-Mills theory in $p +
1$ dimensions with 16 supercharges by the simple process of
dimensional reduction. This is done by assuming that all fields are
independent of coordinates $p + 1, \ldots, 9$. After dimensional
reduction, the 10D field $A_\mu$ decomposes into a $(p +
1)$-dimensional gauge field $A_\alpha$ and $9-p$ adjoint scalar fields
$X^a$. The action of the dimensionally reduced theory takes the form
\begin{equation}
S = \frac{1}{4g_{\rm YM}^2} \int d^{p + 1} \xi \;
{\rm Tr}\;(-F_{\alpha \beta} F^{\alpha \beta}-2(D_\alpha X^a)^2
+[X^a, X^b]^2
+{\rm fermions}).
\label{eq:reduced}
\end{equation}
As discussed in Section \ref{sec:introduction}, this is precisely the
action describing the low-energy dynamics of $N$ coincident Dirichlet
$p$-branes in static gauge (although there the fields $X$ and $\psi$
are normalized
by the factor $X \rightarrow X/(2 \pi \alpha')$,
$\psi \rightarrow \psi/(2 \pi \alpha')$). The field
$A_\alpha$ is the gauge field on the D-brane world-volume, and the
fields $X^a$ describe transverse fluctuations of the D-branes. Let us
comment briefly on the signs of the terms in the action
(\ref{eq:reduced}). We would expect kinetic terms to appear with a
positive sign and potential terms to appear with a negative sign.
Because the metric we are using has a mostly positive signature, the
kinetic terms have a single raised 0 index corresponding to a change
of sign, so the kinetic terms indeed have the correct sign. The
commutator term $[X^a, X^b]^2$ which acts as a potential term is
actually negative definite. This follows from the fact that $[X^a,
X^b]^{\dagger} = [X^b, X^a]= -[X^a, X^b]$. Thus, as expected, kinetic
terms in the action are positive while potential terms are
negative.
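The sign claim for the commutator term can be verified concretely for small Hermitian matrices; a minimal sketch (ours, plain-Python $2\times 2$ complex arithmetic):

```python
# Tr([X,Y]^2) <= 0 for Hermitian X, Y: the commutator is anti-Hermitian,
# so its square is negative semi-definite.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

X = [[1.0, 2.0 + 1.0j], [2.0 - 1.0j, -0.5]]      # Hermitian
Y = [[0.3, -1.0j], [1.0j, 2.0]]                  # Hermitian
t = trace(matmul(comm(X, Y), comm(X, Y)))
print(t)  # real and non-positive

D1 = [[1.0, 0.0], [0.0, 2.0]]                    # commuting (diagonal) matrices
D2 = [[-3.0, 0.0], [0.0, 0.7]]
print(trace(matmul(comm(D1, D2), comm(D1, D2)))) # vanishes identically
```

The second print illustrates why the commuting (diagonal) configurations discussed next sit at the bottom of the potential.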
In order to understand the geometrical
significance of the fields $X^a$ it is useful to consider the field
configurations corresponding to classical vacua of the theory defined
by (\ref{eq:reduced}). A classical vacuum corresponds to a static
solution of the equations of motion where the potential energy of the
system is minimized. This occurs when the curvature $F_{\alpha \beta}$ and the
fermion fields vanish, and in addition the fields $X^a$ are covariantly
constant and commute with one another. When the fields $X^a$ all
commute with one another at each point in the $(p + 1)$-dimensional
world-volume of the branes, the fields can be simultaneously
diagonalized by a gauge transformation, so that we have
\begin{equation}
X^a = \left(\begin{array}{cccc}
x^a_1 & 0 &0 & \ddots\\
0 & x^a_2 &\ddots & 0\\
0 & \ddots & \ddots & 0\\
\ddots & 0 & 0 & x^a_N
\end{array}\right)
\end{equation}
In such a configuration, the $N$ diagonal elements of the matrix $X^a$
can be associated with the positions of the $N$ distinct D-branes in
the $a$-th transverse direction \cite{Witten-bound}. In accord with
this identification, one can easily verify that the masses of the
fields corresponding to off-diagonal matrix elements are precisely
given by the distances between the corresponding branes.
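This statement can be illustrated in a minimal two-brane example (a numerical sketch in units where $2 \pi \alpha' = 1$; the sample positions are arbitrary). Expanding the commutator potential about a diagonal background, the quadratic term for an off-diagonal fluctuation has a coefficient equal to the squared separation:

```python
import numpy as np

# Two D-branes at transverse positions x1, x2 in a single direction:
x1, x2 = 0.7, 2.2
X = np.diag([x1, x2])

# An off-diagonal fluctuation (a mode of a string stretched between them):
E = np.zeros((2, 2))
E[0, 1] = 1.0
delta = E + E.T

# The quadratic potential -Tr [X, delta]^2 defines the mass^2 of the mode:
C = X @ delta - delta @ X
mass_sq = -np.trace(C @ C).real / np.trace(delta @ delta).real

assert np.isclose(mass_sq, (x1 - x2) ** 2)   # mass^2 = (separation)^2
```

With the $\alpha'$ factors restored, the mass is the separation times the string tension, as expected for a stretched string.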

From this discussion, we see that the moduli space of classical vacua
for the $(p + 1)$-dimensional field theory arising from dimensional
reduction of 10D SYM is given by
\begin{equation}
\frac{({\bb R}^{9-p})^N}{S_N}
\end{equation}
The factors of ${\bb R}$ correspond to positions of the $N$ D-branes in
the $(9-p)$-dimensional transverse space. The symmetry group $S_N$ is
the residual Weyl symmetry of the gauge group. In the D-brane
language this corresponds to a permutation symmetry acting on the
D-branes, indicating that the D-branes should be treated as
indistinguishable objects.
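The quotient by $S_N$ can be made concrete numerically: commuting Hermitian matrices built by a common unitary rotation of diagonal position matrices can only be decoded into brane positions as unordered sets of eigenvalues. The sketch below uses arbitrary sample positions for $N = 3$ branes in two transverse directions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
positions = np.array([[0.0, 1.0, 5.0],     # x^1_i for the three branes
                      [2.0, -1.0, 0.5]])   # x^2_i

# Commuting Hermitian matrices: a common unitary rotation of the diagonals.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
Xs = [U @ np.diag(p) @ U.conj().T for p in positions]

# The matrices commute, so they are simultaneously diagonalizable...
assert np.allclose(Xs[0] @ Xs[1], Xs[1] @ Xs[0])

# ...and each spectrum recovers the positions only as an unordered set,
# reflecting the residual Weyl (permutation) symmetry S_N:
for X, p in zip(Xs, positions):
    assert np.allclose(np.sort(np.linalg.eigvalsh(X)), np.sort(p))
```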

As pointed out by Witten \cite{Witten-bound}, a remarkable feature of
this description of D-branes is that an interpretation of a D-brane
configuration in terms of classical geometry can only be given when
the matrices $X^a$ are simultaneously diagonalizable. In a generic
configuration, the positions of the D-branes are only roughly
describable through the spectrum of eigenvalues of the $X$ matrices.
This gives a natural and simple mechanism for the appearance of a
noncommutative geometry at short distances where the D-branes cease to
have well-defined positions according to classical commutative
geometry.
\junk{
{}
\subsection{Example: 3-branes}
We will now consider several special cases of the low-energy
world-volume SYM theory for D-branes of particular dimensions. In
this subsection we discuss the world-volume theory for 3-branes. As
discussed above, the low-energy theory describing the dynamics of $N$
3-branes in a flat ten-dimensional space-time is the dimensional
reduction of ${\cal N} = 1$ 10D SYM to four dimensions. In this
theory the ten-dimensional vector field $A_\mu$ decomposes into 6
transverse scalars $X^a$, and a 4-dimensional gauge field $A_\alpha$.
This theory has 16 supercharges, and therefore has ${\cal N} = 4$
extended supersymmetry in four dimensions. ${\cal N} = 4$ super
Yang-Mills theory in 4D has been studied extensively for many years;
for a more detailed discussion of such theories consult one of the
review papers on the subject, such as \cite{Sohnius}.
We confine ourselves here to a brief discussion of the relationship
between the usual formulation of this theory in
terms of ${\cal N} = 1$ superfields and the form (\ref{eq:reduced}).
Because there is no simple way to describe theories with higher
extended supersymmetries in 4D in terms of superfields, it is
conventional to discuss the ${\cal N} = 4$ theory in the language
of ${\cal N} = 1$ superfields. In general, recall \cite{Wess-Bagger}
that an ${\cal N} = 1$ super Yang-Mills theory in 4D with gauge group
$G = U(N)$ has a Lagrangian
\begin{equation}
{\cal L} = \int d^2 \theta d^2 \bar{\theta} \;
\Phi^{\dagger} e^{V_a T^a} \Phi +
\frac{1}{4 g^2} \left( \int d^2 \theta\; {\rm Tr}\; (W^\alpha W_\alpha)
+ \int d^2 \theta \; {\cal W} (\Phi) + {\rm h.c.} \right)
\end{equation}
where $V$ is a vector superfield in the adjoint representation of $G$,
$\Phi$ are chiral matter superfields in arbitrary representations of
$G$, $T^a$ are generators of the algebra of $G$ in the appropriate
representation(s),
the superpotential ${\cal W}$ is a holomorphic function of the
chiral fields and
\begin{equation}
W_\alpha = -\frac{1}{4} \bar{D} \bar{D} D_\alpha V
\end{equation}
In ${\cal N} = 1$ language, the ${\cal N} = 4$ theory has chiral
superfields $\phi, B$ and $C$, all in the adjoint representation,
with a superpotential
\begin{equation}
{\cal W} ={\rm Tr}\; \phi[B, C].
\end{equation}
The six bosonic components of the chiral
superfields can be directly identified with the transverse degrees of
freedom of the 3-branes through
\begin{equation}
\phi = X^4 + i X^5, B = X^6 + i X^7, C = X^8 + i X^9
\end{equation}
The potential for the bosonic fields in the SYM theory can be
calculated in the ${\cal N} = 1$ formalism in the usual fashion by
integrating out the auxiliary fields giving D-terms of the form
$(\Phi^{\dagger} T^a \Phi)^2$ and adding the contribution $|
\partial{\cal W} |^2$ from the superpotential. For the ${\cal N} = 4$
theory, the pieces of the potential are given by
\begin{equation}
{\rm Tr}\;(D^a)^2 = \frac{1}{4} {\rm Tr}\; (|[\phi, \phi^{\dagger}]|^2 +
|[B, B^{\dagger}]|^2 +
|[C, C^{\dagger}]|^2)
\end{equation}
and
\begin{equation}
\sum | \frac{\partial {\cal W}}{\partial \Phi^i} |^2
= {\rm Tr}\; \left(|[B, C]|^2 +|[\phi, B]|^2 + |[\phi, C]|^2 \right)
\end{equation}
which give a total of
\begin{equation}
V = -\sum_{i < j} [X^i, X^j]^2.
\end{equation}
This term corresponds to the bosonic potential in the action
(\ref{eq:reduced}). This is of course expected, as we have two
different descriptions of the same theory. The point of this
discussion is to demonstrate the correspondence between the fields in
the two descriptions.
}
\subsection{Example: 0-branes}
\label{sec:0-branes}
We now consider an explicit example of the dimensionally reduced
theory, that of the low-energy action of pointlike 0-branes. This
system will be of central importance in the later sections on Matrix
theory. As discussed above, the low-energy theory describing the
dynamics of $N$ 0-branes in a flat ten-dimensional space-time is the
dimensional reduction of ${\cal N} = 1$ 10D SYM to one space-time
dimension. In the dimensionally reduced theory the ten-dimensional
vector field $A_\mu$ decomposes into 9 transverse scalars $X^a$, and a
1-dimensional gauge field $A_0$. This theory has 16 supercharges, and
is therefore an ${\cal N} = 16$ supersymmetric matrix quantum
mechanics theory. If we choose a gauge where the gauge field $A_0$
vanishes, then the Lagrangian for this theory is given by
\begin{eqnarray}
{\cal L} &=& \frac{1}{2 g \sqrt{\alpha'}} {\rm Tr}\; \left[
\dot{X}^a \dot{X}_a
+\frac{1}{ (2 \pi \alpha')^2}
\sum_{ a < b}[X^a, X^b]^2 \right. \label{eq:super-qm}\\
& &\hspace{1in}+ \left.
\frac{1}{2 \pi \alpha'} \theta^T i\dot{\theta}
- \frac{1}{ (2 \pi \alpha')^2}\theta^T \Gamma_a[X^a, \theta] \right]\nonumber
\end{eqnarray}
Each of the nine adjoint scalar matrices $X^a$ is a
hermitian $N \times N$ matrix, where $N$ is the number of 0-branes.
The superpartners of the $X$ fields are 16-component spinors $\theta$
which transform under the $SO(9)$ Clifford algebra given by the
$16 \times 16$ matrices $\Gamma^a$. This theory was discussed many
years before the development of
D-branes \cite{Claudson-Halpern,Flume,brr};
a more detailed discussion of this theory in the D-brane context can
be found in \cite{dfs,Kabat-Pouliot,DKPS}.

The classical static solutions of this theory are found by minimizing
the potential, which occurs when $[X^a, X^b]= 0$ for all $a, b$.
As discussed in the general case, when the matrices can be
simultaneously diagonalized their
diagonal elements can be interpreted geometrically as the coordinates
of the $N$ 0-branes. The
classical configuration space of $N$ 0-branes is therefore given by
\begin{equation}
\frac{({\bb R}^9)^N}{S_N}
\end{equation}
which is the configuration space of $N$ identical particles moving in
Euclidean 9-dimensional space.
For a general configuration, the matrices cannot be diagonalized and the
off-diagonal elements only have a geometrical interpretation in terms
of a noncommutative geometry.

Note that for the 0-brane Yang-Mills theory, the reduction of the
original Born-Infeld theory is simpler than in higher dimensional
cases. The only assumptions necessary to derive the 0-brane
Yang-Mills theory are that the background metric is flat and that the
velocities of the 0-branes are small. Thus, the super Yang-Mills 0-brane
theory is essentially the nonrelativistic limit of the Born-Infeld
0-brane theory. In the case of 0-branes, the assumption of static
gauge reduces to the assumption that there are no anti-0-branes in the
system.
\section{D-branes and Duality}
\label{sec:duality}
One of the most remarkable features of string theory is the intricate
network of duality symmetries relating the different consistent string
theories \cite{Hull-Townsend,Witten-various}. Such dualities
relate each of the five known superstring theories to one another and
to 11-dimensional supergravity.
Some duality symmetries, such as the T-duality symmetry which relates
type IIA to type IIB, are perturbative symmetries; other dualities,
such as the S-duality symmetry of type IIB, are nonperturbative
symmetries which can take theories with a strong coupling to weakly
coupled theories.

Historically, D-branes were first studied using T-duality symmetry on
the string world-sheet \cite{dlp}. In this section we invert
the historical sequence of development and study duality symmetries
from the point of view of the low-energy field theories of D-branes.
We first discuss T-duality from the D-brane point of view. We show
that without making reference to the string theory structure from
which it arose, the low-energy super Yang-Mills theory of $N$ D-branes
admits a T-duality symmetry when compactified on the torus. We then
discuss S-duality of the IIB theory, which corresponds to super
Yang-Mills S-duality on the 3-brane world-volume.
\subsection{T-duality in super Yang-Mills theory}
\label{sec:T-duality}
Before deriving T-duality from the point of view of super Yang-Mills
theory, we briefly review what we expect of the type II T-duality
symmetry from string theory \cite{Polchinski-TASI}. T-duality is a
symmetry of type II string theory after one spatial dimension has been
compactified. Let us compactify $X^9$ on a circle of radius $R_9$,
giving a space-time ${\bb R}^9 \times S^1$. After such a
compactification, T-duality maps type IIA string theory compactified
on a circle of radius $R_9$ to type IIB string theory compactified on
a circle of radius $\hat{R}_9= \alpha'/R_9$.

On a string world-sheet, T-duality maps Neumann boundary conditions on
the bosonic field $X^9$ to Dirichlet boundary conditions and vice
versa. Thus, for a fixed string background, T-duality maps
a $p$-brane to a $(p \pm 1)$-brane, where a brane originally wrapped
around the $X^9$ dimension is unwrapped by T-duality and vice versa.
This result in the context of perturbative string theory indicates
that we would expect the low-energy field theory of a system of
$p$-branes which are unwrapped in the transverse direction $X^9$ to be
equivalent to a field theory of $(p +1)$-branes wrapped on a dual circle
$\hat{X}^9$. We now proceed to prove this result in a precise fashion,
using only the properties of the low-energy super Yang-Mills theory.
For the bulk of this subsection we set $2 \pi \alpha' = 1$ for
convenience; constants are restored in the formulas at the end
of the discussion. The arguments described in this subsection
originally appeared in
\cite{WT-compact,BFSS,grt}.
\vspace{0.08in}
\noindent
{\it 3.1.1\ \ \ 0-branes on a circle}
\vspace{0.05in}
In order to simplify the discussion we begin with the simplest case,
corresponding to $N$ 0-branes moving on a space ${\bb R}^8 \times
S^1$. The generalization to higher dimensional branes and to
T-dualities in multiple dimensions is straightforward and will be
discussed later.

As described in Section~\ref{sec:0-branes}, a system of $N$ 0-branes
moving in flat space ${\bb R}^9$ has a low-energy description in terms of
a supersymmetric matrix quantum mechanics. The matrices in this
theory are $N \times N$ matrices, and the theory has 16 supersymmetry
generators. In order to describe the motion of $N$ 0-branes in a
space where one direction is compactified, this theory must be
modified somewhat. A naive approach would be to try to make the
matrices $X^9$ periodic. This cannot be done without
increasing the number of degrees of freedom of the system, however. One simple
way to see this is to note that the off-diagonal matrix elements
corresponding to strings stretching between different 0-branes have
masses proportional to the distance between the branes in the flat
space theory; this feature cannot be implemented in a compact
space without introducing an infinite number of degrees of freedom
corresponding to strings wrapping with an arbitrary homotopy class.

A systematic approach to describing the motion of 0-branes on $S^1$
can be developed along the lines of familiar orbifold techniques. In
general, if we wish to describe the motion of $N$ 0-branes on a space
${\bb R}^9/\Gamma$ which is the quotient of flat space by a discrete group
$\Gamma$, we can simply consider a system of $(N \cdot | \Gamma |)$
0-branes moving on ${\bb R}^9$ and then impose a set of constraints which
dictate that the brane configuration is invariant under the action of
$\Gamma$. This approach was used by Douglas and Moore
\@ifnextchar [{\@tempswatrue\@mycitex}{\@tempswafalse\@mycitex[]}{Douglas-Moore} to study the motion of 0-branes on spaces of the
form ${\bb C}^2/{\bb Z}_k$; the authors showed that on such spaces the moduli
space of 0-brane configurations is modified quantum mechanically to
correspond to smooth ALE spaces. Related work was done in
\cite{Gimon-Polchinski,Johnson-Myers}.

In the case we are interested in here, the study of the motion of
0-branes in terms of a quotient space description is simplified since
there are no fixed points of the space under the action of any element
of the group $\Gamma$. The universal covering space of $S^1$ is
${\bb R}$, where $S^1 ={\bb R}/{\bb Z}$, so we can study 0-branes on $S^1$ by
considering the motion of an infinite family of 0-branes on ${\bb R}$. If
we wish to describe $N$ 0-branes moving on $S^1$, then, we must
consider a family of 0-branes moving on ${\bb R}$ which are indexed by two
integers $n, i$ with $n \in{{\bb Z}}$ and $i \in\{1, \ldots, N\}$ (see
Figure~\ref{f:cover}). This gives us a system described by $U
(\infty)$ matrix quantum mechanics with constraints.
\begin{figure}
\vspace{-0.3in}
\psfig{figure=cover.eps,height=1.5in}
\vspace{-0.5in}
\caption[x]{\footnotesize 0-branes on the cover of $S^1$ are indexed
by two integers}
\label{f:cover}
\end{figure}

The $U (\infty)$ theory describing the 0-branes on the covering space
has a set of matrix degrees of freedom described by fields $X^a_{mi,
nj}$. Such a field corresponds to a string stretching from the $m$th
copy of 0-brane number $i$ to the $n$th copy of 0-brane number $j$.
For simplicity of notation, we will suppress the $i,j$ indices and
write these matrices as infinite matrices whose blocks $X^a_{mn}$ are
themselves $N \times N$ matrices.

The constraint of translation invariance under $\Gamma ={\bb Z}$ imposes
the condition that the theory is invariant under a simultaneous
translation of the $X^9$ coordinate by $2 \pi R_9$ and relabeling of the
indices $n$ by $n + 1$. Mathematically, this condition says that
\begin{eqnarray}
X^a_{mn} & = & X^a_{(m-1)(n-1)},\;\;\;\;\; a < 9 \nonumber\\
X^9_{mn} & = & X^9_{(m-1)(n-1)},\;\;\;\;\; m\neq n\label{eq:constraints}\\
X^9_{nn} & = & 2 \pi R_9 {\bf 1} +X^9_{(n-1)(n-1)}\ . \nonumber
\end{eqnarray}
Note that the matrix added to $X^9_{nn}$ is proportional to the
identity matrix. This is because the translation operation only
shifts the diagonal components of the 0-brane matrices. An easy way
to see this is that after $X^9$ has been diagonalized, its diagonal
elements correspond to the positions of the branes in direction
$X^9$; thus, adding a multiple of the identity matrix shifts the
positions by a constant amount. Since the identity matrix commutes
with everything, this is the correct implementation of the translation
operation even when $X^9$ is not diagonal.
As a result of the constraints (\ref{eq:constraints}), the infinite block
matrix $X^9_{mn}$ can be written in the following form
\begin{equation}
\left(\begin{array}{ccccc}
\ddots& X_{1} & X_{2} & X_{3} & \ddots \\
X_{-1} & X_{ 0} -2 \pi R_9{\bf 1} & X_{1} & X_{2} & X_{3} \\
X_{-2} & X_{-1}&X_0 & X_{1}&X_{2} \\
X_{-3} &X_{ -2} & X_{-1} & X_{0} + 2 \pi R_9{\bf 1} &X_{1} \\
\ddots & X_{ -3} & X_{-2} & X_{ -1} & \ddots
\end{array} \right)
\label{eq:matrix-operator}
\end{equation}
where we have defined $X_k = X^9_{0k}$.
A matrix of this form can be interpreted as a matrix representation of
the operator
\begin{equation}
X^9 = i \hat{\partial} + A (\hat{x})
\end{equation}
describing the action of a gauge covariant derivative
on a Fourier decomposition of functions of the form
\begin{equation}
\phi ( \hat{x}) = \sum_{n} \hat{\phi}_n e^{in \hat{x}/\hat{R}_9}
\end{equation}
which are periodic on a circle of radius $\hat{R}_9 = \alpha'/R_9 = 1/(2 \pi
R_9)$.

In order to see this correspondence concretely, let us first consider
the action of the derivative operator $i \hat{\partial}$ on such a
function. Writing the Fourier components as a column vector
\begin{equation}
\phi (\hat{x}) \rightarrow \left(\begin{array}{c}
\vdots \\
\hat{\phi}_2 \\
\hat{\phi}_1 \\
\hat{\phi}_0 \\
\hat{\phi}_{-1} \\
\hat{\phi}_{-2} \\
\vdots
\end{array} \right)
\end{equation}
we find that the derivative operator acts as the matrix
\begin{equation}
i \hat{\partial}= {\rm diag}
(\ldots, -4 \pi R_9{\bf 1}, -2 \pi R_9{\bf 1}, 0,
2 \pi R_9{\bf 1}, 4 \pi R_9{\bf 1}, \ldots).
\end{equation}
This is precisely the inhomogeneous term along the diagonal of
(\ref{eq:matrix-operator}).
Decomposing the connection $A (\hat{x})$ into Fourier components in turn
\begin{equation}
A (\hat{x}) =\sum_{n} A_n e^{in \hat{x}/\hat{R}_9}
\end{equation}
we find that multiplication of $\phi (\hat{x})$ by $A (\hat{x})$
precisely corresponds in the matrix language to the action of the
remaining part of (\ref{eq:matrix-operator}) on the column vector
representing $\phi (\hat{x})$, where $X_{n} = X_{0n}^9$ is identified
with $A_n$.
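This is just the convolution theorem for Fourier series: in the mode basis, multiplication by $A(\hat{x})$ is a constant-along-diagonals (Toeplitz) matrix. The correspondence can be checked in a finite truncation; the sketch below uses an increasing-index ordering of the Fourier coefficients, which differs from the column vector above only by a relabeling, and arbitrary mode cutoffs:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8                                    # keep Fourier modes n = -M .. M
n = np.arange(-M, M + 1)

# A few low Fourier modes for the connection A and a test function phi:
A_hat = np.zeros(2 * M + 1, dtype=complex)
phi_hat = np.zeros(2 * M + 1, dtype=complex)
A_hat[M - 2:M + 3] = rng.normal(size=5) + 1j * rng.normal(size=5)
phi_hat[M - 2:M + 3] = rng.normal(size=5) + 1j * rng.normal(size=5)

# Toeplitz matrix T_{mk} = A_{m-k}: multiplication by A(x) in mode space.
T = np.zeros((2 * M + 1, 2 * M + 1), dtype=complex)
for i, m in enumerate(n):
    for j, k in enumerate(n):
        if abs(m - k) <= M:
            T[i, j] = A_hat[M + m - k]

# Fourier coefficients of the pointwise product A(x) phi(x) by convolution;
# keep the modes -M .. M (exact here, since only low modes are populated):
product_modes = np.convolve(A_hat, phi_hat)[M:3 * M + 1]
assert np.allclose(T @ phi_hat, product_modes)
```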
This shows that we can identify
\begin{equation}
X^9 \sim i \hat{\partial}^9 + \hat{A}^9
\label{eq:relation1}
\end{equation}
under T-duality in the compact direction. This identification
demonstrates that the infinite number of degrees of freedom in the
matrix $X^9$ of a constrained $U (\infty)$ Matrix theory describing
$N$ 0-branes on ${\bb R}^8 \times S^1$ can be precisely packaged in the
degrees of freedom of a $U(N)$ connection on a dual circle $\hat{S}^1$
of radius $\hat{R}_9 = 1/(2 \pi R_9)$. A similar correspondence
exists for the transverse degrees of freedom $X^a$, $a < 9$, and for
the fermion fields $\psi$. Because these fields are unchanged under
the translation symmetry, the infinite matrices by which they are
described in the 0-brane language satisfy condition
(\ref{eq:constraints}) without the inhomogeneous term. Thus, these
degrees of freedom simply become $N \times N$ matrix fields
living on the dual $\hat{S}^1$ whose Fourier modes correspond to the winding
modes of the original 0-brane fields.

This construction gives a precise correspondence between the degrees
of freedom of the supersymmetric Matrix theory describing $N$ 0-branes
moving on ${\bb R}^8 \times S^1$ and the $(1 + 1)$-dimensional super
Yang-Mills theory on the dual circle. To show that the theories
themselves are equivalent it only remains to check that the Lagrangian
of the 0-brane theory is taken to the super Yang-Mills Lagrangian
under this identification. In fact, this is quite easy to verify.
Considering first the commutator terms, the term
\begin{equation}
{\rm Tr}\;[X^a, X^b]^2 \;\;\;\;\; (a, b \neq 9)
\end{equation}
in the 0-brane Matrix theory turns into the term
\begin{equation}
\frac{1}{2 \pi \hat{R}_9} \int d\hat{x} \; {\rm Tr}\;[X^a, X^b]^2
\end{equation}
of 2D super Yang-Mills. Note that the trace in the 0-brane theory is
a trace over the infinite index $n \in{\bb Z}$ as well as over $i \in\{1,
\ldots, N\}$. The trace over $n$ has the effect of extracting the
Fourier zero mode of the corresponding product of fields in the dual
theory. The factor of $R_9 = 1/(2 \pi \hat{R}_9)$ in front of the integral in
the 2D super Yang-Mills is needed to normalize the zero mode so that
it integrates to unity. Technically, there should be a factor of $1/|
\Gamma |$ multiplying the 0-brane matrix Lagrangian because of the
multiplicity of the copies; this factor is canceled by an overall
factor of $| \Gamma |$ from the trace, and since both factors are
infinite we simply drop them from all equations for convenience.

Now let us consider the commutator term when one of the matrices is
$X^9$. In this case we have
\begin{equation}
{\rm Tr}\;[X^9, X^a]^2
\end{equation}
which becomes after the replacement (\ref{eq:relation1})
\begin{equation}
- \left(\frac{1}{2 \pi \hat{R}_9} \right)
\int d\hat{x} \; {\rm Tr}\; (\partial_9 X^a-i[A_9, X^a])^2
=- \left(\frac{1}{2 \pi \hat{R}_9} \right)
\int d\hat{x} \; {\rm Tr}\;(D_9X^a)^2
\end{equation}
which is precisely the derivative squared term for the adjoint scalars
which we expect in the dual 1-brane theory.
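The intermediate step is worth spelling out. With the identification (\ref{eq:relation1}), acting on an adjoint field,

```latex
[X^9, X^a] \;\longrightarrow\; [\,i\hat{\partial}_9 + A_9,\, X^a\,]
 = i\,\partial_9 X^a + [A_9, X^a]
 = i\left(\partial_9 X^a - i\,[A_9, X^a]\right)
 = i\,D_9 X^a\,,
```

so that ${\rm Tr}\,[X^9, X^a]^2 = -\,{\rm Tr}\,(D_9 X^a)^2$, accounting for the relative sign above.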
The kinetic term for $X^9$ in the 0-brane theory becomes
the Yang-Mills curvature squared term in the dual theory
\begin{equation}
{\rm Tr}\; (D_0 X^9)^2 \rightarrow
\left(\frac{1}{2 \pi \hat{R}_9} \right)
\int d\hat{x} \; {\rm Tr}\;F_{09}^2\ .
\end{equation}
The remaining terms in the 0-brane Lagrangian transform
straightforwardly into precisely the remaining terms expected in a 2D
super Yang-Mills Lagrangian with 16 supercharges. This shows that
there is a rigorous equivalence between the low-energy field theory
description of $N$ 0-branes on ${\bb R}^8 \times S^1$ and the low-energy
field theory description of $N$ 1-branes wrapped around a dual $\hat{S}^1$
in the static gauge. We note again the fact that the Lagrangian in
the dual Yang-Mills theory carries an overall multiplicative factor of
the original radius $R_9$. This fact will play a significant role in
later discussions, particularly in regard to Matrix theory.

The fact that the coupling constant in the dual Lagrangian should
correspond with that of (\ref{eq:ym-coupling}) for a system of
1-branes indicates that under T-duality the string coupling transforms
through $\hat{g} = g \sqrt{\alpha'}/R_9$, which is what we expect from
string theory.
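Note that this transformation, together with $\hat{R}_9 = \alpha'/R_9$, is an involution: applying T-duality twice returns the original coupling and radius. A quick numerical check (all numerical values are arbitrary):

```python
import numpy as np

alpha_p = 0.31                           # alpha' (arbitrary positive value)

def t_dual(g, R):
    """T-duality on (string coupling, compactification radius)."""
    return g * np.sqrt(alpha_p) / R, alpha_p / R

g, R = 1.7, 0.42
g1, R1 = t_dual(g, R)
g2, R2 = t_dual(g1, R1)

# Applying the map twice is the identity:
assert np.isclose(g2, g) and np.isclose(R2, R)
```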
\vspace{0.08in}
\noindent
{\it 3.1.2\ \ \ $p$-branes on a torus $T^d$}
\vspace{0.05in}
So far we have discussed the situation of $N$ 0-branes moving on a
space which has been compactified in a single direction. It is
straightforward to generalize this argument to $p$-branes of arbitrary
dimension moving in a space with any number of compact dimensions. By
carrying out the construction described above for each of the compact
directions in turn, it can be shown that the low-energy theory of $N$
$p$-branes which are completely unwrapped on a torus $T^d$ is
equivalent to the low-energy theory of $N$ $(p + d)$-branes which are
wrapped around the torus, in static gauge. The only new type of term
which appears in the Lagrangian corresponds to a commutator term for
two directions which are both compactified. In the original $p$-brane
theory, such a term would appear as $[X^a, X^b]^2$ (integrated over
the $p$-dimensional volume of the brane).
After T-duality on the two compact directions this term becomes
\begin{equation}
-\left(\frac{1}{4 \pi^2 \hat{R}_a \hat{R}_b} \right)
\int d\hat{x}^a d\hat{x}^b \; {\rm Tr}\;(F^{ab})^2
\end{equation}
which is just the appropriate Yang-Mills curvature strength squared
term in the dual theory. Note that in the dual theory, the action is
multiplied by a factor of $R_a R_b$, since each compact direction
gives an extra factor of the radius.

As a particular example of compactification on a higher dimensional
torus, we can consider the theory of $N$ 0-branes on
a torus $T^d$. After interpreting the winding modes of each matrix in
terms of Fourier modes of a dual theory, it follows that the
Lagrangian becomes precisely that of super Yang-Mills theory in $d +
1$ dimensions with a Yang-Mills coupling constant $g_{YM}$
proportional to $V^{-1/2}$ where $V$ is the volume of the original
torus $T^d$.
\vspace{0.08in}
\noindent
{\it 3.1.3\ \ \ Further comments regarding T-duality}
\vspace{0.05in}
Throughout this section we have fixed the constant $2 \pi \alpha' =
1$. It will be useful in some of the later discussions to have the
appropriate factors of $\alpha'$ reinstated in (\ref{eq:relation1}).
This is quite straightforward; since $\alpha'$ has units of length
squared, the correct T-duality relation is given by
\begin{equation}
X^a \leftrightarrow (2 \pi \alpha') (i \partial^a + A_a)
\label{eq:T-duality}
\end{equation}
where $X^a$ represents an infinite matrix of fields including winding
strings around a compactified dimension, and $A$ represents a
connection on a gauge bundle over the dual circle.

It should be emphasized that this T-duality relation
gives a precise correspondence between winding modes of strings on the
original circle and momentum modes on the dual circle. This is
precisely the association expected from
T-duality in perturbative string theory \cite{dlp}.

So far we have been discussing field configurations in the $X^a$
matrices which correspond in the dual picture to connections on a
$U(N)$ bundle with trivial boundary conditions. In fact, there are
also twisted sectors in the theory corresponding to bundles with
nontrivial boundary conditions. We will now discuss such configurations
briefly. To make the story clear, it is useful to reformulate the
above discussion in a slightly more abstract language.

The constraints (\ref{eq:constraints}) can be formulated by saying
that there exists a translation operator $U$ under which the infinite
matrices $X^a$ transform as
\begin{equation}
U X^a U^{-1} = X^a + \delta^{a9} 2 \pi R_9{\bf 1}.
\label{eq:constraint2}
\end{equation}
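As a sanity check, the untwisted version of this relation can be verified in a finite Fourier truncation of the representation used in Section 3.1.1, where $U$ raises each mode index by one (the radius and cutoff below are arbitrary; units $2 \pi \alpha' = 1$):

```python
import numpy as np

R = 0.8                                  # compactification radius R_9
M = 6
n = np.arange(-M, M + 1)
K = 2 * M + 1

# X^9 with A_9 = 0 in the Fourier basis: i d/dx acts as diag(-2 pi n R_9).
D = np.diag(-2.0 * np.pi * n * R)

# U raises each Fourier index by one; model it by a cyclic shift matrix
# (exact away from the truncation edges).
S = np.roll(np.eye(K), 1, axis=0)

diff = S @ D @ S.T - (D + 2.0 * np.pi * R * np.eye(K))

# U X^9 U^{-1} = X^9 + 2 pi R_9 holds on all interior modes; only the
# single wrap-around entry is a truncation artifact:
assert np.allclose(diff - np.diag(np.diag(diff)), 0.0)
assert np.allclose(np.diag(diff)[1:], 0.0)
```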
This relation is satisfied formally by the operators
\begin{equation}
X^9 = i \partial^9 + A_9, \;\;\;\;\; \;\;\;\;\;
U = e^{2 \pi i\hat{x}^9 R_9}
\end{equation}
which correspond to the solutions discussed above. In this
formulation of the quotient theory, the operator $U$ generates the
group $\Gamma ={\bb Z}$ of covering space transformations. Generally,
when we take a quotient theory of this type, however, there is a more
general constraint which can be satisfied. Namely, the translation
operator only needs to preserve the state up to a gauge
transformation. Thus, we can consider the more general constraint
\begin{equation}
U X^a U^{-1} = \Omega (X^a + \delta^{a9} 2 \pi R_9{\bf 1}) \Omega^{-1}
\label{eq:generalconstraint}
\end{equation}
where $\Omega \in U(N)$ is an arbitrary element of the gauge group.
This relation is satisfied formally by
\begin{equation}
X^9 = i \partial^9 + A_9, \;\;\;\;\; \;\;\;\;\;
U = \Omega \cdot e^{2 \pi i\hat{x}^9 R_9}.
\end{equation}
This is precisely the same type of solution as we have above; however,
there is the additional feature that the translation operator now
includes a nontrivial gauge transformation. On the dual circle
$\hat{S}^1$ this corresponds to a gauge theory on a bundle with a
nontrivial boundary condition in the compact direction 9. Note that
even with such a nontrivial boundary condition, any $U(N)$ bundle over
$S^1$ is topologically trivial.
An example
of the type of boundary condition which might appear would be to take
$\Omega$ to be a permutation in $S_N$. This type of gauge
transformation has the effect in the original 0-brane theory of
switching the labels of the 0-branes on each sheet of covering space.
When translated into the dual gauge theory picture, this corresponds
to a super Yang-Mills theory with a nontrivial boundary condition
$\Omega$ in the compact direction.

A similar story occurs when several directions are compact. In this
case, however, there is a constraint on the translation operators in
the different compact directions. For example, if we have
compactified on a 2-torus in dimensions 8 and 9, the generators $U_8$
and $U_9$ of a general twisted sector must generate a group isomorphic
to ${\bb Z}^2$ and therefore must commute. The condition that these
generators commute can be related to the condition that the boundary
conditions in the dual gauge theory correspond to a well-defined
$U(N)$ bundle over the dual torus. For compactifications in more than
one dimension such boundary conditions can define a topologically
nontrivial bundle. In Section \ref{sec:bundles} we will discuss
nontrivial bundles of this nature in much more detail. It is
interesting to note that this construction can even be generalized to
situations where the generators $U_i$ do not commute. This leads to a
dual theory which is described by gauge theory on a noncommutative
torus \@ifnextchar [{\@tempswatrue\@mycitex}{\@tempswafalse\@mycitex[]}{cds,Douglas-Hull,hww}.
\subsection{S-duality for strings and super Yang-Mills}
\label{sec:S-duality}
The T-duality symmetry we have discussed above is a symmetry of type
II string theory which is essentially perturbative, in the sense that
the string coupling is only changed through multiplication by a
constant. Another remarkable symmetry seems to exist in the type II
class of theories which is essentially nonperturbative; this is the
S-duality symmetry of the type IIB string
\cite{Hull-Townsend,Schwarz-multiplet}. S-duality is a symmetry which
acts according to the group $SL(2,{{\bb Z}})$ on the type IIB theory. At
the level of the low-energy IIB supergravity theory, the dilaton and
axion form a fundamental $SL(2,{\bb Z})$ multiplet, as do the NS-NS and
R-R two-forms. Because the string coupling $g$ is given by the
exponential of the dilaton, this S-duality is a nonperturbative
symmetry which can exchange strong and weak couplings. Because
symmetries in the S-duality group exchange the NS-NS and R-R
two-forms, we can see that S-duality exchanges strings and D1-branes,
and also exchanges D5-branes and NS (solitonic) 5-branes. As there is
only a single four-form in the IIB theory, however, it must be left
invariant under S-duality; it follows that S-duality takes a D3-brane
into another D3-brane.

Since D3-branes are invariant under S-duality, it is interesting to ask
how we can understand the action of S-duality on the low-energy field
theory describing $N$ parallel D3-branes.
This field theory is the dimensional reduction to four dimensions of
ten-dimensional ${\cal N} = 1$ $U(N)$ super Yang-Mills, which is the pure $U(N)$
${\cal N} = 4$ super Yang-Mills
theory in 3 + 1 dimensions. Since the Yang-Mills coupling of this
theory is related to the string coupling through $g_{YM}^2 \sim g$,
the action of S-duality on this super Yang-Mills theory must be a
nonperturbative $SL(2,{\bb Z})$ duality symmetry. In fact, for a number
of years it has been conjectured that 4D super Yang-Mills theory
with ${\cal N} = 4$ supersymmetry has precisely such an S-duality
symmetry. This is a supersymmetric version of the non-abelian
S-duality symmetry proposed originally by Montonen and Olive. We will
now briefly review the basics of this duality symmetry.
Maxwell's equations describe a simple non-supersymmetric $U(1)$ gauge
theory in four dimensions. In the absence of sources, these equations
have a very simple symmetry, which takes the curvature tensor $F$ to
its dual $*F$. This has the effect of exchanging the electric and
magnetic fields in the theory (up to signs). Although this symmetry
is broken when electric sources are introduced, if magnetic sources
are also introduced then the symmetry is maintained when the electric
and magnetic charges are also exchanged.
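In components this can be made explicit. The display below assumes one common sign convention ($F_{0i} = E_i$, $F_{ij} = -\epsilon_{ijk} B_k$); other conventions differ by signs:

```latex
% Under one common convention, F_{0i} = E_i and F_{ij} = -\epsilon_{ijk} B_k.
% The duality map F -> *F then acts on the fields as
\begin{equation}
\vec{E} \rightarrow \vec{B}, \;\;\;\;\; \vec{B} \rightarrow -\vec{E}.
\end{equation}
% In Lorentzian signature ** = -1 on two-forms, so applying the map twice
% gives (E, B) -> (-E, -B) rather than the identity.
```

The fact that the duality squares to $-1$ rather than to the identity foreshadows the relation $S^2 = -1$ in the $SL(2,{\bb Z})$ group discussed below.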
This marvelous symmetry of $U(1)$ gauge field theory seems at first
sight to break down for non-abelian theories with gauge groups like
$U(N)$. It was suggested, however, by Montonen and
Olive \cite{Montonen-Olive} that such a symmetry might be possible for
non-abelian theories if the gauge group $G$ were replaced by a dual
group $\hat{G}$ with a dual weight lattice. Further
work \cite{Witten-Olive,Osborn} indicated that such a non-abelian
duality symmetry would probably only be possible in theories with
supersymmetry, and that the ${\cal N} = 4$ theory was the most likely
candidate. Although there is still no complete proof that the ${\cal
N} = 4$ super Yang-Mills theory in 4D has this S-duality symmetry,
there is a growing body of evidence which supports this conclusion.
The proposed non-abelian S-duality symmetry of 4D super Yang-Mills
acts by the group $SL(2,{\bb Z})$, just as we would expect from string
theory. The (rescaled) Yang-Mills coupling constant and theta angle
can be conveniently packaged into the quantity
\begin{equation}
\tau = \frac{\theta}{2 \pi} + \frac{i}{g_{{\rm YM}}^2}
\end{equation}
which is transformed under
$SL(2,{{\bb Z}})$ by the standard transformation law
\begin{equation}
\tau \rightarrow \frac{a \tau + b}{c \tau + d}
\end{equation}
where $a, b, c, d \in {{\bb Z}}$ with $ad-bc = 1$ parameterize a matrix
\begin{equation}
\left(\begin{array}{cc}
a & b\\c & d
\end{array}\right)
\end{equation}
in $SL(2,{\bb Z})$.
In particular, the group $SL(2,{\bb Z})$ is generated by the transformations
\begin{equation}
\tau \rightarrow \tau + 1
\end{equation}
corresponding to the periodicity of $\theta$, and
\begin{equation}
\tau \rightarrow -1/\tau
\end{equation}
which inverts the coupling and corresponds to strong-weak duality.
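These generators and their relations can be checked concretely; the short sketch below (the sample values of $\tau$ and $g_{\rm YM}^2$ are arbitrary illustrative choices) verifies $S^2 = -1$ and $(ST)^3 = -1$, that the M\"obius action composes as matrix multiplication, and that $S$ inverts the coupling at $\theta = 0$:

```python
import numpy as np

# Matrix generators of SL(2,Z): T corresponds to tau -> tau + 1,
# S corresponds to tau -> -1/tau.
T = np.array([[1, 1], [0, 1]])
S = np.array([[0, -1], [1, 0]])

def mobius(m, tau):
    """Action tau -> (a tau + b)/(c tau + d) of m = [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return (a * tau + b) / (c * tau + d)

# Standard relations: S^2 = -1 and (ST)^3 = -1 (both act trivially on tau).
assert np.array_equal(S @ S, -np.eye(2, dtype=int))
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), -np.eye(2, dtype=int))

# The Mobius action composes as matrix multiplication.
tau = 0.3 + 2.0j
assert np.isclose(mobius(S @ T, tau), mobius(S, mobius(T, tau)))

# S inverts the coupling: at theta = 0, tau = i/g^2 maps to i g^2.
g2 = 0.1                # sample value of g_YM^2
tau0 = 1j / g2
assert np.isclose(mobius(S, tau0), 1j * g2)
```

Note that $S^2 = -1$ acts trivially on $\tau$, consistent with the fact that the duality squared reverses the signs of all charges.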
There is by now a large body of evidence that S-duality is a true
symmetry of ${\cal N} = 4$ super Yang-Mills. However, to date there
is no real proof of S-duality from the point of view of field theory.
One of the strongest pieces of evidence for this duality symmetry is
the fact that the spectrum of supersymmetric bound states of dyons is
invariant under the action of the S-duality group; a detailed proof of
this result and further references can be found in \cite{Ferrari}.
\section{Branes and Bundles}
\label{sec:bundles}
As we discussed in Section 3.1.3, there are different topological
sectors for a system of 0-branes on a torus which correspond in the
dual gauge theory language to nontrivial $U(N)$ bundles over the dual
torus. In fact, these topologically nontrivial configurations of
branes correspond to systems containing not only the original 0-branes
but also branes of higher dimension. In this section we describe in
some detail a general feature of D-branes which amounts to the fact
that the low-energy Yang-Mills theory describing Dirichlet $p$-branes
also contains information about D-branes of both higher and lower
dimensions. Roughly speaking, D-branes of lower dimension can be
described by topologically nontrivial configurations of the $U(N)$
gauge field living on the original $p$-branes, while D-branes of
higher dimension can be encoded in nontrivial commutation relations
between the matrices $X^a$ describing transverse D-brane excitations
in compact directions. In order to make the discussion precise, it
will be useful to begin with a review of nontrivial gauge bundles on
compact manifolds. In these notes we will concentrate primarily on
configurations of D-branes on tori; on general compact spaces the
story is similar but there are some additional subtleties
\cite{bvs,ghm,Cheung-Yin,Minasian-Moore}.
\subsection{Review of vector bundles}
An introductory review of bundles and their relevance for gauge field
theory is given in \cite{egh}. In this section we briefly review some
salient features of bundles and Yang-Mills connections.
Roughly speaking, a (real) vector bundle is a space constructed by
gluing together a copy of a vector space $V ={\bb R}^k$ (called the fiber
space) for each point on a particular manifold $M$ (called the base
manifold) in a smooth fashion. Mathematically speaking, a vector
bundle can be defined by decomposing $M$ into coordinate patches
$U_i$. The vector bundle is locally equivalent to $U_i \times
{\bb R}^k$. When the patches of $M$ are glued together, however, there can
be nontrivial identifications which give the vector bundle a
nontrivial topology. For every pair of intersecting patches $U, V$
there is a transition function between these patches which relates the
fibers at each common point. Such a transition function $\Omega_{UV}$
takes values in a group $G$ called the structure group of the bundle.
The transition function $\Omega_{UV}$ identifies
$(u,f')$ and $(v, f)$ where $u$ and $v$ are points in $U$ and $V$
which represent the same point in $M$, and where the fiber elements
are related through $f' =\Omega_{UV}f$.
In order to describe a well-defined bundle, the transition functions
must obey certain relations called cocycle conditions. For example,
if as in Figure~\ref{f:cocycle} there are three patches $U, V, W$
whose intersection is nonempty, the transition functions between
the three patches must obey the relation
\begin{equation}
\Omega_{UV} \Omega_{VW} \Omega_{WU} = {\rm id}
\end{equation}
where ${\rm id}$ is the identity element in $G$.
This is clearly necessary in order that a point $(u, f)$ in the
intersection region not be identified with any other point in the same
fiber after repeated application of the transition functions.
\begin{figure}
\vspace{-0.3in}
\psfig{figure=regions.eps,height=1.8in}
\vspace{-0.15in}
\caption[x]{\footnotesize Three patches on a manifold $M$ over which a
bundle is defined}
\label{f:cocycle}
\end{figure}
This describes a bundle whose fiber is a vector space. Another type
of bundle, called a principal bundle, has a fiber which is a copy of
the structure group $G$ itself.
A Yang-Mills connection for a gauge theory with gauge group $G$
is associated with a principal bundle with
fiber $G$. Formally speaking, a Yang-Mills connection $A_\mu$ is a
one-form which takes values in the Lie algebra of $G$. A connection
of this type gives a definition of parallel transport in the bundle.
The most
important feature for our purposes is the transformation property of
such a connection under a transition function $\Omega$, which is given by
\begin{equation}
A' = \Omega \cdot A \cdot \Omega^{-1} -i\; d\Omega \cdot \Omega^{-1}
\end{equation}
Generally, a physical theory will include both a Yang-Mills field and
additional matter fields. The Yang-Mills connection is defined with
respect to a particular principal bundle, and the matter fields are
given by sections of associated vector bundles whose transition
functions are given by particular representations of the $G$-valued
transition functions of the principal bundle.
Over any compact Euclidean manifold, such as the torus $T^{d}$, there
are many topologically inequivalent ways to construct a nontrivial
bundle. One way to distinguish such inequivalent bundles is through
the use of topological invariants called characteristic classes. Among
the simplest examples of characteristic classes are the Chern
classes. These classes distinguish topologically inequivalent
$U(N)$ bundles, and are given by invariant polynomials in the Yang-Mills
field strength
\begin{equation}
F_{\mu \nu} = \partial_\mu A_\nu -\partial_\nu A_\mu -i[A_\mu, A_\nu].
\end{equation}
The first two Chern classes are defined by
\begin{eqnarray}
c_1 & = &\frac{1}{2 \pi} {\rm Tr}\; F\\
c_2 & = &\frac{1}{8 \pi^2} \left(
{\rm Tr}\; F \wedge F-({\rm Tr}\; F) \wedge({\rm Tr}\; F) \right) \nonumber
\end{eqnarray}
These forms $c_i$ are integral cohomology classes, so that when $c_i$
is integrated over any $2i$-dimensional submanifold (homology class)
the result is an integer. As we will now discuss, in a low-energy
D-brane Yang-Mills theory, these integers count lower-dimensional
D-branes embedded in the original D-brane world-volume.
\subsection{D-branes from Yang-Mills curvature}
\label{sec:branescurvature}
Let us consider the low-energy Yang-Mills theory describing $N$ coincident
$p$-branes. If the bundle associated with the Yang-Mills connection
is nontrivial, this indicates that the gauge field configuration
carries R-R charges which are associated with D-branes of
dimension less than $p$. We will now formulate precisely the way in
which these lower-dimensional D-branes appear, after which we will
discuss the justification for these statements.
Simply put, the integral form corresponding to the $k$th antisymmetric
product of the curvature form $F$ carries $(p-2k)$-brane charge.
Thus,
\begin{eqnarray}
&\makebox[1in][r]{$\frac{1}{2 \pi} \int {\rm Tr}\; F$} & {\rm
corresponds\ to} \;
(p-2)-{\rm brane\ charge}.\nonumber\\
&\makebox[1in][r]{$\frac{1}{8 \pi^2} \int {\rm Tr}\; (F \wedge F)$}
& {\rm corresponds\ to}
\; (p-4)-{\rm brane\ charge}. \nonumber\\
&\makebox[1in][r]{$\frac{1}{48 \pi^3}
\int {\rm Tr}\; (F \wedge F \wedge F)$} & {\rm corresponds\ to} \;
(p-6)-{\rm brane\ charge,\ etc.} \ldots \nonumber
\end{eqnarray}
More precisely, let us imagine that a $(p-2)$-brane is wrapped around
some $(p-2)$-dimensional homology cycle $h_{p-2}$ in the
$p$-dimensional volume of the original $p$-branes. If we choose any
two-dimensional cycle $h_2$, it will generically intersect $h_{p-2}$
in a fixed number of points, corresponding to the intersection number
of these two cycles. Thus, any $(p-2)$-cycle defines a cohomology
class which associates an integer with any 2-cycle. This cohomology
class is known as the Poincare dual of the original homology class.
Using this correspondence we can state the connection between D-branes
and field strength precisely: The integral cohomology class
proportional to $F \wedge^k F$ is the Poincare dual of a
$(p-2k)$-dimensional homology class which describes a system of
embedded $(p-2k)$-branes.
The observation that the instanton number $\frac{1}{8 \pi^2} \int F
\wedge F$ carries
$(p-4)$-brane charge was first made by Witten in the context of
5-branes and 9-branes \cite{Witten-small}. The more general result
for arbitrary $p, k$ was described by Douglas \cite{Douglas}.
From the string theory point of view, this correspondence between
D-branes and Yang-Mills curvature arises from a Chern-Simons type of
term which appears in the full D-brane action, and is given by \cite{Li-bound}
\begin{equation}
{\rm Tr}\; \int_{\Sigma_{p + 1}} {\cal A} \wedge e^{F}
\label{eq:Chern-Simons}
\end{equation}
where ${\cal A}$ is a sum over all the R-R fields
\begin{equation}
{\cal A} = \sum_{k} A^{(k)}
\end{equation}
and where the integral is taken over the full $(p + 1)$-dimensional
world-volume of the $p$-brane.
For example, on a 4-brane $F \wedge F$ couples to $A^{(1)}$ through
\begin{equation}
{\rm Tr}\;\int_{\Sigma_{5}}A^{(1)} \wedge F \wedge F,
\end{equation}
demonstrating that $F \wedge F$ is playing the role of 0-brane charge
in this case.
The existence of the Chern-Simons term (\ref{eq:Chern-Simons}) can be
shown on the basis of anomaly cancellation arguments \cite{ghm}.
It is also possible, however, to show that
these terms must appear simply using the principles of T-duality
and rotational invariance
\cite{Bachas,Douglas,bdgpt,abb}. We follow the latter approach here; in
the following sections we show how the correspondence between
lower-dimensional branes and wedge products of the curvature form can
be seen directly in the low-energy Yang-Mills description of D-branes,
using only T-duality and the intrinsic properties of Yang-Mills theory.
\subsection{Bundles over tori}
We will be primarily concerned here with D-branes on
toroidally compactified spaces. Thus, it will be useful to review
explicitly some of the properties of $U(N)$ bundles over tori. Let
us begin with the simplest case, the two-torus $T^2$.
If we consider a space which has been compactified on $T^2$ with radii
\begin{equation}
R_1 = L_1/(2 \pi), \; R_2 = L_2/(2 \pi)
\end{equation}
then the low-energy field theory of $N$ wrapped 2-branes is $U(N)$ SYM
on $T^2$. To describe a $U(N)$ bundle over a general manifold, we
need to choose a set of coordinate patches on the manifold. For the
torus, we can choose a single coordinate patch covering the entire
space, where the transition functions for the bundle are given by (see
Figure~\ref{f:torus})
\begin{figure}
\vspace{-0.3in}
\psfig{figure=torus.eps,height=1.5in}
\vspace{-0.15in}
\caption[x]{\footnotesize Transition
functions defining a bundle over the 2-torus}
\label{f:torus}
\end{figure}
\begin{equation}
\Omega_1 (x_2), \;\Omega_2 (x_1).
\end{equation}
A connection on a bundle defined by these transition functions must
obey the boundary conditions
\begin{eqnarray}
A_1 (x_1 + L_1, x_2) & = & \Omega_1 (x_2) A_1 (x_1, x_2) \Omega_1^{-1}
(x_2) \\
A_2 (x_1 + L_1, x_2) & = & \Omega_1 (x_2) A_2 (x_1, x_2) \Omega_1^{-1}
(x_2)-i \; (\partial_2 \Omega_1 (x_2)) \cdot \Omega^{-1}_1 (x_2) \nonumber\\
A_1 (x_1, x_2 + L_2) & = & \Omega_2 (x_1) A_1 (x_1, x_2) \Omega_2^{-1}
(x_1)-i \; (\partial_1 \Omega_2 (x_1)) \cdot \Omega^{-1}_2 (x_1)\nonumber\\
A_2 (x_1, x_2 + L_2) & = & \Omega_2 (x_1) A_2 (x_1, x_2) \Omega_2^{-1}
(x_1)\nonumber
\end{eqnarray}
while a matter field $\phi$ in the fundamental representation must
satisfy the boundary conditions
\begin{eqnarray}
\phi (x_1 + L_1, x_2) & = & \Omega_1 (x_2)\phi (x_1, x_2) \nonumber\\
\phi (x_1, x_2 + L_2) & = & \Omega_2 (x_1)\phi (x_1, x_2).
\end{eqnarray}
The cocycle condition for a well-defined $U(N)$ bundle is
\begin{equation}
\Omega_1 (L_2) \Omega_2 (0) \Omega_1^{-1} (0) \Omega_2^{-1} (L_1)=
{\rlap{1} \hskip 1.6pt \hbox{1}}.
\label{eq:bc}
\end{equation}
In general, $U(N)$ bundles over $T^2$ are classified by the first
Chern number
\begin{equation}
C_1 = \int c_1=\frac{1}{2 \pi} \int {\rm Tr}\; F = k \in {{\bb Z}}
\end{equation}
Physically, this integer corresponds to the total non-abelian magnetic
flux on the torus. In order to understand these nontrivial $U(N)$
bundles, it is helpful to decompose the gauge group into its abelian
and non-abelian components
\begin{equation}
U(N) = (U(1) \times SU(N))/{{\bb Z}}_N.
\label{eq:un}
\end{equation}
Because the curvature $F$ has a trace which arises purely from the
abelian part of the gauge group, we see that the $U(1)$ part of the
total field strength for a bundle with $C_1 = k$ carries the fractional flux
$\frac{1}{2 \pi} \int F = \frac{k}{N} \, {\rlap{1} \hskip 1.6pt \hbox{1}}$. Such a field strength would not be possible for a purely
abelian theory (assuming the existence of matter fields in the
fundamental representation) since it would not be possible to satisfy
(\ref{eq:bc}). Once $U(1)$ is embedded in $U(N)$ through
(\ref{eq:un}), however, this deficiency can be corrected by choosing $SU(N)$
boundary conditions $\tilde{\Omega}$ which correspond to a ``twisted''
bundle. Such boundary conditions satisfy
\begin{equation}
\tilde{\Omega}_1 (L_2) \tilde{\Omega}_2 (0)
\tilde{\Omega}_1^{-1} (0) \tilde{\Omega}_2^{-1} (L_1)=Z
\end{equation}
where $Z = e^{-2 \pi ik/N} {\rlap{1} \hskip 1.6pt \hbox{1}}$ is central in $SU(N)$. Twisted bundles of this
type were originally considered by 't Hooft \cite{Hooft}.
The integer $k$ gives a complete classification of $U(N)$ bundles over
$T^2$. Over a higher-dimensional torus $T^n$ the story is
essentially the same; however, there is an integer $k_{ij}$ for every
pair of dimensions in the torus. For each dimension $i$ there is a
transition function, and for each pair $i, j$ the transition functions
satisfy a cocycle relation of the form (\ref{eq:bc}).
\subsection{Example: 0-branes as flux on $T^2$}
\label{sec:examplet2}
We will now discuss nontrivial bundles on $T^2$ and show
using T-duality that the first Chern class indeed counts 0-branes.
Consider a $U(N)$ theory on $T^2$ with total flux $\int {\rm Tr}\; F = 2 \pi$.
We can choose an explicit set of boundary conditions corresponding to
such a bundle
\begin{eqnarray}
\Omega_1 (x_2) & = & e^{2 \pi i (x_2/L_2) T}V \label{eq:boundary2}\\
\Omega_2 (x_1) & = & {\rlap{1} \hskip 1.6pt \hbox{1}} \nonumber
\end{eqnarray}
where
\begin{equation}
V =
\pmatrix{
& 1 & && \cr
& & 1 && \cr
& & & \ddots & \cr
& & & & 1\cr
1& & & &
}
\end{equation}
and $T = {\rm Diag} \; (0, 0, \ldots, 0, 1)$.
To understand the D-brane geometry of this bundle, let us construct a
linear connection on the bundle, which will correspond in the T-dual
picture to flat D-branes on the dual torus. The boundary conditions
(\ref{eq:boundary2}) admit a linear connection with constant curvature
\begin{equation}
\begin{array}{lll}
A_1 & = & 0 \\
A_2 & = & F x_1 + \frac{2 \pi}{L_2} \;
{\rm Diag} (0, 1/N, \ldots, (N -1)/N)
\end{array}
\end{equation}
with
\begin{equation}
F = \frac{2 \pi}{N L_1 L_2} {\rlap{1} \hskip 1.6pt \hbox{1}}.
\end{equation}
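As a sanity check (not part of the original discussion), one can verify numerically that this linear connection satisfies the twisted boundary conditions above and carries unit flux; the values $N = 3$, $L_1 = L_2 = 1$ and the sample point $(x_1, x_2)$ are arbitrary illustrative choices:

```python
import numpy as np

# Toy check (N = 3, L1 = L2 = 1) of the twisted boundary conditions and
# the flux quantization for the constant-curvature connection above.
N, L1, L2 = 3, 1.0, 1.0
V = np.roll(np.eye(N), -1, axis=0)        # shift matrix: V[i, i+1] = 1, V[N-1, 0] = 1
Tm = np.diag([0.0] * (N - 1) + [1.0])     # T = Diag(0, ..., 0, 1)
F = (2 * np.pi / (N * L1 * L2)) * np.eye(N)
D = (2 * np.pi / L2) * np.diag(np.arange(N) / N)

def Omega1(x2):
    # Omega_1(x2) = exp(2 pi i (x2/L2) T) V
    return np.diag(np.exp(2j * np.pi * (x2 / L2) * np.diag(Tm))) @ V

def A2(x1):
    return F * x1 + D

x1, x2 = 0.37, 0.81
O = Omega1(x2)
lhs = A2(x1 + L1)
# Transition rule: A2 -> Omega1 A2 Omega1^{-1} - i (d_2 Omega1) Omega1^{-1};
# for this Omega1 the inhomogeneous term equals (2 pi / L2) T.
rhs = O @ A2(x1) @ np.linalg.inv(O) + (2 * np.pi / L2) * Tm
assert np.allclose(lhs, rhs)

# First Chern number: C1 = (1/2pi) Tr F * (area of torus) = 1.
C1 = np.trace(F).real * L1 * L2 / (2 * np.pi)
assert np.isclose(C1, 1.0)
```

The same check passes for any $N$ and torus dimensions, since the $1/N$ in the constant curvature exactly compensates the trace over the $N$ diagonal entries.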
Because we have chosen the boundary conditions such that $\Omega_2 =
1$ we can T-dualize in a straightforward fashion using $X^2 = (2 \pi
\alpha') (i
\partial_2 + A_2)$.
After such a T-duality, $X^2$ represents the transverse positions of
a set of
1-branes on $T^2$. This field is represented by an infinite matrix
with indices $n \in {\bb Z}$ and $i \in\{1, \ldots, N\}$. In the $(n, m)
= (0, 0)$ block, the field $X^2$ is given by the matrix
\begin{equation}
X^2 =
\left( \frac{\hat{L}_2}{N} \right)
\left(\begin{array}{cccc}
\frac{x_1}{L_1} & 0 & 0 & \ddots\\
0 &\frac{x_1}{L_1} + 1 & 0 & \ddots\\
0 &0 & \ddots & 0\\
\ddots & \ddots & 0 &\frac{x_1}{L_1} + (N -1)
\end{array}\right)
\end{equation}
where
\begin{equation}
\hat{L}_2 =\frac{4 \pi^2 \alpha'}{L_2}
\end{equation}
Thus, we see that the T-dual of the original gauge field on $T^2$
describes a single 1-brane wrapped once diagonally around $\hat{R}_2$, and
$N$ times around $R_1$ (See Figure~\ref{f:diagonal}).
\begin{figure}
\psfig{figure=diagonal.eps,height=1.5in}
\caption[x]{\footnotesize An $(N, 1)$ diagonally wrapped string dual to
$N$ 2-branes with one unit of flux}
\label{f:diagonal}
\end{figure}
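The diagonal winding can be checked directly from the matrix $X^2$; the following sketch (with illustrative values $N = 3$, $L_1 = \hat{L}_2 = 1$) verifies that translating $x_1$ by $L_1$ permutes the $N$ eigenvalue branches cyclically while shifting the last one by $\hat{L}_2$, so the $N$ segments join into a single $(N, 1)$ string:

```python
import numpy as np

# Toy values: N = 3 branes, L1 = Lhat2 = 1.
N, L1, Lhat2 = 3, 1.0, 1.0
V = np.roll(np.eye(N), -1, axis=0)        # cyclic shift matrix
Tm = np.diag([0.0] * (N - 1) + [1.0])     # Diag(0, ..., 0, 1)

def X2(x1):
    # (0,0) block of the dual transverse coordinate matrix.
    return (Lhat2 / N) * np.diag(x1 / L1 + np.arange(N))

# Going once around direction 1 permutes the branches cyclically and
# shifts the last one by Lhat2: X2(x1 + L1) = V X2(x1) V^{-1} + Lhat2 T.
x1 = 0.42
assert np.allclose(X2(x1 + L1), V @ X2(x1) @ V.T + Lhat2 * Tm)

# Eigenvalues advance by Lhat2/N per branch, so the configuration closes
# up only after N circuits: one connected string wound N times around
# direction 1 and once around direction 2.
evs = np.sort(np.diag(X2(0.0)))
assert np.allclose(np.diff(evs), Lhat2 / N)
```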
The dual configuration has quantum numbers corresponding to
$N$ 1-branes on $R_1$ and a single 1-brane on $\hat{R}_2$. In homology
this state could be written as
\begin{equation}
N \cdot (1) + (2)
\end{equation}
Since wrapped 1-branes are T-dual to 0-branes, the original flux on
the 2-brane corresponds to a single 0-brane. This gives a simple
geometrical demonstration through T-duality of the result that the first
Chern class counts $(p-2)$-branes.
It is straightforward to carry out
an analogous construction for a system with $k$ 0-branes. In this
case, the nontrivial boundary condition becomes \cite{ht}
\begin{equation}
\Omega_1 (x_2) = e^{2 \pi i (x_2/L_2) T}V^k
\end{equation}
with $T$ being the diagonal matrix
\begin{eqnarray}
T
& = & {\rm Diag} (n, \ldots, n, n + 1, \ldots, n + 1)
\end{eqnarray}
where $n$ is the integral part of $k/N$
and where the multiplicities
of the diagonal elements $n$ and $n + 1$ are $N - (k - nN)$ and $k - nN$
respectively, so that ${\rm Tr}\; T = k$ (for $0 \leq k < N$ these are
simply $N - k$ and $k$).
In the discussion in this section we have chosen to set $\Omega_2 = 1$
for convenience. This makes the discussion slightly simpler since the
T-duality relation in direction 2 is implemented directly through
(\ref{eq:T-duality}). For a nontrivial gauge transformation
$\Omega_2$ T-duality would give a
configuration of the type described by
(\ref{eq:generalconstraint}). For example, if we used the more
standard ('t Hooft type) boundary conditions for the bundle with $k = 1$
\begin{eqnarray}
\Omega_1 (x_2) & = & e^{2 \pi i (x_2/ L_2) (1/N)}U\\
\Omega_2 (x_1) & = & V\nonumber
\end{eqnarray}
where
\begin{equation}
U =
\pmatrix{
1& & & \cr
& e^{2\pi i\over N} & & \cr
& & \ddots & \cr
& & & e^{2\pi i (N-1)\over N}
}
\end{equation}
then after T-duality in direction 2 we would get a 1-brane
configuration in which translation by $2 \pi \hat{R}_2$ in the
covering space would give rotation by $V$, permuting the labels on the
1-branes. This situation is gauge equivalent to the one we have
discussed where the boundary conditions are given by
(\ref{eq:boundary2}).
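The 't Hooft algebra behind these boundary conditions is easy to verify numerically; the sketch below (with the illustrative choice $N = 4$, $L_2 = 1$) checks that $U V U^{-1} V^{-1}$ is the central phase $e^{-2 \pi i/N}$ times the identity, and that the $U(1)$ phase in $\Omega_1$ restores the full $U(N)$ cocycle condition:

```python
import numpy as np

# Clock and shift matrices for N = 4 (illustrative choice).
N = 4
w = np.exp(2j * np.pi / N)
U = np.diag(w ** np.arange(N))            # clock: Diag(1, w, ..., w^(N-1))
V = np.roll(np.eye(N), -1, axis=0)        # shift: V[i, i+1] = 1, V[N-1, 0] = 1

# The SU(N)-type commutator is the central phase e^{-2 pi i / N}.
comm = U @ V @ np.linalg.inv(U) @ np.linalg.inv(V)
assert np.allclose(comm, np.exp(-2j * np.pi / N) * np.eye(N))

# Full U(N) boundary conditions: Omega1(x2) = exp(2 pi i (x2/L2)/N) U,
# Omega2 = V. The abelian phase cancels the twist, so the cocycle
# Omega1(L2) Omega2(0) Omega1^{-1}(0) Omega2^{-1}(L1) is the identity.
L2 = 1.0
def Omega1(x2):
    return np.exp(2j * np.pi * (x2 / L2) / N) * U

cocycle = Omega1(L2) @ V @ np.linalg.inv(Omega1(0.0)) @ np.linalg.inv(V)
assert np.allclose(cocycle, np.eye(N))
```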
\subsection{Example: 0-branes as instantons on $T^4$}
\label{sec:examplet4}
Let us now consider nontrivial bundles on $T^4$. From the previous
discussion it is clear that a nonvanishing first Chern class indicates
the existence of 2-branes in the system. For example, if $\int F_{12}
= 2 \pi$ then the configuration contains a 2-brane wrapped around the
(34) homology cycle. A constant curvature connection with $\int
F_{12} = 2 \pi$ and $\int F_{34} = 2 \pi$ would correspond after
T-duality in directions 2 and 4 to a diagonally wrapped 2-brane, and
in the original Yang-Mills theory on $T^4$ corresponds to a
``4-2-2-0'' configuration with a unit of 0-brane charge as well as
2-brane charge in directions $(12)$ and $(34)$ \cite{Sanjaye-Zack}. A
more interesting configuration to consider is one where the first
Chern class vanishes but the second Chern class does not. This
corresponds to an instanton in the $U(N)$ gauge theory on $T^4$. To
consider an explicit example of such a configuration, let us take a
$U(N)$ gauge theory on a torus $T^4$ with sides all of length $L$. We
want to construct a bundle with nontrivial second Chern class $C_2 =
\int c_2 = k$ and with $c_1 = 0$. There is no smooth
$U(N)$ instanton with $k = 1$; a single instanton tends to
shrink to a point on the torus \cite{vb-shrink}. Thus, we will
consider a configuration with $k = 2, N = 2$.
To construct a bundle with the desired topology we can take the
transition functions in the four directions of the torus to be
\begin{eqnarray}
\Omega_2 = \Omega_4 & = & {\rlap{1} \hskip 1.6pt \hbox{1}} \nonumber\\
\Omega_1 & = & e^{2 \pi i (x_2/L) \tau_3}\\
\Omega_3 & = & e^{2 \pi i (x_4/L) \tau_3} \nonumber
\end{eqnarray}
where
\begin{equation}
\tau_3 =\left(\begin{array}{cc}
1 & 0\\
0 & -1
\end{array} \right)
\end{equation}
is the usual Pauli matrix.
This bundle admits a linear connection
\begin{eqnarray}
A_1 = A_3 & = & 0 \nonumber\\
A_2 & = & \frac{2 \pi x_1}{L^2} \tau_3\\
A_4 & = & \frac{2 \pi x_3}{L^2} \tau_3 \nonumber
\end{eqnarray}
whose curvature is given by
\begin{eqnarray}
F_{12} = F_{34} & = & \frac{2 \pi}{L^2} \tau_3
\end{eqnarray}
Since ${\rm Tr}\; F = 0$ there is no net 2-brane charge, as desired.
The instanton number of the bundle is
\begin{equation}
C_2 = \frac{1}{8 \pi^2} \int {\rm Tr}\;F \wedge F = 2.
\end{equation}
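The normalization can be checked in two lines: since only $F_{12}$ and $F_{34}$ are nonzero, ${\rm Tr}\; F \wedge F$ integrates to $2 L^4 \, {\rm Tr}(F_{12} F_{34})$. A numeric sketch (with $L = 1$ for illustration):

```python
import numpy as np

# Constant-curvature instanton data from the text, with L = 1.
L = 1.0
tau3 = np.diag([1.0, -1.0])
F12 = F34 = (2 * np.pi / L**2) * tau3

# Only F12 and F34 are nonzero, so the integral of Tr F ^ F over T^4
# reduces to 2 * L^4 * Tr(F12 F34).
C2 = (1 / (8 * np.pi**2)) * 2 * np.trace(F12 @ F34) * L**4
assert np.isclose(C2, 2.0)

# The first Chern class vanishes: no net 2-brane charge.
assert np.isclose(np.trace(F12), 0.0)
```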
As we would expect from the discussion in section
\ref{sec:branescurvature}, this should correspond to the existence of
two 0-branes in the system. We can see this by again using T-duality.
After performing T-duality transformations in directions 2 and 4 we get
two 2-branes whose transverse coordinates are described by
\begin{eqnarray}
X^2 (x_1, x_3) & = & \pm\hat{L}x_1/L\\
X^4 (x_1, x_3) & = & \pm \hat{L} x_3/L\nonumber
\end{eqnarray}
where $\hat{L} =4 \pi^2 \alpha'/L$.
These 2-branes are wrapped diagonally on the dual $T^4$ in such a way
that they correspond to the following homology 2-cycles
\begin{eqnarray}
{\rm brane\ 1} & \rightarrow &
(13) + (14) + (23) + (24)\\
{\rm brane\ 2} & \rightarrow &
(13) - (14) - (23) + (24) \nonumber
\end{eqnarray}
The total resulting homology class is $2(13)+ 2 (24)$, which is
T-dual to two 4-branes and two 0-branes as expected. Further
discussion of configurations of this type which are dual to instantons
on $T^4$ can be found in \cite{bdl,Sanjaye-Zack,ht}.
\subsection{Branes from lower-dimensional branes}
\label{sec:branes-smaller}
In the preceding subsections we have discussed how, in general,
$(p-2k)$-branes can be described by nontrivial gauge configurations in
the world-volume of a system of parallel $p$-branes. We will now
discuss the T-dual interpretation of this result, which indicates that
it is equally possible to construct $(p + 2k)$-branes out of a system
of interacting $p$-branes by choosing noncommuting matrices to
describe the transverse coordinates.
In the context of the preceding discussion, it is easiest to describe
the construction of higher-dimensional branes from a finite number of
$p$-branes in the case of toroidally compactified space. In Sections
7.2.2\ and 7.2.3\ we will discuss the construction of higher dimensional
branes in noncompact spaces from a system of 0-branes. The simplest
example of the phenomenon we wish to discuss here is the description
of a 2-brane in terms of a ``topological'' charge associated with the
matrices describing $N$ 0-branes on $T^2$. To see how a configuration
with such a charge is constructed, consider again the diagonal ($N,
1$) 1-brane on $T^2$ (Figure~\ref{f:diagonal}). If we take the
toroidal dimensions to be $L_1 \times L_2$ then the diagonal 1-brane
configuration satisfies
\begin{equation}
[(\partial_1 -iA_1), X^2] = \frac{L_2}{N L_1} {\rlap{1} \hskip 1.6pt \hbox{1}}.
\end{equation}
By taking the
T-dual on $X^2$ we get a system of $N$ 2-branes with unit flux
\begin{equation}
[(\partial_1-iA_1), (\partial_2-iA_2)] = -iF =
\frac{-2 \pi i}{N L_1 \hat{L}_2} {\rlap{1} \hskip 1.6pt \hbox{1}},
\end{equation}
as discussed in Section \ref{sec:examplet2}. If, on the other hand,
we perform a T-duality transformation on $X^1$, then we get a system
of $N$ 0-branes satisfying
\begin{equation}
[X^1, X^2] = \frac{2 \pi \hat{R}_1 R_2 i}{N} {\rlap{1} \hskip 1.6pt \hbox{1}}
\end{equation}
where $\hat{R}_1$ and $R_2$ are the radii of the torus on which the
0-branes are moving. Since the 1-brane wrapped around $X^2$ becomes a
2-brane on $T^2$ under the T-duality transformation which takes the
1-branes on $X^1$ to 0-branes, we see that on a $T^2$ with area $A$ a
system of $N$ 0-branes described by (infinite) matrices $X$ satisfying
\begin{equation}
{\rm Tr}\; [X^1, X^2] = \frac{iA}{2 \pi}
\label{eq:2-branecharge}
\end{equation}
carries a unit of 2-brane charge.
Note that if the 0-branes were not moving on a compact space the
quantity in (\ref{eq:2-branecharge}) would vanish for $N$ finite.
In the infinite $N$ limit, however, as will be discussed in
7.2.2, this charge can be nonzero even in Euclidean space.
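The finite-$N$ statement is just cyclicity of the trace; a minimal numeric illustration:

```python
import numpy as np

# For finite matrices the trace of a commutator always vanishes
# (cyclicity of the trace): a nonzero value of Tr [X^1, X^2] requires
# the infinite-dimensional matrices that arise for branes on a compact
# space, or the infinite N limit.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((5, 5))
X2 = rng.standard_normal((5, 5))
assert np.isclose(np.trace(X1 @ X2 - X2 @ X1), 0.0)
```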
This discussion generalizes naturally to higher dimensions. For
example, a system of $N$ 0-branes on a $T^4$ of volume $V$ with
\begin{equation}
{\rm Tr}\;\epsilon_{abcd} X^a X^b X^c X^d = \frac{V}{2 \pi^2}
\end{equation}
will carry a unit of 4-brane charge \cite{grt,bss}. This is just the
T-dual of the instanton number for a system of $N$ 4-branes, which is
associated with 0-brane charge as discussed above. Similarly, any
system of $p$-branes on a $2k$-dimensional transverse torus can be in
a state with $(p+2k)$-brane charge.
It is also, of course, possible to mix the two types of conditions we
have discussed to describe, for example, 2-brane charge on the (34)
homology cycle of a 4-torus in terms of a gauge theory of 2-branes
wrapped on the (12) homology cycles. Such a charge is proportional to
\begin{equation}
{\rm Tr}\; \left(F_{12}[X^3, X^4] -(D_1 X^3) (D_2 X^4)
+ (D_1 X^4) (D_2 X^3) \right).
\end{equation}
\subsection{Strings and electric fields}
\label{sec:strings-electric}
We have seen that the gauge fields and transverse coordinates of a system of
$p$-branes can be combined to give $(p \pm 2k)$-brane charge. It is
also possible to choose gauge fields on the world-volume of a
$p$-brane which describe fundamental strings.
Consider a system of 0-branes moving on a space which has been
compactified in direction $X^9$.
Clearly, these 0-branes can be given momentum in the compact
direction; this momentum is proportional to $\dot{X}^9$ and is
quantized in units of $1/R$.
Under T-duality on the $S^1$, we have
\begin{equation}
\dot{X^9} \rightarrow \int (2 \pi \alpha')\dot{A_9}
\end{equation}
Thus, momentum of a set of 0-branes corresponds to electric flux
around the compact direction in the dual gauge theory. Since string
momentum is T-dual to string winding, we see that electric flux in a
gauge theory on a compact space can be associated with fundamental
string winding number. It is natural to give this result a local
interpretation, so that lines of electric flux in a gauge theory
correspond to fundamental strings even in noncompact space.
It is interesting to note that 0-brane momentum in
a compact direction and the T-dual string winding number are quantized
only because of the quantum nature of the theory. On the other hand,
the quantization of flux giving 0-brane charge in a gauge theory on
$T^2$ arises from topological considerations, namely the fact that the
first Chern class of a $U(1)$ bundle is necessarily integral.
Nonetheless, in string theory these quantities which are quantized in
such different fashions can be related through duality.
It is tempting to speculate that a truly fundamental description of
string theory would therefore in some way combine quantum mechanics
and topological considerations in a novel fashion.
\normalsize
\section{D-brane Interactions}
So far we have discussed the geometry of D-branes as described by
super Yang-Mills theory. We now proceed to describe some aspects of
D-brane interactions. We begin with a discussion of D-brane bound
states from the point of view of Yang-Mills theory. We then discuss
potentials describing interactions between separated D-branes.
\subsection{D-brane bound states}
\label{sec:bound}
Bound states of D-branes were originally understood from supergravity
(as discussed in the lectures of Stelle at this school \cite{Stelle})
and by duality from the perturbative string spectrum
\cite{Hull-Townsend,Sen-bound1,Sen-bound2}. There are a number of
distinct types of bound states which are of interest. These include
$p-p'$ bound states between D-branes of different dimension, $p-p$
bound states between identical D-branes, and bound states of D-branes
with strings. We will discuss each of these systems briefly; in order
to motivate the results on bound states, however, it is now useful to
briefly review the concept of BPS states.
\vspace{0.08in}
\noindent
{\it 5.1.1\ \ \ BPS states}
\vspace{0.05in}
Certain extended supersymmetry (SUSY) algebras contain central terms,
so that the full SUSY algebra has the general form
\begin{equation}
\{Q, Q\} \sim P + Z.
\end{equation}
For example, in $D = 4$, ${\cal N} = 2$ $U(2)$ super
Yang-Mills \cite{Witten-Olive},
\begin{equation}
\{Q_{\alpha i}, \bar{Q}_{\beta j}\} =
\delta_{ij} \gamma^{\mu}_{\alpha \beta} P_\mu
+ \epsilon_{ij} \left(\delta_{\alpha \beta} U +
(\gamma_5)_{\alpha \beta} V\right),
\end{equation}
where
\begin{equation}
U =\langle \phi \rangle e \;\;\;\;\; V = \langle \phi \rangle g
\end{equation}
are related to electric and magnetic charges after spontaneous
breaking to $U(1)$.
Since $\{Q_{\alpha i}, \bar{Q}_{\beta j}\}$ is positive semi-definite it follows
that
\begin{equation}
M^2 \geq U^2 + V^2
\end{equation}
so
\begin{equation}
M \geq \langle \phi \rangle \sqrt{e^2 + g^2}.
\label{eq:BPS-inequality}
\end{equation}
This inequality is saturated when $\{Q_{\alpha i}, \bar{Q}_{\beta
j}\}$ has vanishing eigenvalues. This condition implies $Q | {\rm
state} \rangle = 0$ for some $Q$. Thus, any state with a mass
saturating the inequality (\ref{eq:BPS-inequality}) lies in a
``short'' representation of the supersymmetry algebra. Because this
property is protected by supersymmetry, it follows that the relation
between the mass and charges of such a state cannot be modified by
perturbative or nonperturbative effects (although the mass and charges
can be simultaneously modified by quantum effects).
{}
Similar BPS states appear in string theory, where the central terms in
the SUSY algebra correspond to NS-NS and
R-R charges. As in the above example,
states which preserve some SUSY are BPS saturated. There are many
ways of analyzing BPS states in string theory. The spectrum of BPS
states with a particular set of D-brane charges can in some cases be
determined through duality from perturbative string states
\cite{Hull-Townsend,Sen-bound1,Sen-bound2}.
Such dualities allow the number of BPS states with fixed charges to be
counted. BPS states can also be found through the space-time
supersymmetry algebra \cite{Polchinski-TASI}, providing a
connection to the large body of known results on supergravity
solutions \cite{Stelle}. We can also analyze BPS states using the
Yang-Mills or Born-Infeld theory on the world-volume of a set of
D-branes. We will follow this latter approach in the next few sections.
Before discussing BPS bound states in detail, let us synopsize results
on the energies of these states which can be obtained from duality or
the supersymmetry algebra \cite{Polchinski-TASI}. We will then show
that these results are correctly reproduced in the SYM description.
\vspace{0.05in}\noindent
{\sl i}. $p-p$ BPS systems are marginally bound. This means that the
energy of a bound state of $N$ $p$-branes, when such a state exists,
is $N E_p$ where $E_p$ is the
energy of a single $p$-brane.
\vspace{0.03in}\noindent
{\sl ii}. $p-(p+4)$ BPS systems are marginally bound. For a bound
state of $N_p$ $p$-branes and $N_{p +4}$ $(p+4)$-branes the total
energy is $E = N_pE_p + N_{p + 4} E_{p +4}$.
\vspace{0.03in}\noindent
{\sl iii}. $p-(p+2)$ BPS systems are truly bound when $N_p$ and $N_{p
+2}$ are relatively prime. For
these systems, the energy is $E = \sqrt{(N_pE_p)^2 + (N_{p +2}E_{p +2})^2}$.
\vspace{0.03in}\noindent
{\sl iv}. 1-brane/string BPS systems are truly bound,
$E = \sqrt{ (N_1E_1)^2 + (N_sE_s)^2}$.
\vspace{0.05in}
The energies given for these states are the exact energies expected
from string theory. These are expected to correspond with the
Born-Infeld energies of these bound state configurations. From the
Yang-Mills point of view we only see the $F^2$ term in the expansion
of the Born-Infeld energy
around a flat background, as in (\ref{eq:action-expansion}).
A static field configuration on a single flat $p$-brane has
Born-Infeld energy
\begin{equation}
E_{{\rm BI}} = \tau_p \int d^p\xi \sqrt{\det (\delta_{ij} + 2 \pi \alpha' F_{ij})}.
\label{eq:Born-Infeld-energy}
\end{equation}
It is not completely understood at this time how to generalize the
Born-Infeld action to arbitrary non-abelian fields
\cite{Tseytlin,ht}. In the case where all components of the
field strength commute, however, the Born-Infeld action can be defined by
simply taking a trace outside the square root in
(\ref{eq:Born-Infeld-energy}). This gives the expected formula for
the non-abelian super Yang-Mills energy at second order
\begin{equation}
E_{{\rm YM}} = \tau_p \pi^2 \alpha'^2 \int {\rm Tr}\; F_{ij}^2.
\label{eq:SYM-energy}
\end{equation}
We will now discuss the descriptions of various bound states in the
super Yang-Mills formalism and show that (\ref{eq:SYM-energy}) indeed
has the expected BPS value for these systems.
{}
\vspace{0.08in}
\noindent
{\it 5.1.2\ \ \ 0-2 bound states}
\vspace{0.05in}
The simplest bound state of multiple D-branes from the point of view
of Yang-Mills theory is a bound state of 0-branes and 2-branes where
the 2-branes are wrapped around a compact 2-torus \cite{Townsend}. As
discussed in Section \ref{sec:examplet2}, a system containing $N$
2-branes and $k$ 0-branes (with the 0-branes confined to the surface
of the 2-branes) is described by a $U(N)$ Yang-Mills theory with total
magnetic flux ${\rm Tr} \int F = 2 \pi k$. From simple dimensional
considerations it is clear that the energy of the configuration is
minimized when the flux is distributed as uniformly as possible on the
surface of the 2-branes. This follows from the fact that in the
Yang-Mills theory the energy scales as $\int F^2$. For example, if we
consider a field configuration $F$ corresponding to a 0-brane on an
infinite 2-brane, the energy can be scaled by a factor of $\rho^2$
while leaving the flux invariant by taking $F (x) \rightarrow \rho^2 F
(\rho x)$; thus the energy can be taken arbitrarily close to 0 by
taking $\rho \rightarrow 0$.
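Explicitly, substituting $y = \rho x$ (so that $d^2x = \rho^{-2}\, d^2y$), the flux and energy transform as
\begin{equation}
\int d^2x\; \rho^2 F (\rho x) = \int d^2y\; F (y), \;\;\;\;\;\;
\int d^2x\; \left[ \rho^2 F (\rho x) \right]^2 = \rho^2 \int d^2y\; F (y)^2,
\end{equation}
so the flux is invariant while the energy vanishes as $\rho \rightarrow 0$.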
On a compact space such as $T^2$, the energy is minimized when the
flux is uniformly distributed. Precisely such a configuration of $N$
2-branes and $k$ 0-branes was
considered in Section \ref{sec:examplet2}. The Yang-Mills energy of this
configuration corresponds to the second term in the power series
expansion of the expected Born-Infeld energy for a BPS configuration
\begin{equation}
E =\sqrt{(N \tau_2 L_1 L_2)^2 + (k \tau_0)^2}
= N \tau_2 L_1 L_2+ \tau_2 \pi^2 \alpha'^2 \int {\rm Tr}\; F^2 + \cdots
\end{equation}
where
\begin{equation}
F_{12} = \frac{2 \pi k}{ N L_1L_2} {\rlap{1} \hskip 1.6pt \hbox{1}}.
\end{equation}
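As a consistency check, the second term in this expansion can be evaluated directly. Expanding the square root gives $(k \tau_0)^2/(2 N \tau_2 L_1 L_2)$, while inserting the constant flux above into the Yang-Mills energy (with the sum running over both orderings of the indices in $F_{ij} F_{ij}$, and using the standard tension relation $\tau_0 = 4 \pi^2 \alpha' \tau_2$) gives
\begin{equation}
\tau_2 \pi^2 \alpha'^2 \int {\rm Tr}\; F_{ij} F_{ij}
= 2 \tau_2 \pi^2 \alpha'^2 N L_1 L_2 \left( \frac{2 \pi k}{N L_1 L_2} \right)^2
= \frac{8 \pi^4 \alpha'^2 \tau_2 k^2}{N L_1 L_2}
= \frac{(k \tau_0)^2}{2 N \tau_2 L_1 L_2},
\end{equation}
in agreement with the expansion.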
Thus, we see that the Yang-Mills energy is indeed that expected of a BPS bound
state. The fact that this configuration is truly bound is
particularly easy to see in the T-dual picture, where it corresponds
to a state of D1-branes with winding numbers $N, k$ on the dual
torus. Clearly, when $N$ and $k$ are relatively prime, the lowest
energy state of this 1-brane system is a single diagonally wound
brane. This is precisely the system described in Section \ref{sec:examplet2}
as the dual of the 0-2 system with uniform flux density. When $N$ and
$k$ have a greatest common divisor $n$ then the system can be
considered to be a marginally bound configuration of $n$ $(N/n, k/n)$
states. In this case the moduli space of constant curvature solutions
has extra degrees of freedom corresponding to the independent motion of
the component branes \cite{Sanjaye-Zack2}.
Since the 0-2 bound states saturate the BPS bound on the energy, it is
natural to try to check that there is an unbroken supersymmetry in the
super Yang-Mills theory. Naively applying the supersymmetry
transformation (\ref{eq:Yang-Mills-SUSY})
\begin{equation}
\delta \psi = -\frac{1}{4} F_{\mu \nu} \Gamma^{\mu \nu} \epsilon
\end{equation}
it seems that the state is not supersymmetric, since
\begin{equation}
(\Gamma^{12})^2 = -1
\end{equation}
and therefore $\delta \psi$ cannot vanish for all $\epsilon$ when
$F_{12} \sim {\rlap{1} \hskip 1.6pt \hbox{1}}$. There is a subtlety here, however
\cite{Green-Gutperle,Balasubramanian-Leigh}. In the IIA string theory
there are 32 supersymmetries, 16 of which are broken by the 2-brane and
therefore do not appear in the SUSY algebra of the gauge theory. To
see the unbroken supersymmetry it is necessary to include the extra 16
supersymmetries, which appear as linear terms in
(\ref{eq:Yang-Mills-SUSY}). After including these terms we see that
as long as $F$ is constant and proportional to the identity, the
Yang-Mills configuration preserves 1/2 of the original 32
supersymmetries, as we would expect for a BPS state of this type.
Thus, although the $0-2$ bound state breaks the original 16
supersymmetries of the SYM theory, there exists another linear
combination of 16 SUSY generators under which the state is invariant.
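Schematically (with normalizations suppressed), if $\tilde{\epsilon}$ denotes the parameter of the 16 linearly realized supersymmetries, the full variation takes the form
\begin{equation}
\delta \psi = -\frac{1}{4} F_{\mu \nu} \Gamma^{\mu \nu} \epsilon + \tilde{\epsilon}
= -\frac{1}{2} F_{12} \Gamma^{12} \epsilon + \tilde{\epsilon},
\end{equation}
which vanishes for the choice $\tilde{\epsilon} = \frac{1}{2} F_{12} \Gamma^{12} \epsilon$. This choice identifies $\tilde{\epsilon}$ with a fixed spinor built from $\epsilon$ only when $F_{12}$ is constant and proportional to the identity, in which case 16 combinations of the 32 supersymmetry parameters survive.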
{}
\vspace{0.08in}
\noindent
{\it 5.1.3\ \ \ 0-4 bound states}
\vspace{0.05in}
We now consider bound states of 0-branes and 4-branes. A system of
$N$ 4-branes, no 2-branes and $k$ 0-branes is described by a $U(N)$
Yang-Mills configuration with instanton number $C_2= k$, as discussed in
Section \ref{sec:examplet4}. Unlike the 0-2 case, on an
infinite 4-brane world-volume the Yang-Mills configuration can be
scaled arbitrarily without changing the energy of the system. This
follows from the fact that the instanton number and the energy both
scale as $F^2$. The set of Yang-Mills solutions which minimize the
energy for a fixed value of $C_2$ form the moduli space of $U(N)$
instantons. This corresponds to the classical moduli space of 0-4
bound states.
If we compactify the 4-brane world-volume on a torus $T^4$ then the
moduli space of 0-4 bound states becomes the moduli space of $U(N)$
instantons on $T^4$ with instanton number $k$ \cite{Vafa-instantons}.
As an example we now describe a particularly simple class of
instantons in the case $N = k = 2$ considered in Section
\ref{sec:examplet4}. If we allow the dimensions of the torus to be
arbitrary, there are solutions of the Yang-Mills equations with
constant curvature $F_{12} = 2 \pi \tau_3 {\rlap{1} \hskip 1.6pt \hbox{1}}/(L_1 L_2), F_{34}
= 2 \pi \tau_3 {\rlap{1} \hskip 1.6pt \hbox{1}}/(L_3 L_4)$. It is a simple exercise to check
that the Yang-Mills energy of this configuration is greater than or
equal to the energy $2 \tau_0$ of two 0-branes, with equality when $L_1 L_2=
L_3 L_4$. In fact, in the extremal case the Born-Infeld energy
\begin{equation}
E = 2 \tau_4 V_4 \sqrt{(1 + 4 \pi^2 \alpha'^2 F_{12}^2)
(1 + 4 \pi^2 \alpha'^2 F_{34}^2)} =2 \tau_4 V_4+2 \tau_0
\end{equation}
factorizes exactly so that there are no higher order corrections to
the Yang-Mills energy. The extremality condition here amounts to the
requirement that the field strength $F$ is self-dual. In this case,
precisely 1/4 of the supersymmetries of the system are preserved, and
the mass is therefore BPS protected. As discussed in Section
\ref{sec:examplet4}, this field configuration is T-dual to a
configuration of two 2-branes intersecting at angles. The
self-duality condition is equivalent to the condition that the angles
$\theta_1, \theta_2$ relating the intersecting branes are equal; this
is precisely the necessary condition for a system of
intersecting branes to preserve some supersymmetry \@ifnextchar [{\@tempswatrue\@mycitex}{\@tempswafalse\@mycitex[]}{bdl}.
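Returning to the factorized energy above, one can check that the cross term in the product reproduces the 0-brane rest energy exactly: with $|F_{12}| = 2 \pi/(L_1 L_2)$ and $|F_{34}| = 2 \pi/(L_3 L_4)$ in each diagonal entry, $V_4 = L_1 L_2 L_3 L_4$, and the standard tension relation $\tau_0 = 16 \pi^4 \alpha'^2 \tau_4$,
\begin{equation}
2 \tau_4 V_4 \cdot 4 \pi^2 \alpha'^2 \left| F_{12} F_{34} \right|
= 2 \tau_4 V_4 \cdot \frac{16 \pi^4 \alpha'^2}{V_4} = 2 \tau_0 .
\end{equation}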
In general, on any manifold the moduli space of instantons is
equivalent to the space of self-dual or anti-self-dual field
configurations. This follows essentially from the inequality
\begin{equation}
\int (F \pm *F) \wedge * (F \pm *F) = 2 \int \left( F \wedge *F \pm F \wedge F \right) \geq 0.
\end{equation}
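Combined with the quantization of the instanton number, this inequality gives the usual Bogomol'nyi bound: in conventions where $C_2 = \frac{1}{8 \pi^2} \int {\rm Tr}\; F \wedge F$,
\begin{equation}
\int {\rm Tr}\; F \wedge *F \geq \left| \int {\rm Tr}\; F \wedge F \right| = 8 \pi^2 |C_2|,
\end{equation}
with equality precisely for (anti-)self-dual fields, so that at fixed instanton number the Yang-Mills energy is minimized on the instanton moduli space.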
As we have discussed, the moduli space of instantons is, roughly
speaking, the classical moduli space of bound state configurations for
a 0-4 system. There are several complications to this story, however,
which we now discuss briefly.
The first subtlety is that when an instanton shrinks to a point, the
associated 0-brane can leave the surface of the 4-branes on which it
was embedded. Although this is a natural process from the string
theory point of view, this phenomenon is not visible in the gauge
theory living on the 4-brane world-volume. Thus, to address questions
for which this process is relevant, a more general description of a
0-4 system is needed. One approach which has been used
\cite{Witten-small,Vafa-instantons}
is to incorporate two
gauge groups $U(N)$ and $U(k)$, describing simultaneously the
world-volume physics of the 4-branes and 0-branes. In addition to the
gauge fields on the two sets of branes this theory contains a set of
additional hypermultiplets $\chi$ corresponding to 0-4 strings. If
the dynamics of the 4-brane are dropped by ignoring fluctuations in
the $U(N)$ fields, then the remaining theory is the dimensional
reduction of an ${\cal N} = 2$ super Yang-Mills theory in four
dimensions with $N k$ hypermultiplets. The moduli space of vacua for
this theory has two branches: a Coulomb branch where $\chi = 0$ and a
Higgs branch where the 0-brane lies in the 4-brane world-volume. It
was shown by Witten \cite{Witten-small} (in the analogous context of
5-branes and 9-branes) that the Higgs branch of this theory is precisely
the moduli space of instantons on ${\bb R}^4$. In fact, the ADHM
construction of this moduli space involves precisely the hyperk\"ahler
quotient which gives the Higgs branch of the
moduli space of vacua for the ${\cal N} = 2$ Yang-Mills
theory. The generalization of this situation to arbitrary $p-(p + 4)$
brane systems was discussed by Douglas \cite{Douglas,Douglas-gauge}
who also showed that the instanton structure can be seen by a probe
brane.
A second complication which arises in the discussion of 0-4 bound
states is that on compact manifolds such as $T^4$ for certain values
of $N$ and $k$ there are no smooth instantons. For example, for $N =
2$ and $k = 1$, instantons on $T^4$ tend to shrink to a point so there
are no smooth instanton configurations with these quantum numbers. It
was argued by Douglas and Moore \cite{Douglas-Moore} that a complete
description of the moduli space in this case requires the more
sophisticated mathematical machinery of sheaves. Using the language
of sheaves it is possible to describe a moduli space analogous to the
instanton moduli space for arbitrary $N, k$. One argument for why
this language is essential is that the Nahm-Mukai transform which
gives an equivalence between moduli spaces of instantons on the torus
with $(N, k)$ and $(k, N)$ is only defined for arbitrary $N$ and $k$
in the sheaf context (see \cite{Donaldson-Kronheimer} for a review
of the Nahm-Mukai transform and further references). This equivalence
amounts to the statement that the moduli space of 0-4 bound states is
invariant under T-duality, which is a result clearly expected from
string theory.
This discussion has centered around the classical moduli space of 0-4
bound states. In the quantum theory, the construction of bound states
essentially involves solving supersymmetric quantum mechanics on this
moduli space, giving a relationship between the number of discrete
bound states and the cohomology of the moduli space \cite{Vafa-gas}.
Precisely solving this counting problem requires understanding how the
singularities in the moduli space are resolved quantum mechanically.
The mathematics underlying the resolution of these singularities again
involves sheaf theory \cite{Harvey-Moore,Nakajima}. A fully detailed
description of how this state counting problem works out on a general
compact 4-manifold has not been given yet, although
there are many results in special cases, particularly for asymptotic
values of the charges, which are applicable to entropy analysis for
stringy black holes; this issue will be discussed in further detail in
the lectures of Maldacena at this school.
{}
\vspace{0.08in}
\noindent
{\it 5.1.4\ \ \ 0-6 and 0-8 bound states}
\vspace{0.05in}
So far we have discussed 0-2 and 0-4 bound states from the Yang-Mills
point of view. In both cases there are classically stable Yang-Mills
solutions which correspond to a $p$-brane with a gauge field strength
carrying 0-brane charge. It is natural to ask what happens when we
try to construct analogous configurations for $p = 6$ or 8. From the
scaling argument used above, it is clear that a 0-brane on an infinite
6- or 8-brane will tend to shrink to a point, since the 0-brane charge
scales as $F^3$ or $F^4$ while the energy scales as $F^2$. Thus, in
general we would expect that a 0-brane spread out on the surface of a
6- or 8-brane would tend to contract to a point and then leave the
surface of the higher dimensional brane. In fact, analysis of the
SUSY algebra in string theory indicates that BPS states containing
0-brane and 6- or 8-brane charge have vanishing Yang-Mills energy so
that the 0-brane cannot have nonzero size on the 6/8-brane.
Strangely, however, on the torus $T^6$ or $T^8$ there
are (quadratically) stable Yang-Mills configurations with charges
corresponding to 0-branes and no other lower-dimensional branes
\cite{WT-adhere}. For example, on $T^6$ we can construct a field
configuration with
\begin{equation}
F_{12} = 2 \pi \mu_1\;\;\;\;\;
F_{34} = 2 \pi \mu_2\;\;\;\;\;
F_{56} = 2 \pi \mu_3
\end{equation}
where
\begin{equation}
\mu_1=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1\\
\end{array} \right)
\;\;\;\;\;
\mu_2=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & 1\\
\end{array} \right)
\end{equation}
\begin{equation}
\mu_3=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1\\
\end{array} \right)
\end{equation}
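One can check directly that this configuration carries only 0-brane charge among the lower brane charges: the relevant traces are
\begin{equation}
{\rm Tr}\; \mu_i = 0, \;\;\;\;\;
{\rm Tr}\; \mu_i \mu_j = 0 \;\; (i \neq j), \;\;\;\;\;
{\rm Tr}\; \mu_1 \mu_2 \mu_3 = 4,
\end{equation}
so that ${\rm Tr}\, F$ (4-brane charge) and ${\rm Tr}\, F \wedge F$ (2-brane charge) vanish, while ${\rm Tr}\, F \wedge F \wedge F$ (0-brane charge) does not.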
This solution is quadratically stable, but breaks all supersymmetry.
It is T-dual to a system of four 3-branes intersecting pairwise on
lines. In the quantum theory these configurations must be unstable
and will eventually decay; however, because of the classical quadratic
stability we might expect that the states would be fairly long-lived.
These configurations seem to be related to metastable
non-supersymmetric black holes \cite{Khuri-Ortin,Sheinblatt}.
{}
\vspace{0.08in}
\noindent
{\it 5.1.5\ \ \ $p-p$ bound states}
\vspace{0.05in}
We now consider the question of bound states between parallel branes
of the same dimension. As in the case of 0-4 bound states, the
existence of $p-p$ bound states depends crucially upon subtleties in
the quantum theory, and is a somewhat complicated question. We review
the story here very briefly; for a much more detailed analysis the
reader is referred to the paper of Sethi and Stern \cite{Sethi-Stern}.
Recall that the world-volume theory of $N$ $p$-branes is $U(N)$ ${\cal
N} = 1$ SYM in 10D dimensionally reduced to $p + 1$ dimensions. The
bosonic fields in this theory are $A_{\alpha}$ and $X^a$. The moduli
space of classical vacua for $N$ $p$-branes is the configuration space
\begin{equation}
\frac{({\bb R}^{9-p})^N}{S_N}
\end{equation}
where the symmetric group $S_N$ arises as the Weyl group of $U(N)$.
Thus, classically the branes
move freely and there is no apparent reason for a bound state to
occur.
Once we include quantum effects, the story becomes more subtle. Let
us restrict ourselves for simplicity to the case of two 0-branes,
corresponding to supersymmetric $U(2)$ matrix quantum mechanics
(\ref{eq:super-qm}). In a purely bosonic theory, for a classical
configuration in which the two 0-branes are separated by a distance
$r$, the off-diagonal matrix elements behave like harmonic oscillators
with frequencies, and hence quantum ground state energies,
proportional to $r$. This effective confining potential leads us to
expect a discrete spectrum of bound states in the bosonic
theory \cite{Simon}. Once supersymmetry has been included, the fermions
contribute ground state energies with the opposite sign, which
precisely cancel the zero-point energies of the bosons. In principle,
this allows for the possibility of a zero-energy ground state
corresponding to a marginal bound state of two 0-branes. The
existence of such a state was finally proven definitively in the work
of Sethi and Stern \cite{Sethi-Stern}. Remarkably, the existence of
the marginally bound state depends crucially upon the large degree of
supersymmetry in the 0-brane matrix quantum mechanics. The analogous
theories with 8 and 4 supersymmetries which arise from dimensional
reduction of ${\cal N} = 1$ theories in 6 and 4 dimensions have no
such bound states.
{}
\vspace{0.08in}
\noindent
{\it 5.1.6\ \ \ Bound states of D-strings and fundamental strings}
\vspace{0.05in}
We conclude our discussion of bound states with a brief discussion of
bound states of D-strings with fundamental strings. This was in fact
the first of the bound state configurations described here to be
analyzed from the point of view of the D-brane world-volume gauge
theory \cite{Witten-bound}. In IIB string theory, states of
1-branes and strings with quantum numbers $(N, k)$ transform as a
vector under the $SL(2,Z)$ S-duality symmetry. Combining this
symmetry with T-duality, the following diagram shows that $N$
D-strings and $k$ fundamental strings should form a truly bound state
when $N$ and $k$ are relatively prime, since this configuration is dual
to an $(N, k)$ 2-0 system:
\begin{center}
\begin{picture}(200,40)(- 100,- 25)
\put(-120,0){\makebox(0,0){IIA}}
\put(120,0){\makebox(0,0){IIB}}
\put(-40,0){\makebox(0,0){IIB}}
\put(40,0){\makebox(0,0){IIB}}
\multiput(-108,0)(80,0){3}{\vector(1,0){56}}
\multiput(-52,0)(80,0){3}{\vector(-1,0){56}}
\put(-80,10){\makebox(0,0){\tiny $T_3$}}
\put(0,10){\makebox(0,0){\tiny $S$}}
\put(80,10){\makebox(0,0){\tiny $T_{12}$}}
\put(-120,-15){\makebox(0,0){\small D2(12)+D0}}
\put(-40,-15){\makebox(0,0){\small D3(123)+D1(3)}}
\put(40,-15){\makebox(0,0){\small D3(123)+F1(3)}}
\put(120,-15){\makebox(0,0){\small D1(3)+F1(3)}}
\end{picture}
\end{center}
Indeed, Witten showed that an argument for the existence of such a
bound state can be given in terms of the world-volume
gauge theory. As discussed in Section
\ref{sec:strings-electric}, string winding number is proportional to
electric flux on the 1-brane world-sheet. As mentioned previously,
the quantization of fundamental string number is therefore a quantum
effect in this gauge theory; the flux quantum in the $U(N)$ theory
\begin{equation}
e = \frac{g}{2 \pi \alpha' N}
\end{equation}
can be related to the fundamental unit of momentum of a 0-brane on a
T-dual circle
\begin{equation}
\pi = \frac{\dot{x}}{ \hat{g} \sqrt{\alpha'}} = \frac{1}{N\hat{R}}
\end{equation}
through
\begin{equation}
\hat{g} \sqrt{\alpha'} \pi = (2 \pi \alpha') e = g/N.
\end{equation}
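As a consistency check, assuming the standard T-duality relations $\hat{R} = \alpha'/R$ and $\hat{g} = g \sqrt{\alpha'}/R$ for the circle being dualized,
\begin{equation}
\hat{g} \sqrt{\alpha'}\, \pi = \frac{g \sqrt{\alpha'}}{R} \cdot \frac{\sqrt{\alpha'}}{N \hat{R}}
= \frac{g \alpha'}{N R \hat{R}} = \frac{g}{N} = (2 \pi \alpha') e,
\end{equation}
in agreement with the identification above.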
In terms of this flux quantum it
is easy to check that the leading term in the Born-Infeld expression
for the bound state energy indeed corresponds with that found in the
gauge theory
\begin{eqnarray}
E &
= & L \sqrt{(\tau_1 N)^2 + (\frac{k}{2 \pi \alpha'})^2 } = L \tau_1 N
\sqrt{1 + k^2 g^2/N^2} \nonumber\\
 & = & L \tau_1 N +\frac{1}{2} L \tau_1 N
(4 \pi^2 \alpha'^2 k^2 e^2) + \cdots
\end{eqnarray}
{}
\normalsize
\subsection{Potentials between separated D-branes}
Now that we have discussed bound states of various types of D-branes,
we go on to consider interactions between separated branes. In string
theory the dominant long-distance interaction between D-branes is
found by calculating the annulus diagram which
corresponds to the exchange of a closed string between the two objects
(see Figure~\ref{f:annulus}). At long distances, this amplitude is
dominated by the massless closed string modes, which give an effective
supergravity interaction between the objects. The annulus diagram can
also be interpreted in the open string channel as a one-loop diagram.
We expect that the gauge theory description of the interaction between
the branes should be given by the massless open string modes, which
are relevant at short distances. These two calculations (restricting
to massless closed and open strings respectively) represent different
truncations of the full string spectrum. There does not seem to be
any {\it a priori} reason why gauge theory should correctly describe long
distance physics, although in some cases the calculations may agree
because the configuration has some supersymmetry protecting its
physical properties. The original computations of interactions
between separated D-branes were carried out in the context of the full
string theory spectrum \cite{Polchinski,Bachas,Lifschytz}. In
the spirit of these lectures, however, we will confine our discussion to
aspects of D-brane interactions which can be studied in the context of
Yang-Mills theory. As we shall see, many of the important qualitative
features (and some quantitative features) of D-brane interactions can
be seen from this point of view.
\begin{figure}
\vspace{-0.3in}
\psfig{figure=annulus.eps,height=1.5in}
\vspace{-0.3in}
\caption[x]{\footnotesize Annulus diagram for D-brane interactions}
\label{f:annulus}
\end{figure}
In the next few subsections we consider static potentials between
branes of various dimensions. Using T-duality, one of these branes
can always be transformed into a 0-brane, so without loss of
generality we restrict ourselves to interactions between 0-branes and
$p$-branes with $p$ even.
{}
\vspace{0.08in}
\noindent
{\it 5.2.1\ \ \ Static $p-p$ potential}
\vspace{0.05in}
To begin with, let us consider a pair of parallel $p$-branes. In
Yang-Mills theory such a configuration is described by a $U(2)$ gauge
theory with a nonzero scalar VEV
\begin{equation}
\langle X^a\rangle = d (\tau_3 + {\rlap{1} \hskip 1.6pt \hbox{1}})/2 = \left(\begin{array}{cc}
d & 0\\
0 & 0
\end{array} \right)
\end{equation}
where $d$ is the distance between the branes. For any $d$, this is a
BPS configuration with Born-Infeld energy $2 E_p$ and vanishing
Yang-Mills energy. Therefore there is no force between the branes
even in the quantum theory. This agrees with the results of the full
string calculation by Polchinski \cite{Polchinski}. In the (closed)
string
calculation, there is a delicate balance between NS-NS and R-R string
exchanges. Note that in a purely bosonic theory, although there is no
classical potential between the branes, there is a quantum-mechanical
attraction between the branes due to the zero-point energy of the
off-diagonal fields, as mentioned in the discussion of 0-brane bound
states.
{}
\vspace{0.08in}
\noindent
{\it 5.2.2\ \ \ Static 0-2 potential \footnote{This subsection is based
on conversations with Hashimoto, Lee, Peet and Thorlacius.}}
\vspace{0.05in}
Unlike the $p-p$ system, a configuration containing a single 2-brane
and a single 0-brane cannot be described by a simple gauge theory
configuration when the branes are not coincident. This makes it
slightly more difficult to study the interactions between a 0-brane
and a 2-brane in Yang-Mills theory. Since we know that the
static potential between a pair of 2-branes must vanish, however, we can study
the static potential between a 0-brane and a 2-brane by attaching the
0-brane to an auxiliary 2-brane. Thus, we consider a pair of 2-branes
on a torus $T^2$ of area $L^2$, corresponding to a $U(2)$ gauge theory
with a single unit of magnetic flux ${\rm Tr}\int F = 2 \pi$. If we
fix the expectation values of the scalar fields $X^a$ to vanish except
in a single direction with $X^3 = d \tau_3/2$ then the branes are
fixed at a relative separation $d$. For a fixed value of $d$, we can
then minimize the Yang-Mills energy associated with the gauge field
$A_\alpha$. This energy will depend upon the separation $d$ because of the
terms of the form $[A_\alpha, X^3]^2$, and gives a classical potential
function $V (d)$. As we discussed in Section 5.1.2, when $d = 0$ the
energy will be minimized when the flux is shared between the two
2-branes, corresponding to a (2,1) bound state. In this case the
Yang-Mills energy of the system is proportional to
\begin{equation}
v (0) = \frac{L^2}{4 \pi^2} \int {\rm Tr}\; F_{(0)}^2 = 1/2.
\end{equation}
On the other hand, when $d$ is very large the energy
will be minimized when the flux is constrained to one of the diagonal
U(1)'s corresponding to a single brane, so that
\begin{equation}
v (\infty) =\frac{L^2}{4 \pi^2}\int {\rm Tr}\; F_{(\infty)}^2 = 1.
\end{equation}
In this case the flux cannot be shared because the constant scalar
field $X^3$ is not compatible with the boundary conditions
(\ref{eq:boundary2}) needed for the curvature to be proportional to
the identity matrix. The energetics of these two limits are easy to
understand in the T-dual picture, where the configuration at $d = 0$
corresponds to a single diagonally wrapped (2,1) D-string while the $d
\rightarrow \infty$ configuration corresponds to a pair of strings
with windings $(1, 0)$ and $(1, 1)$ (see Figure~\ref{f:0-2}).
\begin{figure}
\vspace{-0.3in}
\psfig{figure=two.eps,height=2in}
\vspace{-0.3in}
\caption[x]{\footnotesize D-string configurations which are T-dual to
a pair of separated 2-branes with a unit of flux (0-brane charge)}
\label{f:0-2}
\end{figure}
In fact, it turns out that the potential function $v (d)$ is constant
for any $d$ greater than a critical distance $d_c$ \cite{vb-unstable}.
Beyond this distance, the 0-brane and 2-brane have essentially no
interaction classically. Below this distance, however, the solution
where the flux is confined to a single 2-brane becomes unstable and
the potential function drops continuously down to the value 1/2 at $d
= 0$. When quantum effects are included, for example by integrating
out the off-diagonal fields at one loop, the potential is smoothed and
the force between the objects becomes nonzero and attractive at
arbitrary distance. These results agree perfectly with the full
string calculation, which indicates that there is an
attractive potential at all distances \cite{Lifschytz}.
It is interesting to compare this analysis with a similar discussion
by Douglas, Kabat, Pouliot and Shenker (DKPS) \cite{DKPS} of the
potential between a pair of 1-branes carrying a single fundamental
string charge. The situation they consider is dual to the 0-2
configuration we are discussing; however, because the quantization of
electric field strength occurs only at the quantum level, the
potential they calculate appears only when quantum effects are
considered. The one-loop potential they calculate is smooth and gives
a nonzero attractive force at all distances, as we expect from the
one-loop calculation in the 0-2 case.
The fact that the force between the 0-brane and 2-brane is mostly
localized within a finite distance $d_c$ provides a simple example of
a general feature which is most clearly seen in brane-anti-brane
interactions \cite{Banks-Susskind}. Namely, when two brane
configurations are separately stable but can combine to form a lower
energy configuration, at a distance analogous to $d_c$ a tachyonic
instability appears in the system which indicates the existence of the
lower energy configuration. A similar situation to the one we have
described here occurs when two 2-branes are provided with 0-brane and
anti-0-brane charges respectively. In this case when the 2-branes are
brought sufficiently close a tachyonic instability appears which
allows the 0-brane and anti-0-brane to annihilate. In the
Yang-Mills language we are using here, these unstable modes can be
explicitly constructed as degrees of freedom in the gauge field.
Because of the nontrivial boundary conditions on the field,
these degrees of freedom are described as theta functions which are
sections of a nontrivial U(1) bundle on the torus. Using these theta
functions, the tachyonic instabilities associated with brane-brane and
brane-anti-brane forces can be precisely analyzed \cite{ht,gns}.
\vspace{0.08in}
\noindent
{\it 5.2.3\ \ \ Static 0-4 potential}
\vspace{0.05in}
Just as we placed a 0-brane on an auxiliary 2-brane to determine the
form of the static 0-2 potential, we can place a 0-brane on a pair of
auxiliary 4-branes to determine the static potential between a 0-brane
and a 4-brane. Thus, we consider a set of 3 (uncompactified) 4-branes
with a scalar field $X^5 = {\rm Diag} (d, d, 0)$, with a $U(2)$
instanton living on the first two 4-branes. This configuration has a
self-dual gauge field, so it is a BPS state in the moduli space of
$U(3)$ instantons. Thus, the potential is independent of the distance
$d$ even after quantum effects are considered and we see that there is
no static potential between 0-branes and 4-branes. This is in
agreement with the results from the full string calculation
\cite{Lifschytz}.
\vspace{0.08in}
\noindent
{\it 5.2.4\ \ \ Static 0-6 and 0-8 potential}
\vspace{0.05in}
The static potential between a 0-brane and a 6-brane or an 8-brane is
not as easy to understand from the point of view of gauge theory as in
the 0-0, 0-2, and 0-4 cases, since there are no known 0-6 or 0-8 bound
states. As mentioned in Section 5.1.4, however, a set of 4 or 8
0-branes can be smoothly distributed on the world-volume of 4 or 8
6-branes or 8-branes in a stable way after energy is added to the
system. In the case of the 6-brane, this corresponds to the fact that
there is a repulsive interaction between 0-branes and 6-branes (as
determined by the string calculation), so that energy is needed to
push them together from an infinite separation. Based on Yang-Mills
theory alone, then, one might think that in the 0-8 case one would
also get a repulsive force between the branes. There is an extra
complication in this case, however, arising from interactions via R-R
fields. In fact, the potential between separated 0-branes and
8-branes vanishes and such configurations preserve some supersymmetry.
The 0-8 story has a number of subtleties, however. For example, as a
0-brane passes through an 8-brane, a (fundamental) string is created
\cite{Hanany-Witten,bdg,dfk,bgl}. This string produces a charge
density on the 8-brane world-volume. The physics associated with this
system is still a matter of some discussion.
\vspace{0.08in}
\noindent
{\it 5.2.5\ \ \ 0-brane scattering}
\vspace{0.05in}
We will now consider the interaction between a pair of moving
0-branes. The classical configuration space for a pair of 0-branes is
the flat quotient space
\begin{equation}
\frac{({\bb R}^9)^2}{{{\bb Z}}_2}.
\end{equation}
As discussed in Section 5.2.1, this configuration space is
protected by supersymmetry, so that all points in the space correspond
to classical BPS states of the two 0-brane system. When the two
0-branes have a nonzero relative velocity, however, the supersymmetry
of the system is broken and a velocity-dependent potential appears
between the branes. The leading term in this potential can be
determined by performing a one-loop calculation in the 0-brane quantum
mechanics theory. We will now review this calculation briefly. The
Yang-Mills calculation of the potential between two moving 0-branes
was first carried out by Douglas, Kabat, Pouliot and Shenker
\cite{DKPS}; many variations on this calculation have appeared in the
literature over the last year or so, particularly in the context of
Matrix theory. This calculation will be discussed further in
Section \ref{sec:matrix-interactions}.
To find the velocity-dependent potential at one-loop, we begin by
considering a classical background solution for the two-particle
system in which the two 0-branes are moving with relative velocity $v$
in the $X^1$ direction with an impact parameter of $b$ along the $X^2$
axis
\begin{eqnarray}
X^2 (t) & = &\left(\begin{array}{cc}
b & 0\\
0 & 0
\end{array} \right)\\
X^1 (t) &= & \left(\begin{array}{cc}
vt & 0\\
0 & 0
\end{array} \right) \nonumber
\end{eqnarray}
To calculate the effective potential between these 0-branes at
one-loop order, we need to integrate out the off-diagonal fields. We
can perform the calculation in background-field gauge, where we set
\begin{equation}
X^a = \langle X^a \rangle + \delta X^a
\end{equation}
and where we add a background-field gauge fixing term
\begin{equation}
-\frac{1}{2 R} \left(\dot{A}_0+i[\langle X^a \rangle, X^a] \right)^2
\end{equation}
to the Lagrangian (\ref{eq:super-qm}). We can calculate the one-loop
potential by expanding the action to quadratic order in the
off-diagonal fluctuations. In the quasi-static approximation, which
is valid to leading order in the inverse separation
\cite{Tafjord-Periwal}, the potential is then simply given by the sum
of the ground state energies of the corresponding harmonic
oscillators. There are 10 (complex) bosonic oscillators, with frequencies
\begin{eqnarray*}
\omega_b & = & \sqrt{r^2} \;\;\;\;\; {\rm with\ multiplicity\ 8}\\
\omega_b & = & \sqrt{r^2 \pm 2iv} \;\;\;\;\;
{\rm with\ multiplicity\ 1\ each},
\end{eqnarray*}
where $r = \sqrt{b^2 + v^2 t^2}$ is the distance between the branes at
time $t$.
There are also 2 ghosts with frequencies
\begin{equation}
\omega_g = \sqrt{r^2},
\end{equation}
and there are 16 fermions with frequencies
\begin{equation}
\omega_f = \sqrt{r^2 \pm iv} \;\;\;\;\;
{\rm with\ multiplicity\ 8\ each}.
\end{equation}
The velocity-dependent potential is then given by
\begin{equation}
V = \sum_{b}\omega_b - 2\, \omega_g - \frac{1}{2} \sum_{f} \omega_f.
\label{eq:oscillator-sum}
\end{equation}
For $v = 0$ the frequencies clearly cancel and the potential
vanishes. For nonzero $v$ we can expand each frequency in a power
series in $1/r$. At the first three orders in $v/r^2$ the potential
vanishes; the first nonvanishing term appears at fourth order, so that
the potential between the 0-branes is given at leading order by
\begin{equation}
V (r) =\frac{-15v^4}{16\;r^7}.
\label{eq:scattering-potential}
\end{equation}
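As a consistency check, the expansion leading to
(\ref{eq:scattering-potential}) can be verified numerically by summing
the oscillator frequencies listed above. The following sketch (with
illustrative, hypothetical values of $r$ and $v$) evaluates the sum
(\ref{eq:oscillator-sum}) directly and compares it to the leading
$v^4/r^7$ term.

```python
import cmath

def potential(r, v):
    """Oscillator sum V = sum_b w_b - 2 w_g - (1/2) sum_f w_f at separation r."""
    # 8 bosonic oscillators with w = r, plus one each with w = sqrt(r^2 +- 2iv)
    bosons = 8 * r + (cmath.sqrt(r**2 + 2j * v) + cmath.sqrt(r**2 - 2j * v)).real
    # 2 ghosts with w = r
    ghosts = 2 * r
    # 16 fermions: w = sqrt(r^2 +- iv), multiplicity 8 each
    fermions = 8 * (cmath.sqrt(r**2 + 1j * v) + cmath.sqrt(r**2 - 1j * v)).real
    return bosons - ghosts - 0.5 * fermions

r, v = 3.0, 0.5                         # illustrative values
leading = -15 * v**4 / (16 * r**7)      # leading term of the expansion
```

For these values the exact sum agrees with the leading term to within a
couple of percent, the difference being the $O(v^6/r^{11})$ correction;
at $v = 0$ the sum cancels identically, as stated in the text.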
As we will discuss in more detail in the following sections, it can be
checked that this is in precise agreement with the corresponding
potential in supergravity, including the multiplicative constant.
\normalsize
\section{M(atrix) theory: The Conjecture}
\label{sec:conjecture}
In the first four lectures we accumulated a fairly wide range of
results which can be derived from the Yang-Mills description of
D-branes. The last lecture (Sections \ref{sec:conjecture},
\ref{sec:evidence} and \ref{sec:developments}) contains an
introduction to the Matrix theory conjecture, which states that the
large $N$ limit of the Yang-Mills quantum mechanics of 0-branes
contains a large portion of the physics of M-theory and string theory.
As we shall see, much of the evidence for the Matrix theory
conjecture is based on properties of the Yang-Mills description of
D-branes which we have discussed in the context of type II string
theory. The discussion given here of Matrix theory is fairly
abbreviated and focuses on understanding how the objects and
interactions of supergravity can be found in Matrix theory. The
core of the material is based on the original lectures given in June
of 1997; however, some more recent material is included which is
particularly germane to the subject matter of the original lectures.
Many important and interesting aspects of the theory are mentioned
briefly, if at all. Other reviews which discuss some recent
developments in more detail have been given by Banks
\cite{banks-review} and by Susskind \cite{Susskind-review}.
This section contains the statement of the Matrix theory conjecture
as well as a brief review of some background material useful in
understanding the statement of the conjecture, namely short reviews of
M-theory and the infinite momentum frame. In Section
\ref{sec:evidence} we discuss some of the evidence for Matrix
theory, and in Section \ref{sec:developments} we discuss some further
directions in which this theory has been explored.
\subsection{M-theory}
The concept of M-theory has played a fairly central role in the
development of the web of duality symmetries which relate the five
string theories to each other and to supergravity
\cite{Hull-Townsend,Witten-various,dlm,Schwarz-m,Horava-Witten}.
M-theory is a conjectured eleven-dimensional theory whose low-energy
limit corresponds to 11D supergravity \cite{cjs}. Although there are
difficulties with constructing a quantum version of 11D supergravity,
it is a well-defined classical theory with the following field content:
\vspace{0.03in}
\noindent
$e^a_I$: a vielbein field (bosonic, with 44 components)\\
\noindent
$\psi_I$: a Majorana fermion gravitino (fermionic, with 128
components)\\
\noindent
$A_{I J K}$: a 3-form potential (bosonic, with 84 components).
\vspace{0.02in}
In addition to being a local theory of gravity with an extra 3-form
potential field, M-theory also contains extended objects. These
consist of a two-dimensional supermembrane and a 5-brane.
One way of defining M-theory is as the strong coupling limit of the
type IIA string. The IIA string theory is taken to be equivalent to
M-theory compactified on a circle $S^1$, where the radius of
compactification $R$ of the circle in direction 11 is related to the
string coupling $g$ through $R= g^{2/3}l_p = g l_s$, where $l_p$ and
$l_s = \sqrt{\alpha'}$ are the M-theory Planck length and the string
length respectively. The
decompactification limit $R \rightarrow \infty$ corresponds then to
the strong coupling limit of the IIA string theory. (Note that we
will always take the eleven dimensions of M-theory to be labeled $0,
1, \ldots, 8, 9, 11$; capitalized roman indices $I, J, \ldots$ denote
11-dimensional indices).
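Note that consistency of the two expressions for $R$ fixes the standard
relation between the eleven-dimensional Planck length and the string
scale:
\begin{equation}
g^{2/3} l_p = g\, l_s \;\; \Longrightarrow \;\; l_p = g^{1/3} l_s,
\;\;\;\;\; l_p^3 = g\, l_s^3 = R\, l_s^2.
\end{equation}
In particular, at fixed $l_s$ the decompactification limit $R
\rightarrow \infty$ is indeed the strong coupling limit $g \rightarrow
\infty$.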
Given this relationship between compactified M-theory and IIA
string theory, a correspondence can be constructed between various objects in
the two theories. For example, the Kaluza-Klein photon associated
with the components $g_{\mu 11}$ of the 11D metric tensor can be
associated with the R-R gauge field $A_\mu$ in IIA string theory. The
only object which is charged under this R-R gauge field in IIA string
theory is the 0-brane; thus, the 0-brane can be associated with a
supergraviton with
momentum $p_{11}$ in the compactified direction. The membrane and
5-brane of M-theory can be associated with different IIA objects
depending on whether or not they are wrapped around the compactified
direction; the correspondence between
various M-theory and IIA objects is given in Table~\ref{tab:m2}.
\begin{table}[t]
\caption{Correspondence between objects in M-theory and
IIA string theory\label{tab:m2}}\vspace{0.4cm}
\begin{center}
\begin{tabular}{|l|l|}
\hline
M-theory & IIA\\
\hline
KK photon ($g_{\mu 11}$) & RR gauge field $A_\mu$\\
supergraviton with $p_{11}= 1/R$ & 0-brane\\
wrapped membrane & IIA string\\
unwrapped membrane & IIA D 2-brane\\
wrapped 5-brane & IIA D 4-brane\\
unwrapped 5-brane & IIA NS 5-brane\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Infinite momentum frame}
Roughly speaking,
the infinite momentum frame (IMF) is a frame in which the physics has
been
heavily boosted in one particular direction. This frame has the
advantage that it simplifies many aspects of relativistic quantum
field theories \cite{Kogut-Susskind}.
To study a theory in the IMF, we begin by choosing a longitudinal
direction; this will be $X^{11}$ in the case of M-theory. We then
restrict attention to states which have very large values of momentum
$p_{11}$ in the longitudinal direction. This is sometimes stated in
the form that any system of interest should be heavily boosted in the
longitudinal direction; however, this latter formulation leads to some
subtleties, particularly when the longitudinal direction is compact.
The basic idea of the IMF is that if we are interested in
scattering amplitudes where the in-states and out-states have large
values of $p_{11}$ then we can integrate out all the states with
negative or vanishing $p_{11}$, giving a simplified theory. In
general, intermediate states without large $p_{11}$ will indeed be
highly suppressed. Degrees of freedom associated with
zero-modes can cause complications, however \cite{Hellerman-Polchinski}.
One advantage of the IMF is that it turns a relativistic
theory into one with a simpler, Galilean, invariance group. If a
state has a large longitudinal momentum $p_{11}$ then to leading order
in $1/p_{11}$ a Lorentz boost acts as a
Galilean boost on the transverse momentum $p_\bot$ of the state
\begin{equation}
p_\bot \rightarrow p_\bot + p_{11} v_\bot.
\end{equation}
A massless particle has an energy which is given to leading order in
$1/p_{11}$ by
a Galilean energy
\begin{equation}
E-p_{11} = \frac{p_\bot^2}{2p_{11}}
\end{equation}
in the IMF. Thus, we see that the longitudinal momentum $p_{11}$
plays the role of the mass in the IMF Galilean theory.
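The Galilean energy quoted above is just the expansion of the
relativistic dispersion relation for a massless particle:
\begin{equation}
E = \sqrt{p_{11}^2 + p_\bot^2} = p_{11} \sqrt{1 + \frac{p_\bot^2}{p_{11}^2}}
 = p_{11} + \frac{p_\bot^2}{2 p_{11}} + {\cal O} \left( \frac{1}{p_{11}^3}
 \right).
\end{equation}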
If the longitudinal direction $X^{11}$ is compact with radius $R$,
then longitudinal momentum is naturally quantized in units of $1/R$,
so that $p_{11} = N/R$. Note that, as mentioned above, there are
subtleties with boosting a compactified theory; in particular, a boost
is not a symmetry of a Lorentz invariant theory which has been
compactified in the direction of the boost, since after the boost the
constant time surface becomes noncompact. If we simply treat the IMF
as a way of calculating interactions between states with large
longitudinal momentum, however, this complication need not concern us.
The description of a theory in the
infinite momentum frame is closely related to the description of the
theory given in
light-front coordinates.
In fact, for comparing Matrix theory to supergravity it is most
convenient to use the language of discrete light-front quantization
(DLCQ) \cite{Susskind-DLCQ}. In this framework a system is
compactified in a lightlike direction $x^-$ so that the longitudinal
momentum $p^+$ is quantized in units of $1/R$, taking the values $p^+ =
N/R$, where we set
\begin{equation}
x^\pm = \frac{1}{ \sqrt{2}} (x^0 \pm x^{11}), \;\;\;\;\;
x^-\equiv x^- + 2 \pi R
\end{equation}
(note that the light-front metric has $\eta_{+ -}= -1, \eta_{+ +} =
\eta_{--}= 0$). The DLCQ prescription gives a light-front description
of a theory in the IMF when $N \rightarrow \infty$. Further
discussion of DLCQ quantization in the context of Matrix theory can be
found in
\cite{Sen,Seiberg-DLCQ,Hellerman-Polchinski,Bigatti-Susskind,Susskind-review}.
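As a quick check of the conventions above, the light-front form of the
metric follows from $dx^0 = (dx^+ + dx^-)/\sqrt{2}$ and $dx^{11} = (dx^+
- dx^-)/\sqrt{2}$:
\begin{equation}
ds^2 = -(dx^0)^2 + (dx^{11})^2 + dx_\bot^2 = -2\, dx^+ dx^- + dx_\bot^2,
\end{equation}
which gives $\eta_{+-} = -1$, $\eta_{++} = \eta_{--} = 0$ as stated.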
\subsection{The conjecture}
The following conjecture was made by Banks, Fischler, Shenker and
Susskind (BFSS) \cite{BFSS}:
{\it M-theory in the IMF is exactly described by the $N \rightarrow
\infty$ limit of 0-brane quantum mechanics
\begin{equation}
{\cal L}= \frac{1}{2 R} {\rm Tr}\; \left[
\dot{X}^a \dot{X}_a
+ \sum_{ a < b}[X^a, X^b]^2+ \theta^T (i\dot{\theta}
- \Gamma_a[X^a, \theta]) \right]
\label{eq:matrix-Lagrangian}
\end{equation}
where $N/R$ plays the role
of the longitudinal momentum, and where $N/R$ and $R$ are both taken to
$\infty$.}
Note that (\ref{eq:matrix-Lagrangian}) is the same as
(\ref{eq:super-qm}) in units where $2 \pi \alpha' = 1$, after
replacing $g \sqrt{\alpha'} = R$.
Although we will continue to work in the string units in which
(\ref{eq:matrix-Lagrangian}) is expressed, in many references the
Lagrangian is expressed in Planck units
\begin{equation}
{\cal L}={\rm Tr}\; \left[
\frac{1}{2 R} \dot{X}^a \dot{X}_a
+ \frac{R}{8\pi^2} \sum_{a < b} [X^a, X^b]^2+ \frac{1}{4 \pi} \theta^T
(i\dot{\theta}
- \frac{R}{2\pi} \; \Gamma_a[X^a, \theta]) \right].
\label{eq:matrix-Lagrangian-Planck}
\end{equation}
The change of units can be carried out
by simply replacing $\alpha' \rightarrow l_p^2 g^{-2/3}$ in
(\ref{eq:super-qm}) and setting $l_p = 1$.
The original evidence for this conjecture included the following:
\noindent
$\circ$ Only 0-branes carry $p_{11}$. Not only does this mean that
states in M-theory with large $p_{11}$ are composed primarily of
0-branes, but this also fits naturally into the holographic principle
espoused by 't Hooft and Susskind
\cite{Hooft-reduction,Susskind-holographic} which states that at large
momentum string theory states can be described in terms of elementary
partons which each take up a single Planck unit of transverse area.
(Related ideas have also been discussed by Thorn \cite{Thorn-bits}.)
\noindent
$\circ$ The 10D Super-Galilean invariance of (\ref{eq:matrix-Lagrangian}).
\noindent
$\circ$ The fact that
graviton scattering amplitudes in 11D supergravity are correctly
described by the scattering amplitude of 0-branes arising from the leading
$v^4/r^7$ potential term.
\noindent
$\circ$ The natural appearance of the supermembrane in the matrix
quantum mechanics theory \cite{dhn}. This connection between the
low-energy theory of 0-branes and the light-front supermembrane theory
was also pointed
out by Townsend \cite{Townsend}.
In the time since this conjecture was made, supporting evidence
has continued to appear. In the following section, we will discuss
some of this evidence.
\subsection{Matrix compactification}
\label{sec:compactification}
Before discussing in detail the evidence for Matrix theory, let us
discuss briefly the issue of compactifying the theory. Compactifying
Matrix theory on a manifold $M$ would correspond to a
compactification of M-theory on $M \times S^1$ where the $S^1$ is
taken to a decompactified limit through $R \rightarrow \infty$. There
are several ways in which BFSS suggested it might be possible to
define a compactified version of Matrix theory.
The first approach to compactifying the theory would be to simply
define Matrix theory on a manifold $M$ to be the large $N$ limit of
the theory of $N$ 0-branes on $M$. For example, by using the
equivalence discussed in Section \ref{sec:T-duality} between the 0-brane
theory on a torus and super Yang-Mills theory on the dual torus, this
would define Matrix theory on the torus $T^d$ in terms of a
$d$-dimensional super Yang-Mills theory. The torus can then be modded
out by a finite group to get Matrix theory on an orbifold.
So far, however, 0-brane quantum mechanics is well understood only on
tori and orbifolds. Some progress has been made on curved manifolds,
particularly on K3 and Calabi-Yau spaces \cite{dos,Douglas-Ooguri},
but the situation is not as clear in these cases. Thus, this approach
does not immediately lead to a candidate definition of Matrix theory
compactified on an arbitrary manifold.
A second approach to compactifying Matrix theory
involves taking superselection sectors of the theory which may
correspond to different compactifications. For example, in the large
$N$ limit we can take infinite matrices satisfying
\begin{equation}
U X^a U^{-1} = X^a + \delta^{a9}\, 2 \pi R_9\, {\rlap{1} \hskip 1.6pt \hbox{1}}
\end{equation}
for some ``translation'' operator $U$ and radius $R_9$. This
superselection sector of the theory corresponds to an $S^1$
compactification, since the matrices satisfying this relation can be
interpreted in terms of the fields of $(1 + 1)$-D super Yang-Mills theory on
the circle as in (\ref{eq:constraint2}). In a similar way, it is easy
to see that Matrix theory ``contains'' the SYM theory in all
dimensions $d \leq 10$. It is an interesting open question whether
there are other superselection sectors of the theory which naturally
correspond to compactifications on non-toroidal spaces.
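To make this concrete, one explicit solution of the constraint (a
schematic sketch, in a basis where the infinite matrices are labeled by
integers $m, n$, suppressing the $U(N)$ block structure) is
\begin{equation}
(X^9)_{mn} = 2 \pi R_9\, n\, \delta_{mn} + (A_9)_{m-n}, \;\;\;\;\;
U_{mn} = \delta_{m+1,\, n},
\end{equation}
so that $(U X^9 U^{-1})_{mn} = (X^9)_{m+1,\, n+1} = (X^9)_{mn} + 2 \pi
R_9\, \delta_{mn}$. The Fourier coefficients $(A_9)_k$ can be
interpreted as the modes of a gauge field on the dual circle, as in
(\ref{eq:constraint2}).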
Both of these approaches to Matrix theory compactification give the
same prescription for compactifying the theory on a torus. We will
use this description of Matrix theory on a torus in terms of the super
Yang-Mills theory on the dual torus in the following discussion. For
compactification on tori of dimension $d > 3$, however, additional
features emerge which make the story more complicated. This issue will be
discussed briefly in Section \ref{sec:compactification2}.
\section{Matrix theory: Symmetries, Objects and Interactions}
\label{sec:evidence}
If the Matrix theory conjecture is correct, we would expect that all
the symmetries of M-theory should correspond to symmetries of
(\ref{eq:matrix-Lagrangian}). Furthermore, it should be possible to
find matrix constructions of all the objects we expect to see in the
11-dimensional supergravity theory which describes M-theory at low
energies, and the interactions between these objects in Matrix theory
should agree with the interactions between the corresponding
supergravity objects.
Most of the evidence to date for the Matrix theory
conjecture consists of showing that some piece of this correspondence
is correct. In this section we review some of this evidence, divided
into the three categories mentioned. Recent arguments for the Matrix
theory conjecture based on more general principles are discussed in
Section \ref{sec:recent}.
\subsection{Symmetries in Matrix theory}
There are two important symmetries of M-theory which we would like to
see reproduced in Matrix theory. First is the Lorentz symmetry of the
theory. This is explicitly broken in the IMF; nonetheless, one would
hope that a residual version of this symmetry would still be present.
The second symmetry of M-theory which should be reproduced by Matrix
theory is the group of duality symmetries of the theory. We now
discuss these symmetries in turn.
\vspace{0.08in}
\noindent
{\it 7.1.1\ \ \ Lorentz symmetry in Matrix theory}
\vspace{0.05in}
There is as yet very little evidence for a residual Lorentz
symmetry in Matrix theory. In fact, this is one of the directions in
which the least progress has been made. Some evidence that the theory
has Lorentz invariance at the classical level was given by de Wit,
Marquard and Nicolai \@ifnextchar [{\@tempswatrue\@mycitex}{\@tempswafalse\@mycitex[]}{dmn}. As stressed by BFSS, however, the
quantum version of this argument is liable to be much more subtle.
Other results relevant to the Lorentz symmetry of the theory include
calculations of scattering with longitudinal momentum transfer which
we discuss further below.
\vspace{0.08in}
\noindent
{\it 7.1.2\ \ \ Duality in Matrix theory}
\vspace{0.05in}
In addition to Lorentz symmetry, M-theory has a set of
duality symmetries which appear when the theory has been compactified
on a $d$-dimensional torus. This group of
``U-duality'' symmetries increases in size and complexity as each
additional dimension is compactified, as discussed in the lectures
of Mukhi in this school. In this section we discuss the case $d = 3$
in some detail. Compactification on tori of other dimensions is
discussed briefly in Section \ref{sec:compactification2}.
After compactification on $T^3$, the U-duality group of M-theory is
$SL(3,Z) \times SL(2,Z)$. This group is generated by two types of
elementary symmetries. The $SL(3,Z)$ part of the U-duality group
corresponds to the symmetry group of the moduli space of $T^3$
compactifications. Since this symmetry is simply related to the
compactification space, it is a manifest symmetry of toroidally
compactified Matrix theory. The $SL(2,Z)$ part of the U-duality group
corresponds to a form of M-theory T-duality \cite{Sen,Aharony}.
Symmetries in this group can invert the volume of the compactification
3-torus, and are not manifest from the Matrix theory point of view.
We will now discuss T-duality in M-theory and its realization in
Matrix theory.
One simple way to understand T-duality in M-theory is through its
relationship with IIA T-duality. If we compactify M-theory on a
3-torus in dimensions 8, 9 and 11, then we can draw the following
commuting diagram of duality symmetries
\begin{center}
\centering
\begin{picture}(200, 60)(- 100,- 25)
\put(-25,15){\makebox(0,0){M}}
\put(25,15){\makebox(0,0){M}}
\put(-25, -15){\makebox(0,0){IIA}}
\put(25,-15){\makebox(0,0){IIA}}
\put(-15,15){\vector(1,0){30}}
\put(15,15){\vector(-1,0){30}}
\put(-13,-15){\vector(1,0){26}}
\put(13, -15){\vector(-1,0){26}}
\put(-25,7){\vector( 0, -1){14}}
\put(25,7){\vector( 0, -1){14}}
\put(-40, 0){\makebox(0,0){\tiny $R_{11}$}}
\put(40, 0){\makebox(0,0){\tiny $R_{11}$}}
\put(0,25){\makebox(0,0){\tiny $T_M$}}
\put(0,-6){\makebox(0,0){\tiny $T_{89}$}}
\end{picture}
\end{center}
Start with M-theory on the upper left. After compactification on
dimension 11 (the vertical arrow) this becomes IIA on $T^2$. Two
T-duality transformations give us another IIA theory on the dual of
the original $T^2$. This corresponds to a new compactification of
M-theory, so that by moving around the diagram we define an
isomorphism of M-theory. This isomorphism is an element $T_M$ of the
$SL(2,Z)$ T-duality group of M-theory.
A remarkable feature of this duality symmetry is that it acts on
M-theory in a way which is symmetric in dimensions 8, 9 and 11. More
precisely, after exchanging dimensions 8 and 9, the action of $T_M$ on
the original compactification $T^3$ is to invert the volume
of the torus through $T_M: V = R_8 R_9 R_{11} \leftrightarrow 1/V$.
This can be verified directly by following the various coupling
constants and radii around the diagram above.
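That bookkeeping can be sketched numerically. The following fragment (a
rough check, assuming the standard relations $l_s^2 = l_p^3/R_{11}$ and
$g = (R_{11}/l_p)^{3/2}$, the IIA T-duality rules $R \rightarrow
l_s^2/R$, $g \rightarrow g\, l_s/R$, and arbitrary illustrative radii)
follows the radii and couplings around the diagram and confirms that
the 3-torus volume, measured in the appropriate 11-dimensional Planck
units, is inverted.

```python
def m_to_iia(R11, lp):
    """Compactify M-theory on a circle of radius R11 (11D Planck length lp)."""
    ls = (lp**3 / R11) ** 0.5        # string length: l_s^2 = l_p^3 / R_11
    g = (R11 / lp) ** 1.5            # string coupling: g = (R_11 / l_p)^{3/2}
    return g, ls

def t_dual(R, g, ls):
    """IIA T-duality on a circle of radius R."""
    return ls**2 / R, g * ls / R

R8, R9, R11, lp = 2.0, 3.0, 5.0, 1.0      # illustrative radii, lp = 1
g, ls = m_to_iia(R11, lp)
R8d, g = t_dual(R8, g, ls)                # T-duality on direction 8
R9d, g = t_dual(R9, g, ls)                # T-duality on direction 9
R11d = g * ls                             # lift back: R_11' = g' l_s
lpd = (ls**2 * R11d) ** (1 / 3)           # new Planck length: l_p'^3 = l_s^2 R_11'

V = R8 * R9 * R11 / lp**3                 # volume in original Planck units
Vd = R8d * R9d * R11d / lpd**3            # volume in new Planck units
```

With these values $V = 30$ and $V' = 1/30$, realizing $T_M: V
\leftrightarrow 1/V$.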
It is interesting to consider the effects of the symmetry $T_M$ on the
various string and membrane states in the theory. Momentum on the
original $T^3$ can be identified with an element of the lattice dual
to that defining the compactification torus. (That is, for each compact
direction $a$ on the torus there is a corresponding integer momentum
$k_a$.) Similarly, a membrane which has been wrapped around some
2-cycle on $T^3$ can be identified with a vector on the dual lattice
which is perpendicular to the membrane.
M-theory T-duality exchanges these two dual vectors, swapping membrane
wrapping number with string momentum in the compact directions. We
can easily check this in various special cases by following the T-duality
symmetry through the above diagram. For example, if we begin with an
M-theory membrane wrapped in directions 9 and 11, after projection
into IIA this becomes a string wrapped on dimension 9. Two IIA
T-dualities take this into a string with momentum in dimension 9 and
no winding. Exchanging dimensions 8 and 9 turns this into momentum in
dimension 8. This lifts back into momentum in dimension 8 in
M-theory, which is a vector
orthogonal to the original 9-11 membrane. The reader
can check as an exercise that an 8-9 membrane in M-theory is mapped
into a state with 11-momentum in a similar fashion.
Now that we have discussed M-theory T-duality in some detail, we can
ask how this symmetry is realized in Matrix theory. We would expect
that if Matrix theory is compactified on a 3-torus, say in
dimensions 7, 8 and 9, then the theory should have
an $SL(2,Z)$ group of self-duality symmetries corresponding to the
group of M-theory T-dualities. From the discussion of
compactification in Section \ref{sec:compactification}, we expect that
Matrix theory on $T^3$ should correspond to ${\cal N} = 4$ super
Yang-Mills theory on the dual $\hat{T}^3$. In fact, this theory does have a
nontrivial $SL(2,Z)$ self-duality symmetry: the S-duality symmetry discussed in
Section \ref{sec:S-duality}. This is precisely the duality symmetry which
implements the Matrix theory version of M-theory T-duality
\cite{Susskind-duality,grt}.
As evidence for this identification of ${\cal N} = 4$
super Yang-Mills S-duality with
Matrix theory T-duality, we can consider the following observations:
First, as discussed in Section 3.1.2, the Yang-Mills
coupling constant for Matrix theory on $T^3$ is given by
\begin{equation}
\tau = \frac{i}{g_{{\rm YM}}^2} \sim iV_{789}.
\end{equation}
Under one element of the SYM S-duality group this coupling constant is
inverted through $\tau \rightarrow -1/\tau$; this corresponds to the
inversion of the volume of the torus which we expect from the element
$T_M$ of the M-theory T-duality group. Second, SYM S-duality
exchanges electric and magnetic fluxes. We have identified membranes
in Matrix theory with magnetic flux in the corresponding SYM theory
\begin{equation}
{\rm Tr}\;[X^a, X^b] \sim \int iB^{ab}
\end{equation}
and
momentum in Matrix theory with electric flux through
\begin{equation}
{\rm Tr}\; \Pi^a = {\rm Tr}\; \dot{X}^a \sim
\int {\rm Tr}\; \dot{A}_a = \int {\rm Tr}\; E^a.
\end{equation}
The exchange of these quantities corresponds precisely to the exchange
of membrane winding and momentum expected of M-theory T-duality.
Thus, although S-duality in ${\cal N} = 4$ super Yang-Mills has not
yet been definitively proven, we have strong evidence that Matrix
theory has the expected T-duality symmetry of M-theory, and that it
can be expressed precisely in terms of this more widely accepted field
theory duality symmetry. Combining the $SL(2,Z)$ of SYM S-duality with the
manifest $SL(3,Z)$ symmetry of the 3-torus we find that Matrix theory
has the full U-duality group expected from M-theory compactified on a
3-torus.
\subsection{Matrix theory objects}
We will now discuss evidence that Matrix theory contains most or all
of the objects which we expect to see in the 11-dimensional
supergravity theory which is the low-energy
limit of M-theory.
\vspace{0.08in}
\noindent
{\it 7.2.1\ \ \ Supergravitons}
\vspace{0.05in}
Let us first discuss the appearance of supergravitons in Matrix
theory. Since 0-branes are the carriers of longitudinal momentum, we
would expect a supergraviton with longitudinal momentum $N/R$ to
correspond to a bound state of $N$ 0-branes. From the fact that
Matrix theory has 16 supersymmetries, we know that threshold bound
states of 0-branes must live in a 256-dimensional representation of
the supersymmetry algebra. This corresponds precisely to the number
of Kaluza-Klein modes of the supergraviton arising from the graviton,
3-form, and gravitino (256 = 44 + 84 + 128). It has been shown that
these bound states exist, at least for $N$ prime
\cite{Sethi-Stern,Yi,Porrati-Rozenberg}.
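The counting $256 = 44 + 84 + 128$ follows from standard $SO(9)$ little-group representation dimensions (a quick arithmetic check, not part of the original argument):

```python
from math import comb

d = 9  # transverse rotation group SO(9) classifies massless 11D states

# graviton: traceless symmetric tensor of SO(9)
graviton = d * (d + 1) // 2 - 1          # 44

# 3-form potential: antisymmetric 3-tensor of SO(9)
three_form = comb(d, 3)                  # 84

# gravitino: vector-spinor with the gamma-trace removed;
# the real SO(9) spinor has 16 components
spinor = 16
gravitino = d * spinor - spinor          # 128

assert graviton + three_form + gravitino == 256
```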
One remarkable feature of Matrix theory which is worth emphasizing at
this point is that
second quantization is {\it automatic} in Matrix theory. That is,
not only does Matrix theory naturally contain a set of states
corresponding to single gravitons, it actually has a Hilbert
space containing states with arbitrary numbers of separated
gravitons. The point is that in the large $N$ limit we can have
matrices which break up into an arbitrary number of blocks. For
example, a state with the schematic form
\begin{equation}
\left(\begin{array}{cccc}
M_1^a & 0 & 0 & \ddots\\
0 & M_2^a& \ddots & 0\\
0 & \ddots & \ddots & 0\\
\ddots & 0 & 0 & M_k^a
\end{array} \right)
\end{equation}
could describe a state of $k$ supergravitons, where the matrices $M_i$
are $N_i \times N_i$ matrices and the longitudinal momentum of the
$i$th graviton is $p^+ = N_i/R$. A matrix of this form, of course,
corresponds to a classical Matrix theory configuration. A quantum
state describing multiple separated gravitons would be described by a
wavefunction which would approximate the tensor product of a number of
bound state wavefunctions as the separations between the gravitons are
taken to be very large.
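The block-diagonal structure is easy to illustrate numerically; the sketch below (all names illustrative) embeds $k$ single-graviton blocks $M_i$ of sizes $N_i \times N_i$ into one large matrix, with the total longitudinal momentum $p^+ = \sum_i N_i/R$ carried by the full matrix size:

```python
import numpy as np

def multi_graviton(blocks):
    """Embed k single-graviton blocks M_i (N_i x N_i) block-diagonally."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    row = 0
    for b in blocks:
        m = b.shape[0]
        out[row:row + m, row:row + m] = b
        row += m
    return out

rng = np.random.default_rng(0)
sizes = [2, 3, 4]                        # N_i: each graviton carries p+ = N_i/R
blocks = [rng.standard_normal((n, n)) for n in sizes]
X = multi_graviton(blocks)

assert X.shape == (9, 9)                 # total N = sum of the N_i
assert X[0, 5] == 0.0                    # off-diagonal blocks vanish
```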
\vspace{0.08in}
\noindent
{\it 7.2.2\ \ \ Supermembranes}
\vspace{0.05in}
We now discuss the appearance of the supermembrane in Matrix theory.
It was realized many years ago that there is a remarkable connection between
matrix quantum
mechanics and the light-front supermembrane
\cite{Goldstone-membrane,Hoppe-all,bst2,dhn}. From the point of
view that has been taken in these notes, the easiest way to see that
the supermembrane must appear in Matrix theory is to note that the
(unwrapped) supermembrane corresponds to the 2-brane of type IIA
string theory. As discussed in Section \ref{sec:branes-smaller}, when
the theory is compactified on a 2-torus of area $A$, a 2-brane can be
built from 0-branes by constructing a 0-brane configuration with
\begin{equation}
{\rm Tr}\;[X^1, X^2] = \frac{iA}{2 \pi}.
\label{eq:membrane-torus}
\end{equation}
The energy of this configuration is
\begin{equation}
E = -\frac{1}{2 R} {\rm Tr}\;[X^1, X^2]^2
= \frac{A^2}{8 \pi^2 R N}
= \frac{A^2}{32 \pi^4 \alpha'^2 R N},
\end{equation}
where factors of $2 \pi \alpha'$ have been restored in the final expression.
This corresponds to the second term in an expansion in $1/N$ of the Born-Infeld
energy for a system of $N$ 0-branes and a single 2-brane
\begin{equation}
E_{{\rm BI}} = \sqrt{(N \tau_0)^2+ (A \tau_2)^2}
= N \tau_0 + \frac{A^2 \tau_2^2}{2 N \tau_0} + \cdots.
\end{equation}
Rewritten in Planck units the energy is
\begin{equation}
E= \frac{A^2 R}{32 \pi^4 N}
\end{equation}
which is precisely the light-front energy $E = (T_2A)^2/2p^+$ for an
M-theory membrane with area $A$, tension $T_2 = 1/(2 \pi)^2$ and
longitudinal momentum $p^+$.
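The agreement between the Matrix theory energy and the light-front membrane energy is simple arithmetic and can be spot-checked numerically (using $T_2 = 1/(2\pi)^2$ and $p^+ = N/R$ as in the text):

```python
from math import pi, isclose

def matrix_energy(A, R, N):
    """E = A^2 R / (32 pi^4 N): Matrix theory membrane energy in Planck units."""
    return A**2 * R / (32 * pi**4 * N)

def lightfront_energy(A, R, N):
    """E = (T2 A)^2 / (2 p+), with tension T2 = 1/(2 pi)^2 and p+ = N/R."""
    T2 = 1 / (2 * pi) ** 2
    return (T2 * A) ** 2 / (2 * (N / R))

# the two expressions agree for arbitrary area, radius and momentum
assert isclose(matrix_energy(2.0, 3.0, 5), lightfront_energy(2.0, 3.0, 5))
```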
To describe an infinite flat supermembrane in the noncompact theory, we
can consider a pair of infinite matrices $X^1, X^2$ satisfying
\begin{equation}
[X^1, X^2] = \frac{i}{2 \pi\rho} {\rlap{1} \hskip 1.6pt \hbox{1}}.
\label{eq:membrane-infinite}
\end{equation}
For example, these matrices could be taken to be proportional to the
operators $q, p = -i \;d/dq$ acting on wave functions in one dimension
\cite{BFSS}. Comparing to (\ref{eq:membrane-torus}) we see that $\rho
\sim N/A$ corresponds to the density of 0-branes (longitudinal
momentum) on the membrane. Note that (\ref{eq:membrane-infinite})
cannot be satisfied by any finite dimensional matrices, but has
solutions only in the large $N$ limit.
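The finite-$N$ obstruction is simply that a commutator of finite matrices is traceless, while the identity on the right-hand side has trace $N$; a quick numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
X1 = rng.standard_normal((N, N))
X2 = rng.standard_normal((N, N))

comm = X1 @ X2 - X2 @ X1
# Tr [X1, X2] = 0 for any finite-dimensional matrices, while the
# right-hand side of the membrane relation has trace i N/(2 pi rho) != 0
assert abs(np.trace(comm)) < 1e-10
```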
In addition to flat membranes which are either infinite or wrapped
around a compact direction, it is desirable to have a Matrix theory
description of finite-size compact membranes moving in a noncompact
space. In fact, precisely such configurations were described in the
work of de Wit, Hoppe and Nicolai almost a decade ago \cite{dhn}.
These authors studied the supermembrane theory in eleven dimensions in
light-front coordinates. In light-front gauge, the supermembrane
theory has a residual invariance under the group of area-preserving
diffeomorphisms on the world-volume. This group can be identified as
a large $N$ limit of $SU(N)$. This leads to a discretization of the
supermembrane theory which gives precisely the 0-brane quantum
mechanics theory. The key ingredient in the derivation of this result is the
construction of an explicit correspondence between functions on the
membrane and matrices in $U(N)$ \cite{Goldstone-membrane,Hoppe-all}.
In the case of a membrane of spherical topology, this correspondence
is particularly simple; functions on the 2-sphere which are expressed
in terms of polynomials in the euclidean coordinates $x_1, x_2, x_3$
are described in Matrix theory by the equivalent symmetrized
polynomials in the generators $J_1, J_2, J_3$ of the $N$-dimensional
representation of $SU(2)$. As a simple example, we can consider the
matrix representation of a symmetric 2-sphere. A rotationally
invariant 2-sphere of radius $r$ can be embedded in the first three
transverse directions of space through
\begin{equation}
X_a = \frac{2r}{N} J_a, \;\;\;\;\; a \in \{1, 2, 3\}.
\end{equation}
Even at finite $N$ this matrix configuration has a number of
geometrical properties which are associated with a smooth 2-sphere
\cite{Dan-Wati}. For example, the matrices $X_a$ satisfy
\begin{equation}
X_1^2 + X_2^2 + X_3^2 = r^2 {\rlap{1} \hskip 1.6pt \hbox{1}} +{\cal O} (1/N^2)
\end{equation}
so that in a noncommutative sense the component 0-branes are
constrained to lie on the 2-sphere. This construction of the Matrix
theory spherical membrane is closely related to
the ``fuzzy'' 2-sphere which appears
in mathematical work on noncommutative geometry
\cite{Madore,Madore-book}.
Toroidal Matrix theory membranes are similarly related to the fuzzy
torus \cite{BFSS}.
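The finite-$N$ geometry of the spherical membrane can be checked explicitly by constructing the generators $J_a$ of the $N$-dimensional irreducible representation of $SU(2)$ and verifying $X_1^2 + X_2^2 + X_3^2 = r^2(1 - 1/N^2)\,$; a minimal numerical sketch:

```python
import numpy as np

def su2_generators(N):
    """J_1, J_2, J_3 in the N-dimensional irreducible representation of SU(2)."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                      # J_z eigenvalues j, ..., -j
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))    # J_+ matrix elements
    Jp = np.diag(c, 1)
    Jx = (Jp + Jp.T) / 2
    Jy = (Jp - Jp.T) / (2 * 1j)
    Jz = np.diag(m)
    return Jx, Jy, Jz

N, r = 10, 1.0
X = [(2 * r / N) * J for J in su2_generators(N)]      # X_a = (2r/N) J_a

S = X[0] @ X[0] + X[1] @ X[1] + X[2] @ X[2]
# X_1^2 + X_2^2 + X_3^2 = r^2 (1 - 1/N^2) * identity, approaching r^2 1 as N grows
assert np.allclose(S, r**2 * (1 - 1 / N**2) * np.eye(N))
```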
\vspace{0.08in}
\noindent
{\it 7.2.3\ \ \ Longitudinal 5-branes}
\vspace{0.05in}
We now discuss 5-branes in Matrix theory. There are two ways in which
the M-theory 5-brane can appear as an object in Matrix theory. On the
one hand, it can be wrapped around the longitudinal direction, in
which case it appears as a 4-brane in Matrix theory. On the other
hand, it can be unwrapped in the longitudinal direction in which case
it should appear as a true (NS) 5-brane in Matrix theory. We will discuss
both cases, but we begin with the longitudinal 5-brane (L5-brane).
Longitudinal 5-branes in Matrix theory were first discussed by
Berkooz and Douglas \cite{Berkooz-Douglas}. They included these
branes as backgrounds for the 0-brane quantum mechanics theory by including
hypermultiplets in the theory corresponding to 0-4 strings. In this
work the L5-branes did not appear as dynamical objects described in
terms of matrix variables. The authors showed, however, that a
membrane which is moved around the L5-brane in the background will
pick up a Berry's phase which corresponds with that expected from the
effects of the 3-form field in supergravity.
A description of L5-branes in terms of Matrix theory variables can
be given in a fashion directly analogous to the above discussion of
the membrane \cite{grt}. If we compactify on a $T^4$ of volume $V$
then as discussed in Section \ref{sec:branes-smaller} a flat 4-brane
wrapped around the torus can be constructed from a set of matrices
satisfying
\begin{equation}
{\rm Tr}\;\epsilon_{abcd} X^a X^b X^c X^d = \frac{V}{2 \pi^2}.
\end{equation}
Taking the large volume limit of the torus, a
construction of a noncompact 4-brane with longitudinal momentum
density $N/V = \rho$ can be given in terms of infinite matrices
satisfying
\begin{equation}
\epsilon_{abcd} X^a X^b X^c X^d = \frac{1}{2 \pi^2 \rho} {\rlap{1} \hskip 1.6pt \hbox{1}}.
\end{equation}
There are a number of ways of
constructing a configuration of this type. One can construct a
``stack of 2-branes'' solution with 2-brane charge as well as 4-brane
charge \cite{bss}. It is also possible to construct a
configuration with no 2-brane charge by identifying $X^a$ with the
components of the covariant derivative operator for an instanton on $S^4$
\begin{equation}
X^a = i \partial^a + A_a.
\end{equation}
This construction is known as the Banks-Casher
instanton \cite{Banks-Casher}.
Just as for the membrane, it is possible to construct a matrix
configuration corresponding to an L5-brane which has the transverse
geometry of a symmetric 4-sphere \cite{clt}. A spherical
configuration corresponding to $n$ superimposed L5-brane spheres with
radius $r$ is defined through
\begin{equation}
X_a = \frac{r}{n} G_a, \;\;\;\;\; a \in \{1, \ldots, 5\},
\label{eq:sphere}
\end{equation}
where $G_a$ are the generators of the $n$-fold symmetric tensor
product representation of the five four-dimensional gamma matrices
$\Gamma_a$. Although these configurations have the geometrical and
physical properties expected of $n$ coincident L5-brane spheres, they
also have a number of surprising characteristics. These
configurations can only be defined for $N$ of the form
\begin{equation}
N =\frac{(n + 1) (n + 2) (n + 3)}{6}.
\end{equation}
Furthermore, unlike the case of the membrane 2-sphere where arbitrary
fluctuations can be described by symmetrized polynomials in the
generators $J_a$, it seems that no similar approach correctly describes
fluctuations around the symmetric 4-sphere configuration. This
Matrix 4-sphere is closely related to the
fuzzy 4-sphere which has been discussed in the context of
noncommutative geometry \cite{gkp}.
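The restriction on $N$ is the statement that the $n$-fold symmetric tensor power of the four-dimensional spinor representation has dimension $\binom{n+3}{3} = (n+1)(n+2)(n+3)/6$; a quick check of this identity:

```python
from math import comb

def fuzzy_sphere_N(n):
    """Allowed matrix sizes for n coincident spherical L5-branes."""
    return (n + 1) * (n + 2) * (n + 3) // 6

# dim Sym^n(C^4) = C(n+3, 3): choose n factors from 4 basis spinors
for n in range(1, 20):
    assert fuzzy_sphere_N(n) == comb(n + 3, 3)

print([fuzzy_sphere_N(n) for n in (1, 2, 3, 4)])   # [4, 10, 20, 35]
```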
\vspace{0.08in}
\noindent
{\it 7.2.4\ \ \ Transverse 5-branes}
\vspace{0.05in}
A systematic way of understanding the membrane and 5-brane charges in
Matrix theory arises from considering the supersymmetry algebra of the
theory. Schematically, the 11-dimensional supersymmetry algebra takes
the form
\begin{equation}
\{Q, Q\} \sim P^I + Z^{I_1 I_2} + Z^{I_1 \ldots I_5}
\end{equation}
where the central terms correspond to 2-brane and 5-brane charges.
The supersymmetry algebra of Matrix theory was explicitly computed by
Banks, Seiberg and Shenker \cite{bss}. Similar calculations had been
performed previously \cite{Claudson-Halpern,dhn}; however, in these
earlier analyses terms such as ${\rm Tr}\;[X^a, X^b]$ and ${\rm
Tr}\;X^{[a} X^{b} X^c X^{d]}$ were dropped since they vanish for
finite $N$. The full supersymmetry algebra of the theory takes the
form
\begin{equation}
\{Q, Q\} \sim P^I + z^a + z^{ab} + z^{abcd}.
\end{equation}
The charge
\begin{equation}
z^{ab} \sim {\rm Tr}\;[X^a, X^b]
\end{equation}
corresponds to membrane charge.
The charge
\begin{equation}
z^{abcd} \sim {\rm Tr}\;X^{[a} X^{b} X^c X^{d]}
\end{equation}
corresponds to longitudinal 5-brane charge, as we have just discussed.
The charge
\begin{equation}
z^a \sim {\rm Tr}\; \{P^b,[X^a, X^b]\}
\end{equation}
corresponds to longitudinal membranes (strings). This can be
understood easily in a dual Yang-Mills picture, where this charge
corresponds to the Poynting vector $F^{ab}E^b$; as usual, momentum is
the dual of string winding number.
Nowhere in this analysis of brane charges do we see any sign of a
charge corresponding to transverse 5-branes. As we will see in
the next section, there is also no sign of such a charge in the
general expression for the leading long-range gravitational
interaction between two matrix objects \cite{Dan-Wati2}. It was
argued by Banks, Seiberg and Shenker that in fact transverse 5-branes
cannot exist in the IMF since they are Dirichlet objects for the
M-theory membrane \cite{bss}. Nonetheless, there is an argument
\cite{grt} that a T5-brane can be constructed implicitly using the
super Yang-Mills S-duality of Matrix theory on $T^3$. We now review
this argument briefly.
Let us compactify M-theory on dimensions 7, 8 and 9. We now place
an infinite membrane along dimensions 5 and 6. Performing M-theory
T-duality on dimensions 7, 8 and 9 has the effect of taking the
membrane to a 5-brane wrapped around dimensions 5-9, as can be seen in
the following commuting diagram
\begin{center}
\centering
\begin{picture}(200, 60)(- 100,- 25)
\put(-25,15){\makebox(0,0){M}}
\put(25,15){\makebox(0,0){M}}
\put(-25, -15){\makebox(0,0){IIA}}
\put(25,-15){\makebox(0,0){IIA}}
\put(-50,15){\makebox(0,0){\small M2(56)}}
\put(-50,-18){\makebox(0,0){\small D2(56)}}
\put(55,15){\makebox(0,0){\small M5(56789)}}
\put(55,-18){\makebox(0,0){\small D4(5678)}}
\put(-15,15){\vector(1,0){30}}
\put(15,15){\vector(-1,0){30}}
\put(-13,-15){\vector(1,0){26}}
\put(13, -15){\vector(-1,0){26}}
\put(-25,7){\vector( 0, -1){14}}
\put(25,7){\vector( 0, -1){14}}
\put(-40, 0){\makebox(0,0){\tiny $R_{9}$}}
\put(40, 0){\makebox(0,0){\tiny $R_{9}$}}
\put(0,25){\makebox(0,0){\tiny $T_M$}}
\put(0,-6){\makebox(0,0){\tiny $T_{78}$}}
\end{picture}
\end{center}
Thus, to construct a T5-brane in Matrix theory we must begin with the
theory compactified on $T^3$ in dimensions 7-9, with a Yang-Mills
configuration having scalar fields satisfying
\begin{equation}
[X^5, X^6] \sim i{\rlap{1} \hskip 1.6pt \hbox{1}}.
\label{eq:T5-brane}
\end{equation}
Performing SYM S-duality on this state should give a transverse
5-brane. There are a number of puzzling subtleties regarding
this construction, however. First, we have no explicit representation of
S-duality in 4D SYM, so we cannot construct the T5-brane state
explicitly. Second, there is a confusing issue about how the large
$N$ limit must be taken. In order to construct a configuration like
(\ref{eq:T5-brane}) we must take the large $N$ limit before performing
the S-duality transformation. It is unclear how SYM S-duality behaves
in the large $N$ limit. Finally, if this state truly exists, a good
reason needs to be found why the corresponding charge does not appear
in the supersymmetry algebra or in the leading term in the
long-distance potential. It is possible that this charge may be
nonlocal, and vanishes for a reason analogous to the vanishing of the
L5-brane and membrane charges at finite $N$.
\subsection{Interactions in Matrix theory}
\label{sec:matrix-interactions}
\vspace{0.08in}
\noindent
{\it 7.3.1\ \ \ The leading $1/r^7$ potential}
\vspace{0.05in}
We now turn our attention to the interactions between the objects of
Matrix theory. We discussed in Section 5.2.5\ the calculation of the
velocity-dependent effective potential (\ref{eq:scattering-potential})
between a pair of 0-branes in super Yang-Mills quantum mechanics.
This potential was found as the result of a one-loop calculation in
the Yang-Mills theory. As pointed out by BFSS \cite{BFSS}, this
potential corresponds precisely with the leading long-range
supergravity potential between a pair of gravitons with longitudinal
momentum $1/R$ in light-front coordinates. An explicit calculation
shows that the leading term in the long-range supergravity potential
between a pair of pointlike objects with momenta $\hat{p}^{I}$ and
$\tilde{p}^{I}$ due to the exchange of a single graviton with no
longitudinal momentum is (in string units)
\begin{equation}
V_{\rm gravity} = - {15 \over 4} \, {R^4 \over r^7}
\left[\left(\hat{p} \cdot \tilde{p}\right)^2 - {1 \over 9} \hat{p}^2
\tilde{p}^2\right].
\label{eq:graviton-potential}
\end{equation}
Taking one of the Matrix theory 0-branes to be at rest and the other to have
transverse velocity $v^a$, we have
\begin{eqnarray}
\hat{p}^+ = \frac{1}{R} \;\;\;\;\; & & \hat{p}^a = 0 \;\;\;\;\;
\;\;\;\;\; \;\;\;
\hat{p}^-= 0\\
\tilde{p}^+ = \frac{1}{R} \;\;\;\;\; & & \tilde{p}^a = \frac{v^a}{R}
\;\;\;\;\; \;\;\;\;\;
\tilde{p}^-= \tilde{p}^2_\perp/2 \tilde{p}^+ = \frac{v^2}{2 R} \nonumber
\end{eqnarray}
Inserting these momenta into (\ref{eq:graviton-potential}) gives
\begin{equation}
V_{{\rm gravity}} = -\frac{15v^4}{16 \;r^7}
\end{equation}
in exact agreement with (\ref{eq:scattering-potential}). This exact
correspondence carries through for states with longitudinal momentum
$p^+ = N/R$. The gravitational potential in this case is simply
multiplied by the product $\hat{N} \tilde{N}$. The same factor enters
the Yang-Mills calculation because this is the multiplicity of string
states stretching between the two collections of 0-branes, assuming
that each of the two states is described by a localized bound state of
$N$ 0-branes.
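The substitution leading to $-15v^4/16r^7$ is a short light-front computation and can be verified numerically. The sketch below uses one common sign convention for the light-front inner product, $p \cdot q = -p^+q^- - p^-q^+ + p_\perp \cdot q_\perp$ (only $(\hat{p}\cdot\tilde{p})^2$ and $\hat{p}^2\tilde{p}^2$ enter, so the overall sign choice drops out); the numerical values are illustrative:

```python
from math import isclose

R, r = 2.0, 3.0
v = (0.7, 0.0, 0.0)                 # transverse velocity (illustrative components)
v2 = sum(c * c for c in v)

# light-front momenta (p+, p-, p_perp) for the two gravitons
p_hat = (1 / R, 0.0, (0.0, 0.0, 0.0))                    # graviton at rest
p_til = (1 / R, v2 / (2 * R), tuple(c / R for c in v))   # moving graviton

def dot(p, q):
    # p.q = -p+ q- - p- q+ + p_perp . q_perp (one common convention)
    return -p[0] * q[1] - p[1] * q[0] + sum(a * b for a, b in zip(p[2], q[2]))

V = -(15 / 4) * (R**4 / r**7) * (dot(p_hat, p_til) ** 2
                                 - dot(p_hat, p_hat) * dot(p_til, p_til) / 9)

assert isclose(dot(p_til, p_til), 0.0, abs_tol=1e-12)    # massless on-shell
assert isclose(V, -15 * v2**2 / (16 * r**7))             # V = -15 v^4 / (16 r^7)
```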
In the last year or so, the one-loop potential has been calculated for
a variety of Matrix theory objects
\cite{DKPS,BFSS,Aharony-Berkooz,Lifschytz-Mathur,Lifschytz-46,bc,Lifschytz-transverse,ChepTseyI,MaldacenaI,Vakkuri-Kraus,Gopakumar-Ramgoolam,Chepelev-Tseytlin2,MaldacenaII,Esko-Per2}.
In all cases this potential was found to agree at leading order with
the expected leading long-distance potential from supergravity. A
general proof of this result was given by Kabat and the author in
\cite{Dan-Wati2}; we will now describe briefly the analysis in the
general case. Given a pair of classical Matrix theory objects
described by matrices $\hat{X}$ and $\tilde{X}$ of sizes
$\hat{N}\times\hat{N}$ and $\tilde{N}\times\tilde{N}$ respectively,
the one-loop potential between these objects can be calculated in the
quasi-static approximation by taking a background configuration
\begin{equation}
\langle X^a \rangle =
\left(\begin{array}{cc}
\hat{X}^a & 0\\
0 & \tilde{X}^a
\end{array} \right).
\end{equation}
Summing the frequencies of the string oscillators associated with the
bosons, fermions and ghosts in the off-diagonal matrix blocks as in
(\ref{eq:oscillator-sum}) gives the one-loop potential between the two
objects. If the centers of mass of the two objects are separated by a
distance $r$ which is large compared to the sizes of the objects then
this potential can be expanded in powers of $1/r$. The leading term
is of the form $F^4/r^7$ where $F$ can be a term of the form $\dot{X}$
or $[X, X]$ \cite{Metsaev-Tseytlin,Dan-Wati}. Decomposing the general
expression for this term into functions of $\hat{X}$ and $\tilde{X}$,
and grouping terms by their Lorentz structure it can be shown
\cite{Dan-Wati2} that the leading term in the Matrix theory potential
between an arbitrary pair of separated objects is given by
\begin{equation}
\label{eq:matrix-potential-general}
V_{\rm matrix}=V_{\rm gravity} + V_{\rm electric} + V_{\rm magnetic}
\end{equation}
where
\begin{eqnarray}
V_{\rm gravity} & = & - {15 R^2 \over 4 r^7} \left( {\hat{\cal
T}}^{IJ} \tilde{{\cal T}}_{IJ}
- {1 \over 9} {\hat{\cal T}}^I{}_I \tilde{{\cal T}}^J{}_J\right)
\nonumber \\
V_{\rm electric} & = & - {45 R^2 \over r^7} {\hat{\cal J}}^{IJK}
{\tilde{\cal J}}_{IJK}
\label{eq:matrix-interactions} \\
V_{\rm magnetic} & = & - {45 R^2\over r^7} \hat{{\cal M}}^{+-ijkl}
\tilde{{\cal M}}^{-+ijkl}
\nonumber
\end{eqnarray}
and where we define the following quantities:
${\cal T}^{IJ}$ is a
symmetric tensor with components
\begin{eqnarray}
{\cal T}^{--} & = & {1 \over R} \; {\rm STr} \, \left( \frac{1}{4} \dot{X}^a \dot{X}^a
\dot{X}^b \dot{X}^b+
{1 \over 4} \dot{X}^a \dot{X}^a F^{bc} F^{bc} +
\dot{X}^a \dot{X}^b F^{ac} F^{cb} \right. \nonumber\\
& & \qquad \qquad \left. +
{1 \over 4} F^{ab} F^{bc} F^{cd} F^{da} -
\frac{1}{16} F^{ab} F^{ab} F^{cd} F^{cd} \right) \nonumber
\\
{\cal T}^{-a} & = & {1 \over R} \;{\rm STr} \, \left(\frac{1}{2} \dot{X}^a \dot{X}^b \dot{X}^b +
{1 \over 4} \dot{X}^a F^{bc} F^{bc}
+ F^{ab} F^{bc} \dot{X}^c \right) \nonumber \\
{\cal T}^{+-} & = & {1 \over R} \;{\rm STr} \, \left(\frac{1}{2} \dot{X}^a \dot{X}^a + {1
\over 4} F^{ab} F^{ab} \right) \label{eq:matrix-t} \\
{\cal T}^{ab} & = & {1 \over R} \;{\rm STr} \, \left( \dot{X}^a \dot{X}^b + F^{ac}
F^{cb} \right) \nonumber \\
{\cal T}^{+a} & = & {1 \over R} \;{\rm STr} \, \dot{X}^a \nonumber \\
{\cal T}^{++} & = & {1 \over R} \;{\rm STr} \, {\rlap{1} \hskip 1.6pt \hbox{1}} =
{N \over R} \nonumber
\end{eqnarray}
${\cal J}^{IJK}$ is a totally antisymmetric tensor with components
\begin{eqnarray}
{\cal J}^{-ab} & = & {1 \over 6 R} {\rm STr} \, \left( \dot{X}^a \dot{X}^c F^{cb} -
\dot{X}^b \dot{X}^c F^{ca} - \frac{1}{2} \dot{X}^c \dot{X}^c F^{ab}
\right.\nonumber\\
& & \qquad \qquad \left. + {1 \over 4} F^{ab} F^{cd} F^{cd} +F^{ac}
F^{cd} F^{db} \right) \nonumber \\
{\cal J}^{+-a} & = & {1 \over 6 R} {\rm STr} \, \left( F^{ab} \dot{X}^b \right)
\label{eq:matrix-j} \\
{\cal J}^{abc} & = & - {1 \over 6 R} {\rm STr} \, \left( \dot{X}^a F^{bc} +
\dot{X}^b F^{ca} + \dot{X}^c F^{ab} \right) \nonumber \\
{\cal J}^{+ab} & = & - {1 \over 6 R} {\rm STr} \, F^{ab} \nonumber
\end{eqnarray}
and
${\cal M}^{IJKLMN}$ is a totally antisymmetric tensor with
\begin{equation}
\label{eq:matrix-m}
{\cal M}^{+-abcd} = {1 \over 12 R} {\rm STr} \, \left(F^{ab} F^{cd} + F^{ac}
F^{db} + F^{ad} F^{bc}\right)\,.
\end{equation}
We have defined $F_{ab} = - i[X_a, X_b]$. The trace ${\rm STr} \,$ is defined
to be the trace symmetrized over all possible orderings of the factors
$\dot{X}$ and $F$. Tensors $\hat{{\cal T}}^{IJ}, \tilde{{\cal
T}}^{IJ}$, etc. are defined through
(\ref{eq:matrix-t}-\ref{eq:matrix-m}) as functions of $\hat{X}$ and
$\tilde{X}$. Note that the only components of ${\cal M}$ which appear in
(\ref{eq:matrix-interactions}) are those defined in
(\ref{eq:matrix-m}); there is no expression known for other components
of this tensor.
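The symmetrized trace prescription can be made concrete: ${\rm STr}$ averages the ordinary trace over all orderings of the factors $\dot{X}$ and $F$. A minimal sketch (a direct, unoptimized implementation):

```python
import numpy as np
from itertools import permutations

def str_trace(*factors):
    """Symmetrized trace: average of Tr over all orderings of the factors."""
    n = factors[0].shape[0]
    perms = list(permutations(factors))
    total = 0.0 + 0.0j
    for perm in perms:
        prod = np.eye(n, dtype=complex)
        for f in perm:
            prod = prod @ f
        total += np.trace(prod)
    return total / len(perms)

# sanity check: for commuting (diagonal) matrices STr reduces to the usual trace
A = np.diag([1.0, 2.0])
B = np.diag([3.0, 5.0])
assert np.isclose(str_trace(A, B), np.trace(A @ B))
```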
The general form of the Matrix theory potential
(\ref{eq:matrix-potential-general}) can be compared with the leading
long-distance potential in 11D supergravity between an arbitrary pair
of objects. In supergravity this potential arises from the exchange
of a single particle, which must be either a graviton or 3-form
quantum. In light-front coordinates the propagator for a quantum
carrying no longitudinal momentum carries a factor of $\delta
(\hat{x}^+ - \tilde{x}^+)$ \cite{Hellerman-Polchinski}. Thus, we
expect the light-front
supergravity potential to be instantaneous in light-front time. An
explicit calculation of this potential shows that the leading term is
precisely given by (\ref{eq:matrix-potential-general}) where ${\cal T}$,
${\cal J}$ and ${\cal M}$ are the integrated stress tensor, membrane current
and 5-brane current in supergravity. Thus, if we use
(\ref{eq:matrix-t}), (\ref{eq:matrix-j}) and (\ref{eq:matrix-m}) as
definitions of the integrated stress tensor and currents in Matrix
theory, we see that there is an exact correspondence between the
leading term in the one-loop Matrix theory potential and the leading
term in the supergravity potential for an arbitrary pair of objects.
\vspace{0.08in}
\noindent
{\it 7.3.2\ \ \ Further aspects of Matrix theory interactions}
\vspace{0.05in}
Although the correspondence between Matrix theory and supergravity
interactions has been demonstrated in general at order $1/r^7$, the
current understanding of subleading terms is much less developed.
There are a number of ways in which subleading terms appear in the
Matrix theory potential. A systematic analysis of the structure of
the subleading terms in the graviton scattering amplitude was carried
out by Becker, Becker, Polchinski and Tseytlin (BBPT) \cite{bbpt} and has
also been considered by Susskind. Generally, at $n$th loop order,
there are terms in the potential of order $v^{4 + 2k}/r^{4 + 3n + 4k}$
for all values of $k$. For more general Matrix theory objects, the
structure is similar, with $F$ playing the role of $v$; however, there
are also dependencies on the fields $X$ so that the full expansion is
of the form
\begin{equation}
V = \sum_{n, k, l} V_{nkl} \frac{F^{4 + 2k} X^l}{r^{4 + l + 3n + 4k}}
\label{eq:general-expansion}
\end{equation}
where $n$ indicates the loop order at which a given term arises.
Generally, the contraction of the indices of $F$ and $X$ can be
carried out in many inequivalent ways; the coefficient $V_{nkl}$
therefore is shorthand for many independent coefficients at each order,
corresponding to all possible contractions. We will now discuss some
of the features of this loop expansion which are currently
understood. Note that our discussion focuses on interactions between
purely bosonic states. When fermions are included there can be
additional effects such as spin effects; it has been found that these
effects seem to be captured accurately by Matrix theory also, at
least at leading order \cite{Harvey-spin,mss,Kraus-spin}.
If we consider only the terms in the one-loop expansion which contain four
powers of $F$, the expansion reduces to a sum of terms of the form
$F^4 X^l/r^{7 + l}$. This set of terms was analyzed in
\cite{Mark-Wati,Dan-Wati2}, where it was shown that these terms
can be described in terms of interactions between higher moments of the
Matrix theory stress tensor (\ref{eq:matrix-t}), membrane current
(\ref{eq:matrix-j}) and L5-brane current (\ref{eq:matrix-m}). These
interactions correspond precisely to the higher-order terms expected
from supergravity for the interaction between two extended objects due
to single graviton or 3-form exchange. It seems reasonable to
conclude that the role of the factors $X^l/r^l$ will in general be to
incorporate higher moments of
extended objects; thus, to understand the remaining terms in
(\ref{eq:general-expansion}) it will suffice to restrict attention to
the terms with $l = 0$, which are the only ones contributing in the
case of graviton scattering.
One set of terms of particular interest are the terms of the form $F^4/r^{4 +
3n}$. If such terms existed with nonvanishing coefficients beyond
one-loop order they would renormalize the $v^4$ interaction term which
already agrees at one-loop order with supergravity. It was
conjectured by BFSS that no such renormalization occurs. Becker and Becker
have performed the calculation explicitly for graviton scattering at
two-loop order and shown that the term of order $v^4/r^{10}$ vanishes
identically \cite{Becker-Becker}. This supports the hypothesis that
all remaining $v^4$ terms vanish; however, this has not been shown at
higher order. It has also been suggested that the $v^4$ terms may in
fact be
renormalized at higher loop order since analogous renormalizations
occur in three-dimensional theories \cite{Dine-Seiberg}. These terms
may also be affected by processes with longitudinal momentum transfer,
which we discuss briefly below.
The next several terms which contribute at two loops have also been
calculated for graviton scattering. It was shown by BBPT \cite{bbpt}
that the term of order $v^6/r^{14}$ is also in agreement with the
potential expected from classical supergravity. This term corresponds
to a general relativistic correction to the lowest order term in the
potential. All the terms in the Matrix theory potential of the form
$v^{4 + 2m}/r^{7m}$ carry integral powers of the gravitational
constant when expressed in Planck units. It is believed that these
terms should reproduce classical supergravity to all orders. Although
none of these terms have been calculated precisely beyond two loops,
it has been argued
\cite{Chepelev-Tseytlin2,Esko-Per2,Balasubramanian-gl,Chepelev-Tseytlin3}
that the general form of these terms should correspond with higher
order terms in the non-abelian Born-Infeld action. Although these
terms cannot be determined uniquely by this ansatz, in certain cases
exact expressions for the supergravity interactions are of the
Born-Infeld form, and are in agreement with this conjecture.
At each loop order there are terms which contribute at higher order in
$1/r$ than the terms which are expected to correspond with classical
supergravity. It is believed that these terms represent quantum
gravity corrections. There has been some discussion of this question
in the literature
\cite{Susskind-talk,Berglund-Minic,Serone,Esko-Per-short,Balasubramanian-gl,Beckers-graviton},
but as yet there does not seem to be a detailed understanding of this
issue.
The loop expansion in Matrix theory which we have been discussing only
describes processes in which no longitudinal momentum is exchanged.
Clearly, for a full understanding of interactions in Matrix theory it
will be necessary to include processes with longitudinal momentum
transfer. Some progress has been made in this direction. Polchinski
and Pouliot have calculated the scattering amplitude for two 2-branes
for processes in which a 0-brane is transferred from one 2-brane to
the other \cite{Polchinski-Pouliot}. In the Yang-Mills picture the
incoming and outgoing configurations in this calculation are described
in terms of a $U(2)$ gauge theory with a scalar field taking a VEV
which separates the branes, as discussed in Section 5.2.2. The transfer
of a 0-brane corresponds to an instanton-like process where a unit of
flux is transferred from one brane to the other. The results of this
calculation are in agreement with expectations from supergravity.
This result suggests that processes involving longitudinal momentum
transfer may be correctly described in Matrix theory. Note, however,
that the Polchinski-Pouliot calculation is not precisely a calculation
of membrane scattering with longitudinal momentum transfer in Matrix
theory since it is carried out in the 2-brane gauge theory language.
In the T-dual Matrix theory picture the process in question
corresponds to a scattering of 0-branes in a toroidally compactified
space-time with the transfer of membrane charge. This process was
studied further by Dorey, Khoze and Mattis \cite{dkm} and was related
to graviton scattering by Banks, Fischler, Seiberg and Susskind \cite{BFSS2}.
\section{Matrix theory: Further Developments}
\label{sec:developments}
In the last year there has been a veritable explosion of Matrix theory
related papers. In this section we describe briefly a few of the
interesting directions in which this work has gone. There are a
number of important and interesting developments which we do not
discuss here at all. In particular, nothing is said in these notes
about the recent developments on Matrix theory black holes or light-front
5-brane theories. We also do not discuss Matrix theory
compactification on orbifolds.
Although we do give a brief description of the DVV
formulation of Matrix string theory, there are many other interesting
formulations of string theory in the matrix language, such as
heterotic Matrix strings, which we do not cover here. Another particularly
interesting set of developments involves the compactification of
Matrix theory on higher dimensional tori. Aside from a few brief
comments, this topic is not covered. Some of these topics are covered
in more detail in the reviews \cite{banks-review,Susskind-review}.
\subsection{Matrix string theory}
An interesting feature of Matrix theory is that with a few minor
modifications it can be used to give a nonperturbative definition of
string theory. A number of approaches have been taken to Matrix
string theory \cite{IKKT,Motl-string,Banks-Seiberg,DVV-string}. In
this section we review briefly a few aspects of the Matrix string
theory approach due to Dijkgraaf, Verlinde and Verlinde (DVV)
\cite{DVV-string,DVV-string2}.
If we consider Matrix theory compactified in dimension 9 on a circle
$S^1$, we have a super Yang-Mills theory in $(1+1)$-D on the dual
circle $\hat{S}^1$. In the BFSS formulation of Matrix theory, this
corresponds to M-theory compactified on a 2-torus. If we now think of
dimension 9 rather than dimension 11 as the dimension which has been
compactified to get a IIA theory, then we see that this super
Yang-Mills theory should provide a light-front description of type IIA
string theory. Because we are now interpreting dimension 9 as the
dimension of M-theory compactification, the fundamental objects which
carry momentum $p^{+}$ are no longer 0-branes, but rather strings
with longitudinal momentum. Thus, it is natural to interpret $N/R_{11}$ in
this super Yang-Mills theory as the longitudinal string momentum. It
was argued by Dijkgraaf, Verlinde and Verlinde that in fact this gives
a natural corollary to the Matrix theory conjecture, namely that 2D
super Yang-Mills in the large $N$ limit should correspond to
light-front IIA string theory.
To examine this form of the conjecture in more detail, let us begin by
considering the Matrix theory Hamiltonian (working in Planck units
and dropping factors of order unity as in \cite{DVV-string})
\begin{equation}
H = R_{11} {\rm Tr}\; \left[
\Pi_a \Pi_a - [X^a, X^b]^2 +
\theta^T
\gamma_a[X^a, \theta] \right]\ .
\end{equation}
After compactification of dimension 9 on a circle of radius $R_9$ we
identify $X^9 \rightarrow R_9
D_\sigma$, $\Pi_9 \rightarrow R_9 \dot{A}_9\sim E_9/R_9$, where
$\sigma \in[0, 2 \pi]$ is the coordinate on the dual circle. With
these identifications, and using $g \sim R_9^{3/2}$, the Hamiltonian
was rewritten by DVV in the form
\begin{eqnarray}
H & = & \frac{R_{11}}{2 \pi} \int d \sigma \; {\rm Tr}\; \left[
\Pi_a \Pi_a + (D_\sigma X^a)^2 +
\theta^TD_\sigma \theta
\right.\nonumber\\
& &\hspace{1in} \left.
+
\frac{1}{g^2} \left( E^2 - [X^a, X^b]^2 \right)
+ \frac{1}{g} \theta^T
\gamma_a[X^a, \theta]\right]\ .
\end{eqnarray}
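To see schematically how this form arises, note that under the
identifications above the terms involving $X^9$ become
\begin{equation}
[X^9, X^a]^2 \rightarrow R_9^2 (D_\sigma X^a)^2, \;\;\;\;\;
\Pi_9 \Pi_9 \rightarrow E^2/R_9^2\ ,
\end{equation}
while the commutator of $X^9$ with the fermions produces the fermion
kinetic term $\theta^T D_\sigma \theta$. Rescaling the fields by
appropriate powers of $R_9$ and using $g \sim R_9^{3/2}$ then puts a
factor of $1/g^2$ in front of the electric and commutator terms and a
factor of $1/g$ in front of the remaining Yukawa coupling; as usual,
factors of order unity are dropped in this sketch.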
This is essentially the form of the Green-Schwarz light-front string
Hamiltonian, with the modification that the fields are now $N \times
N$ matrices which do not necessarily commute. This means that the
theory automatically contains multi-string objects living in a second
quantized Hilbert space. Furthermore, it is possible to construct
extended string theory objects in terms of the noncommuting matrix
variables, by a simple translation from the original Matrix theory
language. We reproduce in Table~\ref{tab:string}
a table of the extended objects in this Matrix string
theory. The objects are listed in terms of their interpretations in
M-theory, as well as their interpretations in Matrix string
theory and associated charges in Matrix string theory. Charges are
given only up to an overall constant.
\begin{table}[t]
\caption{Objects and their charges in Matrix string
theory\label{tab:string}}\vspace{0.4cm}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
M-theory & Matrix string object & Matrix string charge\\
\hline
$p_{11}$&$ p^+ $&$ N$\\
$p_9 $& D$0 $&$ E$\\
$p_a $&$ p_a $&$ \Pi_a$\\
M${}_{a9}\; \;$ ($a-9$ membrane) &$ w_a \; \;$(wound string)&$
D_\sigma X^a$\\
M${}_{11\; 9} $&$ w_+ $&$ \Pi_a (D_\sigma X^a) \; \; (E \times B)$\\
M${}_{ab} $&D$2_{ab} $&$[X^a, X^b]$\\
M${}_{a \; 11} $&D$2_{a +} $&$ E D_\sigma X^a
+ \Pi_b[X^a, X^b]$\\
$5_{abc+9} $&D$4_{abc+}$&$ D^{[9} X^a X^b X^{c]}$\\
$5_{abcd+} $&NS$5_{abcd+} $&$ X^{[a} X^b X^c X^{d]} $\\
\hline
\end{tabular}
\end{center}
\end{table}
To verify each of the entries in this table it suffices to consider a
Matrix theory object with its known charge, and to rewrite that object
and charge in terms of the Yang-Mills description after the ``9-11
flip'' corresponding to interpreting dimension 9 as the M-theory
compactification direction. For example, consider an M-theory
membrane wrapped in dimensions 8-9. In BFSS Matrix theory this is a
membrane with charge $[X^8, X^9]$. In Matrix string theory we take
$X^9 \rightarrow D_9$ so that the charge becomes $D_9 X^8$. Since
dimension 9 is the M-theory compactification direction, this
corresponds to a string wrapped around dimension 8. Note that the
only objects missing in this table are those corresponding to the
T5-brane in BFSS Matrix theory. These correspond to a 4-brane or
5-brane wrapped in transverse directions in Matrix string theory.
One particularly nice feature of the DVV approach to Matrix string
theory is the way in which the individual string bits carrying a
single unit of longitudinal momentum combine to form long strings. As
the string coupling becomes small $g \rightarrow 0$, the coefficient
of the term $[X^a, X^b]^2$ in the Hamiltonian becomes very large. This
forces the matrices to become simultaneously diagonalizable. Because
the string configuration is defined over $S^1$, however, the matrix
configuration need not be periodic in $\sigma$. The matrices $X^a
(0)$ and $X^a (2 \pi)$ can be related by an arbitrary permutation.
The lengths of the cycles of this permutation determine the numbers of
string bits which combine into long strings whose longitudinal
momentum $N/R_{11}$ can become large in the large $N$ limit. As the
coupling becomes very small, the theory therefore essentially becomes
a sigma model on $({\bb R}^8)^N/S^N$. The twisted sectors of this theory
correspond precisely to the sectors where the string bits are combined
in different permutations. In this picture, string interactions
appear as vertex operators in the conformal field theory
arising as the infrared limit of the sigma model theory.
It is not apparent, however, how such interactions are related to the
Yang-Mills description of the theory.
It would be nice to have a more direct understanding of this relationship.
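The twisted-sector boundary conditions described above can be written
explicitly. For diagonal matrix configurations, the sector labeled by
a permutation $P \in S_N$ satisfies (schematically)
\begin{equation}
X^a (\sigma + 2 \pi) = P\, X^a (\sigma)\, P^{-1}\ ,
\end{equation}
so that the eigenvalues obey $x^a_i (\sigma + 2 \pi) = x^a_{P (i)}
(\sigma)$. A cycle of length $k$ in $P$ thus joins $k$ string bits
into a single long string of length $2 \pi k$ carrying longitudinal
momentum $k/R_{11}$.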
Some discussion is given in the original DVV papers of the structure
of D-branes in Matrix string theory. This is another direction
in which it would be interesting to develop the theory in further
detail. For example, DVV suggest that a 0-brane corresponds to a
single string bit which does not become part of an extended string in
the large $N$ limit. It would be nice if there were a natural
way in which the known properties of 0-branes could be derived from
this point of view. Clearly, there is more to be said about the
relationship between Matrix string theory and other formulations of
light-front string theory.
\subsection{Compactification of more than three dimensions}
\label{sec:compactification2}
As discussed in Section \ref{sec:compactification}, Matrix theory
compactified on a torus of dimension $d \leq 3$ is described in terms
of super Yang-Mills theory on the dual torus.
Compactification on $T^3$ was described in Section 7.1.2.
Compactification of the theory on $T^2$ was discussed by Sethi and
Susskind \cite{Sethi-Susskind}. They pointed out that as the $T^2$
shrinks, a new dimension appears whose quantized momentum modes
correspond to magnetic flux on the $T^2$. In the limit where the area
of the torus goes to zero, an $O(8)$ symmetry appears. This is
consistent with the fact that IIB string theory appears as a limit of
M-theory on a small 2-torus \cite{Aspinwall-duality,Schwarz-multiplet}.
When more than three
dimensions are toroidally compactified, the theory undergoes
even more remarkable transformations \cite{fhrs}. For example, consider
compactifying the theory on $T^4$. The manifest symmetry group of
this theory is $SL(4,Z)$. The expected U-duality group of
M-theory compactified on $T^4$ is $SL(5,Z)$, however. It was pointed out by
Rozali \cite{Rozali} that the U-duality group can be completed by
interpreting instantons on $T^4$ as momentum states in a
fifth compact dimension. This means that Matrix
theory on $T^4$ is most naturally described in terms of a (5 +
1)-dimensional theory with a chiral $(2, 0)$ supersymmetry. This $(2,
0)$ theory is an unusual theory with 16 supersymmetries
\cite{Seiberg-16} which appears to play a crucial role in a wide
variety of properties of M-theory and 5-branes.
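The counting behind Rozali's observation follows the familiar logic of
instanton-particles in five-dimensional super Yang-Mills theory: a
configuration of instanton number $k$ on the spatial torus is a
particle-like soliton whose mass matches the Kaluza-Klein spectrum of
momentum modes on an additional circle whose radius is set
(schematically) by the Yang-Mills coupling,
\begin{equation}
M_k \sim \frac{k}{g^2_{YM}} \sim \frac{k}{R_5}, \;\;\;\;\;
R_5 \sim g^2_{YM}\ .
\end{equation}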
Compactification on tori of higher dimensions than four continues to
lead to more complicated situations, particularly in the case of
$T^6$. A significant amount of literature has been produced on this
subject, to which the reader is referred for further details (see for
example \cite{Sen,Seiberg-DLCQ,Berkooz-duality,Ganor-Sethi} and
references therein). Despite the complexity of $T^6$
compactification, however,
it was recently suggested by Kachru, Lawrence
and Silverstein \cite{kls} that compactification of Matrix theory on a more
general Calabi-Yau 3-fold might actually lead to a simpler theory than
that resulting from compactification on $T^6$.
\subsection{Proofs and counterexamples}
\label{sec:recent}
Since the time when these lectures were given, there has been a great
deal of debate about whether the Matrix theory conjecture is truly
correct to all orders in perturbation theory, and if so, why. The
results of this debate are still uncertain, and a full discussion of
the issues involved will not be given here. We will only briefly
review a few of the points in the discussion.
It was suggested by Susskind \cite{Susskind-DLCQ} that there might be a sense
in which the Matrix theory conjecture holds even at finite $N$. This
extended version of the conjecture would relate finite $N$ Matrix
theory with the finite $N$ discrete light-cone quantization of
M-theory. An argument has been given by Seiberg \cite{Seiberg-DLCQ}
which seems to indicate that this correspondence is correct, and
related discussions have been given by Sen \cite{Sen} and de Alwis
\cite{dealwis-DLCQ}. Seiberg's approach even seems to apply to
compactification of Matrix theory on an arbitrary manifold, although
the details of this argument have not been made precise.
However,
there are also a number of pieces of evidence that the correspondence
between Matrix theory and supergravity breaks down in certain
contexts. Attempts to formulate Matrix theory on curved
spaces seem to lead to discrepancies between the leading terms in the
Matrix theory and supergravity interaction potentials
\cite{dos,Douglas-Ooguri}. Higher-loop effects on orbifolds also seem
to give rise to discrepancies \cite{ggr,ddm} (further comments on this
issue appear in \cite{Berglund-Minic}). Even in flat space, it seems
that Matrix theory may have problems in reproducing supergravity:
Recently Dine and Rajaraman considered 3-graviton scattering in Matrix
theory \cite{Dine-Rajaraman}, and argued that there are certain
diagrams in supergravity with nonzero amplitudes which simply cannot
be reproduced at any order in Matrix theory. When considering
interactions between extended objects, it also seems that Matrix
theory may diverge from supergravity in unusual ways; although the
one-loop interaction between any two objects must agree with
supergravity if the components of the source tensors are defined as in
Section 7.3.2, the components of the stress tensor for extended objects
are defined so that the equivalence principle seems to break down at
finite $N$ \cite{Dan-Wati2}.
Thus, we seem to be faced with a contradiction: on the one hand proofs
that even at finite $N$ Matrix theory is a correct description of DLCQ
M-theory, on the other hand evidence that Matrix theory does not agree
with results one would expect from supergravity. Some recent papers
have addressed this puzzle
\cite{banks-review,Hellerman-Polchinski,Balasubramanian-gl,Bilal};
however, a complete resolution of the situation will certainly take
some time. It may be that to resolve these issues it will be
necessary to understand the large $N$ limit of Matrix theory in a more
precise fashion. It may also be that detailed aspects of the bound
state wave functions of gravitons will play a role in resolving these
contradictions.
\section{Conclusions}
We have seen that a remarkable number of interesting properties of
D-branes can be understood from the point of view of the low-energy
super Yang-Mills description. Super Yang-Mills theory contains
information about all the known duality symmetries of type II string
theory and M-theory. We can construct higher- and lower-dimensional
branes of various types directly within the super Yang-Mills theory
for branes of a particular dimension. Super Yang-Mills theory even
seems to know about supergravity. From super Yang-Mills theory we
have some suggestion of how to generalize our notions of geometry to
include noncommutative spaces such as those of Connes
\cite{Connes}. All this is certainly an indication that super
Yang-Mills theory is a much richer theory than it has usually been
given credit for. Whether it will truly reproduce all of the physics
of string theory, let alone the standard model, however, remains to be
seen. To this author, it seems that there is still some fundamental
principle lacking. In particular, for Matrix theory to leave the IMF
or light-front gauge, it is necessary to introduce anti-0-branes.
These are virtually the only objects which cannot be constructed from
0-branes as some kind of generalized fluxes. As has been suggested by
a number of authors, it seems that we need some more fundamental
structure in which all the objects of the theory, even 0-branes and
anti-0-branes, appear as some generalized type of fluxes or as composites
of some underlying medium. Because of the way in which quantum
mechanics and geometrical constraints often seem to be related through
dualities, it is easy to imagine that whatever fundamental principles we
are currently lacking will necessitate a rather substantial reworking
of our concepts of quantum mechanics and field theory themselves.
\section*{Acknowledgments}
I would like to thank the International Center for Theoretical Physics
in Trieste for their hospitality and for running the summer school at
which these lectures were presented. I would also particularly like
to thank Lorenzo Cornalba, Dan Kabat, Sanjaye Ramgoolam, L\'arus
Thorlacius and Mark Van Raamsdonk for reading a preliminary draft of
these notes and making numerous helpful suggestions and corrections.
This work was supported by the National Science Foundation (NSF) under
contract PHY96-00258.
\section*{References}
\bibliographystyle{unsrt}